text (string, lengths 0–2.11M) | id (string, lengths 33–34) | metadata (dict)
---|---|---
[email protected] of Mathematics, Institute for Advanced Study, 1 Einstein Drive, Princeton, 08540, NJ, USAQCD Labs, QTF Centre of Excellence, Department of Applied Physics, Aalto University, P.O. Box 13500, FI-00076 Aalto, Finland Photonics Laboratory, Physics Unit, Tampere University, P.O. Box 692, FI-33014 Tampere, FinlandDepartment of Chemistry, University of Helsinki, P.O. Box 55, FI-00014 Helsinki, Finland QCD Labs, QTF Centre of Excellence, Department of Applied Physics, Aalto University, P.O. Box 13500, FI-00076 Aalto, Finland We propose, and theoretically analyze, a practical protocol for the creation of topological monopole configurations, quantum knots, and skyrmions in Bose–Einstein condensates by employing fictitious magnetic fields induced by the interaction of the atomic cloud with coherent light fields. It is observed that a single coherent field is not enough for this purpose, but instead we find incoherent superpositions of several coherent fields that introduce topological point charges. We numerically estimate the experimentally achievable strengths and gradients of the induced fictitious magnetic fields and find them to be adjustable at will to several orders of magnitude greater than those of the physical magnetic fields employed in previous experimental studies. This property together with ultrafast control of the optical fields paves the way for advanced engineering of topological defects in quantum gases.Optically Induced Monopoles, Knots, and Skyrmions in Quantum Gases Mikko Möttönen January 14, 2024 ==================================================================Topology is the mathematical theory of the qualitative properties of shapes <cit.>. Its roots can be traced back to the origins of graph theory <cit.>, and, in its modern form, to an 1895 article by Poincare <cit.>. In physics, topological structures have attracted persistent attention ever since the vortex atom hypothesis of Kelvin in 1869 <cit.>, suggesting atoms to be knotted vortex loops in the ubiquitous ether. Even though this hypothesis was incorrect, an abundance of other topological structures has been discovered since then. Examples in fundamental physics include the Dirac monopole <cit.> and cosmic strings <cit.>. Moreover, condensed-matter systems support a wide variety of topological phenomena, such as topological defects <cit.> in concrete physical systems such as superfluids <cit.>, liquid crystals <cit.>, and Bose–Einstein condensates <cit.>, the quantum Hall effect <cit.>, and topological phases of matter, such as topological insulators <cit.> and topological superconductors <cit.>. Accordingly, topological phenomena in physics has developed into a vibrant research area in which experiments <cit.> and simulations <cit.> meet the methods of abstract mathematics <cit.>, and which has a wide range of potential applications to, for example, high-temperature superconductivity <cit.> and fault-tolerant quantum-computation <cit.>.Ultracold quantum gases and especially atomic Bose–Einstein condensates (BECs) with spin degree of freedom <cit.> provide a highly controllable system where the coherent quantum state of the gas can be imaged with high resolution. Thus they seem ideal for studying the topological properties of matter. 
Based on a theoretical proposal <cit.> that the topological winding of the direction of a three-dimensional quadrupole magnetic field can be used to imprint Dirac monopoles in BECs, these intriguing defects were observed for the first time in any continuous field <cit.>, followed by the first creations of topological monopoles <cit.>, quantum knots <cit.>, three-dimensional skyrmions <cit.>, and Alice rings <cit.>. However, a practical protocol for creating multiple defects inside a single cloud—a prerequisite for studying defect interactions—has been essentially lacking. A key challenge in manufacturing such configurations has been the difficulty of creating magnetic fields that vary in a sufficiently nontrivial fashion inside the small spatial extent of a Bose–Einstein condensate cloud by employing a conventional magnet <cit.>. A promising avenue for bypassing this issue is the manipulation of the internal degrees of freedom of atoms and molecules by coherent light <cit.>, which is one of the most versatile tools in the toolbox of the modern experimental physicist. The traditional applications of this phenomenon include magneto-optical traps and laser cooling <cit.>, which paved the way to the first experimental observation of BECs <cit.> and their topological vortex defects <cit.>. Moreover, optical driving fields have been employed in the creation of two-dimensional skyrmions in spin-2 BECs <cit.>, and several protocols based on optical methods have been proposed for generating skyrmions and possibly knotted vortex-loop configurations in Bose–Einstein condensates <cit.>. In addition, fine spatial control of the trapping potential allows the fabrication of optical lattices, enabling the simulation of a variety of quantum systems <cit.>. In this paper, we focus on the creation of topologically winding fictitious magnetic fields generated by atom-light interactions, suitable for creating a variety of topological defects in BECs, including monopoles, quantum knots, three-dimensional skyrmions, and configurations thereof. The effect of a coherent, off-resonant light field can be described by an effective Hamiltonian <cit.>, which has three interaction terms: the scalar, vector, and tensor light shifts. The scalar light shift usually plays the predominant role in applications. However, the effects of the vector light shift, equivalent to the effects of a fictitious magnetic field 𝐁_fic, cannot be overlooked, for example, in the estimation of the transition energies relevant for atomic clocks <cit.>. Importantly, large-gradient fictitious fields are a versatile tool <cit.> that have been used in cooling atoms below the Doppler limit <cit.>, in building deformable optical lattices <cit.>, which are useful for the realization of two-qubit gates <cit.> in neutral-atom quantum computing, and in optical nanofiber traps <cit.>. The tensor part of the light shift is negligible for the ground states of alkali metal atoms and does not play a role in our investigations. Let us consider the possibility of creating point-defect configurations in BECs by employing fictitious magnetic fields, thus addressing a key challenge in the case of complex point-defect configurations. Previously, single monopoles have been imprinted in BECs by adiabatically moving a topologically nontrivial magnetic field inside the cloud by slowly ramping a homogeneous bias magnetic field <cit.>.
Unfortunately, the extension of this scheme to simultaneously create several point defects is experimentally challenging due to the strong magnetic fields required to create the topologically nontrivial field <cit.>. To resolve this issue, we propose to replace the physical magnetic field by a fictitious field 𝐁_fic, the shape, strength, and stability of which are determined by the light sources used to induce 𝐁_fic. We find that the large gradients attainable <cit.> offer an opportunity for tailoring the micrometer-scale shape of the fictitious field, which is a prerequisite for creating a small enough point-defect configuration to fit inside the spatial extent of a BEC cloud. Thanks to the established experimental methods to control optical fields with high precision and resolution in space and time, our proposal uncovers paths to conquer new regimes for the experimental creation of various topological configurations. However, our observations suggest that there is a subtlety inherent to this strategy: the fictitious magnetic field 𝐁_fic generated by a single coherent laser field contains no topological charges, and therefore cannot alone be employed to create even a single point defect. Nonetheless, we demonstrate that it is possible to obtain topologically non-trivial fictitious fields by considering an incoherent superposition of multiple laser fields, the fictitious field of which is the sum of the fictitious fields of the individual components <cit.>. Consequently, we arrive below at an advanced technique for experimentally creating monopole and other topological configurations in spinor BECs. Let us consider an atom of total hyperfine angular momentum F interacting with a single coherent light field which is characterized by the complex-valued polarization vector ℰ. The corresponding fictitious magnetic field experienced by the atom is given by <cit.> 𝐁_fic := α_v/(8 g_F μ_B F) i(ℰ^* × ℰ), where α_v is the vector polarizability of the atom, g_F is the Landé g factor, and μ_B is the Bohr magneton. Thus, a spatially varying polarization vector may lead to a spatially varying fictitious magnetic field. Denoting ℰ = 𝐕 + i𝐖, where 𝐕 and 𝐖 are three-dimensional real-valued vector fields, we observe that 𝐁_fic ∝ 𝐕 × 𝐖. As we show below, a fictitious field 𝐁_fic of Eq. (<ref>) does not contain any topologically non-trivial winding along any sphere, which implies that the field does not contain any point-like topological charges, also referred to as topological monopoles, examples of which are illustrated in Fig. <ref>. More formally, the number of times the field winds along an arbitrary sphere Ω, on which the fictitious field obtains only nonzero values, can be obtained from Q_Ω := 1/(8π) ∫_Ω ω_i ϵ_ijk 𝐁̂_fic · (∂𝐁̂_fic/∂x_j × ∂𝐁̂_fic/∂x_k) = 0, where ϵ_ijk is the fully anti-symmetric tensor introducing an implicit summation over i, j, k ∈ {1, 2, 3}, the coordinate system is given by (x_1,x_2,x_3)=(x,y,z), and the hat on the vectors denotes unit norm. We mathematically consider Q_Ω as the degree <cit.> of the induced map 𝐁̂_fic|_Ω: Ω → S^2, where we have identified the two-dimensional sphere S^2 with the space of three-dimensional unit vectors. The equality Q_Ω = 0 holds if and only if 𝐁̂_fic|_Ω is nullhomotopic, i.e., homotopic to a constant map <cit.>.
In other words, a field that has no topological winding can be continuously transformed, without passing through zero, into a homogeneous configuration. To show that Q_Ω = 0 for all 𝐁_fic ∝ 𝐕 × 𝐖, we observe that the normalized fictitious field can be expressed as 𝐁̂_fic = 𝐕' × 𝐖', where 𝐕' and 𝐖' are obtained from 𝐕 and 𝐖 on Ω by applying the Gram–Schmidt orthonormalization process. The triad (𝐕', 𝐖', 𝐁̂_fic) may be regarded as a continuous map Ω → SO(3), and since any such map is null-homotopic <cit.>, we obtain the desired null-homotopy of 𝐁̂_fic from the null-homotopy of the triad. Thus, the fictitious field induced by a single, coherent light field contains no topological point charges. To overcome the above obstruction, we incoherently superimpose several coherent light fields, generated by several independent laser sources. The fictitious field induced by such a superposition is the sum of the fictitious fields generated by the individual coherent fields <cit.>. Alternatively, one could employ several lasers of slightly different wavelengths, in such a way that none of the frequency differences are close to the resonance frequencies of the atom. This ensures that the temporal dependence of the fictitious fields induced by the frequency offsets can be neglected in the dynamics of the condensate <cit.>. The cross-sectional polarization vector of a Hermite–Gaussian TEM_mn beam at focus, propagating along the z axis and polarized along a vector in the xy plane, is given by ℰ_m,n(x,y) = A_m,n exp[-(x^2 + y^2)/w_0^2] H_m(√(2) x/w_0) H_n(√(2) y/w_0), where the parameter w_0 is the waist of the beam <cit.> and the constant A_m,n is given by A_m,n = √(4P/(π w_0^2 ϵ_0 c_0 2^(m+n) m! n!)), where P is the optical power of the beam, ϵ_0 is the permittivity of vacuum, and c_0 is the speed of light in vacuum. Hence, a coherent superposition of a TEM_00 and a TEM_10 beam, with orthogonal polarizations and a π/2 phase difference, induces for x, y ≪ w_0 a fictitious field of the form 𝐁_fic^(00,10,z) ∝ x ẑ, as we illustrate in Fig. <ref>. Adding three such fields generated by incoherently superimposing pairs of beams propagating along the x, y, and z directions, respectively, we obtain the total fictitious field 𝐁_fic^tot = 𝐁_fic^(00,10,x) + 𝐁_fic^(00,10,y) + 𝐁_fic^(00,10,z) ∝ [ y z x ]^T, which contains a topological charge of 1 at the origin, as shown in Fig. <ref>(d). In order to create more complicated point-defect configurations, one or several of the above TEM_10 beams can be replaced by a higher-order Hermite–Gaussian beam TEM_n0 with n>1. For instance, if one TEM_20 beam is used together with two TEM_10 beams, the total fictitious field 𝐁_fic^pair = 𝐁_fic^(00,10,x) + 𝐁_fic^(00,20,y) + 𝐁_fic^(00,10,z) contains a topological monopole-antimonopole pair in the vicinity of the origin, as shown in Fig. <ref>. In general, any configuration of monopoles and antimonopoles in a three-dimensional charge-alternating grid may be generated. The effects of physical and fictitious magnetic fields combine additively <cit.>. Hence, a monopole configuration may be imprinted inside the sample by first producing a fictitious field with the corresponding point-defect configuration, and then adiabatically moving the zero point(s) of the fictitious field inside the cloud in order for the order parameter to keep aligned with the field, by slowly ramping down the strength of a physical bias magnetic field as in Refs. <cit.>.
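As a quick numerical check of the beam construction described above, the following sketch (our own illustration, not code from the paper) builds the fictitious field of a coherent TEM_00 + TEM_10 pair from the scalar focal-plane profiles via 𝐁_fic ∝ i(ℰ^* × ℰ), sums three such incoherent pairs propagating along x, y, and z, and verifies that the zero at the origin is nondegenerate with unit topological charge. All prefactors are dropped, purely transverse paraxial polarization is assumed (so each pair's field points along its propagation axis), and the sign of the relative phase is chosen so that the charge comes out +1.

```python
import numpy as np

w0 = 5e-6  # beam waist [m]; the value used later in the text

def tem(m, x, y):
    """Scalar TEM_m0 profile at focus (unnormalized), cf. the Hermite-Gaussian formula above."""
    H_m = np.polynomial.hermite.Hermite.basis(m)  # physicists' Hermite polynomial H_m
    return np.exp(-(x**2 + y**2) / w0**2) * H_m(np.sqrt(2) * x / w0)

def pair_field_z(x, y):
    """z component of B_fic ~ i(E* x E) for TEM_00 (x-pol) + TEM_10 (y-pol, -pi/2 phase)."""
    E = np.array([tem(0, x, y), -1j * tem(1, x, y), 0.0])
    return np.real(1j * np.cross(np.conj(E), E))[2]  # field points along the propagation axis

def total_field(r):
    """Sum of three incoherent pairs propagating along x, y, z (coordinates cyclically permuted)."""
    x, y, z = r
    return np.array([pair_field_z(y, z),   # pair along x: B_x ~ y near the origin
                     pair_field_z(z, x),   # pair along y: B_y ~ z
                     pair_field_z(x, y)])  # pair along z: B_z ~ x

# Jacobian of the total field at its zero (the origin); det > 0 <=> topological charge +1.
eps = 1e-9
J = np.column_stack([(total_field(eps * e) - total_field(-eps * e)) / (2 * eps)
                     for e in np.eye(3)])
print(np.sign(np.linalg.det(J)))  # -> 1.0, a single monopole at the origin
```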
Similarly, a configuration of either quantum knots or three-dimensional skyrmions may be created by rapidly switching on the fictitious field, inducing position-dependent spin rotations that form topological solitons with nontrivial π_3 charges as in Refs. <cit.>. To capture the full topology of the fictitious field, its zero-point configuration needs to be small enough to fit inside the BEC cloud. As an alternative to the physical magnetic field, one may employ two phase-shifted TEM_00 beams to induce an additional, essentially homogeneous fictitious field at the BEC and move the zero points by changing the intensity of the fields. In case there are physical offset magnetic fields to cancel, one may need homogeneous fictitious fields in three linearly independent directions. This kind of all-optical control of the magnetic fields enables orders-of-magnitude faster field ramps than those with physical magnetic fields, which can be advantageous, for example, in creating quantum knots <cit.> and skyrmions <cit.>, where the zero point of the field is instantaneously brought into the condensate. Next, we describe the parameter values for the experimental creation of fictitious magnetic fields containing topological monopole-antimonopole pairs that are small enough to fit inside a typical BEC that is 15 µm in diameter, similar to Ref. <cit.>. We consider a ^87Rb BEC with α_v = 4462 a.u. <cit.> (7.357×10^-38 A^2s^4kg^-1), F=1, and g_F=-1/2, but parameter values for other atomic species may be obtained in a similar manner. We consider the beam configuration of Eq. (<ref>) and Fig. <ref>, where all beams have a waist of w_0 = 5 µm, a wavelength of λ = 770 nm, and an optical power of P = 1 W. There are several approaches to create a coherent superposition of Hermite–Gaussian beams of different orders. For example, one can split a Gaussian beam and transform one of the beams into a higher-order mode using phase plates <cit.>, or employ two phase-locked lasers with intra-cavity elements to control the transverse profiles of each laser beam. Alternatively, two different Hermite–Gaussian beams with orthogonal polarizations may be produced in the same cavity using a birefringent beam displacer or a polarizing beam splitter to partially separate the beam paths in the cavity and phase elements to provide gain only for the selected transverse modes <cit.>. An additional phase plate can be added into one of the paths for maintaining coherence with a selected phase difference between the modes. Regardless of the approach, the resulting beams are focused in order to obtain the desired beam waist. The three pairs of coherent beams will be focused on the BEC with orthogonal propagation directions. Since the distance between the two zero points of a TEM_20 beam at focus coincides with the waist, we expect the total fictitious field 𝐁_fic^pair = 𝐁_fic^(00,10,x) + 𝐁_fic^(00,20,y) + 𝐁_fic^(00,10,z) to contain a topological monopole and an antimonopole that are five microns apart <cit.>. To confirm the positions of the field zeros and to estimate the strength of the total fictitious field 𝐁_fic^pair, we calculate the fields 𝐁_fic^(00,10,x), 𝐁_fic^(00,20,y), and 𝐁_fic^(00,10,z) using Eqs. (<ref>), (<ref>), and (<ref>) on a three-dimensional grid around the origin. For each pair of coherently superimposed Hermite–Gaussian laser beams, we evaluate Eq. (<ref>) across the transverse plane at the BEC using the known electric-field profiles of linearly polarized Hermite–Gaussian modes <cit.>. In addition to the dominant polarization component considered in Eq.
(<ref>), we also include cross and longitudinal polarization components that may become significant when strongly focusing the beam <cit.>, but here their contribution is rather small. Since the considered spatial lengths (∼10 µm) are an order of magnitude shorter than the Rayleigh range of the beam, we can neglect changes in the transverse profile and the Gouy phase of the laser beams during propagation, and assume for each beam that the magnetic field from Eq. (<ref>) remains constant along the direction of beam propagation. The sum of the three fields yields the total fictitious field 𝐁_fic^pair shown in Fig. <ref>. The separation of the field zeros is approximately 5 µm, as expected. The maximum strength of the fictitious field is 71 mT, and the spatial gradient of the field between the monopole-antimonopole pair is about 22 kT/m. This gradient is almost six orders of magnitude greater than the roughly 40 mT/m used in previous monopole experiments <cit.>. Thus, by adjusting the power of the lasers, it is convenient to span a very broad range of gradients. At 100 µW optical power, the maxima of the scalar light shifts caused by the coherent superpositions TEM_00 + TEM_10 and TEM_00 + TEM_20 are roughly 10^-28 J each. Thus, with a 5-µm waist, the total effect of the incoherent superposition employed in the defect creation corresponds to a repulsive 100-Hz trap, which is weaker than the optical traps previously employed in studying monopoles in BECs. Moreover, we estimate the ground-state loss of atoms owing to the spontaneous scattering of the trapping laser to be of the order of 10 s^-1 <cit.>, corresponding to a BEC lifetime of the order of 100 ms. In contrast, a defect creation time of roughly 0.5 ms has been previously used for quantum knots and skyrmions <cit.>, and 40 ms for monopoles <cit.>. Thus, with 100 µW of optical power, corresponding to a 2.2-T/m gradient, the spurious effects of the scalar light shift and spontaneous scattering do not dominate even if the defect creation time and optical trap are taken as in previous experiments, and we obtain a two-orders-of-magnitude improvement in the gradient field. We described a practical protocol for imprinting topological monopole, quantum-knot, and skyrmion configurations in Bose–Einstein condensates by employing fictitious magnetic fields induced by the ac Stark shift. Using experimentally feasible parameters, the gradients of the generated fictitious fields seem orders of magnitude greater than those previously obtained with physical magnetic fields. Combining this result with those of previous numerical simulations <cit.>, the proposed protocol seems feasible for creating, for example, a Dirac monopole-antimonopole pair in the ferromagnetic phase of a spin-1 BEC, and a topological monopole-antimonopole pair in the polar phase of a spin-1 BEC <cit.>. The experimental implementation of the protocol is an appealing future direction of research. Our analysis demonstrates the versatility of the fictitious fields generated by light-matter interactions. In particular, fictitious fields seem to have the advantage of convenient shaping of micrometer-scale textures compared to physical magnetic fields. They can also offer an all-optical scheme to control the fictitious magnetic fields extremely quickly, free of the typical sources of electrical noise in coils, thus opening opportunities for unprecedented accuracy in creating quantum knots, skyrmions, and possibly new types of topological textures.
We estimate that 1-W laser beams may be used to create a quantum knot in less than a nanosecond. Future studies may reveal whether it is possible to extend our configuration of point-like field zeros to line-like zeros with non-trivial topological consequences. § ACKNOWLEDGMENTS The authors would like to thank Roberto Zamora-Zamora, Joonas Vuojamo, and David Hall for useful discussions. We have received funding from the Research Council of Finland Centre of Excellence program (project nos. 352925 and 336810), from the Vilho, Yrjö and Kalle Väisälä Foundation of the Finnish Academy of Science and Letters, from the National Science Foundation (Grant No. DMS-1926686), from the Fulbright Finland Foundation, and from the Emil Aaltonen Foundation. T.A. thanks the IAS for excellent working conditions. I took this figure out now because it is not very informative, and not even particularly nice. Supplemental Material for: Manufacturing Monopole Configurations in Bose–Einstein Condensates by Employing the Vector Light Shift § NOTES * What is the size of a BEC cloud? A: About 15 micrometers <cit.>. * What is the essential size of the defect configuration? How small can we make the cross sections of a Hermite–Gaussian TEM_20 beam? In particular, how far apart are the two zeroes? A: The zeroes have coordinates ±w/2, where w is the waist (or radius?) of the beam (Tommi: 'radius' is always correct, 'waist' is the beam radius at the focus). In practice, w ∼ 0.5 mm if you don't focus. In Joonas Vuojamo's thesis, which should use parameters similar to <cit.>, the two zeroes are about 5 μm apart. To achieve this with a TEM_20 beam, the radius should be 5 μm. * Do we need to focus the beams? If yes, then does it affect the behavior of the field in an essential way? A: Yeah, we need to bring the radius of the beam from 0.5 mm down to 5 μm so that the zeroes of the TEM_20 beam fall inside the cloud. This induces a longitudinal component for the field; how big is it? For a single coherent superposition of a Gaussian and a Hermite–Gaussian beam, let us denote by Δ the longitudinal field component. Since Δ × Δ^* = 0, (ℰ + Δ) × (ℰ + Δ)^* = ℰ × ℰ^* + Δ × ℰ^* + ℰ × Δ^*. The deviation from the ideal case, Δ × ℰ^* + ℰ × Δ^*, is transverse to the propagation direction. If the longitudinal field is considerably weaker than ℰ at the boundary of the cloud, then the topological properties of the field remain equivalent to the ideal case. * What is the strength of the fictitious field? Is it comparable to the fields used in previous experiments and in the simulations? The average intensity of electromagnetic waves is I = (1/2) c ϵ_0 |ℰ|^2, where c is the speed of light and ϵ is the permittivity (I guess these constants need to be not for vacuum but specific to rubidium). The amplitude of the electric field is |ℰ| = √(2I/(cϵ)) = √(2P/(π r^2 c ϵ)), where P is the optical power and r is the beam radius. For a TEM_00 beam we can use P = 1 W and r = 5 µm, which results in a fictitious field amplitude |𝐁_fic| = 0.019 T, when α_v = 4462 a.u. (7.357·10^-38 A^2s^4kg^-1), F=1, and g_F=-1/2. In higher-order beams the transverse profile is larger and therefore the intensity smaller. From simulations we see that the intensity of the TEM_10 beam (both spots) is 74% of that of the fundamental beam. Similarly, the intensity of the TEM_20 beam is 66% (outer spots) and 50% (central spot) of that of the TEM_00 beam. We get |𝐁_fic| = 14.1 mT for the TEM_10 beam and |𝐁_fic| = 12.6 mT (9.5 mT) for the side spots (central spot) of the TEM_20 beam.
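The 0.019-T estimate above can be reproduced directly from the quoted quantities. The following short sketch is our own check, not part of the original notes; it uses vacuum values for c and ϵ and takes α_v |ℰ|^2 / (8 |g_F| μ_B F) as an upper bound for the field magnitude.

```python
import numpy as np

# Reproduce the 0.019-T estimate for a 1-W TEM_00 beam with a 5-um radius (87Rb, F = 1, g_F = -1/2).
P, r = 1.0, 5e-6                   # optical power [W], beam radius [m]
c0, eps0 = 2.998e8, 8.854e-12      # speed of light [m/s], vacuum permittivity [F/m]
alpha_v = 7.357e-38                # vector polarizability [A^2 s^4 kg^-1], value quoted above
muB, gF, F = 9.274e-24, 0.5, 1     # Bohr magneton [J/T], |g_F|, hyperfine spin

E2 = 2 * P / (np.pi * r**2 * c0 * eps0)     # |E|^2 from I = (1/2) c eps0 |E|^2 with I = P / (pi r^2)
B_fic = alpha_v * E2 / (8 * gF * muB * F)   # upper bound, since |i(E* x E)| <= |E|^2
print(f"|B_fic| ~ {B_fic * 1e3:.0f} mT")    # -> ~19 mT, i.e. 0.019 T as stated above
```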
* Description of the beam-splitting, mode-alteration, and coherent superposition process. A: Discussion about the effects of focusing can be found in <cit.>. Discussion about creating higher-order Hermite–Gaussian beams from a Gaussian one can be found in <cit.>. A simpler approach that better maintains the coherence is described in <cit.> and <cit.>. One can create two different HG beams (with orthogonal polarizations) in the same cavity using a birefringent beam displacer (or a polarizing beam splitter) to partially separate the beam paths in the cavity, and phase elements to provide gain only for the selected modes. An additional phase plate can be added into one of the paths for maintaining coherence with a selected phase difference between the modes. * Remark/Question: The scalar phases of different types of Hermite–Gaussian beams evolve differently in the propagation direction due to the Gouy phase evolving as (l+m+1) arctan(z/z_R). This should not be a problem, unless the phase difference exceeds π/2 inside the cloud. What we want to know is: is the Rayleigh range z_R much larger than the size of the cloud? A: This should not be an issue, as the wavelength is λ ∼ 1 μm and w_0 = 5 μm ⇒ z_R = π w_0^2/λ ∼ 80 μm, which is much larger than the diameter of the cloud. References for some claims about fictitious fields: * The fictitious magnetic field combines additively with conventional magnetic fields <cit.>. * Fictitious magnetic fields generated by multiple beams are added to each other: this is stated in Ref. <cit.>. By <cit.>, if you have several beams with slightly different wavelengths, then this is also true up to very rapidly oscillating time-dependent terms. This should also be fine. * The tensor part of the light shift is negligible for alkali atoms <cit.> | http://arxiv.org/abs/2311.15972v2 | {
"authors": [
"Toni Annala",
"Tommi Mikkonen",
"Mikko Möttönen"
],
"categories": [
"cond-mat.quant-gas",
"physics.optics"
],
"primary_category": "cond-mat.quant-gas",
"published": "20231127161426",
"title": "Optically Induced Monopoles, Knots, and Skyrmions in Quantum Gases"
} |
Mass reconstruction and noise reduction with cosmic-web environments Longlong Feng^1 January 14, 2024 ==================================================================== Instruction tuning is now a widely adopted approach to aligning large multimodal models (LMMs) to follow human intent. It unifies the data format of vision-language tasks, enabling multi-task joint training. However, vision-language tasks are constantly being created in practice. Instead of always re-training LMMs when new tasks arrive, continual learning offers flexibility for models to continually and efficiently exploit the evolving data. This work aims to explore the following two questions: 1) Do LMMs still suffer from catastrophic forgetting in continual instruction tuning? 2) Are the existing three classes of continual learning methods still applicable to the continual instruction tuning of LMMs? An extensive study is conducted to address the above questions. First, we establish the first benchmark in this setting and reveal that catastrophic forgetting is still observed when continually instruction-tuning LMMs. However, the multi-task joint instruction tuning can facilitate the model's continual learning ability and mitigate forgetting. Second, we integrate and adapt classic continual learning methods to our context, demonstrating the efficacy of data replay and model expansion strategies across diverse scenarios. In contrast, regularization-based methods only perform well on models that have been jointly instruction-tuned on multiple tasks. Third, we delve into the correlation and forgetting dynamics between vision-language task pairs and propose task-similarity-informed regularization and model expansion methods for continual instruction tuning of LMMs. Experimental results show that our approach consistently boosts the model's performance.§ INTRODUCTIONInspired by the success of GPT4, an array of works pertaining to large multimodal models (LMMs) have emerged recently <cit.>. These LMMs typically undergo a two-stage training process, first pretraining for text-image alignment and then finetuning for downstream tasks. In the second phase, instruction tuning stands out as a widely adopted scheme for aligning LMMs with human intent. This approach enables multi-task training with a unified image-instruction-output data format and makes the trained models easier to generalize to unseen tasks <cit.>.While LMMs exhibit impressive zero-shot performance on unseen instructions, expanding the training datasets to incorporate new task data can substantially enhance their capabilities on the new task <cit.>. However, since vision-language tasks can be constantly created, it is costly to always merge the incoming data to retrain the LMMs. Hence, an approach is sought that can render the model flexible enough to continually and efficiently exploit the ever-emerging data. This aligns with the principles of continual learning, where models are designed to continually learn new tasks like humans.Existing continual learning studies have shown that sequentially finetuned models suffer from catastrophic forgetting <cit.>, a phenomenon that models finetuned for new tasks forget or overwrite previously acquired knowledge. Recently, several researchers studied the continual instruction tuning for large language models (LLMs) <cit.>. 
Zhang et al. <cit.> found that sequential instruction tuning on LLMs exhibits surprisingly comparable performance to continual learning methods, which seems to contradict the phenomenon of catastrophic forgetting. However, continual instruction tuning for LMMs remains underexplored. Zhai et al. <cit.> investigate catastrophic forgetting in LMMs by treating them as image classifiers. Despite its convenience, this treatment is confined to classification tasks and fails to fully harness the potential of instruction tuning, which unifies various vision-language tasks. Inspired by existing works, we aim to explore the following two questions: 1) Do LMMs still suffer from catastrophic forgetting in continual instruction tuning? 2) Are the existing three classes of continual learning methods still applicable to the continual instruction tuning of LMMs? In this work, an extensive study is conducted to address the above questions. First, we establish the first continual instruction tuning benchmarks for LMMs by curating a selection of tasks and datasets based on the taxonomy of vision-language tasks in <cit.>. To explore the effect of initial multi-task instruction tuning on continual learning, two benchmarks are examined. Benchmark 1 starts continual instruction tuning from a text-image aligned pretrained model, i.e. BLIP2 <cit.>, whereas benchmark 2 starts from a multi-task instruction-tuned model, i.e. InstructBLIP <cit.>. As shown in <ref>, the phenomenon of catastrophic forgetting is observed in both settings, with benchmark 2 showing a milder degree of forgetting. A possible reason is that multi-task joint instruction tuning helps the model learn to follow instructions and thus facilitates continual learning. Second, we exhaustively explore the effectiveness of existing continual learning methods in this setting. Specifically, we integrate representatives of two classes of continual learning methods into our setting, i.e. regularization-based <cit.> and replay-based methods <cit.>, and adapt the model expansion methods <cit.> for LMMs. In particular, we expand the projection layer for each new task as a task-specific module and freeze all other modules to prevent forgetting. Our results reveal that the regularization-based methods fail to effectively handle forgetting when the model is not instruction-tuned on multiple tasks initially. Conversely, they show competitive continual learning performance without additional isolated structures or stored samples when starting from InstructBLIP. The other two classes, replay-based and model expansion approaches, consistently achieve promising results in both settings. Third, since there are correlations between vision-language tasks, which can have a significant impact on anti-forgetting and transfer ability, continual instruction tuning methods for LMMs are expected to exploit this characteristic effectively. By virtue of instruction tuning, tasks are uniformly formulated as image-instruction-output datasets. We can easily obtain task relevance by measuring the similarity of image, instruction, and output between tasks. Based on this, we propose task-similarity-informed regularization and model expansion methods to encourage the reuse of parameters or structures for relevant tasks. Experimental results show consistent improvement with our method compared to traditional continual learning baselines.
To summarize, our contributions in this paper can be outlined as follows: * We are the first to establish continual instruction tuning benchmarks for LMMs. * We conduct an in-depth analysis of classic continual learning methods and shed light on the applicability of these methods to the continual instruction tuning of LMMs. * We introduce task similarity to traditional continual learning methods to exploit the relevance of vision-language tasks, which consistently boost the model's performance.§ RELATED WORKS §.§ Large Multimodal Models Large multimodal models (LMMs) primarily function as generative models that produce text sequences as output when provided with images and texts as input. Most of the LMMs share the architecture of bridging the visual encoder and the large language model by a connection module <cit.>. Specifically, BLIP2 <cit.> and InstructBILP <cit.> train the Qformer as the vision language connector while LLaVA <cit.> and MiniGPT4 <cit.> only train a linear projection layer. LMMs generally follow a two-step training procedure. First, they are pretrained using image-text pairs to align visual features with large language model word embedding. Next, instruction tuning is adopted to finetune LMMs for downstream tasks. Originating from natural language processing, instruction tuning is now a commonly used strategy to align LMMs with human intents. This method allows for multi-task training with a unified image-instruction-output format, enhancing the models' ability to generalize to new tasks. §.§ Continual Learning Existing continual learning methods can be broadly summarized into three categories including regularization-based, replay-based, and model expansion methods. Regularization-based methods usually add a regularization term to prevent important parameters from deviating from the last stage checkpoint <cit.> or to enforce similar model outputs with old tasks <cit.>. Replay-based methods buffer a small number of selected samples from old tasks and incorporate them into the training process of the current task <cit.>. Model expansion methods typically expand some structure of the model to accommodate new tasks <cit.>.§.§ Continual Learning of Multimodal Models Recently, there have been a growing number of studies focusing on the continual learning of multimodal models. Some of these works propose new benchmarks for continual learning of visual question answering (VQA) <cit.>. Some focus on the continual learning of vision-language models by taking VQA tasks as classification problems <cit.>. Other works study the continual pretraining of CLIP models <cit.>. Zhai et al. <cit.> also investigate the continual learning of LMMs but the study is limited to classification tasks. Different from these works, we conduct our continual learning study on LMMs and examine the most prevalent training scheme of instruction tuning. This setting is very different from the former continual learning studies on computer vision and VQA tasks in the following aspects: 1) The output is from a generative language model instead of an ever-expanding classifier. 2) Instruction tuning unifies different task forms. In this case, we are able to study the continual learning of diverse vision-language tasks instead of only VQA tasks. § CONTINUAL INSTRUCTION TUNING §.§ Preliminary: Continual LearningContinual learning requires the model to adapt to each new task that arrives in succession without erasing the knowledge gained from earlier ones. 
Suppose there are N tasks [𝒯_1, ..., 𝒯_N] in total, each corresponding to one of the N datasets [𝒟_1, ..., 𝒟_N]. At each time step i, the learning system is presented with a new dataset 𝒟_i and aims to incorporate this new data into its existing knowledge. Old data {𝒟_k}_k=1^i-1 is deemed inaccessible at this point, except that replay-based methods can store a small proportion of samples in a buffer and mix them with 𝒟_i for training. In the context of instruction tuning, tasks are described as instructions, and datasets are denoted as 𝒟_i={(t^i_j, v^i_j, o^i_j)}_j=1^N_i, where t, v, o represent the text input, image, and text output, respectively. §.§ Continual Instruction Tuning Benchmark The training of LMMs consists of two phases: image-text alignment pretraining and instruction tuning. We focus on continual instruction tuning, but use the model obtained after each phase, i.e. BLIP2 <cit.> and InstructBLIP <cit.>, as a starting point, respectively. To be more specific, we explore continual instruction tuning on LMMs trained with or without task 0 in <ref>. Our goal is to study whether multi-task joint instruction tuning improves the model's continual learning ability, as well as the differences in the applicability of the continual learning methods between the two cases. We commence by establishing benchmarks. Dai et al. <cit.> categorize the vision-language tasks into 11 groups, and seven of them are included in the training set. We follow their taxonomy for dataset selection. For the first setting, there is more flexibility in the choice of datasets since the model has not yet been trained on any instruction-tuning datasets. We try to ensure that the image set is also incremental across tasks and that the selected datasets are of comparable size. The resulting benchmark is shown in <ref>. For the second setting, since InstructBLIP has already seen a number of datasets, we make a selection from the remaining ones. These datasets have greater variation in size compared to benchmark 1, and the initial model has already been exposed to the involved tasks except for visual reasoning. In this benchmark, the joint training datasets of InstructBLIP are considered task 𝒯_0 and tested for forgetting in the subsequent continual learning phases. Detailed information about the benchmark datasets can be found in <ref>. §.§ Revisit of Continual Learning Methods Continual learning methods can be broadly summarized into three typical categories. We integrate several compatible representatives of each category into continual instruction tuning. 1) Regularization-based Methods: this sort of method reduces forgetting by utilizing a regularization term: ℒ_reg = ∑_k r_k (θ_k - θ̄_k)^2, where r denotes the parameter importance scores, and θ and θ̄ are the trainable parameters and the old parameters, respectively. Specific methods differ mostly in their importance measures: EWC <cit.> calculates the Fisher information matrix after training on each task; MAS <cit.> considers the sensitivity of the output to changes in each parameter; SI <cit.> computes the contribution of each parameter to the loss reduction throughout training; CLoRA <cit.> incorporates the parameter-efficient finetuning method LoRA <cit.> and computes the cumulative parameter changes as importance scores. 2) Replay-based Methods: ER <cit.> stores a small proportion of samples from each task and replays them when training on the current task. Specifically, we merge the stored data directly with the current training set and then sample the training batches following <cit.>.
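A minimal sketch of this experience-replay setup (our own illustration; the 1% buffer budget follows the implementation details given later, everything else is assumed):

```python
import random
from torch.utils.data import ConcatDataset, DataLoader

def update_buffer(buffer, finished_dataset, fraction=0.01):
    """After finishing a task, keep a random 1% of its samples in the replay buffer."""
    n = max(1, int(fraction * len(finished_dataset)))
    buffer.extend(finished_dataset[i] for i in random.sample(range(len(finished_dataset)), n))
    return buffer

def replay_loader(current_dataset, buffer, batch_size=256):
    """Merge buffered old samples with the current task's data and sample batches from the mixture."""
    return DataLoader(ConcatDataset([current_dataset, buffer]), batch_size=batch_size, shuffle=True)
```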
AGem <cit.> uses the buffered samples to rectify the gradient direction of each parameter to the current task loss. 3) Model Expansion Methods: Since we are finetuning on top of a pre-trained LMM, the expanded module of the existing continual learning methods does not apply to the architecture of these models. Despite the various structures of existing LMMs, the projection layers are all used for finetuning and exhibit comparable results. Therefore, we expand the projection layer in LMMs for each new task and learn a corresponding key feature to retrieve the task-specific module for evaluation. Except for task-specific components, all other modules are frozen to prevent forgetting. This scheme is denoted as EProj.§ TASK-SIMILARITY-INFORMED CONTINUAL LEARNINGIn exploratory experiments, we observed a high correlation between some vision-language tasks, and a model trained on one task may also perform well on similar tasks. This correlation can greatly influence the anti-forgetting and transfer ability of the model. A quantitative analysis of this phenomenon is displayed in <ref>. To exploit this property, we introduce task similarity into two compatible classes of continual learning methods, i.e. regularization-based methods and model expansion methods. The overall idea is illustrated in <ref>. We first discuss how to measure task similarity and then present the task-similarity-informed regularization and model expansion methods. §.§ Task Similarity Measures Task similarity measures are utilized to automatically determine the relevance between tasks. Nikandrou et al. <cit.> compute task similarities considering answer distributions, average embeddings of image, question, and the joint pair. In our case, instruction tuning enables various tasks to be formulated as image-instruction-output datasets. Therefore, we utilize the mean embeddings of image e(v), instruction e(t), and output e(o) of the entire dataset to comprise the task embeddings. Note that mean answer embedding is employed instead of answer distribution since answers across different tasks barely overlap in our benchmark.Specifically, we adopt the BERT <cit.> model and the frozen ViT <cit.> in LMMs as the function e to encode the texts and images, respectively. Cosine similarity is then applied to measure the similarity score between the current task 𝒯_i and each old task. To fuse the similarity of the three embeddings, namely s^v_i, s^t_i, s^o_i, we standardize each of them and multiply them up to get the final task similarity score s_i (See <ref> for more details). §.§ Task-Similarity-Informed Regularization Existing regularization-based methods mainly focus on the parameter importance measures but accumulate multi-stage importance scores through simple moving average or sum operations. For long-term continual learning, moving average gradually relaxes the parameter constraints on early tasks and causes forgetting, whereas summing the importance scores for each task leads to increasing parameter constraints and discourages learning for future tasks. However, given task similarity scores, we can adaptively weight the parameter importance based on the relation between the current task and each old task.Inspired by skill localization <cit.>, we associate each old task with a group of parameters, taking them as skill parameters for a given task. For those old tasks that resemble the current task, we impose looser regularization constraints on their skill parameters and, conversely, stricter constraints. 
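To make the similarity measure of the preceding subsection concrete, here is a minimal PyTorch-style sketch (our own; function and variable names are not from the paper). Each task is represented by the mean image, instruction, and output embeddings of its dataset; per-component cosine similarities to all old tasks are standardized and multiplied to give the fused scores s_i that the adaptive weighting rule below indexes.

```python
import torch
import torch.nn.functional as F

def task_embedding(image_embs, instr_embs, output_embs):
    """Task embedding: mean image / instruction / output embeddings over the whole dataset."""
    return [e.mean(dim=0) for e in (image_embs, instr_embs, output_embs)]

def fused_task_similarity(current, previous):
    """Fused similarity s_i between the current task and every previous task (>= 2 old tasks assumed)."""
    fused = torch.ones(len(previous))
    for c in range(3):  # image, instruction, and output components
        s = torch.stack([F.cosine_similarity(current[c], old[c], dim=0) for old in previous])
        fused = fused * (s - s.mean()) / (s.std() + 1e-8)  # standardize, then multiply the components
    return fused
```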
Specifically, we determine the task ID l_k associated with each parameter θ_k as the task with the largest importance score r_k. In this way, we only need to store the cumulative maximum parameter importance r^max and the corresponding task IDs l instead of the importance scores of every task. To adaptively weight the parameter importance, the stored task ID l_k of a parameter θ_k is used to index the similarity scores s_i. Then, the indexed similarity score s_i,l_k is employed as a guide weight for the regularization term of the corresponding parameter: ℒ_reg = ∑_k (1 - s_i,l_k) · r^max_k · (θ_k - θ̄_k)^2. This adaptive importance weighting formula is also depicted in <ref>. Note that we do not specify the importance measure, since the focus of our method is on the adaptive task-similarity-informed weighting mechanism. Therefore, we experiment with different importance measures from existing methods. The overall procedure for task-similarity-informed regularization (TIR) can be found in <ref>. For the first task, we only need to compute and store the task embeddings, the importance scores r^max, and the initial task indices l. For the following tasks, the task similarity vector is calculated using the embeddings of all seen tasks, and the adaptively weighted regularization term is added to form the overall training loss: ℒ_overall = ℒ_task + λ_1 ℒ_reg, where λ_1 is a fixed factor that scales the regularization term to a magnitude comparable to that of the task loss. §.§ Task-Similarity-Informed Model Expansion Model expansion methods can achieve remarkable performance without storing old samples by training a new module for each new task and keeping all other modules frozen. During evaluation, we need to retrieve task IDs for test samples in order to perform model inference with the task-specific component. As noted in <ref>, task similarity can be obtained by comparing the embeddings of instruction, image, and answer. Here, this method can also be applied to match test samples with known tasks, simply by omitting the answer-embedding component. Specifically, we learn a task-specific key for task ID retrieval following <cit.>. In our case, this key comprises both an image and an instruction embedding and is constantly pulled closer to the corresponding embeddings of samples from task 𝒯_i during training: ℒ_pull = ∑_j (1 - γ(e(v_j^i), k^v_i)) + (1 - γ(e(t_j^i), k^t_i)), where γ is the cosine similarity and k is the learnable task-specific key. The overall loss function comprises the task loss for training the task-specific module and the pull loss for training the task-specific key: ℒ_overall = ℒ_task + λ_2 ℒ_pull. In addition to its application to task ID retrieval, task similarity can also be used to gauge the need for adding new structures. Most model expansion methods indiscriminately add structure for new tasks, which causes the number of model parameters to grow with the number of tasks. However, by measuring the similarity of the new data to the existing tasks, it is easy to determine whether an additional structure needs to be introduced. We include the details of the overall procedure in <ref>. § EXPERIMENTS §.§ Implementation Details We adopt the InstructBLIP (FlanT5XL) model as the architecture of the continual learners. Following <cit.>, both the visual encoder and the LLM are kept frozen, and we refer the readers to <cit.> for more details on the instruction tuning process. For the EProj approach, we only train the projection layer of the current task in the Qformer and its corresponding key.
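The two loss terms introduced above can be sketched as follows (a minimal PyTorch-style illustration of ours; tensor and argument names are assumptions, not the paper's code). The first function implements the similarity-weighted regularizer ℒ_reg, where the stored task ID l_k of each parameter indexes the similarity vector s_i of the current task; the second implements the key pull loss ℒ_pull used by EProj.

```python
import torch

def tir_reg_loss(named_params, old_params, r_max, task_id, sim):
    """Task-similarity-informed regularization: sum_k (1 - s_{i,l_k}) * r^max_k * (theta_k - old_k)^2."""
    loss = 0.0
    for name, p in named_params:
        s = sim[task_id[name]]  # similarity of the current task to the task l_k stored for this parameter
        loss = loss + ((1.0 - s) * r_max[name] * (p - old_params[name]).pow(2)).sum()
    return loss

def pull_loss(image_emb, instr_emb, key_v, key_t):
    """Pull the task-specific key towards the image / instruction embeddings of the current batch."""
    cos = torch.nn.functional.cosine_similarity
    return (1 - cos(image_emb, key_v.expand_as(image_emb))).sum() + \
           (1 - cos(instr_emb, key_t.expand_as(instr_emb))).sum()

# Overall objectives, cf. the equations above (lambda_1 = 1e8 and lambda_2 = 0.1 per the implementation details):
#   loss = task_loss + 1e8 * tir_reg_loss(model.named_parameters(), old, r_max, l, s_i)   # TIR
#   loss = task_loss + 0.1 * pull_loss(image_emb, instr_emb, key_v, key_t)                # EProj key training
```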
Both checkpoints, the pretrained BLIP2 <cit.> and the jointly instruction-tuned InstructBLIP, are adopted as starting points for the continual instruction tuning experiments. The batch size for each task is 256, the maximum number of epochs is 5, and the learning rate is 1e-5 if not otherwise specified. λ_1 is 1e8 and λ_2 is 0.1. For data replay methods, we store 1% of the samples of the entire dataset for each task in the buffer. In addition to continual learning methods, we also present results for two baselines. SeqFT naively finetunes the model on each task in a sequential manner without any continual learning recipe, and DirectFT represents the results of finetuning the initial model on each dataset directly. §.§ Metrics For each task and dataset, we report the widely adopted metrics as shown in <ref>, following <cit.>. Let A_t,i be the evaluation score on task 𝒯_i after training on task 𝒯_t. We compute the average performance on all seen tasks after training on each task 𝒯_t: A_t = (1/t) ∑_i=1^t A_t,i. To measure the degree of forgetting, we also report the average forgetting on all old tasks after each stage t: F_t = 1/(t-1) ∑_i=1^t-1 [max_j<t(A_j,i) - A_t,i]. §.§ Results §.§.§ Results on Benchmark 1 <Ref> demonstrates the continual learning metrics across stages in benchmark 1. The performance on each specific seen dataset after learning the final task is illustrated in <ref> (a). First, we can explicitly observe that a sequentially instruction-tuned model shows a very high forgetting metric at all stages. The final model performs even worse than the initial model on the old tasks. Regularization-based methods can somewhat alleviate forgetting, but the improvement is very limited. Although we bring consistent improvements to such methods by exploiting task similarity, the final model only achieves results comparable to the initial model, implying that the model fails to achieve continual learning. In contrast, both replay-based and model expansion methods yield remarkable results. Among the data replay approaches, simply training with old samples works better than constraining the gradient direction. We store 1% of the original dataset samples for each task and show that this is effective in mitigating catastrophic forgetting. As for the best-performing method, EProj, we attribute its impressive performance to the effectiveness of finetuning the projection layer as well as the high accuracy of task ID retrieval through task similarity. §.§.§ Results on Benchmark 2 Similar experiments are conducted on benchmark 2, and the results are displayed in <ref> and <ref> (b). Note that in this benchmark, we compute the forgetting of both the datasets in task 𝒯_0 and the new tasks learned sequentially. It is clearly shown that the forgetting resulting from sequential finetuning is much milder when the model has been jointly instruction-tuned over multiple tasks. The conclusions are more evident in <ref>, where we directly compare the forgetting of newly learned tasks starting from the two different models in this benchmark. Thus, we conjecture that phase 0 is helpful for the model's continual learning ability in the sense that the model learns how to follow instructions first. In addition, all three types of continual learning methods achieve promising performance in this setting. With separate structures maintained for each task, EProj, as expected, achieved minimal forgetting and the highest average performance. In addition, regularization methods exhibit promising performance without retaining old samples or introducing new model structures.
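For reference, the two metrics defined in the Metrics subsection can be computed from the full score matrix as in the small sketch below (ours; A[t, i] holds the score on task i after training stage t, and the maximum in F_t is taken over the stages at which task i had already been learned).

```python
import numpy as np

def continual_metrics(A):
    """Average performance A_t and forgetting F_t for each stage t (0-indexed)."""
    T = A.shape[0]
    avg = [A[t, :t + 1].mean() for t in range(T)]
    fgt = [np.nan] + [np.mean([A[i:t, i].max() - A[t, i] for i in range(t)])
                      for t in range(1, T)]
    return avg, fgt
```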
The task-similarity-informed mechanism we introduced further improves its performance and narrows down the gap with the other two types of methods. Note that in this setting, some of the regularization methods are not covered as their importance measure cannot be applied separately for each dataset in task 𝒯_0. Looking further at the results of the final model presented in <ref> (b), we notice that different approaches exhibit different strengths. The task-similarity-informed regularization method TIR showed a balanced anti-forgetting effect on datasets from 𝒯_0, while ER is more proficient in later learned tasks. In contrast, the EProj method performs more balanced across all datasets as each task is treated independently. More results about the continual instruction tuning experiments in the opposite order are presented in <ref>. §.§ Ablation Study §.§.§ Effectiveness of regularization methodsEarlier results show that TIR consistently boosts the original regularization-based methods. Here we ablate the adaptive task-similarity weight to further verify its effectiveness. Results of the ablation experiments are listed in <ref>. We replaced the adaptive task similarity weights with a constant value of 0.5 and showed that this was significantly less effective than TIR. This demonstrates that the good performance of TIR results from the knowledge provided by task similarity.§.§.§ Effectiveness of model expansion methods To uncover the reason for EProj's effectiveness, we show the ablation study results on benchmark1 in <ref>. As can be seen from the table, the task IDs are predicted with high accuracy, so Eproj is pretty close to the results of testing with ground-truth task IDs. In addition, as we mentioned in <ref>, task similarity can also be used to determine if introducing a new task-specific module is necessary. <Ref> shows the results of direct reusing structure for tasks with high similarity, i.e. VQA v2 <cit.> and GQA <cit.>, as well as retraining on the new task after reusing the structure. We can observe that reusing structures for similar tasks can achieve comparable results. §.§ Discussion §.§.§ Visualization of forgetting To further understand how forgetting occurs in continual instruction tuning for LMMs, we provide an example in <ref>. In this example, the model was first trained on the image captioning dataset, Flickr30k <cit.>, and then continued training on TextCaps <cit.> and VQA v2 <cit.>, respectively. TextCaps is a dataset for image captioning involving OCR. We found that after training on this dataset, the model tends to describe items in a format of "with the word ... on sth.", even though there's no writing on them. Meanwhile, models trained on VQA v2 are inclined to give a more brief description since the ground-truth output in this VQA dataset is relatively short. Apart from these details, the models still give roughly correct answers. We speculate that this is because without the joint multi-task instruction tuning in stage 0, the model does not obtain instruction-following capabilities, but rather just fits the data distribution of one dataset at a time.§.§.§ Effect of task similarity on anti-forgetting and transfer ability To get a more intuitive look at the effect of task similarity on continual learning, a two-stage continual instruction tuning experiment was performed among all task pairs. We measure the model's relative forgetting of the first task as well as the transfer ability of the model trained on the first task to all other tasks. 
Results are illustrated in <ref>. First, we found that Flickr30k <cit.> facilitates continual learning evenly across all tasks. Since such datasets are also used in image-text alignment pre-training for LMMs, training over such datasets may itself be beneficial for downstream tasks. In addition, it can be observed that tasks of similar forms, i.e. VQA or captioning, usually produce less forgetting and greater transfer ability across one another. This explains why it is necessary and effective to introduce task similarity to continual instruction tuning for LMMs. On the one hand, since similar tasks cause less forgetting, we can adjust the weights of parameter regularization based on task similarity. On the other hand, there is better transfer between similar tasks, so we can reuse the structure for similar tasks instead of constantly expanding the model. § CONCLUSION In this paper, we conduct a comprehensive study on continual instruction tuning for large multimodal models. First, we established the first benchmarks in this setup and found that sequential instruction tuning on these benchmarks still leads to catastrophic forgetting. Second, by integrating or adapting existing continual learning methods, we consistently observed favorable results with replay-based and model expansion methods. However, the efficacy of regularization-based methods requires a model to be first jointly instruction-tuned on multiple tasks. Third, observing that task similarity greatly affects the model's anti-forgetting and transfer ability, we introduce it into the regularization-based and model expansion methods to enhance their performance and utility. We hope that this work will provide some guidance to the community and contribute to the development of new continual instruction tuning methods for LMMs. ieeenat_fullname § APPENDIX§ CONTINUAL INSTRUCTION TUNING BENCHMARK Specific information about the two benchmarks is listed in <ref> and <ref>, respectively. The task names shown in the tables are derived from the taxonomy of datasets in <cit.>. In both benchmarks, we train the model continually in the order of the tasks listed in the table by default. In benchmark 2, we start continual instruction tuning from instructBLIP and its complete training datasets can be found in <cit.>. Note that we only examine the forgetting of those involved academic datasets in task 0 as shown in the upper part of <ref>.§ DETAILS ON TASK SIMILARITY MEASUREWe present here the details of the task similarity measure. There are three components of the task embedding, i.e. e^v_i,e^t_i,e^o_i, each consisting of the average embedding of the entire dataset 𝒟_i. For instance, e^v_i is calculated as follows: e^v_i= 1/N_i∑_j e(v_j^i)where e denotes the task encoder which is the pretrained ViT <cit.> for image and BERT <cit.> for text. Then we compute the task similarity between the current task 𝒯_i and all previous tasks regarding each component:s_i,j^v= γ(e_i^v,e_j^v) s_i^v= [s^v_i,0,⋯,s^v_i,i-1] s_i^v = s_i^v-μ(s_i^v)/σ(s_i^v)where γ is the cosine similarity measure, and μ and σ stand for the mean and the standard deviation, respectively.s^t_i,s^o_i can be obtained in a similar way. Lastly, we fuse the three types of task similarity scores:s_i=s_i^v·s_i^t·s_i^o § DETAILS ON TASK-SIMILARITY-INFORMED REGULARIZATION The similarity scores obtained in <ref> are utilized for the adaptive weighting rule of the regularization term.We demonstrate the specific training procedure in <ref>. 
Note that l indicates the correspondence between the skill parameter and the task ID, and r^max denotes the cumulative maximum parameter importance across tasks, as mentioned in <ref>. Since the regularization-based approach only operates in the training phase, it does not increase the inference cost. As for the training cost, compared to the classic regularization-based methods, our method additionally stores the task embeddings of each known task and the parameter task ID l, which has the same shape as the trainable parameters. The increased training cost is therefore fixed and manageable. § DETAILS ON TASK-SIMILARITY-INFORMED MODEL EXPANSION For model expansion methods, the task similarity score is used to evaluate whether a new task-specific module needs to be introduced for a new task. We achieve this by setting a threshold. The detailed training process is shown in <ref>. Note that the task-specific module m is trained using ℒ_task and the task-specific key k is trained using ℒ_pull, as described in <ref>. During testing, we compute the embeddings e^v, e^t of the test sample and compare them with all the task-specific keys k^v, k^t to retrieve the task ID with the highest similarity score, analogously to <ref>. Traditional model expansion methods grow the model structure with the number of tasks, whereas our approach can effectively control the parameter growth based on task similarity. § ADDITIONAL RESULTS To avoid the effect of transfer order on the conclusions, we show the experimental results in the opposite transfer order on benchmark 2 in <ref>. The results show that the experimental conclusions are consistent under the opposite transfer order. As shown in <ref>, the joint instruction tuning for task 0 improves the continual learning ability of the model and reduces the forgetting of subsequent tasks. | http://arxiv.org/abs/2311.16206v1 | {
"authors": [
"Jinghan He",
"Haiyun Guo",
"Ming Tang",
"Jinqiao Wang"
],
"categories": [
"cs.LG",
"cs.AI",
"cs.CV"
],
"primary_category": "cs.LG",
"published": "20231127150448",
"title": "Continual Instruction Tuning for Large Multimodal Models"
} |
© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, collecting newly collected works for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. Learning with Errors over Group Rings Constructed by Semi-direct Product^† Jiaqi Liu, Fang-Wei Fu Jiaqi Liu and Fang-Wei Fu are with Chern Institute of Mathematics and LPMC, Nankai University, Tianjin 300071, China, Emails: [email protected], [email protected] ^†This research is supported by the National Key Research and Development Program of China (Grant Nos. 2022YFA1005000 and 2018YFA0704703), the National Natural Science Foundation of China (Grant Nos. 12141108, 62371259, 12226336), the Fundamental Research Funds for the Central Universities of China (Nankai University), the Nankai Zhide Foundation. manuscript submitted January 14, 2024January 14, 2024 =========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================empty empty This paper presents a novel auto-tuning subsystem-based fault-tolerant control (SBFC) system designed for robot manipulator systems with n degrees of freedom. It first employs an actuator fault model to account for various faults that may occur, and second, a mathematical saturation function is incorporated to address torque constraints. Subsequently, a novel robust subsystem-based adaptive control method is proposed to direct system states to follow desired trajectories closely in the presence of input constraints, unknown modeling errors, and actuator faults, which are primary considerations of the proposed system. This ensures uniform exponential stability and sustained performance. In addition, optimal values are identified by tuning the SBFC gains and customizing the JAYA algorithm (JA), a high-performance swarm intelligence technique. Theoretical assertions are validated through the presentation of simulation outcomes.daptive control,controller gain optimization, fault-tolerant control, input constraint. § INTRODUCTIONSubsystem-based control, when applied to high-degree-of-freedom (DOF) manipulator systems, represents two distinct facets. On the positive side, utilizing various techniques to decompose a complex and high-order system into subsystems can assist in the development of localized control strategies and in the assessment of stability at the subsystem level <cit.>–<cit.>. Conversely, and negatively, a different form of complexity is introduced, arising from modularity, particularly when encountering state- and time-variant uncertainties, as well as failures, which are commonplace in real-world industrial settings <cit.>–<cit.>. 
Failures in autonomous and intelligent robotic systems can stem from various events, including internal actuator issues, power supply system failures, or wiring problems <cit.>, impairing their performance, rendering them incapable of carrying out their tasks, and necessitating the design of fault-tolerant control mechanisms to ensure their continued safe operation without causing harm <cit.>–<cit.>. As one potential remedy, many studies have focused on passive fault-tolerant control (PFTC) to maintain operational integrity and safety in applications that lack fault diagnosis and active intervention sections <cit.>. In their work <cit.>, Van and Ge designed a passive fault-tolerant approach to mitigate the rapid effects of faults for robot manipulators based on a robust backstepping control integrated with other methods. Likewise, in pursuit of achieving both a fast response and high-precision tracking performance, Anjum and Guo in <cit.>, proposed a PFTC system for robotic manipulators, built upon a fractional-order adaptive backstepping approach.Furthermore, considering the limitations imposed by the magnitude of physical actuators, sensors, and interfacing devices, it becomes imperative to account for control input constraints <cit.>. Deviating from these constraints can result in the emergence of undesirable vibrations, degradations in system performance, and, in some cases, complete system immobilization <cit.>. Nohooji, as outlined in <cit.>, enhanced the robustness of his neural adaptive proportional-integral-derivative (PID) control for manipulators by incorporating considerations of constrained behavior during system operation. Similarly, Yang et al. <cit.> developed an online integral reinforcement learning strategy to address the challenges of robust constrained control in nonlinear continuous-time systems.Furthermore, to overcome a formidable challenge for subsystem-based control designers, managing the extensive array of control gains that demand meticulous tuning is imperative, as they exert a distinct impact on the system's transient and steady performance, even when deploying highly effective and top-performing control methodologies. As a promising solution to this challenge, population-based optimization algorithms have gained popularity in recent times due to their efficiency. However, the improper tuning of algorithm-specific parameters can lead to increased computational effort or the attainment of suboptimal local solutions <cit.>. In contrast to most other optimization algorithms that necessitate the fine-tuning of algorithm-specific parameters for updating particle positions, the JAYA algorithm (JA) uniquely relies on its inherent principles to adapt and optimize a wide range of problems <cit.>. The JA was developed by Rao <cit.>, with the primary objective of addressing both constrained and unconstrained optimization problems. It stems from an innovative swarm-based heuristic introduced in the work by Nanda et al. <cit.>. Further, in <cit.>, Houssein and colleagues conducted an extensive review of renowned optimization algorithms. Their investigation revealed that in the task of function minimization, the JA consistently outperformed these well-established swarm-based algorithms, delivering markedly superior results in terms of both precision and convergence speed. 
Interestingly, in <cit.>, Bansal and collaborators explored the capabilities of three distinct optimization algorithms for fundamental backstepping control of a single-link flexible joint manipulator system. Similar to <cit.>, their investigative findings indicated that JA optimization consistently outperformed the other methods in terms of fitness value.In consideration of the critical significance of robust control in ensuring both the safety and performance of robot manipulators, this paper proposes a novel robust adaptive subsystem-based control to maintain the system's uniformly exponential stability while addressing unknown modeling errors, and actuator faults. It not only incorporates the management of torque constraints but also enhances the highly promising swarm intelligence technique (JA) for the fine-tuning of control parameters. Therefore, the present study offers notable contributions to the field of robotics: (1) it introduces a new SBFC approach designed for manipulators with n DOF in the presence of input constraints, modeling errors, and various actuator faults. (2) To optimize the SBFC gains, a multi-population and single-phase swarm intelligence technique (JA) is amended, (3) and the proposed control strategy ensures the achievement of uniform exponential stability. § MODELING THE SYSTEM AND DEFINING THE PROBLEM §.§ N Degrees-of-Freedom ManipulatorConsidering the typical robot manipulator dynamics, as detailed in <cit.>, we have:I(q) q̈=T-C_m(q, q̇) q̇-f(q̇)-G(q)-T_L·In the given context, q∈ℝ^n represents the generalized joint coordinate vector comprising ‘n’ joints. I(q):ℝ^n →ℝ^n× n characterizes the mass (inertia) properties, while C_m(q, q̇) :ℝ^n ×ℝ^n →ℝ^n × n accounts for the centrifugal and Coriolis forces. G(q):ℝ^n →ℝ^n represents the gravitational forces/torques, and f(q̇):ℝ^n →ℝ^n accounts for the resistance encountered during movement. The vector T=[T_1,…,T_n]^⊤ represents the generalized continuous torque applied at the joints, and T_L∈ℝ^n signifies unaccounted-for external disturbances that affect each joint. Notably, the inertia matrix I(q) possesses the properties of being symmetric, positive, and definite; thus, we can also say: 0<λ_min (I(q)^-1) ≤I(q)^-1≤λ_max(I(q)^-1) ,where · denotes the squared Euclidean norm, λ_max(.) ∈ℝ^+ and λ_min(.)∈ℝ^+ represent the matrix I(q)^-1's maximum and minimum eigenvalues, respectively. §.§ Passive Fault-Tolerant ApproachNext, we integrate the fault correction functionality into the established control algorithm. To do so, we adopt the following fault model for the actuator <cit.>:T = T_c + ϵ(T_s a t - T_c),where T_c∈ℝ^n represents the normal command control during the system's healthy state. We use ϵ = diag(ϵ_1, …, ϵ_n) and T_s a t∈ℝ^n to characterize various types of actuator failures, with t_f signifying the period of fault occurrence. When ϵ_i = 0, the corresponding actuator is functioning normally. If ϵ_i = 1, there is a complete control failure, where T_s a t(i)≠ 0 indicates a stuck failure. Meanwhile, 0 < ϵ_i < 1 represents a performance loss. The behavior model of the fault, when 0 < ϵ_i < 1, is extended, as follows:ϵ_i= 1-e^-γ_i t t ∈ t_f, γ_i > 0,where γ_i represents the rate of evolution of an undisclosed fault. A small γ_i value indicates slow fault development, termed an 'incipient fault.' Conversely, a high γ_i value results in the time course γ_i approximating a step form, classified as an 'abrupt fault' <cit.>. 
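As a small illustration of the fault model above, the sketch below evaluates the effectiveness-loss factor ϵ_i and the resulting actuator torque; measuring time from the fault onset and the particular γ values are assumptions made here for illustration only:

```python
import numpy as np

def effectiveness_loss(t, t_fault, gamma):
    # eps_i = 1 - exp(-gamma_i * (t - t_fault)) once the fault is active, and 0 beforehand.
    dt = np.maximum(t - t_fault, 0.0)
    return np.where(t >= t_fault, 1.0 - np.exp(-gamma * dt), 0.0)

def faulty_torque(T_c, T_stuck, eps):
    # T = T_c + eps * (T_stuck - T_c): eps = 0 is a healthy actuator, 0 < eps < 1 a loss of
    # effectiveness, eps = 1 with T_stuck != 0 a stuck actuator, and eps = 1 with T_stuck = 0 a total failure.
    return T_c + eps * (T_stuck - T_c)

t = np.linspace(0.0, 30.0, 301)
eps_incipient = effectiveness_loss(t, t_fault=10.0, gamma=0.05)  # slowly developing (incipient) fault
eps_abrupt = effectiveness_loss(t, t_fault=10.0, gamma=50.0)     # nearly step-like (abrupt) fault
```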
Consequently, the manipulator dynamics described in (<ref>), incorporating the fault model introduced in (<ref>), can be reformulated as follows:q̈=I^-1(q)[(I_n × n-ϵ)T_c -C_m(q, q̇) q̇-f(q̇)-G(q)-T_L+ ϵT_s a t],where I_n × n∈ℝ^n× n represents the identity matrix. §.§ Torque Signal ConstraintIn addition to addressing actuator faults, our objective is to account for the torque constraints to ensure they do not exceed the specified nominal torque values. Consequently, we define S_i(T_i(t)) for i=1,…,n to operate in compliance with the constraints imposed on the control torque T_i(t), whether in a healthy or faulty state. This is achieved as follows:S_i(T_i(t)) = T̅_i,T(t) ≥T̅_iT(t)T_i≤ T_(t) ≤T̅_i T_i T(t) ≤T_i·In this context, T̅_i and T_i denote the upper and lower nominal torque bounds, respectively, of the permissible T_i(t) values that can be generated. To elaborate further, we define a constraint model as follows:S_i(T_i(t))=s_1i T_i(t) + s_2i ,wheres_1i = 1/| T_i(t) | +1, T_i(t) ≥T_ior T_i(t) ≤T_i1T_i≤ T_i(t) ≤T̅_iands_2i= T_i - T_i(t)/| T_i(t) | + 1, T_i(t) ≥T_i0T_i≤ T_i(t) ≤T_i T_i - T_i(t)/| T_i(t) | + 1T_i(t) ≤T_i·It is evident that Eqs. (<ref>), (<ref>), and (<ref>) imply Eq. (<ref>). We have s_2i≤max(|T_i| + 1, |T_i| + 1) and s_1i≤ 1. In addition, if we generally say s_1=diag(s_11,…,s_1n), and s_2=[s_21,…,s_2n]^⊤, we can transform the dynamic expression in (<ref>) into the following equation:q̈=I^-1(q)[s_1(I_n × n-ϵ)T_c +s_2 -C_m(q, q̇) q̇-f(q̇)-G(q)-T_L+ s_1 ϵT_s a t] ·For convenience, we can consider: λ̅=s_1(I_n × n-ϵ)=diag(λ̅_1,…,λ̅_n),0 < λ̅_i ≤ 1λ̅_min=inf(λ̅_i)s_max=s_2 + s_1 ϵT_s a t·Then, the ultimate expression of the n DOF of a robot manipulator, accounting for the specified fault model in (<ref>) and the control constraints in (<ref>), as follows:q̈= I^-1(q)[ λ̅T_c +s_max -C_m(q, q̇) q̇ -f(q̇)-G(q)-T_L] ·§ DESIGNING THE CONTROLLER AND ANALYZING STABILITY §.§ Fundamental Prerequisites and AssumptionsTo apply the subsystem-based control methodology, the dynamics of a manipulator robot, provided in (<ref>), can be transformed into a triangular feedback form as shown below:{ẋ_1(t) = x_2(t)ẋ_2(t) =A_1 λ̅T_c+g_1(x,t) +Δ_1(x, t)+ T_L y(t) =x_1(t) . ·Let us define two state variables x=[x_1,x_2]^⊤, x_1=q as the position vector and x_2=q̇ as the velocity vector. The control torque input incorporates a non-zero coefficient A_1, represented as I^-1(q), and λ̅ signifies the fault and constraint effect introduced in (<ref>). The terms g_1(x,t) can be considered established functional elements derived from the system's model, given by I^-1(q)(-C_m(q, q̇) q̇-G(q)). Meanwhile, Δ_1(x,t) characterizes uncertain aspects arising from incomplete knowledge of system parameters or modeling inaccuracies, expressed as I^-1(q)(-f(q̇)+s_max). In addition, there exists a time-varying disturbance T_L with uncertain magnitudes and timings. In continuation of the preceding form, we can define the tracking error e=[e_1,e_2]^⊤, as follows:e_1= x_1-x_d e_2= x_2-ẋ_d,where x_d∈ℝ^n and ẋ_d∈ℝ^n are the position and velocity reference trajectories, and e_1:ℝ^n ×ℝ^n →ℝ^n and e_2:ℝ^n ×ℝ^n →ℝ^n are the position and velocity tracking errors, respectively. Now, we can transform the tracking system into a new form:Q_1 = e_1 Q_2 = e_2 - κ_1·We introduce the virtual control κ_1 ∈ℝ^n. To prevent the complexity from growing unmanageable, as discussed in Wang et al. <cit.>, we consider the time derivative of the virtual control to be an element of uncertainty in the system. 
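A minimal sketch of the torque saturation model S_i(T_i(t)) introduced above is given below; it reads the affine coefficients so that S_i returns the command unchanged inside the admissible interval and reduces to the nominal bound outside of it, and the bound values in the example are placeholders:

```python
import numpy as np

def saturate(T, T_low, T_high):
    # S_i(T_i): pass the commanded torque through inside [T_low, T_high], clip it to the nominal bound otherwise.
    return np.clip(T, T_low, T_high)

def saturation_coefficients(T, T_low, T_high):
    # Equivalent affine form S_i = s_1i * T_i + s_2i used in the derivation above.
    if T_low <= T <= T_high:
        return 1.0, 0.0
    s1 = 1.0 / (abs(T) + 1.0)
    bound = T_high if T >= T_high else T_low
    return s1, bound - s1 * T

# Example with placeholder bounds of +/- 80 Nm: both forms give the same saturated torque.
s1, s2 = saturation_coefficients(120.0, -80.0, 80.0)
assert abs(s1 * 120.0 + s2 - saturate(120.0, -80.0, 80.0)) < 1e-12
```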
Definition (1): Assuming the function κ_1 is smooth, we introduce the following function to simplify complexity and mitigate uncertainties, as follows:Δ_1=Δ_1(x,t)- ∂κ_1/∂x_1dx_1/ d t-∂κ_1/∂ϕ̂_1dϕ̂_1/ d t,where ϕ̂_1 is an adaptive function law, which will be defined in (<ref>).Thus, according to (<ref>) and (<ref>), we can obtain a new representation of the system as follows:Q̇_1= Q_2+κ_1 Q̇_2= A_1 λ̅T_c+g_1(x,t) +Δ_1(x, t)+ T_L-ẍ_d ·Assumption (1): There exists Λ_1 ∈ℝ^+ and a continuously smooth and positive function r_1: ℝ^n →ℝ^+ constrained within the uncertainty bound denoted as Δ̅_1. In addition, there are positive parameters Ω_1, D_max, and g̅_max∈ℝ^+, which may also be unknown such that:Δ_1≤Λ_1 r_1 , T_L≤D_maxẍ_d≤Ω_1,g_1(x,t)≤g̅_max,where ẍ_d∈ℝ^n can be the desired acceleration of the manipulator robot.Definition (2): To define an adaptive law, we can assume there are the positive and unknown constants ϕ^*_1 and ϕ^*_2 ∈ℝ^+ to compensate for the adaptive estimation errors, as follows:ϕ^*_1= ζ_1 ^-1 ϕ^*_2= ζ_2^-1[1+2λ̅_min^-1(μ_1Λ_1^2+ν_1D_max^2+ν_2Ω_1^2+ν_3 g̅_max^2)] ·Apart from ζ_1 and ζ_2 ∈ℝ^+, which are used as control design parameters, all remaining parameters in (<ref>) are assumed positive but unknown constants.Definition (3): <cit.> For any initial condition x(0), if α, β, and μ̃∈ℝ^+ exist, the tracking error e between the state x and the reference states x_r=[x_d, ẋ_d]^⊤ converges uniformly and exponentially to a defined region g(τ), such that:e=x(t) - x_r(t)≤β e^-α tx(0) + μ̃g(τ):={e|e≤τ=μ̃}· §.§ Fault-Tolerant Adaptive Subsystem-Based ControlNow, we can define adaptive laws, as follows:ϕ̇̂̇_1 = -k_1σ_1ϕ̂_1+1/2ζ_1k_1Q_1^2 ϕ̇̂̇_2 = -k_2σ_2ϕ̂_2+1/2ζ_2k_2Q_2^2,where k_1, k_2, σ_1, and σ_2 are positive constants. Assumption (2): By selecting an initial condition ϕ̂_i(0) ≥ 0 for the system and allowing the system to evolve according to the governing dynamics that determine ϕ̂_i(t) based on design parameters, we assert that, for all t≥ 0, it is possible to ensure ϕ̂_i(t) > 0.By assuming the adaptive law error is ϕ̃_1,2=ϕ̂_1,2-ϕ^*_1,2 in which ϕ^*_1,2 is as defined in (<ref>), we can obtain:ϕ̇̃̇_1=-k_1σ_1ϕ̃_1+1/2ζ_1k_1Q_1^2-k_1σ_1ϕ^*_1ϕ̇̃̇_2=-k_2σ_2ϕ̃_2+1/2ζ_2k_2Q_2^2-k_2σ_2ϕ^*_2· Then, the virtual control (κ_1) can be proposed as follows:κ_1 = - 1/2 (δ_1+ζ_1ϕ̂_1)Q_1,where δ_1 and ζ_1 are positive constants. Consequently, the actual control T_c is as follows:T_c=-1/2(δ_2+ζ_2ϕ̂_2)λ_min^-1Q_2,where δ_2 and ζ_2 are positive constants, and we know λ_min^-1 from (<ref>). §.§ JAYA Algorithm-Based Parameter TuningGiven the eight gains in the SBFC, denoted as k_1, k_2, δ_1, δ_2, ζ_1, ζ_2, σ_1, and σ_2, it is necessary to tune each within an iterative function based on the multipopulational JA. Let us consider each gain to be associated with c ∈ℝ^+. In this paper, the JA commences by initializing two positive collections of gains of control within two sample times (0 ≤ t < 0.001 and 0.001 ≤ t < 0.002), known as the initial population, through a random process. For each individual within this population, the cost function is calculated, based on the standard deviation of the position and velocity tracking errors e̅=√(e_1^2+e_2^2) representing the target objective function to be minimized. The top-performing candidate (c_best) is determined as the one with the most favorable value (referred to as e̅_best), while the other (the poorest performer) is identified as the candidate (c_worst) with the least favorable value (referred to as e̅_worst). 
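A schematic version of this tuning loop is sketched below. Here `evaluate_cost` is a placeholder assumed to run the closed-loop simulation for one candidate gain vector [k_1, k_2, δ_1, δ_2, ζ_1, ζ_2, σ_1, σ_2] and return the standard deviation of the combined tracking error; the population size, iteration count and greedy acceptance rule are implementation choices rather than details taken from the paper, and the candidate update is the JAYA iterative function given in the next paragraph:

```python
import numpy as np

rng = np.random.default_rng(0)

def jaya_tune(evaluate_cost, initial_population, n_iter=100, lower=1e-6):
    pop = np.array(initial_population, dtype=float)    # each row: one positive gain vector
    cost = np.array([evaluate_cost(c) for c in pop])   # fitness = std of the tracking errors
    for _ in range(n_iter):
        best, worst = pop[np.argmin(cost)], pop[np.argmax(cost)]
        for i, c in enumerate(pop):
            r1, r2 = rng.random(c.shape), rng.random(c.shape)
            # Move toward the best candidate and away from the worst one (JAYA update).
            c_new = np.maximum(c + r1 * (best - c) - r2 * (worst - c), lower)  # keep all gains positive
            cost_new = evaluate_cost(c_new)
            if cost_new < cost[i]:                     # keep the better of the two candidates
                pop[i], cost[i] = c_new, cost_new
    return pop[np.argmin(cost)]
```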
Next, these values are iteratively adjusted to find the new candidate (c_new) in the following iterative function:c_new = c + r_1(c_best - c) - r_2(c_worst - c), where c_new∈ℝ is the updated random c. Further, r_1 ∈ℝ and r_2 ∈ℝ are the two random numbers for each variable during the iteration in the range [0,1], and c_best and c_worst are replaced with c_new if it gives a best (e̅_best) or worst (e̅_worst) function value, respectively. All accepted function values at the end of the iteration are maintained, and these values become the input to the next iteration.Remark (1): The expression r_1(c_best-c) represents the inclination of the solution to approach the best solution, while the expression -r_2(c_worst-c) signifies the propensity of the solution to eschew the worst solution.Remark (2): According to the details provided in this paper, all gain parameters must be both positive and finite. To ensure adherence to this requirement, we must first choose initial populations for these parameters to be positive. Then, by following this approach and referring to (<ref>), while bearing in mind that c > 0, we can suggest random c be larger than r_2 c_worst-r_1 c_best/1-r_1+r_2, as well. In this way, we can guarantee that all newly generated values for c_new will remain positive. Furthermore, by incorporating the principles outlined in Eq. (<ref>), we can impose constraints on the JA to prevent it from producing gains that exceed a predetermined threshold, as necessary.The block diagram shown in Fig. <ref> illustrates the interaction among the SBFC system sections. As depicted in the figure, the system computes variables related to subsystem-based transformation upon receiving reference trajectories. In addition, the adaptation mechanisms estimate upper bounds for disturbances, uncertainties, and actuator failures. Then, the calculated values from the subsystem-based transformation component, along with the parameters estimated through online adaptation update laws, are received by the proposed controller. Subsequently, the control command, denoted as T, is generated. The input constraint section S(T) verifies that the torque value does not exceed the defined constraints. It is worth noting that the eight gains of the adaptation law and controller are automatically adjusted using the JA block. §.§ Stability AnalysisTheorem: Consider the adaptive algorithm presented in Eq. (<ref>), the actuator fault model in Eqs. (<ref>) and (<ref>), the input constraint specified in (<ref>), and the control input as given in (<ref>). 
It is assumed that under these conditions, the states x_1 and x_2 can attain the reference trajectories x_d and ẋ_d through uniformly exponential convergence, as defined in Definition (3).Proof: A Lyapunov function is suggested as follows:V_1 =1/2λ_min [ Q_1^⊤Q_1+k^-1_1ϕ̃_1^2 ] · After differentiating V_1 and inserting (<ref>), we obtain:V̇_̇1̇=λ_minQ_1^⊤ [Q_2+κ_1]+k_1^-1λ_minϕ̃_1ϕ̇̃̇_1· By using the Cauchy–Schwarz and the squared Euclidean norm concepts:V̇_̇1̇≤ 1/2λ_minQ_1^2 + 1/2λ_minQ_2^2+ λ_minQ_1^⊤κ_1+ k_1^-1λ_minϕ̃_1ϕ̇̃̇_1· Then, by considering the definition of ϕ^*_1 in (<ref>), we achieve:V̇_̇1̇≤ 1/2λ_minQ_2^2 + 1/2λ_minζ_1ϕ_1^* Q_1^2 + λ_minQ_1^⊤κ_1+ k_1^-1λ_minϕ̃_1ϕ̇̃̇_1· Now, by inserting ϕ̇̃̇_1 and κ_1 from (<ref>) and (<ref>), we obtain:V̇_̇1̇≤ 1/2λ_minQ_2^2 + 1/2λ_minζ_1ϕ_1^* Q_1^2- 1/2λ_minδ_1Q_1^2-1/2λ_minζ_1ϕ̂_1Q_1^2-λ_minσ_1ϕ̃_1^2+ 1/2λ_minζ_1Q_1^2ϕ̃_1- λ_minσ_1ϕ^*_1ϕ̃_1· Because ϕ̃_1=ϕ̂_1-ϕ_1^*:V̇_̇1̇≤ 1/2λ_minQ_2^2 -1/2λ_minδ_1Q_1^2 -λ_minσ_1ϕ̃_1^2-λ_minσ_1ϕ^*_1ϕ̃_1· After dividing λ_minσ_1ϕ̃_1^2 into 1/2λ_minσ_1ϕ̃_1^2+1/2λ_minσ_1ϕ̃_1^2, and considering (<ref>), we obtain:V̇_̇1̇≤-Ψ_1 V_1 + 1/2λ_minQ_2^2 -1/2λ_minσ_1ϕ̃_1^2-λ_minσ_1ϕ^*_1ϕ̃_1, whereΨ_1 = min [δ_1,k_1σ_1] ·As -1/2λ̅_minσ_1ϕ̂_1^2≤ 0, we eliminate it and reach:V̇_̇1̇≤-Ψ_1 V_1 + 1/2λ̅_minQ_2^2 +1/2λ̅_minσ_1ϕ_1^*^2 · Likewise, the Lyapunov function V_2 is suggested as follows:V_2 = V_1+1/2[Q_2^⊤Q_2+k_2^-1λ̅_minϕ̃_2^2] · By differentiating V_2 and inserting (<ref>), we obtain:V̇_̇2̇≤-Ψ_1 V_1 + 1/2λ̅_minQ_2^2 +1/2λ̅_minσ_1ϕ_1^*^2+Q_2^⊤ [A_1λ̅T_c+g_1(x,t)+Δ_1(x, t)+ T_L-ẍ_d]+k_2^-1λ̅_minϕ̃_2ϕ̇̃̇_2 ·Then, by inserting T_c from (<ref>):V̇_̇2̇≤-Ψ_1 V_1 + 1/2λ̅_minQ_2^2 +1/2λ̅_minσ_1ϕ_1^*^2- 1/2λ̅_minδ_2Q_2^2-1/2λ̅_minζ_2ϕ̂_2Q_2^2+Q_2^⊤g_1 +Q_2^⊤Δ_1 -Q_2^⊤ẍ_d+Q_2^⊤T_L+k_2^-1λ̅_minϕ̃_2ϕ̇̃̇_2 ·Now, by assuming that μ_1, ν_1, ν_2, and ν_3 are positive constants, according to Young’s inequality, we can argue:Q_2^⊤Δ_1 ≤μ_1 Λ_1^2 Q_2^2+ 1/4μ_1^-1 r_1^2 Q_2 ^⊤T_L ≤ν_1 D_max^2 Q_2^2+ 1/4ν_1^-1- Q_2 ^⊤ẍ_d≤ν_2 Ω_1^2 Q_2^2 + 1/4ν_2^-1Q_2 ^⊤g_1(x,t)≤ν_3 g̅_max^2 Q_2^2 + 1/4ν_3^-1·Because we have ϕ^*_2 from (<ref>), we can obtain:V̇_̇2̇≤-Ψ_1 V_1 +1/2λ̅_minσ_1ϕ_1^*^2+ 1/4μ_1^-1 r_1^2+ 1/2λ_minζ_2ϕ_2^* Q_2^2- 1/2λ̅_minδ_2Q_2^2 -1/2λ̅_minζ_2ϕ̂_2Q_2^2+ 1/4ν_1^-1 + 1/4ν_2^-1+1/4ν_3^-1+k_2^-1λ̅_minϕ̃_2ϕ̇̃̇_2 ·In addition, by inserting ϕ̇̃̇_2 from (<ref>) into V̇_̇2̇≤-Ψ_1 V_1 +1/2∑_k=1^2λ̅_minσ_kϕ_k^*^2+ 1/4μ_1^-1 r_1^2- 1/2λ̅_minδ_2Q_2^2+ 1/4∑_k=1^3 ν_k^-1-λ_minσ_1ϕ̃_1^2-λ_minσ_1ϕ^*_1ϕ̃_1·Like (<ref>), we can obtain:V̇_̇2̇≤-Ψ_2 V_2 + 1/4μ_1^-1 r_1^2+ 1/4∑_k=1^3 ν_k^-1+1/2∑_k=1^2 λ̅_minσ_kϕ_k^*^2,where:Ψ_2 = min [Ψ_1, λ̅_minδ_2,k_2σ_2] ·Thus, considering V=V_2, we can argue:V= 1/2λ̅_minQ^⊤ΥQ + 1/2λ̅_minϕ̃^⊤K^-1ϕ̃,where:Q = [ Q_1; Q_2; ], Υ=[ 1 0; 0 λ̅^-1_min; ], ϕ̃ = [ ϕ̃_1; ϕ̃_2;], K^-1 = [ k_1^-10;0 k_2^-1 ;]·Thus, according to (<ref>), we obtain:V̇≤-Ψ_2 V+ 1/4μ_1^-1 r_1^2 + μ̃,whereμ̃=1/4∑_k=1^3 ν_k^-1+1/2∑_k=1^2 λ̅_minσ_kϕ_k^*^2 ·By recalling that:V̇ =Ψ V+μ rV=e^Ψ t V(0)+∫_0^t e^Ψ(t-τ)μ r(τ) d τ·Thus, we can solve (<ref>), as follows:V ≤ V(t_0) e^-{Ψ_2(t-t_0)} +1/4μ_1^-1∫_t_0^t e^{-Ψ_2(t-T)} r_1^2dT +μ̃∫_t_0^t e^{-Ψ_2(t-T)} dT ·Considering (<ref>), we can interpret (<ref>) as follows:Q^2 ≤2V(t_0) e^-{Ψ_2(t-t_0)}+ 2μ̃Ψ_2^-1+1/2μ_1^-1∫_t_0^t e^{-Ψ_2(t-T)} r_1^2dT·Because μ_1 is a positive constant, we can express:1/21/μ_1 Ψ_2<1·Therefore, we can posit a continuous function:Z(ι)=1/2μ_1^-1/Ψ_2-ι>0 ι∈ [0,Ψ_2)·Observe that the initial quantity Z(0) in equation (<ref>) is equal to equation (<ref>). 
Hence, it becomes evident that there exists a positive value ι̅∈ι:0 ≤Z̅=Z(ι̅) <1·By multiplying e^ι̅(t-t_0) by (<ref>), we reach:Q^2e^ι̅(t-t_0)≤2 V(t_0) e^-(Ψ_2-ι̅)(t-t_0)+2 μ̃Ψ_2^-1 e^ι̅(t-t_0)+1/2μ_1^-1∫_t_0^t e^-Ψ_2(t-T)+ι̅(t-t_0) r_1^2dT·Because 0 ≤ι̅ <Ψ_2, we can eliminate the decreasing element e^-(Ψ_2-ι̅)(t-t_0) from (<ref>):Q^2e^ι̅(t-t_0)≤2V(t_0)+2μ̃Ψ_2^-1 e^ι̅(t-t_0) + 1/2μ_1^-1∫_t_0^t e^-(Ψ_2-ι̅)(t-T)r_1^2 e^ι̅(t-t_0) dT·We represent the non-decreasing and continuous functions E_0 and E_1, as follows:E_0 =sup _e ∈(t-t_0) [Q^2 e^ι̅(e-t_0))] E_1=sup _e ∈(t-t_0) [(r_1^2)e^ι̅(e-t_0)]·Then, by considering Eqs. (<ref>) and (<ref>), we achieve:Q^2e^ι̅(t-t_0)≤2 V(t_0)+1/2μ_1^-1/Ψ_2-ι̅ E_1+2 μ̃Ψ_2^-1 e^ι̅(t-t_0)·Because E_1 is non-decreasing, the left-hand side of Eq. (<ref>) will also not decrease. Hence, with respect to the definition of E_0 in Eq. (<ref>), we can conclude:E_0 ≤2V(t_0)+1/2μ_1^-1/Ψ_2-ι̅ E_1+2 μ̃Ψ_2^-1 e^ι̅(t-t_0)·By defining:E=max _i (E_i) k=0,1 ,we can obtain:E_0 ≤ 2V(t_0)+Z̅ E+2 μ̃Ψ_2^-1 e^ι̅(t-t_0),such that 0<E_0≤ E and both E_0 and E are not decreasing, enabling *Z, as follows:*Z>Z̅, 0<*Z<1 Z̅ E ≤*Z E_0·Hence, Eq. (<ref>) is meaningful, as μ_1 can be considered an option to reduce Z̅ to a sufficiently small value. Incorporating Eq. (<ref>) into Eq. (<ref>), we arrive at:E_0 ≤ 2 V(t_0)+*Z E_0(t)+2 μ̅Ψ_2^-1 e^ι̅(t-t_0)·Afterward, we obtain:E_0 ≤2 V(t_0)+2 μ̃Ψ_2^-1 e^ι̅(t-t_0)/1-*Z·Concerning the definition (<ref>), we obtain:Q^2 ≤2 V(t_0) e^-ι̅(t-t_0)+2 μ̃Ψ_2^-1/1-*Z·It is significant that:sup _t ∈[t_0, ∞](2 V(t_0) e^-ι̅(t-t_0)/1-*Z)≤2 V(t_0)/1-*Z·Consequently, by Definition (3), it is evident from Eq. (<ref>) that ||Q|| is uniformly and exponentially stable towards a specific ball 𝒢(τ̅_0) when employing the SBFC approach, such that:𝒢(τ̅_0):={Q|Q≤τ̅_0=√(2 μ̃Ψ_2^-1/1-*Z)}· § NUMERICAL VALIDITYThe deployment approaches for the SBFC are delineated within the SBFC Algorithm, offering a detailed overview. To evaluate the efficacy of the proposed methodology, we applied it to a two DOF robot featured in the work by Humaloja et al. <cit.>, which was based on <cit.>.The modeling of the unknown friction and external disturbance term is represented as follows:Δ_1+T_L=[[ 0.6 sin(0.8q̇_1) +3 sin(2t); -1.6 sin(1.8 q_2)+1.3 sin(0.7q̇_2) -0.2 ]]·The desired trajectory of the system is selected as follows:x_d=[sin (t / 4 π)-1, sin (t / 4 π+π/3)]^T·In this case study, We examined a fault model represented in Fig. <ref> in which both actuators were initially in a healthy state for up to 10 seconds. The effectiveness of JA and tracking control during this healthy task is illustrated in Fig. <ref>. This depiction indicates that control parameters were suitably optimized, leading to the cost function in Fig. <ref>(a) reaching a minimum effort at 0.25 sec. Fig. <ref>(b) also illustrates the potential for position tracking using SBFC in the healthy actuator mode, which reached near-zero values before 0.13 seconds, following parameter tuning. The best SBFC gains obtained for the mentioned manipulator and the specified task are as follows: δ_1=62, δ_2=75, ζ_1=0.2, ζ_2=3.5, σ_1=5.6, σ_2=1.9, k_1=1.4, and k_2=0.96. Likewise, the effectiveness of the control constraints outlined in (<ref>) is demonstrated in Fig. <ref>, where the constraints have ensured that the amplitude of the first actuator torque remains below 80 Nm. Although the fault model is employed, Fig. <ref> demonstrates the system's response to various fault types mentioned in Fig. 
<ref>, showcasing its ability to effectively reduce tracking errors to zero even when both actuators are faulty. Table I compares the performance of SBFC with two similar works <cit.> and <cit.> under identical fault and uncertainty conditions, demonstrating SBFC's superior performance in terms of tracking control accuracy and speed.§ CONCLUSIONSThis study introduced a novel subsystem-based fault-tolerant control system tailored to address uniformly exponential stability among robot manipulator systems featuring n DOF, effectively managing unknown modeling errors, input constraints, and actuator faults, and optimizing controller gains through amending the JA, a highly effective swarm intelligence technique. Looking forward, this generic control methodology not only holds promise for further refinement but also opens new avenues for its application across a spectrum of robotic dynamics, suggesting broader implications for the field in future research endeavors. IEEEtran[ < g r a p h i c s > ] Mehdi Heydari Shahna earned a B.Sc. degree in Electrical Engineering from Razi University, Kermanshah, Iran, in 2015, and an M.Sc. degree in Control Engineering at Shahid Beheshti University, Tehran, Iran, in 2018. Since December 2022, he has been pursuing his doctoral degree in Automation Technology and Mechanical Engineering at Tampere University. His research interests encompass robust control, nonlinear model-based control of robotic manipulators and electrified actuators, fault-tolerant algorithms, and Stability Theory.[ < g r a p h i c s > ] Jouni Mattila received M.Sc. and Ph.D. degrees in Automation Engineering from Tampere University of Technology, Tampere, Finland, in 1995 and 2000, respectively. He is currently a Professor of Machine Automation with the Unit of Automation Technology and Mechanical Engineering at Tampere University. His research interests include machine automation, nonlinear model-based control of robotic manipulators, and energy-efficient control of heavy-duty mobile manipulators. | http://arxiv.org/abs/2311.15852v1 | {
"authors": [
"Mehdi Heydari Shahna",
"Jouni Mattila"
],
"categories": [
"cs.RO",
"cs.SY",
"eess.SY"
],
"primary_category": "cs.RO",
"published": "20231127141950",
"title": "Exponential Auto-Tuning Fault-Tolerant Control of N Degrees-of-Freedom Manipulators Subject to Torque Constraints"
} |
Department of Astronomy, School of Physics and Technology, Wuhan University, Wuhan 430072, China Department of Astronomy, Beijing Normal University, Beijing 100875, China Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China [email protected] Department of Astronomy, School of Physics and Technology, Wuhan University, Wuhan 430072, China Department of Astronomy, Beijing Normal University, Beijing 100875, China [email protected] Cosmological observations, e.g., the cosmic microwave background, have precisely measured the spectrum of primordial curvature perturbations on larger scales, but smaller scales are still poorly constrained. Since primordial black holes (PBHs) could form in the very early Universe through the gravitational collapse of primordial density perturbations, constraints on PBHs could encode much information on primordial fluctuations. In this work, we first derive a simple formula for the lensing effect that applies PBH constraints obtained for a monochromatic mass distribution to an extended mass distribution. Then, we use the latest fast radio burst observations together with this relationship to constrain two kinds of primordial curvature perturbation models on small scales. We find that the null search result for lensed fast radio bursts in currently available observations implies that the amplitude of the primordial curvature perturbation should be less than 8× 10^-2 in the scale region of 10^5-10^6 Mpc^-1. This corresponds to an interesting mass range related to the binary black holes detected by LIGO-Virgo-KAGRA and by the future Einstein Telescope or Cosmic Explorer.§ INTRODUCTION The power spectrum of primordial curvature perturbations on large scales has been precisely constrained by a variety of observations. For instance, cosmic microwave background (CMB) and large scale structure (LSS) observations suggest that the amplitude of the primordial curvature perturbation is of the order of 𝒪(10^-9) at 𝒪(10^-4-10^0) Mpc^-1 scales <cit.>. However, most currently available cosmological observations are only able to constrain primordial fluctuations at larger scales. Therefore, new probes are in great demand to constrain primordial perturbations at smaller scales. Moreover, primordial black holes (PBHs) have been a field of great astrophysical interest because they are often considered to make up a part of dark matter. PBHs could form in the early Universe through the gravitational collapse of primordial density perturbations <cit.>, and their formation is closely related to the primordial power spectrum <cit.>. Therefore, many inflation models, e.g., inflation models with modified gravity <cit.>, multi-field inflation models <cit.>, and special single-field inflation models <cit.>, have been proposed to enhance the amplitude of the power spectrum of primordial curvature perturbations on small scales, which corresponds to PBHs in various mass windows.Theoretically, the mass of PBHs can range from the Planck mass (10^-5 g) to the level of the supermassive black holes at the centers of galaxies. So far, numerous methods, including both direct observational constraints and indirect ones, have been proposed to constrain the abundance of PBHs in various mass windows <cit.>. The gravitational lensing effect is one of the direct observational probes for constraining the abundance of PBHs over a wide mass range, from 𝒪(10^-10 M_⊙) to 𝒪(10^10 M_⊙). In general, the lensing methods can be divided into four types <cit.>: 1.
Searching the luminosity variation of persistent sources <cit.>, for example, observing a large number of stars and looking for amplifications in their brightness caused by lensing effect of intervening massive objects could yield constraints on the abundance of deflectors <cit.>; 2. Searching multiple peaks structures of transient sources <cit.>, such as searching echoes due to the milli-lensing effect of fast radio bursts (FRBs) were proposed to put constraints on the PBH abundance <cit.>; 3. Searching multiple images produced by milli-lensing of possible persistent sources like the compact radio sources (CRSs) can be used to constrain the supermassive PBH <cit.>; 4. Searching the waveform distortion caused by the lensing effect of distant sources <cit.>, for example,distorting the GW waveform as the fringes were proposed to constrain PBH with stellar mass <cit.>. In this work, based on a relationship for applying constraints with the monochromatic mass distribution (MMD) to a specific extended mass distribution (EMD), we proposed to use the lensing effect of fast radio bursts to study the primordial curvature perturbations on small scales which have not been achieved by other observations.This paper is organized as follows: Firstly, we introduce formation of PBHs from the primordial curvature perturbation model in Section <ref>,. In Section <ref>, we carefully analyzed constraints on PBHs from the lensing effect. In Section <ref>, we present the results of constraints on power spectrum. Finally, we present discussion in Section <ref>. Throughout, we use the concordance ΛCDM cosmology with the best-fit parameters from the recent Planck observations <cit.>.§ FORMATION OF PRIMORDIAL BLACK HOLESThe power spectrum of primordial curvature perturbations determines the probability of PBH production, the mass function of PBHs, and the PBH abundance <cit.>. The phenomena of critical collapse could describe the formation of PBHs with mass m_ PBH in the early Universe, depending on the horizon mass m_ H and the amplitude of density fluctuations δ <cit.>:m_ PBH=Km_ H(δ-δ_ th)^γ,where K=3.3, γ=0.36, and δ_ th=0.41 <cit.>, and the horizon mass m_ H is related to the horizon scale k <cit.>m_ H≈17(g_*/10.75)^-1/6(k/10^6 Mpc^-1)^-2 M_⊙where g_* is the number of relativistic degrees of freedom. The coarse-grained density perturbation is given byσ^2(k)=∫ dln q 16/81(q/k)^4W^2(q/k) T^2(q,k)×P_ζ(q, p_ mf),where W(q/k) is the Gaussian window function, P_ζ(q, p_ mf) is the power spectrum of primordial curvature perturbation, and T(q,k) is the transfer function <cit.>T(q,k)=3(sin x-xcos x)/x^3,where x=q/√(3)k. To convert σ^2(k) to the mass function of PBHs, we calculate the probability of PBH production by considering the Press-Schechter formalism <cit.>β_m_ H=∫_δ_ th^+∞m_ PBH/m_ HP_m_ H(m_ H)d δ(m_ H)= ∫_-∞^+∞m_ PBH/m_ HP_m_ H(m_ H)dδ(m_ PBH)/d ln m_ PBH d ln m_ PBH= ∫_-∞^+∞β̅_m_ PBH d ln m_ PBH,where P_m_ H(m_ H) denotes a Gaussian probability distribution of primordial density perturbations at the given horizon scale,P_m_ H(δ(m_ PBH))=1/√(2πσ^2(k(m_ H)))× exp(-δ^2(m_ PBH)/2σ^2(k(m_ H))).The PBH energy fraction is calculated from Eq. (<ref>) asΩ_ PBH=∫_-∞^+∞ dln m_ H(M_ eq/m_ H)^1/2β_m_ H,where M_ eq=2.8×10^17 M_⊙ is the horizon mass at the time of matter-radiation equality <cit.>. 
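For a smooth input spectrum P_ζ(q), the quantities above can be evaluated numerically as in the sketch below; the normalization chosen for the Gaussian window and the integration limits are assumptions made here for illustration:

```python
import numpy as np
from scipy.integrate import quad

K, GAMMA, DELTA_TH = 3.3, 0.36, 0.41          # critical-collapse parameters quoted above

def horizon_mass(k, g_star=10.75):
    # Horizon mass in solar masses for a comoving scale k in Mpc^-1.
    return 17.0 * (g_star / 10.75) ** (-1.0 / 6.0) * (k / 1e6) ** (-2.0)

def pbh_mass(delta, k):
    # Critical collapse: m_PBH = K * m_H * (delta - delta_th)^gamma for delta above the threshold.
    return K * horizon_mass(k) * np.maximum(delta - DELTA_TH, 0.0) ** GAMMA

def sigma2(k, P_zeta):
    # Coarse-grained variance with a Gaussian window W and the transfer function T(q, k).
    def integrand(lnq):
        q = np.exp(lnq)
        x = q / (np.sqrt(3.0) * k)
        T = 3.0 * (np.sin(x) - x * np.cos(x)) / x**3
        W = np.exp(-0.5 * (q / k) ** 2)
        return (16.0 / 81.0) * (q / k) ** 4 * W**2 * T**2 * P_zeta(q)
    return quad(integrand, np.log(k) - 10.0, np.log(k) + 10.0, limit=200)[0]
```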
In addition, the mass function of PBHs ψ(m_ PBH,p_ mf) can be obtained by differentiating Ω_ PBH with the PBH massψ(m_ PBH,p_ mf)=1/Ω_ PBHd Ω_ PBH/d m_ PBH= 1/m_ PBHΩ_ PBH× ∫_-∞^+∞dln m_ H(M_ eq/m_ H)^1/2β̅_m_ PBH,where p_ mf represents the parameters from the power spectrum of primordial curvature perturbation P_ζ(q, p_ mf). The corresponding total PBH abundance is defined as f_ PBH,th≡Ω_ PBH/Ω_ DM,where Ω_ DM is dark matter density parameter at present universe <cit.>. In order to distinguish f_ PBH from the observational constraints, we label the f_ PBH obtained from the power spectrum of primordial curvature perturbation is written as f_ PBH,th in Eq. (<ref>).§ CONSTRAINTS ON F_ PBH FROM THE LENSING EFFECTFor a lensing system, Einstein radius is one of the characteristic parameters and, taking the intervening lens with mass m as a point mass, it is given byθ_ E=2√(mD_ LS/D_ LD_ S),where D_ S, D_ L and D_ LS represent the angular diameter distance to the source, to the lens, and between the source and the lens, respectively. The lensing cross section due to a PBH lens is given by an annulus between the maximum and minimum impact parameters (y≡β/θ_ E, β stands for the source angular position),σ(m, z_ L, z_ S)=πθ_ E^2D_ L^2(y^2_max-y^2_min)= 4π mD_ LD_ LS/D_ S(y^2_max-y^2_min).It is worth emphasizing that the maximum impact parameter y_max and minimum impact parameter y_min generally depends on the observing instruments or the nature of the lensing source. For example, the maximum impact parameter y_ max and minimum impact parameter y_ min of FRBs micro-lensing system are determined by the maximum flux ratio of two lensed peaks and the width of signals, respectively.For a single source, the optical depth for lensing due to a single PBH isτ(m,f_ PBH,obs,z_ S)=∫_0^z_ Sdχ(z_ L)(1+z_ L)^2×n_ L(f_ PBH,obs, m)σ(m,z_ L,z_ S)= 3/2f_ PBH,obsΩ_ DM× ∫_0^z_ Sdz_ LH_0^2/H(z_ L)D_ LD_ LS/D_ S(1+z_ L)^2(y^2_max-y^2_min),where H(z_ L) is the Hubble expansion rate at z_ L, H_0 is the Hubble constant, and n_ L(f_ PBH,obs, m) is the comoving number density of the PBHs with the monochromatic mass distribution (MMD)n_ L(f_ PBH,obs,m)=f_ PBH,obsΩ_ DMρ_ c/m,where ρ_ c is critical density of universe. Correspondingly, the f_ PBH obtained from the lensing effect is written as f_ PBH,obs in Eq. (<ref>). According to the Poisson law, the probability for the null detection of lensed event is P_i=exp(-τ_i(m,f_ PBH,obs,z_ S)).If we have detected a large number of astrophysical events N_ tot, and none of them has been lensed, the total probability of unlensed event would be given byP_ tot=exp(-∑^N_ tot_i=1τ_i).If none lensed detection is consistent with the hypothesis that the universe is filled with the PBHs to a fraction f_ PBH,obs at 100Π% confidence level, the following condition must be validP_ tot(f_ PBH,obs)≥1-Π.For a null search of lensed signals, then the constraint on the upper limit of f_ PBH,obs can be estimated from Eq. (<ref>). For the optical depth τ_i≪1, we can obtain the expected number of lensed eventsN_ lensed(m, f_ PBH,obs)=∑^N_ tot_i=1(1-exp(-τ_i)) ≈∑^N_ tot_i=1τ_i. It should be pointed out that the above formalism is only valid for the simple but widely used MMD, ψ(m_ PBH,m)=δ(m_ PBH-m),where δ(m_ PBH-m) represents the δ-function at the mass m. In fact, there is a specific EMD which corresponds to different the power spectrum of primordial curvature perturbation from different inflation models. 
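A schematic implementation of this optical depth and of the corresponding upper limit from a null search is shown below. It treats y_max and y_min as fixed numbers (in the FRB application they depend on the flux-ratio threshold and on the burst width, as specified later for the FRB sample) and uses the Planck cosmology shipped with astropy as a stand-in for the adopted cosmological parameters:

```python
import numpy as np
from scipy.integrate import quad
from astropy.cosmology import Planck18 as cosmo
from astropy import units as u
from astropy.constants import c

def optical_depth(z_s, y_max=1.0, y_min=0.0, f_pbh=1.0):
    # tau for a single source; for a monochromatic population with fixed y_max and y_min
    # the lens mass cancels between the cross section and the comoving number density.
    def integrand(z_l):
        D_l = cosmo.angular_diameter_distance(z_l)
        D_s = cosmo.angular_diameter_distance(z_s)
        D_ls = cosmo.angular_diameter_distance_z1z2(z_l, z_s)
        val = (cosmo.H0**2 / (c * cosmo.H(z_l))) * D_l * D_ls / D_s
        return (1.0 + z_l) ** 2 * val.to(u.dimensionless_unscaled).value
    integral, _ = quad(integrand, 0.0, z_s)
    return 1.5 * f_pbh * cosmo.Odm0 * (y_max**2 - y_min**2) * integral

def f_pbh_upper_limit(z_sources, confidence=0.68, **kwargs):
    # Null detection of lensed events: f_max = -ln(1 - Pi) / sum_i tau_i(f_PBH = 1).
    tau_sum = sum(optical_depth(z, **kwargs) for z in z_sources)
    return -np.log(1.0 - confidence) / tau_sum
```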
Therefore, it is important and necessary to derive constraints on PBH with some theoretically motivated EMDs, which are closely related to realistic formation mechanisms of PBHs. For the above-mentioned EMDs, the lensing optical depth for a given event can be written as,τ(f_ PBH,obs,z_ S,p_ mf)=∫ dm_ PBH∫_0^z_ Sdχ(z_ L)(1+z_ L)^2 × d n_ L(f_ PBH,obs,m_ PBH, p_ mf)/dm_ PBHσ(m_ PBH,z_ L,z_ S),where d n_ L(f_ PBH,obs,m_ PBH, p_ mf)/dm_ PBH is the comoving number density of the PBHs at EMD ψ(m_ PBH,p_ mf)d n_ L(f_ PBH,obs,m_ PBH, p_ mf)/dm_ PBH=ψ(m_ PBH,p_ mf)× f_ PBH,obsΩ_ DMρ_ c/m_ PBH. Then we can derive a universal formula for connecting the constraints on the f_ PBH,obs for applying constraints with the MMD to EMD. Firstly, we must respectively note the f_ PBH,obs under MMD and EMD as f_ PBH,obs^ MMDand f_ PBH,obs^ EMD. In addition, we can obtain the optical depth of single lensing source in MMD and EMD framework and note them as{τ^ MMD(m,f_ PBH,obs^ MMD)=f_ PBH,obs^ MMDτ^ MMD(m,f_ PBH,obs^ MMD=1), τ^ EMD( p_ mf,f_ PBH,obs^ EMD)=f_ PBH,obs^ EMDτ^ EMD( p_ mf,f_ PBH,obs^ EMD=1). .From the relationship of the optical depth of MMD and EMD(see Eq. (<ref>) and Eq. (<ref>)), we can obtain thatτ^ EMD( p_ mf,f_ PBH,obs^ EMD)=∫_0^+∞ dmψ(m_ PBH, p_ mf)× τ^ MMD(m_ PBH,f_ PBH,obs^ EMD).In addition, we can obtain the upper limits of f_ PBH,obs^max from Eq. (<ref>) and Eq. (<ref>) in the MMD and EMD framework as{ f_ PBH,obs^ MMD,max(m)=-ln(1-Π)/∑^N_ tot_i=1τ^ MMD_i(m,f_ PBH,obs^ MMD=1), f_ PBH,obs^ EMD,max( p_ mf)=-ln(1-Π)/∑^N_ tot_i=1τ^ EMD_i( p_ mf,f_ PBH,obs^ EMD=1). .Then, we can obtain this relationship by combining Eqs. (<ref>, <ref>) f_ PBH,obs^ EMD,max( p_ mf)/f_ PBH,obs^ MMD,max(m)=∑^N_ tot_i=1τ^ MMD_i(m,f_ PBH,obs^ MMD=1) /∑^N_ tot_i=1τ^ EMD_i( p_ mf,f_ PBH,obs^ EMD=1)= ∑^N_ tot_i=1τ^ MMD_i(m,f_ PBH,obs^ MMD=1)/∑^N_ tot_i=1∫_0^∞ dm_ PBHτ^ MMD_i(m_ PBH,f_ PBH,obs^ EMD=1)ψ(m_ PBH, p_ mf). Finally, we can integrate Eq. (<ref>) with the same mass distribution ψ(m, p_ mf) over m to obtain that∫_0^∞ dm f_ PBH,obs^ EMD,max( p_ mf)ψ(m, p_ mf)/f_ PBH,obs^ MMD,max(m)= ∫_0^∞ dm ∑^N_ tot_i=1τ^ MMD_i(m,f_ PBH,obs^ MMD=1)ψ(m, p_ mf)/∑^N_ tot_i=1∫_0^∞ dmτ^ MMD_i(m,f_ PBH,obs^ EMD=1)ψ(m, p_ mf)=1.This relationship indicates that constraints on the f_ PBH,obs from lensing effect can be perfectly consistent with the formula for applying constraints with the MMD to specific EMD <cit.>. Furthermore, the same relationship in Eq. (<ref>) can be derived from Eq. (<ref>).§ RESULTSIn this section, we consider two kinds of power spectrum of primordial curvature perturbation P_ζ(k, p_ mf). The first case is a δ function of ln k, i.e.P_ζ(k, p_ mf,1)=A_δδ(ln k-ln k_0),where p_ mf,1≡[A_δ,k_0], A_δ and k_0 are dimensionless amplitude and constant wave number, respectively. In Fig. <ref>, we show several examples for the mass function ψ(m, p_ mf,1) which correspond to the δ-function power spectrum of primordial curvature perturbation as Eq. (<ref>). Specifically, we choose the constant wave number to be k_0=[2,3,4,5,6]×10^5 Mpc^-1 with fixed dimensionless amplitude A_δ=5×10^-2 presented in the left panel of Fig. <ref>. Similarly, we show the mass function with different dimensionless amplitude A_δ=[1,3,5,7,9]×10^-2 and fixed constant wave number k_0=5×10^5 Mpc^-1 presented in the right panel of Fig. 
<ref>.The second case is a nearly scale invariant shape of the formP_ζ(k, p_ mf,2)=A_ ns(k/k_min)^n_ s-1× Θ(k-k_min)Θ(k_max-k),where p_ mf,2≡[A_ ns,n_ s], A_ ns and n_ s are dimensionless amplitude and spectral tilt, respectively. In addition, We take k_min and k_max as 10^5 Mpc^-1 and 10^6 Mpc^-1, which approximately correspond to PBH mass in the range of 10 M_⊙ to 10^3 M_⊙. In Fig. <ref>, we show several examples for the mass function ψ(m, p_ mf,2) which correspond to the nearly scale invariant power spectrum of primordial curvature perturbation as Eq. (<ref>). The same as the first case, we choose the spectral tilt n_ s=[0.5,0.7,0.9,1.1,1.3] with fixed dimensionless amplitude A_ ns=5×10^-2 presented in the left panel of Fig. <ref>. Similarly, we show the mass function with different dimensionless amplitude A_ ns=[1,3,5,7,9]×10^-2 and fixed spectral tilt n_ s=0.8 presented in the right panel of Fig. <ref>.In order to discuss the constraints on the power spectrum of primordial curvature perturbation from the lensing effect, we take FRBs as an example. At present, we use 593 publicly available FRBs compiled by zhou et al. 2022 works <cit.>. These sources consist of more than five hundred FRB events from 2018 July 25 to 2019 July 1[https://www.chime-frb.ca/catalog] <cit.>. The distance and redshift of a detected FRB can be approximately estimated from its observed dispersion measure (DM), which is proportional to the number density of free electron along the line of sight and is usually decomposed into the following four ingredients,DM= DM_host+DM_src/1+z+ DM_IGM+ DM_MW,where DM_host and DM_src represent DM from host galaxy and local environment, respectively. We adopt the minimum inference of redshift for all host galaxies, which corresponds to the maximum value of DM_host+ DM_src to be 200 pc/cm^3. DM_MW is the contribution from the Milky Way. In addition, DM_IGM represents DM contribution from intergalactic medium (IGM). The DM_IGM-z relation is given by <cit.> and it is approximately expressed as DM_ IGM∼855z pc/cm^3 by considering the fraction f_ IGM of baryon in the IGM to f_ IGM=0.83 and the He ionization history <cit.>. DM and redshift measurements for several localized FRBs suggested that this relation is statistically favored by observations <cit.>. We present the inferred redshifts of 593 available FRBs in the left panel of Fig. <ref>. For milli-lensing of FRBs, the critical value R_ f, max and the width (w) of the observed signal determine the maximum and minimum value of impact parameter in the cross section. To ensure that both signals are detectable, the maximum value of impact parameter y_max can be obtained by requiring that the flux ratio of two lensed images is smaller than a critical value R_ f, max,y_max=R_ f,max^1/4-R_ f,max^-1/4,Here, following previous works <cit.>, we take R_ f,max=5 for cases when we study lensing of the whole sample of all currently public FRBs. In addition,the minimum value of impact parameter y_min can be obtained from the time delay between lensed signalsΔ t=4M_ PBH(1+z_ L)× [y/2√(y^2+4)+ln(√(y^2+4)+y/√(y^2+4)-y)] ≥ w,and pulse widths of all FRBs are presented in the left panel of Fig. <ref>.After determining the the maximum and minimum value of impact parameter in the optical depth from Eqs. (<ref>-<ref>), we can combined 593 FRBs and Eq. (<ref>) to obtain the upper limit of f_ PBH,obs at 100Π% confidence level. In the right panel of Fig. <ref>, we demonstrate the constraints on f_ PBH,obs when the MMD is considered. 
In the ≳ 10^3 M_⊙ large-mass end, the constraint on f_ PBH saturates to 9.8×10^-2 at 68% confidence level. Then, From the relationship in Eq. (<ref>), we derive the upper limit on f_ PBH,obs corresponding to above two primordial curvature perturbation P_ζ(k, p_ mf) models, and the results are shown in the left panel of Figs. (<ref>-<ref>). For the first primordial curvature perturbation, we vary the constant wave number k_0 from 10^5 Mpc^-1 to 10^6 Mpc^-1, which roughly correspond to PBHs with ≲ 10^3 M_⊙. And, the dimensionless amplitude A_δ is greater than 0.01. The regions where 20% and 50% of f_ PBH,obs have been marked by the red solid lines in the left panel of Fig. <ref>. In addition, it should be note that the white regions in the left panel of Fig. <ref> represent that f_ PBH,obs is more than 1. In this model, less k_0 and larger A_δ corresponds to the smaller peak of the mass distribution function, which leads to improve the constraints on the f_ PBH,obs. In the right panel of Fig. <ref>, the parameter space p_ mf,1≡[A_δ,k_0] can be allowed to exist in the brown area within the red solid line. There are two conditions under which a parameter space can be allowed to exist:{ f_ PBH,th≤1, Δ f_ PBH≡ f_ PBH,th-f_ PBH,obs≤0. .The first condition means that the density parameter of PBH from theory can not be larger than the one of dark matter. The second condition means that the theoretical prediction of PBH abundance should be lower than the upper limit of observational constraints. We find that the amplitude of primordial curvature perturbation is less than 6× 10^-2 at the scale region of k_0≥2×10^5 Mpc^-1.For the second case, we assume that the value of n_ s varies from 0.5 to 1.5 and A_ ns is greater than 0.01. The regions where 15% and 20% of dark matter can consist of PBHs are denoted by red solid line in the left panel of Fig. <ref>. In this model, less n_ s corresponds to the lagrer peak of the mass distribution function. In addition, larger A_ ns corresponds to the broader mass distribution. The quantitative competition between the two above-mentioned effects of broadening the mass distribution, hence the constraints on the f_ PBH,th is more complicated. Based on the above conditions of Eq. (<ref>), we present the allowed parameter space p_ mf,2≡[A_ ns,n_ s] in the brown area within the red solid line at the right panel of Fig. <ref>. We find that the amplitude of primordial curvature perturbation is less than 5× 10^-2 at the scale invariant range (n_ s∼1). § DISCUSSIONAlthough CMB and LSS observations have yielded strict bounds on the primordial curvature perturbation at Mpc scale and higher, effective constraints on primordial fluctuations on the small scales are still rare. Fortunately, the gravitational lensing effects, such as echoes of transient sources, can be used as powerful probes to constrain PBHs. Therefore, we proposed using lensing effect to constrain the primordial curvature perturbation on small scales. In this paper, we first derive the relationship that connects constraints of f_ PBH,obs from the MMD to EMD for all lensing effect. Then, taking FRB as an example, we propose that its lensing effect can be used to exploring the primordial curvature perturbation. By combining 593 FRB samples <cit.> and two kinds of primordial curvature perturbation models, we present the constraints on f_ PBH,obs and the allowed regions of parameter space of primordial curvature perturbations in Figs. (<ref>-<ref>). 
In general, null search result of lensed FRB in the latest 593 events would constrain the amplitude of primordial curvature perturbation to be less than 8× 10^-2 at the scale region of 10^5-10^6 Mpc^-1. Moreover, there are two significant aspects in our analysis: * When comparing the abundance of PBHs calculated by any theoretical model with the observational constraints, we should transform the results fromMMD to the EMD under the corresponding theoretical framework. For observational constraints from the lensing effect, we can use Eqs. (<ref>) to translate MMD and EMD results. * Since the primordial power spectrum determines the mass distribution (ψ(m, p_ mf)) and theoretical abundance of PBHs (f_ PBH,th), it suggests that the primordial curvature perturbation parameters p_ mf are degenerate with the abundance of PBHs f_ PBH theoretically. Therefore, if there is a tension between the predicted range of f_ PBH,obs and f_ PBH,th from the future lensing signals, we should consider the following possible reasons: 1. Whether the primordial perturbation model is correct; 2. Whether there are other compact dark matter, such as axion mini-clusters <cit.> and compact mini halos <cit.>, participating in the observation process; 3. Whether PBHs exist evolutionary processes, such as accretion <cit.> and halo structure <cit.>, to change the theoretical f_ PBH,th or observed physical processes. There are several factors contributing to the uncertainties in our analysis. For example, the values for δ_ th depends on the profile of perturbations, the threshold value of the comoving density could vary from 0.2 to 0.6 <cit.>. Moreover, non-Gaussian due to the nonlinear relationship between curvature and density perturbations would lead to the amplitude of the power spectrum of primordial curvature perturbation P_ζ(q, p_ mf) might be a factor of 𝒪(2) larger than if we assumed a linear relationship between ζ and δ <cit.>. Finally, our analysis are based on the Press-Schechter theory. It should be noted that the statistical methods, e.g., Press-Schechter or peaks theory, would slightly affect the results <cit.>. It is foreseen that these constraints will be of great importance for exploring PBHs with their formation mechanisms relating to the physics of the early universe.§ ACKNOWLEDGEMENTSThis work was supported by the National Key Research and Development Program of China Grant No. 2021YFC2203001; National Natural Science Foundation of China under Grants Nos.11920101003, 12021003, 11633001, 12322301, and 12275021; the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant Nos. XDB2300000 and the Interdiscipline Research Funds of Beijing Normal University. H.Z is supported by China National Postdoctoral Program for Innovative Talents under Grant No.BX20230271.[Ando et al.(2018)]Ando2018Ando, K., Inomata, K., & Kawasaki, M. 2018, PhRvD, 97, 103528[Barnacka et al.(2012)]Barnacka2012Barnacka, A., Glicenstein, J.F., & Moderski, R. 2012, PhRvD, 86, 043001[Basak et al.(2022)]Basak2022Basak, S., Ganguly,A., Haris, K., Kapadia, S., Mehta, A. K., & Ajith, P. 2022, ApJL, 926, L28[Blaes & Webster(1992)]Blaes1992Blaes, O. M., & Webster, R. L. 1992, ApJl, 391, L63[Cai et al.(2020)]Cai2020Cai, R.-G., Guo, Z.-K., Liu, J., & Liu, L., 2020, JCAP, 06, 013[Carr et al.(2016)]Carr2016Carr, B., Kuhnel, F., & Sandstad, M. 2016, PhRvD, 94, 083504[Carr et al.(2017)]Carr2017Carr, B., Raidal, M., Tenkanen, T., Vaskonen, V., & Veermäe, H. 2017, PhRvD, 96, 023514[Carr & Hawking(1974)]Carr1974 Carr, B. 
J., & Hawking, S. W. 1974, MNRAS, 168, 399[Carr(1975)]Carr1975 Carr, B. J. 1975, ApJ, 201, 1[Casadio et al.(2021)]Casadio2021 Casadio, C., Blinov, D., Readhead, A. C. S., et al. 2021, MNRAS, 507, L6[Chen & Cai(2019)]Cai2019Chen, C., & Cai, Y.-F. 2019, JCAP, 10, 068[CHIME/FRB Collaboration, (2021)]CHIME2021CHIME/FRB Collaboration. 2021, ApJS, 257, 59[CHIME/FRB Collaboration, (2022)]CHIME2022CHIME/FRB Collaboration, 2022, PhRvD, 106, 043016[Connor & Ravi(2023)]Connor2023Connor, L., & Ravi, V. 2023, MNRAS, 521, 4024[Clesse & García-Bellido(2015)]Clesse2015Clesse, S., & García-Bellido, J. 2015, PhRvD, 92, 023524[Delos & Franciolini (2023)]Delos2023Delos, M. S., & Franciolini, G. 2023, PhRvD, 107, 083505[De Luca et al.(2019)]Luca2019De Luca, V., Franciolini, G., Kehagias, A., Peloso, M., Riotto, A., & Ünal, C. 2019, JCAP, 07, 048[Deng & Zhang(2014)]Deng2014Deng, W., & Zhang, B. 2014, ApJL, 783, L35[EROS-2 Collaboration, (2007)]Tisserand2007EROS-2 Collaboration, 2007, A&A, 469, 387[Fu et al.(2019)]Fu2019Fu, C.-J., Wu, P.-X., &Yu, H.-W. 2019, PhRvD, 100, 063532[Green & Kavanagh(2021)]Green2021Green, A. M., & Kavanagh, B. J. 2021, J.Phys.G, 48, 043001[Gow et al.(2021)]Gow2020Gow, A. D. , Byrnes, C. T., Cole, P. S., & Young, S. 2021, JCAP, 02, 002 [Griest et al.(2013)]Griest2013Griest, K., Cieplak, A. M., & Lehner, M. J. 2013, PhRvL, 111, 181302[Harada et al.(2013)]Harada2013Harada, T., Yoo, C.-M., & Kohri, K. 2013, PhRvD, 88, 084051[Hardy(2017)]Hardy2017Hardy, E. 2017, JHEP 02, 046[Hawking(1971)]Hawking1971Hawking, S. W. 1971, MNRAS, 152, 75.[Ji et al.(2018)]Ji2018Ji, L.-Y., Kovetz, E. D., & Kamionkowski, M. 2018, PhRvD, 98, 123523[Jung & Shin(2019)]Jung2019Jung S., & Shin C. S. 2019, PhRvL, 122, 041103[Kalita et al.(2023)]Kalita2023Kalita, S.,Bhatporia, S., &Weltman, A. 2023, JCAP, 11, 059[Kassiola et al.(1991)]Kassiola1991Kassiola, A., Kovner, I., & Blandford, B. D. 1991, ApJ, 381, 6[Krochek & Kovetz(2022)]Krochek2022Krochek, K., & Kovetz, E. D. 2022, PhRvD, 10, 103528[Laha(2020)]Laha2020Laha, R. 2020, PhRvD, 102, 023016 [Leung et al.(2022)]Leung2022Leung, C., Kader, Z., Masui, K. W., et al. 2022, PhRvD 106, 043017[Li et al.(2020)]Li2020Li, Z.-X, Gao, H., Wei, J.-J., Yang, Y.-P., Zhang, B., & Zhu, Z.-H. 2020, MNRAS, 496, L28[Liao et al.(2020a)]Liao2020aLiao, K., Tian, S.-X., & Ding, X.-H. 2020a, MNRAS, 495, 2002[Liao et al.(2020b)]Liao2021Liao, K., Zhang, S.-B., Li, Z.-X, & Gao, H. 2020b, ApJL, 896, L11[Liao et al.(2022)]Liao2022Liao, K., Biesiada, M., & ,Zhu, Z.-H. 2022, Chin.Phys.Lett., 39, 119801[LIGO Scientific and VIRGO and KAGRA Collaborations, (2023)]LIGO2023LIGO Scientific and VIRGO and KAGRA Collaborations, 2023, arXiv: 2304.08393[Lin et al.(2022)]Lin2022Lin, S.-J., Li, A., Gao, H., et al. 2022, ApJ, 931, 1[MACHO Collaboration, (2001)]Allsman2001MACHO Collaboration, 2001, ApJL, 550, L169[Motohashi et al.(2020)]Motohashi2020Motohashi, H., Mukohyama, S., & Oliosi, M. 2020, JCAP, 03, 002[Muñoz et al.(2016)]Munoz2016 Muñoz, J. B., Kovetz E. D., Dai L., & Kamionkowski M. 2016, PhRvL, 117, 091301[Musco & Miller(2013)]Musco2013Musco, I., & Miller, J. C. 2013, Class.Quant.Grav., 30, 145009[Musco et al.(2021)]Musco2021Musco, I., De Luca, V., Franciolini, V., & Riotto, A. 2021, PhRvD, 103, 063538[Nakama et al.(2017)]Nakama2017Nakama, T., Silk, J., & Kamionkowski, M. 2017, PhRvD, 95, 043511 [Nemiroff et al.(2001)]Nemiroff2001Nemiroff, R. J.,Marani, G. F., Norris, J. P., & Bonnell, J. T. 2001, PhRvL, 86, 580[Niikura et al.(2019a)]Niikura2019aNiikura, H., Masahiro, T., Naoki, Y., et al. 
2019a, Nature Astronomy, 3, 524.[Niikura et al.(2019b)]Niikura2019bNiikura, H., Takada, M., Yokoyama, S., Sumi, T., & Masaki, S. 2019b, PhRvD, 99, 083503[Oguri et al. (2022)]Oguri2022Oguri, M., & Takhistov, V., Kohri, K., 2022, Phys.Lett.B, 847, 138276[Pi et al.(2018)]Pi2018Pi, S., Zhang, Y.-L., Huang, Q.-G., & Sasaki, M. 2018, JCPA, 05, 042[Planck Collaboration, (2020a)]CMB2018Planck Collaboration. 2020a, A&A, 641, A10[Planck Collaboration, (2020b)]Planck2018Planck Collaboration. 2020b, A&A, 641, A6 [Press & Gunn(1973)]Press1973Press, W. H., & Gunn, J. E. 1973, ApJ, 185, 397[Press & Schechter(1974)]Press1974Press, W. H., & Schechter, P. 1974, ApJ, 187, 425[Ricotti(2007)]Ricotti2007Ricotti, M. 2007, ApJ, 662, 61[Ricotti(2009)]Ricotti2009Ricotti, M. 2009, ApJ, 707, 987[Sasaki et al.(2018)]Sasaki2018Sasaki, M., Suyama, T., Tanaka1, T., & Yokoyama, S. 2018, Class.Quant.Grav., 35, 063001[Urrutia & Vaskonen (2021)]Urrutia2021Urrutia, J., & Vaskonen, V. 2021, MNRAS, 509, 1358[Urrutia et al.(2023)]Urrutia2023Urrutia, J., Vaskonen, V., & Veermäe, H. 2023, PhRvD, 108, 023507[Wang et al.(2021)]Wang2021Wang, J.-S., Herrera-Martín, A., & Hu, Y.-M. 2021, PhRvD, 104, 083515[Wilkinson et al.(2001)]Wilkinson2001Wilkinson, P. N., Henstock, D. R., Browne, W. A., et al. 2001, PhRvL, 86, 584[Yoo et al.(2018)]Yoo2018Yoo,C.-M., Harada, T., Garriga, J., & Kohri, K. 2018, PTEP, 12, 123E01 [Young et al.(2014)]Young2014Young, S., Byrnes, C. T., & Sasaki, M. 2014, JCAP, 07, 045[Young et al.(2019)]Young2019Young, S., Musco, I., & Byrnes, C. T. 2019, JCAP, 11, 012[Zhang(2018)]Zhang2018Zhang, B. 2018, ApJL, 867, L21 [Zhou et al.(2022a)]Zhou2022a Zhou, H., Li, Z.-X., Huang, Z.-Q., Gao, H., & Huang, L. 2022a, MNRAS, 511, 1141[Zhou et al.(2022b)]Zhou2022Zhou, H., Li, Z.-X., Liao, K., Niu, C.-H., Gao, H., Huang, Z.-Q., Huang, L., & Zhang, B. 2022b, ApJ, 928, 124[Zhou et al.(2022c)]Zhou2022cZhou, H., Lian, Y.-J., Li, Z.-X, Cao, S., & Huang, Z.-Q. 2022c, MNRAS, 513, 3627[Zhou et al.(2022d)]Zhou2022dZhou, H., Li, Z.-X., Liao, K., & Huang Z.-Q. 2022d, MNRAS, 518, 149[Zumalacarregui & Seljak(2018)]Zumalacarregui2018 Zumalacarregui, M., & Seljak, U. 2018, PhRvL, 121, 141101 | http://arxiv.org/abs/2311.15848v2 | {
"authors": [
"Huan Zhou",
"Zhengxiang Li",
"Zong-Hong Zhu"
],
"categories": [
"astro-ph.CO"
],
"primary_category": "astro-ph.CO",
"published": "20231127141509",
"title": "Exploring primordial curvature perturbation on small scales with the lensing effect of fast radio bursts"
} |
Vasiliki Bitsouni^1 0000-0002-0684-0583, Nikolaos Gialelis^2,3 0000-0002-6465-7242, Vasilis Tsilidis^1 0000-0001-5868-4984 [1]Department of Mathematics, University of Patras, GR-26504 Rio Patras, Greece [2]Department of Mathematics, National and Kapodistrian University of Athens, GR-15784 Athens, Greece [3]School of Medicine, National and Kapodistrian University of Athens, GR-11527 Athens, Greece An age-structured SVEAIR epidemiological model ============================================== In this paper, we introduce and study an age-structured epidemiological compartment model and its respective problem, applied but not limited to the COVID-19 pandemic, in order to investigate the role of the age of the individuals in the evolution of epidemiological phenomena. We investigate the well-posedness of the model, as well as its global dynamics in terms of the basic reproduction number, by constructing Lyapunov functions. Keywords: Age-based epidemiological model, Basic reproductive number, Asymptomatic infectious, Stability analysis, Global stability, Lyapunov function MSC2020: 35B35, 35Q92, 37N25, 92D30 § INTRODUCTION Epidemiological mathematical models have played a crucial role in understanding and predicting the spread of infectious diseases, as well as informing public health policies and measures (see <cit.> and many references therein). With the emergence of the COVID-19 pandemic, the importance of these models has been highlighted as they have been used to assess the anticipated spread of the virus and inform strategies to mitigate its impact <cit.>. One of the key aspects of modern epidemiological modeling is the incorporation of age structure, which takes into account the differences in susceptibility, transmission, and disease progression across various age groups (such as <cit.>). This approach allows for a more accurate representation of disease dynamics and enables better targeting of interventions and resource allocation. The aim of the present paper is to investigate the role of the age of the individuals in the evolution of epidemiological phenomena. Using the COVID-19 outbreak as a case study, we aim to address the following questions: – How does the age of individuals affect the spread of the epidemic? – What is the effect of the asymptomatic infectious individuals on the basic reproduction number, ℛ_0, of COVID-19? We answer the above questions by deriving an age-structured epidemiological compartment model that incorporates the important role of both asymptomatic and symptomatic individuals. This study is organized as follows. In [sec:model]<ref>sec:model, we develop a novel age-structured SVEAIR model that incorporates, among others, the ambiguous (see [sec:derivanal]<ref>sec:derivanal) variable of asymptomaticity of infectious individuals for the spread of COVID-19 disease. We show its global well-posedness, derive the basic reproductive number, ℛ_0, of the model and study the global stability of its steady states. In [sec:numerics]<ref>sec:numerics, we undertake numerical simulations to confirm the behaviour of the solution of the problem. We conclude in [sec:CD]<ref>sec:CD with a summary and discussion of the results. § THE EPIDEMIOLOGICAL MODEL Here we introduce an epidemiological model, ℳ, along with the respective problem, 𝒫, as a means of utilizing the proposed scheme to answer the main question of the present paper.
§.§ Derivation and analysis of the model One of the most critical facts about COVID-19, is that a significant number of cases, mainly those of young age, has been reported as asymptomatic (see <cit.> and many references therein), leading to fast spread of the infection. Although the asymptomatic cases have a shorter duration of viral shedding and lower viral load <cit.>, their proportion can range from 4%-90% (see <cit.> and many references therein) and most of the time they play a key role in infection transmission. Therefore, we incorporate not only both symptomatic and asymptomatic cases in our model (as it is done in, e.g., <cit.>), but also the age of the infected/infectious individuals. In particular, the proposed ℳ is based on the following hypotheses.* The total population, N, is classified into six non-negative-valued compartments, susceptible, S, vaccinated-with-a-prophylactic-vaccine, V, latent/exposed, E, asymptomatic infectious, A, symptomatic infectious, I, and recovered/removed, R, individuals, thusN=S+V+E+A+I+R.All of the above epidemiological variables depend on non-negative time, t. * * There is also another independent non-negative age-variable, θ, which measures the time elapsed since, e.g., birth or infection. The two time-variables have different scales, i.e they are measured in different units, and the parameter ω∈ℝ^+ stands for the conversion factor from the units of θ to the units of t. * Only the non-negative-valued age-densities of E, A and I, i.e. e, a and i, respectively, contribute to our ℳ. Those densities should vanish at (or have already vanished before) θ→∞, hence it is natural for them to be considered as elements of L^1(ℝ_0^+), for every fixed t. In the light of the above assumption, the expressionsE=∫_0^∞e( · ,θ) θ,A=∫_0^∞a( · ,θ) θ and I=∫_0^∞i( · ,θ) θare well-posed.** The vaccine is considered to be purely prophylactic. * Only a part of population is vaccinated and p∈[0,1] stands for the vaccine coverage. Since the vaccine is supposed to be purely prophylactic, the only source for the vaccinees concerns the pool of the susceptible individuals. That source is considered to be linear. * The vaccine is likely to be imperfect (at providing prophylaxis) and ϵ∈[0,1] stands for its effectiveness. * The vaccine-induced immunity, i.e. the process of vaccinees obtaining immunity and moving into recovered population, is considered to be linear and the letter ζ∈ℝ_0^+ is employed for the vaccine-induced immunity rate.* The transmission, i.e. the process of susceptible individuals and failed-to-be-immune vaccinees becoming latent, is considered to be exclusively horizontal and to be governed by the Holling-type-I functional response. The parameters β_A,β_I ∈ L^∞(ℝ_0^+;ℝ_0^+) stand for the transmission rates of asymptomatic and symptomatic, respectively, infectious individuals. * The incubation, i.e. the process of latent individuals becoming infectious, is considered to be linear and k∈ L^∞(ℝ_0^+;ℝ_0^+) is the incubation rate. That rate is the same for both asymptomatic and symptomatic classes, but those classes are different from each other in terms of magnitude of their sources. In particular, q∈ L^∞(ℝ_0^+;[0,1]) stands for the proportion of latent individuals that become asymptomatic infectious ones. * The recovery, i.e. the process of infectious individuals moving into the recovered population, is considered to be linear and γ_A,γ_I ∈ L^∞(ℝ_0^+;ℝ_0^+) stand for the recovery rates of asymptomatic and symptomatic, respectively, infectious ones. 
* * Some of the asymptomatic infectious individuals never develop symptoms and they move directly into the recovered/removed class and the letter ξ∈ L^∞(ℝ_0^+;[0,1]) is employed for the proportion of those asymptomatic infectious individuals. * The symptomatic transition, i.e. the process of asymptomatic infectious individuals turning into symptomatic ones, is considered to be linear and χ∈ L^∞(ℝ_0^+;ℝ_0^+) stands for the symptomatic transition rate.* Demographic terms are taken into account and they are considered to be linear, with μ∈ℝ^+ being the universal birth/death rate. We note that μ is considered to be the only strictly positive constant of ℳ.* No reinfections are taken into account, hence no movement from the pool of the removed individuals to the pool of the susceptible ones is considered. The respective initial-boundary value 𝒫 has the following form: For given(S_0,V_0,e_0,a_0,i_0,R_0)∈(ℝ_0^+)^2×(L^1(ℝ_0^+;ℝ_0^+))^3×ℝ_0^+,we search for T>0 and smooth enough(S,V,e,a,i,R) [0,T)→(ℝ_0^+)^2×(L^1(ℝ_0^+;ℝ_0^+))^3×ℝ_0^+,such that S t=μ N-(p+∫_0^∞β_A(θ)a( · ,θ)+β_I(θ)i( · ,θ) θ+μ)SS(0)=S_0,V t=pS-(ζϵ+∫_0^∞β_A(θ)a( · ,θ)+β_I(θ)i( · ,θ) θ(1-ϵ)+μ)VV(0)=V_0, ∂ e∂ t+1ω∂ e∂θ=-(k+μ)ee( · ,0)=ω∫_0^∞β_A(θ)a( · ,θ)+β_I(θ)i( · ,θ) θ(S+(1-ϵ)V)e(0, · )=e_0, ∂ a∂ t+1ω∂ a∂θ=-(γ_Aξ+χ(1-ξ)+μ)aa( · ,0)=ω∫_0^∞k(θ)q(θ)e( · ,θ) θa(0, · )=a_0, ∂ i∂ t+1ω∂ i∂θ=-(γ_I+μ)ii( · ,0)=ω∫_0^∞k(θ)(1-q(θ))e( · ,θ)+χ(θ)(1-ξ(θ))a( · ,θ) θi(0, · )=i_0,R t=ζϵ V+∫_0^∞γ_A(θ)ξ(θ)a( · ,θ)+γ_I(θ)i( · ,θ) θ-μ RR(0)=R_0. The dimensional units of all variables and parameters appeared in 𝒫 (<ref>) are gathered in [table-1]Table <ref>table-1.We notice that by integration (with respect to θ over ℝ_0^+) and summation of the left and right-hand side of the derived ordinary differential equations, one gets N/ t=0⇔ N=N_0 S_0+V_0+E_0+A_0+I_0+R_0,whereE_0∫_0^∞e_0(θ) θ,A_0∫_0^∞a_0(θ) θ and I_0∫_0^∞i_0(θ) θ.Hence, an additional hypothesis made is as follows. * The total population remains constant. This is a practical (yet not necessary) assumption, and makes sense when the time-span of the modeled epidemiological phenomenon is way shorter than the time needed for observable changes of the total population (whether they are caused by the epidemic or not).Equations (<ref>)-(<ref>) are independent of R, hence the problem is reduced to the aforementioned subsystem itself. In fact, with (<ref>) at hand, R can be easily calculated byR=N_0-S-V-E-A-I.§.§.§ Scaling of ageIn order to simplify the analysis of (<ref>)-(<ref>), we eliminate the factor ω. We do so by the scaling of the independent age-variable, θ, and turning it to another time-variable measured in the same units as t. Hence, while keeping the same notation, we change the variables as follows ωθ ↦θ, 1/ωf∘1/ωid ↦ f,g∘1/ωid ↦ g,for (f,g)∈ {e(t, · ),a(t, · ),i(t, · ) | t∈ℝ_0^+}×{β_A,β_I,k,q,γ_A,ξ,χ,γ_I}, and (<ref>)-(<ref>) then becomes S t=μ N_0-(p+∫_0^∞β_A(θ)a( · ,θ)+β_I(θ)i( · ,θ) θ+μ)SS(0)=S_0,V t=pS-(ζϵ+∫_0^∞β_A(θ)a( · ,θ)+β_I(θ)i( · ,θ) θ(1-ϵ)+μ)VV(0)=V_0, ∂ e∂ t+∂ e∂θ=-(k+μ)ee( · ,0)=∫_0^∞β_A(θ)a( · ,θ)+β_I(θ)i( · ,θ) θ(S+(1-ϵ)V)e(0, · )=e_0, ∂ a∂ t+∂ a∂θ=-(γ_Aξ+χ(1-ξ)+μ)aa( · ,0)=∫_0^∞k(θ)q(θ)e( · ,θ) θa(0, · )=a_0, ∂ i∂ t+∂ i∂θ=-(γ_I+μ)ii( · ,0)=∫_0^∞k(θ)(1-q(θ))e( · ,θ)+χ(θ)(1-ξ(θ))a( · ,θ) θi(0, · )=i_0, where t and (the new) θ are now measured in the same time-units. The flow diagram of the differential equations in (<ref>) is shown in [fig-1]Figure <ref>fig-1. 
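To make the coupling structure of the scaled system concrete, the following minimal sketch (written in Julia, the language used for the numerical implementation later in this study, but purely illustrative and not part of that implementation) evaluates the force of infection and the three age-zero inflow terms from the state sampled on a uniform age grid; the function name, the rectangle-rule quadrature and the passing of the sampled parameter functions as plain vectors are assumptions made only for this illustration.

# Illustrative sketch only: evaluate the age integrals that couple the
# equations of the scaled system on a uniform age grid of step h.
# βA, βI, k, q, χ, ξ are the parameter functions sampled on the grid,
# e, a, i are the age densities at the current time, and ϵ is the
# vaccine effectiveness.  A rectangle rule stands in for the integrals.
function coupling_terms(S, V, e, a, i, βA, βI, k, q, χ, ξ, ϵ, h)
    β = h * sum(βA .* a .+ βI .* i)                        # force of infection
    ε = β * (S + (1 - ϵ) * V)                              # boundary inflow e(t, 0)
    α = h * sum(k .* q .* e)                               # boundary inflow a(t, 0)
    ι = h * sum(k .* (1 .- q) .* e .+ χ .* (1 .- ξ) .* a)  # boundary inflow i(t, 0)
    return β, ε, α, ι
end

With β at hand, the right-hand sides of the equations for S and V follow directly, while ε, α and ι supply the boundary values e( · ,0), a( · ,0) and i( · ,0).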
§.§.§ Global well-posednessWe set β ∫_0^∞β_A(θ)a( · ,θ)+β_I(θ)i( · ,θ) θ, εe( · ,0)=β(S+(1-ϵ)V), αa( · ,0)=∫_0^∞k(θ)q(θ)e( · ,θ) θ, ιi( · ,0)=∫_0^∞k(θ)(1-q(θ))e( · ,θ)+χ(θ)(1-ξ(θ))a( · ,θ) θ.Integrating the independent variables of (<ref>) and (<ref>) along [0,T), as well as the independent variables of (<ref>)-(<ref>) along the characteristic straight-line paths{(t,θ)∈[0,T)×ℝ_0^+ | t-θ=c},∀ c∈ℝ,we deduce that S(t) =S_0^-∫_0^tp+β(s)+μs+μ N_0∫_0^t^-∫_s^tp+β(τ)+μ τs,∀ t∈[0,T),V(t) =V_0^-∫_0^tζϵ+β(s)(1-ϵ)+μs+p∫_0^tS(s)^-∫_s^tζϵ+β(τ)(1-ϵ)+μ τs,∀ t∈[0,T),e(t,θ) = e_0(θ-t)^-∫_0^tk(θ-t+s)+μs,if t∈[0,θ)⊊[0,T) ε(t-θ)^-∫_0^θk(s)+μs,if θ∈[0,t)⊊[0,T), a(t,θ) = a_0(θ-t)^-∫_0^tγ_A(θ-t+s)ξ(θ-t+s)+χ(θ-t+s)(1-ξ(θ-t+s))+μs,if t∈[0,θ)⊊[0,T) α(t-θ)^-∫_0^θγ_A(s)ξ(s)+χ(s)(1-ξ(s))+μs,if θ∈[0,t)⊊[0,T). i(t,θ) = i_0(θ-t)^-∫_0^tγ_I(θ-t+s)+μs,if t∈[0,θ)⊊[0,T) ι(t-θ)^-∫_0^θγ_I(s)+μs,if θ∈[0,t)⊊[0,T), We then plug system (<ref>) into system (<ref>) to obtain β(t) =∫_0^tβ_1(t,s)s+∫_0^∞β_2(t,s)s,∀ t∈[0,T), ε(t) =β(t)(S(t)+(1-ϵ)V(t)),∀ t∈[0,T), α(t) =∫_0^tα_1(t,s)s+∫_0^∞α_2(t,s)s,∀ t∈[0,T), ι(t) =∫_0^tι_1(t,s)s+∫_0^∞ι_2(t,s)s,∀ t∈[0,T). where β_1(t,s) β_A(t-s)α(s)^-∫_0^t-sγ_A(τ)ξ(τ)+χ(τ)(1-ξ(τ))+μ τ++β_I(t-s)ι(s)^-∫_0^t-sγ_I(τ)+μ τ, β_2(t,s) β_A(t+s)a_0(s)^-∫_0^tγ_A(τ+s)ξ(τ+s)+χ(τ+s)(1-ξ(τ+s))+μ τ++β_I(t+s)i_0(s)^-∫_0^tγ_I(τ+s)+μ τ, S and V have already been calculated in terms of β in (<ref>) and (<ref>), respectively, α_1(t,s)k(t-s)q(t-s)β(s)(S(s)+(1-ϵ)V(s))^-∫_0^t-sk(τ)+μ τ, α_2(t,s)k(t+s)q(t+s)e_0(s)^-∫_0^tk(τ+s)+μ τ andι_1(t,s)k(t-s)(1-q(t-s))β(s)(S(s)+(1-ε)V(s))^-∫_0^t-sk(τ)+μ τ++χ(t-s)(1-ξ(t-s))α(s)^-∫_0^t-sγ_A(τ)ξ(τ)+χ(τ)(1-ξ(τ))+μ τ, ι_2(t,s)k(t+s)(1-q(t+s))e_0(s)^-∫_0^tk(τ+s)+μ τ++χ(t+s)(1-ξ(t+s))a_0(s)^-∫_0^tγ_A(t+s)ξ(t+s)+χ(t+s)(1-ξ(t+s))+μ τ.Equations (<ref>), (<ref>) and (<ref>) can be considered as an auxiliary problem (in integral form) for the unknown functions β, α and ι. A direct application of the classic theory of integral equations provides us with the following preliminary result, the standard proof of which is omitted (see, e.g., <cit.>). For every (S_0,V_0,e_0,a_0,i_0,R_0)∈ℝ^2×(L^1(ℝ_0^+;ℝ))^3×ℝ, the problem (<ref>), (<ref>), (<ref>) is globally (i.e. T=∞) well-posed, with (β,α,ι)∈(C(ℝ_0^+;ℝ))^3. Moreover, it is straightforward to check that if (S_0,V_0,e_0,a_0,i_0,R_0)=(0,0,0,0,0,N_0), then the solution of (<ref>), (<ref>), (<ref>) is the constant (β,α,ι)=(0,0,0). Hence, from the uniqueness of solution we derive the next result.If (S_0,V_0,e_0,a_0,i_0,R_0)∈(ℝ_0^+)^2×(L^1(ℝ_0^+;ℝ_0^+))^3×ℝ_0^+, then (β,α,ι)∈(C(ℝ_0^+;ℝ_0^+))^3.The global well-posedness of the main problem then follows.For every (S_0,V_0,e_0,a_0,i_0,R_0)∈(ℝ_0^+)^2×(L^1(ℝ_0^+;ℝ_0^+))^3×ℝ_0^+, the 𝒫 (<ref>) is globally well-posed, with (S,V,e,a,i)∈(C^1(ℝ_0^+;ℝ_0^+))^2×(C(ℝ_0^+;L^1(ℝ_0^+)))^3. In particular, the differential equations in (<ref>) and (<ref>) are satisfied ∀ t∈ℝ_0^+, while the subsystem (<ref>)-(<ref>) is satisfied in the following sense lim_h→ 0e(t+h,θ+h)-e(t,θ)h=-(k(θ)+μ)e(t,θ), for a.e. (t,θ)∈(ℝ_0^+)^2e(t,0)=ε(t),∀ t∈ℝ^+e(0,θ)=e_0(θ), for a.e. θ∈ℝ_0^+,lim_h→ 0a(t+h,θ+h)-a(t,θ)h=-(γ_A(θ)ξ(θ)+χ(θ)(1-ξ(θ))+μ)a(t,θ), for a.e. (t,θ)∈(ℝ_0^+)^2a(t,0)=α(t),∀ t∈ℝ^+a(0,θ)=a_0(θ), for a.e. θ∈ℝ_0^+,lim_h→ 0i(t+h,θ+h)-i(t,θ)h=-(γ_I(θ)+μ)i(t,θ), for a.e. (t,θ)∈(ℝ_0^+)^2i(t,0)=ι(t),∀ t∈ℝ^+i(0,θ)=i_0(θ), for a.e. 
θ∈ℝ_0^+.We also note that we can obtain certain regularity results by strengthening the assumptions regarding the data of the problem, but this lies beyond the scope of the present work.§.§.§ Steady states and basic reproductive number A steady state, (S^*,V^*,e^*,a^*,i^*), of 𝒫 (<ref>) is a constant-with-respect-to-t solution, i.e. it is defined to satisfy 0=μ N_0-(p+β^*+μ)S^*, 0=p S^*-(ζϵ+β^*(1-ϵ)+μ)V^*,e^*θ=-(k+μ)e^*e^*(0)=ε^*,a^*θ=-(γ_Aξ+χ(1-ξ)+μ)a^*a^*(0)=α^*,i^*θ=-(γ_I+μ)i^*i^*(0)=ι^*, where β^* ∫_0^∞β_A(θ)a^*(θ)+β_I(θ)i^*(θ) θ, ε^* β^*(S^*+(1-ϵ)V^*), α^* ∫_0^∞k(θ)q(θ)e^*(θ) θ, ι^* ∫_0^∞k(θ)(1-q(θ))e^*(θ)+χ(θ)(1-ξ(θ))a^*(θ) θ, hence S^* =μ N_0/p+β^*+μ,V^* =p S^*/ζϵ+β^*(1-ϵ)+μ,e^*(θ) =ε^*^-∫_0^θk(s)+μs,∀θ∈ℝ_0^+,a^*(θ) =α^*^-∫_0^θγ_A(s)ξ(s)+χ(s)(1-ξ(s))+μs,∀θ∈ℝ_0^+,i^*(θ) =ι^*^-∫_0^θγ_I(s)+μs,∀θ∈ℝ_0^+. By plugging (<ref>)-(<ref>) into (<ref>) and expressing the components of a steady state exclusively in terms of the constant parameter β^*, we get S^* =μ N_0/p+β^*+μ,V^* =pμ N_0/(p+β^*+μ)(ζϵ+β^*(1-ϵ)+μ),e^*(θ) =β^*μ N_0/p+β^*+μ(1+p(1-ϵ)/ζϵ+β^*(1-ϵ)+μ)^-∫_0^θk(s)+μs,∀θ∈ℝ_0^+,a^*(θ) =β^*μ N_0/p+β^*+μ(1+p(1-ϵ)/ζϵ+β^*(1-ϵ)+μ)∫_0^∞k(s)q(s)^-∫_0^sk(τ)+μ τs×=×^-∫_0^θγ_A(s)ξ(s)+χ(s)(1-ξ(s))+μs,∀θ∈ℝ_0^+,i^*(θ) =β^*μ N_0/p+β^*+μ(1+p(1-ϵ)/ζϵ+β^*(1-ϵ)+μ)×=×(+∫_0^∞k(s)(1-q(s))^-∫_0^sk(τ)+μ τs++∫_0^∞k(s)q(s)^-∫_0^sk(τ)+μ τs∫_0^∞χ(s)(1-ξ(s))^-∫_0^sγ_A(τ)ξ(τ)+χ(τ)(1-ξ(τ))+μ τs)×=×^-∫_0^θγ_I(s)+μs,∀θ∈ℝ_0^+.We now set ℝ_0^+∋ℛ_0μ N_0/p+μ(1+p(1-ϵ)/ζϵ+μ)(ℛ_A+ℛ_I),where ℝ_0^+∋ℛ_A∫_0^∞k(s)q(s)^-∫_0^sk(τ)+μ τs∫_0^∞β_A(s)^-∫_0^sγ_A(τ)ξ(τ)+χ(τ)(1-ξ(τ))+μ τsand ℝ_0^+∋ℛ_I (+∫_0^∞k(s)(1-q(s))^-∫_0^sk(τ)+μ τs++∫_0^∞k(s)q(s)^-∫_0^sk(τ)+μ τs∫_0^∞χ(s)(1-ξ(s))^-∫_0^sγ_A(τ)ξ(τ)+χ(τ)(1-ξ(τ))+μ τs)××∫_0^∞β_I(s)^-∫_0^sγ_I(τ)+μ τs,for the basic reproductive number of the aforementioned problem. Its definition emerges naturally from the following result. Concerning β^*∈ℝ_0^+,* if ℛ_0≤ 1, then β^*=0, * if ℛ_0> 1, then* either β^*=0, * or β^*>0, such thatb_2β^*^2+b_1β^*+b_0=0,where b_2 =(1-ϵ),b_1 =(p+μ)(1-ϵ)+ζϵ+μ-μ N_0(1-ϵ)(ℛ_A+ℛ_I),b_0 =(p+μ)(ϵζ+μ)(1-ℛ_0).We substitute a^* and i^* of (<ref>) and (<ref>), respectively, into (<ref>) to deduce thatβ^*=β^*μ N_0/p+β^*+μ(1+p(1-ϵ)/ζϵ+β^*(1-ϵ)+μ)(ℛ_A+ℛ_I).There are only two discrete cases, either β^*=0, or β^*>0. If β^*>0, then, equivalently,1=μ N_0/p+β^*+μ(1+p(1-ϵ)/ζϵ+β^*+(1-ϵ)+μ)(ℛ_A+ℛ_I),or elseb_2β^*^2+b_1β^*+b_0=0.We observe that* if ϵ=1 then b_2=0 and b_1>0,* if ϵ≠ 1 then b_2>0. Therefore, in any case, there exists β^*>0 satisfying the above equation iff b_3<0, i.e. ℛ_0>1, and of course, such β^* is unique.The following result is now straightforward.Concerning (S^*,V^*,e^*,a^*,i^*),* if ℛ_0≤ 1 then (^*,a^*,i^*)=(0,0,0), * if ℛ_0>1 then* either (^*,a^*,i^*)=(0,0,0),* or (^*,a^*,i^*)>(0,0,0).The solution (S^*,V^*,e^*,a^*,i^*) is called disease-free steady state if (^*,a^*,i^*)=(0,0,0), as well as endemic steady state if (^*,a^*,i^*)>(0,0,0).§.§.§ Global stability We are interested in the longer-time dynamics of the modeled epidemiological phenomenon, globally with respect to the set of initial data, (ℝ_0^+)^2×(L^1(ℝ_0^+;ℝ_0^+))^3×ℝ_0^+. Below we check the global stability of the steady state of 𝒫 (<ref>) by finding a Lyapunov function. 
Since the steady state changes with respect to the sign of 1-ℛ_0, we check each such case separately.If ℛ_0≤ 1, then the disease-free steady state,(S^*,V^*,e^*,a^*,i^*)=(μ N_0/p+μ,pμ N_0/(p+μ)(ζϵ+μ),0,0,0),is globally asymptotically stable.Step I:We define the following functions f ℝ^+ →ℝ_0^+x ↦ f(x) x-1-lnx,andL ℝ_0^+ →ℝ_0^+t ↦ L(t;S,V,e,a,i) L_SV+L_E+L_A+L_I,where L_SV S^*f(S/S^*)+V^*f(V/V^*),L_E∫_0^∞f_E(θ)e( · ,θ) θL_A∫_0^∞f_A(θ)a( · ,θ) θL_I∫_0^∞f_I(θ)i( · ,θ) θ,and f_E, f_A and f_I are left to be defined.Step II:We differentiate L_SV, L_E, L_A and L_I. From (<ref>) and (<ref>) we get L_SV t =(1-S^*/S) S t+(1-V^*/V) V t==μ S^*(2-S/S^*-S^*/S)+p S^*(3-V/V^*-S^*/S-SV^*/S^*V)-β(S+(1-ϵ)V)+β(S^*+(1-ϵ)V^*).With (<ref>) at hand, we also calculateL_E t = t(∫_0^tf_E(θ)ε(t-θ)^-∫_0^θk(s)+μs θ+∫_t^∞f_E(θ)e_0(θ-t)^-∫_0^tk(θ-t+s)+μs θ)==f_E(0)ε+∫_0^∞( f_Eθ(θ)-f_E(θ)(k(θ)+μ))e( · ,θ) θ.Similarly, from (<ref>) and (<ref>) we deduce the following expressions L_A t=f_A(0)α+∫_0^∞( f_Aθ(θ)-f_A(θ)(γ_A(θ)ξ(θ)+χ(θ)(1-ξ(θ))+μ))a( · ,θ) θandL_I t=f_I(0)ι+∫_0^∞( f_Iθ(θ)-f_I(θ)(γ_I(θ)+μ))i( · ,θ) θ.Therefore, we haveL t = L_SV t+ L_E t+ L_A t+ L_I t==-μ S^*(S/S^*+S^*/S-1)-p S^*(V/V^*+S^*/S+SV^*/S^*V-3)-(1-f_E(0))ε+=+(S^*+(1-ϵ)V^*)∫_0^∞β_A(θ)a( · ,θ)+β_I(θ)i( · ,θ) θ+=+∫_0^∞( f_Eθ(θ)-f_E(θ)(k(θ)+μ)+f_A(0)k(θ)q(θ)+f_I(0)k(θ)(1-q(θ)))e( · ,θ) θ+=+∫_0^∞( f_Aθ(θ)-f_A(θ)(γ_A(θ)ξ(θ)+χ(θ)(1-ξ(θ))+μ)+f_I(0)χ(θ)(1-ξ(θ)))a( · ,θ) θ+=+∫_0^∞( f_Iθ(θ)-f_I(θ)(γ_I(θ)+μ))i( · ,θ) θ. Step III:We choose f_E, f_A and f_I such that the latter terms in the last equation to be zero, that isf_Eθ =f_E(k+μ)-f_A(0)kq-f_I(0)k(1-q),f_Aθ =f_A(γ_Aξ+χ(1-ξ)+μ)-f_I(0)χ(1-ξ)-(S^*+(1-ϵ)V^*)β_A,f_Iθ =f_I(γ_I+μ)-(S^*+(1-ϵ)V^*)β_I.Hence, ∀θ∈ℝ_0^+, we set f_E(θ)f_A(0)∫_0^∞k(s)q(s)^-∫_θ^sk(τ)+μ τs+f_I(0)∫_0^∞k(s)(1-q(s))^-∫_θ^sk(τ)+μ τs,f_A(θ) (S^*+(1-ϵ)V^*)∫_θ^∞β_A(s)^-∫_θ^sγ_A(τ)ξ(τ)+χ(τ)(1-ξ(τ))+μ τs++f_I(0)∫_θ^∞χ(s)(1-ξ(s))^-∫_θ^sγ_A(τ)ξ(τ)+χ(τ)(1-ξ(τ))+μ τs,f_I(θ) (S^*+(1-ϵ)V^*)∫_θ^∞β_I(s)^-∫_θ^sγ_I(τ)+μ τs.For f_E, f_A and f_I defined as such we haveL t =-μ S^*(S/S^*+S^*/S-1)-pS^*(V/V^*+S^*/S+SV^*/S^*V-3)-(1-ℛ_0)ε. Step IV:Due to the arithmetic-geometric mean inequality, we deriveℛ_0≤ 1⇒L/ t≤ 0,∀ t∈ℝ_0^+and the equality holds only for the disease-free steady state, i.e. when(S,V,e,a,i)=(S^*,V^*,e^*,a^*,i^*).Hence, the singleton {(S^*,V^*,e^*,a^*,i^*)} is the largest invariant set for whichL/ t=0.Then, from the LaSalle in-variance principle it follows that the disease-free steady state is globally asymptotically stable.If ℛ_0>1, then the endemic steady state,(S^*,V^*,e^*,a^*,i^*)≠(S^*,V^*,0,0,0),is globally asymptotically stable.Step I:Based on (<ref>)-(<ref>), we now define L ℝ_0^+ →ℝ_0^+t ↦ L(t;S,V,e,a,i) L_SV+L_E+L_A+L_I,withL_SVS^*f(S/S^*)+V^*f(V/V^*),L_Ef_A(0)∫_0^∞∫_θ^∞k(s)q(s)e^*(s)s f(e( · ,θ)/e^*(θ)) θ++f_I(0)∫_0^∞∫_θ^∞k(s)(1-q(s))e^*(s)s f(e( · ,θ)/e^*(θ)) θ,L_A (S^*+(1-ϵ)V^*)∫_0^∞∫_θ^∞β_A(s)a^*(s)s f(a( · ,θ)/a^*(θ)) θ++f_I(0)∫_0^∞∫_θ^∞χ(s)(1-ξ(s))a^*(s)s f(a( · ,θ)/a^*(θ)) θ,L_I (S^*+(1-ϵ)V^*)∫_0^∞∫_θ^∞β_I(s)i^*(s)s f(i( · ,θ)/i^*(θ)) θand f ℝ^+ →ℝ_0^+x ↦ f(x) x-1-lnx.Step IIa:We differentiate L_SV, L_E, L_A and L_I. 
From (<ref>) and (<ref>) we get L_SV t =(1-S^*/S) S t+(1-V^*/V) V t==-μ S^*(S/S^*+S^*/S-2)-p S^*(V/V^*+S^*/S+SV^*/S^*V-3)+=+S^*∫_0^∞β_A(θ)a^*(θ)(1-S a( · ,θ)/S^*a^*(θ)-S^*/S+a( · ,θ)/a^*(θ)) θ+=+S^*∫_0^∞β_I(θ)i^*(θ)(1-S i( · ,θ)/S^*i^*(θ)-S^*/S+i( · ,θ)/i^*(θ)) θ+=+(1-ϵ)V^*∫_0^∞β_A(θ)a^*(θ)(-1-V a( · ,θ)/V^*a^*(θ)+V/V^*+a( · ,θ)/a^*(θ)) θ+=+(1-ϵ)V^*∫_0^∞β_I(θ)i^*(θ)(-1-V i( · ,θ)/V^*i^*(θ)+V/V^*+i( · ,θ)/i^*(θ)) θ.From the differential equation in (<ref>) along with (<ref>) we have∂∂ tf(e(t,θ)/e^*(θ))=-∂∂θf(e(t,θ)/e^*(θ)),thus L_E t =-f_A(0)∫_0^∞∫_θ^∞k(s)q(s)e^*(s)s ∂∂θf(e( · ,θ)/e^*(θ)) θ-=-f_I(0)∫_0^∞∫_θ^∞k(s)(1-q(s))e^*(s)s ∂∂θf(e( · ,θ)/e^*(θ)) θ==f_A(0)∫_0^∞k(θ)q(θ)e^*(θ)(ε/ε^*-e( · ,θ)/e^*(θ)+lne( · ,θ)/e^*(θ)-lnε/ε^*) θ+=+f_I(0)∫_0^∞k(θ)(1-q(θ))e^*(θ)(ε/ε^*-e( · ,θ)/e^*(θ)+lne( · ,θ)/e^*(θ)-lnε/ε^*) θ.Similarly, from (<ref>) along with (<ref>), as well as (<ref>) along with (<ref>), we deduce L_A t = (S^*+(1-ϵ)V^*)∫_0^∞β_A(θ)a^*(θ)(α/α^*-a( · ,θ)/a^*(θ)+lna( · ,θ)/a^*(θ)-lnα/α^*) θ+=+f_I(0)∫_0^∞χ(θ)(1-ξ(θ))a^*(θ)(α/α^*-a( · ,θ)/a^*(θ)+lna( · ,θ)/a^*(θ)-lnα/α^*) θandL_I t = (S^*+(1-ϵ)V^*)∫_0^∞β_I(θ)i^*(θ)(ι/ι^*-i( · ,θ)/i^*(θ)+lni( · ,θ)/i^*(θ)-lnι/ι^*) θ,respectively. Therefore, we haveL t = L_SV t+ L_E t+ L_A t+ L_I t==-μ S^*(S/S^*+S^*/S-2)-pS^*(V/V^*+S^*/S+SV^*/S^*V-3)+=+S^*∫_0^∞β_A(θ)a^*(θ)(1-S^*/S+lna( · ,θ)/a^*(θ)-lnα/α^*) θ+=+S^*∫_0^∞β_I(θ)i^*(θ)(1-S^*/S+lni( · ,θ)/i^*(θ)-lnι/ι^*) θ+=+(1-ϵ)V^*∫_0^∞β_A(θ)a^*(θ)(-1+V/V^*+lna( · ,θ)/a^*(θ)-lnα/α^*) θ+=+(1-ϵ)V^*∫_0^∞β_I(θ)i^*(θ)(-1+V/V^*+lni( · ,θ)/i^*(θ)-lnι/ι^*) θ+=+f_A(0)∫_0^∞k(θ)q(θ)e^*(θ)(lne( · ,θ)/e^*(θ)-lnε/ε^*) θ+=+f_I(0)∫_0^∞k(θ)(1-q(θ))e^*(θ)(lne( · ,θ)/e^*(θ)-lnε/ε^*) θ+=+f_I(0)∫_0^∞χ(θ)(1-ξ(θ))a^*(θ)(lna( · ,θ)/a^*(θ)-lnα/α^*) θ+∑_i=1^6D_i,where D_1 (S^*+(1-ϵ)V^*)∫_0^∞β_A(θ)a^*(θ)α/α^*+β_I(θ)i^*(θ)ι/ι^* θ,D_2 -(S^*+(1-ϵ)V^*)∫_0^∞β_A(θ)^-∫_0^θγ_A(s)ξ(s)+χ(s)(1-ξ(s))+μs θ∫_0^∞k(θ)q(θ)e^*(θ)e( · ,θ)/e^*(θ) θ--f_I(0)(∫_0^∞k(θ)(1-q(θ))e^*(θ)e( · ,θ)/e^*(θ) θ+∫_0^∞χ(θ)(1-ξ(θ))a^*(θ)a( · ,θ)/a^*(θ) θ),D_3 -S^*∫_0^∞β_A(θ)a^*(θ)S a( · ,θ)/S^*a^*(θ)+β_I(θ)i^*(θ)S i( · ,θ)/S^*i^*(θ) θ--(1-ϵ)V^*∫_0^∞β_A(θ)a^*(θ)V a( · ,θ)/V^*a^*(θ)+β_I(θ)i^*(θ)V i( · ,θ)/V^*i^*(θ) θ,D_4 ε/ε^*∫_0^∞(f_A(0)k(θ)q(θ)+f_I(0)k(θ)(1-q(θ)))e^*(θ) θ,D_5 -f_I(0)∫_0^∞χ(θ)(1-ξ(θ))^-∫_0^θγ_A(s)ξ(s)+χ(s)(1-ξ(s))+μs θ∫_0^∞k(θ)q(θ)e^*(θ)e( · ,θ)/e^*(θ) θ,D_6f_I(0)∫_0^∞χ(θ)(1-ξ(θ))a^*(θ)α/α^* θ.Step IIb:From (<ref>) and (<ref>) we see thatD_5+D_6=0.Moreover, by (<ref>) and (<ref>), along with (<ref>) and (<ref>), we observe thatD_4 =ε/ε^*(S^*+(1-ϵ)V^*)∫_0^∞β_A(θ)^-∫_0^θγ_A(s)ξ(s)+χ(s)(1-ξ(s))+μs θ∫_0^∞k(θ)q(θ)e^*(θ) θ+=+ε/ε^*f_I(0)(∫_0^∞k(θ)(1-q(θ))e^*(θ)+χ(θ)(1-ξ(θ))a^*(θ) θ)==ε/ε^*(S^*+(1-ϵ)V^*)(α^*∫_0^∞β_A(θ)^-∫_0^θγ_A(s)ξ(s)+χ(s)(1-ξ(s))+μs θ+ι^*∫_0^∞β_I(θ)^-∫_0^θγ_I(s)+μs θ)==ε/ε^*(S^*+(1-ϵ)V^*)β^*=ε/ε^*ε^*=β(S+(1-ϵ)V)==(S+(1-ϵ)V)∫_0^∞β_A(θ)a( · ,θ)+β_I(θ)i( · ,θ) θ=-D_3.Additionally, from (<ref>) and (<ref>) we have that-D_2 =α(S^*+(1-ϵ)V^*)∫_0^∞β_A(θ)^-∫_0^θγ_A(s)ξ(s)+χ(s)(1-ξ(s))+μs θ+ι f_I(0)==αα^*α^*(S^*+(1-ϵ)V^*)∫_0^∞β_A(θ)^-∫_0^θγ_A(s)ξ(s)+χ(s)(1-ξ(s))+μs θ+ιι^*ι^*f_I(0)==αα^*(S^*+(1-ϵ)V^*)∫_0^∞β_A(θ)^-∫_0^θγ_A(s)ξ(s)+χ(s)(1-ξ(s))+μs θ∫_0^∞k(θ)q(θ)e^*(θ) θ+=+ιι^*f_I(0)(∫_0^∞k(θ)(1-q(θ))e^*(θ) θ+∫_0^∞χ(θ)(1-ξ(θ))a^*(θ) θ)=D_1.Consequently,∑_i=1^6D_i=0and with (<ref>) at hand we deduce that L t =-μ S^*(S/S^*+S^*/S-2)-(ζϵ+β^*(1-ϵ)+μ)V^*(V/V^*+S^*/S+SV^*/S^*V-3)+=+S^*∫_0^∞β_A(θ)a^*(θ)(1-S^*/S+lna( · ,θ)/a^*(θ)-lnα/α^*) θ+=+S^*∫_0^∞β_I(θ)i^*(θ)(1-S^*/S+lni( · ,θ)/i^*(θ)-lnι/ι^*) θ+=+(1-ϵ)V^*∫_0^∞β_A(θ)a^*(θ)(-1+V/V^*+lna( · ,θ)/a^*(θ)-lnα/α^*) 
θ+=+(1-ϵ)V^*∫_0^∞β_I(θ)i^*(θ)(-1+V/V^*+lni( · ,θ)/i^*(θ)-lnι/ι^*) θ+=+f_A(0)∫_0^∞k(θ)q(θ)e^*(θ)(lne( · ,θ)/e^*(θ)-lnε/ε^*) θ+=+f_I(0)∫_0^∞k(θ)(1-q(θ))e^*(θ)(lne( · ,θ)/e^*(θ)-lnε/ε^*) θ+=+f_I(0)∫_0^∞χ(θ)(1-ξ(θ))a^*(θ)(lna( · ,θ)/a^*(θ)-lnα/α^*) θ.Step IIc: We then proceed by adding some useful zero terms in the above equation. First, from (<ref>), along with (<ref>) and (<ref>), we have the following useful expression for(S^*+(1-ϵ)V^*)∫_0^∞β_A(θ)a^*(θ)+β_I(θ)i^*(θ) θ=ε^*as follows ε^* =ε^*εε=ε^*ε(S+(1-ϵ)V)∫_0^∞β_A(θ)a( · ,θ)+β_I(θ)i( · ,θ) θ==S^*∫_0^∞β_A(θ)a^*(θ)S a( · ,θ)ε^*/S^*a^*(θ)ε+β_I(θ)i^*(θ)S i( · ,θ)ε^*/S^*i^*(θ)ε θ+=+(1-ϵ)V^*∫_0^∞β_A(θ)a^*(θ)V a( · ,θ)ε^*/V^*a^*(θ)ε+β_I(θ)i^*(θ)V i( · ,θ)ε^*/V^*i^*(θ)ε θ.Second, by (<ref>) and (<ref>), along with (<ref>) and (<ref>), we have0 =(S^*+(1-ϵ)V^*)∫_0^∞β_A(θ)^-∫_0^θγ_A(s)ξ(s)+χ(s)(1-ξ(s))+μs θ(α^*-α^*αα)+f_I(0)(ι^*-ι^*ιι)==(S^*+(1-ϵ)V^*)∫_0^∞β_A(θ)^-∫_0^θγ_A(s)ξ(s)+χ(s)(1-ξ(s))+μs θ∫_0^∞k(θ)q(θ)e^*(θ)(1-e( · ,θ)α^*/e^*(θ)α) θ+=+f_I(0)(∫_0^∞k(θ)(1-q(θ))e^*(θ)(1-e( · ,θ)ι^*/e^*(θ)ι) θ+∫_0^∞χ(θ)(1-ξ(θ))a^*(θ)(1-a( · ,θ)ι^*/a^*(θ)ι) θ).In view of the above, we now write L t =-μ S^*(S/S^*+S^*/S-2)-(ζϵ+μ)V^*(V/V^*+S^*/S+SV^*/S^*V-3)+=+(S^*+(1-ϵ)V^*)∫_0^∞(β_A(θ)a^*(θ)+β_I(θ)i^*(θ))(1-S^*/S+lnS^*/S) θ+=+S^*∫_0^∞β_A(θ)a^*(θ)(lnS a( · ,θ)/S^*a^*(θ)-lnα/α^*+1-S a( · ,θ)ε^*S^*a^*(θ)ε) θ+=+S^*∫_0^∞β_I(θ)i^*(θ)(lnS i( · ,θ)/S^*i^*(θ)-lnι/ι^*+1-S i( · ,θ)ε^*S^*i^*(θ)ε) θ+=+(1-ϵ)V^*∫_0^∞β_A(θ)a^*(θ)(1-SV^*S^*V+lnS a( · ,θ)/S^*a^*(θ)-lnα/α^*+1-V a( · ,θ)ε^*V^*a^*(θ)ε) θ+=+(1-ϵ)V^*∫_0^∞β_I(θ)i^*(θ)(1-SV^*S^*V+lnS i( · ,θ)/S^*i^*(θ)-lnι/ι^*+1-V i( · ,θ)ε^*V^*i^*(θ)ε) θ+=+(1-ϵ)V^*∫_0^∞β_A(θ)a^*(θ)(lnSV^*S^*V-lnSV^*S^*V+lnV a( · ,θ)ε^*V^*a^*(θ)ε-lnV a( · ,θ)ε^*V^*a^*(θ)ε) θ+=+(1-ϵ)V^*∫_0^∞β_I(θ)i^*(θ)(lnSV^*S^*V-lnSV^*S^*V+lnV i( · ,θ)ε^*V^*i^*(θ)ε-lnV i( · ,θ)ε^*V^*i^*(θ)ε) θ+=+f_A(0)∫_0^∞k(θ)q(θ)e^*(θ)(lne( · ,θ)/e^*(θ)-lnε/ε^*) θ+=+f_I(0)∫_0^∞k(θ)(1-q(θ))e^*(θ)(lne( · ,θ)/e^*(θ)-lnε/ε^*) θ+=+f_I(0)∫_0^∞χ(θ)(1-ξ(θ))a^*(θ)(lna( · ,θ)/a^*(θ)-lnα/α^*) θ+=+(S^*+(1-ϵ)V^*)∫_0^∞β_A(θ)^-∫_0^θγ_A(s)ξ(s)+χ(s)(1-ξ(s))+μs θ∫_0^∞k(θ)q(θ)e^*(θ)(1-e( · ,θ)α^*/e^*(θ)α) θ+=+f_I(0)(∫_0^∞k(θ)(1-q(θ))e^*(θ)(1-e( · ,θ)ι^*/e^*(θ)ι) θ+∫_0^∞χ(θ)(1-ξ(θ))a^*(θ)(1-a( · ,θ)ι^*/a^*(θ)ι) θ).Step IId:From the definition of f and the equation D_1 =αα^*(S^*+(1-ϵ)V^*)∫_0^∞β_A(θ)^-∫_0^θγ_A(s)ξ(s)+χ(s)(1-ξ(s))+μs θ∫_0^∞k(θ)q(θ)e^*(θ) θ+=+ιι^*f_I(0)(∫_0^∞k(θ)(1-q(θ))e^*(θ) θ+∫_0^∞χ(θ)(1-ξ(θ))a^*(θ) θ),the expression can eventually be simplified as follows L t =-μ S^*(S/S^*+S^*/S-2)-(ζϵ+μ)V^*(V/V^*+S^*/S+SV^*/S^*V-3)-=-(S^*+(1-ϵ)V^*)∫_0^∞(β_A(θ)a^*(θ)+β_I(θ)i^*(θ))f(S^*/S) θ-=-S^*∫_0^∞β_A(θ)a^*(θ)f(S a( · ,θ)ε^*S^*a^*(θ)ε) θ-S^*∫_0^∞β_I(θ)i^*(θ)f(S i( · ,θ)ε^*S^*i^*(θ)ε) θ-=-(1-ϵ)V^*∫_0^∞β_A(θ)a^*(θ)f(V a( · ,θ)ε^*V^*a^*(θ)ε) θ-(1-ϵ)V^*∫_0^∞β_I(θ)i^*(θ)f(V i( · ,θ)ε^*V^*i^*(θ)ε) θ-=-(1-ϵ)V^*∫_0^∞(β_A(θ)a^*(θ)+β_I(θ)i^*(θ))f(SV^*S^*V) θ-=-(S^*+(1-ϵ)V^*)∫_0^∞β_A(θ)^-∫_0^θγ_A(s)ξ(s)+χ(s)(1-ξ(s))+μs θ∫_0^∞k(θ)q(θ)e^*(θ)f(e( · ,θ)α^*/e^*(θ)α) θ-=-f_I(0)∫_0^∞k(θ)(1-q(θ))e^*(θ)f(e( · ,θ)ι^*/e^*(θ)ι) θ-f_I(0)∫_0^∞χ(θ)(1-ξ(θ))a^*(θ)f(a( · ,θ)ι^*/a^*(θ)ι) θ.Step III:Employing the arithmetic-geometric mean inequality, we getℛ_0≤ 1⇒L/ t≤ 0,∀ t∈ℝ_0^+and the equality holds only for the endemic steady state, i.e. when (S,V,e,a,i)=(S^*,V^*,e^*,a^*,i^*). 
Hence, the singleton {(S^*,V^*,e^*,a^*,i^*)} is the largest invariant set for whichL/ t=0.Then, from the LaSalle in-variance principle it follows that the endemic steady state is globally asymptotically stable.§ NUMERICAL SIMULATIONSIn this section, we numerically solve 𝒫 (<ref>) in order to verify the validity of the analysis performed in [sec:derivanal]<ref>sec:derivanal and to further investigate the behavior of 𝒫 (<ref>). §.§ Numerical schemeHere, we present the temporal discretization used to numerically solve 𝒫 and the code used to implement it.§.§.§ Temporal discretization We assume that the maximum age of the population, θ_†, is equal to 90 · 360 days. Furthermore, we study 𝒫 (<ref>) for a time of up to 1500 days. Hence, we solve 𝒫 (<ref>) in the interval (t, θ) ∈[0,1500 ] ×[0,90 · 360 ]· days. The time-age step we chose is h = 0.05. Let 𝒩 be the number of time-age steps needed to reach the maximum age, i.e θ_†, and 𝒥 be the number of time-age steps needed to reach the maximum time, i.e 1500 days.To discretize the time derivative, we use the following first-order forward difference scheme:∂/∂ t( u(t^n) ) = lim_h→0^+u(t^n+h)-u(t^n)/h≈u^n+1-u^n/h,0 ≤ n ≤𝒩-1,for u∈ {S(t), V(t)|t ∈[0,1500 ] · days}.To discretize the temporal directional derivative, we use the following first-order approximation:( ∂/∂ t+∂/∂θ) ( u(t^n, θ_j) ) =lim_h→0^+u(t^n+h,θ_j+h)-u(t^n,θ_j)/h≈u_j+1^n+1-u_j^n/h,0 ≤ n ≤𝒩-1, 0 ≤ j ≤𝒥-1,for u∈ {e(t, θ ),a(t, θ ),i(t, θ ) | (t, θ)∈[0,1500 ] ×[0,90 · 360 ]· days} .To discretize the integrals, we use the following quadrature formula:∫_0^∞g(θ)u( t^n ,θ) θ≈ h∑_j=0^𝒥g(θ_j)u( t^n ,θ_j) =h∑_j=0^𝒥g_j u^n_j ,0 ≤ n ≤𝒩-1,for (u,g)∈ {e(t, θ ),a(t, θ ),i(t, θ ) | (t, θ)∈[0,1500 ] ×[0,90 · 360 ]· days}× ×{β_A(θ),β_I(θ),k(θ),q(θ),γ_A(θ),ξ(θ),χ(θ),γ_I(θ) | θ∈[0,90 · 360 ]· days} . §.§.§ Code implementation To implement the aforementioned discretization schemes, we use Julia (v1.8.5) <cit.>. The code can be found at <https://github.com/TsilidisV/age-structured-SVeaiR-model>. To plot the numerical solution of 𝒫 (<ref>), we use Makie.jl <cit.>. To save and load the results, we use JLD2.jl and CodecZlib.jl. To calculate ℛ_0, we use QuadGK.jl <cit.> and Integrals.jl <cit.>. To create faster Julia structs for the parameters and initial conditions, we use FunctionWrappers.jl. Finally, we use Dierckx.jl to interpolate, as well as CSV.jl and DataFrames.jl <cit.> to load the data for the parameter values. §.§ Parameter valuesHere, we give a description of the parameter values chosen to represent the case of SARS-CoV-2.A summary of the parameter values, can be found in [tab:paramValues]Table <ref>tab:paramValues. * N_0 = 80 · 10^6, the size of the population, is chosen to be that of a relative large country <cit.>.* μ = 4.38356 · 10^-5 day^-1, the birth/death rate, is converted from the average birth/death rate of the world for the year 2021, 16 per 1000 individuals per year, found in <cit.>.* β_A and β_I are functions of age and are estimated from <cit.>. As can be seen from Fig. 2 of <cit.>, the average contacts an individual makes each day regardless of their epidemiological status is about 16.71 contacts per day. 
In order to examine the effect of age in the dynamics of 𝒫 (<ref>), we assume the following two functions to respectively model two extreme cases of the average number of contacts an individual makes:c_1(θ)= 16.71/0.38exp((θ-80ω/10^4)^2),θ∈[0,90 · 360 ]· daysc_2(θ)= 16.71/0.38exp((θ-10ω/10^4)^2),θ∈[0,90 · 360 ]· days .Both c_1 and c_2 have the same mean value of 16.71 contacts per day in the interval [0,90 · 360 ]· days. We additionally assume that the probability of an exposed individual passing to the compartments of asymptomatic and symptomatic individuals to be ϖ_E→ A = 1/5 and ϖ_E→ I = 2/5, respectively. Finally, assuming the transmission rates to be defined as β_A_i = c_i ·ϖ_E→ A/N_0 and β_I_i= c_i ·ϖ_E→ I/N_0, for i = 1,2, we get [fig:2]Figure <ref>fig:2.* p = 10^-3 day^-1, the vaccination rate, is assumed to be that during the summer of 2021 in the USA <cit.>.* ϵ = 0.7, the vaccine effectiveness, is assumed to be an average effectiveness of the BNT162b2 and ChAdOx1 nCoV-19 vaccine <cit.>.* ζ = 1/14, the vaccine-induced immunity rate, is taken from <cit.>.* k, the latent rate, is a function of age and is taken by assuming that the latent and incubation period differ by one day <cit.>. It is given byk(θ) = 1/4 day^-1,θ < 30 · 3601/4.8 day^-1, 30 · 360 ≤θ < 40 · 3601/4.8 day^-1, 40 · 360 ≤θ < 50 · 360 1/5.5 day^-1, 50 · 360 ≤θ < 60 · 360 1/3.1 day^-1, 60 · 360 ≤θ < 70 · 360 1/6 day^-1, 70 · 360 ≤θ. * q, the proportion of the latent/exposed individuals becoming asymptomatic infectious is taken from <cit.> and can be seen in [fig:3]Figure <ref>fig:3. To digitise the data from <cit.>, we use WebPlotDigitizer 4.6 <cit.>.* ξ = 0.5, the proportion of the asymptomatic infectious individuals becoming recovered/removed without developing any symptoms, is estimated from <cit.>.* χ, the incubation rate, is a function of age and is taken form data from <cit.>. It is given byχ(θ) = 1/5 day^-1,θ < 30 · 360 1/5.8 day^-1, 30 · 360 ≤θ < 40 · 360 1/5.8 day^-1, 40 · 360 ≤θ < 50 · 360 1/6.5 day^-1, 50 · 360 ≤θ < 60 · 360 1/4.1 day^-1, 60 · 360 ≤θ < 70 · 360 1/7 day^-1, 70 · 360 ≤θ.* γ_A = 1/8 day^-1, the recovery rate of the asymptomatic infectious individuals, is a function of age, but it is taken as a constant due to lack of available data. It is estimated from <cit.>. * γ_I = 1/14 day^-1, the recovery rate of the symptomatic infectious individuals, is a function of age, but it is taken as a constant due to lack of available data. It is estimated from <cit.>. §.§ Results Throughout our simulations we assume that S_0=V_0=2· 10^7 individuals. In order to study 𝒫 (<ref>) in a global scale, we vary the rest of the initial conditions. In particular, we assume that E_0 = A_0 = I_0 = d and let d take the values of 10, 10^4, 10^6, 4 · 10^6, 10^7.§.§.§ The case of R0<=1Here, we assume the average number of contacts of each individual, c, to be as in (<ref>), i.e. c=c_1. In such a case, ℛ_0=5.95 · 10^-5. As we see in [fig:figure4]Figure <ref>fig:figure4, for every initial condition we have that ( E, A,I) →( 0, 0,0), as t →∞. This confirms the global stability analysis performed in [sec:derivanal]<ref>sec:derivanal, since the solutions converge to the disease-free steady state for every initial condition when ℛ_0 ≤ 1.§.§.§ The case of R0>1Here, we assume the average number of contacts of each individual, c, to be as in (<ref>), i.e. c=c_2. In such a case, ℛ_0=9.14. As we see in [fig:figure5]Figure <ref>fig:figure5, for every initial condition we have that(E, A, I) converges to a nonzero value, as t →∞. 
This confirms the global stability analysis performed in [sec:derivanal]<ref>sec:derivanal, since the solutions converge, in an oscillatory way, to the endemic steady state for every initial condition when ℛ_0>1. § CONCLUSIONS AND DISCUSSION In this paper, we derived an age-structured epidemiological compartment problem and we studied it in terms of global well-posedness and stability analysis. From this analysis we deduced the basic reproductive number, ℛ_0, of the model, a critical measure of the transmission potential of a disease. The model presented in this paper focused on the age structure of a population. A straightforward generalization includes the consideration of more independent variables, such as a spatial one. Moreover, it would be essential to include additional, potentially important factors of the evolution of the epidemiological phenomenon, such as waning immunity gained by both infected and vaccinated individuals. | http://arxiv.org/abs/2311.16049v1 | {
"authors": [
"Vasiliki Bitsouni",
"Nikolaos Gialelis",
"Vasilis Tsilidis"
],
"categories": [
"math.DS",
"math.AP",
"physics.soc-ph",
"q-bio.PE",
"35B35, 35Q92, 37N25, 92D30"
],
"primary_category": "math.DS",
"published": "20231127181335",
"title": "An age-structured SVEAIR epidemiological model"
} |
Simulations of binary collisions involving compact objects require initial data that satisfy the constraint equations of general relativity. For binary boson star simulations it is common practice to use a superposition of two isolated star solutions to construct an approximate solution to the constraint equations. Such superposed data is simple to set up compared to solving these equations explicitly, but also introduces extra constraint violations in the time evolution. In this work we investigate how physical observables depend on the quality of initial data in the case of head-on boson star collisions. In particular we compare results obtained from data prepared using four different methods: the standard method to superpose isolated stars, a heuristic improvement to this superposition technique, and two versions of this data where excess constraint violations were removed through a conformal thin-sandwich solver. We find that differences in the time evolutions are dominated by the differences between the two superposition methods, whereas additionally constraint solving the superposed data has a smaller impact. The numerical experiments are conducted using the pseudospectral code bamps. Our work demonstrates that bamps is a code suited for generating high accuracy numerical waveforms for boson star collisions due to the exponential convergence in the polynomial resolution of the numerical approximation. Boson star head-on collisions with constraint-violating and constraint-satisfying initial data Bernd Brügmann 0000-0003-4623-0525 January 14, 2024 ============================================================================================== § INTRODUCTION With the first successful numerical relativity (NR) simulations of binary black hole (BH) collisions carried out <cit.>, an industry for engineering waveform templates was founded which played a key role in the first gravitational wave (GW) detections <cit.>. The late inspiral part of GW templates is informed by NR simulations that solve an initial-boundary-value problem posed by a Cauchy formulation of the Einstein field equations (EFEs) of general relativity (GR). Essential to such simulations is the data described on an initial hypersurface, which is then propagated forward in time to trace out a foliation of spacetime. This initial data should satisfy the Hamiltonian and momentum constraint equations of GR. If the initial data also involve matter models in the form of neutron stars (NSs), then one must also ensure that each star is initially in a state of quasi-equilibrium; this should account for effects like tidal deformations, which arise because the stars inspiral around each other at a finite distance. Given the difficulty of numerically solving the constraint equations to generate such physically plausible data, a variety of formalisms and numerical methods have been developed for collisions in the context of BHs and NSs (see <cit.> for a review). Almost a decade after the first GW detection the waveform template industry continues pushing forward analytical, computational, and phenomenological boundaries in order to keep up with the advancing detector technology and upcoming new experiments.
Among these theoretical advances is also considerable effort to study compact objects described by exotic matter models that have been developed as dark matter candidates. Among those candidates are boson stars (BSs), which were first theorized in <cit.> and for which a variety of NR studies have been conducted. These studies uncovered a dynamical formation process termed gravitational cooling <cit.>, as well as the anatomy of the GW signals from coalescences of such objects, see <cit.> for head-on collisions and <cit.> for inspirals. Also see <cit.> for a review on BSs and variations thereof. Most of today's understanding of BS collisions comes from studies that use a superposition of isolated BSs as initial data, which does not satisfy the constraint equations. Such data should be seen as approximate solutions to the constraint equations. Its use can be justified when it is explicitly verified that the error due to initial constraint violations does not dominate the overall error budget of the simulation. Only recently, work has started on refining the initial data construction for BS encounters. In <cit.> a heuristically motivated correction to the commonly used method of superposition was developed by minimizing initial constraint violations, with some still remaining. To the best of our knowledge the first BS collisions using constraint-solved data were reported in <cit.>, which discussed BS head-on encounters only to calibrate the BAM code for evolutions of mixed NS-BS systems. In <cit.> a generic constraint solver for BS initial data was developed, which is an important contribution in order to catch up with the state-of-the-art initial data construction techniques commonly used for BH and NS simulations. Such constraint-solved data was used to demonstrate that the collision of two non-rotating BSs can yield a rotating remnant <cit.>. As the sensitivity of the next generation of GW detectors increases, the accuracy demands on GW templates do as well. In anticipation of such a trend, new NR codes are developed and existing NR codes are upgraded to improve the computational efficiency and the mathematical modeling of physical processes, e.g., BAM <cit.>, GR-Athena++ <cit.>, Dendro-GR <cit.>, Einstein-Toolkit <cit.>, ExaHyPE <cit.>, GRChombo <cit.>, Lean <cit.>, MHDuet <cit.>, NMesh <cit.>, NRPy+ <cit.>, SACRA <cit.>, SpEC <cit.>, SpECTRE <cit.>, SPHINCS_BSSN <cit.>, Spritz <cit.> (see also <cit.> for an extended list). Among these research efforts is also the bamps code <cit.>, which employs a nodal pseudospectral (PS) discretization for the spatial representation of the solution. The promised efficiency and accuracy gain of a PS method can be demonstrated best for problems which admit smooth solutions. Since BSs lack a hard surface (or boundary) and no shock fronts are formed during mergers of such objects, which are common obstacles for BH and NS simulations that spoil smoothness, they represent an ideal testbed to develop and assess PS methods in the context of GR. Theoretically, the solution converges exponentially when increasing the polynomial resolution of the PS approximations. This translates into a significantly reduced error budget that we attribute to the time evolution, assuming high enough resolution.
Thus, for PS codes, constraint violations in superposed initial data can in principle dominate the total error budget of our results, making the usage of superposed initial data ill-advised. In this work we utilize the bamps code to perform binary BS head-on collisions with axisymmetry and reflection symmetry and investigate how physical observables extracted from these simulations depend on the quality of initial data. In particular, we compare results obtained from data that was prepared using four different methods: a simple superposition of isolated stars as defined in <cit.>, the heuristic improvement to the simple superposition technique also reported in <cit.>, and two versions of constraint-solved data obtained from a conformal thin-sandwich (CTS) solver and superposed free data. We then assess the differences between evolutions done with these data and the individual accuracy of each evolution by using a mixture of constraint monitors, global (and conserved) physical quantities, gravitational waves, and (self-)convergence tests involving those quantities. The rest of this work is structured as follows. First, we review the theory under study as well as the formulation of the equations of motion we use for the numerical simulations in <ref>. In <ref> a summary of earlier work on superposed initial data is given and we discuss how we construct constraint-solved data for the comparisons. Details on the computational setting are provided in <ref> and our numerical results are presented in <ref>. A summary of our findings is provided in <ref>. In Appendix:gw-analysis and Appendix:gw-wiener-product we discuss details of our GW analysis and comment on a problem specific to head-on collisions and the reconstruction of the GW strain h from the Newman-Penrose pseudoscalar Ψ_4. We use Latin indices starting from a to denote spacetime components and Latin indices starting from i to denote spatial components of tensors. We work in Planck units where G = c = ħ = 1, so that all variables are automatically dimensionless, and we also set the scalar-field mass (defined below) to μ = 1. In Appendix:units we show that this choice of μ does not limit the generality of our results, but instead corresponds to a particular rescaling of the variables.
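A brief sketch of that rescaling (the appendix referenced above may present it differently): if (g_ab, ϕ) solves the EKG system with scalar-field mass μ, then the fields ĝ_ab(x̂) = g_ab(x) and ϕ̂(x̂) = ϕ(x), expressed in the rescaled coordinates x̂^a = μ x^a, solve the same system with unit mass, because each derivative with respect to x̂ carries a factor 1/μ, so that □̂ϕ̂ = μ^-2 □ϕ = ϕ̂ and Ĝ_ab = μ^-2 G_ab = 8πμ^-2 T_ab = 8π T̂_ab. Dimensionful results obtained with μ = 1 are therefore mapped to those for an arbitrary scalar-field mass by rescaling lengths and times with 1/μ.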
§.§ 3+1 decompositionThe basis for NR simulations is laid by a covariant 3+1 decomposition of the spacetime metric g_a_b, often written in the form <cit.>g_ab x^ax^b = - α^2t^2 + γ_ij ( x^i + β^it) ( x^j + β^jt) .The variables α, β^i, γ_ij and K_ij are called the lapse, shift, spatial metric and extrinsic curvature, respectively, and are referred to as the 3+1 variables. Within this framework the EFEs are rewritten accordingly and this unveils the so-called Hamiltonian and momentum constraint equations (from here on only referred to as constraint equations) as part of this system of nonlinear PDEs <cit.>,ℋ := R + K^2 - K_ij K^ij - 16 πρ = 0 ,ℳ^i := D_j ( K^ij - γ^ij K ) - 8 π S^i = 0 ,where R and K_ij are the Ricci scalar and extrinsic curvature associated with γ_ij and a spatial hypersurface Σ that is embedded in the surrounding spacetime (ℳ,g_a_b). The quantities ρ = n^a n^b T_ab and S^i = - γ^ia n^b T_ab are projections of the stress-energy tensor T_ab onto (Σ,γ_i_j), where n^a is the unit normal vector to Σ and γ^ia = g^ia + n^i n^a. These equations constrain the fields (γ_ij, K_ij) of a time slice Σ such that the embedding of (Σ,γ_ij,K_ij) in (ℳ,g_a_b) is compatible with the covariant decomposition of the EFEs.The remainder of the decomposition of the EFEs is complemented by how γ_ij and K_ij develop away from the initial hypersurface Σ_0. Augmenting these equations with conditions on how the variables α and β^i evolve completes the system of nonlinear PDEs we aim to solve. In this work we utilize the generalized harmonic gauge (GHG) formulation of the EFEs <cit.>∂_t g_a_b = β^i ∂_i g_a_b - αΠ_a_b + γ_1 β^i C_i_a_b , ∂_t Π_a_b = β^i ∂_i Π_a_b - αγ^ij∂_i Φ_j_a_b + γ_1 γ_2 β^i C_i_a_b + 2 α g^cd( γ^ijΦ_i_c_aΦ_j_d_b - Π_c_aΠ_d_b - g^efΓ_a_c_eΓ_b_d_f)- 2 α( ∇_(_aH_b_) + γ_4 Γ^c_a_bC_c - 1/2γ_5 g_a_bΓ^c C_c)-1/2α n^c n^d Π_c_dΠ_a_b - α n^c γ^ijΠ_c_iΦ_j_a_b + αγ_0 ( 2 δ^c_(an_b) - g_a_b n^c ) C_c - 16 πα (T_ab - 1/2 g_abT ^c_c),∂_t Φ_i_a_b = β^j ∂_j Φ_i_a_b - α∂_i Π_a_b + γ_2 αC_i_a_b + 1/2α n^c n^d Φ_i_c_dΠ_a_b + αγ^jk n^c Φ_i_j_cΦ_k_a_b ,where the evolved variables are the metric g_ab and the time reduction variable Π_ab = -n^c ∂_c g_ab, Γ^a = g^bcΓ^a_b_c and Γ_abc = g_adΓ^d_b_c are the Christoffel symbols associated with g_ab. We would like to point out the extra minus sign in the definition of Π_ab which was previously missing in <cit.> due to a typo. The spatial reduction variable Φ_iab is associated with the reduction constraint C_iab=∂_ig_ab-Φ_iab = 0 andC_a = H_a + Γ_a = 0is the harmonic constraint, where H_a is a gauge source function. Here we choose the gauge source function introduced in <cit.> with R(t)=W(x^i)=1. The constraint damping parameters are fixed to be αγ_0=1/10, γ_1=-1, αγ_2=1 and γ_4=γ_5=1/2. The above evolution equations are implemented in our numerical relativity codeand more details on the computational setup are discussed in <ref>.To reduce the Klein-Gordon equation (<ref>) to first order we introduce the reduction variables Π = n^a∂_a ϕ, Φ_i=∂_i ϕ and the spatial reduction constraint B_i :=∂_i ϕ-Φ_i. The reduced system of equations is then of the form∂_t ϕ = αΠ + β^i Φ_i ,∂_t Π =β^i ∂_i Π+ γ^ij(Φ_j∂_iα+α∂_i Φ_j -α^(3)Γ^k_ijΦ_k) +αΠ K +σβ^i B_i ,∂_t Φ_i=Π∂_iα+ α∂_i Π+Φ_j∂_iβ^j +β^j∂_j Φ_i+σα B_i .The evolved variables are ϕ, Φ_i and Π. ^(3)Γ^k_i_j refer to the Christoffel symbols associated with γ_ij and σ is a damping parameter which we set in all our simulations such that ασ = 1, equivalent to our treatment of γ_2. 
§ INITIAL DATA§.§ Binary boson star initial data from superposition of isolated stars The construction of initial data for simulations of compact binary objects requires finding a solution (γ_ij, K_ij) to the geometric constraints (<ref>) and (<ref>) on an initial time slice Σ_0, while simultaneously also preparing the matter variables ρ and S^i in a state of quasi-equilibrium determined through auxiliary conditions. In the following discussion we neglect the latter aspect and focus only on the geometric constraints, but we revisit it briefly below. What defines physically plausible initial data is in general not a question with a simple answer. The basis for binary data construction is the existence of isolated star solutions, which are often assumed to be at least axisymmetric and, thus, simple to obtain numerically. Unfortunately, the assumption of a star being in isolation is, in principle, in contradiction with a star partaking in a binary collision, unless they are displaced by an infinite distance. On a physical basis this is clear, because the gravitational pull of one star will be felt by its companion, causing tidal deformations of it and, hence, influencing its gravitational potential, which in turn modifies the initial star through its own tidal forces. This is also reflected by the nonlinearity of the constraint equations which in general prevents the construction of new solutions as simple combinations of isolated single star solutions – commonly referred to as superposition.As can be straightforwardly shown, the constraint equations, if satisfied at one instant of time and evolved exactly, will continue to hold at all times. Failing to satisfy these constraints does not necessarily cause crashes in numerical simulations, but results that do so are not solutions to the EFEs. Realizing that NR can only ever produce approximate numerical solutions to the continuum EFEs, any numerical result will inevitably violate the constraints to some degree. The goal of NR is to construct successive numerical approximations in such a way that constraint violations, as well as the evolved fields and other analysis quantities, show a controlled convergence trend towards the continuum EFEs in the limit of infinite resolution. We emphasize that violations occurring in time evolutions do not solely arise from the failure of the initial data to satisfy the constraints, but canalso grow dynamically through numerical errors due to insufficient resolution, the appearance of (coordinate) singularities or other pathologies. In fact, constraint violations associated with the initial data quality are conceptually even simpler to control, because the mathematical problem of solving the constraint equations for the initial data is completely decoupled from solving the free evolution equations. Because of that, dedicated codes have been developed to tackle the initial data problem in GR (see <cit.> for a review of methods for BH and NS binaries, see <cit.> for a constraint solver for BS binaries).Solving the constraint equations is a difficult task. Using instead a superposition of isolated single star solutions presents itself as an attractive shortcut to constructing initial data that solve the constraints approximately[ We refer to <cit.> for a brief list of special cases in which the superposition of solutions can solve the constraint equations exactly. ]. 
In studies that use such data it is often argued that the approximation is accurate enough when the inherited constraint violations are at most of the same order as the error budget of the time evolutionary part of a numerical code <cit.>. Such an argument can be supported with convergence studies to demonstrate that the numerical results approximate solutions up to a controlled error, provided that one commits to routinely verify that the above premise is satisfied. In the case of BS simulations superposed data has been used many times before as starting points for head-on and inspiraling binary collisions <cit.>. The same technique has also been employed for collisions involving other exotic matter models like Proca stars <cit.>, ℓ-boson stars <cit.>, dark boson stars <cit.>, axion stars <cit.>, neutron stars with bosonic cores <cit.> as well as mixed mergers of an axion star and a black hole <cit.>, a neutron star and an axion star <cit.>, and a boson star and a black hole <cit.>.Focusing on pure scalar BS simulations, one recipe employed in the literature to construct superposed data can be summarized as follows. Let γ_ij^A, K_ij^A, α_A and β_A^i denote the 3+1 variables of spacetime and let ϕ_A and Π_A denote the scalar-field variables of star A, likewise for star B. A simple superposition (SSP) initial data construction involving stars A and B is then given by <cit.>γ_ij = γ_ij^A + γ_ij^B - δ_ij, K_ij = γ_m(i( K_j)n^Aγ_A^nm + K_j)n^Bγ_B^nm) ,ϕ = ϕ_A + ϕ_B ,Π = Π_A + Π_B ,where the data of the isolated stars is boosted such that the superposition mimics a binary configuration where each star carries initial momenta. This boosting itself is readily done through Galilean or Lorentz boost transformations <cit.>. A slight variation of this recipe has been used in other studies that fueled many important contributions to the field <cit.>. The above construction disregards the constraint equations. Consequently, it comes as no surprise that in <cit.> it has been demonstrated that the scalar-field amplitude can admit artificial modulations. These effects can be strong enough to trigger premature BH collapse in encounters of BSs with a solitonic potential. Similar results were reported earlier in <cit.> for the case of compact real-scalar solitons (oscillatons). To tackle these symptoms, the same authors came up with a heuristically motivated improvement to the SSP construction, guided by minimizing initial constraint violations, which we refer to as the constant volume element (CVE) construction. Since the ansatz (<ref>) alters the volume form inside a star depending on the displacement of the companion star, it was adjusted toγ_ij = γ_ij^A + γ_ij^B - γ_ij^A(x_B) .Here γ_ij^A(x_B) refers to the constant value of the components of the induced metric of star A, γ_ij^A, evaluated at the center of star B, x_B. Note that in the case of equal mass BS binary systems we have γ_ij^A(x_B) = γ_ij^B(x_A). Consequently, the above ensures that γ_ij(x_A) = γ_ij^A(x_A) and likewise for B, hence, it approximately restores the volume element form around each star's center to the value of an isolated star. This simple correction was enough to cure premature BH collapse, but at the same time, the correction led to qualitative changes in the emitted GW signals <cit.>. 
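To make the two recipes concrete, the following minimal sketch (in Julia; illustrative only and not the initial data pipeline used for the simulations in this work) assembles the superposed fields at a single grid point. Here γA, γB, KA, KB are 3×3 matrices holding the boosted isolated-star data, γA_at_xB is the constant matrix γ_ij^A(x_B), and all names are chosen purely for this illustration.

using LinearAlgebra

const δ = Matrix(1.0I, 3, 3)   # flat spatial metric

# SSP and CVE superpositions of the spatial metric (3×3 matrices at a
# single grid point); the scalar field is superposed linearly in both.
ssp_metric(γA, γB)           = γA + γB - δ
cve_metric(γA, γB, γA_at_xB) = γA + γB - γA_at_xB
superpose_field(ϕA, ϕB)      = ϕA + ϕB          # likewise for Π

# SSP extrinsic curvature: add the mixed components K^m_j of each star
# and lower the index with the superposed metric, symmetrizing in (i,j).
function ssp_extrinsic_curvature(γ, γA, KA, γB, KB)
    Kmix = inv(γA) * KA + inv(γB) * KB
    K    = γ * Kmix
    return (K + K') / 2
end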
Recently, <cit.> generalized the CVE construction to also work with unequal mass binaries.We now have two methods at our disposal to construct superposed initial data and one might ask which of those is preferred, given that their time evolutions can yield different physical results. One might be inclined to prefer the CVE construction, because it reduces constraint violations and cures premature BH collapse. But whether these differences are only caused by the improvement of the constraint violations or are perhaps primarily due to changes in physical characteristics of the initial data, like for instance the local energy density, is yet unclear. To further illustrate this point consider <ref> which schematically shows how constraint violations (measured in some norm) associated with initial data might propagate in time, when using an evolution system with a built-in constraint damping scheme like GHG. The CVE construction provides initial data with less constraint violations than the initial data constructed with the SSP technique, which is indicated by the two red dots lying on the space of all initial data (orange) at t=0 while also being displaced vertically from the space of constraint-satisfying data (green). The two blue dots, on the other hand, represent two different initial data sets (which are constructed in this work) where all excess constraint violations (ignoring numerical error) were removed from the superposed initial data. Performing time evolutions will then trace out the dashed trajectories, which indicate that constraint-satisfying initial data remain constraint-satisfying throughout the evolution. Contrary to this, evolutions starting from superposed initial data will only gradually in time approach the space of constraint-satisfying data. Note that the seemingly attractive character of the space of constraint-satisfying data is usually due to two factors: 1) the use of modified evolution equations that include constraint damping terms and 2) the possibility that constraint violations can propagate and eventually leave the computational domain through a boundary. Furthermore, there is no guarantee that initially-constraint-violating data is going to converge with increasing resolution towards constraint-satisfying data at late times. Instead one should expect some excess violations to remain also for late times t, which is illustrated by the trajectories ending up on the yellow space that is displaced by ϵ from the space of constraint-satisfying data. Besides this vertical displacement at late times, it is also unclear whether two trajectories that emanated from two initial data sets that are based on the same superposition, but where one is constraint-violating and one constraint-satisfying, will end up close to each other. Below we investigate the qualitative behavior of the sketched trajectories for the case of selected binary BS configurations and study their behavior when varying numerical resolution.The above discussion as well as the rest of this work does not account for the problem of the progenitors not being initially in a state of quasi-equilibrium, which is done to ease this comparison. We want to highlight that the constraint equations need to be satisfied regardless of the matter model considered to study solutions to the EFEs, and matter fields enter these equations only as source terms. 
On the other hand, the equations of motion of most matter models, including BSs, do not provide any constraints, meaning that, in principle, arbitrary matter configurations could be used as initial data, provided the metric fields are adjusted to satisfy the geometric constraints. Instead, one requires additional assumptions to define a quasi-equilibrium, such as the existence of a helical Killing vector field for binary inspiral configurations, as well as an understanding of the matter model to derive (elliptic) equations that equilibrate its fields accordingly. Special attention has been given to the latter aspect of initial data construction in the BH and NS literature, see <cit.>. First work in this direction for binary BSs has started only recently in <cit.> and we leave it to future work to further study the importance of this facet.

In the next section we examine one possibility to remove excess constraint violations from SSP and CVE initial data by numerically solving the constraint equations using a CTS solver, where the free data is constructed from the SSP and CVE data. Subsequently, we perform time evolutions using this data in order to answer the question of whether differences in physical observables between SSP and CVE initial data are caused by constraint violations. Along the way we also conduct convergence studies to assess the quality of our numerical results.

§.§ Constraint-satisfying binary boson star initial data

Constraint-satisfying binary BS initial data has been constructed before in <cit.> as well as in <cit.> and was an important ingredient in demonstrating that the collision of two non-rotating BSs can form a rotating remnant. Below we follow a similar procedure by using the CTS formulation of the Hamiltonian and momentum constraints (<ref>) and (<ref>), which read <cit.>

D̅^2 ψ - 1/8 ψ R̅ - 1/12 ψ^5 K^2 + 1/8 ψ^-7 A̅_ij A̅^ij = - 2 π ψ^5 ρ ,
(Δ̅_L β)^i - (L̅β)^ij D̅_j log(α̅) = α̅ D̅_j (α̅^-1 u̅^ij) + 4/3 α̅ ψ^6 D̅^i K + 16 π α̅ ψ^10 S^i ,

where

A̅^ij = 1/(2α̅) ( (L̅β)^ij - u̅^ij ) .

The above constitute a system of four elliptic PDEs for the conformal factor ψ and the components of the shift vector β^i, where D̅ is the covariant derivative associated with the conformal metric γ̅_ij, R̅ is the Ricci scalar associated with γ̅_ij, and <cit.>

(Δ̅_L β)^i = D̅^2 β^i + 1/3 D̅^i ( D̅_j β^j ) + R̅^i_j β^j ,
(L̅β)^ij = D̅^i β^j + D̅^j β^i - 2/3 γ̅^ij D̅_k β^k ,

are the conformal versions of the vector Laplacian and vector gradient applied to β^i, respectively. Once a solution for ψ and β^i is known, the remaining 3+1 variables can be recovered from

γ_ij = ψ^4 γ̅_ij , K_ij = ψ^-2 A̅_ij + 1/3 γ_ij K , α = ψ^6 α̅ .

Equations (<ref>) and (<ref>) need to be complemented with freely specifiable data for γ̅_ij, its time derivative u̅_ij := ∂_t γ̅_ij, the trace of the extrinsic curvature K and the conformal lapse α̅. Because we want to test the influence of eliminating constraint violations from superposed initial data, denoted by (γ_ij^(sup), K^(sup)), we use that data to set up the free data for the CTS formulation, following <cit.>,

γ̅_ij = γ_ij^(sup) , u̅_ij = u_ij^(sup) , u̅^ij = γ̅^ik γ̅^jl ( u̅_kl^(sup) - 1/3 γ̅^mn u̅_mn γ̅_kl ) , K = K^(sup) , α̅ = α^(sup) = (γ_ij^(sup))^-1/6 , β^(sup)i = 0 ,

and

u_ij^(sup) = ∂_t γ_ij^(sup) = - 2 α^(sup) K_ij^(sup) + ℒ_β^(sup) γ_ij^(sup) ,

where ℒ_β^(sup) denotes the Lie transport along the shift vector β^(sup)i of the superposed data. The choices (<ref>) and (<ref>) were adopted from <cit.>.
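To make the assembly of the CTS free data and the subsequent recovery of the physical variables concrete, the following pointwise sketch operates on 3×3 NumPy matrices at a single grid point; the function and variable names are placeholders, and the sketch assumes the superposed shift vanishes, as in (<ref>).

\begin{verbatim}
import numpy as np

def cts_free_data_and_recovery(gamma_sup, K_sup, alpha_sup,
                               psi, Abar, K_trace):
    """Pointwise sketch: free data from superposed variables and
    recovery of physical variables from a CTS solution (psi, Abar)."""
    gammabar = gamma_sup                      # conformal metric := superposed metric
    gammabar_inv = np.linalg.inv(gammabar)

    # u_ij^(sup) = -2 alpha^(sup) K_ij^(sup)   (superposed shift set to zero)
    u_sup = -2.0 * alpha_sup * K_sup

    # ubar^ij: raise both indices and remove the trace with gammabar
    u_up = gammabar_inv @ u_sup @ gammabar_inv
    trace = np.einsum('ij,ij->', gammabar_inv, u_sup)
    ubar_up = u_up - trace / 3.0 * gammabar_inv

    # recovery of the physical 3+1 variables from the CTS solution
    gamma = psi**4 * gammabar
    K_ij = psi**(-2) * Abar + gamma * K_trace / 3.0
    return ubar_up, gamma, K_ij
\end{verbatim}

In an actual solver these operations would be applied at every collocation point, with A̅_ij built from the shift solution via (<ref>).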
Note that the difference in sign in (<ref>) when compared to <cit.> is due to a difference in notation (see Appendix:notation-conformal-metric). The metric variables γ_ij^(sup), K^(sup), α^(sup) and β^(sup)i are given by (<ref>), (<ref>), (<ref>) and (<ref>) or (<ref>), (<ref>), (<ref>) and (<ref>), respectively. The matter source terms ρ and S^i are computed from γ_ij^(sup), K^(sup), α^(sup), β^(sup) and T_ab given by (<ref>). The construction of the latter is done by using the metric variables as well as the matter variables ϕ and Π as given by (<ref>) and (<ref>). Note that, although the way the scalar field is superposed by (<ref>) and (<ref>) is the same between the SSP and CVE method, the difference in the superposition of the induced metric γ_ij, given by (<ref>) and (<ref>), also translates into differences in the stress-energy tensor between these two constructions.

The numerical solution of (<ref>) and (<ref>) is obtained using the hyperbolic relaxation method <cit.>, which is available inside the code. For the CTS solver we use Robin boundary conditions to impose the asymptotic behavior <cit.>

α = ψ = 1 + 𝒪(r^-1) , β^i = 𝒪(r^-1) .

To set up the free data for the solver we proceed as follows: first, we compute spherically symmetric stationary solutions of isolated BSs; two such stars are then each boosted by a parameter v using a Lorentz transformation. For both of these steps we follow closely the algorithms given in <cit.> (also see Appendix:isolated-bs). We then use these stars to construct SSP or CVE initial data following the algorithms given in <ref>. These SSP and CVE data are then used together with (<ref>)-(<ref>) to set up the free data and sources for solving the CTS equations (<ref>) and (<ref>). As an initial guess for the CTS solver we use

ψ = 1 , β^i = 0 .

The hyperbolic relaxation of the CTS solver terminates once the sum of the L^1 norms of the right-hand sides (RHS) of equations (<ref>) and (<ref>) is smaller than 1/10th of the sum of the L^1 norms of the residuals of those equations, or when the L^1 norm of the residual of the equations falls below 10^-8 × N_DOF × 4, where N_DOF is the total number of grid points on the target resolution. The resulting constraint-satisfying data is referred to as CTS+SSP and CTS+CVE data for the rest of this work.

When we consider evolutions of constraint-satisfying data below, for which we fix the grid structure and the polynomial resolution n in each cell, but vary n between different runs, the initial data is computed using the hyperbolic relaxation method on the very same grid structure and with the same polynomial resolution. In particular, no extra interpolation step is needed to convert the solution of the internal CTS solver into initial data for the evolution. For evolutions of superposed data we utilize (<ref>) and (<ref>) as initial data for α and β^i. For the case of constraint-solved data we use instead the solutions obtained from the CTS equations (<ref>), (<ref>) and the relation (<ref>) to set up α and β^i.

§ COMPUTATIONAL SETUP

Our numerical code <cit.> has been used successfully to study critical collapse <cit.>. It uses a pseudospectral collocation method for the spatial discretization of the EFEs and an explicit fourth-order Runge-Kutta time-stepping algorithm. It utilizes distributed-memory parallelization based on the Message Passing Interface (MPI) standard and, recently, has been complemented with an adaptive mesh refinement (AMR) feature <cit.>.
However, in this work we do not make use of the AMR feature, in order to facilitate the comparison. This means that we are using the same static computational grid structure between different simulations and we use the same polynomial resolution in all grid cells. In <ref> we summarize important parameters of our computational setup. The GHG formulation (<ref>)-(<ref>) of the EFEs to evolve the gravitational field and the first-order reduction of the Klein-Gordon equation (<ref>)-(<ref>) to evolve the complex scalar field are implemented in the code. The code employs radiation-controlling and constraint-preserving boundary conditions as described in <cit.>. The scalar field uses a maximally dissipative boundary condition on the physical degrees of freedom and constraint-preserving boundary conditions for the reduction constraints <cit.>. We note that these boundary conditions were initially designed for real massless scalar-field evolutions and were adapted for this study to also work for massless complex scalar fields. In particular, they do not account for a scalar-field potential V(|ϕ|^2), which is likely the cause for some of the artifacts we report below. We leave it for future work to improve these conditions. After extensive testing we found that the combination of the GHG formulation with the constraint damping parameters as given in <ref>, together with the grid parameters reported in <ref> and the above boundary conditions, allows us to perform long-time stable and convergent evolutions of binary BS head-on collisions.

§ RESULTS

We focus exclusively on head-on collisions of equal-mass mini BSs. Some of the key properties of the particular stationary and isolated BS solution we used as the basis for the initial data construction are summarized in <ref>. In the head-on configurations we study, we vary the initial boost parameter v as well as the initial separation d between the stars. Similar configurations were discussed already in <cit.>, and a slight variation thereof, with non-zero impact parameter, was investigated in <cit.>. All collisions for which we report results below culminate in a single perturbed and non-spinning BS remnant with zero bulk motion with respect to the coordinate origin and where the gravitational and scalar fields continue to oscillate. The latter process causes a continued emission of GWs and scalar-field radiation, which lasts for much longer than the head-on impact. The GWs emitted due to the oscillating remnant are referred to as the gravitational afterglow of BSs <cit.>. This afterglow radiation is characterized by an amplitude that is comparable with the GW burst that is due to the initial head-on impact. See <ref> below for an example of GW afterglow obtained from a long-time evolution.

Besides the intended direct comparison of the initial data quality, we repeat some of the experiments reported in <cit.> to perform calibration tests with our NR code, as this is the first time it is used for BS evolutions. Given that our code employs a PS method for the spatial discretization of the fields, whereas <cit.> used the Lean code <cit.> and the GRChombo code <cit.>, which both work with finite-differencing methods, the presented results also provide an (indirect) benchmark between different numerical methods for the simulation of spacetimes with smooth matter fields.

To gauge differences in numerical evolutions for the comparison of constraint-satisfying and constraint-violating initial data one can use various quantities.
Below we focus on a mixture of constraint monitors, global (and conserved) physical quantities, as well as local field values and gravitational waves. Studying these quantities also allows us to assess the accuracy and reliability of the PS scheme employed. This is important to our work, because the equations of motion of the matter (Klein-Gordon equation) can be rewritten to mimic a balance law (i.e., a conservation law with an inhomogeneity), for which a plethora of numerical methods have been developed in the computational fluid dynamics literature and are often employed in the context of general relativistic hydrodynamics simulations. On the other hand, the PS method developed in <cit.> was solely designed to conserve energy within a particular approximation, and so it is of interest to us to see how well the scheme can balance matter fields over long times. During binary BS evolutions we compute:

* The constraint monitor defined in <cit.>. It summarizes violations of the constraint subsystem of GHG, among which are the Hamiltonian and momentum constraints (<ref>) and (<ref>) as well as the harmonic gauge constraint (<ref>). In the continuum limit this monitor vanishes throughout the evolution. In practice we observe non-zero values, and they serve as a proxy to gauge to which accuracy the EFEs can be solved.

* The dynamical behavior of the stars and spacetime is monitored through the maximum of the scalar-field amplitude (t) and the value of the Ricci scalar at the coordinate origin, R(t,x=0). These quantities, together with the constraint monitor, are used to distinguish physical signatures from numerical artifacts in our analysis.

* The Noether charge associated with the global U(1) symmetry of (<ref>) is given by <cit.> N = ∫_V d^3x √(-g) j^t , with j^a = i/2 g^ab ( ϕ^∗ ∇_b ϕ - ϕ ∇_b ϕ^∗ ) , where j^a is the Noether current and integration is performed over the whole computational domain V. N can be related to the total number of bosonic particles <cit.>. This quantity is also conserved, provided no matter leaves the domain through a boundary, and, thus, its time evolution allows us to gauge the accuracy of the evolution of the Klein-Gordon equation.

* The ADM mass of the spacetime <cit.>, given by 1/16π lim_r→∞ ∫_∂Σ_r dS 𝒩^k γ^ij ( ∂_j g_ik - ∂_k g_ij ) , where 𝒩^k is the outward-pointing unit normal vector to a 2-hypersurface ∂Σ_r of a spatial slice Σ. In theory, this quantity is conserved in time and we monitor its temporal evolution to benchmark our results. Note that the ADM mass results we report below were obtained without taking the limit r → ∞ and were instead computed at a finite radius. In this sense, all references to the ADM mass in the following refer to an approximation of it. In fact, this approximation is also not necessarily conserved and can show a decrease in time, in particular when matter leaves the computational domain.

* A total mass number <cit.>, given by ∫_V d^3x √(γ) ρ , where ρ = T_μν n^μ n^ν is the local energy density and integration is performed over the whole computational domain V. This is a coordinate-dependent quantity and it is not necessarily conserved.

* Gravitational radiation, represented through the curvature pseudo-scalar field Ψ_4. This (or rather integrals thereof) is the only accessible observable with which binary BS encounters could be experimentally detected. In particular, we only focus on the dominant l,m=2,0 mode.
We leave out a discussion of the GW strain h due to unacceptably large uncertainties introduced in the reconstruction procedure, see Appendix:gw-analysis.

* The radiated GW energy E associated with the GW signal, as recorded by an asymptotic observer over a fixed time interval. This quantity was also studied in <cit.> to analyze the behavior of the long-lived oscillating remnant. The error bars we provide account for errors due to finite resolution, errors in the extrapolation to null infinity, and errors in the reconstruction of E from the waveform data. For details on this analysis see Appendix:gw-analysis.

* The detector-noise-weighted Wiener product W between two GW signals. This quantity gives a measure of how similar two GWs are, while accounting for detector sensitivity. To evaluate W one has to assume a value for the scalar-field mass μ in order to convert the results to SI units. Because we use a noise-sensitivity curve of Advanced LIGO <cit.> to weight the Wiener product, we fix μ = 1× 10^-11 eV so that the frequency of the dominant component of the PSD of the signal falls into the most sensitive region of the detector, which is at 𝒪(100 Hz) <cit.>. For details see Appendix:gw-wiener-product.

§.§ Constraint violations

A comparison of the convergence behavior of the constraint monitor obtained from runs that used SSP data vs. CTS+SSP data with initial separation d = 80, boost parameters v = 0.05, 0.1, 0.15, 0.2 and varying polynomial resolutions n is provided in <ref>. First, we compare the values of the constraint monitor at t = 0 between the constraint-violating SSP data (left columns) and constraint-satisfying CTS+SSP data (right columns). The figure shows that all SSP evolutions start from an initial violation ≈ 10^-4, which is independent of the resolution and initial boost. On the other hand, the CTS+SSP data starts off at ≈ 10^-5 and decreases with n, independently of v, to < 10^-10. Because we solve the CTS equations for each resolution n separately, instead of solving them once for a high resolution and then using interpolation to obtain data on a lower resolution, this behavior demonstrates that our CTS solver is capable of removing excess constraint violations with increased resolution.

Focusing on the time evolution of the constraint monitor for CTS+SSP data, one observes a clear convergence pattern in resolution. We want to emphasize the rate by which it decreases by pointing to the exponential improvement from 10^0 down to 10^-12 while the polynomial resolution n increases linearly in steps of two from 7 to 19 (see color coding in the legend). For boost values v = 0.1, 0.15 and 0.2 one observes that with resolutions n ≥ 15 the violations are bounded by ≲ 10^-10 when 0 ≤ t ≤ 4000, except around the merger, which occurs at t ≈ 350, 260 and 210, respectively. For the runs using v = 0.05 one would need to increase the resolution beyond n = 19 to achieve the same level of violations in the afterglow signal, which is computationally well within reach even without AMR. Comparing now with the violations from evolutions of SSP data, one can also see that the monitor decreases for all simulations independent of v. However, the results for v = 0.05, 0.1 and 0.15 show that this trend eventually halts when n ≳ 13, and for v = 0.2 no improvement occurs beyond n = 11 in the afterglow. From this we conclude that evolutions of non-constraint-solved SSP data bear a residual constraint violation ≳ 10^-8 when 0 ≤ t ≤ 4000 for this configuration in our code. Note that this result does not imply that these evolutions are not convergent, because a numerical value of order 𝒪(10^-8) is usually not regarded as numerically zero.
Instead one must treat this data series as a series that approaches a non-zero residual value and conduct a self-convergence test, which is done further below.Studying now the behavior ofaround merger one observes that the constraint monitor continues to improve with increasing n, even whenwas already saturated away from merger. This behavior is independent of the boost parameter v and occurs for SSP and CTS+SSP data. The reason for this is that the merger phase typically involves higher field amplitudes and gradients compared to less extreme field configurations before and after merger, which translates into a need for locally higher polynomial approximations to resolve the solutions accurately, due to aliasing effects in the PS expansion being amplified by the nonlinearity of the EFEs. As a side note we mention that the use of AMR could potentially bring down these constraint violations around merger to the same level away from the merger, while at the same time improve computational efficiency by distributing the available resources where numerical resolution is needed. We leave this test to future work as it will certainly become of relevance for inspiral simulations.<ref> shows the same comparison of , but for evolutions that started with constraint-violating CVE dataconstraint-satisfying CTS+CVE data and the same initial distance and boost parameters. The overall behavior of the results is similar to the one from <ref>: at t = 0 the CVE data comes with an initial violation ≈ 10^-6 that is independent of n and v, whereas the CTS+CVE data starts at ≈ 10^-5 and continuously decreases with increasing resolution to < 10^-10. The time evolution of CTS+CVE data also displays a convergence pattern with exponential decrease when increasing n, and the constraint violations are reduced to < 10^-10 when 0 ≤ t ≤ 4000, except around merger. Also similar is the behavior of the results from constraint-violating CVE initial data where it is evident that increasing n decreases . The plot also shows that the constraint monitor saturates for > 10^-10 when v = 0.05, 0.1 and 0.15. When v = 0.15 or 0.2 we also observe that the lower bound onadditionally decreases over time and approaches a value of ≈ 10^-10 at t = 4000. As for the merger phase, increasing n also reducesfor both CVE and CTS+CVE data.In <ref> a self-convergence test ofusing the constraint-violating data presented in <ref> (left columns) and <ref> (left columns) is depicted. With such a test one can study the convergence of a series without knowing the exact result the series is converging to. This method is applied tohere, because for constraint-violating datadoes not approach zero, but it attains limiting values ≳ 10^-8 and ≳ 10^-10 for SSP and CVE data, respectively. <ref> demonstrates that the differences ofbetween consecutive resolutions decrease exponentially with increasing resolution, thus, confirming the claim made earlier thatis also convergent for evolutions of initially-constraint-violating data. A direct comparison of the analysis quantities (t), R(t,x=0) and (t) obtained from evolutions starting from constraint-violating and constraint-solved initial data, based on a configuration with initial separation d = 80, initial boost v = 0.1 and polynomial resolution n=11, is provided in <ref>. Focusing on the results obtained with the SSP construction method (left column), it is evident thatis always smaller for the evolution of CTS+SSP data than the evolution of SSP data when 0 ≤ t ≤ 4000. 
The difference inbetween these evolutions gradually decreases over time and there is considerable overlap in the afterglow phase of the evolution. However, this overlap does not continue to hold with increasing polynomial resolution n, becausefor the runs with SSP data eventually levels off at a non-zero value, as demonstrated previously. Comparing vertically (t) (top, left)R(t,x=0) (middle, left)(t) (bottom, left) one can see that the times at which the constraint monitors overlap correspond to times where R(t,x=0) and (t) each show local maxima. This allows to conclude that the increase inaround merger, as well as the oscillations in the afterglow, are caused by physical processes. Studying the time evolutions of R(t,x=0) and (t) closer one can observe good agreement between SSP and CTS+SSP until t ≈ 2800, after which these quantities start to dephase (see insets). This dephasing persists when increasing the resolution. Not visible in this plot is that (t) also shows deviations during the infall phase when 0 ≤ t ≤ 500, because they are about a factor of 10 smaller than the differences that appear at late times and we revisit this further below. The comparison offrom runs with constraint-violating and constraint-solved initial data based on the CVE construction is displayed in <ref> (right column). Similar to the case of evolutions of SSP data, one observes that, prior to merger,is always smaller for the evolutions using CTS+CVE data compared to evolutions of CVE data. We see that at this resolution the evolution ofafter the merger is independent of the initial value ofand, therefore, it is only dominated by violations that occur during the merger. Also similar is the temporal alignment in the extrema between (t), R(t,x=0) and (t). The evolutions of R(t,x=0) and (t) also show that no dephasing occurs for these quantities for late times at this resolution (as well as resolutions n=9,13,15, but not shown) and, thus, one can conclude that the reduction in initial constraints did not have a noticeable influence on these observables. Comparing the late time behavior of R(t,x=0) and (t) between SSP data (left column) and CVE data (right column) one can see from the insets that the amplitudes of these quantities show a difference, in particular the strength of the oscillations is reduced for the CVE data. This observation suggests that the effect of constraint-solving data has less impact on physical observables than the way in which the superpositions are constructed.<ref> shows another comparison of the evolution of (t) as presented in <ref>, but now with an emphasize on the infall phase of the head-on collision where 0 ≤ t ≤ 500. This plot is added to make a connection with <cit.> where the authors found that, in head-on collisions of solitonic BSs, the use of SSP data causes premature BH collapse,an apparent horizon is detected before the center of the stars even meet. This behavior is preceded by a growth of the central scalar field amplitude before the collapse. In their work, the evolutions of CVE initial data with the same solitonic BSs did not show this phenomenon, and instead the central scalar field amplitude remained to good approximation constant during all of the infall phase, only changing after the star's centers merged and before the first apparent horizon was detected. In <ref> one can see that also for a head-on collision of two mini BSs, which is based on SSP initial data, (t) shows a growth during the infall phase. 
Furthermore, it is evident that (t) grows strongly even for CTS+SSP data. In contrast to this, (t) obtained from the evolutions of CVE and CTS+CVE initial data agree with each other, i.e., they remain roughly constant during the infall phase. Eventually, at t ≈ 300 one can see that (t) drops down and then increases again right before the time of merger at t ≈ 510. Some oscillations of (t) are also visible, but they are likely due to the fact that neither initial data construction accounts for any quasi-equilibrium conditions. Although the mini BS evolutions considered in this work do not undergo BH formation, the plot shows a similar qualitative behavior of the scalar-field amplitude as reported in <cit.>, which is also independent of resolution (not shown). Since we are interested in studying how two initially non-interacting stars fall in on each other, we expect on physical grounds that the scalar-field amplitude is not drastically altered before the stars' centers get close to each other. For the case at hand, the isolated stars have a radius of r_99 ≈ 22 and are initially well separated by a distance d = 80. Because (t) is less for the CVE and CTS+CVE data than for SSP and CTS+SSP data, we conclude that this result favors the use of the two former initial data sets for mini BS head-on collisions, assuming no additional considerations regarding quasi-equilibrium conditions are involved.

During testing we found that some observables can be polluted with artificial high-frequency noise at late times. As an example, see <ref> for SSP data (top, left) and CVE data (top, right), which shows this noise starting to appear at t ≈ 2000. These artifacts are not physical, but depend on parameters like the initial separation and boost of the stars and the numerical resolution, and are caused by the scalar field interacting with the outer boundary conditions, which happens because part of the scalar field is ejected from the central region during and after merger. This hypothesis is confirmed by redoing these simulations with a grid setup where the outer domain boundary is moved from radius R=400 to R=800. Results of this test are displayed in <ref>, which shows the spatial distribution of the scalar-field amplitude A(t,x) (top) extracted at coordinate time t ≈ 2000 for the two differently sized domains. It is evident that A(t≈2000,x) in the smaller domain displays noticeable radial oscillations, whereas they are absent from the scalar-field profile evaluated at the same coordinate time in the larger domain, which reveals these oscillations as artifacts. The comparison of the time evolutions (bottom) extracted from these two simulations shows that around t ≈ 2000 the data is free of noise when the boundary is placed at R = 800. We note that this strategy of moving the boundary out further and further is in general not a reliable solution when one is interested in artifact-free long-time evolutions, because the associated computational costs eventually become prohibitive. Although it might be possible to mitigate the computational costs by using AMR with properly tuned hp refinement, a more sustainable solution would be to investigate how our boundary conditions need to be adjusted to work with massive complex scalar fields. We leave this task to future work.
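For reference, the conserved volume integrals used in the following comparison are evaluated as discrete sums over the computational domain. The sketch below shows a schematic version for the Noether charge (<ref>) on a uniform Cartesian grid; the array names are placeholders, and in a pseudospectral code the uniform cell volume would be replaced by the quadrature weights of the collocation grid.

\begin{verbatim}
import numpy as np

def noether_charge(sqrt_minus_g, j_t, dx, dy, dz):
    """Approximate N = int_V d^3x sqrt(-g) j^t by a Riemann sum;
    sqrt_minus_g and j_t are 3D arrays sampled on a uniform grid."""
    return np.sum(sqrt_minus_g * j_t) * dx * dy * dz

def relative_drift(N_of_t):
    """Relative deviation N(t)/N(0) - 1, used to gauge how well the
    evolution conserves the charge."""
    N_of_t = np.asarray(N_of_t)
    return N_of_t / N_of_t[0] - 1.0
\end{verbatim}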
In the following discussion of global quantities and gravitational waves we show results that were obtained with R = 800.

§.§ Global quantities

In <ref> we continue the comparison started in <ref> by studying how relative differences in global quantities develop for evolutions that used constraint-violating vs. constraint-satisfying initial data with initial separation d = 80, boost v = 0.05 and resolution n=11. It is evident from the plot that the relative differences in the Noether charge (Δ N), the ADM mass and the total mass are constant in time to a good approximation. The differences between CVE vs. CTS+CVE data evolutions for Δ N and Δ are of the order 0.01%, whereas the differences between SSP vs. CTS+SSP evolutions are below 0.1%. The differences in Δ are of order 0.1% and 1% for CVE vs. CTS+CVE and SSP vs. CTS+SSP, respectively. We want to emphasize that the utilized resolution n = 11 is rather coarse and computationally inexpensive, but the observed differences are already very small for both data sets.

<ref> displays the time evolutions of N, the ADM mass and the total mass (top) as well as a self-convergence study of the same quantities (bottom). These results were obtained from evolutions of runs done with the CTS+CVE data with initial separation d = 80 and boost parameter v = 0.05. Focusing first on the upper panel and the top plot, which shows N(t)/N(0), one can see that for resolution n = 13 the relative deviation from the initial value N(0) is well below 1% when 0 ≤ t ≤ 2000, indicating that N(t) is numerically conserved. On the other hand, for t ≳ 2000 the Noether charge starts to decrease until ≈ 98% of the initial value is left at t = 4000. Similar behavior is observed in the evolution of the normalized ADM mass, i.e., the ADM mass is conserved to well below 1% over the time range 0 ≤ t ≤ 2000, after which it starts to decrease until it reaches a final value below 97% at t = 4000. On the other hand, the normalized total mass is by definition not a conserved quantity and this is reflected in its time evolution, because it shows significant oscillations already for t ≤ 2000. Note also that this quantity shows a decreasing trend at late times. We verified that the apparent loss in N, the ADM mass and the total mass is due to the scalar field getting absorbed by the outer boundary conditions, much like the artificial oscillations of A that we discussed in <ref>.

Regarding the self-convergence tests displayed in the lower panel, the differences in Δ N decrease with increasing resolution, confirming that N is indeed convergent. We note that the differences are noticeably increased through the merger, but overall remain roughly constant in time. The differences in the ADM mass show more variations in time, but one can still recognize a convergence trend in this data. We attribute this behavioral difference to the fact that N and the ADM mass are computed through a volume and a surface integral, respectively, the latter being more susceptible to numerical errors, because its computation involves fewer degrees of freedom than that of a volume integral. The differences in the total mass also behave similarly to Δ N in the sense that they show a convergence pattern, remain roughly constant over time and are increased through the merger. These plots demonstrate that global quantities like Δ N and the mass differences also converge exponentially.

§.§ Gravitational waves

<ref> shows results of a long-time evolution of a BS head-on collision with initial separation d = 80 and initial boost parameter v = 0.05.
The plot shows that the run with the lowest resolution n=7 crashed, but starting with n≥ 9 a stable convergence pattern can be observed where the exponential improvement between resolutions is maintained also for late times. One can also recognize a growth ofover time, which might become troublesome for even longer evolutions; however, we were able to evolve to at least t/ = 50000 without problems and there remains the possibility to adjust the GHG damping system through the parameter γ_0 in combination with adjustments to the local grid widths. Thestrain extracted from the highest resolved run with n=17 displays the characteristic afterglow signature of these kinds of collisions, where the remnant scalar-field cloud is continuing to oscillate and emit gravitational radiation with an amplitude that is of the same order as the merger spike <cit.>. The dominant amplitude in the frequency spectrum ofis located atf_dom≈ 7.7 × 10^-3( μ/6.582× 10^-16 eV) Hz.Assuming a scalar-field mass μ = 1 × 10^-11 eV this translates into f_dom≈ 117 Hz. Furthemore, the plot shows that the afterglow lasts at least for timeT_glow≳ 18750 ( μ/6.582× 10^-16 eV)^-1 s,which corresponds to T_glow≳ 1.2 s, assuming the same scalar-field mass.We want to highlight that our computational setup does not involve angular momentum, because the head-on collisions proceed with zero impact parameter and the simulations are carried out in axisymmetry and with reflection symmetry, and yet we do observe a noticable afterglow signature in <ref>. Our results therefore demonstrate that the emission of GW afterglow radiation in BS collisions is not solely tied to the presence of angular momentum in the initial data or the remnant.In <cit.> a comparison betweensignals extracted around the time of merger and obtained from runs using SSP and CVE initial data was already carried out. The binary BS head-on configuration studied in there used the same isolated BS solution as we do for the superposition. The initial boost parameter was fixed to v = 0.1. In <ref> we reproduce their results for resolution n=15, but using instead CTS+SSP initial data (top, left) and CTS+CVE initial data (top, right). Furthermore, we also evolved SSP and CVE initial data and we plot the absolute differences between thesignals obtained from SSPCTS+SSP data (bottom, left) and CVECTS+CVE data (bottom, right). The data was extracted at spheres with radius = 720. Focusing on rawdata first (top, left and right), one can observe the same qualitative differences between CTS+SSP and CTS+CVE data that were reported for SSP and CVE data in <cit.>. The maximum amplitude ofshows a dependence on the initial separation d between the two stars for the runs that used CTS+SSP data, whereas the CTS+CVE results show almost constant maximum amplitude and the wave's shape is roughly independent of d. Furthermore, for d=80 the GW signal arrives later by Δ t/≈ 14 from CTS+SSP data compared to the CTS+CVE data and this time delay decreases to Δ t/≈ 10 for d=140. From the plot showing the differences of , one can read off that the GW signal differs at most by 10^-4 for SSPCTS+SSP data and at most by 10^-5 for CVECTS+CVE data. When normalized by the maximum amplitude of thesignal, these differences translate into a maximum deviation of ≈ 10% for SSPCTS+SSP and ≈ 2% CVECTS+CVE data, respectively.<ref> presents a self-convergence test usingsignals extracted from evolutions of CTS+CVE initial data with initial separation d = 80 and boost parameter v = 0.05. 
The signals were extracted at spheres with coordinate radius = 720 and the results are plotted against retarded time u. Similar to the self-convergence test of the global quantities N,and , one can observe a clear decrease in the differences ΔΨ_4 with increasing polynomial resolution, indicating an exponentially convergent result. The differences increase notably through the merger by roughly two orders of magnitude. The differences between the two highest resolved runs with n=17 and 19 can be interpreted as an error bound on the numerical accuracy ofand we infer that the results are accurate up to 10^-5 for the studied configuration. This bound is conservative, because, as the trend of this convergence test indicates, any differences obtained between two consecutive resolutions that are each higher than n=19 will be smaller than this difference. In <ref> we compare the radiated gravitational wave energy E/ emitted over a retarded time span of u ∈ [100, 3000] from evolutions of SSP, CTS+SSP, CVE and CTS+CVE initial data, where we fix the initial separation d = 80 and polynomial resolution n=15 and vary the boost parameter v. First, the plot shows a difference in E depending on whether SSP or CVE initial data is evolved and this difference decreases with increasing v. On the other hand, the difference between E obtained from SSPCTS+SSP and CVECTS+CVE is comparatively negligible. Taking into account the error bars the conclusion is that one cannot distinguish superposed initial data from constraint-solved data by the amount of radiated energy emitted over this time span. Regarding the dependence on v, it is evident that E decreases with increasing v. From a physical perspective this is counter intuitive, because increasing v increases the energy of the binary system in the initial slice. Furthermore, with larger v the time to merger is reduced and, thus, there would be more time left for GWs to radiate away from merger till u=3000. This behavior is in contrast with a similar experiment that was conducted for BH head-on collisions in <cit.>, where indeed the radiated GW energy increases when increasing v. Nevertheless, these results are in accordance with what was observed in <cit.>, where E also decreases when the initial separation d between the stars is increased, which also corresponds to an increase of the total energy in the initial slice.Analysis of 1D and 2D output from simulations that were used for <ref> shows that with increasing v the gravitational and scalar field both display decreasing amplitudes. To this end, consider <ref> where the time evolution of E(u)/, R(t,x=0), (t) N(t) and (t) is shown for simulations of CTS+CVE data with initial separation d= 80 and resolution n=15, but we vary the boost parameter v. First, one can read off the values of E/ at u/≈ 4050 which were shown in <ref> for the CTS+CVE data. The plot displays that the initial burst of radiation is responsible for a significant increase in E/(u). When contrasted with the evolutions of R(t,x=0) and (t), it is apparent that this is caused by the merger where these quantities peak. Furthermore, one observes that with increasing v the first burst is emitted at earlier times, but the strength of the burst also decreases. However, at the same time the flux of E(u)/ radiated in the afterglow phase decreases when v increases. During that phase R(t,x=0) and (t) also display a continued decrease in the amplitude of the oscillations. 
The evolutions of N(t) and of the mass show that until t ≈ 2000 both quantities are conserved to good approximation. The difference in the mass between different values of v when 0 ≤ t ≤ 2000 reflects the fact that an increase in the initial momenta of the stars corresponds to initially more energetic configurations. Eventually, both quantities decrease, because the scalar field leaves the computational domain through the boundary. We note that this decrease sets in earlier and proceeds faster when v=0.2 than when v=0.0, which we verified by studying dN/dt (not shown). This allows one to conclude that the larger v, the faster energy is carried away through the scalar field being ejected. As a consequence, the produced remnant will be lighter for larger v.

<ref> shows the (complementary) Wiener product W̅ := 1 - W computed between two GW signals obtained from different initial data constructions. The initial distance was fixed to d = 80, we vary the polynomial resolution n, and we show results for boost values v = 0.05, 0.1 and 0.15. We used the Advanced LIGO sensitivity design curve <cit.> to weight the product, and the complex scalar field's mass was fixed to μ = 1×10^-11 eV. W was computed using data that was extrapolated to null infinity with different orders. We remind the reader that a value W̅ = 1 - W = 0 means that the two signals are identical if they were to appear in a detector with the provided noise-sensitivity curve. The plot shows, independent of v, that W̅ is always smallest when comparing signals from evolutions of CVE data vs. CTS+CVE data, meaning that these waveforms are the closest to each other among those we compare. The difference between GWs obtained from SSP data vs. CTS+SSP data is bigger than the one from CVE data vs. CTS+CVE data. W̅ is biggest when comparing either SSP data vs. CVE data or CTS+SSP data vs. CTS+CVE data. We note that these qualitative results are, to very good agreement, independent of the numerical resolution n. Furthermore, the results are independent of the extrapolation order, because <ref> already shows the results obtained from analyses where the order was varied too; the obtained curves all overlap with one another. Overall this picture is consistent with the discussions of other comparisons in this work, where it turned out that any apparent differences between analysis quantities are more pronounced when comparing results from SSP data vs. CTS+SSP data than when comparing results from CVE data vs. CTS+CVE data.

§ CONCLUSION

In this manuscript we reported results of long-time stable and accurate binary head-on BS collisions obtained using a PS method as implemented in our code. We studied variations of a mini BS configuration that was already investigated in <cit.>, differing in the way the initial data was constructed: two of the data sets (SSP, CVE) were obtained from a superposition of isolated stars and thus carried initial constraint violations, whereas the other two data sets (CTS+SSP, CTS+CVE) were obtained by numerically solving the CTS equations with free data based on SSP and CVE data, using the hyperbolic relaxation method presented in <cit.>. In our work we did not enforce quasi-equilibrium conditions, to ease our comparison with previous work in <cit.>.
This effort was undertaken to answer the question of how much of a difference one can expect physical observables to vary when numerically evolving constraint-violating and satisfying data.In summary we found that the differences in the discussed analysis quantities computed during evolutions, among which is also the constraint monitor of , are always bigger between SSPCTS+SSP data when compared to CVECTS+CVE data. We demonstrated that evolutions of SSP and CVE data bear residual constraint violations above 10^-8 and 10^-10, respectively, despite being self-convergent. On the other hand, we could reduce and preserve the constraint violations for CTS+SSP and CTS+CVE data well below 10^-10 using a reasonable resolution, which demonstrates the capabilities of PS approximations as viable methods for next generation NR codes targeted at smooth solutions. A study of global analysis quantities showed that conserved variables like Noether charge N and ADM masscan be preserved well below 1% of their initial value already starting with comparatively small polynomial order n=9 per grid, as long as the scalar field does not leave the computational domain, while at the same time also displaying exponential decrease in a self-convergence study.A direct comparison of the Ricci scalar at the collision center R(x=0) and the scalar field amplitudebetween results obtained from SSPCTS+SSP showed that these quantities eventually start to dephase after merger. No such deviations were observed when comparing results from evolutions of CVECTS+CVE initial data. The main motivation for the development of the CVE method in <cit.> was that for head-on collisions of solitonic BSsobtained from SSP data displayed an artificial growth during the infall phase and eventually lead to premature BH collapse. This growth was avoided by the use of the CVE construction, which also delayed the BH formation after the stars' centers merged. Although, this work was concerned with mini BS collisions for which no BH collapse occurs, we find a qualitatively similar behavior forwhen evolving SSP and CVE data. Furthermore, the constraint-solved CTS+SSP data did not cure this artificial growth pre-merger. On the other hand, the evolution ofobtained from CTS+CVE data is to good approximation identical to the CVE data, remains approximately constant during infall when the centers of the stars are well separated. This latter observation serves as an argument which favors the use of CVE and CTS+CVE data over SSP and CTS+SSP data, in particular, when the goal is to model the coalescence of initially non-interacting stars that fall in on each other.We reproduced thesignals of the mini BS head-on collisions that were reported in <cit.> for SSP and CVE data and compared these findings with signals obtained from CTS+SSP and CTS+CVE data. We found that constraint solving does not significantly alter the qualitative differences that were discussed in <cit.>. These results are robust, as we demonstrated exponential decrease in differences ofsignals in a self-convergence study. We then computed the radiated energy from these kinds of head-on collisions while also varying the initial boost parameter v and found, again, a difference that is dominated by the way in which the isolated stars are superposed. For this quantity differences due to constraint solving the initial data are negligible and indistinguishable when accounting for errors in postprocessing. 
The analysis of the errors required special attention due to the lack of a heuristic cutoff frequency for signals originating from head-on collisions and which is used to suppress nonlinear drifts in the GW reconstruction (see Appendix:gw-analysis).An interesting physical result is that the radiated energy emitted during a fixed interval of coordinate time decreases when the initial momenta of the stars is increased, which is in opposition to BH head-on collisions where increasing the progenitors initial momenta leads to more energy being radiated gravitationally <cit.>. However, these findings are conceptually in agreement with results presented in <cit.> where for fixed boost parameter v the radiated energy also decreases when the initial distance between two BSs is increased, which also corresponds to an increase of the total energy of the system. This qualitative behavior is independent of the quality of the initial data, but quantitative differences in total radiated energy could be observed.We quantified the difference in the shape of the GW signals by computing the Wiener product between results of SSP, CVE, CTS+SSP and CTS+CVE initial data and varying resolutions, while taking into account detector noise. Here we found that signals obtained from CVECTS+CVE data are more similar to one another than signals obtained from SSPCTS+SSP data. Comparing signals between SSPCVE and CTS+SSPCTS+CVE showed that the way in which the superposition of stars is done dominates any effects due to constraint violations, which is in agreement with the rest of our findings.Overall we can conclude that all the differences we observed in the analysis quantities we studied, and for the particular BS head-on configuration we considered, are dominated by the way in which the SSP and CVE constructions differ. Differences in physical observables due to constraint violations were comparatively negligible, and marginally relevant when accounting for errors. This result is reassuring for theoretical predictions that were made based on BS simulations that used superposed data and for which it was verified that the initial constraint violations were reasonably small. Nevertheless, we recommend the use of constraint-satisfying over constraint-violating initial data for multiple reasons. First, solutions to the EFEs satisfy the constraint equations exactly at all times, which includes the initial slice, and, thus, the goal for numerical approximations of these solutions should be to satisfy these equations as well as possible. In practice it is also easier to preserve small initial constraint violations in time than having to rely on large initial constraint violations being reduced in time through a damping scheme, or having to rely on them to leave the computational domain through a boundary. Furthermore, it is not uncommon in long-time evolutions to find that constraint violations eventually start to grow for late times. In such a scenario, constraint-satisfying data will likely allow to evolve for longer times than constraint-violating data would, because the former leaves more room for the growth of violations during the evolution until the result becomes dominated by errors or the simulation crashes.This work established the basis for future studies of BS evolutions with thecode. 
With the addition of AMR support in <cit.> and the nice convergence behavior demonstrated in this work, we are confident that our code will be able to perform high-resolution and long-time-stable simulations of inspiraling binary BS collisions to produce quality GW signals. A future goal is to eventually build a waveform template bank for a targeted search of merger signals involving exotic compact objects. Another avenue worth exploring would be the problem of the construction of quasi-equilibrium initial data, for which first work has started in <cit.>. In particular, the hyperbolic relaxation method used to solve the CTS equations in the present paper is directly implemented in the code and, as such, it can profit from all optimizations that are added to the evolution code, which should enable the generation of high-quality initial data with a moderate amount of computational resources.

The data that support the findings of this study will be made available in the CoRe database <cit.>.

We thank Rossella Gamba, Thomas Helfer, Robin Croft and Ulrich Sperhake for providing valuable feedback for the manuscript. We are also grateful to Thomas Helfer, Robin Croft and Ulrich Sperhake for answering questions regarding their simulations and their GW radiated energy computations, and for providing access to their data. F.A. is also thankful to Alexander Jercher for discussions on natural units in GR and to Rossella Gamba for discussions on GW analysis as well as for pointing us to reference <cit.>, which led to the addition of the Wiener product analysis.

Computations were performed on the ARA cluster at the Friedrich-Schiller University Jena and on the supercomputer SuperMUC-NG at the Leibniz-Rechenzentrum (LRZ) Munich under project number .

F.A., D.C. and R.R.M. acknowledge support by the Deutsche Forschungsgemeinschaft (DFG) under Grant No. 406116891 within the Research Training Group RTG 2522/1. R.R.M. acknowledges support by the DFG under Grant No. 2176/7-1. H.R.R. acknowledges support from the Fundação para a Ciência e Tecnologia (FCT) within the Projects No. UID/04564/2021, No. UIDB/04564/2020, No. UIDP/04564/2020 and No. EXPL/FIS-AST/0735/2021. We acknowledge financial support provided under the European Union’s H2020 ERC Advanced Grant “Black holes: gravitational engines of discovery” Grant Agreement No. Gravitas–101052587.

The figures in this article were produced with Makie.jl <cit.>, ParaView <cit.>, Inkscape <cit.>.

§ GRAVITATIONAL WAVE ANALYSIS

Below we outline the GW analysis used in our code and describe the postprocessing techniques used for the analysis of the data presented in the main text. The code employs the Newman-Penrose formalism <cit.> to compute the curvature pseudo-scalar field Ψ_4, which measures radially outgoing gravitational radiation. The actual implementation follows the standard procedure outlined in <cit.> (see also <cit.> for a review), where Ψ_4 is first computed from the Weyl tensor and an orthonormal null tetrad, which itself is constructed from an orthonormal basis adapted to the normal direction of spheres centered around the collision region. Taking into account the pseudo-scalar character of Ψ_4, it is then decomposed into a modal expansion of spin-weighted spherical harmonics _-2Y_lm with complex coefficients Ψ_4,lm, where l ≥ 2, m = -l, …, l. For the analysis of head-on collisions, which take place in axisymmetry, only the expansion coefficients with l even and m = 0 are non-zero, and they are purely real <cit.>. In this work we only analyze the dominant l,m = 2,0 mode.
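As an illustration of the mode decomposition just described, the following sketch projects Ψ_4, sampled on a single extraction sphere, onto the spin-weight −2 harmonic _-2Y_20 = √(15/(32π)) sin^2θ. It assumes an equiangular (θ, φ) grid stored as 2D arrays with θ varying along the first axis, and it uses a simple Riemann-sum quadrature, which need not coincide with the quadrature of the actual implementation.

\begin{verbatim}
import numpy as np

def psi4_20_mode(psi4_sphere, theta, phi):
    """Project Psi_4 on one extraction sphere onto _{-2}Y_{20}:
        Psi_{4,20} = oint Psi_4 conj(_{-2}Y_{20}) dOmega .
    theta/phi are 2D meshgrid arrays (indexing='ij')."""
    Y20 = np.sqrt(15.0 / (32.0 * np.pi)) * np.sin(theta) ** 2
    dtheta = theta[1, 0] - theta[0, 0]
    dphi = phi[0, 1] - phi[0, 0]
    integrand = psi4_sphere * np.conj(Y20) * np.sin(theta)
    return np.sum(integrand) * dtheta * dphi
\end{verbatim}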
The GW flux and strain are reconstructed from <cit.>

d(r h_20)/dt (t) = lim_r→∞ ∫_-∞^t dt' r Ψ_4,20(t',r) ,
r h_20(t) = lim_r→∞ ∫_-∞^t dt' ∫_-∞^t' dt'' r Ψ_4,20(t'',r) ,

and the radiated energy associated with this mode is computed from

E(t) = ∫_-∞^t dt' dE(t')/dt , dE/dt (t) = lim_r→∞ 1/16π | ∫_-∞^t dt' r Ψ_4,20(t',r) |^2 .

In order to carry out the limit r → ∞ above, we utilize the peeling property of Ψ_4 <cit.> and perform an extrapolation in the radius r before integration. To this end, we introduce the retarded time coordinate

u(t) = u_∗(t) - t_∗ = t - r - 2 log(r/2 - 1) - t_∗ ,

where t_∗ is a constant that is added in order to correct the retarded Schwarzschild time coordinate u_∗ to account for a potential time dependence of the lapse and the radial coordinate in the far-field zone, as well as for the fact that the r used in our simulations is in general not an areal coordinate in our gauge <cit.>. t_∗ is determined numerically by aligning multiple Ψ_4,20 data streams extracted at different radii at the first zero before the merger spike. We note that in <cit.> more sophisticated expressions for determining t_∗ were provided which can also account for departures from the standard Schwarzschild tortoise coordinate. Their method requires extra simulation output, but we found that numerically determining t_∗ in a postprocessing step can also improve the quality of the extrapolation to null infinity to a satisfying degree, as we discuss next.

The so-obtained time-aligned data streams recorded at N different extraction radii are then fitted to the ansatz

rΨ_4,20(u) = r_∞Ψ_4,20(u) + ∑_n=1 α_n(u)/r^n ,

with the sum truncated at the chosen extrapolation order, using the linear least squares method to determine the time-dependent coefficients r_∞Ψ_4,20 and α_n. The desired extrapolated curvature scalar for the evaluation of (<ref>), (<ref>) and (<ref>) is then given by r_∞Ψ_4,20. We note that the quality of the extrapolated result depends 1) on the number and range of extraction radii on which data is recorded (the more, the better), 2) on the extrapolation order, and 3) on the time alignment through the correction t_∗. All results involving Ψ_4 presented in this work were obtained from simulations where the outermost boundary was located at R = 800, and we recorded data on 48 spheres with equally spaced increments between radii of 200 and 800. The extrapolations to null infinity of r_∞Ψ_4,20 were only used for <ref>, <ref> and <ref>, for which we verified that the results are stable between extrapolations that used the different orders 1, …, 5.

The last ingredient for the analysis is the numerical evaluation of the time integrals appearing in the reconstruction formulae. To this end we first preprocess the r_∞Ψ_4,20 data by limiting it to a range u ∈ [u_L, u_R] with

w(u,u̅,σ) = 1/2 (tanh( σ (u - u̅) ) + 1) ,
(r_∞Ψ_4,20)'(u) = r_∞Ψ_4,20(u) × w(u,u_L,σ) w(u,u_R,-σ) .

We used σ = 1/10, u_L = 100 and u_R = 3000 for all our analysis to make the signal periodic in u, which avoids additional artifacts when Fourier analyzing the signal later on. We note that the choice of u_L is such that the dominant spike in the analysis of all studied GW signals is not affected by the windowing, even when the boost parameter is v = 0.2. Given the windowed data, we then evaluate the integrals using a variation of the fixed frequency integration (FFI) method <cit.>, which requires a choice for the cutoff frequency to remove spurious nonlinear drifts, and we comment on this further below.
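A schematic implementation of the extrapolation and windowing steps described above might look as follows; the function and variable names are placeholders, and the fit is performed independently at every retarded time through a single linear least-squares solve.

\begin{verbatim}
import numpy as np

def extrapolate_to_scri(r_psi4, radii, order=2):
    """Fit  r*Psi_4(u, r) = r_inf(u) + sum_n alpha_n(u)/r**n  by linear
    least squares; r_psi4 has shape (Nt, Nr), radii has shape (Nr,).
    Returns the 1/r^0 coefficient, i.e. the extrapolated signal r_inf(u)."""
    A = np.vander(1.0 / radii, N=order + 1, increasing=True)   # [1, 1/r, 1/r^2, ...]
    coeffs, *_ = np.linalg.lstsq(A, r_psi4.T, rcond=None)
    return coeffs[0]

def tanh_window(u, u_L, u_R, sigma=0.1):
    """Smooth window that is ~1 inside [u_L, u_R] and ~0 outside,
    built from w(u, ubar, sigma) = (tanh(sigma*(u - ubar)) + 1)/2."""
    rise = 0.5 * (np.tanh(sigma * (u - u_L)) + 1.0)
    fall = 0.5 * (np.tanh(-sigma * (u - u_R)) + 1.0)
    return rise * fall
\end{verbatim}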
In our variation of the FFI method we first apply a discrete Fourier transform (DFT) to the windowed data to obtain ℱ(r_∞), we then apply a high-pass Butterworth filter of order N=10 with transfer function H_N(f,f_0) to obtain ℱ'(r_∞) = ℱ(r_∞) | H_N(f,f_0) | , and finally perform the FFI integration with ℱ'(r_∞) and cutoff f_0 to compute (<ref>), (<ref>) and (<ref>). Lastly, to compute E via (<ref>) we resort to using a standard trapezoidal rule, since E(t) will be monotonically increasing and the FFI method can only be used for oscillating signals. <ref> shows an example of an extrapolated and windowed data stream, the reconstructions dh_20/dt and h_20, as well as the respective power spectral densities (PSDs), that were computed with three different values of f_0 (see legend). The PSD of the dh_20/dt results shows that the integration amplified small frequency components and that the additional processing via a Butterworth high-pass filter exponentially suppresses this growth for frequencies below f_0. The choice of f_0 appears to have a mild impact on the time domain signal dh_20/dt. On the other hand, the PSD of h_20 demonstrates that for too small values of f_0, low-frequency components are amplified to a level where they become comparable to components above f_0. This translates into a strong dependence of the time domain signal on the choice of f_0, even when using the Butterworth filter. From this test we conclude that dh_20/dt is robust enough for the computations of dE/dt and E via (<ref>) and (<ref>), whereas the quality of h_20 in our analysis is not sufficient for further studies, and we leave it to future work to improve on this. In <cit.> it was established that a good choice for the cutoff frequency for the (l,m)=(2,2) mode of binary inspiral simulations is given by f_0 ≈ 2 f_orb, where f_orb is the initial orbital frequency of the stars. Unfortunately, we are not aware of a similar heuristic for head-ons due to the absence of orbital motion and the time domain signal displaying a dominant pulse due to the merger. The three values of f_0 given in <ref> are used for all computations of E for which results are shown in the main text (including varying the boost parameter). The associated error bars account for 1) finite resolution errors and 2) uncertainties due to variation in the extrapolation order K and variation in f_0. To estimate those contributions we proceed as follows. We compute E for two resolutions n=13 and n=15 and for all combinations of K and f_0. The contribution 1) is taken to be the maximum of all differences of E between results of resolution n=13 and n=15, but the same values of K and f_0. For contribution 2) we compute the average of all E results obtained for n=15 and all combinations of K and f_0. The associated error is then defined as the maximum of all differences of E and the average of E. These two errors are then combined in quadrature and added in <ref>. We note that the finite resolution error is on average two orders of magnitude smaller than the error due to the uncertainties involved in the postprocessing. § COMPARISON OF WAVEFORMS For GW analysis and parameter estimation studies one often assumes a Gaussian noise distribution, which motivates the definition of a detector-noise-weighted inner product (Wiener product) <cit.> to study the space of real time signals a(t) and b(t). It is defined as (a|b)_S_n, [0,∞) = ∫_0^∞ df ã(f) b̃^∗(f)/S_n(f), where ã(f) and b̃(f) are the respective Fourier transforms and S_n(f) is the PSD of the noise n(t).
This product defines a norm which is positive-definite and assigns a Euclidean structure to the vector space of real signals <cit.>. If one is given two normalized signals â and b̂, such that (â|â)_S_n,[0,∞) = 1 and analogously for b̂, then a result of (â|b̂)_S_n,[0,∞) that is close to 1 indicates that a and b are close to each other. In practice, one often utilizes the GW strain h together with (<ref>) for parameter studies. However, the reconstruction of h from Ψ_4 can suffer from uncertainties and even render an analysis based on h useless. Recently, in the context of Proca star head-on collisions it was shown in <cit.> that an analysis using Ψ_4 data together with (<ref>) and the second-differenced Gaussian noise PSD S_Ψ_4 is equivalent to the more commonly practiced analysis based on h and S_n, while at the same time it removes the uncertainties due to free parameters in the h reconstruction procedure. For the results presented in the main text we adopted this analysis, which requires additional (but parameter-free) processing. In particular, given two Ψ_4,20(u) data streams we compute <cit.> (in practice we use the extrapolated r_∞ data) Ψ̃'_4,20[k] = (1 - cos(2π kΔ fΔ t))/(2π^2(kΔ fΔ t)^2) Ψ̃_4,20(k Δ f) , k = 0, …, N_d-1 , where Ψ̃_4,20(f) is the DFT of Ψ_4,20, N_d is the length of the output data stream, Δ t is the time resolution of the data, and Δ f = 1/(N_dΔ t). This transformation relates the second-differenced Fourier transform Ψ̃'_4,20 to the Fourier transform of the second derivative, Ψ̃_4,20 = ḧ̃. In this work we exclusively used the Advanced LIGO design sensitivity curve <cit.> for S_n, which is also processed through <cit.> S_Ψ_4[k] = 1/(Δ t)^4 ( 6 - 8 cos(2π k/N_d) + 2 cos(4π k/N_d) ) S_n[k] . Given these quantities we then compute the Wiener product between two signals Ψ^(1)_4,20 and Ψ^(2)_4,20 by W(Ψ^(1)_4,20, Ψ^(2)_4,20) := (Ψ̂^(1)_4,20|Ψ̂^(2)_4,20)_S_Ψ_4,[f_min,f_max], where we limited the integration to the range [f_min, f_max] = [5 Hz, 1000 Hz] for practical purposes, since the detector PSD is only sampled for a finite frequency range. The numerical evaluation of the integral is carried out by using a cubic interpolation to map Ψ̃^(1)_4,20, Ψ̃^(2)_4,20 and S_Ψ_4 to a common frequency grid and then using a trapezoidal quadrature rule to compute the integral in (<ref>). We verified that the results presented in <ref> are insensitive to the choice of frequency range. We want to point out that care must be taken when evaluating (<ref>) and (<ref>), since a limited frequency range [f_min, f_max] also limits the range of k indices to use for the numerical integration, because f_k = k Δ f. The results in the main text are presented in Planck units and with a particular choice of rescaling of all quantities (see the appendix on unit systems). Because of that, the obtained results depend on the experimentally unknown scalar-field mass μ. The choice of μ influences the time scale on which physical processes take place and, thus, controls the frequency of the GW signals when converted to SI units. Since we are only interested in directly comparing GWs obtained from different initial data construction techniques, we decided to fix μ = 1 × 10^-11 eV for all computations. With this choice the GW spectrum falls roughly into the sensitive region of the detector for the range of boost parameters v we study; the frequencies of the dominant amplitudes of the PSD are then 𝒪(100 Hz) <cit.>.
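The Ψ_4-based overlap computation just described can be sketched as follows; the second-differencing factor and the transformed PSD follow the two formulas above, while the function names, the omitted interpolation step (a common grid is assumed), and the unit conversion at the end are our own simplifications rather than the pipeline of this work.

```python
import numpy as np

def second_differenced_spectrum(psi4, dt):
    """DFT of Psi_4,20 multiplied by the second-difference factor, i.e. Psi'_4,20[k]."""
    Nd = len(psi4)
    k = np.arange(Nd)
    f = k / (Nd * dt)                       # f_k = k * Delta_f with Delta_f = 1/(Nd*dt)
    fac = np.ones(Nd)                       # k = 0 limit of the factor is 1
    x = k[1:] / Nd                          # = k * Delta_f * dt
    fac[1:] = (1.0 - np.cos(2.0 * np.pi * x)) / (2.0 * np.pi**2 * x**2)
    return f, np.fft.fft(psi4) * fac

def psd_psi4_from_strain_psd(Sn, Nd, dt):
    """Transform the detector strain PSD S_n[k] into S_Psi4[k]."""
    k = np.arange(Nd)
    return (6.0 - 8.0 * np.cos(2 * np.pi * k / Nd)
            + 2.0 * np.cos(4 * np.pi * k / Nd)) / dt**4 * Sn

def wiener_overlap(f, a, b, S, fmin=5.0, fmax=1000.0):
    """Normalized noise-weighted inner product of two spectra on a common grid."""
    sel = (f >= fmin) & (f <= fmax)
    ip = lambda x, y: np.trapz(np.real(x[sel] * np.conj(y[sel])) / S[sel], f[sel])
    return ip(a, b) / np.sqrt(ip(a, a) * ip(b, b))

# fixing mu = 1e-11 eV sets the code-to-SI time unit t0 = hbar/(mu c^2) (hbar = c = 1 conventions)
t0_seconds = 6.582e-16 / 1.0e-11            # ~ 6.6e-5 s, i.e. one code frequency unit ~ 1.5e4 Hz
```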
§ UNIT SYSTEMS Let l, t, m and A denote length, time, mass and scalar-field amplitude, and let l̂, t̂, m̂ and Â be the numerical values given with respect to reference values l_0, t_0, m_0 and A_0, i.e., l = l̂ l_0 , t = t̂ t_0 , m = m̂ m_0 , A = Â A_0 . In the main text we use the Planck unit system in which G = c = ħ = 1. From the definitions of the Planck length, time, and mass, l_P = √(ħ G/c^3), t_P = √(ħ G/c^5), m_P = √(ħ c/G), respectively, we then infer that l_P = t_P = m_P = 1 holds in such a system. Furthermore, all quantities effectively lose their physical dimensions and l_0, t_0, m_0 and A_0 now merely serve as dimensionless rescalings. One possible choice for reference values in which this is realized is l_0 = 1/μ, t_0 = 1/μ, A_0 = 1 , where μ is the scalar-field mass introduced in the main text, and we fix m_0 further below. The numerical values of the Planck length and time then read l̂_P = t̂_P = μ. This is a convenient choice, because it eliminates μ from all equations without loss of generality. To see this, consider the action (<ref>), which is the fundamental object of the theory under study. Using (<ref>) we can rewrite it as S = Ŝ S_0 = 1/μ^2 ∫ d^4 x̂ √(-g) × ( ^(4)R̂/16π - 1/2( g^ab∇̂_a ϕ̂^∗∇̂_b ϕ̂ + |ϕ̂|^2 ) ) , where we omit carets from √(-g) and g^ab, since they are rescaled together with d^4 x and (∇_a·) (∇_b·), respectively. Choosing S_0 = 1/μ^2 as the reference value for the action then eliminates all factors of μ, such that (<ref>) is formally equivalent to (<ref>) when also setting μ = 1 in (<ref>), which was done in the main text. We note that m_0 need not be specified in order to obtain (<ref>). Instead, one way to fix it is by demanding 1 = ħ̂ S_0 = ħ = m_P l_P^2/t_P = (m̂_P l̂_P^2/t̂_P)(m_0 l_0^2/t_0), so that ħ̂ = μ, S_0 = 1/μ^2 = m_0 l_0^2/t_0, which implies m_0 = 1/μ. In summary, (<ref>) and (<ref>) are the reference values one can adopt to eliminate all occurrences of μ from all equations when working with Planck units and BSs. Another set of units commonly found in the literature are the natural units in which ħ = c = 1. From (<ref>) one then obtains the relations l_P = t_P = 1/m_P, and the Einstein-Hilbert term in the action also attains an extra factor of 1/G = m_P^2. Thus, when working in such units we recommend the choice l_0 = 1/μ, t_0 = 1/μ, A_0 = m_P , as reference values. This then allows us to adopt S_0 = m_P^2/μ^2 as the reference value for the action so that all occurrences of μ and m_P are eliminated from it. Again, to arrive at S_0 we need not specify m_0. Instead, the latter is fixed by a similar argument as used for (<ref>) and (<ref>), which yields m_0 = m_P^2/μ. In summary, (<ref>) and (<ref>) are a convenient choice of reference values one can adopt when working with natural units and BSs to eliminate all factors of μ and m_P from the equations. We note that <cit.> also works with natural units but uses l_0 = 1/μ, t_0 = 1/μ, m_0 = 1/μ, A_0 = m_P . This set of reference values also removes all factors of μ and m_P from the action. However, we find that this choice is not compatible with other parts of our analysis. As an example, consider the g_rr component in (<ref>) which, when expressed in natural units and non-rescaled variables, must read 1/g_rr = 1 - 2Gm/r = 1 - 2m/(m_P^2 r), since g_rr is dimensionless. Performing a rescaling using our convention (<ref>) and (<ref>) eliminates m_P, whereas using (<ref>) does not. Yet another set of units that is commonly used are geometric units in which G = c = 1. From (<ref>) one obtains l_P = t_P = m_P, and now only the term involving the scalar-field potential in the action attains an extra factor of 1/ħ^2 = 1/l_P^4.
For this setup we recommend the choice l_0 = l_P^2/μ, t_0 = l_P^2/μ, A_0 = 1 . This then allows us to adopt S_0 = l_P^4/μ^2 to eliminate all factors of μ and l_P from the action. By a similar argument as used before, we now fix m_0 through l_P^2 = ħ = ħ̂ S_0 = ħ̂ m_0 l_0^2/t_0, and find m_0 = l_P^2/μ. In summary, (<ref>) and (<ref>) are a convenient choice of reference values one can adopt when working with geometric units and BSs to eliminate all factors of μ and l_P from the equations. § CONSTRUCTION OF ISOLATED BOSON STARS IN 1D In this appendix we fix a typo from <cit.> in the equations that describe stationary, spherically symmetric and isolated BS models. The starting point for the construction of such stars are ansätze for the metric and scalar field which are of the form ds^2 = - e^2Φ dt^2 + ( 1 - 2m/r)^-1 dr^2 + r^2 ( dθ^2 + sin(θ)^2 dφ^2 ) , ϕ(t,r) = A(r) e^iω t, where A, ω are the amplitude and harmonic angular frequency of the scalar field. Inserting these into the EKG system yields ∂_r Φ = m/(r(r-2m)) + 2π r^2/(r-2m) ( η^2 + ω^2 e^-2Φ A^2 - V ) , ∂_r m = 2π r^2 ( η^2 + ω^2 e^-2Φ A^2 + V ) , ∂_r A = ( 1 - 2m/r)^-1/2 η , ∂_r η = - 2 η/r - η ∂_r Φ + ( 1 - 2m/r)^-1/2 ( V' - ω^2 e^-2Φ) A , where V' = dV/d(A^2). In <cit.> a factor r/(r - 2m) was missing in the second term on the RHS of (<ref>). This system of equations is subject to the boundary conditions A(0) = A_ctr, m(0) = 0 , η(0) = 0 , ∂_r Φ(0) = 0 , lim_r→∞ A(r) = 0 . The asymptotic behavior of A is given by A ∼ (1/r^(1+ϵ)) e^(-r √(1 - ω^2 e^-2Φ)), where ϵ is a non-integer correction that was already reported in <cit.> and recently discussed in <cit.>. Our numerical implementation uses the same shooting algorithm as <cit.>, where the integration starts at r = 0 and proceeds radially outwards. The asymptotic behavior is used for patching the solution at a radius r_m, determined dynamically during the integration, beyond which the scalar field is no longer integrated, but instead frozen to the behavior (<ref>). Results presented in this work do not account for the ϵ correction, as we were simply not aware of it when doing the analysis. However, we implemented the correction in hindsight and found that our single star solutions are only altered in A and η in the far-field regime by 𝒪(10^-10), which is negligible when compared to other error sources. § CONFORMAL METRIC AND ITS TIME DERIVATIVE We note a difference in notation between <cit.> and our work, which is mainly based on <cit.>. In particular, <cit.> defines the time derivative of the conformal metric γ̅_ij as u̅_ij := ∂_t γ̅_ij = - γ̅_ik γ̅_jl ∂_t γ̅^kl. Its indices are also raised with the conformal metric, u̅^mn = γ̅^im γ̅^jn u̅_ij. Using (<ref>) we obtain u̅^mn = - ∂_t γ̅^mn. On the other hand, <cit.> does not work with (<ref>), but instead uses γ̇̅̇_ij := ∂_t γ̅_ij, γ̇̅̇^ij := ∂_t γ̅^ij. To make the connection with their notation we had to set u̅^mn = - γ̇̅̇^mn, which explains the change in sign in (<ref>). § MOVING AVERAGE The differences between Ψ_4,20 waveforms obtained from different resolutions n are postprocessed for the purpose of demonstrating self-convergence in <ref>. To this end, the data is interpolated to a common time grid using cubic splines and N = 2000 points, after which differences between consecutive resolutions are computed. To suppress the artificial noise due to zero crossings introduced by interpolation errors, which is amplified on a logarithmic scale, we apply a moving average filter to these differences.
In particular, let ΔΨ_4,i, i = 1,…,N be the difference between two interpolated data streams and let w be an averaging window width, assumed to be even and positive. We define the moving average asavg[ΔΨ_4,i]= 1/w∑_j=1^wΔΨ_4,i-w/2+j, i∈[1+w/2, N - w/2] ,avg[ΔΨ_4,i]= avg[ΔΨ_4,w/2] , i∈[1, …, w/2] ,avg[ΔΨ_4,i]= avg[ΔΨ_4,N-w/2] , i∈[N-w/2+1, N] .For the data presented in this work we chose w = 20. | http://arxiv.org/abs/2311.16251v1 | {
"authors": [
"Florian Atteneder",
"Hannes R. Rüter",
"Daniela Cors",
"Roxana Rosca-Mead",
"David Hilditch",
"Bernd Brügmann"
],
"categories": [
"gr-qc"
],
"primary_category": "gr-qc",
"published": "20231127190142",
"title": "Boson star head-on collisions with constraint-violating and constraint-satisfying initial data"
} |
Relationship between Model Compression and Adversarial Robustness: A Review of Current Evidence Svetlana Pavlitska^1,2, Hannes Grolig^2, J. Marius Zöllner^1,2 ^1 FZI Research Center for Information Technology, ^2 Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany, [email protected], January 14, 2024 ========================================================================================================================================================================================================== Increasing the model capacity is a known approach to enhance the adversarial robustness of deep learning networks. On the other hand, various model compression techniques, including pruning and quantization, can reduce the size of the network while preserving its accuracy. Several recent studies have addressed the relationship between model compression and adversarial robustness, while some experiments have reported contradictory results. This work summarizes the available evidence and discusses possible explanations for the observed effects. Keywords: model compression, adversarial robustness. § INTRODUCTION AND RELATED WORK Goodfellow et al. <cit.> and Szegedy et al. <cit.> first brought up the risk of adversarial attacks, small perturbations (often imperceptible to humans) that are carefully crafted and added to the input of state-of-the-art (SOTA) deep neural networks (DNNs). Without specific DNN training or mitigation measures, these attacks lead to high-confidence wrong outputs of SOTA DNNs and convolutional neural networks (CNNs). This inherent vulnerability of DNNs poses an especially high risk when applying them in autonomous driving, facial recognition, or medical domains. Adversarial defenses attempt to robustify neural networks artificially, but robustly solving a task fundamentally increases its difficulty. However, simply scaling model sizes is not always an option and is quickly restricted by technical and financial factors. Model compression approaches such as quantization and pruning can significantly reduce model size while preserving comparable performance levels. The impact of model compression on adversarial robustness has been a focus of several recent studies. However, to the best of our knowledge, no analysis of the existing publications to summarize the state of the art has been performed so far. Our work aims at closing this research gap. We have reviewed existing works that either explored the effect of model compression methods on the adversarial vulnerability of the networks or tried to combine both goals in a single training algorithm. We group the existing evidence from the experiments and draw conclusions based on it. § RELATED WORK §.§ Adversarial Training Adversarial training (AT) remains among the most successful defenses against adversarial examples <cit.>. Salman et al. showed that adversarially trained ImageNet <cit.> classifiers show better transferability <cit.>, which is consistent with the hypothesis that adversarially trained robust networks provide better feature representations. Gong et al. showed that AT can improve image recognition models by preventing overfitting <cit.>. Andriushchenko et al. <cit.> stated that performing AT efficiently is important because it is the crucial algorithm for robust deep learning. The idea is intuitive: DNNs are trained by handing them data and correct labels to learn their decision boundaries. In AT, adversarial examples and their correct labels are preemptively augmented into the training process to train a more robust model.
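For reference, the core of this procedure is the generation of adversarial examples inside the training loop. The PyTorch sketch below shows the single-step FGSM perturbation, its iterated PGD variant with projection onto an l_inf ball, and one adversarial training step; it is a generic textbook formulation with placeholder hyperparameters, not code from any of the works reviewed here.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Single-step l_inf attack: x_adv = x + eps * sign(grad_x loss); inputs assumed in [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x), y), x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps, alpha, steps):
    """Iterated FGSM steps, projected back onto the eps-ball around the clean input x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y, eps=8/255, alpha=2/255, steps=7):
    """One training step on PGD adversarial examples instead of (or in addition to) clean data."""
    x_adv = pgd(model, x, y, eps, alpha, steps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```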
Madry et al. proposed the prime baseline for AT with a Projected Gradient Descent (PGD) attack <cit.>, which was later improved by <cit.> using early stopping. §.§ Model Compression DNN and CNN architectures have become increasingly deep and complex and can require millions of parameters, which leads to slow inference. Many techniques have been developed to speed up inference, including quantization and pruning. Pruning aims at selecting insignificant parameters that can be removed to make the model smaller while maintaining high prediction accuracy. The simplest approach, magnitude-based pruning, removes weights below a specified threshold value. Instead of pruning individual weights, it is also possible to prune at a higher level of granularity by removing entire feature maps or filters in a CNN. Filters can be removed using data-independent pruning methods based on properties such as their L1 norm <cit.>. Correct pruning can help to speed up the inference without impacting accuracy <cit.>. Quantization is another method that reduces the precision of the model parameters, e.g., from 32-bit floating point to 8-bit integers. It can be performed on scalars or vectors as demonstrated in <cit.>, where the reconstruction error of the activations rather than the weights is minimized. § RELATIONSHIP BETWEEN QUANTIZATION AND ROBUSTNESS Quantization has so far been a focus of only a few works exploring adversarial robustness (see Table <ref>). Our search has revealed a total of four papers <cit.>, all of which consider both white-box and black-box attacks, while PGD <cit.> is a method used in all of these works. One of the first works regarding quantization and adversarial robustness is from Galloway et al. <cit.>. The authors focused on binarized neural networks where both weights and activations in the hidden layers are quantized to ±1. Randomized quantization was used. They compared full-precision networks to their respective binarized networks. It was observed that AT is a balancing act with binary models, whereas scaled binary models can benefit from AT. Overall, they concluded that binarized networks can slightly improve the robustness against certain attacks. In terms of efficiency, they observed an advantage of the binarized networks over their full-precision equivalents. In <cit.>, Rakin et al. proposed a novel approach where activations are quantized to increase the adversarial robustness of DNNs. The approach integrates the quantized activation functions into AT. They proposed a fixed as well as a dynamic activation quantization method. For the experiments, adversarially trained baseline networks were used. Then, the authors trained LeNet <cit.> and ResNet-18 <cit.> with the fixed and dynamic quantization techniques. The models were quantized with different quantization levels (1-, 2- and 3-bit activations). The robustness of the fixed and dynamic quantized networks against various attacks (PGD <cit.>, FGSM <cit.>, and the Carlini and Wagner (C&W) attack <cit.>) was compared with the robustness of the baseline networks. The authors concluded that fixed and dynamic quantization can increase the robustness. A further work by Wijayanto et al. <cit.> proposed an adversarial-aware compression framework for DNNs. This framework combines pruning, quantization, and encoding. In their experiments, the approach is compared to pruned and quantized networks. It was observed that quantization can improve robustness.
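Before continuing the survey, the two compression operations described in Sec. II-B can be made concrete with a short sketch of global magnitude-based weight pruning and symmetric uniform post-training weight quantization; the function names, sparsity level, and bit width are illustrative assumptions and do not correspond to the specific schemes of the surveyed papers.

```python
import torch

@torch.no_grad()
def magnitude_prune(model, sparsity=0.8):
    """Zero out the globally smallest |w| until the given fraction of weights is removed."""
    weights = torch.cat([p.abs().flatten() for p in model.parameters() if p.dim() > 1])
    threshold = torch.quantile(weights, sparsity)
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                      # prune conv/linear weights, keep biases
            masks[name] = (p.abs() > threshold).float()
            p.mul_(masks[name])
    return masks                             # reuse the masks to keep pruned weights at zero during fine-tuning

@torch.no_grad()
def quantize_weights(model, num_bits=8):
    """Map each weight tensor to signed num_bits integers and back (de-quantized values)."""
    qmax = 2 ** (num_bits - 1) - 1
    for p in model.parameters():
        if p.dim() > 1:
            scale = p.abs().max() / qmax + 1e-12
            p.copy_(torch.round(p / scale).clamp(-qmax, qmax) * scale)
```

Whether such naive compression is followed by (adversarial) retraining is precisely the axis along which the studies reviewed below differ.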
Another novel quantization method is proposed by Lin et al. <cit.>, where an empirical study regarding quantization and robustness was conducted. The authors quantized the activations and compared the naive quantized models to their respective full-precision models. They observed that the conventional quantization method is not robust and that input image quantization applied to hidden layers worsens the robustness. The proposed defensive quantization approach achieved higher robustness than the full-precision counterparts and improved the accuracy in the absence of an adversarial attack. Gorsline et al. investigated the effect of weight quantization on robustness in <cit.>. They experimented on MNIST <cit.> and a two-spiral classification problem. They concluded with the observation that quantization does not affect robustness if the adversarial attack exceeds a critical strength. Finally, Varghese et al. <cit.> introduced a novel hybrid compression approach that combines pruning and quantization and studied the relationships between robustness and compression. They investigated the more complex task of semantic segmentation for automated driving. In contrast to the other works, the authors investigated corruption robustness, not adversarial robustness. By corruption, they refer to augmentations caused by real-world events (e.g., noise, blur, or weather conditions). They observed improved robustness of the compressed DeepLabv3+ <cit.> network compared to the reference network. In summary, naive quantization without AT has demonstrated both a negative <cit.> and a positive <cit.> impact on adversarial robustness. If quantization is combined with AT, a positive effect was observed in several works <cit.>. Moreover, AT was shown to improve quantization itself <cit.>. § RELATIONSHIP BETWEEN PRUNING AND ROBUSTNESS An overview of the works that focus on pruning and robustness is given in Table <ref>. We divide the considered approaches into three groups: (1) works that examine the intrinsic relationships between pruning and robustness, (2) works proposing novel approaches via a combination of static pruning with robust training, and (3) the dynamic pruning approach, incorporating adversarial robustness as a training objective. §.§ Effects of Pruning on Robustness The first group of works aims at studying the general effects of pruning on adversarial robustness. In the theoretical and empirical analyses, particular attention was paid to the question of whether pruning offers inherent protection against adversarial attacks. Wang et al. <cit.> conducted the first analysis regarding the adversarial robustness of pruned deep neural networks. The work was not published because the experimental evidence was not grounded enough. The effects of pruning on robustness and the impact of AT on pruned networks were investigated. Naturally trained models were compared to their original networks. The accuracy of a pruned model was similar to the accuracy of an original network. The robustness of a pruned network under FGSM and Papernot's attacks was worse than the robustness of an original network. Neither the pruned nor the original model could withstand the PGD attack. The authors suspected that pruning reduces the network capacity, which in turn reduces its robustness. Then, the authors performed AT with FGSM and PGD along with the network pruning procedure and compared these models to their respective adversarially trained original networks.
They observed that highly pruned networks can become considerably robust, that weight pruning allows more compression than filter pruning, and that PGD leads to more robust models than FGSM. In additional experiments with a Wide ResNet <cit.> on CIFAR-10 <cit.>, the authors observed an interesting result. The PGD-trained network that was moderately pruned (less than 50% of the parameters) was slightly more accurate and more robust than the respective original network. The robustness of the highly pruned network (80% to 94% of the weights) was higher than the original, but the accuracy on natural images dropped simultaneously. With an increasing compression rate, the robustness of the model drops earlier than the classification accuracy. The authors observed that with the training procedures applied, a model cannot be both highly robust and pruned simultaneously. Another early work that studied the intrinsic relationships between the sparsity achieved through weight and activation pruning and the adversarial robustness of DNNs is by Guo et al. <cit.>. Their analysis is one of the few works that examine the effects of pure pruning without AT on adversarial robustness. The authors trained different architectures and evaluated their robustness under various l_2 and l_∞ white-box attacks. For the evaluation of the robustness of the models, the authors suggested two metrics that describe the ability to resist l_2 and l_∞ attacks, respectively. First, they pruned the weights of the dense reference networks and compared the robustness of the pruned networks to the original ones. Sparse DNNs tend to be more robust against l_∞ (FGSM and rFGSM <cit.>) and l_2 (DeepFool <cit.>, C&W L_2 <cit.>) attacks until the sparsity reaches some thresholds, above which the capacity of the pruned models degrades. This observation is consistent with the observations from <cit.> described above. The authors verified their results additionally with the attack-agnostic CLEVER <cit.> scores. They observed positive correlations between activation sparsity in a certain range and robustness. The authors suggested taking care to avoid sparsity rates that are too high and concluded that sparse nonlinear DNNs can be more robust than their dense counterparts if the sparsity is within a certain range. Similar to the work by Guo et al. <cit.>, Jordao and Pedrini <cit.> studied the intrinsic effect of pruning on the adversarial robustness of deep convolutional networks without AT. However, unlike <cit.>, the authors did not examine the trade-off between robustness, accuracy, and compression but the relationship between generalization and robustness. They observed that pruning preserves generalization. The authors pruned filters and layers from several reference architectures based on different pruning criteria. After pruning, they fine-tuned the compressed networks with augmented data. First, they compared the accuracy and robustness of the dense reference networks to their pruned counterparts (filters, layers, and both) under different attacks. Overall, they observed that pruning improves robustness without sacrificing generalization. Similar to <cit.>, the authors did not use the PGD attack in their experiments. Furthermore, they could not observe a superior pruning strategy with respect to all attacks. Then, they demonstrated that removing single filters can improve the robustness without adjusting the network parameters. They also observed that fine-tuning leads to higher adversarial robustness than training from scratch.
When comparing the pruned network to other defense mechanisms, they observed that pruning obtained one of the best average improvements. They suggested combining pruning with other defense mechanisms to achieve more robust and efficient networks. The authors concluded that pruning filters or layers (or both) increases the adversarial robustness of convolutional networks. In summary, both negative <cit.> and positive <cit.> effects of pruning on robustness were seen in the experiments, although the studies reporting the latter provided significantly more empirical evidence. Both papers observing positive effects <cit.> used retraining; this confirms again that omitting retraining strongly weakens robustness. On the other hand, these works did not provide results for the PGD attack, making a comparison of the pieces of evidence difficult. §.§ Combined Compression-Robustness Methods Various combined compression-robustness approaches were proposed, with network pruning performed before, after, or alternately with AT. Liao et al. <cit.> theoretically proved the correlation between weight sparsity and adversarial robustness and showed in experiments that weight sparsity improves robustness with AT. They showed that pruning does not affect the model robustness negatively in some adversarial settings. Furthermore, they demonstrated that the robustness can be improved with AT after pruning. Overall, the proposed novel AT method that includes pruning was shown to lead to sparse networks with better performance than their dense counterparts. In <cit.> the authors stated that they describe the first framework that connects model compression with adversarial robustness. They proposed their Adversarially Trained Model Compression (ATMC) framework, which includes pruning, quantization, and AT. ATMC was compared to models that were adversarially trained; pruned; adversarially trained and pruned; as well as adversarially trained, pruned, and adversarially retrained. Their results support the existence of a trilateral trade-off between robustness, accuracy, and compression. Analogously to <cit.>, the authors concluded that if robustness is taken into account, model compression can maintain accuracy and robustness, whereas naive model compression may decrease adversarial robustness. A similar approach is proposed by Ye et al. <cit.>. The authors proposed a framework of concurrent AT and weight pruning. To compare weight pruning and training from scratch, they adversarially trained models of different architectures with various scaling factors. Then, the authors pruned the filters of each network with the proposed framework. Each reference network was pruned to the respective smaller scaling factors. The authors summarized that pruned networks can have high accuracy and robustness, which can be lost if a network of comparable size is adversarially trained from scratch. The framework evaluation under different pruning schemes and transfer attacks demonstrated that irregular pruning performs the best and filter pruning performs the worst. Interestingly, the pruned model turned out to be more robust to transfer attacks than the respective dense network. In <cit.>, pruning is formulated as an empirical risk minimization problem, and the minimization problem can be integrated with various robust training objectives like AT. The authors demonstrated that pruning after training helps to achieve state-of-the-art accuracy and robustness. The proposed method (HYDRA) incorporates the AT approach by Carmon et al.
<cit.>, although other robust training objectives are possible. The authors observed improved compression, accuracy, and robustness compared to the baseline networks and to previous work like the ADMM <cit.>-based approach by Ye et al. <cit.>. The authors advocated for formulating pruning as an optimization problem that integrates the robust training objective. They identified the performance gap between non-pruned and pruned networks as an open challenge. In summary, two works <cit.> observed a significantly higher robustness of pruned networks compared to compact networks of comparable size. Furthermore, the authors concluded that pruned networks can, after all, exhibit similar robustness to their dense reference networks. Moreover, the results overall indicate that the effect of pruning on robustness varies in magnitude depending on whether we are comparing networks of the same capacity or networks of different capacities. Retraining the pruned models seems to be a crucial factor in that respect. It was observed that most networks show a higher robustness when retrained after pruning, compared to the networks for which no retraining was performed. §.§ Dynamic Pruning and Robustness Hu et al. <cit.> proposed the first dynamic approach to improve network efficiency, accuracy, and robustness and called it Robust Dynamic Inference Networks (RDI-Nets). These networks are based on the work of Kaya et al. <cit.>. RDI-Nets stop inference in early layers. In their experiments, the authors evaluated three adversarially (PGD) trained models against their respective RDI-Nets using three white-box attack algorithms, which were executed in three proposed attack forms. Then the authors compared the RDI-Nets to defended sparse networks, i.e., networks that were compressed with a state-of-the-art network pruning method, Sparse Structure Selection (SSS) <cit.>, and then adversarially retrained (PGD). Furthermore, they compared their RDI-Nets to the latest ATMC algorithm <cit.>. The pruning + defense baseline demonstrated superior robustness compared to the respective dense reference network. The authors concluded with the statement that they achieved better accuracy, stronger robustness, and computational savings of up to 30%. It should be noted, however, that dynamic pruning does not reduce the model size, but can only achieve efficiency gains in terms of the required computing resources. §.§ Connection to the Lottery Ticket Hypothesis The lottery ticket hypothesis by Frankle et al. <cit.> states that randomly initialized networks contain subnetworks ("the winning tickets"). When trained in isolation, these subnetworks can reach test accuracies comparable to the reference network in an equal or smaller number of iterations. The initial weights of these winning tickets make training particularly effective. The only meaning of weight pruning is thus the effective initialization of the final pruned model. In contrast, Liu et al. <cit.> observed that the winning ticket initialization does not bring an improvement over random initialization. They showed that training from scratch gave comparable or better performance than SOTA pruning algorithms, thus making the original network's inherited weights useless. The meaning of weight pruning is thus the pruned architecture itself. They suggested that pruning can be a useful architecture search paradigm, but the pruned network should be trained with randomly initialized values. A few works examined these hypotheses with respect to adversarial robustness.
In particular, Ye et al. <cit.> observed that training from scratch cannot achieve robustness and accuracy simultaneously, even with inherited initialization, which contradicts the lottery ticket hypothesis. In contrast, Liao et al. <cit.> concluded that preferable adversarial robustness can be achieved in the lottery ticket setting. They attribute this difference to the fact that they search for the winning ticket by iterative global unstructured pruning, while Ye et al. <cit.> used filter pruning. Jordao et al. <cit.> showed that fine-tuning leads to better robustness than the winning ticket. Finally, Sehwag et al. <cit.> demonstrated the existence of hidden sub-networks that are more robust than the original network. They showed that highly robust sub-networks exist even within non-robust networks. § CONCLUSION In this work, we reviewed and compared the existing works exploring the relationship between model compression methods (quantization and pruning) and adversarial robustness. Throughout all experiments, it was shown that naive pruning and quantization can reduce robustness. Furthermore, as long as networks are compressed within certain limits, pruning may preserve or even improve robustness, especially when comparing compressed and compact models of the same size. Moreover, the reviewed works showed that combining model compression and robustness in AT is possible. However, a trade-off exists between compression ratio, accuracy, and robustness. It was observed relatively consistently that once a critical compression ratio is exceeded, first the robustness and then the accuracy decrease. Some authors explain this by noting that robustness requires a greater capacity than accuracy. Overall, many reviewed works agree that compression must be performed carefully. Simple, straightforward compression can also have negative effects on robustness; some authors, therefore, also suggest that robustness should be taken into account in the evaluation of new compression methods. § ACKNOWLEDGMENT This research is funded by the German Federal Ministry of Education and Research within the project "GreenEdge-FuE", funding no. 16ME0517K.
"authors": [
"Svetlana Pavlitska",
"Hannes Grolig",
"J. Marius Zöllner"
],
"categories": [
"cs.LG",
"cs.CV"
],
"primary_category": "cs.LG",
"published": "20231127125539",
"title": "Relationship between Model Compression and Adversarial Robustness: A Review of Current Evidence"
} |
Work distribution of a colloid in an elongational flow field and under Ornstein-Uhlenbeck noise Rati Sharma January 14, 2024 =============================================================================================== We consider linear problems in the worst case setting. That is, given a linear operator and a pool of admissible linear measurements, we want to approximate the values of the operator uniformly on a convex and balanced set by means of algorithms that use at most n such measurements. It is known that, in general, linear algorithms do not yield an optimal approximation. However, as we show in this paper, an optimal approximation can always be obtained with a homogeneous algorithm. This is of interest to us for two reasons. First, the homogeneity allows us to extend any error bound on the unit ball to the full input space. Second, homogeneous algorithms are better suited to tackle problems on cones, a scenario that is far less understood than the classical situation of balls. We illustrate our results by several examples. § INTRODUCTION We consider problems given by a solution operator S between normed spaces over the real or complex numbers, a class Λ of admissible measurements, and a class F of inputs. We want to approximate the solution S(f) for some unknown f∈ F based on the outcome of a finite number n of measurements L_1(f),…,L_n(f). The measurements L_1,…,L_n shall be contained in the class Λ and may be chosen adaptively, i.e., the choice of L_i∈Λ may depend on the already computed L_1(f),…,L_i-1(f).[In principle, also the number n=n(f) of measurements could be chosen adaptively; however, in the setting of nth minimal worst-case errors considered in the first three sections, such algorithms can be identified with algorithms using a fixed n (which can be chosen as the maximum of all n(f)). In Section <ref>, the adaptive choice of n will be of importance.] This results in an information mapping of the form N(f) = (L_1(f),…,L_n(f)), taking values in the n-fold product of the scalar field. We consider algorithms of the form A_n(f) = φ(N(f)), where φ is an arbitrary mapping defined on the information values, often called the recovery map. If the L_i are not chosen adaptively, i.e., L_1,…,L_n∈Λ are the same for all inputs f, then the algorithm A_n is called non-adaptive. The error of an algorithm A_n is measured in the norm of the target space and in the worst case over the given input class F, that is, (A_n) = (A_n,S,F) := sup_f ∈ F ‖ S(f) - A_n(f) ‖. A problem of this form is called a linear problem if the following conditions hold. (1) The input class F is non-empty, convex (i.e., λ f + (1-λ)g ∈ F for all f,g∈ F and λ∈ (0,1)), and balanced (i.e., λ f∈ F for all f∈ F and all scalars λ with |λ|≤ 1), (2) the solution operator S is linear, (3) the class Λ of admissible measurements contains only linear functionals. Typical examples of linear problems are approximation problems, where we have S(f)=f, and integration problems, where the underlying space is a class of integrable functions and S(f) is the integral of f. Examples for the class Λ of admissible measurements are the class Λ^ all of all continuous linear functionals (linear information) or the class Λ^ std of all function evaluations (standard information) if the underlying space is a function space. The input class F often equals the unit ball of the underlying space, but the model also allows for so-called approximation sets of the form F = { f : dist(f,𝒱) ≤ δ }, where 𝒱 is a (typically finite-dimensional) subspace and δ>0. We refer to <cit.> and the references therein for further details on the model of linear problems.
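As a toy instance of this framework (our own illustration, not an example from the paper), take S(f) = ∫_0^1 f(x) dx, standard information Λ^std, and F the unit ball of Lipschitz continuous functions with Lipschitz constant 1. The midpoint rule is a linear, non-adaptive algorithm A_n, and a sawtooth function that vanishes at all nodes witnesses its worst-case error 1/(4n):

```python
import numpy as np

n = 10
nodes = (np.arange(n) + 0.5) / n                          # midpoint nodes used by the algorithm

def midpoint_rule(f):                                     # A_n(f) = (1/n) * sum_i f(x_i)
    return np.mean(f(nodes))

# worst-case input: the distance to the nearest node is 1-Lipschitz, vanishes at the nodes
# (so the algorithm returns 0), and has integral 1/(4n)
f_star = lambda x: np.min(np.abs(np.subtract.outer(x, nodes)), axis=1)
xs = np.linspace(0.0, 1.0, 100001)
print(midpoint_rule(f_star), np.trapz(f_star(xs), xs))    # ~0.0 versus ~1/(4n) = 0.025
```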
The class of all algorithms of the form (<ref>), which we denote by 𝒜_n, is quite huge and contains very complicated and impractical mappings,so it is natural to ask whether an (almost) optimal error bound can already be achieved with simpler algorithms, obeying a special structure. One class of particularly simple algorithms are linear algorithms. It is a classical result in Information-Based Complexity that linear algorithms are optimal for linear problems in the case that =, that is, if the solution operator S is a linear functional. Then we haveinf_A_n ∈𝒜_n^ lin(A_n)= inf_A_n ∈𝒜_n(A_n),where 𝒜_n^ lin denotes the class of all linear algorithms of the form (<ref>). This result goes back to Smolyak <cit.> and Bakhvalov <cit.>, see also <cit.> for similar results in the presence of noise. Other instances where linear algorithms are optimal for linear problems arewhenis a pre-Hilbert space or whenis a space of bounded functions with the sup-norm, see Mathé <cit.> and Creutzig and Wojtaszczyk <cit.>. On the other hand,linear algorithms are not optimal for all linear problems. An example where (<ref>) does not hold goes back to Kashin, Garnaev, and Gluskin <cit.> and is important in the area of compressive sensing. It is given by the approximation problemSℓ_1^m →ℓ_2^m,S(f)=f,on the unit ball F of ℓ_1^m with Λ=Λ^ all. Here, non-linear algorithms are much better than linear algorithms if the dimension m is large compared to the number n of measurements. Indeed, it can be shown that, in this case, the left-hand side of (<ref>) equals √((m-n)/m)for all n<m,while the right-hand side is of order√(log(m/n)/n). See, e.g., <cit.> and <cit.>. Furthermore, there are even linear problems for which every linear algorithm has an infinite error, butfor whicha nonlinear algorithm with finite error exists, see <cit.>.In this paper, we consider the larger class ofhomogeneous algorithms. These are algorithms with the property that A_n(λ f)=λA_n(f) for all f∈ and all λ∈. If the same property holds for all real λ≥ 0, we call an algorithm positively homogeneous.When linear algorithms are bad, one may hope that at least the homogeneity is reconcilable with a small error,and in fact, this is known to be the casefor various examples(see Remark <ref>). Here, we show that homogeneous algorithms are optimal for all linear problems up to a factor of at most two. Let(S→, F,Λ) be a linear problem and letbe complete. For n∈, we let 𝒜_n^* denote the class of all homogeneous and non-adaptive algorithms of the form (<ref>), and, as above, let 𝒜_n denote the more general class of all algorithms of the form (<ref>). Theninf_A_n ∈𝒜_n^*(A_n)≤2 inf_A_n ∈𝒜_n(A_n).The infimum on the right-hand side of (<ref>) is referred to as the nth minimal error (with respect to all admissible algorithms, wherethe class of admissible algorithms depends on the precise problem specifications).In fact, we are going to show that homogeneous recovery maps are optimal for every fixed non-adaptive information mapping N, see Proposition <ref>. Theorem <ref> is then implied by the known optimality of non-adaptive information mappings, see <cit.>; we also refer to <cit.> for a discussion of the power of adaption.Assuming additional structure, there is a result by Bartle and Graves <cit.> which implies the existence of a positively homogeneous and continuous approximate spline mapping for every continuous and linear information mapping N, see <cit.>. The notion of approximate splines is explained in Remark <ref>. 
Using this result, it is easy to obtain a (less general) version of Theorem <ref>. Here, we take a more general approach, which is also easier since we disregard continuity. There are two reasons why we are interested in results of this type. First, in the prominent case that F is the unit ball of , it is easily seen that any (positively) homogeneous algorithm A_n satisfies‖ S(f) - A_n(f) ‖_ ≤ (A_n) ·‖ f‖_for allf∈.In this sense, a homogeneous algorithm with small erroris not only good on the unit ball ofbut instead on the full space. This is usually not the case for non-homogeneous algorithms. In particular, as stated in Corollary <ref>, the nth minimal error and the complexity of a linear problem do not change (up to a factor of two)if we switch from the error criterion (A_n) in (<ref>) to the error criterion(A_n):= sup_f∈∖{0}‖ S(f) - A_n(f) ‖_/‖ f‖_.Second, homogeneous algorithms are better suited for problems that are defined on cones as considered, e.g., in <cit.>. By a cone we generally understand any subsetof a -vector spacewhich satisfies that λ f∈ for all f∈ and λ > 0. Problems on cones are usually not solvable with algorithms that use a fixed number of measurements, see Proposition <ref>.In other words, they are not uniformly solvable. However, they are often solvable with algorithms that use an adaptive number of measurements, see <cit.>. In other words, they are weakly solvable. In Theorem <ref>, we provide a statement on the solvability of such problems on a certain family of cones, and this insight is based on the optimality of homogeneous algorithms for linear problems.Let (S,B_,Λ) and (T,B_,Λ) be uniformly solvable linear problems, where B_ is the unit ball of , and let t>0. Then the problem (S,_t,Λ) is weakly solvable, where_t:= { f∈‖ f ‖_≤ t ‖ Tf ‖_}.More precisely, we have the cost bound (<ref>).We refer to Section <ref> for the precise definitions of the solvability notions and the explicit cost bound (<ref>). To illustrate Theorem <ref> with an example, which is discussed among other examples in Section <ref>,consider the approximation problem S W_2^1([0,1]) → L_2(0,1) with standard information on the input set_t:= { f ∈ W_2^1([0,1]) ‖ f' ‖_2 ≤ t ‖ f ‖_2 }.Here, there is no algorithm A_n ∈𝒜_n that uses a fixed number n of function values and has a finite error (A_n) < ∞. Buta prescribed error ε>0 can still be guaranteed with a varying finite number of function evaluations. The algorithm, which does not require knowledge of ‖ f ‖_2, needs at most 𝒪(tε^-1‖ f ‖_2) function values to find an ε-approximation of any f∈_t.We mentioned above that linear algorithms are, in general, not optimal for linear problems, but they are optimal if the target spaceis the space B(X) of bounded functions on a compact Hausdorff space X (see <cit.>). It is shown in <cit.> that for any normed spacethere exists a compact Hausdorff space X and a subspace ' of B(X) such thatand ' are isometrically isomorphic. Thus, any linear problem S→ may be interpreted as a linear problem S'→', which implies that the nth minimal error can be achieved with a linear algorithm A_n'→ B(X). Thus, in a certain sense, linear algorithms are always optimal for linear problems if we sufficiently blow up the target space. 
However, since A_n' maps to B(X) and not to the subspace ', there is in general no meaningful interpretation of the approximation A_n'(f) in terms of the original space of solutions .In the following, depending on the situation, we might sometimes write Sf instead of S(f), Nf instead of N(f), and similarly for other mappings, when it eases readability.Furthermore, we will use the notion of (positively) homogeneous mappings also for mappings between subsets of -vector spaces. § HOMOGENEOUS ALGORITHMS AND LINEAR PROBLEMSThe following proposition is at the heart of our findings. It is a statement on homogeneous solution operators S→ andhomogeneous information mappings N→^n. In short, it implies that there always exists a homogeneous recovery map φ^n → such that the corresponding algorithm A_n=φ∘ N is optimal up to a factor of at most 2.For a precise statement, we introduce the diameter of an (information) mappingN →^n,(N)= (N,S,F):= sup{‖ S f - S g ‖_ f,g∈ F,Nf=Ng },which measures the maximal uncertainty in the solution under the a priori knowledge f∈ F if the information about f is given by N. The diameter of information relates to the minimal error that can be achieved with this information byinf_φ^n → (φ∘ N)≤ (N) ≤2 inf_φ^n → (φ∘ N).The infimum appearing in (<ref>) is commonly referred to as the radius of information and denoted by rad (N). We refer to <cit.> for a proof of (<ref>) and for background on these concepts.Let F be a balanced, convex and non-empty subset of a -vector space , letbe a complete normed space, and let S→ be positively homogeneous. Then, for every positively homogeneous mapping N →^nand any δ>0,there is a positively homogeneous mapping φ^n → such that(φ∘ N)≤(1+δ) (N).In particular,(φ∘ N)≤(2+2δ) inf_ψ^n → (ψ∘ N).Moreover, if S and N are homogeneous, we can also choose φ to be homogeneous.The main step in the proof will be to define a (positively) homogeneous approximate spline mapping. Here, a mapping s N() → is called an approximate spline, if it satisfies* N(s(y))=y for all y∈ N() and*‖ s(Nf)‖_F ≤ (1+δ) ‖ f‖_F for all f∈, where δ>0 is a given parameter and ‖·‖_F is the Minkowski semi-norm induced by F, see (<ref>). It is called an exact spline in the case δ=0.Exact splines are studied in <cit.>, for example. In certain situations,see <cit.>, there exists a unique exact spline, and then the spline is necessarily (positively) homogeneous. But in general, an exact spline does not exist. Here, we show that there always exists a (positively) homogeneous approximate spline. The desired recovery map is then given by φ = S ∘ s.The existence of a positively homogeneous approximate spline was already noted in <cit.>, a result that is attributed to Bartle and Graves <cit.>. This result makes extra assumptions on , N and F (e.g., that N is linear and continuous and that ‖·‖_F is a norm), but in return the approximate spline is also continuous.We first observe that the positively homogeneous case implies the homogeneous case. Indeed, by the first part of the statement, we can find a positively homogeneous mapping φ with the desired error bound (<ref>). We let M be the set of vectors in ^n which are either zero or whose first non-zero coordinate is real and positive. Then each y∈^n∖{0} can be uniquely expressed as y=λ(y) · y_+ with some y_+∈ M and some λ(y)∈ with |λ(y)|=1. We replace the mapping φ by the mapping φ^n → with φ(y) := λ(y) ·φ(y_+). 
It is easily verified that the new mapping φ is homogeneous and that the error bound (<ref>) is preserved under this replacement whenever S and N are homogeneous.Moreover, Equation (<ref>) follows from Equation (<ref>) via (<ref>). Thus our task is to prove the first statement on the existence of a positively homogeneous mapping φ that satisfies (<ref>).For this, we first note that span(F) = {λ ff∈ F, λ≥ 0}. Without loss of generality, we assume that =span(F) since the quantities (φ∘ N) and (N) depend on F but not . Note furthermore that the result is trivially correct if (N) = ∞. Hence, we may assume for the rest of the proof that (N) < ∞.We define a semi-norm on F by the Minkowski functional‖ f ‖_F:= inf{ r>0f/r ∈ F }.Then{ f ∈‖ f ‖_F < 1 } ⊂F ⊂ { f ∈‖ f ‖_F ≤ 1 }.Moreover, we defineK N() → [0,∞),K(y) := inf_f ∈ N^-1(y)‖ f ‖_F .This mapping is positively homogeneous; we have K(λ y) = λ K(y) for y∈^n and λ≥ 0. We now define a mapping φ N() →. For this, we splitinto the two disjoint cones _0 := { f∈ K(Nf)=0 }and_+ := { f∈ K(Nf)>0 }and thereby also split N() into the two disjoint cones N(_0) = { y∈ N()K(y)=0 }andN(_+) = { y∈ N()K(y)>0 }.We will define two positively homogeneous mappings φ_0N(_0) → and φ_+N(_+) → such that‖ Sf - φ_0(N f) ‖_ ≤ (N)for allf ∈ F ∩_0and ‖ Sf - φ_+(N f) ‖_ ≤(1+δ)(N)for allf ∈ F ∩_+.Then the mappingφ^n →, φ(y) = φ_0(y) φ_+(y) 0is positively homogeneous and satisfies (<ref>). Thus, it only remains to show (<ref>) and (<ref>) to complete the proof. We start with the definition of φ_+ and the proof of (<ref>). Let y∈ N(_+). If K(y)= 1, we choose s(y) as an arbitrary element from the pre-imageN^-1(y) that satisfies ‖ s(y) ‖_F<(1+δ) K(y).For arbitrary y∈ N(_+), we have K(y/K(y))=1 and put s(y)=K(y) s(y/K(y)). Then the mapping s N(_+) → is positively homogeneous, which also implies that (<ref>) holds for all y∈ N(_+). Moreover, for all y∈ N(_+), the element s(y)∈ satisfies N(s(y))=y(i.e., it is an approximate spline). We put φ_+(y) = S(s(y)).The mapping φ_+ N(_+) → is positively homogeneous as a composition of positively homogeneous mappings. To bound the error of φ_+ ∘ N,we note that for all f∈_+ with norm ·_F at most 1/(1+δ), which implies f∈ F, we have K(Nf)≤ 1/(1+δ) and thus by(<ref>) that s(Nf) is contained in F. Moreover, f and s(Nf) yield the same information. Therefore,‖ Sf - φ_+(Nf) ‖_ ≤ (N)for allf ∈_+with ‖ f ‖_F≤ 1/(1+δ).The bound (<ref>) is now obtained by recalling (<ref>) and scaling due to the positive homogeneity of S and φ_+ ∘ N.We continue with the definition of φ_0 and the proof of (<ref>). To this end, let y∈ N(_0) and hence K(y)=0. Then there is a sequence (f_k) ⊂ N^-1(y) such that ‖ f_k ‖_F → 0. This implies that (Sf_k) is a Cauchy sequence in . Indeed, assume to the contrary that it is no Cauchy sequence. Then there is some c>0 such that for all n_0 there are k,m>n_0 with ‖ Sf_k - Sf_m ‖_≥ c. Given any R>0, we can choose n_0 large enough such that ‖ f_k‖_F and ‖ f_m‖_F are smaller than c/R. Then h_k = R f_k /c and h_m = R f_m/c are in F and satisfy Nh_k = N h_m and ‖ Sh_k - Sh_m ‖_≥ R. This yields (N)=∞, which would be a contradiction.As G is complete, the sequence (Sf_k) has a limit g and we define φ_0(y) := g. This definition is independent of the choice of the sequence (f_k). Indeed, let (f_k^*) be another sequence in N^-1(y) with ‖ f_k^* ‖_F → 0. Assume to the contrary that (Sf_k^*) has a different limit than (Sf_k), i.e., there is some c>0 with ‖ Sf_k - Sf_k^* ‖_≥ c for infinitely many k. 
For any R>0, choosing k large enough, we thus have h_k := Rf_k/c ∈ F and h_k^* := Rf_k^*/c ∈ F with N(h_k)=N(h_k^*) and ‖ Sf_k - Sf_k^* ‖_≥ R, again leading to the contradiction (N)=∞.It is not hard to check that the mapping φ_0 is indeed positively homogeneous: We have φ_0(0)=0 (since the value of φ_0(0) is independent of the concrete choice of the sequence (f_k) ⊂ N^-1(0) from above) and for λ>0 and y∈ N(_0) ∖{0}, we know that φ_0(λ y) may be expressed as the limit of a sequence (S(λ f_k)), where (f_k) ⊂ N^-1(y) such that ‖ f_k ‖_F → 0, and hence Sf_k →φ_0(y); thus φ_0(λ y) = λφ_0(y). Moreover, for every y∈ N(_0),we have that φ_0(y) is contained in the closure S(N^-1(y)∩ F). Thus, for all f∈ N^-1(y)∩ F, we have‖ Sf - φ_0(Nf) ‖_ ≤ (S(N^-1(y)∩ F)) = (S(N^-1(y)∩ F))≤ (N).Here, the diameter of a set inis defined in the usual way. This gives (<ref>).We now turn to the setting of linear problems. Here, a classical result from <cit.> states that the minimal diameter of information can be obtained with non-adaptive information mappings, see, e.g., <cit.>. We note that this result is usually stated in the case =, but the proof is exactly the same in the case =. Since the proof is short and elegant anyway, let us repeat it.Let (S→, F,Λ) be a linear problem and n∈. For any δ>0, there exists a non-adaptive information mapping N→^n such that(N)≤(2+δ) inf_A_n ∈𝒜_n(A_n). Let A_n ∈𝒜_n be an arbitrary algorithm and N be the corresponding (possibly adaptive) information mapping. Let L_1,,L_n ∈Λ be the measurement maps that the algorithm chooses for the input f=0 and consider the non-adaptive information mapping N^ non:=(L_1,,L_n). Let further f,g∈ F with N^ non(f)=N^ non(g). Since F is convex and balanced, the function h = f-g/2 is contained in F, and since N^ non is linear, we have N^ non(h)=0. But this means that the adaptive information mapping N recursively chooses the same measurement maps as for the zero input, and thus also N(h)=0. The same statements hold for the function -h, and thus A_n cannot distinguish between h and -h; we have A_n(h)=A_n(-h). This gives(A_n)≥ 1/2 ‖ S(h) - S(-h) ‖_ = 1/2 ‖ S(f) - S(g) ‖_.Taking the supremum over all such f and g, we obtain 2 (A_n) ≥(N^ non). For a linear problem, any non-adaptive information mapping is also homogeneous. Hence, Proposition <ref> results in Theorem <ref>, the statement of which is repeated here for completeness. Let(S→, F,Λ) be a linear problem and letbe complete. For n∈, we let 𝒜_n^* denote the class of all homogeneous and non-adaptive algorithms of the form (<ref>), and, as above, let 𝒜_n denote the more general class of all algorithms of the form (<ref>). Theninf_A_n ∈𝒜_n^*(A_n)≤2 inf_A_n ∈𝒜_n(A_n).Let δ>0. ByLemma <ref> there exists a non-adaptive (and thus homogeneous) information mapping N→^n such that(N)≤(2+δ) inf_A_n ∈𝒜_n(A_n).By Proposition <ref>, there exists a homogeneous mapping φ^n→ such that the homogeneous algorithm A_n^*:=φ∘ N satisfies(A_n^*)≤(1+δ) (N) ≤(1+δ)(2+δ) inf_A_n ∈𝒜_n(A_n),and the proof is finished. By Theorem <ref>, homogeneous recovery maps are essentially (i.e., up to an absolute multiplicative constant) optimal for linear problems while linear recovery maps are not. To compensate for the non-linearity, one may ask whether homogeneous recovery maps with a finite-dimensional image are optimal. But this is not the case. 
There exists a linear solution operator S→ such that for any homogeneous information mapping N and every homogeneous recovery map φ with finite-dimensional image, the algorithm φ∘ N has infinite error, while φ∘ N has finite error for other choices of φ, see <cit.>. For many linear problems, as for example in numerical integration, one naturally uses algorithms that are not only homogeneous, but even linear.So one may ask to what extenthomogeneous algorithms are relevantat all from a practical point of view. Let us thus mention a few specific examples of homogeneous and non-linear algorithms that are used for linear problems in the literature. * ℓ^1-minimization or basis pursuit is proven to be vastly superior to any linear method in the area of compressive sensing, see, e.g., the book <cit.>. * The median of means is a popular estimator for the computation of integrals. When used in a randomized setting, the median leads to a probability amplification, see, e.g., <cit.>. In the deterministic sense, it can be used to obtain certain universality properties, see <cit.>. * The paper <cit.> considers a randomized setting and gives an example of a linear problem where adaptive algorithms achieve a better rate of convergence than non-adaptive algorithms. The algorithms that achieve the optimal rate are homogeneous, but not linear.Let us now assume thatis a normed space and that the input class F is the unit ball of . Then the nth minimal worst case error of the problem (S→, F,Λ) is defined as(n):= (n,S,Λ):= inf_A_n ∈𝒜_n(A_n,S,F)with the worst case error (A_n,S,F) as defined in (<ref>), i.e.,(A_n)= (A_n,S,F)= sup_‖ f ‖_≤ 1‖ S(f) - A_n(f) ‖_.It has recently been shown in <cit.> for a broad class of approximation problems that the nth minimal worst case error over the unit ballcoincides up to a factor of at most eight[The formula in <cit.> shows the factor four but only compares with the minimal error of non-adaptive algorithms, which is why an additional factor of two comes into play.] with the nth minimal relative error on the full input space, i.e., with(n):= (n,S,Λ):= inf_A_n ∈𝒜_n(A_n,S),where(A_n):= (A_n,S):= sup_f∈∖{0}‖ S(f) - A_n(f) ‖_/‖ f‖_.As a corollary to Theorem <ref>, we obtain that this result is true for all linear problems, where the factor eight may be replaced by the factor two.Consider a linear problem (S→,F,Λ), where F is the unit ball ofandis complete. For any n∈, we have(n)≤ (n)≤2 (n). The second inequality follows from Theorem <ref> and the fact that (A_n) = (A_n) whenever A_n is homogeneous. Namely,(n)≤ inf_A_nhomogeneous(A_n)= inf_A_nhomogeneous(A_n)≤2(n).The first inequality uses that any algorithm A_n with A_n(0)=0 satisfies(A_n)= sup_f∈∖{0}‖ f ‖_≤ 1‖ S(f) - A_n (f) ‖_ ≤ sup_f∈∖{0}‖ f ‖_≤ 1‖ S (f) - A_n (f) ‖_/‖ f‖_ ≤ (A_n)and hence(n)≤ inf_A_nA_n(0)=0(A_n) ≤ inf_A_nA_n(0)=0(A_n).It remains to show thatinf_A_nA_n(0)=0(A_n) ≤ (n).To this end, suppose that we have analgorithm A_n such that A_n=φ∘ N with φ (0)0. We then introduce a mapping φ such that φ(0)=0 and φ(y)=φ(y) for all y≠ 0 and set A_n^*:=φ∘ N.We claim that (A_n^*) ≤(A_n), which proves (<ref>).Indeed, let us consider the set N^-1(0) of those f∈ for which N(f)=0, and for which then A_n (f)≠ 0, but A_n^* (f)=0. If all such f satisfy that S(f)=0, thenA_n^* is at least as good as A_n and thus (A_n^*) ≤(A_n).If, on the other hand, there exist f∈ N^-1(0) such that S(f)≠ 0, then we argue that (A_n)=∞ and thus (A_n^*) ≤(A_n) again holds.Since S is linear, we have f 0. 
On the other hand, we have N(c f)=0 for any positive constant c. Then, however, by the linearity of S,(A_n)≥ S(cf)-φ (0)_/cf_ = c S(f)-φ (0)_/cf_.Letting c tend to zero, we obtain the asserted identity (A_n)=∞. § HOMOGENEOUS ALGORITHMS IN OTHER SETTINGSRecall from the introduction that our problems are described by a solution operator S mapping from a vector spaceto a normed space , an input set F⊂, and a class Λ⊂^ of admissible measurements. The problem is called linear if (1) the worst case error is considered on a non-empty, convex, and balanced subset F of , (2) the solution operator S→ is linear,(3) the class Λ of admissible measurements contains only linear functionals.In Section <ref>, we have seen that (positively) homogeneous algorithms are optimal for linear problems. It is natural to ask whether this extends to a more general class of (positively) homogeneous problems. Suppose, for example, that we replace one or several of the above conditions by the weaker conditions, (1') the worst caseis considered on a class F⊂ satisfying λ f∈ F for f∈ F and 0≤λ≤ 1,(2') the solution operator S→ is positively homogeneous,(3') the class Λ of admissible measurements contains only positively homogeneous functionals.Are positively homogeneous algorithms still optimal for such positively homogeneous problems? We do not know the answer to this question. Are positively homogeneous algorithms optimal for positively homogeneous problems? That is, if we consider a class of problems where one or several of the conditions (1)–(3) are replaced by their respective weaker variants (1')–(3'), do there exist constants C,c ∈ such that, for all such problemsand all n∈, we have inf_ A ∈𝒜_cnApos. hom.(A)≤C inf_A ∈𝒜_n(A) ?Of course, a similar question can be asked for homogeneous problems instead of positively homogeneous problems. Examples of such positively homogeneous problems could be the computation of some norm or the maximum of a function f∈ F.If we only consider positively homogeneous problems where the input set F is convex and balanced, Proposition <ref> implies thatpositively homogeneous algorithms are optimal whenever positively homogeneous information mappings are optimal (in the sense that the corresponding diameter of information is close to minimal). In particular, for such problems,positively homogeneous algorithms are optimal whenever non-adaptive algorithms are optimal.This leads to the question whether non-adaptive algorithms are optimal for homogeneous problems. Unfortunately, this is not the case. The optimality of non-adaptive algorithms does not stay true if any of the conditions (1)–(3) is replaced by its respective weaker variant, as illustrated by the following examples. An example where the properties (1'), (2), and (3) are satisfied is given in <cit.>,see also <cit.>. Here, F is the class of all monotonically increasing, α-Höldercontinuous functions on an interval with Hölder-constant at most one and α<1, S is the embedding into L_∞ and Λ is the class of all function evaluations. Then the error of non-adaptive algorithms is of order n^-α while the error of adaptive algorithms is of polynomial order n^-1, where n is the number of measurements. We also refer to <cit.> for related results. We give an example where the properties (1), (2'), and (3) are satisfied. 
We let F be the unit ball in the spaceof all Lipschitz-continuous functionson [0,1] with the norm‖ f ‖_ := max{‖ f ‖_∞ , sup_x y|f(x)-f(y)|/|x-y|}.We do bisection in order to approximate a point z∈ [0,1/2] with f(z) = 1/2(f(0)+f(1/2)). The bisection method converges to a solution z(f) of the above equation, while the approximation z_n(f) obtained after n bisection steps (using n+1 function values) satisfies |z(f)-z_n(f)| ≤ 2^-n. We consider the homogeneous solution operator S(f)=f(z(f)+1/2) and the class Λ^ std of all function evaluations. The adaptive algorithm A_n(f)=f(z_n(f)+1/2) has an error of at most 2^-n.On the other hand, any non-adaptive algorithm using n function values has an error of at least 1/(8n). Indeed, there is an interval (a,a+1/(2n))⊂ [0,1/2] that does not contain any sampling point. The algorithm thus cannot distinguish the two functions f and g defined by the properties that f(0)=g(0)=0, f is constant on [0,a] and [a+1/(4n),1/2], g is constant on [0,a+1/(4n)] and [a+1/(2n),1/2], and both functions are linearly increasing with slope one on the rest of the interval [0,1]. We illustrate this situation in Figure <ref>.However, since S(f) and S(g) differ by 1/(4n), the error of the algorithm is at least 1/(8n). Thus, the error of certain adaptive algorithms converges exponentially and the error of non-adaptive algorithms converges only linearly. This problem is clearly related to the zero-finding problem (see, e.g., <cit.> and the references therein for detailed information on the zero-finding problem). We also give an example where the properties (1), (2), and (3') are satisfied. We letbe defined as in the previous example and consider the M-fold Cartesian product _M := ×…×, with elements denoted by f=(f_1,…,f_M), equipped with the norm‖ f ‖__M := ∑_i=1^M ‖ f_i ‖_.Let F_M be the unit ball of _M and let _M be the same set of functions as _M, but with the norm‖ f ‖__M := max_i≤ M‖ f_i ‖_∞.Furthermore, let S be the embedding of _M into _M. The class Λ shall consist of the homogeneous measurementsf_i(x) with i≤ M and x∈ [0,1]and ‖ f_i ‖_ with i≤ M.An adaptive algorithm using at most n measurements is given by first computing the M norms ‖ f_i ‖_ and then approximating each f_i by a piecewise constant function that interpolates m_i function values of f_i at equi-spaced points, where m_i =0if ‖ f_i ‖_ < 2/(n-M) (i.e., in this case we approximate f_i by the zero function), andm_i= ⌈‖ f_i ‖_ (n-M)/2 ⌉≤‖ f_i ‖_ (n-M) otherwise.This algorithm has an error of at most 2/(n-M).On the other hand, for a non-adaptive algorithm taking n measurements, there exists some i where at most n/M samples of f_i are computed (recall the form of Λ). Then there is some f^* ∈ F that equals zero at all those points but satisfies ‖ f^*‖_∞≥ M/(2n) (by choosing f^* tobe linear with slope 1 left of the midpoints of the intervals between the interpolation nodes, and slope -1 right of them, respectively). The algorithm cannot distinguish f∈_M defined by f_j=0 for j i and f_i=f^* from its negative since all measurements for f and -f yield the same value (including the possible norm-evaluations). Thus, it has an error of at least M/(2n). Taking, for instance, M=n/2, we obtain that all non-adaptive algorithms have an error of at least 1/4, while a suitable adaptive algorithm can achieve an error of at most 4/n. Another interesting question is whether the optimality of homogeneous algorithms for linear problems remains true in the randomized setting. 
Recall from the introduction that our deterministic algorithms are described as the composition of an information mapping N→^n and a recovery map φ^n →. In the randomized setting, N and φ can be random and the randomized error of the algorithm A=φ∘ N is defined ase^ ran(A):= sup_f∈ F[‖ S(f) - A(f) ‖_].We refrain from a precise description of the setting and refer to <cit.> and the references therein. We denote by 𝒜_n^ ran the class of all randomized algorithms with cardinality at most n. The algorithm A is called homogeneous if almost every realization of A is homogeneous. Are homogeneous algorithms optimal for linear problems in the randomized setting? That is, do there exist constants C,c ∈ such that, for all linear problems and all n∈, we have inf_ A ∈𝒜_cn^ ranAhomogeneous e^ ran(A)≤C inf_A ∈𝒜_n^ ran e^ ran(A) ?Again, there is a relation to the question whether non-adaptive algorithms are optimal. But the answer to this question differs from the deterministic setting. It only turned out recently that there exist linear problems where adaptive randomized algorithms are significantly better than non-adaptive ones. The first such example was found by Heinrich in <cit.>. We refer to <cit.> for further progress on this matter.As a consequence, the optimality of homogeneous algorithms for linear problems is likely harder to prove in the randomized setting than it was in the deterministic setting (if possible at all). In the deterministic setting, we already knew that we could take N non-adaptive, and hence homogeneous, and only had to prove that we can also take φ homogeneous. In the randomized setting, it is already unclear (to us) whether an almost minimal error can be achieved with homogeneous N and arbitrary φ. § SOLVABLE PROBLEMS AND PROBLEMS ON CONES Also in this section, our problems are given by a solution operator S→ between normed spacesandover , a class Λ⊆^of admissible measurements, and a class F⊂ of inputs. In the previous sections, we studied linear and homogeneous problems in the worst case setting. Those problems were solvable in the following sense.We say that the problem (S→, F, Λ) is uniformly solvable iff, for any ε>0, there is some n∈ and an algorithm A_n of the form (<ref>) such that (A_n) ≤ε. In this case, we define the ε-complexity of the problem by (ε):= (ε,S,F,Λ):= min{n∈|∃ A_n(A_n) ≤ε}. We now turn to problems which are not uniformly solvable. Specifically, we are interested in the case that the input set F⊂ is a cone. Then, in general, the worst case error of any algorithm using homogeneous measurements with fixed n is infinite.More precisely, the following holds.Let S→ be ahomogeneous mapping between normed spaces, let Λbe a class of homogeneous functionals on , and let F⊂ be a cone. Then the problem (S,F,Λ) is uniformly solvable if and only if there exists some n∈ such that (n)=0.We prove the stronger statement that any algorithm with finite error can be turned into an algorithm with arbitrarily small error and the same cost. Due to (<ref>), this is equivalent to saying that every information mapping of the form (<ref>) with finite diameter can be turned into an information mapping with the same cost and a diameter arbitrarily close to zero. To this end, assume that N is an information mapping of the form (<ref>) such that (N)<∞. Given r>0, we consider the new information mapping N_r, which chooses those measurement maps L_i that the mapping N would choose for rf instead of f. 
Note that the measurement results of rf are available due to L(rf)=rL(f) for all L∈Λ and f∈ F.If f,g∈ F are such that N_r(f)=N_r(g), then this implies N(rf)=N(rg). Thus,‖ Sf - Sg ‖_ = 1/r‖ S(rf) - S(rg) ‖_ ≤ (N)/rand hence the diameter of N_r is bounded by (N)/r, which can be made arbitrarily small by choosing r sufficiently large. In order to study problems that are not uniformly solvable, we consider a more general class of algorithms, where not only the measurements L_i∈Λ themselves, but also the cardinalities of the algorithms, i.e., the number of those measurements, may be chosen adaptively. Roughly speaking this means that, after each measurement, we may decide based on the already computed measurements whether we compute another measurement and which one. Formally, those algorithms can be described as follows. An information mapping is a function N F → c_00, where c_00:= ⋃_n∈^n, for which there are functions L_j: F ×^j-1→ with L_j(·,y_1,…,y_j-1)∈Λ for all y_1,…,y_j-1∈ and a Boolean function Δ c_00→{0,1} such that N(f) = (y_1,…,y_n(f)) with y_j = L_j(f,y_1,…,y_j-1) and n(f) = min{n∈|Δ(y_1,…,y_n)=0}. That is, if we have already obtained the data y_1,…,y_j-1, we next perform the measurement L_j(·,y_1,…,y_j-1), unless Δ(y_1,…,y_j-1)= 0.Now, an algorithm A for the problem (S,F,Λ)is defined as a pair (N,φ) given by an information mapping N as aboveand an arbitrary mapping φ c_00→, which is used to turn our finite information N(f)∈ c_00 into an approximation of the solution S(f) ∈. This results in a mapping φ∘ N F→ G used to approximate the solution operator S F →. In slight abuse of notation, we denote this mapping also by A, that is, we write A=φ∘ N. We denote the class of all such algorithms by 𝒜^+. We write (A,f):=n(f) for the(information) cost of the algorithm at f∈ F, and for F_0⊆ F, we set(A,F_0):= sup{(A,f) | f∈ F_0}. We are now ready to define a weaker notion of solvability.We say that an algorithm A ∈𝒜^+ is ε-approximating for S on F, iff ‖ Sf - Af ‖_≤ε for all f∈ F. The problem (S→, F, Λ) is called weakly solvable if, for any ε>0, there exists an ε-approximating algorithm A ∈𝒜^+. This notion of solvability is inspired by <cit.>, where an analogous notion of solvability is considered in a randomized setting. Note that an ε-approximating algorithm has finite cost for each input f∈ F, but it may have infinite cost on the full input set F.As mentioned above, we are mainly interested in problems where the input set is a cone. Problems on cones have lately been considered in <cit.> and <cit.>, see also <cit.>. Here, we study the following general kind of cones.Let S→ be a linear operator between normed spacesandand let Λ be a class of linear functionals on .We consider the problem (S,_t,Λ) on the cone_t:= { f∈‖ f ‖_≤ t ‖ Tf ‖_},where T→ is any linear mapping to another normed spaceand t>0 is a constant, which we call the inflation factor. This setting covers a situation considered in <cit.>, where S(f) is the integral of a function f∈ L_2 and T L_4 → L_2 is the identity. Then, essentially, _t is the cone of all functions with bounded kurtosis. The setting also covers a situation considered in <cit.>, where S is a diagonal operator mapping a Schauder basis ofonto a Schauder basis ofand T→ is a basis projection with finite rank. We will come back to these examples later.The problem (S,_t,Λ) as defined in Assumption <ref> is included in the assumptions of Proposition <ref>. Therefore, it is usually not uniformly solvable. 
Here, we show that the problem is weakly solvable for a wide range of operators S and T. We recall that we write B_ to denote the unit ball in the space , and by rB_ the ball with radius r. Let Assumption <ref> hold. If (S,B_,Λ) and (T,B_,Λ) are uniformly solvable, then the problem (S,_t,Λ) is weakly solvable for any t>0.More precisely, for any ε>0, there is an ε-approximating algorithm A_ε∈𝒜^+ such that, for all f∈_t∖{0}, (A_ε,f)≤ (1/5t,T,B_,Λ) +(ε/7 t ‖ Tf ‖_ ,S,B_,Λ).To interpret the cost bound, let us assume that the operator T is bounded and that‖ T ‖ and t are absolute constants (and not very big). Corollary <ref> implies that (ε/R,S,B_,Λ) ≤(ε,S,2RB_,Λ).By putting c=14t‖ T‖, we get for small enough ε>0 (assuming that ( ε,S,B_,Λ) →∞ for ε→ 0) that(A_ε, _t ∩ r B_) ≤2 ( ε,S,crB_,Λ),i.e., the cost of the algorithm on a ball is proportional to the complexity of the problem S on a ball of comparable radius.Let ε>0 and t>0. Due to the solvability of (T,B_,Λ) and Theorem <ref>, there is a homogeneous algorithm Q_m→ that uses at most m measurements such that ‖ Tf - Q_mf ‖_ ≤(2t)^-1·‖ f ‖_, ∀ f ∈,where m=(1/(5t),T,B_,Λ). This implies that, for all f∈_t, we have‖ f ‖_ ≤ t ‖ Tf ‖_ ≤ t ‖ Q_m f ‖_ + t ‖ Tf - Q_m f ‖_ ≤ t ‖ Q_m f ‖_ + 1/2 ‖ f ‖_and thus‖ f ‖_ ≤2 t ‖ Q_m f ‖_ .Our algorithm consists of two steps. First, given the unknown f∈_t, we compute Q_mf. If Q_mf=0, then we know that f=0 and thus obtain the exact solution S(f)=0.In case that Q_mf0, we adaptively choose the cost k := (ε / ((4+2δ) t ‖ Q_m f ‖_),S,B_,Λ) for the second step, where δ>0 is arbitrary. By the solvability of (S,B_,Λ) and Theorem <ref>, there exists a homogeneous algorithm A_k → that uses at most k measurements, whose worst case error on B_ is at most ε /(2 t ‖ Q_m f ‖_). In particular, for any f∈𝒞_t,‖ Sf - A_k f ‖_ ≤ ε/2 t ‖ Q_m f ‖_·‖ f ‖_ ≤ ε.Thus, our two-step procedure gives us an ε-approximating algorithm that uses in total m+k measurements from Λ. Noting that‖ Q_m f ‖_ ≤ ‖ Tf - Q_m f ‖_ + ‖ T f ‖_ ≤ ‖ f ‖_/2t + ‖ Tf ‖_ ≤ 3/2‖ Tf ‖_,we havek≤ (ε/3(2+δ) t ‖ Tf ‖_ ,S,B_,Λ),because (·,S,B_,Λ) is decreasing in its first argument. The desired cost bound follows. In the proof of Theorem <ref>, we used homogeneous algorithms Q_m and A_k which are optimal in the sense of the complexities of the problems of approximating T and S for certain error thresholds. Since such optimal algorithms are often not known in practice, we point out that the same approach works with arbitrary homogeneous algorithms of our choice. We can take any homogeneous algorithm Q_m for approximating T with arbitrary cost m and worst case error smaller than 1/(2t) and then any homogeneous algorithm A_k for S with arbitrary cost k and worst case error smaller than ε/(2t‖ Q_m f ‖_), and take A_k f as an approximation for Sf (or zero, if Q_mf=0). This gives us an ε-approximating algorithm on _t with the total cost m+k. We now discuss several examples.§.§ Bounded kurtosis Consider the the integration problemS→,S(f)= ∫_0^1 f(x)dx,whereshall be the space of all continuous real-valued functions on [0,1], equipped with the 4-norm. The input set is given by the cone_t:= { f∈‖ f ‖_4 ≤ t‖ f ‖_2 }with some t>1 and the information is given by the class Λ^ std of function evaluations.The paper <cit.> proves that such problems are weakly solvable in a randomized sense. In our deterministic setting, we quickly realize that the problems S→ and T→ L_2 are not uniformly solvable on the unit ball ofand thus Theorem <ref> does not apply. 
And indeed, the integration problem on 𝒞_t is not weakly solvable. In fact, there exists no ε-approximating algorithm for any ε>0: To see this, let A∈𝒜^+ be an arbitrary algorithm and let ε>0 be arbitrary. We let x_1,…,x_n ∈ [0,1] be the measurement points that the algorithm uses for the input zero. Then, for any δ∈ (0,1), one can easily find a continuous function f: [0,1]→ [0,4ε] such that f equals zero at all those points and the Lebesgue measure of the set f^-1(4ε) is at least 1-δ. We get ‖ f ‖_4 ≤ 4ε and ‖ f ‖_2 ≥ 4ε· (1-δ)^1/2 and thus f∈𝒞_t if δ is sufficiently small. Since A(f)=A(0) and S(f)-S(0) ≥ 4ε· (1-δ), the algorithm has an error of at least 2ε (1-δ) for one of the functions f or zero. Thus, the algorithm is not ε-approximating for the integration problem on 𝒞_t. In the next section, we present a simple example that follows a similar pattern, but which is weakly solvable.
§.§ Inverse Poincaré
To give another explicit example, let us consider the approximation problem S: W_2^1([0,1]) → L_2([0,1]), S(f) = f, where W_2^1([0,1]) is the univariate Sobolev space of all continuous functions f: [0,1]→ℝ that possess a weak derivative f'∈ L_2([0,1]), equipped with the norm ‖ f‖_W_2^1^2 = ‖ f‖_2^2+ ‖ f'‖_2^2. The input set is given by the cone 𝒞_t:= { f∈ W_2^1([0,1]) : ‖ f' ‖_2 ≤ t ‖ f ‖_2 } of all functions from W_2^1([0,1]) satisfying a reverse Poincaré type inequality with constant t>0, and the information is given by the class Λ^ std of all function evaluations. This situation matches Assumption <ref> if the source space is W_2^1([0,1]), both target spaces are L_2([0,1]), and T=S, where the cone 𝒞_t from above is contained in the cone from (<ref>) with inflation factor √(1+t^2). It is a well-known fact that the problem S is uniformly solvable on balls in W_2^1([0,1]) and that there is a constant c>0 such that (ε,S,B_,Λ^ std)≤ c ε^-1 for all ε>0, see, e.g., <cit.>. Thus, Theorem <ref> yields that the L_2-approximation problem is weakly solvable on the full cone 𝒞_t and gives rise to ε-approximating algorithms A_ε that satisfy (A_ε,f)≤ 14ct ·ε^-1‖ f ‖_2 for all f∈𝒞_t with ‖ f‖_2 ≥ε. We conclude this example by noting that also the integration problem is weakly solvable on 𝒞_t. The algorithm A_ε^* defined by A_ε^*(f) := ∫_0^1 A_ε(f)(x)dx satisfies the same cost bound and is an ε-approximating algorithm for the integration problem on 𝒞_t since | ∫_0^1 f(x)dx - ∫_0^1 A_ε(f)(x)dx | ≤ ∫_0^1 |f(x) - A_ε(f)(x)|dx ≤ ‖ f - A_ε(f) ‖_2.
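The two-step procedure described in Remark <ref> is easy to prototype for this example. The following Python sketch is only an illustration of the structure of such an ε-approximating algorithm, not the algorithm from the proof: piecewise-linear interpolation is used as a (linear, hence homogeneous) stand-in for both Q_m and A_k, and the pilot budget m=50 and the proportionality constant c=14 are ad hoc assumptions rather than the exact constants appearing in the cost bound.

import numpy as np

def interp_alg(f, n):
    # piecewise-linear interpolation at n+1 equispaced nodes (linear, hence homogeneous)
    xs = np.linspace(0.0, 1.0, n + 1)
    ys = np.array([f(x) for x in xs])
    return lambda x: np.interp(x, xs, ys)

def l2_norm(g, grid=20001):
    xs = np.linspace(0.0, 1.0, grid)
    return np.sqrt(np.trapz(np.array([g(x) for x in xs])**2, xs))

def two_step(f, eps, t, m=50, c=14.0):
    # step 1: pilot approximation (plays the role of Q_m f) and its L_2-norm
    pilot = interp_alg(f, m)
    s = l2_norm(pilot)
    if s == 0.0:
        return (lambda x: 0.0 * x), m
    # step 2: sample budget chosen adaptively, proportional to t * ||pilot||_2 / eps
    k = int(np.ceil(c * t * s / eps))
    return interp_alg(f, k), m + k

g, cost = two_step(lambda x: np.sin(2 * np.pi * x), eps=1e-2, t=7.0)

The total cost returned by the sketch scales like t‖ f ‖_2/ε, mirroring the cost bound above, while the number of samples is fixed only after the pilot step; this adaptive choice of the cardinality is exactly the mechanism that makes the problem weakly, but not uniformly, solvable on the cone.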
Consider the cone _t defined analogously to <cit.>, i.e.,_t:= { f ∈ℱ‖ f ‖_ℱ≤ t ‖ Tf ‖_ℱ},t≥ 1.While this problem was dealt with in <cit.> for the case of arbitrary linear information Λ = Λ^ all, it remained an open questionhow to deal with it if one allows only standard information,which seems to be more relevant for practical cases.In our approach, any Λ⊆Λ^ all is allowed, and we may consider Λ = Λ^ std wheneveris a space of functions. The problem matches our Assumption <ref> if we let =. In particular, Theorem <ref> applies. That is, the problem is weakly solvable on _t, whenever S and T are solvable on the unit ball of . We want to argue that the solvability of S implies the solvability of T and simplify the cost bound from Theorem <ref>. To this end, let A_n be an algorithm for S with fixed cost n. We put M_n = S_0^-1 Q A_n. This is an algorithm for the approximation of T that uses at most n measurements. We have‖ Tf - M_nf ‖_ ≤ ‖ S_0^-1‖·‖ STf - Q A_n f ‖_ ST=QS≤ ‖ S_0^-1‖·‖ Q ‖·‖ Sf - A_n f ‖_.This means that(ε,T,B_,Λ) ≤ ( ε/‖ S_0^-1‖·‖ Q ‖, S,B_,Λ).Summing up, we obtain the following corollary. Consider the problem (S,_t,Λ) as defined in this section. If S is uniformly solvable on the unit ball, then S is weakly solvable on _t. There exist ε-approximating algorithms A_ε using information from Λ such that, for all f∈_t, we have(A_ε,f)≤2 ·( min{ε_0 , c·ε/‖ f ‖_}, S, B_, Λ),where ε_0:=(5t ·‖ S_0^-1‖·‖ Q ‖)^-1andc:=(7t ‖ T‖)^-1.Let us see how this formula looks like in the Hilbert space setting mentioned after equation (<ref>), where S is a compact operator between Hilbert spaces. Here, ‖ T‖ = ‖ Q ‖ = 1 and ‖ S_0^-1‖ is the reciprocal of the smallest singular number σ_ min of the mapping S_0. Thus, we have(A_ε,f)≤ 2 ( ε/7t ‖ f ‖_ ,S, B_,Λ)for all 0<ε<σ_ min·‖ f ‖_.§.§.§ Simple cones based on a pilot sample In the paper <cit.>, a setting as in the beginning of Section <ref> is considered, but with more particular requirements regarding the norms inand . Indeed, letus additionally assume that f_ := ((f,u_)_/λ_)_∈𝕀_ρ, ρ∈ [1,∞], g_ := ((g,v_)_)_∈𝕀_τ, τ∈ [1,ρ],where the λ_ are positive reals, which we assume to be ordered:λ__1≥λ__2≥λ__3≥⋯ >0.In this context, we write (_)_∈𝕀_ρ to denote the ℓ_ρ-norm of the sequence (_)_∈𝕀, and analogously for ℓ_τ.We again consider S:→ such thatS(u_)=v_ for all ∈𝕀,and choose 𝕀_0 as the set of integers {1,2,…,n_1}. Then the cone in (<ref>) takes the special form_t={f∈f_≤ t((f,u__i)_/λ__i)_1≤ i ≤ n_1_ρ}.This means that the cone _t consists of functions whose norm can be bounded in terms of the pilot sample(f,u__1)_,…, (f,u__n_1)_,and the mapping T is given by T(f)=∑_i=1^n_1 (f, u__i)_· v__i. The paper <cit.> then studies an adaptive algorithm A_n, defined by A_n (f)=∑_i=1^n (f, u__i)_· v__i, where n=n(f,ε) is allowed to depend on f∈ and on an error threshold ε>0, and in general n may be different from n_1. The definition of A_nrequires that the information class Λ allows access to (f, u__i)_. Note that this is in general not fulfilled if we consider function spaces and the information class Λ^ std, but for the sake of comparison let us consider the case that Λ contains the measurements(f, u__i)_. Using the notation in Theorem <ref> and its proof, we are allowed to approximate T by Q_m=T, where m=n_1, because we assume that we have access to the (f, u__i)_. Hence we obviously have (ε,T,B_,Λ)≤ n_1 for any error threshold ε. 
Then Theorem <ref> or Corollary <ref> implies the existence of anε-approximating algorithm A_ε such that (A_ε,f)≤ (ε/7 t ‖ Tf ‖_ ,S,B_,Λ)+ n_1 .As outlined in <cit.>, we have (ε/7 t ‖ Tf ‖_ ,S,B_,Λ) =min{n∈_0((λ_)_≥ n)_ρ'≤ε/7 t ‖ Tf ‖_},where ρ' is such that 1/ρ + 1/ρ'=1/τ. In order to determinethe minimum in (<ref>), one needs to find the minimal n such that ‖ Tf ‖_·((λ_)_≥ n)_ρ' = ((f,u__i)_/λ__i)_1≤ i ≤ n_1_ρ·((λ_)_≥ n)_ρ'≤ε/7t.This corresponds to the analysis done in <cit.> and leads (up to constants) to the same result. Indeed, it is shown in that paper that the cost is essentially optimal. Furthermore, using these findings, one can then analyze the problem from the viewpoint of Information-Based Complexity, and do a tractability analysis as discussed in <cit.>; differences in the constant are due to the fact that the results in <cit.> are tailored to this particular situation, whereas Theorem <ref> and Corollary <ref> are formulated in a much more general way. A drawback of the approach just outlined is that we need information that isin general not contained in Λ^ std, but we need direct access to the (f,u__i)_. If this information is not available, thenone way is to approximate the (f,u__i)_, possibly by only using information from Λ^ std, which may be more practical for applications. In this case, Q_m will in general no longer be equal to T as above, but a proper approximation of T. Theorem <ref> and Corollary <ref>enable us to analyze this situation. We present an example of such a situation in the following section. §.§.§ Simple cones based on a pilot sample: L_2-approximation in weighted Korobov spaces using standard information We now give a concrete example of the situation described in Section <ref>, but with an algorithm that only uses information from Λ^ std.Let =L_2 ([0,1]^d) and let ℱ⊂ L_2 ([0,1]^d be the weighted Korobov space of one-periodic functions on [0,1]^d with smoothness α>1/2, i.e.,ℱ={f f()=∑_∈^df()e^2π·, f_:= (f() √(r_2α, ()))_∈^d_2 < ∞}.Here, =(γ_j)_j≥ 1 is a non-increasing sequence of reals in (0,1], andr_2α,()=∏_j=1^d r_2α,γ_j (k_j),r_2α,γ_j(k_j)= 1 γ_j^-1 k_j^2αfor =(k_1,…,k_d) (note that we always have r_2α,()≥ 1). The spaceis a function class commonly considered in the literature on quasi-Monte Carlo rules, see, e.g., <cit.>.Let S(f)=f, i.e., S is the embedding ofin . This means that in the notation of the previous section we have ρ=τ=2, 𝕀=^d, u_=v_=e^2π·∘, and λ_= (r_2α, ())^-1/2, ∈^d.For a constant M>1, let𝕀_0=𝒜_d,M:= {∈^d r_2α, ()≤ M},so 𝒜_d,M contains the indicescorresponding to the “largest” Fourier coefficients of a given f∈ℱ, or putting it differently, 𝒜_d,M contains the collection of those 𝒜_d,M indicesfor which the values of λ_=√(r_2α,()) are the largest. In the notation of Section <ref>, 𝒜_d,M=n_1.Let the cone _t be defined as in (<ref>), which in this case yields_t={f∈f_≤ t(f() √(r_2α,()))_∈𝒜_d,M_2}.Let also the mappings T, Q, and S_0 be defined as in the beginning of Section <ref>.As shown in <cit.>, an—in the sense of the cost—optimalalgorithm using information from Λ^ all would be to approximate f∈_t by the truncated Fourier seriesA_n (f)=∑_∈_d,M'f() e^2π·,with M'=M'(ε,f) chosen adaptively, and _d,M' defined anaolously to _d,M which exactly corresponds to the algorithm A_n in Section <ref>. However, we would now like to take a different route, where we only use information from Λ^ std. Recall that it is described in Remark <ref> how we can find an ε-approximating algorithm on the cone _t using only information from Λ^ std. 
The algorithm consists of two successive parts: * a homogeneous algorithm Q_m for approximating T whose worst case error on the unit ball is smaller than the constant 1/(2t),* and a homogeneous algorithm A_k for approximating S whose worst case error on the unit ball is smaller than ε/(2t‖ Q_m f ‖_). For the first part, we can consider an algorithmQ_m=A_m that uses an N-point rank-1 lattice rule to approximate Fourier coefficients of f∈_t, and then replaces the f() by these approximations. Such algorithms were analyzed in, e.g., <cit.>, see also <cit.> for an overview. The integration nodes used in the lattice rule are given by_ℓ={ℓ/N}, 0≤ℓ≤ N-1,where {y}=y-⌊ y ⌋ denotes the fractional part of a real y, and whereis suitably chosen generating vector of the lattice rule. Further details on the choice of suchcan be found again in <cit.> and <cit.>. Then, A_m is defined by(A_m (f))():= ∑_∈𝒜_d,M(1/N∑_ℓ=0^N-1 f({ℓ/N})e^-2πℓ·/N)e^2π·for f∈_t and ∈ [0,1]^d. Note that A_m is linear, and in particular homogeneous. Moreover, A_m usesm=N function evaluations as information measurements.For the algorithm A_k in the second part, one can again use an algorithm of the form (<ref>) with index set 𝒜_d,M' instead of 𝒜_d,M and lattice size k instead of N, where M' and k are adapted to the outcome of A_m(f). Alternatively, we can use a least-squares estimator of the form A_k(f)= g∈𝒯(𝒜_d,M') argmin ∑_i=1^k| f(x_i) - g(x_i)|^2.Here, 𝒯(I) denotes the space of all trigonometric polynomials with frequencies from I⊂^d. This way, the information cost k will be much lower than with algorithms of the form (<ref>),see <cit.> for upper bounds using least-squares and <cit.> for lower bounds using lattices.The points x_i of the least-squares algorithm can simply be chosen as realizations of i.i.d. uniformly distributed random variables on [0,1]^d, see <cit.>. The drawback of this construction is that it only works with high probability and there is always a (very) small probability that the worst-case error of the resulting algorithm A_k will be larger than the desired threshold ε/(2t‖ Q_m f ‖_).No matter how we decide, we end up withan ε-approximating algorithm using information from Λ^ std such that (<ref>) holds, with the same notation as in Corollary <ref>. We stress that this is an advancement in comparison to the results in <cit.>, where there were no tools available to analyzealgorithms and complexity for information from Λ^ std.§.§.§ Multi-layer conesAs mentioned in <cit.>, it may be advantageous to consider a definition of cones that modifies the definition(<ref>), in order to keep better track of the decay of thequantities (f,u_)_. This would help in limiting the cost of adaptive algorithms in situations where the (f,u_)_ decay fast. Indeed, consider a partition of 𝕀 into a countable number of pairwise disjoint subsets(𝕀_j)_j≥ 0, i.e., 𝕀=⋃_j=0^∞𝕀_j.For j≥ 1, defineT_j ℱ→ℱ,T_j(f):= ∑_∈𝕀_j (f,u_)_· u_.That is, the T_j are the basis projections corresponding to the finite index sets 𝕀_j.Then, as in <cit.>, one can modify (<ref>) to𝒞:= { f ∈ℱ‖ T_j+ℓ f ‖_ℱ≤ t v^ℓ‖ T_j f ‖_ℱ,∀ j,ℓ∈},where v<1<t.Opposed to the cones considered in (<ref>), which were defined via a single reference layer, let us calla multi-layer cone. One quickly realizes (using the triangle inequality) that such multi-layer cones are actually contained in a single-layer cone as in (<ref>) with T being the basis projection for coefficients ∈𝕀_0 ∪𝕀_1 and suitably chosen inflation factor. 
As such, solvability statements and upper bounds carry over from Corollary <ref>.We leave it for future research to improve upon the bounds resulting from this inclusion. Noting that the information class Λ^ all has been dealt with in <cit.>,this problem is especially interesting for the case of information from the class Λ^ std.Acknowledgement.We would like to thank Aicke Hinrichs and Erich Novak for fruitful discussions on the optimality of homogeneous algorithms. David Krieg is supported by the Austrian Science Fund (FWF) Project M 3212-N. Peter Kritzer acknowledges the support of the Austrian Science Fund (FWF) Project P34808. Research work on this paper was partly carried out during the authors' stays at the MATRIX Institute in Creswick, Australia, during the program “Computational Mathematics for High-Dimensional Data in Statistical Learning” (Feb. 2023), and at Schloss Dagstuhl, Germany, during the Dagstuhl Seminar 23351 “Algorithms and Complexity for Continuous Problems” (Aug. 2023); we thank both institutions for their hospitality.99Bak71 N. S. Bakhvalov. On the optimality of linear methods for operator approximation in convex classes of functions (in Russian),Zh. Vychisl. Mat. Mat. Fiz. 11, 244–249, 1971.BG R. Bartle and L. Graves. Mappings between function spaces. Trans. Am. Math. Soc. 72, 400–413, 1952.BKUV G. Byrenheid, L. Kämmerer, T. Ullrich, T. Volkmer. Tight error bounds for rank-1 lattice sampling in spaces of hybrid mixed smoothness. Numer. Math., 136, 993– 1034, 2017.CW04 J. Creutzig and P. Wojtaszczyk. Linear vs. nonlinear algorithms for linear problems, J. Complexity 20, 807–820, 2004.DKP22 J. Dick, P. Kritzer, F. Pillichshammer. Lattice Rules. Springer, Cham, 2022. DHKM20 Y. Ding, F.J. Hickernell, P. Kritzer, S. Mak. Adaptive approximationfor multivariate linear problems with inputs lying in a cone.In: F.J. Hickernell, P. Kritzer (eds.) Multivariate Algorithms and Information-Based Complexity, 109–145, DeGruyter, Berlin/Boston, 2020.DKU M. Dolbeault, D. Krieg, and M. Ullrich.A sharp upper bound for sampling numbers in L_2.Appl. Comput. Harmon. Anal. 63,113–134, 2023.Don94 D. L. Donoho. Statistical estimation and optimal recovery. The Annals of Statistics, 22(1), 238–270, 1994.DPW R. DeVore, G. Petrova, P. Wojtaszczyk. Data assimilation and sampling in Banach spaces. Calcolo, 54:963–1007, 2017.FP23 S. Foucart, G. Paouris.Near-optimal estimation of linear functionals with log-concave observation errors. https://arxiv.org/abs/2301.07228arXiv:2301.07228, 2023. FPRU S. Foucart, A. Pajor, H. Rauhut, and T. Ullrich. The Gelfand widths of ℓ_p-balls for 0<p≤ 1, J. Complexity 26, 629–640, 2010.FR S. Foucart, H. Rauhut. A Mathematical Introduction to Compressive Sensing. Birkhäuser, New York, 2013.GGM23 P. Gaillard, S. Gerchinovitz, E. de Montbrun. Adaptive approximation of monotone functions. https://arxiv.org/abs/2309.07530arXiv:2309.07530, 2023.GM80 S. Gal and C. A. Micchelli. Optimal sequential and non-sequential procedures for evaluating a functional. Appl. Anal. 10, 105–120, 1980.GG84 A. Yu. Garnaev and E. D. Gluskin.The widths of a Euclidean ball. Soviet Math. Dokl. 30 (1984), 200–204.GL22 T. Goda, P. L'Ecuyer. Construction-free median quasi-Monte Carlo Rules for function spaces with unspecified smoothness and general weights. SIAM J. Sci. Comp. 44,A2765–-A2788, 2022.GSM23 T. Goda, K. Suzuki, M. Matsumoto. A universal median quasi-Monte Carlo integration. https://arxiv.org/abs/2209.13186arXiv:2209.13186, 2023. Hei23a S. Heinrich. 
Randomized Complexity of Parametric Integration and the Role of Adaption I. Finite Dimensional Case. arXiv:2306.13471, 2023. Hei23b S. Heinrich. Randomized Complexity of Parametric Integration and the Role of Adaption II. Sobolev Spaces. arXiv:2306.13499, 2023. HJR16 F.J. Hickernell, L.A. Jiménez Rugama. Reliable adaptive cubature using digital sequences. In: R. Cools, D. Nuyens (eds.), Monte Carlo and quasi-Monte Carlo Methods 2014, 367–383, Springer, Cham, 2016. HJRL18 F.J. Hickernell, L.A. Jiménez Rugama, D. Li. Adaptive quasi-Monte Carlo methods for cubature. In: J. Dick, F.Y. Kuo, H. Woźniakowski (eds.), Contemporary Computational Mathematics—a celebration of the 80th birthday of Ian Sloan, 597–619, Springer, Cham, 2018. K77 B. S. Kashin. Diameters of some finite-dimensional sets and classes of smooth functions. Math. USSR, Izv., 11:317–333, 1977. Kor94 N. P. Korneichuk. Optimization of active algorithms for recovery of monotonic functions from Hölder's class. J. Complexity 10, 265–269, 1994. KU1 D. Krieg and M. Ullrich. Function values are enough for L_2-approximation. Foundations of Computational Mathematics, 21(4), 1141–1151, 2021. KNR19 R. J. Kunsch, E. Novak, D. Rudolf. Solvable integration problems and optimal sample size selection. J. Complexity 53, 40–67, 2019. KNW R. J. Kunsch, E. Novak, M. Wnuk. Randomized approximation of summable sequences – adaptive and non-adaptive. arXiv:2308.01705, 2023. KSW06 F.Y. Kuo, I.H. Sloan, H. Woźniakowski. Lattice rules for multivariate approximation in the worst case setting. In: H. Niederreiter, D. Talay (eds.), Monte Carlo and Quasi-Monte Carlo Methods 2004, 289–330, Springer, Berlin, 2006. M90 P. Mathé. s-Numbers in Information-Based Complexity. J. Complexity 6, 41–66, 1990. NSU N. Nagel, M. Schäfer, and T. Ullrich. A new upper bound for sampling numbers. Found. Comput. Math. 22(2), 445–468, 2022. N96 E. Novak. On the power of adaption. J. Complexity 12, 199–237, 1996. NW08 E. Novak, H. Woźniakowski. Tractability of Multivariate Problems. Volume I: Linear Information. EMS, Zürich, 2008. P86 E.W. Packel. Linear problems (with extended range) have linear optimal algorithms. Aeq. Math. 31, 18–25, 1986. Pinkus A. Pinkus. n-Widths in Approximation Theory. Springer, Berlin, 1985. Pla96 L. Plaskota. Noisy Information and Computational Complexity. Cambridge University Press, Cambridge, UK, 1996. S01 K.A. Sikorski. Optimal Solution of Nonlinear Equations. Oxford University Press, Oxford, UK, 2001. S60 S. A. Smolyak. Interpolation and quadrature formulas for the classes W_s^α and E_s^α. Dokl. Akad. Nauk SSSR, 131(5), 1028–1031, 1960. S65 S. A. Smolyak. On optimal restoration of functions and functionals of them (in Russian), Candidate Dissertation, Moscow State University, 1965. TW80 J. F. Traub, H. Woźniakowski. A General Theory of Optimal Algorithms. Academic Press, New York, 1980. V22 F. Voigtlaender. L^p sampling numbers for the Fourier-analytic Barron space. arXiv:2208.07605, 2022. WW86 A.G. Werschulz, H. Woźniakowski. Are linear algorithms always good for linear problems? Aeq. Math. 31, 202–211, 1986. Authors' addresses: David Krieg, Department of Analysis, Johannes Kepler University Linz, Altenbergerstr. 69, 4040 Linz, Austria. Peter Kritzer, Johann Radon Institute for Computational and Applied Mathematics (RICAM), Austrian Academy of Sciences, Altenbergerstr. 69, 4040 Linz, Austria.
"authors": [
"David Krieg",
"Peter Kritzer"
],
"categories": [
"math.NA",
"cs.NA"
],
"primary_category": "math.NA",
"published": "20231127123759",
"title": "Homogeneous algorithms and solvable problems on cones"
} |
Department of Mathematics, COMSATS University Islamabad, Lahore Campus, Lahore, Pakistan
School of Mathematics and Natural Sciences, New Uzbekistan University, Mustaqillik Ave. 54, Tashkent 100007, Uzbekistan
Ulugh Beg Astronomical Institute, Astronomy str. 33, Tashkent 100052, Uzbekistan
Institute of Fundamental and Applied Research, National Research University TIIAME, Kori Niyoziy 39, Tashkent 100000, Uzbekistan
Central Asian University, Tashkent 111221, Uzbekistan
University of Tashkent for Applied Sciences, Gavhar Str. 1, Tashkent 100149, Uzbekistan
Tashkent State Technical University, Tashkent 100095, Uzbekistan
Institute of Theoretical Physics, National University of Uzbekistan, Tashkent 100174, Uzbekistan
Research Centre for Theoretical Physics and Astrophysics, Institute of Physics, Silesian University in Opava, Bezručovo nám. 13, CZ-74601 Opava, Czech Republic
Shadow of novel rotating black holes in GR coupled to nonlinear electrodynamics and constraints from EHT results
Muhammad Ali Raza, Furkat Sarikulov, Javlon Rayimbaev, Muhammad Zubair, Bobomurat Ahmedov, Zdeněk Stuchlík
Received: date / Accepted: date
=======================================================================================================================================================================================================
We study the optical properties of spacetime around a novel regular black hole (BH) in general relativity (GR) coupled to nonlinear electrodynamics (NED), which is asymptotically flat. First, we study the angular velocity and Lyapunov exponent in unstable photon circular orbits in the novel spherically symmetric BH spacetime. Later, the rotating regular BH solution is obtained using the Newman-Janis algorithm, and the event horizon properties of the BH are determined. We analyze the effective potential for the circular motion of photons in the spacetime of the novel rotating BH. Also, we analyze the photon sphere around the novel BH and its shadow using celestial coordinates. We obtain that an increase of the BH spin and charge, as well as of the NED field nonlinearity parameter, causes an increase in the distortion parameter of the BH shadow, while the area of the shadow and its oblateness decrease. Moreover, we also obtain the constraint values for the BH charge and the nonlinearity parameter using Event Horizon Telescope data for the shadow sizes of the supermassive BHs Sgr A* and M87*. Finally, the emission rate of BH evaporation through Hawking radiation is also studied.
§ INTRODUCTION
From a theoretical point of view, astrophysical black holes (BHs) are objects that have (Arnowitt-Deser-Misner) mass and electric charge and, in most scenarios, are considered to be rotating with a spin parameter.
It is also important to know the electromagnetic source of the electric field generated by the BH charge, i.e., whether the charge belongs to linear or nonlinear electrodynamics (NED). For the first time, electrically and magnetically charged BH solutions were obtained by Reissner <cit.> and independently by Nordström <cit.>. These solutions are governed by general relativity (GR) coupled to Maxwell's linear electrodynamics; however, they contain a physical singularity. Other exact solutions for charged BHs in GR coupled to NED that avoid the singularity are called regular BH solutions <cit.>.
The Event Horizon Telescope (EHT) collaboration revealed in 2019 that they had captured the first photograph of a BH, a shadow image of the supermassive BH at the center of the M87 galaxy <cit.>. Thanks to the information provided by this revelation about BH physics, scientists are now more interested in researching the BH shadow. In relation to the BH shadow, one of the first quantitative proposals for validating the Kerr metric with shadow analysis was made by Johannsen & Psaltis <cit.>. Since strong-field phenomena provide the only indirect tests that can access the event horizon <cit.>, the BH shadow plays an important role in GR <cit.>. The EHT team published the first horizon-scale image of the BH M87*. Based on the publications of the EHT, a BH was rendered physically tangible, and constraints could be placed on the size of the shadow, which is the central flux depression with a factor of ≳10, and on the compact emission zone with an angular diameter of θ_d = 42±3 μas <cit.>. Researchers from the EHT project later unveiled a picture of the Milky Way BH Sgr A* in 2022 based on stellar dynamical priors on its mass and distance <cit.>, showing an angular shadow diameter of d_sh = 48.7±7 μas. According to GR, the recorded images of the two BHs, M87* and Sgr A*, are consistent with the traits of a Kerr BH <cit.>. Although Kerr-like BHs arising in modified gravities are not entirely confirmed, given the relative deviation of quadrupole moments and the current measurement error of the spin or angular momentum, they are not entirely excluded either <cit.>. Furthermore, Sgr A* shows concordance with the predictions of GR across three orders of magnitude in central mass compared to the EHT results for M87* <cit.>. Therefore, one of the topical problems in astrophysics nowadays is testing modified or alternative gravity models using the data reported by the EHT Collaboration.
Recently, in Refs. <cit.> the shadow of the Simpson-Visser (SV) BHs (wormholes) was investigated, constraints on the length parameter were obtained using the image sizes of the supermassive BH/wormhole candidates at the centers of the M87 and Milky Way galaxies observed by the EHT, and the similarity of the SV BH shadow to the shadow of the Kerr BH was shown. However, the SV wormhole (with l>2) with a large spin can cast a closed photon ring. Gravitational lensing and retrolensing in both weak and strong gravitational field limits, together with quasinormal spectra and gray-body factors, have been studied in Refs. <cit.>. Also, the strong deflection limits of the Simpson-Visser spacetime have been studied in Ref. <cit.>, where it was found that the photon sphere around the SV spacetime does not depend (or depends only weakly) on the length parameter when l ≤ 3. This implies that distinguishing the BH from the wormhole in the SV metric is not possible.
Moreover, the degeneracy of the combined effects of the electric charge and the bounce parameter of SV BHs was studied in Ref. <cit.>, showing that the orbital motion around black-bounce-Reissner-Nordström BHs can be the same as around a Schwarzschild BH. Relationships between the BH charge and the bounce parameter that may break this degeneracy were obtained using the precession data from the orbit of the S2 star around Sgr A* detected by the GRAVITY collaboration and the shadow size of Sgr A* measured by the EHT. Similar tests of various BHs in gravity theories using both EHT observations have been performed in Refs. <cit.>.
The main focus of this work is to study the photon motion in the spacetime of the novel regular BH and its optical properties, such as the photon sphere around the BH, the BH shadow, and its distortion. Also, we obtain constraints on the BH charge and coupling parameters using EHT data for the shadow sizes of the supermassive BHs Sgr A* and M87*. Moreover, we study the emission rate of novel regular BH evaporation through Hawking radiation. The work is organized as follows: in Sect. <ref> we give a brief explanation of the novel regular BH solution. In Sect. <ref> we analyze the effect of the interaction between the photon and the NED field on the novel BH shadow using the effective metric for photon motion in the NED field, and show that this effect is small enough (smaller than the error in the BH shadow observations) to be neglected. Sect. <ref> is devoted to studying the angular velocity and Lyapunov exponent in unstable photon circular orbits. In Sect. <ref>, we obtain a rotating regular BH solution using the Newman-Janis algorithm (NJA). The geodesic structure around the obtained rotating BH and the circular photon orbits of the BH, together with the shadow cast by the BH, are investigated, and constraints on the BH charge and coupling parameter are obtained using EHT observations in Sect. <ref>. Moreover, in Sect. <ref>, we study the energy emission rate of the BH evaporation through Hawking radiation in the spacetime near the BH horizon. Finally, we summarize our findings and results in Sect. <ref>. Throughout this paper, we use geometrized units c=G=1 and M=1 unless stated otherwise.
§ CHARGED BLACK HOLE WITH NONLINEAR ELECTRODYNAMICS
We begin by defining the coupling of gravity through Einstein's GR with the NED model determined by the action <cit.>
S=∫ d^4x √(-g)[R/16 π G+ℒ(F)],
where g is the determinant of the metric tensor, R is the Ricci scalar, G is Newton's gravitational constant, and the Lagrangian defining the NED model is denoted by ℒ(F), given by <cit.>
ℒ( F)=-4bF/(√(b)+√(b-2√(- F/2)))^2,
such that 4 F=F_μνF^μν=2(B^2-E^2) is the Maxwell invariant and b is the nonlinearity parameter. If we expand the Lagrangian (<ref>) for b→∞, we get
ℒ( F)=- F-√(- F) F/√(2)b+5 F^2/8b^2+7√(- F) F^2/8√(2)b^3+ O(1/b^4).
Therefore, under the limit b→∞, we get ℒ( F)=- F, which is the Lagrangian of linear Maxwell electrodynamics. By considering the magnetic field B=0 and solving the equation ∇_μ(F^μν∂ℒ/∂ F)=0, the electric field E(r) becomes <cit.>
E(r)=bq(2br^2+q)/2(br^2+q)^2,
where q is the electric charge. Note that the electric field in Eq. (<ref>) can be expressed asymptotically and at the origin, respectively, as
E(r) = q/r^2-3q^2/2br^4+ O(1/r^6),
E(r) = b/2-b^3r^4/2q^2+ O(r^6).
It means that the electric field E(r) vanishes far from the source and, unlike in Maxwell's linear electrodynamics, is finite at the origin.
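As a quick sanity check of Eq. (<ref>) and of the two limiting forms above, the field profile is easy to evaluate numerically. The following short Python sketch is our own illustration (the parameter values q=0.3 and b=0.5 are arbitrary and not taken from the figures of this work); it reproduces the Coulomb tail q/r^2 at large r and the finite value b/2 at the origin:

import numpy as np

def E_field(r, q, b):
    # electric field of the NED source, Eq. for E(r)
    return b * q * (2.0 * b * r**2 + q) / (2.0 * (b * r**2 + q)**2)

q, b = 0.3, 0.5
for r in (0.01, 1.0, 10.0, 100.0):
    print(r, E_field(r, q, b), q / r**2)   # second column approaches the third for large r
print("E(0) =", E_field(0.0, q, b), " b/2 =", b / 2)

For r→0 the printed value equals b/2 exactly, and for r≳10 the relative deviation from q/r^2 is at the level of one percent or below, in line with the asymptotic expansions quoted above.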
The variation of the action (<ref>) and solving the Einstein's field equations for the static spherically symmetric metricds^2=-f(r)dt^2+dr^2/f(r)+r^2(dθ^2+sin^2θ dϕ^2)yields the BH solution given byf(r)=1-2M/r+π q√(b q)/2r-q√(b q)/rtan^-1(√(b) r/√(q)),where M is the BH mass. It is interesting to note that the asymptotic expansion of the metric function f(r) can be written asf(r)=1-2M/r+q^2/r^2+ O(1/r^4),which shows that the metric function f(r)→1 as r→∞, i.e, the spacetime metric (<ref>) is asymptotically flat. Furthermore, far from the source, the BH solution mimics the Reissner-Nordström BH solution. Furthermore, expanding f(r) around b=0, we getf(r)=1-2M/r+π q√(bq)/2r-bq+b^2r^2/3+ O(b^5/2).Then, the BH metric (<ref>) reduces to the Schwarzschild metric when b→0. § EFFECTIVE METRIC FOR PHOTONS In fact, there is no interaction between electromagnetic waves and electrostatic fields in linear electrodynamics. However, in NED, a photon interacts with the field due to the field's nonlinearity and does not follow null geodesics. Thus, one has to take into account considerations for photon motion spacetime of regular BHs obtained in GR coupled with NED governed by the lapse function given by Eq. (<ref>) that is determined by the "new-null" geodesics in the spacetime of the effective geometry for photons around the regular BH (introduced in <cit.>)g̃^μν = g^μν-4ℒ_ FF/ℒ_ FF^λ_μF^μν ,g̃_μν = 16ℒ_ FFF_μηF^η_ν-(ℒ_ F+2Fℒ_ FF) g_μν/F^2ℒ^2_ FF-16(ℒ_ F+Fℒ_ FF)^2 , where ℒ_ F=∂ℒ(F)/∂ F , ℒ_ FF=∂^2 ℒ(F)/∂ F^2 . The eikonal equation for photons in the effective geometry related to the regular BH can be written as g̃_μν k^μ k^ν=0,where k^μ is the four-wave vector related to the four-momentum of photons by p^μ=ħ k^μ (in Gaussian units, ħ = 1). Our graphical and numerical analyses have shown that the difference in shadow sizes obtained with and without taking into account the interaction between the photon and the NED field (see Fig. <ref>, it is less than 6%) is smaller than the error in the data of EHT observations of Sgr A* (∼ 14%) and M87* (about 7.14%). Moreover, the influence of NED on photon motion weakens due to the appearance of the spin parameter when we obtain the rotating BH for this spacetime metric. Since in this work, we mainly focus on obtaining constraints on spacetime parameters using EHT data, we will work with the metric (<ref>) for further analysis.§ ANGULAR VELOCITY AND LYAPUNOV EXPONENTNow, we study the quasinormal modes in terms of angular velocity Ω and Lyapunov exponent Γ for static BH (<ref>). The angular velocity and Lyapunov exponent are respectively the real and imaginary parts of the eikonal quasinormal frequencies ω_n in dimensions D≥4 and can be written as <cit.>ω_n=Ω l_c-i(n+1/2)|Γ|,where n and l_c are overtones and multipole numbers, respectively. The parameters Ω and Γ depend on the unstable circular photon trajectories in the vicinity of a static spherically symmetric asymptotically flat BH <cit.> that are emitted by the BH in the eikonal part of its spectrum. The angular velocity and Lyapunov exponent are useful in studying the thermodynamics of the BH since these parameters allow us to construct the relation between the phase transition and quasinormal modes <cit.>. 
The relation for angular velocity can be written asΩ=ϕ̇/ṫ=√(f(r_ph))/r_phand the Lyapunov exponent is given asΓ=√(-1/2ṫ^2∂^2V_eff/∂ r^2)|_r=r_ph,where r_ph is the radius of the photon sphere determined by the roots ofd/dr(r^2/f(r))|_r=r_ph=0and V_eff is the effective potential in the equatorial plane (θ=π/2) given byV_eff=L^2f(r)/r^2-E^2,such that L and E are angular momentum along z-axis and energy of the photon, respectively.We have plotted the behavior of angular velocity Ω and Lyapunov exponent Γ for the different values of b and q in Fig. <ref>. The upper panel shows the plots for angular velocity, and the lower panel corresponds to the plots for the Lyapunov exponent. In the upper left plot, the curves correspond to the different values of b, and the behavior of Ω is observed with respect to q. It shows that the angular velocity increases as q increases for each curve. However, as the value of b increases, the angular velocity increases more rapidly with increasing q. In the upper right plot, each curve corresponds to a different value of q, and the behavior of angular velocity is represented as a function of b. For a small value of q, the angular velocity is constant with respect to b. When the value of q is increased, the angular velocity increases rapidly up to a small value of b and then increases at such a small rate that it approaches nearly a constant value. For even a larger value of q, the angular velocity increases at a higher rate up to a certain value of b and then increases slowly with respect to b. The lower left graph shows the behavior of the Lyapunov exponent with respect to q for different values of q corresponding to each curve. It can be seen that the behavior of the Lyapunov exponent is the same as that for the angular velocity, i.e., the value of the Lyapunov exponent increases with an increase in the value of q and the increasing rate of the Lyapunov exponent also rises as the value of b becomes larger. From the right plot, the Lyapunov exponent is constant with respect to b for a small value of q. For a larger value of q, the Lyapunov exponent increases to a small value of b and then becomes constant. When the value of q increases further, the Lyapunov exponent increases rapidly to a certain value of b and then decreases at a slower rate after reaching a maximum value.§ ROTATING BLACK HOLEA rotating BH metric is one of the simplest generalizations of a static BH with an additional spin parameter usually denoted by a. The behavior and properties of the spacetime structure around a rotating BH are different from its static counterpart, especially in terms of photon motion. The significance of a rotating BH can be estimated by conducting an analysis for the comparison of the BH shadow with the EHT data for supermassive BHs. Since the supermassive BHs are rotating, therefore, for a viable and meticulous comparison of the shadows, it is better to work with rotating BHs. The NJA <cit.> was designed to develop the rotating counterparts of the static BH metrics within GR. The Kerr and Kerr-Newman metrics are the earliest examples of the application of this algorithm to Schwarzschild and Reissner-Nordström metrics, respectively. It is well known that Schwarzschild and Kerr BHs are vacuum solutions, while the Reissner-Nordström and Kerr-Newman BHs are sourced by electric charge. However, Hansen and Yunes <cit.> found that some additional unknown sources arise when the NJA is applied to static BHs in non-GR gravity theories. 
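Before constructing the rotating counterpart, we note that the static-case quantities discussed above (the photon-sphere radius r_ph, the angular velocity Ω, and the Lyapunov exponent Γ) are easy to evaluate numerically from the lapse function f(r). The Python sketch below is our own illustration of the photon-sphere condition and of the expressions for Ω and Γ; the finite-difference steps, the root bracket [2M, 6M], and the sample values q=0.3, b=0.5 are ad hoc assumptions, not choices made in this work:

import numpy as np
from scipy.optimize import brentq

M = 1.0

def f(r, q, b):
    # lapse function of the static NED black hole
    return (1.0 - 2.0*M/r + np.pi*q*np.sqrt(b*q)/(2.0*r)
            - (q*np.sqrt(b*q)/r)*np.arctan(np.sqrt(b)*r/np.sqrt(q)))

def fprime(r, q, b, h=1e-6):
    return (f(r + h, q, b) - f(r - h, q, b)) / (2.0*h)

def photon_sphere(q, b):
    # r_ph solves d/dr (r^2/f) = 0, i.e. r f'(r) = 2 f(r)
    return brentq(lambda r: r*fprime(r, q, b) - 2.0*f(r, q, b), 2.0*M, 6.0*M)

def omega_gamma(q, b):
    rp = photon_sphere(q, b)
    Omega = np.sqrt(f(rp, q, b)) / rp
    # V_eff = L^2 f/r^2 - E^2 with L = 1 and E^2 = f(r_ph)/r_ph^2 (circular-orbit
    # condition); tdot = E/f(r_ph), so Gamma^2 = -V_eff'' f^2 / (2 E^2)
    E2 = f(rp, q, b) / rp**2
    Veff = lambda r: f(r, q, b) / r**2 - E2
    h = 1e-4
    Vpp = (Veff(rp + h) - 2.0*Veff(rp) + Veff(rp - h)) / h**2
    Gamma = np.sqrt(-Vpp * f(rp, q, b)**2 / (2.0 * E2))
    return rp, Omega, Gamma

print(omega_gamma(q=0.3, b=0.5))

In the limit q→0 (or b→0) the output approaches the Schwarzschild values r_ph=3M and Ω=Γ=1/(3√3 M), which is consistent with the trends of Ω and Γ at small q discussed above.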
Recently, the NJA was modified by Azreg-Aïnou <cit.> to obtain the rotating counterparts without the complexification of the radial coordinate. Hence, this algorithm can easily be applied to BH metrics within GR and non-GR theories. It has successfully generated rotating BH metrics for imperfect fluids and generic rotating regular BH metrics <cit.> within GR. Therefore, we also apply this modified NJA to obtain the rotating counterpart of the static metric (<ref>). First, we introduce the Eddington-Finkelstein coordinates (u,r,θ,ϕ); using the transformation
du=dt-dr/f(r),
the static metric (<ref>) becomes
ds^2=-f(r)du^2-2dudr+r^2dθ^2+r^2sin^2θ dϕ^2.
Further, the conjugate metric tensor can be expressed as
g^ab=-l^an^b-n^al^b+m^am̅^b+m̅^am^b
with the null tetrads given by
l^a = δ^a_r,
n^a = δ^a_u-(f(r)/2)δ^a_r,
m^a = 1/(√(2)r)(δ^a_θ+(i/sinθ)δ^a_ϕ),
where m̅^a is the complex conjugate of m^a. It is easy to verify that these null tetrads satisfy the relations
l_al^a=n_an^a=m_am^a=m̅_am̅^a = 0,
l_am^a=l_am̅^a=n_am^a=n_am̅^a = 0,
-l_an^a=-l^an_a=m_am̅^a=m^am̅_a = 1.
Now, we perform the complex coordinate transformations in the (u,r)-plane,
u'→ u-iacosθ, r'→ r+iacosθ,
with a being the spin parameter of the BH. The next step in the NJA is complexifying the radial coordinate r. However, this is not necessary, as shown by <cit.>. The complexification process can be avoided by considering that δ^μ_ν transforms as a vector under (<ref>). At the same time, the metric functions of the metric (<ref>) transform to new, as yet undetermined functions, that is,
f(r)→ F(r,a,θ), r^2→ H(r,a,θ),
such that
lim_a→0F(r,a,θ)=f(r), lim_a→0H(r,a,θ)=r^2.
Using this transformation, the null tetrads become
l^a = δ^a_r,
n^a = δ^a_u-(F/2)δ^a_r,
m^a = 1/√(2H)((δ^a_u-δ^a_r)iasinθ+δ^a_θ +(i/sinθ)δ^a_ϕ).
Making use of the new null tetrads, the rotating metric in the Eddington-Finkelstein coordinates is given by
ds^2 = -Fdu^2-2dudr+2asin^2θ(F-1)dudϕ +2asin^2θ drdϕ+Hdθ^2+sin^2θ(H+a^2sin^2θ(2-F))dϕ^2.
Bringing these coordinates back to the Boyer-Lindquist form, we obtain the rotating counterpart of the static BH metric (<ref>). To this end, we introduce the global coordinate transformations
du = dt+λ(r)dr, dϕ = dϕ'+χ(r)dr,
with
λ(r) = -(a^2+r^2)/(a^2+r^2f(r)), χ(r) = -a/(a^2+r^2f(r)).
Finally, we choose
F = (r^2f(r)+a^2cos^2θ)/H, H = r^2+a^2cos^2θ,
and the Kerr-like rotating BH metric reads
ds^2 = -((Δ(r)-a^2sin^2θ)/ρ^2)dt^2+(ρ^2/Δ(r))dr^2+ρ^2dθ^2+(sin^2θ/ρ^2)((r^2+a^2)^2-Δ(r)a^2sin^2θ)dϕ^2-(2asin^2θ/ρ^2)(a^2+r^2-Δ(r))dtdϕ,
with the metric functions given as
ρ^2 = r^2+a^2cos^2θ,
Δ(r) = a^2+r^2f(r)=r^2-2Mr+a^2+π rq√(bq)/2 -rq√(bq)tan^-1(√(b)r/√(q)).
The metric (<ref>) with the metric functions (<ref>) and (<ref>) describes the novel charged rotating BH in NED. It is obvious that under the limit a→0, the metric (<ref>) reduces to the static metric (<ref>), and the conditions (<ref>) are also satisfied. Note that the asymptotic expansion of the metric function Δ(r) can be written as
Δ(r) = r^2-2Mr+a^2+q^2+ O(1/r^2),
which shows that, far from the source, the rotating BH solution (<ref>) mimics the Kerr-Newman BH solution. Moreover, expanding Δ(r) around b=0, we get
Δ(r) = r^2-2Mr+a^2+π q^3/2√(b)r/2-bqr^2+ O(b^3/2).
Therefore, under the limit b→0, the rotating metric (<ref>) reduces to the Kerr metric.

In Fig. <ref>, we have plotted the limits of the BH parameters as parametric spaces for which the event horizon exists. In particular, these parametric spaces are the a-b, a-q, and b-q spaces, for which the corresponding third parameter has been kept fixed. The highlighted curves correspond to the extremal BHs.
It shows that for a fixed q, when b is increased, the value of the spin a decreases drastically and the decrease gradually slows down as b→1. The same behavior is observed when a is fixed and the event-horizon existence region is plotted in the b-q space. However, for a fixed b, a decreases slowly as q increases, and the decrease in a becomes faster as q→1. Next, we study the horizon structure of the charged rotating BH in NED defined by the metric (<ref>). We employ a numerical technique to solve the equation g^rr=0 for its roots to obtain the radius of the horizon. We plot the horizon radius r_h with respect to a for various values of b and q in Fig. <ref>. For all curves, as expected, the event horizon decreases and the Cauchy horizon increases with an increase in a. For all cases, the extremal value of a decreases. Moreover, when q increases, the event horizon decreases for all values of b. Conversely, when b increases, the event horizon does not change significantly for small values of q. However, for large values of q, the event horizon decreases with an increase in b.
§ PHOTON MOTION AND SHADOW OF THE BH
In this section, we investigate the geodesic structure around the rotating BH (<ref>). We study the circular photon orbits of the BH characterized by the effective potential, which is a key quantity for examining the shadow cast by the novel charged rotating BH in NED. The geodesics of a particle moving in this background can be obtained by solving the geodesic equations. To derive the geodesic equations, one can adopt the Hamilton-Jacobi (HJ) approach. The HJ equation describing the motion of a particle is given by
∂ S/∂τ=-1/2g^μν∂ S/∂ x^μ∂ S/∂ x^ν,
where τ is the affine parameter. For the BH metric under consideration, there exist two Killing vector fields, ∂_t and ∂_ϕ, associated with time-translation and rotational invariance, which generate two constants of motion: the particle energy E and the orbital angular momentum L along the z-axis, given as
-E=g_tμẋ^μ = p_t, L=g_ϕμẋ^μ = p_ϕ,
where p_t and p_ϕ are the generalized momenta in the respective directions. Then we can assume the Jacobi action in the form
S=1/2m_p^2τ-Et+Lϕ+S_r(r)+S_θ(θ),
where m_p is the rest mass of the particle and is the third constant of motion. The functions S_r(r) and S_θ(θ) depend only on r and θ, respectively, and are arbitrary functions yet to be determined. Substituting the Jacobi action (<ref>) into the HJ equation (<ref>) and considering photon mass m_p=0, one obtains
S_r(r) = ∫^r√(ℛ(r))/Δ(r)dr, S_θ(θ) = ∫^θ√(Θ(θ))dθ,
with
ℛ(r) = ((r^2+a^2)E-aL)^2-Δ(r)(𝒵+(L-aE)^2),
Θ(θ) = 𝒵+cos^2θ(a^2E^2-L^2/sin^2θ),
where 𝒵 is the Carter constant, related to the Killing-Yano tensor field, and is the fourth constant of the geodesics. Since we have four equations corresponding to the four coordinate variables, and the system of equations is completely integrable, we have four constants. Further, taking derivatives of the Jacobi action with respect to these four constants and setting them equal to zero, we obtain the following geodesic equations
ρ^2dt/dτ = (r^2+a^2)/Δ(r)(E(r^2+a^2)-aL)+a(L-aEsin^2θ),
ρ^2dr/dτ = ±√(ℛ(r)),
ρ^2dθ/dτ = ±√(Θ(θ)),
ρ^2dϕ/dτ = a/Δ(r)(E(r^2+a^2)-aL)+(L/sin^2θ-aE).
Now we focus on the circular photon orbits by analyzing the radial motion. The radial geodesic equation (<ref>) can also be expressed as
1/2(dr/dτ)^2+V_eff=0,
where V_eff is the effective potential; in the equatorial plane (ρ^2=r^2) it takes the form
V_eff = -ℛ(r)/(2r^4).
The behavior of the effective potential V_eff is plotted with respect to r for various values of b and q in Fig.
<ref>. We focus only on the peaks of the curves in the plots, which correspond to the unstable circular null orbits. In the top panel, we can see that for a large value of q and a fixed value of a, the unstable circular null orbits decrease in size as b increases. However, no significant change is observed for a small value of q. In the middle panel, the unstable circular null orbits reduce in size with the increase in q for both cases. Similarly, as the value of a increases, the unstable circular null orbits shrink. The circular photon orbits satisfy the conditions
V_eff=0 , ∂ V_eff/∂ r=0 ,
and the condition
∂^2 V_eff/∂ r^2<0
ensures that the orbits are unstable. Now, introducing the definitions ξ=L/E and η=𝒵/E^2 and solving both equations in (<ref>), we obtain
ξ(r_p) = ((a^2+r_p^2)Δ'(r_p)-4Δ(r_p)r_p)/(aΔ'(r_p)),
η(r_p) = 16r_p^2Δ(r_p)(a^2-Δ(r_p))/(a^2Δ'(r_p)^2)-r_p^4/a^2 +8r_p^3Δ(r_p)/(a^2Δ'(r_p)),
where the prime denotes the derivative with respect to r and r_p is the radius of the photon sphere. For the unstable orbits, the condition (<ref>) yields
r+(2Δ(r)/Δ'(r)^2)(Δ'(r)-rΔ''(r))|_r=r_p>0.
§.§ The shadow of the rotating black hole in NED
Now, we would like to study the shadow cast by the rotating charged BH in NED. It is worth pointing out that, in our case, all light sources are located at infinity and distributed uniformly in all directions. Moreover, there is no light source between the BH and the observer, and the observer is located at infinity. In order to construct the shadow images under the above-mentioned assumptions, one needs to introduce two celestial coordinates <cit.>
α = -lim_r_0→∞(r_0^2sinθ_0dϕ/dr|_θ→θ_0,r→ r_0) = -ξ(r_p)cscθ_0,
β = lim_r_0→∞(r_0^2dθ/dr|_θ→θ_0,r→ r_0) = ±√(η(r_p)+a^2cos^2θ_0-ξ(r_p)^2cot^2θ_0),
where θ_0 is the inclination angle of the observer. If the observer is located in the equatorial plane, these celestial coordinates simplify to
α = -ξ(r_p), β = ±√(η(r_p)).
The shadow can be obtained by producing a parametric plot in the α-β celestial plane consistent with Eqs. (<ref>) and (<ref>), where the parameter governing the plot is r_p. Such a region is not illuminated by the bright photon sources, and the boundary of the shadow is determined by the radius of the circular photon orbits. The shadow curves are plotted in Fig. <ref> for various combinations of the parameters a, b, and q. In the top panel, a is kept fixed while b increases from left to right across the panel, and each curve corresponds to a different value of q. It is clear that the shadow size decreases with an increase in q, whereas with an increase in b across the panel, the variation in the shadow size becomes larger. In the middle panel, a is again kept fixed and the value of q increases from left to right across the panel, while each curve corresponds to a different value of b. For a small value of q, there is no significant variation in the size of the shadow with respect to the increase in b. Some variation in the shadow size with increasing b is observed when q increases further in the middle plot. In the right plot, for the largest value of q, the variation in shadow size is more prominent, and therefore we can state that the shadow size is reduced with an increase in b. In the lower panel, we mainly study the role of the spin parameter in the variation of the shadow size; therefore, each curve corresponds to a different value of a.
These plots show that in the static case the shadows are purely circular, and as the value of a increases, the shadows shift towards the right and the flattening on one side increases. The maximum flattening is obtained for the extremal spin values plotted for each combination of the values of b and q.
§.§ Distortion
The distortion measures the deviation in shape of the rotating BH shadow. It is defined only for rotating BHs, because all static BH shadows are circular. The distortion therefore allows one to compare the shadows of rotating and static BHs, and to identify the cases with the least distorted shadows. The distortion is measured in terms of an observable known as the linear radius of the shadow <cit.>, defined as
R_sh=((α_t-α_r)^2+β_t^2)/(2|α_t-α_r|),
where R_sh is the radius of a hypothetical circle assumed to touch the shadow curve at the points (α_t,β_t), (α_b,β_b), and (α_r,0), lying at the top, bottom, and rightmost points of the shadow curve, respectively. The shadow curve is described by points in the (α,β) plane, with the subscripts t, b, and r denoting the top, bottom, and rightmost points, respectively. For further details, we refer to Fig. 9 in <cit.>. Eq. (<ref>) is valid only for rotating BHs, because static BHs have circular shadows that can be described by the coordinates of the curve on any coordinate axes. The distortion is then obtained from the relation
δ_s=|α̅_l-α_l|/R_sh,
where (α_l,0) and (α̅_l,0) are the points at which the shadow and the hypothetical circle, respectively, intersect the negative α-axis. The points on the shadow are labeled by the subscript l on the left side of the β-axis, whereas the bar denotes the points on the reference circle.
The variation of the distortion with respect to the BH parameters b, q, and a is plotted in Fig. <ref>, corresponding to the shadows in Fig. <ref>. The value of a has been fixed for both plots in the upper panel, whereas no parameter has been fixed for the plot in the lower panel. In the left plot of the upper panel, the distortion δ_s is plotted versus q, with different values of b corresponding to the curves. We find that the distortion increases with an increase in q, and that as b increases, the distortion grows more rapidly with q. The right plot shows the variation of the distortion with respect to b. For the smallest value of q, the distortion is almost constant with respect to b. When the value of q is increased, some variation in the distortion is observed for values of b up to 0.5, beyond which the distortion becomes constant. For the largest value of q, the variation in the distortion is more pronounced and increases with b, but at a slower rate. The lower plot shows the behavior of the distortion with respect to a, with the curves corresponding to different values of b and q. The distortion increases at an accelerating rate with a in all cases and, as expected, is maximal in the extremal cases.
§.§ Parameter Estimation
The shadow of a BH encodes the features of the background spacetime in its shape and size. Thus, it can serve as a useful tool to test new gravity theories and constrain BH parameters <cit.>. One can estimate BH parameters by using shadow observables. To do so, we need to define observable parameters that describe the size and shape of the BH shadow.
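As an illustration of how such observables are extracted, the shadow boundary can be traced numerically from ξ(r_p) and η(r_p) given above for an equatorial observer. The sketch below is ours: the parameter values and the radial scan range are arbitrary illustrative choices, not the settings used in our figures, and the finite-difference derivative of Δ(r) is a convenience rather than a necessity.

import numpy as np

M, a, q, b = 1.0, 0.5, 0.3, 0.1            # illustrative values (geometric units)

def Delta(r):
    return (r**2 - 2*M*r + a**2 + np.pi*r*q*np.sqrt(b*q)/2
            - r*q*np.sqrt(b*q)*np.arctan(np.sqrt(b)*r/np.sqrt(q)))

def dDelta(r, h=1e-6):                      # simple numerical derivative of Delta
    return (Delta(r + h) - Delta(r - h)) / (2*h)

def impact_parameters(rp):                  # xi(r_p), eta(r_p) from the expressions above
    D, dD = Delta(rp), dDelta(rp)
    xi = ((a**2 + rp**2)*dD - 4*D*rp) / (a*dD)
    eta = (16*rp**2*D*(a**2 - D)/(a**2*dD**2) - rp**4/a**2
           + 8*rp**3*D/(a**2*dD))
    return xi, eta

rp = np.linspace(2.0, 5.0, 3000)            # rough bracket around the photon region
xi, eta = impact_parameters(rp)
keep = (eta >= 0) & (Delta(rp) > 0)         # radii that map to the shadow boundary
alpha, beta = -xi[keep], np.sqrt(eta[keep]) # equatorial observer: alpha=-xi, beta=+-sqrt(eta)

# Horizontal and vertical extents of the boundary, feeding the observables below
d_alpha = alpha.max() - alpha.min()
d_beta = 2*beta.max()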
In addition to the method by Hioki and Maeda, a straightforward method for estimating the BH parameter can be employed, which relies on the coordinate independent formalism <cit.> and makes use of the shadow observables, shadow area, and oblateness. One can introduce the area of BH shadow A and oblateness D being associated with the deformation of the BH shadow in the forms <cit.>A = 2∫_r_-^r_+β(r)dα(r)/drdr, D = Δα/Δβ,where r_± are radii of stable circular orbits, which are obtained by solving the equation η_c=0.In Figs. <ref> and <ref>, we illustrate the dependencies of the shadow area A and the oblateness D on the parameters of rotating charged BH in NED with the help of numerical calculations. Here we studied two cases by fixing parameter b, for the cases of b = 0.1 M^-1 and b = 0.5 M^-1. From these plots, we can conclude that the increase of all BH parameters causes a decrease in the values of both the area and oblateness of the BH. By comparing Figs. <ref> and <ref>, one can notice that there is only a small change in observable parameters with the increase of parameter b. Moreover, the dependence of shadow observables on q is more dramatic in the larger values as q compared to lower values. It is useful to note that there is a considerable decrease in the values of oblateness with an increase in the spin parameter of BH.As we mentioned above, one can estimate two parameters of BH by shadow observables using Eqs. (<ref>)-(<ref>). We present the cross-section contour plots of the area and oblateness in a-q space for the fixed values of b = 0.1 and b = 0.5 with M=1 (Fig. <ref>). It can be seen from these figures that the coordinates of the intersections uniquely determine the two BH parameters a and q. One can determine the values of BH parameters from its shadow by applying this estimation method. These results can be a useful tool to get information about the BH based on its shadow parameters, which can be measured from observations.§ CONSTRAINING WITH EHT OBSERVATIONSObservational data of two supermassive BHs (Sgr A* and M87*) shadow images that are obtained by EHT collaboration can motivate us to study BH shadows with great scientific interest. By using this data, one may estimate the BH parameters in the framework of different modified or alternative gravity models to see if the results of the model really fit with observations. In our analysis, we take constraints for the parameters of the rotated charged BH in NED from the EHT results of M87* and Sgr A*.We used the angular diameters of these two BHs measured by the EHT collaboration to obtain constraints. The angular diameter of the shadow image for an observer at distance d from BH can be expressed as <cit.>θ_d =2 R_a/d,R_a = √(A/π)where R_a is the areal shadow radius. If we consider Eq. (<ref>), the angular diameter of the shadow depends on the parameters of the BH and the observation angle. Also, it implicitly depends on the mass of the BH. We now consider supermassive BHs M87* (and Sgr A*) as rotating charged BHs in NED to compare our theoretical results of shadow analysis with the shadow images of M87* (and Sgr A*) from EHT data.The mass of M87* and the distance from Earth can be considered as M=6.5×10^9M_⊙ and d=16.8Mpc, respectively <cit.>. For simplicity, we do not take into account the uncertainties of the mass and distance measurements of the SMBHs. The angular diameter of the image of the SMBH M87* is θ_d=42±3μ as <cit.> in the 1-σ confidential level. In Fig. 
<ref>, the constraints for M87* are presented, as are the density plots of the angular diameter θ_d in the a-q space at inclination angles 90^∘ (left panel) and 17^∘ (right panel) for the fixed value's parameter b. Here, the black curves correspond to the lower borders of the measured angular diameter of the SMBH M87* shadow. From these plots, one can study the dependence of the angular diameter of the BH shadow on the parameters of charged BH in NED in particular cases. The maximum limit of charge parameter q is more than 0.35 M in the constraint part for the first case, b=0.1 M^-1, while it is decreased to less than 0.35 M when b=0.5 M^-1. Furthermore, there is a noticeable decrease in the upper limit of spin parameter a (around 0.6 M) when we see the plots for inclination angle 17^∘ compared to the left panels. Similarly, we can get constraints for the shadow image of Sgr A* from EHT observations. The angular diameter of the shadow of the SMBH Sgr A* is θ_d=48.7±7μ as <cit.>. The mass of Sgr A* and the distance from the solar system be considered as M≃4×10^6M_⊙ and d≃ 8 kpc, respectively <cit.>. In Fig. <ref>, the constraints from Sgr A* are described for two cases we have considered. The fitted density plots of the angular diameter of BH lie inside the measured angular diameter of Sgr A*, θ_d = 48.7±7 μas. Therefore, we can claim that our theoretical results of rotating charged BH in NED can be compared with the supermassive BH at the center of the Milky Way Galaxy. Only in the second case did we manage to show the upper limit of the measured angular diameter of the Sgr A* shadow with black curves, while blue dashed curves correspond to the mean value of the measured angular diameter. What can be observed from the figure is that the blue dashed curves for the limits of the observable parameters shift slightly to the left, and there is a decrease in the upper limit of the spin parameter a (from 0.6 M to more than 0.55 M) when we increase the fixed value of the parameter b. Moreover, the change in the inclination angle influences the distribution of angular diameter throughout the a-q space. § ENERGY EMISSION RATEClassically, an object is lost forever if it enters a BH, whereas it is believed that a BH does emit quantum mechanically. Inside the BH horizon, the quantum fluctuations cause the creation and annihilation of particles. The particles with positive energy may escape out of the horizon due to a process known as tunneling. These escaping particles carry energy that helps the BH to evaporate. An absorption process is measured in terms of the probability of absorption cross-section. Far from the gravitational effect of BH, the absorption cross-section is related to the shadow of BH. In the high-energy regime, this absorption cross-section is obtained by a constant value denoted by σ_lim. The value of σ_lim is approximately equal to the area of BH shadow as <cit.>σ_lim≈π R_sh^2.The energy emission rate can be expressed asℰ_ω t:=d^2ℰ(ω)/dω dt =2π^2σ_limω^3/e^ω/T_H-1≈2π^3R_sh^2ω^3/e^ω/T_H-1,where ω denotes the angular frequency, T_H=κ/2π is the Hawking temperature andκ=Δ'(r)/2(a^2+r^2)|_r=r_his the surface gravity at the event horizon for rotating BH. Surface gravity for the static case becomesκ=1/2f'(r)|_r=r_h.The energy emission rate ℰ_ω t is plotted versus the frequency ω in Fig. <ref> for various values of the BH parameters corresponding to the shadows in Fig. <ref>. The top panel shows that the BH evaporation rate decreases by increasing the value of q. 
Furthermore, from left to right in the panel, as the value of b increases, the variation in the BH evaporation rate increases. In the middle panel, the variation in the BH evaporation rate is much lower for small values of q, and with an increase in the value of q, the variation in the BH evaporation rate becomes prominent. Therefore, we can see that the BH evaporation rate decreases with increasing b. In the lower panel, with respect to a, a slow rate of evaporation of BH is observed. Furthermore, for extreme values of a, the BH does not evaporate. It is because the Hawking temperature is zero Kelvin for extremal BHs, and therefore, they do not radiate.§ CONCLUSION We focused on the non-linear effect of electrodynamics and the electric charge together with BH spin on various BH properties. The rotating BH metric is obtained by incorporating modified NJA. The real and imaginary parts of quasinormal modes have been studied related to the radius of the photon sphere for the static BH. The horizon radius is investigated for the rotating BH in terms of spin a. To study the shadows, we incorporated the HJ formalism, and by using Bardeen's method for an observer located at infinity, we obtained the effective potential, shadows, and distortion. We also estimated parameters using the shadow area and oblateness. The shadows are compared with the EHT observations for SMBHs Sgr A* and M87*, and the constraints on BH parameters are obtained. Lastly, the rate of energy emissions is discussed. The results presented in previous sections are summarized below: * The angular velocity associated with quasinormal modes increases with an increase in q. For a small value of q, the angular velocity remains constant with respect to b. When the value of q is increased, the angular velocity increases rapidly and then gradually becomes constant with respect to b. A very similar behavior of the Lyapunov exponent has been observed.* The event horizon of the BH decreases with an increase in spin up to its extremal value. Moreover, with respect to b and q, the event horizon also decreases. However, the decrease with respect to b is prominent for larger values of q.* The effective potential determines the unstable circular null orbits. The unstable circular null orbits are found to shrink with increasing b, q, and a.* The shadows also follow the behavior of unstable circular null orbits. That is, the shadow size decreases with respect to increases in b and q. However, for a small value of q, the shadow size is almost constant with respect to b. The shadows shift towards the right as we increase a and the maximum flatness is seen for extremal a.* The flatness measure shows that the distortion increases with respect to q and a with an accelerated rate and increases with an increase in b with a decelerated rate. * The study of shadow observables in the NED for rotating charged BH parameters reveals that as the BH parameters increase, both the shadow area (A) and oblateness (D) values decrease.With increasing values of q, the dependence of shadow observables on q becomes more considerable. The research can serve as a useful tool for identifying BH parameters from their shadow parameters since it also illustrates in plots how the coordinates of crossings uniquely determine the two BH parameters a and q. * The constraints for M87* and Sgr A* are presented, and density plots are used to study the dependence of the angular diameter on charged BH parameters in NED for specific cases. 
Generally, the distribution of angular diameter over the a-q space depends on the change in inclination angle. For the first instance of M87* restrictions, the highest limit of the charge parameter q is greater than 0.35 M, but it drops to less than 0.35 M for the case of b=0.5 M^-1. The maximum limit of the spin parameter a in the graphs for Sgr A* decreases as the fixed value of the parameter b increases.* The BH evaporation rate decreases with increasing values of q, b, and a. It also shows that the extremal BH does not emit and thus has a zero-emission rate. Furthermore, the effect of the electric charge of the BH on the emission rate becomes stronger at higher values of b. Similarly, the increase of q enhances the effect of b on the rate. § ACKNOWLEDGEMENTS J.R. acknowledges the Grants F-FA-2021-510 of the Agency of Innovative Development of Uzbekistan. J.R. also thanks the SU in Opava for its hospitality. Z.S. acknowledges the Research Centre for Theoretical Physics and Astrophysics, Institute of Physics, SU in Opava, and the GAČR 23-07043S project. | http://arxiv.org/abs/2311.15784v2 | {
"authors": [
"Muhammad Ali Raza",
"Furkat Sarikulov",
"Javlon Rayimbaev",
"Muhammad Zubair",
"Bobomurat Ahmedov",
"Zdenek Stuchlik"
],
"categories": [
"gr-qc"
],
"primary_category": "gr-qc",
"published": "20231127125635",
"title": "Shadow of novel rotating black holes in GR coupled to nonlinear electrodynamics and constraints from EHT results"
} |
Temporal Transfer Learning for Traffic Optimization with Coarse-Grained Advisory AutonomyJung-Hoon Cho,Sirui Li, Jeongyun Kim,Cathy Wu Manuscript created October 2023. Jung-Hoon Cho is with the Department of Civil and Environmental Engineering and the Laboratory for Information & Decision Systems, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. (e-mail: [email protected]) Sirui Li is with the Institute for Data, Systems, and Society and the Laboratory for Information & Decision Systems, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. (e-mail: [email protected]) Jeongyun Kim is with the Laboratory for Information & Decision Systems, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. (e-mail: [email protected]) Cathy Wu is with the Laboratory for Information & Decision Systems; the Institute for Data, Systems, and Society; and the Department of Civil and Environmental Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. (e-mail: [email protected])January 14, 2024 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ The recent development of connected and automated vehicle (CAV) technologies has spurred investigations to optimize dense urban traffic. This paper considers advisory autonomy, in which real-time driving advisories are issued to drivers, thus blending the CAV and the human driver. Due to the complexity of traffic systems, recent studies of coordinating CAVs have resorted to leveraging deep reinforcement learning (RL). Advisory autonomy is formalized as zero-order holds, and we consider a range of hold duration from 0.1 to 40 seconds. However, despite the similarity of the higher frequency tasks on CAVs, a direct application of deep RL fails to be generalized to advisory autonomy tasks. We introduce Temporal Transfer Learning (TTL) algorithms to select source tasks, systematically leveraging the temporal structure to solve the full range of tasks. TTL selects the most suitable source tasks to maximize the performance of the range of tasks. We validate our algorithms on diverse mixed-traffic scenarios, demonstrating that TTL more reliably solves the tasks than baselines. This paper underscores the potential of coarse-grained advisory autonomy with TTL in traffic flow optimization.Intelligent Transportation Systems, Learning and Adaptive Systems, Deep Learning in Robotics and Automation, Transfer Learning. 
§ INTRODUCTION The recent advancements in connected and automated vehicle (CAV) technologies have opened up new frontiers in addressing the challenges of urban traffic congestion and associated environmental problems.The growing urgency to mitigate traffic-related issues, buoyed by advances in autonomous vehicles (AVs) and machine learning, is pushing the boundaries of urban roadway autonomy. As the transportation sector progressively moves towards a fully autonomous paradigm, the spotlight is firmly on devising innovative methods for traffic flow optimization, targeting key outcomes such as enhanced eco-driving, throughput maximization, and congestion reduction <cit.>.This paper highlights the significant role of advisory autonomy, where real-time driving advisories are communicated to human drivers, creating a harmonized blend of automated and human-driven vehicular traffic. The notion of coarse-grained advisory autonomy is formalized through the lens of coarse-grained zero-order holds. This article subsumes and extends Sridhar et al. <cit.>, which originally formulated the problem as piecewise-constant control for traffic optimization. With this coarse-grained advice, instead of instantaneous controls (<cit.>), vehicles are provided with guidance that persists for a particular duration, thereby addressing the intricacies of fluctuating hold duration. This is significant as human drivers, unlike AVs, may find it challenging to adhere to frequent and rapid control changes. Specifically, our objective is to develop an algorithm that, given a traffic scenario, can determine whether guidance that human drivers could conceivably follow would achieve outcomes comparable to those of AVs. We concentrate on human compatibility for traffic optimization and the ability of human drivers to match corresponding system-level metrics (such as the average speed of all vehicles and the throughput) rather than achieve accurate maneuvers where AVs possess clear advantages, such as being able to react to abrupt braking without hesitation.Integrating this approach with reinforcement learning (RL) presents an elegant way forward, given RL's structured framework for sequential decision-making.Deep RL has recently been employed to develop vehicle control policies. However, when directly applied to the advisory system, the intrinsic brittleness of deep RL algorithms necessitated a more refined approach to effectively solve a set of tasks.To confront these challenges head-on, we turned to transfer learning, a widely employed technique in numerous research fields that enables the utilization of knowledge acquired from one task to enhance performance in another related task <cit.>. Specifically, transfer learning can be applied to adapt the pre-trained policy to a new task or to initialize a learning algorithm with pre-existing knowledge, substantially expediting the learning process and boosting overall performance. Transfer learning has been successfully applied to improve the efficiency and training performance of traffic management systems <cit.>. We introduce two Temporal Transfer Learning (TTL) algorithms– Greedy Temporal Transfer Learning (GTTL) and Coarse-to-fine Temporal Transfer Learning (CTTL). These algorithms adeptly leverage the temporal similarities across tasks to judiciously select training tasks, thereby significantly facilitating the training efficiency and overall performance. 
The essence of TTL lies in its capability to seamlessly transfer knowledge acquired from one task to another, circumventing the often observed training brittleness in deep RL algorithms. This ability of TTL to draw insights from prior models offers a promising avenue to circumvent the fragility often observed in deep RL training. Then, to evaluate our algorithm's generalizability, we consider validation on various traffic scenarios in which mixed-autonomy traffic has been proven effective for traffic optimization <cit.>.The core contributions of this paper are twofold: * We delve into a coarse-grained advisory system, presenting a compelling case for its viability in enhancing system-level traffic outcomes. Our empirical evidence underscores the possibility of furnishing human drivers with guidance that mirrors AV behavior, leading to tangible traffic improvements. Such findings pave the way for considering human drivers as immediate, practical alternatives to full-fledged AV deployments.* Our research introduces Temporal Transfer Learning (TTL) algorithms, a robust methodology specifically designed to tackle the training brittleness intrinsic to deep RL algorithms. TTL can be a promising frontier in evolving generalizable training paradigms for complex traffic optimization tasks by adeptly identifying sources of variation and harnessing insights from pre-existing models. § RELATED WORK §.§ Reinforcement Learning (RL) for Mixed autonomy trafficAs we await the era of fully automated vehicles, we can anticipate a mixed autonomy system where automated and human-driven vehicles share the road.In such a system, controlling only a small proportion of the vehicles can significantly improve the overall traffic flow <cit.>.Several studies have explored the potential of RL in addressing the challenges posed by the coexistence of AVs and human-driven vehicles.Researchers have worked on enhancing traffic efficiency in mixed autonomy settings using deep RL-based approaches and showed that it could eliminate stop-and-go traffic and congestion mitigation <cit.>.These studies collectively highlight the potential of RL in optimizing mixed autonomy traffic, paving the way for enhanced safety, efficiency, and performance in transportation systems. §.§ Advisory AutonomyAdvisory systems in roadway autonomy span a broad range of applications, from enhancing safety to mitigating traffic congestion.These systems provide considerable benefits to users.For instance, collision warning alerts have been employed to ensure the driver’s safety <cit.>, and speed advisory systems at signalized intersections help users pass the green light efficiently <cit.>.At a system level, on the other hand, the advisory system provides system-level traffic optimization.For example, speed advisory systems contribute significantly towards eco-driving <cit.> and personalized advisory systems have been introduced to mitigate traffic congestion <cit.>.Furthermore, roadway signs suggesting advisory speeds represent another form of advisory autonomy.However, the application of advisory systems poses unique challenges given their interaction with human drivers. While fully automated vehicles can operate within clearly defined parameters and constraints, human drivers behave differently. For instance, as noted by Mok et al., humans require a minimum of 5-8 seconds to appropriately transition control <cit.>. 
This finding underscores the importance of accounting for the unique attributes and limitations of human drivers when developing control methods. For instance, Sridhar et al. identified two key characteristics of human-compatible driving policies: a simple action space and the capacity to maintain the same action for a few seconds <cit.>. An example of a human-compatible advisory system is a coarse-grained control system, which is provably stable in the Lyapunov sense for mitigating congestion on single-lane ring roads <cit.>. This system, modeled as an action-persistent Markov Decision Process (MDP), successfully addresses the human need for simplicity and persistent actions. In light of these considerations, it is crucial to integrate human driving characteristics into the design of control methods for human drivers.
§.§ Action repetition in RL
A number of studies have delved into the role of action repetition in Reinforcement Learning (RL). The concept of Semi-Markov Decision Processes (Semi-MDPs) is used to model temporally abstract actions within Markov Decision Processes (MDPs) <cit.>. It has been shown that action repetition can offer benefits in tackling complex control problems in RL. Various methodologies, such as reducing control granularity, implementing a skip policy, and applying temporal abstraction, have been employed to analyze action repetition <cit.>. Metelli et al. introduced action persistence and the Persistent Fitted Q-Iteration (PFQI) algorithm to modify control frequencies and learn optimal value functions <cit.>. Lee et al. addressed the problem of multiple control frequencies with a method that guarantees convergence to an optimal solution and outperforms baselines <cit.>. These lines of work provide a wealth of insights and methodologies that can be used to incorporate action repetition into RL. A prime example of action repetition in a driving context is coarse-grained control. This method involves executing a constant action for a predetermined time interval, known as the guidance hold duration. It can therefore serve as a minimal model for a coarse-grained advisory system.
§.§ Transfer learning
Transfer learning is a popular technique used in various research domains to leverage the knowledge gained from one task to improve performance in another related task <cit.>. In particular, transfer learning can be used to adapt a pre-trained policy to a new task or to initialize a learning algorithm with pre-existing knowledge, which can greatly accelerate the learning process and improve overall performance.
In contrast to multitask learning's simultaneous approach, transfer learning applies knowledge from source tasks to optimize a particular target task, underscoring an asymmetrical relationship between tasks <cit.>.Transfer learning offers the advantage of significantly decreasing the amount of data needed for learning compared to traditional independent learning methods <cit.>.Dynamic transfer learning maps for multi-robot systems can be obtained by the basic system properties from approximated physical models or experiments <cit.>.Kouw and Loog not only delved into domain adaptation's specific instances and various techniques but also highlighted the challenges of sequential domain adaptation <cit.>.Moreover, transfer learning also has its benefit with the reduced number of data required for the new tasks coming from the shared representation of related tasks <cit.>.In robotics, transfer learning has been utilized for a wide range of applications such as robot manipulation, locomotion, and control <cit.>.In the context of traffic settings, transfer learning has been applied to improve the efficiency and training performance of traffic management systems <cit.>.For example, Kreidieh et al. proposed a transfer learning framework that can help the warm start for training policies to dissipate shockwaves from closed traffic scenarios to more complex open ones <cit.>.Similarly, zero-shot policy transfer to adapt a pre-trained policy for autonomous driving in a structured environment to an unstructured environment results in improved performance and safety <cit.>. Also, the transferability of the learned policies may differ at different levels of tasks; for instance, policies derived from more structured and informative tasks are more robust to diverse tasks <cit.>.Yan et al. proposed a unified framework for traffic signal control using transfer learning to transfer knowledge across different intersections and adapt to varying traffic conditions <cit.>.Also, transfer learning is used for real-time crash prediction <cit.>, and traffic flow prediction in data-sparse regions <cit.>.RL-based methods require generating significant amounts of simulation data, which can be costly.However, transfer learning offers a solution to alleviate the burden of data generation and simulation for training each model.By employing an efficient training scheme, the model can quickly learn when, what, and where to transfer knowledge in scenarios with limited data availability <cit.>.A hierarchical approach to task granularity can be beneficial as it allows for the refinement of coarse attributes while learning more finer tasks.This method has been successfully employed by Wei et al. 
in their work on vehicle re-identification tasks <cit.> and in large-scale fault diagnosis tasks <cit.>. A coarse-to-fine framework can progressively improve task performance in complex problem-solving environments. Overall, transfer learning has shown promising results in improving the efficiency and safety of traffic management systems by leveraging the similar temporal structure of a series of tasks and prior knowledge from related tasks.
§ PRELIMINARIES
§.§ Markov Decision Process (MDP) and Reinforcement learning (RL)
A Markov Decision Process (MDP) is a mathematical framework that models sequential decision-making problems. An MDP is a 6-tuple ℳ=(𝒮,𝒜,P,R,H,γ), where 𝒮 is the state space, 𝒜 is the action space, P:𝒮×𝒜×𝒮→[0,1] is the transition probability distribution, R is the reward function, H is the total time horizon, and γ is a discount factor. The transition probability function P(s'|s,a) specifies the probability of transitioning to a state s' from a state s by taking action a. An agent's objective in an MDP is to find a policy π that maximizes the expected sum of rewards obtained over time, given the current state s and the actions it can take. Reinforcement learning (RL) is a subfield of machine learning in which an agent learns to make a sequence of decisions by interacting with an environment. Since the advisory autonomy system poses a sequential decision-making problem that can be formulated as an MDP, RL chooses its actions based on the optimal policy to maximize the expected cumulative reward.
§.§ Modeling control problem
We can write the conventional vehicle control problem as
s_i(t) =p_i-1(t)-p_i(t),
v_i(t) =ṗ_i(t),
a_i(t) =v̇_i(t)=F(s_i(t),ṡ_i(t),v_i(t)),
where p_i(t) is the position, v_i(t) is the velocity, and a_i(t) is the acceleration of the ith vehicle, and s_i(t) refers to the headway between the ith vehicle and the vehicle in front. If we consider acceleration control, the control a_i(t) is defined as F(s_i(t),ṡ_i(t),v_i(t)). We can find the corresponding optimal spacing and velocity s^*, v^* such that F(s^*,0,v^*)=0. For speed guidance control, the control function is defined differently.
§.§ Partial advisory system
We assume that all vehicles are human-driven vehicles. The advisory system provides guidance to a fraction, denoted by ρ, of these vehicles, while the remaining (1-ρ) fraction of vehicles are considered to be default-driven vehicles. These default-driven vehicles are assumed to be controlled by a car-following model, the Intelligent Driver Model (IDM) <cit.>.
§ COARSE-GRAINED CONTROL
§.§ Coarse-grained guidance in advisory autonomy
Advisory autonomy refers to an automated system that provides guidance to human drivers rather than exerting full control. In this context, it is designed to work in the presence of human-driven vehicles, ensuring that controlled vehicles operate in a manner that is safe, predictable, and intuitive for human drivers. Coarse-grained control refers to a vehicle control scheme that issues control inputs only periodically; it involves applying the same action to a vehicle for a fixed time segment. As we discussed in <Ref>, coarse-grained control can be interpreted as an action-persistent MDP with different control granularities.
§.§ Action persistent MDPs
Guided vehicles are human-driven vehicles with periodic assistance from a trained policy for coarse-grained control, π_HC(s_t_m).
The policy is applied at intervals t_m=δ m, where m ∈ℕ_0 (ℕ_0 being the set of non-negative integers) and δ denotes the guidance hold duration. These vehicles receive guidance for any time t that falls within the range [t_m, t_m+1]. This action-persistent MDP can be represented by the 7-tuple ℳ_δ=(𝒮,𝒜,P,R,H,γ,δ). Coarse-grained control (also called piecewise-constant or zero-order-hold control) refers to applying the same action over a time segment of length δ <cit.>. Here δ denotes the guidance hold duration, and H is the horizon (H≫δ). In other words, the same action u(z(t_m)) is applied over the time segment t∈[t_m,t_m+1]. In the single-lane ring, simulation experiments reported that the hold duration could be extended to 24 seconds without degradation of the system performance <cit.>. Li et al. derived sufficient conditions for piecewise-constant controls with a given guidance hold duration to stabilize the system using Lyapunov stability analysis <cit.>. This piecewise-constant control is supported by simulator experiments that evaluate the effect of the coarse-grained advisory system <cit.>. Hasan et al. also introduce a cooperative advisory system that leverages a novel driver-trait-conditioned Personalized Residual Policy (PeRP) to guide drivers in ways that reduce traffic congestion <cit.>.
§.§ Guidance type
Acceleration and speed are two examples of guidance types for controlling human drivers in advisory autonomy systems. The detailed formulation is as follows.
Acceleration guidance. When controlling automated vehicles, acceleration guidance suggests the best action based on the policy from the continuous action set:
v̇_1(t)=u(z(t_k)) for all t∈[t_k,t_k+1=t_k+δ].
For this acceleration guidance in the single-lane ring, Lyapunov analysis gives sufficient conditions for the stability of the coarse-grained advisory system <cit.>.
Speed guidance. Speed guidance provides the vehicle with a discretized target speed and requires it to attain the target speed as soon as possible while maintaining stability and adhering to preset boundaries. This approach is rooted in the challenges human drivers face in interpreting acceleration guidance <cit.>. Moreover, acceleration guidance in coarse-grained control often struggles to achieve and sustain the optimal velocity, as discussed in <cit.>. Researchers have studied speed advisory systems <cit.>. For example, Liang et al. guided drivers with speed advisories based on signal phase and timing in a CAV environment <cit.>. Wang et al. reported that a human-machine interface displaying the difference between the current and suggested speeds in a cooperative driving simulator improved performance, while displaying the time difference harmed speed adaptation <cit.>. Speed guidance provides controlled human drivers with discretized target speeds and requires them to reach these speeds as soon as possible, given the prevailing conditions. However, one drawback is that human drivers tend to perceive the target speed as an easily broken speed limit and readily exceed it <cit.>. The speed guidance system provides the guided vehicle with the target speed u(z(t_k)), where z(t)=[s_1(t),v_1(t),...,s_n(t),v_n(t)]^⊤. In the control problem, speed guidance takes the form
v̇_1(t) =F(s_1(t),ṡ_1(t),v_1(t))=α(u(z(t_k))-v_1(t))+βṡ_1(t) for all t∈[t_k,t_k+1=t_k+δ].
<Ref> intuitively depicts the two distinct forms of advisory provided to drivers: acceleration and speed guidance.
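To make the zero-order-hold structure concrete, the following minimal Python sketch simulates a single guided vehicle under speed guidance, with one IDM-controlled default-driven vehicle behind it. All names and values here are illustrative placeholders of ours: the gains α and β, the IDM constants, the leader speed profile, and the advisory rule (which in our experiments is a trained RL policy rather than the simple stand-in below).

import numpy as np

dt, delta, horizon = 0.1, 5.0, 120.0      # sim step, hold duration, horizon [s]
alpha, beta = 0.6, 0.3                    # speed-tracking and headway gains (illustrative)
v0, T, a_max, b_comf, s0 = 30.0, 1.0, 1.0, 1.5, 2.0   # illustrative IDM constants

def idm_accel(s, v, v_lead):
    """IDM acceleration for a default-driven follower."""
    s_star = s0 + max(0.0, v*T + v*(v - v_lead)/(2*np.sqrt(a_max*b_comf)))
    return a_max*(1.0 - (v/v0)**4 - (s_star/max(s, 0.1))**2)

def leader_speed(t):
    """Assumed speed profile of the vehicle ahead of the guided one."""
    return 10.0 + 2.0*np.sin(0.1*t)

# State: positions and speeds of [leader, guided vehicle, IDM follower]
p = np.array([60.0, 30.0, 0.0])
v = np.array([10.0, 8.0, 8.0])
u = v[1]                                  # advised speed, held for delta seconds
hold_steps = int(delta/dt)

for k in range(int(horizon/dt)):
    t = k*dt
    if k % hold_steps == 0:
        # New advisory every delta seconds; a placeholder rule standing in for the RL policy.
        u = leader_speed(t)
    s_follow = p[1] - p[2]                # headway of the IDM follower
    a_guided = alpha*(u - v[1]) + beta*(v[0] - v[1])   # speed-guidance dynamics above
    a_follow = idm_accel(s_follow, v[2], v[1])
    v += np.array([0.0, a_guided, a_follow])*dt
    v = np.maximum(v, 0.0)
    v[0] = leader_speed(t + dt)           # leader follows its prescribed profile
    p += v*dt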
From a stabilization standpoint, speed guidance has certain advantages, as it allows the vehicle to sustain the same speed throughout the hold duration. Under acceleration guidance, by contrast, the vehicle's speed is subject to change unless the acceleration is precisely zero. This inherent difference between the two forms of guidance leads to distinct behaviors and responses in the traffic system, as demonstrated in our results.
§ TEMPORAL TRANSFER LEARNING (TTL)
In advisory autonomy, we guide human drivers by providing a predetermined period, known as the hold duration, indicating how long they should maintain their guided actions. We consider solving families of MDP tasks whose only difference is the guidance hold duration, since the granularity of control that humans can follow varies. That is, apart from this, all other traffic dynamics, such as the number of agents and the road networks, remain completely identical. Even in this setting, we find that RL will train successfully in some scenarios and unsuccessfully in others, with no clear pattern among the tasks. Similar findings have been documented in <cit.>. The algorithm we introduce in this section is inspired by the intuition that an optimal strategy for hold duration δ should not be so different from that for hold duration δ' ≈δ. To see that this is especially true for small δ, consider that an optimal strategy under hold duration δ can be exactly recovered under the finer-grained control with a hold duration of δ/2. We therefore exploit task similarity along the axis of hold duration to derive provably (sub)optimal strategies for transfer learning. We wish to explore multiple tasks to understand the intricacies of the coarse-grained advisory system and its efficacy in optimizing traffic flow, particularly in mitigating congestion. Addressing multiple tasks gives us a holistic understanding of the system's behavior under different scenarios, thus informing more robust optimization strategies. While solving multiple tasks simultaneously with a separate model per task could be computationally intensive and resource-demanding, leveraging a pre-trained model and transfer learning for our specific tasks can drastically reduce the computational burden. This strategy not only saves valuable time but also harnesses the knowledge from the pre-trained model to achieve superior performance. Thus, it is crucial to design a systematic approach for transfer learning to make the process efficient and effective. Focusing on the hold duration, we observe three main properties regarding the relationship between the tasks: estimated performance, upper-bound performance, and generalization gap.
§.§ Problem definitions
In coarse-grained advisory settings, we denote the estimated performance of the task with a hold duration of δ as J(δ). Let 𝒟 denote the range of guidance hold durations, from a minimum value (δ_min) to a maximum value (δ_max). The initial condition for J(δ) sets the estimated performance across the range of tasks to zero. Mathematically, this is expressed as:
J(δ)=0 ∀δ∈𝒟 = [δ_min, δ_max].
The aggregate performance across the range of tasks, A(𝒟), is the continuous integral of the estimated performance function over the range of guidance hold durations:
A(𝒟)=∫_δ_min^δ_max J(δ) dδ.
This amounts to calculating the area under the curve described by the performance function J(δ). A(𝒟) provides an aggregate measure of system performance across different hold durations, thereby giving us a more comprehensive view of how the transfer policy performs across various scenarios.
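In practice, the range 𝒟 is handled on a grid, and the integral becomes a finite sum (as formalized next). A minimal sketch of this bookkeeping, with a grid resolution and variable names of our own choosing:

import numpy as np

delta_min, delta_max = 0.1, 40.0                 # range of hold durations [s]
grid = np.linspace(delta_min, delta_max, 400)    # discretized version of the range
J = np.zeros_like(grid)                          # estimated performance, initialized to zero

def aggregate_performance(J, grid):
    """Area under the estimated-performance curve, i.e., A(D) over the range."""
    return np.trapz(J, grid)

print(aggregate_performance(J, grid))            # 0.0 before any source task is trained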
In the empirical sections of this paper, we use a discrete summation instead of the continuous integral for simplicity. We denote the discrete sum of the estimated performance over the range of δ as Â(𝒟) = ∑_δ∈𝒟 J(δ). We employ k to represent the kth source-task training, while K denotes the transfer budget, the maximum number of such iterations. For the initial selection of the source task, we train the policy for the MDP task associated with δ^1. Any subsequent selection iteration, indexed by k, involves training the policy for the task associated with δ^k. We also define J_k(δ) as the estimated performance of a task with hold duration δ after k iterations. When training the model for an MDP with a hold duration of δ^k, we obtain the policy π_k, and its performance at the task with δ^k is J^π_k(δ^k). At every iteration of selecting the source task, it is possible that the policy, originally trained at the source task, may not yield optimal results for the target task to which it is transferred. When a policy, initially trained for a source task δ_S, is zero-shot transferred to a distinct but related target task δ_T, the estimated performance may decrease since they are similar but distinct tasks. The generalization gap represents this performance degradation from out-of-distribution evaluation after the zero-shot transfer of a policy from the source task to the target task. Denote the performance of the policy at the source task δ_S as J(δ_S) and the generalization gap when zero-shot transferred to the target task δ_T as J_S → T. At each iteration step k, the estimated performance is updated with the best performance achieved among the set of trained policies. Overall, the estimated performance of the policy π_k at the current iteration is updated as follows:
J_k(δ) = J^π_k(δ^k), if δ = δ^k,
J_k(δ) = max(J_k-1(δ), J^π_k(δ^k)-J_δ^k →δ), if δ≠δ^k.
We consider the optimization problem of choosing the strategy for selecting the hold durations that maximize the estimated performance of the trained policy. We call this the source tasks selection problem: the optimization problem of choosing the next training task that maximizes the estimated performance. At each selection step k, we select the task with a hold duration of δ^k that maximizes the sum of the estimated performance over the range of δ:
max_δ^1, ⋯, δ^K A_k(𝒟) s.t. δ^1, ⋯, δ^K ∈ S_k.
Solving the source tasks selection problem greedily at every iteration amounts to selecting the task that maximizes the estimated area. The trajectory of transfer tasks from δ^1 to δ^K reflects the tasks trained along the training iterations.
§.§ Modeling assumptions
We denote the upper-bound performance for a task with hold duration δ as J^*(δ). <Ref> states that the upper-bound performance is a constant function of the guidance hold duration δ.
[Constant upper-bound performance] The upper-bound performance J^*(δ) remains constant over the whole range 𝒟:
J^*(δ)=J^* ∀δ∈𝒟.
<Ref> is justified by our empirical analysis of the coarse-grained advisory autonomy settings. Various coarse-grained guidance tasks may share the same upper-bound performance, provided each task attains that performance under unlimited training resources. <Ref> aligns well with our observations in a single-lane ring environment, as demonstrated in <Ref>. However, in the more complex highway ramp environment, we observe a decline in the upper-bound performance as the hold duration increases, as illustrated in <Ref>. <Ref> assumes that, by training on the selected task, we can achieve the upper-bound performance J^*.
[Constant and deterministic upper-bound performance] When we train on the task with the hold duration δ^k, we consistently achieve the upper-bound performance J^*(δ^k):
J^π_k(δ^k)=J^*(δ^k) ∀δ^k ∈ S_k.
Moving on to our next assumption, we model the transfer of a pre-trained policy from the source task to the target task as a linear function. <Ref> postulates that this transfer function exhibits linear behavior. Notably, the slope of the transfer function could differ depending on whether we are transferring to a coarser or a finer problem.
[Linear generalization gap] If we transfer from source hold duration δ_S to target hold duration δ_T, the generalization gap J_S→ T is linearly proportional to the absolute difference between δ_S and δ_T:
J_S→ T = θ_L (δ_S-δ_T), if δ_S > δ_T,
J_S→ T = θ_R (δ_T-δ_S), otherwise,
where θ_L signifies the slope of the transfer performance when transitioning from a coarser to a finer task, i.e., δ_S > δ_T. Conversely, θ_R represents the slope when shifting from a finer to a coarser task, i.e., δ_S < δ_T.
<Ref> posits that the generalization gap behaves identically whether moving from a coarse to a fine task or vice versa. The slopes θ_L and θ_R are presumed equal, asserting that task granularity does not affect the degradation rate during policy transfer. We may alternatively represent the slope as θ throughout this paper for simplicity.
[Identical slope of generalization gap function] The slope of the generalization gap function is identical when transferring from a coarse to a fine task (θ_L) and from a fine to a coarse task (θ_R):
θ_L=θ_R(=θ).
<Ref> simplifies the upper-bound performance for analytical tractability and interpretability of the analysis. If J^* is larger than θ(δ_max-δ_min), transfer from any single point can already cover the additional area. Thus, without loss of generality, we can assume J^*=θ(δ_max-δ_min) in our geometric analysis. If J^* is smaller than θ(δ_max-δ_min), indicating a relatively constrained effective transfer range, the advantage of using transfer learning might be limited. This is because TTL tends to benefit most when the generalization gap is pronounced.
[Bounded slope of generalization gap function] The upper-bound performance J^* is assumed to be greater than or equal to θ(δ_max-δ_min). That is, the slope of the generalization gap function is at most J^*/(δ_max-δ_min):
θ≤J^*/(δ_max-δ_min).
§.§ An optimal strategy for source tasks selection problem
With the assumptions made in the previous section, we can devise a systematic algorithm that solves the source tasks selection problem and chooses the subsequent training source task based on simple geometry.
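Under these assumptions, the estimated-performance update of the previous subsection becomes very simple: training at δ^k lifts J to J^* there, and the estimate decays linearly with slope θ on either side. The following short, self-contained sketch implements exactly this update on a grid; the values of J^* and θ are illustrative, with θ set to its boundary value.

import numpy as np

delta_min, delta_max = 0.1, 40.0
grid = np.linspace(delta_min, delta_max, 400)
J = np.zeros_like(grid)

J_star = 1.0                               # constant upper-bound performance (assumed)
theta = J_star/(delta_max - delta_min)     # generalization-gap slope at its boundary value

def train_and_update(J, delta_k):
    """Apply the update J_k(d) = max(J_{k-1}(d), J* - theta*|delta_k - d|)."""
    transferred = J_star - theta*np.abs(grid - delta_k)
    return np.maximum(J, transferred)

J = train_and_update(J, delta_k=20.0)      # e.g., a first source task near the middle
area = np.trapz(J, grid)                   # aggregate performance after one source task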
We consider analysis that simplifies the gained performance after each iteration to obtain intuition and provide a theoretical grounding for the TTL process.For our analysis, we divide the entire region into small segments, each represented by piecewise linear functions. After (k-1) iterations of selecting source task, there will be k distinct segments with inflection points at δ^1,...,δ^k-1 as illustrated in <Ref>. This segmentation helps the following analyses, identifying segments with the largest increase in aggregate performance. Our objective is to find the next δ^k that maximizes the A_k. To achieve this, we systematically evaluate each of these small piecewise linear segments.For each segment, we calculate the potential marginal increase in A_k.The specific marginal increase for each segment is influenced by the underlying shape of the performance function J.<Ref> illustrates different decision rules about where to transfer and how much area to be covered based on the J(δ) shape.Decision rules for TTL are based on the assumptions we made to simplify the analysis.<Ref> assumes the upper bound performance J^* to be the flat shape as a blue dotted line, <Ref> and <Ref> makes the transfer performance functions in both directions have linear functions with an identical slope.We present <Ref> grounded on the geometric characteristics of the function J(δ) and the estimated marginal increase in A_k.This theorem embodies a greedy strategy for the source tasks selection problem.It aims to assess the estimated marginal increase in performance across each piecewise linear segment, choosing the optimal subsequent transfer task of δ^k within the designated segment.The term “greedy" is chosen as it focuses only a single step look-ahead at any given instance.The greedy optimal strategy for optimal transfer to maximize the estimated performance of the policy is to choose the piecewise linear segment estimated to increase by the largest area. Given a restricted piecewise linear segment of J_k(δ) with the longest length of δ segment such that δ∈ [δ_L, δ_R], the greedy policy for choosing the optimal transfer target, δ^k, to maximize the estimated aggregate performance, as:δ^k=δ_L+δ_R/2 fork=1orJ_ksymmetric 2δ_L+δ_R/3 for k≠1and dJ_k/dδ>0 δ_L+2δ_R/3 for k≠1and dJ_k/dδ<0 With this selection process, we can calculate the estimated marginal increase of the aggregate performance along the range of the given segments depending on the shape of the J.Δ A_k=3/4θ(δ_R-δ_L)^2 for k=1 1/8θ (δ_R-δ_L)^2 for k≠1and J_ksymmetric 1/3θ (δ_R-δ_L)^2 otherwiseIn Appendix <ref>, we provide detailed proofs for <Ref>, based on the underlying geometric shape of both the estimated performance and its aggregate.<Ref> provides a visual representation of these varying marginal increases for A_k across various segments based on the shape of the performance function J_k. If J_k(δ) is symmetric, the optimal strategy is to opt for the central point of the segment. Otherwise, the optimal choice for δ' should be one of the trisections within the piecewise linear segment, depending on the slope of J. We can devise a greedy, efficient, and robust algorithm for temporal transfer learning by leveraging the aforementioned assumptions and the estimated performance function.We present Greedy Temporal Transfer Learning (GTTL) algorithm (<Ref>), an iterative system that determines which task of hold duration to transfer at each iteration. 
Given the estimated performance function, we have the capability to choose the most advantageous initial training point.With the <Ref>, intuitively, the point that maximizes the estimated covered area along the ranges is the median within the entire range of hold duration. For the subsequent decisions, we choose the best option based on the <Ref>. The transfer process continues until the area is sufficiently covered or once the number of source tasks exceed the predefined transfer budget, in case we have it. <Ref> describes the difference between TTL algorithms in the coherent transfer procedure between temporally linked tasks. <Ref> starts by initializing with the minimum and maximum hold duration, setting the performance of all hold duration to 0, initializing J and S to 0, and having an empty set for the policies. It continues as long as the covered area is below a threshold or the number of source tasks is less than the budget. For simplicity in notation, we propose substituting the whole area of (δ_max-δ_min)J^* with A^*. Inside the loop, the algorithm chooses a new training task with a hold duration of δ^k+1 and appends it to its set. It then trains a policy for this hold duration and adds it to the set of policies.After updating the performance with this new policy, the algorithm then calculates the area under this performance curve. Once the loop finishes, the algorithm returns the best performance for each task (J_k) and a set of selected training tasks (S_k). In <Ref>, <Ref> assists in identifying the greedy training source task, drawing insights from the shape of the estimated performance function J.This decision-making rule is grounded in <Ref>. GTTL algorithm (<Ref>) exhibits several noteworthy characteristics underpinning its functionality and efficiency.This algorithm is formulated as an anytime algorithm, meaning it can provide a valid solution even if stopped in the middle of the iterations.Beyond mere validity, GTTL algorithm offers performance assurances. At any given step k, GTTL not only provides a valid solution but also ensures a performance that's oriented towards optimization.For example, CTTL might struggle with finer tasks in the initial selection of the source task.This is because the trained policy is inherently skewed to excel in coarser tasks. This means that while other methods like CTTL can also deliver valid results at step k, GTTL is specifically designed to offer a performance closer to optimal at every individual step. This property ensures flexibility and usability under varying operational constraints, allowing continuous solution improvement with each additional source task. This intelligent selection process ensures efficient knowledge transfer and promotes effective learning across different stages of the algorithm's execution.§.§ A theoretical analysis for optimal temporal transfer learning The next natural question is how good are these incremental transfer learning strategies over multiple iterations of selecting source tasks.The optimality criterion can be defined as the best performance achieved within K steps. Until now, we have focused on the performance we could achieve with the number of steps.Drawing an analogy to K-means clustering, the predetermined number of source tasks of K in CTTL resembles the predefined number of clusters in K-means, which can often be a limitation if not chosen wisely, as it directly influences the granularity of the solutions and potentially the computational budget. 
Thus, in practice, where training computation budgets are constrained, we care about the dual of this. Moreover, the dual optimality criterion could be how many source tasks are needed to achieve a certain performance threshold.We can analyze the number of steps K^*(ε) that can achieve some given suboptimality ϵ.This measure provides a quantifiable way to evaluate the algorithm's efficiency and effectiveness in traversing the solution space, adding another layer of flexibility and adaptability to its application. For each successive iteration of selecting source task, denoted as k, the trained model's cumulative area under the estimated performance function at the kth iteration is represented as A_k. K^*(ε) is an optimal K to be the discrete sum of estimated performance to have suboptimality of ε. This relationship can be formally defined through the following equations:A_K^*(ε)()≥ (1-ε)A^* As we progress further, the cumulative gain A_k inches closer to the maximum possible performance J^*, indicating that the performance coverage improves with the additional policy obtained by training the source task. However, the speed of improvement depends on the specifics of the generalization gap and the range of guidance hold duration.<Ref> essentially conveys about the optimal K (K^*) that is estimated to cover the area of (1-ε)A^*, having remaining area represented by the ratio of ε. Hence, the definition offers insight into how many steps are required to meet the prespecified level of performance as we iterate.These equations also provide a quantitative understanding of how performance coverage evolves with each iteration and how close it gets to the maximum possible performance. At every iteration, it is crucial to consider the potential coverage area of each monotonic segment.However, it is hard to write a clean, closed-form solution for the optimal policy because the segments on the sides have different shapes from the others when choosing the largest section. To have a clean, closed-form analysis, the lower bound performance of GTTL is given by creating ghost cells at the end of the whole segment as depicted in <Ref>. This lower-bound case chooses δ_max and δ_min for the second and third selection of source tasks, respectively, to create the symmetric V-shaped estimated performance for all sub-segments. This idea enables us to simplify the evaluation process while maintaining reasonable accuracy. The lower bound of the cumulative area of GTTL up to iteration k is represented by A_k(). At each step, A_k() is always greater or equal to A_k(). A_k()≥A_k() ∀ k=1,...,K It is important to note that A_k() will always be less than or equal to A_k(), which represent the cumulative areas under the estimated performance function. This discrepancy arises because the GTTL consistently chooses the optimal task to maximize the area at each step, while A_k() opts for sub-optimal choices of the second and third source tasks. <Ref> highlights that if there exists an integer n such that A_n() is greater than or equal to (1-ε)A^*, then it logically follows that the cumulative area under the entire value function A_n() is also greater than or equal to (1-ε)A^*.This is grounded by its definition that A_n is never less thanA_n. 
Given an integer n for which A_n≥ (1-ε)A^*, it is demonstrable that A_n≥ (1-ε)A^* because A_n≥A_n.Also, we require at least 4ε+1/4ε steps to cover at least (1-ε)A^*.We prove <Ref> by leveraging the lower bound cumulative area of GTTL as outlined in <Ref>, which has a streamlined expression of A_n. The comprehensive proof is provided in Appendix <ref>. §.§ Bounded SuboptimalityIf we have privileged access to the transfer budget, we can devise a better algorithm than GTTL, since GTTL can be viewed as decision-making based on a 1-step greedy policy. This motivates us to introduce the Coarse-to-fine Temporal Transfer Learning (CTTL), a structured and sequential way of transferring. The algorithm systematically selects steps that span the task range uniformly for a given set number of source tasks.Initiation occurs with the coarser tasks, progressively narrowing down to the finer ones.For example, given the budget of source tasks of 7 and a hold duration range from 1 to 40, the training begins with a hold duration of 37.14 and subsequently transitions to 31.43, 25.71, 20, 14.29, 8.57, and then 2.86, mirroring a diminishing granularity (<Ref>).The inherent advantage of starting with coarser tasks is their limited effective horizon, simplifying their resolution.Consequently, they often pose fewer challenges compared to their finer counterparts.Moreover, existing literature suggests that a coarse-to-fine temporal transfer learning approach can be advantageous, particularly when adapting a pre-trained policy from a coarser to a finer task <cit.>.<Ref> states the optimality of the CTTL algorithm, which selects its subsequent transfer task contingent on the allocated transfer budget K.Algorithm <ref> establishes a structured, sequential framework for transitioning from coarser to finer tasks across specified iterations. Given a range of hold durations and a budget K, it begins with a coarser task and gradually advances to finer tasks, ensuring that each iteration spans the task range uniformly.Coarse-to-fine Temporal Transfer Learning (CTTL) algorithm optimizes maximizing the cumulative area under the performance function given the transfer budget of K under assumptions <ref>, <ref>, <ref>, and <ref>. Initiated from the coarsest task with the hold duration of (δ_max-δ_max-δ_min/2K), CTTL progressively transitions to finer tasks, each marked by an equidistant progression of (δ_max-δ_min/K). The optimal estimated performance of CTTL after K iterations is written as follows:A^CTTL_K=(1-1/4K)θ(δ_max-δ_min)^2. The optimality of CTTL can be proved by the equality condition of the Cauchy-Schartz inequality. The authors may recommend readers to go through the detailed proof in Appendix <ref>. Notably, when one possesses an intimate knowledge of the number of source tasks K^*(ϵ), CTTL tends to outpace the GTTL algorithm.However, obtaining such precise information on the transfer budget remains a challenge in practice.Therefore, it is crucial to analyze the degree to which GTTL might be suboptimal compared to the oracle-like CTTL.In essence, we seek to quantify the suboptimality gap between the two. 
<Ref> contends the suboptimality and its bound of GTTL compared to CTTL, given that CTTL achieves the optimal strategy with the transfer budget of K.GTTL operates at a suboptimal level relative to CTTL, bounded by: 1/4K(K-1)θ(δ_max-δ_min)^2 forK=2^i+1 1/2(K-1)^2θ(δ_max-δ_min)^2 otherwisewhere i∈_0.The proof for <Ref> is detailed in Appendix <ref>, where the suboptimality bounds of GTTL relative to CTTL are expounded, specifically for two distinct cases dictated by the value of K.The proof is grounded in the algebraic elucidation of this suboptimality measure.§ SIMULATION EXPERIMENTSThis section elucidates the simulation experiments conducted to address our primary research questions. The main purpose of our investigation is to explore the potential of human-compatible control serving as an immediate surrogate for AVs and to verify the degree to which such control can optimize traffic performance at a system level. We conducted many experimental trials in various environments to obtain valuable answers to these essential questions. §.§ Modular Road Networks In mixed-autonomy roadway settings, we delve into various traffic scenarios as explored in prior works <cit.>, including single-lane ring, highway ramp, and signalized intersection networks, as depicted in Figure <ref>.Each scenario has distinct objectives; for instance, the single-lane ring aims to elevate all vehicles' average velocity, while the highway ramp scenario focuses on augmenting the outflow given a constant inflow. On the other hand, the signalized intersection scenario employs a multitask reinforcement learning (RL) strategy, simulating varied penetration rates to accommodate different levels of human-guided vehicle presence, with a focal evaluation on a penetration rate of 0.1 to assess the RL policy's performance.The respective reward functions are tailored to the objectives of each scenario, primarily gauging the average speed of all vehicles alongside other factors such as stopping time, abrupt acceleration, and fuel consumption in some scenarios. A thorough examination of these scenarios and reward formulations is provided in Appendix <ref>. §.§ Experimental SetupWe utilize the microscopic traffic simulation called Simulation of Urban MObility (SUMO) <cit.> v.1.16.0 and its connecting Python API TraCI. The experiments used the MIT Supercloud with 48 CPU cores <cit.>. We used the Trust Region Policy Optimization (TRPO) <cit.> for the numerical experiments.We tested two different types of guidance: acceleration and speed. We used the discretized action space for speed guidance, while acceleration guidance uses the continuous action space. We evaluated the system's performance over a range of guidance hold duration δ∈ [0.1, 40].In our simulation experiments, we simplify the analysis by setting δ_min=0, and we round the calculated δ to the nearest multiple of 10. The detailed experimental setup is explained in <Ref> and Appendix <ref>. §.§ BaselinesWe compare our TTL approaches with several baselines.These baselines represent various strategies for learning and transfer in the context of coarse-grained advisory autonomy tasks.§.§.§ 100% UnguidedIn this baseline, all vehicles are unguided, following the Intelligent Driver Model (IDM) car-following models. This scenario represents a completely decentralized system without any reinforcement learning. §.§.§ Oracle TransferLeveraging the performance results for every zero-shot transfer combination, we select the most well-trained model and apply it across all ranges of tasks. 
§.§.§ Exhaustive RLThis strategy represents the classic approach to machine learning, where a separate model is trained for each task. The performance is evaluated by calculating the average performance across all tasks. (<Ref>) §.§.§ Multitask RLA multitask reinforcement learning approach trains the model with different configurations of guidance hold duration, varying from 10 to 400.This approach seeks to understand how incorporating multiple tasks into the learning process can impact performance. §.§.§ Random Temporal Transfer Learning (RTTL)Random Temporal Transfer Learning (RTTL) chooses tasks for transfer from a pool of temporal tasks at random. This scenario represents a non-deterministic transfer learning strategy and serves as a stochastic comparison point for our deterministic GTTL approach.From all the policies from the source task at each iteration, we select the top-performing one for tasks with varying hold duration. (<Ref>)§.§ Temporal transfer learning (TTL) results<Ref> delineates the simulation outcomes across three distinct traffic scenarios.The TTL algorithms exhibit exemplary performance in both acceleration and speed guidance categories, markedly outperforming the baselines across diverse traffic conditions.Remarkably, with just a few number of source tasks, TTL algorithms approach the near-term performance of the oracle transfer.Some scenarios require a small number of source tasks to achieve the near-term performance of Oracle transfer, while others demand more extensive iterations.Specifically, in the signalized intersection scenario, all Transfer Learning methods yield the top performance when paired with speed guidance.This underscores the potency of TTL in optimizing traffic management tasks, especially when harmonized with speed guidance. <Ref> shows the system-level performance of each task after the temporal transfer learning methods are applied compared to the exhaustive RL.The results presented in <Ref> offer an insightful comparison of several training methodologies in the context of coarse-grained advisory autonomy tasks in different traffic scenarios. The metrics used here indicate the average speed, with higher values reflecting better performance.<Ref> shows the performance of the task with a range of different hold durations at each iteration in traffic scenarios of the single-lane ring and highway ramp. The figure shows the difference between the assumptions made in <ref> and the variance of the performance increase after transfer in different scenarios.Single-lane Ring. <Ref> compares the system performance of acceleration and speed guidance in the single-lane ring road network when trained from scratch.When analyzing the results, both guidance types demonstrate excellent overall performance as the guidance hold duration increases, with an average speed increase of approximately 22.22% for all vehicles in the system.However, it is worth noting that the acceleration guidance results were slightly lower than speed guidance. First, in a single-lane ring environment (<Ref>), the average speed of GTTL starts higher than both RTTL and CTTL in the first iteration of selecting the source task.Despite a slight decrease in the early stages, the speed improves consistently over the iterations and stays competitive against the other methods.The performance of GTTL shows that it learns quickly in the initial stages and then continues to optimize its performance in subsequent steps, indicating an effective transfer of knowledge. 
It's also worth noting that while no strategy surpasses Oracle Transfer's average speed of 4.10 m/s, GTTL gets relatively close, reaching final average speeds of approximately 4.04 m/s.While the trends for RTTL are upward as the number of source tasks gets larger, it does not exceed the performance demonstrated by GTTL.Multitask RL, although slightly surpassing the baseline, falls short when compared to our GTTL method.Furthermore, a clear distinction is observed when comparing the number of source tasks required to achieve a given performance level across methods.To surpass baselines with exhaustive RL, RTTL necessitates approximately ten steps, whereas GTTL achieves this in merely seven steps, highlighting its efficiency. These findings strongly advocate the effectiveness of GTTL in such driving scenarios, reinforcing its potential suitability for real-world applications in achieving coarse-grained advisory systems in mixed autonomy. Upon examining speed guidance results (<Ref>), we observe that performance levels are already near-optimal even before applying transfer learning algorithms.This observation highlights the intrinsic effectiveness of speed guidance, making the added benefits derived from implementing TTL algorithms less distinguishable in this specific scenario.Highway Ramp. Following the single-lane ring road scenario, we analyze the results from a highway ramp scenario, where the complexity of the traffic situations and interactions are significantly elevated. Figure <ref> displays the jagged performance of training exhaustively in the highway ramp road network, which could indicate the difficulty of traffic coordination around the ramp.However, the overall trend suggests that the average speed of all vehicles decreases as the guidance hold duration increases.The results of multitask RL are not presented as they consistently converged to policies that induced collisions in the highway ramp scenario, hindering a fair comparison.With the acceleration guidance (<Ref>), Oracle Transfer, considered as upper-bound performance, consistently achieved 5.48 m/s for speed guidance, while the average speed in the unguided case is maintained around 3.95 m/s.CTTL progressively improved the average speed from 4.47 m/s within the budget of 5 to 5.20 m/s over 15 source tasks, obtaining the highest performance.GTTL started at 5.19 m/s after the first five steps, which is the highest among other methods, and eventually optimized its performance to 5.19 m/s across the 15 steps. Switching to the speed guidance scenario (<Ref>), all methods indicated an enhancement compared to the acceleration guidance scenario.The RTTL method started at 5.70 m/s and reached a higher peak speed of 6.03 m/s.Furthermore, the CTTL method increased the average speed from 5.31 m/s to 5.56 m/s over the 15 steps.GTTL exhibited robustness, initiating at an average speed of 5.19 m/s and advancing to 6.25 m/s across the steps.Signalized Intersection.Analyzing the signalized intersection scenarios with acceleration and speed guidance reveals some notable trends. When trained exhaustively, the system performance of the signalized intersection remains steady (<Ref>). The outcomes of multitask RL for the intersection have been omitted due to challenges encountered during the training process for a full set of tasks. 
For acceleration guidance (<Ref>), the unguided scenario and exhaustive RL resulted in average speeds of 6.84 m/s and 6.86 m/s, respectively, while Oracle Transfer reached 7.71 m/s.TTL methods, particularly CTTL and GTTL, improved significantly, up to the performance of Oracle Transfer. Speed guidance scenario (<Ref>) benefits from the transfer learning procedures.Both CTTL and GTTL mirrored Oracle Transfer performance, achieving average speeds of around 7.71 m/s, almost close to the optimal performance. In conclusion, while unguided and exhaustive RL methods remained consistent, transfer-learning approaches showed dynamic improvements in complex traffic situations, demonstrating their potential effectiveness. This remarkable stability illustrates the robustness of the GTTL approach and its ability to deliver consistent performance, even in complex environments like signalized intersections. § CONCLUSIONThis paper presents a novel approach to the coarse-grained advisory system with temporal transfer learning (TTL) algorithms in addressing the complexity and brittleness of the RL algorithm.To address this problem, the developed methods utilize the temporal structure of a range of tasks. It provides a principled way of selecting which problem to solve for a range of temporal tasks and zero-shot transferring to other problems. Despite the simplest settings and assumptions to be made for the theoretical analysis, TTL algorithms make significant contributions, fully leveraging the temporal structure of a series of problems. Focusing on three diverse scenarios—a single-lane ring, a highway ramp, and a signalized intersection—this research optimizes system-level traffic performance of coarse-grained advisory autonomy by leveraging the TTL algorithms. The results underscore the potency of coarse-grained advisory autonomy implemented through either speed or acceleration guidance.As the guidance hold duration extends, intuitively, the overall performance can face degradation because of the limited controllers.Moreover, with only a change in guidance hold duration, which is the only difference over the tasks, it is hard to optimize the entire range of possible guidance hold duration in multiple coarse-grained settings due to the uncertain reliability of training RL. Moreover, temporal transfer across tasks outperforms other baselines in single-lane ring, highway ramp, and signalized intersection scenarios.In the context of mixed-autonomy traffic, the results of this research highlight the potential of combining coarse-grained advisory systems and efficient TTL algorithms.This not only contributes to a safer, more efficient traffic environment but also supports the progressive transition towards safe and robust autonomous systems. TTL could find meaningful applications in industrial automation and robotics, where tasks exhibit varying temporal components.The approach, starting with training on the task in the middle and transferring the acquired knowledge to finer tasks, promises to bolster robustness, efficiency, and quality in cross-domain applications.Future work could also involve developing more generalizable models of transfer, capable of seamlessly navigating both temporal and spatial task structures.§ PROOF FOR <REF> We can divide it into three cases. * If J is symmetric,We can divide the area into the left trapezoid and right trapezoid.max_δ'(Obj.) 
=1/2(δ'-δ_L)[J^*+J^*-θ_L(δ'-δ_L)]+1/2(δ_R-δ')[J^*-θ_R(δ_R-δ')+J^*]=(δ_R-δ_L)J^*-θ_L/2(δ'-δ_L)^2-θ_R/2(δ_R-δ')^2Find the δ' that makes d(Objective)/dδ'=0.d(Objective)/dδ' =-θ_L(δ'-δ_L)+θ_R(δ_R-δ')=0 δ'=θ_Lδ_L+θ_Rδ_R/θ_L+θ_RIf we assume θ_L=θ_R=θ (<ref>), δ'=δ_L+δ_R/2If it were the first step, the shaded area would be as follows:∑_δ=δ_L^δ_RJ(δ) =(δ_R-δ_L)J^*-θ_L/2(δ_L+δ_R/2-δ_L)^2-θ_R/2(δ_R-δ_L+δ_R/2)^2=(δ_R-δ_L)J^*-θ(δ_R-δ_L/2)^2Otherwise, the marginal area of increase would be calculated as follows:Δ A_k= 1/2{1/2(δ_R-δ_L)}{1/2θ(δ_R-δ_L)}=1/8θ (δ_R-δ_L)^2. * If J has a positive slope,max_δ'(Obj.) =(δ'-δ_L)[J^*-(J^*-θ(δ_R-δ'))]+1/2(δ_R+δ')[J^*-(J^*-θ(δ_R-δ'))]=(δ'-δ_L)θ(δ_R-δ')+1/2(δ_R+δ')θ(δ_R-δ')=1/2θ(δ_R+3δ'-2δ_L)(δ_R-δ')Find the δ' that makes d(Obj.)/dδ'=0.d(Obj.)/dδ' =3/2θ(δ_R-δ')-1/2θ(δ_R+3δ'-2δ_L)=0 δ_R-3δ'+2δ_L=0If we find the δ' that maximizes the area, δ'=2δ_L+δ_R/3The marginal area increase would be 1/3θ (δ_R-δ_L)^2. * If J has a negative slope, without loss of generality, δ' that maximizes the shaded area is as follows:δ'=δ_L+2δ_R/3Likewise, the marginal area increase would be 1/3θ (δ_R-δ_L)^2. § PROOF FOR <REF>For the initial step, Ã_1 =(δ_max-δ_min)J^*-1/4θ(δ_max-δ_min)^2For a few beginning steps, we can calculate Ã_k based on the geometric shape illustrated in <ref>.Ã_3 =(δ_max-δ_min)J^*-1/8θ(δ_max-δ_min)^2 Ã_5 =(δ_max-δ_min)J^*-1/16θ(δ_max-δ_min)^2 Ã_9 =(δ_max-δ_min)J^*-1/32θ(δ_max-δ_min)^2 ⋮In general, we can write a clean form of Ã_k if k=2^i+1 when i∈, whererepresents the set of natural numbers.Also, <Ref> leads to J^*=θ(δ_max-δ_min).Ã_2^i+1 =(δ_max-δ_min)J^*-1/2^i+2θ(δ_max-δ_min)^2=(1-1/2^i+2)θ(δ_max-δ_min)^2To cover more than (1-ε)(δ_max-δ_min)J^*, 2^i+1 steps are needed.Ã_2^i+1 ≥ (1-ε)θ(δ_max-δ_min)^2 1-1/2^i+2 ≥ 1-ε ε ≥1/2^i+22^i+1 ≥1/4ε+1=4ε+1/4εTo sum up, we require at least 4ε+1/4ε steps to cover more than (1-ε)(δ_max-δ_min)J^*. § PROOF FOR <REF>We prove the optimality of this CTTL algorithm under the transfer budget of K. We cut a segment of [δ_min,δ_max] into K+1 subsegments. We aim to maximize A_K^CTTL, described as the remaining area subtracted from the big rectangle of J^*(δ_max-δ_min).This area can be determined by aggregating the areas of the small triangles within each subsegment. The subsegments are denoted as from l_1 to l_K+1. 
The remaining area of each subsegment can be calculated as follows:1/2l_k(θ l_k)=1/2θ l_k^2 fork=1 1/2l_k(θ (1/2l_k))=1/4θ l_k^2 fork=2,⋯,K 1/2l_k(θ l_k)=1/2θ l_k^2 fork=K+1 Our approach involves solving the quadratic programming problem with a linear constraint as follows:min 1/2θ l_1^2+1/4θ l_2^2 + ⋯ + 1/4θ l_K^2+1/2θ l_K+1^2 s.t.l_1+l_2+⋯+l_K+l_K+1=δ_max-δ_min l_1,l_2,⋯,l_K,l_K+1≥ 0To solve this optimization problem, we apply the Cauchy–Schwarz inequality (𝐮𝐯≥|⟨𝐮,𝐯⟩|).𝐮 =(l_1/√(2),l_2/2, ⋯ ,l_K/2,l_K+1/√(2)) 𝐯 =(√(2),2, ⋯ ,2,√(2)) ⟨𝐮,𝐯⟩ =l_1+l_2+⋯+l_K+l_K+1 𝐮^2 =(l_1/√(2))^2+(l_2/2)^2+⋯+(l_K/2)^2+(l_K+1/√(2))^2 𝐯^2 =√(2)^2+2^2+⋯+2^2+√(2)^2=4K |⟨𝐮,𝐯⟩|^2 =(l_1+l_2+⋯+l_K+l_K+1)^2=(δ_max-δ_min)^2Using the Cauchy–Schwarz inequality of 𝐮≥|⟨𝐮,𝐯⟩|/𝐯, 1/2θ l_1^2+1/4θ l_2^2 + ⋯ + 1/4θ l_K^2+1/2θ l_K+1^2 =θ𝐮^2 ≥θ|⟨𝐮,𝐯⟩|^2/𝐯^2=θ/4K(δ_max-δ_min)^2The equality of the Cauchy-Schwarz inequality holds when 𝐮=λ𝐯.l_1/√(2)/√(2)=l_2/2/2=⋯=l_K/2/2=l_K+1/√(2)/√(2)=λ 2l_1=l_2=⋯=l_K=2l_K+1Thus, the optimal solution would be as follows: l_1=l_K+1=δ_max-δ_min/2Kl_2=⋯=l_K=δ_max-δ_min/K.Also, the optimal objective value would be θ/4K(δ_max-δ_min)^2, denoted as A^CTTL_K.§ PROOF FOR <REF>We divide the case of K into K=2^i+1 and K≠2^i+1 where i ∈.First, for a given K such that K=2^i+1, the relationship between A^CTTL_K and A^GTTL_K can be comprehensively understood through a detailed examination of A^GTTL_K and Ã^GTTL_K. These analyses are rooted in the evidence presented in the proof of <ref> in Appendix <ref>.A^CTTL_K =(1-1/4K)θ(δ_max-δ_min)^2A^GTTL_K ≥Ã^GTTL_K =(1-1/4(K-1))θ(δ_max-δ_min)^2A^CTTL_K-A^GTTL_K≤ A^CTTL_K-Ã^GTTL_K=(1/4(K-1)-1/4K)θ(δ_max-δ_min)^2=1/4K(K-1)θ(δ_max-δ_min)^2Thus, GTTL operates at a suboptimal level relative to CTTL, bounded by 1/4K(K-1)θ(δ_max-δ_min)^2.Moreover, for the case of 2^i-1+1<K<2^i+1,Ã^GTTL_2^i+1 =(1-1/2^i+2)θ(δ_max-δ_min)^2 Ã^GTTL_2^i-1+1 =(1-1/2^i+1)θ(δ_max-δ_min)^2 Ã^GTTL_2^i+1-Ã^GTTL_2^i-1+1 =(1/2^i+1-1/2^i+2)θ(δ_max-δ_min)^2Given that all Ã^GTTL_K where 2^i-1+1<K<2^i+1 are all the same and their sum is (1/2^i+1-1/2^i+2)θ(δ_max-δ_min)^2, Ã^GTTL_K+1-Ã^GTTL_K is written as follows: Ã^GTTL_K+1-Ã^GTTL_K =1/(2^i+1)-(2^i-1+1)(Ã^GTTL_2^i+1-Ã^GTTL_2^i-1+1)=1/2^2i+1θ(δ_max-δ_min)^2 Ã^GTTL_K =Ã^GTTL_2^i-1+1+K-(2^i-1+1)/2^2i+1θ(δ_max-δ_min)^2=(1-1/2^i+1)θ(δ_max-δ_min)^2+K-(2^i-1+1)/2^2i+1θ(δ_max-δ_min)^2=(1-2^i/2^2i+1+K-2^i-1-1/2^2i+1)θ(δ_max-δ_min)^2=(1+K-3·2^i-1-1/2^2i+1)θ(δ_max-δ_min)^2 A^CTTL_K-A^GTTL_K≤ A^CTTL_K-Ã^GTTL_K=(1-1/4K)θ(δ_max-δ_min)^2-(1+K-3·2^i-1-1/2^2i+1)θ(δ_max-δ_min)^2=(-1/4K-K-3·2^i-1-1/2^2i+1)θ(δ_max-δ_min)^2=(K-(K-2^i-1)(K-2^i)/4K·2^2i-1)θ(δ_max-δ_min)^2≤1/2^2i+1θ(δ_max-δ_min)^2≤1/2(K-1)^2θ(δ_max-δ_min)^2§ EXPERIMENTAL DETAILS FOR MODULAR ROAD NETWORK In mixed autonomy roadway settings, we investigate the traffic scenarios covered in the previous works <cit.>, including the following road networks: single-lane ring, highway ramp, and signalized intersection.Single-lane Ring The single-lane circular ring road network was inspired by Sugiyama's work <cit.>.The single-lane ring environment aims to increase the average velocity of all vehicles in the road network.The ring circumference is 250 meters long.The reward function is the average speed of all vehicles.r(s,a) = 1/n∑_alli v_i(s,a)Highway Ramp The objective in the highway ramp environment was to increase the outflow given the same inflow.The reward is the number of vehicles exiting the system at each rollout. 
Signalized Intersection We have designed a single-lane, 4-way signalized intersection regulated by a static traffic signal phase. For training this intersection, a multi-task training approach is employed. Specifically, we use a multi-task reinforcement learning (RL) strategy, considering various penetration rates to simulate different levels of human-guided vehicle presence. Nonetheless, when evaluating the effectiveness of this strategy, we concentrate on scenarios characterized by a penetration rate of 0.1. This allows us to assess the performance of the trained RL policy in conditions where only 10% of the vehicles are controlled by the RL policy, and the remaining 90% operate under human guidance. The reward function is the average speed of all vehicles with some penalty terms for stopping time, abrupt acceleration, and fuel consumption.Specifically, we compare the average speed of all vehicles in the system as a performance measure.§ EXPERIMENTAL SETUP <Ref> states the detailed experimental setup for RL, simulation, and traffic scenarios. IEEEtran | http://arxiv.org/abs/2312.09436v1 | {
"authors": [
"Jung-Hoon Cho",
"Sirui Li",
"Jeongyun Kim",
"Cathy Wu"
],
"categories": [
"cs.RO",
"cs.AI",
"cs.LG",
"cs.SY"
],
"primary_category": "cs.RO",
"published": "20231127211806",
"title": "Temporal Transfer Learning for Traffic Optimization with Coarse-grained Advisory Autonomy"
} |
1Dipartimento Interateneo di Fisica, Università degli Studi di Bari Aldo Moro, Bari, 70125,Italy2Sezione di Bari, Istituto Nazionale di Fisica Nucleare, Bari, 70125,Italy3Dipartimento di Bioscienze, Biotecnologie e Ambiente,Università degli Studi di Bari Aldo Moro, Bari, 70125,Italy4Fischell Department of Bioengineering, University of Maryland, College Park MD 20742 USA5Institute for Organic Synthesis and Photoreactivity, National Research Council of Italy, Bologna, 40129, Italy6Dominik P.Purpura Department of Neuroscience, Albert Einstein College of Medicine, New York, 10461, NY, USA†The authors contributed equally to this work. *[email protected] Wide-field imaging is widely adopted due to its fast acquisition, cost-effectiveness and ease of use. Its extension to direct volumetric applications, however, is burdened by the trade-off between resolution and depth of field (DOF), dictated by the numerical aperture of the system.We demonstrate that such trade-off is not intrinsic to wide-field imaging, but stems from the spatial incoherence of light: images obtained through spatially coherent illumination are shown to have resolution and DOF independent of the numerical aperture. This fundamental discovery enabled us to demonstrate an optimal combination of coherent resolution-DOF enhancement and incoherent tomographic sectioning for scanning-free, wide-field 3D microscopy on a multicolor histological section.§ INTRODUCTIONWide-field imaging is amongst the most common imaging modalities for the observation and characterization of absorbing specimens, as done, for instance, in bright-field microscopy <cit.>. Some of the reasons behind its widespread use across many diverse applications are its ease of use, cost-effectiveness, fast acquisition, and its direct imaging capability (namely, the availability of the output image in real time, with no need for inverse computation techniques on the collected intensity). Although conventional devices work extremely well with 2D samples, having negligible thickness along the optical axis (z), their use with 3D samples is significantly complicated by the well-known dependence of both resolution and depth of field (DOF) on the numerical aperture (NA) of the imaging device: this dependence results in a strong trade-off between image resolution and DOF, and imposes the need to z-scan the whole sample in order to collect the complete volumetric profile. The operation of z-scanning requires that either the imaging device or the sample itself are mechanically shifted along the optical axis, so as to change the plane at focus and perform multiple acquisitions of different transverse planes <cit.>. The intrinsically long acquisition required by moving components implies limited in vivo applicability and comes with further disadvantages, such as the need for precise stabilization, requiring large and heavy devices, costly mechanical parts with the required precision, as well as high maintenance costs, which preclude the use of scanning microscopes in low-budget applications. The limitations of axial scanning become particularly relevant in large-NA devices, where the higher resolution comes at the expense of a narrower DOF. This has detrimental effects on the number of axial measurements necessary to characterize the entire sample, so that a common option to keep the measurement time low is to under-sample along the optical axis, with a consequent loss of information. 
In 3D imaging, resolution, axial sampling and acquisition speed are thus in direct conflict.Several apporaches have been proposed in the literature to address this problem; optical coherence tomography (OCT) is one of the most noticeable examples <cit.>. However, in all cases, the limitations imposed by the NA of the imaging system persist; in OCT, for example, small NA are required, at the expenses of resolution and signal-to-noise ratio (SNR), for addressing the loss of intensity implied by large-NA optics <cit.>.A recent and rapidly developing approach to scanning-free wide-field 3D microscopy is light-field (LF) imaging, where direct images of thick samples containing heavily defocused planes are acquired and then refocused, in post-processing<cit.>. Directional information about light from the sample is in fact acquired by a microlens array and employed, in post-processing, to perform software z-scans with similar features to the typical mechanical scans. LF devices thus enable scanning-free single-shot acquisition of a 3D sample, but its fast acquisition comes at the expenses of a dramatic loss of resolution, well beyond the diffraction limit <cit.>.In fact, due to its geometric-optics-based working principle, the maximum achievable DOF is defined by the circle of confusion (CoC), namely, by the projection of the lens aperture over the acquired defocused planes <cit.>. The resolution of the refocused images is thus not determined by the Airy disk, as is typically the case in microscopy, but is rather dominated by the geometrical effects of defocusing, as typically occurring in photography. In addition, the lenslets introduce an even stronger trade-off between resolution and DOF, consisting in the loss of resolution at focus with the improvement of the volumetric performance <cit.>.In this work, we demonstrate that the resolution versus DOF trade-off of defocused images is governed by the spatial coherence properties of light, and is naturally relieved when the sample is illuminated with spatially coherent light (i.e., the coherence area on the sample is either comparable or larger than the sample details, as explained in the Methods and Results). Coherent imaging is thus found to entail a much slower image degradation with defocusing, a result that leads to discover the direct 3D imaging potential of spatially coherent light: by combining the extremely large DOF of coherent imaging with the strong localization capability of incoherent imaging, we design a direct, scanning-free, wide-field 3D microscope and demonstrate its working principle by means of both test and histological samples. In particular, we characterize the properties of direct coherent wide-field imaging and show 3D reconstruction compatible with absorbing non-fluorescent dyes routinely used for histochemistry <cit.>.An important aspect to remark is that the required coherence exclusively relates with the transverse coherence of the field illuminating the sample, disregarding both the temporal and the spatial coherence of the source <cit.>: although our findings apply to both temporally and spatially coherent sources such as lasers, as well as to collimated beams, none of these properties are necessary to our scopes, and our results are thus not confined to these scenarios. 
On the contrary, NA-independent resolution and DOF are shown to be obtained with virtually any source of spatially and temporally incoherent light, such as a LED, since the required spatial coherence can always be acquired through propagation (Van Cittert-Zernike theorem <cit.>).Our 3D microscope is, in fact,based on a conventional bright-field imaging device, integrated with an array of LEDs for implementing a dedicated coherent illumination strategy. Conversely, typical bright-field illumination is obtained with extended sources, shining spatially incoherent light on the sample <cit.>.Spatial coherence, on the other hand, is used in a plethora of non real-time imaging modalities relying on post-processing of the acquired data aimed at recovering rich phase information about the sample; techniques such as holography <cit.> and ptychography <cit.>, for example, can achieve super-resolution, wavefront reconstruction, and correction of optical aberrations. Most notably, techniques based on computational illumination from LED arrays <cit.> have demonstrated high-resolution 3D amplitude and phase reconstruction <cit.> by exploiting sequential multi-angle plane-wave illumination and recursive phase-retrieval algorithms. However, all such coherent imaging techniques are indirect, due to the time-consuming algorithms they require for data analysis, and are thus not suitable for real-time imaging <cit.>.In this work, we show that the spatial coherence of light can be exploited in direct wide-field imaging to obtain a breakthrough improvement of the image resolution over large DOF.This result is supported by the discovery of the completely different physical mechanisms regulating resolution loss in defocused images obtained through spatially coherent and spatially incoherent illumination. In fact, while the peculiarities of focused images, whether coherent or incoherent, are well known <cit.>, the properties of coherent defocused images have been so far mostly unexplored, with the only exception of the very special case of collimated light illumination <cit.>. In this work, the introduction of a dedicated formalism and an unbiased image quantifier enables to study the properties of coherent images and to compare them with the ones of conventional incoherent imaging.One of the main results we shall present is that neither the NA nor the design of the imaging system affect the quality of defocused coherent images; in fact, the NA-dependent trade-off between resolution and DOF defined, in incoherent defocused images, by the CoC, naturally disappears when illuminating the sample with spatially coherent light. We shall profit from this effect to perform the typical tomographic reconstruction of LF imaging and retain its multicolor capability, but with enhanced resolution both at focus (where we recover Rayleigh-limited resolution) and in refocused planes. No phase retrieval and time-consuming post processing of the acquired images are required in our approach, paving the way toward 3D real-time imaging.§ METHODS As mentioned in the Introduction, we refer to coherent imaging whenever the coherence area of the illumination <cit.>, on the sample, is larger than the spatial features of the sample one wishes to resolve <cit.>. 
According to the size of the details composing a given object, an imaging system might thus behave coherently for object details smaller than the coherence area, and incoherently for larger details.In this respect, we should highlight that the size of the coherence area with respect to the whole field of view (FOV) of the image does not play any role. The transition from one regime to the other will be discussed in details later in the paper. For the sake of simplicity, we shall now disregard the effects of partial coherence, and only consider coherent systems as having a coherence area larger than any object detail, and incoherent systems as having a negligible (point-like) coherence area on the sample. §.§ Resolution and DOF in coherent imagingThe upper part of panel b) suggests one of many possible ways for obtaining coherent illumination from an incoherent source: since the coherence area on the sample scales proportionally with the ratio between the source diameter and the source distance, the desired coherence can easily be obtained by reducing the source size <cit.>. An obvious alternative would be to employ laser light illumination, but the presented results are not limited to this scenario. In Figs. <ref> a) and b), we report two typical examples of incoherent and coherent imaging, respectively, as obtained by changing the illumination in the same exact imaging system. The opposite situation of incoherent illumination is typically achieved by placing extended natural sources at a small distance from the sample, as reported in the upper part of panel a). In the lower part of panels a) and b), we report the corresponding incoherent and coherent images, both focused (left panels) and defocused (central and right panels), of a two-dimensional sample (a double-slit mask). Defocused images have very distinctive features depending on the spatial incoherence or coherence of the light on the sample: while incoherent images tend to quickly blur upon defocusing, coherent images do not blur. This is even more apparent in panel c), reporting a section, in the (x, z) plane, of the 3D cubes obtained by mechanically z-scanning the two-slit mask, in both cases of incoherent (left panel) and coherent (right panel) imaging. Whereas, in incoherent imaging, z-scanning quickly gives rise to flat intensity distributions as the object is moved out of focus, in coherent imaging, the transmissive details contain rich spatial modulations and stay well separated from each other over a much longer axial range compared to the corresponding incoherent image. Transmissive details thus appear “resolved” at a much larger distance from the plane at focus, before being completely altered by diffraction . A much longer DOF (or, equivalently, higher resolution of defocused images) is thus expected in coherent imaging, with image degradation not due to blurring. 
Upon quantitatively describing these effects, we shall find that resolution and DOF of defocused coherent images are actually completely independent of the NA of the imaging system.The differences between coherent and incoherent systems can be traced back to the different underlying image formation processes, as formally expressed by the intensity distributions describing the images<cit.>:I_inc( x)=|𝒜( x)|^2 * |𝒫( x )|^2I_coh( x)= |𝒜( x)* 𝒫( x)|^2 ,where 𝒜( x) is the complex transmission function of the object, 𝒫( x) is the Green's function describing the field propagation through the optical system, and f*g denotes the convolution between two complex-valued functions, f and g. Unlike the incoherent image formation process, which is linear in the optical intensity, coherent imaging is non-linear with respect to the object 𝒜 ( x). Therefore, although the same quantities are involved in both intensity distributions of Eq. (<ref>), those contributing to the incoherent image formation are real and positive, whereas coherent imaging is sensitive to both the amplitude and phase of complex functions describing both the field distribution within the sample and its propagation through the imaging system <cit.>. In fact, upon neglecting optical aberrations, the coherent (i.e. complex) PSF 𝒫( x) of Eq. (<ref>) can be decomposed into two contributions: 𝒫 ( x) = 𝒟_z-δ ( x) * 𝒫_0( x),where 𝒫_0 is the complex PSF describing the focused coherent image and determining the well-known Airy disk <cit.>, and 𝒟_z-δ represents the field propagation over a distance z-δ, with δ and z the axial coordinates of the object point and of the plane at focus, respectively. Depending on the placement of the sample and the numerical aperture of the device, the quality of the output image can thus be dominated either by the effects of out-of-focus propagation or by the Airy disk, with the two effects blending into each other only when the object is placed close to (but not perfectly on) focus.The corresponding transition between the focused and defocused image is well known in incoherent imaging: at focus, both the resolution (λ /NA) and the DOF (λ /NA^2) are determined by wave optics (Airy disk), with λ the illumination wavelenght. However, as the object is moved outside of the natural DOF of the focused device, the PSF 𝒫 is dominated by geometrical optics effects and reduces to the circle of confusion (namely, the projection of the lens aperture onto the defocused image plane) <cit.>, which induces a typically circular blurring with a radius proportional to both the defocusing |z-δ| and the effective lens radius. The different physics regulating coherent and incoherent imaging helps developing an intuition about the different behaviour observed in Fig. <ref>, but does not suffice to quantitatively compare the resolution and DOF of coherent and incoherent imaging. In fact, image quality estimators typically used for characterizing imaging performance, from two-point resolution criteria, such as Rayleigh's and Abbe's <cit.>, to more advanced ones, such as modulation transfer functions <cit.>, all rely on the linearity of the (incoherent) image formation and the positiveness of the PSF, and thus fail in assessing the performance of coherent imaging. For instance, the definition of a Rayleigh criterion prescribes that, because of the broadening effect of the incoherent PSF, the image of two ”points” is the superposition of two disks (Airy disks, at focus, CoC, out of focus). 
The resolution is then easily defined by arbitrarily setting an acceptable threshold to when the two disks are perceived as separated. But these methods cannot be applied as effectively to a non-linear process such as the coherent image formation, since coherence induces the appearance of spurious spatial frequency components. Therefore, neither an approach based on modulation transfer functions, that require the harmonic content to be unaltered, nor the two-point visibility, which requires a relative minimum separation between the images of two points, can be used.To quantify the performance of coherent and incoherent imaging systems, we thus introduce a general-purpose quality estimator: the functional F_A, which we shall refer to as image fidelity, defined as a positive quantity F_A[I( x)] that compares the intensity distribution I( x) of the image produced by an imaging system directly with the original intensity profile of the object A=|𝒜|^2, namely,F_A[I]=∫√(A( x/M) I( x)) d x,where M is the magnification of the imaging system in its plane at focus. Both A and I are normalized quantities for the definition of the fidelity to be consistent and to saturate to unity in the ideal case of perfect imaging (I=A). Being completely independent of any detail of the image formation process, the fidelity enables performing image quality evaluation through any imaging device, as long as the shape of the known reference object is known: resolution and DOF shall thus be defined as the minimum object size and the maximum axial range producing a “faithful” image, as identified by a threshold set to the fidelity. Both these definitions apply equally well to focused and defocused images, thus enabling to study how resolution changes with defocusing. Since incoherent imaging is only sensitive to the intensity transmitted by the sample, our study will now be restricted to non-diffusive objects and will disregard phase information, namely, we shall consider field transmission profiles with arg(𝒜)=0 uniformly in the sample, so that 𝒜 = |𝒜|≥ 0. §.§ Resolution limits The plot reported in Fig. <ref> employs the fidelity to offer a quantitative interpretation of the coherent and incoherent z-scans reported in Fig. <ref>c).The colored areas in Fig. <ref> highlight how far from the plane at focus (abscissas) an s-sized object (ordinates) can be placed to produce an image with fidelity higher than 95%. The orange area refers to spatially incoherent illumination, whereas the blue area refers to the coherent case. The physical regimes leading to the dashed curves that delimit the high-fidelity regions associated with coherent and incoherent imaging offer a clear perspective on the physical mechanisms regulating the two image formation processes and enable quantifying the resolution versus DOF trade-off in the two cases.Such boundaries can be interpreted as resolution limit curves, giving the functional dependence of the resolution on the displacement of the sample from focus, at the threshold of the image fidelity above which an image is considered resolved.These curves are obtained from the analytical expression of the image fidelity, written in terms of the parameters on which the image depends, which in our case are: the dimension s of the features of the sample, the axial coordinate δ where the sample is located, and the axial location z of the plane focused by the imaging system. The image fidelity associated with I( x)=I( x; δ-z, s) is thus a two-variable function: F_A[I](δ-z,s). 
Since the quality of the image, upon mechanical z-scanning, only depends on the relative distance between the object and the focused plane, we shall set for simplicity δ=0 and interpret z as the relative defocusing distance.By studying the analytical expression of F_A[I](z, s), exact expressions of relevant image quantifiers can be extracted. For instance, F_A[I](0, s) gives the image fidelity in the plane with Rayleigh-limited resolution, as a function of the object size. By inversion, one obtains, both for coherent and incoherent imaging,s_foc = λ/NA f_foc(c),where f_foc(c) is a coefficient depending of the threshold image fidelity, amounting to 0.157 for c=0.95 (Fig. <ref>). Apart from the multiplying constant, which only depends on the arbitrary choice of a threshold on the fidelity, the equation corresponds to the well-known diffraction-limited resolution of focused imaging systems (dashed black line in Fig. <ref>), as determined by the Airy disk.Therefore, the analysis in terms of fidelity recovers the well-known fact that the optical performance of focused coherent and incoherent systems is analogous.The differences between the two illumination strategies emerge when investigating defocused images in two different physical regimes.The geometrical optics regimeis explored by considering the fidelity in the limit λ→ 0, namely,F_geom[I](z, s)=lim_λ→ 0F_A[I].If this physical regime is investigated in the incoherent imaging case, the implicit curves F_geom[I_inc]=c, in the (z,s) plane, have an explicit expression, which, unsurprisingly, prescribes the well-known circle of confusion of geometrical optics:s_geom(z) = NA | z| f_geom(c),with f_geom(0.95)=1.97. As shown in Fig. <ref>, the CoC-defined trend perfectly traces the boundary of the fidelity area. Hence, the fidelity analysis confirms that wave optics has negligible effects on the optical performance of an incoherent system when the sample is moved away from perfect focus. By exploring the same physical limit in the case of coherent imaging, the obtained analytical expression does not describe any physically relevant situation and does not have a counterpart in the shape of the fidelity region. Instead, interesting results are obtained, in the coherent case, by investigating the opposite regime, namely by neglecting geometrical effects. This is done by considering the radius of the limiting aperture l →∞, so as to completely ignore the influence of the imaging device. This condition is equivalent to considering an imaging system where the image formation process is solely governed by diffraction, from the object plane up to the plane at focus; in fact, in Eq. (<ref>), 𝒫( x)→𝒟_z( x), indicating that no CoC exists in this case.Upon setting a threshold c to the fidelity of a coherent system with infinite NA F_diff[I_coh]=lim_l→∞F_A[I_coh],we obtain the resolution limit curvess_diff(z)=√(λ| z|) f_diff(c),with f_diff(0.95)=0.396. Rather surprisingly, this square-root scaling of the resolution with defocusing perfectly reproduces the boundary of coherent imaging out of the plane at focus, as reported by the blue dashed line in Fig. <ref>. As in the previous case, exploring the same physical limit in the case of incoherent illumination yields no interesting conclusion. 
The optical performances of coherent and incoherent imaging are thus defined by two entirely different processes: the geometrical CoC (hence, the system NA) is essentially the only factor limiting the resolution of defocused incoherent imaging; on the contrary, the aperture size and optical design of the imaging system play no role in coherent imaging, where image degradation is caused solely by diffraction and free-space propagation from the object to the observation plane. The different physical phenomena governing image degradation (geometric optics, as opposed to diffraction and wave propagation) have surprising effects on the image quality. Resolution and DOF of coherent imaging are found to be independent of the NA of the imaging system, and their trade-off is greatly relaxed with respect to incoherent imaging, as defined by the square-root law (dashed blue line) compared to the linear dependence (dashed red line).§.§ Coherent 3D imaging with incoherent sectioning capability The newly discovered properties of direct coherent images can be integrated with the strong axial localization capability of incoherent imaging to achieve scanning-free 3D wide-field imaging of absorbing samples with enhanced volumetric resolution. In fact, spatially coherent illumination enables an (NA-independent) square-root scaling of the transverse resolution, thus offering high lateral resolution over a long DOF; at the same time, the axial sectioning typical of spatially incoherent illumination provides, within the wide DOF accessed through coherence, a precise sectioning capability, as enabled by large-NA tomographic systems <cit.>.

(<ref>), and ℒ_ x_0 is the Green's function propagating the field from the point-like source centered in x_0 to the sample plane.As we shall discuss in the “Results” section, the wide freedom in the choice of ℒ_ x_0 (hence, of the illumination scheme) enables to greatly customize the optical performances of the proposed 3D imaging system. Specifically, in order to encode 3D information into I( x_0, x), illuminating the sample from many different angles is not necessary. In previous works (see, e.g., Ref. <cit.>), in fact, ℒ_ x_0 has always been arranged in such a way to have an illumination distance L between a source at coordinate x_0 and the sample, such that the latter can be considered to be illuminated by tilted plane waves, corresponding to the choiceℒ_ x_0( x) = exp[ i 2π/λ x_0/L· x ],as conventionally done in tomographic systems. However, as we shall discuss in the “Results” section, our complete formal analysis, and the consequent understanding of coherent imaging, enable to demonstrate that neither the angular illumination nor the requirement of collimated light are in any way necessary to encode 3D information into I( x_0, x). Most importantly, understanding the underlying physics of coherent and incoherent imaging is the key for achieving scanning-free direct 3D imaging, with no need for time-consuming phase retrieval algorithms.The intensity distribution described by Eq. (<ref>) is easily recognized as a coherent image, as in Eq. (<ref>), with the only difference that the object is now replaced by the expression ℒ_ x_0 𝒜, emphasizing the role of the illumination scheme and the wide freedom in its design. The acquired 4D intensity can thus be expected to have mostly the features we have attributed to coherent images, such as the decoupling of the lateral resolution and DOF. However, the large DOF entails the lack of axial localization: thick 3D samples are imaged with high transverse resolution, but lack any axial localization. To address the issue, we shall integrate the proposed technique with the properties of incoherent imaging, in which the tomographic properties are defined by the angular acceptance of the lens. The analogy with incoherent systems can easily be understood by considering the image resulting from the sum of the coherent images obtained from different illumination coordinates, namely,R_0( x)=∫ I( x_0, x) d x_0 = I_inc( x).This equality can be analytically verified by plugging the expression of the plane-wave illumination into Eq. (<ref>) and integrating the result; however, any other illumination schemes presented in this work yields the same result. In fact, from a more intuitive standpoint, we can recognize that integrating over the entire illumination plane is equivalent to shining uniform incoherent light onto the sample, which is exactly the typically sought-after experimental condition of uniform illumination in conventional systems (e.g. Kohler illumination). It is thus not surprising that the integration must yield exactly the same results as conventional (incoherent) imaging: Rayleigh-limited resolution at focus, CoC blurring out of focus, and dependence on NA, with the only difference that the uniform illumination is achieved in post-processing.However, the mere integration reported in Eq. (<ref>) is a rather poor way of employing the much larger amount of information contained within I( x_0, x): due to the shallow DOF of incoherent imaging, the features of the sample are rapidly lost as it is moved away from perfect focus, more so for large NA. 
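To make the acquisition scheme and the role of the source-coordinate integration concrete, the following 1D sketch simulates a toy version of the dataset I(x_0, x) under tilted-plane-wave illumination, and of the summed image R_0. All numerical parameters (wavelength, distances, aperture, PSF model) are illustrative assumptions, not the experimental values; in particular the defocused PSF is a crude Gaussian stand-in rather than the actual pupil-defined 𝒫.

```python
import numpy as np

# --- hypothetical 1D toy model of the 4D (here 2D) dataset I(x0, x) ---
lam, L = 0.5e-6, 0.1                         # wavelength and illumination distance (assumptions)
x = np.linspace(-100e-6, 100e-6, 1024)       # sample/detector coordinate (unit magnification assumed)
x0 = np.linspace(-50e-6, 50e-6, 64)          # source (illumination-plane) coordinate

A = (np.abs(x) > 5e-6) & (np.abs(x) < 15e-6)     # double-slit transmission amplitude
A = A.astype(complex)

def coherent_psf(x, defocus, lam=lam, NA=0.2):
    """Crude Gaussian stand-in for the coherent PSF of a defocused system (assumption)."""
    w = 0.61 * lam / NA + 0.5 * NA * abs(defocus)    # width grows with defocus
    return np.exp(-x**2 / (2 * w**2)).astype(complex)

def coherent_image(A, illum, psf):
    field = np.convolve(illum * A, psf, mode="same")   # [L_{x0} A] * P
    return np.abs(field)**2                            # intensity on the camera

P = coherent_psf(x, defocus=50e-6)
I = np.empty((len(x0), len(x)))
for i, s in enumerate(x0):
    illum = np.exp(1j * 2 * np.pi / lam * s / L * x)   # tilted-plane-wave illumination L_{x0}
    I[i] = coherent_image(A, illum, P)

# naive incoherent-equivalent image: plain sum over all source positions (R_0)
R0 = I.sum(axis=0)
```

In this toy model R0 reproduces the blurred, CoC-limited image one would record with uniform incoherent illumination, which is precisely the point made above.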
On the contrary, a Radon transformation of I( x_0, x) (here expressed as a line integral):R_z( x^')=∫_γ( x^') I( x_0, x) dl,enables localizing the object within the much larger DOF characterizing coherent imaging. In Eq. (<ref>), γ( x^') are two lines of equations sinθ(z) x_0+cosθ(x) x= x^' defined in the spaces (x_0,x) and (y_0,y). In fact, the Radon transform R_z( x^') isolates a specific axial coordinate z by integrating over the whole dataset I(x_0,x) at a z-dependent angle θ(z); this allows one to perform, in post-processing, a software z-scanning similar to the hardware scan done by manually moving the focus of a conventional (incoherent imaging) device. The relation between the integration angle and the reconstructed axial plane can be understood by considering that the object point at coordinate x^', once illuminated by the source lit at transverse coordinate x_0, is mapped onto the detector coordinate x, which depends on both x_0 and x^'. As anticipated, the geometrical locus of the points of the sensor x corresponding to the same object coordinate x^' is a line in the (x_0,x) space with equations_x^': α(δ)x_0 + β(δ)x+x^'=0,where α and β are two functions depending on both the defocusing distance δ and the particular illumination scheme; as in conventional LF imaging, they are obtained through ray tracing in a geometrical optics context. The same holds, with the same coefficients α and β, for the other two coordinates (y_0,y). Therefore, for an object imaged at an axial displacement δ from the focused plane, the most accurate reconstructed image is R_z=δ, as obtained by performing the integration in Eq. (<ref>) along lines with a tilting θ=arctan (-α/β).As we shall discuss in the “Results” section, the 3D performance of the proposed coherent 3D imaging technique can be characterized in terms of the image fidelity F_A[R_z]. However, volumetric reconstruction shall require the image fidelity to be evaluated onto a three-parameter space: since the focus of the system is fixed at a given coordinate, both the relative position δ of the s-sized object and the reconstruction coordinate z can be moved independently with respect to the plane at focus.§ RESULTS The introduction of the image fidelity has enabled to directly compare the performance of coherent and incoherent systems and to discover that, in coherent imaging, the degradation of the image resolution with defocusing is not related with geometrical blurring mechanisms such as the CoC. On the contrary, the degradation of the image quality is governed almost entirely by diffraction from the object plane to the imaged plane; this leads to a square-root law scaling √(|z|) of the resolution with the distance from focus z, as opposed to the linear scaling characterizing the CoC. Thus, in coherent imaging, the axial range in which the object is resolved thus scales quadratically with the object size, rather than linearly. Therefore, in addition to being independent of the NA of the imaging device, coherent imaging yields a quantitative DOF advantage over conventional (incoherent) imaging. In Fig. <ref>, we report the experimental demonstration of this prediction and show that coherent illumination enables a 4 times larger DOF at 2 μm resolution and an almost 20 times larger DOF at 10 μm resolution, with respect to incoherent illumination. 
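For completeness, the software z-scanning introduced above can be sketched as a shear-and-sum (Radon-type) operation on the simulated dataset of the previous sketch. The coefficients alpha and beta encoding the ray-traced mapping are left as user inputs, since they depend on the assumed illumination scheme; this is an illustrative routine, not the authors' reconstruction code.

```python
import numpy as np

def refocus(I, x0, x, alpha, beta):
    """Shear-and-sum reconstruction R_z(x') of the dataset I(x0, x).

    Integrates I along the lines alpha*x0 + beta*x + x' = 0; alpha and beta
    encode the reconstructed axial plane and the illumination geometry
    (assumed known from ray tracing).
    """
    xp = x.copy()                              # reconstruction grid, same as camera grid
    R = np.zeros_like(xp)
    for i, s in enumerate(x0):
        # detector coordinate contributing to each x' for this source position
        x_line = -(alpha * s + xp) / beta
        R += np.interp(x_line, x, I[i], left=0.0, right=0.0)
    return R / len(x0)

# example usage (alpha, beta from the ray-traced mapping of the chosen scheme):
# R_delta = refocus(I, x0, x, alpha=alpha_of(delta), beta=beta_of(delta))
```

The loop over source positions can be vectorized in practice, and the same one-dimensional routine is applied independently to the (x_0, x) and (y_0, y) pairs of coordinates.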
The experimental images shown in the figure are obtained by illuminating the masks with an array of green LEDs, placed 110mm apart from the plane at focus of a 20× magnification conventional microscope. Although it is a well-known fact in microscopy, and in bright-field imaging in general, that source collimation indeed implies DOF augmentation <cit.>, we should highlight that the square-root trend we experimentally demonstrate is the result of an entirely different physical phenomenon and cannot be understood in terms of source collimation, but only in terms of its spatial coherence.The conventional (incoherent imaging) explanation of the DOF improvement through source collimation, in fact, is related to the divergence of the illuminating beam becoming smaller than the acceptance angle of the optical devices, so that the optical properties of the imaging device are no longer dictated by the NA of imaging device, but rather by the effective NA defined by the illumination itself.However, this effect is profoundly different from the DOF extension enabled by spatial coherence, where collimation is by no means a requirement. The presented DOF advantage, in fact, is by all means maintained even with a quasi-infinite illumination NA, as one could get by bringing the illumination stage in extreme proximity to the sample and employing smaller sources, such as quantum dots and single-molecule LEDs. The incoherent effects of DOF improvement, however, indeed exist in the regime in which the illumination collimation is such to define the effective NA of the system, but the coherence area on the sample is not wide enough for the system to behave in a coherent manner. To gain more insight about the role played by the NA of the illumination system, we shall now study the transition from incoherent to coherent imaging and consider the general case of a finite-sized source, matching the experimental conditions of Fig. <ref>. In Fig. <ref>, we plot the 95%-fidelity curve (solid black line) of the image of a transmissive mask (an s-sized slit) as a function of its distance from a w-sized incoherent emitter; strictly speaking, z is the distance of the object from the plane at focus, but its variation naturally changes the object-to-source distance as well. When the object is so close to the source that the spatial coherence acquired through propagation towards the sample is smaller than s, the resolution versus DOF trade-off is determined by the numerical aperture of the illumination, as expected in a conventional system; this is demonstrated by the overlap of the evaluated fidelity (black line) with the one obtained in the geometrical optics approximation (red dashed line), for large values of z. 
In this regime, imaging is thus incoherent. However, as the object is moved farther away from the source, the coherence area on the object becomes proportionally larger until coherence effects become dominant, and the fidelity trend (black line) detaches from the geometrical optics prediction and overlaps with the coherent trend (dashed blue line), which is completely NA-independent. The yellow region highlights the transition from incoherent to coherent imaging, and shows that coherent effects come into play, as predicted, when the coherence area becomes comparable to the details one wishes to resolve. Although this result is quite intuitive, its implications are very relevant. First, it demonstrates that the maximum DOF that direct imaging can achieve is ultimately limited, at least in the realm of classical optics, by the spatial coherence of the illumination on the sample; this is in contrast with the approximately infinite DOF one might incorrectly expect by interpreting the case of perfectly collimated illumination in terms of conventional incoherent imaging, along the lines of what was discussed above. Second, it leads to a better appreciation of the implications of the approach employed for obtaining the results of Fig. <ref>, namely, the reduction of the illumination stage area for showing, on the one hand, the effect of the NA on the CoC of incoherent imaging (blue and red points), and, on the other hand, the transition from incoherent to coherent imaging (orange points). The trend of the blue and red points indicates that a decrease of the illumination NA by a factor of 2 increases the DOF by the same amount, as expected for incoherent imaging; however, in both cases, the illumination area (i.e., the radius of the area of lit LEDs) is not yet small enough for the coherence area to be comparable with the desired resolution. By further shrinking the illumination, a point is reached where the linear trend is lost and the NA-independent square-root trend emerges, as a consequence of the larger coherence acquired by the illumination. The transition is thus explained in terms of spatial coherence. Let us now show why localized coherent illumination is so convenient for performing 3D scanning-free imaging. As detailed in the “Methods” section, 3D information about the sample is gathered by measuring the 4D function I( x_0, x), obtained by sequentially illuminating the object from different point sources on the illumination plane. The image dataset I( x_0, x) can then be Radon-transformed to isolate axial planes of the sample, as prescribed by Eq. (<ref>). In particular, an extremely interesting result is obtained by applying the Radon reconstruction to the plane δ on which an object with transmission function 𝒜 is placed, namely:R_δ( x)=|𝒜( x)*𝒫̃( x) |^2, with 𝒫̃( x) = 𝒟_d(δ)( x) * 𝒫_0( x),where 𝒫_0 is the coherent PSF of the imaging system in its focus, and 𝒟_z is propagation in vacuum by a distance z, as in Eq. (<ref>). The properties of the reconstructed image are easily understood by noticing that Eq. (<ref>) is exactly the expression of a coherent image (see Eqs. (<ref>) and (<ref>)), as observed by the same imaging device, but affected by an equivalent defocusing d(δ). More specifically, coherent illumination gives rise to images having a resolution at focus defined by the Rayleigh limit and an out-of-focus resolution scaling with the square root of the defocusing √(δ), whereas the images reconstructed by our method have the same resolution at focus, but an out-of-focus resolution scaling with √(d(δ)).
As reported in Fig. <ref>a), different illumination schemes are thus possible, each one characterized by a specific scaling of the resolution with the defocusing; the optical performance of the device can thus be greatly enlarged, offering a wide flexibility in view of a variety of different applications.In fact, the plots indicate that the scaling of the lateral resolution as a function of the defocusing is a pure square-root only in the case of plane-wave illumination, namely, when Z_0→∞, in the first scheme, or when the middle scheme is adopted.In the other cases, the scaling remains defined by a square-root law, but the actual dependence also involves the illumination distance (Z_0 in the first scheme, δ_0 in the third one).Fig. <ref>b) demonstrates that the great (NA-independent) DOF extension typical of coherent imaging is integrated with very accurate (NA-dependent) axial localization, due to the incoherent imaging properties brought in by the reconstruction process.The reported software z-scan shows that a double-slit object placed outside of the native DOF of the microscope can actually be localized extremely well around the plane where the most accurate reconstruction happens. Also, at a glance, one immediately recognizes that the depth of the reconstruction is not what one would expect by coherent imaging, but rather exactly the native (NA-defined) incoherent DOF of the device. As shown on the right hand side of the axial scanning, however, the reconstructed image is exactly the same image that coherent imaging would give, with an object displaced by an equivalent defocusing d(δ).All the aforementioned properties are summarized by Fig. <ref> c). The blue and yellow areas identify the resolution performance of coherent and incoherent illumination, respectively, with their characteristic linear (CoC) and square-root trends. The colored “V”-shaped regions, instead, show the optical properties of the images reconstructed, through software z-scanning, by the proposed 3D imaging modality, and correspond to five different axial positions of the object. As expected from our findings, the depth of the reconstructions, which represents the axial resolution of the 3D imaging technique, coincide with the DOF of incoherent imaging, namely, it is the same one would obtain by focusing the equivalent incoherent imaging system (i.e., be lighting on the whole array of small sources at once) on the correct object plane; the only difference with respect to the image obtained by mechanical z-scanning within an equivalent incoherent imaging system is in the minimum resolution, which in our approach lies on the square-root curve defined by coherent imaging.On the other hand, when the object is placed at focus (δ=0), the image reconstructed with our method is exactly the same as in incoherent imaging, both in the minimum Rayleigh-limited resolution and in the CoC-defined axial localization.We shall now employ these results to experimentally demonstrate the high-resolution volumetric multicolor capability of the proposed technique (Fig. <ref>). Coherent illumination from localized emitters is obtained through an array of commercial RGB LEDs, placed far enough from the sample plane for the coherence area on the sample plane to be comparable with the details of interest. The sample is a 10 μm-thick mouse brain section, where cell nuclei and cytoplasm have been labeled, respectively, by hematoxylin and eosin. 
The acquired 3D information enables to clearly compensate for the sub-optimal placement of the microscope slide, whose closest part to focus is 10 μm away from the focused plane, and mounted with a tilting of about 10 degrees. Unlike the color-independent CoC, the square-root scaling of the resolution of coherent imaging has a weak dependence on wavelength (∝√(λ)), thus giving rise to images characterized by negligible chromatic aberration. § DISCUSSION We have found that the lateral resolution and DOF of defocused images obtained through spatially coherent illumination are decoupled from the numerical aperture of the imaging system. Such independence is particularly convenient for designing 3D imaging devices exploiting transverse coherence of light: the resulting overall DOF of the technique becomes independent of the optical components used for the image acquisition, and is instead entirely defined by the coherent illumination scheme, as shown in Fig. <ref>.Despite being based on the same image reconstruction principle as LF imaging, our resolution scaling is much more convenient, with the additional benefit of retaining Rayleigh-limited resolution at focus. In fact, compared to conventional LF devices, which achieve DOF extension at the expense of the lateral resolution, 3D imaging systems based on spatially coherent illumination have a DOF that scales quadratically with the desired resolution, thus always yielding an advantage over the linear scaling typical of LF <cit.>. Furthermore, since a large NA has no effect on the resolution and DOF of the system, large apertures can be used to obtain optimal sectioning capability upon refocusing, enabling a strong suppression of the background neighbouring planes, as in high-NA tomographic systems.Since our proposal only requires transverse spatial coherence, these systems work with temporally incoherent sources, which induce negligible to modest radiation damage, as required by in vivo biological applications. Although image reconstruction through Radon transform does not recover the phase content, as opposed to computational techniques based on coherence <cit.>, it carries the enormous advantage of being performed in real-time with current GPU architectures and FPGAs <cit.>, or through the use of holographic screens <cit.>. The proposed 3D wide-field imaging technique can thus be used both for direct and real-time imaging.The extreme simplicity and low cost of the optical design, also compared to LF imaging, has high potentials to open up the possibility of employing 3D imaging in new scenarios, low-budget applications as well as for public healthcare in developing countries.Funding All Authors acknowledge funding from Università degli Studi di Bari through the Horizon Europe Seeds program, project INTERGLIO (S081). M.D. and F.V.P. are supported by PNRR MUR project PE0000023-NQSTI.M.D., G.M. and F.V.P. are supported by INFN project QUISS. MD, GM and FVP acknowledge funding under project ADEQUADE: this project has received funding from the European Defence Fund (EDF) under grant agreement EDF-2021-DIS-RDIS-ADEQUADE (n°101103417). G.P.N. 
is supported by: 1) AstroDyn (FA9550-19-1-0370), AstroColl (FA9550-21-1-00352) and Stochastic Biophysical Interactions within Aquaporin-4 Assemblies (FA9550-20-1-0324) funded by AFOSR; 2) Marie Skłodowska-Curie Actions -ITN-2020 ASTROTECH (GA956325) funded by the European Commission; 3) NEXTGENERATIONEU (NGEU) funded by the Ministry of University and Research (MUR), National Recovery and Resilience Plan (NRRP), project MNESYS (PE0000006) – A Multiscale integrated approach to the study of the nervous system in health and disease (DD 1553, 11.10.2022); 4) NEXTGENERATIONEU (NGEU) funded by the Ministry of University and Research (MUR), National Recovery and Resilience Plan (NRRP), project CN00000041 - National Center for Gene Therapy and Drugs based on RNA Technology (DD n.1035, 17.06.2022). Note. Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the granting authority can be held responsible for them. Author contributions G.M. conceptualized the idea, developed theory and simulations, designed and built the experimental setup, developed the software, performed data analysis, and wrote the original draft. G.M. and B.B. performed the experiments. B.B. prepared the histological sample and contributed to writing the biological part in the Supplemental document. F.V.P. supervised the theoretical work. G.P.N. supervised the biological part of the work. M.D. supervised the overall physical part of the work. M.D. and G.S. contributed to the organization and writing of the manuscript. M.D. and G.P.N. were responsible for funding. All authors read and edited the manuscript. Disclosures The authors declare no competing interests. Data Availability All the data leading to the results discussed in the paper are available upon request to the corresponding author. Supplemental document See Supplementary text for supporting content. | http://arxiv.org/abs/2311.16002v1 | {
"authors": [
"Gianlorenzo Massaro",
"Barbara Barile",
"Giuliano Scarcelli",
"Francesco V. Pepe",
"Grazia Paola Nicchia",
"Milena D'Angelo"
],
"categories": [
"physics.optics"
],
"primary_category": "physics.optics",
"published": "20231127164749",
"title": "Direct 3D imaging through spatial coherence of light"
} |
An Ensemble of 2.5D ResUnet Based Models C. Chen, et al. ^1Infervision Advanced Research Institute, Beijing, China ^2Academy for Multidisciplinary Studies, Capital Normal University, Beijing, China [email protected], [email protected] An Ensemble of 2.5D ResUnet Based Models for Segmentation of Kidney and Masses Cancan Chen^1 Rongguo Zhang^1,2 January 14, 2024 ============================================================================== The automatic segmentation of kidney, kidney tumor and kidney cyst on Computed Tomography (CT) scans is a challenging task due to the indistinct lesion boundaries and fuzzy texture. Considering the large range and unbalanced distribution of CT scans' thickness, a 2.5D ResUnet is adopted to build an efficient coarse-to-fine semantic segmentation framework in this work. A set of 489 CT scans is used for training and validation, and an independent set of never-before-used CT scans is used for testing. Finally, we demonstrate the effectiveness of our proposed method. The Dice values on the test set are 0.954, 0.792, 0.691, and the Surface Dice values are 0.897, 0.591, 0.541 for kidney, tumor and cyst, respectively. The average inference time per CT scan is 20.65s and the maximum GPU memory usage is 3525MB. The results suggest a good trade-off between model performance and efficiency.§ INTRODUCTION In recent years, over 430,000 people have been diagnosed with kidney cancer and roughly 180,000 deaths are caused by kidney cancer annually <cit.>. Kidney tumors are found in an even larger number each year, and in most circumstances, it is not currently possible to radiographically determine whether a given tumor is malignant or benign <cit.>. Computed tomography (CT) is an important clinical tool to diagnose and detect kidney tumors. Surgery is the most common treatment option. Radiologists and surgeons are also dedicated to studying kidney tumors on CT scans to design optimal treatment schedules by annotating the kidney and its masses manually. However, manual annotation is repetitive, laborious work that is subjective and varies between radiologists. Considering this, automatic segmentation of the kidney and kidney tumors is a promising tool for alleviating these clinical problems. Based on the 2019 and 2021 Kidney Tumor Segmentation Challenges <cit.>, KiTS23 features an expanded training set (489 cases) with a fresh never-before-used test set (110 cases), and aims to serve as a stronger benchmark and develop the best automatic semantic segmentation system for kidney tumors. Besides, hardware requirements (GPU, CPU, etc.) and the average inference time per case are also real constraints in clinical application scenarios, so it is important to balance the performance and efficiency of the automatic semantic segmentation system. In this paper, based on the original ResUnet <cit.>, we propose an efficient coarse-to-fine semantic segmentation framework to automatically segment kidneys and tumors. In the coarse segmentation stage, the whole CT images are re-sampled to 128×128×128 as the input. In the fine segmentation stage, we first obtain regions of interest (ROIs) for the kidney on the whole CT images based on the coarse segmentation mask, and based on this, randomly crop cubes along the z-axis, which are re-sampled to 48×224×384 as the input.
Besides, a cascaded model, consisting of the kidney segmentation model and the kidney-tumor-cyst segmentation model, is applied in the second stage. The main contributions of this work are summarized as follows:* We propose a coarse-to-fine semantic segmentation framework, which can effectively segment the kidney, kidney tumor and kidney cyst from abdominal CT images. * We first conduct a statistical analysis of the spacing resolution of all CT images, especially the thickness distribution along the z-axis, which sparks the major design ideas about the random cropping method, patch size and 2.5D ResUnet structure in the fine segmentation stage. * We evaluate our proposed framework by 5-fold cross validation on the KiTS23 data set. § METHODS Semantic segmentation of organs and lesions is a common task in medical image analysis. There are already numerous accurate and efficient algorithms for medical image segmentation, such as U-Net <cit.>, ResUNet <cit.> and nnU-Net <cit.>. Based on the natural properties of KiTS23 CT images and the strong baseline <cit.>, we develop a whole-volume-based coarse-to-fine framework as follows, which consists of coarse segmentation, fine kidney segmentation (a binary classification task: kidney versus others) and fine tumor-mass segmentation (a three-class task: tumor, cyst and other kidney regions).§.§ Preprocessing Our proposed method includes the following preprocessing steps (a code sketch is given below):* Cropping strategy: In the coarse segmentation stage, the input is the whole volume. In the fine segmentation stage, the kidney ROIs are first cropped from the whole volumes based on the coarse segmentation mask, and after that, we randomly crop 3D cubes from the kidney ROIs only along the z-axis to preserve the structural integrity of the 2D kidney slices. * Re-sampling method for anisotropic data: The original images are re-sampled to 128 × 128 × 128 for coarse segmentation. In the fine segmentation stage, if the shape of the cropped kidney ROI is d × w × h, it will be resampled to d × 224 × 384 (in this work, d=48), i.e., no re-sampling along the z-axis, and re-sampling along the x/y axes, due to the shape distribution of all kidneys. * Intensity normalization method: Images are clipped to the range [-200, 400] and normalized to the range [-1, 1]. * Others: To improve the training and testing efficiency, mixed precision is adopted throughout our framework. §.§ Proposed Method Our proposed framework is shown in Figure <ref>. The details of the two stages are addressed as follows. §.§.§ Coarse Segmentation We first use an original ResUnet <cit.> to obtain the coarse segmentation mask of all kidneys, with an input size of 128×128×128. The kidney tumor and masses are always located in the kidney region. Based on this, the kidney ROI of each CT image is cropped as the input of the next segmentation stage. This step reduces the computational cost of irrelevant information for this task and preserves all segmentation targets. §.§.§ Fine Segmentation The fine segmentation consists of kidney fine segmentation and lesion fine segmentation. Notably, the thickness range of all CT scans is between 0.5mm and 5mm. To resolve this data heterogeneity, cropping or re-sampling should be used. Considering the framework efficiency, the kidney ROIs are re-sampled to a fixed size in the x and y directions, and then we crop the cubes from the kidney ROIs only along the z axis.
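A minimal sketch of these preprocessing steps (intensity clipping and normalization, anisotropy-aware resampling of the kidney ROI, and cropping along z) is given below. It is written with NumPy/SciPy purely for illustration and is not the authors' released code; the interpolation order and the deterministic sliding crop (in place of the random crop used for training) are assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

def normalize_ct(volume_hu):
    """Clip CT intensities to [-200, 400] HU and rescale linearly to [-1, 1]."""
    v = np.clip(volume_hu, -200, 400).astype(np.float32)
    return (v - 100.0) / 300.0          # maps -200 -> -1 and 400 -> +1

def resample_roi_xy(roi, out_hw=(224, 384), order=1):
    """Resample a (d, w, h) kidney ROI to (d, 224, 384): no resampling along z,
    interpolation only in the x/y plane (interpolation order assumed)."""
    d, w, h = roi.shape
    factors = (1.0, out_hw[0] / w, out_hw[1] / h)
    return zoom(roi, factors, order=order)

def crop_cubes_along_z(roi, depth=48, stride=24):
    """Cut (depth, 224, 384) cubes along the z axis only (sliding crop as a stand-in
    for the random crop used during training)."""
    d = roi.shape[0]
    starts = list(range(0, max(d - depth, 0) + 1, stride)) or [0]
    cubes = []
    for s in starts:
        cube = roi[s:s + depth]
        if cube.shape[0] < depth:       # pad the last cube if the ROI is too thin
            pad = depth - cube.shape[0]
            cube = np.pad(cube, ((0, pad), (0, 0), (0, 0)), mode="edge")
        cubes.append(cube)
    return cubes
```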
That is to say, if the shape of the kidney ROI is d × w × h, it will be re-sampled to d × 224 × 384, and the re-cropped cube size for fine segmentation is 48 × 224 × 384 in this work. Finally, we adopt a 2.5D ResUnet as the segmentation backbone. The network architecture has 3 down-sampling layers, 3 up-sampling layers, and no down-sampling in the z direction, for the high performance and high efficiency of our framework, as shown in <ref>.§.§.§ Loss function We use the summation of the weighted Dice loss and Cross-Entropy loss as the final compound loss function, which has been proven to be robust in various medical image segmentation tasks <cit.>. §.§.§ Other tricks Mixup <cit.> and hard example mining are adopted in the model training process, both of which significantly improve the ResUnet's fitting capability.§.§ Post-processing In the inference process, connected component analysis <cit.> is applied to avoid the influence of noise. Based on the natural attributes of the kidney and lesions, we choose connected component regions larger than 10000 pixels as the final segmentation results. Notably, we abandon the multi-model ensemble method for efficient inference. Our method consists of the coarse segmentation model, the kidney fine segmentation model (background, kidney) and the lesion fine segmentation model (kidney, cyst, tumor). The final result is the average of the two predictions for the original image and the mirror image along the z-axis.§ RESULTS §.§ Dataset and evaluation measures The KiTS23 organizer has publicly released an expanded training set, totaling 489 cases, based on KiTS19 and KiTS21. The volumetric Dice coefficient and the Surface Dice are used to evaluate algorithms, and the following Hierarchical Evaluation Classes (HECs) are used: Kidney + Tumor + Cyst, Tumor + Cyst, and Tumor only. §.§ Implementation details §.§.§ Environment settings The development environments and requirements are presented in Table <ref>. §.§.§ Training protocols In our training process, we performed the following data augmentation with the MONAI project <cit.>: 1) randomly crop the volumes with scale range [0.6, 1.3]; 2) add brightness, contrast and gamma augmentation on the volumes and lesions with range [0.6, 1.5], respectively; 3) random elastic transform with prob=0.5, with sigma in the range 3 to 5 and magnitude in the range 100 to 200; 4) clip volumes to the range [-1, 1]. Details of our training protocols are shown in Table <ref> and Table <ref>. §.§ Results on cross validation and test data Originally, our proposed framework was to be evaluated by 5-fold cross validation. However, we only train and evaluate the model on the fold-0 data set due to time and computation resource constraints, and all scores are listed in Table <ref>. The average inference time on the fold-0 validation set (98 cases) and the test set (110 cases) is 19.22s and 20.65s, respectively. The maximum GPU memory usage at the inference step is 3525MB.§ CONCLUSION Based on a 2.5D ResUnet, we propose an efficient coarse-to-fine framework for the automatic segmentation of the kidney and its masses. The experimental results indicate that our framework is effective, but the segmentation robustness for kidney tumors and cysts needs further improvement. One possible reason is that a single model has a lower performance upper bound on this hard segmentation task. Thus, an ensemble of multiple models is an alternative solution, after balancing performance and efficiency. | http://arxiv.org/abs/2311.15586v1 | {
"authors": [
"Cancan Chen",
"RongguoZhang"
],
"categories": [
"eess.IV",
"cs.CV"
],
"primary_category": "eess.IV",
"published": "20231127072450",
"title": "An Ensemble of 2.5D ResUnet Based Models for Segmentation for Kidney and Masses"
} |
All data presented in this article are available on Zenodo repository. See DOI: https://doi.org/10.5281/zenodo.1020368210.5281/zenodo.10203682 <cit.> [Currently at: ]Univ. Bordeaux, CNRS, LOMA, UMR 5798, F-33400, Talence, France Univ Lyon, Université Claude Bernard Lyon 1, CNRS, Institut Lumière Matière, F-69622, Villeurbanne, FranceUniv Lyon, Université Claude Bernard Lyon 1, CNRS, Institut Lumière Matière, F-69622, Villeurbanne, France Univ Lyon, Université Claude Bernard Lyon 1, CNRS, Institut Lumière Matière, F-69622, Villeurbanne, France [Corresponding author: ][email protected] Univ Lyon, Université Claude Bernard Lyon 1, CNRS, Institut Lumière Matière, F-69622, Villeurbanne, France We experimentally study the effects of salt concentration on the flowing dynamics of dense suspensions of micrometer-sized silica particles in microfluidic drums. In pure water, the particles are fully sedimented under their own weight, but do not touch each others due to their negative surface charges, which results in a “frictionless” dense colloidal suspension. When the pile is inclined above a critical angle θ_c ∼5 a fast avalanche occurs, similar to what is expected for classical athermal granular media. When inclined below this angle, the pile slowly creeps until it reaches flatness. Adding ions in solution screens the repulsive forces between particles, and the flowing properties of the suspension are modified. We observe significant changes in the fast avalanche regime: a time delay appears before the onset of the avalanche and increases with the salt concentration, the whole dynamics becomes slower, and the critical angle θ_c increases from ∼5 to ∼20. In contrast, the slow creep regime does not seem to be heavily modified. These behaviors can be explained by considering an increase in both the initial packing fraction of the suspension Φ_0, and the effective friction between the particles μ_p. These observations are confirmed by confocal microscopy measurements to estimate the initial packing fraction of the suspensions, and AFM measurements to quantify the particles surface roughness and the repulsion forces, as a function of the ionic strength of the suspensions. Effects of salinity on flows of dense colloidal suspensions Antoine Bérut January 14, 2024 ===========================================================§ INTRODUCTION Rheology of colloidal suspensions has been an active field for years, and it is well known that interactions (van der Waals, electrostatics, polymer brushes, etc.) between particles play an important role in the suspension behavior <cit.>. In the last decade, numerous experimental and numerical works have led to a complete theoretical framework to describe the rheology of dense non-Brownian suspensions <cit.>. The main difference with colloidal suspensions is that bigger particles are not sensitive to thermal motion, and that surface forces are usually negligible. However, surface interactions between grains have recently been proposed as the key ingredient to explain several macroscopic rheologic behavior, such as shear-thickening <cit.> or shear-thinning <cit.> in non-Brownian suspensions. It is therefore interesting to study how changes in particles interactions can affect the flowing properties of the suspensions. In colloidal suspensions, salt concentration has been historically used as a way to tune the interactions between particles <cit.>. 
In non-Brownian suspensions, salt has also been used recently as a way to enhance the friction between particles <cit.>.The cross-over between thermal “colloids” and athermal “granular” suspensions is controlled by the gravitational Péclet number:P_e = mgd/k_BTwith d the diameter of the particles, m = Δρ/6π d^3 the mass the particles corrected by the buoyancy (Δρ = ρ_silica - ρ_fluid is the difference of density between the particle and its surrounding fluid), g the gravitational acceleration, k_B the Boltzmann constant, and T the temperature. In this work, we place ourselves in the intermediate regime of “dense colloidal suspensions” (P_e ≳ 1), where the particles are fully sedimented, inducing a high concentration in particles, but the thermal agitation and the surface interactions cannot be neglected <cit.>. In particular, we use silica micro-particles that are negatively charged and show a repulsive interaction in water, that can be tuned by adding ions in solutions <cit.>. In a previous work <cit.> we have shown that such dense colloidal suspensions show peculiar flow properties when inclined in rotating drum experiments. Above a threshold angle θ_c, those suspensions exhibit a “fast avalanche” regime, that is similar to the one observed in non-Brownian ones. Below θ_c they show a “slow creep” regime, that is thermally activated and depends heavily on the Péclet number. In this study, we explore the flowing behavior of those suspensions when the repulsive interactions between the particles is progressively screened by salt added in the suspension, as a way to connect the rheology of the suspensions to the interactions between the particles. § EXPERIMENTAL SET-UP§.§ Microfluidic drums Suspensions are made of silica particles from https://microparticles.de/microParticles GmbH, with diameter d = 2.12(6), dispersed in solutionsof NaCl (Sigma-Aldrich) in deionized water (ELGA Purelab®Flex, 18.2) with concentration C ranging from 0 to 2.5e-2. The gravitational Péclet number of those particles in suspension is P_e ≈ 21. The suspensions are held in polydimethylsiloxane (PDMS) microfluidic drums, made with standard soft-lithography techniques. The drums have a diameter of 100 and a depth of 45.The microfluidic drums are filled using the following protocol: a PDMS sample is made with an array of thousands circular holes with the desired diameter and depth (once sealed, these holes will become the drums which contain the colloidal suspension). The PDMS sample is carefully washed and rinsed, first with isopropyl alcohol, then with deionized water. It is then cleaned for 15 in deionized water in an ultrasonic bath. Next, it is immersed in a beaker containing the saline solution with the desired NaCl concentration C, and is let to degas for 15 in the ultrasonic bath. The PDMS sample is removed from the ultrasonic bath and placed on a sample holder, the drums facing up. At this stage, the drums are only filled with the saline solution, and a drop (200) of this solution is added on top of the sample, to avoid bubble formation due to evaporation. Then a droplet (30) of a concentrated microparticles suspension is injected with a micropipette on top of the microdrums. The particles are let to sediment for 1 in the drums. Finally, the drums are closed by placing a clean glass slide[The cleaning procedure for the glass slide is the same as the one for the PDMS sample.] on top of the PDMS sample, and by pressing it against the PDMS. 
The glass slide is maintained in position by six screws in the sample holder, which guarantees that the drums remains correctly sealed during the whole experiment. The particles typically fill ∼25 of the drums volume.The observation is made with the custom-made experimental set-up shown in Figure <ref> a). It is a horizontal video-microscopy apparatus, made of a CCD camera (Basler acA2440-75um) linked to a microscope turret with long working distance microscope objectives (Olympus MPLFLN x10, and LUCPLFLN x20) through a lens tube (InfiniTube™Standard), in front of a motorized rotation stage (Newport URB100CC), with a manual 2D translation (Owis KT 90-D56-EP) for the sample holder. To guarantee correct visualization of the sample, the rotation axis of the rotation stage is aligned with the optical axis of the video-microscopy system with a very high precision (up to a few microns). This axis is horizontal, so that the field of view contains the vertical gravity vector. To avoid external vibration, the whole set-up in installed on an optical table with passive isolation mounts (Thorlabs PWA075).Before each measurement, the sample is shaken so that the particles are suspended, then let to sediment for 8, ensuring that the initial horizontal state of the pile is the same for each experiment. Then, the drums are rotated by an initial angle of 30 and images are taken with a logarithmic framerate for 24 while the pile relaxes toward horizontal (at the beginning of the experiment the frame-rate is 20 images per second, at the end it is 10^-3 images per second). An example of experimental image is shown in Figure <ref> b). Thanks to the use of low magnification microscope objectives, we are able to record simultaneously the flows in 20 different drums for each experiment. Images are then analyzed using contrast difference (contour finding algorithm from scikit-image) to automatically detect the top surface of the pile, and extract its angle θ as a function of the time t. §.§ AFM Measurements AFM force spectroscopy studies are performed on a MFP-3D AFM from Asylum Research (Oxford Instrument) with a homemade colloidal probe <cit.>. The probe is a silica bead of 10, from the same manufacturer (https://microparticles.de/microParticles GmbH), glued to the end of a silicon nitride cantilever (DNP cantilever - Bruker) with a bi-component adhesive (Loctite EA3430). We used the thermal noise method <cit.> to know the nominal spring constant of the tooled cantilever for quantitative measurements.The force curves are recorded in deionized water and in three different NaCl solutions: forces between the silica probe and a flat silica substrate are measured during the movement of the probe at constant velocity of 1 towards the surface until contact is made (the maximum applied force is 6) and then during the return.Surface imaging of 2 silica particles is also performed in tapping mode (PPP NCHR AFM tip from NanoAndMore). Typical images have a spatial resolution of 2, and the total field of view is about 1×1 □ (512×512 pixels). To determine their RMS roughness value, the curvature of the spherical cap of the bead was subtracted from the image by a flatten of order 2 along the X and Y axis. §.§ Confocal Microscopy Stacks of images of dense colloidal suspensions at rest are obtained with a confocal microscope (Leica SP5, excitation wavelength 488). The measurement is made with two different salt concentration (“pure” deionized water and 1e-2 NaCl). 
Each suspension is introduced in a container, and a small amount of Rhodamine B is added (final concentration = 6e-6). Before measurement, the suspension is let to sediment for 10. Scans are performed at 400, with an oil-immersion microscope objective (HCX PL FLUOTAR63.0x 1.25 OIL). The image stacks have a spatial resolution of 0.2 in all three directions (XYZ), and the total field of view is about 200×200×10 . Before tracking the particles' coordinates, the contrast of each image is corrected. An example of the images obtained is shown in Figure <ref>. Due to the relatively high monodispersity of the particles, and the high Péclet number, the sediment shows a poly-crystalline structure, as has been observed in other experimental <cit.> or numerical <cit.> works. We use the TrackPy software <cit.> to obtain the 3D coordinates of the particles present in the stack. After analysis, about 40000 particles distributed over up to 5 successive layers of sediment are found.§ RESULTS §.§ Microfluidic drums The effect of the salt concentration on the average flow curves of dense colloidal suspensions in micro-fluidic drums is presented in Figure <ref>. For the lowest ionic strength[For a monovalent electrolyte such as NaCl, the ionic strength is directly equal to the concentration.] (bottom curve), the typical flow behavior is retrieved, with a “fast avalanche” regime at high angle, and a “slow creep” regime, which is logarithmic in time, below a threshold angle θ_c <cit.>. However, when the ionic strength is increased, the flow behavior progressively changes, as the repulsive force between the particles is progressively screened. The most noticeable changes are: the growth of a waiting-time plateau at the beginning of the experiment, before the fast avalanche regime starts; the increase of the time needed for the fast avalanche regime to reach the threshold angle θ_c; the increase of the threshold angle θ_c itself; and the increase of the final angle at the end of the experiment. There might also be a small change in the slow creep regime; however, as discussed later, this effect is less clear. For an ionic strength of 5e-2, almost no flow is observed when the drums are rotated[Note that the steep decrease that is visible at the very end of the high ionic strength curves (at t ≈1e5) is an artifact due to the aging of the micro-fluidic sample.], and particle agglomerates are visible in the pile. This is consistent with the fact that ionic strengths above 5e-2 can be used to generate flocculated suspensions <cit.> or colloidal gels <cit.> from silica particles dispersed in water. To better quantify the effect of the salt concentration, we define a few experimental quantities, schematically presented in Figure <ref>. We call τ_S the “starting time” of the fast avalanche, that is, the time required for the pile to reach 95 of its initial angle θ_S. We fit both the end of the fast avalanche regime and the slow creep regime by a linear function in the semilogarithmic plot (i.e., θ = A log(t) + B, with A and B two constants). We define the threshold angle θ_c as the crossing point between the two fitted regimes. We call “avalanche speed” Δθ/Δ t the average flowing rate of the fast avalanche regime. We call S the slope of the slow creep regime on the semilogarithmic time-scale. The measured values are presented in Figure <ref>. Both τ_S and θ_c increase with the ionic strength, with a steep increase when the ionic strength is close to 5e-2.
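For reference, the extraction of these quantities from a measured relaxation curve θ(t) can be sketched as follows (Python/NumPy). The time windows used to select the two regimes are hypothetical inputs that would be tuned to each data set; this is an illustrative sketch of the analysis, not the released analysis script.

```python
import numpy as np

def analyze_flow_curve(t, theta, theta_S, avalanche_win, creep_win):
    """Extract tau_S, theta_c, the avalanche speed and the creep slope S from theta(t).

    avalanche_win, creep_win : (t_min, t_max) windows (in s) selecting the end of the
    fast-avalanche regime and the slow-creep regime (choice left to the user).
    """
    # starting time tau_S: first time at which the pile drops below 95% of theta_S
    tau_S = t[np.argmax(theta < 0.95 * theta_S)]

    # fit theta = A*log10(t) + B in each regime
    fits = {}
    for name, (t1, t2) in {"avalanche": avalanche_win, "creep": creep_win}.items():
        m = (t >= t1) & (t <= t2)
        fits[name] = np.polyfit(np.log10(t[m]), theta[m], 1)   # (A, B)

    # critical angle: crossing point of the two log-linear fits
    (A1, B1), (A2, B2) = fits["avalanche"], fits["creep"]
    log_tc = (B2 - B1) / (A1 - A2)
    theta_c = A1 * log_tc + B1

    # average avalanche speed between tau_S and the crossing time
    m_av = (t >= tau_S) & (t <= 10**log_tc)
    speed = (theta[m_av][0] - theta[m_av][-1]) / (t[m_av][-1] - t[m_av][0])

    S = -A2   # slope of the creep regime per decade of time (taken positive)
    return tau_S, theta_c, speed, S
```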
The avalanche speed Δθ/Δ t seems to decrease almost linearly with the ionic strength. Finally, the slope of the creep regime S seems to first increase up to a maximum when the ionic strength is close to 1e-2, and then slightly decreases. §.§ AFM Measurements The colloids surface roughness is measured by AFM imaging. The RMS roughness of a 2 silica particle is less than 1 over 1×1. A typical image of the surface imaging is shown in Figure <ref> a).Repulsive forces F are shown in Figure <ref> b) as a function of the distance D between the particle's surface and the flat silica surface, for different salt concentrations. They show a good agreement with the theoretical double-layer electrostatic forces between surfaces in liquids <cit.>:F(D) = d/2λ_DZe^-D/λ_Dwhere, d is the particle diameter, λ_D is the Debye length, and Z is a constant equal to 9.22 × 10^-11tanh^2(ψ_0/103)at 25, with ψ_0 the surface potential in .The data presented in Figure <ref> b) are fitted to eq. <ref>, with two free parameters (ψ_0 and λ_D). The best fitting values are summarized in table <ref>, and are consistent with the expected Debye length computed from the ionic strength of the solutions <cit.>: at 25, λ_D = 0.304/√(C) , with C the monovalent salt concentration in . Note that the ionic strength of the “pure” deionized water is not measured, but is expected to be about a few 1e-5. §.§ Confocal Microscopy To estimate the typical volume that is occupied by one particle on the bulk of each suspension, we compute the 3D Voronoï tessellation (scipy.spatial library based on the Qhull library) of the particles 3D coordinates obtained by confocal microscopy. We first remove the particles on the side/edges of the sample, then compute the distribution of volumes of the Voronoï cells of the remaining particles. The probability density functions (PDF) of the volume occupied by a single particle in the bulk are shown in figure <ref>, for two different salt concentration.Finally, the mode of the distribution is taken as the typical volume around each particle. This value is used to compute the initial packing fraction Φ_0 of the suspension, given that the actual volume of one particle is know (π d^3/6). The measured packing fractions are presented in table <ref> for the two tested salt concentrations.§ DISCUSSION In this section, we provide physical explanations for the changes in flowing behaviors of the dense colloidal suspensions when the salt concentration is increased. §.§ Critical angle Salt added to screen repulsive interactions between silica particles has been used as a way to modify the friction between the grains in non-Brownian suspensions<cit.>. The main idea is that the double-layer repulsive force, which size is typically given by the Debye length λ_D, prevents the particles from having frictional contacts as long as λ_D is bigger than the surface roughness of the grains r. When salt is added, the Debye length becomes smaller, up to a point where the particles can touch each others. This induces a frictionless to frictional transition for the suspension around λ_D≈ r, which leads to shear-thickening behavior <cit.> or hysteretic flows <cit.>.This idea can be used to simply explain the observed increase of the critical angle θ_c when the ionic strength increases (Figure <ref> b)). Indeed, θ_c corresponds roughly to the “angle of repose” of the granular suspension: it's the angle below which no flow should be observed if the pile was non-Brownian. 
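Returning briefly to the confocal analysis described above, the Voronoï-based estimate of the initial packing fraction Φ_0 can be sketched as follows (scipy.spatial, built on Qhull). The edge-particle filtering and the histogram binning used to locate the mode of the cell-volume distribution are simplified assumptions rather than the exact parameters used for the data.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def packing_fraction(points, diameter, margin):
    """Estimate Phi_0 from 3D particle coordinates (same units as diameter)."""
    vor = Voronoi(points)

    # keep only particles away from the bounding-box edges (their cells are bounded)
    lo, hi = points.min(axis=0) + margin, points.max(axis=0) - margin
    bulk = np.all((points > lo) & (points < hi), axis=1)

    volumes = []
    for i in np.nonzero(bulk)[0]:
        region = vor.regions[vor.point_region[i]]
        if -1 in region or len(region) == 0:        # unbounded cell, skip it
            continue
        volumes.append(ConvexHull(vor.vertices[region]).volume)
    volumes = np.asarray(volumes)

    # mode of the cell-volume distribution, taken as the typical volume per particle
    hist, edges = np.histogram(volumes, bins=100)
    v_typ = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])

    v_particle = np.pi * diameter**3 / 6
    return v_particle / v_typ
```

Taking the mode rather than the mean makes the estimate less sensitive to the few loosely packed cells at the top of the sediment.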
Both numerical <cit.> and experimental <cit.> studies have shown that the angle of repose of a dry granular pile increases when the microscopic friction coefficient between the grains μ_p is increased. Therefore, one can expect that the increase in salt concentration, increases the effective friction between the particles, which then increases the angle of repose of the pile. Notably, we see that the measured critical angle θ_c in deionized water is about 4.6, which is close[Note that the critical angle θ_c, as defined in Figure <ref>, is always lower than the real pile angle at which the transition from the “fast avalanche” regime to the “slow creep” regime occurs.] to the angle of repose 5.76 that is observed in numerical simulation for frictionless particles <cit.>. Moreover, θ_c in the solution with the highest salt concentration (4e-2) is about 17.2, which is not too far from the repose angle of 25 for macroscopic glass beads <cit.>.Our AFM measurements seem to support this hypothesis. As shown in Fig. <ref>, the typical surface roughness r of our particles is about 2, and the repulsive force F(D) between one particle and a flat silica surface is well described by the theoretical double-layer electrostatic theory (Equation <ref>). Therefore one can estimate that the critical salt concentration where the Debye length becomes equal to the surface roughness (λ_D≈ r) is about C_c = 2.3e-2, which is consistent with the critical ionic strength at which we see a transition from colloidal piles completely flowing back to horizontal ( C ≤1e-2), and completely arrested colloidal piles ( C ≥5e-2) (see Figure <ref>). Note that we cannot directly measure the microscopic friction coefficient μ_p between particles with our experimental set-up. However, values gathered in the literature can be found in Lee et al. <cit.>: silica microparticles have a typical friction coefficient 0.03 ≤μ_p ≤ 0.1 in milli-Q water, μ_p ≈ 0.3 in NaCl solution with concentration C = 1e-3, and μ_p ≈ 0.9 in alkaline solution with ionic strength 16e-2. §.§ Starting time The increase of the delay before the start of the “fast avalanche” regime (Figure <ref> a)) is reminiscent of the dilatancy effects that are observed in macroscopic granular suspensions <cit.>. When a pile of grains is fully immersed in a Newtonian fluid, its flowing properties strongly depend on its initial packing fraction. After a sudden inclination, loosely packed piles tend to flow almost immediately, while densely packed piles show a time delay before the initiation of the flow. This phenomenon is explained by a pore pressure feedback scenario: a densely packed pile has to dilate before being able to flow, and during this dilation, the surrounding fluid is sucked into the granular layer, which tends to stabilize the pile and delay the start of the flow <cit.>.Since the ionic strength reduces the double-layer repulsive force between the particles, one can expect that it reduces the mean distance between particles, hence increasing the initial packing fraction Φ_0 of the pile. Therefore, we can expect that the ionic stress increases the time delay τ_S before the start of the flow, due to dilatancy effects. Our set-up does not allow us to measure the packing fraction Φ of the pile during the flow, to directly observe dilatancy effects. However, our confocal microscopy measurements seem to support the fact that the initial packing fraction Φ_0 of the sedimented pile increases with the ionic strength of the suspension. 
As shown in table <ref>, the packing fraction is about 51 in deionized water and increases to about 61 in a solution with NaCl concentration 1e-2. Notably, the critical packing fraction Φ_0 above which dilatancy effect are observed in macroscopic granular suspensions <cit.> is about 58. This is consistent with the fact that we observe almost immediate flow in deionized water (τ_S = 0.87), while we observe significant start delay with high ionic strength suspensions (τ_S = 40.9 for C = 4e-2).Nevertheless, two points must be noted. First, the increase of the initial packing fraction Φ_0 that is observed with the increase of the salt concentration might seem surprising. Indeed, for macroscopic granular materials, it is known that the packing fraction obtained after sedimentation of the granular medium (random loose packing) decreases when the friction between the particles increases <cit.>. Since we have already shown that the effective friction between the particles μ_p increases with the salt concentration, one could expect that the packing fraction would rather decrease when the ionic strength of the suspension is higher. The solution to this apparent contradiction comes from the fact that our suspensions are Brownian: we think that the thermal agitation helps the suspension to always reach the highest accessible packing fraction (random close packing). Second, the fact that we observe dilantancy effects is itself a proof that the friction between the grains increases with the salt concentration. Indeed, numerical simulations have shown that frictionless grains do not show dilatancy effects <cit.>. §.§ Avalanche speed The fact that the salt concentration increases the effective friction between the particles can also explain the observed decrease of the avalanche speed Δθ/Δ t (Figure <ref> c)). Indeed, both numerical simulations <cit.> and experimental works <cit.> have shown that the rheology of dense non-Brownian suspensions depends on the microscopic friction coefficient between the grains. A recent review can be found in Lemaire et al. <cit.>, and we only recall here a few key results. For example, in volume-imposed simulations, the viscosity of the suspension η_S increases with the microscopic friction coefficient μ_p. In pressure-imposed simulations, the stress ratio μ (which can be seen as the macroscopic friction coefficient) increases with μ_p, while the volume fraction Φ decreases with μ_p, at fixed viscous number[For a complete definition of the viscous number J used in the μ(J) rheology of dense granular suspensions, see the review by Guazzelli et al. <cit.>] J. In general, it is expected that the flow rate of the suspension Q decreases when the viscosity η_S increases, when the stress ratio μ increases, and when the volume fraction Φ decreases. Thus, an increase of the microscopic friction coefficient is expected to lead to a decrease of the avalanche speed Δθ/Δ t. Direct comparison are difficult to achieve, since it is non-trivial to compute the theoretical flow rate Q in the rotating drum geometry[Note that it is possible to predict Q in simpler geometries: for example, on an inclined plane (which corresponds to pressure-imposed conditions, with constant stress ratio μ), predictions yields Q∝ J Φcosθ, where J is the viscous number and θ is the inclination angle <cit.>.]. But the orders of magnitude are reasonable. 
In our experiment, we observe that Δθ/Δ t decreases by a factor of ∼ 10 between pure water (Δθ/Δ t = 0.55) and high ionic strength suspensions (Δθ/Δ t = 0.03 for C = 4e-2 mol/L). In simulations <cit.>, the viscosity of the suspension increases by a factor of 10 when the microscopic friction coefficient μ_p increases from 0 to 1 at volume fraction Φ = 55%.

§.§ Slope of the creep regime

Following previous work <cit.>, the time evolution of the pile angle θ during the creep regime can be described with a simple model where particles in the top layers are considered blocked by their neighbors, and the creep occurs when they jump above those neighbors thanks to thermal agitation. This model gives the following mathematical expression:

θ(t) = 2/(α P_e) arcoth[ exp( (t/τ) α P_e e^-α P_e θ_c) coth( α P_e θ_0 /2 ) ]

where α is a dimensionless geometric parameter, P_e is the gravitational Péclet number, τ is a characteristic time depending on the fluid's properties and drum geometry, θ_c is the critical angle, and θ_0 is the initial inclination angle (with θ_0 ≤θ_c).

If P_e ≫ 1 and α P_e θ_0 ≫ 1, equation <ref> can be approximated by:

θ(t) ≈θ_0 - 1/(α P_e) ln[ 1 + (t/2τ) α P_e e^-α P_e (θ_c - θ_0)]

Equation <ref> directly gives the slope of the creep regime: S = 1/(α P_e). Knowing that P_e ≈ 21.4 for the 2.12 μm particles and that α≈ 2.6 was found in previous experiments <cit.>, we can expect S ≈ 0.0177 rad ≈ 1.01°. This is consistent with the values we measured (0.6° ≤ S ≤ 1.6°, see Figure <ref> d)). However, the model does not predict a significant variation of S with the salt concentration C. Indeed, when salt is added to the suspension, P_e only slightly varies because the density ρ_fluid of the salted water varies. Even with the most concentrated solution (C = 5e-2 mol/L) the density only increases by ∼3%, which leads to a small decrease to P_e ≈ 20.7. As for α, it corresponds to the “height” of the barrier that one particle has to cross to jump over its neighbors. One can assume that this value slightly decreases when the Debye length λ_D decreases, because the particle has to jump above a particle of effective diameter d + 2 λ_D. For example, if λ_D goes from 50 nm (deionized water) to 1 nm (high salt concentration), this would predict that α decreases by 0.1/2.12 ≈ 5%. In the end, following the model, the slope of the creep regime S should monotonically increase when the salt concentration increases, and should not vary by more than ∼10%. Therefore, it remains unclear whether the variations that we observe in Figure <ref> d) are real physical effects or experimental artifacts due to the difficulty of measuring small pile angles during long times.

§ CONCLUSION

In conclusion, we have measured the flow of dense colloidal suspensions in microfluidic drums, after an initial inclination, for different salt concentrations. The flowing curves show two regimes: a “fast avalanche” regime above a critical angle θ_c, and a “slow creep” regime (logarithmic in time) below θ_c. We observe that the flowing behavior is strongly modified by the ionic strength of the suspension. As the salt concentration increases, the initial time delay τ_S before the fast avalanche regime increases, the speed of this regime Δθ/Δ t decreases, and the critical angle θ_c increases. All those observations are well explained by the fact that ions added in solution screen the repulsive double-layer electrostatic forces between the colloidal particles, which increases the effective microscopic friction μ_p between the particles and the initial packing fraction Φ of the suspension.
We have independently verified with AFM measurements that the particle roughness r is consistent with the critical salt concentration C_c ∼ 2.3e-2 mol/L at which we observe a transition from “very flowing curves” to “almost not flowing curves” (with particle agglomerates). We have also verified with direct confocal microscopy observations that the initial packing fraction of the sedimented suspension increases from Φ_0 ∼ 51% in deionized water to Φ_0 ∼ 61% in a solution with ionic strength 1e-2 mol/L. This explains why increasing dilatancy effects are observed when more salt is added to the suspension.

Finally, even though all our measurements seem to indicate that the microscopic friction between the particles μ_p is increased by the salt concentration C, we cannot conclude on the physical origin of this effective friction increase. Indeed, this effective friction might come from direct contact friction (if the surface asperities of the particles touch each other), or from indirect hydrodynamic interactions (either long-range pore pressure effects, or short-range lubrication effects). Numerical simulations tend to show that contact friction dominates over long-range hydrodynamics at high volume fraction <cit.> (Φ≥ 40%), and over both long-range and short-range hydrodynamics at low viscous number <cit.> (J ≤ 10^-1). However, only direct measurements of the normal and tangential forces between two colloidal particles (such as those obtained with quartz-tuning-fork atomic force microscopy <cit.>, or lateral force microscopy <cit.>), in suspensions of different ionic strengths, would be able to experimentally confirm this result in our system.

§ AUTHOR CONTRIBUTIONS

A.B. designed the study and built the horizontal video-microscopy apparatus. R.F. fabricated the microfluidic samples. M.L. and A.B. performed and analyzed the microfluidic drum measurements. A.P. performed and analyzed the AFM measurements. A.B. performed and analyzed the confocal microscope measurements. A.B. curated the data and Python analysis scripts for the open data repository. All authors contributed to the writing of the manuscript.

§ DATA AVAILABILITY

All data presented in this article, as well as the associated Python analysis scripts, are freely available on the Zenodo repository <cit.>. Further requests should be addressed to Antoine Bérut.

§ ACKNOWLEDGEMENTS

The authors acknowledge the support of the French Agence Nationale de la Recherche (ANR), under grant ANR-21-CE30-0005 (JCJC MicroGraM). The authors would like to thank Gilles Simon for his help with building the horizontal video-microscopy apparatus, as well as for manufacturing some of the mechanical pieces used in the set-up; Mathieu Leocmach for his help with the confocal microscopy measurements; and Yoël Forterre for fruitful scientific discussions. | http://arxiv.org/abs/2311.16055v1 | {
"authors": [
"Marc Lagoin",
"Rémy Fulcrand",
"Agnès Piednoir",
"Antoine Bérut"
],
"categories": [
"cond-mat.soft",
"physics.flu-dyn"
],
"primary_category": "cond-mat.soft",
"published": "20231127182036",
"title": "Effects of salinity on flows of dense colloidal suspensions"
} |
a]Joy Ganguly, b]Janusz Gluza, b]Biswajit Karmakar, c]Satyabrata Mahapatra,[a]Department of BSH, University of Engineering and Management, Kolkata, India [b]Institute of Physics, University of Silesia,Katowice, Poland [c]Department of Physics and Institute of Basic Science, Sungkyunkwan University, Suwon 16419, [email protected] [email protected] [email protected] [email protected] propose a hybrid scoto-seesaw model based on the A_4 non-Abelian discrete flavor symmetry. Light neutrino masses come from the tree-level type-I seesaw mechanismand from the one-loop scotogenic contribution accommodating viable dark matter candidates responsible for observed relic abundance of dark matter (DM). Respectively, both these contributions restore the atmospheric and solar neutrino mass scales. With only one right-handed neutrino, the model features specific predictions with the normal ordering of light neutrino masses, the lightest neutrino being massless, and only one relevant CP Majorana phase. The flavor symmetric setup helps us to realize the TM_1 mixing scheme with concrete correlations and constraints on the mixing angles and associated CP phases. The framework predicts the atmospheric mixing angle to be in the upper octant with specific ranges0.531 ≤sin^2θ_23≤ 0.544, 0.580 ≤sin^2θ_23≤ 0.595 and the Dirac CP phase is restricted within the range ±(1.44-1.12) radian. The Majorana phase is also tightly constrained with a range of 0.82-0.95 and 1.58-1.67 radian, which is otherwise unconstrained from neutrino oscillations.Strict predictions on the Majorana phases also yield an accurate prediction for the effective mass parameter for neutrinoless double beta within the range of 1.61-3.85 meV.The model offers a rich phenomenology regarding DM relic density and direct search constraints,and the fermionic DM scenario has been discussed in detail, estimating its possible connection with the neutrino sector.As an example of the model studies at colliders, the SM Higgs in the diphoton decay channel is examined. The model predicts strictly vanishing τ→ eγ, τ→ 3e decays and testable signals byMEG-II and SINDRUM/Mu3e experiments for the μ→ e γand μ→ 3 e decays, respectively. Phenomenology of the flavor symmetric scoto-seesaw model withdark matter and TM_1 mixing [=================================================================================================== § INTRODUCTIONIn the last few decades, several experiments around the globe have confirmed the phenomenon of neutrino oscillation with incredible precision <cit.>. The immediate consequence of neutrino oscillation is that at least two light neutrinos have nonzero mass. Furthermore, if we combine this with a bound on the absolute neutrino masses coming from the end-point spectrum of the tritium beta decay <cit.>, as well as the bounds from cosmological surveys <cit.> and the neutrinoless double beta decay <cit.>, we conclude that the neutrino masses are in the sub-eV scale. Despite these spectacular accomplishments, the origin of tiny neutrino masses (compared to other Standard Model fermions)remains an open question in particle physics. Over the years various ideas have been proposed, and the most common schemes are seesaw mechanisms <cit.> and radiative generation of neutrino masses <cit.>. Nonzero neutrino masses can also be realized within the framework of hybrid mass mechanisms where both seesaw and radiative mass mechanisms contribute. 
In addition to the tiny neutrino masses, we are yet to understand the observed pattern of the lepton mixing comprehensively. In fact, two of the three mixing angles, namely solar (θ_12) and atmospheric (θ_23), are found to be large, while the reactor (θ_13)mixing angle is relatively small. Such a finding clearly shows the distinctive feature associated with the lepton sector in contrast to the quark sector.The study of the underlying principle behind this typical mixing is particularly interesting with the precise measurement of the reactor mixing angle θ_13 <cit.>. Neutrino oscillation data also constraints the two mass squared differences (solar and atmospheric) defined as Δ m_21^2 = m_2^2-m_1^2 and |Δ m_31^2| = | m_3^2-m_1^2| where m_1, m_2, m_3 are the masses of the three light neutrinos.The present global analysis from several experimental data can be summarized as <cit.>Δ m^2_21=(6.82 - 8.03)×10^-5eV^2,|Δ m^2_31|=(2.427 - 2.590)×10^-3eV^2,sin^2θ_12=0.270-0.341, sin^2θ_23=0.408-0.603, sin^2θ_13=0.02052-0.02398,for normal ordering (NO) of light neutrino mass and similar constraints for inverted ordering (IO) <cit.>. In this regard, many conjectures have been put forward. A particular pattern yielding sin^2 θ_23 = 1/2, sin^2θ_12 = 1/3 and θ_13=0 known as tri-bimaximal mixing(TBM) <cit.>,received a lot of attention due the proximity of θ_23 and θ_12 with experimental values. Such amixing pattern can also be elegantly generated using flavor symmetries. Particularly, the use of non-Abelian discrete symmetries like S_3, A_4, S_4, A_5 is very well known <cit.> in this context. For a detailed discussion on such frameworks see <cit.> and references therein. Unsurprisingly, adeformation from TBM mixing becomes inevitable after precisely measuring θ_13. Nevertheless, the TBM mixing scheme can still be considered as a leading-order approximation, requiring adjustments such as accounting for non-zero θ_13 andthe Dirac CP phase δ. Possible simple deviations from the TBM mixing are calledtrimaximal (TM_1 and TM_2) mixings where the first and second columns of the TBM mixing respectively remain identical <cit.>. Such deviations can be elegantly achieved by considering larger residual symmetry (compared to the TBM scenario) or introducing an additional constituent that breaks the TBM structure <cit.>. Although the 3σ allowed ranges for all three mixing angles can be explained by both TM_1 and TM_2 mixings, the allowed value of thesolar mixing angle θ_12 (within these trimaximal scenarios)slightly prefers the TM_1 over the TM_2 mixing scheme.For a detailed discussion on the relative comparison of both mixings, see <cit.>.Apart from the neutrino masses and mixing, unraveling the nature of dark matter (DM) remains a pressing challenge in contemporary particle physics. While compelling astrophysical evidence, including observations like galaxy rotation curves, gravitational lensing, and the cosmic large-scale structure, substantiates the existence of DM <cit.>, the quest for a laboratory-based confirmation persists. Satellite missions such as WMAP <cit.> and PLANCK <cit.> have precisely determined that DM constitutes roughly 26.8% of the total energy content of the Universe.Expressing the prevailing dark matter abundance through the density parameter Ω_ DM and the normalized Hubble parameter h (Hubble Parameter divided by 100 km s^-1Mpc^-1) yields Ω_ DMh^2 = 0.120±0.001 at a 68% confidence level. Still the intricacies of its properties beyond gravitational interactions remain elusive. 
Among all proposed particle dark matter, the most sought-after paradigm is the weakly interacting massive particle (WIMP) paradigm, which suggests a dark matter particle with a mass and interaction strength akin to the electroweak scale. Unfortunately, the Standard Model of particle physics fails to comprehensively explain neutrino masses, mixings and dark matter. Standing at this juncture, certainly, it is a tempting challenge to find a common origin of these two seemingly uncorrelated sectors, if any. Hence, we aim to go beyond the SM of particle physics to explore scenarios that can accommodate a candidate of DM and explain non-zero neutrino masses and mixings. Neutrino oscillation data presented earlier, see Eq. (<ref>), does not determine the absolute scale or ordering of neutrino masses. The experiments have measured the two mass-squared differences (solar and atmospheric) associated with neutrino oscillations, and the ratio of the solar-to-atmospheric mass-squared difference (r) is found to be <cit.>r=Δ m_21^2/|Δ m_31^2|∼ 0.03.This may be an indication of the involvement of two different mass scales that might originate from entirely separate mechanisms[Within the framework of type-I seesaw mechanism, the hierarchy of the atmospheric and solar neutrino mass scales can also be explained through the mechanisms of sequential dominance <cit.> and constrained sequential dominance <cit.>where one of the right-handed neutrinos is considered to dominantly contribute the light neutrino masses.]. Following this ethos, the authors in <cit.> showed ascotogenic extension of the type-I seesaw scenario that can minimally explain the hierarchy of solar and atmospheric neutrino mass scales. In this set-up, the Standard Model particle content is extended by including one heavy isosinglet neutral lepton N_R (for the type-I sector) along with adark fermion f and an inert scalar doublet η (for the scotogenic sector), both being odd under a dark Z_2 symmetry. Here, the type-I contribution gives rise to the larger atmospheric scale. In contrast, the one-loop scotogenic contribution turns out to be the origin of the smaller solar mass scale mediated by a dark fermion, also providing a potential dark matter candidate <cit.>. Unfortunately, such constructions turn out to be inadequate in explaining the observed neutrino oscillation data associated with the mixing and the Dirac CP phase mentioned above. This problem can be addressed by augmenting non-Abelian discrete flavor symmetries <cit.> mentioned above. Hence, in the work <cit.>, aflavor symmetric scoto-seesaw (FSS) scenario is proposed to understand the neutrino mass hierarchy, which also explains the TM_2 mixing scheme. The whole framework is embedded within widely used A_4 non-Abelian discrete flavor symmetry, the smallest group with triplet irreducible representation providing an opportunity to unify three generations of leptons having the remarkable ability to produce realistic lepton mixing patterns. Moreover, such symmetry can arise in many ways, starting from a continuous group <cit.> or superstring theory in compactified extra dimensions <cit.>. Earlier in Ref. <cit.>, the authors discussed a scoto-seesaw scenario withZ_8 discrete symmetry where two right-handed neutrinos were present. 
In a similar setup <cit.> with two right-handed neutrinos,we implemented a scoto-seesaw model within the A_4flavor symmetric framework to realize the TM_2 mixing.In the present work, we show that the scoto-seesaw mechanism can be embedded within a flavor symmetric framework with the involvement of only one right-handed neutrino, and the experimentally favoredTM_1 mixing scheme can be realized. We will call the present model FSS_1 from now on. The scotogenic contribution contains neutral particles in both fermionic and scalar sectors. Within this flavor symmetric framework, we perform a phenomenological study of the fermionic dark matter and determine its correlation with the observed neutrino mixing. The obtained magnitude and flavor structure of the scotogenic contribution dictates the observed neutrino mixing pattern and facilitates us in obtaining the correct dark matter relic density. The presence of flavor symmetry makes an interesting prediction for the neutrino mass hierarchy, determines the octant of the atmospheric mixing angle θ_23 and tightly constrains the TM_1prediction for the Dirac CP phase δ_ CP. Here, the type-I contribution produces a neutrino mass matrix of rank 1, yielding only one massive light neutrino. Subsequently, the scotogenic contribution generates another neutrino mass eigenstate, and together, we obtain two massive neutrinos, which follow the normal ordering of light neutrino mass spectrum. In <cit.>, the authors showed that the type-I and scotogenic contributions could be the origin of the atmospheric and solar mass scale. Here, we show that such a hierarchy can also be procured within a flavor symmetric construction, explaining observed neutrino oscillation data. Furthermore, in the present model, we can obtain the constraint on the Majorana phase and predict the effective mass parameter appearing in the neutrinoless double beta decay and the sum of the absolute neutrino masses.One of the essential aspects of any theoretical model is its experimental viability. For the version of the FSS model discussed here, we perform a comprehensive phenomenological analysis involving the h→γγ decay, where h is the SM Higgs boson. The signal strength of the Higgs in the diphoton decay channel, R_γγ, is measured at the LHC, the value of which is around one <cit.>. The additional contribution to the decay of h→γγ in the FSS_1 model is the charged scalar of the η field. Our analysis shows that R_γγ can be fitted in our model, which can constrain the mass of the charged component of the η field. Owing to the flavor structure of this scoto-seesaw framework, we find that only the scotogenic part contributes to the lepton flavor violating decays such as μ→ eγ, μ→ 3e, whereas only the seesaw part contributes in the decays such as τ→μγ, τ→ 3 μ. However,scotogenic and seesaw parts do not contribute to the τ→ eγ and τ→ 3 e decays. All these phenomenological analyses for the FSS_1 framework serve as crucial tests of the model’s predictions and provide valuable insights into its compatibility with experimental data.The rest of the paper is organized as follows. In section <ref>, we briefly introduce the minimal scoto-seesaw framework. In section <ref> we present the complete A_4 flavor symmetric scoto-seesaw scenario, and insection <ref> we analyze corresponding neutrino masses and mixing. We mention the low energy scalar potential in section <ref>. 
In section <ref>, we discuss the detailed phenomenology of fermionic dark matter and further phenomenological implications for the Higgs to the diphoton decay and lepton flavor violation in section <ref> and section <ref>, respectively.Then, in section <ref>, we summarize the phenomenological analysis. Finally, in section <ref>, we present the conclusion and outlook of the FSS_1 framework. § MINIMAL SCOTO-SEESAW MODELIn this section, we present the minimal scoto-seesaw model which is introduced in <cit.>. The minimal scoto-seesaw model consists of one[The number of right-handed neutrinos added <cit.> to the SM is not fixed as they do not carry any anomaly <cit.>.] right-handed neutrino N_R, one singlet dark fermion f and one extra scalar doublet η_R. In addition to these particles, one Z_2 symmetry is introduced to stabilize the dark matter. In this model[For various extensions of the minimal scoto-seesaw scenario, see Refs. <cit.>.], the usual type-I seesaw mechanism with one right-handed neutrino N_R is combined with the scotogenic model with fermion f. The type-I seesaw generates the atmospheric mass scale at the tree level, while the solar mass scale is generated at the loop level in the scotogenic mechanism. As a result, the hierarchy between solar mass scale and atmospheric mass scale is maintained. The relevantLagrangian in the model can be written asℒ=-Y_N^k L̅^kiσ_2 H^* N_R+1/2M_N N̅_R^cN_R+ Y_f^k L̅^kiσ_2 η^* f +1/2M_f f̅^c f + h.c..where L^k are the lepton doublets. The scalars H and η are the SU(2) doublets defined in Eq. (<ref>). Y_N and Y_f are complex 3× 1 Yukawa coupling matrices, and M_N,f are the mass terms for N_R and f. The total neutrino mass reads <cit.>M_ν^ij=-v^2/M_NY_N^i Y_N^j+ℱ(M_η_R,M_η_I,M_f) M_f Y_f^i Y_f^j.Here, the first term is due to the tree-level seesaw mechanism, while the second term originates from the one-loop scotogenic contribution withℱ(M_η_R,M_η_I,M_f)=1/32 π^2[M_η_R^2 log(M_f^2/M_η_R^2)/M_f^2-M_η_R^2-M_η_I^2 log(M_f^2/M_η_I^2)/M_f^2-M_η_I^2],where M_η_R and M_η_I are the masses of the neutral component of η. Although the ratio of the above two contributions in Eq. (<ref>) can explain the hierarchy of the solar and atmospheric mass scales, it fails to explain the observed neutrino mixing pattern. In this regard, the use of non-Abelian discrete flavor symmetries is well motivated <cit.>. In the following sections, we discuss the phenomenological consequences of flavor symmetric construction of the scoto-seesaw framework with only one right-handed neutrino to explain the correct neutrino masses and mixing.We also provide a detailed analysis of the fermionic dark matter relic abundance and direct detection search constraint to determine the parameter space consistent with neutrino oscillation data and predictions for Higgs to diphoton signal strength and lepton flavor violating decays.§ SCOTO-SEESAW WITH FLAVORSYMMETRY: THE FSS_1 MODELThe model we are proposing is the flavor symmetric version of the scoto-seesaw model described in the previous section with usual scotogenic fermion f and inert doublet η in addition to one right-handed neutrino N_R. To obtain the flavor structure, A_4 flavor discrete symmetry and flavons ϕ_s, ϕ_a, ϕ_T and ξ are introduced. To avoid unwanted terms in the Lagrangian and get the correct flavor Yukawa structure, additional Z_N symmetries are introduced. The inclusion of flavon fields and auxiliary symmetries are generic features of such flavor symmetric constructions <cit.>. 
A remnant Z_2 symmetry of the Z_N symmetries acts as a dark symmetry that ensures the stability of dark matter under which only f and η are odd. Similar types of flavored scoto-seesaw models were studied before in <cit.> and <cit.> with Z_8 and A_4 discrete symmetries, respectively. No simple analytic correlation can be obtained due to the use of the Z_8 symmetry <cit.> whereas the TM_2 mixing was reproduced in <cit.>with the A_4 symmetry. In both cases, two right-handed neutrinos are introduced in the seesaw contribution to get the flavor structure and mixing. In the present work, we construct the framework with only one right-handed neutrino and realize the experimentally preferred TM_1 mixing scheme compared to the TM_2 mixing scheme (derived in <cit.>). The particle content of our model and charge assignment under different symmetries are shown in Table <ref>.The role of each discrete auxiliary symmetry will be described in detail as we proceed further. With the field content and charges assignment in Table <ref>, the charged lepton Lagrangian can be written up to leading order as ℒ_l=y_e/Λ(L̅ϕ_T)H e_R + y_μ/Λ(L̅ϕ_T)H μ_R + y_τ/Λ(L̅ϕ_T)H τ_R + h.c., where Λ is the cut-off scale of our model. y_e, y_μ and y_τ are the coupling constants. Now, when the flavon ϕ_T gets a vacuum expectation value (VEV) in the direction ⟨ϕ_T ⟩=(v_T,0,0)^T and subsequently the Higgs field also getVEV as ⟨ H ⟩=v, where v is the SM VEV, we get the charged lepton mass matrix to be in the diagonal form as M_l=v_T/Λv[ y_e 0 0; 0 y_μ 0; 0 0 y_τ ].Now, the Lagrangian in the neutrino sector, which generates neutrino masses, constitutes two parts: a type-I seesaw contribution with one right-handed neutrino N_R and another, one loop scotogenic part with the presence of the dark fermion f and scalar η. Following the symmetries and particle content mentioned in table <ref>, the Lagrangian for the neutrino sector can be written asℒ = y_N/Λ(L̅ϕ_s)H̃ N_R+1/2M_NN̅_R^c N_R+ y_s/Λ^2(L̅ϕ_a)ξ iσ_2 η^* f+ 1/2M_f f̅^c f+ h.c., where y_N and y_s are the coupling constants and M_N is the Majorana mass of the right-handed neutrino N_R while M_f is the mass of the fermion f. In the above Lagrangian, we have considered VEVs of the flavons ϕ_s, ϕ_a and ξ in directions ⟨ϕ_s ⟩=(0,-v_s,v_s), ⟨ϕ_a ⟩=(2v_a,v_a,0) and⟨ξ⟩ = v_ξ, respectively. A similar vacuum alignment can be found in the literature for neutrino model building <cit.> which can be realized inherently by analyzing the complete scalar potential <cit.>. The light neutrino mass matrix involving both type-I seesawand scotogenic contributions can be written as (M_ν)_ij =-v^2/M_NY_N^iY_N^j +ℱ(M_η_R,M_η_I,M_f)M_f Y_f^i Y_f^j where the Yukawa couplings take the following form Y_N=(Y_N^e,Y_N^μ,Y_N^τ)^T=(0,y_N v_s/Λ,- y_N v_s/Λ)^T,Y_F=(Y_F^e,Y_F^μ,Y_F^τ)^T= (y_s v_ξ/Λv_a/Λ,y_s v_ξ/Λ2 v_a/Λ, 0)^T≡(κ,2κ,0)^T.Within this setup, the total effective light neutrino mass matrix of Eq. (<ref>) is the followingM_ν = [b 2b0; 2b -a+ 4ba;0a -a ]witha = y_N^2v^2/M_Nv_s^2/Λ^2, b = y_s^2 v_ξ^2/Λ^2v_a^2/Λ^2ℱ(m_η_R,m_η_I,M_f)M_f ≡κ^2 ℱ(M_η_R,M_η_I,M_f)M_f,whereℱ is the loop function defined in Eq. (<ref>). Clearly, from Eq. (<ref>) to Eq. (<ref>), it is evident that the parameters a and b originate from type I-seesaw and scotogenic contributions, respectively. In the next section, we show how these parameters' relative magnitude helps us explain the hierarchy of the atmospheric and solar oscillation mass scales. Though the neutrino mass matrix given in Eq. 
(<ref>) is obtained through a combination of type-I seesaw and scotogenic mechanisms, there can be additional operators like LHLH/Λ,contributing to the light neutrino masses. In our model,this higher dimensional term is not invariant explicitly under the Z_4 symmetry given in Tab. <ref>. Also, terms like LHLH (ϕ_a, ϕ_s, ϕ_T,ξ)/Λ^2 are disallowed due to the considered discrete Z_N symmetries. For the same Z_N symmetries, the scotogenic contribution (L̅iσ_2 η^* f) is only allowed at 1/Λ^2 with the involvement of flavons ϕ_a and ξ, which are both odd under the Z_2 symmetry along with f and η.Owing to the A_4 symmetry, in the charged lepton sector, the leading order contribution appears only at dimension-5. However, there could be a next-to-leading correction at 𝒪(1/Λ^2) via (L̅ϕ_s^†ϕ_a) Hα_R/ Λ^2, where α_R is the corresponding right-handed charged lepton. Interestingly, such a contribution is disallowed due to the Z_2 symmetry given in Tab. <ref>. As the right-handed Majorana neutrino present in our model is also a singlet under A_4 symmetry, any higher-order correction can be absorbed in the leading order contribution toM_N. For the same reason, we can also absorb any higher order contribution toM_f as it does not affect the flavor structure of our model.Finally, the Dirac Yukawa coupling is allowed at dimension-5 as given in Eq. (<ref>). The next-to-leading order contribution at 𝒪(1/Λ^2) can be written as (L̅ϕ_a^†ϕ_T) H̃N_R and (L̅ϕ_s^†ϕ_T) H̃N_R, which are not allowed due to Z_2 and Z_4 symmetries.§ NEUTRINO MASSES AND MIXINGS IN THE FSS_1 MODELThe model we presented in the last section has two parts. One is coming from a type-I seesaw with one right-handed neutrino N_R. Another part is the scotogenic contribution with the dark fermion f. The full light neutrino mass matrix is given in Eq. (<ref>), and both contributions are essential in explaining observed neutrino masses and mixing. To diagonalize the mass matrix in Eq. (<ref>),we first write the mass matrix in the TBM basis asM_ν^'=U_TB^T M_νU_TB=[000;0 3b -√(6)b;0 -√(6)b 2(b-a) ],whereU_TB=[√(2/3)√(1/3) 0; -√(1/6)√(1/3) -√(1/2); -√(1/6)√(1/3)√(1/2) ].As evident from Eq. (<ref>), a further rotation by U_23 (another unitary matrix) in the 23 plane will diagonalize the light neutrino mass matrixvia M_ν^ diag=U_23^T M_ν^'U_23. The unitary rotation matrix U_23 can be parameterizedas U_23=[100;0 cosθ sinθ e^-iψ;0 -sinθ e^iψ cosθ ]where θ and ψ are the rotation angle and the associated phase factor,respectively.So, the diagonalization ofM_ν can be achieved through(U_TBU_23)^T M_ν(U_TBU_23)= diag(m_1 e^iγ_1,m_2 e^iγ_2,m_3 e^iγ_3)where m_1,2,3 are the real and positive mass eigenvalues, and γ_1,2,3 are the phases that are extracted from the corresponding complex eigenvalues. In our framework, we have only one right-handed neutrino, which, via type-I seesaw (the first term in Eq. (<ref>)), yields arank 1 mass matrix which makes one light neutrino massive. Together with the scotogenic contribution, we obtain a rank 2 mass matrix given in Eq. (<ref>),generating two massive neutrinos. Hence,within this flavor symmetric construction,one mass eigenvalue (lightest) will be zero. So, we have m_1=0, which implies γ_1=0. Now, we can get the form of neutrino mixing matrix U_ν such that U_ν^T M_νU_ν= diag(0,m_2,m_3). Thus, U_ν becomes U_ν=U_TBU_23U_m, where U_m= diag(1,1,e^iα_32/2) is the Majorana phase matrix with α_32=γ_3-γ_2. Therefore, we have only one non-zero phase in the Majorana phase matrix U_m as the lightest neutrino is massless. 
The explicit form of U_ν followsU_ν=[√(2/3) cosθ/√(3)e^-iψsinθ/√(3); -1/√(6) cosθ/√(3)+e^iψsinθ/√(2) -cosθ/√(2)+e^-iψsinθ/√(3); -1/√(6) cosθ/√(3)-e^iψsinθ/√(2)cosθ/√(2)+e^-iψsinθ/√(3) ]U_m.This form of U_ν is well known in the literature as a deviation from U_ TBM and is called the TM_1 mixing pattern. The lepton mixing matrix U_ν can now be compared with U_PMNS which in its standard parametrization is given by <cit.>U_ PMNS=[ c_12c_13 s_12c_13s_13e^-iδ_ CP; -s_12c_23-c_12s_23s_13e^iδ_ CPc_12c_23-s_12s_23s_13e^iδ_ CP s_23c_13;s_12s_23-c_12c_23s_13e^iδ_ CP -c_12s_23-s_12c_23s_13e^iδ_ CP c_23c_13 ]U_m,where c_ij=cosθ_ij, s_ij=sinθ_ij,δ_ CP is the Dirac CP violating phase and U_m is the Majoranaphase matrix. We can see that the total light neutrino mass matrix of Eq. (<ref>) contains two parameters a and b associated with the type-I seesaw and scotogenic contributions, which can be complex in general. We can write these parameters as a=|a|e^iϕ_a and b=|b|e^iϕ_b where ϕ_a and ϕ_b are the associated phases. For calculational purpose,we define the parameter α=|a|/|b| and the difference of phases by ϕ_ab=ϕ_a-ϕ_b. As M_ν^' is diagonalized by U_23, the rotation angle θ and the phase ψ appearing in Eq. (<ref>) can be expressed in terms of the model parameters astanψ=2αsinϕ_ab/5-2αcosϕ_ab,tan2θ=2√(6)/cosψ+2αcos(ψ+ϕ_ab).As the charged lepton mass matrix is diagonal, to obtain the correlation among the mixing angles and phases, we can compare U_ν=U_TBU_23U_m of Eq. (<ref>) with U_ PMNS given in Eq. (<ref>). These correlations can be written as <cit.> sinθ_13e^-iδ_ CP=e^-iψsinθ/√(3),sin^2θ_12=1-2/3-sin^2θ, sin^2θ_23=1/2(1-√(6)sin2θcosψ/3-sin^2θ).The above relations among the three mixing angles imply a mutual correlation. These correlations are the unique feature of the considered A_4 flavor symmetry, giving rise to theTM_1 mixing scheme. More specifically, relations in Eq. (<ref>) are general for the TM_1 mixing scheme <cit.> where the mixing angles θ_13, θ_12 are being correlated to each other. The correlation plot among these mixing angles can be found in Ref. <cit.> where sin^2θ_12 is restricted to some narrow range corresponding to the 3σ regions of sin^2θ_13.Relations in Eq. (<ref>) are unique for the considered FSS_1 model. From Eqs. (<ref>)-(<ref>), it is clear that the angleθ and the associatedphase ψ in U_23 can be linked with the parameters involved in M_ν. Relations in Eq. (<ref>) imply that δ_ CP=ψ when sinθ>0, and δ_ CP=ψ±π for sinθ<0 which can be written in a compact form as tanδ_ CP=tanψ. Now, from Eq. (<ref>), the complex mass eigenvalues are calculated to be m_1^c = 0,m_2^c = 1/2(-2a+5b-√(4 a^2+4ab+25 b^2)),m_3^c = 1/2(-2a+5b+√(4 a^2+4ab+25 b^2)).The real and positive mass eigenvalues are calculated asm_1 = 0,m_2 = |b|/2[(5-2αcosϕ_ab-P)^2+(Q+2αsinϕ_ab)^2]^1/2,m_3 = |b|/2[(5-2αcosϕ_ab+P)^2+(Q-2αsinϕ_ab)^2]^1/2.whereP^2=M±√(M^2+N^2)/2, Q^2=-M±√(M^2+N^2)/2, M=25+4αcosϕ_ab+4α^2cos2ϕ_ab, N= 4αsinϕ_ab+4α^2sin2ϕ_ab.Now, from Eq.(<ref>) to Eq. (<ref>), we get the phases associated with the complex eigenvalues m^c_1,2,3. These phases can be written as γ_i=ϕ_b+ϕ_i, i=2,3. i=1 is excluded here as the lightest mass eigenvalue is zero, the phase associated with m_1^c is γ_1=0. 
Now, ϕ_2,3 in our model can be written asϕ_2=tan^-1(Q+2αsinϕ_ab/5-2αcosϕ_ab-P),ϕ_3=tan^-1(Q-2αsinϕ_ab/5-2αcosϕ_ab+P)Using the above relations, we can calculate theMajorana phase α_32 in U_m, which can be written as α_32=tan^-1(Q-2αsinϕ_ab/5-2αcosϕ_ab+P)-tan^-1(Q+2αsinϕ_ab/5-2αcosϕ_ab-P).The phase ϕ_b is irrelevant while calculating the Majorana phase as it is the difference between γ_3 and γ_2. Finally, the Jarlskog invariant J_ CP <cit.>J_ CP=ℐ(U_11U_22U_12^* U_21^*)=s_12c_12s_13c_13^2 s_23c_23sinδ_ CPwill be used to quantify the CP violation in the FSS_1 model. From Eqs. (<ref>)-(<ref>), we observe that the mixing angles and all the phases depend on parameters α and ϕ_ab while the light neutrino masses depend on these parameters as well as on |b|. Now, we will estimate these model parameters (α, |b| and ϕ_ab ) using neutrino oscillation data on neutrino mixing angles and mass squared differences. With measured values <cit.> of mixing angles θ_13, θ_12 and θ_23,mass-squared differences Δ m_21^2,|Δ m_31^2| (mentioned in Eq. (<ref>), taken from <cit.>) and the ratio r defined in Eq. (<ref>),we first estimateα and the phase ϕ_ab using the 3σ range of neutrino oscillation data. The allowed ranges for α and ϕ_ab are plotted in the left panel of Fig <ref> in the α-ϕ_ab plane.Here, we find that the allowed ranges of α vary between 4.82-5.27 whereas two distinct regions of ϕ_ab are allowed between 4.72-4.76 and 5.03-5.06 radian. As mentioned earlier, the effective light neutrino mass matrix in the FSS_1 model has rank 2 due to the considered A_4 symmetry. Hence we obtain two massive light neutrinos as given inEqs. (<ref>)-(<ref>), predictingonly the normal ordering (NO) of light neutrino masses. To obtain the absolute values ofm_2 and m_3, we need to find the overall factor |b| appearing in Eqs. (<ref>) and (<ref>). Though the factor |b| cancels out while calculating r, it can be calculated by fitting solar or atmospheric mass-squared differences after knowing α and ϕ_ab from the left panel of Fig. <ref>. After evaluating |b|, |a| can be easily estimated using the relation|a|=α|b|. Hence, in the right panel of Fig. <ref>, we have plotted the allowed region in the |a|-|b| plane for the 3σ range of neutrino oscillation data. Corresponding to two distinct regions of ϕ_ab in the left panel there also exist two distinct regions of the parameters |a| as shown in the right panel of Fig. <ref>. Now, from Eqs. (<ref>) and (<ref>), we find that the light neutrino masses are functions of both a andb,whose origin lies in the type-I seesaw and the scotogenic contributions, respectively. Since m_1=0 in the FSS_1 framework, m_2 and m_3 are proportional to the solar and atmospheric mass-squared differences. Hence, in Fig <ref>, we have plotted variation of |b| with respect to |a| (represented by the color variation from blue to red) to reproduce correct r. This plot shows that the hierarchy between |a| and |b| essentially explains the observed value of the ratio of the solar to atmospheric mass-squared differences r, where |a| is the dominant contribution originated from the type-I seesaw.With the allowed values of α, ϕ_ab, |a| and|b|obtained from Fig. <ref>, we are in a position to study the correlations among neutrino mixing angles, phases, and masses. Due to the presence of the A_4 discrete flavor symmetry we have realized the TM_1 mixing scheme yielding interesting correlations among the observables appearing in the neutrino mixing. 
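To make this parameter estimation concrete, a minimal numerical cross-check is sketched below. It builds the mass matrix of Eq. (<ref>) for one illustrative benchmark point, α = 5, ϕ_ab = 4.74 rad and |b| = 5×10^-3 eV (values we pick from the allowed regions of Fig. <ref> purely for illustration), and extracts the masses and TM_1 mixing angles by diagonalizing M_ν^† M_ν; for this benchmark the output should land inside the experimentally allowed ranges quoted earlier.

# Minimal cross-check of the FSS_1 mass matrix: build M_nu for an illustrative
# benchmark (alpha, phi_ab, |b|) and extract masses and mixing numerically.
import numpy as np

alpha, phi_ab, b = 5.0, 4.74, 5.0e-3    # |a|/|b|, relative phase [rad], |b| [eV] (illustrative)
a = alpha * b * np.exp(1j * phi_ab)     # type-I seesaw parameter (b taken real and positive)

M = np.array([[b,    2*b,      0],
              [2*b, -a + 4*b,  a],
              [0,    a,       -a]])     # effective light-neutrino mass matrix [eV]

w, U = np.linalg.eigh(M.conj().T @ M)   # w = m_i^2 in ascending order, U = mixing matrix
m1, m2, m3 = np.sqrt(np.maximum(w, 0.0))            # normal ordering with m1 = 0

s13sq = abs(U[0, 2])**2                              # sin^2(theta_13) = |U_e3|^2
s12sq = abs(U[0, 1])**2 / (1 - s13sq)                # sin^2(theta_12)
s23sq = abs(U[1, 2])**2 / (1 - s13sq)                # sin^2(theta_23)
r = (m2**2 - m1**2) / (m3**2 - m1**2)                # solar-to-atmospheric ratio
J = np.imag(U[0, 0]*U[1, 1]*np.conj(U[0, 1])*np.conj(U[1, 0]))   # Jarlskog invariant

print(f"m2 = {m2*1e3:.2f} meV, m3 = {m3*1e3:.2f} meV, r = {r:.3f}")
print(f"sin^2(th12) = {s12sq:.3f}, sin^2(th13) = {s13sq:.4f}, sin^2(th23) = {s23sq:.3f}, |J| = {abs(J):.3f}")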
It is well known that there are still some unsettled issues in the measurement of θ_23 and δ_ CP <cit.>. These are (i) the octant of θ_23 (i.e., whether θ_23<45^o or θ_23>45^o) and (ii) the precise measurement of δ_ CP. Following Eqs. (<ref>) and (<ref>), we find a correlation between the atmospheric mixing angle θ_23 and the Dirac CP phase δ_ CP for the TM_1 mixing scheme. Together with Eq. (<ref>) and Fig. <ref>, within the FSS_1 framework the predictions regarding θ_23 and δ_ CP for the TM_1 scheme get constrained further, as plotted in the left panel of Fig. <ref>. Here, the gray-shaded region represents the TM_1 prediction in the θ_23-δ_ CP plane, while the red-shaded region is the prediction of the FSS_1 framework. We find that our model prefers the higher octant of θ_23 for narrow regions of δ_ CP. The allowed regions of sin^2θ_23 are 0.531-0.544 and 0.580-0.595, whereas the allowed regions of δ_ CP are ±(1.44-1.12) rad. Here, the relative phase between the type-I and scotogenic contributions (denoted by ϕ_ab) is the source of CP violation in the lepton sector, see Eq. (<ref>) and the subsequent discussion. Hence, in the right panel of Fig. <ref>, we have plotted the dependence of δ_ CP on ϕ_ab (the relative phase between a and b), denoted by the red-shaded regions. It is established that the Majorana phases cannot be constrained from neutrino oscillation data directly, as they do not appear in the neutrino oscillation probability <cit.>. In the FSS_1 framework, the Majorana phase α_32 can be constrained using Eq. (<ref>) with the allowed ranges for α and ϕ_ab. Hence, in the left panel of Fig. <ref>, we show the correlation among the CP phases in the α_32-δ_CP plane, and the Majorana phase is found to be within the ranges 0.82-0.95 and 1.58-1.67 radian. Estimating the Majorana phase plays a crucial role in predicting the effective mass parameter appearing in neutrinoless double beta decay <cit.>. Now, following Eq. (<ref>), we have plotted the Jarlskog invariant J_ CP as a function of ϕ_ab in the right panel of Fig. <ref>. Here the magnitude of J_ CP is found to be within the ranges 0.0290-0.0313 and 0.0318-0.0344. Finally, with the allowed parameter space obtained in Fig. <ref>, we make predictions for the light neutrino masses (m_2,m_3), their sum (∑ m_i) and the effective mass parameter appearing in neutrinoless double beta decay (m_ββ), as summarized in Table <ref>. The prediction for ∑ m_i is consistent with cosmological observations <cit.>, whereas the prediction for m_ββ falls below the upper limit projected by the next-generation double beta decay experiment nEXO <cit.>.

§ SCALAR POTENTIAL

The FSS_1 model considered here consists of two SU(2) doublet scalars H and η. To obtain the flavor structure of the leptons we have four flavons ϕ_s, ϕ_a, ϕ_T and ξ, as mentioned in Tab. <ref>. These SU(2) singlet flavons are considered to be very heavy compared to H and η and hence remain decoupled from the low-energy phenomenology of the scalars. The low-energy scalar potential of the model can be written as

V = -μ_1^2(H^†H)+μ_2^2(η^†η)+λ_1 (H^†H)^2+λ_2 (η^†η)^2+λ_3 (H^†H)(η^†η)+λ_4 (H^†η)(η^†H) +λ_5/2{(H^†η)(H^†η)+h.c.}

The doublets in the model can be parameterized as

H=[H^+; v/√(2)+(h+iζ)/√(2) ],η=[ η^+; (η_R+iη_I)/√(2) ].

Electroweak symmetry breaking proceeds through the vacuum configuration

H=[0; v/√(2) ],η=[ 0; 0 ].

The above symmetry breaking pattern ensures that the Z_2 symmetry remains unbroken and results in two CP-even scalars (h,η_R) and one CP-odd neutral scalar η_I, in addition to a pair of charged scalars (η^±).
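For reference, the tree-level spectrum that follows from this potential, the standard inert-doublet expressions which invert to the relations quoted below, reads

m_h^2 = 2λ_1 v^2, M_η^±^2 = μ_2^2 + 1/2λ_3 v^2, M_η_R^2 = μ_2^2 + 1/2(λ_3+λ_4+λ_5) v^2, M_η_I^2 = μ_2^2 + 1/2(λ_3+λ_4-λ_5) v^2,

so that the η_R-η_I mass splitting is controlled entirely by λ_5, the same coupling that sets the size of the scotogenic loop contribution.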
Due to the dark Z_2 symmetry, there is no mixing between h and η_R, and h plays the role of the SM Higgs boson. The Z_2 symmetry also ensures the stability of the lightest scalar (η_R or η_I) that can act as a dark matter candidate. The masses of all scalars can be written in terms of the following parameters{μ_2,λ_1,λ_2,λ_3,λ_4,λ_5}.These parameters can be written in terms of physical masses of scalars as <cit.>λ_1=m_h^2/2 v^2,λ_3=2/v^2(M_η^±^2-μ_2^2), λ_4=M_η_R^2+M_η_I^2- 2 M_η^±^2/v^2,λ_5=M_η_R^2-M_η_I^2/v^2.We can choose all the λs as free parameters or, equivalently the four physical scalar masses, λ_2 and μ_2, namely{μ_2^2,m_h, M_η_R, M_η_I, M_η^±,λ_2}.The quartic couplings are constrained theoretically by perturbativity and vacuum stability. We force the scalar potential to be perturbative which requires all quartic couplings of the scalar potential to obey |λ_i|≤ 8π.To get the scalar potential to be bounded from below, the following conditions can be obtained <cit.>λ_1,2>0 andλ_3+λ_4-|λ_5|+2√(λ_1 λ_2) >0andλ_3+2√(λ_1 λ_2)>0.Eq. (<ref>) give constraints based on the bare couplings of the Lagrangian. Another approach with running parameters of the model evaluated at the cut-off scale Λ of the theory is possible, see for instance <cit.>. Apart from these theoretical constraints, λ_3, λ_4 and λ_5 given in Eq. (<ref>) can also be constrained from experimental and phenomenological constraints. As we will discuss in the subsequent sections,λ_5 is crucially relevant in determining the scotogenic Yukawa coupling and hence is constrained from DM relic density, direct search constraints as well as the neutrino phenomenology. Similarly, λ_3,4 can also be constrained from DM direct search as well as SM Higgs diphoton signal strength.The presence of the doublet scalar η in our model can have important consequences in the context of CDF-II W-boson mass anomaly <cit.>, for instance see<cit.>,as it can affect the EW precision observables S, T, and U <cit.>.Through the self-energy correction of the W-boson with the doublet scalar in the loop, the W-boson mass can be increased from the SM prediction to the value obtained by the CDF-II collaboration. Parameterizing the new physics effects in terms of the S,T,U parameters as <cit.>:S = 1/12πlnM_η^0^2/M_η^+^2,T = G_F/4√(2)π^2 α_em(M_η^0^2+M_η^+^2/2-M_η^0^2 M_η^+^2/M_η^0^2-M_η^+^2lnM_η^+^2/M_η^0^2), U = 1/12π[(M_η^0^2+M_η^+^2)(M_η^0^4-4 M_η^0^2 M_η^+^2+M_η^+^4)ln(M_η^+^2/M_η^0^2)/(M_η^+^2-M_η^0^2)^3-5 M_η^0^4-22 M_η^0^2 M_η^+^2+5 M_η^+^4/3 (M_η^+^2-M_η^0^2)],where η_0= (η_R+iη_I)/√(2), the W boson mass can be written as <cit.>M_W^2=(M_W^2)_ SM+α_emcos^2θ_W/cos^2θ_W-sin^2θ_WM_Z^2[-1/2S+cos^2θ_W T+(cos^2θ_W-sin^2θ_W)/4 sin^2θ_WU].where α_em is the fine structure constant, θ_W is the Weinberg angle, and (M_W^2)_ SM is the SM predicted value of W boson mass. The dominant correction to M_W comes from the T-parameter which is very much sensitive to the mass difference between the charged scalar and the neutral scalar components of the inert doublet. And the CDF-II W mass can be obtained if the mass difference between η^+ and η_0 is around 80 to 100 GeV. However, we should stress that CDF-II data on W mass are in contradiction with global electroweak e^+e^- fits and recent ATLAS LHC analysis, with systematic uncertainty improved by 15% <cit.> and optimised reconstruction of the W-boson transverse momentum <cit.>. 
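As a rough numerical illustration of this statement, the short sketch below evaluates the S and T shifts and the resulting W-boson mass for one assumed mass point; the benchmark masses and the input values are our illustrative choices, not those used in the figures. The T parameter is evaluated with the standard two-point function F(x,y) = (x+y)/2 - x y ln(x/y)/(x-y), and the subleading U contribution is neglected.

# Rough estimate of the inert-doublet shift in M_W via the oblique S and T
# parameters (U neglected, since T dominates).  All inputs are illustrative.
import math

M_eta0, M_etaP = 700.0, 800.0            # neutral / charged eta masses [GeV] (assumed)
alpha_em, G_F = 1/137.036, 1.1664e-5     # fine-structure constant, Fermi constant [GeV^-2]
M_Z, M_W_SM = 91.19, 80.357              # Z mass and SM-predicted W mass [GeV]
cw2 = M_W_SM**2 / M_Z**2                 # cos^2(theta_W)
sw2 = 1 - cw2

def F(x, y):
    """Standard two-point function entering T: (x+y)/2 - x*y*ln(x/y)/(x-y)."""
    return 0.5*(x + y) - x*y*math.log(x/y)/(x - y)

S = math.log(M_eta0**2 / M_etaP**2) / (12*math.pi)
T = G_F/(4*math.sqrt(2)*math.pi**2*alpha_em) * F(M_etaP**2, M_eta0**2)

dMW2 = alpha_em*cw2/(cw2 - sw2) * M_Z**2 * (-0.5*S + cw2*T)   # U -> 0
M_W = math.sqrt(M_W_SM**2 + dMW2)
# For a ~100 GeV splitting this gives an upward shift of roughly 80 MeV,
# in the ballpark of the CDF-II central value.
print(f"S = {S:.4f}, T = {T:.3f}, M_W ≈ {M_W:.3f} GeV (shift ≈ {1e3*(M_W - M_W_SM):.0f} MeV)")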
§ DM PHENOMENOLOGY FOR THE FSS_1 MODEL

In this FSS_1 framework, both the type-I seesaw and scotogenic mechanisms are combined to obtain the correct neutrino masses and mixing. The scotogenic contribution contains two potential candidates for DM: the lightest neutral scalar and the singlet fermion. Determining the DM relic density hinges on these candidates' production mechanisms in the early Universe. While the literature extensively covers the scalar DM phenomenology[For scalar dark matter phenomenology within the scoto-seesaw framework, see Ref. <cit.>], which aligns with the inert doublet model (IDM) perspective, our focus here is on the singlet fermion f, a Z_2-odd particle in the scoto-seesaw scenario. We explore various mechanisms that can yield the correct relic density and delve into the associated parameter space. Since f is a gauge singlet, its production mechanism is intricately tied to its Yukawa couplings with the SM leptons and the inert doublet scalar η, see Eq. (<ref>) and Eq. (<ref>). The magnitude of these Yukawa couplings plays a pivotal role in determining whether the correct relic density can be achieved through the thermal freeze-out or the freeze-in mechanism.

Relic Density of DM: As outlined in the preceding section, in our FSS_1 model the scotogenic contribution to the neutrino mass is parameterized by the parameter b given in Eq. (<ref>); since b also plays a crucial role in explaining the observed neutrino oscillation data, it is constrained to the range |b| ∈ [0.0048,0.0056] eV. Thus, to obtain the magnitude of the Yukawa couplings that can satisfy this constraint for loop-particle masses of the order 𝒪(1-10^3) GeV, we perform a numerical scan, the result of which is shown in the plane of κ versus λ_5 in Fig. <ref>. We have used Eq. (<ref>) and Eq. (<ref>) to obtain the estimates of κ and λ_5. It is worth noticing from the neutrino mass expression that in the limit λ_5→0, the scotogenic contribution to the neutrino mass vanishes. This is due to the fact that in this limit the CP-even and CP-odd scalars η_R and η_I become degenerate and thus ℱ(M_η_R,M_η_I,M_f)→ 0. Thus, to satisfy the constraint on b from neutrino oscillation data, κ must be enhanced if λ_5 is made small, and vice versa. We see that with this constraint on b, it is not possible to obtain Yukawa couplings smaller than 𝒪(10^-6) even if λ_5 is 𝒪(1).

Consequently, the singlet fermion also remains in thermal equilibrium with the SM bath. This equilibrium is guaranteed by the doublet scalar η, which, due to its gauge interactions, consistently maintains equilibrium with the SM bath during the early stages of the Universe. Hence, the DM relic density is governed by the WIMP mechanism. Several pertinent processes contribute to the relic density of DM. Specifically, the essential parameters influencing the relic density are the Yukawa couplings and the mass difference between the singlet fermion f and the other particles in the dark sector, namely η_R,I, η^±. For WIMP-type DM, which is produced thermally in the early Universe, the relic abundance can be obtained by solving the Boltzmann equation for the evolution of the DM number density

d n/dt + 3H n = -⟨σ v ⟩_ eff (n^2 - (n^eq)^2)

where n=∑_i n_i represents the total number density of all the dark sector particles and n^ eq is the equilibrium number density. ⟨σ v ⟩_ eff represents the effective annihilation cross-section, which takes into account all number-changing processes relevant for DM freeze-out.
It can be written as <cit.>:⟨σ v ⟩_ eff = g^2_f/g^2_ eff⟨σ v⟩_ff+g_f g_η_R/g^2_ eff⟨σ v⟩_fη_R (1+Δ_η_R)^3/2exp(-xΔ_η_R) + g_f g_η_I/g^2_ eff⟨σ v⟩_fη_I (1+Δ_η_I)^3/2exp(-xΔ_η_I) + g_f g_η^±/g^2_ eff⟨σ v⟩_f η^± (1+Δ_η^±)^3/2exp(-xΔ_η^±)+g^2_η_R/g^2_ eff⟨σ v⟩_η_R η_R (1+Δ_η_R)^3exp(-2xΔ_η_R)+ g_η_Rg_η_I/g^2_ eff⟨σ v⟩_η_R η_I (1+Δ_η_R)^3/2(1+Δ_η_I)^3/2exp(-x(Δ_η_R+Δ_η_I))+ g_η_R g_η^±/g^2_ eff⟨σ v⟩_η_R η^± (1+Δ_η_R)^3/2(1+Δ_η^±)^3/2exp(-x(Δ_η_R+Δ_η^±))+ g^2_η_I/g^2_ eff⟨σ v⟩_η_Iη_I (1+Δ_η_I)^3exp(-2xΔ_η_I)+g^2_η^±/g^2_ eff⟨σ v⟩_η^±η^∓ (1+Δ_η^±)^3exp(-2xΔ_η^±) + g_η_I g_η^±/g^2_ eff⟨σ v⟩_η_I η^± (1+Δ_η_I)^3/2(1+Δ_η^±)^3/2exp(-x(Δ_η_I+Δ_η^±)), where g_f, g_η_R, g_η_I and g_η^± represent the internal degrees of f,η_R,η_I and η^± respectively and Δ_i stands for the ratio (M_i-M_f)/M_f with M_i denoting masses of η_R,η_I,η^±. Here g_ eff is the effective degree of freedom which is given byg_ eff =g_f + g_η_R (1+Δ_η_R)^3/2exp(-xΔ_η_R)+ g_η_I (1+Δ_η_I)^3/2exp(-xΔ_η_I)+g_η^± (1+Δ_η^±)^3/2exp(-xΔ_η^±),and x is the dimensionless parameter M_f/T.The relic density of DM f can then be evaluated as :Ω_f h^2 = 1.09 × 10^9GeV^-1/√(g_* )M_Pl[∫_x_ F.O.^∞ dx ⟨σ v ⟩_ eff/x^2]^-1.Here M_Pl is the Planck mass, x_ F.O. =M_f/T_ F.O., and T_ F.O. denotes the freeze-out temperature of f. For this scenario we have implemented the model in<cit.> to calculate the relic abundance of f. As evident from Eq.(<ref>),the mass difference between the dark sector particles, namely f and η, along with the coupling κ is pivotal in determining the ultimate relic abundance of dark matter in this configuration. Smaller mass splittings can induce effective co-annihilations between η and f, potentially reducing the relic abundance to the observed ballpark. The dominant number changing processes relevant in governing the relic density are as shown in Figs. <ref>, <ref>, and <ref>.Clearly, the processes pivotal in establishing the relic abundance of dark matter fall into three distinct categories: I) the annihilation of dark matter particles into both charged and neutral SM leptons (Fig. <ref>), II) the co-annihilation of dark matter particles with scalar particles from the dark sector (Fig. <ref>), and III) the co-annihilation contribution arising from the annihilation of dark-sector scalars (Fig. <ref>). As denoted by Eq. ( <ref>), the co-annihilation contribution to the effective annihilation cross-section ⟨σ v⟩_ eff is predominantly shaped by the mass difference between dark matter and the dark scalars. To elucidate the influence of Yukawa couplings and mass splitting on the relic abundance of dark matter, in Fig. <ref>,we illustrate the variation of relic density with the dark matter mass. In the left panel of Fig. <ref>, the Yukawa coupling κ is varied within the range κ∈ [10^-4,10^-3], while the mass difference between the lightest neutral scalar η_R and f (i.e., M_η_R-M_f) is varied in three different ranges, as indicated in the figure's inset. Evidently, an increase in the mass difference leads to a corresponding increase in the relic density. This trend arises because the co-annihilation contribution to ⟨σ v ⟩_ eff gradually diminishes with an increase in (M_η_R-M_f), thereby boosting the relic abundance of f.Expanding the analysis, in the right panel of Fig. <ref>, the mass difference (M_η_R-M_f) is varied within a small range of [50,60] GeV and the variation of relic density with dark matter mass is then showcased for three different ranges of Yukawa couplings, as outlined in the figure's inset. 
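For orientation, a minimal sketch of how Eq. (<ref>) for Ω_f h^2 converts an effective cross-section into a relic abundance is given below, assuming a temperature-independent ⟨σ v ⟩_ eff and typical values x_ F.O.≈ 25 and g_*≈ 100 (both assumptions of ours); it reproduces the familiar result that Ω_f h^2 ≈ 0.12 requires ⟨σ v ⟩_ eff of a few times 10^-26 cm^3/s.

# Quick freeze-out estimate for a constant effective annihilation cross-section.
# x_FO and g_* are assumed typical WIMP values, used here purely for orientation.
import numpy as np
from scipy.integrate import quad

M_Pl, g_star, x_FO = 1.22e19, 100.0, 25.0   # Planck mass [GeV], rel. d.o.f., M_f/T at freeze-out

def omega_h2(sigma_v):
    """Relic density for a temperature-independent <sigma v>_eff given in GeV^-2."""
    integral, _ = quad(lambda x: sigma_v / x**2, x_FO, np.inf)
    return 1.09e9 / (np.sqrt(g_star) * M_Pl * integral)

sv = 1.86e-9   # GeV^-2, roughly 2.2e-26 cm^3/s
print(f"Omega h^2 ≈ {omega_h2(sv):.3f} for <sigma v>_eff = {sv:.2e} GeV^-2")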
It is evident that an increase in Yukawa coupling leads to a decrease in relic density, attributed to the increase in ⟨σ v ⟩_ eff. Additionally, an intriguing observation from this figure is that, when (M_η_R-M_f)∈ [50,60] GeV and Yukawa couplings are small (i.e., κ≲𝒪(10^-3)), the relic density does not change with further reduction in Yukawa couplings, for dark matter masses exceeding 500 GeV, as indicated by the red and blue-colored points. This phenomenon can be explained by the fact that, in scenarios with small Yukawa couplings, neither the annihilation of DM nor the co-annihilation of DM with dark sector scalars efficiently affects the relic density. Instead, it is primarily determined by the co-annihilation contribution from the annihilation of dark scalar partners. Conversely, in situations where Yukawa couplings are large and the mass difference is substantial, the relic density is primarily influenced by the annihilation of DM, as indicated by the purple-colored points.Thus, in summary of the effects that affect the relic density, in scenarios characterized by small Yukawa couplings and substantial mass differences (Δ_i), the relic density is predominantly governed by theco-annihilation contribution from the dark scalars. In such scenarios, DM annihilation becomes subdominant, and co-annihilation among dark matter and dark scalars is suppressed due to the large mass splitting. Conversely, when the mass difference between dark matter and dark scalars is not considerably large, co-annihilation among DM and dark scalars, as well as dark scalar annihilations, play a crucial role in determining the relic density. Only in cases where Yukawa couplings are significantly large, and the mass difference is also substantial, dark matter annihilations become relevant for achieving the correct relic density.We present the parameter space satisfying correct relic density in the plane of M_f and κ with the color code representing the corresponding value of M_η_R-M_f in Fig. <ref>. The grey-colored points are ruled out by imposing a conservative limit on the doublet scalar mass given by the LEP experiment of about M_η≥ 100 GeV. It is evident that, when DM mass is small and κ is small, the effective annihilation cross-section is very small and thus it is not possible to achieve correct relic density even with co-annihilation contributions. Thus we obtain a over abundant region below M_f around 30 GeV and κ≲𝒪(10^-2). In the small DM mass range M_f ≲ 100 GeV, correct relic density can be obtained only when the Yukawa couplings are significant i.e. κ∼𝒪(1) such that DM annihilation cross-section is appropriate to match the thermal cross-section as in this region the co-annihilation contributions are negligible. When DM mass is greater than 100 GeV, and Yukawa coupling κ≲𝒪(10^-2), we see that with increase in DM mass, the M_η_R-M_f shows a gradual decrease to achieve the correct relic density. This is attributed to the fact that as the DM mass increases, the effective cross-section gradually decreases thereby increasing the relic density and thus it needs more effective co-annihilations which is possible by decreasing the M_η_R-M_f, to bring the relic density to correct ballpark. We also observe an under-abundant region when κ≳𝒪(10^-1) even with very large M_η_R-M_f. This is due to the fact that, with very large κ the DM annihilation cross-section is large. 
So even if the co-annihilation contribution is suppressed because of a large M_η_R-M_f, it is still not possible to achieve the correct relic density.

Direct Detection of DM: As the sole interaction connecting f with SM particles is the Yukawa term in Eq. (<ref>), direct interactions between quarks and dark matter are absent at the tree level. However, at the one-loop level, f can have effective couplings with various SM particles, such as the photon, Z boson, and Higgs boson. Specifically, the exchange of the Z boson results in an effective axial-vector interaction, which gives rise to spin-dependent DM-nucleon scattering and is dominant only when the couplings between the Higgs and η are very small. Moreover, the spin-dependent DM-nucleon cross-section is less tightly constrained by experiments than the spin-independent one. Thus, we focus here on the spin-independent DM-nucleon scattering rate, which is stringently constrained by direct-search experiments. The detection rate of dark matter particles within a detector can experience an amplification if the quartic couplings λ_3 and λ_4 are significant. When this condition is met, the exchange of Higgs bosons, as depicted in Fig. <ref>, leads to the emergence of an effective scalar interaction term between the quark q and the dark matter particle f. This interaction is effectively described by S_q q̅q f̅ f, where

S_q= -κ^2/(16π^2M_h^2M_f)[λ_3𝒢(M_f^2/M_η^±^2)+(λ_3+λ_4)/2𝒢(M_f^2/M_0^2)],

with M_0 the mass of the neutral component of η and the loop function 𝒢(x) defined as

𝒢(x)= [x+(1-x)ln(1-x)]/x,

whose value spans between 0 and 1 for 0≤ x≤ 1. This interaction leads to a spin-independent cross-section σ_ SI for the scattering of f off a proton, given by

σ_ SI = 4/πM_f^2m_p^2/(M_f+m_p)^2m_p^2 S^2_q f_p^2,

where f_p represents the scalar form factor. We show the DM-nucleon scattering cross-section as a function of the DM mass for the points satisfying the correct relic density in Fig. <ref>. Because of the loop suppression, we observe that even when the Yukawa and scalar quartic couplings are large, none of the points are ruled out, and the parameter space remains safe from the DM direct search constraints. However, interestingly, future experiments like XENON-nT <cit.> and DARWIN <cit.> with enhanced sensitivity can probe the Yukawa coupling κ down to 𝒪(0.1).

§ HIGGS BOSON IN THE DIPHOTON DECAY CHANNEL

The SM Higgs boson has a mass of m_h ≃ 125 GeV <cit.>, and one of its main decay channels is the diphoton channel, where the SM rate for h→γγ is dominated by the W-boson loop contribution. The signal strength of h→γγ is the ratio between the observed cross section pp → h →γγ and the same quantity computed in the SM. The observed cross section pp→ h →γγ should match the FSS_1 model prediction. Since the dominant Higgs boson production process is gluon fusion, in the first approximation the production cross-section of the Higgs boson in the FSS_1 model is the same as in the SM. As a result, following <cit.>, after using the narrow-width approximation, the signal strength of h →γγ in the model can be written as

R_γγ=[σ(gg→ h)× Br(h→γγ)]_ FSS_1/[σ(gg→ h)× Br(h→γγ)]_ SM =Γ_ SM^h/Γ_ FSS_1^hΓ(h→γγ)_ FSS_1/Γ(h→γγ)_ SM.

Here, the quantities with the FSS_1 and SM suffixes are computed in the flavored scoto-seesaw model and the Standard Model, respectively, and Γ^h_ FSS_1, SM is the corresponding total decay width. The h→γγ decay is experimentally well established, and at the LHC the measured signal strength of h→γγ is R=1.04_-0.09^+0.10 <cit.>.
While computing R_γγ, we take for the total decay width of the Higgs boson Γ_SM^h = 4.07× 10^-3 GeV with a relative uncertainty of ^+4.0%_-3.9% <cit.>. For a theoretical error estimate, see also <cit.>. For detailed studies on h →γγ decays within miscellaneous beyond-SM scenarios, see <cit.>. In the framework of the FSS_1 model, this decay can be enhanced by the charged scalars (η^±) in the loop, over the SM contribution with charged fermions and W bosons in the loop. Using Eq. (<ref>), the expression for the partial decay width of h→γγ in the FSS_1 model induced by the η^± loop can be written as <cit.> Γ(h→γγ)=G_Fα_em^2 m_h^3/(128√(2)π^3) | ∑_f N_f Q_f^2 F_1/2(β_f)+ F_1(β_W)+λ_3 v^2/(2M_η^+^2) F_0(β_η^±)|^2, where β_i=4 M_i^2/m_h^2, i=f,W,η^+. N_f is the color factor, and Q_f is the charge of the quarks. α_em and G_F are the fine-structure constant and the Fermi constant. The F-functions in Eq. (<ref>) are the form factors of the spin-1/2, 1, and 0 fields for the h→γγ decay: F_1/2(β_f) = -2β[1+(1-β)f(β)], F_1(β_W) = [2+ 3β+3β(2-β)f(β)], F_0(β_η^±) = β [1-β f(β)], where f(β) = (sin^-1(1/√(β)))^2 for β≥ 1, and f(β) = -1/4[ ln((1+√(1-β))/(1-√(1-β)))-iπ]^2 for β <1. In the FSS_1 model, therefore, the total decay width of the Higgs boson can be written as Γ^h_FSS_1=Γ^h_SM+Γ (h →η_R η_R)+Γ (h →η_I η_I) + Γ (h →η^+ η^-). In the above equation, the decay width of the Higgs boson to the different scalar particles is calculated using the tree-level couplings λ_hη_R η_R=2/v( M_η_R^2- μ_2^2), λ_hη_I η_I=2/v(M_η_I^2- μ_2^2). For the numerical analysis of Γ(h→γγ), we scanned the parameters of the FSS_1 model in the range 100 GeV < M_η_R, M_η_I, M_η^+ < 2000 GeV, |λ_3,4,5|≤ 4π. In Eq. (<ref>), the total decay width of the SM Higgs h in the FSS_1 model has three extra contributions over the SM. In the FSS_1 framework, the scalars η_R and η_I are not the lightest neutral Z_2-odd particles of the theory, f being the DM candidate. Thus, with a judicious choice of the DM mass M_f (satisfying relic density and direct-search constraints), the Higgs boson decays to η_Rη_R and η_Iη_I can be made kinematically forbidden. The result of the numerical analysis is shown in Fig. <ref>, where the signal strength R_γγ in the wide λ_3 range is given as a function of the charged scalar mass M_η^+. The horizontal white region (R_γγ=1.04^+0.10_-0.09) represents the currently allowed region measured by the ATLAS experiment using 139 fb^-1 of pp collision data at √(s)=13 TeV <cit.>. This shows that M_η^+ masses heavier than 1000 GeV are completely safe from LHC constraints. As follows from Eq. (<ref>), if λ_3<0, the partial decay width of h is smaller than in the SM, while positive λ_3 gives an enhancement beyond the SM value. So, depending on the positive or negative value of λ_3, we get R_γγ>1 or R_γγ<1, respectively. This behaviour can be seen in Fig. <ref>. § LEPTON FLAVOR VIOLATION The constraints on lepton flavor violating processes are an important aspect of the FSS_1 model under consideration. The model offers specific predictions, given that the flavor structure of the Yukawa couplings is entirely dictated by the A_4 discrete flavor symmetry and the alignment of the flavon vacua. Along with neutrino masses, mixing, and DM phenomenology, LFV decays also give valuable insight into the FSS_1 model parameters. As a consequence of the considered flavor symmetry, the Yukawa couplings in the charged lepton sector are diagonal, see Eq. (<ref>). However, the Yukawa couplings y_N and y_s in Eq.
(<ref>), associated with the type-I seesaw and scotogenic mechanisms, respectively, contribute to the LFV decays. These Yukawa couplings can generate lepton flavor violating processes like l_α→ l_βγ and l_α→ 3 l_β (α,β = e, μ,τ)[ To study lepton flavor violation in the pure scotogenic model, see Refs. <cit.>.]. Studies of these LFV decays depend entirely on the FSS_1 model construction, as described below. In our framework, the branching ratios of the l_α→ l_βγ decays for the scotogenic contribution can be written as <cit.> Br(l_α→ l_βγ) ≈ 3πα_em/(64 G_F^2) |Y_F^β* Y_F^α|^2 (1/M_η^+^4) (F_1(M_f^2/M_η^+^2))^2 Br(l_α→ l_βν_αν̅_β). Here G_F is the Fermi constant, and Y_F is the Yukawa coupling matrix from the scotogenic contribution given in Eq. (<ref>). The expression for the function F_1 is given by F_1(x)=(1-6x+3x^2+2x^3-6x^2 log x)/(6(1-x)^4). As mentioned earlier, the Yukawa couplings are determined by the considered discrete symmetries of the model. Due to the specific VEV structure of the A_4 triplet flavon, Y_F^τ=0, as given in Eq. (<ref>). Therefore, the scotogenic contribution alone yields a vanishing contribution for the τ→ eγ and τ→μγ lepton flavor violating decays. So, the only non-vanishing contribution among decays of the form l_α→ l_βγ comes from the μ→ eγ decay, whose branching fraction is given by <cit.> Br(μ→ e γ)≈ 3πα_em/(64 G_F^2) |2y_s y_s^* ϵ^4|^2 (1/M_η^+^4) (F_1(M_f^2/M_η^+^2))^2 Br(μ→ eν_μν̅_e) = 3πα_em/(16 G_F^2 M_f^2) (|b|/ℱ(M_η_R,M_η_I,M_f))^2 (1/M_η^+^4) (F_1(M_f^2/M_η^+^2))^2 Br(μ→ eν_μν̅_e), where we have substituted Eq. (<ref>) in Eq. (<ref>) to obtain Eq. (<ref>). In the above, ϵ=v_f/Λ, where for simplicity we have assumed all flavon VEVs to be the same, i.e., v_ξ=v_s,a=v_f. Clearly, Br(μ→ eγ) depends on the parameter |b|, which is constrained by neutrino oscillation data to the range 0.0048 to 0.0056 eV, as given in Fig. <ref>. Another type of LFV decay appearing in our FSS_1 framework is the l_α→ 3 l_β (l_α→ l_βl̅_βl_β) process. The corresponding branching ratios are given by <cit.> Br(l_α→ 3 l_β)≈ 3α_em^2/(512 G_F^2) |Y_F^β* Y_F^α|^2 (1/M_η^+^4) 𝒢(m_α/m_β) (F_2(M_f^2/M_η^+^2))^2 Br(l_α→ l_βν_αν̅_β), where F_2(x) = (2-9x+18x^2-11x^3+6x^3 log x)/(6(1-x)^4), 𝒢(m_α/m_β) = 16/3 log(m_α/m_β)-22/3. Again, following Eq. (<ref>), we find that Y_F^τ=0, hence the branching fractions for the τ→ 3e and τ→ 3μ decays vanish. The only non-vanishing contribution originates from the μ→ 3 e decay, and the branching fraction can be written as <cit.> Br(μ→ 3 e) ≈ 3α_em^2/(512 G_F^2) |2 y_s y_s^* ϵ^4|^2 (1/M_η^+^4) 𝒢(m_μ/m_e) (F_2(M_f^2/M_η^+^2))^2 = 3α_em^2/(128 G_F^2 M_f^2) (|b|/ℱ(M_η_R,M_η_I,M_f))^2 (1/M_η^+^4) 𝒢(m_μ/m_e) (F_2(M_f^2/M_η^+^2))^2, where we have substituted Eq. (<ref>) in Eq. (<ref>) to obtain Eq. (<ref>), with ϵ=v_f/Λ. Similar to Eq. (<ref>), here we also find that Br(μ→ 3 e) depends on the scotogenic mass parameters M_f,η^+,η_R,η_I as well as on |b|, the parameter involved in explaining the correct neutrino oscillation parameters and the DM relic density. The variation of the corresponding coupling λ_5 is given in the inset. In Fig. <ref>, we have shown plots of the μ→ eγ (left panel) and μ→ 3 e (right panel) branching ratios against the dark matter mass M_f, satisfying the bound on |b| obtained from Fig. <ref>. The current constraint (denoted by magenta lines) on the branching ratio of the μ→ eγ decay is given by the MEG-II experiment as Br(μ→ eγ) ≤ 3.1 × 10^-13 <cit.>, whereas for the μ→ 3 e decay the constraint from the SINDRUM experiment is Br(μ→ 3 e) ≤ 1 × 10^-12 <cit.>.
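To give a feel for the size of these scotogenic LFV rates, a minimal numeric sketch evaluating the loop functions F_1, F_2 and 𝒢 defined above may be useful; the benchmark masses M_f = 200 GeV, M_η^+ = 500 GeV and the effective Yukawa product 10^-4 used below are purely illustrative assumptions, not fitted points of the FSS_1 parameter space.

```python
import math

ALPHA_EM = 1.0 / 137.036          # fine-structure constant
G_F = 1.1663787e-5                # Fermi constant [GeV^-2]

def F1(x):
    # Loop function entering Br(l_alpha -> l_beta gamma), as given above.
    return (1 - 6*x + 3*x**2 + 2*x**3 - 6*x**2 * math.log(x)) / (6 * (1 - x)**4)

def F2(x):
    # Loop function entering Br(l_alpha -> 3 l_beta), as given above.
    return (2 - 9*x + 18*x**2 - 11*x**3 + 6*x**3 * math.log(x)) / (6 * (1 - x)**4)

def G_ratio(m_alpha, m_beta):
    # Phase-space factor 16/3*log(m_alpha/m_beta) - 22/3 from the text.
    return 16.0/3.0 * math.log(m_alpha / m_beta) - 22.0/3.0

# Illustrative benchmark (assumption): M_f = 200 GeV, M_eta+ = 500 GeV,
# effective Yukawa product |Y_F^e* Y_F^mu| = 1e-4, Br(mu -> e nu nu) ~ 1.
M_f, M_eta_plus, yuk_prod = 200.0, 500.0, 1e-4
x = (M_f / M_eta_plus)**2

br_mu_e_gamma = (3 * math.pi * ALPHA_EM / (64 * G_F**2)
                 * yuk_prod**2 / M_eta_plus**4 * F1(x)**2)
br_mu_3e = (3 * ALPHA_EM**2 / (512 * G_F**2)
            * yuk_prod**2 / M_eta_plus**4
            * G_ratio(0.10566, 0.000511) * F2(x)**2)

print(f"Br(mu -> e gamma) ~ {br_mu_e_gamma:.2e}  (MEG-II bound: 3.1e-13)")
print(f"Br(mu -> 3e)      ~ {br_mu_3e:.2e}  (SINDRUM bound: 1.0e-12)")
```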
In both plots, the current upper bounds on these decays constrain the dark matter mass M_f, specifically in the low-mass region. M_f will be further constrained by the future MEG-II (Proj.) <cit.> and Mu3e Phase-I <cit.> experiments. To illustrate the dependence of the LFV branching ratio on the neutrino oscillation parameters and its consistency with the DM phenomenology, in Fig. <ref> we have plotted Br(μ→ e γ) against |b|. Here, the white-shaded region is consistent with the correct neutrino masses and mixing given in the right panel of Fig. <ref>. Hence, the cyan-shaded regions are ruled out by neutrino oscillation data. This plot also depicts the dependence of the branching ratio on the scotogenic Yukawa coupling, shown by the variation of κ. The upper shaded region is already ruled out by the recent updated constraint from MEG-II <cit.>, and the projected sensitivity of MEG-II can probe κ of the order 𝒪(10^-2). For the type-I seesaw contributions to LFV decays, the branching fractions for the l_α→ l_βγ decays can be cast in the following form: Br(l_α→ l_βγ) ≈ 3α_em v^4/(8π M_N^4) |Y_N^β Y_N^α* f(M^2_N/M_W^2)|^2, where Y_N is given in Eq. (<ref>). The loop function f(x) in Eq. (<ref>) is f(x)=x(2x^3+3x^2-6x-6x^2 log(x)+1)/(2(1-x)^4). Similar to the scotogenic contribution, the A_4 discrete symmetry and the VEV alignment of the flavon ϕ_s play a crucial role in estimating the branching ratio for l_α→ l_βγ. The VEV alignment of the flavon ϕ_s is such that it gives Y_N^e=0, as a result of which the branching fractions for the μ→ eγ and τ→ eγ decays vanish. The only non-vanishing contribution of the l_α→ l_βγ type is the τ→μγ decay, and its branching fraction is given by Br(τ→μγ) = 3α_em v^4/(8π M_N^4) |y_N y_N^* ϵ^2 f(M^2_N/M_W^2)|^2 = 3α_em/(8π M_N^2) |a|^2 |f(M_N^2/M_W^2)|^2. For M_N ∼ 10^4 GeV and |a|=0.0250 eV, the branching fraction in Eq. (<ref>) gives 5.4 × 10^-33, which is very small compared to the experimental limit (4.4 × 10^-8) <cit.>. For higher M_N values, the branching ratio will be even more suppressed. Similarly, the branching ratio for the τ→ 3 μ decay is found to be very small compared to the experimental bound <cit.>. In Tab. <ref>, we have summarized the allowed LFV decays in the FSS_1 model. The considered discrete flavor symmetry and the corresponding vacuum alignment of the flavons completely disallow decay channels such as τ→ eγ and τ→ 3e. Such a decisive prediction can be made since we have vanishing values for the Yukawa couplings Y_F^τ and Y_N^e, see Eq. (<ref>) and Eq. (<ref>), associated with the scotogenic and type-I seesaw contributions, respectively. Present experiments already exclude branching ratios larger than about 𝒪(10^-8). Any positive signal at future experiments will essentially test the validity of the FSS_1 framework.
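As a quick cross-check of the number quoted above, a short numeric sketch evaluating Br(τ→μγ) with the loop function f(x) and the inputs M_N = 10^4 GeV, |a| = 0.0250 eV from the text; the W-boson mass is the only input assumed here.

```python
import math

ALPHA_EM = 1.0 / 137.036
M_W = 80.38            # GeV (standard value, assumed here)
M_N = 1.0e4            # GeV, as in the text
a_abs = 0.0250e-9      # |a| = 0.0250 eV expressed in GeV

def f_loop(x):
    # Loop function f(x) for the type-I seesaw contribution, as given above.
    return x * (2*x**3 + 3*x**2 - 6*x - 6*x**2 * math.log(x) + 1) / (2 * (1 - x)**4)

x = M_N**2 / M_W**2
br_tau_mu_gamma = 3 * ALPHA_EM / (8 * math.pi) * (a_abs / M_N)**2 * f_loop(x)**2
print(f"Br(tau -> mu gamma) ~ {br_tau_mu_gamma:.1e}")   # ~ 5.4e-33, as quoted in the text
```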
Although the allowed range of |b| is tightly constrained from neutrino oscillation data, interplay of DM f and other dark sector particles η_I,R,± can satisfy correct DM relic density with contributions from various annihilation and co-annihilation contributions mentioned in Fig. <ref> - <ref>.Hence, updating Fig. <ref>, in Fig. <ref>, we have plotted the final parameter space, which includes constraints from DM relic density,neutrino oscillation data, and LFV decays. The points with the color code represent the parameter space consistent with the DM relic density and direct search constraints. Once we impose the constraints for |b| from the neutrino oscillation data obtained from Fig. <ref>, we get the magenta-colored points. Finally, we obtain the red star points when we impose the constraint from LFV decays along with the constraints mentioned above from DM phenomenology and neutrino oscillation. LFV constraints restrict the maximum allowed Yukawa coupling to be less than 𝒪(10^-2) and DM masses between (100-1000) GeV are found to be simultaneously consistent with neutrino oscillation data,DM relic density, direct search, and LFV decay constraints.§ CONCLUSIONS AND OUTLOOK We propose the flavor-scoto-seesaw (FSS) model, which explains the observed hierarchy between the solar and atmospheric neutrino mass scales, experimentally allows the trimaximal mixing scheme, and naturally accommodates viable dark matter candidates. In this framework, type-I seesaw and 1-loop scotogenic mechanisms contribute to the effective light neutrino mass.With only one right-handed neutrino, the type-I seesaw contribution dominantly contributes to generating atmospheric neutrino mass scale, and the scotogenic contribution (with the involvement of the dark fermion f and scalar η) is mainly responsible for the solar neutrino mass scale. The whole framework is embedded within A_4× Z_4 × Z_3 × Z_2 discrete flavor symmetry predicting the lightest neutrino to massless and one non-vanishing Majorana phase. The model also contains a few flavon fields to realize appropriate flavor structure to explain observed neutrino mixing.The inclusion of auxiliary Z_N (N= 4, 3, 2) symmetries is a generic feature of discrete flavor symmetric models to forbid several unwanted terms, and the charged lepton mass matrix is found to be a diagonal one. These Z_N symmetries are broken down to a dark Z_2 symmetry, ensuring the stability of dark matter under which only f and η are odd. With a judicious choice of the flavon vacuum alignments,the TM_1 mixing scheme can be realized, and hence we call our flavor symmetric scoto-seesaw model an FSS_1 model. Considered flavor symmetry completely dictates the flavor structure of the model and makes it highly predictive. The FSS_1 model provides rich phenomenology for neutrino masses, mixing, LFV decays, and collider studies and accommodates potential dark matter candidates with DM f fermion and η scalar. With both type-I and scotogenic contributions, a rank-2 light neutrino mass matrix is obtained, predicting normal ordering of light neutrino mass. The presence of flavor symmetry in FSS_1 implies a preference for the higher octant of the atmospheric mixing angle θ_23 where the allowed ranges given by 0.531≤sin^2θ_23≤ 0.544 and 0.580≤sin^2θ_23≤ 0.595. The model also tightly constrains the TM_1prediction for the Dirac CP phase δ_ CP (within the range ±(1.44-1.12) radian) and the Jarlskog CP invariant.Moreover, correlations among neutrino mixing parameters within the FSS_1 model (see Figs. 
<ref> and <ref>) give a strict determination of the Majorana CP phase, thus giving an accurate prediction for m_ββ (see Tab. <ref>)within the range 1.61-3.85 meV. Here, the dark fermion f is considered as the DM candidate whose production mechanism is connected with its Yukawa coupling with SM leptons and the inert doublet scalar η. The magnitude of these Yukawa couplings plays a critical role in determining correct neutrino mixing and DM relic density through the thermal freeze-out mechanism. With the flavor structure of the FSS_1 framework, only the scotogenic part contributes to the lepton flavor violating decays such as μ→ eγ, μ→ 3e. On the other hand, though they are very small, the seesaw part of FSS_1 only contributes to decays such as τ→μγ, τ→ 3 μ.Interestingly, owing to the flavor symmetry and vacuum alignment of the flavons, LFV decays such as τ→ eγ and τ→ 3 e are completely disallowed, and any positive signal LFV for these two decays will test the viability of this model. Within the FSS_1 framework, the WIMPDM masses between (100-1000) GeV are simultaneously consistent with the constraints from neutrino oscillation data,DM relic density, direct search, and LFV decays.The FSS_1 model can also be tested at the colliders via a wide range of phenomenological studies. For example, FSS_1 can contribute to the Higgs boson diphoton decay channel h →γγ. Fig. <ref> shows that M_η^+ masses up to 1 TeV can have implications at the diphoton Higgs decay channel using the present LHC experimental results. With the increasing data collection at LHC and HL-LHC, the precision of R_γγ will improve, giving prospects for better determination of allowed regions for specific flavor model parameters. Thus, phenomenology-based R_γγ constraints can be used for further studies and predictions for producing exotic discrete flavor model signals at present and future colliders. The same statement is valid for further phenomenological studies of the model based on DM and LFV constraints. Thus, in alignment with all pertinent constraints, the model retains its predictiveness across LFV experiments, direct detection of DM, as well as collider experiments.§§ APPENDIX: A_4 SYMMETRYA_4 is a discrete group of even permutations of four objects[For a detailed discussion on A_4 see Refs. <cit.>]. Geometrically, it is an invariance group of a tetrahedron. It has 12 elements which can be generated by two basic objects S and T which obey the following relationsS^2=T^3=(ST)^3=1The A_4 group has three one-dimensional irreducible representations 1,1^' and 1^'' and one three dimensional irreducible representation 3. Products of the singlets and triplets are given by <cit.>1 ⊗1=1; 1^'⊗1^''=1, 1^' ⊗1^'=1^''; 1^''⊗ 1^''=1^', 3 ⊗ 3 =1⊕1^'⊕ 1^''⊕ 3_s⊕ 3_a,where the subscripts “s" and “a" denote symmetric and antisymmetric parts, respectively. Writing two A_4 tripletsas X=(x_1,x_2,x_3)^T and Y=(y_1,y_2,y_3)^T respectively, their product can be written as <cit.> X ⊗ Y = (X ⊗ Y)_1⊕(X ⊗ Y)_1'⊕(X ⊗ Y)_1”⊕(X ⊗ Y)_3s⊕(X ⊗ Y)_3a where(X ⊗ Y)_1 ∼ x_1 y_1+x_2y_3+x_3 y_2, (X ⊗ Y)_1' ∼ x_3 y_3+x_1y_2+x_2 y_1, (X ⊗ Y)_1” ∼ x_2 y_2+x_1y_3+x_3 y_1,(X ⊗ Y)_3s ∼ [2 x_1 y_1-x_2 y_3-x_3y_2; 2 x_3 y_3-x_1 y_2-x_2 y_1; 2 x_2 y_2-x_1 y_3-x_3 y_1 ] , (X ⊗ Y)_3a ∼ [ x_2 y_3-x_3 y_2; x_1 y_2-x_2 y_1; x_3 y_1-x_1 y_3 ].These relations are used in theconstruction of the mass matrices given in Eq. (<ref>) and Eq. (<ref>). 
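To make the triplet product rules concrete, a small sketch that assembles each component of the 3 ⊗ 3 decomposition for two numeric triplets; it simply transcribes the component formulas listed above.

```python
def a4_triplet_product(x, y):
    """Decompose the A_4 product of two triplets x = (x1,x2,x3), y = (y1,y2,y3)."""
    x1, x2, x3 = x
    y1, y2, y3 = y
    return {
        "1":   x1*y1 + x2*y3 + x3*y2,
        "1'":  x3*y3 + x1*y2 + x2*y1,
        "1''": x2*y2 + x1*y3 + x3*y1,
        "3s":  (2*x1*y1 - x2*y3 - x3*y2,
                2*x3*y3 - x1*y2 - x2*y1,
                2*x2*y2 - x1*y3 - x3*y1),
        "3a":  (x2*y3 - x3*y2,
                x1*y2 - x2*y1,
                x3*y1 - x1*y3),
    }

# Sanity check: for x = y the antisymmetric triplet 3a vanishes identically.
print(a4_triplet_product((1, 2, 3), (1, 2, 3))["3a"])   # (0, 0, 0)
```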
This work has been supported in part by the Polish National Science Center (NCN) under grant 2020/37/B/ST2/02371, the Freedom of Research (Swoboda Badań) and the Research Excellence Initiative of the University of Silesia in Katowice. BK would like to thank José W. F. Valle and Claudia Hagedorn for useful discussions. BK also acknowledges hospitality at the Korea Institute of Advanced Study, Seoul, where part of this work has been completed.SM acknowledges the financial support from the National Research Foundation of Korea grant 2022R1A2C1005050.JHEP | http://arxiv.org/abs/2311.15997v1 | {
"authors": [
"Joy Ganguly",
"Janusz Gluza",
"Biswajit Karmakar",
"Satyabrata Mahapatra"
],
"categories": [
"hep-ph"
],
"primary_category": "hep-ph",
"published": "20231127164507",
"title": "Phenomenology of the flavor symmetric scoto-seesaw model with dark matter and TM$_1$ mixing"
} |
Bayesian Approach to Linear Bayesian Networks Seyong Hwang [email protected] Department of Statistics Seoul National University Seoul, 08826, South Korea Kyoungjae Lee [email protected] Department of Statistics Sungkyunkwan University Seoul, 03063, South Korea Sunmin Oh [email protected] Department of Statistics Seoul National University Seoul, 08826, South Korea Gunwoong Park [email protected] Department of Statistics Seoul National University Seoul, 08826, South Korea ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ This study proposes the first Bayesian approach for learning high-dimensional linear Bayesian networks. The proposed approach iteratively estimates each element of the topological ordering from backward and its parent using the inverse of a partial covariance matrix. The proposed method successfully recovers the underlying structure when Bayesian regularization for the inverse covariance matrix with unequal shrinkage is applied. Specifically, it shows that the number of samples n = Ω( d_M^2 log p) and n = Ω(d_M^2 p^2/m) are sufficient for the proposed algorithm to learn linear Bayesian networks with sub-Gaussian and 4m-th bounded-moment error distributions, respectively, where p is the number of nodes and d_M is the maximum degree of the moralized graph. The theoretical findings are supported by extensive simulation studies including real data analysis. Furthermore the proposed method is demonstrated to outperform state-of-the-art frequentist approaches, such as the BHLSM, LISTEN, and TD algorithms in synthetic data. Bayesian approach, Bayesian networks, causal discovery, directed acyclic graph, linear structural equation model, structure learning § INTRODUCTIONBayesian networks are probabilistic graphical models that use a graph to represent variables of interest and their conditional dependence and causal relationships. Hence, Bayesian networks, especially Gaussian linear Bayesian networks, have been applied in various fields due to their popularity and usefulness. (e.g., ).Recent studies have investigated structure learning algorithms for (Gaussian) linear Bayesian networks (BNs), also referred to as linear structural equation models (SEMs). These algorithms can be categorized into three groups: (i) likelihood-based learning, (ii) inverse covariance matrix-based, and (iii) a node-wise regression-based algorithms. For example, <cit.> develops the likelihood-based greedy DAG search algorithm in large sample settings. <cit.> provide the regularized inverse covariance matrix-based algorithms for high-dimensional sparse Gaussian linear BNs. <cit.> develop the regularized-regression-based algorithms. Additionally, <cit.> propose the inverse covariance matrix and independence test-based uncertainty scoring (US) algorithms for low-dimensional settings, and <cit.> develops the high-dimensional Gaussian linear BN learning algorithm by combining the graphical Lasso and the US algorithm. 
However, the study of learning algorithms under the Bayesian framework is less developed, due to the significant computational cost and the nascent status of this field.The main objective of this study is to develop a Bayesian approach for learning high-dimensional linear BNs using the principles of inverse covariance matrix-based approaches. Specifically, the proposed algorithm applies an iterative step of inferring element-wise ordering and its parents, where both problems are effectively addressed using Bayesian regularization for inverse covariance matrix estimation with unequal shrinkage (BAGUS). In addition, the theoretical guarantees on the proposed algorithm are shown, that is the numbers of samples n = Ω(d_M^2 log p) and n = Ω(d_M^2p^2/m) are sufficient for the proposed algorithm to recover the underlying structure, for sub-Gaussian and 4m-th bounded-moment linear BNs, respectively, where p is the number of nodes and d_M is the maximum degree of the moralized graph. The theoretical findings of this study are heuristically confirmed through extensive simulation studies. The proposed algorithm consistently recovers the underlying graph with sample complexities of n = Ω(d_M^2 log p) and n = Ω(d_M^2p^2/m) for sub-Gaussian and 4m-th bounded-moment linear BNs, respectively. Furthermore, the proposed Bayesian approach is compared to state-of-the-art frequentist approaches, such as BHLSM <cit.>, LISTEN <cit.>, TD <cit.>, and US <cit.> algorithms. Finally, we demonstrate through online shopping mall order amount data that the proposed algorithm is well-suited for estimating the relationship between the sales of each product. The remainder of this paper is structured as follows. Section <ref> summarizes some necessary notations, explains basic concepts of a linear BN, and discusses the identifiability conditions and existing learning algorithms for linear BNs. Section <ref> introduces the proposed algorithm for high-dimensional linear BNs. Section <ref> provides theoretical guarantees of the proposed algorithm and illustrates specific examples. Section <ref> compares the proposed algorithm with other frequentist high-dimensional linear BN learning algorithms. Section <ref> evaluates the proposed algorithm and compares it with state-of-the-art algorithms in various simulation settings. Section <ref> demonstrates that the proposed algorithm performs well for estimating relationships between sales of each product of an online mall. Lastly, Section <ref> offers a discussion and suggests a future work.§ BACKGROUND In this section, we will first introduce some necessary notations and definitions for linear Bayesian networks (BNs), also known as linear structural equation models (SEMs). Then, we will discuss previous relevant works. §.§ Bayesian Network A Bayesian network (BN) is a probabilistic graphical model that represents a joint probability distribution over a set of random variables using a directed acyclic graph (DAG) and a set of conditional probability distributions. DAG G = (V, E) consists of a set of nodes V = {1, 2, ... , p} and a set of directed edges E ⊂ V × V with no directed cycles. A directed edge from node j to k is denoted by (j,k) or j → k. The set of parents of node k, denoted by (k), consists of all nodes j such that (j,k) ∈ E. In addition, the set of children, denoted by (j), consists of all nodes k such that (j,k) ∈ E. If there is a directed path from node j to node k, then k is called a descendant of j, and j is called an ancestor of k. 
The sets of all descendants and ancestors of node k are denoted by (k) and (k), respectively. An important property of DAGs is that there exists (topological) ordering π = (π_1,π_2, ...., π_p) of a directed graph that represents the directions of edges such that for every directed edge (j, k) ∈ E, j comes before k in the ordering. Hence, recovering a graph can be decomposed into learning the ordering and the presence of edges.In Bayesian network, we consider a random vector X := (X_j)_j ∈ V with a probability distribution taking values in sample space 𝒳_V over the nodes in G. For any subset S of V, let X_S :=(X_j)_j ∈ S and 𝒳_S := ×_j ∈ S𝒳_j where 𝒳_j is the sample space of X_j. For any node j ∈ V, (X_j | X_S) denotes the conditional distribution of the variable X_j given a random vector X_S. Then, a DAG model has the following joint probability density function:(X_1, X_2,..., X_p) = ∏_j=1^p(X_j | X_(j)),where (X_j | X_(j)) is the conditional distribution of X_j given its parent variables X_(j) =(X_k)_ k ∈(j). This study considers n independent and identically distributed (i.i.d.) samples X^1:n := ( X^(i) )_i=1^n from a given Bayesian network where X^(i) := ( X_j^(i) )_j = 1^p is a p-variate random vector. The notation · denotes an estimate based on samples X^1:n. This study assumes that the moralized graph is sparse, meaning that the Markov blanket of each node is small. Additionally, it assumes causal sufficiency, meaning that X_(j) is the only source of confounding for X_j. §.§ Linear Bayesian networksA linear Bayesian network, also known as a linear structural equation model, is a type of Bayesian network (BN) where the joint distribution is defined by linear structural equations. The model can be expressed in a matrix equation as follows:(X_1, X_2,..., X_p)^⊤ =B (X_1, X_2,..., X_p)^⊤ + (ϵ_1,ϵ_2,... , ϵ_p)^⊤,where B ∈ℝ^p × p is an edge weight matrix with each element [B]_j,k = β_k,j, representing the linear weight of an edge from X_k to X_j. In addition, (ϵ_j)_j ∈ V are independent error random variables with (ϵ_j) = 0 and (ϵ_j) = σ_j^2>0. Then, the covariance matrix of X and its inverse, Σ and Ω, respectively, can be calculated as:Σ= (I_p - B)^-1Σ^ϵ (I_p - B)^-⊤,andΩ = (I_p - B)^⊤ (Σ^ϵ)^-1 (I_p - B),where I_p∈ℝ^p × p is the identity matrix, and Σ^ϵ = (σ_1^2,σ_2^2, ..., σ_p^2) is the covariance matrix for the independent errors.Since the inverse covariance matrix is a function of edge weights, many existing algorithms apply the inverse covariance matrix to recover the structure, i.e., the support of B. More precisely, the algorithms learn the models under the following uncertainty level conditions. Let X be generated from a linear BN (<ref>) with DAG G and true ordering π. In addition, suppose that Σ is the true covariance matrix. Then, DAG G is uniquely identifiable if either of the two following conditions is satisfied: Consider any node j = π_r ∈ V, k ∈(j), ℓ∈(j), S_r = {π_1, ..., π_r-1}, and T_r = V ∖{π_r+1, ..., π_p} in which π_0 = π_p+1 = ∅. (1) Forward selection <cit.>: 1/ [(Σ_S_r ∪{j}, S_r∪{j})^-1]_j,j = σ_j^2 < σ_k^2 + ∑_k' ∈(k) ∖{π_1, ..., π_r-1}β_k' → k^2 σ_k'^2 = 1/ [(Σ_S_r ∪{k}, S_r∪{k})^-1]_k,k where β_k' → k is the sum over products of coefficients along each directed path from k' to k. In addition, Σ_A,A is the |A| × |A| sub-matrix of Σ corresponding to variables X_A. Finally, [ Σ ]_k,k is the diagonal entry corresponding to variable X_k. 
(2) Backward selection <cit.>: [(Σ_T_r, T_r)^-1]_j,j = 1/σ_j^2 < 1/σ_ℓ^2 + ∑_ℓ' ∈ ch(ℓ) ∖{π_r, ..., π_p}β_ℓ, ℓ'^2 /σ_ℓ'^2 = [(Σ_T_r, T_r)^-1]_ℓ,ℓ. It is straightforward that if the error variances are the same, both identifiability conditions hold. Hence, in many areas, these identifiable models are acceptable and widely used. For example, the assumption of exactly equal error variances is used in applications with variables from a similar domain, such as spatial or time-series data. It is also important to note that these identifiability conditions do not rely on any specific distribution, such as the Gaussian. However, in order to achieve high-dimensional consistency, two types of linear BNs are considered, which concern the tail conditions of the error distributions. The first type is the sub-Gaussian linear BN, a natural extension of Gaussian linear BNs, in which each error variable is sub-Gaussian. That is, ϵ_j / √(Var(ϵ_j)) is sub-Gaussian with parameter s_max^2. The second type is the bounded-moment linear BN, which is defined as the linear BN with errors having a bounded moment. Specifically, max_j ∈ V E{ (ϵ_j / √(Var(ϵ_j)) )^4m}≤ K_m, where K_m >0 only depends on m. In the linear BN, each variable X_j can be expressed as a linear combination of the independent errors corresponding to its ancestors: X_j = ∑_k ∈ pa(j)β_k,j X_k + ϵ_j = ∑_k ∈ an(j)β_k → jϵ_k + ϵ_j, where β_k → j is the sum over products of coefficients along directed paths from k to j. Hence, if the error variables have a sub-Gaussian or a bounded-moment property, then X_j also satisfies a sub-Gaussian or a bounded-moment property. These two types of linear BNs can be expressed as follows. * Sub-Gaussian linear BN: For any j ∈ V, X_j is sub-Gaussian with proxy parameter s_max^2 [Σ]_j,j for some constant s_max>0, which means that E{exp (t X_j) }≤exp (s_max^2 [Σ]_j,j t^2/ 2 ) for all t∈ℝ. * 4m-th bounded-moment linear BN: For any j ∈ V, X_j has a 4m-th bounded moment for some positive integer m and some constant K_max>0 such that E{ (X_j)^4m}≤ K_max [Σ]_j,j^2m. §.§ Frequentist Approaches to Linear BNs This section provides a brief review of recent frequentist methods for learning linear BNs. One popular approach is based on regression. For example, <cit.> applies standard regression and a conditional independence test to learn a low-dimensional Gaussian linear BN. It estimates the ordering using the variance of residuals, and subsequently infers the directed edges using conditional independence tests. <cit.> develop the ℓ_0- and ℓ_1-regression-combined TD algorithm for a sub-Gaussian linear BN. The algorithm estimates the ordering using best-subset regression, and then infers the edges using an ℓ_1-regularized approach. The TD algorithm requires a sample size of n = Ω(q^2 log p) for accurate ordering estimation, where q is the predetermined upper bound of the maximum indegree. Similarly, <cit.> develops the ℓ_1-regression-based linear BN learning (LSEM) algorithm with sample complexities of n = Ω(d_M^4 log p) and n = Ω(d_M^4 p^2/m ) for sub-Gaussian and 4m-th bounded-moment linear BNs, respectively. Finally, <cit.> proposes the best-subset-selection-based optimal learning algorithm for Gaussian linear BNs. Its sample complexity, n = Ω(d_in log(p/d_in)), is optimal under the known maximum indegree d_in. Another popular approach is based on the inverse covariance matrix.
These algorithms estimate the last element of the ordering using the diagonal entries of the inverse covariance matrix, which can be estimated by any inverse covariance matrix estimators, such as graphical Lasso and CLIME. These algorithms then determine its parents with non-zero entries on its row of the inverse covariance matrix. After eliminating the last element of the ordering, this procedure is repeated until the graph is fully estimated. <cit.> and <cit.> apply graphical Lasso and CLIME, respectively, for inverse covariance matrix estimation, and prove that their algorithms require sample sizes of n = Ω(d_M^4 log p) and n = Ω(d_M^4 p^2/m ), when the error distributions are sub-Gaussian and 4m-th bounded-moment, respectively.In summary, the existing algorithms for learning linear BNs mainly use frequentist methods such as OLS, Lasso, graphical Lasso, and CLIME. However, a Bayesian framework structure learning algorithm has not yet been explored due to its heavy computational cost, model complexity, and choice of prior. Therefore, this study proposes an inverse covariance matrix-based Bayesian algorithm for high-dimensional linear BNs, using a provable and scalable Bayesian approach for inverse covariance matrix estimation. §.§ Bayesian Approaches to Undirected Graphical ModelsSeveral Bayesian methods have been proposed for estimating high-dimensional sparse Gaussian undirected graphical models, that is equivalent to high-dimensional sparse inverse covariance matrices. For example, <cit.> suggests using the G-Wishart distribution as a prior. <cit.> proves theoretical properties such as posterior convergence rate and graph selection consistency when a carefully chosen prior is used for the graph G. However, since its normalizing constant has a closed form only when the graph G is decomposable, posterior inference for a general graph G requires a computationally expensive Markov chain Monte Carlo (MCMC) algorithm. As alternatives, spike-and-slab and its variants as priors for sparse inverse covariance matrix have been suggested <cit.>.Due to the positive definite constraint, the resulting prior usually contains an unknown normalizing constant. Hence, block Gibbs samplers have been suggested for efficient posterior inference. In this study, we concentrate on the Bayesian regularization for graphical models with unequal shrinkage (BAGUS) approach proposed by <cit.>. The off-diagonal entries of the inverse covariance matrix Ω∈ℝ^p× p are modeled using a spike-and-slab prior as follows (see details in ): for any 1≤ j < k ≤ p,π([Ω]_j,k) = η/2 ν_1exp( -|[Ω]_j,k|/ν_1 ) + 1 - η/ 2ν_0exp( -|[Ω]_j,k|/ν_0 ),s.t. [Ω]_k,j = [Ω]_j,kwhere ν_1>ν_0 >0 arescaling hyper-parameters, and 0<η<1 is a hyper-parameter that reflects the prior belief about the proportion of signals. Note that the spike-and-slab prior in Equation (<ref>) consists of two Laplace distributions. The first component (slab), with relatively large scale ν_1, captures large signals, whereas the second component (spike), with small scale ν_0, shrinks small noises to zero. 
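To see how the two Laplace components in this prior divide the work, a tiny numeric sketch evaluating the mixture density at a few magnitudes of [Ω]_j,k; the hyper-parameter values used here (ν_0 = 0.02, ν_1 = 1, η = 0.5) are illustrative assumptions only.

```python
import numpy as np

def spike_slab_density(x, v0, v1, eta):
    # Two-component Laplace mixture from the prior above: slab (scale v1) + spike (scale v0).
    slab = eta / (2.0 * v1) * np.exp(-np.abs(x) / v1)
    spike = (1.0 - eta) / (2.0 * v0) * np.exp(-np.abs(x) / v0)
    return slab + spike

# With v0 << v1, the spike dominates near zero (strong shrinkage of small entries),
# while the slab keeps appreciable mass at large values, so large signals survive.
for x in (0.0, 0.01, 0.1, 1.0):
    print(x, spike_slab_density(x, v0=0.02, v1=1.0, eta=0.5))
```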
In addition, the BAGUS method assumes the following exponential prior for the diagonal entries of Ω: for any 1≤ j ≤ p, π( [Ω]_j,j) = τexp( - τ [Ω]_j,j), s.t. [Ω]_j,j > 0, where τ>0 is a hyper-parameter. To define a prior distribution for inverse covariance matrices, the BAGUS method applies the following prior, restricted to a subset of positive definite matrices: π( Ω ) = ∏_j < kπ([Ω]_j,k) ∏_jπ( [Ω]_j,j) 1(Ω≻ 0, ‖Ω‖_2 ≤ B_0), where B_0>0 is a constant and 1(·) is an indicator function. Additionally, Ω≻ 0 means that Ω is positive definite, and ‖Ω‖_2 = sup_x ∈ℝ^p, ‖x‖_2=1‖Ω x ‖_2 is the spectral norm of Ω. Then, the BAGUS method returns the maximum a posteriori (MAP) estimator, say Ω, which maximizes the posterior distribution and is computed via the expectation-maximization (EM) algorithm as the minimizer of the negative log-posterior: Ω := argmin_Ω≻ 0, ‖Ω‖_2 ≤ B_0 n/2{tr( ΣΩ) - log det(Ω) } - ∑_j <klog( η/(2 ν_1) e^ -|[Ω]_j,k|/ν_1+ (1 - η)/(2ν_0) e^-|[Ω]_j,k|/ν_0 ) + τ∑_j=1^p [Ω]_j,j, where Σ is the sample covariance matrix. <cit.> shows that the EM algorithm has a computational complexity of O(p^3). Hence, the BAGUS method is much more tractable than other Bayesian methods based on MCMC algorithms. Moreover, <cit.> establishes convergence rates of the MAP estimator in high-dimensional settings under regularity conditions on the hyper-parameters and the true inverse covariance matrix. Specifically, they show that the BAGUS method consistently estimates the true inverse covariance matrix if the sample size scales as n = Ω(d^2 log p), where d is the maximum number of nonzero elements in each column of the true inverse covariance matrix, which is equivalent to the maximum degree of the moralized graph in the language of DAG models. <cit.> further proposes a consistent support recovery procedure based on the posterior inclusion probability. The spike-and-slab prior in Equation (<ref>) can be written as the following hierarchical prior: [Ω]_j,k | r_j,k =0 ∼ Laplace(ν_0), [Ω]_j,k | r_j,k =1 ∼ Laplace(ν_1), r_j,k∼ Bernoulli(η), where Laplace(ν) denotes the Laplace distribution with scale parameter ν. The binary variable r_j,k indicates whether [Ω]_j,k is zero or not; hence, we can conduct variable selection based on the posterior inclusion probability: P( r_j,k =1 | X^1:n ) = ∫ P( r_j,k =1 | [Ω]_j,k ) π([Ω]_j,k | X^1:n ) d [Ω]_j,k. The above posterior inclusion probability can be estimated based on the MAP estimator Ω as follows: p_j,k := P(r_j,k =1 | [Ω]_j,k ) = [η / (2ν_1)exp(- | [Ω]_j,k| /ν_1 )] / [η / (2ν_1)exp(- | [Ω]_j,k| /ν_1 ) + (1-η) / (2ν_0)exp(- | [Ω]_j,k| /ν_0 )]. <cit.> shows that, under regularity conditions, the estimated support Ŝ = {(j,k) : p_j,k≥ T } is consistent for any threshold 0<T<1. § ALGORITHM This section presents a Bayesian approach for learning high-dimensional linear BNs. The proposed algorithm combines the BAGUS method with the principle of inverse covariance matrix-based linear BN learning approaches. Specifically, in the first step, it estimates the inverse covariance matrix for all variables. Then, the last element of the ordering is chosen as the variable with the smallest diagonal entry, using the backward selection condition in Lemma <ref>. Subsequently, its parents are determined as the indices with inclusion probabilities greater than a given threshold 0<T<1, which represents whether an edge weight is zero or not in Equation (<ref>). In the next step, the inverse covariance matrix is estimated using only the remaining variables, excluding the last element of the ordering.
The proposed algorithm then estimates the next element of the ordering and its parents with the same method. It iterates this procedure until the complete graph structure is inferred. The detailed process of the proposed algorithm is summarized in Algorithm <ref>. In Algorithm <ref>, the r-th iteration of the algorithm first computes the following MAP estimator of the inverse of the partial covariance matrix based on the BAGUS method: Ω^(r) := argmin_Θ≻ 0, ‖Θ‖_2 ≤ B_0, Θ∈ℝ^(p+1-r) × (p+1-r) n/2{tr(Σ^(r)Θ ) - log det( Θ) } - ∑_j <k, j,k∈ S(r)log( η/(2 ν_1) e^ -| [Θ]_j,k |/ν_1+ (1 - η)/(2ν_0) e^-| [Θ]_j,k|/ν_0 ) + τ∑_j ∈ S(r)[Θ]_j,j, where Σ^(r) is the sample covariance matrix for X_S(r), in which S(r) = V ∖{π_p+2-r ,…, π_p+1} and π_p+1 =∅. Then, it determines π_p+1-r as the index with the smallest diagonal entry of the estimated inverse covariance matrix Ω^(r). Finally, it determines the parents of node j = π_p+1-r as the nodes k for which the posterior probability of a non-zero [Ω^(r)]_j,k is greater than or equal to the pre-specified threshold. We discuss the required conditions on the threshold in Section <ref>. When recovering the ordering, it is recommended to choose a small value of the tuning parameter τ in Equation (<ref>), which is the inverse scale for the diagonal entries. This is because the diagonal entries of the inverse covariance matrix do not need to be shrunk to zero. Additionally, a small scale ν_0 for the spike part and a large scale ν_1 for the slab part are recommended to achieve a sparse estimated graph. The rationale is that most off-diagonal entries of the inverse covariance matrix should be shrunk to zero while keeping the large signals when recovering the parents in a sparse graph setting. It is emphasized that the proposed method is based on the estimation of the edges of the moralized graph instead of the DAG. Specifically, the proposed method estimates the uncertainty score for the ordering estimation using the posterior distribution of the moralized graph. Subsequently, it recovers the parents using the posterior distribution of the moralized graph again, for better computational efficiency. Admittedly, it would make more sense to exploit the posterior distribution of the directed graph when recovering the directed edges given the ordering, as shown in previous studies <cit.>. Regarding computational complexity, the bottleneck of the r-th iteration of the proposed method is the estimation of an inverse covariance matrix of size (p+1-r) × (p+1-r) using the BAGUS method. According to <cit.>, its computational cost is O((p+1-r)^3). Since there are p-1 iterations, the computational complexity of the proposed method is ∑_j=2^p O(j^3) = O(p^4). Hence, the proposed method has polynomial computational complexity in the number of nodes. § THEORETICAL RESULTS This section provides the statistical results for Algorithm <ref> in learning high-dimensional linear BNs with sub-Gaussian and 4m-th bounded-moment errors. Specifically, we present the required assumptions and the theoretical results for the consistent estimation of both the ordering and the directed edges. The main results are expressed in terms of the triple (n,p,d_M), where n is the number of samples, p is the number of nodes, and d_M is the maximum degree of the moralized graph. For ease of notation, let X_π_1:r = (X_π_1, X_π_2,...,X_π_r). We assume some prevalent constraints on the linear BN (<ref>) with both sub-Gaussian and 4m-th bounded-moment error distributions
(e.g., <cit.>). [Dependency Assumption] There exist positive constants k_1 and k_2 > 0 such that Λ_min( Σ) ≥ k_1 and max_j ∈{1,2,...,p} [Σ]_j,j < k_2, where Λ_min(A) denotes the minimum eigenvalue of matrix A. [Minimum Gap Assumption] For any r ∈{2,3,...,p} and j = π_r and ℓ∈(j), there exists a positive constant τ_min > 0 such that -1/σ_j^2 + 1/σ_ℓ^2 + ∑_ℓ' ∈ ch(ℓ) ∖{π_r, ..., π_p}β_ℓ, ℓ'^2 /σ_ℓ'^2 > τ_min. [Minimum Signal Assumption] For any r ∈{1,2,...,p-1}, there exists a positive constant θ_min > 0 such that min_(k, j) ∈ E^(r) |Ω^(r)_j,k| > θ_min, where E^(r) and Ω^(r) are the edge set and the inverse covariance matrix of {π_1,π_2,...,π_p+1-r}, respectively. Assumption <ref> is necessary because the proposed algorithm relies on the inverse covariance matrix, which requires an invertible covariance matrix. Assumption <ref> is a sample version of the backward selection condition in Lemma <ref>, which ensures that the difference between uncertainty scores is large enough to identify the correct ordering. Furthermore, Assumption <ref> ensures that, for any r ∈{1,2,...,p-1}, the non-zero entries in the π_p+1-r-th row of the inverse covariance matrix for X_π_1:(p+1-r) are sufficiently far away from zero. Since the edge weight is proportional to [Ω^(r)]_π_p+1-r, k from Equation (<ref>), the assumption guarantees that each non-zero edge weight is sufficiently large. For learning high-dimensional models, we require the following related conditions on the covariance and inverse covariance matrices. For ease of notation, let S = {(i,j):[Ω]_i,j≠ 0} and S^(r) = {(i,j):[Ω^(r)]_i,j≠ 0}. We also define M_Σ = ‖Σ‖_∞, M_Γ = ‖(Ω⊗Ω)_SS‖_∞, and M_Γ^(r) = ‖(Ω^(r)⊗Ω^(r))_S^(r)S^(r)‖_∞ for any r ∈{1,2,...,p-1}, where ‖·‖_∞ and ⊗ denote the ℓ_∞/ℓ_∞ operator norm and the Kronecker product, respectively. Lastly, suppose that M_Γ_max = max_r ∈{1,2,...,p-1} M_Γ^(r) and M_Γ_min = min_r ∈{1,2,...,p-1} M_Γ^(r). There exist positive constants M_1 > 0 and M_2 > 0 such that M_Σ < M_1 and M_Γ_max < M_2. Assumption <ref> constrains the ℓ_∞/ℓ_∞ operator norm of the inverse covariance matrix, whose support is related to the moralized graph, thereby controlling the sparsity of the moralized graph. Specifically, this assumption is particularly favorable for a DAG with a sparse moralized graph, such as a chain graph. In contrast, it is harsh for a DAG with a dense moralized graph, such as a star graph (for further details, see Section <ref>). A similar assumption is also employed in the graphical Lasso method proposed in <cit.> for learning the inverse covariance matrix. <cit.> constrains the ℓ_∞/ℓ_∞ operator norm of Ω to be constant, whereas the proposed method requires the constraint for all Ω^(r), r∈{1,2,...,p}, in order to estimate the inverse covariance matrix at each iteration.
Finally, suppose that the prior hyper-parameters (v_0, v_1, η, τ), threshold parameter T, and the spectral norm bound B_0 satisfy 1/v_1 = C_3/1+ϵ_1n/p, 1/v_0 > C_4n/p, v_1^2(1-η)/v_0^2η≤ϵ_1exp{ 2(C_2-C_3)M_Γ_min(C_4-C_3) n/p^2},τ ≤ C_3n/2p, log(T/1-T) ∈log(ν_0η/ν_1(1-η))+ (0, (θ_min-2(C_1+C_3)M_Γ1/d)(1/ν_0-1/ν_1)), and1/k_1 + 2(C_1+C_3)M_Γ_max < B_0 < √(2nv_0), where C_4 = C_1+M_Σ^22(C_1+C_3)M_Γ_max+6(C_1+C_3)^2M_Γ_max^2M_Σ^3. Then, Algorithm <ref> correctly estimates the graph with high probability as follows: * For a sub-Gaussian linear BN, ( G = G ) ≥ 1-4p^2exp( -C_1^2/128(1+4 s_max^2)max_j( [Σ]_j,j)^2n/(d_M+1)^2). * For a 4m-th bounded-moment linear BN, ( G = G ) ≥ 1-4p^22^2mmax_j( [Σ]_j,j)^2mC_m(K_max+1)/C_1^2m(d_M+1)^2m/n^m. Theorem <ref> states that the proposed Bayesian approach successfully learns high-dimensional linear BNs. Note that the required assumptions ensure that C_1, C_m, [Σ]_j,j remain constant for (n,p,d_M) in Theorem <ref>, which are involved with the graph recovery probability. Consequently, the proposed algorithm with appropriate hyper-parameters recovers the graph with high probability if n= Ω(d_M^2 log p ) for a sub-Gaussian linear BN and if n = Ω( d_M^2 p^2/m) for a 4m-th bounded-moment linear BN. The proof is built upon the related studies in <cit.> and <cit.>, where the inverse covariance matrix-based algorithm and Bayesian regularized inverse covariance matrix estimation are considered, respectively. However, we combine these studies in a careful way to establish consistency in high-dimensional linear BN settings. The detailed proof can be found in Appendix.Theorem <ref> also shows that the proposed algorithm does not require commonly applied assumptions, such as the incoherence and faithfulness conditions (e.g., ). Instead, it requires closely related conditions on the hyper-parameters and thresholding parameter. In the following section, we demonstrate how these required conditions can be satisfied using certain special types of linear BNs. §.§ Illustration of the Assumptions and Hyper-Parameters This section confirms the validity of Assumptions <ref>, <ref>, <ref> and <ref>. Additionally, it provides appropriate hyper-parameters v_0, v_1, η, τ and the inclusion probability threshold T when recovering chain and star linear BNs illustrated in Figure <ref>. These are popular sparse and dense models because the chain graph has a small maximum degree of the moralized graph d_M = 2, whereas the star graph has a large maximum degree d_M = p-1. More specifically, the considered chain and star linear BNs are as follow: For all j ∈{1,2,...,p-1},Chain:X_j+1 = β X_j + ϵ_j+1 and Star:X_j+1 = β X_1 + ϵ_j+1where X_1=ϵ_1 in both linear BNs and all the error variances are σ^2. Additionally, β∈ (-1, 1) is a small edge weight. In the chain linear BN, we can determine fixed bounds for M_Γ_max, M_Γ_min, and M_Σ, which are as follows:M_Γ_max = (β^4+2|β|^3+4β^2+2|β|+1)/σ^4, M_Γ_min = (β^4+2|β|^3+3β^2+2|β|+1)/σ^4, M_Σ = (1-|β|^p)(1-|β|^p+1)/(1-|β|)(1-|β|^2)σ^2 ≤1/(1-|β])(1-|β|^2)σ^2.Consequently, Assumption <ref> is satisfied ifM_1 >1/(1-|β|)(1-|β|^2)σ^2, M_2 > β^4+2|β|^3+4β^2+2|β|+1 /σ^4. 
Moreover, simple algebra yields that Assumptions <ref>, <ref>, and <ref> are satisfied if k_1 ≤σ^2/(1+|β|)^2, τ_min≤β^2/σ^2, θ_min≤|β|/σ^2.However, in the star linear BN, we have the diverging values of M_Γ_max and M_Σ as the number of nodes increases:M_Γ_max = 2(p-1)^2β^4+2(p-1)^2|β|^3+3(p-1)β^2+2(p-1)|β|+1 /σ^4, M_Γ_min = (2β^4+2|β|^3+3β^2+2|β|+1)/σ^4,M_Σ = max{ (p-1)|β|+1, (p-1)β^2 + |β| +1}σ^2.Consequently, Assumption <ref> rarely hold in large-scale linear BNs because of diverging M_1 and M_2. This heuristically supports that Assumption <ref> involves with the sparsity of the moralized graph.The success of the proposed approach depends on the existence of appropriate hyper-parameters. Hence, we now turn our attention to the existence of appropriate hyper-parameters ν_0, ν_1, τ, and T according to Theorem <ref>. By setting suitable constants C_1, C_2, C_3, and T, we can easily find proper values for these hyper-parameters. A possible choice is as follows: For any ϵ_1>0,ν_1 = p(1+ϵ_1)/nC_3, ν_0 = p/nC_4, τ = nC_3/2p, andT = ν_0η/ν_1(1-η) + ν_0η where η = ν_1^2/ν_1^2 + ν_0^2ϵ_1.In addition, for both chain and star linear BNs, we can set C_1 = C_3/10, C_2 = dθ_min / (2M_Γ_max), andC_3 = 1/2min{1/6M_Γ_maxM_Σ, 1/6M_Γ_max^2M_Σ^3, k_1^2/4M_Γ_max, τ_min/4M_Γ_max, dθ_min/2M_Γ_max, k_1^2p/2ϵ_1}. As discussed, the chain BN has fixed values for M_Γ_min, M_Γ_max, and M_Σ, resulting in the fixed hyper-parameters regardless of the number of sample size and nodes. However, the star linear BN has diverging values for M_Γ_max and M_Σ, leading to unacceptable value of hyper-parameters, such as ν_0 and ν_1 diverges, τ→ 0, and T → 1, as p increases in this setting.So far, we have demonstrated that appropriate hyper-parameters exist for chain BNs regardless of the number of nodes or samples. However, finding appropriate hyper-parameters for large-scale star BN is difficult. Hence, this underscores the importance of Assumption <ref> in achieving consistency, and highlights that the proposed method is well-suited for learning high-dimensional sparse linear BNs, while it may not be the optimal choice for recovering dense linear BNs. Of course, the choice of required hyper-parameters depends on the true model quantities which are typically unknown in practice. Hence, in applications, we find ourselves in a similar position as for other graph learning algorithms (e.g., the PC, HGSM, BHLSM, and TD algorithms) where the output depends on test and tuning parameters. To select good hyper-parameter values for the BAGUS method, cross-validation can be applied as discussed in <cit.>. However, due to the heavy computational cost, we used fixed hyper-parameters, v_0 = √(1/(100n)), v_1 = 1, τ = 0.0001, and T = 0.5, assuming unknown true model information in all our numerical experiments. The simulation results heuristically confirm that the proposed algorithm successfully recoverssparse graphs with high probability.§ COMPARISON TO FREQUENTIST APPROACHESThis section compares Algorithm <ref> against other high-dimensional linear BN learning algorithms, such as BHLSM, TD, LISTEN, and graphical Lasso-based algorithms <cit.>, in terms of sample complexity and required assumptions. Regarding sample complexity, the ℓ_1-regularized regression-based BHLSM algorithm can recover linear BNs with high probability if the sample sizes are n = Ω( d_M^2log p) and n = Ω( d_M^2 p^2/m ) for sub-Gaussian and 4m-th bounded-moment error distributions, respectively. 
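To summarize how these pieces fit together in practice, a compact sketch of the backward iterative procedure of Algorithm <ref> using the fixed hyper-parameters above (ν_0 = √(1/(100n)), ν_1 = 1, T = 0.5) may be helpful. The mixing weight η = 0.5 is an additional assumption, and τ does not appear because the BAGUS MAP/EM step itself is abstracted behind a placeholder (here crudely replaced by a ridge-regularized inverse for illustration, not the actual BAGUS estimator), while the ordering step and the inclusion-probability thresholding follow Lemma <ref> and Equation (<ref>).

```python
import numpy as np

def estimate_precision_map(S_hat, n, ridge=1e-2):
    # Stand-in for the BAGUS MAP/EM step: a real implementation would minimize
    # n/2*(tr(S_hat Theta) - logdet(Theta)) plus the spike-and-slab and exponential
    # penalties. A ridge-regularized inverse is used here purely for illustration.
    p = S_hat.shape[0]
    return np.linalg.inv(S_hat + ridge * np.eye(p))

def inclusion_prob(om_jk, v0, v1, eta):
    # Posterior inclusion probability p_jk evaluated at the MAP estimate.
    slab = eta / (2 * v1) * np.exp(-abs(om_jk) / v1)
    spike = (1 - eta) / (2 * v0) * np.exp(-abs(om_jk) / v0)
    return slab / (slab + spike)

def bayesian_backward_learning(X, v1=1.0, eta=0.5, T=0.5):
    n, p = X.shape
    v0 = np.sqrt(1.0 / (100 * n))          # fixed spike scale used in the experiments
    remaining = list(range(p))
    ordering, parents = [], {}
    while len(remaining) > 1:
        S_hat = np.cov(X[:, remaining], rowvar=False)
        Omega = estimate_precision_map(S_hat, n)
        j_loc = int(np.argmin(np.diag(Omega)))           # backward selection step
        j = remaining[j_loc]
        parents[j] = [remaining[k] for k in range(len(remaining))
                      if k != j_loc and inclusion_prob(Omega[j_loc, k], v0, v1, eta) >= T]
        ordering.insert(0, j)                            # j is the latest remaining node
        remaining.pop(j_loc)
    ordering.insert(0, remaining[0])
    parents[remaining[0]] = []
    return ordering, parents
```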
The best-subset-selection and ℓ_1-regularized regression-combined TD algorithm can successfully learn a sub-Gaussian linear BN with high probability if the sample size is n = Ω(q^2 log p), where q is the predetermined upper bound of the maximum indegree. Finally, CLIME-based LISTEN algorithm can successfully learn a linear BN with high probability if the sample sizes are n = Ω(d_M^4 log p) and n = Ω(d_M^4 p^2/m) for sub-Gaussian and 4m-th bounded-moment error distributions, respectively. The proposed algorithm has a similar sample complexity and can learn a linear BN with high probability if the sample sizes are n = Ω( d_M^2 log p ) and n = Ω( d_M^2 p^2/m ) with sub-Gaussian and 4m-th bounded-moment linear BNs, respectively. The various simulation results in Section <ref> empirically support that the proposed and existing frequentist algorithms have similar performance when recovering directed edges. In terms of required assumptions, most assumptions are similar. However, the BHLSM, TD, and graphical Lasso-based algorithms require the incoherence condition, which states that neighboring and non-neighboring nodes are not highly correlated. This condition may limit the applicability of these methods to certain sparse special graphs, such as the example of 1 → 3, 4, 5 and 2 → 3, 4, 5, where the neighboring node of 1 is (3,4,5), and the non-neighboring node of 1 is 2. However, it is structurally inevitable for non-neighboring node 2 and neighborhood (3,4,5) to be highly correlated. Hence, the proposed Bayesian approach can learn some models that other frequentist algorithms may fail to learn due to their different required assumptions, as long as appropriate hyper-parameters are applied.§ NUMERICAL EXPERIMENTSThis section presents empirical results to support our theoretical findings that Algorithm <ref> can consistently learn sub-Gaussian and 4m-th bounded-moment linear BNs. Specifically, we consider three types of models: (i) Gaussian linear BNs, (ii) sub-Gaussian linear BNs with sequentially Uniform, Gaussian, and two-sided truncated Gaussian error distributions, and (iii) linear BNs with heavy-tailed error distributions, where student t-distributions with 10 degrees of freedom are applied in both low- and high-dimensional settings. Additionally, this section compares the performance of Algorithm <ref> with that of LISTEN <cit.>, BHLSM <cit.>, and TD <cit.> algorithms in terms of edge recovery. The hyper-parameters for the proposed algorithm were set to v_0 = √(1/(100n)), v_1 = 1, τ = 0.0001, T = 0.5, under the assumption of unknown true model information. These parameters could have been selected through cross-validation, as suggested in <cit.>. However, to facilitate faster implementation, we applied fixed values while respecting a small spike (v_0) and tuning (τ) parameters, as recommended in Section <ref>.For our simulation settings, we slightly modified the LISTEN and TD algorithms to improve their performance and stability. Specifically, the modified LISTEN algorithm applied CLIME to estimate each element of the ordering, instead of using it only to estimate the last element and then applying a matrix update approach. This was necessary because the original LISTEN algorithm often failed due to accumulated error in matrix updates. Additionally, we set the regularized regression parameter to √(log p/n) and the hard threshold parameter to half the minimum value of true edge weights, min_(j,k) ∈ E (|β_j,k|/2). 
The modified TD algorithm used ℓ_1-regularized regression for parent estimation, similar to the BHLSM algorithm. The regularized parameter was set to 3√(log p/n) to achieve higher accuracy in our settings. Finally, we set q to the maximum degree of the true moralized graph, d_M. The proposed algorithm and comparison algorithms were evaluated by measuring the average hamming distance between the estimated and true DAGs while varying sample sizes. The hamming distance was calculated as the number of edges that differ between the two graphs; hence, a smaller value is better. §.§ Gaussian Linear BNs We conducted simulations using 100 realizations of p-node Gaussian linear BNs with randomly generated underlying DAG structures for node size p ∈{25, 50, 100, 150, 200} while respecting the maximum degree constraint d_M ∈{3, 5, 8}, as done by <cit.>. Specifically, Erdös and Rényi graphs were considered with edge probability q = min(1, 3 d_M/ p). If the maximum degree of the moralized graph was greater than pre-determined d_M, we generated the graph again with an updated edge probability of q - 0.001 until the maximum degree condition was satisfied. We generated non-zero edge weights uniformly at random in the range β_k,j∈ (-1.0, -0.5) ∪ (0.5, 1.0). Finally, we set all noise variances to σ_j^2 = 2.Figures <ref> (a) - (c) show the average hamming distance of Algorithm <ref> by varying sample size n ∈{100, 200, ..., 800}. Figures <ref> (d) - (f) show the hamming distance against re-scaled sample size C = n / log p. As seen in Figure <ref>, the proposed algorithm recovers the true directed edges better as the sample size increases and the hamming distance converges to 0. Additionally, the empirical curves for different numbers of nodes align more closely with the re-scaled sample size on the horizontal axis. This supports the main result in Theorem <ref> that number of samples n required for successful graph recovery scales logarithmically with number of nodes p in Gaussian linear BNs. Figure <ref> also reveals that Algorithm <ref> requires fewer samples to recover a sparse graph. Specifically, the average hamming distances for 50-node graphs with d_M = 3 and 8 are approximately 5 and 20, respectively, when n = 200. Similar phenomena are shown for all other considered number of samples and nodes. Hence, the simulation results confirm our theoretical findings that the sample complexity of the proposed algorithm relies on the maximum degree. Figure <ref> evaluates the proposed algorithm (BayBHLSM) and the frequentist alternatives, BHLSM, LISTEN, and TD algorithms, in terms of recovering DAGs with p ∈{100, 200} and d_M ∈{3,5,8} by varying n ∈{100, 200, ..., 1000}. As shown in Figure <ref>, the proposed algorithm generally performs as accurately as the comparison algorithms. This reflects that the proposed and comparison algorithms have similar sample complexities. Additionally, the Lasso-based BHLSM and TD algorithms perform better when sample size is small due to the choice of the regularization parameter, not because of the superiority of the frequentist approaches. However, we can also see that the proposed algorithm performs better when the sample size is sufficiently large. A major drawback of the proposed algorithm is its computational cost. For example, the proposed algorithm takes about 5 minutes and 1.5 hours, on average, when learning 100- and 200-node graphs with d_M = 8 given a sample size of n = 100, respectively. 
This is consistent with the computational complexity of the proposed algorithm discussed in Section <ref>, which is sensitive to the number of nodes (O(p^4)). Nevertheless, it is faster than the best-subset-selection-based TD algorithm with a computational complexity of at least O(p^(q+1) q^3). For example, the TD algorithm takes more than 12 hours to learn a 100-node graph with d_M = 8. Hence, we do not present the results of the TD algorithm for large-scale graphs with p = 200 and d_M ≥ 5 due to the excessive run time.

§.§ Sub-Gaussian Linear BNs with Different Error Distributions

This section considers sub-Gaussian linear BNs where heterogeneous error variances and non-Gaussian error distributions are allowed. Hence, we generated 100 sets of samples following the procedure specified in Section <ref>, except that the error distributions were sequentially Uniform (U(-2.5, 2.5)), Gaussian (N(0, 2)), and two-sided truncated Gaussian (N(0, 10) within the interval (-2.5, 2.5)). Figures <ref> and <ref> evaluate the proposed algorithm and the comparison methods in terms of the Hamming distance by varying the sample size. The simulation results in Figures <ref> and <ref> are analogous to the results for Gaussian linear BNs with the same error variances presented in Section <ref>. Specifically, Figure <ref> empirically supports the theoretical result that the proposed algorithm requires a sample size n depending on the maximum degree d_M and log p for successful graph recovery. However, Figure <ref> also shows that the Hamming distance does not reach zero when d_M = 8. This reflects the fact that τ_min in the minimum gap condition can be small when error variances are different. Hence, the proposed algorithm requires a larger number of samples to recover a graph. Nonetheless, Figure <ref> confirms that Algorithm <ref> can consistently learn high-dimensional sparse linear BNs, even when error distributions are non-Gaussian and error variances are different. Figure <ref> shows that the proposed algorithm in our settings recovers the graph as accurately as the frequentist algorithms in terms of the Hamming distance.

§.§ T Linear BNs

This section considers linear BNs with heavy-tailed error distributions. Hence, we generated 100 sets of samples under the procedure specified in Section <ref>, except that the error distributions were Student's t-distributions with 10 degrees of freedom. The performance of the proposed algorithm and the comparison algorithms in terms of the Hamming distance is presented in Figures <ref> and <ref>. The results shown in Figures <ref> and <ref> are similar to the previous simulation results for (sub-)Gaussian linear BNs in Sections <ref> and <ref>. More specifically, Figure <ref> confirms that the proposed algorithm can consistently learn high-dimensional sparse linear BNs with heavy-tailed error distributions. Additionally, Figure <ref> shows that the proposed algorithm in our settings recovers the graph as accurately as the frequentist algorithms in terms of the Hamming distance.

§ REAL DATA

The proposed algorithm was applied to real-world order data from an online shopping mall, which is available at <https://www.kaggle.com/datasets/mervemenekse/ecommerce-dataset>. The original dataset contains the order history of 38997 customers and 42 products. It includes information such as customer ID, gender, product category, and product name.
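For this analysis, the raw order history first has to be aggregated into the customer-by-product count matrix described in the following paragraph. The snippet below is a minimal preprocessing sketch only; the file name and the column names ('E-commerce Dataset.csv', 'Customer_Id', 'Product') are assumptions about the Kaggle file and may need to be adapted.

```python
import pandas as pd

# Minimal sketch: aggregate the order history into an n x p count matrix
# (one row per customer, one column per product, entries = number of orders).
orders = pd.read_csv("E-commerce Dataset.csv")      # file name is an assumption

counts = (
    orders.groupby(["Customer_Id", "Product"])      # one group per (customer, product)
          .size()                                   # number of orders of that product
          .unstack(fill_value=0)                    # customers x products matrix
)

X = counts.to_numpy(dtype=float)                    # expected shape: (38997, 42)
print(X.shape)
```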
In this analysis, we focused on finding the relationships between product order amounts using only the number of orders for each product to highlight the advantages of the proposed algorithm. Hence, the data considered in this analysis consists of n=38997 observations and p=42 variables, where the i-th row and j-th column represent the number of orders for the j-th product by the i-th customer at the online mall.

We begin by describing some characteristics of the orders by category. All products on sale at the online mall are divided into four categories: `Auto & Accessories', `Fashion', `Electronic', and `Home & Furniture'. Specifically, each category contains a similar number of products, with 9, 11, 12, and 10 products in the `Auto & Accessories', `Fashion', `Electronic', and `Home & Furniture' categories, respectively. However, most orders are concentrated in two categories, `Fashion' and `Home & Furniture', with 57.28% of orders for fashion-related products and 32.90% of orders for home-related products. Furthermore, 91.76% of customers made six or fewer purchases, which suggests that most nodes may not be connected by edges. In Figure <ref>, the correlation and partial correlation plots are depicted, revealing the near absence of (conditional) correlation among all 42 variables. Notably, the partial correlation plot in Figure <ref> (b) demonstrates that fashion-related products exhibit modest conditional correlations when accounting for other variables. This makes sense because 10.04% of customers bought multiple fashion-related products, which is a higher percentage than for other categories. The percentages of customers who bought multiple products belonging to the same category are 8.45%, 7.88%, and 2.33% in the `Auto & Accessories', `Electronic', and `Home & Furniture' categories, respectively. Hence, one can expect a sparse underlying graph structure, although some nodes are highly connected.

Figure <ref> shows the directed graph estimated by the proposed algorithm. Since the true model information is unknown, the hyper-parameters of the proposed algorithm are set as v_0 = √(1/(100n)), v_1=1, τ=0.0001, and T=0.5, as in Section <ref>. In Figure <ref>, the color of each node indicates the category to which it belongs. The edges between nodes in the same category are colored the same as the nodes, while edges between nodes in different categories are colored gray. As shown in Figure <ref>, there are 42 edges between nodes belonging to the `Fashion' category. This indicates that the orders of products in the `Fashion' category affect each other, which is consistent with the fact that the percentage of customers who bought multiple fashion-related products is the highest among all categories. For example, a customer who bought a suit was more likely to buy a shirt, formal shoes, and a titan watch. On the other hand, a customer who bought jeans was more likely to buy casual shoes, running shoes, and sneakers. Additionally, there are 15 edges from nodes categorized as `Home & Furniture' to nodes categorized as `Fashion'. This suggests that customers who bought home-related products were also likely to buy fashion-related products. For example, a customer who bought a shoe rack was more likely to buy accessories and shoes, such as a titan watch and formal shoes.
All 57 edges are concentrated in two categories, `Fashion' and `Home & Furniture', which agrees with the fact that most orders are concentrated in these two categories. The directions of the edges are associated with the order amounts of the products. Specifically, directed edges tend to point from less-sold products to more-sold products. For example, the best-selling product, a titan watch, has the highest indegree of 11, while the second best-selling product, formal shoes, has an indegree of 8, which is the second largest. The relationship between the directions of edges and the order amounts is intuitive, because a product that is likely to be purchased together with other items is likely to have high order amounts. Hence, we can conclude that the estimated graph provides reasonable information regarding the purchasing behaviors of online customers. In other words, it demonstrates that the proposed algorithm can identify legitimate edges in a sparse graph with some hub nodes.

§ SUMMARY

This study proposes the first consistent Bayesian algorithm for learning high-dimensional linear BNs with light- and heavy-tailed error distributions. More precisely, this study shows that the proposed algorithm can learn linear Bayesian networks with sub-Gaussian and 4m-th bounded-moment error distributions, with sufficient numbers of samples n = Ω( d_M^2 log p) and n = Ω(d_M^2 p^2/m), respectively. It also shows that the algorithm can learn models that frequentist algorithms relying on different assumptions may fail to learn, while having a comparable sample complexity. However, the proposed algorithm suffers from heavy computational costs and requires careful selection of the hyper-parameters. The hyper-parameters can be chosen by cross-validation, or simply set by keeping the spike (v_0) and tuning (τ) parameters small, as discussed in Sections <ref> and <ref>. The theoretical guarantees of the algorithm are supported through various numerical experiments.

In future work, it would be interesting to develop a faster Bayesian method using the topological layers of a graph, as in other DAG model learning algorithms. Additionally, it would be important to develop Bayesian approaches for other identifiable DAG models, such as nonlinear additive noise models, non-Gaussian linear BNs, and count DAG models.

§ PROOF OF THEOREM <REF>

This section proves that Algorithm <ref> accurately recovers the underlying graph with high probability. The proof is built upon the prior works on linear BN learning algorithms <cit.> and the theoretical results for BAGUS in <cit.>. Here, we restate the proof in our framework. Assume that the true ordering satisfying the backward selection condition in Lemma <ref> is π = (π_1, π_2, ..., π_p) = (p, p-1, ..., 1), and hence π_p+1-r = r. For ease of notation, let π_1:j = (π_1, π_2, ..., π_j). For any matrix A ∈ℝ^p × p, let ||A ||_∞ = max_1≤ i,j ≤ p |[A]_i,j|. Lastly, σ_max^2 = max_1≤ j ≤ p σ_j^2 is the maximum error variance. In this proof, we show, using mathematical induction, that Algorithm <ref> accurately recovers the graph structure if the sample covariance matrix error is sufficiently small. The diagonal entries of the inverse covariance matrix are given as: [Ω]_k,k = 1/σ_k^2 + ∑_l ∈ ch(k) β_k,l^2/σ_l^2, where ch(k) denotes the set of children of node k. Consider a linear BN (<ref>) with sub-Gaussian and 4m-th bounded-moment errors.
For any pre-defined constants C_1>0 and C_2 > C_3>0, assume i) the prior hyper-parameters v_0, v_1, η, and τ satisfy 1/(n v_1) = (C_3/p) · 1/(1+ϵ_1), 1/(n v_0) > C_4/p, v_1^2(1-η)/(v_0^2 η) ≤ ϵ_1 exp[ 2(C_2-C_3) M_Γ (C_4-C_3) n/p^2 ], and τ ≤ C_3 n/(2p) for some constant ϵ_1>0 and C_4 = C_1 + M_Σ^2 · 2(C_1+C_3) M_Γ + 6(C_1+C_3)^2 M_Γ^2 M_Σ^3, ii) the spectral norm bound B_0 satisfies 1/k_1 + 2(C_1+C_3) M_Γ < B_0 < (2nv_0)^1/2, iii) max{ 2(C_1+C_3) M_Γ · max(3M_Σ, 3M_Γ M_Σ^3, 2/k_1^2), 2C_3 ϵ_1/(k_1^2 p) } ≤ 1, and iv) Λ_min(Σ) ≥ k_1. If ||Σ̂ - Σ||_∞ < C_1/d, then ||Ω̂ - Ω||_∞ < 2(C_1+C_3) M_Γ/d, where d = max_r ∈{1,2,⋯,p} max_i ∈{π_1,⋯,π_r} card{j : Ω^(r)_i,j ≠ 0}.

Assume ||Σ̂ - Σ||_∞ < C_1/d. The last element of the ordering, say π_p, is estimated by comparing the diagonal entries of the MAP estimator Ω̂ in Equation (<ref>). More specifically, it is estimated as π̂_p = argmin_k ∈ V [Ω̂]_k,k. If the following inequality holds, Algorithm <ref> estimates node 1 as π̂_p, so the terminal vertex of the graph is correctly recovered, π̂_p = π_p:

min_k ∈π_1:p-1 ( [Ω̂]_k,k - [Ω̂]_1,1 ) > 0.

Since min_k ∈π_1:p-1 ( [Ω̂]_k,k - [Ω̂]_1,1 ) = min_k ∈π_1:p-1 { ( [Ω]_k,k - [Ω]_1,1 ) - ( [Ω̂]_1,1 - [Ω]_1,1 ) + ( [Ω̂]_k,k - [Ω]_k,k ) } ≥ min_k ∈π_1:p-1 { ( [Ω]_k,k - [Ω]_1,1 ) - |[Ω̂]_1,1 - [Ω]_1,1| - |[Ω̂]_k,k - [Ω]_k,k| }, the Inequality (<ref>) holds if

min_k ∈π_1:p-1 ( [Ω]_k,k - [Ω]_1,1 ) > 4(C_1+C_3) M_Γ/d and max_k ∈ V |[Ω̂]_k,k - [Ω]_k,k| < 2(C_1+C_3) M_Γ/d.

Inequality (<ref>) holds due to Assumption <ref>, the condition in Theorem <ref>, and Lemma <ref>. Also, since the constants C_1, C_2, C_3 and the hyper-parameters v_0, v_1, η, τ satisfy the conditions i), ii), iii) in Lemma <ref> and condition iv) holds by Assumption <ref>, we can apply Lemma <ref>. Then Inequality (<ref>) holds. Hence, Inequality (<ref>) holds, which implies that the last element of the ordering is correctly estimated, i.e., π̂_p = π_p = 1, if ||Σ̂ - Σ||_∞ < C_1/d.

Suppose that Assumptions <ref>, <ref>, <ref> and conditions i), ii), iii) in Theorem <ref> are satisfied. If ||Σ̂ - Σ||_∞ < C_1/d, then Ŝ = supp(Ω), where Ŝ = { (j,k) : p_jk ≥ T }, by choosing an appropriate threshold T. If k is a terminal vertex of the graph, then pa(k) = supp([Ω]_k,*) ∖ {k}, where [Ω]_k,* is the k-th row of Ω. Hence, by Lemmas <ref> and <ref>, if ||Σ̂ - Σ||_∞ < C_1/d, the parents of π_p are correctly estimated: p̂a(π_p) = pa(π_p).

Now, suppose that we have correctly estimated {π_p+2-r, …, π_p} for some r = 2, 3, …, p-1, i.e., {π̂_p+2-r, …, π̂_p} = {π_p+2-r, …, π_p}. Then, if ||Σ̂ - Σ||_∞ < C_1/d, the following inequality holds:

||Σ̂^(r) - Σ^(r)||_∞ ≤ ||Σ̂ - Σ||_∞ ≤ C_1/d,

where Σ̂^(r) and Σ^(r) are the sample and true covariance matrices of ( X_π_1, …, X_π_p+1-r ). Also, we can bound the minimum eigenvalue of Σ^(r) by the following inequality:

Λ_min(Σ^(r)) = min_||x||_2=1, x ∈ℝ^p+1-r x^⊤ Σ^(r) x ≥ min_||x||_2=1, x ∈ℝ^p x^⊤ Σ x = Λ_min(Σ) ≥ k_1.

The (p+1-r)-th element of the ordering, say π_p+1-r, is estimated by comparing the diagonal entries of the estimated inverse covariance matrix of ( X_π_1, …, X_π_p+1-r ), denoted as Ω̂^(r). It is correctly estimated if the following inequality holds:

min_k ∈π_1:p-r ( [Ω̂^(r)]_k,k - [Ω̂^(r)]_r,r ) > 0.

By similar arguments used to show Inequality (<ref>), the above inequality holds if

min_k ∈π_1:p-r ( [Ω^(r)]_k,k - [Ω^(r)]_r,r ) > 4(C_1+C_3) M_Γ^(r)/d and max_k ∈π_1:p+1-r |[Ω̂^(r)]_k,k - [Ω^(r)]_k,k| < 2(C_1+C_3) M_Γ^(r)/d.

The first inequality holds due to Assumption <ref>, the condition in Theorem <ref>, and applying Lemma <ref> to Ω^(r).
Also, since the constants C_1,C_2,C_3 and hyper-parameters ν_0,ν_1,η,τ satisfy the conditions i),ii),iii) in Lemma <ref>, condition iv) holds by (<ref>) and Σ^(r) - Σ^(r)_∞≤ C_11/d, we can apply Lemma <ref> to Σ^(r) and Ω^(r). Then the second inequality holds. Thus, if Σ - Σ_∞ < C_1/ d, Inequality (<ref>) holds, which impliesπ_p+1-r = π_p+1-r= r.Again by Lemmas <ref> and <ref>, if Σ - Σ_∞ < C_1/ d, parents of π_p+1-r are correctly estimated:(π_p+1-r) = (π_p+1-r) = (π_p+1-r). Therefore, by the mathematical induction, G=G holds if Σ - Σ_∞ < C_1/ d. Using this result, the following inequality holds:(G = G)≥ ( Σ - Σ_∞ < C_11/d) = 1 - ( Σ - Σ_∞≥ C_11/d) = 1 - (∪_i,j ∈{1,2, ... ,p}[| [Σ]_i,j -[Σ]_i,j | ≥ C_11/d])≥1 - ∑_i,j ∈{1,2, ... ,p}(| [Σ]_i,j -[Σ]_i,j | ≥ C_11/d) Consider a random vector (X_j)_j = 1^p whose covariance matrix is Σ∈ℝ^p× p. * Lemma 1 of <cit.>: When (X_j)_j = 1^p follows a sub-Gaussian distribution, ( |[Σ]_j,k -[Σ]_j,k| ≥ζ) ≤ 4 ·exp( -n ζ^2 / 128( 1+ 4 s_max^2 )max_j ([Σ]_j,j)^2 ), for all ζ∈ (0, max_j [Σ]_j,j 8( 1 + 4 s_max^2 ) ). * Lemma 2 of <cit.>: When (X_j)_j = 1^p follows a 4m-th bounded-moment distribution, ( |[Σ]_j,k -[Σ]_j,k| ≥ζ) ≤ 4 · 2^2mmax_j ( [Σ]_j,j)^2m C_m(K_max + 1) / n^m ζ^2m, where C_m is a constant depending only on m. For a linear BN, the following inequality holds between column sparsity d and maximum degree of the moralized graph d_M: d ≤ d_M+1.Applying Lemmas <ref> and <ref>, for a sub-Gaussian linear BN (<ref>), (G = G)≥1 - ∑_i,j ∈{1,2, ... ,p}(| [Σ]_i,j -[Σ]_i,j | ≥ C_11/d)≥1-4p^2exp(-C_1^2 /128(1+4 s_max^2 )max_j([Σ]_j,j)^2n/d^2)≥ 1-4p^2exp( -C_1^2/128(1+4 s_max^2)max_j( [Σ]_j,j)^2n/(d_M+1)^2). Applying Lemma <ref> and <ref>, for a 4m-th bounded-moment linear BN (<ref>), (G = G)≥1 - ∑_i,j ∈{1,2, ... ,p}(| [Σ]_i,j -[Σ]_i,j | ≥ C_11/d)≥1-4p^22^2mmax_j( [Σ]_j,j)^2mC_m(K_max+1)d^2m/n^mC_1^2m ≥ 1-4p^22^2mmax_j( [Σ]_j,j)^2mC_m(K_max+1)(d_M+1)^2m/n^mC_1^2m. ▪§PROOFS OF LEMMAS §.§ Proof of Lemma <ref>Lemma <ref>The diagonal entries of the inverse covariance matrix are given as:[Ω]_k,k = 1/σ_k^2 + ∑_l ∈(k)β_k,l^2 /σ_l^2. By (<ref>), the inverse covariance matrix can be expressed as Ω = (I_p-B)^⊤(Σ^ϵ)^-1(I_p-B). So the diagonal entries [Ω]_k,k are calculated as following: [Ω]_k,k = 1/σ_k^2 + ∑_l ∈(k)β_k,l^2 /σ_l^2.§.§ Proof of Lemma <ref>Lemma <ref>Consider a linear BN (<ref>) with sub-Gaussian and 4m-th bounded-moment errors. For any pre-defined constants C_1>0 and C_2 > C_3>0, assume i) the prior hyper-parameters v_0, v_1, η, and τ satisfy 1/nv_1 = C_31/p1/ϵ_1, 1/nv_0 > C_41/p v_1^2(1-η)/v_0^2η≤ϵ_1exp[ 2(C_2-C_3)M_Γ(C_4-C_3)n/p^2] ,and τ≤ C_3n/21/p for some constants ϵ_1>0, C_4 = C_1+M_Σ^22(C_1+C_3)M_Γ+6(C_1+C_3)^2M_Γ^2M_Σ^3, ii) the spectral norm bound B_0 satisfies 1/k_1 + 2(C_1+C_3)M_Γ < B_0 < (2nv_0)^1/2, iii) max{2(C_1+C_3)M_Γmax(3M_Σ, 3M_ΓM_Σ^3, 2/k_1^2), 2C_3ϵ_1/k_1^2p}≤ 1, iv) Λ_min(Σ) ≥ k_1.If Σ - Σ_∞ < C_11/d, thenΩ - Ω_∞ < 2(C_1+C_3)M_Γ1/d. Before presenting the proof, we will define two penalize functions pen_SS(θ), pen_1(θ) as following: pen_SS(θ) = -log[(η/2ν_1)e^-|θ|/ν_1 + (1-η/2ν_0)e^-|θ|/ν_0],pen_1(θ) = τ|θ|. By the definition, we can find bounds of the first and second derivatives of pen_SS(δ), respectively. * Bound of the first derivative of pen_SS(δ): Using equation (5) in the appendix of <cit.>, the following inequality holds: 1/n|pen'_SS(δ)| < 1/nν_1(1+ν_1^2(1-η)/ν_0^2η/e^|δ|/ν_0-|δ|/ν_1). 
With the conditions in this lemma and letting ν_1^2(1-η)/(ν_0^2η) = ξexp[ ψ(C_4-C_3)n/p^2] where ξ < ϵ_1, ν_1^2(1-η)/ν_0^2η/e^|δ|/ν_0-|δ|/ν_1≤ξexp[ ψ(C_4-C_3)n/p^2]/exp[ψ(C_4-C_3)n/p^2]≤ξ, when |δ| ≥ψ/p. So we can get the bound of 1/n|pen'_SS(δ)| as 1/n|pen'_SS(δ)| < 1/nν_1(1+ν_1^2(1-η)/ν_0^2η/e^|δ|/ν_0-|δ|/ν_1) ≤ C_31/p1/1+ϵ_1(1+ξ) < C_31/p≤ C_31/d. * Bound of the second derivative of pen_SS(δ): Using equation (7) in the appendix of <cit.> and since the condition iii) implies 1/p≤k_1^2/2C_3ϵ_1, the following inequality holds: 1/2n|pen”_SS(δ)| < ξ/2nν_1 < C_3/2ξ1/p < C_3/2ϵ_11/p≤1/4k_1^2. Using the bounds of the first and second derivatives of pen_SS(δ) derived above, this lemma can be proved as same as Theorem A in the appendix of <cit.> changing √(log p/n) to 1/d. Also, we can omit the sample size bound n ≥ M^2 log p in Theorem A in the appendix of <cit.> so this lemma can be used more freely without considering the condition of n and p.§.§ Proof of Lemma <ref>Lemma <ref>Suppose that Assumption <ref>, <ref>, <ref> and conditions i), ii), iii) in Theorem <ref> are satisfied. If Σ - Σ_∞ < C_1/d, S = supp(Ω) where S= { (j,k) : p_jk≥ T },by choosing appropriate threshold T. Set the threshold T satisfies the following range: log(T/1-T) ∈log(ν_0η/ν_1(1-η))+ (0, (θ_min-2(C_1+C_3)M_Γ1/d)(1/ν_0-1/ν_1)). Equation (9) of <cit.> is equivalent to log(p_ij/1-p_ij) = -log(ν_1(1-η)/ν_0η) + |θ_i,j|(1/ν_0-1/ν_1) =log(ν_0η/ν_1(1-η))+ |θ_i,j|(1/ν_0-1/ν_1). To check a support of the estimated inverse covariance matrix is correctly estimated, we consider two cases which an element of the true inverse covariance matrix is zero or not, θ_i,j = 0 and θ_i,j≠ 0. * When θ_i,j=0, by constructor in the proof of Lemma <ref>, θ_i,j=0. Hence, the following inequality holds using the lower bound of Range (<ref>): log(p_ij/1-p_ij) =log(ν_0η/ν_1(1-η)) <log(T/1-T). This inequality implies p_i,j < T, so the zero entries are recovered correctly. * When θ_i,j≠ 0, applying Lemma <ref>, the following inequality holds: |θ_i,j| > θ_min - 2(C_1+C_3)M_Γ1/d > 0. Then the following inequality holds using the upper bound of Range (<ref>): log(p_ij/1-p_ij) =log(ν_0η/ν_1(1-η)) + |θ_i,j|(1/ν_0-1/ν_1) >log(ν_0η/ν_1(1-η)) + (θ_min-2(C_1+C_3)M_Γ1/d)(1/ν_0-1/ν_1) >log(T/1-T). This inequality implies p_i,j > T, so the nonzero entries are recovered correctly. Therefore, a support of the true inverse covariance matrix is correctly recovered, supp(Ω) = supp(Ω).§.§ Proof of Lemma <ref>Lemma <ref>If k is a terminal vertex of the graph, then (k) = supp(Ω_k,*)∖{k}. By (<ref>), the inverse covariance matrix can be expressed as Ω = (I_p-B)^⊤(Σ^ϵ)^-1(I_p-B). So the entries [Ω]_k,j are calculated as following: [Ω]_k,j = -1/σ_j^2β_k,j -1/σ_k^2β_j,k + ∑_l ∈(k) ∩(j)β_k,lβ_j,l/σ_l^2. If k is a terminal vertex of the graph, β_k,j=0 for all j ∈ V∖{k}. Then the following equation holds: [Ω]_k,j = -1/σ_k^2β_j,k. Since (k) = {j ∈ V | β_j,k≠ 0}, the parent set of k can be found using the support set of the k-th row of the inverse covariance matrix as following: (k) = supp(Ω_k,*)∖{k} §.§ Proof of Lemma <ref>Lemma <ref> [Error Bound for the Sample Covariance Matrix] Consider a random vector (X_j)_j = 1^p and suppose that its covariance matrix is Σ. * Lemma 1 of <cit.>: When (X_j)_j = 1^p follows a sub-Gaussian distribution, ( |[Σ]_j,k -[Σ]_j,k| ≥ζ) ≤ 4 ·exp( -n ζ^2 / 128( 1+ 4 s_max^2 )max_j ([Σ]_j,j)^2 ), for all ζ∈ (0, max_j [Σ]_j,j 8( 1 + 4 s_max^2 ) ). 
* Lemma 2 of <cit.>: When (X_j)_j = 1^p follows a 4m-th bounded-moment distribution, ℙ( |[Σ̂]_j,k - [Σ]_j,k| ≥ ζ ) ≤ 4 · 2^2m max_j ([Σ]_j,j)^2m C_m (K_max + 1) / (n^m ζ^2m), where C_m is a constant depending only on m.

Since it is the same as Lemmas 1 and 2 from <cit.>, we omit the proof.

§.§ Proof of Lemma <ref>

Lemma <ref>: For a linear BN, the following inequality holds between the column sparsity d and the maximum degree of the moralized graph d_M: d ≤ d_M + 1.

If two nodes j, k are not connected in the moralized graph, it means that [B]_j,k = [B]_k,j = 0 and there is no common child of j and k. Then, by (<ref>), it can be shown that [Ω]_k,j = 0. So if [Ω]_j,k ≠ 0, nodes j and k are connected in the moralized graph. Hence, the following inequality holds: d = max_i = 1,2,⋯,p card{j : [Ω]_i,j ≠ 0} = 1 + max_i = 1,2,⋯,p card{j ≠ i : [Ω]_i,j ≠ 0} ≤ 1 + max_i = 1,2,⋯,p card{j : (i,j) ∈ M(G)} = d_M + 1, where M(G) is the moralized graph of G. | http://arxiv.org/abs/2311.15610v1 | {
"authors": [
"Seyong Hwang",
"Kyoungjae Lee",
"Sunmin Oh",
"Gunwoong Park"
],
"categories": [
"stat.ML",
"cs.LG",
"math.ST",
"stat.ME",
"stat.TH"
],
"primary_category": "stat.ML",
"published": "20231127081053",
"title": "Bayesian Approach to Linear Bayesian Networks"
} |
ICAMS, Ruhr-Universität Bochum, Bochum, Germany
ICAMS, Ruhr-Universität Bochum, Bochum, Germany
ICAMS, Ruhr-Universität Bochum, Bochum, Germany

We extend the basis functions of the Atomic Cluster Expansion to graphs. This naturally leads to a representation that enables us to describe semilocal interactions in physically and chemically transparent form. Simplifications of the graph Atomic Cluster Expansion recover the currently most accurate message-passing representations of atomic interactions. We demonstrate the accuracy and efficiency of our expansion for a number of small molecules, clusters and a general-purpose model for carbon.

Atomic Cluster Expansion for semilocal interactions beyond equivariant message passing Ralf Drautz January 14, 2024 ======================================================================================

§ INTRODUCTION

Semilocal interatomic interactions beyond the reach of Hamiltonian matrix elements ⟨iα|Ĥ|jβ⟩ between orbitals α, β located on atoms i and j are ubiquitous in quantum mechanical calculations. Semilocal interactions involve contributions of different origin. First, diagonalization of the Hamiltonian induces interactions that reach multiple times beyond the range of the Hamiltonian matrix, as can be seen from a simple series expansion that results in contributions of the form ∑_kγ ⟨iα|Ĥ|kγ⟩⟨kγ|Ĥ|jβ⟩ and higher order <cit.>. Second, relaxation of the electronic structure in self-consistent calculations induces interactions beyond the direct range of the Hamiltonian matrix elements <cit.>. Third, direct interactions induced by electronic correlations, for example dispersion corrections in density functional theory calculations, extend beyond the reach of Hamiltonian matrix elements.

We base our analysis of local and semilocal contributions on the Atomic Cluster Expansion (ACE) that enables accurate and efficient parametrization of many-atom interactions. The basis functions of ACE are complete and can represent other local descriptors <cit.>. For example, the smooth overlap of atomic positions (SOAP) descriptor <cit.>, the Spectral Neighbor Analysis Potential (SNAP) <cit.>, the atom-centred symmetry functions (ACSF) <cit.> and many other descriptors can be cast in the form of ACE. By expanding Cartesian in spherical coordinates, other models such as the moment tensor potentials (MTP) <cit.> can also be represented <cit.>. ACE is efficient to evaluate and linear-scaling with the number of basis functions irrespective of the body order of the expansion <cit.>. This means that higher body-order interactions are captured efficiently and machine learning frameworks for non-linearly transforming descriptors to energies, such as neural network potentials, Gaussian process regression or kernel methods, are no longer necessary but optional for achieving accurate models. In fact, it was demonstrated in a number of publications that ACE exceeds the accuracy and numerical efficiency of more traditional machine learning interatomic potentials <cit.>. The ACE formalism allows for the seamless incorporation of additional features, such as atomic magnetic moments and atomic or orbital charges, as well as the representation of vectorial and tensorial outputs <cit.>. ACE is neither bound to atomic interactions nor three-dimensional space and was extended to jet tagging <cit.>, Hamiltonian matrix representations <cit.> and wave functions <cit.>. The fact that ACE builds on a basis representation enables efficient uncertainty prediction and active learning and exploration <cit.>.
ACE may therefore been seen to provide a general and unified representation of atom-centered approaches <cit.>. In the past years graph and message-passing representations <cit.> were developed in parallel to atom-centered representations and only recently unified views on atom-centred and message-passing approaches emerged<cit.>. Here ACE-based message passing <cit.> provided the most accurate models <cit.> as these incorporated semilocal interactions <cit.>. On each atom messages in the form of ACE are evaluated and passed to neighboring atoms, which use these to construct the next layer of ACE messages until after some layers energy and forces are evaluated. While the unified atom-centered and message-passing representations are elegant and accurate, one can argue that in some aspects they fall behind the original ACE that provides a complete basis for the space of atomic environments.Here we extend local ACE to incorporate graphs. We show that graph ACE encompasses atom-centred local ACE as well as multi-atom and multi-layer message-passing architectures. We demonstrate that ACE generalizes and outperforms current local and semilocal machine learning interatomic potentials with respect to accuracy and efficiency.We start by introducing graph-based cluster basis functions in Sec. <ref>. We then show that invariance and equivariance with respect to translation, rotation, inversion and permutation can be achieved along the same lines of local ACE in Sec. <ref> before we compare local and global, graph ACE in detail in Sec. <ref>. Here we use the term global as the graph basis functions can in principle extend over all atoms, but as we will see this does not imply that the number of atoms in a model is fixed. We discuss the quantum mechanical foundation of graph ACE in Sec. <ref> before we rationalize details of the expansion exemplified by small clusters in Sec. <ref>. The global, graph ACE is significantly more complex than local ACE and we introduce a series of simplifications in Sec. <ref>. This leads to a recursive evaluation scheme within the dandelion approximation in Sec. <ref>. We then show that the dandelion approximation is closely related to message-passing architectures with multi-atom messages and in fact provides a derivation of the multi-ACE framework in Sec. <ref>. After having established that message-passing architectures can be viewed as a recursive evaluation of graph ACE, we further show that graph ACE exceeds the numerical accuracy and efficiency of other potentials for molecules, clusters and solids in Sec. <ref>. We conclude in Sec. <ref>.ACE is used by different research groups in different contexts and designations, which has led to a family of ACE models and it is sometimes hard to grasp the relation between them. For example, the performant implementation of ACE in LAMMPS <cit.> is called PACE<cit.> and PACEmaker<cit.> is software for the parametrization of ACE. PACEmaker enables non-linear and linear models as well as radial basis optimization. Software for obtaining linear ACE models is available as ACEsuit <cit.> and FitSNAP <cit.>.ACE-related representations for other Lie groups than the rotation group or for global expansions were termed G-equivariant CE <cit.>or boost-invariant polynomials (BIP) <cit.>, respectively, and it is tempting to call the tensor reduction of ACE coefficients<cit.> trACE. 
The magnetic extension of ACE <cit.> was referred to as mACE, message-passing multilayer ACE architectures were abbreviated as ml-ACE <cit.> and multi-ACE <cit.>, and the leading version and implementation of multi-ACE was designated as MACE <cit.>. In this paper we show how several of the different ACE variants are comprised in a single, straightforward generalization of ACE. To discriminate between different ACE flavors we specify the model details after the basic name ACE, which we hope contributes to making the connections between ACE variants more transparent.

§ GRAPH ATOMIC CLUSTER EXPANSION

ACE <cit.> builds on a decomposition of the energy, or any other scalar, vectorial or tensorial quantity, into atomic contributions E = ∑_i E_i, where i = 1, …, N indexes the atoms. Such a decomposition is always possible and in itself not an approximation. An approximation is introduced in practice, however, by confining interactions to a cut-off distance beyond which pairs of atoms no longer interact directly, which also confines ACE to the local atomic environment of each atom. We will refer to this as the local ACE in this paper, in contrast to the global, graph ACE that we will introduce next. As we will see, local and global ACE are closely related, with local ACE being a subset of graph ACE.

A decomposition into atomic quantities from the very start is not necessary, and avoiding the decomposition allows us to incorporate semilocal interactions naturally within the ACE framework. We are interested in the energy (or any other scalar, vectorial or tensorial property) of a system of N atoms, E = E(σ). The energy depends on the atomic positions r_1, r_2, …, r_N, the species of the atoms μ_1, μ_2, …, μ_N as well as other properties such as atomic magnetic moments m_i, atomic charges q_i, dipole moments, etc. that we will not specify in detail, but we will assume that μ_i designates all atomic variables on atom i. The features are collected in the configuration σ = ( r_1, μ_1, r_2, μ_2, …, r_N, μ_N). The ACE framework allows one to incorporate these different dependencies seamlessly in a single coherent model <cit.>.

§.§ Configurations

In local ACE, for the evaluation of the energy E_i, or other scalar, vectorial or tensorial quantities, the configuration σ is centered on atom i, σ_i = ( r_1i, μ_1, r_2i, μ_2, …, r_Ni, μ_N), with r_ji = r_j - r_i. The vector r_ii = 0 of the central atom is ignored, therefore σ_i has only N-1 position vector entries. This is done for every atom and the energy is expressed as E = ∑_i E_i(σ_i). Centering the configuration on the atoms brings two advantages: first, it seems natural to localize the expansion for E_i on atom i, and second, any function of σ_i will be invariant under translation. For example, for four atoms N=4 one has four configurations σ_0111 = (μ_1, r_21, μ_2, r_31, μ_3, r_41, μ_4), σ_2022 = (r_12, μ_1, μ_2, r_32, μ_3, r_42, μ_4), σ_3303 = (r_13, μ_1, r_23, μ_2, μ_3, r_43, μ_4), σ_4440 = (r_14, μ_1, r_24, μ_2, r_34, μ_3, μ_4), where we changed notation slightly. The index 0 signals the removed vector and the remaining indices the reference atom that was subtracted. These are the configurations that are used in the computation of the atomic energies for four atoms in local ACE. There are, however, many additional configurations of four atoms, for example, σ_0112 = (μ_1, r_21, μ_2, r_31, μ_3, r_42, μ_4), σ_2012 = (r_12, μ_1, μ_2, r_31, μ_3, r_42, μ_4), σ_0122 = (μ_1, r_21, μ_2, r_32, μ_3, r_42, μ_4), σ_0123 = (μ_1, r_21, μ_2, r_32, μ_3, r_43, μ_4), …, just to show a few.
These configurations also provide a complete description of the atomic positions and species, in a sense that all atomic positions can be reconstructed up to a translation. Not all combinations are possible, for example, configuration σ_3012 contains _13 and _31 and therefore does not allow for a reconstruction of atomic positions up to a translation. At first glance the additional configurations appear less attractive than the configurations employed for , as they are not localized on an atom. As we will see, however, for this reason these configurations are suitable for the description of semilocal interactions and global, graph builds on all possible configurations.§.§ Single-particle basis functionsWe attach to each atom i basis functions ϕ_i u() = ϕ_i u(Δ),with Δr =- _i and where u is a basis function index. Atom-centered basis functions help to ensure translational invariance and introduce a natural, distant dependent interaction hierarchy, but in principle different basis functions are possible, too. We further assume that the basis functions approach zero at some cut-off distance. The cut-off distance can vary from atom to atom, but for ease of notation we just usehere. We ask that the basis functions are orthonormal and complete,⟨i u|j u'⟩ = ∫ϕ_iu^*() ϕ_ju'() dΔr = δ_ijδ_uu' ,∑_iu|iu⟩⟨iu| = ∑_iuϕ_iu^*() ϕ_iu(') = δ( - '),Extensions to non-orthogonal basis functions are easily possible as well as to continuous basis functions or different forms of the inner product <cit.>. We further attach to the atoms basis functions χ_iκ(μ) that depend on the species of the atom and/or on other variables. These basis functions can depend on discrete variables, for example, different atomic species or continuous variables such as magnetic moments with magnitudes and directions. We ask these basis functions to be orthonormal and complete on each atom, too, ⟨i κ|i κ'⟩ = ∫χ_iκ(μ)^* χ_iκ'(μ) dμ = δ_κκ' , ∑_κ|iκ⟩⟨iκ| = ∑_κχ_iκ(μ)^* χ_iκ(μ') = δ(μ - μ'), where for discrete variables in Eq.(<ref>) the integration is replaced by a summation and on the right hand side of Eq.(<ref>) the Dirac delta function becomes a Kronecker delta δ_μμ'.As we are aiming at representations with well defined transformation under given group operations, we work with basis functions that are also basis functions of the irreducible representations of the group in question, i.e. for the rotation group we useϕ_i u() = ϕ_i u(r,e ) = R_nl(r) Y_lm( r̂),with the multi-index u = nlm, distance dependent radial functions R_nl with r = | - _i| and spherical harmonics Y_lm that depend on the direction r̂ = ( - _i)/| - _i|. Extension of the formalism to incorporate many other Lie groups with accessible irreducible representations <cit.> or basis functions in Cartesian coordinates <cit.> ϕ_i u() = f(| - _i|) ( - _i) ⊗ ( - _i)⊗… is straightforward <cit.>.We note that by intuition it should be possible to represent chemistry in terms of distant dependent basis functions of the form Eq.(<ref>) only, if these basis functions are made to depend on atomic species. In turn basis functions χ_iκ(μ) that depend on chemistry would not be necessary. This approach is followed, for example, by MTP <cit.>. We will make the relation to species-dependent basis functions explicit in Sec. <ref>.§.§ Cluster basis functionsNext we build cluster basis functions from products of the single-particle basis functions,Φ_α = ∏_k=1^Nϕ_i_k u_k(_k) ∏_k'=1^N χ_k'κ_k'(μ_k'),with 0 ≤ i_k ≤ N and k = 1, …, N. 
The indices i_1, i_2, …, i_N label the configuration as illustrated in Eq. (<ref>) and following. Exactly one entry has to vanish, i_k = 0, and for this ϕ_0 u_k(r_k) = 1, i.e. the product of distance dependent basis functions contains only N-1 terms as there are only N-1 vectors required to reconstruct the atomic positions of N atoms up to a translation. The configuration σ_i_1 i_2 … i_N must allow for the reconstruction of atomic positions up to a translation. The products are limited to order N as in a system with N atoms at most N atoms can interact. To aid a local interpretation we ask ϕ_i 0 = 1 and χ_k 0(μ_k) = 1. The cluster α collects all atoms i_k with i_k ≠ 0 and basis function indices u_k ≠ 0 and κ_k' ≠ 0. When all basis function indices on all atoms are zero, the cluster α is empty, α = 0, with Φ_0 = 1. The single-particle basis function indices of cluster α are given by v = (u, κ). The lowest product-order basis functions take the form Φ_i = χ_i κ(μ_i), Φ_ij = χ_i κ(μ_i) ϕ_i u(r_j) χ_j κ_1(μ_j), Φ_ijk = χ_i κ(μ_i) ϕ_i u(r_j) χ_j κ_1(μ_j) ϕ_j u'(r_k) χ_k κ_2(μ_k), where these examples show physically and chemically meaningful basis functions as will be discussed in the next Sec. <ref>. For a given configuration σ_i_1 i_2 … i_N, by construction the cluster basis functions are orthonormal and complete, ⟨α v|β v'⟩ = ∫ Φ_α v^*(σ) Φ_β v'(σ) dσ = δ_αβ δ_v v', ∑_α v |α v⟩⟨α v| = ∑_α v Φ_α v^*(σ) Φ_α v(σ') = δ(σ - σ'), where the sum and integral are carried out over discrete and continuous configuration variables, respectively. The algebra is identical to local ACE <cit.> and closely related to the cluster expansion in alloy theory <cit.>. Completeness Eq. (<ref>) holds for every configuration individually, simply because it is possible to reconstruct atomic positions up to a translation from a configuration. Different from local ACE, the basis functions are not localized on a single atom only. As the expansion is complete, for any given configuration the energy or other scalar, vectorial or tensorial properties can be represented as a linear combination of the cluster basis functions E = ∑_α J_α Φ_α(σ). The expansion coefficients are formally obtained as J_α = ⟨α|E⟩ = ∫ Φ_α^*(σ) E(σ) dσ, where the integral, or sum for discrete variables, is taken over all variables. Different from local ACE, here Eqs. (<ref>) and (<ref>) should be seen as formal results that hold for any admissible configuration but tell us little about the convergence and effectiveness of the expansion. In the following we will work with all possible configurations simultaneously and not only with a single configuration per atom as in local ACE. As every configuration formally provides a complete basis, working with all possible configurations necessarily leads to a globally overcomplete set of basis functions. However, as we will see, this will give us freedom for physical interpretation and will allow us to select more sensitive basis functions, which ultimately leads to more accurate models. In Sec. <ref> we compare and contrast local and global ACE. In order to prepare for this we analyse the cluster basis functions in more detail in the following.

§.§ Graph structure and reduction

The cluster basis functions Eq. (<ref>) are evaluated for a given cluster from a configuration. The clusters are graphs with directed edges that correspond to bonds i → j decorated with single-particle basis functions ϕ_iu(r_j) and where atoms correspond to nodes decorated with basis functions χ_i κ(μ).
Considerable simplifications and reductions of cluster basis functions are achieved by the following observations.§.§.§ Topology The expansion coefficient J_α associated to cluster basis function Φ_α is independent of atomic positions, from Eq.(<ref>). Therefore only graph topology and for a given topology edge orientation and basis function indices need to be considered for the classification of the cluster basis functions and their expansion coefficients J_α, while the positions of the nodes only affect the numerical values of the cluster basis functions. (For determining graph topology we ignore edge directions. The reason for this will become clear with the introduction of root nodes.)§.§.§ Connected graphsA cluster basis function can only contribute if it is possible to reach any node in the graph from any other node by a walk along graph edges, irrespective of edge orientations. If a cluster basis function consists of two or more graph fragments that are not connected by at least one graph edge, then the graph fragments can be transported rigidly to infinite distance from each other and this will not change the numerical value of thecluster basis function. Therefore the corresponding cluster basis function is non-local and not suitable for the description of local properties that arise from the interaction of atoms. See illustration in Fig. <ref> b.§.§.§ Edges shorter than cut-off distanceCluster basis functions on a graph with an edge i → jthat is longer thanvanish identically as ϕ_iv(_j)=0. Therefore these cluster basis function cannot contribute, as illustrated in Fig. <ref> c. §.§.§ Maximum one incoming edge per nodeUp to a maximum of N-1 directed edges may come out of a node, while at most a single directed edge may enter any node, as every atomic position is present at most once in any of the cluster basis functions. This limits graphs with n vertices to n-1 edges. The graphs are necessarily acylic as a cycle would imply redundant geometrical information that by construction may not be present in admissable configurations. We determine the root node in a graph as the node from which any other node may be reached by directed walks involving edges that point outwards only. In Fig. <ref> we illustrate graphs with n-1 edges and show possible graphs up to order four. The graphs that are part of local are stars with n-1 edges that all come out of one of the n nodes.§.§ Trees and subtreesAdmissible graphs are trees. Each of the nodes in a tree can be a root node. As all edges point outwards from the root node, specification of the root node fully determines edge directions, which means that in a tree with n nodes n configurations with different edge orientations exist. In Fig. <ref> we show tree graphs up to order six without indication of edge directions. Often several subtrees are attachedto a root node. In the following we break trees into subtrees that come out of the root node. We use a basic topological classification to describe trees and subtrees. For each node the distance from the root node is measured in numbers of edges. The distances are ordered to describe the paths to reach each each node from small to large as far as possible, i.e. (123234) is a subtree that branches at node 2 into (23) and (234).In Fig. <ref> the topological classification for subtrees with up to five nodes is shown.§.§ Atomic propertiesAtoms form the nodes of the graphs of the cluster basis functions. 
For the computation of properties it seems sensible to associate the contribution of a particular cluster basis function to one or several atoms with the aim of achieving a decomposition into atomic quantities of the form E = ∑_i E_i. It is evident that this decomposition is a matter of choice. For example, the contributionJ_αΦ_α of cluster basis function Φ_α that is evaluated on a graph with several nodes could be split equally among the graph nodes. Alternatively, the contribution J_αΦ_α could be added to a particular atom. It is clear that formally there are infinitely many choices how to define atomic properties E_i that all lead to the same global energy E = ∑_i E_i and gradients on the atoms. Further constraints are required for a unique definition of E_i, such as, for example, symmetrization of many-body interactions and assigning their contributions equally to the nodes <cit.>. However, any decomposition into atomic energies is a matter of choice as all physical observables such as energy, forces, etc. are unaffected by the details of the decomposition and atomic site energies E_i are not directly useful for training <cit.>. Therefore a particular definition of atomic properties E_i reflects the intuition of the authors but cannot be justified from observable physical or chemical properties. Here we choose to evaluate atomic properties by adding the contribution of each graph to its root node.To this end we introduce atomic bases for tree graphs that generalize the atomic base of local ,A^(1)_iκ v_1 =χ_i κ(μ_i) ∑_j ϕ_iu_1(r_j) χ_j κ_1(μ_j) =[ v_1]_iκ , A^(12)_i κ v_1 v_2 =χ_i κ(μ_i)∑_j_1 j_2ϕ_iu_1(r_j_1)χ_j_1 κ_1(μ_j_1) ×ϕ_j_1 u_2(r_j_2)χ_j_2 κ_2(μ_j_2) = [ v_1[ v_2] ]_iκ , A^(11)_i κ v_1 v_2 =χ_i κ(μ_i)∑_j_1 j_2ϕ_iu_1(r_j_1)χ_j_1 κ_1(μ_j_1) ×ϕ_i u_2(r_j_2)χ_j_2 κ_2(μ_j_2) = [ v_1][ v_2]_iκ ,A^(123)_i κ v_1 v_2 v_3 =[ v_1 [ v_2 [v_3 ]]]_i κ , A^(122)_i κ v_1 v_2 v_3 =[ v_1 [ v_2] [v_3 ]]_i κ , A^(112)_i κ v_1 v_2 v_3 =[ v_1] [ v_2 [v_3 ]]_i κ , A^(111)_i κ v_1 v_2 v_3 =[ v_1] [ v_2] [v_3 ]_i κ , A^(1234)_i κ v_1 v_2 v_3 v_4 =[ v_1 [ v_2 [v_3 [v_4]]]]_i κ , A^(1223)_i κ v_1 v_2 v_3 v_4 =[ v_1 [ v_2 ] [v_3 [v_4]]]_i κ , A^(1222)_i κ v_1 v_2 v_3 v_4 =[ v_1 [ v_2 ] [v_3] [v_4]]_i κ ,etc. for higher order, with v_k= ( u_k, κ_k ) and where the superscript indicates graph topology. The first terms A^(1), A^(11), A^(111), A^(1111), … are the basis functions of local . The summation over atoms has to be carried out over values j_1 ≠ j_2, j_1 ≠ j_3, …, j_2 ≠ j_3, … to avoid self-interactions. In Sec. <ref> we discuss self-interactions that arise from unconstrained summation, i.e. summation that does not respect j_n ≠ j_k. The short-hand notation with square brackets provides another, computable form of the graph topology that we use for coding . Each left square implies advancing across one edge along the tree and summation over neighbors of the following basis function, while closing a square bracket means retreating one edge. The number of left and right square brackets are the same, therefore the root node is always reached at the end. For example, the atomic base of the cluster basis function with indices v v_1 v_2 v_3 v_4 v_5 on tree (123234) is computed as [v[v_1[v_2]][v_3[v_4[v_5]]]].We can now write the expansion of a property E in terms of atomic contributions E = ∑_i E_i. We order the graphs in the expansion by the number of nodes, corresponding to the body order. 
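Before the expansion is written out term by term below, the square-bracket notation can be made concrete with a short sketch. The following minimal example uses a toy scalar pair function standing in for R_nl Y_lm, omits the chemical basis functions χ, and evaluates the chain base [v_1[v_2]]_i and the star base [v_1][v_2]_i by explicit sums over pairwise-distinct atoms; it illustrates the bookkeeping only and is not an actual ACE implementation.

```python
import numpy as np

def phi(v, r_a, r_b, r_cut=4.0):
    """Toy single-particle basis function for the directed edge a -> b
    (assumption: a simple radial polynomial with cutoff; the real basis
    uses radial functions times spherical harmonics, R_nl * Y_lm)."""
    d = np.linalg.norm(r_b - r_a)
    return 0.0 if d >= r_cut else d**v * (1.0 - d / r_cut) ** 2

def A_chain(i, v1, v2, R):
    """[v1[v2]]_i: open bracket = advance one edge and sum over atoms."""
    n = len(R)
    return sum(phi(v1, R[i], R[j1]) * phi(v2, R[j1], R[j2])
               for j1 in range(n) for j2 in range(n)
               if j1 != i and j2 != i and j1 != j2)

def A_star(i, v1, v2, R):
    """[v1][v2]_i: both edges come out of the root node i."""
    n = len(R)
    return sum(phi(v1, R[i], R[j1]) * phi(v2, R[i], R[j2])
               for j1 in range(n) for j2 in range(n)
               if j1 != i and j2 != i and j1 != j2)

R = np.random.default_rng(0).uniform(0.0, 3.0, size=(5, 3))  # positions of 5 atoms
print(A_chain(0, 1, 2, R), A_star(0, 1, 2, R))
```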
Terms including up to five nodes are given as E_i = E_0 + ∑_κ vc^(1)_κ v A^(1)_iκ v+∑_κ v_1 v_2 c^(11)_κ v_1 v_2 A^(11)_iκ v_1 v_2 +∑_κ v_1 v_2 c^(12)_κ v_1 v_2 A^(12)_i κ v_1 v_2 +∑_κ v_1 v_2 v_3 c^(111)_κ v_1 v_2 v_3 A^(111)_iκ v_1 v_2 v_3 + ∑_κ v_1 v_2 v_3 c^(112)_κ v_1 v_2 v_3 A^(112)_iκ v_1 v_2 v_3+ ∑_κ v_1 v_2 v_3 c^(122)_κ v_1 v_2 v_3 A^(122)_iκ v_1 v_2 v_3+∑_κ v_1 v_2 v_3 c^(123)_κ v_1 v_2 v_3 A^(123)_iκ v_1 v_2 v_3+ ∑_κ v_1 v_2 v_3 v_4 c^(1111)_κ v_1 v_2 v_3 v_4 A^(1111)_iκ v_1 v_2 v_3 v_4 + ∑_κ v_1 v_2 v_3 v_4c^(1112)_κ v_1 v_2 v_3 v_4A^(1112)_iκ v_1 v_2 v_3 v_4 + ∑_κ v_1 v_2 v_3 v_4c^(1212)_κ v_1 v_2 v_3 v_4 A^(1212)_iκ v_1 v_2 v_3 v_4 + ∑_κ v_1 v_2 v_3 v_4c^(1122)_κ v_1 v_2 v_3 v_4 A^(1122)_iκ v_1 v_2 v_3 v_4 + ∑_κ v_1 v_2 v_3 v_4c^(1222)_κ v_1 v_2 v_3 v_4 A^(1222)_iκ v_1 v_2 v_3 v_4 + ∑_κ v_1 v_2 v_3 v_4c^(1123)_κ v_1 v_2 v_3 v_4 A^(1123)_iκ v_1 v_2 v_3 v_4 + ∑_κ v_1 v_2 v_3 v_4c^(1223)_κ v_1 v_2 v_3 v_4 A^(1223)_iκ v_1 v_2 v_3 v_4 + ∑_κ v_1 v_2 v_3 v_4c^(1233)_κ v_1 v_2 v_3 v_4 A^(1233)_iκ v_1 v_2 v_3 v_4+ ∑_κ v_1 v_2 v_3 v_4c^(1234)_κ v_1 v_2 v_3 v_4 A^(1234)_iκ v_1 v_2 v_3 v_4 + … The star graphs (1), (11), (111), (1111) are part of local , all others are new additions from graph . Clearly global ACE generates a great number of different basis functions and associated coefficients and the main effort in the remainder of this paper is to reduce the complexity of the expansion. Before we get to this, we will discuss transformation under rotation and inversion in the following section. § EQUIVARIANCEFor a model of the interatomic interaction we request invariance with respect to translation, rotation, inversion and permutation (TRIP). If the interest is in the expansion of vectorial or tensorial quantities, one requires TRIP equivariance, i.e. rotation of the vectorial output or of the input graphs leads to the same result. Invariance with respect to permutation of identical atoms, i.e. chemical species, is built into as the cluster basis functions are formed from products of the single particle basis functions, Sec. <ref>. Invariance under translation is ensured as all coordinate inputs are formed as differences that are unchanged by translation. Specifically we require equivariance with respect to the group E(3), which is the semidirect product group of translations T(3) and rotations and inversions O(3), E(3) = T(3) ⋊ O(3).The orthogonal group O(3) is the group of 3× 3 matrices with determinant ±1,O(3) = C_2 × SO(3), where SO(3) is the group of rotations in three dimensions and C_2 the cyclic group of order two for inversion.To ensure equivariance under rotation and inversion, we rely on the properties of spherical harmonics that are basis functions of the irreducible representations of the rotation group. Evaluation of cluster basis functions on the various graphs leads to products of spherical harmonics. The products of spherical harmonics can be reduced to irreducible representations with generalized Clebsch-Gordan coefficients, as discussed in detail in Refs. <cit.>. The mechanism for the reduction to invariant or suitable, equivariant basis functions is identical for the star graphs of local and the more general graphs of global . Spherical harmonics also possess well defined properties under inversion, which allows one to incorporate appropriate transformation.We therefore follow to achieve equivariance with respect to rotation and inversion. 
It suffices to express the expansion coefficients in the form <cit.> c_κ κnlm = ∑_LM c̃_κ κnl L C_lm^LM, where v_i = (κ_i n_i l_i m_i) are the indices of the cluster basis function, κnlm = v = (v_1, v_2, …), and c_κ κnlm are the expansion coefficients in Eq. (<ref>). The generalized Clebsch-Gordan coefficients are given by C_lm^LM, with angular momenta L for different internal couplings of the coefficients, and the expansion coefficients c̃_κnlL are independent and trainable parameters. For simplicity we omitted the indices for graph topology, as these are unaffected by the transformation and identical for c and c̃. Note that here we exchanged c and c̃ compared to previous work <cit.>, simply in order that we do not have to write expansion coefficients as c̃ everywhere. The analysis for the group O(3), which trivially extends to E(3) for the simulation of matter in three dimensions, is directly transferable to other groups for which irreducible representations and generalized Clebsch-Gordan coefficients are available or can be generated <cit.>, so that local and graph ACE are applicable directly in other symmetries and dimensions.

§ LOCAL AND GLOBAL ACE

The derivation of local ACE starts by assuming a decomposition of the property of a system into its atomic constituents, E = ∑_i E_i. For each of the atomic constituents E_i an ACE expansion is then carried out. To this end, local ACE employs configurations centered on each atom. The derivation of global, graph ACE is different. Graph ACE does not decompose a priori into atomic contributions E_i but provides graph-based cluster basis functions for the complete, global property E. By assigning the graphs of the cluster basis functions to their root nodes, the decomposition of E into atomic properties E_i is accomplished only after the formal construction. Thereby graph ACE builds on all possible configurations and not only on one atom-centered configuration per atom. It is immediately clear that local ACE is a subset of global ACE. The local ACE is obtained by limiting the cluster basis functions to stars only. This also implies that for general radial basis functions, local and graph ACE are identical up to three-body interactions and differ from four-body interactions onward.

§.§ Completeness and sensitivity

Both local and global ACE expansions are complete by construction. Local ACE basis functions are complete for each atom; the basis functions suffice to discriminate between all different environments that an atom can have. None of the basis functions are redundant, as every basis function contributes information and a description of the local atomic environment that is orthogonal to all other basis functions. This, however, is only true as long as none of the atoms have access to basis functions on other atoms. In contrast to the local, atomic perspective on completeness of local ACE, graph ACE does not focus on the local atomic environment but provides an (over)complete set of cluster basis functions for all atoms. The single-particle basis functions of graph ACE are associated to atoms and they are pairwise, which leads to cluster basis functions on the same set of atoms that correspond to graphs with different topology, see Fig. <ref> for an example. As discussed in Sec. <ref>, cluster basis functions have directed edges and only one or no edge ends in each node, while several edges may come out of every node. This leads to graphs with n+1 nodes for n edges.

Fig. <ref> further exemplifies the sensitivity of tree-based cluster basis functions of graph ACE in comparison to the stars.
The narrow opening angles on the root node of the left-hand star graph make it numerically challenging to discriminate the details of atomic positions, i.e. one may expect that a relatively large number of spherical harmonics and radial functions is required for an accurate portrayal of the associated four-body interactions. In contrast, the angles between edges on the tree graph on the right-hand side are large and the edge lengths are comparable. One can therefore assume that cluster basis functions on the tree graph on the right-hand side are more sensitive to small displacements of the atomic positions than the star cluster basis functions on the left-hand side, which in turn should give the tree basis function in the configuration of Fig. <ref> a greater importance and sensitivity. This argument is further supported by the quantum mechanical analysis in the next Sec. <ref>, which corroborates that direct pairwise connectivities are the basis for the representation of energies and forces in quantum mechanical systems. Another obvious distinction between local and global, graph ACE is introduced by the cut-off distance. While local ACE is limited to star graphs with an interaction range of at most twice the cut-off distance, the tree graphs can extend over several cut-off distances. For example, if one assumes that the cut-off distance limits single-particle basis functions to the short pair distances in Fig. <ref>, then the contribution of the left-hand star graph must vanish while the right-hand tree graph can contribute.

§.§ Self-interactions

The summation over atoms j_1, j_2, j_3, j_4, … in Eqs. (<ref>)-(<ref>) has to be taken over pairwise different values j_1 ≠ j_2, j_1 ≠ j_3, …, j_2 ≠ j_3, … to avoid self-interactions. For local ACE the restriction in the summation can be lifted, as self-interactions can be accounted for by a renormalization of lower body-order expansion coefficients <cit.>, which in turn enables very efficient implementations <cit.>. Self-interactions appear to be more difficult to remove in global ACE, as it is not obvious for all graph topologies that arise from self-interactions, i.e. when one or several nodes are present twice or multiple times in a graph, that these can be represented by graphs that are admissible in graph ACE as discussed in Sec. <ref>. Fig. <ref> illustrates self-interacting graphs. On the left-hand side a star graph is shown that is part of local ACE, together with a star graph with one edge fewer that enables the self-interaction correction. The right-hand side shows a self-interacting graph that is not admissible in graph ACE and, in grey, a graph that can contribute to correcting self-interactions. Clearly, in some cases, for example when the self-interaction leads to walking an edge forward and backward, there are graphs in graph ACE that obviously contribute to correct for these self-interactions. Furthermore, the overcompleteness of the basis functions implies that in general any self-interacting graph can be represented by a linear combination of cluster basis functions. We argue in the next Sec. <ref> that some of the self-interactions can be interpreted from quantum-mechanical considerations and are in turn beneficial for the accuracy of graph ACE. We will further see in Sec. <ref> that efficient implementations of graph ACE involve self-interactions, just as for local ACE, and illustrate this point numerically in Sec. <ref>.
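The renormalization argument for local ACE can be made explicit with a small numerical check. Using toy scalar neighbor functions (random numbers standing in for evaluated basis functions), the unconstrained double sum over neighbors factorizes into a product of atomic sums, and its difference to the constrained sum with j_1 ≠ j_2 is a pair-level self-interaction term that can be absorbed into lower body-order coefficients. This is a sketch of the bookkeeping only, not of an actual ACE implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
f1 = rng.normal(size=8)   # phi_{v1}(r_ij) evaluated for the 8 neighbors j of atom i
f2 = rng.normal(size=8)   # phi_{v2}(r_ij) for the same neighbors

unconstrained = f1.sum() * f2.sum()       # product of atomic sums (efficient, self-interacting)
constrained = sum(f1[a] * f2[b]           # pairwise-distinct sum of the cluster expansion
                  for a in range(8) for b in range(8) if a != b)
self_interaction = float(np.dot(f1, f2))  # j1 == j2 terms: a pair-level contribution

# The unconstrained star base differs from the constrained one only by a
# two-body term, which can be absorbed into the pair expansion coefficients.
assert np.isclose(unconstrained, constrained + self_interaction)
```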
§ QUANTUM MECHANICAL FOUNDATIONGraph closely resembles quantum mechanical electronic structure models, which helps to explain why is able to model reference data from electronic structure calculations very well. However, as is not conceived as a quantum mechanical model, not all of its contributions have a direct counterpart in quantum mechanics.To make the connection to quantum mechanics explicit, we first simplify to the two-center approximation <cit.>, with local orbitals |κ n l m⟩ = R_nl^(κ) Y_lmon two atoms i and j oriented along the bond axis r_ij =r_i - r_j. As other atoms in the environment of the bond do not contribute, one assumes invariance under rotation about the bond axis, which implies that the distance-dependent two-center Hamiltonian matrix elements in bond orientation can be written asβ_n_1 l_1 m_1 n_2 l_2 m_2^(κ_1 κ_2)(r_ij) = κ_1 n_1 l_1 m_1Ĥκ_2 n_2 l_2 m_2= κ_1 n_1 l_1 m_1Ĥκ_2 n_2 l_2 m_1δ_m_1 m_2 ,with the bond length R_ij. One rotates the matrix elements to a global coordinate system asH_n_1 l_1 m_1 n_2 l_2 m_2^(κ_1 κ_2) = ∑_m'_1 J( l_1 m_1 l_2 m_2 m'_1) β_n_1 l_1 m'_1 n_2 l_2 m'_1^(κ_1 κ_2) . Spherical harmonics are rotated with Wigner D-matrices D_m m'^(l)(αβγ) and as rotation about the bond axis is not required, only two angles αβ are needed. Therefore the Wigner D-matrices reduce to spherical harmonics and the transformation matrix J( l_1 m_1 l_2 m_2 m'_1) can be expressed as a linear combination of spherical harmonicsJ( l_1 m_1 l_2 m_2 m'_1) = ∑_L C(L l_1 l_2 m_1 m_2 m'_1) Y_L^m_2-m_1 ,and the same holds for the Hamiltonian in global coordinate system,H_n_1 l_1 m_1 n_2 l_2 m_2^(κ_1 κ_2) = ∑_L ϕ̃^(κ_1 κ_2)_ L l_1 m_1 l_2 m_2(r_ij)Y_L^m_2-m_1 . See, for example, Eq.(17) in Ref. Sharma79 for an explicit expression of the constants C(L l_1 l_2 m_1 m_2 m'_1) and ϕ̃(r_ij) collects the different pre-factors of the spherical harmonics. This means that the two-center matrix elements can be expressed as linear combinations of spherical harmonics with distance dependent pre-factors, which brings a direct link to basis functions ϕ_iv(j), Eq.(<ref>), and one can in general represent the two-center Hamiltonian matrix elements from linear combinations of basis functions ϕ_iv(j). Conceptually we could therefore replace the basis functions ϕ_iv(j) with two-center Hamiltonian matrix elements (obviously at the cost of a more complex disentangling of angular contributions for rotational covariance), or we can just take the view that the basis functions ϕ_iv(j) are indeed Hamiltonian matrix elements. This is the view that we take for the further discussion. However, it should also be noted that not all basis functions ϕ_iv(j) automatically qualify as Hamiltonian matrix elements. Quantum mechanical operators are Hermitian, Ĥ = Ĥ^†, which imposes symmetries that are not fulfilled by basis functions in general. We note in passing that overlap matrices are two center matrices, too, which further reveals a relation between the basis functions and the Overlap Matrix descriptor of Li et al. 
Several linear-scaling Density Functional Theory and Tight-Binding methods such as the recursion method <cit.>, the Fermi-Operator Expansion <cit.>, Bond-Order Potentials <cit.> and the Kernel Polynomial Method <cit.> rely directly or indirectly <cit.> on the evaluation of the moments of the density of states from the atomic coordinates via the moments theorem <cit.>,
μ_iα^(M) = ⟨i α|Ĥ^M|i α⟩ = ∑_j_1 β_1 j_2 β_2 … H_i α j_1 β_1 H_j_1 β_1 j_2 β_2 … H_j_M-1 β_M-1 i α ,
with the matrix elements H_i α j β = ⟨i α|Ĥ|j β⟩ of the orthonormal basis functions ⟨i α|j β⟩ = δ_i α, j β, with orbitals α and β associated to atoms i and j, respectively.

Moments of order M of the local density of states of orbital α on atom i are obtained from all self-returning hopping paths, i.e. cycles with M edges, where each edge is given by a Hamiltonian matrix element H_i α j β. Self-interactions in a cycle must be taken into account. Due to self-interactions as well as non-zero onsite matrix elements H_i α i α some contributions to the M-th moment involve fewer than M atoms. Fig. <ref> illustrates contributions to the fourth moment that differ by topology and shows their representation in cluster basis functions. For simplicity, on-site matrix elements were ignored, H_i α i α = 0. Note that the leftmost pair contribution is identical to the self-returning three-body contribution to the right of it, implying a self-interacting pair basis function depicted in grey.

A detailed analysis of tight-binding models <cit.> showed that fourth-moment cycle contributions are small compared to the effective pair and three-body contributions. This is broadly due to different degrees of self-cancellation in the graphs in Fig. <ref>. The effective pair contribution of the fourth moment has no angular contributions and therefore no cancellation can occur when the sum over neighbors is taken. The effective three-body graph has one opening angle, and when the sum is taken over the neighbors of the root atom, the angular dependence may lead to some self-cancellation. For the cycle the sum is taken over three neighbors, which due to interfering angular dependencies can lead to effective self-cancellation of the cycle contributions; this renders the cycle contribution less important than the pair and three-body contributions to the fourth moment.

§ SIMPLIFICATIONS

§.§ Tensor decomposition

In the following we will discuss possible simplifications of the expression Eq.(<ref>). We will assume at various places that tensors can be decomposed as
T_n_1 n_2 n_3 n_4 … = ∑_k λ^(k) t_k n_1^(1) t_k n_2^(2) t_k n_3^(3) t_k n_4^(4) … ,
which further implies that other decomposition schemes that involve tensors of different sizes are also possible, for example,
T_n_1 n_2 n_3 n_4 = ∑_k λ^(k) t_k n_1 n_2^(12) t_k n_3^(3) t_k n_4^(4) .
Low-rank decomposition as provided by a joint summation index k is evidently a key advantage, but not strictly necessary.
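As a concrete illustration of the decomposition format assumed above, the short numpy sketch below reconstructs a four-index coefficient tensor from rank-K factors in two equivalent ways; the tensor sizes and random factors are placeholders and do not correspond to fitted expansion coefficients.

```python
# Minimal sketch of the assumed tensor decomposition
# T[n1,n2,n3,n4] = sum_k lambda[k] t1[k,n1] t2[k,n2] t3[k,n3] t4[k,n4];
# sizes and random factors are illustrative placeholders only.
import numpy as np

rng = np.random.default_rng(1)
K, N = 4, 6                                  # decomposition rank and index range
lam = rng.normal(size=K)
t1, t2, t3, t4 = (rng.normal(size=(K, N)) for _ in range(4))

# build the full four-index tensor from its rank-K factors
T = np.einsum('k,ka,kb,kc,kd->abcd', lam, t1, t2, t3, t4)

# the same contraction with two indices grouped into one factor, cf. the
# alternative decomposition with t12[k,n1,n2]
t12 = np.einsum('k,ka,kb->kab', lam, t1, t2)
T_alt = np.einsum('kab,kc,kd->abcd', t12, t3, t4)

print(np.allclose(T, T_alt))                 # True
```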
By trivially expanding λ^(k) = ∑_k_2 k_3 k_4 λ^(k) δ_k k_2 δ_k_2 k_3 δ_k_3 k_4 and defining t^(l)_k_l k_l+1 n_l = t^(l)_k_l n_l δ_k_l k_l+1, from Eq.(<ref>) one arrives at
T_n_1 n_2 n_3 n_4 … = ∑_k_1 k_2 k_3 k_4 … λ^(k_1) t_k_1 k_2 n_1^(1) t_k_2 k_3 n_2^(2) t_k_3 k_4 n_3^(3) t_k_4 k_5 n_4^(4) … .
For us this means that we have significant flexibility for the representation of the graph ACE^(g) expansion coefficients, and we will make use of this flexibility as seems best for an efficient representation.

Tensor decomposition has been used in the context of ACE recently by Darby et al. <cit.> to eliminate the combinatorial scaling of the number of coefficients with the number of chemical elements. Here we build on this work, but take a slightly different route. We start by limiting tensor decomposition to chemical indices only. In fact, we deliberately keep the chemical index of the root atom in the associated expansion coefficient for the moment, as it appears physically and chemically intuitive to do so. We further discuss in Appendix <ref> how to completely remove all chemical indices and also employ this reduced representation in our numerical examples.

§.§ Atomic species

For removing atomic onsite basis functions we limit onsite atomic degrees of freedom to atomic species, i.e., we assume that χ_iκ(μ_i) is a function of the atomic species μ_i on node i only. The expansion coefficients c_κ v_1 v_2 v_3 …^(…) of graph ACE in Eq.(<ref>), with v = (u, κ) and changing notation u → v, are rewritten as
c_κ v_1 κ_1 v_2 κ_2^(t_1 t_2) = ∑_k c^(k)_κ v_1 v_2 w^(t_1)_k κ_1 w^(t_2)_k κ_2 ,
c_κ v_1 κ_1 v_2 κ_2 v_3 κ_3^(t_1 t_2 t_3) = ∑_k c^(k)_κ v_1 v_2 v_3 w^(t_1)_k κ_1 w^(t_2)_k κ_2 w^(t_3)_k κ_3 ,
and so on for higher orders. We then modify the basis functions of the form Eq.(<ref>) to have radial functions that depend on chemistry,
R_μ_j knl^(t)(r_ji) = ∑_κ w^(t)_k κ χ_j κ(μ_j) R_nl(r_ji).
Next, we combine the indices k and n of the radial functions into a joint index n and we do the same for the expansion coefficients, c^(k)_κ v_1 v_2 v_3 → c_κ v_1 v_2 v_3 with v_i = (n_i l_i m_i), which essentially means that we are hiding k in the indices n_1, n_2, n_3, …. Then, by choosing χ_i κ(μ_i) = δ_κ μ_i we arrive at simplified expressions for Eq.(<ref>) and following,
A^(1)_i v_1 = ∑_j ϕ_iv_1^(1) (r_j) = [ v_1 ]_i ,
A^(12)_i v_1 v_2 = ∑_j_1 j_2 ϕ_iv_1^(1)(r_j_1) ϕ_j_1 v_2^(2)(r_j_2) = [ v_1 [ v_2 ] ]_i ,
A^(123)_i v_1 v_2 v_3 = ∑_j_1 j_2 j_3 ϕ_i v_1^(1)(r_j_1) ϕ_j_1 v_2^(2)(r_j_2) ϕ_j_2 v_3^(3)(r_j_3) = [ v_1 [ v_2 [ v_3 ]]]_i ,
where the superscript index t is inherited from the weights w^(t), and analogous expressions hold for higher orders and other graphs, with
ϕ_i v^(t)(r_j) = R^(t)_μ_j nl(r_ji) Y_lm( e_ji).
We see that the chemistry dependence of the atomic interaction has been fully incorporated into the pairwise, chemistry-dependent radial functions and that in this representation the decoration of the nodes with basis functions χ_iκ(μ_i) is not required. In practice we do not need to know the decomposition Eq.(<ref>), as we optimize the radial functions during training, and for this we only need to know the variables t, μ_j, n, l and r_ji that determine the radial functions. Note that the radial functions depend only on the chemical species of the neighboring atom j, i.e. R^(t)_μ_j nl(r_ji). From our quantum mechanical considerations in Sec. <ref> it may be appropriate to extend this to ϕ_i v^(t)(r_j) = R_μ_j μ_i nl^(t)(r_ji) Y_lm( e_ji).
We have used radial basis functions of the type R_μ_j μ_i nl(r_ji) previously <cit.>, but ultimately this is a matter of choice and design and not dictated by the formulas that we have derived here. Also, as mentioned in Sec. <ref>, the Moment Tensor Potentials <cit.> start from chemistry-dependent radial functions without spanning chemical space with basis functions χ_iκ(μ_i), and we have demonstrated here explicitly that employing chemistry-dependent radial functions or chemical space basis functions can lead to identical representations.

The expression for atomic properties in graph ACE, Eq.(<ref>), remains essentially unchanged, but v no longer contains the chemical index κ, which means that the number of entries in the expansion coefficients c_μ v_1 v_2 v_3 in multi-component systems does not suffer from combinatorial explosion. In this way the expansion Eq.(<ref>) is written as
E_i = E_0 + ∑_v c^(1)_μ_i v A^(1)_i v + ∑_v_1 v_2 c^(11)_μ_i v_1 v_2 A^(11)_i v_1 v_2 + ∑_v_1 v_2 c^(12)_μ_i v_1 v_2 A^(12)_i v_1 v_2 + ∑_v_1 v_2 v_3 c^(111)_μ_i v_1 v_2 v_3 A^(111)_i v_1 v_2 v_3 + ∑_v_1 v_2 v_3 c^(112)_μ_i v_1 v_2 v_3 A^(112)_i v_1 v_2 v_3 + ∑_v_1 v_2 v_3 c^(122)_μ_i v_1 v_2 v_3 A^(122)_i v_1 v_2 v_3 + ∑_v_1 v_2 v_3 c^(123)_μ_i v_1 v_2 v_3 A^(123)_i v_1 v_2 v_3 + …
This reduced representation and the full representation Eq.(<ref>) are identical if sufficiently many terms are used in the tensor decomposition Eq.(<ref>). However, while the parameterization of multi-component systems is hardly possible with the full representation, the reduced representation enables this.

In the following we further keep Eq.(<ref>) in a form that can be read in two ways. If we take the indices as v_1 = (n_1 l_1 m_1), v_2 = (n_2 l_2 m_2), v_3 = (n_3 l_3 m_3), …, we allow for different radial functions. We can also choose v_1 = (n l_1 m_1), v_2 = (n l_2 m_2), v_3 = (n l_3 m_3), …, i.e., a single joint index n, as briefly summarized in Appendix <ref>. It is a matter of choice and numerical considerations which set of indices is superior, and we delay making this choice until the numerical implementation for the examples in Sec. <ref>.

§.§ Star decomposition and layers

We next make contact with local ACE^(l) by conceptually decomposing trees and subtrees into stars, see Fig. <ref>. The stars are categorized by the distance from the root node, i.e. the number of edges required to reach the star along the (sub)tree, and by the number of outgoing edges of the star. We call the number of edges that are required to reach the star from the root node the layer of the star. Star (t,p) is located on the t-th layer from the root node and has p outgoing edges. A floret in the following is an outgoing edge together with a node at the outgoing end, where this node has no outgoing edges.

We next employ tensor decomposition once more and write expansion coefficients as products of star expansion coefficients. For example, the expansion coefficient of graph (12332333) is represented as
c_μ v_011 v_111 v_112 v_211 v_212 v_221 v_222 v_223^(12332333) = ∑_k λ_k^(μ) c^(0,1)_k v_011 c^(1,2)_k v_111 v_112 c^(2,2)_k v_211 v_212 c^(2,3)_k v_221 v_222 v_223 ,
where the basis function indices are labeled as v_tor, with t the layer index, o the index of the star node in layer t, and r the index of the floret in star o. Generalization to arbitrary tree structures is obvious.
This representation is general and does not limit the interactions associated with trees and subtrees. The representation further takes into account the symmetry of the graph, as illustrated, for example, for the (1233233) subtree, which must be invariant with respect to exchange of the two (2,2) stars,
c_μ v_011 v_111 v_112 v_211 v_212 v_221 v_222^(1233233) = ∑_k λ_k^(μ) c^(0,1)_k v_011 c^(1,2)_k v_111 v_112 c^(2,2)_k v_211 v_212 c^(2,2)_k v_221 v_222 .
As will be discussed next, the decomposition of the expansion coefficients into products of star coefficients, Eq.(<ref>), has the advantage that the expansion coefficients of all subgraphs of a graph can immediately be written in analogous form.

§.§ Floret picking and subgraph expansion coefficients

Starting from a given tree, one can generate all possible trees that are subgraphs of the starting tree by picking one floret after another. For example, picking a floret in the (2,3) star of the (12332333) tree results in the subtree (1233233) with one node less. Fig. <ref> further illustrates floret picking in a (12333333) tree.

In general, when a floret on node o in layer t is removed, the number of edges p in the corresponding star is reduced by one. To a removed floret we assign the index v_top=0 and denote the corresponding expansion coefficient as
c^(t,p-1)_k v_to1 … v_top-1 = c^(t,p)_k v_to1 … v_top-1 0 ,
i.e. the index 0 signals a removed edge. When two florets are picked,
c^(t,p-2)_k v_to1 … v_top-2 = c^(t,p)_k v_to1 … v_top-2 0 0 ,
and so on until all florets of the star on the node have been removed,
c^(t,0)_k = c^(t,p)_k 0 … 0 .
For the example graph in the previous section, by picking florets on the (2,3) star we generate the interaction coefficients of the subgraphs
c_μ v_011 v_111 v_112 v_211 v_212 v_221 v_222 v_223^(12332333) = ∑_k λ_k^(μ) c^(0,1)_k v_011 c^(1,2)_k v_111 v_112 c^(2,2)_k v_211 v_212 c^(2,3)_k v_221 v_222 v_223 ,
c_μ v_011 v_111 v_112 v_211 v_212 v_221 v_222^(1233233) = ∑_k λ_k^(μ) c^(0,1)_k v_011 c^(1,2)_k v_111 v_112 c^(2,2)_k v_211 v_212 c^(2,2)_k v_221 v_222 ,
c_μ v_011 v_111 v_112 v_211 v_212 v_221^(123323) = ∑_k λ_k^(μ) c^(0,1)_k v_011 c^(1,2)_k v_111 v_112 c^(2,2)_k v_211 v_212 c^(2,1)_k v_221 ,
c_μ v_011 v_111 v_112 v_211 v_212^(12233) = ∑_k λ_k^(μ) c^(0,1)_k v_011 c^(1,2)_k v_111 v_112 c^(2,2)_k v_211 v_212 c^(2,0)_k .
A star with zero florets is a floret itself that can be picked for further reducing the tree. Thus if we pick the new floret in the above (12233) graph, we arrive at
c_μ v_011 v_111 v_211 v_212^(1233) = ∑_k λ_k^(μ) c^(0,1)_k v_011 c^(1,1)_k v_111 c^(2,2)_k v_211 v_212 c^(2,0)_k .
If we next pick the two florets on the (2,2) star, the resulting expansion coefficient is given by
c_μ v_011 v_111^(12) = ∑_k λ_k^(μ) c^(0,1)_k v_011 c^(1,1)_k v_111 c^(2,0)_k c^(2,0)_k .
The key observation is that by picking florets in all possible ways all subtrees and their corresponding expansion coefficients can immediately be obtained and represented. We will exploit this next for the recursive evaluation of a tree together with all of its subtrees.

§.§ Recursive evaluation

Because all expansion coefficients of subtrees of a tree can be expanded as part of the representation of the tree expansion coefficient, efficient recursive evaluation becomes possible. We illustrate this here for a small (122) graph, but the extension to more complex graphs is obvious and will be exploited in the following section.
From Eq.(<ref>) we have for the contribution of a (122) tree, including its (12) and (1) subtrees,
E_i = E_0 + ∑_v c^(1)_μ_i v A^(1)_i v + ∑_v_1 v_2 c^(12)_μ_i v_1 v_2 A^(12)_i v_1 v_2 + ∑_v_1 v_2 v_3 c^(122)_μ_i v_1 v_2 v_3 A^(122)_i v_1 v_2 v_3 ,
with expansion coefficients
c_μ_i v_1 v_2 v_3^(122) = ∑_k λ_k^(μ_i) c^(0,1)_k v_1 c^(1,2)_k v_2 v_3 ,
c_μ_i v_1 v_2^(12) = ∑_k λ_k^(μ_i) c^(0,1)_k v_1 c^(1,1)_k v_2 ,
c_μ_i v^(1) = ∑_k λ_k^(μ_i) c^(0,1)_k v c^(1,0)_k ,
and basis functions
A^(1)_i v = ∑_j ϕ_iv^(1) (r_j),
A^(12)_i v_1 v_2 = ∑_j j_2 ϕ_iv_1^(1)(r_j) ϕ_j v_2^(2)(r_j_2) ,
A^(122)_i v_1 v_2 v_3 = ∑_j j_2 j_3 ϕ_i v_1^(1)(r_j) ϕ_j v_2^(2)(r_j_2) ϕ_j v_3^(2)(r_j_3) .
Here the sums over j, j_2, j_3 are unrestricted, so that we incorporate self-interactions in the expansion, see the discussions in Secs. <ref>, <ref> and <ref>.

We next define a local ACE on layer 1 of the expansion,
φ^(1)_jk = c^(1,0)_k + ∑_v c^(1,1)_k v A^(2)_j v + ∑_v_1 v_2 c^(1,2)_k v_1 v_2 A^(2)_j v_1 A^(2)_j v_2 ,
with
A^(2)_j v = ∑_j_1 ϕ_jv^(2) (r_j_1).
On the first layer we evaluate
A^(1)_i k v = ∑_j ϕ_iv^(1) (r_j) φ^(1)_jk ,
and another local ACE,
φ^(0)_ik = c^(0,0)_k + ∑_v c^(0,1)_k v A^(1)_i kv ,
from which one can identify
E_i = ∑_k λ_k^(μ_i) φ^(0)_ik .
We see that graph ACE can be evaluated by layer-wise evaluation of local ACE expansions and the summation of all channel-k contributions at the end. In the following section we will employ this for the evaluation of more complex trees, including all of their subgraph contributions.

§ DANDELION APPROXIMATION

In the previous analysis we ordered graphs by their number of nodes. In the following we will drop this hierarchy, as this enables us to make use of efficient recursive evaluation of the cluster basis functions. We start by considering trees that are identical up to one node, from which different numbers of edges emerge, i.e. the trees are characterized by topology strings that are identical up to one number that is repeated a different number of times, for example (12), (123), (1233), (12333), (123333), (1233333) and (12333333). We call the largest of the trees, from which all others can be obtained by floret picking, a dandelion and abbreviate it as (123_6), see Fig. <ref> for an illustration.

We focus on dandelions of the form (12 … T_P) that have a depth of T layers and in which every node has P outgoing edges, see the lower part of Fig. <ref>. The regularity of the dandelion trees makes the notation easier and more transparent, while on the other hand the dandelions are sufficiently general to accommodate many other trees as subtrees.

§.§ Dandelion expansion coefficients and recursion

We start by decomposing the dandelion expansion coefficient into star contributions following Sec. <ref>. The expansion coefficient of a (12 … T_P) dandelion can in general be represented as
c^(12 … T_P) = ∑_k λ_k^(μ) c^(0,P)_k v_011 … v_01P ( ∏_p = 1^P c^(1,P)_k v_1p1 … v_1pP ) × ( ∏_p = 1^P^2 c^(2,P)_k v_2p1 … v_2pP ) × ( ∏_p = 1^P^3 c^(3,P)_k v_3p1 … v_3pP ) × … × ( ∏_p = 1^P^T-1 c^(T-1,P)_k v_T-1,p1 … v_T-1,pP ),
where we did not write out the indices on the left-hand side. This formula is best understood for small values and then extended to general T and P. For example, for two layers T=2 and two nodes P=2, the expansion coefficient reads
c^(12_2) = ∑_k λ_k^(μ) c^(0,2)_k v_011 v_012 c^(1,2)_k v_111 v_112 c^(1,2)_k v_121 v_122 ,
which can be compared directly to the examples given in Sec. <ref>. Following the discussion in Sec. <ref>, the contribution of the (12 … T_P) dandelion tree and all its subtrees can be evaluated recursively.
The recursion is initialized by a local ACE in the last layer T, with the usual atomic base,
A^(T)_i v = ∑_j ϕ_iv^(T) (r_j),
and
φ^(T-1)_jk = c^(T-1,0)_k + ∑_v c^(T-1,1)_k v A^(T)_j v + ∑_v_1 v_2 c^(T-1,2)_k v_1 v_2 A^(T)_j v_1 A^(T)_j v_2 + … + ∑_v_1 v_2 … v_P c^(T-1,P)_k v_1 v_2 … v_P A^(T)_j v_1 A^(T)_j v_2 … A^(T)_j v_P .
The layers t = T-1, T-2, …, 2, 1, 0 are iterated downwards by forming an atomic base that pulls in information from its neighbours in the form of a local ACE,
A^(t)_i k v = ∑_j ϕ_iv^(t) (r_j) φ^(t)_jk .
The atomic base is then used to set up an effective local ACE on the next layer,
φ^(t-1)_jk = c^(t-1,0)_k + ∑_v c^(t-1,1)_k v A^(t)_j k v + ∑_v_1 v_2 c^(t-1,2)_k v_1 v_2 A^(t)_j k v_1 A^(t)_j k v_2 + … + ∑_v_1 v_2 … v_P c^(t-1,P)_k v_1 v_2 … v_P A^(t)_j k v_1 A^(t)_j k v_2 … A^(t)_j k v_P .
The iteration is terminated with
φ_i = ∑_k λ_k^(μ_i) φ^(0)_ik .
In practice one may want to stop the local ACE on each layer at a body order K smaller than P, implicitly assuming
c^(t,k) = 0, k = K+1, …, P
for the corresponding star expansion coefficients.

One can understand the local ACE on each layer, φ^(t)_jk, as messages that are attached to the atoms j and transferred to their neighbors i during the construction of the effective atomic base A^(t)_i v. If desired, the expansion coefficients of the local ACE on each layer can be tensor-decomposed further, using the approach discussed in Sec. <ref> and Appendix <ref> or in Ref. Darby23.

Further, for a scalar expansion one can take the atomic energy directly as E_i = φ_i, or alternatively, in analogy to our earlier work <cit.>, it can be efficient to compute several expansions for each atom, φ_i1, φ_i2, φ_i3, …, and compute a scalar property as
E_i = F(φ_i1, φ_i2, φ_i3, … ),
where F is a non-linear function, for example,
E_i = φ_i1 + √(φ_i2),
for two expansions.

§.§ Explicitly including angular contributions

We repeat Eqs.(<ref>)-(<ref>) with explicit angular indices. To this end we approximate the reduction of angular products as discussed in Appendix <ref>. This approximation allows for different couplings that we illustrate with two examples. We write the index v in Eq.(<ref>) and k in Eq.(<ref>) as a combination of angular indices lm, a parity index p = ± 1 for even and odd parity, respectively, and a further radial index n, i.e. as nplm. Eq.(<ref>) then reads
A^(T)_i nlm = ∑_j ϕ_inlm^(T) (r_j) ,
and the local ACE, Eq.(<ref>), becomes
φ^(T-1)_jnplm = c^(T-1,0)_nplm + ∑_n_1l_1m_1 c^(T-1,1)_nplm n_1 l_1 m_1 A^(T)_j n_1 l_1 m_1 + ∑_n_1 l_1 m_1 n_2 l_2 m_2 c^(T-1,2)_nplm n_1 l_1 m_1 n_2 l_2 m_2 A^(T)_j n_1 l_1 m_1 A^(T)_j n_2 l_2 m_2 + … + ∑_n_1 l_1 m_1 n_2 l_2 m_2 … n_P l_P m_P c^(T-1,P)_nplm n_1 l_1 m_1 n_2 l_2 m_2 … n_P l_P m_P × A^(T)_j n_1 l_1 m_1 A^(T)_j n_2 l_2 m_2 … A^(T)_j n_P l_P m_P .
The expansion coefficients are represented in the form of Eq.(<ref>), which corresponds to a transformation into different angular channels l. In general the first-order term can be simplified as c^(T,1)_nplm n_1 l_1 m_1 = c^(T,1)_nplm n_1 δ_l l_1 δ_m m_1, as no angular momentum coupling is possible.

Here we can no longer keep the notation general with respect to the radial indices. The representation of the expansion with full tensor decomposition of the radial indices as in Appendix <ref> is obtained simply by setting n = n_1 = n_2 = n_3 = ….

§.§.§ Independent radial channels

Next the layers t = T-1, T-2, …, 2, 1, 0 are iterated downwards. To this end the effective atomic base is computed by pulling in the local ACE from its neighbors.
In order to keep tensor dimensions constant during the iteration, we choose to transform the angular character l_1 of the basis function and l_2 of the local ACE into a joint angular momentum l with Clebsch-Gordan coefficients C,
A^(t)_i n n_1 plm = ∑_l_1 m_1 l_2 m_2 C_l_1 m_1 l_2 m_2^lm ∑_j ϕ_i n_1 l_1 m_1^(t) (r_j) φ^(t)_j n p_2 l_2 m_2 ,
with parity p = p_2 (-1)^l_1. Here the radial indices n and n_1 are not mixed for the construction of the atomic base, but are kept independent from the other channels. From the effective atomic base the effective local ACE is computed,
φ^(t-1)_jnplm = c^(t-1,0)_nplm + ∑_n_1l_1m_1 c^(t-1,1)_nplm p_1 n_1 l_1 m_1 A^(t)_j n n_1 p_1 l_1 m_1 + ∑_n_1 p_1 l_1 m_1 n_2 p_2 l_2 m_2 c^(t-1,2)_nplm n_1 p_1 l_1 m_1 n_2 p_2 l_2 m_2 × A^(t)_j n n_1 p_1 l_1 m_1 A^(t)_j n n_2 p_2 l_2 m_2 + … + ∑_n_1 p_1 l_1 m_1 … n_P p_P l_P m_P c^(t-1,P)_nplm n_1 p_1 l_1 m_1 … n_P p_P l_P m_P × A^(t)_j n n_1 p_1 l_1 m_1 A^(t)_j n n_2 p_2 l_2 m_2 … A^(t)_j n n_P p_P l_P m_P .
The iteration is terminated with
φ_i = ∑_nplm λ_nplm^(μ_i) φ^(0)_inplm .
For an interatomic potential, in layer 1 only contributions with l = m = 0 and even parity p=1 are relevant, but for vectorial or tensorial expansions other values of l and p are of interest <cit.>.

During the iteration, products of (generalized) Clebsch-Gordan coefficients are implicitly taken. These products could in principle be reduced to remove some linearly dependent functions <cit.>.

§.§.§ Coupling radial channels

While angular coupling seems advisable to maintain clean angular momentum channels, an analogous coupling is also possible for the radial and chemical indices. This can be achieved by introducing additional, layer-dependent weights W^(t) in Eq.(<ref>), which then reads
A^(t)_i n plm = ∑_n_1 n_2 l_1 m_1 l_2 m_2 p_1 p_2 W^(t)_n pl n_1 p_1 l_1 n_2 p_2 l_2 C_l_1 m_1 l_2 m_2^lm ∑_j ϕ_i n_1 l_1 m_1^(t) (r_j) φ^(t)_j n_2 p_2 l_2 m_2 .
This representation has the advantage that the effective atomic base has only a single radial, respectively chemical, channel as in the original formalism. The coupling W^(t) brings some freedom, and its dependence on the indices n pl n_1 p_1 l_1 n_2 p_2 l_2 can be explored for numerical efficiency and reduced if necessary. Here full tensor decomposition implies n = n_1 = n_2. Other variants, such as requiring n_1 = n_2 while keeping n different, are also possible, and we employ this choice for the numerical examples in Sec. <ref>. The effective local ACE is then evaluated as
φ^(t-1)_jnplm = c^(t-1,0)_nplm + ∑_n_1l_1m_1 c^(t-1,1)_nplm p_1 n_1 l_1 m_1 A^(t)_j n_1 p_1 l_1 m_1 + ∑_p_1 n_1 l_1 m_1 p_2 n_2 l_2 m_2 c^(t-1,2)_nplm p_1 n_1 l_1 m_1 p_2 n_2 l_2 m_2 × A^(t)_j n_1 p_1 l_1 m_1 A^(t)_j n_2 p_2 l_2 m_2 + … + ∑_n_1 p_1 l_1 m_1 … n_P p_P l_P m_P c^(t-1,P)_nplm n_1 p_1 l_1 m_1 … n_P p_P l_P m_P × A^(t)_j n_1 p_1 l_1 m_1 A^(t)_j n_2 p_2 l_2 m_2 … A^(t)_j n_P p_P l_P m_P ,
and the recursion is terminated with Eq.(<ref>).

Eqs.(<ref>,<ref>) and Eqs.(<ref>,<ref>) represent two alternative recursion variants that build on slightly different flavours of the tensor decomposition discussed in Sec. <ref>. Both variants allow for an accurate recursive evaluation of graph ACE in the dandelion approximation. We also note that while recent graph and message-passing interatomic potentials highlighted the importance of non-scalar, equivariant messages, in the recursive evaluation of graph ACE equivariant, vectorial and tensorial intermediates emerge naturally from the angular character of the coupling in the basis functions.
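To make the data flow of the recursion tangible, the following numpy sketch implements a purely scalar caricature of the dandelion evaluation: toy Gaussian basis functions, random coefficients, body order two per layer, and no angular, parity or chemical indices. It is meant only to illustrate how messages from the deepest layer are pulled into effective atomic bases layer by layer, not as a reference implementation.

```python
# Minimal scalar sketch of the layer-wise (dandelion) recursion: angular,
# parity and chemical indices are dropped and toy Gaussian basis functions
# are used, so this only illustrates the data flow, not a production model.
import numpy as np

rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 4.0, size=(6, 3))     # toy cluster of 6 atoms
V, K, T = 4, 3, 2                            # basis size, channels, layers

def phi(v, r):
    """Toy scalar one-particle basis function phi_v(r)."""
    return np.exp(-(r - 1.0 - 0.5 * v) ** 2)

# random expansion coefficients for each layer (body order at most 2 here)
c0 = rng.normal(size=(T, K))
c1 = rng.normal(size=(T, K, V))
c2 = rng.normal(size=(T, K, V, V))
lam = rng.normal(size=K)

def atomic_base(messages=None):
    """A[i, (k,) v] = sum_j phi_v(r_ij) (* message_jk), summed over j != i."""
    n = len(pos)
    shape = (n, V) if messages is None else (n, K, V)
    A = np.zeros(shape)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = np.linalg.norm(pos[i] - pos[j])
            f = np.array([phi(v, r) for v in range(V)])
            A[i] += f if messages is None else messages[j, :, None] * f[None, :]
    return A

# initialization in the last layer: plain atomic base, then the first message
A = atomic_base()                                             # (n, V)
msg = c0[T - 1] + A @ c1[T - 1].T + np.einsum('iv,kvw,iw->ik', A, c2[T - 1], A)

# iterate the remaining layers downwards, pulling messages into atomic bases
for t in range(T - 2, -1, -1):
    A = atomic_base(msg)                                      # (n, K, V)
    msg = (c0[t] + np.einsum('kv,ikv->ik', c1[t], A)
           + np.einsum('kvw,ikv,ikw->ik', c2[t], A, A))

E = msg @ lam                                                  # per-atom energies
print(E)
```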
§ COMPARISON TO OTHER METHODS

There are two main approaches that claim unification of message-passing networks with atom-centered many-atom expansions <cit.>. In particular the multi-ACE framework is close to the results of our analysis, as it builds on messages. We therefore discuss the multi-ACE framework in some detail first.

§.§ Multi-ACE

The multi-ACE framework <cit.> has unified the atomic cluster expansion with message-passing graph networks. Multi-ACE shares many features with the recursive evaluation of the dandelion graph of graph ACE in Sec. <ref> if the approximate product reduction of angular contributions (Appendix <ref>) is employed. In fact, the multi-ACE framework is fully contained in the graph ACE design space, which allows for some freedom due to the different choices that one can make when tensor-decomposing the expansion coefficients. A key difference between multi-ACE and graph ACE is, however, that we derived the recursive evaluation of the dandelion approximation starting from the global (over)completeness of graph ACE, while the multi-ACE development was guided by merging local ACE and NequIP <cit.>. For example, message passing in multi-ACE is understood as a chemically inspired sparsification of a local ACE with a very long cutoff. As discussed in Sec. <ref>, the cluster basis functions of local ACE are a subset of those of graph ACE, and the graph cluster basis functions enter the recursive evaluation of the dandelion approximation. This also means that multi-ACE cannot be obtained from local ACE directly, but only from graph ACE. Put this way, graph ACE provides the foundation and derivation of multi-ACE.

Multi-ACE makes a distinction between features h and messages m. The messages are essentially identical to the local ACE on each layer in the form of Eq.(<ref>). The features are obtained from a linear transformation of the messages and are multiplied with the one-particle basis functions. In the dandelion approximation of graph ACE the distinction between features and messages is not required explicitly, but is combined into the computation of the effective atomic base in Eq.(<ref>) and the local ACE on each layer. Furthermore, multi-ACE has a slightly different way of handling angular channels. It keeps two angular indices for the effective atomic base and does not mix one of the two angular channels, in contrast to the mixing of the angular channels in the effective atomic base for the dandelion approximation. It should be emphasized that both approaches are part of the design space of the dandelion approximation and both make use, implicitly or explicitly, of the approximate product reduction of angular contributions discussed in Sec. <ref>.

The multi-ACE manuscript <cit.> discusses design choices explicitly for several message-passing models and how these models can be understood from a multi-ACE perspective. This analysis immediately carries over to graph ACE and we will therefore limit our discussion to selected representatives of the multi-ACE framework.

§.§ NequIP

NequIP <cit.> pre-dates multi-ACE. It builds on tensor field networks <cit.> for an accurate representation of the atomic interaction. In retrospect NequIP may be understood as a specific graph ACE realization, limited to low body-order messages. Alternatively, multi-ACE may be understood as a generalization of NequIP to higher body-order messages. The limitation to low body-order messages helps to explain why NequIP requires several layers for accurate representations.

§.§ Multilayer ACE

The multilayer ACE, abbreviated ml-ACE, is a multi-layer message-passing interatomic potential <cit.>.
It is limited to scalar but non-linear update functions and is in this sense a subset of multi-ACE and, of course, of graph ACE. The ml-ACE was inspired by electronic structure relaxation that induces semilocal interactions beyond the immediate neighbor shell of an atom.

§.§ MACE

MACE <cit.> is a version and an accurate implementation of multi-ACE. MACE was built on the e3nn software <cit.> and makes a few specific design choices within the multi-ACE framework. For example, MACE uses a rank-six tensor for representing radial functions despite not indexing chemistry explicitly. This provides some freedom in the multi-ACE design space but cannot be justified directly from our analysis. MACE uses a multilayer perceptron to represent the radial functions, a choice that we followed in our numerical implementation of graph ACE. As in the multi-ACE framework, MACE also makes a distinction between features and messages and mixes both contributions from one layer into the features of the next layer, while in graph ACE this distinction is not necessary. The local ACE on each layer in MACE is constructed recursively, see, e.g., Sec. A3.3 in <cit.>. The recursion is defined to reduce the number of basis functions with many edges. Furthermore, MACE does not incorporate all possible parity couplings. As discussed in Sec. <ref>, we observe that MACE requires about one order of magnitude more parameters than graph ACE, which may be attributed to some of these design choices.

§.§ Allegro

Allegro <cit.> is a message-passing architecture that is limited to the direct neighbor shell of each atom, i.e. messages are passed only in the immediate neighborhood of an atom and not propagated further away. The structure of Allegro has been compared in detail to local ACE in Ref. Musaelian2022Allegro. For example, the body order of local ACE has its analogy in the number of layers in Allegro. We speculate that graph ACE limited to the same direct neighbor atoms as Allegro has a similar mechanism for maintaining flexible training functions, while in addition benefiting from other graphs than the stars of local ACE, see the discussion in Sec. <ref>.

§.§ Unified atom-centered and message-passing schemes

Nigam et al. <cit.> unified atomic-density-based descriptors and message-passing networks by combining the two into multi-centered atomic density representations. The authors employ an abstract notation, but it is our understanding that the unified representation is close to the multi-ACE framework, so that our discussion of multi-ACE also applies here.

§ APPLICATIONS

We consider three application examples. First, a model system of four-atom Si clusters. The four-atom clusters are sufficiently simple to demonstrate and compare various graph ACE architectures, including the direct construction of the basis functions, Eq.(<ref>) and following. At the same time, due to the bonding and structural diversity of the reference data, the dataset presents a challenge for any machine learning potential. Second, we consider the frequently used revised MD17 <cit.> dataset of 10 small organic molecules as well as a dataset of the flexible 3BPA molecule <cit.>. Third, we consider a large carbon dataset that was recently used for a general-purpose potential <cit.>. From these examples we assess the general applicability of graph ACE, its computational performance and its accuracy compared to other models. Details on training and model parameters are given in Appendix <ref>.

§.§ Analysis for four-atom Si clusters

We first consider a dataset of four-atom Si clusters. The dataset consists of a total of 16180 clusters.
The clusters were computed with DFT using the PBE functional <cit.> with the FHI-aims code <cit.> using tight settings. The calculations were non-magnetic to ensure a single energy hypersurface. The clusters were not relaxed and comprise completely randomized atomic positions. The clusters were constructed such that all distances between atoms are smaller than the cut-off radius of 8 Å. In this way we ensure that none of the star graph contributions vanishes due to limited interaction range, and we can instead compare the sensitivity of the various graph contributions. From the clusters, 90 % were taken for training and the remaining 10 % for testing.

We compare three different ACE models: local ACE limited to (1̅) stars, an explicit graph ACE basis with star- and tree-type contributions constructed explicitly and without self-interactions, and global graph ACE models in various forms of the dandelion approximation that incorporate self-interactions. Fig. <ref> shows the test metrics of the considered models as a function of the maximum number of graph edges. For local ACE models the number of edges is the same as the product order; these results are indicated by filled circles. Star and tree graphs correspond to Eqs. (<ref>,<ref>,<ref>,<ref>). Graph ACE models were limited to tree graphs and included (12), (1212), (121212) maximum graphs (diamonds), (122) and (122122) maximum graphs (squares), and (123) and (123123) maximum graphs (triangles).

As discussed in Sec. <ref>, local ACE and graph ACE start to differ only from three edges in the graph. Therefore, different models with two edges, i.e. (11) or (12) graphs, lead to numerically identical results. For a maximum of three edges, corresponding to four-body interactions, the (111) star contribution without self-interactions and the third-order contributions of the atomic base (A^(1))^3, which incorporate self-interactions, are, as expected, identical. Furthermore, we observe that the (123) tree is more accurate than the (111) star, which highlights the importance of tree sensitivity as illustrated in Fig. <ref>. The dandelion approximation with (122) or (123) largest graphs, which incorporates self-interactions, improves over the explicit (123) graph. The reason for this is that the dandelion recursion brings in additional degrees of freedom, cf. Eq.(<ref>), due to the approximate product reduction of angular contributions. These additional numerical parameters lead to a further lowering of the test error as compared to the (123) tree and, therefore, utilizing these parameters is considered beneficial. We note that if Eq.(<ref>) is used instead, the results for the A^(123) dandelion model closely match the (123) tree (not shown).

A maximum of three edges, corresponding to four-body interactions, should be sufficient for the description of four-atom clusters. In fact, if self-interaction contributions are not permitted, graphs with more than four nodes vanish identically for the four-atom clusters. Thus all graphs with four or more edges cannot contribute without self-interactions. However, as clearly visible in Fig. <ref>, increasing the number of edges further lowers the errors for local ACE and graph ACE. This is due to the fact that the basis functions with up to three edges were not exhaustively increased but kept constant, and further self-interacting basis functions with more edges can contribute to lowering the error.
The rationale behind this is our observation that adding a few more basis functions on graphs with additional edges improves the model more efficiently than adding more basis functions to a given graph.

At a maximum of six edges, graph ACE improves significantly over local ACE. The number of layers, i.e. two or three layers, does not appear to be decisive. More accurate models can be achieved by either increasing the number of layers or, alternatively, the body order of the local ACE in each layer. This explains, for example, why MACE works well with only two layers and many-body messages, while NequIP and Allegro require three layers. We remark that in Fig. <ref> we focus on the relative errors of different models. Changing hyperparameters, including non-linear embeddings or other forms of the radial functions, could shift all curves to lower errors, but without modifying the above analysis and its conclusions.

§.§ Small molecules

The MD17 dataset <cit.> consists of configurations from ab initio molecular dynamics simulations of 10 small molecules. For each molecule there are 100000 configurations, 1000 of which were randomly selected for training, while the rest was used for testing. The graph ACE mean absolute errors for energies and forces are shown in Table <ref> in comparison to the best available and ACE-related ML models. Results of many other models can be found, for example, in Refs. <cit.>. Graph ACE shows the best performance for all 10 molecules in comparison to all other state-of-the-art models. A further test, for which the models were trained only on 50 configurations from the training set, is shown on the right side of Table <ref>. Also here graph ACE outperforms the other models for every molecule, indicating excellent data efficiency.

The 3BPA dataset was introduced in Ref. Kovacs2021linACE to assess the extrapolation capability of machine-learned interatomic potentials. The training set consists of 500 configurations of the flexible drug-like molecule 3-(benzyloxy)pyridin-2-amine, obtained from ab initio molecular dynamics at 300 K, and the test set contains a series of molecular dynamics calculations at 300 K, 600 K and 1200 K as well as relaxed configurations of so-called dihedral slices that consist of conformer configurations far from the training data. The energy and force RMSE of graph ACE for these tests are shown in Table <ref> together with results from other state-of-the-art models. Graph ACE outperforms all other models in all cases. Graph ACE is also computationally more efficient than the other methods. The evaluation time of the graph ACE models on an NVIDIA A100 GPU is around 2.3 ms/molecule, which is about 10 times faster than the MACE <cit.> and 45 times faster than the NequIP <cit.> timings reported for the same GPU. Also, models like MACE, Allegro and NequIP utilize several million parameters to achieve the reported performance, while the largest graph ACE model uses around 162k parameters.

§.§ Carbon

The carbon dataset <cit.> is particularly challenging as it was designed for a general-purpose potential and therefore consists of structures with various atomic distances and bonding characteristics. The structures can be split into five groups based on bonding character and structure, namely sp2- and sp3-bonded structures, amorphous and liquid configurations, bulk structures of various crystals, and clusters with two to six atoms. The dataset contains in total 19205 structures, and as in the reference, 17293 were taken for fitting and 1912 for testing. We fit graph ACE and MACE models and compare to the original result <cit.> in Fig. <ref>.
Both graph ACE and MACE provide a considerable improvement over local ACE, especially for clusters. While graph ACE and MACE show similar performance, with graph ACE showing slightly lower errors in total, graph ACE achieves this with an order of magnitude fewer parameters, i.e., graph ACE has around 105k parameters, while MACE has almost 2.5M. We also assess the evaluation time on an NVIDIA A100 GPU. For a supercell with 1200 atoms in the equilibrium diamond structure we obtain 6 μs/atom for graph ACE versus 765 μs/atom for MACE.

§ CONCLUSION

We introduced graph ACE. For the derivation of graph ACE we did not assume a decomposition into atomic quantities. Therefore, by construction the cluster basis functions for each configuration are complete and orthonormal in the space of N interacting atoms. The basis functions from different configurations are categorized by graph topology and their radial, angular, chemical, magnetic, etc. character. A decomposition into atomic contributions is achieved by assigning the contributions of cluster basis functions to the root node of their graphs. We show that local ACE is a subset of global, graph ACE, obtained by limiting graph ACE to star graphs only. We further highlight the relation of graph ACE to quantum mechanical models. The basis functions have a close relation to quantum mechanical self-returning hopping contributions, but do not feature cycle graphs, leaving room for future work.

By employing tensor decomposition we achieve a representation of the expansion that enables efficient recursive evaluation of tree graphs including the contribution of all subgraphs. In passing this allows us to illustrate that the success of recently developed equivariant message-passing models is neither connected directly to message passing nor to equivariance, but is a consequence of including graph basis functions in graph ACE that are more sensitive than the stars of local ACE and have a longer reach, which makes them well suited for modelling semilocal interactions. We show that graph ACE encompasses multi-ACE and its derivative MACE. We demonstrate the numerical accuracy and efficiency of graph ACE in the dandelion approximation for molecules, clusters and solids.

In all our tests graph ACE is more accurate than the currently most accurate machine-learned interatomic potentials, while it is also significantly faster and an order of magnitude more parameter efficient.

§ ACKNOWLEDGEMENT

We acknowledge helpful discussions with Matous Mrovec, Marc Cawkwell, Gábor Csányi and Christoph Ortner. AB acknowledges funding by the German Science Foundation (DFG) through CRC 1394.

§ REDUCING RADIAL FUNCTIONS

Radial functions can be further simplified in analogy to the reduction of chemical species in Sec. <ref>. To this end we write
c_n_1 l_1 m_1 n_2 l_2 m_2 n_3 l_3 m_3 = ∑_n c^(n)_l_1 m_1 l_2 m_2 l_3 m_3 w^(1)_n n_1 w^(2)_n n_2 w^(3)_n n_3 ,
which just means that all radial function indices are identical,
A^(1)_iμ_i n l_1 m_1 = ∑_j ϕ_i n l_1 m_1^(1) (r_j) = [ nl_1m_1]_i ,
A^(12)_i μ_i n l_1 m_1 l_2 m_2 = ∑_j_1 j_2 ϕ_i n l_1 m_1^(1)(r_j_1) ϕ_j_1 n l_2 m_2^(2)(r_j_2) = [ n l_1 m_1 [ n l_2 m_2 ] ]_i ,
A^(123)_i μ_i n l_1 m_1 l_2 m_2 l_3 m_3 = ∑_j_1 j_2 j_3 ϕ_i n l_1 m_1^(1)(r_j_1) ϕ_j_1 n l_2 m_2^(2)(r_j_2) ϕ_j_2 n l_3 m_3^(3)(r_j_3) = [ n l_1 m_1 [ n l_2 m_2 [ n l_3 m_3 ]]]_i ,
with basis functions ϕ_i n l m given by Eq.(<ref>). The expression for properties remains analogous to Eq.(<ref>), but with expansion coefficients that are limited to a single radial index, c^(n)_l_1 m_1 l_2 m_2 l_3 m_3. See also Ref. <cit.>.
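A minimal numpy check of the single-radial-index reduction above (with angular indices suppressed and random placeholder arrays) shows that contracting a coefficient tensor of this product form is equivalent to an expansion with one shared radial index acting on transformed radial channels.

```python
# Minimal numpy check (angular indices suppressed) that a coefficient tensor of
# the product form c[n1,n2,n3] = sum_n c_n w1[n,n1] w2[n,n2] w3[n,n3] is
# equivalent to an expansion with a single shared radial index n acting on
# transformed radial channels; all arrays are random placeholders.
import numpy as np

rng = np.random.default_rng(3)
N = 5                                       # number of radial functions
c_n = rng.normal(size=N)
w1, w2, w3 = (rng.normal(size=(N, N)) for _ in range(3))
A, B, C = (rng.normal(size=N) for _ in range(3))   # stand-ins for atomic bases

# full three-index contraction with the decomposed coefficient tensor
c_full = np.einsum('n,na,nb,nc->abc', c_n, w1, w2, w3)
val_full = np.einsum('abc,a,b,c->', c_full, A, B, C)

# equivalent evaluation with a single shared radial index on transformed bases
val_shared = np.sum(c_n * (w1 @ A) * (w2 @ B) * (w3 @ C))

print(np.isclose(val_full, val_shared))     # True
```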
§ APPROXIMATE PRODUCT REDUCTION OF ANGULAR CONTRIBUTIONS

For invariance and equivariance the expansion coefficients are written in the form of Eq.(<ref>). This makes tensor decompositions that involve angular contributions more difficult. We decompose an expansion coefficient
c_nlm = ∑_LM c̃_nlL C_lm^LM ,
as
c_nlm = ∑_k λ_k c_k n_1 l_1 m_1 c_k n_2 l_2 m_2 ,
with n = (n_1,n_2) and analogously for the other indices. We assume that this decomposition is possible in the form of Eq.(<ref>),
c_k n_1 l_1 m_1 = ∑_L_1 M_1 c̃_k n_1 l_1 L_1 C_l_1 m_1^L_1 M_1 ,
and
c_k n_2 l_2 m_2 = ∑_L_2 M_2 c̃_k n_2 l_2 L_2 C_l_2 m_2^L_2 M_2 .
The key point is that the generalized Clebsch-Gordan coefficients C do not depend on k. We call this the approximate product angular reduction, as a more elaborate analysis would have to involve the appropriate decomposition of the Clebsch-Gordan coefficients. The generalized Clebsch-Gordan coefficients can be generated from products of the Clebsch-Gordan matrices. The products of the Clebsch-Gordan coefficients are overcomplete in the sense that contraction with spherical harmonics in general leads to linearly dependent functions. Therefore not all possible angular momentum couplings in the generalized Clebsch-Gordan coefficients are admissible and selected couplings need to be removed <cit.>. Thus, if one approximates generalized Clebsch-Gordan coefficients by products of smaller generalized Clebsch-Gordan coefficients, one expects to create too many couplings, of which some lead to linearly dependent functions. On the other hand, none of the possible angular couplings are missed, and in this sense the approximate product angular reduction is overcomplete.

§ MODEL AND TRAINING DETAILS

§.§.§ Basis functions

We employ basis functions Eq.(<ref>) that separate chemistry from radial functions as
ϕ_i v^(t)(r_j) = R^(t)_nl(r_ji) W_n(μ_j) Y_lm( e_ji),
with trainable parameters W_n(μ_j) for differentiating the edge chemistry.

§.§.§ Radial functions

Radial functions R_nl(r), Eq.(<ref>), in graph ACE models are represented as expansions in radial basis functions g_k (r). We use two types of expansion, namely linear and multi-layer perceptron (MLP). The linear expansion is given by
R_nl(r) = ∑_k c_nlk g_k (r),
with the radial expansion coefficients c_nlk. For the MLP, a layer transforms inputs x_k to outputs h_n as
h_n = a(∑_k w_nk x_k),
where a is the activation function and w_nk are trainable parameters. For the radial function R_nl(r), the inputs x_k = g_k (r) are transformed via three hidden layers with 64 units each and the SiLU activation function. In the last layer no activation is applied. Two types of radial basis functions were utilized, the simplified spherical Bessel functions <cit.> and the Bessel function with polynomial envelope function <cit.> with p=5.
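The following numpy sketch illustrates the MLP form of the radial functions described above, i.e. radial basis inputs g_k(r) passed through three hidden layers of 64 units with SiLU activation and a linear output layer; the placeholder input basis, random weights and output dimension are illustrative assumptions rather than the trained model.

```python
# Minimal numpy sketch of an MLP radial function: radial basis inputs g_k(r)
# are passed through three hidden layers of 64 units with SiLU activation and
# a linear output layer.  The input basis, weight initialization and output
# size are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(4)
K_IN, HIDDEN, N_OUT, R_CUT = 8, 64, 10, 5.0   # inputs g_k, units, outputs R_nl

def g(r):
    """Placeholder radial basis, loosely Bessel-like, vanishing at the cutoff."""
    k = np.arange(1, K_IN + 1)
    return np.sin(k * np.pi * r / R_CUT) / r

def silu(x):
    return x / (1.0 + np.exp(-x))

# random (untrained) weights for three hidden layers and a linear output layer
sizes = [K_IN, HIDDEN, HIDDEN, HIDDEN, N_OUT]
weights = [rng.normal(scale=1.0 / np.sqrt(m), size=(m, n))
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def radial_mlp(r):
    """Return the N_OUT radial function values R(r) for a single distance r."""
    h = g(r)
    for W, b in zip(weights[:-1], biases[:-1]):
        h = silu(h @ W + b)
    return h @ weights[-1] + biases[-1]       # no activation on the last layer

print(radial_mlp(2.3))
```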
§.§.§ Non-linear embedding

The atomic energy is expressed as a non-linear function Eq.(<ref>) with inputs Eq.(<ref>). Here we use a single-layer MLP with SiLU activation as a non-linear embedding and vary the number of units in the layer and the number of inputs.

§.§.§ Loss functions

For optimizing our models we used the following loss function,
ℒ = κ_E ∑_n=1^N_struct w_n^(E)(E_n^ACE-E_n^ref)^2 + κ_F ∑_n=1^N_struct ∑_i=1^n_at,n w_ni^(F)(F_ni^ACE-F_ni^ref)^2 + Δ_L_2,
where κ_E and κ_F weight the contributions of energy and force errors, N_struct is the number of structures employed in the parametrization, and w_n^(E) and w_ni^(F) are per-structure and per-atom weights for the energy and force contributions of structure n, which were set to one for every structure and normalized by the number of structures and force components, respectively. For the carbon dataset, the energy residual is in addition normalized to the number of atoms n_at,n in a structure. Δ_L_2 is the regularization term that penalizes the magnitude of the expansion coefficients c in Eq.(<ref>),
Δ_L_2 = κ_L_2 ‖c‖^2 ,
where κ_L_2 is the regularization weight parameter.

§.§.§ Model parameters

For the Si clusters the radial functions are produced from a linear expansion of eight radial basis functions in the form of simplified spherical Bessel functions with a cutoff distance of 8 Å, and the radial functions are shared across the layers. We further use n_max=8 and l_max=4 for the atomic base on each layer and all intermediate couplings. A linear embedding was used for the energy. For training we used the L-BFGS-B algorithm as implemented in scipy <cit.> with κ_E=1, κ_F=100 and κ_L_2=5 · 10^-8.

For the small molecules, we use graph ACE models of dandelion type (1222_4) with variable radial functions for two layers. A linear expansion of 8 radial basis functions in the form of Bessel functions <cit.> with n_max=8 is used for the first layer, while an MLP expansion of the same basis is used on layer zero with n_max=32. For the MD17 dataset the cutoff for the radial basis is set to 4 Å and for the 3BPA dataset to 4.5 Å. For all models, l_max=3 is used for the atomic base, all intermediate couplings of the basis functions, and the second-layer expansion. A single-layer MLP with two inputs and 16 hidden units is used for the energy embedding alongside a linear embedding. The L-BFGS-B algorithm was used for training with κ_E=1, κ_F=500 and κ_L_2=5 · 10^-5 for all models except benzene, for which κ_F=1000 was used.

For carbon a (1222_3) model was used, with radial functions in the form of an MLP expansion with Bessel function <cit.> inputs, n_max=16 and a cut-off of 5 Å, which is shared across layers. We further set l_max=4 for the atomic base and intermediate couplings on layer zero. The first-layer expansion and the intermediate couplings of the basis functions were set to l_max=3. We utilize a single-layer MLP with 16 inputs and 32 hidden units for the energy embedding together with a linear embedding. For training we used the AMSGrad version of the Adam <cit.> optimizer with batch size 10, learning rate 5 · 10^-3 and κ_E=1, κ_F=500. During optimization, the learning rate was reduced in steps by a factor of 0.6 down to 6 · 10^-4. With each reduction the fit was restarted, and the energy and force weights in the loss function were gradually adjusted to κ_E=10, κ_F=1. Finally, the fit was further refined with the L-BFGS-B algorithm. For training the carbon MACE model we use the same train and test split as above, 256 feature channels, l_max=3, messages with L_max=2 and a radial cutoff of 5 Å. Optimization is performed with the AMSGrad version of the Adam optimizer with learning rate 0.01 and batch size 5.
Energy and force weights in the loss function were set to 80 and 1000, respectively, followed by switching the energy weight to 1000 and the force weight to 10.

The graph ACE models and the fitting algorithm are implemented within the tensorpotential package <cit.>, and GPU acceleration is enabled via TensorFlow <cit.>. All fits and evaluations were performed with double precision.

§ REFERENCES

[Drautz06] R. Drautz and D. G. Pettifor, Phys. Rev. B 74, 174117 (2006).
[Bochkarev2022mlACE] A. Bochkarev, Y. Lysogorskiy, C. Ortner, G. Csányi, and R. Drautz, Phys. Rev. Res. 4, L042019 (2022).
[Drautz19] R. Drautz, Phys. Rev. B 99, 014104 (2019).
[Dusson2022] G. Dusson, M. Bachmayr, G. Csányi, R. Drautz, S. Etter, C. van der Oord, and C. Ortner, J. Comput. Phys. 454, 110946 (2022).
[Csanyi2013SOAP] A. P. Bartók, R. Kondor, and G. Csányi, Phys. Rev. B 87, 184115 (2013).
[Thompson2015SNAP] A. Thompson, L. Swiler, C. Trott, S. Foiles, and G. Tucker, J. Comput. Phys. 285, 316 (2015).
[Behler2007ACSF] J. Behler and M. Parrinello, Phys. Rev. Lett. 98, 146401 (2007).
[Shapeev2016MTP] A. V. Shapeev, Multiscale Model. Simul. 14, 1153 (2016).
[Drautz2020] R. Drautz, Phys. Rev. B 102, 024104 (2020).
[Lysogorskiy2021PACE] Y. Lysogorskiy, C. van der Oord, A. Bochkarev, S. Menon, M. Rinaldi, T. Hammerschmidt, M. Mrovec, A. Thompson, G. Csányi, C. Ortner, and R. Drautz, npj Comput. Mater. 7 (2021), doi:10.1038/s41524-021-00559-9.
[Kaliuzhnyi2022recursive] I. Kaliuzhnyi and C. Ortner, arXiv:2202.04140 (2022).
[Qamar2023] M. Qamar, M. Mrovec, Y. Lysogorskiy, A. Bochkarev, and R. Drautz, J. Chem. Theory Comput. 19, 5151 (2023).
[Kovacs2021linACE] D. P. Kovács, C. van der Oord, J. Kucera, A. E. A. Allen, D. J. Cole, C. Ortner, and G. Csányi, J. Chem. Theory Comput. 17, 7696 (2021).
[Ibrahim23] E. Ibrahim, Y. Lysogorskiy, M. Mrovec, and R. Drautz, Phys. Rev. Mater. 7, 113801 (2023).
[Rinaldi2023noncollinear] M. Rinaldi, M. Mrovec, A. Bochkarev, Y. Lysogorskiy, and R. Drautz, Non-collinear magnetic atomic cluster expansion for iron, arXiv:2305.15137 (2023).
[Munoz2022] J. M. Munoz, I. Batatia, and C. Ortner, Mach. Learn.: Sci. Technol. 3, 04LT05 (2022).
[Zhang2022equivariantACE] L. Zhang, B. Onat, G. Dusson, G. Anand, R. J. Maurer, C. Ortner, and J. R. Kermode, Equivariant analytical mapping of first principles Hamiltonians to accurate and transferable materials models, arXiv:2111.13736 (2022).
[Drautz2022ACEwave] R. Drautz and C. Ortner, Atomic cluster expansion and wave function representations, arXiv:2206.11375 (2022).
[Zhou2023multilevel] D. Zhou, H. Chen, C. H. Ho, and C. Ortner, A multilevel method for many-electron Schrödinger equations based on the atomic cluster expansion, arXiv:2304.04260 (2023).
[Lysogorskiy2023active] Y. Lysogorskiy, A. Bochkarev, M. Mrovec, and R. Drautz, Phys. Rev. Mater. 7, 043801 (2023).
[Oor2023hyper] C. van der Oord, M. Sachs, D. Kovacs, C. Ortner, and G. Csányi, npj Comput. Mater. 9, 128 (2023).
[Csanyi2010GAP] A. P. Bartók, M. C. Payne, R. Kondor, and G. Csányi, Phys. Rev. Lett. 104, 136403 (2010).
[Braams2009PIPs] B. J. Braams and J. M. Bowman, Int. Rev. Phys. Chem. 28, 577 (2009).
[Faber2018FCHL] F. A. Faber, A. S. Christensen, B. Huang, and O. A. von Lilienfeld, J. Chem. Phys. 148, 241717 (2018).
[Zhang2018DeepPot] L. Zhang, J. Han, H. Wang, R. Car, and W. E, Phys. Rev. Lett. 120, 143001 (2018).
[Smith2017ANINN] J. S. Smith, O. Isayev, and A. E. Roitberg, Chem. Sci. 8, 3192 (2017).
[Zaverkin2020] V. Zaverkin and J. Kästner, J. Chem. Theory Comput. 16, 5410 (2020).
[Klicpera2020DimeNet] J. Klicpera, J. Groß, and S. Günnemann, Directional message passing for molecular graphs, arXiv:2003.03123 (2020).
[Anderson2019Cormorant] B. Anderson, T. S. Hy, and R. Kondor, in Advances in Neural Information Processing Systems, Vol. 32 (Curran Associates, Inc., 2019).
[Lubbers2018] N. Lubbers, J. S. Smith, and K. Barros, J. Chem. Phys. 148, 241715 (2018).
[Thomas2018TensorField] N. Thomas, T. Smidt, S. M. Kearnes, L. Yang, L. Li, K. Kohlhoff, and P. Riley, arXiv:1802.08219 (2018).
[Batzner2022nequip] S. Batzner, A. Musaelian, L. Sun, M. Geiger, J. P. Mailoa, M. Kornbluth, N. Molinari, T. E. Smidt, and B. Kozinsky, Nat. Commun. 13, 2453 (2022).
[Satorras2021EGNN] V. G. Satorras, E. Hoogeboom, and M. Welling, arXiv:2102.09844 (2021).
[Unke2019Physnet] O. T. Unke and M. Meuwly, J. Chem. Theory Comput. 15, 3678 (2019).
[Schutt2017schnet] K. Schütt, P.-J. Kindermans, H. E. Sauceda Felix, S. Chmiela, A. Tkatchenko, and K.-R. Müller, in Advances in Neural Information Processing Systems, Vol. 30 (Curran Associates, Inc., 2017).
[Haghighatlari2021newtonnet] M. Haghighatlari, J. Li, X. Guan, O. Zhang, A. Das, C. J. Stein, F. Heidar-Zadeh, M. Liu, M. Head-Gordon, L. Bertels, H. Hao, I. Leven, and T. Head-Gordon, NewtonNet: A Newtonian message passing network for deep learning of interatomic potentials and forces, arXiv:2108.02913 (2021).
[Schutt2021Painn] K. T. Schütt, O. T. Unke, and M. Gastegger, arXiv:2102.03150 (2021).
[Klicpera2022gemnet] J. Klicpera, F. Becker, and S. Günnemann, GemNet: Universal directional graph neural networks for molecules, arXiv:2106.08903 (2022).
[Chmiela2017machine] S. Chmiela, A. Tkatchenko, H. E. Sauceda, I. Poltavsky, K. T. Schütt, and K.-R. Müller, Sci. Adv. 3, e1603015 (2017).
[Musaelian2022Allegro] A. Musaelian, S. Batzner, A. Johansson, L. Sun, C. J. Owen, M. Kornbluth, and B. Kozinsky, Learning local equivariant representations for large-scale atomistic dynamics, arXiv:2204.05249 (2022).
[Pozdnyakov2023smooth] S. N. Pozdnyakov and M. Ceriotti, Smooth, exact rotational symmetrization for deep learning on point clouds, arXiv:2305.19302 (2023).
[Nigam2022unified] J. Nigam, S. Pozdnyakov, G. Fraux, and M. Ceriotti, J. Chem. Phys. 156, 204115 (2022).
[Batatia2022design] I. Batatia, S. Batzner, D. P. Kovács, A. Musaelian, G. N. C. Simm, R. Drautz, C. Ortner, B. Kozinsky, and G. Csányi, The design space of E(3)-equivariant atom-centered interatomic potentials, arXiv:2205.06643 (2022).
[batatia2023general] I. Batatia, M. Geiger, J. Munoz, T. Smidt, L. Silberman, and C. Ortner, A general framework for equivariant neural networks on reductive Lie groups, arXiv:2306.00091 (2023).
[batatia2023mace] I. Batatia, D. P. Kovács, G. N. C. Simm, C. Ortner, and G. Csányi, MACE: Higher order equivariant message passing neural networks for fast and accurate force fields, arXiv:2206.07697 (2023).
[kovacs2023evaluation] D. P. Kovacs, I. Batatia, E. S. Arany, and G. Csanyi, Evaluation of the MACE force field architecture: from medicinal chemistry to materials science, arXiv:2305.14247 (2023).
[thomas2021rigorous] J. Thomas, H. Chen, and C. Ortner, Rigorous body-order approximations of an electronic structure potential energy landscape, arXiv:2106.12572 (2021).
[Thompson2022LAMMPS] A. P. Thompson, H. M. Aktulga, R. Berger, D. S. Bolintineanu, W. M. Brown, P. S. Crozier, P. J. in't Veld, A. Kohlmeyer, S. G. Moore, T. D. Nguyen, et al., Comput. Phys. Commun. 271, 108171 (2022).
[Bochkarev2022PACEmaker] A. Bochkarev, Y. Lysogorskiy, S. Menon, M. Qamar, M. Mrovec, and R. Drautz, Phys. Rev. Mater. 6, 013804 (2022).
[witt2023ACEjl] W. C. Witt, C. van der Oord, E. Gelžinytė, T. Järvinen, A. Ross, J. P. Darby, C. H. Ho, W. J. Baldwin, M. Sachs, J. Kermode, N. Bernstein, G. Csányi, and C.
Ortner, @nooptitle Acepotentials.jl: A julia implementation of the atomic cluster expansion,(year 2023), http://arxiv.org/abs/2309.03161 arXiv:2309.03161 [physics.comp-ph] NoStop [Rohskopf et al.(2023)Rohskopf, Sievers, Lubbers, Cusentino, Goff, Janssen, McCarthy, de Zapiain, Nikolov, Sargsyan, Sema, Sikorski, Williams, Thompson, and Wood]Rohskopf2023FitSNAP author author A. Rohskopf, author C. Sievers, author N. Lubbers, author M. Cusentino, author J. Goff, author J. Janssen, author M. McCarthy, author D. M. O.de Zapiain, author S. Nikolov, author K. Sargsyan, author D. Sema, author E. Sikorski, author L. Williams, author A. Thompson,andauthor M. Wood, 10.21105/joss.05118 journal journal Journal of Open Source Software volume 8, pages 5118 (year 2023)NoStop [Darby et al.(2023)Darby, Kovács, Batatia, Caro, Hart, Ortner, and Csányi]Darby23 author author J. P. Darby, author D. P. Kovács, author I. Batatia, author M. A. Caro, author G. L. W. Hart, author C. Ortner,and author G. Csányi, 10.1103/PhysRevLett.131.028001 journal journal Phys. Rev. Lett. volume 131, pages 028001 (year 2023)NoStop [Sanchez et al.(1984)Sanchez, Ducastelle, and Gratias]Sanchez84 author author J. M. Sanchez, author F. Ducastelle,and author D. Gratias,@noopjournal journal Physica Avolume 128, pages 334 (year 1984)NoStop [Drautz et al.(2004)Drautz, Fähnle, and Sanchez]Drautz04 author author R. Drautz, author M. Fähnle,and author J. M.Sanchez, 10.1088/0953-8984/16/23/005 journal journal J. Phys.: Condens. Mattervolume 16, pages 3843 (year 2004)NoStop [Slater and Koster(1954)]Slater54 author author J. C. Slater and author G. F. Koster, @noopjournal journal Phys. Rev. volume 94, pages 1498 (year 1954)NoStop [Sharma(1979)]Sharma79 author author R. R. Sharma, 10.1103/PhysRevB.19.2813 journal journal Phys. Rev. B volume 19,pages 2813 (year 1979)NoStop [Zhu et al.(2016)Zhu, Amsler, Fuhrer, Schaefer, Faraji, Rostami, Ghasemi, Sadeghi, Grauzinyte, Wolverton, and Goedecker]Goedecker2016fingerprints author author L. Zhu, author M. Amsler, author T. Fuhrer, author B. Schaefer, author S. Faraji, author S. Rostami, author S. A.Ghasemi, author A. Sadeghi, author M. Grauzinyte, author C. Wolverton,andauthor S. Goedecker, 10.1063/1.4940026 journal journal The Journal of Chemical Physics volume 144, pages 034203 (year 2016), http://arxiv.org/abs/https://doi.org/10.1063/1.4940026 https://doi.org/10.1063/1.4940026 NoStop [Haydock(1980)]Haydock80 author author R. Haydock, in @noopbooktitle Solid State Physics, Vol. volume 35, editor edited byeditor H. Ehrenreich, editor F. Seitz,and editor D. Turnbull (publisher Academic Press, address New York, year 1980)p. pages 215NoStop [Goedecker and Colombo(1994)]Goedecker94 author author S. Goedecker and author L. Colombo, @noopjournal journal Phys. Rev. Lett. volume 73, pages 122 (year 1994)NoStop [Horsfield et al.(1996)Horsfield, Bratkovsky, Pettifor, andAoki]Horsfield96 author author A. P. Horsfield, author A. M. Bratkovsky, author D. G. Pettifor,and author M. Aoki, @noopjournal journal Phys. Rev. B volume 53, pages 1656 (year 1996)NoStop [Silver et al.(1996)Silver, Röder, Voter, and Kress]Silver96 author author R. N. Silver, author H. Röder, author A. F. Voter,andauthor J. D. Kress, @noopjournal journal J. Comp. Phys. volume 124, pages 115 (year 1996)NoStop [Seiser et al.(2013)Seiser, Pettifor, and Drautz]Seiser13 author author B. Seiser, author D. G. Pettifor,and author R. Drautz, @noopjournal journal Phys. Rev. 
B volume 87, pages 094105 (year 2013)NoStop [McEniry and Drautz(2017)]mceniry2017linearscaling author author E. J. McEniry and author R. Drautz, @nooptitle Linear-scaling electronic structure theory: Electronic temperature in the kernel polynomial method,(year 2017), http://arxiv.org/abs/1701.01568 arXiv:1701.01568 [cond-mat.mtrl-sci] NoStop [Cyrot-Lackmann(1967)]Cyrot-Lackmann67 author author F. Cyrot-Lackmann, @noopjournal journal Adv. Phys. volume 16, pages 393 (year 1967)NoStop [Pettifor and Oleinik(1999)]Pettifor99 author author D. G. Pettifor and author I. I. Oleinik, @noopjournal journal Phys. Rev. B volume 59, pages 8487 (year 1999)NoStop [Pettifor and Oleinik(2000)]Pettifor00 author author D. G. Pettifor and author I. I. Oleinik, @noopjournal journal Phys. Rev. Lett. volume 84, pages 4124 (year 2000)NoStop [Pettifor and Oleinik(2002)]Pettifor02 author author D. G. Pettifor and author I. I. Oleinik, @noopjournal journal Phys. Rev. B volume 65, pages 172103 (year 2002)NoStop [Goff et al.(2023)Goff, Sievers, Wood, and Thompson]goff2023permutationadapted author author J. M. Goff, author C. Sievers, author M. A. Wood,andauthor A. P. Thompson,@nooptitle Permutation-adapted complete and independent basis for atomic cluster expansion descriptors,(year 2023), http://arxiv.org/abs/2208.01756 arXiv:2208.01756 [cond-mat.mtrl-sci] NoStop [Geiger et al.(2020)Geiger, Smidt, M., Miller, Boomsma, Dice, Lapchevskyi, Weiler, Tyszkiewicz, Batzner, Uhrin, Frellsen, Jung, Sanborn, Rackers, and Bailey]Geiger2020e3nn author author M. Geiger, author T. Smidt, author A. M., author B. K. Miller, author W. Boomsma, author B. Dice, author K. Lapchevskyi, author M. Weiler, author M. Tyszkiewicz, author S. Batzner, author M. Uhrin, author J. Frellsen, author N. Jung, author S. Sanborn, author J. Rackers, and author M. Bailey, 10.5281/zenodo.5292912 title Euclidean neural networks: e3nn,(year 2020)NoStop [Christensen and Anatole von Lilienfeld(2020)]Christensen2020 author author A. S. Christensen and author O. Anatole von Lilienfeld, 10.1088/2632-2153/abba6f journal journal Machine Learning: Science and Technology volume 1 (year 2020),10.1088/2632-2153/abba6fNoStop [Christensen and von Lilienfeld(2020)]revMD17 author author A. Christensen and author O. A. von Lilienfeld, 10.24435/materialscloud:wy-kn title Revised MD17 dataset,(year 2020),note Materials Cloud Archive 2020.82NoStop [Perdew et al.(1996)Perdew, Burke, and Ernzerhof]Perdew96 author author J. P. Perdew, author K. Burke, and author M. Ernzerhof,@noopjournal journal Phys. Rev. Lett.volume 77, pages 3865 (year 1996)NoStop [Blum et al.(2009)Blum, Gehrke, Hanke, Havu, Havu, Ren, Reuter, andScheffler]FHIaims author author V. Blum, author R. Gehrke, author F. Hanke, author P. Havu, author V. Havu, author X. Ren, author K. Reuter,and author M. Scheffler, @noopjournal journal Comput. Phys. Commun. volume 180, pages 2175 (year 2009)NoStop [Havu et al.(2009)Havu, Blum, Havu, and Scheffler]FHIaims2 author author V. Havu, author V. Blum, author P. Havu,and author M. Scheffler, @noopjournal journal J. Comp. Phys. volume 228, pages 8367 (year 2009)NoStop [Kocer et al.(2019)Kocer, Mason, and Erturk]Kocer2019 author author E. Kocer, author J. K. Mason, and author H. 
Erturk,@noopjournal journal The Journal of Chemical Physics volume 150, pages 154102 (year 2019)NoStop [Virtanen et al.(2020)Virtanen, Gommers, Oliphant, Haberland, Reddy, Cournapeau, Burovski, Peterson, Weckesser, Bright, van der Walt, Brett, Wilson, Millman, Mayorov, Nelson, Jones, Kern, Larson, Carey, Polat, Feng, Moore, VanderPlas, Laxalde, Perktold, Cimrman, Henriksen, Quintero, Harris, Archibald, Ribeiro, Pedregosa, van Mulbregt, and SciPy 1.0 Contributors]2020SciPy-NMeth author author P. Virtanen, author R. Gommers, author T. E. Oliphant, author M. Haberland, author T. Reddy, author D. Cournapeau, author E. Burovski, author P. Peterson, author W. Weckesser, author J. Bright, author S. J.van der Walt, author M. Brett, author J. Wilson, author K. J.Millman, author N. Mayorov, author A. R. J. Nelson, author E. Jones, author R. Kern, author E. Larson, author C. J. Carey, author İ. Polat, author Y. Feng, author E. W.Moore, author J. VanderPlas, author D. Laxalde, author J. Perktold, author R. Cimrman, author I. Henriksen, author E. A. Quintero, author C. R. Harris, author A. M. Archibald, author A. H. Ribeiro, author F. Pedregosa, author P. van Mulbregt,and author SciPy 1.0 Contributors, 10.1038/s41592-019-0686-2 journal journal Nature Methods volume 17, pages 261 (year 2020)NoStop [Kingma and Ba(2017)]kingma2017adam author author D. P. Kingma and author J. Ba,@nooptitle Adam: A method for stochastic optimization,(year 2017), http://arxiv.org/abs/1412.6980 arXiv:1412.6980 [cs.LG] NoStop [Abadi et al.(2015)Abadi, Agarwal, Barham, Brevdo, Chen, Citro, Corrado, Davis, Dean, Devin, Ghemawat, Goodfellow, Harp, Irving, Isard, Jia, Jozefowicz, Kaiser, Kudlur, Levenberg, Mané, Monga, Moore, Murray, Olah, Schuster, Shlens, Steiner, Sutskever, Talwar, Tucker, Vanhoucke, Vasudevan, Viégas, Vinyals, Warden, Wattenberg, Wicke, Yu, and Zheng]tensorflow2015-whitepaper author author M. Abadi, author A. Agarwal, author P. Barham, author E. Brevdo, author Z. Chen, author C. Citro, author G. S.Corrado, author A. Davis, author J. Dean, author M. Devin, author S. Ghemawat, author I. Goodfellow, author A. Harp, author G. Irving, author M. Isard, author Y. Jia, author R. Jozefowicz, author L. Kaiser, author M. Kudlur, author J. Levenberg, author D. Mané, author R. Monga, author S. Moore, author D. Murray, author C. Olah, author M. Schuster, author J. Shlens, author B. Steiner, author I. Sutskever, author K. Talwar, author P. Tucker, author V. Vanhoucke, author V. Vasudevan, author F. Viégas, author O. Vinyals, author P. Warden, author M. Wattenberg, author M. Wicke, author Y. Yu,and author X. Zheng, https://www.tensorflow.org/ title TensorFlow: Large-scale machine learning on heterogeneous systems,(year 2015), note software available from tensorflow.orgNoStop | http://arxiv.org/abs/2311.16326v1 | {
"authors": [
"Anton Bochkarev",
"Yury Lysogorskiy",
"Ralf Drautz"
],
"categories": [
"cond-mat.mtrl-sci"
],
"primary_category": "cond-mat.mtrl-sci",
"published": "20231127211955",
"title": "Atomic Cluster Expansion for semilocal interactions beyond equivariant message passing"
} |
II. Institut für Theoretische Physik, Universität Hamburg, D–22761 Hamburg, Germany
Institute for Nuclear Research of the Russian Academy of Sciences, 117312 Moscow, Russia
Department of Physics and Astronomy, University of South Carolina, Columbia, South Carolina 29208, USA

We discuss results from our global QCD analyses including nuclear data off deuterium from various measurements, as well as off ^3He and ^3H targets from the MARATHON experiment. We simultaneously determine the parton distribution functions of the proton, the higher-twist terms, and the nucleon off-shell correction functions responsible for the modifications of the partonic structure in bound protons and neutrons. In particular, we study the neutron-proton asymmetry of the off-shell correction and its interplay with the treatment of the higher-twist terms. We observe that the data on the ^3He/^3H cross section ratio are consistent with a single isoscalar off-shell function. We also provide our predictions on the ratio F_2^n/F_2^p and on the d and u quark distributions in the proton and in the ^3He and ^3H nuclei.

Off-shell modifications of bound nucleons and parton distributions
R. Petti
January 14, 2024
==================================================================

§ INTRODUCTION

Using data from deep-inelastic scattering (DIS) off nuclear targets with different proton-neutron content in global QCD analyses allows us to unravel the physics mechanisms responsible for the modifications of bound nucleons in the nuclear environment, to accurately constrain the PDFs of the neutron, and to test the nucleon charge symmetry. We summarize the results of our recent global QCD analyses <cit.>, in which we simultaneously constrain the proton PDFs, the higher-twist (HT) terms, and the functions describing the modification of the nucleon structure functions (SFs) in nuclei. [Presented at DIS2023: XXX International Workshop on Deep-Inelastic Scattering and Related Subjects, Michigan State University, USA, 27-31 March 2023.]

We use deuterium DIS data from various experiments, the data on the ^3He/^3H cross section ratio from the MARATHON experiment <cit.>, along with a typical set of proton DIS and collider data (for details see Refs. <cit.>). Nuclear corrections are treated following the microscopic model of Ref. <cit.>, which addresses a number of effects relevant in different kinematical regions of Bjorken x. In the large-x region relevant for the nuclear DIS data considered, the most important nuclear corrections originate from the nuclear momentum distribution, the nuclear binding <cit.>, and the off-shell (OS) corrections to the bound nucleon SFs <cit.>. The latter are directly related to the modification of the partonic structure of bound nucleons, and the validity of such an approach was demonstrated in the analysis of data on the nuclear EMC effect <cit.>. The observations of Ref. <cit.> have been confirmed in a global QCD analysis including deuterium DIS data <cit.>.

The data from the MARATHON experiment on DIS cross sections off ^3He and ^3H targets allow us to constrain the nucleon isospin dependence of the OS functions <cit.>. The OS functions, in turn, determine the in-medium modifications of the partonic structure of bound protons and neutrons. It should be noted that most of the fixed-target nuclear data in the present analysis have invariant momentum transfer squared Q^2 of about a few GeV^2, and for this reason the HT terms should be addressed.
To this end, we consider two different models of HT terms and study the interplay between the underlying HT model and the resulting predictions for the ratio d/u of the quark distributions, the structure function ratio F_2^n/F_2^p, and the proton-neutron asymmetry of the off-shell correction.

§ THEORY BACKGROUND

The cross sections of spin-independent charged-lepton inelastic scattering are fully described in terms of the F_T=2xF_1 and F_2 SFs. In the DIS region of high invariant momentum transfer squared Q^2, the SFs can be expressed as a power series in Q^-2 (twist expansion) within the operator product expansion (OPE). The leading twist (LT) SFs are given by a convolution of PDFs with the functions describing the quark-gluon interaction at the scale Q, which can be computed perturbatively as a series in the strong coupling constant (see, e.g., <cit.>). The SFs can then be written as F_i = F_i^TMC + H_i/Q^2 + ⋯, where i=T,2, F_i^TMC are the corresponding LT SFs including the target mass correction (TMC) <cit.>, and H_i describes the twist-4 contribution. We consider two commonly used HT models: (i) the additive HT model (aHT), motivated by the OPE, in which H_i=H_i(x), and (ii) the multiplicative HT model (mHT) <cit.>, in which H_i is assumed to be proportional to the corresponding LT SF, H_i=F_i^LT(x,Q^2) h_i(x).

We address nuclear corrections in the DIS process by treating it as an incoherent scattering off bound nucleons in the target rest frame. The deuteron SFs can be calculated as the sum of the bound proton and neutron SFs convoluted with the nucleon momentum distribution given by the deuteron wave function squared, |Ψ_d(k)|^2:
F_i^d = ∫ d^3k K_ij |Ψ_d(k)|^2 (F_j^p + F_j^n),
where the integration is performed over the bound nucleon momentum k, i,j=T,2, a summation over the repeated index j is assumed, and K_ij are kinematic factors <cit.>. For nuclei with A ≥ 3 the corresponding convolution requires an integration over the energy spectrum of the residual nuclear system, along with the nucleon momentum, which are described by the nuclear spectral functions 𝒫_p/A and 𝒫_n/A <cit.>:
F_i^A = ∫ d^4k K_ij (𝒫_p/A F_j^p + 𝒫_n/A F_j^n),
where the integration is performed over the bound nucleon four-momentum k. The nucleon off-shell SFs entering both convolutions depend on the scaling variable x'=Q^2/(2k·q), the DIS scale Q^2, and the nucleon invariant mass squared k^2=k_0^2- k^2 ≠ M^2, where M is the nucleon mass. This latter dependence originates from both the power TMC terms of order k^2/Q^2 and the OS dependence of the LT SFs. Following Refs. <cit.>, we treat the OS correction in the vicinity of the mass shell k^2=M^2 by expanding the SFs in a power series in v=(k^2-M^2)/M^2. To leading order in v we have
F_i^LT(x,Q^2,k^2) = F_i^LT(x,Q^2,M^2) (1 + δ f_i v), with δ f_i = ∂ln F_i^LT(x,Q^2,k^2)/∂ln k^2,
where the derivative is taken on the mass shell k^2=M^2. We assume equal functions δ f_T=δ f_2=δ f for F_T and F_2, motivated by the observation that F_T ≈ F_2 in the region where the OS effect is numerically important <cit.>.

We use these convolution equations to address the nuclear corrections from the momentum distribution, the nuclear binding, and the OS effect, which are the main nuclear corrections at large x. Other nuclear effects, like meson-exchange currents and nuclear shadowing, result in corrections comparable to the experimental uncertainties at large x <cit.> and are therefore neglected in the present analysis. We use a deuteron wave function based on the Argonne nucleon-nucleon potential <cit.> (AV18).
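To make the structure of the convolution above more concrete, the following minimal Python sketch smears a toy nucleon structure function with a toy longitudinal momentum distribution and applies the leading off-shell factor (1 + δf·v). All inputs are illustrative stand-ins: the Gaussian smearing replaces the AV18 wave function, the F_2 shape and the fixed average virtuality v are invented for the example, and no kinematic factors K_ij are included.

import numpy as np

def f2_nucleon(x):
    # Toy shape standing in for the free-nucleon F2 (not a fitted SF).
    return 3.0 * np.sqrt(x) * (1.0 - x) ** 3

def smearing(z, width=0.04):
    # Toy longitudinal momentum distribution peaked at z = 1, used here
    # in place of the AV18 deuteron wave function squared.
    w = np.exp(-0.5 * ((z - 1.0) / width) ** 2)
    return w / np.trapz(w, z)

def f2_deuteron_per_nucleon(x, delta_f=0.0, v=-0.04):
    # Schematic 1D convolution: smear F2^N over the momentum fraction z
    # and apply the leading off-shell factor (1 + delta_f * v), with v
    # treated as a fixed average virtuality for illustration.
    z = np.linspace(max(x, 1e-3), 1.3, 400)
    fz = smearing(z)
    xn = np.clip(x / z, 0.0, 1.0)
    return np.trapz(fz * f2_nucleon(xn) * (1.0 + delta_f * v), z)

for x in (0.3, 0.6, 0.8):
    free = f2_nucleon(x)
    bound = f2_deuteron_per_nucleon(x, delta_f=1.5)
    print(f"x={x:.1f}  free={free:.4f}  bound={bound:.4f}  ratio={bound/free:.3f}")

Varying delta_f in such a sketch mimics how the off-shell function feeds into the nuclear-to-nucleon ratio, which is the quantity actually constrained by the fits described below.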
For the A=3 nuclei, the proton (neutron) spectral function 𝒫_p(n)/A(ε, k) describes the corresponding energy (ε=k_0-M) and momentum (k) distribution in a nucleus at rest. The nuclear spectral function involves contributions from all possible A-1 intermediate states. For the proton spectral function of ^3He, the relevant contributions come from two-body pn intermediate states, both the pn continuum and the pn bound state, i.e., the deuteron. The neutron spectral function of ^3He involves only the pp continuum states. Similarly, for the ^3H nucleus, the neutron spectral function involves contributions from the bound pn state and from the pn continuum states, while the proton spectral function includes only the nn continuum states. We use the ^3He and ^3H spectral functions of Ref. <cit.>, computed with the AV18 nucleon-nucleon force and accounting for the Urbana three-nucleon interaction, as well as the Coulomb effect in ^3He. The details of the corresponding nuclear convolution equations can be found in Refs. <cit.>.

§ ANALYSIS FRAMEWORK

We simultaneously constrain the proton PDFs, the HT corrections, and the proton and neutron OS functions, δ f^p and δ f^n, describing the modifications of the proton and neutron PDFs in the nuclear environment, in a global QCD analysis. The datasets used are described in Refs. <cit.> and include charged-lepton DIS data off proton, deuterium, ^3He, and ^3H targets, as well as data on W^±/Z boson production at hadron colliders. In particular, data on the ratio of the DIS cross sections of the three-body nuclei, σ(^3He)/σ(^3H), from the MARATHON experiment <cit.> allow us to study the neutron-proton asymmetry δ f^a=δ f^n-δ f^p <cit.>.

We parametrize the proton PDFs following Ref. <cit.>, while the Q^2 dependence of the LT SFs is computed at next-to-next-to-leading order (NNLO) in perturbative QCD. The functions H_i(x) in the aHT model are treated independently for i=T,2 and are parametrized in the form of spline polynomials. A similar procedure is applied for the functions h_i in the mHT model. To reduce the number of parameters we assume H_i^p=H_i^n in the aHT model and also test the assumption h_i^p=h_i^n in the mHT model. We apply the cuts Q^2>2.5 GeV^2 and W>1.8 GeV, where W is the invariant mass of the produced hadronic states. Additional details about the analysis setup, such as the treatment of uncertainties and the PDF and HT parametrizations, can be found in Refs. <cit.>.

We parametrize the proton function δ f^p(x) in terms of a generic second-order polynomial <cit.>: δ f^p(x)=a+bx+cx^2, where the parameters a, b, and c are determined simultaneously with those of the proton PDFs and HTs. We also consider the corresponding neutron-proton asymmetry δ f^a=δ f^n-δ f^p, for which we assume a linear function, δ f^a(x)=a_1+b_1x, with a_1 and b_1 free parameters.

§ RESULTS AND DISCUSSION

In order to study the impact of various effects, we perform a number of fits with different settings. In our default QCD analysis we assume equal off-shell functions for protons and neutrons, δ f^p=δ f^n=δ f, and the aHT model for the HT terms. With such settings we obtain <cit.> good agreement with the MARATHON data on the ratio σ(^3He)/σ(^3H) <cit.>, with a χ^2 per number of data points (NDP) of 20/22, and χ^2/NDP=4861/4065 considering all data <cit.>. The function δ f(x) obtained from the analysis of Ref. <cit.> is shown in Fig. <ref> (left panel).
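As a purely illustrative aid, the functional forms used for the off-shell functions can be written down in a few lines of Python. The coefficient values below are placeholders chosen only to display the shapes of the parametrizations; they are not the fitted values of the analysis.

import numpy as np

# Placeholder coefficients (NOT the fitted values of the analysis).
a, b, c = 0.4, -2.4, 2.2      # proton OS function, second-order polynomial
a1, b1 = 0.0, 0.0             # neutron-proton asymmetry, linear (isoscalar limit)

def delta_f_p(x):
    # delta f^p(x) = a + b x + c x^2
    return a + b * x + c * x ** 2

def delta_f_n(x):
    # delta f^n(x) = delta f^p(x) + delta f^a(x), with delta f^a(x) = a1 + b1 x
    return delta_f_p(x) + a1 + b1 * x

for xi in np.linspace(0.1, 0.9, 9):
    print(f"x={xi:.2f}  delta_f_p={delta_f_p(xi):+.3f}  delta_f_n={delta_f_n(xi):+.3f}")

Setting a_1=b_1=0 reproduces the isoscalar case δ f^p=δ f^n used in the default fit, while nonzero values generate the neutron-proton asymmetry discussed below.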
The results are in good agreement with the original determination <cit.> from the ratios σ^A/σ^d of the DIS cross sections off nuclear targets with mass number A ≥ 4, using the proton and neutron SFs of Ref. <cit.>. The results of Ref. <cit.> also agree with those of Ref. <cit.>, which does not include the MARATHON data from A=3 nuclei. It should also be noted that the data on the ratio σ(^3He)/σ(^3H) allow a reduction of the δ f(x) uncertainty at large x.

The results shown in Fig. <ref> (left panel) are obtained assuming an isospin-symmetric function δ f^p=δ f^n and the aHT model for the HT terms. Such an assumption was verified in the analysis of the EMC effect in Ref. <cit.> and was also used in Refs. <cit.>. The MARATHON data on the ratio σ(^3He)/σ(^3H) were used to constrain the asymmetry δ f^a=δ f^n-δ f^p <cit.>. With the aHT model we obtain a function δ f^p similar to that of the isospin-symmetric case shown in Fig. <ref> (left panel), as well as an asymmetry δ f^a consistent with zero within uncertainties, as shown in Fig. <ref> (right panel) <cit.>. However, we obtain substantially different results on the function δ f^a with the mHT model, as shown in the right panel of Fig. <ref>. The underlying reason for a nonzero asymmetry δ f^a in the mHT model is the interplay between the HT terms and the LT ones, as H_i=F_i^LT(x,Q^2) h_i(x). On the one hand, the factor F_i^LT results in a Q^2 dependence of H_i, as illustrated in Ref. <cit.>. On the other hand, the factor F_i^LT also introduces an explicit isospin dependence in the HT terms, present even with an isoscalar function h_i^p=h_i^n. The nonzero asymmetry δ f^a found in the mHT model (Fig. <ref>) may therefore be a bias partially compensating such an isospin dependence.

The MARATHON data are particularly interesting as they are sensitive not only to isospin effects, but also to the HT contributions in the region x>0.6. Figure <ref> shows a comparison of the MARATHON F_2^n/F_2^p measurement with our predictions for both the aHT and the mHT models. Overall, we obtain an excellent description of the MARATHON data using our default QCD analysis with the aHT model, with χ^2/NDP=20/22. The data seem to prefer the aHT model over the mHT one, as indicated by the higher value of χ^2/NDP=34/22 with the latter.

We compare the ratio d/u of the quark distributions obtained with different HT models for the kinematics of the MARATHON experiment, together with the one from the analysis of Ref. <cit.> (ABMP16), which was performed with the aHT model but without any nuclear data (see Fig. <ref>). In the latter case the ratio d/u is mostly constrained by forward W-boson production data from the LHCb <cit.> and D0 <cit.> experiments. The ABMP16 result is in good agreement with the present one obtained with the aHT model. Instead, the d/u ratio in the mHT model is substantially higher at large x. Such an enhancement appears to be correlated with the nonzero values of the asymmetry δ f^a (cf. Figs. <ref> and <ref>). This observation indicates a tension between the DIS and Drell-Yan data in the mHT model.

We can use the results of Ref. <cit.> to calculate the nuclear modifications of the quark distributions for different flavors. In particular, the nuclear PDFs q_i/A for the parton type i=u,d,… can be obtained from the proton and neutron PDFs using a convolution similar to the one above <cit.>:
xq_i/A = ∫ d^4k (1+k_z/M) (𝒫_p/A x'q_i/p + 𝒫_n/A x'q_i/n),
where the off-shell nucleon PDFs depend on x', Q^2, and k^2, and the z-axis is antiparallel to the momentum transfer q.
The corresponding off-shell corrections are treated as in the off-shell expansion above, with an OS function δ f_q which, in general, depends on the quark flavor. We use the results obtained with the default aHT model, suggesting the same OS function δ f_q=δ f for both the u and d quark distributions. We calculate the ratio R_q=q_p/A/q_p between the proton contribution q_p/A to the convolution above and the corresponding free proton PDF for both u and d quarks in ^3He and ^3H, using the proton PDFs and the δ f(x) function of Ref. <cit.>, as shown in Fig. <ref>. The ratio R_q describes the modifications of the parton distributions q=u,d,… in a bound proton due to the energy-momentum distribution and to the off-shell effect. Even using an isoscalar OS function δ f, we observe a pronounced flavor dependence of the EMC effect at x>0.5 as a result of the convolution of PDFs with different x dependence with the nucleon momentum distribution. The nuclear dependence of R_q is also noticeable and is due to the differences in the proton spectral functions of ^3He and ^3H. In order to further clarify the flavor dependence of nuclear effects, we show the asymmetry Δ_3=(R_q(^3He)-R_q(^3H))/(R_q(^3He)+R_q(^3H)) for both q=u and d quarks in the right panel of Fig. <ref>.

The results described above contrast with those of Ref. <cit.>, claiming a significant isovector nuclear EMC effect from a global QCD analysis including A=2 and A=3 DIS data (see Fig. 3 in <cit.>). We comment in this context that Ref. <cit.> uses the mHT model of HT terms in their analysis. As we show above (see Ref. <cit.>), there is an interplay between the nucleon isospin dependence of the OS correction, the d/u ratio, and the HT terms in the mHT model. In particular, the isospin effect in the OS correction tends to compensate the isospin dependence of the HT terms in the mHT model. Furthermore, the analysis of Ref. <cit.> introduces an explicit nuclear dependence in the OS functions for individual quark flavors, which may result in additional correlations among parameters, potentially affecting the results. We note that the HT terms cancel out in the ratio F_i^n/F_i^p = F_i^LT,n/F_i^LT,p in the mHT model with the assumption h_i^p=h_i^n. We therefore expect that analyses of the ^3He/^3H ratio based on a naive LT approximation for the SFs <cit.> could be affected by biases on the resulting isospin dependence of the OS function δ f somewhat similar to the ones found in the mHT model.

§ SUMMARY AND OUTLOOK

We obtain a good description of the MARATHON data within the simple assumption of isoscalar HT contributions in the aHT model. From our global QCD analysis we get the same function δ f for both protons and neutrons, within uncertainties. This result is consistent with our former observations from the global QCD analyses including deuterium DIS data <cit.>, as well as with the analysis of the nuclear DIS data with A ≥ 3 <cit.>. The resulting prediction for the d/u ratio of the proton is similar to the one obtained in Ref. <cit.> without the use of any nuclear data. The presence of nuclear ^2H, ^3He, and ^3H DIS data in the QCD analysis allows a significant reduction of the uncertainty on the proton d/u ratio at large x. Further improvements are expected from data on the ratios σ(^3He)/σ(d) and σ(^3H)/σ(d) <cit.>.

We emphasize the importance of taking into account HT terms in the QCD analysis of DIS data with Q^2 ≲ 10 GeV^2. Two different HT models are considered: the additive (aHT) and multiplicative (mHT) HT models.
While the aHT model provides a good performance with isoscalar HT terms and OS function, in the mHT model the HT terms are different for protons and neutrons because of a correlation with the LT terms. In the mHT model we also find a nonzero neutron-proton asymmetry in the OS function. The ratio d/u at large x is enhanced in the mHT model as compared to that in the aHT model. These results are driven by the ^3He/^3H data and originate from the interplay between the LT and HT terms in the SFs, which is inherent to the mHT model. We conclude that this feature of the mHT model can lead to potential biases and inconsistencies, while the σ(^3He)/σ(^3H) data clearly prefer the aHT model over the mHT one, with χ^2/NDP=20/22 vs. 34/22.

Future precision cross-section measurements with ^2H, ^3H, and ^3He targets in a wide kinematical region would allow us to address the HT model and to further constrain the isospin dependence of nuclear effects at the parton level. More precise measurements of the latter will require future flavor-sensitive data from DIS at the electron-ion collider <cit.> and from both neutrino and antineutrino charged-current interactions off hydrogen and various isoscalar and non-isoscalar nuclear targets <cit.> at the long-baseline neutrino facility <cit.>.

We thank M. V. Garzelli and S.-O. Moch for valuable comments, G. Salmè for providing the ^3He and ^3H spectral functions of Ref. <cit.>, and G. G. Petratos for clarifications about the MARATHON data. S. A. is supported by the DFG Grants No. MO 1801/5-1 and No. KN 365/14-1. R. P. thanks the support of USC and of the CERN neutrino platform.
| http://arxiv.org/abs/2312.00809v1 | {
"authors": [
"S. I. Alekhin",
"S. A. Kulagin",
"R. Petti"
],
"categories": [
"hep-ph"
],
"primary_category": "hep-ph",
"published": "20231127091246",
"title": "Off-shell modifications of bound nucleons and parton distributions"
} |
27 November 2024
====================

Climate change has resulted in a year-over-year increase in adverse weather conditions which contribute to increasingly severe fire seasons. Without effective mitigation, these fires pose a threat to life, property, ecology, cultural heritage, and critical infrastructure. To better prepare for and react to the increasing threat of wildfires, more accurate fire models and mitigation responses are necessary. In this paper, we introduce SimFire, a versatile wildland fire projection simulator designed to generate realistic wildfire scenarios, and SimHarness, a modular agent-based machine learning wrapper capable of automatically generating land management strategies within SimFire to reduce the overall damage to the area. Together, this publicly available system allows researchers and practitioners to emulate and assess the effectiveness of firefighter interventions and to formulate strategic plans that prioritize value preservation and resource allocation optimization. The repositories are available for download at <https://github.com/mitrefireline>.

§ INTRODUCTION

The global effects of climate change, such as drought and increased temperatures, are exacerbating the frequency and intensity of wildfires <cit.> and in turn increasing the effects of climate change through excessive carbon-dioxide emissions <cit.> and significant terrain change. Severe wildfires pose a significant threat to life, property, ecology, cultural heritage, and critical infrastructure; in 2022 alone, over 7.6 million acres of land were burned by wildfires across the United States <cit.>, at a cost of over 3.5 billion dollars in suppression <cit.>. While wildfires are an essential natural occurrence for maintaining healthy ecological systems <cit.>, uncontrolled fires, particularly those close to the Wildland Urban Interface (WUI), can present significant risks to public health, life, and property, necessitating effective management or suppression measures.

In this paper, we introduce SimFire and SimHarness, a Python-based system to accurately model wildland fire spread and generate appropriate mitigation strategy responses via Reinforcement Learning (RL). SimFire utilizes the Rothermel fire spread formula <cit.> to simulate the movement of wildland fire through an environment generated from real-world operational or procedurally generated fictional data. Simulated agents can be added to the generated environment and place firefighter-induced mitigations to visualize how wildfires will react to specific mitigation strategies. SimHarness is a machine learning harness designed to train RL agents within SimFire to identify the optimal mitigation strategies for a given wildfire scenario. The combination of SimFire and SimHarness provides a customizable system designed to simulate the spread of wildland fire through a generated environment and suggest optimal mitigation strategies for the given scenario. The repositories are available for download at <https://github.com/mitrefireline>.

§.§ Related Works

Existing fire spread models <cit.> and visualization tools <cit.> have brought value to the decision-making and planning process for fire chiefs, burn bosses, and land managers. SimFire and SimHarness aim to derive even more insights for planners and mitigators by leveraging agent-based machine learning that identifies the optimal strategies for addressing wildland fire scenarios.
In recent years, there has been increasing academic interest in RL for disaster relief and response. The works in <cit.> both provide open-source RL environments and models for training agents to mitigate the spread of wildfire and limit overall damage. Altamimi <cit.> similarly trains an RL agent to mitigate the spread of wildfire through an environment, but does not include open-source code. In all these cases, the environments do not support using real-world data during terrain generation or true fire-spread models such as Rothermel <cit.>. SimFire aims to fill this gap with realistic environments, research-backed fire spread capabilities, and a design that supports further improvement by the open-source community. Similarly, SimHarness' modular structure makes it compatible with any disaster modeler that implements the required simulation API, not just wildland fire, making SimHarness more flexible than current frameworks and extensible to a variety of disaster scenarios.

§ BACKGROUND AND PRELIMINARIES

§.§ Rothermel Fire Projection

The Rothermel surface fire spread model has been widely used in the field of fire and fuels management since 1972 <cit.>. To model the spread of fire across a given surface, the Rothermel equation takes fuel moisture and wind into account for weather conditions, and slope and elevation into account for terrain conditions. These environmental conditions, weather and terrain, pair with input fuel complexity values to determine the spread of a fire throughout an environment. While Rothermel is considered a valuable tool for estimating the rate of fire spread under certain conditions, its accuracy and applicability can vary depending on factors such as the specific environmental conditions, fuel types, and terrain. Researchers and practitioners often use Rothermel as part of a suite of tools with weather data to better understand and manage wildfires.

§.§ Reinforcement Learning

Reinforcement learning is an agent-based sub-field of machine learning that utilizes a user-designed reward function to train an intelligent agent to interact with an environment and achieve a desired goal <cit.>. In RL, the environment is defined as a Markov Decision Process (MDP) <cit.>, M = (S, A, P, ρ_0, R, γ, T), where S is the state space, A is the action space, P : S × A × S → [0,1] is the state transition probability, ρ_0 : S × A → [0,1] is the initial state probability, R : S × A → ℝ is the reward function, γ is the discount factor, and T is the maximum episode length. The policy π_θ : S × A → [0,1] assigns a probability value to an action given a state.

Throughout training, the agent receives an observed state from the environment, s_t ∈ S, representing the state space information currently available to the agent, and performs an action a_t ∈ A according to its policy π_θ or, at times, a random policy to encourage exploration. After the agent interacts with the environment via the given action, the environment returns both a next state s_t' ∈ S and a reward r_t. The agent is trained to find a policy that optimizes the user-defined reward function R.

§ SYSTEM COMPONENTS

As wildfire frequency and severity increase, it is clear that innovation is needed to generate more effective wildfire management and mitigation strategies.
SimFire and SimHarness work as a single system to both accurately simulate the spread of a fire through a user-specified environment and suggest mitigation strategies for the given scenario, reducing the fire severity risk and lessening the financial, ecological, and public health impacts that agencies manage.

§.§ SimFire

SimFire is an open-source Python tool that simulates realistic wildfire spread over generated environments. These generated environments can be created using procedurally generated fictional data or using topographic and fuel data sources available through LANDFIRE <cit.> to model real-world environments. When using real-world data, users can specify a year to gather data from and an area's GPS coordinates to create a realistic training environment.

SimFire introduces a base simulation class that supports a simulated disaster environment. This parent class provides the API necessary for SimHarness to train agents within the environment. A child class of this base provides a simulated wildfire disaster environment that uses the Rothermel equations provided by Andrews <cit.> as the basis for the fire spread model. Through a configuration file, users can adjust the simulated environment's size, terrain, fuel sources, and wind dynamics, including complex wind dynamics based on Navier-Stokes equations <cit.>. SimFire provides a variety of fuel configurations out of the box, including the standard 13 Anderson Behavior Fuel Models <cit.>, and supports the addition of user-specified fuel types as well. Additionally, users can configure aspects of the fire itself, such as ignition location, rate of spread attenuation, and maximum fire duration for a single space. The library gives researchers and wildland fire managers control over the scenarios used in their mitigation experiments.

In addition to the fire spread model, SimFire supports the placement of different mitigations to control the spread of fire. Mitigations such as firelines, scratchlines, and wetlines can be placed at any pixel within the simulated environment, allowing users to experiment with different mitigation strategies to see how the fire reacts in certain scenarios. SimFire employs PyGame <cit.>, a scalable and highly optimized Python game library, to visualize the fire spread, agent movements, and agent interactions within the environment. The implemented algorithms and formulas, along with the flexibility provided by SimFire, allow researchers to define different test scenarios and locations for their mitigation experiments. Additional information about SimFire's fire spread verification, data layers, and agent actions can be found in Appendix <ref>, along with example fire scenarios.

§.§ SimHarness

SimHarness is a Python repository designed to support the training of RLlib <cit.> RL algorithms within simulated disaster environments. SimHarness takes as input an instance of the simulation class, such as SimFire's wildfire simulation, as the training environment. The simulation object provides an API that allows SimHarness to move agents around the simulated environment and interact with it by placing mitigations. The agents represent firefighters moving through an environment as a wildfire spreads, placing mitigations such as firelines to limit the spread of the fire within the area.

SimHarness utilizes Hydra <cit.> as a hierarchical configuration management tool to allow users to configure the training parameters of SimHarness.
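As a rough illustration of how such a hierarchical configuration is consumed on the Python side, a Hydra-based entry point might look like the sketch below. The configuration group names (simulation, training, agents) and the conf/config.yaml layout are assumptions made for this example, not the actual SimHarness schema.

import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(version_base=None, config_path="conf", config_name="config")
def main(cfg: DictConfig) -> None:
    # Print the fully resolved, hierarchical configuration.
    print(OmegaConf.to_yaml(cfg))
    # Hypothetical groups, for illustration only:
    #   cfg.simulation -> parameters forwarded to the simulation instance
    #   cfg.training   -> RLlib-style algorithm settings (lr, gamma, ...)
    #   cfg.agents     -> agent speed, interactions, observed attributes

if __name__ == "__main__":
    main()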
The configuration files provided by SimHarness mirror the structure of the Algorithm Configs used by RLlib for model training, covering, for example, the training, environment, and rollout settings. Users can also configure the parameters used for initializing the Simulation and the agents within the environment. For example, users can configure the agent speed, which determines the number of actions an agent can take before the simulation is run, the interactions, which are the mitigation techniques an agent can apply to the landscape, and the observed attributes, which determine which attributes are passed as input dimensions to the RL model during training and inference.

Another configurable aspect of the SimHarness environment is the reward function. Users can create a custom reward function for training that emphasizes user-specific goals. This allows for tuning of the agent policy to better suit the user's goals. For example, some users may want policies that prioritize ending the fire as quickly as possible, while others may focus more on limiting the fire spread to specific areas. An example workflow of SimHarness can be found in Appendix <ref>.

§ PRELIMINARY EXPERIMENTS

SimFire and SimHarness provide a novel system for generating mitigation strategies for scenarios including real-world data. For this reason, comparisons between SimHarness and other available methods are not one-to-one, but we hope the open-sourcing of SimFire and our preliminary experiments can expand current benchmarks to include the testing of strategies in real-world scenarios.

The following experiment applies the SimFire and SimHarness tools to an area of land in Coalinga, CA, near the location of the Mineral Fire of 2020. The fuel and terrain data are the true observed data from 2019 pulled from LANDFIRE <cit.> to simulate the fuels and terrain prior to fire ignition. The simulated area covers a 128 unit × 128 unit square with a left-corner GPS lat-long location of (36.09493, -120.52193), where each unit in the grid represents 30 square meters.

In the scenario, the fire starts at a random location within the simulated area, and the simulated "firefighter" controlled by a trained DQN <cit.> policy always begins at location [0, 64], halfway down the left-hand side of the simulated environment. As shown in Figure <ref>, the trained agent has generalized to generate successful mitigation strategies for random fire ignition scenarios. In all three cases, the agent's mitigation strategy successfully saved a large section of land from being burned and limited the rate of spread of the fire. In scenario 2, the fire ignites close to the latitude of the agent, requiring the agent to move in a straight line to cut the fire off before it can spread to the top half of the environment. The training parameters and quantitative metrics from the training run can be found in Appendix <ref>.

§ DISCUSSION

The SimFire and SimHarness repositories together create an open-source Python-based system to accurately model wildland fire spread and generate appropriate mitigation strategy responses via RL. Researchers and wildfire managers can leverage SimFire and SimHarness to identify new land management strategies that limit the fire severity risk and lessen the financial, ecological, and public health impacts caused by increasingly severe wildfires.

In the future, we aim to incorporate additional agent constraints, like agent distance to the fire and realistic movements <cit.>, into the training process to produce more accurate strategies.
We also aim to add agent types and capabilities to more accurately model the range of equipment and crews available to land managers, and to add data layers to the Simulation to more accurately model the landscape, including buildings and areas of cultural importance, such that the environment more accurately models the real world.

The authors acknowledge the help of Chris Kempis in the development of SimFire. This work was funded under MITRE's 2022 and 2023 Independent Research and Development Program.

§ SIMFIRE

§.§ Available Data Layers and Actions

SimFire provides Fuel, Terrain, and Wind data layers that are ingested into SimHarness to create the following state spaces at each pixel:

* Fuel:
  w_0: Oven-dry Fuel Load (lb/ft^2)
  sigma: Surface-Area-to-Volume Ratio (ft^2/ft^3)
  delta: Fuel Bed Depth (ft)
  M_x: Dead Fuel Moisture of Extinction
* Terrain:
  elevation: Elevation (ft) in relation to sea level.
* Wind:
  wind_speed: Wind Speed (mph)
  wind_direction: Direction of the Wind (degrees)

These layers are provided to SimHarness as dimensions of the input observation to the RL model. Users can specify which data layers are passed to the model and their order within the observation. SimFire also provides the minimum and maximum bounds for each data layer, which helps SimHarness normalize observation dimensions, if desired by the user. The Fuel data layer is set based on the type of fuel present in the given scenario, determined by the fire start location and the size of the simulation specified. SimFire supports the use of the 13 Anderson Behavior Fuel Models <cit.>.

SimFire also supports 3 types of firefighting mitigations:

* Fireline: Flammable material has been removed by scraping or digging down to mineral soil.
* Wetline: Water/suppressant sprayed in an area.
* Scratchline: Preliminary, quick-action fireline where flammable material is removed, but not entirely and not completely down to mineral soil.

Each mitigation has different properties which affect the speed and movement of the fire when the fire is in contact with the mitigation.

§.§ Validation and Verification

Future work will provide a detailed comparison of the validation and verification process for the fire spread simulator, SimFire, and the underlying fire spread model, Rothermel, to other fire spread models including ElmFire. This study is evaluated using the historical database BurnMD <cit.>, which includes 308 medium-sized fires with near real-time mitigations and daily wildfire perimeters. For more details about the database, its contents, and the data sources, please see the referenced publication or visit the BurnMD website, <https://fireline.mitre.org/burnmd>.

§ SIMHARNESS

§.§ Workflow

SimHarness allows users to train RL agents within any simulated disaster environment, assuming the disaster environment implements the methods required by the simulation API. In the case of SimFire, SimHarness can generate mitigation strategies for firefighters to limit the damage caused by wildfires. The general workflow for SimHarness is shown in Figure <ref>. The SimHarness training loop functions similarly to a traditional RL training loop, except that it expects the passed-in environment to be a child class of the simulation base class as opposed to a gym <cit.> environment. This base class is currently part of the SimFire package, but is expected to be moved to a separate, non-disaster-specific package in the future. A schematic of this loop is sketched below.
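The following Python sketch illustrates the shape of that loop. It is a pseudo-implementation only: the method names (reset, predict, apply_agent_action, step, observe, record) and return signatures are invented for illustration and do not correspond to the actual SimHarness or SimFire APIs.

def run_episode(sim, agent, max_steps=500):
    # Hypothetical harness/simulation loop; all names are illustrative.
    obs = sim.reset()                                  # initial fire map and data layers
    total_reward = 0.0
    for _ in range(max_steps):
        movement, interaction = agent.predict(obs)     # e.g. a move and a mitigation
        sim.apply_agent_action(movement, interaction)  # move agent, place fireline, etc.
        sim.step()                                     # advance the fire spread model
        obs, reward, done = sim.observe()              # next state and reward signal
        agent.record(obs, reward, done)                # store the transition for learning
        total_reward += reward
        if done:
            break
    return total_reward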
The simulated environment outputs training signals, such as observations and rewards, to the SimHarness agent(s), which use the observations to predict optimal actions. The actions produced by the model provide both movement and interaction information. Movements are how the agent traverses the environment, such as moving up, down, left, or right. Interactions are how the agent changes the environment itself; in the case of SimFire, this can be placing a fireline. These actions are relayed back to the simulated environment, which then affects the overall disaster scenario simulated by the environment.

§.§ Training Parameters and Reward Function

The table below provides a detailed overview of the training parameters leveraged by the environment, agent, learning models, and harness for the experimental results that are presented in Section <ref>. The operational metrics of performance for this experiment are displayed in Section <ref>.

A notable aspect of this experiment was the inclusion of a parallel benchmark simulation during each episode. The benchmark simulation simulates the spread of the current wildfire scenario without any agent-placed mitigations. The input observation to the model at each step includes 3 arrays:

* Fire Map: Array representing the current fire spread at a given timestep, including the placed mitigations and agent locations.
* Benchmark Fire Map: Array representing how the fire spread would look at the current timestep if no mitigations were placed.
* Final Benchmark Fire Map: Array representing how the fire spread would look at the final timestep if no mitigations were placed.

The inclusion of the above information within the training environment is used both for the input observation to the agent and for the reward function, as the reward function compares the total area burned within the mitigated episode to the unmitigated counterpart.

Class      Experiment Parameter         Value
Env        Type                         Wildfire
Env        Simulator                    SimFire
Env        Type                         Operational
Env        Left Corner (Lat, Lon)       (36.09493, -120.52193)
Env        Geographic Area              491520 Square Meters
Env        Grid Cells                   128x128
Env        Fire Start Location          Random
Env        Simulation Data              [Terrain, Fuel, Wind]
Agent      Type                         Single Agent
Agent      Start Location               (64, 0)
Agent      Observation Space            [current fire map, benchmark fire map, final benchmark fire map]
Agent      Action Space Movements       [up, down, left, right]
Agent      Action Space Interactions    [Fireline]
Agent      Speed                        4
Training   Algorithm                    DQN
Training   Algorithm Config             [Dueling + Distributional + Noisy]
Training   Episodes                     14000
Training   Timesteps                    8273288
Training   Exploration                  Epsilon Greedy [1.0, 0.01]
Training   Replay Buffer                Episodic Prioritized Replay
Training   [lr, gamma]                  [0.0045, 0.99]
Harness    Type                         Ray [RLlib]
Harness    CPU                          16
Harness    GPU                          2

The corresponding reward function for this experiment was based on the incremental proportion of area saved, i.e., area that was not burned, burning, or mitigated, at each timestep (t), when comparing the mitigated simulation (Sim) to the unmitigated benchmark simulation (Bench).
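The equations that follow (numbered (1)-(7)) make this reward precise. As an informal complement, the per-timestep computation could be sketched in Python roughly as follows; the integer encoding of the fire map and the function signature are assumptions made for this example and do not reflect the actual SimHarness implementation.

import numpy as np

# Hypothetical integer encoding of fire-map pixels; the real SimFire
# enum values may differ.
UNBURNED, BURNING, BURNED, FIRELINE = 0, 1, 2, 3

def step_reward(sim_map, bench_map, prev_sim_damaged, prev_bench_damaged,
                total_endangered):
    # Pixels "lost" in the mitigated sim: burned, burning, or mitigated.
    sim_damaged = int(np.isin(sim_map, (BURNING, BURNED, FIRELINE)).sum())
    # Pixels "lost" in the unmitigated benchmark: burned or burning only.
    bench_damaged = int(np.isin(bench_map, (BURNING, BURNED)).sum())
    new_sim = sim_damaged - prev_sim_damaged
    new_bench = bench_damaged - prev_bench_damaged
    reward = (new_bench - new_sim) / total_endangered
    return reward, sim_damaged, bench_damaged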
Damaged_Sim_t = Sim[Burned_t] + Sim[Burning_t] + Sim[Mitigated_t]    (1)
Damaged_Bench_t = Bench[Burned_t] + Bench[Burning_t]    (2)
NewDamaged_Sim_t = Damaged_Sim_t - Damaged_Sim_t-1    (3)
NewDamaged_Bench_t = Damaged_Bench_t - Damaged_Bench_t-1    (4)
TotalEndangered = Bench_final_t[Burned]    (5)
Reward_t = (NewDamaged_Bench_t - NewDamaged_Sim_t) / TotalEndangered    (6)

Equations (1) and (2) represent the total number of pixels that are "lost" in the simulation at a given timestep, including burned pixels, currently burning pixels, and pixels that have been mitigated by the agent, for the mitigated simulation and the unmitigated benchmark simulation, respectively. Equations (3) and (4) represent the new number of pixels that are "lost" in the simulation at the given timestep. Equation (5) is the total number of pixels that will burn if no mitigations are placed in the environment. The final reward in Equation (6) is the difference in newly damaged pixels between the unmitigated and mitigated simulations as a proportion of the total pixels burned in the benchmark simulation. A positive value represents more pixels being saved in the mitigated scenario than in the unmitigated scenario, with a higher value corresponding to more area saved. A value of 0 means the unmitigated and mitigated scenarios saved the same number of pixels, and a negative value means the mitigated scenario saved less land than the unmitigated scenario. This ensures that the total sum of the rewards within an episode directly corresponds to the total proportion of area "saved" for the entire episode.

A small positive reward (Equation (7)) is applied to the final reward (Equation (6)) when the agent places a mitigation on an unburned square with no prior mitigations. This addition encourages the agent to place more firelines overall, which helps with training as the agent will get better training examples of how the fire spread reacts to placed mitigations.

Reward_t = Reward_t + 0.25 / Area_sim    (7)

§.§ Training Metrics

The graphs below report the experiment results and the operational metrics of performance for the experiment detailed in Section <ref>. The graphs illustrate the Episode Reward Mean, Mean Area Saved, Mean Timesteps Saved, and Mean Burn Rate Reduction over the training of 14,000 episodes. The per-episode (eps) metrics of Mean Area Saved (Equation (8)), Mean Timesteps Saved (Equation (9)), and Mean Burn Rate Reduction (Equations (10) and (11)) are based on operational metrics that are utilized to estimate wildfire severity and mitigation efficacy. These metrics also serve as heuristic measurements to monitor in order to validate that the agent is learning effective policies.

AreaSaved_eps = (Sim[Burned_eps] + Sim[Mitigated_eps]) - Bench[Burned_eps]    (8)
TimestepsSaved_eps = Sim[timesteps_eps] - Bench[timesteps_eps]    (9)
BurnRate_eps = (Burned_eps + Mitigated_eps) / timesteps_eps * 100    (10)
BurnRateReduction_eps = Sim[BurnRate_eps] - Bench[BurnRate_eps]    (11)
| http://arxiv.org/abs/2311.15925v1 | {
"authors": [
"Alexander Tapley",
"Marissa Dotter",
"Michael Doyle",
"Aidan Fennelly",
"Dhanuj Gandikota",
"Savanna Smith",
"Michael Threet",
"Tim Welsh"
],
"categories": [
"cs.LG",
"cs.AI",
"cs.MA",
"cs.SE"
],
"primary_category": "cs.LG",
"published": "20231127153705",
"title": "Reinforcement Learning for Wildfire Mitigation in Simulated Disaster Environments"
} |
GPT4Vis: What Can GPT-4 Do for Zero-shot Visual Recognition?

Wenhao Wu, Huanjin Yao, Mengxi Zhang, Yuxin Song, Wanli Ouyang, Jingdong Wang

January 14, 2024
=======================================================================================================

“The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.”— Albert Einstein

This paper does not present a novel method. Instead, it delves into an essential yet must-know baseline in light of the latest advancements in Generative Artificial Intelligence (GenAI): the utilization of GPT-4 for visual understanding. Our study centers on the evaluation of GPT-4's linguistic and visual capabilities in zero-shot visual recognition tasks. Specifically, we explore the potential of its generated rich textual descriptions across various categories to enhance recognition performance without any training. Additionally, we evaluate its visual proficiency in directly recognizing diverse visual content. To achieve this, we conduct an extensive series of experiments, systematically quantifying the performance of GPT-4 across three modalities: images, videos, and point clouds. This comprehensive evaluation encompasses a total of 16 widely recognized benchmark datasets, providing top-1 and top-5 accuracy metrics. Our study reveals that leveraging GPT-4's advanced linguistic knowledge to generate rich descriptions markedly improves zero-shot recognition. In terms of visual proficiency, GPT-4V's average performance across 16 datasets sits roughly between the capabilities of OpenAI-CLIP's ViT-L and EVA-CLIP's ViT-E. We hope that this research will contribute valuable data points and experience for future studies. We release our code at <https://github.com/whwu95/GPT4Vis>.

§ INTRODUCTION

ChatGPT <cit.>, launched in November 2022, marked a seismic shift in the application of AI, sparking a “wow” moment that galvanized the tech industry. This innovative product catalyzed a flurry of investment in Generative AI. The innovation journey continued in March 2023 with the introduction of GPT-4[GPT-4 can accept a prompt of text and images. For clarity, in this work, we refer to the version of the model with visual capabilities as “GPT-4V”, following the OpenAI report <cit.>.], a large multimodal model capable of processing both text and images, which further captivated the industry by demonstrating the extensive capabilities of multimodal technologies. By September 2023, GPT-4 with Vision (GPT-4V) was fully integrated into the ChatGPT platform. Following this milestone, comprehensive user study reports <cit.> by computer vision researchers began to emerge, providing evaluations of its visual prowess. More recently, on the first anniversary of ChatGPT, November 6, OpenAI hosted its first DevDay, during which the GPT-4V API was released. This release opens new doors for the academic community to conduct extensive evaluations of its performance across a range of visual benchmarks, offering quantitative metrics beyond the limited scope of user studies.

In this paper, we evaluate GPT-4's performance in visual recognition, one of the fundamental tasks in computer vision, without any prior training (i.e., in a zero-shot manner). We explore two main facets: linguistic and visual capabilities. (i) For linguistic capabilities, we explore the role of GPT-4's language expertise in enhancing visual recognition.
The large-scale image-text pre-training model, CLIP <cit.> has built a bridge between vision and text, enabling the calculation of similarity scores between various category name embeddings and image embeddings to perform zero-shot visual recognition. Building upon this framework, we consider leveraging GPT-4's broad linguistic knowledge to generate richer and more detailed descriptions of category names, thus enhancing intra-class diversity and inter-class distinguishability, offering a refined alternative to the use of basic category names for zero-shot recognition. (ii) Secondly, the evaluation of visual capabilities is quite straightforward: we directly input images and candidate categories into GPT-4V for relevance ranking, from which we obtain the Top-1 and Top-5 prediction results. To conduct a comprehensive evaluation, we included three distinct modalities: images, videos, and point clouds, across 16 well-known and publicly available classification benchmarks <cit.>, as showcased in Figure <ref>.For video datasets, we implemented a uniform sampling of frames to create multi-image inputs. For point cloud data, we process the 3D shape into multiple 2D rendered images. For each dataset, we offer the zero-shot performance of CLIP, a representative model among web-scale pre-trained vision-language models (VLMs), as a reference. The results includes four backbones: OpenAI CLIP (pre-trained on 400M image-text pairs) with ViT-B/32, ViT-B/16, and ViT-L/14, as well as the more recent and larger EVA CLIP's ViT-E/14, which boasts a staggering 4.4B parameters (14× that of ViT-L), and has been pre-trained on 2B image-text pairs. Our investigation indicates that GPT-4's linguistic capabilities can significantly enhance zero-shot visual recognition through the generation of detailed descriptions. As for visual capabilities, GPT-4V's performance across 16 datasets averages between that of OpenAI-CLIP's ViT-L and EVA-CLIP's ViT-E.For more detailed results, analyses, and experimental details, please refer to the experimental section. In summary,to the best of our knowledge, this study is the first quantitative evaluation of zero-shot visual recognition capabilities using GPT-4V across three modalities—images, videos, and point clouds—over 16 popular visual benchmarks. We believe that the empirical evidence, the prompts used, and the unresolved questions presented herein are worth knowing. It is our aspiration that our data points and experience will be useful for future studies. § RELATED WORKS Preliminary Explorations of GPT-4V. Recent studies have undertaken detailed case studies on GPT-4V's capabilities across diverse tasks.Prior research <cit.> delved into the reasoning skills of foundational models within visual domains from a qualitative perspective.Subsequently, GPT-4V's performance has been examined in various visual-language tasks, including but not limited to video understanding <cit.>, optical character recognition (OCR) <cit.>, image context reasoning <cit.>, recommender system <cit.>, mathematical logic <cit.>, medical imaging analysis <cit.>, anomaly detection <cit.>, social media analysis <cit.> and autonomous driving <cit.>.However, a gap remains in these studies: most have concentrated on qualitative, initial explorations without extensive quantitative analysis utilizing established visual benchmarks. 
Such analysis is essential for a comprehensive validation of GPT-4V's visual understanding capabilities.The recent availability of its API [gpt-4-vision-preview: https://platform.openai.com/docs/guides/vision] now enables large-scale quantitative evaluations. Enhancing Zero-shot Visual Recognition with LLMs. The web-scale image-text pre-training model, CLIP <cit.>, has established a pivotal connection between visual and textual domains. Numerous subsequent studies have extended this model to video understanding <cit.> or point cloud recognition <cit.>. With the growing influence of Large Language Models (LLMs), several researchers are investigating how class-specific insights from LLMs can improve CLIP’s accuracy. <cit.> leveraged GPT-3 <cit.> to create text descriptions for unseen class labels and compare the image embedding with the embeddings of the descriptions. <cit.> further developed this concept by employing ChatGPT to structure the classes hierarchically, thereby boosting zero-shot image recognition. In our study, rather than constructing a hierarchical structure, we prompt GPT-4 to produce detailed multi-sentence descriptions for categories, examining the effectiveness of this straightforward approach in image, video, and point cloud recognition. § METHODOLOGYAs demonstrated in Figure <ref>, we evaluate GPT-4's linguistic and visual capabilities in zero-shot visual recognition. This section will introduce the specific details.§.§ Data Processing To meet the input standards of CLIP <cit.> or GPT-4V <cit.>, apart from image classification tasks, the inputs for video and point cloud classification must be transformed into images. As illustrated in Figure <ref>, this process involves the transformation of video and point cloud data into image sets. Specifically, for input videos, we extract multiple frames through uniform temporal sampling as multi-image inputs. In the case of point clouds, we follow MVCNN <cit.> to render multiple views around the object in a uni-directional manner at an angle of 30 degrees. To reduce testing costs, we use six rendered images in our process.This prepares us to carry out subsequent evaluations. §.§ Exploration of Linguistic CapabilitiesOur objective is to explore how the extensive linguistic knowledge of GPT-4 can be leveraged to enhance visual recognition performance. Building on the cross-modal bridge established by CLIP through large-scale image-text pre-training, we aim to enrich textual descriptions beyond using simple category names to better align with visual content. As shown in Figure <ref>(a), we begin by guiding GPT-4 to generate K sentences describing each category in the dataset using appropriate prompts. These K sentences are then converted into K text embeddings via CLIP's frozen text encoder, while the visual signal is encoded into a vision embedding by CLIP's frozen image encoder (, for video and point cloud data, the vision embedding is obtained by global averaging pooling over multiple frame embeddings or multiple viewpoint embeddings). Subsequently, these text embeddings are compared with the vision embedding to calculate K similarity scores. After normalization with a Softmax function and averaging, we obtain a consolidated similarity score for each category in relation to the visual input.Given a dataset with C categories, each visual input yields C similarity scores, which are then ranked from highest to lowest to determine the final prediction. 
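As a concrete illustration of the scoring just described, the following is a minimal sketch of how K generated sentences per category can be ensembled into a zero-shot classifier. It assumes the open-source open_clip package and a small dictionary of pre-computed GPT-4 descriptions (the description strings and the assumption of an equal K per class are ours); it is not the authors' released code.

```python
import torch
import open_clip

# Hypothetical GPT-4-generated descriptions (K sentences per class; K defaults to 20 in the paper).
descriptions = {
    "British Shorthair": ["A stocky cat with a dense blue-grey coat.", "A round-faced cat with copper eyes."],
    "Persian": ["A long-haired cat with a flat face.", "A fluffy cat with a short muzzle."],
}

model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-16", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-16")
model.eval()

@torch.no_grad()
def classify(image_pil, temperature=100.0, topk=5):
    """Return the top categories for a PIL image using description ensembling."""
    image = preprocess(image_pil).unsqueeze(0)
    img_emb = model.encode_image(image)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)

    names, per_class = list(descriptions.keys()), []
    for name in names:
        txt = tokenizer(descriptions[name])
        txt_emb = model.encode_text(txt)
        txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
        per_class.append(temperature * (img_emb @ txt_emb.T).squeeze(0))  # K similarity scores

    scores = torch.stack(per_class)                              # [C, K], assumes equal K per class
    probs = scores.flatten().softmax(dim=0).view_as(scores)      # softmax normalization
    consolidated = probs.mean(dim=1)                             # one score per category
    top = consolidated.topk(min(topk, len(names)))
    return [names[i] for i in top.indices]
```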
§.§ Evaluation of Visual CapabilitiesThe recent release of the GPT-4V API enables us to perform comprehensive visual benchmark evaluation, advancing beyond mere case studies constrained to the ChatGPT web interface. The evaluation workflow is intuitive, as shown in Figure <ref>(b). We input visual samples, which may be a single image or a collection thereof, along with an appropriate text prompt. This prompt instructs GPT-4V to assess the dataset's categories, sorting them according to their relevance to the provided visual content, and eventually yields the top-5 prediction results. These predictions are then measured against the dataset's ground truth to determine the top-1 and top-5 accuracy metrics. § EXPERIMENTS§.§ DatasetsThis study evaluates 16 visual datasets across images, videos, and point clouds. The evaluation utilizes the commonly used validation sets for these benchmarks, with Table <ref> detailing the number of test samples per dataset. §.§ Implementation Details GPT-4 Generated Descriptions. Using the GPT-4 API (version gpt-4-1106-preview), we generate K descriptive sentences for each category, with K defaulting to be 20. As an example, for the “British Shorthairs" category from the Oxford Pets dataset <cit.>, we present our prompts alongside GPT-4's responses in Figure <ref>.Utilizing GPT-4 with Vision. In our study, we employ the GPT-4V API (specifically, gpt-4-vision-preview) to evaluate 16 different benchmarks. For videos, we select three frames via uniform sampling for API processing, and for point clouds, we provide images from six perspectives. Figure <ref> showcases the interaction with GPT-4V, highlighting both the prompts used and the subsequent responses for evaluations across images, videos, and point clouds.Moreover, by the end of our experimental phase (November 17, 2023), due to OpenAI's ongoing limitation of 100 RPD (requests per day) per account for this model, we've adopted batch testing to ensure we fully utilize each request. For image datasets, we submit sets of images (, 10) in a single query, prompting the API to return results for the entire batch simultaneously. For video and point cloud data, we involve inputting 10 videos (totaling 30 frames) or 5 point cloud instances (also totaling 30 images) at once, coupled with specific prompts such as sample names to facilitate separation and identification. Further details about our batch testing prompts is available in our https://github.com/whwu95/GPT4VisCode Repo. §.§ Gains from GPT PromptsTable <ref> showcases our evaluation results on 16 datasets and their average performance.For each dataset, we've detailed results using four different CLIP backbones, including OpenAI CLIP <cit.>'s configurations of ViT-B/32, ViT-B/16, and ViT-L/14, each pre-trained with 400 million image-text pairs, and the EVA CLIP <cit.>'s ViT-E/14, which is notable for its 4.4 billion parameters (14× that of ViT-L/14) and training on 2 billion image-text pairs. We will delve into an analysis of these results next.Descriptions generated by GPT-4 distinctly surpass the CLIP baseline in a majority of datasets, boasting an average top-1 accuracy improvement of 7% across 16 datasets. This consistent enhancement across all three modalities—images, videos, and point clouds—highlights the method's potent generalizability. More specifically:1) For image datasets, with RAF-DB <cit.> as a focal point, GPT Prompts enable an over 20% increment in accuracy across various backbones. 
For other datasets like EuroSAT <cit.> satellite image classification, Flower <cit.> fine-grained recognition, Pets <cit.> fine-grained recognition, Aircraft <cit.> fine-grained classification, and Caltech101 <cit.> object classification, we observe boosts of approximately 9-15%. Smaller gains in Stanford Cars <cit.> and Food101 <cit.> suggest that a high density of similar categories may lead to ambiguous descriptions, confusing the CLIP model. In general, larger CLIP models achieve better zero-shot recognition performance on image tasks, and GPT-generated prompts reliably offer additional enhancements.2) On video datasets, especially HMDB-51 <cit.> and UCF101 <cit.>, we observe astonishing gains of up to 11-15%, indicating that rich descriptions of human actions align better with video content than simpler phrases. The Something-Something V1 (SSV1) <cit.> dataset, however, exhibits poor performance with the CLIP baseline ( 4% Top-1) due to the lack of temporal modeling. Unlike Kinetics, UCF, and HMDB datasets, which can be recognized through scenes and object appearances as shown in Figure <ref>, SSV1 demands the understanding of complex object-object and human-object interactions, requiring robust temporal and motion modeling for correct recognition. Hence, activities cannot be inferred merely from individual frames (, Pushing something so it spins), as demonstrated in Figure <ref>.In essence, with scene-based video recognition datasets, the larger the CLIP model, the greater the zero-shot performance, a trend consistent with image tasks where GPT Prompts lead to additional gains. Yet, in datasets where temporal modeling is crucial, CLIP's simple frame averaging strategy falls short, and GPT prompts cannot compensate for this deficiency. 3) For point cloud datasets, employing multiple rendered viewpoints for zero-shot recognition with CLIP achieves noteworthy accuracy, mirroring the positive effects seen with image and scene-based video datasets. The integration of GPT Prompts further amplifies these positive results. §.§ Zero-shot Visual Performance of GPT-4VTo evaluate the visual capabilities of GPT-4V, as shown in Table <ref>, we conduct quantitative evaluation across 16 datasets. Utilizing straightforward prompts (depicted in Figure <ref>), we obtain predictions from GPT-4V.Analyzing the average results from these 16 datasets, GPT-4V's top-1 accuracy roughly falls between that of CLIP's ViT-L and EVA's ViT-E. Specifically:1) On image datasets, GPT-4V significantly outstrips the largest CLIP model, EVA ViT-E, on the RAF-DB dataset <cit.> (58.5% 31.0%), demonstrating a strong capability in facial expression recognition. It also surpasses EVA ViT-E on Caltech101 <cit.> object recognition (95.5% 94%). Additionally, GPT-4V attains results on par with EVA ViT-E in texture classification <cit.> and Pets <cit.> fine-grained recognition tasks. 
Its performance in aircraft <cit.> fine-grained recognition ranks between CLIP ViT-L and EVA ViT-E, while slightly trailing behind ViT-L in flower <cit.> fine-grained recognition, and it scores between ViT-B/32 and ViT-B/16 in food <cit.> fine-grained recognition and ImageNet <cit.> 1k-class recognition.For scene recognition <cit.>, car type recognition <cit.>, and satellite image classification <cit.>, its performance is just below that of ViT-B/32.It's noteworthy that, as per the GPT-4V documentation[https://platform.openai.com/docs/guides/vision], the low-resolution version of the model scales images to 512×512, while the high-resolution version scales to 2048×2048. Given that many of these datasets have relatively low resolution, with some significantly below 512×512, this could potentially impact the recognition accuracy of GPT-4V, such as with the EuroSAT dataset, which has a resolution of 64×64.2) For video datasets, it’s important to highlight that Something-Something V1 <cit.> focuses on modeling temporal relationships, whereas UCF101 <cit.>, HMDB51 <cit.>, and Kinetics <cit.> are less dependent on such temporal relationships, meaning actions can often be inferred from individual frames, as shown in Figure <ref>. GPT-4V performs well on Kinetics, UCF101, and HMDB51, significantly surpassing EVA ViT-E's performance on UCF101 and HMDB51: achieving 81.6% 74.8% on UCF, and an even more significant 61.6% 41.5% on HMDB.Given GPT-4V API's daily usage limits and for quicker evaluation turnover, we limited our frame sampling to three per video. It is plausible that increasing the number of sampled frames could further improve video recognition accuracy. Notably, GPT-4V's performance on the SSV1 dataset is also markedly poor, at just 4.6% top-1 accuracy, which aligns with the CLIP baseline.This is exemplified in Figure <ref>, where isolating each frame does not provide enough context to ascertain the person's activity; only through the analysis of motion information across a sequence of frames can we make a prediction. Such results highlight the current limitations of GPT-4V in temporal modeling due to the absence of a video encoder capable of processing temporal dynamics and motions.3) For point cloud datasets, GPT-4V demonstrates excellent performance with just six rendered images, on par with EVA ViT. It stands to reason that adding more views would likely enhance the recognition accuracy even further.In this section, all the above results are intended to provide baseline data and experience for future research, encouraging the development of more effective prompts to guide GPT-4V towards more accurate recognition. §.§ Ablation Studies on GPT PromptsHere we present several ablation studies demonstrating the impact of prompts on CLIP's zero-shot performance.Impact of different prompts. Table <ref> comprehensively exhibits the results of different prompts on the zero-shot visual recognition performance of CLIP across various datasets. Augmenting this with Hand-crafted Prompt combined with the category names leads to further improvements in most datasets, showcasing the method's robustness. We then explore the effectiveness of employing multiple GPT-generated descriptive sentences related to category names. We find that GPT Prompt outperforms the baseline in 14 datasets. 
Figure <ref> showcases the performance enhancement of GPT Prompts over category names in certain categories.Also, GPT Prompt can achieve better performance than Hand-crafted Prompts in 10 datasets.Our conjecture is that single category names may convey more global concepts, while the fine-grained details in generated descriptions are likely to align more closely with the visual content, thus amplifying inter-class distinctiveness. The strategy of generating multiple descriptive sentences may potentially further augment this effect. However, it's noteworthy that GPT Prompts are either below or roughly on par with Hand-crafted Prompt in 6 datasets, particularly in SUN397 <cit.>, ImageNet-1k <cit.>, Oxford Cars <cit.>, and Kinetics-400 <cit.>. These datasets generally have a large number of categories with an emphasis on highly fine-grained classification. For such closely similar categories (like similar cars or scenes), richer descriptions generated may not be as distinctive as simply using the category name. Therefore, we consider combining “Hand-crafted Prompt + GPT Prompts" to amalgamate the advantages of both, which has led to improved results in 11 datasets. For the 4 datasets (, EuroSAT <cit.>, RAF-DB <cit.>, Flower102 <cit.> and ModelNet10 <cit.>) where GPT Prompts demonstrate a clear advantage, the integration of Hand-crafted Prompt has been deemed unnecessary.Impact of sentence quantity generated by GPT. Our exploration also delved into the effect of the number of descriptive sentences generated by GPT-4 on zero-shot performance.Taking the EuroSAT <cit.> dataset as an example, as shown in Table <ref>, performance with only one generated sentence was lower than using the category name alone. However, an increase to three sentences led to a noticeable improvement and surpassed the baseline (42.9% 40.2%). With five sentences, there was a substantial performance boost. In pursuit of identifying a saturation point for this improvement, we observed that increasing to 20 sentences brought about minimal additional benefits. Consequently, we adopt the generation of 20 sentences as the default setting for our experiments. § SPECIAL CASES AND DISCUSSION ON GPT-4VIn this section, we primarily present some special phenomena observed during the evaluation, provided for the reference of future researchers. Predict categories beyond the given list. In some instances, GPT-4V predicted categories that were not included in the given category list. For example, during the evaluation of the texture recognition dataset DTD <cit.>, GPT-4V might respond with: “Note: As there were no categories provided that perfectly matched some images (such as straw-like), I have used the most comparable or related terms from the list provided. Additionally, not all images may have an exact or obvious matching category within the provided list, and I've estimated the most relevant categories based on observable texture patterns in the images." In such cases, we tried to restrict the predictions to the provided category list through the prompts, but this proved ineffective. To proceed with the evaluation, we chose to exclude these predictions that were not within the given list.Occasional misalignment in batch evaluation. 
Within multi-sample batch evaluation setups, occasional misalignments in results have been observed.Specifically, with the GPT-4V API's requests per day (RPD) limited to 100 per account currently, we need to improve evaluation efficiency.Therefore, we input multiple samples with their corresponding identifiers (, file names) within the input token allowance, generating predictions in a JSON format where each key is an ID and its value the top-5 category predictions.However, larger batch sizes, especially those ranging from 50 to 100 or more, there's a notable probability of result misalignment, , the results for sample A actually belongs to sample B.Moreover, we've encountered instances of both repeated predictions for the same sample and omissions of predictions for others. To ensure the stability of the evaluation, we have reduced the batch size to between 10-30 images, which has led to comparatively stable results.Prediction based on image names. We observed that GPT-4V tends to occasionally infer categories based on the names of image files. To facilitate batch processing, we have prompted GPT-4V to output batch prediction results in JSON format, with the sample ID (derived from the file name) as the key, and the predicted categories as the value.Notably, in cases where the file names explicitly suggest category information, like `banded_0060.jpg' from the DTD <cit.> dataset, GPT-4V has been found to sometimes base its predictions on the name rather than the image visual content. For instance, without altering file names, top-1 evaluation accuracy on the DTD dataset anomalously soared to 98%, yet normalized to a plausible 59% upon anonymizing the file names.To counter this, we've implemented a strategy of hashing each sample's name to ensure GPT-4V's focus on visual content rather than nomenclature clues.Safety system in GPT-4V. Throughout our dataset evaluations, we stumbled upon specific instances, as depicted in Figure <ref>, where GPT-4V refused to generate predictions,stating: “Your input image may contain content that is not allowed by our safety system."We surmise that this precautionary mechanism is designed to ensure that GPT-4V adheres to ethical guidelines by avoiding engagement with potentially sensitive or inappropriate content.§ CONCLUSION AND LIMITATIONThis work aims to quantitatively evaluating the linguistic and visual capabilities of the current state-of-the-art large multimodal model GPT-4 in zero-shot visual recognition tasks. To ensure a comprehensive evaluation, we have conducted experiments across three modalities—images, videos, and point clouds—spanning a total of 16 popular academic benchmark. We hope our empirical study and experience will be useful for the community to push the development of future multimodal model.Limitations: 1) Currently, due to the GPT-4V API's request limits, we have utilized a small-batch prediction method to maximize the utility of each request. While we've shuffled the samples for each dataset, the potential impact of batching samples on individual predictions remains to be clarified. Once the API's limits be relaxed, we intend to carry out evaluations on a per-sample basis for more detailed insights. 2) This study has focused solely on fundamental visual recognition tasks. A comprehensive quantitative analysis of other tasks, including dense prediction, is necessary to truly gauge the breadth of these models' capabilities in analyzing complex visual information. 3) This research is limited to the evaluation of GPT-4 alone. 
In the future, we aim to quantitatively evaluate more multimodal models (e.g., ERNIE Bot, MiniGPT-4, etc.) to broaden our comparative analysis and the scope of our study.
"authors": [
"Wenhao Wu",
"Huanjin Yao",
"Mengxi Zhang",
"Yuxin Song",
"Wanli Ouyang",
"Jingdong Wang"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231127112910",
"title": "GPT4Vis: What Can GPT-4 Do for Zero-shot Visual Recognition?"
} |
Optimal Clustering of Discrete Mixtures: Binomial, Poisson, Block Models, and Multi-layer Networks

Zhongyuan Lyu^†, Ting Li^∙ and Dong Xia^†
(Lyu and Li are co-first authors. Dong Xia's research was partially supported by Hong Kong RGC Grant GRF 16302020 and GRF 16301622.)
^† Department of Mathematics, Hong Kong University of Science and Technology
^∙ Department of Applied Mathematics, The Hong Kong Polytechnic University

(January 14, 2024)
==============================================================================================================================================================================================================================================================================================================================================

In this paper, we first study the fundamental limit of clustering networks when a multi-layer network is present. Under the mixture multi-layer stochastic block model (MMSBM), we show that the minimax optimal network clustering error rate takes an exponential form and is characterized by the Rényi-1/2 divergence between the edge probability distributions of the component networks. We propose a novel two-stage network clustering method including a tensor-based initialization algorithm involving both node and sample splitting and a refinement procedure by a likelihood-based Lloyd's algorithm. Network clustering must be accompanied by node community detection. Our proposed algorithm achieves the minimax optimal network clustering error rate and allows extreme network sparsity under MMSBM. Numerical simulations and real data experiments both validate that our method outperforms existing methods. Oftentimes, the edges of networks carry count-type weights. We then extend our methodology and analysis framework to study the minimax optimal clustering error rate for mixtures of discrete distributions including Binomial, Poisson, and multi-layer Poisson networks. The minimax optimal clustering error rates in these discrete mixtures all take the same exponential form characterized by the Rényi-1/2 divergences. These optimal clustering error rates in discrete mixtures can also be achieved by our proposed two-stage clustering algorithm.

§ INTRODUCTION

Binary and count-type data routinely arise from diverse applications, especially in network data analysis. Unweighted networks have binary edges, encoding the existence or absence of a pairwise interaction between two nodes, e.g., whether Alice and Bob called each other during some week or whether there exists a direct flight between New York and Beijing. Networks can also have edges which carry count-type weights, e.g., the number of phone calls between Alice and Bob or the number of direct flights between New York and Beijing. In the big data era, collections of networks have become increasingly accessible across various domains <cit.>. Typical examples include ecological networks <cit.> representing the interactions of species in different ecosystems, trading networks <cit.> of different types of commodities among countries, and human brain connectivity networks derived from resting-state functional magnetic resonance imaging (fMRI) <cit.>. Most of the existing literature has focused on community detection <cit.>, which aims at clustering “similar" vertices into groups. When multiple networks are present, finding similarities between networks and clustering them into groups has great scientific value in many fields <cit.>. Basically, a network itself is viewed as a single unit of observation.
Clustering networks aims at discriminating samples at the graph level, aids comparative and targeted research, reduces redundancy for increased efficiency, and unveils concealed structural information within each network layer. Broadly speaking, two types of network clustering methods have appeared in the literature. The first type consists of graphon-based methods <cit.>. These methods are non-parametric in nature, offering great practical flexibility in that network sizes can vary and node registration can be absent. The ultimate clustering is performed on a low-dimensional embedding of networks. Nonetheless, it is unclear what the “optimal" embedding is in practice, and the non-parametric framework lacks interpretability of the network clustering results. The second line of methods is model-based, assuming that the observed networks are generated by some mixture model. See, e.g., <cit.>. Model-based approaches facilitate the design of specially crafted clustering algorithms and enjoy better interpretability. However, the foregoing works provided few insights on the theoretical front of network clustering. More recently, <cit.> proposed a mixture multi-layer stochastic block model (MMSBM), which simultaneously models node communities and network clusters. It is a generalization of SBM <cit.>, one of the most popular generative models for a single network. MMSBM allows heterogeneous node community assignments across different networks. A tensor-based clustering algorithm was designed in <cit.>, which can consistently recover the node communities and network clusters even when the observed networks are extremely sparse. Some other works <cit.> proposed alternative algorithms attempting to achieve a smaller clustering error but with a more stringent network sparsity condition. <cit.> studied a generative multi-layer network model and investigated the optimal error rates in node community detection. Their proposed model is free of network labels, so network clustering was not an essential goal there.

This paper aims at studying the fundamental limit of clustering networks, e.g., what the optimal clustering error is and what quantity plays the essential role there. The question is challenging for several reasons. First of all, it seems that network clustering must be accompanied by node community detection. Without taking advantage of node community information, multiple networks are merely matrix-valued observations with binary entries, for which clustering becomes extremely challenging if only some low-rank structure is present, even for the low-rank Gaussian mixture model <cit.>. Optimal network clustering under MMSBM thus hinges upon efficient node community detection. Secondly, multi-layer networks can be very sparse; for example, some networks may even be disconnected graphs. Intuitively, it can be statistically challenging to compare different networks if most of them are disconnected graphs. We shall design novel clustering methods and technical tools to cope with the extreme sparsity in multi-layer networks. Lastly, model-based clustering methods usually require specific designs of computational algorithms for both initialization and node community detection. The integrated analysis of both computational and statistical performance is also technically challenging. By a careful inspection of the oracle case and a comparable minimax lower bound, we find that the minimax optimal error rate of clustering networks under MMSBM takes the form exp(-I^∗/2) as I^∗→∞.
Here I^∗ represents the Rényi-1/2 divergence <cit.> between the two Bernoulli random matrices characterizing the edge generating mechanism of two SBMs. Rényi divergence measures the “distance" between probability distributions, for example, the Rényi-1/2 divergence between Bern(p) and Bern(q) is simply -2log((pq)^1/2+(1-p)^1/2(1-q)^1/2); the Rényi-1/2 divergence between Poisson(θ_1) and Poisson(θ_2) is θ_1+θ_2-2θ_1^1/2θ_2^1/2.The formal definition of I^∗ under MMSBM can be found in Section <ref>. One can simply view I^∗ as the separation strength in mixture models. We first show that the exponential-type rate exp(-I^∗/2) isachievable by the oracle version of a likelihood-based Lloyd's algorithm assuming that the SBM parameters are known. A matching minimax lower bound is also established confirming the derived rate is optimal.This significantly improves those sub-optimal polynomial-type error rates attained by prior works<cit.>, thereby filling the void in the optimal error rate of network clustering under MMSBM.More importantly, we propose a novel two-stage clustering algorithm based on the popular initialization then refinement scheme. The algorithm begins with initial node community detection and layer clustering, which areutilized to estimate the edge probability matrices. The refinement step re-assigns layer label to networks based on the maximum likelihood. The accuracy of estimated edge probability matrices crucially affects the success of refinement procedure. We prove that the two-stage clustering algorithm can achieve the minimax optimal network clustering error rate, both in expectation and with high probability, as long as the initial node community detection and layer clustering are sufficiently accurate. It thus remains to design an efficient initial node community detection and layer clustering algorithm under MMSBM. Unfortunately, the tensor-based spectral method proposed in <cit.>, while being able to deliver a consistent clustering for both nodes and layers,cannot provide a sufficiently accurate initial clustering of nodes or layers. Due to technical reasons, the clustering error rate delivered by <cit.> involves additional log n terms and, as a result, our newly proposed two-stage clustering algorithm works only if the separation strength I^∗≫log n, which actually implies exact clustering in view of the optimal rate exp(-I^∗/2). Here n denotes the number of layers. We design a novel initialization algorithm based on tensor decomposition and node and sample switching. The algorithm involves both node and network sample splitting and uses two sub-samples for cross spectral estimate. The sample splitting allows us to derive a sharp concentration inequality for the sum of random sparse matrices. Note that sample splitting introduces loss of efficiency only for initialization, which is recovered subsequently by the refinement procedure. By combining the initialization and two-stage clustering algorithms, we end up with a computationally efficient method which achieves minimax optimal network clustering error rate and allows extreme network sparsity under MMSBM. 
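Since the Rényi-1/2 divergence plays the role of the separation strength throughout, the closed forms quoted above translate directly into a small computational sketch; the function and variable names below are ours, and the sum runs over unordered node pairs (self-loops included, as allowed in our model).

```python
import numpy as np

def renyi_half_bernoulli(p, q):
    """Rényi-1/2 divergence between Bern(p) and Bern(q):
    -2 * log( sqrt(p*q) + sqrt((1-p)*(1-q)) )."""
    return -2.0 * np.log(np.sqrt(p * q) + np.sqrt((1 - p) * (1 - q)))

def renyi_half_poisson(t1, t2):
    """Rényi-1/2 divergence between Poisson(t1) and Poisson(t2):
    t1 + t2 - 2*sqrt(t1*t2)."""
    return t1 + t2 - 2.0 * np.sqrt(t1 * t2)

def separation_strength(P1, P2, kind="bernoulli"):
    """I* = sum of the entry-wise Rényi-1/2 divergences between two d x d
    edge-probability (Bernoulli) or edge-intensity (Poisson) matrices,
    taken over the upper triangle including the diagonal."""
    iu = np.triu_indices(P1.shape[0])
    div = renyi_half_bernoulli if kind == "bernoulli" else renyi_half_poisson
    return float(np.sum(div(P1[iu], P2[iu])))
```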
Besides the theoretical achievements, numerical simulations and real data experiments demonstrate that our new network clustering algorithm considerably outperforms the method proposed in <cit.>.It is noteworthy that the exponential-type minimax optimal clustering error rate has also been found in other problems.Notable examples include node community detection in a single network <cit.> and multi-layer networks <cit.>, and clustering task for Gaussian Mixture Model (GMM) <cit.> under the assumption of isotropic covariance.In particular,<cit.> designed a computationally feasible two-stage algorithm, consisting of a spectral initialization anda refinement procedure motivated by penalized MLE,which achieves theoptimal node clustering error rate under SBM.More exactly,the optimal rate takes the form exp(-dI_0/2) where d is the number of nodes and I_0 denotes the Rényi-1/2 divergence between Bern(p) and Bern(q).Here p and q represents the within-community and between-community edge probabilities,respectively.On the other hand,the seminalwork <cit.> first derived the optimal clustering error rate in GMM,taking the form exp(-Δ^2/8), for the widely-used Lloyd's algorithm <cit.>,where the separation strength Δ denotes the Frobenius norm of the difference of two population center matrices.The same rate was later established by <cit.> under a more general framework with a weaker separation condition on Δ.It has also been shown that spectral clustering can achieve optimal rates in GMM <cit.> and SBM <cit.>.More recently,<cit.> introduced a low-rank mixture model (LrMM) for clustering matrix-valued observations and proposed a low-rank Lloyd's algorithm to leverage the planted structures within data.Their method also achieves an optimal clustering error rate in the same exponential form.We also extend MMSBM to multi-layer weighted networks where edges can carry count-type weights. The model is referred to as the mixture multi-layer Poisson block model (MMPBM) in that the edge-wise weight follows a Poisson distribution. Poisson block model has been studied for weighted networks. See, e.g., <cit.>.By analyzing the oracle case and a likelihood-based Lloyd's algorithm, we show that the minimax optimal rate of network clustering under MMPBM also takes the form exp(-I^∗/2) where I^∗ is the Rényi-1/2 divergence between the Poisson distributions describing the edge weight distribution. A similar two-stage clustering algorithm is shown to achieve the minimax optimal clustering error rate if well initialized. Finally, we study the optimal clustering of other discrete mixtures including the mixture of Binomial distributions <cit.> and the mixture of Poisson distributions <cit.>. Our contributions are summarized as follows. We study the clustering problem and investigate the fundamental limits for several mixture models of discrete distributions including Binomial, Poisson, multi-layer binary networks, and multi-layer Poisson networks. By analyzing a likelihood-based Lloyd's algorithm and establishing respective minimax lower bounds, we demonstrate that the minimax optimal clustering error rate in these discrete mixtures takes a universal exponential form where the exponent reflects the Rényi-1/2 divergence between the underlying discrete distributions. To our best knowledge, these are the first results revealing the fundamental limits of clustering error rate in discrete mixtures. In particular, layer clustering error rate is of great interest inmulti-layer networks analysis. 
We provide a general two-stage clustering algorithm based on the initialization-then-refinement scheme which achieves the minimax optimal clustering error rate under the aforementioned discrete mixture models. Initialization under MMSBM is challenging. We design a novel initialization algorithm involving both node and sample switching and cross spectral estimate, which is guaranteed to provide a sufficiently accurate initialization even when the multi-layer networks are extremely sparse.The rest of the paper is organizedas follows. In Section <ref>, we study the MMSBM for multi-layer binary networks, derive the minimax lower bound of network clustering error rate, present a two-stage clustering algorithm achieving the minimax optimal network clustering error rate, and introduce a novel initialization algorithm based on node and sample splitting. Section <ref> focuses on the MMPBM for multi-layer weighted networks when edges carry Poisson-type weights. The minimax lower bound of network clustering error rate is developed and we show that the two-stage clustering algorithm is able to achieve the optimal clustering error rate under MMPBM. Mixture of Binomial distributions and Poisson distributions is investigated in Section <ref>.Numerical simulations and real data experiments are presented in Section <ref>, which validate the theoretical findings of this paper. All proofs and technical lemmas are deferred to the Appendix.The following common notations will frequently appear in later sections. We use c,c_0,c_1⋯ and C,C_0,C_1,⋯ to denote generic small and large constants, which may vary from line to line.For nonnegative sequences x_nand y_n, we write x_n≲ y_n or y_n≳ x_n or x_n=O(y_n) or y_n=Ω(x_n)if there exists a constant C>0 such that x_n≤ Cy_n, write x_n≍ y_n if x_n≲ y_n and y_n≲ x_n, and write x_n≪ y_n or y_n≫ x_nor x_n=o(y_n) or y_n=ω(x_n) if x_n=O(c_ny_n)for some c_n→ 0. Denote ·the ℓ_2-norm for vectors and operator norms for matrices. Let ·_ F denote the matrix Frobenious norm. § OPTIMAL CLUSTERING OF MIXTURE MULTI-LAYER NETWORKS§.§ Mixture multi-layer stochastic block model The mixture multi-layer stochastic block model (MMSBM) was introducedin <cit.> for community detection in multi-layer networks.Suppose we observe a collection of undirected, binary networks {_i}_i=1^n of common d nodes, or equivalently, their symmetric adjacency matrices {_i∈{0,1}^d× d,i∈[n]}. The essential assumption is that each network, a so-called layer, is sampled independently from one of two [For simplicity, we focus on a two-component mixture model, but the results can be easily extended to multi-component cases.] stochastic block model (SBM). Let a vector ^∗∈[2]^n denote the latent label of each network, i.e., z^∗_i=2 meaning that the i-th network was sampled from the second SBM. The two SBM's are characterized by two community probability matrices _1,_2∈(0,1)^K× K and community assignment _1, _2: [d]↦ [K]. Here we assume there are K communities in each SBM for simplicity. Each edge of the network is independently sampled as_i(j_1,j_2)∼Bernoulli(_z^∗_i(_z^∗_i(j_1),_z^∗_i(j_2)) ), i∈[n], j_1, j_2∈[d]Note that we allow self-loops for notational simplicity, whereas there is no essential difficulty in extending our results to the scenario with no self-loops. 
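To fix ideas, a minimal simulation of the generative process in (<ref>) is sketched below. The symbol names B1, B2 and sigma1, sigma2 are our own shorthand for the community probability matrices and the community assignments; the sketch is only an illustration of the sampling mechanism, not part of the proposed method.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mmsbm(n, d, B1, B2, sigma1, sigma2, pi=0.5):
    """Draw n layers; layer i follows SBM(B1, sigma1) or SBM(B2, sigma2)
    according to its latent label z[i] in {1, 2}. Self-loops are allowed."""
    z = rng.choice([1, 2], size=n, p=[pi, 1 - pi])
    layers = []
    for i in range(n):
        B, sig = (B1, sigma1) if z[i] == 1 else (B2, sigma2)
        P = B[np.ix_(sig, sig)]                  # d x d edge-probability matrix
        upper = rng.random((d, d)) < P
        A = np.triu(upper).astype(int)
        A = A + np.triu(A, 1).T                  # symmetrize, keep the diagonal once
        layers.append(A)
    return np.stack(layers), z

# Tiny example: d = 40 nodes, K = 2 communities per SBM, n = 50 layers.
d, K, n = 40, 2, 50
B1 = np.array([[0.30, 0.05], [0.05, 0.30]])
B2 = np.array([[0.15, 0.10], [0.10, 0.15]])
sigma1 = rng.integers(0, K, size=d)              # local memberships, 0-indexed
sigma2 = rng.integers(0, K, size=d)
A, z_true = sample_mmsbm(n, d, B1, B2, sigma1, sigma2)
```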
After observing all the networks _i, i∈[n], our goal is to cluster them into two groups, i.e., which layers are possibly generated from the same SBM.The Hamming distance evaluates the goodness of clustering: h(,^∗)=min_π∈𝔖_21/n∑_i=1^n𝕀( z_iπ(z^∗_i)),whereis a vector in [2]^n and 𝔖_k stands for the set of all permutations of [k].It can be viewed as the mis-clustering error of multi-layer networks.Previously, <cit.> proposed a tensor-decomposition based approach for layer clustering and showed that it can consistently recover the layer labels. Our interest is to characterize the minimax optimal clustering error under MMSBM. For ease of exposition,define _m(^∗):={i∈[n]:z^∗_i=m} and S_k(_m):={j∈[d]:_m(j)=k} as the corresponding index set of ^∗ and _m for ∀ m∈[2] ,k∈[K]. Denote n_m:=|_m(^∗)| so that n_1+n_2=n. Throughout this section, we focus on thefollowing parameter space of MMSBM::=(n,d,K, ,α,β,γ)={(,{_1,_2},{_1,_2}):∈[2]^n,|_m()|∈[n/2α,α n/2],_m:[d]→[K],|_k(_m)|∈[d/β K,β d/K], _m=_m^⊤∈(0,1)^K× K, p_m:=min_i_m(i,i)≥γ^-1max_i_m(i,i), γmin_i j_m(i,j)≥ q_m:=max_i j_m(i,j),max{p_m_1/p_m_2, q_m_1/q_m_2, p_m_1/q_m_2,q_m_1/p_m_2}≤γ, ∀ m_1, m_2∈[2] }, where α,β,γ≥ 1 are assumed to be absolute constants and :=(p_1,q_1, p_2,q_2) is a vector containing the boundary probabilities. Denote _m(j_1,j_2):=_m(_m(j_1),_m(j_2) ) the edge probability matrix, i.e., the expectation of adjacency matrix under SBM(_m,_m). Note that the network sparsity is characterized by the probabilities p_m, q_m and we will be particularly interested in the extremely sparse case when p_m, q_m are of order ((n∧ d)d)^-1.Since network clustering becomes trivial if the two probability matrices _m_1 and _m_2 are strikingly different, we focus on the more difficult yet challenging regime where the edge probabilities are homogeneous,i.e.,p_m_1/p_m_2 and q_m_1/q_m_2 are bounded for ∀ m_1, m_2∈[2]. Homogeneous probability condition as above is typical in existing literature of studying minimax rates in network analysis. See, e.g., <cit.>. §.§ Oracle property of likelihood-based Lloyd's algorithmLloyd's algorithm <cit.> is a simple yet popular clustering algorithm. The algorithm alternates between re-estimating the cluster centers and re-assigning cluster labels. It has been demonstrated that Lloyd's algorithm can achieve statistically optimal clustering error under various mixture models such as GMM <cit.> and the low-rank mixture model (LrMM, <cit.>).In this section, we explore the limit of Lloyd's algorithm under MMSBM by considering the oracle case. Under Gaussian mixture model, the oracle case means the situation where the population cluster centers are known beforehand and the label of a new observation can be decided by comparing its distances from the known population cluster centers. Similarly, for MMSBM, oracle case refers to the situation that the edge probability matrices _1 and _2 are already given. 
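For reference, the mis-clustering error h in (<ref>), which minimizes over relabelings of the clusters, can be evaluated as in the following small helper (our own illustrative code, shown for a generic number of clusters with labels in {1, ..., n_clusters}).

```python
from itertools import permutations
import numpy as np

def misclustering_error(z_hat, z_true, n_clusters=2):
    """Hamming error h(z_hat, z_true): the smallest fraction of mismatched
    layer labels over all permutations of the cluster labels."""
    z_hat, z_true = np.asarray(z_hat), np.asarray(z_true)
    best = 1.0
    for perm in permutations(range(1, n_clusters + 1)):
        relabeled = np.array([perm[z - 1] for z in z_true])
        best = min(best, float(np.mean(z_hat != relabeled)))
    return best
```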
If an observed network ∈{0,1}^d× d is known sampled from either _1 or _2, the log-likelihood can be written as ℓ(|z):=∑_ω∈_d(ω)log_z(ω)+∑_ω∈_d(1-(ω))log(1-_z(ω)) where _d:={(j_1, j_2): 1≤ j_1≤ j_2≤ d} and z takes values in {1,2}.The likelihood-based Lloyd's algorithm then assigns the label towhich maximizes the log-likelihood ℓ(|z),i.e.,ẑ:=min_z∈{1,2}ℓ (| z).Without loss of generality,assume the true label ofis 1.This network is mis-clustered by Lloyd's algorithm if ∑_ω∈_d(ω)log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))>∑_ω∈_dlog1-_1(ω)/1-_2(ω).By studying the probability of event defined in (<ref>),we get the oracle mis-clustering error rate of Lloyd's algorithm,which is formally stated in the following lemma.Note that the result holds for any given edge probability matrices,which are not necessarily from stochastic block models. Suppose that the edge probability matrices _1 and _2 are known,and a new observed networkis sampled from the probability matrix _1.,i.e.,(ω) ind.∼ Bern(_1(ω)) for ∀ω∈_d. The probability of mis-clusteringby Lloyd's algorithm is ( is mis-clustered)≤exp-I^*/2, where I^*=-2∑_ω∈_dlog(√(_1(ω)_2(ω))+√((1-_1(ω))(1-_2(ω)))).Similarly,the same bound holds ifis sampled from the probability matrix _2.The Rényi divergence of order 1/2 between two distributions Bern(_1(ω)) and Bern(_2(ω)) is known as I_ω:=-2log(√(_1(ω)_2(ω))+√((1-_1(ω))(1-_2(ω)))),implying that the exponent appeared in Lemma <ref> can be written as I^∗=∑_ω∈_d I_ω.Hereafter,I^∗ is regarded as the separation strength under MMSBM.The Rényi divergence has commonly played the role of SNR in literature of network community detection <cit.>.For instance,the optimal mis-clustering rate for community detection in a two-community single SBM whose within-clustering and between-cluster edge probabilities are p and q,respectively,is exp(-(1-o(1))dI_0/2),where I_0:=-2log(√(pq)+√(1-p)√(1-q)).It implies that a consistent clustering requires SNR condition dI_0→∞.Lemma <ref> shows that a necessary condition of consistently clustering multiple networks under MMSBM isI^∗→∞.Comparison with Gaussian mixture model For simplicity,consider the two-component Gaussian mixture model 1/2N( vec(_1),σ)+1/2N( vec(_2),σ) where the population cluster centers are _1 and _2,respectively.The difficulty of clustering is essentially characterized by the separation strength:Δ:=_1-_2.It has been shown that<cit.> the minimax optimal mis-clustering error rate is exp(-(1-o(1))Δ^2/(8σ^2)).In particular,this minimax rate holds for GMM <cit.>,sparse GMM (i.e.,_1 and _2 are sparse,<cit.>),and low-rank GMM (i.e.,_1 and _2 are low-rank,<cit.>).Furthermore,<cit.> showed that a Lloyd's algorithm achieves the same rate even if the noise is sub-Gaussian with a variance proxy parameter σ^2_.Under MMSBM,the noise is Bernoulli and thus sub-Gaussian.If p_m,q_m ≍ 1,then σ_≍ 1 and the exponent Δ^2/σ_^2≍_1-_2_ F^2≍ I^∗.Together with Lemma <ref>,this implies that the low-rank Lloyd's algorithm proposed in <cit.> can achieve minimax optimal clustering error rate under MMSBM if the edge probabilities are bounded away from zero.However,when p_m,q_m=o(1) (the most interesting scenario for network analysis),the variance proxy parameter σ_≍ 1 <cit.> so that Δ^2 /(8σ_^2)≍Δ^2.The exponent in Lemma <ref> becomes I^∗≍∑_ω (_1(ω)-_2(ω))^2(_1(ω)∨_2(ω))^-1≫Δ^2.This suggests that the low-rank Lloyd's algorithm fails to achieve the ideal error rate stated in Lemma <ref> when clustering sparse networks.The reason is that the low-rank Lloyd's algorithm essentially finds the local MLE solution assuming 
the Gaussian noise.Consequently,the Euclidean distance is no longer an optimal criterion under MMSBM because of the difference of likelihood functions between Bernoulli and Gaussian distributions.Such sub-optimality has been demonstrated for spectral clustering on non-Gaussian data in <cit.>. §.§ Minimax lower bound We now characterize the minimax lower bound of clustering networks under MMSBM. Recall the working parameter space (n,d,K, ,α,β,γ) and Hamming distance h(_1, _2) defined in (<ref>) and (<ref>),respectively.It actually suffices to consider a fix (_1,_2,_1,_2) that satisfies the constraint in (<ref>).Denote _1,2:={_1,_2},_1,2:={_1, _2}, and _^(_1,2,_1,2):={: (,_1,2,_1,2)satisfiesconstraints in (<ref>)}The minimax lower bound in the following theorem presents an exponential rate, which is concordantwith the oracle mis-clustering rate in Lemma <ref>.For fixed _1,2 and _1,2 satisfying the constraints in (<ref>), let _1=_1∘_1 and _2=_2∘_2, and defineI^∗:=I^∗(_1,2, _1,2):=-2∑_ω∈_dlog(√(_1(ω)_2(ω))+√((1-_1(ω))(1-_2(ω))))If I^∗→∞, then inf_ sup_^∗∈_^(_1,2,_1,2) h(,^∗)≥exp(-(1+o(1))I^∗/2 ),where the infimum is taken over all possible clustering algorithms working on the multi-layer networks sampled from MMSBM with parameters (^∗, _1,2, _1,2).§.§ Achieving optimal clustering via a two-stage algorithm In <cit.>, a spectral clustering algorithm was proposed for layer clustering under MMSBM based on tensor decomposition. The algorithm only achieves consistency and the attained mis-clustering error rate decays polynomially, which is sub-optimal in view of the established minimax lower bound in Theorem <ref>. This sub-optimality is essentially caused by the fact that tensor decomposition finds MLE of the network model using Gaussian likelihood functions. Lemma <ref> suggests that optimal clustering error rate can be achieved if the edge probability matrices _1 and _2 are known. In practice, these matrices must be estimated from the observed networks and a clustering algorithm tends to achieve smaller mis-clustering error if the edge probability matrices are estimated more accurately. The following lemma characterizes the desired accuracy of estimated edge probability matrices that ensures the likelihood-based method achieves the minimax optimal clustering error rate.Denote the ℓ_1-norm _ℓ_1:=∑_ω∈_d|(ω)| for a symmetric matrixand sup-norm _ℓ_∞:=max_ω∈_d |(ω)|.We emphasize that,as in Lemma <ref>, the established bound in Lemma <ref> holds without assuming block structures on _1 or _2. Basically, these edge probability matrices can be generic as long as the estimated matrices are sufficiently accurate with respect to the ℓ_1-norm.Suppose that a networkis sampled from the edge probability matrix _1, i.e., (ω) ind.∼ Bern(_1(ω)) for ∀ω∈_d.Let _1 and _2 be the edge probability matrices estimated without using . Let ẑ be the MLE of 's label based on _1 and _2: ẑ:=z∈{1,2}max ∑_ω∈_d(ω)log_z(ω)+∑_ω∈_d(1-(ω))log(1-_z(ω))If _1-_1_ℓ_1,_2-_2_ℓ_1=o(I^∗) where I^∗ is as defined in Lemma <ref> and I^∗→∞, then, conditioned on _1 and _2, we have (ẑ=2)≤exp(-(1-o(1))I^∗/2)The same bound still holds ifis sampled from the probability matrix _2. The implications of Lemma <ref> for network clustering under MMSBM are two-fold. First, a network _i can be clustered/classified accurately with a minimal error rate if the unknown edge probability matrices can be estimated accurately.Second, the estimated edge probability matrices should be independent with the network _i. 
The independence can be achieved easily by leaving _i out, i.e., estimating the edge probabilitymatrices using networks _j's, j∈[n]∖ i. However, the challenging task is to obtain estimated edge probability matrices which are sufficiently accurate in sup-norm distance. Suppose that all the networks _j's are correctly clustered so that _1 and _2 are attained by the sample averages of networks within each cluster, respectively, Chernoff bound tells us that _1-_1_ℓ_∞ is,with high probability, approximately in the order O(√(p_1/n)). This is exceedingly larger than than the desired accuracy O(I^∗d^-2) unless the networks are dense, more more precisely, the network sparsity should satisfy np_1=Ω(1). The block structures of _1 and _2 under MMSBM facilitate more accurate estimations. Basically, the entries of _1 can now take the average not only from the multiple networks but also from observed entries within each network. The accuracy of estimated edge probability matrices then rely on both the clustering accuracy of networks and community detection accuracy of nodes,which can be achieved by any initial clustering algorithm as long as it can consistently recover the network clusters and node communities, e.g., the tensor-based initial clustering algorithm in <cit.> or a variant we shall introduce later. Our methods thus consists of two stages. The first step aims to estimate the edge probability matrices. In order to accurately cluster the network _i, we apply an initial clustering and community detection algorithm to the other networks {_j: j∈[n]∖ i}, which outputs an initial estimated layer labels ^(-i)∈ [2]^n-1 and node community memberships _1^(-i), _2^(-i):[d]↦ [K]. Here K stands for the number of communities in each SBM. Based on these initial layer labels and node community memberships,we estimate the edge probability matrices, denoted by _1^(-i) and _2^(-i). They are independent of the network _i.The second stage of our algorithm then assigns alabel to _i using the MLE approach as in (<ref>). Lemma <ref> dictates that the estimated label of _i is correct with a minimax optimal probability guarantee. The second stage thus provides a refined estimate of the label of _i. At a high level, our method follows the popular initialization then refinement scheme in studying minimax optimal community detection in network analysis <cit.>. The detailed implementation of our method can be found in Algorithm <ref>. For ease of exposition, with slight abuse of notation, we now denote ^(-i) as an n-dimensional vector by adding a dummy entry z̃^(-i)_i=0. The last step of Algorithm <ref> serves to re-align the estimated layer labels to eliminate the permutation issue, inspired by <cit.>. By Lemma <ref>, the accuracy of the estimated layer labels ẑ_i^(-i), i∈[n] is essentially determined by the precision of the estimated edge probability matrices, which relies on the effectiveness of the initialization algorithm in finding the layer labels and node community memberships. Suppose that the initialization algorithm ensures ({max_i∈[n] h (^(-i), ^∗)≤η_z} ⋂{max_i∈[n]; i∈ [2] h(_m^(-i), _m)≤η_σ})≥ 1-C_1 n^-2for some absolute constant C_1>0. Here η_z, η_σ→ 0 as n, d→∞ and they reflect the effectiveness of the initialization algorithm. Note that the hamming distance h(_1, _1) between any local community memberships is defined in the same fashion as in (<ref>). The performance of both the initialization algorithm and Algorithm <ref> crucially relies on the network sparsity. 
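The refinement idea just outlined can be condensed into the following sketch. For readability it estimates the block probabilities once from all layers carrying the same initial label rather than with the leave-one-out scheme of Algorithm <ref>, and the helper names and zero-indexed memberships are our own conventions.

```python
import numpy as np

def estimate_block_matrix(layers, sigma, K, eps=1e-6):
    """Average edge frequency within each block pair, given node memberships sigma
    (values in {0, ..., K-1}); returns the estimated d x d edge-probability matrix."""
    A_bar = layers.mean(axis=0)                   # average adjacency within the cluster
    B_hat = np.zeros((K, K))
    for k in range(K):
        for l in range(K):
            block = A_bar[np.ix_(sigma == k, sigma == l)]
            B_hat[k, l] = block.mean() if block.size else 0.0
    P_hat = B_hat[np.ix_(sigma, sigma)]
    return np.clip(P_hat, eps, 1 - eps)           # keep the log-likelihood finite

def bernoulli_loglik(A, P):
    iu = np.triu_indices(A.shape[0])              # unordered pairs, self-loops kept
    a, p = A[iu], P[iu]
    return float(np.sum(a * np.log(p) + (1 - a) * np.log(1 - p)))

def refine_layer_labels(layers, z_init, sigmas, K):
    """One refinement sweep: estimate P_hat for each layer cluster from the initial
    labels, then re-assign every layer to the cluster maximizing its log-likelihood."""
    z_init = np.asarray(z_init)
    P_hats = [estimate_block_matrix(layers[z_init == m], sigmas[m - 1], K)
              for m in (1, 2)]
    scores = np.array([[bernoulli_loglik(A, P) for P in P_hats] for A in layers])
    return scores.argmax(axis=1) + 1              # refined labels in {1, 2}
```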
We primarily focus on the extremely sparse case and set p_1:=a_1/(n∧ d)d, p_2:=a_2/(n∧ d)d, q_1:=b_1/(n∧ d)d, and q_2:=b_2/(n∧ d)d. Denote a̅:=(a_1∨ a_2) and a:=(a_1∧ a_2). The following theorem affirms that Algorithm <ref> achieves the minimax optimal clustering error rate. Recall that K denotes the number of communities in each SBM. Suppose that the initialization algorithm satisfies (<ref>) with η_σ=O(min_m∈[2](a_m-b_m)/(a_mK)), η_σlog1/η_σ=o( I^*2/a̅ K^2(1+dn^-1)^2) and η_σ∨η_z=o(I^∗/a̅K^2(1+dn^-1)). If a≥ C for some absolute constant C>0, K^2log n=o(n), and I^∗→∞, then, conditioned on the event (<ref>), with probability at least 1-exp(-(I^*)^1-ϵ) for any ϵ∈(0,1), the output of Algorithm <ref> satisfies h(,^∗)≤exp(-(1-o(1))I^∗/2). Moreover, h(,^∗)≤exp(-(1-o(1))I^∗/2)+O(n^-C^' ) for some large constant C^'>0. Let us illustrate the implications of Theorem <ref> for special cases. The case of particular interest is p_m, q_m=o(1) so that I^∗≍∑_ω∈_d(_1(ω)-_2(ω))^2/_1(ω)∨_2(ω)≍ (p_1-p_2)^2/p_1∨ p_2·d^2/K + (q_1-q_2)^2/q_1∨ q_2· d^2 ≍ (a_1-a_2)^2/K(a_1∨ a_2)(1+d/n)+(b_1-b_2)^2/b_1∨ b_2(1+d/n), if we further assume that the local community memberships are the same, i.e., _1=_2. Assuming K=O(1) for simplicity, the condition I^∗→∞ holds as long as ((a_1-a_2)^2/a_1∨ a_2+(b_1-b_2)^2/b_1∨ b_2)(1+d/n)→∞ as n, d→∞, and the mis-clustering error rate becomes h(, ^∗)≤exp{-Ω((a_1-a_2)^2/a_1∨ a_2+(b_1-b_2)^2/b_1∨ b_2)·(1+d/n) }. The sparsity conditions (<ref>) and a=Ω(1) are much weaker than that required in <cit.>, which reads a=Ω(log^4(n∨ d)). The bound (<ref>) implies that networks can be clustered more accurately as the network sizes grow, which is reasonable since larger networks facilitate more precise estimation of the edge probability matrices. Moreover, exact network clustering is possible if ((a_1-a_2)^2/a_1∨ a_2+(b_1-b_2)^2/b_1∨ b_2)·(1+d/n)=Ω(log n), which is also much weaker than the conditions required by <cit.>. The initialization requirement in the aforementioned case is also instructive. Note that -xlog x= o(x^1/(1+δ)) as x→0 for any constant δ>0. For the sake of clarity, assume that (b_1-b_2)^2/(b_1∨ b_2)≍ (a_1-a_2)^2/(a_1∨ a_2). Then, the initialization condition (<ref>) is equivalent to η_σ=o(1/a̅^1+δ·((a_1-a_2)^2/a̅)^2(1+δ)) for any δ>0 and η_σ,η_z=o(1/a̅·((a_1-a_2)^2/a̅)). These conditions suggest that a stronger initialization is necessary if a_1-a_2 is smaller, i.e., if the two edge probability matrices are closer. Furthermore, if a_1-a_2≍a̅, the network sparsity condition (<ref>) becomes a̅→∞ and the initialization requirements reduce to η_σ,η_z=o(1); namely, a consistent initial clustering suffices to guarantee the minimax optimal layer clustering. §.§ Provable tensor-based initialization The success of Algorithm <ref> in achieving the minimax optimal clustering error relies on a warm initialization for both layer clustering and local community detection. We now consider a tensor-based spectral method, adapted from <cit.>, for the initial clustering. To this end, some extra notations are necessary. Denote ∈{0,1}^d× d× n the adjacency tensor such that its i-th slice (:,:,i):=_i, and let _1, _2∈{0,1}^d× K be the local membership matrices of the two SBMs, respectively. Basically, _m(:,j)=^⊤__m(j) for m∈ [2], where _j represents the j-th canonical basis vector whose dimension varies at different appearances.
Under MMSBM, it is readily seen that (_i|z_i^∗)=_z_i^∗_z_i^∗^⊤_z_i^∗ and the expected adjacency tensor is decomposable in the following format(|^∗)=×_1 ×_2×_3 ,where :=(_1,_2)∈{0,1}^d× 2K is called the global membership matrix, :=(_z_1^∗,_z_2^∗⋯,_z_n^∗)^⊤∈{0,1}^n× 2 is the layer label matrix, and ∈[0,1]^2K× 2K× 2 is a probability tensor with its 1st and 2nd slices are defined as (:,:,1):= [ _10 00 ]and(:,:,2):= [00 0 _2 ].The multilinear product in (<ref>) is defined in the way such that for any i_1,i_2∈[d] and i_3∈[n], ((i_1,i_2,i_3)|^∗)=∑_j_1=1^2K∑_j_2=1^2K∑_m∈{1,2}(j_1,j_2,m) (i_1,j_1)(i_2,j_2) (i_3,m). For notational clarity, we use SVD_r() to denote the top r left singular vectors of any matrixand _k() to denote the mode-k matricization of any tensor . See <cit.> for a more comprehensive introduction on tensor algebra. Due to technical reasons, we focus on the regime n,d→∞ and n=O(d).Denote =:p̅_0 with p̅:=a̅(nd)^-1. Without loss of generality, we focus on the most interesting yet challenging sparsity regime a̅≪ d.Let :=^⊤ be the compact SVD with , being left/right singular vectors ofand ∈^r× r containing all non-zero singular values of , then the population adjacency tensor admits the Tucker decomposition as (|^∗)=×_1 ×_2×_3 ,where =×_1 ^⊤×_2 ^⊤×_3_n^1/2∈^r× r× 2 is a core tensor, _n=diag(n_1,n_2)∈^2× 2, n_m=|_m(^∗)| is the number of layers in m-th clusters and :=_n^-1/2∈^n× 2. The matricesandare often referred to as the singular vectors of (|^∗),which provide the essential information of global community memberships and layer clusters.The tensor-based spectral initialization algorithm proposed in <cit.> works as follows. The mode-1 singular vectorsare estimated using the SVD of the sum of adjacency matrices ∑_i _i. This simple yet useful method is blessed by the non-negativity of the adjacency matrices. The empirical mode-1 singular vectors are then truncated by the regularization operator defined as _δ():=SVD_r(_*) with _*(i,:):=(i,:)·min{δ,(i,:)}/(i,:) for any . The regularization guarantees the incoherence property in that _δ()_2,∞=O(δ).We assume that _2,∞=O((r/n)^1/2),i.e.,the majority rows ofhave comparable norms.Our theoretical results can be re-written to underscore the explicit dependence on _2,∞(n/r)^1/2 allowing unbounded incoherence parameter.For ease of exposition,we simply assume _2,∞(n/r)^1/2=O(1).The adjacency tensor is then multiplied by the regularized estimate of mode-1 singular vectors, whose mode-3 singular vectors, denoted by , are used as the estimate of .The layer clusters can be found by screening the rows ofusing, say, K-means clustering with K=2.Finally, the local community memberships are estimated by spectral clustering on aggregating networks with the same estimated layer labels.The detailed implementation can be found in Algorithm <ref>.It has been shown by <cit.> that Algorithm <ref> together with regularized tensor power iterations, referred to as TWIST, can provide a consistent layer clustering. Unfortunately, the mis-clustering error rate established in <cit.> is rather weak such that an immediate application of their result will trivialize our conclusions in Theorem <ref>.In particular, <cit.> showed that TWIST achieves the following layer mis-clustering error rate:h(^,^∗)=O(r^2log(n∨ d)/a̅),implying that a consistent clustering requires a̅≫log(n∨ d). 
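For concreteness, the pipeline of the initialization described above (summing the adjacency matrices, regularizing the mode-1 singular vectors, projecting each layer, and clustering the rows of the mode-3 singular vectors) can be sketched as follows. The snippet is purely illustrative: the function name spectral_init and the use of scikit-learn's KMeans are our choices, the regularized power iterations of TWIST are omitted, and so is the subsequent local community detection step.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_init(A, r, delta):
    """Rough sketch of the spectral initialization. A is the adjacency tensor of
    shape (d, d, n); returns initial layer labels in {1, 2}."""
    d, _, n = A.shape
    U = np.linalg.svd(A.sum(axis=2))[0][:, :r]           # mode-1 singular vectors
    # regularization: shrink long rows to norm delta, then re-extract r directions
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    U_star = U * np.minimum(1.0, delta / np.maximum(norms, 1e-12))
    U_reg = np.linalg.svd(U_star, full_matrices=False)[0][:, :r]
    # project every layer onto the regularized column space and flatten (mode-3)
    M = np.stack([(U_reg.T @ A[:, :, i] @ U_reg).ravel() for i in range(n)])
    W = np.linalg.svd(M, full_matrices=False)[0][:, :2]   # mode-3 singular vectors
    return KMeans(n_clusters=2, n_init=10).fit_predict(W) + 1
```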
The initialization condition by Theorem <ref> requires η_z·a̅ K^2(1+dn^-1)≪ I^* means thatI^*≫ r^2K^2(1+d/n )log(n∨ d)However, in a regime of strong separation strength as in (<ref>), Theorem <ref> already implies that the two-stage clustering algorithm can achieve exact clustering since h(^,^∗)< 1/n is equivalent to h(^,^∗)=0. We now present a simple variant of Algorithm <ref> that enables us to eliminate the logarithmic term in (<ref>).The basic idea is to introduce independence betweenandin Algorithm <ref> so that a sharper perturbation bound ofcan be derived.In particular, we randomly split the network samples [n] and vertices [d] into two disjoint subsets of approximately equal size, denoted by ^[0], ^[1] and ^[0], ^[1], respectively. Here ^[0]∪^[1]=[n], ^[0]∪^[1]=[d], and assume n_0:=|^[0]|,n_1=|^[1]|, d_0=|^[0]|, d_1=|^[1]|. We focus on the sub-networks restricted to each subset of vertices. As a result, we end up with four multi-layer sub-networks, whose adjacency tensors are denoted as _+^[0]∈_+^d_0× d_0× n_0, _+^[1]∈_+^d_1× d_1× n_1, _-^[0]∈_+^d_1× d_1× n_0, and _-^[1]∈_+^d_0× d_0× n_1,respectively.The node and layer splitting is displayed as in Figure <ref>.Without loss of generality, we assume that the node and layer splitting will not change the tensor ranks of [_+/-^[0/1]|^∗].First,an estimate of mode-1 singular vectors, denoted by ^[0],is obtained utilizing ^[0]_+,similar as in Algorithm <ref>.We then multiply ^[1]_+ by the regularized version of ^[0],from which we estimate the mode-3 singular vectors and the layer labels of networks in ^[1]_+,denoted as ^[1].Second,we switch the roles of ^[0]_+ and ^[1]_+ and estimate the layer labels of networks in ^[0],denoted as ^[0], in the same fashion.The labels ^[0] and ^[1] are both accurate estimates up to permutations in 𝔖_2. Wealign the labels of ^[0] and ^[1] using reference layer labels estimated from the procedure without sample splitting.Finally,local community memberships can be estimated by spectral clustering applied onto the average adjacency matrices aggregated according to the estimated layer labels (also based on sample switching).The detailed implementations can be found in Algorithm <ref>, where we choose δ_1= C_0(r/n)^1/2 with a tuning parameter C_0>0. Suppose the followingconditions (A1)-(A3) hold before and after the node and layer splitting: (A1) _0 has Tucker rank (r,r,2) and σ_min(_0)≥ c for some absolute constant c>0. (A2)is well-conditioned such that σ_1()/σ_r()≤κ_0 for some κ_0≥ 1. (A3) σ_r(n_1_1+n_2_2)=Ω(√(ndp̅)). Furthermore, if d≥ C_0 log^3(n∨ d) and n/log n≥ C_0 κ_0^4r^6 for some absolute constant C_0>0, then there exist some absolute constant C_1,C_2,C_3,C_4,C_5>0 such that, with probability at least 1-C_1(n∨ d)^-3, the output of Algorithm <ref> satisfiesh(,^∗)≤C_2κ_0^4r^2/a̅ andmax_m∈{1,2}h(_m,_m)≤C_3/a̅,provided that C_4κ_0^8r^4log^2(κ_0r)log^7(n∨ d )≤a̅≤C_5n∧ d/log^2n.The conditions (A1)-(A3) were originallyproposed by <cit.> in the context of MMSBM. Due to the rank-deficiencyof , conditions (A1)-(A2) emerge as the prices of exploitingtensor structures, as discussed in details in <cit.>. Moreover, condition (A3) is a standard condition,probably unavoidable for spectral method by aggregating multiple adjacency matrices. We note that our condition (A3) is weaker compared with the counterparts required in Lemma 5.6in <cit.> and Theorem 2 in <cit.>, in terms of logarithmic factor. 
In particular, our condition (A3) allows a diverging condition number of n_1^∗_1+n_2^∗_2.Consider the extreme sparse case (<ref>) in that p_mdn=a_m and q_m dn=b_m.Let κ_0, K ,r=O(1) for simplicity.Theorem <ref> and Theorem <ref> together require(i)min_m∈{1,2}(a_m-b_m)=Ω(1)and(ii) I^*≫K^2(d/n+1 ) √(log(a̅∧ d ))Condition (i) is mild compared to the condition for achieving exponential rate in the community detection in a single network <cit.>, which requires (a_m-b_m)^2/a_m→∞. Ours is weaker because we do not seek an optimal clustering of vertices.For dense networks where a̅ > C_3d, local memberships can be exactly recovered by Algorithm <ref> and hence η_σlog1/η_σ=0. Thus condition (ii) is satisfied if I^*≫K^2(d/n+1 )log^1/2 d, which is much weaker than (<ref>) when d=o(n ). Even when n≲ d≤n^m for any constant m>1, condition (ii) gains a non-trivial improvement from log(n) to log^1/2 n comparing to (<ref>). The most interesting case is when a̅≍log^5(n∨ d ) (the most difficult regime in that the networks are extremely sparse), condition (ii) is fulfilled byI^*≫K^2(d/n+1 )√(loglog(n∨ d)),which is slightly stronger thancondition (<ref>) in Theorem <ref> by a factor of loglog(n∨ d). The following corollary is a formal statement of aforementioned argument. Under the conditions of Theorem <ref>, assume that there exists absolute constant c_0,C_0>0 such that log n≤ C_0 d and min_m∈{1,2}(a_m-b_m)≥ c_0K.There exist absolute constants C, C'>0 such that if a≥ C I^*/K^2(d/n+1)√(log(a̅∧ d ))→∞,then with probability 1-exp(-I^∗(1-ϵ)) for any ϵ∈(0,1) , h(,^∗)≤exp(-(1-o(1))I^∗/2),andh(,^∗)≤exp(-(1-o(1))I^∗/2)+O(n^-C^' ).§ OPTIMAL CLUSTERING OF MIXTURE MULTI-LAYER POISSON NETWORKS §.§ Mixture of Poisson Block Models Count-type data arise in many scientific and econometric applications such as,4D-STEM imaging <cit.>,cell counts in spatial-transcriptome study <cit.>,and trade flow networks <cit.>.A Poisson block model (PBM) can be viewed as a generalization of SBM to count-type data.Without loss of generality,we view PBM as a model for weighted networks.A weighted networkis observed with d vertices,each of which belongs to one of K communities.Let :[d]↦ [K] denote the community assignment defined similarly as in SBM. There is a symmetric matrix ∈_+^K× K carrying the Poisson weight.The networkcarries random weight,or equivalently,the entries of its adjacency matrix , generated from(ω) ind.∼Poisson(((ω))),∀ ω∈_d:={(i,j): 1≤ i≤ j≤ d}which is referred to as the PBM(, ). Suppose we observe a collection of weighted networks {_i}_i=1^n of common d nodes, i.e., a multi-layer weighted network. Each network is either sampled from PBM(_1, _1) or PBM(_2, _2). Denote ^∗∈[2]^n the layer label vector meaning that _i∼ PBM(_z_i^∗, _z_i^∗). Let _i denote the adjacency matrix of _i whose entries follow Poisson distributions as _i(ω) ind.∼Poisson(_z^∗_i(_z_i^∗(ω))),∀ i∈[n].We call it the mixture multi-layer Poisson block model (MMPBM), which models both layer clusters and nodal communities in multi-layer weighted networks.After observing the networks _i, i∈[n], our goal is to find their latent labels. 
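As a small illustration of this generative model, one can sample a mixture of multi-layer Poisson block models as follows; the helper sample_pbm and all parameter values below are ours and serve only as a toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pbm(sigma, Theta, rng):
    """Sample one symmetric count-valued adjacency matrix from PBM(Theta, sigma):
    A(i, j) ~ Poisson(Theta[sigma[i], sigma[j]]) independently for i <= j."""
    lam = Theta[np.ix_(sigma, sigma)]
    A = np.triu(rng.poisson(lam))
    return A + np.triu(A, 1).T  # symmetrize without doubling the diagonal

# toy MMPBM: two layer clusters, K = 2 communities, d = 60 nodes, n = 40 layers
d, n = 60, 40
sigma1, sigma2 = rng.integers(0, 2, size=d), rng.integers(0, 2, size=d)
Theta1 = np.array([[2.0, 0.5], [0.5, 2.0]])   # assortative intensities
Theta2 = np.array([[1.0, 1.5], [1.5, 1.0]])   # disassortative intensities
z_star = rng.integers(1, 3, size=n)           # latent layer labels in {1, 2}
layers = [sample_pbm(sigma1 if z == 1 else sigma2,
                     Theta1 if z == 1 else Theta2, rng) for z in z_star]
```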
The notations _m(^∗), S_k(_m), and n_1^∗, n_2^∗ are defined similarly as in Section <ref>.Throughout this section, we focus on thefollowing parameter space of MMPBM::=(n,d,K, θ_0,α,β,γ)={(,{_1,_2},{_1,_2}):∈[2]^n,|_m()|∈[n/2α,α n/2],_m:[d]→[K],|_k(_m)|∈[d/β K,β d/K], _m=_m^⊤∈_+^K× K, min_m∈[1,2]min_i,j_m(i,j)≥θ_0andmax_m∈[1,2]max_i,j_m(i,j)≤γθ_0 }, where α,β,γ≥ 1 are assumed to be absolute constants and θ_0>0 is a constant characterizing the overall edge intensity. The condition that θ_0 is bounded away from zero is mild. See, <cit.> and <cit.>.Denote _m(j_1,j_2):=_m(_m(j_1),_m(j_2) ) the edge intensity matrix, i.e., the expectation of adjacency matrix under PBM(_m,_m).§.§ Oracle property of likelihood-based Lloyd's algorithm Suppose that the edge intensity matrices _1 and _2 are known in advance,if an observed network ∈_+^d× d is known sampled from either _1 or _2, the log-likelihood can be written as ℓ(|z):=∑_ω∈_d(-_z(ω)+(ω)log_z(ω)-log((ω)!)),where z takes values in [2].The likelihood-based Lloyd's algorithm assigns the label towhich maximizes the log-likelihood ℓ(|z).Without loss of generality,assume the true label ofis 1.This network is mis-clustered by Lloyd's algorithm if∑_ω∈_d(_1(ω)-_2(ω))>∑_ω∈_d(ω)log_1(ω)/_2(ω)The following lemma characterizes the oracle mis-clustering error rate of Lloyd's algorithm. Suppose that the edge probability matrices _1 and _2 are known,and a new observed networkis sampled from the probability matrix _1.,i.e.,(ω) ind.∼ Poisson(_1(ω)) for ∀ω∈_d. The probability of mis-clusteringby Lloyd's algorithm is ( is mis-clustered)≤exp-I^*/2, where I^*=∑_ω∈_d(√(_1(ω))-√(_2(ω)))^2.Similarly,the same bound holds ifis sampled from the probability matrix _2. Note that the Rényi-divergence of order 1/2 of Poisson(_1(ω)) from Poisson(_2(ω)) is defined by (√(_1(ω))-√(_2(ω)))^2. The quantity I^∗ is thus viewed as the Rényi-divergence between the two joint Poisson distributions. Hereafter, I^∗ is regarded as the SNR under MMPBM. It shows that a necessary condition of consistently clustering multiple Poisson networks is I^∗→∞. We remark that Poisson is sub-exponential, while the distributions under MMSBM (Section <ref>) and LrMM <cit.> are sub-Gaussian.§.§ Minimax lower boundWe now characterize the minimax lower bound of clustering networks under MMPBM. Similarly, we fix the node community memberships _1, _2 and the edge intensity matrices _1, _2.Denote _1,2:={_1,_2},_1,2:={_1, _2}, and _^(_1,2,_1,2):={: (,_1,2,_1,2)satisfiesconstraints in (<ref>)}The minimax lower bound in the following theorem presents an exponential rate, matching the oracle rate in Lemma <ref>. For fixed _1,2 and _1,2 satisfying the constraints in (<ref>), let _1=_1∘_1 and _2=_2∘_2, and define I^∗:=I^∗(_1,2, _1,2)=∑_ω∈_d(√(_1(ω))-√(_2(ω)))^2If I^∗→∞, theninf_sup_^∗∈_^(_1,2, _1,2)h(, ^∗)≥exp(-I^∗/2·(1+o(1))),where the infimum is taken over all possible clustering algorithms working on MMPBM with parameters (^∗,_1,2, _1,2).§.§ Two-stage clusteringBy Lemma <ref> and Theorem <ref>, the likelihood-based Lloyd's algorithm achieves optimal clustering error if the edge intensity matrix _1 and _2 are known. Since they must be estimated, we first study the necessary accuracy of estimated edge intensity matrices that ensures the optimality of likelihood-based Lloyd's algorithm. It turns out that if the estimated edge intensity matrices are accurate with the ℓ_1-norm error bounded by o(I^∗), the likelihood-based Lloyd's algorithm still achieves optimal clustering error. 
Suppose that a networkis sampled from the edge intensity matrix _1, i.e., (ω) ind.∼ Poisson(_1(ω)) for ∀ω∈_d.Let _1 and _2 be the edge probability matrices estimated without using . Let ẑ be the MLE of 's label based on _1 and _2: ẑ:=z∈{1,2}max ∑_ω∈_d(-_z(ω)+(ω)log_z(ω)))If _1-_1_ℓ_1,_2-_2_ℓ_1=o(I^∗) where I^∗ is as defined in Lemma <ref> and I^∗→∞, then, conditioned on _1 and _2, we have (ẑ=2)≤exp(-(1-o(1))I^∗/2)The same bound still holds ifis sampled from the probability matrix _2. Based on Lemma <ref>, we develop a two-stage algorithm for clustering networks under MMPBM. The algorithm starts with an initial estimate of the layer labels and node community memberships. Following the same ideas as Algorithm <ref>,we then estimate the edge intensity matrices and use them for estimating layer labels by the likelihood-based Lloyd's algorithm. For self-completeness, the detailed implementations are provided in Algorithm <ref>.Suppose that the initialization algorithm ensures ({max_i∈[n] h (^(-i), ^∗)≤η_z} ⋂{max_i∈[n]; i∈ [2] h(_m^(-i), _m)≤η_σ})≥ 1-C_1(n∨ d)^-2for some absolute constant C_1>0. Here η_z, η_σ→ 0 as n, d→∞ and they reflect the effectiveness of the initialization algorithm. Suppose that the initialization algorithm satisfies (<ref>) with, η_σlog1/η_σ=o( I^*2/K^2d^2θ_0(d/n)) andη_σ∨η_z=o(I^∗/K^2d^2θ_0)If a≥ C_0 for some absolute constants C_0,K^2log n=o(n), and I^∗→∞,then, conditioned on the event (<ref>), with probability at least 1-exp-(I^*)^1-ϵ for any ϵ∈(0,1),the output of Algorithm <ref> satisfies h(,^∗)≤exp(-(1-o(1))I^∗/2).Moreover, h(,^∗)≤exp(-(1-o(1))I^∗/2)+O(n^-C^' ),and some large constant C^'>0.§ MIXTURE OF DISCRETE DISTRIBUTIONS§.§ Mixture of BinomialLet X_1,⋯, X_n be sampled i.i.d. from a mixture of Binomial distributions: 1/2 Bin(d,p_1)+1/2 Bin(d,p_2).Without loss of generality, suppose that n_1 of them are sampled from Bin(d, p_1) and n_2 sampled from Bin(d,p_2), where n=n_1+n_2 and n_1≍ n_2.Denote ^∗∈[2]^n encoding the latent labels, i.e., X_i∼ Bin(d, p_z^∗_i), which is assumed to be fixed.We focus on the difficult regime where p_1≍ p_2, d^-1≪p=o(1), and d is known. Let=(X_1,⋯,X_n)^⊤ and^∗:=(dp_z^∗_1,⋯, dp_z^∗_n)^⊤=Note that if p_1 and p_2 are known, the maximum likelihood estimator of X's label is defined by ℓ_(X):=(Xlogp_2(1-p_1)/p_1(1-p_2)≥ dlog1-p_1/1-p_2)+1It suffices to have an accurate estimate of the probabilities p_1 and p_2. Here we apply a similar leave-one-out trick as in Algorithm <ref>to decouple the dependence between estimated probabilities and the samples to be clustered. Denote ^(-i) the (n-1)-dimensional sub-vector ofby deleting its i-th entry and ^(-i)∈[2]^n an outcome of clustering the entries of ^(-i), where we set z_i^(-i)=0 for ease of exposition. Similarly, define p_1^(-i) and p_2^(-i) the estimated Binomial probabilities based on ^(-i), i.e.,p_k^(-i):=∑_j∈[n]X_j ( z_j^(-i)=k)/d|{j∈[n]:z_j^(-i)=k}|,∀ k∈[2].The detailed implementations can be found in Algorithm <ref>. Here we consider two initialization methods: K-means and method of moments (MoM). The MoM was first proposed by <cit.> for estimating the probability parameters in the mixture of Binomial distributions. Let M̂_1:=1/nd∑_i=1^nX_i andM̂_2:=1/nd(d-1)∑_i=1^n (X_i^2-X_i).It is easy to check that 2M̂_1=p_1+p_2 and 2M̂_2=p_1^2+p_2^2.Without loss of generality, assume p_1≥ p_2. The MoM estimates are p̂_1:=M̂_1+(M̂_2 -M̂_1^2)^1/2 and p̂_2:=M̂_1-(M̂_2-M̂_1^2)^1/2. The performance of Algorithm <ref> hinges upon the accuracy of initializations. 
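For illustration, the method-of-moments step just described (with the convention p_1 ≥ p_2) might be implemented as in the short sketch below; the function name mom_binomial is ours, and the max(..., 0) guard against a slightly negative variance estimate is an implementation detail rather than part of the formal analysis.

```python
import numpy as np

def mom_binomial(X, d):
    """Method-of-moments estimates (p1_hat, p2_hat) for a balanced mixture
    1/2 Bin(d, p1) + 1/2 Bin(d, p2), using M1_hat and M2_hat defined above."""
    X = np.asarray(X, dtype=float)
    n = X.size
    M1 = X.sum() / (n * d)
    M2 = np.sum(X**2 - X) / (n * d * (d - 1))
    gap = np.sqrt(max(M2 - M1**2, 0.0))
    return M1 + gap, M1 - gap
```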
Denote the event _0n:={max_i∈[n]| p_1^(-i)-p_1|+| p_2^(-i)-p_2|≤c_nI^∗/d}, where c_n→ 0 is an arbitrary positive sequence. Here I^∗:=-2dlog((1-p_1)^1/2(1-p_2)^1/2+(p_1p_2)^1/2) is the Rényi-1/2 divergence between Bin(d, p_1) and Bin(d, p_2). If I^∗→∞, then under _0n, the output of Algorithm <ref> achieves the error rate h(, ^∗)=exp{-(1-o(1))I^∗/2}, and, for any ε∈(0,1), with probability at least 1-exp(-(I^∗/2)^1-ε), h(,^∗)=exp{-(1-o(1))I^∗/2}. We remark that if p_1,p_2=o(1), then I^∗→∞ implies that d(√(p_1)-√(p_2))^2≫ 1. A minimax lower bound of the form exp(-I^∗/2) can be established similarly to Theorem <ref>. Theorem <ref> shows that Algorithm <ref> can achieve the minimax optimal clustering error rate if it is well initialized. It now suffices to investigate the performance of the K-means initialization and the method of moments. There exists an absolute constant C>0 such that for any γ>2, if d(p_1-p_2)^2≥ Cγ p_1, nd(p_1-p_2)^2≥ Cγ, and I^∗≫ Cγ(p_1/p_1-p_2)^2·(1+1/np_1), then, with probability at least 1-10^-γ, the K-means initial clustering outputs max_i∈[n]| p_1^(-i)-p_1 |+| p_2^(-i)-p_2 |=o(I^∗/d). On the other hand, the method of moments guarantees (<ref>) with probability at least 1-14n^-2 if I^∗≫ 1+p_1/|p_1-p_2|√(dp_1(dp_1+log n)log n/n). §.§ Mixture of Poisson We now consider the clustering of count-type data. More specifically, let X_1,⋯, X_n be sampled i.i.d. from a mixture of Poisson distributions: 1/2 Poisson(θ_1)+1/2 Poisson(θ_2). Without loss of generality, suppose that n_1 of them are sampled from Poisson(θ_1) and n_2 from Poisson(θ_2), where n=n_1+n_2 and n_1≍ n_2. Let ^∗∈[2]^n encode the latent labels, i.e., X_i∼ Poisson(θ_z^∗_i), which is assumed to be fixed. Similarly, we focus on the difficult regime θ_1≍θ_2 and |θ_1-θ_2|^2≫θ_1≫ 1. Let =(X_1,⋯,X_n)^⊤ and ^∗:=(θ_z^∗_1,⋯, θ_z^∗_n)^⊤. The maximum likelihood estimator of X's label given θ_1 and θ_2 is defined by ℓ_(X):=(Xlogθ_2/θ_1≥θ_2-θ_1 )+1. It suffices to have accurate estimates of the intensities θ_1 and θ_2. Define ^(-i) and ^(-i) as in Section <ref>. Then the estimated Poisson intensities θ_1^(-i) and θ_2^(-i) based on ^(-i) are defined by θ_k^(-i):=∑_j∈[n]X_j ( z_j^(-i)=k)/|{j∈[n]:z_j^(-i)=k}|, ∀ k∈[2]. The detailed implementations can be found in Algorithm <ref>. Here the MoM works as follows. Let M̂_1:=1/n∑_i=1^n X_i and :=1/n∑_i=1^n√(X_i). Clearly, their expectations satisfy 𝔼M̂_1=(θ_1+θ_2)/2 and 𝔼[1/n∑_i=1^n√(X_i)]=(θ_1^1/2+θ_2^1/2)/2+O(θ_1^-1/2). Assuming that θ_1>θ_2, the MoM estimators are θ̂_1:=M̂_1+2 √(M̂_1-^2) and θ̂_2:=M̂_1-2 √(M̂_1-^2). The proof of the following theorem is almost identical to that of Theorem <ref> and is thus skipped. Denote the event _0n:={max_i∈[n]|θ_1^(-i)-θ_1|+|θ_2^(-i)-θ_2|≤ c_nI^∗}, where c_n→ 0 is an arbitrary positive sequence. Here I^∗:=(√(θ_1)-√(θ_2))^2 is the Rényi-1/2 divergence between Poisson(θ_1) and Poisson(θ_2). If I^∗→∞, then under _0n, the output of Algorithm <ref> achieves the error rate h(, ^∗)=exp{-(1-o(1))I^∗/2}, and, for any ε∈(0,1), with probability at least 1-exp(-(I^∗/2)^1-ε), h(,^∗)=exp{-(1-o(1))I^∗/2}. Note that I^∗→∞ implies that (θ_1-θ_2)^2≫θ_1, which in turn implies that |θ_1-θ_2|≫ 1. The theoretical guarantees of the K-means initialization or the MoM initialization can be established similarly to the case of Binomial mixtures.
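Analogously to the Binomial case, the Poisson MoM step and the likelihood rule ℓ above admit a short illustrative implementation; the function names mom_poisson and classify_poisson are ours, and the sketch assumes θ_1 > θ_2.

```python
import numpy as np

def mom_poisson(X):
    """MoM estimates (theta1_hat, theta2_hat) for a balanced mixture
    1/2 Poisson(theta1) + 1/2 Poisson(theta2), using mean(X) and mean(sqrt(X))."""
    X = np.asarray(X, dtype=float)
    M1, Ms = X.mean(), np.sqrt(X).mean()
    gap = 2.0 * np.sqrt(max(M1 - Ms**2, 0.0))
    return M1 + gap, M1 - gap

def classify_poisson(X, th1, th2):
    """Likelihood rule: label 2 iff X * log(th2/th1) >= th2 - th1, else label 1."""
    X = np.asarray(X, dtype=float)
    return np.where(X * np.log(th2 / th1) >= th2 - th1, 2, 1)
```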
There exists an absolute constant C>0 such that for any γ>2, if (θ_1-θ_2)^2≥ Cγθ_1, n(θ_1-θ_2)^2≥ Cγ, and I^∗≫ Cγθ_1/(θ_1-θ_2)^2·(1+1/nθ_1), then, with probability at least 1-10^-γ, the K-means initial clustering outputs max_i∈[n]|θ_1^(-i)-θ_1 |+|θ_2^(-i)-θ_2 |=o(I^∗). On the other hand, the method of moments guarantees (<ref>) with probability at least 1-2n^-2 if I^∗≫θ_1/θ_1-θ_2+(θ_1^2/(θ_1-θ_2)^2+θ_1-θ_2/√(θ_1))·√(θ_1log n+log^2n/n) and n≥ Clog^2n. § NUMERICAL EXPERIMENTS We will use Algorithm <ref>, which can be regarded as a practical version of Algorithm <ref>, in our numerical experiments. §.§ Simulation studies We conduct several simulations to test the performance of the refinement algorithm on the MMSBM with different choices of network sparsity, “out-in" ratio, number of layers, and size of each layer. We use K-means as the clustering algorithm. We compare the mis-clustering rate across 150 replications of each experiment to assess the average performance. We generate the data according to the MMSBM in the following fashion. The underlying class s_l of the l-th layer is generated from the multinomial distribution with (s_l=j)=1/2, j=1,2. The membership z_i^j of node i in layer type j is generated from the multinomial distribution with (z_i^j=s)=1/K, s=1,⋯,K. We choose the probability matrix as B=pI_K+q(1_K1_K^⊤-I_K), where 1_K is a K-dimensional all-one vector and I_K is the K× K identity matrix. Let α=q/p be the out-in ratio. We compare the performance of the refinement algorithm with TWIST <cit.>, Tucker decomposition initialized by HOSVD (HOSVD-Tucker), and spectral clustering applied to the mode-3 flattening of (M3-SC). The function “tucker" from the R package "rTensor" <cit.> is used to apply the Tucker decomposition for HOSVD-Tucker. In Simulation 1, the networks are generated with the number of nodes n=40, the number of layers L=40, the number of network types m=2, the number of communities of each network K=2, and the out-in ratio of each layer α=0.75. The value of p for each layer varies from 0.1 to 0.8, which controls the average degree for a fixed out-in ratio. In Simulation 2, the networks are generated as in Simulation 1, except that the average degree of each layer is determined by p=0.4, the number of layers is L=40, and the out-in ratio α of each layer varies from 0.1 to 0.9. In Simulation 3, the networks are the same as in Simulation 2, except that the out-in ratio α=0.75 and the number of layers L varies from 20 to 80. In Simulation 4, the networks are the same as in Simulation 3, except that the number of layers is L=40 and the size of each layer n varies from 10 to 100. The results of the simulations are shown in Figure <ref>, from which we can draw several key findings: * As we anticipated from our theoretical results, the mis-clustering rates of all methods decrease when the average degree of each layer increases, the out-in ratio of each layer decreases, the number of layers increases, and the size of each layer increases. * Our refinement algorithm notably surpasses the other methods in terms of accuracy as the number of layers increases. While the other methods only achieve a 95% accuracy rate when the number of layers reaches 80, the refinement algorithm nearly hits 100% accuracy with only 40 layers (as observed in Simulation 3). * The refinement algorithm consistently outperforms the other three methods across all simulations.
Its superior performance is especially noticeable when the signal strength is relatively weak, for example when p<0.5 in Simulation 1, α>0.6 in Simulation 2, in all scenarios in Simulation 3, and when n<60 in Simulation 4. * While the refinement algorithm uses TWIST as an initialization step, the improvements obtained from the refinement step are significant, indicating its pivotal role in the performance of the algorithm. §.§ Worldwide food trading networks For a real-data application, we consider the dataset on worldwide food trading networks, which was collected by <cit.> and has been widely adopted in multi-layer network analysis <cit.>. The data contain an economic network in which layers represent different products, nodes are countries, and edges at each layer represent trading relationships of a specific food product among countries. We focus on the trading data in 2010 only. Following the data preprocessing in <cit.>, we obtained a 30-layer network with 99 nodes at each layer. Each layer represents trading relationships among 99 countries/regions worldwide with respect to one of 30 different food products. Together they form a 3rd-order tensor of dimension 99×99×30. We apply the refinement algorithm to the data tensor. The resulting two clusters of layers are listed in Table <ref>. We then apply the spectral method to the sum of the networks within each cluster separately (here we have two clusters) to find the community structure of each cluster, in order to obtain the clustering result of countries. The memberships of the 99 countries obtained from 4-means clustering are shown in Figure <ref>. For the two types of networks, we plot in Figure <ref> the sum of the adjacency matrices with nodes arranged according to the community labels, which gives a glimpse of the different community structures of the two network types. We make the following remarks based on Table <ref> and Figures <ref> and <ref>. * Food types are mainly divided into two categories. The first category comprises food products that have a longer shelf life, facilitating a more globalized trading pattern. In contrast, the second category includes foods that typically have a shorter shelf life or whose transportation costs are high relative to their overall value. Thus, the trade of these foods is predominantly conducted within nearby regions. * Within the first category, a handful of countries, such as China, Canada, the United Kingdom, the United States, France, and Germany, are notably active in trading both internationally and amongst themselves. Outside of these hub nations, however, other countries are grouped mainly by geographical location, such as the Americas, Eastern Europe, Africa, and Australia. This pattern demonstrates the dominance of these hub countries in the global food trade, while other nations engage more in regional trading to minimize delivery costs. * In the second category, countries are primarily clustered based on their geographical location. Apart from the USA, which is grouped with European countries, the four main clusters consist of the Americas, Western Europe, Eastern Europe and West Africa, and nations surrounding the Indian Ocean: Asia, East Africa, and Australia. Regional trading is crucial in this category as it helps to keep food costs low due to reduced transportation costs and maintains food freshness due to faster delivery times.
§.§ Brain connectivity networks We test our proposed method using the COBRE dataset (<cit.>), which consists of resting-state fMRI experiments conducted on both diagnosed schizophrenia patients and healthy control subjects. The brain network is defined as the connectivity among various regions of a person's brain. The focus of this study is on functional connectivity, a statistical measure that denotes the correlation between each pair of locations in the brain, obtained from functional magnetic resonance imaging (fMRI) data. Given the labels, <cit.> investigated the differences in the community structure of the regions of interest (ROI) between the connectivity networks of schizophrenia patients and control participants. The processed connectomic dataset used here was downloaded from <cit.>. It includes connectivity networks of 54 schizophrenia patients and 70 healthy controls, each represented as a 263 × 263 matrix. Weak connections in the network matrix are set to 0 by taking the median of the mean network constructed from all samples as the threshold, and all remaining strong connections are set to 1. We report the results of all methods used in the simulation study, comparing their mis-clustering errors with respect to the true labels, schizophrenia patients or healthy controls, as shown in Table <ref>. The proposed method achieved an accuracy of 60.5%, surpassing the other methods tested. It is worth noting that, while a higher accuracy was reported in <cit.>, their model was based on supervised learning; in contrast, the task in our study is unsupervised. § DISCUSSIONS § PROOF OF MAIN RESULTS §.§ Proof of Theorem <ref> Recall that _m()={i∈[n]: z_i=m} for a label vector ∈[2]^n and m∈ [2]. With slight abuse of notation, let _z:={∈[2]^n: |_m()|∈[n/2α,α n/2]}. For any fixed ^*∈_z with n_m^*=|_m(^*)|, we can choose any subset _m⊂_m(^*) with cardinality |_m|=n_m^*-⌊ n/8⌋. Let :=_1⋃_2 and define _:={∈[2]^n: z_i=z_i^*,∀ i∈}. Note that _1⋂_2=ϕ, hence 3n/4≤ ||≤4n/5. For any two ^'∈_, we have 1/n∑_i=1^n(z_iz^'_i)≤n-||/n≤1/4. This also implies that h(, ^')=1/n∑_i=1^n(z_iz^'_i). Now define _0^(_1,2,_1,2):={(,{_1,_2},{_1,_2}):∈_}. Hence we have inf_sup__^(_1,2,_1,2) h(,^*)≥inf_sup__0^(_1,2,_1,2) h(,^*) ≥1/ninf_1/|_|∑_^*∈_∑_i∈^c(ẑ_i z_i^*)≥1/n∑_i∈^cinf_ẑ_i1/|_|∑_^*∈_(ẑ_i z_i^*), where the second inequality holds since the minimax risk is lower bounded by the Bayes risk if we assume a uniform prior on _. It suffices to lower bound the last term. Now fix any i∈^c and let ^m_:={∈_:z_i^*=m} for m=1,2. Then _=^1_⋃^2_ and |^1_|=|^2_| by symmetry. This implies that inf_ẑ_i1/|_|∑_^*∈_(ẑ_i z_i^*) = inf_ẑ_i1/|_|(∑_^*∈^1_(ẑ_i z_i^*)+∑_^*∈^2_(ẑ_i z_i^*))=1/2inf_ẑ_i(_H_0^(i)(ẑ_i 1)+_H_1^(i)(ẑ_i 2)), where we define the following hypothesis testing problem for each i∈[n]: H_0^(i):z_i^*=1 v.s. H_1^(i):z_i^*=2. By the Neyman-Pearson Lemma (see, e.g., Lemma A.2 in <cit.>), the optimal test that minimizes the sum of the Type-I and Type-II errors of the above simple vs. simple hypothesis testing problem is the likelihood ratio test, which rejects H_0 if ∏_ω∈_d_2(ω)^_i(ω)(1-_2(ω))^1-_i(ω)>∏_ω∈_d_1(ω)^_i(ω)(1-_1(ω))^1-_i(ω). Rearranging terms, we obtain ∑_ω∈_d_i(ω)log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))>∑_ω∈_dlog1-_1(ω)/1-_2(ω). We need the following lemma, whose proof is relegated to Section <ref>.
If max_ω_m_1(j_1,j_2)/_m_2(j_1,j_2)=O(1) for ∀ m_1, m_2∈[2] andis sampled from _1,i.e.,(ω) ind.∼_1(ω),then(∑_ω∈_d(ω)log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))>∑_ω∈_dlog1-_1(ω)/1-_2(ω) )≥exp(-I^*(1+o(1)) ),provided that I^*→∞.Now it suffices to apply Lemma <ref> to get that inf_sup__^(_1,2,_1,2) h(,^*) ≥1/2n∑_i∈^cinf_ẑ_i(_H_0^(i)(ẑ_i 1)+_H_1^(i)(ẑ_i 2)) ≥exp(-I^*(1+o(1))). §.§ Proof of Lemma <ref>By definition, (ẑ=2) is equivalent to (∑_ω∈_d(ω)log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))>∑_ω∈_dlog1-_1(ω)/1-_2(ω))≤(∑_ω∈_d(ω)log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))>∑_ω∈_dD_KL(_1(ω)||_2(ω))-δ I^*)+(∑_ω∈_d(ω)(log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))-log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω)))>∑_ω∈_d[log1-_1(ω)/1-_2(ω)-log1-_1(ω)/1-_2(ω)]+δ I^*),where (ω):=(ω)-_1(ω) and δ=o(1) shall be determined later.We now analyze both terms of (<ref>).First notice that (∑_ω∈_d(ω)log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))>∑_ω∈_dD_KL(_1(ω)||_2(ω))-δ I^*)≤∏_ω∈_d[1-_1(ω)+_1(ω)e^-1/2log_1(ω)(1-_2(ω))/_2(ω)(1-_1(ω))]·exp(1/2_1(ω)log_1(ω)(1-_2(ω))/_2(ω)(1-_1(ω)))exp(-1/2 D_KL(_1(ω)||_2(ω) )+1/2δ I^*)≤exp-I^*/2+1/2δ I^*=exp(-I^*/2(1-o(1)) )where the first inequality holds by applying Chernoff bound and the last equality holds provided that δ=o(1).Denoteξ_ideal:=(∑_ω∈_d(ω)log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))>∑_ω∈_dD_KL(_1(ω)||_2(ω))-δ I^*)Then (<ref>) impliesξ_ideal≤exp(-I^*/2(1-o(1)) )as I^*→∞. It suffices to show the second term of (<ref>) vanishes.Observe that(∑_ω∈_d(ω)(log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))-log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω)))>∑_ω∈_dlog[1-_1(ω)/1-_2(ω)-log1-_1(ω)/1-_2(ω)]+δ I^*)=(∑_ω∈_d(ω)(log_1(ω)/_1(ω)+log_2(ω)/_2(ω)+log1-_1(ω)/1-_1(ω)+log1-_2(ω)/1-_2(ω))>∑_ω∈_d[log1-_1(ω)/1-_1(ω)+log1-_2(ω)/1-_2(ω)]+δ I^*) Without loss of generality,suppose that _1-_1_ℓ_1,_2-_2_ℓ_1≤ρ̃I^∗ for some ρ̃=o(1).This means that for ∀ m∈[1,2],∑_ω∈_d|_m(ω)-_m(ω)|≤ρ̃I^*This implies that |∑_ω∈_d[log1-_1(ω)/1-_1(ω)+log1-_2(ω)/1-_2(ω)]|=∑_ω∈_d|log(1+_1(ω)-_1(ω)/1-_1(ω))|+∑_ω∈_d|log(1+_2(ω)-_2(ω)/1-_2(ω))|≤∑_ω∈_d|_1(ω)-_1(ω)|(1+o(1))+∑_ω∈_d|_2(ω)-_2(ω)|(1+o(1))≲ρ̃I^*Now set δ=ρ̃^ϵ for some fixed ϵ∈(0,1),and we get|∑_ω∈_d[log1-_1(ω)/1-_1(ω)+log1-_2(ω)/1-_2(ω)]|=o(δ I^*). Then term (<ref>) can be further bounded as (∑_ω∈_d(ω)(log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))-log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω)))>∑_ω∈_dlog[1-_1(ω)/1-_2(ω)-log1-_1(ω)/1-_2(ω)]+δ I^*) ≤(∑_ω∈_d(ω)log_1(ω)/_1(ω)>δ/8I^*)+ (∑_ω∈_d(ω)log_2(ω)/_2(ω)>δ/8I^*)+ (∑_ω∈_d(ω)log1-_1(ω)/1-_1(ω)>δ/8I^*)+ (∑_ω∈_d(ω)log1-_2(ω)/1-_2(ω)>δ/8I^*)To control the first term of (<ref>), we begin with:∑_ω∈_d(ω)|log_1(ω)/_1(ω)| ≤∑_ω∈_d(ω)|_1(ω)-_1(ω)|/_1(ω)(1+o(1))≲∑_ω∈_d(ω) ρ̃I^∗/d^2/_1(ω)where in the first inequality we've used the following fact:I^*/d(d+1)/2 ≲1/d(d+1)/2∑_ω∈_d_1(ω)-_2(ω)^2/_1(ω)∨_2(ω)≲1/d(d+1)/2∑_ω∈_d_1(ω)∨_2(ω)≲min_ωmin_m∈[2]_m(ω),where the last inequality holds by the constraints in the parameter spacein (<ref>) and the block structures of _m and _m so that |_1(ω)-_1(ω)|/ _1(ω)=O(1).Therefore there exists an absolute constant C≥ 1 such that(∑_ω(ω)log_1(ω)/_1(ω)>δ/8I^*)≤ (∑_ω(ω) ρ̃I^∗/d^2/_1(ω)>δ/8CI^*)≤(∑_ω(ω) ρ̃I^∗/d^2/_1(ω)>δ I^∗/16C)where (ω):=(ω)-_1(ω) and we used the fact δ=ρ^ for some ∈(0,1). 
Denote the event:={∑_ω(ω) ρ̃I^∗/d^2/_1(ω)>δ I^∗/16C}.Due toindependencebetween _1 and ,we have,by Chernoff bound and conditioned on _1,that for any λ>0,( ) ≤exp(-λδ/16CI^* )∏_ω∈_dexp(λ(ω)ρ̃I^∗/d^2/_1(ω))By choosing λ≍ρ̃^- so that λρ̃=o(1),we get expλ(ω)ρ̃I^∗/d^2/_1(ω)=_1(ω)expλ(1-_1(ω)) ρ̃I^∗/d^2/_1(ω)+(1-_1(ω))exp(-λ_1(ω)ρ̃I^∗/d^2/_1(ω))=(1+_1(ω)(expλρ̃I^∗/d^2/_1(ω)-1))exp-λ_1(ω)ρ̃I^∗/d^2/_1(ω)(a)≤(1+2λρ̃I^*/d^2)exp-λρ̃I^*/d^2(b)≤exp(λρ̃I^*/d^2 )where in (a) we've used (<ref>) and e^x≤ 1+2x for 0<x<1, in (b) we've used e^x≥1+x for x>0. Recall thatδ=ρ̃^ϵ for some ϵ∈(0,1), (<ref>) can be further bounded as exp(-λδ/16I^* )∏_ω∈_d(expλ(ω)ρ̃I^*/d(d+1)/2/_1(ω) )≤exp(λρ̃I^*-λδ/16I^* )≤exp(-cλδ I^*/2 )for some absolute constant c>0.Then we can choose λ=(cδ)^-1=c^-1ρ̃^-ϵ and obtain that ( )≤exp(-I^*/2).The same high probability bounds for the other three terms of (<ref>) can be obtained similarly and hence omitted.Thenξ_pertub:= (∑_ω∈_d(ω)(log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))-log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))) >∑_ω∈_dlog[1-_1(ω)/1-_2(ω)-log1-_1(ω)/1-_2(ω)]+δ I^*)Hence we obtain that ξ_pertub≤ 4exp(-I^*/2 )=exp(-I^*/2(1-o(1)) )§.§ Proof of Theorem <ref>We introduce extra notations for convenience.For any ,^'∈[2]^n and ,^':[d]→[K], we defineh_0(,^'):=1/n∑_i=1^n𝕀( z_i z^'_i ) and h_0(,^'):=1/d∑_j=1^d𝕀( (j)^'(j) ) For each l∈[n],by definition there exists some π_(-l)∈𝔖_2 and ϕ_(-l)∈𝔖_K such that h_0(^(-l),π_(-l)(^∗) )=h(^(-l),^∗ )≤η_z and h_0(^(-l),ϕ_(-l)∘ )=h(^(-l),^∗ )≤η_σ. We now fix an i∈[n] and suppose z_i^∗=1.Without loss of generality,we assume π_(-i)=Id and ϕ_(-i)=Id[Otherwise we can always replace _m(j_1,j_2), _m(k,l), _m(j) with _π(m)(j_1,j_2), _π(m)(ϕ(k),ϕ(l)), ϕ∘_π(m)(j) in the following analysis. ]. To avoid clutters of notations, we temporarily drop the superscript (-i) in _m^(-i)(j_1,j_2) and _m^(-i)(k,l) but keep in mind the independence structure between _i and other estimated parameters.Similarly as Lemma <ref>,the i-th network is mis-clustered when(ẑ_i^(-i)=2)= (∑_ω∈_d_i(ω)log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))>∑_ω∈_dlog1-_1(ω)/1-_2(ω)).Since _i is independent of _1^(-i) and _2^(-i),it suffices to apply Lemma <ref>.Towards that end,we must bound the entry-wise error of ^(-i)_1 and ^(-i)_2,respectively. It thus suffices to bound the error of _1^(-i) and _2^(-i),respectively,as shown in the following lemma.Under the same conditions of Theorem <ref>,there exists a large absolute constant C>0 such that,for each i∈[n],with probability at least 1-n^-C,we havemin_π∈𝔖_2,ϕ∈𝔖_Kmax_m∈{1,2}max_k,l∈[K]|_m^(-i)(k,l)-_π(m)(ϕ(k),ϕ(l))|=ρ̃·I^*/d(d+1)/2with some ρ̃=o(1).Denote the event _(-i):={(<ref>) holds}. Notice that (_(-i))≥ 1-n^-C and we now proceed conditioned on _(-i). Observe that,for m∈{1,2},we get∑_ω∈_d |_m(ω)-_m(ω)|=∑_ω∈_d|∑_k,l∈[K][_m(k,l)(_m(ω)=(k,l))-_m(k,l)(_m(ω)=(k,l))]|=∑_ω∈_d|∑_k,l∈[K][(_m(k,l)-_m(k,l))(_m(ω)=(k,l))(_m(ω)=_m(ω))-_m(k,l)(_m(ω) (k,l))(_m(ω)≠_m(ω))]|≤∑_ωmax_k,l∈[K]|_m(k,l)-_m(k,l)|+2K^2max_k,l∈[K]_m(k,l)∑_ω(_m(ω)≠_m(ω))≲ρ̃I^*+K^2d^2η_σ·a̅/(n∧ d)d≲ρ̃I^*,where we've used (<ref>) and η_σ=o( I^*/a̅ K^2(n/d∧1)). By Lemma <ref>, we conclude, conditioned on the event _(-i) and the event defined in (<ref>), that(ẑ_i^(-i)≠ z_i^∗)≤exp(-I^∗/2(1-o(1))) Now we need the following lemma to validate our final step for label alignments, which is a special case of <cit.>. Recall that _m()={i∈[n]: z_i=m} for each m∈[2] and any ∈[2]^n. 
For any label vector ,^'∈{1,2}^n such that |_m()|∧|_m(^' )|≥n/2α and h(,^')≤ cα^-1 for some absolute constant c∈[0,1/2), define the map ς:[2]→[2] as ς(k):=_m∈{1,2}|{j∈[n]: z_j=m}⋂{j∈[n]:z_j^'=k}|, k∈{1,2} Then we have ς∈ S_2 and h_0(,ς(^') )=h(,^' ).Notice that after refinement procedure for all samples {_i}_i∈[n], we can obtain n label vectors ^(-i), each of which only differs from ^(-i)at thei-th label. Since for any i∈[n], h_0 (^(-i),π_(-i)(^∗) )=h (^(-i),^∗ )≤η_z. Hence we haveh_0 (^(-i),π_(-i)(^∗) )≤η_z+1/nFor each i=2,⋯,n,define the map ς_(-i):[2]→ [2] asς_(-i)(k):= _m∈{1,2}|{j∈[n]: ẑ_j^(-1)=m}⋂{j∈[n]:ẑ_j^(-i)=k}|, k∈{1,2}By definition, we have ẑ_i=ς_(-i)(ẑ_i^(-i) ). We can assume π_(-1)=Id without loss of generality, then (<ref>) is equivalent toh_0 (^(-1),^∗ )≤η_z+1/n , h_0 (π_(-i)^-1(^(-i)),^∗ )≤η_z+1/nWe therefore have h_0(^(-1),π^-1_(-i)(^(-i)) )≤ h_0 (^(-1),^∗ )+h_0 (π_(-i)^-1(^(-i)),^∗ )≤ 2η_z+2/n=o(1). Then we can apply Lemma <ref> with ^(-1) and ^(-i), and obtain that ς_(-i)=π^-1_(-i) for i=2,⋯,n. Thus we haveh(,^∗) ≤ h_0(,^∗)=1/n∑_i=1^n(π^-1_(-i)(ẑ_i^(-i) ) z_i^∗ )=1/n∑_i=1^n(ẑ_i^(-i)π_(-i)(z_i^∗ ) )By the above relation and eq. (<ref>), eq. (<ref>), we obtainh(,^∗)≤ 1/n∑_i=1^n (ẑ_i^(-i)π_(-i)(z_i^∗ ) )(_(-i) )+(ẑ_i^(-i)π_(-i)(z_i^∗ ) )(^c_(-i) ) ≤ exp(-I^*/2(1-o(1)) )+n^-CBy Markov's inequality, we further obtain that(h(,^∗)≥exp((I^*/2)^1-ϵ) h(,^∗))≤exp-(I^*/2)^1-ϵfor any ϵ∈(0,1), which implies that with probability at least 1-exp-(I^*/2)^1-ϵ,h(,^∗)≤exp((I^∗/2)^1-ϵ) h(,^∗)≤exp(-I^*/2(1-o(1)) )+exp(I^*/2)^1-ϵn^-Cas I^*→∞. Finally, it suffices to notice that h(,^∗)<n^-1 implying h(,^∗)=0, thus n^-C is ignorable andn^-C< exp(-(1-o(1))I^∗/2 ). We thus conclude, under the event of (<ref>), thath(,^∗)≤exp(-I^*/2(1-o(1)) ), with probability at least 1-exp(-(I^*/2)^1-ϵ). §.§ Proof of Theorem <ref>The proof is proceeded with several steps. Without loss of generality, we assume_0={1,⋯,n_0} and _0={1,⋯,d_0} throughout the proof. *Lower bound on core tensor and incoherent condition.Under conditions of Theorem <ref>, Lemma 4.1 in <cit.> implies thatσ_min()≳κ_0^-2r^-1√(n)dp̅,and Lemma 4.2 in <cit.> implies that δ=_2,∞≤κ_0√(r/d)The above two inequalities will be used repeatedly in the proof. Sample-splitting properties Due to sample splitting, it is more convenient to view the sets ^[0],^[1],^[0],^[1] as fixed. In fact,by Hoeffding's inequality we have with probability at least 1-n∧ d^-10, there exists some absolute constant c_1,c_2>0 such that1-c_1log n/√(n)n_m^∗/2<|{i∈^[k]:z_i^∗=m}|≤1+c_1log n/√(n)n_m^∗/2,1-c_2log d/√(d)d_m,l/2<|{j∈^[k]:_m(j)=l}|≤1+c_2log d/√(d)d_m,l/2for k∈ [2], m∈ [2] and l∈[K], where we define d_m,l:=|S_l(_m)|. Moreover, denote n_m^[k]∗:=|{i∈^[k]:z_i^∗=m}| and for i∈^[k], denote ^[+,k]_m=^d_k× d_k to be the sub-matrix of _mrestricting to the vertices in ^[k]. Notice thatσ_minn_1^[0]⋆^[+,0]_1+n_2^[0]⋆^[+,0]_2=σ_min1+Olog n/√(n)n_1^⋆^[+,0]_1+1+Olog n/√(n)n_2^⋆^[+,0]_2=σ_minn_1^⋆^[+,0]_1+n_2^⋆^[+,0]_2+C_1log n/√(n)n_1^⋆^[+,0]_1+C_2log n/√(n)n_2^⋆^[+,0]_2(a)≥σ_minn_1^⋆^[+,0]_1+n_2^⋆^[+,0]_2-log n/√(n)C_1n_1^⋆^[+,0]_1+C_2n_2^⋆^[+,0]_2(b)≥σ_minn_1^⋆_1+n_2^⋆_2-log n/√(n)C_1n_1^⋆_1+C_2n_2^⋆_2≥C√(ndp̅)-C^'log n/√(n)· ndp̅(c)≥ C√(ndp̅)where in (a) we've usedWely's inequality, in (b) we've used that n_1^⋆_1^[+,0]+n_2^⋆_2^[+,0] is a principle sub-matrix of n_1^⋆_1+n_2^⋆_2 andhence the Cauchy interlacing theorem applies, in (c) we've used that p̅≲dlog^2 n^-1 which is further implied by a̅≲n∧ d/log^2n. 
Due to the independence of sample splitting and the data itself, we assume the above results holds deterministically.Pre-estimation for reference labels. For the performance of pre-estimationand _1,_2 of RSpec (i.e., Algorithm <ref>), we have the following result. With probability at least 1-(n∨ d)^-3, the output of Algorithm <ref> satisfiesh(,^∗)≤Cκ_0^6r^3log(κ_0r)log^4(n∨ d)/a̅and max_m∈{1,2}h(_m,_m)≤ Clog^6(n∨ d)/a̅+κ_0^6r^3log(κ_0r)log^4(n∨ d)/a̅^2for some absolute constant C>0, provided that a̅≫κ_0^6r^3log(κ_0r)log^6(n∨ d). Sample-switching estimation for network labels. Without loss of generality we first considerk=1,k^'=0i.e.,^[0]⟵SVD_r(∑_i∈^[0]_i^[+] ),^[1]⟵SVD_2(_3(_+^[1]×_1 _δ_1(^[0])×_2_δ_1(^[0]) ) ), and ^[1]⟵K-means clustering on rows of ^[1].We first upper bound min_∈_r^[0]-. Here we slightly abuse the notation to denotethe corresponding left singular vectors of the sub-networks either on _1 and _2 and, respectively, d can be d_0 and d_1.By Lemma <ref> and Davis-Kahan theorem, we obtain that with probability at least 1-(n∨ d)^-3,min_∈_r^[0]-≤C_0√(ndp̅)/σ_r(∑_i∈^[0]_i^[+] )≤1/4provided thatσ_rn_1^[0]∗^[+,0]_1+n_2^[0]∗^[+,0]_2≥ 4C_0√(ndp̅). Denote :=_δ_1(^[0]).By the property of regularization operator _δ (see <cit.>), we have min_∈_r-≤ 2√(2)·min_∈_r^[0]-≤√(2)/2and_2,∞≤√(2)δIt turns out that we need to upper bound _3(^[1]_+×_1 ^⊤×_2 ^⊤ )=_3(^[1]_+)(⊗ ) with ^[1]_+:=_+^[1]-_+^[1]. The following lemma is derived from recent result on matrix concentration inequality <cit.>, the proof of which is delegated to Section <ref>. Suppose ∈{0,1}^d× d× n is a Bernoulli tensor such that for each i∈[n],(:,:,i)∈{0,1}^d× d is a symmetric Bernoulli matrix with independent entries up to symmetry. Denote p̅=_∞. For any ∈_d,r such that _2,∞≤Cκ_0√(r/d) for some constant C>0, then with probability at least 1-(n∨ d)^-3 we have _3(-)(⊗ )≤ C√(np̅),provided thatn/log n≥ C_1 κ_0^4r^6 and √(nd^2p̅)≥C_2(κ_0r)^2log^2(n∨ d) for some absolute constant C_1,C_2>0. Due to the independence ofandof ^[1]_+, we can viewas fixed (or equivalently, conditional on ^[0]_+)and apply Lemma <ref> (note the conditions in Lemma <ref> is met automatically under our setting) to obtain that _3(^[1]_+)(⊗ )≲√(np̅)with probability at least 1-(n∨ d)^-3. On the other hand, we haveσ_r(_3(^[1]_+ )(⊗ ))=σ_r(_3( )(^⊤⊗^⊤ ))≥σ_r(_3( ))σ^2_min(^⊤)≥σ_r(_3( ))/4with probability at least 1-(n∨ d)^-3, where we've used the fact-^⊤^⊤≤min_∈_d,r-≤√(2)/2by (<ref>) and hence σ_min(^⊤)≥ 1/2. We then obtain, by Davis-Kahan theorem, that with the same probability,min_∈_2^[1]-≤4C√(np̅)/σ_min()≤C^'κ_0^2r/√(d^2p̅),Following the same argument in the proofof first claim in Theorem 5.4 in <cit.>, we obtain h(^[1],^[1]∗)≤Cκ_0^4r^2/d^2p̅≤Cκ_0^4r^2/a̅,with probability at least 1-(n∨ d)^-3. By symmetry, we can similarly obtain the bound for h(^[0],^∗) by considering k=0,k^'=1. We thusconclude that with probability at least 1-3(n∨ d)^-3, we haveh(^[k],^[k]∗)≤Cκ_0^4r^2/d^2p̅≤Cκ_0^4r^2/a̅,∀ k∈{0,1} Sample-switching estimation for local memberships.Theproof strategy in this part is similar to the proof of Proposition <ref>. Wefirst consider k=0,k^'= 1 andm∈{1,2}. Recall_m^[1] is obtained byK-means clustering on rows of SVD_K(∑_i∈^[0]:z_i^[0]=m^[-]_i) .Without loss of generality, we assume h_0^[0],^[0]∗=h^[0],^[0]∗ (see the definition of h_0(· ) in the proof Theorem <ref>). 
For i∈^[0], denote _i^[-]:=_i^[-]-^[-,0]_z_i^∗, where ^[-,0]_z_i^∗=^[-]_i∈^d_1× d_1 is the expectation of sub-matrix of _i restricting to the vertices in ^[1] and n^[0]_m:=∑_i∈^[0]( z_i^[0]=m) for m∈{1,2}. Letdenote the event in (<ref>) we continue on . For m∈{1,2} we haven^[0]_m=∑_i∈^[0] z_i^[0]=m≥∑_i∈^[0]z_i^∗=m-∑_i∈^[0] z_i^∗ z^[0]_i≥c_1n/2-nh(,^∗)≳n_m^[0]∗ where the last inequality holds if a̅≫κ_0^4r^2. Following the same argument line by lineof the proof of Proposition <ref> (local memberships estimation part), it suffices for us to bound 1/n_m^[0]∗∑_i∈^[0] z_i^[0] m, z_i^∗= m_i^[-]Now since ^[0] is obtained by using ^[0]_+ and ^[1]_+, we have independence between ^[0]and _i^[-]. Instead of using Lemma <ref>, we can sharply control the concentration of (<ref>) by Lemma <ref> such that 1/n_m^[0]∗∑_i∈^[0] z_i^[0] m, z_i^∗= m_i^[-]≲1/n_m^[0]∗√(n_m^[0]∗ h^[0],^[0]∗dp̅)≲√(dp̅/n)with probability at least 1-(n∨ d)^-3. Define _m^[k](j)=_m(j) for j∈^[k] and k=0,1, the same argument of the proof of Proposition <ref> would lead toh^[1]_m,_m^[1]≤dp̅/n+h^2^[0],^[0]∗· (dp̅)^2/σ_K^2(^[-,0]_m )≲1/a̅with probability at least1-3(n∨ d)^-3, where we've used σ_K^2(^[-,0]_m )≳dp̅^2 and a̅≫κ_0^8r^4log(κ_0r)log^6(n∨ d). By symmetry, the same argument applies to the case whenk=1,k^'=0. We can thereby conclude thatmax_m∈{1,2}h^[k]_m,^[k]_m≲1/a̅,∀ k∈{0,1}with probability exceeding 1-C(n∨ d)^-3.Alignment for network labels. Denote the event :={h(,^∗ )≤η_1 and h(^[k],^[k]∗ )≤η_2 for k=0,1}where η_1=Cκ_0^6r^3log(κ_0r)log^4(n∨ d)(a̅)^-1 and η_2= Cκ_0^4r^2(a̅)^-1. According to our previous analysis, we have ()≥ 1-3(n∨ d)^-3 and we proceed on .Denote ^∗⊤=(^[0]∗⊤,^[1]∗⊤) and^⊤=(^[0]⊤,^[1]⊤). By definition, there exists some π̌∈ S_2 such that h_0(,π̌(^∗ ) )=h(,^∗ ). Hence we have h_0(^[0],π̌(^[0]∗ ) )=h(^[0],^[0]∗ )≤η_1 and h_0(^[1],π̌(^[1]∗ ) )=h(^[1],^[1]∗ )≤η_1. On the other hand, there exists some π_0,π_1∈ S_2 such that h_0(^[0],π_0(^[0]∗ ) )=h(^[0],^[0]∗ )≤η_2 and h_0(^[1],π_1(^[1]∗ ) )=h(^[1],^[1]∗ )≤η_2. Consequently, we haveh_0(π_0^-1(^[0]),π̌^-1(^[0]) )≤ h_0(π̌^-1(^[0]),^[0]∗ )+h_0(π^-1_0(^[0]),^[0]∗ )≤η_1+η_2Hence h(^[0],^[0])≤ h_0(π̌∘π_0^-1(^[0]),^[0] )≤η_1+η_2=o(1) with the proviso thata≫κ_0^6r^3log^4(n∨ d). This indicates that h(^[0],^[0])= h_0(π̌∘π_0^-1(^[0]),^[0] ). Now for k∈{0,1}, define the map ς_k:[2]→ [2] asς_k(m)=_m^'∈{1,2}|{j∈[n_k]:ž_j^[k]=m^'}⋂{j∈[n_k]: z_j^[k]=m}|, m=1,2By definition we have z_i=ς_0( z_i^[0] ) for i=1,⋯,n_0. By Lemma <ref> and the fact that h(^[0],^[0])=o(1), we conclude that ς_0=π̌∘π_0^-1. That is to say, h(^[0],^[0])= h_0(ς_0(^[0]),^[0] )≤η_1+η_2=o(1).Meanwhile, we haveh_0(ς_0(^[0]),π̌(^∗[0]) )≤h_0(ς_0(^[0]),^[0] ) + h_0(^[0],π̌(^∗[0]) )≤ 2η_1+η_2=o(1)Repeating the same argument, we can obtain h_0(ς_1(^[1]),π̌(^∗[1]) )≤ 2η_1+η_2=o(1) with ς_1=π̌∘π_1^-1. This implies thath( ,^∗) ≤ h_0( ,π̌(^∗) )≤n_0· h_0(ς_0(^[0] ),π̌(^[0]∗) )+n_1· h_0(ς_1(^[1] ),π̌(^[1]∗) )/n=n_0· h_0(^[0],π_0(^[0]∗) )+n_1· h_0(^[1],π_1(^[1]∗) )/n≤ 2η_2In other word, if a̅≫κ_0^6r^3log^2(n∨ d) we have( {h( ,^∗)≤Cκ_0^4r^2/a̅}⋂)≥ 1-3(n∨ d)^-3. Alignment for local memberships.Denote the event :={max_m∈{1,2}h(_m,_m )≤η_1 and max_m∈{1,2}h^[k]_m,_m≤η_2 for k=0,1 }where η_1=Clog^6(n∨ d)a̅^-1+κ_0^6r^3log(κ_0r)log^4(n∨ d)^2a̅^-2, η_2=Ca̅^-1, and≥ 1-Cn∨ d^-3. On , the remaining proofs are almost the same as that of alignment for network labels and hence omitted.§.§ Proof of Theorem <ref>The strategy is similar to the proof of Theorem <ref>. 
Using the same notations there, we get inf_sup__^(_1,2, _1,2) h(, ^∗)≥1/n∑_i∈^cinf_ẑ_i1/|_|∑_^*∈_(ẑ_i z_i^*)=1/2inf_ẑ_i(_H_0^(i)(ẑ_i 1)+_H_1^(i)(ẑ_i 2)),where we define the following hypothesis testing for each i∈[n]:H_0^(i):z_i^*=1 v.s. H_1^(i):z_i^*=2.It then suffices to lower bound the following probability(∑_ω∈_d(ω)log_2(ω)/_1(ω)> ∑_ω∈_d(_2(ω)-_1(ω))).Define(ω):=(ω)log_2(ω)/_1(ω)-(_2(ω)-_1(ω)). Then for any ϑ >0, we have that ( ∑_ω∈_d(ω)log_2(ω)/_1(ω)> ∑_ω∈_d(_2(ω)-_1(ω)))=(∑_ω∈_d (ω)>0)=(ϑ≥∑_ω∈_d(ω)>0)=∑_∈∏_ω∈_d h_ω(x_ω)≥exp(∑_ω(ω)/2)/exp(ϑ/2)·∑_∈∏_ωexp(x_ω/2)h_ω(x_ω)/exp((ω)/2),where :={∈^d(d+1)/2: 0≤∑_ωx_ω≤ϑ} and h_ω is the probability mass function of (ω). By the moment generating function of Poisson variables, we get exp(∑_ω∈_d(ω)/2)= ∏_ω∈_de^-_2(ω)-_1(ω)/2 e^(ω)log√(_2(ω)/_1(ω))= ∏_ω∈_dexp(-(√(_1(ω))-√(_2(ω)))^2/2)=exp(-I^∗/2)Define q_ω(x):=exp(x/2) h_ω(x)/exp((ω)/2),which is a probability mass function for any ω∈_d. Let Y_ω, ω∈_d be a sequence of independent random variables such that Y_ω∼ q_ω(·). Then, ( ∑_ω∈_d(ω)log_2(ω)/_1(ω)> ∑_ω∈_d(_2(ω)-_1(ω)))≥exp(-(I^∗+ϑ)/2)·∑_∈∑_ω∈_d q_ω(x_ω)=exp(-(I^∗+ϑ)/2)·(ϑ≥∑_ω∈_d Y_ω≥ 0).Let M_ω(·) be the moment generating function of (ω) so thatM_ω(t):=exp(t(ω))= exp(-t(_2(ω)-_1(ω))+_1(ω)[(_2(ω)/_1(ω))^t-1])andM_ω'(t)=exp(-t(_2(ω)-_1(ω)) +_1(ω)[(_2(ω)/_1(ω))^t-1]) · (-(_2(ω)-_1(ω))+_1(ω)(_2(ω)/_1(ω))^tlog_2(ω)/_1(ω))and M_ω”(t)=exp(-t (_2(ω)-_1(ω))+_1(ω)[(_2(ω)/_1(ω))^t-1]) · (-(_2(ω)-_1(ω))+_1(ω)(_2(ω)/_1(ω))^tlog_2(ω)/_1(ω))^2+ exp(-t(_2(ω)-_1(ω))+_1(ω)[(_2(ω)/_1(ω))^t-1])·_1(ω)(_2(ω)/_1(ω))^tlog^2 _2(ω)/_1(ω)Then Y_ω has the moment generating function exp(tY_ω)=M_ω(t+1/2) M^-1_ω(1/2). Since Y_ω=M_ω'(1/2)M_ω^-1(1/2) and Y_ω^2=M_ω”(1/2)M_ω^-1(1/2), we getY_ω=-_2(ω)+_1(ω)+√(_1(ω)_2(ω))·log_2(ω)/_1(ω)andY_ω^2 =(-_2(ω)+_1(ω)+√(_1(ω)_2(ω))·log_2(ω)/_1(ω))^2 +√(_1(ω)_2(ω))·log^2 _2(ω)/_1(ω).Therefore, we get Var(Y_ω)=√(_1(ω)_2(ω))·log^2 _2(ω)/_1(ω).For _1 and _2 from the parameter space, the ratio _2(ω)/_1(ω), _1(ω)/_2(ω)∈ [γ^-1, γ] for all ω∈_d and γ is treated as a constant. As a result,Var(Y_ω)≍√(_2(ω)/_1(ω))(_2(ω)-_1(ω))^2/_1(ω)≍ (√(_1(ω))-√(_2(ω)))^2(√(_1(ω))+√(_2(ω)))^2/_1(ω)≍(√(_1(ω))-√(_2(ω)))^2for all ω∈_d. We now calculate the third moment of |Y_ω|.Towards that end, let us explicitly find out the distribution of Y_ω. For any t=0,1,2,⋯, denote x̃_t:=tlog_2(ω)/_1(ω)-_2(ω)+_1(ω).Then Y_ω takes value from {x̃_0, x̃_1,x̃_2,⋯} with the probability mass function(Y_ω=x̃_t)=q_ω(x̃_t)=e^-√(_1(ω)_2(ω))(√(_1(ω)_2(ω)))^t/t!,implying that Y_ω is a linear transformation of Poisson(√(_1(ω)_2(ω))). Since Poisson distribution is sub-exponential, we have |Y_ω|^3<∞. 
Now taking ϑ=√(∑_ω Var(Y_ω)), by Berry-Esseen theorem, we can get (∑_ω∈_d (ω)log_2(ω)/_1(ω)> ∑_ω∈_d(_2(ω)-_1(ω)))≥exp(-(I^∗+ϑ)/2)·(ϑ≥∑_ω∈_d Y_ω≥ 0) ≳ exp(-I^∗/2-1/2√(∑_ω∈_d Var(Y_ω)))·(Φ(1)-Φ(0)) ≥ exp(-I^∗/2(1+o(1))),provided that I^∗→∞.§.§ Proof of Lemma <ref>By definition, (ẑ=2) is equivalent to (∑_ω∈_d(ω)log_2(ω)/_1(ω)> ∑_ω∈_d(_2(ω)-_1(ω)))≤(∑_ω∈_d(ω)log_2(ω)/_1(ω)>∑_ω∈_d(_2(ω)-_1(ω))-δ I^*)+(∑_ω∈_d(ω)(log_2(ω)/_1(ω)-log_2(ω)/_1(ω))>∑_ω∈_d[(_2(ω)-_1(ω))-(_2(ω)-_1(ω))]+δ I^*),where δ=o(1) shall be determined later.We now analyze both terms of (<ref>).First notice that (∑_ω∈_d(ω)log_2(ω)/_1(ω)>∑_ω∈_d(_2(ω)-_1(ω))-δ I^*)≤exp(-1/2(∑_ω∈_d(_2(ω)-_1(ω))-δ I^∗))·exp(1/2∑_ω∈_d(ω)log_2(ω)/_1(ω))= exp(-1/2(∑_ω∈_d(_2(ω)-_1(ω))-δ I^∗))·exp(∑_ω∈_d(√(_1(ω)_2(ω))-_1(ω)))=exp(-1/2∑_ω∈_d(√(_1(ω))-√(_2(ω)))^2+1/2δ I^∗)≤exp-I^*/2+1/2δ I^*=exp(-I^*/2(1-o(1)) ),where the last equality holds provided that δ=o(1).Denoteξ_ideal:=(∑_ω∈_d(ω)log_2(ω)/_1(ω)>∑_ω∈_d(_2(ω)-_1(ω))-δ I^*)Then (<ref>) impliesξ_ideal≤exp(-I^*/2(1-o(1)) )as I^*→∞. It suffices to show the second term of (<ref>) vanishes.Observe that(∑_ω∈_d(ω)(log_2(ω)/_1(ω)-log_2(ω)/_1(ω))>∑_ω∈_d[(_2(ω)-_1(ω))-(_2(ω)-_1(ω))]+δ I^*)=(∑_ω∈_d(ω)(log_1(ω)/_1(ω)+log_2(ω)/_2(ω))>∑_ω∈_d[(_2(ω)-_1(ω))-(_2(ω)-_1(ω))]+δ I^*) Without loss of generality,suppose that _1-_1_ℓ_1,_2-_2_ℓ_1≤ρ̃I^∗ for some ρ̃=o(1).This means that for ∀ m∈[1,2],∑_ω∈_d|_m(ω)-_m(ω)|≤ρ̃I^*This implies that |∑_ω∈_d[(_2(ω)-_1(ω))-(_2(ω)-_1(ω))]|≤ 2 ρ̃I^*Now set δ=ρ̃^ϵ for some fixed ϵ∈(0,1),and we get|∑_ω∈_d[(_2(ω)-_1(ω))-(_2(ω)-_1(ω))]|=o(δ I^*). Then term (<ref>) can be further bounded as (∑_ω∈_d(ω)(log_1(ω)/_1(ω)+log_2(ω)/_2(ω))>∑_ω∈_d[(_2(ω)-_1(ω))-(_2(ω)-_1(ω))]+δ I^*)≤(∑_ω∈_d(ω)log_1(ω)/_1(ω)>δ/4I^*)+ (∑_ω∈_d(ω)log_2(ω)/_2(ω)>δ/4I^*)To control the first term of (<ref>), we begin with:∑_ω∈_d(ω)|log_1(ω)/_1(ω)| ≤∑_ω∈_d(ω)|_1(ω)-_1(ω)|/_1(ω)(1+o(1))≲∑_ω∈_d(ω)ρ̃I^*/d(d+1)/2/_1(ω)where in the first inequality we've used the following fact:I^*/d(d+1)/2 ≲1/d(d+1)/2∑_ω∈_d(√(_1(ω))-√(_2(ω)))^2≲1/d(d+1)/2∑_ω∈_d_1(ω)∨_2(ω)≲min_ωmin_m∈[2]_m(ω),where the last inequality holds by the constraints in the parameter spacein (<ref>) and the block structure of _m and _m. As a result,there exists an absolute constant C>1 such that (∑_ω∈_d(ω)log_1(ω)/_1(ω)>δ/4I^*)≤(∑_ω∈_d(ω)ρ̃I^∗/d^2/_1(ω)>δ I^∗/4C)Denote the event:={∑_ω∈_d(ω)ρ̃I^∗/d^2/_1(ω)>δ I^∗/4C}.Due toindependencebetween _1 and ,we have,by Chernoff bound and conditioned on _1,that for any λ>0,( ) ≤(exp(λ∑_ω∈_d(ω)ρ̃I^∗/d^2/_1(ω)) >exp(λδ I^∗/4C) ) ≤exp(-λδ/4CI^* )∏_ω∈_d(exp(λ(ω) ρ̃I^∗/d^2/_1(ω)))=exp(-λδ/4CI^* ) exp(∑_ω∈_d_1(ω)(e^λρ̃I^∗/d^2_1(ω)-1))≤exp(-λδ/4CI^* )exp(C_1λρ̃I^∗)≤exp(-λδ I^∗/(8C))By choosing λ≍ρ̃^- for some ∈(0,1) so that λρ̃I^∗/d^2_1(ω)=o(1),we get ()≤exp(-λδ/4CI^* )exp(C_1λρ̃I^∗)≤exp(-λδ I^∗/(8C)).Recall δ=ρ̃^.We can choose λ=4Cδ^-1 andobtain that( )≤exp(-I^*/2).The same high probability bounds for the other term of (<ref>) can be obtained similarly and hence omitted.Thenξ_pertub:= (∑_ω∈_d(ω)(log_1(ω)/_1(ω)+log_2(ω)/_2(ω))>∑_ω∈_d[(_2(ω)-_1(ω))-(_2(ω)-_1(ω))]+δ I^*)Hence we obtain that ξ_pertub≤ 4exp(-I^*/2 )=exp(-I^*/2(1-o(1)) ),which completes the proof. §.§ Proof of Theorem <ref>We introduce extra notations for convenience.For any ,^'∈[2]^n and ,^':[d]→[K], we defineh_0(,^'):=1/n∑_i=1^n𝕀( z_i z^'_i ) and h_0(,^'):=1/d∑_j=1^d𝕀( (j)^'(j) ) For each l∈[n],by definition there exists some π_(-l)∈ S_2 and ϕ_(-l)∈ S_K such that h_0(^(-l),π_(-l)(^∗) )=h(^(-l),^∗ )≤η_z and h_0(^(-l),ϕ_(-l)∘ )=h(^(-l),^∗ )≤η_σ. 
We now fix an i∈[n] and suppose z_i^∗=1.Without loss of generality,we assume π_(-i)=Id and ϕ_(-i)=Id. To avoid clutters of notations, we temporarily drop the superscript (-i) in _m^(-i)(j_1,j_2) and _m^(-i)(k,l) but keep in mind the independence structure between _i and other estimated parameters.The i-th network is mis-clustered when(ẑ_i^(-i)=2)= (∑_ω∈_d_i(ω)log_2(ω)/_1(ω)>∑_ω∈_d(_2(ω)-_1(ω) )).Since _i is independent of _1^(-i) and _2^(-i),it suffices to apply Lemma <ref>.Towards that end,we must bound the entry-wise error of ^(-i)_1 and ^(-i)_2,respectively. It thus suffices to bound the error of _1^(-i) and _2^(-i),respectively,as shown in the following lemma.Its proof is almost identical to the proof of Lemma <ref> except that the concentration inequalities should be replaced for Poisson random variables, which have sub-exponential tails. Note that X- X has a sub-exponential norm O(1+√(λ)) if X∼ Poisson(λ).We hereby omit the proof of Lemma <ref>. Under the same conditions of Theorem <ref>,there exists a large absolute constant C>0 such that,for each i∈[n],with probability at least 1-n^-C,we havemin_π∈𝔖_2,ϕ∈𝔖_Kmax_m∈{1,2}max_k,l∈[K]|_m^(-i)(k,l)-_π(m)(ϕ(k),ϕ(l))|=ρ̃·I^*/d(d+1)/2with some ρ̃=o(1).Denote the event _(-i):={(<ref>) holds}. Notice that (_(-i))≥ 1-n^-C and we now proceed conditioned on _(-i). Observe that,for m∈{1,2},we get∑_ω∈_d |_m(ω)-_m(ω)|=∑_ω∈_d|∑_k,l∈[K][_m(k,l)(_m(ω)=(k,l))-_m(k,l)(_m(ω)=(k,l))]|=∑_ω∈_d|∑_k,l∈[K][(_m(k,l)-_m(k,l))(_m(ω)=(k,l))(_m(ω)=_m(ω))-_m(k,l)(_m(ω) (k,l))(_m(ω)≠_m(ω))]|≤∑_ωmax_k,l∈[K]|_m(k,l)-_m(k,l)|+2K^2max_k,l∈[K]_m(k,l)∑_ω(_m(ω)≠_m(ω))≲ρ̃I^*+K^2d^2η_σθ_0≲ρ̃I^*,where we've used (<ref>) and η_σ=o( I^*/K^2d^2θ_0). By Lemma <ref>, we conclude, conditioned on the event _(-i) and the event defined in (<ref>), that(ẑ_i^(-i)≠ z_i^∗)≤exp(-I^∗/2(1-o(1))) The rest of proof is identical to that of Theorem <ref>. §.§ Proof of Theorem <ref>It suffices to study the probability that X_i is mis-clustered by the rule (<ref>). Without loss of generality,we study ẑ_1^(-1) and assume that z_1^∗=1.Then,by independence between X_1 and ( p_1^(-1),p_2^(-1)),we have (ẑ_1^(-1)=2)= (X_1log p_2^(-1)+(d-X_1)log(1- p_2^(-1))≥ X_1log p_1^(-1)+(d-X_1)log(1- p_1^(-1)))= (X_1log p_2^(-1)(1- p_1^(-1))/ p_1^(-1)(1- p_2^(-1))≥ dlog1- p_1^(-1)/1- p_2^(-1)) ≤ (1- p_2^(-1)/1- p_1^(-1))^d/2exp{X_1/2log p_1^(-1)(1- p_2^(-1))/p̃_2^(-1)(1- p_1^(-1))} ≤ (1- p_2^(-1)/1- p_1^(-1))^d/2(1-p_1+p_1( p_1^(-1)(1- p_2^(-1))/p̃_2^(-1)(1- p_1^(-1)))^1/2)^dimplying that log(ẑ_1^(-1)=2)= -I^∗/2+dlog(1+Δ/√((1-p_1)(1-p_2))+√(p_1p_2))where Δ:= (1-p_1)(√(1- p_2^(-1)/1- p_1^(-1))-√(1-p_2/1-p_1))+p_1(√( p_2^(-1)/ p_1^(-1))-√(p_2/p_1))It is easy to verify that when p_1^(-1)-p_1=o(p_1), p_2^(-1)-p_2=o(p_2),p_1,p_2=o(1),and together with (<ref>),we haveΔ/√((1-p_1)(1-p_2))+√(p_1p_2)=o(I^∗/d).As a result, log(ẑ_1^(-1)=2)=-(1-o(1))I^∗/2.Therefore,we have [ h(,^∗)|_0n]=exp{-(1-o(1))I^∗/2}.The rest of proof of the alignment and high probability bound is the same as the proof of Theorem <ref>.§ PROOF OF TECHNICAL LEMMAS§.§ Proof of Lemma <ref>Denote (ω):=(ω)-_z(ω) for any ω∈_d. 
Now suppose z^∗=1, thenwe get log(∑_ω(ω)log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))>∑_ωlog1-_1(ω)/1-_2(ω))≤min_λ>0[-λ∑_ωlog1-_1(ω)/1-_2(ω)+∑_ωlog(1-_1(ω)+_1(ω)e^-λlog_1(ω)(1-_2(ω))/_2(ω)(1-_1(ω)))]≤-1/2∑_ωlog1-_1(ω)/1-_2(ω)+∑_ωlog(1-_1(ω)+_1(ω)e^-1/2log_1(ω)(1-_2(ω))/_2(ω)(1-_1(ω)))=-[∑_ωlog√(1-_1(ω)/1-_2(ω))-∑_ωlog(1-_1(ω)+√(_1(ω)_2(ω))√(1-_1(ω)/1-_2(ω)))]=-∑_ωlog√(1-_1(ω)/1-_2(ω))/1-_1(ω)+√(_1(ω)_2(ω))√(1-_1(ω)/1-_2(ω))=-∑_ωlog1/√((1-_1(ω))(1-_2(ω)))+√(_1(ω)_2(ω))=-I^*/2where the first inequality holds due toMarkov's inequality and in the second inequality we take λ=1/2. Similarly, if z^∗=2, then the networkwill be mis-clustered if and only if ∑_ω(ω)log_1(ω)(1-_2(ω))/_2(ω)(1-_1(ω))>∑_ωlog1-_2(ω)/1-_1(ω)Due to the symmetry, the foregoing argument still stands without essential modification. This completes the proof. §.§ Proof of Lemma <ref>For any λ>0, The probability thatis mis-clustered by Lloyd's algorithm is bounded as(∑_ω∈_d(ω)log_2(ω)/_1(ω)> ∑_ω∈_d(_2(ω)-_1(ω)))≤exp(-λ∑_ω(_2(ω)-_1(ω)))∏_ωexp(λ(ω)log_2(ω)/_1(ω))=exp(-λ∑_ω(_2(ω)-_1(ω)))∏_ωexp(_1(ω)[e^λlog_2(ω)/_1(ω)-1]),where the last equality is from the moment generating function of Poisson distribution. By taking logarithmic of both sides and setting λ=1/2, we end up withlog(∑_ω∈_d(ω)log_2(ω)/_1(ω)> ∑_ω∈_d(_2(ω)-_1(ω)))≤ -1/2∑_ω(_2(ω)-_1(ω))+∑_ω(√(_1(ω)_2(ω))-_1(ω))=-1/2∑_ω(√(_1(ω))-√(_2(ω)))^2=-I^∗/2,where I^∗:=∑_ω(√(_1(ω))-√(_2(ω)))^2.This completes the proof. §.§ Proof of Lemma <ref>We adopt the standard Cramer-Chernoff argument to establish the lower bound.Define (ω):=(ω)log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))-D_KL(_1(ω)||_2(ω) ). Then for any ϑ >0, we have that (∑_ω∈_d(ω)log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))>∑_ω∈_dlog1-_1(ω)/1-_2(ω) )=(∑_ω∈_d(ω)>0 )≥(ϑ≥∑_ω∈_d(ω)>0 )=∑_∈∏_ω∈_dh_ω(x_ω) ≥exp(1/2∑_ω(ω))/exp(1/2ϑ)∑_∈∏_ωexp(1/2x_ω)h_ω(x_ω)/exp(1/2(ω))where :={∈ℝ^d(d+1)/2:ϑ≥∑_ωx_ω≥ 0} and,with slight abuse of notations,we denote x_ω the corresponding entry of .Here h_ω(·) is the probability mass function of (ω), and the last inequality holds by the definition of . 
Defineq_ω(x)=exp1/2xh_ω(x)/exp(1/2(ω)).It is readily seen that q_ω(x)≥ 0, and∑_xq_ω(x)=exp(1/2(ω))/exp(1/2(ω))=1.Hence q_ω(·) is a probability mass function for any ω∈_d.Let Y_ω be a sequence of independent random variables such that Y_ω∼ q_ω(· ), it follows from (<ref>) that(∑_ω∈_d(ω)log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))>∑_ω∈_dlog1-_1(ω)/1-_2(ω) )≥exp(1/2∑_ω(ω))/exp(1/2ϑ)∑_∈∏_ωq_ω(x_ω)=exp(1/2∑_ω(ω))/exp(1/2ϑ)(ϑ≥∑_ω∈_dY_ω≥ 0)=exp(-I^*/2-1/2ϑ)(ϑ≥∑_ω∈_dY_ω≥ 0),Let M_ω(· ) be the moment generating function of (ω), that is, for any t>0,M_ω(t) =exp(t(ω)log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))-tD_KL(_1(ω)||_2(ω) ))=[1-_1(ω)+_1(ω)e^t log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))]e^-tlog1-_1(ω)/1-_2(ω),and M^'_ω(t)= e^-tlog1-_1(ω)/1-_2(ω)[_1(ω)log_2(ω)/_1(ω)e^t log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω)) -(1-_1(ω))log1-_1(ω)/1-_2(ω)],then the moment generating function of Y_ω is given by exp(tY_ω)=∑_xexp(tx)exp(1/2x)h_ω(x)/exp(1/2(ω))=M_ω(t+1/2)/M_ω(1/2)Therefore, direct algebra givesthatY_ω =dexp(tY_ω)/dt|_t=0=M^'_ω(1/2 )/M_ω(1/2 )=_1(ω)log_2(ω)/_1(ω)e^1/2log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))-(1-_1(ω))log1-_1(ω)/1-_2(ω)/1-_1(ω)+_1(ω)e^1/2log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))=-log1-_1(ω)/1-_2(ω)+_1(ω)log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))e^1/2log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))/1-_1(ω)+_1(ω)e^1/2log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω)) Furthermore, we haveY^2_ω =d^2exp(tY_ω)/dt^2|_t=0=M^''_ω(1/2 )/M_ω(1/2 )=_1(ω)log^2_2(ω)/_1(ω)e^1/2log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))+(1-_1(ω))log^2 1-_1(ω)/1-_2(ω)/1-_1(ω)+_1(ω)e^1/2log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))By direct calculationwe have thatVar(Y_ω ) = Y^2_ω -( Y_ω)^2=_1(ω)(1-_1(ω))[log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))]^2e^1/2log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))/[1-_1(ω)+_1(ω)e^1/2log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))]^2Notice that _1(ω)e^1/2log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))=^1/2_1(ω)^1/2_2(ω)hence we obtainVar(Y_ω ) ≤√(_1(ω)_2(ω))[log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))]^2(1+o(1)),and Var(Y_ω ) ≥√(_1(ω)_2(ω))[log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))]^2(1-o(1)). We now show that Var(∑_ω∈_dY_ω )≍ I^*. Without loss of generality, assume _1(ω)>_2(ω) and recall that we have _1(ω)≍_2(ω)=o(1). Now write p:=_2(ω) and ϵ:=_1(ω)-_2(ω).We consider two cases.Firstly if ϵ =o(p), then we haveVar(Y_ω)≍ p[log1+ϵ/p+log1+ϵ/1-p-ϵ]^2≍ϵ^2/pOn the other hand we have I_ω≍√(p+ϵ)-√(p)^2≍ϵ^2/p, which implies that Var(Y_ω)≍ I_ω. Secondly if ϵ≍ p, we simply have Var(Y_ω)≍ I_ω≍ p. Hence we conclude that Var∑_ω∈_dY_ω≍∑_ω∈_d I_ω→∞. Furthermore, direct calculation gives that |Y_ω- Y_ω|<1 and hence |Y_ω- Y_ω|^3< Var(Y_ω ). Taking ϑ=√(∑_ωVar(Y_ω )) in (<ref>), by Berry–Esseen theorem we can proceed as (∑_ω∈_d(ω)log_2(ω)(1-_1(ω))/_1(ω)(1-_2(ω))>∑_ω∈_dlog1-_1(ω)/1-_2(ω) )=exp(-I^*-1/2√(∑_ω∈_dVar(Y_ω )))(1≥∑_ω∈_dY_ω/√(∑_ω∈_dVar(Y_j_1j_2 ))≥ 0)≥exp(-I^*/2-1/2√(∑_ω∈_dVar(Y_ω )))·((1≥(0,1)≥ 0)-C√(1/∑_ω∈_dVarY_ω))≥exp(-I^*/2(1+o(1)))provided that I^*→∞. §.§ Proof of Lemma <ref> For notational simplicity, it suffices for us to prove the version using all samples (instead of leave-one-out estimator), i.e., (,_1,_2)=Init({_i}_i=1^n) _m(k,l)=∑_i=1^n𝕀(z_i=m)∑_ω∈_d_i(ω)𝕀(_m(ω)=(k,l))/∑_i=1^n𝕀(z_i=m)∑_ω𝕀(_m(ω)=(k,l)) for any k,l∈[K] and m∈{1,2}. The proof for leave-one-out estimator defined in Algorithm <ref> is essentially the same. With slight abuse of notation, defineh_0(,^'):=1/n∑_i=1^n𝕀( z_iz^'_i), h_0(,^'):=1/d∑_j=1^d𝕀((j)^'(j)),for any ,^'∈[2]^n and ,^'∈[K]^d. For any fixed (,{_1,_2},{_1,_2})∈, letπ_0:=_π∈𝔖_2h_0(,π())and ϕ_0:=_ϕ∈𝔖_Kmax_m∈{1,2} h_0(_m,ϕ(_m)) where π():=(π(z_1),⋯,π(z_n)) and similar for ϕ(_m). 
Denote the event={h_0(,π_0())≤η_z, max_m∈{1,2}h_0(_m,ϕ_0(_m))≤η_σ}.Under conditions of Theorem <ref>, ()≥ 1-C_0n^-2.Without loss of generality, we assume π_0 and ϕ_0 to be identity maps. For any m∈{1,2} and k∈[K], define the following sets_m^*:={i∈[n]: z_i=m},_m:={i∈[n]:z_i=m}_m,k^*:={j∈[d]:_m(j)=k},_m,k:={j∈[d]:_m(j)=k}^*_m,k:=_m^*⊗_m,k^*,_m,k:=_m⊗_m,k.Denote n_m:=|_m^*| and d_m,k:=|_m,k^*|. It is readily seen that the sets _m and _m,k have the following properties:n_m∨ |_m|-ϖ_1n≤ |_m^*⋂_m|≤ n_m ∧ |_m||_m^*c⋂_m|≤ϖ_2 n d_m,k∨ |_m,k|-ω_1d≤ |_m,k^*⋂_m,k|≤ d_m,k∧ |_m,k||_m,k^*c⋂_m,k|≤ω_2dwhere ϖ_1,ϖ_2,ω_1,ω_2≥ 0 and ϖ_1+ϖ_2≤η_z, ω_1+ω_2 ≤η_σ. Define the collections of sets:_m:={⊂[n]: satisfying (<ref>) as _m },_m,k:={⊂[d]: satisfying (<ref>) as _m,k}_m,k:={⊗⊂[n]⊗[d]: ∈_m, ∈_m,k}.It turns out that|_m|≤∑_i=0^ϖ n∑_i_2=0^i∑_i_1=0^i n_m i_1n-n_m i_2≤ (ϖ n+1)(en_m/ϖ n)^ϖ n(en/ϖ n)^ϖ n≤exp(C_Iϖ nlog1/ϖ).and similarly |_m,k|≤exp(C_Sω dlog1/ω) for some absolute constant C_I,C_S>0. Hence |_m,k|≤ |_m|· |_m,k|≤exp(C_Iϖ nlog1/ϖ+C_Sω dlog1/ω).Now fix some _m,k∈_m,k. By definition, _m,k=_m⊗_m,k for some _m∈_m and _m,k∈_m,k. Denote n_m:=|_m| and d_m,k:=|_m,k|. Due to the property of _m,k, we have |n_m- n_m|≤η_z n and |d_m,k-d_m,k|≤η_σ d. Moreover, let _m,k be edges in _m,k, which consistsof independent Bernoulli random variables, where at least a fraction of (n_m-ϖ_1 n)· (d_m,k-ω_1d)(d_m,k-ω_1d+1)/n_m·d_m,k(d_m,k+1) ≥(1-ϖ_1 n/n_m)(1-ω_1 d/d_m,k)(1-ω_1 d/d_m,k+1)≥ (1-4αϖ_1)(1-2β Kω_1)^2follows Bernoulli(_m(k,k)), at most a fraction of n_m·ω_2d(ω_2d+1)/n_m·d_m,k(d_m,k+1)≤ (2β Kω_2)^2follows Bernoulli(p) with p_m<p<γ p_m, at most a fraction ofn_m·ω_2d·d_m,k/n_m·1/2d_m,k(d_m,k+1)≤ 4β Kω_2follows Bernoulli(q) with γ^-1 q_m<q<q_m, at most a fraction of ϖ_2 n·d_m,k(d_m,k+1)/n_m·d_m,k(d_m,k+1)≤ 4αϖ_2follows Bernoulli(p) with p_-m<p<γ p_-m where p_-m:=p_{1,2}\ m, at most a fraction of ϖ_2 n· (d_m,k/2)^2/n_m·1/2d_m,k(d_m,k+1)≤ 4αϖ_2follows Bernoulli(q) with γ^-1 q_-m<q<q_-m with q_-m:=q_{1,2}\ m. Then the cardinality of _m,k can be expressed as|_m,k|=∑_i∈_m ω∈_m,k∩_d_i(ω).According to the former analysis, we have that(|_m,k|/|_m|1/2|_m,k||_m,k+1|) ≤max_x∈[0,4αη_z] y∈[0,2β Kη_σ](1-x)[(1-y)^2 _m(k,k)+y^2γ p_m+ 2yq_m]+x[γ p_-m+q_-m].Given that η_σ≤ c min_ma_m-b_m/a_mK for some small absolute constant c>0and η_z=o( I^*/a̅(n/d∧ 1)), we havemax_x∈[0,4αη_z] y∈[0,2β Kη_σ](1-x)[(1-y)^2 _m(k,k)+y^2γ p_m+ 2yq_m]+x(γ p_-m+q_-m)=max_x∈[0,4αη_z](1-x)_m(k,k)+x(γ p_-m+q_-m)=max{_m(k,k),_m(k,k)+4αη_z[γ p_-m+q_-m-_m(k,k)]}≤max{_m(k,k),_m(k,k)+ρ(I^*/d(d+1)/2)},for some ρ→ 0. Under the samecondition, we obtain(|_m,k|/|_m|1/2|_m,k||_m,k+1|) ≥min_x∈[0,4αη_z] y∈[0,2β Kη_σ](1-x)[(1-y)^2 _m(k,k)+2yγ^-1q_m]=min_x∈[0,4αη_z](1-x)_m(k,k)= (1-4αη_z)_m(k,k)≥_m(k,k)-ρ( I^*/d(d+1)/2).As a consequence, we have that|(|_m,k|/|_m|1/2|_m,k||_m,k+1|)-_m(k,k)|≤ρ( I^*/d(d+1)/2).Notice that Var(|_m,k|)≤1/2(d_m,k+η_σ d)(d_m,k+η_σ d-1)[(n_m+η_zn)γ p_m+η_znγ p_-m]≲ n(d/K)^2(p_m+η_z p_-m).By Bernstein's inequality, we have with probability at least 1-e^-t,||_m,k|-|_m,k||≤ C_0(dK^-1√(n(p_m+η_z p_-m)t)+t).Take t=C_Iη_z nlog1/η_z+C_Sη_σ dlog1/η_σ+Clog n, then we have that with probability at least 1-n^-Cexp(-C_Iη_z nlog1/η_z-C_Sη_σ dlog1/η_σ),||_m,k|-|_m,k|/|_m|1/2|_m,k||_m,k+1|| ≲K/√(n)d√((p_m+η_z p_-m)(nη_z log1/η_z+dη_σlog1/η_σ+log n))+K^2(η_z log1/η_z/d^2+η_σlog1/η_σ/nd +log n/nd^2).Without loss of generality, we can assume η_z>1/n since otherwise we have η_z=0, implying exact recovery in the initialization. 
Hence n·η_zlog1/η_z≥log n and the term log n in (<ref>) can be ignored. Also, by assumption η_z≤C_3 ρ I^*/a̅K^2(d/n+1 ), whichimplies that η_z log1/η_z≤2C_3ρ I^*2/a̅ K^2(d/n+1). Otherwise, we have log1/η_z≥2I^*, or equivalently η_z≤exp(-2I^*),implying that a better than optimal rate has been already achieved. Together with the condition η_σlog1/η_σ≲ρ I^*2/a̅ K^2(d/n+1)^2, we conclude that the first term of of (<ref>) is of order O(ρ I^*/d(d+1)/2).For the second term, notice that I^∗≲∑_ω∈_d(_1(ω)-_2(ω))^2/_1(ω)∨_2(ω)≲∑_ω∈_da̅/d(n∧ d)≲a̅(1+d/n).As a result, η_z log1/η_z≤ 2C_3ρ I^*2/a̅ K^2(d/n+1)≲ρ I^∗/K^2, η_σlog1/η_σ≲ ρ I^*2/a̅ K^2(d/n+1)^2≲ρ I^∗/K^2(d/n+1),implying that K^2(η_z/d^2log1/η_z+η_σ/ndlog1/η_σ)=O(ρ I^∗/d^2). Moreover, if K^2log n/n→ 0, we have K^2 log n/nd^2=o(I^∗/d^2),since I^∗→∞,then the second term of (<ref>) is of order O(ρ^' I^*/d(d+1)/2) for some ρ^'→0. Finally, taking a union bound over _m,k would lead to ||_m,k|/|_m|1/2|_m,k||_m,k+1|-_m(k,k)|≤ρ̃( I^*/d(d+1)/2).with probability at least 1-n^-C, where ρ̃=ρ∨ρ^'=o(1). It remains to the consider the estimation of _m(k,l). To that end, we additionally define_m,k,l:={⊗⊗^'⊂[n]⊗[d]⊗[d]: ∈_m, ∈_m,k,^'∈_m,l},for k l∈[K] with |_m,k|≤exp(C_Iϖ nlog1/ϖ+2C_Sω dlog1/ω). Fix some _m,k,l=_m⊗_m,k⊗_m,l∈_m,k,l. Let _m,k,l be edges between _m⊗_m,k and _m⊗_m,l. Analogue to the previous case, since η_z=O(ρ·I^*/a̅(n/d∧ 1 )), we have that(|_m,k,l|/|_m||_m,k||_m,l|) ≤max_x∈[0,4αη_z] y∈[0,2β Kη_σ](1-x)[(1-y)^2 _m(k,l)+2yγ p_m+ 2yq_m]+x(γ p_-m+q_-m) ≤max{_m(k,l),_m(k,l)+ρ( I^*/d(d+1)/2)}, and (|_m,k,l|/|_m||_m,k||_m,l|) ≥min_x∈[0,4αη_z] y∈[0,2β Kη_σ](1-x)[(1-y)^2 _m(k,k)+2yγ^-1q_m] ≥_m(k,l)-ρ( I^*/d(d+1)/2).We then arrive at|(|_m,k,l|/|_m||_m,k||_m,l|)-_m(k,l)|≤ρ( I^*/d(d+1)/2).and Var(|_m,k,l|)≤ (d_m,k+η_σ d)(d_m,l+η_σ d)[(n_m+η_zn)γ p_m+η_znγ p_-m]≲ n(d/K)^2(p_m+η_z p_-m).The remaining analysis is almost identical and hence omitted. *□We introduce the following lemma <cit.>. Let=∑_i=1^N_i where _i's are independent m× m random matrices with _i=0. Then for a universal constant C and any t≥ 0, (≥σ()+C(v( )^1/2σ ( )^1/2(log m)^3/4+σ_*( )t^1/2+R()^1/3σ()^2/3t^2/3+R()t) )≤ me^-t where we define σ():=max{^⊤^1/2,^⊤^1/2}σ_*():=sup_v=w=1[|v w|]^1/2v():=Cov()^1/2,where Cov()∈^m^2× m^2 and Cov()(ij,kl)=[(i,j)(k,l)]R():=max_1≤ i≤ N_i_∞ Decompose _i=_i^u+_i^l for i∈[n] where _i^u(j_1,j_2)=(j_1≤ j_2)_i(j_1,j_2) for j_1,j_2∈[d]. Then we have ∑_i=1^n(_i-_i)≤∑_i=1^n(^u_i-_i^u)+∑_i=1^n(_i^l-_i^l). We first bound ∑_i=1^n(^u_i-_i^u). Now we write _i^u:=_i^u-_i^u for i∈[n]. Notice that :=∑_i=1^n_i^u=∑_i=1^n∑_j_1≤ j_2_i(j_1,j_2)e_j_1e^⊤_j_2=∑_i=1^n∑_j_1≤ j_2∈[d]_ij_1j_2where _ij_1j_2=_i(j_1,j_2)e_j_1e^⊤_j_2. To apply Lemma <ref>, notice that ^⊤=∑_i=1^n_i^u⊤_i^u+∑_i j^n_i^u⊤^u_j=∑_i=1^n_i^u⊤^u_iwhere for i∈[n] and j_1,j_2∈[d], (_i^u⊤^u_i)(j_1,j_2)=∑_l=1^d[_i^u(l,j_1)]^2, if j_1=j_2 0,o.w.Hence ^⊤ is a diagonal matrix with ^⊤≤ ndp̅. Due to symmetry, the same bound holds for ^⊤ and thus σ()≤√(ndp̅). Then we consider the (ij,kl)-th entry of Cov(), which takes form of Cov()(ij,kl)=∑_l^'=1^n^u_l^'(i,j)^u_l^'(k,l)=∑_l^'=1^n[_l^'^u(i,j)]^2, if i=k,j=l 0,o.w.Hence Cov() is a diagonal matrix with Cov()≤ np̅. Hence we have σ_*()≤ v()≤√(np̅), where the first inequality can be found in <cit.>. It suffices to notice thatR()=max_i∈[n],j_1,j_2∈[d]_ij_1j_2_∞≤ 1Lemma <ref> entails thatwith probability at least 1-(n∨ d)^-4,∑_i=1^n(^u_i-_i^u)≲√(ndp̅)provided that d≳log^3(n∨ d) and √(ndp̅)≳log^2(n∨ d). 
The bound for ∑_i=1^n(^l_i-_i^l) is almost identical except that the diagonal term is 0, thus the proof is omitted.§.§ Proof of Proposition <ref>The proof consists of two parts. Network labels recovery. The proof can directly adapted from the result for initialization in <cit.>. Notice that <cit.> requiresn≤ d, which can be removed by carefully examining the proof therein. Here we just state the revised version without the proof, adapted to our notations. Suppose that (n∧ d )dp̅≥log (n∨ d). Denote m_1=m_2=d and m_3=n. Then for k=1,2,3 we have{-_k,δ≥ t}≤2/(n∨ d)^4+10log^2 (n∨ d)⌈log_2δ^2m_1⌉[exp(-t^2/C_3p̅)+exp(-3t/C_4δ)]provided t≥max{C_1, C_2δ√(m_k)log (n∨ d)}√((n∨ d)p̅)log (δ^2m_k)log(n∨ d) for some absolute constants C_1,C_2,C_3,C_4>0. We also introduce the following lemma establish the concentration for sum of Bernoulli random matrices. Suppose _1,⋯,_n∈{0,1}^d× d are independentsymmetric Bernoulli matrices with independent entries up to symmetry. Denote p̅:=_1_∞ and suppose d≥ C log^3(n∨ d) for some absolute constant C>0, then we have (∑_i=1^n(_i-_i)≥ C_0√(ndp̅))≤ 1-(n∨ d)^-3provided that √(ndp̅)≥ C_1log^2 (n∨ d) for some absolute constant C_1>0. As a result, we can essentially first follow the proof of Lemma 5.6 in <cit.> but substitute the proof for min_∈_r- with Lemma <ref> (hence no log factor on the condition of σ_r(n_1^∗_1+n_2^∗_2 )) and substitute the incoherent norm with Lemma <ref>, and then follow the proof of the first part of <cit.> to obtain that with probability at least 1-(n∨ d)^-3,h(,^∗)≤Cκ_0^6r^3log^2(κ_0r)log^4(n∨ d)/(n∧ d)dp̅=Cκ_0^6r^3log(κ_0r)log^4(n∨ d)/a̅provided that σ_r(n_1^∗_1+n_2^∗_2 )≳√(ndp̅). Hence is a consistent estimator of ^∗, i.e., h(,^∗)=o(1) as long as a̅≫κ_0^6r^3log(κ_0r)log^4(n∨ d). Local memberships recovery. Without loss of generality, we assume h_0(,^∗)=h(,^∗). Denote _i:=_i-_z_i^∗ for i∈[n] and ň_m:=∑_i=1^n(ž_i=m) for m∈{1,2}. Letdenote the event in (<ref>) we continue on . For m∈{1,2} we haveň_m=∑_i=1^n(ž_i=m)≥∑_i=1^n(z_i^∗=m)-∑_i=1^n( z_i^∗ž_i)≥ n_m^∗-nh(,^∗)≥ (1-c_0)n_m^∗for some constant c_0∈[0,1), where the last inequality holds if a̅≫κ_0^6r^3log(κ_0r)log^4(n∨ d). Then for m∈{1,2} we have1/ň_m∑_i:ž_i=m_i =1/ň_m∑_i=1^n(ž_i=m)_z_i^∗+1/ň_m∑_i=1^n(ž_i=m)_i=_m+_+_+1/ n_m^∗∑_i=1^n( z^∗_i=m)_iwhere _:=(ň_m)^-1∑_i=1^n(ž_i=m)_z_i^∗-_m and _:=(ň_m)^-1∑_i=1^n(ž_i=m)_i-( n_m^∗)^-1∑_i=1^n( z^∗_i=m)_i. First note that_ =1/ň_m∑_i=1^n(ž_i=m)(_z_i^∗-_ž_i)=1/ň_m∑_i=1^n(ž_i=m,z_i^∗ m)(_z_i^∗-_ž_i)≤n h(,^∗)/ň_m_1-_2≲ h(,^∗)_1-_2Meanwhile we have_ ≤1/ň_m∑_i=1^n(ž_i=m)_i-1/ň_m∑_i=1^n( z^∗_i=m)_i+1/ň_m∑_i=1^n( z^∗_i=m)_i-1/n_m^∗∑_i=1^n( z^∗_i=m)_i≲1/n_m^∗∑_i=1^n((ž_i=m)-(z_i^∗=m) )_i+(1/ň_m-1/n_m^∗ )∑_i=1^n( z^∗_i=m)_i≤1/n_m^∗∑_i=1^n(ž_i m, z_i^∗= m)_i+1/n_m^∗∑_i=1^n(ž_i=m, z_i^∗ m)_i+c_0/(1-c_0)n_m^∗∑_i=1^n( z^∗_i=m)_iFor the last term of (<ref>), by Lemma <ref> we have with probability at least 1-(n∨ d)^-3,∑_i=1^n( z^∗_i=m)_i≲√(n_m^∗ dp̅)It suffices to bound the first two terms of (<ref>). The following lemma will be needed.We have with probability at least 1-4(n∨ d)^-3 for m∈{1,2}, ∑_i=1^n(ž_i=m,z_i^∗ m)_i≲√(n(n ∨ d)p̅)log^2(n ∨ d)log (n )and the same bound also holds for ∑_i=1^n(ž_i m,z_i^∗= m)_i.Using Lemma <ref> and eq. (<ref>), we have with probability at least 1-5(n∨ d)^-3,_ ≲√(dp̅/n)+√((d/n+1)p̅)log^2(n ∨ d)log n≲√((d/n+1)p̅)log^3(n∨ d)Collecting all pieces above, we obtain1/ň_m∑_i:ž_i=m_i-_m≲√((d/n+1)p̅)log^3(n∨ d) +h(,^∗)· dp̅with probability at least 1-5(n∨ d)^-3. 
Denote _m=SVD_K(∑_i:ž_i=m_i ) and notice that col(_m(_m^⊤_m )^-1/2) span the singular space of _m (or _m).Following the proofof first claim of Theorem 5.4 in <cit.> and applying Davis-Kahan theorem, we obtain that with probability at least 1-5(n∨ d)^-3, h(_m,_m) ≤min_∈_K_m-_m(_m^⊤_m )^-1/2^2≤(d/n+1 )p̅log^6(n∨ d)+h^2(,^∗)· (dp̅)^2/σ_K^2(_m )≲log^6(n∨ d)/a̅+κ_0^6r^3log(κ_0r)log^4(n∨ d)/a̅^2where the last inequality holds as σ_K(_m)=σ_K(_m_m_m^⊤ )≥σ_K^2(_m)σ_K(_m)≳ dp̅ under assumption (A1) and (A3), and the fact that a̅≫log^6(n∨ d) and eventhold. Notice that the above argument holds for m∈{1,2} and hence the proof is completed by taking a union bound on .§.§ Proof of Lemma <ref>The following lemma is modified from Lemma <ref>, whose proof is deferred to Section <ref>. Let=∑_i=1^N_i where _i's are independent m^'× m random matrices with_i=0. Then for a universal constant C and any t≥ 0, (≥σ()+C(v( )^1/2σ ( )^1/2(logm̅)^3/4+σ_*( )t^1/2+R()^1/3σ()^2/3t^2/3+R()t) )≤m̅e^-t wherewe define m̅:=max{m, m^'}, v():=Cov()^1/2,where Cov()∈^m^' m× m^' m and Cov()(ij,kl)=[(i,j)(k,l)] and σ(), σ_*(),R() the same as those in Lemma <ref>.Due to the independence betweenand , we temporarily viewas fixed. Notice thatis slice-wise symmetric, we can decompose it as =_u+_l where _u(j_1,j_2,i):=(j_1≤ j_2)[_i(j_1,j_2)-_z_i^∗(j_1,j_2) ] and _l:=-_u. Notice that both _u and _l have independent entries with many zeros. Also, _3()(⊗ )≤_3(_u)(⊗ )+_3(_l)(⊗ ).We first consider _3(_u)(⊗ ). For simplicity we letΩ_u:={j∈[d^2]:_3(_u)(i,j)=0,∀ i∈[n]}, which is fixed and |Ω_u|=d(d+1)/2 by construction. Denote =∈^n× r^2 with =_3(_u)∈^n× d^2 and =⊗∈^d^2× r^2, we have =∑_i=1^n∑_j=1^r∑_l=1^d^2(i,l)(l,j)e_ie_j^⊤ =∑_i=1^n∑_l=1^d^2_ilin form of Lemma <ref> with _il=(∑_j=1^r(l,j))(i,l)e_ie_j^⊤ and N=nd^2. It suffices to compute σ(), σ_*(), v() and R(), respectively. First of all, we consider the upper bound for σ(). Observe that^⊤=^⊤(^⊤) ≤(^⊤)where the inequality holds since =⊗ is orthonormal. Further notice that the (i,j)-th entry of ^⊤∈^d^2× d^2 can be expressed as ∑_l=1^n(l,i)(l,j)=∑_l=1^n[_3(_u)(l,i)]^2, if i=j∈Ω_u 0,o.w.This indicates that ^⊤ is a diagonal matrix with its maximum entry bounded by np̅ and hence ^⊤^1/2≲√(np̅). On the other hand, ^⊤=^⊤^⊤ and the (i,j)-th entry of ^⊤^⊤∈^n× n can be expressed as ∑_l_2=1^d^2∑_l_1=1^d^2(^⊤)(l_1,l_2)(i,l_1)(j,l_2)= ∑_l∈Ω_u(^⊤)(l,l)[_3(_u)(i,l)]^2, if i=j 0,o.w.Again, ^⊤^⊤ is a diagonal matrix with its maximum entry bounded by ∑_l∈Ω_u(^⊤)(l,l)[_3(_u)(i,l)]^2Observe that |(^⊤)(i,j)|=|∑_l=1^r(i,l)(j,l)|≤ rδ^2, hence|(^⊤)(l,l)|=|(^⊤⊗^⊤)(l,l)|≤ r^2δ^4. Thus ^⊤^1/2≲ rδ^2d √(p̅)≤ (κ_0r)^2√(p̅). Hecne provided that n≳ (κ_0r)^4, we have σ()≤√(np̅). Next, we consider an upper bound for v(). By definition, the (ij,kl)-th entry of the covariance matrix Cov()∈^nr^2× nr^2 can be written as_ij_kl=∑_l_1=1^d^2∑_l_2=1^d^2(l_1,j)(l_2,l)(i,l_1)(k,l_2)= ∑_l^'∈Ω_u(l^',j)(l^',l)[_3(_u)(i,l^')]^2, if i=k 0,o.w.This implies that Cov() admits the following block diagonal structureCov()= ccccccccccc11⋯1r^2 21⋯2r^2⋯n1⋯nr^2 c(cccccccccc) 11⋮Cov((1,:))0_d× d⋯0_d× d1r^2 21⋮0_d× d Cov((2,:))⋯0_d× d2r^2⋮⋮ ⋮⋱⋮n1⋮0_d× d 0_d× d⋯Cov((n,:))nr^2For any i∈[n] and any u,v∈^r^2 such that u=v=1, we have |u^⊤Cov((i,:)v| =|∑_l_1=1^r^2∑_l_2=1^r^2u_l_1v_l_2_il_1_il_2|=|∑_l_1=1^r^2∑_l_2=1^r^2u_l_1v_l_2_il_1_il_2|=|∑_l∈Ω_u[_3(_u)(i,l)]^2∑_l_1=1^r^2u_l_1(l,l_1)∑_l_2=1^r^2v_l_2(l,l_2)|≤ r^4δ^4d^2p̅where the inequality holds since max_i,j|(i,j)|≤max_i,j,k,l|(i,j)(k,l)|≤δ^2. 
Thus we conclude that Cov((i,:)^1/2≲ r^2δ^2d√(p̅) and hence v()=Cov()^1/2≲κ_0^2r^3√(p̅).Then we have the following bound for R():R()=max_i∈[n],l∈[d^2]_il_∞≤max_l∈[d^2]|∑_j=1^r(l,j)|≤ rδ^2≤(κ_0r)^2/dMoreover, we have σ_*()≤ v()≲κ_0^2r^3√(p̅), cf. <cit.>. Collecting the above bounds and using Lemma <ref>, we obtain that with probability at least 1-(n∨ d)^-3,≤σ()+C(v( )^1/2σ ( )^1/2(log n)^3/4+σ_*( )log^1/2(n∨ d)+R()^1/3σ()^2/3log^2/3(n∨ d)+R()log(n∨ d)) ≲√(np̅)+κ_0r^3/2n^1/4√(p̅)(log n)^3/4+κ_0^2r^3√(p̅)(log n)^1/2+((κ_0r)^2/d)^1/3(√(np̅))^2/3log^2/3 (n∨ d)+(κ_0r)^2log (n∨ d)/d≲√(np̅)with the proviso that n/log n≳κ_0^4r^6 and √(nd^2p̅)≳ (κ_0r)^2log^2(n∨ d). The bound for_3(_l)(⊗ ) is almost identical and hence omitted. We conclude that _3()(⊗ )≤_3(_u)(⊗ )+_3(_l)(⊗ )≤ C√(np̅)for some absolute constant C>0 with probability at least 1-(n∨ d)^-3. *□ §.§ Proof of Lemma <ref>Without loss generality we bound the term ∑_i=1^n(z̃_i=1,z_i^∗1)_i. Consider some fixed s∈[n], let _s^n:={x∈{0,1/√(s)}^n:x_0≤ s} and E(s):=√(s)max_w∈_s^n∑_i=1^nw_i_i. Notice that |_s^n|=∑_k≤ sn k≲ n^s and for each w∈_s^n,max_w∈_s^n∑_i=1^nw_i_i=max_w∈_s^nsup_u=v=1u⊗ v⊗ w≤_3,1/√(s)We therefore obtain that(E(s)≥ t) =(max_w∈_s^n∑_i=1^nw_i_i≥t/√(s))≤(_3,1/√(s)≥t/√(s))≤2/(n∨ d)^4+10log^2 (n∨ d)⌈log_2δ^2d⌉[exp(-t^2/C_3sp̅)+exp(-3t/C_4√(s)δ)]for any t such thatt≥√(s)max{C_1√((n∨ d)p̅)log(n∨ d)log (δ^2d), C_2δ√(d(n ∨ d)p̅)log^2(n ∨ d)log (δ^2d )}.In other words, we have the following inequality holds:(E(s)≥ C√(s)δ√(d(n ∨ d)p̅)log^2(n ∨ d)log (δ^2d ) )≤ (n∨ d)^-4 Note that (<ref>) only holdsfor any given s∈[n]. Now consider s∈[1,n], let ϵ_j=2^j for j=0,1,⋯,k^*+1 with k^*=⌊log_2(n)⌋, then s∈⋃_j=1^k^*[ϵ_j,ϵ_j+1]. For any fixed j and s∈ [ϵ_j,ϵ_j+1], we have s≍ϵ_j≍ϵ_j+1, and (<ref>) holds up to change in constant C. Take a union bound over all j=0,1,⋯,k^*+1, we claim that(E(s)≥C^'√(s)δ√(d(n ∨ d)p̅)log^2(n ∨ d)log (δ^2d ) )≤log_2(n)· (n∨ d)^-4≤ (n∨ d)^-3holds for any random s∈ [1,n]. Thus we have with probability at least 1-(n∨ d)^-3,∑_i=1^n(z̃_i=1,z_i^∗1)_i≲√(nh(,^∗))·δ√(d(n ∨ d)p̅)log^2(n ∨ d)log (δ^2d )The bound for∑_i=1^n(z̃_i 1,z_i^∗= 1)_i is identical and hence omitted. The proof is completed by taking a union bound over m∈{1,2}.§.§ Proof of Lemma <ref>It suffices to extend Lemma <ref> to non-square case. Without loss of generality we can assume m^'<m (otherwise we can apply the same argument to ^⊤). Let=[; 0 ]∈^m× m,_i=[ _i;0 ]∈^m× m, i∈[N]i.e.,is constructed by adding m-m^' zero rows to . It is readily seen that =. Meanwhile, we haveσ()=max{([ ^⊤0 ][; 0 ])^1/2,([; 0 ][ ^⊤0 ])^1/2}=max{^⊤^1/2,^⊤^1/2}=σ( ) σ_*()=sup_v,w∈^m v=w=1[|vw|]^1/2=sup_ṽ∈^m^',w∈^m ṽ=w=1[|[ ṽ; 0 ][ w; 0 ]|]^1/2=sup_ v∈^m^',w∈^mv=w=1[|v w|]^1/2=σ_*( )For any i,k∈[m^' ] and j,l∈[m], Cov()(ij,kl)=[(i,j)(k,l)]=[ (i,j) (k,l)]. For any Cov()(ij,kl)=[ (i,j) (k,l)], i,k∈[m^' ], j,l∈[m] 0.o.w.implying that v()=Cov()^1/2=Cov()^1/2=v() withCov()∈^m^' m× m^' m. Moreover, it is easy to verify thatR()=R( ). Together with Lemma <ref>, the proof is completed.*□ §.§ Proof of Lemma <ref>Note that -^∗ is equal to the spectral norm of the (n+1)× (n+1) matrix :=((0, ^⊤-^∗⊤; -^∗, 0)).It suffices to prove the upper bound of .The proof is adapted from existing literature ofconcentration inequalities for sum of random matrices <cit.>. Denote Y_i:=X_i-m_i^∗,i∈[n] the centered Binomial random variables.By <cit.>,Y_i has a sub-Gaussian norm O(√(d/log(1/p_1))). For notational brevity, we denote _i:=(0, _i^⊤; _i, 0) where _i denotes the i-th canonical basis vector in ^n. 
Now if suffices to bound :=∑_i∈[n] Y_i_i,which is a sum of independent random symmetric matrices and Y_i_ψ_2=O(√(d/log(1/p_1))). Denote ϕ(x)=e^x-x-1.Following the same arguments as in <cit.>, we get (λ_max()≥ t)≤ trϕ(λ)/ϕ(λ t),where λ>0 is to be determined.Moreover,trϕ(λ)≤ tr(exp(∑_i=1^n log e^λ Y_i(ω)_i)-)For each i∈[n],similarly as <cit.>, we have e^λ Y_i_i≤ +λ^2(Y_i^2_i^2·e^λY_i_i-λY_i_i-1/λ^2Y_i_i^2)= +λ^2_i^2e^λ |Y_i|-λ|Y_i|-1/λ^2,where we used the fact _i=1. For any τ>0 and assume λ≤ c_0(d/log(1/p_1))^-1/2 for some small but absolute constant c_0>0 such that exp{4λ |Y_i|}≤ 2,we writee^λ |Y_i|-λ|Y_i|-1/λ^2≤ Y_i^2·e^λτ-λτ-1/λ^2τ^2+ |Y_i|^2/λ^2τ^2·(e^λ|Y_i|-λ|Y_i|-1)( |Y_i|≥τ) ≲ dp_1(e^λτ-λτ-1)/λ^2τ^2+ ^1/2|Y_i|^4/λ^2τ^2 e^4λ |Y_i|^1/4(|Y_i|≥τ) ≲ dp_1(e^λτ-λτ-1)/λ^2τ^2+ dp_1/λ^2τ^2exp{-c_1τ/√(d/log(1/p_1))},where the last inequality is by the tail bound of sub-exponential random variables and c_1>0 is a universal constant. Therefore,there exists a universal constant C_2>0 such thate^λ Y_i _i≤ +C_2(λ^2 dp_1(e^λτ-λτ-1)/λ^2τ^2+ dp_1/τ^2exp{-c_1τ/√(d/log(1/p_1))})_i^2 ≤ +2C_2λ^2 dp_1 _i^2≤exp{2C_2λ^2 dp_1 _i^2},where the second inequality holds if τ= C_0√(d/log(1/p_1))and we choose a λτ≤ 1, where C_0>0 is a large absolute constant. As a result,we get ∑_i,ωlog e^λ Y_i_i≤ 2C_2λ^2dp_1∑_i_i^2Denote _v:=∑_i_i^2=([n0;0 _n ])implying that _v≤ n and tr(_v)=2n.Then,exp(C_3λ^2 dp_1 _v)-≤C_3λ^2dp_1_v(1+C_3nλ^2dp_1/2!+(C_3nλ^2dp_1)^2/3!+⋯+(C_3nλ^2dp_1)^k/(k+1)!+⋯) ≤ _v/n·(e^C_3nλ^2 dp_1-1)and thustr(exp(∑_ilog e^λ Y_i_i)-)≤ tr(_v/n)·(e^C_3nλ^2 dp_1-1)≤ 2(e^C_3nλ^2 dp_1-1).Continuing from (<ref>), we get (λ_max()≥ t)≤ 2/ϕ(λ t)· e^C_3nλ^2 dp_1=2e^λ t/ϕ(λ t)· e^C_3nλ^2dp_1-λ t≤ 2(1+6/λ^2t^2)e^C_3nλ^2dp_1-λ t.By minimizing the exponent w.r.t. λ and with the constraint λ≤ c_0(d/log(1/p_1))^-1/2,we set λ:=min{t/(2C_3ndp_1), c_0(d/log(1/p_1))^-1/2} and get (λ_max()≥ t)≤ 2(1+6/λ^2 t^2)exp{-(-t^2/4C_3ndp_1⋀t/√(d/log(1/p_1))) }The above exponential term is meaningful only when t≥ C_4√(ndp_1) and t≥ C_4√(d/log(1/p_1)) for some large constants , in both cases, we have 6/(λ^2 t^2)≤ 6.As a result, we conclude that, for any t>0,(λ_max()≥ t)≤ 14exp{-(-t^2/4C_3ndp_1⋀t/√(d/log(1/p_1))) },which concludes the proof.§.§ Proof of Lemma <ref>We begin with K-means initial clustering.*K-means clustering. Clearly, the K-means clustering error, described by the Hamming distance h(^(-i), ^∗), depends on an upper bound of ^(-i)-^∗(-i). It suffices to bound -^∗. By <cit.>, the centered Binomial random variable X_i-dp_z_i^∗ is sub-Gaussian with a sub-Gaussian norm at the order of √(d/log(1/p_1)). The following lemma characterizes a sharp upper bound of -^∗, whose proof is relegated to the appendix. There exists a large absolute constant C>0 such that for any t>0(-^∗≥ t)≤ 14 exp{-(t^2/4C_3ndp_1⋀t/√(d/log(1/p_1))) } By Lemma <ref>,we get (-^∗≤ C_γ√(ndp_1+d/log(1/p_1)))≥ 1-10^-γ,where C_γ>0 is a large but absolute constant. We denote _0 the above event.By a standard analysis of K-means clustering error, conditioned on _0,for a small absolute constant c_0>0,we getmax_i∈[n] h(^(-i), ^∗)≤η_z:=C_1(p_1/d(p_1-p_2)^2+1/nd(p_1-p_2)^2log(1/p_1))≤ c_0,if d(p_1-p_2)^2≥ C_γ p_1 and nd(p_1-p_2)^2log(1/p_1)≥ C_γ for some large absolute constant C_γ depending on c_0 or γ only. 
By definition, if p_1≍ p_2=o(1) and dp_1≫ 1, we haveI^∗:=-2dlog[√((1-p_1)(1-p_2)+p_1p_2)]≍ d(p_1-p_2)^2/p_1,implying that p_1∧ p_2=Ω(I^∗/d).Following the same proof as Lemma <ref>, if η_z=o(I^∗/(d|p_1-p_2|)) and η_z log(1/η_z)=o(I^∗2/(dp_1)), we get with probability at least 1-n^-2 such that max_i∈[n] | p_1^(-i)-p_1|+| p_2^(-i)-p_2|=o(I^∗/d).Here, for ease of notations, we simply assume that ^(-i) is already properly aligned with ^∗ without considering the possible permutation in 𝔖_2. *Method of moments. Denote M_1=(p_1+p_2)/2 and M_2=(p_1^2+p_2^2)/2. By definition of p̂_1, we have |p̂_1-p_1|≤ |M̂_1 - M_1|+|M̂_2-M_2|+|M̂_1-M_1||M̂_1+M_1|/√(M_2-M_1^2)= |M̂_1-M_1|+2/|p_1-p_2|·[|M̂_2-M_2|+|M̂_1-M_1||M̂_1+M_1|]It suffices to upper bound |M̂_1-M_1| and |M̂_2-M_2|.Let us fix ^∗ as well as n_1 and n_2.By Chernoff bound, ({|n_1-n/2|≤ C√(nlog n)}⋂{|n_2-n/2|≤ C√(nlog n)})≥ 1-n^-2.Conditioned on ^∗,we writeM̂_1-n_1p_1+n_2p_2/n =1/nd∑_i: z_i^∗=1(X_i-dp_1) +1/nd∑_i: z_i^∗=2(X_i-dp_2)=_ dY_1-n_1dp_1/nd+Y_2-n_2dp_2/nd,where Y_1∼ Bin(n_1d,p_1) and Y_2∼ Bin(n_2d,p_2).By the concentration of Binomial random variables,({|Y_1-n_1dp_1|≤ C√(n_1dp_1log n)}⋂{|Y_2-n_2dp_2|≤ C√(n_2dp_2log n)})≥ 1-n^-2.By (<ref>) and (<ref>), we conclude that (|M̂_1-M_1|≤ C√(p_1log n/nd)+Cp_1√(log n/n))≥ 1-2n^-2.Similarly,conditioned on ^∗,we writeM̂_2-n_1p_1^2+n_2p_2^2/n =1/nd(d-1)[∑_i: z_i^∗=1(X_i^2-X_i-d(d-1)p_1^2)+∑_i: z_i^∗=2(X_i^2-X_i-d(d-1)p_2^2)]Observe that ∑_i: z_i^∗=1(X_i^2-X_i-d(d-1)p_1^2)=_ d∑_i:z_i^∗=1[∑_1≤ j_1≠ j_2≤ dZ_i, j_1Z_i,j_2-d(d-1)p_1^2],where Z_i,j, j∈[d] are i.i.d.Bern(p_1) random variables. The RHS of (<ref>) is a sum of symmetric order-2 U-statistics.By the classical de-coupling technique of U-statistics (see,e.g.,<cit.>),we have (| ∑_i:z_i^∗=1[∑_1≤ j_1≠ j_2≤ dZ_i, j_1Z_i,j_2- d(d-1)p_1^2] |>t)≤ C_2 (| ∑_i:z_i^∗=1[∑_1≤ j_1≠ j_2≤ dZ_i, j_1 Z_i,j_2-d(d-1)p_1^2] |>t)for any t>0 and Z_i,j's are independent copies of Z_i,j's.It thus suffices to bound the RHS of (<ref>),where the summation can be written as ∑_i:z_i^∗=1 [∑_1≤ j_1≠ j_2≤ dZ_i, j_1 Z_i,j_2-d(d-1)p_1^2]=_ d∑_i:z_i^∗=1(X_i X_i-d^2p_1^2)-∑_i:z_i^∗=1∑_j=1^d (Z_i,j Z_i,j-p_1^2),where X_i is an independent copy of X_i.We only show the upper bound of the first term in the RHS of (<ref>),since the second term can be bounded in the same fashion. We first fix X_i's and upper bound ∑_i:z_i^∗=1(X_i-dp_1) X_i. Conditioned on ^∗ and X_i's,for any λ>0,we have exp(∑_i: z_i^∗=1(X_i-dp_1)X̃_iλ)= e^-dp_1 λ∑_i:z_i^∗=1 X_i∏_i: z_i^∗=1 e^λ X_i X_i≤ e^-dp_1 λ∑_i:z_i^∗=1 X_i∏_i: z_i^∗=1 e^dp_1(e^λ X_i-1) ≤e^-dp_1λ S_1· e^dp_1∑_i: z_i^∗=1(e^λ X_i-1),where S_1:=∑_i: z_i^∗=1 X_i.Therefore,for any t∈(0,1), (∑_i: z_i^∗=1(X_i-dp_1) X_i≥ tdp_1 S_1)≤min_λ>0 e^-(1+t)λdp_1 S_1+dp_1∑_i: z_i^∗=1(e^λ X_i-1)λ= X_max^-1ln(1+t)≤exp(-dp_1(1+t)ln(1+t) S_1/ X_max+dp_1t S_1/ X_max),=(e^t/(1+t)^1+t)^dp_1 S_1/ X_max≤exp(-t^2dp_1 S_1/ X_max/3)where X_max:=max_i:z_i^∗=1 X_i and we used the fact (1+t)^a-1≤ at for a≤ 1 and t∈(0,1). A left tail can be similarly established.Conditioned on X_i and ^∗,we get (|∑_i: z_i^∗=1(X_i-dp_1) X_i|≤ C√(dp_1 S_1 X_maxlog n))≥ 1-n^-2.By the concentration property of Binomial random variables,conditioned on ^∗,we have ({| S_1- n_1dp_1|≤ C√(n_1dp_1log n)}⋂{ X_max≤ Cdp_1log n})≥ 1-n^-2.A similar bound can be derived for dp_1∑_i:z_i^∗ ( X_i-dp_1) can be derived by the concentration of Binomial random variables. 
By (<ref>) and (<ref>),we have (|∑_i:z_i^∗=1(X_i X_i-d^2p_1^2)|≤ C√(dp_1n_1dp_1dp_1log^2n))≥ 1-2n^-2,which,together with (<ref>)-(<ref>),implies that (| ∑_i: z_i^∗=1(X_i^2-X_i-d(d-1)p_1^2)|≤ C√(dp_1n_1dp_1dp_1log^2n))≥ 1-6n^-2.Finally,we conclude that with probability at least 1-13n^-2, |M̂_2 - M_2|≤ Cp_1^2√(log n/n)+C√(p_1^3log^2n/nd).By summarizing all the above results,we can conclude that |p̂_1- p_1|+|p̂_2-p_1|=o(I^∗/d) with probability at least 1-14n^-2 if I^∗≫ 1+p_1/|p_1-p_2|√(dp_1log n(dp_1+log n)/n). §.§ Proof of Lemma <ref>The proof of K-means is identical to that of Lemma <ref> by noticing that Poisson(θ_1) has sub-exponential norm upper bounded by O(1+√(θ_1)). *Method of moments.Denote θ_1:=M_1+2√(M_1-^2).Since =(θ_1^1/2+θ_2^1/2)/2+O(θ_1^-1/2) and √(θ_1)-√(θ_2)≫ 1, we can get|θ_1-θ_1 |=O(√(θ_1)/√(θ_1)-√(θ_2))=O(θ_1/θ_1-θ_2)=o(I^∗),where the last inequality is by the condition on I^∗. By definition of θ̂_1 and θ_1, we have |θ̂_1-θ_1|≤ |M̂_1 - M_1|+2|√(M_1-^2)-√(M̂_1-^2)| ≤ |M̂_1-M_1|+2|M_1-M̂_1|+|^2-^2|/M_1-^2+2|-|√(M̂_1-^2)It suffices to upper bound |M̂_1-M_1| and |-|.Let us fix ^∗ as well as n_1 and n_2.By Chernoff bound, ({|n_1-n/2|≤ C√(nlog n)}⋂{|n_2-n/2|≤ C√(nlog n)})≥ 1-n^-2.Conditioned on ^∗,we writeM̂_1-n_1θ_1+n_2θ_2/n =1/n∑_i: z_i^∗=1(X_i-θ_1) +1/n∑_i: z_i^∗=2(X_i-θ_2)=_ dY_1-n_1θ_1/n+Y_2-n_2θ_2/n,where Y_1∼ Poisson(n_1θ_1) and Y_2∼ Poisson(n_2θ_2). By the concentration of Poisson random variables,we have ({|Y_1-n_1θ_1|≤ C√(n_1θ_1log n+log^2n)}⋂{|Y_2-n_2θ_2|≤ C√(n_2θ_2log n+log^2n)})≥ 1-n^-2.By (<ref>) and (<ref>), we conclude that (|M̂_1-M_1|≤ C√(θ_1log n/n)+Clog n/n)≥ 1-2n^-2.If nθ_1≥ Clog n, then |M̂_1-M_1|≤ cM_1 in the above event for some small constant c>0.Since θ_1≫ 1,it only requires n≥ Clog n.Similarly,conditioned on ^∗,we write-n_1μ_1+n_2μ_2/n=1/n[∑_i: z_i^∗=1(√(X_i)-μ_1)+∑_i: z_i^∗=2(√(X_i)-μ_2)],where μ_1=√( Poisson(θ_1)) and μ_2=√( Poisson(θ_2)). By Chapter 6 (Exercise 6.12) of <cit.>,if X∼ Poisson(θ), then for any λ∈(0,1/2), logexp(λ (√(X)-√(X)))≤λ(e^λ-1)(X) 1/4X+1≤ λ (e^λ -1)θ1/X+1=λ (e^λ-1)θ1-e^-θ/θ≤λ (e^λ -1).It implies that √(X)-√(X) is sub-exponential with a sub-exponential norm bounded by O(1). Meanwhile, its variance is also bounded by O(1). By the concentration of sum of independent sub-exponential random variables, we conclude that conditioned on ^∗, (|1/n∑_i: z_i^∗=1(√(X_i)-μ_1)|≤ Clog n/√(n)⋂|1/n∑_i: z_i^∗=2(√(X_i)-μ_2)|≤ Clog n/√(n))≥ 1-n^-2. Then,(∑_i:z_i^∗=1(√(X_i)-μ_1)≥ nt)≤min_λ>0 e^-λ nt∏_i:z_i^∗=1exp(λ(√(X_i)-μ_1)) Putting together (<ref>)-(<ref>), we get(|-|≤ C√(θ_1log n/n)+Clog n/√(n))≥ 1-2n^-2.Recall that ≍√(θ_1).Therefore,|-|≤ c_1 if nθ_1≥ C_1log^2n for some small constant c_0>0, which can easily satisfied since θ_1≫ 1 and n≥ C_1log^2n. Finally, we conclude that with probability at least 1-2n^-2,|θ̂_1-θ_1|≤C(θ_1/(√(θ_1)-√(θ_2))^2+√(θ_1)-√(θ_2)/2)·√(θ_1log n+log^2n/n)≤ C(θ_1^2/(θ_1-θ_2)^2+θ_1-θ_2/√(θ_1))·√(θ_1log n+log^2n/n).Therefore, in the same event, we have θ̂_1-θ_1=o(I^∗) if I^∗≫θ_1/θ_1-θ_2+(θ_1^2/(θ_1-θ_2)^2+θ_1-θ_2/√(θ_1))·√(θ_1log n+log^2n/n) | http://arxiv.org/abs/2311.15598v1 | {
"authors": [
"Zhongyuan Lyu",
"Ting Li",
"Dong Xia"
],
"categories": [
"math.ST",
"cs.LG",
"cs.SI",
"stat.ME",
"stat.ML",
"stat.TH"
],
"primary_category": "math.ST",
"published": "20231127074850",
"title": "Optimal Clustering of Discrete Mixtures: Binomial, Poisson, Block Models, and Multi-layer Networks"
} |
Once there is a decision to rebalance or update a portfolio of funds, the process of changing the current portfolio into the target one involves a set of transactions that are amenable to optimization. This is particularly relevant when managers have to handle the implications of different types of instruments. In this work we present linear programming and heuristic search approaches that produce plans for executing the update. The evaluation of our proposals shows cost improvements over the compared base strategy. The models can be easily extended to other realistic scenarios in which a holistic portfolio management is required.
§ INTRODUCTION
In the context of individual managed accounts, portfolio updates are generally handled by back-office processes that execute the transactions needed to achieve the target allocation. Regardless of the investment strategy <cit.>, any recurrent allocation decision yields a target portfolio <cit.>, so a set of transactions is executed to change the current portfolio into the target one. For instance, a mean-variance optimization approach <cit.> will produce a new allocation for the next period, or a rule-based strategy might stick to a predefined allocation and trigger rebalancing <cit.> when the current weights drift away from the strategic asset allocation. When portfolios hold funds or equivalent collective investment schemes, the update process needs to optimize the transactions to minimize costs or to accommodate additional non-pecuniary investor preferences. The prevalence of robo-advisors <cit.> demonstrates the high levels of automation achieved for various rebalancing strategies. However, a more holistic portfolio management approach should consider that portfolio updates involve instruments of different types, denominated in different currencies or traded at different venues. Additionally, investors might have different accounts with different tax treatments or transaction restrictions. Consequently, a simple strategy such as “sell allocations above target weights and buy allocations under target weights” is no longer appropriate. We argue that the academic focus on incorporating costs into portfolio selection <cit.> has overshadowed the planning of portfolio updates. An optimized plan for updating portfolios benefits both investors and the back-office intermediaries involved in the transactions.
A particular scenario of interest is the Spanish tax-deferral regime known as 'Traspasos', where individual investors in mutual funds can defer capital gain taxes by using the proceeds from redeeming one mutual fund to subscribe shares of another. Rather than considering the redemption and the subscription as two trades, the joint operation is treated as a "switch", i.e., a transfer that does not trigger a tax event. However, this benefit does not apply universally to all collective investment schemes. The most remarkable exception is Exchange-traded Funds (ETFs).
Buying or selling ETFs in the market is treated as two separate trades; therefore, planning the portfolio update is more complex when the portfolio combines transferable mutual funds and exchange-traded instruments. Planning portfolio updates also gets complicated when the target allocation is decided for investors with multiple accounts <cit.>, holding different types of instruments and having diverse tax treatments. In this work we focus on developing solutions for the "Traspasos" use case, and leave for the discussion an analysis of how our approach applies to other scenarios. The main contributions of this work are twofold: (1) formalizing the portfolio update as a combinatorial problem, and (2) proposing the use of heuristic search to find the optimal update plan. The paper is structured as follows. Firstly, we describe the base case of planning fund switches for a portfolio holding transferable mutual funds. Secondly, we introduce the multi-type case, which presents a combinatorial problem due to the various transaction alternatives. We then model the problem as a search task, employing classical AI search algorithms for its resolution. We evaluate the benefits of the approach in terms of number of transactions and costs. Furthermore, we discuss potential extensions to inspire further research in this domain. Finally, we present the key insights we gained throughout the development of the project.
§ MUTUAL FUND SWITCHES
In the base case, we assume all portfolio holdings are mutual funds (i.e., they belong to the same type of instrument) and all update operations are transfers from one fund to another (i.e., a single type of transaction). We call this case the fund switching problem. Before solving the problem we obtain the money flows as the difference between the target portfolio and the current portfolio. The inputs to the problem are the outflows p_out, the inflows p_in, and a cost function c_ij that expresses the cost of switching one monetary unit from fund i to fund j. The fund switching problem can be easily modelled with linear programming (LP) as the balanced version of the so-called Hitchcock–Koopmans transportation problem <cit.>. In this transportation analogy, the goods are the money value of the funds (i.e., the decision variables), the supply sources correspond to the outflows and the demand destinations correspond to the inflows. The problem objective is to minimize the total cost of the switches (i.e., the transportation costs). The model is formally expressed in Eq <ref>, where x_ij is the amount transferred from i to j.

min Z = ∑_i=1^m ∑_j=1^n c_ij x_ij
s.t. ∑_j=1^n x_ij ≤ p_out_i   ∀ i
     ∑_i=1^m x_ij ≥ p_in_j   ∀ j
     x_ij ≥ 0   ∀ i, j

§ MULTI-TYPE TRANSACTION UPDATES
In the general case, the LP transportation problem is no longer applicable if, in addition to the switching transactions, there are other ways to move the money and the order or timing of the transactions is relevant. Let us consider the example of a managed account with 100k (€), which follows the model portfolio outlined in Table <ref>. The account is mandated to rebalance quarterly to the target weights. As the current allocations have deviated from the target, the indicated money outflows and inflows should be implemented to achieve the necessary rebalance. In this case, exchange-traded securities (i.e., ETFs and ETCs) can be bought or sold at any time during the day, but their allocated values cannot be transferred as a single operation as before.
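As a reference for the base switching model, the following is a minimal sketch using the CVXPY library (the same library employed later for the LP experiments). The function name, toy flow amounts and cost values are illustrative assumptions rather than part of the original formulation.

```python
import cvxpy as cp
import numpy as np

def plan_fund_switches(p_out, p_in, switch_cost):
    """Solve the fund switching problem as a balanced transportation LP.

    p_out: length-m array of money outflows (funds to be reduced).
    p_in: length-n array of money inflows (funds to be increased).
    switch_cost: m x n matrix with the cost of switching one monetary
                 unit from outflow fund i to inflow fund j.
    Returns the m x n matrix of switched amounts.
    """
    m, n = len(p_out), len(p_in)
    x = cp.Variable((m, n), nonneg=True)            # amount switched i -> j
    objective = cp.Minimize(cp.sum(cp.multiply(switch_cost, x)))
    constraints = [
        cp.sum(x, axis=1) <= p_out,                 # do not exceed each outflow
        cp.sum(x, axis=0) >= p_in,                  # cover every inflow
    ]
    cp.Problem(objective, constraints).solve()
    return x.value

# Toy usage: two outflows financing two inflows (amounts in monetary units).
amounts = plan_fund_switches(
    p_out=np.array([3000.0, 2000.0]),
    p_in=np.array([4000.0, 1000.0]),
    switch_cost=np.array([[0.0005, 0.0010],
                          [0.0008, 0.0003]]),
)
```

For the multi-type example above, however, this model alone no longer captures all the ways in which the money can be moved.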
Instead, we must combine a buy transaction with a preceding sell transaction to achieve an equivalent outcome. Similarly, when allocating value from transferable funds to a non-transferable security, we must first redeem the shares and utilize the resulting proceeds to fulfill the new allocation.The individualsubscription (buy) or redemption (sell) of mutual funds are transactions with typically end-of-day valuation. For the purpose of the model they are equivalent to ETFs market trades that increase or decrease the account cash balance. One alternative for addressing this problem is to modify the LP model to incorporate a second way for "transferring", which essentially represents a combination of sell and buy transactions that can be separated during execution. Specifically, the model is updated as follows: * include a matrix x'_ij of duplicated decision variables to differentiate between switches and trades * extend objective function with a synthetic cost function c'_ij that aggregates the cost of buying and selling* Update the constraints to account for moves from both types of transactions.* introduce constraints x_k,r = 0 for any non-transferable instruments k or r. Under the additivity assumption of LP programs, this proposal is also equivalent (assuming non-equal costs) to pre-select the cheapest option between c_ij and c'_ij for each i and j and compute the solution in a single set of decision variables. Table <ref> shows a solution assuming the plausible scenario in which all possible switches are cheaper than sell+buy transactions.The limitation of both proposals is that they are not aware of the number of transactions. On the one hand, transaction cost would be sub-optimal if we have to include a fixed cost per transaction, as it frequently occur with brokerage fees.On the other hand, having fewer transactions will simplify the operational burden.One way to partially overcome this problem is to post-process the LP solution, collecting from x'_ij all amounts in row i as a single sell, and all amounts of column j as a single buy.Reviewing Table <ref>, we would have (Buy 48.04 EM) as a result of grouping BT→EM and GD→EM from x'_ij assignments. However, this post-process adjustment can not guarantee that the LP solution is optimal in the number of transactions. Consider for instance the extreme case of having both switching and tradingcosts equal for all flows.An optimal LP solution could split the flows assigning arbitrary non-zero values in all decision variables, both x_ij and x'_ij.This (i× j× 2) is obviously above the optimal number of transactions and the post-process adjustment only takes care of buy and sell trades independently. Given that in many cases this proposal should provide a fairly good solutions, we will consider it for comparison in the evaluation section.§ STATE SPACE MODEL In this section we describe the proposal to formalize the multi-type transaction updates. We define the task by means of graph search in a state space. A search task is composed of the following elements: * 𝒮 is a finite set of numeric state variables derived from the list L of the portfolio holdings. 
Variables in 𝒮 are partitioned in: * {u_1,…,u_n} as the outflow variables* {v_1,…,v_m} as the inflow variables* {w} an additional special variable representing the cash balanceA state S in the state space is a value assignment for all variables in 𝒮.* τ: L →{⊤,} is boolean function representing if a given holding in L is transferable* C_K: L →ℝ_+ is the individual trading cost as a percentage of the transaction amount. * C_S: L × L →ℝ_+ is the combined switching cost as a percentage of the funds being transferred from one holding to another* C_F: L →ℝ_+ is the individual fixed cost, independent of the instrument type* I is the initial state, which corresponds to the initial flowsWe use the notation u[x] (or v[x]) to indicate the value assignment of x ∈ L, when we want to refer to variables by the holding they represent.The definition of the goal is implicit, meaning that for all tasks, the objective is to find a state G_0 with no pending flows. This is, a state in which u_i=0 and v_j=0 for all i and j. Now we have to define the set of actions (transactions) and the associated transition function that allow us to change one state into another.Note that 𝒮 entails an infinite state space. However, knowing in advance the desired values in the goal state, it does not make sense to allow actions to perform numeric changes arbitrarily.Let us consider the SELL action scenarios.If an outflow x in any given state is not zero, the SELL x action should only consider the whole amount in u[x], given that if an additional SELL is needed it could have been included in the first SELL. This does not mean that the action will always refer to the same amount. As counter example, a former SWITCH can reduce u[x] and then, the SELL action will still refer to the (pending) whole amount, but different from the original statement.This simplification is also inspired from the similarities with the transportation domain. The general idea consists of forcing the model to consider only the actions that move the quantities that are either required or available.Following this idea, the transition function that changes state S into S' when an action is applied, can be succinctly expressed as the set of parametrized operators indicated in Figure <ref>. A solution to this task is the update plan, the sequence of actions π = (a_1, …, a_k) that transform the initial flows into the target G_0. Table <ref> shows an update plan for the running example.§ SEARCH ALGORITHM AND HEURISTICSIn this section we describe the alternatives to solve a task in the search model described before. The first point to emphasize is that all actions have at least one effect that makes moves towards G_0and there is no effect in the opposite direction.Therefore, a depth-first search (DFS) will provide a first sub-optimal solution without any single backtracking step.Any further exploration that consider this upper-bound should provide subsequent solutions of improving costs. Thus, the first alternative we examine is performing a Depth-first Branch and Bound (DFBnB) algorithm <cit.> until the search is exhausted or a search limit is reached (i.e., execution time or node generation count). In the case the algorithm exhausts the search space, the last solution found corresponds to the optimal solution. However, the state space produces a lot of symmetries, which will cause DFBnB to scale poorly in terms of the state size.Another option is to use a heuristic search algorithm such as A* <cit.>. 
If we provide an admissible heuristic, the solution found by A* is optimal. To derive such a heuristic function (Eq. <ref>), we compute, for any given state S, a lower bound on the cost of reaching G_0. Fixed fees are associated with each position, so at least one action for each of the pending non-zero variables (L_out(S) and L_in(S)) will have to pay this fee (Eq. <ref>). For the variable costs, we consider the minimum between trading and switching funds in C_min, and only the pending outflow variables L_out enter the h_rel computation (Eq. <ref>), to avoid double counting in hypothetical switching transactions. In the end, the heuristic value for S is the sum of the estimated lower bounds for the fixed and the variable costs.

L_out(S) = { x ∈ L | u[x] > 0 }
L_in(S) = { x ∈ L | v[x] > 0 }
C_min(x) = min{ C_K(x), min_y ∈ L C_S(x, y) }
h_fix(S) = ∑_x ∈ L_out(S) C_F(x) + ∑_x ∈ L_in(S) C_F(x)
h_rel(S) = ∑_x ∈ L_out(S) C_min(x) · u[x]
h_fee(S) = h_fix(S) + h_rel(S)

This heuristic function can be computed efficiently during search. C_min is not state-dependent, so it is computed in advance; the remaining terms are arithmetic computations that are linear in the state size. Extending the previous reasoning, we derive an estimate of the number of transactions needed to achieve the goal (Eq. <ref>). Even in the best-case scenario of making only switches, at least h_count transactions are needed to complete the update.

h_count(S) = max{ |L_in(S)|, |L_out(S)| }

Nonetheless, we will not use this heuristic: we want the minimum number of transactions, but only as a secondary objective once the optimal transaction cost is determined.

§ EXPERIMENTAL EVALUATION
In this section we present the experiments conducted to evaluate the various solutions proposed in the previous sections. The main objective is to verify whether the linear programming and search approaches produce better solutions than the one generated by a naive approach in the multi-type transaction scenario. The software was implemented in Python. The LP approaches were modelled using the CVXPY library <cit.>. The heuristic search approach was implemented as described in the state-space model section. We used the SimpleAI library functionality to implement the domain transition, cost and heuristic functions. The A* algorithm is included in the library; the DFBnB algorithm is not, so we coded it using the library's search data structures. SimpleAI is, as stated by its authors, a more stable, pythonic implementation of the algorithms in Russell and Norvig's book <cit.>. The list of evaluated algorithms/configurations is:
* Naive: The simple baseline of creating a plan by selling all positions with outflows and buying all positions with inflows.
* LP+: The LP model, along with the post-process adjustment for grouping market transactions, as described at the end of Section <ref>.
* DFBnB: Our implementation of DFBnB, providing an initial depth limit equal to the length of the naive solution. This length is computed in advance as it is the number of flows in I. Additionally, the limit of generated nodes is set to 100k.
* Astar-fee: Running A* with the h_fee heuristic function (Eq <ref>).
For all configurations, we measured the solution cost and the plan length. For the search algorithms, we measured the number of generated nodes. For DFBnB, we also measured the same metrics at the first solution found. First, we analyze the overall performance and the task scalability.
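As a concrete reference for the implementation just described, the h_fee bound translates almost directly from the equations above. The sketch below reuses the State fields u and v from the earlier successor generator and assumes non-switchable pairs are encoded with an infinite switch cost; the container choices are assumptions of this illustration rather than the exact SimpleAI code.

```python
def precompute_c_min(c_trade, c_switch):
    """C_min(x) = min(C_K(x), min_y C_S(x, y)), computed once per task;
    pairs that cannot be switched are assumed to hold float('inf')."""
    return [min(c_trade[x], min(c_switch[x])) for x in range(len(c_trade))]

def h_fee(state, c_fixed, c_min, eps=1e-9):
    """Admissible lower bound on the remaining transaction fees of a state."""
    pending_out = [x for x, amt in enumerate(state.u) if amt > eps]
    pending_in = [x for x, amt in enumerate(state.v) if amt > eps]
    # h_fix: every pending position requires at least one fee-bearing action.
    h_fix = sum(c_fixed[x] for x in pending_out) + sum(c_fixed[x] for x in pending_in)
    # h_rel: cheapest per-unit cost applied to pending outflows only,
    # so hypothetical switches are never double counted.
    h_rel = sum(c_min[x] * state.u[x] for x in pending_out)
    return h_fix + h_rel
```

Because C_min does not depend on the state, precompute_c_min is called once per task and each h_fee evaluation is linear in the number of holdings.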
To simulate diverse update tasks, we generated groups of random problems of incremental portfolio size, ranging from 4 up to 13 holdings. All portfolios were scaled to have 10k of money value. Each group consists of 20 problems. Each problem has the following features: * 70% of the holdings represent transferable funds. The rest are considered a general type of ETFs.* The current portfolios are allocations randomly sampled from a uniform distribution and scaled to sum 1 (i.e., no initial cash positions). * Target portfolios were designed such that random fund flows have a common factor amount.Here we wanted the simulation to also include rebalance splits (e.g., 2% overweight in fund X is allocated 1% to fund Y and 1% to fund Z.) * ETFs have a fixed fee selected from the alternatives (0.5 1.0,2.5) with the idea of simulating different exchange commissions. Mutual funds have no fixed fee for the subscription or redemption of shares.* All funds have a variable fee in basic points (bps) rangingfrom 1 to 10. In this case, we wanted to have differences between funds, to represent both explicit and implicit estimated costs.In 180 problems (20× 9) both LP+ and Astar-fee achieved the optimal solution. As a reference, the average cost per transaction is 0.27±0.08. We computed the extra costs achieved by the rest of algorithms. Figure <ref> shows the distribution per portfolio size. Naive approach is clearly producing plans of worse cost across the whole range of portfolios. Their update plans coincide with the optimal cost in 16 problems only.DFBnB achieved remarkable improvements, even with its first solution. The DFBnB best solution matched the optimal solution in 145 tasks, of which in only 74, the complete search tree was explored. Regarding the plan length, LP+ only matched the number of transactions with Astar-fee in 148 plans, confirming our initial conjecture that LP+ lacks awareness of the plan length. DFBnB matched the length on 174 plans and Naive in 106, but for both of them, having the same plan length as Astar-fee does not mean they achieved a solution with the same (optimal) cost.DFBnB produced 3 fee sub-optimal plans that have fewer number of transactions.To have a closer look of this plan length awareness we run another experiment described later in this section. On the other hand, search algorithms have a scalability issue. They generate a number of nodes that is exponential in the portfolio size. Figure <ref> shows for Astar-fee the distribution of generated nodes (in log scale) per problem size.Nevertheless, heuristic h_ fee provides relative good guidance toward the goal. Consider for example thatDFBnB explored completely all group tasks up to size 7, and 12 search trees reached the bound of 100k nodes in tasks of size 8. Besides,we think that testing on portfolios with up to 13 positions is fair enough for our mutual fund scenarios <cit.>.Nevertheless, real portfolio could be of larger size, but what matters for the update plan is the effective number of positions being changed in the target portfolio.Regarding LP+, the time for solving tasks is negligible. The number of variables and constraints are relatively small for the performance of state-of-the-art convex optimization solvers, which are the ones under the hood of the CVXPY library.Now we focus on the update plan length. To facilitate the interpretationof the results we will explore problems of the same size. For this experiment we generated 500 random problems of portfolios with 10 holdings. 
Again, both LP+ and Astar-fee obtained the same solution cost in all problems. As expected, LP+ produced solutions with sub-optimal number of transactions. Table <ref> shows the counts of the 500 plans, split by number of additional steps to the best plan length, which in all cases was achieved by Astar-fee search. As we increased the sample size, we see more cases with several number of extra transactions. Analyzing these plans we observed that LP+ unnecessarily split switches of equivalent cost in several transactions. If a Astar-fee solution is computable in the available time resources, it will provide the benefit of being cost-optimal with the lowest number of transactions.§ UPDATES AS AUTOMATED PLANNINGThe state-space model for updating portfolios can also be approached as AI Automated Planning.The interesting point of using automated planning is that tasks are described in a high-level standard language, such as PDDL2.1 (Planning Domain Definition Language) <cit.> and the solutions are computed by domain-independent solvers called automated planners. In a PDDL task, the state space, actions and transition function is modelled in the PDDL domain, and the PDDL problem describes the initial state and the goals. In terms of representation richness, PDDL could enrich our model in the following features:* Timed Plan: Rather than having a sequence of steps, the planner will provide actions with a timestamp relative to a initial time t_0. It would be easy to recognize for instance that a group of transactions are independent and can be executed a the same time step.* Durative actions: Apart from the cost, actions can have a duration. In our examples this will make the difference between almost instant buy and sell transactions, compared to mutual fund switches that can take some few days.* Time Effects: In durative actions, effects should be temporally annotated. Thus, some effects can be considered instantaneous (eg., the cash balance is no longer available right away) or they can take place at the end of the action (eg. amounts in progress are settled at the end).Figure <ref> depicts the equivalent switch-available action modelled in PDDL2.1.Even though the modelling features of PDDL look appealing, the capabilities of available planners have discourage us to continue the research in this direction.Many temporal planners have focused onpropositional concurrency, therefore they are only supporting a fragment of PDDL2.1 without numeric state variables. From the remaining list of available planners we found that the Optic planner<cit.> is the most reliable one in terms of handling the language features. We developed a complete PDDL model for portfolio update tasks. It is equivalent to our domain-dependent implementation, but including durative actions.However, Optic did not provide competitive solutions even for small-sized tasks. A timed plan is not justified if it implies significantly sub-optimal results compared to the domain-dependent alternative. Optic runs a sup-optimal algorithm (WA*) with a heuristic derived in a domain-independent way. At this point we do not think that additional effort in this direction would benefit the performance terms of the application perspective § DISCUSSIONNow, we want to delve into some aspects that in terms of portfolio operations are somewhat closer to the real world. Our model can be extended or adapted to cover these features. 
Let us consider that in the original example the exchange-traded instruments are denominated in US Dollars (USD) and the mutual fund shares are denominated in Euros (EUR). The fund flows represent the equivalent amounts in the portfolio base currency (EUR). Sell actions will produce a USD cash balance. Therefore, to keep the rest of the plan actionable, we should include a forex transaction that exchanges part of the USD for EUR in order to buy shares of EUR mutual funds. Figure <ref> depicts the fund flows including the currency exchange. The basic option is to exchange currencies as needed on a per-transaction basis. Another option is to compute the currency imbalances from the original flows, and extend our model to include: * variables { w_1, … , w_n} instead of the single {w}, to keep track of the cash balances by currency. * Operators for forex transactions: Exchange-available to fully exchange a cash balance, and Exchange-needed to fulfill a currency imbalance. Mutual funds charge a management fee that is passed on over time to the fund net asset value. This implies that, in practice, few funds have an explicit commission at the time of buying or redeeming shares <cit.>, and therefore in fund switches. What is left is building a cost function from implicit cost estimates, but this can be a difficult task without information from fund sponsors. Although the cost function is a rough approximation, the solutions are always useful because they provide a feasible assignment of how to distribute the flows. Any form of rebalance automation needs an update plan, whether or not it is optimized with accurate input costs. For the particular case of the Traspasos regime, there is an alternative cost function that does not focus on the monetary cost. It turns out that the regulation allows the transfer, especially if it is between different managers, to be carried out in up to 8 days. This means that the effective days of selling and buying the shares are not the same. Therefore, from the point of view of an investor who wants to be always invested, the cost function can be the money-weighted sum of the time out of the market. However, this function is useful for the basic LP model when only switches are considered. Incorporating it into the multi-transaction version is more complicated because the almost instantaneous execution of market orders can distort its calculation. On the other hand, we also consider the cost to the investor in terms of their tax bill. Unlike the previous cost functions, this one is specific to each portfolio. If the information is available, one must compute the capital gains implied by the initial outflows. The great difficulty is that these data, being of a personal nature, will not be available for research purposes. Consequently, it becomes necessary to create high-quality simulated data to evaluate the potential impact of using an optimized solution with this type of function. Combining the cost functions into a single weighted function would allow modeling a multi-objective approach, but the mechanism for choosing the weights and the implications for the solution plans is an open question. Portfolio updates also pose additional challenges when integrated management has to allocate money across multiple investor accounts. It is typical for an investor to have different accounts (tax-exempt, taxable accounts, retirement plans).
In this context, rebalancing the portfolio may imply that money first has to be transferred from one account to another before carrying out the operations related to investment funds. In the same way, our model can include these transfer actions between accounts, with their associated costs, if any. Finally, the general case of multi-transaction updates could also be modeled with MIP (Mixed Integer Programming). With boolean variables it is possible to model, for instance, whether an action is executed or not. The objective function could include a factor that takes into account the number of transactions. We preferred the state-space model since the definition of the actions facilitates plan interpretation or, seen from another view, the actions closely resemble the transactions that are then executed. § CONCLUSION We presented a comprehensive analysis of various approaches for generating a plan to update a portfolio from its current allocation to a target allocation. We successfully developed a state-space model capable of handling transactions involving two types of funds. In particular, this model is useful for providing plans to update portfolios in which the transfer must be considered as a special transaction within the Spanish tax regime. We showed that using heuristic search with our model allows us to generate plans that, while maintaining the optimal solution cost, manage to have fewer transactions than a simplistic solution or an LP solution that does not take into account the number of transactions. Moreover, we discussed potential extensions of our model to address specific portfolio requirements, including dealing with various currencies, execution times, and the tax implications faced by clients. As portfolio updates occur frequently, we strongly believe that optimizing back-office processes can lead to cost improvements, ultimately benefiting both clients and financial institutions. In conclusion, our research contributes valuable insights and practical solutions to optimize portfolio management processes, fostering better performance in the fund industry. § ACKNOWLEDGEMENTS This paper was prepared for informational purposes by the Artificial Intelligence Research group of JPMorgan Chase & Co and its affiliates (“J.P. Morgan”) and is not a product of the Research Department of J.P. Morgan. J.P. Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful. | http://arxiv.org/abs/2311.16204v1 | {
"authors": [
"Tomás de la Rosa"
],
"categories": [
"q-fin.PM",
"cs.AI"
],
"primary_category": "q-fin.PM",
"published": "20231127130956",
"title": "Planning for the Efficient Updating of Mutual Fund Portfolios"
} |
Mixed scalarization of charged black holes: from spontaneous to non-linear scalarization Zakaria Belkhadria^1,2,3,4 and Alexandre M. Pombo^5^1Département de Physique Théorique, Université de Genève, 24 quai Ernest Ansermet, CH-1211 Geneva 4, Switzerland ^2Gravitational Wave Science Center (GWSC), Université de Genève, CH-1211 Geneva, Switzerland ^3Dipartimento di Matematica, Università di Cagliari, via Ospedale 72, 09124 Cagliari, Italy ^4INFN, Sezione di Cagliari, Cittadella Universitaria, 09042 Monserrato, Italy^5CEICO, Institute of Physics of the Czech Academy of Sciences, Na Slovance 2, 182 21 Praha 8, CzechiaScalarized black holes (BH) have been shown to form dynamically in extended-scalar-tensor theories, either through spontaneous scalarization – when the BH is unstable against linear perturbations – or through a non-linear scalarization. In the latter, linearly stable BHs can ignite scalarization when sufficiently perturbed. These phenomena are, however, not incompatible and mixed scalarization is also possible. The objective of this work is twofold: first, study mixed scalarization on a family of Einstein-Maxwell-scalar models; and second, study the effect of the counter scalarization that occurs when one of the coupling parameters has a sign opposite to the one that generates scalarization. Both objectives are addressed by constructing and examining the mixed scalarization's domain of existence. An overall dominance of the spontaneous scalarization over the non-linear scalarization is observed. Thermodynamically, an entropical preference for mixed over the standard scalarization (spontaneous or non-linear) exists. In the presence of counter scalarization, a quench of the scalarization occurs, mimicking the effect of a scalar particle's mass/positive self-interaction term. § INTRODUCTION With the recent observational data from the LIGO-VIRGO collaboration (e.g. <cit.>) and the direct imaging by the Event Horizon Telescope <cit.>, a heightened interest in alternative black hole (BH) solutions has emerged. Of particular interest are hairy black hole solutions (see <cit.> for a review) resulting from a scalarization process (aka scalarized BHs). A possible mechanism of scalarization occurs in extended-scalar-tensor theories (eST) <cit.>. In these, the scalar field non-minimally couples to the model's invariant and scalar perturbations of the vacuum BH can ignite the growth of scalar hair around the BH. The resulting objects may present significant deviations when compared with the standard vacuum General relativity (GR) solutions, which can give a better insight into the nature of gravity and particle physics.A family of eST theories that have undergone extensive analysis is theEinstein-Maxwell-Scalar (EMS) model <cit.>. In the latter, scalarization is triggered by a non-minimal coupling between a real scalar field, ϕ, and the Maxwell invariant, ℐ=F_μνF^μν, through a coupling function f(ϕ ) 𝒮=∫ d^4 x√(-g)[ R - 2 ∂ _μϕ∂ ^μϕ - f (ϕ)ℐ] , and minimally coupled with the Ricci scalar, R, associated with the metric ansatz g_μν. Scalarization is triggered by a sufficiently large charge-to-mass ratio.Although BHs within this model are considered of less astrophysical significance[In a dynamical astrophysical environment, the presence of plasmas around the BH leads to prompt discharge. Alternatively, the neutralization can occur through Hawking charge evaporation <cit.>.] 
compared to those in other eST models such as the Scalar-Gauss-Bonnet <cit.>, their relative computational simplicity has proven valuable to developing a better insight into the scalarization phenomenon <cit.>.In particular, the simplicity associated with the EMS model has allowed the study of several coupling functions <cit.>, from which two classes of solutions were found[The same classification also exists for Scalar-Gauss-Bonnet models <cit.>]: class I (or dilatonic-type) and class II (aka scalarization type). While in the former ϕ =0 does not solve the field equation [For the EMS model, there is an exceptional case: if Q =P, ϕ=0 solves this class so that the dyonic, equal charges Reissner-Nordstrom BH is a solution.], in the latter, the trivial scalar field does solve the field equation and scalarized solutions can result from perturbations of the vacuum BH solution [A similar phenomenon was observed for a vector field instead of a scalar field <cit.>, however, these seem to be prone to ghost instabilities <cit.>]. This demands that df (ϕ)/d ϕ |_ϕ=0= 0. This condition is naturally implemented, for instance, if one requires the model to be ℤ_2-invariant under ϕ→ -ϕ. Scalarized solutions can be further divided into two sub-classes. When vacuum BH are unstable against linear perturbations, a tachyonic instability arises when d^2 f(ϕ)/d ϕ^2 |_ϕ=0≠ 0, and with the opposite sign of ℐ. The scalar hair around the BH grows spontaneously from a perturbed vacuum Reissner-Nordstrom BH (RN BH). In this case, the scalarized solutions bifurcate from the vacuum RN BH solutions. These are known as class II.A or spontaneous/normal scalarization.A second family of scalarized solutions, class II.B or non-linear scalarization, is also possible when the BH is stable against linear perturbations, d^2 f(ϕ)/d ϕ^2 |_ϕ=0 = 0, but unstable against non-linear perturbations. In this case, a sufficiently large non-linear perturbation can trigger the growth of the scalar hair of the vacuum BH. These two scalarization types are, however, not incompatible and mixed scalarization is also possible <cit.>. A coupling function compatible with both spontaneous and non-linear scalarization is f(ϕ)=e^-αϕ ^2 -βϕ ^4 , with α <0 and β<0. The sign of the coupling parameters (α, β) was chosen in accordance with the literature. Pure – aka non-mixed – spontaneous (non-linear) scalarization being recovered when β = 0 (α =0). The existence of scalarization is highly dependent on the sign of the spontaneous/non-linear function parameters (α, β). While for pure scalarization the sign of either parameter is well defined, the presence of an additional scalarization mechanism, say spontaneous scalarization (α), allows β to have the “wrong” sign. We call this counter-scalarization.The objective of this work is twofold: first, to study the interplay between spontaneous and non-linear scalarization in the mixed scalarization scenario; second, to examine the effects of incorporating a counter-scalarization term into the coupling function. The paper is organized as follows: Sec. <ref> introduces the basics of the EMS model, including a description of the equations of motion, boundary conditions, and relevant relations. Sec. <ref> is dedicated to the numerical results. These include the computation of the domain of existence, Sec. <ref>, and both the entropical, Sec. <ref>, and perturbative, Sec. <ref>, stabilities. The paper concludes with some final remarks in Sec. <ref>.Throughout this paper, we set 4π G = 1 = 4πϵ_0 for convenience. 
The spacetime signature is chosen to be (-,+,+,+). We focus exclusively on spherically symmetric solutions, which implies that the metric and matter functions depend solely on the radial coordinate. For simplicity of notation, once a function is introduced with its radial dependency, such as X(r), we subsequently denote it by X with the understanding that it is a function of r. Derivatives with respect to the radial coordinate r and the scalar field ϕ are represented by X' ≡dX/dr and X_,ϕ≡dX/dϕ, respectively.§ THE EMS MODELAs already stated in the Introduction (Sec. <ref>), in this work we restrict ourselves to spherically symmetric EMS models described by the action (<ref>). For the line element, let us consider a standard metric ansatz that is compatible with spherical symmetry and has two unknown functions ds^2 = - N(r) e^-2δ (r) dt^2+dr^2/N(r)+r^2 (dθ ^2 +sin ^2 θ d φ ^2) , with N(r)≡ 1-2m(r) /r, where m(r) is the Misner-Sharp mass function <cit.> and δ (r) is an unknown metric function. Spherical symmetry, in the absence of a magnetic charge[A magnetic charge would also be compatible with spherical symmetry, but it shall not be considered here – see <cit.> for magnetically charged BHs in this context.], imposes an electrostatic 4-vector potential, A(r)=V(r) dt, and a scalar field that depends solely on the radial coordinate, ϕ(t,r,θ,φ)≡ϕ (r). The absence of angular dependence allows one to obtain the effective Lagrangian, ℒ_ eff =e^-δm'-1/2r^2 e^-δNϕ'^2+1/2f(ϕ) e^δr^2 V'^2. Variation of the effective Lagrangian with respect to the metric and matter functions yields the field equations: m'=r^2N ϕ '^2/2 +Q ^2/2r^2 f(ϕ) , δ'+r ϕ '^2=0, V'= -Q e^-δ/f(ϕ) r^2 ,ϕ” +1+N/rNϕ'-Q ^2/r^3N f(ϕ)(ϕ'-f_, ϕ(ϕ)/2rf(ϕ))=0 , where the equation for the electrostatic potential V admits a first integral, which was used to simplify the remaining equations. The constant of integration is interpreted as the electric charge, Q. To solve the set of four coupled ordinary differential equations (<ref>), one must implement the appropriate boundary conditions. At the horizon, r=r_H, the field equations can be approximated by a power series expansion in r - r_H as m = r_H/2+ Q ^2/2r_H ^2 f(ϕ_0) (r-r_H) +⋯, δ = δ _0 -r_H(Q ^2/2 r_H (Q ^2-r_H^2 f (ϕ_0)) f_,ϕ (ϕ_0)/ f (ϕ _0))^2(r-r_H)+⋯ , ϕ =ϕ _0 + Q ^2 f_,ϕ (ϕ_0)/2 r_H f (ϕ _0) (Q ^2-r_H^2 f (ϕ_0)) (r-r_H)+⋯ ,V = -e^-δ _0 Q/r_H ^2 f (ϕ_0) (r-r_H)+⋯ , in terms of the two essential parameters ϕ_0 and δ_0, where the subscript _0 denotes functions evaluated at the horizon r_H. At spatial infinity, asymptotic flatness is ensured by a power series expansion in 1/r, m(r) =M- Q^2+Q_s^2/2r+⋯ ,δ (r) ≈Q_s^2/2r^2+⋯ ,ϕ (r) = Q_s/r+MQ_s/r^2+⋯ , V(r) =ψ_e+Q/r+⋯ , with M representing the ADM mass, Q the BH's electric charge, and Q_s the scalar “charge”[The term scalar “charge” is used due to the similar radial decay to a true electric charge, not because of an associated conserved Noether current.][The so-called “scalar” hair associated with the EMS scalarization model is of a secondary nature and does not add any additional degree of freedom.], while ψ_e is the electrostatic potential at infinity. Equation (<ref>), together with the boundary conditions arising from the power series expansions (<ref>) and (<ref>), constitutes a Dirichlet boundary condition problem that must be numerically integrated (see Sec. <ref>).
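As a rough illustration of how such a boundary-value problem can be tackled, the sketch below integrates the field equations outwards from the horizon expansion and tunes the shooting parameter ϕ_0 until the scalar field decays at large radius. It is a schematic outline only, with illustrative inputs (α, β, Q, r_H), an arbitrary initial δ_0 and a bisection bracket that may need adjustment; it is not the solver used for the results reported below.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

alpha, beta, Q, rH = -10.0, -10.0, 0.9, 0.78      # example inputs, not a quoted solution
f    = lambda p: np.exp(-alpha * p**2 - beta * p**4)
dfdp = lambda p: (-2.0 * alpha * p - 4.0 * beta * p**3) * f(p)

def rhs(r, y):
    # y = (m, delta, phi, phi'); V decouples through its first integral and is omitted
    m, delta, phi, dphi = y
    N      = 1.0 - 2.0 * m / r
    dm     = 0.5 * r**2 * N * dphi**2 + Q**2 / (2.0 * r**2 * f(phi))
    ddelta = -r * dphi**2
    d2phi  = -(1.0 + N) / (r * N) * dphi \
             + Q**2 / (r**3 * N * f(phi)) * (dphi - dfdp(phi) / (2.0 * r * f(phi)))
    return [dm, ddelta, dphi, d2phi]

def phi_far(phi0, rmax=200.0, eps=1e-6):
    # horizon power-series expansion used to start slightly outside r_H
    r0    = rH * (1.0 + eps)
    dphi0 = Q**2 * dfdp(phi0) / (2.0 * rH * f(phi0) * (Q**2 - rH**2 * f(phi0)))
    y0    = [rH / 2.0 + Q**2 / (2.0 * rH**2 * f(phi0)) * (r0 - rH),
             0.0,                      # delta_0; fixed afterwards by the asymptotics
             phi0 + dphi0 * (r0 - rH),
             dphi0]
    sol = solve_ivp(rhs, (r0, rmax), y0, rtol=1e-10, atol=1e-12)
    return sol.y[2, -1]                # scalar field at the outer boundary

# A wrong phi_0 generically makes phi diverge with a definite sign, so the physical
# value is bracketed by a sign change of phi(rmax); the bracket below is illustrative.
phi0_star = brentq(phi_far, 0.05, 0.8)
print("shooting result: phi_0 ≈", phi0_star)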
§.§ Identities and physical quantities of interestScalarized solutions are physically characterised by the dimensionless quantities: charge-to-mass ratio, q, reduced horizon area, a_H, and reduced horizon temperature, t_H,q≡Q/M , a_H≡A_H/16π M ^2 =r_H ^2/4 M ^2 , t_H≡ 8π M T_H= 2MN'(r_H) e^-δ _0, where A_H=4π r_H^2 and T_H=N'(r_H)e^-δ_0/4π are the area and temperature of the BH's horizon, respectively. Regularity of the solutions is guaranteed by the Ricci scalar, R, and the Kretschmann scalar, K≡ R_μνδλ R^μνδλ, R= N'/r( 3r δ '-4) + 2/r^2{ 1+N [ r^2 δ” -( 1-r δ ')^2]} -N” , K= 4/r^4( 1-N)^2 + 2/r^2[N'^2+( N'-2Nδ')^2] + [N”-3 δ' N' +2N( δ'^2-δ”)]^2 . Physical accuracy is ensured through the so-called virial identity, Smarr law and non-linear Smarr relation. The virial identity is obtained through a Derrick-like scaling argument <cit.> and is given by ∫ _r_H ^∞ dr { e^-δr^2 ϕ'^2[ 1+2r_H/r(m/r-1)]}= ∫ _r_H ^∞ dr [ e^-δ(1-2r_H/r)1/r^2Q^2/f(ϕ)] , which is independent of the equations of motion and displays that scalarization can occur only in the presence of an electric charge Q ≠ 0 in an EMS model [Since 1+2r_H/r(m/r-1) >0, the left-hand side of the equation is strictly positive and can only be counterbalanced by a non-zero electric charge in the right-hand side.]. The Smarr law for this family of solutions is found not to be explicitly dependent on the scalar field: M = 1/2 T_H A_H + ψ_e Q . The first law of black hole thermodynamics for EMS black holes is expressed as dM = 1/4 T_H dA_H + ψ_e dQ . At last, it can be shown that scalarized solutions also obey the so-called non-linear Smarr formula <cit.>, M^2 + Q_s^2 = Q^2 + 1/4 A_H^2 T_H^2 .§.§ Coupling functionAs previously mentioned, the choice of the coupling function is crucial in determining whether the RN BH is susceptible to scalarization, and hence one should be careful in designing the coupling function. Let us recap the conditions for scalarization. The first requirement is that the GR BH solution should also be a solution within the EMS model. Analysis of the field equations (<ref>) shows that this can be secured by imposing (<ref>).This condition is naturally implemented if one requires the model to be ℤ_2-invariant under the transformation ϕ→ -ϕ. The type of scalarization, on the other hand, is controlled by the second derivative of f(ϕ). For this, it is important to recall that the scalar field is described by the Klein-Gordon equation ϕ = ℐ/4 f_,ϕ . Let us now consider a small-ϕ expansion of the coupling function f(ϕ)=f(0)+1/2d^2 f/d ϕ^2 |_ϕ=0ϕ^2+⋯ , the Klein-Gordon equation (<ref>) linearized for small-ϕ reads: (-μ_ eff^2)ϕ =0 ,whereμ_ eff^2= ℐ d^2 f(ϕ)/d ϕ^2 |_ϕ=0 . The instability arises if the scalar field's effective mass μ_ eff^2<0 (aka tachyonic mass), which, in particular, requires f_,ϕϕ to obey (<ref>) and with the opposite sign of ℐ. This constitutes the set of solutions known as normal/spontaneous scalarization associated with class II.A, of which an exemplary function is f(ϕ) = e^-αϕ ^2 , Class II.B on the other hand, occurs when solutions are always linearly stable and no tachyonic instability exists, i.e. for (<ref>). This condition is easily satisfied by considering higher order terms in the expansion of f(ϕ) such as f(ϕ) = e^-βϕ ^4 . An exemplary function that exhibits, simultaneously, the tachyonic and non-linear instabilities is the previously introduced (<ref>), f (ϕ) = e^-αϕ^2-βϕ^4. 
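As a quick symbolic check of these statements for the mixed coupling (<ref>), the snippet below (an illustrative aside, not part of the original text) verifies that f_,ϕ vanishes at ϕ=0, that the quadratic term supplies the tachyonic coefficient -2α, and that β only enters at quartic order:

import sympy as sp

phi, alpha, beta = sp.symbols("phi alpha beta", real=True)
f = sp.exp(-alpha * phi**2 - beta * phi**4)

print(sp.diff(f, phi).subs(phi, 0))                   # 0: the RN BH remains a solution
print(sp.diff(f, phi, 2).subs(phi, 0))                # -2*alpha: linear (tachyonic) term
print(sp.simplify(sp.diff(f, phi, 4).subs(phi, 0)))   # 12*alpha**2 - 24*beta: non-linear term

With β alone (α=0) the second derivative vanishes and only the quartic term survives, which is the non-linear (class II.B) case discussed above.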
Observe that, due to the presence of non-linear (linear) scalarization in the coupling function, scalarized BHs in the EMS model can incorporate a counteracting linear (non-linear) term.In particular, it is feasible to have scalarized solutions with α > 0 supported by the non-linear term β <0; or a scalarized solution with β > 0 supported by the tachyonic (linear) instability α <0.Analysis of (<ref>) reveals that these counteracting terms do not contribute to scalarization. Instead, they can be interpreted as a mass term for the scalar field – in the case of the linear term α > 0 –, or as a positive quartic self-interaction – for β >0 –, both proportional to the BH's electric charge due to the non-minimal coupling between the scalar field and the Maxwell invariant. Investigation of the latter's impact on the scalarization phenomena constitutes the second objective of this work.§ NUMERICAL RESULTS The set of four coupled ODEs (<ref>), with the proper boundary conditions (<ref>)-(<ref>), are solved numerically. The chosen routines automatically impose the proper boundary conditions through a shooting method on the two unknown parameters ϕ _0 and δ _0, with the maximum integration error and boundary conditions automatically ensured to be less than 10^-15. Physical accuracy is guaranteed through the virial identity, with a relative error of 10^-6, and Smarr and non-linear Smarr relations, with a relative error of 10^-7. The resulting numerical solution's profile can be seen in Fig. <ref> for an exemplary solution with r_H=0.78, ϕ_0=0.40 and q=0.90 and coupling parameters α=-10.0 and β = -10.0. All obtained solutions are everywhere regular at and outside the horizon, r_H. Each scalarized BH solution is uniquely defined by (r_H, α, β,q). §.§ Domain of existenceVariation of the horizon radius, r_H, and coupling parameters, (α, β), for a fixed electric charge Q, form the 3-dimensional domain of existence that characterizes the EMS model under analysis. To avoid the additional complexity associated with the 3-dimensional domain of existence, let us fix one of the coupling parameters (say α) and vary the other (β) – see Fig. <ref>. In Fig. <ref> is graphically represented the projection of the domain of existence of a scalarized BH as a function of α for three values of β ={-10.0,0,+10.0} (left panel); and as a function of β for four values of α = {-10.0, -1.0,0,+1.0} (right panel). A common feature of both domains of existence is a region with q ⩽ 1 – where degeneracy occurs– and a region with overcharged solutions q>1. The former region is limited from below by the bifurcation/turning line and from above by the extremal line q=1, and represents a region of the parameter space where, at least, two BH configurations with the same q coexist: a bald RN BH and one or more scalarized BH solution. These solutions, however, possess different horizon radii (Fig. <ref>), temperatures and entropy (see Sec. <ref>). The overcharged region goes from the extremal line, q=1, and is bounded from above by the critical line, which is highly coupling parameter's dependent: q_crit.≡ q_crit. (α ,β). At the critical line, the solution's horizon radius tends to zero, r_H → 0. In this region, no bald RN BH exist and no degeneracy between scalarized BHs was observed. After the critical line, no solutions exist.In addition, Fig. 
<ref> demonstrates a dominance of the tachyonic instability over the non-linear scalarization: for the same values of β and α, the domain of existence is bounded from below by the bifurcation line and has a q_crit. comparable to that of pure spontaneous scalarization, β=0.0. Such behaviour is expected since the maximum of the scalar field amplitude occurs at the horizon (see Fig. <ref>) and is smaller than unity, max(ϕ)=ϕ_0<1, resulting in a decreased contribution of the higher powers of the scalar field[A similar behaviour was observed for the scalar-Gauss-Bonnet model <cit.>]. The lower bound, however, depends on the relative value of α with respect to β. If β≲α, the scalarization is dominated by the tachyonic instability and the domain of existence is bounded from below by the bifurcation line; when β≫α, non-linear scalarization dominates and the domain of existence is bounded from below by the turning line (see Fig. <ref>). In this case, the tachyonic instability is not strong enough to sustain a negligible amount of scalar field (see Fig. <ref>), and a minimum value of ϕ_0 exists. With the increase of α, the size of the initial ϕ_0 jump decreases until ϕ_0 → 0 and the tachyonic instability dominates. Let us now analyse the individual domains of existence, starting with fixed β = {-10.0, 0.0, +10.0} and varying α – see Fig. <ref> (left panel). In the majority of the domain of existence, solutions bifurcate from the RN BH at q⩽ 1 – where scalarized BHs with a negligible amount of scalar field can exist – and stop at the critical line, r_H→ 0. In between, the scalar field amplitude at the horizon increases monotonically while the horizon radius decreases. As expected, the bifurcation line is insensitive to the non-linear term, since the bifurcation depends only on the linear terms of f_,ϕ(ϕ) that enter the rhs of the Klein-Gordon equation (<ref>). The critical line, on the other hand, is highly dependent on the coupling parameters. The maximum charge-to-mass ratio for which critical solutions exist, q_crit., increases/decreases with the addition of a negative/positive β term. While for β<0 a non-linear instability that amplifies the scalarization exists, a positive value has the opposite effect. A β >0 decreases the width of the domain of existence due to a decrease in q_crit., resulting in a quench of the scalarization similar to the one observed for scalarization with a positive self-interaction <cit.>. Observe now the case with fixed α ={-10,-1, 0,+1} and varying β – Fig. <ref> (right panel). In this case, for all the chosen α≠ -10, solutions start at the extremal line q=1, for which a minimal, non-negligible amount of scalar field exists around the BH, ϕ_0 ≠ 0. The height of the scalar field's associated jump decreases (increases) with the addition of the α <0 (α >0) term (see also Fig. <ref>). Following the initial jump in the scalar field amplitude at the horizon, solutions show a simultaneous increase of ϕ_0 and r_H until a maximum r_H is reached – Fig. <ref>, β=-10 lines. After this point, a second branch with decreasing r_H and increasing ϕ_0 exists until the critical solution is reached.
The latter is known as the hot branch and is known to be stable for α =0 <cit.>; the former is known as the cold branch and is unstable (the denomination will become clear in the thermodynamics section <ref>). As mentioned before, while the effect of a positive α is similar to scalarization by a massive scalar field, a positive β mimics a positive self-interacting (attractive) potential. The present results follow the same pattern as the one presented for a massive/self-interacting scalar field in EMS <cit.> and scalar-Gauss-Bonnet models <cit.>; in particular, a quench of the scalarization phenomenon due to the decrease in the width of the domain of existence (see Fig. <ref>). However, due to the non-minimal coupling of the “mass” or “self-interaction” terms to the Maxwell invariant, the impact of these terms is proportional to the electric charge Q. §.§ Thermodynamics A solution is said to be stable if it is simultaneously entropically preferable and stable against radial perturbations. The latter will be studied in Sec. <ref>. In EMS models, the entropy is given by the Bekenstein-Hawking formula <cit.> and its analysis reduces to that of the reduced horizon area, a_H; see Fig. <ref> (left panel). Analysis of the horizon area shows that solutions dominated by spontaneous scalarization are always entropically preferable when compared with electro-vacuum GR, while non-linearly dominated solutions have a first branch that is everywhere entropically unfavourable (aka cold branch) and a second branch which contains a set of solutions that are entropically preferable (aka hot branch). The second branch's entropically non-preferable region decreases (increases) with the addition of α <0 (α >0). The terminology hot/cold comes from the second branch having a higher temperature than the first, see Fig. <ref> (right panel).§.§ Comments on stability Stability against radial perturbations is studied through a standard strategy that considers spherically symmetric, linear perturbations of the equilibrium solutions while keeping the metric ansatz (<ref>), but allowing the functions N, δ, ϕ, V to depend on time, t, in addition to r: d s^2=-Ñ(r, t)e^-2 δ̃(r, t) d t^2+d r^2/Ñ(r, t)+r^2(d θ^2+sin ^2θd φ^2),A=Ṽ(r, t)d t, ϕ=ϕ̃(r, t) . Each function is further expanded into the equilibrium (aka “bare”) solution obtained previously (Sec. <ref>-<ref>), plus a perturbation term. The latter contains the time dependence through a Fourier mode with frequency Ω, Ñ(r, t)=N(r)+ϵ N_p(r) e^-i Ω t , δ̃(r, t)=δ(r)+ϵδ_p(r) e^-i Ω t , ϕ̃(r, t)=ϕ(r)+ϵϕ_p(r) e^-i Ω t , Ṽ(r, t)=V(r)+ϵ V_p(r) e^-i Ω t , where the subscript _p denotes perturbations of the equilibrium solutions. The linearized field equations around the background solution yield the metric and V_p(r) perturbations expressed in terms of the scalar field perturbation, ϕ _p (r), N_p=-2rN ϕ^' ϕ_p , δ_p=-2 ∫ d r r ϕ^'ϕ_p^' ,V_p^'=-V^'[δ_p+ϕ_pf_, ϕ(ϕ)/f(ϕ)] . This leads to a single perturbation equation for ϕ_p (r) which can be written in a Schrödinger-like form by redefining Ψ(r) = r ϕ_p and introducing the 'tortoise' coordinate x defined by dx/dr = e^δ / N: -d^2Ψ/d x^2+U_ΩΨ=Ω^2 Ψ , with the perturbation potential U_Ω defined as: U_Ω≡e^-2 δ N/r^2{1-N-2r^2ϕ^'2-Q^2/2 r^2[2/f(ϕ)(1-2r^2ϕ^' 2)-2 f_, ϕ^2(ϕ)/f^3(ϕ)+1/f^2(ϕ)(f_, ϕϕ(ϕ)+4r ϕ^' f_, ϕ(ϕ))]} The potential U_Ω, which is regular across the entire domain -∞ < x < ∞, diminishes to zero at both the black hole (BH) event horizon and infinity.
A mode is classified as unstable if Ω^2 < 0, which, within the asymptotic boundary conditions of our framework, indicates a bound state. However, a standard quantum mechanical result dictates that equation (<ref>) will not exhibit bound states if U_Ω consistently exceeds its minimal asymptotic value, implying it must be positive (see e.g. <cit.>). Therefore, a uniformly positive effective potential serves as evidence of mode stability against spherical perturbations. To discern the effect of the parameters α and β on the effective potential, we analyzed the profile of U_Ω for several scalarized solutions, holding α constant while varying β – Fig. <ref> (left panel) – and vice-versa – Fig. <ref> (right panel). The effective potential for radial spherical perturbations of solutions dominated by spontaneous scalarization, corresponding to a fixed α =-10.0 – Fig. <ref> (left panel) –, is everywhere positive, U_Ω>0, suggesting the absence of instabilities. Additionally, one observes an increase (decrease) of the maximum value of U_Ω with the addition of the non-linear (counter-non-linear) parameter β. In the case of non-linearly dominated scalarized solutions with fixed β =-10.0 – Fig. <ref> (right panel) –, for both the hot and cold branches, a region where U_Ω changes sign exists, which, while not indicating instabilities, does not guarantee stability. The addition of a tachyonic term, α =-1.0, to the non-linearly dominated scalarized solution reduces the amplitude of the negative U_Ω region, bringing it closer to the stable spontaneously led scalarization, while a counter-scalarization term α =+1.0 deepens it. Lastly, it is important to note that a non-uniformly positive potential for the non-linear scalarized solutions is not a guarantee of instabilities. To further investigate these solutions, the application of more intricate methods like the S-deformation method <cit.> is required. Such an analysis is beyond the scope of this paper (see <cit.> for a similar study of pure non-linear scalarization, and <cit.> for the impact of a self-interaction/mass term). § CONCLUSIONS In this work, we have investigated the interplay between the spontaneous and non-linear scalarization of charged black holes within the Einstein-Maxwell-Scalar model. The resulting mixed scalarized solutions possess properties of both pure non-linear and spontaneous scalarization. The interplay between the two types of scalarization shows a dominance of the spontaneous properties over the non-linear ones. In particular, the domain of existence for comparable coupling parameters possesses the same structure as that of spontaneous scalarization, with a slight increase in width coming from the non-linear scalarization. The influence of non-linear scalarization becomes apparent only when its coupling parameter is significantly larger than that of spontaneous scalarization. The tachyonic instability associated with the spontaneous scalarization in the mixed coupling makes the black hole more susceptible to scalarization. As a result, a weaker coupling of the scalar field to the Maxwell invariant is required to achieve the same level of scalarization. The presence of mixed scalarization also allowed the study of a “counter-scalarization” term, wherein one of the scalarization parameters has the “wrong” sign and, instead of supporting/intensifying the scalarization, suppresses it.
From the linearized KG equation, one observes that these terms possess the properties of a scalar field mass (for the linear parameter α >0) or of a positive self-interaction (for the non-linear β >0). The resulting quench associated with these parameters mimics the effect of a scalar field mass/self-interaction observed in previous studies <cit.>. However, due to the coupling to the Maxwell invariant, the effect is not constant. Thermodynamically, mixed scalarized solutions with larger negative values of both α and β are favourable. Solutions dominated by spontaneous scalarization are entropically preferable over their General Relativity counterparts, while non-linearly dominated solutions show mixed thermodynamic behaviour, i.e. both entropically favourable and unfavourable regions. Perturbative stability analysis against radial perturbations indicated stability for predominantly spontaneously scalarized black holes. In contrast, for non-linearly dominated scalarization, such a conclusion cannot be drawn, and further studies are required. However, it seems that the addition of the spontaneous scalarization parameter to the non-linear scalarization drives the resulting mixed scalarization towards a more stable configuration. Future research directions include applying the S-deformation method for an in-depth analysis of the radial stability of non-linearly dominated solutions. A comprehensive study of quasi-normal modes in mixed scalarized solutions could provide further insights extending the work of <cit.>. Additionally, incorporating rotation into these mixed scalarization configurations presents an intriguing avenue for exploration. § ACKNOWLEDGMENTS We would like to express our sincere gratitude to Eugen Radu for reading and commenting on the manuscript and for the insightful discussions. Our thanks also go to Pedro G.S. Fernandes and Nuno M. Santos for their valuable discussions. Z.B. extends special thanks to the Gravitation Group of Aveiro (CIDMA) and to Carlos Herdeiro for their hospitality during the initial phase of studying EMS models. Z.B. also gratefully acknowledges the networking support provided by the COST Action CA18108. A. M. Pombo is supported by the Czech Grant Agency (GAČR) under grant number 21-16583M. | http://arxiv.org/abs/2311.15850v1 | {
"authors": [
"Zakaria Belkhadria",
"Alexandre M. Pombo"
],
"categories": [
"gr-qc",
"astro-ph.CO",
"hep-ph",
"hep-th"
],
"primary_category": "gr-qc",
"published": "20231127141739",
"title": "Mixed scalarization of charged black holes: from spontaneous to non-linear scalarization"
} |
| http://arxiv.org/abs/2311.16253v1 | {
"authors": [
"Enrico Bertuzzo",
"Christophe Grojean",
"Gabriel M. Salla"
],
"categories": [
"hep-ph",
"hep-th"
],
"primary_category": "hep-ph",
"published": "20231127190157",
"title": "ALPs, the on-shell way"
} |
1.0 [email protected]@wipm.ac.cn^1State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China^2University of Chinese Academy of Sciences, Beijing 100049, China A unitary Fermi gas in an isotropic harmonic trap is predicted to show scale and conformal symmetry that have important consequences in its thermodynamic and dynamical properties. By experimentally realizing an isotropic harmonic trap, we study the expansion of a unitary Fermi gas and demonstrate its universal expansion dynamics along different directions and at different temperatures. We show that as a consequence of SO(2,1) symmetry, the measured release energy is equal to that of the trapping energy. In addition, away from resonance when scale invariance is broken, we determine the effective exponent γ that relates the chemical potential and average density along the BEC-BCS crossover, which qualitatively agrees with the mean field predictions. This work opens the possibility of studying non-equilibrium dynamics in a conformal invariant system in the future. Exploring scale invariance in the expansion of a spherical unitary Fermi gas Kaijun Jiang^1 January 14, 2024 ============================================================================Strongly interacting Fermi gases are created by tuning the interaction strength between atoms of different spin states via Feshbach resonance <cit.>. The unitary Fermi gas, realized when the s-wave scattering length is tuned to infinity, is of special interest <cit.>, as it is not only strongly correlated but also an example of scale-invariant quantum many-body system. One of the basic tools used to explore the properties of unitary Fermi gas is the expansion dynamics <cit.> and much insight has been obtained about the role of interactions. For instance, anisotropic expansions <cit.>, virial theorem <cit.>, conformal symmetry breaking <cit.>, and Efimovian expansion <cit.> of Fermi gases have been demonstrated by imaging the expansion of the atomic cloud.The strongly interacting Fermi gas at finite temperature is described by a hydrodynamic theory (see Eq. <ref>), where the transport behaviors are determined by viscosities <cit.>. At unitarity, the bulk viscosity ζ_B vanishes, and the friction force arise from shear viscosity η. For a unitary Fermi gas in an anisotropic trap studied previously, the conformal symmetry is broken and the shear viscosity plays an dominant role, which allowed its extraction from expansion dynamics <cit.>. On the other hand, for a spherical unitary Fermi gas, the transverse relative motion of the atomic cloud is absent (σ_ii=0), and consequently the effect of the shear viscosity can be neglected. The system without viscosity contribution would have universal properties regardless of the interaction details. Contrary to the anisotropic system, the spherical unitary Fermi gas has a hidden SO(2,1) symmetry <cit.>, which is predicted to has several universal properties, such as the exact relations between the trapping potential energy and total energy. However, the preparation and exploration of the universal properties of a spherical unitary Fermi gas are yet to be demonstrated experimentally.In this letter, we produce a spherical Fermi gas in an optical dipole trap (ODT) and study its expansion behaviors in the strongly interacting regimes. 
By tuning the interaction strength to unitarity, the expansion of the system shows the scale invariance along different directions and at different temperatures, which is absent in an anisotropic system. We find that the trapping potential energy equals to the half of the total energy of the system, and verify the virial theorem for the unitary Fermi gas <cit.>. To the best of our knowledge, this is the first experiment on the 3D ultracold quantum gases with the SO(2, 1) symmetry. Furthermore, we explore expansion dynamics away from unitarity when scale invariance is broken, and measure the effective exponent γ of μ(n)∝ n^γ where μ is the chemical potential and n is the average density. The measured values of γ qualitatively agree with the mean-field calculations.The expansion of strongly interacting Fermi gases is described by the hydrodynamic theory <cit.>,d^2/dt^2m⟨x_i^2⟩/2= ⟨x_i·∂ U/∂ x_i⟩_0+1/N∫d^3r[Δ p-Δ p_0]-1/N∫d^3r(ησ_ii+ζ_Bσ'),where ⟨x_i^2⟩ represents the mean square cloud radius along the ith axis (i=x, y, z), U is the trapping potential, t is the expansion time, the subscript (_0) denotes the initial condition in the trap at t=0, and Δ p=p-(2/3)ε is the scale-invariance breaking pressure, where ε is the energy density <cit.>. The last term on the right describes the friction forces arising from shear viscosity η and bulk viscosity ζ_B. Here, σ_ii=2ḃ_i/b_i-(2/3)∑_jḃ_j/b_j represents the transverse relative motion and σ'=∑_iḃ_i/b_i for the dilation process, where b_i denotes the expansion scale factor. In the unitary regime, both Δ p and ζ_B vanish <cit.>. The value of σ_ii depends on the geometry or symmetry of the atomic cloud. For the system in an anisotropic trap, σ_ii is non zero. Only for a spherical gas, the relative motion is absent with σ_ii=0, and in this case, we obtain the expansion behavior,⟨x_i^2⟩ =⟨x_i^2⟩_0+t^2/m⟨x_i·∂ U/∂ x_i⟩_0.Eq. (<ref>) shows the ballistic expansion analogous to a non-interacting ideal gas, and the interaction is included in the in-situ atomic cloud size ⟨x_i^2⟩_0.The scale-invariant expansion can be tested by determining the value,τ_i^2(t)=⟨x_i^2⟩_t-⟨x_i^2⟩_0/⟨x_i^2⟩_0ω^2,where ω_x=ω_y=ω_z=ω is the trapping frequency of the isotropic trap. According to Eq. (<ref>), τ^2(t)=t^2.In the experiment, we prepare a spherical Fermi gas with variable trapping frequencies and tunable interactions based on our previous works <cit.>. We initially prepare a ^6Li atomic degenerate Fermi gas with two spin states |F=1/2,m_F=±1/2⟩ in an elongated ODT and at the Feshbach-resonance magnetic field of 834 G. The experimental setup of the isotropic trap is schematically displayed in Fig. <ref>(a), where some special techniques are applied. Firstly, a magnetic field with a gradient B'_z=1.05 G/cm is applied along z axis to simultaneously compensate the gravity force of the two spin states. This is valid for ^6Li atoms because the hyperfine interaction is much smaller than the Zeeman shift at the applied magnetic field. Secondly, two elliptic optical beams with a cross-sectional aspect ratio of √(2), propagating perpendicularly in the horizontal plane, form the isotropic trap. Under these conditions, the trapping frequency can be varied by adjusting the optical power (See Supplemental Material <cit.> for details). We transfer the unitary Fermi gas to the isotropic trap with an efficiency of more than 90%, by lowering the power of the elongated ODT and simultaneously increasing that of the isotropic ODT in a period of 25 ms. 
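To make the analysis implied by Eq. (<ref>) concrete, the short sketch below computes τ_i^2(t) from a set of cloud sizes and extracts the release energy from the asymptotic slope of ⟨x^2⟩ versus t^2. The numbers are synthetic stand-ins for the measured widths, not the experimental data.

import numpy as np

m_Li6 = 6.015122 * 1.660539e-27           # 6Li mass [kg]
omega = 2 * np.pi * 1200.0                # isotropic trap frequency [rad/s], illustrative
x2_0  = (5.0e-6) ** 2                     # in-trap mean-square radius [m^2], illustrative

t    = np.linspace(0.0, 2.0e-3, 11)       # expansion times [s]
x2_t = x2_0 * (1.0 + (omega * t) ** 2)    # stand-in for the measured <x^2>(t)

tau2 = (x2_t - x2_0) / (x2_0 * omega**2)  # scale invariance predicts tau^2(t) = t^2
print(np.allclose(tau2, t**2))            # True for ballistic (scale-invariant) data

vx2   = np.polyfit(t**2, x2_t, 1)[0]      # slope of <x^2> vs t^2 gives v_x^2
E_rel = 1.5 * m_Li6 * vx2                 # release energy
U     = 1.5 * m_Li6 * omega**2 * x2_0     # trapping potential energy
print(E_rel / U)                          # the virial theorem at unitarity predicts 1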
After performing the evaporative cooling in the isotropic trap, we slowly increase the optical power to 3.8 W in about 75 ms. The temperature is adjusted by controlling the depth of the evaporative cooling. The atom number is N=2.9(3)×10^4. The trapping frequencies are (ω_x, ω_y, ω_z)=2π×(1234(6), 1165(11), 1204(3)) Hz, which are nearly the same along the three axes <cit.>. We switch off the isotropic ODT and measure the cloud width versus the expansion time t at the magnetic field B=834 G. Two laser beams with a frequency difference of 76 MHz, propagating along the vertical and horizontal directions, respectively, are applied to detect the two spin states. Exemplary atomic images during the expansion are shown in Fig. <ref>(b), indicating an isotropic expansion in direct contrast to that of an elongated Fermi gas <cit.>. We use a fringe-removal algorithm <cit.> to reduce the imaging noise, which is helpful in accurately determining the size of the atomic cloud. The cloud radius ⟨x_i^2⟩_t is obtained by fitting a Gaussian distribution to the atomic density profile. In the unitary regime, the cloud radius in the trap ⟨x_i^2⟩_0 can be theoretically calculated, and the temperature is determined by analyzing the atomic density distribution <cit.>. At the temperature T/T_F=0.36(3), values of τ^2(t) are calculated according to Eq. (<ref>). As shown in Fig. <ref>(c), the expansion behaviors along different directions all obey the scaling law τ^2(t)=t^2, which indicates the absence of viscosity effects. This scale-invariant expansion along different directions is unique to a spherical Fermi gas in the unitary regime. For an anisotropic Fermi gas, only the sum of the sizes along the three axes shows the scale-invariant expansion <cit.>. In Fig. <ref>, we measure the atomic expansion at different temperatures. Only the expansion along the x-axis is displayed for simplicity. Due to the finite-temperature effect, the atomic cloud size ⟨x^2(t)⟩ shows an obvious difference, as shown in Fig. <ref>(a). A system at a higher temperature has a larger in-situ cloud radius ⟨x_i^2⟩_0, leading to a faster expansion, which agrees well with the theoretical prediction of Eq. (<ref>). In contrast, the values of τ^2(t) at different temperatures are consistent, all obeying the scaling law τ^2(t)=t^2 (see Fig. <ref>(b)). For comparison, the expansion behavior of a non-interacting Fermi gas (a=0, where a is the s-wave scattering length) is also shown. The Fermi gas in the unitary (a=∞) regime has the same scaled expansion behavior as that of the non-interacting (a=0) Fermi gas. According to the exact property predicted by the hidden SO(2,1) symmetry, the total energy should be twice the trapping potential energy <cit.>. This is called the virial theorem for a unitary Fermi gas, which can be verified using the expansion method. The total energy of the trapped gas is the sum of the trapping potential energy U, the kinetic energy E_kin and the interaction energy E_int, i.e., E_tot=U+E_kin+E_int. After switching off the trapping potential (U → 0), the release energy E_rel=E_kin+E_int remains constant during the expansion process <cit.> and is completely converted into kinetic energy in the long-time expansion. So we only need to demonstrate the equality relation U=E_rel. By fitting the slope of the atomic cloud radii with respect to the expansion time, we obtain the release energy E_rel=(3/2)mv_x^2, where v_x is the expansion velocity along the x axis. We can also determine the trapping potential energy U=(3/2)mω^2 ⟨x^2⟩_0. The experimental results are shown in Fig.
<ref>, where the atomic temperature varies across the Fermi degeneracy. The trapping potential energy U is equal to the release energy E_rel over a wide range of energies. Away from the Feshbach resonance, the scattering length is finite with Δ p≠0 and the scale invariance is broken. We assume a power-law dependence of the chemical potential, μ(n)∝ n^γ, where n is the average atomic density and γ=(n/μ)· dμ/dn is the effective exponent <cit.>. By imposing that the total energy variation vanishes to first order, one gets the energy relation in the BEC-BCS crossover <cit.>, 3γ E_rel=2U. Considering that the bulk viscosity is negligibly small <cit.> and the effect of the shear viscosity in a spherical system is zero, we obtain the equation for the expansion scale factors <cit.>, b̈_i-(ω^2/b_i)Γ^-γ=0, where Γ=b^3_i. Eq. (<ref>) is a decoupled equation for each single direction, where b_i can be easily calculated if the value of γ is known. In particular, in the unitary regime with γ=2/3, Eq. (<ref>) has an analytical solution identical to Eq. (<ref>). In the BCS-BEC crossover, τ^2(t) can be calculated as τ^2(t) =(b^2_i-1)/ω^2. To measure γ at different interactions, we adiabatically ramp the magnetic field from 834 G to the desired value within 300 ms. The power of the optical trap is decreased to about 100 mW to perform the experiment at low temperature, where the trapping frequencies are (ω_x,ω_y,ω_z)=2π× (188(1), 176(2), 187(1)) Hz. From Eq. (<ref>), the value of γ can be obtained with an iterative method. We only consider the expansion along the x axis. The value of γ at zero temperature <cit.> is initially input into Eq. (<ref>) to calculate b_x(t). Then we obtain the cloud radius in the trap from the expansion data, ⟨x^2⟩_0=⟨x(t)^2⟩_t/b(t)^2_x, which is used to determine the trapping potential energy U. We also measure the release energy E_rel from the long-time expansion. Using the energy relation, we obtain a new value of γ and input it into Eq. (<ref>) again for the next iterative calculation. We repeat the calculation until (γ_i+1-γ_i)/γ_i≤10^-5, where i denotes the number of iterations. The obtained γ in the BEC-BCS crossover is shown in Fig. <ref>(b). In the unitary regime (1/k_Fa=0), γ≈2/3, indicating the scale-invariant expansion. On the BEC side (1/k_Fa>0), γ increases towards the molecular condensate limit with γ=1. On the BCS side, γ decreases to some extent. The experimental measurements show the same variation trend as the mean-field calculation at zero temperature <cit.>. The value of γ in the unitary regime does not change with temperature. But due to the finite-temperature effect (in the experiment, T/T_F=0.21(2)), γ is smaller than the zero-temperature calculation on the BEC side, and larger on the BCS side. This can be reasonably understood: as temperature increases, γ tends towards the thermal-gas value γ=2/3. On the BCS side, there is a shallow dip in the zero-temperature calculation. Under the current conditions, we could not resolve the non-monotonic behavior on the BCS side due to the finite-temperature effect and experimental fluctuations. The measurement of γ is helpful to obtain the equation of state, the frequencies of collective modes and the expansion behavior in the BEC-BCS crossover <cit.>. With the obtained γ, we can determine the cloud radius, ⟨x^2⟩_0=γ E_rel/mω^2. Three exemplary expansion behaviors in the BEC-BCS crossover are shown in Fig.
<ref>(a), which display the obvious deviation from that in the unitary regime (1/k_Fa=0) when the interaction strength is tuned away from the resonance. The expansion is fast on the BCS side (1/k_Fa=-0.68) and slow on the BEC side (1/k_Fa=1.1), which can be well calculated with Eq. (<ref>).In conclusion, we experimentally produce a spherical Fermi gas with variable trapping frequencies and tunable interactions. We observe the unique feature of the scale invariance induced by the coexistence of the spherical symmetry and unitary interaction. The scale invariance is broken when the system is tuned away from the resonant interaction. The virial theorem for the unitary Femi gas has been verified. We also measure the effective exponent γ in the equation of state along the BEC-BCS crossover. The unitary Fermi gas in an isotropic trap, a system with the hidden SO(2,1) symmetry, has several exact properties, such as mapping between the trapped problem and the free-space zero-energy problem, relations between the moments of the trapping potential energy and the moments of the total energy, and undamped breathing mode with twice the trapping frequency <cit.>. We also have the opportunity to study the non-equilibrium dynamics in the presence of conformal symmetry <cit.>.We thank Shizhong Zhang and Xi-Wen Guan for carefully reading and revising the paper, and Georgy Shlyapnikov for favorite discussions. This work has been supported by the NKRDP (National Key Research and Development Program) under Grant No. 2022YFA1404102, NSFC (Grant No. 12004398, 12121004, 11974384 and 12374250), CAS under Grant No. YJKYYQ20170025, and Hubei province under Grant No. 2021CFA027.Lu Wang, Xiangchuan Yan and Jing Min contributed equally to this work.10Chin2010 C. Chin, R. Grimm, P. Julienne, and E. Tiesinga.Feshbach resonances in ultracold gases. Rev. Mod. Phys. 82, 1225 (2010).Kinast2005 J. Kinast, A. Turlapov, J. E. Thomas, Q. Chen, J. Stajic, and K. Levin.Heat capacity of a strongly interacting Fermi gas. Science 307, 1296 (2005).Nascimbene2010 S. Nascimbène, N. Navon, K. J. Jiang, F. Chevy, and C. Salomon.Exploring the thermodynamics of a universal Fermi gas. Nature 463, 1057 (2010).Ku2012 M. J. H. Ku, A. T. Sommer, L. W. Cheuk, and M. W. Zwierlein.Revealing the superfluid lambda transition in the universal thermodynamics of a unitary Fermi gas. Science 335, 563 (2012).Sidorenkov2013 L. A. Sidorenkov, M. K. Tey, R. Grimm, Y.-H. Hou, L. Pitaevskii, and S. Stringari.Second sound and the superfluid fraction in a Fermi gas with resonant interactions. Nature 498, 78 (2013).Bardon2014 A. B. Bardon, S. Beattie, C. Luciuk, W. Cairncross, D. Fine, N. S. Cheng, G. J. A. Edge, E. Taylor, S. Zhang, S. Trotzky, and J. H. Thywissen.Transverse demagnetization dynamics of a unitary Fermi gas. Science 344, 722 (2014).Patel2020 P. B. Patel, Z. Yan, B. Mukherjee, R. J. Fletcher, J. Struck, and M. W. Zwierlein.Universal sound diffusion in a strongly interacting Fermi gas. Science 370, 1222 (2020).Li2022 X. Li, X. Luo, S. Wang, K. Xie, X.-P. Liu, H. Hu, Y.-A. Chen, X.-C. Yao, and J.-W. Pan.Second sound attenuation near quantum criticality. Science 375, 528 (2022).Menotti2002 C. Menotti, P. Pedri, and S. Stringari.Expansion of an interacting Fermi gas. Phys. Rev. Lett. 89, 250402 (2002).OHara2002 K. M. O'Hara, S. L. Hemmer, M. E. Gehm, S. R. Granade, and J. E. Thomas.Observation of a strongly interacting degenerate Fermi gas of atoms. Science 298, 2179 (2002).Thomas2005 J. E. Thomas, J. Kinast, and A. 
Turlapov.Virial theorem and universality in a unitary Fermi gas. Phys. Rev. Lett. 95, 120402 (2005).Elliott2014 E. Elliott, J. A. Joseph, and J. E. Thomas.Observation of conformal symmetry breaking and scale invariance in expanding Fermi gases. Phys. Rev. Lett. 112, 040405 (2014).Deng2016 S. Deng, Z.-Y. Shi, P. Diao, Q. Yu, H. Zhai, R. Qi, and H. Wu.Observation of the efimovian expansion in scale-invariant Fermi gases. Science 353, 371 (2016).Cao2011a C. Cao, E. Elliott, H. Wu, and J. E. Thomas.Searching for perfect fluids: Quantum viscosity in a universal Fermi gas. New J. Phys. 13, 075007 (2011).Levin2011viscosity H. Guo, D. Wulin, C.-C. Chien, and K. Levin.Microscopic approach to shear viscosities of unitary Fermi gases above and below the superfluid transition. Phys. Rev. Lett. 107, 020403 (2011).Shafer2017viscosity M. Bluhm, J. Hou, and T. Schäfer.Determination of the density and temperature dependence of the shear viscosity of a unitary Fermi gas based on hydrodynamic flow. Phys. Rev. Lett. 119, 065302 (2017).Cao2011 C. Cao, E. Elliott, J. Joseph, H. Wu, J. Petricka, T. Schäfer, and J. E. Thomas.Universal quantum viscosity in a unitary Fermi gas. Science 331, 58 (2011).Thomas2014anomalousViscosity E. Elliott, J. A. Joseph, and J. E. Thomas.Anomalous minimum in the shear viscosity of a Fermi gas. Phys. Rev. Lett. 113, 020406 (2014).Thomas2015superfluidViscosity J. A. Joseph, E. Elliott, and J. E. Thomas.Shear viscosity of a unitary Fermi gas near the superfluid phase transition. Phys. Rev. Lett. 115, 020401 (2015).Werner2006 F. Werner and Y. Castin.Unitary gas in an isotropic harmonic trap: Symmetry properties and applications. Phys. Rev. A 74, 053604 (2006).Supplemental2023 See Supplemental Material for the formation of an isotropic trap, measurement of the trapping frequency, determination of the atomic temperature and in-situ atomic cloud size, fringe-removal analysis of the atomic images, and hydrodynamic description of the atomic expansion from an isotropic trap.Ho2004 T.-L. Ho.Universal thermodynamics of degenerate quantum gases in the unitarity limit. Phys. Rev. Lett. 92, 090402 (2004).Son2007 D. T. Son.Vanishing bulk viscosities and conformal invariance of the unitary Fermi gas. Phys. Rev. Lett. 98, 020604 (2007).Escobedo2009 M. A. Escobedo, M. Mannarelli, and C. Manuel.Bulk viscosities for cold Fermi superfluids close to the unitary limit. Phys. Rev. A 79, 063623 (2009).Dusling2013 K. Dusling and T. Schäfer.Bulk viscosity and conformal symmetry breaking in the dilute Fermi gas near unitarity. Phys. Rev. Lett. 111, 120603 (2013).Yan2021 X. Yan, D. Sun, L. Wang, J. Min, S. Peng, and K. Jiang.Production of degenerate Fermi gases of 6Li atoms in an optical dipole trap. Chin. Phys. Lett. 38, 056701 (2021).Yan2022 X. Yan, D. Sun, L. Wang, J. Min, S. Peng, and K. Jiang.Observation of the BEC-BCS crossover in a degenerate Fermi gas of lithium atoms. Chin. Phys. B 31, 016701 (2022).Ockeloen2010 C. F. Ockeloen, A. F. Tauschinsky, R. J. C. Spreeuw, and S. Whitlock.Detection of small atom numbers through image processing. Phys. Rev. A 82, 061606 (2010).Stringari1999RMP F. Dalfovo, S. Giorgini, L. P. Pitaevskii, and S. Stringari.Theory of Bose-Einstein condensation in trapped gases. Rev. Mod. Phys. 71, 463 (1999).Cooper1997PRL M. J. Holland, D. S. Jin, M. L. Chiofalo, and J. Cooper.Emergence of interaction effects in Bose-Einstein condensation. Phys. Rev. Lett. 78, 3801 (1997).liExpansionDynamicsSpherical2019 R. Li, T. Gao, D. Zhang, S. Peng, L. Kong, X. Shen, and K. 
Jiang.Expansion dynamics of a spherical Bose–Einstein condensate. Chin. Phys. B 28, 106701 (2019).Stringari2008RMP S. Giorgini, L. P. Pitaevskii, and S. Stringari.Theory of ultracold atomic Fermi gases. Rev. Mod. Phys. 80, 1215 (2008).huCollectiveModesBallistic2004 H. Hu, A. Minguzzi, X.-J. Liu, and M. P. Tosi.Collective modes and ballistic expansion of a Fermi gas in the BCS-BEC crossover. Phys. Rev. Lett. 93, 190403 (2004).Heiselberg2004PRL H. Heiselberg.Collective modes of trapped gases at the BEC-BCS crossover. Phys. Rev. Lett. 93, 040402 (2004).wangOscillatorylikeExpansionFermionic2020 X. Wang, Y. Wu, X. Liu, Y. Wang, H. Chen, M. Maraj, Y. Deng, X.-C. Yao, Y.-A. Chen, and J.-W. Pan.Oscillatory-like expansion of a fermionic superfluid. Sci. Bull. 65, 7 (2020).PitaevskiiPRA1997SOsymmetry L. P. Pitaevskii and A. Rosch.Breathing modes and hidden symmetry of trapped atoms in two dimensions. Phys. Rev. A 55, R853 (1997).ZhoufeiPRA2019Conformal J. Maki and F. Zhou.Quantum many-body conformal dynamics: Symmetries, geometry, conformal tower states, and entropy production. Phys. Rev. A 100, 023601 (2019).ZhoufeiPRA2020Conformal J. Maki and F. Zhou.Far-away-from-equilibrium quantum-critical conformal dynamics: Reversibility, thermalization, and hydrodynamics. Phys. Rev. A 102, 063319 (2020).ZhoufeiPRL2022Conformal J. Maki, S. Zhang, and F. Zhou.Dynamics of strongly interacting Fermi gases with time-dependent interactions: Consequence of conformal symmetry. Phys. Rev. Lett. 128, 040401 (2022).§ SUPPLEMENTAL MATERIALS § PRODUCTION OF AN ISOTROPIC OPTICAL TRAP§.§ Experimental setup of the isotropic optical trap Experimental setup of the isotropic trap is schematically displayed in Fig. <ref>, which isalso shown as Fig. 1(a) in the main text. Here we will display more details about the setup. We prepare ^6Li atomic degenerate Fermi gas with two spin states |F=1/2,m_F=±1/2⟩. A pair of Helmholtz coils (not shown in the figure) produce a homogeneous magnetic field in vertical direction, which is used to tune the interactions. The Feshbach-resonance magnetic field for ^6Li atoms is about 834 G. Some special techniques are applied to produce the isotropic trap. Firstly, A pair of anti-Helmholtz coils produce a gradient magnetic field in vertical direction to simultaneously compensate the gravity force of the two spin states, which is valid for ^6Li atoms because the hyperfine interaction is much smaller than the Zeeman shift at the strong magnetic field applied in the experiment. The magnetic gradient is B'_z=1.05 G/cm. Secondly, two elliptic optical beams with a cross-sectional aspect ratio of √(2), propagating perpendicularly to each other, form the isotropic trap. Under this condition, the trapping frequency can be varied by the optical power. The waist radius of the optical beam is 60 μm and the wavelength is 1064 nm. Two laser beams with a frequency difference of 76 MHz, propagating in vertical and horizonal directions, respectively, are applied to detect two spin states separately. §.§ Production of the elliptic beam with a cross-sectional aspect ratio of √(2) To form an isotropic optical trap, we should produce two elliptical beams whose cross-sectional aspect ratio is √(2). The optical configuration to produce one beam is shown in Fig. <ref>. A 1064 nm laser beam outputs from a polarization-maintaining fiber. The Glan-Taylor prism is used to purify the optical polarization. A set of three cylindrical lens increases the radius in x direction, while maintains that in z direction. 
The focus lengths and distances of the lens are carefully selected to obtain the desired aspect ratio of the cross-section, w_z:w_x=1:√(2). After passing the final achromatic doublets, the aspect ratio reverses, i.e., w_z:w_x=√(2):1. In order to increase the optical stability, the optical devices are constructed with stainless steel mounts, and the experimental setup rests on an air-floating platform.§.§ Gravity compensation with a gradient magnetic field To study interactions between two spin states in an isotropic optical trap with variable trapping frequencies, the gravity force of the two spin states should be simultaneously compensated. For ^6Li atoms at the Feshbach resonant magnetic field of 834 G, the hyperfine interaction (≈ 228 MHz) is much smaller than the Zeeman shift (≈ 1.2 GHz). In this condition, the hyperfine quantum number F is no longer a good quantum number. The energy shift due to the Zeeman effect at this strong magnetic field is Δ E=μ_B(g_J m_J+g_I m_I)B_z,where g is the Landé factor and μ_B is the Bohr magneton. For the two lowest spin states |F=1/2,m_F=±1/2⟩, L=0 and m_J=m_S=-1/2. With g_I≪ g_S, the nuclear contribution can be neglected. Then the energy shift is well approximated byΔ E≃μ_B g_S m_S B_z,where g_S=2. The atoms in two spin states have the same magnetic moment μ=m_S g_Sμ_B=-μ_B. To compensate the gravity force of the two spin states, the magnetic field gradient B'_z can be calculated,μ_B B_z'=mg. In the experiment, we use a pair of anti-Helmholtz coils in z direction to generate a quadrupole magnetic field, which is combined with the Feshbach magnetic field to create a linear magnetic field in z direction. According to Eq. (<ref>), B_z'=1.05 G/cm. §.§ Residual Confinement of the Feshbach Magnetic Field We probe the atomic expansion at a magnetic field of 834 G. So it is important to analyze the residual confinement of the Feshbach magnetic field. A pair of Helmholtz coils in z direction are used to tune the Feshbach resonance. The Feshbach magnetic field has a curvature which gives rise to an additional trapping potential,U_mag=1/2μ_B B_z”z^2.Then the trap frequency due to the magnetic field can be calculated,ω_mag=√(μ_B B_z”/m). Through the parameters of the Feshbach coils, the curvature of the magnetic field in the vertical direction is calculated to be B”_z=0.1 G/cm^2 , while on the horizonal plane, it is negligibly small, i.e., B”_⊥→0 G/cm^2. Then the residual trapping frequency originating from the Feshbach magnetic field is only ω_mag=2π×1.54 Hz, which is much smaller than that of the optical dipole trap. According to Ref. <cit.>, the scale factor b(t) of the atomic cloud after released from the optical trap in z-direction isb(t)=[cos^2(ω_magt)+ω^2_opt/ω^2_magsin^2(ω_magt)]^1/2where ω_opt is the trapping frequency of the optical trap and t is the expansion time in the magnetic field. When ω_mag= 0, the expansion evolves according to b_0(t)=(1+ω^2_optt^2)^1/2, which corresponds to the scale invariant process. The effect of the residual magnetic confinement can be defined by δ_b=(b_0(t)-b(t))/b_0(t), which represents the shift of the atomic cloud size.In the experiment, the trapping frequency of the optical trap is ω_opt≈2π×1201 Hz. For t=2 ms, the longest expansion time in the experiment, δ_b∼ 6×10^-5. So the residual confinement effect of the Feshbach magnetic field can be ignored. §.§ Theoretical analysis on how to form an isotropic optical dipole trap. §.§.§ One Gaussian BeamWe first consider one focused Gaussian beam. 
Suppose that the optical beam propagates along the z axis. Then the optical intensity is I(r,z)=I_0/1+(z/z_0)^2exp[-2r^2/w_0^2], where I_0 is the peak intensity, z_0=π w_0^2/λ is the Rayleigh length, and w_0 is the waist radius. Then the trapping potential is given by U(r,z)=-U_0/1+(z/z_0)^2exp[-2r^2/w_0^2], where U_0=3π c^2Γ/2ω_0^3ΔωI_0 , I_0=2P/π w_0^2. Here c is the speed of light, ω_0=2π c/λ_0, ω=2π c/λ, and Δω=ω_0-ω. In the experiment, the natural line width is Γ=2π× 5.87 MHz for the ^6Li atom, λ_0=671 nm and λ=1064 nm. To determine the trapping frequency of the trap, we expand the trapping potential of Eq. (<ref>) into a Taylor series around the center (x, y, z)=(0, 0, 0), U(r,z)≃-U_0(1-z^2/z_0^2)(1-2r^2/w_0^2)+… ≃-U_0+U_0/z_0^2z^2+2U_0/w_0^2r^2+…. Then U_0/z_0^2z^2=1/2m ω_axial^2z^2, 2U_0/w_0^2r^2=1/2m ω_radial^2r^2, where ω_axial and ω_radial are the trapping frequencies of the optical beam in the axial and radial directions, respectively, ω_axial=√(2U_0/m z_0^2), ω_radial=√(4U_0/m w_0^2). In the experiment, the waist radius w_0 is 60 μm. Then ω_axial/ω_radial≃ 4.0×10^-3. The trapping effect in the axial direction of the optical beam is negligibly small.§.§.§ Two Orthogonal Beams with the Circular Cross-Section We consider two identical Gaussian beams with the circular cross-section. The two optical beams propagate along the x and y axes, respectively. Then the trapping potential can be expressed as U_OPT= U_OPT1+U_OPT2 =-U_0/1+(x/x_0)^2exp[-2(y^2+z^2)/w_0^2]- U_0/1+(y/y_0)^2exp[-2(x^2+z^2)/w_0^2], where x_0=y_0 is the Rayleigh length and w_0 is the waist radius. We expand Eq. (<ref>) into a Taylor series at the point (x, y, z)=(0, 0, 0), U_OPT≃-2U_0+(U_0/x_0^2+2U_0/w_0^2)x^2+(U_0/y_0^2+2U_0/w_0^2)y^2+4U_0/w_0^2z^2+…. As mentioned above, the trapping effect in the axial direction of the optical beam can be ignored, and we only consider the trapping effect in the radial direction. Then the trapping frequencies can be calculated as ω_x=ω_y≃ω_radial, ω_z≃√(2)ω_radial. According to Eq. (<ref>), the trapping frequency in the vertical direction is about √(2) times that in the horizontal direction.§.§.§ Two Orthogonal Beams with the Elliptical Cross-Section According to the above analysis, we cannot form an isotropic trap using beams with the circular cross-section. Here we consider two beams with the elliptical cross-section. The two optical beams propagate along the x and y axes, respectively. The waist radius in the x and y directions is different from that in the z direction, i.e., w_x0=w_y0=w_0, w_x0≠ w_z0. Ignoring the trapping effect in the axial direction of the optical beam as mentioned above, the trapping potential is calculated as U_OPT= U_OPT1+U_OPT2≈ -U_0exp[-2(y^2/w_0^2+z^2/w_z0^2)]- U_0exp[-2(x^2/ w_0^2+z^2/ w_z0^2)]. Expanding Eq. (<ref>) into a Taylor series at the point (x, y, z)=(0, 0, 0), U_OPT≃-2U_0+2U_0/w_0^2x^2+2U_0/ w_0^2y^2+4U_0/w_z0^2z^2+…. Then the trapping frequencies can be calculated, ω_x=ω_y ≃ω_radial, ω_z/ω_x=√(2)w_0/ w_z0. According to Eq. (<ref>), if w_z0=√(2) w_0, for any optical power, the trapping frequencies along the three orthogonal axes are the same, ω_x=ω_y = ω_z. In conclusion, in our experiment, the gravity force is compensated with a gradient magnetic field, and the residual confinement of the Feshbach magnetic field is negligibly small. Then we can form an isotropic optical trap using two orthogonal beams with the elliptical cross-section. 
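As a rough numerical cross-check of the formulas above (a sketch added here, not part of the original supplement: the optical power is assumed to be 3.8 W per beam, and the elliptical waists and the ^6Li fine structure are not treated), the single-beam estimate already gives the right order of magnitude for the trap frequency:

```python
# Order-of-magnitude check (assumptions noted above) of the dipole-trap formulas
#   U_0 = [3*pi*c^2*Gamma / (2*omega_0^3*Delta_omega)] * I_0,  I_0 = 2P/(pi*w_0^2),
#   omega_x ~ sqrt(4*U_0/(m*w_0^2)),
# with the quoted parameters w_0 = 60 um, lambda = 1064 nm, lambda_0 = 671 nm, P = 3.8 W.
import numpy as np

c = 2.998e8                      # speed of light, m/s
kB = 1.381e-23                   # Boltzmann constant, J/K
m = 6.015 * 1.66054e-27          # 6Li mass, kg
Gamma = 2 * np.pi * 5.87e6       # natural linewidth, 1/s
lam0, lam = 671e-9, 1064e-9      # resonance and trap wavelengths, m
w0, P = 60e-6, 3.8               # waist (m) and power of one beam (W)

omega0, omega = 2 * np.pi * c / lam0, 2 * np.pi * c / lam
I0 = 2 * P / (np.pi * w0**2)
U0 = 3 * np.pi * c**2 * Gamma / (2 * omega0**3 * (omega0 - omega)) * I0
omega_x = np.sqrt(4 * U0 / (m * w0**2))

print(f"trap depth U0/kB ~ {U0 / kB * 1e6:.0f} uK")
print(f"omega_x ~ 2*pi x {omega_x / (2 * np.pi):.0f} Hz")
```

The estimate comes out at a depth of a few tens of microkelvin and ω_x of roughly 2π×1.1 kHz, consistent in order of magnitude with the measured trapping frequencies of about 2π×1.2 kHz quoted below.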
The aspect ratio of the cross-section is √(2). The trapping frequency can be varied by the power of the optical beam.§ MEASUREMENT OF THE TRAPPING FREQUENCY In the experiment, the trapping frequency of the trap is determined by measuring the center-of-mass oscillation of the atomic cloud. We perturb the position of the atomic cloud in the spherical trap by controlling a pulse of the elongated optical trap. We tune the relative position of the elongated trap to the spherical trap, simultaneously shifting positions of the atomic cloud in three axes. After switching off the pulse of the elongated trap, the atoms will oscillate in the trap. After different waiting time in the trap, we probe atoms with a time-of-flight (TOF) of 1 ms. The atomic temperature is above the superfluid temperature T_c, and the atomic position x_i (i → x, y, z) is obtained by fitting the density profile using a Gaussian distribution. We can determine the oscillation of the center-of-mass Δ x_i(t)=x_i(t)-x̅_i, where t is the waiting time in the trap, and x̅_i is the mean value of x_i(t). Then we can fit Δ x_i(t) using a sinusoidal function,Δ x_i(t)=A_isin(ω_it+ϕ_i),where A_i is the oscillation amplitude, and ω_i is the trapping frequency in x_i direction.Fig. <ref> shows the measurement results, where the optical power is 3.8 W. The trapping frequencies are (ω_x, ω_y, ω_z)=2π×(1234(6), 1165(11), 1204(3)) Hz, which are almost the same along three axes.§ MEASUREMENT OF THE TEMPERATURE AND IN-SITU ATOMIC CLOUD SIZE§.§ How to obtain the temperature of the unitary Fermi gasThe temperature of the unitary Fermi gas is obtained using the method similar to J. E. Thomas <cit.> and C. J. Vale's groups <cit.>. The normalized one-dimensional profile of a non-interacting Fermi gas is n(r_i(t),T)=-3N/√(π)σ_Fi(t)(T/T_F)^5/2Li_5/2[-exp(μ/E_F-r_i^2/σ_Fi^2(t)/T/T_F)],where N is the total atom number, Li_5/2 is the polylogarithm function, μ is the chemical potential and σ_Fi(t) is the Thomas-Fermi radius of the atomic cloud after released from the trap.The fitting procedure for the unitary Fermi gas is similar to that of the non-interacting gas as mentioned above, except that the Fermi radius σ_Fi and Fermi temperature T_F should be replaced by σ_Fi^* and T_F^* , respectively. At unitarity, σ_Fi^*=(1+β)^1/4σ_Fi and T_F^*=(1+β)^1/2T_F , where β is a universal constant, and the fitting profile becomes n(r_i(t),T)=-3N/√(π)σ_Fi^*(t)(T̃)^5/2Li_5/2[-exp(q-r_i^2(t)/(σ_Fi^*(t))^2T̃)],where q=μ/(E_FT̃) , E_F=ħω_0(3N)^1/3 and T̃ is the empirical temperature given by T̃=T/T_F√(1+β). The expansion behavior is known in the unitary regime. σ_Fi^*(t) can be calculated by the atom number and trapping frequency. Another way to obtain σ_Fi^*(t) is to fit the atomic density profile using the zero-temperature Thomas-Fermi distribution at the lowest temperature. We fix σ_Fi^*(t) constant for the fits at all higher temperatures, leaving only T̃ and q as the free parameters. Then we obtain the value of T/T_F from Eq. (<ref>), where β=-0.56<cit.>. §.§ How to obtain the in-situ atomic cloud size of the unitary Fermi gas in finite temperature At finite temperature, for a non-interacting Fermi gas, the in-situ mean square size ⟨ r_i^2(T)⟩ _0 is given by⟨ r_i^2(T)⟩ _0=σ_Fi^2/8E/E_0(T/T_F)where σ_Fi=√(2E_F/mω_0^2) is the Fermi radius of the atomic cloud in the trap, E_0 is the ground energy.For the unitary Fermi gas, ⟨ r_i^2(T̃)⟩ _0=(σ_Fi^*)^2/8E/E_0(T̃). As seen in Ref. 
<cit.>, with the same value of T/T_F and T̃ for the non-interacting Fermi gas and unitary Fermi gas, E/E_0(T/T_F)=E/E_0(T̃)̃. After knowing the temperature T̃ of the unitary Fermi gas using the method mentioned above, the value of E/E_0(T̃)̃ can be obtained. According to Eq. (<ref>) and the relation σ_Fi^*=(1+β)^1/4σ_Fi , we can obtain the in-situ size ⟨ r_i^2⟩ _0 of the unitary Fermi gas in finite temperature.§ OPTIMIZATION OF THE ATOMIC IMAGE USING THE FRINGE-REMOVAL ANALYSIS To obtain the density distribution of the gas, three atomic images should be taken, respectively, as P_abs, P_ref and P_bg. P_abs is the absorption image with atom and light, P_ref is the reference image without atom, and P_bg is the background image without light and atom. The optical density (OD) distribution is obtained fromP_od=ln(P_ref-P_bg/P_abs-P_bg). Due to changes in the intensity and spatial position of the imaging light, P_ref is different from the background of P_abs, leading to fringes and other noises in P_od (see Fig. <ref>(a)). As known in Ref. <cit.>, these noises can be reduced by using an algorithm to synthesize a new reference image P_refn, which is closest to the background of P_abs. Replacing P_ref with P_refn in Eq. (<ref>), we can optimize the atomic image P_od. A set of reference images compose a background library R, whose linear superposition gives P_refn byP_refn=RC.The coefficient matrix C is determined by setting the least square difference between P_refn and P_abs in the regions without absorption of atoms. Setting the partial derivative of the square difference with respect to the coefficient to zero, a set of equations are obtained, where the solutions give the coefficients.Fig. <ref> displays an example of the image optimization. Without optimization, there are many fringes and noises in the background, as shown in Fig. <ref>(a). After optimization, the fringes and noises are greatly reduced in Fig. <ref>(b). In Fig. <ref>(c), we use a Gaussian function to fit the one dimensional OD,OD(x)=OD_0+Aexp(-x^2/2σ^2_x).Without optimization, OD_0=0.63±0.13, σ_x=27.26±0.87, A=10.73±0.27, and χ^2 of the fitting is 0.45. After optimization, OD_0=0.20±0.09, σ_x=25.95±0.62, A=10.93±0.20, and χ^2 of the fitting is 0.26. It can be seen that, through image optimization, the background level and fitting uncertainty are well reduced. The atomic cloud radius also changes, which should be more accurate.§ HYDRODYNAMIC DESCRIPTION OF THE ATOMIC EXPANSION FROM AN ISOTROPIC TRAPA Gaussian distribution is used to fit the atomic density profile, n(x)=A e^(-x^2/2σ_x^2), and the fitted value of σ_x is related to the mean square cloud size in x direction by⟨x^2⟩=1/N∫nx^2dx=σ_x^2.We consider how ⟨x_i^2⟩ (i = x, y, z) evolves with expansion time t,d⟨x_i^2⟩/dt =1/N∫∂ n/∂ tx_i^2dx.Using the continuity equation of the hydrodynamic description for one component fluid <cit.>, ∂ n/∂ t+∇·(n 𝐯)=0, Eq. (<ref>) can be written asd⟨x_i^2⟩/dt= 1/N∫[-∇·(n 𝐯)]x_i^2dx = 1/N∫[-∇·(x_i^2)]n 𝐯dx+ 1/N∫[-∇·(x_i^2n 𝐯)]dx =1/N∫[-∇·(x_i^2)]n 𝐯dx =1/N∫2x_i n v_idx=2⟨x_iv_i⟩.Using the same procedure, we can write the evolution of ⟨x_iv_i⟩ with time:d⟨x_iv_i⟩/dt= 1/N∫∂ n/∂ tx_iv_idx + 1/N∫nx_i∂ v_i/∂ tdx =1/N∫[∇·(x_iv_i)]n 𝐯dx+ 1/N∫nx_i∂ v_i/∂ tdx =⟨x(∂ _t+𝐯·∇)v_i⟩ +⟨v_i^2⟩.Combining Eq. 
(<ref>) and (<ref>), we obtaind^2/dt^2 ⟨x_i^2⟩/2=⟨x_i(∂ _t+𝐯·∇)v_i⟩ +⟨v_i^2⟩.The first term is similar to Euler's equation,mn(∂ _t+𝐯·∇)v =-∂ _i P-n∂ _iU+∑_j∂ _j(ησ_ ij+ζσ'δ_ij),where m is the atomic mass, P is the scalar pressure, U is the trapping potential energy, and the last term on the right side denotes the friction forces due to shear η and bulk ζ viscosity. The viscosity can be written as η≡α_S ħ n and ζ≡α_B ħ n. σ_ij≡∂ v_i/∂ x_i+∂ v_j/∂ x_j-2/3δ_ij∇·𝐯, σ' =∇·𝐯. We then take the density averaged product of the Euler's equation with a position component, Eq. (<ref>) can be written as1/N∫ nx_i (∂ _t+𝐯·∇)v_i d ^3r=-1/Nm∫ x_i ∂ _iP d^3r -1/Nm∫ x_i n∂ _iU d^3r +1/Nm∑_j∫ x_i∂ _j(ησ_ ij+ζσ'δ_ij)d^3r =1/Nm∫ P d^3r -1/Nm∫ x_i n∂ _iU d^3r -1/Nm∫ (ησ_ii +ζσ')d^3r.The pressure, trapping potential and viscosity terms must be zero for x_i→±∞. Then combining Eq. (<ref>) and (<ref>), we obtaind^2/dt^2⟨x_i^2⟩/2 =1/Nm∫ P d^3r -1/m⟨ x_i∂ _iU ⟩ -ħ/m⟨α_Sσ_ii + α_Bσ'⟩ +⟨v_i^2⟩.Eq. (<ref>) determines the evolution of the mean square cloud radius, which depends on the conservative forces from the scale pressure and the trapping potential, as well as the dissipative forces arising from the shear and bulk viscosity. For a spherical system, expansion behaviors in all directions are the same, i.e., ∂ v_i/∂ x_i =∂ v_j/∂ x_j=∂ v_k/∂ x_k. Thenσ_ii ≡∂ v_i/∂ x_i +∂ v_i/∂ x_i -2/3δ_ii∇·𝐯 =∂ v_i/∂ x_i +∂ v_i/∂ x_i -2/3(∂ v_i/∂ x_i +∂ v_j/∂ x_j+∂ v_k/∂ x_k)=0,σ'=∇·𝐯= ∂ v_i/∂ x_i +∂ v_j/∂ x_j+∂ v_k/∂ x_k =3∂ v_i/∂ x_i.For simplicity, we only consider the atomic expansion in x direction. Inserting Eq. (<ref>) into Eq. (<ref>), we can find that for a spherical system, the evolution of the mean square cloud radius can be written asd^2/dt^2⟨x^2⟩/2 =1/Nm∫ P d^3r -1/m⟨ x∂ _xU ⟩ -ħ/m⟨ 3α_B∂_xv_x⟩ +⟨v_x^2⟩.Now we need to eliminate ⟨v_x^2⟩ by using the energy conservation equation:d/dt∫d^3r (n1/2m𝐯^2+ε+nU) =0.At t⩾0^+, atoms are released from the optical trap (U=0),t=0^+E=1/N∫d^3r ε_0, t>0^+E =1/N∫d^3rε+m/2⟨𝐯^2⟩.Taking P≡ P-2/3ε into Eq. (<ref>),1/N∫d^3rε_0 = 3/2N∫d^3rP_0 - 3/2N∫d^3r P_0= 1/N∫d^3r ε+m/2⟨𝐯^2⟩.Then we find⟨ v^2 ⟩=2/m (3/2N∫d^3rP_0 - 3/2N∫d^3r P_0 - 1/N∫d^3rε) =3/m⟨ r·∇ U ⟩_0 - 3/mN∫d^3r P_0 - 2/mN∫d^3rε .For a spherical system, ⟨ v_x^2 ⟩=⟨ v^2 ⟩/3.Before release from the trap at t=0^-, 𝐯=0. Eq. (<ref>) can be written as1/N∫d^3r P_0 =⟨x∂_x U⟩_0,where the subscript ()_0 describe the initial condition in the trap. Combining Eqs. (<ref>), (<ref>) and (<ref>), we obtaind^2/d^2 tm⟨x^2⟩/2=⟨ x∂_x U⟩_0 - 1/N∫d^3r P_0 - 2/3N∫d^3rε+1/N∫d^3r P + 2/3N∫d^3rε - ħ⟨ 3α_B∂_xv_x⟩=⟨ x∂_x U⟩_0 + 1/N∫d^3r ( P- P_0) - ħ⟨ 3α_B∂_xv_x⟩.This is the atomic expansion evolution from an isotropic harmonic trap. For a unitary Fermi gas, P=0 and α_B=0. Eq. (<ref>) can be written asd^2/dt^2m⟨x^2⟩/2=⟨x ∂ U/∂ x⟩_0,⟨x^2⟩=⟨x^2⟩_0+ ω_x^2⟨x^2⟩_0t^2.Eqs. (<ref>) and (<ref>) display the scale-invariant expansion of a unitary Fermi gas from an isotropic trap, which is similar to a non-interacting Fermi gas. While away from the resonant interaction, P≠0 and α_B≠0, the scale-invariant expansion will be broken. 1ZhoufeiPRA2020Conformal J. Maki and F. Zhou.Far-away-from-equilibrium quantum-critical conformal dynamics: Reversibility, thermalization, and hydrodynamics. Phys. Rev. A 102, 063319 (2020).Kinast2005 J. Kinast, J. E. Turlapov, A.and Thomas, Q. Chen, J. Stajic, and K. Levin.Heat capacity of a strongly interacting Fermi gas. Science 307, 1296 (2005).veeravalliBraggSpectroscopyStrongly2008a G. Veeravalli, E. Kuhnle, P. Dyke, and C. J. 
Vale.Bragg spectroscopy of a strongly interacting Fermi gas. Phys. Rev. Lett. 101, 250403 (2008).luo2009thermodynamic L. Luo and J. E. Thomas.Thermodynamic measurements in a strongly interacting fermi gas. J. Low Temp. Phys. 154, 1 (2009).Ockeloen2010 C. F. Ockeloen, A. F. Tauschinsky, R. J. C. Spreeuw, and S. Whitlock.Detection of small atom numbers through image processing. Phys. Rev. A 82, 061606 (2010).Cao2011 C. Cao, E. Elliott, J. Joseph, H. Wu, J. Petricka, T. Schäfer, and J. E. Thomas.Universal quantum viscosity in a unitary Fermi gas. Science 331, 58 (2011).Cao2011a C. Cao, E Elliott, H. Wu, and J. E. Thomas.Searching for perfect fluids: Quantum viscosity in a universal Fermi gas. New J. Phys. 13, 075007 (2011).houScalingSolutionsTwofluid2013 Y. Hou, L. P. Pitaevskii, and S. Stringari.Scaling solutions of the two-fluid hydrodynamic equations in a harmonically trapped gas at unitarity. Phys. Rev. A 87, 033620 (2013). | http://arxiv.org/abs/2311.15779v1 | {
"authors": [
"Lu Wang",
"Xiangchuan Yan",
"Jing Min",
"Dali Sun",
"Xin Xie",
"Shi-Guo Peng",
"Mingsheng Zhan",
"Kaijun Jiang"
],
"categories": [
"cond-mat.quant-gas",
"physics.atom-ph"
],
"primary_category": "cond-mat.quant-gas",
"published": "20231127125239",
"title": "Exploring scale invariance in the expansion of a spherical unitary Fermi gas"
} |
Bifurcation diagrams for spacetime singularities and black holes Spiros Cotsakis^1,2, ^1Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, United Kingdom ^2Institute of Gravitation and Cosmology, RUDN University, ul. Miklukho-Maklaya 6, Moscow 117198, Russia November 2023 ==================================================================================================== We reexamine the focusing effect crucial to the theorems that predict the emergence of spacetime singularities and various results in the general theory of black holes in general relativity. Our investigation incorporates the fully nonlinear and dispersive nature of the underlying equations. We introduce and thoroughly explore the concept of versal unfolding (topological normal form) within the framework of the Newman-Penrose-Raychaudhuri system, the convergence-vorticity equations (notably the first and third Sachs optical equations), and the Oppenheimer-Snyder equation governing exactly spherical collapse. The findings lead to a novel dynamical depiction of spacetime singularities and black holes, exposing their continuous transformations into new topological configurations guided by the bifurcation diagrams associated with these problems. § INTRODUCTION A fundamental attribute of strong gravitational fields is the Hawking-Penrose prediction of spacetime singularities in the gravitational collapse to a black hole and in cosmology (cf. standard papers <cit.>-<cit.>, and books <cit.>-<cit.>). The Hawking-Penrose analysis generalized the first mathematical model of a black hole by Oppenheimer and Snyder <cit.>, was based on the focusing effect due to the Ricci curvature, and can be best described using the language of the causal structure of spacetime. These works contain results that predict the existence of singularities at the centre of black holes and in general cosmological models in the form of causal geodesic incompleteness, and offer first evidence as to how spacetime may behave inside black holes, or near cosmological singularities, e.g., the area theorem, trapped surfaces inside an event horizon, the caustics formed by the intersection of geodesics on approach to the singularity due to the focusing effect, etc. The Einstein equations are not used in the Hawking-Penrose works except indirectly through the energy conditions, and there only in order to obtain the focusing effect. (This effect was first noted in Refs. <cit.>-<cit.>, but its central significance for general relativity was only clearly realized with the appearance of the Hawking-Penrose theorems on singularities and black holes.) Instead, the main equations used by Hawking-Penrose for this purpose are the so-called Raychaudhuri equation that describes the rate of change of the expansion (or convergence) of the geodesic congruence, the volume (or area) equation that describes the rate of change of volume (or area) associated with the geodesic congruence, and additional equations that describe changes in the shear and in the vorticity of the congruence. 
The shear equation is combined with the Raychaudhuri equation and together describe the rates of change of the convergence and the shear of the congruence in a set of equations called the Newman-Penrose-Raychaudhuri system (cf. e.g., <cit.>), and the vorticity equation is also combined with the Raychaudhuri equation in a form which appears as a subsystem of the Sachs optical equations (cf. e.g., <cit.>), below we shall call this the `convergence-vorticity' system. (In fact, the convergence-vorticity system is not really used or needed in the derivation of the focusing effect.)The deployment of the focusing effect in the proofs of the singularity theorems and other related results is very well-known, as is its use in the various theorems, in conjunction with other assumptions, in particular, the generic assumption and the energy condition. The combined use of the focusing effect with these physical or plausible assumptions leads to the singularity theorems and other basic results, the proofs ofwhich involve the methodsof causal structure in general relativity <cit.>-<cit.>,<cit.>-<cit.>.However, the successful exploitation of the focusing effect and its use in combination with the energy and generic conditions in the proofs of the singularity theorems when working with the nonlinear equations such as those that describe the large-scale structure of spacetime, lead us to ask two more general questions about the basic approach to such equations:*Given a nonlinear system of equations, how do we study the way an equation in the system interacts with or influences another?*What is the relation between the structural (in-)stability of the nonlinear system itself and the genericity or globalstability of its solutions?It is obvious that bothquestions apply to the nonlinear systems used when studying spacetime singularities, and so both questionsbecome relevant in the present context.For a dynamical system of the form Ẋ=F(X), F being some smooth function of X,the solution X is generally speaking influenced by two factors,an initial condition (datum) X_0, and the nonlinear `forcing' term F(X). The main issue is to understand the `feedback loop',in which the solution X influences the forcing term F(X) which in turn influences the solution.There are cases in which instead of looking at the full nonlinear system and nonlinear feedback effects, one is able to isolate and capture distinctive features in the behaviour of the problem by reducing the problem to a scalar equation. This usually becomes possible through the use of physical assumptions or special structures present in the original system, and using those one may end up with a linear feedback effect that may provide a viable approach. In this way, we may be studying the full nonlinear feedback effect by acquiring control of only the linear part of it.In fact, this is a viable method when dealing with essentiallynonlinear systems for which it is difficult to separate what the effects of the linear and nonlinear feedback on the solution really are. This is a standard way of approach,particularly in the class of dispersive problems, that is those described by equations that share some sort of degeneracy or instability, cf. <cit.>.Let us now move on to a brief discussion of the second question. Gravitating systems describing instabilities such as those studied in this work, are all described by structurally unstable systems of equations. 
This raises the question of what exactly one means by the word `generic' for a structurally unstable system because of the following reason. In the space of all vector fields the non-generic ones can be thought to lie on a hypersurface of some finite codimension[To study problems with degeneracies of infinite codimension is also possible, but in this work all systems have finite, and in general a small codimension.], with the generic systems occupying the complement - the non-generic systems lie on the boundaries of the generic domains (cf. <cit.> for a detailed discussion). A small perturbation of a non-generic system will thentake it off that hypersurfaceto the domain of the generic ones. This is perhaps the main reason why under normal circumstances one's attention is driven away of non-generic systems and focuses almost exclusively to the generic ones.However, consider the transversal intersection (i.e., at nonzero angle) of a curve (i.e., a 1-parameter family) of systems with the non-generic boundary surface. Under a small perturbation, this family will again intersect that surface at some nearby point, and so although a single non-generic system can be made generic by perturbation, it is not possible to achieve this with all members of a family. In general, it is not possible to remove degeneracies of codimension not exceeding k in k-dimensional families, but all degeneracies of higher codimension are removable in such families. This argument shows that the natural object to study is not the original vector field but the one that has the right codimension, so that its degeneracies do not disappear upon perturbation. Objects with the `right' codimension can be constructed starting from some degenerate one, using the subtle rules of bifurcation and singularity theory (cf. e.g., <cit.>, <cit.>, <cit.>).In this work, we take up this problemfor the systems involved in the original analysis of Hawking-Penrose that led to the singularity theorems and black holes. In a sense, in this work we provide an answer to the problem posed in the book <cit.>, p. 363[In the coming decades since this sentence was written, catastrophe theory was eventually taken to imply a general term describing possible applications of bifurcation theory and singularity theory (by the latter we mean the singularity theory of functions, cf. e.g., <cit.>, <cit.>). In fact, we shall not refer to `catastrophe theory', but use instead `bifurcation theory' as a general term that encompasses all three.]: ... It may also be that there is some connection between the singularities studied in General Relativity and those studied in other branches of physics (cf. for instance, Thom's theory of elementary catastrophes (1969)) ... To be more precise, we shall provide a complete analysis based on bifurcation theory of the following three systems:*The Newman-Penrose-Raychaudhuri system*The convergence-vorticity system*The Oppenheimer-Snyder system.It is a remarkable fact that as seen from the present perspective, the original analysesby S. W. Hawking and R. Penrose constitute the first ever bifurcation calculation and analysis in general relativity. 
In particular, their treatment of the focusing effect (through their employment of the energy and generic conditions andsubsequent applications to the study of singularities and black holes) exactly corresponds to an analysis of the versal unfolding associated with a codimension-1reduction of the full Newman-Penrose-Raychaudhuri system.From this point of view, the results discovered in the original papers <cit.>-<cit.> (and subsequently described in varioussources such as <cit.>-<cit.>) provide the appropriate basis for the analysisperformed in this work.The plan of this paper is as follows. In the next Section, we offer a guide for the reader aboutthe most important results of the subsequent sections. Section 3 is a summary of some of the basic ideas of bifurcation and singularity theory, which form the basis of our subsequent developments. In Section 4, we present a review of the focusing effect,introduce the idea of a bifurcation theory approach forspacetime singularities and black holes, and examine how the Hawking-Penrose pioneeringanalysis is closely related to bifurcation theory and the feedback loop problem. In Section 5-7, the bifurcation treatment of the three main systems mentioned above is fully developed. In Section 8, we present some first applications of our results to the problem of singularities in general relativity, only with the purpose of providing a few examples of the possible breadth and probable importance that a bifurcation theoryapproach has to offer to the problem of the nature of spacetime singularities, black holes, and related issues. Some extra discussion is also given in the last Section of this paper.§ SUMMARY OF THE MAIN RESULTS OF THIS PAPERIn this Section, we provide a brief summary of some of the results in subsequent Sections.In the next Section, we develop some bifurcation theory ideas with a view to their subsequent applications in later Sections. The main purpose is to acquaint the reader with the symbolic sequence (<ref>) which describes a basic message of bifurcation theory. Namely, starting with a system which has degeneracies (as inthe `original system' in (<ref>)), the way to study these through bifurcation theoryis to first obtain the normal form of the original system. This is usually a different (or topologically inequivalent) system than that we started with.The principal reason to find the normal form of the original system, and not work directly with the latter, is because the structure of the nonlinear terms that affect the solutions of a degeneratenonlinear system is determined by its linear part, and such crucial nonlinear terms may not be fully present in the original form in which the system is given. The normal form procedure is described in some detail in subsection <ref>, whereas in the first part of Section <ref>, we introduce basic ideas of the Poincaré program for bifurcation theory: structural (in-)stability, stability of solutions and perturbations of unstable systems,the idea of genericity, types of degeneracies present in such systems and, finally, the bifurcation diagram. The most important idea in this Section is that of a versal unfolding,treatedin Section <ref>, and the closely related notions of stratification and moduli. 
Both of these are crucial for the construction of the bifurcation diagram.In Section <ref>, we introduce the three main systems mentioned above,and in Section <ref> we briefly review the standard argument for the focusing effect and how it leads to the global theorems about the structure of singularities and black holes, before we embark on the bifurcation theory approach to this problem in Section <ref>. In this latter Section, we show that the focusing effect corresponds to the linear part of the feedback loop for the NPR-system, and also show how the original Hawking-Penrose treatment of it closely resembles the modern approach employed in this work. In addition, we discuss how the original analysis of Hawking-Penroseclearly points to the need for consideration ofnonlinear feedback effects, and we provide a description of what such an analysis would entail.In Sections <ref>-<ref>, we provide a detailed bifurcation analysis of each one of the three systems mentioned earlier. This analysis is performed in a number of different steps, but the main results are presented in a concise form infour bifurcation diagrams given in the following figures: Fig. <ref> for the NPR-system, Fig. <ref> for the convergence-vorticity system, and Figs. <ref>, <ref> for the Oppenheimer-Snyder system.For these diagrams we make the following remarks with a purpose of making their understanding somewhat smoother. Firstly, there are certain structures common to all four, namely, the existence of a central, `parameter diagram', which is stratified in `subregions', and secondly, the placement of corresponding phase portraits in each one of them. We imagine that as the parameter point moves in any of the parameter planes in the four bifurcation diagrams, the corresponding phase portraits smoothly deform to one another, producing the famous `metamorphoses' (or `perestroikas' in other terminology) of bifurcation theory, here, however,in a gravitational context. Some of these phenomena are briefly discussed in Section 8 of this work, and some extra comments are also given in the last Section.For the reader who has some acquaintance with the basic terminology of bifurcation theory and with standard results from the theory of global spacetime structure, one way to obtain a quicker summary of this workis this: after a review of the three basic systems in Section <ref>, read through Section <ref>, and then have a look at the four bifurcation diagrams in the Figures <ref>, <ref>, <ref>, and <ref>. An introduction to the main metamorphoses of singularities and black holes is then given in Section 8. Thework for all the proofs of the main statements and constructions in this paper is presented (with some brevity!) in Sections 5-7.§ BIFURCATION THEORY: DEGENERACY, INSTABILITY, AND VERSALITYIn Section <ref>, we discuss general aspectsof bifurcation theory such as the idea of instability as it emerges in the study of structurally unstable systems, genericity and degeneracy, and an overview of Poincaré's program to study these issues. In Section <ref>, we discuss the normal form theorem, which leads to a first familiarity with certain novel fundamental dynamical aspects of the three main systems studied later in this work. In the Section <ref>, a further discussion is given of more advanced material from singularity and bifurcations. This material is about the ideas of codimension and stability of bifurcating families, unfoldings, and versality in general. 
Last, we include a discussion of the bifurcation diagram, the cornerstone of any analysis of degenerate problems.§.§ General remarks on bifurcations §.§.§ Intuitive discussionIt has been said that bifurcation theory describes the behaviour of solutions of a dynamical system as the parameters of the system change. This is of course true, and thatis perhaps a standard definition of the subject. In bifurcation theory problems, onealways ends up studying a dynamical system which depends on one or more parameters, and observes how the behaviour and/or number of solutions change as the parameters of the system pass through some `bifurcation set' (cf.standard references of this subject, e.g.,<cit.>-<cit.>).However, this definitionmay givethe misleading impression that bifurcation theory enters the scene only when some parameter is present in the problem. On the contrary,bifurcation theory is the only mathematical field solely devoted to the study of instabilities. From the growth of a population to the saddle-node bifurcation, from the simple harmonic oscillator to the Hopf bifurcation,from the pitchfork bifurcation to the Lorenz system, or in the stable versalfamilies of diverse degenerate unstable systems, one gradually becomes acquainted with the unfamiliar but fundamental fact that to correctly account for unstable phenomena one has to extend, or `unfold', the original system describing them just so much as to reach a stable parametric family, without at the same time removing the defining degeneracies of the original system.To properlyperform this extension andfully study the `unfolded dynamics' of the resulting parametric families, represents the glorious mathematical developments of bifurcation, catastrophe, and singularity theory over a period of more than a century.§.§.§ Stable and unstable systemsBefore we proceed further, we briefly discuss the difference between stable and unstable systems.It is a central lesson of bifurcation theory that, given an unstable or special solution, it isinadequate to perturb only itself in order to see if it stabilizes. Ideally, and perhaps more importantly, one needs to perturb the system itself to a point where a stable family of systems containing the original one is reached.In the space of all dynamical systems, we have structurally stable and structurally unstable systems. A structurally stable system is one whose behaviour can be deduced from that of its linearization, and as such it has, for example, only hyperbolic fixed points. If a system is not structurally stable, it is called a structurally unstable, dispersive, or bifurcating system (we shall avoid the finer differences that exist in the meanings of these three terms and consider them as synonymous).There was indeed a time during the sixties and the seventies when many people were led to believe that only structurally stable systemsare important, or more common andabundant, and called such systems `generic' meaning typical orretaining their form and properties under perturbations[It is an interesting historical fact that the first book on the use and importance of bifurcation theory in science (in that case biology) by R. Thom <cit.> in 1972, had the title `Structural Stability and Morphogenesis', even though it studies the different ways that structure may emerge from changes of different forms that may arise in unstable systems. 
In that book, the foundations for a bifurcation theory approach to all of science were discussedin both scientific and philosophical terms, and the fundamental idea of structural stability of families was laid down for the first time.]. This led to the general tendency to distinguish or `prefer' the structurally stable systemsfrom the non-generic or physically implausible ones that represented special cases, and so devoid of any physical significance. This had the unfortunate consequence in some cases to totally neglect the latter as being unimportant.§.§.§ Genericity and degeneracyThe development of bifurcation theory (and also its sister field `singularity theory') in the last half-century or so has shown that an approach based only on individual `generic' or structurally stable systems is rather naive, if not totally wrong. It is important to clarify first of all whether or not the givensystem at hand is structurally unstable and if yes, its exact type of `degeneracy', because otherwise there is a real danger to treat such a system as stable one when it is not. In fact, an individualstructurally stable nonlinear system is in a sense uninteresting because its behaviour is essentially linear, and so nonlinearities do not offer anything new.Secondly, it has become apparent that various kinds of degeneracies,such as zero eigenvalues, are the rule rather than an exceptionin nonlinear systems, and therefore cannot really be avoided for reasons of convenience or `simplicity'. In turn, structurally unstable systemsappear everywhere[This constitutesa kind of paradox (an `unstable trap' so to speak) associated with structurally unstable systems: since they usually appear as solitary, individual curiosities, they can be easily mixed up with uninterestingsystems of no physical importance.] and, although they can individually be perturbed to stable ones, this cannot be done at all for unstable families of systems: If for some value of the parameter present in a family, one perturbs the resulting unstable system to a structurally stable one, then the degeneracy and non-genericity are avoided for that parameter value but appear again for another. It is thus impossible to perturb an unstable family of systems into a stable one for all values of the parameters present in the system simultaneously.For these reasons, we shall only focus on structurally unstable, `non-generic' systems. As we discussed above, such systems become unavoidable when considered in the context of parametrized families.§.§.§ Poincaré's programThe approach of bifurcation theory to the study of dynamical systems that describe unstable phenomena consists of threesteps.* Normal form theory (this is the `static' part): Given an unstable system (we shall only deal below with vector fields), put it in a`simplified' form using normal form theory: By a coordinate transfomation[These transformations are those of the unknown functions and their derivatives as these enter in the `field equations' of the problem, and have nothing to do with the coordinate transformations usually considered in general relativity. They represent the coordinates in the phase space of the given problem.], rewrite it in a way that exhibits only the `unremovable' terms at each level in a perturbation expansion. Sometimes this leads to a new form of the system, where many (perhaps all) terms at a given order may be absent (as they can be eliminated). 
Of course, as we shall see, this merely indicates the need for the consideration of higher-order terms.* Singularity theory (this is the `kinematic' part):Find all possible (topological) extensions, or unfoldings,of the normal form system that was obtained in the previous step. In some cases, one is able to reach a universal form containing all possible such extensions, the `versal unfolding'. Here one introduces various kinds of parameters, called `modular' and `standard' parameters respectively,as dictatedby the nature of the problem, and the determinancy of the degenerate vector field (in general, the determinancy of the vector field is not equivalent to that of the unfolding)[We note that while determinancy is a highly non-trivial process for a vector field with some degeneracy (and we need to include higher-order terms), it is trivial for a structurally stable vector field, as the latter is completely determined by the jacobian of its linear part as per the Grobman-Hartman theorem.].* Bifurcation theory (this is the `dynamical' part)[Below we shall use the word `bifurcation', perhaps somewhat degenerately, to cover all three steps of the analysis.]: Study the dynamics of the unfoldings and construct the bifurcation diagram. The unfoldings respect the symmetries and other characteristics of the original system, and in the case of a versal unfolding, contain all possible forms of instability that the original system may exhibit - they are stable with respect to any perturbation. In a sense, the versal unfolding determines the bifurcation diagram completely. The latter containsall possible phase portraits andpossible parameter regions,gives the overall and complete behaviour of any perturbation associated with the original system, and mostimportantly it describes all metamorphoses of the phase portraits of the system[We note that singularity theory may be described as one where only metamorphoses of equilibria, but not phase portraits, can be given.]. This is a far-reaching generalization and refinement of the original approach to physical science. In essence, changing the parameters `kinematically' in the resulting families is the way to completely describe the possible instabilities of the system without going outside the family - a new form of structural stability, this time referring to families.One thus achieves a major goal, to arrive at a (or, perhaps better, `the') global picture of all instabilities,how they are all related to each other viatheir metamorphoses - smooth changes in the phase portraits. This is essentially Poincaré's program for bifurcation theory, which aims to discover all possible forms of behaviour of unstable systems in a self-consistent, systematic way.Up until the present day,this program is far from being completed, despite the very substantial progress by many mathematicians over a period of more than 100 years.One central idea in bifurcation theory is the global bifurcation diagram. This isa set of distinct (topologically inequivalent) diagrams each having the following structure: a set of qualitatively different phase portraits corresponding to different regions of the parameter diagram of the system. In fact, constructing the bifurcation diagram of a given dynamical system is the key step in understanding all possible dynamical behaviours associated with the system as well as those of all dynamical systems that lie near it (in a suitable sense), and describing all stable perturbations of it. 
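As the simplest concrete illustration of a bifurcation diagram (a standard textbook example added here, not one of the gravitational systems of this paper), consider the saddle-node family ẋ=μ+x^2 mentioned earlier: the parameter line is stratified into μ<0, μ=0 and μ>0, and one phase portrait is attached to each stratum. A minimal sketch in code:

```python
# Bifurcation diagram of the saddle-node family x' = mu + x**2 (textbook example):
# the parameter line splits into three strata -- mu < 0 (two hyperbolic equilibria),
# mu = 0 (one degenerate equilibrium), mu > 0 (no equilibria) -- and one
# qualitative phase portrait corresponds to each stratum.
import numpy as np

def equilibria(mu):
    """Equilibria of x' = mu + x**2 with their stability (from f'(x) = 2x)."""
    if mu > 0:
        return []                                    # empty stratum: no equilibria
    if mu == 0:
        return [(0.0, "degenerate (saddle-node point)")]
    root = np.sqrt(-mu)
    return [(-root, "stable"), (root, "unstable")]

for mu in (-1.0, 0.0, 1.0):                          # one representative per stratum
    print(f"mu = {mu:+.1f}:", equilibria(mu) or "no equilibria")
```

The codimension-2 diagrams constructed later in this paper follow the same logic, with a two-dimensional parameter plane in place of the μ-line and with moduli distinguishing topologically inequivalent versions of the family.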
§.§ Normal formsTo give a more precise discussion of the bifurcation diagram, we need to introduce some standard terminology from bifurcation and singularity theory (see <cit.>, Section 3, for an introductory discussion of more foundational material on bifurcation theory not discussed here).We consider a dynamical system, ẇ=G(w), w∈ℝ^n,where G is a 𝐂^r function on some open subset of ℝ^n, and suppose that (<ref>) has a non-hyperbolic fixed point at w_0. Although this system may depend on a vector parameter ϵ∈ℝ^p, and the non-hyperbolic fixed point be at (w,ϵ)=(w_0,ϵ_0), we shall in fact forget about parameter-dependence for the moment. In addition, although our discussion holds for n-dimensional systems, for concreteness we shall restrict our development to planar systems, i.e., we shall consider only consider the case n=2.For the present purposes, we shall only consider the case where the linearized Jacobian evaluated at w_0,A=D_w G(w_0) (which enters in the linear system ξ̇=Aξ) has a double-zero eigenvalue, and the Jordan normal form of the linear part of (<ref>) has been found. This means that we can introduce the linear transformation v=w-w_0 and transfer w_0 to the origin, so that (<ref>) becomes a system of the form v̇=H(v), H(v)=G(v+ w_0).We can then split the systeminto a linear and a nonlinear part, v̇=DH(0)v+H̅(v) and using the eigenvector matrix T of DH(0), we can simplify the system and write its linear part in Jordan canonical form J under the transformation v=TX, so that the full nonlinear system will be written as, Ẋ=JX+F(X),where J=T^-1DH(0)T, and F(X)=T^-1H̅(TX). This is a `normal form' of the system, in which only the linear part DH(0) has been simplified as much as possible. We shallassume that the Jordan form J has either the `cusp' (or, Bogdanov-Takens) form,J|_(0,0)=( [ 0 1; 0 0; ]),or else,J is the zero matrix,J|_(0,0)=( [ 0 0; 0 0; ]).In the last case, we shall assume that the system (<ref>) is invariant under the ℤ_2-symmetry (a particular case of equivariance), that is if, X=(x,y), F(X)=(f(x,y),g(x,y)), the system ẋ=f,ẏ=g, is invariant under the transformation,x→ x, y→-y. We shall show later that the NPR and convergence-vorticity systems are ℤ_2-equivariant, while the Oppenheimer-Snyder system has a linear part thatis of the Bogdanov-Takens form.Because of the non-hyperbolicity of the origin, the flow near the origin is not topologically conjugate to that of its linearization, and so the flow will be sensitive to nonlinear perturbations. Therefore for the given dynamical system (<ref>) written in the form (<ref>), the fundamental problem arises of how to fully describe the flow.This problem is further perplexed because the system (<ref>) (or (<ref>)) will in this case be subject to certain degeneracy conditions at various levels (i.e., orders in a Taylor expansionof the X), and these will lead to further important terms that will appear by necessity in the original system. This problem can be accounted forthrough the construction of the so-called Poincarénormal form of the original system (<ref>) as in the following theorem, which simplifies the nonlinear part F(X) at each order.Under a sequence of analytic changes X=Y+h_k (Y) of the coordinate X, the system (<ref>) takes the form, Ẏ=JY+∑_k=2^N F_k(Y) +O(|T|^N+1),where the unknowns h_k (Y) satisfythe equation,L_J^(k)(h_k(Y))=F_k(Y), L_J^(k)(h_k(Y))=Dh_k(Y)JY-Jh_k(Y),at each order k.Equation (<ref>) is called the normal form of (<ref>) at order N. 
Equation (<ref>) is known as the homological equation associated with the linear vector field JY. If the operator L_J^(k) is invertible, then h_k(Y) can be chosen so that h_k(Y)=(L_J^(k))^-1F_k(Y), and so all terms F_k(Y) in (<ref>) can be eliminated leaving only the linear system Ẏ=JY+O(|T|^N+1). Of course this rarely happens, and there will be extra resonant terms remaining in the normal form (<ref>) of the system (<ref>). The terms that can be eliminated at each step are called nonresonant.At each order k, one views the terms h_k(Y), F_k(Y) as belonging to the linear space of vector-valued homogenous polynomials of order k, denoted here by H_k. For instance, for k=2 and in ℝ^2, this space is spanned by the products of the monomials x^2, xy, y^2 times the basis vectors of ℝ^2, and H_2 can be represented by the direct sum,H_2=L_J^(2)(H_2)⊕ G_2,with the last term being a complementary space to L_J^(2)(H_2) that contains all those terms F_2^r(Y) (`r stands for `resonant') that cannot be inthe range of L_J^(2)(H_2), and hence cannot be removed. All other terms can be eliminated, except such resonant terms of the form F_2^r(Y). So at each order, the eigenvectors of L_J^(k) will form a basis for H_k, while the eigenvectors of L_J^(k) having non-zero eigenvalues will form a basis of the image L_J^(k)(H_k). The components of F_k(Y) in L_J^(k)(H_k)can be expessed in terms of such eigenvectors and socan be eliminated. Hence, the terms that remain in the transformed vector field Eq. (<ref>) will be of the form F_k(Y) that cannot be written as linear combinations of the eigenvectors of L_J^(k) having non-zero eigenvalues.The important thing that the normal form gives us is that the structure of the nonlinear remaining terms will be entirely determined by the Jordan matrix J, and also that simplifying (or eliminating) the terms at a given order k will not alter the lower-order terms. However, higher-order terms will be modified at each step of the method of normal forms. Eventually a simplified vector field will be the result instead of (<ref>) at some given order. §.§ Versal unfoldingReturning to the general bifurcation problem, the normal form (<ref>) obtained this way will still be unstable with respect to different perturbations, that is with respect to nearby systems (vector fields), and so its flow (or that of the original system) will not be fully determined this way. It is here that bifurcation theory makes its entry, in that it uses the normal form to construct a new system, the (uni-)versal unfolding, that is based on the normal form but contains the right number of parameters needed to take into full account the degeneracies of the normal form system (and so also of the original one). The necesary number of parameters needed to take into full account the nature of the degeneracy of the normal form is called the codimension of the bifurcation. We shall be dealing in this work only with codimension-2 problems, that is those which can be fully unfolded using two independent parameters.Once one knows the versal unfolding of a particular system, then any perturbation of the system will be realized in the versal unfolding for some particular choice of the parameters. Therefore studying the dynamics of the versal unfolding instead of that of the original system (or its normal form) implies that we have a complete knowledge of the behaviour of all possible perturbations of it. 
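To make the preceding two subsections concrete, the following short symbolic computation (a sketch added here, not part of the text) evaluates the homological operator L_J^(2) for the double-zero (Bogdanov-Takens) Jordan block (<ref>) and identifies the resonant quadratic terms:

```python
# A minimal symbolic sketch (not from the text): the homological operator
#   L_J^(2)(h) = Dh(X) J X - J h(X)
# for the double-zero Jordan block J = [[0,1],[0,0]], acting on the 6-dimensional
# space H_2 of quadratic planar vector fields.  Its rank is 4, so a 2-dimensional
# complement G_2 of resonant terms survives; {x^2 e_2, x*y e_2} is one standard choice.
import sympy as sp

x, y = sp.symbols('x y')
J = sp.Matrix([[0, 1], [0, 0]])
X = sp.Matrix([x, y])
monomials = [x**2, x*y, y**2]
basis = [sp.Matrix([m, 0]) for m in monomials] + [sp.Matrix([0, m]) for m in monomials]

def L(h):
    """Homological operator applied to a quadratic vector field h."""
    return sp.expand(h.jacobian([x, y]) * (J * X) - J * h)

def coords(v):
    """Coordinates of a quadratic planar vector field in the monomial basis of H_2."""
    out = []
    for comp in (v[0], v[1]):
        p = sp.Poly(comp, x, y)
        out += [p.coeff_monomial(mon) for mon in monomials]
    return out

M = sp.Matrix([coords(L(h)) for h in basis]).T     # columns span the image of L_J^(2)
print("rank of L_J^(2):", M.rank())                # -> 4, hence dim G_2 = 2

for target in (sp.Matrix([0, x**2]), sp.Matrix([0, x*y])):   # candidate resonant terms
    in_image = M.row_join(sp.Matrix(coords(target))).rank() == M.rank()
    print(list(target), "lies in the image:", in_image)      # -> False for both
```

The surviving quadratic terms give the truncated normal form ẋ=y, ẏ=a x^2+b xy, the standard starting point for an unfolding of Bogdanov-Takens type.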
Hence, bifurcation theory suggests that in order to study and fully understand the behaviour of a degenerate system, one proceeds in the direction: Original system→normal form→versal unfolding→dynamics,and one eventually studies the dynamics of the versal unfolding rather than that of the original system (or its normal form)[We note the important remark not often stressed enough, that the dynamics (e.g., phase portraits) of a givensystem and that of its normal form are generally inequivalent. One aspect of bifurcation theory, in particular the versal unfolding construction, that is particularly important in this respect is that it is not really relevant at the end whether any of the two dynamical situations (original or normal form system) is the correct one. This is so because on the one hand,the phase dynamics of the normal form corresponds to the `stratum' at the origin in the versal unfolding, while on the other hand, certain features of the original system dynamics (assuming that the original system is not already in normal form), appear as `scattered'in various strata in the final bifurcation diagram.].Since the unfolded system by construction contains parameters, instead of just ending upwith a single phase portrait for this purpose, one is required to study the global bifurcation diagram of the versal unfolding which contains:*the modular coefficients*the parameter diagram*the various phase portraits.Let us briefly explain these terms. Suppose we have constructed the versal unfolding starting from a normal form system (corresponding to the original equation). In this example, this will be a system of the form, Ẋ=F(X,μ,s),where X is polynomial in x,y, having a non-hyperbolic equilibrium at X=0, s will denote the set of values of coefficients appearing in front of certain terms of the polynomial X,and μ=(μ_1,μ_2) will be two parameters in the versal unfolding (corresponding to a codimension-2 bifurcation). We shall assume for simplicity that the modular coefficient s only appears taking two distinct integer values (the `moduli')in just one of the terms of the vector polynomial X. In this case, the two values of the s=± 1 will lead to two different versions of the versal unfolding, one corresponding to s=+1, and a second corresponding to s=-1. Thus for each moduli, we obtain a version of the versal unfolding that can be analysed separately.We now show that the parameter plane (μ_1,μ_2) can be stratified. For a fix moduli value (and so given a particular version of versal unfolding), wetake a parameter value μ=μ_0 and consider all points in the plane (μ_1,μ_2) for which the system (<ref>) has phase portraits topologically equivalent to that which corresponds to μ_0. This point set is called a stratum in the parameter plane, and all such strata make up the parametric portrait of (<ref>). The parameter plane is thus partitioned into different strata.This means that for a fixed moduli value the parameter plane provides a stratification of the parameter space induced by topological equivalence. For each stratum in a given stratification, we have aphase portrait, and the total number of phase portraits thus constructed together with the parameter plane give us the bifurcation diagram. The global bifurcation diagram is the set of all so constructed bifurcation diagrams, and provides a complete picture of the dynamics of the versal unfolding.We note that a versal unfolding as a family of systems parametrized by the parameter μ is a structurally stable family (cf. 
<cit.>), and as such it contains all physically relevant perturbations of the original system. § A BIFURCATION THEORY APPROACH TO SPACETIME SINGULARITIESIn Section <ref>, we write down the precise forms of thedynamical systems involved in the three main problems studied in this paper. In Section <ref> we give a short summary of the focusing effect for causal geodesic congruences, and indicate how this is used in the proofs of the singularity theorems and related results. In Section <ref>, we relate the dynamics of the focusing state with more general dynamical issues such asthe linear feedback loop and adversarial behaviour, and highlight how the original calculations leading to the focusing effect constitutea form of versal analysis for the Raychaudhuri equation. Lastly, in Section <ref>, we set the stage for the consideration of further effects which are of an essentially nonlinear character associated with the problem of taking into account all stable perturbations of these problems. §.§ The three basic systemsWe consider a timelike or null congruence of geodesics in spacetime, and denote by θ the trace of the extrinsic curvature, also called the expansion of the congruence, ρ=-θ is the convergence of the congruence, and v is the 3-volume form (or the 2-area element in the case of a null congruence) of a positive-definite metricon the spacelike 3-surface (correspondingly, 2-surface). An overdot denotes derivatives with respect to proper time (or an affine parameter if a null congruence is considered),2σ^2 stands for σ_abσ^ab, σ_ab being the shear tensor, while 2ω^2=ω_abω^ab, and ω_ab is the rotation (vorticity) tensor.We set ℛ=R_ab t^a t^b, where R_ab is the Ricci curvature and t^aa timelike vector field tangent to the congruence. To describe especiallythe null case, it is standard to introduce a null tetrad l,m,n,m̅, that isl^a is tangent to the null congruence, n^a is null such that l^a n_a=1, and m^a,m̅^a are also null vectors orthogonal to l^a, and satisfy m^am̅_a=-1, with m^a beinga complex combination of two spacelike vectors orthogonal to l^a,n^a. We then set𝒲=C_abcd l^a m^b l^c m^d, with C_abcd the Weyl curvature (we still use the letter ℛ to denote R_ab l^a l^b).Standard (but nonuniform!) conventions apply, and these together with other propertiescan be found in the general references <cit.>-<cit.>, whose notation and proofs wegenerally assume in this work.§.§.§ The Newman-Penrose-Raychaudhuri dynamical system The first problem we shall study requires a hypersurface-orthogonal congruence, where σ^2≠ 0, ω^2=0. In this `zero-rotation' case, we shall be concerned with the global structure of solutions of the Newman-Penrose-Raychaudhuri (in short `NPR') system:ρ̇ =ρ^2+σ^2 +ℛ, σ̇ =nρ σ +𝒲,where n=2,3, according to whether the congruence is null or timelike. The terms ℛ (resp. 𝒲) in (<ref>) represent matter (resp. gravitational radiation) crossing the congruence transversally. We assume that ρ,σ are real.We shall also use the definition, θ=(log v)^,where v is proportional to the volume (area) element of thehypersurface orthogonal to the timelike (null) congruence.Another common form of (<ref>) isobtained by changing to ρ→ -θ (or t→ -t), to obtain its past version, dynamically equivalent to (<ref>). We do not discuss itfurther here, because exactly the same conclusions will apply (we note that this equivalence is a technical term as in Section <ref>).We omit the derivation of (<ref>), as it is discussed in great detail in the standard references given above. 
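As a quick numerical illustration of the system (<ref>), one can integrate it from generic initial data and watch the convergence (and the shear with it) run away in finite proper time. The sketch below is only indicative: the null case n=2 and the constant, non-negative values chosen for ℛ and 𝒲 are illustrative assumptions, not part of the derivation.

```python
import numpy as np
from scipy.integrate import solve_ivp

n, R, W = 2, 0.1, 0.05          # null case; illustrative constant sources with R, W >= 0

def npr(t, u):
    rho, sigma = u
    return [rho**2 + sigma**2 + R, n * rho * sigma + W]

def blowup(t, u):               # stop once rho is very large (proxy for the blow-up)
    return u[0] - 1e6
blowup.terminal = True

sol = solve_ivp(npr, (0.0, 20.0), [0.2, 0.1], events=blowup, rtol=1e-10, atol=1e-12)
print("rho reached 1e6 at t ≈", sol.t_events[0])   # finite-time runaway of the convergence
```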
Indeed, (<ref>) may also be viewed as a subsystem of theSachs optical equations, given by the vector field (ρ^2+σ^2-ω^2 +ℛ,nρ σ +𝒲,nρ ω). A derivation of the Sachs equations can be founde.g., in <cit.> (cf.Eqns. (4.22), (4.26), (4.27) for the timelike, and (4.34-6) for the null cases, respectively), in <cit.>, Sect. 9.2,in <cit.>, pp. 500-1, or in <cit.>, chap. 5.§.§.§ The convergence-vorticity dynamical systemThe second problem relates to the full dynamicaldescription of a pure vorticity congruence, that is one for which the shear is zero, σ^2=0, but ω^2 never vanishes. The term ℛ again represents mattercrossing the congruence transversally. This vorticity, shearfree (or, `type-D') case is described by thefollowing convergence-vorticity (in short `CV') system:ρ̇ =ρ^2-ω^2 +ℛ, ω̇ =nρ ω ,where n=2,3, according to whether the congruence is null or timelike.We shall also use the definition (<ref>), and refer to the dynamical system (<ref>) as the future version of the convergence-vorticity system. Changing to ρ→ -θ (or t→ -t), we obtain its past version, dynamically equivalent to (<ref>). Exactly the same conclusions will apply to the past version.We omit the derivation of (<ref>),as it is discussed in great detail in the standard references given above. We shall see later that the NPR- and CV-systems(<ref>), (<ref>) respectively, are very closely related dynamically. §.§.§ The Oppenheimer-Snyder exampleThe third problem to be analysed in this work is the original Oppenheimer-Snyder equation, namely, ẍ+3/4ẋ^2=0,(cf.Eqn. (20) in <cit.>), which describes the gravitational collapse of a dustlike sphere.The geometric setup is very standard and goes as follows. We introduce the Schwarzschild metric in comoving coordinates, ds^2=dτ^2-e^λ dR^2-e^x dσ^2, where τ, R are the time and radial coordinates respectively, e^x=r^2, with r the `radius', and dσ^2 is the metric of the unit 2-sphere (we use x in the place of the Oppenheimer-Snyder function ω to avoid confusion with the vorticity function introduced above).In <cit.>, it is shown that in this case the Einstein equations reduce to the equation (<ref>), (cf. Eqns. (13)-(20) in <cit.>, see also <cit.>, Section 100, Problem 5 on p. 304, Section 103). This leads to the following solution of the field equations (cf. <cit.>, Eq. (21)): e^x=(Fτ +G)^4/3, where F,G are arbitrary functions of R, so that, r=(Fτ +G)^2/3 (cf. <cit.>, Eq. 27). Using this solution, the standard result of <cit.>, namely, their Eq. (37) is obtained, describing the optical disconnection with the exterior spacetime and the formation of a singularity at the centre of the black hole in afinite time (see also <cit.>, Section 103). §.§ The standard argument for the focusing stateThe pioneering arguments generalizing the Oppenheimer-Snyder example and leading to the focusing effect and spacetime singularities in general relativity were obtained using the system (<ref>). As it is well-known, these arguments were deployed in the standard works and led to the existence theorems for spacetime singularities in general relativity.A brief summary will be given here in several steps (all definitions, proofs, and constructions in this Section can be found in the standard references <cit.>-<cit.>, and so we do not cite them below). 
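Before turning to that summary, the Oppenheimer-Snyder solution quoted above is easy to check symbolically: at fixed R (so that F and G are treated as constants), x(τ)=(4/3)log(Fτ+G), i.e., e^x=(Fτ+G)^(4/3), indeed satisfies ẍ+(3/4)ẋ^2=0. A minimal sympy sketch:

```python
import sympy as sp

tau, F, G = sp.symbols('tau F G', positive=True)
x = sp.Rational(4, 3) * sp.log(F * tau + G)     # so that e^x = (F*tau + G)**(4/3)

residual = sp.diff(x, tau, 2) + sp.Rational(3, 4) * sp.diff(x, tau)**2
print(sp.simplify(residual))    # -> 0: the collapse equation above is satisfied
```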
Our review of these results, however, aims to relate them with certain central ideas of bifurcation theory,and although very analogous, the methods of proof for the timelike and null cases presented here in subsections <ref>, <ref> will in this respect be useful to us later.The focusing effect plays a central role in the proofs of the singularity theorems for gravitational collapse andcosmology, and also in the proofs of the area law and other fundamental properties of black holes. In general relativity, this effect emerges when the convergence ρ of a congruence of causal geodesics becomes infinite. Because of the definition in Eq. (<ref>), this happens at zero volume (or area):We say that a congruence of causal geodesics through a point p has a focal point at q (or, there is pair of conjugate points(p,q) along a causal geodesic), if ρ→∞ along solutions of (<ref>), or,because of (<ref>), when v→ 0. In this case, we say that we have focusing along the geodesic congruence(also called `positive convergence').In terms of the expansion θ=-ρ of the congruence, when focusing occurs we have θ→-∞. According to standard arguments, the inevitability of a focusing state, that is when: Focusing State:ρ→∞⇔θ→-∞⇔ v→ 0,arises provided we choose initial conditions such that ρ=ρ_0>0, or equivalently, θ=θ_0<0. This state is synonymous tothe existence of a spacetime singularity, in the sense of geodesic incompleteness[A non-singular spacetime is defined to be one that isgeodesically complete.],formed at the `end' of gravitational collapse either in a cosmological situation or at the center of black holes. Except for the singularity theorems, the focusing state is also used in a very essential way to prove the area law for black holes, the statement that event horizons containtrapped surfaces, and many other fundamental properties of black holes.We now proceed to review the standard argument that leads to conditions for the occurrence of a focusing state. We treattimelike geodesic congruences before the null case.§.§.§ Timelike focussing Step-T1: Use of the generic condition, R_abcdt^b t^c≠ 0. For the timelike case, ones sets n=3 in Eq. (<ref>b). A violation of the generic condition occurs precisely when ℛ=0, 𝒲=0 in (<ref>). The usual arguments (cf. e.g., <cit.>, p. 540 after Eq. 3.11) imply that only in very special, non-generic, unrealistic spacetimes and models, this situation may arise. In all such non-generic cases, the NPR system, Eq. (<ref>), becomes,ρ̇ =ρ^2+σ^2, σ̇ =nρσ.Therefore all the non-generic, special cases described by the system (<ref>) may be avoided by simply assuming that ℛ≠ 0, 𝒲≠ 0 in the system (<ref>), or equivalently `re-inserting the perturbations' back into the (<ref>).Step-T2: Use of the energy condition, ℛ≥ 0.A very lucky circumstance occurs here in the sense thatthe strict inequality ℛ> 0, 1) complies with the non-generic-cases-avoiding condition (that is the generic condition of Step-1), and 2)appears as the positive energy-density condition for matter crossing the geodesic congruence transversally. Thusthe energy condition assumption is absolutely necessary and playscrucial role in the arguments leading the focusing state, and in addition it complies with a very plausible physical situation. Step-T3: Partial decoupling of the Landau-Komar-Raychaudhuri equation.The main technical role of the energy condition is to alter the Eq. (<ref>a) into a weak inequality. 
In the first instance, one observes the the first equation in the system, Eq.(<ref>a) (usually called the Landau-Komar-Raychaudhuri equation, <cit.>, <cit.>, p. 289), decouples from the volume-convergence equation (2), namely the equation v̇=ρ v, because it does not contain the variable v. In fact, it also decouples from the Eq.(<ref>b), because using the energy condition and the positivity of the shear term, we have that σ^2 +ℛ≥ 0 (with the equality holding iff both terms on the left vanish), and so the Landau-Komar-Raychaudhuri equation (<ref>a) becomes the weak inequality, ρ̇≥ρ^2,with the equality holding iff σ=ℛ=0.Hence, the thought strikes one that the Landau-Komar-Raychaudhuri equation can be treated separately both from the definition (<ref>) (thought of as a volume/area equation), and also from the second (the shear) equation (<ref>b), as something equivalent to the inequality (<ref>). In terms of the expansion of the geodesic congruence, we findequivalently, θ̇≤ -θ^2.It follows that theweak inequality (<ref>) (equivalently (<ref>) for the expansion) fully describesthe Landau-Komar-Raychaudhuri equation (<ref>a), and so an infinite growth for ρ (obtained by a simple integration of (<ref>) (or, resp., (<ref>)) is unavoidable in all cases:an initial condition ρ_0>0 (or θ_0<0, respectively), implies that ρ becomes infinite in proper time equal to 1/ρ_0.Step-T4: Use of the volume equation (<ref>).Since we now know the behaviour of ρ(t) from the above argument,the Eq. (<ref>) in the form v̇=ρ(t) v, becomes a linear (variable coefficient) equation in v, and it may be shownthat the volume function v(t) vanishes as ρ diverges to infinity. The standard argument for this is to show thatthe positive function l(t)=v^1/3(t) satisfies l̈≤ 0, and so it is concave andvanishes when ρ diverges (namely, in time at most 1/ρ_0 a focal or conjugate point is created). Therefore a focusing statefor a timelike congruence is the result.§.§.§ Null focusingThe method to show that a focusing state results for null geodesic congruences is essentially analogous to that in the timelike case, with small differences in the two treatments, as we now discuss. One uses a null tetrad, in particular, we use the null vector field l^a with obvious modifications in the definitions of the quantities ρ, σ, ℛ, 𝒲, and of course the area (instead of volume) element. Under these changes, one uses the system (<ref>) for the treatment of the null case. Step-N1: Use of the generic condition, R_abcdl^b l^c≠ 0. This works exactly like in the timelike case, Step-T1 above, but with n=2 in the non-generic system (<ref>).Step-N2: Use of the energy condition, ℛ≥ 0.This is constructed as a limiting case as t^a→ l^a, with exactly the same conclusions as in Step-T2.Step-N3: Partial decoupling of the Raychaudhuri equation.Again, one obtains the equation (<ref>) (or, (<ref>)) but through a different procedure from the physical point of view. We consider a pulse of light (i.e., a congruence of null geodesics near some given one) that initially isa parallel circular beam defined by the state: ρ=0, σ=0, in a region where ℛ=𝒲=0. In this situation, focusing is generated in the two main cases, namely, that of an anastigmatic lens with σ=0, and that of an astigmatic lens, σ≠0 (where we have 𝒲≠ 0).In the former case, the situation is described by Eq. (<ref>), and so focusing follows as before, and theEq. (<ref>) is decoupled from the shear equation (<ref>b). 
In the latter case of an astigmatic lens, the Raychaudhuri equation is the first equation in the system (<ref>), and so focusing still occurs (due to the positivity of the shear term, one still gets Eq. (<ref>)). The remaining cases are as follows: If we add a nonzero ℛ satisfying the energy condition of Step-N2, to an anastigmatic lens, then ρ̇=ρ^2+ℛ, which is non-negative provided this is a strict inequality, and focusing follows. If we further add a nonzero 𝒲, then we shall have shear present, that is an astigmatic lens, and we end up with the case considered previously, where again focusing follows.Step-N4: Use of the area equation (<ref>).This step proceeds exactly as before in Step-T4, but this time with l(t)=v^1/2(t). We note that the area equation is again used in the calculation for the second derivative. Therefore in the null case, focusing is the result of the (essential similar) application of the four steps above.To end our discussion on the standard focusing mechanism, we note an alternative derivation leading toa focusing state. This isgiven in Ref. <cit.>, Sect. 2.7, andp. 203 (and refs. therein), and is completely equivalent to the above. This derivation is of interest for the present work because it uses the non-generic system (<ref>): Defining the functions,w_±=ρ±σ,and taking the algebraic sum of the two equations in (<ref>), reduces the two-dimensional system (<ref>) to a one-dimensional one for w_±, namely, ẇ_±=w_±^2, which can then be treated using the above methods but now applied to the function w_±. Then w_± are found to diverge at a finite affine parameter value, hence so do ρ and σ, assuming (as it is done in <cit.>) that ρ∼σ (Note: these lines will appear below as the `Stewart separatrices').§.§.§ Applications to spacetime singularities and black holesAs is well-known, there are a number of fundamental results in general relativity that use the existence of a focusing state (<ref>) in an essential way in their proofs. We note that all such proofs are constructed by contradiction. We refer below to a small, selected number oftheorems, in which the existence of a focusing state (<ref>) is used as a true statement, so that this - or an implication of it - be compared with some other statement or hypothesis of the result to be proved,toobtain the desire contradiction.The following results have proofs that depend in an essential way on the use of the statement about the divergence of ρ (or, θ), and so on the existence of a focusing state (or a conjugate point), and also on the fact that the focusing state contradicts some other proposition or a hypothesis of the theorem.* The Penrose 1965 singularity theorem,<cit.>. * The two Hawking singularity theorems of 1967, <cit.>. * The Hawking-Penrose 1970 singularity theorem, <cit.>. * The area law for black holes, <cit.>, p. 1345 (cf. discussion before Eq. 2),<cit.>, p. 312.*The black hole property that: `a trapped surface is contained in the event horizon', <cit.>, prop. 9.2.1, and many other black hole properties, cf. e.g., <cit.>, Sections 9.2, 9.3.A typical argument met in the theorems above concerning how the focusing state is used, goes as follows (here it is about the Penrose theorem which is the prototypical result of all), and shows how important the existence of a focusing state is in the proofs of all these fundamental theorems.One chooses the future trapped surface 𝒯 which corresponds to the maximum negative value of the expansion, say θ_0, for both sets of null geodesics orthogonal to 𝒯. 
This means that 𝒯 corresponds to the `earliest' such surface.One then considers A, the set of all spacetime points lying on a null geodesicthat starts at 𝒯 and proceeds with an affine parameter all the way up to 2/|θ_0|. Oneshows that A is compact as a continuous image of a compact set under suitable continuous maps.Then we have the crucial step: the existence of a focusing state as in the prescription(<ref>) is used to show that any point of the boundary of thefuture of the surface 𝒯, ∂ I^+(𝒯), also belongs to A: it cannot proceed further than 2/|θ_0| along a null geodesic, because θ→ -∞ there.In the last step, one shows that the compactness of ∂ I^+(𝒯) (it is a closed subset of A) contradicts another hypothesis of the theorem (namely, the global hyperbolicity of the spacetime). §.§ The versality of the focusing effectIn this Subsection, we discuss two silent but very important points in the standard treatment of the system (<ref>), common to both the timelike and the null cases.The first is that the standard argument shows us the way of how to correctly distinguish between the linear, or adversarial,behaviour leading to the focusing state and any other: we simply have to take into accountessentially nonlinear feedback effects associated with the system (<ref>). Such effects may also lead to something less than adverse behaviour.The second point is associated with the use of the word `generic' in the standard treatment of the problem. In this connection, we discuss how the standard approach to spacetime singularities through the focusing effect represents the first ever bifurcation theory approach to spacetime singularitiesin general relativity.We end this Subsection with a number of further questions associated with a bifurcation theory approach to the problems of singularities and black holes. §.§.§ Focussing is a linear, adversarial feedback effectWe show that the focusing effect is the result of combining the decoupling of the Landau-Komar-Raychaudhuri equation with the linearity of the volume/area evolution equation.As explained in the previous Section, the energy condition (together with the non-negativity of the shear) in conjunction with the generic condition which dictates that the terms ℛ,𝒲must be present to ensure that one avoids non-generic effects lead to the partial decoupling of the Landau-Komar-Raychaudhuri equation from the volume/area and the shear equations (Eqns. (<ref>), (<ref>), respectively), and this is absolutely necessary in orderto obtain the desired focusing behaviour of the expansion scalar θ (or, the convergence ρ). With this decoupling, the Landau-Komar-Raychaudhuri equation (<ref>a) becomes the inequality (<ref>) (or, equivalently,(<ref>)) (for the shear case, this follows since ρ^2+σ^2≥ρ^2, so that one always ends up with the Landau-Komar-Raychaudhuri inequality without a shear term), which can be directly integrated. Thereforethe volume equation (<ref>)can now be treated independently from the Eq. (<ref>a). As we also noted in the Step-T,N4 of Sections <ref>, <ref>, the standard treatment of volume/area equation proceeds by the direct calculation of the second derivative of the quantity x that denotes either the volume,or the area a (or the `luminosity parameter' L of <cit.>, p. 542), and satisfying the linear equation ẋ=θ(t) x. The result is that x vanishes in a finite time, and a focusing state is the result. 
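In the borderline case where the Landau-Komar-Raychaudhuri inequality holds with equality (σ=ℛ=0), everything is explicit, and a short symbolic check makes the timing of the focusing state transparent (a sketch; ρ_0>0 and v_0>0 are just illustrative initial data):

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)
rho0, v0 = sp.symbols('rho0 v0', positive=True)

rho = rho0 / (1 - rho0 * t)                      # solves rho' = rho**2 with rho(0) = rho0
v = v0 * (1 - rho0 * t)                          # candidate volume with v(0) = v0

print(sp.simplify(sp.diff(rho, t) - rho**2))     # -> 0: borderline Raychaudhuri equation
print(sp.simplify(sp.diff(v, t) + rho * v))      # -> 0: v' = theta*v with theta = -rho
# rho diverges and v vanishes at the same proper time t = 1/rho0 (the focusing state).
```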
Similarly, forthe linear equation for the shear, σ̇=nρ(t)σ, knowing the behaviour of ρ, we can directly integrate[We note that the term 𝒲 can be also ignored from itbecause it is usually regarded as inducing extra shear thus enhancing the convergence power and so the focusing effect (𝒲 acts as a purely astigmatic lens), cf. <cit.>, pp. 167-9, <cit.>, pp. 44-45.].We shallnow provide a different treatment of the linear equation (<ref>), that is, v̇=θ(t) v. We make use of a differential form of the Gronwall's inequality, as this is developedin<cit.>, pp. 12-3. We take a more general stance, and consider instead a differential inequality for the variable x playing the role of either the volume v, or the area a, or the shear σ, and satisfying an inequality of the form, ẋ≤θ(t) x.We note that this inequality is sharp in the `worst case scenario', that iswhen ẋ=θ(t) x holds for all t. Looked at it this way, the latter is the case of adversarial feedback<cit.>, when the forcing term θ(t) x always acts to increase θ(t) the maximum possible amount, preciselyas in the focusing effect. To see this,from Gronwall's theorem, assuming that θ iscontinuous on some interval [t_0,t_1], it follows directly from the linear relation (<ref>) that,x(t)≤ x(t_0)exp( ∫_t_0^tθ (s) ds ),from which the focusing state (x→ 0 as θ→ -∞) directly follows.Hence, the linear feedback of the term θ x to the solution of the equation ẋ=θ x, causes exponential decay of the solution x as θ→ -∞. On the other hand, if we had a nonlinear forcing term, say F(X), X=(ρ,x), (instead of (ρ^2,ρ x)) influencing the solution X, then the feedback loop,X→ F(X)→ X,of the solution X influencing the forcing term nonlinearly which in turn influences the solution X,would lead to an overall difference in the behaviour of the system (<ref>).In conclusion, the standard treatment of the system (<ref>) helps to clearly distinguish betweenthe adversarial behaviour of the solutions associated with a linear feedback loop, namely the focusing behaviour, and any other. This is accomplished by an analysis of the linear and adversarial feedback effects of the equations, and by associating the focusing effect to the linearity of the volume/area equation (as a result of the decoupled treatment of the Landau-Komar-Raychaudhuri equation).Therefore any truly nonlinear feedback effect associated with the system (<ref>) must, in addition to the focusing state, lead to some distinctly different behaviour controlling the feedback loop, as compared to the focusing effect. Of course, the problem with this is that normally one does not have any generalprocedure to realize that distinction.We shall show presently that such a method may be subtly obtained by an application of bifurcation theory to this problem. The crucial advance that bifurcation theory brings about in this case is that taking seriously the versal unfolding idea and applying it to the problem (<ref>), one ends up with a concrete proposal of the exactly admissible forms of the essentially nonlinear `perturbation terms' ℛ,𝒲, in such a way that the resulting `unfolding' is versal: that is, it contains all stable perturbations of the system.§.§.§ The focusing effect as a primitive bifurcation calculationLet us now move on to the consideration of the meaning of the generic condition as used in the standard approach of the system (<ref>). For the standard treatment and meaning of the generic condition, we refer to the references, in particular, e.g.,<cit.>, p. 540. 
According to this, the generic condition is used to avoid very special and therefore physically unrealistic geometric situations when employing the focusing effect.When the generic condition fails, we have ℛ=𝒲=0, and (<ref>) - instead of (<ref>) - is the result, namely, we obtain the dynamical system (<ref>). In the standard approach, and because this system is associated with the unphysical situation discussed in the previous paragraph,any analysis of it isavoided by using the generic condition and `re-inserting the perturbation terms'ℛ,𝒲≠ 0 to obtain the original system (<ref>)[An exception is the method noted earlier, cf. Eq. (<ref>), however, that analysis is completely equivalent to the standard treatment.].However, this approach is indeed very similar to that met in bifurcation theory. Namely, one starts with a system (cf. (<ref>)) which has some degeneracy (in this case, this means that it represents unphysical solutions). As we have emphasized earlier (cf. the sequence <ref>), the way to deal with suchsystems is to augment or `unfold' the original system to consider all possible perturbations of it in a consistent way.Seen in this way, the pioneering Hawking-Penrose treatment leading to the focusing effect and the singularity theoremsrepresents the very first bifurcation calculation to the study of singularities: With a vanishing shear, the system (<ref>) reads, ρ̇=ρ^2, which represents the `normal form' of the system (<ref>) for vanishing shear. Since the Landau-Komar-Raychaudhuri equation decouples, in thegeneral situation we expect to have ρ̇≥ρ^2, and this can be regarded asa `versal unfolding' of (<ref>), because `it contains all possible perturbations' of it (that is,the terms σ^2, ℛ). Then the treatment of the versal unfolding through the use of the volume/area equation, yields the final answer, that is the focusing effect epitomized as the existence of spacetime singularities (through the globalization obtained by global causal structure techniques).It is interesting that from this point of view, the original Hawking-Penrose theorems are very complete because they led to the only possible answer, namely, the focusing effect. The essential uniqueness of this answer is of course related to the `versal unfolding' mechanism in conjunction with the adversarial behaviour. §.§ Nonlinear feedback effects, versality,and spacetime singularitiesThe present work may be regarded as a nonlinear version of the Hawking-Penrose approach seen in this way. As we are interested in genuine nonlinear feedback effects associated with the system (<ref>), the non-generic system (<ref>) appears as a simpler one compared to (<ref>) just because there are no `unknown' perturbation terms like ℛ,𝒲 in it. 
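As a trivial but concrete illustration of how the `perturbation terms' interact with the degenerate equation, one can compare the blow-up of ρ̇=ρ^2 with that of ρ̇=ρ^2+ℛ for a constant ℛ>0: the perturbation only hastens focusing, shortening the blow-up time below 1/ρ_0. A numerical sketch (the values of ρ_0 and ℛ are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

rho0 = 0.5                                       # illustrative initial convergence

def blowup_time(R, threshold=1e6):
    """Locate (approximately) the blow-up time of rho' = rho**2 + R, with R >= 0."""
    def hit(t, u):
        return u[0] - threshold
    hit.terminal = True
    sol = solve_ivp(lambda t, u: [u[0]**2 + R], (0.0, 10.0), [rho0],
                    events=hit, rtol=1e-10, atol=1e-12)
    return sol.t_events[0][0]

print("R = 0   :", blowup_time(0.0), "  (exact blow-up time 1/rho0 =", 1 / rho0, ")")
print("R = 0.5 :", blowup_time(0.5))             # strictly smaller: focusing only sooner
```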
Regarding the possible role that the system (<ref>) may play for the main problem, that is (<ref>), we ask:
*What are the dynamical properties of (<ref>)?
*What is the relation between properties of the system (<ref>) and those of the `perturbed system' (<ref>)?
*What is the nature of the perturbation terms ℛ,𝒲?
*Is the system (<ref>) structurally stable under small perturbations to `nearby' systems?
*In what sense is the behaviour of the system (<ref>) `non-generic' as compared to that of nearby systems?
*How does one perturb a degenerate system such as (<ref>)?
*How does any `degeneracy' of the subsystem (<ref>) affect the behaviour of the solutions of the original system (<ref>)?
*Can one account for the essentially nonlinear nature of the system (<ref>), without at the same time understanding the nonlinear nature of its `degenerate' subsystem given by (<ref>)?
*What is the nature of the set of all possible stable perturbations of (<ref>), and what is their precise relation to the original system (<ref>)?
*What is the influence of the vorticity in the evolution?

These questions are related to essentially nonlinear behaviours present in the system (<ref>) (and of course (<ref>)), and because they are also important for the other two systems, namely (<ref>), (<ref>), they are a main topic of the present paper. In fact, the pioneering papers that established the existence of singularities and black holes in general relativity are very useful, even essential, in this respect, because the focusing and related behaviours must be present in the final answer, and as such may play the role of a basis to orient ourselves in the possible patterns and forms that may emerge from, and be associated with, the answers to the questions above.

The mathematics needed for the full analysis of such problems as (<ref>), (<ref>) belongs to bifurcation theory, where the nature of important ideas such as degeneracy, non-hyperbolicity, topological normal forms, versal unfoldings, symmetries, and local and global bifurcations can be adequately clarified. In turn, these ideas and others will necessarily play a central role in an attempt to unravel the dynamical nature of spacetime singularities and black holes.

The system (<ref>) is not as trivial or uninteresting a system as it looks. A purpose of this paper is to provide a full analysis of both this system and of the original system (<ref>). A full understanding of the latter depends on that of the former system, to such an extent that it is not possible to say anything reliable about the system (<ref>) without first understanding fully the system (<ref>). Once this is done, we shall return to apply the results to the problem of spacetime singularities and the structure of black holes.

Similar remarks apply to the treatment of the systems (<ref>) describing vorticity-induced effects, and (<ref>), which is the Oppenheimer-Snyder equation for spherically symmetric dustlike gravitational collapse. We note that the system (<ref>) can be studied with bifurcation methods very similar to those of the NPR system (<ref>) because they share the same structure of the linear terms and also they both are ℤ_2-equivariant.
However, the Oppenheimer-Snyder system (<ref>) is different because its linear part is different from those of the NPR and CV-systems, even though here too we have a double-zero eigenvalue. At the same time, the vorticity system (<ref>) leads to very different effects compared to the NPR system, the main difference between them being that whereas the NPR solutions are generally characterized as being unstable or of a `runaway' character, the CV-system has a unique, stable limit cycle attracting all solutions to a self-sustained state of oscillations, for certain ranges of the parameters. On the other hand, in the Oppenheimer-Snyder problem, the versal unfolding has two moduli and so leads to two very different bifurcation diagrams, one sharing some of the effects found in the NPR-system, and the other being closer to the vorticity-induced effects of the CV-system, namely, the appearance of closed orbits and global bifurcations.

§ THE BIFURCATION DIAGRAM OF THE NPR SYSTEM

In this Section, we study the versal unfolding dynamics associated with the Newman-Penrose-Raychaudhuri system (<ref>).

§.§ The normal form and versal unfolding

We start with the degenerate system (<ref>), and the basic observation that it possesses the ℤ_2-symmetry, namely, it is invariant under the transformation, ρ→ρ, σ→-σ. The system (<ref>) is already in normal form. The various normal forms and versal unfoldings for systems with a ℤ_2-symmetry have been completely classified, cf. e.g., <cit.>, Sections XIX.1-3, <cit.>, Section 8.5.2, <cit.>, Sections 20.7, 33.2, <cit.>, Sect. 7.4, <cit.>, Section 4.4, and refs. therein (finding the versal unfolding in such systems has been one of the most illustrious problems in bifurcation theory). Consequently, the versal unfolding of (<ref>) is given by,
ρ̇ =μ_1+ρ^2+σ^2, σ̇ =μ_2 σ+nρσ,
where μ_1,μ_2 are the two unfolding parameters, and n=2,3. Our efforts here will be directed to obtaining the complete bifurcation diagram of (<ref>), that is, the parameter diagram together with the corresponding phase diagrams for each one of the strata partitioning the parameter diagram.

With regard to the system (<ref>), the following remarks are in order. Although the parameter diagram for the system (<ref>) will be the end-result of the analysis in this Section, it is instructive and helpful to give it here and refer to it as we develop the details, cf. Fig. <ref>. In this Figure, we observe the different strata determining the subsequent phase space dynamics. We see that there are seven important regions in the full parameter diagram in Fig. <ref>, namely[We shall introduce a particular naming system for the various strata of the parameter diagrams in this and following Sections of this paper, that reflects the great variety of the possibilities that arise due to the codimension-2 bifurcations.
This naming system comes from the poem `Theogony' of Hesiod, and imparts names and corresponding letters to the various strata according to the First Gods appearing in that poem.],
*The origin, the Gaia-γ stratum.
*The right half-plane, the Chaos-χ stratum.
*The μ_2-axis, the Eros-ε stratum, in two components ε_+,ε_-.
*The parabola in the left half-plane, the Tartara-τ stratum, in two components τ_+,τ_-.
*The upper region between ε_+ and τ_+, the Uranus-o stratum.
*The lower region between ε_- and τ_-, the Pontus-π stratum.
*The region inside the parabola, the Ourea-β stratum.

These regions will have an important role to play in the dynamics of the system (<ref>), and will appear as the analysis unfolds.

§.§ Dynamics at zero parameter

Let us first consider the dynamics of the versal unfolding (<ref>) in the case where (μ_1,μ_2)=(0,0), that is, the degenerate system (<ref>), which we also reproduce here,
ρ̇ =ρ^2+σ^2, σ̇ =nρσ.
This is the case that corresponds to the origin in the parameter diagram, the Gaia-γ point, and it comprises the degenerate system (<ref>) that is already in normal form. To obtain the corresponding phase portrait in this case, we work as follows. First, by exploiting the fact that in our problem (<ref>) we have n=2,3, we can now introduce the key fact that the lines,
σ=mρ,
represent invariant lines in the phase plane (ρ,σ) of the problem. This follows because, using (<ref>), (<ref>), the condition of tangency of these lines to the flow gives,
m^2=n-1,
which is always satisfied, so that these lines are separatrices of the flow. (We note that the ρ-axis is always invariant in this problem, as one may easily check.) For n=2 (resp. 3), we have m=± 1 (resp. ±√(2)), and we call them the Stewart separatrices, since the former lines were first found in <cit.>, p. 203, using different methods (which, however, reduce the codimension of the singularities).

To find the direction of the flow along the Stewart lines, we calculate the product of the vector field (<ref>) times the radial vector, namely, l=(ρ^2+σ^2,nρσ)·(ρ,σ), on the Stewart lines, to get,
l=n^2ρ^3,
so that l>0 (resp. <0) when ρ>0 (resp. <0), and the flow is directed outward (inward) along the Stewart separatrices.

To obtain the full phase portrait for (<ref>), consider the function,
i_n(ρ,σ)=n/2 σ^-2/n(ρ^2+σ^2/(1-n)),
and it is straightforward to see that the derivative i̇_n=0, for any n>1, along the flow (<ref>), making i_n a first integral of (<ref>). Therefore the level curves of i_n provide all the phase curves of the phase portrait of (<ref>). For instance, for the null case (n=2), the family, ρ^2-σ^2=aσ, a≠ 0, gives all the orbits below (above) the Stewart separatrices (which are also shown), with a≷ 0, as in Fig. <ref>. The timelike case (with n=3) is very similar[We note here a subtle difference in the phase portrait given in Fig. <ref>, in that the Stewart separatrices only exist in our problem because n>1. Had n been in the range 0<n<1, the phase portrait in that case would have been different (actually it would be more similar to that of Fig. <ref> below). In this sense, the n which takes the values 2, 3 (for the null and timelike case, respectively) is in fact a second, and already determined, modular coefficient of the problem.].

§.§ Stability of the fixed branches

We now return to the consideration of the full system (<ref>).
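Before doing so, a quick symbolic check of the zero-parameter structure just described may be reassuring. The sketch below (using sympy, with n kept symbolic and n>1) confirms that the Stewart lines σ=±√(n-1)ρ are invariant and that i_n is constant along the flow:

```python
import sympy as sp

rho, sigma, n = sp.symbols('rho sigma n', positive=True)

f = sp.Matrix([rho**2 + sigma**2, n * rho * sigma])       # zero-parameter NPR vector field

# Invariant (Stewart) lines sigma = m*rho with m**2 = n - 1:
m = sp.sqrt(n - 1)
print(sp.simplify((f[1] - m * f[0]).subs(sigma, m * rho)))  # -> 0: the line is invariant

# First integral i_n = (n/2) * sigma**(-2/n) * (rho**2 + sigma**2/(1 - n)):
i_n = sp.Rational(1, 2) * n * sigma**(-2 / n) * (rho**2 + sigma**2 / (1 - n))
di = sp.diff(i_n, rho) * f[0] + sp.diff(i_n, sigma) * f[1]
print(sp.simplify(di))                                      # -> 0: i_n is constant on orbits
```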
To begin the stability analysis of (<ref>), we note the decisive fact that this system has the ℤ_2-symmetry (<ref>),and without loss of generality we may assume that σ>0.The system (<ref>) has three fixed branches[Since in the situation we shall be dealingwith, the fixed `points' are parameter-dependent, we shall usually call any equilibrium, parameter-dependent, solution family, a fixed branch.]:* ℰ_1,2=(∓√(-μ_1),0). These are real, provided μ_1<0. *The third fixed branch is, ℰ_3=(-μ_2/n,√(-(μ_2^2/n^2+μ_1 ))),which is real if the bracket inside the square root is negative, μ_1<-μ_2^2/n^2.A particular aspect of the ensuingbifurcation analysis is that although the fixed branches belong in the phase space of the problem, that is on the (ρ,σ)-plane in this case, because of their parameter dependence they may also be considered as `attached' to the parameter diagram of Fig. <ref>. This observation is useful in understanding many of the subsequent dynamical issues. For example, the fixed branches ℰ_1,2 belong to the σ=0 axis of the phase space, however, they can also be considered as attached to the negative μ_1 axis of the parameter diagram in Fig. <ref>, while the fixed branch ℰ_3 is also attachedto the β-stratum Ourea of Fig. <ref>(apart from lying anywhere in the phase plane, except at the origin).We note that none ofthe three fixedbranches exist for the χ-stratum (half-space μ_1>0, cf. Fig. <ref>) - no fixed points there,and so the phase portrait can be easily drawn in this case, see Fig. <ref>, <ref>.Before we examine the stability of the fixed branches, some preliminary work is needed. §.§.§ Two lemmasThe parameter plane is stratified according to hyperbolic or bifurcating behaviour associated to changes in the μ parameter, and a simple way to connect the different strata of parameter diagram to the corresponding phase space dynamics is to directly find the stability of the three fixed branches ℰ_i, i=1,2,3,. This can be done by employing the two lemmas below, instead of developing stability for each one of the particular subsets of the parameter space (μ_1,μ_2).The linearized Jacobian of (<ref>) is given by,J=( [2ρ2σ;nσ μ_2 +nρ; ]),and it is helpful to use the standard formulae for the eigenvalues, that is, λ^2-(Tr J)λ +det J=0,where, λ_±=1/2(TrJ ±√(Δ)),Δ=(TrJ)^2-4 det J.For any of the three fixed branches ℰ_i,i=1,2,3, the following two lemmas about branches follow easily, andtheir proofs will be omitted. The first lemma describes simple bifurcational behaviour, i.e., simple situations where the dynamics near a fixed branch will be radically different from that of the linearization[The word `slightly' in lemma <ref> means that only codimension-1 bifurcations will exist in this case.].For a fixed branch ℰ we have,*If detJ=0, then one of the eigenvalues of the linearized Jacobian is zero.*If detJ>0, andTrJ=0, then λ _±=± i√(|detJ|), and ℰ is a centre.The second lemma describes the range of hyperbolic behaviours resulting from any situation with a non-zero linearized Jacobian.For a fixed branchℰ we have:*If detJ<0, then ℰ is a saddle. 
When in addition, TrJ>0, or TrJ<0, then λ _+>0,λ_-<0, whereas when TrJ=0, thenλ _±=±√(|detJ|).*If detJ>0, andTrJ>0, then λ _+>0,λ_->0, and ℰ is a source.* If detJ>0, andTrJ<0, then λ _+,λ_-<0, and ℰ is a sink.We are now in a position to proceed with the nature of the fixed branches.§.§.§ Stability of the branch ℰ_1To apply the two lemmas for the fixed branch ℰ_1=(-√(-μ_1,0)), we first calculate, TrJ|_ℰ_1=μ_2-(2+n)√(-μ_1),detJ|_ℰ_1=-2√(-μ_1)(μ_2-n√(-μ_1)).Then we have the following result.For the fixed branch ℰ_1=(-√(-μ_1,0)), we have the following sign conditions:* TrJ|_ℰ_1≷ 0, implies that μ_2-n√(-μ_1)≷ 2√(-μ_1).* detJ|_ℰ_1≷ 0, implies that μ_2-n√(-μ_1)≶ 0.From these results, we arrive at the following theorem about the stability of the fixed branch ℰ_1 (we note that μ_1<0).We have the following types for the ℰ_1 branch: * detJ|_ℰ_1= 0,TrJ|_ℰ_1>0:cannot happen. * detJ|_ℰ_1= 0,TrJ|_ℰ_1<0:bifurcation. * detJ|_ℰ_1> 0,TrJ|_ℰ_1>0:cannot happen. * detJ|_ℰ_1> 0,TrJ|_ℰ_1<0:stable node. * detJ|_ℰ_1< 0,TrJ|_ℰ_1>0:saddle. * detJ|_ℰ_1< 0,TrJ|_ℰ_1<0:saddle. * TrJ|_ℰ_1=0,detJ|_ℰ_1> 0:cannot happen. * TrJ|_ℰ_1=0,detJ|_ℰ_1< 0:saddle. Items 1, 3, and 7 follow from (<ref>). For 2, using Lemma 1, we findfrom (<ref>) that one of the eigenvalues is zero, and so we expect a saddle-node bifurcation on the μ_2-axis when μ_2>0. For 4, it follows that TrJ|_ℰ_1+√(Δ)<0, which implies that both eigenvalues are negative, and from Lemma 2 we find a sink (i.e., stable node). Items 5, 6 follow directly from Lemma 2, while for item 8, we get a saddle from Lemma 2. Therefore fromTheorem <ref>, we find the following types of stability for the fixed branch ℰ_1.The nature of the fixed branch ℰ_1 is as follows:*When μ_2-n√(-μ_1)>0,ℰ_1 is a saddle.*When μ_2-n√(-μ_1)<0,ℰ_1 is a sink.*When μ_2-n√(-μ_1)=0,ℰ_1 bifurcates.This completes the stability of the equilibrium ℰ_1.§.§.§ Stability of the branch ℰ_2For the fixed branch ℰ_2=(√(-μ_1,0)), we find, TrJ|_ℰ_2=μ_2+(2+n)√(-μ_1),detJ|_ℰ_2=2√(-μ_1)(μ_2+n√(-μ_1)).The sign conditions now become:For the fixed branch ℰ_2=(√(-μ_1,0)), we have the following sign conditions:* TrJ|_ℰ_2≷ 0, implies that μ_2+n√(-μ_1)≷ -2√(-μ_1).* detJ|_ℰ_2≷ 0, implies that μ_2+n√(-μ_1)≷ 0.Then the nature of the fixed branch ℰ_2 is determined by the following theorem. We have the following types for the ℰ_2 branch:* detJ|_ℰ_2= 0,TrJ|_ℰ_2>0:bifurcation. * detJ|_ℰ_2= 0,TrJ|_ℰ_2<0:cannot happen. * detJ|_ℰ_2> 0,TrJ|_ℰ_2>0:source. * detJ|_ℰ_2> 0,TrJ|_ℰ_2<0:cannot happen. * detJ|_ℰ_2< 0,TrJ|_ℰ_2>0:saddle. * detJ|_ℰ_2< 0,TrJ|_ℰ_2<0:saddle. * TrJ|_ℰ_2=0,detJ|_ℰ_2> 0:cannot happen. * TrJ|_ℰ_2=0,detJ|_ℰ_2< 0:saddle. For 1, using Lemma 1, we findfrom (<ref>) that one of the eigenvalues is zero, and so we expect a saddle-node bifurcation on the μ_2-axis when μ_2<0, as it follows directly from the vanishing of the determinant. Items, 2, 4, and 7 follow from (<ref>). For 3, it follows that TrJ|_ℰ_1+√(Δ)>0, so that all eigenvalues are positive, and from Lemma 2 we find a source. Items 5, 6 follow directly from Lemma 2, with parameter conditions being -2√(-μ_1)<μ_2+n√(-μ_1)<0, and μ_2+n√(-μ_1)<-2√(-μ_1) respectively,while for item 8, we get a saddle from Lemma 2, with μ_2+n√(-μ_1)<0. All remaining cases cannot happen because of sign incompatibilities. 
Therefore we find:The nature of the fixed branch ℰ_2 is as follows:*When μ_2+n√(-μ_1)>0,ℰ_2 is a source.*When μ_2+n√(-μ_1)<0,ℰ_2 is a saddle.*When μ_2+n√(-μ_1)=0,ℰ_2 bifurcates.The main conclusion from the results in the last two subsections is that both parameters are necessarilyused in order to determine the stability of the fixed branchesℰ_1,2. Theseare expected to bifurcate insaddle-node bifurcations happening on the Eros-ε axis (this is proven in detail below), with ℰ_1 giving a saddle and a sink on the (positive) ε_+-semiaxis, and ℰ_2 giving a saddle and a source on the (negative) ε_--semiaxis.§.§.§ Stability of the branch ℰ_3For the fixed branch ℰ_3 given by Eq. (<ref>), we find, TrJ|_ℰ_3=-2μ_2/n,detJ|_ℰ_3=2n( μ_2^2/n^2+μ_1).It follows that the sign conditions for the trace and determinant are, detJ|_ℰ_3<0,on theβ-stratum,and TrJ|_ℰ_3≷ 0,ifμ_2≶ 0,which implies that the eigenvalues are λ_+>0,λ_-<0, that is the fixed branch ℰ_3is a saddle (and so totally unstable).The important conclusion from this result is that ℰ_3 cannot bifurcate further on the β-stratum.To complete the stability analysis of the fixed branch ℰ_3, we now examine the remaining (borderline) case, namely, when detJ|_ℰ_3=0. From Eq. (<ref>), it follows that this takes us to the Tartarus-τ curve μ_1=-μ_2^2/n^2,and we find the following two conditions (note that μ_1<0), τ_+-branch:μ_2-n√(-μ_1) = 0,andμ_2>0, τ_–branch:μ_2+n√(-μ_1) = 0,andμ_2<0, and so ℰ_3→ℰ_1,on theτ_+-branch,while ℰ_3→ℰ_2,on theτ_–branch.From what we showed in the previous two subsections, we know that the conditions (<ref>), (<ref>) are necessary and sufficient for the vanishing of the determinants detJ|_ℰ_1,2 respectively.In other words, we arrive at the following conclusion.*On the β-stratum and on the τ-curve, the fixed branchℰ_3 cannot bifurcate further. In addition, ℰ_3 is a saddle on the β-stratum, while it becomes the branch ℰ_1 on the τ_+-curve, and the branch ℰ_2 on the τ_--curve.*We expect that there willfurther bifurcations of the fixed branches ℰ_1,2 on the curves τ_+,τ_-, both to yield the saddle ℰ_3 and possibly other new branches.A complete proof of this `Theorem' (the second part of it is not yet proved!) will be given in the next Subsection, where we shall prove that these new bifurcations are actually pitchforks. We note that the announced extra bifurcations of the fixed branches ℰ_1,2 are not related to the saddle-node bifurcations discussed earlier on the Eros-ε line, because there we had μ_1=0, whereas here we haveμ_1<0 for both ℰ_1,2. §.§ Description of the bifurcations §.§.§ General commentsAs we discussedin the previous Subsection,we expect that there will two kinds of bifurcations associated withthe fixed branches ℰ_1,2, and described in the parameter diagram of Fig. <ref>. In this Subsection, we show that these bifurcationswill be a saddle-node bifurcation occurring `horizontally' when crossing the Eros-ε-axis with dynamics governed by the ρ variable, and a pitchfork bifurcation occurring `vertically' as we cross the Tartara-τ curve with dynamics governed by the σ variable.In both cases, the dynamics ofthe system (<ref>) is drastically reduced to a one-dimensional centre manifold, andnew equilibrium solutions appear (or disappear!): the branches ℰ_1,2 for the saddle-node, and the saddle ℰ_3 for the pitchfork. 
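The trace and determinant formulas used in the last few subsections are easy to confirm symbolically (a sketch; μ_1<0 is assumed so that ℰ_1,2 are real, and the output may appear in expanded form):

```python
import sympy as sp

rho, sigma, mu1, mu2, n = sp.symbols('rho sigma mu1 mu2 n', real=True)

F = sp.Matrix([mu1 + rho**2 + sigma**2, mu2 * sigma + n * rho * sigma])   # NPR unfolding
J = F.jacobian([rho, sigma])

r = sp.sqrt(-mu1)                                     # real for mu1 < 0
E1 = {rho: -r, sigma: 0}
E2 = {rho: r, sigma: 0}
E3 = {rho: -mu2 / n, sigma: sp.sqrt(-(mu2**2 / n**2 + mu1))}

for name, E in [('E1', E1), ('E2', E2), ('E3', E3)]:
    JE = J.subs(E)
    print(name, ' Tr =', sp.simplify(JE.trace()), '  det =', sp.simplify(JE.det()))
# E1: Tr = mu2 - (n+2)*sqrt(-mu1),  det = -2*sqrt(-mu1)*(mu2 - n*sqrt(-mu1))
# E2: Tr = mu2 + (n+2)*sqrt(-mu1),  det =  2*sqrt(-mu1)*(mu2 + n*sqrt(-mu1))
# E3: Tr = -2*mu2/n,                det =  2*n*(mu2**2/n**2 + mu1)
```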
Generally, as we showed previously, the number of equilibria fluctuatesfrom zero at the chaos-χ region, to one on the Eros-ε-axis and Gaia-γ point, to two on the Uranus and Pontus regions, and to three on the Ourea-β region.The system (re-)visits all these regions as guided by the stability of the unfolding, and consequently, the phase space dynamics is smoothly changing according to the transitions between phase portraits associated with the corresponding strata in parameter space. §.§.§ The saddle-nodeWe can of course just calculate the centre manifold for the system in this case, however, it is simpler and perhaps more illuminating to proceed as follows. In the present case, the saddle-node is generally speaking a bifurcation involving only the convergence ρ, because of the following reason.The fixed branches ℰ_1,2 exist for μ_1<0 only, coalesce (or form) at μ_1=0, and disappear when μ_1>0 (that is on the χ-stratum), and in all cases we always have σ=0 (the second coordinate of these branches) for these fixed branches to exist. Therefore to find the evolution on the centre manifold, we simply set σ=0 in (<ref>) to get the reduced system in the form, ρ̇=μ_1+ρ^2,on the centre manifold. This is of course the normal form of a system undergoing a saddle-node bifurcation in the ρ-direction at μ_1=0, and so the saddle-node dynamics is convergence-dominated, with evolution equation given by (<ref>).In this case therefore, the system has between zero (when μ_1>0) and the two equilibrium solutions ℰ_1,2 (when μ_1<0), while their stability is determined by the standardbifurcation diagrams. On crossing the ε_+-axis (i.e., μ_2>0) from the right (i.e.,starting from the χ-region and proceeding to the left), the bifurcation occurs on the ε_+-axis, and upon entering the Uranus o-stratum we have, μ_2-n√(-μ_1)>0,as one may easily see. Hence, the emerging fixed branches are, (a) the branchℰ_1 whichis a saddle by theCorollary <ref>, and(b) the branch ℰ_2, which from the Corollary <ref> is a source, since in this case it follows from (<ref>) that,μ_2+n√(-μ_1)>2n√(-μ_1)>0. Similarly, on crossing the ε_--axis (i.e., μ_2<0), and entering the Pontus π-region, we find that, μ_2+n√(-μ_1)<0,so that ℰ_2 is a saddle from Corollary <ref>, while from Corollary <ref> we find that,ℰ_1 is a sink there, because μ_2-n√(-μ_1)<-2n√(-μ_1)<0, as it follows from(<ref>). These results obviously also hold in the opposite direction of motion in the parameter diagram. We conclude that the saddle-node bifurcationmoves the dynamics of the system (<ref>) along the `fragments', χ→ε_+→ o,orχ→ε_-→π,of the parameter diagram in either direction; that is, if the system finds itself on the region χ, then it is moved to the Uranus-o region if μ_2>0, or to the Pontus-π region if μ_2<0, by crossing the corresponding half-line on the Eros-ε axis, and vice versa. The result is a continuous transformation of the corresponding phase portraits followed by a creation or annihilation of the fixed branches ℰ_1,2 as shown above. 
Therefore we have proved the following Theorem.On crossing the ε-stratum, we have the following saddle-node bifurcations:*Fragment χ→ε_+→ o: The system moves from zero equilibria on χ, to one on ε_+, to two the fixed branches ℰ_1,2 on o, such that the branchℰ_1 is a saddle, and the branch ℰ_2 is a source.*Fragment χ→ε_-→π: The system moves from zero equilibria on χ, to one on ε_-, to two the fixed branches ℰ_1,2 on π, such that the branch ℰ_2 is a saddle, while ℰ_1 is a sink.*The evolution equation on the centre manifold for both of these saddle-node bifurcations is given by Eq. (<ref>).Given the above directions of ε-crossings, the two new fixed branches ℰ_1,2are created at the crossings. These results hold true in the opposite crossing directions, namely, for the fragments o→ε_+→χ, andπ→ε_-→χ, where the two fixed branches are now annihilated at the crossings. We note that a similar result about the role of the saddle-node bifurcation in cosmology was recently found in the more restricted case of the versal unfolding of the Friedmann equations <cit.>.§.§.§ The pitchforkIn this subsection, we shall be interested in the properties of the flow of Eq. (<ref>) in a neighborhood of the tartarus τ-stratum (cf. Fig. <ref>), and prove the following theorem for the system (<ref>) about the existence of furtherpitchfork bifurcations of the two fixed branches ℰ_1,2 when μ_1<0.When μ_2≠ 0 and with μ_1 decreasing, the system (<ref>) undergoes a pitchfork bifurcation on the tartarus τ-stratum such that:*the pitchforkdynamics reduced on the centre manifold is shear-dominated, with evolution equation given by, σ̇=± ϵn σ-n^2/2μ_2σ^3,where ϵ is a variational parameter that describestransversal crossings of the τ-stratum, and the upper (resp. lower) sign corresponds to the fixed branchℰ_1 (resp. ℰ_2).* a supercritical pitchfork bifurcation of the fixed branchℰ_1 when crossing the τ-curve with μ_2>0 (i.e., the upper part of τ), to the appearing fixed branch saddle ℰ_3, and*a subcritical pitchfork bifurcation of the fixed branchℰ_2 when crossing the τ-curvewith μ_2<0 (i.e., the lower part of τ), to the appearing fixed branch saddle ℰ_3.On the tartarus-τ curve, τ={ (μ_1,μ_2): μ_1=-μ_2^2/n^2},we have, detJ|_ℰ_3=0, and so the eigenvalues of the linearized Jacobian at ℰ_3 are, λ_+=TrJ|_ℰ_3, λ_-=0 (cf. (<ref>)). Therefore a centre manifold analysis is required to compute the dynamics of the bifurcations to ℰ_3 on the centre manifold.Since the proof of Theorem <ref> is somewhat long, we shall break it down to a number of different steps.Step-1:Jordan form of linear part of Eqn. (<ref>).Assuming μ_1<0, since the 0-eigendirection is associated with the σ variable (rather than ρ), we move the fixed `point' ℰ_3to the origin, by setting ρ_*=∓√(-μ_1),σ_*=0, and letting, ξ=ρ-ρ_*=ρ±√(-μ_1),and rewritethe original system (<ref>) in the form,σ̇ =μ_2σ+nσ (ξ∓√(-μ_1)), ξ̇ =μ_1+(ξ∓√(-μ_1))^2+σ^2.On the tartarus curve -μ_1=μ_2^2/n^2, we have, √(-μ_1)=|μ_2|/n.Now suppose we cross the tartarus τ-curve transversally at some constant but nonzero μ_2 with decreasing μ_1. This can be described by varying the ϵ defined by, √(-μ_1)=|μ_2|/n-ϵ,so as to cross the τ-parabola from right to left and vertically (decreasing μ_1). In this way, we regard ϵ as a parameter replacing μ_2.Substituting (<ref>) into (<ref>) we find,σ̇ =n σ ξ± n σ ϵ, :=f_1,ξ̇ =ξ^2+σ^2+2(μ_2/n±ϵ)ξ, :=f_2.Therefore we can write this system in the form, Ẋ=J_|(0,0)X+(X,ϵ),ϵ̇=0, where X=(σ,ξ)^⊤, that is with its linear part in normal form. 
We find, ( [ σ̇; ξ̇;])= ( [00;0 2μ_2/n;]) ( [ σ; ξ; ]) + ( [ n σ ξ± n σ ϵ; σ^2+ξ^2± 2ϵξ;]), ϵ̇ =0.This is the form we need in order to apply the centre manifold theorem (cf. any of the standard references on bifurcation theory in the bibliography). Step-2:Evolution on the centre manifold.The centre manifold to (<ref>) will be the set,W^c={(σ,ξ,ϵ)| ξ=h(σ,ϵ) },with h(0,0)=0, Dh(0,0)=0. Thisis of the form Y=h(X), where, Y=ξ,X=(σ,ϵ)^⊤, subject to the tangency condition, DhẊ-Ẏ=0. Setting,h(σ,ϵ)=ασ^2+βσϵ+γϵ^2,the problem is to find the coefficients. Using Eq. (<ref>), the tangency condition becomes,(2ασ+βϵ,βσ+2γϵ)( [ nσ h± nσϵ; 0; ]) -2μ_2/nh-g=0,with g:=σ^2+ξ^2± 2ϵξ. Using (<ref>), and balancing the various terms in (<ref>), we obtain:σ^2-terms: α=-n/2μ_2,ϵσ-terms: β=0, ϵ^2-terms: γ=0.Therefore the centre manifold (<ref>) is the graph of ξ=h(σ,ϵ)=-n/2μ_2σ^2+O(3),The evolution equation on the centre manifold cannow be read off directly from Eq.(<ref>), namely, σ̇=±ϵ n σ-n^2/2μ_2σ^3.We note that the upper (resp. lower) sign is for the ℰ_1(resp. ℰ_2) branch. This equation describes the normal form of a system undergoing a pitchfork bifurcation at ϵ=0. As we noted earlier, we regard ϵ as a parameter and μ_2 as a nonzero constant.From(<ref>), we may also obtain the ξ-evolution equation, namely, ξ̇=2μ_2/nξ,This completes the proof of the first part of the Theorem <ref>. For the second and third parts of the Theorem, we proceed as follows. Step-3:Pitchfork evolution for the fixed branch ℰ_1.Let us first suppose that μ_2>0, and decreasing μ_1. Thenℰ_1 undergoes a pitchfork bifurcation on the τ_+-curve μ_2=n√(-μ_1) (cf. Corollary <ref>), and the evolution equation (<ref>) becomes a supercritical equationfor ϵ>0, σ̇=ϵ n σ-n^2/2μ_2σ^3.This is called supercritical because the two new orbits created at σ_*=±√(2μ_2ϵ/n),μ_2>0,are stable: for denoting by f the right-hand-side of (<ref>), we find the σ_* orbits haveeigenvalues Df(σ_*)=-2nϵ, which is negative for ϵ>0. We note that only the positive sign orbit (<ref>) is admitted here because we have taken σ>0. So on the centre manifold, this equilibrium attracts all orbits.In addition, for ϵ>0, all orbits are repelled from the unstable equilibrium at the origin (the equilibrium at the origin always exists for (<ref>), and is stable for ϵ<0). The bifurcation diagram is therefore as in Fig. <ref>, and the stability diagrams of bifurcations on the centre manifold are shown in Fig. <ref>. We see from these diagrams that the fixed branch ℰ_1 attracts all nearby orbits in the β-stratum. Step-4:Pitchfork evolution for the fixed branch ℰ_2.The case μ_2<0 is treated by similar methods, but the results are different, in that here forϵ>0 all orbits blow up in a finite time as there is no opposing of the cubic term as in the supercritical case. We take ϵ<0, so that μ_1 is decreasing in this case, and the (-) sign in the evolution eqn (<ref>) to examineℰ_2. The centre manifold graph is given by, ξ=-n/2|μ_2|σ^2+O(3),and the evolution equation for the shear on W^c reads, σ̇=|ϵ| n σ+n^2/2|μ_2|σ^3.This is therefore a subcritical pitchfork bifurcation for ℰ_2 taking place on the τ_--curve μ_2=-n√(-μ_1) (cf. Corollary <ref>), implying that the origin is a stable equilibrium in this case when ϵ<0. The ξ-evolution is given by, ξ̇=-2|μ_2|/nξ,these being the vertical directions to the centre manifold. From Eq. 
(<ref>) with ϵ<0, it follows that the new orbits are created, σ_*=±√(2|μ_2||ϵ|/n), μ_2<0 (we again keep only the positive one as we take σ>0), and are unstable on the centre manifold, whereas the origin is stable. (We note that when ϵ>0, there are no new orbits created, and also the origin becomes unstable.) These results lead to the Figs. <ref>, <ref>. We see from these diagrams that the fixed branch ℰ_2 is a source repelling all nearby orbits. This concludes the proof of Theorem <ref>.

§.§ The bifurcation diagram for the NPR-system

Assembling all results on the various bifurcations found in the previous subsections, we arrive at the bifurcation diagram <ref>. This is the parameter diagram together with all phase diagrams showing the dynamics on the different strata superimposed.

§ THE BIFURCATION DIAGRAM FOR THE CV SYSTEM

In this Section, we construct the bifurcation diagram of the convergence-vorticity system, that is, we take into account the possible effects of rotation when considering the dynamics of the convergence function.

§.§ Versal unfolding

We now move on to the examination of the convergence-vorticity problem (<ref>) (`CV'-system hereafter), and as with the treatment of the NPR-system, we start with the non-generic situation obtained by setting ℛ=0 in (<ref>), and arrive at,
ρ̇ =ρ^2-ω^2 , ω̇ =nρ ω ,
where n=2,3 as before. As with the system (<ref>), the non-generic convergence-vorticity system (<ref>) is already in normal form, and has a ℤ_2-symmetry ρ→ρ, ω→ -ω. The versal unfolding for systems with a ℤ_2-symmetry has been found and is given by (cf. e.g., <cit.>: Sections XIX.1-3, <cit.>: Section 8.5.2, <cit.>: Sections 20.7, 33.2, <cit.>: Sect. 7.4, <cit.>: Section 4.4, and refs. therein for complete proofs and further references),
ρ̇ =μ_1+ρ^2-ω^2, ω̇ =μ_2 ω+nρω.
For the ensuing analysis of the system (<ref>), we shall use the same symbols for the parameter μ as with the NPR-system of the previous Section. Although the method of analysis is analogous to that of the unfolding (<ref>), we shall see that the results are dramatically different from those in the previous Section for the NPR-system, the main new phenomenon being the appearance of a stable limit cycle associated with the dynamics of (<ref>)[We emphasize that limit cycles are isolated closed orbits in phase space, and can only appear in nonlinear systems. Linear systems may of course have periodic solutions, but these can never be isolated (such solutions are always surrounded by an infinite number of others). A stable limit cycle attracts all nearby orbits.].

We give here the full parameter diagram for the convergence-vorticity case. As we show below, in the convergence-vorticity case there is a more complicated stratification of the parameter space, in that there are now four new strata, namely, the positive μ_1-axis, that we call the Eρ-line, the ν-line in the fourth quadrant of the parameter space, and the regions α, η.

§.§ The zero-parameter case

To start the examination of the zero-parameter system (<ref>), a similar analysis to that of Section <ref> suggests that the invariant line condition (<ref>) cannot be fulfilled, because we now obtain m^2=1-n, and this is negative for both the timelike and null congruences. Therefore there are no invariant lines (Stewart separatrices) in the vorticity case.
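A quick numerical check is also possible: for the null case n=2 the quantity (ρ^2+ω^2)/ω turns out to be conserved along orbits of (<ref>), which already signals the closed phase curves found next. A sketch (the initial data and tolerances are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 2                                           # null case

def cv(t, u):
    rho, omega = u
    return [rho**2 - omega**2, n * rho * omega]

sol = solve_ivp(cv, (0.0, 4.0), [0.3, 0.4], rtol=1e-11, atol=1e-13)
rho, omega = sol.y
c = (rho**2 + omega**2) / omega                 # candidate conserved quantity for n = 2
print("spread of (rho^2+omega^2)/omega along the orbit:", c.max() - c.min())
# the spread is tiny (set by the integration tolerances), i.e., the quantity is conserved
```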
However, the phase portrait in this case can be easily drawn by considering the first integral of the system (<ref>), namely, the functioni_n(ρ,ω)=n/2ω^2/n(ω^2/n-1+ρ^2),along solutions, with the level curves of i_n providing all the phase curves of(<ref>). These are given by, ω^2/n-1+ρ^2=2/ncω^2/n, c∈ℝ. For instance, for the null case (n=2),the family, ρ^2+ω^2=cω,provides these curves, and similarly for the timelike case, ρ^2+ω^2/2=2/3cω.The phase diagram is as in Fig. <ref>. We note that the direction of the flow in Fig. <ref> can still be found by the method used before for the NPR-system. §.§ General comments on the stability and bifurcations of the fixed branchesThe versal unfolding (<ref>) has three fixed branches, namely,* ℰ_1,2=(∓√(-μ_1),0). These are real, provided μ_1<0. *The third fixed branch is, ℰ_3=(-μ_2/n,√(μ_2^2/n^2+μ_1)),which is real if,μ_2^2/n^2+μ_1>0.The linearized Jacobian of (<ref>) is given by,J=( [2ρ -2ω;nω μ_2 +nρ; ]).Since ℰ_1,2 are identical to the corresponding fixed branches of the NPR-system, the analysis of Section <ref> holds true for the present case. In particular,the two general lemmas of that Section continue to hold as well as the stability analyses of the branchesℰ_1,2 pass on to the present case, specifically the two main Theorems <ref>, <ref>, and the two Corollaries <ref>, <ref>. (We note that this analysis depends only on n, not on the sign of the coefficients of the σ^2 or ω^2 terms.) However, for the fixed branch ℰ_3, although the forms for the trace and determinant are identical to (<ref>),the sign of the determinant satisfies, detJ|_ℰ_3> 0,on theβ-stratum,because of the condition μ_2^2/n^2+μ_1>0[We note that the vanishing of this expression on the τ-curve implies the degeneration of ℰ_3 into ℰ_1,2, exactly as before, cf. (<ref>), (<ref>).]. This means that in the vorticity case, the fixed branch ℰ_3 cannot be a saddle anymore, but it is a node: Using the lemma <ref>, we find, TrJ|_ℰ_3≷ 0,ifμ_2≶ 0,that is ℰ_3 is a source when μ_2<0, and a sink when μ_2>0. In this case, we find from Eq. (<ref>) that the eigenvalues are real, and using the condition (<ref>), we obtain from (<ref>) thatthe parameters must satisfy, μ_1<(1-2n)μ_2^2/2n^3.This implies that μ_1<0, for both the null and timelike cases.To proceed further, we note that in all regions of the parameter plane (μ_1,μ_2) where the condition (<ref>) is violated, that is when, μ_1>(1-2n)μ_2^2/2n^3,the eigenvalues of the linearized Jacobian forℰ_3 are complex conjugate of the form,a± i b,witha=-μ_2/n.This means that in this case (also on the ε-axis, where μ_1=0),* ℰ_3 is an unstable node for a>0, (i.e., μ_2<0),* ℰ_3 is a stable node a<0, (i.e., μ_2>0).Therefore the bifurcations of the fixed branches ℰ_1,2 are as for the NPR-system:we have a saddle-node bifurcation of the Eros ε-line, and a pitchform bifurcation on the τ-curve. These are described exactly as in the previous Section, and we shall not repeat that analysis.The only difference to the previous case of the NPR-system is that in the present case the pitchfork bifurcations of the branches ℰ_1,2are to a node instead of a saddle, a stable node (sink) for μ_2>0, and an unstable node (source) for μ_2<0. There is, however, one remaining case for ℰ_3 - now possible in the vorticity case - that is, when the eigenvalues from Eq. 
(<ref>) are purely imaginary (a=0 above): On the Erebus Eρ-line, μ_1>0,μ_2=0, we find, λ_± =± i√(2nμ_1).In this case, we expect a degenerate Hopf bifurcation for the system (<ref>) when μ_1>0, that is on the Eρ-line, there exist an infinite number of closed orbits surrounding the centre (cf. Eq. (<ref>)). The proof of this statement is very similar to the construction of the phase curves of Fig. <ref>, and the result holdstrue for both the null and timelike cases.However, as we shall describe below, it is a fundamental result in this case that the inclusion of higher-order terms in the versal unfolding (<ref>) stabilizes the situation and leads to a non-degenerate Hopf bifurcation and to a unique stable limit cycle in the α-stratum. Furthermore, this limit cycle disappears on the ν-curveand η-region (see Section <ref> below).We shall now describe in some detail the phase portraits in each of thestrata of Fig. <ref>. §.§ Detailed description of the vorticity-induced bifurcationsThe final bifurcation diagram of the convergence-vorticity system (<ref>) is given in Fig. <ref>, and can be deduced by gathering together all the results we have so far in this Section, and attaching the corresponding phase diagrams to each of the strata shown in Fig. <ref>. This is discussed below for the various cases taking into account all previous results, cf. Fig. <ref>. The basic difference between the present case and that of the NPR-system is that presently, in the region of the parameter plane with μ_1>0, the fixed branch ℰ_3 bifurcates further and affects the global topology of the orbits, despite the fact that the fixed branches ℰ_1,2 bifurcate as before in the other regions of the plane. This leads to a totally new and complementary behaviour to the convergence-shearing aspect met in the NPR-system (in the latter we had only the focusing behaviour present in the χ-region, instead of the various strata now appearing). This novel behaviour is apparently linked to the nature of the non-hypersurface-orthogonal congruences associated with the vorticity case. §.§.§ Stratum Gaia-γWe have already plotted the phase portrait in the zero-parameter case in Fig. <ref>, corresponding to the origin of the parameter plane, the Gaia-γ point. One observes the absence of the Stewart lines and similar `focusing' behaviour, now replaced by the closed orbits shown in the phase portrait. §.§.§ Stratum Chaos-χIn this stratum (i.e., in the first quadrant of the (μ_1,μ_2)-plane) there is only the fixed branch ℰ_3, and no ℰ_1,2 branches. Since μ_1,μ_2>0,a<0 from Eq. (<ref>), and since the eigenvalues are complex conjugate, ℰ_3 is a stable focus.§.§.§ Stratum Eros-εOn the Eros ε-line {μ_1=0}, both ℰ_1,2 become degenerate at the origin, while ℰ_3=(-μ_2/n,|μ_2/n|). Then on the ε_+-line, where μ_2>0, ℰ_3 is a sink lying in the second quadrant with all orbits approaching it, and on the ε_--line, where μ_2<0, ℰ_3 is a source lying in the second quadrant with all orbits repelled towards the origin. This is describedinFig. <ref> in the two phase portraits drawn near the ε-strata.§.§.§ Strata Uranus-o, Pontus-πOn both of these regions, the condition (<ref>) always holds, andwe get a similar situation as with that on the Eros-ε line, but this time with ℰ_1=(-√(-μ_1),0), and ℰ_2=(-√(μ_1),0), cf. these strata in Fig. <ref>. In the o-stratum, ℰ_1 is a saddle and ℰ_2 a source, whereas in the Pontus-π stratum ℰ_1 is a sink and ℰ_2 a saddle as before, cf. Fig. 
<ref>.

§.§.§ Stratum Ourea-β, Tartara-τ curve

On the Tartara-τ curve as well as in the Ourea-β region, there is no ℰ_3 branch. On the τ-curve, the two fixed branches ℰ_1,2 bifurcate as before; however, instead of getting a saddle we now have two nodes in the β-stratum, one unstable for μ_2<0 and one stable for μ_2>0, leading to the corresponding portraits in these two regions, Ourea-β and the τ_+, τ_- curves in Fig. <ref>. This leaves us with the situation concerning the Hopf bifurcation when μ_1>0, which is examined below.

§.§.§ Erebus-Eρ-line, Aether-α stratum, Nyx-ν-curve, Chemera-η stratum

On the Erebus-Eρ-line there is a degenerate Hopf bifurcation of the ℰ_3 branch, as a result of which there is an accompanying infinity of closed orbits. This situation is stabilized by the inclusion of higher-order (only cubic suffices) terms on the stratum Aether-α, and a stable limit cycle appears there. This, however, disappears on the Nyx-ν-curve, leaving an unstable focus in the Chemera-η stratum. These results are discussed in <cit.>, and in <cit.>, <cit.>, <cit.>, <cit.>, and references therein, where descriptions of the proofs, and especially that of the uniqueness of the stable limit cycle, may be found. The phase portraits are given in Fig. <ref>.

The meaning of the appearance or disappearance of the stable limit cycle during the Hopf bifurcations χ→η is related to the phenomena of mild and hard loss of stability (cf. <cit.> for a general overview, and <cit.>, <cit.> for details). During evolution in the bifurcation fragment χ→α, the stable node branch ℰ_3 gives birth to the limit cycle (generally of radius √(-μ_2)), with its stability transferred to the cycle and ℰ_3 becoming unstable (mild loss of stability). During evolution along the fragment α→η, the cycle disappears by becoming invisible, and the state becomes unstable at crossing the ν-line into the η stratum (for more on this interesting `cycle blow-up' phenomenon, cf. <cit.>, p. 377, <cit.>, <cit.>, pp. 783-6).

We know from the Poincaré-Andronov theorem that for planar systems (such as the ones we study here) the only one-parameter bifurcations in generic families are the ones found here (i.e., merging of equilibria and Hopf). This completes our discussion of the convergence-vorticity problem.

§ THE BIFURCATION DIAGRAM FOR THE OS-SYSTEM

We now turn to the Oppenheimer-Snyder problem (hereafter the `OS' problem) and develop the versal dynamics and bifurcation diagrams. First, however, we shall bring the OS-system to its normal form and find the versal unfolding. This is done in the next Subsection. The bifurcation diagram is constructed in later subsections.

A main conclusion from the analysis of the problem in the present Section is that the OS dynamics lies somewhere between the NPR-system and the vorticity-induced bifurcations considered in previous Sections, in that there are two main cases, one giving unstable solutions and resembling the NPR problem more closely, and another having stable limit-cycle solutions, closer to the vorticity situation met in the previous Section.

§.§ The topological normal form

We write the OS equation (<ref>) in dynamical-system form by setting ẋ = y, ẏ = -(3/4)y^2, with phase portrait as in Fig. <ref>. One observes the standard OS behaviour in this phase portrait, namely that the comoving coordinate x (being ln r) diverges to -∞ (corresponding to r=0) or to +∞ (r=∞) for negative or positive y, respectively.
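For reference, this behaviour follows from an explicit integration (a routine check not spelled out in the text): ẏ=-(3/4)y^2 gives y(t)=y_0/(1+(3/4)y_0 t), and then ẋ=y gives x(t)=x_0+(4/3)ln|1+(3/4)y_0 t|. For y_0<0 the argument of the logarithm vanishes at the finite time t=-4/(3y_0), so x→-∞ (r→0) there, while for y_0>0 one has x→+∞ (r→∞) as t→∞.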
However, we note that linearized Jacobian of the system(<ref>)at the origin is,J|_(0,0)=( [ 0 1; 0 0; ]),thus suggesting the nontrivial fact that the system (<ref>) in the form Ẋ=J|_(0,0) X+F(X), where X=(x,y)^⊤ , F(X)=(0,-(3/4)y^2)^⊤, may be subject to a Bogdanov-Takens bifurcation at the origin.§.§.§ Elimination of the second-order termsTo study this, the first problem is how to write (<ref>) in normal form.This stepcan be accomplished by first simplifying the second-order terms present in (<ref>).Since the normal form up to second-order terms is known (it is the Bogdanov-Takens normal form), it is a trivial matter to see that the quadratic Oppenheimer-Snyder term (0,-(3/4)y^2)^⊤ in (<ref>) is nonresonant[Note: As we discussed earlier, in standard terminology, the resonant terms are the unremovable nonlinear terms which according to the normal form theorem belong to the complement of the set that contains all the terms that can be written as linear combinations of linearly independent elements of the space L_J^(2)(H_2) - see below. This is generally true: if at any given order, terms present in the vector field at that order do not appear in the normal form, then they cannot be present and can be completely eliminated.], and therefore it can be completely eliminated.To see this explicitly, we first recall some standard terminology from normal form theory (cf.refs. given in the bibliography). The scope of normal form theory, as we already discussed earlier, is to simplify the appearance of nonlinear terms at a given order. For example, to simplify terms of, say,second-order, let us denote such terms by F_2(X). Then the simplification may beaccomplished by performing a nonlinear transformation X=Y+h_2(Y), to the original system variables to obtain the homological equation for h_2(Y), namely,L_J^(2)(h_2(Y))=F_2(Y), L_J^(2)(h_2(Y))=Dh_2(Y)JY-Jh_2(Y),where h_2(Y),F_2(Y) are vector-valued homogeneous polynomials of degree two. In ℝ^2, with standard basis (1,0)^⊤,(0,1)^⊤, the space H_2 of all vector-valued homogeneous polynomials of degree two is spanned by the vectors,H_2=span{( [ x^2; 0; ]),( [ xy;0;]),( [ y^2; 0; ]),( [ 0; x^2; ]),( [0; xy;]),( [ 0; y^2; ])},obtained by multiplying each of the standard basis vectors by all possible homogenous polynomials of degree two. Now for J given byEq. (<ref>), it is a simple calculation to show that,L_J^(2)(H_2)=span{( [ -2xy;0;]),( [ ± y^2; 0; ]),( [x^2; -2xy;]),( [ xy; -y^2;]),( [ 0; 0; ]),}.Consequently, the Oppenheimer-Snyder quadratic term from (<ref>), can be written as the linear combination, ( [ 0; -3/4y^2; ]) = 3/8[ ( [ -2xy;0;])+2( [ xy; -y^2;]) ],of two of the basis elements, and therefore this term can be eliminated leaving no second-order terms in the normal form. §.§.§ The third-order termsThe calculation of L_J^(3)(H_3) is of course known in the literature but since no details are often provided, we include below some of the relevant results. 
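To close the second-order step explicitly (an elementary check added here for completeness): taking 3/8 of (-2xy, 0)^⊤ plus 3/4 of (xy, -y^2)^⊤ gives 0 in the first component and -(3/4)y^2 in the second, which is precisely the Oppenheimer-Snyder quadratic term; since both summands belong to L_J^(2)(H_2), that term lies in the image of the homological operator and is removed by the corresponding quadratic change of variables.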
A basis for H_3 is, H_3=span{( [ x^3; 0; ]),( [ x^2y;0;]),( [ xy^2;0;]),( [ y^3; 0; ]),( [ 0; x^3; ]),( [0; x^2y;]),( [0; xy^2;]),( [ 0; y^3; ])}.The linear map on H_3 is,L_J^(3)=Jh_3(Y)-Dh_3(Y)JY,and for each basis element, we calculate the action of the L_J^(3)-operator to be:L_J^(3)( [ x^3; 0; ])=-3( [ x^2y;0;]),L_J^(3)( [ x^2y;0;])=-2( [ xy^2;0;]),L_J^(3)( [ xy^2;0;])=-( [ y^3; 0; ]),L_J^(3)( [ y^3; 0; ])=( [ 0; 0; ]),L_J^(3)( [ 0; x^3; ])=( [x^3; -3x^2y;]),L_J^(3)( [0; x^2y;])=( [x^2y; -2 xy^2; ]),L_J^(3)( [0; xy^2;])=( [ xy^2; -y^3;]),L_J^(3)( [ 0; y^3; ])=( [ y^3; 0; ]).Therefore we find thatH_3=span{( [x^3; -3x^2y;]),( [ x^2 y; 0; ]), ( [ xy^2;0;]),( [ y^3; 0; ]),( [ 0; y^3; ]),( [0; xy^2;]) },which implies that dimL_J^(3)H_3=6, and so dim G_2=dim H_3-dimL_J^(3)H_3=8-6=2.In other words, to compute G_2 we need to find two linearly independent, orthogonal 6-vectors to each column of the (8× 8)-matrix representation of L_J^(3). If (a b c d e f g h)^⊤ are the components of any such (column-)vector, we find,a=3f,any e,and b=c=d=g=h=0,and so two such vectors are,(1 0 0 0 0 -3 0 0)^⊤, (0 0 0 0 1 0 0 0)^⊤, leading to the two vectors, ( [ 0; x^3; ]),( [x^3; x^2y;]).In fact, a simpler choice of a basis for G_2 is, ( [ 0; x^3; ]),( [0; x^2y;]),where we have used, ( [0; x^2y;])=( [x^3; x^2y;])-( [ x^3; 0; ]),where the second vector is an element of L_J^(3)(H_3). This is the choice that we shall use below. §.§.§ The normal form and the versal unfoldingConsequently, the normal form of the OS-system near the origin to third-order terms is,ẋ =y, ẏ =ax^3+bx^2y,with a,b constants.We can further rescale x,y and arrive at the final normal form (cf. <cit.>, p. 437-8, <cit.>, p. 365-6),ẋ =y, ẏ =± x^3-x^2y.Therefore we have shown the following theorem.The normal form of the OS-system (<ref>) contains two moduli coefficients and is given by Eq. (<ref>). The versal unfolding can now be found if we recallthat theversal deformationof the matrix,( [ 0 1; 0 0; ]),is,( [ 0 1; μ_1 μ_2; ]),and so using the normal form (<ref>), we arrive at the versal unfolding,ẋ =y, ẏ =μ_1 x+μ_2 y± x^3-x^2y,with the two parameters μ_1, μ_2[The proof of the ±-sign, thus reducing the number of cases without any loss of generality,is justified by a rescaling analysis very similar to that perform for the quadratic case of the Bogdanov-Takens singularity, cf. <cit.>, p. 437-8, and it is omitted here.].The versality of this unfolding is provenin many places, originally in<cit.>. We note the important result that when μ_1,μ_2=0 we get the degenerate system (<ref>), not the original OS-system (<ref>), because in the normal form of the latter system all quadratic terms have disappeared.We further note that the versal unfolding of the Oppenheimer-Snyder system given by (<ref>) contains the modular coefficient s=± 1 in frontof the x^3 term, and so there are two separate systems to be considered for the versal dynamics. This is done below. §.§ Versal dynamics for a positive modular coefficient §.§.§ Local stabilityWe start with the positive modular parameter system,ẋ =y, ẏ =μ_1 x+μ_2 y+ x^3-x^2y,which has the following fixed branches:* ℰ_1,2=(±√(-μ_1),0). These are real provided, μ_1<0. *The third fixed point isat the origin, ℰ_3=(0,0),We note that there are no fixed points for μ_1>0.It is not difficult to deduce the unstable nature of thebranches ℰ_1,2 in this case. 
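As an aside, the entries of the L_J^(3) table above are easy to verify symbolically; the following minimal sympy sketch is our own illustration and not part of the original computation:

import sympy as sp

x, y = sp.symbols('x y')
J = sp.Matrix([[0, 1], [0, 0]])
X = sp.Matrix([x, y])

def L3(h):
    # L_J^(3)(h) = J h - Dh (J X), as defined above
    return J * h - h.jacobian(X) * (J * X)

print(L3(sp.Matrix([0, x**3])))   # expected: Matrix([[x**3], [-3*x**2*y]])
print(L3(sp.Matrix([x**3, 0])))   # expected: Matrix([[-3*x**2*y], [0]])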
The linearized Jacobian is,J_+=( [01; μ_1+3x^2-2xy μ_2 -x^2;]),and so we find,J_+|_ℰ_1,2=( [ 0 1; -2μ_1 μ_1+μ_2; ]).Therefore, TrJ_+|_ℰ_1,2=μ_1+μ_2,detJ_+|_ℰ_1,2=2μ_1,and so in the region under consideration where μ_1<0, detJ_+|_E_1,2<0. Therefore, from Lemma <ref> we have that both ℰ_1,2 are saddles.On the other hand,J_+|_ℰ_3=( [ 0 1; μ_1 μ_2; ]),and, TrJ_+|_ℰ_3=μ_2,detJ_+|_ℰ_3=-μ_1,so that detJ_+|_ℰ_3>0, when μ_1<0, and using Lemmas <ref>, <ref>,we have the following result.In the half-space μ_1<0, the origin ℰ_3 is:*a source for μ_2>0, *a sink for μ_2<0.When μ_1>0, there are no fixed branches ℰ_1,2, while since detJ_+|_ℰ_3<0, the origin is a saddle for any sign of μ_2. We note that in the cases μ_1≷ 0, there are no bifurcations. In addition, there are three equilibria in the half-space μ_1<0,but only ℰ_3 when μ_1>0. These phase portraits will be shown after we also complete the study of their bifurcations (see end of the subsection).The most interesting cases dynamically are when one of the parameters is zero, in which case we have local bifurcations.§.§.§ Local bifurcations, Case A: μ_1=0In this case, we treat μ_2 is an arbitrary nonzero constant and put the versal unfolding (<ref>) in a suitable form for the center manifold theorem, which examines possible bifurcations nearμ_1=0 (note that the eigenvalues in this case are 0,μ_2).In order to achieve this , we pass to new `coordinates', ( [ x; y; ])=([ 1 1; 0 μ_2; ])( [ u; v; ]),( [ u; v; ])=([ μ_2-1; 0 1; ])( [ x; y; ]),so that, ( [ u̇; v̇;])=1/μ_2([ μ_2-1; 0 1; ])( [ ẋ; ẏ; ]),where the ẋ,ẏ terms are given by (<ref>).The centre manifold in this case is (at least of O(2)),v=v(u,μ_2),and so after some calculation using Eq. (<ref>), the reduced equation on the centre manifold is: u̇=-μ_1/μ_2 u-1/μ_2u^3 +O(5),implying a pitchfork bifurcation on μ_1=0. This is supercritical for μ_2>0 (origin repels all orbits), and subcritical for μ_2<0 (origin attracts all orbits). This is like in Figs. <ref>, <ref> respectively, with μ_1 in the place of -ϵ.§.§.§ Local bifurcations, Case B: μ_2=0In the half-space μ_1<0, the origin ℰ_3 is a centre when μ_2=0, and so we expect a Hopf bifurcation on μ_2=0. We shall be brief in proving these results.It follows froman application of the Hopf bifurcation theory, that since the eigenvalues of the linearized Jacobian satisfy, d/dμ_2|_μ_2 =0λ_±=1/2(1+μ_1/√(μ_1^2-8μ_1))>0,μ_1<0,we have that for μ_2<0 the origin will be asymptotically stable, and for μ_2>0 it will be unstable. In addition, since the bifurcating periodic orbit at μ_2=0 is stable, this is a supercritical Hopf bifurcation.Also there are no periodic orbits when: 1) μ_1>0, 2) on the third quadrant, and 3) by a Hamiltonian analysis, above the curve μ_2=-μ_1/5 (for this last result, cf. e.g.,<cit.>, p. 372-3).This then provides the complete bifurcation diagram for the positive moduli case, cf. Fig. <ref>. A basic characteristic of it is the existence of only unstable solutions in this case.§.§ Versal dynamics for a negative modular coefficientThis case is somewhat more complicated that the positive moduli case (there are more global bifurcations), but the local bifurcations can be treated analogously, and so we refer the reader to the literature for the global problem.§.§.§ Local stabilityThe negative modular parameter system,ẋ =y, ẏ =μ_1 x+μ_2 y- x^3-x^2y,has the following fixed branches:* ℰ_1,2=(±√(μ_1),0). These are real provided, μ_1>0. *The third fixed point isat the origin, ℰ_3=(0,0),and always exists (that is for all μ_1). 
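Before analysing this system, we note that the stability assignments of the previous subsection, and those derived below for the present one, can be spot-checked numerically. A minimal sketch (our own illustration; the sample parameter values are arbitrary choices):

import numpy as np

def classify(mu1, mu2, s):
    # versal unfolding: x' = y, y' = mu1*x + mu2*y + s*x**3 - x**2*y, with s = +1 or -1
    pts = [(0.0, 0.0)]
    if s * mu1 < 0:                      # branches E_{1,2} = (+/- sqrt(-mu1/s), 0)
        r = float(np.sqrt(-mu1 / s))
        pts += [(r, 0.0), (-r, 0.0)]
    for xe, ye in pts:
        Jac = np.array([[0.0, 1.0],
                        [mu1 + 3*s*xe**2 - 2*xe*ye, mu2 - xe**2]])
        print((xe, ye), np.linalg.eigvals(Jac))

classify(-1.0, 0.5, +1)   # positive modulus: origin a source, E_{1,2} saddles
classify(+1.0, -0.5, -1)  # negative modulus: origin a saddle, E_{1,2} sinks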
We note that there are no fixed points of the form ℰ_1,2 for μ_1<0.The linearized Jacobian isJ_-=( [01; μ_1-3x^2 μ_2 -x^2;]),and so we find,J_-|_ℰ_1,2=( [ 0 1; -2μ_1 μ_2-μ_1; ]),and, TrJ_-|_ℰ_1,2=μ_2-μ_1,detJ_-|_ℰ_1,2=2μ_1.Also, for ℰ_3, we have,J_-|_ℰ_3=( [ 0 1; μ_1 μ_2; ]),and TrJ_-|_ℰ_3=μ_2,detJ_-|_ℰ_3=-μ_1.Therefore for μ_1>0, ℰ_3 is a saddle. The fixed branches ℰ_1,2 have positive determinant from Eq. (<ref>), and so they are centres on the line μ_2=μ_1 of the (μ_1,μ_2) parameter plane, with eigenvalues given by λ_± =±2i√(2μ_1).In addition, they are sources above and sinks below that line.On the other half-space, i.e., whenμ_1<0, there are no fixed branches ℰ_1,2, and since the detJ_-|_ℰ_3>0,on the lower half-space with μ_2<0,ℰ_3 is a sink, while on μ_2>0,ℰ_3 is a source.This takes care of the local stability of the three equilibria ℰ_1,2,3 of the system (<ref>). §.§.§ Local bifurcationsThe two centres on the lineμ_2=μ_1 with the purely imaginary eigenvalues given by Eq. (<ref>) lead to a Hopf bifurcation on this line when μ_1>0. By the same method as in the previous subsection of positive moduli, we find that in the present case the bifurcating orbit is unstable, and so the Hopf bifurcation is subcritical.As in the previous subsection, we also find a pitchfork bifurcation on μ_1=0, and we shall not repeat the analysis here.We note that there are no periodic orbits in the present case when μ_2<0.The bifurcation diagram for the negative moduli case is therefore given as in <ref> (except for the global bifurcations). We note that by a Hamiltonian analysis (similar to that needed also in the positive moduli case), one proves the existence of closed orbits surrounding the three equilibria when μ_1>0, and also the presence of a double saddle connection on the curve μ_2=4μ_1/5, just below the μ_2=μ_1 curve (not shown here, cf. <cit.>, p. 373-4, Fig. 7.3.7).The form of the solutions indicates the presence of additionalglobal bifurcations which in the present case are more substantial than in those of the previous subsection. We refer the reader to the references for this problem (cf. <cit.>, p. 376, Fig. 7.3.9, <cit.>, p. 309, for a summary of the global bifurcations).This concludes the analysis of the Oppenheimer-Snyder problem. § METAMORPHOSES OF SPACETIME SINGULARITIES AND BLACK HOLESIn this Section, we present some examples of the application of the previousresults to spacetime singularities and black holes. It is not intended to provide an exhaustive treatment of all the possible behaviours depicted in the bifurcation diagrams, only a small selection of examples from them. §.§ General commentsThe four bifurcation diagrams, namely, Fig. <ref> for the NPR-system, Fig. <ref> for the CV-system, and Figs. <ref>, <ref> for the Oppenheimer-Snyder system, are the central results found in earlier Sections of this work. Using them, we can make direct contact with the possible metamorphoses of the spacetime near singularities and black holes. We note that no metamorphosis is ever possible without a bifurcation, because in this case any `change of form' is bound to happen exclusively in one phase portrait which can never become anything else.Before we start, let us make clear that looking at any of the four bifurcation diagrams referred to above, we observe not one but several phase portraits each corresponding to a region (a `stratum') of the parameter diagram lying at the centre of the bifurcation diagram. 
One may wonder why we need all these diagrams and why we do not have just one phase portrait as that is common in introductory treatments of dynamical systems. As we explained earlier, the purpose of each one of these phase portraits is to depict the dynamics corresponding to a particular region of the parameter space. This dynamics changes when the parameter point moves from one stratum to another (that is as `the parameter changes'), and with it phase diagrams also change.However, this is but one way of thinking about the changes (or metamorphoses) of the phase diagrams, and in fact one that it may appear asa somewhat deceptive one to some readers. Another, perhaps more indicative, approach is to think of all of them as being just one phase portrait which includes all others and as the parameter changesit smoothly alters its form, each time it precisely becomes one of those different phase portraits we see during the metamorphoses of the dynamics. This approach makes the smooth motion of the phase point more transparent during the phase portraits' metamorphoses.As the vector parameter μ changes its values (for instance, as the `parameter point' μ moves in a small circle around the origin of the parameter plane) and the phase point moves smoothly from one phase portrait to another, the causality relations of spacetime become dependent of μ. Therefore causal set relationsmay change as the parameters change their values and pass through from different regions of the parameter plane (μ_1,μ_2) separated by the bifurcation boundaries,that is as the system enters intoor exits fromthe various strata.To wit, the metamorphoses of the phase portraits imply a constant topological transformation of the states of the system as the parameter μ changes and the system finds itself in different regions of the parameter space. One thus observes how the different phase portraits (seen in any of the four bifurcation diagrams) constantly and smoothlytransform to the next one and take the system represented by a point in any of the phase portraits to move in another such portrait corresponding to another stratum in the parameter space.We shall first discuss the NPR- and CV-systems, and then in the last Subsection we shall return to the discussion of the Oppenheimer-Snyder bifurcations. We note that in both cases, that of the NPR- and CV-systems, and the two subcases of the Oppenheimer-Snyder collapse, there are in general focusing and defocusing solutions, with the focusing ones producing the unstable diagrams <ref> and <ref>, and the defocusing solutions seen in the other pair, namely Figs. <ref>,<ref>. We shall not discuss each one of the phase portraits in the bifurcation diagrams, but only restrict ourselves to discussing some of them as well as of their metamophoses. §.§ The NPR and CV systems The versal unfoldings (<ref>), (<ref>) can be written in the unified form,ρ̇ =μ_1+ρ^2+sz^2, ż =μ_2 z+nρ z.for z=σ, or, z=ω, respectively,s=± 1 is a moduli coefficient, and n=2,3.Eq. (<ref>) is a standard form of a planar system with a ℤ_2-symmetry. Since for both values of the modular coefficient, (<ref>) represents a different subsystem of the Sachs optical equations, it is natural to make the following correspondences of the parameters μ_1,μ_2 ofthe NPR- and the CV-systems: NPR-system:μ_1→ℛ,μ_2→𝒲,with ℛ,𝒲 as in Subsection <ref>. For the CV-system, since when σ=0 there is no W term, it is natural in this case to think of μ_2 as a rotational parameter, so that we set, CV-system:μ_1→ℛ,μ_2→rotation parameter. 
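The phase portraits referred to in this and the following subsections can be reproduced numerically from the unified form above. The sketch below is our own illustration, with arbitrary sample values of (μ_1, μ_2, s, n) and of the initial condition:

import numpy as np
from scipy.integrate import solve_ivp

def orbit(mu1, mu2, s, n, ic, t_max=30.0):
    # unified unfolding: rho' = mu1 + rho^2 + s*z^2,  z' = mu2*z + n*rho*z
    rhs = lambda t, u: [mu1 + u[0]**2 + s*u[1]**2, mu2*u[1] + n*u[0]*u[1]]
    return solve_ivp(rhs, [0.0, t_max], ic, rtol=1e-8, atol=1e-10)

# CV system (s = -1), null case, parameters in the first quadrant (chi-stratum):
sol = orbit(mu1=0.2, mu2=0.3, s=-1, n=2, ic=[-0.2, 0.5])
print(sol.y[:, -1])  # an orbit started near E_3 should spiral into the stable focus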
§.§.§ Convergence-shear transfigurationsWe shall start by examining the continuous metamorphoses (or transfigurations) of phase portraits for the NPR-system following the bifurcation diagram in Fig. <ref>. We have shown that the following bifurcations occur:* A pair of saddle-node bifurcations dominated by the convergence ρ on the centre manifold-reduced dynamics, taking the system along the following fragments (cf.the bifurcation diagram in Fig. <ref>, and Theorem <ref>): χ→ε_+→ o, or, in opposite direction,o→ε_+→χ,and similarly for the negative ε_-.* A pair of pitchfork bifurcationsdominated by the shear σ on the centre manifold-reduced dynamics, taking the system along the following fragments (cf.the bifurcation diagram in Fig. <ref>, and Theorem <ref>):o→τ_+→β,supercritical, in direction above to below, and, π→τ_-→β,subcritical, in direction below to above.These bifurcationsdescribe the possible metamorphoses of the phase portraits of the NPR-problem, and dictate how and where the system will change into something else. The stratum μ_1>0 in this Fig. <ref> identifies with positive local energy density as in the energy conditions,and so it describes a region where gravity is always attractive, while the region μ_1<0 obviously identifies with repulsive gravitational effects.Therefore, the ρ-dominated, saddle-node bifurcations of the NPR-system take the system from a region of attractive gravity to one where gravity is always repulsive and back, whereas both ofthe σ-dominated, pitchfork bifurcations occur in the repulsive gravity region of the parameter space.In the stratum μ_1>0, where the energy conditions hold, we have onlyphase portrait shown in the χ-region, the orbits of which describe the focusing state given by the relations in (<ref>). Hence, this is the region in the parameter plane that corresponds to the Hawking-Penrose singularity theorems, based on the standard focusing effect. According to our results, the system in this region of the parameter space will bifurcate through a saddle-node to the strata o or π when crossing the positive or negative ε axis respectively.§.§.§ Spacetime singularities and their metamorphosesAs an example of a metamorphosis, we now consider the fragment o→ε_+→χ. That is, suppose thatinitially the system is in the stratum o of the parameter space (μ∈ o) where repulsive gravity rules, so that the possible motions of the system are given by the corresponding phase portrait in that stratum, as in Fig. <ref>.Now suppose that the system moves according to the fragment o→ε_+→χ, first to the line ε_+, and then it enters the stratum χ. The dynamics in this case is described by the saddle-node bifurcation and subsequent metamorphoses as in the Theorem <ref>. At the ε_+-crossing, μ_1=0, the two equilibria are annihilated and the system is described by the evolution law (<ref>), that is ρ̇≥ρ^2 in standard focusing effect language,on the centre manifold as it enters the stratum χ.In terms of phase space dynamics, consider a phase point moving on a phase curve of the o-phase portrait in Fig. <ref>. We may choose that the phase point in question is on some phase curve corresponding to an initial condition with ρ_0>0. At the ε_+-crossing, and as the phase portrait of the region o transfigures to that of ε_+,the phase point continues smoothly its evolution along thatorbit which passes through the same initial condition in the new ε_+-phase portrait. 
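(For concreteness, the focusing law quoted above can be integrated by hand, a standard one-line check added here for convenience: from ρ̇≥ρ^2 with ρ(0)=ρ_0>0, comparison with the equality case gives ρ(t)≥ρ_0/(1-ρ_0 t), so the convergence diverges no later than t=1/ρ_0; this is the focusing state reached by the orbits of the χ-portrait.)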
The same happens to the phase point when it finds itself in the phase portrait that corresponds to the stratum-χ, namely, it continues smoothly its evolution along the corresponding phase orbit of the χ-phase portrait leading to the focusing state eventually.Therefore during this process, as a focusing state is approached for all phase orbits with suitable initial conditions, a singularity is formed from a previous state (i.e., at o) where no such situation existed. This result gives a first indication of how a spacetime singularity, as predicted by the singularity theorems, may arise in the χ stratum during the evolution of the system starting from a state in the o stratum where no focusing existed. The main reason for the singularity formation in this case is related to the system's ability to bifurcate in a saddle-node, ρ-dominated bifurcation.On the other hand, and as we have shown in Section <ref>, the centre manifold analysis leading to singularity formation is reversible and applies to the evolution along the opposite fragment, namely,χ→ε_+→ o. In this case, the system starts in the singularity-forming region χ, and then crosses the ε_+ axis to find itself in the o-stratum where two new fixed branches have been formed. The transfigurations of the χ-phase portrait to the ε_+ one, to the o-region one, are described as before, with the main difference being that at crossing the two new solutions are created (instead of colliding) according to the saddle-node prescription, and are then separated in the o-stratum.Another aspect of the`singularity-forming region'χ-stratum is related to the so-called ghost effect[This is also called the `bottleneck effect', cf. <cit.>, pp. 99, 242, and Refs. therein.]: a slow passage to the eventual singularity at the χ-region of Fig. <ref>, after the two equilibria of the system present in the o-stratum collide and annihilate on ε_+, and as the system enters the singularity χ-region. This is calculated byintegrating the centre manifold evolution law (<ref>) to obtain,T_bottleneck=∫_-∞^∞dρ/μ_1 +ρ^2=π/√(μ_1).Therefore the system delays to reach the singularity while in the χ region, with the delay time scaling as μ_1^-1/2. This describesthe parameter-dependence of the time to the singularity as a square-root scaling law. This is a new feature introduced here because of the bifurcating behaviour (saddle-node) and describes possible transitions of the system. It is a parameter-dependent effect, absent in the standard focusing effect (the Raychaudhuri inequality,ρ̇≥ρ^2 implies that only a dependence on the initial condition can appear).§.§.§ Black hole metamorphosesAs another example of the previous developments, suppose that an event horizon has formed in the χ-region, where according tostandard theorems, ρ will increase and become infinite on null geodesics within a finite affine distance in the future depending on the initial condition ρ_0 (as we have also shown above). In other words, in the χ-region, the generators of the boundary of a future set S, i.e., ∂ I^+(S), will have future end points where they will intersect neighboring generators.However, here the situation is slightly different. Because of the possibility of transfiguration through the saddle-node bifurcation described above,all causal-structure pointsets will be equipped with an extra dependence on the parameter μ. For example, in the present case, we can write ∂_μ_1 I^+(S) to indicate the parameter dependence. 
Upon crossing from the χ to the o region through ε_+, the boundary ∂ I^+(S), will gradually transfigure as follows, ∂_μ_1>0 I^+(S)→∂_μ_1=0 I^+(S)→∂_μ_1<0 I^+(S),and as a result, the generators intersecting for the sets ∂_μ_1>0 I^+(S) will not intersect anymore on ∂_μ_1<0 I^+(S). In other words, while on χ once the generators of ∂_μ_1>0 I^+(S) started converging they were destined to intersect and have their future endpoints within a finite distance, now because of the transfiguration to the pointset ∂_μ_1<0 I^+(S), the same generators now describing ∂_μ_1<0 I^+(S) will not intersect each other.This isbecause the phase portrait of the orbits corresponding to χ has gradually changed to that on ε_+ and finally on o, and here there is generally no possibility for a focusing state (that is, at crossing or on o), as the corresponding phase diagrams clearly show. Hence, unlike in the proof of Penrose theorem in Section <ref> (cf. the `crucial step' mentioned there), the compactness of a set like ∂_μ_1<0 I^+(𝒯) does not follow in the present case because no point on this set can be made to belong to the compact set A as constructed in the proof of that result anymore. Further, all such transfigurations will be smooth.This describes how a blackhole-forming spacetime region corresponding toχ is transfigured into one where no such regions exist, whilephase points continue with their orbits smoothly in the phase diagrams of the new strata. Such regions can be now further transfigured using the pitchfork bifurcations which take the system from the regions o and π and crossing the τ parabola into other forms. §.§.§ Vorticity-induced transfigurationsLet us now consider the bifurcation diagram in Fig. <ref> of the CV-system. Here we have vorticity-induced bifurcations as follows:* A pair of saddle-node bifurcations dominated by the convergence ρ on the centre manifold-reduced dynamics, taking the system along the following fragments (cf.the bifurcation diagram in Fig. <ref>, and Section <ref>): χ→ε_+→ o, or, in opposite direction,o→ε_+→χ, and analogously for the negative ε_-.* A pair of pitchfork bifurcationsdominated by the vorticity ω on the centre manifold-reduced dynamics, taking the system along the following fragments (cf.the bifurcation diagram in Fig. <ref>, and Section <ref>):o→τ_+→β,supercritical to a node, in direction above to below, and, π→τ_-→β,subcritical to a node, in direction below to above.* A Hopfbifurcationdominated by the vorticity ω taking the system along the following fragments (cf.the bifurcation diagram in Fig. <ref>, and Section <ref>): χ→ Eρ→α→ν→η,or, in the opposite direction.While the first two are as in the NPR-problem (but with the important difference that a node instead of a saddle is involved), the third bifurcation is very important and is wholly due to the effects of convergence-vorticity combined. More precisely, the cycle created in the degenerate Hopf bifurcation on the Eρ-axis, stabilizes on the α-stratum and makes theHopf bifurcation non-degenerate.This is perhaps the most distinct feature of the vorticity-inducedbifurcations, namely, the creation and annihilation of a unique stable limit cycle in the stratum α of the parameter space in the attractive-gravity region where the energy condition holds. 
Its appearance, dominance, and eventual disappearance in the η stratum means that stable configurations with finite convergence and vorticity which attractnearby orbits are dominant features in this problem.We know that the existence of such a closed orbit is an extremely important phenomenon in general: although periodic orbits exist in linear systems, limit cycles only appear in nonlinear studies because they are isolated (unlike in the linear case). They describe the ability of a system to oscillate in a self-sustained manner (that is without any external forcing). In the present case, the unique stable limit cycle appears in the α-stratum and attracts all neighboring orbits. It describes the finite behaviour of the (ρ,ω) solutions as self-sustained oscillations. We conjecture that the flow on the α-stratum, describing configurations with finite ρ,ω for instance, expanding universes with rotation, becomes quasi-periodic on invariant 2-tori for a dense set of parameter values having positive measure.Upon parameter variation, the cycle disappears when crossing the ν-line and into the η-stratum, to become an unstable focus, and then further bifurcate as shown in Fig. <ref>. It is an intermediate feature that appears in the bifurcation diagram <ref> in the fragment from the χ to η strata and back.The analysis of the vorticity-induced bifurcations points to features not present in the NPR bifurcation diagram discussed above.In both cases, NPR and CV, there are as we have shownvarious kinds of bifurcations in the repulsive-gravity regions, corresponding to the left half-spaces in their bifurcating diagrams. These effectswill be discussed in more detail elsewhere. §.§ Transfigurations in the Oppenheimer-Snyder exampleWe conclude the discussion of the possible metamorphoses of singularities and black holes by giving some general remarks about the perturbations of the Oppenheimer-Snyder example. Unstable as well as isolated closed orbits also appear in the Oppenheimer-Snyder example as in the bifurcation diagrams <ref>, <ref>.To interpret our results of this problem, since the versal unfolding (<ref>) and Figs. <ref>, <ref> describe all stable perturbations possible for the OS equation (<ref>), we may generally set: μ_1→deviations from spherical symmetry,μ_2→rotation,where, when μ_1=μ_2=0 in (<ref>) we are back to the original Oppenheimer-Snyder equation (<ref>).Since the OS equation has two moduli coefficients, we list the possible bifurcations as follows.§.§.§ List of bifurcations, positive modular coefficient* A pair of pitchfork bifurcationsdominated by the u-variable on the centre manifold-reduced dynamics, taking the system along the following fragments (cf.the bifurcation diagram in Fig. <ref>, and Sections <ref>, <ref>):*1st to 2nd quadrant, supercriticalin direction right to left (μ_2>0)*4th to 3rd quadrant, subcritical in direction right to left (μ_2<0).* A supercritical Hopfbifurcationdominated by the v-variable(that is the y) taking the system in the μ_1<0 half-space from bottom to top. We note that the bifurcating orbit is stable on the horizontal axis in the Fig. <ref>. §.§.§ List of bifurcations, negative modular coefficient* A pair of pitchfork bifurcationsdominated by the u-variable on the centre manifold-reduced dynamics, taking the system along the following fragments (cf.the bifurcation diagram in Fig. 
<ref>, and Section <ref>):*1st to 2nd quadrant, supercriticalin direction right to left (μ_2>0)*4th to 3rd quadrant, subcritical in direction right to left (μ_2<0).* A subcritical Hopfbifurcation on the μ_1=μ_2 line taking the system in the μ_1>0 half-space. We note that the bifurcating orbit is stable on the horizontal axis in the Fig. <ref>.* Global bifurcations (not shown here) because of the presence of the saddle connections. §.§.§ Some remarksA basic aspect of the bifurcation diagrams is the existence of collapsing solutions for the perturbations. These are described by escaping orbits in the bifurcation diagram of the positive moduli case in Fig. <ref>. Since these can be found everywhere in that diagram, we conclude that gravitational collapse is possible for all stable perturbations of the OS equation as these are described by the versal unfolding (<ref>). (This is of course also a conclusion of the NPR-system bifurcation analysis of Section 5 as well!)In this way, the bifurcation diagram in Fig. <ref> contains only unstable solutions, and so resembles more to the general situation of the NPR system of Fig. <ref>. On the other hand, the existence of closed orbits in the Fig. <ref> points to stable solutions and for this reason this is closer to the vorticity case in Fig. <ref>. We see that both properties may be deduced in general terms already for the versal unfolding of the OS equation.The existence of a focusing state appears clearly everywhere in the positive moduli coefficient case, whereas the negative moduli case is more amenable to defocusing (closed orbit) solutions. In fact, as we have already discussed, in the latter case there is a varietyof global bifurcations (cf. the references).We shall provide a more detailed description of these bifurcations elsewhere.§ DISCUSSIONIn this paper we have provided an analysis of bifurcation theory effects for the problem of the formation of spacetime singularities as these appear in gravitational collapse and cosmological situations. We constructed the complete bifurcation diagrams of the evolution laws associated with these problems, namely, the `Raychaudhuri-related' convergence-shear and convergence-vorticity equations, as well as the differential equation that modelled the Oppenheimer-Snyder problem of `continued gravitational contraction', the first mathematical model of a black hole.An analysis of these diagrams leads to interesting new features of the overall dynamics of these laws, and we have discussed some of these features in detail in earlier Sections of this paper.A starting point of our analysis is how to solve the problem ofcontrollingthe `feedback loop' associated with the fundamental equations of thisproblem. In the standard approach, one first employs an energy condition and the positivity of the shear term to directly obtain the solution for the convergence ρ corresponding to some initial condition ρ_0, byintegrating the inequality resulting from the Raychaudhuri equation. Then one uses an equation of the form ẋ=aρ x, a const., (with x usually being the volume, or area, or the shear of the congruence), to deduce, or `control' the decay behaviour of x. In this case, using the behaviour of the `growth factor' ρ found previously, the solution x influences the `forcing term' ρ (t) x, which in turn influences linearly the solution x, thus creating a linear feedback loop. 
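As a schematic worked instance of this loop (our own illustration, with signs chosen for a contracting congruence): if the energy condition gives ρ̇≥ρ^2 with ρ_0>0, then ρ(t)≥ρ_0/(1-ρ_0 t), and a quantity obeying ẋ=aρ x with a<0 satisfies x(t)=x_0 exp(a∫_0^t ρ ds) ≤ x_0(1-ρ_0 t)^|a|, which is driven to zero as t→1/ρ_0; the growth of the factor ρ forces the decay of x, which is the linear feedback just described.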
This analysis leads directly to the focusing effect and the consequent predictions of spacetime singularities.A very basic issue associated with the `feedback loop problem' is how to separate the linear from the nonlinear aspects of thefeedback, that is how to distinguish the focusing (or adversarial case) from the possible (or suspected) defocusing (or, average case) behaviours (cf. <cit.> for more discussion on this fundamental problem). In this paper, guided by the pioneering studies that led to the singularity theorems and related results, we introduced the use of bifurcation theory as an efficient means to separate these two kinds of behaviour. We showed that the nonlinear feedback loop is naturally described by the normal forms of the evolution laws, and this approach provides a novel way to study the problem of spacetime singularities. As an example of this behaviour, we were able to show that a stable perturbation of the OS equation to non-spherical or rotational regimes must include focusing state solutions.A second, and perhaps even more basic, aspectof our approach is the problem of structural (in-)stability and genericity of the basic laws that govern the dynamics of feedback loop. It turns out that the basic equations that govern phenomena associated with spacetime singularities in gravitational collapse and cosmology, such as the three systems studied in this paper but also many others, are structurally unstable from the viewpoint of dynamical systems theory. This means that the behaviour of the solutions of nearby systems obtained as perturbations of the original ones (a notion that can be made precise) may have very different behaviour than that of the original law. This raises the question of what is the precise meaning of proving global stability of an exact solution of the original system with respect to some perturbations, if the system itself is structurally unstable. In other words, for systems with some kind of degeneracy,the stability of both the solutions and of the systems themselves must be studied in order to obtain a reliable picture, cf. e.g., <cit.>. For the three dynamical systems studied in this work, we have performed a complete analysis of this problem and found the versal unfoldings of each one of them. These extended systems are parametric families which contain all possible stable perturbations of the original equations but, unlike the latter, the versal families are themselves structurally stable.The dynamical analysis of the versal unfoldings reveals both thefocusing and new defocusing aspects of the three main systems. For the NPR equations, the vorticity-induced perturbations lead to defocusing solutions, as does the versal unfolding with negative moduli in the Oppenheimer-Snyder problem. A characteristic feature of the defocusing solutions is the `nucleation' of a unique stable limit cycle by transfer of stability (vorticity case) implying self-sustained oscillations of the perturbations, andalso various closed orbits as well as global bifurcations.Another aspect of the solutions is that although in the attractive-gravity regions (where μ_1>0) the focusing and defocusing solutions generally correspond to positive curvature solutions, in the repulsive-gravity region (where μ_1<0) solutions correspond to metrics with hyperbolic regions where the curvature is negative. 
This is evident, for example, in the regions o in the NPR diagram where de Sitter solutions form due to the saddle-node bifurcation on the ε_+ axis, and then these further bifurcate to give the saddle inside the parabola (recall our interpretation of the parameters μ_1, μ_2, and the constraint these satisfy on the parabola). In the saddle-type solutions there are focusing (unstable) orbits possibly leading to singularities just like in the standard singularity theorems in χ, but this time in the repulsive region.

It is interesting to observe the possibility of continuous transfiguration of any of the phase portraits in any of the regions in the four bifurcation diagrams, which is perhaps the most distinctive phenomenon of all solutions employed here. This aspect in turn is probably due to the tendency of the system to maintain the implied global structural stability of the versal families associated with the singularities present in the solutions. A fuller analysis of these effects will be given elsewhere.

§ ACKNOWLEDGMENTS

The author is especially grateful to Gary Gibbons for many useful discussions which have had a positive effect on the final manuscript. A Visiting Fellowship to Clare Hall, University of Cambridge, is gratefully acknowledged. The author further thanks Clare Hall for its warm hospitality and partial financial support. This research was funded by RUDN University, scientific project number FSSF-2023-0003.

§ REFERENCES

[pen65] R. Penrose, Gravitational Collapse and Space-Time Singularities, Phys. Rev. Lett. 14 (1965) 57-9.
[ha67] S. W. Hawking, The occurrence of singularities in cosmology III, Proc. Roy. Soc. Lond. A300 (1967) 187-201.
[pe68] R. Penrose, Structure of space-time, in: Battelle Rencontres, 1967 Lectures in Mathematics and Physics, C. M. De Witt and J. A. Wheeler (Benjamin, 1968), pp. 121-235.
[hp70] S. W. Hawking and R. Penrose, The Singularities of Gravitational Collapse and Cosmology, Proc. Roy. Soc. A 314 (1970) 529-548.
[ha71] S. W. Hawking, Gravitational Radiation from Colliding Black Holes, Phys. Rev. Lett. 26 (1971) 1344.
[pe72] R. Penrose, Techniques of Differential Topology in Relativity (SIAM, Philadelphia, 1972).
[ha73] S. W. Hawking, The Event Horizon, in: Black Holes, B. S. De Witt and C. M. DeWitt (Gordon and Breach, 1973).
[he] S. W. Hawking and G. F. R. Ellis, The Large Scale Structure of Space-Time (CUP, 1973).
[mtw] C. W. Misner, K. S. Thorne, and J. A. Wheeler, Gravitation (Freeman, 1973).
[oneill] B. O'Neill, Semi-Riemannian Geometry with Applications to Relativity (Academic Press, 1983).
[wald] R. M. Wald, General Relativity (University of Chicago Press, 1984).
[stew] J. Stewart, Advanced General Relativity (CUP, 1991).
[strau] N. Straumann, General Relativity with Applications to Astrophysics (Springer, 2010).
[on] B. O'Neill, The Geometry of Kerr Black Holes (Dover, 2014).
[os] J. R. Oppenheimer and H. Snyder, On Continued Gravitational Contraction, Phys. Rev. 56 (1939) 455.
[ll] L. D. Landau and E. M. Lifshitz, The Classical Theory of Fields, 4th Rev. Ed. (Pergamon Press, 1975).
[ray1] A. Raychaudhuri, Phys. Rev. 98 (1955) 1123.
[ko] A. Komar, Phys. Rev. 104 (1956) 544.
[ray2] A. Raychaudhuri, Phys. Rev. 106 (1957) 172.
[tao] T. Tao, Nonlinear Dispersive Equations: Local and Global Analysis (American Mathematical Society, 2006).
[ar83] V. I. Arnold, Geometrical Methods in the Theory of Ordinary Differential Equations (Springer, 1983).
[ar94] V. I. Arnold, Dynamical Systems V: Bifurcation Theory and Catastrophe Theory (Springer, 1994).
[ar72] V. I. Arnold, Lectures on bifurcations in versal families, Russ. Math. Surv. 27 (1972) 54.
[ar86] V. I. Arnold, Catastrophe Theory (Springer, 1986).
[gh83] J. Guckenheimer and P. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields (Springer, 1983).
[golu1] M. Golubitsky and D. G. Schaeffer, Stable Mappings and their Singularities (Springer, 1979).
[golu2] M. Golubitsky and D. G. Schaeffer, Singularities and Groups in Bifurcation Theory, Volume I (Springer, 1984).
[golu3] M. Golubitsky, I. Stewart, and D. G. Schaeffer, Singularities and Groups in Bifurcation Theory, Volume II (Springer, 1988).
[wig] S. Wiggins, Introduction to Applied Nonlinear Dynamical Systems and Chaos, 2nd Ed. (Springer, 2003).
[kuz] Yu. A. Kuznetsov, Elements of Applied Bifurcation Theory, Fourth Ed. (Springer, AMS 112, 2023).
[thom] R. Thom, Structural Stability and Morphogenesis (CRC Press, 2018).
[cot23] S. Cotsakis, Dispersive Friedmann universes and synchronization, Gen. Rel. Grav. 55 (2023) 61; arXiv:2208.07892.
[zol84] K. Zholondek, On the versality of a family of symmetric vector fields in the plane, Math. USSR Sbornik 48 (1984) 463.
[tak1] F. Takens, Singularities of vector fields, Publ. Math. IHES 43 (1974) 47–100.
[tak2] F. Takens, Forced oscillations and bifurcations, Comm. Math. Inst. Rijksuniv. Utrecht 3 (1979) 1–59.
[stro] S. H. Strogatz, Nonlinear Dynamics and Chaos (Perseus Books Publishing, 1994).
"authors": [
"Spiros Cotsakis"
],
"categories": [
"gr-qc",
"hep-th"
],
"primary_category": "gr-qc",
"published": "20231127164636",
"title": "Bifurcation diagrams for spacetime singularities and black holes"
} |
A deep learning approach for marine snow synthesis and removal
Fernando Galetto and Guang Deng
Nov 2023
===============================================================
Marine snow, the floating particles in underwater images, severely degrades the visibility and performance of human and machine vision systems. This paper proposes a novel method to reduce the marine snow interference using deep learning techniques.
We first synthesize realistic marine snow samples by training a Generative Adversarial Network (GAN) model and combine them with natural underwater images to create a paired dataset. We then train a U-Net model to perform marine snow removal as an image to image translation task. Our experiments show that the U-Net model can effectively remove both synthetic and natural marine snow with high accuracy, outperforming state-of-the-art methods such as the Median filter and its adaptive variant. We also demonstrate the robustness of our method by testing it on the MSRB dataset, which contains synthetic artifacts that our model has not seen during training. Our method is a practical and efficient solution for enhancing underwater images affected by marine snow.\end@twocolumnfalse ]§ INTRODUCTION The ability of machines to perceive the world as humans through imaging sensors has allowed researchers to create a massive number of tools to increase productivity, to improve performance and to solve important problems that couldn’t be solved in other way.Factors such as noise, blurriness and low lighting conditions are the main enemies of computer vision algorithms and they are very common in underwater applications.Thus, underwater image enhancement is an important and challenging research topic that has been actively studied in recent years <cit.>. Underwater images and videos suffer from low visibility primarily due to scattering and absorption <cit.>. Absorption attenuates the light as it travels throughwater, while scattering alters its direction <cit.>. The presence of organic and inorganic matter in water contributes to both scattering and absorption, reducing visibility by attenuating light energy and deviating its trajectory. As depth and distance increase, wavelength-dependent attenuation leads to a particular color cast in underwater images <cit.>.Several methods have been developed to enhance and restore underwater images and videos, some of them require special hardware <cit.> or multiple images <cit.> but single image methods are preferred due to their simplicity and adaptability to existing imaging systems.Single image methods mostly tackle problems associated with colour cast and haze-like low contrast effect. Model-based methods use a physical model to describe the degradation and formulate the restoration as an inverse problem <cit.>. Some models used the traditional image formation model <cit.> while others specifically developed for underwater scenarios <cit.>. Machine learning methods were also proposed to enhance underwater images <cit.>, the lack of ground truth images for underwater image enhancement made generative methods to stand up from the rest <cit.>. Alternative datasets were created trying to simulate the underwater environment <cit.> enabling researchers to train other types of models such as encoder-decoder models <cit.>, CNN models <cit.> and other multi-branch architectures <cit.> which performs very well under assumed conditions but are not robust for real-world applications. Floating particles, also known as marine snow, produce back-scattering causing a significant problem in real-life applications such as vessel hull cleaning and unmanned asset inspection. Despite recent advances in underwater image enhancement, only a limited number of proposed methods (that will be discussed in later sections) address it <cit.>. 
State-of-the-art results have not been achieved yet,mainly due to the complexity of marine snow artifacts and the lack of realism produced by the existing models. To tackle this problem, we introduce a novel method based on a generative model to synthesize marine snow and a CNN model for image to image translation to reduce marine snow. Key contributions of this paper are as follows.* GAN-based Marine Snow Synthesis: We present a Generative Adversarial Network (GAN) model capable of synthesizing samples of marine snow, replicating its complex characteristics. * Paired Dataset Creation: We construct a dataset for marine snow removal by linearly combining natural underwater images with randomly distributed synthetic marine snow samples. * CNN for marine snow removal: We propose a CNN architecture that effectively enhances underwater images by removing artifacts caused by marine snow. The paper is organized as follows. In Section 2, we provide an overview of the main characteristics of marine snow and review prior work related to marine snow removal. In Section 3, we present our method for synthesizing marine snow using the GAN model. The dataset and model for marine snow removal are described in Sections 4 and 5, respectively. Experimental results are presented in Section 6. Finally, we conclude our findings in Section 7.§ PREVIOUS WORK The back-scattering effect caused by floating particles, sediments, and bubbles is a widespread degradation problem that has been overlooked by most underwater image enhancement methods. This effect, significantly impacts image quality. Some studies have attempted to model marine snow by a simple Gaussian model <cit.>. Sato et al. <cit.> categorized marine snow artifacts into two types and developed corresponding models to synthesize it. Unlike the Gaussian model, the proposed 3D plots resemble elliptic conical frusta, providing a fresh perspective on marine snow representation. For the removal of marine snow in images, methods based on median filter (MF) <cit.> have been used. However, the effectiveness of these methods is limited by the ability of the MF to remove large artifacts. In video processing, Farhadifard <cit.> used background modeling to identify marine snow in static scenes, while Cyganek <cit.> used a tracking method combined with MF. Recently, neural network-based methods have been studied. Koziarski et al. <cit.> trained a fully convolutional 3D neural network using manually labeled data to locate marine snow and combined it with an adaptive median filter to remove the artifacts. These video-based approaches may not be suitable for videos where numerous moving objects are present. The approach proposed in <cit.> utilized three networks with the RESNET architectures and targeted fisheries videos. The method decomposed the input image into low and high-frequency components and applied separate networks for marine snow removal. However, its overall performance was not satisfactory. Guo et al.<cit.> treated the problem as an image-to-image translation problem. They created a dataset adding marine snow using Photoshop. Due to limited marine snow samples, the algorithm’s robustness was compromised. Jiang et al. proposed a different approach utilizing a GAN for denoising images affected by marine snow <cit.>. 
The authors created a dataset by adding marine snow effect to underwater images from the IMAGENET dataset but the results on real underwater images were subpar, showing blurriness and incomplete artifact removal.In summary, marine snow poses a significant challenge in underwater image processing. The existing approaches still require further improvement to achieve satisfactory results in real-life applications. A major limitation in the current studies is the scarcity of diverse marine snow samples, which hinders the performance of the proposed algorithms. Addressing this issue may lead to better marine snow removal techniques.§SYNTHESIZING MARINE SNOWThe appearance of marine snow can vary based on the scene's location and illumination, often leading to bright reflections when captured with a camera. Previous attempts at synthesizing marine snow using Gaussian functions or similar techniques lacked the realism required for training networks with robust performance. To overcome these limitations, we leverage the power of generative adversarial models, which have shown promise in learning and reproducing realistic samples. Our method begins with a dataset of natural underwater images, from which we extract and curate 2600 marine snow samples. Each sample is resized to a 32x32 patch size, and pixel values are scaled to a range from -1 to 1, optimizing suitability for training. Through this dataset, we train a generator model to produce fake samples of marine snow, while simultaneously training a discriminator model to distinguish between fake and real samples, resulting in an effective and visually convincing synthesis of marine snow. Figure <ref> shows 12 samples of marine snow produced by the generator after training for 10000 epochs.The GAN architecture is shown in Figure <ref>. The generator model is designed to produce realistic images based on a latent space representation. The model takes a 100-dimensional random noise vector z as input and transforms it into a 32x32 grayscale image. It consists of several layers, including a dense layer, batch normalization, leaky ReLU activation, and convolutional transpose layers. The model progressively upscales the spatial dimensions of the tensor while reducing the number of channels. Batch normalization and leaky ReLU activation are applied after each transposed convolutional layer to improve training stability. The final convolutional transpose layer outputs a 32x32 image. The activation function used in the last layer is the hyperbolic tangent. Overall, this model demonstrates the ability to generate diverse and realistic images of marine snow from random noise. The discriminator model, which serves as the adversarial component, takes as input a 32x32 image and aims to distinguish between real and generated images. It consists of convolutional layers, each followed by leaky ReLU activation to introduce non-linearity. Dropout layers with a rate of 0.3 are added to prevent overfitting. The model further flattens the output and connects to a dense layer with a single output unit, responsible for making the decision on whether the input image is real or fake. The discriminator's role is to provide feedback to the generator to produce more realistic images. The loss functions are part of the Wasserstein GAN (WGAN) formulation <cit.>, which provides better stability and convergence properties compared to the original objective function. 
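To make the architecture described above concrete, the following is a minimal PyTorch sketch of the generator and WGAN critic. The overall layout (100-dimensional noise input, transposed convolutions with batch normalization and leaky ReLU, a tanh output, and a critic with leaky ReLU, 0.3 dropout, and a single linear output) follows the text, but the specific channel widths, kernel sizes, and strides are assumptions made for illustration rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a 100-d latent vector to a 32x32 grayscale marine snow patch."""
    def __init__(self, z_dim: int = 100):
        super().__init__()
        self.fc = nn.Linear(z_dim, 256 * 4 * 4)  # project noise and reshape to 4x4 feature maps
        self.net = nn.Sequential(
            nn.BatchNorm2d(256), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),  # 4x4 -> 8x8
            nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),   # 8x8 -> 16x16
            nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),     # 16x16 -> 32x32
            nn.Tanh(),                                             # output scaled to [-1, 1]
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 256, 4, 4)
        return self.net(x)

class Discriminator(nn.Module):
    """WGAN critic: scores a 32x32 patch with a single unbounded output."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1),   # 32x32 -> 16x16
            nn.LeakyReLU(0.2), nn.Dropout(0.3),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), # 16x16 -> 8x8
            nn.LeakyReLU(0.2), nn.Dropout(0.3),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 1),                  # raw real/fake score (no sigmoid for WGAN)
        )

    def forward(self, x):
        return self.net(x)
```

Because the critic outputs an unbounded score rather than a probability, it plugs directly into the Wasserstein losses defined next.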
The loss functions for the discriminator (denoted as ℒ_d) and the generator (denoted as ℒ_g) are defined in Eq.<ref> and Eq.<ref>, respectively: ℒ_d = 1/N ∑_i=1^N ( D(G(z_i)) - D(x_i) ), ℒ_g = -1/N ∑_i=1^N D(G(z_i)), where D(G(z_i)) is the output of the discriminator for the generated (fake) image G(z_i), D(x_i) is the output of the discriminator for the real image x_i, and N is the batch size. We followed the suggestion in <cit.> and used RMSprop <cit.> as the optimization method with a learning rate of 5×10^-5 to stabilize the training process and mitigate some of the issues related to mode collapse and vanishing gradients.

§ DATASET CREATION To train our marine snow removal model, we created a dataset using natural underwater images from three existing datasets: MSRB <cit.>, USR-248 <cit.>, and USOD <cit.>. The original images, used as ground truth, are free of marine snow. Because some images in the existing datasets are of high resolution, we derive three distinct images from each of these high-resolution images. We achieve this by cropping top-left, bottom-right, and center patches, all sized at our desired resolution of 384x384 pixels. Furthermore, we incorporate another image into this set by resizing the original image to match our target resolution. For a clear visual representation of this procedure, we refer to Figure <ref>. The dataset of images with marine snow is produced by linearly adding synthetic marine snow to the ground truth image. Specifically, let I ∈ R^H×W be the ground truth and P_i ∈ R^m×m be a resized version of the ith synthetic marine snow sample. We add N patches P_i to the ground truth at random positions (x_i, y_i), resulting in a distorted image J: J = min( 1, I + ∑_i=1^N τ_i P_i ), where τ_i is an attenuation coefficient, and the min-operation enforces the condition J ∈ [0,1]. In our experiments, we set the number of samples as 0 < N ≤ 200, the attenuation coefficient as 0.5 < τ_i ≤ 1.5, and the patch size as 4 ≤ m ≤ 32. These are random numbers drawn from uniform distributions. Figure <ref>a shows an example of a natural image without marine snow artifacts and Figure <ref>b shows the result after placing 1000 samples of synthetic marine snow. After inspecting numerous images with marine snow, we observed that they usually contain a significant amount of noise. So, to add realism to the generated image, we further added 3 different types of noise: * Impulse noise: It is used to model single-pixel marine snow artifacts. This type of random noise manifests itself as isolated, randomly occurring bright pixels in the image. * Gaussian noise: It is used to simulate random variations or errors in images. The Gaussian noise used in this paper has a variance of σ^2 = 10 and mean μ = 0. * Poisson noise: This type of noise is particularly prevalent in low-light conditions and is characterized by a single parameter denoted as λ. Experimentally, we found that λ = 0.2 produces realistic results. Finally, we apply a data augmentation step by flipping each image horizontally to increase the number of images in the dataset and avoid overfitting. We were able to create a dataset of 18846 paired color images that can be used to train and test a deep learning model for marine snow removal. We used 12869 images for training, 3217 for validation and 2760 for testing.

§ MARINE SNOW REMOVAL The dataset created in Section <ref> is used to train a deep learning model to remove the artifacts in images affected by marine snow. We employ a U-Net model architecture designed for image enhancement tasks.
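Before detailing the U-Net, the dataset-synthesis procedure above can be summarized in a short NumPy sketch that implements J = min(1, I + Σ τ_i P_i) with the stated parameter ranges. The patch count, attenuation, and size ranges follow the text; the impulse-noise density and the exact scaling of the Gaussian and Poisson parameters are assumptions, and the function and variable names are illustrative.

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)

def add_marine_snow(clean, patches):
    """Composite synthetic snow onto `clean` (HxWx3 float image in [0, 1]).

    `patches` is a list of grayscale float snow samples in [0, 1].
    """
    img = clean.copy()
    h, w = img.shape[:2]
    n = int(rng.integers(1, 201))                    # 0 < N <= 200 patches
    for _ in range(n):
        m = int(rng.integers(4, 33))                 # patch size 4 <= m <= 32
        tau = rng.uniform(0.5, 1.5)                  # attenuation coefficient
        p = cv2.resize(patches[int(rng.integers(len(patches)))], (m, m))
        y, x = int(rng.integers(0, h - m)), int(rng.integers(0, w - m))
        img[y:y + m, x:x + m] += tau * p[..., None]  # add grayscale patch to all channels
    img = np.clip(img, 0.0, 1.0)                     # min-operation: enforce J in [0, 1]
    # additional degradations described in the text (parameter scalings are assumptions)
    img = img + rng.normal(0.0, 10 / 255, img.shape)            # Gaussian noise
    salt = rng.random(img.shape[:2]) < 0.001                    # sparse impulse (single-pixel snow)
    img[salt] = 1.0
    scale = 255 * 0.2                                            # Poisson noise, lambda-scaled
    img = rng.poisson(np.clip(img, 0, 1) * scale) / scale
    return np.clip(img, 0.0, 1.0)
```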
The U-Net architecture is shown in Figure <ref>. The model follows an encoder-decoder structure with skip connections. The encoder path captures high-level features through multiple convolutional layers, max-pooling operations, and down sampling the spatial dimensions. This process helps the model learn significant image representations. The decoder path then uses transpose convolutions to up sample the feature maps and reconstruct the enhanced image with improved spatial details. The skip connections connect corresponding encoder and decoder layers, allowing the model to combine low-level and high-level features effectively. The final layer uses a 1x1 convolution with a sigmoid activation function to produce the enhanced image, preserving the color and spatial information. The U-Net architecture and its variants have demonstrated its effectiveness in improving the visual quality of images in various applications <cit.>.The model uses the mean squared error (MSE) and the perceptual loss functions. The perceptual loss leverages a pre-trained VGG19 neural network to extract high-level features from the true image y and predicted enhanced image ŷ. By comparing these high-level features, the perceptual loss quantifies the perceptual similarity between the enhanced images and the ground truth. This approach aligns with the human visual perception, ensuring that the enhanced images preserve important visual characteristics and structural details. The VGG19-based perceptual loss (denoted as ℒ_p) is calculated as the mean squared error (MSE) between the VGG feature maps: ℒ_p= 1/N∑_i=1^N( VGG(y)_i - VGG(ŷ)_i )^2where N represents the total number of elements in the VGG feature maps, and VGG(y)_i and VGG(ŷ)_i represent the ith element in the VGG feature maps of the ground truth and predicted images, respectively.The perceptual loss encourages the U-Net model to generate predicted images that have similar high-level feature representations as the ground truth images, thereby capturing perceptual similarity between the two images rather than focusing solely on pixel-wise differences.Additionally, the MSE loss (denoted as ℒ_MSE) is employed as a pixel-wise difference to capture the fine-grained differences between the true and the predicted enhanced images.To compute the overall MSE loss for the entire image, we use the squared differences for all pixels: ℒ_MSE= 1/N∑_i=1^N( y_i - ŷ_i )^2where N represents the total number of pixels in the image.By combining both perceptual loss and MSE loss in the training process, the U-Net model is optimized to produce enhanced images that not only closely match the ground truth in terms of perceptual quality but also exhibit precise pixel-level similarities.The combined loss is calculated as follows:ℒ_U-Net = ℒ_MSE + γℒ_pwhere γ is a hyper-parameter that determines the relative importance of the perceptual loss compared to the MSE loss. By setting γ=1, we aim for a balanced trade-off between the pixel-wise accuracy and the preservation of high-level features in the generated images. Experimental results show that this setting leads to an effective and visually appealing marine snow removal.The model is trained using the Adam optimization algorithm configured with the following hyper-parameters: Learning rate (α): 0.001, First moment decay rate (β_1): 0.9, Second moment decay rate (β_2): 0.999 and Epsilon (ϵ): 1 × 10^-7. Loss values per epochs are shown in Figure <ref>. 
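For concreteness, the combined objective ℒ_U-Net = ℒ_MSE + γ ℒ_p can be sketched as follows, using torchvision's pre-trained VGG19 as the feature extractor. Which VGG19 layer the features are taken from, and the omission of ImageNet input normalization, are simplifying assumptions since the text does not specify them.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

# Frozen VGG19 feature extractor, truncated after an intermediate conv block (layer index assumed).
vgg = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def unet_loss(y_pred: torch.Tensor, y_true: torch.Tensor, gamma: float = 1.0) -> torch.Tensor:
    """Pixel-wise MSE plus gamma times the VGG19 perceptual loss."""
    mse = F.mse_loss(y_pred, y_true)                    # fine-grained pixel differences
    perceptual = F.mse_loss(vgg(y_pred), vgg(y_true))   # high-level feature differences
    return mse + gamma * perceptual

# Optimizer with the hyper-parameters stated above (assuming a `unet` module is defined):
# optimizer = torch.optim.Adam(unet.parameters(), lr=1e-3, betas=(0.9, 0.999), eps=1e-7)
```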
We can see that the training loss is 0.0020 and the validation loss is 0.0023 after 20 epochs.§ RESULTSIn this section, we present the results of applying the trained U-Net to effectively remove marine snow from underwater images. We first evaluate the performance of our method by using the dataset described in Section <ref>, which encompasses images with synthetic marine snow. We then apply the U-Net to underwater images with real marine snow. Additionally, we demonstrated the utility of our method as a pre-processing step for enhancing underwater images (subsection 6.3). Finally, we evaluated our model using the benchmark proposed by Sato et al. <cit.>. §.§ Removing synthetic marine snow To assess the performance of our method, we compare it with the median filter, which effectively reduces impulsive noise while simultaneously preserving the sharpness of image edges. In this paper, we use kernel sizes of 3x3 and 5x5 pixels. Aiming for a more comprehensive comparison, we also include: BM3D (Block-Matching 3D) <cit.> and DnCNN (Denoising Convolutional Neural Network) <cit.> which are two different state-of-the-art image denoising techniques.BM3D is a non-local image denoising algorithm that is particularly effective at removing noise from images while preserving important image structures and details. DnCNN is a deep learning-based image denoising technique that employs convolutional neural networks to learn the mapping from noisy images to clean images.Results are summarized in Table <ref>, which presents the average values of MSE, PSNR, and SSIM for each method. Remarkably, the proposed U-Net algorithm outperforms the Median filter across all metrics, indicating its superiority. The median filter excels in removing small artifacts with high intensities but underperforms in removing large artifacts. Larger kernel sizes could overcome this limitation at the cost of poor performance in edge preservation. The trained U-Net removes both small and large size artifacts while still preserving small details and sharp edges. BM3D and DnCNN, recognized for their efficacy in combating general noise types, prove less suitable for the unique challenges posed by marine snow. DnCNN, while effective in preserving small image features, fails in the removal ofmedium and large artifacts. BM3D, shows some success in mitigating marine snow except in cases of high-intensity or larger artifacts but it sometimes eliminates fine details. These shortcomings make both BM3D and DnCNN less effective in the context of marine snow removal.To visualize the comparison effectively, Figure <ref> demonstrates the visual output of both methods to remove synthetic marine snow. The graphical results substantiate the metrics, revealing that the proposed U-Net better preserves contrast and sharpness while successfully removing synthetic marine snow.§.§ Removing natural marine snow Figure <ref> illustrates a test aimed at showcasing the performance of the proposed U-Net in eliminating natural marine snow artifacts from real underwater images. The image was chosen because it contains a large amount of marine snow of different sizes and intensities. We can see that while the U-Net’s performance is not as good as that in removing synthetic marine snow, it still manages to significantly reduce the presence of natural marine snow. An important aspect to note is that the U-Net achieves this without sacrificing image details or textures through blurring. 
The visual contrast between the original and U-Net processed images is accentuated using colored rectangles (green and orange) to highlight the improvements. We also compare results from the U-Net with those obtained using a median filter with two distinct kernel sizes, BM3D, and the DnCNN denoising algorithm. The median filter, with both kernel sizes, effectively eliminates the presence of bright, high-intensity spots created by small and medium-sized objects while maintaining key edges and structures. However, it comes at the cost of losing fine details and textures, especially when using a larger kernel size. The U-Net closely matches the artifact-reduction capabilities of a 5x5 median filter but notably excels in preserving intricate image details, positioning it as a superior choice for marine snow removal. DnCNN also preserves intricate image details but leaves most marine snow artifacts untouched. On the other hand, BM3D produces an evident reduction of artifacts in the image and effectively preserves details and edges. However, its performance is surpassed by the proposed U-Net, especially when removing large and bright artifacts. The U-Net removes a significant amount of marine snow artifacts from the original image, outperforming BM3D while retaining fine details and edges in the image. §.§ Underwater image enhancement There are a large number of methods for the enhancement and restoration of underwater images. Color cast and haze can be successfully removed. However, marine snow is not always considered by these methods, which not only fail to remove the artifacts but also fail to enhance the image when marine snow is present. In Figure <ref>, we demonstrate the performance of a state-of-the-art underwater image enhancement algorithm proposed by Ancuti et al. <cit.>. As shown in Figure <ref>b, the enhancement algorithm reduces the color cast and improves the sharpness of the image. However, the sharpening is also applied to the marine snow artifacts, worsening their impact on the image quality. The marine snow artifacts have a higher intensity, making them more noticeable. To obtain the improved result shown in Figure <ref>c, we use the proposed U-Net to pre-process the input image before applying Ancuti's algorithm. As can be seen, the algorithm still removes the color cast and improves the sharpness of the image, but the effect of marine snow is now notably reduced. A quantitative comparison is shown in Table <ref>. We employ two well-known non-reference metrics widely used to evaluate the quality of underwater images, UIQM <cit.> and UCIQE <cit.>. UIQM is a comprehensive metric that measures sharpness, contrast, and chromaticity to evaluate the quality of underwater images, while UCIQE has been designed to emulate human quality perception. The metrics demonstrate that using the proposed U-Net as a pre-processing step with Ancuti's algorithm produces a higher quality image, improving the UIQM score by almost 30% and preserving a similar UCIQE score. §.§ Comparison with MSRB We conduct a performance evaluation of the proposed U-Net using the benchmark framework introduced by Sato et al. in <cit.>. The primary objective of the benchmark is to assess the efficacy of the proposed U-Net in removing the presence of marine snow in each image from the MSRB (Marine Snow Removal Benchmark) dataset, followed by quantifying the quality of the denoised images using two essential metrics: PSNR and SSIM.
For the assessment, undistorted images are utilized as reference.The MSRB dataset incorporates synthetic marine snow. Diverging from conventional Gaussian models, the authors of this benchmark introduced a novel approach by representing the marine snow as 3D plots reminiscent of elliptic conical frustums. The benchmark comprises two distinct categories: Task 1 and Task 2. Task 1 involves images containing relatively smaller instances of synthetic marine snow, offering a challenging but manageable test. In contrast, Task 2 escalates the difficulty level, featuring images with marine snow samples of up to 32x32 pixels. A representative selection of images from the dataset, alongside their corresponding denoised outcomes, is presented in Figure <ref>.The top row shows an example image that belongs to the Task 1 test set, while the bottom row images belong to the Task 2 test set. As can be seen, our proposed U-Net successfully removes the synthetic marine snow produced by <cit.> in both Task 1 and Task 2. Table <ref> presents the average PSNR and SSIM values obtained from our benchmark evaluation for both tasks. The proposed U-Net demonstrates strong performance when compared to the MSRB model proposed in <cit.>, and it outperforms both the median filter and its adaptive variants in this task.It’s worth highlighting a crucial point: the MSRB model has been trained with synthetic data generated using the same methodology as the testing set. Hence, it is expected to excel in the task of removing such artifacts. On the other hand, our method performs well on this set, even though it was not trained on identical data. This showcases two significant findings. Firstly, our GAN model successfully generates diverse and realistic marine snow samples. Secondly, our proposed U-Net model demonstrates its capability to effectively identify and remove artifacts that are modeled as 3D plots reminiscent of elliptic conical frustums. These findings demonstrate the robustness and versatility of our approach.We remark that it would be interesting to evaluate the performance of the MSRB model on other dataset such as the one that is created in this work. The result would then be used to compare the robustness of the MSRB model with the proposed U-Net mode.However, we have not been able to run the MSRB model on our dataset since it is not publicly available.§ CONCLUSIONIn this paper, we proposed a novel approach to tackle the challenge of reducing the marine snow interference in underwater imagery. Our method involved the development of a WGAN model for the generation of realistic synthetic marine snow samples. These synthetic samples were then seamlessly integrated into real underwater images of diverse scene, forming a comprehensive dataset for marine snow removal. We trained a U-Net model on this dataset, using both Mean Squared Error (MSE) loss and perceptual loss. We showed that trained U-Net can remove synthetic marine snow to a high degree of accuracy. We conducted tests using the marine snow removal benchmark proposed by Sato et al. <cit.>. 
Despite not specifically training proposed U-Net model on their synthetic marine snow samples, results from the proposed U-Net demonstrated commendable performance, highlighting the potential of our GAN model in generating realistic synthetic marine snow compared to existing Gaussian models and the MSRB dataset.A limitation of the proposed approach, which is in general associated with any data-driven approach, is that the performance of the resulting neural network is dependent on the training data to some extent. Further research could involve enriching WGAN model with a larger number of real marine snow samples to create an even more realistic dataset. This enhanced dataset, when used for retraining the U-Net, holds promise for improving the performance and adaptability to the nuances of marine snow removal in natural underwater environments.§ DECLARATIONS §.§.§ Conflict of interestThe authors declare that they have no conflict of interest.§.§.§ Data availability The datasets and models are publicly available in Github: https://github.com/fergaletto/MSSR/http://github.com/fergaletto/MSSR/.10jaffe2014underwater J. S. Jaffe, “Underwater optical imaging: the past, the present, and the prospects,” IEEE J. Ocean Eng., vol. 40, no. 3, pp. 683–700, 2014.sheinin2016next M. Sheinin and Y. Y. Schechner, “The next best underwater view,” in Proc. IEEE CVPR, pp. 3764–3773, 2016.schettini2010underwater R. Schettini and S. Corchs, “Underwater image processing: state of the art of restoration and image enhancement methods,” EURASIP Journal on Advances in Signal Processing, vol. 2010, pp. 1–14, 2010.ancuti2017color C. O. Ancuti, C. Ancuti, C. De Vleeschouwer, and P. Bekaert, “Color balance and fusion for underwater image enhancement,” IEEE Trans. Image Process., vol. 27, no. 1, pp. 379–393, 2017.mcglamery1980computer B. McGlamery, “A computer model for underwater camera systems,” in Ocean Optics VI, vol. 208, pp. 221–231, International Society for Optics and Photonics, 1980.blasinski2016three H. Blasinski and J. Farrell, “A three parameter underwater image formation model,” Electronic Imaging, vol. 2016, no. 18, pp. 1–8, 2016.akkaynak2017space D. Akkaynak, T. Treibitz, T. Shlesinger, Y. Loya, R. Tamir, and D. Iluz, “What is the space of attenuation coefficients in underwater computer vision?,” in Proc. IEEE CVPR, pp. 4931–4940, 2017.Akkaynak_2018_CVPR D. Akkaynak and T. Treibitz, “A revised underwater image formation model,” in Proc. IEEE CVPR, June 2018.murez2015photometric Z. Murez, T. Treibitz, R. Ramamoorthi, and D. Kriegman, “Photometric stereo in a scattering medium,” in Proc. IEEE ICCV, pp. 3415–3423, 2015.treibitz2012turbid T. Treibitz and Y. Y. Schechner, “Turbid scene enhancement using multi-directional illumination fusion,” IEEE Trans. Image Process., vol. 21, no. 11, pp. 4662–4667, 2012.huang2016underwater B. Huang, T. Liu, H. Hu, J. Han, and M. Yu, “Underwater image recovery considering polarization effects of objects,” Optics express, vol. 24, no. 9, pp. 9826–9838, 2016.hu2018underwater H. Hu, L. Zhao, X. Li, H. Wang, and T. Liu, “Underwater image recovery under the nonuniform optical field based on polarimetric imaging,” IEEE Photonics Journal, vol. 10, no. 1, pp. 1–9, 2018.liu2019polarization F. Liu, Y. Wei, P. Han, K. Yang, L. Bai, and X. Shao, “Polarization-based exploration for clear underwater vision in natural illumination,” Optics express, vol. 27, no. 3, pp. 3629–3641, 2019.roser2014simultaneous M. Roser, M. Dunbabin, and A. 
Geiger, “Simultaneous underwater visibility assessment, enhancement and improved stereo,” in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), pp. 3840–3847, IEEE, 2014.wang2011research H. Wang, H. Sun, J. Shen, and Z. Chen, “A research on stereo matching algorithm for underwater image,” in 2011 4th International Congress on Image and Signal Processing, vol. 2, pp. 850–854, IEEE, 2011.zhang2014underwater S. Zhang, J. Zhang, S. Fang, and Y. Cao, “Underwater stereo image enhancement using a new physical model,” in Proc. Int. Conf. Image Process., ICIP, pp. 5422–5426, IEEE, 2014.carlevaris2010initial N. Carlevaris-Bianco, A. Mohan, and R. M. Eustice, “Initial results in underwater single image dehazing,” in Proc. IEEE Oceans, pp. 1–8, 2010.drews2013transmission P. Drews, E. Nascimento, F. Moraes, S. Botelho, and M. Campos, “Transmission estimation in underwater single images,” in Proc. IEEE ICCV, pp. 825–830, 2013.galdran2015automatic A. Galdran, D. Pardo, A. Picón, and A. Alvarez-Gila, “Automatic red-channel underwater image restoration,” J. Vis. Commun. Image Represent., vol. 26, pp. 132–145, 2015.peng2017underwater Y.-T. Peng and P. C. Cosman, “Underwater image restoration based on image blurriness and light absorption,” IEEE Trans. Image Process., vol. 26, no. 4, pp. 1579–1594, 2017.akkaynak2019sea D. Akkaynak and T. Treibitz, “Sea-thru: A method for removing water from underwater images,” in Proc. IEEE CVPR, pp. 1682–1691, 2019.narasimhan2002vision S. G. Narasimhan and S. K. Nayar, “Vision and the atmosphere,” International journal of computer vision, vol. 48, no. 3, pp. 233–254, 2002.anwar2020diving S. Anwar and C. Li, “Diving deeper into underwater image enhancement: A survey,” Signal Process., Image Commun., vol. 89, p. 115978, 2020.fabbri2018enhancingUGAN C. Fabbri, M. J. Islam, and J. Sattar, “Enhancing underwater imagery using generative adversarial networks,” in Proc. IEEE Int. Conf. Robot. Autom. (ICRA)., pp. 7159–7165, 2018.guo2019underwaterDenseGAN Y. Guo, H. Li, and P. Zhuang, “Underwater image enhancement using a multiscale dense generative adversarial network,” IEEE J. Ocean Eng., vol. 45, no. 3, pp. 862–870, 2019.wang2019uwgan N. Wang, Y. Zhou, F. Han, H. Zhu, and Y. Zheng, “Uwgan: underwater gan for real-world underwater color restoration and dehazing,” arXiv preprint arXiv:1912.10269, 2019.lu2019multiCycleGAN J. Lu, N. Li, S. Zhang, Z. Yu, H. Zheng, and B. Zheng, “Multi-scale adversarial network for underwater image restoration,” Optics & Laser Technology, vol. 110, pp. 105–113, 2019.ye2018underwaterUIEsGAN X. Ye, H. Xu, X. Ji, and R. Xu, “Underwater image enhancement using stacked generative adversarial networks,” in Pacific Rim Conference on Multimedia, pp. 514–524, Springer, 2018.zhang2023underwater J. Zhang, D. Pan, K. Zhang, J. Jin, Y. Ma, and M. Chen, “Underwater single-image restoration based on modified generative adversarial net,” Signal, Image and Video Process., vol. 17, no. 4, pp. 1153–1160, 2023.sun2018deepP2P X. Sun, L. Liu, Q. Li, J. Dong, E. Lima, and R. Yin, “Deep pixel-to-pixel network for underwater image enhancement and restoration,” IET Image Process., vol. 13, no. 3, pp. 469–474, 2018.uplavikar2019all P. M. Uplavikar, Z. Wu, and Z. Wang, “All-in-one underwater image enhancement using domain-adversarial learning.,” in Proc. IEEE CVPR, pp. 1–8, 2019.shin2016estimation Y.-S. Shin, Y. Cho, G. Pandey, and A. Kim, “Estimation of ambient light and transmission map with common convolutional architecture,” in Proc. IEEE Oceans, pp. 1–7, 2016.anwar2018deep S. 
Anwar, C. Li, and F. Porikli, “Deep underwater image enhancement,” arXiv preprint arXiv:1807.03528, 2018.li2019underwater C. Li, C. Guo, W. Ren, R. Cong, J. Hou, S. Kwong, and D. Tao, “An underwater image enhancement benchmark dataset and beyond,” IEEE Trans. Image Process., vol. 29, pp. 4376–4389, 2019.wang2017deepUENET Y. Wang, J. Zhang, Y. Cao, and Z. Wang, “A deep cnn method for underwater image enhancement,” in Proc. Int. Conf. Image Process., ICIP, pp. 1382–1386, IEEE, 2017.gangisetty2022underwater S. Gangisetty and R. R. Rai, “Underwater image restoration using deep encoder–decoder network with symmetric skip connections,” Signal, Image and Video Process., pp. 1–9, 2022.boffety2012phenomenological M. Boffety and F. Galland, “Phenomenological marine snow model for optical underwater image simulation: applications to color restoration,” in Proc. IEEE Oceans, pp. 1–6, 2012.boffety2012color M. Boffety, F. Galland, and A.-G. Allais, “Color image simulation for underwater optics,” Applied optics, vol. 51, no. 23, pp. 5633–5642, 2012.sato2021marine Y. Sato, T. Ueda, and Y. Tanaka, “Marine snow removal benchmarking dataset,” arXiv preprint arXiv:2103.14249, 2021.banerjee2014elimination S. Banerjee, G. Sanyal, S. Ghosh, R. Ray, and S. N. Shome, “Elimination of marine snow effect from underwater image-an adaptive probabilistic approach,” in 2014 IEEE Students' Conference on Electrical, Electronics and Computer Science, pp. 1–4, IEEE, 2014.farhadifard2017single F. Farhadifard, M. Radolko, and U. F. von Lukas, “Single image marine snow removal based on a supervised median filtering scheme.,” in VISIGRAPP (4: VISAPP), pp. 280–287, 2017.farhadifard2017marine F. Farhadifard, M. Radolko, and U. Freiherr von Lukas, “Marine snow detection and removal: underwater image restoration using background modeling,” 2017.cyganek2018real B. Cyganek and K. Gongola, “Real-time marine snow noise removal from underwater video sequences,” Journal of Electronic Imaging, vol. 27, no. 4, pp. 043002–043002, 2018.wang2021underwater Y. Wang, X. Yu, D. An, and Y. Wei, “Underwater image enhancement and marine snow removal for fishery based on integrated dual-channel neural network,” Computers and Electronics in Agriculture, vol. 186, p. 106182, 2021.guo2022marine D. Guo, Y. Huang, T. Han, H. Zheng, Z. Gu, and B. Zheng, “Marine snow removal,” in Proc. IEEE Oceans, pp. 1–7, 2022.jiang2020novel Q. Jiang, Y. Chen, G. Wang, and T. Ji, “A novel deep neural network for noise removal from underwater image,” Signal Process., Image Commun., vol. 87, p. 115921, 2020.koziarski2019marine M. Koziarski and B. Cyganek, “Marine snow removal using a fully convolutional 3d neural network combined with an adaptive median filter,” in Proc. ICPR, pp. 16–25, Springer, 2018.arjovsky2017wasserstein M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein generative adversarial networks,” in International conference on machine learning, pp. 214–223, PMLR, 2017.tieleman2012lecture T. Tieleman and G. Hinton, “Lecture 6.5-rmsprop, coursera: Neural networks for machine learning,” University of Toronto, Technical Report, 2012.islam2020underwater M. J. Islam, S. S. Enan, P. Luo, and J. Sattar, “Underwater image super-resolution using deep residual multipliers,” in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), pp. 900–906, 2020.islam2022svam M. J. Islam, R. Wang, and J. Sattar, “Svam: Saliency-guided visual attention modeling by autonomous underwater robot,” in Proceedings of Robotics: Science and Systems, 2022.bm3d K. Dabov, A. Foi, V. Katkovnik, and K. 
Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Trans. image process., vol. 16, no. 8, pp. 2080–2095, 2007.zhang2017beyond K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising,” IEEE Trans. Image Process., vol. 26, no. 7, pp. 3142–3155, 2017.panetta2015human K. Panetta, C. Gao, and S. Agaian, “Human-visual-system-inspired underwater image quality measures,” IEEE J. Ocean Eng., vol. 41, no. 3, pp. 541–551, 2015.yang2015underwater M. Yang and A. Sowmya, “An underwater color image quality evaluation metric,” IEEE Trans. Image Process., vol. 24, no. 12, pp. 6062–6071, 2015. § ADDITIONAL RESULTS | http://arxiv.org/abs/2311.15584v1 | {
"authors": [
"Fernando Galetto",
"Guang Deng"
],
"categories": [
"eess.IV",
"cs.CV",
"cs.LG"
],
"primary_category": "eess.IV",
"published": "20231127071941",
"title": "A deep learning approach for marine snow synthesis and removal"
} |
ET3D: Efficient Text-to-3D Generation via Multi-View Distillation
Yiming Chen^1,2, Zhiqi Li^1,3, Peidong Liu^1,†
^1Westlake University ^2Tongji University ^3Zhejiang University
{chenyiming, lizhiqi49, liupeidong}@westlake.edu.cn
January 14, 2024
=====================================================================================================================================================================================

Figure: ET3D specializes in the efficient generation of 3D objects from text input, offering capabilities such as (a) producing multiview-consistent 3D objects conditioned on textual input ("a stone bust of tiger"), (b) generating diverse 3D objects with identical text and distinct latent inputs ("a pig, graffiti colors"), (c) enabling style control in the output through text ("... photorealistic", "... robot", "... cartoon", "... sculpture"), and (d) facilitating smooth interpolations between prompts ("... disney style" → "... robot").

^† Corresponding author. Recent breakthroughs in text-to-image generation have shown encouraging results via large generative models. Due to the scarcity of 3D assets, it is hard to transfer the success of text-to-image generation to that of text-to-3D generation. Existing text-to-3D generation methods usually adopt the paradigm of DreamFusion, which conducts per-asset optimization by distilling a pretrained text-to-image diffusion model. The generation speed usually ranges from several minutes to tens of minutes per 3D asset, which degrades the user experience and also imposes a burden on the service providers due to the high computational budget. In this work, we present an efficient text-to-3D generation method, which requires only around 8 ms to generate a 3D asset given the text prompt on a consumer graphics card. The main insight is that we exploit the images generated by a large pre-trained text-to-image diffusion model to supervise the training of a text-conditioned 3D generative adversarial network. Once the network is trained, we are able to efficiently generate a 3D asset via a single forward pass. Our method requires no 3D training data and provides an alternative approach for efficient text-to-3D generation by distilling pre-trained image diffusion models.

§ INTRODUCTION Considerable advancements have been achieved in the realm of 2D image generation recently. The generation of high-fidelity images through input text prompts has become a straightforward process. However, the translation of this success from text-to-image generation to the text-to-3D domain faces challenges due to the limited availability of 3D training data. To circumvent the need for training an extensive text-to-3D generative model from scratch, given the scarcity of 3D data, recent methods have capitalized on the favorable characteristics of diffusion models and differentiable 3D representations. These methods, rooted in score distillation sampling (SDS) optimization, endeavor to extract 3D knowledge from a pre-trained, large text-to-image generative model, yielding impressive results. One notable example of such work is DreamFusion, which introduces a novel paradigm for 3D asset generation. In light of the 2D-to-3D distillation approach, there has been a rapid evolution of techniques in the past year. Numerous studies have emerged, aiming to enhance the quality of generation through the implementation of multiple optimization stages.
Although those methods are able to deliver impressive quality of the generated 3D objects, they usually require hours to finish the optimization process, which would degrade the user experience and also impose a burden to the service providers due to the requirement of more computational resources. To tackle the efficiency issue of existing text-to-3D generation methods, Lorraine recently proposed ATT3D <cit.>. The main insight is that they design a feed-forward mapping network, which maps the input text prompt to the parameters of a neural radiance field (NeRF). They can then render multi-view images from NeRF, and train the mapping network by SDS loss computed via a pre-trained 2D diffusion model. Once the network is trained, they are able to achieve efficient text-to-3D generation via a simple feed-forward pass. Due to characteristic of the used SDS loss, their method suffers from a lack of diversity and shared limitations with prior SDS-based works <cit.>. Another inspiring work is StyleAvatar3D <cit.>, they exploit a pre-trained ControlNet <cit.> to generate multi-view images given a prior 3D head model and use those images to re-train EG3D <cit.> for 3D head generation. Since they require an existing 3D model for multi-view image generation, it is difficult for them to scale to general text-to-3D generation.Inspired by the recent development of large text-to-multi-view image generative models <cit.> and StyleAvatar3D <cit.>, we propose to train a text-to-3D generative model via multi-view distillation. The main insight is to exploit a pre-trained large image generative model as a teacher and distill multi-view knowledge to supervise the training of our text-to-3D model, as a student network. In particular, we employ the pre-trained teacher network (MVDream <cit.>) to generate multi-view images given a text prompt. We then train a text-conditioned generative adversarial network to generate a tri-plane represented 3D object, such that its rendered multi-view images follow the same distribution as that of the pre-trained text-to-multi-view model. Different from StyleAvatar3D <cit.>, our method does not require any prior 3D model and can scale to general text-to-3D generation task.Once our network is trained, we are able to generate a 3D object given a text prompt in only 8 ms on an NVIDIA RTX 4090 graphic card. It significantly accelerates the generation speed and reduces the computational expenses, to further democratize 3D content creation. In summary, our contributions are as follows: 0em * We propose a simple yet effective text conditioned 3D generative adversarial network; * Our network can be trained by distilling multi-view knowledge from a pre-trained large text-to-multiview image generative model, without requiring SDS loss and any 3D dataset; * Once our network is trained, it can generate a 3D asset given a text prompt, in only 8 ms on a consumer-grade graphic card. It significantly reduces the computational budget and provide the user with real-time experience; * It demonstrates the possibility to train efficient general text-to-3D generative model by relying on pre-trained large text-to-multi-view image diffusion model; * We would like to draw the attention of the community, that it would be a worthwhile direction to explore for efficient text-to-3D content generation, by exploiting pre-trained text-to-multi-view foundation models. § RELATED WORKWe review prior methods which are the most related to ours. 
We classify them into three categories: unconditional 3D generation, text conditioned 3D generation and 3D aware image synthesis.3D generative models. Unconditional 3D generation methods typically utilize existing 3D datasets to train generative models that employ various 3D representations. These representations commonly include volumetricrepresentation<cit.>, triangular mesh <cit.>, point cloud <cit.>, and the more recent implicit neural representation <cit.>.In the realm of 3D data, researchers have explored various generative modeling techniques that have demonstrated success in 2D image synthesis. These techniques encompass a range of methods, such as variational auto-encoders <cit.>, generative adversarial networks <cit.>, flow-based methods<cit.>, and the increasingly popular diffusion-based method <cit.>. However, unlike image generative modeling, which benefits from a large abundance of training images, 3D generative methods often face a scarcity of sufficient 3D assets for training purposes. Typically, they are confined to category-specific datasets, such as shapeNet <cit.>. Although there has been a recent release of a million-scale 3D asset dataset by Objaverse <cit.>, its size still pales in comparison to the vast amounts of 2D training data <cit.> employed by modern generative models for image synthesis.The limited availability of extensive training data poses a challenge for these 3D generative methods, as they struggle to generate arbitrary types of objects that can meet the diverse requirements of end consumers. In contrast to these methods that rely on copious amounts of 3D data, we propose an alternative approach that leverages a pre-trained large text-to-multi-view image generative model. By distilling multi-view knowledge, our proposed method aims to facilitate more generalized text-to-3D generation capabilities.Text conditioned 3D generation. Owing to the scarcity of 3D data, researchers have endeavored to extract knowledge for 3D generation by utilizing pre-trained large image models. Initially, efforts were made to employ a pre-trained CLIP model <cit.> to align the input text prompt with rendered images, aiming to supervise the process of 3D object generation <cit.>. However, the resulting 3D objects often exhibited a decreased level of realism, primarily due to the fact that CLIP could only provide high-level semantic guidance.With the advancement of large text-to-image diffusion models <cit.>, a notable example being DreamFusion <cit.>, the potential to generate more realistic 3D objects through knowledge distillation has been demonstrated. Subsequent works have consistently pushed the boundaries to achieve the generation of photo-realistic 3D objects that closely correspond to the provided text prompts <cit.>. These methods typically offer valuable insights by developing more sophisticated score distillation loss functions or by refining optimization strategies, to further enhance the quality of the generated objects.Despite the success achieved by these methods in generating high-fidelity 3D shapes based on textual descriptions, they usually require hours to complete the text-to-3D shape generation process. It degrades the user experience and imposes additional economic burden to the service providers. Consequently, we propose to train an efficient text-to-3D generative model via multi-view distillation. Once our network is trained, we are able to generate 3D objects given text prompts in real-time on a consumer-grade graphic card.3D aware image synthesis. 
The exploration of extending 2D generative adversarial networks (GANs) <cit.> to the realm of 3D has been extensively researched, primarily due to the advantage of not requiring a dedicated 3D dataset. A key concept behind this approach involves training a GAN capable of generating 3D representations based on 2D images.Multiple forms of 3D representations have been investigated, including triangular mesh <cit.>, volumetric representation <cit.>, and tri-plane <cit.> . Among these options, tri-plane stands out as an efficient choice due to its low memory consumption and fast image rendering, making it well-suited for GAN training. Moreover, there are alternative methods such as GANcraft <cit.>, which utilize sparse voxel grids for 3D scene generation, as well as fully implicit NeRF-based techniques that replace traditional generators with radiance fields <cit.>.Although these approaches have demonstrated remarkable capabilities in generating high-quality 3D assets, they are often limited to class-specific tasks and lack the flexibility to enable control over the generation process through textual input. Consequently, we propose ET3D for text-to-3D generation by distilling knowledge from a pre-trained large image diffusion model. § METHOD Our goal is to propose a new text-to-3D generation paradigm utilizing multi-view images synthesized by a large pre-trained image diffusion model. Although the text-to-multi-view diffusion model can generate impressive multi-view images, these images still lack pixel-wise consistency such that they can be used to reconstruct 3D assets directly. Instead of using Score Distillation Sampling (SDS) loss to align both data distributions, which has shown to suffer from over-saturation, over-smoothing, low diversity and the multi-face Janus problem <cit.>, we propose to exploit Generative Adversarial Network (GAN) to learn the real data distribution.Our method consists of two main parts as shown in Figure <ref>, a teacher model and a student model. During training, both models accept the same textual input. The student model is then trained to generate a 3D asset which can render multi-view images, that follow the same distribution as that from the teacher model. The teacher model is a pre-trained text-to-multi-view image diffusion model. Since there is no prior text-to-3D GAN network available, we propose a simple yet effective network based upon EG3D <cit.>, due to its impressive performance in un-conditional 3D aware image synthesis from category-specific multi-view image dataset. We will detail each component as follows.§.§ Text-to-multi-view image diffusion model.Without loss of generality, we exploit a recently proposed text-to-multi-view image diffusion model, MVDream <cit.>, as our teacher model. More advanced text-to-multi-view foundation models can also be used in future. MVDream is able to generate multi-view consistent images from a given text prompt. It achieves both the generalizability of 2D diffusion and the consistency of 3D data, by leveraging diffusion models pre-trained on large-scale web datasets and a multi-view dataset rendered from 3D assets. MVDream accepts a text prompt and four extrinsic camera parameters as input, it then generates four view-consistent images which satisfy the input text prompt each time. The current released pre-trained model enforces the four views to be 90^∘ separated for the longitude angle and share the same elevation angle, which ranges within [0^∘, 30^∘]. 
During the training of the student network, we sample multiple times for the same text prompt, and the starting longitude angle is randomly selected within [0^∘, 360^∘] each time. We note that the generated images are not always consistent between two samples (8 images in total), even if the input text prompts are the same. However, we found that our student network is not affected and can still learn to generate 3D assets properly. §.§ Text-to-3D generative model. Our student model is built upon EG3D <cit.>, a state-of-the-art 3D-aware image synthesis GAN network, which can learn from images only without requiring any 3D assets. As shown in fig_pipeline, it consists of five key components: a mapping network, a tri-plane generator network, a neural renderer, a super-resolution module and a discriminator network. We will describe each component briefly as follows. More detailed network architecture can be found in our supplementary material. Mapping network. The mapping network takes a latent variable z ∈ ℝ^512, camera parameters P ∈ ℝ^25 and a text embedding T ∈ ℝ^768 as input, and maps them into a 1280-dim feature vector. Both the latent variable and camera parameters are mapped into a 512-dim feature vector via EG3D's original MLP network. We then use a pre-trained CLIP model <cit.> to encode the input text prompt into a 768-dim feature vector. Both feature vectors are then concatenated together to form the final 1280-dim feature vector for both the tri-plane generator network and the super-resolution module. Tri-plane generator network. As a compromise between rendering efficiency and representation ability, we choose to use the tri-plane <cit.> to represent the 3D object implicitly. The generator network takes the 1280-dim feature vector as input and outputs the tri-plane feature images, each with dimension ℝ^256×256×32. Neural renderer. Given the generated tri-plane feature images and the sampled camera pose, we can render a 2D feature image F ∈ ℝ^128×128×32 via volume rendering. In particular, we can shoot a ray from the camera center towards the sampled pixel. Discrete 3D points can be sampled along the ray. For each 3D point, we can project it onto the tri-planes to obtain three feature vectors: F_XY, F_XZ and F_YZ ∈ ℝ^32. They are then concatenated together and input to a tri-plane decoder to obtain the point density σ and a color feature vector c ∈ ℝ^32. The pixel feature vector can then be computed via: F(r) = ∑_i=1^n T_i (1 - exp(-σ_i δ_i)) c_i, where F(r) ∈ ℝ^32 is the rendered feature vector at pixel position r, T_i is the transmittance and can be computed via T_i = exp(- ∑_k=1^i-1 σ_k δ_k), both σ_i and c_i are the predicted density and color feature vector of the sampled i^th 3D point, and δ_i is the distance between two neighboring sampled points. Super-resolution module. To generate higher-resolution 3D assets, a super-resolution module is applied. It takes the rendered feature image F ∈ ℝ^128×128×32 and the 1280-dim feature vector from the mapping network as input, and predicts an image Î ∈ ℝ^256×256×3 as the final output image. Discriminator network. We modify the discriminator network of StyleGAN2 <cit.> to exploit the text prompt embedding as an additional condition to train the generator network. As with the mapping network, we use a pre-trained CLIP model <cit.> to encode the input text prompt, such that the discriminator can learn to differentiate images according to the provided text prompt.
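Before turning to the losses, the volume-rendering quadrature of the neural renderer, F(r) = ∑_i T_i (1 - exp(-σ_i δ_i)) c_i, can be made concrete with a short PyTorch routine. The tensor shapes follow the text (32-dimensional colour features per sample); the function signature and batching are illustrative assumptions.

```python
import torch

def render_features(sigma: torch.Tensor, c: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
    """Volume-render per-ray features.

    sigma: (rays, n) densities of the n samples along each ray
    c:     (rays, n, 32) decoded colour feature vectors
    delta: (rays, n) distances between neighbouring samples
    returns (rays, 32) rendered feature vectors F(r)
    """
    alpha = 1.0 - torch.exp(-sigma * delta)                  # opacity of each ray segment
    # transmittance T_i = exp(-sum_{k<i} sigma_k delta_k), as a shifted cumulative product
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=1), dim=1
    )[:, :-1]
    weights = trans * alpha                                  # T_i (1 - exp(-sigma_i delta_i))
    return (weights.unsqueeze(-1) * c).sum(dim=1)            # sum over samples along each ray
```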
§.§ Loss functions. Both the generator network and discriminator network are trained in an adversarial manner. Given images I from the pre-trained text-to-multi-view image diffusion model, with known camera parameters P_I for both the extrinsics and intrinsics, latent codes z ∼ N(0, 1) and the corresponding text prompts t, we train our model using a GAN objective. R1-regularization is applied to further stabilize the training <cit.>: ℒ(θ, ϕ) = 𝔼_{I ∼ p_D} [ f(D_ϕ(I, P_I, t)) - λ ‖∇ D_ϕ(I, P_I, t)‖^2 ] + 𝔼_{z ∼ N(0, 1), P, P' ∼ p_P, t} [ f(-D_ϕ(G_θ(z, P, P', t), P', t)) ], where f(u) = -log(1 + exp(-u)) and λ controls the strength of the R1-regularizer. To better align the generated 3D asset with the textual description, we also apply a CLIP loss between the predicted image Î and the text prompt, which has been shown to be effective in prior methods <cit.>. Both the generator and discriminator are then trained with alternating gradient descent combining the GAN objective with the CLIP loss: min_θ max_ϕ ℒ(θ, ϕ) + λ_c ℒ_clip(θ), with ℒ_clip(θ) = arccos^2(⟨enc_i(Î), enc_t(t)⟩), where λ_c is a hyper-parameter, and enc_i and enc_t are the pre-trained CLIP image and text encoders.

§ EXPERIMENTS §.§ Implementation details Our framework offers the flexibility of being trained either online or offline with MVDream. We experimentally find that offline training can still deliver satisfying results, even though the number of training samples is much smaller than in online training. For efficiency considerations, we construct a substantial dataset with a wide variety of animals, objects, etc., facilitating offline training in this experimental setup. The dataset comprises compositions of animals, objects and styles, totaling up to 5,000 different prompts and 800,000 generated images at a resolution of 256×256 pixels. We hold out 100 prompts during training and use them to evaluate the compositional generalization performance. Our method has no restriction preventing it from being scaled to an even larger number of text prompts. We use a learning rate of 2.5×10^-3 for the generator training and 2×10^-3 to train the discriminator network. The batch size is 32. The network is trained with 8 NVIDIA A100 graphics cards. All the evaluations are conducted on a single RTX 4090 graphics card. We exploit the commonly used Frechet Inception Distance (FID) metric to evaluate the quality of rendered images from the generated 3D assets. The CLIP score is used to evaluate the similarity between the input text prompt and the generated 3D asset. §.§ Ablation study We conduct experiments to study the effects of the CLIP loss and the conditional text input for the discriminator network. The experimental results are presented in table_ablation and fig_ablation. They demonstrate that the textual condition on the discriminator network improves both the quality and text coherence of the generated 3D assets. The qualitative results shown in fig_ablation also demonstrate that the generated 3D assets fail to satisfy the input textual description if we do not apply the text condition on the discriminator. Unexpectedly, the usage of the CLIP loss does not improve the similarity between the text description and the generated 3D asset. The reason might be that the text condition of the discriminator network can already provide sufficient supervision on text control. However, we find that the FID metric improves from 7.7 to 7.4 when the CLIP loss is applied. Therefore, we still keep the CLIP loss during training to obtain a better image quality. We also compare against EG3D <cit.>, which does not support text-controlled 3D generation.
The experimental results demonstrate that the EG3D struggles to learn unconditional 3D generation from dataset with many different categories. It demonstrates that the additional text condition helps the network cluster the data distribution and ease the learning of the generative network. §.§ Quantitative comparisonsWe compare ET3D against prior state-of-the-art methods for quantitative evaluations.We exploit two state-of-the-art SDS optimization based methods, DreamFusion <cit.> and ProlificDreamer <cit.>. We use a rendering image resolution at 64×64 pixels for the optimization of ProlificDreamer. We also compare against Shap-E <cit.>, which is pre-trained with text-labeled 3D data. Another similar method is ATT3D <cit.>. However, we cannot compare against it since they do not release their implementations to the general public. We exploit the CLIP score and time consumption as metrics for the evaluation. The experimental results are presented in tab_quant. The metrics are computed over 400 different objects.It demonstrates that our method is able to achieve similar or even better text similarity score, compared to DreamFusion, ProlificDreamer and Shap-E. On the other hand, the time required to generate a 3D asset by our method is only around 8 ms, which is 225000 times faster than DreamFusion and 450000 times faster than ProlificDreamer. The evaluations are conducted on a consumer graphic card, NVIDIA RTX 4090. §.§ Qualitative comparisonsThe qualitative evaluation results are presented in fig_quality. We present rendered images from two different views to evaluate their 3D consistency and texture quality. The surface normal or 3D mesh is also presented for geometry comparisons. The experimental results demonstrate that DreamFusion tends to generate over-saturated and blurry 3D assets due to the inherited characteristic of SDS loss. While ProlificDreamer improves the SDS loss and delivers impressive results, it still performs poorly for some text prompts. In contrary, our method delivers better results, in terms of both the texture quality and 3D consistency. It demonstrates the great potential to exploit multi-view images, generated by a pretrained image diffusion model, for high-quality text-to-3D content generation. To demonstrate the generalization capability of our model to unseen prompts, we follow the experimental setting used by ATT3D <cit.>. In particular, it generates compositional prompts using the template "a {animal} {activity} {theme}" and withholding a subset of prompts as unseen for evaluation. Additionally, we further select 40 animals and 40 styles, employing the compositional prompt "a {animal}, {style}" and exploit the compositions along the diagonal direction to validate style generalization. The 3D objects generated from part of these unseen prompts are illustrated in <ref>. To better perceive consistent 3D objects, we render images of the same object from multi-views, 0^∘, 90^∘, 180^∘ and 270^∘. Please refer to the Appendix and supplementary materials for more results. The experimental demonstrates that our network is able to generalize to unseen text prompts and delivers good results for both geometry and texture style compositions.fig_interpolation presents the textual embedding interpolation results. We linearly interpolate the latent vector of two text prompts and input it to the generator network. It demonstrates that the interpolation properties of GANs continue to be considerably smooth. 
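The prompt interpolation just described can be sketched as follows: the CLIP text embeddings of two prompts are linearly blended and fed to the trained generator with a fixed latent code and camera. The generator and encoder interfaces below are placeholders standing in for the components of the method section, not an exact API.

```python
import torch

@torch.no_grad()
def interpolate_prompts(G, clip_text_encode, z, cam, prompt_a: str, prompt_b: str, steps: int = 8):
    """Render a sequence of objects while sliding between two text embeddings.

    G, clip_text_encode, z, and cam are placeholders for the trained generator,
    the frozen CLIP text encoder, a fixed latent code, and fixed camera parameters.
    """
    e_a = clip_text_encode(prompt_a)          # text embedding of the first prompt
    e_b = clip_text_encode(prompt_b)          # text embedding of the second prompt
    frames = []
    for alpha in torch.linspace(0.0, 1.0, steps):
        e = torch.lerp(e_a, e_b, alpha)       # linear blend of the two embeddings
        frames.append(G(z, cam, e))           # rendering conditioned on the blended embedding
    return frames
```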
§ CONCLUSION AND FUTURE WORK We present a novel framework for efficient text-to-3D generation. Our network is built upon an unconditional 3D GAN architecture and is trained via multi-view distillation of a pretrained text-to-multi-view model. Different from prior Score Distillation Sampling (SDS) based optimization methods, which usually require large amounts of computational resources to generate a 3D asset, we are able to generate a 3D object in only 8 ms once the network is trained. Owing to the available resources, we have so far trained our network with only a small number of text prompts. Even with such a limited number of text prompts, our network exhibits good generalization performance to unseen text input. This demonstrates the great potential of our framework for the task of large-scale, efficient text-to-3D generation. | http://arxiv.org/abs/2311.15561v1 | {
"authors": [
"Yiming Chen",
"Zhiqi Li",
"Peidong Liu"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231127061423",
"title": "ET3D: Efficient Text-to-3D Generation via Multi-View Distillation"
} |
[email protected] Department of Physics, University of Basel, 4056 Basel, Switzerland Shuttling spins with high fidelity is a key requirement to scale up semiconducting quantum computers, enabling qubit entanglement over large distances and favouring the integration of control electronics on-chip. To decouple the spin from the unavoidable charge noise, state-of-the-art spin shuttlers try to minimize the inhomogeneity of the Zeeman field. However, this decoupling is challenging in otherwise promising quantum computing platforms such as hole spin qubits in silicon and germanium, characterized by a large spin-orbit interaction and electrically-tunable qubit frequency. In this work, we show that, surprisingly, the large inhomogeneity of the Zeeman field stabilizes the coherence of a moving spin state, thus enabling high-fidelity shuttling also in these systems. We relate this enhancement in fidelity to the deterministic dynamics of the spin, which filters out the dominant low-frequency contributions of the charge noise. By simulating several different scenarios and noise sources, we show that this is a robust phenomenon generally occurring at large field inhomogeneity. By appropriately adjusting the motion of the quantum dot, we also design realistic protocols enabling faster and more coherent spin shuttling. Our findings are generally applicable to a wide range of setups and could pave the way toward large-scale quantum processors. High-fidelity spin qubit shuttling via large spin-orbit interaction Daniel Loss January 14, 2024 =================================================================== § INTRODUCTION Spin qubits confined in silicon and germanium quantum dots are front-runners in the race toward large-scale quantum computers <cit.>. Their demonstrated compatibility with industry-level CMOS processing <cit.> and their high-temperature operations <cit.> make these systems ideal for scalability and co-integration with control electronics <cit.>. The small footprint of spin qubits, typically a few tens of nanometers, however, imposes demanding technological constraints on the classical hardware and requires dense multi-layered architectures that add significant extra complexity to the process <cit.>. These constraints are significantly relaxed by introducing quantum links coupling distant spins that are placed micrometers apart <cit.>. This long-range connectivity can be achieved in various ways, including, for example, virtual couplings enabled by photons in superconducting cavities <cit.>, Luttinger liquids <cit.>, floating gates <cit.>, and magnetic systems <cit.>. Correlated dissipative coupling emerging from appropriately engineering the spin coupling to a bosonic bath has also been proposed as a viable route to entangle distant qubits <cit.>. All these possibilities, however, require external components that are not straightforward to integrate into conventional CMOS processes, therefore hindering the competitive advantage of spin qubits. On the other hand, shuttling spins across the chip provides a viable and CMOS-compatible way to link qubits in a sparse array <cit.>. The fidelity of this operation is determined by the noise that the spin experiences during shuttling and, in current devices, this noise is predominantly related to random fluctuations of the electrostatic environment. Because spin-orbit interactions (SOI) directly couple the spin degree of freedom to these charge fluctuations, current experiments try to minimize the SOI to maximize the shuttling fidelity.
This approach is challenging in hole-based spin qubits, whose predominant feature is their large SOI <cit.>. The tunability of their SOI enables sweet spots <cit.> where the SOI can be turned off; however, for shuttling operations this optimization requires demanding fine-tuning of the electrostatic potential over wide areas. In this work, we show that, surprisingly, large SOI and inhomogeneity of the Zeeman field can substantially enhance the shuttling fidelity. This improvement depends on the coherent dynamics imprinted by the SOI on the spin state. The spin moving in a large SOI field rotates quickly in a deterministic and controllable way, and this motion provides an intrinsic dynamical decoupling, filtering out the dominant, low-frequency contribution of the noise, thus boosting the shuttling fidelity. The high spin shuttling fidelities reached in our shuttling scheme are qualitatively independent of the type and spatial distribution of the noise sources, and can be reached also by moving spins in an inhomogeneous Zeeman field produced, for example, by varying g tensors <cit.> or micromagnets <cit.>, opening up effective SOI-driven improvements of the shuttling fidelity in electron spin qubits in silicon and germanium. Expanding on these ideas, we propose optimal protocols to leverage the SOI to further dynamically decouple the moving spin from the environment, rendering the shuttling faster and at the same time more coherent, and paving the way towards high-fidelity shuttling of hole spin qubits for large-scale quantum processors. This manuscript is organized as follows. In Sec. <ref>, we introduce our general model describing spins shuttling in inhomogeneous SOI and Zeeman fields. Our theory captures the spin dynamics in a wide variety of setups, including the silicon and germanium spin qubits in fin field-effect transistors and heterostructures sketched in Fig. <ref>. In Sec. <ref>, we specialize our discussion to inhomogeneous Zeeman fields only. This simple case provides a valuable intuitive understanding of the coherent and incoherent time evolution of the spin, and of the effect of different sources of noise. We expand the discussion in Sec. <ref> by including an inhomogeneous SOI field, nicely describing realistic hole-based silicon and germanium devices. We show that a large effective SOI is beneficial to reduce the effect of noise during shuttling and, as proposed in Sec. <ref>, it can be further leveraged in alternative shuttling schemes that dynamically decouple the spin from low-frequency noise. These protocols enable a faster motion of the spins and substantially boost the shuttling coherence of a wide range of materials and systems presenting large inhomogeneities of Zeeman fields. § THEORETICAL MODEL In this work, we analyze spin qubits confined in moving quantum dots, as sketched in Fig. <ref>. The dynamics of the spin along the direction of motion (z-direction) is accurately modeled by the effective one-dimensional Hamiltonian H_1D = p^2/2m + mω_o^2/2 [z-z̅(t)]^2 - {v(z), p}·σ + ħω̃_B(z)·σ/2. This Hamiltonian describes a quantum dot with harmonic frequency ω_o and width l=√(ħ/mω_o), whose center of mass z̅(t) is shifted in time. This moving electric potential is experimentally implemented in conveyer-mode shuttling architectures <cit.>. In this work we restrict ourselves to this type of shuttling; however, we expect that our results can be generalized also to bucket-brigade shuttling <cit.>.
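As a purely illustrative aid, the snippet below sketches how the one-dimensional Hamiltonian above can be discretized and diagonalized numerically (finite differences on a grid, Pauli matrices for the spin). The effective mass, SOI velocity, and Zeeman profile in the demo are assumed example values, not parameters extracted from a specific device.

```python
import numpy as np

hbar = 1.0545718e-34
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)

def h1d(z, m, omega_o, zbar, v_of_z, omega_B_of_z):
    """Finite-difference sketch of the 1D Hamiltonian: harmonic dot centered at zbar,
    symmetrized SOI term -{v(z), p}.sigma with {a, b} = (ab + ba)/2, and local Zeeman term."""
    n, dz = len(z), z[1] - z[0]
    p = -1j * hbar * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * dz)
    lap = (np.diag(np.ones(n - 1), 1) - 2 * np.eye(n) + np.diag(np.ones(n - 1), -1)) / dz**2
    h_orb = -hbar**2 / (2 * m) * lap + np.diag(0.5 * m * omega_o**2 * (z - zbar)**2)
    H = np.kron(h_orb, np.eye(2, dtype=complex))
    v, wB = v_of_z(z), omega_B_of_z(z)                      # arrays of shape (3, n)
    for comp, s in enumerate((sx, sy, sz)):
        v_diag = np.diag(v[comp])
        H += np.kron(-0.5 * (v_diag @ p + p @ v_diag), s)   # SOI anticommutator
        H += np.kron(np.diag(0.5 * hbar * wB[comp]), s)     # Zeeman term
    return H

if __name__ == "__main__":
    z = np.linspace(-150e-9, 150e-9, 241)
    m = 0.1 * 9.109e-31                                     # illustrative effective mass
    omega_o = 1e-3 * 1.602e-19 / hbar                       # ~1 meV orbital splitting
    v = lambda z: np.vstack([np.full_like(z, 1e4), 0 * z, 0 * z])               # SOI along x
    wB = lambda z: np.vstack([0 * z, 0 * z, np.full_like(z, 2 * np.pi * 5e9)])  # field along z
    E = np.linalg.eigvalsh(h1d(z, m, omega_o, 0.0, v, wB))
    print("lowest spin splitting:", (E[1] - E[0]) / 1.602e-19 * 1e6, "ueV")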
During its motion, the spin experiences inhomogeneous spin-orbit and Zeeman fields, described by the vectors of spin-orbit velocities v(z) and Larmor frequencies ω̃_B(z), respectively. We anticipate that the local Zeeman field of the nanostructure ω̃_B(z) differs from the local qubit splitting ω_B(z) by a correction arising from the confinement in the z-direction <cit.>, see Eq. (<ref>).Here,m is the effective mass along z, p=-iħ∂_z is the momentum operator in the direction of motion, σ=(σ_1,σ_2,σ_3) is the vector of Pauli matrices, and { a, b}=(a b+ba)/2 is the symmetrized anticommutator that guarantees the hermiticity of H_1D. Our Eq. (<ref>) generally captures the response of a wide variety of different setups, including the ones in Fig. <ref>. In particular, in this work, we focus primarily on hole spin qubit architectures, where the effective parameters originate from the mixture of heavy and light holes in the valence band caused by kinetic energy and strain and described by the Luttinger-Kohn and Bir-Pikus Hamiltonian. In one-dimensional hole channels in silicon and germanium, the SOI velocity v(z) is large, yielding experimentally measured SOI lengths λ_s= ħ/m|v| of tens of nanometers, comparable with the quantum dot width l <cit.>. In planar hole nanostructures, the SOI is generally smaller although it can be enhanced by device engineering <cit.>. However, in these systems, the effective Zeeman field ω̃_B^h(z)=μ_Bĝ(z)B/ħ is also largely inhomogeneous because of the space-dependent and electrically tunable g tensor ĝ(z), which rotates the energetically preferred quantization axes at different location also when the externally applied magnetic field B is homogeneous <cit.>.We stress that our model also directly describeselectron spin qubits moving in an inhomogeneous magnetic field provided, for example, by micromagnets <cit.>. In this case, similarly to planar hole heterostructures, the SOI v(z) is small, and the leading contribution to the spin dynamics is the inhomogeneous Zeeman field ω̃_B^e(z)=μ_B g B(z)/ħ.Throughout this work, we restrict ourselves to adiabatically moving quantum dots, and consider shuttling velocities that are slow compared to the orbital energy gap. The small corrections to our model arising from non-adiabaticity in the orbital degrees of freedom and an exact solution of a simple case where this condition is lifted are discussed in detail in Appendix <ref>. We note that for holes this condition is ħ∂_t z̅/l≪ħω_o∼ 1 meV, while for electrons in silicon and germanium this condition is more stringent and we require ħ∂_t z̅/l to be much smaller than the valley splitting ∼ 0.1 meV.We emphasize that because ħω_o≫ħ|ω_B(z)|∼ 0.01 meV, in our adiabatically moving quantum dots, the dynamics of spin does not need to be adiabatic with respect to the Zeeman field and we anticipate that resonance processeswith ∂_t z̅/l∼|ω_B(z)| can further enhance the fidelity of spin shuttling, see Sec. <ref>.§INHOMOGENEOUS ZEEMAN FIELD §.§ Deterministic spin dynamicsWe first focus on a spin moving in an inhomogeneous Zeeman field and neglect for the moment the effect of SOI, i.e. v=0 in the Hamiltonian H_1D of Eq. (<ref>). 
This simple case captures the response of planar hole nanostructures and of electron spins moving in micromagnetic fields and shows how the spin dynamics during shuttling can filter out the relevant low-frequency noise sources.Assuming that the confinement potential is strong compared to the local Zeeman field and restricting for now to shuttling processes that are adiabatic compared to both orbital and spin dynamics, i.e. ω_o≫ | ω_B(z)|≫∂_t z̅/l, we find by conventional time-dependent perturbation theory that the spin degree of freedom evolves according to the inhomogeneous Zeeman HamiltonianH_Z =ħ/2ω_B[z̅(t)] ·σ ,ω_B[z̅] =∫ dz |ψ(z-z̅)|^2 ω̃_B(z), see Appendix <ref> for more details. The Zeeman energy of the quantum dot ω_B contains quantitative corrections coming from the inhomogeneity of the field averaged over the charge density |ψ(z-z̅)|^2≈ e^-(z-z̅)^2/l^2/l√(π) of the particle. The adiabatic condition on the spin degrees of freedom constrains the shuttling velocity to be ∂_t z̅≪min|ω_B| l. For typical values of |ω_B|/2π∼ 1- 10 GHz and l∼ 10-100 nm, this condition is well satisfied for reasonable velocities ≲ 10 m/s. We will further relax this condition in Sec. <ref>.The time-evolution of the spin generated by H_Z is well approximatedby the unitary operatorU_Z(t)≈ e^-i θ_B[z̅(t)] n_B[z̅(t)]·σ/2e^-i Φ_B(t) σ_3/2 .The first transformation e^-i θ_B n_B·σ/2 locally diagonalizes H_Z at position z̅.The local angle θ_B[z̅] and unit vector n_B[z̅] arefound explicitly solving the equation ω_B/|ω_B|= R̂_B(θ_B)n_3, for each value of z̅. Here, n_3=(0,0,1) and R̂_B(θ_B) isan anticlockwise rotation matrix around the axis n_B of an angle θ_B, see Appendix <ref> for more details and for a general solution for the vector n_B and angle θ_B. We conventionally choose the local angle θ_B to satisfy θ_B[z̅=0]=0 and U_Z(t=0)=1.Because of the adiabatic condition in the spin degrees of freedom, we discard negligible terms ∝∂_t z̅/l generated by the first transformation, and the time-evolution in this locally rotated frame is the spin-dependent phase accumulationgiven by the second exponential: e^-i Φ_B(t) σ_3/2, withΦ_B(t)=∫_0^t |ω_B[z̅(τ)] |dτ.Non-adiabatic corrections to this model can prove beneficial for shuttling and are leveraged in Sec. <ref>. §.§Shuttling fidelity in a noisy environmentThe unitary operator U_Z(t) in Eq. (<ref>) describes the coherent deterministic time evolution of the spin.Because U_Z can be characterized in experiments and can be compensated for or engineered to implement single-qubit gates, it does not influence the overall shuttling fidelity. However, during shuttling the spin also experiences random fluctuations in the environment that result in a loss of its coherence. At small shuttling velocities, the dominant contribution in a conveyer-mode shuttling process is estimated to be the variation of spin splitting caused by charge noise <cit.>.To describe this effect, we consider the noise Hamiltonian <cit.>H_N=h(t)·σ/2 ,where a stochastic, time-dependent vector h(t) couples to the spin. Physically, this vectororiginates from long-range fluctuations of the gate electric field (global noise sources) or from short-range atomistic defects (local noise sources) coupling to spin by the effective SOI or hyperfine interactions. This Hamiltonian can also describe the effect of small random variations of the trajectory of the shuttled spin in the inhomogeneous field. A detailed comparison between local and global noise sources is delayed to Sec. <ref>. 
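For concreteness, the following is a minimal numerical sketch of the deterministic propagator U_Z(t) introduced above: the local frame axis n_B and angle θ_B are obtained from the instantaneous direction of ω_B[z̅(t)], and the dynamical phase Φ_B is accumulated by integrating |ω_B|. The precessing-field example and all parameter values in the demo are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]

def axis_angle_from_field(b):
    """Axis n_B and angle theta_B such that b/|b| = R_B(theta_B) n_3."""
    b = np.asarray(b, float) / np.linalg.norm(b)
    axis = np.cross([0.0, 0.0, 1.0], b)
    s = np.linalg.norm(axis)
    if s < 1e-12:                                     # field (anti)parallel to n_3
        return np.array([1.0, 0.0, 0.0]), (0.0 if b[2] > 0 else np.pi)
    return axis / s, np.arctan2(s, b[2])

def adiabatic_propagator(times, omega_B_of_t):
    """U_Z(t): local frame rotation times the dynamical phase exp(-i Phi_B sigma_3 / 2)."""
    w = np.array([omega_B_of_t(t) for t in times])    # (nt, 3), in rad/s
    phi = np.concatenate([[0.0], np.cumsum(0.5 * (np.linalg.norm(w[1:], axis=1)
                          + np.linalg.norm(w[:-1], axis=1)) * np.diff(times))])
    Us = []
    for wi, ph in zip(w, phi):
        n, th = axis_angle_from_field(wi)
        frame = expm(-0.5j * th * sum(n[k] * sig[k] for k in range(3)))
        Us.append(frame @ expm(-0.5j * ph * sig[2]))
    return Us

if __name__ == "__main__":
    lam, v, wB = 25e-9, 1.0, 2 * np.pi * 5e9          # illustrative precessing-field example
    field = lambda t: wB * np.array([0.0, np.sin(2 * v * t / lam), np.cos(2 * v * t / lam)])
    t = np.linspace(0.0, 50e-9, 2001)
    U = adiabatic_propagator(t, field)
    psi = U[-1] @ np.array([1.0, 0.0], complex)
    print("<sigma_z> after 50 ns of shuttling:", round(float(np.real(np.conj(psi) @ (sig[2] @ psi))), 3))
```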
We anticipate that, while the microscopic origin of the noise influences quantitatively the shuttling fidelity, the coherent spin dynamicsreduce the effect of the noise independently of the source, and for this reason we focus first on the simpler case of global noise sources. The derivations for general cases are provided in Appendix <ref>.In the interaction picture, H_N is dressed by the time evolution of the spin as H_N^I =U_Z^† H_NU_Z=1/2h(t)·R̂_Z(t) σ ,R̂_Z(t) = R̂_B (t)[θ_B(t)]R̂_3[Φ_B(t)], Here, R̂_Z is the combined rotation matrix generated by the transformation U_Z and the notation R̂_B(t)[θ_B(t)] emphasizes that R̂_B depends on time via its time-dependent rotation axis n_B(t) and angle θ_B(t); R̂_3 is the rotation matrix about the local Zeeman axis, see Appendix <ref> for the explicit form. When noise is small, H_N generates the time-evolution operator U_N≈ e^-iϕ_N(t)·σ/2, with the vector of random phasesϕ_N(t)=1/ħ∫_0^t dτh(τ)R̂_Z(τ) . To quantify the error caused by the stochastic phase accumulation during shuttling, we introduce the fidelity of a single shuttling eventℱ=1/2Tr(U_id^† U_re)=1/2Tr(e^-i ϕ_N ·σ/2) ,that measures the distance between the ideal (coherent) and real (noisy) operations U_id=U_Z and U_re=U_ZU_N, respectively. The average shuttling fidelity ℱ̅is obtained by averagingℱ over the probability distribution of ϕ_N.Assuming a Gaussian-distributed noise <cit.>, we obtainℱ̅=∫_-∞^∞dϕ_N e^-ϕ_N·Σ̂^-1ϕ_N/2/√(8π^3|Σ|)cos(√(ϕ_N·ϕ_N)/2 ),where we introduced the covariance matrixΣ̂=1/2πħ^2∫_-∞^∞ dω S(ω) F̂(ω, t),with determinant |Σ|; S(ω)=∫ dt e^iω t⟨ h(t)h(0)⟩ is the power spectral function of the noise, which for simplicity we assumed to be isotropic and uncorrelated in space and spin directions, i.e., ⟨ h_i(t)h_j(0)⟩= δ_ij⟨ h(t)h(0)⟩. The generalization of Eq. (<ref>) for noise sources that couple to the moving spin anisotropically <cit.> is straightforward and is provided in Appendix <ref>. The matrix of filter functions <cit.>F̂(ω, t)=∫_0^t dτ∫_0^t dτ' e^-iω(τ-τ')R̂_Z^T(τ)R̂_Z(τ')depends on fast rotations around the local spin quantization axis [R_3(Φ_B)], which account for the phase accumulated because of the Zeeman energy |ω_B|/2π∼ 10 GHz,and on slower rotations∼ 10-100 MHz of the spin quantization axis [R_B(θ_B)] caused by the motion of the spin in an inhomogeneous Zeeman field.In realistic semiconducting devices, the spectral function S(ω) is strongly peaked at low frequencies and has a 1/ω tail at large frequencies <cit.>. Because the transversal elements of F̂ contain rapidly oscillating terms determined by Φ_B, they are peaked at large frequencies in the GHz range, where the noise has less weight. For this reason, the dominant contribution to the fidelity arises from the longitudinal element of the covariance matrix Σ̂_33, which is peaked at low frequencies, and is determined by the element F̂_33≡ F,F(ω,t)=∫_0^t dτ∫_0^t dτ' e^-iω(τ-τ')ω_B[z̅(τ)]/|ω_B[z̅(τ)]|·ω_B[z̅(τ')]/|ω_B[z̅(τ')]| ,of the matrix of filter functions F̂ [ To derive Eq. (<ref>), we used [R_Z^T(τ)R_Z(τ')]_33=[R_B^T(τ)R_B(τ')]_33=(R_B(τ) n_3)·(R_B(τ') n_3)=ω_B[z̅(τ)]·ω_B[z̅(τ')]/|ω_B[z̅(τ)]||ω_B[z̅(τ')]|.]. In this case, the average shuttling fidelity becomesℱ̅ =e^-Σ̂_33/8 .We note that the corrections coming from the fast-rotating transversal terms causing spin relaxation lead to a power-law decay, with slower time constants, instead of the faster exponential decay included here <cit.>.Eqs. 
(<ref>) and (<ref>) highlight the fundamental role that the inhomogeneity of the Zeeman field has in determining the average shuttling fidelity ℱ̅. In particular, the inhomogeneous tilt of the spin quantization axis encoded in the product ω_B[z̅(τ)]·ω_B[z̅(τ')] can substantially impact the filter function. We discuss this phenomenon in the next section by analysing a few keyexamples. A comparison between the filter functions and average shuttling fidelities obtained for different cases is shown in Fig. <ref>. §.§Suppressing noise by shuttling§.§.§Spin rotation in homogeneous Zeeman fields We consider first the simplest case where during shuttling the spin moves in a homogeneous Zeeman field, i.e. ω̃_B(z)=ω_B(z)=ω_B n_3. This case is the aim of current experimental settings, but we will show that it does not always correspond to the highest shuttling fidelity.If the Zeeman field does not depend on space the unitary time-evolution operator of the spin given in Eq. (<ref>) reduces to the simple phase accumulation U_Z=e^-i ω_B tσ_3/2, which rotates the spin around the fixed axis n_3. Moreover, the product ω_B[z̅(τ)]·ω_B[z̅(τ')]= 1 and the longitudinal filter function F in Eq. (<ref>) simplifies toF(ω,t)=4sin^2(ω t/2)/ω^2≡ F_FID(ω,t),which corresponds to the filter function of a free-induction decay (FID) experiment <cit.>.We remark that F_FID is peaked at zero frequency ω=0, where it grows as F_FID(ω=0,t)=t^2, see the black line in Fig. <ref>(a).For this reason, the shuttling fidelity ℱ̅, related to the longitudinal component Σ̂_33 of the covariance matrix by Eq. (<ref>), is determined by low-frequency noise which dominates the integral in Eq. (<ref>).To explicitly compare different scenarios, we use here the typical spectral function measured in experiments <cit.> S(ω)=2πħ^2/T^2-η1/|ω|^1-η ,where η∈ (0,1] and we introduce the time scale T>0, that characterize the amplitude of the noise fluctuations in different experiments. In particular, combining Eqs. (<ref>), (<ref>), and (<ref>), we find that for FID, the average shuttling fidelity isℱ̅_FID =e^-(t/T)^2-ηcos(πη/2) Γ (η -2)/2≈ e^-t^2/T_φ^2 , T_φ =2 T√(η) , where Γ(x) is the gamma function. The approximation reports the purely pink noise case, with η→ 0^+, such that the noise spectrum is S(ω)∝ 1/|ω|, see the blue line in Fig. <ref>(a). Importantly, we stress that the dephasing time T_φ∝√(η) vanishes for purely1/|ω| noise because of the characteristic non-integrable divergence at zero frequency.The average shuttling infidelity 1-ℱ̅ for FID is shown with a black line in Fig. <ref>(b), and it will serve as a reference to compare different cases. For typical experimental values of T_φ∼ 1 μs <cit.> and shuttling velocities of ∼ 1 m/s, we obtain a loss of coherence of the spin within a distance z̅∼ 1 μm. Finally, weremark here that as long as the motion of the spin remains adiabatic compared to orbital and Zeeman fields, the shuttling fidelity of FID is independent of the velocity of the quantum dot. This is not generally valid in the presence of inhomogeneity of the Zeeman field, as we discuss next.§.§.§ Spin precession in inhomogeneous Zeeman fieldsIn striking contrast to the FID case, if the Zeeman field is inhomogeneous, the time-dependence of the productω_B[z̅(τ)]·ω_B[z̅(τ')] in Eq. (<ref>) shifts the weight of the longitudinal filter function F to frequencies of tens of MHz, thus significantly improving the average shuttling fidelity. 
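The longitudinal filter function can also be evaluated numerically for any trajectory and field profile, since the double time integral factorizes into the squared modulus of a finite-time Fourier transform of the local field direction. The short sketch below (our own discretization, with illustrative parameters) makes this explicit and reproduces the two limiting behaviours discussed here and next: a peak at ω=0 for a homogeneous field and peaks at ω=±2v̅/λ for a fully precessing one.

```python
import numpy as np

def filter_function(omega, times, b_of_t):
    """F(omega, t) = sum_j | int_0^t e^{-i omega tau} b_j(tau) dtau |^2,
    with b(t) the unit vector along omega_B[zbar(t)]."""
    b = np.array([b_of_t(t) for t in times])                 # (nt, 3) unit vectors
    dt = times[1] - times[0]
    ft = np.exp(-1j * np.outer(omega, times)) @ (b * dt)     # (nw, 3) finite-time FT
    return np.sum(np.abs(ft) ** 2, axis=1)

if __name__ == "__main__":
    v, lam = 1.0, 50e-9 / np.pi                              # vbar = 1 m/s, gate pitch pi*lam = 50 nm
    times = np.linspace(0.0, 0.5e-6, 4001)
    omega = 2 * np.pi * np.linspace(0.1, 40.0, 400) * 1e6
    b_hom = lambda t: np.array([0.0, 0.0, 1.0])              # homogeneous field (FID case)
    b_prec = lambda t: np.array([0.0, np.sin(2 * v * t / lam), np.cos(2 * v * t / lam)])
    F_hom = filter_function(omega, times, b_hom)
    F_prec = filter_function(omega, times, b_prec)
    print("peak of F (homogeneous):", omega[np.argmax(F_hom)] / 2 / np.pi / 1e6, "MHz")
    print("peak of F (precessing): ", omega[np.argmax(F_prec)] / 2 / np.pi / 1e6,
          "MHz (expected near 2*v/lam/2pi = 20 MHz)")
```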
To illustrate this effect, we consider first a simple scenario where the moving spin precesses in the inhomogeneous Zeeman field ω̃_B^P(z)/ω̃_B = cos(2z/λ) n_3+sin(2z/λ) n_2 =R̂_1(2z/λ) n_3,that fully rotates around a fixed axis. The matrix R̂_1 is reported in Appendix <ref> and describes a rotation around n_1=(1,0,0) with period πλ. While being an ideal field, we emphasize that ω_B^P nicely describes a wide variety of devices.For example, in electronic systems ω_B^P matches the stray magnetic field produced by modular nanomagnets spaced by a distance πλ <cit.>.In this case, we note that a small homogeneous magnetic field is required to polarize the magnets, but this field could be switched off after the initial polarization.Moreover, in planar hole nanostructures, ω̃_B^P reasonably approximates the strain- and electric field-induced tilting of the g tensor <cit.> caused by the periodic arrangement of gates required for a conveyer-mode shuttling architecture. For example, in neighbouring quantum dots defined in planar germanium heterostructures, g-tensors tilting of more than 40% <cit.>, and even g-factors with opposite signs <cit.>, have been recorded, suggesting that fully rotating fields as ω̃_B^P are within reach in these systems.A detailed discussion of the effects of residual homogeneous Zeeman fields is delayed to the next section <ref>.We also anticipate that the field ω̃_B^P matches the effective Zeeman field produced by a finite SOI, as typical in hole nanowires and fin field effect transistors, as we will show in Sec. <ref>.The Zeeman energy of the moving quantum dot appearing in H_Z in Eq. (<ref>) isω_B^P(z̅)= ω_BR̂_1(2z̅/λ) n_3 ,withω_B= e^-l^2/λ^2ω̃_B,and is related to the local Zeeman energy ω̃^P_B(z) in H_1D in Eq. (<ref>) by the well-known Gaussian renormalization factor e^-l^2/λ^2, which accounts for the effects of strong confinement and large inhomogeneity in the z-direction <cit.>.In this case, the time-evolution operator in Eq. (<ref>) and the kernel of the longitudinal filter function F in Eq. (<ref>) reduce respectively toU_Z^P(t)=e^-i z̅(t) σ_1/ λe^-i ω_B t σ_3/2 , ω_B[z̅(τ)]/|ω_B[z̅(τ)]|·ω_B[z̅(τ')]/|ω_B[z̅(τ')]|=cos[2z̅(τ)-z̅(τ')/λ]. In contrast to FID, in an inhomogeneous Zeeman field the quantum dot motion plays a critical role because the energetically favoured spin quantization axis varies at different positions and times. This results in spin precession during shuttling.Considering a constant shuttling velocity z̅(t)=v̅t, the integral in Eq. (<ref>) defining F can be evaluated exactly, and the complete solution is provided in Appendix <ref>, see Eq. (<ref>). We find that an accurate approximation for the exact result is provided by the simple equationF_P(ω, t)≈t^2/2[f_L(ω-2ω_λ/2/t)+f_L(ω+2ω_λ/2/t)],where f_L(x)= (1+x^2)^-1 is a Lorentzian function normalized as f_L(0)=1. We introduce here the relevant frequency shift ω_λ= v̅/λ, quantifying the rate of change in spin quantization axis; in a similar way, we also define the frequency ω_l= v̅/l.In Fig. <ref>(a), we show a comparison between the exact (solid lines) and the approximate (dashed lines). Importantly, F_P comprised two functions peaked at finite frequencies ± 2ω_λ and with broadening 1/t becoming narrower at large times. Assuming an adiabatic shuttling velocity v̅= 1 m/s and a typical gate pitch of πλ=50 nm, we find ω_λ/2π=10 MHz, substantially shifting the relevant components of noise toward MHz frequencies, where the noise has lower weight (blue line). 
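Combining the filter functions with the 1/f-like spectral function, the average shuttling fidelity can be estimated in a few lines. The sketch below uses the approximate closed forms quoted above, explicit infrared and ultraviolet cutoffs, and illustrative values for T, η, and ω_λ; it is meant only to reproduce the qualitative trend that the precessing field strongly suppresses the infidelity compared to the FID case, not to replace the analytic expressions.

```python
import numpy as np

def avg_fidelity(filter_fn, t, T, eta, w_min=2*np.pi*0.1, w_max=2*np.pi*1e9, n=200_001):
    """Average shuttling fidelity exp(-Sigma_33/8) for S(w) = 2 pi hbar^2 / (T^(2-eta) |w|^(1-eta)),
    with Sigma_33 = (1/2/pi/hbar^2) int S(w) F(w, t) dw, evaluated with IR/UV cutoffs."""
    w = np.logspace(np.log10(w_min), np.log10(w_max), n)
    s_over_hbar2 = 2 * np.pi / (T ** (2 - eta) * w ** (1 - eta))
    sigma33 = 2 * np.trapz(s_over_hbar2 * filter_fn(w, t), w) / (2 * np.pi)   # spectrum even in w
    return np.exp(-sigma33 / 8)

# closed-form filter functions quoted above (FID and precessing field)
F_fid = lambda w, t: 4 * np.sin(w * t / 2) ** 2 / w ** 2
def F_prec(w, t, w_lam=2 * np.pi * 1e7):                     # w_lam = vbar / lambda
    lor = lambda x: 1.0 / (1.0 + x ** 2)
    return 0.5 * t ** 2 * (lor(0.5 * (w - 2 * w_lam) * t) + lor(0.5 * (w + 2 * w_lam) * t))

if __name__ == "__main__":
    T, eta = 1e-6, 0.2                                       # illustrative noise parameters
    for t in (0.1e-6, 1e-6, 10e-6):
        print(f"t = {1e6*t:5.1f} us   1-F(FID) = {1 - avg_fidelity(F_fid, t, T, eta):.3f}"
              f"   1-F(precessing) = {1 - avg_fidelity(F_prec, t, T, eta):.4f}")
```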
This shift is equivalent to an intrinsic dynamical decouplingof the largest low-frequency noise.Considering pink noise with the spectral function S(ω) in Eq. (<ref>), and usingEqs. (<ref>), (<ref>), and (<ref>), we can estimate the shuttling fidelity.The complete equation is provided in Eq. (<ref>) and is shown with a solid red line in Fig. <ref>(b). For pure 1/|ω| noise (η→ 0^+) this function can be approximated byℱ̅_P≈{[ e^-t^2/T_φ^2,t≲ 1/ω_λ;e^-ω_λ^2 T_φ^2 , 1/ω_λ≲ t ≲ T_P=8ω_λ T^2/π;e^-t/T_P ,t≳ T_P ].,and nicely matches the limiting behaviour ofℱ̅_P, see dashed lines in Fig. <ref>(b). Here, T_φ is the FID dephasing time given in Eq. (<ref>). At small values of ω_λ t, corresponding to a few spin rotations during shuttling,ℱ̅_P≈ℱ̅_FID. However, if the spin experiences many rotations during shuttling and ω_λ T_φ≳ 1, the fidelity first saturates to a finite value following the interpolation function ℱ̅_P≈ e^-t^2 f_L(ω_λ t)/T_φ^2, and then decays exponentially with a longer time constant T_P that is independent of the small diverging cut-off η→ 0^+.Considering the estimated value of ω_λ/2π≈ 10 MHz and T_φ=1 μs, we find a significant improvement in the shuttling fidelity by the inhomogeneous magnetic field compared to the FID, as shown in Fig. <ref>(b), with infidelities that remain below 10^-3 for a much wider range of shuttling times. Because of the intrinsicdynamical decoupling of low-frequency noise, the inhomogeneous Zeeman fieldboosts the possible shuttling times to times a few orders of magnitude larger than the dephasing time T_φ, corresponding to a coherent shuttling over distances larger than 100 μm.We also note that while we assumed for simplicity a constant absolute value of the Zeeman frequency ω_B, see Eq. (<ref>), because the term dominating the fidelity is independent of ω_B, our results remain approximately valid also when ω_B has a spatial dependence, e.g. an additional oscillatory component with period πλ, provided that the minimal Zeeman frequency ω_B^min remains large compared to ω_λ. More details on the effects of inhomogeneous ω_B (z̅) are provided in Sec. <ref>.§.§.§ Spin nutation in inhomogeneous Zeeman fields We now show that the enhancement of fidelity by inhomogeneous Zeeman field occurs in more general cases. In particular, we study here the nutating dynamics of a moving spin in the Zeeman field ω_B^N(z̅) =ω_BR̂_N(2z̅/λ)n_3,that rotates around an inhomogeneous vector. The matrix R̂_N describes a general rotation around the oscillating unit vectorn_N(z)=n_1/√(1+A^2)+ A [cos(2z̅/λ_N) n_3 - sin(2z̅/λ_N) n_2]/√(1+A^2) .We refer to this process as to a nutationout-of-phase because the rotation of n_N is out-of-phase compared to the precessing Zeeman field, see Eq. (<ref>). The amplitude of the nutation is characterizes by the dimensionless constant A and by its period λ_N, which does not need to match the period λ of the precession. We also only consider the cases where ω_λ∼ω_N ≪ω_B, with ω_N= v̅/λ_N.Using Eq. (<ref>), we can easily evaluate ω_B^N(z̅). The components of the out-of-phase nutating Zeeman field are shown with solid lines in Fig. <ref>(a).Compared to the rotating Zeeman field ω^P_B in Eq. (<ref>), ω_B^N includes an additional component oscillating in the x-direction (red line).This oscillating term produces on average the finite homogeneous Zeeman field -ω_B A/(1+A^2), and thus ω_B^N nicely describes the effects of residual homogeneous fields in realistic experiments. 
These fields can occur because of non-zero polarizing magnetic field for electronic systems with nanomagnets <cit.> or non-fully precessing g tensors in hole nanostructures <cit.>. Here, we restrict ourselves to the case A≪ 1 and we show that in this case the shuttling fidelity is still strongly enhanced, however, we anticipate that similarly high fidelities can be engineered by increasing v̅ also when residual homogeneous field is large, as we discuss in detail in Sec. <ref>.The spin dynamics in this case is well-approximated by the time-evolution operatorU_Z^N(t)=e^-i z̅(t) n_N[2z̅(t)/λ_N]·σ / λe^-i ω_B t σ_3/2 ,describing a spin nutation. The longitudinal filter function F in Eq. (<ref>) can be evaluated numerically.In the limit of small A, we find thatF_N=F_P- δ F_N, withδ F_N≈A^2t^2/4[2f_L(ω-2ω_λ/2/t) - f_L(ω-2ω_N/2/t) .. -f_L(ω-2ω_λ+2ω_N/2/t) ]+ (ω→ -ω).Here, the notation ω→ -ω indicates that in the brackets there are three additional Lorentzian peaks obtained from the ones reported by inverting the frequency, and we neglected corrections 𝒪(A^4) and combing from oscillations at higher frequencies.The corrections δ F_Nto the precessing filter function F_P coming from out-of-phase nutation are shown with red and blue lines in Fig. <ref>(b) for different values of λ/λ_N. We observe a good agreement of the approximated Eq. (<ref>) (dashed lines) with the exact solution (solid lines). Importantly, nutation introduces sideband peaks at frequencies ω=± 2 ω_N and ω=± 2(ω_λ-ω_N) with amplitude ∝ A^2. When the period of nutation λ_N is much shorter than the period of precession λ, and λ_N≲λ/2 these sideband peaks sample noise at high frequency yielding negligible corrections to F_P (blue lines). In contrast, when λ_N≳λ/2, the sideband peaks of δ F_N occur at low frequencies.This effect results in a resonant condition at λ_N=λ, where the side peaks merge into the Lorentzian peakA^2t^2f_L(ω t/2) sampling the noise at ω=0 (red lines).In this resonant scenario, and for the 1/|ω| noise given in Eq. (<ref>), the average shuttling fidelity acquires a significant correction and becomesℱ_N≈ℱ_Pe^-t^2/T_N^2 ,with T_N=T_φ/A.This fidelity is shown in the inset of Fig. <ref>(b). Comparing to the dephasing time T_φ in Eq. (<ref>), we observe that the time constant of the Gaussian decay is enhanced by the small amplitude of the nutation A. This decay time dominates the fidelity in the long time asymptotic.The dependence of T_N on A can be understood in general by considering that at λ=λ_N the out-of-phase nutating Zeeman field in Eq. (<ref>) contains on average the homogeneous component -A n_1/(1+A^2)≈ -A^2 n_1 along the main precession axis. This residual homogeneous field causes a constant dephasing during shuttling with time constant T_φ (1+A^2)/ A = T_N+𝒪(A^2). This interpretation clearly shows that when the spin degree of freedom is moved adiabatiacally compared to the Zeeman energy, the maximal enhancement of coherence occurs for effective inhomogeneous Zeeman fields that fully rotate during shuttling. We emphasize that the worst-case scenario presented here, where λ=λ_N, also requires the nutation in Eq. (<ref>) to be out-of-phase. When the nutation is in-phase and is generated for example by the vectorn_N(z)=n_1/√(1+A^2)+ A [cos(2z̅/λ_N) n_3 + sin(2z̅/λ_N) n_2]/√(1+A^2) ,there is on average no homogeneous Zeeman field along the main precession axis, see the dashed lines in Fig. <ref>(a), and thus T_N→∞. §.§ Local noise sourcesThe noise model introduced in Eq. 
(<ref>) assumes that during the shuttling the spin experience a random time-dependent Zeeman field h(t) that is homogeneous in space. This model describes global noise sources originating for example from the fluctuation of the externally applied magnetic field orlong-range electric fields.Here, we analyse the effect of an inhomogeneous noise distribution during shuttling. We focus, in particular, on an ensemble of short-range impurities at fixed positions z=z_k, that couple to the spin via local interactions h_k(t). This model describes well nuclear spins and local dynamical charge-traps electrostatically coupled to the dot.In this case, the local noise Hamiltonian isH_N= 1/2 n_0∑_k δ(z-z_k)h_k(t)·σ ,where n_0 is the atomic density and δ(z) is the delta function.The spin confined in the moving quantum dot has a charge density |ψ[z-z̅(t)]|^2≈ e^-[z-z̅(t)]^2/l^2/l√(π) and experiences the time-dependent noiseH^L_N= 1/2 n_0∑_k |ψ[z_k-z̅(t)]|^2h_k(t)·σ . Proceeding as in Sec. <ref>, assuming an isotropic and spatially uncorrelated noise with ⟨h_k^n(t)h_k'^m(0)⟩ =δ_kk'δ_nm∫ dω e^-iω t S(ω)/2π, and using the envelope function approximation ∑_k→ν n_0∫ dz, where ν is the average percentage of defects,we find that for local noise sources the longitudinal component of the filter function modifies asF^L(ω, t)= ν/N∫_0^t dτ∫_0^tdτ'e^-iω(τ-τ') × e^-[z̅(τ)-z̅(τ')]^2/2l^2ω_B[z̅(τ)]/|ω_B[z̅(τ)]|·ω_B[z̅(τ')]/|ω_B[z̅(τ')]|where N=√(2π)l n_0 is the number of atoms in the dot. More detailed derivations of F^L, also including more general noise sources are provided in Appendix <ref>. Importantly, for local noise sources, the kernel of the filter function includes the additional weight e^-[z̅(τ)-z̅(τ')]^2/2l^2 that accounts for the locality of the noise and the spatial distribution of the spin. This term describes the motional narrowing of inhomogeneous noise during shuttling <cit.>.To illustrate its effect explicitly, we consider here the precessing Zeeman field ω_B^P given in Eq. (<ref>).The coherent dynamics of the spin is not altered and the spin precesses according to the time-evolution operator U_Z^P in Eq.(<ref>). However, the longitudinal filter function F_P^L is significantly modified. By combing Eqs. (<ref>) and (<ref>), we derive an exact solution, reported in Eq. (<ref>). In analogy to the global noise solution F_P in Eq. (<ref>), we find that F_P^L can be approximated byF_P^L≈ν/Nt/2ω_l [f_G(ω-2ω_λ/ω_l) +f_G(ω+2ω_λ/ω_l) ],with f_G(x)=e^-x^2/2 being a Gaussian normalized to f_G(0)=1. As shown in Fig. <ref>(a), we observe a good match between the exact and the approximated solutions (solid and dashed lines, respectively). While qualitativelyF_P^L and F_P show a similar behaviour with the peaks of the filter function being shifted by the finite λ to the higher frequencies ± 2ω_λ, with ω_λ=v̅/λ, we emphasize that there are a number of key differences between the two cases, see Eqs. (<ref>) and (<ref>). First, for local noise the peaks of F_P^L have a Gaussian lineshape that originates from the approximated charge density of the quantum dot |ψ(z)|^2 in contrast to the Lorentzian peaks of F_P. Moreover, the broadening of the Gaussian peaks of F_P^L is time-independent and it is determined by the characteristic frequency ω_l=v̅/l. Finally, we observe that F_P^L∝ t/ω_l, while the for global noise theF_P∝ t^2, thus strongly impacting the average shuttling fidelity ℱ̅.By considering the noise spectrum S(ω) in Eq. 
(<ref>), we find thatfor local noise sources ℱ̅_P^L =e^-2^η-6/2ν t/N TΓ(η/2) (ω_l T)^η -1_1F_1(1-η/2;1/2;-2l^2/λ ^2)≈ e^-t/T_φ^L , T_φ^L =ω_l (√(N/ν) T_φ)^2e^2l^2/λ^2 , where _1F_1(a;b;c) is the hypergeometric function. We note that when the quantum dot is static, the FID dephasing times due to global and local noise are T_φ=2T√(ν) [see Eq. (<ref>)] and √(N/ν)T_φ, respectively, where the factor √(N/ν) accounts for the average percentage of local defects in the quantum dot <cit.>. In the inset of Fig. <ref>(a), we show the average shuttling infidelity for local noise sources, comparing the homogeneous λ→∞(black curve) and precessing λ=l/2 (red curve)Zeeman field cases. In contrast to ℱ̅_P in Eq. (<ref>), the shuttling fidelity ℱ̅_P^L follows an exponential decay with the time constant T_φ^L being significantly larger than for global noise case because of the motional narrowing of local fluctuators. This effect can be clearly observed by consideringhomogeneous Zeeman fields, and observing that for typical values of ω_l∼ 10-100 MHz and √(N/ν)T_φ∼ 0.1-10 μs, T_φ^L ≳ 10 √(N/ν)T_φ also at λ→∞.The additionalspin dynamics in the inhomogeneous field produces an additional beneficial effect which isencoded in the Gaussian correction e^2l^2/λ^2 in Eq. (<ref>). As a consequence, low-frequency local noise is substantially filtered out by the inhomogeneous field inlong quantum dots with l≳λ, see Fig. <ref>(a).However, we note that in long quantum dots the effective Zeeman energy ω_B is also reduced by a weaker Gaussian correction e^-l^2/λ^2, see Eq. (<ref>), thus limiting the maximal values of the useful l/λ ratio.This trade-off is highlighted in Fig. <ref>(b) by comparing solid blue and gray lines, that represent T_φ^L and ω_B, respectively. Considering for example a typical gate pitch of πλ≈ 50 nm and realistic values of quantum dots length l≈ 20 nm, we observe a significant reduction of the noise with T_φ^L≈ 20 T_φ^L(l/λ=0), still preserving a large Zeeman gap ω_B≈ 0.2 ω̃_B at realistic values of magnetic fields ∼ 1 T. We anticipate that this trade-off between fidelity and Zeeman energy can be lifted by higher shuttling velocities that are not adiabatic with respect to the Zeeman energy, see the dashed gray curve, as we will discuss in Sec. <ref>. §.§ Charge noise in inhomogeneous Zeeman fieldsWe showed that an inhomogeneous Zeeman field dynamically decouples the moving spin from the dominant low-frequency noise, and thus provides an effective way to filter out the noise caused for example by hyperfine interactions with nuclear spins. However, more care is required to analyse its effect on charge noise, because Zeeman field inhomogeneities characterized by the ratio l/λ also render the spin susceptible to the fluctuations of the electrostatic environment, thus directly coupling the spin to these charge noise sources. For this reason, current shuttling experiments minimize the inhomogeneity of the field and operate at l/λ≪ 1.We show here that while this approach indeed provides a coherent shuttling, the inhomogeneity-induced intrinsic dynamical decoupling also enables large shuttling fidelities at l/λ≳ 1. In particular, the time scale T characterizing the noise spectral function S(ω) in Eq. (<ref>) also depends on l/λ, thus further influencing the time T_φ.To quantify this effect, we focus on the precessing spins discussed in Sec. <ref> and we include explicitly the coupling of the spin to charge noise due to the Gaussian renormalization of the Zeeman energy ω_B= e^-l^2/λ^2ω̃_B given in Eq. (<ref>). 
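Since this Gaussian renormalization governs the trade-off quoted above between the dephasing-time gain e^{2l²/λ²} and the reduction of the Zeeman energy by e^{-l²/λ²}, the numbers for a gate pitch πλ ≈ 50 nm and a dot length l ≈ 20 nm can be checked directly; the helper below is a trivial sketch of that estimate and reproduces the factor of roughly 20 and the ratio ω_B/ω̃_B ≈ 0.2 mentioned above.

```python
import numpy as np

def shuttling_tradeoff(l, lam):
    """Dephasing-time gain T_phi^L(l/lam) / T_phi^L(0) = exp(2 l^2/lam^2) versus the
    Zeeman-energy renormalization omega_B / omega_B_tilde = exp(-l^2/lam^2)."""
    x2 = (l / lam) ** 2
    return np.exp(2 * x2), np.exp(-x2)

if __name__ == "__main__":
    lam = 50e-9 / np.pi                      # gate pitch pi*lambda = 50 nm
    for l in (5e-9, 10e-9, 20e-9, 30e-9):
        gain, zeeman = shuttling_tradeoff(l, lam)
        print(f"l = {1e9*l:4.0f} nm:  T_phi^L gain = {gain:7.1f}x,"
              f"  omega_B / omega_B_tilde = {zeeman:.2f}")
```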
Focusing on a local noise source labelled by k, small randomvariations δ V_k(t) of the electrostatic environment cause fluctuations of the length l and couple directly to the Zeeman energy resulting in the noise field <cit.>h_k(t)≈ħω_B^P(z_k) [∂_V ω̃_B/ω̃_B -2l^2/λ^2∂_V l/l]δ V_k(t) .Here, we introduced the susceptibilities ∂_V l and ∂_V ω̃_B of the length l and of the local Zeeman field ω̃_B to variations in the environment.Moreover, we assumed that charge defects have a local effect on the spin, see Eq. (<ref>), however we point out that a similar noise Hamiltonian can be derived for global noise sources; corrections coming from intermediate-range noise are discussed in Appendix <ref>.Introducing now the pure 1/f charge noise spectral density S_δ V(ω)= V̅^2/|ω|, such that ⟨δ V_k(t)δ V_k'(0)⟩ =δ_k k'∫ dω S_δ V(ω) /2π, we find the functional dependence of the time scale T in Eq. (<ref>) to beT= √(2π)/V̅ω̃_Be^-l^2/λ^2|∂_V ω̃_B/ω̃_B -2l^2/λ^2∂_V l/l|^-1 .Away from sweet spots where the T→∞ <cit.> and by combining Eqs. (<ref>) and (<ref>) we findT_φ^L≈T_0 λ^4/l^4e^4l^2/λ^2 , withT_0=2πηN/νω_l/ω̃_B^2l^2/V̅^2(∂_V l)^2 .We discarded here the term ∝∂_V ω̃_B that is independent of l/λ and is therefore clearly filtered out by the inhomogeneous field.The functional dependence of the average shuttling fidelity on the inhomogeneity of the Zeeman field is illustrated in Fig. <ref>(b) with a blue curve. As expected, for small values of l/λ when the Zeeman field is rather homogeneous, the time constant T_φ^L determining the shuttling fidelity decreases as ∝ l^4/λ^4, resulting in an lower shuttling fidelity.This power law is related to the typical scaling of the FID dephasing time T_φ∝ l^2/λ^2∝√(T_φ^L) <cit.>; wenote that also relaxation processes scale as the square of the inhomogeneity <cit.>. In this regime, the noise is dominated by the variations of ∝∂_V ω̃_B or by nuclear spin noise that are independent of λ. However, if the Zeeman field inhomogeneity is large and l/λ≳ 1, the induced intrinsic dynamical decoupling of the spin becomes effective and rapidly increases the shuttling fidelity.This same trend occurs also as a function of SOI, as we show in the following section.§SPIN-ORBIT INTERACTIONThe SOIcauses a spin rotation depending on the velocity of the particle.This effect is captured by the term v(z) in H_1D (<ref>) and is strongly enhanced in hole nanostructures, where the SOI are large and cause full spin rotations in lengths of a few tens of nanometers <cit.>. We show here that SOI generally produces an inhomogeneous fully-rotating Zeeman field matching the ones analysed in Sec. <ref>.To highlight the role of SOI, we rewrite Eq. (<ref>) asH_1D=[p-m v(z)·σ]^2/2m+U(z)+ ħω̃_B(z)·σ/2 ,with U(z)=m[ω_o^2(z-z̅)^2-|v(z)|^2]/2. We remove the SOI by the exact unitary transformationS= 𝒫exp(im/ħ∫_0^z dsv(s)·σ) satisfying S^† [p-m v(z)·σ] S= p, and where 𝒫exp is the path-ordered exponential.Generally, S describes an inhomogeneous spin rotation around a local axis. To find an explicit expression for this rotation, we restrict our analysis to SOI of the form v(z)=v_s n_s +δv(z).By introducing the SOI length λ_s=ħ/m v_s, we findS=e^i zn_s·σ/λ_s e^i ϕ_s(z)·σ .For sufficiently small δv(z)/v_s, i.e. the individual components of the inhomogeneous term are bounded bym∫_0^zds δv_j(s)/ħ<π, the phases ϕ_s(z) can be estimated by a second order Magnus expansion <cit.> asϕ_s(z)≈m/ħ∫_0^z ds δṽ(s)+ m^2/ħ^2∫_0^z ds ∫_0^s ds' δṽ(s)×δṽ(s'),with δṽ(z)=R̂_s(2z/λ_s)δv(z); R̂_s is a rotation matrix by the fixed SOI axis n_s. 
Here, the first integral term captures the effect of a varying amplitude of the SOI, while the second term captures the first correction due to a small tilting of the vector of SOI.We note that for SOI with a constant directionv(z)= v_s(z) n_s, Eq. (<ref>) is exact and the second integral vanishes.Projecting the transformed Hamiltonian onto the moving charge state of the quantum dot |ψ(z-z̅)|^2, we find a spin model H_Z=ħω_B[z̅(t)]·σ/2, analogous to Eq. (<ref>), with effective Zeeman fieldω_B[z̅]=∫ dz |ψ(z-z̅)|^2R̂_δ(z)[ϕ_s(z)] R̂_s[2z/λ_s] ω̃_B(z),with ϕ_s(z)=ϕ_s(z)δn(z); the matrix R̂_δ(z) describes a general rotation around the local axis δn(z). We now examine different cases. To highlight the effect of SOI, we restrict ourselves to the analysis of a homogeneous Zeeman field, and we consider an homogeneous Zeeman field ω̃_B(z)=ω̃_B. §.§ Spin precession in homogeneous SOIWe first consider the homogeneous SOI v_H(z)= v_sn_s .The effective Zeeman field then reduces to <cit.>ω_B(z)=ω̃_B^∥+ e^-l^2/λ_s^2R̂_s(2z/λ_s) ω̃_B^⊥ ,where ω̃_B^⊥,∥ are the component of the Zeeman field perpendicular and parallel to n_s, respectively.If the SOI and Zeeman vectors are aligned [n_s∥ω̃_B and ω_B(z)=ω̃_B^∥], the spin rotates around a fixed axisand the noise filter function reduces to F_FID in Eq. (<ref>) as discussed in Sec. <ref>. In contrast, if theSOI and Zeeman vectors are perpendicular to each other [n_s⊥ω̃_B and ω_B(z)= e^-l^2/λ_s^2R̂_s(2z/λ_s) ω̃_B^⊥], the spin precesses around an effective Zeeman field rotating around a fixed axis in analogy to ω_B^P in Eq. (<ref>), see Sec. <ref>. In this case, the period of the rotation of the effective Zeeman field is determined by the SOI length λ_s; the dynamics of the spin is then given by the time-evolution operator U_Z^P in Eq. (<ref>).However, because of the transformation S in Eq. (<ref>), the response of the system to noise differs from the one discussed in Sec. <ref>.In this case, there is an important difference between global and local noise sources.For the global noise modeled by H_N in Eq. (<ref>),the transformation S rotates the global stochastic vector h as h→ e^-l^2/λ_s^2R̂_s(2z̅/λ_s)h. Because this additional rotation compensates for the spin dynamics, the low-frequency noise is not filtered out and F=e^-2l^2/λ_s^2F_FID. This change results in the rescaling of the dephasing time T_φ→ T_φ e^l^2/λ_s^2, i.e., the dephasing time is increased inverse proportionally to the Zeeman energy renormalization, see Eq. (<ref>).In contrast, for local noise sources, H_N^L in Eq. (<ref>) transforms as h_k→R̂_s(2z_k/λ_s)h_k. Because the rotations in this case are local, the noise response of this system is described by the longitudinal filter function F^L_P in Eq. (<ref>),resulting in the average shuttling fidelity ℱ̅_P^Lgiven in Eq. (<ref>). More details on this different noise response, including a general derivation for intermediate-range noise are provided in Appendix <ref>. §.§ Spin nutation in inhomogeneous SOI Our general theory describes small variation of the SOI direction during shuttling.Such variations can arise in hole nanostructures for example in planar germanium andsilicon fin field-effect-transistors because of gate-induced strain and electric field modulations which can impact amplitude and direction of the SOI field <cit.>. These variations are captured by the additional phases ϕ_s in the transformation S, see Eq. (<ref>). 
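Before specializing to the concrete precessing-SOI example below, we note that both the path-ordered exponential S and the residual phases ϕ_s can be evaluated numerically for an arbitrary profile v(z). The sketch below uses our own discretization (an ordered product of small-step exponentials, with the convention dS/dz = i m v(z)·σ S/ħ and n_s taken along x) and an assumed example profile; it extracts ϕ_s by stripping off the homogeneous rotation, and is valid as long as |ϕ_s| stays away from π, where the matrix-logarithm branch cut matters.

```python
import numpy as np
from scipy.linalg import expm, logm

sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
pauli = lambda v: sum(v[k] * sig[k] for k in range(3))

def path_ordered_S(z, v_of_z, m_eff, hbar=1.0545718e-34):
    """S(z) = Pexp(i m/hbar int_0^z v(s).sigma ds), built slice by slice from z = 0."""
    S, out = np.eye(2, dtype=complex), []
    out.append(S)
    for z0, z1 in zip(z[:-1], z[1:]):
        vmid = v_of_z(0.5 * (z0 + z1))
        S = expm(1j * m_eff / hbar * (z1 - z0) * pauli(vmid)) @ S
        out.append(S)
    return out

def residual_phases(z, S_list, lam_s):
    """phi_s(z) from the factorization S = exp(i z sigma_x / lam_s) exp(i phi_s.sigma)."""
    phis = []
    for zi, Si in zip(z, S_list):
        rest = expm(-1j * zi / lam_s * sig[0]) @ Si          # strip the homogeneous part
        gen = logm(rest) / 1j                                # = phi_s . sigma (Hermitian)
        phis.append([0.5 * np.real(np.trace(gen @ s)) for s in sig])
    return np.array(phis)

if __name__ == "__main__":
    m_eff, v_s, A, lam_N = 0.1 * 9.109e-31, 1e4, 0.1, 60e-9  # illustrative values
    lam_s = 1.0545718e-34 / (m_eff * v_s)
    v = lambda s: v_s * np.array([1.0, 0.0, A * np.cos(2 * s / lam_N)])
    z = np.linspace(0.0, 200e-9, 2001)
    phi = residual_phases(z, path_ordered_S(z, v, m_eff), lam_s)
    print("max |phi_s| over 200 nm:", np.round(np.max(np.linalg.norm(phi, axis=1)), 3), "rad")
```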
To illustrate this effect, we consider a concrete example where the SOI precesses as v_N(z)= v_s [n_1 - A sin(2 z/λ_N)n_2+ A cos(2 z/λ_N)n_3 ].The precession of the SOI has a largely different effect than the precession of the inhomogeneous Zeeman field in Eq. (<ref>).By using Eq. (<ref>), we find that when A≪ 1 the inhomogneneous SOI leads to the phasesϕ_s≈A^2 [sin (2 kz)-2 kz]/4k^2λ_s^2n_1-Asin ^2(kz) /kλ_sn_2+Asin (2 kz) /2 k λ_sn_3,where we define the wavevector k=1/λ_s+1/λ_N.This equation remains rather accurate for large values ofz ≲1/kA^2, as we show in Fig. <ref>(a) by comparing this approximation to the numerical integration of the path ordered exponential in Eq. (<ref>). We focus on the homogeneous Zeeman field ω̃_B= ω̃_Bn_3 that is perpendicular to the constant component of the SOI. For simplicity, we now restrict ourselves to the case λ_s=λ_N; we will lift this fine-tuned condition later. From Eq. (<ref>), we find the effective Zeeman energy ω_B^N (z̅) =ω_B R̂_1(2z̅/λ)+ δω_B (z̅),δω_B(z̅)/ω_B ≈A^2 z̅/ 2kλ_s^2[cos(2z̅/λ_s)n_2+sin(2z̅/λ_s)n_3 ] . The first term in ω_B^N is equivalent to ω^P_B in Eq. (<ref>) and includes both the renormalization of Zeeman energy ω_B= e^-l^2/λ_s^2ω̃_B and SOI-induced rotation R̂_s=R̂_1.The correction to the effective field δω_B arising from the precession of SOI vector is shown in Fig. <ref>(b). We note that the largest correction originates from the term ∝z̅n_1 of ϕ_s in Eq. (<ref>), which increases linearly with z̅, and produces the simple approximate expression provided in Eq. (<ref>). Focusing on local noise sources, the additional local rotation of the Zeeman field δω_B caused by the inhomogeneity of the SOI modifies the longitudinal filter function asF_N^L =F_P^L+ δ F_N^L,δ F_N^L=A^2 ν t/4ω_l k λ_s^2 N[f_G'(ω-2ω_λ/ω_l) -f_G'(ω+2ω_λ/ω_l) ], where F_P^L is given in Eq. (<ref>), the frequency shift is ω_λ=v̅/λ_s, and we introduced the first derivative of the function f_G as f_G'(x)=-x e^-x^2/2.We show the variation δ F_N^L of the filter function caused by the inhomogeneous SOI inFig. <ref>(c).Compared to the homogeneous SOI case, F_N^L acquires only a small correction which scales with A^2 and is centered at ω± 2ω_λ. Interestingly, because of the linear increase of the Zeeman field ∝z̅, the Gaussian shape of the peaks is modified by a polynomial correction. We anticipate that a similar polynomial renormalization appears also when the moving spin is resonantly driven, as we discuss in Sec. <ref>. We note that the corrections caused by the SOI precession are negligible in the regime considered. In contrast to the case of precessing Zeeman field and global noise discussed in Sec. <ref>, they only quantitatively renormalise the exponential decay of shuttling fidelity. In particular, the SOI precession renormalizes the decay rate as1/T_φ^L→1/T_φ^L(1+ A^2 l^2/k λ_s^3)= 1/T_φ^L(1+ A^2 l^2/2 λ_s^2),where T_φ^L is defined in Eq. (<ref>).We now examine the case λ_s≠λ_N, the inhomogeneous Zeeman field ω_B^N in Eq. (<ref>) acquires the additional correctionA ω̃_B/2 k λ_s[e^-l^2/λ_N^2cos (2 z̅/λ_N)-e^-l^2/λ_s^2cos (2 z̅/λ_s)] n_1,that is linear in A and is aligned to the homogeneous SOI direction.This term causes extra peaks in the longitudinal filter function δ F_N^L=A^2 tν/16ω_l k λ_s^2 N [f_G(ω± 2ω_N/ω_l) -f_G(ω± 2ω_λ/ω_l) ],with ω_N=v̅/λ_N.These peaks are qualitatively similar to the ones in F_P^L given in Eq. 
(<ref>), and only provide an additional correction to the decay rate ∝ A^2.§RESONANT DYNAMICAL DECOUPLINGThe average shuttling fidelity can be further enhanced by appropriately engineering the trajectory of the spin while shuttled. As anticipated, by rendering the quantum dot motion non-adiabatic with respect to Zeeman field, but still slow compared to the orbital splitting, the resonantly-induced deterministic spin dynamics more effectively filter out the low frequency noise, thus resulting in higher shuttling fidelities.In particular, we propose two different approaches: a fast time-modulation of the position of the quantum dot, and a fast shuttling in a weakly inhomogeneous Zeeman field. In this section, we restrict our analysis to shuttling experiments where the spin moves in a precessing inhomogeneous Zeeman fields and focus on local noise sources. As discussed in Sec. <ref>, this case is equivalent to a system with an homogeneous SOI.§.§Fast time-modulated position §.§.§ General solutionWe consider a time-modulated position of the quantum dotz̅(t)=v̅ t+Z cos(ω_d t),which is modulated with an additional signal with amplitude Z and frequency ω_d. We restrict to small resonant modulation with Z≪ l and ω_d∼ω_B. This additional driving term in the spin position can be experimentally achieved by appropriately designing the ac pulses of a conveyer-mode shuttler, and could be implemented in electronic systems with nanomagnets and in hole nanostructures. The additional small driving term induces resonant dynamics in the spin degrees of freedom, thus lifting the adiabaticity condition compared to the Zeeman energy discussed in Sec. <ref>, however because ω_B≪ω_o, we still consider the motion to be adiabatic compared to the orbital degree of freedom. In particular, we note that this system is still well-described by the Hamiltonian H_Z in Eq. (<ref>), up to small corrections of order ω_B^2/ω_o, that are derived in Appendix <ref>. Because of the fast modulation, however, the adiabatic time evolution operator U_Z provided inEq. (<ref>) does not describe accurately the time-evolution of H_Z.In this case, the neglected dynamical term iħ U_Z^†∂_t U_Z is relevant and induces additional resonant spin dynamics. By applyingthe transformation U_Z^TM=e^-iθ_B(z̅)n_B(z̅)·σ/2e^-iω_d tσ_3/2 to H_Z in Eq. (<ref>), we find the effective HamiltonianH_TM=ħΔ(z̅)/2σ_3- ħ∂_t z̅/2δθ_B(z̅) R̂_3(ω_d t)·σ .For convenience, here the second transformation in U_Z^TM moves the system to a frame rotating at the frequency of the drive ω_d rather than at the Zeeman frequency as in U_Z in Eq. (<ref>). We also introduce the detuning Δ(z̅)=|ω_B(z̅)|-ω_d and the vectorδθ_B=θ_B'n_B +θ_B (n_B·n_B') n_B+sin(θ_B)(n_B×n_B')×n_B +[1-cos(θ_B)] (n_B×n_B'),which is derived by using Eq. (<ref>) and the equality e^-X(∂_t e^X)=∫_0^1 ds e^-sX (∂_t X) e^sX.§.§.§ Resonant rotating Zeeman fieldThe Hamiltonian in Eq. (<ref>) holds generally.However, to clearly illustrate the effect of the small additional driving in Eq. (<ref>), we now focus for concreteness on the rotating Zeeman field ω_B^P defined in Eq. (<ref>). In this case, the Hamiltonian H_TM simplifies asH_TM/ħ =Δ/2σ_3- [ω_λ e^-i ω_d t-iΩ/2(1-e^-2i ω_d t)] σ_+ +h.c.≈Δ/2σ_3- Ω/2σ_2. We defined here the Rabi frequency Ω=ω_d Z/λ≪ω_d, and we note that, in the second line, we used the conventional rotating wave approximation and neglected terms rotating at the fast frequency ω_d ≫Δ, ω_λ, Ω. We also introducedσ_±=σ_1± iσ_2 and h.c. 
indicates the hermitian conjugate.The Rabi frequency Ω induces an additional rotation of the moving spin.At resonance Δ=0, the spin dynamics in the rotating frame is captured by the unitary time evolution U_Ω= e^i Ω tσ_2 /2, and thus in the original frame U_TM=e^-iθ_B[z̅(t)]n_B[z̅(t)]·σ/2e^-iω_d tσ_3/2e^i Ω tσ_2 /2e^iθ_B(0)n_B(0)·σ/2 .The time evolution of the spin expectation values obtained starting from a spin state originally in the groundstate |↓⟩ are provided in Fig. <ref>(a). Even a small driving term Z≪ l produces non-trivial spin dynamics, as we observe by comparing the solid and dashed curves, that correspond to the case Z=0.01 l and Z=0, respectively. The spin dynamicsin the resonant case presents fast oscillations with frequency ω_d weighted by envelopes oscillating at the smaller frequencies Ω and ω_λ.This non-trivial deterministic spin dynamics also strongly modifies the response of the qubit to noise. First, with a finite the Rabi driving, the dominant longitudinal component of the filter function is aligned to then_2 direction, thus leading toF= ν/N∫_0^t dτ dτ' e^-[z̅(τ)-z̅(τ')]^2/2l^2e^iω(τ'-τ) [R̂_Z^T(τ)R̂_Z(τ')]_22 .In contrast to Eq. (<ref>), the kernel of the integral depends on R̂_Z(t) n_2= R̂_1[2z̅(t)/λ]R̂_3[ω_d t]n_2, and oscillates at the high frequency ω_d=ω_B.For this reason, one might expect F to be peaked at high frequencies. However, we emphasize that the wavefunction contribution e^-[z̅(τ)-z̅(τ')]^2/2l^2 also oscillates at frequency ω_d, because z̅(t) contains the rapidly oscillating term ∝ Z, see Eq. (<ref>), and thus F has finite weight also at low frequency, where the noise is the largest.The exact filter function obtained by integratingEq.(<ref>) is shown in Fig. <ref>(b) with a solid red line. The integral can be performed analytical for small values of Z, but the results are lengthy and we do not report them here. However, we note that by focusing on thedominant low-frequency terms, F is well approximated by F≈Z^2/l^2 ν/Nt/8 ω_l ω^2/ω_l^2[f_G(ω-2ω_λ/ω_l) +f_G(ω+2ω_λ/ω_l) ],see the dashed orange line in Fig. <ref>(b). Importantly, because of the resonant dynamical decoupling, the low-frequency noise is efficiently filtered out by the additional polynomial factor ω^2 in the filter function.The polynomial factor ω^2 in F yields the exponential decay of the average shuttling fidelityℱ̅_TM =e^-t/T_φ^TM ,T_φ^TM ≈4l^2/ηZ^2 T_φ^L [1+ √(2 π) l /λe^2 l^2/λ ^2erf(√(2) l/λ)]^-1 . Compared to the case withZ=0 where the time scale is T_φ^L in Eq. (<ref>), T_φ^TM is substantially enhanced by the large factor l^2/η Z^2≫ 1. The dependence of time constant T_φ^TM on the inhomogeneity of the field l/λ is illustrated in the inset of Fig. <ref>(b). Strikingly, the decay time T_φ^TM is significantly larger than T_φ^L when the Zeeman field is not strongly inhomogeneous l/λ≲ 1, but it becomes smaller at l/λ≳ 1. The enhancement in average shuttling fidelity induced by the time-modulation of the position can be clearly observed in Fig. <ref>(c) by comparing black and blue curves.At small values of l/λ (solid lines), there is a substantial improvement in the coherence of the shuttling process that is due to the resonant dynamical decoupling induced by Z. In contrast to the Z=0 case, where l/λ≳ 1 is required to filter out low frequency noise (dashed lines), the time-modulation enables a high shuttling fidelity also in the regime where the Zeeman energy is weakly renormalized by the factor e^-l^2/λ^2. We also note that the high-frequency components ofF in Eq. 
(<ref>) produce the additional high-frequency termsF_HF≈ν/Nt /2ω_l [f_G(ω-ω_B/ω_l) +f_G(ω+ω_B/ω_l) ],whose functional form resembles Eq. (<ref>), but with shifted frequency 2ω_λ→ω_B. These corrections modify the fidelity as ℱ̅_TM→ℱ̅_TM e^-t/T_φ^B, with time constant T_φ^B= T_φ^L e^-2l^2/λ^2+ ω_B^2/ω_l^2≫ T_φ^TM for small values of ω_l/ω_B≪ 1.§.§.§ Finite detuning and phase drivingAn homogeneous detuning Δ in H_TM in Eq. (<ref>) tilts the Rabi rotation by an angle φ=arctan(Δ/Ω) around the n_1-axis and speeds up the Rabi frequency by Ω→√(Ω^2+Δ^2).The detuning causes incomplete Rabi oscillations with probability P=Ω^2/(Ω^2+Δ^2) and the typical Rabi chevron pattern measured in Rabi experiments. Assuming a large driving field Ω compared to Δ, the angle φ≈Δ/Ω≪ 1 causes the appearance of a competing decay time for the average shuttling fidelity in Eq. (<ref>) that modifies as ℱ̅_TM→ℱ̅_TM e^-t/T_φ^Δ, withT_φ^Δ≈ T_φ^L/ φ^2 +𝒪(φ^3),where the decay time T_φ^L of adiabatic shuttling is given in Eq. (<ref>).First, comparing T_φ^Δ to T_φ^TM, we find thatT_φ^Δ dominates when the Zeeman field is largely inhomogeneous l/λ≳ 1 and when the power spectrum S(ω) of the noise strongly deviates from the 1/|ω| trend, i.e. at large values of η in Eq. (<ref>).However, even in this case, we emphasize that sufficiently close to resonance [φ≪ 1] T_φ^Δ≫ T_φ^L thus showing that time-modulation provides a substantial advantage compared to adiabatic driving. We also point out that an inhomogeneous detuning Δ(z̅), which can originate in experiments from local modulations of the g factor or the magnetic fields, in general only results in an additional a small correction to the fidelity in Eq. (<ref>).In particular, we focus here on the following detuningΔ[z̅(t)]= Δ_0+ Δ_1 cos[2z̅(t)/λ_Δ]≈Δ_0+Δ_1 cos(2ω_Δ t),where we introduced ω_Δ=v̅/λ_Δ and Δ_0=ω_B-ω_d, with ω_B being the average Zeeman energy during shuttling.With this inhomogeneous detuning, the Hamiltonian H_TM in Eq. (<ref>) is modified to the phase driving Hamiltonian <cit.>H_PD≈ħω_B/2σ_3+ħΔ_1/2cos(2ω_Δ t)σ_3- ħΩsin(ω_d t)σ_1,where the driving field has two tones and couplea to the transversal (Rabi driving ∝Ωσ_1) and longitudinal (phase driving ∝Δ_1 σ_3) spin degrees of freedom. For clarity, here we report the Hamiltonian before performing the rotating frame transformation e^-iω_d t σ_3/2 of U_Z^TM, i.e. without the rotation R̂_3(ω_d t) in H_TM, see Eq. (<ref>).As demonstrated in Ref. <cit.>, in general cases only off-resonant phase driving, with frequency ω_Δ∼Ω, significantly impacts the spin dynamics. For this reason, in Eq. (<ref>), we discarded fast rotating phase driving terms oscillating at frequencies ω_d. In contrast, Rabi driving only impacts the spin dynamics when close to resonance ω_d∼ω_B, and for this reason we neglect slowly rotating Rabi driving terms oscillating at frequencies ω_l.For small values of the modulation Δ_1≲ω_Δ, the effect of phase driving is negligible and one can safely operate at Δ_0=0, i.e., by using a microwave pulse resonant with the average of the inhomogeneous Zeeman energy ω_B. For larger values of Δ_1≳ω_Δ, phase driving introduces additional interesting dynamics in the spin evolution <cit.>. First, operating at a finite Δ_0, enables additional resonant dynamics of the spin at Δ_0=± 2 m ωΔ, with integer m, where the Rabi frequency is rescaled by Ω J_m(Δ_1/ω_Δ).Here J_m(x) is the m^th Bessel function. This additional resonant dynamics will also effectively filter out low-frequency noise. Moreover, as discussed in Ref. 
<cit.>,even for small values of Δ_1≲ω_Δ, by fine tuning the Rabi frequency to Ω∼ 2ω_Δ, we expect that additional resonant dynamics could substantially enhance the filtering out of dominant noise sources, further improving the average shuttling fidelity.§.§.§ Precessing Zeeman fieldFinally, we discuss the role of a precessing Zeeman field when the position is time-modulated. These effects can be nicely described by our theory and in particular by H_TMin Eq. (<ref>).By considering for concretenessthe precessing Zeeman field ω_B^N inEq. (<ref>) which enables spin nutation, and using Eq. (<ref>), we find thatfor small values of A, the driving term in H_TM modifies asδθ_B=2/λ[n_1 - Asin(4 z̅/λ)n_2+ Acos(4 z̅/λ)n_3] + 𝒪(A^2),where we only kept the terms to linear order in A, and we restrict ourselves to the analysis of the case λ=λ_N.In this case, there are two leading corrections to the spin dynamics.In particular, we note that the last contribution in the expansion gives rise to a phase driving, see Eq. (<ref>), with frequency 4ω_λ and amplitude 2ω_λ A. As argued in Sec. <ref>, for small values of A and far from the fine-tuned resonance condition 4ω_λ∼Ω, this term has negligible effect. Moreover, the transversal term comprises a far detuned pulse with frequency 4ω_λ that does not significantly contributeto the spin dynamics and the frequency-modulated nearly-resonant term -ω_d AZsin(ω_d t)sin(4 ω_λ t). In the RWA, this term yields an additionaltransversalRabi driving.When off-resonant and Ω≫ 4 ω_λ, this term is negligible and thus we do not explore it further in this work. Interestingly, however, we envision that this frequency-modulated driving could provide an additional effective filteringof the noise, that is analogous to the frequency modulation in SMART dressed qubit protocols in global fields <cit.>.An optimized pulse-shapingcould also further enhance the fidelity <cit.>. §.§ Fast shuttling in weakly inhomogeneous fieldsIn Sec. <ref>, we showed that fully rotating Zeeman field enables an effective way to intrinsically dynamically decouple a shuttled spin from low-frequency noise, thus resulting in a high shuttling fidelity.In particular, we focused on particles moving adiabatically with a constant velocity v̅≪ |ω_B| l ≪ω_o l, which is small compared to both Zeeman and orbital energy.We also demonstrated in Sec. <ref> that the shuttling fidelity can be further improved by adding a small time-dependent modulation which is non-adiabatic with respect to |ω_B|, but still adiabatic compared toω_o. Here, we show that a substantialimprovement in fidelity also occurs for incomplete rotations of the Zeeman field, when the constant shuttling velocity is non-adiabatic compared to the Zeeman field |ω_B|, but remains adiabatic compared toω_o.For concreteness, we consider the weakly inhomogeneous Zeeman field ω_B^D(z̅)= ω_B [A cos(2 z̅/λ)n_1+ A sin(2 z̅/λ)n_2 +n_3 ], with A≪ 1 and a constant velocity motion with z̅=v̅ t. This Hamiltonian accurately describes a residual homogeneous magnetic field in electronic systems with nanomagnets <cit.> and hole heterostructures, for example in planar germanium, presenting an incomplete tilting of the g-tensor <cit.>. When v̅ is adiabatic compared to the Zeeman field v̅≪ω_B l, the shuttling fidelity is dominated by the dephasing accumulated by the homogeneous component of the Zeeman field aligned along n_3, see Sec. <ref>. 
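As a numerical illustration of the dynamics generated by this field, the spin precession equation d⟨σ⟩/dt = ω_B^D[z̅(t)]×⟨σ⟩ can be integrated directly; the short Python sketch below (scipy assumed; the values of A and of v̅/λ are illustrative and not taken from a specific device) anticipates the resonant behaviour analysed in the remainder of this subsection.

import numpy as np
from scipy.integrate import solve_ivp

wB   = 1.0          # Zeeman frequency (sets the unit of time)
A    = 0.05         # small tilt amplitude of the weakly inhomogeneous field
wlam = 0.5 * wB     # omega_lambda = vbar/lambda; 2*wlam = wB is the resonant choice

def bloch(t, s):
    # omega_B^D(zbar(t)) with zbar = vbar*t, so 2*zbar/lambda = 2*wlam*t
    phase = 2.0 * wlam * t
    w = wB * np.array([A * np.cos(phase), A * np.sin(phase), 1.0])
    return np.cross(w, s)

t = np.linspace(0.0, 400.0 / wB, 4000)
sol = solve_ivp(bloch, (t[0], t[-1]), [0.0, 0.0, 1.0], t_eval=t, rtol=1e-8)
s3 = sol.y[2]
# On resonance, s3 undergoes slow oscillations at the Rabi frequency Omega = A*wB;
# moving 2*wlam away from wB makes the oscillations incomplete.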
However, here we focus instead on a different case, where v̅∼ω_B l and show that in this case there are resonant conditions for v̅ that can substantially filter out low-frequency noise, still providing a large enhancement in the shuttling fidelity.We note that, as derived in Appendix <ref>, in this case corrections to Eq. (<ref>) are ∝ A ω_λω_B/ω_o, with ω_λ=v̅/λ and remain negligible compared to the leading terms ∝ Aω_B also in this case. The resonance condition in this case is straightforwardly recognizable by moving to a rotating frame with frequency 2ω_λ by the transformation e^-iω_λ tσ_3. In this frame, we immediately recognize the time-independent Rabi HamiltonianH_D=ħΔ/2σ_3+ ħΩ/2σ_1,Ω=Aω_B ,Δ=ω_B-2ω_λ ,describing Rabi oscillation of the spin with Rabi frequency Ω at the resonance Δ=0, see also Eq. (<ref>).Wefocus now on local noise sources and we can straightforwardly verify that the longitudinal component of the filter function F_D^L is equivalent to F_P^Lthat is given in Eq. (<ref>).We then find the shuttling fidelityℱ̅_D^L≈ e^-t/T_φ^D , T_φ^D= ω_l (√(ν/N) T_φ)^2 e^2l^2/λ^2 .This result is equivalent to Eq. (<ref>), however, we emphasize that because in this case the Zeeman field is not fully rotating, the Zeeman energy ω_B is not rescaled by the small prefactor e^-l^2/λ^2. As a result at large values ofthe ratio l/λ≳ 1 the Zeeman energy remains large, while the fidelity is rapidly improved. This critical difference between this approach and the one in Sec. <ref> is clearly illustrated by comparing the dashed and solid gray lines in Fig. <ref>(b), corresponding to the Zeeman fields in the two situations, which yield the same shuttling fidelity (blue line). We stress that the condition Δ=0 is within reach of current experiments. Considering ω_B/2π=1 GHz and πλ=50 nm, which corresponds to either the spacing between neighbouring nanomagnets for electronic systems or the gate spacing determining the tilt of the g-factor in planar hole heterostructures, we find that high-fidelity shuttling can be achieved at feasible velocities v̅=ω_B λ/2=50 m/s <cit.>. Our protocol thus enables at the same time high-fidelity and fast shuttling even in the presence of residual large homogeneous Zeeman fields.§CONCLUSIONIn this work, we showed that the fidelity of the spin shuttling can be substantially enhancedby engineering highly inhomogeneous Zeeman fields.We related this surprising effect to the non-trivial deterministic dynamics of the spin during its motion, which filters out the dominant low-frequency components of the noise. This intrinsic dynamical decoupling of low-frequency noise is a general feature that appears in a wide variety of relevant experimental cases, including hole nanostructures in silicon and germanium as well as in electronic systems with artificial spin-orbit fields induced by micromagnets. We propose a framework to describe many scenarios where spins are shuttled in an inhomogeneous Zeeman field caused by rotation of principal axes of g-tensors, inhomogeneous magnetic field, and SOI. We also include a detailed analysis of different sources of noise, that affect the shuttled spin in a global or local way. Despite some qualitative and quantitative differences in these cases, we confirm that an inhomogeneous Zeeman fieldimproves shuttling fidelity independent of the noise locality. 
We also propose protocols where the spin is moved non-adiabatically compared to the Zeeman energy, that enable further dynamical decoupling of low-frequency noise and thus can significantly improve the coherence of shuttling. Our findings clearly demonstrate that highly efficient shuttling can be reached in materials with large SOI and inhomogeneous Zeeman fields, and that these systems are not only ideal hosts for compact spin-qubit architectures, but also for long-range spin qubit connectivity, and are thus ideal candidates for future large-scale quantum computers. We thank Andreas Fuhrer, Michele Aldeghi, Andras Palyi, Bence Hetényi, and Maria Spethmann for useful discussions. This work was supported by the Swiss National Science Foundation, NCCR SPIN (grant number 51NF40-180604), andthe Georg H. Endress Foundation.§ NON-ADIABATIC CORRECTIONS We now discuss in more detail the condition of adiabaticity of the quantum dot motion compared to the orbital degrees of freedom. We stress again that in the main text, we occasionally lift the condition of adiabaticity compared to the Zeeman field ω_B/2π≲ 10 GHz, but the shuttling is always adiabatic compared to the orbital splitting ω_o/2π≳ 1 THz. First, we derive with a simple perturbative treatment the expected corrections to the model presented in the main text, and then we verify these corrections by showing that they match an exactly solvable simple case. §.§ Perturbative treatmentOur derivations in the main text always assume that the quantum dot motion remains adiabatic compared to the orbital degree of freedom. We now discuss the validity of this approximation by using a simple model, that includes perturbatively the contribution of the next excited orbital state.In particular, we now include in our derivation of Eq. (<ref>) the effect of the neglected dynamical term -i p ∂_t z̅, originating from the time-dependence of the state ψ[z-z̅(t)].The expectation value of this term in the ground state vanishes. However, this term provides a coupling to the first excited state ψ_1[z-z̅(t)]; assuming a harmonic potential ψ_1(z)=H_1(z/l) e^-z^2/2l^2/π^1/4√(2 l) with H_1 being the first Hermite polynomial. Theeffective Hamiltonian acting on these two states isH= ħ( [ω_B(z̅)·σ/2 -l∂_z̅ω_B(z̅)·σ/2√(2) + ∂_tz̅σ_0/√(2)l; -l∂_z̅ω_B(z̅)·σ/2√(2)+∂_tz̅σ_0/√(2)lω_B^1(z̅)·σ/2 + ω_o σ_0 ]) ,wherewe introduce ω_B^1(z̅)=∫ dz |ψ_1(z)|^2ω̃_B(z+z̅). We also use the relation ∫ dz ψ(z)ψ_1(z)ω̃_B(z+z̅)=-l∂_z̅ω_B(z̅)/√(2), valid for harmonic potential eigenfunctions and straightforward to derive using the Rodrigues formula defining the Hermite polynomials.By using second order perturbation theory, we find the effective Hamiltonian for the ground state H_=ħ/2[ω_B(z̅)+∂_t z̅/ω_o∂_z̅ω_B(z̅)]·σ .The corrections arising from the orbital non-adiabaticity of the motion scale with ∼ω_λω_B/ω_o. In our work, these corrections are most significant when we lift the Zeeman field adiabaticitycondition, in which case ∼ω_B^2/ω_o, and they still produce small terms that are quadratic in the magnetic field. §.§ Exact solution with SOIHere, we validate the perturbative results just derived by presenting an exact solution for the time-dependent Schrodinger equation, which fully accounts for non-adiabatic corrections. This solution describes a spin confined in a quantum dot moving in a homogeneous Zeeman aligned to a possibly time-dependent SOI field with a fixed direction. 
We consider the following one-dimensional HamiltonianH=p^2/2m+mω_o^2/2z^2+ v(t) p σ_3+ mω_o^2 z̅(t) z+ ħω_B /2σ_3 ,where SOI and Zeeman fields are aligned to the n_3 direction. Introducing the usual orbital bosonic ladder operators a and a^†, the harmonic length l, the time-dependent spin-orbit length λ_s(t)=ħ/m v(t), we can rewrite this Hamiltonian asH/ħω_o= a^† a+ω_B/2ω_oσ_3+ il/√(2)λ_s(t) (a^†-a) σ_z+ z̅(t)/√(2)l (a^†+a) .We move to a spin-dependent rotating frame by the unitary operatorU_E(t)=e^-i t(ω_o a^† a+ω_B σ_3/2) , yielding H_R/ħω_o = α(t)a^†+α^†(t) a ,α(t) =e^iω_o t/√(2)[z̅(t)/l+il/λ_s(t)σ_3 ], where we used U_E^†(t) a U_E(t)=ae^-iω_o t. The time-evolution operator of the system can then be formally found as U(t)=U_E(t)𝒯e^-i∫_0^tH_R(τ)dτ/ħ .In our case, this equation can be evaluated exactly because the spin sector remains diagonal during the time evolution and the problem is quadratic in the orbital degree of freedom. The explicit exact solution of time-ordered exponential is obtained by a second-order Magnus expansion <cit.>: because [a,a^†]=1 and higher order commutators coming from the expansions vanish and the result of the second-order expansion is exact.We thus obtain U(t)=U_E(t)e^-iϕ(t)D[Γ(t) ] . We introduced the conventional quantum optical displacement operator D(x)=e^x a^†-x^† a, and the spin-dependent phase-space shift Γ(t) andphase ϕ(t) are Γ(t) =-i ω_o ∫_0^t α(τ)dτ ,ϕ(t) =iω_o^2 ∫_0^tdτ∫_0^τ dτ' α(τ)α^†(τ')-α^†(τ)α(τ')/2 .As a concrete example, we consider the case z̅(t)= v̅ t and a time-independent λ_s, in which case Γ(t) =-e^i ω _o t/√(2)[θ_1(t) +i l /λ_s(1-e^-i ω _o t) σ_3] ,θ_1(t) =ω_l t +i ω_l/ω_o(1-e^-i ω _o t) ,ϕ(t) =ϕ_0-[ω_s t/2[1+cos(ω _o t)]- ω_s/ω_osin(ω_o t)]σ_3 , where ϕ_0 is a trivial global spin-independent phase, and ω_l=v̅/l and ω_s=v̅/λ_s as in the main text.We focus on the time-evolution of the orbital ground states of H_R at time t=0 and centred at z̅(0)=0:|ψ_↑↓⟩_0=D(-i l σ_3/√(2)λ_s) |0, ↑̃↓̃⟩ ,where the ↑̃↓̃ spins are the pseudo spin degrees of freedom defined by removing the SOI by the usual transformationD(-i l σ_3/√(2)λ_s). Time evolved state at time t is |ψ_↑↓⟩(t)=e^-iω_B t/2σ_3-iϕ(t) e^il /√(2)λ_sRe[Γ(t)]σ_3 × D[(Γ(t) -i l/√(2)λ_sσ_3)e^-iω_o t] |0, ↑̃↓̃⟩ .In particular, when z̅(t)=v̅ t, we find |ψ_↑↓(t)⟩=e^-iω_B t+θ_0(t)/2σ_3 D[-θ_1(t)/√(2)-i l /√(2)λ_sσ_3] |0,↑̃↓̃⟩=e^-iω_B t+2θ_0(t)/2σ_3D[-θ_1(t)/√(2)]D[-i l σ_3/√(2)λ_s]|0, ↑̃↓̃⟩=e^-iω_B t+2θ_0(t)/2σ_3D[-θ_1(t)/√(2)] |ψ_↑↓⟩_0 , where we introduce the spin-dependent phase shiftθ_0(t)=-ω_s t+ω_s/ω _o sin(ω _o t).In this simple case, it is clear that the non-adiabatic corrections provide fast oscillations of the angles θ_0,1(t) that become suppressed as ω_l∼ω_s≪ω_o. More generally, by averaging out the fast oscillations of period 1/ω_o, one can generalize these results asθ_0(t)≈ - z̅(t)/λ_s , andθ_1(t)≈z̅(t)/l.As expected, we note that the non adiabatic corrections are ∝ω_l,s/ω_o and result in additional oscillations terms ∝ e^-i ω_o t that we neglect in our adiabatic approximation.§ROTATION MATRICESHere, we provide an explicit expression for the rotation matrices used in the main text. 
The unitary operatorU=e^-iθn·σ/2 ,with unit vector n=(n_1,n_2,n_3) (such that n·n=1), transforms a vector of Pauli matrices σ=(σ_1,σ_2,σ_3) asU^†σ U=R̂_n(θ)σ .The counterclockwise rotation matrix R̂_n(θ) rotates a vector by an angle θ around nand is given by R̂_n(θ)=( [(1-n_x^2)cos (θ )+n_x^2 n_x n_y[1-cos (θ )]-sin (θ ) n_z n_x n_z[1-cos (θ )]+sin (θ ) n_y; n_x n_y [1-cos (θ )] +sin (θ ) n_z (1-n_y^2) cos (θ )+n_y^2 n_y n_z[1-cos (θ )]-sin (θ ) n_x; n_x n_z[1-cos (θ )]-sin (θ ) n_y n_y n_z[1-cos (θ )]+sin (θ ) n_x(1-n_z^2)cos (θ )+n_z^2;]). For convenience, we also define the rotation matrices R̂_i(θ) around the i=(1,2,3) axis asR̂_1(θ) =( [ 1 0 0; 0cos(θ) -sin(θ); 0sin(θ)cos(θ); ]),R̂_2(θ) =( [cos(θ) 0sin(θ); 0 1 0; -sin(θ) 0cos(θ) ]),R̂_3(θ) =( [cos(θ) -sin(θ) 0;sin(θ)cos(θ) 0; 0 0 1 ]). and we report the useful relation R̂_n(θ)A=n(n·A)+cosθ (n×A)×n+sinθ (n×B). For the discussion in the main text, we are particularly interested in the solution of the equationR̂_n(θ)n_3 = [sin (φ ) sin(φ _1),sin(φ _1) cos (φ ),cos(φ _1)],which aligns a general vector parametrized by the angles φ∈[0,2π) and φ_1∈ [0,π) to the 3rd direction. Note that the rotation R̂_n(θ) is straightforwardly decomposed as R̂_n(θ)=R̂_3(-φ)R̂_1(-φ_1).A particular solution for the vector and angle of the combined rotations valid for cos(φ_1)≥ 0 isθ =-sgn(sinφ ) cos ^-1(cosφ+(1+cosφ ) cosφ _1-1/2),n =[cos(φ/2) cos( ϕ/2),-sin(φ/2) cos( ϕ/2),sin( ϕ/2)],ϕ =-2 sin ^-1[ tan(φ/2)(θ/2)]. For small positive angles φ_1 around the third axis, one can Taylor expand this solution to the first order in φ_1, resulting inθ =- cos ^-1(cos (φ )) sgn(sin (φ ))+𝒪(φ_1^2) ,ϕ = π-φ_1/|sin(φ/2)|+𝒪(φ_1^2) ,n =[φ _1 sgn(sin(φ/2))/2(φ/2), -φ _1 sgn(sin(φ/2))/2,1 ],or equivalently, unwinding the phases, θ =- φ ,n =[φ _1/2(φ/2) , -φ _1/2,1 ]+𝒪(φ_1^2),resulting in the vectorR̂_n(θ)n_3=[ φ_1 sin(φ) ,φ_1 cos(φ) ,1] +𝒪(φ_1^2). §INTERMEDIATE-RANGE NOISE SOURCESWe discuss in more detail the role of inhomogeneous noise with an intermediate range. We focus on systems with arbitrary SOI. We consider to this aim the Hamiltonian H_1D in Eq. (<ref>), and the noise HamiltonianH_N^z= 1/2n_0∑_k V(z-z_k)h_k(t)·σ ,where the function V(z-z_k) determines whether the noise sources are local [V(z)=δ(z)] or global [V(z)=n_0]. We consider here an homogeneous Zeeman field, i.e., ω̃_B(z)=ω̃_B.We remove the SOI by the transformation S in Eq. (<ref>) and we project the total Hamiltonian onto the moving dot ground state wavefunction, resulting in the effective HamiltonianH= ħω_B(z̅)·σ/2+1/2H̃(z̅,t) ·σ ,with [see Eq. (<ref>)]ω_B(z̅) =ω̃_B ∫_-∞^+∞ d z |ψ(z-z̅)|^2R̂_s^T[2z/λ_s] R̂_δ(z)^T[ϕ_s(z)] ,H̃(z̅,t) =1/n_0∑_kh_k(t)∫_-∞^+∞d z V(z-z_k)|ψ(z-z̅)|^2R̂_s^T[2z/λ_s] R̂_δ(z)^T[ϕ_s(z)] .The longitudinal component of the covariance matrix Σ̂_33, which determines the average shuttling fidelity is Σ̂_33=1/2πħ^2∫_-∞^∞ dωη^†(ω, t) Ŝ(ω) η(ω, t),with Ŝ_ij(ω)=∫ d t e^iω t ⟨ h_i(t)h_j(0)⟩ being a general anisotropic noise spectral function. We introduced the vector η=1/n_0∫_0^t dτ e^-iωτ∑_k∫_-∞^+∞d zV(z-z_k)|ψ(z-z̅(τ))|^2 R̂_s^T[2z/λ_s] R̂_δ(z)^T[ϕ_s(z)] ω_B[z̅(τ)]/|ω_B[z̅(τ)]|.Assuming isotropic uncorrelated noise, Ŝ_ij(ω)= δ_ij S(ω), we find that the longitudinal filter function is F̂_33=F = η^†·η.For global and local noise, we obtain, respectively,H̃_G=h∫_-∞^+∞ d z |ψ(z-z̅)|^2R̂_s^T[2z/λ_s] R̂_δ(z)^T[ϕ_s(z)], andH̃_L=1/n_0∑_k|ψ(z_k-z̅)|^2h_kR̂_s^T[2z_k/λ_s] R̂_δ(z_k)^T[ϕ_s(z_k)] . We define here h=∑_kh_k. 
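The rotation matrices entering these projections can also be checked numerically; the following minimal numpy sketch (with illustrative angles) implements R̂_n(θ) in Rodrigues form and verifies the decomposition R̂_n(θ)=R̂_3(-φ)R̂_1(-φ_1) of Appendix B acting on n_3.

import numpy as np

def rot(n, theta):
    # counterclockwise rotation by theta about the unit vector n (Rodrigues form)
    n = np.asarray(n, float) / np.linalg.norm(n)
    K = np.array([[0.0, -n[2], n[1]],
                  [n[2], 0.0, -n[0]],
                  [-n[1], n[0], 0.0]])      # cross-product matrix
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

phi, phi1 = 0.7, 0.3                        # illustrative angles
n3 = np.array([0.0, 0.0, 1.0])
target = np.array([np.sin(phi) * np.sin(phi1),
                   np.sin(phi1) * np.cos(phi),
                   np.cos(phi1)])
combined = rot([0, 0, 1], -phi) @ rot([1, 0, 0], -phi1)
assert np.allclose(combined @ n3, target)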
We emphasize that for global noise the SOI-induced rotation is independent of the location of the defects and affects noise and Zeeman field in the same way. Local noise, on the other hand, locally rotates the noise fluctuators yielding a qualitatively different effect compared to the Zeeman field. This qualitative difference can be straightforwardly understood by considering a homogeneous SOI field, such as v_H in Eq. (<ref>). In this case ϕ_s(z)=0 and we findω_B =ω̃_B^∥+ e^-l^2/λ_s^2ω̃_B^⊥R̂_s^T(2z̅/λ_s) ,H̃_G =h^∥+ e^-l^2/λ_s^2h^⊥R̂_s^T(2z̅/λ_s),H̃_L =1/n_0∑_k|ψ(z_k-z̅)|^2h_kR̂_s^T[2z_k/λ_s] , where ∥ and ⊥ refer to components of the vectors parallel and perpendicular to the SOI n_s, respectively.We now focus on the case where the Zeeman field is perpendicular to the SOI, e.g., n_s= n_1 and ω̃_B=ω̃_B n_3. In the interaction picture including the dynamics induced by the Zeeman field, the relevant longitudinal noise is [H̃_G]_3 =e^-l^2/λ_s^2h·n_3 , [H̃_L]_3 =1/n_0∑_k|ψ(z_k-z̅)|^2h_k ·R̂_s^T[2(z_k-z̅)/λ_s] n_3 , resulting in [see Eqs. (<ref>) and (<ref>)]F_G=e^-2l^2/λ_s^2F_FID , andF_L=F_P^L. § EXACT RESULTS FOR THE FILTER FUNCTIONS In this section, we report exact equations for filter function for a fully precessing Zeeman field and fidelities: F_P=2 (ω _λ^2-(ω _λ^2+ω ^2) cos (t ω ) cos(t ω _λ)-2 ωω _λsin (t ω ) sin(t ω _λ)+ω ^2)/(ω ^2-ω _λ^2)^2 ,ℱ̅_P = exp[1/8 i π2^η -1 (πη ) t^2-η T^η -2(-(t ω_λ +i)^η -1+( t ω_λ -i)^η -1+(-t ω_λ -i)^η -1-(-t ω_λ +i)^η -1) ] , F_P^L =1/ω_l^2Re[ f(ω-2ω_λ/ω_l, ω_l t )], f(ω, t) =e^-t^2+ω ^2/2[√(2/π) e^ω ^2/2(cos (t ω )-e^t^2/2)+1/2e^t^2/2((t+i ω ) erf(t+i ω/√(2))+(t-i ω ) erf(t-i ω/√(2))+2 ωerfi(ω/√(2)))] . | http://arxiv.org/abs/2311.15970v1 | {
"authors": [
"Stefano Bosco",
"Ji Zou",
"Daniel Loss"
],
"categories": [
"cond-mat.mes-hall",
"quant-ph"
],
"primary_category": "cond-mat.mes-hall",
"published": "20231127161316",
"title": "High-fidelity spin qubit shuttling via large spin-orbit interaction"
} |
felix.ginot(at)uni-konstanz.de Fachbereich Physik, Universität Konstanz, 78457 Konstanz, GermanyFachbereich Physik, Universität Konstanz, 78457 Konstanz, Germany We experimentally study the motion of a colloidal particle, translated back and forth within a viscoelastic, i.e. non-Markovian bath.The particle starts in equilibrium before the forward motion, but only partially relaxes at the turning point.During the backward motion, we measure a systematic (negative) heat flow from the bath to the particle.Our observations are in good agreement with a simple model that describes the time-delayed response of the fluid.We expect our results to be important for the realization and optimization of novel types of micro-engines in non-Markovian surroundings. Average negative heat in a non-Markovian bath Clemens Bechinger January 14, 2024 =============================================Stochastic thermodynamics provides a powerful framework to describe the behaviour of small systems where fluctuations become crucial <cit.>.Within this approach, the concepts of classical, i.e., macroscopic thermodynamics can be transferred to microscopic length scales which permits the definition of small-scale equivalents of entropy, work, heat or internal energy <cit.>.Opposed to macroscopic systems, however, such quantities are not given by sharp values but by distributions with finite width which is an immediate consequence of thermal fluctuations which govern the behavior of tiny systems. An immediate consequence of these fluctuations is the temporal "violation" of the second law of thermodynamics.Experimental studies <cit.> of colloidal particles driven through water revealed clear evidence that entropy can be consumed rather than generated on short time scales <cit.>.Nevertheless, because the viscous friction of a dragged particle is entirely dissipated within the surrounding heat bath, the average entropy and heat production in such experiments always remains positive. This, however, is only valid when the relaxation time of the bath is much shorter than that of the particle, i.e. when the bath remains in equilibrium even while the particle is driven <cit.>. In the case of a viscoelastic bath with long relaxation time, such conditions are no longer fulfilled <cit.>. As such environments constitute the natural surrounding for molecular motors <cit.>, bacteria <cit.> and motile cells <cit.>, this breaking has important consequences on the heat production for driven systems at small scales.In this work, we experimentally demonstrate that the heat of a colloidal particle driven through a viscoelastic fluid can be negative, even when looking at averages.Because the relaxation time τ_R of the fluid is several seconds, it cannot immediately dissipate the energy of a driven particle.As a consequence, a finite amount of the transferred energy is restored back to the particle, which leads to an averaged negative heat (ANH) production which is limited by the amount of work initially spent on the particle.Our results are in excellent agreement with a micro-mechanical <cit.> model, which is expected to be also applicable to other types of non-Markovian baths. We expect our findings to be particularly relevant for the design and optimization of thermodynamic machines in non-Markovian environments. We perform experiments using silica particles with diameter 2.73 within a 100 thick capillary. The particles are suspended in an 8 aqueous solution of cetylpyridinium chloride monohydrate (CPyCl) and sodium salicylate (NaSal). 
We kept the sample cell at 25 where the fluid forms an entangled network of giant worm-like micelles, leading to a viscoelastic, i.e. time-delayed, behavior <cit.>. At the above conditions, the largest relaxation time of a free particle (in the absence of a trap) has been determined to be τ_R≈2 <cit.>. Note that the relaxation time is considerably longer in the presence of a trap. We confined the particle using a highly focused 1064 laser beam, leading to a harmonic optical potential V(x,λ) = 1/2κ (x-λ)^2 with stiffness κ=2.0. The trap position λ(t) was controlled using a piezo-actuated stage, and kept far from the surface of the capillary. We recorded video pictures with a frame rate of 100, and obtained particle trajectories X(t) with an accuracy of ±6 using a custom Matlab tracking routine <cit.>. More information regarding sample preparation and the setup is available in the Supplemental Material (SM). Figure <ref> shows the typical driving cycle protocol applied in our experiments, which consists of four steps. 1 Starting from a fully equilibrated state, we first move the trap to the right at constant speed λ̇=0.2 during time t_d = 3.2. 2 The trap motion is stopped for the time t_neq, during which the particle relaxes within the trap. However, since t_neq is smaller than the time required for equilibration, this relaxation process is not complete, and the trap exerts a non-zero force on the particle (see orange line). 3 Afterward, the protocol is reversed, i.e. the trap moves left, back to its original position with opposite velocity λ̇=-0.2. 4 Finally, we let the system fully equilibrate during time t_eq=50 before the next cycle starts. Due to t_eq >> t_neq, the forward-backward protocol is asymmetric, which can be seen in the particle trajectory (red line). To yield sufficient statistics, the cycle is repeated about 100 times. From the trap and particle trajectories, we calculate the work W and heat Q associated with the particle with potential energy U[X(t)] = 1/2κ (X-λ)^2. The accumulated values within the time interval [0,t] are given by <cit.> W[X(t)] =∫_0^t ∂ V/∂λλ̇dt = - ∫_0^t κ(X-λ)λ̇dt and Q[X(t)] = -∫_0^t ∂ V/∂ XẊdt = -∫_0^t κ(X-λ)Ẋdt, where Q>0 corresponds to heat flowing from the particle to the bath. Figure <ref> shows the average time-dependent values of work <W(t)> (blue), heat <Q(t)> (red) and internal energy <U(t)> (orange) for t_neq = 1. The system starts at equilibrium and <U(t=0)> = 1/2k_BT. During 1, while the trap moves with constant speed, the viscous force of the fluid causes the particle to slightly lag behind the trap center, and <U(t)> increases. Since the corresponding optical force acting on the particle is then pointing opposite to the trap motion, <W(t)> also increases (Eq. <ref>). Because <W(t)> > <U(t)>, the heat <Q(t)> is positive (energy is flowing from the system towards the bath) in agreement with the first law W - Q - U = 0 (see black line). During 2 the trap is at rest, and <W(t)> remains constant because λ̇= 0 (Eq. <ref>). Since the particle is partially relaxing towards the trap center, this leads to an increase of <Q(t)> at the expense of <U(t)>. During 3 the trap moves back to its initial position, and all quantities first decrease and then increase again at about t ≈6.
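The work and heat defined above are evaluated in practice from the sampled trajectories; a minimal sketch of such an evaluation (Python/numpy, with hypothetical arrays X and lam sampled at the camera frame rate, and mid-point increments for the stochastic integrals):

import numpy as np

def work_heat(X, lam, kappa):
    # accumulated work and heat along one cycle from sampled particle (X)
    # and trap (lam) positions; mid-point (Stratonovich) discretization
    xrel = X - lam
    xmid = 0.5 * (xrel[1:] + xrel[:-1])
    dW = -kappa * xmid * np.diff(lam)     # dW = -kappa (X - lam) dlam
    dQ = -kappa * xmid * np.diff(X)       # dQ = -kappa (X - lam) dX (heat to the bath)
    W = np.concatenate(([0.0], np.cumsum(dW)))
    Q = np.concatenate(([0.0], np.cumsum(dQ)))
    U = 0.5 * kappa * xrel**2
    # with this discretization W - Q = U - U[0] holds identically (first law)
    return W, Q, U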
For <U(t)> and <W(t)>, this behavior is easily understood by considering that the particle, which is only partially relaxed at the beginning of step 3, is initially sitting in the front of the advancing trap center.After t ≈6 the trap has passed the particle position, and both values increase, similar to 1.However, even with the presence of viscous friction during 3 (λ̇≠ 0), the heat <Q(t)> is not monotonically increasing.Instead, similar to <W(t)> and <U(t)>, it first decreases (see arrow in Fig. <ref>) and only later increases again.In the following, we refer to this anomalous transfer of heat, from the bath to the particle, as average negative heat (ANH).Changes of work, heat, and internal energy during step 3 will be denoted as Δ W_neq, Δ Q_neq, and Δ U_neq respectively. The appearance of ANH which is here reported for a viscoelastic fluid, is absent in viscous, i.e. memory-free baths, where negative heat events are only sporadically observed at the level of single trajectories but not for their averages (see SM).As will be shown below, an ANH results from the time-delayed response of a viscoelastic fluid, which prevents heat to become immediately dissipated in the bath.As a consequence, heat can be partially recovered from the bath to perform work on the particle.To rationalize the above experimental findings, we perform numerical simulations using a minimal model.The interaction between the colloidal particle and the viscoelastic fluid is introduced using a (fictitious) bath particle which is connected to the colloid via a harmonic spring.Accordingly, the extension of the spring is associated with the storage of elastic energy within the bath.This microscopic equivalent of the Maxwell model has been previously used to describe the behavior of the viscoelastic solution <cit.>.Considering the presence of an external potential V(x), the positions of the colloidal (X) and bath (X_b) particles are described by the following Langevin equations:γẊ =-κ_b (X - X_b) -∇ V+ ξ(t) γ_ bẊ_b =-κ_b (X_b - X)+ ξ_b(t)where γ and γ_b correspond to the friction of the colloidal and the bath particle, respectively, and κ_b is the coupling strength. ξ and ξ_b are delta correlated random forces with zero mean. Notably, the two coupled Markovian equations correspond to a single non-Markovian generalized Langevin equation with an exponentially decaying memory kernel <cit.>.The parameters γ, γ_b, and κ_b have been obtained by comparison with the experimentally measured work, heat and internal energy as shown in Fig. <ref> (see SM).Within the above model, one can immediately understand the presence of an ANH.When the colloidal particle is driven by the optical trap 1, the spring to the bath particle becomes extended, and the associated elastic energy increases. This is a clear example of strong coupling, where driving the system out of equilibrium also sets the bath out of equilibrium. When the motion of the trap stops 2, both the colloidal and the bath particles begin to relax.However, because t_neq is too small to reach equilibrium, a finite amount of elastic energy remains trapped in the spring.Consequently, upon reversing the driving force on the colloid 3, the previously extended spring is compressed, which transfers elastic energy back to the colloidal particle.When measuring the stochastic heat, this flow of energy from the bath to the colloid translates into an ANH.Naturally, the recovered energy cannot exceed the energy previously stored in the spring. 
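A minimal Euler–Maruyama sketch of this two-particle model is given below (Python/numpy; the parameter values are illustrative and not the ones fitted to the experiment):

import numpy as np
rng = np.random.default_rng(0)

kappa, kappa_b = 2.0, 1.0       # trap and colloid-bath coupling stiffness (illustrative)
gamma, gamma_b = 1.0, 20.0      # colloid and (slow) bath-particle friction (illustrative)
kBT, dt = 1.0, 1e-3
v, t_d, t_neq = 0.2, 3.2, 1.0   # drive speed, drive time and waiting time

def lam(t):                     # trap position: forward drive, wait, backward drive
    if t < t_d:
        return v * t
    if t < t_d + t_neq:
        return v * t_d
    return max(v * t_d - v * (t - t_d - t_neq), 0.0)

n = int((2 * t_d + t_neq) / dt)
X = np.zeros(n + 1)             # colloid position
Xb = np.zeros(n + 1)            # fictitious bath-particle position
for i in range(n):
    t = i * dt
    xi = np.sqrt(2 * kBT * gamma / dt) * rng.standard_normal()
    xib = np.sqrt(2 * kBT * gamma_b / dt) * rng.standard_normal()
    X[i + 1] = X[i] + dt * (-kappa_b * (X[i] - Xb[i]) - kappa * (X[i] - lam(t)) + xi) / gamma
    Xb[i + 1] = Xb[i] + dt * (-kappa_b * (Xb[i] - X[i]) + xib) / gamma_b
# Averaging W, Q and U (computed as in the earlier sketch) over many such cycles
# should show the partial return of the elastic energy stored in the spring
# during the backward drive.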
Thus, during a full cycle heat and entropy systematically increase, in agreement with the second law. Another consequence of our model is that the stored elastic energy determines the magnitude of the ANH. For an experimental check, we have varied t_neq, which controls the amount of elastic energy left in the bath at the beginning of 3. Figure <ref> shows the experimentally measured <Δ Q_neq> during 3 as a function of t_neq. The averaged heat <Δ Q_neq> increases with t_neq and goes from negative to positive values. The latter confirms that ANH is a non-equilibrium feature, absent in fully equilibrated systems. In addition, we also plot the average work <Δ W_neq> and internal energy <Δ U_neq> exchanged during 3, which show a similar trend. The fact that <Δ W_neq> < <Δ U_neq> for small values of t_neq immediately shows that heat is being converted into work. Only above t_neq≈ 3s do we recover the behavior of memory-free dissipative systems, i.e. that the amount of extracted work is smaller than the change of internal energy. The dashed curves correspond to the numerical results of our bath-particle model. In particular, at small t_neq we find excellent agreement, with some deviations (attributed to the simplicity of the model) towards larger timescales. Finally, we also expect a direct correlation between the heat dissipated in 1 and that measured in 3. Due to energy conservation, such correlations should hold both for averages and single trajectories. Figure <ref> shows the correlation of the distributions of Δ Q_eq and Δ Q_neq for t_neq = 1, 3, and 10, respectively. Symbols and dashed lines correspond to experimental data and simulation results, respectively. The negative slope confirms that an increase of Δ Q_eq leads to a decrease of Δ Q_neq and thus to a larger ANH. As expected, with increasing t_neq, this correlation becomes weaker and eventually vanishes. We also performed Langevin simulations for a viscous fluid (black dotted line, see SM). Clearly, such heat correlations must be absent in memory-free viscous baths, but are a distinctive feature of non-Markovian surroundings. Formally, the occurrence of an ANH is a consequence of the definitions of heat and work (Eqs. <ref> and <ref>) which have been derived for ideal heat baths with infinitely fast relaxation. In the case of a non-Markovian heat bath, the slowly decaying degrees of freedom must be taken into account in the definition of the above quantities. When explicitly considering the contribution of the bath particle and the spring in our model, a classical Markovian behavior is recovered and the ANH disappears. However, an accurate microscopic description is typically not achievable in complex systems. Therefore, the presence of an ANH in experiments can serve as a quantifier to provide evidence for (hidden) slow degrees of freedom in the bath. In addition, the presence of ANH also has consequences for the design of energy-efficient driving protocols in non-Markovian environments. Inspired by microscopic engines <cit.>, we have studied how the ratio < Δ W_neq > / < Δ W_eq > varies as a function of the driving time t_d. Fig. 5 shows the results obtained for λ̇=0.2 and t_neq=1, with κ=1.35 (light blue) and κ=3.89 (dark blue). For the Newtonian case (dashed lines), we only show the results of our model (with γ_N=γ+γ_b and κ_b=0).
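In terms of the simulation sketch above, this Newtonian reference case simply corresponds to removing the spring and lumping all friction onto the colloid, e.g.

kappa_b = 0.0                     # remove the coupling spring: the bath particle decouples
gamma_newton = gamma + gamma_b    # single friction gamma_N = gamma + gamma_b on the colloid
# the update rule then reduces to X[i+1] = X[i] + dt*(-kappa*(X[i]-lam(t)) + xi)/gamma_newton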
As expected, in this situation Δ W_neq/Δ W_eq monotonically increases with t_d: the longer the particle is driven, the more heat is dissipated in the bath, which decreases the fraction of work that can be recovered.Opposed to this, the behavior in a non-Markovian bath is quite different.Since heat can now be exploited to extract work, maximum efficiency is no longer achieved by avoiding heat generation, but by optimizing the amount of energy recovered. As a result, the ratio Δ W_neq/Δ W_eq now increases non-monotonically with t_d, and shows a minimum for t_d∼3.Notably, for larger values of t_d, and particularly in the case of the stiffer trap (dark blue), the measured efficiency in the non-Markovian bath is systematically better than in the Markovian one. Simulations (plain lines) again nicely match the experimental data (symbols), with a very good agreement for small values of t_d, and some deviations afterward.In summary, we have observed the onset of an averaged negative heat flow between a viscoelastic bath and a driven colloidal particle.Opposed to Markovian baths, where negative heat events only arise sporadically due to thermal fluctuations, our observations hold even when averaging over a large ensemble of trajectories.Our results, which are in good agreement with a micro-mechanical model, demonstrate that energy can be temporarily stored and recovered in baths with slow hidden degrees of freedom.As a consequence, non-Markovian baths can be exploited to increase the efficiency of cyclic processes compared to Markovian surroundings.We expect such approach to be important for the realization and optimization of novel types of microscopic heat engines.Finally, we our results do not only hold in case of viscoelastic baths but should also apply to any non-Markovian systems with a delayed response.We thank Samuel Monter for fruitful discussions. This project was funded by the Deutsche Forschungsgemeinschaft (DFG), Grant No. SFB 1432 - Project ID 425217212.30 fxundefined [1]ifx#1fnum [1]#1firstoftwosecondoftwo fx [1]#1firstoftwosecondoftwonoop [0]secondoftworef[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0]rl [1]href #1 @bib@innerbibempty[Sekimoto(1998)]sekimoto1998langevin author author K. Sekimoto, @noopjournal journal Progress of Theoretical Physics Supplement volume 130, pages 17 (year 1998)NoStop [Seifert(2012)]seifert2012stochastic author author U. Seifert, @noopjournal journal Reports on progress in physics volume 75,pages 126001 (year 2012)NoStop [Crooks(1999)]crooks1999entropy author author G. E. Crooks, @noopjournal journal Physical Review E volume 60, pages 2721 (year 1999)NoStop [Seifert(2005)]seifert2005entropy author author U. Seifert, @noopjournal journal Physical review letters volume 95, pages 040602 (year 2005)NoStop [Baiesi et al.(2009)Baiesi, Maes, and Wynants]baiesi2009fluctuations author author M. Baiesi, author C. Maes,andauthor B. Wynants, @noopjournal journal Physical review lettersvolume 103, pages 010602 (year 2009)NoStop [Jarzynski(2011)]jarzynski2011equalities author author C. Jarzynski, @noopjournal journal Annu. Rev. Condens. Matter Phys. volume 2,pages 329 (year 2011)NoStop [Collin et al.(2005)Collin, Ritort, Jarzynski, Smith, Tinoco Jr, and Bustamante]collin2005verification author author D. Collin, author F. Ritort, author C. Jarzynski, author S. B. Smith, author I. Tinoco Jr,and author C. 
Bustamante, @noopjournal journal Nature volume 437,pages 231 (year 2005)NoStop [Bustamante et al.(2005)Bustamante, Liphardt, and Ritort]bustamante2005nonequilibrium author author C. Bustamante, author J. Liphardt,and author F. Ritort, @noopjournal journal Physics today volume 58, pages 43 (year 2005)NoStop [Bérut et al.(2012)Bérut, Arakelyan, Petrosyan, Ciliberto, Dillenschneider, andLutz]berut2012experimental author author A. Bérut, author A. Arakelyan, author A. Petrosyan, author S. Ciliberto, author R. Dillenschneider,and author E. Lutz, @noopjournal journal Naturevolume 483, pages 187 (year 2012)NoStop [Wang et al.(2002)Wang, Sevick, Mittag, Searles,and Evans]wang2002experimental author author G. Wang, author E. M. Sevick, author E. Mittag, author D. J. Searles,and author D. J. Evans, @noopjournal journal Physical review lettersvolume 89, pages 050601 (year 2002)NoStop [Evans and Searles(2002)]evans2002fluctuation author author D. J. Evans and author D. J. Searles, @noopjournal journal Advances in Physics volume 51, pages 1529 (year 2002)NoStop [Dexter and Matheson(1972)]dexter1972mechanical author author A. Dexter and author A. Matheson, @noopjournal journal Advances in Molecular Relaxation Processes volume 2, pages 251 (year 1972)NoStop [Larson(2017)]larson1999structure author author R. G. Larson, @nooptitle The structure and rheology of complex fluids (publisher OUP USA, year 2017)NoStop [Liu et al.(2006)Liu, Gardel, Kroy, Frey, Hoffman, Crocker, Bausch,and Weitz]liu2006microrheology author author J. Liu, author M. Gardel, author K. Kroy, author E. Frey, author B. D. Hoffman, author J. C.Crocker, author A. Bausch,and author D. Weitz, @noopjournal journal Physical review letters volume 96, pages 118104 (year 2006)NoStop [Gomez-Solano and Bechinger(2014)]gomez2014probing author author J. R. Gomez-Solano and author C. Bechinger, @noopjournal journal Europhysics Letters volume 108, pages 54008 (year 2014)NoStop [Khan et al.(2019)Khan, Regan, and Robertson-Anderson]khan2019optical author author M. Khan, author K. Regan,andauthor R. M. Robertson-Anderson, @noopjournal journal Physical Review Letters volume 123,pages 038001 (year 2019)NoStop [Ginot et al.(2022a)Ginot, Caspers, Reinalter, Kumar, Krüger, and Bechinger]ginot2022recoil author author F. Ginot, author J. Caspers, author L. F. Reinalter, author K. K. Kumar, author M. Krüger,and author C. Bechinger, @noopjournal journal New Journal of Physics volume 24, pages 123013 (year 2022a)NoStop [Schliwa and Woehlke(2003)]schliwa2003molecular author author M. Schliwa and author G. Woehlke, @noopjournal journal Nature volume 422, pages 759 (year 2003)NoStop [Peterson et al.(2015)Peterson, He, Ren, Zerdoum, Libera, Sharma, Van Winkelhoff, Neut, Stoodley, Van Der Mei et al.]peterson2015viscoelasticity author author B. W. Peterson, author Y. He, author Y. Ren, author A. Zerdoum, author M. R. Libera, author P. K.Sharma, author A.-J.Van Winkelhoff, author D. Neut, author P. Stoodley, author H. C. Van Der Mei,et al., @noopjournal journal FEMS microbiology reviews volume 39,pages 234 (year 2015)NoStop [Thurston(1972)]thurston1972viscoelasticity author author G. B. Thurston, @noopjournal journal Biophysical journal volume 12, pages 1205 (year 1972)NoStop [Darabi et al.(2023)Darabi, Ferrer, and Gomez-Solano]darabi2023stochastic author author F. Darabi, author B. R. Ferrer,and author J. R. Gomez-Solano, @noopjournal journal arXiv preprint arXiv:2308.04148(year 2023)NoStop [Cates and Candau(1990)]cates1990statics author author M. Cates and author S. 
Candau,@noopjournal journal Journal of Physics: Condensed Matter volume 2, pages 6869 (year 1990)NoStop [Crocker and Grier(1996)]crocker1996methods author author J. C. Crocker and author D. G. Grier, @noopjournal journal Journal of colloid and interface science volume 179,pages 298 (year 1996)NoStop [Speck and Seifert(2007)]speck2007jarzynski author author T. Speck and author U. Seifert, @noopjournal journal Journal of Statistical Mechanics: Theory and Experiment volume 2007, pages L09002 (year 2007)NoStop [Ginot et al.(2022b)Ginot, Caspers, Krüger, and Bechinger]ginot2022barrier author author F. Ginot, author J. Caspers, author M. Krüger,andauthor C. Bechinger, @noopjournal journal Physical Review Lettersvolume 128, pages 028001 (year 2022b)NoStop [Medina-Noyola(1987)]medina1987generalized author author M. Medina-Noyola, @noopjournal journal Faraday Discussions of the Chemical Society volume 83, pages 21 (year 1987)NoStop [Mason and Weitz(1995)]mason1995optical author author T. G. Mason and author D. A. Weitz, @noopjournal journal Physical review letters volume 74, pages 1250 (year 1995)NoStop [Martínez et al.(2016)Martínez, Roldán, Dinis, Petrov, Parrondo, and Rica]martinez2016brownian author author I. A. Martínez, author É. Roldán, author L. Dinis, author D. Petrov, author J. M. Parrondo,and author R. A. Rica, @noopjournal journal Nature physics volume 12, pages 67 (year 2016)NoStop [Argun et al.(2017)Argun, Soni, Dabelow, Bo, Pesce, Eichhorn, and Volpe]argun2017experimental author author A. Argun, author J. Soni, author L. Dabelow, author S. Bo, author G. Pesce, author R. Eichhorn,and author G. Volpe, @noopjournal journal Physical Review E volume 96, pages 052106 (year 2017)NoStop [Guevara-Valadez et al.(2023)Guevara-Valadez, Marathe, and Gomez-Solano]guevara2023brownian author author C. A. Guevara-Valadez, author R. Marathe,and author J. R. Gomez-Solano, @noopjournal journal Physica A: Statistical Mechanics and its Applications volume 609, pages 128342 (year 2023)NoStop | http://arxiv.org/abs/2311.16324v1 | {
"authors": [
"Felix Ginot",
"Clemens Bechinger"
],
"categories": [
"cond-mat.soft",
"cond-mat.stat-mech"
],
"primary_category": "cond-mat.soft",
"published": "20231127211650",
"title": "Average negative heat in a non-Markovian bath"
} |
School of Physical Sciences, National Institute of Science Education and Research, HBNI, Jatni 752050, Odisha, India Department of Physics and Astronomy, University of Rochester, Rochester NY 14627, USASchool of Physical Sciences, National Institute of Science Education and Research, HBNI, Jatni 752050, Odisha, IndiaSchool of Physical Sciences, National Institute of Science Education and Research, HBNI, Jatni 752050, Odisha, India Luke Chamandy [email protected] Galactic dynamo models have generally relied on input parameters that are very challenging to constrain.We address this problem by developing a model that uses observable quantities as input: the galaxy rotation curve, the surface densities of the gas, stars and star formation rate, and the gas temperature.The model can be used to estimate parameters of the random and mean components of the magnetic field, as well as the gas scale height, root-mean-square velocity and the correlation length and time of the interstellar turbulence, in terms of the observables.We use our model to derive theoretical scaling relations for the quantities of interest, finding reasonable agreement with empirical scaling relations inferred from observation.We assess the dependence of the results on different assumptions about turbulence driving, finding that agreement with observations is improved by explicitly modeling the expansion and energetics of supernova remnants.The model is flexible enough to include alternative prescriptions for the physical processes involved, and we provide links to two open-source python programs that implement it.§ INTRODUCTION Spiral galaxies generally host magnetic fields with strengths of order 10. The general properties of such fields can be explained using turbulent dynamo models,but detailed comparison between theory and observation is still rudimentary <cit.>.The situation can be improved using variouscomplementary approaches.One of them is to evaluate observable quantities using simple,preferably analytic models of galactic magnetic fields,since they are readily available, transparent, easy to apply and can be used to explore the parameter space.However, such models tend to depend on poorly known galactic parameters, such as the turbulent correlation time τ and length l.This problem can now be partially circumvented by using a model for the turbulence parameters <cit.>whose inputs can be expressed in terms of observable quantities.The point of the present work is to write down a closed set of coupled algebraic equationsthat can be solved toobtain magnetic field properties from a handful of galactic observables, to use these equations to derive scaling relations in certain asymptotic limits, and to compare them with those suggested by observations. We make no attempt to include cosmological evolution, and thus our model is restricted to low-redshift galaxies (z≪1).The paper is organised as follows. Section <ref> presents an overview of the model.Then we present each part of the model separately: the magnetic field in Section <ref>,the turbulence parameters in Section <ref>,and the remaining relevant quantities in Section <ref>.In Section <ref> we show howsolutions can be approximated as scaling relations, and compare these to observations and numerical simulations.Finally, we summarize our results and provide conclusions in Section <ref>. § OVERVIEW OF THE MODELThe model combines existing analytic solutions for the mean magnetic field <cit.> and parameters of the supernova-driven interstellar turbulence <cit.>. 
Those solutions have been tested againstasymptotic and numerical solutions in the case of mean-field dynamo models <cit.>, and against direct numerical simulationsof the local interstellar medium in the case of turbulence models <cit.>. Furthermore, local galactic dynamo simulations that solve the full equations of MHD are reasonably consistent with dynamo theory <cit.>. In addition to our mean-field dynamo and interstellar turbulence models,we use suitably motivated heuristic expressionsfor quantities like the random magnetic field strength and gas scale height.For simplicity, the gaseous disk is approximated as being comprised of a single-phase fluid in a statistical steady state. We use cylindrical polar coordinates (r,ϕ,z)with the z-axis aligned with the galaxy's rotation axis and z=0 at the galactic midplane. All variables depend on the galactocentric radius r and represent vertical and azimuthal averages over the gaseous galactic disk (although we discuss the azimuthal variations in connection with the effects of the spiral arms).The model is summarized in Fig. <ref>. The observables on which the turbulence and magnetic field properties depend arethe angular speed of the gas rotation about the galactic centre Ω(r),the dimensionless rotational velocity shear rate q≡-lnΩ/ln r ,the stellar surface mass density Σ_⋆(r),the gas surface mass density Σ(r),the surface density of the star formation rate Σ(r) and the gas temperature T(r).Note that q>0 since Ω/ r<0 in most parts of galaxies,and q=1 for a flat (constant circular velocity) rotation curve. These input quantities – with the exception of Ω(r) and q(r) – arefirst used to calculate the gas scale height h(r), the gas density ρ(r), the supernova (SN) rate density ν(r),the root-mean-square (rms) turbulent velocity u(r), the turbulent correlation length l(r) and turbulent correlation time τ(r). These quantities are, in turn, used as input for the magnetic field model,along with Ω(r) and q(r). The magnetic field parameters include the rms strength of the random (turbulent) magnetic field b(r),its degree of anisotropy (defined below), the strength B(r) of the mean magnetic field, and its pitch angle tan p_B(r)= B_r(r)/B_ϕ(r),where -π/2<p_B≤π/2.The key quantities in the model are defined and summarized in Table <ref>.Source code has been provided for quickly obtaining scaling relations in various asymptotic limits, including but not limited to the ones presented in Section <ref>. [See <https://github.com/Rnazx/Scaling-Relations>.] In Paper II, we will apply our model to specific galaxies. For this purpose, we will solve the general equations using semi-analytic methods. [The source code is available at <https://github.com/Rnazx/goblin>.]§ MAGNETIC FIELD MODEL Dynamo amplification is fast enough that the magnetic fields of nearby galaxies (definitely their random components and, likely, their large-scale components, at least in the central parts of galaxies) are expected to be in a saturated state <cit.>.In this statistically steady state,they are predicted to have magnetic energy densitysimilar to the turbulent kinetic energy density and this is borne out in observations <cit.>.Thus, in this work we consider only the saturated state of the magnetic field. We separate the field into two main components,the mean field B and the random field ,so that we may write the total field as =B+,where overbar represents a suitable averaging(spatial in the case of observed variables and ensembleaverages in theoretical results). 
This separation is both physically and mathematically appropriate because the two terms in equation (<ref>) are believed to be governed by distinct physical processes and are sensitive to different parameters <cit.>, and also because they have distinct observational signatures.§.§ Random field Magnetic fields in galaxies are inferred from observations to usually be dominated by a random small-scale component <cit.>. Such random fields can be produced by fluctuation (small-scale) dynamo action, as well as the turbulent tangling of the mean magnetic field. It is also possible that part of the random component detected in observations is actually unresolved mean field. Below we neglect the influence of the mean field on the random field for simplicity, leaving such effects for future study. Random galactic fields are inferred from observations to be anisotropic <cit.>. Below, we assume that this anisotropy is solely produced by the large-scale galactic shear arising from differential rotation. Other sources of anisotropy, such as shear and compression in spiral density waves, could also be important. The galactic differential rotation stretches the radial component of the random field, leading to a linear increase with time of the azimuthal component. This can lead to a cumulative effect for a duration of about the correlation time of turbulence τ. Let the standard deviation of any component of the random field be given by σ_i=⟨b_i^2⟩^1/2, with i=(r,ϕ,z). For a random magnetic field in a statistically steady (saturated) state, we can estimate <cit.> σ_ϕ= σ_r(1+qΩτ). Galactic outflows (winds or fountain flow) may also contain large-scale shear, which would enhance σ_z relative to σ_r. We neglect galactic outflows for simplicity, but general expressions that include the mean vertical outflow speed U are given in Appendix <ref>. With U=0, we obtain σ_z=σ_r. The strength of the random field is thus given by b= (σ_r^2+σ_ϕ^2+σ_z^2)^1/2=σ_r√(2+(1+qΩτ)^2), and that of its isotropic part by b_iso= √(3)σ_r. The degree of anisotropy can be expressed as ε=b_ani/b_iso, where b_ani≡(b^2-b_iso^2)^1/2= (b_iso/√(3))[2qΩτ(1+qΩτ/2)]^1/2. Simulations of fluctuation dynamos in periodic boxes with isotropic forcing can be used to estimate the strength of the random field in the saturated state. Since large-scale shear is typically not included in such simulations, they are best used to estimate b_iso, rather than b. In particular, they can help to constrain the ratio ξ of the energy density of the saturated magnetic field, b_iso^2/8π, to that of the turbulent motions, ρ u^2/2, for a range of magnetic Reynolds and magnetic Prandtl numbers; in galaxies, both of these numbers are expected to be much greater than unity. These simulations suggest that ξ=b_iso^2/(4πρ u^2) ranges from ≈0.4 for solenoidally forced subsonic turbulence, to values that are considerably smaller for compressively forced or supersonic turbulence. Based loosely on results from <cit.>, we choose b_iso= ξ^1/2 B_eq/max(1,M/A_1), where A_1 is a constant of order unity, M≡u/c is the turbulent sonic Mach number, c is the speed of sound and ξ=0.4. Here, B_eq= β(4πρ)^1/2 u is the field strength corresponding to energy equipartition with turbulence. A parameter, β, has been inserted to account for imprecision in both theory and observational inference. §.§ Mean field Spiral galaxies also contain magnetic field components that are coherent on scales up to the system size. This large-scale component (sometimes called the mean or regular field) can generally be explained by appealing to mean-field dynamo action <cit.>.
Below we apply α-Ω dynamo theory including a nonlinearbackreaction that quenches the dynamo and leads to saturation <cit.>.We make use of an analytic solution of the mean-field dynamo equations that are averaged over zand neglect temporal and azimuthal derivatives. General expressions that allow for a finite mean vertical velocityare derived in <cit.> and can be found in Section <ref>. In the absence of mean vertical outflow or inflow,the strength of the mean magnetic field in the saturated state is given by≡ ||= Kπ l/h[(D/D -1)R_κ]^1/2B,where K is a factor of order 0.1–1that accounts for theoretical uncertainty <cit.>, D is the dynamo number,subscripts `k' and `c' refer to the kinematic (≪ B) and critical(no growth or decay) values, and D>D (supercritical dynamo);if D≤ D then we instead adopt =0. The critical dynamo number is given byD= -(/2)^5,and the kinematic dynamo number byD= R_ R_Ω,with Reynolds-type dimensionless numbersR_≡α h/η,R_Ω≡ -qΩ h^2/η,R_κ≡κ/η.whereα = -1/3τ ·,the turbulent diffusivity of B is given by η= 1/3τ u^2,and κ is the turbulent diffusivity of the quantity α= 1/3τ·/4ρ,which becomes important in the nonlinear regime owing to the backreactionof the field onto the flow. The total contribution to the α effect is obtained by summing α and α. We approximate α as <cit.>α=C_ατ^2u^2Ω/h,min(1,C'_α h/C_ατ u)≥Ωτ; C_ατ u^2/h,min(Ωτ,C'_α h/C_ατ u)≥1;C'_α u,min(Ωτ,1)≥C'_α h/C_ατ u,where C_α and C'_α are constants of order unitythat account for a lack of precision in the modeling; note that C'_α u acts as a ceiling on α. The pitch angle ofis given by tan p_B= _r/_ϕ = ^2/4R_Ω= -^2 τ u^2/12 q Ωh^2,defined such that -/2<p_B≤/2 with p_B<0 for trailing spirals (with respect to the galactic rotation).§ INTERSTELLAR TURBULENCE MODEL Magnetic field solutions depend on the root-mean square turbulent velocity u,the turbulent correlation time τ and length l,which are either very challenging (u and l)or impossible (τ) to measure directly from observations. The quantity u can be inferred from the line-of-sight velocity dispersion,which is commonly observed in galaxies, but the data is contaminated by contributions from thermal motions, cloud motions and outflows, which are difficult to separate out <cit.>. [Seefor a review of methods to study interstellar turbulence using observations.] Therefore, we seek solutions for these turbulence quantities in terms of accessible observable quantities.Supernova (SN) feedback is generally believed to be the dominant driver of turbulencein nearby galaxies <cit.>. For SN-driven turbulence, u, τ and l have been estimated using an analytic model that considers turbulence to be simultaneously driven by isolated SNeand superbubbles (SBs) formed by multiple SNe in OB associations <cit.>. In the model, a fraction f of SNe are assumed to contribute to SBs.Here we neglect SBs by setting f=0,but we include a summary of the general model (f0) in Appendix <ref>. §.§ Correlation lengthFor f=0, equation (<ref>) for the turbulent correlation length becomesl= (Γ-1/Γ)C_l l=3/10l,where we have assumed Γ=5/3 and C_l=3/4 (see Section <ref> and Table <ref>), and where l is the driving scale of turbulence driven by isolated supernovae. The quantity l is equal to the radius of the SN remnant (SNR) when its age is t,defined as the time at which the SNR expansion speed becomes equal to the ambient sound speed,Ṙ= c. 
At this time the SNR is assumed to fragment and merge with the interstellar medium (ISM),transferring its energy to the latter.This givesl= R(t)= 0.14 ψ E_51^16/51n_0.1^-19/51c_10^-1/3,where E_51=E/10^51 is the SN energy (excluding that in neutrinos),n_0.1=n/0.1 is the gas number density,and c_10=c/10. A dimensionless parameter of order unity, ψ,is introduced in the present workto account for the uncertainty in the model. To convert from mass density to number density,we have used the expressionn= ρ/μ,with mean molecular mass μ=14/11. §.§ Root-mean-square turbulent velocityThe root-mean-square turbulent velocity, u,is estimated by equating the energy injection rate per unit volumefrom SNe with the dissipation rate per unit volume ∼ρ u^3/2lowing to the turbulent energy cascade. With f=0, equation (<ref>) becomesu= ( 4/3l^3l c^2ν)^1/3,with l given by equation (<ref>) and l by equation (<ref>). §.§ Correlation timeWe estimate the correlation time τ as theminimum of the turnover time τ of energy-carrying eddies, and the time τ for the flow to renovate due to the passage of an SN blast wave,τ=min(τ,τ).The eddy turnover time (comparable to the lifetime of the largest eddies)is estimated as τ=l/u,where l is given by equation (<ref>) and u is given by equation (<ref>) for the case f=0. For this case, the renovation time is given byτ= τ= 3/4 l^3ν= 6.8 1/4ν_50^-1E_51^-16/17n_0.1^19/17c_10.Equations (<ref>) (or <ref>), (<ref>) (or <ref>) and (<ref>), can be used wherever the quantities l, u and τ appearin the equations of Section <ref>.§ FORMULATING THE EQUATIONS IN TERMS OF THE OBSERVABLES We must still obtain expressions for the SN rate per unit volume ν(r),gas density ρ(r),disk scale height h(r) andsound speed c(r) in terms of the observable quantities. Following <cit.>, we writeν= δΣ/2hm_⋆where Σ/2h is the mean star formation rate density averaged across the gas disk, δ≈8×10^-3 is the fraction of stars that evolve to SNe for the initial mass function (IMF) of <cit.>,and m_⋆=0.85 is the average stellar mass for this IMF (other choices of IMF would lead to small changes in δ and m_⋆).For an exponential or uniform density disk of scale height or half-thickness h,we haveρ= /2h. Approximating the ISM as a uniform ideal gas, we write the sound speed asc= (γ T/μ)^1/2.A reasonable choice for the adiabatic index is γ=1.5 <cit.>.The scale height h can be estimated from vertical hydrostatic balance <cit.>. We useh≈w^2/3 G(+/ζ)≈ζw^2/3π G,where w≡(u^2+A_2^2c^2)^1/2. Here A_2^2=2 for γ=3/2,is the surface density of stars, σ_⋆ is the 1D velocity dispersion of stars, ζ (formally equal to √(3)σ_⋆/w above) is a parameter that allows for uncertainty in the model, and the last equality of (<ref>) assumes that /ζ≫ and that stars dominate the total surface density of the disk , which includes stars, gas and dark matter. [In practice, we only require that ∝, since ζ can be rescaled.] A similar expression – but with u^2 replacing w^2 – is motivated in <cit.>. [See also the expression for the total midplane pressure in , which would lead to a similar expression for the scale height to that of <cit.>.] Equation (<ref>) is a generalisation of their formula that includes the thermal pressure, P=ρ c^2/γ≈23ρ c^2 (if γ=3/2), in addition to the turbulent pressure P=13ρ u^2. 
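As an illustration of this chain of relations, the sketch below evaluates h, ρ, n and ν in SI units in the subsonic limit (so that w≈√2 c); the explicit k_B/m_H factors in the ideal-gas sound speed are written out here, and all input numbers are hypothetical, Solar-neighbourhood-like values chosen only for illustration.

import numpy as np

G, kB, mH = 6.674e-11, 1.381e-23, 1.673e-27
Msun, pc, Myr = 1.989e30, 3.086e16, 3.156e13

# Hypothetical input observables (illustrative only)
Sigma      = 10.0 * Msun / pc**2            # gas surface density
Sigma_star = 35.0 * Msun / pc**2            # stellar surface density
Sigma_SFR  = 3.0e-3 * Msun / pc**2 / Myr    # star formation rate surface density
T          = 1.0e4                          # gas temperature [K]

gamma_ad, mu, zeta, delta, mstar = 1.5, 14.0 / 11.0, 1.0, 8.0e-3, 0.85 * Msun

c   = np.sqrt(gamma_ad * kB * T / (mu * mH))         # sound speed
w   = np.sqrt(2.0) * c                               # A_2**2 = 2, subsonic limit
h   = zeta * w**2 / (3.0 * np.pi * G * Sigma_star)   # gas scale height
rho = Sigma / (2.0 * h)                              # midplane gas density
n   = rho / (mu * mH)                                # number density
nu  = delta * Sigma_SFR / (2.0 * h * mstar)          # SN rate per unit volume
print(h / pc, n * 1e-6, nu * pc**3 * Myr)            # [pc], [cm^-3], [SN pc^-3 Myr^-1]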
[For example, naively comparing stellar velocity dispersion data <cit.>with gas velocity dispersion data <cit.> for two different sets of galaxies suggestsa mean ratio of stellar to gas 1D velocity dispersions in the range 5–10.]The equations of Sections <ref>, <ref> and <ref> can be used to calculate the vertically and azimuthally averagedturbulence and magnetic field properties at a radius r, in terms of commonly observed quantities.§ SCALING RELATIONS We now derive scaling relations for key turbulence and magnetic quantities. These can be obtained using straightforward algebra, but we have provided the link to a tool which facilitates this in Section <ref>. More specifically,we present values of the power law exponents, though expressions for the proportionality coefficients can also be derived if needed. Adjustable parameters ζ, ψ, C_α, K and βare assumed to be constant or to vary weakly between galaxies, so the exponents do not depend on them. We set A_1=A_2≡ A for simplicity and restrict our analysis to the asymptotic cases ≪ A and ≫ A,so that w can be replaced, respectively, by c or u in equation (<ref>), but we comment on the regime ≈ A in Section <ref>.To obtain scaling relations for b and B, additional approximations are necessary. In equation (<ref>) for b, we assume that qΩτ/2 ≪ 1. This condition is likely satisfied in the Solar Neighbourhood, for example, where qΩτ≈0.14 assuming Ω=27.5, q=1 and τ=5 <cit.>. Equation (<ref>) then simplifies tob= (2/3qΩτ)^1/2b,where b is still given by equation (<ref>).In order to be able to write down a scaling relation for B , we assume min(1,C'_α h/C_ατ u)≥Ωτ, so that, from equation (<ref>) we have the standard result<cit.>α= C_ατ^2u^2Ω/h.In addition, we assume that the dynamo is highly supercritical, i.e. D/D≫1.This assumption is generally satisfied in the inner parts of galaxies.With these assumptions, equation (<ref>) simplifies to=K l/h(D R_κ/D)^1/2B,withD = -9C_α q h^2Ω^2/u^2,where we have made use of equations (<ref>), (<ref>), (<ref>) and (<ref>). Note that D∝ qΩ^2,so there is some tension between assuming both D≫ D and qΩτ/2≪1.Both assumptions may be satisfied simultaneously for small τ. Upon substituting equations (<ref>) and (<ref>) in equation (<ref>), we find that ∝ lΩ (C_α qρ R_κ)^1/2. Simulations suggest R_κ≈0.3 <cit.>, while a recent analytic calculation gives R_κ=21(1+ξ̃)/27 with ξ̃=b^2/B^2<cit.>. In the present work R_κ is assumed to be constant.We now have all of the expressions needed to obtain scaling relations forh(r), u(r), l(r), τ(r), b(r), b(r), (r) and p_B(r) in terms of the observables Σ_⋆(r) or (r), Σ(r), Σ(r),Ω(r), q(r) and T(r). However, the scaling relations will dependon whether > or < (equation <ref>), and on whether u≪ A c or u≫ A c (equation <ref>). For transonic turbulence we might expect scaling relation exponents in between those for the subsonic and supersonic cases. §.§ Alternative turbulence modelsTurbulence driving in galaxies is still not very well-understoodand there is still a lack of consensus, e.g., about the driving scale(s). Thus, we choose to explore the implications of making different assumptions. Model S is our fiducial modeland uses the framework summarized above for turbulence driven by expanding isolated SNRs. We include two simpler models for comparison.Model Alt1 is our minimalistic model,which assumes l∝ h, u∝ c and τ∝ l/u (in deriving the scaling relation exponents,equalities can be replaced with proportionalities). 
Model Alt2 is our hybrid model and assumes l∝ h and τ∝ l/u,but retains equation (<ref>) for u,which permits two asymptotic regimes (≪ A and ≫ A), as in Model S. The python tool mentioned in Section <ref> can be used to explore a wider set of combinations of assumptions about turbulence. §.§ ExpressionsScaling relations for Models S, Alt1and Alt2 are given in Tables <ref> and <ref>. Where reduced fractional forms of exponents are cumbersome to write, we have rounded to decimal values. We choose to write the scaling relations in terms ofrather than , but we refer to them somewhat interchangeably below. Note that the dependency on ζ is the same as the dependency on 1/. §.§ Relations between observablesAn important caveat is that the observables q(r), (r), (r), (r), Σ_⋆(r) and T(r)are not mutually statistically independent.For example, the star formation rate and gas surface densitiesare typically related by the Kennicutt-Schmidt (KS) law,∝^N,with N≈1.4 <cit.>. As this relation is rather tight and universal, we have used the KS law to replaceby ^1/N in Tables <ref> and <ref>,eliminating the explicit dependence on . We focus on Tables <ref> and <ref> in the discussion below.Other correlations between our observables have also been measured, like the so-called spatially resolved star-forming main sequence relatingand Σ_⋆ <cit.>. However, this relation shows quite a lot of scatter. Moreover, the power law exponent, lnΣ_⋆/ln, is not well-constrained,and may be sensitive to resolution <cit.>. For this reason, we treat ≈Σ_⋆ andas mutually independent quantities.§.§ Consistency of models across regimesIn Model S (SN-driven model), the exponents in Tables <ref> and <ref> are remarkably consistent when transitioning between the four different physical regimes. Thus, the results for Model S are fairly insensitive to variations in the Mach number and the ratio of the eddy turnover and renovation times. By contrast, in Model Alt2 (hybrid model) the rms turbulent velocity u has very different power law exponents for the two limiting cases(≪ A and ≫ A). This implies a sharp transition in the transonic regime,which seems unlikely from a physical standpoint, lending support to the claim that Model S is the most realistic among the models.§.§ Comparison with observationsWe now compare our results with previously published observations,continuing to focus on Tables <ref> and <ref>,which take into account the KS law, i.e. the empirical relation betweenand(equation <ref>).§.§.§ Turbulent velocityAs seen in Table <ref>,Model S predicts that the root-mean-square turbulent velocity u depends very weakly onand depends weakly onand T. One might then expect the variation in the velocity dispersion within a galaxy or between galaxies to be weak. This is borne out in the data.Based on second moment maps of THINGS galaxies <cit.>,<cit.> find the 1D velocity dispersion for THINGS galaxies to be 11±3. The gas velocity dispersion within a given galaxyis also fairly constant <cit.>. Thus, the predicted weak dependence of u on ,and Tis qualitatively consistent with observations at low redshift.From Table <ref>,Model Alt1 predicts u to be independent of the other observables except T,where the T^1/2 dependence is stronger than for Model S. 
By contrast, Model Alt2 predicts a strong dependence on the various observables as well as different signs of the exponents for the subsonic and supersonic cases, and thus disagrees with observations.§.§.§ Relationship between field strength and It has been observed in many galaxies that the total magnetic field strength correlates with , and many authors have obtained values for j in the relation B∝^j. <cit.> find j=0.34±0.08, assuming energy equipartition between magnetic field and cosmic rays. <cit.> finds a tight correlation looking at the spatial variation of B andin the Virgo Cluster spiral galaxy NGC 4254, and obtains j=0.18±0.01.They also find a tight correlation between the random component (akin to b in our model)and , with a power law exponent of 0.26±0.01. <cit.> find j=0.30±0.04 for dwarf irregular galaxies. <cit.> infer j=0.30±0.02 for 17 THINGS galaxies. <cit.> find j=0.19±0.03, using results that assume energy equipartition. <cit.> interpret <cit.> and other observations and assume energy equipartition to obtain j=0.15±0.06. They show how an exponential profile for (r) – seen in many disk galaxies –can be explained by their model as long as 0.1≲ j≲0.2.For the galaxy NGC 6946, <cit.> obtain j=0.14±0.01, and, if B is replaced by the random component,they obtain an equally strong correlation with a slightly different exponent of 0.16±0.01. <cit.> obtain j values between 0.03 and 0.92,with a lot of scatter and a best fit value of either 0.13±0.09 or 0.44, depending on the statistical technique adopted. There has also been a flurry of very recent work.<cit.> find j=0.182±0.004, but with considerable scatter,or j=0.22 when they use the mean values of each of the 12 galaxies considered. However, when they account for cosmic-ray transport, they infer a value of j=0.284±0.006 instead of 0.182±0.004. These authors also report that B is found to be more tightly correlatedwith the gas surface density Σ= Σ_HI+Σ_H_2 than with . <cit.> study the galaxy M33 and find bi-modal behaviour.For the high-SFR regime they obtain j=0.25–0.26 and for the low-SFR regime they obtain j=0.10–0.12. <cit.> obtains j=0.35±0.03 for the dwarf irregular galaxy IC 10. Finally, <cit.> obtain j=0.31±0.06 for a sample of seven galaxies, using the equipartition assumption.In the present work we have derived expressions forb, b and B, and scaling relations are presented in Table <ref>. Observers generally infer that the unpolarized component of the magnetic fielddominates in most galaxies. Moreover, when a mean component is modeled from the observations,this component is usually determined to be quite weak <cit.>. Thus, observations seem to suggest that most of the polarized emissionis not due to the mean field,but to an anisotropic small-scale component. At face value, this would imply that, typically, b>b>B.However,the mean field may often be underestimated if, as seems likely,it contains scales that cannot be resolved.Examining the scaling relations for b, b and Bin Table <ref> for Model S, we see that they are related toby power laws with exponentsin the ranges 0.34–0.39, 0.21–0.29 and 0.09–0.10, respectively. Thus, ifis the dominant field component, a suitable weighted average might give j∼0.30,whereas if B is the dominant component, we might expect j∼0.15. 
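This interpolation between the two limits can be made explicit with a small numerical toy of our own construction: take representative Model-S exponents from the ranges just quoted (0.37 for the random field, 0.10 for the mean field), assume the components add in quadrature (our assumption), and measure the local logarithmic slope of the total field.

```python
import numpy as np

# Representative Model-S scalings (exponents from the ranges quoted above);
# r = Bbar/b at the reference Sigma_SFR is a free toy parameter.
def total_field(sfr, r, p_rand=0.37, p_mean=0.10):
    b    = sfr**p_rand            # random component, b ∝ Sigma_SFR^0.37
    Bbar = r * sfr**p_mean        # mean component,   Bbar ∝ Sigma_SFR^0.10
    return np.sqrt(b**2 + Bbar**2)   # quadrature sum (our assumption)

sfr = np.logspace(-1, 1, 201)        # Sigma_SFR in units of its reference value
for r in (0.3, 1.0, 3.0):
    j_eff = np.gradient(np.log(total_field(sfr, r)), np.log(sfr))
    print(f"Bbar/b = {r:>3}:  effective j ranges over {j_eff.min():.2f} - {j_eff.max():.2f}")
```

Random-field dominance pushes the effective exponent towards ≈0.35, mean-field dominance towards ≈0.1, bracketing the observed j values collected above.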
While a detailed comparison between theory and observation is clearly not possible here, the ranges agree rather well.On the other hand, Model Alt1 predicts j=0.36,which is reasonably consistent with observations, but Model Alt2 predicts an exponent 0.69 for b, 0.52 for b and a range 0.02–0.36 for B, which are too high to explain observations.A theoretical model by <cit.>(see also ) predicts B∝^1/3, as does the model by <cit.> (see also ).Those models have similarities with one another. They are also somewhat similar to Model Alt2a of the present work, but their prescriptions for the gas scale height are different from ours.§.§.§ Relationship between field strength and density<cit.> obtain B∝ρ^0.40±0.09 for a sample of seven galaxies. Using equations (<ref>), (<ref>), (<ref>) and (<ref>),we find that b is almost independent of ρ for Models Sa and Sc, but for Models Sb and Sd we obtain a power law exponent of 1/2 since u cancels out in equation (<ref>) with B given by equation (<ref>). For transonic turbulence, we would therefore expect a positive exponent somewhere in between 0 and 1/2, so Model S is broadly consistent with observations. For Model Alt1, b∝ρ^1/2 since u is independent of ρ. For Model Alt2a, l∝ h, which is independent of ρ,so b∝ρ^1/2,while for Model Alt2b, u cancels so that again b∝ρ^1/2.Thus, predictions of Models S, Alt1 and Alt2 all broadly agree with the empirical relation.§.§.§ Relationship between field alignment and In the study of NGC 4254 mentioned above, <cit.> also foundthat the ratio of the field inferred from polarized emission (comprising mean and anisotropic turbulent components) to that inferred from total emission is negatively correlated with ,i.e. ∝^-m, with m=0.32±0.01. More recently, <cit.> measured the same relation, but using data from several galaxies. They find a tight correlation with m=0.22 or 0.24, depending on the fitting method.To make a comparison with the model,let us first assume that the polarized emission is dominated by b and the total emission by b and consider the ratio b/b using results from Table <ref>.For Model S, we obtain m=0.13, 0.13, 0.11 and 0.10, respectively. Therefore, the correct sign of the exponent is reproduced in the model,but its magnitude is lower than the observational values. For Model Alt1, we obtain m=0, which is in disagreement with the observations. For Model Alt2, we obtain m=0.17,which is quite close to the observational values.However, if B actually dominates the polarized emission,then we should use B/b instead of b/b. In this case, Model S predicts values of m in the range 0.25-0.29,which is very close to the observed values. On the other hand, Model Alt1 again predicts m=0,while Model Alt2a and Alt2b predict m=0.33 and m=0.67, respectively.§.§.§ Other dependencies of magnetic field properties<cit.> explored the dependencies of the total field strength,mean field strength and mean field pitch angle,as obtained from observations(supplemented by the usual assumption of equipartition between magnetic and cosmic ray energies). The observables they explored as independent quantities were the surface density Σ_I,molecular gas surface density Σ_2,their sum Σ_I+Σ_2, , Ω, qΩ and qΩ^2. They also looked for statistical relationships between B, B and p_B. Of these 24 pairs of variables, 10 were found to exhibit correlations. We now look at each of these and compare them with the findings in Table <ref>.As mentioned in Section <ref>, <cit.> obtained B∝^0.19±0.03,with B the strength of the total field. 
They also obtained B∝Σ_2^0.21±0.04 and B∝(Σ_I+Σ_2)^0.24±0.07. These latter two can be interpreted as a confirmation of Schmidt/KS laws,given the relationship found between B and . However, the value of N implied is slightly lower than 1.4.The fourth and fifth correlations obtained were between B and qΩ (power law not given)and B∝(qΩ^2)^0.14±0.04. If the total field strength were dominated by b,then Table <ref> would predict no dependence,i.e. no correlation.If dominated by B, then Table <ref> would predict a power of 0.5.Thus, the observed value of 0.14±0.04 is intermediate between these values, suggesting that the mean field tends to be significant, but not dominant.The dependence of b on q and Ω(but not explicitly on qΩ^2 in the model) likely also plays a role.The sixth, seventh and eighth correlations found by <cit.>were tan p_B ∝Σ_2^-0.10±0.08, tan p_B ∝(Σ_I+Σ_2)^-0.25±0.13 and tan p_B∝^-0.15±0.09. These are approximately related by the Schmidt/KS law, as expected,so let us consider only the last relation. The exponent is similar to that predicted for Model S in Table <ref>, where we obtain -(0.13–0.29),depending on the exact regime (τ/τ<1 or >1, and ≪1 or ≫1). On the other hand, Model Alt1 predicts no dependence and Model Alt2 predicts an exponent equal to positive 1/3, so Model S is compatible with observations, but Models Alt1 and Alt2 are not.§.§.§ Relation between mean and total field strengthsThe ninth correlation found by <cit.>can be described by the power law B∝ B^0.76±0.23. Given that the powers of the observables generally have the same sign in Model Sfor b, b and B (Table <ref>), one would expect the scaling between B and B to have a positive exponent, consistent with the above observation.The exception is the power of the gas temperature T. However, T does not usually vary as much as some of the other parameters in galaxies, where warm gas is the dominant volume-filling phase of the ISM,and retains a temperature of order 10^4. In addition, one might expect the power to be <1, as observed,since the exponents in the scaling relation for B are generally smaller than those in the scaling relations for b and b,but not very small because B itself contributes significantly to B. Therefore, Model S again gives results that are broadly consistent with observations. For Models Alt1 and Alt2, the exponents oftend to change sign when going from B to b, so the correlation between B and B seems harder to explainusing these alternative models.§.§.§ Relation between mean field strength and pitch angleThe final correlation detected by <cit.> – between Band p_B – was the most statistically significant. Though the power law fit was not provided, the magnitude of p_B was found to decrease as B increases. This trend was confirmed by <cit.>, using updated data. Given this negative correlation,one might expect the exponents in the scaling relations for B and p_Bto generally have opposite signs. The powers of q, Ω andhave opposite signsin the expressions for B and tan p_B of Model S, while those ofhave the same sign, and those of T are the same in three of four cases. Thus, more work is needed to understand this empirical relation using dynamo models.§.§.§ Cases where no empirical scaling relation was foundFor the remaining 14 relationships explored by <cit.>,no statistically significant correlation was found. Can these null results be explained by Model S? One such result was that B was not found to depend significantly on any of, Σ_I, Σ_2, Σ_I+Σ_2, Ω, qΩ or qΩ^2. 
Model S predicts only a weak dependence on most of these,but one would expect to find B∝ (qΩ^2)^1/2 in all of the models. Moreover, tan p_B would be expected to scale as (qΩ)^-1 in all the models, whereas no such correlation was found by <cit.>. More work is needed to understand why models and data seem to disagree about this particular aspect.§.§.§ Arm-interarm contrast of random and mean fieldsModel S predicts (Table <ref>) that for loweror ,the ratio B/b should increase. This suggests that the field should be coherent on larger scales in the interarm regions, whereandare smaller than in the arms. When the arm-interarm contrast of B/b can be estimated from observations this ratio is inferred – particularly for the galaxy NGC 6946 where the data is of good quality – tobe higher within the interarm regions <cit.>, consistent with our model predictions. On the other hand, Model Alt1 predicts that B/bshould decrease withbut not depend onand Model Alt2 predicts that B/b should decrease withbut not depend on (increase with)if ≪ A (≫ A). §.§ Comparison with simulations<cit.> is the only work of which we are awarethat obtains such scaling relations from direct numerical simulations <cit.>.These authors employ simulations of SN-driven turbulence to study dynamo action in a small section of a galaxy. They provide scaling relations for a few quantities, based on nine simulations. In seven of their runs, the final magnetic field energy is of order the turbulent energy, but in the other two the simulation ends when B≪ B, so those runs are less relevant. They also find that both vertical outflows and diamagnetic pumping are important, but these effects have been neglected in deriving the scaling relations found in the present work.Given the differences between their model and ours,it is perhaps most useful to compare results for the turbulent diffusivity η because this quantity is not expected to be sensitive to details such as the dynamo saturation mechanism <cit.>. <cit.> find η∝σ^0.4n^0.4Ω^-0.55, where σ is the star formation rate density in dimensions of [T]^-1[L]^-2, and, as above, n and Ω are the number density of gas and the rotation angular speed,respectively. The value of σ is determined from their input SN rateassuming a constant initial mass function,so we can replace σ with ,resulting in η∝^0.4n^0.4Ω^-0.55.To compare this result with our models, we derive scaling relations using equation (<ref>), the scaling relations of Table <ref>, and equation (<ref>), and assume ρ/n=. We use Table <ref> rather than Table <ref> because the supernova rate density is constant in the <cit.> simulations.Neglecting the temperature dependence,Models Sa-d respectively give η∝^1/3n^-0.9^0.4, η∝^0.2n^-0.7^0.2, η∝^-1/3n^0.1^-0.3 and η∝ n^-0.1^-0.4(where in the last relation we have neglected the very weak dependence on ). Models Alt1, Alt2a and Alt2b give η∝^-1, η∝^1/3^-2 and η∝^-1^2, respectively.Given that Ω depends on the gravitational potential,Ω andare not completely independent from one another. Thus, the absence of Ω in our scaling relations for ηdoes not necessarily imply disagreement with <cit.>. For this reason, let us focus onand n. In Models Sa, Sb and Alt2a, we findexponents of 1/3, 0.2 and 1/3, respectively,which are close to the value of 0.4 obtained by <cit.>. In the other models, the agreement is poorer, with exponents -1/3, ≈0, 0 and -1, for Models Sc, Sd, Alt1 and Alt2b, respectively. 
Now turning to the dependence on n, we obtain a positive exponent only for Model Sc (0.1),whereas other models give exponents ranging from -0.9 to 0,which is in disagreement with the value of 0.4 obtained by <cit.>. We can conclude that the level of agreement between our predicted scaling relationfor η and that found by <cit.> is rather poor.However, the dependence onagrees quite well in Model S whenever < and in Model Alt2a, where ≪ A (althoughreport mildly supersonic turbulencefor the simulations of that paper, so Alt2b is probably more relevant than Alt2a,but there the model predicts a dependence ondifferent from that seen in the simulations). In addition, <cit.> report that the ratio of the strengthof the mean field to that of the random fieldis related to the SN rate (SFR density) as B/b∝^-0.38±0.01, while for similar simulations by <cit.> the authors report B/b∝^-0.30±0.07. Generally, we expect b>b <cit.>,so we focus on the ratio B/b,using Table <ref>. For Model Alt1 we find no dependence of B/b on . For Models Alt2a and Alt2b we find B/b∝^-1/3 and B/b∝^-2/3, respectively. For Models Sa and Sc, we find B/b∝^-1/3, and for Models Sb and Sd we find B/b∝^0.37. Thus, for Models Alt2a, Sa and Sc (all of which have ≪ A) the agreement is good,for Model Alt2b the agreement is fair, and for Models Sb and Sd the agreement is poor. Differences may be partly attributableto the different physical assumptions between our models and those of <cit.>. However, there is another important caveat,namely that the separation of the magnetic field into random and mean componentsmay be sensitive to the method of averaging. The mean-field dynamo theory on which our models are founded assumes ensemble averaging, while <cit.> use horizontal averaging.§ SUMMARY AND CONCLUSIONSWe have presented a model for the turbulence parameters and magnetic field properties of disk galaxies that takes as inputvarious observables. The set of algebraic equations comprising the modelcan be solved semi-analyticallyusing the source code linked in Section <ref>. Solutions depend on the galactocentric radiusand represent averages over the azimuthal and vertical coordinates. The model rests on the assumption that the system is in a statistically steady state, and evolution with cosmological redshift is neglected. A list of the quantities and parameters in the modeland the equations for computing them can be found in Table <ref>. §.§ Scaling relations We used the model to derive scaling relations for the gas scale height, key turbulence parameters and magnetic field properties, in terms of the observables. These are relations of the form X∝ x^ay^bz^c⋯,where X is a quantity of interest, x, y and z are observable quantities, and a, b and c are constants. Scaling relations are useful for placing priors on missing informationin datasets and for providing physical insight, and are ubiquitous in astrophysics. The scaling relations can be derived analytically, but we provide a link to a general numerical tool in Section <ref>. To reduce the solutions to scaling relations,we focused on certain plausible asymptotic regimes.Most importantly, we assumed that turbulence is driven purely by isolated SNe and that mean (large-scale) vertical and radial gas motions can be neglected. 
In deriving scaling relations for the mean magnetic field strength,we assumed that the Coriolis number is smalland that the dynamo is strong (Section <ref>),but these assumptions were not necessaryfor deriving scaling relations for the pitch angle of the mean field.These scaling relations can be found in Tables <ref> and <ref>. Our fiducial model considers turbulence to be driven by isolated SNRs, as they decelerate to the sound speed and merge with the ISM. Given the lack of consensus about turbulence driving in the ISM,we also considered two simpler prescriptions for calculating the turbulence parameters,as summarized in Table <ref>.The predictions in Tables <ref> and <ref>do not consider possible correlationsbetween the observed quantities on the right side of the scaling relations. Given the well-known empirical relation betweenand(the Kennicutt-Schmidt law), we used ∝^N with N=1.4 to eliminate the dependence on ,and the results are presented in Tables <ref> and <ref>.The theoretical scaling relationsin Tables <ref> and <ref> were then compared with empirical scaling relations in the literature(Section <ref>). The level of agreement between the model predictions and observations is remarkably good for our fiducial model, given the various theoretical and observational uncertainties. The level of agreement is generally poorer for the alternative turbulence prescriptions we tried. This can be taken as further evidence that turbulence driving in nearby galaxies is dominated by SN feedback.It also suggests that assuming that turbulence is drivenat the disk scale height can lead to incorrect results, and points to a need for modeling turbulence driving in more detail,taking into account the dynamics of SNRs. §.§ Limitations and future work Our model should be thought of as an adaptable tool for understanding galactic magnetic fields rather than a fixed set of formulae.This tool could be extended by including additional physical effects. Outflows and SBs (which tend to cause outflows) are already included in the most general versionof our model, summarized in Appendix <ref>, but including these effects introduces extra parameters that are challenging to constrain. SBs may be unlikely to dominate turbulence drivingbecause they tend to blow out of the disk,which limits the amount of energy they transfer to the ISM <cit.>. Likewise, rough estimates suggest that mean outflow speeds may often be too small to strongly affectmean-field dynamo action <cit.>. However, such findings are preliminary and more work is needed on the roles of SBs and outflows in turbulence driving and magnetic field evolution. Furthermore, our model does not include radial inflow <cit.>,which may affect the mean-field dynamo <cit.>and play a role in driving interstellar turbulence,though primarily at high redshift <cit.>. On the MHD side,we have not included the turbulent tangling of the mean field to produce random field, nor additional effects – still not very well-understood – involving the influence of the random field on the mean-field dynamo<cit.>. Furthermore, the equations for the turbulence parameters, gas scale height, etc.,do not take into account the magnetic field. This is hard to avoid given the lack of current knowledge about such feedback effects, and attempting to include them would make the model more complicated without providing much benefit, given the various uncertainties. Nevertheless, such effects may be important <cit.>. 
In some cases, they can be roughly included in our existing model by choices of parameter values; for example, the effect of the magnetic pressure on the gas scale height can be included heuristically by increasing the value of ζ in equation (<ref>).Our model does not consider cosmological evolution, but there is scope for extending the model to include high redshift galaxies <cit.>.There is also scope for combining our model with galaxy modelsthat rely on certain magnetic field parameters as input <cit.>, in order to calculate those parameters self-consistently.Modelling scaling relations can be complicated by correlations between the observables. While we used the KS relation to remove Σ as an independent variable, we made no attempt to incorporate the resolved star-forming main sequence relation betweenand , for instance. If this could be used to eliminate ,it would affect the -dependence of the scaling relations derived. Nor did we attempt to relate the angular rotation speed Ω with the disk surface density , for example, though they are not completely independent. Another caveat is that parameters like ζ may vary somewhat between galaxies, which would introduce scatter.Observationally derived scaling relations for magnetic fields are now plentiful and will improve as new instruments like the Square Kilometre Array become available. This increases the urgency of studying such scaling relations theoretically,and in our view various complementary approaches can be utilized. For instance, one could make use of a population synthesis model that solves for the magnetic fields of galaxies,using input from a semi-analytic model of galaxy formation <cit.>. Second, one could run several local ISM simulations to explore the parameter space, building on the work begun by <cit.>. Third, one could explore scaling relations using cosmological zoom MHD simulations <cit.>. Given the heterogeneous nature of galaxies,such a multi-pronged statistical approach may be very useful for learning about galactic magnetic fields and the various processes that shape them.§ ACKNOWLEDGEMENTSWe are very grateful to Anvar Shukurov for providing detailed comments on the manuscript and for many useful discussions about galactic magnetism and interstellar turbulence. We also thank Jennifer Schober and Luiz Felippe S. Rodrigues for discussions.§ DATA AVAILABILITYThere are no additional data to report. aasjournal§ GENERALIZATIONS OF THE MODEL Above, we neglected galactic outflows. Applying an expression for the mean outflow speed U derived in <cit.> (see also ), <cit.> found outflows to affectnegligibly the mean magnetic field properties for the five galaxiesfor which the data to compute this quantity was available. Even so, outflows may still have important effects,and in Section <ref>, we present mean-field dynamo equations for the case U0.If turbulence driving by SBs is important, the mean magnetic field then also depends on the fraction of SNe that contribute to SBs, f,the number of SNe per SB, N, and the energy efficiency of SBs, ϵ,which in turn may depend on each other and on other parametersin ways that are difficult to constrain with current knowledge. Thus, above, we neglected SN clustering, i.e. we adopted f=0.find that this assumptiondoes not drastically alter the values of the turbulence parameters,typically making τ about 2–3 times smaller and l about 2 times smallercompared to the fiducial case f=3/4,whereas u is similar in the two cases. 
Nevertheless, SBs may sometimes be important,so in Section <ref> we formulate the equations for f0. The mean vertical outflow speed U is likely affected by SBs,so may itself depend on f, N and ϵ. §.§ Including mean vertical outflowsThis section generalises the magnetic field model of Section <ref>to include a galactic outflow, with outflow speed U.Using an expression from <cit.> we can writeσ_z≃σ_r[1+K_U Uτ/l(1+1/(1+qΩτ)^2)]^1/2,where U is defined to be equal to the mean vertical outflow speed at the disk surface z=± h, with h the scale height of the gaseous disk, and K_U a constant of order unity. As it represents an average for the entire disk,the quantity U should be taken as the mass-weighted, area-averaged outflow speed, and has been estimated to be in the range 0.2–2 for spiral galaxies <cit.>.Substituting equations (<ref>) and (<ref>) into the expressions for b and b,and setting K_U=1, we obtainb≡√(b^2-b^2)= 1/√(3)[2qΩτ(1+qΩτ/2)+Uτ/l(1+1/(1+qΩτ)^2)]^1/2. The mean magnetic field strength in the saturated state is now given by≡ ||= K B χ(p)(D/D -1)^1/2l/h(R_U +^2 R_κ)^1/2.where we make use of the Reynolds-type dimensionless number R_U≡U h/η.The critical dynamo number is now given byD= -(/2)^5(1 +1/^2R_U)^2.Additionally, we haveχ(p)=(2-3cos^2p/2√(2))^-1/2,wheretan p_B= _r/_ϕ= ^2 +R_U/4R_Ω = -^2 τ u^2+6 hU/12 q Ωh^2,and p_B is the pitch angle of ,defined such that -/2<p_B≤/2 with p_B<0 for trailing spirals. Note that χ≈1, depending only weakly on p_B. Given the uncertainties of the model, this dependence is inconsequential, and thus we set χ=1. The quantity K in equation (<ref>) is an adjustable parameter of the model.§.§ Including superbubbles This section summarizes the SN-driven turbulence model of <cit.>. For an overall SN rate per unit volume ν, the rate per unit volume of isolated SNe is given byν=(1-f)ν,where f is the fraction of SNe in SBs,and that of SBs is given byν=fν/N,where N is the mean number of SNe in a given SB. <cit.> estimate that the fraction of SNe occurring in OB associations is ∼ 3/4for the Milky Way, which suggests f∼3/4 for our Galaxy.§.§.§ Turbulent correlation length lThe turbulent correlation length is estimated asl= (Γ-1/Γ)C_l l(1+(l/l)//1+/),where l and l are the driving scales for isolated SNe and SBs, respectively, andandare their respective energy injection rates. For Kolmogorov turbulence, Γ=5/3,and C_l is a constant of order unity that comes from turbulence theory – below we adopt C_l=3/4 <cit.>.For SBs, we identify the driving scale of turbulence with the final radius reached by an SB in the midplane of the galaxy,l≈min[R(t), λ h],where the first case corresponds to deceleration to c and the second case to blowout from the disk. Here t is the SB age for which the SB expansion has slowed to the ambient sound speed (if it has not blown out),λ is a parameter of order unity, andR(t) = 0.53 ϵ_0.1^1/3N_100^1/3E_51^1/3n_0.1^-1/3c_10^-2/3,where 0.1ϵ_0.1is the fraction of the SB energy that is mechanical and N_100=N/100.The ratio of the rates of energy per unit volume injected by isolated SNe and SBs is l^3ν/l^3ν, which gives/=0.63 (3(1-f)f)ϵ_0.1^-1E_51^-1/17n_0.1^-2/17c_10,t≤ t; 1.47 (3(1-f)f)N_100E_51^16/17n_0.1^-19/17 c_10^-1λ^-3h_0.4^-3,t< t.Here t is the age of the SB when it blows out of the disk (if it in fact blows out). The similarity solution for SBs yields, from Ṙ( t)=c,t = 31 ϵ_0.1^1/3N_100^1/3E_51^1/3n_0.1^-1/3c_10^-5/3. 
From R(t)=λ h, we find t= 15 ϵ_0.1^-1/2N_100^-1/2E_51^-1/2n_0.1^1/2λ^5/2h_0.4^5/2,where h_0.4= h/(0.4).§.§.§ Root-mean square turbulent velocity uApplying the method briefly outlined in Section <ref> to the case f0, we obtain the general expressionu= [ 4/3l c^2ν((1-f)l^3+f/Nl^3)]^1/3. §.§.§ Turbulent correlation time τThe quantity τ is the time for the flow to renovate due to the passage of an SN or SB blast wave. The renovation rate is equal to the sum of the rates from isolated SNe and SBs,so the renovation time is given byτ= (1/τ +1/τ)^-1.A generalisation of equation (<ref>) to f0 givesτ= 6.8 (1/4(1-f)) ν_50^-1E_51^-16/17n_0.1^19/17c_10,where ν_50=ν/(50^-3^-1). The renovation time for SBs is equal to 3/(4 l^3ν), which givesτ=4.3(f3/4)^-1ν_50^-1ϵ_0.1^-1E_51^-1 n_0.1c_10^2,t≤ t;9.9(f3/4)^-1ν_50^-1N_100λ^-3 h_0.4^-3,t> t. | http://arxiv.org/abs/2311.15612v1 | {
"authors": [
"Luke Chamandy",
"Rion Glenn Nazareth",
"Gayathri Santhosh"
],
"categories": [
"astro-ph.GA"
],
"primary_category": "astro-ph.GA",
"published": "20231127081132",
"title": "Galactic magnetic fields I. Theoretical model and scaling relations"
} |
| http://arxiv.org/abs/2311.16373v1 | {
"authors": [
"Kang Lu"
],
"categories": [
"math.RT",
"math-ph",
"math.MP",
"math.QA"
],
"primary_category": "math.RT",
"published": "20231127233336",
"title": "Twisted super Yangians of type AIII and their representations"
} |
Model-agnostic Body Part Relevance Assessment for Pedestrian DetectionMaurice Günder1,20000-0001-9308-8889 Sneha Banerjee1,30000-0002-9950-2873 Rafet Sifa1,20009-0004-6680-8210 Christian Bauckhage1,20000-0001-6615-2128January 14, 2024 ==============================================================================================================================================================§ INTRODUCTION The discovery of Hagge's circle by K. Hagge in 1907 <cit.> opened new perspectives in classical geometry (<cit.>, <cit.>, <cit.>, …). In a recent paper <cit.>, Bradley described two generalizations of Hagge's theorems. By means of coordinate calculations, first he proved Let a triangle ABC be given in the Euclidean plane. Let D be a point, not lying on the side lines of the triangle, and let Σ be a circle passing through D. If Σ\{D} meets the circles BCD, ACD, ABD at the points U, V, W, and meets the lines AD, BD, CD at the points X, Y, Z, respectively, then the lines UX, VY, WZ are concurrent. Next he deduced an essentially projective generalization of this result: Let a triangle ABC be given in the Euclidean plane. Let D, E, F be non-collinear points, neither of which lies on a side line of the triangle. If a conic Σ passes through the points D, E, F, and Σ\{D} meets the conics BCDEF, ACDEF, ABDEF at the points U, V, W, and meets the lines AD, BD, CD at the points X, Y, Z, then the lines UX, VY, WZ are concurrent. Theorem 2 indeed reduces to Theorem 1 if E and F are the 'circular points at infinity'. In this note we present synthetic, elementary proofs for both of these theorems. Our proof for Theorem 2 does not rely on Theorem 1 (so we may immediately deduce the first theorem from the second one). The reasoning applied in the proof of Theorem 2 is a substantial refinement of that in the proof of Theorem 1. In fact we show that Theorem 2 is valid in any Pappian projective plane satisfying Fano's axiom.In both proofs we need the following basic facts from projective geometry. Let a Pappian plane be given, satisfying Fano's axiom. Then we have (A) The three pairs of opposite sides of a complete quadrangle meet any line (not passing through a vertex) in the three pairs of an involution. (B) If U, V, W, X, Y, Z are six points on a conic, then the three lines UX, VY, WZ are concurrent, if and only if, (UX), (VY), (WZ) are pairs of an involution on the conic. For a proof we refer to Coxeter's book <cit.>.§ AN ELEMENTARY PROOF OF THEOREM 1 We may interpret the Euclidean plane as a part of its projective closure. The latter is a Pappian plane satisfying Fano's axiom (in fact, it is isomorphic to the real projective plane).Apply an inversion of pole D, denoting the images of points and sets by a prime. Then the sets{D,A',X'},{D,B',Y'},{D,C',Z'},{A',B',W'},{A',C',V'},{B',C',U'}are collinear, therefore the opposite sides of the complete quadrangle A'B'C'D meet the line Σ' at the pairs (U'X'), (V'Y'), (W'Z'). By (A), these are the three pairs of an involution. On the other hand, inversion preserves cross ratio cr, and pairs(P_1P'_1), (P_2P'_2), (P_3P'_3) are pairs of an involution, if and only if, cr(P_1,P_2,P_3,P'_3)=cr(P'_1,P'_2,P'_3,P_3). Thus the involution sends the pairs (U'X'), (V'Y'), (W'Z') to the pairs of an involution on the circle Σ=(Σ')'. Hence, by (B), the linesUX,VY,WZare concurrent.§ AN ELEMENTARY PROOF OF THEOREM 2 In this section we consider a Pappian projective plane 𝐏 satisfying Fano's axiom. First we collect some basic facts concerning the so-called Steiner-correspondence. 
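Before collecting those facts, it is worth noting that Theorem 1 is easy to check on a concrete configuration. The short Python sketch below constructs the points U, V, W, X, Y, Z for one arbitrary choice of triangle, point D and circle Σ, and verifies that the lines UX, VY, WZ are concurrent; the coordinates are chosen by us and the computation is purely numerical, an illustration rather than part of the synthetic argument.

```python
import numpy as np

def circle_through(p, q, r):
    """Centre and radius of the circle through three non-collinear points."""
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    m = np.array([[x2 - x1, y2 - y1], [x3 - x1, y3 - y1]])
    rhs = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                          x3**2 - x1**2 + y3**2 - y1**2])
    centre = np.linalg.solve(m, rhs)
    return centre, np.linalg.norm(np.array(p) - centre)

def other_circle_meet(c1, r1, c2, r2, known):
    """Second intersection of two circles that already meet at `known`."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    d = np.linalg.norm(c2 - c1)
    a = (d**2 + r1**2 - r2**2) / (2 * d)
    h = np.sqrt(max(r1**2 - a**2, 0.0))
    base = c1 + a * (c2 - c1) / d
    perp = np.array([c1[1] - c2[1], c2[0] - c1[0]]) / d
    cand = [base + h * perp, base - h * perp]
    return max(cand, key=lambda p: np.linalg.norm(p - known))

def other_line_meet(centre, through, other):
    """Second point where the line `through`-`other` meets the circle centred at
    `centre` that passes through `through`."""
    p0 = np.asarray(through, float)
    v = np.asarray(other, float) - p0
    s = -2.0 * np.dot(p0 - np.asarray(centre, float), v) / np.dot(v, v)
    return p0 + s * v

def join(p, q):
    """Homogeneous coordinates of the line through two affine points."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

# an arbitrary configuration (D not on the side lines of ABC)
A, B, C, D = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0), (1.5, 1.0)
cS, rS = np.array([2.0, 1.2]), np.hypot(2.0 - 1.5, 1.2 - 1.0)   # circle Σ through D

# U, V, W: second meets of Σ with the circles BCD, ACD, ABD
U = other_circle_meet(cS, rS, *circle_through(B, C, D), known=np.array(D))
V = other_circle_meet(cS, rS, *circle_through(A, C, D), known=np.array(D))
W = other_circle_meet(cS, rS, *circle_through(A, B, D), known=np.array(D))
# X, Y, Z: second meets of Σ with the lines AD, BD, CD
X, Y, Z = (other_line_meet(cS, D, P) for P in (A, B, C))

lines = np.array([join(U, X), join(V, Y), join(W, Z)])
lines /= np.linalg.norm(lines, axis=1, keepdims=True)
print("det =", np.linalg.det(lines))   # vanishes (up to rounding) iff UX, VY, WZ concur
```

For this configuration the determinant is numerically indistinguishable from zero, as Theorem 1 predicts, and the same holds for other choices with D off the side lines. We now return to the basic facts about the Steiner correspondence.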
These suggest that it is a good candidate for a purely projective generalization of inversion. Indeed, Steiner correspondence will play the same role in the proof of Theorem 2 as inversion in the proof of Theorem 1.Let Σ_1 and Σ_2 be two fixed conics in 𝐏. (1) It is known (see e.g. <cit.>) that Σ_1 and Σ_2 have a common self-polar triangle Δ={D_1,D_2,D_3}. For every point P∈𝐏\Δ let p_1 and p_2 be the polars of P with respect to Σ_1 and Σ_2, respectively. Then the mapping𝒮: 𝐏\Δ→𝐏,P↦ P':=p_1∩ p_2is said to be the Steiner correspondence with respect to Σ_1 and Σ_2. If P'=𝒮(P), then we say that the points P and P' are in Steiner-correspondence. Notice that if P is a vertex of Δ, e.g., P=D_1, then p_1=p_2=D_2D_3, so the Steiner correspondence cannot be defined. Over𝐏\{D_1D_2∪D_2D_3∪D_3D_1}𝒮 is involutive, and hence invertible. (2) Let l⊂𝐏 be a line, not passing through any vertices of Δ. We show that the set𝒮(l)={𝒮(P)∈𝐏|P∈ l}is a conic. Let L_1 and L_2 be the poles of l with respect to Σ_1 and Σ_2, and consider the pencils ℒ_i with centres L_i, i∈{1,2}. Then for each point P∈ l the point P'=𝒮(P) can be obtained as the intersection of two corresponding lines in ℒ_1 and ℒ_2. Since there is a projectivity between ℒ_i and the range of all points on l for i∈{1,2}, it follows that we also have a projectivity f:ℒ_1→ℒ_2. Then, by Steiner's characterization of conics, the locus of points m∩ f(m) , m∈ℒ_1 is a conic. Clearly, this conic is just the set 𝒮(l). (3) Observe that the conic 𝒮(l) contains the vertices of Δ, since 𝒮 sends every side line of Δ into the vertex opposite to the side. From the involutiveness of 𝒮 it follows that the image of a conic passing through a vertex of Δ is a line. (4) By the reasoning applied in (2) we can also see that the Steiner correspondence sends the pairs of an involution of points on l to the pairs of an involuion of points on the conic 𝒮(l). (5) Suppose, finally, that the line l⊂𝐏 passes through a vertex D∈Δ. We claim that in this case the image of l\{D} under 𝒮 is a range of points on a line. Indeed, using the same notation as in (2), the poles L_1 and L_2 are on the side line d opposite to D. d is the polar of D, so the projectivity f:ℒ_1→ℒ_2 sends d into itself, therefore f is a perspectivity, and the points m∩ f(m) (m∈ℒ_1) are collinear. Again, this point set is just 𝒮(l\{D}). Now we are in a position to prove Theorem 2 in the given Pappian plane 𝐏. Consider two conics Σ_1, Σ_2 with the same self-polar triangle DEF. Let𝒮:𝐏\{D,E,F}→𝐏,P↦𝒮(P)=P'be the Steiner correspondence with respect to Σ_1 and Σ_2. Then the sets{U',V',W'},{U',A',C'},{V',B',C'},{W',A',B'}are collinear. So by observation (5),X'=DA'∩U'V',Y'=DB'∩U'V',Z'=DC'∩U'V'.Thus the opposite side lines of the complete quadrangle A'B'C'D meet the line U'V' at the pairs of points (U'X'), (V'Y'), (W'Z'). These are the pairs of an involution on the line U'V', therefore, in view of (4), their images (UX), (VY), (WZ) under 𝒮 are the pairs of an involution on the conic Σ. By our construction this is equivalent to the property that the lines UX, VY, WZ are concurrent. 9 BradSmith C. J. Bradley, G. C. Smith: On a construction of Hagge, Forum Geometricorum 7 (2007), 231-247. B C. J. Bradley: Generalizations of Hagge's Theorems, arXiv:1007.2762, 2010. Cox1 H. S. M. Coxeter: The Real Projective Plane, Second edition, Cambridge, 1955. Hagge K. Hagge: Der Fuhrmannsche Kreis und der Brocardsche Kreis als Sonderfälle eines allgemeineren Kreises, Zeitschrift fr Math. Unterricht 38 (1907), 257-269. Hatt J. L. S. 
Hatton: The Principles of Projective Geometry Applied to the Straight Line and Conic, Cambridge, 1913. J R. A. Johnson: Advanced Euclidean Geometry, Dover, 1960. Preiser A. M. Peiser: The Hagge circle of a triangle, American Mathematical Monthly 49 (1942), 524-527. Zoltán Szilasi, Institute of Mathematics, University of Debrecen, H-4010 Debrecen, Hungary. E-mail: [email protected] | http://arxiv.org/abs/2311.15645v1 | {
"authors": [
"Zoltán Szilasi"
],
"categories": [
"math.HO"
],
"primary_category": "math.HO",
"published": "20231127091652",
"title": "Hagge configurations and a projective generalization of inversion"
} |
thu]Jinyu Miaothu]Kun Jiangcorthu]Tuopu Wencorthu]Yunlong Wang thu]Peijing Jia thu]Xuhe Zhao thu]Qian Cheng nio]Zhongyang Xiao thu]Jin Huang thu]Zhihua Zhong thu]Diange Yangcor [email protected] [cor]Corresponding author: Diange Yang, Kun Jiang, and Tuopu Wen[thu]organization=School of Vehicle and Mobility, Tsinghua University,city=Beijing, country=P.R. China [nio]organization=Autonomous Driving Division of NIO Inc.,city=Beijing, country=P.R. ChinaMonocular Re-Localization (MRL) is a critical component in numerous autonomous applications, which estimates 6 degree-of-freedom poses with regards to the scene map based on a single monocular image. In recent decades, significant progress has been made in the development of MRL techniques. Numerous landmark algorithms have accomplished extraordinary success in terms of localization accuracy and robustness against visual interference. In MRL research, scene maps are represented in various forms, and they determine how MRL methods work and even how MRL methods perform. However, to the best of our knowledge, existing surveys do not provide systematic reviews of MRL from the respective of map. This survey fills the gap by comprehensively reviewing MRL methods employing monocular cameras as main sensors, promoting further research. 1) We commence by delving into the problem definition of MRL and exploring current challenges, while also comparing ours with with previous published surveys. 2) MRL methods are then categorized into five classes according to the representation forms of utilized map, i.e., geo-tagged frames, visual landmarks, point clouds, and vectorized semantic map, and we review the milestone MRL works of each category. 3) To quantitatively and fairly compare MRL methods with various map, we also review some public datasets and provide the performances of some typical MRL methods. The strengths and weakness of different types of MRL methods are analyzed. 4) We finally introduce some topics of interest in this field and give personal opinions. This survey can serve as a valuable referenced materials for newcomers and researchers interested in MRL, and a continuously updated summary of this survey, including reviewed papers and datasets, is publicly available to the community at: Simultaneous Localization and Mapping Positioning Monocular Re-Localization Pose Estimation§ INTRODUCTION Visual Re-Localization, as a typical state estimation problem, is a challenging and ongoing topic. Given a low-cost monocular camera, Monocular visual Re-Localization (MRL) tends to estimate 6 Degree-of-Freedom (DOF) poses, including orientation and position, with regards to scene map when the vehicle re-visits a mapped area <cit.>. This task holds great attentions in various autonomous applications, e.g., Virtual Reality (VR) <cit.>, robots navigation <cit.>, Autonomous Driving (AD) <cit.>. Generally, MRL is commonly discussed to solve long-term association (so called loop closure) problem in Simultaneous Localization and Mapping (SLAM) system <cit.> or recover pose as a state re-initialization in kidnapped robot scenarios <cit.>. As the spatial model of the explored scene, map saves prior knowledge of scene (such as appearance, geometry, topology, etc.) and serves a reference coordinate for MRL. 
Generally, the MRL methods need to align current sensor data to scene map so that the ego pose can be solved, which can be consistently summarized as a two-step map-matching pipeline: build scene map in a specific form (mapping stage) and then localize based on the matching information between current data and the map (localization stage). Under the context of MRL, the data in the map is captured by sensors in the historical period, and it probably has extreme differences in visual conditions (like weather <cit.>, illumination <cit.>, day-night <cit.>, etc.) and even modality <cit.> with the currently captured data, challenging the MRL solutions. Therefore, the representation form of data in scene map plays a vital role in MRL task and it affects the robustness of MRL methods against visual interference from real-world.Under such a context, various kinds of map have been involved in the existing MRL researches to achieve robust, efficient, and high-precision localization. In some proposals aiming at light-weight map, map is represented by frames with geographic location annotation (named as geo-tagged frame map) so that localization can be performed by image retrieval <cit.> or relative pose estimation <cit.>. For most of MRL methods tending to localization accuracy, map is built by visual point clouds with high-dimensional descriptors (named as visual landmark map) <cit.>. In this formulation, the alignment can be performed by pixel-wise matching between local features <cit.>. Some cross-modal MRL methods adopt raw point cloud map <cit.> for its higher geometry accuracy and density. In the area of AD, the compact and vectorized semantic map, called vectorized High-Definition (HD) map, is widely utilized <cit.>. So the alignment process becomes matching between semantic instances. To achieve improvement by the benefit of End-to-End (E2E) way as other computer vision task <cit.>, some proposals learn to extract pose-related information from implicit map and localize itself, which takes current image as input and estimates poses <cit.>. We believe that the scene map is implicitly represented in a form of network parameters and name such a kind of map as learnt implicit map. In this study, we find that MRL methods with the same kind of scene map usually perform in a theoretically similar way. Map determines the implementation way and even the localization performance of MRL methods. However, existing surveys in related field do not discuss about the relationship between MRL methods and the scene map, which blocking the analysis and development of MRL. In the light of such a status, we aim to reduce the gap by reviewing existing MRL methods based on their utilized scene map. Specifically, as shown in Fig. <ref>, the MRL methods are categorized and introduced in five classes based on its used map, including not only traditional geo-tagged frame map (see Sec. <ref>), visual landmark map (see Sec. <ref>), and point cloud map (see Sec. <ref>), but also vectorized HD Map (see Sec. <ref>) and recently raised learnt implicit map (see Sec. <ref>). For each class, we fully review existing methods and deeply analyze the strength, weakness, evaluation metric, public benchmark. 
To boost the development of the community, we also discuss some opening problem in this field and provide some personal opinions.Contribution The main contribution of this survey can be summarized as follows: * To the best of our knowledge, this manuscript is the first survey exclusively paying attentions on visual localization with monocular camera as main sensor. We attempt to provide an in-depth review so that it can be used as a systematic reference material for newcomers and researchers acquainted with the MRL problem. * The survey reviews MRL methods from a new perspective that we categorize the existing algorithms based on the representation form of utilized map. The relationships between map and MRL solution can be clearly studied. * For each kind of MRL method with diverse scene map, we summarize the typical algorithms but also introduce the popularly used datasets, and evaluation metrics. * Additionally, we also provide some opening discussions in MRL researches, which helps the community to further improve the MRL algorithms. * As a final contribution, we build a continuously updated repository at , including papers and datasets reviewed in this survey. Organization The structure of this survey is as follows. Section <ref> summarizes the contributions and paper structures. Section <ref> describes the research background of MRL. From Sec. <ref> to <ref>, the existing MRL methods are fully reviewed based on their scene map representation. The typical algorithms with various scene map will be evaluated in Sec. <ref> and we analyze their advantages and disadvantages, respectively. Finally, Section <ref> discusses some heated questions about MRL and Section <ref> ends with conclusion and promising future directions. § BACKGROUND §.§ Problem Formulation and Symbols Definition We start by define MRL problem in theory and introduce the symbols used in this survey.During the offline mapping stage, we build the scene map in a specific representation form. In those MRL method that scene map is explicitly represented and saved, scene map is ℳ^𝒢={m^𝒢_i}, where m^𝒢_i is the i-th represented item in the map, e.g., visual landmarks, point clouds, and semantics. Here, {·}^𝒢 denotes the map is considered in a global coordinate of this scene 𝒢 by default and can be simplified as ℳ={m_i} if not specified. When a vehicle v revisits such a mapped area, it captures the observation of the scene, that is, a monocular image ℐ in MRL research. The vehicle v can estimate its current ego-pose based on the current observations ℐ given scene map information ℳ, which is defined as a MRL problem. Formally, the MRL problem can be written asx̂^𝒢_v=(ℐ | ℳ)where {̂·̂}̂ denotes an estimated value, (·) is a MRL solution, and x^𝒢_v is the vehicle's ego-pose with regard to 𝒢.Commonly, a 6 DOF pose x^𝒢_v=[R^𝒢_v ; t^𝒢_v] is composed by a 3 DOF rotation R^𝒢_v and a 3 DOF translation t^𝒢_v. The rotation can be represented as quaternion, rotation matrix, or Euler Angle. In some MRL application for ground AD vehicles, only 3 DOF pose is considered, i.e., lateral and longitudinal position [x,y] and heading/yaw angle θ_yaw.For better readability, we define frequently used symbols here. The current image in localization stage (query image) is defined as ℐ_q while a historical image at t timestamp in mapping stage (reference image) is defined as ℐ_r(t). 
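The 6 DOF and reduced 3 DOF pose parameterizations above can be made concrete with a short illustrative snippet; the class below is our own sketch (not part of any reviewed system), assuming the common ZYX (yaw–pitch–roll) Euler convention when extracting the heading angle.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    """Pose x = [R ; t] of the vehicle/camera with respect to the scene frame G."""
    R: np.ndarray   # 3x3 rotation matrix
    t: np.ndarray   # translation vector [x, y, z]

    def to_ground_3dof(self):
        """Reduce to the 3 DOF ground-vehicle pose (x, y, yaw), assuming z is the
        up axis and a ZYX Euler convention."""
        yaw = np.arctan2(self.R[1, 0], self.R[0, 0])
        return self.t[0], self.t[1], yaw

    def transform(self, p_vehicle):
        """Map a point from the vehicle frame into the scene frame G."""
        return self.R @ p_vehicle + self.t

# example: a pose rotated 30 degrees about z and shifted in x/y
th = np.deg2rad(30.0)
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0,         0.0,        1.0]])
pose = Pose6DoF(R=Rz, t=np.array([2.0, 1.0, 0.0]))
print(pose.to_ground_3dof())   # -> (2.0, 1.0, ~0.52 rad)
```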
Local features from ℐ_t are denoted as ℱ^t={f^t_i} where a local feature f^t_i contains a key point p^t_i=[u,v] ∈ℝ^2 and corresponding local descriptor d^t_i ∈ℝ^C where C is the dimension. A pair of matched local features between ℐ_q and ℐ_r(t) is denoted as <f^q_i,f^r(t)_j>. The global feature of ℐ_t is a global descriptor and denoted as ℱ^t ∈ℝ^C. The items in ℳ share the same symbols but in global coordinate, i.e., {·}^𝒢. §.§ Challenges of Monocular Re-Localization MRL has been developed for decades and achieved many great success in many applications. However, tremendous challenges still block the development and usage of MRL methods in real word autonomy. Here, as shown in Fig. <ref>, some of primary challenges for MRL algorithms are given:* Appearance change: Since monocular camera serves as the primary sensor in MRL solutions, the appearance changes of scene will dramatically affect the algorithms. The conditional appearance variance, e.g., weather <cit.>, illumination <cit.>, seasons <cit.>, and day-night changes <cit.> makes current image visually different from the map, thus challenging matching. And structural changes like dynamic occlusions <cit.>, and layout changes <cit.> will interfere with the geometrical pose estimation. * Viewpoint difference: When the camera viewpoint is quite different between current timestamp and mapping stage, MRL method will be struggled by limited co-visible area and changed layout of items. As typical instances, cross-view MRL for drones <cit.>, opposite-view <cit.> and even ground-to-aerial <cit.> MRL for ground vehicles attract growing interests. * Perceptual aliasing: In some scenaoris with visually similar or repeated textures, the MRL method will generate ambiguous estimation when distinct places have similar appearances, e.g., corridors <cit.>, and parking lots <cit.>, which is called perceptual aliasing problem. * Generalization and scalability: The real world scenarios are infinite, and we cannot exhaust all the types, visual conditions, and interference of possibily occurred scenes. A practical real-world autonomy requires MRL methods to work stably in diverse scenarios, even in unseen environments, called generalization ability <cit.>. Besides, the real world is unbounded, we also need MRL solutions scalable, which limits the unbearable increasement of map size and computational costs with vehicles continuously explore.§.§ Comparison with other surveysIn decades of years, numerous reviews on autonomous robotics techniques have been raised, addressing various topics, e.g., place recognition <cit.>, ego and object localization <cit.>, SLAM <cit.>. As a milestone survey, Lowry et al. <cit.> firstly introduced the concepts behind visual place recognition. Later, Garg et al. <cit.> refine the definition by introducing orientation limitation. With a growing number of visual place recognition methods based on deep learning, Zhang et al. <cit.> reviewed recently proposed solutions from the deep learning perspective. Yin et al. <cit.> presented general place recognition challenges. However, place recognition methods could only serve as a coarse localization that provide a rough pose approximation, which only cover few part of visual localization researches. In <cit.>, Elhousni et al. defined visual map localization as a two-stage process, namely, place recognition and metric map localization, and reviewed MRL methods using LiDAR, camera, or cross-modal sensors. 
Such a two-stage framework is only adopted by MRL solutions that use a visual landmark map or a point cloud map, and it cannot describe all MRL solutions, such as the newly emerged absolute pose regression-based methods <cit.> and scene coordinate regression-based methods <cit.>. By analyzing existing reviews of MRL methods in depth, we find that although these surveys have made great contributions to the community and boosted the progress of MRL research, they have regrettably neglected the relationship between MRL methods and the scene map, and thus cannot analyze MRL solutions within a unified framework. In this paper, we propose to review MRL methods from the perspective of the scene map, so that we can clearly categorize them according to the representation form of the utilized map and thereby provide a comprehensive and in-depth review of MRL methods. § GEO-TAGGED FRAME MAP During the mapping stage, many frames are labeled with geographic positions or even precise 6 DoF poses. These geo-tagged frames can serve as the scene map in MRL methods. In this field, some proposals estimate the ego-pose from the historical pose of a retrieved geo-tagged frame (Fig. <ref> a) or from the relative pose estimated between the query image and a geo-tagged frame (Fig. <ref> b). §.§ Visual Place Recognition Visual Place Recognition (VPR) algorithms aim to identify re-observed places by retrieving a reference frame when the vehicle returns to a previously visited scene <cit.>; VPR is also adopted for loop closure detection in SLAM systems <cit.>. VPR regards the pose of the retrieved reference frame as an initial or approximate pose of the current query image. Formally, in VPR the scene map ℳ is represented by reference frames ℐ_r(t) with poses x^𝒢_t, i.e., ℳ={m_t} where m_t=<ℐ_r(t),x^𝒢_t>. Given the query image ℐ_q, a VPR algorithm retrieves the best-matched reference frame according to image similarity: m̂=max_m_i P(ℐ_q, ℐ_r(i) | m_i ∈ℳ) where the matching function P(A,B) returns the similarity or matching score between A and B, ℐ_r(i) is the reference image contained in candidate m_i, and m̂ is the best match for ℐ_q. The pose of the current query image is then given by the pose of the retrieved reference frame: x̂^𝒢_q ←x^𝒢_m̂ In early proposals, SeqSLAM <cit.> and Fast-SeqSLAM <cit.> directly use down-sampled and normalized image patches to measure image similarity. However, raw image intensity is sensitive to visual interference, so VPR methods typically extract high-level image features and measure feature similarity as a more reliable proxy for image similarity. In this feature-based scheme, the image feature algorithm plays a central role. Based on the receptive field, image features can be categorized into global features <cit.> and local features <cit.>. Global Feature-based VPR (GF-VPR): A global feature algorithm directly represents the whole image as a compact matrix or vector, so that image similarity can simply be defined as the cosine similarity P(A,B)=(A· B)/(‖A‖‖B‖) in Eqn. <ref>. Therefore, the core component of GF-VPR methods is the global feature algorithm. Before the deep learning era, the global image features used in this area were designed to capture the statistical distribution of image intensity, e.g., Gist <cit.>, color histograms <cit.>, and HOG <cit.>. Zaffer et al.
<cit.> improved HOG-based GF-VPR method by using image entropy to extract Regions-of-Interest and regional-convolutional descriptor matching.As a widely used solution, VLAD <cit.> converts N local descriptors {d_i}^N_i=1 into a compact matrix V using a codebook with K clusters: V(j,k)=∑^N_i=1 a_k(d_i)(d_i(j)-c_k(j))where d_i(j) and c_k(j) are the j-th dimensions of the i-th descriptor and k-th cluster centre, respectively. a_k(d_i) denotes the membership of the d_i to c_k, i.e., a_k(d_i)=1 if c_k is the closest cluster to d_i and 0 otherwise.Since deep learning-based methods dominate the visual tasks, some VPR methods have applied Deep Neural Network (DNN) to extract global features. As a milestone of DNN-based global feature, NetVLAD <cit.> designs a trainable feature aggregation layer based on the principle of VLAD algorithm <cit.>. It replaces the non-differentiable VLAD pooling with soft-assignment of descriptors to multiple clusters, and construct the NetVLAD layer amenable to E2E training via back-propagation:V(j,k)=∑^N_i=1e^w^T_k d_i+b_k/∑_k' e^w^T_k'd_i+b_k'(d_i(j)-c_k(j))where {w_k},{b_k}, and {c_k} are trainable for each cluster c_k.Meanwhile, many other attempts are made by scholars to boost the representative ability of global feature. GeM <cit.> designs a trainable generalized-mean pooling layer that generalizes maximum and average pooling for feature aggregation. TransVPR <cit.> could aggregate task-relevant features by self-attention mechanism in Transformers <cit.>. MixVPR <cit.> incorporates a global relationship between elements in each feature map in cascaded feature mixing modules. AnyLoc <cit.> leverages an off-the-shelf self-supervised foundation model features (such as DINOv2 <cit.>) with no VPR-specific training or finetuning. Although global feature can compactly represent images, the retrieval time of GF-VPR methods will linearly increase with the size of the database. And the robustness against viewpoint changes and dynamic occlusions of global features is also inferior than local feature <cit.>.Local Feature-based VPR (LF-VPR):A local feature algorithm extracts massive key points and corresponding descriptors from images. Then, the extracted local features are utilized to retrieve the best-matched reference frames. Traditional local features are hand-crafted based on human knowledge <cit.>. Recently, many DNN-based local features have been adopted in VPR tasks. Early researches directly apply off-the-sheld DNN model pretrained by other computer vision tasks as local feature extraction module <cit.>. Then, specific DNN-based local features are introduced in LF-VPR researches and greatly improve the performance <cit.>. Directly matching between local features across two images is costly, local features need to be further processed to measure image similarity score. One common way is to aggregate extract local features to a compact global feature, e.g., VLAD <cit.>, ASMK <cit.>, and BoW <cit.>, and retrieve reference frames as GF-VPR methods.As typical solutions. DBoW <cit.> extracts binary local features ORB <cit.> from images and build vocabulary to aggregate local features so that an image can be compactly represented by the occurrences of visual words in the vocabulary (some general local features).Yue et al. 
<cit.> improved it by introducing SuperPoint features <cit.>. LRO <cit.> also utilizes SuperPoint features <cit.> but adopts an ASMK-based retrieval framework <cit.>. In dynamic scenes, local features located on dynamic objects or regions confuse VPR methods, so Chen et al. <cit.> removed inconsistent dynamics by exploiting instance-level semantics. Since semantics alone can hardly determine the actual motion state of objects, DSFeat <cit.> learns to select stable and discriminative local features based on both semantic and attention information. These LF-VPR methods need an offline vocabulary/codebook to aggregate local features, which limits their generalization ability. Some methods therefore build vocabularies in an online or incremental manner. In <cit.>, the authors proposed an updating strategy that automatically builds and updates the vocabulary so that retrieval can still be performed as in <cit.>. Different from solutions that aggregate local features and retrieve by aggregated feature similarity, some proposals adopt a probabilistic scheme in Eqn. <ref> to find the matched reference frame. FAB-MAP <cit.> and FAB-MAP 2.0 <cit.> train a Chow-Liu tree to learn the co-visibility of features and then estimate the matching probability of candidates. IBuILD <cit.> tracks BRISK features <cit.> across consecutive frames to incrementally generate a vocabulary and performs VPR with a likelihood function. iBoW-LCD <cit.> applies a Bayesian framework by scoring the local features that occur in the current image. LiPo-LCD <cit.> achieves further improvement by adding line-wise local features in human-made environments. Tsintotas et al. <cit.> dynamically segmented the incoming image stream into particular places and assigned visual words to places using an online clustering algorithm. Then, in <cit.>, Tsintotas et al. improved this work by tracking SURF features <cit.> with a KLT point tracker <cit.> to obtain robustly tracked words assigned to the corresponding locations along the map; VPR is then performed with a binomial probability density function. Some methods exploit geometric information from local features as a re-ranking score for the matching candidates in Eqn. <ref>. Built upon NetVLAD <cit.>, Patch-NetVLAD <cit.> re-orders candidates by cross-matching patch-level local DNN features with geometric verification. As a de-facto standard in LF-VPR, researchers apply an epipolar check across images and validate the matched reference by the number of feature inliers <cit.>. This works well in most scenes but fails if mismatched local features dominate or the pre-defined parametric model is incorrect. Therefore, Yue et al. <cit.> and Ma et al. <cit.> developed new measures of geometric consistency. Some other works introduce temporal consistency into Eqn. <ref> <cit.>, but this scheme is limited to retrieval over sequential images. They follow the basic assumption that if I_q(t_i) and I_r(t_j) are matched, their adjacent images I_q(t_i-1) and I_r(t_j-1) should also be matched. Sequence Feature-based VPR (SF-VPR): SeqNet <cit.> and Delta Descriptor <cit.> propose to fuse temporal information by using the changes across image global features as a sequence global feature. SeqVLAD <cit.> combines the NetVLAD layer <cit.> with sequence global features to exploit temporal cues in a sequence. In these works, VPR first performs sequential matching across image sequences and then finds the best-matched frame within the matched sequence.
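To make the retrieval machinery above concrete, the following minimal numpy sketch implements the hard-assignment VLAD aggregation of Eqn. <ref> and the cosine matching score used by GF-VPR retrieval; the function names and normalization details are illustrative choices of ours, not a reference implementation of any reviewed method.

```python
import numpy as np

def vlad_aggregate(descriptors, centroids):
    """Hard-assignment VLAD: accumulate residuals d_i - c_k over the descriptors
    assigned to each codebook centre c_k (cf. the equation for V(j, k))."""
    K, D = centroids.shape
    # a_k(d_i): index of the closest centre for every descriptor
    assign = np.argmin(
        np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2), axis=1)
    V = np.zeros((K, D))
    for k in range(K):
        members = descriptors[assign == k]
        if len(members) > 0:
            V[k] = (members - centroids[k]).sum(axis=0)
    V /= np.linalg.norm(V, axis=1, keepdims=True) + 1e-12   # intra-normalization
    V = V.ravel()
    return V / (np.linalg.norm(V) + 1e-12)                  # global L2 normalization

def cosine_score(a, b):
    """Matching score P(A, B) = (A . B) / (||A|| ||B||)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def retrieve(query_feature, reference_map):
    """Return the pose of the best-matched reference frame, i.e., x^G_q <- x^G_m_hat.
    reference_map: list of (global_feature, pose) tuples, one per geo-tagged frame."""
    scores = [cosine_score(query_feature, feat) for feat, _pose in reference_map]
    best = int(np.argmax(scores))
    return reference_map[best][1]
```

Sequence- and semantics-based variants change how the global feature is formed, but the retrieval rule itself stays the same.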
Semantic-based VPR (S-VPR): Gawel et al. <cit.> utilized semantic objects as features and built a semantic topological graph based on the spatial proximity of semantic objects in images. The graph can be compactly represented by a random-walk-based global feature <cit.>, and S-VPR is then performed in the same way as GF-VPR. Summary: VPR algorithms aim to retrieve a geographically nearby reference frame. Benefiting from robust image/map representations, current VPR solutions can retrieve such frames reliably and precisely. However, VPR localizes by treating the reference pose as an approximation of the query pose, which inherently limits localization accuracy when the current trajectory differs substantially from the mapping trajectory. Thus, VPR usually serves as a coarse or initial localization step in practical localization systems <cit.>. §.§ Relative Pose Estimation Relative Pose Estimation (RPR) methods aim to estimate the relative pose between the query image and a reference image in the map. In this field, the scene map is again represented by posed images, i.e., M={m_t}, where m_t=<ℐ_r(t),x^𝒢_t>; sometimes the map contains only a single reference image M=m_r=<ℐ_r,x^𝒢_r> <cit.>. When the query image ℐ_q is fed into the system, the RPR algorithm estimates the relative pose of ℐ_q with respect to its (closest) reference image ℐ_r: x̂^r_q = (ℐ_q, ℐ_r) so that the absolute pose of ℐ_q can be calculated by: x̂^𝒢_q ←x^𝒢_r * x̂^r_q where * is the pose composition operator. RPR methods avoid costly pre-scanning and reconstruction of the scene, enabling economical MRL in new environments with extremely sparse records, a setting common in Augmented Reality (AR) applications: user A shares a photo with its location in a new scene, and any user B can instantly re-localize with respect to user A and interact <cit.>. Generally, RPR methods can be categorized into geometry-based and learning-based methods, and this section reviews RPR algorithms from that perspective. Geometry-based RPR (G-RPR): G-RPR methods perform feature matching between the query and reference images to obtain pixel-level correspondences, and then solve the relative pose geometrically. According to multiple view geometry <cit.>, the G-RPR problem with known camera intrinsics can be solved by first matching local features, then estimating an essential matrix, e.g., with a 5-point solver <cit.> inside RANSAC <cit.>, and finally decomposing the essential matrix into the relative pose (a relative rotation and a scaleless translation) between the two images. Later proposals improve this basic formulation by learning better features <cit.>, better matching <cit.>, and better robust estimators <cit.>. In early G-RPR solutions, scholars used the scaleless pairwise relative poses between the query image and multiple reference images to triangulate the scaled, metric pose of the query image <cit.>. However, when only one reference image exists, such a scheme fails to recover scale <cit.>. In <cit.>, as shown in Fig. <ref>, the authors proposed to use estimated monocular depth <cit.> to a) provide scale, b) lift 2D-2D correspondences to 2D-3D correspondences, or c) lift them to 3D-3D correspondences, and concluded that the second option, back-projecting the reference image to 3D space, performs best, which effectively converts G-RPR into a standard Perspective-n-Point (PnP) problem.
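As a concrete illustration of this classical two-view pipeline, the sketch below estimates a scaleless relative pose from already matched key points with OpenCV and chains it onto the reference pose; it is a minimal example assuming pre-matched points and known intrinsics K, not a complete G-RPR system from the literature.

```python
import cv2
import numpy as np

def estimate_relative_pose(pts_q, pts_r, K):
    """5-point essential matrix inside RANSAC, then decomposition into (R, t).
    pts_q, pts_r: (N, 2) float arrays of matched pixels in I_q and I_r; K: (3, 3) intrinsics.
    The returned translation has unit norm (scaleless)."""
    E, inliers = cv2.findEssentialMat(pts_q, pts_r, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_q, pts_r, K, mask=inliers)
    # NOTE: depending on the convention chosen for x^r_q, (R, t) may need to be inverted.
    return R, t.ravel()

def compose_pose(R_G_r, t_G_r, R_r_q, t_r_q):
    """Absolute query pose x^G_q <- x^G_r * x^r_q from the equation above."""
    R_G_q = R_G_r @ R_r_q
    t_G_q = R_G_r @ t_r_q + t_G_r
    return R_G_q, t_G_q
```

Because the recovered translation is only defined up to scale, the metric scale must still come from triangulation against multiple references or from monocular depth, as discussed above.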
And <cit.> also concludes better feature matching could boost the G-RPR methods.Learning-based RPR (L-RPR):Using DNN model, L-RPR algorithms predict the relative pose from two input images in an E2E way, eliminating the requirement of explicit local feature matching. Some methods first estimate poses up to scale and then recover scales by triangulation <cit.>, while others directly estimate metric relative pose <cit.>. Both way, such as ExReNet <cit.> and RelocNet <cit.>, show great generalization ability when perform RPR on unseen scenarios. Since L-RPR methods do not need to match between query image and reference image, they generally have low requirements of texture in the scene. Thus, L-RPR has been applied in scenarios that are challenging for G-RPR approaches. For instance, in <cit.>, authors focused on develop RPR in extreme cases, including when there is low or even no overlap between the query image and reference image. In <cit.>, authors combined two type of PRP methods and designed a network to predict 3D-3D correspondence between two images, adopting KeypointNet <cit.>. The relative pose is then solved as G-RPR method using Orthogonal Procrustes <cit.> inside a RANSAC loop <cit.>. The experimental results claimed that this solution outperforms the L-RPR method, indicating that involving multiple view geometry knowledge in neural network may benefit RPR algorithms. Summary:RPR perform localize by estimating relative pose between query and reference images. Since scale recovery for pairwise RPR is not perfectly solved yet, RPR methods cannot achieve very high-precision localization performance. But, the low requirement of scene map still let RPR attract growing attention in light applications like AR. § VISUAL LANDMARK MAPAs the most popularly applied representation format of scene map, visual landmarks is constructed by Structure from Motion (SfM) <cit.> or SLAM <cit.>. Visual landmarks is some informative and representative 3D points that lifted from 2D pixels by 3D reconstruction, and they are associated with corresponding local features in various observed images including 2D key point and high-dimensional descriptor as shown in Fig. <ref>, i.e., scene map can formatted as ℳ={m_i}, where m_i=<p^𝒢_i,{ℱ^t_i}_t> and ℱ^t_i is a set of local features from reference image ℐ_t associated with m_i, p^𝒢_i denotes the 3D location of m_i in global frame 𝒢. Among the Visual Landmarks-based MRL methods (VL-MRL), Hierarchical Localization (HLoc) <cit.> is the most famous framework and has been widely used in many applications. Up to now, it still dominates the long term visual localization task[https://www.visuallocalization.net/benchmark/].In HLoc framework <cit.>, scene map ℳ composed by 3D visual landmarks with descriptors is offline built via SfM reconstruction <cit.>. During online localization stage, query image ℐ_q is matched with retrieved reference image ℐ_r and the resulting 2D-2D matches are lifted to 2D-3D matches between ℐ_q and scene map based on ℳ, which can be used to solve scaled pose as a typical PnP problem. 
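A minimal sketch of this localization step is given below: 2D-2D matches between ℐ_q and ℐ_r are lifted to 2D-3D correspondences through the landmarks associated with the reference key points and then passed to a RANSAC-based PnP solver. The helper names and the use of OpenCV's solver are illustrative assumptions rather than the HLoc implementation.

```python
import cv2
import numpy as np

def localize_query(kpts_q, kpts_r, matches, landmarks_of_r, K):
    """kpts_q/kpts_r: (N, 2) key points of I_q and I_r; matches: list of index pairs (i, j);
    landmarks_of_r: dict mapping a reference key-point index j to its 3D landmark p^G (3,)."""
    pts_2d, pts_3d = [], []
    for i, j in matches:
        if j in landmarks_of_r:                 # lift the 2D-2D match to a 2D-3D match
            pts_2d.append(kpts_q[i])
            pts_3d.append(landmarks_of_r[j])
    if len(pts_3d) < 4:                         # PnP needs at least four correspondences here
        return None
    pts_2d = np.asarray(pts_2d, dtype=np.float64)
    pts_3d = np.asarray(pts_3d, dtype=np.float64)

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts_3d, pts_2d, K, None, reprojectionError=4.0, iterationsCount=1000)
    if not ok:
        return None
    R_C_G, _ = cv2.Rodrigues(rvec)              # world-to-camera rotation
    # Camera pose in the global frame: R = R_C_G^T, t = -R_C_G^T * tvec
    return R_C_G.T, (-R_C_G.T @ tvec).ravel()
```

In practice the PnP solver is wrapped in RANSAC, as above, because the lifted correspondences typically contain outliers.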
For clarify, we split VL-MRL algorithms into two step based on their functions (Step 1.A and 1.B are alternative):1.A extract local feature from ℐ_q and ℐ_r (we assume only one ℐ_r is utilized here):ℱ^q = {f^q_i}_i=F_(ℐ_q) ℱ^r = {f^r_i}_i=F_(ℐ_r)Then, match local features between ℐ_q and ℐ_r and obtain 2D-2D correspondences:{f^q_i,f^r_i}_i =(ℱ^q,ℱ^r)where i is the indices of matched local features.1.B or jointly extract and match local feature between ℐ_q and ℐ_r:{f^q_i,f^r_i}_i = F_ (ℐ_q, ℐ_r)where i is the indices of matched local features.2 get the associated visual landmarks of ℱ^r in ℳ so that 2D-2D correspondences can be lifted to 2D-3D correspondences, and the pose can be solved as a typical PnP problem:x̂^𝒢_q =({f^q_i,f^r_i}_i | ℳ) = ({f^q_i,f^𝒢_i}_i)where i is the index of matched local features or visual landmarks. In this section, we review related proposals in the two steps of VL-MRL methods, namely, local feature extraction then matching (Step 1.A), joint local feature extraction and matching (Step 1.B), and pose solver (Step 2). And we also review some proposals aiming at other perspectives in VL-MRL methods. §.§ Local Feature Extraction-then-MatchingLocal Feature Extraction:As previously mentioned in Sec. <ref>, local feature detects massive salient pixels in an image (denoted as key points), and describes the neighboring area of the key point using a high-dimensional vector (denoted as descriptor).Under the context of VL-MRL methods, local feature is utilized to obtain accurate 2D-2D pixel-wise correspondences between images. The key points detected by local feature algorithms should be repeatable under different image conditions and qualities while located in salient regions <cit.> so that visual landmarks built by local features can be sparse but informative to represent the whole scene <cit.>. And descriptors should be designated to enable accurate correspondence establishment between detected feature points across different views <cit.>, so they need be invariant against typical visual interference. Additionally, local feature should be accurate.The early local features are built upon the human prior knowledge about the quality of pixels. As milestones in this area, the float SIFT <cit.>, SURF <cit.> and binary BRISK <cit.>, BRIEF <cit.>, ORB <cit.> have been widely used and achieved great successes in many autonomous applications. However, due to the limited presentation ability, these hand-crafted local features are challenged by complex visual interference in real-world scenarios. Since entering the perception age ruled by DNN, researchers have begun to introduce deep learning technique in local feature extraction. As a beginning, SuperPoint <cit.> utilizes a shared encoder and two heads to automatically detect key points and extract descriptors, respectively. It is first trained on synthetic samples and then fine-tuned in two images generated by the homographic adaptation. After that, the training datasets with posed images and 3D map are constantly proposed, the ground-truth correspondence between different views of the scene can be available, facilitating the training of DNN-based local features. D2-Net <cit.> is optimize by apply metric learning between two co-visible patches and achieve better performance, but it extracts local feature on a feature map with reduced resolution, so the accuracy of matching and localization is limited. ASLFeat <cit.> improves by detecting on feature map with original resolution. 
ALIKE <cit.> and ALIKED <cit.> support sub-pixel-level detection. Thus, their accuracy are boosted with a lot of margin. As a commonly applied strategy, multiple scale (MS) detection can help local feature extraction. Some works explicitly apply MS detection <cit.>, some other works implicitly apply MS detection in network by fusing multiple features from feature pyramid inherited by CNN model <cit.>. To make DNN-based local feature get rid of costly training data collection, SEKD <cit.> proposes a self-evolving framework to train CNN model without any annotations or pre-processing on the training data. Some researchers apply attention mechanism in CNN model to further enhance the representative ability of local features <cit.>. These methods train CNN model as GF-VPR tasks (see Sec. <ref>) and then detect local features based on estimated attention mask of trained CNN model. To facilitate the interpretability of trained attention mask, DSFeat <cit.> further combine attention information and semantic information to select stable and discriminative local features. Different from the methods trained with metric learning scheme, DISK <cit.> train CNN model by reinforcement learning scheme where the feature extraction and matching procedures can be trained in an E2E manner, thereby greatly boosting the performance of local features.Limited by the locality of CNN, most existing local features description methods only learn local descriptors with local information and lack awareness of global and surrounding spatial context. Thus, MTLDesc <cit.> and GLFeat <cit.> aggregate non-local awareness into local feature extraction. Besides, as shown in Fig. <ref>, most of the state-of-the-art local features cannot meet the requirements of efficiency even with high-performance GPU devices, which blocks the usage of local feature algorithms in real-time applications. Zhao et al. specially designed light CNN model for real-time local features <cit.>. Thin neural networks usually sacrifice representative ability, so strong supervision for local feature training is required to achieve balance between matching accuracy and computational efficiency <cit.>.Local Feature Matching:Before deep learning stage, researchers often apply the Nearest Neighbour (NN) searching to find matches between the extracted local features and perform ratio test to reject mismatches <cit.>. This strategy is then improved by using the Earth Mover’s Distance <cit.>. Then, FLANN algorithm <cit.> is developed as an efficient alternate of brute-force searching. To get more robust matches, various methods explore positional distribution of local features <cit.>. Some other works further explore locally adaptive local descriptor selection <cit.> and optical flow guided matching <cit.> for better matching. These methods only consider local properties of local features so they can hardly achieve globally consistent matching. So, some approaches turn to leverage the global properties of local features <cit.>. In most practical applications, the feature matching is generally integrated into a RANSAC loop <cit.> to obtain matches with most inliers. Recently, AdaLAM <cit.> has been proposed to detects inliers by searching for significant local affine patterns in image correspondences. 
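The classical baseline that most of these hand-crafted rules build on is nearest-neighbour search with a ratio test; a minimal OpenCV sketch is shown below, assuming float descriptors (e.g., SIFT-like) and illustrative parameter values.

```python
import cv2
import numpy as np

def match_ratio_test(desc_q, desc_r, ratio=0.8):
    """FLANN-based nearest-neighbour matching followed by the ratio test.
    desc_q, desc_r: (N, C) descriptors of I_q and I_r (converted to float32 for FLANN)."""
    desc_q = np.asarray(desc_q, dtype=np.float32)
    desc_r = np.asarray(desc_r, dtype=np.float32)
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    knn = flann.knnMatch(desc_q, desc_r, k=2)
    good = []
    for pair in knn:
        # keep a match only if its best distance is clearly smaller than the second best
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append((pair[0].queryIdx, pair[0].trainIdx))   # (i, j) index pair
    return good
```

The surviving matches are then usually verified geometrically, e.g., inside a RANSAC loop, before being used for pose estimation.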
However, these traditional hand-crafted matching methods have large running latency and require costly computational consumption due to complex rules and iterative formulation, and the performance will degrade if the underlying matching model between local features in real scenarios differs from the pre-defined model of matching algorithms. In <cit.>, Sattler et al. proposed a novel vocabulary-based prioritized matching step that enables to first consider features more likely to yield 2D-to-3D matches and to terminate the correspondence search as soon as enough matches have been found, making VL-MRL method much more efficient and effective. As for learning-based local feature matching methods, Some algorithms explore semantic information to match the semantic key points in different instances of the same category of objects <cit.>.Inspired by PointNet<cit.>, a DNN of the point set learning, putative matches filtering networks <cit.> are developed. They commonly organize the putative matches of key points as 4D quads and feed the 4D quads into network to estimate scores of each putative match. False matches are discarded by a threshold on the scores. These methods only consider the position information of the key point and neglect the descriptor of local features.In 2019, SuperGlue <cit.> was proposed as an E2E learning-based approach for local feature matching. SuperGlue <cit.> considers both key points and descriptors of local features and learns their matches with a Graph Neural Network (GNN). The descriptor of each local feature is enhanced by all the features in both images by an attentional GNN layer so that matching will be reliable. Later, Zhao et al. <cit.> exploited motion estimation from IMU integration and use the estimation as a spatial distribution prior of key points between images. With the assistance of such a prior, the network can make less efforts to refine features and achieve comparable accuracy with less GNN layers, boosting the real-time performance. More recently, LightGlue <cit.> improves the inference speed by enabling the model prune unmatchable points and predict whether further computation is required.Summary:The local feature extraction-then-matching scheme has worked well in most cases and supported high-precision VL-MRL for a long time. However, there remains a fundamental shortcoming that they cannot match well if the extracted local features were extremely bad. The supervision on matches can only optimize matching module but cannot make any effort on the extraction module, as shown in Fig. <ref> a). To solve this problem, joint local feature extraction-and-matching solution, or called detector-free matching, is proposed.§.§ Joint Local Feature Extraction and MatchingTechnically similar to SIFT Flow <cit.>, joint local feature extraction-and-matching methods integrate feature extraction and matching module in an unified DNN model, and they directly estimate dense local feature matches so that the two module can be jointly optimized, as shown in Fig. <ref> b). Early proposals in this area adopt convolutional DNN (CNN) model, e.g., NCNet <cit.>, Sparse NCNet <cit.>, and DRC-Net <cit.>. 
Although these works consider all possible matches inside a CNN, the receptive field of convolution is still limited to a local neighborhood, which makes the matching local and unreliable. To address this, in the last three years several works have adopted the global attention mechanism of Transformers <cit.> to reach a global consensus between matches. As the first proposal, COTR <cit.> applies a zooming-in strategy to obtain more accurate matches. LoFTR <cit.> performs dense matching in a coarse-to-fine framework: it first obtains confidence matrices for patch-level coarse matching and then refines the matched point positions at the pixel level, with both steps using self- and cross-attention layers of the Transformer <cit.> to enhance the representative ability of the feature descriptors. It achieves accurate matching even in textureless regions, as shown in Fig. <ref>, effectively reducing the scene-texture requirements of VL-MRL methods. As follow-up methods, QuadTree LoFTR <cit.>, ASpanFormer <cit.>, and TKwinFormer <cit.> further improve matching accuracy and efficiency. Existing matching methods ignore the occlusions caused by camera motion and scene structure, and they fail when there is no overlapping region between ℐ_q and ℐ_r. To overcome this limitation, ^2 <cit.> models occlusion relations using 3D occupancy, enabling local feature matching even for occluded regions. Summary: The joint extraction-and-matching scheme removes the requirement that matching must start from excellent pre-extracted local features, and it achieves impressive matching performance even in challenging scenarios, making VL-MRL methods more practical. However, it needs both ℐ_q and ℐ_r as inputs, which requires the scene map to additionally store ℐ_r and thus enlarges the map storage cost; a more storage-friendly scheme remains to be explored. §.§ Pose Solver Given the camera intrinsics K and the camera pose x^𝒢_q, the 2D projection p^ℐ_q_i=[u_i,v_i] of a 3D visual landmark m_i=<p^𝒢_i,*>, with p^𝒢_i=[X^𝒢_i,Y^𝒢_i,Z^𝒢_i], onto the image plane ℐ_q can be written as: [ p^ℐ_q_i; 1 ] = 1/Z^𝒞_i K [X^𝒞_i Y^𝒞_i Z^𝒞_i]^T = 1/Z^𝒞_i K [R^𝒞_𝒢 | t^𝒞_𝒢] [X^𝒢_i Y^𝒢_i Z^𝒢_i 1]^T = 1/Z^𝒞_i K (x^𝒢_𝒞)^-1 [ p^𝒢_i; 1 ] where {·}^𝒞 denotes points in the camera frame 𝒞. The PnP problem is to estimate x̂^𝒢_q given K and n 2D-3D correspondences {p^ℐ_q_i,p^𝒢_i}^n_i=1, a classical problem in 3D vision. The Direct Linear Transformation (DLT) was first developed by photogrammetrists <cit.> to solve the PnP problem and was later introduced to the computer vision community <cit.>. If the 3D visual landmarks are coplanar, a homography can be used instead <cit.>. Over the last two decades there has been a great deal of progress in PnP solvers, and the minimum number of correspondences required has been reduced from 5 <cit.> and 4 <cit.> down to 3 <cit.>. Generally, PnP solvers are categorized as non-iterative or iterative. Early non-iterative PnP solvers are computationally expensive, e.g., O(n^5) <cit.> and O(n^2) <cit.>. EPnP <cit.> reduces the complexity to O(n) by representing the 3D point coordinates as a linear combination of four control points, and it is widely used in MRL solutions. Beyond EPnP <cit.>, other non-iterative PnP solvers with O(n) complexity have been proposed <cit.>, but they are polynomial solvers, in contrast to the linearization-based EPnP <cit.>.
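The projection model above translates directly into code; the short numpy sketch below evaluates the projection of Eqn. <ref> and the per-point reprojection residual on which the iterative solvers discussed next operate. It is a didactic sketch with our own function names, not a production solver.

```python
import numpy as np

def project(p_G, R_C_G, t_C_G, K):
    """Project a 3D landmark p^G (3,) onto the image plane following Eqn. above.
    R_C_G, t_C_G map global coordinates into the camera frame C; K is the 3x3 intrinsics."""
    p_C = R_C_G @ p_G + t_C_G          # [X^C, Y^C, Z^C]
    uv_h = K @ p_C                      # homogeneous pixel coordinates, scaled by Z^C
    return uv_h[:2] / uv_h[2]           # divide by the depth Z^C -> [u, v]

def reprojection_residuals(points_2d, points_3d, R_C_G, t_C_G, K):
    """Stack the residuals e_i = p^I_q_i - project(p^G_i) over all 2D-3D correspondences."""
    preds = np.array([project(p, R_C_G, t_C_G, K) for p in points_3d])
    return (np.asarray(points_2d) - preds).ravel()
```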
Iterative methods, by contrast, solve the PnP problem iteratively, starting from an appropriate initial value <cit.>. Non-iterative solutions can provide the initial value, and iterative algorithms such as Gauss-Newton (GN) iterations are then used to refine the estimate <cit.>. The PnP problem can also be viewed as a non-linear least-squares problem on the re-projection error, so the camera pose can be solved by Bundle Adjustment (BA) <cit.>. Referring back to Eqn. <ref>, the re-projection error can be written as: e_i = p^ℐ_q_i - 1/Z^𝒞_i K (x^𝒢_𝒞)^-1 p̅^𝒢_i where p̅ denotes the homogeneous coordinates of p. To estimate the optimal camera pose, we minimize the total re-projection error: x̂^𝒢_q = min_x^𝒢_q 1/2∑^n_i=1 ‖e_i‖^2 which can be solved iteratively with non-linear optimization methods such as the GN and Levenberg-Marquardt (LM) methods. In ORB-SLAM3 <cit.>, VL-MRL after tracking loss or kidnapping is implemented by first using the MLPnP solver <cit.> to obtain an initial estimate and then refining it with BA. Summary: Given accurate 2D-3D correspondences, PnP is a well-established problem that can be solved very accurately. However, in most real-world cases the 2D-3D correspondences contain many, sometimes predominant, mismatches, which challenges pose estimation. The PnP solver is therefore usually integrated into a RANSAC loop <cit.> to obtain more robust and precise estimates <cit.>. §.§ Further Improvements Beyond feature extraction, matching, and the pose solver, there are several other attempts to improve VL-MRL methods. Here we introduce some works with noticeable improvements. Cross Descriptor Matching <cit.>: Traditional VL-MRL methods assume that the local features used in the mapping and localization stages are the same, which blocks the adoption of new local feature algorithms; for example, if SIFT <cit.> was used for mapping while HardNet <cit.> is used for localization, a traditional VL-MRL solution fails. In <cit.>, the authors overcame this problem by translating different types of descriptors into other descriptors or into joint embeddings, so that different descriptors can be matched indirectly. Line Feature <cit.>: Alongside point-wise local features, line-wise local features provide additional constraints for VL-MRL methods, especially in human-made environments. In <cit.>, the authors presented a novel line segment descriptor and showed that line features can complement traditional point features, as shown in Fig. <ref>, thereby boosting localization performance in extreme scenarios where point features are biased or sparse. Dense CNN Matching <cit.>: Indoor environments contain large textureless areas, where sparse local feature extraction detects very few features and the extracted features cluster in small, well-textured parts of the image, leading to potentially unstable configurations for VL-MRL. To overcome this, InLoc <cit.> and its extension <cit.> use multi-scale dense CNN features for both image description and feature matching. GN-Net <cit.> is optimized with a proposed GN loss to train weather-invariant deep features tailored for direct image alignment. VL-MRL without SfM <cit.>: Traditional VL-MRL needs to reconstruct the scene map with SfM <cit.> in the offline mapping stage, which is costly in both computation and memory.
MeshLoc <cit.> explores a more flexible alternative based on dense 3D meshes that does not require local features matching between ℐ_r to build the scene map. During localization stage, MeshLoc <cit.> still perform local feature matching on 2D images and estimate pose by solving PnP prblem, but the only difference is ℐ_r is online rendered by 3D mesh model. Map Squeeze <cit.>: Storing pre-built scene map with visual landmarks can be prohibitively expensive for large-scale environments, especially on mobile devices with limited storage and communication bandwidth. In <cit.>, authors designed a novel framework, SceneSqueezer, to compresses a scene while still maintaining localization accuracy. The scene ℳ is compressed in three stages: 1) ℐ_r for mapping are clustered; 2) a learned point selection module prunes the visual landmarks in each cluster taking into account the final pose estimation accuracy; 3) the descriptors of the selected visual landmarks are further compressed using learned quantization. By applying SceneSqueezer <cit.>, the size of scene map is reduced to 31 MB compared to 7828 MB of HLoc <cit.> in Aachen Day-Night dataset <cit.> while achieving comparable localization performance. Pose Verification <cit.> and Correction <cit.>: Some VL-MRL methods estimate a set of candidate poses for ℐ_q, a pose selection procedure need to be conducted. InLoc <cit.> utilizes the 3D structure of indoor scene and explicitly renders a virtual view that shows how the scene looks like from the estimated query pose. Then, authors extracted RootSIFT <cit.> on a regular grid or dense CNN features from synthesized view and measured the region similarity between ℐ_q and synthesized image, which can be seen as a score of candidate poses. In <cit.>, Taira et al. improved pose verification by integrating semantic constraint and trainable pose verification network. In <cit.>, to circumvent the sparsity of co-visible local features between ℐ_q and ℐ_r and improve the accuracy, Hyeon et al. proposed pose correction to reorganize local features observed from the estimated pose so that more reliable candidate poses can be provided to pose selection. Open-sourced Toolbox <cit.>: To facilitate the development of VL-MRL researches, many great works have open-sourced the overall localization pipeline for the community, e.g., HLoc[https://github.com/cvg/Hierarchical-Localization], Kapture-Localization[https://github.com/naver/kapture-localization], and XRLocalization[https://github.com/openxrlab/xrlocalization]. By leverage these toolbox, researchers can quickly valid their new contributions or evaluate their new proposals without any unrelated efforts to build localization pipeline. § POINT CLOUD MAPVisual landmark map in Sec. <ref> has a large requirement of storage due to saved high-dimensional features descriptors. The sensitivity of appearance (e.g., illumination) also challenges mapping with monocular camera and affect the reconstruction of visual landmark map. As a alternate, LiDAR can provide direct and precise 3D perception of scene map geometry, and the point cloud map built by high-precision LiDAR always be illumination-invariant and commonly more accurate than visual map. And point cloud map only need to save 3D position of point cloud, alleviating the storage burden. The point cloud map can be represented as ℳ={m_i} where m_i is a 3D point in global frame 𝒢, m_i=<p^𝒢_i, l_i>, p^𝒢_i is its 3D coordinates and l_i is its intensity (intensity sometimes is missing). 
In practical usage, the 3D points are usually used in “clip” format, that is, given a virtual viewpoint x^𝒢_v, the 3D point observed in this view can be clustered in a set 𝒫^𝒢_v={p_i,l_i}, so ℳ is changed to ℳ={m_i} where m_i is a clip of 3D point and its associated viewpoint m_i=<𝒫^𝒢_v,x^𝒢_v>. Since the wide deployment of LiDAR localization is hampered by the huge cost of high-precision LiDAR, localizing camera in LiDAR point cloud map (named as Image-to-Point cloud (I2P) localization for simplicity) is a newly emerged trend in MRL. However, the appearance information of camera and the geometry information of LiDAR is in different modality, the inherit difficulties in cross-modal data association challenges I2P Localization. In this section, we review cross-modal I2P localization using traditional geometry rules and using learning-based methods. §.§ Geometry-based Cross-modal Localization Geometry-based I2P (G-I2P) methods localize camera in LiDAR map by traditional geometry rules. In the G-I2P researches, stereo camera is a natural pick because we can easily lift 2D image data to 3D by stereo matching, which makes the alignment between camera and point cloud map much easier <cit.>. Instead, localizing monocular camera images in 3D point cloud maps is inherently more challenging, as they lack any depth or 3D information. For this, researchers turn to align ℐ_q to ℳ by lifting 2D visual measurements to 3D local construction <cit.>, digging geometry features <cit.>, or modelling 3D structure in other way <cit.>. As an early and successful attempt, Caselitz et al. <cit.> proposed to associate the local visual landmarks reconstructed from the monocular Visual Odometry (VO) <cit.> with point cloud map, as show in Fig. <ref> a). The 7 DoF transform (including 6 DoF pose and 1 DoF scale) was estimated in an ICP scheme. In <cit.>, authors estimate pose by maximizing the normalized mutual information between real query image ℐ_q and synthetic images rendered by LiDAR intensity. Although identifying matched features across different modalities is proven challenging, the incorporation of matched 2D-3D line-level features between query image and point cloud map has been demonstrated to aid in this objective in <cit.>. In DSL <cit.>, Ye et al. introduced the surfel constraints of point clouds into the direct photometric error and estimated poses by a tightly-coupled BA framework. Huang et al. first modelled the dense structure as an Euclidean Signed Distance Field (ESDF) in <cit.> and then improved by modelling the prior distribution of the point cloud map by Gaussian Mixture Model (GMM) <cit.> in <cit.>. The absolute pose estimated by I2P solution can correct the accumulated drafts of VO <cit.>. Coupling I2P method with VO is proven to lead to consistent and accurate localization performance.Summary: Many attempts on G-I2P have been made and achieved impressive successes, but all of these works need to be integrated into a VO pipeline to obtain initial guess of pose, and they are hard to re-localize with only one monocular image. These methods tend to apply continuous re-localization in point cloud map for absolute pose estimation in order to add absolute constraint into VO system, alleviating the drift problem of long term exploration.§.§ Learning-based Cross-modal LocalizationCross-modal Visual Place Recognition (I2P-VPR): Recently, scholars have begun to explore deep learning-based methods to achieve I2P localization with only one monocular image. 
First, many researches adopt GF-VPR method (in Sec. <ref>) into this cross-modal I2P localization task and propose I2P-VPR algorithms. I2P-VPR methods are very similar to GF-VPR methods, but its reference “frames” ℐ_r(t) is changed to local clips of point cloud map 𝒫^𝒢_r(t). For clearness, we also provide detailed pipeline of I2P-VPR methods here. Given the query image ℐ_q, a I2P-VPR algorithm can be written as a two-step pipeline, that is,1 represent ℐ_q and all the reference point cloud clips 𝒫^𝒢_r(t) in ℳ as features:ℱ^q =F_(ℐ_q) ℱ^r(t) =F_(𝒫^𝒢_r(t)) 2 retrieve the best-matched point cloud clip based on the similarity of features:m̂=max_𝐦_i P({ℱ^q, ℱ^i m_i | m_i ∈ℳ) Then, the pose of ℐ_q can be approximated as the associated pose with m̂.For this problem, one straightforward solution is jointly training a 2D CNN F_(·) for images and a 3D DNN F_(·) for point clouds to create shared embeddings <cit.>. However, this approach does not generalize well to unseen environments. <cit.> proposes to project images and point clouds into unit spheres and extract features through sphere CNN, which requires multiple images as input. Towards I2P localization robust to inconsistent environmental conditions, i3dLoc <cit.> matches equirectangular images to the 3D range projections by extracting cross-domain symmetric place descriptors. The 3D geometry features eliminates the condition-related features by a designed Generative Adversarial Network. Based on such features, i3dLoc <cit.> further design a spherical convolution network to learn viewpoint-invariant symmetric place descriptors. Similarly, <cit.> focuses on correlating the information of 360-degree spherical images to point clouds. Attention mechanism is applied to let the network capture the salient feature for comparing images and point clouds. I2P-Rec <cit.> provides a new baseline for I2P-VPR that it leverages on depth estimation networks to recover point clouds from images so that cross-modal data is converted into the same modality, i.e., Bird-Eye-View (BEV) images. Using the BEV image as an intermediate representation, very simple global feature network (CNN encoder followd by NetVLAD layer) trained by small number of training data can achieve impressive localization performance.Cross-modal Relative Pose Regression (I2P-RPR): After obtaining matched point cloud clip 𝒫^𝒢_r m_i and its pose x^𝒢_r, many works further refine query pose by I2P-RPR that is similar to image-to-image RPR in Sec. <ref>, that is, estimate relative pose between ℐ_q and 𝒫^𝒢_r: x̂^r_q = (ℐ_q,𝒫^𝒢_r)so that the absolute pose of ℐ_q can be calculated by:x̂^𝒢_q ←x^𝒢_r * x̂^r_q In this field, Cattaneo et al. first presented CMRNet <cit.> that calculates the cost volume between the image feature and LiDAR feature by a optical flow network <cit.>, and then regresses the pose of monocular camera relative to the LiDAR map. The LiDAR map provides projected depth for the network as input. Chang et al. compressed the LiDAR map to reduce map size by 87-94% while achieving comparable accuracy <cit.>. Such I2P-RPR networks simply utilize stacked convolution layer or fully connection layer as pose regressor and cannot fully exploit pose-related information from image-point cloud cost volumes. 
Therefore, POET <cit.> converts pose to high-dimensional features as queries in Transformer <cit.> and iteratively optimizes pose within a Transformer-based pose regressor, achieving improved localization accuracy.However, I2P-RPR methods are hard to achieve centimeter-level accuracy on large scale scene. So many proposals turn to re-localize camera based on 2D-3D matching between images and point cloud map, which is technically similar to VL-MRL methods in Sec. <ref> and is also called image-to-point cloud registration.Cross-modal Matching-based Localization (I2P-MLoc): 2D3D-MatchNet <cit.> is one of the earliest works focusing on image-to-point cloud registration for robot localization. The 2D and 3D key points are obtained by SIFT <cit.> and ISS <cit.> respectively. Then, a neural network is introduced to learn the descriptors for key points. Finally, EPnP <cit.> is adopted to estimate the transformation between the image and the point cloud with the 2D-3D correspondences. Built upon CMRNet <cit.>, CMRNet++ <cit.> turn to regress displacements between images and projected depth of point cloud map instead of directly regressing pose. CMRNet++ <cit.> achieves improved performance with significant margin, indicating that matching-based methods should be a more reasonable choice than regression-based alternates for I2P localization. DeepI2P <cit.> splits the image-to-point cloud registration problem into a classification problem and an optimization problem. A cross-modality neural network is adopted to classify whether the points fall into the image frustum. The classification results are utilized to construct the cost function of the inverse camera projection. The optimization solver solves the transformation that minimizes the value of the cost function. CorrI2P <cit.> designs a cross-modality network to extract the image-to-point cloud overlapping region and corresponding dense descriptors for the image and point cloud. CorrI2P <cit.> constructs dense image-to-point cloud correspondences and uses iterative RANSAC-based EPnP <cit.> to estimate the relative pose. EFGHNet <cit.> adopts the divide-and-conquer strategy to divide the image-to-point cloud registration into four separate sub-networks. These sub-networks are responsible for the horizon and ground normal alignments, rotation estimation, and translation estimation, respectively. These mentioned above methods divide the image-to-point cloud registration into separate modules for robot localization. The separation makes the modules separately optimized and thus not able to refine the error of the previous modules. Thus, I2D-Loc <cit.> apply BPnP module <cit.> to calculate the gradients of the back-end PnP-based pose estimation process, enabling the model to be trained end-to-end with the supervision on estimated pose. More recently, I2PNet <cit.> utilizes a coarse-to-fine 2D-3D registration architecture to accurately regress pose. By combining both advantages of regression-based and matching based I2P localization methods, I2PNet <cit.> achieves centimeter-level localization accuracy. Some other works add visual features in point cloud map, which is quite similar to visual landmark map in Sec. <ref> but the visual features of point cloud in map are online extracted during localization without offline storage. <cit.> introduces the Neural Re-projection Error (NRE) as a substitute for re-projection error in PnP problem. Given ℐ_q and 𝒫^𝒢_r in map with ℐ_r, the method extracts dense CNN features from both images. 
The sparse descriptors of 3D point clouds are sampled from ℐ_r based on image projection as Eqn. <ref>. For each 3D point, <cit.> computes dense loss maps and minimizes the loss with respect to the network parameters. The proposed NRE in <cit.> is differentiable not only w.r.t. the camera pose but also w.r.t. the descriptors. PixLoc <cit.> is another work in this field. PixLoc <cit.> also extracts multi-level features with pixel-wise confidences for ℐ_q and ℐ_r. The LM optimization then aligns corresponding features according to the 3D points in the map, guided by the confidence, from the coarse to the fine level. Compared with GN-Net <cit.> that also trained deep features via direct alignment, PixLoc <cit.> leverages the power of differentiable programming, and the only used supervision in training is pose, without the requirement of accurate pixel-wise ground truth correspondences.Summary: The inherit difference between cross-modal data make I2P localization hard. Benefit from high-dimensional CNN features, matching across cross-modal data becomes possible. In general, I2P-VPR only serves to retrieve reference point cloud clip and provides an coarse pose estimation. I2P-RPR and I2P-MLoc could provide more precise pose estimation. I2P-RPR avoid explicit feature matching and pose solver, the training costs is cheap but it cannot achieve very high localization accuracy. Instead, I2P-MLoc performs best in cross-modal localization task, showing good prospects for development. § VECTORIZED HIGH-DEFINITION MAPIn AD task, the compactness and accuracy are two vital characteristics of scene map. Therefore, the most widely used map in AD area has been HD Map since the its first appearance in late 2010. Commonly, the HD Map is created by mobile mapping system equipped with high-precision sensors including LiDAR, RTK and IMU at centimeter-level precision <cit.>. The localization feature in the HD map includes dense point cloud and sparse map element. The dense point cloud in HD Map is similar to the raw point cloud map in Sec. <ref>. In this section, we focus on the sparse map elements usually represented as vectors with semantic labels. These elements correspond to road element or signatures in the real world, e.g., lighting poles, road markers, and lane lines. Such a semantic element-based map representation is much lighter than other scene map like point cloud map and visual landmark map while carries road elements with a high level of detail <cit.>. MRL with HD map (HD-MRL) is concluded as an effective solution for mass-produced vehicles. The fundamental formulation of HD-MRL methods involve identifying semantic elements of HD map from images and then estimating pose by aligning the detected 2D element in the image with their corresponding 3D element in the HD map.Generally, the HD map can be denotes as ℳ={m_i} where each map element m_i with its semantic category s_i is modeled as a set of 3D control points m_i=<{p^𝒢_i(j)∈ℝ^3 }_j=1:N_i, s_i>, sampled uniformly in the 3D space for a unified representation, with N_i as the total number of control points of m_i. Given the camera pose x^𝒢_q, the 3D points {p^𝒢_i(j)} of m_i ∈ℳ can be projected into the image space as Eqn. <ref>, obtaining 2D projection {p̃^ℐ_q_i(j)}. 
The HD-MRL methods is to find an optimal camera pose x̂^𝒢_q which can minimize a defined cost model d between the projected HD map element points and their corresponding observations:x̂^𝒢_q = min_x^𝒢_q∑_m_i ∈ℳ∑^N_i_j=1 d( z^ℐ_q_i(j),p̃^ℐ_q_i(j))where z^ℐ_q_i(j) is the observation of j-th point of i-th element m_i in image plane ℐ_q.From the basic formulation defined above, HD-MRL methods need to be integrated into a localization system with multiple sensor fusion for acquiring an initial camera pose, which is similar to G-I2P in Sec. <ref>, since HD map elements can be seen as sets of 3D point cloud with semantic information. In this field, Pink et al. <cit.> built a lane-level map by aerial images, and then matched the lane marking in the query image with this pre-built map using ICP algorithm. However, the sparsity of HD map elements in real-world scenarios restricts the available information for pose estimation, blocking the wide usage of HD-MRL methods. To solve this problem, multiple sensor fusion often be used in Extended Kalman Filter (EKF) frameworks that IMU and GPS are utilized for continuous localization <cit.>. For the same purpose, later advances in HD-MRL begin to integrate relative pose constraint by visual feature tracking in VO <cit.> and visual(-inertial) SLAM <cit.> so that tracked visual features can be considered as complementary localization information to the HD map elements. In this area, some solutions combine these two pieces of information in a loosely coupled manner. They first perform HD MRL and VO separately and then fused by Kalman Filter <cit.> or sliding window-based state estimator <cit.>. However, such a loosely-coupled strategies require the HD map elements to be minimally self-sufficient for pose estimation, which does not fundamentally address the failure case caused by sparsity of HD map element. So, Wen et al. <cit.> followed this way to combine VO and HD-MRL but in a tightly-coupled framework with sliding window-based optimization. The optimization target of pose estimation considers both constraints of VO and HD-MRL, enabling consistent and accurate localization even when HD map elements are insufficient in current observation.Also, the depth ambiguity of monocular camera makes 2D-3D association between observed map elements in query image and the ones in HD map a technically challenging problem. For precisely matching map elements across views, we need to project the map elements in query image and HD map into a unified space. As the 3 DoF motion of vehicles using HD-MRL is generally sticked to the ground, one common strategy is to transform the detected map elements into the BEV using Inverse Projection Mapping (IPM) algorithms <cit.>. This strategy follows an ideal assumption that the road is flat, so it will suffer a serious disruption when there is an indispensable vibration on camera tilt angle with respect to the ground plane <cit.>. Some other works turn to project map elements from HD map into query image given initial query poses and perform matching on 2D image space <cit.>, as shown in Fig. <ref>. Due to the perspective effect of the monocular camera model, the shape of map elements will differ from their original 3D shape in HD map. So, the parameterization of map elements and the definition of cost model d need to be investigated. In <cit.>, authors simply modelled land boundaries as straight lines so that a point-to-line cost model between observed and projected map elements to solve poses. Liao et al. 
<cit.> instead utilized the pole-like landmark features and represented them as straight lines in the image plane, and localized by a particle filter. For fully using map elements in the scene, Guo et al. <cit.> used different post-processing methods for semantic segmentation of different elements in HD map. However, since map elements only have shape and semantic information, accurate association between 2D observations and 3D HD map is still difficult. The repeated structures, missed detections and false detections make data association highly ambiguous, thus challenging HD-MRL methods. To this end, Wang et al. <cit.> proposed a robust data association method considering local structural consistency, global pattern consistency and temporal consistency, and introduced a sliding window factor graph optimization framework to fuse association and odometry measurements. In <cit.>, authors adopted the Semantic Chamfer Matching (SCM) algorithm to perform 2D-3D data association. In these works, SCM is utilized as a general cost model d() for different map elements. As a further improvement, TM^3Loc <cit.> derives an analytical derivation of SCM cost with respect to the 6 DoF pose on 𝔰𝔢(3) to ensure efficient optimization, avoiding the inaccuracy due to any prior assumption of the element shapes. With the efforts on simplified model of map elements, these works can perform HD-MRL in real-time and achieve reasonable performances for long-term localization.In last two years, some deep learning-based HD-MRL methods have been developed. OrienterNet <cit.> estimates 3 DOF pose in a 2D map, OpenStreetMap (OSM), by proposed neural map matching. But its localization accuracy is only limited to sub-meter level due to the low precision of used OSM map. BEV-Locator <cit.> formulates this problem as an E2E learning scheme and proposes a Transformer-based architecture to address the key challenge of the cross-modality matching for HD-MRL. It encodes the map elements from discrete points into structured vectors, and conduct interaction between images and HD map on BEV space. EgoVM <cit.> extracts BEV features and map embeddings by Transformer Decoder <cit.>, and then map embeddings are compared with interpolated map features by candidate poses to calculate their similarities, so as to estimate the optimal pose offset. However, BEV-Locator <cit.> and EgoVM <cit.> needs multiple view images as input, learning-based HD-MRL problem still be unsolved.Summary: HD map should be the most compact and lightwight scene map, so it is preferred by AD researches. Although it need heavy construction and maintenance cost, the recently proposed crowdsourced map building <cit.> and updating <cit.> strategies may help to reduce the maintenance cost, making HD map more widely acceptable. MRL in HD map is still a very hard problem because precise data association between visual observations and semantic elements in HD map is theoretically difficult. Traditional methods seek aids from multiple sensor fusion or online odometry while recent learning-based methods leverage advanced representation ability of DNN models. Based on the localization performance and required sensor setup, we believe learning-based method is a more promising way for HD-MRL.§ LEARNT IMPLICIT MAPIn the current age of deep learning, scholars begin to rethink the representation format of scene map in a more implicit way. 
Some recently proposed works encode the scene map into neural networks so that the network can directly recover the pose of an image (Absolute Pose Regression, APR), estimate the 3D coordinates of each pixel in the image (Scene Coordinate Regression, SCR), or render the geometric structure and appearance of the scene (Neural Radiance Field, NeRF). Benefiting from high-performance computing devices, mature deep learning technology, and a tremendous number of training samples, such implicit maps may become a new choice for MRL methods (NN-MRL) in the near future. In this section, we review APR-, SCR-, and NeRF-based MRL methods, as shown in Fig. <ref>.

§.§ Absolute Pose Regression

The APR solution was first proposed in PoseNet <cit.>, which directly regresses the position and orientation of the camera with respect to the global frame, given a query image ℐ_q. The original PoseNet <cit.> is composed of a feature encoder, i.e., a GoogLeNet backbone <cit.>, and a pose regressor implemented as an MLP head. Early APR methods are less accurate than VL-MRL methods (Sec. <ref>), but they enable pose estimation with a single forward pass. The scene map is represented in a totally implicit and very compact form, i.e., the parameters of a DNN F_Θ(·). Thus, an APR method can be simply formulated as:

x̂^𝒢_q = F_Θ(ℐ_q)

Single Scene APR (SS-APR): Contemporary APR methods improve localization accuracy by developing different CNN backbones <cit.> or MLP heads <cit.>, modelling localization uncertainty <cit.>, or introducing Long Short-Term Memory (LSTM) layers <cit.> in the networks. Since the scales of orientation (in degrees) and position (in meters) differ considerably across scenes, APR methods need a weight to balance these two factors. In the original PoseNet <cit.>, the weight between the orientation error and the translation error varies across scenes, e.g., 120 to 750 for indoor scenes and 250 to 2000 for outdoor scenes, which makes tuning tricky. Kendall et al. <cit.> proposed a new loss function with a learnable weighting of the orientation and translation errors so that manual fine-tuning can be avoided; it is popularly used in later works. In <cit.>, the authors used the logarithm of a unit quaternion instead of the traditional unit quaternion as a more suitable orientation representation, and introduced geometric constraints from additional sensors, such as VO and GPS, as new loss terms during network training to improve performance. In <cit.>, the SS-APR problem is split into two steps: first predicting the most relevant anchor point among anchors uniformly defined over the scene, and then regressing the relative offset with respect to it; such a scheme simplifies the APR problem and significantly improves accuracy. For higher localization accuracy, Wang et al. <cit.> applied self-attention to the CNN features to guide the pose regression, while CaTiLoc <cit.> exploits an attention mechanism over the full receptive field with a Transformer <cit.>. Xue et al. <cit.> incorporated L-RPR and APR into a single DNN and jointly trained the network so that the uncertainty of APR can be alleviated by augmenting the observation based on the co-visibility from VO estimation. At test time, Brahmbhatt et al. <cit.> and Xue et al. <cit.> performed a standard Pose Graph Optimization (PGO) by optimizing only the nodes (APR estimates) with the edges (VO or RPR estimates) fixed.
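As a concrete illustration of this single-scene paradigm, the following is a minimal, hypothetical PyTorch sketch of an APR model with the learnable weighting between translation and orientation errors popularized by Kendall et al. <cit.>; the backbone choice, layer sizes, and initial weight values are illustrative assumptions rather than any specific published configuration.

```python
import torch
import torch.nn as nn
import torchvision


class AbsolutePoseRegressor(nn.Module):
    """Minimal APR sketch: CNN backbone -> global latent feature -> MLP pose head."""

    def __init__(self, feat_dim=2048):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)
        backbone.fc = nn.Identity()                       # keep the 2048-d global feature
        self.backbone = backbone
        self.head = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU(),
                                  nn.Linear(512, 7))      # 3 translation + 4 quaternion
        # Learnable weights balancing translation / orientation terms (Kendall et al.)
        self.s_t = nn.Parameter(torch.tensor(0.0))
        self.s_q = nn.Parameter(torch.tensor(-3.0))

    def forward(self, image):
        feat = self.backbone(image)
        out = self.head(feat)
        t, q = out[:, :3], nn.functional.normalize(out[:, 3:], dim=1)
        return t, q

    def loss(self, t, q, t_gt, q_gt):
        l_t = (t - t_gt).norm(dim=1).mean()
        l_q = (q - q_gt).norm(dim=1).mean()
        # Homoscedastic-uncertainty weighting: no manual tuning of the balance factor.
        return l_t * torch.exp(-self.s_t) + self.s_t + l_q * torch.exp(-self.s_q) + self.s_q


model = AbsolutePoseRegressor()
images = torch.randn(2, 3, 224, 224)
t, q = model(images)                                      # one forward pass per query image
print(t.shape, q.shape)                                   # torch.Size([2, 3]) torch.Size([2, 4])
```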
Although many modifications to the architecture and training method originally formulated in PoseNet <cit.> have been proposed, the main paradigm remains the same: (1) a CNN backbone generates a global latent feature, which is assumed to contain pose-related information, and (2) a regressor estimates the absolute pose from the extracted feature. In the SS-APR methods mentioned above, the backbone and regressor are trained E2E for each scene.

Multiple Scene APR (MS-APR): MS-APR methods were later developed to extend a single trained APR model to multiple scenes. Blanton et al. <cit.> first proposed a multi-scene PoseNet that classifies the scene in which the query image was taken and then retrieves a scene-specific regressor for pose regression; they train a set of regressors, one per scene, with a set of scene-specific parameterized losses. MS-Transformer <cit.> then learns multi-scene APR with a single unified Transformer-based network, in which latent features and scene encodings are both considered in pose regression. MS-Transformer <cit.> uses scene-specific queries (scene encodings) in the Transformer, so many scenes can be encoded in parallel to achieve MS-APR.

Summary: Even though APR algorithms work well on small-scale datasets, e.g., 7Scenes <cit.>, there is no evidence that they can scale to large scenes. For example, on the Cambridge Landmarks dataset <cit.>, which consists of six medium-sized scenes (about 900-5500 m^2) in an urban environment, the state-of-the-art (SOTA) MS-Transformer <cit.> achieves an average localization accuracy of 0.98 m and 3.10^∘, much worse than the VL-MRL method InLoc <cit.>, with an accuracy of 0.11 m and 0.5^∘. APR methods can serve as an initial or coarse pose estimate in real-time applications, and their O(1) computational complexity is also remarkable.

§.§ Scene Coordinate Regression

SCR methods follow the PnP-based solution of MRL methods, but in an E2E learning-based way. The scene map ℳ is again represented by the network parameters of a DNN F_Θ(·). Specifically, SCR methods directly learn to predict the 3D scene coordinate p^𝒢_i of each 2D pixel p^{ℐ_q}_i in ℐ_q, and the query pose can then be solved as a PnP problem based on the estimated 2D-3D correspondences, which can be formulated as:

{p̂^𝒢_i} = F_Θ(ℐ_q),   x̂^𝒢_q = PnP({(p^{ℐ_q}_i, p̂^𝒢_i)}_i)

where (p^{ℐ_q}_i, p̂^𝒢_i) is a pair of an image pixel and its estimated scene coordinate. In this field, early approaches <cit.> used regression forest models and required RGB-D images as input. More recent works <cit.> have instead applied DNN-based models fed by RGB images, which greatly improves the usability of the SCR solution.

Scene-specific SCR (SS-SCR): SCR was first used to estimate dense 3D coordinates for ℐ_q in a previously seen scene; such models need to be trained specifically for each scene and are therefore called scene-specific methods. By applying an SCR model, 2D-3D correspondences can be obtained directly and the pose can be estimated by a PnP solver as in Eqn. <ref>. To reduce the localization uncertainty caused by imperfect network predictions, the PnP solver is typically integrated into a RANSAC framework so that the hypothesis with the most inliers is selected. However, vanilla RANSAC is non-differentiable, blocking E2E training of SCR models, so Brachmann et al. <cit.> proposed differentiable RANSAC strategies for SCR so that the pose can be estimated E2E.
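For reference, the non-differentiable baseline that these differentiable strategies replace can be sketched as follows: per-pixel scene coordinates predicted by an SCR network (simulated here from a known pose plus noise) are fed to a PnP solver inside RANSAC via OpenCV. All numbers are illustrative and no particular published pipeline is reproduced.

```python
import numpy as np
import cv2

# Assume the SCR network predicted a 3D scene coordinate for a set of query
# pixels; here we simulate such predictions from a known pose plus noise.
K = np.array([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]])
rvec_gt = np.array([0.05, -0.02, 0.01])
tvec_gt = np.array([0.3, -0.1, 0.5])

points3d = np.random.uniform([-2, -2, 4], [2, 2, 10], size=(200, 3))      # "predicted" scene coords
pixels, _ = cv2.projectPoints(points3d, rvec_gt, tvec_gt, K, None)
pixels = pixels.reshape(-1, 2) + np.random.normal(0, 0.5, size=(200, 2))  # detection noise
points3d[::10] += 3.0                                                      # a few gross outliers

# Non-differentiable baseline: PnP inside RANSAC keeps the hypothesis with most inliers.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    points3d, pixels, K, None, reprojectionError=3.0,
    iterationsCount=200, flags=cv2.SOLVEPNP_EPNP)
print("recovered t:", tvec.ravel(), "inliers:", len(inliers))
```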
For instance, in DSAC <cit.>, Brachmann et al. scored the estimated scene coordinates (which can be seen as 2D-3D correspondences) using a CNN and selected the final pose by probabilistic selection, while in <cit.> the authors proposed a novel RANSAC framework called Expert Sample Consensus (ESAC), in which multiple SCR networks are regarded as a set of "experts" and a gating network is trained to measure the relevance of each "expert" to the query image.

To encode the scene map in an SCR model more effectively, some works additionally learn a region-wise label for each pixel. HSCNet <cit.> predicts scene coordinates in a coarse-to-fine manner: it first classifies pixels into corresponding sub-regions and then predicts scene coordinates, reducing prediction ambiguity. Such a joint classification and regression strategy has since been studied extensively <cit.> and has been shown to alleviate the dependency on dense ground-truth scene coordinates in SCR model training <cit.>. Specifically, HSCNet++ <cit.> proposes a pseudo-labelling method in which ground-truth labels at each pixel location are propagated to a fixed spatial neighbourhood. However, SS-SCR still needs to train a model per scene and commonly requires pixel-wise ground-truth scene coordinates; the heavy training cost blocks the wide usage of SS-SCR.

Scene-agnostic SCR (SA-SCR): To boost the generalization ability and scalability of SCR methods, SA-SCR methods have been proposed <cit.>. They can regress dense scene coordinates in unseen scenes given some reference views. SANet <cit.> first constructs a scene pyramid by extracting multi-scale feature maps from the reference images, and then predicts scene coordinates for the query image with the assistance of this pyramid. In <cit.>, the authors proposed a Dense Scene Matching (DSM) module, which predicts scene coordinates from a cost volume constructed between the query image and a reference image with its scene coordinates. Following this line, the Transformer-based SAReg <cit.> takes a variable number of images (including query images and retrieved reference images) and sparse 2D-3D annotations from an SfM database <cit.> as input. More implicitly, NeuMap <cit.> encodes the scene as sparse scene-specific latent codes and estimates scene coordinates with the help of the resulting codes. When localizing in a new scene, NeuMap <cit.> only needs to optimize the latent codes for the new scene without re-training the rest of the DNN; with latent code pruning, redundant codes can be removed, so NeuMap <cit.> achieves performance similar to DSAC++ <cit.> with a 100 to 1000 times smaller scene map. Instead of densely estimating scene coordinates, NeuMap <cit.> and D2S <cit.> estimate scene coordinates only for sparse 2D key points, making them cost-effective. D2S <cit.> also supports updating via a self-supervised learning procedure using new observations, without requiring camera poses, intrinsic parameters, or ground-truth scene coordinates, which greatly improves the scalability and generalization ability of SCR.

Summary: SCR methods do not explicitly rely on local feature detection, description, and matching, and are able to provide dense correspondences between the query image and the scene map. They also do not require storing 3D models and descriptors, as the scene map is densely encoded in a learnable DNN.
The localization accuracy of SCR is impressive in small- and medium-scale scenarios <cit.>, but its usage in unbounded scenes is hard due to the expensive training cost.

§.§ Neural Radiance Field

NeRF implicitly represents a scene as a "radiance field": a volumetric density that models the geometry of the scene, and a view-dependent color that models the appearance of occupied regions of the scene <cit.>. A Multi-Layer Perceptron (MLP) takes the 3D position of a point p^𝒢 = (x, y, z) and the unit-norm viewing direction v = (d_x, d_y, d_z) as input, and estimates the density σ and color c of that point, (σ, c) ← F_Θ(p^𝒢, v). The scene map is represented by the parameters Θ of F_Θ(·). Given a set of RGB images {ℐ_t} with known camera poses {x^𝒢_t}, NeRF is trained by minimizing a photometric loss ℒ <cit.>:

Θ̂ = min_Θ ℒ({ℐ_t}, {x^𝒢_t})

NeRF learns to synthesize novel views at arbitrary camera poses, enabling virtual image rendering and scene geometry recovery, and has thus been applied to MRL in recent years. Some works directly use a pre-trained NeRF to estimate the pose, while others utilize NeRF as a novel data augmentation method to enrich the training data.

NeRF as pose estimator: As the first framework to estimate pose using NeRF, iNeRF <cit.> directly optimizes the 6 DoF pose with frozen network parameters Θ by minimizing the photometric error between rendered and observed pixels, which is the inverse problem of the original NeRF <cit.>:

x̂_q = min_{x_q ∈ SE(3)} ℒ(ℐ_q, x_q | Θ)

Such joint treatment of pose estimation and NeRF training then motivated pose-free NeRF <cit.>. In LATITUDE <cit.>, the authors trained a pose regressor on images generated from a pre-trained NeRF, which provides an initial value for hierarchical localization in the spirit of VPR methods without the need to store reference images; in the fine pose optimization stage, LATITUDE <cit.> minimizes the photometric error between the observed and rendered images by optimizing the pose on the tangent plane. In Loc-NeRF <cit.>, a pre-trained NeRF model is integrated into Monte Carlo localization to assign weights to particles according to the image similarity between the query image and rendered images. In NeRF-Loc <cit.>, a NeRF conditioned on reference images takes 3D points as input and generates corresponding 3D descriptors so that 2D-3D matches can be obtained by matching; the pose is then estimated by a PnP solver.

NeRF as data augmentation: As a more straightforward way to apply NeRF in MRL, NeRF can be used to render virtual images at unseen viewpoints so that the overfitting problem of APR methods can be effectively alleviated, improving APR performance <cit.>. LENS <cit.> applies a pre-trained NeRF-W <cit.> as an offline data augmentation pipeline to enhance the APR model. As online augmentation, Direct-PoseNet <cit.> minimizes the photometric error between the query image and a virtual image rendered at the estimated pose, while DFNet <cit.> uses feature-map errors <cit.> during online localization at unseen viewpoints, making APR methods more precise and scalable.

Summary: As a newly emerged technique, NeRF has two favourable characteristics: 1) the implicit representation of the scene; 2) the ability to render synthetic images at novel viewpoints. These two characteristics present great potential for MRL research. Current attempts in this area focus on incorporating NeRF into APR methods. With the assistance of NeRF, APR can perform comparably to or even better than VL-MRL methods in small- and medium-scale scenes.
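As a toy illustration of the pose-estimator formulation above, the sketch below optimizes pose parameters by gradient descent on a photometric error against a frozen renderer. The `render` function is a simple stand-in for a real pre-trained, differentiable NeRF, and the pose parametrization is deliberately simplified; both are assumptions made only to keep the sketch self-contained.

```python
import torch


def render(pose, nerf_params):
    """Stand-in for a pre-trained NeRF renderer: a differentiable image as a function of pose.

    A real implementation would cast rays from `pose` and volume-render with the
    frozen MLP; a toy differentiable function is used here instead.
    """
    u = torch.linspace(-1, 1, 32)
    grid_y, grid_x = torch.meshgrid(u, u, indexing="ij")
    shift = pose[:2].view(2, 1, 1)
    return torch.sigmoid(nerf_params[0] * (grid_x - shift[0]) + nerf_params[1] * (grid_y - shift[1]))


nerf_params = torch.tensor([3.0, -2.0])                                    # frozen weights Theta
observed = render(torch.tensor([0.2, -0.1, 0.0]), nerf_params).detach()   # query image I_q

# Inverse problem: optimise the pose parameters, keep Theta fixed.
pose = torch.zeros(3, requires_grad=True)                                  # toy pose parameters
opt = torch.optim.Adam([pose], lr=5e-2)
for step in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(render(pose, nerf_params), observed)  # photometric error
    loss.backward()
    opt.step()
print("recovered pose parameters:", pose.data)
```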
Even so, the utilization of NeRF in MRL is still an open question to be explored.

§ BENCHMARKING MONOCULAR RE-LOCALIZATION

In this section, the common evaluation metrics and datasets are introduced, and then we report the performance of some SOTA MRL methods with various types of scene map on these benchmarks. Looking back at the MRL methods discussed above, some retrieve reference images as an approximation of the query pose while others perform precise pose estimation for the query image, as listed in Tab. <ref>. These two kinds of MRL are generally tested with different evaluation metrics, so we introduce and evaluate them separately.

§.§ Evaluation metrics

§.§.§ MRL for IPA

Retrieval-based MRL methods adopt the evaluation metrics of image retrieval tasks. A retrieved reference image or point cloud clip is defined as a true positive if it is geometrically near the query image. The ground truth is generally labelled by geometric distance based on GPS signals (for example, a threshold of 10 m in <cit.>) or defined manually <cit.>. With the ground truth, we can measure the Precision and Recall metrics, where precision is the ratio of true positives to all retrieved references and recall is the ratio of true positives to all potentially matched query-reference pairs existing in the scene. For applications that need comprehensive retrieval, we measure the ratio of retrieved true positives among the top-K candidates, i.e., Recall@K. For applications in which a mismatch makes the whole system fail dramatically, e.g., loop closure detection in SLAM <cit.>, we instead measure the maximum recall at 100% precision. The higher these two metrics, the better the MRL method performs.

§.§.§ MRL for FPE

MRL methods aiming at precise pose estimation are evaluated by directly measuring the estimated poses. The common evaluation metrics are the position error of the translation t^𝒢_v (in meters) and the orientation error of the rotation R^𝒢_v (in degrees) between the estimated results and the ground-truth poses. The lower the errors, the better the MRL method performs. Additionally, the size of the represented scene map ℳ and the running efficiency of MRL methods should also be considered.

§.§ Dataset

There are a large number of datasets available for the evaluation of MRL methods and it is difficult to give a complete overview. Here we present only the typical datasets that we have used to evaluate MRL methods in this survey.

§.§.§ MRL for VPR only

Datasets used to evaluate VPR must include query and reference images that can be paired by geometric distance, so the dataset should traverse the same scene multiple times or have a "looped" trajectory. The Gardens Point dataset <cit.> was recorded on the Gardens Point Campus of the Queensland University of Technology at different times of day and night; it contains three traverses of the scene, two during the day and one at night. The SPEDTest dataset <cit.> includes frames depicting various locations from all over the world, covering a vast range of weather, seasonal, and illumination conditions. The Nordland dataset <cit.> was first introduced to VPR evaluation by <cit.>; it covers a 728 km train journey in Norway during the summer and winter seasons and represents a natural (non-urban) outdoor environment that is unexplored in other datasets. The Pittsburgh dataset <cit.> has two versions of test sets for VPR evaluation.
Pitts250k-test contains 8k queries and 83k reference images collected from Google Street View, while Pitts30k-test comprises 8k queries and 8k references; both show significant viewpoint changes. The Mapillary Street-Level Sequences (MSLS) dataset <cit.> was collected using cameras mounted on ground vehicles and presents a wide range of viewpoint and illumination changes. The Oxford dataset <cit.>, including CityCentre and NewCollege, was collected from the fish-eye binocular images of a wheeled robot on a campus; ground-truth matches between query and reference images are provided by the authors. The St. Lucia dataset <cit.> was recorded through a suburb of Brisbane at five different times of day, and also on different days over a period of two weeks; viewpoint changes are limited, but significant appearance changes due to the different times of day are included, as well as dynamic objects such as traffic or cars parked on the street. The 17-Places dataset <cit.> consists of a number of different indoor scenes, ranging from offices to labs, hallways, seminar rooms, bedrooms, etc., and exhibits both viewpoint and condition variations. Beyond these, many other challenging datasets for VPR evaluation exist, such as the Tokyo 24/7 dataset <cit.>, ESSEX3IN1 dataset <cit.>, INRIA Holidays dataset <cit.>, Synthia dataset <cit.>, etc.

§.§.§ MRL for IPA

Evaluating both VPR and I2P-VPR requires a dataset with both images and LiDAR sweeps. Since I2P-VPR is a newly emerged research topic, very few datasets have been developed for evaluation in this field. The KITTI dataset <cit.> is a widely used AD benchmark that contains 22 sequences, of which 11 sequences (00-10) provide ground-truth trajectories originally used for VO or SLAM evaluation. Based on the given pose of each image, ground-truth matches between query and reference images/point clouds can be labelled, so both VPR and I2P-VPR can be evaluated; since synchronized LiDAR scans are also provided, a LiDAR point cloud map can be built offline with LiDAR SLAM to generate reference point cloud clips as mentioned in Sec. <ref>.

§.§.§ MRL for FPE

Some datasets provide a ground-truth pose for each image so that MRL methods aiming at FPE can be evaluated. These datasets usually provide a full sensor suite, so we can build a visual landmark map (Sec. <ref>), a point cloud map (Sec. <ref>), an HD map (Sec. <ref>), and even a DNN-based implicit map (Sec. <ref>). Aachen Day-Night <cit.> builds the scene map from reference images taken during the daytime with hand-held cameras over a period of about two years; the dataset offers query images taken in the daytime and at nighttime, so both viewpoint and illumination changes are included. It is popularly used in the evaluation of VL-MRL methods. The InLoc dataset <cit.> is designed for large-scale indoor localization and contains significant appearance variation between queries and the 3D database due to large viewpoint changes, moving furniture, occlusions, and changing illumination. The dataset is composed of a database of RGBD images geometrically registered to floor maps, augmented with a separate set of RGB query images taken by hand-held devices to make it suitable for the task of indoor localization. The reference data consists of 9,972 perspective images derived from 277 RGBD panoramic scans of two buildings at Washington University in St. Louis.
The query set consists of 329 photos taken with a smartphone camera and annotated with ground-truth 6 DoF camera poses; the dataset is used in the evaluation of VL-MRL methods. The KITTI dataset <cit.> provides a ground-truth pose for each image, so it can also be used to evaluate MRL for FPE; since multi-sensor data is provided, it supports the evaluation of many kinds of MRL. KAIST Urban <cit.> provides LiDAR data and stereo images from various sensors targeting a highly complex urban environment, capturing features such as metropolitan areas, complex buildings, and residential areas; Wen et al. built an HD map for it so that it can be used to evaluate HD-MRL methods <cit.>. 7Scenes <cit.> is a collection of tracked RGB-D camera frames; the authors used the KinectFusion system <cit.> to obtain ground-truth camera poses and a dense 3D model. Several sequences were recorded per scene by different users and split into distinct training and testing sets. It supports the evaluation of almost all kinds of MRL methods. Cambridge Landmarks <cit.> is a large-scale outdoor MRL dataset collected around Cambridge University; it contains the original videos, extracted image frames labelled with their 6 DoF camera poses, and a visual reconstruction of the scene. Like 7Scenes <cit.>, it supports the evaluation of almost all kinds of MRL methods. The datasets mentioned above cover only a small part of those available for evaluating MRL methods for FPE. Besides these, well-known datasets such as CMU Seasons and Extended CMU Seasons <cit.>, RobotCar Seasons <cit.>, and the ETH-Microsoft dataset <cit.> are also widely used in evaluation, and the Map-free Localization dataset <cit.> provides a specialised benchmark for evaluating RPR methods when only one reference image is included in the scene map.

§.§ Performance of SOTAs

Here, we collect the performance of some SOTA MRL methods from published papers as a reference for evaluating the advantages and disadvantages of different methods.

§.§.§ MRL for IPA

MRL methods for IPA, including VPR and I2P-VPR, retrieve reference images given a query image, so they are measured and compared using the precision and recall of their retrieval results.

VPR: First, we report the performance of VPR methods on public datasets, evaluated with different metrics, e.g., Recall@K (Tab. <ref>) and maximum recall at 100% precision (Tab. <ref>). Tab. <ref> shows that the differentiable VLAD pooling layer effectively aggregates dense local descriptors into global features, making NetVLAD and its extensions <cit.> perform better than average pooling <cit.> and GeM pooling <cit.> in retrieval-based GF-VPR tasks. Benefiting from the global fusion of features, MixVPR <cit.> achieves further improvements, especially on the challenging Nordland dataset with its varying weather and seasons. More recently, pre-trained features from visual foundation models (such as DINOv2 <cit.>) have proven to be an excellent alternative for GF-VPR, enabling AnyLoc <cit.> to achieve impressive retrieval performance in very different scenes. Although GF-VPR methods can retrieve reference images comprehensively, their retrieval accuracy is lower than that of SOTA LF-VPR methods, as shown in Tab. <ref>. Adding verification with local features <cit.> or jointly utilizing local and global features <cit.> achieves pleasing accuracy while still retrieving most reference images for each query.
Table: Recall over top-K candidates of different GF-VPR techniques on popular benchmarks.

Method | dim | Pitts250k-test <cit.> R@1 / R@5 / R@10 | MSLS-val <cit.> R@1 / R@5 / R@10 | SPED <cit.> R@1 / R@5 / R@10 | Nordland <cit.> R@1 / R@5 / R@10
AVG <cit.> † | 2048 | 62.6 / 82.7 / 88.4 | 59.3 / 71.9 / 75.5 | 54.7 / 72.5 / 77.1 | 4.4 / 8.4 / 10.4
GeM <cit.> † | 2048 | 72.3 / 87.2 / 91.4 | 65.1 / 76.8 / 81.4 | 55.0 / 70.2 / 76.1 | 7.4 / 13.5 / 16.6
NetVLAD <cit.> † | 32768 | 86.0 / 93.2 / 95.1 | 59.5 / 70.4 / 74.7 | 71.0 / 87.1 / 90.4 | 4.1 / 6.6 / 8.2
AVG <cit.> ⋆ | 2048 | 78.3 / 89.8 / 92.6 | 73.5 / 83.9 / 85.8 | 58.8 / 77.3 / 82.7 | 15.3 / 27.4 / 33.9
GeM <cit.> ⋆ | 2048 | 82.9 / 92.1 / 94.3 | 76.5 / 85.7 / 88.2 | 64.6 / 79.4 / 83.5 | 20.8 / 33.3 / 40.0
NetVLAD <cit.> ⋆ | 32768 | 90.5 / 96.2 / 97.4 | 82.6 / 89.6 / 92.0 | 78.7 / 88.3 / 91.4 | 32.6 / 47.1 / 53.3
SPE-NetVLAD <cit.> ⋆ | 163840 | 89.2 / 95.3 / 97.0 | 78.2 / 86.8 / 88.8 | 73.1 / 85.5 / 88.7 | 25.5 / 40.1 / 46.1
Gated NetVLAD <cit.> ⋆ | 32768 | 89.7 / 95.9 / 97.1 | 82.0 / 88.9 / 91.4 | 75.6 / 87.1 / 90.8 | 34.4 / 50.4 / 57.7
CosPlace <cit.> ⋆ | 2048 | 91.5 / 96.9 / 97.9 | 84.5 / 90.1 / 91.8 | 75.3 / 85.9 / 88.6 | 34.4 / 49.9 / 56.5
MixVPR <cit.> ⋆ | 2048 | 94.1 / 98.2 / 98.8 | 87.0 / 92.7 / 94.2 | 84.7 / 92.1 / 94.4 | 57.9 / 73.8 / 79.0
MixVPR <cit.> ⋆ | 4096 | 94.6 / 98.3 / 99.0 | 88.0 / 92.7 / 94.6 | 85.2 / 92.1 / 94.6 | 58.4 / 74.6 / 80.0

Method | Baidu Mall <cit.> R@1 / R@5 | Gardens Point <cit.> R@1 / R@5 | 17 Places <cit.> R@1 / R@5 | Pitts-30k <cit.> R@1 / R@5 | St. Lucia <cit.> R@1 / R@5 | Oxford <cit.> R@1 / R@5
NetVLAD <cit.> | 53.10 / 70.51 | 58.50 / 85.00 | 61.58 / 77.83 | 86.08 / 92.66 | 57.92 / 72.95 | 52.88 / 74.87
CosPlace <cit.> | 41.62 / 55.02 | 74.00 / 94.50 | 61.08 / 76.11 | 90.45 / 95.66 | 99.59 / 99.93 | 91.10 / 99.48
MixVPR <cit.> | 64.44 / 80.28 | 91.50 / 96.00 | 63.79 / 78.82 | 91.52 / 95.47 | 99.66 / 100 | 90.05 / 98.95
AnyLoc-GeM-DINOv2 <cit.> | 50.13 / 70.55 | 88.00 / 97.50 | 63.55 / 79.56 | 77.04 / 87.28 | 76.91 / 89.34 | 81.15 / 97.38
AnyLoc-VLAD-DINO <cit.> | 61.17 / 78.32 | 95.00 / 98.50 | 63.79 / 78.82 | 83.45 / 91.99 | 88.46 / 94.88 | 78.53 / 96.34
AnyLoc-VLAD-DINOv2 <cit.> | 75.22 / 87.57 | 95.50 / 99.50 | 65.02 / 80.54 | 87.66 / 94.69 | 96.17 / 98.84 | 98.95 / 100

Notes: 1. The results are cited from MixVPR <cit.> and AnyLoc <cit.>. † Results reported by the original publications. ⋆ Variants trained on the GSVcities dataset <cit.> using the same backbone network (ResNet-50 <cit.>) as MixVPR <cit.>.

Table: Maximum recall at 100% precision of different VPR methods on public benchmarks.

Methodology | From | Oxford <cit.> CityCentre / NewCollege | KITTI <cit.> #00 / #05 / #06 | Malaga <cit.> Parking 6L | St. Lucia <cit.> 100909 (12:10) / 180809 (15:45)

Image Global Feature-based Visual Place Recognition
NetVLAD <cit.> | TPAMI'18 | 0.7148 / 0.4369 | 0.9315 / 0.9108 / 0.9632 | 0.3236 | 0.8046 / 0.7967
ResNet50-AP-GeM <cit.> | TPAMI'19 | 0.9056 / 0.8485 | 0.9124 / 0.9470 / 0.9779 | 0.6011 | 0.7926 / 0.6936
Kazmi et al. <cit.> | TRO'19 | 0.7558 / 0.5109 | 0.9039 / 0.8141 / 0.9739 | 0.5098 | 0.8006 / 0.7255

Image Local Feature-based Visual Place Recognition
FABMAP 2.0 <cit.> | IJRR'11 | 0.3526 / 0.6038 | 0.8222 / 0.3712 / 0.6347 | n.a. | n.a. / n.a.
DLoopDetector <cit.> | TRO'12 | 0.3059 / n.a. | 0.7243 / 0.5197 / 0.8971 | 0.7475 | 0.3722 / 0.3136
Tsintotas et al. <cit.> | ICRA'18 | 0.6595 / 0.2988 | 0.9318 / 0.9420 / 0.9669 | 0.8799 | 0.2627 / 0.1507
iBoW-LCD <cit.> | RAL'18 | 0.8825 / 0.7940 | 0.7650 / 0.5307 / 0.9553 | 0.5098 | 0.7002 / 0.8750
Tsintotas et al. <cit.> | RAL'19 | n.a. / n.a. | 0.9750 / 0.9260 / 0.8897 | 0.8500 | n.a. / n.a.
Yue et al. <cit.> | IROS'19 | 0.8984 / 0.9229 | 0.9416 / 0.9494 / 0.9963 | 0.8605 | n.a. / n.a.
Han et al. <cit.> | JFR'21 | 0.3345 / 0.5982 | 0.9145 / 0.8651 / n.a. | 0.7251 | n.a. / –
SLCD <cit.> | TII'21 | 0.4088 / 0.7529 | 0.9753 / 0.8972 / 0.9313 | n.a. | n.a. / n.a.
Yue et al. <cit.> | JFR'22 | 0.9127 / 0.9463 | 0.9569 / 0.9518 / 0.9963 | 0.8992 | n.a. / n.a.

Image Global and Local Feature-based Visual Place Recognition
HTMap <cit.> | TRO'17 | 0.7968 / 0.7360 | 0.9024 / 0.7588 / 0.9703 | n.a. | n.a. / n.a.
FILD <cit.> | IROS'19 | 0.6648 / 0.7674 | 0.9123 / 0.8515 / 0.9338 | 0.5609 | 0.7606 / 0.6696
FILD++ <cit.> | JFR'22 | 0.9091 / n.a. | 0.9492 / 0.9542 / 0.9816 | 0.6274 | 0.8339 / 0.8136
Liu et al. <cit.> | ICRA'21 | 0.8601 / 0.9121 | 0.9302 / 0.9253 / n.a. | n.a. | n.a. / n.a.

Sequence-based Visual Place Recognition
SeqSLAM <cit.> | ICRA'12 | 0.3186 / 0.5757 | 0.8758 / 0.1823 / 0.6068 | n.a. | n.a. / n.a.
Bampis et al. <cit.> | IJRR'18 | 0.4963 / 0.8087 | 0.8947 / 0.8771 / 0.8015 | 0.3393 | 0.6093 / 0.4979

Notes: 1. Most of the results are cited from recent papers <cit.> and <cit.>. 2. "n.a." means the method cannot achieve 100% precision in our implementation with default setups and the result in published papers is unavailable.

I2P-VPR: For cross-modal I2P-VPR between a query image and reference point cloud clips, SOTA performance is achieved by I2P-Rec <cit.>, as shown in Fig. <ref>. With the assistance of mature monocular depth estimation, I2P-Rec <cit.> transforms the query image and the point cloud clips from the scene map into BEV images, simplifying the cross-modal retrieval problem and leading to pleasing performance. The authors also conclude that such a BEV representation of the point cloud works better than a raw point cloud representation, and that better monocular depth estimation leads to better retrieval performance.

Summary: The objective of MRL methods for IPA is to retrieve references accurately and completely for each query, which requires robustly measuring the similarity between query and reference against shifted viewpoints, changed appearance, dynamic occlusions, and so on. Utilizing pre-trained features from the computer vision community <cit.>, geometric verification based on local features <cit.>, robust representations combining local and global features <cit.>, and temporal enhancement <cit.> may all help to improve performance in this field. However, since VPR and I2P-VPR methods directly use the retrieved reference pose as the current pose, their localization accuracy is inherently limited, as will be experimentally confirmed in later paragraphs. They are therefore generally used as a coarse or initial pose approximation in hierarchical localization frameworks <cit.>. Additionally, VPR-Bench [https://github.com/MubarizZaffar/VPR-Bench] <cit.> is an open-source VPR evaluation framework with quantifiable viewpoint and illumination invariance, enabling fast and fully quantified evaluation of VPR methods; in the I2P-VPR field, such a public evaluation benchmark has not been proposed yet.

§.§.§ MRL for FPE

Several kinds of MRL methods estimate fine poses; they utilize various kinds of scene maps and achieve different levels of localization accuracy. We first report the performance of RPR, VL-MRL, I2P-MRL, HD-MRL, and NN-MRL methods respectively; then, most kinds of MRL methods are evaluated on two typical datasets so that their localization accuracy can be fairly compared.

RPR: For the evaluation of RPR methods, a benchmark called mapfree-reloc-benchmark has recently been proposed [https://research.nianticlabs.com/mapfree-reloc-benchmark/leaderboard]; we suggest readers refer to the public leaderboard for detailed information. Up to now, the SOTA solution for RPR solves the pose by decomposing the essential matrix between matched features across the query and reference images and then recovers the absolute scale with a SOTA monocular depth estimation method, DPT <cit.>. This solution achieves a localization performance of 1.23 m translation error and 11.1^∘ rotation error.

VL-MRL: For VL-MRL methods, as shown in Tab. <ref> and Tab. <ref>, many recent SOTA methods for local feature extraction <cit.> and matching <cit.> can be readily integrated into the HLoc framework as alternatives to the originally used HFNet <cit.> and NN matching. We report a widely used evaluation metric here: the ratio of successfully localized queries within three defined tolerances of localization error.
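A minimal sketch of how such error and success-rate metrics can be computed is given below; the three tolerances used here (0.25 m / 2°, 0.5 m / 5°, 5 m / 10°) are common choices in the literature and are illustrative rather than the exact thresholds of every benchmark.

```python
import numpy as np
from scipy.spatial.transform import Rotation


def pose_errors(R_est, t_est, R_gt, t_gt):
    """Translation error in metres and rotation error in degrees for one query."""
    t_err = np.linalg.norm(t_est - t_gt)
    dR = Rotation.from_matrix(R_est.T @ R_gt)
    return t_err, np.degrees(dR.magnitude())


def success_rates(errors, tolerances=((0.25, 2.0), (0.5, 5.0), (5.0, 10.0))):
    """Fraction of queries localized within each (metres, degrees) tolerance."""
    errors = np.asarray(errors)                    # shape (N, 2): [t_err, r_err] per query
    return [float(np.mean((errors[:, 0] <= t) & (errors[:, 1] <= r)))
            for t, r in tolerances]


# Toy usage with random per-query translation errors and zero rotation error.
errs = [pose_errors(np.eye(3), np.random.randn(3) * 0.3, np.eye(3), np.zeros(3))
        for _ in range(100)]
print(success_rates(errs))
```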
When there are enough correctly matched local features between the query image and the visual landmark map, HLoc achieves excellent localization performance. From Tab. <ref>, it can be seen that extracting more local features contributes to localization, but MS detection is not always beneficial for localization <cit.>. The same keypoint detector paired with more advanced descriptors generates more accurate matches and thus achieves better performance <cit.>. From Tab. <ref> and <ref>, we can also see that accurate localization in indoor scenes <cit.> is much more difficult than in urban scenes <cit.> due to the lack of texture. In the current VL-MRL framework, where local features are widely used as the scene representation, scene texture is still necessary.

I2P-MRL: Tab. <ref> shows some cross-modal MRL methods on the KITTI 00 sequence <cit.>. The learning-based I2P-MRL methods <cit.> outperform the geometry-based one <cit.> by a large margin. The SOTA I2P-MRL methods achieve an impressive localization performance of 0.08 m translation error and 0.74^∘ rotation error <cit.>. However, these I2P-MRL methods are trained and tested on artificially constructed data in which the displacement between the query image and the reference point cloud is randomly sampled within a defined, often small, threshold; their performance in practical applications therefore still awaits evaluation.

HD-MRL: In Tab. <ref>, we collect the results of some HD-MRL methods from their publications. Since public HD map data is scarce in existing datasets, researchers often collect and build their own datasets to evaluate the proposed methods; for a fair and complete comparison, we also list the evaluated scene/dataset for each method. It can be seen that the tight coupling of VO and HD-MRL <cit.> improves accuracy in long-term localization. The learning-based methods effectively solve the data association problem between 2D images and HD map elements, thus outperforming traditional methods, and they can localize without requiring online VO threads. BEV-Locator <cit.> estimates the pose by regression, whereas EgoVM <cit.> relies on exhaustively searching pose candidates; the latter solution achieves better performance. It should be noted that the HD map is a very compact map representation. In MLVHM <cit.>, HD maps stored in ASCII format require only about 50 KB of storage per kilometer, compared to about 600 MB per kilometer for the raw point cloud map. This is not an isolated case: the map size of EgoVM is 0.35 MB/km compared to 5.92 MB/km for DA4AD <cit.>. Therefore, the compactness and accuracy of HD maps make them a popular choice for localization in AD.

NN-MRL: Finally, we discuss MRL with the learnt implicit maps of Sec. <ref>. According to the localization results on the medium-scale Cambridge Landmarks dataset <cit.> in Tab. <ref> and the small-scale 7Scenes dataset <cit.> in Tab. <ref>, we find that the localization accuracy of APR methods is unsatisfying; sometimes they even perform worse than VPR methods that only give an approximate pose, and they perform much worse than VL-MRL methods. When enhanced by NeRF, the performance of APR methods improves significantly and becomes comparable to that of VL-MRL methods.
Surprisingly, we find that SCR methods work well and achieve impressive localization accuracy in both medium-scale outdoor and small-scale indoor scenes; DSAC <cit.> and its extensions <cit.> even work better than VL-MRL methods in some cases. SCR may be a promising trend in future MRL research, but its heavy training cost will block its wide usage. As for NeRF, it can significantly boost APR methods <cit.> and can also work well as a pose estimator <cit.>. We are still waiting for the application of NeRF to large-scale and long-term localization tasks, since building a NeRF for unbounded scenes is much more challenging than for indoor scenes.

Summary: Finally, we compare many MRL methods with various kinds of scene map on the outdoor Cambridge Landmarks <cit.> and the indoor 7Scenes <cit.>, as listed in Tab. <ref> and Tab. <ref>. In outdoor scenes, the ranking of MRL methods with regard to localization accuracy is roughly: NeRF pose estimator <cit.> > I2P-MRL <cit.> > SCR > APR+NeRF ≈ VL-MRL > RPR > APR > VPR. In indoor scenes, the ranking changes slightly to: NeRF pose estimator <cit.> ≈ I2P-MRL <cit.> ≈ SCR ≈ APR+NeRF ≈ VL-MRL > RPR > APR > VPR. It can be seen that in small-scale indoor scenes, although APR and RPR work worse than the other MRL methods, they still achieve pleasing localization performance; given their low mapping requirements, APR and RPR can be used in applications that do not need high-precision localization, such as VR. VPR only retrieves visually similar reference images and regards the reference pose as an approximate query pose, so its localization accuracy is very limited when reference images are sparse in outdoor scenes. In indoor scenes, most MRL methods with various kinds of scene map achieve comparable localization performance, but in outdoor scenes the gaps among different methods are more significant. Using NeRF as a pose estimator as in NeRF-Loc <cit.>, I2P-MRL <cit.>, SCR, and APR+NeRF all seem to be promising directions in MRL research. Adapting NeRF-based localization to large-scale unbounded scenes, reducing the requirement of high-precision point cloud maps for I2P-MRL, and reducing the training cost of SCR should be explored to make these MRL methods more practical.

§ OPEN DISCUSSION

In this section, we discuss some open questions in the MRL field and provide our opinions. Definitive answers to the following questions are still unavailable; we only hope our personal opinions can motivate further contributions to MRL research.

§.§ Are Explicit 3D Models Necessary in Visual Localization?

Traditional MRL methods require an explicit 3D scene map, e.g., the visual landmark map in VL-MRL, the point cloud map in I2P-MRL, and the vectorized map in HD-MRL. An explicit 3D model of the scene appeared to be necessary in the early stages of MRL research. However, with recent advancements in MRL, some techniques no longer require an explicit 3D model, such as VPR, RPR, and NN-MRL. Therefore, a natural question arises: are explicit 3D models necessary for accurate visual localization? This is an extension of a widely known question: are large-scale 3D models really necessary for accurate visual localization? <cit.>. As shown in Tab. <ref> and <ref>, some 2D image-based methods without an explicit map, namely VPR and RPR, perform worse than 3D model-based methods with an explicit map such as VL-MRL and I2P-MRL. However, 2D image-based methods with a learnt implicit map, i.e., NN-MRL, can achieve comparable or even better performance.
In particular, NeRF-Loc <cit.> has been found to localize with centimeter-level accuracy, surpassing VL-MRL and I2P-MRL by a considerable margin. Thus, we believe that the learnable implicit map has the potential to be a superior alternative to the explicit map for accurate localization, and an explicit 3D map is not strictly necessary in the current deep learning era.

§.§ How to Select a Specific Localization Method?

Hundreds of MRL studies are published each year, relying on various kinds of scene maps and achieving differing levels of localization accuracy. Therefore, one may ask how to choose the appropriate MRL method for a specific autonomous task. As shown by the experimental results in Sec. <ref>, the scene map utilized by an MRL method usually determines the localization accuracy that it can achieve. 1) VPR methods estimate the current pose by retrieving historical posed images. Their localization accuracy is limited, but they only require a 2D image map and can obtain reference images with overlapping fields of view; as a result, they are often utilized as a coarse step in hierarchical localization frameworks <cit.> to acquire co-visible query-reference image pairs. 2) With slightly better localization accuracy, RPR and APR methods can be employed in applications that do not necessitate high-precision poses but call for a low-cost and lightweight scene map. 3) VL-MRL and I2P-MRL methods require an offline mapping stage to obtain a 3D scene map, i.e., a visual landmark map or a point cloud map, so their mapping cost is significantly higher than that of the previously mentioned methods. With these precise and explicit 3D scene maps, these two kinds of methods achieve outstanding localization performance under most circumstances. However, when aiming for autonomy in large-scale scenarios, these maps require significant storage space, making them difficult to deploy on resource-limited platforms such as intelligent vehicles. 4) The HD map is the lightest and most compact form of scene map, so HD-MRL methods are widely applied in autonomous driving tasks; but HD maps are limited to human-made scenes with semantic map elements and are not suitable for natural scenes. 5) Recently developed NN-MRL methods utilize DNN models to represent scenes and have demonstrated impressive success in small-scale indoor and medium-scale urban scenes. However, the efficacy of NN-MRL has yet to be evaluated in unbounded scenes: SCR methods require dense ground-truth scene coordinates, which are hard to obtain in large-scale scenes, and training a NeRF for unbounded dynamic scenes remains a formidable task, so NeRF-based MRL is currently out of reach there. In summary, a specific MRL method should be chosen based on the required localization accuracy, the supported platform, and the type of application scenario (including texture, scene size, etc.).

§.§ What's Next for Visual Localization?

Developed over several decades, MRL continues to attract growing attention from both academia and industry. We would like to discuss an interesting and instructive question: "What are the future trends of visual localization research?"

End-to-end pipeline: Robustness against environmental changes must be carefully considered by MRL methods for long-term autonomy, since the current query image often has visual conditions different from those of the reference data stored in the scene map.
Most existing solutions rely on image feature algorithms <cit.> to attain condition-invariant image representations. However, the relative performance of these features on localization does not always mirror their performance on feature matching tasks: for instance, the matching accuracy of D2-Net <cit.> is inferior to that of SuperPoint <cit.> on the HPatches benchmark <cit.>, but when it comes to localization benchmarks <cit.>, D2-Net <cit.> yields better results than SuperPoint <cit.>. This discrepancy between feature matching and localization might be attributed to the human-defined supervision of the features. Some recent approaches suggest building the whole localization pipeline in an E2E manner, incorporating feature extraction, matching, and pose estimation, and training such localization models by supervising the estimated poses <cit.>. Such pose supervision can facilitate the learning of the geometric priors involved in MRL and enhance the global consistency of feature matching. These image features are learned specifically for MRL, closing the gap between feature matching and localization, and thus become a more reasonable representation for MRL research. It is pertinent to note that HD-MRL encounters challenges similar to those of feature-based localization: since semantic instance-level matching is sparse and difficult, some E2E solutions prefer to let models extract and match high-dimensional features for both images and vectorized map elements and then estimate poses, leading to improved localization performance <cit.>. We believe E2E localization systems will be a promising trend.

Resource-friendly MRL: To achieve real-time ego-pose estimation in large-scale scenarios during long-term exploration, we must take into account the storage demands of the scene map and the running efficiency of the localization algorithms. For example, high-level autonomous vehicles in unbounded driving areas generally depend on vectorized HD maps rather than the visual landmark maps and point cloud maps commonly used in robotic applications within confined scenes <cit.>. In the field of robotic localization, several approaches are available to lighten the scene map, e.g., SceneSqueezer <cit.>, NeuMap <cit.>, and HyperMap <cit.>, which significantly decrease the map size while maintaining localization accuracy. There is no doubt that MRL will prefer a lightweight and compact scene map in the future. Aiming at high efficiency, MRL methods are commonly built in a coarse-to-fine framework where VPR is applied to coarsely estimate poses <cit.>. Recent advances in MRL also enable NN-MRL solutions to serve as the coarse step, such as APR in LATITUDE <cit.>. Additionally, improving the efficiency of key components in practical MRL pipelines is essential: ALIKE <cit.> efficiently extracts local features, while LightGlue <cit.> achieves high-speed matching. Efforts to reduce the computational cost of MRL methods should be encouraged, as they will stimulate a growing number of practical MRL solutions for resource-limited platforms.

MRL with new kinds of map: In recent years, numerous MRL methods using new types of scene map have been created. For example, MeshLoc <cit.> employs a dense mesh-based map rather than a visual landmark map and achieves superior performance. Similarly, NN-MRL methods utilize implicit neural-network-based scene maps as a substitute for traditional explicit 3D structure-based scene maps, e.g., the visual landmark map and the point cloud map.
Proposing new representation formats for the scene map is still an area that requires further exploration in MRL. The ideal scene map should be lightweight, compact, and easy to build and deploy; moreover, it ought to provide comprehensive information for autonomy, and the reference data it stores for localization should be sufficiently robust against potential changes in the actual scene. Current scene maps struggle to satisfy all of these criteria, so we believe it is necessary to develop new kinds of scene map for MRL, which could bring impressive improvements to MRL research.

Multiple sensor fusion for localization: We have to acknowledge that visual information is sensitive to environmental interference, so visual-only localization can hardly guarantee stable performance at all times. As a solution, practical applications frequently combine MRL with other sensor-based localization systems, such as IMU, GNSS, and wheel encoders, coupling the multi-sensor localization results in a loose or tight manner. For example, in TM^3Loc <cit.>, the authors tightly coupled HD-MRL with visual-inertial odometry to achieve accurate and smooth ego-pose estimation even when the map elements are insufficient to support localization. Although the multi-sensor fusion strategy has been well studied in computer vision tasks <cit.> and SLAM research <cit.> for decades, we believe it remains a promising area for exploration due to its value in practical applications.

§ CONCLUSIONS

In this survey, we formulate MRL as an interaction procedure between the query image and the scene map in which poses are estimated, and we then systematically review MRL methods based on the representation format of the utilized scene map, that is, MRL using the geo-tagged frame map, visual landmark map, point cloud map, HD map, and the more recently proposed learnt implicit map. Each kind of MRL method and its related components are fully reviewed. We also provide a review of the evaluation of MRL methods and draw some conclusions based on the evaluation results of typical algorithms. We present some open problems in this field and give our personal opinions. Finally, as a continuous contribution to the community, we list the reviewed papers and datasets on the website so that researchers can readily find the best-matched MRL method based on their interests.

§ ACKNOWLEDGMENTS

This work was supported in part by the National Natural Science Foundation of China under Grants U22A20104 and 52372414, and Beijing Municipal Science and Technology Commission (Grant No. Z221100008122011).

References

VBL2018PR N. Piasco, D. Sidibé, C. Demonceaux, V. Gouet-Brunet, https://www.sciencedirect.com/science/article/pii/S0031320317303448A survey on visual-based localization: On the benefit of heterogeneous data, Pattern Recognition 74 (2018) 90–109. https://doi.org/https://doi.org/10.1016/j.patcog.2017.09.013 doi:https://doi.org/10.1016/j.patcog.2017.09.013. <https://www.sciencedirect.com/science/article/pii/S0031320317303448>LCD2022TITS K. A. Tsintotas, L. Bampis, A. Gasteratos, The revisiting problem in simultaneous localization and mapping: A survey on visual loop closure detection, IEEE Transactions on Intelligent Transportation Systems 23 (11) (2022) 19929–19953. https://doi.org/10.1109/TITS.2022.3175656 doi:10.1109/TITS.2022.3175656.Sven2014ECCV S. Middelberg, T. Sattler, O. Untzelmann, L.
Kobbelt, Scalable 6-dof localization on mobile devices, in: Proceedings of European Conference on Computer Vision (ECCV), 2014, pp. 268–283.Jonathan2014TVCG J. Ventura, C. Arth, G. Reitmayr, D. Schmalstieg, Global localization from monocular slam on a mobile phone, IEEE Transactions on Visualization and Computer Graphics 20 (4) (2014) 531–539. https://doi.org/10.1109/TVCG.2014.27 doi:10.1109/TVCG.2014.27.Lim2015IJRR H. Lim, S. N. Sinha, M. F. Cohen, M. Uyttendaele, H. J. Kim, https://doi.org/10.1177/0278364914561101Real-time monocular image-based 6-dof localization, The International Journal of Robotics Research 34 (4-5) (2015) 476–492. http://arxiv.org/abs/https://doi.org/10.1177/0278364914561101 arXiv:https://doi.org/10.1177/0278364914561101, https://doi.org/10.1177/0278364914561101 doi:10.1177/0278364914561101. <https://doi.org/10.1177/0278364914561101>LATITUDE Z. Zhu, Y. Chen, Z. Wu, C. Hou, Y. Shi, C. Li, P. Li, H. Zhao, G. Zhou, Latitude: Robotic global localization with truncated dynamic low-pass filter in city-scale nerf, in: Proceedings of IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2023, pp. 8326–8332.LocNeRF D. Maggio, M. Abate, J. Shi, C. Mario, L. Carlone, Loc-nerf: Monte carlo localization using neural radiance fields, in: Proceedings of IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2023, pp. 4018–4025.TM3Loc T. Wen, K. Jiang, B. Wijaya, H. Li, M. Yang, D. Yang, Tm3loc: Tightly-coupled monocular map matching for high precision vehicle localization, IEEE Transactions on Intelligent Transportation Systems 23 (11) (2022) 20268–20281. https://doi.org/10.1109/TITS.2022.3176914 doi:10.1109/TITS.2022.3176914.MLVHM Z. Xiao, D. Yang, T. Wen, K. Jiang, R. Yan, https://www.mdpi.com/1424-8220/20/7/1870Monocular localization with vector hd map (mlvhm): A low-cost method for commercial ivs, Sensors 20 (7) (2020). https://doi.org/10.3390/s20071870 doi:10.3390/s20071870. <https://www.mdpi.com/1424-8220/20/7/1870>DA4AD Y. Zhou, G. Wan, S. Hou, L. Yu, G. Wang, X. Rui, S. Song, Da4ad: End-to-end deep attention-based visual localization for autonomous driving, in: Proceedings of European Conference on Computer Vision (ECCV), Springer, 2020, pp. 271–289.RoadMap T. Qin, Y. Zheng, T. Chen, Y. Chen, Q. Su, A light-weight semantic map for visual localization towards autonomous driving, in: Proceedings of IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2021, pp. 11248–11254.Sotirios2020CVC S. Diamantas, Towards resolving the kidnapped robot problem: Topological localization from crowdsourcing and georeferenced images, in: Proceedings of the Computer Vision Conference (CVC), Springer, 2020, pp. 551–563.seqslam M. J. Milford, G. F. Wyeth, SeqSLAM: Visual route-based navigation for sunny summer days and stormy winter nights, in: Proceedings of IEEE International Conference on Robotics and Automation (ICRA), Saint Paul, USA, 2012, pp. 1643–1649. https://doi.org/10.1109/ICRA.2012.6224623 doi:10.1109/ICRA.2012.6224623.vprbench M. Zaffar, S. Garg, M. Milford, J. Kooij, D. Flynn, K. McDonald-Maier, S. Ehsan, Vpr-bench: An open-source visual place recognition evaluation framework with quantifiable viewpoint and appearance change, International Journal of Computer Vision 129 (7) (2021) 2136–2174.Asha2019ICRA A. Anoosheh, T. Sattler, R. Timofte, M. Pollefeys, L. Van Gool, Night-to-day image translation for retrieval-based localization, in: Proceedings of International Conference on Robotics and Automation (ICRA), IEEE, 2019, pp. 
Criminisi, A. Fitzgibbon, Scene coordinate regression forests for camera relocalization in rgb-d images, in: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 2930–2937. https://doi.org/10.1109/CVPR.2013.377 doi:10.1109/CVPR.2013.377.Abner2014CVPR A. Guzman-Rivera, P. Kohli, B. Glocker, J. Shotton, T. Sharp, A. Fitzgibbon, S. Izadi, Multi-output learning for camera relocalization, in: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 1114–1121. https://doi.org/10.1109/CVPR.2014.146 doi:10.1109/CVPR.2014.146.Julien2015CVPR J. Valentin, M. Nießner, J. Shotton, A. Fitzgibbon, S. Izadi, P. Torr, Exploiting uncertainty in regression forests for accurate camera relocalization, in: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 4400–4408. https://doi.org/10.1109/CVPR.2015.7299069 doi:10.1109/CVPR.2015.7299069.DSAC E. Brachmann, A. Krull, S. Nowozin, J. Shotton, F. Michel, S. Gumhold, C. Rother, Dsac — differentiable ransac for camera localization, in: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2492–2500. https://doi.org/10.1109/CVPR.2017.267 doi:10.1109/CVPR.2017.267.DSAC2 E. Brachmann, C. Rother, Visual camera re-localization from rgb and rgb-d images using dsac, IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (9) (2022) 5847–5865. https://doi.org/10.1109/TPAMI.2021.3070754 doi:10.1109/TPAMI.2021.3070754.HSCNet X. Li, S. Wang, Y. Zhao, J. Verbeek, J. Kannala, Hierarchical scene coordinate classification and regression for visual localization, in: Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 11980–11989. https://doi.org/10.1109/CVPR42600.2020.01200 doi:10.1109/CVPR42600.2020.01200.few-shot-SRC S. Dong, S. Wang, Y. Zhuang, J. Kannala, M. Pollefeys, B. Chen, Visual localization via few-shot scene region classification, in: Proceedings of International Conference on 3D Vision (3DV), 2022, pp. 393–402. https://doi.org/10.1109/3DV57658.2022.00051 doi:10.1109/3DV57658.2022.00051.DSM S. Tang, C. Tang, R. Huang, S. Zhu, P. Tan, Learning camera localization via dense scene matching, in: Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 1831–1841. https://doi.org/10.1109/CVPR46437.2021.00187 doi:10.1109/CVPR46437.2021.00187.KFNet L. Zhou, Z. Luo, T. Shen, J. Zhang, M. Zhen, Y. Yao, T. Fang, L. Quan, Kfnet: Learning temporal camera relocalization using kalman filtering, in: Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 4918–4927. https://doi.org/10.1109/CVPR42600.2020.00497 doi:10.1109/CVPR42600.2020.00497.Less_is_More E. Brachmann, C. Rother, Learning less is more - 6d camera localization via 3d surface regression, in: Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 4654–4662. https://doi.org/10.1109/CVPR.2018.00489 doi:10.1109/CVPR.2018.00489.Huang2021VSNetVW Z. Huang, H. Zhou, Y. Li, B. Yang, Y. Xu, X. Zhou, H. Bao, G. Zhang, H. Li, https://api.semanticscholar.org/CorpusID:235166060Vs-net: Voting with segmentation for visual localization, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2021) 6097–6107. <https://api.semanticscholar.org/CorpusID:235166060>ESAC E. Brachmann, C. 
Rother, Expert sample consensus applied to camera re-localization, in: Proceedings of IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 7524–7533. https://doi.org/10.1109/ICCV.2019.00762 doi:10.1109/ICCV.2019.00762.Budvytis2019LargeSJ I. Budvytis, M. Teichmann, T. Vojír, R. Cipolla, https://api.semanticscholar.org/CorpusID:202630723Large scale joint semantic re-localisation and scene understanding via globally unique instance coordinate regression, ArXiv abs/1909.10239 (2019). <https://api.semanticscholar.org/CorpusID:202630723>Wang2023HSCNetHS S. Wang, Z. Laskar, I. Melekhov, X. Li, Y. Zhao, G. Tolias, J. Kannala, https://api.semanticscholar.org/CorpusID:258547156Hscnet++: Hierarchical scene coordinate classification and regression for visual localization with transformer, ArXiv abs/2305.03595 (2023). <https://api.semanticscholar.org/CorpusID:258547156>Revaud2023SACRegSC J. Revaud, Y. Cabon, R. Br'egier, J. Lee, P. Weinzaepfel, https://api.semanticscholar.org/CorpusID:260091307Sacreg: Scene-agnostic coordinate regression for visual localization, ArXiv abs/2307.11702 (2023). <https://api.semanticscholar.org/CorpusID:260091307>Bui2023D2SRL B.-T. Bui, D. T. Tran, J.-H. Lee, https://api.semanticscholar.org/CorpusID:260316363D2s: Representing local descriptors and global scene coordinates for camera relocalization, ArXiv abs/2307.15250 (2023). <https://api.semanticscholar.org/CorpusID:260316363>NeRF B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, R. Ng, Nerf: Representing scenes as neural radiance fields for view synthesis, in: A. Vedaldi, H. Bischof, T. Brox, J.-M. Frahm (Eds.), Proceedings of European Conference on Computer Vision (ECCV), Springer International Publishing, Cham, 2020, pp. 405–421.Meng2021GNeRFGN Q. Meng, A. Chen, H. Luo, M. Wu, H. Su, L. Xu, X. He, J. Yu, https://api.semanticscholar.org/CorpusID:232404358Gnerf: Gan-based neural radiance field without posed camera, Proceedings of IEEE/CVF International Conference on Computer Vision (ICCV) (2021) 6331–6341. <https://api.semanticscholar.org/CorpusID:232404358>NoPe-NeRF W. Bian, Z. Wang, K. Li, J.-W. Bian, Nope-nerf: Optimising neural radiance field with no pose prior, in: Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 4160–4169. https://doi.org/10.1109/CVPR52729.2023.00405 doi:10.1109/CVPR52729.2023.00405.liu2023nerfloc J. Liu, Q. Nie, Y. Liu, C. Wang, Nerf-loc: Visual localization with conditional neural radiance field, arXiv preprint arXiv:2304.07979 (2023).Moreau2021LENSLE A. Moreau, N. Piasco, D. Tsishkou, B. Stanciulescu, A. de La Fortelle, Lens: Localization enhanced by nerf synthesis, in: Proceedings of Conference on Robot Learning (CoRL), PMLR, 2022, pp. 1347–1356.DPN S. Chen, Z. Wang, V. Prisacariu, Direct-posenet: Absolute pose regression with photometric consistency, in: Proceedings of International Conference on 3D Vision (3DV), 2021, pp. 1175–1185. https://doi.org/10.1109/3DV53792.2021.00125 doi:10.1109/3DV53792.2021.00125.DFNet S. Chen, X. Li, Z. Wang, V. A. Prisacariu, Dfnet: Enhance absolute pose regression with direct feature matching, in: Proceedings of European Conference on Computer Vision (ECCV), Springer, 2022, pp. 1–17.NeRF-W R. Martin-Brualla, N. Radwan, M. S. M. Sajjadi, J. T. Barron, A. Dosovitskiy, D. Duckworth, Nerf in the wild: Neural radiance fields for unconstrained photo collections, in: Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 7206–7215. 
https://doi.org/10.1109/CVPR46437.2021.00713 doi:10.1109/CVPR46437.2021.00713.GardenPoint2 Z. Chen, O. Lam, A. Jacobson, M. Milford, https://api.semanticscholar.org/CorpusID:18130455Convolutional neural network-based place recognition, ArXiv abs/1411.1509 (2014). <https://api.semanticscholar.org/CorpusID:18130455>SPED Z. Chen, L. Liu, I. Sa, Z. Ge, M. Chli, Learning context flexible attention model for long-term visual place recognition, IEEE Robotics and Automation Letters 3 (4) (2018) 4015–4022. https://doi.org/10.1109/LRA.2018.2859916 doi:10.1109/LRA.2018.2859916.nordland S. Skrede, Nordlandsbanen: minute by minute, season by season, <https://nrkbeta.no/2013/01/15/nordlandsbanen-minute-by-minute-season-by-season/> (2013).pittsburgh A. Torii, J. Sivic, M. Okutomi, T. Pajdla, Visual place recognition with repetitive structures, IEEE Transactions on Pattern Analysis and Machine Intelligence 37 (11) (2015) 2346–2359. https://doi.org/10.1109/TPAMI.2015.2409868 doi:10.1109/TPAMI.2015.2409868.MSLS F. Warburg, S. Hauberg, M. López-Antequera, P. Gargallo, Y. Kuang, J. Civera, Mapillary street-level sequences: A dataset for lifelong place recognition, in: Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 2623–2632. https://doi.org/10.1109/CVPR42600.2020.00270 doi:10.1109/CVPR42600.2020.00270.RatSLAM A. J. Glover, W. P. Maddern, M. J. Milford, G. F. Wyeth, Fab-map + ratslam: Appearance-based slam for multiple times of day, in: Proceedings of IEEE International Conference on Robotics and Automation (ICRA), 2010, pp. 3507–3512. https://doi.org/10.1109/ROBOT.2010.5509547 doi:10.1109/ROBOT.2010.5509547.17places R. Sahdev, J. K. Tsotsos, Indoor place recognition system for localization of mobile robots, in: Proceedings of Conference on Computer and Robot Vision (CRV), 2016, pp. 53–60. https://doi.org/10.1109/CRV.2016.38 doi:10.1109/CRV.2016.38.ESSEX3IN1 M. Zaffar, S. Ehsan, M. Milford, K. D. McDonald-Maier, Memorable maps: A framework for re-defining places in visual place recognition, IEEE Transactions on Intelligent Transportation Systems 22 (12) (2021) 7355–7369. https://doi.org/10.1109/TITS.2020.3001228 doi:10.1109/TITS.2020.3001228.INRIA H. Jegou, M. Douze, C. Schmid, Hamming embedding and weak geometric consistency for large scale image search, in: D. Forsyth, P. Torr, A. Zisserman (Eds.), Proceedings of European Conference on Computer Vision (ECCV), Springer Berlin Heidelberg, Berlin, Heidelberg, 2008, pp. 304–317.SYNTHIA G. Ros, L. Sellart, J. Materzynska, D. Vazquez, A. M. Lopez, The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes, in: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 3234–3243. https://doi.org/10.1109/CVPR.2016.352 doi:10.1109/CVPR.2016.352.KITTI A. Geiger, P. Lenz, R. Urtasun, Are we ready for autonomous driving? the kitti vision benchmark suite, in: Proceedings of IEEE conference on computer vision and pattern recognition (CVPR), IEEE, 2012, pp. 3354–3361.KAIST J. Jeong, Y. Cho, Y.-S. Shin, H. Roh, A. Kim, Complex urban dataset with multi-level sensors from highly diverse urban environments, The International Journal of Robotics Research 38 (6) (2019) 642–657. https://doi.org/10.1177/0278364919843996 doi:10.1177/0278364919843996.newcombe2011kinectfusion R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohli, J. Shotton, S. Hodges, A. 
Fitzgibbon, https://www.microsoft.com/en-us/research/publication/kinectfusion-real-time-dense-surface-mapping-tracking/Kinectfusion: Real-time dense surface mapping and tracking, in: Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), IEEE, 2011, pp. 127–136. <https://www.microsoft.com/en-us/research/publication/kinectfusion-real-time-dense-surface-mapping-tracking/>eth_ms_visloc_2021 ETH Zurich Computer Vision Group and Microsoft Mixed Reality & AI Lab Zurich, The ETH-Microsoft Localization Dataset, <https://github.com/cvg/visloc-iccv2021> (2021).SPE-NetVLAD J. Yu, C. Zhu, J. Zhang, Q. Huang, D. Tao, Spatial pyramid-enhanced netvlad with weighted triplet loss for place recognition, IEEE Transactions on Neural Networks and Learning Systems 31 (2) (2020) 661–674. https://doi.org/10.1109/TNNLS.2019.2908982 doi:10.1109/TNNLS.2019.2908982.vlaad J. Zhang, Y. Cao, Q. Wu, https://www.sciencedirect.com/science/article/pii/S0031320321001394Vector of locally and adaptively aggregated descriptors for image feature representation, Pattern Recognition 116 (2021) 107952. https://doi.org/https://doi.org/10.1016/j.patcog.2021.107952 doi:https://doi.org/10.1016/j.patcog.2021.107952. <https://www.sciencedirect.com/science/article/pii/S0031320321001394>cosplace G. Berton, C. Masone, B. Caputo, Rethinking visual geo-localization for large-scale applications, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 4878–4888.BaiduMail X. Sun, Y. Xie, P. Luo, L. Wang, A dataset for benchmarking image-based localization, in: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 5641–5649. https://doi.org/10.1109/CVPR.2017.598 doi:10.1109/CVPR.2017.598.oxford W. Maddern, G. Pascoe, C. Linegar, P. Newman, 1 year, 1000 km: The oxford robotcar dataset, The International Journal of Robotics Research 36 (1) (2017) 3–15. https://doi.org/10.1177/0278364916679498 doi:10.1177/0278364916679498.GSVCities A. Ali-bey, B. Chaib-draa, P. Giguère, https://www.sciencedirect.com/science/article/pii/S0925231222012188Gsv-cities: Toward appropriate supervised visual place recognition, Neurocomputing 513 (2022) 194–203. https://doi.org/https://doi.org/10.1016/j.neucom.2022.09.127 doi:https://doi.org/10.1016/j.neucom.2022.09.127. <https://www.sciencedirect.com/science/article/pii/S0925231222012188>resnet K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778. https://doi.org/10.1109/CVPR.2016.90 doi:10.1109/CVPR.2016.90.Kazmi2019TRO S. M. A. M. Kazmi, B. Mertsching, Detecting the expectancy of a place using nearby context for appearance-based mapping, IEEE Transactions on Robotics 35 (6) (2019) 1352–1366. https://doi.org/10.1109/TRO.2019.2926475 doi:10.1109/TRO.2019.2926475.lcd-pl J. Han, R. Dong, J. Kan, A novel loop closure detection method with the combination of points and lines based on information entropy, Journal of Field Robotics 38 (3) (2021) 386–401. https://doi.org/https://doi.org/10.1002/rob.21992 doi:https://doi.org/10.1002/rob.21992.fild S. An, G. Che, F. Zhou, X. Liu, X. Ma, Y. Chen, https://doi.org/10.1109/IROS40897.2019.8968043Fast and incremental loop closure detection using proximity graphs, in: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE Press, 2019, p. 378–385. 
https://doi.org/10.1109/IROS40897.2019.8968043 doi:10.1109/IROS40897.2019.8968043. <https://doi.org/10.1109/IROS40897.2019.8968043>liu B. Liu, F. Tang, Y. Fu, Y. Yang, Y. Wu, A flexible and efficient loop closure detection based on motion knowledge, in: Proceedings of IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 11241–11247. https://doi.org/10.1109/ICRA48506.2021.9561126 doi:10.1109/ICRA48506.2021.9561126.Bampis2018IJRR L. Bampis, A. Amanatiadis, A. Gasteratos, https://doi.org/10.1177/0278364917740639Fast loop-closure detection using visual-word-vectors from image sequences, The International Journal of Robotics Research 37 (1) (2018) 62–82. https://doi.org/10.1177/0278364917740639 doi:10.1177/0278364917740639. <https://doi.org/10.1177/0278364917740639>CAPS Q. Wang, X. Zhou, B. Hariharan, N. Snavely, Learning feature descriptors using camera pose supervision, in: A. Vedaldi, H. Bischof, T. Brox, J.-M. Frahm (Eds.), Proceedings of European Conference on Computer Vision (ECCV), Springer International Publishing, Cham, 2020, pp. 757–774.S2DNet H. Germain, G. Bourmaud, V. Lepetit, S2dnet: Learning image features for accurate sparse-to-dense matching, in: A. Vedaldi, H. Bischof, T. Brox, J.-M. Frahm (Eds.), Proceedings of European Conference on Computer Vision (ECCV), Springer International Publishing, Cham, 2020, pp. 626–643.patch2pix Q. Zhou, T. Sattler, L. Leal-Taixé, Patch2pix: Epipolar-guided pixel-level correspondences, in: Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 4667–4676. https://doi.org/10.1109/CVPR46437.2021.00464 doi:10.1109/CVPR46437.2021.00464.Choi2019Access M. J. Choi, J. K. Suhr, K. Choi, H. G. Jung, Low-cost precise vehicle localization using lane endpoints and road signs for highway situations, IEEE Access 7 (2019) 149846–149856. https://doi.org/10.1109/ACCESS.2019.2947287 doi:10.1109/ACCESS.2019.2947287.gaidon2016virtual A. Gaidon, Q. Wang, Y. Cabon, E. Vig, Virtual worlds as proxy for multi-object tracking analysis, in: Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 4340–4349.Zhang2023TIV Z. Zhang, J. Zhao, C. Huang, L. Li, Learning visual semantic map-matching for loosely multi-sensor fusion localization of autonomous vehicles, IEEE Transactions on Intelligent Vehicles 8 (1) (2023) 358–367. https://doi.org/10.1109/TIV.2022.3173662 doi:10.1109/TIV.2022.3173662.nuscenes2019 H. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan, O. Beijbom, nuscenes: A multimodal dataset for autonomous driving, in: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), 2020, pp. 11621–11631.Sattler2019UnderstandingTL T. Sattler, Q. Zhou, M. Pollefeys, L. Leal-Taixé, https://api.semanticscholar.org/CorpusID:81979654Understanding the limitations of cnn-based absolute camera pose regression, Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) 3297–3307. <https://api.semanticscholar.org/CorpusID:81979654>SVS-Pose T. Naseer, W. Burgard, Deep regression for monocular camera-based 6-dof global localization in outdoor environments, in: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017, pp. 1525–1530. https://doi.org/10.1109/IROS.2017.8205957 doi:10.1109/IROS.2017.8205957.Cai2018AHP M. Cai, C. Shen, I. D. 
Reid, https://api.semanticscholar.org/CorpusID:52286190A hybrid probabilistic model for camera relocalization, in: Proceedings of British Machine Vision Conference (BMVC), 2018. <https://api.semanticscholar.org/CorpusID:52286190>Shavit2020DoWR Y. Shavit, R. Ferens, https://api.semanticscholar.org/CorpusID:229349364Do we really need scene-specific pose encoders?, 2020 25th International Conference on Pattern Recognition (ICPR) (2020) 3186–3192. <https://api.semanticscholar.org/CorpusID:229349364>MS-Trans1 Y. Shavit, R. Ferens, Y. Keller, Learning multi-scene absolute pose regression with transformers, in: Proceedings of IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 2713–2722. https://doi.org/10.1109/ICCV48922.2021.00273 doi:10.1109/ICCV48922.2021.00273.CamNet M. Ding, Z. Wang, J. Sun, J. Shi, P. Luo, Camnet: Coarse-to-fine retrieval for camera re-localization, in: Proceedings of IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 2871–2880. https://doi.org/10.1109/ICCV.2019.00296 doi:10.1109/ICCV.2019.00296.3dmodel A. Torii, H. Taira, J. Sivic, M. Pollefeys, M. Okutomi, T. Pajdla, T. Sattler, Are large-scale 3d models really necessary for accurate visual localization?, IEEE Transactions on Pattern Analysis and Machine Intelligence 43 (3) (2021) 814–829. https://doi.org/10.1109/TPAMI.2019.2941876 doi:10.1109/TPAMI.2019.2941876.roadsidemapping T. Wen, K. Jiang, J. Miao, B. Wijaya, P. Jia, M. Yang, D. Yang, Roadside hd map object reconstruction using monocular camera, IEEE Robotics and Automation Letters 7 (3) (2022) 7722–7729. https://doi.org/10.1109/LRA.2022.3185367 doi:10.1109/LRA.2022.3185367.SGFNet Y. Wang, K. Jiang, T. Wen, X. Jiao, B. Wijaya, J. Miao, Y. Shi, Z. Fu, M. Yang, D. Yang, Sgfnet: Segmentation guided fusion network for 3d object detection, IEEE Robotics and Automation Letters 8 (12) (2023) 8239–8246. https://doi.org/10.1109/LRA.2023.3326697 doi:10.1109/LRA.2023.3326697. | http://arxiv.org/abs/2311.15643v2 | {
"authors": [
"Jinyu Miao",
"Kun Jiang",
"Tuopu Wen",
"Yunlong Wang",
"Peijing Jia",
"Xuhe Zhao",
"Qian Cheng",
"Zhongyang Xiao",
"Jin Huang",
"Zhihua Zhong",
"Diange Yang"
],
"categories": [
"cs.RO"
],
"primary_category": "cs.RO",
"published": "20231127091352",
"title": "A Survey on Monocular Re-Localization: From the Perspective of Scene Map Representation"
} |
Instruction-following language models demand robust methodologies for information retrieval to augment instructions for question-answering applications. A primary challenge is the resolution of coreferences in the context of chunking strategies for long documents. A critical barrier to experimenting with coreference handling is the lack of open-source datasets, specifically for question-answering tasks that require coreference resolution. In this work we present our Coreference Resolution in Question-Answering (CRaQAn) dataset, an open-source dataset that caters to the nuanced information retrieval requirements of coreference resolution in question-answering tasks by providing over 250 question-answer pairs containing coreferences. To build this dataset, we developed a novel approach for creating high-quality datasets using an instruction-following model (GPT-4) and a Recursive Criticism and Improvement loop.

§ INTRODUCTION

Information retrieval (IR) is a fundamental component in many applications of instruction-following models, providing ground-truth text from a corpus of documents. Long documents pose an issue for embedding-based information retrieval because state-of-the-art embedding models have a limited context window length <cit.>. A common practice is to store embedded chunks of a long document in a vector database, which then enables targeted information retrieval <cit.>. Chunking refers to the process of dividing a long document into smaller, more manageable pieces, or chunks.

However, naive chunking strategies for long documents may inadvertently split a coreference sequence, altering the semantic context of the individual chunks <cit.>. Coreference sequences are instances in a text where different expressions refer to the same entity, such as a person or object. For instance, consider the sentence "Mary planted a tree in her backyard because she loves nature". In this case, "Mary" and "she" are coreferent. If a chunking strategy splits this sentence into two chunks - "Mary planted a tree in her backyard" and "because she loves nature" - the reference to "she" in the second chunk becomes unclear without the context provided by the first chunk. Even more challenging is the scenario where coreferences span several paragraphs or even pages <cit.>. This greatly amplifies the difficulty of maintaining semantic integrity when chunking, as spatially distant coreferences can easily be disrupted by common chunking strategies, leading to the loss of critical contextual information.

We posit that every information retrieval task requires a balance between the preservation of long-range coreferences and chunk size, where the latter may be limited by the embedding model architecture <cit.> or by diluted contextual meaning <cit.>. Our motivation for this work is to generate a dataset that necessitates coreference resolution across both contiguous and widely separated sentences for accurate question-answering (QA). Such a dataset can be used to test various chunking strategies for information retrieval in a QA pipeline. Manual creation of such a dataset would be labor-intensive, time-consuming, and subject to significant human error through crowdsourcing.
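Before we turn to our approach, the following minimal sketch makes the chunking failure mode above concrete. It is an illustration added here rather than code from the CRaQAn pipeline, and the chunk size is chosen only so that the split reproduces the two chunks discussed earlier.

def chunk_by_characters(text: str, chunk_size: int) -> list[str]:
    # Naive chunker: split purely by character count, ignoring sentence
    # boundaries and coreference chains entirely.
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

passage = "Mary planted a tree in her backyard because she loves nature."
for i, chunk in enumerate(chunk_by_characters(passage, chunk_size=36)):
    print(i, repr(chunk))
# 0 'Mary planted a tree in her backyard '
# 1 'because she loves nature.'
# Chunk 1 on its own cannot resolve "she", so a retriever that returns only
# chunk 1 can no longer support a question such as "Who loves nature?".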
To alleviate these concerns, we propose and demonstrate an approach to automated dataset creation leveraging the advanced capabilities of GPT-4, a state-of-the-art instruction-following model developed by OpenAI <cit.> and a Recursive Criticism and Improvement loop (RCI) <cit.>.In this paper, we introduce the Coreference Resolution in Question-Answering (CRaQAn) open-source dataset, alongside a scalable methodology for automated dataset creation that leverages instruction-following models. These contributions not only provide a tool for enhancing the robustness and effectiveness of QA systems but also establish a new approach for accelerating and refining dataset generation in the broader natural language processing research community.§ RELATED WORK§.§ Coreference Resolution and Question-Answering Datasets There are many open-source datasets individually focused on coreference resolution <cit.> or QA of passages <cit.>. However, to our knowledge, there exists only one open-source dataset (Quoref, from Dasigi et al., <cit.>) that contains question-answer pairs that require coreference resolution within a single document to answer. However, Quoref’s reliance on crowdsourcing has its limitations, including inconsistencies in the quality and relevance of the coreference resolution required by the QA pairs <cit.>. Furthermore, there is no requirement for coreferences to exist across sentences, meaning certain chunking strategies, such as by sentence, cannot be assessed using the Quoref dataset. HotpotQA is another popular open-source dataset for coreference resolution and question-answering. HotpotQA relies on multiple-document hopping for its coreferences <cit.>, which does not allow for the assessment of single-document chunking strategies. §.§ Automatic Question Generation Automated dataset creation methodologies have been increasingly explored to overcome the limitations of manual and crowdsourced data collection. Automatic question generation (AQG) is one subset of this space that has grown in popularity. A number of traditional approaches to AQG exist. Utilization of language models, and specifically transformers, for QA generation is becoming increasingly common <cit.>. However, these methods rely on previously labeled datasets and are not trained to provide complex question-answer pairs, such as the ones desired in this study for coreferences <cit.>. Using instruction-following models like large language models (LLMs) is a relatively new strategy that researchers are beginning to explore for more complex AQG tasks <cit.>. §.§ Recursive Criticism and Improvement Loop Kim et al. coined the term "RCI" for their prompting scheme approach in which a model Recursively Criticizes and Improves its output <cit.>. We build on this work by applying it to dataset creation and combining it with other techniques such as memetic proxies <cit.>, few-shot prompting <cit.>, Chain-of-Thought <cit.>, and Show-Your-Work reasoning <cit.>. Our approach leverages the iterative feedback process to refine and improve the quality of the generated data.§ METHODS §.§ Automated Approach for Natural Language Dataset Generation with Instruction-following Models and a Recursive Criticism and Improvement Loop Our primary objective is to develop a framework for automated generation of high-quality natural language datasets consisting of question-answer pairs relating to a passage using an instruction-following model, GPT-4. 
To achieve this, we establish a set of logical rules for the dataset and utilize a GENERATOR to suggest candidate dataset entries. The GENERATOR is designed to respond iteratively to feedback from a REVIEWER panel, ensuring a continuous improvement in the quality of generated entries. We developed a set of comprehensive guidelines used in both prompting and instructions for human reviewers. Development of these guidelines was an iterative process wherein feedback from human reviewers and domain experts was incorporated into prompts to improve robustness of the automated GENERATOR and REVIEWERS.§.§.§ Generator Prompt The GENERATOR prompt is responsible for producing candidate dataset entries. A well-crafted GENERATOR prompt enhances the efficiency and quality of automated dataset generation.It needs a clear task definition,where GPT-4 is provided with unambiguous, precise guidelines that enable the generation of relevant text. This involves a succinct task description accompanied by a list of instructions that leaves no room for misinterpretation. The GENERATOR prompt should also utilize a memetic proxy, a concept backed by research suggesting that the portrayal of the GENERATOR as an expert in the targeted domain can enhance the quality of the responses <cit.>. It's also beneficial to use few-shot prompting, giving the model high-quality output examples to aid task comprehension <cit.>. Finally, the prompt should be feedback-responsive, adjusting to reviewer panel input for data refinement. A default temperature parameter of 0.7 has been found effective for initial generation and feedback response. §.§.§ Reviewer Panel Prompts The REVIEWERS are designed to ensure high-quality, contextually accurate data entries. This process is based on Recursive Criticism and Improvement (RCI) <cit.>. The REVIEWERS are an ensemble of prompts, each specialized in adhering to logical guidelines initially set for the dataset. A REVIEWER should respond with their rejection or acceptance of the candidate from the GENERATOR, as well as their reasoning. Each REVIEWER can be considered an individual critic, similar to the system proposed by Gou et al <cit.>. We developed the following REVIEWER best practices: Panel Formation To create a robust feedback system, we recommend distributing the rules among multiple REVIEWER personas, with each one specializing in a subset of rules. The distribution of responsibilities does not need to be mutually exclusive. Allowing overlap in the rules may increase the reliability of the feedback system, because the same rule will be evaluated from different perspectives among the REVIEWER personas. This persona system is also driven by the idea of memetic proxy <cit.>. Reviewer Role Once the GENERATOR produces a candidate QA pair, the candidate is forwarded to the REVIEWERS. Each of the REVIEWERS evaluates the candidate based on its assigned rules. If a REVIEWER identifies any rule breaches, it generates feedback indicating the specific issues and suggestions for improvement. Since we want the REVIEWERS to be rigid rule-followers, a lower temperature of 0.3 is appropriate. Feedback Loop The feedback from each REVIEWER is sent back to the GENERATOR for iterative improvement. The GENERATOR uses insights from the feedback, aiming to resolve the identified issues while maintaining the validity and context of the entry. This revised entry is again forwarded to the reviewer panel, initiating another round of feedback or acceptance. 
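The orchestration of this loop can be summarized in the following sketch. The function names, argument structure, and helper callables are illustrative placeholders rather than our exact implementation; the temperatures mirror the defaults stated above, and the iteration cap of five rounds anticipates the consensus rule described next.

MAX_ROUNDS = 5  # the panel must reach consensus within five feedback iterations

def generate_qa_with_rci(segmented_text, generator, reviewers):
    # generator: callable that returns a candidate dict with "question",
    #            "answer", and "required_sentence_indices", given the
    #            segmented text and any prior feedback (temperature 0.7).
    # reviewers: callables returning {"reason": str, "is_quality": bool}
    #            (temperature 0.3).
    feedback = []
    for _ in range(MAX_ROUNDS):
        candidate = generator(segmented_text, feedback)
        verdicts = [review(segmented_text, candidate) for review in reviewers]
        if all(v["is_quality"] for v in verdicts):
            return candidate  # accepted by the full reviewer panel
        feedback = [v["reason"] for v in verdicts if not v["is_quality"]]
    return None  # no consensus reached: skip this text and move on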
If the review panel does not reach a consensus to accept after a maximum of five feedback loop iterations, no QA pair is stored for that text, and the application proceeds to the next text. Chain-of-Thought Reasoning A REVIEWER is requested to provide its reasoning before deciding if a candidate is passing. This utilizes Chain-of-Thought <cit.> and Show-Your-Work reasoning <cit.>.The feedback prompts for the REVIEWERS were crafted ensuring they provided clear, actionable suggestions for the GENERATOR. To achieve this, the prompts were designed to indicate not only the issues but also possible solutions or improvements. In summary, coordination between the GENERATOR and REVIEWERS forms the backbone of our automated dataset generation method. §.§.§ Human Quality Review Human review remains an essential step in dataset generation. Human reviewers serve as a final quality check, assessing candidate entries generated by the model for final acceptance or rejection. This process is much faster than manual generation as reviewers merely need to evaluate pre-generated entries. Their feedback also helps refine the RCI process by highlighting the strengths and weaknesses of the model's iterations. §.§ Application of Approach to Coreference Resolution Dataset Creation We aim to generate a dataset, Coreference Resolution and Question Answering (CRaQAn), that requires coreference resolution across sentences in a passage for accurate QA to assess chunking strategies for information retrieval. Recognizing the importance and complexity of coreference resolution, we believe our method can significantly contribute to this field by creating a high-quality dataset quickly and at scale. Complete documentation of prompts for the sections below can be found in the Appendix.§.§.§ Guidelines for CRaQAn We developed the following guidelines for our CRaQAn dataset: 1) Create an aligned question and answer from the provided text focusing on pronominal, nominal, and anaphoric coreferences across sentences. The complexity of the coreference can range from basic to moderate. 2) Refrain from including complex elements like cataphoric coreferences, appositive coreferences, and zero anaphora. 3) Include in your response those sentence indices that are necessary to understand the question and provide an accurate answer. 4) Exclude from your response those sentence indices which are not essential in understanding the question and the correct answer. 5) Respond appropriately to the feedback from the REVIEWER. Please note, some guidelines have been edited to improve readability.§.§.§ Text Data Curation for CRaQAn We curated our text corpus from Wikipedia articles on modern U.S. laws, selected for their complexity and rich coreference relationships. We converted 100 selected articles to Markdown and split them by their sections. We then selected the summary section at the top of the article as well as randomly selected sections from the body of the article. The sections were split into sentences using gpt-3.5-turbo, a cost-effective alternative to GPT-4 for this relatively simple task. In our experience, gpt-3.5-turbo is a more reliable sentence splitter than the Natural Language Toolkit (NLTK) in Python. The resulting text corpus consisted of 578 sections from Wikipedia articles, split into indexed sentences.§.§.§ Approach for Generator and Reviewer Panel Following our above described best practices and guidelines, we developed a GENERATOR prompt and 4 REVIEWER prompts. 
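For concreteness, a single reviewer call might be wired up as in the sketch below. The prompt constant and function name are placeholders, and the v1-style openai Python client is used purely for illustration; only the reviewer temperature of 0.3 and the response schema with "reason" and "is_quality" come from the description above and the appendix.

import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

REVIEWER_PROMPT = "..."  # placeholder; the four persona prompts are documented in the appendix

def review_candidate(persona_prompt: str, candidate: dict) -> dict:
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0.3,  # reviewers are run at a low temperature
        messages=[
            {"role": "system", "content": persona_prompt},
            {"role": "user", "content": json.dumps(candidate)},
        ],
    )
    # Reviewers are instructed to respond in JSON with "reason" and "is_quality";
    # a production version would guard against malformed JSON here.
    return json.loads(response.choices[0].message.content)

Each of the four personas described in the next paragraph would receive its own prompt, and the resulting verdicts feed the feedback loop sketched earlier.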
The GENERATOR is written to accept split text sections and create coreference–dependent question-answer pairs from them. The REVIEWER prompts are each specialized in different aspects of our dataset, with some overlap in the prompts themselves, including a: 1) Content Cohesion Reviewer, 2) Information Accuracy Reviewer, 3) Linguistic Quality Reviewer, and 4) Required Sentence Reviewer. Personas were chosen to reflect the guidelines we developed for the coreference dataset curation. The REVIEWER prompts respond to the GENERATOR question-answer pairs with feedback.§.§.§ Methods for Human Review Each question-answer sample in the dataset was reviewed by a minimum of two human reviewers who were responsible for rejecting low quality QA pairs. The human reviewers were given the same guidelines as the GENERATOR and REVIEWERS to assess quality. Reviewing the CRaQAn dataset took our human reviewers approximately 2 minutes on average to evaluate each QA pair. § CRAQAN DATASET The initial release of CRaQAn contains 261 human reviewed question-answer samples. Table 1 illustrates characteristics of the dataset. The yield from our automated generation was approximately 60.2%, where at least 2 individual reviewers accepted 348 out of 578 QA pairs. The most common reasons for rejection were inclusion of irrelevant sentences (n = 47), exclusion of required sentences (n = 43), and formatting errors (n = 36). Table 2 highlights all human reviewer versus CRaQAn candidate disagreements. 87 out of 348 of the accepted QA pairs were identified as duplicates of the same Wikipedia section and were subsequently dropped, leaving 261 QA pairs in our initial release.§ DISCUSSION Our work presents a practical approach to automated dataset generation, an area of growing interest in ML research. Leveraging GPT-4 and a Recursive Criticism and Improvement (RCI) loop, we created CRaQAn, a distinctive dataset that caters to the nuanced information retrieval requirements of coreference resolution in QA tasks. Most existing datasets have either focused on QA or coreference resolution individually. By integrating both, CRaQAn represents a significant contribution in the field, providing a valuable resource for researchers and practitioners in natural language processing aiming to tackle complex information retrieval tasks. By making this dataset and code available on Hugging Face, we hope to contribute to the ongoing research in this domain.Our method led to the generation of a diverse set of coreference resolution scenarios, many of which were complex and nuanced, stretching beyond our initial expectations and guidelines. This highlights the potential richness of automated dataset creation, where instruction-following language models like GPT-4 can generate a plethora of unique and challenging real-world examples.However, it's important to recognize the limitations of our approach. The requirement for human review, the necessity to craft effective prompts, and the costs associated with generation are among the challenges that need to be addressed. Without scaling this method, the initial CRaQAn release of 261 QA pairs will be reasonably limited to testing and evaluation. Future work will seek to refine and scale this process, striving for better efficiency and cost-effectiveness in automated dataset generation.Additionally, we recognize the inherent limitation that questions and answers generated by GPT-4 are only reflective of the types of questions that GPT-4 comprehends. 
While beneficial for particular applications, it may not serve as an unbiased benchmark for comparison across different LLMs or against human performance. This is because GPT-4 may not encapsulate the range of questions that other LLMs find challenging or the types of questions humans would naturally ask. We highlight this point as essential for interpreting results using the CRaQAn dataset and managing expectations of its utility.

The CRaQAn dataset could be enhanced in several ways in the future. One way is by expanding the dataset from single Wikipedia sections to whole articles or even full-length books. This would enable the dataset to tackle more intricate coreference problems, making it more representative of real-world information retrieval tasks. Another potential enhancement is to incorporate more challenging types of coreference, such as zero anaphora or cataphora. This would add to the complexity and usefulness of the dataset. Lastly, introducing per-phrase coreference labeling to the dataset could be beneficial. This would allow for more detailed tasks of granular resolution, thereby facilitating a deeper understanding of the relationships within the text.

§ DATASET AVAILABILITY

The CRaQAn dataset, along with the code used for its generation, is publicly available on the Hugging Face platform to facilitate open research and collaboration: <https://huggingface.co/datasets/Edge-Pyxos/CRaQAn_v1>. The dataset is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) License, which allows for the free distribution, modification, and use of the dataset, provided appropriate credit is given through citation of this paper. Researchers interested in exploring coreference resolution in QA tasks are encouraged to use this dataset, and we welcome any contributions to its improvement and expansion.

§ GENERATOR PROMPT

As a PhD holder in Computational Linguistics and Natural Language Processing (NLP) with a focus on Information Extraction, your task is to aid in the creation of a dataset based on coreference resolution for question-answers. This mainly concerns the development of clear and relevant question-answer pairs from a given segmented_text, which may contain coreferential links within sentences. The indices of segmented_text are the order of the sentences in an original document. Follow these rules:
1. Create an aligned question and answer from the segmented_text focusing on pronominal, nominal, and anaphoric coreferences across sentences. The complexity of the coreference can range from basic to moderate.
2. Refrain from including complex elements like cataphoric coreferences, appositive coreferences, and zero anaphora.
3. Include in the field "required_sentence_indices" those sentence indices that are necessary to understand the question and provide an accurate answer.
4. Exclude from the field "required_sentence_indices" those sentence indices which are not essential in understanding the question and the correct answer.
5. Respond appropriately to the feedback from the REVIEWER, usually by creating a new question, answer, and required_sentence_indices. At times, modifications on existing inputs may be enough.
6.
Ensure that the "required_sentence_indices" field includes either 2 or 3 sentences.Your completed task should be in JSON format:"question": <question>, "answer": <answer>, "required_sentence_indices": <required_sentence_indices> Example 1, which is a great response and follows all of the above rules:SEGMENTED_TEXT: ["index": 0, "sentence": "Albert Einstein was a theoretical physicist who developed the theory of relativity.", "index": 1, "sentence": "His work is also known for its influence on the philosophy of science.", "index": 2, "sentence": "He won the 1921 Nobel Prize in Physics.", "index": 3, "sentence": "Einstein, considered one of the most important figures in the history of science, was awarded the prize for his services to theoretical physics and especially for his discovery of the law of the photoelectric effect."]YOU: "question": "For what discovery did Albert Einstein win the Nobel Prize in Physics?", "answer": "The law of the photoelectric effect.", "required_sentence_indices": [0, 2, 3]REVIEWER: Great job! Your response fills all of our criteria.Example 2, which is a great response and follows all of the above rules:SEGMENTED_TEXT: ["index": 0, "sentence": "Samantha is a talented painter.", "index": 1, "sentence": "She has won numerous awards for her work.", "index": 2, "sentence": "The artist often uses bright colors in her pieces.", "index": 3, "sentence": "Despite her young age, she enjoys respect and admiration from older artists."]YOU: "question": "What does the artist Samantha often use in her pieces?", "answer": "Bright colors.", "required_sentence_indices": [0, 2]REVIEWER: Great job! Your response fills all of our criteria.Example 3, which is a great response and follows all of the above rules:SEGMENTED_TEXT: ["index": 0, "sentence": "Rainforests are places with enormous diversity among species.", "index": 1, "sentence": "Amazon rainforest is the world's largest tropical rainforest.", "index": 2, "sentence": "It covers an area of five and half a million square kilometers.", "index": 3, "sentence": "The Amazon is home to an astounding number of plant species, some of which are not found anywhere else in the world.", "index": 4, "sentence": "This forest is also a habitat for many animal species."]YOU: "question": "How large is the area that the Amazon rainforest covers?", "answer": "Five and half a million square kilometers.", "required_sentence_indices": [1, 2]REVIEWER: Great job! Your response fills all of our criteria.Example 4, which is an initially bad response made better by the REVIEWER:SEGMENTED_TEXT: ["index": 0, "sentence": "The Affordable Care Act (ACA), formally known as the Patient Protection and Affordable Care Act and colloquially known as Obamacare, was signed into law by President Barack Obama on March 23, 2010.", "index": 1, "sentence": "Together with the Health Care and Education Reconciliation Act of 2010 amendment, it represents the U.S. healthcare system's most significant regulatory overhaul and expansion of coverage since the enactment of Medicare and Medicaid in 1965.","index": 2, "sentence": "The ACA's major provisions came into force in 2014."]YOU: "question": "When did the ACA's major provisions come into force?", "answer": "2014.", "required_sentence_indices": [0, 2]REVIEWER: Your question does not require a coreference resolution between sentences to answer and only requires sentence index 2 to answer. 
Please revise your question.YOU: "question": "When did the Affordable Care Act's major provisions come into force?", "answer": "2014.", "required_sentence_indices": [0, 2]REVIEWER: Great job! Your response fills all of our criteria.Now it's your turn: SEGMENTED_TEXT: *PLACEHOLDER*YOU: § REVIEWER PROMPT: CONTENT COHESION REVIEWER As a Context and Cohesion Reviewer, your chief task is to ensure that there is total consistency and adherence to contextual information and solid cohesion amongst all components. All entities must be able to not only stand alone but also integrate seamlessly into the dataset, which includes the segmented text, required_sentence_indices, and the question & answer pair.Operational Directives:1. Verify that the question and answer pair depend ONLY on the information in the sentences of the segmented text that are indicated by the required_sentence_indices. 2. If there are any pronouns or references in the question, ensure they have clear antecedents in the sentences provided as indicated by the required_sentence_indices. 3. Verify that the question does not introduce or imply any context that is not explicitly stated in the sentences referred to by required_sentence_indices. 4. Confirm that all required_sentence_indices have been utilized in the usage of the question and formation of the answer. 5. To mark an instance as "quality", ensure that all these directives are fulfilled. If any of these directives fall short, mark the instance as "not quality".Please respond in the following JSON format "reason": <reason_for_quality>, "is_quality": <true/false>Here is an excellent example where "is_quality" should be marked as false:INPUT:"segmented_text": ["index": 0, "sentence": "Steve creates web designs.", "index": 1, "sentence": "His clients say they are impressed.", "index": 2, "sentence": "He works in the Silicon Valley."], "question": "Why are Steve's clients impressed?", "answer": "Because of his web designs.", "required_sentence_indices": [1, 2] YOU:"reason": "The question assumes information ('Steve creates web designs.') that is not provided in the sentences indicated by required_sentence_indices([1, 2])", "is_quality": falseHere is an excellent example where "is_quality" should be marked as true:INPUT:"segmented_text": ["index": 0, "sentence": "The 'Titanic' sank on its maiden voyage.", "index": 1, "sentence": "It hit an iceberg and began to sink.", "index": 2, "sentence": "The ship went down on April 15, 1912."], "question": "What happened on April 15, 1912?", "answer": "The 'Titanic' sank.", "required_sentence_indices": [0, 2] YOU:"reason": "All operational directives are followed.", "is_quality": trueNow it's your turn: INPUT: *PLACEHOLDER*YOU: § REVIEWER PROMPT: INFORMATION ACCURACY REVIEWER As an Information Accuracy Reviewer, your chief task is to ensure the precision and factual correctness of the information presented in the given segmented text, question, and answer. You specialize in checking the accuracy of the relevant information, particularly critical contextual details such as dates, names, and places. Your tasks include verifying the listed sentences' accuracy by analyzing the content in relation to the indices mentioned. By doing so, you validate if the answer is both concise and correct in response to the question. You should ensure that the details mentioned in the segmented text, question, and answer align perfectly, without any discrepancies. 
It's your responsibility to check that the question and answer pair revolve around the information present only in the sentences pointed out by the required_sentence_indices.Operational directives:1. Evaluate the segmented_text, question, and answer for factual accuracy. 2. Assess if the answer is concise and correctly addresses the asked question. 3. Validate that there are no discrepancies in the critical details such as dates, names, and places across the segmented_text, question, and answer. 4. Confirm that the question and answer pair utilize the information from the sentences mentioned by required_sentence_indices and that no additional details outside those sentences are present in the question or answer. 5. Ensure that the question does not assume any details or context that are not present in the sentences indicated by the required_sentence_indices. 6. To mark an instance as "quality", ensure that all these directives are fulfilled. If any of these directives fall short, mark the instance as "not quality".Please respond in the following JSON format "reason": <reason_for_quality>, "is_quality": <true/false>Here is an excellent example where "is_quality" should be marked as false:INPUT: "segmented_text": ["index": 0, "sentence": "Steve Jobs co-founded Apple Inc. with Steve Wozniak in 1976.", "index": 1, "sentence": "Jobs also became the majority shareholder of Pixar in 1986."], "question": "Who were the co-founders of Apple Inc. and what animation company did Jobs become a majority shareholder of?", "answer": "Steve Jobs and Steve Wozniak co-founded Apple Inc. Jobs became the majority shareholder of Walt Disney Animation Studios.", "required_sentence_indices": [0, 1] YOU: "reason": "The answer includes information that is not present in the sentences indicated by required_sentence_indices. Jobs became the majority shareholder of Pixar, not Walt Disney Animation Studios.", "is_quality": falseHere is an excellent example where "is_quality" should be marked as true:INPUT: "segmented_text": ["index": 0, "sentence": "Thomas Edison was an inventor who developed many devices.", "index": 1, "sentence": "Among his greatest innovations was the practical electric light bulb."], "question": "What is one of Thomas Edison's greatest innovations?", "answer": "The practical electric light bulb.", "required_sentence_indices": [0, 1] YOU: "reason": "All operational directives are followed.", "is_quality": trueNow it's your turn: INPUT: *PLACEHOLDER*YOU:§ REVIEWER PROMPT: LINGUISTIC QUALITY REVIEWER As a Linguistic Quality Reviewer, your chief task is to ensure that the linguistic aspects of the dataset example are of high-quality and meet all the set guidelines. Your role is vital in guaranteeing the clarity, grammaticality, and coherence of the segmented text, question, and answer.You will focus on the structure and content of the question, ensuring that it is phrased clearly and concisely, and doesn't join multiple queries into one, using conjunctions. You will review the answer to verify it's unambiguous, pertinent, and doesn’t entail any unnecessary details that could potentially confuse the reader or student.Correctness of grammar and syntax used, punctuation accuracy, appropriate usage of language and vocabulary are all within your responsibility. In the case of any detected linguistic errors or cases of confusing text, you will need to report these issues, providing a valid reason.Operational directives:1. Review the question for clearness and conciseness. 
The question should pose a single issue; split queries joined by conjunctions should be flagged. 2. Assess the accuracy of the answer. It must be terse and provide a straightforward response to the question. 3. Check for linguistic quality. The language should be fluent and grammatically correct, with no instances of ambiguity, slang, or jargon. Any language inconsistencies should be noted and described. 4. Evaluate the overall coherence between the segmented text with required_sentence_indices, question, and answer. They should all be logically and linguistically consistent. 5. Review the question for clearness, conciseness, and complete reliance on the context provided by the sentences as indicated by the required_sentence_indices. 6. To mark an instance as "quality", ensure that all these directives are fulfilled. If any of these directives fall short, mark the instance as "not quality".Please respond in the following JSON format "reason": <reason_for_quality>, "is_quality": <true/false>Here is an excellent example where "is_quality" should be marked as true:INPUT: "segmented_text": ["index": 0, "sentence": "Jane Austen’s Pride and Prejudice was published in 1813.", "index": 1, "sentence": "The novel has since become one of the most famous works in English literature"], "question": "When was Jane Austen's Pride and Prejudice published?", "answer": "1813.", "required_sentence_indices": [0]YOU: "reason": "All operational directives are followed.", "is_quality": trueHere is an excellent example where "is_quality" should be marked as false:INPUT: "segmented_text": ["index": 0, "sentence": "In 1912, RMS Titanic sank in the North Atlantic Ocean.", "index": 1, "sentence": "The exact number of passengers is unknown, but estimates put it at over 2,200 people"], "question": "When did the Titanic sink and how many people were on board?", "answer": "1912 and over 2,200 people.", "required_sentence_indices": [0,1]YOU: "reason": "The question combines two queries into one using a conjunction.", "is_quality": falseNow it's your turn: INPUT: *PLACEHOLDER*YOU:§ REVIEWER PROMPT: REQUIRED SENTENCE REVIEWER As a Required Sentence Reviewer, your task is to review question & answer pairs that have been generated from a passage of text to ensure that the required sentences are actually required. You must categorize the generated questions & answers as either "quality" or "not quality" and explain your reasoning.Criteria for marking an instance as "quality":1. The question and answer depend ONLY on the information in the sentences of the segmented text that are indicated by the required_sentence_indices. There is no critical information in the passage which was not marked as required. Importantly, sentences which are required for pronoun disambiguation and coreference resolution must also be marked as required. 2. ALL of the sentences indicated by the required_sentence_indices are actually required for answering the question. 
There are no irrelevant sentences included in required_sentence_indices.Here is an excellent example where "is_quality" should be true:INPUT:"segmented_text": ["index": 0, "sentence": "The 'Titanic' sank on its maiden voyage.", "index": 1, "sentence": "It hit an iceberg and began to sink.", "index": 2, "sentence": "The ship went down on April 15, 1912."], "question": "What happened on April 15, 1912?", "answer": "The 'Titanic' sank.", "required_sentence_indices": [0, 2] YOU:"reason": "Sentence 2 mentions "The ship", but without the additional context from sentence 0, we could not be certain which ship was being talked about. With both sentence 0 and 2, we have all the information that we need to answer the question and no critical information is missing. That means criteria #1 has been met. In addition, no unnecessary sentences were marked as required, so criteria #2 has been met as well.", "is_quality": true Here is an excellent example where "is_quality" should be marked as false because some of the criteria are not met:INPUT:"segmented_text": ["index": 0, "sentence": "Steve creates web designs.", "index": 1, "sentence": "His clients say they are impressed.", "index": 2, "sentence": "He works in the Silicon Valley."], "question": "Why are Steve's clients impressed?", "answer": "Because of his web designs.", "required_sentence_indices": [1, 2] YOU:"reason": "The question asks about "Steve's clients". Sentence 1 and 2 use pronouns but don't mention "Steve" by name. Sentence 0 is required in order to disambiguate the pronouns "He" and "His" in the later sentences. Sentence 0 should have been marked as required but was not.","is_quality": false Here is an excellent example where "is_quality" should be marked as false because some of the criteria are not met:INPUT: "segmented_text": ["index": 0, "sentence": "The Amazon rainforest, also called Amazon jungle or Amazonia, is a moist broadleaf tropical rainforest in the Amazon biome that covers most of the Amazon basin of South America.", "index": 1, "sentence": "More than 56YOU:"reason": "The question is about the dust that fertilizes the Amazon rainforest. Sentence 1 contains all the information needed to answer the question and it does not contain any references which need to be disambiguated by preceding sentences. Sentence 0 was marked but is not actually required to answer the question.","is_quality": false Now it's your turn:INPUT: *PLACEHOLDER*YOU: | http://arxiv.org/abs/2311.16338v1 | {
"authors": [
"Rob Grzywinski",
"Joshua D'Arcy",
"Rob Naidoff",
"Ashish Shukla",
"Alex Browne",
"Ren Gibbons",
"Brinnae Bent"
],
"categories": [
"cs.CL",
"cs.AI"
],
"primary_category": "cs.CL",
"published": "20231127215450",
"title": "Releasing the CRaQAn (Coreference Resolution in Question-Answering): An open-source dataset and dataset creation methodology using instruction-following models"
} |
[1] Abhilash Reddy Malipeddi (corresponding author), [2,3] C. Alberto Figueroa, [1,4] Jesse Capecelatro. [1] Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI 48109, USA. [2] Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI 48109, USA. [3] Department of Surgery, University of Michigan, Ann Arbor, MI 48109, USA. [4] Department of Aerospace Engineering, University of Michigan, Ann Arbor, MI 48109, USA.

Volume filtered FEM-DEM framework for simulating particle-laden flows in complex geometries

We present a computational framework for modeling large-scale particle-laden flows in complex domains with the goal of enabling simulations in medical-image-derived, patient-specific geometries. The framework is based on a volume-filtered Eulerian-Lagrangian strategy that uses a finite element method (FEM) to solve for the fluid phase coupled with a discrete element method (DEM) for the particle phase, with varying levels of coupling between the phases. The fluid phase is solved on a three-dimensional unstructured grid using a stabilized FEM. The particle phase is modeled as rigid spheres whose motion is calculated according to Newton's second law for translation and rotation. We propose an efficient and conservative particle-fluid coupling scheme compatible with the FEM basis that enables convergence under grid refinement of the two-way coupling terms. Efficient algorithms for neighbor detection for particle-particle and particle-wall collisions are adopted. The method is applied to a few different test cases and the results are analyzed qualitatively. The results demonstrate the capabilities of the implementation and the potential of the method for simulating large-scale particle-laden flows in complex geometries.

§ INTRODUCTION
Fluid mechanics plays a crucial role in many physiological processes in health and disease. Given recent advances in medical imaging, computational power, and mathematical algorithms, real-time patient-specific computational fluid dynamics is now becoming possible. We present a computational framework capable of capturing fluid-particle interactions in complex medical-image-derived geometries that are much larger than the size of the particles and with a large number of particles. A four-way coupled volume-filtered Euler-Lagrange solver with a stabilized finite-element-based incompressible flow solver is developed.

Most of the development around particle-laden flows has been in the context of industrial applications such as gas-solid flows in fluidized beds using structured grids. In contrast, the application areas of interest here are biological flows, which often involve complex organic geometry. Finite element method based incompressible flow solvers have been gaining popularity since the development of stabilization schemes. They are attractive for their nice mathematical properties and their ability to naturally deal with complex geometries. FEM based solvers for Euler-Lagrange type particle-laden flows are an active area of research. <cit.> have developed a stabilized FEM based incompressible solver for particle-laden and porous flows that was restricted to structured grids. Recently, <cit.> have developed a FEM based particle-laden flow solver for gas-particle flows in engineering capable of handling unstructured hexahedral meshes for industrial applications.
<cit.> have implemented and demonstrated a massively parallel particle-laden flow solver in the code, which uses a finite volume discretization for the fluid phase. While some "simple" complex boundaries can be represented on structured grids, through immersed boundary methods for example, this is in general not very efficient, and the fidelity of the representation is limited by the grid size. The presence of large empty regions in the domain can lead to excessive memory usage and computational cost due to unused grid points. Unstructured grids are better because they capture the complex geometry more accurately in comparison, but they are more challenging to work with. The development of efficient algorithms for particle tracking on unstructured grids is an active area of research.

In this work we present the development of a versatile and massively parallel framework for studying biological particles in subject-specific geometries by combining (i) a recently developed statistical hydrodynamic model for fluid-particle flows; (ii) scalable Eulerian-Lagrangian algorithms; and (iii) state-of-the-art parallel techniques for simulating biological flows in image-based geometrical models. There are a few salient aspects that set the current work apart. The inter-phase coupling procedure developed here can be applied to a wide range of particle and mesh sizes at once; it is shown to be highly efficient, consistent and convergent. The particle-wall collision algorithm is more efficient and avoids preprocessing steps. First, we briefly describe the mathematical model, and in particular the volume-filtering procedure as applied to the particle-fluid system. Then we describe algorithmic developments that enable scalable tracking of a large number of particles within unstructured grids. Next we describe the collision processing for inter-particle and particle-wall collisions, which can efficiently deal with complex non-convex boundaries. Finally, we describe a novel two-stage particle-fluid interphase-coupling transfer function that is efficient, conservative and convergent on unstructured grids.

§ VOLUME-FILTERED EULER-LAGRANGE EQUATIONS
The volume-filtered Euler-Lagrange approach to modeling particle-laden flows is described in this section. The general idea is that the particle phase is treated as being composed of Lagrangian particles with properties such as diameter, density, velocity and potentially other scalar quantities like temperature or species concentration. The effect of the fluid on the particles is modeled using a so-called drag law, whereas the particle-particle collisions are captured explicitly. This approach offers a good compromise between expensive particle-resolved simulation methods and two-fluid/Eulerian-Eulerian methods that rely heavily on sub-grid models to account for both particle-particle and particle-fluid interactions. The equations for the fluid and particle phases and their coupling are described next.

§.§ Fluid-phase equations
The equations for the fluid phase are obtained by applying a spatial filtering operator to the incompressible Navier-Stokes equations, taking into account the volume occupied by the particles and the momentum exchange between the particles and the fluid phase. The detailed derivation of the equations can be found in <cit.>. <Ref> illustrates the filtering procedure on a system of particles suspended in a fluid.
The resulting continuity equation after filtering reads

∂ϕ_f/∂t + ∂/∂x_i(ϕ_f u_i) = 0

and the momentum equation is

ρϕ_f ∂u_i/∂t + ρϕ_f u_j ∂u_i/∂x_j = -ϕ_f ∂p/∂x_i + ϕ_f ∂τ_ij/∂x_j + ϕ_f ρ g_i + f^p_i

where τ_ij = μ(u_i,j + u_j,i), μ is the fluid viscosity, ρ is the fluid density, g_i is the body force, f^p_i is the momentum feedback force from the particles acting on the fluid, and ϕ_f is the volume fraction of the fluid phase. The model used to calculate the force on the particles due to the fluid is described next.

§.§ Particle equations
The particles are modeled as rigid spheres. Their motion is calculated according to Newton's second law for translation and rotation. For spherical particles, due to symmetry, tracking the absolute orientation is not strictly necessary; the rotational velocity is required to calculate the hydrodynamic torque based on the difference between the particle and fluid rotational velocities. The translational motion of particle p is governed by

m_p dv_i/dt = f_i^h + ∑_a f^col_i,p←a + (ρ_p - ρ_f) V_p g_i

where m_p is the mass of the particle, V_p is the volume of the particle, and f_i^h is the hydrodynamic force exerted on the particle by the fluid (elaborated further below). f^col_i,p←a is the force on the particle due to collisions with other particles (and walls); within the summation, p←a denotes the collision of particle p with particle a. Gravity is included as a body force term through the acceleration due to gravity g_i. Since we do not include the resolved fluid stress force in the particle momentum equation, we include a buoyancy force that takes into account the difference in density between the particle (ρ_p) and the fluid (ρ_f). In the biofluids application area ρ_p/ρ_f ∼ 𝒪(1). If the resolved fluid stress force were included in the particle momentum equation, we would instead need to add the total gravitational force on the particle, ρ_p V_p g_i, to the right-hand side of the particle momentum equation.

The angular momentum conservation equation for the pth particle is given by

I dω_i/dt = τ^d_i + ∑_a (d_p/2) ϵ_ijk n_j f^tcol_k,p←a

where I is the moment of inertia and ω_i is the angular velocity of the particle. The first term on the right-hand side is the hydrodynamic torque due to drag, and the second term is the summation of the torques due to the tangential component of the frictional collision forces. The torque acting on a small isolated sphere rotating in a quiescent viscous fluid is given by

τ^d_i = πμ d^3_p(ω^f_i - ω^p_i)

where ω^f_i is the angular velocity of the fluid interpolated to the particle center and ω^p_i is the angular velocity of the particle. The hydrodynamic force can be written as a superposition of forces arising from distinct mechanisms,

f_i^h = f_i^d + f_i^a + f_i^b + f_i^l + f_i^m

where f_i^d is the quasi-steady drag force, f_i^a is the added mass force due to the acceleration of the fluid around the particle, f_i^b is the Basset history force that accounts for viscous effects due to unsteady motion of the particle, f_i^l is the Saffman lift force due to the pressure distribution that develops on the surface of a particle in a velocity field with a non-zero gradient, and f_i^m is the Magnus lift force due to the rotation of the particle. In general, this linear superposition of individually identified hydrodynamic forces on the particle is not well-founded, but it is invariably used in the literature. It can be shown to hold in the low and high Reynolds number limits.
In the absence of better alternatives, we do the same and superpose the individually identified hydrodynamic forces on the particle. For the application areas of interest, the Basset force and the Magnus force are not expected to be significant and are not included in the simulations. In some Euler-Lagrange formulations in the literature, an additional fluid stress force (resolved stress term) is added to the right-hand side. The particular Euler-Lagrange formulation used here accounts for the resolved fluid force in the fluid momentum equation, and hence this term is not included in the particle momentum equation. See <cit.> for a detailed discussion of the different formulations and their equivalence. The added mass force is given by

f_i^a = (1/2) ρ V_p ( Du_i/Dt - dv_i/dt )

where D/Dt is the material derivative following the fluid, d/dt is the time derivative in the particle frame of reference, and v_i is the velocity of the particle. There are alternative forms in the literature that account for the finite size of the particle by adding a term called Faxen's correction, which has not been included here. The Saffman lift force is calculated according to the expression given by <cit.>. The equations are integrated using a second-order accurate scheme. The fluid solver is fully implicit and as a result can take large time steps; if the fluid time step is larger than the particle's stable time step, the particle equations are sub-cycled in time to avoid stability issues.

§.§.§ Quasi-steady drag force
The quasi-steady drag force is calculated using the relation provided by <cit.> for freely evolving suspensions of particles. The correlation was developed from particle-resolved direct numerical simulations conducted for a range of Reynolds numbers, volume fractions, and density ratios. Most of the prior work on developing drag relations assumed particles to be much denser than the fluid phase, which is typically the case in gas-solid flows relevant to many industrial applications such as fluidized bed reactors. Consequently, most of the particle-resolved simulations consider fixed particle packings in which the particles are not allowed to move. For the application being considered here, i.e., biological flows, the difference in density between the particles and the fluid is not very large, and the assumption of fixed particle packing might reduce the accuracy of the drag relations. For ρ_p/ρ_f < 10 the formula for the drag force is

f^d_i(ϕ, Re_m) = f^St_i (1 + 0.15 Re_m^0.687) (78.96 ϕ_p^3 - 18.63 ϕ_p^2 + 9.845 ϕ_p + 1)

where f^St_i = 3πμ d_p(u_i^f - v_i) is the Stokes drag force and ϕ_p is the volume fraction of the particles. The modified Reynolds number is defined as

Re_m = ρ_f d_p ϕ_f |u_i^f - v_i| / μ

§ NUMERICS
§.§ Fluid discretization
The fluid-phase equations are solved using the finite element method. The filtered equations are written in their weak form and discretized using finite elements. The mass and momentum conservation equations can be rearranged into a convenient form and written respectively as

u_i,i + (1/ϕ_f)(ϕ_f,t + u_j ϕ_f,j) = 0,
ℒ_i = ρ u_i,t + ρ u_j u_i,j + p_,i - τ_ij,j - f^p_i/ϕ_f = 0_i.

The weak form of the above equations is obtained by multiplying by test functions and integrating over the domain,

∫_Ω { q u_i,i + (q/ϕ_f)(ϕ_f,t + u_j ϕ_f,j) + w_i ( ρ u_i,t + ρ u_j u_i,j + p_,i - τ_ij,j - (1/ϕ_f) f^p_i ) } dΩ = 0,

where q is the test function for the divergence constraint and w_i is the test function for the momentum equation. This is the standard (meaning unstabilized) Galerkin weak formulation.
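Returning briefly to the quasi-steady drag correlation given above, the following minimal Python sketch evaluates that closure for a single particle. The NumPy-based function, its name, and its argument list are illustrative assumptions for this sketch and are not part of the solver described here:

```python
import numpy as np

def quasi_steady_drag(u_f, v_p, d_p, mu, rho_f, phi_f, phi_p):
    """Quasi-steady drag force on one particle (valid for rho_p/rho_f < 10).

    f^d = f^St * (1 + 0.15 Re_m^0.687)
              * (78.96 phi_p^3 - 18.63 phi_p^2 + 9.845 phi_p + 1),
    with f^St = 3 pi mu d_p (u_f - v_p) and
    Re_m = rho_f d_p phi_f |u_f - v_p| / mu.
    """
    rel = np.asarray(u_f) - np.asarray(v_p)        # slip velocity u_i^f - v_i
    f_stokes = 3.0 * np.pi * mu * d_p * rel        # Stokes drag vector
    re_m = rho_f * d_p * phi_f * np.linalg.norm(rel) / mu
    re_corr = 1.0 + 0.15 * re_m**0.687             # finite-Re correction
    phi_corr = 78.96 * phi_p**3 - 18.63 * phi_p**2 + 9.845 * phi_p + 1.0
    return f_stokes * re_corr * phi_corr
```

In a full simulation this force would be evaluated per particle from fluid quantities interpolated to the particle center and added to the other hydrodynamic contributions before the (possibly sub-cycled) particle time integration.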
We refer to the terms in the weak form above as Galerkin terms (to separate them from the stabilization terms that we add below). This formulation is not stable in general. There are two issues to overcome: one is the appearance of spurious upstream disturbances at high Re, and the other is spurious pressure modes for certain combinations of velocity-pressure elements, related to the LBB or inf-sup condition. In particular, using the same type of element for velocity and pressure does not work, even though there is some attraction to using the same element type for both. To remedy the high-Re problem and to remove restrictions on the choice of solution function spaces, we add stabilization terms to the formulation. Let us denote the Galerkin left-hand side above by B_G(w_i, q; u_i, p). With the stabilization terms added, the formulation is

B(w_i, q; u_i, p) = 0,
B(w_i, q; u_i, p) = B_G(w_i, q; u_i, p) + ∑_e=1^n_el ∫_Ω̅_e { τ_M (u_j w_i,j + q_,i/ρ) ℒ_i + τ_C w_i,i u_j,j } dΩ̅_e + ∑_e=1^n_el ∫_Ω̅_e { w_i ρ Δu_j u_i,j + τ̅ {ρ Δu_j/τ_M} w_i,j {ρ Δu_k/τ_M} u_i,k } dΩ̅_e.

The details of the stabilization scheme and the complete details of the numerical approach can be found in <cit.>. In brief, consistent terms are added to the weak form to stabilize the system; in the continuous limit, the stabilization terms vanish and we are left with a consistent set of equations. The non-linear fluid equations are solved implicitly in a monolithic fashion using a Newton-Raphson method. The time integration scheme used is a second-order generalized-α method, suitably modified for the system of differential algebraic equations that arises from the discretization of the fluid equations. We use linear tetrahedral elements for all variables. The implementation is done within the framework of <cit.>, which specializes in the simulation of blood flow in the vasculature. It has advanced capabilities to read in medical images and allows the user to create a corresponding geometry and mesh and to apply complex, physiologically accurate dynamic boundary conditions. The underlying flow solver is a general-purpose finite element solver, PHASTA <cit.>, which solves the incompressible Navier-Stokes equations on unstructured grids. The solver is parallelized using MPI and has been demonstrated to scale to 100,000 cores <cit.>.

§.§ Particle discretization
The particle-related computations are described in this section. First we describe how the particles are tracked through the domain as they are carried by the flow; then we describe how the collisions are detected and processed.

§.§.§ Particle tracking
Tracking particles in an unstructured grid is a challenging task. The main difficulty is that the elements are not arranged in a regular fashion, so we cannot use a simple indexing scheme to identify the element that contains a particle. During the course of the simulation, as the particles are advected by the flow, they move from one element to another. After each particle has been advected, we identify its new host element by executing a Delaunay search with the last known element as the starting point. For the applications of interest, the time step is small enough that the particles do not move very far and the Delaunay search works well. When the particles are initialized, as a preprocessing step we identify and record the starting element for each particle to enable the quick Delaunay search.
The second stage is to localize the particle within the element. The first stage is implemented using a cell-list method, and the second stage using a Delaunay search. In prior work, <cit.> used a cell-list method for particle tracking. The idea is to overlay a Cartesian grid on top of the unstructured mesh and to associate each Cartesian cell with the unstructured mesh elements that intersect it. First, we initialize an integer array for the background grid, storing just one value per cell; this can be a hash map if the domain has a lot of empty space and memory usage is a concern. Say that the side of a cell is approximately twice as large as a characteristic length of the largest element in the unstructured mesh. Then, we loop over the elements and, for each element, find the indices of the cell that contains the first vertex of this element. If this cell has not already been assigned, we assign this element id to this cell. At this point one might think that all relevant cells have been assigned a value: if we drew the assigned cells overlaid on the mesh, we would like to see that the Cartesian cells cover the entire region. But it is very likely that there are cells which did not have the tested vertex (or any vertex) inside them, so there will be "holes" if the Cartesian cells are too small. Additionally, near the boundaries (particularly curved or inclined ones) it is possible that parts of an element are not covered by any assigned Cartesian cell. If the Cartesian cells are not too small (i.e., large enough that each cell contains at least one whole element), then the holes do not appear. To deal with the potential lack of coverage of boundary elements, the following is done. We loop over the Cartesian cells and for each cell check whether its immediate neighbors are assigned; if not, we make a note of the indices, and the value that will be assigned to such an adjacent cell is the value of the current Cartesian cell. This needs to be done in two stages, identification and assignment, because otherwise we would not know whether a cell was assigned originally or during this step. The effect is that we pad the assigned Cartesian cells with one layer of cells, which ensures full coverage of boundary elements.

§.§.§ Collision processing
This section describes the algorithms adopted for scalable calculation of particle-particle and particle-wall collisions. It is well known that a naive implementation of nearest neighbor search for particle-particle collisions scales as 𝒪(N_p^2). Similarly, a naive implementation of particle-wall collisions scales as 𝒪(N_p N_w), where N_w is the number of wall elements. This is a non-starter even for a moderate number of particles. To enable simulation of 𝒪(10^9) particles we need to avoid the quadratic and multiplicative scaling of the collision detection algorithms. Such algorithms for both particle-particle and particle-wall collisions are described below.

Particle-particle collisions. The flow solver uses domain decomposition to solve the equations in parallel. Working within this framework, we store each particle on the process that owns the element containing the particle center. A simple example of a meshed domain that has been decomposed into three partitions is shown in <ref>. Let us consider the partition highlighted in blue.
To capture all the collisions of the particles in this partition, the process that owns this partition needs to know about particles on neighboring processes that lie near the shared partition boundary, within some cut-off distance. This distance depends on the size of the particles; as we use a soft-sphere model for collisions, the cut-off distance should be at least the diameter of the largest particle. If there were guarantees about the relative size of the particles with respect to the mesh size, the elements in the cut-off region could be counted based on their graph distance, i.e., we could determine this halo zone in terms of layers of cells. But in general such assumptions are not possible and a general approach is called for. Here, we identify the elements in the cut-off region by calculating the distance field from the partition boundary to the interior of the partition. Then, the particles in the cut-off region are identified by interpolating the distance field to the particle centers. These are then communicated to the corresponding neighboring process. This is illustrated in <ref>, where the elements highlighted in green are the elements that contain particles that can potentially collide with particles in the blue partition.

Once all the relevant particles are available on a process, the remaining work is processor-local and does not depend on parallelization details. Using a cell list, we divide the bounding box containing the particles into Cartesian cubes. The size of the cube is based on the largest particle diameter. We loop over the particles and assign them to the cell that contains the particle center. Once this is done for all particles, to identify collisions for a given particle we only need to check the particles in the cell that contains the particle and the neighboring cells (1 + 26 = 27 cells in 3D). This is illustrated in <ref> in two dimensions: to compute all possible collisions for a particle in the green cell, we only need to check the particles in the green cell and the neighboring blue-outlined cells. This is a significant reduction in the number of particles that need to be checked for collisions. The scaling of the algorithm with the number of particles is shown in <ref>. The choice of the size of the cell-list box is important; for optimal results, <cit.> has shown that the number of cells should be approximately equal to the number of particles.

Since we are dealing with complex, tortuous geometries, the distribution of the particles within the bounding box will not be uniform. This can lead to excessive memory usage if we allocate a 3D array of lists to store the particles in each cell. To avoid this issue, we use a locality-sensitive hashing scheme to store the particles in an unordered key-value store: the key is a hash of the cell indices and the value is a list of particles. This is implemented using the unordered map data structure in C++, also known as a hash map. With this method, if a cell is empty, we do not need to allocate any memory for it, which is a significant advantage over the 3D array-of-lists approach. The particular hash function used is the 3D Morton index, or Z-order curve. This is a space-filling curve that reversibly maps a 3D point to a 1D index, such that points that are close in 3D space are mapped to indices that are close in 1D space. This is a desirable property for the hash function because it means that particles that are close in 3D space will be mapped to keys that are close in 1D space.
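A minimal Python sketch of such a Morton-keyed cell list is given below. The dictionary-based container, the function names, and the assumption of non-negative 10-bit cell indices (origin at the minimum corner of the bounding box) are illustrative choices for this sketch, not details of the C++ implementation described above:

```python
from collections import defaultdict

def part1by2(n):
    # Spread the lower 10 bits of n so that two zero bits separate each bit.
    n &= 0x000003FF
    n = (n ^ (n << 16)) & 0xFF0000FF
    n = (n ^ (n << 8)) & 0x0300F00F
    n = (n ^ (n << 4)) & 0x030C30C3
    n = (n ^ (n << 2)) & 0x09249249
    return n

def morton3d(i, j, k):
    # Interleave the bits of (i, j, k) into a single Z-order (Morton) key.
    return part1by2(i) | (part1by2(j) << 1) | (part1by2(k) << 2)

def build_cell_list(positions, origin, h):
    # h: cell edge length, chosen from the largest particle diameter.
    cells = defaultdict(list)
    for p, x in enumerate(positions):
        i, j, k = (int((x[d] - origin[d]) // h) for d in range(3))
        cells[morton3d(i, j, k)].append(p)
    return cells

def neighbour_candidates(cells, x, origin, h):
    # Collision candidates: particles in the host cell and its 26 neighbours.
    i, j, k = (int((x[d] - origin[d]) // h) for d in range(3))
    out = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            for dk in (-1, 0, 1):
                out.extend(cells.get(morton3d(i + di, j + dj, k + dk), []))
    return out
```

In a production code the same idea maps directly onto a C++ hash map keyed by the Morton index, as described above, so that empty cells consume no memory.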
Such locality-preserving keys also lead to a more uniform distribution of the particles in the key-value store. In addition, since this is a reversible hash, knowing the key allows us to recover the 3D cell index for any particle. The scaling of this collision detection algorithm with the number of particles is shown in <ref>, going up to a million particles; we obtain the optimal linear scaling with the number of particles.

Particle-wall collisions. The collision detection can be accelerated if we check boundary collisions only for particles that are near the boundaries and ignore the particles in the interior. To enable quick identification of near-wall particles, we initialize, in the interior of the domain, a distance field due to the boundary of the domain. At any point in the interior, the distance field gives the shortest distance to the nearest boundary; this is illustrated in <ref> for a 2D aorta geometry. Near-wall particles are identified by interpolating the distance field to the particle center using the FEM interpolation functions and checking whether the distance is less than about 1.2 times the particle radius (including a factor of safety). Once the near-boundary particles have been flagged, we use another cell-list-like structure and assign the boundary triangles to the cells. This reduces the number of boundary features that need to be checked to only those that are near the particle and skips triangles, edges and vertices that are far away. Then, for each flagged particle we identify its cell, test for collision with the boundary faces, edges and vertices in the neighboring cells, and calculate the collision force.

In general, a particle near the domain boundary could collide with a boundary feature such as a face, an edge, a vertex, or any combination of them. Robust and unambiguous processing of collisions is made possible by using the concept of Voronoi regions <cit.>: the near-boundary region is divided into regions based on which boundary feature is nearest to the points in that region, as illustrated in <ref>. Particles near the boundary are assigned to the region corresponding to the feature that is nearest to the particle center, and a collision force is calculated only if the particle lies in the Voronoi region of the boundary feature it is intersecting. The algorithm adopted here first checks for face collisions. These are always valid and are registered immediately. For any faces that are registered, we void the edges and vertices; that is, we ignore any intersection of this particle with any edge or vertex that belongs to a previously registered face. Then, we do a pass over the edges, register edge collisions, and void the vertices of the registered edges. Finally, we do a pass over the vertices and register vertex collisions. The approach presented here deals correctly with all possible collision scenarios regardless of the local curvature characteristics of the boundary surface. It is a mathematically exact and exhaustive procedure without any user-specified tolerance that might require tuning. In the literature, the Voronoi-region idea has been used before, but it required an expensive preprocessing step to actually build and store the Voronoi information.
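The geometric kernel behind this face/edge/vertex classification is the standard closest-point-on-triangle test. The Python sketch below (the NumPy representation of the vertices and the returned feature label are illustrative assumptions) returns both the closest point and the Voronoi feature region that contains the query point:

```python
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    """Closest point to p on triangle (a, b, c) and the nearest feature type."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = np.dot(ab, ap), np.dot(ac, ap)
    if d1 <= 0.0 and d2 <= 0.0:
        return a, 'vertex'                          # vertex region of a
    bp = p - b
    d3, d4 = np.dot(ab, bp), np.dot(ac, bp)
    if d3 >= 0.0 and d4 <= d3:
        return b, 'vertex'                          # vertex region of b
    vc = d1 * d4 - d3 * d2
    if vc <= 0.0 and d1 >= 0.0 and d3 <= 0.0:
        v = d1 / (d1 - d3)
        return a + v * ab, 'edge'                   # edge region of ab
    cp = p - c
    d5, d6 = np.dot(ab, cp), np.dot(ac, cp)
    if d6 >= 0.0 and d5 <= d6:
        return c, 'vertex'                          # vertex region of c
    vb = d5 * d2 - d1 * d6
    if vb <= 0.0 and d2 >= 0.0 and d6 <= 0.0:
        w = d2 / (d2 - d6)
        return a + w * ac, 'edge'                   # edge region of ac
    va = d3 * d6 - d5 * d4
    if va <= 0.0 and (d4 - d3) >= 0.0 and (d5 - d6) >= 0.0:
        w = (d4 - d3) / ((d4 - d3) + (d5 - d6))
        return b + w * (c - b), 'edge'              # edge region of bc
    denom = 1.0 / (va + vb + vc)
    v, w = vb * denom, vc * denom
    return a + v * ab + w * ac, 'face'              # interior (face) region
```

A flagged near-wall particle would run this test against the triangles gathered from its neighboring cells; a 'face' result is registered immediately, while edge and vertex results are handled in the subsequent passes with the voiding rules described in the text.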
The algorithm adopted in this work implicitly implements the Voronoi-region idea without requiring such a preprocessing step, and it has improved efficiency while retaining the robustness and unambiguous nature of the original Voronoi-region approach. To calculate the actual collision force, once the collision normal and penetration depth are known, we treat the wall as a particle with infinite mass and calculate the collision force as if it were a particle-particle collision, using the same soft-sphere model.

§ PARTICLE-FLUID COUPLING
The coupling between particles and fluid is a crucial aspect of Euler-Lagrange simulations. Depending on the specific characteristics of the particle-laden flow being simulated, different levels of physical interaction between the particle and fluid phases can be incorporated in the simulation. In one-way coupling, the fluid exerts a drag force on the particles, but the particles do not affect the fluid; this is typically appropriate for dilute flows where the particle volume fraction is small and the particles have minimal impact on the fluid flow. Two-way coupling involves interactions in both directions, where the particles affect the fluid and the fluid affects the particles; this is relevant for dense flows where the particles significantly influence the fluid flow and vice versa. Four-way coupling takes into account not only the interactions between particles and the fluid, but also inter-particle collisions, and is suitable for dense flows with strong particle-particle and particle-wall collisions. Within two-way and four-way coupling, it is possible to ignore the excluded-volume effect of the particles if the particle volume fraction is small, while still considering the momentum exchange between the particles and the fluid.

In one-way coupled simulations, we need to compute the fluid velocity, vorticity, fluid stresses, etc., at the particle centers to be able to calculate all the hydrodynamic forces on the particles and integrate the particle equations. For higher levels of coupling, in addition to this Eulerian-to-Lagrangian transfer, we also need Eulerian representations of particle-centered quantities, mainly the volume fraction field and the feedback force. Ideally, transfers in both directions would be done consistently using the same filtering kernel function, by identifying the support of the kernel and integrating over it. But this is not practical due to the 𝒪(N_p N_sn) complexity involved, where N_sn is the number of nodes in the support of the kernel function. The time complexity further hides the fact that identifying the support nodes in an unstructured grid is a non-trivial task in itself. In the literature, a variety of ad hoc methods have been used to project particle-centered quantities onto the Eulerian mesh, but these lack accuracy or consistency, or are grid-dependent. These properties become important when dealing with particle sizes about equal to or greater than the local mesh size, and their absence can severely compromise the accuracy of the simulation. In order to sidestep the complexity issues, various alternative methods of transferring information between the particle and fluid phases have been proposed in the literature. There are different considerations for the Eulerian-to-Lagrangian and Lagrangian-to-Eulerian transfers. There are no conservation or boundedness issues when interpolating from the Eulerian mesh to the particles.
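Returning briefly to the soft-sphere collision force mentioned earlier in this section, a minimal spring-dashpot sketch is given below. The linear force law, the coefficient names k_n and gamma_n, and the zero-radius treatment of wall contacts are illustrative assumptions, since the specific soft-sphere closure and its parameters are not detailed here:

```python
import numpy as np

def soft_sphere_normal_force(x_p, x_q, r_p, r_q, v_p, v_q, k_n, gamma_n):
    """Illustrative linear spring-dashpot normal force of body q acting on particle p.

    A wall contact reuses the same routine with the wall treated as a particle
    of infinite mass: x_q is the closest point on the boundary feature,
    r_q = 0, and v_q is the (zero) wall velocity.
    """
    d = np.asarray(x_p) - np.asarray(x_q)
    dist = np.linalg.norm(d)
    overlap = (r_p + r_q) - dist            # penetration depth
    if dist == 0.0 or overlap <= 0.0:
        return np.zeros(3)                  # no contact
    n = d / dist                            # collision normal, q -> p
    v_rel_n = np.dot(np.asarray(v_p) - np.asarray(v_q), n)
    return (k_n * overlap - gamma_n * v_rel_n) * n
```

The same routine covers particle-wall contacts by passing the closest boundary point as x_q with r_q = 0, consistent with treating the wall as an infinite-mass particle.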
Interpolation from the Eulerian mesh to the particle locations is straightforward, and one can devise efficient schemes to do it; the only consideration is the accuracy of the interpolation.

§.§ Particle-to-fluid transfer
The Lagrangian-to-Eulerian transfer needs more care. For example, a particle-in-cell type approach would assign the whole volume of the particle to the Eulerian cell that contains it, and if we project using a mesh-size-dependent kernel, we end up with a mesh-dependent result that can give rise to spurious and unbounded values. The volume fraction field in particular needs to lie in [0,1] to be physically meaningful. The random close packing limit for monodisperse spheres is ∼0.64, so in reality, in a non-jammed particle-laden flow, the particle volume fraction will lie in [0, 0.64]. In the literature, a variety of methods have been used to project particle-centered quantities onto the Eulerian mesh, but these lack accuracy or consistency, or are grid-dependent. For instance, <cit.> use a particle-in-cell type approach where the volume occupied by a particle is assigned to the element containing its center. There are no conservation issues, and this works fine if the particles are much smaller than the mesh size; if not, there is a possibility of unphysical values appearing in the projected quantities. In <cit.>, a similar idea is used to obtain element-constant values that are then projected to nodal values, while artificially restricting the nodal values to be within physical limits. Obtaining a volume fraction field from a particle configuration was exclusively considered in <cit.>, where different methods are reviewed. <cit.> use the finite element basis functions to perform the Lagrangian-to-Eulerian projection and directly obtain nodal values on the mesh. The process can be formulated as enforcing a weak equivalence between the particle volumes, viewed as a collection of δ-functions at the particle centers, and the Eulerian mesh representation of the particle volume fraction field. This is conservative, but it has the same boundedness and convergence issues as the PIC-type method. In addition, depending on the specifics of the quadrature rules used, this method can generate oscillations and potentially negative values locally. The divergent behavior of this method is demonstrated in <ref>, which shows the projected volume fraction field on a 1D finite element mesh due to one particle at the origin under mesh refinement. As the mesh is refined, the projected value diverges.

§.§ Two-stage projection
Here we outline a novel two-stage method that does not suffer from the issues described above and is efficient and scalable. It is a generalization of the two-stage method described in <cit.> to unstructured, anisotropic and inhomogeneous meshes. The first step is along the lines of <cit.>, with one modification to ensure that we preserve the original sign of the quantity being projected. Let the particle distribution be denoted by

ψ = ∑_i^N_p V_i δ(x - x_i)

where the V_i are the particle volumes and the x_i are the particle centers. The finite element representation of the volume fraction field can be written as

ϕ_p = ∑_i^N_n φ_i N_i(x)

where the N_i are the finite element basis functions and the φ_i are the nodal values. The first step is to enforce the equivalence between the particle distribution and the finite element representation of the volume fraction field in a weak sense.
This is done by enforcing

∫_Ω ϕ_p N_i(x) dΩ = ∫_Ω ψ N_i(x) dΩ,

which can be simplified and written as a linear system of equations of the form

[M] {φ} = [V] {w}

where [M] is the mass matrix, {φ} is the vector of nodal values and {w} is the vector of particle volumes. [V] is a sparse matrix whose rows correspond to the particle centers and whose columns correspond to the mesh nodes. The mass matrix in <ref> should be lumped for a couple of reasons. First, lumping avoids oscillatory and negative values in the projected field. It also diagonalizes the linear system, so the solution can be obtained by a simple division at each node. This result is still mesh dependent and cannot in general be used if the particle sizes are comparable to or larger than the mesh size. The second step is therefore to smooth this projected field using a diffusion equation, with a carefully chosen diffusivity that takes into account the local mesh size to give a mesh-independent smoothing effect. The diffusion equation is discretized using the same finite elements and advanced using pseudo-time-stepping on the same domain as the fluid. It can be written as

∂ϕ_p/∂τ = ∇·(𝒟 ∇ϕ_p)

with a diffusivity defined as

𝒟 = max( (δ_f^2 - Δx^2)/(16 ln 2), 0 )

where δ_f is the filter size (smoothing length scale), set to 4 d_p, and Δx is a nominal local mesh size calculated as the average of the mesh edge lengths at each node. Here, τ is the pseudo-time variable, and <ref> is advanced from τ = 0 to τ = 1. This corresponds to a smoothing length scale of √(𝒟τ), i.e., to a Gaussian with a full width at half maximum of δ_f; it is equivalent to analytically transforming a Dirac delta function into a Gaussian of width δ_f while conserving the integrated value. While the method is demonstrated here on a 1D case, the extension to higher dimensions is quite straightforward, and we have verified that the 3D version retains the desired properties.

§ EXAMPLE CASES
The method developed here is applied to a few different particle-laden flow cases. This is preliminary work and the focus is on demonstrating the capabilities of the implementation; the results presented here are presently only qualitative. In addition to a simple pipe geometry, we demonstrate the capabilities of the method by simulating particles flowing through a simplified bifurcation geometry of the kind one might find in the arterial vasculature.

§.§ Pressure-driven flow in a pipe
The first case considered is pressure-driven flow in a cylindrical tube. The geometry is shown in figure <ref>. The Reynolds number based on the pipe diameter is 100. 10,000 particles are initialized as a spherical bolus near the inlet boundary, and the domain is discretized using 24,000 elements. The inlet velocity is specified as a parabolic flow profile, the outlet is a zero-pressure boundary condition, and the walls are no-slip boundaries. The particles are denser than the carrier fluid, ρ_p = 5ρ_f, and gravity is included. This case is solved in parallel with 4 MPI processes. In figure <ref> we see that the particles are transported down the cylinder by the fluid. Due to the presence of gravity, particles also settle along the bottom. The fully coupled nature of the simulation is evident from the velocity contours: initially, the particles are at rest and block the fluid, so the fluid has to accelerate around the particles. This delays the development of the parabolic profile downstream of the particles.
The particles slowly accelerate due to the fluid, gradually pick up speed, and are carried out of the cylinder through the outlet.

§.§ Flow in bifurcation geometry
§.§.§ Flow in a simple bifurcation
Another example demonstrated here is particle-laden flow through the bifurcation geometry shown in <ref>. The fluid domain is a vessel that splits into two smaller vessels, resembling a typical feature of the arterial vasculature. The vessel geometry is discretized using 220K tetrahedral elements. We initialize 100,000 particles at the inlet in a cylindrical bolus with a mean particle volume fraction in the bolus of ∼0.35. The particles and fluid are initially at rest. The inlet velocity is specified with a parabolic profile corresponding to a Reynolds number of 250 based on the inlet flow and diameter. There is no gravity, and the particle density is twice that of the fluid. The outlet vessels are coupled to a 0D lumped-parameter model that represents the downstream vasculature; the lumped-parameter model is a 3-element Windkessel model. Since this is a steady flow simulation, after the lumped-parameter capacitor is charged, the outflow boundary conditions reduce to outlets with a resistance. <Ref> shows the initial condition in the top left panel and snapshots at two subsequent instances as the simulation progresses. Within each subfigure, the left panel shows a contour plot of the velocity field magnitude along with the particle positions; the right panel is a close-up view of the bifurcation region looking upstream into the inlet vessel. As the particles come to the bifurcation, they collide with the boundary walls and are split into both downstream vessels, following complex inertial trajectories.

§.§.§ Flow in a bifurcation with a stenosis
The final example shown is a modification of the bifurcation geometry from the previous example. Here, we introduce a stenosis in one of the daughter vessels to simulate a narrowing of the vessel due to atherosclerosis; the rest of the conditions are unchanged. The results for this case are shown in <ref>. Within each subfigure in <ref>, the previous healthy geometry is on the right and the stenosed geometry is on the left, and we compare the two simulations at the same instances in time. Up to subfigure (b), the particle configurations do not show any significant difference between the two cases. In <ref>c we see that the particles are accelerated through the narrowing and reach the outlet much more quickly than before. In the final subfigure, in the stenosed geometry, we see particles form a spur-like shape and flow backward. This is due to a recirculation zone that forms downstream of the stenosis, a well-known feature of flow through stenosed vessels. These examples are meant to be a qualitative demonstration of the capabilities of the method. In the future, this work will be followed up with validation, performance evaluation, and scaling studies. We will also apply this method to larger and more complex geometries derived from patient-specific medical images.

§ CONCLUSIONS
To summarize, we have presented a computational framework for modeling large-scale particle-laden flows in complex geometries that enables subject-specific analysis. The framework is based on a volume-filtered Euler-Lagrange approach that uses a finite element method for the fluid phase and a discrete element method for the particle phase. The fluid phase is solved on a three-dimensional unstructured grid using a stabilized finite element method.
The particle phase is modeled as rigid spheres whose motion is calculated according to Newton's second law for translation and rotation. The hydrodynamic force on the particles is calculated using a recently developed correlation for freely evolving suspensions of particles. The method is applied to a few different particle-laden flow cases. The results are of a qualitative nature and are not meant to be quantitative; they demonstrate the capabilities of the implementation and the potential of the method for simulating large-scale particle-laden flows in complex geometries. We intend to follow up this work with validation, performance evaluation, and scaling studies. We will also apply this method to larger and more complex geometries derived from patient-specific medical images and to other biofluids applications. | http://arxiv.org/abs/2311.15989v1 | {
"authors": [
"Abhilash Reddy Malipeddi",
"C. Alberto Figueroa",
"Jesse Capecelatro"
],
"categories": [
"physics.flu-dyn"
],
"primary_category": "physics.flu-dyn",
"published": "20231127163710",
"title": "Volume filtered FEM-DEM framework for simulating particle-laden flows in complex geometries"
} |
Stab-GKnock: Controlled variable selection for partially linear models using generalized knockoffs. Han Su, Panxu Yuan, Qingyang Sun, Mengxi Yi (corresponding author), Gaorong Li. School of Statistics, Beijing Normal University, Beijing 100875, P. R. China. January 14, 2024.

Soil organic carbon (SOC) plays a pivotal role in the global carbon cycle, impacting climate dynamics and necessitating accurate estimation for sustainable land and agricultural management. While traditional methods of SOC estimation face resolution and accuracy challenges, recent technological solutions harness remote sensing, machine learning, and high-resolution satellite mapping. Graph Neural Networks (GNNs), especially when integrated with positional encoders, can capture complex relationships between soil and climate. Using the LUCAS database, this study compared four GNN operators in the positional encoder framework. Results revealed that the PESAGE and PETransformer models outperformed others in SOC estimation, indicating their potential in capturing the complex relationship between SOC and climate features. Our findings confirm the feasibility of applying GNN architectures to SOC prediction, establishing a framework for future explorations of this topic with more advanced GNN models.

§ INTRODUCTION
Soil organic carbon plays a major role in the global carbon cycle, acting as both a source and a sink of carbon and profoundly influencing soil health, fertility, and overall ecosystem functionality. Accurate estimation and monitoring of SOC are crucial for understanding climate dynamics and for driving sustainable land management and agricultural practices. Climate change profoundly impacts SOC dynamics by influencing various processes related to plant growth, microbial activity, and organic matter decomposition. Traditional methods of SOC estimation often face challenges in spatial resolution, coverage, and accuracy, particularly when applied at larger scales or across diverse landscapes [1]. Recent technological advancements have brought SOC monitoring to a new level, employing methods like remote sensing, machine learning, and satellite-driven high-resolution mapping [2,3]. Satellite imagery offers a scalable and cost-effective solution, capturing spatial heterogeneity, temporal dynamics of SOC, and variability of climate features across regions, from South Africa's diverse landscapes [4] to Bavaria's agriculturally intensive zones [5]. Graph Neural Networks have the ability to model complex interdependencies between SOC and multifaceted climate features. By design they excel at capturing relational information in data [6], which makes them indispensable for modelling the relationship between soil and climate. For instance, integrating positional encoders [7] in GNNs allows them to capture the spatial dependencies crucial for geographic data. Furthermore, GNNs that learn both structural and positional representations provide a comprehensive understanding, ensuring that both the composition and the spatial distribution of elements are considered. In this paper, we propose applying advanced GNN operators [8-11] within the positional encoder framework [7] for SOC estimation.
We aim to use their relational modelling capabilities and computational optimizations to deliver accurate, detailed, and scalable SOC predictions across diverse climate-related features.

§ METHODOLOGY
§.§ Message passing networks
In graph-based learning, the challenge lies in adapting traditional convolution operators, which thrive on regular grid structures, to work effectively on irregular graph domains. One widely adopted approach to address this challenge is the notion of message passing, or neighbourhood aggregation. In this context, let x^(k)_i ∈ ℝ^F denote the features associated with node i at the kth layer, and let e_j,i ∈ ℝ^D denote the edge features from node j to node i. The message-passing operation can then be articulated as

𝐱^(k)_i = γ^(k)( 𝐱^(k-1)_i, ⊕_j ∈ 𝒩(i) ϕ^(k)(𝐱^(k-1)_i, 𝐱^(k-1)_j, 𝐞_j,i) )

where ⊕ is a differentiable function that remains invariant to permutations; common choices include sum, mean, or max. The functions γ and ϕ are differentiable mappings, often realized using structures like multi-layer perceptrons (MLPs). With this scheme, we proceed to evaluate four prominent operators in the positional encoder framework: GCN, SAGE, Transformer [11], and GAT (Tab. <ref>).

§.§ Positional encoder graph neural network
Unlike the standard GNN method, PE-GNN (Fig. <ref>) integrates a positional encoder that transforms 2D geographic coordinates into context-aware vector embeddings. This allows for a flexible representation of spatial context and relationships. A spatial graph is constructed for each batch using the k-nearest-neighbours method during training. The outcome variable's local Moran's I values, an autocorrelation metric, are computed, generating a "shuffled" version of the metric due to randomized minibatching. The PE-GNN model uses two prediction heads with shared graph operation layers, and its loss calculation incorporates both the main task and the auxiliary Moran's I task, weighted by a parameter λ. This design enables PE-GNN to learn spatial complexities in a more adaptable manner, considering relationships between varying clusters of points across iterations. Consequently, this helps the model generalize better and not rely on memorized neighbourhood structures. Combining the positional encoder framework with the four operators above, we obtain PEGCN, PESAGE, PETransformer and PEGAT.

§ RESULTS AND DISCUSSION
We compared the four operators using the Land Use/Land Cover Area Frame Survey (LUCAS) database [12], a harmonised in situ land cover and land use data collection over the whole of the EU's territory. The terrain attributes are derived from COPERNICUS and USGS DEMs. Macroclimate features come from ERA5-Land Daily Aggregated data (ECMWF). Landsat-8 offers high-resolution landcover images, MODIS supplies medium-resolution surroundings, and OpenLandMap contributes some soil attributes. All the data are prepared with the help of Google Earth Engine. The prepared dataset, comprising 21,245 samples with 42 features each, was collected from cropland and grassland in 2015 and 2018. All methods shared a common comparison framework. Given the target value's heavy-tailed distribution, we log-transform it to mitigate outlier effects and expedite training. The data split was 70% training, 15% testing, and 15% evaluation.
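A minimal PyTorch Geometric sketch of the operator comparison described above is given below. The two-layer architecture, the sinusoidal coordinate embedding standing in for the PE-GNN positional encoder, the hidden sizes, and the omission of the auxiliary Moran's I head are illustrative assumptions rather than the exact configuration used in the study:

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, SAGEConv, GATConv, TransformerConv

OPERATORS = {"GCN": GCNConv, "SAGE": SAGEConv,
             "GAT": GATConv, "Transformer": TransformerConv}

class PEOperatorRegressor(nn.Module):
    """Two GNN layers over climate features concatenated with a coordinate
    embedding, followed by a linear head for the (log-transformed) SOC value."""

    def __init__(self, n_features, operator="SAGE", hidden=64, emb=16):
        super().__init__()
        conv = OPERATORS[operator]
        self.coord_mlp = nn.Sequential(nn.Linear(4, emb), nn.ReLU(),
                                       nn.Linear(emb, emb))
        self.conv1 = conv(n_features + emb, hidden)
        self.conv2 = conv(hidden, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x, coords, edge_index):
        # Sinusoidal encoding of (lon, lat) as a stand-in positional encoder.
        pe = self.coord_mlp(torch.cat([torch.sin(coords), torch.cos(coords)], dim=-1))
        h = torch.cat([x, pe], dim=-1)
        h = torch.relu(self.conv1(h, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        return self.head(h).squeeze(-1)

# model = PEOperatorRegressor(n_features=42, operator="Transformer")
# pred = model(x, coords, edge_index)   # edge_index from a k-NN spatial graph
```

Because all four operators share the forward signature conv(x, edge_index), swapping the operator string is enough to reproduce the PEGCN/PESAGE/PETransformer/PEGAT comparison; the λ-weighted Moran's I auxiliary loss of PE-GNN would be added on top of this backbone.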
According to Fig.<ref>, PESAGE and PETransformer methods provide better testing performance than others during the training.Since climate features have different dimensions and scales, for each point,we convert all of them to a single vector. It'll make the features have complex distributions, which is not good for GCNConv.GATConv incorporates attention mechanisms to weigh neighbour contributions. In testing with head=1, the single attention mechanism may limit concurrent focus on diverse graph regions. SAGEConv and TransformerConv might be capturing more complex patterns in the data due to their aggregation mechanisms and self-attention capabilities, respectively. The training process, including learning rates, regularization, and other hyperparameters, can also impact the performance of these layers. It's possible that the training setup was more favorable for SAGEConv and TransformerConv. The evaluation results shown in Tab.<ref> also proved this. PESAGE with λ equals 0.5 provides the best testing results.Compared with the ground truth data, the spatial variance of the predicted values shown in Fig.<ref> provided by PEGCN and PEGAT have been smoothed, while the other methods can provide fine features. The fundamental operation of graph convolutional networksis to aggregate information from neighbouring nodes. In GCNConv, node features are aggregated using a simple weighted average of their neighbours. This kind of aggregation tends to produce a smoothing effect over the graph. While GATConv introduces an attention mechanism that weighs the contributions of neighbouring nodes, the attention weights can sometimes lead to a kind of averaging, especially if the attention scores do not vary significantly among the neighbours.One of the methodologies in SAGEConv is to sample a fixed-size set of neighbors at each layer. This sampling can prevent the rapid expansion of receptive fields, thereby reducing the over-smoothing effect seen in traditional GCNs. SAGEConv often concatenates the current node's features with aggregated neighbor features, helping to preserve the node's original information.TransformerConv use positional encodings, which could add more distinctiveness to node embeddings, reducing the chances of over-smoothing. The transformer has residual connections, which can help retain original information and prevent over-smoothing by allowing gradients to flow directly through layers. PESAGE and PETransformer provide better evaluation results as shown in Fig.<ref>. § CONCLUSION This study emphasized the significance of SOC estimation within the global carbon cycle and its complex relationship with climate dynamics. By leveraging the LUCAS database, we explored the potential of GNNs, particularly the PESAGE and PETransformer models, in addressing the challenges faced by traditional SOC estimation methods. Our findings showed that GNN architectures can capture the complex interdependencies between SOC and climate-related features, setting a new benchmark in SOC prediction. Based on these insights, in future research we will explore the GPS graph transformer [13] to enhance SOC prediction methods.§ REFERENCES[1] Anthony D Campbell, et al.(2022)A review of carbon monitoring in wet carbon systems using remote sensing. Environmental Research Letters, 17(2):025009, 2022.[2] Camile Sothe et al. (2022)Large scale mapping of soil organic carbon concentration with 3d machine learning and satellite observations. Geoderma, 405:115402.[3] Ken CLWong, et al. 
(2022). Image-based soil organic carbon estimation from multispectral satellite images with Fourier neural operator and structural similarity. In NeurIPS 2022 Workshop on Tackling Climate Change with Machine Learning.
[4] Zander S. Venter et al. (2021). Mapping soil organic carbon stocks and trends with satellite-driven high resolution maps over South Africa. Science of the Total Environment, 771:145384.
[5] Simone Zepp et al. (2023). Optimized bare soil compositing for soil organic carbon prediction of topsoil croplands in Bavaria using Landsat. ISPRS Journal of Photogrammetry and Remote Sensing, 202:287–302.
[6] Sergi Abadal et al. (2021). Computing graph neural networks: A survey from algorithms to accelerators. ACM Computing Surveys (CSUR), 54(9):1–38.
[7] Konstantin Klemmer et al. (2023). Positional encoder graph neural networks for geographic data. In International Conference on Artificial Intelligence and Statistics, pages 1379–1389.
[8] Thomas N. Kipf and Max Welling (2016). Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
[9] Will Hamilton, Zhitao Ying, and Jure Leskovec (2017). Inductive representation learning on large graphs. Advances in Neural Information Processing Systems, 30.
[10] Petar Veličković et al. (2018). Graph attention networks. In ICLR 2018.
[11] Yunsheng Shi et al. (2020). Masked label prediction: Unified message passing model for semi-supervised classification. In 30th International Joint Conference on Artificial Intelligence (IJCAI-21).
[12] Raphaël d'Andrimont et al. (2020). Harmonised LUCAS in-situ land cover and use database for field surveys from 2006 to 2018 in the European Union. Scientific Data, 7(1):352.
[13] Ladislav Rampášek et al. (2022). Recipe for a general, powerful, scalable graph transformer. Advances in Neural Information Processing Systems, 35:14501–14515. | http://arxiv.org/abs/2311.15979v1 | {
"authors": [
"Weiying Zhao",
"Natalia Efremova"
],
"categories": [
"cs.LG",
"cs.AI"
],
"primary_category": "cs.LG",
"published": "20231127162512",
"title": "Soil Organic Carbon Estimation from Climate-related Features with Graph Neural Network"
} |
Kiran Jain [email protected]]Kiran Jain National Solar Observatory, Boulder, CO 80303, USA0000-0003-3253-9054]Partha Chowdhury University College of Science and Technology, Department of Chemical Technology,University of Calcutta, Kolkata, 700009, West Bengal, India0000-0002-4995-6180]Sushanta C. Tripathy National Solar Observatory, Boulder, CO 80303, USA We studied the temporal evolution of quasi-biennial oscillations (QBOs) using acoustic mode oscillation frequencies from the Global OscillationNetwork Group.The data used here span over more than 25 yr, coveringsolar cycles 23 and 24 and the ascending phase of cycle 25.The analysis revealsthat the QBO-like signals are present in both the cycles,but with different periods.The dominant QBO period in cycle 23 is foundto be about 2 yr while it is about 3 yr in cycle 24.Furthermore, thequasi-biennial oscillatory signals are present only during the ascendingand high-activity phases of cycle 23 and quickly weaken around 2005during the declining phase. In comparison, the QBO signals are present throughout the cycle 24, starting from 2009 to 2017.We also exploredthe depth dependence in QBO signals and obtained aclose agreement atall depths, except in the near-surface shear layer. A detailed analysisof the near-surface shear layer suggests that thesource region of QBOs is probablywithin a few thousand kilometers just below the surface. § INTRODUCTIONSolar activitydisplays a variety of periodicities; some are long term and others are short term <cit.>. Records of sunspot numbers[<https://www.sidc.be/silso/datafiles>] over multiple centuries reveal both types of periodicities. Among them, themost dominant cyclic behavior is commonly known as the solaractivity cycle or Schwabe cycle <cit.>, the period of which ranges from 9 to 13 yr. It is present in all activity measures observed in different layers of the solar atmosphere. Since the consistent and uninterrupted observations of 5 minute acoustic modes<cit.> are also available for more than two solar cycles, it is now possible toprobe the variability and structure of the solar interior in great detail <cit.>. Similar to above-surface activity indicators, strong 11 yr cyclic patterns are also found in the change of helioseismic p-mode frequencies computed using the methods of global helioseismology <cit.> as well as local helioseismology <cit.>. These changes are strongly correlated with the variations in solar magnetic activity, though the correlation between them differs depending on the phase of the cycle <cit.>. The long time series have further allowed us to uncover several new features in helioseismic data that were inaccessible otherwise, e.g., recently discovered high-latitude inertial modes <cit.>. In addition to 11 yr cyclic patterns, other periods have also been identified in both solar activity proxies <cit.> andhelioseismic data obtained from different instruments <cit.>.During the early years of the operation of the Michelson Doppler Imager (MDI) on board the Solar and Heliophysics Observatory (SoHO), the oscillatory component of a 1 yrperiod was reported in the oscillation frequencies <cit.>, which was later found to be an artifact in the data, as it matched with the orbital period of the Earth. However, shorter quasiperiodic variations or quasi-biennial oscillations (QBOs)with periods ranging from 0.6 to 4 yrexist in all types of data <cit.>. 
These quasiperiodic variations have also been identified in several energetic events, e.g., solar flares <cit.>, coronal mass ejections <cit.>, as well as meteorological parameters <cit.>. Studies based on solar observations show that the amplitude of the QBOs varies with the activity cycle, with the highest amplitudes during solar maxima that become weaker during the activityminima. During most solar cycle maxima, double peaks (Gnevyshev Gap) are more prevalent in activity indices, due to the asymmetry between the northern and southern hemispheres <cit.>, where one hemisphere achieves the maximum amplitude earlier than the other hemisphere. It is also suggested that the solar cycle may evolve independently in the two hemispheres <cit.>, which introduces asymmetry in both hemispheres.This asymmetry is also believed to be linked with the higher amplitude of the QBOs at solar maxima, especially in global indices. Thus,all these studies suggest the existence oftwo magnetic cycles with different periodicities that have been discussed by several authors <cit.>.The magnetic field is believed to be generated in convection zone and then transported upward to differentlayers of the Sun's atmosphere. During this process, the magnetic field slowly gets dissipated and a part of it reaches up to interplanetary space. Thus, the periodic variations obtained at different layers seem to be connected. In addition, the availability of asteroseismic observations of thousands of stars from Kepler and CoROT provide evidence of stellar magnetic cycles <cit.>.Therefore, aprecise understanding of different magnetic cycles progressing simultaneously is crucial for the better understanding of solar and stellar dynamos, and their variability. <cit.> argued that thequasiperiodic energy exchange among magnetic fields, Rossby waves, and differential rotation of the solar interiorare important to explain the explosive and quiet periods during the activity cycle.Furthermore, the quasi-biennial pulsations observed in both solar and stellar flares suggest the similar underlying physics of these flares <cit.>. However, the stellar data are currently limited by the length of the observations. Therefore, the knowledge of solar dynamos can be expanded to understand the stellar dynamics. To get a deeper understanding of QBOs and their origin,we present an analysis of the QBO periodicities observed in the last two solar cycles by studying acoustic mode frequencies. Note that cycles 23 and 24 had very different amplitudes and were separated by an extremely long period of very low solar activity. Thus, the similarities and differences in QBOs in these cycles are important in constraining solar dynamo models. The p-mode frequency data and the method are described in Section 2. We present the results in Section 3 and the possible origin of QBOs is discussed in Section 4. Finally, we summarize our findings in Section 5.§ DATA AND TECHNIQUEWe utilized p-mode frequencies computed from Doppler observations of the Global Oscillation Network Group <cit.>.GONG is a ground-based network of six sites, and it has been providing unique and consistent frequency measurements with a significantly high duty cycle <cit.> for probing the solar interior for more than two solar cycles.the data analyzed here consist of 274 nonoverlapping sets of 36 days, covering a period of 27 yr from 1995 May 7 to 2022 May 8 (two full solar cycles, 23 and 24). 
The duty cycles of these data are plotted in Figure <ref>, and the mean and median duty cycles are 86% and 87%, respectively. The frequencies, ν_n ℓ m, were computed for the individual (n, ℓ, m) multiplets, where n is the radial order, ℓ is the harmonic degree, and m is the azimuthal order, running from -ℓ to +ℓ. The mode frequency for each multiplet was estimated from the m-ν power spectra constructed from the time series of an individual 36 day period. We used a GONG peak-fitting algorithm based on the multitaper spectral analysis coupled with a Fast Fourier Transform (FFT) to compute the power spectra <cit.>. Finally, we applied a minimization scheme guided by an initial guess table to the Lorentzian profiles to fit the peaks in the m-ν spectra. The frequency and degree ranges covered in this work are 1860 ≤ ν ≤ 3450 μHz and 0 ≤ ℓ ≤ 120, respectively.

§.§ p-mode Frequency Shifts
To investigate the QBO periods in the acoustic oscillatory signal, we first calculated the change in frequencies with reference to the guess frequencies that are used for the fitting of ν_n ℓ m. Since the frequency shifts have well-known dependencies on frequency and the mode inertia <cit.>, we scale the change in frequencies with mode inertia, as described by <cit.>, while calculating the weighted mean frequency shift, δν, from the following relation:

δν(t) = [ ∑_{n ℓ m} (Q_{n ℓ} / σ_{n ℓ m}^2) δν_{n ℓ m}(t) ] / [ ∑_{n ℓ m} (Q_{n ℓ} / σ_{n ℓ m}^2) ]

Here, Q_{n ℓ} is the inertia ratio, σ_{n ℓ m} is the uncertainty in the frequency determination, and δν_{n ℓ m}(t) is the change in measured frequency for a given n, ℓ, and m. We display in Figure <ref>(a) the temporal variation of δν for the entire period. As illustrated, the frequency shifts follow the trends of the solar activity cycles, confirming that the strength of cycle 24 was much weaker than cycle 23. To extract the short-term fluctuations in δν, we follow the procedure described by <cit.> and subtract a smooth trend from the mean shifts by applying a boxcar filter with a width of 2 yr. The smoothed curve, shown in Figure <ref>(a), depicts the 11 yr envelope of the solar activity cycle. Figure <ref>(b) shows the residuals that are present in the oscillation frequencies but not related to the 11 yr activity cycle, and are believed to originate from the short-term periodicities.

§.§ Morlet Wavelet Analysis
The wavelet analysis is a valuable tool for examining the presence of localized oscillations in a nonlinear time series, in both the time and frequency domains. Here we use the continuous wavelet transformation with the Morlet wavelet to study the presence and temporal evolution of the QBOs in p-mode frequencies <cit.>,

ψ_0(η) = π^{-1/4} e^{iω_0 η} e^{-η^2/2}.

In this expression, ω_0 is a nondimensional frequency and we have adopted ω_0 = 6 <cit.>. The thick dashed line in subsequent wavelet plots indicates the cone of influence, where the wavelet power reduces by a factor of e^-2 due to the edge effect, and the thin black contours indicate the periods above the 95% confidence level, under a red-noise background <cit.>. We also compute the Global Wavelet Power Spectra (GWPS) by averaging over time at a given frequency. The 95% confidence level of the GWPS plots is determined following the recipe of <cit.>.
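As a rough illustration of this reduction (not the GONG pipeline itself), the sketch below computes the inertia-weighted mean shift of the relation above for one 36 day set, removes a 2 yr boxcar trend from the resulting time series, and evaluates a Morlet (ω_0 = 6) wavelet power spectrum together with its global average; the scale-period conversion follows the Torrence & Compo convention, and the array names (e.g., shift_series for the sequence of 36 day mean shifts) and the simple convolution-based transform are illustrative.

```python
import numpy as np

def weighted_mean_shift(dnu, sigma, Q):
    """Inertia-weighted mean frequency shift of one 36-day set:
    weights are Q_nl / sigma_nlm^2."""
    w = Q / sigma**2
    return np.sum(w * dnu) / np.sum(w)

def boxcar_detrend(series, width):
    """Subtract a running boxcar mean; with 36-day sampling a 2-yr
    window corresponds to roughly width = 20 samples."""
    kernel = np.ones(width) / width
    trend = np.convolve(series, kernel, mode="same")
    return series - trend, trend

def morlet_power(series, dt, periods, omega0=6.0):
    """Morlet wavelet power of the residual series; the GWPS is the
    time average of the power at each period."""
    n = len(series)
    t = (np.arange(n) - n // 2) * dt
    # Fourier period -> wavelet scale (Torrence & Compo 1998)
    scales = periods * (omega0 + np.sqrt(2.0 + omega0**2)) / (4.0 * np.pi)
    power = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        eta = t / s
        # Morlet mother wavelet psi_0(eta) = pi^(-1/4) exp(i*omega0*eta) exp(-eta^2/2)
        psi = np.pi**-0.25 * np.exp(1j * omega0 * eta) * np.exp(-eta**2 / 2.0)
        # psi_0*(-eta) = psi_0(eta), so a plain convolution reproduces the CWT correlation
        coef = np.convolve(series, psi, mode="same") * np.sqrt(dt / s)
        power[i] = np.abs(coef)**2
    return power, power.mean(axis=1)

# Example usage with 36-day sampling and QBO periods of roughly 1-4 yr:
# residual, _ = boxcar_detrend(shift_series, width=20)
# power, gwps = morlet_power(residual, dt=36.0, periods=np.arange(365, 1461, 36))
```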
§RESULTSWe calculated the weighted mean frequency shift over the frequency interval 1860μHz≤ν≤3450μHz using Equation <ref>.Since our aim is to study the periods relevant to QBOs, i.e., 1 – 4 years, the Morlet wavelet spectra, displayed in Figure <ref> and all subsequent figures, are limited to this period range.As seen in Figure <ref>, we obtain two distinct zones with significant power inthe wavelet spectrum; one during cycle 23 and the other during cycle 24. It is also seen that the QBO signal diminishes during the low-activity periods, while itis enhanced during the high-activity periods. The variation in QBO power with timeis consistent with the previous studies based on various activity indices where a decrease in the QBO power was reported during the activity minimum by several authors <cit.>. The most striking feature in the wavelet spectrum is power distribution within the 95% confidence level;while it is confined to thelow-frequency part of the QBO spectrum with higher periods during 2009–2017, the spread is wider during 1997–2005 covering a large range of QBO periods. In the GWPS displayed in the bottom panel ofFigure <ref>, weidentify a prominent double-peak structure above the 95% confidence level with QBO periods of 677_-82^+146 and 1136_-295^+133 days. These two QBO periods are primarily due to the differences in dominant power buildup in different parts of the spectrum.It is important to mention that the double-peak structurewas not identified in earlier studies. For example, <cit.> reported only a single peak in the global power spectrum by using the oscillation frequencies for cycle 23 and the ascending phase of cycle 24. Furthermore, the QBO power is found to disappear around 2005, during the declining phase of cycle 23, while a gradual decrease is seen throughout the declining phase of cycle 24 until 2017. The sudden loss of QBO power in the declining phase of cycle 23 might have resulted from the changes occurring in subsurface layers as reported by <cit.> and <cit.>.To explore the origin of the two peaks in GWPS, shown in Figure <ref>, we separated out the frequency shifts for cycles 23 and 24 and calculated the wavelet spectrum and GWPS for both cycles independently.As is evident in Figure <ref>, the double-peak structure no longer appears in the GWPS; only one peak is obtained in each cycle,but with different QBO period and power; forcycles 23 and 24, the periods are 677_-69^+78 and 1090_-153^+206 days, respectively. These periods are consistent with those obtained from the analysis of the entire data, suggesting that both cycles have different dominant QBO periods. The QBO period obtained in cycle 23confirms the single peak in the power spectrum as reported by <cit.> with a period of about 2 yr (∼700 days).We also obtain an insignificant hump around a period of1010 days in cycle 23. Moreover, separate analyses for both cycles reveal that the QBO power in cycle 24 had weakened as compared to cycle 23.We believe that the varying changes in subsurface layers during cycles 23 and 24 are responsible for the different periods. §.§ Depth dependence in QBOs Helioseismic studies focusing on the minima preceding cycles 23 and 24 reveal that the surface activity minimum occurredabout a year later than the minimum inferred in the deep solar interior, particularly in the radiative zone and core <cit.>. These results suggest the existence of more than one dynamo at different locations below the surface that may be affecting the variability in different zones. 
Thus, it is important to explore the influence of distinct zones on the QBO signals, if any. For investigating the QBO periods for modes sensitive to different zones below the surface, we divided the entire solar interior into three main zones: the core (0.0 < r_t/R_Sun ≤ 0.3), the radiative zone (0.3 < r_t/R_Sun ≤ 0.7), and the convection zone (0.7 < r_t/R_Sun ≤ 1.0). The modes traveling to these zones can be restricted by using their ray path, defined by the lower- and upper-turning points, where the lower-turning point (r_t) determines the penetration depth and the upper-turning point provides information about the layer from which the mode reflects back into the interior <cit.>. The penetration depth is calculated from the relation

c(r_t)/r_t = 2πν/√(ℓ(ℓ+1)),

where c(r_t) is the sound speed at depth r_t. A higher value of ν/√(ℓ(ℓ+1)) denotes a smaller value of r_t and hence a greater depth. Note that all modes spend maximum time near the surface due to the rapidly increasing sound speed with depth. Moreover, the upper-turning point defines a radius at which the acoustic cutoff frequency of the Sun (star) equals the frequency of the mode <cit.>. Modes with higher frequencies reach much closer to the surface. Thus, we use the lower- and upper-turning points defined by Equation <ref> and the mode frequencies, respectively, to infer the depth information.

The Morlet spectra and GWPS for modes with lower-turning points in three different zones are displayed in Figure <ref>. It is worth mentioning that all p-modes, irrespective of their lower-turning points, travel to the surface; thus the frequencies are sensitive to the properties of the entire path (region) they travel through. For example, the modes returning to the surface from the core will also reveal the conditions of the radiative and convection zones, while the modes returning from the convection zone will carry information from the convection zone only. As shown in Figure <ref>, the QBO-type signals are present in different zones, exhibiting similar double-peak structures. The obtained QBO periods are: 692_-44^+63 and 1090_-173^+128 days for the modes returning from the core; 677_-56^+78 and 1136_-139^+133 days for the modes returning from the radiative zone; and 677_-108^+146 and 1112_-271^+205 days for the modes confined to the convection zone only. There are some differences in the peak values, but they agree within the uncertainties. A close agreement in the QBO periods implies no depth dependence in the QBOs. These results are supported by a recent study <cit.>, where the authors investigated the depth dependence in QBOs by applying the methods of Empirical Mode Decomposition (EMD) and the FFT. They used oscillation frequencies from both GONG and MDI/Helioseismic and Magnetic Imager instruments covering solar cycles 23 and 24, but did not find a clear depth dependence in the QBO periods in either cycle. Since all these modes travel to the surface and reflect back from there, our finding of comparable QBO periods hints that the possible source lies in the near-surface shear layer (NSSL). Similar explanations for the location of the QBO source region were also proposed in earlier studies based on the helioseismic data <cit.>.

§.§ Is the second dynamo in NSSL responsible for QBOs?
To examine the possibility of the QBO source location being in the NSSL, we analyzed modes confined to this layer. It is conjectured that the second dynamo resides in the upper 5% below the surface; thus we use modes that have sensitivity to the properties of this layer only.
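The zone selection described above can be sketched in a few lines; the snippet below is illustrative rather than the actual analysis code. It assumes a tabulated sound-speed profile c(r) from a standard solar model (cgs units, with the radius grid expressed as a fraction of R_Sun and excluding r = 0), solves the turning-point relation c(r_t)/r_t = 2πν/√(ℓ(ℓ+1)) on that grid, and then keeps only the modes whose lower turning point falls in the chosen layer.

```python
import numpy as np

R_SUN_CM = 6.957e10  # solar radius in cm

def lower_turning_point(nu_uhz, ell, r_frac, c_cms):
    """Lower turning point r_t/R_Sun of a mode with frequency nu (microHz)
    and degree ell, found where c(r)/r matches omega/L = 2*pi*nu/sqrt(ell*(ell+1)).
    r_frac: model radius grid in units of R_Sun; c_cms: sound speed in cm/s."""
    omega_over_L = 2.0 * np.pi * nu_uhz * 1e-6 / np.sqrt(ell * (ell + 1.0))  # 1/s
    ratio = c_cms / (r_frac * R_SUN_CM)                                      # c(r)/r in 1/s
    return r_frac[np.argmin(np.abs(ratio - omega_over_L))]

def select_modes_by_zone(modes, r_frac, c_cms, r_min, r_max):
    """Keep (nu, ell) modes whose lower turning point lies in (r_min, r_max],
    e.g. (0.0, 0.3] for the core, (0.7, 1.0] for the convection zone, or
    (0.9, 1.0] for modes confined to roughly the upper 10% of the interior."""
    kept = []
    for nu, ell in modes:
        rt = 0.0 if ell == 0 else lower_turning_point(nu, ell, r_frac, c_cms)
        if r_min < rt <= r_max:
            kept.append((nu, ell))
    return kept
```

The weighted mean shift and wavelet analysis of the previous section can then be repeated on each selected subset of modes.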
Since a sufficient number of global acoustic modes in the upper 5%layer are currently not available, we widened the depth to 10%.This selection criterion automatically excludes modes that have lower-turning points below this layer.The Morlet wavelet and global power spectra for all the modes confined to this layer are shown in Figure <ref>for the entire data set and in Figure <ref> for solar cycles 23 and 24 analyzed separately. We obtain two prominent QBO periods at 648_-56^+107and1136_-112^+57 days, withone marginally significant peak at 937 days for the entire data andpeaks at 663_-68^+60 and 1067_-109^+125 days for cycles 23 and 24, respectively. These QBO periods are consistent with those reported in previous sections for the modes probing deeper regions. Despite the fact that all modes spend maximum time near the surface, the modes restricted by their lower-turning points preclude any information below this turning-point radius. Since the analyses of modes carrying information from different regions below the NSSL provide similar QBO periods, we emphasize that a close agreement between the QBO periods in all cases is significantly influenced by the conditions in NSSL. If the changes in deeper layers were responsible for QBOs, we would not have obtained any QBO periods for the modes confined to the NSSL. Therefore, we suggest the plausible source location of QBOs to be within the near-surface layers. We further study the influence of the changes in the NSSL on QBO periods. It was reported by <cit.> that there were differences in the properties of different oscillation modes caused by the changes in the Sun's shear layers during the declining phase of cycle 23. This was confirmed in subsequent studies by <cit.> and <cit.>. Following <cit.>,we divided the entire data intothree frequency bands: 1860 ≤ν≤ 2400 μHz (low-ν), 2400 < ν≤ 2920 μHz (mid-ν), and2920 < ν≤ 3450 μHz (high-ν).It may be noted that for a given ℓ, the upper-turning points lie closer to the surface with increasing frequencies. Since the mode characteristics of the higher-frequency range are also influenced by the properties of the modes corresponding to lower-frequency ranges, we emphasize that these frequency bands are not entirely independent.We again utilized Equation <ref> and computed the weighted mean frequency shiftsfor different frequency bands. The Morletand the GWPSpresented inFigure <ref> display similarities as well as differences between the different frequency ranges.While we find a close agreement between the two QBO periods obtained for the mid-ν (692^+113_-84 and 1112^+157_-289 days) and high-ν bands (663^+142_-81 and 1136^+133_-277 days), there is only one peak inthe low-ν band representing the low-frequency part of the QBO spectrum witha period of 1112^+157_-289 days. It is evident from Figure <ref>that there was insufficient power buildup around 700 daysfor the modes inthe low-ν band, which might be responsible for the single peak in this frequency band. To examine this, we repeated the analysis for cycles 23 and 24 separately in each frequency band. Similar to the earlier results, we obtain one QBO period corresponding to each frequency band in each cycle;at 788^+380_-219, 692^+167_-122and692^+206_-120 days for cycle 23 in the low-ν, mid-ν and high-ν frequency bands, respectively,and at 1112^+336_-234, 1090^+261_-212and1112^+211_-234 days for cycle 24. We notice that the QBO periods for all three frequency bands in cycle 24 are comparable. 
The peak period obtained for the low-ν band in cycle 23 is different from other two frequency bands, butthe differences are within the estimated uncertainties. We also find that the QBO power is strongest for the modes corresponding to the high-ν band, whichdecreases by several orders for the lower-ν bands. This leads us to speculate that the QBO signals originate from a source lying in the layers where the power is stronger.Since the selected frequency bands are not completely isolated, a precise location of the QBO source would require inverting global high-degree p modes that are not currently available.§ DISCUSSION We have presented a detailed analysis of the acoustic oscillation frequencies in order to explore short-term periodicities. While numerous studies have been carried out to characterize the periods shorter than the 11 yr cycle, such as the QBO type,in the sunspots, the 10.7 cm radio flux and other measures of solar activity <cit.>, such studies based on the helioseismic data are comparatively fewer <cit.>. Nonetheless, the QBO-like periodicities detected in solar activity and the oscillation data are more or less consistent with each other. Since the variations in helioseismic oscillation frequencies are sensitive to the changes in the solar interior, this similarity indicates that the surface activity measurements mostly reflectthe properties of the solar interior. However, it is important to note that the studies based on surface activity measurements do not provide the precise location of the QBO source, whilethe helioseismic measurements have the unique ability to identifythe location of the source. This has been confirmed to some extent in several helioseismic studies. In this study based on the helioseismic data, we found differences in the QBO periods between cycles 23 and 24. Thus, it is of interest to explore if the differences in the QBO periods corresponding to different cycles are limited to the solar interior only or whetherthese are also present in the magnetic activity measured in the solar atmosphere.For this purpose, we analyzed the daily 10.7 cm radio fluxmeasurements[https://www.spaceweather.gc.ca/forecast-prevision/solar-solaire/solarflux/sx-5-en.php] <cit.>.The 10.7 cm flux represents the contributions from both strong (sunspots) and weak (radio plages) magnetic flux in the upper chromosphere, in addition to the quiet-Sun background emission. We averaged the radio flux values over the same 36 day time intervals as the frequency time series and analyzed them in a similar way as the frequency shifts. As shown in Figure <ref>, we obtain two noticeable zones in the wavelet spectrum,but only one dominant peak at 917_-296^+195 days in the global power spectrum in the analysis of the entire data. This contradicts the double-peak structure found in the oscillation mode frequencies discussed in Section <ref>. We speculate that this might have resulted from the lack of significant power with different prominent periodsin the QBO spectrum, and that the identified period is an average of quasiperiodic behavior spread over several periods. The disparity between the QBO periods obtained for the radio flux and the acoustic modes suggests a complex relationship between the interior and atmosphere, and has been discussed by several authors <cit.>.It is intriguing to note from Figure <ref> that the separate analyses for both cycles confirm two different QBO periods: 677_-82^+164 and 1022_-181^+196 days for cycles 23 and 24, respectively. 
These arecomparable with the QBO periods found in our analysis of acoustic modes.To understand the origin of QBOs and their periods, <cit.> proposed a model based on the idea of two dynamo sources separated in space;the first source is located near the bottom of the convection zone, with the second source operating in the NSSL. Though the dynamo theories successfully explain the 11 yr cycle to some extent, the significantly differentpredictions for cycle 24based on different dynamo theories raise some questions about our understandingof the solar dynamo <cit.>. In addition, the quasi-biennial cycle poses another challenge to these theories. Our study suggests the possible origin of QBOs in a layer closer to the surface, while a few earlier studies suggest that these are generated by an interaction between two oppositely signed magnetic activity bands seated in the deep interior of the opposite hemispheres. The later perception appears to be supported bythe recent numerical simulations of <cit.>, where authors demonstratedthat the solar cycle evolves independently in the two hemispheres. Other suggested mechanisms responsible for QBOs include the instability of the magnetic Rossby waves in the tachocline <cit.>, periodic energy exchange between the Rossby waves, differential rotation, and the toroidal field via tachocline nonlinear oscillations <cit.>, and the interplay between the flow and magnetic fields, where the turbulent α-mechanism works in the lower half of the solar convection zone and extends to the surface <cit.>.In a subsequent study, <cit.> argued that the source region of QBOs is below 0.78 R_. <cit.> studied the influence of a toroidal magnetic field on the dynamics of shallow water waves in the solar tachocline. The author found that the toroidal magnetic field splits equatorial Rossby and Rossby-gravity waves into fast and slow modes. While the global equatorial fast magneto-Rossby waves have a periodicity of 11 yr, matching the timescale of activity cycles with the solutions confined around sunspot activity belts, the equatorial slow magneto-Rossby-gravity waves have the periodicity of 1–2 yr, which may correspond to observed annual oscillations and QBOs. Evidence of the existence of equatorial Rossby waves has been presented in several studies using the helioseismic data <cit.>.In addition, <cit.> have also detected the presence ofRossby-type waves in coronal bright points obtained by the Extreme-Ultraviolet Imagerinstruments on the Solar Terrestrial Relations Observatory spacecraft, and the AtmosphericImaging Assemblyinstrument on theSolar Dynamic Observatoryspacecraft. These waves propagate in the retrograde direction relative to the rotation and have been studied in detail by several authors <cit.>.On the other hand, some studies suggest that QBOs might originate from the dynamo action in the layers just below the surface where the second dynamo is situated. <cit.> performed a series of 3D nonlinear MHD simulations by varying the rotation rate and luminosity of the modeled solar-like convective envelopes. They found that the shorter cycles,located at the top of the convective envelope close to the equator,are observed in numerical experiments for the small values of the local Rossby number,while a moderateRossby numberis neededfor the decadal magnetic cycles originating near the base of the convection zone. 
The deep-seated dynamo sustained in these numerical experiments is fundamentally nonlinear, thus it is the feedback of the large-scale magnetic field on the differential rotation that sets the magnetic cycle period. Theauthors also found that the cycle period decreases with the Rossby number, which offers an alternative theoretical explanation for the variety of activity cycles observed in solar-like stars. In addition, using low-degree frequencies from Birmingham Solar Oscillation Network, <cit.> interpreted the seismic signatures of the QBOs as a result of the second dynamo mechanism seated near the bottom of the layer extending 5% below the solar surface.Recently, <cit.> supported these findings, stating that the magnetic field responsible for producing QBOs in the frequency shifts of p-modes is anchored above approximately the upper 5% of the solar interior.They also found that the presence of the QBOs is not sensitive to the depth to which the p-mode traveled, nor to the average frequency of the p-mode. <cit.> have discussed a different scenario and suggested that the observed properties could result from the beating between a dipole and quadrupole magnetic configuration of the dynamo. The understanding of the mechanisms responsible for QBOs has been advanced significantly in recent years, but itis not yet fully understood.§ SUMMARYIn summary, by analyzing the acoustic mode oscillation frequencies from GONG for cycles 23 and 24, we affirm that the QBO-type signals are present in both cycles. The amplitudes are found to vary with the progression of the cycle, appearing higher during the high-activity phase, with a subsequent decrease during the minimum-activity period. This is not unusual, as similar results have been reported in earlier studies <cit.>. However, the most striking features found in this study are the double-peak structure in the global wavelet spectra and different QBO periods in cycles 23 and 24. The dominant QBO periods are found to be about 2 and 3 years in cycles 23 and 24, respectively.Since these periods are found to be influenced by the changes in the near-surface layers, we conjecture that their source might be located in these layers.Note thatconsistent helioseismic data are currently available only for two solar cycles having different characteristics, thus continuous measurements for several solar cycles and their inclusion in solar dynamo models are required for a better understanding of the origin of QBOs. We thank the reviewer for several useful suggestions. This work utilizes GONG data obtained by the NSO Integrated Synoptic Program, managed by the National Solar Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA), Inc. under a cooperative agreement with the National Science Foundation and with a contribution from the National Oceanic and Atmospheric Administration. The GONG network of instruments is hosted by the Big Bear Solar Observatory, High Altitude Observatory, Learmonth Solar Observatory, Udaipur Solar Observatory, Instituto de Astrofísica de Canarias, and Cerro Tololo Inter-American Observatory. K.J. and S.C.T. acknowledge partial funding from the NASA DRIVE Science Center COFFIES Phase II grant 80NSSC22M0162 to Stanford University. GONG aasjournal | http://arxiv.org/abs/2311.16331v1 | {
"authors": [
"Kiran Jain",
"Partha Chowdhury",
"Sushanta C. Tripathy"
],
"categories": [
"astro-ph.SR"
],
"primary_category": "astro-ph.SR",
"published": "20231127213553",
"title": "Helioseismic Investigation of Quasi-biennial Oscillation Source Regions"
} |
FALCON: Fairness Learning via Contrastive Attention Approach to Continual Semantic Scene Understanding in Open World

Thanh-Dat Truong^1, Utsav Prabhu^2, Bhiksha Raj^3,4, Jackson Cothren^5, Khoa Luu^1
^1CVIU Lab, University of Arkansas, USA; ^2Google, USA; ^3Carnegie Mellon University, USA; ^4Mohammed bin Zayed University of AI, UAE; ^5Dep. of Geosciences, University of Arkansas, USA
{tt032, jcothre, khoaluu}@uark.edu, [email protected], [email protected]
January 14, 2024

Continual Learning in semantic scene segmentation aims to continually learn new unseen classes in dynamic environments while maintaining previously learned knowledge. Prior studies focused on modeling the catastrophic forgetting and background shift challenges in continual learning. However, fairness, another major challenge that causes unfair predictions leading to low performance among major and minor classes, still needs to be well addressed. In addition, prior methods have yet to model the unknown classes well, thus producing non-discriminative features among unknown classes. This paper presents a novel Fairness Learning via Contrastive Attention Approach to continual learning in semantic scene understanding. In particular, we first introduce a new Fairness Contrastive Clustering loss to address the problems of catastrophic forgetting and fairness. Then, we propose an attention-based visual grammar approach to effectively model the background shift problem and unknown classes, producing better feature representations for different unknown classes. Through our experiments, our proposed approach achieves State-of-the-Art (SOTA) performance on different continual learning settings of three standard benchmarks, i.e., ADE20K, Cityscapes, and Pascal VOC. It also promotes the fairness of the continual semantic segmentation model.

§ INTRODUCTION
The semantic segmentation networks, e.g., Transformers <cit.> and Convolutional Neural Networks <cit.>, learned from data with a closed set of known classes have shown outstanding performance. However, they often suffer performance degradation when encountering novel objects or classes in new dynamic environments <cit.>. To improve their performance, several transfer learning and domain adaptation methods <cit.> were introduced to adapt trained models to deployed environments. While the former often aims to fine-tune the model on labeled data collected in the new environments, the latter adapts the model to the new domains in an unsupervised manner <cit.>. However, these methods cannot handle novel objects well due to their closed-set learning. In practice, the semantic segmentation models should be able to adaptively and continually learn the new knowledge of novel classes. This motivates the development of the Continual Learning paradigm <cit.>, a.k.a. Continual Semantic Segmentation (CSS), where the segmentation models are learned sequentially on new contents of data. Far apart from prior segmentation methods <cit.> that learn one time on static, closed-set data, Continual Learning requires the segmentation models to learn from dynamic, open-set data <cit.>.
In addition, in particular scenarios, accessing previous learning data is restricted due to privacy concerns. In CSS, three challenges have been identified, including (1) Catastrophic Forgetting, (2) Background Shift, and (3) Fairness.While the catastrophic forgetting problem <cit.> depicts the segmentation model tends to forget its knowledge when learning new data,background shift indicates the problem of classes of previous or future data (unknown classes) have collapsed into a background class <cit.>.Prior methods <cit.> addressed these two problems by introducing knowledge distillation and pseudo labels. However, these methods can not handle unknown classes since they either consider these unknown classes as a background class or assign unknown pixels by a pseudo label of prior known classes <cit.>. More importantly, the last problem, fairness, is a significant challenge that limits the performance of CSS models. As shown in Fig. <ref>, the number of pixels of each class in training data have been imbalanced among classes and significantly decreased after each task. Thus, this bias influences the learning procedure and model predictions that later cause unfair predictions among classes. However, limited studies are taking the fairness problem into account. <cit.> presented a similar problem in domain adaptation and extended it to continual learning <cit.>. These methods rely on the assumption of ideal fair or balanced data distributions. However, it is not applicable in practice since the size, i.e., the number of pixels, of several classes can never be more significant than others. For example, the size of the bottle should not be more significant than the size of a car. Meanwhile, the current knowledge distillation methods <cit.> in CSS are unable to handle the fairness problem since they focus on modeling catastrophic forgetting and background shift problems. Therefore, it is essential to develop a new CSS approach to address these limitations.Contributions of This Work: This work presents a novel Fairness Learning via Contrastive Attention Approach (FALCON) to Continual Semantic Segmentation (as shown in Fig. <ref>).First, we introduce a novel Contrastive Clustering Paradigm approach to Continual Learning that models the catastrophic forgetting problem. Second, by analyzing the limitation of vanilla Contrastive Clustering in biased data, we introduce a novel Fairness Contrastive Clustering loss to model the fairness problem in continual learning efficiently. Third, to effectively model the background shift problem, we introduce a new Attention-based Visual Grammarthat model the topological structures of feature distribution to handle the unknown classes effectively. Finally, the ablation studies illustrate the effectiveness of the proposed approach in different aspects of fairness promotion in CSS models. Compared with prior methods, our approach achieves state-of-the-art performance on different settings of three standard benchmarks of CSS, including ADE20K, Pascal VOC, and Cityscapes. § RELATED WORKContinual Semantic SegmentationSeveral studies have been introduced to address catastrophic forgetting and background shift problems in CSS <cit.>. The common approach of CSS adopts knowledge distillation <cit.> and pseudo labels <cit.> to model catastrophic forgetting and background shift, respectively. 
Later, it was further improved by decoupling knowledge representations <cit.>, modeling the inter- and intra-class knowledge <cit.>, distinguishing the feature representations of the future classes <cit.>, reducing background confusion <cit.>, or modeling the distillation loss via the geodesic flow <cit.>. Another approach <cit.> adopted mask-based segmentation networks <cit.> to improve the performance of CSS models. Recent studies have also introduced CSS under unsupervised domain adaptation settings <cit.>. However, prior studies have not yet modeled the unknown classes well in open-world environments. Particularly, as previous methods <cit.> use pseudo labels to model the unknown classes, the future classes are treated as a background class. Joseph et al. <cit.> improved the unknown class modeling by using clustering, but this method considers all different unknown classes as a single cluster, leading to non-discriminative features among unknown classes.

Contrastive Learning is a common learning approach <cit.> to structure the deep feature representations in the latent space. van den Oord et al. <cit.> first introduced the Noise-Contrastive Estimation (InfoNCE) learning framework. Then, Chen et al. <cit.> presented SimCLR, a self-supervised contrastive learning approach to improve the representation power of Residual Networks. He et al. <cit.> proposed a Momentum Contrast framework for unsupervised representation learning. Later, it was further improved by using an MLP projection head <cit.> and extended to improve the self-supervised training process of vision transformers <cit.>. Cui et al. <cit.> introduced a supervised parametric contrastive learning loss to address long-tailed recognition. Li et al. <cit.> adopted contrastive learning to develop a one-stage online contrastive clustering method. Radford et al. <cit.> presented a contrastive framework to learn a vision-language model. Later, several methods also adopted this framework for vision-language pretraining <cit.>.

Imbalanced and Fairness Learning The early methods utilized the balanced Softmax loss <cit.> to alleviate the impact of imbalanced data distribution. Later, Wang et al. <cit.> introduced a Seesaw loss to re-balance the contributions of positive and negative instances via the mitigation and compensation modules. Liu et al. <cit.> introduced a dynamic meta-embedding to model the imbalanced classification problem. Chu et al. <cit.> reduced the bias in the segmentation model by presenting a new stochastic training scheme. Szabo et al. <cit.> presented a tilted cross-entropy loss to promote class-relevant fairness. However, there are limited studies that address the fairness problem in CSS. Truong et al. <cit.> introduced a fairness domain adaptation approach to semantic segmentation and later extended it to the continual learning setting <cit.>. However, these methods <cit.> rely on the assumption of ideally balanced data, which cannot be achieved in practice. To address the limitations of prior work, this paper introduces a novel approach to effectively model the fairness problem and unknown classes in the continual learning setting.

§ THE PROPOSED FALCON APPROACH
CSS aims to learn a segmentation network F on sequence data 𝒟 = {𝒟^1, ..., 𝒟^T}, where T is the number of learning steps.
At learning step t, the model F encounters a dataset 𝒟^t = {(𝐱^t, ŷ^t)}, where 𝐱^t ∈ ℝ^{H × W × 3} is the image and ŷ^t ∈ ℝ^{H × W} is the segmentation label of 𝐱^t. The ground truths at learning step t only consist of the current classes 𝒞^t, while the class labels of the previous steps 𝒞^{1...t-1} and future steps 𝒞^{t+1...T} are collapsed into a background class. Formally, learning the CSS model at step t can be formed as Eqn. (<ref>):

θ_t^* = min_{θ_t} 𝔼_{𝐱^t, ŷ^t ∈ 𝒟^t} [ ℒ_CE(𝐲^t, ŷ^t) + λ_CL ℒ_CL(F(𝐱^t)) ],

where 𝐲^t = F(𝐱^t, θ_t), θ_t is the parameter of F at the current learning step t, ℒ_CE is the cross-entropy loss, λ_CL is the balanced weight, and ℒ_CL is the CSS objective. At learning step t, the segmentation model F is required to predict both the previously learned classes 𝒞^{1...t-1} and the current new classes 𝒞^t. Under this learning scenario, three challenges have been identified, i.e., Catastrophic Forgetting, Background Shift, and Fairness. Several prior methods were presented to model the first two issues in CSS using knowledge distillation <cit.>. The last issue has not been well addressed yet due to its challenges <cit.>. Prior methods <cit.> adopt knowledge distillation to design ℒ_CL. However, this approach prevents the CSS model from diverging from previously learned knowledge, thereby limiting its ability to adopt new knowledge <cit.>. In addition, these methods have not addressed the fairness and background shift problems due to their dedicated design for maintaining knowledge via distillation <cit.>. Therefore, to address these problems, we introduce a novel Fairness Learning via Contrastive Attention Approach to CSS.

§.§ Continual Learning via Contrastive Clustering
Apart from prior methods <cit.>, our CSS is defined as Contrastive Clustering Learning over a set of centroid vectors {𝐜_i}_{i=1}^{N_K + N_U}, where N_K = |𝒞^{1..t}| and N_U are the numbers of known and unknown classes up to the current learning task. Prior work <cit.> often defined the number of unknown classes as 1, where the background classes are considered a single unknown class. Formally, our Contrastive Clustering Learning for CSS can be defined as Eqn. (<ref>):

ℒ_CL(F(𝐱^t)) = ∑_{𝐜_i} ℒ_Cont(𝐅^t, 𝐜_i) = ∑_{𝐜_i} ∑_{h,w} -ϕ(𝐟^t_{h,w}, 𝐜_i) log [ exp(𝐟^t_{h,w} × 𝐜_i) / ∑_{𝐟'} exp(𝐟' × 𝐜_i) ],

where 𝐅^t ∈ ℝ^{H × W × D} is the feature map extracted from the input image 𝐱^t by the segmentation network F, 𝐟^t_{h,w} ∈ ℝ^D is the feature at pixel location (h, w) of 𝐅^t, ∑_{𝐟'} denotes the summation over all feature representations 𝐟' ∈ ℝ^D, and ϕ: ℝ^D × ℝ^D → [0, 1] is the function that determines whether 𝐟^t_{h,w} belongs to the cluster 𝐜_i or not. By defining CSS as contrastive clustering learning, the knowledge of the segmentation model is well maintained via the cluster vectors 𝐜. Then, minimizing Eqn. (<ref>) will separate the representations of different classes while gathering the features of the same class into the same cluster. As the cluster vectors 𝐜 of the old classes 𝒞^{1..t-1} have been well learned to represent their knowledge, these vectors are frozen at learning step t to maintain the knowledge representations of previous classes and address the catastrophic forgetting problem. To effectively learn the cluster vectors 𝐜, each cluster vector is periodically updated after every M steps by the momentum update <cit.>, based on the features 𝐟^t_{h,w} assigned to cluster 𝐜. However, there are two major problems in contrastive clustering learning. First, since the training data in CSS suffer from bias among classes, as shown in Fig. <ref>, this bias will influence Eqn.
(<ref>) and cause unfair predictions. Second, as the function ϕ requires labels to determine which features belong to which clusters, it limits the ability to model the unknown classes, where labels are not available. Therefore, Secs. <ref>-<ref> will present a novel approach to tackle these problems.

§.§ Fairness Contrastive Clustering Learning
While the contrastive clustering learning defined in Eqn. (<ref>) promotes compact representations of features around their clusters, inspired by <cit.>, we observe that the imbalanced class distribution induces unfair behaviors among classes. In particular, for simplicity, we consider {𝐟^t_i}_{i=1}^L as the set of features that belong to the cluster 𝐜 at learning step t (i.e., ϕ(𝐟_i^t, 𝐜) = 1), where L is the number of features (in this case, L is the total number of pixels belonging to the class of cluster 𝐜). Let us define the enforcement between the feature 𝐟^t_i and the cluster 𝐜 as ℓ_i = exp(𝐟^t_i × 𝐜) / ∑_{𝐟'} exp(𝐟' × 𝐜). Hence, the lower the value of the enforcement ℓ_i is, the more compact the representation of visual features and clusters is. Then, the contrastive clustering loss in Eqn. (<ref>) over the entire cluster 𝐜 can be defined as Eqn. (<ref>):

ℒ_Cont(·, 𝐜) = -∑_{i=1}^L log [ exp(𝐟^t_i × 𝐜) / ∑_{𝐟'} exp(𝐟' × 𝐜) ] = -∑_{i=1}^L log ℓ_i.

Proposition 1: If the contrastive clustering loss ℒ_Cont(·, 𝐜) achieves the optimal value, the enforcement ℓ_i between the feature and the cluster will converge to ℓ_i = L^{-1}.

Proposition 1 implies that a class with more samples will result in a lower value of the enforcement and produce a more compact representation, while a class with fewer samples will be more scattered in the feature space due to the higher value of the enforcement. In particular, let L_major and L_minor be the numbers of samples of the major and minor classes, where L_major >> L_minor. Then, based on Proposition 1, the enforcement between features and the cluster of the major class will be significantly lower than that of the minor class, i.e., L_major^{-1} << L_minor^{-1}. Therefore, a direct adoption of the contrastive clustering loss in Eqn. (<ref>) will result in an unfair CSS model. In addition, for classes in the minority group, the weak enforcement results in feature representations that lie far away from their clusters. Thus, the model will produce non-discriminative features compared to the ones in the majority group. Moreover, if the loss is applied to the cases of unknown labels, these feature representations can be scattered in the latent space and pulled into incorrect clusters due to the weak enforcement between features and clusters (Fig. <ref>). To address the unfairness problem in contrastive clustering learning, inspired by <cit.>, we introduce a scaling factor α and a learnable transition vector 𝐯 for each cluster 𝐜 (all clusters share the same value of α but have different vectors 𝐯). Our Fairness Contrastive Clustering Learning Loss for the entire cluster in Eqn.
(<ref>) can be re-formed as:

ℒ^α_Cont(·, 𝐜) = -α ∑_{i=1}^L log [ exp(𝐟^t_i × 𝐜) / ∑_{𝐟'} exp(𝐟' × 𝐜) ] - log [ exp(𝐯 × 𝐜) / ∑_{𝐟'} exp(𝐟' × 𝐜) ].

Intuitively, the scaling factor α helps to re-scale the impact of the enforcement in learning, and the transitive vector 𝐯 assists in translating the cluster center into a proper position in the latent space. This action promotes the compactness of clusters in the minority group.

Proposition 2: If the fairness contrastive clustering loss ℒ^α_Cont(·, 𝐜) achieves the optimal value, the enforcement ℓ_i between the feature and the cluster will converge to ℓ_i = (α^{-1} + L)^{-1}.

Proofs of Propositions 1-2 are provided in the supplementary. Under Proposition 2, when the value of α is small, the divergence of the enforcement between major and minor classes will be smaller, i.e., ||(α^{-1}+L_major)^{-1} - (α^{-1}+L_minor)^{-1}|| < ||L^{-1}_major - L^{-1}_minor||. Fig. <ref> illustrates the impact of the fairness contrastive clustering loss. Therefore, our proposed fairness contrastive loss effectively addresses the fairness issue in Eqn. (<ref>). It should be noted that a smaller α results in fairer enforcement across major and minor classes. However, if the value of the scaling factor α is too small, the contrastive clustering loss will rely more on the enforcement of the transitive vector 𝐯, and the distribution of features 𝐟^t_i around its cluster 𝐜 will be scattered due to the weak enforcement caused by the small α. Therefore, the value of the scaling factor α should be carefully selected in practice.

§.§ Open-world Unknown Class Modeling
An ideal CSS approach must be able to model the unknown classes without supervision, especially in open-world environments <cit.>. Prior studies have adopted pseudo-label strategies <cit.> based on the model predictions to assign labels for seen classes while unseen classes are ignored, thus resulting in non-discriminative features. <cit.> improved the background modeling by using an additional prototypical representation for unknown classes. However, these approaches consider different unknown classes as one (i.e., N_U = 1), resulting in non-distinguished representations of different unknown classes. Thus, modeling the function ϕ in Eqn. (<ref>) without supervision of different unknown classes (i.e., N_U > 1) is challenging. Although modeling ϕ to determine whether a single feature 𝐟 belongs to the cluster 𝐜 is challenging, prior studies in clustering <cit.> have suggested that determining whether a set of features {𝐟^t_i}_{i=1}^M belongs to cluster 𝐜 should be easier. This derives from the fact that, even though the feature representations of different classes are different, the distributions of features around their clusters (termed the Visual Grammar) in the feature space should be similar among classes or clusters. As a result, by learning the distribution of features and their clusters, the model ϕ can determine whether a feature belongs to a cluster. Then, by learning the model ϕ on prior known clusters and features, the knowledge of ϕ can be adaptively applied to unknown clusters. Fig. <ref> illustrates our visual grammar model of the cluster distributions.

Limitations of Prior Clustering Methods The traditional clustering methods, e.g., KNN or density-based clustering <cit.>, remain sensitive to noisy features, leading to incorrect cluster assignments.
Meanwhile, modern clustering methods, e.g., Graph Neural Networks (GNNs) <cit.>, require a large memory to build the affinity graph for clusters. In addition, GNNs often learn the local structures of graphs (or clusters) and accumulate them via the aggregation layers. Hence, the global structures of the clusters, i.e., the visual grammar, are not well modeled by GNNs <cit.>. Therefore, to address these limitations, we introduce a new Attention-based Visual Grammar approach to efficiently model the distribution of features and their clusters via the self-attention mechanism <cit.>.

Remark 1: Given a center 𝐜 and a set of M features {𝐟^𝐜_i}_{i=1}^M, where 𝐟^𝐜_i denotes the feature 𝐟_i belonging to the cluster 𝐜 and ∀ i ∈ [1..M-1]: cos(𝐟_i^𝐜, 𝐜) ≥ cos(𝐟_{i+1}^𝐜, 𝐜), the Visual Grammar of the cluster 𝐜 parameterized by Θ can be defined as Eqn. (<ref>):

min_Θ 𝔼_{𝐜, {𝐟^𝐜_i}_{i=1}^M} [-log p(𝐟^𝐜_1, 𝐟^𝐜_2, ..., 𝐟^𝐜_M, 𝐜, Θ)] = min_Θ 𝔼_{𝐜, {𝐟^𝐜_i}_{i=1}^M} [-log p(Δ^𝐜_1, Δ^𝐜_2, ..., Δ^𝐜_M, 𝐜, Θ)],

where Δ^𝐜_i = 𝐟^𝐜_i - 𝐜. Eqn. (<ref>) defines the visual grammar of the cluster by modeling the distribution of the features 𝐟_i^𝐜 around their cluster center 𝐜. Let ϕ: ℝ^{(M + 1) × D} → [0, 1]^M be a function receiving a center 𝐜 and a set of M features {𝐟_i}_{i=1}^M (with cos(𝐟_i, 𝐜) ≥ cos(𝐟_{i+1}, 𝐜)) to determine whether each 𝐟_i belongs to 𝐜, i.e., 𝐮 = ϕ(Δ_1, Δ_2, ..., Δ_M, 𝐜), where Δ_i = 𝐟_i - 𝐜, 𝐮 = [u_1, u_2, ..., u_M], and u_i = 1 denotes that 𝐟_i belongs to cluster 𝐜 and vice versa. Hence, the visual grammar model in Eqn. (<ref>) can be modeled by the network ϕ with parameter Θ as follows:

Θ^* = min_Θ 𝔼_{𝐜, {𝐟_i}_{i=1}^M} [-log p(𝐮 | Δ_1, Δ_2, ..., Δ_M, 𝐜, Θ)].

Eqn. (<ref>) aims to model the distribution of features around their cluster by learning the correlation of the relative topological structures Δ_i of the features 𝐟_i around the cluster 𝐜. Then, based on this knowledge of the cluster distribution, the model ϕ is able to determine whether a feature 𝐟_i belongs to cluster 𝐜. Hence, it is essential that the model ϕ is able to exploit the correlation between the features 𝐟_i and the cluster 𝐜 to learn the topological structure of the visual grammar. Therefore, we adopt the self-attention mechanism <cit.> to efficiently model these feature correlations. In particular, the model ϕ is formed by L_ϕ blocks of self-attention as follows:

𝐳_0 = LN([Δ_1, ..., Δ_M, 𝐜]) + β,  𝐚_l = 𝐳_l + MHSA(𝐳_l),  𝐳_{l+1} = 𝐚_l + MLP(LN(𝐚_l)),  𝐮 = Proj(𝐳_{L_ϕ}),

where β is the positional embedding, LN is Layer Normalization, MHSA is multi-head self-attention, MLP is a multi-layer perceptron, and Proj is a linear projection. By using Transformers, the correlation of cluster distributions can be well modeled by the self-attention mechanism.

Cluster Assignment via Visual Grammar Instead of assigning clusters based on the model prediction <cit.> or the nearest cluster <cit.>, which are less effective, the cluster assignment in our approach is performed by the visual grammar model, i.e., the visual grammar model considers the M closest features around cluster 𝐜 to assign the cluster for these features. Then, the cluster assignments are used to compute our Fairness Contrastive Clustering loss. In addition, following common practices <cit.>, we improve background shift modeling by using the cluster assignments of features as the pseudo labels of pixels.

Unknown Cluster Initialization Prior work <cit.> initialized a single unknown cluster (N_U=1), thus producing non-discriminative class-wise features.
However, there should be more than a single unknown cluster (N_U>1) to produce discriminative features for different unknown classes.Therefore, our approach first initializes a list of potential unknown clusters at each learning step via DB-SCAN <cit.> on the features of unknown classes extracted by the current CSS model. For the new known class 𝒞^t, we initialize these clusters based on the mean of their feature representations. Meanwhile, the clusters of known classes learned in previous steps are maintained.§.§ Continual Learning ProcedureFig. <ref> illustrates the training procedure of our continual learning approach. At each learning step t, the CSS model F with θ_t is trained with the Fairness Contrastive Clustering loss defined in Eqn. (<ref>) and the previous visual grammar model ϕ with Θ_t-1. In addition, we introduce a cluster regularizer ℛ_C to avoid the clusters of different classes collapsing into a single cluster. Therefore, the entire CSS learning objective in our approach can be formed as:min_θ_t𝔼_𝐱^t, ŷ^t[ℒ_CE(𝐲^t,ŷ^t) + λ_CL∑_𝐜_iℒ^α_Cont(𝐅^t, 𝐜_i )+ λ_Cℛ_C(𝐜)]where ℛ_C(𝐜) = ∑_𝐜_i, 𝐜_j{max(0, 2∇ - ||𝐜_i -𝐜_j||)}^2 is the regularizer to avoid the cluster collapsing, λ_C is the balanced weight,and ∇ is the margin between clusters. Training Procedure of Visual Grammar Model At CSS learning step t, we adopt the visual grammar model trained on the previous learning step, i.e., ϕ with Θ_t-1, to perform the cluster assignment for the contrastive clustering loss defined in Eqn. (<ref>). Then, the visual grammar model at learning step t, i.e., ϕ with Θ_t, will be learned (initialized from Θ_t-1) on the features extracted from the dataset and the set of known clusters 𝐜 up to the current learning step.Following <cit.>, we sample a center 𝐜 from the known clusters and its M closest featuresto train the visual grammar model. Initial Visual Grammar Model At the first learning step t=1, since no clusters have been learned at initial, the visual grammar model ϕ with Θ_0 is not available. However, as common practices in CSS <cit.>, the segmentation model is typically trained from a pre-trained backbone on ImageNet <cit.>. As a result, the features extracted at the first learning step are characterized by the ImageNet features. Therefore, we adopt this philosophy to initialize our visual grammar model (ϕ with Θ_0) by pre-training the visual grammar model on the ImageNet dataset. Then, during CSS training, we will progressively train our visual grammar model at each learning step as aforementioned. § EXPERIMENTS §.§ Implementations and Evaluation Protocols Implementation Following common practices <cit.>, we adopt DeepLab-V3 <cit.> with ResNet-101 <cit.> and SegFormer <cit.> with MiT-B3 <cit.> in our experiments. For the Visual Grammar model, we adopt the design of <cit.> with L_ϕ=12 blocks of multi-head self-attention layers. The feature vectors from the last layer of the decoder are used for our ℒ^α_Cont loss.The value α is set individually for each dataset, i.e., α = 5×10^-2 for ADE20K, α = 10^-2 for VOC for Cityscapes. The details of our hyper-parameters are provided in the supplementary. Evaluation Protocols:We evaluate models on three standard datasets of CSS, i.e., ADE20K <cit.>, Pascal VOC <cit.>, and Cityscapes <cit.>. Following common practices <cit.>, our experiments are conducted on the overlapped CSS settings.In particular, on ADE20K, we use three different settings, i.e., ADE20K 100-50 (2 steps), ADE20K 100-10 (6 steps), and ADE20K 100-5 (11 steps). 
§ EXPERIMENTS §.§ Implementations and Evaluation Protocols Implementation Following common practices <cit.>, we adopt DeepLab-V3 <cit.> with ResNet-101 <cit.> and SegFormer <cit.> with MiT-B3 <cit.> in our experiments. For the Visual Grammar model, we adopt the design of <cit.> with L_ϕ=12 blocks of multi-head self-attention layers. The feature vectors from the last layer of the decoder are used for our ℒ^α_Cont loss. The value of α is set individually for each dataset, i.e., α = 5×10^-2 for ADE20K and α = 10^-2 for Pascal VOC and Cityscapes. The details of our hyper-parameters are provided in the supplementary. Evaluation Protocols: We evaluate models on three standard CSS datasets, i.e., ADE20K <cit.>, Pascal VOC <cit.>, and Cityscapes <cit.>. Following common practices <cit.>, our experiments are conducted on the overlapped CSS settings. In particular, on ADE20K, we use three different settings, i.e., ADE20K 100-50 (2 steps), ADE20K 100-10 (6 steps), and ADE20K 100-5 (11 steps). On Pascal VOC, we evaluate FALCON on three benchmarks, i.e., VOC 15-5 (2 steps), VOC 15-1 (6 steps), and VOC 10-1 (11 steps). On Cityscapes, we conduct domain-incremental experiments with three settings, i.e., Cityscapes 11-5 (3 steps), Cityscapes 11-1 (11 steps), and Cityscapes 1-1 (21 steps). Following <cit.>, the mean Intersection over Union (mIoU) metric is adopted in our comparison, including the mIoU at the last learning step on initial classes, incremental classes, and all classes. In addition, to illustrate the fairness improvement, we report the mIoU of major and minor classes. §.§ Ablation Study Effectiveness of Fairness Contrastive Clustering Table <ref> presents our results using DeepLab-V3 <cit.> with ResNet-101 on the ADE20K 100-50 and ADE20K 100-10 benchmarks. We evaluate the impact of the fairness contrastive clustering loss ℒ^α_Cont by comparing it with the vanilla contrastive clustering loss ℒ_Cont. As shown in our results, the overall performance is significantly improved, to 37.9% and 36.4% on ADE20K 100-50 and ADE20K 100-10, respectively. In addition, the fairness of the model is promoted, since the mIoU performance of both major and minor groups is enhanced. We also study the impact of network backbones and the cluster margin ∇ in our supplementary. Effectiveness of Scaling Factor of Cluster Table <ref> illustrates the impact of different scaling factors α on the ADE20K 100-50 and Pascal VOC 15-5 benchmarks. As shown in Table <ref>, as the scaling factor α gradually decreases, the performance of our proposed approach improves accordingly, since the fairness contrastive loss in Eqn. (<ref>) tends to become more uniform across major and minor classes. However, when the scaling factor is too small (α = 0.005), the loss enforcement becomes weaker, leading to weaker fairness contrastive clustering and lower overall performance. In addition, we observe that a higher number of classes demands a higher value of α, since it increases the compactness of more clusters. Effectiveness of Loss Contributions Table <ref> illustrates the contributions of the proposed learning objectives. For the model without the visual grammar, we only use a single unknown cluster (N_U = 1) and adopt the nearest-cluster strategy to assign clusters to unknown pixels. Using only the cross-entropy loss, the mIoU performance remains low due to catastrophic forgetting and background shift. Meanwhile, with our fairness clustering loss ℒ^α_Cont, the visual grammar model ϕ, and the cluster regularizer ℛ_C, the mIoU performance is significantly improved, to 37.9% and 36.4% on ADE20K 100-50 and ADE20K 100-10, respectively. Moreover, our FALCON significantly promotes the fairness of segmentation models, as illustrated by the mIoU improvement of major and minor groups. Effectiveness of Visual Grammar We evaluate FALCON under three settings, i.e., Nearest Cluster, Fixed ϕ pretrained on ImageNet (without updating at each learning step), and Adaptive ϕ (with updating at each learning step). As in Table <ref>, the mIoU result using only the nearest cluster remains ineffective. Meanwhile, the adaptive visual grammar model updated at each learning step further boosts the mIoU performance and promotes fairness, i.e., increases of 4.5% and 4.9% on ADE20K 100-50 and ADE20K 100-10 compared to the nearest-cluster approach.
In addition, we study the impact of choosing the number of features M in the visual grammar model in our supplementary. Fig. <ref> illustrates the feature distributions of unknown (future) classes. As a result, our FALCON approach is able to group the features of unknown classes into different clusters and produces better and more compact clusters compared to the variant without Fairness Learning via Contrastive Attention. §.§ Comparison with Prior SOTA Methods ADE20K Table <ref> presents our experimental results using DeepLab-V3 and Transformer networks compared to prior CSS methods. Overall, our proposed approach achieves SOTA performance compared to prior methods. In particular, using DeepLab-V3, our approach achieves mIoU results of 37.9% and 36.4% on the ADE20K 100-50 and ADE20K 100-10 benchmarks, higher than the prior FairCL <cit.>. Meanwhile, our approach using Transformer outperforms the prior SOTA CoMFormer <cit.> model by +3.5%, +8.0%, and +2.6% on ADE20K 100-50, ADE20K 100-10, and ADE20K 100-5, respectively. In addition, our mIoU results on the initial classes remain competitive with the upper-bound results because our method handles the fairness problem well compared to the fully supervised learning approach. We also report our results on the ADE20K 50-50 benchmark in the supplementary. As in Fig. <ref>, FALCON produces better segmentation maps compared to prior methods. Pascal VOC Table <ref> presents the results of our FALCON on the Pascal VOC benchmarks. Our proposed approach consistently achieves SOTA performance on all three benchmarks. In particular, compared to the prior FairCL <cit.> approach, our method using DeepLab-V3 improves the mIoU performance up to 73.50%, 69.83%, and 62.41% on Pascal VOC 15-5, Pascal VOC 15-1, and Pascal VOC 10-1, respectively. Additionally, with the stronger network backbone, i.e., Transformer, the performance of the segmentation model is further improved. Our results reduce the gap with the upper-bound performance. Cityscapes Table <ref> reports the performance of our approach using DeepLab-V3 compared to prior methods on three different settings of the Cityscapes benchmarks, i.e., Cityscapes 11-5, Cityscapes 11-1, and Cityscapes 1-1. As shown in the experimental results, our method consistently outperforms the prior FairCL <cit.> approach, by +3.78%, +3.14%, and +6.02% on the three benchmarks. Similar to our experiments on ADE20K and VOC, a stronger network yields higher results. These results demonstrate the effectiveness of FALCON on various benchmarks. § CONCLUSIONS This paper has presented a novel Fairness Learning via Contrastive Attention approach to CSS. In particular, the fairness contrastive clustering loss has been introduced to address both the catastrophic forgetting and fairness problems. Then, the visual grammar model was presented to model the unknown classes. The experimental results on different benchmarks have shown the SOTA performance and fairness improvements of our proposed FALCON approach. Limitations Our study chose a set of learning hyper-parameters to support our theoretical analysis. However, it has several potential limitations related to the choice of learning parameters and cluster initialization. The details of our limitations are discussed in the supplementary.
These limitations will motivate future research to further improve our Fairness Learning via Contrastive Attention. Supplementary § PROOF OF PROPOSITIONS 1 AND 2 §.§ Proof of Proposition 1 Proposition 1: If the contrastive clustering loss ℒ_Cont(𝐅^t, 𝐜) achieves its optimal value, the enforcement ℓ_i between a feature and the cluster converges to ℓ_i = L^-1. Proof: Let us consider the optimization of Eqn. (4) in the paper as follows: min -∑_i=1^L log exp(𝐟^t_i×𝐜)/∑_𝐟' exp(𝐟'×𝐜) = -∑_i=1^L log ℓ_i subject to ∑_i=1^L ℓ_i = ℓ, where ℓ is the total enforcement between the features 𝐟^t_i and the cluster 𝐜. Then, the optimization of Eqn. (4) in the paper can be rewritten using a Lagrange multiplier as follows: ℒ({ℓ_i}_i=1^L, λ) = -∑_i=1^L log ℓ_i + λ(∑_i=1^L ℓ_i - ℓ), where λ is the Lagrange multiplier. Then, the contrastive clustering loss in Eqn. (4) in the paper achieves its minimum if and only if ∂ℒ({ℓ_i}_i=1^L, λ)/∂ℓ_i = -ℓ_i^-1 + λ = 0 and ∂ℒ({ℓ_i}_i=1^L, λ)/∂λ = ∑_i=1^L ℓ_i - ℓ = 0, which yields ℒ({ℓ_i}_i=1^L, λ) = -L log(ℓ/L). As the total enforcement between features and the cluster is normalized, i.e., ℓ∈ [0,1], the contrastive clustering loss ℒ({ℓ_i}_i=1^L, λ) achieves its minimum when log ℓ = 0 ⇒ ℓ = 1. Then, the enforcement between a single feature and the cluster equals ℓ_i = ℓ/L = L^-1. §.§ Proof of Proposition 2 Proposition 2: If the fairness contrastive clustering loss ℒ^α_Cont(𝐅^t, 𝐜) achieves its optimal value, the enforcement ℓ_i between a feature and the cluster converges to ℓ_i = (α^-1 + L)^-1. Proof: We first define the enforcement between the transitive vector 𝐯 and the cluster 𝐜 as ℓ_𝐯 = exp(𝐯×𝐜)/∑_𝐟' exp(𝐟'×𝐜). Then, let us consider the optimization of Eqn. (5) in the paper as follows: min -∑_i=1^L α log ℓ_i - log ℓ_𝐯 subject to ∑_i=1^L ℓ_i + ℓ_𝐯 = ℓ. Similar to Eqn. (<ref>), Eqn. (<ref>) can be reformulated via a Lagrange multiplier as follows: ℒ({ℓ_i}_i=1^L, λ) = -∑_i=1^L α log ℓ_i - log ℓ_𝐯 + λ(∑_i=1^L ℓ_i + ℓ_𝐯 - ℓ). Then, the fairness contrastive loss ℒ^α_Cont achieves its minimum if and only if ∂ℒ({ℓ_i}_i=1^L, λ)/∂ℓ_i = -αℓ_i^-1 + λ = 0, ∂ℒ({ℓ_i}_i=1^L, λ)/∂ℓ_𝐯 = -ℓ_𝐯^-1 + λ = 0, and ∂ℒ({ℓ_i}_i=1^L, λ)/∂λ = ∑_i=1^L ℓ_i + ℓ_𝐯 - ℓ = 0, which yields ℒ({ℓ_i}_i=1^L, λ) = -α L log(αℓ/(1+α L)) - log(ℓ/(1+α L)). As in Eqn. (<ref>), the fairness contrastive learning loss ℒ({ℓ_i}_i=1^L, λ) achieves its minimum when log ℓ = 0 ⇒ ℓ = 1. Thus, the enforcement between a single feature and the cluster is re-balanced as ℓ_i = α/(1+α L) = (α^-1+L)^-1. § IMPLEMENTATION Implementation Our framework is implemented in PyTorch and trained on four 40GB-VRAM NVIDIA A100 GPUs. The contrastive loss in our implementation is normalized with respect to the number of samples. These models are optimized by the SGD optimizer <cit.> with momentum 0.9, weight decay 10^-4, and a batch size of 16. The learning rates of the first learning step and the continual steps are set to 10^-4 and 5×10^-5, respectively. To update the cluster vectors 𝐜, following prior work <cit.>, we maintain a set of 500 features for each cluster and update the clusters after 100 steps with a momentum of η = 0.99. In our domain-incremental experiments, all clusters are updated at each learning step by momentum update. The number of features selected for each cluster in the visual grammar model is set to M = 128. The balancing weights of the CSS objective λ_CL and the cluster regularizer λ_C are set to 1. Following common practice <cit.>, the margin between clusters ∇ is set to 10. Unknown Cluster Initialization As mentioned in the main paper, we adopt the DB-SCAN algorithm to initialize the clusters for unknown samples.
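A minimal sketch of this initialization step is given below, assuming scikit-learn's DBSCAN; the eps and min_samples values are illustrative placeholders rather than the settings used in our experiments.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def init_unknown_clusters(unknown_feats, eps=0.5, min_samples=10):
    # unknown_feats: (P, D) features extracted from pixels predicted as unknown classes
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(unknown_feats)
    centers = [unknown_feats[labels == k].mean(axis=0)
               for k in sorted(set(labels)) if k != -1]          # -1 marks DBSCAN noise points
    return np.stack(centers) if centers else np.empty((0, unknown_feats.shape[1]))
```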
In addition, to reduce noisy and isolated clusters, we also merge close clusters, i.e., if the distance between two clusters is less than the margin 2∇, they are merged into a single cluster whose new center is the mean of the two merged cluster centers. By empirical observation, we have noticed that the number of unknown clusters initialized at each learning step, i.e., N_U at the current learning step t, is not greater than 1.5× the number of remaining classes (i.e., |𝒞^t+1..T|) in the dataset; e.g., in our ADE20K 100-50 experiments, at the first learning step of 100 classes, 68 unknown clusters are initialized while there are 50 remaining unknown classes in the dataset. Cluster Assignment In our approach, we use our visual grammar model to assign a cluster to each feature representation. Although, in theory, a feature might not be assigned to any cluster by the visual grammar model, we have empirically observed that this issue rarely happens in our approach. Indeed, since we initialize the unknown clusters via DB-SCAN, it is guaranteed that for each feature there is at least one nearby cluster that the feature representation should belong to. However, to preserve the integrity of our approach, outlier features that cannot be assigned a cluster via the visual grammar model are heuristically assigned to their closest clusters, similar to <cit.>. Continual Learning Procedure Algorithm <ref> illustrates the training procedure of our CSS approach. § ADDITIONAL EXPERIMENTAL RESULTS §.§ Experimental Results on the ADE20K 50-50 Benchmark Table <ref> presents the results of our method on the ADE20K 50-50 benchmark compared to prior methods. For fair comparison, we use the DeepLab-V3 and Transformer networks in this experiment. As shown in the results, our proposed FALCON approach significantly outperforms prior methods. The results of our approach reduce the gap with the upper-bound result. §.§ Ablation Study Effectiveness of Choosing Margin ∇ Table <ref> studies the impact of the margin value ∇ on the performance of our approach on the ADE20K 100-50 and ADE20K 100-10 benchmarks. As shown in the results, changing ∇ slightly influences the performance of the model. Since the margin defines the distance between two clusters, a smaller margin ∇ could cause incorrect cluster assignments of the features, while a larger margin ∇ could produce less compact clusters. Effectiveness of Choosing Number of Features M We study the impact of choosing the number of features M in the visual grammar model. As shown in Table <ref>, the optimal performance of our approach is achieved at M = 128. When the number of selected features is small (M = 96), there are not enough features to form the visual grammar, so the model struggles to exploit the correlation among the features and the cluster. Meanwhile, when we increase the number of selected features (M = 256), the clusters contain many outlier features (ones that do not belong to the cluster), making it challenging for the visual grammar model to exploit the topological structures of the feature distribution. Effectiveness of Different Segmentation Networks To illustrate the flexibility of our proposed approach, we evaluate it with different network backbones.
Table <ref> illustrates the results of our approach using DeepLab-V3 <cit.> and SegFormer <cit.> with different backbones, i.e., ResNet-50, ResNet-101, MiT-B2, and MiT-B3. As shown by the results, the more powerful the segmentation model, the better the performance. In particular, our approach shows its flexibility since it consistently improves the performance of the segmentation model and achieves SOTA performance on two different benchmarks, i.e., the Transformer models achieve 41.9% and 40.3% on ADE20K 100-50 and ADE20K 100-10, respectively. § RELATION TO KNOWLEDGE DISTILLATION Knowledge distillation is a common approach to continual semantic segmentation <cit.>. Prior work in clustering <cit.> has shown that the clustering loss is an upper bound of the knowledge distillation loss. Formally, the knowledge distillation loss can be formed as follows: ℒ_distill(𝐱^t, F, θ_t, θ_t-1) = ℒ(𝐅^t-1, 𝐅^t), where 𝐅^t and 𝐅^t-1 are the feature representations extracted from the model at learning steps t and t-1, respectively, and the metric ℒ measures the knowledge gap between 𝐅^t and 𝐅^t-1. Then, given a set of clusters 𝐜, we consider the following triangle inequality of the metric ℒ: ∀𝐜: ℒ(𝐅^t, 𝐅^t-1) ≤ ℒ(𝐅^t, 𝐜) + ℒ(𝐜, 𝐅^t-1) ⇔ ℒ(𝐅^t, 𝐅^t-1) ≤ 1/|𝒞^1..T| ∑_𝐜[ℒ(𝐅^t, 𝐜) + ℒ(𝐜, 𝐅^t-1)], where the left-hand side corresponds to ℒ_distill and the term ℒ(𝐅^t, 𝐜) corresponds to ℒ_Cont. At the time the Contrastive Clustering loss is computed, the set of cluster vectors 𝐜 is fixed (and can be considered constant). In addition, the features extracted at learning step t-1, i.e., 𝐅^t-1, are constant due to the fixed pre-trained model θ_t-1. Therefore, without a strict argument, the distance ℒ(𝐜, 𝐅^t-1) can be considered constant. Hence, Eqn. (<ref>) can be further derived as follows: ℒ(𝐅^t, 𝐅^t-1) = 𝒪(1/|𝒞^1..T| ∑_𝐜[ℒ(𝐅^t, 𝐜) + ℒ(𝐜, 𝐅^t-1)]) = 𝒪(∑_𝐜 ℒ(𝐅^t, 𝐜)) ⇒ ℒ_distill(𝐅^t-1, 𝐅^t) = 𝒪(ℒ_Cont(𝐅^t, 𝐜)), since the factor 1/|𝒞^1..T| and the terms ℒ(𝐜, 𝐅^t-1) are constants, and where 𝒪 is the Big-O notation. Hence, from Eqn. (<ref>), without loss of generality, we can observe that the Contrastive Clustering loss is an upper bound of the Knowledge Distillation loss. Therefore, by minimizing the Contrastive Clustering loss, the constraint of Knowledge Distillation is also maintained due to the property of the upper bound. § DISCUSSION OF LIMITATIONS In our paper, we choose a specific set of hyper-parameters and learning approaches to support our hypothesis. However, our work has several potential limitations. First, the choice of the scaling factor α is a potential limitation of our approach. In practice, when data keeps continuously growing, a pre-defined scaling factor α may not be adequate to control the fairness among classes. Second, our work focuses on investigating the effectiveness of our proposed losses on fairness, catastrophic forgetting, and background shift; thus, the investigation of balancing weights among losses has not been fully explored, and we leave this experiment as future work.
Third, the initialization of unknown clusters at each training step leaves room for improvement, since poor initial clusters can make training and updating these clusters difficult, and linking the unknown clusters learned in previous steps with the unknown clusters newly initialized at the current step has not yet been fully exploited in our method. In addition, while our approach is designed for the DeepLab-V3 and Transformer segmentation networks <cit.>, extending FALCON to mask-based segmentation networks <cit.> is a potential direction for further performance improvement. These limitations could motivate new studies to further improve Fairness Learning via Contrastive Attention for continual learning in the future.
"authors": [
"Thanh-Dat Truong",
"Utsav Prabhu",
"Bhiksha Raj",
"Jackson Cothren",
"Khoa Luu"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231127160739",
"title": "FALCON: Fairness Learning via Contrastive Attention Approach to Continual Semantic Scene Understanding in Open World"
} |
==== Medical images are often characterized by their structured anatomical representations and spatially inhomogeneous contrasts.Leveraging anatomical priors in neural networks can greatly enhance their utility in resource-constrained clinical settings.Prior research has harnessed such information for image segmentation, yet progress in deformable image registration has been modest.Our work introduces textSCF, a novel method that integrates spatially covariant filters and textual anatomical prompts encoded by visual-language models, to fill this gap.This approach optimizes an implicit function that correlates text embeddings of anatomical regions to filter weights, relaxing the typical translation-invariance constraint of convolutional operations.TextSCF not only boosts computational efficiency but can also retain or improve registration accuracy.By capturing the contextual interplay between anatomical regions, it offers impressive inter-regional transferability and the ability to preserve structural discontinuities during registration.TextSCF's performance has been rigorously tested on inter-subject brain MRI and abdominal CT registration tasks, outperforming existing state-of-the-art models in the MICCAI Learn2Reg 2021 challenge and leading the leaderboard.In abdominal registrations, textSCF's larger model variant improved the Dice score by 11.3% over the second-best model, while its smaller variant maintained similar accuracy but with an 89.13% reduction in network parameters and a 98.34% decrease in computational operations. § INTRODUCTIONDeformable image registration aims to find a dense, non-linear correspondence between a pair/group of images to estimate their alignment transformation. This process is essential for numerous medical imaging applications, including tracking changes in longitudinal studies, measuring organ motion, and analyzing population-based studies <cit.>. Traditional methods <cit.> approach deformable image registration as a pairwise optimization problem. This optimization typically requires numerous iterations to minimize the energy function, which is computationally expensive and consequently limits its application in real-time and large-scale volumetric image registration. Additionally, these traditional methods depend on raw image contrasts, potentially leading to a loss of important contextual information.Recent advances in convolutional neural networks (ConvNets) <cit.> and transformers <cit.> have changed the landscape of medical image registration research <cit.>. Voxelmorph <cit.> has enabled unsupervised learning and real-time registration. Subsequent research has sought to enhance registration precision further by harnessing contextual cues from neural networks. This can be seen in strategies such as leveraging auxiliary segmentation masks for anatomy awareness <cit.> or preserving discontinuities <cit.>, employing cascaded <cit.>, parallel <cit.>, or band-limited architectures <cit.>, as well as attention-based transformers <cit.> to refine representation learning. Despite their advances, these methods often underperform on datasets with large deformations or limited training instances. While the integration of pre-trained anatomical embedding networks <cit.> with energy-based optimization presents a potential strategy to address these limitations <cit.>, they generally do not attain the efficiency levels of learning-based approaches. 
Typically, medical images obtained from modalities such as magnetic resonance imaging (MRI) and computerized tomography (CT) often exhibit structured anatomical patterns <cit.> and present spatially inhomogeneous contrasts <cit.>, lending themselves well for analysis using neural networks <cit.>.Acknowledging these characteristics leads us to two pivotal questions whose answers could mitigate the aforementioned challenges: Q1: How can we leverage the innate prior information within each dataset, such as the consistent anatomical structures observed across different scans? Q2: How might we incorporate external prior knowledge, like that from models pre-trained on disparate datasets?To tackle the above challenges, we introduce a novel weakly-supervised learning framework that integrates the capabilities of large-scale visual-language model with spatially covariant filters (textSCF). Our approach differs from previous methods by not only utilizing auxiliary segmentation masks in our loss function but also encoding these masks into an N× C embedding matrix, where N is the number of labels and C denotes the encoding length. Building upon earlier work in spatially covariant lesion segmentation <cit.>, our framework optimizes an implicit function that associates this embedding matrix with corresponding filter weights, ensuring that the output is inherently spatially variant and aligned with anatomical regions. Furthermore, we leverage the Contrastive Language-Image Pre-Training (CLIP) model <cit.> to generate the embedding matrix.By inputting text descriptions of anatomical regions, the model captures rich contextual information, such as the latent correlation of certain anatomical regions and discontinuities across these regions.In this study, we put our method to the test on two different tasks: inter-subject brain registration via MRI and abdomen registration using CT scans.Our research has yielded several interesting findings: * The textSCF approach consistently outperformed other leading methods in both brain and abdomen registration tasks.A significant achievement was securing the first place on the MICCAI Learn2Reg 2021 challenge leaderboard at the time of our submission.* Utilizing spatially covariant filters, textSCF demonstrated remarkable efficiency.A scaled-down version of our model delivered comparable accuracy in abdomen registration while substantially cutting down on network parameters by 89.13% and computational operations by 98.34%.* This research is the first to integrate text embeddings from visual-language model into volumetric image registration, improving the model's ability to contextually interpret anatomical structures.§ RELATED WORK§.§ Learning-based Medical Image RegistrationRecent advancements in unsupervised ConvNets have significantly enhanced medical image registration, removing the dependence on ground-truth deformation fields, and achieving real-time performance. VoxelMorph <cit.> pioneers the field by demonstrating that a generalized representation learned from a collection of image pairs could yield registration results on par with iterative approaches, and notably, in under a second for volumetric brain scans. 
Further research has explored various network designs, including parallel <cit.> and cascaded architectures <cit.>, as well as the incorporation of attention-based transformers <cit.> to refine the representation learning.In this paper, our focus is directed towards anatomically-aware methodologies <cit.> and the employment of pre-trained models from external datasets <cit.>, as they present promising solutions for enhancing registration accuracy, particularly for challenging datasets characterized by large deformations or limited training instances.<cit.> and <cit.> apply auxiliary segmentation masks to regularize the training process using a Dice loss.<cit.> goes a step further by incorporating multiple anatomical constraints, such as a curvature regularizer and an anatomical keypoints loss.<cit.> utilizes segmentation masks to enforce deformation discontinuities between different organs, thereby preserving clinically important indices. In terms of leveraging external data, <cit.> introduces a self-supervised framework named SAM, which is capable of generating voxel-wise embeddings that provide semantically coherent features aiding in accurate registration.Building on SAM, SAME <cit.> and SAMConvex <cit.> exhibit improved registration performance on datasets with large deformations by incorporating more sophisticated network designs and combining energy-based optimization within ConvNets. Different from conventional methods, our textSCF harnesses knowledge from both large-scale visual-language models <cit.> and specialized segmentation frameworks <cit.>. §.§ Text-driven Dense PredictionsLarge-scale visual-language pretraining models like CLIP <cit.> have shown impressive capabilities in complex visual tasks. Despite its training on instance-level text-image pairs via contrastive learning, CLIP's efficacy extends to downstream tasks requiring per-pixel dense predictions.Its effectiveness is demonstrated in various applications such as semantic segmentation <cit.>, referent segmentation <cit.>, instance segmentation <cit.>, and object detection <cit.>. Research leveraging these models for medical imaging, however, remains limited <cit.>.Most efforts concentrate on image instance-level analysis, like X-ray interpretation <cit.>.Yet, <cit.> has demonstrated that a carefully crafted framework could harness anatomical relationships from text-image models for building universal segmentation models.Inspired by these advancements, our work pioneers the integration of text-image model CLIP <cit.> with spatially covariant filters <cit.> for deformable image registration.TextSCF harnesses the latent relationships among anatomical regions by utilizing large-scale visual-language models <cit.>, enhancing its understanding of contextual anatomy. §.§ Spatially Covariant FiltersTranslation invariance is a characteristic of ConvNets, yet it's been revealed that these networks can implicitly capture positional cues, presenting both opportunities and constraints. <cit.> reveals that zero padding implicitly leaks location information, where a stack of deeper convolutional layers can improve the position readout.On the other hand, <cit.> argues that removing such positional encodings could enforce stronger translation invariance, beneficial in certain classification tasks. This suggests that ConvNets allocate a portion of their capacity to position encoding, implying that direct coordinate feeding as in CoordConv <cit.> can enhance network performance by utilizing its capacity. 
Similarly, <cit.> discovers that moderate relaxation of translation invariance could benefit classification tasks. <cit.> consolidates the above insights, introducing the Spatially Covariant Pixel-aligned classifier (SCP), which relaxes translation invariance through an implicit function that maps image coordinates to classifier weights. While SCP falls short for our registration task, which needs precise per-pixel deformation vectors, we draw on its principles, using segmentation masks coupled with text prompts to direct the generation of spatially covariant filters. § METHOD §.§ Preliminaries Deformable image registration determines voxel-level correspondences between a moving image 𝐈_m and a fixed image 𝐈_f. The spatial mapping is denoted as ϕ(x) = x + 𝐮(x), with x indicating a location in the domain Ω⊂𝐑^H× W× D and 𝐮(x) being the displacement vector at x. The displacement field 𝐮(x) warps the moving image 𝐈_m to align with the fixed image 𝐈_f, ensuring every voxel in 𝐈_f corresponds to a voxel in the warped 𝐈_m (denoted as 𝐈_m ∘ϕ), with trilinear interpolation calculating values for non-grid positions. Unsupervised learning involves training a network F_θ to estimate the deformation field ϕ from 𝐈_m and 𝐈_f: ϕ=F_θ(𝐈_m,𝐈_f). The network weights θ are optimized by minimizing a composite loss function ℒ, which combines measures of dissimilarity between the warped 𝐈_m and 𝐈_f, smoothness of the deformation field, and an auxiliary loss for enhanced alignment: ℒ = ℒ_sim(𝐈_f,𝐈_m ∘ϕ) + ℒ_aux(𝐉_f,𝐉_m∘ϕ) + λℒ_reg(ϕ), where ℒ_sim(·,·) quantifies the similarity, ℒ_reg(·) imposes regularization to ensure deformation smoothness, and ℒ_aux(·,·) serves as an auxiliary loss. The coefficient λ modulates the smoothness of the deformation field. In our implementation, we employ the Mean Square Error (MSE) loss to gauge dissimilarity, while the smoothness of the displacement field is encouraged using the L2 norm of its spatial gradients ||∇𝐮||^2. Consistent with <cit.>, a Dice loss evaluates the alignment between the moving segmentation 𝐉_m ∘ϕ and the fixed segmentation 𝐉_f. §.§ Weakly-supervised Registration In fully-supervised image registration, training involves both input images and their matching ground-truth deformation fields, while inference is conducted with just the input images. On the other hand, unsupervised image registration uses only input images for training and inference. Our weakly-supervised method uses input images and segmentation masks during both training and inference, offering a middle ground between fully-supervised and unsupervised approaches. Moreover, the segmentation masks employed during training can be sourced from either ground-truth data or generated automatically. For inference, the model relies solely on masks produced by external segmentation models, circumventing the need for additional manual effort and aligning the process with unsupervised methods. In this study, we aim to train a network F_θ that processes the input images 𝐈_m and 𝐈_f, along with the fixed segmentation mask 𝐉_f, to predict the deformation field: ϕ=F_θ(𝐈_m,𝐈_f,𝐉_f).
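To make the training objective concrete, a minimal PyTorch-style sketch of the composite loss above is given below; the helper names are illustrative, and the Dice term is assumed to operate on one-hot (or soft) label maps warped by the predicted field.

```python
import torch
import torch.nn.functional as F

def smoothness_loss(u):
    # L_reg: L2 norm of the spatial gradients of the displacement field u, shape (B, 3, H, W, D)
    dx = u[:, :, 1:, :, :] - u[:, :, :-1, :, :]
    dy = u[:, :, :, 1:, :] - u[:, :, :, :-1, :]
    dz = u[:, :, :, :, 1:] - u[:, :, :, :, :-1]
    return (dx ** 2).mean() + (dy ** 2).mean() + (dz ** 2).mean()

def soft_dice_loss(warped_seg, fixed_seg, eps=1e-5):
    # L_aux: Dice between the warped moving segmentation and the fixed segmentation, shape (B, N, H, W, D)
    dims = (2, 3, 4)
    inter = (warped_seg * fixed_seg).sum(dims)
    union = warped_seg.sum(dims) + fixed_seg.sum(dims)
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def registration_loss(warped_img, fixed_img, warped_seg, fixed_seg, u, lam=0.1):
    l_sim = F.mse_loss(warped_img, fixed_img)      # L_sim: MSE dissimilarity
    l_aux = soft_dice_loss(warped_seg, fixed_seg)  # L_aux: auxiliary Dice alignment
    l_reg = smoothness_loss(u)                     # L_reg: ||grad u||^2
    return l_sim + l_aux + lam * l_reg
```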
§.§ The Overall Framework of textSCF The proposed text-driven registration framework consists of three core components: a feature branch with a standard encoder-decoder network, a mask branch utilizing an external segmentation model to locate anatomical regions, and a text branch that encodes text prompts associated with anatomical regions. Next, we will detail each component within the framework. §.§.§ Text Branch Considering the impressive few-shot learning capabilities <cit.> and the ability to model anatomical relationships <cit.> of pretrained visual-language models like CLIP <cit.>, we aim to harness this knowledge by using CLIP to encode anatomical regions. For the k-th anatomical region, we generate a text prompt from a fixed template that names the region together with its imaging modality and body location (e.g., including phrases such as "a computerized tomography" and "in human abdomen"), where a placeholder is replaced with the region's name. We then employ CLIP to obtain the corresponding embedding vector 𝐛_k for that region. If we have N anatomical regions, the resulting vectors compose an embedding matrix 𝐁∈ℝ^N× C_1, where C_1 is the length of each vector. We carefully tailor the background vector to hedge against uncertainties in anatomical segmentation masks from external models. This vector is designed to have a resilient encoding that adjusts to regional variations and stands out against each distinct anatomical region. Let 𝐁̃ represent the matrix composed of the last N-1 vectors. We perform singular value decomposition (SVD) on 𝐁̃, yielding 𝐁̃ = 𝐔Σ𝐕^⊤. To determine the background vector 𝐛_0, we maximize its orthogonality to the subspace spanned by the rows of 𝐁̃. Mathematically, 𝐛_0 is the last column of 𝐕, denoted as 𝐛_0=𝐕[:,-1]. §.§.§ Mask Branch and the Derivation of textSCF Conventional registration networks apply identical filters across all pixel locations when generating the deformation field, which may not yield the best results. Displacement patterns often vary between regions, reflecting each subject's distinctive traits like organ placement and size. This results in displacement fields that are internally consistent within a region but can vary greatly between different regions. Therefore, we introduce the concept of spatially covariant filters to accommodate these variations in regional displacement. The original SCP approach <cit.> employs a neural network to establish an implicit function <cit.> that maps pixel coordinates to corresponding classifier weights: Φ_θ: (x ∈ℝ^d) ↦ (𝐰(x) ∈ℝ^C), where x represents the pixel's coordinates in a d-dimensional space (d=3 in our volumetric registration), 𝐰(x) is the weight vector with C elements, and θ is the trainable parameter. (Note that 𝐰(x) can be a C× N matrix in the case of an N-label segmentation.) The method is effective for structured medical image segmentation but less so for registration due to the need for a dense deformation field in non-aligned data. In this work, we suggest replacing the pixel coordinates in Eq. (<ref>) with encoded vectors representing anatomical regions. First, we apply a pretrained external segmentation model like SwinUnetr <cit.> to the fixed image 𝐈_f to produce a probability distribution vector 𝐝(x) ∈ℝ^N representing the N anatomical regions at each location. Then, we use the argmax and max operations on 𝐝(x) to derive the segmentation mask 𝐉_f(x) and to extract the confidence score, represented by the probability p(x). Next, we use 𝐉_f(x) to look up 𝐁 and obtain an embedding vector 𝐟(x). Finally, we can derive the proposed textSCF as follows: 𝐰(x)= Φ_θ(𝐟(x)), 𝐮(x)= (p(x)·𝐰(x)+(1-p(x))·𝐰_r)^⊤𝐅(x), where 𝐮(x) is the displacement vector, 𝐅(x)∈ℝ^C_2 is the feature vector produced by the backbone network, and 𝐰(x) ∈ℝ^C_2× d is the filter weight estimated by the implicit function Φ, with θ as its trainable parameters. 𝐰_r ∈ℝ^C_2× d is a trainable parameter (implemented with a linear layer) that acts as a uniform filter applied universally across all spatial locations. Substituting 𝐰(x) with 𝐰_r in Eq. (<ref>) simplifies the network to perform translation-invariant filtering.
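Putting the text and mask branches together, a condensed PyTorch-style sketch of the spatially covariant filtering in the two equations above could look as follows; the module and argument names are illustrative, the text embeddings are assumed to be pre-computed with CLIP's text encoder, and blending the filter outputs is equivalent to blending the filters themselves because the filtering is linear.

```python
import torch
import torch.nn as nn

class TextSCFHead(nn.Module):
    """Sketch of w(x) = Phi(f(x)) and u(x) = (p(x) w(x) + (1 - p(x)) w_r)^T F(x)."""
    def __init__(self, c1, c2, d=3, hidden=2048):
        super().__init__()
        # implicit function Phi: text embedding (C1) -> spatially covariant filter (C2 x d)
        self.phi = nn.Sequential(
            nn.Linear(c1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, c2 * d),
        )
        self.w_r = nn.Linear(c2, d, bias=False)   # translation-invariant filter w_r
        self.c2, self.d = c2, d

    def forward(self, feat, text_emb, seg_label, seg_prob):
        # feat: (B, C2, H, W, D) backbone features F(x); text_emb: (N, C1) CLIP embeddings B
        # seg_label: (B, H, W, D) argmax labels J_f(x); seg_prob: (B, H, W, D) max probabilities p(x)
        f = text_emb[seg_label]                                   # per-voxel embedding lookup f(x)
        w = self.phi(f).view(*f.shape[:-1], self.c2, self.d)      # (B, H, W, D, C2, d)
        feat_v = feat.permute(0, 2, 3, 4, 1)                      # (B, H, W, D, C2)
        u_scf = torch.einsum("bxyzc,bxyzck->bxyzk", feat_v, w)    # w(x)^T F(x)
        u_inv = self.w_r(feat_v)                                  # w_r^T F(x)
        p = seg_prob.unsqueeze(-1)
        u = p * u_scf + (1.0 - p) * u_inv                         # confidence-weighted displacement
        return u.permute(0, 4, 1, 2, 3)                           # (B, d, H, W, D)
```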
With the displacement vector, we can obtain the deformation field ϕ(x)=x+𝐮(x). The implicit function Φ_θ is implemented as a multi-layer perceptron (MLP) comprising three layers. Each layer includes a linear layer coupled with a ReLU activation function, except for the final layer, which consists solely of a linear component. As illustrated in Fig. <ref>, Φ_θ maps text embeddings from a C_1-dimensional space to a C_2-dimensional space. §.§.§ Feature Branch and Overall Framework The feature branch contains a backbone network that adopts an encoder-decoder structure, similar to architectures like VoxelMorph <cit.> and its variants (see Fig. <ref>). We represent this network as f_ξ, with ξ denoting the network's trainable parameters. This branch combines the moving image 𝐈_m and the fixed image 𝐈_f along the channel dimension to produce a feature map 𝐅∈ℝ^C_2× H × W × D with C_2 channels and a spatial size of H × W × D, calculated by 𝐅=f_ξ(𝐈_m,𝐈_f). Here N_s specifies the starting number of channels, a parameter that will inform the construction of networks with varying complexities. The overall framework of our textSCF model is depicted in Fig. <ref>. The process begins with generating text prompts from anatomical region names, such as "stomach" and "gall bladder", using a specific template. Our ablation study, outlined in Table <ref>, shows that textSCF, with its semantic encoding, considerably outperforms the original SCP model, which uses only spatial encoding. This study underscores the critical role of the choice of text prompts. Although further investigation into optimal prompting strategies is a compelling direction for future research, it falls outside the scope of this paper. These prompts are then fed into a pretrained text encoder, coupled with SVD, to produce the text embeddings 𝐁. A segmentation model takes the fixed image 𝐈_f as input and outputs the segmentation map 𝐉_f and its probability distribution, which, combined with the text embeddings, are assigned to each voxel. These combined embeddings 𝐟∈ℝ^C_1 × H × W × D are passed through the function Φ_θ to produce spatially covariant filters 𝐰∈ℝ^C_2 × d × H × W × D. Finally, with the feature map from f_ξ, the deformation field is calculated as outlined in Eq. (<ref>). § EXPERIMENTS AND RESULTS §.§ Datasets OASIS. We conducted inter-subject brain registration using the Learn2Reg registration challenge 2021 <cit.>, which utilizes the OASIS dataset <cit.>. This dataset comprises T1w brain MRI scans from 414 subjects. For training, we used 394 unpaired scans, and 19 image pairs from 20 scans were employed for validation and public leaderboard ranking [<https://learn2reg.grand-challenge.org/evaluation/task-3-validation/leaderboard/>]. In our experiments, we utilized pre-processed data from the challenge organizers, which included bias correction, skull stripping, alignment, and cropping of all scans to a size of 160×192×224. Abdomen CT. We also performed inter-subject organ registration using the Abdomen CT dataset <cit.> from the Learn2Reg <cit.> challenge 2020. This dataset includes 30 abdominal CT scans, with each scan segmented into 13 anatomical structures. All images were resampled to a consistent voxel resolution of 2 mm and a spatial size of 192×160×256. We divided the dataset into three parts: 20 CT images for training, 3 for validation, and 7 for testing, which results in 380 (20×19) training pairs, 6 (3×2) validation pairs, and 42 (7×6) testing pairs. Why Choose Them?
The selected datasets present anatomical regions with various shapes, sizes, and locations, making them ideal for evaluating our method.They contain two commonly used imaging modalities: MRI and CT, each with its own set of challenges.The main difficulty of the OASIS dataset lies in fine-grained alignments of small and variably shaped brain structures.The Abdomen CT dataset, on the other hand, is primarily challenging due to large deformations and its relatively small size.Assessing on these datasets offers a more comprehensive and convincing evaluation.§.§ Implementation Details and Baseline MethodsTraining Details. All models were developed using PyTorch in Python <cit.>.The training environment included a machine with 32GB memory, a 16-core CPU, and an A100 GPU.Network training utilized the Adam optimizer <cit.> with an initial learning rate of 1e-4, complemented by a polynomial learning rate scheduler with a 0.9 decay rate. The training process was set to a batch size of 1 and continued for 700 epochs for the OASIS dataset and 100 epochs for the Abdomen CT dataset.Throughout the paper, we set λ=0.1 for the smoothness regularization term, except where noted otherwise. For a fair comparison, all models were trained either under same conditions or according to their specified preferred settings in their repositories.Data Processing. Our approach involved a straightforward data processing pipeline.In line with the learn2reg challenge guidelines <cit.>, the output deformation field is spatially halved for both datasets.For the OASIS dataset, we maintained the original size of the input images, whereas, for CT datasets, both the input and output sizes are halved.We normalized all images to ensure their intensities fall within the range of [0,1].Specifically for the CT dataset, intensities were clipped between -500 and 800 Hounsfield units prior to the normalization. For the OASIS dataset, we utilized automated segmentation masks generated by FreeSurfer <cit.> and the Neurite package <cit.> for both the Dice loss computation and as inputs to textSCF.In the case of the Abdomen CT dataset, manual segmentation masks were employed for calculating the Dice loss, while automated segmentation masks obtained from the pretrained SwinUnetr model <cit.> served as inputs to textSCF.Baseline Methods and Model Details of textSCF. Our study compares the textSCF with several state-of-the-art non-iterative, learning-based baseline models, including VoxelMorph <cit.>, LapRIN <cit.>, TransMorph <cit.>, LKU-Net <cit.>, and Fourier-Net <cit.>.For the OASIS dataset, we obtained evaluation scores from the public leaderboard or respective publications.In the case of the Abdomen CT dataset, we acquired the models' code from their public repositories and fine-tuned each to achieve optimal performance. Although textSCF is model-agnostic, we opted for the LKU-Net backbone due to its simplicity and effectiveness in capturing both fine-grained details and large deformations.For all our experiments, we standardized the size of the kernel to 5. Evaluation Metrics. Consistent with established methods <cit.> and challenge protocols <cit.>, our evaluation metrics include the Dice Similarity Coefficient (Dice) and the 95 % percentile of the Hausdorff Distance (HD95) for similarity assessment of anatomical regions. 
To evaluate the diffeomorphism quality of deformation fields, we used the standard deviation of the logarithm of the Jacobian determinant (SDlogJ).Furthermore, to assess computational complexity, we measured the network's parameter size and multi-add operation count for each method. §.§ Results and Analysis §.§.§ Analysis of Textual Anatomical EmbeddingsOur ablation study (Table <ref>) underscores text encoding's importance in textSCF's registration accuracy.Location-specific terms like "in human abdomen" enhance Dice scores by 0.7%, while specifying the imaging modality like "a computerized tomography" provides further improvement.The ViT model as a backbone surpasses ResNet, particularly evident in prompts "#5" and "#6".SVD's role in background encoding, as in prompts "#2" and "#3", proves beneficial.However, ChatGPT embeddings (prompt "#7") fall short compared to CLIP's image-text pairings, and SCP's sole spatial encoding is less effective.The study validates textSCF's strength in combining SCFs with text prompts for precise anatomical region-specific filtering and capturing semantic relationships between regions.§.§.§ Registration Accuracy Table <ref> presents the quantitative results of our proposed textSCF method in comparison to other methods on brain registration with the OASIS dataset.In summary, textSCF demonstrates superior performance in both the average Dice score and HD95 score while maintaining comparable smoothness in the deformation field (see Fig. <ref> for accuracy-smoothness trade-off).Notably, our model, which incorporates a text encoder and spatially covariant filters, shows improvement in all three metrics over the LKU-net, its direct counterpart without these features.In Table <ref>, we present quantitative results comparing our textSCF method to others in inter-subject abdomen registration.Among methods that exhibit smoother deformation fields than textSCF (indicated by a lower SDlogJ), textSCF stands out with significantly better Dice scores. Specifically, textSCF achieves improvements in Dice score of 57.17% over VoxelMorph, 54.17% over TransMorph, 49.94% over Fourier-Net, and 20.72% over LKU-Net.Although LapIRN <cit.> is optimized for large deformations and leads in Dice score, its higher SDlogJ suggests less smoothness in deformation.Notably, textSCF shows an 11.37% improvement in Dice score compared to LapIRN, while also maintaining a smoother deformation field.To gain deeper insights into the performance of various deformable registration methods,we visualized Dice score distributions across anatomical structures in the Abdomen CT dataset, as seen in Fig. <ref>.The proposed textSCF consistently outperforms other methods across these anatomical structures.Notably, in a paired t-test, textSCF significantly surpasses Fourier-Net, TransMorph, and VoxelMorph in Dice scores for all structures, with statistical significance (p<0.05).However, no statistical significance was found for the gall bladder in comparisons with LKU-Net and LapIRN, and for the left and right adrenal glands with LapIRN. 
Moreover, all methods, including textSCF, show reduced performance on the left/right adrenal glands and the gall bladder, attributed to their small size and irregular shapes. §.§.§ Smoothness Analysis In deformable image registration, a diffeomorphism enables smooth, complete image transformations without tearing or folding, preserving topologies. Smoothness is typically encouraged in learning frameworks via regularizers or a diffeomorphic integration layer <cit.>. However, prioritizing smoothness may impact anatomical correspondence accuracy, making it crucial to find a balance between the two. Fig. <ref> displays two scatter plots comparing different registration methods in terms of their Dice scores and the smoothness measure SDlogJ for both brain and abdomen registrations. It highlights that textSCF leads in Dice scores, demonstrating its unmatched registration accuracy among the compared methods. Even with comparable levels of smoothness, textSCF maintains higher Dice scores, illustrating its proficiency in finding anatomical correspondences. LKU-Net and LapIRN, while individually strong in either smoothness or accuracy, lack this dual advantage. Consequently, textSCF emerges as the method with the best balance between registration accuracy and smoothness. §.§.§ Complexity Analysis The complexity of the textSCF model is governed by two parameters: the starting number of channels, N_s, in the backbone network f_ξ, and the number of channels, C_Φ, in the function Φ_θ. As network complexity increases, so typically does registration accuracy, albeit at the cost of increased computational demands, such as larger network size and more multi-add operations. Hence, a well-designed registration method needs to strike a careful balance between these aspects.
Our textSCF model demonstrates the capacity to transfer knowledge from external datasets with different anatomical regions, leveraging spatially covariant text embedding.We augmented abdomen CT registration with an auxiliary Lung CT dataset <cit.>.We processed 20 inspiration-phase lung images to align with the Abdomen CT dataset's specifications.During training, these lung images were combined with abdomen data, employing the same loss function.Table <ref> reveals that adding Lung data without textual anatomical encoding marginally decreased performance.In contrast, specific text encoding of the lung as an additional anatomical region enhanced outcomes, highlighting the model's transferability capabilities through text encoding. This indirectly showcases the model's ability to capture semantic relationships between different anatomical regions.Discontinuity-Preserving Capability. Most current learning-based registration methods presuppose a globally smooth deformation field, which may not hold true for cases involving large deformations, such as abdominal registrations. Ideally, a deformation field should be smooth within each anatomical region but allow for discontinuities between different regions.This ability to preserve discontinuities is a crucial feature for a registration method.As depicted in Fig. <ref>, textSCF exhibits this discontinuity-preserving capability, clearly delineating the stomach from surrounding areas, in contrast to LKU-Net's approach, which smooths over such boundaries. § CONCLUSIONSIn developing textSCF, we noted two primary limitations. Firstly, while many open-source and pretrained segmentation models are available, the registration accuracy is somewhat reliant on the precision of these external segmentations, with performance declining in tandem. Secondly, textSCF's effectiveness is less pronounced in datasets with simpler structures, demonstrated by only a slight Dice increase in cardiac registration compared to its counterpart without textSCF.In this paper, we introduced textSCF, a novel method for deformable medical image registration, addressing two key questions from Section <ref>.For Q1, we utilized internal prior information with anatomical-region specific filters to enhance intra-region consistency and inter-region distinction.For Q2, we harnessed external knowledge via pretrained segmentation models and CLIP, capturing semantic inter-regional relationships.textSCF showed remarkable performance in brain MRI and abdominal CT registration tasks, achieving top ranks in the MICCAI Learn2Reg 2021 challenge and notable Dice score improvements.ieeenat_fullnameAppendix SummaryIn this supplementary material, we offer further insights into the textSCF model as well as other baseline models.Appendix <ref> offers expanded descriptions and implementation specifics of these comparative models.Appendix <ref> showcases textSCF's universal applicability across different architectures, including both ConvNets and vision transformers.Appendix <ref> investigates how the accuracy of external segmentation impacts registration outcomes.Lastly, Appendix <ref> elaborates on diffeomorphic registration, covering definitions of diffeomorphisms, differentiable diffeomorphic integration layers, and the diffeomorphism quality metric SDlogJ.§ IMPLEMENTATION DETAILS We compared textSCF with multiple baseline models, and in this section, we provide more descriptions about the model and more details of how we implement them.VoxelMorph <cit.>. 
VoxelMorph stands at the forefront of unsupervised learning in medical image registration via ConvNets.As detailed in <cit.>, it comes in two variants: VoxelMorph-1 and the more advanced VoxelMorph-2, which doubles the feature counts of its predecessor.Our focus was on VoxelMorph-2, which, by expanding network size, achieves better performance compared to VoxelMorph-1.TransMorph <cit.>. TransMorph, built on the Swin transformer framework <cit.>, is known for its high registration accuracy across various tasks and exemplifies vision transformer-based registration methods.It benefits from Swin Transformer's extensive effective receptive field <cit.>.TransMorph has four variants, each utilizing a different Swin backbone.Our experiments utilized the 'Large' variant, which delivers the highest registration accuracy among its counterparts.LapIRN <cit.>. LapIRN, based on residual ConvNets <cit.>, features a three-level coarse-to-fine architecture. Unique among its peers, LapIRN employs three sub-networks, each tailored to a specific scale of deformation, making it particularly effective for datasets with large deformations.The network's complexity is managed through its initial channel count, and for consistency with our textSCF framework, we set this count to 32 to get optimal performance.LKU-Net <cit.>. LKU-Net, leveraging large kernel insights <cit.> in a U-Net <cit.> framework, excels in producing both fine-grained fields and those with large deformation. It was the chosen backbone for constructing the textSCF framework.The complexity of LKU-Net, like other models in our study, is regulated by the start channel count; we used 32 to maintain consistency with the textSCF setup.Fourier-Net <cit.>. Fourier-Net generates deformation fields in a low-frequency space, simplifying network complexity while enhancing field smoothness.It claims to match or surpass the registration accuracy of TransMorph and LapIRN in brain registration tasks, but with reduced computational demands.However, its performance dips in datasets characterized by large deformations.In our implementation of Fourier-Net, we set the starting channel count to 32 and adjusted the low-frequency patch size to be a quarter of the original size, aligning with its most extensive variant.Loss Function. Unless otherwise specified, all methods apart from LapIRN utilize the same loss function with λ=0.1, mirroring the textSCF framework.For LapIRN, to attain peak performance, we employed the Normalized Cross-Correlation (NCC) loss for dissimilarity measurement and adjusted λ to 3.5, in line with what is mentioned in its original code repository [<https://github.com/cwmok/LapIRN>].Additionally, we integrated a Dice loss into the LapIRN setup. § EXPANSIBILITY ACROSS ARCHITECTURES The design of textSCF facilitates its integration with a variety of backbones.We tested its adaptability by implementing it with comparator baseline models as backbones, including VoxelMorph, TransMorph, LapIRN, and LKU-Net.These models were evaluated on the Abdomen dataset, notable for its large deformations.For each network's textSCF variant, we configured C_ϕ to 2048 and employed the 'base' version of the pretrained SwinUnetr <cit.> as the external segmentor. Fig. 
<ref> showcases the impact of the textSCF module on various backbone networks in terms of Dice (%) metrics, highlighting textSCF's expansibility across architectures.The integration of textSCF leads to accuracy enhancements across all models, with VoxelMorph benefiting the most significantly.VoxelMorph outperforms TransMorph with textSCF integration, indicating a potentially greater synergy with ConvNet architectures, possibly due to ConvNets' implicit positional encoding via zero padding <cit.>.This intrinsic characteristic of ConvNets may explain their superior performance over vision transformers in handling large deformations.Additionally, LapIRN's gains with textSCF are less marked compared to others. § IMPACT OF EXTERNAL SEGMENTATION ACCURACY As previously noted, textSCF's registration accuracy is correlated with the accuracy of external segmentation inputs.We explore how changes in segmentation accuracy impact registration outcomes.Our evaluation on the Abdomen dataset involved using ground-truth masks with textSCF, modulating segmentation accuracy by adjusting the pretrained SwinUnetr network size from 'base' to 'small' and 'tiny' variants.We evaluated SwinUnetr's performance on the Abdomen dataset, calculating the average Dice score across all 30 instances as the metric for segmentation accuracy.Table <ref> illustrates the relationship between registration and segmentation accuracy.With ground-truth segmentation masks, the registration accuracy in terms of Dice (%) reaches 74.73%, significantly outperforming the SwinUnetr variants.As expected, registration accuracy decreases with segmentation accuracy for SwinUnetr variants.Intriguingly, as segmentation accuracy drops, both HD95 (mm) and SDlogJ metrics improve, likely because decreased segmentation precision leads to less pronounced structural discontinuities, resulting in a smoother deformation field. Notably, textSCF enhances registration accuracy by 13.04% over its non-textSCF variant, even when segmentation accuracy is as low as 63.35%.§ DIFFEOMORPHIC REGISTRATION In this section, we explore the principles of diffeomorphic registration. We begin by defining a diffeomorphism and discussing its relevance to the smoothness of deformation fields.We then present the concept of the Jacobian determinant as it pertains to deformation fields and introduce the formula for calculating the SDlogJ metric.Finally, we discuss the statistical underpinnings that allow SDlogJ to serve as an indicator of the quality of a diffeomorphic transformation.§.§ Diffeomorphism Diffeomorphism Definition. In mathematics, a diffeomorphism represents an isomorphism between smooth manifolds.This entails an invertible function that connects one differentiable manifold to another, ensuring that both the function and its inverse are smooth and continuously differentiable.Diffeomorphism in Image Registration. In the context of deformable image registration, a diffeomorphism refers to a smooth and invertible transformation process.This allows for seamless image transitions without tearing or folding, ensuring the preservation of topologies.While a continuously differentiable transformation and its inverse typically define a diffeomorphism, we emphasize smoothness as a crucial and more manageable aspect within learning frameworks.To promote this smoothness, global regularizers <cit.> or a diffeomorphic integration layer are often employed <cit.>.However, prioritizing smoothness might affect the accuracy of anatomical correspondences. 
Thus, striking a balance between smooth transformation and accurate anatomical mapping is crucial.§.§ Diffeomorphic Integration LayerAdding a differentiable diffeomorphic integration layer <cit.> with a global smoothness regularizer improves the smoothness of the deformation field.This process involves integrating a stationary velocity field 𝐮 over time t=[0,1] to compute the final registration field ϕ^(1).Beginning with the identity transformation ϕ^(0) = Id, the deformation field evolves according to ∂ϕ^(t)/∂ t = 𝐮(ϕ^(t)).Stationary ordinary differential equation integration represents a one-parameter subgroup of diffeomorphisms.In group theory, 𝐮, part of the Lie algebra, is exponentiated to yield ϕ^(1) = exp(𝐮), a Lie group member.One-parameter subgroups ensure exp((t + t')𝐮) = exp(t𝐮) ∘ exp(t'𝐮) for any scalars t and t', with ∘ being the composition in the Lie group.Starting with ϕ^(1/2^T) = p + 𝐮(p), we use ϕ^(1/2^t+1) = ϕ^(1/2^t)∘ϕ^(1/2^t) to get ϕ^(1) = ϕ^(1/2)∘ϕ^(1/2).In neural networks, diffeomorphic integration employs spatial transformation <cit.> layers for scaling and squaring operations T times.Based on <cit.>, T=7 is our chosen number of integration steps. §.§ Smoothness Measurement: SDLogJAs previously noted, while a sufficiently smooth deformation field doesn't automatically ensure diffeomorphism, it is a crucial component in achieving it.The Jacobian matrix, formed by the deformation field's derivatives in each direction, plays a key role in understanding the deformation's local behavior.This matrix is essentially a second-order tensor field that captures these local changes. The definition of the Jacobian matrix J_ϕ(p) is as follows:J_ϕ(p) = ([ ∂ϕ_x(p)/∂ x ∂ϕ_x(p)/∂ y ∂ϕ_x(p)/∂ z; ∂ϕ_y(p)/∂ x ∂ϕ_y(p)/∂ y ∂ϕ_y(p)/∂ z; ∂ϕ_z(p)/∂ x ∂ϕ_z(p)/∂ y ∂ϕ_z(p)/∂ z; ]),where p is the voxel position, ϕ is the deformation field. The Jocobian determinant of the deformation field at position p, denoted as |J_ϕ(p)|, is useful in analyzing the local characteristics of the deformation field.In regions where the Jacobian determinant is positive, the local deformation field typically exhibits diffeomorphic properties, indicating a one-to-one mapping.Conversely, areas with a negative Jacobian determinant suggest a loss of this one-to-one correspondence, highlighting areas of concern in the deformation process.Interpretation of | J_ϕ(p) | values. When | J_ϕ(p) | > 1, it corresponds to local expansion, indicating an increase in the volume at position p. A determinant between 0 and 1, 0 < | J_ϕ(p) | < 1, corresponds to local contraction, reflecting a reduction in volume. If | J_ϕ(p) | = 1, the deformation maintains the region's original size, reflecting volume preservation. A determinant of zero, | J_ϕ(p) | = 0, is associated with a collapse to a lower-dimensional structure, often a folding or singularity. Negative values of the determinant, | J_ϕ(p) | < 0, are associated with local inversion, where the orientation at that position is reversed, typically considered a non-physical transformation.Derivation of SDlogJ. SDlogJ serves as a valuable statistical metric <cit.> for assessing the diffeomorphic properties of deformation fields in image registration.It quantitatively evaluates the uniformity and smoothness of deformations across the image, aiding in determining their physical viability and alignment with diffeomorphic characteristics. 
Let μ be the mean of log Jacobian determinants and N the total number of voxel positions in the field.The standard deviation of these determinants is defined as:SDlogJ = √(1/N-1∑_p (logσ(| J_ϕ(p) |+ρ) - μ)^2),where σ is a clip function ensuring positive values and ρ offsets highly negative values (following the challenge host <cit.>, we set ρ=3). SDlogJ statistically quantifies uniformity in a deformation field's transformation properties.A lower SDlogJ indicates a smoother, higher quality field with consistent local transformations.In contrast, a higher SDlogJ points to variable, potentially less smooth transformations. §.§ Diffeomorphic Quality Measurement: SDLogJ The significance of the Jacobian determinant in analyzing local deformation behavior at specific positions was previously emphasized.A non-positive Jacobian determinant indicates a disruption in bijective mapping at those locations, detracting from the field's diffeomorphic quality.Consequently, calculating the proportion of positions where J_ϕ(p) ≤ 0, represented as |J_ϕ|_≤ 0%, provides a valuable metric for assessing the overall quality of diffeomorphism <cit.>.Experiment Settings. To examine the relationship between SDlogJ and |J_ϕ|_≤ 0%, we conducted experiments using textSCF on the Abdomen dataset.Two factors influencing the smoothness of the generated deformation field were considered: the use of the diffeomorphic integration layer and the coefficient λ of the global smoothness regularizer.The following settings were employed to create a range of SDlogJ-|J_ϕ|_≤ 0% pairs: 1) External Segmentor: Different external segmentation masks used in textSCF variants. 2) Network Complexity: Variation in the starting channel count N_s from 8 to 16, and then to 32. 3) Diffeomorphic Integration: Variants with and without the diffeomorphic integration layer. 4) λ for Smoothness Regularizer: Adjusting λ from 0.01, 0.05, 0.1, to 1.0.Results and Analysis. By adjusting the mentioned variables, we trained various textSCF models, generating a range of SDlogJ-|J_ϕ|_≤ 0% values.The results showed that SDlogJ values ranged between (6.75e-02, 9.50e-01) and |J_ϕ|_≤ 0% values between (3.37e-06, 4.16e-02). To improve clarity, min-max normalization was applied to both SDlogJ and |J_ϕ|_≤ 0%.This scaling brings their values within the (0,1) range, ensuring data consistency and improved readability, while preserving the correlation between the two variables. Fig. <ref> displays scatter plots of normalized SDlogJ-|J_ϕ|_≤ 0% with linear regression analysis.The Pearson correlation coefficient of 0.93 (p<0.05) indicates a strong positive correlation, showing that increases in SDlogJ typically accompany rises in |J_ϕ|_≤ 0% and vice versa.An R-squared value of 0.86 in our regression model reveals that SDlogJ explains about 86% of the variability in |J_ϕ|_≤ 0%.This suggests SDlogJ's high predictive value for |J_ϕ|_≤ 0%, indicating its effectiveness as a substitute metric.Thus, SDlogJ measurements can effectively parallel insights gained from |J_ϕ|_≤ 0% evaluations. Combined with SDlogJ's capability to assess deformation field smoothness, it stands as a straightforward metric for evaluating diffeomorphism quality. | http://arxiv.org/abs/2311.15607v1 | {
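For reference, the quantities above can be computed directly from a dense displacement field. The following is a minimal numpy sketch, assuming the field is stored as a (3, D, H, W) array of voxel displacements u with ϕ(p) = p + u(p), so the identity is added to the displacement Jacobian; the function names and the eps guard are illustrative choices rather than part of any released implementation.

import numpy as np

def jacobian_determinant(disp):
    # disp: displacement field of shape (3, D, H, W); phi(p) = p + disp(p),
    # so J_phi = I + d(disp)/dp at every voxel (finite differences via np.gradient).
    grads = [np.gradient(disp[i], axis=(0, 1, 2)) for i in range(3)]
    J = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)  # |J_phi(p)| at every voxel

def sd_log_j(det, rho=3.0, eps=1e-9):
    # SDlogJ: standard deviation (ddof=1 for the N-1 denominator) of the clipped,
    # offset log Jacobian determinants, taking mu as the mean of the same values.
    log_det = np.log(np.clip(det + rho, eps, None))
    return np.std(log_det, ddof=1)

def nonpositive_fraction(det):
    # |J_phi|_{<=0}%: share of voxels where the mapping folds or collapses.
    return 100.0 * np.mean(det <= 0)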
"authors": [
"Hang Zhang",
"Xiang Chen",
"Rongguang Wang",
"Renjiu Hu",
"Dongdong Liu",
"Gaolei Li"
],
"categories": [
"eess.IV",
"cs.AI",
"cs.CV"
],
"primary_category": "eess.IV",
"published": "20231127080053",
"title": "Spatially Covariant Image Registration with Text Prompts"
} |
[ [=====Recent advances in generative AI have unveiled significant potential for the creation of 3D content. However, current methods either apply a pre-trained 2D diffusion model with the time-consuming score distillation sampling (SDS), or a direct 3D diffusion model trained on limited 3D data losing generation diversity.In this work, we approach the problem by employing a multi-view 2.5D diffusion fine-tuned from a pre-trained 2D diffusion model. The multi-view 2.5D diffusion directly models the structural distribution of 3D data, while still maintaining the strong generalization ability of the original 2D diffusion model, filling the gap between 2D diffusion-based and direct 3D diffusion-based methods for 3D content generation. During inference, multi-view normal maps are generated using the 2.5D diffusion, and a novel differentiable rasterization scheme is introduced to fuse the almost consistent multi-view normal maps into a consistent 3D model. We further design a normal-conditioned multi-view image generation module for fast appearance generation given the 3D geometry. Our method is a one-pass diffusion process and does not require any SDS optimization as post-processing. We demonstrate through extensive experiments that, our direct 2.5D generation with the specially-designed fusion scheme can achieve diverse, mode-seeking-free, and high-fidelity 3D content generation in only 10 seconds. Project page: https://nju-3dv.github.io/projects/direct25https://nju-3dv.github.io/projects/direct25.§ INTRODUCTION Creating 3D content from generative models has become a heated research topic in the past year, which is key to a variety of downstream applications, including game and film industries, autonomous driving simulation, and virtual reality. Specifically, DreamFusion <cit.> was proposed to optimize a neural radiance field (NeRF) <cit.> using a pre-trained 2D text-to-image diffusion model and the score distillation sampling (SDS) technique, showing promising results for text-to-3D generation of arbitrary objects without any 3D data. However, the indirect 3D probability distribution modeling inevitably deteriorates the final generation quality. For example, it has been reported in DreamFusion and its follow-ups <cit.> that the overall generation success rate is low and the multi-face Janus problem exists.Another line of work focuses on direct 3D generation by training on large-scale 3D data. For example, <cit.> apply the probabilistic diffusion model for point cloud generation and <cit.> model the denoise diffusion process on signed distance field (SDF). These methods usually apply a specific 3D representation and train the denoise diffusion on such representation using a specific 3D dataset, e.g., ShapeNet <cit.>, and show high-quality generation results on objects similar to the training set. However, the scale of the current 3D dataset is still too small when compared with the text-image data <cit.>. Even with the largest 3D dataset <cit.> available, it is still challenging to train a 3D diffusion model for diverse text-to-3D generation. In this work, we instead extend existing text-to-2D models to a denoising diffusion process on multi-view 2.5D depth/normal data. 
Compared with full 3D representations such as 3D point clouds or meshes, 1) 2.5D information such as depth or normal are much easier to capture or collect (e.g., depth provided by active sensors); 2) the depth and normal maps perfectly align with the image data, making it possible to adapt and fine-tune a 2.5D model from a pre-trained 2D RGB model. In order to construct full 3D models, 2.5D maps viewed from multiple perspectives are necessary. Therefore, the target diffusion model should be capable of generating multi-view images with content consistency. In practice, we fine-tune existing text-to-image diffusion models on multi-view 2.5D renderings from the Objaverse dataset <cit.>. On the one hand, the models are adapted to 2.5D information. On the other hand, joint multi-view distribution is captured with the help of structural modification of injecting multi-view information to the self-attention layers. During inference, multi-view images are generated synchronously by common schedulers like DDIM <cit.>, which are then fused directly into a mesh by differentiable rasterization. The whole generation process completes in seconds, which is significantly faster than SDS-based methods that typically take 30 minutes. The system is extensively evaluated with complex text prompts and compared with both SDS-based and direct 3D generation methods, demonstrating the capability of generating 3D textured meshes with complex geometry, diversity, and high fidelity.To summarize, major contributions of the paper include:* We propose to approach the 3D generation task by training a multi-view 2.5D diffusion model, which explicitly models the 3D geometry distribution while inheriting a strong generalization ability of the large-scale pre-trained 2D image diffusion. * We introduce an efficient differentiable rasterization scheme to optimize a textured mesh directly from the multi-view normal maps and RGB images.* We carefully design a generation pipeline that achieves diverse, mode-seeking-free, and high-fidelity 3D content generation in only 10 seconds.§ RELATED WORK §.§ 3D Generation by Score DistillationScore Distillation <cit.> is one of the most popular method recently for 3D Generation by pre-trained 2D diffusion models. It distillates the knowledge of image denoising to the optimization process of differentiable rendering systems so that randomly rendered views are gradually refined to describe the input text prompt. There are fundamental problems: 1) 2D diffusion models are not 3D-aware, and the generated samples have multi-face problem as a result; 2) Each optimization step requires single forward of the denoising UNet, making the whole process time consuming; 3) High guidance scale of prompts is preferred for better convergence, which leads to over-saturation of appearance; 4) the optimization is mode-seeking, losing the strong diversity of 2D diffusion model. Follow up works are proposed to solve some of them, but not all. Zero-1-to-3 <cit.> fine-tunes the 2D diffusion model with multi-view dataset to grant the ability of perspective control and mitigate the problem 1 in image-to-3D task. ProlificDreamer <cit.> mitigate problem 3 and 4 by utilizing a KL-divergence loss to perform sampling instead of mode-seeking, at the cost of higher time complexity. In this work, we do not apply score distillation and completely separate diffusion process and 3D model optimization. The diffusion can be scheduled and conditioned normally, so that the results have diversity and realistic color. 
And the 3D model optimization operates on explicit representation so can be finished quickly. §.§ Direct 3D DiffusionFast 3D generation can be achieved by training a direct 3D diffusion model with 3D dataset. One key problem is to choose the 3D representation and design a special encoder/decoder for it. There are some early attempts to train direct 3D models for point cloud <cit.>, mesh <cit.> and implicit representation like NeRF or SDF <cit.>. However, they are trained on the limited datasets like ShapeNet <cit.> which have rather small data size, geometry complexity or category diversity. Recent 3D datasets such as Objaverse <cit.> dramatically improve the state-of-the-art of 3D dataset, but is still limited compared to 2D image-caption datasets for training 2D diffusion models. In this work, we still use 2D neural network to deal with 2.5D maps, and thus we can perform fine-tuning on existing 2D diffusion models so as to inherit their strong generalization.§.§ Multi-view DiffusionGenerating multi-view images simultaneously is another strategy to bring 3D-awareness to 2D diffusion models. Two key modifications are proposed to achieve this: 1) Information from other views are concatenated with the current view as keys and queries in the self-attention layers. The gathered information can be from the single projection <cit.>, epipolar lines <cit.> or all the pixels <cit.>; 2) The model is fine-tuned on multi-view renderings from 3D dataset like Objaverse <cit.>. To construct 3D models, previous works either use SDS <cit.>, which is still time consuming, or image-based reconstruction systems like NeuS <cit.>, which requires at least 10 views to produce reasonable reconstructions. In this work, we choose to generation multi-view 2.5D maps like normal, so that we can use SDS-free reconstruction while still keep the number of views small.§ METHODIn this section, we introduce our multi-view 2.5D diffusion system, which synchronously generates multi-view 2.5D geometry images, i.e., normal maps, and corresponding texture maps given a text prompt as input for 3D content generation (Fig. <ref>). Our method is efficient enough to generate various results in only 10 seconds. In Sec. <ref>, we first briefly review the 2D diffusion model and formulate the multi-view 2.5D adaptation. We then illustrate the cross-view attention which enhances the multi-view consistency in Sec. <ref>. In Sec. <ref>, we describe how to produce the final 3D model from generated 2.5D geometry images, and finally in Sec. <ref>, we demonstrate how to synthesize the texture maps given the generated normal maps, and construct the high-quality final textured triangle mesh. §.§ Diffusion Models and 2.5D Adaptation Diffusion models learn a conversion from an isotropic Gaussian distribution to the target distribution (e.g. image spaces) via iterative denoising operations. We build our system on latent diffusion models (LDM), which contains a variational autoencoder (VAE) including an encoder and a decoder, a denoising network, and a condition input encoder. Compared to original diffusion models, LDM conducts the whole diffusion process in the latent image space and greatly improves efficiency and quality. Specifically, during the forward process, a noisy latent at time t is sampled in the latent space and is gradually degraded by noise which makes it indistinguishable from the Gaussian noise, while the denoising process reverses the process, which iteratively predicts and remove the noise to get the real images. 
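As a point of reference, the forward corruption and a single reverse update can be written in a few lines. The following is a minimal PyTorch sketch of a variance-preserving forward step and a deterministic DDIM update in latent space; the names alphas_cumprod, eps_theta, and cond are placeholders and do not refer to the exact scheduler interface used in our implementation.

import torch

def forward_noise(z0, t, alphas_cumprod):
    # q(z_t | z_0): corrupt a clean latent with Gaussian noise at integer step t.
    a_bar = alphas_cumprod[t]
    eps = torch.randn_like(z0)
    return a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * eps, eps

@torch.no_grad()
def ddim_step(z_t, t, t_prev, cond, alphas_cumprod, eps_theta):
    # One deterministic DDIM update: predict the noise, recover the implied clean
    # latent, then re-noise it to the earlier timestep t_prev.
    a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t_prev]
    eps_hat = eps_theta(z_t, t, cond)
    z0_hat = (z_t - (1.0 - a_t).sqrt() * eps_hat) / a_t.sqrt()
    return a_prev.sqrt() * z0_hat + (1.0 - a_prev).sqrt() * eps_hat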
In this work, we extend 2D text-to-image diffusion models to generate multi-view geometry images. By fine-tuning a pre-trained 2D diffusion model using our 2.5D image dataset, we are able to inherit the generalization and also obtain the expressive generation ability for multi-view 2.5D geometry images. Let (𝒳,c) be 3D data with caption from training dataset, x_i ∈𝒳 be multi-view renderings, x_i,t be views corrupted by independent noise ϵ_i ∈ℰ at time t. The denoising neural network ϵ_θ is trained byL = 𝔼_(𝒳,c); ℰ∼ N(0,1); t∑_x_i ∈𝒳; ϵ_i ∈ℰϵ_i - ϵ_θ(x_i,t,c,t) _2^2.§.§ Cross-view Attention Before fine-tuning, the multiple images generated from the base model for the same text prompt are not guaranteed to describe the same object because they are initiated from different noise maps and are denoised independently. We use a solution similar to <cit.>: we add data communication among the diffusion processes and fine-tune the model on multi-view image dataset to learn multi-view conditioning. Implementation-wise, we synchronize all the diffusion processes. When the calculation reaches a self-attention layer, we gather all the intermediate results as queries and values instead of just using the results from the current branch. Because images are treated as sequential inputs, the additional information can be simply concatenated together without introducing more trainable parameters. This architecture ensures that the diffusion processes are mutually conditioned, which serves as a structural prerequisite for multi-view consistent generation.§.§ Explicit Multi-view 2.5D Fusion There are various approaches available for constructing a 3D model from multi-view observations. Among them, image-based 3D reconstruction methods such as multi-view stereo <cit.> or NeRF <cit.> requires at least 10 images for high-fidelity reconstruction, which pose significant computational challenges in multi-view diffusion scenarios. However, by taking benefits from 2.5D information, one could effectively reduce this requirement. In practice, we generate 4 normal maps aligned with world coordinates from different viewpoints (front, left, right, and back). To fuse these observations into a triangle mesh, we explore the insight of geometry optimization from an initialized mesh via differentiable rasterization. This optimization, which is independent of neural network inference, achieves convergence rapidly within seconds (see Alg. <ref>). Space Carving Initialization. A simplistic and straightforward approach would be to initialize the shape using basic geometric primitives like spheres and cubes and optimize. However, this often introduces significant challenges during the latter geometry optimization, particularly when the target shape's topology diverges significantly from these elementary forms. To tackle this challenge, we employ the space carving algorithm <cit.> for shape topology initialization. Besides, it also provides a good initialization for latter geometry optimization. Fig. <ref> (a) shows the space carving results. Specifically, this process begins by segregating the background normal maps through a simple value thresholding. Subsequently, a volume in the interested space is created, and each voxel is projected onto the images using the camera parameters, determining whether the corresponding pixel is part of the object or the background. 
By gathering all projections under different views, we construct an occupancy volume, in which a voxel's occupancy is set to 0 (indicating emptiness) if all of its projections belong to the background, and it is set to 1 (indicating occupancy) otherwise. In the final step, we apply the marching cubes algorithm <cit.> on the occupancy volume to extract the zero level-set surface to form the initialized shape. This technique not only effectively preserves the topology, but also provides a rough shape estimation generated from the multi-view normal images. Optimization via Differentiable Rasterization. Once we have obtained the initialized geometry, we further refine the mesh details based on observational data. This refinement is mathematically formulated as an optimization problem, targeting the triangle triangle vertices V and faces F. As illustrated in Alg. <ref> and Fig. <ref>, we first simply the marching cube-generated mesh to a lower face number, which is found to help accelerate and improve the optimization. In each optimization step, we optimize the model by minimizing the L_1 loss between the rendered results and observations, as well as a normal consistency regularization. The loss function could be written as follows:ℒ_V= ℒ_n + λ_α ℒ_α + λ_nc ℒ_nc,where ℒ_n = 1/4∑_i^4 || n_i - n̂_̂î ||_1 is the normal rendering loss. It measures the mean L_1 distance between rendered normal maps n and the observations n̂ under different camera viewpoints i ∈{0, 1, 2, 3}. Similarly, L_α = 1/4∑_i^4 || α_i - α̂_̂î ||_1 is the alpha mask loss, which computes the difference between rasterized object mask α and the observed α̂, and the latter could be obtained by a simple value thresholding δ = 0.05 in the generated normal maps. We additionally integrate a normal consistency term, denoted as ℒ_nc to regularize the mesh. Specifically, this regularization is designed to smooth the mesh on a global scale by minimizing the negative cosine similarity between connected face normals. The hyperparameters λ_α, λ_nc which control the different weights for alpha mask loss and normal consistency regularization are set to 1 and 0.1 respectively. We adopt the nvdiffrast library <cit.> for differentiable rasterization.After each optimization step, we further perform remeshing by merging or splitting triangle faces using the strategy from <cit.>. During experiments, we empirically found that only about 200 optimization steps are enough to generate a high-quality geometry mesh, which takes only around 2 to 3 seconds. As shown in the fig. <ref> (c-e), the dog shape has been well optimized at around 200 steps. §.§ Texture Synthesis Texturing the mesh is another crucial step in achieving a high-quality result. Similar to the geometry generation, we initially synthesized multi-view texture maps, which were then applied to the generated geometry. In practice, another multi-view diffusion model generates the corresponding multi-view texture maps, conditioned on text prompts and the multi-view normal images.As shown in figure <ref>, the architecture of the multi-view normal-conditioned diffusion model is similar to the text-to-normal model, except that we extend the first convolution layer by increasing the number of channels to satisfy the normal latent condition input. Specifically, we initialize the extra trainable parameters in the first layer to zero before training. The normal condition plays a pivotal role in shape information and guides the model to generate both text- and shape-aligned texture images. 
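The channel extension itself is mechanical. The following is a minimal PyTorch sketch of widening the first convolution of a pretrained UNet so it accepts the concatenated normal-latent condition, with the new input channels zero-initialized so training starts from the pretrained behavior; conv_in and the extra channel count of 4 are illustrative assumptions rather than the exact layer names of our network.

import torch
import torch.nn as nn

def widen_conv_in(old_conv: nn.Conv2d, extra_in_channels: int) -> nn.Conv2d:
    # New first conv over [noisy latent, normal latent] concatenated on channels;
    # copy the pretrained weights and zero the slice that sees the new condition.
    new_conv = nn.Conv2d(
        old_conv.in_channels + extra_in_channels,
        old_conv.out_channels,
        kernel_size=old_conv.kernel_size,
        stride=old_conv.stride,
        padding=old_conv.padding,
        bias=old_conv.bias is not None,
    )
    with torch.no_grad():
        new_conv.weight.zero_()
        new_conv.weight[:, : old_conv.in_channels] = old_conv.weight
        if old_conv.bias is not None:
            new_conv.bias.copy_(old_conv.bias)
    return new_conv

# e.g. unet.conv_in = widen_conv_in(unet.conv_in, extra_in_channels=4)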
We further apply super-resolution, i.e., Real-ESRGAN <cit.> on the generated texture maps to increase more appearance details, resulting in a 4 × resolution upscale from 256×256 to 1024×1024.After obtaining the high-resolution RGB images, the final stage is to project these images to the shape geometry and generate a global texture. We perform UV parameterization and the Poisson blending algorithm <cit.> to alleviate multi-view inconsistency. Iterative updating. In most cases, a single run of the pipeline is enough to generate high-quality results. However, since we generate 4-view information at once, there may be some areas unobserved in the generated RGB images (such as the top area of the object), and a texture refinement is required. To address this issue, we could iteratively update the generated images by using popular inpainting <cit.> pipelines in diffusion models to refine the generated textures. By computing a visibility mask at a new camera viewpoint, the invisible areas could be generated given a certain noise strength. During experiments, we found that only 1 or 2 iterations are enough to inpaint the unseen areas.§ IMPLEMENTATION DETAILSIn the following, we describe the aspects relevant to our system implementation details: dataset preparation in Sec. <ref> and training setups in Sec. <ref>. §.§ Dataset PreparationWe use the Objaverse <cit.> dataset for 2.5D training data generation, which is a large-scale 3D object dataset containing 800K high-quality models. We use the captions provided by cap3d <cit.> as text prompts. We filter the dataset by sorting the CLIP scores and selecting the top 500K objects with high text-image consistency. Each object is firstly normalized at the center, and we render the scene from 32 viewpoints uniformly distributed in azimuth angles between [0, 360]. The elevation and camera FoV are set to 0 and 60, respectively. The camera distance from the origin is set to 1.5 times the focal length in normalized device coordinates. We use a composition of random lighting selected from point lighting, sun lighting, spot lighting, and area lighting. RGB images and normal maps are rendered for each object.Besides, we also adopt a large-scale 2D image-text dataset to improve the generation diversity. Specifically, we use the COYO-700M dataset <cit.>, which also contains metadata like resolution and CLIP scores <cit.>, etc. We filter the dataset with both width and height greater than 512, aesthetic scores <cit.> greater than 5, and watermark scores lower than 0.5, which results in a 65M-size subset. Though the filtered dataset is reduced to 1/10 of the original size, it is still larger than the 3D dataset. Actually, we do not use the whole filtered dataset during training. §.§ Training SetupAs introduced above, we train the model with both 2.5D rendered images and natural images, with a probability of 80% to select the former. This makes the instances seen in each batch nearly equal for two kinds of data. We use the Stable Diffusion v2.1 base model as our backbone model and fine-tune the latent UNet only for another 50K steps with 1000 warmup steps. Similar to Zero123 <cit.>, we use an image sample size of 256 × 256 for better and faster training convergence. The learning rate is set to 1e-5. We drop the text prompt conditioning with a probability of 15% and apply a noise offset of 0.05. 
The full training procedure is conducted on 32 NVIDIA A100 80G GPUs (800K steps for the text-to-normal model and 18K steps for the normal-conditioned RGB model, which takes around 80 and 20 hours separately). The batch size is set to 45 on each GPU which leads to a total batch size of 1440.§ EXPERIMENTSIn the following, we represent the experiment results of our approach and evaluate the design of our system, including qualitative comparisons against state-of-the-art techniques and quantitative evaluations of model performances.§.§ Text-to-3D contents generationGiven a random input text prompt, the proposed system is able to generate a high-fidelity 3D triangle mesh. Fig. <ref> shows a gallery of our generation results. Generated multi-view normal and RGB images are also presented beside the 3D mesh. Our multi-view normal diffusion model is able to generate high-quality normal maps with expressive geometry details, and the normal-conditioned RGB diffusion model also generates detailed textures aligned with input normal maps, which validates the effectiveness of our cross-view attention design. All prompts used are unseen during training, which proves the generalization ability. §.§ Qualitative and Quantitative EvaluationFinal 3D meshes In this section we compare our method with SDS-based methods including DreamFusion <cit.>, Fantasia3D <cit.>, and MVDream <cit.>. We also compare with the direct 3D generation methods including Point-E <cit.> and Shap-E <cit.>. The text prompts are provided from DreamFusion, which were unseen during the fine-tuning for MVDream and ours. For a fair comparison, we further extract meshes from the implicit representations used by other methods. Fig. <ref> illustrates qualitative comparisons of the generated 3D models. It is clearly found that Point-E and Shap-E fail to generate reasonable text-aligned results. These direct 3D-based generation methods were trained on the relatively small 3D dataset compared to large-scale 2D text-image datasets, leading to poor generalization ability. Besides, DreamFusion and Fantasia3D suffer from the multi-face problem, while the results from the latter contain more details because of the supervision on geometry only. The rest two methods are 3D-aware so are able to produce reasonable 3D topology. MVDream generally achieves better visual quality, while our results are more consistent with the text prompts and take much less time to generate (35 mins v.s. 10s). Sample diversity Here, we compare the diversity of generated samples with MVDream. In this experiment, We generate 10 samples with the same prompt but different seeds. Fig. <ref> presents the experiment results. Although both multi-view diffusion models are regularized by large-scale image-caption datasets to prevent overfitting on the 3D dataset, the results from MVDream still collapse to a single type because of the mode-seeking nature of SDS. On the contrary, our method can still keep the content diversity of the pre-trained diffusion model because the construction of 3D models is independent of the diffusion process, which would faithfully follow the random denoising process. Quantitative evaluation In the following, we quantitatively evaluate image generation quality and the text-image consistency of the proposed two novel multi-view diffusion models. Table <ref> demonstrates the evaluation results. 
Specifically, Frechet Inception Distance (FID) <cit.> and Inception Score (IS) <cit.> are adopted to measure the generation image quality and CLIP score cosine similarity <cit.> is calculated to measure the text-image consistency. We randomly select 2000 subjects as well as their multi-view RGB and normal renderings in the Objaverse <cit.> dataset as our evaluation database. FID and IS are calculated independently of viewpoints while the CLIP similarity is selected as the max value across all 4-view scores. In general, we could find that the proposed model achieves similar or even better results compared to the groundtruth renderings, which proves the high image quality and image-text consistency. Besides, we also evaluate the training strategies used in multi-view normal diffusion training, including using 2D large-scale dataset joint training, using higher consistency but fewer 3D subjects for training. It is clearly shown that the performance drastically drops when training without a 2D wild dataset injection. We believe that this is because fine-tuning purely multi-view normal data, would lead to a catastrophic forgetting of the original learned distribution and leads to poor learning ability. Training using fewer but higher text-consistent data leads to higher IS scores and CLIP similarities, as well as FID scores. We believe this is caused by the color distribution difference between wild 2D images and 3D dataset renderings. Though this model achieves slightly better results in the other two scores, in practice we found this model has lower generalization ability and diversity compared to the model that used more 3D data. § LIMITATIONS AND FUTURE WORK Limited view numbers Because the number of views is small, areas such as top, bottom and concavity cannot be fully observed, and thus their geometry or appearance cannot be well reconstructed. Apart from the iterative update scheme, the multi-view diffusion model can be further extend to handle more views.Texture quality For the appearance generation, we choose to finetune a multi-view normal-conditioned diffusion model for efficiency. However, the ability of generating realistic images is degraded because of the texture quality of the 3D training samples and their rendering quality. Apart from further enhancing the training samples, we can also apply the state-of-the-art texture generation systems <cit.> for non-time-sensitive tasks. § CONCLUSIONWe propose to perform fast text-to-3D generation by fine-tuning a multi-view 2.5D diffusion from pre-trained RGB diffusion models. To learn multi-view consistency, the model is fine-tuned on multi-view normal map renderings from 3D dataset, with cross-view attention as the structural guarantee. After the simultaneous generation of multi-view normal maps, 3D models are obtained by deforming meshes by differentiable rasterization. Finally, appearance is generated by multi-view normal-conditioned RGB diffusion. Compare with the slow SDS-based methods, our whole generation pipeline can produce diverse and mode-seeking-free 3D models in 10 seconds. And compared to direct 3D generation methods, our system demonstrates strong generalization to complex content and ability to generation fine details. 
Extensive experiments are conducted to show that our method is capable of fast generation of realistic, complex and diverse 3D models.ieeenat_fullname [ —— Supplementary Material —— ]Due to the space limitation of the main paper, we provide supplementary materials, including a project page and a PDF file to give an auxiliary demonstration. The project page presents video results for better visualization. In this file, we will present a detailed description of the implementation details, additional evaluation and discussions, and more results.§ IMPLEMENTATION DETAILSIn this section, we describe more implementation details of the proposed system, including data preparation, iterative updating, and inference time. §.§ Dataset Preparation We use the Objaverse <cit.> dataset for 2.5D training data generation, which is a large-scale 3D object dataset containing 800K high-quality models. We use the captions provided by Cap3d <cit.> as text prompts, which is the best 3D dataset caption method currently. Each object is firstly normalized at the center within a bounding box [-1, 1]^3, and we render the scene from 32 viewpoints uniformly distributed in azimuth angles between [0^∘, 360^∘]. The elevation is set to 0^∘ and camera FoV is set to 60^∘. The camera distance from the origin (0, 0, 0) is set to a fixed distance equal to 1.5 times the focal length in normalized device coordinates. For lighting, we use a composition of random lighting selected from point lighting, sun lighting, spot lighting, and area lighting. RGB images and normal maps in world coordinates are rendered using a rasterizer-based renderer for each object.Besides, we also adopt a large-scale 2D image-text dataset to improve the generation diversity following mvdream <cit.>. Specifically, we use the COYO-700M dataset <cit.>, which also contains metadata like resolution and CLIP scores <cit.>, etc. We filter the dataset with both width and height greater than 512, aesthetic scores <cit.> greater than 5, and watermark scores lower than 0.5, which results in a 65M-size subset. Though the filtered dataset is reduced to 1/10 of the original size, it is still much larger than the 3D dataset. Actually, we do not consume the whole dataset within the designated training time. In the following, we describe the specific dataset usage for two proposed multi-view diffusion model training. Text-to-normal multi-view diffusion model. As we want to generate high-quality and multi-view consistent normal maps from a single text prompt input, we are able to use all valid normal map renderings in Objaverse <cit.>. We filter the dataset by sorting the CLIP similarities between RGB images and captions and selecting the top 500K objects to keep a high text-image consistency. We take a similar 2D & 3D joint training strategy with MVDream <cit.>, where 3D data and 2D data are randomly chosen in each batch with a probability of 80% and 20%, respectively. This trick can guarantee the same expected number of instances to be seen in each training step because 4 views are from the same object for 3D dataset. Also for 3D data, we add a special tag normal map to the end of captions to indicate the normal map prediction task. During inference, we also add this postfix to the prompt for normal map predictions.Normal conditioned RGB multi-view diffusion model. Some samples in the Objaverse dataset has cartoonish appearance, and we would like to filter out these samples. 
Specifically, we first filter the dataset to obtain renderings whose aesthetic scores are larger than 5, which results in a 130K subset. Then, we compute the CLIP scores between the remaining images and two pre-defined positive and negative quality description prompts [Positive prompt: realistic, 4K, vivid, highly detailed, high resolution, high quality, photography, HD, HQ, full color;Negative prompt: cartoon, flat color, simple texture, ugly, dark, bad anatomy, blurry, pixelated obscure, unnatural colors, poor lighting, dull, unclear, cropped, lowres, low quality, artifacts, duplicate, morbid, mutilated, poorly drawn face, deformed, dehydrated, bad proportions]. We compute the ratio of the positive scores and negative scores and select the top 10K data as our training dataset. We found that this strategy successfully selected the high-quality renderings in the dataset, and works better than training on all rendering data.§.§ Iterative UpdatingIn most cases, a single run of the pipeline is enough to generate high-quality results. However, for some topologies, there may be large areas unobserved by the 4 perspectives (e.g., large planar areas on the top of the object). To address this issue, we could iteratively update rendered images from novel views by the inpainting <cit.> pipeline to refine the texture. Specifically, we compute an inpainting mask indicating the unseen areas at a new camera viewpoint, and the invisible areas are edited given a certain noise strength. In Fig. <ref>, we present the results of the iterative updating. In this example, we inpaint the top views of the generated bread and fuse the resulted RGB images back to the generated model. As shown in the figure, the top areas of the bread are unseen during the first generation, and we inpaint the unseen areas in the second run. The inpainting mask is used to ensure that only the unseen areas would be modified, while other regions are kept unchanged. The final generated model (Fig <ref> (e)) demonstrates the effectiveness of the strategy. During experiments, we found that 1 or 2 iterations suffice to recover the unseen areas. §.§ Inference TimeCompared to SDS optimization-based methods which typically take over half an hour, our method is efficient enough to generate high-quality results in 10 seconds: On a single Nvidia A100 GPU, the denoising process of the two multi-view diffusion models each takes around 2.5 seconds for 50 DDIM steps. The explicit geometry optimization takes around 2 ∼ 3 seconds for 200 optimization steps, which depends on the triangle mesh complexity. The final texture fusion takes around 1.5 seconds. the efficiency and diversity of the proposed system enables selection from batch generated samples, which greatly increases the practicality for prototyping and digital content creation. For iterative updating, typically 1-3 passes are enough to paint the unseen areas and can be finished in less than one minute, which is still much faster than the previous SDS optimization-based methods. § GEOMETRY-APPEARANCE DISENTANGLED GENERATION Due to the two-stage setting in the proposed method, one could generate random RGB images while keeping the geometry fixed, which enables geometry-appearance disentangled generation and offers better control over the generation process. Fig. <ref> demonstrates the disentangled generation results. 
It demonstrates that users can fix the satisfying generated geometry and then proceed to appearance generation.§ DISCUSSIONSIn the following, we provide a detailed discussion about the settings of our system, including the two-stage sequential models, and normal predictions v.s. depth predictions. Two-stage sequential architecture. As demonstrated in Sec. <ref>, a two-stage sequential architecture naturally enables the geometry-appearance disentangled generation and provides more freedom on both geometry and appearance generation. Besides, using a combined pipeline also leads to a double GPU memory requirement compared to the sequential setting, which could become a great burden under the multi-view setting. This challenge becomes much more severe when one increases the spatial resolution of the diffusion model, e.g. from 256 to 512 or even 1024. Finally, the sequential model has better multi-view and geometry-appearance consistency. Instead of the generation normal maps, we use the ones rendered from the optimized mesh for the texture diffusion model input. On the one hand, the rendered normal maps are guaranteed to be consistent. On the other hand, it provides better alignment between the generated RGB images and the actual geometry. For the above reasons, our system takes the two-stage sequential as our architecture.Normal v.s. Depth. Another alternative choice for our system is to use depth instead of normal. Because normal is the first-order derivative of the depth, it is free from scale ambiguity and provides a higher tolerance for multi-view inconsistency. Optimizing depth value directly requires much higher multi-view accuracy and therefore decreases the robustness of the geometry optimization system. Previous work <cit.> also found that using normal priors performs better than the depth priors, which also supports our assumption.Secondly, normal serves as a better conditioning signal for RGB generation because it generally has better alignment than depth. For example, sharp normal changes result in RGB discontinuity because of shading, but in this case depth may still be smooth.Therefore, we adopted normal as our shape representations and found it worked well. § MORE RESULTSWe present more results of the proposed method on the following pages, including the various generation ability (Fig <ref>) and more generation results (Fig. <ref>, <ref>, <ref>).§ ADDITIONAL VIDEO RESULTSWe present video results of the proposed method on the project page. Please check it for better visual results. | http://arxiv.org/abs/2311.15980v1 | {
"authors": [
"Yuanxun Lu",
"Jingyang Zhang",
"Shiwei Li",
"Tian Fang",
"David McKinnon",
"Yanghai Tsin",
"Long Quan",
"Xun Cao",
"Yao Yao"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231127162654",
"title": "Direct2.5: Diverse Text-to-3D Generation via Multi-view 2.5D Diffusion"
} |
[ [ 27 November 2024 ==================== This paper proposes three novel test procedures that yield valid inference in an environment with many weak instrumental variables (MWIV). It is observed that the t statistic of the jackknife instrumental variable estimator (JIVE) has an asymptotic distribution that is identical to the two-stage-least squares (TSLS) t statistic in the just-identified environment. Consequently, test procedures that were valid for TSLS t are also valid for the JIVE t. Two such procedures, i.e., VtF and conditional Wald, are adapted directly. By exploiting a feature of MWIV environments, a third, more powerful, one-sided VtF-based test procedure can be obtained. § INTRODUCTION Consider an instrumental variable (IV) model where unit i has outcome Y_i, scalar endogenous variable X_i and a vector of instruments Z_i. With Π_i=E[X_i| Z_i],Y_i=β X_i+e_iX_i=Π_i+v_iwhere the object of interest is β. This paper is interested in the environment where K:=dim(Z_i) is large and cov(X_i, Z_i)0 though the instrument is plausibly weak. Let Z denote the N× K data matrix of instruments so P:=Z(Z^'Z)^-1Z^' denotes the projection matrix, with P_ij denoting the (i,j)th element of P.The concentration parameter S := ∑_i∑_j iP_ijΠ_iΠ_j/√(Var (∑_i∑_j iP_ijX_iX_j) ) is a normalized object that characterizes how strong the instruments are. A setting with many weak instrument variables (MWIV) is one that is robust when S →∞ may not hold. This setting with MWIV is widely applicable. In this environment first proposed by <cit.>, the number of instrumental variables (IV) increases as the sample size increases. Such an environment can arise by design of the instrument. For instance, the judge instrument design uses indicators for the randomly assigned judges as instruments, so as the sample size increases the number of judges (and hence instruments) also increases. This design has applications in studying the effects of incarceration and detention (e.g., <cit.>; <cit.>; <cit.>; <cit.>), the effect of bankruptcy (e.g, <cit.>), the effect of disability benefits (e.g., <cit.>); and the effect of foster care due to randomly assigned social workers (e.g., <cit.>). The shift-share design also generates many instruments by construction (e.g., <cit.>; <cit.>). The MWIV environment may also arise from constructed instruments. Authors may construct 150 instruments by interacting the quarter of birth with the state of birth (e.g., <cit.>). There is a large literature arguing that the standard two-stage-least-squares (TSLS) procedure for IV is biased and yields invalid inference in the many instruments environment (e.g., <cit.>; <cit.>; <cit.>). Consequently, a literature advocating for jackknife procedures has developed (e.g., <cit.>; <cit.>).In particular, the jackknife instrumental variable estimator (JIVE) proposed by <cit.> (jiv2 in their paper) has gained much traction in empirical studies. This estimator, often referred to as the “jackknife" or “leave-out" estimator, has been used by papers including <cit.>, <cit.>, <cit.> and <cit.>.The JIVE estimator first constructs an instrument X̂_i := Z_i^'π̂_-i, where π̂_-i is the coefficient on Z when regressing X on Z using all units except the ith unit. Then, using X̂ to denote the data matrix of X̂_i's and Y to denote the outcomes, β̂_JIVE = (X̂^' X)^-1X̂^' Y, which essentially treats X̂ as the instrument in a regular TSLS procedure. The popularity of JIVE can partially be attributed to its interpretability. 
Once X̂ is viewed as a debiased constructed instrument, JIVE can be viewed as an extension of the familiar TSLS approach to the many instruments environment.Further, there are theoretical reasons to use JIVE. With a heteroskedastic model, <cit.> showed how other estimators that are robust to many instruments, such as the limited information maximum likelihood (LIML) and the bias-corrected TSLS (BTSLS), require S/ √(K)→∞ for consistency. In contrast, <cit.> showed that the JIVE remains consistent S→∞, so JIVE is more robust to weak instruments than BTSLS and LIML. Further, <cit.> showed how, even with heterogeneous treatment effects, the JIVE estimand is a weighted average of treatment effects. While estimation using JIVE is justified, inference is a more difficult issue. The JIVE t-ratio t^2 = (β̂_JIVE-β_0)/ var(β̂_JIVE) is commonly used for testing the null hypothesis that H_0: β = β_0. However, there are two issues that arise with t^2.First, var(β̂_JIVE) cannot be calculated in the same way as the variance estimator in TSLS. Since X̂ is a constructed object, we cannot simply calculate the variance as if X̂ were the instrument. Hence, a different variance estimator is required. Namely, building on <cit.>, <cit.> derived a variance estimator V̂ that is consistent even under the alternative.The second issue concerns the critical value used for conducting the test. If S →∞, then comparing t^2 with χ^2 (and hence t with the standard normal) is justified. However, when S does not diverge, t^2 has an asymptotic distribution that is non-standard.Consequently, a literature has developed on inference that is robust to both having many instruments and when the instruments are weak. These inference procedures include <cit.> and <cit.> that are based on the method of <cit.> adapted to MWIV, <cit.> based on the conditional likelihood ratio, and <cit.> that is based on the LM statistic.These papers propose use test statistics that are different from t^2 so that their test statistics are asymptotically normal under the null. Hence, having an appropriate test procedure based on the JIVE t is still an open issue — a gap that this paper aims to fill.Together with the development of MWIV procedures, there is also recent advancement in just-identified IV environments. The VtF procedure proposed in <cit.>, henceforth LMMPY, is found to be more powerful than many existing procedures, and can outperform many of them when considering lengths of confidence intervals (CI). Then, there is an open question on whether their novel method of curve construction applies to MWIV, and if so, whether these power properties remain. This paper first observes that the JIVE t in the MWIV environment has the same asymptotic distribution as the TSLS t statistic in the just-identified instrumental variable (IV) environment. This observation builds on <cit.> who had an expression for the JIVE t when finding an analog of a first-stage screening procedure of <cit.>.The observation implies that any inference procedure valid for the TSLS t is also valid for the JIVE t.By defining the analogous terms appropriately, the conditional Wald procedure of <cit.> and the VtF procedure of LMMPY can both be implemented for the JIVE t. In particular, the same critical values can be used. Beyond the two adaptations of existing procedures into the new environment, this paper develops a third procedure that exploits a unique feature of the MWIV environment. 
In MWIV environments,the sign of E[X_iX_j] for observations i and j with similar instrumental values is often known. For instance, in the judge environment where treatment (e.g., incarceration) is endogenous and the instrument (i.e., judges) are randomly assigned, observations i and j assigned to the same judge have E[X_iX_j]≥0. By exploiting this information, this paper develops a third procedure that builds on VtF that is even more powerful. Notably, even if the assumption that E[X_iX_j]≥0 does not hold, the proposed test is still valid — it merely has less power.The VtF-based approaches proposed in this paper retains the interpretability of the t^2 statistic based on JIVE. To implement the procedure after calculating β̂_JIVE, practitioners merely have to use the appropriate variance estimator V̂ to construct t^2 and the appropriate critical values, either from LMMPY or the one-sided values in this paper.§ SETTINGLet Y_i,X_i,Z_i denote the outcome, the endogenous variable, and the vector of instruments respectively. There are N observations and K instruments. The many K instruments could arise from having K different judges, for instance. Consider the linear IV model in <Ref>. For some random variables A and B, this paper uses the notation Q_AB := (1/√(K)) ∑_i=1^N∑_j iP_ij A_i B_j. In particular, (Q_ee,Q_Xe,Q_XX)^':=1/√(K)∑_i=1^N∑_j iP_ij(e_ie_j,X_ie_j,X_iX_j)^' Due to the setting, e_i:=Y_i-X_iβ is defined as the residual with respect to the true β. This paper makes a high-level assumption on the asymptotic distribution. Let μ^2:=∑_i∑_j iP_ijΠ_iΠ_j.[[Q_ee;Q_Xe; Q_XX-μ^2/√(K) ]]N(0,[[Φ Σ_12 Σ_13; Σ_12Ψτ; Σ_13τΥ ]])The asymptotic distribution of Assumption <ref> is immediate from the structural model of <Ref> and the central limit theorem of <cit.> once some regularity conditions are satisfied. This paper makes a high level assumption on the distribution to abstract from the discussion of these regularity conditions. When doing a hypothesis test of H_0:β=β_0, the hypothesized β_0 is used instead. Let Δ:=β-β_0 denote the divergence of the hypothesized value from the true value. Due to <cit.>, Assumption <ref> implies that, for e_i(β_0):=Y_i-β_0X_i,[[ Q_e(β_0)e(β_0) - Δ^2 μ^2/√(K); Q_Xe(β_0) - Δμ^2/√(K); Q_XX - μ^2/√(K) ]]N([[ 0; 0; 0 ]],[[Φ(β_0) Σ_12(β_0) Σ_13(β_0); Σ_12(β_0)Ψ(β_0)τ(β_0); Σ_13(β_0)τ(β_0) Υ ]]) where, Φ(β_0) =Δ^4Υ+4Δ^3τ+Δ^2(4Ψ+2Σ_13)+4ΔΣ_12+Φ Σ_12(β_0) =Δ^3Υ+3Δ^2τ+Δ(2Ψ+Σ_13)+Σ_12 Σ_13(β_0) =Δ^2Υ+2Δτ+Σ_13 Ψ(β_0) =Δ^2Υ+2Δτ+Ψ τ(β_0) =ΔΥ+τ Then, defining S:=μ^2/√(KΥ) as the concentration parameter (which is numerically equivalent to how S was written in the introduction),Q(β_0):=[[ AR(β_0);ξ(β_0); ν ]] := [[ Q_e(β_0)e(β_0)/√(Φ(β_0));Q_Xe(β_0)/√(Ψ(β_0));Q_XX/√(Υ) ]]N([[ Δ^2S√(Υ/Φ(β_0));Δ S√(Υ/Ψ(β_0)); S ]],V(Q(β_0)))whereV(Q(β_0)) := [[ 1 Σ_12(β_0)/√(Φ(β_0)Ψ(β_0))Σ_13(β_0)/√(Φ(β_0)Υ); Σ_12(β_0)/√(Φ(β_0)Ψ(β_0)) 1 τ(β_0)/√(Ψ(β_0)Υ);Σ_13(β_0)/√(Φ(β_0)Υ) τ(β_0)/√(Ψ(β_0)Υ) 1 ]] In the MWIV environment, the concentration parameter S does not diverge asymptotically. Conversely, in an environment with “strong" instruments, S→∞. In light of the above asymptotic distribution, and how the variance objects can be consistently estimated, the normalized statistics in Q(β_0), i.e., (AR(β_0),ξ(β_0),ν), have been used for inference on β. In particular, <cit.> used the AR(β_0) statistic, while <cit.> used the ξ(β_0) statistic as their LM procedure. Under the null, both test statistics are normally distributed, so an appropriate critical value from the standard normal distribution can be used. 
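For concreteness, the leave-out quadratic forms above are straightforward to compute. The following is a minimal numpy sketch, assuming Z has full column rank and the projection matrix fits in memory; the function names are chosen for illustration only.

import numpy as np

def projection_matrix(Z):
    # P = Z (Z'Z)^{-1} Z'; dense computation, fine for moderate N.
    return Z @ np.linalg.solve(Z.T @ Z, Z.T)

def leave_out_Q(P, A, B, K):
    # Q_AB = (1/sqrt(K)) * sum_i sum_{j != i} P_ij A_i B_j, i.e. the bilinear
    # form with the diagonal of P removed.
    P0 = P - np.diag(np.diag(P))
    return (A @ P0 @ B) / np.sqrt(K)

# With e0 = Y - beta0 * X and K = Z.shape[1]:
#   Q_ee = leave_out_Q(P, e0, e0, K)
#   Q_Xe = leave_out_Q(P, X, e0, K)
#   Q_XX = leave_out_Q(P, X, X, K)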
Since Δ=0 under the null, characterizing the alternative distribution is unnecessary for developing a valid test. However, characterizing such a distribution for some β_0 in general is helpful for power comparisons. In this characterization, although Q_e(β_0)e(β_0), Q_Xe(β_0), Q_XX can be obtained immediately from the data, objects in the variance, such as Υ,Ψ(β_0) and Φ(β_0), cannot be feasibly obtained. Hence, Q(β_0) is treated as an asymptotic object that is not feasible. Nonetheless, there are feasible estimators for the variance objects that are consistent even under the alternative. Following <cit.>, let:M :=I-P P̃_ij^2:=P_ij^2/M_iiM_jj+M_ij^2so M is the annihilator matrix, and P̃_ij adjusts the P_ij object. Then, the feasible and consistent variance estimators are:Υ̂:=1/K∑_i(∑_j iP_ijX_j)^2X_iM_iX/M_ii+1/K∑_i∑_j iP̃_ij^2M_iXX_iM_jXX_j τ̂(β_0) :=1/2[1/K∑_i(∑_j iP_ijX_j)^2(X_iM_ie(β_0)/M_ii +e_i(β_0)M_iX/M_ii) +1/K∑_i∑_j iP̃_ij^2(M_iXX_iM_jXe_j(β_0)+M_iXe_i(β_0)M_jXX_j)] Ψ̂(β_0) :=1/K∑_i(∑_j iP_ijX_j)^2e_i(β_0)M_ie(β_0)/M_ii+1/K∑_i∑_j iP̃_ij^2M_iXe_i(β_0)M_jXe_j(β_0) Define the normalized statistics as ξ̂(β_0) := Q_Xe(β_0)/√(Ψ(β_0)), ν̂ := Q_XX/√(Υ̂), and ρ̂(β_0) := τ̂(β_0)/√(Ψ̂(β_0)Υ̂), where ρ(β_0):=τ(β_0)/√(Ψ(β_0)Υ). I make a high-level assumption on these variance estimators that they converge to the true variance objects. The primitives of the assumption can be justified by <cit.>, who similarly used analogs of these objects.Ψ̂(β_0) Ψ(β_0), Υ̂Υ and τ̂(β_0) τ(β_0). A corollary of of the assumption is that ρ̂ (β_0) ρ (β_0) due to the continuous mapping theorem. As a special case of the setup, we can consider the judge design. In particular, in the judge design without covariates, it can be shown that E[ν]≥0. In this environment, Z_i are judge indicators. Let k(i) denote the judge k that i is matched to, and π denote the vector of values that E[X_i|Z_i] can take for the respective judges, i.e., π_k=E[X_i|k(i)=k].X_i=Z_i^'π+v_i Since P_ij=1{ k(i)=k(j)} /N_k(i), where N_k is the number of observations assigned to judge k,Q_XX=1/√(K)∑_i=1^N∑_j i1{ k(i)=k(j)}/N_k(i)X_iX_j For any two individuals i,j matched to the same judge, we have:E[X_iX_j] =E[(π_k+v_i)(π_k+v_j)]=π_k^2≥0Hence, S=E[ν]≥ 0. This observation is helpful in motivating the test procedure.§ PROCEDURE§.§ JIVE t statistic An existing procedure that is robust to having many instruments (albeit not when they are weak) is the jackknife instrumental variables estimator (JIVE). In particular, using a leave-one-out approach to eliminate bias, <cit.> proposes using β̂_JIVE=∑_i∑_j iP_ijY_iX_j/∑_i∑_j iP_ijX_iX_j The estimator described in the introduction is numerically equivalent to the expression in <Ref>. To obtain a variance estimator for β̂_JIVE that is consistent, <cit.> proposed an analogous jackknife approach, which <cit.> subsequently refined. Using ê_i := Y_i - X_i β̂_JIVE to denote the JIVE residual, the variance estimator is given by:V̂=∑_i(∑_j iP_ijX_j)^2ê_iM_iê/M_ii+∑_i∑_j iP̃_ij^2M_iXê_iM_jXê_j/(∑_i∑_j iP_ijX_iX_j)^2 Then, the JIVE inference procedure proposed in the literature uses the following t-statistic:t̂_JIVE^2 =(β̂_JIVE-β_0)^2/V̂ As <cit.> have shown, if the concentration parameter diverges asymptotically i.e., S→∞, then t̂_JIVE will have a standard normal distribution asymptotically, so using ± 1.96 critical values for t̂_JIVE will be valid. With many instruments, S→∞ is analogous to having a strong instrument. 
But when we have many weak instruments, S does not diverge, so using the ± 1.96 critical values for t̂_JIVE will not result in valid inference. Nonetheless, t̂_JIVE can algebraically be expressed as a function of ξ̂(β_0),ρ̂(β_0) and ν̂, which are feasible normalized statistics defined in Section 2. Since the variance estimators used in these feasible objects are consistent, these statistics converge to ξ(β_0),ρ(β_0) and ν respectively, which have a known joint distribution under the asymptotic environment. Under Assumption <ref>,t̂_JIVE^2 =ξ̂(β_0)^2/1-2ξ̂(β_0)/ν̂ρ̂(β_0)+ξ̂(β_0)^2/ν̂^2 =ξ(β_0)^2(1+o_P(1))/1-2ξ(β_0)/νρ(β_0)+ξ(β_0)^2/ν^2 =: t^2_JIVE (1+o_P(1))This lemma builds on <cit.>, who had a similar expression in their theorem 5. First, Lemma <ref> shows a numerical equivalence between the various objects that can be feasibly calculated, rather than just an asymptotic result. Second, the asymptotic result in Lemma <ref> holds not just under the null, but also for any β_0 β, and this result is immediate from Assumption <ref>.By inspection, the t_JIVE statistic has the same distribution as the two-stage-least-squares t_TSLS for just-identified IV setup considered in LMMPY. In particular, ρ(β_0) has a similar interpretation in both environments. ν here takes the role of their f and ξ(β_0) here takes the role of their t_AR(β_0). Finally, S in MWIV takes the place of their f_0. Hence, when proposing a <cit.> analog, <cit.> used ν^2 as the analog of the F-statistic of the first-stage regression in the just-identified environment. Since the asymptotic distribution of t̂_JIVE^2 is known under the null, it is possible to construct a critical value function for t̂_JIVE^2 that is robust to MWIV. The VtF-based critical value function c(.) proposed in this paper is only a function of ν and ρ(β_0), which can be consistently estimated from the data. Then, for testing H_0:β=β_0, reject if t̂^2_JIVE≥ c(ν,ρ(β_0)). For all S,ρ and a size α test, the critical value function will satisfy:(t_JIVE^2>c(ν,ρ(β_0))) = α§.§ Description of curve construction There are (at least) two possible ways to construct the curve. The first way is a two-sided construction, which is identical to the VtF procedure in LMMPY. Since t_JIVE^2 has the same asymptotic distribution as t_TSLS^2, any inference procedure that is valid for the just-identified t_TSLS^2 is also valid in this context. Hence, if we are agnostic about S=E[ν], we can immediately use the critical values calculated in LMMPY. The second way is a one-sided construction, which I will call the one-sided VtF (VtFo). In the judge environment with many instruments, as seen in <Ref>, an argument from the setting shows that there may be good reason to believe that S≥0. By using this piece of information in the curve construction, we may be able to get a more powerful test. The cost is that whenever we observe ν<0, we cannot reject the null, and must conclude that the data is uninformative. The assumption of S≥0 is also a more natural way to think about instruments in the just-identified environment, because researchers often justify the relevance condition by arguing that the instrument Z affects the endogenous variable X in a particular direction. Construction of a curve assuming S ≥ 0 in the just-identified environment may hence also be of independent interest. 
Considering how the AR(β_0) statistic is asymptotically normal and, for a 5% test, <cit.> proposed using 1.645 instead of ± 1.96 as the critical value for AR(β_0), VtFo is essential for comparing the recently-developed VtF with the existing literature on MWIV. The remainder of this subsection outlines the one-sided VtF critical value curve construction, while details are relegated to <Ref>. The method of construction is similar to the VtF in LMMPY, though the recursive equations differ. Since the curve is constructed under the null, I drop the β_0 indices without risk of ambiguity. Let T:=ν-ρξ, mimicking the Q in LMMPY. If ρ0, then ξ=1/ρ(ν-T), which means t^2=ξ^2/1-2ρξ/ν+ξ^2/ν^2 =ν^2/ρ^2(ν-T)^2/ν^2-2ρν(ν-T)1/ρ+(ν-T)^2/ρ^2 =ν^2(ν-T)^2/ρ^2T^2+(1-ρ^2)(ν-T)^2 For notational compactness, I use:t^2(ν,T,ρ) := ν^2(ν-T)^2/ρ^2T^2+(1-ρ^2)(ν-T)^2As in LMMPY, for a given T and ρ, the plot of t^2(ν,T,ρ) against ν will be W-shaped. Suppose for now that {ν:t^2≤ c(ν,ρ)} =(-∞,ν̅] for some ν̅ given T and ρ. Since ν|T∼ N(T,ρ^2), for a test of correct size α, we must have: ν̅^2(ν̅-T)^2/ρ^2T^2+(1-ρ^2)(ν̅-T)^2-c(ν̅,ρ) =0Φ(ν̅-T/√(ρ^2))=1-αwhere, with some abuse of notation, Φ(.) is the normal CDF. By solving for T in <Ref>, we can substitute T into <Ref> to initialize the curve with a closed form solution. Where √(q):=Φ^-1(1-α),c(ν̅,ρ)=ν̅^2/ρ^2(ν̅/√(ρ^2)√(q)-1)^2+(1-ρ^2) The curve can be constructed from the fixed point ν^*=√(ρ^2q), c^*=ρ^2q/1-ρ^2.[To see that this is the fixed point, <Ref> yields ν^*-T/√(ρ^2)=Φ^-1(1-α)=√(q) so T=0, ν^*=√(ρ^2q). Further, ν^2(ν-T)^2/ρ^2T^2+(1-ρ^2)(ν-T)^2=ρ^2q/1-ρ^2. Hence, by using ν^*=√(ρ^2q),c^*=ρ^2q/1-ρ^2 in the mapping, we get back the same point, giving us the fixed point as before. ] Observe that <Ref> is quartic in ν̅ in the numerator, so it has a local maxima. As T gets larger, the middle local maxima gets larger, and eventually crosses the critical value function c(.), so that the initial assumption of {ν:t^2≤ c(ν,ρ)} =(-∞,ν̅(T,ρ)] no longer holds. This situation means that there is some ν<ν̅ such that ν^2(ν-T)^2/ρ^2T^2+(1-ρ^2)(ν-T)^2>c(ν̅,ρ). Then, the critical value function for such T values must satisfy the following four equations:ν_H^2(ν_H-T)^2/ρ^2T^2+(1-ρ^2)(ν_H-T)^2-c(ν_H,ρ) =0 ν_M^2(ν_M-T)^2/ρ^2T^2+(1-ρ^2)(ν_M-T)^2-c(ν_M,ρ) =0 ν_L^2(ν_L-T)^2/ρ^2T^2+(1-ρ^2)(ν_L-T)^2-c(ν_L,ρ) =0 Φ(ν_H-T/√(ρ^2))-Φ(ν_M-T/√(ρ^2))+Φ(ν_L-T/√(ρ^2)) =1-α where ν_L ≤ν_M ≤ν_H. We do not have a neat closed-form solution as before, but we can construct the curve iteratively, thereby tracing out the rest of the curve. Details are provided in <Ref>, which readers may wish to skip on the first reading. The critical value functions are displayed in <Ref>. Many features of this VtFo curves are similar to the VtF curves of LMMPY. For any value of ρ, as ν increases, √(c) converges to the standard normal critical value of 1.96.Even though this test is motivated by a one-sided first-stage, we are still conducting a two-sided test using the t_JIVE statistic, so 1.96 is the relevant limit. When there is a low degree of endogeneity (i.e., ρ is small), √(c) lies below 1.96, so using the conventional ±1.96 factors will be conservative. For any value with |ρ| < 1, VtFo may reject the null even when ν < 1.645, because the critical value function is still defined as long as ν > ρΦ^-1(1-α). Since any inference procedure that is valid for t_TSLS is also valid for t_JIVE, the Conditional Wald (CW) approach of <cit.> is also valid in the many instruments environment. 
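The closed-form branch above is easy to reproduce numerically. The sketch below (Python with α=0.05; the helper and the numbers are ours, not the author's code) evaluates the displayed expression for c(ν̄,ρ) and confirms the fixed point (ν^*,c^*)=(√(ρ^2q), ρ^2q/(1-ρ^2)).

import numpy as np
from scipy.stats import norm

# Initial branch of the one-sided VtF curve: T solves Phi((nu - T)/|rho|) = 1 - alpha
# and is substituted into t^2(nu, T, rho), giving the closed form used below.
alpha = 0.05
q = norm.ppf(1 - alpha) ** 2                     # (Phi^{-1}(1 - alpha))^2, about 1.645^2

def c_initial(nu, rho):
    r2 = rho ** 2
    return nu ** 2 / (r2 * (nu / np.sqrt(r2 * q) - 1.0) ** 2 + (1.0 - r2))

rho = 0.6
nu_star = np.sqrt(rho ** 2 * q)                  # fixed point nu^*
print(c_initial(nu_star, rho), rho ** 2 * q / (1 - rho ** 2))   # both equal c^*

Because t_JIVE shares the asymptotic distribution of t_TSLS, critical-value surfaces computed for the just-identified case, including the conditional Wald one just mentioned, carry over without modification.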
In particular, using a test where we reject if t̂_JIVE^2>c_CW(ρ̂(β_0),T̂) is valid, where c_CW is the same critical value function as in <cit.>. §.§ Details of One-Sided Curve Construction To be precise about how the procedure works, with a given ρ, we can use the expression in <Ref> up to a value ν̃. Then, we have to switch to the three-intersection algorithm. The remainder of this subsection first describes how to find ν̃, then describes the three-intersection algorithm.To find ν̃, first fix ρ and construct a grid of T>0. For every T, construct a grid of ν to calculate t^2(ν,T,ρ) using <Ref> and c(ν,ρ) using <Ref>.For small values of T, the curves of t^2(ν,T,ρ) and c(ν,ρ) are displayed in Figure <ref>, where t^2 is the W-curve in solid line while c(ν,ρ) is the dashed line. Since the rejection rule is t^2(ν,T,ρ) > c(ν,ρ), for a given T and ρ, to find the non-rejection probability, we must integrate over the region of ν such that the dashed line lies above the solid line. Hence, {ν:t^2≤ c(ν,ρ)} =(-∞,ν̅], where ν̅ is the ν value where the two curves intersect. Thus, the non-rejection probability is given by Φ((ν̅-T)/√(ρ^2)), motivating <Ref>. Since the curves intersect, t^2(ν̅,T,ρ) = c(ν̅,ρ) at this ν̅ value, motivating <Ref>. For large values of T, as seen in <Ref>, c(ν,ρ) and t^2(ν,T,ρ) cross multiple times. Since we had a grid of T, we can find T̃, the first T value such that there are multiple crossings. This T̃ corresponds to the value where the two curves are tangent to each other at some ν, as seen in <Ref>. With a given T̃, we can use <Ref> to solve for the corresponding ν̃. When there are three intersections (as in <Ref>), we can create a grid of T values starting from T̃ where the three intersection issue first occurs. The system of equations (<ref>) to (<ref>) is motivated by the rejection rule of t^2(ν,T,ρ) > c(ν,ρ), so we want to integrate over the ν region where the solid line is below the dashed line. Starting from the smallest T in this grid, we know that ν_L ≤ν_M ≤ T ≤ν_H. Hence, c(ν_M,ρ) and c(ν_L,ρ) are known due to <Ref>. Solve <Ref> and <Ref> to obtain a pair ν_L,ν_M corresponding to the given T. Then, use ν_L,ν_M for the given T in <Ref> to obtain a ν_H > T. Use this ν_H in <Ref> to obtain a c(ν_H,ρ) value. Repeat this process for subsequent T's in the sequence. The (ν_H, c(ν_H,ρ)) pair obtained in the first iteration will eventually become a corresponding (ν_M, c(ν_M,ρ)) pair for some T that is large enough. This iterative procedure allows us to extend the curve indefinitely. § PROPERTIES §.§ Theoretical Power BoundsTo compare VtF and VtFo in the many instruments environment with existing valid procedures in the literature, I first compare their power theoretically, then compare their numerical performance in the next section. Since the LM statistic of <cit.> ξ(β_0) is entirely analogous to t_AR(β_0) in the just-identified environment, the relative performance between ξ(β_0) relative to VtF here is identical to the relative performance between t_AR(β_0) and VtF in the just-identified case of LMMPY, so the analysis is omitted for brevity. The AR(β_0) statistic of <cit.> has no immediate analog in the just-identified case, so its analysis is warranted. A useful benchmark of power is the power bound, which is the rejection probability as Δ→∞. Due to the result in <cit.>, the power bound is one minus the probability that we get an unbounded confidence set (CS). 
Hence, to characterize the power bound, it suffices to characterize the probability that the various procedures yield an unbounded CS for a given DGP. The rest of this subsection will show that the VtF procedure and the AR procedure proposed by <cit.> have the same power bound. First, consider the two-sided procedures. Based on the just-identified theory of LMMPY, t^2 yields an unbounded CS if and only if their f satisfies f^2≤ 3.84. Applying their result to our context, using the VtF for t_JIVE yields an unbounded CS when ν^2 ≤ 3.84. Turning to the <cit.> statistic, we can observe that, when using their Φ̂(β_0), AR(β_0)= Q_e(β_0)e(β_0)/√(Φ̂(β_0))= Q_YY-2Q_XYβ_0+Q_XXβ_0^2/√(B_YYYY-4B_YYYXβ_0-2B_YYXXβ_0^2+4B_YXYXβ_0^2+4B_YXXXβ_0^3+B_XXXXβ_0^4) where B_XYXY is defined as:B_XYXY=2/K∑_i∑_j iP_ij^2/M_iiM_jj+M_ij^2[X_iM_iY][X_jM_jY] Since AR(β_0) is normally distributed, if we were to use the rejection rule of AR(β_0)^2>3.84, then observe that the non-rejection region of β_0 that satisfies AR(β_0)^2 ≤ 3.84 can be written as a quartic inequality in β_0 with the leading term Q_XX^2 - 3.84 B_XXXX. Hence, the CS is unbounded when Q_XX^2 ≤ 3.84 B_XXXX. Since B_XXXXΥ, this condition is equivalent to ν^2 ≤ 3.84, which is identical to VtF. Next, consider the one-sided procedures. The acceptance rule for VtFo is t_JIVE≤ c(ν;ρ). When looking at the power bound, we consider the case when ρ(β_0)→±1, where the confidence set is the whole real line when ν≤1.645, because c(.)→∞. Hence, the power bound is one minus the probability that ν≤1.645. The (one-sided) <cit.> acceptance rule is AR(β_0) ≤ 1.645. This condition can be expanded to be:Q_YY-2Q_XYβ_0+Q_XXβ_0^2≤1.645√(B_YYYY-4B_YYYXβ_0-2B_YYXXβ_0^2+4B_YXYXβ_0^2+4B_YXXXβ_0^3+B_XXXXβ_0^4) To save on notation, the above inequality can be written as LHS≤1.645(RHS)^1/2. Since RHS is positive, we cannot reject the null whenever LHS≤0. A realization of data where Q_XX≤0 (which occurs if and only if ν≤0) is sufficient for the CS to be unbounded. To see this, when Q_XX≤0, there exists β_0^L such that for all β_0: β_0 ≤β_0^L, we have LHS≤0. If LHS>0, then we can innocuously square both sides of the inequality, and the leading quartic term is L_4 = 1.645^2B_XXXX-Q_XX^2. If L_4<0, then we have an unbounded set of β_0 that satisfy the inequality. But in this unbounded set of β_0, we can only accept β_0's such that LHS≥0. Thus, thecondition for having an unbounded set is 1.645^2B_XXXX-Q_XX^2≤0, which translates to ν≤1.645. This condition is identical to VtFo.§.§ Power Curves To calculate asymptotic power curves, I draw the asymptotic Q's to calculate power. Since Q are asymptotic statistics, this exercise abstracts from drawing a dataset (X,Y,Z). I draw the transformed (TR) object directly, and treat Υ,Ψ,Φ,τ as known.Q_TR:=[[ AR_TR;ξ_TR; ν ]]=[[ Q_ee/√(ΥΦ); Q_Xe/√(ΥΨ);ν ]]∼ N([[ 0; 0; S ]],[[1r/2r^2;r/2 1/2(1+r^2)r/2;r^2r/21 ]]) Due to the formulae in the setting for the variance objects under the alternative (i.e., τ(β_0),Φ(β_0), etc.), for any given Δ, these alternative variances can be calculated. For every draw of Q_TR, and a given Δ, I can calculate the <cit.> test statistic AR, the <cit.> test statistic ξ, correlation ρ(β_0), and assemble the t_JIVE^2. These objects are sufficient to assess whether each inference procedure rejects the null or not. For every Δ,S,r, I take 10,000 draws, and plot power curves to display the results. Since the distribution is known, we can analytically calculate the power bounds, which are also displayed in the plot. 
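Since ν∼N(S,1) in the limit, the unbounded-CS probabilities derived above have closed forms, which gives a quick cross-check on the plotted bounds; the snippet below uses S=3, matching the DGP of the figures.

from scipy.stats import norm

# Power bounds: one minus the probability of an unbounded confidence set.
# Two-sided (VtF / squared AR): unbounded when nu^2 <= 3.84, i.e. |nu| <= 1.96.
# One-sided (VtFo / one-sided AR): unbounded when nu <= 1.645.
S = 3.0
two_sided_bound = 1 - (norm.cdf(1.96 - S) - norm.cdf(-1.96 - S))
one_sided_bound = 1 - norm.cdf(1.645 - S)
print(two_sided_bound, one_sided_bound)   # the one-sided bound is higher when S > 0

These values are the horizontal asymptotes approached by the simulated power curves.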
<Ref> displays a power curve. In the data-generating process (DGP) with r=0.5, S=3, it is evident that all procedures have the correct size of 0.05 at Δ=0. The performance of ξ relative to VtF is identical to that of LMMPY in the two-sided procedures. While MS2 is somewhat less powerful than ξ, such a comparison is not fruitful as <cit.> proposed the one-sided “MS1" procedure instead. Comparing MS1 and VtFo, observe that VtFo is more powerful than MS1 on one side of the alternative and less powerful on the other. As we extend Δ out to larger values, both VtFo and MS1 converge to the power bound that was calculated theoretically. Since S>0, the power bound of the one-sided procedures is above the power bound of two-sided procedures.This paper builds on the existing literature on MWIV by using its canonical asymptotic environment. By observing that the distribution of t_JIVE is identical to t_TSLS in the environment with just-identified instruments, this paper proposes the analog of the VtF procedure from LMMPY adapted to many instruments, and adapts the procedure to accommodate environments where the sign of the expected first-stage effect is known. It is further observed that there is an analogous conditional Wald procedure that is valid. The power properties of the proposed VtF procedure are similar to those in LMMPY, suggesting that the procedure is comparable to those in the existing literature. § PROOFS First observe that (β̂_JIVE-β_0) =∑_i∑_j iP_ijY_iX_j/∑_i∑_j iP_ijX_iX_j-β_0 =∑_i∑_j iP_ij(X_iβ_0+e_i(β_0))X_j/√(K)Q_XX-β_0 =β_0√(K)Q_XX+∑_i∑_j iP_ije_i(β_0)X_j-β_0√(K)Q_XX/√(K)Q_XX =Q_Xe(β_0)/Q_XX With V̂=∑_i(∑_j iP_ijX_j)^2ê_iM_iê/M_ii+∑_i∑_j iP̃_ij^2M_iXê_iM_jXê_j/(∑_i∑_j iP_ijX_iX_j)^2 we know numerically that: ∑_i(∑_j iP_ijX_j)^2ê_iM_iê/M_ii=∑_i(∑_j iP_ijX_j)^2(e_i(β_0)-X_iQ_Xe(β_0)/Q_XX)M_i(e(β_0)-XQ_Xe(β_0)/Q_XX)/M_ii ∑_i∑_j iP̃_ij^2M_iXê_iM_jXê_j=∑_i∑_j iP̃_ij^2M_iX(e_i(β_0)-X_iQ_Xe(β_0)/Q_XX)M_jX(e_j(β_0)-X_jQ_Xe(β_0)/Q_XX) Then, by expanding V̂, V̂=1/KQ_XX^2{∑_i(∑_j iP_ijX_j)^2e_i(β_0)M_ie(β_0)/M_ii+∑_i∑_j iP̃_ij^2M_iXe_i(β_0)M_jXe_j(β_0)} -Q_Xe(β_0)/KQ_XX^3{∑_i(∑_j iP_ijX_j)^2(e_i(β_0)M_iX/M_ii+X_iM_ie_i(β_0)/M_ii)+2∑_i∑_j iP̃_ij^2M_iXe_i(β_0)M_jXX_j} +Q_Xe(β_0)^2/KQ_XX^4{∑_i(∑_j iP_ijX_j)^2X_iM_iX/M_ii+∑_i∑_j iP̃_ij^2M_iXX_iM_jXX_j} By substituting these expressions with the hat variance and covariance objects, we get: t̂_JIVE^2 = Q_Xe(β_0)^2/Ψ̂(β_0)-2Q_Xe(β_0)/Q_XXτ̂(β_0)+Q_Xe(β_0)^2/Q_XX^2Υ̂Then, with the appropriate normalizations, the numerical equivalence is immediate. Under Assumption <ref>,t̂_JIVE^2= Q_Xe(β_0)^2 (1+o_P(1))/Ψ(β_0)-2Q_Xe(β_0)/Q_XXτ(β_0)+Q_Xe(β_0)^2/Q_XX^2Υ= Q_Xe(β_0)^2/Ψ(β_0) (1+o_P(1))/1-2Q_Xe(β_0)/√(Ψ(β_0))/Q_XX/√(Υ)τ(β_0)/√(Ψ(β_0)/ √(Υ))+Q_Xe(β_0)^2/Ψ(β_0)/Q_XX^2Υ^2=ξ(β_0)^2(1+o_P(1))/1-2ξ(β_0)/νρ(β_0)+ξ(β_0)^2/ν^2 | http://arxiv.org/abs/2311.15932v1 | {
"authors": [
"Luther Yap"
],
"categories": [
"econ.EM"
],
"primary_category": "econ.EM",
"published": "20231127153932",
"title": "Valid Wald Inference with Many Weak Instruments"
} |
Newton Loebens, Instituto Federal de Mato Grosso do Sul. Aquidauana, MS 79200-000 Brazil. Telephone: (67) 3240-1600. [email protected]

Continuous-time open quantum walks in one dimension: matrix-valued orthogonal polynomials and Lindblad generators

January 14, 2024

We study continuous-time open quantum walks in one dimension through a matrix representation, focusing on nearest-neighbor transitions for which an associated weight matrix exists. Statistics such as site recurrence are studied in terms of matrix-valued orthogonal polynomials and explicit calculations are obtained for classes of Lindblad generators that model quantum versions of birth-death processes. Emphasis is given to the technical distinction between the cases of a finite or infinite number of vertices. Recent results for open quantum walks are adapted in order to apply the folding trick to continuous-time birth-death chains on the integers. Finally, we investigate the matrix-valued Stieltjes transform associated to the weights.

Keywords: Continuous-time open quantum walks. Matrix-valued orthogonal polynomials. Stieltjes transform. Lindblad generator. Matrix representation.

§ INTRODUCTION

Random walks have been a fundamental concept in the study of stochastic processes and probability theory for many decades <cit.>. In the field of quantum mechanics, the concept of quantum walks has emerged as a robust tool for exploring quantum systems' behavior and dynamics <cit.>. Quantum walks can be categorized into various types, and one particularly intriguing category is Open Quantum Walks (OQWs) <cit.>. These walks introduce the influence of the environment, which leads to a richer set of dynamics and behaviors when compared to the classical random walks induced by Markov chains, making them an exciting area of research in quantum information and quantum computation. Continuous-Time Open Quantum Walks (CTOQWs) <cit.> represent a specific class of OQWs where the evolution of a quantum walker in a graph is continuous and influenced by an initial quantum state. Inspired by classical Birth-Death Processes (BDPs), this article develops a generalization from the perspective of CTOQWs, offering valuable insights into the behavior of quantum walkers in systems that exhibit birth and death processes. Introducing a matrix representation for the generator of a CTOQW, we apply the theory of matrix orthogonal polynomials to tridiagonal block matrices. The matrix orthogonal polynomial approach provides a powerful framework for analyzing the representation of those generators, enabling us to gain a deeper understanding of CTOQWs and their connection to BDPs. This technique has been applied in the case of unitary quantum walks, where the relevant orthogonal polynomials are described in terms of the theory of CMV matrices <cit.>. Regarding the setting of open quantum dynamics <cit.>, the problem of obtaining orthogonal polynomials and associated weights is an interesting one as well, although we would have to consider operators which are no longer unitary.
A first step in this direction has been discussed in <cit.>, where a procedure for obtaining weight matrices associated with open quantum walks (OQWs) <cit.> on the half-line was described. In <cit.> it was studied the case of discrete-time quantum Markov chains on the line, as defined by S. Gudder <cit.>, and gave a collection of some nontrivial examples where the spectral representation can be explicitly achieved.Analogously to the references above, we can employ the matrix orthogonal polynomial framework to explore various statistical aspects of CTOQWs, including recurrence patterns and transition probabilities. We utilize the Stieltjes transform as a key tool to analyze these statistics, offering an effective method to understand the intricate dynamics of quantum walkers in quantum systems influenced by tridiagonal block matrices. We are particularly interested in the recurrence of CTOQWs in this work. A first step in this direction can be seen in <cit.>.Let us recall the classical BDPs. Birth-death processes on ℤ_≥0 are continuous-time Markov chains characterized by a set of birth-death rates {(λ_n,μ_n), n≥0} such that λ_n>0,n≥0, μ_n>0,n≥1 and μ_0≥0 (see <cit.>). The transition function P(t)=(P_ij(t)) satisfies the following conditions as t→0^+:P_ij(t)=λ_it+o(t),j=i+1, μ_it+o(t),j=i-1,1-(λ_i+μ_i)t+o(t),j=i.The matrix corresponding to the infinitesimal operator associated with the process is given by𝒜=[ -(λ_0+μ_0)λ_000⋯;μ_1 -(λ_1+μ_1)λ_10⋯;0μ_2 -(λ_2+μ_2)λ_2⋯;⋮⋮⋮⋱⋱ ].Following the classical work of S. Karlin and J. McGregor <cit.>, we can apply Favard's Theorem to the Jacobi matrix (<ref>) and assure the existence of a probability spectral measure ψ supported on [0,∞) associated with 𝒜. Moreover, if we define the sequence of polynomials {Q_n(x)}_n≥ 0 by the three-term recurrence relationQ_0(x) =1, Q_-1(x)=0, -xQ_n(x) =λ_nQ_n+1(x)-(λ_n+μ_n)Q_n(x)+μ_nQ_n-1(x),n≥0,that is, -xQ(x)=𝒜Q(x), where Q(x)=(Q_0(x),Q_1(x),…)^T, then we have that the polynomials {Q_n(x)}_n≥ 0 are orthogonal with respect to ψ. This provides the so-called Karlin-McGregor formula which gives an integral representation of the probability of reaching vertex j at time t given that the process started at vertex i, i.e. P_ij(t). This formula is given byP_ij(t)=∫_0^∞e^-xtQ_i(x)Q_j(x)dψ(x)/∫_0^∞Q_j^2(x)dψ(x). The main purpose of this paper is to analyze the spectral representation of some continuous-time open quantum walks (CTOQWs) by using the basic theory of matrix-valued orthogonal polynomials. The theory of orthogonal polynomials can be applied to the open quantum walks through an appropriate matrix representation that rises from the “vec" application, whose role is to stack a density matrix in a unique bigger vector, and then a reversion of this application is made after an application of the matrix representation(see <cit.>).This dynamic is described by a quantum Markov semigroup with a specific Lindblad generator and performs an evolution of the initial density operator. Roughly speaking, the state at instant t can be described by a pair (X_t,ρ_t) with X_t being the position of the particle at time t and ρ_t is the density operator describing the internal degrees of freedom of the corresponding vertex. We concentrate our results on CTOQWs whose vertices have all the same internal degrees of freedom, thereby the operators that describe the Lindblad generator will be acting on the same Hilbert space, and the matrices that describe the probability transitions will be squares. 
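Before moving to the open quantum setting, the classical Karlin–McGregor formula recalled above can be checked in its finite-dimensional form, where the integral against ψ reduces to a finite sum over the eigenvalues of a symmetrized generator. The Python sketch below uses arbitrary illustrative rates, takes μ_0=0, and truncates the chain at four states; it illustrates only the classical identity P(t)=e^{t𝒜}.

import numpy as np
from scipy.linalg import expm, eigh

lam = np.array([1.0, 1.5, 2.0])            # birth rates lambda_0..lambda_2
mu = np.array([0.5, 1.2, 0.8])             # death rates mu_1..mu_3 (mu_0 = 0)
N = 4
A = np.zeros((N, N))
for i in range(N):
    if i < N - 1:
        A[i, i + 1] = lam[i]
    if i > 0:
        A[i, i - 1] = mu[i - 1]
    A[i, i] = -(lam[i] if i < N - 1 else 0.0) - (mu[i - 1] if i > 0 else 0.0)

pi = np.ones(N)                            # reversibility weights: pi_{i+1} = pi_i lam_i / mu_{i+1}
for i in range(1, N):
    pi[i] = pi[i - 1] * lam[i - 1] / mu[i - 1]
D = np.diag(np.sqrt(pi))
w, V = eigh(D @ A @ np.linalg.inv(D))      # spectrum of the symmetrized generator
t = 0.7
P_spec = np.linalg.inv(D) @ V @ np.diag(np.exp(t * w)) @ V.T @ D
print(np.allclose(P_spec, expm(t * A)))    # True: the spectral sum reproduces P(t)

The matrix-valued theory developed below replaces the scalar measure ψ by a matrix weight, but the logic of the computation is the same.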
The main result of this work is Equation (<ref>), which expresses a formula for the Stieltjes transform of a CTOQW on the integer line in terms of Stieltjes transforms on the integer half-line. This transform associates a weight with a real function, enables us to evaluate the recurrence of CTOQWs, and offers a method for the construction of matrix weights that influence the orthogonality of the polynomials. We remark that this result is valid for any semigroup having a matrix representation of the form (<ref>), thus the folding trick is not retained to CTOQWs. For instance, we can also apply those formulas to quasi-birth-and-death processes.In Section 2 we review the “vec" representation for completely positive maps and the matrix representation for maps of the form Ψ(ρ)=Gρ+ρ G^*, where G is the part of the Lindblad generator which is not completely positive. In Section 3 we discuss the model of CTOQWs and present its matrix representation. In Section 4 we recall the concept of matrix-valued orthogonal polynomials and show how the recurrence of CTOQWs can be associated to the Stieltjes transform. Section 5 develops the matrix representation for CTOQWs in the integer line, leading to the main result of this work, Equation (<ref>), which associates the weight matrix of the walk in the half-line with the walk on the integer line. Section 6 illustrates the results with examples, giving explicit probabilities for different classes of Lindblad generators. In Section 7, an appendix is dedicated to recalling properties related to the existence of a matrix weight associated to the Lindblad generator.§ GENERAL SETTINGS Let ℋ be a separable Hilbert space with inner product ⟨ · | · ⟩, whose closed subspaces will be referred to as subspaces for short. The superscript ^* will denote the adjoint operator. The Banach algebra ℬ(ℋ) of bounded linear operators on ℋ is the topological dual of its ideal ℐ(ℋ) of trace-class operators with trace normρ_1=Tr(|ρ|),|ρ|=√(ρ^*ρ),through the duality <cit.>⟨ρ,X ⟩ = Tr(ρ X), ρ∈ℐ(ℋ),X∈ℬ(ℋ).If ℋ=k<∞, then ℬ(ℋ)=ℐ(ℋ) is identified with the set of square matrices of order k, denoted M_k(ℂ). The duality (<ref>) yields a useful characterization of the positivity of an operator ρ∈ℐ(ℋ),ρ∈ℐ(ℋ): ρ≥0⇔ Tr(ρ X)≥0, ∀ X∈ℬ(ℋ),X≥0,and similarly for the positivity of X∈ℬ(ℋ).In this work, we assume that we have a quantum particle walking either on the integer line, the integer half-line, or on a finite segment, that is, we have that the set of vertices V is labeled by ℤ, ℤ_≥0 or a finite set {0,1,…,N}, respectively. We will also call vertices sites. The state of the system is described by a column vector ρ = [ ρ_0; ρ_1; ρ_2; ⋮ ], ρ_i∈ℐ(ℋ), ρ_i≥0, ∑_i∈ VTr(ρ_i)=1. The vector representation vec(A) of A∈ M_k(ℂ), given by stacking together its rows, will be a useful tool. For instance,A = [ a_11 a_12; a_21 a_22 ]⇒ vec(A):=[ a_11; a_12; a_21; a_22 ].Let B=[b_ij] for B=[b_ij], that is, the entries of B are the complex conjugate entries of B. The vec mapping satisfies vec(AXB^T)=(A⊗ B) vec(X) for any square matrices A, B, X, with ⊗ denoting the Kronecker product. In particular, vec(BXB^*)=vec(BXB^T)=(B⊗B) vec(X), from which we can obtain the matrix representationΦ for a completely positive (CP) map ∑_i B_i· B_i^* when the underlying Hilbert space ℋ is finite-dimensional:Φ= ∑_i⌈ B_i⌉, ⌈ B ⌉ := B ⊗B.Here the operators B_i are identified with some matrix representation. We have that ⌈ B ⌉^* = ⌈ B^*⌉, where B^* denotes the Hermitian transpose (also known as conjugate transpose) of a matrix B. 
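The row-stacking convention and the identity vec(BXB^*)=(B⊗B̄)vec(X) can be confirmed numerically; the short sketch below does exactly that with randomly generated complex matrices (purely illustrative).

import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

vec = lambda M: M.reshape(-1)                 # row stacking, as in the text
lhs = vec(B @ X @ B.conj().T)                 # vec(B X B^*)
rhs = np.kron(B, B.conj()) @ vec(X)           # (B kron conj(B)) vec(X)
print(np.allclose(lhs, rhs))                  # True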
The same idea can be applied to maps of the form Ψ(ρ)=Gρ+ρ G^*. In this case the map Ψ has matrix representationΨ̂=G⊗ I+I⊗G.For more details, we refer the reader to the reference <cit.>.§ CONTINUOUS-TIME OPEN QUANTUM WALKS An operator semigroup 𝒯 on a Hilbert space ℋ is a family of bounded linear operators (T_t) acting on ℋ, t≥ 0, such thatT_tT_s=T_t+s, s,t∈ℝ^+, T_0=I_ℋ.If t↦ T_t is continuous for the operator norm of ℋ, then 𝒯 is said to be uniformly continuous. This class of semigroups is characterized by the following result:[<cit.>, page 161] The following assertions are equivalent for a semigroup 𝒯 on ℋ: * 𝒯 is uniformly continuous;*There exists a bounded operator L on ℋ such thatT_t=e^tL, t∈ℝ^+.Further, if the conditions are satisfied, thenL=lim_t→0^+(T_t-I_ℬ)/t. The operator L is called the generator of 𝒯. A trace-preserving semigroup 𝒯:=(𝒯_t)_t≥ 0 of CP maps acting on ℐ_1(ℋ), the set of trace-class operators on ℋ, is called a Quantum Markov Semigroup (QMS) on ℐ_1(ℋ). When lim_t→ 0||𝒯_t- Id||=0, 𝒯 has a generator ℒ=lim_t→0^+(𝒯_t-Id)/t (see <cit.>), which is a bounded operator on ℐ_1(ℋ), also known as Lindblad operator.We consider a finite or countable set of vertices V and then take the composite systemℋ=⊕_i∈ V𝔥_i,where each 𝔥_i denotes a separable Hilbert space. The label i∈ V is interpreted as being the position of the walker and, when the walker is located at the vertex i∈ V, its internal state is encoded in the space 𝔥_i, describing the internal degrees of freedom of the particle when it is sitting at site i∈ V. Since we will be considering only examples with 𝔥_i=𝔥_j for all i,j∈ V, we let 𝔥_i=𝔥 for every i∈ V.The set of diagonal density operators acting on ℋ will be denoted by𝒟={∑_i∈ Vρ(i)⊗|i⟩⟨i|: ρ(i)=ρ(i)^*,ρ(i)≥ 0,∑_i∈ V(ρ(i))=1}. [<cit.>] A Continuous-time Open Quantum Walk (CTOQW) is a uniformly continuous QMS on ℐ_1(ℋ) with Lindblad operator of the formℒ:ℐ_1(ℋ)→ ℐ_1(ℋ) ρ ↦-i[H,ρ]+∑_i,j∈ V(S_i^jρ S_i^j^*-1/2{S_i^j*S_i^j,ρ}),where, consistently with the notation, we write S_i^j=R_i^j⊗|j⟩⟨i| for bounded operators R_i^j∈ℬ(𝔥_i,𝔥_j). Moreover, H and S_i^j are bounded operators on ℋ of the form H=∑_i∈ VH_i⊗|i⟩⟨i|,H_i is self-adjoint on 𝔥_i,S_i^j is a bounded operator on ℋ with ∑_i,j∈ VS_i^j*S_i^j converging in the strong sense. Also, [A,B]≡ AB-BA is the commutator between A and B and {A,B}≡ AB+BA is the anti-commutator between A and B.Then, we have ρ=∑_i∈ Vρ(i)⊗|i⟩⟨i|∈𝒟,e^tℒ(ρ)=𝒯_t(ρ)=∑_i∈ Vρ_t(i)⊗|i⟩⟨i|,∀ t≥ 0, withd/dtρ_t(i)=-i[H_i,ρ_t(i)]+∑_j∈ V(R_j^iρ_t(j) R_j^i^*-1/2{R_i^j*R_i^j,ρ_t(i)}). An alternative way to rewrite (<ref>) is given by equation (18.7) in <cit.>:ℒ(ρ)=∑_i∈ V(G_iρ(i)+ρ(i)G_i^*+∑_j∈ VR_j^iρ(j) R_j^i*)⊗|i⟩⟨i|,whereG_i=-iH_i-1/2∑_j∈ VR_i^j*R_i^j.
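A small numerical illustration of this generator may be useful. The sketch below assembles the vec representation of the dynamics above for a three-site chain with internal dimension d=2, zero Hamiltonian and no self-loops; the jump operators, the ordering conventions and the helper structure are our own illustrative choices, and the complex conjugations that the plain-text formulas tend to drop are written out explicitly in the code.

import numpy as np
from scipy.linalg import expm

d, nsites = 2, 3
I = np.eye(d)
A = np.array([[0.8, 0.0], [0.0, 0.3]])       # R_i^{i+1}, jump to the right
C = np.array([[0.2, 0.0], [0.0, 0.6]])       # R_i^{i-1}, jump to the left

R = {}                                        # R[(j, i)] = jump operator from site j to site i
for j in range(nsites):
    if j + 1 < nsites:
        R[(j, j + 1)] = A
    if j - 1 >= 0:
        R[(j, j - 1)] = C

L = np.zeros((nsites * d * d, nsites * d * d), dtype=complex)
for j in range(nsites):
    G = sum(-0.5 * Rj.conj().T @ Rj for (src, _), Rj in R.items() if src == j)
    L[j*d*d:(j+1)*d*d, j*d*d:(j+1)*d*d] += np.kron(G, I) + np.kron(I, G.conj())
    for (src, dst), Rj in R.items():          # off-diagonal blocks R (x) conj(R)
        if src == j:
            L[dst*d*d:(dst+1)*d*d, j*d*d:(j+1)*d*d] += np.kron(Rj, Rj.conj())

rho0 = np.zeros(nsites * d * d, dtype=complex)
rho0[:d*d] = (np.eye(d) / d).reshape(-1)      # start at site 0 with rho(0) = I/2
rho_t = expm(2.0 * L) @ rho0                  # state at time t = 2
probs = [rho_t[k*d*d:(k+1)*d*d].reshape(d, d).trace().real for k in range(nsites)]
print(probs, sum(probs))                      # occupation probabilities

The printed probabilities sum to one, reflecting trace preservation of the semigroup.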
Further, we will present the matrix representation for CTOQWs, and this will be done by taking the representation given in Equation (<ref>).The label i∈ V represents the position of the walker and, when the walker is located at i∈ V, its internal state is encoded in 𝔥_i, that is, 𝔥_idescribes the internal degrees of freedom of the walker when it is at site i∈ V.Starting the walk on site |i⟩ with initial density operator ρ∈𝒮(𝔥_i)=∑_i∈ Vρ(i)|i⟩⟨i|, the quantum measurement of the position gives rise to a probability distribution p_0 on V, such thatp_0(i)=ℙ( |i⟩)=(ρ(i))and for evolution on time t≥ 0, p_t(i)=ℙ(t, |i⟩)=(ρ_t(i)),wheree^tℒ(ρ)=∑_i∈ Vρ_t(i)⊗|i⟩⟨i|.The vector and matrix representation of states and CP maps may be easily adapted to CTOQWs. In fact, since any element of ℐ_1(ℋ) is block diagonal, when ℋ<∞, it may be represented by combining the vector representations of the finite diagonal blocks,ρ=∑_i∈ Vρ_i⊗|i⟩⟨ i| ⇒ρ:=[ vec(ρ_1); vec(ρ_2);⋮ ].The CTOQW (<ref>) admits theblock matrix representation e^tℒ(ρ) = e^tℒ ρ, ℒ = [ G_0^α+⌈ B_00⌉ ⌈ B_01⌉ ⌈ B_02⌉ ⋯; ⌈ B_10⌉ G_1^α+⌈ B_11⌉ ⌈ B_12⌉ ⋯; ⌈ B_20⌉ ⌈ B_21⌉ G_2^α+⌈ B_22⌉ ⋯; ⋮ ⋮ ⋮ ⋱ ],whereG_i^α=(-iH_i-1/2∑_j∈ VR_i^j*R_i^j)⊗ I+ I⊗(iH_i-1/2∑_j∈ VR_i^j*R_i^j). We will often identify the Lindblad generator ℒ with its block matrix representation and omit the hat, as the usage of such object will be clear from the context. Also, we will sometimes write X instead of ⌈ X⌉ in contexts where no confusion arises.It is worth noting that although the above definitions concern CTOQWs on general graphs, in this paper we will deal exclusively with the one-dimensional situation which we may also call the quantum birth-death process, and represent the generator byℒ̂= [ B_0 C_1; A_0 B_1 C_2; A_1 B_2 C_3; ⋱ ⋱ ⋱ ],for certain operators A_i, B_i, C_i, and the remaining operators being equal to zero. The above representation is for a quantum particle walking on the integer half-line ℤ_≥0, but we will also study examples acting on a finite set {0,1,…,N} or the integer line ℤ. § MATRIX-VALUED ORTHOGONAL POLYNOMIALS In this section we introduce the Karlin-McGregor Formula for CTOQW with set of vertices of the forms V={0,1,2,…,N} and V=ℤ_+={0,1,2…}. Then we will be able to give a recurrence criterion for vertex |0⟩ based on the Stieltjes transform of the associated weights.Following <cit.>, we pick d∈{1,2,3,…},(A_n)_n≥0,(B_n)_n≥0, and (C_n)_n≥1, such that the block tridiagonal matrixℒ̂=[ B_0 C_1; A_0 B_1 C_2; A_1 B_2 C_3; ⋱ ⋱ ⋱ ]represents a Lindblad generator of a CTOQW Λ. Then define recursively the associated matrix-valued polynomials from the matrix ℒ̂ on (<ref>) byQ_0(x)= I_d, Q_-1(x)=0_d -xQ_n(x) = Q_n+1(x)A_n+Q_n(x)B_n+Q_n-1(x)C_n,n=0,1,2,…,that is, Q(x)=(Q_0(x),Q_1(x),…) are solutions of the equation -xQ(x)=Q(x)ℒ̂. Here we denote I_d and 0_d the identity and the null matrix of dimension d× d.We recall that Λ_t'=ℒ̂Λ_t, where Λ_t=e^tℒ̂ and define the two-variable functionf(x,t)=Q(x)Λ_t,x∈ℂ, t∈[0,∞).One has∂ f(x,t)/∂ t=Q(x)Λ_t'=Q(x)ℒ̂Λ_t=-xQ(x)Λ_t=-xf(x,t), f(x,0)=Q(x),whose solution is f(x,t)=e^-xtQ(x). Hence e^-xtQ(x)=Q(x)Λ_t. 
Componentwise,e^-xtQ_i(x)=∑_k=0^∞Q_k(x)Λ_ki(t),where Λ_ki(t) is the (k,i)-th block of Λ(t).If there exists a weight matrix Σ such that the matrix-valued polynomials {Q_n(x)}_n≥0 are orthogonal with respect to Σ, in the following sense∫ Q_j^*(x)dΣ(x)Q_i(x)=δ_jiF_i,(F_i)≠ 0,then multiplying on the left side of (<ref>) by Q_j^*(x) and integrating with respect to Σ we obtain∫_ℝe^-xtQ_j^*(x)dΣ(x)Q_i(x)=∫_ℝQ_j^*(x)dΣ(x)Q_j(x)Λ_ji(t),therefore for any i,j∈ V, we have the Karlin-McGregor Formula for CTOQWs:Karlin-McGregor Formula for! CTOQWs Λ_ji(t)=(∫ Q_j^*(x)dΣ(x)Q_j(x))^-1(∫ e^-xtQ_j^*(x)dΣ(x)Q_i(x)), Λ(t)=(Λ_ji(t))_j,i=0,1,…. For more details about how to construct this formula see <cit.>.Sometimes we will write (<ref>) asΛ_ji(t)=Π_j(∫ e^-xtQ_j^*(x)dΣ(x)Q_i(x)),Π_j:=(∫ Q_j^*(x)dΣ(x)Q_j(x))^-1.Let p_ji;ρ(t) represent the probability of reaching site |j⟩ at instant t, given that we started at site |i⟩ with initial density ρ concentrated at i. Thenp_ji;ρ(t)=[^-1(Λ_ji(t)(ρ))]=[^-1(Π_j∫ e^-xtQ_j^*(x)dΣ(x)Q_i(x)(ρ))]. For simplicity, we write the transition probabilities byp_ji;ρ(t)=[Π_j∫ e^-xtQ_j^*(x)dΣ(x)Q_i(x)ρ]in contexts where no confusion arises.Consider a CTOQW with set of vertices V. Given |i⟩∈ V and ρ∈𝒮(𝔥_i), we say that |i⟩ is ρ-recurrent if∫_0^∞p_ii;ρ(t)dt=∞.When |i⟩ is recurrent for all densities, then we say that |i⟩ is recurrent. This concept is associated with the weight matrices by the following theorem. Consider a tridiagonal CTOQW on ℤ_≥0={0,1,2,…} and let Σ be its associated weight matrix. Vertex |j⟩ is ρ-recurrent if and only iflim_λ→ 0[Π_j∫_ℂQ_j^*(x)dΣ(x)Q_i(x)/λ+xρ]=∞.For each pair i,j∈ V we have∫_0^∞p_ji;ρ(t)dt = lim_λ→ 0∫_0^∞e^-λ tp_ji;ρ(t)dt = lim_λ→ 0∫_0^∞e^-λ t[Π_j∫_ℂ e^-xtQ_j^*(x)dΣ(x)Q_i(x)ρ]dt= lim_λ→ 0[Π_j∫_ℂ( ∫_0^∞e^-(λ+x)tdt)Q_j^*(x)dΣ(x)Q_i(x)ρ] = lim_λ→ 0[Π_j∫_ℂQ_j^*(x)dΣ(x)Q_i(x)/λ+xρ]. We recall the Stieltjes transform associated to Σ: B(z,Σ)=∫_ℂdΣ(x)/z-x,thus we obtain the straightforward consequence of Theorem <ref>:Consider a tridiagonal CTOQW on ℤ_≥0={0,1,2,…} and let Σ be its associated weight matrix. Vertex |0⟩ is ρ-recurrent if and only if-lim_z→ 0[Π_0 B(z,Σ)ρ]=∞. It is crucial to note that not all polynomials induced by block tridiagonal matrices are orthogonalizable under any matrix weight. The non-trivial nature of establishing orthogonality in this context necessitates a discerning criterion for the existence of such weights. 
Within the framework of Section <ref>, the appendix recalls a criterion for the orthogonality of polynomials induced by block tridiagonal matrices, and a precise expression for a specific type of weight.§ WALKS ON ℤ: THE FOLDING TRICKConsider the generator of a tridiagonal CTOQW on ℤ, given byℒ̂=⌈[⋱⋱;⋱ G^α_-2+⌈ B_-2⌉⌈ C_-1⌉ ; ⌈ A_-2⌉ G^α_-1+⌈ B_-1⌉ ⌈ C_0⌉;⌈ A_-1⌉ G^α_0+⌈ B_0⌉ ⌈ C_1⌉ ;⌈ A_0⌉ G^α_1+⌈ B_1⌉ ⌈ C_2⌉; ⌈ A_1⌉ G^α_2+⌈ B_2⌉ ⌈ C_3⌉ ; ⋱⋱⋱ ]⌉ ,where all blocks are matrices of order d^2, thus (𝔥)=d.We assume that there exists a sequence of d^2× d^2 Hermitian matrices (E_n)_n∈ℤ and non-singular matrices (R_n)_n∈ℤ such that⌈ A_n⌉ ^*R_n+1^*R_n+1 = R_n^*R_n⌈ C_n+1⌉ ,n≥0R_-n-1^*R_-n-1⌈ C_-n⌉ =⌈ A_-n-1⌉ ^*R_-n^*R_-n, n≥ 0,R_n(G^α_n+⌈ B_n⌉ )=E_nR_n,n∈ℤ.Let us defineΠ_j:=R_j^*R_j,j∈ℤ.Consider the two independent families of matrix-valued polynomials defined recursively from (<ref>) asQ_0^1(x)= I_d^2,Q_0^2(x)=0_d^2,Q_-1^1(x)= 0_d^2, Q_-1^2(x)=I_d^2, -xQ_n^α (x)= Q_n+1^α(x)⌈ A_n⌉ +Q_n^α(x)(G^α_n+⌈ B_n⌉ )+Q_n-1^α(x)⌈ C_n⌉ ,α=1,2,n∈ℤ,where we have the block vector Q^α(x)=(…,Q_-2^α(x),Q_-1^α(x),Q_0^α(x),Q_1^α(x),Q_2^α(x),…),α=1,2, satisfying -xQ^α(x)=Q^α(x)ℒ̂.As in the classical case, we introduce the block tridiagonal matrixℒ̆= [ D_0 N_1; M_0 D_1 N_2; M_1 D_2 N_3; ⋱ ⋱ ⋱ ],where each block entry is a 2d^2× 2d^2 matrix, given by[D_0= [ G^α_0+⌈ B_0⌉⌈ A_-1⌉; ⌈ C_0⌉ G^α_-1+⌈ B_-1⌉ ],M_n= [⌈ A_n⌉ 0; 0 ⌈ C_-n-1⌉ ], n≥ 0,;D_n= [ G^α_n+⌈ B_n⌉0;0 G^α_-n-1+⌈ B_-n-1⌉ ],N_n= [⌈ C_n⌉ 0; 0 ⌈ A_-n-1⌉ ], n≥ 1. ]The term folding trick comes from the transformation of the original generator ℒ̂, whose graph is represented in Figure <ref>,to the generator described by ℒ̆, which is represented by the folded walk in Figure <ref>. Note that ℒ̆ is a block tridiagonal matrix on ℤ_≥0, thereby we can apply all the properties we have seen in previous sections. The following 2d^2× 2d^2 matrix polynomials are defined in terms of (<ref>),𝒬_n(x)= [Q_n^1(x) Q_-n-1^1(x);Q_n^2(x) Q_-n-1^2(x) ],n≥ 0,and these satisfyx𝒬_0(x) = 𝒬_1(x)M_0+𝒬_0(x)D_0,𝒬_0(x)=I_2d^2, x𝒬_n(x) = 𝒬_n+1(x)M_n+𝒬_n(x)D_n+𝒬_n-1(x)N_n,n=1,2,…The leading coefficient of 𝒬_n(x) is always a nonsingular matrix. Moreover, forR̆_n:=[R_n0_d^2;0_d^2 R_-n-1 ], n≥ 0,Ĕ_0:=[E_0 R_0⌈ A_-1⌉ R_-1^-1;R_-1⌈ C_0⌉ R_0^-1 E_-1 ],Ĕ_n:=[E_n0_d^2;0_d^2 E_-n-1 ], n≥ 1,we see that the block matrices of ℒ̆ satisfy the conditions (<ref>) for n≥ 0: M_n^*R̆_n+1^*R̆_n+1=R̆_n^*R̆_nN_n+1,R̆_nD_n=Ĕ_nR̆_n,where matrices R̆_n are non-singular and Ĕ_n are Hermitian for all n≥ 0. DefiningΠ̆_j:=R̆_j^*R̆_j∈ M_2d^2(ℂ),j=0,1,2,…,the correspondence between Π̆_j and Π_j isΠ̆_j:=[Π_j0_d^2;0_d^2 Π_-j-1 ].By <cit.>, there exists a weight matrix W leading to the Karlin-McGregor formula for Λ̆=e^tℒ̆: Λ̆_ji(t)=Π̆_j∫_ℝ e^-xt𝒬_j^*(x)dW(x)𝒬_i(x).Once we have found the weight matrix appearing on (<ref>), we can also obtain the blocks Λ_ji(t) of the original walk generated by ℒ̂. The key for this operation is the following proposition: Assume that ℒ̂ is the generator of a CTOQW of the form (<ref>). The relation between Λ̆_ji(t) and Λ_ji(t) isΛ̆_ji(t)=[Λ_ji(t)Λ_j,-i-1(t);Λ_-j-1,i(t) Λ_-j-1,-i-1(t) ], i,j∈ℤ_≥0.First we use <cit.> (replace Φ̆_ji^(n) and Φ̂_ji^(n) by ℒ̆^n_ji and ℒ̂^n_ji respectively) to obtain thatℒ̆_ji^n=[ℒ̂_ji^nℒ̂_j,-i-1^n;ℒ̂_-j-1,i^n ℒ̂_-j-1,-i-1^n ], i,j∈ℤ_≥0, n=0,1,2,…,hence we obtain for every i,j∈ℤ_≥0 the expressionΛ̆_ji(t)=(e^tℒ̆)_ji=∑_n=0^∞t^n/n!ℒ̆^n_ji =∑_n=0^∞t^n/n![ℒ̂_ji^n t^nℒ̂_j,-i-1^n;ℒ̂_-j-1,i^n ℒ̂_-j-1,-i-1^n ]= [Λ_ji(t)Λ_j,-i-1(t);Λ_-j-1,i(t) Λ_-j-1,-i-1(t) ]. 
Note that we can evaluate Λ̆_ji(t) by (<ref>) and then extract the block Λ_ji(t) as in (<ref>). Further, for a density operator ρ we havep_ji;ρ(n)=Tr(Λ_ji(t)ρ)= Tr([ Λ_ji(t) 0; 0 0 ][ ρ; 0 ]) =Tr([ I_d^2 0; 0 0 ]Λ̆_ji(t) [ I_d^2 0; 0 0 ][ ρ; 0 ]).However, we would like to obtain the probability above avoiding the evaluation of Λ̆_ji(t). This can be done via a generalization of the Karlin-McGregor formula on ℤ_≥0. We proceed as follows: first, write the decompositiondW(x)=[ dW_11(x) dW_12(x); dW_21(x) dW_22(x) ],where dW_21(x)=dW_12^*(x), since dW(x) is positive definite. Then one has for i,j∈ℤ_≥0, Λ̆_ji(t) = Π̆_j∫_ℝ e^-xt𝒬_j^*(x)dW(x)𝒬_i(x) (<ref>)= [Π_j0_d^2;0_d^2 Π_-j-1 ]∫_ℝ e^-xt[Q_j^1(x) Q_-j-1^1(x);Q_j^2(x) Q_-j-1^2(x) ]^* [ dW_11(x) dW_12(x); dW_12^*(x) dW_22(x) ][Q_i^1(x) Q_-i-1^1(x);Q_i^2(x) Q_-i-1^2(x) ]= ∑_α,β=1^2[ Π_j∫_ℝe^-xtQ_j^α *(x)dW_αβ(x)Q_i^β(x)Π_j∫_ℝe^-xtQ_j^α *(x)dW_αβ(x)Q_-i-1^β(x); Π_-j-1∫_ℝe^-xtQ_-j-1^α *(x)dW_αβ(x)Q_i^β(x) Π_-j-1∫_ℝe^-xtQ_-j-1^α * (x)dW_αβ(x)Q_-i-1^β(x) ] .Joining equation above and Proposition <ref>, we obtain the Karlin-McGregor formula for a CTOQW on ℤ, given byΛ_ji(t)=∑_α,β=1^2Π_j∫_ℝ e^-xtQ_j^α *(x)dW_αβ(x)Q_i^β(x),i,j∈ℤ, n≥ 0.Conversely, if there exist weight matrices dW_11(x),dW_12(x),dW_22(x) such that Λ_ji(t) is of the form (<ref>), then Λ̆_ji(t) is of the formΦ̆_ji^(n)=Π̆_j∫_ℝ e^-xt𝒬_j^*(x)dW(x)𝒬_i(x).The weight matrixW(x)=[ W_11(x) W_12(x); W_12^*(x) W_22(x) ],is called the spectral block matrix of ℒ. Extending Theorem <ref> to the CTOQW on ℤ, we observe that, since Q_0^1=Q_-1^2=I_d and Q_0^2=Q_-1^1=0_d, the following limits hold∫_0^∞p_00;ρ(t)dt=lim_z↑ 0 Tr[Π_0B(z;W_11)vec(ρ)],where B(z;W) is the Stieltjes transform of the weight matrix W. Analogously,∫_0^∞p_-1,-1;ρ(t)dt=lim_z↑ 0 Tr[Π_-1B(z;W_22)vec(ρ)]. Let us write the matrix ℒ̂ in the formℒ̆=[ ℒ̂^-C;A ℒ̂^+ ], C=[⋮⋮⋮ ;000⋯; ⌈ C_0⌉00⋯ ], A=[ ⋯ 0 0 ⌈ A_-1⌉; ⋯ 0 0 0; ⋯ 0 0 0; ⋮ ⋮ ⋮ ],ℒ̆^+= [ G^α_0+⌈ B_0⌉ ⌈ C_1⌉ ; ⌈ A_0⌉ G^α_1+⌈ B_1⌉ ⌈ C_2⌉ ;⌈ A_1⌉ G^α_2+⌈ B_2⌉ ⌈ C_3⌉ ;⋱⋱⋱ ],ℒ̆^-= [ G^α_-1+⌈ B_-1⌉⌈ A_-2⌉ ;⌈ C_-1⌉ G^α_-2+⌈ B_-2⌉⌈ A_-3⌉ ; ⌈ C_-2⌉ G^α_-3+⌈ B_-3⌉ ⌈ A_3⌉ ;⋱⋱⋱ ]Our goal now is to write the Stieltjes transforms associated with the weight matrices W_αβ,α,β=1,2, in terms of the Stieltjes transforms associated with W_±, the weight matrices associated with ℒ̆^±.We introduce the generating function of ℒ̂ Φ(s):=∑_n=0^∞s^nℒ̂^nto obtain an explicit form for the Laplace Transform of Λ(t) on the following way:Λ_ji(t)= ∫_0^∞e^-xtΛ_ji(x)dx=∑_n=0^∞∫_0^∞e^-xtx^n/n!ℒ̂_ji^ndx= ∑_n=0^∞t^n/n!ℒ̂_ji^n=∑_n=0^∞ℒ̂_ji^n/t^n+1=Φ_ji(t^-1)/t. Using equations (48), (49), (50) and (51) of <cit.>, applied to Φ_ji(s^-1)=sΛ̂_ji(s), we obtainΛ_00(z) = Λ_00^+(z)(I-⌈ A_-1⌉Λ_-1,-1^-(z)⌈ C_0⌉Λ_00^+(z))^-1. Λ_-1,-1(z) = Λ^-_-1,-1(z)(I-⌈ C_0⌉Λ^+_00(z)⌈ A_-1⌉Λ_-1,-1^-(z))^-1. Λ_0,-1(z) =z^-1Λ_00^+(z)(I-⌈ A_-1⌉Λ_-1,-1^-(z)⌈ C_0⌉Λ_00^+(z))^-1⌈ A_-1⌉Λ_-1,-1^-(z). Λ_-1,0(z) = z^-1Λ^-_-1,-1(z)(I-⌈ C_0⌉Λ^+_00(z)⌈ A_-1⌉Λ_-1,-1(z))^-1⌈ C_0⌉Λ_00^+(z).We notice that the block matrices of both ℒ̆^+ and ℒ̆^- satisfy the conditions of equation (<ref>), thus there are positive weight matrices W_± associated with ℒ̆^± for which the associated polynomials are orthogonal. Then, we can writeΠ_0^+:=∫_ℝdW_+Π_-1^-:=∫_ℝdW_-. 
The Laplace Transform of Λ_ji(t) can be associated to the Stieltjes transform using thatΛ_ji(s) = ∫_0^∞e^-tsΛ_ji(t)dt=∫_0^∞e^-ts(Π_j∫_ℝe^-xtQ_j^*(x)dW(x)Q_i(x)dt)= Π_j∫_ℝQ_j^*(x)dW(x)Q_i(x)/s+x, s>0, that is,Λ_ji(-s)=Π_j∫_ℝQ_j^*(x)dW(x)Q_i(x)/x-s, s<0,thereby we recall that Q_0^1=Q_-1^2=I_d^2,Q_0^2=Q_-1^1=0_d^2 in order to obtain the relations[ B(z;W_11)=Π_0^-1Λ_00(-z), B(z;W_22)=Π_-1^-1Λ_-1,-1(-z),B(z^-1; W_12)=Π_-1^-1Λ_0,-1(-z),;B(z;W_21)=Π_-1^-1Λ_-1,0(-z),B(z;W_+)=(Π^+_0)^-1Λ_00^+(-z), B(z^-1;W_-)=(Π^-_-1)^-1Λ_-1,-1^-(-z). ]Joining with the identities (<ref>),(<ref>),(<ref>),(<ref>),the new Stieltjes transform identities are obtained:Π_0 B(z;W_11)= Π_0^+B(z;W_+)(I-⌈ A_-1⌉Π_-1^-B(z;W_-)⌈ C_0⌉Π_0^+B(z;W_+))^-1,Π_-1 B(z;W_22) =Π_-1^-B(z;W_-)(I-⌈ C_0⌉Π_0^+B(z;W_+)⌈ A_-1⌉Π_-1^-B(z;W_-))^-1, Π_0 B(z;W_12)= Π_0^+B(z;W_+)(I-⌈ A_-1⌉Π_-1^-B(z;W_-)⌈ C_0⌉Π_0^+B(z;W_+))^-1⌈ A_-1⌉Π_-1^-B(z;W_-),Π_-1 B(z;W_21)=Π_-1^-B(z;W_-)(I-⌈ C_0⌉Π_0^+B(z;W_+)⌈ A_-1⌉Π_-1^-B(z;W_-))^-1⌈ C_0⌉Π_0^+B(z;W_+).Sometimes the operators Π_i^+ and Π_i^- are equal to the identity operator. In this case, (<ref>) are reduced toB(z;W_11)= B(z;W_+)(I-⌈ A_-1⌉ B(z;W_-)⌈ C_0⌉ B(z;W_+))^-1, B(z;W_22)=B(z;W_-)(I-⌈ C_0⌉ B(z;W_+)⌈ A_-1⌉ B(z;W_-))^-1,B(z;W_12)=B(z;W_+)(I-⌈ A_-1⌉ B(z;W_-)⌈ C_0⌉ B(z;W_+))^-1⌈ A_-1⌉ B(z;W_-), B(z;W_21)= B(z;W_-)(I-⌈ C_0⌉ B(z;W_+)⌈ A_-1⌉ B(z;W_-))^-1⌈ C_0⌉ B(z;W_+). Equations (<ref>) and (<ref>) allow us to obtain the Stieltjes transform of the CTOQW with V=ℤ when we know the Stieltjes transform associated to the walks on ℤ_≥ 0 and ℤ_≤ 0. Since we are interested in the recurrence and transience of the CTOQWs, those equations are enough to obtain this information as it will be seen on the next section. A sufficient condition for Π_i^+=Π_i^-=I is to have A_n=C_n+1^* and B_n=B_n^* for every n∈ℤ, since we will always have G_n=G_n^* for all n∈ℤ in this case, and therefore we can take R_i=I for all i∈ℤ (see Equation (<ref>)). On the other hand, those conditions are not necessary, since we can find examples with R_n being any unitary matrices for each n. Most of our examples consider R_i^i=0 for all i∈ V. In this case the Hamiltonian part does not contribute to the probabilities, as it will be seen as a consequence of the following Proposition. Moreover, this Proposition gives equivalence to a condition that the diagonal of the matrix representation of the generator has negative-semidefinite matrices. Let us consider a tridiagonalCTOQW in Z_≥ 0 (or a finite V) satisfying the conditions of Equation (<ref>). Then G_n^α+⌈ B_n⌉≤ 0 if and only if-H_n⊗ I+I⊗H_n= [h_11^(n) … h_1,d^2^(n); ⋮ ⋱ ⋮; h_d^2,1^(n) … h_d^2,d^2^(n) ], h_kk^(n)=-b_kk^(n), h_jk^(n)=-i(s_jk^(n)-a_jk^(n)-ib_jk^(n)),∀j,k,where⌈ B_n⌉=[a_11^(n) … a_1,d^2^(n); ⋮ ⋱ ⋮; a_d^2,1^(n) … a_d^2,d^2^(n) ]+ i[b_11^(n) … b_1,d^2^(n); ⋮ ⋱ ⋮; b_d^2,1^(n) … b_d^2,d^2^(n) ], a_jk,b_jk∈ℝ ∀j,kandS_n⊗ I+I⊗S_n= [s_11^(n) … s_1,d^2^(n); ⋮ ⋱ ⋮; s_d^2,1^(n) … s_d^2,d^2^(n) ] ∀j,k, S_n:=1/2(A_n^*A_n+B_n^*B_n+C_n^*C_n). Let us suppose that T_n:=G_n^α+⌈ B_n⌉ for every n≥ 0 satisfies the conditions of Equation (<ref>).Firstly we suppose that T_n≤ 0, thus there exists an orthonormal basis {v_1,…,v_d^2} of ℂ^d^2 constituted by eigenvectors of T_n with T_nv_k=t_kv_k, k=1,…,d^2.Denote S_n=1/2(A_n^*A_n+B_n^*B_n+C_n^*C_n) to obtaint_kδ_jk= t_k⟨ v_j,v_k⟩= ⟨ v_j,(G_n^α+⌈ B_n⌉) v_k⟩=⟨ v_k,(-iH_n⊗ I+iI⊗H_n-S_n⊗ I-I⊗S_n+⌈ B_n⌉) v_k⟩=i⟨ v_j,(-H_n⊗ I+I⊗H_n) v_k⟩_F_1,j,k-⟨ v_j,(S_n⊗ I+I⊗S_n) v_k⟩_F_2,j,k+⟨ v_j,⌈ B_n⌉ v_k⟩_F_3,j,k. 
We have F_1,k,k,F_2,k,k∈ℝ, thus ⟨ v_k,(-H_n⊗ I+I⊗H_n) v_k⟩=-Im(⟨ v_k,⌈ B_n⌉ v_k⟩), thereby the entries of the diagonal of -H_n⊗ I+I⊗H_n coincide with the entries of the imaginary part of the diagonal of -⌈ B_n⌉.For j≠ k, we havei⟨ v_j,(-H_n⊗ I+I⊗H_n) v_k⟩=⟨ v_j,(S⊗ I+I⊗S) v_k⟩-⟨ v_j,⌈ B_n⌉ v_k⟩,thus, denoting by [X]_jk the (j,k)-th entry of a matrix X on the basis (v_k)_k, we obtain the identityi[-H_n⊗ I+I⊗H_n]_jk=[S⊗ I+I⊗S-⌈ B_n⌉]_j,k, j≠ k,completing the first part of the proof.On the other hand, we suppose that there exists an orthonormal basis such that equation (<ref>) is valid. In this case we have[T_n]_kk=ih_kk^(n)-s_kk^(n)+a_kk^(n)+ib_kk^(n)=-s_kk^(n)+a_kk^(n)<0,∀ k,and[T_n]_jk=ih_jk^(n)-s_jk^(n)+a_jk^(n)+ib_jk^(n)=i(-i(s_jk^(n)-a_jk^(n)-ib_jk^(n)))-s_jk^(n)+a_jk^(n)+ib_jk^(n)=0 ∀ j≠ k. This shows that [T_n] is diagonal with respect to this orthonormal basis and its entries are all real, thus it is hermitian. Let us consider a tridiagonalCTOQW in Z_≥ 0 (or a finite V) with a positive matrix weight associated to this CTOQW. Then H_n=h_nI for some h_n∈ R if and only if B_n is hermitian. In this case, H_n does not contribute to the probability of the walk.We suppose that there exists a positive matrix weight associated to the CTOQW, thus Equation (<ref>) is valid. We have that B_n is hermitian if and only ifa_jk^(n)+ib_jk^(n)=a_kj^(n)-ib_kj^(n) ∀ j,k.The matrix -H_n⊗ I+I⊗H_n is a multiple of the identity if and only if h_jk^(n)=0 ∀ j≠ k and h_kk^(n)=h ∀ k for some h∈ℝ, where the second statement is valid by Proposition <ref>. Moreover,h_jk^(n)=0⇔ s_jk^(n)=a_jk^(n)+ib_jk^(n),which is equivalent to have ⌈ B_n⌉ to be hermitian, since S is hermitian. § EXAMPLES§.§ Diagonal and simultaneously diagonalizable transitions First, we will consider a homogeneous CTOQW on the N+1 nodes indexed as V={0,1,…, N}, where we add two absorbing barriers (|-1⟩, |N+1⟩) on the extreme nodes, R_i^i=0 for each site, and the generator ℒ is given byℒ̂=[G^α ⌈ C⌉ ; ⌈ A⌉G^α ⌈ C⌉;⌈ A⌉G^α ⌈ C⌉ ; ⋱⋱⋱ ; ⌈ A⌉G^α ⌈ C⌉;⌈ A⌉G^α ],A=[ a_1 0; 0 a_2 ],C=[ c_1 0; 0 c_2 ],a_1,a_2,c_2,c_2>0,G^α=-diag(a_1^2+c_1^2,a_1^2+c_1^2+a_2^2+c_2^2/2,a_1^2+c_1^2+a_2^2+c_2^2,a_2^2+c_2^2/2,a_2^2+c_2^2).The classical symmetrizationℛ=(R_0,R_1,…,R_N), R_i=K^i-1/2, i=1,…,N, R_0=I_4,where K=⌈√(AC)⌉ =diag(a_1c_1, √(a_1c_1a_2c_2), √(a_1c_1a_2c_2), a_2c_2), givesJ=ℛℒ̂ℛ^-1=[ G^α K; K G^α K; K G^α K; ⋱ ⋱ ⋱; K G^α K; K G^α ]. The matrix-valued polynomials {Q_n}_n≥0 are recursively defined byQ_0(x) =1, Q_-1(x)=0,-xQ_0(x) =Q_0(x)G^α+Q_1(x)K,-xQ_i(x) =Q_i+1(x)K+Q_i(x)G^α+Q_i-1(x)K, i=1,…,N-1,which can be identified with the Chebyshev polynomials of the second kind {U_n}_n≥0. Indeed, we haveQ_n(x)=U_n((-x-G^α)K^-1/2), n≥ 0. Now, if we defineR_N+1(x):=Q_N(x)(-x-G^α)-Q_N-1(x)K,we have that the zeros of(R_N+1(x)) coincide with the eigenvalues of -J. A simple calculation shows thatR_N+1(x)=U_N+1((-x-G^α)K^-1/2)K.We would like to solve the equation det(R_N+1(x))=0. Recalling the representationU_n(z/2)=∏_j=1^n(z-2cos(jπ/n+1)),we obtain, for the matrix-valued case at hand,(R_N+1(x))=(U_N+1((-x-G^α)K^-1/2)K)=[∏_j=1^N+1((-xI_4-G^α)K^-1-2cos(jπ/N+2))K],thus(R_N+1(x))=k_1k_2^2k_4∏_j=1^N+1∏_m=1^4[(-x-g_m)/k_m-2cos(jπ/N+2)],where we have put G=-diag(g_1,g_2,g_3,g_4) and K=-diag(k_1,k_2,k_3,k_4). 
Since g_2=g_3 and k_2=k_3,(R_N+1(x)) is a polynomial of degree 4(N+1) having 3(N+1) distinct roots, which are of the formx_j=-g_1-2k_1cos(πj+1/N+2)=a_1^2+c_1^2-2a_1c_1cos(πj+1/N+2),y_j=-g_2-2k_2cos(πj+1/N+2)=√(a_1c_1a_2c_2)-(a_1^2+c_1^2+a_2^2+c_2^2)cos(πj+1/N+2),z_j=-g_4-2k_4cos(πj+1/N+2)=a_2^2+c_2^2-2a_2c_2cos(πj+1/N+2), j=0,…,N,each y_j being of multiplicity 2. There can be cases of eigenvalues with a greater multiplicity, which happens when the collection of zeros x_N,y_N and z_N overlap, so the multiplicity changes accordingly.Let us compute the weight matrixes on the zeros above. We haveW_j=g_j'(λ_j), g_j(λ):=-(λ_j-λ)^2(-J-λ I)_00^-1, λ_j=x_j,y_j,z_j,j=0,…,N,an expression which can be deduced from (see <cit.>)(-J-λ I)_ij^-1=∑_k=0^N P_i^*(λ_k)W_k P_j(λ_k)/λ_k-λ,and noting that this corresponds to the Laurent sum of the operator on the left-hand side except for the sign change λ_k-λ=-(λ-λ_k). With formula (<ref>), a calculation shows that for every N we have a corresponding set of multiples of the matrices given byW_K;1=[ 1 0 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 0 ],W_K;2=[ 0 0 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 1 ],W_K;3=[ 0 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 0 ].More precisely, we have a collection of 3(N+1) roots with weightsψ(x_j)=2/N+2sin^2(πj+1/N+2)W_K;1, j=0,…,N,ψ(y_j)=2/N+2sin^2(πj+1/N+2)W_K;2, j=0,…,N.ψ(z_j)=2/N+2sin^2(πj+1/N+2)W_K;2, j=0,…,N. For a specific instance of the above take N=2 (3 sites), so we have 9 roots, with weights1/4W_K;1, 1/4W_K;2, 1/4W_K;3associated with zeros a_1^2+c_1^2-2a_1c_1, √(a_1c_1a_2c_2)-(a_1^2+c_1^2+a_2^2+c_2^2) and a_2^2+c_2^2-2a_2c_2 respectively; weights1/2W_K;1, 1/2W_K;2, 1/2W_K;3associated with zeros a_1^2+c_1^2-√(2)a_1c_1, √(a_1c_1a_2c_2)-√(2)(a_1^2+c_1^2+a_2^2+c_2^2)/2 and a_2^2+c_2^2-√(2)a_2c_2 respectively; and weights1/4W_K;1, 1/4W_K;2, 1/4W_K;3associated with zeros a_1^2+c_1^2, √(a_1c_1a_2c_2) and a_2^2+c_2^2 respectively. Now, let us consider the walk on the half-line. We will consider a CTOQW whose set of vertices is V={0,1,2,…} and the walker can jump to its nearest neighbor, however, there is an absorbing barrier (|-1⟩). Thereofore, this walk can be interpreted as a BDP in which the population may become extinct. The matrixℒ̂= [ G^α_0⌈ C⌉;⌈ A⌉ G^α⌈ C⌉;⌈ A⌉ G^α⌈ C⌉; ⋱ ⋱ ⋱ ], [ G^α= -1/2((A^*A+C^*C)⊗ I_2+I_2⊗(A^*A+C^*C)); ; G^α_0=-1/2((A^*A)⊗ I_2+I_2⊗(A^*A)), ]is a valid generator of a CTOQW. Also,G^α=-[ a_1^2+c_1^2 0 0 0; 0 a_1^2+c_1^2+a_2^2+c_2^2/2 0 0; 0 0 a_1^2+c_1^2+a_2^2+c_2^2/2 0; 0 0 0 a_2^2+c_2^2 ],G^α_0=-[ a_1^2 0 0 0; 0 a_1^2+a_2^2/2 0 0; 0 0 a_1^2+a_2^2/2 0; 0 0 0 a_2^2 ].The CTOQWs of these first examples are entirely described by diagonal matrices; therefore, the parameter b in the density matrix ρ has no influence on these random walks with the specific transitions A and C.If we take K:=⌈ (AC)⌉ ^1/2 then we obtain the symmetrizationJ=ℛ(-ℒ̂)ℛ^-1=[ -G^α_0K ;K -G^αK; K -G^αK ;⋱⋱⋱ ],where K is positive definite,ℛ=(R_0,R_1,…,R_N), R_i=⌈ A^-1C⌉ ^i-1, i=1,2,3,…,N, R_0=I_4. Let us obtain the weight matrix associated to J̃, J̃:=[ -G^αK ;K -G^αK; K -G^αK ;⋱⋱⋱ ],using the results of A.J. Durán (<cit.>).Since G^α and K commute it is easy to see that the matrix H_A,B(x) given by <cit.> isH(x)= (xI+G^α)^2K^-2-4I_4=(xI+G^α)^2⌈ AC⌉ ^-1-4I_4=[ (x-a_1^2-c_1^2)^2/a_1^2c_1^2-4000;0 (x-a_1^2+c_1^2+a_2^2+c_2^2/2)^2/a_1a_2c_1c_2-400;00 (x-a_1^2+c_1^2+a_2^2+c_2^2/2)^2/a_1a_2c_1c_2-40;000 (x-a_2^2-c_2^2)^2/a_2^2c_2^2-4 ]. 
The associated weight matrix to J̃ isdΣ̃(x)= 1/2πK^-1√(diag(h_1,h_2,h_3,h_4)) =1/2π[ d_1(x)000;0 d_2(x)00;00 d_3(x)0;000 d_4(x) ],where h_j represents the j-th diagonal entry of the diagonal appearing on the representation of H(x) andd_1(x)=[√(4a_1^2c_1^2-(x-a_1^2-c_1^2)^2)]_+/a_1^2c_1^2, d_4(x)=[√(4a_2^2c_2^2-(x-a_2^2-c_2^2)^2)]_+/a_2^2c_2^2d_2(x)= d_3(x)=[√(4a_1a_2c_1c_2-(x-a_1^2+c_1^2+a_2^2+c_2^2/2)^2)]_+/2a_1a_2c_1c_2.Here we are using the notation [f(x)]_+ = f(x) if f(x)≥0 and 0 otherwise.We are interested on the transitions of the CTOQW, thus only d_1(x) and d_4(x) contribute for the calculus of the trace when we evaluate([ d_1(x)000;0 d_2(x)00;00 d_3(x)0;000 d_4(x) ]vec(ρ)),thereby we will avoid the massive calculations using terms as d_2(x) and d_3(x) appearing on the sequel of this section.The Stieltjes transform isB(z,Σ̃)= K^-1√(diag(h_1,h_2,h_3,h_4)) =[ w_1(z)000;0 w_2(z)00;00 w_3(z)0;000 w_4(z) ],where w_2(z)=w_3(z) is a function that does not vanish andw_1(z)= z-a_1^2-c_1^2-i√(4a_1^2c_1^2-(z-a_1^2-c_1^2)^2)/2a_1^2c_1^2, w_4(z)= z-a_2^2-c_2^2-i√(4a_2^2c_2^2-(z-a_2^2-c_2^2)^2)/2a_2^2c_2^2. Since the weight is obtained on the terms of <cit.>, we must have Π_0=I_4, then we use equation (2.20) of <cit.> to obtain the Stieltjes transform of the weight matrix associated to J:B(z,Σ)= (B(z,Σ̃)^-1+(G^α_0-G^α))^-1=[ σ_1(z)000;0∗00;00∗0;000 σ_2(z) ],whereσ_j(z)=z-a_j^2+c_j^2+√(-4a_j^2c_j^2+(z+a_j^2+c_j^2))/2c_j^2z,j=1,2. It is a simple calculation to verify that lim_z↑ 0σ_j(z)=∞⇔ a_j≤ c_j, thus, given a density operator ρ=[ a b; b^* 1-a ], we havelim_z↑ 0[vec^-1Π_0(B(z,Σ)vec(ρ))]=lim_z↑ 0(π_1σ_1(z)a+π_2σ_2(z)(1-a)),where π_1,π_2>0. Therefore, if {|e_0⟩,|e_1⟩} is the canonical basis of ℂ^2, then an application of Corollary <ref> shows that* a_1≤ c_1 and a_2≤ c_2 ⇒ vertex |0⟩ is recurrent;* a_1≤ c_1 and a_2> c_2 ⇒ vertex |0⟩ is |e_1⟩⟨e_1|-transient and ρ-recurrent for ρ≠|e_1⟩⟨e_1|; * a_1> c_1 and a_2≤ c_2 ⇒ vertex |0⟩ is |e_0⟩⟨e_0|-transient and ρ-recurrent for ρ≠|e_0⟩⟨e_0|; * a_1> c_1 and a_2>c_2 ⇒ vertex |0⟩ is transient. The Perron-Stieltjes inversion formula (Proposition 1.1 of <cit.>) givesdΣ(x)=1/π[ [√(4a_1^2c_1^2-(x-a_1^2-c_1^2)^2)/2c_1^2x]_+000;0∗00;00∗0;000 [√(4a_2^2c_2^2-(x-a_2^2-c_2^2)^2)/2c_2^2x]_+ ],thus an application of the Karlin-McGregor formula for CTOQWs gives for ρ=[ a b; b^* 1-a ], p_00;ρ(t)=a∫_0^∞ e^-xt[√(4a_1^2c_1^2-(x-a_1^2-c_1^2)^2)/2c_1^2x]_+dx+ (1-a)∫_0^∞ e^-xt[√(4a_2^2c_2^2-(x-a_2^2-c_2^2)^2)/2c_2^2x]_+dx. 
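The weighting by a and 1-a in the last formula can be checked without any reference to the weight matrix: with diagonal A and C the vec components decouple, so p_{00;ρ}(t) is the corresponding mixture of two scalar birth-death chains with rates (a_1^2,c_1^2) and (a_2^2,c_2^2). The sketch below compares the block generator with the two scalar chains on a truncated half-line; rates, truncation level and the evaluation time are illustrative.

import numpy as np
from scipy.linalg import expm

a1, a2, c1, c2, N, t = 1.0, 0.7, 1.2, 0.9, 40, 3.0
A = np.diag([a1, a2])
C = np.diag([c1, c2])
I2 = np.eye(2)

def scalar_chain(b, d):                     # scalar birth-death chain, birth b, death d
    M = np.zeros((N, N))
    for k in range(N):
        if k + 1 < N:
            M[k + 1, k] = b
        if k > 0:
            M[k - 1, k] = d
        M[k, k] = -(b + (d if k > 0 else 0.0))
    return M

def block_chain():                          # vec representation with diagonal A and C
    up, dn = np.kron(A, A), np.kron(C, C)   # real diagonal operators: conj omitted
    L = np.zeros((4 * N, 4 * N))
    for k in range(N):
        S = 0.5 * (A @ A + (C @ C if k > 0 else 0 * I2))
        L[4*k:4*k+4, 4*k:4*k+4] = -(np.kron(S, I2) + np.kron(I2, S))
        if k + 1 < N:
            L[4*(k+1):4*(k+1)+4, 4*k:4*k+4] = up
        if k > 0:
            L[4*(k-1):4*(k-1)+4, 4*k:4*k+4] = dn
    return L

a = 0.3                                     # rho = diag(a, 1-a) at vertex |0>
rho0 = np.zeros(4 * N)
rho0[0], rho0[3] = a, 1 - a
p = expm(t * block_chain()) @ rho0
p00_block = p[0] + p[3]

e0 = np.zeros(N)
e0[0] = 1.0
p00_split = (a * (expm(t * scalar_chain(a1**2, c1**2)) @ e0)[0]
             + (1 - a) * (expm(t * scalar_chain(a2**2, c2**2)) @ e0)[0])
print(np.isclose(p00_block, p00_split))     # True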
The particular case of r:=a_1=c_1 and s:=a_2=c_2 gives the weight matrixdΣ(x)=1/π[ [√(-x^2+4xr^2)/2r^2x]_+ 0 0 0; 0w_r,s(x) 0 0; 0 0w_r,s(x) 0; 0 0 0 [√(-x^2+4xs^2)/2s^2x]_+ ],wherew_r,s(x)=[2√(((r+s)^2-x)(x-(r-s)^2))/2(r^2+s^2)x-(r^2-s^2)^2]_++((r+s)(r-s)/r^2+s^2)^2δ_x_0(z) ,x_0=(r+s)^2(r-s)^2/2(r^2+s^2).Finally, we describe the associated walk on the integer line.Let us consider the homogeneous CTOQW on ℤ.In this case, the quantum walker's dynamics are uniform across different positions on the integer lattice, and could be explored in various physical systems, such as trapped ions, superconducting circuits, or photonic systems.We takeR_i^i+1=A=[ a_1 0; 0 a_2 ], R_i^i-1=C=[ c_1 0; 0 c_2 ], ∀ i∈ℤ, a_1,a_2,c_1,c_2>0.In this case we haveG_i=-[ a_1^2+c_1^2 0 0 0; 0 a_1^2+c_1^2+a_2^2+c_2^2/2 0 0; 0 0 a_1^2+c_1^2+a_2^2+c_2^2/2 0; 0 0 0 a_2^2+c_2^2 ], i∈ℤ.Using the first equation on (<ref>) with A_-1=A and C_0=C, we obtainB(z;W_11)= [ √((z-a_1^2-c_1^2)^2-4a_1^2c_1^2)/(z-a_1^2-c_1^2)^2-4a_1^2c_1^2000;0*00;00*0;000 √((z-a_2^2-c_2^2)^2-4a_2^2c_2^2)/(z-a_2^2-c_2^2)^2-4a_2^2c_2^2 ],where we used dW_+=dW_-=dΣ̃(x),dΣ̃(x) being the weight matrix given by (<ref>).It is easily seen thatlim_z↑ 0√((z-a_k^2-c_k^2)^2-4a_k^2c_k^2)/(z-a_k^2-c_k^2)^2-4a_k^2c_k^2=∞⇔ a_k=c_k, k=1,2,therefore, for ρ=[ a b; b^* 1-a ], we obtain that* a_1=c_1 and a_2=c_2 implies that the walk is recurrent;* a_1≠ c_1 and a_2≠ c_2 implies that the walk is transient;* a_1=c_1 and a_2≠ c_2 implies that the walk is ρ-transient for a=0 and ρ-recurrent for a>0;* a_1≠ c_1 and a_2= c_2 implies that the walk is ρ-transient for a=1 and ρ-recurrent for a<1.We observe that the walker returns infinitely often, in mean, to vertex |0⟩ for any initial density operator when a_1=c_1 and a_2=c_2. In the contrasting scenario, where a_j≠ c_j, j=1,2, the walker returnees is finite in mean. Lastly, when only one of the values a_j equals c_j, then the walk returns a finite number of times to |0⟩, in mean, for one only density, and infinitely often for all others.Moreover, the weight dW_11 is obtained by applications of the Perron-Stieltjes inversion formula:dW_11(x)= [ [√((x-a_1^2-c_1^2)^2-4a_1^2c_1^2)/(x-a_1^2-c_1^2)^2-4a_1^2c_1^2]_+000;0*00;00*0;000 [√((x-a_2^2-c_2^2)^2-4a_2^2c_2^2)/(x-a_2^2-c_2^2)^2-4a_2^2c_2^2]_+ ].§.§ The case of simultaneous unitarily diagonalizable transitionsThe above analysis can be applied to the simultaneous unitary diagonalizable coins, that is, we can take a unitary matrix U and coins given byA=U[ a_1 0; 0 a_2 ]U^*,C=U[ c_1 0; 0 c_2 ]U^*,a_1,a_2,c_1,c_2>0to obtain analogous conclusions about the recurrence of vertex |0⟩. In this case, we have* a_1≤ c_1 and a_2≤ c_2 ⇒ vertex |0⟩ is recurrent;* a_1≤ c_1 and a_2> c_2 ⇒ vertex |0⟩ is U|e_1⟩⟨e_1|U^*-transient and ρ-recurrent for ρ≠ U|e_1⟩⟨e_1|U^*; * a_1> c_1 and a_2≤ c_2 ⇒ vertex |0⟩ is U|e_0⟩⟨e_0|U^*-transient and ρ-recurrent for ρ≠ U|e_0⟩⟨e_0|U^*; * a_1> c_1 and a_2>c_2 ⇒ vertex |0⟩ is transient. Let us describe an example of this and, in addition, let us consider a perturbation on the first vertex. In this case, the walk can be represented by Figure <ref>, where B_0 represents the rate of jumping from vertex |0⟩ to itself. 
Let U∈𝕄_2(ℂ) be an unitary matrix and consider the CTOQW with generatorℒ̂= [ G_0^α+[B_0] [C]; [A] G [C]; [A] G [C]; ⋱ ⋱ ⋱ ], A=C=U[ 2 0; 0 1 ]U^*, B_0=U[ 1 h_1.i; h_1.i 1 ]U^*,and the Hamiltonian operatorH=∑_i∈ℤ_≥ 0H_i⊗|i⟩⟨i|, H_0=[ h_2 h_1; h_1 h_2 ], h_2∈ℝ, h_1∈ℂ, H_i=0i>0.The diagonal block matrices of ℒ̂ are G=-𝒰(8,5,5,2)𝒰^* andG_0^α=𝒰[ -4-|h_1|^200|h_1|^2;0 -5/2-|h_1|^2h_1^20;0h_1^2 -5/2-|h_1|^20;|h_1|^200 -1-|h_1|^2 ]𝒰^*,where 𝒰=U⊗ U, thus G^α_0 is hermitian.The Stieltjes Transform of the matrix weight associated to ℒ̃ (ℒ̂ with G_0^α+[B_0] switched by G) is then, by Equation (<ref>),B(z,Σ̃)=1/32𝒰[ w_1(z)000;0 w_2(z)00;00 w_3(z)0;000 w_4(z) ]𝒰^*, [w_1(z)= 8-z-√(z(z-16));w_2(z)= w_3(z)=20-4z-4√(z^2-10z+9);w_4(z)= 32-16z-16√(z^2-4z) ]. The Stietjes Transform of ℒ̂ is obtained byB(z,Σ)= (B(z,Σ̃)^-1+(G_0^α+[B_0]-G))^-1 = 𝒰[ s_1(z)00|h_1|^2;0 s_2(z)h_1^20;0 h̅_̅1̅^2 s_2(z)0;|h_1|^200 s_3(z);]^-1𝒰^*,s_1(z)= 32/z-8+√(z(z-16))+4-|h_1|^2, s_2(z)=8/z-5+√(z^2-10z_9)+5/2-|h_1|^2, s_3(z)= 2/z-2+√(z(z-4))+1-|h_1|^2. After some calculus using the limit given on Corollary <ref>, we obtain that this walk is recurrent for any choices of h_1∈ℂ,h_2∈ℝ.A perturbation on the vertex |0⟩ for the CTOQW on ℤ: We consider a CTOQW on ℤ with the same transitions as above but with a perturbation on vertex |0⟩. That is, we are taking the walk given by Figure <ref>, whereA=C=U[ 2 0; 0 1 ]U^*, B_0=U[ b_1 b_2; b_2 b_3 ]U^*, b_1,b_2,b_3∈ℝ.Each position on the lattice behaves similarly, except |0⟩. The perturbation, characterized by a different matrix rate for a self-loop at |0⟩, introduces a localized influence on the walker's behavior. This can be seen as a quantum interference effect, where the perturbation disrupts the otherwise uniform evolution of the quantum state.Physically, this setup could be implemented in a quantum system where the different vertices of the integer lattice correspond to distinct quantum states, and the perturbation arises from a modification in the local dynamics at one specific position. This might be achieved through controlled interactions or external fields acting on the quantum system. Such perturbations can be leveraged in quantum algorithms and simulations, providing a way to encode specific information or perform quantum operations selectively at certain positions in the lattice. ] | http://arxiv.org/abs/2311.16366v1 | {
"authors": [
"Newton Loebens"
],
"categories": [
"quant-ph"
],
"primary_category": "quant-ph",
"published": "20231127231251",
"title": "Continuous-time open quantum walks in one dimension: matrix-valued orthogonal polynomials and Lindblad generators"
} |
0000-0001-8079-1882]Natasha Latouf NSF Graduate Research Fellow, 2415 Eisenhower Ave, Alexandria, VA 22314 Department of Physics and Astronomy, George Mason University, 4400 University Drive MS 3F3, Fairfax, VA, 22030, USA NASA Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD 20771, USA Sellers Exoplanents Environment Collaboration, 8800 Greenbelt Road, Greenbelt, MD 20771, USA0000-0002-8119-3355]Avi M. Mandell NASA Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD 20771, USA Sellers Exoplanents Environment Collaboration, 8800 Greenbelt Road, Greenbelt, MD 20771, USA0000-0002-2662-5776]Geronimo L. Villanueva NASA Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD 20771, USA Sellers Exoplanents Environment Collaboration, 8800 Greenbelt Road, Greenbelt, MD 20771, USA0000-0002-9338-8600]Michael D. Himes NASA Postdoctoral Program Fellow, NASA Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD 20771, USA0000-0001-7912-6519]Michael Dane Moore NASA Goddard Space Flight Center, Greenbelt, MD, USA. Business Integra, Inc., Bethesda, MD, USA. Sellers Exoplanets Environment Collaboration, 8800 Greenbelt Road, Greenbelt, MD 20771, USA Center for Research and Exploration in Space Science and Technology, NASA Goddard Space Flight Center, Greenbelt, MD, USA. NASA Goddard Space Flight Center, Greenbelt, MD, USA.0000-0003-2273-8324]Jaime Crouse NASA Goddard Space Flight Center, Greenbelt, MD, 20771 NASA GSFC Sellers Exoplanet Environments Collaboration0000-0003-0354-9325]Shawn Domagal-Goldman NASA Goddard Space Flight Center, Greenbelt, MD, 207710000-0001-6285-267X]Giada Arney NASA Goddard Space Flight Center, Greenbelt, MD, 20771 NASA GSFC Sellers Exoplanet Environments Collaboration0000-0002-5060-1993]Vincent Kofman NASA Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD 20771, USA Sellers Exoplanents Environment Collaboration, 8800 Greenbelt Road, Greenbelt, MD 20771, USA Integrated Space Science and Technology Institute, Department of Physics, American University, Washington DC0000-0003-3099-1506]Amber V. Young NASA Goddard Space Flight Center, Greenbelt, MD, 20771 NASA GSFC Sellers Exoplanet Environments Collaboration Natasha Latouf [email protected], [email protected] We present the results for the detectability of the O2 and O3 molecular species in the atmosphere of an Earth-like planet using reflected light at visible wavelengths. By quantifying the detectability as a function of signal-to-noise ratio (SNR), we can constrain the best methods to detect these biosignatures with next-generation telescopes designed for high-contrast coronagraphy. Using 25 bandpasses between 0.515 and 1 ) and a pre-constructed grid of geometric albedo spectra, we examined the spectral sensitivity needed to detect these species for a range of molecular abundances. We first replicate a modern-Earth twin atmosphere to study the detectability of current O2 and O3 levels, and then expand to a wider range of literature-driven abundances for each molecule. We constrain the optimal 20%, 30%, and 40% bandpasses based on the effective SNR of the data, and define the requirements for the possibility of simultaneous molecular detection. We present our findings of O2 and O3 detectability as functions of SNR, wavelength, and abundance, and discuss how to use these results for optimizing future instrument designs. 
We find that O2 is detectable between 0.64 and 0.83with moderate-SNR data for abundances near that of modern-Earth and greater, but undetectable for lower abundances consistent with a Proterozoic Earth. O3 is detectable only at very high SNR data in the case of modern-Earth abundances, however it is detectable at low-SNR data for higher O3 abundances that can occur from efficient abiotic O3 production mechanisms. § INTRODUCTIONOver 5,000 exoplanets have been discovered and confirmed in the last three decades. With the field of exoplanet exploration booming since the first exoplanet atmosphere discovery <cit.>, the next hurdle is the detection and characterization of terrestrial exoplanet atmospheres. Current space-based observatories (e.g. Spitzer, HST, JWST) are beginning to probe the characteristics of potentially rocky planets with both transmission <cit.> and emission <cit.> measurements, but potentially habitable planets will be remain largely inaccessible except for 1-2 unique systems around the smallest stars (e.g. TRAPPIST-1) and impossible to characterize for Sun-like stars. However, with the advancements of high-contrast instrument technology and future mission concepts <cit.>, the ability to find a habitable Earth-twin around a Sun-like star is now a realistic goal for the next flagship visible-light space telescope, the Habitable Worlds Observatory (HWO). In the search for signs of planetary habitability, the detection of potential biosignatures can hint at biological activity on the planetary surface. Since Earth is currently our only example of a conclusively habitable (and inhabited) planet, using model scenarios with conditions similar to those on modern Earth or its distant past serves as a starting point for examining the detectability of planetary characteristics <cit.>. In particular, at different times in its history Earth hosted varying concentrations of the atmospheric biomarkers O2 and O3, which are primarily produced on Earth through photosynthesis and subsequent photochemical reactions; therefore measurements of geochemical proxies for the oxygenation of Earth's surface and atmosphere over time provide examples of possible global biogeochemical scenarios that we could encounter when observing Earth-like exoplanets. It is also useful to consider planets that - like Earth - have global liquid water oceans, but no biospheres. In some conditions, suchThe Phanerozoic (modern) epoch of Earth (541 million years ago – present), which forms the basis for most habitability and biosignature studies, has relatively high levels of both O2 (21%) and O3 (0.7 ppm) generated by abundant plant life and subsequent photochemistry. However, in the Proterozoic epoch (2.5 billion – 541 million years ago), photosynthetic life was less abundant and measurements suggest oxygen accumulation in the atmosphere at approximately 0.1% to 1% of modern Earth O2 levels but with potentially detectable levels of O3 <cit.>. In a planetary atmosphere similar to the Archean epoch of Earth (4 – 2.5 billion years ago), we would expect an oxygen-poor and ozone-poor atmosphere, with a large abundance of greenhouse gases including CO2 and CH4, which may have in turn formed a photochemical organic atmospheric haze <cit.>. 
In addition to past epochs of Earth's history, models of alternative atmospheric scenarios are capable of producing of O2 and O3 dramatically higher than today's values - even without the presence of biology - due to a variety of mechanisms <cit.>.As shown in Figure <ref>, we can see how varying the abundances of O2 and O3 can alter the resultant visible spectra and lead to differences in the detectability of these gases. In this work we quantify the detectability of O2 and O3, using reflected-light measurements at visible wavelengths (0.515 – 1 ), for a range of abundance values and spectral bandpasses. This project is a direct continuation from <cit.> - in BARBIE1, we studied the detection of H2O as a function of SNR and abundance throughout the visible spectral range, and in this paper we extend the same methodology to the study of O2 and O3 and also examine the impact of the width of spectral bands on the SNR required for detection.In [] <ref> we present the methodology of our simulations, also providing a brief summary of BARBIE1. In [] <ref> we present the results of our simulations for both the modern Earth-like SNR study and the molecular abundance study. In [] <ref> we discuss the presented results and analyze the impact for future observations of varying Earth-twin epochs. In [] <ref> we present our conclusions and ideas for future work. § METHODOLOGYWe follow a similar methodological approach to that of BARBIE1. Herein we present a summary of the main steps in our analysis.§.§ Inputs§.§.§ Pre-Computed Spectral Grid We use a geometric albedo spectral grid that was pre-computed by <cit.>. This grid is housed in the Planetary Spectrum Generator <cit.>. The grid contains 1.4 million geometric albedo spectra that have been pre-computed with the parameters, minimum, and maximum values laid out in Table 1 of S23. The native resolution of the grid is set to R=500 and binned to R=140 for our analysis. For more information on the creation and verification of the grid, see S23 and BARBIE1. There are three molecular species in the grid: H2O, O3, and O2, with N2 as the assumed background gas. The minimum and maximum grid values for these parameters are as follows: H2O in [10^-8, 10^-1]; O3 in [10^-10, 10^-1]; O2 in [10^-10, 0.8]; N2 = 1 - H2O - O3 - O2. The model is comprised of 50% clear and 50% cloudy spectra, i.e. C_f = 50%. There are three planetary parameters in the grid: surface pressure (P_0), surface albedo (A_s), and gravity (g). The minimum and maximum values for these parameters are as follows: P_0 in [10^-3 Bar, 10 Bar]; A_s in [10^-2, 1]; gravity in [1 m/s^2, 100 m/s^2]; and R_p is fixed to 1 R_. The grid covers a wavelength range from 0.515 – 1.0as this range has been defined as the VIS channel in the exoplanet imaging instrument concepts studied in the Astro2020 Decadal Survey that form the starting point for HWO.§.§.§ Mock Data and Retrieval MethodologyOur fiducial “data” spectrum is primarily set as a modern-Earth twin following <cit.>, with constant volume mixing ratios (VMRs) H2O=3×10^-3, O3=7×10^-7, O2=0.21, a background gas of N2=1-H2O-O3-O2, constant temperature profile at 250 K, A_s of 0.3, P_0 of 1 Bar, and a planetary radius fixed at R_p = 1 R_. We consider a resolving power of 140, binned from the native grid resolving power of 500. This fiducial spectrum is given as data to the nested sampler in conjunction with the grid. We use the modern-Earth twin fiducial spectrum for our initial SNR study, wherein we focused on SNRs 3–16 moving in steps of 1. 
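For reference, the sketch below simply collects the fiducial modern-Earth-twin inputs and the SNR grid described above into one place; the dictionary keys are our own shorthand, not PSG parameter names.

```python
import numpy as np

# Fiducial modern-Earth-twin inputs quoted above (keys are our shorthand).
fiducial = {
    "H2O": 3e-3,            # constant volume mixing ratios
    "O3": 7e-7,
    "O2": 0.21,
    "T_K": 250.0,           # constant temperature profile
    "A_s": 0.3,             # surface albedo
    "P0_bar": 1.0,          # surface pressure
    "Rp_Rearth": 1.0,       # fixed planetary radius
    "cloud_fraction": 0.5,  # 50% clear / 50% cloudy model spectra
    "R": 140,               # resolving power, binned down from the native R = 500
}
fiducial["N2"] = 1.0 - fiducial["H2O"] - fiducial["O3"] - fiducial["O2"]  # background gas

snr_grid = np.arange(3, 17)   # SNRs 3-16 in steps of 1 for the modern-Earth study
print(fiducial["N2"], snr_grid)
```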
We then change the fiducial spectrum for our abundance study, changing only the molecule of interest one at a time, to the values listed in Table <ref> and <ref>. All other parameters were left to modern-Earth values, in order to specifically constrain the molecule of interest. In BARBIE1, we focused on H2O, centered on modern-Earth values and moving in increasing and decreasing log-steps. This was due to the lack of constraint on H2O abundance throughout time. In this study, we consider O2 and O3, and set our maximum O2 value as the modern-Earth value of 0.21 VMR and move in decreasing log-steps of 0.25 and 0.5 down to values of 1% to 0.1% to represent a mid-Proterozoic Earth epoch <cit.>. For O3 we wished to test the possibility that values higher than on modern-day Earth might present stronger detectability. To study this potential, we set the maximum value as 3 × 10^-6 VMR, a value that is ∼5x the modern-Earth value of 7 × 10^-7 VMR, and well within the values created in models with high rates of abiotic O2 and O3 production on planets around F-, K, or M-type stars <cit.>.and decrease to the minimum value as the modern-Earth value. In log space, we decreased in steps of 0.1, due to the small gap between the maximum and minimum values. As in BARBIE1, to examine the detectability of the molecular species as a function of the central wavelength and width of a bandpass,we chose 25 evenly spaced values for the bandpass central wavelength. However, in this study we also vary the total width of the spectral bandpass, examining values of 20%, 30%, and 40%. These ranges are consistent with the simultaneous bandpasses that may be achieved with future high-performance coronagraphs <cit.>. Using these bandpass values and the inputs laid out in <ref> and <ref>, we run a series of Bayesian nested sampling retrievals for each abundance and SNR combination using the PSGnest application for the Planetary Spectrum Generator <cit.>. PSG is a radiative transfer model and tool for synthesizing and retrieving upon planetary spectra. This includes planetary atmospheres and surfaces covering wavelengths from 50 nm to 100 mm (i.e. from UV to Radio) and a large range of planetary properties. PSG includes aerosol, atomic, continuum, and molecular scattering/radiative processes, implemented layer-by-layer. PSG also includes the nested sampling routine PSGnest [https://psg.gsfc.nasa.gov/apps/psgnest.php], which is an adaptation of the algorithm used in Fortran MultiNest <cit.>. PSGnest is a Bayesian retrieval tool based on the well-known MultiNest framework and designed for exoplanetary observations; for more information on PSGnest and our retrieval methodology, see S23 and BARBIE1.§.§ OutputsThe output results file from PSGnest contains highest-likelihood values of output parameters, the average value resulting from the posterior distribution, uncertainties which are estimated from the standard deviation of the posterior distribution, and the log evidence (logZ) <cit.>. It also includes the input data (wavelength, uncertainty, and input spectrum, in respective columns) with the best-fit spectrum. We calculate the median values, as well as the upper and lower limits of the 68% credible region <cit.> using the output results. We also extract the posteriors and global log-evidence for use in our detectability calculations, which is the numerical representation that quantifies the relative support per each model given the input data <cit.>. Using these outputs, we compute the Bayes factor. 
The Bayes factor is calculated by subtracting the Bayesian log-evidence per retrieval using the fiducial gas abundances. The resulting differences are referred to as the log-Bayes factor <cit.>. The log-Bayes factor determines which scenario is most likely by examining the hypothesis where all parameters are present, and systematically subtracting the evidences from scenarios where each parameter in turn is nullified. In our studies, if lnB is less than 2.5, it represents an unconstrained detection; if lnB is between 2.5 and 5, it is a weak detection; if lnB is greater than 5, it is a strong detection <cit.>. This differs slightly from Table 2 of <cit.>, in which lnB represents a weak, moderate, and strong detection respectively. We do not calculate the log-Bayes factor for non-gaseous components, as those factors cannot be absent and thus the calculation cannot represent a detection of those components.§ RESULTS §.§ Modern Earth Case Results We begin by presenting the detectability of O2 and O3 as a function of SNR for the fiducial modern-Earth case, as first examined by S23 and BARBIE1; all of the O2 and O3 data and calculated log-Bayes factors across abundance, SNR, and wavelength are available to the community on Zenodo[10.5281/zenodo.8349974]. We only present the narrow SNR range within which detectability strength changes materially for O2 and O3 in Figures <ref> and <ref>. We can see that based on Figure <ref>, for SNR ≤7 there is only a weak or unconstrained detection possible for O2. However, beginning at SNR = 7.5, we can see strong detections of O2 corresponding to bandpasses containing deep O2 spectral features, such as at 0.74 . A strong detection of O2 becomes possible from 0.68 - 0.84by SNR = 8. At SNRs higher than 8, all bandpass locations yield a strong detection of O2 with increasingly better constraints of the 68% credible region.Looking to Figure <ref>, it was required to significantly increase the SNR to achieve a strong detection for O3. We look at an SNR range of 18 – 20, which is quite high, however it is only at this point that we begin to see a strong detection of O3 within the wavelength range of our simulations. At an SNR of 18, we can see a small area of weak detection between 0.6 and 0.67covering six bandpasses. At an SNR of 19, strong detection becomes possible for two bandpasses, and three bandpasses at SNR of 20. We can see it takes a very high SNR to achieve a strong detection of O3, which would lead to an exceptionally high integration time. We can also see that although three bandpasses yield a strong detection at SNR=20, none of those bandpasses correctly constrain the abundance of O3 within the 68% credible region. We present further investigation on this in Figure <ref>a. In this corner plot, which is focused on 0.64at a 20% bandpass, O3 is not retrieved within the 68% credible region, and there is a large spread of the high probability region which does not center on the true value of O3. This is largely due to the lack of continuum caused by the depth and width of the ozone features; a similar problem for detecting H2O at long wavelengths is discussed at length in BARBIE1. We present Figure <ref>b to show that by increasing the bandpass width from 20% to 40% centered on 0.64 , and thus increasing the amount of spectral region covered in each bandpass, an adequate continuum is constrained and the retrieval of O3 is firmly within the 68% credible region with little spread in the high probability regions. 
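As a compact reference for the detection thresholds described above, a small helper is sketched below; the function names are ours and the values in the example call are placeholders rather than retrieved evidences.

```python
def log_bayes_factor(logZ_full, logZ_gas_nullified):
    """ln B for one gas: log-evidence of the full retrieval minus that of the
    retrieval in which the gas is nullified."""
    return logZ_full - logZ_gas_nullified

def detection_strength(lnB):
    """Thresholds used in this study: < 2.5 unconstrained, 2.5-5 weak, > 5 strong."""
    if lnB > 5.0:
        return "strong"
    if lnB >= 2.5:
        return "weak"
    return "unconstrained"

print(detection_strength(6.2), detection_strength(3.1), detection_strength(0.4))
```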
However, we can see in the corner plots shown in Figure <ref>a and <ref>b that A_s is also consistently retrieved at a value at least a factor of two away from the true value. To investigate the source of this incorrect retrieved parameter, we ran several retrievals with very high SNR (200) and with a data spectrum using values centered on a grid point, and compared both sets of results to our lower-SNR modern-Earth results.This allowed us to test whether the result was due to degeneracies for lower-SNR data or due to the impact of interpolation error in our grid-based retrieval scheme (as examined by S23). In Figure <ref>c, where we use a very high SNR and all of the parameters are set to exact values found in the S23 grid, we can see that all parameters are retrieved within a 68% credible region except for C_f which is known to be degenerate. When C_f is locked to its true value (0.5) as in Figure <ref>d, we can see this issue disappears and the range in the highest likelihood region decreases. There is higher interpolation error in A_s likely due to the scarce sampling in the grid points and the limited bandwidth of the sampled spectra considered here. As there is only three grid points in this parameter space, there is a higher likelihood for interpolation error as there is more distance between the given true value and the nearest grid point. For a more in-depth description of this interpolation error and its impact, see S23. In Figure <ref> we present the shortest wavelength at which a weak or strong detection is achieved for O2 and O3; as described in BARBIE1, this is an important metric since the number of planets amenable to high-contrast coronagraphic imaging is higher when observing at shorter wavelengths due to the smaller inner working angle. Figure <ref>a provides a summarial result of Figures <ref> and <ref>, covering the full range of SNR from 3-20 at a 20% bandpass. In Figure <ref>b, we present the shortest wavelength for strong detection if we assume a 30% bandpass, while in Figure <ref>c we present the shortest wavelength for strong detection if we assume a 40% bandpass. We also present the range for a strong detection of H2O as in BARBIE1, with additional SNRs to 20, to provide context to the results and present the possibilities for dual or triple molecule detection. To show the full range, we shade out to the longest wavelength where detection is possible. In Figure <ref>a we can see that O3 is only detectable, whether weakly or strongly, with high SNR (≥14) data over a very narrow range. We can see that O3 can be detected (albeit weakly) simultaneously with a strong detection of H2O at high SNR (≥17) with careful bandpass selection. However, the high SNR required for detection indicates that this would be costly in terms of observing time. Conversely, O2 is strongly detectable at an SNR of 8, with the range between shortest and longest wavelength encompassing all O2 features in the visible wavelength range. This detectability range overlaps significantly with the strong detection range of H2O. We can see that O2 and H2O can be simultaneously observed with SNR = 10 with careful selection of wavelength and corresponding bandpass. This also allows for a range of possible SNR depending on the desired wavelength of detection - if a longer wavelength such as 0.83is accessible, then SNR = 10 would allow for a dual detection, but if a short wavelength such as 0.68is required, then a much higher SNR is required. 
Next looking to Figure <ref>b, we see the detectability change, with triple molecule detection possible from a shortest wavelength of 0.66out to 0.7at an SNR ≥13. O2 remains strongly detectable beginning at an SNR of 8 as in Figure <ref>a; however, while the shortest wavelength of detectability for O2 starts at approximately 0.69with a bandpass of 20%, the shortest wavelength of detectability for O2 with a bandpass of 30% starts at approximately 0.66 . When we look to Figure <ref>c, we see that detectability changes drastically, with triple molecule strong detection possible from a shortest wavelength of 0.625out to 0.725at an SNR ≥11. We can also see that once again, O2 remains strongly detectable beginning at an SNR of 8 as in Figure <ref>a and <ref>b, the shortest wavelength of detectability for O2 changes once more, starting at approximately 0.625with a bandpass of 40%. Thus, although the SNR of strong detectability does not change, the minimum possible wavelength of strong detectability significantly decreases as a function of bandpass width.§.§ Results for Varying Abundance CasesAt this point in our study, we shift to present our abundance case study, wherein we vary the abundance of O2 below modern-Earth values and O3 above modern-Earth values. To assess the trade-off between longer observations (i.e., higher SNR) and different concentrations of O2 and O3, we varied the SNR on the observations for the full range of different VMRs per molecule. In Figure <ref>a, <ref>b, and <ref>c, we display the minimum SNR required to achieve a strong detection for each abundance of O2 at 0.76at 20%, 30%, and 40% bandpasses respectively. Looking first to Figure <ref>a, we notice that O2 quickly requires mid- to high-SNR data to be strongly or weakly detected with even one order of magnitude decrease in abundance. In fact, many of the abundances in our simulation are not detectable at all in this SNR range. The Proterozoic abundances of O2 (0.1% to 1% of modern Earth abundance) are not detectable at any SNR ≤20, and in fact will likely require an extremely high SNR to become detectable, as these values are two to three magnitudes lower than modern Earth abundance values. As seen in Figure <ref>b, varying the bandpass to 30% decreases the required SNR for detection across almost all abundances of O2. For instance, where at a 20% bandpass it requires an SNR of 19 to strongly detect O2 at 2.1×10^-2 VMR, with a 30% bandpass the required SNR drops to 18. This is true for all detectable abundance cases except modern Earth values, which consistently requires an SNR of 8 for strong detection. In Figure <ref>c, we do not see a difference in required SNR for strong detection with a 40% bandpass.While bandpass width makes little difference to the detectability of O2, it makes a significant difference when detecting O3. We can see in Figure <ref> the large impact of the change in bandpass width. With a 20% bandpass centered on the 0.76O2, we capture the entirety of the O2 feature, along with the two smaller H2O features at 0.74 and 0.84 . When the bandpass is widened to 40%, we capture all off the above, along with a portion of both the O3 feature that peaks at approximately 0.63and the 0.9H2O feature. In Figure <ref>a, <ref>b, and <ref>c, we display the minimum SNR required to achieve a strong detection for each abundance of O3 at 0.64with 20%, 30%, and 40% bandpasses respectively. In Figure <ref>a, we can see that O3 is detectable, strongly and weakly, in the full range of our simulation values. 
At modern Earth abundances, the required SNR for a strong detection is high at 19, but it is possible to achieve a weak detection at SNR = 14. When we increase to high O3 values, approximately 5x higher in abundance than modern Earth, the required SNR drops drastically, with strong or weak detection requiring SNRs of 7 or 5 respectively. The largest decrease in required SNR occurs between modern Earth values (7×10^-7 VMR) and 1.25×10^-6 VMR, dropping steeply from a required SNR of 19 to 11 for a strong detection, and 14 to 9 for a weak detection. When we increase the bandpass width to 30% as in Figure <ref>b, we see the required SNR for detection drop across all abundance values. For instance, at 1.5×10^-6 VMR, the required SNR for strong detection at a 20% bandpass is 10, with a 30% bandpass the required SNR drops to 8. This is true for all values across the abundances cases, to varying degrees of severity. This happens once more when the bandpass is widened to 40% as in Figure <ref>c. Looking to the same example case of 1.5×10^-6 VMR, the required SNR for strong detection drops to 6 at a 40% bandpass. With each increase of bandpass width, strong detectability at mid-low SNR data becomes more accessible, with a required SNR as low as 4 for a strong detection at 3×10^-6 VMR. §.§ Self-Consistent Chemistry ComparisonAs previously discussed, in our previous simulations we varied one molecular abundance (for O2) while the other parameters were held fixed to modern Earth-like values, thus we did not explore the relationship between varying both O2 and O3 and the resultant change in detectability.However, due to the photochemical relationship between the O2 abundance and the production of O3, the abundances of both molecules would actually be linked in any atmospheric scenario. As found in <cit.>, the relationship between O2 and O3 varies for model atmospheres depending on the type of host star, and the linearity of the relationship also appears to vary. When looking to hotter host stars (e.g., a G2 star), peak O3 abundance occurs at lower than modern Earth O2 abundances due to the O3 layer shifting in the atmosphere downwards to O2 levels due to O2 photochemical shielding. This O2/O3 relationship means that we can constrain the overall molecular oxygen abundance by detecting either of the species; O3 is in fact a highly sensitive tracer of lower abundances of O2. In order to explore the impact of this, we examined several scenarios where we varied our parameter values as coupled parameters, i.e. 10% PAL O2/105% PAL O3, etc., consistent with the results for a Sun-like star from <cit.> (as seen in Table <ref>). We present our detectability results for these abundance pairs in Figure <ref>. In Figure <ref>a-c, we present the O2 results with H2O at 20%, 30%, and 40% bandpasses at 10% PAL, 50% PAL, and 75% PAL of O2. We can see that at 10% PAL, O2 detectability is unlikely, requiring a minimum SNR of 19 at a 20% bandpass, and SNR of 18 at 30% and 40% bandpasses. However, between 50% PAL, 75% PAL, and the previously presented 100% PAL of O2, there are little difference, with slight variance in SNR (e.g. from requiring SNR of 8 for 100% PAL O2 to requiring SNR of 10 for 50% PAL O2) and little change across bandpass width as in Figure <ref>. In Figure <ref>d-f, we present the O3 results with H2O at 20%, 30%, and 40% bandpasses at 105% PAL and 110% PAL of O3. We can see that detectability does not shift significantly for O3, as our values do not vary greatly from a modern-Earth like value. 
However, one notable difference is that there is a larger range for strong O3 detection at both 105% and 110% O3 PAL with bandpass width 20% than the same bandpass width with 100% O3 PAL as in Figure <ref>. We also see that O3 is detectable at an SNR of 17 rather than 19 as in the modern-Earth like results for a 20% bandpass. The detectability range of O3 increases with increasing bandpass width, as expected following Figure <ref>.Following this, we analyze the limiting molecule in double or triple molecular detection in Figure <ref>. We present 10% O2/105% O3 PAL in Figure <ref>a-c. We can see that detection of O2 dually or triply with H2O or O3 is difficult. At a 20% bandpass, H2O and O2 are dual detectable only at an SNR ≥19. At a 30% bandpass, triple H2O/O2/O3 detection is possible at an SNR ≥18 in a very narrow wavelength range (approximately 0.66 to 0.7 ), and dual H2O/O2 detection is also possible at an SNR ≥18, while dual H2O/O3 detection has a wider range of detectability, from SNR ≥14, and from approximately 0.63 to 0.7 . At a bandpass of 40%, the range for triple detection increases in wavelength, to approximately 0.63 to 0.76 . The range for dual H2O/O3 detection also increases, starting from SNR ≥11 and from 0.63 to 0.75 . At all bandpasses, H2O has a large range in wavelength and SNR of single detectability thus it is highly likely to strongly detect H2O. In all bandpasses, O2 is the limiting molecule for dual or triple detection in both SNR and wavelength. In terms of dual H2O/O3 detection, O3 is the limiting molecule in wavelength, but H2O is the limiting molecule in SNR.We present 50% O2/110% O3 PAL in Figure <ref>d-f. We can see that, as in the modern Earth case, dual detection of H2O and O2 is always possible at all bandpasses. The same is not true for H2O and O3 however, with dual strong detection possible at only a single point in the 20% bandpass, and with a wider range possible at 30%. At 40%, there is no dual detection of H2O and O3, however there is a large range (approximately 0.12in width) wherein triple detection is possible from SNR ≥11. At 30% and 40% bandpasses there are also areas where dual detection is possible with O2 and O3 at mid-high SNR and short wavelengths. With a 20% bandpass, H2O is the limiting molecule for dual H2O/O2 detection in both SNR and wavelength. At a 30% bandpass, H2O is still the limiting molecule for dual H2O/O2 detection and triple H2O/O2/O3 detection in SNR, however O3 is the limiting molecule in wavelength. When looking at dual H2O/O3 detections, H2O is the limiting molecule in SNR and wavelength, mimicking the dual H2O/O2 detection. At a 40% bandpass, O3 is the limiting molecule in wavelength for a triple H2O/O2/O3 detection, and H2O is the limiting molecule in SNR for a triple H2O/O2/O3 detection. We then have the same H2O limiting for dual H2O/O2 detection in SNR, although there is no limiting molecule for dual detection in wavelength since the final bandpasses have detections for both H2O and O2.We present 100% O2/100% O3 PAL in Figure <ref>g-i. In the 20% bandpass, we see extremely similar results to Figure <ref>d, with the notable difference that O2 detection is possible down to an SNR of 8, compared to an SNR of 10. In the 30% bandpass, we again see almost identical results to the 50% O2/110% O3 PAL case shown in Figure <ref>e. 
The differences here are once again that O2 is detectable down to an SNR of 8, but we also notice that O3 has a smaller region of single detectablity, which is also seen in the smaller region of dual O2/O3 detection. As a result of the lower SNR requirements for O2 detectability, this results in a large region of dual H2O/O2 detection. In the 40% bandpass, once again the results mimic the 50% O2/110% O3 PAL counterpoint case shown inFigure <ref>f. The differences shown are a smaller region of triple H2O/O2/O3 detection and dual O2/O3, resulting in larger single O2 and dual H2O/O2 detection regions. This also results in a smaller single H2O detection region. In these cases, the limiting molecules are the same as above: At a 20% bandpass, H2O is the limiting molecule for dual H2O/O2 detection in both SNR and wavelength. At a 30% bandpass, H2O is the limiting molecule for dual H2O/O2 detection and triple H2O/O2/O3 detection in SNR, however O3 is the limiting molecule in wavelength. At a 40% bandpass, O3 is the limiting molecule in wavelength and SNR for a triple H2O/O2/O3 detection.§ DISCUSSIONThe primary utility of the retrieval results presented here is to help understand how the impact of O2 and O3 abundance affects the optimal strategy for detecting the presence of an atmosphere with water and oxygen with spectroscopic bandpasses in the visible-light spectral region. Here we discuss the major conclusions from this work, as well as some of the limitations to the study that may impact these conclusions.The most important result is the role that the width of the bandpass plays. When examining Figures <ref> and <ref>, we can see the notable difference in how a widening bandpass changes the detectability of molecules across abundance and type; where O2 detectability does not vary significantly as a function of bandpass across abundance values, O3 changes significantly as the bandpass widens. This is likely due to the fact that the O2 feature is narrow and deep, thus making it easily detectable at high abundance values, but easily captured in a smaller bandpass width, whereas O3 has a very broad feature thus the widening of the bandpass allows for a stronger continuum and significant change in detection as a function of bandpass. O3 also does not have any strong features in the 0.515 – 1region, leading to inherent difficulty in detection in this range. However, if we are able to observe the strong O3 band at 0.36 , detection would become stronger at lower SNR data. Recent works, such as by <cit.>, have investigated this principle and come to the conclusion that even with impacts of possible confusion due to SO2 and other species, O3 detection is indeed easier at lower SNR and lower resolution data. 
<cit.> found that with spectral resolution of 7, and SNR = 10, O3 can be detected in the UV at sub-PAL VMRs; they also note that observing additional signatures of habitability in the NIR region is crucial to interpreting O3 detections at UV and VIS wavelengths.We will examine these questions further in future work addressing the complete UV, VIS and NIR wavelength regions.A wider bandpass also better enables detections of one or more molecules with a single bandpass - in particular, the detection of O3 with other molecules.In Figure <ref>, we can see that for modern Earth abundances a bandpass width of ≥30% would allow for a detection of H2O at slightly shorter wavelengths (long-wavelength cutoff of ∼0.95versus 0.99for a 20% bandpass) and also a joint measurement of a strong O2 detection and at least a weak O3 detection with SNR = 8-9 versus only O2 (which could help to limit false positives and motivate additional observations). Furthermore, Figure <ref> shows that a 40% bandpass width enables a significant improvement in the ability to detect all three species with a single bandpass at around 0.72 ; this could either be used alone as a first reconnaissance bandpass or could be used as the follow-up to a longer-wavelength search for H2O alone at low SNR. The optimal choice between these two options depends significantly on whether a planet is detectable at longer wavelengths (due to IWA) and whether the exposure time required for longer-wavelength measurements is significantly impacted by instrument sensitivity; we leave these instrument- and target-dependent considerations to future work.The second important result is that the abundance versus SNR results in Figures <ref> and <ref> show a non-linear increase in the required SNR at smaller abundances. Combined with the fact that SNR is linearly dependent on the square of exposure time, this suggests that our exposure time will be extremely sensitive to the limiting abundance that must be detectable.Figure <ref> further demonstrates this, showing that when O2 drops from 50% PAL (12% VMR) to 10% PAL (2% VMR), it essentially becomes undetectable. This motivates a deeper analysis of whether detecting O2 at visible wavelength should be a high priority in the progression of measurements if there is a high likelihood of a non-detection for even a moderate abundance, compared with a measurement of more sensitive markers of atmospheric abundance in the UV or NIR.We note that high O3 abundances are challenging to model, in that it takes a very small amount to completely overwhelm the spectrum. Spectra with high O3 have continua near zero, which also causes the error value to increase substantially, thus high values of O3 must be handled with care. At high abundances, the error value increase can lead to poorly constrained retrievals using our grid. <cit.> found that this occurs at high O3 (log_10O3>-5) concurrently occurring with low P_0 (log_10P_0<-1.7) using our grid. Our simulations go no higher than log_10O3≤-6 and therefore avoid this degeneracy and resulting poorly constrained retrievals.As discussed in BARBIE1, there have been similar prior works that investigated the relationship between SNR and detectability, specifically <cit.> and <cit.>. We explore the differences in retrieval parameterization structure in BARBIE1, however even with the differences in techniques and analysis, our results for O2 and O3 detectability are in agreement. 
An SNR of 10 at R = 140 is sufficient to firmly constrain the abundances for an Earth-twin atmosphere with O2 and O3. Our work also explores the influence of varying bandpass width on molecular detection, which varies from prior work, and thus presents a new analysis of detection possibilities.§ CONCLUSIONS & FUTURE WORKTo summarize, by investigating bandpass width in tandem with SNR, we can see that detectability is intrinsically linked to both factors as an additional function of molecular type. The ability to properly prioritize and select the best combination of parameters can drive efficient observing practices. By understanding the SNR requirements for strong detection of O2 and O3 and also understanding which bandpass width could result in simultaneous strong detection of O2, O3, and H2O, we can properly prioritize the best combination of bandpass and required SNR for detection, thereby informing the best options for instrument design trades and observing procedure. O3 is most easily observable using a 30 or 40% bandpass width at shorter wavelengths with mid-low SNR data. With a 20% bandpass width, O3 is difficult to detect, requiring SNR ≥19 at modern-Earth values. O2 is not significantly affected by bandpass width, and consistently requires an SNR of ≥8 to be strongly detected at modern-Earth values at 0.76and shorter. It is not detectable at SNR ≤20 at Proterozoic era abundances. Since the coronagraph and instrument capabilities may dictate that the best observation occurs at shorter wavelengths, O2 and O3 are well within the most optimized instrument capabilities. O2 and O3 have strong geochemical markers that provide a depth of knowledge to potential Earth observations, and a heightened ability to constrain the observed Earth-like epoch. By also investigating coupled atmospheric abundances of O2 and O3, we can study how detectability of these molecules vary with the other parameter. Allowing parameters to vary with each other following leading coupled photochemistry and atmospheric models, we can prioritize the optimal bandpass width for more realistic exo-Earth simulations. In future work, we plan to build a new spectral model grid that includes a wider range of molecular species, and we will extend to shorter and longer wavelengths than the visible range. Specifically, we will cover the same wavelengths as the proposed coronagraph instrument for the Habitable Worlds Observatory <cit.>, including UV, Optical, and NIR. This will allow us to study more biosignatures and molecular species of interest, and represent the physical chemistries more accurately. This will allow us to expand our simulations of possible observations and establish best observational practices for exoEarth observations using next generation telescopes. We will also develop a PSG module to display all detectability information in a publicly accessible format. We note that SNR results may be subject to minor changes following grid reconstruction due to PSG radiative transfer upgrades. We will also include a full exoplanet yield calculation using the data contained within BARBIE1 and BARBIE2 to give an educated baseline for detection of biomarkers. The interpolation metric used in this work is a trilinear interpolation scheme which can be used on 3D grids as in this work, but in future works we will investigate if another interpolation metric, such as inverse distance weighted interpolation or multiplicative weight interpolation, can minimize interpolation error in grid-based retrievals. 
We will also conduct photochemically self-consistent studies in order to inform and broaden the results for future retrieval studies using grids. The expansion of grid parameter space will allow for the possibility of chemically consistent modeling, combined with input from updated photochemical models to ensure self-consistent atmospheric gas compositions with retrievals. This will be particularly important for considerations of gas detections for planets around different star types, given the impact star type has on the photochemistry of planetary atmospheres <cit.>.N. L. gratefully acknowledges financial support from an NSF GRFP. N.L. gratefully acknowledges Dr. Joesph Weingartner for his support and editing. N. L. also gratefully acknowledges Greta Gerwig, Margot Robbie, Ryan Gosling, Emma Mackey, and Mattel Inc.™ for Barbie (doll, movie, and concept), for which this project is named after. This Barbie is an astrophysicist! The authors also gratefully acknowledge conversation with Dr. Chris Stark regarding exoEarth yields and instrument design. The authors would like to thank the Sellers Exoplanet Environments Collaboration (SEEC) and ExoSpec teams at NASA's Goddard Space Flight Center for their consistent support. MDH was supported by an appointment to the NASA Postdoctoral Program at the NASA Goddard Space Flight Center, administered by Oak Ridge Associated Universities under contract with NASA. aasjournal | http://arxiv.org/abs/2311.16015v1 | {
"authors": [
"Natasha Latouf",
"Avi Mandell",
"Geronimo Villanueva",
"Michael Himes",
"Michael Moore",
"Nicholas Susemiehl",
"Jaime Crouse",
"Shawn Domagal-Goldman",
"Giada Arney",
"Vincent Kofman",
"Amber Young"
],
"categories": [
"astro-ph.EP",
"astro-ph.IM"
],
"primary_category": "astro-ph.EP",
"published": "20231127172042",
"title": "Bayesian Analysis for Remote Biosignature Identification on exoEarths (BARBIE) II: Using Grid-Based Nested Sampling in Coronagraphy Observation Simulations for O2 and O3"
} |
Exploring scale invariance in the expansion of a spherical unitary Fermi gas Kaijun Jiang^1 January 14, 2024 ============================================================================ The Segment Anything Model (SAM) achieves remarkable promptable segmentation given high-quality prompts which,however, often require good skills to specify. To make SAM robust to casual prompts, this paper presentsthe first comprehensive analysis on SAM's segmentation stability across a diverse spectrum of prompt qualities, notably imprecise bounding boxes and insufficient points. Our key finding reveals that given such low-qualityprompts, SAM's mask decoder tends to activate image features that are biased towards the background or confined to specific object parts. To mitigate this issue, our key idea consists of calibrating solely SAM's mask attention by adjusting the sampling locations and amplitudes of image features, while the original SAM model architecture and weights remain unchanged. Consequently, our deformable sampling plugin (DSP) enables SAM to adaptively shift attention to the prompted target regions in a data-driven manner.During inference,dynamic routing plugin (DRP) is proposed that toggles SAM between the deformable and regular gridsampling modes, conditioned on the input prompt quality.Thus, our solution, termed Stable-SAM, offers several advantages: 1) improved SAM's segmentation stability across a wide range of prompt qualities, while 2) retainingSAM's powerful promptable segmentation efficiency and generality, with 3) minimal learnable parameters (0.08 M) and fast adaptation.Extensive experiments validate the effectiveness andadvantages of our approach, underscoring Stable-SAM as a more robust solution forsegmenting anything. Qi Fan ([email protected]) done this work at Kuaishou Technology. Xin Tao ([email protected]) is the corresponding author.§ INTRODUCTIONThe recent Segment Anything Model (SAM <cit.>) stands a significant milestone in image segmentation,attributed to its superior zero-shot generalization ability on new tasks and data distributions.Empowered by the billion-scale training masks and the promptable model design, SAM generalizes to various visual structures in diverse scenarios through flexible prompts, such as box, point, mask or text prompts. Facilitated by high-quality prompts, SAM has produced significant performance benefit for various important applications, such ashealthcare <cit.>, remote sensing <cit.>, self-driving <cit.>, agriculture <cit.>, .Previous works mainly focus on improving SAM's segmentation performance assuminghigh-quality prompts are available, such as a tight bounding box (, produced by SOTA detectors <cit.>) or sufficient points (, 10 points) for the target object.However, in practice SAM or in fact interactive segmentation often encountersinaccurate or insufficient prompts, casually marked up by users as inaccurate box or very sparse points, especially in the crowd-sourcing annotation platform.Such inaccurate prompts often mislead SAM to produce unstable segmentation results as shown in Figure <ref>.Unfortunately, however, this critical issue has been largely overlooked, even though the suboptimal prompts and the resulting segmentation stability problem are quite prevalent in practice .Note that there is no proper off-the-shelf solution for solving SAM's segmentation stability problem with inaccurate prompts. 
Simply finetuning SAM's mask decoder with imprecise prompts may easily lead to catastrophic forgetting, underminingthe integrity of the highly-optimized SAM model and thus sacrificing the zero-shot segmentation generality.Although in the image domain deformable attention <cit.> has shown impressive efficacy on adaptively shifting the model attention to informative regions, which may naturally address the attention drift issue caused by the misleading prompts, a straightforward implementation of this idea can again compromise SAM's integrity. In this paper we present the first comprehensive analysis on SAM's segmentation stability across a wide range of prompt qualities, with a particular focus on low-quality prompts such as imprecise bounding boxes or points. Our findings demonstrate that, when fed with imprecise prompts, the SAM's mask decoder is likely to be misguided tofocus on the background or specific object parts, wherethe cross-attention module is inclinedto aggregate and activate image features of these regions when mutually updating the prompt and image tokens. Such collaborative token updating mechanism usually suffers from attention drift, which is accumulated and propagated from the suboptimal prompt to the unsatisfactory segmentation results. To address this issue,we present a novel deformable sampling plugin (DSP) with two key designs to improve SAM's stability while maintaining its zero-shot generality. Our key idea is to adaptively calibrate SAM's mask attention by adjusting the attention sampling positions and amplitudes, while keeping the original SAM model unchanged: 1) we employ a small offset network to predict the corresponding offsets and feature amplitudes for each image feature sampling locations, which are learned from the input image feature map; 2) then, we adjust the feature attention by resampling the deformable image features at the updated sampling locations for keys and values of the cross-attention module in SAM's mask decoder, keeping the original SAM model unchanged. In doing so, we can shift the feature sampling attention toward informative regions which is more likely tocontain target objects, and meanwhile avoiding the potential model disruption of the original highly-optimized SAM.Finally, to effectively handle both the high- and low-quality prompts, we propose a dynamic routing module to toggle SAM between deformable and regular grid sampling modes. A simple and effective robust training strategy is proposed to facilitate our Stable-SAM to adapt to prompts of diverse qualities. Thus, our method is unique in its idea and design on solely adjust the feature attention without involving the original model parameters.In contrast, the conventional deformable attention methods <cit.> updates the original network parameters, which isundesirable when adapting powerful foundation models, especially when finetuning large foundation models.Our method thus improves SAM’s segmentation stability across a wide range of prompt qualities with minimal learnable paramters and fast adaptation, and meanwhile retains SAM’s powerful promptable segmentation efficiency and generality. Our model, Stable-SAM, benefits several advantages from both the selective deformable attention and the powerful original SAM model, with minimal addition of computational overhead and parameters. First, the SAM's segmentation stability is substantially improved across a wide range of prompt qualities, especially with low-quality prompts. 
Besides, the original SAM's powerful promptable segmentation efficiency and generality are well-preserved, even in the data-scarce scenarios. Extensive experiments across multiple datasets validate the effectiveness and advantages of our approach, underscoring its potential as a robust solution for segmentation tasks.How to stably segment target objects in downstream applications?the prompt is usually inaccurate and diversefor simple objects with regular pose and shape, diverse prompts can perform wellBut for objects with deformable pose and shape, the prediction quality heavily relies on the input prompts.The prompt-activated image features is not goodWe want SAM to perform consistently well on diverse and inaccurate prompts.geometric transformationsUnStable-SAM predictions keys:sparse promptslarge geometric transformations in object scale, pose, viewpoint, and part deformation. prompt activates image tokens of corresponding image regions the intrinsic problem cannot be fully solved by simple data augmentationSimply finetuning SAM model on downstream datasets mayincrease the risk of overfitting, especially when the dataset is small, which is typical in real-world applications.each prompt activates some specific image regionsbutterfly effect § RELATED WORKSImproving Segmentation Quality. Researchers have proposed various methods to enhance the quality and accuracy of semantic segmentation methods. Early methods incorporate graphical models such as CRF <cit.> or region growing <cit.> as an additional post-processing stage, which are usually training-free. Many learning-based methods design new operators <cit.> or utilize additional refinement stage <cit.>.Recently, methods such as Mask2Former <cit.> and SAM <cit.> have been introduced, which address open-world segmentation by introducing prompt-based approaches. Along this line, a series of improvements <cit.> have been proposed, focusing on prompt-tuning and improving the accuracy of segmentation decoders. However, these methods overlook a crucial aspect, which is how to generate high-quality segmentation results in cases where the prompt is inaccurate. This is precisely the problem that our method aims to address.Tuning Foundation Models. Pretrained models have played an important role since the very beginning of deep learning <cit.>. Despite zero-shot generalization grows popular in foundation models of computer vision and natural language processing <cit.>, tuning methods such as adapter <cit.> and prompt-based learning <cit.> have been proposed to generalize these models to downstream tasks <cit.>. These methods typically involves additional training parameters and time. We propose a new method that makes better use of existing features with minimal additional methods and can also produce competitive results.Deformable Attention. Deformable convolution <cit.> has been proved effective to help neural features attend to important spatial locations. Recently, it has also been extended to transformer-based networks <cit.>. Such deformed spatial tokens are especially suitable for our task, which requires dynamically attending to correct regions given inaccurate prompts. However, previous deformable layers involve both offset learning and feature learning after deformation. 
In this paper, we propose a new approach to adjust the feature attention by simply sampling and modulating the features using deformable operations, without the need to train subsequent layers.§ SAM STABILITY ANLAYSISIn this section, we present a comprehensive investigation into the stability of SAM under prompts of varying quality. §.§ Segmentation Stability MetricPrior segmentation studies have focused on achieving high prediction accuracy, gauged by the Intersection-over-Union (IoU) between the predicted and ground truth masks. This focus on high performance is justified as segmentation models typically produce deterministic masks for given input images, without requiring additional inputs.However, SAM's segmentation output depends on both the image and the prompts, with the latter often varying in quality due to different manual or automatic prompt generators. In practical applications of SAM, segmentation targets are typically clear and unambiguous, independent of prompt quality. For instance, in autonomous driving applications, the goal is to segment the entire car stably and consistently, regardless of whether the prompt—be it a point or a bounding box—initially focuses on a specific part such as the wheel or the car body.Motivated by this application requirement, we introduce the segmentation stability metric.Specifically, SAM is capable of producing a set of binary segmentation maps M ∈ℛ^B × H × W for a single target object using B prompts of differing qualities. We define the segmentation stability (mSF) within the set as: S = 1/B∑_i=1^BIoU(M_i, M_union),where IoU(M_i, M_union) represents the Intersection-over-Union between the i-th segmentation map M_i and the collective foreground region ⋃_i^B M_i of all maps. This new metric assesses the consistency across segmentations in each prediction, serving as a reliable indicator of stability, even without access to the ground truth masks. §.§ SAM Segmentation Instability We perform empirical studies to illustrate the segmentation instability of the current SAM with prompts of differing quality, thereby justifying our Stable SAM approach.Model and Evaluation Details. The released SAM is trained with crafted prompts on large-scale SA-1B dataset. We evaluate the segmentation accuracy and stability of the ViT-Large based SAM with different prompt types and qualities, including box prompts with added noise (noise scale 0.4) and point prompts with varying numbers of points (1, 3, 5, 10 positive points randomly selected from the ground truth mask). For every input image and prompt type, we randomly select 20 prompts to compute their segmentation stability, average mask mIoU, and boundary mBIoU scores. The evaluation utilizes four segmentation datasets as in HQ-SAM <cit.>: DIS <cit.> (validation set), ThinObject-5K <cit.> (test set), COIFT <cit.>, and HR-SOD <cit.>.Table <ref> tabulates that SAM's segmentation accuracy and stability significantly decrease with low-quality prompts, such as imprecise box prompts or point prompts with minimal points.These analysis are performed on the four aforementioned segmentation datasets. The varying segmentation accuracy and stability indicates that SAM's mask decoder performs distinctly when dealing with prompts of varying qualities. We visualize the image activation map for the token-to-image cross-attention in SAM's second mask decoder layer to better understand its response to low-quality prompts. 
We focus on the second mask decoder layer for visualization because its cross-attention is more representative, benefiting from the input tokens and image embedding collaboratively updated by the first mask decoder layer. Figure <ref> demonstrates that an inaccurate box prompt causes SAM's mask decoder to miss regions of the target object while incorrectly incorporating features from the background, or focusing on specific object parts. It consequently leads to degraded segmentation accuracy and stability.

To further quantify the attention drift caused by inaccurate prompts, we also compute the attention coverage ratio on the foreground region of the target object and on the background, respectively, over the same four segmentation datasets. Table <ref> shows that lower-quality prompts are inclined to shift attention to background regions, and therefore result in worse segmentation accuracy and stability.

Overall, the above empirical evidence suggests that SAM potentially suffers from the attention drift issue, where suboptimal prompts misleadingly shift attention from the target object to background areas or specific object parts, thereby compromising the accuracy and stability of the segmentation results. This motivates us to calibrate SAM's mask attention by leveraging learnable offsets to adjust the attention sampling position towards the target object regions, thus boosting segmentation accuracy and stability.
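As a concrete reference for the metric used throughout this analysis, the stability score can be computed directly from a stack of predicted masks. The sketch below is a minimal illustration; the tensor shapes and variable names are assumptions for exposition, not part of the released SAM code.

```python
import torch

def stability_score(masks: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Segmentation stability (mSF) for a single target object.

    masks: (B, H, W) boolean tensor of B binary predictions obtained from
           B prompts of differing quality for the same object (B = 20 in
           the evaluation protocol above).
    Returns the mean IoU between each mask and the union of all masks.
    """
    union = masks.any(dim=0)                       # collective foreground, (H, W)
    inter = (masks & union).flatten(1).sum(dim=1)  # |M_i ∩ M_union| per mask, (B,)
    denom = (masks | union).flatten(1).sum(dim=1)  # |M_i ∪ M_union| per mask, (B,)
    return (inter.float() / (denom.float() + eps)).mean()
```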
§ STABLE SEGMENT ANYTHING MODEL

§.§ Preliminaries

We first revisit the recent Segment Anything Model (SAM) and the deformable attention mechanism.

Segment Anything Model. SAM <cit.> is a powerful promptable segmentation model. It comprises an image encoder for computing image embeddings, a prompt encoder for embedding prompts, and a lightweight mask decoder for predicting segmentation masks by combining the two information sources. The fast mask decoder is a two-layer transformer-based decoder that collaboratively updates both the image embedding and prompt tokens via cross-attention. SAM is trained on the large-scale SA-1B dataset.

Deformable Attention. Deformable attention <cit.> is a mechanism that enables the model to focus on a subset of key sampling points instead of the entire feature space. This mechanism naturally addresses the attention drift problem in SAM caused by low-quality prompts. In standard self-attention, given a feature map x ∈ℛ^H × W × C, the attention weights are computed across all spatial locations within the feature map. In deformable attention <cit.>, a uniform grid of points r ∈ℛ^H_G × W_G × 2 is first generated as the reference locations (with the grid size downsampled from the input feature map spatial size (H, W) by a factor of s, thus H_G = H/s and W_G = W/s), together with the sampled image feature x_r ∈ℛ^H_G × W_G × C. Subsequently, a convolutional offset network θ_offset predicts the offset Δ r = θ_offset(x_r) for each reference point. The new feature sampling locations are given by r+Δ r ∈ℛ^H_G × W_G × 2.
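As a rough sketch of this reference-grid resampling step (with assumed tensor layouts and a hypothetical offset_net; not the implementation of the cited works), the offsets and bilinear resampling can be written with grid_sample:

```python
import torch
import torch.nn.functional as F

def deformable_resample(x, offset_net, stride=4):
    """Resample features at offset reference points (illustrative sketch).

    x: (N, C, H, W) feature map; offset_net is assumed to map a (N, C, Hg, Wg)
    feature to (N, 2, Hg, Wg) offsets in normalized [-1, 1] coordinates;
    stride plays the role of the downsampling factor s.
    """
    N, C, H, W = x.shape
    Hg, Wg = H // stride, W // stride
    # Uniform reference grid r in normalized coordinates, ordered (x, y).
    ys = torch.linspace(-1, 1, Hg, device=x.device)
    xs = torch.linspace(-1, 1, Wg, device=x.device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    r = torch.stack((gx, gy), dim=-1).expand(N, Hg, Wg, 2)   # (N, Hg, Wg, 2)
    x_r = F.grid_sample(x, r, align_corners=True)            # features at r
    delta = offset_net(x_r).permute(0, 2, 3, 1)              # Δr, (N, Hg, Wg, 2)
    return F.grid_sample(x, r + delta, align_corners=True)   # features at r + Δr
```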
The resampled deformable image features x_r+Δ r∈ℛ^H_G × W_G × C are then utilized as the key and value features in the attention module.Note that conventional deformable attention optimizes both the offset network and attention module. Thus directly applying deformable attention to SAM is usually suboptimal,because altering SAM's original network or weights, , substituting SAM's standard attention with deformable attention and retraining, may compromise its integrity. Given a feature map x ∈ℛ^H × W × C, the standard self-attention mechanism computes the attention weights across all spatial locations within the feature map x:Attn(x) =σ(Q(x) · K(x)^T) · V(x),where σ denotes the softmax function, Q, K, V are the query, key, and value embedding projection functions, respectively.In deformable attention <ref>, a uniform grid of points r ∈ℛ^H_G × W_G × 2 are first generated as the references, with the grid size downsampled from the input feature map size by a factor of s, thus H_G = H /s and W_G = W /s. Subsequently, a convolutional offset network θ_offset predicts the offset Δ r ∈ℛ^H_G × W_G × 2 for each reference point by taking as input the image feature x_r ∈ℛ^H_G × W_G × C resampled at the reference points:Δ r = θ_offset(Q(x_r)). Thus, the deformable attention is defined as:DeformAttn(x) = σ(Q(x) · K(x_r+Δ r)^T) · V(x_r+Δ r),where x_r+Δ r∈ℛ^H_G × W_G × C is the resampled image feature obtained by shifting the feature sampling positions from the uniform grid reference points r to the new sampling positions r+Δ r. The convolutional offset network and self-attention module are trained concurrently.§.§ Deformable Sampling Plugin To address the attention drift issue while preserving the SAM's integrity, we propose a novel deformable sampling plugin (DSP) module on top of SAM's original token-to-image cross-attention module, as shown in Figure <ref>.Specifically, given the prompt token feature t ∈ℛ^T× C and image feature x_p ∈ℛ^H × W × C, the token-to-image cross-attention is:CAttn(t,x) = σ(Q(t) · K(x_p)^T) · V(x_p),where p ∈ℛ^H × W × 2 represents the image feature spatial sampling locations, σ denotes the softmax function, and Q, K, V are the query, key, and value embedding projection functions, respectivelyOur DSP adaptively calibrate the feature attention by adjusting solely image feature sampling locations and amplitudes without altering the original SAM model. Specifically, we utilize an offset network θ_offset to predict the feature sampling offset Δ p ∈ℛ^H × W × 2, akin to that in deformable attention:Δ p = θ_s(θ_offset(x_p)),where θ_s is a scale function s_p ·tanh(*) to prevent too large offset, and s_p is a pre-defined scale factor. The offset network θ_offset consists of a 1 × 1 convolution, a 5 × 5 depthwise convolution with the layer normalization and GELU activation, and a 1 × 1 convolution.The updated feature sampling locations are p+Δ p. The numerical range of both p and p+Δ p lies in {(0,0),...,(H-1,W-1)}, which is then normalized to the range [-1, 1] for feature sampling. The feature amplitudes are predicted by the first convolutional layer and the image features x_p are thus updated as x_p^⋆, which are used solely for computing the feature attention. Subsequently, we resample and modulate deformable image features x_p+Δ p^⋆∈ℛ^H × W × C at the updated sampling locations p+Δ p with the learned feature amplitudes for keys and values. 
Thus, our DSP calibrates the token-to-image cross-attention of SAM's mask decoder as:DCAttn(t,x) = σ(Q(t) · K(x_p+Δ p^⋆)^T) · V(x_p+Δ p^⋆).As p+Δ p is fractional, we apply a bilinear interpolation to compute x_p+Δ p^⋆ as in Deformable DETR <cit.>.Note that our DSP only trains the deformable offset network to predict new feature sampling locations p+Δ p and feature amplitudes, and feeds the resampled and modulated deformable features x_p+Δ p^⋆ to SAM's cross-attention module. Thus, the original SAM model remains unchanged. §.§ Non-Local Offset Network In most deformable attention works <cit.>, the offset network typically consists of a several convolutional layers dedicated to generating offsets. Under previous circumstances, its local receptive field is sufficient for shifting the sampling locations to neighboring positions.However, in promptable segmentation, the prompts may straddle across a large image region, and thus require a global offset network to provide sufficient deformable transformation. We thus propose a novel non-local offset network to address this issue.Although the non-local attention <cit.> can capture the global relation among image features at all spatial locations, the cost of standard implementationis quadratic the image feature size N=H × W. Linear non-local attention <cit.>, on the other hand, provides a much more efficient alternative, achieving linear complexity by approximating the original Softmax operation with strategically designed mapping functions. We choose the recent Focused Linear Attention <cit.> as our non-local module, due to its superior efficiency and expressiveness. The Focused Linear Attention can be formulated as:LinAttn(x) =ϕ_e (Q) ·ϕ_e (K)^T · V + DWC(V),where DWC represents the depthwise convolution, and Q,K,V are the abbreviations of Q(x),K(x),V(x). The mapping function ϕ_e is formulated as:ϕ_e(x) = f_e(ReLU(x)), f_e(x) = ||x||/||x^**e||x^**e,where x^**e denotes the element-wise power e of x. The computational complexity can be reduced from 𝒪(N^2C) (of the softmax function in standard non-local attention) to 𝒪(NC^2). In SAM, the feature size N=64 × 64 is much larger than the channel dimension C=256, and thus the overall computation can be substantially reduced.Empowered by the focused linear attention, our non-local offset network can be formulated as:Θ_offset = θ_offset(LinAttn(x)),where θ_offset is the convolutional offset network comprising a 3 × 3 depthwise convolution, a GELU activation and a 1 × 1 convolution.Note that our non-local offset network is flexible, allowing the focused linear attention module to be readily substituted with other potent linear non-local attention modules. §.§ Dynamic Routing Plugin While our DSP can effectively handle suboptimal and even erroneous prompts, by redirecting SAM's attention to informative regions which are more likely to contain the target objects, high-quality prompts can typically direct the model's attention correctly to target regions. Thus, it is essential to properly control the DSP's activation to prevent unwanted attention shifts.To address this issue, we propose a novel dynamic routing plugin (DRP) that regulates the degree of DSP activation based on the input prompt quality. 
The DRP can be formulated as follows:α = σ (MLP(t_o)) · s,where t_o ∈ℝ^1× C is the prompt token feature corresponding to the output mask, MLP refers to a small MLP network that includes an MLP layer with LayerNorm and GELU activation, as well as an output MLP layer; s denotes a learnable scale and σ denotes the softmax function.We utilize the predicted values of α = [α_1, α_2] ∈ℝ^1 × 2 to adaptively route SAM between DSP and original SAM's attention mechanism. Consequently, the token-to-image cross-attention output O(t, x) can be formulated as:O(t, x) = CAttn(t, α_1 · x_p + Δ p^⋆ + α_2 · x_p) This soft dynamic routing strategy allows SAM to benefit from both DSP and its original zero-shot generality, contingent upon the quality of the prompt. §.§ Robust Training StrategyWe propose a simple and effective robust training strategy (RTS) to assist our model to learn how to correct SAM's attention when adversely affected by bad prompts.Robust Training Against Inaccurate Prompts. SAM's training, including HQ-SAM <cit.>, typically utilizes high-quality prompts given by precise bounding boxes or multiple points to accurately identify the target object. To address inaccurate prompts, our RTS incorporates prompts of varying qualities during training. These prompts include groundtruth boxes, box prompts with added noise (noise scale 0.4), and point prompts with varying numbers of points (1, 3, 10 positive points randomly chosen from the ground truth mask). Robust Training Against Ambiguous Prompts.In real segmentation scenarios, target objects often occur in cluttered environment, either occluding others or being occluded. Even given an accurate, tight bounding box,objects other than the target object will be enclosed.On the other hand,target objects are typicallyunambiguous even other objects are enclosed.For instance, in MS COCO,beds (occluded by quilt) are consistently regarded as target objects; the model must accurately segment the entire bed including accessories such as pillows and bedding.Thus, SAM's original ambiguity-aware solution, which predicts multiple masks for a single prompt, is generally suboptimal in well-defined realistic applications. To address such “ambiguous" prompts, our RTS incorporates synthetic occlusion images to make SAM conducive to accurately segment target objects.The occlusion images are synthesized by randomly introducing other objects to simulate “occluder" and “occludee" relationship. Our RTS is general and applicable to various SAM variants to improve their segmentation stability. Notably, our Stable-SAM with DSP and DRP experience the most substantial improvements from the application of RTS. Deformable attention modifies it by introducing learnable offsets Δ p to the sampling spatial locations p of x_p:Deformable Attention(x_p) =softmax(Q(x_p) · K(x_p+Δ p)^T) · V(x_p+Δ p),where the learnable offsets Δ p ∈ℛ^H × W × 2 are predicted by a small convolutional offset network θ that takes the query feautre map x_p as input:Δ p = θ (Q(x_p)).The deformable offsets are shared by all queries to shift the keys and values sampling positions from the uniform grid reference points p to the new sampling positions p+Δ p. The convolutional offset network and self-attention module weights are updated simultaneously in a data-dependent way. §.§ Preliminaries We first revisit the deformable attention mechanism in recent vision transformers. 
Deformable attention <cit.> is a mechanism that allows the model to focus on a subset of key sampling points rather than the entire feature space. It naturally suit the SAM's attention shift problem caused by low-quality prompts. For a given feature map x ∈ℛ^H × W × C, the standard self-attention mechanism computes the attention weights across the features x of all spatial locations:Attn(x) =σ(Q(x) · K(x)^T) · V(x),where σ is the softmax function, Q, K, V are the query, key, and value embedding projection functions, respectively.In deformable attention <ref>, a uniform grid of points r ∈ℛ^H_G × W_G × 2 are first generated as the references, where the grid size is downsampled from the input feature map size by a factor s, H_G = H /s, W_G = W /s. Then, a convolutional offset network θ_offset predicts the offset Δ r ∈ℛ^H_G × W_G × 2 for each reference point by taking as input the image feature x_r ∈ℛ^H_G × W_G × C resampled on the reference points.Δ r = θ_offset(Q(x_r)). Thus, the deformable attention is defined as:DeformAttn(x) = σ(Q(x) · K(x_r+Δ r)^T) · V(x_r+Δ r),where x_r+Δ r∈ℛ^H_G × W_G × C is the resampled image feature by shifting the keys and values sampling positions from the uniform grid reference points r to the new sampling positions r+Δ r. The convolutional offset network and self-attention module weights are updated simultaneously in a data-dependent way.Directly applying deformable attention to SAM is usually suboptimal because of the following two main issues. First, modifying SAM's original network or weights will compromise SAM's integrity, , replacing SAM's standard attention with the deformable attention and retraining it. Second, the deformable offset network is typically a convolutional network with limited receptive field, which is usually undesirable in promptable segmentation, because the prompts may be potentially cross a large image region. §.§ Deformable Sampling Plugin To solve the attention drift problem and meanwhile maintain SAM's integrity, we propose a novel deformable sampling plugin (DSP) module to equip SAM's original token-to-image cross-attention module.Specifically, given the prompt token feature t ∈ℛ^T× C and image feature x_p ∈ℛ^H × W × C, the token-to-image cross-attention is:CAttn(t) = σ(Q(t) · K(x_p)^T) · V(x_p),where p ∈ℛ^H × W × 2 is the keys and values spatial sampling locations on the image feature.Our deformable sampling plugin adaptively adjust only the image feature sampling locations, while keeping the original SAM model unchanged. Specifically, we employ a offset network θ_offset to predict the feature sampling offset Δ p ∈ℛ^H × W × 2 which is similar to the one in deformable attention:Δ p = θ_offset(K(x_p)).Then we resample the image features x_p+Δ p∈ℛ^H × W × C for keys and values at the new sampling positions p+Δ p. Our DSP updates the token-to-image cross-attention into:DCAttn(t) = σ(Q(t) · K(x_p+Δ p)^T) · V(x_p+Δ p),where the numerical range of both p and x_p+Δ p lies in {(0,0),...,(H-1,W-1)}, which is then normalized to {(-1,-1),...,(1, 1)}. As p+Δ p is fractional, we apply bilinear interpolation to compute x_p+Δ p as in Deformable DETR <cit.>. We also leverage the relative position bias to facilitate the learning of deformable sampling plugin, as in Swin Transformer <cit.> and DAT <cit.>. 
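To make the plugin structure concrete, below is a minimal sketch of how deformable sampling could wrap a frozen token-to-image cross-attention: only the offset head introduces trainable parameters, while the attention projections stay untouched. The module and tensor names are illustrative assumptions rather than SAM's actual interfaces, and layer normalization and the feature-amplitude modulation are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableSamplingPlugin(nn.Module):
    """Resample image features before a frozen cross-attention layer (sketch)."""

    def __init__(self, dim: int, max_offset: float = 0.1):
        super().__init__()
        self.max_offset = max_offset  # plays the role of the scale factor s_p
        # Small convolutional offset head: 1x1 conv -> 5x5 depthwise conv -> 1x1 conv.
        self.offset_net = nn.Sequential(
            nn.Conv2d(dim, dim, 1),
            nn.Conv2d(dim, dim, 5, padding=2, groups=dim),
            nn.GELU(),
            nn.Conv2d(dim, 2, 1),
        )

    def forward(self, x: torch.Tensor, grid: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) image embedding; grid: (N, H, W, 2) base sampling
        # locations p normalized to [-1, 1].  tanh keeps the offsets bounded.
        delta = torch.tanh(self.offset_net(x)) * self.max_offset     # (N, 2, H, W)
        return F.grid_sample(x, grid + delta.permute(0, 2, 3, 1),
                             align_corners=True)                     # features at p + Δp

# Conceptual use inside the (frozen) mask decoder: the resampled features
# replace the keys/values of the existing cross-attention, whose weights,
# along with the rest of SAM, remain unchanged.
```

Since the attention projections and all other SAM weights are untouched, disabling the plugin recovers the original cross-attention exactly, which is what the dynamic routing described below exploits.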
Refer to Deformable DETR <cit.> and DAT <cit.> for their implementation details.Note that our DSP only trains the deformable offset network to predict new feature sampling locations x_p+Δ p and feeds the sampled features x_p+Δ p to SAM's cross-attention module, maintaining the original SAM model unchanged. §.§ Non-Local Offset Network In most deformable attention works <cit.>, the offset network typically consists of a few convolutional layers for offset generation. Its local receptive field is sufficient for shifting the sampling locations to neighboring positions, enabling the local deformable transformation.However, in promptable segmentation, the prompts are potentially cross a large image region, and thus require a global offset network to provide sufficient deformable transformation. We propose a novel non-local offset network to solve this issue.The non-local attention <cit.> can capture the global relation among image features of all spatial locations. However, the standard non-local attention yields unreasonably high memory and computation cost, whose overhead is quadratic the image feature size N=H × W.Linear non-local attention <cit.>, on the other hand, offers a much more efficient alternative with linear complexity by approximating the original Softmax operation through carefully designed mapping functions We choose the recent Focused Linear Attention <cit.> as our non-local module, due to its high efficiency and expressiveness. The Focused Linear Attention can be formulated as:LinAttn(x) =ϕ_e (Q) ·ϕ_e (K)^T · V + DWC(V),where DWC is the depthwise convolution, and Q,K,V are the abbreviations of Q(x),K(x),V(x). The mapping function ϕ_e is formulated as:ϕ_e(x) = f_e(ReLU(x)), f_e(x) = ||x||/||x^**e||x^**e,where x^**e represents element-wise power e of x. The computation complexity can be transformed from 𝒪(N^2C) (of the softmax function in standard non-local attention) to 𝒪(NC^2). In SAM, the feature size N=64 × 64 is much larger than the channel dimension 256, and thus the overall computation is significantly decreased. Powered by the focused linear attention, our non-local offset network can be formulated as:Θ_offset = θ_offset(LinAttn(x)),where θ_offset is the convolutional offset network consists of a 3 × 3 depthwise convolution followed by GELU activation and a 1 × 1 convolution.Note that our non-local offset network is flexible and the focused linear attention module can be easily replaced by other powerful linear non-local attention modules.§.§ Dynamic Routing Plugin Our DSP can effectively handle suboptimal prompts by redirecting SAM's attention to informative regions more likely to contain target objects. However, high-quality prompts typically attend the model's attention to target regions correctly. Therefore, the DSP's activation should be properly controlled to avoid potential radical attention shift. We thus propose a novel dynamic routing plugin (DRP) to control the DSP activation degree, conditioned on the input prompt quality.The DRP can be formulated as:α = σ (MLP(t_o)) · s,where t_o ∈ℛ^1× C is the prompt token feature corresponding to the output mask, MLP represents a small MLP network consisting of a MLP layer with a LayerNorm and GELU activation, and a output MLP layer, s denotes a learnable scale and σ denotes the softmax function.We leverage the predicted α = [α_1, α_2] ∈ℛ^1 × 2 to dynamically routing SAM between the DSP and original attention mode. 
Consequently, the token-to-image cross-attention output O(t) can be formulated as:O(t) = α_1 ·DCAttn(t) + α_2 ·CAttn(t). This soft dynamic routing strategy enables SAM to benefit from both DSP and its original zero-shot generality, depending on the prompt quality. §.§ Robust Training Strategy We propsoe a simple and effective robust training strategy (RTS) to facilitate our model to learn correcting SAM's attention influenced by suboptimal prompts.The original SAM training (including HQ-SAM <cit.>) is usually conducted with high-quality prompts, such as accurate box or several points to correctly indicate the target object. In addition, our RTS further leverages prompts of diverse types and qualities to train the model, including box prompts with added noise (noise scale 0.4) and point prompts with varying numbers of points (1, 3, 5, 10 positive points randomly selected from the ground truth mask)Our RTS is general and can be applied to different SAM variants to improve their segmentation stability. Among them, our Stable-SAM with Non-local DSP (DSP with non-local offset network) and DRP benefits the most from RTS. § EXPERIMENTS Datasets. For fair comparison we keep our training and testing datasets same as HQ-SAM <cit.>. Specifically, we train all models on HQSeg-44K dataset, and evaluate their performance on four fine-grained segmentation datasets, including DIS <cit.> (validation set), ThinObject-5K <cit.> (test set), COIFT <cit.> and HR-SOD <cit.>. Furthermore, we validate the model's zero-shot generalization ability on two challenging segmentation benchmarks, includingCOCO <cit.>, and SGinW <cit.>, where SGinW contains 25 zero-shot in-the-wild segmentation datasets.More experimental results are included in the supplementary material.Input Prompts. We evaluate model's accuracy and stability with prompts of differing type and quality, as described in Sec. <ref>. For MS COCO and SGinW, we do not use the boxes generated by SOTA detectors <cit.> as the box prompt. This is because their predicted boxes are typically of high quality and cannot effectively evaluate the model's segmentation stability in the presence of inaccurate boxes. Instead, we introduce random scale noises into the ground truth boxes to generate noisy boxes as the prompts. Specifically, to simulate inaccurate boxes while still having some overlap with the target object, we select noisy boxes that partially overlap with the ground truth boxes with IoU ranges of 0.5–0.6 and 0.6–0.7. We also evaluate our method using the box prompts generated by SOTA detectors.Evaluation Metrics.We select suitable evaluation metrics depending on testing datasets, , 1) mask mIoU, boundary mBIoU andmSF for DIS, ThinObject-5K, COIFT, and HR-SOD;2) mask mAP and mAP_50 for COCO and SGinW.§.§ Comparison with SAM Variants We compare our method with SAM and three powerful SAM variants. HQ-SAM is a recent powerful SAM variant for producing high-quality masks. We also try two simple SAM variants by finetuning its mask decoder and the prompt token, , DT-SAM and PT-SAM, respectively. All our Stable-SAM models are trained by just one epoch for fast adaptation unless otherwise stated. Stability Comparison on Four HQ Datasets. Table <ref> shows the segmentation accuracy and stability on four HQ datasets, when models are fed with suboptimal prompts. 
Notably, the use of noisy box prompts significantly reduces SAM's performance, as evidenced by the drop from 79.5/71.1 (as shown in Table <ref>) to 48.8/42.1 mIoU/mBIoU, accompanied by a low stability score of 39.5 mSF.This is probably because SAM was trained with solely high-quality prompts, thus seriously suffers from the low-quality prompts during inference. The other three SAM variants, namely HQ-SAM, DT-SAM, and PT-SAM, demonstrate relatively better stability in dealing with noisy boxes, which can be attributed to their long-term training on the HQSeg-44K dataset.Note our Stable-SAM can effectively address inaccurate box prompts, by enabling models to shifts attention to target objects. Given a single-point prompt, both SAM andits variants exhibit the lowest accuracy and stability. This indicates they are adversely affected by the ambiguity problem arising from the use of a single-point prompt. Although, in most practical applications, users prefer minimal interaction with clear and consistent segmentation targets. Our method maintainsmuch better performance and stability when handlingambiguous one-point prompt, owing to our deformable feature sampling and robust training strategy against ambiguity. When point prompts increase to 3, all methods performs much better, while other methods still under-perform compared with ours.Generalization Comparison on MS COCO and SGinW. Table <ref> presents the segmentation accuracy and stability when the models are generalized to MS COCO and SGinW with noisy box prompts. Note that the DT-SAM performs the worst, probably due to overfitting on the training set, which compromises its ability to generalize to new datasets. Our method consistently surpasses all competitors, particularly in handling inaccurate boxes (N-Box 0.5–0.6), where all noisy boxes have an IoU range of 0.5–0.6 with the ground truth boxes. Note that our method has a minimal number of extra learnable parameters (0.08M) and can be quickly adapted to new datasets by just one training epoch.Comparison Based on Detector Predicted Box Prompts. Existing zero-shot segmentation methods typically choose powerful object detection model to generate high-quality boxes as the input prompts, such as FocalNet-L-DINO <cit.>. We also evaluate our method in such setting. Table <ref> presents that our model achieves comparable performance as SAM and PT-SAM when using the FocalNet-L-DINO generated high-quality boxes as prompts. When using the R50-H-Deformable-DETR <cit.> as the box prompt generator, our method achieves comparable performance as HQ-SAM. Note that training and implementing SOTA detectors typically require large computational resources and the cross-domain generalization is still very challenging. In practice, users tend to leverage interactive tools to annotate objects for their personalized datasets. Our method substantially surpasses other competitors in such scenario, when the box can roughly indicate the target object. §.§ Analysis on Stable-SAMDeformable Sampling Plugin.Table <ref> shows DSP can be trained with high-quality prompts (without RTS) to improve the performance and stability on low-quality prompts, although the model still exhibits some instability. When equipped with RTS, DSP can effectively learn to shift SAM's attention to target objects when subjecting to inaccurate prompts. To delve deeper into the deformable sampling mechanism, we visualize the sampled feature points and their corresponding attention weights. 
Figure <ref> illustrates how our DSP effectively shifts model's attention to the target object, resulting in increased attention weights. Consequently, the cross-attention module aggregates more target object features into the prompt tokens, thereby improving the segmentation quality of the target objects. Dynamic Routing Plugin.We leverage DSP to dynamically route the model between the regular and deformable feature sampling modes, conditioned on the input prompt quality. We find that DRP tends to route more DSP features when dealing with worse prompts. The DSP routing weight α_1 is increased from 0.469 to 0.614 when we change the point prompt from three points to one point. It indicates that lower-quality prompts rely more on DSP features to shift attention to the desirable regions. Table <ref> shows that DRP can further improve model's performance, especially when handling the challenging one-point prompt scenario.Robust Training Strategy.Robust training is critical for improving model's segmentation stability, but is usually overlooked in previous works. RTS can guide the model, including our DSP, to accurately segment target objects even when provided with misleading low-quality prompts. Table <ref> shows that RTS substantially improves the segmentation stability of all the methods,albeit with a slight compromise in performance when dealing with high-quality prompts.Note that our Stable-SAM benefits the most from the application of RTS, which can be attributed to our carefully designed deformable sampling plugin design. Model Scalability.Our method solely calibrates SAM's mask attention by adjusting model's feature sampling locations and amplitudes using a minimal number of learnable parameters (0.08 M), while keeping the model architecture and parameters intact. This plugin design grants our method with excellent model scalability. Table <ref> shows that our model can be rapidly optimized by just one training epoch, achieving comparable performance and stability. By scaling the training procedure to 12 epochs, our method achieves the best performance across all prompting settings. Additionally, our method can cooperate with other SAM variants. For instance, when combined with HQ-SAM, the performance and stability are further improved. Low-Shot Generalization. Customized datasets with mask annotation are often limited, typically consisting of only hundreds of images. For a fair comparison, all methods in Table <ref> are trained with RTS by 12 training epochs. Table <ref> shows that HQ-SAM performs worst when trained with a limited number of images (220 or 440 images), which can be attributed to its potential overfitting problem caused by the relatively large learnable model parameters (5.1 M). In contrast, PT-SAM's better performance with minimal learnable parameters (0.13 M) further validates this hypothesis.Our plugin design, coupled with minimal learnable parameters, enables effective low-shot generalization, and thus achieves the best performance in such scenario.§ CONCLUSIONIn this paper, we present the first comprehensive analysis on SAM's segmentation stability across a wide range of prompt qualities. Our findings reveal that SAM's mask decoder tends to activate image features that are biased to the background or specific object parts. 
We propose the novel Stable-SAM to address this issue by calibrating solely SAM's mask attention, , adjusting the sampling locations and amplitudes of image feature using learnable deformable offsets, while keeping the original SAM model unchanged. The deformable sampling plugin (DSP) allows SAM to adaptively shift attention to the prompted target regions in a data-driven manner. The dynamic routing plugin (DRP) toggles SAM between deformable and regular grid sampling modes depending on the quality of the input prompts. Our robust training strategy (RTS) facilitates Stable-SAM to effectively adapt to prompts of varying qualities. Extensive experiments on multiple datasets validate the effectiveness and advantages of our Stable-SAM.§ APPENDIX § MORE EXPERIMENTAL RESULTS§.§ MESS The recently released Multi-domain Evaluation of Semantic Segmentation (MESS) <cit.> is a large-scale benchmark for holistic analysis of zero-shot segmentation performance. MESS consists of 22 downstream tasks, a total of 448 classes, and 25079 images, covering a wide range of domain-specific datasets in the fields of earth monitoring, medical sciences, engineering, agriculture and biology and other general domains. We evaluate SAM <cit.>, HQ-SAM <cit.> and our Stable-SAM on MESS benchmark using the official MESS evaluation code, and report the mean of class-wise intersection over union (mIoU).Following MESS's model settings, our Stable-SAM selects the first mask of the predicted multiple masks as the output. For a fair comparison, our Stable-SAM follows HQ-SAM to fuse the SAM's original prediction map into our predicted segmentation map. We provide four prompt types for evaluation. The oracle point refers to a single point sampled from the ground-truth mask using the point sampling approach RITM <cit.>. The random point refers to a single point randomly sampled from the ground-truth mask of the target object. The oracle box refers to a single box tightly enclosing the ground-truth mask of the target object. The noisy box refers to a single box generated by adding noise (noise scale 0.4) to the oracle box. Table <ref> tabulates the zero-shot semantic segmentation performance comparison on MESS. Our Stable-SAM performs best when prompted with oracle point, random point and noisy box, and achieves comparable performance when provided with oracle box. Our competitive performance on the large-scale MESS benchmark further consolidates the powerful zero-shot generalization ability inherent in our Stable-SAM. Table <ref> shows the dataset and comparison details on 22 tasks of MESS benchmark. Our Stable-SAM performs best on 19 out of 22 datasets.§.§ Backbone VariantsTable <ref> tabulates the performance comparison on different backbone variants. Our Stable-SAM consistently performs better than other methods on all backbone variants.§ RELATION TO OTHER METHODS Deformable Attention. Our method is unique in its idea and design on solely adjusting the feature sampling locations and amplitudes by training the offset network, without involving the original model parameters. In contrast,conventional deformable attention methods <cit.> train both the offset network and original network parameters, which is undesirable when adapting powerful foundation models in deployment, especially in finetuning large foundation models. 
Figure <ref> shows the difference between our deformable sampling plugin and conventional deformable attention.We apply the conventional deformable attention in our Stable-SAM by finetuning the mask decoder during training. Table <ref> shows that the conventional deformable attention (Stable-SAM (finetuning decoder)) exhibits the worst generalization ability on MS COCO, even worse than the original SAM model. This further validates the necessity and better performance of our deformable sampling plugin paradigm, , adapting the foundation model by only adjusting the feature sampling locations and amplitudes, while fixing the original model features and parameters.Spatial Attention. The spatial attention <cit.> can adjust the image spatial feature weights, and thus can be regarded as a soft feature sampling method. We directly replace DSP with spatial attention in our Stable-SAM to investigate if spatial attention offers comparable effectiveness. Table <ref> shows that spatial attention performs much worse than our DSP, although it consistently improves the segmentation performance and stability on all datasets. This indicates that simply adjusting the feature weights is insufficient to adapt SAM for handling suboptimal prompts. § IMPLEMENTATION DETAILS During training, we only train DSP and DRP on HQSeg-44K dataset while fixing the model parameters of the pre-trained SAM model. We train Stable-SAM on 8 NVIDIA Tesla V100 GPUs with a total batch size of 32, using Adam optimizer with zero weight decay and 0.001 learning rate. The training images are augmented using large-scale jittering <cit.>. The input prompts are randomly sampled from mixed prompt types, including ground truth bounding boxes, randomly sampled points (1, 3, 5, 10 positive points randomly selected from the ground truth mask), noisy boxes (generated by adding noise (noise scale 0.4) to the ground truth bounding boxes, where we ensure the generated noisy boxes have at least 0.5 overlap IoU with the ground truth boxes), and coarse masks (generated by adding Gaussian noise in the boundary regions of the ground truth masks). The model is optimized using cross entropy loss and dice loss <cit.>. We follow the same inference pipeline of the original SAM. The mask decoder first predicts a small mask in 256 × 256 spatial resolution for each prompt, which is then up-sampled to the original resolution 1024 × 1024 as the output mask. § STABILITY VISUALIZATION Figure 6-16 show extensive visualization comparisons between SAM and Stable-SAM, under box, 3-points and 1-point prompts of diverse qualities. We also visualize the image activation map for the token-to-image cross-attention in SAM’s second mask decoder layer to better understand its response to low-quality prompts. The important features are highlighted by the orange circles, with larger radius indicating higher attention score. SAM yields unsatisfactory segmentation results when provided with low-quality prompts, and even a minor prompt modification leads to unstable segmentation output. In contrast, our Stable-SAM produces consistent and accurate mask predictions even under prompts of diverse qualities, by shifting more feature sampling attention to the target object.ieeenat_fullname § TYPO CORRECTIONSIn 484L of the main paper, the “Deformable Routing Plugin” should be “Dynamic Routing Plugin”.In 490L of the main paper, the “from one point to three points” should be “from three points to one point”. | http://arxiv.org/abs/2311.15776v2 | {
"authors": [
"Qi Fan",
"Xin Tao",
"Lei Ke",
"Mingqiao Ye",
"Yuan Zhang",
"Pengfei Wan",
"Zhongyuan Wang",
"Yu-Wing Tai",
"Chi-Keung Tang"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231127125142",
"title": "Stable Segment Anything Model"
} |
Analysis of the subsolar-mass black hole candidate SSM200308from the second part of the third observing run of Advanced LIGO-Virgo Ester Ruiz Morales January 14, 2024 ====================================================================================================================================We present a novel deep learning-based approach to the 3D reconstruction of clothed humans using weak supervision via 2D normal maps. Given a single RGB image or multiview images, our network infers a signed distance function (SDF) discretized on a tetrahedral mesh surrounding the body in a rest pose. Subsequently, inferred pose and camera parameters are used to generate a normal map from the SDF. A key aspect of our approach is the use of Marching Tetrahedra to (uniquely) compute a triangulated surface from the SDF on the tetrahedral mesh, facilitating straightforward differentiation (and thus backpropagation). Thus, given only ground truth normal maps (with no volumetric information ground truth information), we can train the network to produce SDF values from corresponding RGB images. Optionally, an additional multiview loss leads to improved results. We demonstrate the efficacy of our approach for both network inference and 3D reconstruction.§ INTRODUCTIONRecent work on 3D human digitization has largely focused on the fully-supervised setting, where deep neural networks (DNNs) are trained to explicitly fit so-called ground truth 3D geometry <cit.>. In such approaches, high-end capture setups (with 4D scanners or a large number of cameras) are typically used to obtain high-quality, multiview training data <cit.>. Inferring 3D geometry and appearance from 2D information is a highly underconstrained problem; thus, it can be challenging for models trained on such high-quality data to generalize to the lower quality images typical of consumer-grade devices (such as phones and webcams). However, the ability to do so is crucial to the democratization of digital humans required for many applications in AR/VR, robotics, healthcare, etc. Given a single (monocular) RGB image, we estimate the clothed body with a DNN that infers signed distance function (SDF) values for each vertex of a tetrahedral mesh surrounding the body (similar to <cit.>). Implicit surfaces have become a common choice for 3D reconstruction from images (see e.g. <cit.>), especially as neural radiance fields (NeRFs) <cit.> have gained popularity. Importantly, the tetrahedral mesh data structure enables our use of Marching Tetrahedra to uniquely compute a triangulated surface. The resulting algorithm is straightforward to differentiate, alleviating concerns associated with the nondeterministic nature of Marching Cubes (see e.g. <cit.>) or the ray tracing of an implicit surface (see e.g. <cit.>).Our approach is a follow-up to <cit.>, which also uses an explicit representation of an SDF on a tetrahedral mesh; however, we add a second explicit representation of the surface via a triangle mesh. We thus have access to two explicit versions of the neural SDF, and energies can be conveniently formulated for either the volume or the surface or both. While similar in spirit to <cit.>, our method does not require the construction of a velocity in order to capture these energies with an evolving level set function; thus, we can control mesh based properties (e.g.area and dihedral angles) that would be lost when converting to a velocity field. 
On the other hand, the approach in <cit.> could be used to alleviate locking concerns in cases where discretizations on the triangle mesh and the tetrahedral mesh do not interact as expected due to differences in the degrees of freedom.The main goal of our work is to provide weak supervision during DNN training via 2D normal maps <cit.>. By formulating the optimization problem with respect to image-based normals, we aim to better represent fundamental correlations between 2D images and 3D geometry in order to facilitate subsequent democratization to consumer-grade devices. Given (inferenced) pose and camera parameters, a normal map can be computed from the skinned triangulated surface. See Figure <ref>. While we initialize the SDF network parameters with the data from <cit.>, the resulting model will tend to overfit (since it is trained on a limited quantity of 3D data) and thus generalize poorly to in-the-wild 2D images.Thus, after this initialization, we train the model in a weakly supervised manner using only ground truth normal maps.To summarize, our contributions: * We illustrate that our network can be used to reconstruct 3D geometry from sparse multiview RGB data obtained with consumer-grade cameras (and no ground truth 3D labels).* We compute the correct gradients to differentiate through Marching Tetrahedra via a Lagrangian formulation, which enables differentiable mesh generation and thus end-to-end training with both volumetric and surface-based energies.* We present a differentiable image rasterizer that: (1) allows us to use normal maps for weak supervision and (2) can efficiently compute normal maps from triangle meshes with over 300k triangles during network training.* We formulate regularization energies that coerce inferred implicit surfaces to: (1) resemble true SDFs and (2) be locally smooth.* We formulate silhouette energies defined to enforce 3D boundary matching. § RELATED WORK §.§ Human Shape EstimationVarious works use parametric body models such as SMPL <cit.> to estimate human body pose and shape without clothing <cit.>. While existing methods are able to generalize to in-the-wild images, the inferred body mesh is often quite different from the underlying body shape and does not capture clothing.In order to reconstruct humans wearing clothing from a single image <cit.>, template-based approaches either rely on parametric models <cit.> or use person-specific meshes <cit.>.For instance, GTA <cit.> projects SMPL onto a learned 3D triplane representation, and <cit.> constructs local implicit fields centered around locations on the SMPL-X model <cit.>. Limitations of template-based approaches to clothed human reconstruction include the output being constrained by the topology of the template as well as a reliance on accurate pose estimation. Template-free methods typically leverage 2D signals or 3D geometric representations to recover geometry. In <cit.>, reconstruction is achieved by generating front and back depth images that are later combined into a 3D surface. <cit.> builds on this idea and proposes a coarse-to-fine reconstruction method leveraging both predicted depth and normal images. Inspired by shape-from-silhouette techniques, SiCloPe <cit.> recovers geometry by predicting silhouette images and 3D joint positions. <cit.> predict volumetric occupancy on a uniform voxel grid directly, while <cit.> proposed learning a Fourier subspace of 3D occupancy; in both cases, Marching Cubes can be used to generate a triangle mesh. 
PIFu <cit.> and PIFuHD <cit.> infer 3D shape with neural implicit functions sampled onto a grid. Follow-up work <cit.> leverages predicted normal maps to improve depth inference. PAMIR <cit.> extends PIFuHD to increase generalizability by regularizing the implicit function using semantic features from a parametric model. ICON <cit.> and ECON <cit.> leverage inferred front and back normal maps as an intermediate encoding of 3D geometry, but these methods still rely on ground truth 3D scan data during training.Instead of a single input image, other works aim to construct animatable avatars from a sparse set of cameras <cit.>, video <cit.>, depth <cit.>, point clouds <cit.>, 4D capture <cit.>, or scans <cit.>. Most similar to our work, SelfRecon <cit.> uses normal maps inferred from PIFuHD <cit.> to supervise network training, and SeSDF <cit.> can either take as input a single image or uncalibrated multiview imagesMoving towards improved generalization, weakly supervised methods have also been explored for human pose estimation <cit.>, human body shape <cit.>, and garment template reconstruction (on the SMPL body) <cit.>.§.§ Differentiable Marching Cubes / TetrahedraA number of recent works have proposed methods for backpropagating through Marching Cubes <cit.> / Marching Tetrahedra <cit.> by either leveraging properties of a point-to-SDF network <cit.> (e.g. as in DeepSDF <cit.>) or by training a DNN for mesh generation <cit.>. MeshSDF <cit.> builds on DeepSDF <cit.>, where a network f_η is trained to infer an SDF value at a location x conditioned on a latent shape code η. In the forward step, f_η(x) is computed for every vertex of a fixed voxel grid so that Marching Cubes can be used to generate a triangle mesh. The authors postulate that a small increase in the SDF values would move a triangle vertex in the normal direction, which is only true idealistically when there are no shocks/rarefactions in the SDF isocontours (see e.g. <cit.>); moreover, Marching Cubes does not move vertices in such a manner, even when it produces a consistent set of vertices under perturbation. Their assumptions also necessitate f_η being a true SDF, even though it is only an approximation.Follow-up work in <cit.> presents a similar formulation using Marching Tetrahedra. See <cit.> for more discussion on the problematic assumptions in <cit.>. The authors observe that given infinitesimally small changes in the SDF, the gradient of each vertex v w.r.t. f_η evaluate at thev is∂ v/∂ f_η (v) = -n(v) = -∇ f_ηwhere n(v) denotes surface normals. To backpropagate through Marching Cubes, the authors simply exploit the fact that f_η is trained to emulate a true SDF ϕ, so gradients are computed with the assumption that f_η = ϕ. Specifically, during the backward pass, the output mesh vertices v (i.e. MarchingCubes(f_η(X)) are passed back through f_η, and Pytorch differentiation is used to compute ∇ f_η.Given the Marching Cubes vertices v, the gradient of the loss w.r.t. η is∂ℒ/∂η = ∑_v ∂ℒ/∂ v∂ v/∂ f_η∂ f_η/∂ηThus, their problem formulation necessitates a per-vertex approach to computing loss gradients and assumes that f_η is the true SDF parameterizing the surface during backpropagation.§ WEAK SUPERVISION VIA NORMAL MAPSIn the context of human digitization, it can be challenging to train generalizable ML-based models with full 3D supervision (see e.g. <cit.>). 
Robust generalization typically necessitates access to a large number of ground truth training examples; however, publicly available datasets of 3D scan data for clothed human bodies remain scarce (in part, because it is both expensive and complicated to obtain). Even if such data were more readily available, existing works typically require unskinning the data to a reference pose (see e.g. <cit.>), which can lead to various complications: tangling, self-intersection, inversion, etc.To alleviate dependency on labeled 3D data, we propose a weakly supervised approach using 2D normal maps as ground truth labels (only) during training. A 2D normal map defines an RGB value for each pixel, corresponding to the (camera or world space) unit normal that best represents the geometry rasterized to that pixel. A normal map can be approximated by casting a ray through the pixel center and subsequently interpolating normals to the ray-geometry intersection point, although a better estimate would be obtained by supersampling (similar to the way pixel color is computed). Importantly, difficult to handle occluded regions (such as the armpit) may be ignored (in contrast to full 3D supervision). Since there are a number of ways to obtain ground truth normal maps (besides utilizing 3D scan data), this approach vastly increasing the amount of data available for training. For example, one can utilize RGBD images <cit.>, stereo pairs <cit.>, and/or neural networks (including NeRFs <cit.>) trained to produce normal maps from RGB images <cit.>. Increasing the amount of training data (in this way) facilitates generalization to a much more representative and diverse set of people in clothing (as compared to using only a limited number of 3D scans).Given inferred SDF values ϕ̂_k on the (fixed) tetrahedral mesh vertices u_k, Marching Tetrahedra is used to uniquely generate a triangle mesh with vertices v_i(ϕ̂_k). Given an inferred pose θ̂ and camera parameters ĉ, a normal map N(v_i(ϕ̂_k), θ̂, ĉ) can be generated. The objective function to be minimized is thenℒ(ϕ̂_k, θ̂, ĉ) = ‖N(v_i(ϕ̂_k), θ̂, ĉ) - N_GT‖where N_GT is a ground truth normal map.§ TETRAHEDRAL MESH FRAMEWORKThe 3D space surrounding and including the human body is parameterized via a tetrahedral mesh. First, a Cartesian grid based level set representation is generated for the SMPL template body <cit.> in the star pose (similar to <cit.>). Then, a constant value is subtracted from the SDF values in order to inflate the zero level set so that its interior can contain a wide range of clothed body shapes. Subsequently, a tetrahedral mesh is generated for this interior region using red/green refinement <cit.>. See Figure <ref>.Given an input image, a DNN is trained to infer an implicit surface approximation to the clothed body, represented by SDF values ϕ̂_k on the tetrahedral mesh vertices u_k (similar to <cit.>). The DNN is composed of a CNN-based stacked hourglass encoder, followed by graph convolutional layers that progressively increase the resolution of the sampled SDF. This encodes an input image into a feature vector, which is then decoded into SDF values on the vertices of the tetrahedral mesh. Due to the large number of tetrahedral mesh vertices, the graph convolutional layers are only partially connected in order to significantly reduce memory usage. §.§ SkinningThe tetrahedral mesh can be deformed via linear blend skinning (LBS) using per-tetrahedron-vertex, per-joint skinning weights w_kj. 
The skinning weights are assigned in the star pose by first finding the point on the SMPL template body mesh closest to each tetrahedral mesh vertex, and then barycentrically interpolating skinning weights to that point from the vertices of the SMPL template body mesh triangle that contains it.Given pose parameters θ with joint transformations T_j(θ), the skinned position of each tetrahedral mesh vertex is u_k(θ) = ∑_j w_kjT_j(θ)u_k^j where u_k^j is the location of u_k in the untransformed reference space of joint j. VIBE <cit.> is used to estimate the SMPL pose parameters θ̂∈ℝ^72 for any given input image. During training, all layers of a pretrained VIBE model are frozen except for the final Gated Recurrent Unit (GRU) layer. § MARCHING TETRAHEDRA Given SDF values ϕ_k defined on tetrahedral mesh vertices u_k, Marching Tetrahedra can be implemented to compute a unique (non-ambiguous) triangle mesh with vertices v_i, making differentiation more straightforward as compared to the many cases (and non-uniqueness) that need to be considered for Marching Cubes. In order to avoid triangle vertices v_i coincident with a tetrahedron vertex u_k, all the ϕ_k values are preprocessed (infinitesimally) changing those with |ϕ_k| < ϵ to ϕ_k = ϵsign(ϕ_k), e.g. with ϵ=10^-8.For each tetrahedron mesh edge e_i={u_k_1, u_k_2} that includes a sign change, i.e. sign(ϕ_k_1) ≠sign(ϕ_k_2), a triangle vertexv_i = -ϕ_k_2/ϕ_k_1 - ϕ_k_2u_k_1 + ϕ_k_1/ϕ_k_1 - ϕ_k_2u_k_2is defined using linear interpolation. Afterwards, triangles are constructed in a tetrahedron-by-tetrahedron manner by considering the two cases that can occur: either three edges of the tetrahedron contain triangle vertices and one triangle is constructed, or four edges contain triangle vertices and a quadrilateral is constructed and split into two triangles. Note that this typically arbitrary splitting of the quadrilateral can be made consistent for the sake of differentiation. Since the tetrahedral mesh does not change topology, the edges can be numbered in a fixed manner; then, one can consistently split a quadrilateral by connecting the triangle vertex on the lowest numbered edge to the triangle vertex on the highest numbered edge (or via a similar alternative strategy). The resulting triangle mesh is guaranteed to be watertight, and the vertices in each triangle are reordered (when necessary) to ensure that all face normals point outwards. §.§ Ray-Tracing the Implicit Surface DirectlyAs an alternative to Marching Tetrahedra, consider casting a ray to find an intersection point with the implicit surface and subsequently using the normal vector defined (directly) by the implicit surface at that intersection point. A number of existing works consider such approaches in various ways, see e.g. <cit.>. Perturbations of the intersection point depend on perturbations of the ϕ values on the vertices of the tetrahedron that the intersection point lies within. If a change in ϕ values causes the intersection point to no longer be contained inside the tetrahedron, one would need to discontinuously jump to some other tetrahedron (which could be quite far away, if it even exists). A potential remedy for this would be to define a virtual implicit surface that extends out of the tetrahedron in a way that provides some sort of continuity (especially along silhouette boundaries).Comparatively, our Marching Tetrahedra approach allows us to presume (for example) that the point of intersection remains fixed on the face of the triangle even as the triangle moves. 
Since the implicit surface has no explicit parameterization, one is unable to similarly hold the intersection point fixed. The implicit surface utilizes an Eulerian point of view where the rays (which represent the discretization) are held fixed while the implicit surface moves (as ϕ values change), in contrast to our Lagrangian discretization where the rays are allowed to move/bend in order to follow fixed intersection points during differentiation. A similar approach for an implicit surface would hold the intersection point inside the tetrahedron fixed even as ϕ changes. Although such an approach holds potential due to the fact that implicit surfaces are amenable to computing derivatives off of the surface itself, the merging/pinching of isocontours created by convexity/concavity would likely lead to various difficulties. Furthermore, other issues would need to be addressed as well, e.g. the gradients (and thus normals) are only piecewise constant (and thus discontinuous) in the piecewise linear tetrahedral mesh basis.§.§ Computing GradientsAccording to Equation <ref>,∂ v_i/∂(ϕ_k_1, ϕ_k_2)= [ ϕ_k_2(u_k_1- u_k_2)(ϕ_k_1-ϕ_k_2)^2ϕ_k_1(u_k_2 -u_k_1)(ϕ_k_1-ϕ_k_2)^2 ]where dividing by (ϕ_k_1 - ϕ_k_2)^2 can be problematic. The preprocess at the beginning of Section <ref> guarantees that |ϕ_k_1 - ϕ_k_2| ≥ 2 ϵ, which means that the worst possible scenario for Equation <ref> (when |ϕ_k_1|=|ϕ_k_2|=ϵ) still results in 𝒪(1) coefficients for u_k_1 and u_k_2; however, the ϕ-based coefficients in Equation <ref> would be 𝒪(1 / ϵ). Thus, while ϵ=10^-8 is sufficient for Equation <ref>, a larger value of ϵ might be prudent when considering Equation <ref>. To backpropagate through Marching Tetrahedra, we need to compute ∂ v_i /∂ϕ, i.e. the partial derivative of the star-pose triangle mesh vertices w.r.t. the inferred level set ϕ. For each triangle vertex, the gradient of v_i = (x_i, y_i, z_i) w.r.t. ϕ_k_1,ϕ_k_2 is∂ v_i/∂ϕ_k_1∂ϕ_k_2 =[ ∂ x_i / ∂ϕ_k_1 ∂ x_i / ∂ϕ_k_2; ∂ y_i / ∂ϕ_k_1 ∂ y_i / ∂ϕ_k_2; ∂ z_i / ∂ϕ_k_1 ∂ z_i / ∂ϕ_k_2 ] Let λ_1,λ_2 be the coefficients of u_k_1 and u_k_2 in Equation <ref>, i.e. v_i = λ_1 u_k_1 + λ_2 u_k_2 whereλ_1 = -ϕ_k_2/ϕ_k_1 - ϕ_k_2λ_2 = ϕ_k_1/ϕ_k_1 - ϕ_k_2and λ_1 + λ_2 = 1. Now, the gradient of v_i w.r.t. λ_1,λ_2 is simply∂ v_i/∂λ_1 ∂λ_2 = [u_k_1 u_k_2]And letting λ = (λ_1, λ_2)∂λ/∂ϕ_k_1∂ϕ_k_2 =1/(ϕ_k_1-ϕ_k_2)^2[ ϕ_k_2-ϕ_k_1; - ϕ_k_2 ϕ_k_1; ]Thus, by the Chain rule we have∂ v_i/∂ϕ_k_1∂ϕ_k_2 = ∂ v_i/∂λ∂λ/∂ϕ_k_1∂ϕ_k_2= 1/(ϕ_k_1-ϕ_k_2)^2[ u_k_1 u_k_2 ][ ϕ_k_2-ϕ_k_1; - ϕ_k_2 ϕ_k_1; ]= 1/(ϕ_k_1-ϕ_k_2)^2[ ϕ_k_2(u_k_1 - u_k_2)ϕ_k_1(u_k_2 -u_k_1) ]§.§ SkinningThere are two options for the algorithm ordering between skinning and Marching Tetrahedra (the latter of which reverses the order in Figure <ref>). For skinning the triangle mesh, the skinned position of each triangle mesh vertex is v_i(θ,ϕ) = ∑_j w_ij(ϕ)T_j(θ)v_i^j(ϕ) where v_i^j is the location of v_i in the untransformed reference space of joint j. Unlike in Section <ref> where w_kj and u_k^j were fixed, w_ij and v_i^j both vary yielding three terms in the product rule. ∂ v_i^j/∂ϕ is computed according to Equation <ref>, noting that u_k_1 and u_k_2 are fixed. w_ij(ϕ) is defined similarly to Equation <ref>,w_ij = -ϕ_k_2/ϕ_k_1 - ϕ_k_2w_k_1j + ϕ_k_1/ϕ_k_1 - ϕ_k_2w_k_2jwhere w_k_1 j and w_k_2 j are fixed; similar to Equation <ref>, ∂ w_ij / ∂ϕ will contain 𝒪(1/ϵ) coefficients. For skinning the tetrahedral mesh, Equations <ref> and <ref> directly define v_i and ∂ v_i/∂ϕ since the skinning is moved to the tetrahedral mesh vertices u_k. 
Then, ∂ v_i/ ∂ u_k is computed according to Equation <ref> in order to chain rule to skinning (i.e. to ∂ u_k/ ∂θ, which is computed according to the equations in Section <ref>). § IMAGE RASTERIZATIONGiven a skinned triangulated surface and parameters for a perspective camera model, a camera space normal map is computed using a right-handed coordinate system. We assume that the geometry is centered in the image, since images are cropped and rescaled during preprocessing. Normal maps made using different assumptions, or decoded and stored as RGB values, are readily transformed back into unit normals (in camera space) in order to match our assumptions. §.§ NormalsRecall (from Section <ref>) that triangle vertices are reordered (if necessary) in order to obtain outward-pointing face normals. The area-weighted outward face normal isn_f(v_1,v_2,v_3) =1/2 (v_2-v_1) × (v_3-v_1)whereArea(v_1,v_2,v_3) = 1/2||(v_2-v_1) × (v_3-v_1)||_2is the area weighting. Area-averaged vertex unit normals n̂_v are computed vian_v = ∑_f n_f n̂_v= n_v/||n_v||_2where f ranges over all the triangle faces that include vertex v. Note that one can drop the 1/2 in Equation <ref>, since it cancels out when computing n̂_v in Equation <ref>. §.§ Camera ModelThe camera rotation and translation are used to transform each vertex v_g of the geometry to the camera view coordinate system (where the origin is located at the camera aperture), i.e. v_c = Rv_g + T. The normalized device coordinate system normalizes geometry in the viewing frustum (with z∈ [n, f]) so that all x,y ∈ [-1,1] and all z ∈ [0,1]. See Figure <ref>, left. Vertices are transformed into this coordinate system via[ [v_NDC] z_c; z_c ] =[2n/W 0 0 0; 02n/H 0 0; 0 0 f/f-n -fn/f-n; 0 0 1 0 ][ [v_c]; 1 ]where H = 2ntan(θ_fov/2) is the height of the image, θ_fov is the field of view, W=Ha is the width of the image, and a is the aspect ratio. The screen coordinate system is obtained by transforming the origin to the top left corner of the image, with +x pointing right and +y pointing down. See Figure <ref>, right. Vertices are transformed into this coordinate system via[ [v'];1 ] = [ -W/200W/2;0 -H/20H/2;0010;0001;][ [v_NDC]; 1 ] or via[ [v'] z_c;z_c ] = [-n 0 W/2 0; 0-n H/2 0; 0 0 f/f-n -fn/f-n; 0 0 1 0; ][ [v_c]; 1 ]which is obtained by multiplying both sides of Equation <ref> by z_c and substituting in Equation <ref>. https://pytorch3d.org/docs/cameras §.§ Normal MapFor each pixel, a ray is cast from the camera aperture through the pixel center to find its first intersection with the triangulated surface at a point p in world space. Denoting v_1, v_2, v_3 as the vertices of the intersected triangle, barycentric weights for the intersection pointα̂_1= Area(p, v_2, v_3)/Area(v_1, v_2, v_3) α̂_2= Area(v_1, p, v_3)/Area(v_1, v_2, v_3) α̂_3= Area(v_1, v_2,p)/Area(v_1, v_2, v_3)are used to compute a rotated (into screen space) unit normal from the unrotated vertex unit normals (see Equation <ref>) vian̂ = R α̂_1 n̂_v_1 + α̂_2 n̂_v_2 + α̂_3 n̂_v_3/||α̂_1 n̂_v_1 + α̂_2 n̂_v_2 + α̂_3 n̂_v_3||for the normal map. Note that dropping the denominators in Equation <ref> does not change n̂. §.§ Scanline RenderingAfter projecting a visible triangle into the screen coordinate system (via Equation <ref>), its projected area can be computed asArea2D(v_1', v_2', v_3')= -1/2det[ x_2'-x_1' y_2'-y_1'; x_3'-x_1' y_3'-y_1'; ]similar to Equation <ref> (where the negative sign accounts for the fact that visible triangles have normals pointing towards the camera). 
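As an illustration of the area-weighted vertex normals defined above, the short sketch below shows one possible PyTorch implementation; the function and variable names are ours (not taken from any released code for this work), and the factor of 1/2 on the face normals is dropped since it cancels during normalization.

```python
import torch

def vertex_normals(verts, faces):
    """Area-weighted outward unit normals at the vertices of a triangle mesh.

    verts: (V, 3) float tensor of vertex positions.
    faces: (F, 3) long tensor of vertex indices with consistent outward winding
           (as produced by the Marching Tetrahedra step after reordering).
    """
    v1, v2, v3 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    # The cross product has magnitude equal to twice the triangle area, so
    # accumulating it per vertex provides the area weighting automatically.
    face_n = torch.cross(v2 - v1, v3 - v1, dim=-1)                     # (F, 3)
    vert_n = torch.zeros_like(verts)
    vert_n.index_add_(0, faces.reshape(-1), face_n.repeat_interleave(3, dim=0))
    return torch.nn.functional.normalize(vert_n, dim=-1)              # unit n̂_v
```

These unit normals are the n̂_v that get barycentrically interpolated (and rotated into screen space) when the normal map is rasterized.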
When a projected triangle overlaps a pixel center p', barycentric weights for p' are computed by using Area2D instead of Area in Equation <ref>. Notably, un-normalized world space barycentric weights can be computed from un-normalized screen space barycentric weights via α_1 = z_2' z_3' α_1', α_2 = z_1' z_3' α_2', α_3 = z_1' z_2'α_3' orα_1= z_2' z_3' Area2D(p', v_2', v_3')α_2= z_1' z_3' Area2D(v_1', p', v_3')α_3= z_1' z_2' Area2D(v_1', v_2', p')giving n̂ = Rα_1 n̂_v_1 + α_2 n̂_v_2 + α_3 n̂_v_3/||α_1 n̂_v_1 + α_2 n̂_v_2 + α_3 n̂_v_3||as an (efficient) alternative to Equation <ref>. If more than one triangle overlaps p', the closest one (i.e. the one with the smallest value of z' = α̂_1'z_1'+α̂_2'z_2'+α̂_3'z_3' at p') is chosen.§.§ Computing GradientsFor each pixel overlapped by the triangle mesh, the derivative of the normal (in Equation <ref>) with respect to the vertices of the triangle mesh is required, i.e. ∂α_i / ∂ v_g and ∂n̂_v_i / ∂ v_g are required. ∂α_i / ∂ v' can be computed from Equations <ref> and <ref>, ∂ v'/∂ v_c can be computed from Equation <ref>, and ∂ v_c/∂ v_g can be computed from v_c = Rv_g + T. ∂n̂_v_i / ∂ v_g can be computed from Equations <ref> and <ref>.§ SDF REGULARIZATIONTwo regularization terms are utilized during neural network training in order to encourage: (1) the inferred ϕ̂ values to resemble a true SDF and (2) smoothness (similar to <cit.>). Notably, the smoothness regularizer behaves significantly better when ϕ̂ is closer to a true SDF.§.§ Eikonal RegularizationGiven a tetrahedron t with vertices u_k = (x_k, y_k, z_k) and inferred ϕ̂_k values, ϕ̂ can be linearly approximated within the tetrahedron by writingϕ̂_k = ax_k + by_k + cz_k + dfor each of the four vertices; then, the resulting 4× 4 linear system of equations can be solved to obtain the unknown coefficients (a, b, c, d) leading to|∇ϕ̂_t| = √(a^2 + b^2 + c^2)as the norm of the gradient. Summing over tetrahedra leads toE_1a = 1/2∑_t (|∇ϕ̂_t| - 1)^2as the energy to be minimized. The problem with Equation <ref> (and similar approaches, such as <cit.>) is that the chain rule moves the square root in Equation <ref> to the denominator, potentially leading to NaNs/overflow; notably, even an exact SDF has |∇ϕ|=0 at both extrema and pinching/merging saddles, and an inferred ϕ̂ can have |∇ϕ̂|=0 elsewhere as well. This can be avoided by instead usingE_1b = 1/2∑_t (|∇ϕ̂_t|^2 - 1)^2which still enforces |∇ϕ̂_t| = 1; alternatively,E_1c = 1/2∑_t Volume(t)(|∇ϕ̂_t|^2 - 1)^2scales the penalty on each tetrahedron by its volume. §.§ Motion by Mean CurvatureIn order to encourage smoothness, we define an energy that when minimized results in motion by mean curvature. Following <cit.>, the surface area can be calculated via∫_Ω |∇ H(ϕ(x,y,z))| dVwhere H is a Heaviside function and V is the volume; thus, on our tetrahedral mesh, we minimize E_2 =∑_t |∇ H(ϕ̂)|Volume(t)using a smeared-out Heaviside FunctionH(ϕ̂) =0ϕ̂ < -ϵ_H 1/2 + ϕ̂/2ϵ_H + 1/2πsin(πϕ̂/ϵ_H) -ϵ_H≤ϕ̂≤ϵ_H1ϕ̂ > ϵ_Hwhere ϵ_H, chosen as 1.5 times the average tetrahedral mesh edge length, determines the bandwidth of numerical smearing (see <cit.>). |∇ H(ϕ̂)| is discretized by linearly approximating H(ϕ̂) in each tetrahedron along the lines of Equation <ref> in order to obtain coefficients (a,b,c,d) for use in the equivalent of Equation <ref>. In order to avoid division by small numbers, we ignore tetrahedra with |∇ H(ϕ̂)|<10^-8 in Equation <ref> reasoning that |∇ H(ϕ̂)| is small enough and thus ϕ̂ is smooth enough in such tetrahedra. 
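To make the two regularizers concrete, the following is a minimal PyTorch sketch of the volume-weighted eikonal energy E_1c and the smeared-Heaviside mean-curvature energy E_2 on the tetrahedral mesh; it assumes the per-tetrahedron linear fit described above, and all function names, tensor layouts, and the tolerance shown are illustrative rather than taken from a released implementation.

```python
import math
import torch

def tet_gradients(values, tets, verts):
    """Per-tetrahedron gradient of a linearly interpolated scalar field.
    Solves the 4x4 system values_k = a*x_k + b*y_k + c*z_k + d, returns (a, b, c)."""
    p = verts[tets]                                              # (T, 4, 3)
    A = torch.cat([p, torch.ones_like(p[..., :1])], dim=-1)      # (T, 4, 4)
    rhs = values[tets].unsqueeze(-1)                             # (T, 4, 1)
    abcd = torch.linalg.solve(A, rhs).squeeze(-1)                # (T, 4) = (a, b, c, d)
    return abcd[:, :3]

def tet_volumes(tets, verts):
    p = verts[tets]
    e1, e2, e3 = p[:, 1] - p[:, 0], p[:, 2] - p[:, 0], p[:, 3] - p[:, 0]
    return torch.abs((e1 * torch.cross(e2, e3, dim=-1)).sum(-1)) / 6.0

def eikonal_energy(phi, tets, verts):
    """E_1c: volume-weighted penalty on |grad(phi)|^2 - 1 (avoids the sqrt)."""
    g2 = (tet_gradients(phi, tets, verts) ** 2).sum(-1)
    return 0.5 * (tet_volumes(tets, verts) * (g2 - 1.0) ** 2).sum()

def smeared_heaviside(phi, eps_h):
    h = 0.5 + phi / (2 * eps_h) + torch.sin(math.pi * phi / eps_h) / (2 * math.pi)
    return torch.where(phi < -eps_h, torch.zeros_like(phi),
                       torch.where(phi > eps_h, torch.ones_like(phi), h))

def mean_curvature_energy(phi, tets, verts, eps_h, tol=1e-8):
    """E_2: sum over tetrahedra of |grad(H(phi))| * volume, skipping flat tets."""
    grad_h = tet_gradients(smeared_heaviside(phi, eps_h), tets, verts).norm(dim=-1)
    keep = grad_h > tol
    return (grad_h[keep] * tet_volumes(tets, verts)[keep]).sum()
```

In practice eps_h would be set to 1.5 times the average tetrahedral mesh edge length, as stated above.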
§ SILHOUETTE LOSSESInstead of striving to make the inverse rendering differentiable at silhouette boundaries (as in e.g. <cit.>), we introduce energies that force the silhouettes to match. §.§ ShrinkingFor pixels that overlap the inferred surface but not the ground truth surface, the interior of the inferred surface needs to shrink so that the corresponding triangles disappear. For each tetrahedron mesh edge containing a vertex of a problematic triangle, the edge's parent tetrahedral mesh vertices are added to the set U_shrink if they have negative SDF values; then, ℒ_shrink = 1/2∑_k ∈ U_shrink (ϕ̂_k -ϵ_s)^2encourages those negative ϕ̂_k values to target a positive ϵ_s=5×10^-3, which is chosen as half the average tetrahedral mesh edge length. §.§ ExpandingFor pixels that overlap the ground truth surface but not the inferred surface, the interior of the inferred surface needs to expand. In order to determine where this expansion should occur, the implicit surface is temporarily inflated by changing the sign of the SDF at every tetrahedral mesh vertex with both ϕ̂>0 and a one-ring neighbor with ϕ̂<0 (e.g. by setting ϕ̂_temp = -ϵ_s at those vertices). Next, the pixels that previously overlapped the ground truth surface but not the inferred surface and now overlap both the ground truth surface and the new inflated surface are identified. For each tetrahedron mesh edge containing a vertex of a triangle corresponding to one of these pixels, the edge's parent tetrahedral mesh vertices are added to the set U_expand if they had positive SDF values before inflation. At this point, all of the temporary ϕ̂_temp values are discarded and the original ϕ̂ values are restored. Then, ℒ_expand = 1/2∑_k ∈ U_expand (ϕ̂_k+ϵ_s)^2encourages the positive ϕ̂_k values to target -ϵ_s. § EXPERIMENTSWe first demonstrate (in Section <ref>) that our network has the ability to reconstruct clothed humans when ground truth camera parameters and normal maps are known. In Section <ref>, we demonstrate that the network can be trained to reconstruct 3D geometry with increasing efficacy as the number of sparse views increases. Subsequently (in Section <ref>), we extend this process to real-world RGB data (with no ground truth information) in order to demonstrate the ability to reconstruct 3D geometry using only network-inferred normal maps. For the sake of comparison, we also present (in Section <ref>) the results we obtained using available implementations of other methods for single view and multiview reconstruction. §.§ Network EfficacyGiven ground truth 3D data from RenderPeople <cit.>, we show that our network has the capacity and flexibility to reconstruct clothed humans from either a single image or multiple images. Regardless of the number of input images, the network is trained by minimizing the normal map loss (Equation <ref>), SDF regularization losses (Equations <ref> and <ref>), and silhouette losses (Equations <ref> and <ref>). In the multiview case, each image is considered individually (i.e. we treat multiview as a collection of single view examples). 
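Putting the pieces together, one single-view term of the training objective described above can be sketched as follows; the loss weights and the choice of an L1 norm for the normal-map term are our own placeholder assumptions (the text does not specify them), and eikonal_energy / mean_curvature_energy refer to the helper functions in the earlier sketch.

```python
def training_loss(pred_normals, gt_normals, phi, tets, verts,
                  shrink_idx, expand_idx, eps_h, eps_s=5e-3,
                  w_eik=0.1, w_mc=0.01, w_sil=1.0):
    """Single-view objective: normal-map loss plus SDF regularizers and the
    silhouette shrinking/expanding penalties (weights are illustrative)."""
    l_normal = (pred_normals - gt_normals).abs().mean()            # || N - N_GT ||
    l_eik = eikonal_energy(phi, tets, verts)                       # E_1c
    l_mc = mean_curvature_energy(phi, tets, verts, eps_h)          # E_2
    l_shrink = 0.5 * ((phi[shrink_idx] - eps_s) ** 2).sum()        # L_shrink
    l_expand = 0.5 * ((phi[expand_idx] + eps_s) ** 2).sum()        # L_expand
    return l_normal + w_eik * l_eik + w_mc * l_mc + w_sil * (l_shrink + l_expand)
```

In the multiview case this term is simply evaluated and summed per image, consistent with treating multiview data as a collection of single-view examples.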
Figure <ref> shows an example of the results obtained by training our network on 8 camera views surrounding the person (as compared to the ground truth).§.§ Geometry ReconstructionTo quantitatively evaluate the accuracy of our inferred results, we define a normal map error ase_normal = 1/W× H∑_p (1/2(1-n̂_p · n_p))^2where the ground truth and predicted normals at pixel p are n_p and n̂_p, respectively, and n̂_p · n_p ∈ [-1,1] is replaced with -1 for pixels where the predicted and ground truth silhouettes do not overlap. Note that normal maps do not uniquely determine scale/depth; thus, the reconstructed objects could erroneously move closer/further from the camera becoming smaller/larger in scale (while also undergoing distortion, since this scale variance is not self-similar). In order to monitor this, we define a depth map error ase_depth= 1/W× H∑_p (d̂_p - d_p)^2where (d̂_p - d_p) is replaced with the thickness of the tetrahedral mesh (0.2 meters) for pixels where the predicted and ground truth silhouettes do not overlap.Given ground truth 3D data from RenderPeople <cit.>, we show how our network reconstructs 3D geometry with increasing efficacy as the number of sparse views increases. Figure <ref> shows the inferred 3D geometry from a novel view, and Table <ref> shows how per-pixel normal and depth errors decrease as the number of training views increases. When the network is trained on only one view, there are no constraints on the side/back of the person; hence, the predicted geometry has a high degree of noise when rendered from novel views. When trained with 5 views, the ground truth geometry is recovered with high accuracy.§.§ Geometry Reconstruction from RGB ImagesHere, we illustrate that our network can be used to reconstruct 3D geometry from monocular uncalibrated RGB images, without requiring any pretraining on scanned data (or any other informed initialization of the network parameters). However, we do utilize a pretrained pix2pix network <cit.>(introduced in PIFuHD <cit.>) to infer ground truth normal maps and note that pix2pix was trained on 3D ground truth geometry. We do not consider this a severe limitation both because normal maps are easier to infer than 3D geometry and because there are other ways to obtain normal maps.First, we captured monocular video footage of a person in a static pose; then, a sparse number of frames were extracted and preprocessed by removing the background using <cit.> and cropping to a square image. The resulting images were then passed into pix2pix to obtain “ground truth” normal maps. See Figure <ref>. Since estimated camera parameters will be prone to error, we refine a rough initialization iteratively. At each iteration, we train the network and use Marching Tetrahedra to create a mesh inferenced off of the image for (and overfit to) each view; then, we use ICP <cit.> to rigidly align all the meshes. Although one could delete all the triangles and remesh the point cloud, we obtained better results by updating each camera to match the ICP rigid transform of its corresponding mesh. The updated camera positions are then used to iteratively repeat the entire process. Once the camera parameters converge, the network can be trained with an additional loss that encourages 3D consistency. For a given camera view c_0, this loss is defined asℒ(ϕ̂_k) =∑_c≠ c_0‖ϕ̂_k - ϕ̂_k(c)‖where ϕ̂_k(c) refers to the inferred SDF values obtained from using view c's image. 
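The cross-view consistency term above can be sketched in a few lines; sdf_by_view is a hypothetical mapping from a view identifier to the SDF values inferred from that view's image, and the unspecified norm in the equation is taken here to be the Euclidean norm.

```python
import torch

def consistency_loss(sdf_by_view, ref_view):
    """Encourage the SDF inferred for the reference view to agree with the
    SDFs inferred from all other (converged-camera) views."""
    ref = sdf_by_view[ref_view]                       # (K,) values on tet-mesh vertices
    return sum(torch.norm(ref - sdf)
               for view, sdf in sdf_by_view.items() if view != ref_view)
```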
The network obtained from the aforementioned process (to improve camera extrinsics) will tend to be less detailed on the back side of the mesh, since only the front side can be seen in any given input image; thus, after improving camera extrinsics, we proceed as follows. Each view is fine-tuned with a regularizer that aims to keep ϕ close to that which was obtained using Equation <ref>; then, we delete any visible triangles that are not consistent with the normal map (within some tolerance). See Figure <ref>. Since these are (actual) triangle meshes, it is trivial to load them into a suitable computer graphics application and align/resize the meshes in order to combine them into a single unified mesh. See Figure <ref>. §.§ Comparisons We quantitatively compare our reconstruction method to existing single view <cit.> and multiview <cit.> reconstruction approaches using monocular videos from the People Snapshot Dataset <cit.>. Each video was captured with a fixed camera, and the subjects were asked to rotate while holding an A-pose. We trained our network on four frames per video (front, back, and two side views) and subsequently deleted any visible triangles that are not consistent with the normal maps (within some tolerance, as in Section <ref>). SelfRecon <cit.> and VideoAvatar <cit.> were trained on all video frames. For the single view approaches <cit.>, we took the mesh predicted using the front or back-facing frame (whichever is closer to the test view) and scaled/rigidly aligned it using ICP to fit the corresponding SelfRecon mesh. Table <ref> compares the results obtained with each method to the PIFuHD inferred normal map. SelfRecon has slightly more error and lacks detail compared to our approach (particularly around the face and wrinkles in the clothing). See Figure <ref>. Notably, the runtime of our approach on a single NVIDIA 3090 GPU is at least 50× faster than SelfRecon, which takes over a day of training (per video) to achieve their published results (our network is trained for about 20 minutes). § CONCLUSION 
Although image-based reconstruction can be solved as an inverse problem, regularization is required in order to address issues with noise. Parameterized models (such as SMPL <cit.> or 3DMM <cit.>) provide for such regularization. We choose a neural network to parameterize our reconstruction where regularization is provided by having a limited number of network parameters. Our network aims to convert images from any view direction into a unique implicit surface, regardless of the view direction (similar in spirit to how the human brain processes visual input); in fact, our eyes discern relative distance (similar to normal maps) more proficiently than they discern raw distance. In summary, we present a weakly-supervised method for clothed human reconstruction by leveraging 2D normal maps as the supervisory signal during neural network training. In order to train a learned model that can infer high-frequency cloth and body geometry without any ground truth 3D data, our proposed approach builds on strong geometric priors for modeling and rendering. Our results reinforce the notion that less training data is required to train networks that infer normal maps than to train networks that infer 3D geometry (in agreement with ECON <cit.>). This means that working to improve the efficacy of network-inferred normal maps (and using the results for 3D reconstruction, as in Section <ref>) is likely to be more productive than working to obtain (via expensive 3D scanning) the excessive amount of ground truth data required to train a network to inference 3D geometry directly. Moreover, the process outlined in Section <ref> provides an alternative mechanism (significantly cheaper than 3D scanning) for acquiring the ground truth data required to train a network to inference 3D geometry directly. § ACKNOWLEDGEMENTS Research supported in part by ONR N00014-19-1-2285, ONR N00014-21-1-2771. We would like to thank Reza and Behzad at ONR for supporting our efforts in machine learning. This work was also supported by JSPS KAKENHI Grant Number JP23H03439. J. W. was supported in part by the Gerald J. Lieberman Graduate Fellowship, the NSF Mathematical Sciences Postdoctoral Fellowship, and the UC President's Postdoctoral Fellowship. | http://arxiv.org/abs/2311.16042v1 | {
"authors": [
"Jane Wu",
"Diego Thomas",
"Ronald Fedkiw"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231127180635",
"title": "Weakly-Supervised 3D Reconstruction of Clothed Humans via Normal Maps"
} |
Properties of Steady Sub-Alfvénic Solar Wind in Comparison with Super-Alfvénic Wind from Measurements of Parker Solar Probe [ January 14, 2024 =========================================================================================================================== The search for the giant pairing vibration (GPV) has a long-standing history since the 1970s when it was predicted. First experimental measurements focused on (p,t) transfer reactions in the heavy nuclei and did not show convincing evidence. The discovery of a signal compatible with the GPV in the light carbon isotopes has renewed the interest in the GPV. It triggered new theoretical models showing that the GPV in the heavy nuclei might be too wide or too melted to be observed and triggered new experiments with radioactive probes based on (^6He,^4He) transfer. Nuclei in interaction with external fields display a wide variety of collective vibrations known as giant resonances, associated with various degrees of freedom and multipolarities. The giant isovector dipole resonance and the giant isoscalar quadrupole resonance are among the most studied ones. Through general symmetry arguments relating particles to holes, a particular mode associated with vibrations in the number of particles was predicted in the 1970s <cit.>, the Giant Pairing Vibration (GPV). It is due to the collective superposition of many particle-particle states, in analogy with the Giant (dipole, quadrupole) resonances that are due to the collective superposition of many particle-hole states. The analogy between particle-hole (shape) and particle-particle (pairing) excitations became well established and thoroughly explored by Broglia and co-workers <cit.>. However, although the first experimental evidence for the isovector giant dipole resonance dates back to 1937, no clear evidence for the existence of the giant pairing vibration was ever found in the heavy nuclei. This topic has regained interest since the recent work of Cappuzzello and coworkers <cit.> who identified a signal compatible with the GPV in the light carbon isotopes. In this paper, we will review the experimental search for the GPV. A full review of the theoretical and experimental search for the GPV can be found in ref. <cit.> and a review of the experimental work in the light nuclei in ref. <cit.>. § HOW TO PROBE THE GPV? The collective character of the particle-hole (surface) vibration is probed by inelastic scattering reactions. In the same fashion, two-particle transfer reactions provide much of our knowledge of pairing correlations. For excitations to 0^+ states, these reactions are important probes of collective pairing excitations in nuclei. This has the same origin as the collectivity of surface vibrations in inelastic scattering. Namely, all configurations contribute with the same phase to the two-particle transfer form factor leading to the collective pairing state (a vibration in gauge space). The existence of pair correlations is known to provide an enhancement in the magnitude of the ground-state to ground-state transition matrix elements between systems that differ by two nucleons. Analogous enhancements are expected from particle-particle correlations involving transitions to higher single-particle shells. The analogy between shape and pairing can be taken further. 
Near closed shell, nuclei with two identical particles added or removed from a closed-shell configuration should be close to a quantum fluid limit, since the pairing correlations are not strong enough to overcome the large single-particle energy required to add a pair. There, strongly enhanced L=0 transitions manifest themselves following a vibrational pattern (similar to vibrations in shape), in which transferring a pair of nucleons changes the number of phonons by one. Low-lying pair-vibrational structures have been observed around ^208Pb by using conventional pair-transfer reactions such as (p,t) and (t,p) <cit.>. Nuclei with many particles (pair quanta) outside of a closed-shell configuration (i.e. at the middle of a shell) correspond to a superconducting limit, where there is a static deformation of the pair field and rotational behavior results (similar to shape rotation). The ground-state to ground-state transition is observed between monopole states and follows a rotational scheme. A textbook example would be the pair-rotational sequence comprising the ground states of the even-even Sn isotopes around ^116Sn <cit.>. Taking the analogy even further, it has long been predicted that there should be a concentration of strength, with L = 0 character, in the high-energy region (10–15 MeV) of the pair-transfer spectrum. This is called the Giant Pairing Vibration (GPV) and is understood microscopically as the coherent superposition of 2p (addition mode) or 2h (removal mode) states in the next major shell 2ħω above (below) the Fermi surface, similar to the well-known pairing vibrational mode (PV) <cit.>, which involves spin-zero-coupled pair excitations across a single major shell gap. Thus, pairing vibrations are evidenced through two-particle transfer reactions. They manifest themselves as an L=0 transition mode from an A nucleus to an A±2 nucleus. They are expected to lead to a large bump in the two-neutron transfer energy spectrum. Various independent theoretical calculations converge in predicting the GPV as a strong mode typically located around 70/A^1/3 MeV in the two-neutron L=0 transfer channel, with a width of 7.8/A^1/3 MeV and carrying a cross-section which is 20%-100% of the ground-state one <cit.>. The study of the GPV would also provide crucial information on the pairing interaction: the transfer cross-section depends on the form factor of the two transferred neutrons. It has been shown that this form factor corresponds to the perturbation of the pairing field during the excitation of the system <cit.>. Intuitively, the collectivity of the GPV should increase with the mass of the nucleus. Therefore, in a simple picture, its strength is expected to be maximum for the heaviest nuclei, such as Sn and Pb isotopes, where numerous nucleons may contribute coherently. Two candidates have been envisaged: the “normal” nuclei (closed shells), like Pb <cit.>, or the superfluid nucleus (mid-shell), namely Sn, where more pairs can contribute. Experimental investigations of the GPV focused on simple probes like (p,t) and (t,p) with various conditions. § REVIEW OF THE EXPERIMENTAL SEARCH FOR GPV §.§ In the heavy nuclei In the 1960s and 1970s, the searches for the GPV focused on (p,t) reactions at high energy for both Pb and Sn isotopes. However, they remained unsuccessful. There could be several reasons, as mentioned in <cit.>: * The L matching conditions are of great importance. 
The proton incident energy should be high enough to excite a 14 MeV mode but not too high in order not to hinder the L=0 transfer. The smaller the proton energy, the larger the cross section for L=0 modes. * The use of a spectrometer is decisive in order to precisely measure the triton in the exit channel. The only reported search for the GPV with Ep ≈ 50 MeV used Si detectors, and was plagued by a strong background <cit.>. * As the L = 0 cross sections are known to exponentially increase when approaching 0 degrees, the measurement has to be performed at small angles and is even better if it includes 0 degrees. There was a revival of the experimental GPV search in the 2000s with several experiments aiming at improving the three experimental conditions mentioned above. All used a spectrometer for the triton measurement to improve the measurement at 0 degrees. Several attempts with different proton energies were performed. The first attempt used a 60 MeV proton beam produced at the iThemba LABS facility in South Africa impinging on ^208Pb and ^120Sn targets, respectively <cit.>. No evidence for the GPV was found in the region of interest for either target. The measurement was repeated with a 50 MeV and a 60 MeV proton beam and the K = 600 QDD magnetic spectrometer in zero degree mode to combine the best experimental conditions to probe the GPV. A strong proton background with a rate 500 times higher than that of the tritons of interest was produced by protons scattering off the beam stop. The tritons were identified by time-of-flight, which removed most of the background. The excitation energy spectrum obtained for ^118Sn is shown in Fig. <ref>. The deep-hole contribution between 8 and 10 MeV is stronger in the 0 degree spectrum than at 7 degrees, indicating a possible low-L composition of this background. A fit of the different components assuming a width between 600 keV and 1 MeV for the GPV was performed. It leads to an upper limit on the cross-section for populating the GPV between 0.13 and 0.19 mb over the angular acceptance of the spectrometer (±2 degrees). The last attempt with the (p,t) reaction was performed at LNS Catania with a proton beam produced by the cyclotron accelerator at Ep = 35 MeV impinging on a ^120Sn target <cit.>. The lower proton energy was supposed to enhance the L=0 cross-sections and favor the population of the GPV. The measurement was performed with the MAGNEX spectrometer, which makes it possible to cover a range of about 7 MeV in the expected GPV energy region. The excitation energy spectrum obtained for ^118Sn is shown in Fig. <ref> for the six magnetic settings of the spectrometer. The tritons were identified from their energy loss as a function of their position in the focal plane so that only a very small background contribution remains. The spectrum zoomed in the region of interest for the GPV shows a small bump over the background in the same energy region as the previous measurements at 50 and 60 MeV. The width was fitted to 1.5 ± 0.4 MeV. No clear evidence for a GPV mode has been found from the searches through (p,t) reactions. (t,p) transfer reactions should also be investigated to rule out any difference between two-neutron stripping and two-neutron pick-up reactions. §.§ In the light nuclei The search for the GPV in the heavy nuclei was almost stopped when ref. <cit.> revealed evidence for a GPV in the light carbon isotopes. The GPVs are observed at about 17 MeV and 13.7 MeV in ^14C and ^15C, respectively, with widths of the order of 1 to 1.5 MeV. 
The cross-sections at 84 MeV are of the order of 0.3 and 0.4 mbarn, respectively. The angular distribution for the GPV state was also extracted and has an L=0 character, as expected. The authors claim that the reaction mechanism plays an important role in populating the GPV. The two-nucleon transfer reaction (^18O,^16O) is better suited because: * it is better matched in Q-value for L=0 transfer following Brink's conditions; * the survival of a preformed pair in a transfer process is favored when the initial and final orbitals are the same. The authors further confirmed their results with the study of the two-neutron decay of the GPV and the observation of another GPV in ^11Be with the same transfer reaction. They also investigated the same reactions as in the original paper but at higher energy (275 MeV against 84 MeV in the first measurements) and confirmed their results. § WHERE DO WE STAND TODAY? The non-observation of the GPV in the heavy nuclei, when all pairing theories predict it, is a puzzle. Today, we stand at a crossroads: either the (p,t) transfer reaction mechanism is not well suited for this kind of study, or the GPV cannot be observed in the heavier nuclei because it is too wide. The arguments in favor of the latter point are many: the density of states in the heavy nuclei may be too high to observe the GPV, which would then be melted into other contributions. Laskin and collaborators <cit.> suggested that, as the main contribution to L=0 transfer comes from low-l orbitals (mainly the s_1/2 orbital) for (p,t), these orbitals have a low centrifugal barrier and thus acquire large widths, so that the GPV would be too wide to be observed. As for the reaction mechanism aspect, new attempts to search for the GPV in heavier nuclei have been undertaken recently. Several theoretical works <cit.> pointed out that pair transfer reactions using weakly bound projectiles would be better suited in terms of Q-value and transferred angular momentum matching, like (^6He,^4He) transfer. §.§ Search for GPV through (^6He,^4He) reactions The ^208Pb(^6He,α) reaction has been investigated at GANIL <cit.> with the ^6He beam produced by the Spiral1 facility at 20 A MeV with an intensity of 10^7 pps. The detection system was composed of an annular Silicon detector. The background was very large, due to the various channels of two-neutron emission from ^6He (namely breakup) and also to the channeling in the detector of the elastically scattered ^6He beam. No indication of the GPV was found in this experiment. Another experiment has been performed at TRIUMF with the IRIS set-up <cit.> to investigate ^116Sn(^6He,^4He) and ^116Sn(^18O,^16O) at 8 MeV/u (spokespersons: R.M. Clark and A.O. Macchiavelli). The preliminary results reflect the difficulties in using Silicon/CsI telescopes to investigate pair transfer due to the large background from reactions in the CsI, channeling, and inter-strip effects in the Silicon. § CONCLUSION The search for the GPV has triggered a lot of experimental effort, combined with theoretical work to improve the predictions and interpret its non-observation. However, the quest for the GPV in heavy nuclei has not come to an end: further investigations with spectrometers instead of particle arrays, and further searches for other types of GPV, such as the proton-proton GPV, are still in their infancy. Bro77 R. A. Broglia and D. R. Bès, Phys. Lett. B69, 129 (1977). Bro73 R. A. Broglia, O. Hansen, and C. Riedel, Adv. Nucl. Phys. 6, 287 (1973). Cap15 F. 
Cappuzzello et al., Nature Comm. 6 (2015) 6743. Ass19 M. Assié, C.H. Dasso, R.J. Liotta, A.O. Macchiavelli and A. Vitturi, Eur. Phys. J. A 55 (2019) 245. Cav19 M. Cavallaro, F. Cappuzzello, D. Carbone and C. Agodi, Eur. Phys. J. A 55 (2019) 244. Boh66 N. Bohr, Nuclear structure symposium, 1966. Bri05 D. Brink and R.A. Broglia, Nuclear superfluidity - Pairing in finite systems, Cambridge University Press (2005). Bes66 D. R. Bès and R. Broglia, Nucl. Phys. 80, 289 (1966). Boh98 A. Bohr and B. Mottelson, Nuclear Structure, Vol II, World Scientific Publishing Co. 1998. Her85 M.W. Herzog, R.J. Liotta, L.J. Sibanda, Phys. Rev. C31, (1985) 259. Oer01 W. Von Oertzen and A. Vitturi, Rep. Prog. Phys. 64, (2001) 1247 and references therein. Kha04 E. Khan et al., Phys. Rev. C 69 (2004) 014314. Mou11 B. Mouginot et al., Phys. Rev. C 83, (2011) 037302. Cra77 G. M. Crawley et al., Phys. Rev. Lett. 23, (1977) 1451. deN14 M. de Napoli et al., Acta Physica Polonica B 4, (2014) 437. Ass09 M. Assié et al., Eur. Phys. J. A 42, (2009) 441. Cap21 F. Cappuzzello et al., Eur. Phys. J. A 57 (2021) 34. Cav16 M. Cavallaro et al., Phys. Rev. C 93 (2016) 064323. Las16 M. Laskin et al., PRC 93, 034321 (2016). Das15 C.H. Dasso et al., J. Phys.: Conf. Ser. 580 (2015). For02 L. Fortunato et al., Eur. Phys. J. A 14, (2002) 37. | http://arxiv.org/abs/2311.15591v1 | {
"authors": [
"M. Assié"
],
"categories": [
"nucl-ex",
"nucl-th"
],
"primary_category": "nucl-ex",
"published": "20231127073015",
"title": "Overview of the experimental quest for the giant pairing vibration"
} |
We report for the first time a relationship between galaxy kinematics and net Lyα equivalent width (netEW) in star-forming galaxies during the epoch of peak cosmic star formation. Building on the previously reported broadband imaging segregation of Lyα-emitting and Lyα-absorbing Lyman-break galaxies (LBGs) at z∼2 (Paper I in this series) and previously at z∼3, we use the Lyα spectral type classification method to study the relationship between netEW and nebular emission-line kinematics in samples of z∼2 and z∼3 LBGs drawn from the literature for which matching rest-frame UV photometry, consistently measured netEWs, and kinematic classifications from integral field unit spectroscopy are available. We show that z∼2 and z∼3 LBGs segregate in colour–magnitude space according to their kinematic properties and Lyα spectral type, and conclude that LBGs with Lyα dominant in absorption (aLBGs) are almost exclusively rotation-dominated (presumably disc-like) systems, and LBGs with Lyα dominant in emission (eLBGs) characteristically have dispersion-dominated kinematics. We quantify the relationship between the strength of rotational dynamic support (as measured using v_obs/σ_int and V_C/σ_int) and netEW for subsets of our kinematic sample where these data are available, and demonstrate the consistency of our result with other properties that scale with netEW and kinematics. Based on these findings, we suggest a method by which large samples of rotation- and dispersion-dominated galaxies might be selected using broadband imaging in as few as three filters and/or netEW alone. If confirmed with larger samples, application of this method will enable an understanding of galaxy kinematic behaviour over large scales in datasets from current and future large-area and all-sky photometric surveys that will select hundreds of millions of LBGs in redshift ranges from z∼2-6 across many hundreds to thousands of Mpc. Finally, we speculate that the combination of our result linking netEW and nebular emission-line kinematics with the known large-scale clustering behaviour of Lyα-absorbing and Lyα-emitting LBGs is evocative of an emergent bimodality of early galaxies that is consistent with a nascent morphology-density relation at z∼2-3. 
§ INTRODUCTIONKinematics are a characteristic feature of the intrinsic galaxy population dichotomy that exists in the modern-day Universe <cit.>, and as such, are a key constraint for simulations that aim to understand the mechanisms by which galaxies evolve over cosmic time <cit.>.This intimate relationship between galaxy kinematics and their environment is manifest in the well-studied Morphology–Density Relation <cit.>, and has been investigated for a range of kinematic types and environments out to intermediate redshifts up to z∼1.5 <cit.>.Developments in the capability and efficiency of optical and near-IR integral field unit (IFU) and slit spectrographs over the past fifteen years have enabled numerous observational campaigns to probe in exquisite detail the nebular emission-line kinematics of star-forming galaxies (SFGs) at redshifts that span the peak of the cosmic star formation rate density <cit.>.The synthesis of these results has produced a picture of the high-redshift SFG population in which the majority (≳ 75%) of massive (log(M_⋆/M_⊙) ≳ 10) SFGs appear to have assembled primitive discs with characteristically large ionised gas velocity dispersions and rotation-dominated kinematics <cit.>.Lensed and other deep surveys targeting lower mass (log(M_⋆/M_⊙)≲ 9.5) SFGs report a significantly lower (< 50%) rotation-dominated fraction, and a higher proportion of galaxies that are kinematically disordered (dispersion-dominated) or have kinematic structure and/or morphologies that identified them as mergers <cit.>.Additionally, the nebular emission-line kinematics of SFGs in the redshift range z∼2-3, have been shown to correlate with a range of galactic physical properties including: stellar and dynamical mass; size and morphology; specific and total star formation rates; gas outflows and gas fraction; and nebular conditions including the degree and spatial distribution of metals (see for example <cit.> and references therein, and more recently, <cit.> and <cit.>).Despite these advances, the ability of currently available samples to inform the relationship between kinematics and large-scale structure at redshifts z≳2 is challenged by a range of factors including: small sample sizes; observational biases that are a function of survey depth and redshift range; specific sample selection criteria; the spatial and spectral resolution of the observations; and the diversity of kinematic analysis methods and classification criteria <cit.>.To better inform simulations that model the relationship between galactic kinematics and different formation and evolution pathways, and to facilitate the proper statistical study of relationships between the global kinematic properties of galaxy populations and large scale structure out to high redshifts, there is a need for much larger samples spanning a wider range of intrinsic and environmental galaxy properties, mapped over large scales, and to redshifts above z∼4, at which redshifts IFU-based kinematic measurements are currently not possible.Contemporaneously with progress in understanding the kinematics of SFGs at z≳2, rest-frame UV spectroscopic studies of star-forming UV-colour-selected Lyman break galaxies (LBGs) in the same redshift range have reported the sensitivity of() visibility to a wide range of galactic properties.Due to the resonant character of thetransition (see M. 
Dijkstra in <cit.> for a comprehensive description),transmission and spectral morphology are modulated by neutral gas properties such asoptical depth, covering fraction, dust content, and kinematics (see the review of <cit.> and more recently e.g., <cit.> and <cit.>).In addition to thesegas properties which directly control the absorption and scattering ofduring radiative transfer, it has been demonstrated for z∼2-4 LBGs that largertransmission, or netequivalent width (netEW), is associated with galaxies with bluer UV colours, lower metallicities, lower stellar masses, lower total UV luminosities, lower star formation rates, harder ionising field strengths, and more compact morphologies <cit.>.Deep observational surveys of the spatial redistribution ofinto the circumgalactic medium (CGM) have established the apparent ubiquity of so-called ` halos' around early SFGs <cit.>, and there is a growing body of observational and computational work suggesting thatvisibility in the early universe reflects, and is modulated by, the galactic environment on small and large scales <cit.>.Given this established sensitivity ofto a wide range of intrinsic and environmental galactic properties, and the trends that have been demonstrated linking many of the same properties to galaxy kinematics, it is reasonable to ask the question: “Is there a relationship betweenand galaxy kinematics and how might this be used to inform our understanding of galaxy formation and evolution?" – especially on large scales and at high redshifts whereis frequently the only spectroscopic indicator available. Radiative transfer simulations have investigated the influence of solid body rotation onobservables <cit.> and predict the sensitivity ofspectral line morphology to the bulk rotation of neutral gas and the viewing angle relative to the rotation axis.They also predict, however, that there should be no observable difference in the integratedline flux, theescape fraction, or the average number of scatterings for eachphoton caused by changes in the radiative transfer mechanism under the influence of rotation or dispersion-dominated kinematics alone.The only direct observational study of a relationship betweenand galaxy kinematics reported to date is the low-z work of <cit.> who derived values for shear velocity and intrinsic velocity dispersion from thekinematic maps of galaxies in theReference Sample <cit.>.The LARS collaboration surmise a causal connection between turbulence in actively star forming systems and interstellar medium conditions that favour an escape ofradiation, and further speculate that dispersion-dominated kinematics are a necessary requirement for a galaxy to have a significant amount of escaping .In the first paper in this series <cit.>, we report the photometric segregation of z∼2 LBGs versus netEW in colour-magnitude space, and derive criteria for the selection of pure samples of LBGs withdominant in absorption anddominant in emission using broadband imaging alone.Together with the analogous z∼3 result of <cit.>, we have suggested the utility of this method to study a wide range of properties known to be associated within large samples and over large scales in data from current and future large-area photometric campaigns.In particular, we foresee application of this approach to datasets from the all-sky LSST survey by the Vera C. 
Rubin Observatory that will select hundreds of millions of LBGs in redshift ranges from z∼2-6 across many hundreds to thousands of Mpc.In this paper we report a direct relationship between nebular emission-line kinematics and netEW insamples of z∼2 and z∼3 LBGs drawn from the literature, and extend the results of Paper I and C09 to propose a method by which the generalised kinematics of large samples of LBGs might be predicted using broadband imaging in as few as three filters, and studied on large scales in data from large-area and all-sky photometric surveys.Finally, we combine our result with known relationships betweenand galactic environment, and speculate on how these findings might be used to inform our understanding of galaxy formation and evolution in the early Universe.This paper is structured as follows: In Section 2, we describe the photometric, spectroscopic, and kinematic data used in the subsequent sections.A relationship between netEW and the nebular emission-line kinematics of LBGs at z∼3 and z∼2 is presented in Section 3.In Section 4, we discuss these results, their potential utility, and their implications for galaxy evolution science.The important conclusions of the paper are summarised in Section 5.We assume a ΛCDM cosmology with Ω_M= 0.3, Ω_Λ= 0.7 and H_0= 70 km s^-1 Mpc^-1.All magnitudes are quoted in the AB system of <cit.>.§ DATA §.§ OverviewWe assemble from the literature complementary kinematic samples of z∼2 and z∼3 LBGs with consistent multi-band rest-frame UV broadband photometry, uniformly measured netEWs, and kinematic classifications quantitatively and comparably determined from IFU-based spectroscopy.The z∼2 kinematic sample consists of 23 UV-colour-selected (BX) LBGs and 13 K_s-band (mass) selected SFGs in the range 2.0 < z ≲ 2.5.Twenty-two of these are classified as `rotation-dominated' (including all the K_s-band selected SFGs), four are `dispersion-dominated', and ten are classified as `mergers' in the source studies.(see Section <ref> and Table <ref> for details).The z∼3 kinematic sample consists of 24 LBGs in the range 2.6 < z ≲ 3.4, of whichten are classified as `rotating' or `rotation-dominated', eleven are `not-rotating' or `dispersion-dominated', and three galaxies are labelled as `not classified' in the source study (see Section <ref> and Table <ref> for details).The LBGs in the kinematic samples all had broadband optical photometry available from the catalogs of <cit.> and <cit.>.Broadband photometry for the K_s-band selected SFGs was transformed to match the U_nGR photometric system of these catalogs (see Section <ref>).The parent photometric catalogs derive from an observational campaign that targeted 14 uncorrelated fields with a total survey area of 1900 arcmin^2, resulting in a sample that is minimally affected by systematic biases due to cosmic variance or clustering.The survey used the rest-frame UV colour selection criteria of <cit.> and <cit.>.These criteria were designed to recovergalaxies with intrinsic properties – particularly rest-frame UV luminosity and reddening by dust – that were similar across both redshift ranges.The faint end magnitude cuts of R ≤ 25.5 (and R ≤ 26.0 for one z∼3 field) in the parent LBG samples were chosen so as to facilitate spectroscopic redshift determinations using the rich complement of strong interstellar and stellarlines in the rest-frame UV continuum betweenand ∼1700 Å <cit.>.Moreover, the faintest galaxies in the kinematic samples are brighter than R =25, thus mitigating any potential bias at the 
faint end due to over-reliance onin emission for redshift determination.The z∼2 and z∼3 parent samples have R-band apparent magnitudes in the range 22.0 < R< 25.5 and 22.7 < R< 26.0, corresponding to rest-frame UV luminosities (absolute magnitudes) of -22.6 < M_UV < -19.1 and -22.6 < M_UV < -19.5, respectively.The bulk of galaxies in the parent samples have stellar masses in the range 9 ≲log(M_⋆/M_⊙) ≲ 11 <cit.> and star formation rates (inferred from rest-frame UV luminosities uncorrected for extinction) in the range 3 ≲M_⊙ yr^-1≲ 60 (median 9.9 M_⊙ yr^-1) and 5.5 ≲M_⊙ yr^-1≲ 66, (median 10.3 M_⊙ yr^-1), respectively <cit.>.Accordingly, our parent and kinematic samples are typical of LBGs/SFGs at these redshifts <cit.>, and the z∼2 LBGs lie (though with a range of properties <cit.>) on the main sequence of stellar mass and star formation rate for z∼2 SFGs <cit.>.Throughout this work we use consistently determined netEW as a measure ofvisibility.NetEW incorporates information aboutin emission,in absorption, (even for strong to weak emitters) and their combined effects in the observedspectral feature.This is critically important for our LBG samples – especially at z∼2 wherein absorption dominates the population (see Paper I).NetEWs for galaxies in the parent LBG catalogs were measured uniformly at z∼2 (see <cit.>) and z∼3 <cit.> using the method described by <cit.>.The rest-frame UV colour criteria used to select the z∼2 and z∼3 LBGs result in a netEW distribution for the R < 25.5 samples that is representative of the intrinsic distribution for the parent population of galaxies <cit.>. §.§ z∼2 Kinematic sample §.§.§ Rest-frame UV-colour-selected galaxiesA sample of 23 rest-frame UV-colour-selected (BX) galaxies in the redshift range 2.0<z<2.5 that overlap with our parent z∼2 photometric catalog were selected from the SINS survey sample of <cit.> and the AO-assisted IFS survey of <cit.>.Twenty-one of these galaxies had netEWs in the spectroscopic catalog of <cit.>.The BX galaxies targeted by FS09 and LA09 have stellar masses in the range 9.0 < log(M_⋆/M_⊙) < 10.7, and are drawn from the near-IR spectroscopic sample of <cit.>.Although they have a number of galaxies in common, the LA09 galaxies tend to have stellar masses in the less-massive to typical-mass range (mean log(M_⋆/M_⊙) ≈ 10.1) compared to the FS09 sources that favour the higher mass end of the BX sample (mean log(M_⋆/M_⊙) ≈ 10.42). 
The SINS survey used the SINFONI instrument at the ESO VLT in natural-seeing and AO-assisted modes to extract spatially-resolved maps of the velocity-integrated flux, relative velocity, and velocity dispersion of theemission line.To facilitate a general analysis of all the SINSgalaxies, FS09 defined a working criterion involving the observed velocity gradient (v_obs) and the integrated line width (σ_int) by which galaxies with > 0.4 were classified as `rotation-dominated', and those with < 0.4 were classified as `dispersion-dominated'.Using either quantitative kinemetric analysis <cit.> or qualitative assessment of the asymmetry in the velocity field and dispersion map, galaxies with kinematics consistent with rotation were further classified as either `discs' or `mergers'.Updated kinematic classifications for eight of the SINS objects were derived from the deep AO-assisted data collected as part of the SINS/zC-SINF survey <cit.>.Galaxies identified by FS18 as possibly hosting an AGN (e.g, Q2343-BX610) were rejected from our sample.The LA09 AO study utilised the OSIRIS near-infrared integral field spectrograph <cit.> at the W. M. Keck Observatory.LA09 quantified the rotational dynamic support using the ratio of shear velocity to intrinsic velocity dispersion () and used detailed morphological analysis in combination with the 2D kinematic maps to characterise the kinematic properties of each galaxy.For the purposes of kinematic classification, we equatewithin the nomenclature of FS09, and, except for sources identified by LA09 as merging systems, assign the LA09 galaxies as either rotation or dispersion-dominated.The grand design spiral galaxy Q2343-BX442 reported by <cit.> was also included in our z∼2 kinematic sample.NetEW and kinematic parameters for Q2343-BX442 were supplied by D. Law (priv. comm.).With= 0.83, and clear disc-like morphology, we classify Q2343-BX442 as `rotation-dominated'.Details of the z∼2 UV-selected BX galaxies that comprise our kinematic sample are summarised in Table <ref> along with references to the source studies in each case.§.§.§ KMOS^3D Galaxies We supplement our z∼2 kinematic sample with a redshift-selected subset (2.0<z<2.5) of 13 galaxies from the COSMOS field pointings of KMOS^3D () – an integral field survey of over 600 mass-selected galaxies at 0.7 <z< 2.7 using the KMOS instrument at the ESO VLT <cit.>.Thesurvey combined galaxy dynamics derived from , near-IR continuum, velocity, and velocity dispersion maps with structural parameters and multi-band imaging to establish a set of criteria by which robust kinematic classifications could be determined <cit.>.Thesources that we employ are part of the `high S/N disc sample' of rotation-dominated galaxies reported by <cit.> that focused on massive galaxies with log(M_⋆/M_⊙) ≳ 10(see Table <ref> for details).These galaxies all meet the less exacting FS09 criterion of> 0.4 for classification as rotation-dominated in our study.The galaxies were cross-matched with the D2 field of the Canada-France-Hawaii Telescope Legacy Survey <cit.> to obtain u^* g^' r^' i^' z^' multi-band photometry.Thephotometric data were transformed intomagnitudes to facilitate direct comparison with the z∼2 sources of FS09, LA09, FS18 and LA12.The transformation was achieved by performing spectrophotometry on rest-frame UV composite spectra derived from a sample of z∼3LBGs divided into quartiles on the basis of netEW (A. Shapley, priv. 
comm.).The composite spectra are representative of the average LBG spectrum in each quartile, and thus accurately trace the colour–colour evolution and colour–magnitude distribution of each quartile, and its component galaxies <cit.>.For eachgalaxy, the composite spectra were first redshifted to the observation frame, and flux density in theforest corrected for redshift-dependent absorption through the IGM.The spectra were then convolved with the bandpasses of the u^* g^' r^' filters, the resulting integrated flux densities normalised to the observed g^'-band magnitude, and simulated u^* g^' r^'-band magnitudes calculated.From these, the composite spectrum that best fit the observedphotometry for each galaxy was determined.A reddening correction <cit.> was applied to the best fit spectrum in each case as required to optimise the fit.The best fit normalised and reddened spectrum was then convolved with the bandpassess of thefilters, and -band magnitudes estimated for each galaxy.In all cases, the quartile 1 (strongest netEW absorption) or the quartile 2 (next strongest netEW absorption with some emission) composite spectra provided the best fit to the observed photometry, suggesting that thegalaxies are best classified as aLBG or G_a spectral types (see Section <ref>).Using the transformedphotometry, a standard colour–colour test <cit.> was applied to confirm that thesample satisfied (within the photometric uncertainties) the criteria to be selected as z∼2 LBGs.With the goal of measuring netEWs, seven of the `high S/N disc sample'galaxies were included as secondary science targets on our multi-object slitmasks using the LRIS instrument <cit.> at Keck on 26, 27 December 2016 and 20–22 January 2020.These data were reduced in the conventional manner using IRAF and in-house code.NetEWs were measured following the procedure of <cit.>.We successfully measured netEW for three WI15 galaxies from the January 2020 data. 
However, weather conditions and primary science constraints during the December 2016 run resulted in low S/N spectra for the remaining four WI15 galaxies that were too poor to enable the reliable measurement of netEW.The 2D and 1D spectra of these four galaxies show no evidence of aemission component of ≳10 Å.The q1 and q2 quartiles of <cit.> have average netEWs -14.9 Å and -1.1 Å, respectively, and showemission components of < 10 Å.Consequently, we can reasonably postulate that the netEWs for the four galaxies are similar to the q1 and q2 quartile galaxies.On this basis, we report netEW <0.0 for these four sources and provisionally assign them spectral types aLBG/G_a in our system §.§ z∼3 Kinematic sample §.§.§ AMAZE and LSD GalaxiesUsing the SINFONI integral field spectrograph <cit.> on the ESO Very Large Telescope (VLT) in natural-seeing and natural guide star adaptive optics (AO) observation modes respectively, the related AMAZE <cit.> and LSD <cit.> surveys conducted near-IR IFU spectroscopic observations on LBGs at redshifts z≳3 with R ≃ 24.5 (L* and brighter) corresponding to a mass range of log(M_⋆/M_⊙)≈ 10-11.<cit.> derived nebular emission-line kinematics for a subset of amaze and lsd galaxies by fitting the profile and shift of the[OIII]λλ4959,5007 doublet.Due to limited signal-to-noise, GN11 used a plane-fitting method to assign kinematic classificationsaccording to the following criteria: galaxies for which the velocity map shows a non-zero gradient after plane-fitting were classified as `rotating'; galaxies for which the velocity map could not be fitted with a plane were classified as `not-rotating'; and galaxies with velocity maps well-fitted by a plane but with inclination consistent with zero were labelled as `not classifiable'.GN11 further employed a rotating-disc modelling approach to estimate maximum rotation velocities and intrinsic velocity dispersions for galaxies classified as `rotating' in their sample. We extract 18 GN11 galaxies that overlap with our parent photometric catalog.Details of the AMAZE and LSD galaxies used in this work are summarised in Table <ref>.§.§.§ KDS galaxies As part of the KMOS Deep Survey (KDS), <cit.> investigated the kinematics of typical isolated field SFGs at z ≃ 3.5 in the mass range 9.0 < log(M_⋆/M_⊙) < 10.5 using the KMOS instrument at the ESO VLT <cit.>.With natural-seeing measurements of the [OIII]λ5007 emission line, TU17 extracted 2D kinematic maps and used beam-smearing corrections derived from dynamical modelling to determine values of intrinsic rotation velocity (V_C) and intrinsic velocity dispersion (σ_int) for the spatially-resolved target galaxies in their sample.Dictated by the signal-to-noise ratio of their data, TU17 used a simple empirical diagnostic based on the ratio V_C/σ_int to kinematically classify their sample.Galaxies were classified as `rotation-dominated' if V_C/σ_int > 1, and as `dispersion-dominated' if V_C/σ_int < 1. 
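To make the classification logic concrete, the sketch below (in Python, with entirely hypothetical inputs) illustrates a GN11-style plane fit to a velocity map and the simple ratio thresholds used by FS09 (0.4) and TU17 (1.0); the function names, grid, and noise values are our own illustrative choices, not the actual pipelines of the source studies.

```python
import numpy as np

def fit_velocity_plane(x, y, v):
    """Least-squares fit of v(x, y) = a*x + b*y + c to a 2D velocity map,
    in the spirit of the GN11 plane-fitting test."""
    design = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(design, v, rcond=None)
    return a, b, np.hypot(a, b)   # plane coefficients and in-plane velocity gradient

def classify_by_ratio(ratio, threshold):
    """Generic rotation/dispersion split, e.g. threshold = 0.4 for v_obs/2sigma_int (FS09)
    or 1.0 for V_C/sigma_int (TU17)."""
    return "rotation-dominated" if ratio > threshold else "dispersion-dominated"

# Toy velocity field: a weak shear plus noise, sampled on a small grid (all numbers made up)
xx, yy = np.meshgrid(np.arange(-3, 4), np.arange(-3, 4))
vmap = 15.0 * xx + np.random.normal(0.0, 10.0, xx.shape)
a, b, grad = fit_velocity_plane(xx.ravel(), yy.ravel(), vmap.ravel())
print(f"fitted plane gradient = {grad:.1f} (toy units)")   # GN11: 'rotating' if significantly non-zero
print(classify_by_ratio(0.6, threshold=0.4))               # 'rotation-dominated' under the FS09 criterion
```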
The SSA22-P2 pointing of the KDS survey targeted a field environmentto the south of the main SSA22 spatial overdensity <cit.> and yielded five morphologically isolated (non-merger) field galaxies that were in common with our parent photometric catalog.Details of the fiveKDS galaxies used in this work are given in Table <ref>.§.§.§ DSF2237a-C2With a redshift of z ≃ 3.3 and kinematics derived from measurements of the [OIII]λ5007 emission line, we include the LA09 galaxy, DSF2237a-C2, in our z∼3 kinematic sample.Drawn from the z∼3 LBG catalog of <cit.>, DSF2237a-C2 has v_shear/σ_mean = 0.6 ± 0.2, and is the only isolated LA09 galaxy where the observed velocity gradient "is consistent with rotation and unambiguously aligned with the morphological major axis" (LA09).For the purposes of kinematic classification, we treat DSF2237a-C2 similarly to the z∼2 LA09 galaxies (see Section <ref>), and assign a `rotation-dominated' classification to this galaxy. §.§spectral type classificationsWe employ herein the samespectral type classification scheme as that used by C09 at z∼3 and in Paper I at z∼2 to demonstrate the photometric segregation of LBGs with respect to netEW.For this purpose we define galaxies withdominant in absorption (netEW < -10.0Å) as `aLBGs', and galaxies withdominant in emission (netEW > 20.0 Å) as `eLBGs'.We further divide the remaining z∼2 LBGs into G_a and G_e spectral types with netEWs in the range -10.0 < netEW < 0.0 Å and 0.0 < netEW < 20.0 Å respectively.§ ANALYSIS AND RESULTS§.§ z∼3 LBGsC09 discovered that z∼3 LBGs segregate in colour–magnitude space according to their netEW, and determined photometric criteria to select pure sub-samples withdominant in absorption (aLBGs) anddominant in emission (eLBGs) based on broadband imaging.Figure <ref> shows our z∼3 kinematic sample overlaid on rest-frame UV colour–magnitude diagrams (CMDs) of z∼3 LBGs segregated according to theirspectral type adapted from C09.In the left panel, points labelled q1-q4 show the monotonic trend of the parent sample divided into numerical quartiles on the basis of netEW.The more positive netEW quartiles (weaker absorption and stronger emission) trend consistently toward fainter R-band magnitudes and bluer (G-R) colours.The primary cut (solid green line) statistically divides the mean colour and magnitude values of the aLBG and eLBG distributions.The dashed (red) and dotted-dashed (blue) lines indicate an offset of 1.5σ in colour dispersion from the primary cut for the aLBG and eLBG distributions respectively, and define one choice of photometric criteria for the selection of pure -absorbing and -emitting sub-samples that lie in the shaded red and blue regionsrespectively.The aLBG and eLBG distributions segregate on the CMD such that ≳90 percent of aLBGs (grey squares) are located above the red dotted-dashed line (in the white and red regions), and ≲10 percent of eLBGs (grey triangles) are located in the red region above the blue dashed line.The reverse is true for eLBGs; ≳90 percent of eLBGs lie below the dashed line (in the white and blue regions), and ≲10 percent of aLBGs are found in the blue region below the dotted-dashed line.Thus, the aLBG and eLBG distributions partially overlap in the central part of the CMD, but relatively pure subsets of aLBGs and eLBGs can be selected from the `high-confidence' red and blue regions respectively (see C09 and Paper I for quantitative details of thespectral type photometric selection method).Galaxies in the z∼3 kinematic sample classified as `rotating' or 
`rotation-dominated' lie at or above the primary cut coincident with the majority of aLBGs.Galaxies classified as `not-rotating' or `dispersion-dominated' are scattered on the CMD, but all galaxies below the primary cut, and in the `high-confidence' eLBG (blue) region are classified as `not-rotating' or `dispersion-dominated'.In addition, the not-rotating/dispersion-dominated sub-sample is on average fainter and bluer than their rotating/rotation-dominated counterparts, as indicated by the red circle and blue diamond in the left panel of Figure <ref>, and we note that the `not-rotating' sub-sample may include late-stage mergers (GN11).Thus, the rotating/rotation-dominated and not-rotating/dispersion-dominated subsets of the kinematic sample follow the aLBG and eLBG distributions within their known dispersion characteristics.This trend is reinforced when we include the spectroscopically-determined netEW data and assign aspectral type to each galaxy in our kinematic sample according to the definitions given in Section <ref>.The right panel of Figure <ref> shows the z∼3 kinematic sample colour-coded according to theirspectral type overlaid on the parent LBGs.The three aLBGs lie well above the primary cut; of these, two are confirmed rotators, and the third is `not classifiable' and could be a face-on disc (GN11).The four eLBGs, of which three are `not-rotating' or `dispersion-dominated', lie on or below the primary cut.The association between kinematics andspectral type is less clear for intermediate G_a and G_e LBGs – particularly for those that lie toward the centre of the CMD – and may be obscured by the `rotating'/`not-rotating' classification scheme of the source study that precludes an interpretation in terms of late-stage merging systems (GN11).Of the ten not-rotating/dispersion-dominated galaxies in the z∼3 kinematic sample, three are eLBGs, six have G_e spectral type, and one is borderline G_a with a netEW of -0.1 Å.The mean netEW for these 10 galaxies is +19.6 Å.The ten rotating/rotation-dominated sources consist of two aLBGs, two G_a LBGs, three borderline G_a/G_e spectral types with netEW ≈ 0 Å, two G_e LBGs, and one eLBG.Two `rotating' galaxies (SSA22a-D17 and Q0302-C131) have significant netemission (+18.14 and +27.60 Å respectively).The average netEW for the rotating/rotation-dominated sub-sample is +1.1 Å.These results suggest an empirical relationship between rest-frame UV colour,spectral type, and the kinematic properties of z∼3 LBGs.In the following section we extend our study to z∼2 where a larger number of LBGs with kinematic classifications, appropriate imaging, andspectroscopic data are available. 
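For readers who wish to experiment with this style of selection, the following Python sketch captures the generic logic of the colour–magnitude cuts described above; the slope, intercept, and offsets are placeholders rather than the calibrated criteria of C09 and Paper I.

```python
def classify_cmd(colour, mag, cut_slope, cut_intercept, offset_a, offset_e):
    """Schematic CMD-based selection of Lya spectral types.

    The 'primary cut' is modelled as a straight line in colour-magnitude space;
    galaxies at least offset_a redward (above) of it fall in the high-confidence
    aLBG region, and galaxies at least offset_e blueward (below) of it fall in the
    high-confidence eLBG region.  All parameters here are placeholders, not the
    calibrated C09 / Paper I criteria.
    """
    primary = cut_slope * mag + cut_intercept
    if colour >= primary + offset_a:
        return "high-confidence aLBG"
    if colour <= primary - offset_e:
        return "high-confidence eLBG"
    return "intermediate (not selected)"

# Toy example with made-up cut parameters
print(classify_cmd(colour=1.4, mag=24.0, cut_slope=0.0, cut_intercept=1.0,
                   offset_a=0.3, offset_e=0.3))   # -> 'high-confidence aLBG'
```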
§.§ z∼2 LBGs§.§.§ z∼2 Kinematics on the CMDFigure <ref> shows the 36 galaxies in the z∼2 kinematic sample overlaid on the parent population of 557 z∼2 LBGs dispersed on a (U_n-R)/R CMD adapted from Paper I.We note that most of the z∼2 kinematic sample is distributed toward the redder half of the CMD; 31 of 36 galaxies lie above the primary cut,.and only five galaxies lie below it.To a large degree, this tendency reflects the non-random selection bias in the source IFU studies, and the observational difficulty associated with obtaining high-quality IFU-based data for faint and/or compact sources.For example, the early observations of FS09 targeted galaxies with `higher mass, spatially resolved velocity gradients, large velocity dispersions, or spatially extended emission', and WI15 reports specifically on a `high S/N disc sample' drawn from the largersurvey.Due to the known relationship betweenand z∼2 LBG morphology <cit.>, and that between rest-frame UV colour,absorption/emission strength, and rotational dynamic support that we report herein, both of these selection biases inevitably result in samples that are redder than the average of the SFG population accessible by ground-based IFU spectrographs and our LBG parent sample.That being said, the aims of this paper do not require that the kinematic galaxies sample the parent LBG population in an unbiased way.It is sufficient that the kinematic sample spans a wide enough range in apparent magnitude and colour to identify a relationship with the broadband colour and magnitudes of LBGs, and that this relationship can be used to statistically segregate the galaxies by theirproperties and general kinematic type.This `red bias' of the z∼2 kinematic sample notwithstanding, all but one of the 22 rotation-dominated galaxies lie above the primary cut, and all are above the dotted-dashed line, in the white and red regions that enclose ≳95 percent of the aLBG population.The `high-confidence' aLBG (red) region is free from contamination by non-merger dispersion-dominated galaxies, and only one such source (Q1217-BX95, see Section <ref>) is found above the primary cut.Although few in number, all dispersion-dominated galaxies lie below the dashed line (in the white and blue regions) where ≳97 percent of the eLBGs are located.All but one galaxy (Q1623-BX502, see Section <ref>) below the primary cut are dispersion-dominated systems (including mergers).Thus, as illustrated by the mean positions of the `rd' and `dd' sub-samples in Figure <ref> (red circle and blue diamond respectively), the rotation- and dispersion-dominated subsets of the kinematic sample are coincident with, and may follow, the colour distributions of aLBGs and eLBGs on the CMD.To test these propositions, we use the non-parametric two-sided Kolmogorov–Smirnov (KS) test to evaluate whether the distributions of the rotation and dispersion-dominated sub-samples in (U_n-R) colour are statistically consistent with the null hypotheses that they are drawn from the (U_n-R) colour distributions of the parent aLBG and eLBG populations.For consistency in comparison, we limit the magnitude range of the underlying aLBGs and eLBGs to be the same as that spanned by the kinematic sample (23.31 < M_AB < 24.85), and the kinematic sample to galaxies that derive from the UV-colour-selected z∼2 LBG sample used to establish the photometric segregation and selection criteria described in Paper I, and shown on the CMDs in Figures <ref> & <ref>.Histogram plots in (U_n-R) colour of the sub-samples used in the KS 
test are shown in the top panel of Figure <ref>.The bifurcation in (U_n-R) colour is more readily visualised in the lower panel of Figure <ref> which shows colour histograms of the difference in normalised fraction of the same aLBG & eLBG, and rd & dd sub-samples (i.e., nFrac_aLBG - nFrac_eLBG and nFrac_rd - nFrac_dd).The KS tests yield a probability (p) of 0.969 that the dispersion-dominated galaxies in our kinematic sample and the underlying eLBG population derive from the same distribution of (U_n-R) colours.Moreover, we can reject with greater than 95% confidence (p = 0.046) the null hypothesis that the dispersion-dominated sub-sample and aLBGs are drawn from the same colour distribution.The analogous hypothesis that the rotation-dominated sub-sample and eLBGs share a common distribution of (U_n-R) colours can be similarly rejected with high confidence (p = 0.0002), and there is a probability of p = 0.181 that they derive from the same distribution of aLBG colours.The lower confidence of theKS result for the rotation-dominatedsub-sample with respect to aLBGs can be plausibly explained in terms of the non-random selection bias in the kinematic sample described above.Nevertheless, a KS test between the rotation and dispersion-dominated sub-samples indicates that we can reject with ≳ 95% confidence (p = 0.0425) – and ∼ 98% (p = 0.0203) if we include thediscs – the null hypothesis that they are drawn from the same distribution in (U_n-R) colour.Sources classified as mergers will often contain two galaxies and will have brighter magnitudes compared to the non-merging fraction of the kinematic sample.The mean R-band magnitude of the ten merging systems is m(R)_merger = 23.51 (orange star in Figure <ref>), which is ∼0.6 mag brighter than the mean magnitude of the non-merging fraction (m(R)_non-merger = 24.10).Moreover, galaxies in the kinematic sample segregate in colour depending on their classification.Mergers would statistically contain two galaxies roughly randomly selected from the aLBG, eLBG, G_a, and G_e populations (modulo any environmental effects).The mean (U_n-R) colour (and standard deviation) of the mergers is 1.03 (0.38) compared to 1.43 (0.37) and 0.70 (0.37) for the rotation and dispersion-dominated sub-samples respectively.The statistical evidence connects the kinematic properties of z∼2 LBGs tospectral type via their mutual association with rest-frame UV colour on the CMD.We next look to investigate this trend directly using the 28 galaxies in the kinematic sample for which spectroscopically determined netEWs are available.The top panel of Figure <ref> shows this sub-sample with their assignedspectral types overlaid on the parent z∼2 LBGs.Seven out of eight rotation-dominated galaxies are aLBG or G_a spectral types, and all sevenrotators are net -absorbers (see Section <ref>).Three out of four dispersion-dominated galaxies are net -emitters (eLBG or G_e spectral types.The lower panel of Fig. 
<ref> shows just those members of the kinematic sample that meet the criteria for classification as aLBGs or eLBGs overlaid on the parent sample dispersed in colour-magnitude space with symbols colour-coded on a red-blue scale according to their measured netEW as described in Paper I.Seven of the 10 aLBGs are rotation-dominated and three are mergers.All rotation-dominated aLBGs lie above the primary cut, with six of the seven above the dashed blue line in the `high-confidence' aLBG region.The three eLBGs are located near or below the z∼2 eLBG distribution mean, with two galaxies (including the dispersion-dominated Q2343-BX418) below the dotted-dashed red line in the `high-confidence' eLBG region.Overall, not only do the rotation and dispersion-dominated sub-samples lie on the CMD in a way that is statistically associated with the distribution in colour of the -absorbing and -emitting populations, they have spectroscopically determinedspectral types that are consistent with this trend.Moreover, reviewing both panels of Figure <ref>, the broadband photometric selection criteria select nearly pure samples of rotation- and dispersion-dominated systems and potentially 100% pure selection for galaxies meeting our netEW criteria.§.§.§ eLBGs and dispersion-dominated galaxiesThe absence of galaxies with large, disc-like rotating structures in the -emitting half of the CMD, and the fact that the `high-confidence' aLBG region is almost completely devoid of net -emitting and/or dispersion-dominated sources, suggest that strongemission is linked to dispersion-dominated kinematics.The strength of such a claim is, however, challenged by the relative scarcity of faint, blue -emitting galaxies in our sample.In this regard, a closer examination of the eLBG and dispersion-dominated galaxies in the kinematic sample is instructive. The four dispersion-dominated galaxies in the z∼2 kinematic sample (diamond symbols in Figure <ref> and the top panel of Figure <ref>) have a mean netEW of +14.4 Å.Of these, three are net -emitters (eLBG or G_e spectral types).The exception is Q2346-BX405 which has a netEW of -8.61 Å (G_a spectral type).While FS09 ascribe a `dispersion-dominated' classification to Q2346-BX405, they note that it also has some kinematic features that are consistent with disc rotation.Moreover, they point out that Q2346-BX404 and Q2346-BX405 are an interacting pair – a feature that could result in the partial disruption of rotation kinematics and/or a subsequent classification as a dispersion-dominated system.In any event, Q2346-BX405 and its partner Q2346-BX404 (marked by asterisks in the top panel of Figure <ref>) lie near the centre of the CMD at positions we find to be typical of mergers, and would be excluded from selection as aLBGs or eLBGs in our photometric method.The other dispersion-dominated galaxy of interest within our framework is Q1217-BX95 (galaxy `a' in the top panel of Figure <ref>).While its location on the CMD is within the eLBG colour–magnitude distribution, it has a colour ((U_n-R) = 1.2) that is more typical of a rotation-dominated net -absorber.Q1217-BX95 is relatively massive (log(M_⋆/M_⊙)=10.08), but is compact (R_e = 0.6 kpc), has low gas-phase metallicity (12 + log(O/H) = 8.15 ± 0.02), and displays a 5σ detection of the auroral [OIII]λ4363 emission line[Gas-phase metallicities supplied by C. Steidel (priv. comm.) 
determined using the O3N2 index and the calibration described by <cit.> and <cit.>.Values are quoted using the scale of <cit.> in which solar metallicity has the value 12 + log(O/H) = 8.69].On the basis of this ensemble of properties, and a netEW of +10.2 Å, we would predict Q1217-BX95 to have dispersion-dominated kinematics, consistent with the classification for this galaxy reported by LA09.We note that photometric errors deriving from poor seeing in the Q1217 field images (1.56 in U_n) may be responsible for the (U_n-R) colour of this galaxy (C. Steidel, priv. comm.).The three spectroscopic eLBGs in our kinematic sample (Q2343-BX418, Q2343-BX660 and Q1623-BX502) are the three sources furthest below the primary cut on the CMD, and lie close to the eLBG mean (see bottom panel of Figure <ref>).They are, however, kinematically diverse.The dispersion-dominated Q2343-BX418 (galaxy `b' in the top panel of Figure <ref>) has low stellar mass (log(M_⋆/M_⊙)= 9.0), low metallicity (12 + log(O/H) = 7.9 ±0.2), is very blue ((U_n-R) = 0.32), very compact (R_e = 0.8 kpc), and has strongemission (netEW = +53.5 Å).Although having kinematic structure consistent with merging compact galaxies (LA09), Q2343-BX660 (galaxy `c' in the top panel of Figure <ref>) is otherwise similar to Q2343-BX418; it has relatively low stellar mass (log(M_⋆/M_⊙)= 9.9), low metallicity (12 + log(O/H) = 7.99 ±0.05), compact morphology (R_e = 1.6 kpc), very blue colour ((U_n-R) = 0.36), and is a strongemitter (netEW = +20.4 Å).With a netEW of +28.5 Å, and a `rotation-dominated' kinematic classification derived from the deep AO-assisted observations of FS18, Q1623-BX502 (galaxy `d' in the top panel of Figure <ref>) appears to be inconsistent with our proposition that large netEW is a useful predictor of dispersion-dominated kinematics.Although its magnitude and redder colour (U_n-R) = 0.72) than the other eLBGs place it in the `white strip' on the CMD that contains both aLBGs and eLBGs, its relatively low stellar mass (log(M_⋆/M_⊙)= 9.4), compact morphology (R_e = 1.1 ± 0.7kpc), low gas phase metallicity (12 + log(O/H) = 8.09), are typical of what we would predict for a dispersion-dominated galaxy with such strongemission.Earlier reports based on seeing-limited and AO observations classify Q1623-BX502 as `dispersion-dominated' <cit.> and the ratios Δv_obs/2σ_tot (0.50 ± 0.16) and V_rot/σ_0 (2.0_-0.8^+1.5) for Q1623-BX502 reported by FS18 are equivalent (within the stated uncertainties) to the threshold values (0.4 and √(3.36)≈ 1.83 respectively) used by FS09 and FS18 to classify galaxies as either rotation or dispersion-dominated (see Section <ref>).Thus, Q1623-BX502 is a borderline case in terms of its kinematics, and, by its position on the CMD, would not be selected by our kinematic broadband photometric criteria (see Paper I).To summarise, although some of the -emitting sources in the z∼2 kinematic sample are not classified as `dispersion-dominated' in the source IFU studies, they are characteristically compact, blue in colour, have relatively low stellar mass, low gas-phase metallicity, and highly disordered kinematics in which the rotation signature (if any) is weak and more typical of low angular momentum `orbital-type', rather than the strong `disc-like' rotation characteristic of theabsorbers.We note that those cases that are borderline or complex do not meet our broadband photometric criteria, and would not be selected. 
§.§ Kinematics vs netEW – Direct Comparisons
§.§.§ Kinematic classifications vs netEW
With even a naive application of the kinematic classifications derived from the literature, there is a clear bifurcation in the average Lyα spectral properties of the non-merger rotation- and dispersion-dominated populations in the z∼2 and z∼3 kinematic samples. The mean netEW for rotation-dominated galaxies in the z∼2 sample is -7.0 Å, and +14.4 Å for the dispersion-dominated sub-sample (cf. +1.1 and +19.6 Å for the z∼3 rotating/rotation-dominated and not-rotating/dispersion-dominated sub-samples respectively). A two-sided KS test on a combined (z∼2 + z∼3) sample comprising eighteen rotation- and 14 dispersion-dominated galaxies rejects with ∼99% confidence (p = 0.012) the null hypothesis that the rotation- and dispersion-dominated galaxies were drawn from the same distribution of netEW values.
§.§.§ Quantitative kinematics versus v_obs/2σ_int
We now look to investigate the relationship between netEW and galaxy kinematics using quantitative parameters derived from the IFU kinematic maps of the respective source studies, in addition to the classifications that we have used up to this point. In doing so, we note that methods to measure rotation velocity and global velocity dispersion differ between surveys. Although we have taken care to compare like with like as much as possible, homogenising these methods is beyond the scope of this paper, and we acknowledge that this is a potential source of systematic errors. For this purpose, we invoke the ratio v_obs/2σ_int used by FS09 (and with modified notation by FS18), where v_obs/2 is half the maximum observed Hα velocity gradient across the source, and σ_int (σ_tot in FS18) is the integrated velocity dispersion derived from the linewidth of the Hα emission in the spatially collapsed object spectrum. We note that v_obs/2 and σ_int are equivalent to the shear velocity (v_shear) and net velocity dispersion (σ_net) respectively, in the nomenclature of LA09. We similarly equate, for the purpose of this analysis, the observed rotation velocity (V_obs) and velocity dispersion (σ_obs) of TU17 with v_obs/2 and σ_int respectively. As a quantity derived from the observed velocity field, v_obs/2 is related to the actual rotation velocity by a factor of 1/sin(i), where i is the galaxy inclination angle, and σ_int incorporates contributions to the integrated linewidth from both large-scale velocity gradients and intrinsic (random) gas motions. Nevertheless, v_obs/2σ_int has proven to be a useful (if approximate) probe of the nature of the dynamical support (FS09), and provides here a ready means to compare kinematic results from different studies. Values of v_obs/2, σ_int and the ratio v_obs/2σ_int for a subset of 21 z∼2 and five z∼3 galaxies in our kinematic samples, derived from their respective source studies, are listed in Tables <ref> & <ref>.
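As a concrete illustration, the ratio and the KS comparison above reduce to a few lines of Python; the 0.4 threshold is the FS09 value, while the function name and the data arrays below are placeholders, not the tabulated measurements.

```python
import numpy as np
from scipy.stats import ks_2samp

def vobs_over_2sigma(v_obs, sigma_int):
    """FS09-style ratio: half the observed velocity gradient over the integrated dispersion."""
    return 0.5 * np.asarray(v_obs) / np.asarray(sigma_int)

# Hypothetical per-galaxy measurements (km/s) and net Lya EWs (Angstrom) -- placeholders only
v_obs     = np.array([220.0,  90.0, 160.0,  40.0, 180.0,  60.0])
sigma_int = np.array([110.0, 120.0,  95.0,  85.0, 100.0, 105.0])
netEW     = np.array([-12.3,  15.0,  -4.1,  28.0,  -9.0,  21.0])

ratio = vobs_over_2sigma(v_obs, sigma_int)
is_rd = ratio > 0.4                       # FS09 rotation-dominated criterion

# Two-sided KS test between the netEW distributions of the two kinematic classes
stat, p = ks_2samp(netEW[is_rd], netEW[~is_rd])
print(f"v_obs/2sigma_int = {np.round(ratio, 2)}, KS statistic = {stat:.2f}, p = {p:.3f}")
```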
Figure <ref> shows v_obs/2σ_int (with supplied uncertainties) for these galaxies plotted versus netEW. Galaxies that are common between the different source studies and observation modes, and for which we have multiple estimates of v_obs/2σ_int, lie on vertical dotted lines to identify them for clarity. LA09 conclude that observational bias is an unsatisfactory explanation for differences in the global kinematic properties of the Keck/OSIRIS (LA09) and VLT/SINFONI (FS09) samples, and similar kinematics were found for the galaxies that were successfully observed by both surveys. There is also excellent agreement between the kinematic and morphological properties of Q1623-BX502 determined by both seeing-limited (FS09) and AO-assisted observations (FS09, LA09, FS18). Moreover, all the galaxies plotted in Figure <ref> are drawn from the related z∼2 and z∼3 UV-colour-selected catalogs of <cit.> and are, therefore, comparable in terms of their intrinsic properties (see Section <ref>). We can be confident, therefore, that the dispersion versus netEW shown in Figure <ref> derives from a genuine range in the physical properties of the galaxies (see Section <ref>), and that the kinematics probed by our sample, from the multiple surveys, reliably reflect the true continuum of kinematic character ranging from genuinely dispersion-dominated to rotationally supported systems. The horizontal dashed line in Figure <ref> at v_obs/2σ_int = 0.4 marks the threshold for the classification of galaxies as rotation- or dispersion-dominated as established by FS09 for both seeing-limited and AO-assisted observations at typical spatial resolution. Where this criterion is applied to mergers, v_obs/2 can be interpreted as the projected `orbital velocity' of the system. The upper right-hand corner of Figure <ref> is unpopulated; there are no net Lyα-emitters with strong rotational signatures. All galaxies with strong rotational support are Lyα absorbers and occupy the upper left corner of the plot. The lower left-hand corner (v_obs/2σ_int ≲ 0.4 and netEW ≲ 0 Å) is dominated by merging systems. The only salient exceptions to these trends are: Q2346-BX405 (netEW = -8.6 Å), which is classified as dispersion-dominated (v_obs/2σ_int = 0.21 ± 0.06), but shows kinematic features consistent with disc rotation and is part of an interacting pair (FS09); and Q1623-BX502 (netEW = +28.5 Å) and SSA22a-D3 (netEW = +11.47 Å), both of which have low angular momentum (v_obs/2 ∼ 40 and 53 km s^-1 respectively) and are borderline for classification as rotation- or dispersion-dominated. Parameterisation of the kinematics of our sample via v_obs/2σ_int allows us to statistically test the strength of the association between galaxy kinematics and netEW independent of the inherited kinematic classifications. For galaxies with multiple estimates of v_obs/2σ_int, we calculate average values and derive a sample of 17 discrete non-merger galaxies. A Spearman rank order correlation test on this sub-sample yields a coefficient (ρ) of ∼0.577 and a probability (p) of 0.0154 (∼2.2σ significance) that this moderate to strong degree of association could arise from an uncorrelated sample. While v_obs/2σ_int is a useful empirical measure of rotational dynamic support, a more robust and physically intuitive parameter for this purpose is the ratio v_rot/σ_0 invoked by FS18, where v_rot is the intrinsic rotation velocity corrected for beam smearing and inclination angle according to: v_rot = C_PSF,v × (v_obs/2) / sin(i), and σ_0 is the intrinsic local velocity dispersion determined from correction of the observed local velocity dispersion (σ_0obs) for beam smearing according to: σ_0 = C_PSF,σ × σ_0obs, where sin(i) is the correction for inclination of a galaxy
from the plane of the sky, and C_PSF,v and C_PFS,σ are the beam-smearing correction factors for rotation velocity and velocity dispersion respectively, in the rotating-disc framework of FS18.For the SINS/zC-SINF galaxies of FS09 and FS18, and Q2343-BX442 from LA12, we use the v_rot, σ_0 andvalues as published.Applying Equation (<ref>) to the LA09 galaxies, we use the average correction factor of the FS18 sample (i.e., C_PSF,v = 1.3) in a manner similar to <cit.>, and sin(i) = π/4, the mean inclination angle correction for a distribution of randomly inclined discs (LA09) to estimate v_rot.For the purposes of this comparison, σ_mean in LA09 approximates to σ_0obs in FS18.Following Equation (<ref>), we apply to the LA09 sample a correction of C_PSF,σ = 0.85, the average correction factor for the FS18 sample.Uncertainties are propagated from the LA09 estimates for v_shear and σ_mean and added in quadrature for the ratio .For the z∼3 galaxies, we similarly extract from the source references values for intrinsic rotation velocity (V_max in GN11 and V_C in TU17) and intrinsic velocity dispersion (σ_int in both GN11 and TU17), and equate these to v_rotand σ_0 respectively for the purposes of this analysis.Values of v_rot, σ_0 andused to construct Figure <ref> are listed with supplied or derived uncertainties, and associated netEWs, in Tables <ref> & <ref>.Figure <ref> shows the ratio of intrinsic rotation velocity to intrinsic velocity dispersion () thus derived for subsets of our kinematic samples, plotted as a function of netEW and rest-frame UV colour, i.e., (U_n-R) for z∼2 (left) and (G-R) for z∼3 (right).Figure <ref> shows theversus netEW relationship for the z∼2 and z∼3 samples with galaxies colour-coded according to their source survey.A Spearman rank correlation test for theversus netEW relationship for a combined non-merging subset of 18 z∼2 and z∼3 galaxies gives a ρ-value of -0.69, and a p-value of 0.0014 that allows us to reject the null hypothesis that there is no association betweenand netEW with ≳ 99.8% (∼3σ) confidence.The z∼2 subset of strongly rotating `disc-like' sources (≳3) have the most negative values of netEW, and exclusively populate the top left corner of the left panel in Figure <ref>, significantly above the horizontal dashed line at = √(3.36) (∼1.83) that corresponds to the point above which rotation starts to dominate over velocity dispersion in the dynamical support of turbulent discs (FS18).Similarly, non-merger galaxies with the lowest values of have high netEWs and lie toward the bottom right.Mergers are confined to the bottom left corner of the plot.In general, there is an increase in the dispersion of the sample in thedimension relative to thecase for the z∼2 sample (cf. 
Figure <ref>)indicating that as the rigour of the kinematic analysis is increased, the relationship between netEW and the degree of rotational dynamic support is strengthened.Differences in the details of the colour-colour selection criteria of the parent photometric catalogues (see Paper I) preclude direct comparison of the colour dispersion of the z∼2 and z∼3 kinematic samples.However, in both redshift ranges (see Figure <ref>), there is a clear trend from red to blue with decreasing rotational dynamic support (as measured by ) that provides quantitative verification, at a galaxy-by-galaxy level, of the statistical relationship between kinematic type, rest-frame UV colour, and netEW, as described in Sections <ref> & <ref>.§.§.§ Other galaxy propertiesIn Figure <ref>, we present netEW as it relates to broadband colour and kinematics.Figure <ref> plots netEW and kinematics in relation to several other properties previously reported for the galaxies in our sample: specifically, stellar mass (M_⋆), star-formation rate from SED fitting (SFR_SED), galactic size (R_e), age, gas fraction (μ), and dynamical mass (M_dyn).In each case, the plots show the full sample (left panels), as well as the sample with the mergers removed (right panels), in order to better understand the behaviour associated with discrete and isolated galaxies and those involved in interactions.Symbol sizes are scaled as indicated in the caption and respective plot legends.The range of colour gradients omits outliers to help highlight any systematic trends.Table <ref> lists the kinematic, spectroscopic, and physical properties used to construct the plots.In all cases, the quoted stellar properties (M_⋆, SFR_SED and stellar population age) were derived from evolutionary synthesis modeling of optical to near-IR broadband spectral energy distributions (SEDs) supplemented with mid-IR photometry when available.LA09,FS09 and FS18 used <cit.> models with a <cit.> initial mass function (IMF), the <cit.> reddening law, and solar metallicity. Best fits were achieved with either constant or exponentially declining SFRs (see source studies for details).The modelling procedures are described in detail by <cit.> (LA09) and inAppendix A of FS09 (FS09, FS18).For Q2343-BX442, LA12 used modelling procedures described by <cit.>, <cit.> and <cit.> to fit a SED constructed from ground-based U_nGℛJK_s, HST/WFC3 F160W and Spitzer IRAC photometry, with a Chabrier IMF, Calzetti reddening, and a constant star formation history.LA12 used updated Charlot & Bruzual population synthesis models that resulted in slightly lower estimates for M_⋆, SFR_SED and stellar population age than estimates in the literature for similar samples <cit.>.For the SINS/zC-SINFONI galaxies of FS09 and FS18, we use the galactic half-light radius (R_e) values estimated from HST rest-frame optical (H-band) imagingwhere available from <cit.>.Otherwise, sizes shown areradii derived from IFU maps.In any event, the difference in sizes betweenand H-band emission is small, such that the choice made here is inconsequential (FS18).For LA09 galaxies, R_e is the radius of nebula emission.For Q2343-BX442, we use the total luminous radius of ∼8 kpc estimated by LA12 from HST/WFC3 infra-red imaging of the stellar continuum. 
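For reference, the beam-smearing and inclination corrections introduced in the previous subsection, together with the Spearman test quoted there, can be sketched in a few lines of Python; the correction factors shown are the LA09 sample-average values quoted in the text, while the data arrays and function names are placeholders rather than the tabulated measurements.

```python
import numpy as np
from scipy.stats import spearmanr

def intrinsic_vrot(v_obs_half, c_psf_v=1.3, sin_i=np.pi / 4):
    """v_rot = C_PSF,v * (v_obs/2) / sin(i); defaults are the sample-average values
    adopted for the LA09 galaxies in the text."""
    return c_psf_v * v_obs_half / sin_i

def intrinsic_sigma(sigma_obs, c_psf_sigma=0.85):
    """sigma_0 = C_PSF,sigma * sigma_0,obs (beam-smearing correction only)."""
    return c_psf_sigma * sigma_obs

# Placeholder observed quantities (km/s) and net Lya EWs (Angstrom)
v_obs_half = np.array([110.0, 45.0, 80.0, 20.0, 95.0])
sigma_obs  = np.array([ 70.0, 95.0, 60.0, 90.0, 65.0])
netEW      = np.array([-15.0, 12.0, -3.0, 30.0, -8.0])

vrot_over_sigma0 = intrinsic_vrot(v_obs_half) / intrinsic_sigma(sigma_obs)
rho, p = spearmanr(vrot_over_sigma0, netEW)
print(f"v_rot/sigma_0 = {np.round(vrot_over_sigma0, 2)}, Spearman rho = {rho:.2f} (p = {p:.3f})")
```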
To facilitate direct comparison across our sample, we use cold gas fractions (μ = M_gas/(M_gas + M_⋆)) from <cit.> who estimated the mass of the gas associated with star formation using the local empirical correlation between star formation rate per unit area and gas surface density <cit.>.Dynamical masses (M_dyn) for SINS/zC-SINFONI galaxies are derived from modelling in the `rotating-disc' framework of FS18, or by one of the related methods described in detail by FS09 for galaxies unsuitable for modelling as discs.LA12 used a similar kinematic disc-fitting algorithm to model the velocity field of Q2343-BX442 and derive estimates for M_dyn.For LA09 galaxies, M_dyn is the single-component dynamical mass within the radius probed by nebular emission derived from the integrated velocity dispersion as per <cit.>.The top panels show that rotation-dominated galaxies are larger and most are older than their dispersion-dominated counterparts.There is no clear trend among the -absorbing rotators, but the -emitting dispersion-dominated galaxies are all similarly young.The centre panels show that the more strongly-rotating -absorbers typically have lower gas fractions and higher stellar masses than most of the dispersion-dominated -emitters, but there is no apparent trend in either of these properties within the rotation-dominated sub-sample.Although the dispersion-dominated galaxies have consistently high gas fractions and usually lower stellar masses, there is significant scatter in the stellar mass/netEW trend in our small sample.For example,galaxy Q1217-BX95 (large grey diamond in the centre panels) has relatively strongemission (netEW = +10.2 Å), dispersion-dominated kinematics ( = 0.34), but a relatively large stellar mass (∼3×10^10 M_⊙) in the context of our sample. 
Finally, the bottom panels show larger dynamical masses and star formation rates for rotation-dominated sources compared to dispersion-dominated galaxies.The small size of the kinematic sample that meets the necessary selection criteria for this work (see Section <ref>) prevents the robust investigation of the relationship between netEW and kinematics at fixed values of other galactic properties, but even with this sample, we can see the emergence of trends that are consistent with relationships reported separately between kinematics, netEW and the respective physical properties.These illustrative plots suggest the potential value of a holistic multi-dimensional approach to the study of galaxy evolution in large samples and on large scales, and provide motivation to collect larger samples with consistent data that would inform the strength of these (and other) relations and how they are connected causally or otherwise.§ DISCUSSION §.§ Context The results presented here are part of a broader project that aims to examine the intrinsic and environmental properties of high redshift galaxies in a holistic manner to lend insight into their evolution.C09 and Paper I describe a method by which large samples of z∼2-3 LBGs with knownspectral type can be selected from optical broadband imaging datasets, and by so doing suggest a means by which a wide range of other spectral, physical, and environmental galaxy properties known to correlate withmight be studied in large samples and on large scales using optical broadband imaging and/ordata alone.The power of this approach was demonstrated by <cit.> who studied the large-scale clustering properties and halo mass distributions of photometrically selected z∼3 aLBG and eLBG populations via correlation function analysis (see Section <ref>).-sensitive properties accessible by this approach include: size and morphology <cit.>; gas fraction <cit.>; parameters derived from SED fitting such as stellar mass, SFR, UV continuum slope/reddening and age <cit.>; gas covering fraction and outflow kinematics <cit.>; low and high ionisation absorption line and nebula emission line strengths <cit.>; and, dark matter halo mass, and large-scale spatial distribution <cit.>.In this work we add the relationships between , broadband colour, and nebular emission-line kinematics to this list. 
§.§ The importance of kinematics Galaxy kinematics are a characteristic feature of the Morphology–Density Relation (MDR) in the local Universe <cit.>, and are a key constraint for simulations that aim to understand the mechanisms by which galaxies evolve over cosmic time <cit.>.Despite its critical role in reconciling observations with physical and computational models, acquiring kinematic information for thousands to millions of galaxies over large scales from high to low redshift is not possible for the foreseeable future – in particular for galaxies at z≳ 4.Thus, there is strong motivation to explore connections between kinematics and other properties that may serve as proxies for predicting kinematic type where such detail is not directly obtainable.At z ≳ 4, the bulk of the galaxy population is only accessible via deep imaging and, for galaxies reachable with deep spectroscopy, oftenis the only accessible feature at optical/near-IR wavelengths.This is also the case for the fainter and lower-mass z∼2-3 galaxies.In this work we have reported a fundamental relationship between galaxy kinematics and netEW that has potential to serve as a predictor of kinematic type in such cases.In addition, we have proposed a broadband imaging method whereby samples of known kinematic type can be selected via the photometric segregation ofspectral types on the CMD, and from which the most promising galaxies can be efficiently selected for expensive follow-up observations.We do not propose that the coarse kinematic behaviour derived from this method is a substitute for the detailed and more accurate kinematic information available from IFU-based studies.Rather, we suggest this method as a complementary approach that can elucidate the typical kinematic character of large samples for which individual IFU measurements are not feasible, and at redshifts z ≳ 4 where such measurements are currently not possible.Together, these results provide a means to explore all the above -sensitive properties, and their relation to generalised kinematic type in very large samples of galaxies on large scales and potentially out to high redshift in datasets from current and future large-area photometric campaigns.For example, we envisage application of our method to datasets from the all-sky LSST that will select hundreds of millions of LBGs in redshift ranges from z∼2-6 across many hundreds to thousands of Mpc.§.§ Comparison with low-redshift analoguesThe fourteen low redshift (0.03<z<0.2) star-forming galaxies comprising theReference Sample (LARS) have continuum size, stellar mass, and rest-frame absolute magnitudes typical of Lyman break analogues in the local Universe selected to have properties similar to star-forming galaxies at 2<z<3 <cit.>.Accordingly, they provide a useful low-z reference sample for comparison with our z∼2-3 results.<cit.> derive values for shear velocity (v_shear) and intrinsic velocity dispersion (σ_0) from thekinematic maps of the LARS galaxies in a manner similar to the methods used by FS09, LA09 and FS18 at z∼2.They classify each galaxy as either a `rotating disc', `perturbed rotator', or as having `complex kinematics' based on the qualitative appearance of their velocity fields and the classification scheme introduced by <cit.>.HE16 show that the LARS galaxies are characterised by high intrinsic velocity dispersions in the range 40-100 (54 median), low shear velocities (30-180, 65 median), and v_shear/σ_0 ratios ranging from 0.5 to 3.2.In this respect the LARS galaxies are kinematically similar 
to turbulent star-forming galaxies observed at high redshift (HE16), including the z∼2-3 LBGs studied herein (cf. the kinematic parameters in Tables <ref> & <ref>). The LARS team used a synthetic narrow-band imaging method to measure total flux in the region of , and the subtraction from this of a modelled stellar continuum spectrum to derive theflux (or flux decrement).Pixel-wise SED fitting of the same spectral models as were used for continuum subtraction was then used to estimate the continuum flux density at 1216 Å by which the flux differential was divided in order to calculate aEW.The bandpass used to measuresampled rest-frame wavelengths between about 1205 and 1230 Å (depending on the exact redshift of the target galaxy).Accordingly, the measured flux included bothin emission and at least part of theabsorption signal blueward of 1216 Å.For comparison with our z∼2-3 samples, we use the integrated values ofEW given by HE16 and <cit.> that were calculated using fluxes and flux densities integrated over a circular aperture with twice the isophotal Petrosian radius determined for each galaxy from the image that transmitsand the far-UV continuum (see <cit.> and <cit.>).LARS galaxies with higher shearing velocities (v_shear≳ 50) have preferentially lowerEWs and lowerescape fractions than their lower angular momentum counterparts.Moreover,EW andescape fraction correlate with v_shear/σ_0 in the sense that the LARS galaxies with `complex kinematics' have higherEWs and higherescape fractions than systems with a kinematic signature indicative of a `perturbed rotator' or a `rotating disc'.While these observations of HE16 are in good qualitative agreement with our findings, the confidence with which we can directly compare the LARSEW values with the netEWs of the z∼2-3 LBGs is moderated by a number of factors:* Due to the relatively broad bandpass of the synthetic filters used for the LARS imaging, a large amount of stellar continuum light is transmitted along with thesignal.The reliability of the measuredflux is, therefore, critically dependent on the quality of the stellar continuum modelling, and in cases where stellar light dominates over , small errors in the modelling can result in large errors in the derived properties (M. Hayes, priv. comm.).* <cit.> found that due to the spatial redistribution ofunder the influence of resonant scattering processes, the computation of integratedquantities for their galaxies is a strong function of the radius over which values were summed.Figure 4 in <cit.> shows the aperture-dependent behaviour ofEW (and other properties) for the LARS galaxies.* The LARS team chose to adopt a convention of reportingEW in emission only such that even if an integratedEW with net absorption was measured, it has been reported only in terms of its emission component, i.e., with aEW of zero (M. Hayes, priv. comm.). 
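The integrated EW computation at the heart of that comparison reduces to simple arithmetic; the sketch below is a schematic version only, with placeholder numbers, and omits the aperture handling and continuum modelling that the real LARS pipeline performs.

```python
def synthetic_narrowband_ew(f_total, f_continuum, f_lambda_1216):
    """Integrated Lya EW from a LARS-style synthetic narrow-band measurement.

    f_total       : total flux through the Lya bandpass inside the chosen aperture
    f_continuum   : modelled stellar-continuum flux through the same bandpass
    f_lambda_1216 : continuum flux density at 1216 A from pixel-wise SED fitting
    Positive values denote net emission, negative values net absorption
    (note the LARS convention quotes only the emission component).
    """
    return (f_total - f_continuum) / f_lambda_1216

# Toy fluxes (any consistent units); real measurements require careful aperture
# and continuum treatment as described above
print(synthetic_narrowband_ew(f_total=5.2e-14, f_continuum=4.0e-14, f_lambda_1216=6.0e-16))
# -> 20.0 (Angstrom) of net Lya emission for these placeholder numbers
```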
Figure <ref> shows a v_shear/σ_0 versusEW plot of the HE16 galaxies overlaid on aversus netEW plot of our combined z∼2 and z∼3 kinematic samples.The curve-of-growth results of <cit.> indicate that Two LARS galaxies (LARS04 & LARS06) are net absorbers at all apertures within their field of view, but these are reported as havingEW = 0 under the convention adopted by the LARS team.While the magnitude of any offset is difficult to quantify, these two galaxies would move toward the left on the plot if this net absorbing character was reflected in the quotedEWs.Taking into account this caveat, and the other sources of systematic uncertainty notwithstanding, the relationship between galaxy kinematics andobservables in the LARS sample is almost indistinguishable from our result at z∼2-3.HE16 surmise that there is a causal relationship between turbulence in actively star-forming galaxies and ISM conditions that facilitate the escape ofphotons, and further speculate that dispersion-dominated kinematics are a necessary requirement for a galaxy to have a significant amount of escapingradiation.HE16 also note that, like the z∼2 samples of FS09 and LA09, galaxies in their sample with lower stellar mass (M_⋆) typically have lower v_shear/σ_0 ratios and that the strongly -emitting LARS-LAEs are preferentially found among the systems with M_⋆ ≲ 10^10 M_⊙. These conclusions for the low-z LARS galaxies are consistent with our results that show a link between higher netEW and dispersion-dominated kinematics in z∼2-3 LBGs.Moreover, they support our general proposition thatemission may be a useful diagnostic of galaxy kinematics and other properties, particularly at high redshifts where the number density of LAEs is higher <cit.>, and low-mass star-forming dispersion-dominated galaxies are more prevalent <cit.>. §.§ Implications for galaxy evolution science§.§.§ Kinematics and Large-Scale Structure at z∼2-3A key finding of this work, Paper I, and C09 is thatspectral types in populations of z∼2-3 LBGs segregate consistently with a range of intrinsic galactic properties.For example, we show in Sections <ref> & <ref> that eLBGs are characteristically, compact, blue, low-mass, low-metallicity, presumably young, dispersion-dominated systems, and that aLBGs are typically, red, spatially-diffuse, high-mass rotation-dominated galaxies, usually with disc-like morphology.<cit.> use the same photometric selection method to isolate large samples (∼10^5) of z∼3 aLBGs and eLBGs, and investigated their respective dark matter halo mass and spatial distribution on large scales using two-point correlation function analysis.They find that aLBGs preferentially reside in group and cluster environments, and eLBGs are typically found on the outskirts of groups and clusters and in the field.Moreover, cross-correlation function results showed that the two spectral types avoid each other on single halo to cluster halo scales.Similarly, and consistent with simulations that predict a decrease inEW with increasing overdensity in z∼2 protoclusters <cit.>, spectroscopic follow-up of protoclusters identified at z∼3-4 show that protocluster members have lower averageEW compared to equivalent coeval field galaxies <cit.>.There is also a growing body of work indicating that galaxies with large netEW are over-represented in under-dense regions, while galaxies with lower netEW are mainly located in over-dense environments <cit.>. 
Combining these results with our findings, it would appear that we are seeing at z∼2-3, a spatial segregation of galaxies with rotation- and dispersion-dominated kinematics on large (∼100 Mpc) scales.That is, larger more massive rotating disc galaxies (aLBGs in our system) are located preferentially in group environments, and dispersion-dominated eLBGs tend to be found on group outskirts and in the field.§.§.§ Bimodality at high redshift and the Morphology–Density Relation The demonstrated segregation ofspectral types with a wide range of galactic properties – including the large-scale clustering behaviour of aLBGs and eLBGs described above – is suggestive of a non-homogeneous galaxy population at high redshift.Indeed, it is reminiscent of the bimodal distribution of blue, star-forming spirals and large, massive red and quiescent ellipsoids of the modern-day universe (see for example <cit.>).Independent evidence of galaxy bimodality at high redshift is observed in dampedsystems (DLAs).<cit.> found a bimodality in the DLA population based on [CII] 158 μ m cooling rates that is independent of HI column density.DLAs with high cooling rates have significantly higher velocity line profiles, metallicity, dust to gas ratios, ISM line widths, and star formation rates compared to those with low cooling rates.Intriguingly, the properties of the high and low cooling-rate DLAs map well with the properties of aLBG and eLBGspectral types respectively, as discussed in this work and by <cit.>, <cit.>, and <cit.>.These results caution against assuming that high-redshift galaxy populations are homogeneous, and emphasise the need to consider heterogeneity, and likely bimodality, in the distribution of galactic properties.Accordingly, from a holistic consideration of the many relationships betweenand galaxy properties discussed above, we envisage a model in which massive rotating disc systems (aLBGs) that are preferentially found in groups and clusters merge to form present-day elliptical galaxies <cit.>.Compact eLBGs may be either super-star clusters in faint, low mass galaxies, or the precursors of bulges in present-day spiral galaxies, which are known to have dispersion-supported kinematics and are expected to form at high redshift.Such a model suggests that we are observing at z∼2-3 signs of a nascent morphology-density relation <cit.> traced by the aLBG and eLBG populations.§ SUMMARY AND CONCLUSIONS In this paper we report a direct relationship between nebular emission-line kinematics and netEW insamples of z∼2 and z∼3 LBGs drawn from the literature for which matching rest-frame UV broadband photometry, consistently measured netEWs, and kinematic classifications from IFU-based spectroscopy are available.We conclude that LBGs withdominant in absorption (aLBGs) are almost exclusively rotation-dominated (presumably disc-like) systems, and LBGs withdominant in emission (eLBGs) characteristically have dispersion-dominated kinematics.The key results of this paper are summarised below:* In Sections <ref> & <ref> we show that rotation- and dispersion-dominated z∼2-3 LBGs segregate consistently with rest-frame UV colour, and that their distributions on a rest-frame UV colour-magnitude diagram (CMD) are coincident with the aLBG and eLBG distributions respectively of the parent LBG samples from which they were drawn.* This congruent behaviour is reinforced by the assignment to the kinematic samples ofspectral types based on spectroscopically-determined netEWs (see Section <ref>), and is statistically supported by 
the results of two-sided Kolmogorov–Smirnov (KS) tests. * Galaxies located in the strongly -absorbing part of the CMD (aLBGs) are characteristically massive, red, spatially diffuse disc-like systems.Galaxies with photometric properties akin to those withdominant in emission (eLBGs) are most likely to be compact, blue, and dispersion-dominated, with low (if any) rotational dynamic support, free from large or luminous rotating disc structures.Moreover, we find that sources in our kinematic samples that have been positively identified as merging systems reside toward the bright side, and centrally in colour, on the CMD.* In Section <ref> we report the segregation in average netEW for subsets of rotation- and dispersion-dominated galaxies in our z∼2 and z∼3 kinematic samples, and show, for a combined z∼2-3 sample of 32 galaxies, a clear bifurcation (KS-test confidence ∼99%) in the averagespectral properties of rotation- and dispersion-dominated LBGs.* In Section <ref> we quantify therelationship between the strength of rotational dynamic support (as measured usingand ) and netEW for subsets of our kinematic sample where these data are available. Both results show a statistically significant non-linear negative correlation between rotational dynamic support and netEW.* In Section <ref> we confirm that the relationship between netEW and kinematics that we report is consistent with relationships reported separately between kinematics, netEW and other galactic properties such as stellar mass, star-formation rate, gas fraction, age, and size.* We demonstrate in Section <ref> the consistency of our result with the low-z kinematics versusstudy of <cit.>: a result that suggests the utility ofemission as a diagnostic of galaxy kinematics and other properties over a wide range of redshifts. In Paper I in this series <cit.> we report the photometric segregation of z∼2 LBGs versus netEW in rest-frame UV colour–magnitude space, and derive criteria for the selection of pure samples of LBGs withdominant in absorption anddominant in emission on the basis of optical broadband imaging alone.Together with the analogous z∼3 result of <cit.>, we have suggested the utility of this method to study a wide range of properties known to be associated with(see Section <ref>), in large samples and over large scales in datasets from current and future large-area and all-sky photometric surveys such as the Vera Rubin Observatory Legacy Survey of Space and Time <cit.>. 
Here we add nebular emission-line kinematics to the list of properties that might be studied by such an approach. We propose a method by which the generalised kinematic type of large samples of LBGs might be determined on the basis of net Lyα spectral types derived from broadband imaging, and their relation to other properties studied on large scales and at redshifts beyond the range accessible by current IFU spectrographs. The small size of the kinematic sample that meets the necessary selection criteria for this work (see Section <ref>) precludes a robust investigation of the relationship between net Lyα EW and kinematics at fixed values of other galactic properties. Nevertheless, these results (i) suggest the potential value of a holistic interpretation of galaxy evolution in terms of these many correlated properties, including kinematics and their relation to galaxy environment, and (ii) provide motivation for a dedicated high-resolution AO observational campaign that specifically targets a larger, uniformly selected and analysed sample that would inform the strength of all these relations, and enable application of multi-variate regression techniques to determine how they are related, causally or otherwise (cf. the low-z LARS study of <cit.>). Finally, in Section <ref> we speculate that the combination of our result linking net Lyα EW and nebular emission-line kinematics with the known large-scale clustering behaviour of Lyα-absorbing and Lyα-emitting LBGs <cit.> is evocative of an emergent bimodality of early galaxies that is consistent with a nascent morphology–density relation being observed at z∼2-3 (Section <ref>). | http://arxiv.org/abs/2311.15721v2 | {
"authors": [
"Garry Foran",
"Jeff Cooke",
"Emily Wisnioski",
"Naveen Reddy",
"Charles Steidel"
],
"categories": [
"astro-ph.GA"
],
"primary_category": "astro-ph.GA",
"published": "20231127111629",
"title": "Lyman-alpha at Cosmic Noon II: The relationship between kinematics and Lyman-alpha in z~2-3 Lyman Break Galaxies"
} |
[email protected] Laboratoire de Physique Subatomique et de Cosmologie, CNRS/IN2P3, 38026 Grenoble, France
[email protected] Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France
[email protected] Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France
[email protected] Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France

We explore the possibility of adding a complex absorbing potential at the boundaries when solving the one-dimensional real-time Schrödinger evolution on a grid using a quantum computer, with a fully quantum algorithm described on an n-qubit register. Due to the complex potential, the evolution mixes real- and imaginary-time propagation, and the wave function can be continuously absorbed during the time propagation. We use the dilation quantum algorithm to treat the imaginary-time evolution in parallel to the real-time propagation. This method has the advantage of using only one reservoir qubit at a time, which is measured with a certain success probability to implement the desired imaginary-time evolution. We propose a specific prescription for the dilation method in which the success probability is directly linked to the physical norm of the continuously absorbed state evolving on the mesh. We expect the proposed prescription to keep a high probability of success in most physical situations. Applications of the method are made on one-dimensional wave functions evolving on a mesh. Results obtained on a quantum computer are identical to those obtained on a classical computer. We finally give a detailed discussion of the complexity of implementing the dilation matrix. Due to the local nature of the potential, for n qubits, the dilation matrix requires only 2^n CNOT and 2^n unitary rotation gates for each time step, whereas it would require of the order of 4^n+1 CNOT gates to implement it using the best known algorithm for general unitary matrices.

Efficient solution of the non-unitary time-dependent Schrödinger equation on a quantum computer with complex absorbing potential
Edgar Andres Ruiz Guzman
January 14, 2024
================================================================================================================================

§ INTRODUCTION

The time evolution of quantum systems is a basic ingredient of simulating microscopic physics. It is a common subject of numerous studies, in domains ranging from many-body systems <cit.> and electron-phonon interactions <cit.> to quantum field theory <cit.> or even hydrodynamics <cit.>. Most of these simulations deal with the real-time evolution of unitary systems, which fits naturally into the framework of quantum simulation. Quantum algorithms have also been proposed to perform imaginary-time evolution (ITE) or to mix real- and imaginary-time evolution on quantum computers. Indeed, many quantum systems can be described as, or reduced to, systems whose norm is not conserved over time (dissipative systems, open systems, tunneling effects, instantons, …), and imaginary-time propagation methods are widespread in many domains. Extracting the ground-state energy of microscopic systems by relying on the exponential decay of excited states is only one of the many applications of these powerful methods, which have been applied with great success not only in particle and nuclear physics but also in condensed matter and quantum chemistry.
Developing algorithms on quantum computers able to efficiently encode evolution operators – unitary or not – is thus crucial to simulate a broad range of microscopic systems on quantum devices. Quantum algorithms aiming at including imaginary-time evolution enter either into the classof fully quantum algorithms or into the category of techniques referred to as hybrid quantum-classical methods <cit.>. The Quantum Imaginary-Time Evolution (QITE) method <cit.>, which is based on the use of variational principles <cit.> enters into the second class.While expected to be less resilient to noise, fully quantum algorithms have been the subject of extensive efforts in recent years.Among the methods that are now being explored, one can mention the dilation method <cit.> allowing to treat dissipative systems, the probabilistic ITE<cit.>, the “forward and backward" real-time evolution <cit.> or therecently proposed Singular Value Decomposition (SVD)-based approach of Ref. <cit.>.Other methods, like the “Linear Combination of Unitaries" <cit.> can be used, but theymight require a rather large number of ancillary qubits to implement the imaginary potential.In this work, our primary motivation is to explore the simulation of the Schrödinger equation in position space using quantum computers, and relying completelyon a quantum computer algorithm, i.e., avoidingquantum-classical hybrid methods. Although solving Schrödinger-like equation is rather standard on classical computers,especially on low dimension, and despite the fact that the strategy to perform such an evolution on quantum computeris known from decades <cit.>, the practical simulation of such problems remains, even today, rather difficult, at least due to the quantum resources required and due to the algorithm complexity (see, for instance, the discussion in <cit.>). Here we explore the possibility of adding complex absorbing potential (CAP) to a real-timeevolution. This method is standard in classical computing to reduce the numerical resources when the evolution is solved on a grid. It consists of adding an imaginary potentialthat absorbs the wave function escaping from a certain region of interest. The corresponding time-dependent Schrödinger equation then becomes:i ħ∂/∂ tϕ( x,t)= {p^2/2m + V( x) - i W( x) }ϕ( x,t)that corresponds to a mixing of real- and imaginary-time propagation.Such potential reduces the boundary conditions effects and, when the absorption is properly made, allows for a reduction in the number of mesh pointsneeded to simulate the evolution accurately. A similar advantage can be envisaged when the same equation is solved on a grid on quantum computers, when the grid points are encoded on the qubit's register. Said differently, if we can include absorbingboundary conditions on a quantum computer, we might significantly reduce the number of qubits needed to accurately treat the system. This reductionis quite attractive nowadays because the number of qubits on quantum platforms is rather restricted. In addition, efficient treatmentof boundary conditions through absorption opens the perspective to treat certain phenomena like particle decay in real-time or scattering processes.A challenge in adding absorption at the boundaries is that the evolution becomes non-unitary.We note that the inclusion of CAP in the context of quantumcomputation has been briefly discussed in <cit.> using QITE and applied to a simple model case. 
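For reference in what follows, a purely classical, first-order split-operator sketch of the Schrödinger evolution with a complex absorbing potential may help to fix ideas: the wave function is propagated alternately in momentum and position space, and the CAP is applied as an amplitude-reduction factor e^{-WΔt}, so the norm decays as probability leaks into the absorbing layers. This is only a minimal NumPy illustration in the units used later in the paper (ħ = 1, ħ²/2m = 1); the grid size, time step and absorber width below are illustrative choices, not those of the simulations reported later.

```python
import numpy as np

# First-order split-operator propagation of the 1D Schrodinger equation with a
# complex absorbing potential (CAP): psi -> e^{-W dt} e^{-iV dt} F^{-1} e^{-iK dt} F psi.
# Units: hbar = 1 and hbar^2/(2m) = 1; all numerical parameters are illustrative.
Nx, L = 2**7, 20.0
x = np.linspace(-L/2, L/2, Nx, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(Nx, d=dx)          # angular wavenumbers of the FFT grid

sigma = 0.4
psi = np.exp(-x**2/(2*sigma**2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)     # normalized initial Gaussian wave packet

V = np.zeros_like(x)                          # free propagation, V(x) = 0
U0, alpha, D = 0.4, 1.5, 5.0                  # CAP strength, steepness and width
W = np.zeros_like(x)
left, right = x < x[0] + D, x > x[-1] - D
W[left] = U0/np.cosh(alpha*(x[left] - x[0]))**2
W[right] = U0/np.cosh(alpha*(x[-1] - x[right]))**2

dt, nt = 0.01, 500
norm = []
for _ in range(nt):
    psi = np.fft.ifft(np.exp(-1j*k**2*dt)*np.fft.fft(psi))   # kinetic step (hbar^2/2m = 1)
    psi = np.exp(-1j*V*dt)*psi                                # real potential step
    psi = np.exp(-W*dt)*psi                                   # CAP "amplitude reduction"
    norm.append(np.sum(np.abs(psi)**2)*dx)                    # decreasing physical norm

print(f"norm after {nt} steps: {norm[-1]:.4f}")
```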
In <cit.>, although no application with CAP was shown, it was proposed to use the notion of quantum pixel for grid simulation in position. Here, we explore the possibility of using dilation techniques to simulate real-time evolution given by Eq. (<ref>), including CAP in one dimensionon a grid. This technique requires only one additional reservoir qubit while using fully quantum algorithms – that is without resorting to any hybrid quantum-classical computation. The application of the imaginary evolution operator is subject to a certain success probability, which is directly related to the “loss of norm" of the system when measuring the ancillary qubit. Special attention is given to improving the quantum algorithm to optimize the success probability. For long time evolution of systems weakly coupled to their environment or short time evolution of systems strongly coupled to their environment, the success probability of our algorithm is expected to remain close to one in many physical situations of interest. This article is organised as follows.In section <ref>, we detail our algorithm to simulate a non-unitary evolution given by Eq. (<ref>) using the dilation method.Section <ref> presents the practical implementation illustrated with results obtained on an IBM quantum simulator. The complexity associated with the implementation of the dilation method is discussed in section <ref>, followed by concluding remarks in section <ref>.§ NON-UNITARY PROPAGATION ALGORITHM§.§ Single time-step Trotter-Suzuki decomposition with imaginary timeIn order to perform the evolution given by Eq. (<ref>), we first rewrite it as a generalized propagator (we take the convention ħ c=1):| Ψ (t) ⟩ =e^-i H̃ t | Ψ (0) ⟩where H̃ is a non-unitary operator, which can be written without loss of generality as H̃=H - i W.H= K+ V is the usual Hamiltonian that contains the kinetic (K) and potential (V) terms, while W is an hermitianoperator associated with the CAP. | Ψ(0) ⟩ is the initial state, assumed to be known and normalized to 1.To implement the evolution, we consider the first time step Δ t and make use of the first order Trotter-Suzuki approximation <cit.> and write | Ψ(Δ t) ⟩ =e^-WΔ te^-i V Δ t e^-i K Δ t | Ψ(0) ⟩≡ U_ approx (Δ t) | Ψ(0) ⟩ As in usual Trotter propagation, we can estimate the error induced by replacing (<ref>) by (<ref>). For a small time step,we have: e^-i H̃Δ t - U_ approx (Δ t)≤ Δ t^2/2[ K, W + V] ≤ Δ t^2K (W +V )where, having in mind solving Eq. (<ref>), we implicitly assumed that [W,V]=0, which is true if both V and W arediagonal in position space.This will be our case since we will consider V and W local in space. Here · denotes the spectral norm.The last two factors appearing in Eq. (<ref>) corresponds to unitary propagators and can usually be treated using standard techniques ondigital computers (see illustrations in section <ref> and discussion in appendix <ref>). Note that the kinetic-term spectral norm is explicitly given in Eq. (<ref>), while W and V identify simply with the maximal absolute values these potentials take on the position mesh. Once the propagator is approximated by Eq. (<ref>), a major difficulty in implementing it on a quantum computer stems fromthe fact that the norm of the wave function immediately becomes lower than one. Indeed, for a single time step, we have ⟨Ψ(Δ t) | Ψ(Δ t) ⟩ = ⟨Ψ(0) | e^-Δ t W^† e^- Δ t W | Ψ(0) ⟩=1 - 2 Δ t ⟨Ψ(0) |W| Ψ(0) ⟩ + O(Δ t^2)where, in the last expression, we used the fact that W is Hermitian. 
Assuming further that W is positive definite will therefore lead to a decrease of thenorm. Such a loss, which could be seen as a loss of information on the system, is impossible if the system is treated on a isolated quantum register together with a perfect quantum computers, where only unitary transformations are possible. A natural solution to this problem is to add one or several qubits that act as a reservoir and can absorb parts of the informationon the system by interacting with it.§.§ Dilation method for non-unitary propagator§.§.§ General description of the method We briefly recall here the dilation method that is becoming today a common method forimplementing non-unitary operators on a system encoded on a qubit register <cit.>.Let us assume that we want to implement | Ψ⟩→ M | Ψ⟩ where Mis not unitary. We assume the system is encoded on n qubits, leading to a computational basis of size 2^n.The dilation method relies on doubling the size of the computational basis by adding a single reservoir qubit and applyingthe following unitary matrix to the n+1 qubits: U=[M √(I - M^† M); √(I - M^† M)- M ] . Specifically, we have U (|0_r ⟩⊗ | Ψ⟩)=|0_r ⟩⊗[ M |Ψ⟩]+ |1_r ⟩⊗[ √(I - M^† M)|Ψ⟩],where { |0_r ⟩ , |1_r ⟩} denote the two states associated to the reservoir qubit while | Ψ⟩ is the initialsystem state. From the above relation, we see that we can perform the desired operation by preparing the initial state |0_r ⟩⊗ | Ψ⟩, applyingthe U matrix and measuring 0 in the reservoir state. The procedure is schematically represented in the circuit displayed in Fig. <ref>. More precisely, when measuring the reservoir qubit in state |0⟩, according to the Born measurement's rules, the resulting reservoir+system state is given by: | Ψ_0 ⟩ = 1/√(p_s) | 0_r ⟩⊗[ M | Ψ⟩]p_s is nothing but the probability to measure the reservoir qubit in state |0 ⟩, called hereafter simply success probability and given by:p_s= ⟨Ψ | M^† M | Ψ⟩.§.§.§ Application to imaginary-time propagation We use the dilation method to apply the CAP, i.e., M ∝ e^- W Δ t. A similar problem was addressed in Ref. <cit.> where the dilation method was implemented to perform imaginary-time propagation using directly the full Hamiltonian, i.e., assuming W=H,to obtain the ground state of the problem. In that case, it was proposed to use the prescription M= 1/√(I + e^-2W Δ t) e^- W Δ t.We first implemented this prescription but realized that it has the clear drawback that the success probabilityexponentially tends to 0 when performing several time steps. This could indeed easily be seen, considering that for arbitrary time-step Δ t, we always have p_s ≤ 1/2 provided that M is positive definite. If the dilation process is iterated r times to simulate the evolutionup to t= r Δ t, the success probability will be lower than 1/2^r. This aspect is critical for practical implementation since, after several steps, most measurements of the ancillary qubit will be rejected, and themethod rapidly becomes inefficient. This unwanted feature can easily be avoided by taking the simpler prescription M = e^-WΔ t leadingtoU=[ e^-W Δ t√(I - e^-2 W Δ t); √(I - e^-2W Δ t) -e^- W Δ t ]. The unitarity of the matrix U can be easily proven noting that [√(I - e^-2 W Δ t),e^- W Δ t]=0. Applying this prescription to a given initial state |Ψ_ ini⟩, we see that the success probability given by Eq. 
(<ref>) becomes:p_s (Δ t)= ⟨Ψ_ ini | e^-2 W Δ t | Ψ_ ini⟩≃ ⟨Ψ_ ini | Ψ_ ini⟩ - 2 Δ t ⟨Ψ_ ini |W | Ψ_ ini⟩ +O(Δ t)^2,and provided the initial state is normalized to 1 at initial time, the success probability remains close to 1 if Δ t is small, which is much more favorable than the prescription of Ref. <cit.>. In terms of the specific steps taken for time evolution Eq. (<ref>),one can take advantage of the Trotter decomposition(<ref>) as follows: * For the first time step, the propagators e^-i V Δ t e^-i K Δ t are directly implemented as a set of unitarygates on the system register.* The CAP term is then implemented using the dilation method by measuring the reservoir qubit in state |0⟩.After the measurement, the wave function, denoted hereafter as | Ψ_ dil (t) ⟩, is given by:| Ψ_ dil (Δ t) ⟩ = 1/√(p_s(1))U_ approx (Δ t) |Ψ (0) ⟩where U_approx is the evolution matrix from Eq. (<ref>).We denote here by p_s(1) (resp. p_s (r)) the success probability associated with the first (resp. the r^th)application of the dilation method. * Iterating the above procedure, after the r^th application, and denoting t= r Δ t, we deduce thatthe system wave function is given on the qubit register as:| Ψ_ dil (t) ⟩ = 1/√( P_s(t))[ U_ approx (Δ t)]^r |Ψ (0) ⟩,where the global probability of success P_s(t=rΔ t) relates to r consecutive successful performances of the non-unitary evolution with CAP, i.e. the probability of obtaining r times zero when measuring the reservoir qubit. This probability is given by P_s(t=rΔ t)=p_s(1) ⋯ p_s(r). §.§.§ Physical estimates of the total success probability evolutionA schematic representation of the iterative procedure is depicted in Fig. <ref>, where the norm of the state| Ψ_ dil ( t) ⟩ is shown as a function of time. Each time a zero is measured in the reservoir qubit,the norm is reinitialized to 1. Denoting by | Ψ(t) ⟩ the wave function obtained with good accuracy by solving numerically the Schrödinger equation (<ref>) on a classical computer, according to Eq. (<ref>),we see that we have:[ U_ approx (Δ t)]^r |Ψ (0) ⟩∼| Ψ(t) ⟩within the accumulated Trotter errors.Consequently, we also have:lim_Δ t → 0 P_s(t)=⟨Ψ(t) | Ψ(t) ⟩≡ N(t) . The last property makes the dilation method rather attractive for physical systems simulation. Indeed, we have shown that the success probability at a given time t tends, in the limit Δ t→ 0, to the norm of the wave functionthat survives to the absorption when solving the problem on a classical computer. In many physical situations where CAP is useful, we are interested in systems that arerather localized in a certain region of space and where part of the wave function is emitted. This is, for instance, what happens when particlesare emitted from a compact localized quantum object. In this case, most particles are emitted over certain time-scale t≤τ_ decay,implyingthat N(t) decreases and then reaches a stationary asymptotic value. Accordingly, we expect in such situation that P_s(t>τ_ decay) ≃ cte, which implies that single-step success probability p_s(r) ≃ 1 for r ≫τ_ decay/Δ t. In brief, for suchsystems where most particles are emitted after a certain transient time, almost all events will be successful after this time.Note that, in section <ref>, we will actually consider the free wave-packet propagation, the physical situation where all particles escape the grid at infinite time. 
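The prescription above is easy to check numerically. The following minimal NumPy sketch builds M = e^{-WΔt} for a diagonal CAP on the 2^n mesh points, assembles the corresponding dilation matrix, verifies its unitarity, and confirms that the probability of measuring the reservoir qubit in |0⟩ equals ⟨Ψ|e^{-2WΔt}|Ψ⟩. The CAP values and the random test state are illustrative stand-ins for the actual evolved wave function.

```python
import numpy as np

def dilation_unitary(W_diag, dt):
    """Dilation matrix built from M = exp(-W dt) for a diagonal (local) CAP W:
    U = [[M, sqrt(I - M^2)], [sqrt(I - M^2), -M]]; M and sqrt(I - M^2) commute here."""
    M = np.diag(np.exp(-W_diag*dt))
    S = np.diag(np.sqrt(1.0 - np.exp(-2.0*W_diag*dt)))
    return np.block([[M, S], [S, -M]]), M

n, dt = 4, 0.05
W_diag = np.zeros(2**n)
W_diag[:3] = 0.4                   # illustrative absorber near the left edge of the mesh
W_diag[-3:] = 0.4                  # ... and near the right edge

U, M = dilation_unitary(W_diag, dt)
assert np.allclose(U.conj().T @ U, np.eye(2**(n + 1)))        # U is unitary

rng = np.random.default_rng(seed=0)
psi = rng.normal(size=2**n) + 1j*rng.normal(size=2**n)        # stand-in system state
psi /= np.linalg.norm(psi)

state = np.kron([1.0, 0.0], psi)   # |0_r> (most significant qubit) tensor |psi>
out = U @ state
p_success = np.linalg.norm(out[:2**n])**2                     # prob. of measuring |0_r>
assert np.isclose(p_success, np.vdot(psi, np.exp(-2*W_diag*dt)*psi).real)
print(f"single-step success probability: {p_success:.4f}")
```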
This case corresponds to an extreme example of decay phenomena, where the importance of absorption is more pronounced since N(t) tends to 0 at infinite time.It is therefore a perfect example for testing the absorption of particles at the grid boundaries. §.§.§ Expectation values of non-absorbed observables In general, we are often interested in computing the expectation values of observables, generically denoted by Otaken on a state | Ψ (t)⟩. Provided that we have | Ψ_ dil ( t) ⟩ encoded on the system register,we can compute approximately the quantity:⟨ O ⟩_ dil = ⟨Ψ_ dil ( t) | O |Ψ_ dil ( t) ⟩by implementing either a Hadamard test <cit.> and performing a set of measurements, or, if the observable is written as a linearcombination of Pauli chains, one can directly measure the system register after a proper change of the measurement basisto estimate each individual Pauli chain (see, for instance, <cit.>, table 3). In parallel, we can also estimate the total success probability, denoted by P'_s(t) simply by counting the number of events where only zeros weremeasured in the reservoir qubit and divide this number by the total number of events.From these estimates, we can compute approximate expectation values on | Ψ(t)⟩ simply as P'_s(t) ⟨ O ⟩_ dil. § ILLUSTRATION: 1D SCHRÖDINGER EVOLUTION ON A GRID WITH COMPLEX ABSORBING POTENTIALWe give here a proof of principle of applying the strategy discussed above.Specifically, we show examples of 1D Schrödinger equation on a grid with absorbing boundaryconditions. We concentrate on one critical situation where V(x)=0. In this case, assumingthat the particle is located inside the grid and will freely spread during the evolution, it should eventually completely be absorbed by the CAP in the infinite time limit. All calculations have been made below using the Qiskit softwarethat emulates a perfect digital computer <cit.>. §.§ Free wave propagation with CAP We consider a problem described on a restricted area of space x ∈ [x_ min, x_ max]. The wave functionis assumed to be initially a Gaussian wave function located in the middle of the region with:Ψ(x,t=0) = 1/π^1/4σ^1/2 e^[-(x-x_0)^2/2σ^2 + im.v/ħ.(x-x_0)] ,with x_0=(x_ max+x_ min)/2. Eventually, we will also consider the possibility that the wave functionhas an initial boost proportional to the parameter v. In the simulation, we use natural units with the convention ħ=c=1, ħ^2/2m =1. We concentrate here on the free wave case, i.e., we assume V(x) = 0 in Eq. (<ref>). Our goal is to removethe wave function while approaching the boundaries, i.e., to apply an absorbing potential within a certain distance D such that either |x-x_ min |< D or | x_ max - x| < D. We employ here the “amplitude reduction" technique – as suggested by Kosloff et al. in <cit.>. This amplitude reduction consists in applying the usual evolution operator to propagate from|Ψ(t) ⟩ to|Ψ(t+Δ t)⟩ and consecutively applying on |Ψ(t+Δ t)⟩ an “absorption" step according to|Ψ̃(t+Δ t)⟩= (1-W dt)|Ψ(t+Δ t)⟩. We assume here that the amplitude operator W is local and verify:{[W(x) = U_0/cosh^2[α.(x_ max - x)],forD > x_ max- x;;W(x) = U_0/cosh^2[α.(x-x_ min)],forx-x_ min< D;;W(x) =0, everywhere else . ]. To solve the problem on a classical or quantum computer, we directly discretizethe one-dimensional space on a grid of equally spaced mesh points. The wave function |Ψ⟩ is then written as:|Ψ⟩=∑_i=0^N_x-1ψ(x_i)| x_i ⟩ ψ(x_i)=⟨ x_i |Ψ⟩.The space grid { x_i =x_ min+ i . 
Δ x }_i=0,N_x -1 ranges from x_ min tox_ max, with a mesh step given by Δ x = (x_ max-x_ min)/(N_x-1). On the discretized mesh, the CAP becomesW(x_i) = U_0/cosh^2[αΔ x.(N_x-1-i)],for i>N_x-k W(x_i) = U_0/cosh^2[αΔ x . i], for i<kwhere k = D/Δ x is a fixed integer. As a reference calculation, we first solve the problem on a classical computer using the split-operator method <cit.>, i.e., going back-and-forth fromposition to momentum space. Consistently with the quantum computation case (see below), we used the simplest first-order splitting. In the classical computer case, the absorbing potential is directly applied to the wave function by multiplying each component ψ(x_i) by e^-Δ t W(x_i). In this case, the norm of the wave function decreases in time, as shown in Fig. <ref> directly probing the effect of the absorption at the two boundaries of the mesh. Such classical computing results are rather standard and will serve as referencecalculations for the one obtained using the quantum computing algorithm.We show in Fig. <ref> and Fig. <ref> two examples of wave function evolution without and with an initial boost respectively, obtained with a classical computer.Results have been simulated using an initially localized state located in the middle of the mesh with a width σ=0.4 and N_x = 2^4=16 points. The CAP parameters are U_0=0.4 and α=1.5. For figure <ref>, the boost parameter value is set to v= 4.In these figures, we display both the results of the calculations with and without absorption. In the absence of absorption, we see the accumulation of the wave function amplitude that is reflected by the boundaries and then starts to interfere with the incoming wave function.As it is well known from classical computing, such interference prevents from properly describing the wave function spreading.This interference is reduced when the CAP is included, although some interference is still visible, especially in the boostedcase. Note that such interference can be reduced by fine-tuning the absorption potential, but this is not the aim of the present work.Our main objective is to give proof that the same evolution as obtained in a classical computer can be implemented on quantum computers using the dilation methoddiscussed above.§.§ 1D free wave evolution with CAP on a quantum computer To encode the problem onto the qubit register, we use the standard binary (SB) representation that consists of mapping each position |x_i ⟩ intothe register state | [i] ⟩ where [i] denotes the binary representation of the integer i. The first-order Trotter-Suzuki method is used to perform the time evolution. Note that at first order, Trotter-Suzuki decomposition is strictly equivalent for the evolution to the “amplitude reduction" technique implemented on the classical computer discussed previously. As in the classicalcomputer simulation, we perform the evolution of the Eq. (<ref>) by going back and forth from the position spacewhere the operators (V,W) are diagonal, and the associated propagator can be easily implemented to the momentum space, where the same holds for the kinetic term K.Changing from one basis to the other requires the use of the Quantum Fourier Transform (QFT) algorithm or its inverse (see appendix <ref>).This gives the scheme depicted schematically in Fig. <ref>.The absorbing potential is included using the dilation method introduced in section <ref>.The blue box shown in Fig. 
<ref> is repeated N_t times, each time the result of the ancilla measurement being stored in a classical register, and the total system is measured only at the end. We keep only runs for which all the measurements of the reservoir qubit lead to zero. The U matrix is implemented using the algorithm discussed in section <ref>. For a given time, the amplitude is reconstructed by measuring the system register. We used here 2^14 shots for each panel of Fig. <ref> and <ref>. We see in both figures that the wave function amplitudes obtained using the quantum algorithm perfectly match the ones obtained using the classical computer algorithm. Note that we have systematically compared the results obtained on classical and quantum processor units, including or not an initial boost and/or adding a local potential V(x), and always obtained a perfect matching of the results.

§.§ Success probability and absorption

The good agreement between the quantum and classical computer simulations is further confirmed in Fig. <ref> where we compare the evolution of the wave function norm during the time evolution for the two cases shown in Fig. <ref> and <ref>. In the classical computer, this norm can be directly computed by integrating the wave function over the grid. As discussed in detail in section <ref> (see also Fig. <ref>), in the quantum calculation case, the system wave function is re-normalized to 1 after each measurement of the reservoir qubit. To compute the physical norm, i.e., the one corresponding to the non-absorbed particle, we can use its connection with the success probability given by Eq. (<ref>). In practice, the success probability at a given time t = r Δ t is given by the probability to only obtain 0s in the measurements of the reservoir qubit for time steps 1 to r. The norm shown in the quantum calculation is deduced assuming a strict equality between the success probability and the norm of the wave function. We see that the two calculations are almost on top of each other. As a side remark, we also implemented the prescription of Ref. <cit.>. As we discussed in the introduction, in this case, the success probability decays exponentially with the number of times the dilation is implemented. For comparison, without boost, the success probability is of the order of 1/2^r after the r^th time step, which should be compared to the values reported in Fig. <ref>.

§ COMPLEXITY ANALYSIS FOR THE IMPLEMENTATION OF THE DILATION MATRIX

The dilation algorithm we use to implement non-unitary propagation is expected to be one of the most efficient algorithms in terms of ancillary qubits, since it requires the addition of only one reservoir qubit. Still, it requires implementing a general matrix U, which is quite demanding (see, for instance, Ref. <cit.> for a comprehensive overview).
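Before counting gates, it is useful to see how the full scheme described above (QFT, kinetic phase, inverse QFT, CAP through the dilation, reservoir measurement) can be assembled. The sketch below is a schematic Qiskit construction of a single time step in which the dilation matrix is appended as a dense (n+1)-qubit unitary, i.e., precisely the "naive" general-matrix route discussed in this section. The bookkeeping of the shifted momentum grid of the appendix and of Qiskit's little-endian qubit ordering is glossed over, import locations are version dependent, and the parameters are illustrative.

```python
import numpy as np
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit.circuit.library import QFT

n, dt = 4, 0.05
Nx, L = 2**n, 20.0
dx = L/(Nx - 1)
p = 2*np.pi/L*(np.arange(Nx) - Nx//2)        # shifted momentum grid (cf. the appendix)

U0, alpha, k_abs = 0.4, 1.5, 3               # CAP parameters on the discretized mesh
W = np.zeros(Nx)
W[:k_abs] = U0/np.cosh(alpha*dx*np.arange(k_abs))**2
W[-k_abs:] = U0/np.cosh(alpha*dx*np.arange(k_abs)[::-1])**2

M = np.diag(np.exp(-W*dt))                   # dilation matrix encoding the CAP
S = np.diag(np.sqrt(1 - np.exp(-2*W*dt)))
U_dil = np.block([[M, S], [S, -M]])

sys_r = QuantumRegister(n, "q")              # system register (mesh points)
res_r = QuantumRegister(1, "r")              # reservoir qubit
creg = ClassicalRegister(1, "c")
qc = QuantumCircuit(sys_r, res_r, creg)

# one first-order Trotter step with V = 0: kinetic phase in momentum space ...
qc.append(QFT(n), list(sys_r))
qc.unitary(np.diag(np.exp(-1j*p**2*dt)), list(sys_r), label="exp(-iK dt)")  # hbar^2/2m = 1
qc.append(QFT(n, inverse=True), list(sys_r))
# ... then the CAP via the dilation unitary; the reservoir is the most significant qubit
qc.unitary(U_dil, list(sys_r) + list(res_r), label="U_dil")
qc.measure(res_r, creg)                      # keep the run only if the outcome is 0
```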
We restrict the discussion here to the complexity, in terms of gates, of implementing this matrix directly without adding more ancillary qubits besides the one needed in the dilation. If n qubits are used to simulate the physical system, it leads to a matrix U of dimension 2^n+1× 2^n+1=2^d× 2^d, whose naive implementation, as a general unitary matrix, requires in principle of the order of d^2 × 4^d gates <cit.>. In <cit.>, it was shown that, as an alternative to the dilation technique, the general complexity of implementing a matrix treating the imaginary-time propagation can be reduced by a factor of 2 when using the singular value decomposition (SVD) technique. We show here that, due to the specific diagonal nature of the absorbing potential, and using the Cosine-Sine decomposition, it is possible to implement the dilation matrix U with a much lower number of gates than the SVD-based approach. To decompose arbitrary unitary operators of dimension 2^d×2^d in terms of CNOT and one-qubit gates, one of the most efficient decomposition schemes is the Quantum Shannon Decomposition (QSD) <cit.>. This algorithm is based on the Cosine-Sine Decomposition (CSD) <cit.> and requires (3/4)×4^d-(3/2)×2^d CNOT gates, where d is the total number of qubits. If we use the CSD algorithm alone, without the improvements brought by the Shannon decomposition, we get a total complexity of 4^d-2^d+1 CNOT gates and 4^d elementary one-qubit gates <cit.>. QSD and CSD algorithms are particularly well suited to our case since the dilation matrix can immediately be written in terms of a cosine-sine decomposition: U = [ 1 0; 0 -1 ][ e^-τ W √(I - e^-2τ W); -√(I - e^-2τ W) e^-τ W ] = [ 1 0; 0 -1 ][ C S; -S C ], where the C and S matrices verify C^2+S^2=I. We note that, due to the particular structure of the dilation matrix, it can always be written directly in this form. The l.h.s. matrix in (<ref>) is a trivial multiplexor that can be implemented with one Z gate acting on qubit d. However, since we are only interested in the diagonal part of the dilation matrix to implement the non-unitary evolution, we can even remove this Z gate and consider directly the non-trivial part of the decomposition, namely the matrix D = [ C S; -S C ] [Note also that, without loss of generality, the multiplexor can be avoided in the first place simply by redefining U as U = [ e^-W Δ t √(I - e^-2W Δ t); -√(I - e^-2W Δ t) e^-W Δ t ], instead of Eq. (<ref>).]. Thanks to the particular form of the dilation matrix in the case of non-unitary propagation with an absorbing potential (see Eq. (<ref>)), we can even further reduce this complexity. Our matrices C and S are indeed diagonal, and we only need to resort to a small part of the CSD algorithm to implement U. These matrices can be written as C = diag[cos(θ_i)], S = diag[sin(θ_i)] with i=0,…,2^n-1. For a given time τ, the angles are related to the imaginary potential through e^-W_iiτ=cosθ_i, √(1-e^-2W_iiτ)=sinθ_i, where we can restrict the θ_i angles to [0,π/2]. The operator D is a uniformly controlled rotation about the y axis, also denoted as F_n+1^n(R_y(θ⃗)) <cit.>. It consists of n-fold controlled rotations of qubit n+1 about the y axis, one R_y rotation for each of the 2^n different classical values of the control qubits. The circuit representation of D=F_n+1^n(R_y(θ⃗)) is displayed in Figure <ref>. In general, F_n+1^n(R_y) is a product of 2^n two-level operators. In <cit.>, an implementation of F_n+1^n(R_y) in 2^n CNOT gates and 2^n rotations acting on qubit n+1 is proposed.
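As an intermediate step between the definition of D and the optimized circuit discussed next, the short sketch below computes the angles θ_i from the diagonal CAP and builds F_n+1^n(R_y(θ⃗)) literally as 2^n multi-controlled rotations of the reservoir qubit, one per classical value of the control register, using standard Qiskit controlled gates. This definition-based construction is not the cheap one: a transpiler will in general not reach the 2^n CNOT count of the efficient circuit discussed below, which is the point of the gate-count comparison. Qubit ordering and the sign convention of the sine block are assumptions of the sketch, and the CAP values are illustrative.

```python
import numpy as np
from qiskit import QuantumCircuit, QuantumRegister
from qiskit.circuit.library import RYGate

def uniformly_controlled_ry(W_diag, tau):
    """Definition-based F_{n+1}^n(R_y(theta)) for a diagonal CAP:
    cos(theta_i) = exp(-W_ii tau); RYGate(-2*theta_i) reproduces the
    [[cos, sin], [-sin, cos]] block acting on the reservoir (Qiskit's R_y(phi)
    rotates by phi/2, hence the factor of 2)."""
    n = int(np.log2(len(W_diag)))
    theta = np.arccos(np.exp(-W_diag*tau))
    sys_r = QuantumRegister(n, "q")
    res_r = QuantumRegister(1, "r")
    qc = QuantumCircuit(sys_r, res_r)
    for i, th in enumerate(theta):
        if abs(th) < 1e-12:
            continue                         # no absorption at this mesh point
        gate = RYGate(-2*th).control(n, ctrl_state=i)
        qc.append(gate, list(sys_r) + [res_r[0]])
    return qc

W_diag = np.zeros(16)                        # n = 4 mesh, absorbing near both edges
W_diag[:3] = 0.4
W_diag[-3:] = 0.4
qc = uniformly_controlled_ry(W_diag, tau=0.05)
```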
We illustrate the efficient implementation of the uniformly controlled rotation for the n=2 case. To fix notations, the definition of F_3^2(θ⃗) is displayed in Figure <ref>. An illustration of an efficient circuit for F_3^2(R_y(θ⃗)), as proposed in Ref. <cit.>, is shown in Figure <ref>. The rotation angles α used in the efficient circuit are linked to the original rotation angles θ by a linear transformation. In the specific case we consider here, i.e., with a diagonal CAP, the implementation of the dilation method thus requires 2^n CNOT and 2^n one-qubit gates. For comparison, in the same situation considered here, the SVD-based approach requires solely the implementation of the diagonal matrix Σ introduced in <cit.>, of size d × d, which takes 2^n+1 - 1 single-qubit z-rotation gates and 2^n+1 - 2 CNOT gates. We compare in table <ref> the number of required CNOT gates for some optimized methods as a function of the system register size n. We see that the dilation method outperforms the other techniques for the specific implementation of a local complex potential. For instance, the numerical simulations considered in Fig. <ref> and <ref> have been made using n=4 qubits, for which a direct implementation of the U matrix requires only 16 CNOT and 16 R_y operations per time step.

§ CONCLUSION

We analyze the possibility of performing wave function Schrödinger evolution on a grid where the evolution mixes both real- and imaginary-time evolution. The physical motivation behind adding an imaginary potential is the possibility to absorb particles that might be emitted from a localized source without the need to treat very extended space regions. This technique is standard in classical computing and might potentially lead to a significant reduction in the number of qubits when the grid itself is encoded on the qubit register. For the real-time evolution, we use the standard first-order Trotter-Suzuki method, where the system is evolved alternately in position and momentum space. We show that absorbing boundary conditions can be efficiently implemented using the dilation method. In particular, we propose a specific prescription for the ingredient of the dilation method that ensures that the success probability remains significant in most situations due to its connection with the physical wave function norm. A proof of principle of the technique is given in the case of a one-dimensional particle evolving freely on a mesh and being continuously absorbed at the two sides of the mesh. We show that results obtained on a quantum computer are identical to those obtained on a classical computer. Finally, it is demonstrated that, due to the local nature of the absorbing potential usually used in applications, the dilation matrix can be implemented efficiently, significantly reducing the required quantum resources.

§.§ Acknowledgments

This project has received financial support from the CNRS through the 80Prime program and the AIQI-IN2P3 project. J. Zhang is funded by the joint doctoral programme of Université Paris-Saclay and the Chinese Scholarship Council. This work is part of the HQI initiative (www.hqi.fr) and is supported by France 2030 under the French National Research Agency award number "ANR-22-PNQC-0002". We acknowledge the use of IBM Q cloud as well as the use of the Qiskit software package <cit.> for performing the quantum simulations.

§ PRACTICAL ASPECTS OF THE KINETIC TERM IMPLEMENTATION

For completeness, we give below some details regarding the implementation of the kinetic term in momentum space (see also <cit.>).
Since K is diagonal in momentum representation, we use the Fourier transform to go from space to momentum representation, apply e^-ip^2/2mΔ t, and then revert back to position representation.If F denotes the Fourier transform operator, we have:e^-iKΔ t = F^†e^-ip^2/2mΔ tF.So to apply the operator e^-iK Δ t, we first performa standard QFT, whose general expression for astandard basis state | x⟩ is <cit.>| x⟩⟶1/2^n/2⊗_l=1^n[ |0⟩ +e^2iπ x/2^l |1⟩].Once in Fourier space, we must implement the diagonal operator e^-ip^2Δ t/2m in the basis | p⟩.Our qubit basis is defined in terms of N=2^n discretized space points, n being the number of qubits considered, such that x_k=x_ min+k.Δ x where x_ min≤ x ≤ x_ max, Δ x =x_ max-x_ min/2^n-1=L/2^n-1 is the discretization step in position space.The discretized momentum p can be written as :p=(2π/L)∑_j=0^n-1 2^j p_j, with p_j={0,1} and p∈ [0;2π/L(2^n-1)].However, in this case, we have no negative values for the momentum. To adapt our Fourier transform to a wave packet with both negative and positive momenta, we shift the p range top=2π/L(∑_j=0^n-1 2^j p_j-2^n-1). Then, for the range of momentum, we have:p∈[-2π/L2^n-1;2π/L(2^n-1-1)] p_k=2π/L(1-1/2^n)(∑_j=0^n-1 2^j k_j-2^n-1)Then, we deduce:p^2 = (2π/L)^2 (1-1/2^n)^2(∑_j=0^n-12^2jk_j+∑_j=0^n-1∑_l>j^n-12^l+j+1k_lk_j- ∑_j=0^n-1 2^n+j k_j+2^2n-2) So we havee^-iH/ħΔ t |p_n-1… p_0⟩ =⊗_j=0^n-1 e^-i(2^2j-2^n+j)p_j/2mħΔ t Π_k=j+1^n-1 e^-i2^k+j+1p_kp_j/2mħΔ t |p_j⟩We dropped the last term since it corresponds toa global phase on the qubits. To implement this evolution, we thus needonly phase gates and controlled-phase gates. An inverse Fourier transform brings us back to the direct space, where we can implement the part of the evolutionoperator containing the potential.Finally, we mentionthat the above treatment of the momentum space leads to the kinetic term spectralnorm (see the interval given in (<ref>)):K= 1/2mp^2 = π^2/L^2 m 2^2n-1.99Smith2019A. Smith, M. Kim, F. Pollmann, and J. Knolle, Simulating quantum many-body dynamics on a current digital quantum computer, npj Quantum Inf 5, 1 (2019).Fauseweh2020B. Fauseweh and J.-X. Zhu, Digital quantum simulation of non-equilibrium quantum many- body systems, arXiv:2009.07375 (2020). Macridin2018A. Macridin, et al. Digital quantum computation of fermion-boson interacting systems, Phys. Rev. A 98, 042312 (2018). Jordan2012S. P. Jordan, K. S. Lee, and J. Preskill, Quantum algorithms for quantum field theories, Science 336, 1130 (2012). Meng2023 Z. Meng and Y. YangQuantum computing of fluid dynamics using the hydrodynamic Schrödinger equation, Physical Review Research 5, 033182 (2023)Bha22 K. Bharti et al., Noisy intermediate-scale quantum (NISQ) algorithms, Rev. Mod. Phys. 94, 015004 (2022). Mot19 M. Motta, C. Sun, A. T. K. Tan, M. J. O'Rourke, E. Ye, A. J. Minnich, F. G. S. L. Brandao, and G. K.-L. Chan, Determining eigenstates and thermal states on a quantum computer using quantum imaginary time evolution, Nature Physics 16, 205 (2019).McA19 S. McArdle, T. Jones, S. Endo, Y. Li, S. C. Benjamin, and X. Yuan, Variational ansatz-based quantum simulation of imaginary time evolution, npj Quantum Information 5 (2019).Gom20 N. Gomes, F. Zhang, N. F. Berthusen, C.-Z. Wang, K.-M. Ho, P. P. Orth, and Y. Yao, Efficient step-merged quantum imaginary time evolution algorithm for quantum chemistry, Journal of Chemical Theory and Computation 16, 6256 (2020). Lan22Fabian Langkabel and Annika Bande,Quantum-Compute Algorithm for Exact Laser-Driven Electron Dynamics in Molecules,J. Chem. 
Theory Comput.18, 12, 7082 (2022). arXiv:2205.10543.Yua18 Xiao Yuan, Suguru Endo, Qi Zhao, Ying Li, Simon Benjamin, Theory of variational quantum simulation, Quantum 3, 191 (2019)End20 S. Endo, J. Sun, Y. Li, S. C. Benjamin, and X. Yuan, Variational quantum simulation of general processes, Phys. Rev. Lett. 125, 010501 (2020). Swe15 R. Sweke, I. Sinayskiy, D. Bernard, and F. Petruccione, Universal simulation of markovian open quantum systems, Phys. Rev. A 91, 062308 (2015). Swe16 R. Sweke, M. Sanz, I. Sinayskiy, F. Petruccione, and E. Solano, Digital quantum simulation of many-body non-markovian dynamics, Phys. Rev. A 94, 022317 (2016). Spa18 C. Sparrow, E. Martín-López, N. Maraviglia, A. Neville, C. Harrold, J. Carolan, Y. N. Joglekar, T. Hashimoto, N. Matsuda, J. L. OBrien, D. P. Tew, and A. Laing, Simulating the vibrational quantum dynamics of molecules using photonics, Nature 557, 660 (2018).Hu20 Z. Hu, R. Xia, and S. Kais, A quantum algorithm for evolving open quantum dynamics on quantum computing devices, Scientific Reports 10 (2020).Hea21 K. Head-Marsden, S. Krastanov, D. A. Mazziotti, and P. Narang, Capturing non-markovian dynamics on near-term quantum computers, Phys.Rev. Research 3, 013182 (2021).Hu21 Z. Hu, K. Head-Marsden, D. A. Mazziotti, P. Narang, and S. Kais, A general quantum algorithm for open quantum dynamics demonstrated with the Fenna-Matthews-Olson complex, (2021), arXiv:2101.05287. Tur22 F. Turro, A. Roggero, V. Amitrano, P. Luchi, K. A. Wendt, J. L. Dubois, S. Quaglioni, and F. Pederiva, Imaginary-time propagation on a quantum chip Phys. Rev. A 105, 022440 (2022) Lin21 S.-H. Lin, R. Dilip, A. G. Green, A. Smith, and F. Pollmann, Real- and imaginary-time evolution with compressed quantum circuits, PRX Quantum 2, 010342 (2021).Liu21 T. Liu, J.-G. Liu, and H. Fan, Probabilistic nonunitary gate in imaginary time evolution, Quantum Inf. Process. 20, 204 (2021).Kos22 Taichi Kosugi, Yusuke Nishiya, Hirofumi Nishi, and Yu-ichiro Matsushita, Imaginary-time evolution using forward and backward real-time evolution with a single ancilla: First-quantized eigensolver algorithm for quantum chemistry,Phys. Rev. Research 4, 033121 (2022). Schlim2022 A. W. Schlimgen , Kade Head-Marsden, LeeAnn M. Sager-Smith, Prineha Narang, and David A. MazziottiQuantum State Preparation and Non-Unitary Evolution with Diagonal Operators,Phys. Rev. A 106, 022414(2022)Wei20 S. Wei, H. Li, and G. Long A Full Quantum Eigensolver for Quantum Chemistry Simulations. Research, 2020, (2020). Chi12 A.M. Childs and N. Wiebe, Hamiltonian simulation using linear combinations of unitary operations, Quant. Inf. and Comp.12, 901 (2012).Bog98 Bruce M. Boghosian, Washington Taylor, Simulating quantum mechanics on a quantum computer, Physica D: Nonlinear Phenomena, 120, 30 (1998).Benenti2008 G. Benenti and G. Strini, Quantum simulation of the single-particle Schrödinger equation, Am. J. Phys. 76, 657-663 (2008).Chi22 A.M. Childs, J. Leng, T. Li, J.P. Liu, C. Zhang, Quantum simulation of real-space dynamics, Quantum 6, 860 (2022). Cha23 Hans Hon Sang Chanand Richard Meisterand Tyson Jonesand David P. Tewand Simon C.Benjamin, Grid-based methods for chemistry simulations on a quantum computer, Science Advances 9,eabo7484 (2023).Tro59H. F. Trotter, On the product of semi-groups of operators, Proc. Am. Math. Soc. 10, 545 (1959).Suz85 M. Suzuki, Decomposition Formulas of Exponential Operators and Lie Exponentials with Some Applications to Quantum Mechanics and Statistical Physics, J. Math. Phys. (N.Y.) 
26, 601 (1985).NielsenChuang Michael A. Nielsen and Isaac L. Chuang.Quantum Computation and Quantum Information. Cambridge University Press, Cambridge ; New York, 10th anniversary ed edition, 2010.Ayr23 T. Ayral, P. Besserve, D. Lacroix and A. Ruiz Guzman, Quantum computing with and for many-body physics, Eur. Phys. J. A 59 (2023).Qis21 Qiskit Development Team, Qiskit: An Open-source Framework for Quantum Computing, (2021). https://doi.org/10.5281/zenodo.2573505doi:10.5281/zenodo.2573505.Kos1986 R. Kosloff and D. Kosloff,Absorbing Boundaries for Wave Propagation Problems,J. of Comp. Phys. 63, 363-376 (1986) Fei82 M.D. Feit, J. Fleck,Jr.,A. Steiger, Solution of the Schrödinger equation by a spectral method, J. Comput.Phys. 47, 412 (1982).Bal97 N. Balakrishnan, C. Kalyanaraman, N. Sathyamurthy, Time-dependent quantum mechanical approach to reactive scattering and related processes, Phys. Rep. 280, 79 (1997).Kro22 A. M. Krol, K. Mesman, A. Sarkar, M. Moller, Z. Al-Ars, Efficient Decomposition of Unitary Matrices in Quantum Circuit Compilers, Appl. Sci. 12, 759 (2022).Sch22 Anthony W. Schlimgen, Kade Head-Marsden, LeeAnn M. Sager-Smith, Prineha Narang,and David A. Mazziotti,Quantum state preparation and nonunitary evolution with diagonal operators, Phys. Rev. A 106, 022414 (2022)She06 V. Shende, S. Bullock, and I. Markov, Synthesis of quantum–logic circuits, IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 25, 1000 (2006). Tuc99 R. R. Tucci A Rudimentary Quantum Compiler, 2nd Edition, quant-ph/9902062.Mottonen2004M. Mottonen et al., Quantum circuits for general multi-qubit gates, Phys. Rev. Lett. 93, 130502, 2004.Mot06 M. Mottonen and J. Vartiainen, Decompositions of general quantum gates, Ch. 7 in Trends in Quantum Computing Research (NOVA Publishers, New York), 2006.arXiv:quant-ph/0504100 | http://arxiv.org/abs/2311.15859v1 | {
"authors": [
"Mariane Mangin-Brinet",
"Jing Zhang",
"Denis Lacroix",
"Edgar Andres Ruiz Guzman"
],
"categories": [
"quant-ph"
],
"primary_category": "quant-ph",
"published": "20231127142541",
"title": "Efficient solution of the non-unitary time-dependent Schrodinger equation on a quantum computer with complex absorbing potential"
} |
Institute for Physics of Microstructures, Russian Academy of Sciences, 603950 Nizhny Novgorod, GSP-105, Russia Lobachevsky State University of Nizhny Novgorod, 603950 Nizhny Novgorod, RussiaInstitute for Physics of Microstructures, Russian Academy of Sciences, 603950 Nizhny Novgorod, GSP-105, Russia Lobachevsky State University of Nizhny Novgorod, 603950 Nizhny Novgorod, Russia Moscow Institute of Physics and Technology, Dolgoprudnyi, Moscow Region 141701, RussiaProximity phenomena and induced superconducting correlations in heterostructures are shown to be strongly affected by thenonlocal nature of the electronic attraction. The latter can trigger the formation of Cooper pairs consisting of electrons localized in neighbouring layers even in the absence of direct quasiparticle transfer between the layers. We investigate the manifestations of such nonlocal pairing and resulting unconventional induced superconductivity in an exemplary two-dimensional (2D) electronic system coupled to a conventional superconductor. The interplay between the quasiparticle tunneling and spin-triplet interlayer pairing is shown to generatethe odd-frequency superconducting correlations in the 2D material which give rise to the paramagnetic contribution to the Meissner response and affect the energy resolved quasiparticle density of states. Experimental evidence for the above nonlocal interface pairing would provide new perspectives in engineering the unconventional superconducting correlations in heterostructures. Unconventional superconductivity and paramagnetic Meissner response triggered by nonlocal pairing interaction in proximitized heterostructures A. S. Mel'nikov January 14, 2024 ==============================================================================================================================================§ INTRODUCTION For more than half a century the physics of proximity phenomena in various superconducting heterostructures remains an attractive research direction both for experimentalists and theoreticians. The key mechanism underlying the proximity effect is known to arise from the electron transfer between the superconducting and nonsuperconducting material which results in the generation of the induced superconducting correlations in the normal subsystem <cit.>. The structure of these correlations is determined not only by the order parameter of the primary superconductor but also by the properties of the quasiparticle excitations inside the nonsuperconducting material. As a result, manipulating the electronic spectrum of the latter we get a unique possibility to engineer the induced superconducting state. To tune, e.g., the spin structure of Cooper pairs one can exploit the effect of exchange field in ferromagnetic subsystems <cit.> or the spin-orbit effects arising at the interfaces in heterostructures or in non-centrosymmetric materials <cit.>. On this way we can get very exotic structure of superconducting correlations providing the possibility to control both the equilibrium and transport effects in superconducting heterostructures. 
These unconventional superconducting correlations are particularly interesting in the context of recent development of the field of topologically protected quantum computations <cit.> and superconducting spintronics <cit.>.Is the above mentioned electron transfer between the subsystems the only mechanism underlying the proximity effect in heterostructures?An obvious answer to this question is positive provided we disregard the nonlocal nature of the attraction between the electrons responsible for the superconductivity phenomenon. However, in real systems this attractive interaction mediated, e.g., by phononsis not necessary local and can in principle bind the electrons even separated by the interface between the materials. In other words the interface impenetrable for electrons can be still transparent for the phonons. Certainly, different crystal lattice structures of contacting solids and, thus, different elastic properties should result in the reflection of the elastic waves incident on the interface. This reflection as well as the screening effects are expected to weaken any attractive forces between the quasiparticles localized in neighbouring subsystems. Still, if this nonlocal attraction is nonzero it can cause the formation of Cooper pairs of electrons positioned, e.g., in neighboring layers of the multilayered structure. This scenario of interlayer pairing is not completely new, of course, and previously it was discussed in the context of different layered superconductors such as transition metal dichalcogenides and high-T_c cuprates <cit.>. An important property of such interlayer pairing is that due to the nonlocality of the Cooper pair wavefunction (or more rigorously, the anomalous Green function) the Pauli principle does no more impose well known severe restrictions on the spins of electrons in the pair <cit.> which usually hamper the formation of triplet superconducting correlations. Exactly this argument in favour of possible triplet interlayer pairing motivated A. I. Larkin and K. B. Efetov <cit.> to consider this type of correlations to explain the extremely high upper critical fields in TaS_2 (pyridine) which were shown to exceed the paramagnetic limit <cit.>. These theoretical considerations of the interlayer pairing have been further developed <cit.> in the context of extensive studies of superconductivity in cuprates which also can be well described by the model of identical superconducting layers. All the above theoretical works were devoted to the study of natural layered compounds and, thus, assumed the coinciding electronic structure of the individual layers.The goal of our work is to apply the idea of Larkin and Efetov to the artificial heterostructures where the neighboring layers can possess quite different individual electronic characteristics including the difference in the normal state band spectra as well as different pairing properties. Considering the formation of the pairs consisting of electrons with different band spectra one can immediately notice the formal analogy of this problem to the one describing a standard singlet superconductor with the quasiparticle spectrum split by the Zeeman or exchange field. 
Certainly, the effective exchange field in our scenario will depend on the quasiparticle momentum but the basic features of the system including the depairing effect of the difference in the electronic spectra, formation of the odd - frequency superconducting correlations and the inhomogeneous Fulde - Ferrell - Larkin - Ovchinnikov (FFLO) state should be similar to the well known models describing the superconductors in the presence of the spin splitting field <cit.> (see also Ref. <cit.> and references therein). Let us emphasize that all these features are expected to appear in heterostructures without any ferromagnetic layers which could provide the source of the true exchange field determined by the interaction of electron spins with ferromagnetic ordering. This observation looks particularly interesting if we remind some rather old experiments indicating the presence of low temperature paramagnetic contribution to the Meissner response in superconducting cylinders covered by thin normal metal layers <cit.>. Several theoretical works argue that this phenomenon can be associated with the orbital effects <cit.>, the electronic repulsion in the normal metal layer <cit.>, the appearance of the p-wave superconductivity at low temperatures <cit.>, and the effects of the spin-orbit interaction <cit.>. In view of the above discussion this paramagnetic response could originate also from the odd-frequency superconducting correlations generated by the nonlocal electron pairing according to the Larkin - Efetov mechanism. Another interesting application of the interlayer pairing arises if we consider its role in Majorana - type systems <cit.> where this mechanism can probably help to get rid of necessity of rather high magnetic fields providing the Zeeman splitting of energy band in Majorana nanowires. Motivated by all these arguments we studied the manifestation of the Larkin - Efetov mechanism in two exemplary systems: (i) a bilayer consisting of thin films with a certain energy shift of the conduction bands; (ii) a two dimensional electron gas (2DEG) placed in contact with a thick superconducting layer (SC). Two layer model.— We proceed with the consideration of the phenomenon of interlayer pairing in a two layer model which can be viewed as the generalization of the one studied previously in <cit.>. The key point is that we assume the normal quasiparticle spectra to differ by a certain constant shift due to different conduction band offsets. Note that for simplicity we neglect here the Cooper pairing in each individual layer. The total Hamiltonian accounting for the interlayer pairing takes the form: H = ∑_j = 1,2H_j + H_t + H_ int, whereH_j = ∫ d^2𝐫 ψ^†_jσ(𝐫)ξ̂_jψ_jσ(𝐫),describe isolated two-dimensional layers, σ = ↑,↓ denotes spin degrees of freedom (summation over repeated indices is implied), ξ̂_j = -∇_𝐫^2/2m - μ_j, and m is the effective mass. The relative shift of the conduction bands is expressed as (μ_1 - μ_2) = 2χ, where μ_j is the difference between the chemical potential and the bottom of the corresponding energy band. 
The tunnel Hamiltonian has the formH_t = ∫ d^2𝐫[tψ_1σ^†(𝐫)ψ_2σ(𝐫) + t^*ψ_2σ^†(𝐫)ψ_1σ(𝐫)],and the interlayer electron-electron interaction is described by the termH_ int = U_0/2∫ d^2𝐫 ψ_1σ^†(𝐫)ψ_2σ'^†(𝐫)ψ_2σ'(𝐫)ψ_1σ(𝐫).Assuming the in-plane translational symmetry and spatially homogeneous interlayer pairing state, we obtain the following system of Gor'kov equations written in the Matsubara frequency - momentum representation <cit.>[ -iω_n + τ̌_zξ_1𝐤ť;ť^† -iω_n + τ̌_zξ_2𝐤 ][ Ǧ_11 Ǧ_12; Ǧ_21 Ǧ_22 ] = 1,where ω_n = 2π T(n+1/2), T is temperature, n is an integer, ξ_j𝐤 = 𝐤^2/2m - μ_j, and τ̌_i (i = x,y,z) are the Pauli matrices acting in the electron-hole space. The coupling matrix ť^†, the Green functions of the subsystems (Ǧ_11 and Ǧ_22) and the mixed ones (Ǧ_12 and Ǧ_21) acquire a nontrivial structure in the particle-hole spaceť^† = [t^*Δ̂_ int; -Δ̂_ int^* -t ] , Ǧ_ij = [Ĝ_ij F̂_ij; F̂^†_ijĜ̅̂_ij ] ,due to the presence of the interlayer gap function Δ̂_ int.We demonstrate the analogy between the effects of the band structure on nonlocal Cooper pairs and the ones of the spin-splitting field in a conventional superconductor by solving the self-consistency equationΔ̂_ int = -U_0/2T∑_ω_n∫d^2𝐤/(2π)^2F̂_12(𝐤;ω_n),for a particular case of the spin-singlet interlayer pairing Δ̂_ int = d_0(iσ̂_y) and U_0 = -|U_0|.Substituting the solution of Eq. (<ref>) into (<ref>), we derive <cit.>1 = -U_0/4T∑_ω_n∫d^2𝐤/(2π)^2[1/ω_n^2 + E_-^2 + 1/ω_n^2 + E_+^2-χ^2/√(ξ_𝐤^2|t|^2 + χ^2(ξ_𝐤^2 + |d_0|^2))(1/ω_n^2 + E_-^2 - 1/ω_n^2 + E_+^2)],where E_±(𝐤) is the quasiparticle energy spectrum of the two-layer system E_±^2(𝐤) = |t|^2 + |d_0|^2 + ξ_𝐤^2 + χ^2± 2√(ξ_𝐤^2(|t|^2 + χ^2) + χ^2|d_0|^2) ,and ξ_𝐤 = (ξ_1𝐤 + ξ_2𝐤)/2. The form of the gap equation (<ref>) is similar to the one for the superconductor with Rashba spin-orbit coupling under the influence of the Zeeman field (see, e.g., Eq. (27) in Ref. <cit.>). Thus, we anticipate that a relative shift of the conduction bands should provide a depairing effect for interlayer Cooper pairs whereas the tunnel coupling mixes the states of isolated layers and should play a role similar to the spin-orbit interaction. For the solution of the self-consistency equation we assume μ_j to be much larger than the cut-off energy Ω and then eliminate the cut-off in favor of the superconducting critical temperature of the interlayer order parameter T_c0^ int at zero conduction band shift χ=0 <cit.>. The resulting gap equation reads ln(T/T_c0^ int) + 2π TRe∑_ω_n >0[1/ω_n -- (|t|^2 + iζ)/ζ√(-ω_n^2 - |d_0|^2 + χ^2 + |t|^2+2iζ)] = 0,where ζ = √(ω_n^2(|t|^2 + χ^2) + |t|^2|d_0|^2). Typical |d_0(χ)| plots for different T shown in Fig. <ref> demonstrate the suppression of the interlayer gap function by the band splitting. Fig. <ref>(a) shows that for rather low temperatures and weak tunnel couplings there appear χ-regions with more than one solution of the gap equation, which is typical for the paramagnetic effect in superconductors. Thus, by the analogy with the spin-split superconductors <cit.> we argue that the relative band shift can lead to the appearance of the odd-frequency interlayer superconducting correlations and the FFLO instability. Fig. <ref>(b) shows that the quasiparticle tunnelling suppresses the depairing effect of the band splitting. Note that if we now consider the joint effect of the relative band shift and the true Zeeman field, one can naturally expect the emergence of the reentrant superconductivity similar to the situation considered in Ref. 
<cit.>.2DEG in contact with a thick s-wave superconductor.— As a next step, we investigate the joint effect of the nonlocal pairing and the proximity induced superconductivity on the spectral properties and the Meissner response of 2DEG placed in contact with a thick SC layer. Our goal here is to demonstrate that one can obtain a nontrivial behavior of the density of states in 2DEG along with the paramagnetic contribution to the Meissner response in a model configuration, which is close to the experiments of Mota and co-workers <cit.>. The superconductor is described by the termH_s = ∫ d^3𝐑[ψ_sσ^†(𝐑)ξ̂_sψ_sσ(𝐑)+ Δ_s(𝐑)ψ^†_s↑(𝐑)ψ_s↓^†(𝐑) + Δ_s^*(𝐑)ψ_s↓(𝐑)ψ_s↑(𝐑)],where 𝐑 is a three-dimensional vector in the superconducting region, ξ̂_s = -∇_𝐑^2/2m - μ_s, and Δ_s(𝐑) is the superconducting gap function. We choose the creation and annihilation operators in 2DEG ψ^†_nσ(𝐫) and ψ_nσ(𝐫) to be normalized to the layer volume {ψ_nσ(𝐫),ψ_nσ'^†(𝐫')}=d^-1δ_σσ'δ(𝐫-𝐫'), where 𝐫 = (X, Y, Z = 0), {A,B} = AB + BA, and d is the thickness of the 2D layer. Up to the factor of d, the Hamiltonian of 2DEG, the tunnel Hamiltonian and the interlayer interaction have the form (<ref>), (<ref>), and (<ref>), respectively, with ψ_1σ(𝐫)→ψ_sσ(𝐫) and ψ_2σ(𝐫)→ψ_nσ(𝐫). Neglecting the effects of the interlayer interaction in the SC layer, we derive the Gor'kov equations for the Matsubara Green's functions in 2DEG Ǧ_n. The resulting equations can be significantly simplified when the characteristic interatomic distance in the SC layer a_0 is much less than the one in 2DEG <cit.>. Under this model assumption we get a set of local equations <cit.>[-iω_n + τ̌_zξ_n(𝐫_1)-Σ̌(ω_n)]Ǧ_n(𝐫_1,𝐫_2) =d^-1δ(𝐫_1 - 𝐫_2), Σ̌(ω_n) = π d ν_s a_0^2ť^†ǧ_s(ω_n)ť , where ν_s is the density of states per spin projection in the normal-metal state of the superconductor, andǧ_s(ω_n)= iω_n -|Δ_s|σ̂_yτ̌_̌y̌/√(ω_n^2 + |Δ_s|^2) ,is the quasiclassical Green's function in the SC layer.Contrary to the previous setup, the model (<ref>) allows the odd-frequency superconducting correlations in proximitized 2DEG only for the spin-triplet interlayer gap function. We take as a model exampleΔ̂_ int= d_tσ̂_x. For simplicity we choose t and d_t to be real numbers. It is convenient to absorb the dimensional prefactors in the self-energy (<ref>) into the definitions of t and d_t: π d ν_st^2a_0^2 → t^2, π d ν_sd_t^2a_0^2→ d_t^2, π d ν_std_ta_0^2 → td_t, so that t^2, d_t^2, and td_t in further consideration are given in the energy units. Let us, first, discuss the structure of the resulting self-energy in the particle-hole space Σ̂_ij (i,j = 1,2) Σ̂_11 = iω_n(t^2 + d_t^2)/√(ω_n^2 + |Δ_s|^2) + 2td_tΔ_sσ̂_z/√(ω_n^2 + |Δ_s|^2) , Σ̂_12 = -Δ_s(t^2 + d_t^2)(iσ̂_y)/√(ω_n^2 + |Δ_s|^2) - 2iω_ntd_tσ̂_x/√(ω_n^2 + |Δ_s|^2) . The remaining components Σ̂_22 and Σ̂_21 can be obtained from Eqs. (<ref>) via the relations Σ̂_22(ω_n) = -Σ̂_11^ T(-ω_n), Σ̂_21(ω_n) = Σ̂_12^†(-ω_n). The first term in the right-hand side of Eq. (<ref>) indicates that the spin-triplet interlayer pairing leads to the enhancement of the spin-singlet superconducting correlations in 2DEG, which survive in the limit t→ 0. The second term in Eq. (<ref>) shows that in the presence of both tunneling and the interlayer pairing 2DEG features spin-triplet odd-frequency superconducting correlations. 
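As a quick numerical illustration of this decomposition, the short sketch below (plain NumPy; the parameter values are illustrative and all energies are measured in units of Δ_s) evaluates the even-frequency spin-singlet amplitude and the odd-frequency spin-triplet amplitude entering Σ̂_12, and checks that the triplet piece is odd in ω_n and disappears once either the tunneling or the interlayer pairing is switched off. Following the convention above, t^2, d_t^2 and td_t are treated as quantities in energy units; taking td_t = √(t^2 d_t^2) additionally assumes positive couplings.

```python
# Minimal sketch: frequency dependence of the induced pairing amplitudes in 2DEG,
# transcribed from the self-energy expressions above. Energies in units of Delta_s.
import numpy as np

def anomalous_amplitudes(omega_n, t2, dt2, delta_s=1.0):
    """Return (singlet, triplet) amplitudes entering Sigma_12 at Matsubara frequency omega_n.

    singlet: even-frequency, spin-singlet piece ~ Delta_s (t^2 + d_t^2) / sqrt(w_n^2 + Delta_s^2)
    triplet: odd-frequency, spin-triplet piece  ~ 2 w_n t d_t          / sqrt(w_n^2 + Delta_s^2)
    """
    root = np.sqrt(omega_n**2 + delta_s**2)
    singlet = delta_s * (t2 + dt2) / root
    triplet = 2.0 * omega_n * np.sqrt(t2 * dt2) / root
    return singlet, triplet

T = 0.1                                   # temperature (illustrative)
n = np.arange(-20, 20)
omega_n = 2.0 * np.pi * T * (n + 0.5)     # fermionic Matsubara frequencies

s, tr = anomalous_amplitudes(omega_n, t2=0.3, dt2=0.3)
assert np.allclose(s, s[::-1])            # singlet amplitude is even in omega_n
assert np.allclose(tr, -tr[::-1])         # triplet amplitude is odd in omega_n
_, tr0 = anomalous_amplitudes(omega_n, t2=0.0, dt2=0.3)
assert np.allclose(tr0, 0.0)              # no tunneling -> no odd-frequency triplet part
```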
The diagonal elements Σ̂_11 and Σ̂_22 contain the Zeeman-type terms ∝ td_tΔ_sσ̂_z, so that the spin-triplet interlayer pairing can also result in an additional spin splitting for quasiparticle states in 2DEG. We reveal the effects of the spin-triplet interlayer pairing on the spectral properties of proximitized 2DEG by calculating the energy-resolved density of states in the two-dimensional layer <cit.> ν_2D(E) = 1/π Im Tr[Ĝ_n(𝐫,𝐫; ω_n → -iE + Γ )], where Γ is the broadening parameter. Typical colorplots of the density of states are shown in Fig. <ref>. Fig. <ref>(a) refers to the case t = 0 and illustrates the appearance of the induced gap in the spectrum of 2DEG in the absence of tunneling. The density of states for a finite tunneling rate is shown in Fig. <ref>(b). The resulting ν_2D(E) dependencies for rather small and fixed d_t are typical for proximitized 2DEG and possess two pairs of peaks, one of which marks the induced hard gap in the quasiparticle energy spectrum while the other is located near the gap of the parent superconductor E≈±Δ_s. The increase in the interlayer gap function d_t leads to the splitting of the peaks at the induced gap, which merge into a pronounced zero-bias peak at d_t = t. The spectral gap reopens upon further increase in d_t and tends to 2Δ_s. We demonstrate that the induced odd-frequency correlations can provide a paramagnetic contribution to the Meissner response. For this purpose, we derive linear relations 𝐣 = -Q𝐀 between the supercurrent 𝐣 and the vector potential 𝐀 for the model (<ref>) in both the clean and dirty limits <cit.>. Note that we analyze here only the contribution of the two-dimensional subsystem to the total response, while the full response of the structure should be, of course, diamagnetic. For the derivation we follow the standard approach described in Ref. <cit.>. Typical Q(T) dependencies for the model (<ref>) with the spin-triplet interlayer pairing are shown in Fig. <ref>. For simplicity, d_t is chosen to be temperature independent and we use the standard interpolation formula Δ_s(T) = Δ_0tanh(1.74√(T_c/T - 1)) for the gap function in the SC layer <cit.>, where T_c is the critical temperature of the parent superconductor. For the clean limit and d_t = t (see Fig. <ref>(a)) the superconducting correlations in 2DEG exhibit a paramagnetic response (Q<0), which grows with decreasing temperature. Such behavior is similar to the one observed experimentally in <cit.>, which may indicate the relevance of the considered effects for the analysis of the puzzling experimental data. The appearance of the paramagnetic response is consistent with the behavior of the density of states, which yields a zero-bias peak at t = d_t. Fig. <ref>(b) illustrates that changing the model parameters away from the zero-bias anomaly (d_t≠ t) restores the diamagnetic response at ultra-low temperatures. The results in Fig. <ref>(c) refer to the dirty limit and also indicate the paramagnetic contribution to the Meissner response at d_t = t within the considered temperature range. In contrast to the clean-limit regime, the paramagnetic response at ultra-low temperatures in the dirty limit is possible for a not too small interlayer gap function <cit.>. Finally, let us comment on the relation between the direct and inverse proximity effect in superconductor-normal-metal structures. In a standard situation, rather high transparency of the barrier between the subsystems implies a strong inverse proximity effect.
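(As an aside, the spectral features described above are straightforward to reproduce numerically. The sketch below is based on the closed-form local Green's function derived in the Supplemental Material together with the analytic continuation ω_n → -iE + Γ; the principal branch of the complex square root, the treatment of t^2 and d_t^2 as energies after absorbing the prefactors, and the parameter values themselves are assumptions made for illustration only.)

```python
# Minimal sketch of nu_2D(E) for the proximitized 2DEG, using
# nu_2D(E) = (1/pi) Im Tr[G_n(r,r; omega_n -> -iE + Gamma)]
# and the local Green's function quoted in the Supplemental Material.
# Energies in units of Delta_s; nu0 is the normal-state DOS per spin.
import numpy as np

def dos_2d(E, t2, dt2, delta_s=1.0, gamma=0.01, nu0=1.0):
    w = -1j * E + gamma                        # omega_n -> -iE + Gamma
    root = np.sqrt(w**2 + delta_s**2)
    w_tilde = w * (1.0 + (t2 + dt2) / root)
    h = 2.0 * np.sqrt(t2 * dt2) * delta_s / root
    nu = 0.0
    for sigma in (+1, -1):                     # sum over spin projections
        f = (delta_s * (t2 + dt2) + 2j * sigma * w * np.sqrt(t2 * dt2)) / root
        x = 1j * w_tilde + sigma * h
        nu += (np.pi * nu0 * x / np.sqrt(f**2 - x**2)).imag / np.pi
    return nu

energies = np.linspace(-2.0, 2.0, 801)
zero_bias_peak = [dos_2d(E, t2=1.0, dt2=1.0) for E in energies]  # d_t = t: peak at E = 0
induced_gap    = [dos_2d(E, t2=1.0, dt2=0.1) for E in energies]  # d_t != t: hard induced gap
# Consistency check: with t = d_t = 0 the normal-state value 2*nu0 is recovered.
assert abs(dos_2d(0.5, t2=0.0, dt2=0.0) - 2.0) < 1e-2
```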
Our results point out that in the presence of the interlayer pairing this relation can break down, namely the inverse proximity effect can be small whereas experimentally measurable effects of the induced superconducting correlations can be noticeable. Note that some indications of such phenomena have been recently observed in <cit.>. To sum up, we have studied the manifestations of the interlayer pairing in proximitized heterostructures. Depending on the geometry and dimensionality of the system, we have shown that the interlayer pairing can lead to the appearance of the odd-frequency superconducting correlations, FFLO instability, the paramagnetic contribution to the Meissner response, and the multi-peak structure of the density of states. We believe that the obtained results can be useful both for the analysis of experimental data on proximitized heterostructures and for engineering new types of superconducting states in systems with induced superconductivity. Since the considered mechanism can play a role of the Zeeman field, it can be possible that the related effects can be useful for development of new platforms for topologically protected qubits based on Majorana modes <cit.>.This work was supported by the Russian Science Foundation (Grant No. 20-12-00053).99McMillan1968 W. L. McMillan, Phys. Rev. 175, 537 (1968).BuzdinRMP2005 A. I. Buzdin, Rev. Mod. Phys. 77, 935 (2005).BergeretRMP2005 F. S. Bergeret, A. F. Volkov, and K. B. Efetov, Rev. Mod. Phys. 77, 1321 (2005).EdelsteinPRB2003 V. M. Edelstein, Phys. Rev. B 67, 020505(R) (2003). EschrigRPP2015 M. Eschrig, Rep. Prog. Phys. 78, 104501 (2015).LinderNP2015 J. Linder and J. W. A. Robinson, Nat. Phys. 11, 307 (2015).BobkovaJPCM2022 I. V. Bobkova, A. M. Bobkov, and M. A. Silaev, J. Phys.: Condens. Matter 34, 353001 (2022).MelnikobPU2022 A. S. Mel'nikov, S. V. Mironov, A. V. Samokhvalov, and A. I. Buzdin, Phys. Usp. 65, 1248 (2022).NayakRMP2008 C. Nayak, S. H. Simon, A. Stern, M. Freedman, and S. Das Sarma, Rev. Mod. Phys. 80, 1083 (2008).DasSarmaNPJQI2015 S. Das Sarma, M. Freedman, and C. Nayak, npj Quantum Inf. 1, 15001 (2015).EfetovJETP1975 K. B. Efetov and A. I. Larkin, Zh. Eksp. Teor. Fiz. 68, 155 (1975) [Sov. Phys. JETP 41, 76 (1975)].TesanovicPRB1987 Z. Tešanović, Phys. Rev. B 36, 2364(R) (1987).BulaevskiiPRB1990 L. N. Bulaevskii and M. V. Zyskin, Phys. Rev. B 42, 10230 (1990).KlemmLiu R. A. Klemm and S. H. Liu, Phys. Rev. B 44, 7526 (1991); S. H. Liu and R. A. Klemm, Phys. Rev. B 45, 415 (1992). KettemannPRB1992 S. Kettemann and K. B. Efetov, Phys. Rev. B 46, 8515 (1992).BCS J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Phys. Rev. 108, 1175 (1957).MorrisPRB1973 R. C. Morris and R. V. Coleman, Phys. Rev. B 7, 991 (1973).ChandrasekharAPL1962 B. S. Chandrasekhar, Appl. Phys. Lett. 1, 7-8 (1962).ClogstonPRL1962 A. M. Clogston, Phys. Rev. Lett. 9, 266 (1962).SaintJames D. Saint-James, G. Sarma, and E. J. Thomas, Type II Superconductivity, Commonwealth and International Library. Liberal Studies Divi (Elsevier, Science & Technology, New York, 1969).Fulde1964 P. Fulde and R. A. Ferrel, Phys. Rev. 135, A550 (1964).LarkinJETP1965 A. I. Larkin and Y. N. Ocvhinnikov, Zh. Eksp. Teor. Fiz, 47, 1136 (1964) [Sov. Phys. JETP 20, 762 (1965)].LinderRMP2019 J. Linder and A. V. Balatsky, Rev. Mod. Phys. 91, 045005 (2019).Mota P. Visani, A. C. Mota, and A. Pollini, Phys. Rev. Lett. 65, 1514 (1990); F. B. Müller-Allinger and A. C. Mota, Phys. Rev. Lett. 84, 3161 (2000).BruderPRL1998 C. Bruder and Y. Imry, Phys. Rev. Lett. 
80, 5782 (1998).FaucherePRL1999 A. L. Fauchère, W. Belzig, andd G. Blatter, Phys. Rev. Lett. 82, 3336 (1999).MakiPLA2000 K. Maki and S. Haas, Phys. Lett. A 272, 271 (2000).EspedalPRL2016 C. Espedal, T. Yokoyama, and J. Linder, Phys. Rev. Lett. 116, 127002 (2016).AlbrechtN2016 S. M. Albrecht, A. P. Higginbotham, M. Madsen, F. Kuemmeth, T. S. Jespersen, J. Nygård, P. Krogstrup, and C. M. Marcus, Nature (London) 531, 206 (2016).BommerPRL2019 J. D. S. Bommer, H. Zhang, Ö. Gül, B. Nijholt, M. Wimmer, F. N. Rybakov, J. Garaud, D. Rodic, E. Babaev, M. Troyer, D. Car, S. R. Plissard, E. P. A. M. Bakkers, K. Watanabe, T. Taniguchi, and L. P. Kouwenhoven, Phys. Rev. Lett. 122, 187702 (2019).summplemental See Supplemental Material at ... for the detailed derivation of the main results and additional numerical data. TewariNJP2011 S. Tewari, T. D. Stanescu, J. D. Sau, and S. Das Sarma, New J. Phys. 13, 065004 (2011).KopninBook N. Kopnin, Theory of Nonequilibrium Superconductivity (Oxford University Press, Oxford, 2001).BuzdinPRL2005 A. Buzdin, S. Tollis, and J. Cayssol, Phys. Rev. Lett. 95, 167003 (2005). KopninPRB2011 N. B. Kopnin and A. S. Melnikov, Phys. Rev. B 84, 064524 (2011).Svidzinski A. V. Svidzinski, Space-Inhomogeneous Problems in the Theory of Superconductivity (Nauka, Moscow, 1982).GrossZFP1986 F. Gross, B. S. Chandrasekhar, D. Einzel, K. Andreas, P. J. Hirschfeld, H. R. Ott, J. Beuers, Z. Fisk, and J. L. Smith, Z. Physik B - Condensed Matter 64, 175 (1986).PestovFTT2019 E. E. Pestov, Yu. N. Nozdrin, V. V. Rogov, I. Yu. Pashen'kin, and D. Yu. Vodolazov, Phys. Solid State 61, 1539 (2019). §.§ Supplemental Material: Unconventional superconductivity and paramagnetic Meissner response triggered by nonlocal pairing interaction in proximitized heterostructuresHere we provide the detailed derivation of the results in the main text and additional numerical data. The Material is organized as follows. In Sec. <ref> we present the description of the model for the two-dimensional electron gas (2DEG) coupled to a thick conventional superconductor (SC) in the presence of the interlayer pairing. In Sec. <ref> we provide the definition of the Matsubara Green's functions and the derivation of the Gor'kov equations for 2DEG/SC system (11). In Sec. <ref> we provide analytical expressions and additional numerical results for the density of states in proximitized 2DEG and the linear response of the induced superconducting correlations in 2DEG to the external magnetic field both in the clean and dirty limits. In Sec. <ref> we present the derivation of the self-consistency equation (7) for the two-layer model.§ MODEL OF 2DEG/SC STRUCTUREConsider a two-dimensional electron gas (Z = 0) proximity coupled to a conventional superconductor (Z > 0). Hereafter we use the units k_B = ħ = 1, where k_B is the Boltzmann constant and ħ is the Planck constant. The Hamiltonian of the system readsH = H_s + H_n + H_t + H_ int .The first termH_s = ∫ d^3𝐑[ψ_sσ^†(𝐑)ξ̂_sΨ_sσ(𝐑) + Δ_s(𝐑)ψ^†_s↑(𝐑)ψ_s↓^†(𝐑) + Δ_s^*(𝐑)ψ_s↓(𝐑)ψ_s↑(𝐑)],describes the s-wave superconductor (SC) and the second termH_n = d∫ d^2𝐫 ψ_nσ^†(𝐫)ξ̂_nψ_nσ(𝐫),is the Hamiltonian of 2DEG. 
Here σ = ↑, ↓ denotes spin degrees of freedom (summation over repeated spin indices is implied), d is the thickness of the normal-metallic layer, ξ̂_s = (-i∇_𝐑-e/c𝐀(𝐑))^2/2m_s - μ_s and ξ̂_n = (-i∇_𝐫-e/c𝐀(𝐫))^2/2m_n - μ_n stand for the quasiparticle kinetic energy operators in the SC and 2DEG with respect to the corresponding chemical potentials μ_s and μ_n, 𝐀 is the vector potential, m_s and m_n are the effective masses of the electrons in the subsystems, and Δ_s(𝐑) is the superconducting gap function in the SC layer. The creation and annihilation operators in 2DEG ψ_nσ^†(𝐫) and ψ_nσ(𝐫) are normalized to the layer volume {ψ_nσ(𝐫),ψ_nσ'^†(𝐫')} = d^-1δ_σσ'δ(𝐫-𝐫'). The tunnel Hamiltonian has the form:H_t = d∫ d^2𝐫[ψ_sσ^†(𝐫)t(𝐫)ψ_nσ(𝐫) + ψ_nσ^†(𝐫)t^*(𝐫)ψ_sσ(𝐫)],where t(𝐫) is the tinneling matrix element and we denote ψ_sσ(𝐫) = ψ_sσ(X,Y,Z = 0) for brevity. We consider the effects of the interlayer electron pairing on the induced superconductivity and the linear response of proximitized 2DEG to the applied magnetic field. Assuming that the interaction is relevant in the vicinity of the SC/2DEG interface, we choose the following form of the interaction H_ int = U_0/2 d∫ d^2𝐫 ψ^†_sσ(𝐫)ψ_nσ'^†(𝐫)ψ_nσ'(𝐫)ψ_sσ(𝐫).§ DEFINITION OF THE MATSUBARA GREEN'S FUNCTIONS AND DERIVATION OF EQS. (11) IN THE MAIN TEXT Throughout the second part of our work we use the following Green's functions:Ǧ_s(𝐗_1,𝐗_2) = ⟨ T_τψ̌_s(𝐗_1)ψ̌_s^†(𝐗_2)⟩ , Ǧ_n(𝐱_1,𝐱_2) = ⟨ T_τψ̌_n(𝐱_1)ψ̌_n^†(𝐱_2)⟩ , Ǧ_t(𝐗_1,𝐱_2) = ⟨ T_τψ̌_s(𝐗_1)ψ̌_n^†(𝐱_2)⟩ , 𝒢̌_t(𝐱_1,𝐗_2) = ⟨ T_τψ̌_n(𝐱_1)ψ̌_s^†(𝐗_2)⟩ . Here 𝐱 = (𝐫,τ) and 𝐗= (𝐑,τ), τ is the imaginary time variable in the Matsubara technique, T_τ is the time-ordering operator. We define the Nambu spinors ψ̌_s(𝐗) and ψ̌_n(𝐱) as ψ̌_s(𝐗) = [ψ_s↑(𝐗),ψ_s↓(𝐗),ψ^†_s↑(𝐗),ψ^†_s↓(𝐗)]^ T , ψ̌_n(𝐱) = [ψ_n↑(𝐱), ψ_n↓(𝐱), ψ^†_n↑(𝐱), ψ^†_n↓(𝐱)]^ T .For brevity below we present the equations of motion for the field operators and the derivation of the Gor'kov equations in the absence of the external magnetic field. We reveal the full equations with the vector potential in the end of this section. For the considered model (<ref>), fermionic operators in the SC layer satisfy the equations of motion∂/∂τψ̌_s(𝐗) = -[τ̌_zξ_s(𝐑) + Δ̌_s(𝐑)]ψ̌_s(𝐗) - dδ(Z)ť_ tun(𝐫)ψ̌_n(𝐱)- U_0/2dδ(Z)τ̌_z[ψ_nσ^†(𝐱)ψ_nσ(𝐱)]ψ̌_s(𝐗),where τ̌_i (i = x,y,z) are the Pauli matrices acting in the particle-hole space. The tunneling matrix in the Nambu space is defined as ť_ tun(𝐫) =diag[t(𝐫),-t^*(𝐫)], and the superconducting gap matrix has the formΔ̌_s(𝐑) = [ 0 Δ̂_s(𝐑); Δ̂_s^†(𝐑) 0 ] ,where Δ̂_s(𝐑) = (iσ̂_y)Δ_s(𝐑), σ̂_i (i = x,y,z,) are the Pauli matrices acting in the spin space. Equations of motion for the field operators in 2DEG are as follows:∂/∂τψ̌_n(𝐱) = -τ̌_zξ_n(𝐫)ψ̌_n(𝐱) - ť^*_ tun(𝐫)ψ̌_s(𝐱) -U_0/2τ̌_z[ψ_sσ^†(𝐱)ψ_sσ(𝐱)]ψ̌_n(𝐱).For the derivation of the Gor'kov equations for the Green's functions (<ref>) one should decouple thermodynamic averages of four fermionic operators <cit.>. For instance, considering the equation for the normal correlation function in 2DEG [Ĝ_n]_αβ, we have the combinations⟨ T_τψ_sσ^†(𝐱_1)ψ_sσ(𝐱_1)ψ_nα(𝐱_1)ψ_nβ^†(𝐱_2) ⟩ = ⟨ T_τψ_sσ^†(𝐱_1)ψ_sσ(𝐱_1)⟩⟨ T_τψ_nα(𝐱_1)ψ_nβ^†(𝐱_2)⟩ -⟨ T_τψ_sσ^†(𝐱_1)ψ_nα(𝐱_1)⟩⟨ T_τψ_sσ(𝐱_1)ψ_nβ^†(𝐱_2)⟩+⟨ T_τψ_sσ^†(𝐱_1)ψ_nβ^†(𝐱_2)⟩⟨ T_τψ_sσ(𝐱_1)ψ_nα(𝐱_1)⟩ .In the present work we focus on the effects of the interlayer anomalous averages represented, for instance, by the last term in the right-hand side of Eq. (<ref>) and neglect the other contributions. Using Eqs. 
(<ref>) and (<ref>), we derive the equations for the Matsubara Green's functions in 2DEG[∂/∂τ_1 + τ̌_zξ_n(𝐫_1)]Ǧ_n(𝐱_1,𝐱_2) + [t^*(𝐫_1)Δ̂_ int(𝐫_1); -Δ̂_ int^*(𝐫_1) -t(𝐫_1) ]Ǧ_t(𝐱_1,𝐱_2) =d^-1δ(𝐱_1 - 𝐱_2),where we introduced the interlayer gap function[Δ̂_ int(𝐫)]_αβ = -U_0/2⟨ψ_nα(𝐱)ψ_sβ(𝐱)⟩ .Equations for the Green's functions can be more conveniently in the Matsubara frequency representation ω_n = 2π T(n+1/2). We set τ = τ_1 - τ_2 and writeǦ(𝐫_1, 𝐫_2) = ∫_0^1/Tdτ Ǧ(𝐱_1,𝐱_2)e^iω_nτ ,omitting the frequency argument for brevity. Equations for the Green's functions in 2DEG written in the Matsubara frequency-coordinate representation have the form[-iω_n + τ̌_zξ_n(𝐫_1)]Ǧ_n(𝐫_1,𝐫_2) +[t^*(𝐫_1)Δ̂_ int(𝐫_1); -Δ̂_ int^*(𝐫_1) -t(𝐫_1) ]Ǧ_t(𝐫_1,𝐫_2) =d^-1δ(𝐫_1 - 𝐫_2).We derive the equation for the tunneling Green function in a similar fashion[-iω_n + τ̌_zξ_s(𝐑_1) + Δ̌_s(𝐑_1)]Ǧ_t(𝐑_1,𝐫_2) + dδ(Z_1)[ t(𝐫_1) -Δ̂_ int^ T(𝐫_1); Δ̂_ int^†(𝐫_1)-t^*(𝐫_1) ]Ǧ_n(𝐫_1,𝐫_2) = 0.Neglecting the back action of 2DEG on the superconductor and the effects of the interlayer interaction in the SC layer, the Gor'kov equations in the SC layer read [-iω_n + τ̌_zξ_s(𝐑_1) + Δ̌_s(𝐑_1)]Ǧ_s(𝐑_1,𝐑_2) = δ(𝐑_1 - 𝐑_2).To obtain a closed system of equations for the Green's functions in 2DEG we follow Ref. <cit.> and write the solution of Eq. (<ref>)Ǧ_t(𝐑_1,𝐫_2) = -d∫ d^2𝐫'Ǧ_s(𝐑_1,𝐫')ť(𝐫')Ǧ_n(𝐫',𝐫_2).Substituting Eq. (<ref>) into Eq. (<ref>) and restoring the vector potential, we get{-iω_n + τ̌_z[1/2m_n(-i∇_𝐫_1-τ̌_ze/c𝐀(𝐫_1))^2-μ_n]}Ǧ_n(𝐫_1,𝐫_2)- ∫ d^2𝐫 Σ̌(𝐫_1,𝐫)Ǧ_n(𝐫,𝐫_2) = d^-1δ(𝐫_1 - 𝐫_2), Σ̌(𝐫_1,𝐫) = dť^†(𝐫_1)Ǧ_s(𝐫_1,𝐫)ť(𝐫). HereǦ_s(𝐫_1,𝐫) stands for the Green's function of an isolated SC layer taken at the SC/2DEG interface Z_1 = Z = 0. The 4×4 matrix Green's function in Eqs. (<ref>) has the following structure in the particle-hole spaceǦ = [Ĝ F̂; F̂^†Ĝ̅̂ ] .In the present work we assume that the in-plane momentum projection is conserved during the tunneling process. In this case the tunneling amplitude t is independent of the coordinate along 2DEG/SC interface <cit.>. For simplicity, we also assume that the interlayer gap function Δ̂_ int is homogeneous. Note that Eqs. (<ref>) can be significantly simplified when the characteristic interatomic distance in the SC layer a_0 is much less than the one in 2DEG. Indeed, for rapidly oscillating Green's function in the SC layerǦ_s(𝐫_1,𝐫) = m_s/2π{τ̌_zcos(k_Fs|𝐫_1-𝐫|)/|𝐫_1 - 𝐫| +ǧ_s(ω_n)sin(k_Fs|𝐫_1 - 𝐫|)/|𝐫_1 - 𝐫|}e^-m_s√(ω_n^2 + |Δ_s|^2)/k_Fs|𝐫_1-𝐫| ,the integral in Eq. (<ref>) converges at |𝐫_1 - 𝐫|∼ a_0, and the resulting self-energy is local. Thus, under our model assumptions Eqs. (<ref>) acquire the form (11) in the main text. § SPECTRAL AND SCREENING PROPERTIES OF 2DEG For the calculations of the density of states (14), we solve Eq. (11a) with a local self-energy given by Eqs. (13). As a first step, we derive the expression for the normal Matsubara Green's function in 2DEG at coincident spatial arguments[Ĝ_n(𝐫,𝐫)]_σσ = ∫d^2𝐤/(2π)^2[i(ω̃_n - iσ h) + ξ_n]/[ξ_n^2 + (ω̃_n - iσ h)^2 + f_σ^2] ,where σ = ↑,↓ (± 1), ξ_n = 𝐤^2/2m_n - μ_n, andω̃_n = ω_n(1 + t^2 + d_t^2/√(ω_n^2 + Δ_s^2)),h(ω_n) = 2td_tΔ_s/√(ω_n^2 + Δ_s^2) ,f_σ(ω_n) = Δ_s(t^2 + d_t^2) + 2iσω_n td_t/√(ω_n^2 + Δ_s^2) .Integration over the momentum in Eq. (<ref>) yields[Ĝ_n(𝐫,𝐫)]_σσ = πν_0iω̃_n + σ h/√(f_σ^2 - (iω̃_n + σ h)^2) .Here ν_0 = m_n/2π is the density of states in an isolated 2DEG per spin projection. Finally, we substitute Eqs. (<ref>) into Eq. 
(14) and calculate the density of states.Typical behavior of the density of states in 2DEG as a function of energy and model parameters are presented in Fig. <ref>. Panels (a)-(c) show the colorplots of the density of states as a function of energy E and the interlayer gap function d_t for several tunneling rates t^2 = 0, t^2=Δ_s, and t^2 = 3Δ_s, respectively. Panels (d)-(f) reveal ν_ 2D(E) plots for several values of the interlayer gap function. We choose the energy level broadening parameter Γ = 0.01Δ_s to produce the plots. Figs. <ref>(a) and <ref>(d) refer to the case t = 0, for which the two-dimensional layer only features the spin-singlet superconducting correlations [see Eq. (13b)]. One can see the emerging minigap in the density of states for rather small d_t values (see the solid red line in Fig. <ref>(d)). The magnitude of the minigap for d_t^2 = 0.1Δ_s is approximately 0.2Δ_s, which is in agreement with the result of Eq. (13b) in the case t = 0 and d_t^2≪Δ_s. Two additional features in the density of states are located near the energy gap of the parent superconductor E ≈±Δ_s. The colorplot in Fig. <ref>(a) shows that the spectral gap tends to 2Δ_s upon the increase in the absolute value of the interlayer gap function. We provide ν_ 2D(E) plots for a finite tunneling rate t^2 = Δ_s and d_t^2/Δ_s = 0, 0.5, 1, and 3 in Fig. <ref>(e). Corresponding ν_ 2D(E) curve for d_t = 0 (shown by a blue dashed line) represents a typical energy dependence of the density of states of 2DEG with the induced superconductivity and possesses two pair of peaks, one of which (at E≈± 0.55Δ_s) marks the induced hard gap in the energy spectrum and another one is located at E≈±Δ_s. The increase in the interlayer gap function leads to the splitting of the peaks at the induced gap and to the decrease in the induced gap, which eventually disappears at a certain value of interlayer pairing amplitude. The black solid line in Fig. <ref>(e) shows a pronounced zero-bias peak in the density of states at d_t = t. The colorplot in Fig. <ref>(b) demonstrates that the spectral gap reopens upon further increase in d_t and tends to 2Δ_s for rather large d_t values. The results in Figs. <ref>(c) and <ref>(f) obtained for larger tunneling rate t^2 = 3Δ_s also demonstrate the hard gap closing-reopening feature upon the variation of the interlayer pairing amplitude as well as the appearance of a zero-bias peak in the density of states of the two-dimensional system at d_t = t. We continue with the analysis of a linear response of the induced superconducting correlations in 2DEG to an external magnetic field. Corresponding linear relations between the supercurrent 𝐣 and the vector potential 𝐀 in 2DEG are derived within both the clean and dirty limit. For the derivation we choose the transverse gauge for the vector potential div𝐀 = 0 and follow the approach described in Ref. <cit.>. §.§ Clean limit Here we consider the ballistic case. For the derivation of the quasiclassical equations in 2DEG, we introduce the Matsubara Green's functions in the mixed representationǦ_n(𝐫,𝐤) = ∫ dδ𝐫 e^-i𝐤δ𝐫Ǧ_n(𝐫,δ𝐫),where 𝐫 = (𝐫_1 + 𝐫_2)/2 and δ𝐫 = 𝐫_1 - 𝐫_2. Using Eqs. 
(<ref>), (<ref>) and considering the quasiparticle states in the vicinity of the Fermi surface𝐤 = 𝐧(k_Fn + ξ_n/v_Fn),we derive the quasiclassical equations for the Green's function in the mixed representation.[-iω_n + τ̌_z(ξ_n - i/2𝐯_Fn∇_𝐫) -e𝐯_Fn𝐀(𝐫+i/2𝐯_Fnd/dξ_n)]Ǧ_n(𝐫,𝐧,ξ_n) -Σ̌(𝐫)Ǧ_n(𝐫,𝐧,ξ_n) = 1.Here 𝐯_Fn = v_Fn𝐧, v_Fn denotes the Fermi velocity in an isolated 2DEG, 𝐧 = [cosφ_𝐤,sinφ_𝐤,0], k_Fn = m_nv_Fn, and ξ_n is the kinetic energy of quasiparticles relative to the chemical potential. Note that in the above equation we used the local approximation for the self-energy. The supercurrent density is then determined from the solution of Eq. (<ref>) 𝐣(𝐫) = -ek_FnT∑_ω_n∫dξ_n/(2π)d𝐧/(2π)𝐧 Tr[Ĝ_n(𝐫,𝐧,ξ_n)]. As a next step, we find the first-order correction for the Green's function with respect to the vector potential. For this purpose, it is convenient to calculate the Fourier transform of the Green's function with respect to ξ_nǦ_n(q) = ∫Ǧ_n(ξ_n)e^iqξ_ndξ_n/2π . Using Eq. (<ref>), we derive the quasiclassical equation for the Fourier transform (<ref>). Eliminating the spatial derivative via the replacement 𝐫→𝐫 - 1/2𝐯_Fnq, we get the equation[-iω_n - iτ̌_z∂/∂ q - e𝐯_Fn𝐀(𝐫 + q𝐯_Fn)]Ǧ_n(q) - Σ̌(𝐫 + 1/2q𝐯_Fn)Ǧ_n(q) = δ(q).Note that for the derivation of the linear response it is sufficient to expand the Green's function up to the first-order term in the vector potential Ǧ_n(q)≈Ǧ_n^(0)(q) + Ǧ_n^(1)(q),and take the unperturbed homogeneous self-energy Σ̌^(0). Indeed, within the local approximation the self-energy involves the Green's function in the superconducting layer at coincident spatial arguments, so the first-order correction Σ̌^(1) should vanish upon averaging over the momentum directions. Unperturbed Green's functions have the formĜ_n^(0)(q) = ∑_σ = ↑,↓Π̂_zσG^(0)_nσ(q), F̂_n^†(0)(q) = -(iσ̂_y)∑_σ = ↑,↓Π̂_zσF_nσ^†(0)(q),whereΠ̂_z↑,↓ = (1 ±σ̂_z)/2 and the expressions for the components read asG_nσ^(0)(q) = γ_σ(q)/2[iω̃_n + σ h/√(f_σ^2 - (iω̃_n + σ h)^2)+ i sgn(q)],F_nσ^†(0)(q) = γ_σ(q)/2f_σ/√(f_σ^2-(iω̃_n+σ h)^2) ,with γ_σ(q) = exp[-√(f_σ^2 - (iω̃_n + σ h)^2)|q|].The other Green's functions Ĝ̅̂_n^(0) and F̂_n^(0) can be obtained from Eqs. (<ref>) by using the symmetry relations Ĝ̅̂_n^(0)(ω_n) = -Ĝ_n^(0)(-ω_n) and F̂_n^(0)(ω_n) = [F̂_n^†(0)(ω_n)]^ T. The first-order correction for the Green's function at q = 0 is determined from the expressionǦ_n^(1)(q = 0) = ∫ dq' Ǧ_n^(0)(-q')e𝐯_Fn𝐀(𝐫 + q'𝐯_Fn)Ǧ_n^(0)(q').We put𝐀(𝐫) = ∫d^2𝐤/(2π)^2 𝐀(𝐤)e^i𝐤𝐫 , 𝐣(𝐫) = ∫d^2𝐤/(2π)^2 𝐣(𝐤)e^i𝐤𝐫 ,and then substitute Eqs. (<ref>) and (<ref>) into Eq. (<ref>). Performing the integration, we derive a linear relation between the supercurrent and vector potential in the clean limit𝐣(𝐤) = -e^2p_Fnv_FnT∑_ω_n∑_σ = ↑,↓1/2f_σ^2/√(f_σ^2 - (iω̃_n + σ h)^2)∫d𝐧/(2π)𝐧(𝐧𝐀(𝐤))/[f_σ^2 - (iω̃_n + σ h)^2 + v_Fn^2(𝐧𝐤)^2/4] .Under the assumption of a local response, the above equation transforms as follows: 𝐣(𝐫) = -Q𝐀(𝐫), Q = e^2k_Fnv_Fn/4T∑_ω_n,σf_σ^2/[f_σ^2 - (iω̃_n + σ h)^2]^3/2 .Typical temperature dependencies of the coefficient Q in the linear relation (<ref>) are shown in Fig. <ref>. For simplicity, we choose the interlayer gap function d_t to be constant within the considered temperature range. To take into account the temperature dependence of the gap function in the SC layer, we use the interpolation formula Δ_s(T) = Δ_0tanh(1.74√(T_c/T - 1)), where Δ_0 = Δ_s(T = 0) and T_c denotes the critical temperature of the parent superconductor. Figs. 
<ref>(a), <ref>(b), and <ref>(c) show several Q(T) plots for a fixed interlayer gap function and several tunneling rates t^2/Δ_0 = 0.1, 0.2, 0.3, 0.4, and 0.5. The results in Fig. <ref>(a) for d_t = 0 indicate that in the absence of the interlayer spin-triplet pairing the induced superconducting correlations in 2DEG only exhibit the Meissner response (Q>0). Diamagnetic response of the induced Cooper pairs becomes more pronounced at lower temperatures with decreasing t^2. This behavior is consistent with the fact that the induced gap in the quasiparticle energy spectrum of the two-dimensional layer decreases upon the decrease in the tunneling rate. Q(T) plots in Figs. <ref>(b) and <ref>(c) reveal several qualitatively different types of the linear response within different temperature ranges. For d_t^2 = t^2 = 0.1Δ_0 and 0.3Δ_0 [see a blue solid line in Fig. <ref>(b) and a black dashed line in Fig. <ref>(c)], the superconducting correlations in 2DEG exhibit the paramagnetic response (Q<0) within the considered temperature range, and |Q| grows with decreasing temperature. We note that this behavior is in qualitative agreement with our calculations of the density of states, which yield a zero-bias anomaly at t = d_t. The parameter range t^2 > d_t^2 (t^2 < d_t^2) is characterized by the presence of the minimum on a Q(T) curve and a diamagnetic response at low temperatures. For clarity, we also reveal the low-temperature behavior of Q for d_t^2 = t^2 = 0.1Δ_0, 0.2Δ_0, 0.3Δ_0, and 0.4Δ_0 in Fig. <ref>(d). Note that the presence of the paramagnetic response of 2DEG at high temperatures in Figs. <ref>(b) and <ref>(c) is probably related to the presence of the spin-triplet superconducting correlations in 2DEG, which survive in the limit Δ_s → 0 . §.§ Dirty limitWe proceed with the analysis of the linear response of the induced superconducting correlations in 2DEG with randomly distributed nonmagnetic point impurities. The effects of an elastic scattering are described by the impurity self-energyΣ̌_ imp(𝐫) = 1/τ∫dξ_n/2πd𝐧/2πτ̌_zǦ_n(𝐫,𝐧,ξ_n)τ̌_z,included into Eq. (<ref>). Here τ is the average time between collisions. As a first step, we derive the Eilenberger equations for ξ_n-integrated Green's functions. For this purpose, we subtract Eq. (<ref>) and its transpose. As a result, we get -i𝐯_Fn∇_𝐫ǧ_n(𝐫,𝐧) - [w̌(𝐫), ǧ_n(𝐫,𝐧)]= 0,where w̌ = τ̌_z[iω_n + Σ̌(𝐫) + Σ̌_ imp(𝐫) + e𝐯_Fn𝐀(𝐫) ],and the quasiclassical Green's function is defined as follows:ǧ_n(𝐫,𝐧) = ∫dξ_n/2πǦ_n(𝐫,𝐧,ξ_n)τ̌_z.Using Eqs. (<ref>), (<ref>), and (<ref>) evaluated at q = 0, it is straightforward to show that the introduced quasiclassical Green's function (<ref>) obeys the normalization condition ǧ_n^2 = -1/4. In this subsection we consider the case when the mean free path for elastic scattering ℓ is much less than the spatial scale of the superconducting correlations in 2DEG. In this case one can seek the solution of Eq. (<ref>) in the formǧ_n(𝐫,𝐧) = ǧ_n^(0)(𝐫) + 𝐧Γ̌_n(𝐫).Isotropic part of the Green function ǧ_n^(0) satisfies the Usadel equationD_n∇̌_𝐫[ǧ_n^(0)∇̌_𝐫ǧ_n^(0)] - 1/2[τ̌_z(iω_n + Σ̌), ǧ_n^(0)] = 0,whereas a small correction Γ̌_n is determined from the expressionΓ̌_n(𝐫) = 2iℓǧ_n^(0)(𝐫)∇̌_𝐫ǧ^(0)_n(𝐫).In the above equations ∇̌_𝐫ǎ = ∇_𝐫ǎ - ie𝐀[τ̌_z, ǎ] and D_n = v_Fnℓ/2 is the diffusion coefficient in 2DEG. Substituting Eqs. (<ref>) and (<ref>) into Eq. 
(<ref>), we get the expression for the supercurrent𝐣(𝐫) = -2ieD_nν_nT∑_ω_n Tr[ĝ_n^(0)(𝐫)∇_𝐫ĝ_n^(0)(𝐫) + f̂_n^(0)(𝐫)(∇_𝐫 + 2ie𝐀(𝐫))f̂_n^†(0)(𝐫)].Note that for the chosen gauge of the vector potential div𝐀 = 0, the Usadel equation (<ref>) doesn't contain linear terms in 𝐀. Correspondingly, the linear relation between the supercurrent and the vector potential can be obtained by substituting the zero-order spatially homogeneous Green's functions defined by Eqs. (<ref>) and (<ref>) into Eq. (<ref>). As a result, we obtain the local relation𝐣(𝐫) = -Q𝐀(𝐫),Q = 2π e^2D_nν_nT∑_ω_n∑_σ = ↑,↓f_σ^2/[f_σ^2 - (iω̃_n + σ h]^2 . Typical temperature dependencies of the coefficient Q in the linear relation (<ref>) are shown in Fig. <ref>. The plots are the results of Eq. (<ref>). Panels (a), (b), (c), and (d) correspond to d_t^2/Δ_0 = 0, 0.1, 0.2, and 0.3, respectively. Fig. <ref>(a) shows that in the absence of the spin-triplet interlayer pairing the induced superconducting correlations in 2DEG only exhibit the diamagnetic response. In constrast with the corresponding results for the clean limit [see Fig. <ref>(a)], the plots in Fig. <ref>(a) demonstrate that the magnitude of the response |Q| at low temperatures grows with increasing tunneling rate. Similarly to the previously considered case, we find that in the case of a finite interlayer gap function the type of the linear response can vary with temperature. In particular, the results for d_t^2 = t^2 = 0.1Δ_0 [shown by a blue solid line in Fig. <ref>(b)] reveal rather small diamagnetic response at low temperatures, which switches into the paramagnetic one upon the increase in T. The increase in the tunneling rate t^2 results in the enhancement of both the diamagnetic and paramagnetic response. The temperature range corresponding to the diamagnetic response increases for larger tunneling rates. Panels (c) and (d) show typical Q(T) plots within both parameter regions t^2<d_t^2 and t^2 > d_t^2. Considering, for instance, Fig. <ref>(d), we see that for t^2 = 0.1Δ_0 the two-dimensional layer features the diamagnetic (paramagnetic) response at low (high) temperatures. The temperature range, within which the Meissner response is established shrinks upon the increase in t^2. At t^2 = d_t^2 = 0.3Δ_0 [see a black dashed line in Fig. <ref>(d)] 2DEG exhibits the paramagnetic response within the considered temperature range. Further increase in the tunneling rate t^2>d_t^2 restores the low-temperature diamagnetic response and also leads to the enhancement of the paramagnetic response at high temperatures. § DERIVATION OF THE GAP EQUATION (7) FOR THE TWO-LAYER MODEL Here we consider the two-layer model and provide the derivation of the gap equation (7) in the main text. Our starting point is the Gor'kov equations (4)(-iω_n + τ̌_zξ_2𝐤)Ǧ_22 + ť^†Ǧ_12 = 1, (-iω_n + τ̌_zξ_1𝐤)Ǧ_11 + ťǦ_21 = 1, (-iω_n + τ̌_zξ_1𝐤)Ǧ_12 + ťǦ_22 = 0, (-iω_n + τ̌_zξ_2𝐤)Ǧ_21 + ť^†Ǧ_11 = 0. 
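The end point of this section is the gap equation (7); once it is recovered from the system above (as done below), it can also be solved numerically. A minimal sketch is given here for completeness: it uses the cutoff-free form quoted in the main text, with all energies in units of T_c0^int, assumes the principal branch of the complex square root and a single minus sign between the two terms in the bracket, and simply scans |d_0| for sign changes of the residual. The Matsubara cutoff and the parameter values are illustrative.

```python
# Minimal sketch: numerical solution of the cutoff-free gap equation for the two-layer model,
#   ln(T/Tc0) + 2 pi T * Re sum_{w_n > 0} [ 1/w_n
#        - (t^2 + i zeta) / (zeta * sqrt(-w_n^2 - d0^2 + chi^2 + t^2 + 2 i zeta)) ] = 0,
#   zeta = sqrt(w_n^2 (t^2 + chi^2) + t^2 d0^2),
# with all energies in units of Tc0 (the interlayer critical temperature at chi = 0).
import numpy as np

def gap_residual(d0, T, chi, t, n_max=20000):
    n = np.arange(n_max)
    w = 2.0 * np.pi * T * (n + 0.5)                      # positive Matsubara frequencies
    zeta = np.sqrt(w**2 * (t**2 + chi**2) + t**2 * d0**2)
    term = (t**2 + 1j * zeta) / (zeta * np.sqrt(-w**2 - d0**2 + chi**2 + t**2 + 2j * zeta))
    return np.log(T) + 2.0 * np.pi * T * np.sum(1.0 / w - term.real)

def solve_gap(T, chi, t, d_max=3.0, n_grid=600):
    """Return the |d_0| values where the residual changes sign; at low T and small t
    there can be more than one solution, as discussed in the main text."""
    d_grid = np.linspace(1e-3, d_max, n_grid)
    res = np.array([gap_residual(d, T, chi, t) for d in d_grid])
    idx = np.where(np.sign(res[:-1]) != np.sign(res[1:]))[0]
    return d_grid[idx]

# Sanity check: at chi = 0 the equation reduces to the ordinary BCS gap equation,
# so the solution at low T should approach ~1.76 Tc0 for any tunneling amplitude t.
print(solve_gap(T=0.1, chi=0.0, t=0.5))
print(solve_gap(T=0.1, chi=0.8, t=0.5))   # a band splitting chi suppresses d_0
```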
Solving the above system, we obtain the Green's functions of the subsystems in the case of the spin-singlet interlayer pairing Δ̂_ int = d_0(iσ̂_y)Ǧ_22(𝐤) = -1/ω̃_2𝐤^2 + (ξ_2𝐤 - Λ_2𝐤)^2 + 4|t|^2|d_0|^2ξ_1𝐤^2/(ω_n^2 + ξ_1𝐤^2)^2[-iω̃_2𝐤 - ξ_2𝐤 + Λ_2𝐤 2t^*d_0ξ_1𝐤(ω_n^2 + ξ_1𝐤^2)(iσ̂_y);2td_0^*ξ_1𝐤ω_n^2 + ξ_1𝐤^2(-iσ̂_y) -iω̃_2𝐤+ξ_2𝐤 -Λ_2𝐤 ] , ω̃_2𝐤 = ω_n(1 + |t|^2 + |d_0|^2/ω_n^2 + ξ_2𝐤^2),Λ_2𝐤 = ξ_1𝐤(|t|^2 - |d_0|^2/ω_n^2 + ξ_1𝐤^2), Ǧ_11(𝐤) = -1/ω̃_1𝐤^2 + (ξ_1𝐤 - Λ_1𝐤)^2 + 4|t|^2|d_0|^2ξ_2𝐤^2/(ω_n^2 + ξ_2𝐤^2)^2[ -iω̃_1𝐤 - ξ_1𝐤 + Λ_1𝐤2td_0ξ_2𝐤(ω_n^2 + ξ_2𝐤^2)(iσ̂_y); 2t^*d_0^*ξ_2𝐤(ω_n^2 + ξ_2𝐤^2)(-iσ̂_y) -iω̃_1𝐤 + ξ_1𝐤 - Λ_1𝐤 ] , ω̃_1𝐤 = ω_n(1 + |t|^2 + |d_0|^2/ω_n^2 + ξ_2𝐤^2),Λ_1𝐤 = ξ_2𝐤(|t|^2-|d_0|^2/ω_n^2 + ξ_2𝐤^2). The poles of the resulting Green's functions together with the replacement iω_n → E give the quasiparticle spectrum of the two-layer system, which can be cast to the form (8) in the main text. As a next step, we obtain the anomalous mixed Green's function F̂_12, which enters the self-consistency eqauation for the interlayer order parameter (6). We getF̂_12 = d_0(iσ̂_y)[-(iω_n + ξ_1𝐤)(iω_n - ξ_2𝐤)+|t|^2 + |d_0|^2]/(ω_n^2 + |t|^2 + |d_0|^2)^2 + ω_n^2(ξ_2𝐤^2 + ξ_1𝐤^2) + ξ_2𝐤^2ξ_1𝐤^2-2ξ_2𝐤ξ_1𝐤(|t|^2 - |d_0|^2) .Substituting the above expression into Eq. (6), we obtain the gap equation1 = -U_0/2T∑_ω_n∫d^2𝐤/(2π)^2[-(iω_n - ξ_1𝐤)(iω_n + ξ_2𝐤) + |t|^2 + |d_0|^2]/[(ω_n^2 + |t|^2 + |d_0|^2)^2 + ω_n^2(ξ_2𝐤^2 + ξ_1𝐤^2) + ξ_2𝐤^2ξ_1𝐤^2-2ξ_2𝐤ξ_1𝐤(|t|^2 - |d_0|^2)] ,which can be cast to the form (7) in the main text via the substitution ξ_𝐤 = (ξ_1𝐤+ξ_2𝐤)/2. 99 AGDbook A. A. Abrikosov, L. P. Gorkov, and I. E. Dzyaloshinski, Methods of Quantum Field Theory in Statistical Physics, Prentice Hall (1963). KopninPRB2011_appendix N. B. Kopnin and A. S. Melnikov, Proximity-induced superconductivity in two-dimensional electronic systems, Phys. Rev. B 84, 064524 (2011).KopninJETP2013_appendix N. B. Kopnin, I. M. Khaymovich, and A. S. Mel'nikov, Vortex matter in low-dimensional systems with proximity-induced superconductivity, JETP 117, 418 (2013).Svidzinski_appendix A. V. Svidzinski, Space-Inhomogeneous Problems in the Theory of Superconductivity (Nauka, Moscow, 1982). | http://arxiv.org/abs/2311.15574v1 | {
"authors": [
"A. A. Kopasov",
"A. S. Mel'nikov"
],
"categories": [
"cond-mat.supr-con",
"cond-mat.mes-hall"
],
"primary_category": "cond-mat.supr-con",
"published": "20231127065948",
"title": "Unconventional superconductivity and paramagnetic Meissner response triggered by nonlocal pairing interaction in proximitized heterostructures"
} |
Model-based reconstructions for quantitative imaging in photoacoustic tomography Andreas Hauptmann[Research Unit of Mathematical Sciences, University of Oulu, P.O.Box 8000, 90014 Oulu, Finland.Department of Computer Science, University College London, WC1E 6BT, London, UK]and Tanja Tarvainen[Department of Technical Physics, University of Eastern Finland, P.O. Box 1627, 70211 Kuopio, Finland.Department of Computer Science, University College London, WC1E 6BT, London, UK]===================================================================================================================================================================================================================================================================================================================================================================================================================== Referring image segmentation (RIS) aims to segment a particular region based on a language expression prompt. Existing methods incorporate linguistic features into visual features and obtain multi-modal features for mask decoding. However, these methods may segment the visually salient entity instead of the correct referring region, as the multi-modal features are dominated by the abundant visual context. In this paper, we propose MARIS, a referring image segmentation method that leverages the Segment Anything Model (SAM) and introduces a mutual-aware attention mechanism to enhance the cross-modal fusion via two parallel branches. Specifically, our mutual-aware attention mechanism consists of Vision-Guided Attention and Language-Guided Attention, which bidirectionally model the relationship between visual and linguistic features. Correspondingly, we design a Mask Decoder to enable explicit linguistic guidance for more consistent segmentation with the language expression. To this end, a multi-modal query token is proposed to integrate linguistic information and interact with visual information simultaneously. Extensive experiments on three benchmark datasets show that our method outperforms the state-of-the-art RIS methods. Our code will be publicly available. § INTRODUCTIONReferring image segmentation <cit.> (RIS) is a fundamental and challenging multi-modal task, which involves both vision-language understanding <cit.> and instance segmentation <cit.>. The target of RIS is to locate particular regions according to the given query in natural language. It has great potential in many applications, e.g., human-machine interaction and interactive image segmentation.Existing RIS methods <cit.> introduce various fusion methods to obtain multi-modal features. Then, these features are sent into a mask decoder to predict the segmentation mask. Despite significant advancements in RIS, there are still several limitations. First, current methods <cit.> utilize the unidirectional attention mechanism to fuse features from different modalities. However, they only consider the linguistic guidance for visual features but ignore the visual guidance for linguistic features. Unlike the unidirectional attention mechanism, BRINet <cit.> adopts both visual and linguistic guidance in a serial bidirectional way. Nevertheless, due to the serial manner, it only implicitly generates vision-aware linguistic features in the fusion model but does not explicitly use these features in the mask decoder. Second, existing methods use a mask decoder to generate the final segmentation mask from the multi-modal features. 
However, since multi-modal features are produced by integrating linguistic properties into visual features, they still contain a lot of visual properties. Without explicit linguistic guidance, the mask decoder focuses on the most visually salient entities but ignores linguistic consistency. Moreover, existing methods typically fine-tune the encoders to adapt them for the dataset of RIS. However, this strategy shrinks the generalization ability of encoders pre-trained on a large-scale dataset. In this paper, we propose a novel method that utilizes the mutual-aware attention mechanism and transfers the knowledge of Segment Anything Model (SAM) <cit.> into RIS.First, we introduce the Mutual-Aware Attention block to bidirectionally model the relationship between visual and linguistic features. The Mutual-Aware Attention block consists of two parallel branches: Vision-Guided Attention and Language-Guided Attention. As shown in Fig. <ref>, Vision-Guided Attention assigns different weights to each word in the expression for each image region (such as the red pentangle) and produces language-aware visual features. Similarly, Language-Guided Attention explores the corresponding image region for the word, e.g., `cap', `pants', `racket', and generates vision-aware linguistic features. We consider language-aware visual features and vision-aware linguistic features as the mutual-aware attention features of our method.Second, we design a Mask Decoder to enable explicit linguistic guidance. Specifically, we introduce a multi-modal query token to integrate visual and linguistic properties, which helps to segment the correct referring region. Finally, we freeze the image encoder of SAM to preserve its generalization ability. To transfer the knowledge of SAM into RIS, we introduce a Feature Enhancement module to integrate global and local visual features.We demonstrate the results of our method and other methods in Fig. <ref>. To our knowledge, our work is the first to transfer the powerful knowledge of SAM into RIS. To be summarized, our contributions are listed as follows: ∙ We propose a referring image segmentation method called MARIS, which leverages the powerful knowledge of SAM and uses the mutual-aware attention mechanism to model the relationship between visual and linguistic features bidirectionally. ∙ We introduce a Mutual-Aware Attention block to produce language-aware visual features and vision-aware linguistic features by weighting each word of the sentence and each region of visual features. ∙ We design a Mask Decoder to utilize explicit linguistic guidance and get a segmentation mask consistent with the language expression. Besides, we introduce a multi-modal query token to integrate visual and linguistic properties.∙ The proposed approach achieves new state-of-the-art performance on the three widely used RIS datasets, including RefCOCO, RefCOCO+, and G-Ref. Additionally, our method exhibits excellent generalization capabilities. § RELATED WORK§.§ Referring Image SegmentationReferring image segmentation <cit.> aims to segment a particular region according to the natural language expression. Early approaches <cit.>concatenate visual and linguistic features to produce multi-modal features, which are fed into the fully convolutional network for segmentation generation. <cit.> proposed a two-stage method that first generates masks by Mask R-CNN <cit.>, and then selected the target mask with linguistic prompt. 
Besides, MCN <cit.> presented a multi-task framework to jointly optimize two related tasks, i.e., referring expression comprehension and segmentation.As the attention mechanism <cit.> achieved great success in various fields, it has been exploited in the field of RIS <cit.>. Later, some methods <cit.> adopt the transformer-based architectures. VLT <cit.> introduces a Vision-Language Transformer to enhance deep interactions among multi-modal features. More recently, CRIS <cit.> utilized CLIP <cit.> as the image and text encoder and transferred the knowledge of CLIP for text-to-pixel alignment. ReLA <cit.> introduced a new task called generalized referring expression segmentation, which enables expressions to indicate the existence of target objects. However, these methods fail to produce and explicitly utilize vision-aware linguistic features in the mask decoder.§.§ Attention MechanismAttention mechanism has been widely used in various multi-modal tasks. In <cit.>, Transformer schemes are used to exploit the long-range dependencies between visual features and linguistic features. Besides, the Transformer-decoder based architectures <cit.> are also used to fuse the visual and linguistic features. For example, BLIP-2 <cit.> builds a Q-former based on the cross-attention mechanism to assemble visual and linguistic information by a set of learned queries. Later, GLIP <cit.> introduces bidirectional cross-attention to obtain multi-modal features. <cit.> build a Trajectory to Word attention for video-language tasks. In this paper, we propose a Mutual-Aware Attention scheme to generate language-aware visual features and vision-aware linguistic features, where the latter guides the former to generate an accurate mask in the Mask Decoder. §.§ Powerful Foundation Models in Computer VisionFoundation models are trained on broad data and can be adapted (e.g., fine-tuned) to a wide range of downstream tasks. In recent years, some vision transformers <cit.> achieved state-of-the-art performance on various tasks, including image classification, semantic/instance segmentation, and object detection. Due to great efforts made on large-scale datasets, recent foundation models <cit.> are equipped with more powerful feature representations. Benefiting from 400 million image-text pairs, CLIP <cit.> achieved strong zero-shot ability on many visual tasks. Some researchers utilize the knowledge of CLIP for different tasks, including semantic segmentation <cit.>, object detection <cit.>, and referring image segmentation <cit.>. Recently, Meta released SAM, the first segmentation foundation model trained on more than 1 billion masks, and achieved remarkable performance on interactive segmentation.In this paper, we first propose a novel method that leverages the powerful knowledge of SAM in the field of RIS. Besides, we design the Mutual-Aware Attention block and the Mask Decoder to get an accurate segmentation mask.§ METHODOLOGYThe overall architecture of MARIS is shown in Fig. <ref>. Firstly, the input image and language expression are projected into the visual (F_v_1, F_v_2, F_v_3) and linguistic (F_l) feature spaces via a pre-trained image encoder <cit.> and a text encoder <cit.>, respectively. Note that the parameters of the image and text encoder are frozen. Secondly, we design a Feature Enhancement (FE) module, which fuses features from different layers of the image encoder and obtains enhanced visual features (F_v). 
Thirdly, enhanced visual features (F_v) and linguistic features (F_l) are fed into the Mutual-Aware Attention (MA) block to obtain mutual-aware attention features. Finally, we introduce the Mask Decoder (DE) with a single multi-modal query token to utilize explicit linguistic guidance and produce a language-consistent mask. We will describe the details of these steps in the following subsections. §.§ Image Encoder and Text EncoderThe image encoder of SAM <cit.>, a VIT-based backbone, takes images of size 1024×1024 as inputs and generates visual features of spatial size 64×64. In particular, SAM uses a VIT-H with 14×14 windows and four plain global attention blocks. For an input image, we utilize visual features from 2nd∼4th global attention blocks, which are defined as shallow layer features F_v_1∈ℝ^H × W × C, middle layer features F_v_2∈ℝ^H × W × C and deep layer features F_v_3∈ℝ^H × W × C. Here, H and W are the height and width of the feature map, respectively, and C denotes the channel size of visual features.For the language expression, we adopt a text encoder pre-trained by <cit.> and obtain linguistic features F_l∈ℝ^L_t× C'. Here, L_t denotes the length of linguistic features. Accordingly, C' is the channel size of linguistic features.To preserve the generalization capability of the image and text encoder and save computational resources, we freeze the parameters of these encoders, which also prevents catastrophic forgetting <cit.>. §.§ Feature EnhancementTo generate an accurate segmentation mask for the causal language expression, it is necessary to focus on both global semantic information and local grained details. For the image encoder of SAM, features from the deep layer and shallow layer contain accurate global features and abundant local features, respectively. Based on this consideration, we first fuse the shallow layer feature F_v_1 and the middle layer feature F_v_2 as follows.F̂ = CBA([MLP(F_v_1), MLP(F_v_2)]),where F̂ denotes the early enhanced feature. CBA(·) is sequential operations, including convolution layers with 3× 3 kernels, a batch-normalization layer, and GeLu activation function. MLP(·) represents the Multi-Layer Perceptron (MLP) layer. [·, ·] is the concatenation operation.Subsequently, we fuse the early enhanced feature F̂ and the deep layer feature F_v_3 to obtain the final enhanced visual feature, F_v = CBA([MLP(F̂), MLP(F_v_3)]),where F_v∈ℝ^H× W × C is the final enhanced visual feature. Then the feature map is flattened into a 2-D vector F_v∈ℝ^L_v × C, where L_v is equal to H× W. §.§ Mutual-Aware AttentionAfter obtaining visual and linguistic features, the first step is to fuse these features. Existing methods <cit.> propose different strategies to get multi-modal features. However, these methods only assign different weights to each word in the expression but treat each image region equally. BRINet <cit.> adopts a serial bidirectional design to utilize both visual and linguistic guidance. However, the serial design fails to utilize vision-aware linguistic features explicitly.To address these issues, we propose the Mutual-Aware Attention block, which consists of two parallel branches. Specifically, the first branch is Vision-Guided Attention, which weights different words for each pixel of visual features. Accordingly, the second branch is Language-Guided Attention, which weights different image regions for each word of the sentence. The architecture of Mutual-Aware Attention is shown in Fig. 
<ref>.First, we model the correlation between linguistic features and visual features as follows,Z_v = F_vW_v, Z_l = F_lW_l, A = Softmax(Z_v Z^⊤_l + ℳ)where A∈ℝ^L_v× L_t is the attention weight. W_v and W_l are learnable matrices of size C× C and C'× C, which aim to transform F_v and F_l into the same feature dimension. ℳ is the attention mask, which is calculated by, ℳ (i,j)= { 0 ifM(i,j)<τ-∞ otherwise.,where M=1/(1+e^-Z_vZ^⊤_l) denotes the relevant scores between visual and linguistic features. τ is the threshold, and its value will be discussed in the supplementary material. Through the attention mask ℳ, we alleviate the interference from irrelevant pairs in visual and linguistic features. After that, we obtain mutual-aware attention features, including language-aware visual features F_lav∈ℝ^L_v× C and vision-aware linguistic features F_val∈ℝ^L_t× C as follows,F_lav =LayerNorm(AZ_l + Z_v), F_val =LayerNorm(A^⊤Z_v + Z_l).where LayerNorm(·) denotes the Layer Normalization. We use two sequential Mutual-Aware Attention blocks in our implementation, and the ablation in terms of the number of Mutual-Aware Attention blocks will be discussed in the supplementary material. §.§ Mask DecoderThe mutual-aware attention features are fed into the mask decoder to obtain the final mask. Since the multi-modal features contain excessive visual properties, the mask decoder is likely to segment visual-dominant entities without explicit linguistic guidance. To enable explicit linguistic guidance, we build a Mask Decoder based on the mask classification framework <cit.>. Specifically, we only use a single multi-modal query token with random initialization. Different from DETR/Mask2former, we combine the multi-modal query token with vision-aware linguistic features as the input of the decoder. Such a design enables the multi-modal query token to integrate linguistic information and interact with visual features, thus getting a consistent segmentation with the language expression.To this end, language-aware visual feature F_lav is first fed into a multi-head self-attention layer to extract powerful contextual information.F̂_lav = MHSA(F_lav) + F_lav,where MHSA(·) is the multi-head self-attention layer. Then, the multi-modal query token F_m ∈ℝ^1× C along withF_val∈ℝ^L_t× C are sent to a multi-head self-attention layer to aggregate the vision-aware linguistic feature. Let F_c := [F_m, F_val], The aggregation is formulated asF̂_c := [F̂_m, F̂_val] = MHSA(F_c) + F_c,where F̂_c denotes the evolved feature concatenating the evolved versions of vision-aware linguistic feature F̂_val and multi-modal query token F̂_m.Subsequently, we perform interaction between F̂_lav and F̂_c, obtaining the evolved language-aware visual feature F_lav via a multi-head cross-attention layer as follows.F_lav = MHCA(F̂_lav, F̂_c, F̂_c) + F̂_lav.where MHCA(·) is the multi-head cross-attention layer. The next decoder block takes evolved language-aware visual feature F_lav and evolved concatenated feature F̂_c from the previous layer as inputs.After that, the evolved language-aware visual feature F_lav is upsampled by two sequential blocks. Each consists of a convolutional layer and an upsample operation. We extract the evolved multi-modal query token F̂_m from the evolved concatenated feature F̂_c, and send it to a MLP layer. Finally, we multiply the output of MLP with upsampled visual features to generate the segmentation mask. 
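Before moving on to the training losses, here is a minimal PyTorch-style sketch of one Mutual-Aware Attention block as described earlier in this section (the projections W_v and W_l, the thresholded attention mask, and the two mutual-aware outputs). The feature dimensions, the softmax axis, and the module interface are our own assumptions and may differ from the official implementation; the attention mask is implemented so that low-relevance pairs are suppressed, which is what the surrounding text intends (the two cases in the printed mask definition appear to be swapped).

```python
# Hypothetical re-implementation sketch of one Mutual-Aware Attention block.
import torch
import torch.nn as nn

class MutualAwareAttention(nn.Module):
    def __init__(self, c_vis, c_lang, c_out, tau=0.5):
        super().__init__()
        self.w_v = nn.Linear(c_vis, c_out, bias=False)    # W_v: project visual features
        self.w_l = nn.Linear(c_lang, c_out, bias=False)   # W_l: project linguistic features
        self.norm_lav = nn.LayerNorm(c_out)
        self.norm_val = nn.LayerNorm(c_out)
        self.tau = tau                                    # relevance threshold

    def forward(self, f_v, f_l):
        # f_v: (B, L_v, c_vis) flattened visual features; f_l: (B, L_t, c_lang) linguistic features
        z_v, z_l = self.w_v(f_v), self.w_l(f_l)           # (B, L_v, C), (B, L_t, C)
        scores = torch.bmm(z_v, z_l.transpose(1, 2))      # (B, L_v, L_t)
        # Relevance M = sigmoid(Z_v Z_l^T); suppress pairs whose relevance is below tau.
        relevance = torch.sigmoid(scores)
        attn_mask = torch.where(relevance < self.tau,
                                torch.full_like(scores, float("-inf")),
                                torch.zeros_like(scores))
        a = torch.softmax(scores + attn_mask, dim=-1)     # attention weights over words
        # (In practice, rows where every pair is masked would need special handling.)
        f_lav = self.norm_lav(torch.bmm(a, z_l) + z_v)                  # language-aware visual
        f_val = self.norm_val(torch.bmm(a.transpose(1, 2), z_v) + z_l)  # vision-aware linguistic
        return f_lav, f_val

# Illustrative usage with made-up sizes (two stacked blocks, as stated above):
blocks = nn.ModuleList([MutualAwareAttention(256, 256, 256), MutualAwareAttention(256, 256, 256)])
f_v, f_l = torch.randn(2, 64 * 64, 256), torch.randn(2, 20, 256)
for blk in blocks:
    f_v, f_l = blk(f_v, f_l)
```

In this sketch, the vision-aware linguistic output plays the role of F_val that the Mask Decoder concatenates with the multi-modal query token, while the language-aware visual output is the feature map that the decoder refines into the final mask.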
§.§ Losses In the training process, we adopt the linear combination of focal loss <cit.>and dice loss <cit.> formulated as follows.ℒ = ℒ_f + ℒ_d,where ℒ_f and ℒ_d are focal loss and dice loss, respectively.§ EXPERIMENTS§.§ DatasetsWe conduct experiments on three widely used datasets, including RefCOCO & RefCOCO+ <cit.>, and G-Ref <cit.>. The images of these three datasets are from MSCOCO <cit.>, but are annotated with language expressions with different styles. Expressions of RefCOCO/RefCOCO+ have an average length of 3.61/3.53. Compared with RefCOCO, expressions about absolute locations, e.g., left/right, are forbidden in RefCOCO+. G-Ref has a longer average length (8.4 words). Following previous works, we evaluate both RefCOCO and RefCOCO+ in three subsets: validation, testA, and testB. For G-Ref, we leverage both partitions of UMD and Google for the evaluation. §.§ MetricsFollowing previous works <cit.>, we utilize two metrics in our experiments, including mask Intersection-over-Union (IoU) score and Precision with thresholds (Pr@X). Specifically, IoU scores reveal the predicted mask quality by calculating intersection regions over union regions between the predicted mask and the ground truth across all testing samples. Besides, Pr@X denotes the ratio of predicted masks with IoU scores higher than the threshold X∈{70,80,90}. Implementation details are reported in the supplementary material. For example, Pr@70 denotes the location ability of the model, while Pr@90 shows the ability to generate a high-quality mask.§.§ Comparison With State-of-the-art MethodsWe compare the proposed MARIS with previous state-of-the-art (SOTA) methods on the three most widely used benchmarks, i.e., RefCOCO, RefCOCO+, and G-Ref. Quantitative results are shown in Tab. <ref>.Our method achieves significant improvements over the second-best SOTA method, MCRES <cit.>, on the RefCOCO dataset. Specifically, our method outperforms MCRES by 1.28%, 1.94%, and 1.00% on the val, testA, and testB split, respectively. These results demonstrate the effectiveness of our framework for the RIS task. On the RefCOCO+ dataset, our MARIS improves over ReLA <cit.> on the val and testA splits by 0.33% and 1.08%, respectively. However, we observe a slight performance drop of 0.32% on the testB split compared to ReLA. A possible reason is that the frozen text encoder gets sub-optimal linguistic feature representation for language expression without absolute locations. When the test set (i.e., testB split) contains images with multiple objects that are hard to be distinguished without absolute locations, our method exhibits inferior performance.Finally, on another more complex G-Ref dataset, our method achieves an IoU improvement of 0.48%, 0.38%, and 1.98% on the val (U), test (U), and val (G) split, respectively. This improvement indicates that our method is also competitive for long and causal language expressions. Besides, we also demonstrate the ratio of predicted masks with IoU scores higher than 90%. According to the last row of Tab. <ref>, our method typically segments a high-quality mask. §.§ Ablation Study To verify the effectiveness of the proposed modules of our method, we conduct ablation studies to investigate each component, including Feature Enhancement (FE), Mutual-Aware Attention (MA), and Mask Decoder (DE) on the RefCOCO val dataset, as shown in Tab. <ref>. 
Note that we use the SAM's decoder <cit.> for the variant excluding the proposed decoder.§.§.§ Mutual-Aware Attention BlocksMutual-Aware Attention blocks are introduced to weight different image regions and different words in the sentence. It brings an improvement by 1.88% in terms of IoU score. To verify the superiority of Mutual-Aware Attention, we conduct experiments that use other methods <cit.> to incorporate features of different modalities. Specifically, ReSTR <cit.> utilizes a transformer encoder (TE) to model the long-range dependencies. VLT <cit.> adopts the Spatial Dynamic Fusion (SDF) to produce different linguistic feature vectors, which is equivalent to using only Visual-Guided Attention. BRINet <cit.> introduces a serial bidirectional cross-modal (BCM) module to utilize visual and linguistic guidance. According to Tab. <ref>, our MA outperforms TE, SDF, BCM by 1.35%, 1.12%, 1.00% IoU score, respectively. This is because existing methods only explore the informative words of each image region, while our method also provides the corresponding image regions of each word in the language expression and generates vision-aware linguistic features. We also provide some visualized examples in the supplementary material to show that our method generates a more accurate and high-quality mask than others.Finally, according to # 8, the attention mask alleviates the interference of irrelevant pairs between visual and linguistic features, which further improves the performance by 0.68% IoU.Besides, we visualize the output of Vision-Guided Attention and Language-Guided Attention in Fig. <ref>(a) and (b), respectively. For the red rectangle in Fig. <ref>(a1), we list attention weights of each word in Fig. <ref>(a3). Our model considers `bottom' and `blue' as the most informative words. Thus, our prediction mask accurately locates the bottom boy in blue, as shown in Fig. <ref>(a2). Similarly, for the word `black', we show its attention map in Fig. <ref>(b3). In the Mask Decoder, the final segmentation mask is refined according to the attention map. Our prediction mask is shown in Fig. <ref>(b2).§.§.§ Mask DecoderAccording to Tab. <ref>, replacing the Mask Decoder with SAM's decoder reduces the IoU score by 1.23%. This reduction is caused by the token to image attn. (TOI-A) layer in SAM's decoder. Specifically, the TOI-A layer performs cross-attention by taking prompt tokens containing output tokens and linguistic features as queries (Q), visual features as keys (K) and vectors (V). Since these output tokens are initialized randomly, they make an uncertain adjustment to the evolved visual features and thus affect the performance. To verify the disadvantage of TOI-A layer for RIS, we insert this layer into each block of our decoder. As shown in # 9 of Tab. <ref>, the TOI-A layer leads to 0.80% IoU decrease. Besides, to verify the effectiveness of explicit linguistic guidance (ELP), we also implement the experiment without explicit linguistic guidance (# 10). Similar to VLT <cit.>, we multiply F_val with F_lav to obtain the input feature of the Mask Decoder. # 10 in Tab. <ref> indicates that explicit linguistic guidance improves the IoU performance by 1.78%, which demonstrates the effectiveness of the proposed decoder.§.§.§ Feature EnhancementAs shown in Tab. <ref>, the Feature Enhancement module significantly improves the performance of MARIS by 7.01% IoU score. To understand Feature Enhancement comprehensively, we conduct the experiment by using another well-known backbone-adaption baseline. 
Specifically, we adopt VIT-DET <cit.> as the compared baseline, which uses only the feature map from the last layer of the backbone to generate multi-scale features. The quantitative evaluations are shown in Tab. <ref>.Compared with removing Feature Enhancement (FE) module (# 1), multi-scale features generated from the last layer improve the performance by 2.36%. However, compared with using features from different layers, this baseline shrinks the IoU performance by 4.65%. The reason for performance degradation is that features from the last layer contain highly global information, and multi-scale features generated from the last layer exhibit a limited representation of grained details that are essential for RIS. §.§ Generalization Ability To demonstrate the generalization ability of our method, we conduct experiments on the test split of PhraseCut <cit.>. PhraseCut contains 1287 categories, which is much more diverse than 80 categories in COCO. Thus, we compare with two previous methods (as their parameters are available online) on PhraseCut to evaluate their generalization ability. As shown in Tab. <ref>, our method surpasses previous methods in terms of generalization ability. For example, when training on the RefCOCO dataset, our method exceeds CRIS and LAVT by 7.29% and 6.14%, respectively. This advantage comes from the frozen text encoder and image encoder and the introduction of Feature Enhancement. In contrast, encoders of other methods are trainable and thus might be biased to the fine-tuned dataset. We also provide some successful and failed visualized examples in Fig. <ref>.§.§ ConclusionThis paper proposes a novel referring image segmentation method called MARIS, which effectively uses mutual-aware attention features and incorporates the powerful knowledge from SAM into RIS. Our model contains three components: the Feature Enhancement module, the Mutual-Aware Attention block, and a Mask Decoder. To be specific, the Feature Enhancement module incorporates global and local features to transfer the knowledge from the frozen image encoder of SAM. Subsequently, the Mutual-Aware Attention block produces language-aware visual features and vision-aware linguistic features by weighting each word of the sentence and each region of visual features. Finally, we design a Mask Decoder to utilize explicit linguistic guidance. Specifically, we introduce the multi-modal query token to integrate visual and linguistic properties. Extensive experiments on three well-known benchmarks and PhraseCut demonstrate that MARIS achieves new state-of-the-art performance and great generalization ability. | http://arxiv.org/abs/2311.15727v1 | {
"authors": [
"Mengxi Zhang",
"Yiming Liu",
"Xiangjun Yin",
"Huanjing Yue",
"Jingyu Yang"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231127112425",
"title": "MARIS: Referring Image Segmentation via Mutual-Aware Attention Features"
} |
http://arxiv.org/abs/2311.15651v3 | {
"authors": [
"Hiroshi Ishii"
],
"categories": [
"math.AP",
"35R11, 35B40, 35C07"
],
"primary_category": "math.AP",
"published": "20231127092607",
"title": "Propagating front solutions in a time-fractional Fisher-KPP equation"
} |
|
1]Erlend [email protected] 1]Nikolai [email protected] 1]Anna [email protected] 2]Stefan [email protected] 2]Janusz [email protected] 2]Dibyakrupa [email protected] [1]Department of Physics and Technology, University of Bergen, Allégaten 55, 5007 Bergen, Norway[2]Institute of Theoretical Physics, Faculty of Physics, University of Warsaw,ul. Pasteura 5, 02-093 Warsaw, PolandExploring CP violation in H -> tau+ tau- gamma [ January 14, 2024 ==============================================We propose a method of measuring the CP-odd part of the Yukawa interaction of Higgs boson and τ leptons by observing the forward-backward asymmetry in the decay H →τ^+τ^-γ.The source of such asymmetry is the interference of the CP-even loop-level contribution coming from H → Zγ→τ^+τ^-γ decay channel with the contribution from tree-level CP-odd Yukawa interaction.We find that the CP violating effect is maximum when the invariant mass of the τ^+ τ^- pair is equal to the mass of the Z boson.We propose and utilise various Dalitz plot asymmetries to quantify the maximal size of the asymmetry and perform Monte Carlo simulations to study the feasibility of measuring it in the high luminosity phase of the Large Hadron Collider (HL-LHC). § INTRODUCTION In the Standard Model (SM), violation of the CP symmetry is encoded in the CKM matrix.In principle, a Beyond the Standard Model (BSM) physics may have new sources of CP violation.In particular, BSM CP violation in the Yukawa interactions is welcome for electroweak baryogenesis (it is well known that CP violation in the SM is by far too weak for baryogenesis <cit.>).The most general expression for CP violating Hψψ Yukawa interaction can be written in the following form, ℒ_Hψψ^ = - m_ψ/ ψ(a_ψ + i γ^5 b_ψ) ψH, whereis the vacuum expectation value of the Higgs field, m_ψ denotes the mass of the fermion ψ, and a_ψ, b_ψ are two real valued parameters.In the SM, a_ψ^SM=1, b_ψ^SM=0.If simultaneously both a_ψ≠ 0 and b_ψ≠ 0, it implies CP violation in Hψψ Yukawa interaction.The parameters b_ψ are strongly constrained by the experimental bounds on the electron and neutron Electric Dipole Moments (for a recent analysis see <cit.> and references therein).In this context, the τ lepton Yukawa coupling is of interest as it is large and the EDM bound, b_τ <0.3, is weak enough for the τ Yukawa to play a role in electroweak baryogenesis, see e.g. <cit.>.CP violation in the τ Yukawa has also been searched for at the LHC.The recent study by CMS <cit.> probing H →τ^+τ^- gives b_τ≲ 0.34 at 68.3% confidence level (for further prospects see <cit.>).Majority of the experimental studies on this issue concentrate on measurements of the angle between τ decay planes determined by the directions of particles produced in subsequent τ lepton decays, such as in H→τ^+τ^-→π^+π^-ν_τ ν̅_τ or H→τ^+τ^- →ρ^+ρ^-ν_τ ν̅_τ <cit.>. In this paper we propose to measure the forward-backward asymmetry of τ lepton angular distribution in the H →τ^+τ^-γ decay, as a measure of the CP violation in Hττ Yukawa interaction.To study this asymmetry we utilise the Lorentz invariant Dalitz plot distribution of events.The dominant CP-violating effects which contribute to the forward-backward asymmetry in the H →τ^+τ^-γ decay are proportional to the interference of the tree-level and loop-level diagrams[One could, in principle, also consider H →ℓ^+ℓ^-γ, with ℓ=e,μ, <cit.>, facilitated by similar Feynman diagrams as in Fig. 
<ref>, to probe CP violation in the corresponding Yukawa interactions.But for these processes the loop-level contributions are dominant and can overshadow the CP-violating part of the tiny Yukawa interaction of e,μ with Higgs boson.Nevertheless, our proposal to use the Lorentz invariant Dalitz plot distribution to study forward-backward asymmetry holds for these decays as well.In such a case, observation of a sizeable asymmetry would suggest CP violation in the loop-level contributions.] shown in Fig. <ref>.The lower branching ratio than the H →τ^+τ^- is partially compensated by the fact that one only requires to reconstruct the 4-momenta of the τ leptons and not the full spatial distributions of the final τ decay products. Our heuristic simulations for the HL-LHC show that one can possibly probe b_τ using our proposed methodology.A more thorough Monte Carlo study scanning the full 2-dimensional Dalitz plot distribution is beyond our current expertise, and is hence reserved for future exploration.Our paper is organised as follows.In Sec. <ref> we briefly outline the important phenomenological aspects of the 3-body decay H →τ^+τ^-γ, showing how the forward-backward asymmetry originates and how can it be probed from the Lorentz invariant Dalitz plot distribution.In Sec. <ref> we do a numerical study, looking at the distribution pattern inside the Dalitz plot and assess how large the forward-backward asymmetry could be.In Sec. <ref> we perform a heuristic Monte Carlo study of the feasibility of observing the asymmetry in context of HL-LHC.Finally we conclude in Sec. <ref> summarising our findings and highlighting the salient features of our proposed methodology. § PHENOMENOLOGICAL STUDY OF H -> TAU+ TAU- GAMMAThe decay H →τ^+τ^-γ is its own CP-conjugate process.Let us study the kinematic configuration of the decay in the center-of-momentum frame of τ^+τ^- (equivalently called the di-tau rest frame).From Fig. <ref> it is clear that the CP transformation takes the angle θ between τ^+ and photon to π-θ.This implies that any difference (or asymmetry) in the angular distribution of events with respect to cosθ↔ -cosθ (`forward' ↔ `backward') exchange would be a clear signature of CP-violation. As illustrated in Fig. <ref> the decay H →τ^+τ^-γ proceeds via the tree-level Hττ Yukawa interaction, as well as via the effective vertex of H →𝒱 γ→τ^+τ^-γ, with 𝒱=Z,γ.The effective Lagrangian for the later interaction can be, to the lowest mass dimension order, written in the form, ℒ_H𝒱γ = H/4 (2 A_2^Zγ F^μν Z_μν + 2 A_3^Zγ F^μνZ_μν + A_2^γγ F^μν F_μν + A_3^γγ F^μνF_μν), where 𝒱_μν = ∂_μ𝒱_ν - ∂_ν𝒱_μ, 𝒱_μν = 1/2ϵ_μνρσ𝒱^ρσ, and A_2,3^𝒱γ are dimensionless form factors.Such form factors receive contributions from the SM loop-level diagrams (see Fig. <ref>), and from the interaction beyond the SM, the latter in general possibly also containing CP-violating couplings. We take into account only the SM loop contributions, assuming that BSM loop corrections are small compared to the tree level ones.Thus, we put A_3^𝒱γ=0 while doing numerical study[Note that for top quark or W boson contributions to H𝒱γ coupling, loop integrals are purely real, so the CP violating form factors can only be proportional to imaginary couplings.], but for completeness we will keep the A_3^𝒱γ dependent terms in our analytical expressions.The expressions for A_2^Zγ and A_2^γγ in the SM are given in Ref. <cit.>.Let us denote the decay amplitude for H→τ^+ τ^- γ by ℳ.As illustrated in Fig. 
<ref>, the amplitude can be split into three parts: (1) tree-level contribution ℳ^(Yuk), (2) loop-level Zγ contribution ℳ^(Zγ), and (3) loop-level γγ contribution ℳ^(γγ), i.e. ℳ = ℳ^(Yuk) + ℳ^(Zγ) + ℳ^(γγ).Like any other 3-body decay of a spin-0 particle, the full kinematics of H(p_H) →τ^+ (p_+)τ^- (p_-)γ (p_0) can be described by two independent variables.We choose to work with Lorentz invariant mass squares. Definingm_+-^2= (p_+ + p_-)^2 = (p_H - p_0)^2, m_+0^2= (p_+ + p_0)^2 = (p_H - p_-)^2, m_-0^2= (p_- + p_0)^2 = (p_H -p_+)^2, wherem_+-^2 + m_+0^2 + m_-0^2 = m_H^2 + 2 m_τ^2.We can express cosθ, defined in the di-tau rest frame, in terms of the Lorentz invariant variables:cosθ = ( 1 - 4 m_τ^2m_+-^2)^-1/2 m_-0^2 - m_+0^2/m_H^2 - m_+-^2.At the beginning of this section we have argued that the forward-backward asymmetry in cosθ distribution can serve as a probe of CP violation.Therefore, we see that the forward-backward asymmetry would be equivalent to an asymmetry in the distribution or number of events in the m_+0^2 vs. m_-0^2 plane (usually called a Dalitz plot) under the exchange m_+0^2 ↔ m_-0^2.Equivalently, one can consider distribution of events in the m_+0 vs. m_-0 plane which may be more convenient from experimental perspective.The `forward' (or `backward') region in Dalitz plot is that region where m_-0 > m_+0 (or m_-0 < m_+0).In the rest frame of the Higgs boson, the differential decay rate of H →τ^+ τ^- γ in terms of m_+0 and m_-0 is given by,^2 Γ_ττγ m_+0m_-0 = m_+0m_-0/64π^3 m_H^3ℳ^2 ≡𝒟(m_+0,m_-0),where the squared amplitude ℳ^2 can be split into six constituents,ℳ^2= |ℳ^(Yuk)|^2 + |ℳ^(Zγ)|^2 + |ℳ^(γγ)|^2 + 2(ℳ^(γγ) ℳ^(Zγ)*) + 2(ℳ^(Yuk) ℳ^(Zγ)*) + 2(ℳ^(Yuk) ℳ^(γγ)*).In order to clearly point out the terms responsible for the forward-backward asymmetry and see how it is related to CP-asymmetry, we write down the expression for the individual constituents of amplitude square, as shown in Eq. (<ref>), in terms of m_+-^2 and θ.Using Eqs. (<ref>) and (<ref>) one can easily rewrite all these expressions in terms of m_+0 and m_-0.Neglecting the subdominant m_τ dependent terms in the numerator, we have: |ℳ^(Yuk)|^2= 16 e^2(a_τ^2+b_τ^2) m_τ^2(m_H^4+m_+-^4) m_+-^4sin^2θ/^2 (m_H^2-m_+-^2)^2( (m_+-^2 - 4 m_τ^2)sin^2θ + 4 m_τ^2 )^2,|ℳ^(Zγ)|^2= g_Z^2(( c_A^τ)^2+( c_V^τ)^2)(( A_2^Zγ)^2+( A_3^Zγ)^2) m_+-^2 (m_H^2-m_+-^2)^2/8 ^2 ((m_+-^2-m_Z^2)^2+Γ_Z^2 m_Z^2)(1+cos^2θ),|ℳ^(γγ)|^2= e^2 (( A_2^γγ)^2+( A_3^γγ)^2) (m_H^2-m_+-^2)^2/2 m_+-^2^2(1+cos^2θ), (ℳ^(γγ) ℳ^(Zγ)*)= -e g_Z(m_H^2-m_+-^2)^2/4 ^2 ((m_+-^2-m_Z^2)^2+Γ_Z^2 m_Z^2) ×(2 c_A^τ ( A_2^γγ A_3^Zγ-A_2^Zγ A_3^γγ) m_ZΓ_Zcosθ + c_V^τ (A_2^γγ A_2^Zγ+A_3^γγ A_3^Zγ)(m_+-^2-m_Z^2) (1+cos^2 θ)), (ℳ^(Yuk) ℳ^(Zγ)*)= 4 e g_Z m_τ^2 m_+-^4sin^2θ/^2 ((m_+-^2-m_Z^2)^2+Γ_Z^2 m_Z^2) ( ( m_+-^2 - 4 m_τ^2 )sin^2θ + 4 m_τ^2 )^2 ×( c_A^τ ( A_3^Zγ a_τ - A_2^Zγ b_τ) m_ZΓ_Z(m_H^2-m_+-^2)cosθ+ c_V^τ(m_+-^2-m_Z^2) (A_2^Zγ a_τ(m_H^2-m_+-^2 cos^2 θ) + A_3^Zγ b_τ(m_H^2-m_+-^2) )), (ℳ^(Yuk) ℳ^(γγ)*)= - 8 e^2 m_τ^2 m_+-^2sin^2θ/^2( (m_+-^2 - 4m_τ^2 )sin^2θ + 4m_τ^2 )^2 ×(A_2^γγ a_τ(m_H^2 - m_+-^2cos^2θ) + A_3^γγ b_τ(m_H^2-m_+-^2) ), where c_V^τ = -1/2 + 2sin^2θ_W, c_A^τ = -1/2, and g_Z = e/(sinθ_Wcosθ_W), with θ_W being the weak mixing angle.Note that we have kept the total width of the Z boson, Γ_Z, because the Z boson can be on-shell in our case. 
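As a small numerical aid to the kinematic relations of this section, the sketch below evaluates cosθ in the di-tau rest frame directly from the Dalitz variables, using the constraint on the sum of the three invariant masses. The inserted particle masses are standard illustrative values rather than fitted inputs.

```python
import numpy as np

M_H, M_TAU = 125.25, 1.777   # GeV; illustrative values, not analysis inputs

def cos_theta(m2_p0, m2_m0, m_h=M_H, m_tau=M_TAU):
    """cos(theta) in the di-tau rest frame from the Dalitz variables (in GeV^2)."""
    # The three invariant masses satisfy m2_pm + m2_p0 + m2_m0 = m_h^2 + 2 m_tau^2.
    m2_pm = m_h**2 + 2.0 * m_tau**2 - m2_p0 - m2_m0
    beta = np.sqrt(1.0 - 4.0 * m_tau**2 / m2_pm)     # tau velocity factor in the di-tau frame
    return (m2_m0 - m2_p0) / (beta * (m_h**2 - m2_pm))

# A point symmetric in m2_p0 <-> m2_m0 lies on the Dalitz-plot diagonal, i.e. at cos(theta) = 0.
print(cos_theta(3000.0, 3000.0))
```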
We are interested in terms that are odd (linear) in cosθ (or, using Lorentz invariant variables, odd in the difference m_+0^2 - m_-0^2).Such terms are found to be proportional to m_ZΓ_Z as well as the product of CP-even and CP-odd couplings.If we use the narrow-width approximation for the Z boson propagator, 1/(m_+-^2-m_Z^2)^2 + Γ_Z^2 m_Z^2≈π/m_ZΓ_Zδ(m_+-^2-m_Z^2), the m_ZΓ_Z factor in the terms linear in cosθ cancels out, and it is obvious that maximum CP-violation occurs for m_+-^2 = m_Z^2.Thus, the dominant contribution to the forward-backward asymmetry comes from the events for which invariant mass of the τ pair is close to the Z boson mass.It is clear from Eq. (<ref>) that to a good approximation the asymmetry in the cosθ distribution probes the combination (A_3^Zγa_τ-A_2^Zγb_τ).In our numerical study in Sec. <ref> we put A_3^Zγ=0.In the following section we illustrate how the distribution pattern in the `forward' and `backward' regions of the Dalitz plot differ due to CP violation (i.e. b_τ≠ 0) by studying the following distribution asymmetry, 𝒜(m_+0,m_-0) = |𝒟(m_+0,m_-0) - 𝒟(m_-0,m_+0)|/𝒟(m_+0,m_-0) + 𝒟(m_-0,m_+0). Additionally, we also study the asymmetry integrated over the region where the invariant mass of the τ^+ τ^- pair is close to the Z boson mass, A(n) = | ∬(𝒟(m_+0< m_-0) - 𝒟(m_+0>m_-0) )Π(m_+-,n) m_+0m_-0|/∬𝒟(m_+0,m_-0)Π(m_+-,n) m_+0m_-0, where the function Π(m_+-, n) defines the cut on the invariant mass of the τ^+ τ^- pair Π(m_+-, n) =1for | m_+- - m_Z |⩽nΓ_Z ,0otherwise. The asymmetry A(n) is directly related to the number of events around the Z pole, A(n) = N_F(n)-N_B(n)/N_F(n) + N_B(n), where N_F/B(n) denote the number of events contained in the forward/backward region which are also contained in the region around Z pole as defined in Eq. (<ref>). § NUMERICAL STUDY In this section we do a numerical study of the effect of the CP violating parameter b_τ on the Dalitz plot distribution in m_+0 vs. m_-0 plane.Especially we focus on the size of the asymmetries 𝒜(m_+0,m_-0) and A(n) as defined in Eqs. (<ref>) and (<ref>).As detailed below, we impose a few kinematic cuts in the Higgs rest frame.In the next section we present the results of a heuristic simple MC simulation as an attempt to be closer to the experimental conditions at the HL-LHC.We note that by neglecting m_τ in comparison with Higgs mass m_H, one can constrain (to a very good approximation) the sum a_τ^2 + b_τ^2 from the experimentally measured pp → H →τ^+τ^- cross-section <cit.>, which yields a_τ^2 + b_τ^2 ≈ 0.93^+0.14_-0.12, where the experimental errors have been added in quadrature.To avoid infrared divergence, we impose a cut on the photon energy (i.e. specify a minimum energy for the photon) in the Higgs rest frame, E_γ^cut = 5 GeV. As we discuss later, the actual value of this cut has little impact on the decay branching ratios in the range of the di-τ invariant mass squared m_+-^2 most sensitive to the CP violation effect. For the sake of reference we note that, with this cut, for the full kinematical range of m^2_+- the branching ratio of H →τ^+τ^-γ is BR_ττγ=3.72× 10^-3.The branching ratio decreases once a cut is imposed on the three relative angles θ_X, with X ∈{ +-,+0,-0 } (see Fig. 
<ref>) among the final particles in the Higgs rest frame.An angular cut θ_X^cut specifies the minimum angle among the final particles.For θ_X^cut = 5^∘ we get BR_ττγ=3.24× 10^-3 which further decreases by approximately 15% for each 5^∘ increase in the cut.Both the angular cut θ_X^cut and photon energy cut E_γ^cut affect the allowed values of m_+0 and m_-0.In Fig. <ref> we see that the differential decay distribution have maxima close to the axes when m_± 0^2 approaches m_τ^2.These peaks are characteristic of the tree-level contribution from Fig. <ref>.A second peak is also easily discernible in the distributions around m_+-^2 = m_Z^2 as a slightly darker band, and this corresponds to contribution from the on-shell Z contribution, coming from the one-loop level diagrams of Fig. <ref>.Furthermore, for b_τ≠ 0 we do find non-zero forward-backward asymmetry.Also as expected, the distribution asymmetry 𝒜(m_+0,m_-0) become significantly large around the Z-pole region.The distribution asymmetry can be as large as ∼ 1% depending on the values of a_τ, b_τ such as for a_τ=0.950 and b_τ=0.20.Regarding the asymmetries A(n) around the Z-pole, see Eq. (<ref>), we note that the Z-pole cut as encoded in Eq. (<ref>) can be rewritten, in terms of the photon energy in the Higgs rest frame, as follows, Π( m_+-,n ) ≡Π( E_γ,n ) = 1 for |√(m_H^2 - 2 m_H E_γ) - m_Z |⩽ n Γ_Z,0 otherwise, From the equation above it is clear that for the invariant mass of the τ pair close to the Z pole, say | m_+- - m_Z |⩽ 5Γ_Z, that the photon energy cut E_γ^cut=5 GeV has no relevance, since the minimum photon energy required for events around Z-pole corresponds to higher photon energies.Only the angular cuts θ_X^cut have any bearing in such a case.In Fig. <ref> we show the variation of A(n) for 1 ⩽ n ⩽ 5 and compare it with with the ratio Γ_ττγ(around Z pole)/Γ_ττγ(full), where Γ_ττγ(around Z pole) is the partial decay rates for the decay H →τ^+τ^-γ with m_+- around the Z pole (imposed using Eq. (<ref>)), and Γ_ττγ(full) is the full partial decay rate.As expected, the asymmetry decreases with n, as it is strongly localised around the Z-pole, whereas Γ_ττγ(around Z pole)/Γ_ττγ(full) increases with n. The plot clearly shows the challenge for an experimental analysis to find an optimal balance between the magnitude of the effect and the statistics of the events. § SIMULATION STUDY OF H -> TAU+ TAU- GAMMA IN THE CONTEXT OF HL-LHC To estimate the sensitivity of the proposed Higgs boson decay H →τ^+τ^-γ to the CP violation effects at the HL-LHC, Monte-Carlo (MC) generators were used to simulate the signal in the actual experimental environment.However, due to the limited computing resources, we have used a simplified MC simulation procedure.The differential cross-sections corresponding to the various a_τ and b_τ values are computed using <cit.> as a function of m_+0 and m_-0. The MC signal samples are re-weighted using these cross-sections (which include the kinematic cuts of Sec. <ref>) to properly model the impact of the interference term, similar to the "interpolation" approach used in <cit.>.The validity of the approach is verified by the comparison of relevant kinematic distributions with the analytical calculations. 
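For reference, the event-level estimate of the asymmetry around the Z pole can be obtained along the following lines; the array names and the numerical values of m_Z and Γ_Z below are placeholders for the actual analysis inputs, and "forward" events are those with m_-0 > m_+0 as defined earlier.

```python
import numpy as np

M_Z, GAMMA_Z = 91.19, 2.50   # GeV; illustrative values

def fb_asymmetry(m_p0, m_m0, m_pm, n=5):
    """A(n) = (N_F - N_B) / (N_F + N_B) for events with |m_pm - m_Z| <= n * Gamma_Z.

    m_p0, m_m0, m_pm are per-event invariant masses in GeV; 'forward' means m_m0 > m_p0.
    """
    m_p0, m_m0, m_pm = map(np.asarray, (m_p0, m_m0, m_pm))
    in_window = np.abs(m_pm - M_Z) <= n * GAMMA_Z
    n_f = np.count_nonzero(in_window & (m_m0 > m_p0))
    n_b = np.count_nonzero(in_window & (m_m0 < m_p0))
    return (n_f - n_b) / (n_f + n_b) if (n_f + n_b) else 0.0
```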
We project the Dalitz plot distribution of events in forward and backward regions onto the m_+- axis to do a 1-dimensional binned study of the forward-backward asymmetry.A more detailed and thorough MC study taking the full 2-dimensional Dalitz plot distribution into account and exploring unbinned Dalitz plot analysis techniques such as the Miranda method <cit.>, the method of energy test statistic <cit.> and the earth mover's distance <cit.> are reserved for future explorations.In the following, all additional cuts are defined in the laboratory frame.Reconstruction of the Higgs rest frame, that was used in the previous section would require the knowledge of the Higgs boson three-momentum which is not known experimentally.Besides, the observed distribution of events in m_+0 vs. m_-0 Dalitz plot can be obtained in any frame of reference.§.§ Monte Carlo SimulationFor the gluon-fusion production of the Higgs boson, the<cit.> generator was used with the<cit.> PDF set.Proton-proton collisions are set to happen at center-of-mass energy of 14 TeV, as is expected for HL-LHC.For the simulation of the decay of the Higgs boson, modelling of the parton showers, and hadronization, the simulated events were processed with the<cit.> program with the<cit.> PDF set. <cit.> framework is then used to emulate the resolution and reconstruction of physical objects (such as photons, τ leptons, and jets) by a general-purpose particle detector (such as ATLAS or CMS) using the "HLLHC" card. <cit.> package is used to perform the jet clustering using the anti-k_t algorithm <cit.>.In the simulation studies photons are required to have p_T > 10 GeV and to be isolated with an angular cone defined by the condition[Here and everywhere we use the cylindrical coordinates (r,ϕ) to describe the transverse plane, ϕ being the azimuthal angle around the beam line.The pseudorapidity η is defined as -lntan(θ/2).Finally, the angular distance is measured in units of Δ R ≡√((Δη)^2 + (Δϕ)^2).] Δ R≤ 0.3.The reconstructed τ leptons are required to have p_T > 15 GeV.Their reconstruction is based on seed jets with the radius parameter <cit.> = 0.4.This p_T selection represents a realistic lower limit of what a general purpose detector can achieve.We assume that hadronically decaying τ leptons can be identified with 100% efficiency.In reality this efficiency will be heavily dependent on the desired jet rejection power achievable with the conditions of the HL-LHC.The results presented in this section scale trivially with the τ identification efficiency.This optimisation is left for the future, more realistic, simulations of the performance of τ identification algorithms at the HL-LHC.All plots in this subsection are based on the MC simulation described above. The HL-LHC is expected to deliver about 3000 fb^-1 integrated luminosity of data <cit.>.This corresponds to over 160 million events with gluon-gluon fusion production of the Higgs boson.With hadronically reconstructed τs and taking the same kinematic constraints as considered in Section <ref>, we estimate that 2.24 × 10^5 of these Higgs bosons will eventually decay into the γ τ^+_had τ^-_had final state[This estimate does not include the laboratory frame requirements on p_T, Δ R anddiscussed above.].Approximately 10% of the events will have the di-τ system with the invariant mass m_+- within 5 GeV of the Z-boson mass peak where the forward-backward asymmetry manifests, see Fig. <ref>. 
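The per-object requirements listed above amount, in code, to simple kinematic filters together with the angular distance defined in the footnote. The sketch below illustrates them; the event-record layout and the simplified isolation criterion (no other reconstructed object inside the cone) are assumptions made purely for illustration and stand in for the detector-level definitions.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance Delta R = sqrt((d_eta)^2 + (d_phi)^2), with phi wrapped to (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def select_photon(pt, eta, phi, other_objects, pt_min=10.0, iso_cone=0.3):
    """Photon requirement: pT > 10 GeV and no other object of (pt, eta, phi) inside the 0.3 cone."""
    isolated = all(delta_r(eta, phi, e, p) > iso_cone for (_, e, p) in other_objects)
    return pt > pt_min and isolated

def select_tau(pt, pt_min=15.0):
    """Hadronic tau requirement: pT > 15 GeV (identification efficiency assumed to be 100%)."""
    return pt > pt_min
```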
For any selected range of m_+- we can estimate the number of events in `forward' and `backward' regions, say N_F and N_B respectively.Thus we can easily estimate the following forward-backward asymmetry, A = N_F - N_B/N_F + N_B. The laboratory frame kinematic requirements applied to the reconstructed objects, such as the τ and photon p_T and isolation requirements, further reduce the number of available events by a factor of 3 in the Z mass peak region, see Fig. <ref>.The photon p_T requirement by itself is responsible for a 50% decrease in the selection efficiency.[This can be contrasted with the fact that the photon energy cut in Higgs rest frame of 5 GeV has no effect when m_+- - m_Z⩽ 5 Γ_Z, as mentioned in Sec. <ref>.] §.§ Kinematic Fit Although the true invariant mass of the di-τ system (m_+-) offers a good way to access the forward-backward asymmetry, see Fig. <ref>, it is not accessible experimentally.The short lifetime of the τ leptons means that they will decay before reaching the detector, with ν_τ escaping undetected.For hadronically decaying taus that are used in the present study, the particles registered in the detector will be predominantly charged and neutral pions.The detectors have limited acceptance and resolution, meaning that energies and momenta of these particles will be reconstructed with a limited accuracy.The visible invariant mass of the di-τ system (m_+-^vis), constructed from the visible decay products of τ decays, offers a degraded sensitivity to the forward-backward asymmetry, with almost no visible Z peak, see Fig. <ref>.A fit procedure to recover the sensitivity to the asymmetry based on the kinematic constraints of the system is described in the following.[At this point we have three different ways to compute the invariant masses (and the asymmetry): true (m_X, A), using the full information of the ν_τ momentum from the MC; visible (m_X^vis, A^vis), using no information about the ν_τ momentum; and fitted (m_X^fit, A^fit), using the information obtained in the fit procedure.Here m_X can denote m_+- = m(τ^+τ^-), m_+0 = m(τ^+γ) or m_-0 = m(τ^-γ).]The final state of H →γ τ^+τ^- is subject to two constraints: * The true invariant mass of the three final particles must be equal to the mass of the Higgs boson. * The energy in the transverse plane, perpendicular to the beam line, should be conserved and equal to 0, with any deviations coming from either the missing neutrinos (ν_τ, ν_τ) or mismeasurements of the particle's energies.A fit procedure using<cit.> is performed based on these two conditions with the overall energy of the two τ leptons as free parameters.Since the opening angle between the neutrinos and visible tau decay product has to be of the order of m_τ / E_τ, both ν_τ and ν̅_τ are predominantly collinear with the visible parts of the hadronically decaying τ's, for the energies considered here.Therefore, the approach of treating the contributions from the τ neutrino and τ energy smearing as one common parameter that only affects the energy of the τ-lepton and not its spacial direction is justified.This simple fit procedure allows us to restore the true energies of the τ-leptons and the fitted two-body invariant masses match well with the true invariant masses, as demonstrated in Fig. 
<ref>.Further the fitted invariant masses m^fit_+0 and m^fit_-0 are used to identify events in forward and backward regions.This information is then used to estimate the asymmetry A^fit while selecting m^fit_+- in the region around Z-boson mass where the asymmetry is maximal.The asymmetry defined using the fitted masses, A^fit behaves similarly as expected for the true asymmetry as a function of the fitted di-τ mass, see Fig. <ref>. Thus A^fit is a reasonable estimator of the forward-backward asymmetry.In the following we evaluate this asymmetry in a real-data-like environment.§.§ Asymmetry Calculation We compare two methods of quantifying the asymmetry and estimating the corresponding values of b_τ.The first approach uses a simple selection of events with m^fit_+- close to the Z mass peak, where the asymmetry is maximised.Here, the window of ± 9 GeV around m_Z was chosen, i.e. m^fit_+- - m_Z⩽ 9 GeV.The width of this window was inspired by the range of the di-tau mass m^fit_+- where the asymmetry is enhanced, see Figs. <ref> and <ref>.The asymmetry estimate A^fit is then computed as in the equation <ref> and used to predict b_τ. The second approach involves widening the di-τ mass selection to the range of 72–114 GeV.The asymmetry A^fit is computed in bins of 3 GeV.A skewed Gaussian f_skew(t) is then fitted to the shape of the asymmetry distribution,f_skew(t) = c ϕ(t) Φ(α t),t = x-a/b,where ϕ(x) is the normal probability density function, Φ(x) is the normal cumulative distribution function, and a,b,c,α are the free parameters in the fit.These parameters are first determined in the fit to the shapes of the true asymmetry distributions obtained from analytical calculations, see Fig. <ref>.All parameters except for the overall scaling factor c are then fixed and the skewed Gaussian is fitted to the distribution of A^fit, see Fig. <ref>.The determined value of c is compared to the values from the fits to the analytical shapes to determine the b_τ, as shown in Fig. <ref>. For sake of completeness Fig. <ref> shows the fit to the reconstructed-fitted di-tau mass distribution for reconstructed events in the SM case when b_τ = 0. The results of the two approaches are summarised in Table <ref>.The uncertainties of the measurements include the statistical uncertainty of the expected HL-LHC event yields, which is the dominant one.To estimate the HL-LHC uncertainty contribution we rescale the yields to match those expected at 3000fb^-1 and recompute the statistical uncertainty accordingly.Both of the approaches produce comparable central values with the fit to f_skew resulting in lower uncertainties. § CONCLUSIONS We have analysed the 3-body decay of the Higgs boson H →τ^+τ^-γ as an additional source of information about the CP violation in the Hττ Yukawa coupling, independent from the existing experimental studies on the 2-body decay H →τ^+ τ^- <cit.>.The forward-backward asymmetry in the τ angular distribution in our case arises due to the interference of the tree-level contribution (which includes the CP violating Hττ Yukawa coupling b_τ≠ 0) and the CP-even SM loop-level contributions.We have proposed a novel method of measuring forward-backward asymmetry in the Dalitz plot distribution of events in the plane of γ τ^± Lorentz invariant masses (m_+0 vs. m_-0 plane).Such a Dalitz plot distribution is frame independent, making the method of extraction of forward-backward asymmetry clean and attractive from the experimental point of view. 
The asymmetry is directly proportional to the CP-odd Hττ coupling parameter b_τ.In principle, the asymmetry can also appear from the interference of CP-even tree-level contribution and CP violating loop-level contributions.However, for our numerical study, we assume no CP-violation at loop-level and focus only on the effects of non-zero b_τ and whether this can be experimentally probed at HL-LHC.The forward-backward asymmetry is predicted to be the largest when the di-τ invariant mass m_+- is close to m_Z (it could reach ∼ 1% for high values of b_τ) and it rapidly diminishes as one moves farther away from the Z pole.To estimate the feasibility of such asymmetry measurements at the HL-LHC we have performed a simplified MC simulation with kinematic cuts meant to mimic the experimental conditions.A kinematic fit was used to constrain the hadronically reconstructed τ-leptons and account for the missing ν_τ information not available in the detector.We estimated the asymmetry directly in the region with di-τ mass in the range of m_Z ± 9 GeV for different values of b_τ.We also looked for the asymmetry by performing a shape fit in a wider mass region, 72 GeV⩽ m_+-⩽ 114 GeV.From our MC studies we find that the statistical uncertainties we currently expect to get with the HL-LHC dataset are significantly larger than the effect itself.Nevertheless, our simplistic MC study suggests that our proposed methodology is experimentally doable, and our results could be encouraging for more detailed and in-depth explorations in the future.Instead of the one-dimensional binned shape fit used in this study a full two-dimensional unbinned Dalitz plot analysis could instead be envisaged using for example the Miranda method <cit.>, the method of energy test statistic <cit.> and the earth mover's distance <cit.>.The asymmetry can also appear from the interference of CP-even tree-level contribution and CP violating loop-level contributions, this effect has not been considered in our numerical studies yet.In our MC simulation, we have only considered final states with both of the τ-leptons decaying hadronically, the dataset can be doubled by also considering one of the τs to decay leptonically, i.e. 
adding the H →τ_hadτ_lepγ decay channel.With the better understanding of the technical capabilities of particle detectors such as ATLAS and CMS after the Phase-2 upgrades, the kinematic selections can be further optimised.Finally, once the asymmetry can be probed with reduced uncertainty, it would be interesting to compare its prediction for b_τ with that obtained from the already ongoing experimental study of H →τ^+τ^- → m^+ν_τm^-ν_τ where m=π,ρ etc.If there is significant deviation in the two b_τ values, one can assume that there is some significant CP-violation coming from the loop-level contribution, which we have neglected in our numerical study in this paper.It is interesting to note that the same loop-level diagrams also contribute to H →ℓ^+ℓ^-γ for ℓ=e,μ, and for these decay modes the tree-level contributions are negligible.Moreover, the same Dalitz plot techniques developed for H →τ^+τ^-γ can also be applied to probe the asymmetry in the Dalitz plots of H →ℓ^+ℓ^-γ to constrain or discover the CP violation at loop-level.Therefore, our formalism of probing the forward-backward asymmetry inside the Lorentz invariant Dalitz plot distribution of events would certainly help explore CP property of the Higgs boson in a more systematic and unified manner.§ ACKNOWLEDGEMENTS We thank Steffen Mæland and Bjarne Stugu for helpful discussions concerning experimental signatures of the CP violation in the Hττ Yukawa coupling. This research has received funding from the Norwegian Financial Mechanism for years 2014-2021, under the grant no 2019/34/H/ST2/00707.The work of DS is supported by the Polish National Science Centre under the Grant number DEC-2019/35/B/ST2/02008.utphys_jr | http://arxiv.org/abs/2311.16211v1 | {
"authors": [
"Erlend Aakvaag",
"Nikolai Fomin",
"Anna Lipniacka",
"Stefan Pokorski",
"Janusz Rosiek",
"Dibyakrupa Sahoo"
],
"categories": [
"hep-ph"
],
"primary_category": "hep-ph",
"published": "20231127175434",
"title": "Exploring CP violation in $H \\to τ^+ τ^- γ$"
} |
| http://arxiv.org/abs/2311.16009v1 | {
"authors": [
"Thiago Bergamaschi",
"Naresh Goud Boddu"
],
"categories": [
"quant-ph"
],
"primary_category": "quant-ph",
"published": "20231127170902",
"title": "On Split-State Quantum Tamper Detection and Non-Malleability"
} |
| http://arxiv.org/abs/2311.16246v1 | {
"authors": [
"Charlie Cresswell-Hogg",
"Daniel F. Litim"
],
"categories": [
"hep-th"
],
"primary_category": "hep-th",
"published": "20231127190029",
"title": "Scale Symmetry Breaking and Generation of Mass at Quantum Critical Points"
} |
[ Event Detection in Time Series: Universal Deep Learning Approach equal* Menouar Azib Benjamin Renard Philippe Garnier Vincent Génot Nicolas André Machine Learning, ICML0.3in ]Event detection in time series is a challenging task due to the prevalence of imbalanced datasets, rare events, and time interval-defined events. Traditional supervised deep learning methods primarily employ binary classification, where each time step is assigned a binary label indicating the presence or absence of an event. However, these methods struggle to handle these specific scenarios effectively. To address these limitations, we propose a novel supervised regression-based deep learning approach that offers several advantages over classification-based methods. Our approach, with a limited number of parameters, can effectively handle various types of events within a unified framework, including rare events and imbalanced datasets. We provide theoretical justifications for its universality and precision and demonstrate its superior performance across diverse domains, particularly for rare events and imbalanced datasets. § INTRODUCTION Event detection in time series data is a crucial task in various domains, including finance, healthcare, cybersecurity, and science. This task involves identifying instances of behavioral shifts, often referred to as the change-point detection problem in statistical literature <cit.>. Such events encompass anomalies, frauds, physical occurrences, and more. In recent years, supervised deep learning methods have emerged as powerful tools for addressing this challenge, often employing a classification framework to assign binary labels to each time step, indicating the presence or absence of an event <cit.>.However, these classification-based approaches face inherent limitations. Particularly, they may struggle to handle imbalanced datasets, where the majority class (non-events) significantly outnumbers the minority class (events) <cit.>. This imbalance can lead to biased predictions, favoring the majority class and hindering the accurate detection of events. To address this issue, various techniques have been proposed, such as the SMOTE algorithm, which artificially inflates the minority class to improve classification performance. However, these methods have limitations, including potential overfitting and the introduction of artificial data points <cit.>.Moreover, these classification-based methods often fail to consider events defined by time intervals, a common occurrence in real-world scenarios. Events may span multiple time steps, and their identification requires capturing the temporal context of the data rather than simply assigning binary labels to individual time steps. Moreover, these methods can be broadly categorized into two approaches: empirical methods primarily focused on practical applications and benchmark performance, lacking a strong theoretical foundation <cit.>, and methods with a theoretical foundation grounded in mathematical proofs or justifications for their efficacy. To our knowledge, only one work, <cit.>, falls into the latter category.In <cit.>, the authors propose a novel change point detection method employing a neural network architecture that includes the CUSUM-based classifier <cit.> as a specific instance. They demonstrate that their architecture cannot underperform the CUSUM classifier in identifying change points. 
Additionally, they show that the misclassification error is constrained by two factors: one associated with the inherent misclassification error of the CUSUM-based classifier and the other related to the complexity of the neural network class as measured by its Vapnik-Chervonenkis (VC) dimension <cit.>. However, their study has certain limitations. It models the problem as a change-in-mean model <cit.>, which may not be adequately generalized for other types of event detection, including anomalies. Furthermore, it assumes that the data are drawn from a multivariate normal distribution, potentially limiting the applicability of their approach.In contrast to these limitations, we present a novel supervised deep learning approach for event detection in multivariate time series data that departs from binary classification and leverages regression. This departure offers several advantages in handling imbalanced datasets. Unlike traditional classification, regression inherently accommodates continuous outputs, making it suitable for scenarios where events may not be binary and imbalanced. Additionally, our approach, previously introduced in <cit.>, accepts ground truth events defined as specific time points or intervals, eliminating the need for point-wise labels across the entire dataset. While <cit.> focused on algorithmic implementation and introduced a Python package, our paper delves deeper into the theoretical underpinnings of the method by presenting a mathematical framework.We demonstrate the universality of our approach in detecting events in time series data, assuming mild continuity assumptions for multivariate time series. By utilizing the universal approximation theorem <cit.>, we establish that our method can detect a broad spectrum of events with arbitrary precision. Notably, our approach surpasses <cit.> in robustness and applicability, owing to its weaker assumptions. This nuanced approach enhances versatility, making it suitable for diverse event detection scenarios. Beyond theoretical considerations, we showcase the practical effectiveness of our approach. Despite having a minimal number of trainable parameters, it outperforms existing deep-learning techniques when applied to real-world imbalanced datasets, such as those in fraud detection and bow shock crossing identification. These empirical validations underscore not only the efficacy but also the broad applicability of our framework across various domains, positioning it as a formidable contender in the field of event detection. The regression-based method, by design, provides a more flexible and nuanced approach to handling imbalanced datasets, contributing to its effectiveness in capturing rare events and continuous variations in event characteristics.In summary, our proposed framework, rooted in deep learning and regression, offers a robust and versatile solution for event detection in multivariate time series data, particularly in scenarios with imbalanced datasets and non-binary events. It demonstrates superior performance compared to existing methods, both theoretically and empirically. This novel approach holds significant promise for addressing event detection challenges across diverse domains, making it a valuable contribution to the field of time series analysis.§ MATHEMATICAL FORMULATION This section introduces the mathematical formulation of the method. 
The method is based on two key components: a multivariate time series that represents the data, and a set of reference events, also known as ground truth events. Let T(t) be a time series that maps a real value t to a feature vector in 𝒳⊂ℝ^f, where f is the number of features. The mapping can be represented as follows:T: ℝ →𝒳⊂ℝ^f t↦ T(t)The ground truth events are encapsulated within a set E. Each event e within this set is an interval defined by a start time τ_1 and an end time τ_2, denoted as:e = [τ_1, τ_2]where both τ_1 and τ_2 are real numbers.We assume, without loss of generality, that the time series T(t) is defined for all t in the interval [α, β], where α and β are real numbers, and that it takes values in 𝒳. Additionally, we assume that the reference events in the set E do not overlap, i.e., for all e_1, e_2 ∈ E, the intersection of e_1 and e_2 is an empty set (e_1 ∩ e_2 = ∅). §.§ Overlapping PartitionsIn this subsection, we introduce overlapping partitions concept. We split the interval [α, β] into a sequence of equally spaced points (t_1=α < t_2 < t_3, … < t_N=β), where N ∈ℕ, and the spacing is s∈ℝ. We consider a family of overlapping partitions (p_i)_i ∈ I where I = {i ∈ℕ | 1 ≤ ii ≤ N - w + 1} is an index set. An overlapping partition p_i is set of size w ∈ℕ, w>1, denoted by {t_i, t_i+1, …, t_i+w-1}, where i is in I. The set of all overlapping partitions is 𝒫 = {p_i | i ∈ I}. The term 'overlapping' means that any two neighboring partitions have at least one point in common. Next, we define a function o that maps each overlapping partition p_i ∈𝒫 to a vector v_i ∈𝒱, as follows:o : 𝒫 →𝒱p_i↦ v_i = o(p_i)Here 𝒱={o(p_i) | i ∈ I}. The function o assigns to the partition p_i a vector v_i of dimension r = w· f, which contains the values of the time series T(t_j) for all j such that i ≤ jj ≤ i+w-1. The vector v_i can be written as follows:v_i = [ T(t_i)[1]; ⋮; T(t_i)[f]; ⋮; T(t_i+w-1)[1]; ⋮; T(t_i+w-1)[f]; ]This representation concatenates the feature values of the time series T over the partition p_i, where T(t_j)[k] denotes the k-th feature value at time t_j. §.§ Overlapping Parameter FunctionIn this subsection, we introduce the overlapping parameter function, denoted as op, which quantifies the temporal distance of each partition p_i ∈𝒫 with respect to the nearest events. The function op assigns a value between 0 and 1 to each partition.To calculate the op value for a given event e ∈ E and partition p_i ∈𝒫, we use the Jaccard similarity coefficient <cit.>. The op value is computed by taking the duration of the intersection between p_i and e, and dividing it by the duration of their union. This is shown in the following formula:op(p_i, e) = duration(p_i ∩ e)/duration(p_i ∪ e) The value of op(p_i, e) will be close to 1 if the event e and the partition p_i largely overlap, and close to 0 if they have little overlap. This provides a measure of the temporal proximity of the partition to the event.Given that the cardinality of each partition is denoted as w, we define the temporal duration w_s as w_s = (w - 1)· s. To synchronize the temporal duration of each partition (w_s) with the events, we adjust each event e using the following formula:∀ e=[τ_1, τ_2] ∈ E, t_mid = τ_1 + τ_2/2, e = [τ_mid - w_s/2, τ_mid + w_s/2]This formula modifies the start and stop times of each event to align with the temporal size of the partitions. The midpoint of the event remains the same, but the duration is adjusted to match w_s.Let p_i = {t_i, t_i+1, …, t_i+w-1}, where i ∈ I, be a partition. 
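Before making precise when an event counts as close to such a partition (next paragraph), the constructions introduced so far — the window-to-vector map o, the recentring of events to duration w_s, and the Jaccard overlap — can be sketched in a few lines. The array layout and function names below are illustrative choices and not part of the formal development.

```python
import numpy as np

def windows(T, w):
    """Map each overlapping partition p_i to the stacked vector v_i of length w * f.

    T is an array of shape (N, f) holding T(t_1), ..., T(t_N)."""
    N = len(T)
    return np.stack([T[i:i + w].ravel() for i in range(N - w + 1)])

def recenter(event, w_s):
    """Recentre an event [tau_1, tau_2] on its midpoint so that its duration equals w_s."""
    mid = 0.5 * (event[0] + event[1])
    return (mid - w_s / 2.0, mid + w_s / 2.0)

def op_overlap(span, event):
    """Jaccard overlap of two time intervals: duration of intersection over duration of union."""
    inter = max(0.0, min(span[1], event[1]) - max(span[0], event[0]))
    union = max(span[1], event[1]) - min(span[0], event[0])
    return inter / union if inter > 0.0 else 0.0
```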
The event e = [τ_1, τ_2], where e ∈ E, is considered close to p_i if:|t_i - τ_1| < w_s.Given p_i and e = [τ_1, τ_2] ∈ E, with t_i+w-1 = t_i + w_s, we define p_i ∩ e and p_i ∪ e as follows:p_i ∩ e = ∅, if eis not close top_i(Def. <ref>)[τ_1, t_i+w-1],ift_i ≤τ_1 t_i > τ_1 - w_s [t_i, τ_2],if τ_1 < t_it_i < τ_2 To simplify, we define I_1 = ]τ_1 - w_s, τ_1] and I_2 = ]τ_1, τ_2[. p_i ∪ e=[t_i, τ_2],ift_i ∈ I_1 [τ_1, t_i+w-1],ift_i ∈ I_2Based on the definitions, we can express op(p_i, e) as follows: op(p_i, e)=0, if e is not close top_it_i + w_s - τ_1τ_2 - t_i,ift_i ∈ I_1τ_2 - t_it_i + w_s - τ_1,ift_i ∈ I_2 It is readily apparent that when t_i = τ_1, op(p_i, e) equals 1. Similarly, when t_i = τ_1 + τ_22, op(p_i, e) equals 1/3. <ref> provides a visual representation of how op(p_i, e) changes with respect to the middle time of p_i, denoted as t_i + w_s/2. As can be seen from the plot, it is evident that the peak of op(p_i, e) is situated at the midpoint of event e.We now define the op value of p_i with respect to all the events E as follows: op(p_i) = max_e∈ E op(p_i, e).This definition suggests that op(p_i) is equivalent to op(p_i, e), where e denotes the event that is closest to p_i. This is because the event closest to p_i exhibits the maximum overlap with the partition p_i. We introduce a new function π defined as follows:π: 𝒱 → [0 , 1] v_i↦π(v_i) = op(p_i)where p_i is a partition corresponding to vector v_i. The function π is characterized as a black-box function, which means that there is no explicit mathematical formulation available for it. It's important to note that by selecting a sufficiently large value for w, we can ensure a one-to-one correspondence between each v_i in 𝒱 and a unique p_i in 𝒫. This is without loss of generality, as assuming that the function o is bijective allows us to associate v_i with p_i having the same index in the definition of the π function. This assumption of bijectivity simplifies the association process. The function π plays a pivotal role in the event detection process, a topic that will be further elaborated in the following sections. § PRINCIPLE OF EVENT DETECTIONThe principle of event detection, based on overlapping partitions 𝒫 and the function π, centers around identifying a function f with a well-defined mathematical expression that accurately approximates the function π. A straightforward approach involves training a feed-forward neural network f on the set {(v_i, π(v_i)) | i∈I_train⊂ I}.The approximation error for f is computed as follows:ε(f)=1/|I_train|∑_i ∈I_trainℒ(f(v_i),π(v_i))In this equation, ℒ denotes the loss function, which measures the disparity between the approximation value f(v_i) and the ground truth value π(v_i). In regression problems, the mean squared error (MSE) is commonly employed as the loss function <cit.>.Once trained, f can be applied to any vector v_k for k∈ I_test⊂ I to estimate π(v_k), equivalent to op(p_k). The peaks in op(p_k) for k∈ I_test should align with the mid-times of the predicted events, as previously illustrated in <ref>. These peaks are identified to extract the predicted events, characterized by the intervals:e_q = [τ_q - w_s/2, τ_q + w_s/2]In this equation, τ_q denotes the mid-time of the q-th peak. If T and T^-1 are continuous, then there exists a feed-forward neural network f ∈Σ^r(Ψ) that utilizes a squashing function Ψ. 
This network can approximate the function π from 𝒱 to [0,1] with arbitrary precision, given a sufficient number of hidden units Q ∈ℕ.A squashing function Ψ is a type of function that compresses the input into a smaller range. In neural networks, squashing functions, such as the sigmoid and hyperbolic tangent (tanh), serve as activation functions. These functions transform any input value into a range between 0 and 1 (for sigmoid) or between -1 and 1 (for tanh). Here, Σ^r(Ψ) represents a set of single hidden layer feed-forward neural networks defined as follows:{ f: ℝ^r→ℝ: f(x) =∑_j=1^Qβ_j Ψ(A_j(x)) }where x ∈ℝ^r, β_j ∈ℝ, and A_j ∈𝐀^r. The function A_j(x) = w_j · x + b_j, with w_j ∈ℝ^r and b_j ∈ℝ. The parameters w_j, b_j, and β_j correspond to the network weights.We provide the proof for Theorem <ref> in <ref>.Under the continuity of both T and T^-1, this theorem ensures that the function π can be effectively approximated by a feed-forward network f ∈Σ^r(Ψ) with enough hidden units, achieving any desired precision. This establishes the foundation for the reliable and effective method of accurately detecting a wide range of events in time series data, as discussed in the earlier principles of detection. We note that the theoretical guarantees of this method depend on the continuity of both the time series and its inverse. However, in practical scenarios where continuity may not hold, the method still performs well based on the empirical success of neural networks in various applications. Neural networks have shown their ability to learn and approximate complex patterns, even without strict continuity assumptions. Therefore, while the theory assumes continuity, the adaptability and learning capability of neural networks allow this framework to handle cases where continuity is relaxed. Our proof demonstrates the value of this method for applications in signal processing, anomaly detection, and prediction across various fields, such as finance, medicine, and engineering. §.§ Practical Considerations and Implementations In practical applications, feed-forward neural networks for regression tasks often introduce noise into predictions due to factors such as complex data, overfitting, and training limitations. In our scenario, the noise in predictions generated by f ∈Σ^r(Ψ) can pose challenges when estimating peak locations, potentially leading to false events.To address this issue, one universally effective method is to smooth the approximation f by convolving it with a Gaussian kernel. This convolution operation attenuates high-frequency noise while preserving the underlying shape of f <cit.>, resulting in more accurate and reliable peak location estimation and a reduction in false events.The extent of smoothing is governed by the standard deviation σ. Achieving optimal smoothing outcomes requires selecting the optimal standard deviation. This is accomplished using an optimization algorithm that determines the value of σ that maximizes the F1-Score.Additionally, optimizing the peak threshold is essential to complement the smoothing process. The peak threshold determines which values in the smoothed f are considered as peaks. A suitable threshold ensures a balance between capturing ground truth peaks and minimizing the inclusion of noise-induced peaks. Similar to the standard deviation, optimizing the peak threshold is crucial for accurate peak detection results.For further details and comprehensive discussions, the reader is referred to the additional materials presented in <ref>. 
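Putting the ingredients of the last two sections together, the detection recipe amounts to fitting a single-hidden-layer regressor f ∈ Σ^r(Ψ) on the pairs (v_i, π(v_i)), smoothing its test-time output, and reading peaks off as event mid-times. The sketch below uses off-the-shelf scikit-learn and SciPy components as stand-ins for that recipe; the smoothing width σ and the peak threshold are illustrative and would in practice be tuned to maximise the F1-Score, as described above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

def fit_op_regressor(V_train, op_train, Q=20):
    # Single hidden layer of Q sigmoid ("logistic") units, trained on pairs (v_i, pi(v_i)).
    model = MLPRegressor(hidden_layer_sizes=(Q,), activation="logistic", max_iter=2000)
    model.fit(V_train, op_train)
    return model

def detect_events(model, V_test, t_mid, w_s, sigma=2.0, height=0.5):
    """Predict op values, smooth them with a Gaussian kernel, and turn peaks into intervals.

    t_mid holds the mid-time of each test partition; sigma and height stand in for the
    values that would be optimised against the F1-Score."""
    op_pred = model.predict(V_test)
    op_smooth = gaussian_filter1d(op_pred, sigma=sigma)
    peaks, _ = find_peaks(op_smooth, height=height)
    return [(t_mid[k] - w_s / 2.0, t_mid[k] + w_s / 2.0) for k in peaks]
```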
§ NUMERICAL STUDY In this section, we conduct a comprehensive evaluation of the method's effectiveness on two challenging and imbalanced datasets. The first dataset focuses on credit card fraud detection <cit.>, while the second dataset involves the detection of bow shock crossings in space physical time series. We employ the F1-Score metric, which adeptly balances precision and recall, making it particularly suited for assessing performance on imbalanced datasets where the minority class holds significant interest.We benchmark the F1-Score results against state-of-the-art metrics obtained by diverse architectures, as reported in reputable literature <cit.>. All results and materials from this section are available on our GitHub repository for reproducibility. §.§ Fraud Detection §.§.§ Setup The credit card fraud detection dataset labels each one-second time step as either 0 or 1, indicating the absence or presence of fraud, respectively. However, the method requires fraud instances to be represented as a list of time intervals. To reconcile this, we transform the labeled time steps corresponding to fraud occurrences into a list of time intervals. For each time step c_1_q where the label is 1 (indicating fraud), with the total number of frauds denoted as n_b, we define an interval as follows:e_q = [c_1_q - w_s/2, c_1_q + w_s/2] Subsequently, we construct the list of fraud intervals:E = { e_q | q = 1, 2, …, n_b } Given that the fraud dataset has time steps of one second, i.e., s=1 second, and considering that the deep learning methods we are comparing with are based on binary classification (predicting the presence or absence of fraud for each second), we need to ensure a fair comparison with our method. Therefore, we consider the temporal size w_s of fraud to be 1 second, which is specified by w_s = (w-1) · s. Thus, we must set w = 2. For our method, we use an FFN with a single hidden layer containing Q=20 neurons. The activation function Ψ is set to sigmoid and the partition size w is chosen to be 2.§.§.§ Comparison with and without Data Balancing (SMOTE) We compare the method with deep learning approaches that either employ or do not employ any data balancing technique, such as Synthetic Minority Over-sampling Technique (SMOTE) <cit.>. The benchmark methods include a Convolutional Neural Network (CNN) <cit.> without SMOTE and Feed-Forward Neural Networks (FFN) <cit.> with SMOTE.The method demonstrates superior performance (F1-Score) compared to the aforementioned methods. Furthermore, our approach exhibits a substantial reduction in parameters, with 1,201 parameters as opposed to 119,457 in the CNN-based method and 5,561 in one of the FFN-based methods. This significant parameter reduction not only indicates computational efficiency but also renders our method well-suited for scenarios with limited computational resources. Our method surpasses both competing methods that used SMOTE in terms of precision and F1-Score, achieving high performance even without using SMOTE for data balancing.Figure <ref> showcases the training loss and validation loss of FFN during the training process on fraud detection. The low losses observed in both the training and validation phases indicate that the network has successfully learned the underlying patterns, justifying the obtained good metrics.Furthermore, Figure <ref> zooms in on the comparison between the predicted op values and the true op values in the credit card fraud case. 
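The label-to-interval conversion used in this setup reduces to one line over the time stamps flagged as fraud; a minimal sketch, assuming the labels arrive as a 0/1 array indexed by second, is:

```python
import numpy as np

def labels_to_intervals(labels, times, w_s=1.0):
    """Convert point-wise 0/1 fraud labels into events e_q = [c - w_s/2, c + w_s/2]."""
    centers = np.asarray(times)[np.asarray(labels) == 1]
    return [(c - w_s / 2.0, c + w_s / 2.0) for c in centers]

# Ten seconds of labels with frauds at t = 3 s and t = 7 s.
print(labels_to_intervals([0, 0, 0, 1, 0, 0, 0, 1, 0, 0], np.arange(10.0)))
```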
As expected, the shape of the predicted values aligns well with the true values, showing minimal fluctuations, and sometimes additional peaks appear. This result highlights the effectiveness of using a Gaussian filter to attenuate these fluctuations and a peak detection threshold to eliminate false peaks, thereby leading to improved accuracy of the predictions and justifying the excellent metrics obtained. For more discussions on this, see Section <ref>. §.§ Bow Shock Crossings Detection In this section, we extend the evaluation of the method to a distinct imbalanced dataset explicitly designed for plasma's physical time series with bow shock crossing events. The dataset is derived from Mars Express spacecraft (MEX) data using the AMDA software <cit.>, with ground truth annotations for bow shock crossings provided by <cit.>.We compare the method with a study by <cit.>, where the authors developed a deep learning approach for bow shock crossing detection using Cassini data and ResNet18, a well-known deep CNN architecture. Despite the entirely different nature of our dataset, focusing on time series data rather than images, we choose to compare with <cit.> due to the absence of other bow shock detection studies using deep learning on time series data. We aim to demonstrate that our method, with a minimal number of parameters, can achieve metrics comparable to state-of-the-art architectures like ResNet18.§.§.§ Setup In configuring our approach, we intentionally selected a partition size w of 76. This decision aligns with the empirical understanding that a bow shock event typically spans 5 minutes, as indicated in <cit.>, where criteria are assessed within a 5-minute timeframe to validate bow shock crossings. With our dataset's sampling rate denoted by s and set at 4 seconds, the event duration w_s is precisely calculated as (76-1)· 4 = 300 seconds, corresponding to the acknowledged 5-minute duration.In this specific configuration, we employ a feed-forward neural network featuring a single hidden layer. The hidden layer comprises Q=20 neurons and utilizes a sigmoid function as a squashing function.§.§.§ Performance Evaluation and Comparative Analysis The method outperforms the ResNet18-based method. Moreover, the method's efficiency is evident in its significantly lower number of parameters, demonstrating its ability to achieve high performance with low complexity.Figure <ref> showcases the training loss and validation loss of FFN during the training process of the Bow Shock case.Figure <ref> zooms in on the comparison between the predicted op values and the true op values on the bow shock case. In this case, we see a better matching between predicted values with the true values than in the previous case, which justifies a better F1-Score.Here is the corrected and improved version of the "Conclusion" section:§ CONCLUSION In this paper, we have presented a novel deep learning supervised method for event detection in multivariate time series data, leveraging a regression-based approach instead of traditional classification. We have established a rigorous theoretical foundation, making this method a versatile framework capable of detecting a wide range of events, including change points, frauds, anomalies, and more. By demonstrating its universality under mild assumptions about the continuity of multivariate time series, we have established its ability to identify events with arbitrary precision.Our framework not only excels in theoretical considerations but also exhibits practical efficacy. 
With a minimal number of trainable parameters, our approach outperforms existing deep learning methods on real-world imbalanced datasets, particularly in fraud detection and bow shock crossing identification. These practical validations underscore the effectiveness and relevance of our framework across various domains, establishing it as a compelling solution for event detection in diverse fields.However, it is essential to acknowledge certain limitations in our framework, particularly in the context of multi-class event detection, where complexities may arise. To address these limitations, we plan to extend the evaluation of our framework to diverse datasets from various fields, aiming to achieve state-of-the-art metrics compared to other methods. This broader testing approach will allow us to further demonstrate the advantages of our method, including its minimal parameter requirements, which not only reduce computational resources but also contribute to a more sustainable and environmentally friendly solution. Additionally, our ongoing efforts involve enhancing the framework's capabilities to predict events of varying durations, addressing a notable limitation in our current approach, which focuses solely on predicting the midpoint of events with fixed durations. This advancement represents a significant improvement in the versatility and practical applicability of our framework.Overall, the proposed method represents a significant step forward in event detection for multivariate time series data. Its theoretical rigor, practical efficacy, and minimal parameter requirements make it a compelling solution for a wide range of applications. We believe that our method has the potential to revolutionize event detection in various fields and will continue to explore its capabilities and expand its applicability in future work.icml2024§ PROOFSTo prove Theorem <ref>, we first need to establish the continuity of the following three functions: o, op, and π. But before delving into that, let's provide some definitions. We define a distance d_𝒫 as follows:d_𝒫: 𝒫×𝒫 →ℝ(p_i, p_j)↦ d_𝒫(p_i, p_j) = | t_i - t_j | We define a distance d_𝒱 as follows:d_𝒱: 𝒱×𝒱 →ℝ(v_i, v_j)↦ d_𝒱(v_i, v_j) = √(∑_m=1^w∑_k=1^f (T(t_j+m-1)[k] - T(t_i+m-1)[k])^2) We define a distanced_T as follows:d_T: ℝ×ℝ →ℝ(t_i, t_j)↦ d_T(t_i, t_j) = √(∑_k=1^f (T(t_j)[k] - T(t_i)[k])^2)Let i, j ∈ I and m ∈ℕ, such that 0 < m ≤ w and 0 < i < j.Proving the Continuity of o:We have | t_j+m-1 - t_i+m-1| = | t_j - t_i | = (j - i) · s. Given ϵ = s > 0, without loss of generality, assume s ≪ 1 (assuming N is sufficiently large). Then, there exists δ = (j - i)ϵ + ϵ such that | t_j+m-1 - t_i+m-1| < δ. If we assume that the time series T is continuous, then | t_j+m-1 - t_i+m-1| < δ implies d_T(t_j+m-1, t_i+m-1) < ϵ.For p_i, p_j ∈𝒫, let v_i = o(p_i) and v_j = o(p_j). From the definition:d_𝒱(v_i, v_j)= √(∑_m=1^w∑_k=1^f (T(t_j+m-1)[k] - T(t_i+m-1)[k])^2).If d_T(t_j+m-1, t_i+m-1) < ϵ, then we can bound d_𝒱(v_i, v_j) as follows:d_𝒱(v_i, v_j) < √(w)ϵ. Let ϵ_1 = √(w)ϵ. Choose δ = (j - i) ϵ_1/√(w) + ϵ_1/√(w). Now, if d_𝒫(p_i, p_j) < δ, it implies d_𝒱(v_i, v_j) < ϵ_1. Thus, we can conclude that o is continuous.Proving the Continuity of op:For p_i, p_j ∈𝒫, we have d_𝒫(p_i, p_j) = (j-i) · s. Assuming that s is very small (given N is sufficiently large), without loss of generality, let's consider s ≪ 1. Under this assumption:* p_i and p_j share the same closest event e. 
Consequently, we express op(p_i) as op(p_i, e) and op(p_j) as op(p_j, e).* If t_i ∈ I_1, then t_j ∈ I_1, and if t_i ∈ I_2, then t_j ∈ I_2. For t_j, t_i ∈ I_1, the bound for op(p_j) - op(p_i) is given by:op(p_j) - op(p_i) < 2d_𝒫(p_j, p_i)/w_s Similarly, for t_j, t_i ∈ I_2, the bound is:op(p_j) - op(p_i) > -2d_𝒫(p_j, p_i)/w_s In summary:|op(p_j) - op(p_i)| < 2d_𝒫(p_j, p_i)/w_s This relation is the Lipschitzian relation, implying the continuity of op <cit.>.Proving the Continuity of π:Let (v_i_n)n be a sequence that converges to v_i, where v_i, vi_n∈𝒱. This can be formally written as:lim_n →∞ v_i_n = v_iThis means that as n approaches infinity, the sequence (v_i_n)n converges to v_i. In other words, the values of vi_n get arbitrarily close to v_i as n gets larger and larger. These assignments are valid since the function o is continuous. For more on convergence sequences with continuous functions, please refer to <cit.>.Let's define v_i, v_i_n as follows:v_i = [ x_i; ⋮; x_i+w-1; ], v_i_n = [ x_i_n; ⋮; x_(i+w-1)_n; ]Where x_(i+k)_n, x_i+k∈ℝ^f, 0 ≤ k ≤ w-1. We have lim_n →∞ v_i_n = v_i lim_n →∞ x_(i+k)_n = x_i+k, 0 ≤ k ≤ w-1 <cit.>. Since o is bijective, we can associate, for v_i, v_i_n respectively, the partitions p_i, p_i_n that can be defined as follows:p_i = {T^-1(x_i+k), 0 ≤ k ≤ w-1} p_i_n = {T^-1(x_(i+k)_n), 0 ≤ k ≤ w-1}We have for 0 ≤ k ≤ w-1 that lim_n →∞ x_(i+k)_n = x_i+k then if T^-1 is continuous, we can deduce that:lim_n →∞ p_i_n = p_i Since op is continuous, then:lim_n →∞ p_i_n = p_i lim_n →∞ op(p_i_n) = op(p_i)Finally, from the definition of the function π, we can write that lim_n →∞ v_i_n = v_i lim_n →∞π(v_i_n) = π(v_i), thus π is continuous <cit.>.We have 𝒫 as a finite set, implying it is compact. Since o is continuous, the image of 𝒫 under o,𝒱, is also compact <cit.>. Therefore, π is continuous over the compact set 𝒱. According to Theorem 2.4 in <cit.>, we can assert that a feed-forward neural network f ∈Σ^r(Ψ), utilizing a squashing function Ψ, can accurately approximate the function π with any desired degree of precision λ. In their proof, the authors choose the number of hidden units Q such that 1/Q < λ2, indicating that a larger value of Q is required to achieve an excellent approximation. § PRACTICAL CONSIDERATIONS AND IMPLEMENTATIONS §.§ Post-Processing for Noise Reduction In practical scenarios, the noise in predictions generated by f ∈Σ^r(Ψ) can pose challenges when estimating peak locations, potentially leading to false events. For this reason, it is often necessary to smooth the predicted values f by convolving them with a Gaussian kernel G_σ, characterized by a standard deviation σ.The predicted values generated by f are defined as follows:P_f = {k ∈ I_test, f(v_k)}By definition (cf. <ref>), the vector v_k is associated with partition p_k, and op(p_k) = f(v_k).The convolution operation is defined as follows:P_G[k]=∑_x=-r^r P[k-x] · G_σ[x]∑_x=-r^r G_σ[x]Where r ∈ℕ is a radius of the Gaussian filter. The Gaussian kernel G_σ is defined as follows:G_σ[x] = 1/√(2π)σ e^-x^2/2σ^2The normalization factor 1∑_k=-r^r G_σ, r[k] is applied to ensure that the sum of the normalized kernel values equals 1. This normalization step preserves the overall amplitude of the predicted values during the convolution operation.The Gaussian function is known for its property of being infinitely differentiable, which means that its derivatives of all orders exist and are continuous. 
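As an illustration of the smoothing step just described, a minimal sketch of the normalized Gaussian convolution P_G is given below. The names are illustrative placeholders, and the edge handling (zero padding via numpy's convolution) is a simplification of the formula above.

import numpy as np

def gaussian_kernel(radius, sigma):
    # Discrete Gaussian G_sigma[x] for x = -r, ..., r.
    x = np.arange(-radius, radius + 1, dtype=float)
    return np.exp(-x**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)

def smooth_predictions(p, radius, sigma):
    # P_G[k] = sum_x P[k - x] * G_sigma[x] / sum_x G_sigma[x]; the division preserves the amplitude.
    g = gaussian_kernel(radius, sigma)
    return np.convolve(p, g, mode="same") / g.sum()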
Consequently, when a Gaussian kernel is convolved with a function, the resulting function also inherits this property of infinite differentiability <cit.>. This characteristic simplifies the task of identifying local maxima or peaks.To distinguish the peaks that most likely represent true events, it is common to introduce a threshold on the peak height values, denoted as h. Peaks with values above the threshold are identified as predicted events. Similar to the standard deviation, optimizing the peak threshold is crucial for accurate peak detection results. §.§ Computing Predicted Events To compute the predicted events, we employ the following process that involves the following steps (<ref>):* Smoothing: The predicted values undergo a Gaussian filter to reduce noise and eliminate fluctuations. This smoothing process enhances the accuracy of event extraction by reducing false positives caused by noise.* Peak Identification: After smoothing, we identify peaks in the filtered predictions. These peaks correspond to the mid-times of the predicted events, indicating the locations where events are likely to occur.* Comparison with Actual Events: The identified peaks are compared with the actual events in the test set. A predicted event is considered a match if it occurs within a maximum time tolerance δ of its corresponding actual event. This time tolerance allows for some flexibility in matching the predicted and actual event times, accommodating the inherent temporal uncertainty in labeling the reference events. The maximum time tolerance parameter δ is a user-defined value. By default, we set δ to be equal to w_s.* Performance Evaluation: The performance is evaluated using the F1-Score metric. Maximizing the F1-Score is the desired outcome, as it requires simultaneously optimizing precision and recall. To maximize the F1-Score, several parameters are fine-tuned. These include the radius r and the standard deviation (σ) of the Gaussian filter, and the threshold (h) used for peak identification. By optimizing these parameters, we can accurately identify the predicted events, leading to improved overall performance of the method. § ADDITIONAL RESULTS <ref> and <ref> present a visual comparison between the predicted values and the ground truth values of op. The shape of the predicted values aligns well with the ground truth values, and occasional additional peaks may appear. The effectiveness of using a peak threshold to eliminate false peaks, as explained above, is evident in the improved accuracy of predictions.Furthermore, <ref> and <ref> illustrate the distribution of time differences δ(t) between predicted events and ground truth events respectively on fraud case and bow shock case. This visualization offers valuable insights into the temporal deviations between predicted and actual event occurrences, facilitating an analysis of the accuracy and precision of our event predictions concerning their temporal alignment. In the fraud case, the mean is equal to -0.15 seconds and a standard deviation of about 0.48 seconds implying that our framework achieves precise event detection. In the bow shock case, the distribution of time differences demonstrates a standard deviation of 75 seconds and a mean of 18 seconds. This indicates that predicted events typically deviate from ground truth events by an average of 18 seconds, with some variations of up to 75 seconds. 
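The matching of detected peaks to ground-truth mid-times within the tolerance δ, together with the timing statistics reported here, can be sketched as follows. This is a simplified nearest-neighbour matching for illustration only; the function names are placeholders and not taken from any released implementation.

import numpy as np
from scipy.signal import find_peaks

def evaluate_events(smoothed, timestamps, true_times, height, delta):
    # Peaks above `height` in the smoothed predictions are the predicted event mid-times.
    idx, _ = find_peaks(smoothed, height=height)
    pred_times = timestamps[idx]
    gaps = []
    for t in true_times:
        if pred_times.size == 0:
            continue
        j = np.argmin(np.abs(pred_times - t))
        if abs(pred_times[j] - t) <= delta:        # a match within the tolerance delta
            gaps.append(pred_times[j] - t)
    tp = len(gaps)
    fp = max(pred_times.size - tp, 0)
    fn = len(true_times) - tp
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    mean_dt = float(np.mean(gaps)) if gaps else float("nan")
    std_dt = float(np.std(gaps)) if gaps else float("nan")
    return f1, mean_dt, std_dt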
This level of accuracy is perfectly acceptable for physical applications, especially taking into account the fact that typical planetary plasma instruments have a temporal resolution in the 1-10sec range. | http://arxiv.org/abs/2311.15654v2 | {
"authors": [
"Menouar Azib",
"Benjamin Renard",
"Philippe Garnier",
"Vincent Génot",
"Nicolas André"
],
"categories": [
"stat.ML",
"cs.LG"
],
"primary_category": "stat.ML",
"published": "20231127093356",
"title": "Event Detection in Time Series: Universal Deep Learning Approach"
} |
2D Feature Distillation for Weakly- and Semi-Supervised 3D Semantic SegmentationOzan Unal^1 Dengxin Dai^2 Lukas Hoyer^1 Yigit Baran Can^1 Luc Van Gool^1,3,4 ^1ETH Zurich, ^2Huawei Technologies, ^3KU Leuven, ^4INSAIT {ozan.unal, dai, lukas.hoyer, cany, vangool}@vision.ee.ethz.chJanuary 14, 2024 ======================================================================================================================================================================================================================== As 3D perception problems grow in popularity and the need for large-scale labeled datasets for LiDAR semantic segmentation increase, new methods arise that aim to reduce the necessity for dense annotations by employing weakly-supervised training. However these methods continue to show weak boundary estimation and high false negative rates for small objects and distant sparse regions. We argue that such weaknesses can be compensated by using RGB images which provide a denser representation of the scene. We propose an image-guidance network (IGNet) which builds upon the idea of distilling high level feature information from a domain adapted synthetically trained 2D semantic segmentation network. We further utilize a one-way contrastive learning scheme alongside a novel mixing strategy called FOVMix, to combat the horizontal field-of-view mismatch between the two sensors and enhance the effects of image guidance. IGNet achieves state-of-the-art results for weakly-supervised LiDAR semantic segmentation on ScribbleKITTI, boasting up to 98% relative performance to fully supervised training with only 8% labeled points, while introducing no additional annotation burden or computational/memory cost during inference. Furthermore, we show that our contributions also prove effective for semi-supervised training, where IGNet claims state-of-the-art results on both ScribbleKITTI and SemanticKITTI.§ INTRODUCTION With the ever growing interest in 3D scene understanding for autonomous vehicles, semantic segmentation for LiDAR point clouds has also risen in popularity. To accurately and robustly learn the dense prediction task of generating per point class labels, a high volume of data is not only valuable but required. However manually labeling outdoor LiDAR scenes for semantic segmentation is both time consuming and expensive for large scale datasets.There are two recently explored paths in the literature for reducing the labeling cost of outdoor LiDAR scenes: (i) by employing weak-supervision, where all frames have incomplete labels (e.g. by using line-scribbles <cit.>) and (ii) by employing semi-supervision, where a subset of frames are labeled and the rest remain completely unlabeled <cit.>. Commonly, LiDAR semantic segmentation models suffer from error prone boundary estimation between classes, as well as high false negative rates on both small objects and distant sparse regions. This is caused by the sparsity of LiDAR point clouds which severely reduces the number of points that fall on such regions to form an understandable and well separable geometry. As expected, these errors are further amplified when dealing with incomplete supervision, especially with scribble labels that completely forgo labeling boundaries. It can even be argued that such hard cases potentially need more representation within the dataset for correct and robust learning, something that clearly lacks under data-efficient settings.These errors are severely reduced when operating on a denser representation of a scene (see Fig. 
<ref> - top). Luckily, LiDAR sensors are commonly paired with cameras that are not only cheaper but also provide a dense signal in the form of an RGB image that allows better separable boundaries (especially with the aid of RGB color channels), as well as orders of magnitude more pixels than points on small objects and distant regions. It is for this reason that all autonomous vehicles are equipped with a high resolution camera facing the front of the car to provide a denser and more complete understanding of the critical ego-vehicle path.Our goal in this work is to leverage this high resolution image within our 3D pipeline to target the common weaknesses of LiDAR semantic segmentation models trained under incomplete supervision (weak labels). However we face two major challenges: (i) we need to retain our low annotation budget to have a scalable solution, therefore we cannot use additional annotated datasets or pretrained models in our setup; (ii) we need to tackle the issue of the horizontal field-of-view (FOV) mismatch between a LiDAR sensor and camera, where only a subset of points that fall onto the camera FOV have valid correspondence.To this extent, we propose the Image-Guidance network (IGNet) that comprises of two core modules: (M1) domain adaptive image feature distillation that allows us to keep our low annotation budget and (M2) one-way contrastive learning that combats the FOV mismatch by leveraging image features to supervise out-of-image points. Throughout this work, we strictly associate the 2D domain with RGB images and 3D with LiDAR point clouds.M1: Firstly, we train a 2D semantic segmentation model to generate per pixel high level features that better capture shape and context for sparse regions. By training on synthetic data, we avoid introducing any additional annotation requirements. We establish point-to-pixel correspondence between the LiDAR point cloud and the camera image (Fig. <ref> - bottom), and distill the information from the generated features onto a 3D network via an auxiliary loss.However, training on synthetic data yields yet another challenge: There exists a domain gap between synthetic images and real images that hinder performance in 2D. To further improve the quality of our image features, we propose using a domain adaptation (DA) pipeline to align our source domain onto the target. We further supervise the DA task via weak image labels generated by projecting the LiDAR labels onto the corresponding image.M2: Next, we tackle the issue of the horizontal FOV mismatch between the camera and the LiDAR sensor. As our image-guidance module requires valid point-pixel correspondences, the auxiliary supervision remains limited to points that fall onto the image. To extend the supervision to points outside of the image, we propose using a one-way contrastive loss guided by a teacher model, allowing points that fall within the image to guide points that fall outside.Here we observe that the number of pixel-to-outside-point-pairings remains limited as each LiDAR scan has a fixed associated image. This reduces the effect of the contrastive learning, especially since this single image alone often contains zero to a few object instances of each class. To combat this, we introduce a simple mixing strategy called FOVMix, where we cut and paste an image with its corresponding points from one scene onto another. 
With FOVMix, we are not only able to generate new pixel-point pairings to aid the contrastive learning but also increase the variability within each mini-batches.To summarize:* We propose using a synthetically trained 2D semantic segmentation model to guide the 3D network's feature space in order to improve boundary, distant region and sparse object segmentation.* We employ weakly-supervised domain adaptation to further align the 2D features with our dataset.* We extend the supervision from the image-guidance network to points out of the camera field-of-view via a one-way supervised contrastive loss.* We propose a new mixing strategy called FOVMix to introduce additional variety into the dataset along with additional point-pixel pairings to extract further performance from our contrastive loss.We achieve state-of-the-art results for weakly-supervised semantic segmentation on ScribbleKITTI <cit.>. We further show that IGNet can also be utilized for semi-supervised LiDAR segmentation to yield state-of-the-art results on both ScribbleKITTI and SemanticKITTI <cit.>.It should be noted that our proposed modules are only required during training, thus the performance boost comes without any additional computational or memory burden compared to the baseline 3D model during inference. Finally, as only synthetic data is required, we also do not introduce any additional annotation costs.§ RELATED WORKData Efficient LiDAR Semantic Segmentation: LiDAR semantic segmentation research has heavily focused on understanding how to best process the unordered data structure, with earlier focus on direct point based neural networks <cit.> having later shifted to sparse convolutional networks <cit.>. As architectures mature, we observe another developing area of interest: data efficiency within LiDAR semantic segmentation.As known, the dense prediction task requires a large-scale annotated dataset, which is especially difficult and expensive to obtain for LiDAR point clouds <cit.>. Recent work therefore investigate two paths that aim to reduce this associated labeling cost: (i) weakly-supervised learning, where every frame is partially labeled, and (ii) semi-supervised learning, where only a subset of frames are labeled and the remaining stay completely unlabeled. However such approaches always come at the cost of performance, as reducing the number of labels within a dataset reduces the supervision provided to the model. Current popular literary work that deal with incomplete labels aim to extend the supervision to unlabeled points by (i) self-supervised training <cit.> where a model is trained on self-generated pseudo-labels or (ii) relying on a guidance network to generate on the fly targets (e.g. mean teacher <cit.>).For self-supervised training, CBST <cit.> proposes to use class-wise thresholding for self-training to reduce confirmation bias. Extending CBST, DARS <cit.> proposes to re-distribute biased pseudo labels for semi-supervised training.For 3D in particular, ScribbleKITTI <cit.> provides the first realistic benchmark for weakly supervised LiDAR semantic segmentation by introducing the scribble-annotated dataset. 
In their work, to reduce the gap to fully supervised training, they propose the SSLSS pipeline where they utilize a mean teacher setup <cit.> to stretch the supervision to unlabeled points, and extend CBST with a range component to deal with the increased sparsity of LiDAR point clouds.For works on indoor point clouds, PSD <cit.> utilizes similar consistency checks to align clean and perturbed outputs of unlabeled points. WS3D <cit.> utilizes region-level boundary awareness and instance discrimination to improve indoor and outdoor 3D semantic segmentation with simulated weak labels. Furthermore for semi-supervised learning, DiAL <cit.> uses a simple MT setup, GPC <cit.> proposes using a pseudo-label guided point contrastive loss, SSPC <cit.> utilizes self-training and LaserMix <cit.> uses a mixing operation to bring supervision to unlabeled frames. CPS <cit.> utilizes a Siamese structure to induce cross supervision. Multi-Modality with LiDAR and Image: As mentioned, the additional information available in the corresponding RGB image does provide meaningful advantages that can improve LiDAR perception. Yet the task of incorporating this information within a robust pipeline is not trivial.Fusion has been studied for a number of LiDAR based 3D perception tasks in a supervised and weakly-supervised manner <cit.>. For LiDAR semantic segmentation PMF <cit.> and LIF-Seg <cit.> fuse the information from streams that process each modality individually to obtain higher information yielding features. However such approaches not only require image information during inference but also have linearly increasing memory and computation cost. 2DPASS <cit.> overcomes this by only using a one way information flow during training. Still, training the image stream on only LiDAR projected labels suffer heavily under incomplete annotations where it hinders performance instead of improving it. Sautier <cit.> proposes a more general approach of self-supervised pretraining through the alignment of pixel- and point regions that still remains susceptible to forgetting (at a reduced scale).Mix-Augmentation: Mixing operations have been very successful in increasing variability in the dataset and producing significant performance boosts for many tasks <cit.>. CutMix <cit.> mixes portions of the input and output of one sample image with another. MixMatch <cit.> applies the same mixing operation to labeled and unlabeled frames in a semi-supervised setting while generating labels via guessing and sharpening for unlabeled parts to provide supervision. Specifically for semi-supervised learning on LiDAR point clouds, LaserMix <cit.> aims to introduce variability through cylindrical and range-view partitioning and mixing.§ DATA EFFICIENT LIDAR SEGMENTATION Data efficient LiDAR semantic segmentation aims to reduce the labeling cost associated with the dense prediction task by employing (i) weak supervision, where all frames have incomplete labels (e.g. by using scribble annotations), or (ii) semi supervision, where some frames have labels and others remain unlabeled. In either setting, naively training a model on available labeled points results in a considerable performance drop as only a small subset of points provide supervision. 
Specifically, we observe an amplified error rate caused by (i) weak boundary estimation between classes and (ii) misclassification of small objects and distant sparse regions, as LiDAR's increased sparsity by range causes a severe reduction in the number of available points on an object to form an understandable geometry. §.§ A Baseline Approach: Mean Teacher As a first step in reducing the performance gap to fully supervised training we employ a generalized approach to utilize all points within the dataset. In specific, to extend the supervision to unlabeled points, following Unal <cit.>, we construct a mean teacher (MT) framework <cit.>, where a student network is trained using a supervised loss H (e.g. cross-entropy) and a teacher network is formed by the exponential moving average (EMA) of the student's weights θ (for time step t):θ^EMA_t = αθ^EMA_t-1 + (1-α) θ_tThe given update rule yields a teacher model that is a better and more robust predictor <cit.>. To exploit this behaviour, we apply a consistency loss between the teacher and the student to align its outputs to the more accurate predictions, e.g. by minimizing the Kullback-Leibler divergence to the softmax outputs. Formally, for all points x, the loss function can be redefined as:ℒ = H(ŷ, y) + 1_U(x)KL(ŷ||ŷ_EMA)with ŷ and ŷ_EMA denoting the predictions of the student and teacher models, y the ground truth labels and U denoting the set of points without ground truth labels. An illustration of the MT pipeline can be seen in Fig. <ref> - green.While a mean teacher framework does allow us to utilize the entire dataset within our training pipeline, due to the lack of direct supervision, similar to the student, the teacher's predictions remain uncertain and error prone for points that lie on class boundaries or for sparsely represented classes (e.g. volumetrically small objects or distant regions), especially when trained on weak scribble labels that completely forgo labeling any boundary points. §.§ Image Guidance via Feature Distillation To target these weaknesses we propose using image feature distillation from a trained 2D semantic segmentation model. But before we dive deep into the details, it is important to establish motivation.RGB images provide a much denser representation of a scene compared to LiDAR point clouds. This increased density along with the available color channels allow easier distinction of both class boundaries as well as small objects and distant regions. 2D semantic segmentation models can therefore learn better separable and richer features for such pixels. Following this observation, we propose introducing an image guidance (IG) network to exploit the mature features of a trained 2D semantic segmentation model.Firstly, we apply a forward pass to the camera image using a synthetically-trained semantic segmentation model to extract a high level feature representation (θ_IG:[0,255]^3 ↦ f_IG∈ℝ^d). It should be noted that we opt to use synthetic data to avoid introducing any additional annotation burden as the collection of new labeled samples can be easily automated. Using available intrinsic and extrinsic camera matrices K and [R | t] respectively, we project the 3D points cloud in homogeneous coordinates x_hom onto the rectified camera coordinates following x^T_rec = K [R | t]x^T_hom and extract point to pixel mappings m: x_rec↦ (k,l) with k = ⌊ x_rec^(0) /x_rec^(2)⌋ and ⌊ l = x_rec^(1) /x_rec^(2)⌋. 
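A minimal sketch of this projection and of the resulting point-to-pixel mapping is given below. The calibration matrices and image dimensions are placeholders for a specific sensor setup, and the positive-depth check is an assumption added for clarity rather than part of the formulation above.

import numpy as np

def project_points(points, K, R, t, img_h, img_w):
    # x_rec = K [R | t] x_hom for every LiDAR point, then (k, l) = (floor(x0/x2), floor(x1/x2)).
    hom = np.hstack([points, np.ones((points.shape[0], 1))])        # homogeneous coordinates, (N, 4)
    rec = (K @ np.hstack([R, t.reshape(3, 1)]) @ hom.T).T           # rectified camera coordinates, (N, 3)
    k = np.floor(rec[:, 0] / rec[:, 2]).astype(int)
    l = np.floor(rec[:, 1] / rec[:, 2]).astype(int)
    # Keep only points that project in front of the camera and inside the image bounds.
    valid = (rec[:, 2] > 0) & (k >= 0) & (k < img_w) & (l >= 0) & (l < img_h)
    return k, l, valid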
A point to pixel correspondence is considered valid if the pixel (k,l) falls within the image.We extend our 3D model with an auxiliary head that maps the final layer features to the image feature dimension d. During training, we introduce a new consistency term between the student and the IG teacher that is applied to all points that have a valid pixel correspondence. Formally, we restate the loss function to include image-guidance as:ℒ = H(ŷ, y) + 1_U(x)KL(ŷ||ŷ_EMA) + ℒ_IGwith ℒ_IG = 1_I(x, m(x))KL(sm(f) || sm(f_IG))with I denoting the set of points with valid pixel correspondence, sm denoting the softmax operation, f, f_IG∈R^N' × C denoting the feature representations of the 3D auxiliary head and IG decoders respectively.With the addition of the auxiliary loss, the 3D network aims to mimic the more mature representation of the 2D network for points with pixel correspondences. In other words, we introduce a new teacher model, where boundary points along with small and distant objects more richly defined due to the denser representation, to further and better guide the student on unlabeled points. An illustration of the proposed module can be seen in Fig. <ref> - red.It should be noted that the IG network is only required during training and can be completely removed for inference alongside the auxiliary head, causing no additional memory requirements or time costs to the overall 3D model. §.§ 2D Weakly-Supervised Domain Adaption As mentioned before, in order to train θ_IG for semantic segmentation, we resort to synthetic data. It has the desirable property that even dense annotations can be automatically generated so that no additional labeling cost is introduced. However, a model trained on synthetic source data (I_s,S_s), usually experiences a performance drop when applied to real-world target images I_t due to the domain gap.To tackle this, we propose employing a domain adaptation pipeline to improve the quality of the extracted features and better align with the data from our real-world training set. Following current literature <cit.>, we reestablish a mean teacher framework <cit.> and use the teacher model to generate pseudo labels P_t for the target domain images by freezing the unlabeled image predictions. We train the 2D network with a linear classification layer γ not only on the synthetic image-label pairings (I_s, S_s) but also on the target images with pseudo labels (I_t, P_t). Formally, the loss for the 2D model can be defined as:withℒ = ℒ_S +ℒ_DAwith ℒ_S = H(γ(θ_𝐼𝐺(I_s)), S_s) and ℒ_DA =H(γ(θ_𝐼𝐺(I_t)), P_t) Furthermore, in contrast to common unsupervised domain adaptation, we have access to LiDAR scribble annotations on the target domain. Even though these only provide sparse and possibly noisy supervision(due to projection errors), they can be an important anchor for the adaptation to the target domain. In order to incorporate this additional information into our pipeline, we augment the EMA teacher pseudo-label P_t with projected scribble labels P_t(m(x)) ← y.We then extend our domain adaptive loss ℒ_DA from Eq. <ref> to increase the importance of the projected labels P_t(m(x)) via a weight vector λ_p:ℒ_DA = λ_p H(γ(θ_𝐼𝐺(I_t)), P_t)with λ_p = λ_p for pixels with valid point mapping and 1 otherwise. An illustration of the proposed weakly-supervised domain adaptation pipeline can be seen in Fig. 
<ref> - blue.Finally, to form the image guidance model θ_IG, we copy and freeze the 2D student model (following unsupervised domain adaptation convention <cit.>) without the linear classifier and use its generated features to guide the 3D student model during training. §.§ Extending the Supervision Beyond the Image With image-guidance (Eq. <ref>) the information distillation from the mature 2D features to the 3D pipeline is limited by the availability of point-pixel correspondences. For many cases, we are limited to a front facing camera, so there exists a big mismatch between the horizontal FOV of the two sensors. Under such a setup, the set of all points with valid pixel correspondence (I) is much smaller than the set of all points without a valid correspondence (O = I ∩ P), i.e. |I|<|O|. In other words, the lack 360^∘ coverage for the camera means that points with pixel correspondence only make up a small portion of the LiDAR point cloud.To be able to guide points outside of the image using the 2D domain adapted features, we introduce an extension to the image-guidance loss with a one-way supervised contrastive loss (CL).Let I^(c)⊆ I and O^(c)⊆ O define two sets of points inside and outside of the image respectively with associated class c =ŷ_EMA, given by the teacher's prediction. Formally, we define the one-way supervised contrastive loss as:ℒ_CL = ∑_c ∑_o∈ O^(c) - log( 1/|O^(c)|∑_i ∈ I^(c)exp (f_o · f_IG,i / τ)/∑_i' ∈ Iexp (f_o · f_IG,i' / τ))with τ denoting the temperature. The total loss can then be formulated as:ℒ = H(ŷ, y) + 1_U(x)KL(ŷ||ŷ_EMA) + ℒ_IG + λℒ_CLwith λ denoting the scale hyperparameter.As illustrated in Fig. <ref>, the loss extension aims to apply a pull force to all points towards pixels of the same category while also applying a push to all points away from pixels of a different class. We therefore align the features of points outside of the image with the features of the 2D image-guidance network. §.§ FOVMix Finally, we introduce a new mixing operation called FOVMix. Given two data samples (x_A, y_A, I_A) and (x_B, y_B, I_B), the goal of FOVMix is to generate a new training sample (x̃, ỹ, Ĩ). Simply put, we take an image from sample A and replace it with the image of sample B. To accompany this, we further take all points that are within the image FOV of sample A, and paste them onto sample B while removing all points of B that were in the same region. An illustration of FOVMix can be seen in Fig. <ref>.Formally, we define the mixing operation as:x̃ = [𝐌_AA⊙ x_A, (1 - 𝐌_AA) ⊙ x_B]ỹ = [𝐌_AB⊙ y_A, (1 - 𝐌_AB) ⊙ y_B]Ĩ = I_A𝐌_AB, 𝐌_AA∈{0,1}^N denote the binary masks that yield the points within the image FOV given the intrinsic projection matrix A and extrinsic projection matrices A and B respectively, ⊙ and [,] denoting a dot product for masking and concatenation operations. Thus, FOVMix does not depend a specific sensor/setting, but only relies on the availability of point to pixel correspondences, which is expected for systems with both a LiDAR sensor and camera.FOVMix is a simple operation that accomplishes two feats: (i) it increases the effectiveness of the one-way contrastive loss by introducing additional pairings of points inside-outside of the image, (ii) it increases the richness of the data within each mini-batch. 
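A sketch of the FOVMix operation on the point clouds is given below. The in_fov predicate stands in for the binary masks M_AA and M_AB computed from the camera-A calibration, and the bookkeeping is simplified to the point/label part of the mix (the mixed sample simply reuses image A); the names are illustrative placeholders.

import numpy as np

def fovmix(points_a, labels_a, image_a, points_b, labels_b, in_fov):
    # Keep the points of scan A inside the camera FOV and the points of scan B outside it.
    mask_a = in_fov(points_a)                    # M_AA: FOV mask of scan A w.r.t. image A
    mask_b = in_fov(points_b)                    # M_AB: FOV mask of scan B w.r.t. image A
    mixed_points = np.concatenate([points_a[mask_a], points_b[~mask_b]], axis=0)
    mixed_labels = np.concatenate([labels_a[mask_a], labels_b[~mask_b]], axis=0)
    return mixed_points, mixed_labels, image_a   # the mixed sample (x~, y~, I~)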
While FOVMix introduces noise along the boundaries of the image FOV, similar to other mixing methods commonly used in dense vision tasks, the increased diversity and richness of each mini-batch make this a worthy trade-off against the introduced noise.§ EXPERIMENTS Implementation details: We use Cylinder3D <cit.> as a baseline 3D model. For the mean teacher, we follow convention and set the update hyperparameter α = 0.999 <cit.>. For the domain adaptive 2D pipeline we follow DAFormer <cit.>. We heuristically balance the losses by setting λ=0.001 and λ_p=10. For the semi-supervised setting, we restrict set A in FOVMix to labeled frames to ensure we have direct supervision in all samples and apply additional rotation augmentation before the FOVMix operations to increase variability.Datasets: We run our experiments on the ScribbleKITTI <cit.> dataset, which provides realistic weak labels for LiDAR semantic segmentation in the form of scribbles. ScribbleKITTI is built on SemanticKITTI <cit.>, the most popular large-scale outdoor-scene dataset for LiDAR semantic segmentation, and shares the same validation set. The weak labels only provide annotations for 8% of the point count and completely forgo class boundaries. Thus, compared to dense annotations, labeling times are reduced tenfold.For the 2D synthetic training, we use the GTA-V dataset, which contains 24966 synthetic images with pixel-level semantic annotation. The images are generated using a modded version of the open-world video game Grand Theft Auto 5. §.§ Results Weakly-Supervised LiDAR Segmentation: We report the performance of our image-guidance network (IGNet) trained with scribble-supervision in Tab. <ref>. As seen, IGNet outperforms the previous SOTA, showing improvements across the board for all classes and reaching 96.4% relative performance compared to fully supervised training while only using 8% labeled points. In particular, we observe large gains for small object categories such as bicycle and motorcycle when compared to the previous SOTA SSLSS <cit.>.It should be noted that, in contrast to SSLSS, IGNet does not require self-training. Therefore the training times are considerably reduced (from 5 days to 1 - including the 2D training - using 8 Nvidia RTX2080Ti's). Still, to further push performance, we can construct IGNet++. Here, we replace the Cylinder3D backbone of SSLSS with IGNet and therefore employ the same class-range-balanced self-training scheme on top of our image guidance to achieve 63% mIoU, i.e. 98% relative performance compared to fully supervised training.Semi-Supervised LiDAR Segmentation: We also show that IGNet can be used for all data-efficient LiDAR semantic segmentation settings. In particular, we report results for (i) semi-supervised training using SemanticKITTI <cit.> and (ii) semi- and weakly-supervised training on ScribbleKITTI <cit.>, where we carry out experiments in a semi-supervised setting while training with a weakly-supervised dataset. We follow Kong <cit.> and generate a semi-supervised dataset by uniformly sampling frames.As seen in Tab. <ref>, IGNet outperforms previous SOTAs by a considerable margin in almost all cases. Specifically, as expected, we see greater margins of improvement in the ScribbleKITTI semi-supervised benchmark, since the image-guidance can be more effectively utilized to learn boundary information despite the lack of any such labels.
We also report a direct comparison to the baseline Cylinder3D model where IGNet shows great absolute mIoU improvements of 4.1%-9.7% while introducing no additional memory or computational requirements during inference. §.§ Ablation Studies We conduct ablation studies on the ScribbleKITTI <cit.> dataset, where alongside the mIoU, we also report the relative performance of our model compared to the baseline Cylinder3D <cit.> trained on densely annotated labels.Effects of Network Components: We first investigate the effects our proposed components. Starting from a baseline model, we introduce each module one by one, reporting the mIoU and relative performances in Tab. <ref>. As seen each component provides a considerable performance gain over the baseline. Specifically we see a 2% gain when we introduce our domain adapted image-guidance network, and a further 0.2% when we introduce our contrastive loss/FOVMix individually. When utilizing both modules, we see that the constrastive loss can benefit from additional point pairings established via the FOVMix operation, which reflects in the gain of 0.8% (as opposed to 0.3%).Is Domain Adaptation Necessary? We further investigate the necessity of domain adaptation for our image-guidance network. Starting from a mean teacher framework, we compare the performance of our 3D model when guided by the DAFormer model <cit.> trained on (i) weak labels that we generate by projecting 3D scribbles onto the image, and (ii) the synthetically generated GTA-V dataset <cit.>, as well as the complete DAFormer pipeline (model + DA) with (iii) GTA-V → ScribbleKITTI, and (iv) GTA-V → ScribbleKITTI with additional projected weak supervision. The results are shown in Tab. <ref> which emphasize the importance of DA and the usefulness of the weak supervision.Where do the Improvements Come From? Our goal when using image features to guide our 3D model is to exploit the better representation capabilities of 2D semantic segmentation models trained on denser representations for (i) border points, where color channels can provide finer separation compared to noisy LiDAR measurements, (ii) small object and sparsely represented regions, where the pixel count remains considerably higher compared to the LiDAR point count. Finally, we conduct an ablation study to investigate if this behaviour can be observed in the model accuracy after introducing the 2D image-guidance module. In Tab. <ref>, we isolate the effects of our image guidance module by directly comparing to the mean teacher. Firstly, we show that the introduction of image-guidance does boost the border accuracy significantly (+3.5%). Here, we classify points to be on a border if any of its closes N=16 neighbors in 3D space do not share the same class. Second, we observe that IGNet obtains a considerably better performance (+6.6%) on small objects (pedestrians and two-wheelers) compared to the gain in larger objects (+1.5% for four-wheelers). Lastly, when comparing accuracy changes by range, sparsely represented distant regions beyond 25m of range show an improvement of +2.0% when compared to the MT baseline, while close regions only see marginal gains of +0.4%. Here we conclude that image-guidance can indeed compensate for the common weaknesses seen in LiDAR segmentation, especially under weak supervision.Apart from quantitative results, we also showcase examples from the valid-set illustrating this effect in Fig. <ref>. 
Here we show that IGNet can (top) finely determine object boundaries, (middle) better segment small objects (Cylinder3D and SSLSS misidentify some bicyclist points), and (bottom) improve recognition for sparsely represented regions (IGNet correctly segments all three sparse objects).§ CONCLUSION In this work we tackle common weaknesses of data-efficient LiDAR semantic segmentation by distilling high-level feature information from a synthetically trained 2D semantic segmentation network. We reduce the domain gap between synthetic and real data by employing weakly-supervised DA. We extend the supervision from image pixels to out-of-FOV points via a one-way contrastive loss and construct new pairings via FOVMix. With our proposed IGNet, we achieve better boundary estimation, increase performance in distant, sparse regions, and heavily improve small-class segmentation. We achieve SOTA results in both weakly- and semi-supervised 3D semantic segmentation.Limitations: Compared to the baseline Cylinder3D, IGNet requires roughly twice the training time due to its two-stage approach. Furthermore, the feature distillation module requires RGB images paired with the LiDAR scans. While all current LiDAR-equipped autonomous systems have an accompanying camera setup, our method still requires the sensors to be calibrated to obtain valid pairings.Acknowledgements: This work was funded by Toyota Motor Europe via the research project TRACE Zurich. | http://arxiv.org/abs/2311.15605v1 | {
"authors": [
"Ozan Unal",
"Dengxin Dai",
"Lukas Hoyer",
"Yigit Baran Can",
"Luc Van Gool"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231127075729",
"title": "2D Feature Distillation for Weakly- and Semi-Supervised 3D Semantic Segmentation"
} |
Stab-GKnock: Controlled variable selection for partially linear models using generalized knockoffsHan Su, Panxu Yuan, Qingyang Sun, Mengxi YiCorresponding author, Email: ., Gaorong LiSchool of Statistics, Beijing Normal University, Beijing 100875, P. R. ChinaJanuary 14, 2024 ============================================================================================================================================================================= The recently proposed fixed-X knockoff is a powerful variable selection procedure that controls the false discovery rate (FDR) in any finite-sample setting, yet its theoretical insights are difficult to show beyond Gaussian linear models. In this paper, we make the first attempt to extend the fixed-X knockoff to partially linear models by using generalized knockoff features, and propose a new stability generalized knockoff (Stab-GKnock) procedure by incorporating selection probability as feature importance score. We provide FDR control and power guarantee under some regularity conditions. In addition, we propose a two-stage method under high dimensionality by introducing a new joint feature screening procedure, with guaranteed sure screening property. Extensive simulation studies are conducted to evaluate the finite-sample performance of the proposed method.A real data example is also provided for illustration.Keywords: False discovery rate; Generalized knockoffs; Joint feature screening; Partially linear models; Selection probability; Power analysis§ INTRODUCTION Semiparametric regression models have been widely used to balance between modeling bias and “curse of dimensionality” for modeling complex data in many scientific fields, including information sciences, econometrics,biomedicine, social sciences, and so on. See the monographs <cit.>for more details. As the leading example of semiparametric models, partially linear models (PLM) <cit.> hold both the flexibility of nonparametric models and model interpretation of linear models.Specifically, PLM takes the formY= X^ Tβ+g(U)+ε,where Y∈ℝ is a response variable, X=(X_1,…,X_p)^∈ℝ^p is an explanatory covariate vector, β=(β_1,…,β_p)^ is a p-dimensional vector of unknown regression coefficients, U is an observed univariate variable, g(·) is an unknown smooth function,ε∼ N(0,σ^2) with 0<σ^2<∞, and independent of the associated covariates (X^,U).Variable selection for high-dimensional PLM has attracted extensive attention over the past two decades.When the dimension p of the linear part diverges slowly with the sample size n, <cit.>proposed SCAD-penalized estimators of the linear coefficients and established the consistency results.<cit.>proposed a doubly penalized procedure to identify significant linear and nonparametric additive components. Allowing p>n or even to grow exponentially with n, <cit.>proposed the profile forward regression (PFR) algorithm to perform feature screening for ultra-high-dimensional PLM.<cit.>proposed a new two-step procedure for estimation and variable selection. <cit.>proposed the projected estimation for massive data and established consistency results for the linear and nonparametric components. 
For more variable selection methods in PLM, please refer to<cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>.Yet, most existing methods in the literature mainly focus on how to select all significant variables, lacking adequate attention to the control of selection error rates such as false discovery rate (FDR).Loosely speaking, FDR is defined as the expectation of the proportion of false discoveries among all discoveries, which was first introduced in <cit.> and since then, has been a gold criterion in large-scale multiple testing. However, traditional FDR control methods,such as <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and among others,rely heavily on p-values as feature important measures. This limits the application to high-dimensional PLM analysis, partially due to non-negligible estimate bias introduced by the nonparametric component g(U) and the regularized terms, which makes p-values difficult to obtain <cit.>.More recently, <cit.>proposed an elegant fixed-X knockoff procedure under low-dimensional Gaussian linear models to achieve FDR control without resorting to p-values. The main point is to generate “fake” knockoff features that mimic the dependency structure of the original variables. The application of fixed-X knockoff has been investigated in many aspects. <cit.> examined the performance of fixed-X knockoff when p>n, and proposed a “screening+knockoff” two-stage procedure for high dimensional setting based on data splitting technique. <cit.> introduced generalized knockoff features for the structural change detection, and achieved FDR control under the dependent structure. See more details in <cit.>, <cit.>, <cit.>,<cit.>, <cit.>and references therein.However, one common feature of existing works based on fixed-X knockoff is that they actually focus on the linear regression setting. When extending beyond Gaussian linear models, the failure of the sign-flip property for knockoff statistics <cit.> renders the FDR control infeasible. Despite the fact that <cit.> and <cit.> have made some relevant pioneering discussions, they only focused on the structural change detection problem in sparse linear regressions.In addition, there is generally a lack of theoretical power analysis for knockoff-based selection in semiparametric models, except for <cit.> and <cit.>, who have addressed this issue from a simulation perspective or in nonparametric additive models. Nevertheless, both of them are introduced based on the model-X knockoff framework in <cit.>, which is a randomized procedure and counts heavily on the knockoff features generation mechanism. Whereas, it remains open for the fixed-X knockoff based selection for semiparametric models.In this paper, we re-study the selection probability statistics <cit.> and propose a novel stability generalized knockoffs (Stab-GKnock) procedure to study FDR control and power analysis for PLM (<ref>). 
To the best of our knowledge, this is the first attempt to extend the fixed-X knockoff beyond Gaussian linear models and study the problem of controlled variable selection for semiparametric partially linear models.We emphasize that this extension is not trivial since the presence of the nonparametric part g(U) makes the sign-flip property of knockoff statistics difficult to verify, which will be further discussed in Sections <ref> and <ref>.Three key components are summarized for our implementation:the construction of generalized knockoff features,the intersection subsampling-selection strategy,and the two-stage extension with a new joint feature screening method. The workflow and algorithms are presented in Figure <ref>, Algorithms <ref> and<ref>, respectively. Specifically, we first apply the projection technique to recover the active set and transform the original data.We then construct the generalized knockoff features based on the projected design matrix and establish two pairwise exchangeability properties with the dependent projected data.Noting that the traditional Lasso signed max (LSM) statistic does not perform well facing high correlation structure <cit.>,we innovate the idea of stability selection <cit.> based on an intersection subsampling-selection strategy, and incorporate selection probability difference (SPD) as generalized knockoff statistics, see Figures <ref> and <ref> in Section <ref>.We theoretically show the proposed Stab-GKnock procedure achieves the finite-sample FDR control and asymptotic power one. To extend the applications to high-dimensional PLM, we further propose a two-stage procedure using data splitting technique, which are commonly considered in knockoff-based literature, such as <cit.>, <cit.>, <cit.>, <cit.>and among others.In the first step, we use the first part of data to reduce the dimension to a suitable order by introducing a new joint screening method called sparsity constrained projected least squares (Sparse-PLS, SPLS) method.In contrast with the traditional marginal effect screening, Sparse-PLS naturally accounts for the joint effects among features and performs better in applications. In the second step, we apply Stab-GKnock to select variables on the screened variables set using the second part of data. Theoretical analysis shows the Sparse-PLS screening method enjoys the sure screening property.The theoretical guarantees in terms of both FDR and power are also established.The rest of this paper is organized as follows.We begin in Section <ref> with a brief review of the fixed-X knockoff framework, as well as a detailed description of the model setting and the projection technique.We propose the methodology of the Stab-GKnock procedure by constructing the generalized knockoff features and the refined SPD statistics in Section <ref>. 
The associated theoretical results in terms of both FDR and power are also established under some regularity conditions.Section <ref> proposes the Sparse-PLS screening method and shows its sure screening property under some mild conditions, and further studies a two-stage extension with FDR control for the high-dimensional setting.Section <ref> assesses the finite sample performance of the Stab-GKnock, Sparse-PLS, and SPLS-Stab-GKnock procedure with several simulation studies.A real data application is also provided in Section <ref>.We briefly summarize this paper in Section <ref>.Technique proofs of theories and additional simulation studies are provided in Supplementary material.§ PRELIMINARIES To avoid confusion, we first specify some notations to facilitate the presentation.The boldface roman B represents a matrix, and the boldface italics B represents a vector. For a subset 𝒜⊂{1,…,p}, we denote |𝒜| and 𝒜^c as its cardinality and complement set, respectively.For a n × p matrix B and a generic set 𝒜, we useB_𝒜={b_ij,i=1,…,n, j∈𝒜}∈ℝ^n×|𝒜| to represent the submatrix consisting of the column of B with indices in 𝒜, and B_𝒜={b_j,j ∈𝒜}∈ℝ^|𝒜| to represent the subvector of B corresponding to 𝒜.Similarly, for a subset I ∈{1,…,n}, we useB(I)={b_ij, i ∈ I, j=1,…,p}∈ℝ^|I|× p to represent the submatrix consisting of the row of B with indices in I.Denote λ_min(B) and λ_max( B) the smallest and largest eigenvalues of an arbitrary square matrix B, respectively. For two constants a and b, let a∨ b and a∧ b be the maximum and minimum between a and b. For a matrix ={b_ij}, let_1 = max_j(∑_i |b_ij|),_2 = √(λ_max(B^B)),_∞ = max_i(∑_j |b_ij|),_max = max_i,j(|b_ij|). For a vector B=(b_1,…,b_p)^, we denote B_∞=max_1≤ i≤ p |b_i|, and B_q=(∑_i |b_i|^q)^1/q the L_q-norm for q∈ (0,∞). Let ⌊·⌋ be the floor function. Let 1 (·) denote the indicator function, _n the n × n identity matrix, e_j=(0,…,0,1,0,…,0)^ the vector with the j-th component equals to 1 while the other components equal to 0. ≽ 0 denotesa positive semidefinite matrix, a_n ≍ b_n denotes sequences {a_n} and {b_n} have the same order of magnitude. Throughout the paper, we use c, C,… to denote constants that may vary from place to place.§.§ Review of fixed-X knockoffKnockoffs methods are a flexible class of reproducible multiple testing procedures with FDR control. The original fixed-X knockoff filter in <cit.>considers the classical Gaussian linear modelY=β+ε,ε∼ N(0,σ_ε^2 _n),where Y∈ℝ^n is the response,∈ℝ^n× p is the fixed design matrix, 0<σ_ε^2<∞ and ∈ℝ^p are unknown.The goal is to test the p hypotheses H_0 j: β_j = 0 against the two-sided alternative, for j=1,…,p. The valid knockoffs =( X_1,…, X_p) forcan be constructed, obeying that^=^, ^=^-{s}for some {s} satisfying 2-{s}≽ 0, where s=(s_1,…,s_p)^∈ℝ^p_+. When n ≥ 2p, an explicit representation can be computed by= [_p-(^)^-1{s}]+_0, where ∈ℝ^n× p is an orthonormal matrix orthogonal to the column space of , _0 is the Cholesky decomposition factor of the matrix 2{s}-{s} (^)^-1{s}, see the details in <cit.>.Onceis constructed, knockoffs framework then calculates the knockoff statistics W=(W_1,…,W_p)^ T based on ([,],Y) obeying the following two properties: (1) (The sufficient property). The knockoff statistics W only depend on the augment Gram matrix [,]^ T [,] and the feature-response inner products [,]^ TY. (2) (The antisymmetry property). 
Swapping the j-th column ofwith the associated knockoff counterpart, it only changes the sign of the knockoff statistic W_j, i.e., for any 𝒜⊂{1,…,p}, j=1,…,p,W_j([,]_ swap(𝒜), Y)= W_j([,], Y) · +1,if j ∉A,-1,if j ∈A,where swap(𝒜) is an operator that swaps _𝒜 and _𝒜.The type of knockoff statistic is not unique. Knockoff demands statistics possessing the sign-flip property <cit.>as follows(W_1,…,W_p) d= (ϵ_1 · W_1,…, ϵ_p · W_p),where ϵ_j=1 if β_j = 0, and ϵ_j = ± 1 with equal probability 1/2 if β_j≠ 0.The sign-flip property is key to obtain valid error control in the knockoff framework, we will further explain the details in Section <ref>. The sign-flip property is a consequence of following two exchangeability properties for ([,],Y). (3) (Pairwise exchangeability for the features). For any subset 𝒜⊂{1,…,p}, we have[,]^ T_ swap(𝒜) [,]_ swap(𝒜)= [,]^ T [,]. (4) (Pairwise exchangeability for the response). For any subset 𝒢⊂{j: β_j = 0}, we have[,]^ T_ swap(𝒢)Yd= [,]^ TY, where the property (<ref>) demands the i.i.d. structure for the response Y.After calculating knockoff statistics, the fixed-X knockoff filter rejects H_0 j if W_j ≥ T, where T is a data-dependent threshold. There are two ways to choose the threshold T, one is defined asT=min{t ∈𝒲 :|{j:W_j<-t}|/|{j:W_j>t}|∨ 1≤ q},(Knockoff)another is defined asT=min{t ∈𝒲 :1+|{j:W_j<-t}|/|{j:W_j>t}|∨ 1≤ q}, (Knockoff+)where 𝒲{|W_j|:|W_j|>0}. The threshold T chosen by Knockoff+ (<ref>) is slightly more conservative than the threshold chosen by Knockoff (<ref>), and satisfies FDR control in finite-sample setting. Intuitive extensions to p<n<2p have also been given in <cit.>.§.§ Recap: Projected spline estimation in PLMSuppose {(x_i^,U_i,Y_i),i=1,…,n} are observed samples of (X^,U,Y) from model (<ref>),then model (<ref>) can be re-expressed with matrix formY=β+g(U)+ε,whereY=(Y_1,…,Y_n)^∈ℝ^n is the response vector, =(x_1,…,x_n)^=(X_1,…,X_p) ∈ℝ^n× p is the fixed design matrix with x_i=(X_i1,…,X_ip)^ and X_j=(X_1j,…,X_nj)^, U=(U_1,…,U_n)^∈ℝ^n, g(U)=(g(U_1),…,g(U_n))^, and ε=(ε_1,…,ε_n)^ is the vector of model error. In this subsection, we will introduce the projected spline estimators ofand g(·) in the partially linear model (<ref>). Specifically, we use the polynomial splines to approximate the nonparametric part g(·) which satisfies certain smoothness. According to the splines’ approximation property <cit.>,the nonparametric function in model (<ref>) can be well approximated and parameterized as g(U) = (g(U_1),…,g(U_n))^≈_0, where _0 ∈ℝ^K is an unknown parametric vector, =(B(U_1),…,B(U_n))^∈ℝ^n × K is the known basis matrix. B(u)=(B_1(u), … ,B_K(u))^ is the B-spline basis with K=K^*+m, where K^* is the number of internal knots, and m is the order of polynomial splines. Thus, the problem of estimating g(·) becomes that of estimating _0.We consider the following penalized least squares objective function𝒬(,_0)=1/2Y--_0^2+ λ_1, where λ≥ 0 is a tuning parameter. We then adopt the projection technique to transfer (<ref>) to a Lasso-type problem <cit.>.For any given , a minimizerof 𝒬(,_0) isdefined as=(^)^-1^(Y-). Let _ Z=(^)^-1^ be the projection matrix of the column space of the basis matrix , _n-_ Z is also a symmetric idempotent matrix. For simplicity, let Y^*=Y, ^*=. By (<ref>), (<ref>) and some simple calculations, we can obtain the projected spline estimator ofas= min_β{1/2n(Y-)^2+ λ_1} =min_β{1/2nY^*-^*^2+ λ_1}.After obtaining , we can plug it back into (<ref>) to obtain =(^)^-1^(Y-), g =(^)^-1^(Y-)=_ Z(Y-). 
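For illustration, the projection step and the resulting Lasso-type problem can be sketched as follows. Here Z denotes the B-spline basis matrix evaluated at U, lam plays the role of the tuning parameter λ, and the names are illustrative placeholders rather than part of any released implementation.

import numpy as np
from sklearn.linear_model import Lasso

def projected_spline_lasso(X, Y, Z, lam):
    # Profile out the nonparametric part: P_Z projects onto col(Z), and (I_n - P_Z) yields X*, Y*.
    P_Z = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    Q = np.eye(len(Y)) - P_Z
    X_star, Y_star = Q @ X, Q @ Y
    # Lasso on the projected data: minimizes (1/2n)||Y* - X* beta||^2 + lam ||beta||_1.
    beta = Lasso(alpha=lam, fit_intercept=False).fit(X_star, Y_star).coef_
    # Plug beta back in to recover the spline estimate of g(U) at the design points.
    g_hat = P_Z @ (Y - X @ beta)
    return beta, g_hat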
The oracle inequalities and sign consistency of the projected spline estimatorhave been established in the literature <cit.>. Nonetheless, the finite-sample FDR control is of more interest for researchers, yet faces severe challenges.§.§ Problem setupIn this paper, we extend the knockoffs framework to the partially linear model (<ref>), aiming to select as many truly associated variables as possible while keeping FDR at a predetermined level.Denote [p]={1,…,p} the index set of the full model, and 𝒮={j:β_j≠0 } the active set, i.e., the index set of non-null features.𝒮^c=[p] ∖𝒮 is the unactive set. p_1|𝒮| and p_0|𝒮^c|= p-p_1 are the numbers of the relevant and null features. Let 𝒮 denote the discovered variable set by some variable selection procedures, FDR and power of a variable selection procedure are defined, respectively, as FDR(𝒮) = 𝔼[|𝒮∩𝒮^c|/|𝒮| ∨ 1], Power(𝒮) = 𝔼[|𝒮∩𝒮|/|𝒮|]. The idea of extending knockoff framework to PLM is intuitive, but not trivial when constructing the knockoff features. On account of the randomness of U, classical knockoff features (<ref>) based onwill lead the property (<ref>) fail.Therefore, we consider to construct knockoff features based on the transformed design matrix ^*. Note that ^*= is associated with the projection matrix , the elements of the transformed response Y^*=Y are no longer i.i.d. sinceY^* ∼ N(^*, σ^2).The above dependence structure violates the assumption imposed for the fixed-X knockoff in <cit.>,further makes the sign-flip property (<ref>) difficult to verify, hence calls new methodological and theoretical investigations.In Section <ref>, we will develop a novel Stab-GKnock procedure and establish the desired FDR and power guarantee for partially linear models.§ METHODOLOGY §.§ Extending the fixed-X knockoff to PLM: Stab-GKnock In this subsection, we extend the fixed-X knockoff to partially linear models using generalized knockoff features in <cit.>.To conduct FDR control and power analysis,we provide a nontrivial technical analysis to prove the pairwise exchangeability properties for the dependent transformed data. We also innovate the idea of stability selection in <cit.>based on an intersection subsampling-selection strategy, and incorporate the selection probability difference as generalized knockoff statistics. Hence, we call this procedure as Stab-GKnock, which can be concluded in following three steps.4mmStep 1: Construct generalized knockoff features. For simplicity, let ^*=(x^*_1,…,x^*_n)^=(X^*_1,…,X^*_p) ∈ℝ^n × p, Y^*=(Y^*_1,…,Y^*_n)^∈ℝ^n. Without loss of generality, we assume that the transformed design matrix ^* is standardized such that X^*_j_2^2=1, j=1,…,n. Denote ^*=^*^^* the Gram matrix of ^*. Then we construct the generalized knockoff features =(X_1,…,X_p) ∈ℝ^n × p based on ^* instead of original design , satisfing^=^*, ^^*=^*-{s},where s=(s_1,…,s_p)^∈ℝ^p_+. The generalized knockoff matrixmimics the dependency structure of the transformed design ^*, see Section <ref>. When n ≥ 2p, one can computeby= ^*[_p-(^*)^-1{s}]+.Here, {s} is a diagonal matrix obeying 2^*-{s}≽ 0,∈ℝ^n× p is an orthonormal matrix that is orthogonal to the column space of ^*, i.e., ^^*= 0,andis the Cholesky decomposition factor of the matrix 2{s}-{s} (^*)^-1{s}. The projection transformation does not affect the existence of generalized knockoff features . According to (<ref>),exists if and only if ^* is reversible.By the definition of the B-spline basis B(u) and the basis matrix ,is of full column rank K,where K is the dimension of B-spline basis. 
This implies that the projection matrix =_n-_ Z is of rank n-K ≥ p if K≤ p. Thus, ^*= is of full column rank p and ^* is invertible. In the proposed Stab-GKnock procedure, we make the first attempt to establish two pairwise exchangeability properties for PLM, shown in Theorems <ref> and <ref>. As we have emphasized in Section <ref>, they are essential for the sign-flip property (<ref>) of statistics, yet not trivial since the elements of the transformed response Y^*=Y are no longer i.i.d.For any subset 𝒜⊂{1,…,p}, we have[^*,]^ T_ swap(𝒜) [^*,]_ swap(𝒜)= [^*,]^ T [^*,]. For any subset 𝒢⊂𝒮^c, we have[^*,]^ T_ swap(𝒢)Y^* d= [^*,]^ TY^*. Theorem <ref> shows that the Gram matrix of [^*,] is invariant when we swap the j-th column of ^* andfor each j ∈𝒜.The proof of Theorem <ref> is referred to Supplementary material S.2. Theorem <ref> shows that the distribution of the inner product [^*,]^ TY^* is invariant when we swap the j-th column of ^* andfor each j ∈𝒢.According to (<ref>), we can obtain the swapped distribution as follows[^*,]^ T_ swap(𝒢)Y^* ∼ N([^*,]^ T_ swap(𝒢)^*,σ^2[^*,]^ T_ swap(𝒢) [^*,]_ swap(𝒢)).Under the Gaussian assumption, Theorem <ref> holds if and only if the expectation and covariance matrix of the swapped distribution are invariant. The expectation is invariant as β_j=0, j ∈𝒢.Moreover, an important lemma ensures the covariance of the swapped distribution invariant, that is, a projection ofis also a generalized knockoff of ^*, presented in Supplementary material S.3. For p < n < 2p , we can no longer compute the generalized knockoffsby (<ref>), as there is no subspace of dimension p orthogonal to ^*. Following <cit.>,we can create row-augmented data and further use the Stab-GKnock, as long as an accurate estimate of σ^2 can be obtained.4mmStep 2:Construct generalized knockoff statistics. Once have generalized knockoff features, we need to construct generalized knockoff statistic W=(W_1,…, W_p)^∈ℝ^p as the testing statistic, obeying the sufficient property and the antisymmetry property mentioned in Section <ref>. The type of statistic is not unique. A widely used choice is the LSM statistic, which is the point of the tuning parameter on Lasso regression path at which the feature first enters the model <cit.>. 
However, the knockoff methods based on the LSM statistic may suffer power loss in real applications.<cit.> pointed out this problem, but did not provide a specific solution.In this paper, we operate the subsampling strategy to enhance the selection stability, specifically using the projected spline estimator in Section <ref> to obtain the selection probability as the variable importance score, and then construct the associated SPD statistic.However, a main concern about subsampling is that it will increase the variability of the selection result and face a power loss compared to other methods based on full data, as shown in Figure <ref>.To remedy this issue, we adopt an intersection subsampling-selection strategy similar to <cit.>.Let I ⊂{1,…,n} denote the corresponding subsample indices with size ⌊ n/2 ⌋, we obtain two projected spline estimators of the augment regression coefficient vector =(^,0_p^)^∈ℝ^2p by (<ref>) using ([^*,],Y^*)(I) and ([^*,],Y^*)(I^c), respectively,b(I) = min_β∈ℝ^2p{1/|I|Y^*(I)-[^*,](I)^2+ 2λ_1}, b(I^c) = min_β∈ℝ^2p{1/|I^c|Y^*(I^c)-[^*,](I^c)^2+ 2λ_1},where λ>0 is a tuning parameter.Then we get two estimates of 𝒮 based on the subsample indices set I and I^c as𝒮^(1)(I) ={j ∈{1,…,2p}: b_j(I) ≠ 0}, 𝒮^(2)(I^c) ={j ∈{1,…,2p}: b_j(I^c) ≠ 0}.We further adopt a simple intersection strategy to obtain the selected set 𝒮(I) as follows𝒮(I)=𝒮^(1)(I) ∩𝒮^(2)(I^c).Thus, the probability of being in the selected set 𝒮(I) isΠ_j= ℙ(j ∈𝒮(I)),j ∈{1,…,2p},where ℙ is taken concerning the randomness of subsampling. Further, we can define the generalized knockoff statistic W_j for the transformed feature X^*_j as the SPD statisticW_j= Π_j - Π_j+p,j ∈{1,…,p}.As the selection probability Π_j is unknown, it can be estimated accurately by the empirical selection probability.Specifically, we repeat the above subsampling and projected spline estimation procedure L times, each time for two subsamples I_l and I_l^c for l=1,…, L, and record the selected set as 𝒮(I_l)=𝒮^(1)(I_l) ∩𝒮^(2)(I_l^c). Hence, we obtain the empirical selection probability for each variable X_j based on {𝒮(I_l)}_l=1^L, i.e.,Π_j = 1/L∑_l=1^L1(j ∈𝒮(I_l)),j ∈{1,…,2p}. Π_j and Π_j+p can be regarded as the importance scores for the transformed feature X^*_j and the generalized knockoff feature X_j. Noting that all generalized knockoff features are noises, a large positive W_j indicates X^*_j may be a true signal for Y^*, whereas a null X^*_j induces W_j is close to 0 and equally likely to be positive or negative <cit.>. The same can be said of the associated original feature X_j for the response Y. As pointed out in the literature, choosing L = 100 is sufficient to estimate the selection probability (<ref>) accurately by (<ref>), we illustrate this point in Sections <ref> and <ref>.Note that the proposed intersection selection strategy will help stabilize the selection result and boost the power.To illustrate this point, we consider a motivating simulation example.Generate each row of the design matrixindependently from N_p(0,), where _ij = 0.6 for i≠ j, _ij = 1 for i=j.The sample size and the dimension are (n,p)=(2000,1000). We randomly set p_1 entries of the true regression parameter β to be nonzero with p_1=10 and 30. These nonzero entries take values ± 1.2 randomly. We set the nonparametric smooth function g(U) = sin(2 π U), and the univariate { U_i } is i.i.d. 
drawn from the uniform distribution on [0,1].In addition, the dimension K of the B-spline basis is chosen by BIC criterion, and the tuning parameter λ in (<ref>) and (<ref>) is chosen by 10-fold cross-validation. Figure <ref> is the scatter plot of SPD statistic W_j based on the classical selection strategy (<ref>) in <cit.>, the union selection strategy, and our intersection selection strategy (<ref>). In all strategies, we can see that most W_j's (green square) of the true signals are significantly positive, yet the W_j's (black dot) of the nulls roughly symmetry about 0.Figures <ref>a and b show that, the union strategy and the classical strategy both cause W_j's of nulls to severely inflate, which makes it difficult to distinguish the true signals from the nulls and hence results in power loss. Conversely, Figures <ref>c1 and c2 depict that the intersection strategy sufficiently shrinks the statistics W_j's of the nulls, which helps us identify the true signals and chooses a better threshold T.Moreover, the proposed intersection strategy also performs well when the signals become more sparse, illustrated by Figure <ref>c2. In Section <ref>, we will further substantiate these points in Lemma <ref> and Figure <ref>. 4mmStep 3: Select data-dependent threshold.In our Stab-GKnock procedure, the final step is to choose a data-dependent threshold value T via (<ref>) or (<ref>) mentioned in Section <ref>. The active set 𝒮 is estimated by𝒮={j ∈{1,…,p}: W_j ≥ T}.The workflow of the proposed Stab-GKnock procedure is presented in Figure <ref>.§.§ Stab-GKnock algorithmIn what follows, the Stab-GKnock procedure is summarized in Algorithm <ref>. §.§ Theoretical results In this subsection, we build theoretical guarantees for the Stab-GKnock procedure in terms of FDR and power. We first give the following lemma to show the sign-flip property (<ref>) of the statistics (<ref>), which is also illustrated in Figure <ref>.Let {ϵ_1,…,ϵ_p} be a set of independent random variables, such that ϵ_j=1 if j∈𝒮, and ϵ_j = ± 1 with equal probability 1/2 if j ∈𝒮^c. Then,(W_1,…,W_p) d= (ϵ_1 · W_1,…, ϵ_p · W_p). Lemma <ref> is the key of our proposed Stab-GKnock framework, which gives an “overestimate” of FDP.To see this, for any t ≥ 0, we have|{ j:W_j≥ t, j ∈𝒮^c }|/|{j:W_j≥ t }| ∨ 1≈|{ j:W_j≤ -t, j ∈𝒮^c }|/|{j:W_j≥ t }| ∨ 1≤|{ j:W_j≤ -t }|/|{j:W_j≥ t }| ∨ 1 := FDP(t). Consider a similar simulation example in Section <ref>, the sampling distribution histogram of generalized knockoff statistic W_j is presented in Figure <ref>, where the green squares and black dots denote true signals and nulls, respectively.We can see that most W_j's of the true signals are significantly positive, yet the W_j's of the nulls roughly symmetry about 0. Owing to Lemma <ref>, we can select variables with W_j ≥ T, and conservatively estimate FDP using the left tail of the distribution.The proof of Lemma <ref> is presented in Supplementary Material S.4.The following theorem indicates that the Stab-GKnock procedure can control FDR at the nominal level q ∈ (0,1) for any finite sample size n. For any q ∈ (0,1), choose the threshold T > 0 by (<ref>). Then the Stab-GKnock procedure which retains the set𝒮={j ∈{1,…,p}: W_j ≥ T}controls the modified FDR defined asmFDR(𝒮) = 𝔼[|𝒮∩𝒮^c|/|𝒮| + 1/q] ≤ qfor any finite sample size n. If the threshold T > 0 is chosen by (<ref>), then the Stab-GKnock controls the usual FDRFDR(𝒮) = 𝔼[|𝒮∩𝒮^c|/|𝒮| ∨ 1] ≤ q.The proof of Theorem <ref> counts on the result of Lemma <ref> and the original proof in <cit.>. 
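The core of Steps 2 and 3 can be summarized in a short Python sketch. It is a simplified illustration rather than the authors' code: the function names are ours, the Lasso penalty is chosen by cross-validation as in the paper (up to the exact scaling of the penalty), and the threshold follows the Knockoff+ rule.

import numpy as np
from sklearn.linear_model import Lasso

def spd_statistics(XX, Y_star, lam, L=100, rng=None):
    """SPD statistics W_1,...,W_p from the intersection subsampling-selection strategy.

    XX is the augmented design [X*, X_knock] with 2p columns: the first p columns are the
    projected originals, the last p their generalized knockoffs.
    """
    rng = np.random.default_rng(rng)
    n, two_p = XX.shape
    p = two_p // 2
    hits = np.zeros(two_p)
    for _ in range(L):
        idx = rng.permutation(n)
        I, Ic = idx[: n // 2], idx[n // 2:]
        s1 = np.flatnonzero(Lasso(alpha=lam, fit_intercept=False)
                            .fit(XX[I], Y_star[I]).coef_)
        s2 = np.flatnonzero(Lasso(alpha=lam, fit_intercept=False)
                            .fit(XX[Ic], Y_star[Ic]).coef_)
        hits[np.intersect1d(s1, s2)] += 1        # intersection strategy
    Pi = hits / L                                 # empirical selection probabilities
    return Pi[:p] - Pi[p:]                        # W_j = Pi_j - Pi_{j+p}

def knockoff_threshold(W, q=0.1, plus=True):
    """Data-dependent threshold T (Knockoff+ when plus=True)."""
    ts = np.sort(np.abs(W[W != 0]))
    offset = 1.0 if plus else 0.0
    for t in ts:
        fdp_hat = (offset + np.sum(W <= -t)) / max(np.sum(W >= t), 1)
        if fdp_hat <= q:
            return t
    return np.inf                                 # nothing can be selected

# Final Stab-GKnock selection:
# W = spd_statistics(XX, Y_star, lam)
# S_hat = np.flatnonzero(W >= knockoff_threshold(W, q=0.1))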
Next, we are inquisitive about the other side of the coin, that is, the power guarantee of our proposed Stab-GKnock procedure. In order to establish the theoretical results, we need some basic regularity conditions.[Minimal signal condition]There exists some slowly diverging sequence κ_n →∞, such thatmin_j∈𝒮 |_j| ≥κ_n √(log(2p)/n), as n →∞. [Minimal eigenvalue condition] There exist a constant C_1 > 0, such thatλ_min(1/n[𝐗^*_𝒮, 𝐗_𝒮]^T[𝐗^*_𝒮, 𝐗_𝒮]) ≥ C_1. [Mutual incoherence condition]There exists a constant γ_ I∈ (0,1], which may depend on the subsampling index I, such that max_j ∈𝒮^c[𝐗^*(I), 𝐗(I)]_j^T[𝐗^*_𝒮(I), 𝐗_𝒮(I)]([𝐗^*_𝒮(I), 𝐗_𝒮(I)]^T[𝐗^*_𝒮(I), 𝐗_𝒮(I)])^-1_2≤ 1-γ_ I . Conditions <ref>–<ref> are crucial for establishing the asymptotic power result for Stab-GKnock. Condition <ref> is a signal strength condition, which ensures that the projected spline estimator (<ref>) does not miss too many true signals. Condition <ref> is easily satisfied in high-dimensional regression, see <cit.>.Condition <ref> is known as the minimal eigenvalue condition, which states that the Gram matrix of the active set on the augment design matrix is invertible. Condition <ref> is known as the mutual incoherence condition, which indicates that the correlation between the true signals and nulls should not be too strong. Conditions <ref> and <ref> are common technique conditions for Lasso regression <cit.>,which further ensure the variable selection consistency for the projected spline estimator (<ref>). As the original design matrixand the projection matrixare observable, we impose Conditions <ref>–<ref> on the transformed design matrix ^*= instead of . Similar treatments can be found in <cit.> and <cit.>.Let λ≍√(log (2p)/n), and suppose that regularity conditions <ref>–<ref> hold. Then, the selected set 𝒮 obtained by Stab-GKnock satisfiesPower(𝒮) = 𝔼[|𝒮∩𝒮|/|𝒮| ∨ 1] → 1, as n →∞.Theorem <ref> indicates that the Stab-GKnock attains an asymptotic full power, that is, it can identify all important features asymptotically as n →∞. The proof of Theorem <ref> is presented in Supplementary Material S.6, which is enlighted by <cit.>.Essentially, the Stab-GKnock procedure relies on the selection results of the projected spline estimator (<ref>). Thus, we need Lasso-type regression (<ref>) to achieve the variable selection consistency, i.e., it can identify all true signal variables. See S.1 in Supplementary Material for more details.§ HIGH-DIMENSIONAL SETTING: SPLS-STAB-GKNOCKThe Stab-GKnock procedure in Section <ref> demands n>2p and is not applicable for high-dimensional setting. In this section, we propose a two-stage procedure based on data-splitting technique.Specifically, we split the full data into two parts.In the first screening step, we implement a newly proposed joint screening method, called Sparse-PLS, to reduce the dimension p to a suitable dimension p_1 using the first part data of size n_1.In the second selection step, we further apply Stab-GKnock to select the variables on the screened variables set using the second part data of size n_2, where n=n_1+n_2. We first introduce the Sparse-PLS screening method in Section <ref>, and then summarize the two-stage algorithm in Section <ref>.§.§ Joint screening procedure in PLM: Sparse-PLSNote that our two-stage procedure for high-dimensional setting is a natural extension of the low-dimensional Stab-GKnock procedure as long as the screening step correctly captures all relevant features.Hence, we desire the sure screening property in <cit.>to be attained. 
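Before turning to the screening method itself, the selection-quality notions used in the theory above and in the later simulations (empirical FDP, power, and the sure-screening event) are straightforward to compute for a given replication; the small helper below is an illustrative sketch with names of our own choosing, and averaging its outputs over replications yields the reported FDR, Power, and SSR.

import numpy as np

def selection_metrics(S_hat, S_true):
    """Empirical FDP, power, and sure-screening indicator for one replication."""
    S_hat, S_true = set(S_hat), set(S_true)
    fdp = len(S_hat - S_true) / max(len(S_hat), 1)     # |S_hat ∩ S^c| / (|S_hat| ∨ 1)
    power = len(S_hat & S_true) / max(len(S_true), 1)  # |S_hat ∩ S| / |S|
    sure = S_true.issubset(S_hat)                      # event {S ⊂ S_hat}
    return fdp, power, sure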
Most existing screening methods rely on the marginal effects of features on the response, such as SIS <cit.> and RRCS <cit.>.There are two main concerns about marginal screening methods. First, despite screening based on marginal effects having computational efficiency, they are often unreliable in practice since they ignore the joint effects of candidate features. Second, the feature with significant joint effect but weak marginal effect is likely to be wrongly left out by marginal screening methods. We illustrate these points in Sections <ref> and <ref>.Here, we propose a joint screening strategy for high-dimensional PLM via the sparsity-constrained projected least squares estimation (Sparse-PLS, SPLS). Considering the high-dimensional partially linear model (<ref>) in Section <ref>, we obtain the following projected least squares objective function ofby splines’ approximation and projection techniqueℒ()= 1/2nY--^2 =1/2n(Y-)^2 =1/2nY^*-^*^2.Supposeis sparse, i.e., the true model size p_1 ≤ k for some user-specified sparsity 0<k<p. The proposed Sparse-PLS estimator can be defined as(k)= min_β∈ℝ^pℒ(), s.t. _0≤ k. Then the screened set of Sparse-PLS is obtained𝒮_1= {j ∈{1,…,p}: β_j(k) ≠ 0}. The sparsity constraint _0≤ k in (<ref>) specifies the number of features allowed in the model, that is, the Sparse-PLS procedure just estimates some of the coefficients while presets the others to 0, which makes Sparse-PLS suitable for feature screening.Note that estimation (<ref>) is carried out on the full model, (k) can be viewed as a screener which naturally accounts for the joint effects among features and hence goes beyond marginal utilities.Essentially, the Sparse-PLS proposes a sparsity-constraint estimator by adopting a ℒ_0-regularization technique, which has similarities with the SMLE in <cit.> and the constrained Danzig selector (CDS) in <cit.>. On the other side of the coin, the proposed Sparse-PLS procedure can also be viewed as a high-dimensional best subset selection with subset size k <cit.>,thereby the cardinality constraint makes problem (<ref>) become an NP-hard problem <cit.>.In this article, we solve the screener (k) and implement Sparse-PLS efficiently by using a modern optimization method, specifically mixed integer optimization (MIO). It can obtain a near-optimal solution efficiently for the nonconvex optimization problem (<ref>), theoretically shown in <cit.>.Finally, we prove the Sparse-PLS computed by MIO enjoys the sure screening property under some regularity conditions.We start with some additional regularity conditions. [NP-dimensionality condition]Let log(p)=O(n^κ) for some 0 ≤κ <1. [Minimal signal condition]There exist some nonnegative constants ω_1, ω_2, τ_1 and τ_2 such that min_j ∈𝒮 |β_j| ≥ω_1 n^-τ_1, and p_1 ≤ k ≤ω_2 n^-τ_2.[UUP condition]There exist constants c_1>0 and δ_1>0 such that for sufficiently large n, λ_min(n^-1^*_𝒜^^*_𝒜) ≥ c_1for 𝒜∈^2k_+ and _𝒜∈{_𝒜:_𝒜-^*_𝒜≤δ_1 },where ^k_+{𝒜: 𝒮⊂𝒜; |𝒜|≤ k} and ^k_-{𝒜: 𝒮⊄𝒜; |𝒜| ≤ k} denote the collections of the over-fitted models and the under-fitted models, respectively. 
[Dependence condition] There exist constants c_2, c_3>0 such that |X^*_ij|≤ c_2 and max_1 ⩽ j ⩽ pmax_1 ⩽ i ⩽ n{X^*_ij^2/∑_i=1^nX^*_ij^2σ^*_i^2}≤ c_3· n^-1,when n is sufficiently large, whereσ^*_i^2=(Y_i^*).Condition <ref> imposes an assumption that p can diverge up to an exponential rate with n, which means the dimension p can be greatly larger than the sample size n.Condition <ref> permits the coefficients of true signal variables to degenerate slowly as n diverges, which is widely used in the literature of screening methods <cit.>It also places a weak restriction on the sparsity k to make sure screening possible. Condition <ref> restricts the pairwise correlations between the columns of ^* in consideration, which is equivalent to the UUP condition given in <cit.>.This condition is mild and commonly used in high-dimensional methods, like DS <cit.>, SIS-DS <cit.>, FR <cit.>, SMLE <cit.>, CDS <cit.>, GFR <cit.> and CKF <cit.>.Condition <ref> is also a restriction on the transformed design matrix ^*, which holds naturally so long as σ^*_i^2 does not degenerate too fast, noted by <cit.>. Under Conditions <ref>–<ref>, the following Theorem <ref> states the sure screening property. Assume regularity conditions <ref>–<ref> hold with τ_1+τ_2<(1-κ)/2, and 𝒮_1 is the MIO computed screened set of size k from the Sparse-PLS procedure, we haveℙ(𝒮⊂𝒮_1) → 1, as n →∞.Theorem <ref> ensures that the subset selected by Sparse-PLS would not miss any true signal variable with probability tending to one. The proof of Theorem <ref> is given in Supplementary Material S.7. The sparsity k controls the threshold between signal and null features, thus the choice of k is a key point in screening procedures. Standard hard-threshold choices often set k=c⌊ n/log(n) ⌋ for some c>0 <cit.>, it can also be selected by a data-driven strategy in <cit.>. In simulation studies, we find the proposed Sparse-PLS has robust performance for a wide choice of k compared with the marginal methods, shown in Table <ref>.§.§ High-dimensional SPLS-Stab-GKnock algorithm In this subsection, the two-stage SPLS-Stab-GKnock procedure is extended based on data-splitting technique, which is briefly introduced as follows * Randomly split the data into two groups(^(1),U^(1),Y^(1)) ∈ℝ^n_1× p×ℝ^n_1×ℝ^n_1 and(^(2),U^(2),Y^(2)) ∈ℝ^n_2× p×ℝ^n_2×ℝ^n_2, where n_1+n_2=n.* Screening step: Apply Sparse-PLS on (^(1),U^(1),Y^(1)) and obtain the screened set 𝒮_1 which reduces the dimension p to a suitable dimension p_1<n_2/2.* Selection step: Apply Stab-GKnock to further select variables on the screened variables set 𝒮_1 using (^(2),U^(2),Y^(2)). We summarize the SPLS-Stab-GKnock procedure in Algorithm <ref>.Along with Condition <ref>, we also require k ≤ n_2/2 in the screening step, which is not only necessary for establishing the sure screening property but ensures a suitable dimension p_1 (k) _0≤ n_2/2 for the subsequent Stab-GKnock selection step.For high-dimensional settings, the two-stage procedure using data-splitting technique is commonly considered in knockoff-based literature <cit.>. There are two main concerns about these methods.On the one hand, the screening accuracy determines the performance of the subsequent selection. 
Despite enjoying the sure screening property asymptotically, the marginal screening method used in the first step causes the two-stage procedure to suffer power loss in real applications, mentioned in <cit.> and <cit.>.On the other hand, data splitting increases the variability of the selection result and may cause the loss of statistical power in practice. For future research, we suggest that an in-depth analysis of the data-splitting strategy could potentially enhance the performance even further, such as the unequal subsample size strategy or overlapping splitting strategy. In addition, the “data recycling” idea proposed in <cit.> can be borrowed to improve the power. §.§ Theoretical results In this subsection, we establish theoretical guarantees for the SPLS-Stab-GKnock procedure in terms of FDR and power. Denote ℰ={𝒮⊂𝒮_1} the event that the screening step possesses the sure screening property. Under regularity conditions <ref>–<ref>, for any q ∈ (0,1), choose the threshold T > 0 by (<ref>), the SPLS-Stab-GKnock procedure satisfies FDR(𝒮) = 𝔼[ |𝒮∩𝒮^c|/|𝒮| ∨ 1] ≤ qwith the probability tending to one as n →∞. Furthermore, conditioning on the sure screening event ℰ={𝒮⊂𝒮_1} and select the threshold T by (<ref>), then we obtain a finite-sample FDR control guaranteeFDR(𝒮) = 𝔼[|𝒮∩𝒮^c|/|𝒮| ∨ 1ℰ] ≤ q. Theorem <ref> states that conditioning on event ℰ, the SPLS-Stab-GKnock procedure achieves FDR control at the nominal level q ∈ (0,1). Under regularity conditions <ref>–<ref> and conditions in Theorem <ref>. Then, conditional on the sure screening event ℰ={𝒮⊂𝒮_1} and select the threshold T by (<ref>), the SPLS-Stab-GKnock procedure satisfiesPower(𝒮) = 𝔼[ |𝒮∩𝒮|/|𝒮| ∨ 1ℰ] → 1, as n →∞. Theorem <ref> indicates that conditioning on event ℰ, the SPLS-Stab-GKnock procedure attains an asymptotic full power as n diverges to infinity. § SIMULATION STUDIES In this section, we conduct numerical simulations to evaluate the finite-sample performance of the proposed Stab-GKnock, Sparse-PLS screening and SPLS-Stab-GKnock procedure. In Section <ref>, we consider the finite sample performance of Stab-GKnock for n ≥ 2p case. In Section <ref>, we evaluate the screening performance of SPLS. In Section <ref>, we assess the performance of SPLS-Stab-GKnock for p>n case.In all cases, we set L = 100, and the tuning parameter λ is selected by cross-validation.§.§ Low-dimensional performanceIn this subsection, we evaluate the empirical performance of Stab-GKnock in low-dimensional cases. Specifically, we consider the partially linear model (<ref>) with n ≥ 2p. We draw each row of the design matrixindependently from * Case 1: A centered multivariate Gaussian distribution N_p(0,), where =(ρ^|i-j|)_1 ≤ i,j ≤ p for correlations ρ=0.2 and 0.5; * Case 2: A centered multivariate t distribution t_p,3(0,) with degrees of freedom 3, where =(ρ^|i-j|)_1 ≤ i,j ≤ p for correlations ρ=0.2 and 0.5. We randomly set p_1 entries of the true regression parameter β to be nonzero. These nonzero entries take values ± A randomly with A=0.2,0.4,0.6,0.8 and 1.0.We set the nonparametric smooth function g(U) = sin(2 π U), and the univariate { U_i} is i.i.d. drawn from the uniform distribution on [0,1].We set the spline order m=3, and set the number of internal knots K^*=n^1/9, which is the theoretically optimal order.The errors {ϵ_i } are i.i.d. 
copies from N(0,1).It is worth noting that the Stab-GKnock procedure demonstrates robust performance for a wide choice of error variance σ^2.Relevant simulations are not pursued here due to space limitations.We set the desired FDR level as q = 0.1, and choose the tuning parameter λ in (<ref>) by 10-fold cross-validation using R package . We compare the following five methods for different settings based on 200 replications. * Stab-GKnock: The proposed procedure in this paper implemented via Algorithm <ref> with the threshold selected by (<ref>). * Stab-GKnock+: The proposed procedure in this paper implemented via Algorithm <ref> with the threshold selected by (<ref>).* B-H: The BH procedure applied to p-values from univariate regression in <cit.>, which is implemented using function “univglms” in R package . * Knock-LSM+: The fixed-X knockoff procedure with LSM knockoff statistic in <cit.>, which is implemented using function “create.fixed” in R package . We generate the knockoff features withequicorrelated construction to choose s. * m-Knock+: The model-X knockoff procedure with Lasso coefficient difference (LCD) statistic in <cit.>, which is implemented using function “create.guassian” in R package . We use the second-order approximation in <cit.> to generate the knockoff features with approximate semidefinite program construction to choose s. Note that the B-H procedure and the fixed-X knockoff procedure are proposed for linear models, we first apply the projection technique to convert model (<ref>) to a linear model, and then apply two procedures on the transformed model. Figures <ref> to <ref> show the simulation results for n ≥ 2p cases. We can observe that the proposed method demonstrates favorable results in most settings, and presents significant improvement in power compared to other procedures. More specifically, we have the following findings.(1) Figure <ref> reports the low-dimensional simulation results whenis Gaussian design. We find that the power of all methods rises with increasing signal strengths A, yet the Stab-GKnock procedures always yield basically the highest power. Meanwhile, the proposed methods are not sensitive when the correlation ρ increases from 0.2 to 0.5, while the powers of Knock-LSM+ and m-Knock+ slightly decline. In addition, although B-H method maintains high power as ρ increases, it can not control FDR at target level q=0.1 anymore.(2) Figure <ref> examines whether our proposed methods still work whenis non-Gaussian design. We find that all the methods successfully control FDR except for the B-H procedure, and the powers of our proposed Stab-GKnock methods still tend to one. On the contrary, the power of Knock-LSM+ exhibits a noticeable decrease compared to the Gaussian scenario above, as it is tailored for Gaussian design. The model-X knockoff method is relatively robust for the design matrix.(3) Figure <ref> examines whether our proposed methods perform well when the signals become more sparse. We find that our proposed Stab-GKnock methods still have the highest power as the signal sparsity p_1 / p descends. We attribute it to the intersection strategy of our SPD statistics W_j defined in (<ref>), which is visually illustrated in Figure <ref>. In summary, FDR is well controlled for our proposed methods,even when the design matrix is not normally distributed or the signals are sparse. 
Apart from its robustness, Stab-GKnock also holds remarkably higher power compared to the main competitors across different scenarios.§.§ Screening performanceIn this subsection, we assess the performance of the proposed Sparse-PLS procedure for high-dimensional screening. Specifically, we set p=700, n=200 and A=0.6 with design matrixgenerated by Case 1 or Case 2. We measure the screening performance by calculating: (1) FDR, the empirical average false discovery rate after screening; (2) PRR, the averaged proportion of signals that are retained after screening; (3) SSR, the proportion of times that all signals are retained after screening; (4) MMS, the minimum model size to include all signals. We compare the following five methods under the same setup based on 200 replications. * SPLS: The proposed procedure in this paper which selects the screened set by (<ref>). * SIS: The sure independence screening procedure based on Pearson's ρ correlation coefficient in <cit.>. * RRCS: The robust rank correlation screening procedure based on Kendall's τ correlation coefficient in <cit.>. * PFR: The profiled forward regression algorithm in <cit.>. * SPLasso: The sequential profile Lasso method in <cit.>. We apply SIS and RRCS procedures after employing our spline approximation and projection technique in Section <ref>, and the profiled techniques used in PFR and SPLasso are also replaced by spline approximation. Moreover, SMLE in <cit.> is a likelihood-based method that is not suitable for partially linear models, and CDS in <cit.> focuses on the convergence rates under weak signal strength assumption, not feature screening, hence we do not compare with these methods considering joint effects mentioned above.Tables <ref> and<ref> summarize the simulation results for screening. We can find that the proposed Sparse-PLS procedure achieves a promising screening accuracy compared to the marginal methods. It can identify the majority of signals after screening to a desirable model size. More specifically, we have the following findings.(1) Table <ref> reports the screening accuracy with fixed model size k = 40 ≈ n/log(n). Noting that a smaller FDR with a larger PRR and SSR suggests a more accurate screening method, we find that the proposed SPLS method performs remarkably well as p_1 varies. Its highest SSR guarantees a high power of the subsequent selection analysis, which is in line with its theoretical property in Theorem <ref>. In contrast, the other SIS-based marginal methods are likely to be affected by the correlation among features. Unlike SPLS, which can jointly assess the significance of covariates, they tend to leave out some relevant features.(2) Table <ref> shows 5%, 25%, 50%, 75%, and 95% quantiles of the minimum model size to include all signals whenis non-Gaussian design, where a smaller quantile indicates a more effective screening approach. We find that the proposed SPLS method can detect the signals with the smallest model size. It allows us to sharply decrease the number of features in the first stage of screening. 
In contrast, SIS and RRCS demand a larger model to cover all signals, which illustrates that the feature with significant joint effect but weak marginal effect is likely to be wrongly left out by these marginal screening methods.By introducing Kendall's τ correlation, RRCS achieves better performance in non-Gaussian design but still does not screen out the majority of nulls.Moreover, PFR performs quite well since its strategy helps to incorporate some feature joint effects in the screening process compared to SIS, yet faces a high computational cost and is still inferior to SPLS.To summarize, the numerical results are in line with the sure screening property of the SPLS procedure, which allows a high power in the second stage of Algorithm <ref> for subsequent selection. It is worth noting that the screening size k is set as the hard threshold in this paper. To make the k statistically interpretable, treating it as a tuning parameter within the model can also be an interesting topic for future research.§.§ High-dimensional performanceIn this subsection, we conduct the two-stage SPLS-Stab-GKnock procedure for high-dimensional cases. Specifically, we set (n,p,p_1)=(500,1500,20) for the partially linear model (<ref>) with design matrixgenerated by Case 1 or Case 2. As illustrated in Algorithm <ref>, we randomly divide the full data into two parts and use n_1=250 samples to conduct the SPLS procedure for the screening step. After reducing to k=100 variables, we use the remaining samples to compare the Stab-GKnock procedure with Knock-LSM+, B-H and m-Knock+ for the selection step. All the following results are based on 200 replications, and the target FDR level is also set as q=0.1. Figures <ref> and <ref> show the simulation results for p>n cases. Our two-stage SPLS-Stab-GKnock procedure performs well in terms of FDR and power. Specifically, we have the following findings.(1) Figure <ref> reports the finite-sample simulation results whenis Gaussian design. We find that the SPLS-Stab-GKnock procedure successfully controls FDR. Its power tends to one as the signal strength A increases, which also confirms that SPLS will not lose many signals in the first screening stage. In comparison, SPLS-Knock-LSM+ and SPLS-B-H methods are sensitive when increasing the correlation from 0.2 to 0.5, and SPLS-m-Knock+ results in a significant power loss.(2) Figure <ref> shows the finite-sample performances whenis non-Gaussian design. We find that the proposed SPLS-Stab-GKnock still enjoys the highest power while controlling FDR at the target value. On the contrary, the SPLS-B-H method fails to control FDR in non-Gaussian scenarios. The SPLS-Knock-LSM+ and SPLS-m-Knock+ are too conservative to achieve a desirable high power.In a nutshell, our two-stage procedure can perfectly handle high-dimensional cases with regard to both FDR and power.§ REAL DATA ANALYSIS In this section, we illustrate the effectiveness of our proposed methods by an application to a breast cancer dataset, which has been analyzed in <cit.> and <cit.>.As reported in <cit.>,breast cancer is the most common cancer diagnosis in women across 140 countries and is the most frequent cause of cancer mortality in 101 countries. <cit.>collected a breast cancer dataset from 97 lymph node-negative breast cancer patients under 55 years old. 
This dataset contains 97 rows and 24481 columns, each row contains 24481 gene expression levels and 7 clinical risk factors including age, tumour size, histological grade, angioinvasion, lymphocytic infiltration, estrogen receptor (ER) and progesterone receptor status for 97 patients. By removing the missing genes, we can obtain 24188 gene expressions. In this section, ER is regarded as the response supported by the study in <cit.>,and the patient's age is regarded as the univariate for nonparametric component. Both the response and covariates have been standardized with mean zero and variance one.The goal is to identify genes that are related to the ER of breast cancer patients. We consider the following high-dimensional partially linear modelY_i=X_i^ + g(U_i) + ε_i,i=1,…,97,where Y_i is the ER of the breast cancer patient, X_i is the p-dimensional covariates vector consisted of 24188 genes expressions, =(β_1,…,β_p)^ is a p-dimensional vector of unknown regression coefficients, U_i is the patient's age.To deal with this problem, we perform our proposed SPLS-Stab-GKnock in two stages as illustrated in Algorithm <ref>. Specifically, we randomly select n_1=50 samples to conduct the SPLS procedure for model (<ref>) and obtain k=23 candidate genes in the screening step. Then we use the remaining 47 samples to apply the Stab-GKnock procedure for final selection with target FDR level q = 0.2.We compare our proposed method with Lasso, SPLS-B-H, SPLS-Knock-LSM+ and SPLS-m-Knock+ based on 200 replications. We apply Lasso after employing spline approximation and projection technique in Section <ref>. Table <ref> briefly summarizes the sample mean and standard error (in parentheses) of the model sizes selected by each method. We find that our SPLS-Stab-GKnock obtains a moderate model size among all the methods. It neither selects too many genes like Lasso and SPLS-B-H nor is it as overly conservative as SPLS-m-Knock+ and SPLS-Knock-LSM+, which could lead to the potential loss of some relevant genes.Table <ref> presents genes selected by Lasso, SPLS-Stab-GKnock+ and SPLS-m-Knock+ more than 80 percent of repetitions. We find that all three methods select genes 15835, 13695 and 1279. The proposed SPLS-Stab-GKnock+ method also selects genes 1690, 13695, 6912 and 10177, and excludes several genes selected by Lasso, which partly echoes the results in <cit.> and <cit.>. To confirm our conjecture that SPLS can jointly evaluate the significance of covariates in Section <ref>, we further compare the marginal and joint effects of genes selected by SPLS-Stab-GKnock.Figure <ref> reports the marginal effects of genes on the response.To ensure the subsequent selection step works, we have to obtain at most 23 genes in the screening step. We can see that among all 6 genes selected by SPLS-Stab-GKnock, only genes 15835 and 10177 are among the top 23 genes ranked by Kendall's τ correlation with the response ER. In other words, if we use RRCS for screening, two-thirds of genes selected by SPLS-Stab-GKnock will be left out. Similar results will be achieved when other marginal-based methods, like SIS, are employed for screening. 
Although bearing relatively lower marginal effects on the response, Figure <ref> demonstrates that genes selected by SPLS-Stab-GKnock exhibit stronger joint effects on other relevant genes.Specifically, we use Pearson's ρ correlation between genes to evaluate their joint effects, and set the performance of gene 1296, the 23th gene ranked by Kendall's τ correlation with the response ER, as the baseline.Genes in the boxplot are arranged in descending order by their Kendall's τ correlation coefficients with regard to the response, as depicted in the legend for their rankings among all genes.From the boxplot, we find that genes 15835 and 10177 with stronger marginal effects (on the left side) are selected by both SPLS-Stab-GKnock and RRCS, while genes 1690, 13695, 1279 and 6912 with lower marginal effects but stronger joint effects (on the right side) are selected only by SPLS-Stab-GKnock. In other words, all the genes selected by SPLS-Stab-GKnock share higher joint effects on other associated genes compared to gene 1296, even though with a much lower rank in terms of the marginal effect on the response. It reiterates our standpoint that our proposed methods can jointly assess the significance of relevant features.Lastly, we present the estimated nonparametric function curve in Figure <ref>, which displays the trend of ER levels changing with patients' age in a cartoon manner. Specifically, along with the seletion step mentioned above, we use the same k=23 candidate genes screened by SPLS to estimate g by (<ref>). It shows that the value of effect first increase during one's youth (from age 30 to 34), subsequently decrease in middle age (from age 34 to 50), and then slightly bounce back when growing old (from age 50 to 54). This result is consistent with many authoritative studies in medicine, such as <cit.>,further underscoring the validity of the proposed method. Hence, we show that our proposed methods have significance performances in controlling FDR for high-dimensional partially linear models from a practical point of view. § CONCLUSIONThis paper considers the problem of variable selection for the partially linear model with FDR control by using generalized knockoff features. Incorporating selection probability as feature importance scores, we develop a Stab-GKnock procedure. The finite-sample FDR control and asymptotically power results are established for the proposal under some regularity conditions.A two-stage procedure based on joint screening is also developed under high dimensionality. 
Several directions remain for future work. One is to investigate the applicability of Stab-GKnock and Sparse-PLS to other semiparametric models, such as varying-coefficient models and generalized semiparametric models. Moreover, the power analysis based on SPD statistics can be extended to more complex data settings, such as Gaussian graphical models <cit.>, nonparametric adaptive models <cit.>, and structural change detection <cit.>, noting that these extensions are not trivial and call for further theoretical results. In addition, another interesting issue is to extend Stab-GKnock under the model-X knockoff framework to study robust knockoff-based methods for high-dimensional semiparametric models with heavy-tailed error distributions or misspecified feature distributions, which may be combined with the idea in <cit.>. § ACKNOWLEDGEMENTS This research was supported by the National Natural Science Foundation of China (12271046, 12101119, 11971001 and 12131006) and the Fundamental Research Funds for the Central Universities (310422113). § DECLARATIONS Conflict of interest: The authors declare that they have no conflict of interest. | http://arxiv.org/abs/2311.15982v1 | {
"authors": [
"Han Su",
"Panxu Yuan",
"Qingyang Sun",
"Mengxi Yi",
"Gaorong Li"
],
"categories": [
"stat.ME",
"math.ST",
"stat.TH"
],
"primary_category": "stat.ME",
"published": "20231127162810",
"title": "Stab-GKnock: Controlled variable selection for partially linear models using generalized knockoffs"
} |
From Reactive to Proactive Volatility Modeling with Hemisphere Neural Networks Philippe Goulet Coulombe Contact: [email protected]. For helpful comments, we thank Frank Diebold, Maximilian Göbel, Alain Guay, Nicolas Harvie, Michael Pfarrhofer, Aubrey Poon, Dalibor Stevanovic, and Boyuan Zhang, as well as participants at the IIF MacroFor and AMLEDS seminars. The views expressed in this paper do not necessarily reflect those of the Oesterreichische Nationalbank or the Eurosystem. This research was enabled in part by support provided by Calcul Québec and the Digital Research Alliance of Canada. This draft: January 14, 2024. The Python package is available at https://github.com/TheAionxGit/Aionx. Mikael Frenette Karin Klieber ============================================================ We reinvigorate maximum likelihood estimation (MLE) for macroeconomic density forecasting through a novel neural network architecture with dedicated mean and variance hemispheres. Our architecture features several key ingredients making MLE work in this context. First, the hemispheres share a common core at the entrance of the network which accommodates various forms of time variation in the error variance. Second, we introduce a volatility emphasis constraint that breaks mean/variance indeterminacy in this class of overparametrized nonlinear models. Third, we conduct a blocked out-of-bag reality check to curb overfitting in both conditional moments. Fourth, the algorithm utilizes standard deep learning software and thus handles large data sets – both computationally and statistically. Ergo, our Hemisphere Neural Network (HNN) provides proactive volatility forecasts based on leading indicators when it can, and reactive volatility based on the magnitude of previous prediction errors when it must. We evaluate point and density forecasts with an extensive out-of-sample experiment and benchmark against a suite of models ranging from classics to more modern machine learning-based offerings. In all cases, HNN fares well by consistently providing accurate mean/variance forecasts for all targets and horizons. Studying the resulting volatility paths reveals its versatility, while probabilistic forecasting evaluation metrics showcase its enviable reliability. Finally, we also demonstrate how this machinery can be merged with other structured deep learning models by revisiting <cit.>'s Neural Phillips Curve. § INTRODUCTION Unlike traditional deep learning strongholds such as speech recognition and computer vision, applications in social sciences are typically nowhere near perfect prediction accuracy. In other words, the signal-to-noise ratio is low for most economic applications, and in the vicinity of 0 for finance applications.
Still, the recent literature shows that deep learning methods can do surprising yet informative predictions in economics <cit.>.Thus, it is particularly pertinent to estimate heterogeneous prediction uncertainty – in order to determine when to trust or distrust a neural network's forecast. In this paper,we provide a principled and effective way to do so,which comes in the form of a novel standalone density forecasting tool.Its design also allows for it to be a building block that can be merged with elements of other macroeconometric deep learning models.This is of independent interest given that deep neural networks (NN) and their associated software environments are fertile ground to build more structured models (either for the sake of interpretability,increased performance,or both; see ),or to incorporate the ever-growing sources of non-traditional data. 0.2cmA Hemisphere Neural Network (Redux).<cit.> introduces the concept of a Hemisphere Neural Network (HNN) where a NNis restricted so that its prediction is the sum of latent time series corresponding to the outputs of subnetworks. Those are constructed from groups of predictors separated at the entrance of the network into different hemispheres.Thestructure allows the understanding of the final layer's cells output aslatent states in a linear equation.There,the motivation was interpretability of the conditional mean through separability.Here,the point is to go beyond the conditional mean.This paper treats the mean and the variance of a predictive regression as two separate hemispheres in one neural network where the loss function is the negative log-likelihood.The model features a common core at the entrance of the network which accommodates for various interactions between the conditional mean and variance structures. This resembles the autoregressive conditional heteroskedasticity (ARCH) behavior where mean parameters enter the volatility equation or volatility-in-means with the reverse operation.But going straight for maximum likelihood estimation of the new architecture will fail,for old and new reasons.The most prominent of those is that the double descent phenomenon –the modus operandi of modern deep learning – will resultin the usual benign overfitting of the conditional mean <cit.> and malign underfitting of the conditional variance. A key observation is that,in vastly overparameterized models aiming for the first two moments,in-sample overfitting of the first leads to underfitting of the second,and vice versa.Then,what will happen out-of-sample is anybody's guess. Accordingly,left unchecked, HNN could completely overfit the training data with either a perfect conditional mean path or an equally perfect conditional variance process – the allocation between the two very disparate models left to random initialization choices. We overcome this particularly daunting roadblock by designing three main algorithmic modifications: a volatility emphasis constraint in estimation, a blocked out-of-bag recalibration, and blocked subsampling.The resulting HNN will prove highly competitive in our (point and density) forecasting exercise and provide more reliable coverage than currently available machine learning (ML) basedalternatives. 
This desirable consistency in performance is a direct byproduct of the three aforementioned "modifications" bringing what could be called "conformal restrictions" in estimation and prediction.As the name suggests,such operations are related to the rapidly growing ML literature on conformal prediction where a pseudo-out-of-sample metric is used as raw material to construct prediction intervals with coverage guarantees <cit.>.0.2cmProactivity,Reactivity,and Related Literature. We neither restrict the mean nor the variance to follow a specific law of motion.They are both neural (sub)networks taking a large panel of macroeconomic series as common input.Neural networks successfully deal with high-dimensional input spaces and are implemented in highly optimized software environments providing fast computations.We refer to proactive volatility forecasts as those leveraging leading indicators to predict heightened volatility before the model delivers a large forecast error. Conversely, reactive forecasts propagate shocks that already occurred, resulting in increased expected variance in the following periods–after the occurrence of an initial major shock.HNN provides proactive volatility forecasts based on observed indicators when it can, and reactive volatility based on the magnitude of previous prediction errors when it must. The "reactive" class of models has a very long and distinguished history in econometrics,with (G)ARCH<cit.> and stochastic volatility (SV) models <cit.>.The popularity of SV for macroeconomic forecasting is mostly unrivaled.It isthe workhorse volatility process to close a Bayesian model and accounts for (slow) structural change in innovations' variance<cit.>.SV and GARCH models can be augmented with indicators that may have proactive qualities <cit.>,but this faces various important challenges,like that of high-dimensionality,and therefore,traditional reactive specifications have nearly always dominated the landscape.We find neural network adaptions for SV and GARCH to model time-varying volatility in financial time series in, e.g., <cit.>.However, estimating the predictive variance when applying deep learning models to estimate the predictive mean has turned out to be a very challenging task. Neural networks tend to be overconfident in making predictions <cit.> and deliver residuals close to 0 <cit.> that are a rather elusive target in a secondary conditional variance regression.Furthermore,implementing GARCH or SV-like methods in the highly nonlinear structure of deep learning models implies a significant deviation from the very software environments making their computations feasible and efficient.Recent contributions apply SV in nonlinear or nonparametric models such as Bayesian additive regression trees <cit.> or Bayesian neural networks <cit.>.However, these models rely on Bayesian estimation which often turns out to be computationally costly,and the volatility prediction remains solely reactive by construction.Quantile and distributional regressions enjoy increasing popularity in the macroeconomic literature and have seldom been found to have proactive qualities <cit.>.Early propositions to overcome the normality assumption when modeling densities include the seminonparametric (SNP) model of <cit.>. Recent contributions extend the concepts of quantile and density regressions to nonlinear nonparametric models.<cit.> do so in the context of Bayesian additive regression trees (BART) whereas <cit.> and <cit.> do related things with neural networks. 
From the deep learning literature side of the aisle,we find extensions of traditional methods which allow for the estimation of high-order moments, mixtures of distributions,as well as quantile regressions.<cit.> and <cit.> proposed estimating the first and second moments of the predictive distribution with two separate neural networks. Building on this idea, the recent literature proposes different variants of mean-variance neural networks <cit.> as well as mixture density networks <cit.>. In that vein,the DeepAR model of <cit.> is getting increasing attention.Amazon's DeepAR is a sequence-to-sequence probabilistic forecasting model which estimates the parameters of a distribution with Recurrent Neural Networks (RNNs) based on maximum likelihood.However, as documented in <cit.>,DeepAR tends to underestimate variance,likely for the aforementioned double descent reasons.We will also find in our experiments that the quality of DeepAR's density forecasts is erratic.Lastly,the estimation of quantile regressions using neural networks dates back to <cit.> and has since been the subject of a copious amount of research<cit.>.0.2cmIntended Use. A relevant question is where HNN stands in this deluge of works.It is an economical yet not any less sophisticated solution to quantify time-varying uncertainty surrounding deep learning-based macroeconomic forecasts.It is fast,malleable,and easily understood – through the use of only two (nonlinear) conditional moments.We will see that it works well for many targets without any particular tuning, and that both point and density forecasts are highly competitive and reliable. Lastly,it will be easily merged with more structured models,like thatof <cit.>, giving a "complete" model of inflation based on a nonlinear Phillips curve specification.0.2cm Summary of Forecasting Results.In a thorough forecasting exercise using macroeconomic data for the US,we find that HNN has a great capacity for adaptation in the face of a heterogeneous pool of series.Adaptability is a recurring finding when applying machine learning tools to macroeconometric problems <cit.> and can be linked back to the carefully crafted semi-nonparametric structure of the model.Specifically, it captures the Great Moderation pattern in real activity variables (i.e.,long-run change) and yet,without changing the specification nor hyperparameters,can deliver a more "spiky" volatility process for the S&P 500.The estimated volatility path for longer-run forecasts of macroeconomic targets (s=4 quarters ahead) displays a behavior that at times resembles more that of a (smoothly) switching process,in contrast to the slowly evolving SV process which dominates the literature.Those higher volatility regions are proactive in the sense that they begin before the advent of a major prediction error, a behavior that is observed both in-sample (with out-of-bag estimates) and out-of-sample for many targets (e.g.,GDP growth,Unemployment Rate,Inflation).In terms of performance,HNNalways ranks among the top models in terms of RMSE, log score,coverage rates,and other metrics of calibration and probabilistic forecast evaluation.More interestingly, it never suffers "catastrophic failures" (like massive undercoverage) that seldom occur on some targets for the other sophisticated competing models.For instance,it is not infrequent to see BARTand DeepAR substantially undercovering – a phenomenon we delve into and attribute to harmless overfitting of the conditional mean leading to quite harmful underestimationand underfitting 
of the conditional variance.This implies that while conditional means can be used as per the model’s estimation (and perform well as such),conditional variances cannot, and often fail in ways that basic manual quality control is unable to flag nor fix ex-ante. HNN,in contrast,appears to have a level of reliability mostly on par with that of AR competitors. The use of out-of-bag (and presumably non-overfitted) errors to calibrate or estimate the volatility process helps HNN a lot in being reliable out-of-the-box.Our much simpler NN_ SV,a reduction of HNN which forfeits proactive volatility but keeps a sophisticated conditional mean function and fits a SV model on OOB residuals in a second step,is equally reliable in terms of coverage.The forecasting section also includes a series of vignettes.First, we compare our approach to quantile regressions of various kinds.A striking observation is that for real activity targets – which asymmetry and non-normality have been heavily documented following <cit.> – the normal likelihood-based HNNusually performs better or as well as the best (linear or nonlinear) quantile model.This is true for both tails of the distribution and short and long forecasting horizons.Second,we investigate whether the use of Long Short-Term Memory Networks <cit.> could further improve HNN results in any material way—they do not.In addition, we present results on monthly data for the US and a euro area forecasting exercise in the appendix. We find that HNN fares well with time series that are noisier and shorter.However, as one could expect from this more hostile terrain, gains with respect to autoregressive SV models are more punctual and modest in size.0.2cm Fusion With a Structured Deep Learning Model.HNN and the overall apparatus developed in this paper can also be joined in a modular fashion with more structured deep learning models to obtain interpretable forecasts with reliable uncertainty quantification.We construct such a model for inflation by embedding <cit.>'s Neural Phillips Curve (NPC) within this paper's arsenal.We find that the customized (yet restricted) HNN-NPC improves point and density forecasts over the plain density HNN.We also compare with a simpler Bayesian model also providing interpretability and uncertainty estimates via SV <cit.>. In line with <cit.>'s findings, we see that the neural model better captures inflation dynamics during important episodes like in the outset of the Great Recession and the post-Pandemic inflation surge.This is attributable to a very different reading of the contribution of real activity and expectations in less quiet economic times.We enrich such results by showing that HNN-NPC was rather confident when predicting high inflation in early 2020, and nearly dismisses its own deflationary forecast in 2020.Moreover,its proactive volatility qualities are also apparent in 2008 where the volatility estimate climbs out of its bed well before SV does so (following the 2008 crash in oil prices).All in all,HNN-NPC demonstrates how deep learning can improve over classic approaches while retaining essential qualities of the latter– interpretability and uncertainty quantification.0.2cmOutline. Section <ref> introduces the mean-variance HNN by describing and motivating the network architecture,and presenting the key algorithmic modifications to plain MLE.Section <ref> conducts an extensive empirical analysis using macroeconomic data for the US. In Section <ref> we extend HNN to a nonlinear Phillips curve model for inflation. 
Finally, Section <ref> concludes. § THE ARCHITECTUREThis section describes our proposed neural network architecture to estimate the mean and variance of the predictive distribution of our target variable y_t+1 (or y_t+s in the case of direct s steps ahead forecasts).We assume that y_t+1 follows a Gaussian distribution and depends on a (potentially very large) number of K covariates denoted by X_t:y_t+1∼𝒩(f( X_t),g( X_t))In this very general setup, the functions f and g are unknown and may be highly nonlinear.To remain agnostic on the functional form of f and g,both will be approximated through a neural network (NN) structure <cit.>.Rather than estimating a standard deep NN model and hoping to make something out of the residuals,we design a specific architecture derived from<cit.>'s original HNN that will obtain g and f jointly.In our application,hemisphere 1 (h_m = f) is the conditional mean and hemisphere 2 (h_v = g) is the conditional variance.Both hemispheres are fully nonlinear, nonparametric functions of the input space X_t which ultimately output two time series: conditional mean (ŷ_t) and conditional variance (σ̂^2_t).Importantly,they get their assigned roles from how they enter the loss function,which is now proportional to the log-likelihood.Accordingly,the first building block of our approach is to have a neural network with objective functionmin_θ_m, θ_v∑_t=1^T( y_t+1 - h_m (X_t;θ_m))^2 /h_v(X_t;θ_v)+ log(h_v(X_t;θ_v))where θ_m andθ_v are the network parameters consisting of the weights w_j and the bias term b_j (i.e., θ_m = (w_m, b_m) and θ_v = (w_v, b_v)).The next questions are (i) what is the structure of h_m and h_v,and (ii) how do we successfully solve (<ref>).The next paragraphs set out to answer those through a series of "ingredients" that reinvigorate what is otherwise a rather plain-looking MLE problem. 0.25cmIngredient 1: Two Hemispheres and A Common Core.Figure <ref> summarizes the network's architecture.As can be seen, both hemispheres share the same input data as well as a few common layers before estimating the parameters of each hemisphere. The outcome of both hemispheres enter the loss function and thereby complete the model setup. 
Going backward from the loss towards the original inputs,the "yellow" hemisphere is h_m (X_t;θ_m)=w_m^(L_m)' Z_t^(L_m-1)+b_m^(L_m), withZ^(l)_t= ϕ^(l)(w_m^(l)' Z^(l-1)_t +b_m^(l)), forL_c ≤ l ≤ L_m-1,and the "blue" one ish_v (X_t;θ_v) = log(1 + exp( w_v^(L_v)' Z_t^(L_v-1)+b_v^(L_v))), withZ^(l)_t = ϕ^(l)(w_v^(l)' Z^(l-1)_t +b_v^(l)), forL_c ≤ l ≤ L_v-1,where ϕ denotes a nonlinear activation function,L_m and L_v are number of hidden layers for each hemisphere,and L_c is that of the common (red) core.The Softplus activation function in (<ref>) constrains ĥ_v(𝐗_t;θ_v) to be positive at all times.Clearly, the definitions of h_m (X_t;θ_m) and h_v (X_t;θ_v) are still incomplete given that X_t has yet to make an appearance on the right-hand side of (<ref>) and (<ref>).The common core at the entrance of the networks brings such completion via Z^(l)_t = ϕ^(l)(w_c^(l)' Z^(l-1)_t +b_c^(l)), for1 ≤ l ≤ L_cwith Z^(0)_t =X_t.Thus, the first layer in each hemisphere uses the outputs of neurons from the last layer of the common block Z^(L_c)_t.Having dedicated mean and variance hemispheres requires little additional ink to motivate,but the virtues of the common core, while numerous,are more subtle.Consider the following simple linear data generating process (DGP) with ARCH errors{[ y_t=𝐗'_t β+ε_t, ε_t = σ_t^2 ϵ_t, ϵ_t ∼iid; σ_t^2=c+a_1 ε_t-1^2+…+a_p ε_t-p^2 ].In this model,the corresponding hemisphere outputs and parametrizations would be{[ h_m(X_t; β ) =𝐗'_t β; h_v(X_t;[ a β ]) =c+a_1(y_t-1-𝐗'_t-1β)^2+…+a_p(y_t-p-𝐗'_t-pβ)^2. ].As <cit.> puts it "Even in the simple case, we cannot estimate separately the parameters of the conditional mean and those appearing in the conditional variance." Obviously,this does not mean all volatility models need to be estimated jointly.What it suggests, however,is that successful models of time series volatility often have some cross-equation restrictions between h_m and h_v.Rather than introducing cross-equation restrictions,which are likely both unfeasible and undesirable in a neural network setup, we discipline h_m and h_v with soft constraints, i.e., cross-equation regularization.We achieve this by estimating common layers at the entrance of the network, which can be interpreted as hemispheres sharing weights. 
As emphasized in Figure <ref>, we estimate a few common layers for both hemispheres before separating the mean from the variance hemisphere, where hemisphere-specific neurons are presented in yellow for the former and in blue for the latter.While this example details how the conditional mean parameters enter that of the variance,the opposite sharing direction is also possible.For instance,latent structures driving volatility can flow in the mean hemispheres,which conveniently allow for GARCH- or SV- or any volatility-in-means effect which have been popular in finance to study the time-varying risk premium <cit.> and now in macroeconomics to quantify the real effects of uncertainty<cit.>.By adding time trends to our set of covariates we approach a classical SV specification through a residuals trend-filtering perspective.GARCH dynamics would suggest making h_v a recurrent neural network.As we will see,HNN results will be quite competitive without this additional complication – RNNs and LSTMs are notoriously harder and longer to train.We nevertheless explore this possibility in Section <ref>.0.25cmIngredient 2: Volatility Emphasis.As we know from the double descent phenomenon <cit.>, a mildly deep and large network will yield a near perfect in-sample fit even in the presence of large amounts of noise and yet produce stellar out-of-sample results.When focusing on out-of-sample point forecasts,we can safely embracedouble descent and its associated benefits.However,this creates trouble for the historical (in-sample) analysis of conditional mean estimates and double trouble for the conditional variance,with the latter being unreliable both in- and out-of-sample.Our volatility emphasis constraint, coupled with the next two ingredients,will make MLE work in the context of densely parameterized models.A first observation is that a model in the double descent region eradicates residuals,yet MLE is supposed to obtain the parameters of their non-degenerated distribution.A secondis that the reverse solution is also possible: a perfect volatility model with no conditional mean.In other words,when solving (<ref>) without further adjustments, HNN can completely overfit the data with either h_m or h_v,giving rise to vastly different models. This suggests that the overall prevalence of h_m versus h_v,when those are left completely unconstrained,is not identified and cannot be obtained from in-sample estimation (in a similar spirit to the regularization parameter λ in a ridge regression,).Note that early stopping can help in regularizing h_m or h_v,but will do symmetrically which is highly suboptimal in many applications where it is clear that one hemisphere should be more expressive than the other.As a solution, we bring in a constraint.We fix theaverage conditional predictive variance to a constant (i.e., (h_v(X_t;θ_v))/(y_t+1) =ν) during estimation and let HNN learn deviations from it.We refer to ν as the volatility emphasis parameter,because it guides how much of the network fitting capacities should go to the volatility versus the mean.Why does this work? 
First,it serves as a solution to optimization cycling through near-perfect conditional mean versus conditional variance optima and the general indeterminate nature of the problem in overfitting situations.Fixing the expressivity of h_m allows us to let h_v benignly overfit (if need be) the way it is typically done for the conditional mean estimation in plain squared error minimization.As a result, the conditional mean is tied to deliver estimates that will look like a plausible OOB fit for every run (as set by ν),and conditional variance can be projected OOB and compared to OOB squared errors (Ingredient 3 & 4) to obtain a non-overfitted volatility path in- and out-of-sample.While the final unconditional variance is readjusted and not imposed (see Ingredient 4 below),we should not choose ν lightly, as it will influence the relative flexibility of h_m and h_v and estimated paths.Clearly, from experience, we expect ν to be close to 1 for stock returns and lower for other macroeconomic targets,especially those exhibiting persistence. In theory,ν could be cross-validated,but to avoid the obvious practical cost of doing so,we rather set ν through a very well-informed guess.We estimate a standard NN with an analogous architecture,calculate the mean of the squared blocked OOB residuals,and set ν = (ε̂_t,NN^2)/(y_t+1) where ε̂_t,NN denotes the blocked OOB residuals. In effect,if one were to conduct basic conformal prediction-based inference for a plain NN in a macroeconomic time series context and assume homoscedasticity,(ε̂_t,NN^2) and ε̂_t,NN in general would be natural inputs to obtain coverage-guaranteed (out-of-sample) prediction intervals <cit.>.The presence of the denominator (y_t+1) brings ν in universal units (i.e.,between 0 and 1)[In practice,the original estimate can go marginally above 1 since the inputs are OOB rather than training residuals,and the plain NN model may do worse out-of-bag than simply taking the sample average when facing extremely low signal-to-noise ratios.We enforce an upper bound at 0.99, effectively forcing ν to always deliver a R^2> 1 %. ] and is implicit in our calculations because all the data will be standardized at the entrance of the network and scaled back to original units at the exit.A possibility for future work is to cross-validate ν in the neighborhood of the informed guess or update it through iterative HNN estimations,but our current empirical results suggest this extra legwork may not be necessary.The inevitable failing of an unchecked HNN (and DeepAR later on) as well as the usefulness of the volatility emphasis constraint can also be intuitively understood from basic MLE econometrics for linear regression.Even when fitting the simplest linear model without shrinkage,the MLE estimate of the error variance is always biased downward: it yields σ^2/T<σ^2/T-K where K is the number of regressors and the second expression is the OLS version.This potentially major discrepancy is straightforward to correct when the number of degrees of freedom is known, which it is not in the deep learning context. 
The best course of action when the analytical calculation of degrees of freedom is impossible is the use of pseudo-out-of-sample metrics (of which cross-validation is the better known).Thus,curbing many problems at once,we fix ν to a plausibly unbiased value ex-ante calculated from out-of-bag sampling and an approximated h_m.Two outstanding issues remain.If the originally imposed νis not exactly in tune with h_m's final performance,we may want to adjust the average conditional variance accordingly. Another observation is that the volatility emphasis constraint fixes the expressivity of the conditional mean,but not that of the variance.Thus,it does not prevent h_v from overfitting what is left free in the likelihood,and may offer implausibly accurate conditional variance forecasts in-sample that will not be matched out-of-sample. We will get back to this when covering the fourth and final ingredient.0.25cmIngredient 3: Blocked Subsampling.We now turn to the important ingredient that has been implicit throughout.Bagging in our context entails two major benefits.First, the use of quantities which are immune to extreme overfitting.Second,it helps with optimization itself.There is no guarantee that a single run of stochastic gradient descent initiated randomly will yield the "true parameters". In that sense,our approach does not aim to succeed where traditional maximum likelihood estimation (MLE) would likely fail.This is less concerning when considering an ensemble of multiple runs, similar to what is common practice for point prediction with NNs <cit.>.In this case, we employ B=1000 runs,which may seem excessive for out-of-sample predictions but is suitable for OOB "time series" that utilize an average of (1-) × 1000 runs at each point in time. [Considering 300-something runs can be sufficient but results may change in a very marginal way depending on the seed.]More precisely,the calculations proceed as follows.Assume we have a sample of size 100 and choose a subsampling rate of 0.80. We estimate HNN using data points from 1 to 85, and project it "out-of-bag" on the 20 observations not used in training. This gives us h_j(X_80:100;θ̂_j,b) for a single allocation b (for b = 1, …, B) while h_j(X_1:80;θ̂_j,b) are still s.By considering many such random (non-overlapping blocked) allocations where "bag" and "out-of-bag" roles are interchanged,we obtain the final h_t,m and intermediary (see next ingredient) h_t,v pathsby averaging over B at each t such that h_j(X_t;θ̂_j) = 1/(1-0.80) × B∑_b=1^B I(h_j(X_t;θ̂_j,b)≠)h_j(X_t;θ̂_j,b)forj ∈{m, v }.Interestingly,this procedure fits within the framework of <cit.>'s Weighted Bayesian Bootstrap and, in particular, of <cit.>'s extension of it for generic ML losses. In short,randomly weighted optimization of the loss provides an approximate Bayesian posterior.0.25cmIngredient 4: Blocked Out-of-Bag Reality Check. To obtain a proper estimate of the unconditional variance we introduce a recalibration step based on the blocked out-of-bag residuals. 
This is done by our reality check, which scales back the h_v(X_t;θ_v) path making use of the OOB residuals.[In a sense, this step takes the concept of conformal prediction – a method to form prediction intervals without making distributional assumptions <cit.> – to a conditionally heteroskedastic environment.Uncertainty in future predictions is based on the residuals of a held-out validation set, which is used to recalibrate the prediction intervals.<cit.> extends the applicability of such methods to dependent data using a block approach. ]In our setup, the initial guess for ν – coming from a plain NN and not the dual estimation of h_v and h_m – might not exactly match the resulting average volatility of HNN's OOB residuals.Moreover,even after early stopping,the raw h_v may be overly wiggly and reflect conditional variance overfitting.Hence, we recalibrate h_v using HNN's blocked OOB residuals by running log(ε̂_t, HNN^2)=ζ_0 +ζ_1 log(h_v(𝐗_t;θ̂_v))_δ_t+ξ_tand then update the in-sample volatility such thatĥ_v(𝐗_t;[ θ̂_v,ζ̂_̂0̂,ζ̂_̂1̂, ς̂]) ←exp(δ̂_t) ×ς̂,where ς̂ is the estimate of the scaling object ς= E[exp(ξ_t)].If there is a mismatch between ν and the new OOB residuals coming from HNN,ζ_0 can adjust for it.ζ_1's role is to move accordingly and damper the overall variation in the final ĥ_v in the event that the raw h_v overfits.This is because ε̂_t, HNN^2,coming from blocked subsampling, is a suitable approximation to the kind of prediction errors HNN will encounter in the real out-of-sample.Thus,if necessary,ζ_1 acts as a raccord between h_v and "reality".The above operations can be seen as a direct neural translation of <cit.>'s Section 8.4on weighted least squares (WLS). Note that the constant ς̂ is not part of Wooldridge's textbook because only relative (observation) weights are needed for the WLS application.In our case,we need an absolute metric and E[exp(ξ_t)] is not merely equal to 1 as a result of exp() being a nonlinear function and ξ_t likely being non-normal.We sample with replacement from the vector of ξ̂_t = log(ε̂_t, HNN^2)-δ̂_t to estimate the expectation.For out-of-sample volatility predictions,we thus use ĥ_v(𝐗_t^test; [ θ̂_v,ζ̂_̂0̂,ζ̂_̂1̂, ς̂]).0.15cmHyperparameters. For each hemisphere we estimate a standard feed-forward fully connected network, which features two hidden layers (= 2). The same holds for the common block at the entrance of the network. Moreover, each layer (common or not) is given = 400. We choose the ReLU activation function (ReLU(x)=max{0, x}) throughout the hidden layers and define a linear activation function for the output of h_m.To prevent the error variance from being negative, a natural choice for the output activation function for h_v is the Softplus function (Softplus(x)=log(1+exp(x))), which imposes these bounds (h_v(𝐗_t;θ_v)≥ 0 ∀ t) and is, in effect,a soft ReLU.Hyperparameters for the optimization of the algorithm are set as follows.The maximum number of epochs is 100 and the learning rate is 0.001. Similar to <cit.>, we perform early stopping by using only a subset (80%) of the training sample for the estimation of the parameters and determine with the remaining set (i.e., 20%) when to stop the optimization.We set B=1000.The patience parameter in early stopping is 15 epochs.As a form of ridge regularization on network weights,early stopping may improve the efficiency of the algorithm and prevents the network from overfitting <cit.>. In addition, we apply dropout with aof 0.2. 
We use the Adam optimizer and choose the whole sample for the batch size.Network weights w_m and w_v are initialized using 𝒩(0, 3100).Those choices are common to all target variables.§ MACROECONOMIC POINT AND DENSITY FORECASTINGWe test our proposed approach by modeling and forecasting key macroeconomic and financial variables of the US economy. We base our analysis on the FRED-QD database of <cit.>, which is available on a quarterly frequency and features 248 US macroeconomic and financial aggregates. Our sample ranges from 1960Q1 to 2022Q4. All variables but prices are transformed according to <cit.> to achieve approximate stationarity.[ NONBORRES(Reserves ofDepository institutions (Nonborrowed)),TOTRESNS (Reserves ofDepository institutions (total)),GFDEBTNx (total public debt),and BOGMBASEREALx (real monetary base) have been dropped because of their large shift in scale between in- and out-of-sample.While estimation and predictions were robust to their inclusion (by putting a very small weight on those variable),out-of-sample variable importance metrics were affected (see <cit.> for further discussion on this issue in the context of Shapley Values). ]Prices are in log first differences (inflation rate) rather than second differences (acceleration rate).All predictors are standardized to have zero mean and unit variance which is necessary for NN-based models and redundant for the others.We include two lags for each variable X_t,k and add 100 linear trends to the set of covariates allowing for exogenous slow time variation in the parameters, and approximate trend filtering of the residuals à la stochastic volatility if the DGP requires so.Missing values at the beginning of the training sample are imputed using the EM algorithm of <cit.>.The target variables are GDP growth, change in the unemployment rate, headline CPI inflation, housing starts growth as well as S&P 500 stock returns. For each of them, we compute the one-step and four steps ahead predictive mean and variance for our hold-out sample starting in 2007Q1 and ending 2022Q4. NN-based models are re-estimated every two years whereas standard models are updated every quarter, all on an expanding window basis. Our forecasting exercise is based on a pseudo-out-of-sample analysis, which does not account for ragged edges or revisions in the underlying data set. Since we deal with large, dense models and none of the NN-based models put a disproportionate weight on a few indicators not available in real time, extending to a real-time exercise will not entail significant deviations from the results presented. We explore the performance of the HNN by comparing the results to a set of competing models. This set is comprised of simple linear benchmarks including AR processes with SV and GARCH (AR_SV and AR_G) as well as a high-dimensional Bayesian linear regression endowed with shrinkage and SV (BLR). In terms of nonlinear modeling choices, we consider standard neural network specifications (NN_SV and NN_G), Bayesian additive regression trees (BART, see <cit.>) as well as Amazon's DeepAR <cit.>. 
Details on the implementation of the benchmark models can be found in Appendix <ref>.NN_SV and NN_G use the same architecture as HNN's conditional mean, but are trained by minimizing the usual squared errors and the volatility processes are fitted in a second step on the resulting out-of-bag residuals.Those plain NNs allow to directly assess the relevance of a data-rich and densely parameterized nonlinear volatility function, and document HNN's proactivity versus standard approaches for often similar conditional means.Lastly,those two NN benchmarks allow to quantify the various merits of modeling jointly the first two conditional moments.As discussed in Section <ref>, it is not difficult to think of DGPs where this could make a sizable difference,but knowing in what terrain we are standing is inevitably an empirical question.The rest of this rather rich set of competitors allows us to span the space of relevant one-shot deviations from our framework.First, comparing the results to linear models sheds light on whether modeling nonlinearities pays off for macroeconomic point and density forecasting.Second,BART and DeepAR are the natural go-to nonlinear, nonparametric ML tools providing density forecasts.Tree ensembles are always very stubborn benchmarks for learning tasks with tabular data, and BART provides a probabilistic extension of boosting that produces natively density forecasts.DeepAR's architecture resembles that of a very crude HNN where there is only a common (LSTM) core,no hemispheres,and all remaining ingredients of Section <ref> have been dropped.Accordingly,performance differentials with HNN will procure a rough estimate of the (non-)marginal benefits of those propositions. For each of our six target variables, we evaluate compactly the resulting point forecasts using the root mean square error (RMSE), the probabilistic forecasting accuracy by means of the log score (ℒ) and the share of variation explained in residuals' magnitude via the R^2_|ε_t| of absolute residuals.For the out-of-sample (OOS) forecasted values at time t for s ∈{1,4} we compute:RMSE_s = √(1/#OOS∑_t ∈OOS (y_t+s-ŷ_t,s)^2), ℒ_s = - 1/#OOS∑_t ∈OOSlog( φ (ε_t,s; σ̂_t,s)),R^2_|ε_t|,s = 1 - ∑_t ∈OOS (|ε_t,s| - σ̂_t,s)^2/∑_t ∈OOS (|ε_t,s| - η)^2,where ε_t,s = y_t+s-ŷ_t,s,η is the standard deviation of the in-sample residuals,and φ (. .; σ̂_t,s) is a normal density with zero mean and standard deviation σ̂_t,s.While exotic in appearance,this last metric is only the out-of-sample goodness of fitof what would be the second stage regression in a weighted least squares problem.Surely,it does not have all the qualities of other scoring rules and |ε_t| is not exactly realized volatility,yet it arguablyprovides a metric that is much easier to interpret quantitatively. Lastly, it must be interpreted with care: a model can reach a high R^2_|ε_t| because it has a failing conditional mean and the unexploited predictability flows in |ε_t|. 
Thus, a sufficient condition for safely gazing at the R^2_|ε_t| of a particular model is for it to also have a low RMSE.Intuitively,a high-performing model with a fine ℒ will have both a low RMSE and a high R^2_|ε_t|.Moreover, in Section <ref> we present additional density forecasting measures, which are the continuous ranked probabilitiy score (CRPS) and the coverage rate (68%), and assess model calibration using probability integral transforms (PITs).We report evaluation metrics for the full test sample,as well as a subsample that ends prior to the Pandemic Recession.Given the unpredictable and unprecedented wild swings of 2020,those observations are always discarded for the real activity targets. In the interest of space and sinceAR_ SV and AR_ G often yield very similar results out-of-sample, we only report the best of the two according to ℒ for each target/out-of-sample pair.All reported RMSEs are ratios with respect to that of the OLS-based AR(2). Preferred models are those with low values in terms of RMSE and ℒ and high values for R^2_|ε_t|. §.§ ResultsWe report the main forecasting results through a series of dashboards for selected targets featuring essential statistics and visualizations. This detailed analysis is conducted target by target to assess the individual performance and to provide some basic economic reasoning.We compare HNN's conditional mean and volatility paths to that of selected benchmarks in the upper left and upper right panel of Figure <ref> to Figure <ref>.We plot in-sample estimates up to the start of our hold-out (i.e., up to 2007Q1) and the recursively re-estimated out-of-sample ones thereafter (i.e., from 2007Q1 to 2022Q4).Our analysis is complemented by investigating the main drivers of the mean and the variance hemisphere.For this purpose,we measure Variable Importance (VI) as in <cit.> and <cit.>, which is itself inspired from what is traditionally done to interpret Random Forests <cit.>. Details are given in Appendix <ref>.Additional results for leftover quarterly targets (s=4 casesand unemployment) can be found in Appendix <ref>.In general,HNN exhibits remarkable adaptability when faced with a diverse range of series. It adeptly captures the Great Moderation pattern in real activity variables,manages to produce a more "spiky" volatility pattern for the S&P 500,and sometimes exhibits behavior more akin to a smoothly switching process than to SV for predicting macroeconomic variables at longer horizons.These higher volatility periods demonstrate a proactive nature by often preceding significant prediction errors. This behavior is observable both in-sample, with out-of-bag estimates, and out-of-sample. In terms of performance, HNN consistently ranks among the top models across all metrics.Moreover,it does not experience substantial undercoverage, which occasionally plague other sophisticated competing models.For the one-step ahead predictions of GDP growth HNN clearly outperforms all benchmarks. This finding holds for point and density forecasts as well as for both samples (see Figure <ref>). Considering the sample ending in 2019Q4 we find that all nonlinear techniques yield high predictive power with HNN giving the lowest RMSE and the lowest log score (ℒ).Moreover,HNN gives the highest R^2_|ε_t| at 30%, distancing the nearest competitors NN_ SV and NN_ G by about 10 percentage points,and BART/DeepAR by more than 20. Similar findings are obtained from including Covid-19 pandemic observations. 
Coupled with the good forecasting performance, this implies that our model captures a substantial part of the "realized volatility". Extending the hold-out to the end of 2022 reveals that HNN also yields a good performance after the Covid-19 pandemic whereas other models (especially, BART and BLR) lose ground against the linear benchmark.The right panel of Figure <ref> nicely demonstrates the reactive and proactive behavior of HNN's volatility hemisphere. The variance path increases at early stages of turmoil despite accurate predictions in previous periods,and is thus better prepared to receive larger errors than the green line corresponding to the SV specification.In fact,HNN shoots up at about the same time as AR_ SV,for which higher predictions errors have already been accumulating at that point.In line with <cit.>, we find that the conditional variance is affected by developments in financial markets whereas the predictive mean is driven by labor market variables as well as real activity measures such as imports, exports and manufacturers' new orders (see Figure <ref> in Appendix <ref>). Figure <ref> shows the results for the one-year ahead prediction of GDP growth. Again, high-dimensional models (except for DeepAR) yield high predictive accuracy when focusing on the hold-out ending in 2019. BART gives the best point forecasting performance, closely followed by BLR and NN specifications. In terms of log scores, HNN and BART clearly outperfom the other models. Moreover, R^2_|ε_t| shows that HNN explains nearly a third of the realized volatility similar to NN_ G. When including the periods after 2020, HNN beats all its linear and nonlinear competitors with respect to density forecasting performance. While alternative NN specifications perform rather poorly at the end of 2021 and the beginning of 2022, HNN yields highly competitive predictions and acknowledges the elevated uncertainty until the end of the sample. Similar to the one-step ahead case, the volatility hemisphere shows proactive tendencies. The conditional variance picks up the uncertainty in the underlying data set early, resulting in superior density predictions. During the Great Moderation and in the periods after the Global Financial Crisis the variance is low and stable, narrowing the predictive distribution to rather certain estimates. Note that this also holds for the one-step ahead case. Again, variables measuring financial conditions are important drivers of the mean and the variance hemisphere. These include debt-to-income ratios of several sectors in the US economy as well as real disposable business income (see Figure <ref> in the appendix). As shown by <cit.>, the impact of financial conditions on GDP growth seems to be robust at multiple horizons. Similarly, <cit.> and <cit.>, amongst others, emphasize the importance of financial conditions on real activity, especially in the tails. When interest centers on one-step ahead inflation predictions (see Figure <ref>) we find that our neural network models yield high forecasting performance for both point and density predictions as well as both samples. HNN outperforms all other models with respect to point forecasting performance in terms of RMSE and NN_ G gives the lowest log score for density performance, closely followed by HNN. 
A similar pattern is observed when extending the sample to the end of 2020.Noteworthy, we see that the HNN overestimates the effects of the Covid-19 pandemic in its mean estimate which is, however, accompanied by a large variance, implying that the model acknowledges the unprecedentedly high uncertainty involved. This nearly completely discounts the dramatic deflation forecast and results in a highly competitive ℒ.During this time, the variance hemisphere is mainly driven by employment variables and money stock which were heavily affected by the Covid-19 shock and exhibited major fluctuations in 2020 (see Figure <ref> in the appendix). BART gives the worst log scores compared to the other models as it tends to underestimate the variance during most periods. Given the policy needs for interpretable inflation forecasts, we extend the HNN to a more structural approach proposed in <cit.>. Relying on a nonlinear Phillips curve specification, the architecture of the neural network is designed to provide among other things a measurement of economic slack and inflation expectations.As will be shown in Section <ref>, the Neural Phillips Curve (NPC) model equipped with a mean and a variance hemisphere predicts inflation reasonably well and substantially outperforms its competitors.Results for the quarterly forecasts of the S&P 500 presented in Figure <ref> show that our HNN outperforms all competitors in terms of density predictions and yields highly competitive point forecasts following BART and AR_ G. Moreover, HNN explains almost a third of the variance of absolute residuals, similar to the DeepAR, but with allegedly much less "leftover conditional mean predictability" in it given DeepAR's higher RMSE.When comparing the conditional volatility estimated by the set of models, we see that the predictive variance of HNN follows a different pattern than those of the other models. Our proposed approach attaches higher weight to macro uncertainty and gives a countercyclical variance path (see the upper left panel of Figure <ref>).Since the architecture of HNN allows for proactive and reactive volatility,the predictive variance takes into account signals from the input data set while at the same time accommodates for various forms of time variation. This way, it offers great flexibility going beyond the reactive structure of GARCH and SV processes and accounts for nonlinear relations between the covariates and the target. It relates to the strand of literature exploring the predictive power of exogenous variables for forecasting stock market volatility <cit.> and sheds light on the economic sources of the volatility process. We find that the variables shaping the variance hemisphere are related to both, financial and economic conditions. Main drivers are variables closely moving with the economic business cycle, such as new housing permits, imports and employment, but also variables measuring financial conditions and stock market variables (see Figure <ref> in the appendix).Turning to the short-term predictions of housing starts, which is presented in Figure <ref>, we see a remarkable performance of the neural network models for the periods during and after the Covid-19 pandemic. In terms of density forecasts, HNN's predictive accuracy is only challenged by the highly competitive performance of AR_ SV.While BLR outperforms all competitors for point and density predictions before 2020, controlling for nonlinearities gains in importance thereafter. 
All nonlinear models catch up on the AR benchmark with HNN yielding the lowest RMSE and a ℒ, which is comparable to AR_ SV. Moreover, our proposed model explains about 10 % of the realized volatility measured by the R^2_|ε_t| of absolute residuals, following AR_ SV which, however, gives a substantially higher RMSE.Visual inspection of the conditional mean of the HNN (see the upper right panel of Figure <ref>) reveals some noteworthy patterns for the observations during the Covid-19 pandemic in 2020. Even though HNN underestimates the unprecedented downturn in the first quarter of 2020, it manages to take advantage of the signals provided by the unconventional behavior of various variables in the set of covariates for more accurate point and density forecasts than its competitors in the following periods. This raises the question: what drives this? For both hemispheres, we find that financial conditions have high predictive power. Moreover, new housing permits as well as commodity price developments (in particular, metals and fuels) play an important role (see Figure <ref> in the appendix).§.§ Calibration and Alternative Density Forecasts Evaluation Metrics Given the remarkably consistent performance of the HNN's density predictions,we challenge these results by addingevaluation metrics including the continuous ranked probability score (CRPS), the 68 % coverage rate and a PIT-based test for auto-calibration <cit.>.First, we compute the CRPS introduced by <cit.>, which is a proper scoring rule for predictive distributions and enjoys the advantage of being less sensitive to outliers.Let F denote the cumulative distribution function and 𝔣 the predictive density with ŷ_t,s and ŷ'_t,s being independent random draws from the predictive density. The CRPS is then defined asCRPS_t,s(y_t,s) = ∫_-∞^∞ (F(z) - 1{y_t,s≤ z })^2 dz = E_𝔣|ŷ_t,s - y_t,s| - 0.5 E_𝔣|ŷ_t,s - ŷ'_t,s|,where 1{y_t,s≤ z } defines an indicator function, which returns the value 1 if y_t,s≤ z and 0 otherwise. In the figure below we report the CRPS averaged across the hold-out relative to our AR benchmark. Second, we consider the nominal coverage rate, which measures the frequency of the forecasts falling inside a specific interval. The predictive density is considered too wide (narrow), if the realized frequency exceeds (drops below) the nominal level chosen for the interval.Formally, this boils down toI_t,s^γ =1if ŷ_t,s∈ [L_t,s^γ, H_t,s^γ] 0if ŷ_t,s∉ [L_t,s^γ, H_t,s^γ],where L_t,s^γ and H_t,s^γ are the lower and upper limits of the interval. We compare the relative frequency of interval hits, γ̂_s = 1/#OOS∑_t ∈OOS I_t,s^γ, to the pre-specified coverage rate γ, which we set to 68 %. The last metrics tests for auto-calibration based on probability integral transforms (PITs). Following <cit.> we use the PIT of the implied forecast distribution based on the energy score, which is given byU_ES,t,s =𝒫_𝔣( E_𝔣 || ŷ_t,s-ŷ'_t,s || ≤ E_𝔣 || ŷ_t,s-y_t,s || ), where || · || gives the Eucledian distance and 𝒫_𝔣 and E_𝔣 the probability and the expected value under the forecast distribution, respectively. We then test for standard uniformity of {U_ES,t}_t=1^T. A model is said to be well calibrated if we do not reject the null hypothesis of auto-calibration. This implies that regardless of any further transformations the forecast distribution will not improve. Figure <ref> presents the additional scores for both evaluation samples. 
The left panels focus on the sample ending before the Covid-19 pandemic (i.e., 2007Q1 to 2019Q4) whereas the right panels present the full sample results (i.e., 2007Q1 to 2022Q4).Overall, the results confirm the promising performance of HNN. The relative CRPS shows substantial improvements of our approach against the AR model for most targets. The coverage rate is close to the selected level (which is 68 %) and shows no tendency of structurally underestimating the variance. Unlike other models, the HNN shows no evidence against auto-calibration. Analyzing each metric in more details gives some interesting insights. Comparing the relative CRPS of the HNN with other nonlinear models underlines its remarkable density forecasting performance. HNN outperforms the linear benchmark in all cases except for high-order forecasts of housing starts and one-step ahead inflation forecasts when considering the full sample. Largest gains compared to the AR benchmark can be found for the unemployment rate.This holds for all models.In the case of HNN,this is attributable to excellent point and density forecasts (almost 30% reduction in RMSE for both horizons combined with a very high R^2_|ε_t| for h=4 in Table <ref>). BART often yields competitive results except for inflation.DeepAR and BLR often show similar density forecasting performance to the AR benchmark, yielding scores close to 1. The NN models with time-varying volatility are often worse and only sometimes better (e.g., for unemployment) than linear models.The coverage rate shows substantial underestimation of the variance for BART and DeepAR. The predictive densities of BART and DeepAR are too narrow for most targets and both evaluation samples. AR_ SV gives a rather mixed picture with density predictions of GPD growth and inflation one-step ahead being too wide while higher-order densities being too narrow. HNN, on the other hand, gives ratios close to the 68% level. For two target variables (i.e., GDP growth and four steps ahead unemployment), we get densities that are rather wide whereas the predictive distributions for housing starts and short-term inflation show slight underestimation of the variance. AR_ G as well as BLR tend to give wide predictive distributions with coverage ratios close to 80 % or above. Similarly, the NNs with SV or GARCH yield rather wide densities. The test results for auto-calibration, presented in the last two panels in Figure <ref>, suggest that the models are well calibrated for most targets with the exception of DeepAR. In this case, the test clearly rejects auto-calibration for inflation and housing starts regardless of the forecast horizon and GDP growth for four steps ahead predictions. When considering the full sample, this also holds for the unemployment rate. The other competitors give low p-values for at least one or two of the estimated target variables and even more when focusing on the full sample. HNN, on the other hand, shows auto-calibrated results for all targets when evaluating the full sample and all but one target (i.e., GDP four steps ahead at the 10% level) when considering the periods before the Covid-19 pandemic.§.§ An Understanding of BART and DeepAR's DifficultiesSome results from the previous section warrant a digression from our main thread of investigation.As is particularly apparent from coverage results in Section <ref>,but also from log scores throughoutSection <ref> and additional results in Table <ref>,the quality of BART and DeepAR's probabilistic forecasts is rather uneven. 
In both cases,point forecasts often rank very highly but ℒ and other density evaluation metrics show clear signs of distress.While BART's problems are frequently contained to in-sample historical estimates (quite visible in Figures <ref> and <ref>),that of DeepAR are rather generalized.This section first describes the facts,and then provides suggestive explanations for the phenomena. Finally, we discuss what can be done in both cases to alleviate such substantial problems so that,hopefully,the operation produces some wisdom to draw from for future applications of such methods.First, let us stress that it is certainly not excluded that an extensive amount of tuning for both could non-trivially improve the probabilistic performance of both approaches,but this is not what is typically seen in the literature <cit.>.There are practical reasons, of course,but also statistical ones,like the instability of cross-validation in such environments,or that tuning hyperparameters is not exactly Bayesian. Lastly,we typically expectproper calibration (whether the conditional mean is very proficient or not) to be obtained independently of tuning.For instance,this is what we get from AR_ SV, HNN,and NN_ SV.A particularly telling example is the following.HNN turns in similar outperformance for both GDP and unemployment at the two horizons under study—as one would rightfully expect from two strongly cross-correlated targets. While HNN is decisively superior for both targets at s=1,BART and HNN yield a nearly identical (and stellar) log score for GDP (s=4). Yet,BART ranks last among all models in terms of log score for Unemployment Rate (s=4) whereas HNN reaches gains similar to GDP (s=4).Rarely does it hurt to look at the data underlying summary statistics, and we use the case of unemployment at s=4 to guide the following discussion.Moreover, it is a target for which the inherent "true" uncertainty is manifest.Figure <ref> reports times series corresponding to the second panel of Table <ref> in Appendix <ref> where BART and DeepAR are reported to have fine RMSEs with dismal log scores. We compare conditional means of HNN, BART and DeepAR to the realized value in the left panel and show conditional volatility in the right panel. In-sample, BART's and DeepAR's fitted values nearly perfectly overlap with the realized ones, and are suggestive of an unrealistically good predictive ability. Accordingly, BART estimates a very low volatility path with little fluctuations throughout these periods.DeepAR inconsistently estimates the general volatility level to be at roughly the same level as HNN (for this specific case) and thus reports massive over-coverage in-sample.While BART's volatility estimates seem to be reasonable for the hold-out sample, those of DeepAR follow a strange path.As mentioned above, this behavior is not exceptional to this case,but is recurrent. For instance,a similar behavior is observed for S&P 500 (Figure <ref>) as well as housing starts (Figure <ref>) where BART reports noticeably lower estimates of average volatility than either AR_ SV,AR_ G or HNN.This suggests probabilistic forecasts of these models often lack appropriate levels of uncertainty and historical analysis based on in-sample results, which are frequently conducted in macroeconometrics, may provoke misleading implications and, hence, should be interpreted with care. What is causing this? 
In short, benign overfitting is,as the name suggests,benign for the conditional mean (out-of-sample).In our results,BART and DeepAR exhibit this phenomenon that has been described for neural networks <cit.> and tree ensembles <cit.>.Without further precautions,this overfitting is, however, malign for the conditional variance. The discussion of Ingredients2 and 3in Section <ref> already alluded that such problems would arise if not addressed directly,which we do for HNN by introducing the volatility emphasis parameter ν and using blocked subsampling to recalibrate the variance hemisphere.In practical terms, if one is only interested in minimizing out-of-sample RMSEs, the near-perfect in-sample fits attained by BART and DeepAR can safely be ignored. However, when it comes to deeper investigations, such as uncertainty quantification or in-sample analysis, the best course of action is to tread lightly. At this juncture, it becomes preferable to inspect separately the two models. The overfitting issues of BART are more blatant in-sample. Hence, our results suggest that one should be careful using BART estimates in-sample to draw any economic conclusion even if BART yields the best (point or density) forecasts results out-of-sample. Note that the phenomenon is even more pronounced at the monthly frequency where pure noise is prevalent. Given that BART provides a reasonably convenient environment to go beyond mere conditional mean modeling and think about more structural objects (like some kind of time-varying parameters or any latent states), it is important to have reliable historical estimates. In the literature, we find possible solutions tailored to Random Forests, for which <cit.> finds a similar pattern, such as using blocked out-of-bag quantities <cit.>. However, out-of-bag sampling appears computationally unfeasible and would change the meaning of the Bayesian setup.More promising is some extensive case-by-case empirical tuning of the level of volatility prior. Again, some careful thinking is necessary about how such tuning should be conducted as BART will also have a preference for overly parameterized models in a pseudo-out-of-sample setup similar to what we see for out-of-sample. A possibility is to use an auxiliary model immune to such complication, like an autoregressive process.Or,when a preferred BART specification is chosen,to increase such priors as long as the out-of-sample fit is mostly intact – analogous to reducing the unnecessary depth of trees in a Random Forest <cit.>. Note that, while not directly addressed here, some of BART's in-sample overconfidence spills out on the test sample mostly for calibration metrics, making it not as reliable as HNN or AR_ SV.Some of the aforementioned solutions could likely help in that regard.Regarding DeepAR, one could legitimately presume that early stopping and dropout could help avoiding the perfect in-sample fit seen in our results. 
Yet, the double descent phenomenon and associated neural networks oddities often make it difficult to use traditional regularization intuition as guidance.Moreover, as we have noted in Section <ref>,unconstrained fully nonparametric models of mean and variance can overfit using either moment and the allocation between the two will depend on a mostly unknown mapping between obscure architecture choices and final results.In further (unreported) experiments,adjusting the number of layers and neurons can sometimes help or hurt, and in a mostly unpredictable way, i.e.,there is no clear mapping between the total number of neurons and density forecasting underperformance.The regularities are rather that (i) RMSEs are only remotely affected by such choices,(ii) in-sample nominal coverage is often extremely high,and (iii) out-of-sample coverage varies greatly but primarily on the low end. The most promising way of proceeding seems to be cross-validation based on blocked pseudo-out-of-sample density forecasting evaluation metrics. However, this would entail an unfeasible computational burden and, given the small sample size, probably places excessively high expectations on the power of cross-validation. §.§ On the Costs and Benefits of RecurrenceRecurrent neural networks are specifically designed to process sequential data <cit.>. By keeping an internal memory state, which is used as additional input at each time step, RNNs capture patterns and dependencies over time. That is, RNNs receive information from two sources: external shocks to the system and the internal state from previous periods and as such, mimic the structure of a (G)ARCH process.While RNNs have become popular in various domains such as natural language processing or speech recognition <cit.>, they also come with limitations. Due to their recurrent nature and sequential processing, training can get computationally expensive, especially for long time series. Even more troublesome, RNNs are susceptible to vanishing or exploding gradients. To address these challenges, we implement a LSTM network <cit.>, which uses a gating mechanism to filter pertinent information, and restrict each hemisphere to use only one recurrent layer, effectively reducing the depth by half compared to the original architecture. All other hyperparameter choices are unchanged (see Section <ref>). As is evident from Table <ref> in Appendix <ref>, there is no need for taking on the burden of endowing HNN with a recurrent (LSTM) structure. Across all targets we find that gains from HNN-LSTM are either small or nonexistent.For point forecasts we get very similar results from both model specifications. HNN yields lower RMSEs in most cases, regardless of the forecast horizon. In cases where HNN-LSTM beats our standard specification, it is by very small margins. The only exception is the one-step ahead prediction of inflation and housing starts when considering the full sample. 
This difference in performance for inflation, however, can be diminished when considering the structural approach presented in Section <ref>.When focusing on density predictions, HNN yields better forecasting accuracy as its recurrent counterpart.We conclude that we can easily extend our proposed model to more complex types of neural networks, which yield, however, very similar results at the cost of higher computational burden.§.§ A Comparison with Quantile Regression ApproachesSince all benchmarks considered so far estimate the variance process reactively, we expand our set of competitors by quantile regressions, which feature proactivity. We include a linear Bayesian quantile regression (BQR) with shrinkage, a quantile version of BART (QBART) as well as of the AR(2) model (QAR). By estimating different quantiles of the predictive distribution based on the input matrix X_t, quantile regressions directly and proactively model the uncertainty surrounding the response variable. This makes them a fair but hard-to-beat benchmark. However, estimating multiple quantiles for each target and horizon adds complexity and computational burden to our exercise. Besides rather statistical phenomena such as quantile-crossing <cit.>, which describes the lack of monotonicity when estimating conditional quantile functions, results may not have a straightforward interpretation in some cases. Consider, for example, the Neural Phillips Curve model with proactive volatility. Its extension to quantile regression would entail the estimation of an output gap measure for each quantile and, thus, raise the question of how to interpret the meaning of the resulting slack variables.We evaluate the (tail) forecasting accuracy of all models using log scores and quantile-weighted continuous ranked probability score (CRPS_ω). The weights are set such that the metrics allows for analyzing downside risks via the left tail and upside risks via the right tail of the distribution. To complete our analysis we also check the forecasting performance with respect to the center of the distribution.[Unweighted CRPS for HNN and the main benchmark models can be found in Figure <ref>.] For details on model specification and implementation of the quantile regression approaches as well as the additional evaluation metrics we refer to Appendix <ref>. Results are presented in Table <ref> and Table <ref> in the appendix. We find that HNN remains highly competitive when investigating the predictive distribution in more detail and comparing its performance to quantile regression approaches. For real activity targets HNN either ranks first or is very close to the best performing model for both horizons, both samples and in each part of the distribution.This is remarkable since non-normality and, in particular,asymmetryof conditional distribution have become a major focal point of applied macroeconometric research.In a well-known article,<cit.> show that when it comes to estimating the conditional distribution of GDP, quantile regressions perform well because the resulting distributions are left-skewed in recessionary periods and closer to symmetry during expansion.We find that,even though HNN builds upon the usual normality assumption,its sophisticated mean and variance functions provide the necessary flexibility to adjust and capture dynamics in the tails.HNN yields results very close to the Bayesian quantile regression for one-step ahead GDP growth and even outperforms it when considering the full sample. 
For four steps ahead, QBART tops the list of competing models but is again closely followed by HNN, especially when including the observations of the Covid-19 pandemic.Therefore,HNN's results are always in the ballpark of the (ex-post) best quantile regression model.Turning to the results for the remaining targets, HNN outperforms all competitors for the unemployment rate one-step ahead and for four steps ahead when evaluating the sample up to the Covid-19 pandemic. Here, we find substantial gains in the tails as well as the center of the distribution. For higher-order forecasts of the unemployment rate (when considering the full sample) HNN yields highly competitive results compared to QBART. Similarly, QBART turns out to be the main competitor for higher-order density predictions of inflation.For the one-step ahead case we often find strong performance of BQR (see, e.g., GDP growth, housing starts, S&P 500), except for inflation where HNN yields the lowest CRPS. For S&P 500, especially for higher-order forecasts, BART remains the best performing model. Even though the Bayesian models, either plain or as quantile extensions, turn out to be very difficult to beat, HNN follows closely and thus, captures upside and downside risks to a similar extent.§ A NEURAL PHILLIPS CURVE WITH PROACTIVE VOLATILITY Given their high importance for economic policy decisions in central banks and governmental institutions, inflation forecasts should be decent and preferably interpretable through some basic macroeconomic reasoning. No less important is to be aware of the level of uncertainty associated with a specific inflation forecast. As we have seen, HNN is a promising toolthat manages to incorporate large amounts of data and captures nonlinearities via its sophisticated mean and variance specification.However,the anatomy of h_m remains mostly unknown.In this section, we achieve both goals by bringing back interpretability of the conditional mean for this particular target and, at the same time, modeling the time-varying level of decency with the variance hemisphere.The resulting model,of appreciable architectural complexity, is <cit.>'s Neural Phillips Curve embedded within this paper's probabilistic forecasting methodology (henceforth, HNN-NPC). Precisely, we impose a Phillips Curve structure on the fully nonparametric h_m in (<ref>), resulting inh_m^NPC(X_t;[θ_ℰ^LR,θ_ℰ^SR,θ_g,θ_c]) = h_ℰ^LR(X_t^ℰ_LR;θ_ℰ^LR)+h_ℰ^SR(X_t^ℰ_SR;θ_ℰ^SR) + h_g(X_t^g;θ_g)+ h_c(X_t^c;θ_c)where X_t^i with i ∈{ℰ_LR, ℰ_SR,g,c } being the subsets of columns that correspond,respectively,to long-run expectations, short-run expectations,"output gap",and commodity prices hemispheres.Their definitions in terms of FRED-QD are identical to that of the original paper.The exact list of memonics can be found in Appendix <ref>. The new hemispheric structure of the conditional mean and its goal to gain interpretability implies the need to drop the common core at the entrance of the network. Regarding the volatility process, this results in a more structured one-way directional flow of h_m into h_v. 
As there is no overwhelming theoretical reason to constrain the volatility process to solely rely on PC-inspired inputs we modify h_v in (<ref>) forh_v^NPC(X_t;[ θ_ℰ^LR,θ_ℰ^SR,θ_g , θ_c,θ_v,θ_ṽ]) = h_v ( [ h_ℰ^LR(X_t^ℰ_LR; θ_ℰ^LR),h_ℰ^SR(X_t^ℰ_SR;θ_ℰ^SR),h_g(X_t^g;θ_g),h_c(X_t^c;θ_c) ,h_ṽ(X_t;θ_ṽ) ] ; θ_v )where h_ṽ(X_t;θ_ṽ) is a subnetwork that serves in processing all variables of our high-dimensional data set to extract a time series for h_v^NPC that carries relevant signals for the conditional variance that are not captured by the four series summing up to the conditional mean.All subnetworks (i.e., h_ṽ and the four subnetworks of the conditional mean) precede h_v^NPC. Since conditional mean hemispheres contain an unequal number of predictors,leading to unequal a priori importance assigned to each one,we rescale data as proposed in <cit.>.Precisely,we divide the scaledpredictors of X^i by √(#.columns(X^i)/#.columns(X)).This scheme is not necessary for the h_ṽ subnetwork because it takes the whole X and its relative influence is only guided by its total number of neurons and the volatility emphasis parameter ν.Since the number of effective parameters is at least four times superior in the conditional mean subnetwork,the X'sentering h_ṽ are multiplied by a factor of 5. Other hyperparameters are the same as described in Section <ref> except that each hemisphere now consists of 200 neuronsand ν is now obtained from the blocked out-of-bag errors of <cit.>'s plain NPC model.First,we can compare whether the incoming restrictions hurt or improve predictive ability both in terms of RMSE and probabilistic forecasting metrics.Table <ref> in the appendix shows promising results with the more "specialized" HNN-NPC improving in a non-trivial fashion nearly all performance metrics including or excluding post-2020 data.This is not completely jarring because (i) relevant restrictions will bring down variance more than they incur bias and (ii) distilling FRED-QD to include only relevant variables (according to loose macroeconomic theory) can significantly improve the performance of an otherwise dense conditional mean model.The first row of Figure <ref> reports the usual conditional mean and volatility,and now compares with a small Bayesian model of similar aim.Indeed,<cit.> provide a model of trend inflation with a Phillips Curve (with unemployment as forcing variable),drifting coefficients,and stochastic volatility.The first panel highlights that HNN-NPC fares particularly well during two turbulent eras by (i) avoiding the missing disinflation following the Great Recession and (ii) capturing a non-trivial part of the 2021-2023 surge in inflation.In contrast, CKP exhibit the typical observation that traditional PC-based forecasts were not only reactive rather than proactive during recent years,but also consistently "biased" downwards up until 2023. Regarding volatility estimates,HNN-NPC's proactive behavior is apparent during the Great Recession,with h_v,t^NPC rising from its bed a few quarters before the SV process embedded in CKP—the latter erupts following the 2008 oil price crash and then apparently diffuses its effects for almost three years.Both volatility estimates spike following the initial Covid-19 crash. 
The HNN-NPC spike is particularly pronounced and explains why it yields the best log score despite a highly inaccurate deflation forecast in 2020Q2: the confidence interval for that particular point in time is so large it also includes positive inflation.As such,the erroneous point forecast is discounted in the probabilistic performance metrics since HNN-NPC practically "knew" it was turning in a forecast of extremely low reliability.The massive uncertainty quickly dissipates to a more reasonable level (comparable to that of post-Great Recession), then hits a low point before picking up again following the invasion of Ukraine. For additional comparison,we also report contributions from a canonical PC regression in the second row of Figure <ref>.In the case of "PC",those are constructed from a traditional PC specification– including two lags of inflation and the Congressional Budget Office (CBO) estimate of the output gap – with time-varying coefficients obtained from <cit.>'s two-steps ridge regression approach.As discussed in <cit.>,the key statistical distinction between HNN-based inflation modeling and the two models included for reference purposes is the nonlinear processing of a rich real activity data set.The use of neural networks serves as a convenient way to achieve that goal within a "generalized" PC environment.Looking at contributions in the second row,we reaffirm some key findings from <cit.>. Among other things,we get a rapid closing of g_t's contribution to inflation post-Great Recession.This contrasts with CKP and PC whichreport more lasting downward pressures following 2008.The next observation is the strikingly different behavior of h_g,t starting from last quarter of 2020.HNN-NPC sees h_g,t contributing strongly to the highest quarterly inflation forecasts in a generation.The two benchmarks are not nearly as agitated in 2021 and 2022, and report forecasts and contributions that fit within the popular PC-based narrative in 2021 that inflation would be transitory. From Figure <ref>,we also get some additional insights from updating the data up to 2023Q2 <cit.>.First,we see that g_t is still pushing forecasts above the target range,but its effect has massively shrunk from the highs of 2021,mostly starting from 2022.As of 2023Q2, the contribution of real activity to inflation as estimated by HNN-NPC is about twice as much as that from CKP, yet it is the closest they have been to agreement in the last three years.The contribution of expectations seems to slowly decrease to pre-pandemic levels,but is appreciably higher than that of CKP, which has already settled to rather low levels.HNN-NPC's and CKP's point forecasts for 2023Q2 are roughly similar (with CKP gaining in performance in 2023),and so is their latest assessment of volatility.This agreement aligns with the observation that their predictions mainly differ during a few localizedepisodes,and 2023Q2 falls outside of that. Usual disagreement can be traced back to their differing evaluation of economic slack and expectations,especially when those are far from their mean. Key disagreeing segments for those components are the missing disinflation post 2008 and non-transitory inflation surge of 2021-2023.However, as of 2023Q2 the unemployment gap of CKP has mostly caught up with mounting real activity pressures expressed by HNN, resulting in the recent relative concordance between the two models. 
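As an aside, the log-score mechanics invoked above for the 2020Q2 episode — a badly missed point forecast being "forgiven" because it came with very wide predictive bands — is easy to verify numerically. The numbers below are purely illustrative and are not the paper's forecasts; they only show that, under a Gaussian log score, an honest but far-off density can beat an overconfident one whose mean is closer to the realization.

```python
from scipy.stats import norm

y = -3.0                                              # realized value (illustrative only)
overconfident = norm.logpdf(y, loc=0.0, scale=1.0)    # mean closer to y, tiny variance: about -5.42
honest_but_off = norm.logpdf(y, loc=-8.0, scale=6.0)  # mean further off, wide bands: about -3.06
```

The second forecast has the larger point error yet the better (higher) log score, which is exactly the discounting effect discussed for HNN-NPC's 2020Q2 deflation forecast.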
§ CONCLUDING REMARKS We provide a way to conduct density forecasting where both conditional mean and variance are the outputs from neural networks. Results show that in many cases, HNN picks up early signals of increased future volatility before the occurrence of large prediction errors. This proactive behavior often gives a significant advantage over stochastic volatility specifications that are frequently used to close linear and nonlinear macroeconomic forecasting models. Moreover, its nominal coverage and overall probabilistic forecasting performance are much more consistent across targets and experiments than what we found for two leading nonlinear nonparametric machine learning alternatives. Therefore, HNN is an effective new tool for density forecasting in itself and a convenient building block for deep-learning based macroeconomic models when timely uncertainty quantification is needed. Concerning the latter aspect, we provided such an application by merging this paper's architecture with that of <cit.>'s Neural Phillips Curve. There are many possible extensions. A conceptually obvious yet very relevant one is that of a multivariate normal predictive density, providing an MLE-based alternative for the estimation of (possibly large) nonlinear/time-varying vector autoregressions. § APPENDIX §.§ Additional Figures and Results §.§ A FRED-MD Detour Since researchers and policymakers are often interested in more timely forecasts with a sampling frequency higher than quarterly, we expand our exercise to a monthly setup. This way we also gain insights into our model's behavior and performance when dealing with more noisy data. We apply our proposed model and our rich set of competitors to the FRED-MD database of <cit.>. Again, we explore the model's density and point forecasting performance by predicting different targets including real activity variables, monetary aggregates and inflation series in the US. In particular, we forecast nonfarm payroll, industrial production, real personal income, personal consumption expenditures, retail and food services sales, M2 money stock, and producer price inflation. We compute forecasts for one month, three months, six months, and twelve months ahead. Similar to the quarterly case, our hold-out sample starts at the beginning of 2007 (i.e., 2007M1) and ends in 2022 (i.e., 2022M12). Results are presented in Table <ref>, Table <ref>, Table <ref> and Table <ref> for one-month, three months, six months and twelve months ahead, respectively. In line with the findings for the quarterly application, HNN yields good forecasting results for real activity variables. Focusing on one-month ahead forecasts, Table <ref> shows that BLR and AR with time-varying volatility give the lowest RMSE and log scores across many targets. However, for most of them HNN gets very close or even manages to outperform them. For example, we obtain very similar results from the best performing model and HNN for nonfarm payroll, industrial production and real personal income. For real personal consumption expenditures and retail sales, HNN outperforms all other benchmarks for the evaluation sample up to 2020 and remains highly competitive for the full sample. All nonlinear models suffer from inferior point forecasts when it comes to producer price inflation and the M2 money stock.
Yet, they yield highly competitive density predictions. Overall, short-run monthly results reveal that alternative specifications have a hard time improving on the performance of linear benchmarks. Yet, HNN is always near the top of the pack, highlighting its reliability even when the simplest approach comes out on top. Nonlinearities seem to gain in importance for higher-order forecasts – a finding that echoes that of <cit.>. In the case of three and six months ahead density forecasts (see Table <ref> and Table <ref>), we find that either HNN or BART yield the lowest log scores for all real activity and employment variables. Also, the nonlinear models outperform the simpler benchmarks for point forecasts when including the post-Covid periods. This holds for all cases when considering the six months ahead forecast horizon and for most of them when evaluating the three months ahead horizon. A similar picture arises from forecasting our monthly target set twelve months ahead (see Table <ref>). BLR often remains hard to beat, but nonlinear models tend to outperform it for density predictions and the full sample. Notably, HNN catches up with BLR when forecasting producer price inflation twelve months ahead. It yields highly competitive point forecasts for both samples and the best density prediction for the full sample. §.§ Results with Euro Area Data In this section we apply our model to euro area data. This entails a major challenge: time series for the euro area are short, most of them only dating back to the early 2000s. The US data includes several business cycle phases and, of special relevance recently, high inflation periods. There is very little if any of that for our post-2000 euro sample. Machine learning tools – with their edge over simpler methods depending on how much history they can learn from – are on difficult terrain here. Nonetheless, due to their ability to flexibly model nonlinearities, several recent contributions have shown that using machine learning models for the euro area is promising <cit.>. Lastly, a more subtle problem is that the size of the overall sample limits the span of the test sample, which, for instance, excludes the presence of allegedly more predictable recessions. For our exercise, we use the Euro Area Real Time Database provided by the European Central Bank <cit.>. The data set encompasses 165 time series covering several sectors of the real economy as well as financial market developments in the euro area. Due to missing data, we include 145 series spanning the months 2002M2 to 2022M8. Our hold-out sample runs from 2015M1 to 2022M8. We seasonally adjust all series (if applicable), transform them to stationarity (mostly corresponding to the US data set for the respective series) and standardize the data. Our targets comprise industrial production, unemployment, inflation and the stock market index (Dow Jones Eurostoxx 50). Forecast horizons are one month, three months, six months and twelve months ahead.
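For concreteness, the preprocessing just described could look like the following sketch. The transformation codes and function names are illustrative (the text only states that series are seasonally adjusted where applicable, transformed to stationarity roughly as in the US data set, and standardized); seasonal adjustment itself is omitted here.

```python
import numpy as np
import pandas as pd

def make_stationary(s: pd.Series, code: int) -> pd.Series:
    # Simplified FRED-style transformation codes (illustrative subset):
    # 1 = level, 2 = first difference, 5 = log first difference
    if code == 1:
        return s
    if code == 2:
        return s.diff()
    if code == 5:
        return np.log(s).diff()
    raise ValueError(f"unsupported transformation code {code}")

def preprocess(panel: pd.DataFrame, codes: dict) -> pd.DataFrame:
    out = pd.concat({c: make_stationary(panel[c], codes[c]) for c in panel}, axis=1)
    out = out.dropna()
    return (out - out.mean()) / out.std()     # standardize each series
```

Given a dictionary mapping each series to a transformation code, `preprocess(panel, codes)` returns the stationary, standardized panel that the models would be fed.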
Results. Overall, we find that AR models are hard to beat for monthly targets (see Table <ref> to <ref>), particularly at shorter horizons. Similar observations are made for the US application with monthly data (see Section <ref>). Another common finding is that HNN yields a remarkable performance for real activity. Point and density forecasts for industrial production rank either first or very close to the best performing model for all forecast horizons as well as both evaluation samples (i.e., including or excluding post-2020 periods). Also, HNN explains a high share of the variation in realized volatility measured by R^2_|ε_t|, especially for higher-order forecasts. Visually inspecting both hemispheres gives some insights into properties of the mean and variance paths and thereby HNN's good performance. As shown in Figure <ref>, the volatility paths for HNN and selected benchmarks reflect highly uncertain as well as tranquil times of the European business cycle. Compared to its competitors, HNN marks not only the Great Recession but also the sovereign debt crisis during 2011-2013. Having overcome this long-lasting period of high fragility, the variance hemisphere shows a low and stable path until the economy was hit by the Covid-19 pandemic. While models equipped with SV show elevated uncertainty for post-2020 periods, HNN estimates a lower volatility path, which pays off in terms of log scores when including this period (see, e.g., Table <ref>). In line with previous findings, our nonlinear competitors tend to estimate low volatility, leading to inferior density forecasts. We see this pattern for multiple steps ahead forecasts of industrial production and even more strikingly for unemployment. BART's point forecasting performance is sometimes remarkable, in line with the traditional wisdom on tree ensembles and small samples <cit.>. However, it shows rather poor performance for density predictions—by constantly underestimating volatility as per the phenomenon described in Section <ref>. HNN, on the other hand, yields slightly smaller gains but remains competitive with AR_SV for both evaluation metrics. For the stock market index, we get good point forecasting performance from AR with time-varying volatility and BART, closely followed by HNN. For density predictions, HNN beats BART in all cases and ranks close to the AR process. Figure <ref> reveals that the variance hemisphere proactively estimates high volatility during the Great Recession and the Covid-19 pandemic. It peaks before SV-based benchmarks and levels off rather quickly in the following periods. HNN is the only model estimating heightened uncertainty for the full duration of the sovereign debt crisis. We see variance decreasing in 2013, when financial markets regained trust after Mario Draghi's declaration to do "whatever it takes" in order to save the euro <cit.>. The following years are characterized by stability, well captured by HNN's variance hemisphere. For the Covid-19 period, HNN shows a timely and severe peak of uncertainty, already calming down in late 2020. Similar to US inflation predictions, HNN in its unrestricted form has difficulties beating the AR model. Especially when it comes to capturing the 2021-2022 surge, neural network models suffer from rather large prediction errors.
Density forecasts remain competitive, showing the adaptability of HNN in terms of uncertainty and responsiveness to its own failures. Unsurprisingly, we also see the other nonlinear specifications struggling with the post-Covid inflation path. Point forecasts of linear models show the highest accuracy across all horizons when considering the full sample. For density forecasts we find that BART and NN_SV (similar to HNN) yield competitive results. Even though BART and NN with time-varying volatility perform well, AR_SV often remains the best performing model. §.§ What are the hemispheres made of? To shed light on which variables drive the hemispheres in our network, we conduct a variable importance (VI) exercise similar to <cit.> and <cit.>. The importance of variable k (for k=1, …, K) for each hemisphere j (i.e., h_m and h_v) is determined in three steps. First, variable k and its lags are randomly shuffled. Second, the respective hemisphere is recomputed (but not re-estimated) with the shuffled variable k, all else equal. Finally, we compute the deviation of the new estimate based on the transformed data (h_j(X̃_t;θ_j)) from the baseline result (h_j(X_t;θ_j)). The standardized VI_k^j, in terms of the % increase in MSE, is then given by VI_k^j = 100 × (1/T ∑_t=1^T (h_j(X̃_t;θ_j) - h_j(X_t;θ_j))^2 / Var(h_j(X_t;θ_j))). Figures <ref> to <ref> report VI results for the targets discussed in Section <ref>. §.§ Benchmark Models Bayesian Linear Regression (BLR). The Bayesian linear regression model serves as a high-dimensional, linear benchmark in our rich set of competitors. To achieve parsimony, we implement the Normal-Gamma (NG) shrinkage prior of <cit.>, which belongs to the class of hierarchical global-local shrinkage priors and, as such, imposes global shrinkage common to all parameters as well as local shrinkage specific to each of them. Moreover, we estimate the model using stochastic volatility to account for time variation in the magnitudes of error terms. Formally, the model is given by y_t = X_t β + ε_t, ε_t ∼ 𝒩(0,σ^2_t), with the following prior distribution on the kth element of β (for k = 1, …, K): β_k | ψ_k, λ̃ ∼ 𝒩(0, ψ_k), ψ_k | λ̃ ∼ 𝒢(ϑ, ϑλ̃/2), λ̃ ∼ 𝒢(e_0, e_1). The idiosyncratic scaling parameter, which ensures an individual degree of shrinkage for each element in β, is denoted by ψ_k, whereas λ̃ gives the global shrinkage parameter. ϑ controls the tail behavior of the prior and is assumed to follow ϑ ∼ exp(1). For the global shrinkage hyperparameters we assume e_0 = e_1 = 0.01. To estimate the model we use a Markov chain Monte Carlo (MCMC) algorithm which iterates through the following steps. First, we draw the linear coefficients from a standard Gaussian posterior taking well-known forms. These can be found in, e.g., <cit.>. Next, we sample the additional parameters related to the NG prior. For the corresponding posteriors, we refer to <cit.>. The stochastic volatilities are drawn by employing the algorithm proposed in <cit.>. We repeat these steps 20,000 times and discard the first 10,000 draws as burn-in. Bayesian Additive Regression Trees (BART). An alternative way to approximate the function f is using Bayesian additive regression trees <cit.>. The model accomplishes this by building an ensemble model of regression trees. Let Λ_d denote a single regression tree for d = 1, …, D regression trees. We then take the sum over all D regression trees to approximate f: f(X_t) ≈ ∑_d=1^D Λ_d(X_t | 𝒯_d, ρ_d). Each regression tree depends on the tree structure, 𝒯_d, and the terminal node parameter ρ_d.
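Stepping back briefly to the variable-importance exercise above, a minimal sketch of the permutation-based computation may make the three steps concrete (the BART hyperparameter and prior choices are picked up again just below). It assumes the trained hemisphere is available as a plain function of its inputs; whether variable k and its lags are shuffled jointly with a single permutation (as done here) or independently is our assumption, and the function names are ours.

```python
import numpy as np

def variable_importance(h_j, X, theta, var_cols, seed=0):
    """Permutation-based VI for one trained hemisphere h_j, as a % increase in MSE.

    h_j      : callable (X, theta) -> length-T array, the fitted hemisphere path
    var_cols : column indices holding variable k and its lags (shuffled jointly here)
    """
    rng = np.random.default_rng(seed)
    base = h_j(X, theta)                          # baseline h_j(X_t; theta_j)
    X_tilde = X.copy()
    perm = rng.permutation(X.shape[0])
    X_tilde[:, var_cols] = X[perm][:, var_cols]   # shuffle variable k (and its lags) over time
    shuffled = h_j(X_tilde, theta)                # recompute, but do not re-estimate theta_j
    return 100.0 * np.mean((shuffled - base) ** 2) / np.var(base)
```

Repeating this over k, and averaging over several random permutations, would give the per-variable importances reported in the figures.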
Regarding the choices on hyperparameters and priors we rely on <cit.>. In short, we set D to 250 and use a tree-generating stochastic process for the prior on the tree structure. This process determines the probability of a given node being nonterminal, selects the variables and estimates the corresponding thresholds used in the splitting rule that spawns left and right children nodes. The priors on the terminal nodes are conjugate Gaussian prior distributions with data-based prior variances. In this setting, a certain amount of prior mass is centered on the range of the data and at the same time ensures higher degree of shrinkage with an increasing number of trees.0.25cm DeepAR. The DeepAR is an autoregressive neural network model based on a LSTM architecture <cit.>. It is designed for probabilistic forecasting and produces density predictions based on a user-defined distribution. In our applications, we use 2 hidden layers containing 400 LSTM cells with activation function being Hyperbolic Tangent, i.e., tanh(x) = e^x - e^-x/e^x + e^-x. Each hidden layer is subject to stochastic dropout with a rate of 0.2 during training only. We use Adam Optimizer with a learning rate of 0.001. The model is optimized according to the negative log-likelihood function over 20 epochs with a patience parameter of 5. 0.25cmBayesian Quantile Regressions. Our benchmarks based on quantile regressions include a linear Bayesian quantile regression model (BQR) as well as a quantile version of BART (QBART) and the AR(2) model (QAR). In general terms, we estimate the following model for quantile τ∈ (0,1):y_t = f_τ( X_t) + u_t,u_t ∼AL_τ(σ_τ)To sample from the asymmetric Laplace (AL) distribution we rely on the auxiliary representation of <cit.> given byu_t = μ_τυ_τ,t + π_τ√(σ_τυ_τ,t u_t), μ_τ = 1-2τ/τ(1-τ), π^2_τ = 2/τ(1-τ), υ_τ,t∼ℰ(σ_τ)This allows us to write (<ref>) as a conditionally Gaussian:ỹ_τ,t = f(X̃_τ,t) + u_t,u_t ∼𝒩(0,1)with ỹ_τ,t = (y_t - μ_τυ_τ,t)/(π_τ√(σ_τ,υ_τ,t)) and X̃_τ,t = (π_τ√(σ_τυ_τ,t) I_K)^-1 X_t. The prior on the scale parameter of the AL distribution is inverse Gamma with σ_τ∼𝒢^-1(3,0.3).In case of BQR, we define f_τ( X_t) =X'_t β_τ and estimate the large-scale model with a NG shrinkage prior. For QBART we approximate each function f_τ using a sum of regression trees. QAR is estimated with a weakly informative prior. For details on the posterior distributions, we refer to <cit.> as well as the description of BLR and BART above.We evaluate the (tail) forecast accuracy of our quantile regression approaches using log scores and quantile-weighted CRPS (CRPS_ω). CRPS_ω is computed as the sum of quantile scores (QS) over all quantiles <cit.>. 
The quantile score for quantile τ and forecast horizon s is defined as:QS_τ,t,s = (y_t,s - 𝒬_τ,t,s) (τ-1{y_t,s≤𝒬_τ,t,s}),where 𝒬_τ,t,s is the point forecast at quantile τ and 1 denotes an indicator function taking value 1 if the true value is at or below the quantile forecast and 0 otherwise.The quantile-weighted CRPS is then given by:CRPS_t(ω_τ) = ∫_0^1 ω_τQS_τ,t d τ.We compute quantiles τ∈{0.05,0.10,…,0.90,0.95}.§.§ Mnemonics for HNN-NPC[language=R] #These are for HNN-F.Add "trend" to the first three hemispheres to get HNN.real.activity.hemisphere <- c("PAYEMS","USPRIV","MANEMP","SRVPRD", "USGOOD" ,"DMANEMP","NDMANEMP","USCONS","USEHS", "USFIRE","USINFO","USPBS","USLAH","USSERV", "USMINE","USTPU","USGOVT","USTRADE", "USWTRADE","CES9091000001","CES9092000001", "CES9093000001","CE16OV","CIVPART", "UNRATE","UNRATESTx","UNRATELTx","LNS14000012", "LNS14000025","LNS14000026", "UEMPLT5","UEMP5TO14","UEMP15T26","UEMP27OV", "LNS13023621","LNS13023557", "LNS13023705","LNS13023569","LNS12032194", "HOABS","HOAMS","HOANBS","AWHMAN", "AWHNONAG","AWOTMAN","HWIx","UEMPMEAN", "CES0600000007", "HWIURATIOx","CLAIMSx","GDPC1", "PCECC96","GPDIC1","OUTNFB","OUTBS","OUTMS", "INDPRO","IPFINAL","IPCONGD","IPMAT","IPDMAT", "IPNMAT","IPDCONGD","IPB51110SQ","IPNCONGD", "IPBUSEQ","IPB51220SQ","TCU","CUMFNS", "IPMANSICS","IPB51222S","IPFUELS") SR.expec.hemisphere <- c("Y", "PCECTPI","PCEPILFE", "GDPCTPI","GPDICTPI","IPDBS", "CPILFESL","CPIAPPSL", "CPITRNSL","CPIMEDSL","CUSR0000SAC","CUSR0000SAD", "WPSFD49207","PPIACO","WPSFD49502","WPSFD4111", "PPIIDC","WPSID61","WPSID62","CUSR0000SAS","CPIULFSL", "CUSR0000SA0L2","CUSR0000SA0L5", "CUSR0000SEHC", "spf_cpih1","spf_cpi_currentYrs","inf_mich")commodities.hemisphere <- c("WPU0531","WPU0561","OILPRICEx","PPICMM")LR.expec.hemisphere <- c("trend")credit.hemisphere <- c("BUSLOANSx","CONSUMERx","NONREVSLx","REALLNx","REVOLSLx","TOTALSLx","DRIWCIL","DTCOLNVHFNM","DTCTHFNM","INVEST","nfci","nfci_credit","nfci_nonfin") | http://arxiv.org/abs/2311.16333v1 | {
"authors": [
"Philippe Goulet Coulombe",
"Mikael Frenette",
"Karin Klieber"
],
"categories": [
"econ.EM",
"cs.LG"
],
"primary_category": "econ.EM",
"published": "20231127213750",
"title": "From Reactive to Proactive Volatility Modeling with Hemisphere Neural Networks"
} |
Non-Bloch band theory of generalized eigenvalue problems Yuto Ashida Nov 2023 ======================================================== We present Symphony, an E(3)-equivariant autoregressive generative model for 3D molecular geometries that iteratively builds a molecule from molecular fragments. Existing autoregressive models such as G-SchNet <cit.> and G-SphereNet <cit.> for molecules utilize rotationally invariant features to respect the 3D symmetries of molecules. In contrast, Symphony uses message-passing with higher-degree E(3)-equivariant features. This allows a novel representation of probability distributions via spherical harmonic signals to efficiently model the 3D geometry of molecules. We show that Symphony is able to accurately generate small molecules from the QM9 dataset, outperforming existing autoregressive models and approaching the performance of diffusion models. § INTRODUCTIONIn silico generation of atomic systems with diverse geometries and desirable properties is important to many areas including fundamental science, materials design, and drug discovery <cit.>. The direct enumeration and validation of all possible 3D structures is computationally infeasible and does not in itself lead to useful representations of atomic systems for guiding understanding or design. Thus, there is interest in `generative models' that can generate 3D molecular structures using machine learning algorithms.Effective generative models of atomic systems must learn to represent and produce highly-correlated geometries that represent chemically valid and energetically favorable configurations. To do this, they must overcome several challenges: * The validity of an atomic system is ultimately determined by quantum mechanics. Generative models of atomic systems are trained on 3D structures relaxed through computationally-intensive quantum mechanical calculations. These models must learn to adhere to chemical rules, generating stable molecular structures based solely on examples. * The stability of atomic systems hinges on the precise placement of individual atoms. The omission or misplacement of a single atom can result in significant property changes and instability.* Atomic systems have inherent symmetries. Atoms of the same element are indistinguishable, so there is no consistent way to order atoms within an atomic system. Additionally, atomic systems lack unique coordinate systems (global symmetry) and recurring geometric patterns occur in a variety of locations and orientations (local symmetry).Taking these challenges into consideration, the majority of generative models for atomic systems operate on point geometries and use permutation and Euclidean symmetry-invariant or equivariant methods. Thus far, two approaches have been emerged as effective for directly generating general 3D geometries of molecular systems: autoregressive models <cit.> and diffusion models <cit.>.In this work, we introduce Symphony, an autoregressive generative model that uses higher-degree equivariant features and spherical harmonic projections to build molecules while respecting the E(3) symmetries of molecular fragments. Similar to other autoregressive models, Symphony builds molecules sequentially by predicting and sampling atom types and locations of new atoms based on conditional probability distributions informed by previously placed atoms. However, Symphony stands out by using spherical harmonic projections to parameterize the distribution of new atom locations. 
This approach enables predictions to be made using features from a single `focus' atom, which serves as the chosen origin for that step of the generation process. It allows for the simultaneous prediction of the radial and angular distribution of possible atomic positions in a direct manner without needing to use additional atoms.To test our proposed architecture, we apply Symphony to the QM9 dataset and show that it outperforms previous autoregressive models and is competitive with existing diffusion models on a variety of metrics. We additionally introduce a metric based on the bispectrum for assessing the angular accuracy of matching generated local environments to similar environments in training sets. Finally, we demonstrate that Symphony can generate valid molecules at a high success rate, even when conditioned on unseen molecular fragments.§ BACKGROUNDE(3)-Equivariant Features: We say a E(3)-equivariant feature z ∈ℝ^2l + 1 transforms as the irreducible representation l under rotationand translation :zD^l()^T zwhere D^l is the irreducible representation of SO(3) of degree 2l + 1. D^l() ∈ℝ^(2l + 1) × (2l + 1) is referred to as the Wigner D-matrix of the rotation . As D^0() = 1 and D^1() =, invariant `scalar' features correspond to degree l = 0 features, while `vector' features correspond to l = 1 features. Note that these features are invariant under translation .Spherical Harmonics: The real spherical harmonics Y_l,m(θ, ϕ) are a set of real-valued orthogonal functions defined on the sphere S^2, indexed by two integers l and m such that l ≥ 0, |m| ≤ l.Here θ and ϕ come from the notation for spherical coordinates, where r is the distance from an origin, θ∈ [0, π] is the polar angle and ϕ∈ [0, 2π) is the azimuthal angle. The relation between Cartesian and spherical coordinates is given by: x = r sinθcosϕ, y = r sinθsinϕ, z = r cosθ. l corresponds to an angular frequency: the higher the l, the more rapidly Y_l,m changes over S^2. This can intuitively be seen by looking at the functional form of the spherical harmonics. In their Cartesian form, the spherical harmonics are proportional to simple polynomials. In one common choice of basis, l=0 is proportional to 1, l=1 is proportional to (x, y, z) and l=2 is proportional to (xy, yz, 2z^2 - x^2 -y^2, zx, x^2-y^2), as seen in Figure <ref>D-F.One important property of the spherical harmonics is that they can be used to create E(3)-equivariant features. Let Y_l(θ, ϕ)= [Y_l,-l(θ, ϕ), …, Y_l,l(θ, ϕ)] ∈ℝ^2l + 1 represent the collection of all spherical harmonics with the same l. Then, Y_l(θ, ϕ) transforms as an E(3)-equivariant feature of degree l under rotation: Y_l( (θ, ϕ)) = D^l()^T Y_l(θ, ϕ), whereis an arbitrary rotation, and (θ, ϕ) is interpreted as the coordinates of a point on S^2.The second important property of the spherical harmonics that we employ is the fact that they form an orthonormal basis for functions on the sphere S^2. Thus, for any function f: S^2 →ℝ, we can express f as a linear combination of the Y_l,m. Formally, there exists unique coefficients c_l ∈ℝ^2l + 1 for each l ∈ℕ, such that f(θ, ϕ) = ∑_l = 0^∞c_l^T Y_l(θ, ϕ). We term these coefficients c_l as the spherical harmonic coefficients of f as they are obtained by projecting f onto the spherical harmonics. § METHODS We first describe Symphony, our autoregressive model for 3D molecular structures, with a comparison to prior work in <ref>. 
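Before walking through the fragment-based generation procedure, it may help to see the background material on spherical harmonic projections in code. The sketch below uses only the degree-0 and degree-1 real spherical harmonics, written out from the Cartesian forms quoted above (with one common normalization; libraries such as e3nn order the l = 1 components differently), and projects a sum of point masses on the sphere onto them. It is an illustration of the math, not Symphony's implementation.

```python
import numpy as np

def real_sph_harm_l01(unit_vecs):
    """Real spherical harmonics of degree l = 0 and l = 1 at unit vectors.

    Returns (Y0, Y1) with Y0 of shape (N, 1) and Y1 of shape (N, 3); the l = 1
    functions are proportional to (x, y, z), as in the Cartesian forms above.
    """
    x, y, z = unit_vecs.T
    Y0 = np.full((len(unit_vecs), 1), 0.5 / np.sqrt(np.pi))          # 1 / sqrt(4*pi)
    Y1 = np.sqrt(3.0 / (4.0 * np.pi)) * np.stack([x, y, z], axis=1)
    return Y0, Y1

def project_point_cloud(unit_vecs):
    """Coefficients c_l of f(rhat) = sum_i delta(rhat - rhat_i) for l = 0, 1.

    For a sum of point masses on the sphere the projection integrals reduce to
    sums of the harmonics evaluated at the points: c_{l,m} = sum_i Y_{l,m}(rhat_i).
    """
    Y0, Y1 = real_sph_harm_l01(unit_vecs)
    return Y0.sum(axis=0), Y1.sum(axis=0)

# three directions in a symmetric trigonal-planar arrangement (unit vectors)
pts = np.array([[1.0, 0.0, 0.0],
                [-0.5, np.sqrt(3) / 2, 0.0],
                [-0.5, -np.sqrt(3) / 2, 0.0]])
c0, c1 = project_point_cloud(pts)   # c1 is ~0: no net degree-1 (dipole) component
```

For the symmetric three-point arrangement in the example, the degree-1 coefficients vanish — a small reminder that low-degree projections alone cannot distinguish some geometries, which is part of the motivation for carrying higher-degree equivariant features.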
§.§ Building Molecules Via Sequences of Fragments First, we create sequences of fragments using molecules from the training set via [alg:createSequence]CreateFragmentSequence. Given a moleculeand random seed r, CreateFragmentSequence constructs a sequence of increasingly larger fragments {^1, …^||} such that |^n| = n for all n ∈{1, …, ||} and ^|| = exactly. Of course, there are many ways to create such sequences of fragments; [alg:createSequence]CreateFragmentSequence simply builds a molecule via a minimum spanning tree.Symphony attempts to recreate this sequence step-by-step, learning the (probabilistic) mapping ^n→^n + 1. In particular, we ask Symphony to predict the focus node f_n + 1, the target atomic number Z_n + 1 and the target position _̊n + 1, providing feedback at every step. §.§ Handling the Symmetries of Fragments Here, we highlight several challenges that arise because ^n must be treated as an unordered set of atoms that live in 3D space. In particular, let ^n += {(_̊1 + , Z_1), …, (_̊n + , Z_n)} be the description of the same set of atoms in ^n but in a coordinate frame rotated by ^-1 and translated by ^-1.Similarly, let π be any permutation of {1, …, n} and π^n = {(_̊π(1), Z_π(1)), …, (_̊π(n), Z_π(n))}.Fundamentally, ^n +, ^n and π^n all represent the same set of atoms. Thus, we would like Symphony to naturally accommodate thesymmetries of fragment ^n, under the group E(3) of Euclidean transformations consisting of all rotationsand translations , and the group of all permutations ofconstituent atoms. Formally, we wish to have: * Property (1): The focus distribution p^focus and the target species distributionp^species should be E(3)-invariant: p^focus(f_n + 1; ^n + )= p^focus(f_n + 1; ^n) p^species(Z_n + 1 |f_n + 1; ^n + )= p^species(Z_n + 1 |f_n + 1; ^n) * Property (2): The target position distribution p^position should be E(3)-equivariant: p^position(_̊n + 1+|f_n + 1, Z_n + 1; ^n + ) = p^position(_̊n + 1 | f_n + 1, Z_n + 1; ^n)* Property (3): With respect to the ordering of atoms in ^n, the map p^focus should be permutation-equivariant while p^species and p^position should be permutation-invariant.We represent p^focus, p^species and p^position as probability distributions because there may be multiple valid choices of focus f_n + 1, species Z_n + 1 and position _̊n + 1.§.§ The Design of Symphony The overall working of Symphony is shown graphically in <ref>. Symphony first computes atom embeddings via an Embedder. Here, we assume that Embedder(^n) = {h_v,l |v ∈^n, 0 ≤ l ≤} returns a set of E(3)-equivariant features h_v,l of degree l upto a predefined degree , for each atomv in ^n.In <ref>, we show that Symphony can guarantee [prop:1]Properties (1), (2) and [prop:3](3) as long as Embedder is permutation-equivariant and E(3)-equivariant.We provide further details about Embedder in <ref>.From [prop:1]Property (1), p^focus and p^species should be invariantunder rotation and translations of ^n. Since the atom types and the atom indices are discrete sets, we can represent both of these distributions as a vector of probabilities. 
Thus, we compute p^focus and p^species by applying a multi-layer perceptron MLP on only the rotation and translation invariant features of Embedder(^n): p^focus(f_n + 1; ^n) = MLP(Embedder(^n)_f_n + 1,0), p^species(Z_n + 1 | f_n + 1; ^n) = MLP(EmbedAtomType(Z_n + 1) · Embedder(^n)_f_n + 1,0). Alongside the node-wise probabilities for p^focus, we also predict a global STOP probability, indicating that no atom should be added. On the other hand, [prop:2]Property (2) shows that p^position transforms non-identically under rotations and translations. We describe a novel parametrization of 3D probability densities such as p^position with spherical harmonic projections. The position is represented by spherical coordinates (r, θ, ϕ), where r is the distance from the focus f, θ is the polar angle and ϕ is the azimuthal angle. Any probability distribution p^position over positions must satisfy the normalization and non-negativity constraints: ∫_Ω p^position(r, θ, ϕ) dV = 1 and p^position(r, θ, ϕ) ≥ 0, where dV = r^2 sinθ dr dθ dϕ is the volume element and Ω = {r ∈ [0, ∞), θ ∈ [0, π], ϕ ∈ [0, 2π)} represents all space in spherical coordinates. Since these constraints are hard to incorporate directly into a neural network, we predict the unnormalized logits f^position(r, θ, ϕ) instead, and take the softmax over all space: p^position(r, θ, ϕ) = 1/Z exp f^position(r, θ, ϕ). To model these logits, we first discretize the radial component r into a set of discrete values. We choose 64 uniformly spaced values from 0.9 Å to 2.0 Å, which covers all of the bond lengths in QM9. For each fixed value of r, we obtain a function on the sphere S^2, which we represent in the basis of spherical harmonic functions Y_l,m(θ, ϕ), as described in <ref> and similar to the construction of <cit.>. As we have a radial component r here, the coefficients c_l also depend on r: f^position(r, θ, ϕ | f_n + 1, Z_n + 1; ^n) = ∑_l = 0^∞ c_l(r; f_n + 1, Z_n + 1, ^n)^T Y_l(θ, ϕ). Symphony predicts these coefficients c_l from the degree-l features of the focus node Embedder(^n)_f_n + 1,l, and the embedding of the target species Z_n + 1: c_l(r; f_n + 1, Z_n + 1, ^n) = Linear(Embedder(^n)_f_n + 1,l ⊗ EmbedAtomType(Z_n + 1)). By explicitly modelling the probability distributions p^focus, p^species and p^position, Symphony learns to represent all possible options of completing ^n into a valid molecule. §.§ Bypassing the Angular Frequency Bottleneck For computational reasons, we are often limited to using a finite number of spherical harmonic projections (i.e., up to some maximal degree). Due to the way the spherical harmonics are constructed, this means we can only represent signals up to some angular frequency. For example, to represent a signal on the sphere with peaks separated by d radians, we need spherical harmonic projections of maximal degree ≥ 2π/d. This is similar to issues faced when using the first few terms of the Fourier series; we cannot represent high frequency components. To bypass the bottleneck of angular frequency, we propose using multiple channels of spherical harmonic projections, which are then summed over after a non-linearity: f^position(r, θ, ϕ; ^n) = log ∑_channel ch exp ∑_l = 0^∞ c^ch_l(r; ^n)^T Y_l(θ, ϕ). See <ref> for a concrete example where adding multiple channels effectively increases the angular frequency capacity of our model. For Symphony, we find that 2 channels are sufficient, as demonstrated in <ref>. §.§ Training and Inference We utilize teacher forcing to train Symphony. At training time, the true focus f_n + 1 and atomic number Z_n + 1 are provided as computed in NextFragment. Thus, no sampling occurs at training time.
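Before describing how the training targets are constructed, the following sketch shows the mechanics of the multi-channel position parametrization on a discrete (radius × direction) grid: per-channel logits c^ch_l(r)^T Y_l(θ, ϕ), a log-sum-exp over channels, and a softmax over the grid. It is a NumPy illustration, not Symphony's e3nn-jax implementation; the grid resolution, the degree cutoff baked into the coefficient arrays, and the omission of proper r^2 dr dΩ quadrature weights in the normalization are simplifications on our part.

```python
import numpy as np
from scipy.special import logsumexp, softmax

def position_logits(coeffs, Y_grid):
    """Multi-channel position logits on a (radius x direction) grid.

    coeffs : (channels, n_radii, n_coeffs)  -- c_l^ch(r), flattened over (l, m)
    Y_grid : (n_dirs, n_coeffs)             -- spherical harmonics at the grid directions
    Returns logits of shape (n_radii, n_dirs): log sum_ch exp( c^ch(r) . Y(dir) ).
    """
    per_channel = np.einsum("crk,dk->crd", coeffs, Y_grid)   # (channels, n_radii, n_dirs)
    return logsumexp(per_channel, axis=0)

def position_probs(coeffs, Y_grid):
    # Discrete stand-in for the softmax over all space in the text; proper
    # volume/quadrature weights are omitted for brevity.
    logits = position_logits(coeffs, Y_grid)
    return softmax(logits.ravel()).reshape(logits.shape)
```

One simple way to sample a position from such a discretization is to draw a (radius, direction) cell according to these probabilities; this is only meant to illustrate the parametrization, not the paper's sampler.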
The true probability distributionsq^focusandq^speciesare computed empirically from the set of unfinished atoms and their corresponding neighbors in. The true probability distributionq^positionis computed by smoothly approximating a Dirac delta distribution upto some cutoff frequencyat the target position_̊n + 1around the focus atom. Further details about the training process and representingDirac delta distributions are provided in <ref> and <ref>.q^position()̊ = 1/Zexp(- - _̊n + 1/2σ_true^2·δ_(r̂ - r̂_n + 1))At inference time, both the focusf_n + 1and atomic numberZ_n + 1are sampled fromp^focus(·; ^n)andp^species(·| f_n + 1; ^n)respectively. These are used to sample_̊n + 1fromp^position(·| f_n + 1, Z_n + 1; ^n). Molecules are generated by starting from an initial fragment^1, and repeatedly sampling fromp^focus,p^speciesandp^positionuntil a STOP is predicted orN_max = 35iterations haveoccurred.[N_max was set as 35 based on the maximum size of molecules in the QM9 dataset as 30 atoms.] We set^1as a single hydrogen atom at the origin. §.§ Relation to Prior WorkMost methods for 3D molecular structure generation fall into one of two broad categories: autoregressive and end-to-end models. G-SchNet <cit.> and G-SphereNet <cit.> were the first successful attempts at autoregressive generation of molecular structures. G-SchNet uses the SchNet framework <cit.> to perform message-passing with rotationally invariant features and compute node embeddings. A focus node is then selected as the center of a 3D grid. All of the atoms in the current fragment then vote on where to place the next atom within this grid by specifying a radial distance to the next atom. Because of the use of only rotationally invariant features, at least three atoms are needed to be present in the current fragment to specify the exact position of the next atom without any degeneracy due to symmetry; this procedure is called triangulation. This requires several additional tokens to break symmetry. Similarly, G-SphereNet learns a normalizing flow to perform a triangulation procedure once there are atleast3atoms in^n.We wish to highlight two observations that guided the development of Symphony:*Rotationally invariant features centered at a single pointcannot capture the orientations of geometrical motifs <cit.>. To handle the degeneracies inherent when usingrotationally invariant features to predict positions, G-SchNet uses unphysicalauxiliary tokens (which are multiple spatial positions that are not atoms) to break symmetry. *G-SchNet queries all of the atoms in ^n at each iteration, which meansdistant atoms can have an undue influence when placing the next atom. Similarly, G- SphereNet predictions are not a smooth function of the input fragment; when the inputis perturbed slightly, the choice of atoms used in the triangulation procedure canchange drastically.Recently,E(3)-equivariant neural networks that build higher-degreeE(3)-equivariant features have demonstrated improved performance on a wide range of atomistic tasks<cit.>. Our key contribution is to show the benefit of higher-degreeE(3)-equivariant features for the molecular generation task allowing for a novel parametrization of 3D probability distributions using spherical harmonic projections. <cit.> also uses spherical harmonic projections with asingle channel for molecule generation, but trained with reinforcement learning. Their parametrization and sampling of the distribution differs from ours; we discuss these details in <ref>. 
Among end-to-end generation methods, <cit.> developed EDM, a state-of-the-artE(3)-equivariant diffusion model. EDM significantly outperformed the previously proposedE(3)-equivariant normalizing flow (ENF) model for molecule generation <cit.>. EDM learns to gradually denoise a initial configuration of atoms into a valid molecular structure. Both EDM and ENF are built on theE(n)-Equivariant Graph Neural Networks <cit.> framework which can utilize only scalar and vector features (and interactions between them). A recent work <cit.> improves EDM by utilizing bond order information (and hence, a 2D molecular graph to compare to), which we do not assume access to here. While expressive, diffusion models are expensive to train, requiring≈3.5×more training on the QM9 dataset to outperform autoregressive models. Unlike autoregressive models, diffusion models do not flexibly allow for completion of molecular fragments, because they are usually trained in setups where all atoms are free to move. To avoid recomputation of the neighbor lists during diffusion, current diffusion models use fully-connected graphs where all atoms interact with each other. This could potentially affect their scalability when building larger molecules. On the other hand, Symphony and other autoregressive models use distance cutoffs to restrict interactions and improve efficiency. Furthermore, diffusion models are significantly slower to sample from, because the underlying neural network is invoked≈1000times when sampling a single molecule. § EXPERIMENTAL RESULTS A major challenge with generative modelling is evaluating the quality of generated 3D structures. Ideally, a generative model should generate physically plausible structures, accurately capture training set statistics and generalize well to molecules outside of its training set. We propose a comprehensive set of tests to evaluate Symphony and other generative models along these three aspects. §.§ Validity of Generated StructuresAll of the generative models considered here output a set of atoms with 3D coordinates; bonding information is not generated by the model. Before we can use cheminformatics tools designed for molecules, we need to assign bonds between atoms. Multiple algorithms exist for bond order assignment: <cit.>, OpenBabel <cit.> and a simple lookup table based on empirical pairwise distances in organic compounds <cit.>. Here, we perform the first comparison between these algorithms for evaluating machine-learning generated 3D structures. In <ref>, we use each of these algorithms to infer the bonds and create a molecule from generated 3D molecular structure.We declare a molecule as valid if the algorithm could successfully assign bond order with no net resulting charge. We also measure the uniqueness to see how many repetitions were present in the set of SMILES <cit.> strings of valid generated molecules. Ideally, we want both the validity and the uniqueness to be high. While EDM <cit.> is still superior on the validity and uniqueness metrics, we find that Symphony performs much better on both validity and uniqueness than existing autoregressive models, G-SchNet <cit.> and G-SphereNet <cit.>, for theand OpenBabel algorithms. Note that the lookup table does not account for aromatic bonds and is quite sensitive to exact bond lengths; we believe this penalizes Symphony due to its coarser discretization compared to EDM and G-SchNet. 
Of note is that onlyfinds almost all of the ground truth QM9 structures to be valid.Recently, <cit.> showed that the predicted 3D structures from machine-learned protein-ligand docking models tend to be highly unphysical. For <ref>, we utilize their PoseBusters framework to perform the following sanity checks to count how many of the predicted 3D structures are reasonable. We see that the valid molecules from all models tend to be quite reasonable, with Symphony performing better than all baselines on generating structures with reasonable UFF <cit.> energies and respecting the geometry constraints of double bonds. Further details about the PoseBusters tests are provided in <ref>. §.§ Capturing Training Set StatisticsNext, we evaluate models on how well they capture bonding patterns and the geometry of local environments found in the training set molecules. In previous work <cit.>, models were compared based on how well they capture the true bond length distributions observed in QM9. However, such statistics only deal with pairwise bond lengths and cannot capture the geometry of how atoms are placed relative to each other. Here, we utilize the bispectrum<cit.> as a rotationally invariant descriptor of the geometry of local environments.Given a local environment with a central atomu, we first project all of the neighbors ofuaccording to the inferred bonds onto the unit sphereS^2. Then, we compute the signalfas a sum of Dirac delta distributions along the direction of each neighbor: f(r̂) = ∑_v ∈N(u)δ_(r̂ - r̂_vu ) . The bispectrumℬ(f)offis then defined as: ℬ(f) = ExtractScalars(f ⊗f ⊗f) . Thus,fcaptures the distribution of atoms aroundu, and the bispectrumℬ(f)captures the geometry of this distribution. The advantage of the bispectrum is that it varies smoothly whenfis varied and is guaranteed to be rotationally invariant. We compute the bispectrum of local environments with atleast2neighboring atoms. Note that we exclude the pseudoscalars in the bispectra.For comparing discrete distributions, we use the symmetric Jensen-Shannon divergence (JSD) as employed in <cit.>. Given the true distributionQand the predicted distributionP, the Jensen-Shannon divergence between them is defined as:D_JS(QP) = 1/2 D_KL(QM) + 1/2 D_KL(PM )whereD_KLis the Kullback–Leibler divergence andM = Q+P/2is the mean distribution. For continuous distributions, estimating the Jensen-Shannon divergence from samples is tricky without further assumptions on the distributions. Instead, we use the Maximum Mean Discrepancy (MMD) score from <cit.> instead to compare samples from continuous distributions. The MMD score is the distance between means of features computed from samples from the true distributionQand the predicted distributionP.A model with a smaller MMD score captures the true distribution of samples better. We provide details about the MMD score in <ref>.From <ref> we see that Symphony and other autoregressive models struggle to match the bond length distribution of QM9 as well as EDM. This is the case except for the single C-H and single N-H bonds. On the bispectra, however, Symphony attains the lowest MMD for several environments. To gain some intuition for these MMD numbers, we also plotted the bond length distributions, samples of the bispectra, atom type distributions and other statistics in <ref> for each model.§.§ Generalization CapabilitiesAll of the metrics discussed so far can be maximized by simplymemorizing the training set molecules. 
Now, we propose a new metric toevaluate how well the models have actually learned to generate validchemical structures. We compare models by asking them to completefragments of1000unseen molecules from the test set, with one hydrogen atom removed. We then check how many final molecules were deemed valid. Since the valid completion rate (VCR) depends heavily on the quality of the model, we compute the validcompletion rate for fragments of molecules from the training set as well. If the performance is significantly different between the two sets of fragments, this indicates that the models do not generalize well. Diffusion models such as EDM are more challenging to evaluate for this task, since we would need a way to fix the initial set of atoms, so we compare only Symphony and G-SchNet. Encouragingly, both models are able to generalize well to unseen fragments, but Symphony's overall completion rate is higher for both seen and unseen fragments. However, we notice that the performance of Symphony on this task seems to decrease as training progresses, which we are currently investigating.§.§ Molecule Generation ThroughputOne of the major advantages of autoregressive models (such as Symphony) over diffusion models (such as EDM) is significantly faster inference speeds. As measured on a single NVIDIA RTX A5000 GPU, Symphony's inference speed is 0.293 seconds/molecule, compared to EDM's 0.930 sec/mol. Symphony is much slower than existing autoregressive models (G-SchNet is at 0.011 sec/mol, and G-SphereNet 0.006) because of the additional tensor products for generating higher-degreeE(3)-equivariant features, but is still approximately3×faster than EDM.However, our sampler is currently bottlenecked by some of the limitations of JAX <cit.>; we believe that Symphony's inference speed reported here can be significantly improved to match its training speed.§ CONCLUSIONWe have proposed Symphony, a new method to autoregressively generate 3D molecular geometries with spherical harmonic projections and higher-degreeE(3)-equivariant features. We show promising results on molecular generation and completion, relative to existing autoregressive models. However, one drawback of our current formulation is that the discretization of our radial components is too coarse, so our bond length distributions are not as accurate as EDM or G-SchNet. This affects our validity when using lookup tables to assign bond orders as they are particularly sensitive to exact bond lengths. Further, Symphony incurs increased computational cost due to the use of tensor products to create higher degreeE(3)-equivariant features. As a highlight, Symphony is trained on only≈80epochs, while G-SchNet and EDM are trained for330and1100epochs respectively. Further exploring the data efficiency of Symphony remains to be seen. In the future, we plan to explore normalizing flows to smoothly model the radial distribution without any discretization, and placing entire local environment motifs at once which would speed up generation.iclr2024_conference | http://arxiv.org/abs/2311.16199v1 | {
"authors": [
"Ameya Daigavane",
"Song Kim",
"Mario Geiger",
"Tess Smidt"
],
"categories": [
"cs.LG",
"q-bio.BM"
],
"primary_category": "cs.LG",
"published": "20231127053221",
"title": "Symphony: Symmetry-Equivariant Point-Centered Spherical Harmonics for Molecule Generation"
} |
Anti-Gauss cubature rules with applications to Fredholm integral equations on the square Patricia Díaz de AlbaDepartment of Mathematics, University of Salerno, via Giovanni Paolo II 132, 84084 Fisciano, Italy,Luisa FermoDepartment of Mathematics and Computer Science, Universityof Cagliari, via Ospedale 72, 09124 Cagliari, Italy,Giuseppe Rodriguez[2]January 14, 2024 =================================================================================================================================================================================================================================================================================The purpose of this paper is to develop the anti-Gauss cubature rule for approximating integrals defined on the square whose integrand function may have algebraic singularities at the boundaries. An application of such a rule to the numerical solution of second-kind Fredholm integral equations is also explored. The stability, convergence, and conditioning of the proposedNyström-type method are studied. The numerical solution of the resulting dense linear system is also investigated and several numerical tests are presented. Fredholm integral equation; Nyström method; Gauss cubature formula; anti-Gauss cubature rule; averaged schemes.65R20; 65D30; 42C0§ INTRODUCTION Let us consider the integral(f)=∫_ f_1() d, where :=[-1,1] × [-1,1], =(x_1,x_2), andf_1 is an integrable bivariate function which may have algebraic singularities on the boundary of . As usual, we deal with such singularities by writing(f)=∫_ f() w() d = ∫_-1^1 ∫_-1^1 f(x_1,x_2) w_1(x_1) w_2(x_2) dx_1 dx_2, that is, by factoring f_1 as the product of a function f which is sufficiently smooth onand a weight function w() =w_1(x_1) w_2(x_2),withw_i(x_i)=(1-x_i)^α_i (1+x_i)^β_i,α_i,β_i>-1,i=1,2.For the numerical approximation of the integral (<ref>), we may opt for two alternative techniques; see <cit.>.The first one, known as the “indirect” approach,consists of approximating each one-dimensional integral in (<ref>) by a well-known quadrature rule. This procedure takes advantage of the fact that univariate rules have been deeply studied and explored, compared with the multivariate ones. In <cit.>, the authorspropose to approximate integrals of type (<ref>) by a cubature formula obtained as a tensor product of two Gaussian rules; see also formula (<ref>) in Section <ref>. They investigate the stability and convergence of the formula in suitable weighted spaces, and provide asymptotic estimates of the weighted quadrature error for an increasing number of nodes. Specifically, they state the order of convergence as a function on the smoothness properties of the integrand function f, providing a lower bound which involves an unknown constant independent of f and of the number of nodes. The second approach, which can be considered “direct”, consists of constructing true bivariate cubature schemes from scratch. This case is more involved. Indeed, it is well known that Gaussian cubature rules based on bivariate orthogonal polynomials exist only in few cases; see for instance <cit.>. An interesting example is given in <cit.>, where the nodes are zeros of suitable bivariate orthogonal polynomials; see also <cit.>.In this paper, we initially focus on the “indirect” approach and develop an anti-Gaussian cubature rule as a tensor product of two anti-Gaussian univariate formulae. Anti-Gauss rules were introduced for the first time in <cit.> and subsequently investigated by many authors; see, for example, <cit.>. 
According to our knowledge, such formulae have been investigated in the bivariate case only on the real semi-axis <cit.>. Their utilityis twofold. On the one hand, they allow one to build new cubature rules, namely, averaged or stratified cubature formulae, which turns out to have several advantages in terms of accuracy and computational cost.On the other hand, they provide numerical estimates for the error of the Gaussian cubature rule for a fixed number of points, so that one can determine the number of points needed to approximate the integral with a prescribed accuracy. The estimates so obtained do not depend on unknown constants and are not asymptotic. In the second part of the paper, we apply anti-Gauss rules to the numerical solution of the integral equation (I-K)f=g,where f is the bivariate function to be recovered, defined on the square , I is the identity operator, and g is a given right-hand side, which is sufficiently smooth on (-1,1) × (-1,1) and may havealgebraic singularities at the boundary of . The integral operator Kf is defined as(Kf)()=∫_k(,) f() w() d,where =(x_1,x_2) and =(y_1,y_2) belong to , the kernel function k defined on × is known, d=dx_1dx_2, and w is the weight function given in (<ref>). Defining the function w as the product of two classical Jacobi weights aims at accounting for possible algebraic singularities at the boundary of the kernel domain.Equation (<ref>) arises in several problems related to electromagnetic scattering, aerodynamics, computer graphics and mathematical physics that can be rewritten in terms of equations of type (<ref>). Examples are the radiosity equation <cit.> and the rendering equation <cit.>. In view of such applications, several numerical methods have been developed for the numerical solution of equation (<ref>), such as weighted Nyström type methods <cit.>, integral mean value methods <cit.>, Galerkin methods <cit.>, collocation methods <cit.>, and wavelets methods <cit.>.Recently, much attention has been devoted to “stratified” quadrature formulae <cit.>. They are linear combination of an m-points Gauss rule and a formula with more than m nodes, e.g., the anti-Gauss rule, to reach an algebraic precision larger than 2m-1. In the light of the accurate numerical results that such formulae are able to give in the one dimensional case (see, for instance, <cit.> or <cit.>), in this paper we propose a weighted Nyström method based on anti-Gauss cubature formulae. We investigate the stability and convergence of the proposed method in suitable weighted spaces, and propose to combine it with the Nyström method based on the Gauss rule presented in <cit.>. This combination allows us to have two Nyström interpolants that, under suitable assumptions, bracket the solution of the integral equation.As a consequence, an average of the two numerical solution produces a better accuracy. The numerical solution of the resulting linear system is also investigated. The system is characterized by a dense coefficients matrix and by a dimension which becomes large when the functions involved have a low degree of smoothness. The iterative solution by the GMRES method is investigated and the special case of a separable kernel is also considered.The paper is organized as follows. In Section <ref>, we introduce the anti-Gauss cubature rule and investigate its properties with Proposition <ref>. 
Under suitable assumptions, we extend the bracketing property to a general function f (Theorem <ref>) and provide simpler assumptions in the Chebychev case; see Corollaries <ref> and <ref>. We also present two numerical examples to support the theoretical analysis of the new formulae. Section <ref> describes a Nytröm method based on the Gauss and anti-Gauss rules, and show that the two corresponding Nyström interpolants bracket the solution of the integral equation, suggesting that a better accuracy can be obtained by taking the average of the two interpolants. In Section <ref>, we analyze the linear systems that yield the interpolants and solve them by optimized versions of the GMRES iterative method. In particular, we investigate the special case of a separable kernel. Finally, Section <ref> presents the results of a numerical experimentation on integral equations. § CUBATURE RULESLet us consider the integral (<ref>), with the weight function w defined in (<ref>). To obtain a numerical approximation, we apply to each nested weighted integral the optimal Gauss-Jacobi ruleG^(ℓ)_n(g) = ∑_j=1^nλ_j^(ℓ) g(x_j^(ℓ)),where g(x) is a univariate function defined on [-1,1], λ_j^(ℓ) is the jth Christoffel number with respect to the weight w_ℓ(x) appearing in the integral, and x_j^(ℓ) is the jth zero of the monic polynomial p_n^(ℓ)(x) orthogonal with respect to the same weight, for ℓ=1,2.To ease exposition, we recall that p_n^(ℓ)(x) satisfies the following three-term recurrence relationp^(ℓ)_-1(x)=0,p^(ℓ)_0(x)=1, p^(ℓ)_j+1(x)=(x-a^(ℓ)_j) p^(ℓ)_j(x)-b^(ℓ)_j p^(ℓ)_j-1(x),j=0,1,2,…,where the coefficients a^(ℓ)_j and b^(ℓ)_j are given bya^(ℓ)_j = β_ℓ^2-α_ℓ^2/(2j+α_ℓ+β_ℓ)(2j+α_ℓ+β_ℓ+2),j ≥ 0, b^(ℓ)_0= 2^α_ℓ+β_ℓ+1Γ(α_ℓ+1) Γ(β_ℓ+1)/Γ(α_ℓ+β_ℓ+2), b^(ℓ)_j = 4j(j+α_ℓ)(j+β_ℓ)(j+α_ℓ+β_ℓ)/(2j+α_ℓ+β_ℓ)^2 ((2j+α_ℓ+β_ℓ)^2-1),j ≥ 1.It is well known <cit.> that the zeros of p_n^(ℓ)(x) can be efficiently computed as the eigenvalues of the Jacobi matrix associated to the polynomials, while the Christoffel numbers are the squared first components of the normalized eigenvectors of the same matrix.Let us go back to the approximation of (<ref>). By using n_1 points in the integral with the differential dx_1 and n_2 nodes in that with dx_2, we obtain the (n_1 × n_2)-point Gauss cubature rule_n_1,n_2(f)=∑_j_1=1^n_1∑_j_2=1^n_2λ_j_1^(1)λ_j_2^(2) f(x_j_1^(1),x_j_2^(2)).Denoting by R^(G)_n_1,n_2(f) the remainder term for the integral, i.e.,(f)=_n_1,n_2(f)+R^(G)_n_1,n_2(f),it is immediately to observe that the interpolatory scheme (<ref>) is such thatR^(G)_n_1,n_2(p)=0,∀ p ∈ℙ_2n_1-1,2n_2-1,where ℙ_k,ℓ is the set of all bivariate polynomials of the typep(x,y)=∑_i=0^k ∑_j=0^ℓ a_ij x^i y^j, a_ij∈ℝ,of degree at most k in the variable x and at most ℓ in the variable y.In <cit.>, estimates for the error R^(G)_n_1,n_2(f) are given in terms of the smoothness properties of the function f. Basically, the cubature error goes to zero as the error of best polynomial approximation for f. Here, we want to provide an estimate for such error by using stratified schemes. 
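As a small numerical aside, the Gauss–Jacobi rule (<ref>) and the tensor-product cubature (<ref>) can be assembled directly from the recurrence coefficients and the Jacobi matrix described above. The sketch below follows the Golub–Welsch construction; the function names are ours and only the plain eigenvalue route is used.

```python
import numpy as np
from scipy.special import gammaln

def jacobi_recurrence(n, alpha, beta):
    """Recurrence coefficients a_j, b_j (j = 0, ..., n-1) of the monic Jacobi polynomials."""
    j = np.arange(n, dtype=float)
    a = np.empty(n)
    # j = 0 written in the reduced form (beta - alpha)/(alpha + beta + 2), which also
    # covers the 0/0 case alpha + beta = 0 of the general expression.
    a[0] = (beta - alpha) / (alpha + beta + 2.0)
    jj = j[1:]
    a[1:] = (beta**2 - alpha**2) / ((2*jj + alpha + beta) * (2*jj + alpha + beta + 2))
    b = np.empty(n)
    b[0] = np.exp((alpha + beta + 1) * np.log(2.0)
                  + gammaln(alpha + 1) + gammaln(beta + 1) - gammaln(alpha + beta + 2))
    b[1:] = (4*jj*(jj + alpha)*(jj + beta)*(jj + alpha + beta)
             / ((2*jj + alpha + beta)**2 * ((2*jj + alpha + beta)**2 - 1)))
    return a, b

def gauss_jacobi(n, alpha, beta):
    """n-point Gauss-Jacobi nodes/weights from the eigen-decomposition of the Jacobi matrix."""
    a, b = jacobi_recurrence(n, alpha, beta)
    J = np.diag(a) + np.diag(np.sqrt(b[1:]), 1) + np.diag(np.sqrt(b[1:]), -1)
    nodes, vecs = np.linalg.eigh(J)
    weights = b[0] * vecs[0, :]**2   # squared first components of the normalized eigenvectors
    return nodes, weights

def gauss_cubature(f, n1, n2, ab1, ab2):
    """Tensor-product rule G_{n1,n2}(f) for the weight w1(x1) * w2(x2) on the square."""
    x1, w1 = gauss_jacobi(n1, *ab1)
    x2, w2 = gauss_jacobi(n2, *ab2)
    X1, X2 = np.meshgrid(x1, x2, indexing="ij")
    return np.einsum("i,j,ij->", w1, w2, f(X1, X2))
```

For the Legendre case (α_ℓ = β_ℓ = 0) one has b_0 = 2, and the cubature applied to f ≡ 1 returns the area 4 of the square, which is a convenient sanity check. We now return to the estimation of the cubature error via stratified schemes.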
This approach is well consolidated in the one-dimensional case through the well known Gauss-Kronrod formulae <cit.>, the anti-Gauss quadrature rules <cit.>, and their recent extensions <cit.>.To this end, we introduce the anti-Gaussian cubature scheme_n_1+1,n_2+1(f)=∑_j_1=1^n_1+1∑_j_2=1^n_2+1μ_j_1^(1)μ_j_2^(2) f(η_j_1^(1),η_j_2^(2)),where μ_i^(ℓ) is the ith anti-Gaussian quadrature weight for ℓ=1,2, andη_i^(ℓ) is the ith zero of the polynomialq^(ℓ)_n_ℓ+1(x)=p^(ℓ)_n_ℓ+1(x)-b^(ℓ)_n_ℓ p^(ℓ)_n_ℓ-1(x), ℓ=1,2.Anti-Gaussian cubature formulae and related generalizations have been very recently investigated in <cit.> for the Laguerre weight.Similarly to (<ref>) and (<ref>), such a cubature rule has been obtained as a tensor product of two univariate anti-Gauss rules <cit.>, which we denote by A^(ℓ)_n_ℓ+1, ℓ=1,2. Therefore, the zeros {η_i^(ℓ)}_i=1^n_ℓ+1 are the eigenvalues of the matrixΨ^(ℓ)_n_ℓ+1 = [ J^(ℓ)_n_ℓ √(2 b^(ℓ)_n_ℓ)𝐞_n_ℓ; √(2 b^(ℓ)_n_ℓ)𝐞^T_n_ℓ a^(ℓ)_n_ℓ; ],whereJ^(ℓ)_n_ℓ=[a^(ℓ)_0 √(b^(ℓ)_1); √(b^(ℓ)_1)a^(ℓ)_1⋱; ⋱⋱ √(b^(ℓ)_n_ℓ-1); √(b^(ℓ)_n_ℓ-1)a^(ℓ)_n_ℓ-1;]and𝐞_n_ℓ=(0,0,…,1)^T ∈ℝ^n_ℓ. The coefficients {μ_i^(ℓ)}_i=1^n_ℓ+1 are determined asμ^(ℓ)_i=b^(ℓ)_0 (v^(ℓ)_i,1)^2,where b^(ℓ)_0 is defined as in (<ref>) and v^(ℓ)_i,1 is the first component of the normalized eigenvector corresponding to the eigenvalue η^(ℓ)_i.We remark that for the computation of the eigenvalues and eigenvectors we can use the algorithm devised by Golub and Welsch in <cit.>. It is based on the QR factorization with a Wilkinson-like shift and has a computational cost c n_ℓ^2+O(n_ℓ), ℓ=1,2, where c is a small positive constant independent of n_ℓ.Let us mention that, by definition, all the weights are positive and the zeros interlace the nodes of the Gauss rule <cit.>, i.e.,η_1^(ℓ)<x_1^(ℓ)<η_2^(ℓ)<x_2^(ℓ)<⋯<x^(ℓ)_n_ℓ<η^(ℓ)_n_ℓ+1.Moreover, the anti-Gauss nodes η^(ℓ)_i belong to the interval [-1,1] whenα_ℓ≥ -1/2,β_ℓ≥ -1/2, (2 α_ℓ+1)(α_ℓ+β_ℓ+2)+1/2(α_ℓ+1)(α_ℓ+β_ℓ)(α_ℓ+β_ℓ+1) ≥ 0, (2 β_ℓ+1)(α_ℓ+β_ℓ+2)+1/2(β_ℓ+1)(α_ℓ+β_ℓ)(α_ℓ+β_ℓ+1) ≥ 0. We remark that some classical Jacobi weights, such as the Legendre weight (α_ℓ=β_ℓ=0) and the Chebychev weights of the first (α_ℓ=β_ℓ=-1/2), second (α_ℓ=β_ℓ=1/2), third (α_ℓ=-1/2, β_ℓ=1/2), and fourth kind (α_ℓ=1/2, β_ℓ=-1/2), satisfy conditions (<ref>). However, we emphasize that the nodes might include the endpoints ± 1. This happens, for instance, with the Chebychev weights of the first (η_1=-1 and η_n_ℓ+1=1), third (η_n_ℓ+1=1), and fourth kind (η_1=-1). In the case of Chebychev polynomials of the first kind an explicit form for the nodes and weights have been given in <cit.>. From now on, we assume that conditions (<ref>) are satisfied.Denoting by R^(A)_n_1+1,n_2+1(f) the related cubature error, i.e.,(f)=:_n_1+1,n_2+1(f)+R^(A)_n_1+1,n_2+1(f),we have the following proposition, which has been proved in <cit.> for the Laguerre weight on [0,∞).The error of the anti-Gauss cubature scheme (<ref>) has the following propertyR^(A)_n_1+1,n_2+1(p)=-R^(G)_n_1,n_2(p), ∀ p ∈ℙ_2n_1+1,2n_2-1∪ℙ_2n_1-1,2n_2+1. The proof follows the same line as that of <cit.>.Hence, by virtue of (<ref>), we can immediately deduce some important features of the rule _n_1+1,n_2+1:* If p ∈ℙ_2n_1-1,2n_2-1, then R^A_n_1+1,n_2+1(p)=0. * If p ∈ℙ_2n_1+1,2n_2-1∪ℙ_2n_1-1,2n_2+1, the Gauss and the anti-Gauss cubature rules provide an interval containing the exact integral (p). 
Indeed, it either holds_n_1+1,n_2+1(p) ≤(p) ≤_n_1,n_2(p) or_n_1,n_2(p) ≤(p) ≤_n_1+1,n_2+1(p).* For every polynomial p∈ℙ_2n_1+1,2n_2-1∪ℙ_2n_1-1,2n_2+1, it holds(p)= 1/2[_n_1,n_2(p)+_n_1+1,n_2+1(p)].This means that the convex combination of the two cubature formulae at the right-hand side is a cubature formula more accurate than the Gauss rule. From now on, we will denote it by^Avg_2n_1+1,2n_2+1(f)= 1/2[_n_1,n_2(f)+_n_1+1,n_2+1(f)],and we will call it averaged Gauss cubature formula. It has positive weights andinvolves (2n_1+1)×(2n_2+1) real and distinct nodes. * By using the scheme ^Avg_2n_1+1,2n_2+1, we can estimate the error R^(G)_n_1,n_2 asR^(G)_n_1,n_2=(f)-_n_1,n_2(f)≃^Avg_2n_1+1,2n_2+1(f)-_n_1,n_2(f) = 1/2[_n_1+1,n_2+1(f)-_n_1,n_2(f)] =:R^[1]_n_1,n_2(f).The computational cost for computing nodes and weights of ^Avg_2n_1+1,2n_2+1 is 2cn_ℓ^2+2O(n_ℓ), that is one half of the cost of the Gauss rule _2n_1,2n_2, which is 4cn_ℓ^2+2O(n_ℓ).We recall that the anti-Gauss cubature rule (<ref>) is a stable formula. This means that if we look at the rule as a linear functional _n_1+1,n_2+1: →ℝ whereis a Banach space, thensup_n_1,n_2_n_1+1,n_2+1 < ∞.This is a consequence of the stability of the univariate anti-Gauss rule, which has also been proved in weighted spaces equipped with the uniform norm in <cit.>, under suitable assumptions; see also <cit.>, wheresuch assumptions are relaxed.In the univariate case it has been proved, under rather restrictive assumptions on the integrand function f,that the Gauss and the anti-Gauss quadrature rules bracket the integral I(f); see <cit.>, <cit.>, and <cit.>. The same result has been proved under much less limiting assumptions in <cit.>, for the solution of second-kind integral equations.In the following, we extend the bracketing condition to bivariate integrals, that is, we give assumptions for which property 2) is valid for a general function f of two variables.Let us expand the integrand function f() in terms of the polynomialsp_n_1,n_2()=p^(1)_n_1(x_1)p^(2)_n_2(x_2),orthogonal with respect to the weight function w(), in the formf() = ∑_i=0^∞∑_j=0^∞α_i,j p_i,j(),whereα_i,j = (b_0^(1) b_0^(2))^-1/2∫_ f() p_i,j() w() d.Let us assume that the coefficients α_i,j in (<ref>) converge to zero sufficiently rapidly, and the following relation holds true(-_n_1,n_2)(f)= -S_n_1,n_2 + ^(1)_n_1,n_2, (-_n_1+1,n_2+1)(f)= S_n_1,n_2 + ^(2)_n_1,n_2,withmax(|^(1)_n_1,n_2|,|^(2)_n_1,n_2|)<|S_n_1,n_2|,whereS_n_1,n_2 = √(b_0^(2))∑_i=2n_1^2n_1+1α_i,0 G^(1)_n_1(p^(1)_i) + √(b_0^(1))∑_j=2n_2^2n_2+1α_0,j G^(2)_n_2(p^(2)_j),with G^(ℓ)_n_ℓ defined by (<ref>). The terms ^(1)_n_1,n_2 and ^(2)_n_1,n_2 depend on both f and the quadrature formulae involved; their expression will be given in the proof.Then, either_n_1,n_2(f) ≤(f) ≤_n_1+1,n_2+1(f) or_n_1+1,n_2+1(f) ≤(f) ≤_n_1,n_2(f). From (<ref>),(f) = α_0,0(b_0^(1) b_0^(2))^1/2.Substituting (<ref>) in (<ref>) yields_n_1,n_2(f) = ∑_i=0^∞∑_j=0^∞α_i,j G^(1)_i G^(2)_j,where G^(ℓ)_i=G^(ℓ)_n_ℓ(p^(ℓ)_i), ℓ=1,2. Then, exploiting the degree of exactness of G^(ℓ)_i we obtain(-_n_1,n_2)(f) = -S_n_1,n_2 + ^(1)_n_1,n_2,with^(1)_n_1,n_2 =-∑_i=2n_1^2n_1+1∑_j=2n_2^2n_2+1α_i,j G^(1)_i G^(2)_j -∑_i=2n_1+2^∞[ α_i,0√(b_0^(2)) +∑_j=2n_2^2n_2+1α_i,j G^(2)_j ] G^(1)_i -∑_i=2n_1+2^∞∑_j=2n_2+2^∞α_i,j G^(1)_i G^(2)_j -∑_j=2n_2+2^∞[ α_0,j√(b_0^(1)) +∑_i=2n_1^2n_1+1α_i,j G^(1)_i ] G^(2)_j. Now, substituting (<ref>) in (<ref>) leads to_n_1+1,n_2+1(f) = ∑_i=0^∞∑_j=0^∞α_i,j A^(1)_i A^(2)_j,where A^(ℓ)_i=A^(ℓ)_n_ℓ+1(p^(ℓ)_i), ℓ=1,2. 
The definition of the anti-Gauss rule implies thatA^(ℓ)_n_ℓ+1(p) = 2I(p) - G^(ℓ)_n_ℓ(p) = -G^(ℓ)_n_ℓ(p),for any polynomial p of degree larger than zero and smaller or equal to 2n_ℓ+1. By applying this property and a similar argument as before, we have(-_n_1+1,n_2+1)(f) = S_n_1,n_2 + ^(2)_n_1,n_2,with^(2)_n_1,n_2 =-∑_i=2n_1^2n_1+1∑_j=2n_2^2n_2+1α_i,j G^(1)_i G^(2)_j -∑_i=2n_1+2^∞[ α_i,0√(b_0^(2)) -∑_j=2n_2^2n_2+1α_i,j G^(2)_j ] A^(1)_i -∑_i=2n_1+2^∞∑_j=2n_2+2^∞α_i,j A^(1)_i A^(2)_j -∑_j=2n_2+2^∞[ α_0,j√(b_0^(1)) -∑_i=2n_1^2n_1+1α_i,j G^(1)_i ] A^(2)_j. The above relations show that when assumption (<ref>) is satisfied, there is a change of sign in the errors produced by the Gauss and anti-Gauss rules. This proves the assertion. In the next two examples, we give a practical illustratation of the theoretical properties of the cubature error. Let us consider the following integral∫_-1^1 ∫_-1^1 |sin(1-x_1)|^9/2 (1+x_1+x_2) w(x_1,x_2) dx_1 dx_2,where w is the weight function defined in (<ref>) with α_1=β_1=-1/2 and α_2=β_2=0. The integrand function is smooth with respect to the variable x_2, whereas only its first four derivatives with respect to x_1 are continuous. Hence, it is sufficient to use few points (for instance n_2=8) to approximate the integral in x_2. In Table <ref> we report the cubature errors for increasing values of n_1. In addition to the cubature error of the Gauss and anti-Gauss rule, we also giveR^(Avg)_n_1,n_2(f)=(f)-^Avg_2n_1+1,2n_2+1(f).From the third and fourth columns, we can see that the error provided by the anti-Gauss rule is of the same magnitude of the error given by the Gauss rule and opposite in sign. This improves the accuracy of the averaged rule; see fifth column. The last column shows that formula ^Avg_2n_1+1,2n_2+1(f) is a good estimate for the Gauss rule error. The graph on the left in Figure <ref> displays the two terms of inequality (<ref>) for n_1=1,…,30 and n_2=8. It shows that the assumption of Theorem <ref> is verified, ensuring the change of sign in the errors of the two cubature rules.Let us consider integral (<ref>) withf(x_1,x_2)= x_1 |cos(1/2-x_1)|^3/2+x_2 |sin(1+x_2)|^3/2,andw(x_1,x_2)=√(1-x_1^2/1-x_2). In this case, the integrand function has a low smoothness with respect to both variables. Then, to obtain a good approximation we need to increase both n_1 and n_2. In Table <ref>, we can see the computational advantage of the averaged rule with respect to the Gauss scheme. To obtain an error of the order 10^-13 we have two options: we may apply the averaged rule with n_1=n_2=128, and this requires n_1n_2+(n_1+1)(n_2+1)=33.025 function evaluations, or we may use the Gauss cubature formula with n_1=n_2=256. In this case, we have to perform n_1n_2=65.536 function evaluations.The graph on the right in Figure <ref> shows that for some values of n_1=n_2 the assumption (<ref>) of Theorem <ref> is violated. However, numerical experiments show that the change of sign in the error always happens. In particular, the graph shows that inequality (<ref>) is not verified when n_1=n_2=20, but we have R^(G)_20,20(f)=-1.54· 10^-07 and R^(A)_20,20(f)=1.56· 10^-07. The assumption (<ref>) is undoubtedly restrictive, but it is only a sufficient condition for the bracketing of the solution. In <cit.> a less restrictive assumption has been given, in the univariate case, for the Chebychev weight of the first kind. The following corollary extends that result to bivariate integrals. Let α_i=β_i=-1/2 in (<ref>). 
Then, ifmax(|^(1)_n_1,n_2|,|^(2)_n_1,n_2|)< |α_2n_1,0+α_0,2n_2 |,holds true for n_1 and n_2 large enough, where^(1)_n_1,n_2= √(2)α_2n_1,2n_2+ ∑_k_1=2^∞ (-1)^k_1(α_2n_1 k_1,0- √(2) α_2n_1 k_1,2 n_2) = + √(2)∑_k_1=2^∞∑_k_2=2^∞(-1)^k_1+k_2α_2 n_1 k_1, 2 n_2 k_2 + ∑_k_2=2^∞ (-1)^k_2(α_0, 2n_2 k_2- √(2) α_2n_1,2 n_2 k_2),and^(2)_n_1,n_2= √(2)α_2n_1,2n_2+ ∑_k_1=2^∞(α_2n_1 k_1,0+ √(2) α_2n_1 k_1,2 n_2) = + √(2)∑_k_1=2^∞∑_k_2=2^∞α_2 n_1 k_1, 2 n_2 k_2 + ∑_k_2=2^∞(α_0, 2n_2 k_2+ √(2) α_2n_1,2 n_2 k_2),then the statement of Theorem <ref> holds true. The identityG_n(p_i^(ℓ))=(-1)^k√(2 π),if i=2nk, 0,otherwise,reported in proof of Corollary 1 in <cit.>, allows us to obtain a simplified expression for the terms S_n_1,n_2, ^(1)_n_1,n_2 and ^(2)_n_1,n_2 given in Theorem <ref>, that is,S_n_1,n_2 = √(2)π (α_2n_1,0+α_0,2n_2), ^(1)_n_1,n_2 = √(2)π^(1)_n_1,n_2, ^(2)_n_1,n_2 = √(2)π^(2)_n_1,n_2.By applying Theorem <ref>, we conclude the proof. We remark here that, for the Chebychev case, the number of coefficients α_i,j present in the different series terms is much smaller than the ones involved in the completed expression of |^(1)_n_1,n_2| and |^(2)_n_1,n_2| introduced in the proof of Theorem <ref>, simplifying the relation (<ref>).In the next corollary, we further streamline the results in Corollary <ref>. Let us consider α_i=β_i=-1/2 in (<ref>). Then, if|θ_n_1,n_2|< |α_2n_1,0+α_0,2n_2 |,holds true for n_1 and n_2 large enough, where|θ_n_1,n_2| = √(2)|α_2n_1,2n_2|+∑_k_1=2^∞|α_2n_1 k_1,0|+ √(2)|α_2n_1 k_1,2 n_2|+√(2)∑_k_1=2^∞∑_k_2=2^∞ |α_2 n_1 k_1, 2 n_2 k_2| + ∑_k_2=2^∞ |α_0, 2n_2 k_2|+ √(2)|α_2n_1,2 n_2 k_2|,then the statement of Theorem <ref> holds true.By using the triangle inequality and taking into account the hypothesis, we havemax(|^(1)_n_1,n_2|,|^(2)_n_1,n_2|) ≤ |θ_n_1,n_2| ≤ |α_2n_1,0+α_0,2n_2 |,which yields the assertion, by virtue of Theorem <ref>.§ NYSTRÖM METHODS AND THE AVERAGED NYSTRÖM INTERPOLANT The aim of this section is to approximate the solution of (<ref>) by an interpolant function whose construction is based on Gauss and anti-Gauss cubature rules (<ref>) and (<ref>).If the right hand side of (<ref>) have algebraic singularities at ± 1, the solution inherits the same singularities. The same happens if the kernel is singular at ± 1 with respect to the external variables (y_1,y_2). Therefore, we solve the equation in a suitable weighted space. Let us introduce the weight functionu() =u_1(x_1) u_2(x_2),withu_i(x_i)=(1-x_i)^γ_i (1+x_i)^δ_i,γ_i,δ_i ≥ 0,i=1,2.We search for the solution of (<ref>) in the space C_u of all functions f continuous in the interior of the squareand such thatlim_x_1 →± 1 (fu)(x_1,x_2)=0, ∀ x_2 ∈ [-1,1],lim_x_2 →± 1 (fu)(x_1,x_2)=0, ∀ x_1 ∈ [-1,1],endowed with the normf_C_u=fu_∞=sup_∈ |(fu)()|.If γ_i=δ_i=0 for i=1,2, then C_u coincides with the set of all continuous functions on the square, i.e., C_u ≡ C(). If the function f has one or more singularities on the boundary of , then the corresponding parameter γ_i or δ_i is set to a positive value in order to compensate the singularity.This approach amounts to solving the weighted equation(fu)() - ∫_k(,) u()/u() (fu)() w() d = (gu)(),in the space C() of continuous functions on the square.To deal with smoother functions having some discontinuous derivatives on the boundary of , we introduce the Sobolev-type spaceW^r_u={f∈ C_u: f_x_i^(r)φ^ru_∞<∞, i=1,2},where φ(z)=√(1-z^2). The superscript (r) denotes the rth derivative of the univariate function f_x_i, obtained by fixing either x_1 or x_2 in the function f. 
We equip W^r_u with the normf_W^r_u=fu_∞+max_i=1,2f_x_i^(r)φ^ru_∞. The error of best polynomial approximation in C_u can be defined asE_m,n(f)_u=inf_p∈ℙ_m,n[f-p]u_∞. From now on, the symbolwill denote a positive constant and we will use the notation ≠(a,b,…) to say thatis independent of the parameters a,b,…, and =(a,b,…) to say that it depends on them. Moreover, if A,B>0 are quantities depending on some parameters, we will write A ∼ B, if there exists a positive constant ≠(A,B) such that B/≤ A ≤ C B.Next proposition gives an estimate for the above error in Sobolev-type spaces. For each f∈ W^r_u, it holdsE_m,n(f)_u ≤[ 1/m^r + 1/n^r] ·max_i=1,2f_x_i^(r)φ^ru_∞,where ≠(m,n,f).Following <cit.>, one hasE_m,n(f)_u ≤[sup_x_2 ∈ [-1,1] u_2(x_2) E_m(f_x_2)_u_1+ sup_x_1 ∈ [-1,1] u_1(x_1) E_n(f_x_1)_u_2],where E_ℓ(g)_u_i is the u_i-weighted best approximation error of the univariate function g by a polynomial of degree at most ℓ; see <cit.>. Then, by the inequalityE_ℓ(g)_u_i≤/ℓ^rg^(r)φ^ru_i_∞,from <cit.>, we obtain the assertion. To ease the exposition, we introduce a multi-index notation, where an index may take integer vectorial values. Such indexes will be denoted by bold letters. Let =(n_1,n_2) and consider the set of bi-indicesℑ_={=(i_1,i_2) : 1≤ i_1≤ n_1,1≤ i_2≤ n_2}.For ∈ℑ_, consistently with the notation =(x_1,x_2), we define _=(x_i_1^(1),x_i_2^(2)), where x_i_1^(1) and x_i_2^(2) are the Gaussian nodes introduced in the cubature rule (<ref>), which we will denote by _.Let us now write the classical Nyström method for the integral equation (<ref>), based on approximating the operator K by the Gauss cubature formula _. This leads to the functional equation(I-K_)f_=g,where f_ is an unknown function approximating f and(K_ f)() = ∑_j_1=1^n_1∑_j_2=1^n_2λ_j_1^(1)λ_j_2^(2)k(_,) f(_),where =(j_1,j_2)∈ℑ_.By multiplying both sides of (<ref>) by the weight function u and collocating at the points _, ∈ℑ_, we obtain the linear system∑_j_1=1^n_1∑_j_2=1^n_2[δ_i_1,j_1δ_i_2,j_2-λ_j_1^(1)λ_j_2^(2) u(_)u(_)k(_,_)] a_j_1,j_2= (gu)(_),where δ_i,k is the Kronecker symbol, and a_j_1,j_2=(fu)(_) are the unknowns. By defining δ_,=δ_i_1,j_1δ_i_2,j_2, λ_=λ_j_1^(1)λ_j_2^(2), and collapsing the two summations into a single one, (<ref>) can be rewritten as∑_∈ℑ_[δ_,-λ_ u(_)u(_)k(_,_)] a_= (gu)(_), ∈ℑ_.This corresponds to the Nyström method for the weighted equation (<ref>).We remark that the quantities k(_,_) are entries k_i_1,i_2,j_1,j_2 of a fourth order tensor ∈^I_1× I_2× I_1 × I_2, where I_k={1,2,…,n_k}, k=1,2; see <cit.>. Moreover, the tensor-matrix product in (<ref>) and the tensor-tensor product that will be used in next section corresponds to the so-called Einstein product <cit.>. We prefer to adopt the multi-index formalism, used, e.g., in <cit.>, because it is closer to the usual matrix notation.The solution of system (<ref>) provides the unique solution of equation (<ref>) and vice-versa. 
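For concreteness, the collocation system above can be assembled and solved as in the sketch below. This is again our own illustrative code, not the authors': it reuses jacobi_recurrence and gauss_jacobi from the earlier sketch and assumes a kernel callable with signature kernel(x1, x2, y1, y2) for k(x, y). The anti-Gauss variant differs only in how the nodes and weights are produced, namely through the bordered Jacobi matrix Ψ whose last off-diagonal entry is inflated to √(2 b_n).

```python
import numpy as np

def anti_gauss_jacobi(n, alpha, beta):
    """(n+1)-point anti-Gauss rule: eigen-decomposition of the bordered Jacobi
    matrix Psi whose last off-diagonal entry is sqrt(2*b_n)."""
    a, b = jacobi_recurrence(n + 1, alpha, beta)   # a_0..a_n, b_0..b_n
    off = np.sqrt(b[1:])                           # sqrt(b_1) .. sqrt(b_n)
    off[-1] = np.sqrt(2.0 * b[-1])                 # the only modified entry
    Psi = np.diag(a) + np.diag(off, 1) + np.diag(off, -1)
    eta, V = np.linalg.eigh(Psi)
    mu = b[0] * V[0, :]**2
    return eta, mu

def nystrom_system(kernel, g, u, n1, n2, ab1, ab2, anti=False):
    """Collocation system of the weighted Nystrom method:
       sum_j [delta_ij - lam_j u(x_i)/u(x_j) k(x_j, x_i)] a_j = (g u)(x_i)."""
    rule = anti_gauss_jacobi if anti else gauss_jacobi
    x1, l1 = rule(n1, *ab1)
    x2, l2 = rule(n2, *ab2)
    X1, X2 = np.meshgrid(x1, x2, indexing="ij")
    p1, p2 = X1.ravel(), X2.ravel()                # collocation nodes, flattened
    lam = np.outer(l1, l2).ravel()                 # product Christoffel numbers
    uu = u(p1, p2)
    # K[i, j] = k(x_j, x_i): the first argument pair is the integration node x_j
    K = kernel(p1[None, :], p2[None, :], p1[:, None], p2[:, None])
    F = np.eye(lam.size) - (uu[:, None] / uu[None, :]) * lam[None, :] * K
    rhs = g(p1, p2) * uu
    return F, K, lam, uu, rhs

# Direct solution: a_j approximates (f u) at the j-th collocation node.
# F, K, lam, uu, rhs = nystrom_system(kernel, g, u, 32, 8, ab1, ab2)
# a = np.linalg.solve(F, rhs)
```

Calling nystrom_system twice, with anti=False and anti=True, produces the two sets of samples from which the Gauss and anti-Gauss Nyström interpolants are built.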
In fact, if a^*_ is a solution of (<ref>), then we can determine the weighted solution of (<ref>) by the so-called Nyström interpolant(f_ u)()=(gu)()+ u() ∑_∈ℑ_λ_/u(_)k(_,)a^*_.Vice-versa, if we evaluate (<ref>) at the cubature points we obtain the solution of (<ref>).Now, we apply the Nyström method to the anti-Gaussian cubature formula _+1, with 1=(1,1), as an approximation for the operator K, obtaining the equation(I-K_+1)f̃_+1=g,where f̃_+1 is the unknown and(K_+1 f)() = ∑_∈ℑ_+1μ_k(_,) f(_),with μ_=μ_j_1^(1)μ_j_2^(2) and _=(η_j_1^(1),η_j_2^(2)).A simple collocation of equation (<ref>) at the knots _ and a multiplication of both sides by u(_) leads to the linear system∑_∈ℑ_+1[δ_,-μ_ u(_)u(_)k(_,_)]ã_= (gu)(_), ∈ℑ_+1.where ã_=(fu)(_) are the unknowns.If ã^*_ is the solution of (<ref>), then the Nyström interpolant(f̃_+1 u)()=(gu)()+ u() ∑_∈ℑ_+1μ_/u(_)k(_,) ã^*_,solves (<ref>), and hence approximates the solution of (<ref>). Vice-versa, if we evaluate the above function at the cubature points we obtain the solution of (<ref>). Let {I+K}={0} in C_u and let the parameters of the weight u given in (<ref>) be such that0 ≤γ_i < α_i+1,0 ≤β_i < δ_i+1, i=1,2.We also assume thatg ∈ W_u^r, sup_∈k__W^r_u < ∞, sup_∈ u() k__W^r < ∞.Then, there exist a sufficiently large bi-index _0 such that, for ≥_0, equations (<ref>) and (<ref>) admit a unique solution f^*_∈ C_u and f̃^*_+1∈ C_u, respectively. Moreover, if f^* is the unique solution of (<ref>), thenmax{(f^*-f^*_)u_∞, (f^*-f̃^*_+1)u_∞}≤[ 1/n_1^r + 1/n_2^r] ·max_i=1,2f_x_i^*(r)φ^ru_∞,where ≠(,f). The stability of the Nyström method based on the Gauss rule as well as the error estimate (<ref>) has been proved in <cit.> (see also <cit.> for the case u ≡ 1). The proof of the assertion related to the Nyström method based on the anti-Gauss rule follows the same line of the corresponding theorem of <cit.>; see also <cit.>. Let f^* be the unique solution of (<ref>). Consider the orthogonal expansion of the kernel k multiplied by f^* and its approximations f_ and f̃_+1k(,) f^*()= ∑_i=0^∞∑_j=0^∞α_i,j() p_i,j(), α_i,j()= (b_0^(1) b_0^(2))^-1/2(K(f^*p_i,j))(), k(,) f_()= ∑_i=0^∞∑_j=0^∞α_i,j^() p_i,j(), α_i,j^()= (b_0^(1) b_0^(2))^-1/2(K(f_ p_i,j))(), k(,) f̃_+1()= ∑_i=0^∞∑_j=0^∞α̃_i,j^+1() p_i,j(),α̃_i,j^+1()= (b_0^(1) b_0^(2))^-1/2(K(f̃_+1 p_i,j))().Then, under the assumption of Theorem <ref>,lim_n_1,n_2→∞ [α_i,j^-α_i,j]u _∞ = 0 andlim_n_1,n_2→∞ [α̃_i,j^+1-α_i,j]u _∞ = 0. The proof follows the same line of Theorem 4 from <cit.>.Let us assume that inequality (<ref>) is satisfied and the assumptions of Theorem <ref> are verified. Then, for any ∈, eitherf̃_+1() ≤ f^*() ≤ f_() or f_() ≤ f^*()≤f̃_+1(). By (<ref>), f=Kf+g. Proceeding similarly with equations (<ref>) and (<ref>), we deduce that to prove the assertion it is sufficient to state either of the following two relations(K_+1f̃_+1)() ≤ (Kf^*)() ≤ (K_ f_)() (K_ f_)() ≤ (Kf^*)() ≤ (K_+1f̃_+1)().By virtue of the assumptions and Corollary <ref>, the above inequalities follow by applying Theorem <ref> to the function h_()=k(,)f(). Once we have proven under which conditions the unique solution f^* of the integral equation is bracketed by the Nyström interpolants for any ∈, we can introduce the averaged Nyström interpolant𝔣_() = 12(f_()+f̃_+1()), ∈,which yields a better approximation of the solution. § SOLVING THE LINEAR SYSTEMS In this section we describe a tensor representation of systems (<ref>) and (<ref>), we study their condition number, and propose numerical methods for their resolution. 
In the following, the product between two tensors , , and between a tensorand a matrix a, must be considered in the multi-index sense, that is, ()_, = ∑_∈ℑ__,_,,(a)_ = ∑_∈ℑ__,a_, ,∈ℑ_.The inverse tensor is such that ^-1=, where ()_,=δ_,. Moreover, the infinity norm _∞ is defined in the usual operatorial sense, and the condition number is κ_∞()=_∞^-1_∞.Let us introduce the notationΛ_=(λ_)_∈ℑ_, with (Λ_)_, = λ_,=, 0≠.We give a compact representation of systems (<ref>) and (<ref>),(_-___^-1Λ_)å =, (_+1-_+1_+1_+1^-1Λ_+1) å = ,where_ = k(_,_) =k(x_j_1^(1),x_j_2^(2),x_i_1^(1),x_i_2^(2)),_ = (u(_))_∈ℑ_, and = ((gu)(_))_∈ℑ_. Matrices _+1, _+1, Λ_+1, and the arrayare defined similarly.In the next theorem we state the numerical stability of the Nyström method. Under the assumptions of Theorem <ref>, it holdsκ_∞(_ - ___^-1Λ_) ≤, κ_∞(_+1 - _+1_+1_+1^-1Λ_+1) ≤,whereis independent of .The proof follows the same line of Theorem 3.1 from <cit.>. §.§ The general case Let us first solve linear systems (<ref>) and (<ref>) in the general case, that is, whenthe coefficient tensor is not structured. For the sake of clarity and brevity, from now on we will only refer to system (<ref>) and set_=_-___^-1Λ_.The same considerations will be valid for system (<ref>) and the corresponding tensor _+1. We note that even if the kernel is a symmetric function like, for instance, k(,) = ^2+^2+, the resulting coefficient tensor may not be not symmetric, that is, (_)_,≠(_)_,, due to the presence of the weight function u and the Christoffel numbers. Before solving system (<ref>), we rewrite it in matrix form, i.e., we transform the matrices containing the unknowns and the right-hand side into vectors, and represent the multi-index tensor as a standard matrix. To do this, we employ the lexicographical order to obtain the matrixF_N ∈^N × N given by(F_N)_ℓ,k=(_)_,, ℓ=i_1+(i_2-1)n_1, k=j_1+(j_2-1)n_1.This process is known as matricization or unfolding <cit.>. A similar procedure is applied to arrays a and h to obtainvectors a̅,h̅∈^N, with N=n_1n_2, defined asa̅_k= a_j_1,j_2, h̅_k= h_j_1,j_2,k=j_1+ (j_2-1)n_1,for j_1=1,…,n_1, j_2=1,…,n_2, and k=1,…,N,so that the system becomesF_Na̅=h̅. To solve system (<ref>), we employ the generalized minimal residual (GMRES) method <cit.>. The GMRES iterative method for the solution of the linear system F_Na̅=h̅ is based on the Arnoldi partial factorizationF_N Q_r=Q_r+1H_r+1,r,r=1,2,…,N,where Q_r=[q_1,q_2,…,q_r] has orthonormal columns, with q_1=h̅/h̅, and H_r+1,r is an Hessenberg matrix; · denotes the vector 2-norm.At the rth iteration, GMRES approximates the solution of the system asa̅^(r) =min_a̅∈ K_rF_Na̅-h̅^2 =min_y∈ℝ^rH_r+1,r y-h̅_1^2,where K_r={h̅,F_Nh̅,…,F_N^r-1h̅}={q_1,…,q_r} is a Krylov space of dimension r.Once the tensor _ has been computed, this requires 2N^2 floating point operations to assemble the matrix F_N and a matrix-vector product at each iteration, leading to a computational cost of O((2+r)N^2).The complexity can be slightly reduced by avoiding to assemble F_N and performing the product F_Nq_k at each iteration asq_k-u∘[K_N(d∘q_k)],where u_=u(x_), d_=λ_/u_,K_N is the matricization of _, and ∘ denotes the component-wise Hadamard product (a∘b)_=a_b_. In this case the computational cost is O(rN^2). 
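The factored matrix–vector product above is easy to hand to an off-the-shelf Krylov solver. The sketch below is our own code; it takes the kernel matrix, product Christoffel numbers, weight values, and right-hand side as assembled in the previous sketch, and wraps the product q − u∘[K(d∘q)] in a SciPy LinearOperator so that the matrix F_N is never formed explicitly.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def gmres_fm(K, lam, uu, rhs):
    """Solve (I - diag(u) K diag(lam/u)) a = rhs without assembling the matrix:
    each Krylov step costs one product with K plus two Hadamard products."""
    d = lam / uu                                   # d_j = lam_j / u(x_j)

    def matvec(q):
        return q - uu * (K @ (d * q))              # q - u o [ K (d o q) ]

    N = rhs.size
    a, info = gmres(LinearOperator((N, N), matvec=matvec), rhs)
    if info != 0:
        raise RuntimeError(f"GMRES did not converge (info={info})")
    return a
```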
We will denote this approach with a factored coefficient matrix by GMRES-FM.§.§ The case of a separable kernel Let us assume that the kernel in (<ref>) is separable, that is,k(,) = k(x_1,x_2,y_1,y_2) = k_1(x_1,y_1) k_2(x_2,y_2).This means that _=K^(1)_n_1⊗ K^(2)_n_2, where K^(1)_n_1 and K^(2)_n_2 are two square matrices of dimension n_1 and n_2, respectively, with(K^(1)_n_1)_i_1,j_1 = k_1(x_j_1^(1),x_i_1^(1)),(K^(2)_n_2)_i_2,j_2 = k_2(x_j_2^(2),x_i_2^(2)),and ⊗ denotes the Kronecker tensor product, that is,(_)_, = (K^(1)_n_1)_i_1,j_1(K^(2)_n_2)_i_2,j_2. Keeping into account that u()=u_1(x_1) u_2(x_2) and λ_=λ_j_1^(1)λ_j_2^(2), the system (<ref>) becomes∑_j_1=1^n_1∑_j_2=1^n_2[δ_i_1,j_1δ_i_2,j_2 - ϕ^(1)_i_1,j_1ϕ^(2)_i_2,j_2] a_j_1,j_2 = h_i_1,i_2,for i_1=1,…,n_1 and i_2=1,…,n_2, withϕ^(ℓ)_i_ℓ,j_ℓ = λ^(ℓ)_j_ℓ u_ℓ(x^(ℓ)_i_ℓ)/u_ℓ(x^(ℓ)_j_ℓ)(K^(ℓ)_n_ℓ)_i_ℓ,j_ℓ, ℓ=1,2.This amounts to solving the Stein matrix equationΦ^(1) A (Φ^(2))^T - A + H = 0,where A=(a_j_1,j_2), H=(h_i_1,i_2), and Φ^(ℓ)=(ϕ^(ℓ)_i_ℓ,j_ℓ), for ℓ=1,2. There is a wide literature on numerical methods for solving this kind of matrix equations, some classical references are <cit.>. We will use thefunction of MATLAB.The structure of the Stein equation (<ref>) also allows for speeding up the GMRES method and reducing the storage space. Indeed, the product F_Nq_k can be expressed, at each iteration, in the formQ_k - Φ^(1) Q_k (Φ^(2))^T,where the vector q_k is the unfolding of the matrix Q_k. In this way, the number of floating point operations of a matrix-vector product decreases from O(N^2) to O(N), as well as the storage space. This implementation will be denoted in the following by GMRES-SK. § NUMERICAL RESULTS In this section we numerically solve several integral equations of the type (<ref>) to investigate the performance of the method presented in Section <ref> and Section <ref>, and support the theoretical analysis. For each example, we first identify the space C_u, in which the solution is sought, according to Theorem <ref>, solve systems (<ref>) and (<ref>), compute the Nyström interpolant (<ref>) and (<ref>), and calculate the averaged Nyström interpolant (<ref>).The algorithms were implemented in Matlab version 9.10 (R2021a), and the numerical experiments were carried out on an Intel(R) Xeon(R) Gold 6136 server with 128 GB of RAM memory and 32 cores, running the Linux operating system.To test the accuracy, we compute the relative errorsξ^(G)_= f^*-f__C_u/f^*_C_u, ξ^(A)_= f^*-f̃_+1_C_u/f^*_C_u, ξ^(Avg)_=f^*-𝔣__C_u/f^*_C_u, where the infinity norm is approximated on a grid of 50 × 50 points and f^* is the exact solution of the equation. If the solution is unknown, then we consider as exact the approximated solution obtained by the Nyström interpolant (<ref>) for sufficiently large n_1, n_2. The adopted value of =(n_1,n_2) will be specified case by case. In our tests, we consider both separable and non-separable kernels. When a low regularity of the kernel and/or the right-hand side yields the necessity of increasing the size of the linear system, we explore the efficiency of the proposed approaches for its solution methods, both in terms of accuracy and of computational time. In some examples, we report the ∞-norm condition numbers κ_∞^(G) and κ_∞^(A) of systems (<ref>) and (<ref>), respectively, to confirm the theoretical analysis of Theorem <ref>.Let us first test our method on an integral equation whose exact solution is known. 
Consider the equationf(y_1,y_2)- ∫_-1^1 ∫_-1^1 x_2 y_2 ^x_1+y_1 f(x_1,x_2) dx_1 dx_2 = g(y_1,y_2),withg(y_1,y_2) = cos(y_1+y_2)-(cos2+^2(sin2-1))y_2 ^y_1-1,whose solution is f(x_1,x_2)=cos(x_1+x_2). Since the kernel and right-hand side are smooth functions, we search for the solution in the space C_u with u ≡ 1, i.e., we set γ_i=δ_i=0 for i=1,2.Table <ref> reports the relative errors for increasing values of n_1=n_2. As expected, since the kernel and right-hand side are analytic functions, it shows a fast convergence.The averaged Nyström interpolant allows to improve accuracy up to four significant digits, with respect to the two Nyström interpolants based, respectively, on the Gauss and anti-Gauss rules. Since the size of the system is small, in this example we solve the linear systems by Gauss's method with column pivoting. As highlighted by the last two columns of Table <ref>, the two systems are very well conditioned.A plot of the pointwise errors for the Gauss and the anti-Gauss interpolants is reported in Figure <ref>, for =(4,4), in two different perspectives. It can be observed that the errors provided by the two cubature rules are of opposite sign, confirming the assertion of Theorem <ref>. In this example, we solve the integral equation f(y_1,y_2)- 3/10∫_-1^1 ∫_-1^1 sin(x_2+x_1)(1+x_1+y_2) f(x_1,x_2) w(x_1,x_2) dx_1 dx_2 = g(y_1,y_2),withg(y_1,y_2) = log(2+y_2)sin(√(1-y_1)),where w(x_1,x_2)=√(1-x_1^2) (α_1=1/2, β_1=1/2, α_2=0, and β_2=0). According to Theorem <ref>, we fix γ_1=1, δ_1=5/4, γ_2=2/3, and δ_2=2/3 for the weight u of the function space. Here, the exact solution f^*is not available, so we approximate it by the Nyström interpolant based on the Gaussian formula with =(700,32). The kernel is a smooth non-separable function whereas, for each fixed y_2, g_y_2(y_1) ∈ W_3; therefore, by virtue of (<ref>), the expected order of convergence is O(n_1^-3).Note that since the right-hand side has a different degree of smoothness with respect to the two variables, we can use a number of nodes n_2 much smaller than n_1, thus reducing the number of equations of the system. However, the low smoothness of the right-hand side causes n_1 to grow. So the size of the linear systems is moderately large, and we solve them by the GMRES-FM method, that is, the implementation with a factored coefficient matrix.Table <ref> reports the obtained relative errors. In this example the good performance of the averaged interpolant in term of accuracy is evident. To compute it, when =(128,16), we have to solve two linear systems of order 128· 16=2048, with an error of order 10^-11. The same error is produced by the Nyström method based on the Gauss rule, as reported in Table <ref>, but this requires to solve a system of order 256· 16=4096, and so a much larger complexity and storage space.We see that GMRES-FM converges in few iterations (reported, in parentheses, in the second and third columns) and it is clear that the order of the system has no effect on the speed of convergence. 
In accordance with Theorem <ref>, this happens because the condition number of the coefficient matrices is small and does not depend on the size of the systems; see the last two columns of Table <ref>.Let us now consider the following equation with a separable kernelf(y_1,y_2)- 3/10∫_-1^1 ∫_-1^1 ^-(1+x_1)(1+y_1)-(1+y_2)(1+x_2)f(x_1,x_2) w(x_1,x_2) dx_1 dx_2 = g(y_1,y_2),with a right-hand side characterized by a low degree of smoothness with respect to both variables g(y_1,y_2) = cos(3+y_2)(1+y_2)^3/2 sin((1-y_1)^3/2),where w(x_1,x_2)=√((1-x_1^2)(1-x_2^2)) with α_1=β_1=1/2 and α_2=β_2=1/2. For the weight u, we set γ_1=δ_1=5/4 and γ_2=δ_2=5/4.In this test, we investigate the computational time required for solving the linear systems by Gauss's method (PA=LU) and the four approaches described in the previous section: GMRES, GMRES-FM, where the coefficient matrix is multiplied in a factored form, GMRES-SK, specially suited for the case of a separable kernel, and the solution of Stein's equation (<ref>) by thefunction of MATLAB.As highlighted in Table <ref>, the application of Gauss's method, the standard implementation of GMRES, and GMRES-FM, are unfeasible when the system becomes moderately large. Moreover, the first three methods go out of memory when n_1,n_2>128. On the contrary, GMRES-SK has a good performance and the computational time is comparable with that of MATLAB solver function . Both method can be applied for large problem dimensions. Table <ref> reports the relative errors with respect to the approximation obtained setting =(512,512), which we consider exact. The linear system is solved by the GMRES-SK method. The averaged Nyström interpolant provides 2 additional significant digits with respect to the base interpolants starting from =(4,4), until it reaches machine precision for =(128,128), while the approximation based on the standard Gauss cubature rule produces the same approximation for =(256,256).It is also important to remark that, if the assertion of Theorem <ref> holds, the halved difference between theGauss and anti-Gauss interpolants yields a bound for the approximation error of the averaged interpolant, that is,f^*-𝔣__∞≤f_-f̃_+1_∞/2.Such a bound is not directly available when a single formula is employed.In this example, we analyze the effect of a smooth right-hand side anda kernel which is not smooth with respect to the first variable. Hence, we apply our method to the equationf(y_1,y_2)-1/7∫_-1^1 ∫_-1^1 (x_2+y_2)|cos (1+x_1)|^9/2 f(x_1,x_2)w(x_1,x_2) dx_1 dx_2 = g(y_1,y_2),whereg(y_1,y_2) = ^y_1siny_2,w(x_1,x_2)=√(1-x_2^2)/√(1-x_1) (α_1=-1/2, β_1=0, α_2=1/2, and β_2=1/2), and we fix γ_1=0, δ_1=1/4, γ_2=1/2, and δ_2=5/4, for the weight u of the function space defined in (<ref>). Also in this case the exact solution f(x_1,x_2) is not available, so we approximate it by the Nyström interpolant based on the Gauss rule with =(512,32).Table <ref> displays in the second and third columns the numerical errors provided by the Gauss and anti-Gauss Nyström methods. The results are better than the theoretical estimate, which is of order O(n_1^-4). The accuracy of the averaged interpolant improves of 1–2 significant digits, until machine precision is reached.§ CONCLUSION AND EXTENSIONS This paper introduces a new anti-Gauss cubature rule and proposes its application to the resolution of Fredholm integral equations of the second kind defined on the square. 
A Nyström-type method based on Gauss and anti-Gauss cubature rules is developed and analyzed in terms of stability and convergence, and an averaged Nyström interpolant is proposed to better approximate the solution of the problem. Numerical tests investigate the performance of the methods and confirm the computational advantage of the averaged Nyström interpolant, in comparison with the classical approach based on the Gauss rule. Extensions to other averaged cubature formulae are presently being developed by the authors. § ACKNOWLEDGEMENTS L. Fermo and G. Rodriguez are partially supported by Fondazione di Sardegna, Progetto biennale bando 2021, “Computational Methods and Networks in Civil Engineering (COMANCHE)”. The authors are members of the GNCS group of INdAM. L. Fermo is partially supported by INdAM-GNCS 2023 Project “Approssimazione ed integrazione multivariata con applicazioni ad equazioni integrali”. G. Rodriguez is partially supported by the INdAM-GNCS 2023 Project “Tecniche numeriche per lo studio dei problemi inversi e l'analisi delle reti complesse”. P. Díaz de Alba is partially supported by the INdAM-GNCS 2023 Project “Metodi numerici per modelli descritti mediante operatori differenziali e integrali non locali” and gratefully acknowledges Fondo Sociale Europeo REACT EU - Programma Operativo Nazionale Ricerca e Innovazione 2014-2020 and Ministero dell'Università e della Ricerca for the financial support. | http://arxiv.org/abs/2311.15967v1 | {
"authors": [
"Patricia Diaz de Alba",
"Luisa Fermo",
"Giuseppe Rodriguez"
],
"categories": [
"math.NA",
"cs.NA",
"65R20, 65D30, 42C0"
],
"primary_category": "math.NA",
"published": "20231127160923",
"title": "Anti-Gauss cubature rules with applications to Fredholm integral equations on the square"
} |
The left-corner transformation <cit.> is used to remove left recursion from context-free grammars, which is an important step towards making the grammar parsable top-down with simple techniques. This paper generalizes prior left-corner transformations to support semiring-weighted production rules and to provide finer-grained control over which left corners may be moved. Our generalized left-corner transformation (GLCT) arose from unifying the left-corner transformation and speculation transformation <cit.>, originally for logic programming. Our new transformation and speculation define equivalent weighted languages. Yet, their derivation trees are structurally different in an important way: GLCT replaces left recursion with right recursion, and speculation does not. We also provide several technical results regarding the formal relationships between the outputs of GLCT, speculation, and the original grammar. Lastly, we empirically investigate the efficiency of GLCT for left-recursion elimination from grammars of nine languages. <https://github.com/rycolab/left-corner> § INTRODUCTION Grammar transformations are functions that map one context-free grammar to another. The formal language theory literature contains numerous examples of such transformations, including nullary rule removal, rule binarization, and conversion to normal forms, e.g., those of <cit.> and <cit.>. In this work, we study and generalize the left-corner transformation <cit.>. Qualitatively, this transformation maps the derivation trees of an original grammar into isomorphic trees in the transformed grammar. The trees of the transformed grammar will be such that the base subtree of a left-recursive chain (the left corner) is hoisted up in a derivation tree while replacing the left-recursive path to the left corner with a right-recursive path to an empty constituent. <Ref> provides an example. A common use case of the left-corner transformation is to remove left recursion from a grammar, which is necessary for converting the grammar to Greibach normal form and for several top-down parsing algorithms <cit.>. As an additional effect, it reduces the stack depth of top-down parsing <cit.>, which makes it an interesting method for psycholinguistic applications <cit.>. The closely related left-corner parsing strategy has been argued to be more cognitively plausible than alternatives due to its constant memory load across both left- and right-branching structures and low degree of local ambiguity for several languages <cit.>; indeed, empirical evidence has shown that certain left-corner parsing steps correlate with brain activity <cit.> and reading times <cit.>. [Moreover, statistical left-corner parsers have proven themselves empirically effective for several grammar formalisms <cit.>.] This paper uncovers an interesting connection between the left-corner transformation and the speculation transformation <cit.>. [Speculation was originally a transformation for weighted logic programs, which we have adapted to CFGs.]
We show that speculation also hoists subtrees up the derivation tree, as in the left-corner transformation.However, in contrast, it does not remove left recursion.In uncovering the similarity to speculation, we discover that speculation has been formulated with more specificity than prior left-corner transformations: it has parameters that allow it to control which types of subtrees are permitted to be hoisted and which paths they may be hoisted along.We bring this flexibility to the left-corner transformation with a novel generalized left-corner transformation (GLCT).[We note that generalized left-corner parsers also exist <cit.>. However, they are generalized differently to our transformation.]It turns out that the latter functionality is provided by the selective left-corner transformation <cit.>; however, the former is new.We provide several new technical results:[These technical results fill some important gaps in the literature, as prior work <cit.> did not provide formal proofs.] [label=(*)]* We prove that GLCT preserves the weighted languages and that an isomorphism between derivation trees exists (<ref>).* We provide explicit methods for mapping back and forth between the derivation trees (<ref>).* We prove that the set of derivation trees for speculation and GLCT are isomorphic (<ref>).* We prove that our GLCT-based left-recursion elimination strategy removes left recursion (<ref>).Additionally, we empirically investigate the efficiency of GLCT for left-recursion elimination from grammars of nine different languages in <ref>. § PRELIMINARIESThis section provides the necessary background on the concepts pertaining to semiring-weighted context-free grammars that this paper requires. A semiring is a tuple , ⊕, ⊗, , whereis a set and the following hold: * ⊕ is an associative and commutative binary operator with an identity element ∈* ⊗ is an associative binary operator with an identity element ∈* Distributivity: ∀ a,b,c ∈, (a ⊕ b) ⊗ c(a ⊗ c) ⊕ (b ⊗ c) and c ⊗ (a ⊕ b)(c ⊗ a) ⊕ (c ⊗ b)* Annihilation: ∀ a ∈, a ⊗⊗ aThe semiring is commutative if ⊗ is commutative. We highlight a few commutative semirings and their use cases: * boolean {, ⊤}, , , , ⊤: string membership in a language,* nonnegative real _≥ 0∪{∞}, +, ·, 0, 1: the total probability of a string,* viterbi [0,1], max, ·, 1, 0: the weight of the most likely derivation of a string.For further reading on semirings in NLP, we recommend <cit.> and <cit.>.Our work studies weighted context-free grammars (WCFGs), which define a tractable family of weighted languages—called weighted context-free languages—that is frequently used in NLP applications (see, e.g., ).A weighted context-free grammar is a tuple , , , where *is a set of nonterminal symbols*is a set of terminal symbols, ∩ =∅* ∈ is the start symbol*is a bag (multiset) of weighted production rules. Each rule r ∈ is of the formwhere ∈, ∈ (∪)^*, and w ∈. We assume thatis commutative semiring.We will use the following notational conventions: * ,,∈ for nonterminals* a, b, c∈ for terminals* ,,∈^* for a sequence of terminals* α, β, γ∈ (∪) for a terminal or nonterminal symbol* , , ∈ (∪)^* for a sequence of nonterminals or terminals* , , , and , , , for WCFGs.We write r to access the weight of a rule r.When describing a grammar and its rules, we may use the following terms: The size of a grammar is ∑_() ∈ (1 + ||). 
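To make the algebraic setup concrete, here is a small illustrative sketch (ours, not taken from the paper's codebase) of the three semirings listed above, together with a minimal container for weighted rules and grammars; symbols are represented as plain strings purely for convenience.

```python
from dataclasses import dataclass
from typing import Any, Tuple

@dataclass(frozen=True)
class Semiring:
    add: Any    # associative, commutative, with identity `zero`
    mul: Any    # associative, with identity `one`, distributes over `add`
    zero: Any
    one: Any

Boolean = Semiring(add=lambda a, b: a or b, mul=lambda a, b: a and b,
                   zero=False, one=True)     # string membership in a language
Real    = Semiring(add=lambda a, b: a + b, mul=lambda a, b: a * b,
                   zero=0.0, one=1.0)        # total probability of a string
Viterbi = Semiring(add=max, mul=lambda a, b: a * b,
                   zero=0.0, one=1.0)        # weight of the best derivation

@dataclass(frozen=True)
class Rule:
    head: str                  # left-hand side nonterminal
    body: Tuple[str, ...]      # right-hand side string of (non)terminals
    weight: Any                # element of the semiring

@dataclass(frozen=True)
class WCFG:
    nonterminals: frozenset
    terminals: frozenset
    start: str
    rules: tuple               # a bag of Rule objects

    def size(self):
        # sum over rules of (1 + |body|), matching the definition above
        return sum(1 + len(r.body) for r in self.rules)
```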
The arity of a ruleis equal to ||; thus, a rule is nullary if ||=0, unary if ||=1, and so on.For technical reasons, it is often convenient to eliminate nullary rules from the grammar.[<cit.> provides an efficient method to remove nullary rules from semiring-weighted CFGs, which we make use of in our experiments (<ref>).]A derivation is a rooted, (∪)-labeled, ordered tree where each internal node must connect to its children by a production rule.[Note: a derivation may be built by a rule with an empty right-hand side; thus, the leaves may be elements of .When rendering such a derivation, childlessness is marked with .] To access the root label of a derivation , we write . The set of all derivations of is the smallest setsatisfying=∪{[CenterFix, scale=.7, sibling distance=1cm, level distance=1cm, every node/.style=font=] (xK)childnode[yshift=-.4cm]childnodeedge from parent[draw=none]childnode[yshift=-.4cm]; |(→α_1 ⋯α_K) ∈, [scale=.7]α_1∈, …, [scale=.7]α_K∈} The yield ∈^* of a derivationis∙if ∈ = a ∙else: [CenterFix, scale=.7, sibling distance=.75cm, level distance=1cm, every node/.style=font=] (xK)childnode[yshift=-.4cm]childnodeedge from parent[draw=none]childnode[yshift=-.4cm]; [scale=.7]∘⋯∘[scale=.7]where ∘ denotes concatenation. The weight ∈ of a derivationis∙ if ∈: ∙else: [CenterFix, scale=.7, sibling distance=.75cm, level distance=1cm, every node/.style=font=] (xK)childnode[yshift=-.4cm]childnodeedge from parent[draw=none]childnode[yshift=-.4cm]; [→α_1 ⋯α_K⊗; [scale=.7]⊗⋯⊗[scale=.7] ]The weighted language of α∈ (∪) is a function _α: ^* → defined as follows: _α() ⊕_∈[α]()where [α] ∈ = α denotes the subset of containing trees labeled α, and[α]() ∈[α] = denotes those with yield .In words, the value of a string ∈^* in the weighted language _α is the ⊕-sum of the weights of all trees in [α] withas its yield. The weighted language of the grammar is () _(). Given a set of symbols 𝒳, we write [𝒳] as shorthand for ⋃_α∈𝒳[α].Lastly, letdenote the weighted language generated by the tree .[Formally, ()ifelse.]We define the following operations on weighted languages and , for ∈∪: * Union: [ ⊕]() () ⊕()* Concatenation: [ ∘]() ∘ = () ∘() Note that these operations form a (noncommutative) semiring over weighted languages where is the language that assigns weight zero to all strings and is the language that assigns one to the empty string and zero to other strings.The weighted language of may also be expressed as a certain solution[Note that the system of equations does not necessarily have a unique solution.In the case of an ω-continuous semiring, <ref> coincides with the smallest solution to <ref> under the natural ordering <cit.>.] to the following system of equations: _a = _a ∀a∈_ = (β_1 ⋯β_Kw) ∈ w ∘_β_1∘⋯∘_β_K ∀∈ where _a is the weighted language that assigns to the string a and to other strings.We say that α is useless if there does not exist a derivation ∈[] that has a subderivation ' with 'α. We define trimming trim() as removing each useless nonterminal and any rule in which they participate.It is easy to see that trimming does not change the weighted language of the grammar because no useless nonterminals participate in a derivation rooted at .We can trim useless rules in linear time using well-known algorithms <cit.>. We say that grammars and are equal () if they have the same tuple representation after trimming. We say they are equivalent (≡) if they define the same weighted language.[I.e., (≡) ⟺∀∈ (∪)^*_() = _().] 
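The definitions of weight, yield, and trimming translate into short routines. The sketch below is again our own illustration: it reuses the Rule/WCFG containers from the previous sketch and represents a derivation either as a bare terminal or as a pair of a rule and its child subtrees. Trimming keeps exactly the symbols that are both generating and reachable from the start symbol, which together characterize usefulness.

```python
def derivation_weight(t, semiring):
    """t is a terminal symbol (leaf) or a pair (rule, children)."""
    if not isinstance(t, tuple):
        return semiring.one
    rule, children = t
    w = rule.weight
    for c in children:
        w = semiring.mul(w, derivation_weight(c, semiring))
    return w

def derivation_yield(t):
    if not isinstance(t, tuple):
        return (t,)
    _, children = t
    out = ()
    for c in children:
        out += derivation_yield(c)
    return out

def trim(g: "WCFG") -> "WCFG":
    """Remove useless symbols: those in no derivation rooted at the start symbol."""
    # 1) generating symbols: those deriving a (possibly empty) terminal string
    gen = set(g.terminals)
    changed = True
    while changed:
        changed = False
        for r in g.rules:
            if r.head not in gen and all(s in gen for s in r.body):
                gen.add(r.head); changed = True
    # 2) reachable symbols: reachable from the start via fully generating rules
    reach, agenda = {g.start}, [g.start]
    while agenda:
        x = agenda.pop()
        for r in g.rules:
            if r.head == x and all(s in gen for s in r.body):
                for s in r.body:
                    if s not in reach:
                        reach.add(s); agenda.append(s)
    useful = gen & reach
    rules = tuple(r for r in g.rules
                  if r.head in useful and all(s in useful for s in r.body))
    return WCFG(frozenset(useful & g.nonterminals), g.terminals, g.start, rules)
```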
We say that they are 𝒳-bijectively equivalent (≡_𝒳) if a structure-preserving bijection of type [𝒳] →[𝒳] exists. The mapping is structure-preserving if (∀∈[𝒳]) it is (i) label-preserving (), (ii) yield-preserving (), and (iii) weight-preserving (). Suppose ≡_𝒳, ∈𝒳 and , then ≡, but not conversely. A benefit of this stronger notion of equivalence is that derivations in and are interconvertible: we can parse in and convert to a parse in and vice versa, assumingand ∈𝒳.[Under these conditions, our 𝒳-bijective equivalence notion becomes a weighted extension of what <cit.> call complete covers and <cit.> calls proper covers. However, their definitions assume traditional string-rewriting derivations instead of tree-structured derivations.=-1 ]§ TRANSFORMATIONS This section specifies our novel generalized left-corner transformation, its correctness guarantees, and its connections to prior left-corner transformations.We also describe the speculation transformation <cit.> and discuss the connections between the speculation transformation and our generalized left-corner transformation. §.§ Generalized Left-Corner TransformationThis section introduces the generalized left-corner transformation (GLCT).[A generalized right-corner transformation can be defined analogously—applications of such a transformation are given in <cit.> and <cit.>.]This transformation extends prior left-corner transformations by providing additional parameters that control which subtrees can be hoisted.The generalized left-corner transformation (, , ) takes as input * a grammar , , ,* a subset of (non-nullary) rules ⊆ (called left-corner recognition rules)* a subset of symbols ⊆ (∪) (called left-corner recognition symbols)and outputs a grammar , , , with * a superset of nonterminals (⊇)[∪{X|∈}∪{αβ|α, β∈ (∪)}]* the same set of terminals ()* the same start symbol ()* the weighted production rules ():∈∖ α α∈, α∈ ∈∪α w αw ∈, ∈ ww ∈∖ αw αw ∈, α∉The transformation creates two kinds of new nonterminals using α and α.[ This notation works as follows: we associate a unique identifier 𝑖𝑑 with the transformation instance.Then, α𝑖𝑑, α if α∈ else α and α𝑖𝑑, , α. This ensures that the symbols produced cannot conflict with those on (i.e., ∀α∈∪, ∈α, α∉). ]In <ref>, we see that GLCT introduces new nonterminals of two varieties: slashed nonterminals (denoted α) and frozen[Our notion of frozen nonterminals is borrowed directly from the speculation transformation <cit.>, where they are called . ] nonterminals (denoted ).The frozen and slashed nonterminals are each defined recursively. * Slashed nonterminals are built by a base case <ref> and a recursive case <ref>.* Frozen nonterminals are built by a base case <ref> and a recursive case <ref>. We see in <ref> and <ref> that GLCT replaces the rules defining the original nonterminals () with rules that use GLCT's new nonterminals; the only way to build a nonterminal from is using one of these two rules.We refer to these as recovery rules because they recover the original symbols from the new frozen and slashed symbols.We also see that <ref> is responsible for converting left recursion into right recursion because the slashed nonterminal on its right-hand side is moved to the right of .We will return to this when we discuss speculation in <ref>.<ref> illustrates how GLCT transforms trees. Left corner and spine. To better understand the parameters and , we define the spine and left corner of a derivation ∈ of the original grammar. 
Suppose has the following general form: [CenterFix, scale=.6,sibling distance=1.2cm,level distance=1cm,every node/.style=font=, inner xsep=0, inner ysep=1pt] (xK) K childnode[](foo)K-1 childnodeedge from parent[draw=none]childnode(bar) 2 edge from parent[draw=none]childnode[yshift=-.4cm,xshift=0cm]1 childnode[yshift=-.4cm,xshift=0cm]1childnodeedge from parent[draw=none] childnode[yshift=-.5cm] K-2 childnode[yshift=-.4cm,xshift=.25cm]K-1 ; [sloped] (foo) – node[fill=white,inner sep=1pt](bar);Then, we define the spineas the maximum-length sequence of rules (K→K-1 K-1) ⋯ (2→1 1) along the left edges of where each rule is in .The left cornerof a tree ∈ is the bottommost subtree [1] of with [1]∈ that is reachable starting at the rootalong the left edges of where each edge comes from a rule in .If no such subtree exists, we say that has no left corner and write . We write / to denote the withreplaced by the empty subtree; we define /.Lastly, we define D/α/ | ∈ D,= α where D is a set of derivations and α∈∪∪{}.To illustrate, let be the derivation on the left in <ref>. The right-hand side derivation results from applying GLCT with {NP} and {NP VP,NPPossP NN,PossPNP 's}. The spine of , then, is the sequence of rules in (in the same order), and its left corner is the lower NP-subtree. Note how the left corner is the subtree hoisted by the transformation.Interpretation. The symbol α represents the weighted language of where we have replaced its left corner subtrees labeled α (if one exists) with . We can see that the recovery rule <ref> uses these slashed nonterminals to reconstruct by each of its left-corner types (found in ). We also have a recovery rule <ref> that uses a frozen nonterminal , which represents the other ways to build (i.e., those that do not have a left corner in ).Thus, the weighted language of decomposes as a certain sum of slashed nonterminals and its frozen version (formalized below).[Decomposition]propositionDecompositionSuppose (, , ).Then, for any ∈:_ = _⊕α∈_α∘_αSee <ref> for proof. Next, we describe the weighted languages of the slashed and frozen nonterminals in relation to the derivations in the original grammar.<ref> establishes that the weighted language of α is the total weight of α derivations without a left corner, and α is the total weight of all derivations with an α left corner that has been replaced by . [Weighted language relationship]propositionLanguageRelationshipSuppose (, , ). Then, for any ∈ and α∈∪: _α = ∈[α]/and_α= ∈[]/αSee <ref> for proof.Special cases. We now discuss how our transformation relates to prior left-corner transformations. The basic left-corner transformation <cit.> is lct() (, , ∪),i.e., we setand ∪.[That is, the output grammars are equal post trimming. The only useful rules are instances of <ref>, <ref>, and <ref>. Furthermore, the <ref> rules will be useless unless α is a terminal.With these observations, verifying that GLCT matches johnson-1998-finite-state presentation of LCT is straightforward.] This forces the leftmost leaf symbol of the tree to be the left corner, which is either a terminal or the left-hand side of a nullary rule.The selective left-corner transformation (SLCT; ) (, ) (, , ∪) supports left-corner recognition rules , but it does not allow control over the left-corner recognition symbols, as is required to be ∪.[More precisely, we are using the SLCT with top-down factoring <cit.>.] 
Thus, SLCT takes any subtree at the bottom of the spine to be its left corner.Frozen nonterminals enable us to restrict the left corners to those labeled . Formal guarantees. We now discuss the formal guarantees related to our transformation in the form of an equivalence theorem (<ref>) and an asymptotic bound on the number of rules in the output grammar (<ref>). <ref> establishes that the GLCT's output grammar is -bijectively equivalent to its input grammar.[-bijective equivalence]theoremEquivalenceTheoremSuppose =, , , is a WCFG and = (, , ) where ⊆ and ⊆∪. Then, ≡_. We prove this theorem in <ref>.To our knowledge, this is the only formal correctness proof for any left-corner transformation.[We note that <cit.> prove correctness for an alternative method to remove left recursion, which is used as a first step when converting a grammar to Greibach normal form. This method, however, might lead to an exponential increase in grammar size <cit.>.] In addition, <ref> provides pseudocode for the derivation mapping [] →[] and its inverse , in <ref>, respectively.We can bound the number of rules in the output grammar as a function of the input grammar and the transformation's parameters and .The number of rules in (, , ) is no more than|| + || (1 + || + ||) + |∖| + ||We bound the maximum number of rules in each rule category of <ref>:|<ref>| ≤ |∖| |<ref>| ≤ || || |<ref>| ≤ ||+|| |<ref>| ≤ || || |<ref>| ≤ |∖| |<ref>| ≤ ||Each of these bounds can be derived straightforwardly from <ref>.Summing them, followed by algebraic simplification, proves <ref>. The bound in <ref> is often loose in practice, as many of the rules created by the transformation are useless. In <ref>, we describe how to use GLCT to eliminate left recursion, and we investigate the growth of the transformed grammar for nine natural language grammars in <ref>. Optimizations.We briefly describe two improvements to our method's practical efficiency.Reducing the number of useless rules.For efficiency, we may adapt two filtering strategies from prior work that aim to reduce the number of useless rules created by the transformation.[These strategies are provided in our implementation.] 
We provide equations for how to modify in GLCT to account for these filters in <ref>.Fast nullary rule elimination.Nullary rule elimination is often required as a preprocessing step in parsing applications <cit.>.When eliminating the nullary rules introduced by our transformations (i.e., the base case for slashed rules), there turns out to be a special linear structure that can be exploited for efficiency.We describe the details of this speedup in <ref>.§.§ Speculation TransformationIn this section, we adapt 's (; 6.5) speculation transformation from weighted logic programming to WCFGs.[The translation was direct and required essentially no invention on our behalf.However, we have made one aesthetic change to their transformations that we wish to highlight: the closest WCFG interpretation of eisner2007program speculation transformation restricts the slashed nonterminals beyond <ref> and <ref>; their version constrains the denominator to be ∈.This difference disappears after trimming because useful slashed nonterminals must be consumed by the recovery rule <ref>, which imposes the constraint one level higher.We prefer our version as it enables <ref>, which shows that GLCT and speculation produce 𝒳-bijective equivalent grammars for all nonterminals.The pruned version would result in a weaker theorem with 𝒳 being a subset of the nonterminals with a nuanced specification. ]We will provide a new interpretation for speculation that does not appear in the literature.[vieira2023automating dissertation, which appeared contemporaneously with this paper, adopts our same interpretation.]In particular, we observe that speculation, like the left-corner transformation, is a subtree hoisting transformation.The speculation transformation (, , ) takes as input * a grammar , , ,* a subset of (non-nullary) rules ⊆ (called the left-corner recognition rules)* a subset of symbols ⊆ (∪) (called the left-corner recognition symbols)and outputs a grammar , , , with * a superset of nonterminals (⊇)[See <ref>.]* the same set of terminals ()* the same start symbol ()* the weighted production rules (): ∈∖αα∈, α∈ ∈∪α wαw ∈, ∈ ∪ ww ∈∖α wαw ∈, α∉Upon inspection, we see that the only difference between speculation and GLCT is how they define their slashed nonterminals, as the other rules are identical. The slashed nonterminals have the same base case <ref> and <ref>.However, their recursive cases <ref> and <ref> differ in an intriguing way:α wαw ∈, ∈ (<ref>) α wαw ∈, ∈∪ (<ref>) This difference is why GLCT can eliminate left recursion and speculation cannot: GLCT's slashed nonterminal appears to the right of , and speculation's appears on the left.For GLCT, is passed along the numerator of the slashed nonterminal, whereas, for speculation, is passed along the denominator.<ref> (below) establishes that speculation and GLCT are bijectively equivalent for their complete set of nonterminals.[We note that the set of useful slashed and frozen nonterminals typically differs between GLCT and speculation.]=-1[Speculation–GLCT bijective equivalence]theoremThmSpeculationGLCTFor any grammar , and choice of and , (,,) and (,,) are 𝒳-bijectively equivalent where 𝒳 is the complete set of symbols (i.e., original, frozen, and slashed).=-1 See <ref> for the proof sketch. We also provide the first proof of equivalence for speculation. For any grammar , , ,, ⊆, and ⊆∪: (, , ) ≡_. The theorem follows directly from <ref>, <ref>, and the compositionality of bijective functions. 
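The single schema in which GLCT and speculation differ is easy to see in code. The sketch below is our own simplified illustration: it reuses the Rule container from earlier, writes a slashed nonterminal X/α as the string "X/α", uses 1.0 for the semiring one, and deliberately omits the frozen-rule families and the exact side conditions spelled out in the definitions above. It shows how a spine rule (Y → Z β, w) ∈ R is threaded into a slashed rule by each transformation.

```python
def slash(num, den):
    """Name of the slashed nonterminal num/den (a fresh symbol)."""
    return f"{num}/{den}"

def slashed_rules(R, N, C, mode):
    """Slashed-rule families of GLCT vs. speculation; they differ only in how a
    spine rule (Y -> Z beta, w) in R is threaded (mode is 'glct' or 'spec')."""
    out = []
    # shared base case: alpha/alpha -> epsilon, weighted by the semiring one
    for a in set(N) | set(C):
        out.append(Rule(slash(a, a), (), 1.0))
    # recovery rule: X -> alpha X/alpha, for X in N and each left corner alpha in C
    for X in N:
        for a in C:
            out.append(Rule(X, (a, slash(X, a)), 1.0))
    # recursive case: one new rule per goal X and spine rule (Y -> Z beta, w) in R
    for r in R:
        Y, (Z, *beta), w = r.head, r.body, r.weight
        for X in N:
            if mode == "glct":
                # X/Z -> beta X/Y: the goal X rides along the numerator and the
                # slashed symbol sits to the RIGHT of beta, so left recursion
                # becomes right recursion
                out.append(Rule(slash(X, Z), (*beta, slash(X, Y)), w))
            else:
                # Y/X -> Z/X beta: the gap X rides along the denominator and the
                # slashed symbol stays leftmost, so left recursion is preserved
                out.append(Rule(slash(Y, X), (slash(Z, X), *beta), w))
    return out
```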
§ LEFT-RECURSION ELIMINATIONMotivated by the desire for efficient top-down parsing for which left-recursion poses challenges (<ref>), we describe how GLCT may be used to transform a possibly left-recursive grammar into a bijectively equivalent grammar without left-recursion.[Note: when we say left recursion, we often mean unbounded left recursion. ]The bijective equivalence (<ref>) ensures that we can apply an inverse transformation to the derivation tree of the transformed grammar into its corresponding derivation tree in the original grammar.This section provides an efficient and (provably correct) recipe for left-recursion elimination using GLCT. We experiment with this recipe on natural language grammars in <ref>.Our left-recursion elimination recipe is based on a single application of GLCT, which appropriately chooses parameters and .We describe how to determine these parameters by analyzing the structure of the rules in .We define the left-recursion depth of a derivation treeas the length of the path from the root to the leftmost leaf node.The left-recursion depth of a grammar is max_∈.We say that is left-recursive iffis unbounded. To analyze whether is left-recursive, we can analyze its left-recursion graph, which accurately characterizes the left-recursive paths from the root of a derivation to its leftmost leaf in the set of derivations .The left-recursion graph of the grammar is a labeled directed graph G N, E with nodes N ∪ and edges E { (α) | r ∈, r(α ⋯) }. It should be clear that the is left-recursive iff G has a cyclic subgraph.We classify a rule r as left-recursive if the edge labeled r is an edge in any cyclic subgraph of G. To determine the set of left-recursive rules, we identify the strongly connected components (SCCs) of G (e.g., using tarjan-1972-depth algorithm). The SCC analysis returns a function π that maps each of G's nodes to the identity of its SCC.Then, a rule r is left-recursive iff its corresponding edge αβ satisfies π(α) π(β).[Note that nullary rules cannot be left-recursive.]To ensure that left recursion is eliminated, must include all left-recursive rules.=-1We use the following set to provide a sufficient condition on to eliminate left recursion: (∪{ |( α) ∈E,r ∈∖}) ∩{ α|(α⋯) ∈})This set captures the set of nodes that may appear at the bottom of a spine (for the given ).This is because the spine is defined as the longest sequence of rules in along the left of a derivation; thus, a spine can end in one of two ways (1) it reaches a terminal, or (2) it encounters a rule outside of .Thus, the bottom elements of the spine are the set of terminals, and the set of nodes with at least one (∖)-labeled outgoing edge—which we refine to nodes that might appear in the spine (i.e., those in the leftmost position of the rules in ). With these definitions in place, we can provide sufficient conditions on the GLCT parameter sets that will remove left recursion:[Left-recursion elimination]theoremlrElimSuppose that trim((,,)) where * has no unary rules* ⊇ the left-recursive rules in* ⊇Then, is not left-recursive. Moreover, ≤ 2 · C where C is the number of SCCs in the left-recursion graph for . See <ref> for the proof.Example. In <ref>, we made use of <ref> to remove the left-recursion from NP to NP, applying GLCT with ={NP} and ={NP VP,NPPossP NN,PossPNP 's}.[Note that omitting NP VP from would have also eliminated left recursion in this example, but we would have obtained a different output tree in the figure.]Our recipe. 
We take as the set of left-recursive rules in and as .[Our choice for is consistent with the recommendation for SLCT in <cit.>.] This minimizes the upper bound given by <ref> subject to the constraints given in <ref>. Special cases. <ref> implies that the basic left-corner transformation and the selective left-corner transformations (with ⊇ the left-recursive rules) will eliminate left recursion.Experimentally, we found that our recipe produces a slightly smaller grammar than the selective option (see <ref>).Unary rules. The reason <ref> requires that is unary-free is that the left-corner transformation cannot remove unary cycles of this type.[Prior left-corner transformations <cit.> are limited in the same manner.] To see why, note thatfor a unary rule <ref>; thus, the transformed rule will have a slashed nonterminal in its leftmost position, so it may be left-recursive.Fortunately, unary rule cycles can be eliminated from WCFGs by standard preprocessing methods (e.g., <cit.> and <cit.>).However, we note that eliminating such unary chain cycles does not produce an -bijectively equivalent grammar as infinitely many derivations are mapped to a single one that accounts for the total weight of all of them. Nullary rules.We also note that the case where the may derive as its leftmost constituent also poses a challenge for top-down parsers.For example, that would be the case in <ref> if PossPNP POS was replaced by PossP NP POS and ; this grammar is not left-recursive, but the subgoal of recognizing an NP in top-down parser will still, unfortunately, lead to infinite recursion.Thus, a complete solution to transforming a grammar into a top-down-parser-friendly grammar should also treat these cases. To that end, we can transform the original grammar into an equivalent nullary-free version with standard methods <cit.> before applying our GLCT-based left-recursion elimination recipe. As with unary rule elimination, nullary rule elimination does not produce an -bijectively equivalent grammar. §.§ ExperimentsIn this section, we investigate how much the grammar size grows in practice when our GLCT recipe is used to eliminate left recursion.We compare our results to SLCT with top-down factoring (<ref>) to see whether the additional degree of freedom given by leads to any reduction in size. We apply both transformations to nine grammars of different languages: Basque, English, French, German, Hebrew, Hungarian, Korean, Polish, and Swedish. We use the ATIS grammar <cit.> as our English grammar.[We selected the (boolean-weighted) ATIS grammar because it was used in prior work <cit.>.We note, however, that—despite our best efforts—we were unable to replicate moore-2000-removing exact grammar size on it.] We derived the other grammars from the SPMRL 2013/2014 shared tasks treebanks <cit.>.[Specifically, we load all trees from the SPMRL 5k training dataset, delete the morphological annotations, collapse unary chains like →→→ into →, and create a grammar from the remaining rules. The weights of the SPMRL grammars are set using maximum-likelihood estimation. None of the treebanks contained nullary rules.] Experimental setup.For GLCT, we set and according to our recipe.For SLCT, we set according to <ref>'s conditions for removing left-recursion. We compare the grammar size and the number of rules of the raw output grammar to those of the input grammar. However, the raw output sizes can be reduced using useless rule filters (discussed in <ref> and <ref>), so we additionally apply trimming to the output grammars. 
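To make the left-recursion analysis underlying the recipe above concrete (and the acyclicity sanity check described next), the following Python sketch builds the left-recursion graph of a grammar and marks the rules whose edges lie inside a strongly connected component, i.e., on some cycle. The rule encoding, function names, and use of the networkx library are illustrative choices on our part, not the interface of the implementation evaluated in this paper.

# Illustrative sketch: finding the left-recursive rules of a grammar via the
# SCCs of its left-recursion graph (an edge lies on a cycle iff both of its
# endpoints fall in the same SCC).
import networkx as nx

def left_recursion_graph(rules):
    """rules: iterable of (lhs, rhs) pairs, where rhs is a tuple of symbols."""
    g = nx.DiGraph()
    g.add_nodes_from(lhs for lhs, _ in rules)
    for lhs, rhs in rules:
        if rhs:                      # nullary rules contribute no edge
            g.add_edge(lhs, rhs[0])  # head -> leftmost symbol of the body
    return g

def left_recursive_rules(rules):
    g = left_recursion_graph(rules)
    scc_id = {v: i
              for i, comp in enumerate(nx.strongly_connected_components(g))
              for v in comp}
    return {(lhs, rhs) for lhs, rhs in rules
            if rhs and scc_id[lhs] == scc_id[rhs[0]]}

rules = [("S", ("NP", "VP")), ("NP", ("PossP", "NN")),
         ("PossP", ("NP", "'s")), ("NP", ("DT", "NN"))]
assert left_recursive_rules(rules) == {("NP", ("PossP", "NN")),
                                       ("PossP", ("NP", "'s"))}
# After transforming, the output grammar's graph should be acyclic:
# nx.is_directed_acyclic_graph(left_recursion_graph(transformed_rules))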
When parsing, it is often practical to first binarize the grammar and remove nullary rules, so we perform those postprocessing steps as well.=-1As a sanity check, we verify that left recursion is removed in all settings by checking that the left-recursion graph of the output grammar is acyclic. We present the results as evaluated on grammar size in <ref>. <ref> provides further results in terms of the number of rules. Discussion. Interestingly, the increase in size compared to the input grammar varies a lot between languages. Previous work <cit.> only evaluated on English and thus appear to have underestimated the blow-up caused by the left-corner transformation when applied to natural language grammars. Compare, for instance, the ratio between the trimmed size and the original size in <ref> of English (1.2) to Basque (3.4), Hebrew (6.7), and Swedish (23.7). By <ref>, the number of rules in the output grammar scales with ||, which by <ref> is set as the left-recursive rules. The GLCT produces smaller grammars than the SLCT for all languages before either of the postprocessing steps. This difference is (almost) eliminated post-trimming, however, which is unsurprising given that SLCT is a special case of GLCT (<ref>). The small difference in size after trimming happens since two rules of the form XX XX <ref> and XXε <ref> in SLCT are replaced by one rule XX <ref> in GLCT. However, this difference disappears after nullary removal.§ CONCLUSIONThis work generalized the left-corner transformation to operate not only on a subset of rules but also on a subset of nonterminals. We achieve this by adapting frozen nonterminals from the speculation transformation. We exposed a tight connection between generalized left-corner transformation and speculation (<ref>). Finally, and importantly, we proved the transformation's correctness (<ref>) and provided precise sufficient conditions for when it eliminates left recursion (<ref>).=-1§ LIMITATIONSParsing runs in time proportional to the grammar size. <ref> shows that we obtain the same grammar size after postprocessing from our method and the selective left-corner transformation, which gave us no reason to provide an empirical comparison on parsing runtime.Moreover, it is thus of practical importance to restrict the growth of the grammar constant.We have discussed a theoretical bound for grammar growth in <ref>, investigated it empirically in <ref>, and provided further tricks to reduce it in <ref>. Orthogonally to the left-corner transformation itself, it is possible to factor the grammar so that the grammar size is minimally affected by the transformation. Intuitively, reducing the number of left-recursive rules in will also reduce the number of rules that are required in , which, in turn, leads to fewer rules in the output grammar. We did not present any such preprocessing techniques here, but <cit.> provides a reference for two methods: left-factoring and non-left-recursive grouping.<cit.> give a second factoring trick in addition to top-down factoring (see <ref>), which is similar to moore-2000-removing left-factoring. We also mention that vieira2023automating search-based technique for optimizing weighted logic programs could be directly applied to grammars. 
In particular, the search over sequences of define–unfold–fold transformations can be used to find a smaller grammar that encodes the same weighted language.There appear to be connections between our notion of slashed nonterminals and the left quotient of formal languages that we did not explore in this paper.[The left quotient of a WCFG by a weighted regular language can be represented as another WCFG using a modified intersection construction <cit.>—see <cit.> for details.] For example, the simplest case of the left quotient is the <cit.> derivative. The Brzozowski derivative of with respect to a∈ is equal to the weighted language of a in the output grammar produced by speculation or GLCT, provided that the is nullary-free, , and a∈.We suspect that other interesting connections are worth formalizing and exploring further.Finally, we note that we could extend our transformation to deal with nullary rules directly rather than eliminating them by preprocessing (as discussed in <ref>).The idea is to modify <ref> in GLCT so that the slashed nonterminal on its right-hand side is formed from the leftmost nonterminal that derives something other than , rather than the leftmost symbol. For this extension to work out, we require that the grammar is preprocessed such that each nonterminal is replaced by two versions: one that generates only , and one that generates anything else. Preprocessing the grammar in this way is also done in nullary rule elimination (see <cit.> for details). § ETHICAL STATEMENTWe do not foresee any ethical issues with our work.§ ACKNOWLEDGMENTSWe thank Alex Warstadt, Clemente Pasti, Ethan Wilcox, Jason Eisner, and Josef Valvoda for valuable feedback and discussions. We especially thank Clemente for pointing out the nullary removal optimization in <ref>. We also thank the reviewers for their useful comments, suggestions, and references to related work.Andreas Opedal acknowledges funding from the Max Planck ETH Center for Learning Systems. acl_natbib§ PROOF OF <REF> (DECOMPOSITION)* _ = _ by <ref>= (β_1 ⋯β_Kw) ∈ w ∘_β_1∘⋯∘_β_K by Eq. <ref>= () ∈∘_⊕(α α) ∈∘_α∘_α by <ref>= _⊕α∈_α∘_α by <ref> and algebra Note that <ref> specializes the sum to the only kinds of rules that can build ∈: rules <ref> and <ref>. § PROOF OF <REF> (BIJECTIVE EQUIVALENCE) Roadmap. We will show that ≡_(, , ) for any choice of and .Our proof makes use of two lemmas, <ref> and <ref>, to establish that the derivation mapping(<ref>) and its inverse (<ref>) define a bijection of the necessary type. <ref> shows thatpreserves label, weight, and yield. <ref> shows thatis invertible and, thus, that a bijection exists. *Recall from <ref> that -bijective equivalence (≡_) requires the existence of a structure-preserving bijective mapping of type [] →[]. <ref> shows that in <ref> is a mapping of type → that preserves the desired structure (label, weight, and yield).Thus, is structure-preserving. <ref> shows that is a bijection of [] →[].Thus, we have verified the existence of a structure-preserving bijection and, therefore, ≡_. | http://arxiv.org/abs/2311.16258v1 | {
"authors": [
"Andreas Opedal",
"Eleftheria Tsipidi",
"Tiago Pimentel",
"Ryan Cotterell",
"Tim Vieira"
],
"categories": [
"cs.CL",
"cs.DS",
"cs.FL"
],
"primary_category": "cs.CL",
"published": "20231127190437",
"title": "An Exploration of Left-Corner Transformations"
} |
Temple University Philadelphia PA United States [email protected] 0000-0003-2781-6619 University of Auckland Auckland New Zealand [email protected] 0000-0002-5150-9806 Temple University Philadelphia PA United States [email protected] 0000-0002-0094-1113 University of Auckland Auckland New Zealand [email protected] 0000-0001-6829-9449 Temple University Philadelphia PA United States [email protected] 0000-0001-5767-1057 Aalto University Espoo Finland [email protected] 0000-0001-6502-209X Aalto University Espoo Finland [email protected] 0000-0002-7277-9282 Temple University Philadelphia PA United States [email protected] 0000-0001-7646-2373

Identifying and resolving logic errors can be one of the most frustrating challenges for novice programmers. Unlike syntax errors, for which a compiler or interpreter can issue a message, logic errors can be subtle. In certain conditions, buggy code may even exhibit correct behavior – in other cases, the issue might be about how a problem statement has been interpreted. Such errors can be hard to spot when reading the code, and they can also at times be missed by automated tests. There is great educational potential in automatically detecting logic errors, especially when paired with suitable feedback for novices. Large language models (LLMs) have recently demonstrated surprising performance for a range of computing tasks, including generating and explaining code. These capabilities are closely linked to code syntax, which aligns with the next token prediction behavior of LLMs. On the other hand, logic errors relate to the runtime performance of code and thus may not be as well suited to analysis by LLMs. To explore this, we investigate the performance of two popular LLMs, GPT-3 and GPT-4, for detecting and providing a novice-friendly explanation of logic errors. We compare LLM performance with a large cohort of introductory computing students (n=964) solving the same error detection task. Through a mixed-methods analysis of student and model responses, we observe significant improvement in logic error identification between the previous and current generation of LLMs, and find that both LLM generations significantly outperform students. We outline how such models could be integrated into computing education tools, and discuss their potential for supporting students when learning programming.

[500]Social and professional topics Computing education

Decoding Logic Errors: A Comparative Study on Bug Detection by Students and Large Language Models Joanne Kim=================================================================================================

§ INTRODUCTION Learning to program involves navigating a landscape where mistakes are an inherent part of the journey. Novice programmers are bound to encounter numerous errors when writing code, ranging from logic flaws and syntactical inaccuracies to runtime glitches. These mistakes pose substantial hurdles to students as they strive to develop their programming skills.
Despite extensive efforts by computing education researchers and practitioners to establish taxonomies and recognize patterns of common programming errors <cit.> , the process of effectively detecting and resolving bugs remains a persistent challenge.Simultaneously, the emergence of large language models (LLMs) has demonstrated remarkable capabilities in understanding and generating text that is highly similar to the text generated by people. These models, trained on vast amounts of textual data, have been used in a variety of computing education contexts including helping students to understand code <cit.> and programming error messages <cit.>. These use cases demonstrate the ability of LLMs to understand the syntax and structure of code. Still, it is unclear whether models can reason about runtime performance without explicitly running the code. Therefore, detecting runtime errors may present a challenge for LLMs, limiting their potential to help learners. In this paper, we conduct a large-scale comparative study that investigates the abilities of two LLMs and students to detect bugs in faulty code. We recruited 964 students in a large introductory C programming class to identify bugs in three code examples. The selected code examples contained three types of bugs including an out-of-bounds error, an expression error, and an operator error. Students were selected because they are increasingly relying on LLMs as a legitimate help-seeking resource <cit.>. Our results suggest that LLMs outperform students in bug detection performance, especially for faulty code. However, in addition to detecting the pre-inserted bugs, the LLMs had a tendency to be overly proactive, also commenting on extremely minor `bugs' such as naming conventions, and other considerations that might be overwhelming if used for learning purposes. GPT-4 was nearly perfect at identifying bugs in faulty code, but was much more likely than GPT-3 to identify these minor `bugs' in the correct programs and therefore performed `poorly' on correct code. Studying correct code was important because students may use these tools when their code is mostly correct, and a list of minor errors may be demoralizing or may lead them off-track. Based on our findings, we conclude that LLMs appear to be capable of identifying logic errors, outperforming students at this task. However, additional work is needed to extend this work toward more complex code examples and with more advanced computing students. Given that experts are more likely to `chunk' code and see emergent structure, it is unclear whether they would be more or less able to identify bugs in the code without writing test cases.§ RELATED WORK§.§ Students and Bugs in Code Bugs and errors are a common feature in student code and understanding the encountered problems and errors has been a long-standing endeavour within Computing Education Research. Early research in this area centered often on specific problems such as the looping problem or the rainfall problem <cit.>, leading also toward investigations into the design and features of programming languages (e.g. <cit.>). In general there are differences in frequency of programming errors <cit.> and the time that it takes to fix those errors <cit.>. The types of errors that students encounter also gradually change <cit.>, and they can stem from multiple sources <cit.>. 
These sources include misinterpreting the programming problem and having flaws in programming knowledge <cit.>, not to mention the role of the used programming language <cit.>. When students encounter a problem, they need to resolve it. Resolving programming problems – or debugging – can be done using multiple approaches, including tracing code, commenting out code, and adding print statements <cit.>. Simply looking at the code and trying to find places that do not look right – i.e. pattern matching – can also be a viable strategy in some cases <cit.>. Like programming, finding problems in code by tracing the code is a skill, and both of them have been highlighted as something that students can struggle with. As an example, an ITiCSE working group from 2001 highlighted a lack of programming skills at the end of introductory programming course <cit.>, and a subsequent ITiCSE working group from 2004 focused on the results by looking into students' ability to read and trace code <cit.>, also highlighting problems. These issues have in part led to national and international efforts in understanding the struggles that students face, such as the BRACElet project that started in 2004 <cit.>. These studies tend to highlight that students have difficulties with tracing code <cit.>, which might in part be explainable by lack of expertise. A student might, when solving a tracing problem, even just guess a solution if they do not have a higher-level reasoning strategy <cit.>, or might simply have misconceptions about how a program executes, which in turn leads to faulty conclusions <cit.>. This possibility of guessing code tracing outcomes has also in part led to the emergence of “explain in plain English” problems. For these problems, students are expected to provide a high-level overview of the program's functionality and purpose rather than simply outlining what the program does <cit.>. These problems can also be challenging, and any tools that would help students learn to understand and explain code would be of benefit.§.§ Generative AI and Computing Education Recently computing education researchers are expressing concern and excitement about the ways that generative models may affect the computing education landscape <cit.>. While a strong consensus about how we should adapt our pedagogical practice has yet to emerge, each of these discussions acknowledge that generative models are not likely a passing fad. Numerous examples of the capabilities of generative models are emerging such as their ability to both solve and create programming assignments <cit.>, explain code <cit.>, identify programming concepts <cit.>, answer multiple choice questions <cit.>, write code <cit.>, solve visual problems <cit.>, and enhance programming error messages <cit.>. These use cases are critical because without understanding the capabilities of generative models, it is extremely challenging to adapt to this rapidly changing landscape.However, limited work has investigated the capabilities of generative models to identify bugs within code. Given that novice programmers often encounter bugs and may lack the ability to identify and fix these bugs, it is important to explore the capabilities of generative models to accomplish this task. Very recent papers focus on enhancing programming error messages <cit.> and automatically repairing bugs in code <cit.>. 
In this paper, we add to the growing set of use cases by exploring the potential for generative models to identify potential bugs and errors.

§.§ Taxonomies of Common Bugs Prior research points to the common occurrence of certain specific bugs amongst student code. Frequent mistakes that students make include syntax errors such as mismatched parentheses <cit.> and confusing '=' with '==' or vice versa <cit.>, type errors such as incorrectly calling or assigning methods <cit.>, and semantic errors such as missing a return statement <cit.>.
§ METHOD

§.§ Research Questions Previous research has demonstrated many impressive capabilities of large language models. However, many of these examples, such as generating explanations and identifying programming concepts, are closely linked to code syntax, which aligns with the next token prediction behavior of LLMs. To better explore the potential limits of LLMs, this study focuses on identifying logic errors in code, which relate to the runtime performance of code, and thus may not be as well suited to analysis by LLMs as they are unable to execute code. If large language models perform well in this task, there is an exciting opportunity to use these models to help students to debug their code. Based on these goals, we investigated the following research questions:

RQ 1: How do students and large language models compare in their ability to correctly identify logic errors in faulty code?
RQ 2: Which types of logic errors are easiest for students and large language models to correctly identify?
RQ 3: How many bugs or issues do students and large language models identify when reviewing faulty and correct code?

§.§ Study Design In this study, we seek to investigate the performance of large language models in detecting bugs in faulty code. We conducted a study that compared the performance of students with the two large language models GPT-3 and GPT-4. Performance was measured across three code examples with four variants. These variants included the correct code and three variants with bugs introduced: 1) an operator error, 2) an out-of-bounds error, and 3) an expression error. The study was designed with two between-subjects components which include the source of the detection method, i.e., whether it was performed by the students, GPT-3, or GPT-4, and the bug variant. The study also included a within-subjects component which was the three code examples. By showing students multiple examples, we could partially control for participant error.
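For concreteness, the three bug categories can be illustrated with a small example. The snippet below is a hypothetical Python analogue written only to make the error types explicit; it is not one of the actual study materials, which were C functions operating on arrays (Figure <ref>).

# Hypothetical illustration of the three single-bug variants used in the study
# design; NOT the study materials themselves, which were short C functions.
def sum_positive(values):             # correct version
    total = 0
    for i in range(len(values)):
        if values[i] > 0:
            total += values[i]
    return total

# Operator error:    a comparison or arithmetic operator is wrong,
#                    e.g. `if values[i] < 0:` sums the negative elements instead.
# Out-of-bounds:     the loop runs one step too far,
#                    e.g. `for i in range(len(values) + 1):` indexes past the end.
# Expression error:  the wrong expression is used in a statement,
#                    e.g. `total += i` accumulates indices rather than values.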
§.§.§ Participants, Data Collection, and EthicsThe data used in this study were collected from a first-year C programming course at The University of Anonymous. The data were collected during a single lab session that ran over a one-week period. Leading up to this lab, the course covered the concepts of arithmetic, types, functions, loops, and arrays. We collected 964 total complete responses from students. The data collection followed the ethical guidelines of the university and was approved by the ethics review board[IRB approval number anonymized for review.]. §.§.§ Study Tasks As part of the lab, students were shown three code examples. Figure <ref> shows the three examples that were shown to students during the lab. Each example contains a function with a single loop that processes elements of an array.The task for the students was to identify any bugs that might exist within the code. The instructions said “Consider the following definition of a function called <Function Name>:” which was followed by the code without comments. They were then asked to come up with a short description of what they believe the intended purpose of the function to be. This was followed by having them “List all errors, if any, found in this code based on your explanation of the purpose of the function. It is possible that the code contains one or more small errors (however, this is not necessarily true and the code may be correct). If you can identify any errors in the implementation of the code, you should describe these errors.” §.§.§ Measures The data collection resulted in 2980 total responses from students. In addition, 30 LLM responses were generated for each code example and version pair by varying the temperature and prompt to account for variations that might affect performance. This resulted in 720 total additional responses from the two models. A team of four researchers manually coded each student and model response. The coders evaluated the correctness of the identified bug as a dichotomous variable (e.g.: correct or incorrect). The coders also evaluated the number of bugs that the response contained. The coding was mutually exclusive: a response correctly identifying a bug but also noting other incorrect bugs was coded as correct. When coding the example that did not contain bugs, we coded a blank response or an explicit statement that no bugs were contained as a correct response and other responses were considered incorrect. This coding scheme did not allow for explicitly tracking false positives and false negatives, but it was necessary to obtain substantial inter-rater reliability (κ=0.873, 30 ratings). Students often did not explicitly state the bug so we coded their response as `correct' even if they only provided a solution that would fix the expected bug. §.§.§ Analysis for Conditional Differences We analyzed the dependent measures (e.g.: number of bugs) using a linear mixed-effects model. The main fixed factors of interest were the “Source” (representing GPT-3, GPT-4, or Students) and the “Version” of the code example (representing different versions of the example). Additionally, an interaction term between “Source” and “Version” was included to examine potential differences in bug identification across sources and versions. To account for potential dependencies among observations from the same example, a random intercept term was included in the model specification. 
This random effect was nested within the “Code Example” factor, capturing the variability associated with different examples. Pairwise comparisons were made using the Tukey method with Holm's correction for multiple comparisons. §.§ Models §.§.§ Model Specification To automatically identify the bugs in the study, we used two large language models <cit.> developed by OpenAI. The first model, text-davinci-003, has been widely used up until the time of running the study. Later, when GPT-4 was released, we included results using the gpt-4-0314 model to understand how the state-of-the-art models perform at the same task. §.§.§ Prompt Engineering Prompt engineering is a process of developing instructions to guide the responses of an LLM. The specificity and phrasing of these prompts have the potential to strongly influence the content and quality of the responses <cit.>. Understanding the potential effects that prompts can have on performance, we used multiple prompting strategies to account for this aspect. In addition, the hyperparameters of an LLM, such as the temperature, can also affect the output. Lower temperatures tend to result in more deterministic responses while higher temperatures tend to provide more `creative' responses. We chose to use the default temperature of 0.7 and a lower temperature of 0.3. The three prompts used for this study are listed below. * # List all errors and bugs, if any, found in the following C code: <code>* # List any issues, including bugs, errors, or potential problems that exist in the following C code: <code>* # Assume the role of a highly intelligent computer scientist who is capable of easily finding bugs and errors by reading source code. List all errors and bugs, if any, found in the following C code: <code>Between the variations in prompt and temperature, there were 6 possible permutation. For each permutation, we issued 5 requests to the OpenAI API. The reason for issuing 5 requests was to account for the non-deterministic nature of LLM prompts. This resulted in 30 responses for each combination of code example and bug type and 360 total requests to OpenAI. § RESULTS §.§ Bug Detection Performance Performance in bug detection rates varied between the students and the models, as shown in Table <ref>. GPT-3 exhibited an overall correctness rate of 85.3%, while GPT-4 closely followed with a correctness rate of 85.0%. Notably, students had a much lower bug detection rate at 49.1%. While both models detected bugs at nearly twice the rate of students, performance was even higher when only considering model performance on faulty code. §.§.§ For faulty code, LLMs outperform students When presented with incorrect code, GPT-3 exhibited a bug detection rate of 87.3%, demonstrating a substantial ability to identify coding errors. GPT-4 surpassed this performance with an impressive bug detection rate of 99.2%, indicating a higher sensitivity to identifying bugs within faulty code. On the other hand, students detected bugs at rate of 34.5%, showcasing a limited proficiency in detecting coding errors. §.§.§ LLMs tended to identify bugs in correct code In the case of identifying correctly functioning code, GPT-3 achieved a bug detection rate of 79.4% (i.e., classified the code as bug-free). GPT-4, however, displayed a comparatively lower rate of 42.2% in correctly identifying bug-free code. 
In contrast, students demonstrated a notably high proficiency in identifying correct code, with a bug detection rate of 92.8%.§.§ Number of Bugs Detected We observed statistically significant differences in the number of bugs identified by GPT-3, GPT-4, and students. The results of the linear mixed-effects model, which are summarized in Table <ref>, show that GPT-4 identified significantly more bugs than GPT-3 (β = 0.76, SE = 0.13, z = 5.77, p < 0.001) and students (β = 1.70, SE = 0.11, z = -15.13, p < 0.001). The model estimated that GPT-4 identified 0.761 more bugs than GPT-3 and 1.701 more bugs than students when other variables were held constant. GPT-3 also identified statistically significantly more bugs than students (β = 0.94, SE = 0.11, z = -8.30, p < 0.001). §.§ Analyzing the Bug Reports §.§.§ GPT-4 was more verbose, even when normalized by the number of bugs detectedWe computed the average word count for responses made by students and each model. GPT-4 responses had on average 129.0 (σ = 44.7) words followed by GPT-3 and students with 54.2 (σ = 19.5) and 38.9 (σ = 27.0) words respectively. This constitutes a 3.31-fold increase in the number of words GPT-4 produced compared to students. Given the differences in number of bugs identified by source, we normalized word count by the number of bugs reported. This resulted in 52.7 (σ = 25.7) words for GPT-4 and 23.5 (σ = 9.11) and 35.6 (σ = 24.5) words for GPT-3 and students. These results should be contextualized by the observation that GPT-4 had a tendency to provide partial and in a few cases complete solutions for the bugs that it identified. Moreover, the models exhibited a more concentrated distribution around their means, while student responses exhibited notably higher variability. §.§.§ Qualitative analysis of responses To better understand the capabilities of large language models in detecting bugs in code and to draw distinctions between students' responses and these models, we did a qualitative exploration of the error messages. By analyzing selected bug reports generated by the models in our study, we could shed light on a variety of shortcomings of the models' approaches. As suggested by our analysis of word count, there appear to be differences in the ways that LLMs and students identify and describe bugs. Students tended to describe the bug, offer a solution, or both. We did not observe an instance where a model offered a solution without also describing the underlying bug. Models often combined a description of the bug along with a partial solution, in some cases they provided a complete solution. GPT-4 appeared to be most likely to offer a complete solution. As noted in the analysis of bug detection performance, LLMs often identified bugs even for `correct' code. Below is an example where GPT-4 identified 6 bugs in otherwise correct code for Example 2: * No function prototype: There is no function prototype provided for thefunction. It is a good practice to provide a function prototype before the function definition to avoid any potential issues with implicit declarations.* No include statements: The code does not include any necessary header files, such asor , which might be required for certain operations or functions used in the program.* Inconsistent naming convention: The function name is mentioned in the question, but the actual function name in the code is . 
It is important to maintain a consistent naming convention throughout the code.* No error handling: The function does not perform any error handling, such as checking for NULL pointers or negative array length. This could lead to undefined behavior if the function is called with invalid arguments.* No comments: The code lacks comments explaining the purpose of the function and its parameters. Adding comments can make the code more readable and easier to understand for others.* Potential integer overflow: If theparameter is very large It should be noted that most of these identified `bugs' could be interpreted as areas for improvement. For example, the model suggests improvements to naming conventions or including comments. However, both the fourth and sixth bugs could actually result in runtime issues. The model identifies a lack of input sanitization checks for two potential corner cases. The qualitative investigation underscores the intricacies of LLMs' code interpretations. They exhibit detailed understanding but can occasionally miss the mark on context or offer feedback that, while technically correct, might feel misaligned with the coder's intent. Balancing LLM insights with human discernment could yield the most effective outcomes.Many student responses just contained the proposed change without any explanation or reasoning. They often did not explicitly define a bug but instead only described the solution. Some students also indicated errors that either did not fix the issue, introduced new problems, or focused too heavily on syntactical correctness without addressing the core problem. In the example below, a student highlighted changes that should be made to the code which do not fix the bug:*should change toto avoid using the 0th value.* Instead of , use .* There should be no space betweenand the opening parenthesis .* Similarly, there should be no space betweenand the opening parenthesis .§ DISCUSSION Our results suggest that large language models are more capable than students at identifying bugs in code. There are multiple possible explanations for this. First, more expert programmers often do not necessarily need to read the code character by character or word by word when forming an understanding of the code, rather, they study features of the code that are relevant to the task at hand <cit.>. Consequently, a student may miss syntax errors or minor bugs, if they are not in focus. This can also be explained by the happy path mentality where because most of the code is correct, students may become complacent and fail to detect bugs; some bugs also take more time to identify and fix than others <cit.>. Participants were explicitly prompted to find errors, which puts them into an explicit debugging mindset. In practice, they might not critically examine their code with the same scrutiny, so the bug detection rate for students may actually be even lower in practice. Both LLMs performed extremely well, with GPT-4 performing near perfect when presented with buggy code. However, both models performed poorly in our analysis of correct code as they identified very minor bugs and stylistic aspects such as naming conventions contrary to our expectation that they would classify the code as bug-free. 
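As a pointer back to the analysis described in the Method section, the conditional-differences model could be fit along the following lines. The column names are hypothetical, and this sketch is meant only to make the model specification concrete rather than to reproduce the exact scripts used for the reported results.

# Hypothetical statsmodels sketch of the linear mixed-effects model described
# in the Method section (invented column names; not the study's analysis code).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("responses.csv")    # one row per rated response
# Fixed effects: Source (GPT-3 / GPT-4 / Students), Version (bug variant), and
# their interaction; random intercept grouped by code example.
model = smf.mixedlm("n_bugs ~ C(source) * C(version)", df, groups=df["example"])
result = model.fit()
print(result.summary())
# Pairwise source comparisons can then be computed from the fitted estimates,
# with p-values adjusted for multiple comparisons, e.g. via
# statsmodels.stats.multitest.multipletests(pvals, method="holm").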
While the suggestions were largely correct, it might not be helpful to point out minor bugs and code conventions in otherwise correct code, especially considering students' preferences for concise bug reports <cit.>.One noticeable difference between GPT-3 and GPT-4 was that GPT-4 would point out these minor bugs more than GPT-3. One possible explanation for this is that the newer model has possibly had more instruction fine-tuning, where the model is trained to follow instructions from the user. This might cause the model to try please the user by going above and beyond the ask, e.g. in our case not only pointing out the obvious bug, but also commenting on more minor issues. We also found that GPT-4 was more verbose, even when controlling for the number of bugs in the code. This aligns with prior findings where newer models often add superfluous textual content to responses <cit.> and may come up with non-existing bugs to fix when asked to help with buggy code <cit.>. The ability of LLMs to correctly identify bugs at a much higher rate than students has exciting implications for computing education. LLMs could be used to help novices (and more experienced programmers too) in detecting bugs in code, for example, by having LLMs integrated directly into the IDE that students use to work on their course exercises. Models could make suggestions for improvement as they did in cases with correct code or identify subtle logic errors in the code, potentially building on prior research on improving programming error messages, which has the promise of improving learning <cit.>. Despite the allure of the technological possibilities, there likely should be a mechanism that would control how often the suggestions would be shown, as not all errors require help <cit.>. Similarly, it is important to carefully curate educational content, especially with growing concerns about over-reliance on LLMs <cit.>. To mitigate potential issues, it is likely preferable to avoid directly presenting errors and solutions to students. Instead, pedagogical systems could detect when students are spinning their wheels trying to debug their code <cit.> and then use the LLM to scaffold students toward identifying the error themselves. Thus providing learning opportunities that also mitigate stress associated with debugging. Similarly, as LLMs are adept at detecting bugs and writing suggestions on how to fix them, they could be further integrated into teacher tools. As an example, tools such as OverCode <cit.> and CodeClusters <cit.> that are designed to provide feedback to masses of students could be integrated with LLMs so that LLMs would create draft feedback, which instructors then could – when needed – adjust and send out. The ability of LLMs to identify rare corner cases also has interesting implications for teaching testing, as feedback from LLMs could help with writing more comprehensive test suites. The good performance of the models could also lead to new, innovative exercise types. For example, we envision that an LLM could create buggy code where students would need to find and fix the bug – similarly, one activity could be trying to create bugs that LLMs fail to identify. 
Such activities could also provide additional data on learning, which then could be used to fine-tune LLMs.As the educational landscape continues to adapt to LLMs <cit.>, the new bug capabilities of LLMs identified in this paper may further inform how students seek help in classroom settings <cit.>.§.§ Limitations To make the task more ecologically valid, we provided students with an open-response question rather than a multiple-choice question. This had the advantage that students could not guess the right answer and was more similar to how students would encounter code in the wild; however, it became difficult to differentiate between a response that explicitly stated `no bugs' and a blank response. To address this limitation, we evaluated the rates of default responses by variant and observed no statistically significant difference in the number of default responses across all four variants.Participants were asked to identify any bugs that were present within the code, so in this case, a lack of an explicit response was treated as a default response (e.g.: `no bugs'). To assess the impact on our results, we recalculated percentages by excluding blanks. The revised student correctness rates are as follows: 79.6% (133 blanks removed), 89.8% (169 blanks removed), and 60.0% (181 blanks removed). These results represent a conservative estimate, considering only explicitly stated correct answers. The resulting rates remained higher than GPT-4, but closer to GPT-3 correctness rates. Participants were also explicitly instructed to identify bugs as part of the lab activity. While prior research has demonstrated that debugging others' code can be challenging <cit.>, it is possible that if students were studying their own code, it might have been easier for them. Relatedly, the code did not have comments that would explain what each line of code does. This may align with code students often encounter naturally, but could have affected the students' performance or required the model to infer too much from the code structure and function name. The code examples used in this study only contained a single intended error. It is possible that the presence of multiple bugs in code might affect the performance of LLMs (and students) in detecting bugs. The goal for this paper was an initial tightly scoped investigation of identifying a bug within code. Future work will investigate cases where multiple bugs are included. In our study, we employed a robust approach by utilizing three distinct prompts, leveraging multiple models, including both GPT-3 and GPT-4, and exploring various temperatures (i.e., 0.4 and 0.7). Additionally, each prompt was issued multiple times to accommodate the inherent probabilistic nature of generative AI. While we acknowledge the potential impact of further prompt optimization on mitigating false positives in the correct code condition, it's essential to note the dynamic nature of these models, characterized by continuous changes in verbosity and performance <cit.>. Rather than providing a definitive characterization of performance, our primary objective was to delve into a novel capability of LLMs. § CONCLUSION In this work, we report on the results of a study that compares the ability of students and large language models to identify bugs in faulty and correct code. Our results suggest that students struggled to find bugs in faulty code, but that they performed relatively well at identifying whether the code was correct. 
The models performed in the opposite way: both models (GPT-3 and GPT-4) strongly outperformed students in identifying bugs in faulty code, but tended to identify many minor `bugs' which were more akin to suggestions for improvement when the code was correct. This suggests that models are overly sensitive toward discovering bugs in code. While some of the minor bugs detected by the models could be considered `bugs', such over-sensitivity could be seen as a negative for integrating LLMs into teaching. If students receive superflous feedback on minor stylistic aspects, for example, they might start disregarding any useful feedback from the models too.ACM-Reference-Format | http://arxiv.org/abs/2311.16017v1 | {
"authors": [
"Stephen MacNeil",
"Paul Denny",
"Andrew Tran",
"Juho Leinonen",
"Seth Bernstein",
"Arto Hellas",
"Sami Sarsa",
"Joanne Kim"
],
"categories": [
"cs.HC",
"cs.AI"
],
"primary_category": "cs.HC",
"published": "20231127172833",
"title": "Decoding Logic Errors: A Comparative Study on Bug Detection by Students and Large Language Models"
} |
Scale-Dropout: Estimating Uncertainty in Deep Neural Networks Using Stochastic Scale Soyed Tuhin Ahmed92, Kamal Danouchi3, Michael Hefenbrock4, Guillaume Prenat3, Lorena Anghel3, Mehdi B. Tahoori2 2Karlsruhe Institute of Technology, Karlsruhe, Germany, 9corresponding author, email: [email protected] 3Univ. Grenoble Alpes, CEA, CNRS, Grenoble INP, and IRIG-Spintec, Grenoble, France 4RevoAI GmbH, Karlsruhe, Germany January 14, 2024 =====================================================================================================================================================

Uncertainty estimation in Neural Networks (NNs) is vital in improving reliability and confidence in predictions, particularly in safety-critical applications. Bayesian Neural Networks (BayNNs) with Dropout as an approximation offer a systematic approach to quantifying uncertainty, but they inherently suffer from high hardware overhead in terms of power, memory, and computation. Thus, the applicability of BayNNs to edge devices with limited resources or to high-performance applications is challenging. Some of the inherent costs of BayNNs can be reduced by accelerating them in hardware on a Computation-In-Memory (CIM) architecture with spintronic memories and binarizing their parameters. However, numerous stochastic units are required to implement conventional Dropout-based BayNN. In this paper, we propose the Scale Dropout, a novel regularization technique for Binary Neural Networks (BNNs), and Monte Carlo-Scale Dropout (MC-Scale Dropout)-based BayNNs for efficient uncertainty estimation. Our approach requires only one stochastic unit for the entire model, irrespective of the model size, leading to a highly scalable Bayesian NN. Furthermore, we introduce a novel Spintronic memory-based CIM architecture for the proposed BayNN that achieves more than 100× energy savings compared to the state-of-the-art. We validated our method to show up to 1% improvement in predictive performance and superior uncertainty estimates compared to related works.

§ INTRODUCTION In recent years, Neural networks (NNs) have revolutionized various fields of artificial intelligence due to their exceptional ability to learn complex patterns and generate accurate predictions. NN models have produced remarkable results in a variety of tasks, including image recognition <cit.>, natural language processing <cit.>, and are even being deployed in safety-critical applications such as autonomous vehicles, automatic medical diagnosis <cit.>, and automated optical inspection in industrial applications <cit.>. Despite the achievements of NNs, they are unable to quantify the uncertainty associated with their predictions. Estimating uncertainty is essential to understand the risks associated with prediction and to make informed decisions, especially in safety-critical domains, where incorrect predictions can have severe consequences <cit.>. In contrast to traditional NNs, Bayesian Neural Networks (BayNNs) offer a principled approach to uncertainty estimation <cit.>. BayNN is a model based on the Bayesian framework for model interpretation and decision-making that introduces probability distributions over the weights or activation of the network.
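For intuition, such a stochastic network is typically queried at inference time by averaging several stochastic forward passes and using their spread as an uncertainty signal. The following generic Monte Carlo sketch assumes some PyTorch model whose forward pass is stochastic (e.g., with Dropout kept active); it is a minimal illustration, not the specific architecture proposed in this paper.

# Generic Monte Carlo predictive sketch for a Dropout-style stochastic NN.
# `model`, `x`, and `num_samples` are placeholders.
import torch

def mc_predict(model, x, num_samples=20):
    model.train()  # keep stochastic layers (e.g., Dropout) active at inference
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(num_samples)])
    mean = probs.mean(dim=0)   # predictive mean used for the final prediction
    std = probs.std(dim=0)     # per-class spread, one simple uncertainty proxy
    return mean, std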
Despite the benefit of BayNN, they inherently have higher hardware overhead in terms of power, memory consumption, throughput, computation, and number of stochastic units representing probability distributions of BayNNs.Consequently, the applicability of BayNN is limited in the context of edge devices, for example, microcontrollers and smartphones, where computing and memory resources are limited and power consumption and throughput are a concern <cit.>. Furthermore, exact Bayesian inference is often computationally intractable, necessitating the adoption of various approximation techniques. The ensemble methods <cit.>, and variational inference <cit.> are among the prominent methods. However, they have high overhead as a result of the storage of multiple copies of the model, or they have twice the number of parameters compared to traditional NNs. On the other hand, Monte Carlo Dropout (MC-Dropout) based Bayesian approximation <cit.> is particularly appealing, as they have the same number of parameters as conventional NNs. However, the number of stochastic units required for their implementation is still a concern.Moreover, in terms of hardware architectures, BayNNs are typically implemented on von Neumann architectures, where the memory and computation units are physically separated. Since BayNN applications are data intensive, data movement between the processor and memory becomes the bottleneck of the entire application, leading to the memory wall problem <cit.>. Consequently, implementing BayNN to Computation-in-Memory (CIM) architectures with emerging resistive non-volatile memories (NVMs) <cit.> is an attractive option to reduce their inherent costs. In the CIM architecture, the common operation of NN, matrix-vector multiplication, is carried out inside the memory where data (NN parameters) already reside. Therefore, the memory-wall problem is alleviated, leading to a highly attractive solution to accelerate BayNN at the edge.However, implementing BayNN to the CIM architecture is not straightforward due to the deterministic nature of the architecture, the limited precision of the spintronic memories, and the design of stochastic units for Bayesian inference.Also, an approach that does not make changes to the conventional memory array but only changes to the peripheral circuitry is attractive. Therefore, in the literature, several methods <cit.> have been proposed to implement Dropout-based as well as other approximate BayNNs into CIM architectures with different NVM technologies. However, they have limitations, including a) requiring numerous Dropout modules (typically one for each neuron), b) not suitable for convolutional layers, and c) having high power consumption.In this paper, we propose the Scale Dropout, a novel regularization technique and Monte Carlo Scale Dropout (MC-Scale Dropout) based BayNN for uncertainty estimation in Binary Neural Networks (BNNs) <cit.>. BNNs utilize 1-bit precision weights and activations, effectively addressing the constraints posed by limited-precision spintronic memories. Moreover, by integrating Scale Dropout during inference, we achieve a robust uncertainty estimation, comparable to MC-Dropout. Our approach aims to provide a balanced trade-off between model uncertainty and the computational constraints of edge devices. The scale Dropout employs the proposed vector-wise unitary Dropout technique, dropping out the entire scale vector, thus reducing memory and computational requirements without sacrificing the quality of uncertainty estimates. 
The scale vector, an additional parameter in BNN, is crucial in modern BNN algorithms to reduce quantization error. The primary contributions of our research are as follows: * We introduce a novel regularization technique named scale-Dropout, which can reduce co-adaptation in BNN training.* We introduce the MC-Scale Dropout-based Bayesian approximation for efficient estimation in BNNs. * We propose a novel CIM architecture where the model parameter is implemented with spintronic memories operating in the deterministic region and a spintronic-based scale Dropout module with the spintronic device operation in the stochastic regime. Our proposed CIM architecture does not imply changes to the common crossbar structure. It can reuse existing crossbar structures with only the peripheral circuitry modified for Bayesian inference. Compared to existing work, our method requires only one Dropout module for the model, regardless of the size of the topology leading to a highly scalable Bayesian approach for edge applications. The predictive performance of our method was extensively evaluated for various deep learning tasks, including classification and semantic segmentation, using different data sets and network topologies. Also, the effectiveness of our approach in estimating uncertainty is evaluated on various out-of-distribution data and metrics.The structure of the paper is as follows: Section <ref> describes the background on Binary Neural Networks, traditional Dropout techniques, spintronic memories, CIM, and discusses related papers. In Section <ref>, we present the proposed Scale Dropout technique, explaining its design, operation, and uncertainty estimation approach. In Section <ref>, details of hardware implementation are discussed. In Section <ref>, simulation and experimental setup are discussed, and in Section <ref>, our experimental results are presented at the algorithmic and hardware levels. Later, in Section <ref>, we discuss various aspects of the proposed method, and Section <ref> concludes the paper.§ PRELIMINARIES§.§ Binary NNs and ScalingBNNs have gained popularity as a model compression technique due to their ability to reduce memory requirements by ∼ 32× compared to a full-precision NN. In BNNs, the weights and activations are binarized to -1 or +1 using the sign(.) function. Binarization further simplifies computationally intensive matrix-vector multiplication operations into computationally cheaper bitwise XNOR and bit counting operations, thus improving computational efficiency and leading to a speed-up of ∼ 58× on CPU <cit.>.The scale vectoris a crucial aspect of BNN to alleviate the loss of accuracy due to binarization <cit.>. Here, a real-valued vector multiplies the weighted sum, weights, or activations of a layer. The scale vector can be defined in two ways, such as analytically calculated values that scale the binarized weights and activations for each layer in <cit.> or it can be learned through backpropagation similar to other parameters of the model <cit.>. In terms of the location of the scale vector, applying the scale to the weight matrix of each layer before the XNOR operation (as done in <cit.>) is possible in a CPU or GPU implementation but may not be as feasible for a spintronics-based CIM architecture. This is because, depending on the shape of the scale vector, each neuron or channel will have a different scale factor, leading to different mapping strategy requirements for each neuron or channel. 
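To make the placement options concrete, the following PyTorch-style sketch (our own illustration; the function names and tensor shapes are assumptions, and a floating-point matrix product stands in for the XNOR/bit-count operation of the hardware, not code from any cited BNN library) shows scaling applied to the weighted sum, with one learnable scale per output neuron or channel:

```python
import torch

def sign_binarize(x: torch.Tensor) -> torch.Tensor:
    # Binarize to {-1, +1}; zero is mapped to +1.
    return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

def scaled_binary_linear(a_prev: torch.Tensor, w_real: torch.Tensor,
                         bias: torch.Tensor, alpha: torch.Tensor) -> torch.Tensor:
    # Weighted sum with binarized weights and activations; the matrix product
    # emulates the XNOR and bit-counting operation of the crossbar.
    z = sign_binarize(a_prev) @ sign_binarize(w_real).t() + bias
    # Per-output-channel scale applied to the weighted sum, outside the
    # binary computation (alpha has shape (out_features,)).
    return z * alpha
```

Applying the scale to the weight matrix before the binary operation would instead require a per-row (per-neuron or per-channel) factor inside the crossbar mapping, which is exactly the complication described above.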
Similarly, input scaling is not feasible, as the inputs are directly converted to voltages and fed into the crossbar for computation.In this paper, we specially design the scale vector and the application so that it can be implemented in the CIM architecture as well. §.§ Uncertainty EstimationUncertainty estimation in neural networks is the process by which a model provides a measure of uncertainty or confidence along with its predictions. This measure typically captures the model's belief about the output, given the input data and the learned parameters <cit.>. Uncertainty estimation enables NNs to express what they do not know. Traditional neural networks, despite their impressive prediction capabilities, fail to provide these crucial uncertainty estimates. There are two distinct types of uncertainty in deep learning: Aleatoric and epistemic <cit.>. Aleatoric or data uncertainty, which is irreducible even with more data, arises due to the inherent noise in the data generation process. On the other hand, uncertainty resulting from a lack of knowledge is referred to as epistemic or model uncertainty. In other words, it refers to the lack of knowledge of a model. Epistemic uncertainty can be eliminated with sufficient data from an unseen region or with knowledge about sources of uncertainty. Estimating epistemic uncertainty allows the NN to be aware of what they do not know. Therefore, this paper focused on estimating the epistemic uncertainty of a model. §.§ Conventional Dropout MethodsDropout is a regularization technique commonly used in neural networks to prevent overfitting <cit.>. During training, the Dropout randomly sets a proportion of the input units tozero with a probability p at each forward pass. It can be interpreted as training a large ensemble of "thinned" networks. The final prediction is then an ensemble prediction of these networks. However, Dropout is not applied during inference.There have been other variants of Dropout, e.g., Dropconnect proposed by Wan et al. <cit.>, which proposed to set individual weights and biases rather than neuron output to zero with some probability p. Spatial Dropout proposed by Tompson et al. <cit.>, is a Dropout method targeted for convolutional neural networks.In Spatial Dropout, entire feature maps are dropped (set to zero) with probability p rather than individual pixels to reduce spatial correlation. As a result, the network is prevented from using nearby pixels to recover information when a Dropout is applied. Variational Dropout, as proposed by Kingma et al. <cit.>, uses Gaussian multiplicative noise instead of zeroing out the weights or activations of the network. Their approach also enables the NN to automatically determine an effective Dropout probability for an entire network or for individual layers or neurons. In this paper, we take a different approach to Dropout. Our approach is focused on the scale vectors of a binary NN model. Additionally, our approach does not set the dropped value of the scale vector to zero, as done in existing work, as it prevents information flow. This means that the whole weighted sum of a layer would be zeroed out if the scaled vector were set to zero. Instead, our approach drops the entire scale vector to one. That means in the dropped behavior, the scale vector is bypassed with a probability, and the weighted sum value of a layer remains unchanged. §.§ Bayesian Neural Networks A conventional NN represents a functionparameterized by thelearnable parameters . 
D, P, and C represent the dimensions of inputs , parameters , and outputs , respectively. Here, the parameter vector is a (point) estimate, that is, a single point value, and is found using a classic maximum likelihood estimation approach. In the maximum likelihood estimation method,a certain likelihood function p(|, ) is maximized given the observed data 𝒟 to obtain <cit.>. Despite the success of this approach, it inherently ignores the uncertainty in the estimation of the parameters . In contrast, BayNNs are based on the Bayesian framework and offer an approach to uncertainty estimation. In a BayNN, the parametersof the network are treated as random variables. Consequently, a posterior distribution of the parameters θ is computed given the dataset 𝒟. Unfortunately, the exact computation of this posterior distribution p( | 𝒟), as well as the resulting posterior predictive distribution p(^* |^*, 𝒟)= ∫ p(^* |^*, ) p(|𝒟) dis generally computationally intractable.To overcome this challenge, various approximation methods, such as Monte Carlo Dropout <cit.>, were developed to allow feasible approximations of uncertainty for BayNNs. The approximate posterior distribution is depicted as q(y^*| x^*, 𝒟). By introducing Dropout layers into the network during training and, more importantly, keeping them active during testing, MC-Dropout effectively simulates a sampling process from the posterior distribution of the parameters .When the Dropout layers are kept active during testing, samples can be obtained from an approximate posterior. This approach, while computationally cheaper than most other approximation techniques, provides a practical and efficient way to estimate predictive uncertainty, making BayNNs more accessible for practical applications <cit.>. §.§ Spintronic Memory Technology The primary component of STT-MRAM is the magnetic tunnel junction (MTJ). It comprises two ferromagnetic layers: the free layer and the reference layer, and is separated by a thin oxide layer. The magnetic orientation of the reference layer is fixed, but the orientation of the free layers can be changed (by passing a proper SET/RESET current) to be either parallel or anti-parallel, corresponding to low and high resistive states, respectively <cit.>.STT-MRAM offers several advantages, such as fast switching, high endurance, and CMOS compatibility <cit.>. Despite their benefits, their resistance levels are low, typically within a few kΩ. As a consequence, attempting to read all the bit cells in a crossbar array simultaneously can result in an excessively high current at the output of the crossbar. In order to address the challenge of integrating STT-MRAM for matrix-vector multiplication operation, alternative spintronic memory technologies can be considered, such as Spin-Orbit Torque (SOT) based MRAM (SOT-MRAM), a three-terminal device. In SOT-MRAM, the MTJ is mounted on a heavy-metal substrate. The resistance states of the SOT devices can be adjusted, allowing them to achieve resistance levels ranging up to several MΩ <cit.>. In addition, the reliability during read operation in SOT-MRAM is considerably enhanced, as they have separate read and write paths. §.§ Related Works Previous studies have investigated hardware solutions for Bayesian and Binary Neural Networks, which are usually based on CMOS technology.In the paper <cit.>, a novel FPGA implementation is proposed that utilizes non-linear activation functions. However, this approach may be restricted when used with larger datasets. 
With similar technology, work in <cit.> proposed an architecture to implement MC-Dropout. The study described in <cit.> introduces an approach where only a partial BayNN is implemented, treating solely the last layers of the network as Bayesian. The technique described in <cit.> involves a CIM implementation in which the crossbar array stores the variance parameter and stochastic resistive RRAM devices are used to sample the probability distribution at the input of the array. This approach requires a single random element for each input, which is not very energy efficient. In <cit.>, the authors take advantage of the non-idealities of RRAM devices to apply Bayesian learning. In the paper by <cit.>, an implementation of neuron Dropout was proposed that takes advantage of the stochastic and deterministic features of STT-MRAM. Unfortunately, this approach requires a random number generator (RNG) for each neuron, leading to a considerable increase in power consumption. The research in <cit.> showed the application of a set of resistive crossbar arrays to store probabilistic weights to execute BayNN. In <cit.>, they proposed a suitable CIM architecture for BayNN, where multiple crossbar arrays are available and one is randomly chosen for each forward pass.In Yang et al. <cit.>, crossbar arrays were used to construct Bayesian neural networks with the help of low-barrier MTJs, resulting in a significant decrease in energy consumption. Despite the fact that memories with low-energy barriers are used, they have endurance limitations that can eventually have an impact on the precision of the CIM engine. The paper in <cit.> presented an alternative implementation with MRAM-based crossbar arrays that can represent mean and variance. However, this approach required considerable pre-processing to encode the mean and variance in the crossbars.In this paper, we propose an architecture that has a decreased dependence on RNGs. Also, we employ two arrays to store both the weights and the proposed Scale for a reduced overhead for BayNN implementation.§ PROPOSED APPROACH §.§ Scale VectorConsidering the constraints and opportunities in the CIM architecture (see section<ref>), we propose a hardware-software co-design approach for the scaling factor. Specifically, we design our scale factor (denoted as ) to be learnable through a gradient descent algorithm and the same shape as the bias vector of a layer, ∈ℝ^C_out× 1× 1. Here, C_out represents the number of output channels in convolutional layers and the number of neurons in linear layers. This choice is motivated by the desire to reduce memory overhead while ensuring compatibility with the CIM architecture. By making the scale factor learnable, we allow the training process to determine the optimal scale factor, making the model more adaptive and possibly improving its performance <cit.>. Note that the weight and bias parameters and the μ and σ variables of the batch normalization layer have the same value as the bias vector of a layer. Therefore, choosing the same shape of scale vector as those vectors leads to simplified computation and storage in CIM architecture. §.§ Scale Dropout Model Description Let a BNN with L hidden layers and ^(l-1) denote the input vector, ^(l) denote the output vector, ^(l) denote the scale vector, ^(l) denote the weights and ^(l) denote the biases of the layer l. 
The feed-forward operation (for l = 0, ⋯, L-1) of a BNN can be described as

z^(l) = ( sign(W^(l))^⊤ ⊗ sign(a^(l-1)) + b^(l) ) ⊙ α^(l),
ẑ^(l) = BatchNorm_γ,β( z^(l) ),
a^(l) = ϕ( ẑ^(l) ),

where ϕ denotes the element-wise nonlinear activation function of the BNN, e.g., the hyperbolic tangent (Tanh) function, ⊤ denotes the matrix transpose operation, and BatchNorm_γ,β(·) denotes batch normalization <cit.> with learnable parameters γ and β. In addition, ⊙ denotes element-wise multiplication, and ⊗ denotes binary convolution. With Scale-Dropout, the feed-forward operation becomes

d^(l) ∼ Bernoulli(p),
α̂^(l) = α^(l) · d^(l),
z^(l) = ( sign(W^(l))^⊤ ⊗ sign(a^(l-1)) + b^(l) ) ⊙ α̂^(l),
ẑ^(l) = BatchNorm_γ,β( z^(l) ),
a^(l) = ϕ( ẑ^(l) ).

Here, the Dropout mask for Scale-Dropout is defined as a scalar d ∈ {0, 1} and is independently sampled from a Bernoulli distribution with a probability parameter p for each layer. The scale vector multiplies the weighted sum of each layer. Therefore, if we were to set the scale values to zero (similar to traditional Dropout), it would lead to a complete loss of information in that layer. To address this problem, we introduce an alternative approach called Unitary Dropout. In this method, when the randomly generated Dropout mask is 0, all elements of the scale vector are set to 1. As a result, during forward propagation, the network ignores the scale factors whenever the Dropout mask is 0, while the scale vector retains its original value when the randomly generated Dropout mask is 1. Fig. <ref> shows the Scale-Dropout concept during training and inference.

Although we focus on Unitary Dropout in this paper due to its simple implementation in the CIM architecture, other alternatives can also be considered, for instance, Average Scale-Dropout and Random Scale-Dropout. In Average Scale-Dropout, instead of setting the scale vector to one, the dropped scale is replaced by the average of the scale vector. In Random Scale-Dropout, the dropped scale is replaced with a random value sampled from a predefined distribution, for example, a uniform distribution. Additionally, to reduce the number of Dropout modules to one in the CIM architecture, the entire scale vector is dropped at the same time. However, the proposed Scale-Dropout can also be applied to the scale vector element-wise at the cost of a large number of Dropout modules.

§.§ Co-adaptation Mitigation

The proposed Scale-Dropout imposes randomness on the scale vector and, in turn, on the activations of a layer. Thereby, it can potentially reduce co-adaptation between the scale vector and the binary weights. When α^(l) is treated as a random variable during training, the model becomes less dependent on specific scale values, promoting a more diverse range of features in the BNN. This phenomenon can be expressed mathematically as increased variance in the learned representations across the network, thus reducing co-adaptation.

§.§ Choosing Dropout Probability

To choose the Dropout probability of Scale-Dropout, we propose a layer-dependent adaptive scheme. Specifically, a Dropout probability of 10% or 20% is used on layers with a comparably small number of parameters, whereas a larger Dropout probability, e.g., 50%, is used on layers with a larger number of parameters.
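As a concrete illustration of the feed-forward behaviour above, a minimal PyTorch-style sketch of a Unitary Scale-Dropout layer could look as follows (our own sketch under assumed names and shapes, not the authors' released code; batch normalization, the sign activation, and the straight-through estimator of a complete BNN layer are omitted for brevity):

```python
import torch
import torch.nn as nn

class ScaleDropout(nn.Module):
    """Unitary Scale-Dropout: the whole scale vector is either applied or bypassed."""

    def __init__(self, num_channels: int, p: float = 0.2):
        super().__init__()
        self.p = p  # probability of bypassing (dropping) the scale vector
        # Learnable per-channel scale, initialized to one.
        self.alpha = nn.Parameter(torch.ones(num_channels))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # One Bernoulli draw per layer and per forward pass; the module stays
        # stochastic at inference time so repeated passes give Monte Carlo samples.
        dropped = bool(torch.rand(()) < self.p)
        scale = torch.ones_like(self.alpha) if dropped else self.alpha
        # Broadcast over (batch, channels, H, W) or (batch, features).
        shape = (1, -1) + (1,) * (z.dim() - 2)
        return z * scale.view(shape)
```

Because a single Bernoulli draw covers the entire scale vector of a layer, only one random bit per layer and forward pass is required, which is what later allows a single spintronic Dropout module to be shared across the whole model.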
With this layer-dependent choice of the Dropout probability, and unlike works <cit.>, where many different implementations with different locations for the Dropout layer need to be explored, our approach does not require such exploration, as Scale-Dropout is applied to all binary layers. It is also not necessary to explore various Dropout rates. Consequently, our approach stands out as a more deployment-ready solution compared to related works.

§.§ Learning with Scale-Dropout

The proposed BayNN with Scale-Dropout can be trained using stochastic gradient descent, similar to a standard BNN, using existing algorithms such as <cit.>. The only difference is that, for each forward pass during training, we sample a scaled network by applying Scale-Dropout. The forward and backward propagation of each iteration is performed only on this scaled network. The gradients for each parameter are averaged over the training instances of each mini-batch. The training objective combining a Bayesian approximation and Scale-Dropout is discussed in Section <ref>.

Although Scale-Dropout alone offers several benefits, using Scale-Dropout in conjunction with common regularization techniques such as L2 regularization, learning rate scheduling, data augmentation, and momentum for the gradient descent algorithm further improves accuracy.

§ SCALE-DROPOUT AS A BAYESIAN APPROXIMATION

As stated previously in Section <ref>, an NN with standard Dropout can be used as an approximate method of Bayesian inference. Gal et al. <cit.> showed that learning an NN with Dropout and L2 regularization is equivalent to a Gaussian process. The optimization objective of their approach, named MC-Dropout, is given by

ℒ(θ)_MC-Dropout = ℒ(θ, 𝒟) + λ ∑_{l=1}^{L} ( ||W^l||_2^2 + ||b^l||_2^2 ).

In this paper, we propose a Monte Carlo (MC)-Scale Dropout based Bayesian approximation that uses Scale-Dropout in place of the standard Dropout for Bayesian inference. Our approach extends the MC-Dropout approach <cit.> to BNNs and improves its efficiency with a specific learning objective. In the following, the learning objective and the procedure to obtain the model uncertainty for MC-Scale Dropout are discussed in detail.

§.§ Learning Objective

For the proposed Monte Carlo (MC)-Scale Dropout objective, we introduce a regularization function for the scales α. Specifically, we design a regularization function that encourages the scale factor to be positive, to preserve the sign of the computed weighted sum z^(l) of a layer l. It also encourages the scale factor to be centered around one, so that it scales the elements of z^(l) up or down based on their contribution to the loss. To achieve a Bayesian approximation, we use an approach similar to MC-Dropout. However, in MC-Dropout, activations are dropped to zero, which motivates an L2 regularization that pushes the weights towards zero. On the contrary, in our Unitary Dropout approach, the scale factors are dropped to one. This promotes a regularization effect that encourages the scale vector to be centered around one, a key distinction that aligns better with the nature of binary networks, where weights are binarized to -1 or +1. The regularization function can be mathematically described by

φ ∑_{l=1}^{L} ( 1 - μ_α^l )^2.

Here, μ_α^l is the mean of the scale vector of a layer l, and φ is the hyperparameter controlling the strength of the regularization. Besides the regularization of the scales, we also optionally apply L2 regularization to the weights. Applying L2 regularization is a challenge in BNNs.
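As a small illustration, the scale regularization above, together with the optional L2 term on the real-valued proxy weights discussed next, could be added to the task loss roughly as follows (our own sketch; `scale_vectors` and `proxy_weights` are hypothetical lists of the per-layer parameters, and the default coefficients mirror the values used later in the experimental setup):

```python
def scale_dropout_objective(task_loss, scale_vectors, proxy_weights,
                            weight_decay: float = 1e-5, phi: float = 1e-5):
    # Encourage each layer's scale vector to stay centered around one.
    scale_reg = sum((1.0 - alpha.mean()) ** 2 for alpha in scale_vectors)
    # Optional L2 regularization on the real-valued proxy weights (not on the
    # binarized weights, for the reasons explained below).
    l2_reg = sum((w ** 2).sum() for w in proxy_weights)
    return task_loss + weight_decay * l2_reg + phi * scale_reg
```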
In a BNN, real-valued proxy weights are binarized to +1 or -1; therefore, applying L2 regularization directly to either of them may not be beneficial <cit.>. However, L2 regularization can be applied to the actual real-valued weights, with binarization applied to weights normalized within the output channel dimension <cit.>. Opting for channel-wise normalization also proves advantageous in reducing binarization errors <cit.>. To achieve this, the channel-wise mean of the real-valued proxy weights W is first computed:

μ_c = 1/(H × 𝒲) ∑_{h=1}^{H} ∑_{w=1}^{𝒲} W_{h,w}.

Here, H and 𝒲 represent the height and width of the kernels in the weight matrix, i.e., the last two dimensions of W. Subsequently, the channel-wise mean μ_c is subtracted from the (real-valued) proxy weights:

W̃ = W - μ_c.

Following that, the channel-wise standard deviation is calculated on the zero-centered weights:

σ_c^2 = 1/(H × 𝒲) ∑_{h=1}^{H} ∑_{w=1}^{𝒲} W_{h,w}^2 - μ_c^2.

Note that the σ_c^2 calculation is simplified for efficiency reasons. Lastly, the zero-centered weights are divided by the channel-wise standard deviation for channel-wise normalization:

Ŵ = W̃ / σ_c.

Consequently, binarization of the channel-wise normalized weights can be defined as

W^* = +1 if Ŵ ≥ 0, and -1 otherwise.

Note that channel-wise weight normalization has become standard practice in modern BNN models. The overall objective of MC-Scale Dropout, with both scale and weight regularization, is defined as

ℒ(θ)_MC-Scale Dropout = ℒ(θ, 𝒟) + λ ∑_{l=1}^{L} ||W^l||_2^2 + φ ∑_{l=1}^{L} ( 1 - μ_α^l )^2.

Here, λ is the weight decay hyperparameter of the weight regularization.

§.§ Obtaining Model Uncertainty

To obtain the uncertainty of the model, we perform T forward passes with the proposed Scale-Dropout enabled during Bayesian inference. During each of the T forward passes, we sample independent and identically distributed random Dropout masks from the Bernoulli distribution for each layer, {d_t^(1), ⋯, d_t^(L)}_{t=1}^{T}, giving T stochastic scale vectors {α̂_t^(1), ⋯, α̂_t^(L)}_{t=1}^{T} and ultimately stochastic weighted sums {z_t^(1), ⋯, z_t^(L)}_{t=1}^{T}. The predictive mean is given by

E_{q(y^*|x^*, 𝒟)}(y^*) ≈ 1/T ∑_{t=1}^{T} y_t^*(x^*, α̂_t^(1), ⋯, α̂_t^(L)).

Here, x^* is the test input, q(y^*|x^*, 𝒟) is the approximate posterior distribution, y_t^* is the stochastic prediction of pass t, and y^* is the final prediction. We refer to this Monte Carlo estimate as MC-Scale Dropout. In practice, this is equivalent to performing T stochastic forward passes through the network and averaging the results. In the literature, this is known as model averaging <cit.>.

In terms of the posterior distribution of the output, equations <ref> can be modified as

ẑ^(l) = S^(l) ⊙ diag(d^(l)), with d^(l) ∼ Bernoulli(p), for l = 1, ⋯, L.

Here, S^(l) represents the weighted sum of a layer. Batch normalization is applied to ẑ^(l). Thus, the sampling process of the Dropout mask is the same as that of MC-Dropout. In an empirical evaluation (shown later in Section <ref>), we observed that the distribution of the output for each class approaches a Gaussian distribution as the number of stochastic forward passes (T) through the network increases. This is due to the aggregate effect of the scalar Dropout mask over many forward passes, which can be seen as introducing a form of multiplicative noise.

Uncertainty estimates of the prediction can be obtained from the variance of the T forward passes as

Var_{q(y^*|x^*, 𝒟)}(y^*) ≈ 1/T ∑_{t=1}^{T} ( y_t^*(x^*, α̂_t^(1), ⋯, α̂_t^(L)) - E_{q(y^*|x^*, 𝒟)}(y^*) )^2.

In addition, the K% confidence interval (CI) can also be used as an uncertainty estimate of the MC-Scale Dropout model. According to the central limit theorem, for sufficiently large T, {y_1^*, ⋯, y_T^*} follow a normal distribution.
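Before turning to the confidence interval, the predictive mean and variance above translate directly into a short Monte Carlo loop; the sketch below is our own illustration and assumes a classification model whose Scale-Dropout layers remain stochastic at inference time (function and variable names are illustrative):

```python
import torch

@torch.no_grad()
def mc_scale_dropout_predict(model, x, T: int = 20):
    """T stochastic forward passes with Scale-Dropout kept active."""
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(T)])  # (T, B, C)
    mean = probs.mean(dim=0)                 # predictive mean, as in the expectation above
    var = probs.var(dim=0, unbiased=False)   # predictive variance over the T passes
    return mean, var, probs
```

The stacked probabilities can also be reused for the confidence-interval and out-of-distribution criteria discussed in the following.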
For a K% confidence interval, we use the percentiles of the predictions. Let Q_η/2 be the η/2 quantile of the predictions. Where η = 1 - K/100. Consequently, the K% confidence interval is given byCI = [μ_y - Q_η/2σ_y/√(T), μ_y + Q_η/2σ_y/√(T)].Here, μ_Y and σ_Y represent predictive mean E_q(y^*| x^*, 𝒟)(y^*), and variance Var_q(y^*| x^*, 𝒟)(y^*) from formulas <ref> and <ref>, respectively. For sufficiently large T, the confidence interval can be approximated by directly calculating the 100-k/2 and 100+k/2 quantile (for a K% CI) of the predictions as CI≈[percentile(100-k/2), percentile(100+k/2)]. § HARDWARE IMPLEMENTATION§.§ Modelling Spintronic-based Scale DropoutIn our design, only one spintronic-based Dropout (namd here Spin-ScaleDrop) module is designed and implemented for the entire neural network. Thus, the proposed Spin-ScaleDrop module is reused for all layers of the CIM architecture. After the computation of a layer is performed, a new Dropout mask from the Spin-ScaleDrop Module module is sampled for the next layer. However, due to the manufacturing and infield variation of the MTJs in the Spin-ScaleDrop, the Dropout probability itself becomes a stochastic variable. We model the variation as a Gaussian distribution, the mean of the distribution μ represents the expected Dropout probability, and σ represents the device variations. Therefore, the probability of Dropout p of a layer l can be modelled as p̂_l = p_l + ϵwithϵ∼𝒩(μ,σ^2). Here, p̂_l denotes the probability of Dropout with variation in the process. The feed-forward operation expressed in equation <ref> remains the same, with only p̂_l used as the Dropout probability. Note that a probability has to be in [0,1], since p is usually chosen between 0.1 and 0.5, it is unlikely that p crosses this range due to variation. The Dropout probability typically varies from 3% to 10%. §.§ Designing Spintronic-Based Scale Dropout Module The Spin-ScaleDrop module is designed by harnessing the stochastic regime of an MTJ and is utilized as a random number generator. The probability density function governing the switching of the SOT-MTJ follows an exponential distribution and is expressed as <cit.>:p_ sw = 1-exp(t/τ)τ = τ_0 exp[Δ E/k_B 𝒯(1-2I/I_c0(π/2 - I/I_c0))]Here, Δ E is the thermal stability factor, I is the applied current through the SOT-track, t is the pulse duration, τ_0 is the attempt time, I_c0 is the critical current at 0, k_B is the Boltzmann constant and 𝒯 is the temperature. I_c0 represents the minimum current required to switch the MTJ.The equation (<ref>) is used to model the switching behavior of the SOT-MTJ for different switching currents while keeping the pulse width fixed at 10. To generate the bidirectional current across the SOT track, four transistors are added, as shown in Fig. <ref>. The desired switching probability of 50% is achieved by programming the MTJs through successive "SET" and "RESET" operations.To ensure reliable MTJ switching, the write duration is set to 10 for the SET operation and to 5 for the RESET operation. The state of the MTJ is read using a Sense Amplifier (SA, in Fig.<ref>). The SET and RESET cycles are repeated to generate a stochastic sequence.The Scale Dropout Module allows for the stochastic activation of the Scale vector that is stored in the neighboring memory.§.§ Proposed Spintronics-based CIM ArchitectureIn spintronic-based CIM architectures, the SOT-MRAM devices are arranged in a crossbar fashion, with an MRAM device at each crosspoint (see Fig. <ref>). 
For inference, the mapping of the trained binary weights to the array is performed with a one-time write operation. In BNN, XNOR and the bit-counting operation are performed instead of the weighted sum operation <cit.>. The XNOR operation in CIM is shown in Table <ref> and the encoding of the respective +1 and -1 weights with the complementary bit cell is shown in Fig. <ref>. The mapping of the weight matrix of various NN layers to the crossbar arrays is challenging because of their varying shapes and dimensions. While mapping Fully Connected (FC) layers is relatively simple due to their 2D weight matrices (ℝ^m × n), Convolutional (conv) layers pose challenges due to their 4D structures (ℝ^K × K × C_ in× C_ out), with K denoting the kernel size, C_ in and C_ outdenoting the number of input and output channels, respectively. There are primarily two strategies for mapping convolutional layers. The first strategy 1 involves unrolling each kernel of shape K× K× C_in into a column of the crossbar <cit.>. The second strategy 2 maps each kernel to multiple smaller crossbars of shape C_in× C_out, arranged in a K× K layout <cit.>.During inference (online operation), each element of the binary input vector x for a layer is converted into a (0, 1) or (1, 0) signal and fed into the crossbar array for inference. This architecture allows for parallel computation and outputs the weighted sum results as currents flow through each source line. Finally, the analog currents are converted to digital signals using Analog-to-Digital Converters (ADCs) and passed on to the Accumulator-Adder module to sum up the partial matrix-vectors multiplication. These partial multiplications are then stored in registers and multiplied with the Scale memory.Regarding the scale vectors, they are stored in a nearby 32-bit SRAM memory. In the scale memory, each row stores a scale vector. The column dimension of the SRAM memory depends on the maximum number of neurons or channels within the NN layers, and the row dimension depends on the number of layers in the model. This scale vector is subsequently applied, depending on the stochastic activation by the Scale Dropout module, using a multiplexer.Recent state-of-the-art CNN topologies, e.g., ResNet and DenseNet, use skip connections. In Computing in Memory architectures, skip-connection can be implemented by selectively routing the output signals through the crossbars and summing them with digital circuits. Since layer-by-layer computations are sequential, signals for these connections can be stored in a buffer memory until the computation of the following layers is completed. § EXPERIMENTAL AND SIMULATION SETUP§.§ Datasets§.§.§ In distribution (ID) Dataset To evaluate both predictive performance and uncertainty estimation, we have used several challenging benchmark and real-world biomedical in-distribution datasets on various learning paradigms (classification and semantic segmentation) in the context of Bayesian deep learning. An in-distribution dataset refers to a set of data samples that come from the same distribution as the data the model was trained on. For example, if a model is trained on images of an aeroplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck from CIFAR-10, then, during inference, more images of CIFAR-10 (although not seen during training) would be considered an in-distribution dataset. Specifically, for classification, we have used the CIFAR-10. 
Furthermore, for biomedical semantic segmentation, breast ultrasound scans (for breast cancer)<cit.>, COVID-19 lung computed tomography (CT) <cit.>, and skin cancer <cit.>. The breast cancer dataset containing ultrasound scans is a vital resource used for the early detection of breast cancer, one of the leading causes of death among women worldwide. The dataset is classified into three classes, normal, benign, and malignant images, and has a total of 780 images with a size of 500 × 500 pixels on average. On the other hand, the Skin Cancer dataset for Biomedical Segmentation contains 200 dermoscopic images of shape 572× 765 pixels with their corresponding label masks. Accurate prediction of skin cancer allows computer-aided diagnostic systems to assist medical professionals in the early detection and precise delineation of skin lesions. Lastly, the COVID-19 lung CT dataset contains anonymized human lung CT scans with different levels of severity in COVID disease.Evaluating the proposed method on various datasets shows the scalability and generality of our proposed approach. Note that semantic segmentation, which involves segmenting an image into multiple sections and labeling each pixel with its corresponding class label, is regarded as more difficult than classification tasks due to its finer granularity.We have applied random data augmentation and dataset normalization on all the datasets during training to improve accuracy. For example, for CIFAR-10 datasets, we have applied RandomHorizontalFlip and RandomResizedCrop type random data augmentation.§.§.§ OOD DatasetWe used six additional OOD datasets to evaluate the efficacy of our method in dealing with data uncertainty. : * Gaussian noise (𝒟̂_1): Each pixel of the image is generated by sampling random noise from a unit Gaussian distribution, ∼𝒩(0,1),* Uniform noise (𝒟̂_2): Each pixel of the image is generated by sampling random noise from a uniform distribution,∼𝒰(0,1),* CIFAR-10 with Gaussian noise (𝒟̂_3): Each pixel of the CIFAR-10 images is corrupted with Gaussian noise,* CIFAR-10 with uniform noise (𝒟̂_4): Each pixel of the CIFAR-10 images is corrupted with uniform noise,* SVHN: Google street view house numbers dataset <cit.>, and* STL10: a dataset containing images from the popular ImageNet dataset <cit.>.Each of these OOD datasets contains 8000 images, and the images have the same dimensions as the original CIFAR-10 dataset (32 × 32 pixels).§.§ Evaluated Topologies and Training setting The proposed Scale-Dropout is evaluated for its predictive performance and uncertainty estimation in state-of-the-art convolutional NN (CNN) topologies, including ResNet <cit.>, and VGG <cit.> for benchmark classification tasks. In the case of biomedical image segmentation tasks, U-Net <cit.>, and Bayesian SegNet <cit.>, topologies are used. The U-Net topology consists of a contracting path and an expansive path with skip connections, which gives it the U-shaped architecture. On the other hand, Bayesian SegNet is a deep convolutional encoder-decoder architecture for semantic image segmentation.All models are trained with the Adam optimization algorithm with default settings in the PyTorch framework to minimize the proposed objective function with a weight decay rate of λ=1× 10^-5 and φ =1×10^-5. Classification and segmentation tasks are trained for 300 epochs. All weights and activations of models for classification tasks are binarized (1-bit model). 
The activations of biomedical semantic segmentation models are quantized to 4 bits, but their weights are kept binary. As stated previously, semantic segmentation tasks are more difficult and therefore, they require slightly more bit precision at the activation for accurate predictions. We have used the activation quantization algorithm proposed in <cit.> to quantize the activations to 4 bits. Since 1-bit weights are still maintained, the crossbar structure does not need to be modified. In fact, only peripheral modifications, such as ADC with increased bit resolution are required.We have used the recently proposed IrNet <cit.> binarization algorithm to implement the proposed learnable scale and Scale-Dropout. Note that (to our knowledge) any binarization algorithm can be extended with our method with slight modification, i.e., add a learnable scale vector and scale Dropout. §.§ Evaluation MetricsThe evaluation metrics utilized to assess the effectiveness of segmentation tasks include pixel-wise accuracy, Intersection-Over-Union (IoU), Sensitivity, Specificity, Area Under the ROC Curve (AUC), F1 Score, and precision. Among these metrics, IoU holds particular significance, as it comprises the ratio between the area of overlap and the area of union between the predicted and ground-truth segments. Pixel-wise accuracy, on the other hand, quantifies the percentage of pixels in the predicted image that have been correctly classified. Specificity conveys the model's capability to accurately recognize actual negative cases, while Sensitivity reflects its ability to correctly identify actual positive cases. AUC is a single-number summary of the true-positive and false-positive rates, with higher values indicating superior performance. The F1 score serves as a comprehensive metric by integrating both precision and recall to measure the accuracy of a model on a dataset. Lastly, precision indicates the proportion of positive predictions that are genuinely correct.On the other hand, classification tasks are evaluated for their inference accuracy, which constitutes the ratio of images correctly classified by the model over the total number of images in the validation dataset.(Epistemic) Uncertainty estimation of the models is evaluated on predictive variations, entropy, and confidence interval with K=95% based on Equations <ref>,<ref>, respectively. Out-of-distribution data is detected as:OOD,if max(𝒬({_̂t̂^*}_t=1^T)) < 0.95ID,otherwise. Here, _̂t̂^* represents the softmax output obtained during the stochastic forward pass of the MC run t out of the T runs. The function 𝒬(·) calculates the 1-th quantiles among the set of values, while max(·) finds the maximum confidence score among output classes. The classification as ID or OOD depends on whether the maximum value from the 10th percentile of the averaged outputs is less than 0.95 (for OOD) or not (for ID). The underlying idea of our OOD detection is that for in-distribution data, most confidence scores from the T MC runs are high and close to one another, resulting in low variance. In contrast, for out-of-distribution data, confidence scores exhibit higher variance. §.§ Architectural SimulationTo carry out the architectural simulation, we first obtained the circuit specifications for the peripheral blocks, as outlined in Section <ref>. We then independently simulated each component of the architecture to gauge its energy utilization. 
Both the crossbar array and the Spin Scale-Dropout module were analyzed using an electrical simulator such as the Simulation Program for Integrated Circuit (SPICE), to assess their energy consumption. The use of high-resistance SOT devices <cit.>, in conjunction with the binary nature of the network, serves to reduce the overhead related to peripheral elements. The Accumulator-Adder, Comparator, and Averaging circuits were synthesized using the Synopsys Design Compiler, leveraging the TSMC 40 nm low-power Process Design Kit (PDK). For the CIM operation, decoding and sensing were assessed at the circuit-array level using NVsim (NonVolatile memory simulator) <cit.>. To achieve this, we modified the NVsim simulator to accommodate multiple active cells, thus simulating CIM operation accurately. Additionally, we substituted the single-bit sense amplifiers with multi-bit ADCs. Performance metrics for each discrete component are shown in Table <ref>. § EVALUATION§.§ Predictive Performance§.§.§ Comparison With State-of-the-Art Algorithms The predictive performance of our method is comparable to the SOTA binary Bayesian NN methods, as shown in Table <ref> on a range of CNN architectures, including VGG, ResNet-18, and ResNet-20, evaluated on the CIFAR-10 dataset. In the worst case, the predictive performance is 1.45% below the SpinDrop <cit.> method for the VGG topology. Here, we assumed that there are no device variations in the spintronics-based scale Dropout module. For a fair comparison, we used the same network size as those used in their work. However, the hardware implementation of our solution may lead to a smaller area and a better power-performance product owing to a simpler spintronics-based Dropout module design. In our analysis, we have used layer-dependent adaptive Dropout rates (See Section <ref>) to scale Dropout. The low variance in inference accuracy (numbers in brackets) shows the stability of the proposed approach.In terms of the activation function, the proposed binary BayNN uses the Sign(.) function, which is an approximation of the hard Tanh function. In this case, the proposed method performs similarly to the MC-Dropout method. However, in the case of the ReLU activation function in the MC-Dropout model, the accuracy of the MC-Dropout model increases. Thus, the difference between the proposed method and the MC-Dropout increases to around ∼ 1%.Furthermore, our proposed method achieves an improvement of up to 6.74% in inference accuracy compared to the SOTA point estimate BNN algorithm.However, since our method is built on top of the IR-Net BNN algorithm <cit.>, predictive performance should be comparable to their approach. As depicted in Table <ref>, the predictive performance is in the worst case 0.18% lower, which is negligible. Similarly, accuracy is comparable to the full precision model, depicting that our method in general does not increase quantization error. Note that in the full precision model, the ReLU function is used as the activation for the convolution layers, while our proposed method uses the activation function sign(x) for all layers where activations are applied. For biomedical image segmentation tasks, the proposed method outperforms the full-precision MC-Dropout method by up to 6.4% in terms of IoU score. In the worst scenario, our method results in a 69.69% reduction in the IoU score for the breast cancer dataset. Additionally, our approach outperforms the MC-Dropout method in most other metrics. Table <ref> presents a summary of the results. 
The predictive performance is qualitatively shown in Figure <ref> for each dataset (with two examples). The sixth and third columns show the prediction mask for MC-Dropout and our method, respectively. It can be observed that the segmentation masks for the proposed method are similar to MC-Dropout and ground truth. In general, misclassified pixels are around the boundary of ground-truth masks.§.§.§ Impact of MC Runs on Inference AccuracyWe observed that using Monte Carlo sampling (T forward pass) for Bayesian inference generally enhances predictive performance across all datasets. For example, the inference accuracy of the ResNet-20 model increases from 84.63% to 86.05%. In our evaluation, twenty samples (T = 20) for the larger model and fifty samples (T = 50) for the smaller model were used for Bayesian inference.Fig. <ref> (a) shows that the proposed method requires a smaller number of samples, with inference accuracy plateaus around T = 20 to 50. In comparison, MC-Dropout and MC-DropConnect methods require 100, and 90 Monte Carlo sampling, respectively, to achieve the maximum inference accuracy, as reported in <cit.> for CIFAR-10. In our experiment (see Fig. <ref> (b)), we observed that the MC-DropConnect method plateaus at 100 Monte Carlo runs, and the MC-Dropout method plateaus at 200 Monte Carlo runs on the same model (VGG) and dataset. Therefore, our method requires up to 180 less Monte Carlo sampling, leading to 10× less XNOR and bit-counting operation, energy consumption, and latency for each Bayesian inference result. We assume the same NN topology, hardware architecture, and memory device technology for a fair comparison. However, it should be noted that T at which accuracy plateaus can vary from task to task and from model to model. §.§.§ Performance of BayBNN with Spin-ScaleDropWe have shown that the predictive performance of our method is comparable to that of the full precision and binary implementations, assuming that the Spintronics Dropout module remains unchanged. However, it should also be tolerant to manufacturing and thermal variations in the Spintronic-based Dropout module.To this end, we performed a small ablation study on the CIFAR-10 dataset with ResNet-18 and VGG topologies with models trained with and without variations in the Dropout module. Specifically, in one study, we trained both models with no variation in the probability of Dropout but, during Bayesian inference, we evaluated the model against our proposed Spintronic-based Dropout with up to 3× the standard deviation σ of the manufacturing variations. This means that the Dropout probability of each neuron can fluctuate by ±10% from the trained value. In this case, a slight improvement (+ 0.13%) in predictive performance is observed for the ResNet-18 model, but for the VGG model, a slight reduction in inference accuracy (- 0.19%) is observed. Nevertheless, the inference accuracy for both models remains close to the baseline accuracy. Furthermore, increasing the variation of a model from 0× to 3× has a negligible effect on the inference accuracy.In the other case, the NN is trained considering the variation in the Dropout module (see Section <ref>). In this case, unlike in the previous case, there is a slight improvement in predictive performance (up to + 0.26%) for both models compared to the baseline. Variation in the Dropout probability leads to more stochastically during Bayesian inference, and as a result, accuracy improves slightly. The results are summarized in Table <ref>. 
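For completeness, the Dropout-probability model used in this variation study (Section on modelling the Spin-ScaleDrop module) can be emulated in software as in the following sketch; this is our own illustration, and the Gaussian parameters are placeholders for the measured device variation rather than values reported in the paper:

```python
import torch

def perturbed_drop_probability(p_nominal: float, sigma: float = 0.03,
                               mu: float = 0.0) -> float:
    """Dropout probability with device variation: p_hat = p + eps, eps ~ N(mu, sigma^2)."""
    eps = mu + sigma * torch.randn(()).item()
    # Clamp to a valid probability; with typical p between 0.1 and 0.5 and a few
    # percent of variation, the clamp is rarely active.
    return float(min(max(p_nominal + eps, 0.0), 1.0))
```

During variation-aware training, each layer's Scale-Dropout probability can be replaced by such a perturbed value, e.g., `perturbed_drop_probability(0.2)`, before sampling the layer's Dropout mask.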
§.§ Uncertainty EstimationDetecting Distribution Shift To show the effectiveness of the proposed Scale-Dropout method in detecting distribution shifts in the data, we conducted two experiments. In one experiment, we continuously added random noise from a uniform distribution to the input data with increasing strength. As shown in Fig. <ref>, the variance and confidence interval in the model logits (SoftMax input) and the predicted probability of the output classes (SoftMax output) increases as the strength of noise increases. In other words, the uncertainty in the prediction increases as the distribution dataset shifts away from the original distribution. However, despite the high uncertainty, the model predicts a truck or a bird.On the other hand, we have performed another experiment with all images of the CIFAR-10 dataset on the VGG model continuously rotated up to 90^∘. It can be seen in Fig. <ref> that as the images are rotated, the inference accuracy decreases and the predictive entropy increases from the starting entropy. Our method is compared with deterministic as well as common uncertainty estimation techniques, namely MC-Dropout <cit.> and Deep Ensemble <cit.> with five randomly initialized models. The trend of decrease in inference accuracy is similar for all models. However, the Deep Ensemble slightly outperforms the proposed and other methods in terms of accuracy. Regardless, our proposed MC Scale-Dropout method produces significantly more predictive entropy compared to other methods, including the Deep Ensemble method. This is because the proposed MC Scale-Dropout method effectively turns a single model into numerous ensembles by enabling Dropout during inference, allowing it to generate multiple predictions from a single model. Whereas, Deep Ensemble has limited models in the ensemble, e.g., five models. Also, the scale Dropout provides a regularization effect that can sometimes lead to better generalization in MC Scale-Dropout models compared to individual models in a Deep Ensemble. Consequently, our method can produce better uncertainty estimates compared to related works even with a 1-bit model.Note that in the MC-Dropout model, Dropout is applied to the extracted features from the convolutional layers to achieve similar inference accuracy. If Dropout is applied to all layers, the predictive entropy increases, but inference accuracy decreases significantly, e.g., by more than 3%. Our approach provides a good balance between uncertainty estimates without any degradation in accuracy. Detecting Out-of-distribution DataWe show that the model uncertainty increases as the distribution of the data shifts from the original distribution. Here, we perform an ablation study with six (definitive) out-of-distribution datasets.As depicted in Table <ref>, our proposed method can achieve a detection rate of OOD of up to 100% across various model architectures and six different OOD datasets (𝒟̂_1 through 𝒟̂_6). There are some variations in OOD detection rates across different architectures for the same OOD dataset. However, even in these cases, our method can consistently achieve a high OOD detection rate, with the lowest detection rate being 77.77% on the ResNet-18 model with 𝒟̂_4 dataset. However, when the threshold for SoftMax confidence increases from 95% to 99%, the OOD detection rate in the dataset 𝒟̂_4 improved to 81.78%, an ∼ 4% improvement. Compared to MC-Spatial Dropout and SpinBayes methods, the OOD detection rates are generally similar. 
In the worst case, the OOD detection rate is ∼ 14% lower for the VGG topology on the 𝒟̂_6 dataset. Therefore, the results indicate that the proposed MC-Scale Dropout method is a robust and reliable solution to OOD detection across diverse model architectures and datasets. Epistemic Uncertainty of Semantic SegmentaionFor biomedical segmentation tasks, the epistemic uncertainty is calculated for each pixel. The fifth and eighth columns of Fig. <ref> depict the pixel-wise uncertainty masks (qualitatively) for the MC-Dropout and the proposed MC-Scale Dropout method. In segmentation tasks, an ideal model would produce high uncertainty around misclassified pixels and low uncertainty around correctly classified pixels. Overall, as depicted in Fig. <ref>, the uncertainty is high around the misclassified pixels for the proposed method, but correctly classified pixels have low uncertainty. In general, the uncertainty masks for MC-Dropout are darker, depicting slightly stronger uncertainty estimates due to their higher model precision (32 bits) and a higher Dropout probability (50%). However, in some cases, the uncertainty mask is also stronger in the region of correctly classified pixels. However, our proposed method produces uncertainty only around miss-classified pixels. §.§ Hardware Overhead AnalysisTo assess the energy consumption of the proposed approach, we estimated the required resources for implementing a network of five layers with the Scale-Dropout method, and we assumed using 10 crossbar arrays of 256×256 and 10 Spin-ScaleDrop modules to implement a LeNet-5 network. The total area needed for the implementation of the LeNet-5 topology is 0.401mm^2 comprising the crossbar arrays and the memories. The area estimation is based on the NVSim and layout measurement.Given the energy consumption of the different components of our architecture shown in Table <ref>. We used the NVSim simulator to estimate the total energy consumption for an inference run and multiplied this value by the number of forward passes (MC run). The analysis is carried out for ten forward passes (T=10). The energy consumption of an inference run is shown in Table <ref> compared to other FPGA and CIM implementations.We evaluated two topologies LeNet-5 for the MNIST dataset and VGG small for CIFAR-10. For a consistent benchmark, the same metrics as in previous studies were used. The Scale-Dropout approach significantly improves energy efficiency, reaching up to 100× higher efficiency compared to the method presented in <cit.>. Compared to the implementation in <cit.>, our approach is 51× better.Furthermore, compared to the implementation based on STT-MRAM <cit.>, the proposed approach exhibits 3.77× better efficiency. Finally, compared to reference <cit.>, our approach demonstrates 4.38× greater energy efficiency. To scale up the approach, we have performed an energy consumption estimation with a VGG small topology and we report0.29 μ J/Image. Thus, energy consumption remains notably low even when considering a larger dataset such as CIFAR-10. Furthermore, Scale-Dropout requires only one RNG per layer compared to similar approaches <cit.>. An RNG can be shared for all layers to reduce the number of RNGs for the whole model to one. 
This significantly contributes to a reduction in energy consumption.§ DISCUSSION §.§ In Distribution Uncertainty AnalysisWe thoroughly analyzed the performance of the proposed method in data distribution shift and out-of-distribution data in Section <ref>, it is equally important to perform well when it receives in-distribution data. This means that correct predictions should have low uncertainty and a model should accept most of them.In our in-distribution data analysis (Table <ref>), we present the accepted, rejected, TPR, TNR, and AR percentages. TPR indicates the rate of correct and accepted predictions, while TNR refers to rejected and incorrect predictions. High TPR and TNR rates are desired as they suggest that most of the accepted predictions have low uncertainty, and incorrect predictions have high uncertainty. A high AR rate also indicates that most of the correct predictions are accepted.The VGG and ResNet-18 models, with their larger size, effectively handle the complexity of the CIFAR-10 task, showing acceptance of approximately 80% and more than 80% in both TPR and TNR, plus more than 97% in AR, confirming the efficacy of our method.On the contrary, the smaller ResNet-20 model is not optimal for handling the complexity of CIFAR-10, leading to 'uncertainty in model architecture' <cit.> and consequently to greater uncertainty in prediction. To be specific, its inference accuracy is comparatively lower ∼ 86% compared to ∼ 91% for the other model, since it only has 16, 32, and 64 neurons in the residual blocks. Thus, it has a lower acceptance rate (41%). That means that most predictions are uncertain and our method is also effective in quantifying 'uncertainty in the model architecture'. Note that our classification of the predictions (OOD or ID) with our approach (see Equation <ref>) is conservative and prioritizes certainty. Adjusting quantile and confidence scores (see Equation 24) can increase acceptance rates closer to inference accuracy but may decrease OOD detection rates.§.§ Corruption Robustness Analysis The proposed method is evaluated on 15 common corruptions reported in the work (CIFAR-10-C) <cit.> with various topologies with and without pre-processing, as shown in Fig. <ref>. Our approach can achieve an OOD detection rate of on average 87.06%, 86.10% and 97.64% for VGG, ResNet-18, and ResNet-20 topologies, respectively, when no pre-processing is applied.On the other hand, when the corruption robustness dataset is pre-processed by channel-wise normalizing them, i.e., they have the same channel-wise distribution as the clean CIFAR-10 data the model expects, the corruption error drastically reduces. For example, the mean corruption error for VGG was reduced from 82.84% to 49.95%. Consequently, the uncertainty of the predictions also reduces. Specifically, our approach achieves OOD detection rates of 58.48%, 56.21%, and 87.73%, respectively, for VGG, ResNet-18 and ResNet-20 topologies. Therefore, pre-processing the dataset standardizes the data and improves the corruption robustness similar to the histogram equalization results reported in the work <cit.>.In terms of topology, in larger networks, e.g., ResNet-18, the corruption error is relatively lower. For example, in the case of Gaussian noise, the corruption error is reduced from 86.85% in VGG to 83.52% in ResNet-18. A similar trend is observed for other datasets. 
However, despite the fact that the ResNet-20 model is smaller than ResNet-18, it has a relatively higher corruption error because the smaller model introduces “uncertainty in model architecture” as mentioned in the previous section. Nevertheless, there is a direct relationship between corruption error, uncertainty, and, in turn, the OOD detection rate.In cases where the accuracy is reduced by a small margin, the model uncertainty is low, and the OOD detection rate with our approach is also low. For example, the worst-case OOD detection rate for the VGG topology is 29.53%. This is achieved when the accuracy is reduced by only 6.89% for brightness corruption. On the other hand, the highest OOD detection rate is achieved when the accuracy is reduced by 78.88% for the VGG topologies.§.§ Variability and scalability This study extends previous research demonstrating the robustness of the dropout approach against device variability <cit.>.It highlights the impact of dropout module variations on network accuracy as shown in Table <ref>. Moreover, we propose to utilize SOT-MRAM, capable of achieving resistance levels up to several MΩ, which aligns with previous simulations emphasizing resistance's crucial role in constructing large arrays <cit.>, validating the scalability and energy efficiency of this approach.§.§ Empirical Analysis of the Posterior DistributionWe have performed an empirical evaluation of our method on the CIFAR-10 dataset and the ResNet-18 topology. We have observed that as the number of Monte Carlo samples increases, the histogram of the posterior distribution for each of the 10 classes indeed approaches a Gaussian distribution and can be considered as an approximate Gaussian distribution similar to the MC-Dropout method.§ CONCLUSION In this paper, we propose a novel Dropout approach, Scale-Dropout, which drops the entire scale vector of a layer stochastically with a probability p. Our approach required only one Dropout module in the hardware, regardless of the model size. Additionally, we propose scale-Dropout-based Bayesian inference, MC-Scale Dropout, for efficient uncertainty estimation. Furthermore, we propose a novel SOT-MRAM-based CIM implementation for accelerating Bayesian NNs. In our CIM architecture, the stochastic and deterministic aspects of SOT-MRAM have been combined in a crossbar array-based architecture, with only changes made in peripheral circuitry, and achieve up to 100× of energy savings. In terms of uncertainty estimation, our approach can detect up to 100% out-of-distribution data, significantly higher uncertainty estimates compared to popular uncertainty estimation approaches. Additionally, predictive performance is improved by up to 1.33% compared to SOTA binary BayNN and improved by up to 0.26% compared to conventional BNN approaches. Our approach combines the algorithmic approach with the cost-effective and energy-efficient SOT-MRAM-based CIM implementation for reliable prediction.IEEEtran-2plus -1fil [< g r a p h i c s > ]Soyed Tuhin Ahmed has received his bachelor's in Electrical and Electronics Engineering from American International University Bangladesh with Summa Cum Laude and afterwards Master in Communication Engineering from Technische Universität München in 2020. He joined the CDNC group at Karlsruhe Institute of Technology, Karlsruhe, Germany as a PhD student in September 2020. 
His current research interests are Deep learning, scalable and low-cost uncertainty estimation, resilient hardware accelerator for machine learning, and robust and accurate deep learning.-2plus -1fil[ < g r a p h i c s > ]Kamal Danouchi graduated from the University of Aix-Marseille, France, with an engineering degree in Microelectronics in 2021. He is currently pursuing his PhD at CEA-SPINTEC. His research interests cover emerging non-volatile memories, spintronics, and IC design for unconventional computing. -2plus -1fil [ < g r a p h i c s > ]Michael Hefenbrock received the Ph.D. degree in computer science from the Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany, in 2022. He is currently the head of artificial intelligence at RevoAI GmbH, Karlsruhe. His current research interests include machine learning and optimization and their application to problems in design automation.-2plus -1fil [ < g r a p h i c s > ]Guillaume Prenat graduated from Grenoble Institute of Technology in France, where he obtained his engineer degree in 2002 and his PhD degree in microelectronics in 2005. He joined SPINTEC in 2006 to take in charge the design activity.In 2021, he obtained his habilitation to conduct research from University Grenoble Alpes. His field of interest covers the development of design tools for CMOS/magnetic technology and the evaluation of hybrid non-volatile circuits (FPGA, processors...) to contribute to circumventing the limits of microelectronics. In this framework, he was involved as the scientific contact in 8 European and French research projects, and as the coordinator of a H2020 ICT project embedding 9 academic and industrial partners. He was also in charge of the collaboration contract between Spintec and the startup eVaderis.-2plus -1fil [ < g r a p h i c s > ]Lorena Anghelreceived the Ph.D. degree (cum laude) from Grenoble INP in 2000. From 2016 to 2020, she was the Vice President of Grenoble INP, in charge of industrial relationships, where she is currently the Scientific Director. She is also a Full Professor at Grenoble INP and a member of the Research Staff at the Spintec Laboratory. She has published more than 130 publications in international conferences and symposia. She was a recipient of several best paper and outstanding paper awards. She had fulfilled positions, such as the General Chair and Program Chair for many prestigious IEEE conferences, including IEEE VTS, IEEE ETS, IEEE NANOARCH, and IEEE On-Line Test Symposium.-2plus -1fil[ < g r a p h i c s > ]Mehdi B. Tahoori (M'03, SM'08, F'21) received the B.S. degree in computer engineering from the Sharif University of Technology, Tehran, Iran, in 2000, and the M.S. and Ph.D. degrees in electrical engineering from Stanford University, Stanford, CA, in 2002 and 2003, respectively. He is currently a Full Professor at the Karlsruhe Institute of Technology, Karlsruhe, Germany.In 2003, he was an Assistant Professor at the Department of Electrical and Computer Engineering, Northeastern University, where he became an Associate Professor in 2009. From August to December 2015, he was a visiting professor at the VLSI Design and Education Center (VDEC), University of Tokyo, Japan. From 2002 to 2003, he was a Research Scientist with Fujitsu Laboratories of America, Sunnyvale, CA.Prof. Tahoori was a recipient of the National Science Foundation Early Faculty Development (CAREER) Award. He has received a number of best paper awards at various conferences and journals, including ICCAD, FPL, TODAES, and TVLSI. 
He is a fellow of the IEEE and a recipient of the European Research Council (ERC) Advanced Grant. | http://arxiv.org/abs/2311.15816v2 | {
"authors": [
"Soyed Tuhin Ahmed",
"Kamal Danouchi",
"Michael Hefenbrock",
"Guillaume Prenat",
"Lorena Anghel",
"Mehdi B. Tahoori"
],
"categories": [
"cs.LG",
"cs.AI",
"cs.ET"
],
"primary_category": "cs.LG",
"published": "20231127134120",
"title": "Scale-Dropout: Estimating Uncertainty in Deep Neural Networks Using Stochastic Scale"
} |
Quadrature Rules on Triangles and TetrahedraZ. Worku, J.E. Hicken, D.W. ZinggZelalem Arega Worku [email protected] Jason E. Hicken [email protected] David W. Zingg [email protected] [†]Institute for Aerospace Studies, University of Toronto, Toronto, Ontario, M3H 5T6, Canada []Department of Mechanical, Aerospace, and Nuclear Engineering, Rensselaer Polytechnic Institute, Troy, NY, United States -0.5cm Quadrature Rules on Triangles and Tetrahedra for Multidimensional Summation-By-Parts Operators Zelalem Arega Worku[†] Jason E. Hicken[] David W. Zingg[†]=====================================================================================================-2.5cm tocsectionAbstract Multidimensional diagonal-norm summation-by-parts (SBP) operators with collocated volume and facet nodes, known as diagonal- operators, are attractive for entropy-stable discretizations from an efficiency standpoint. However, there is a limited number of such operators, and those currently in existence often have a relatively high node count for a given polynomial order due to a scarcity of suitable quadrature rules. We present several new symmetric positive-weight quadrature rules on triangles and tetrahedra that are suitable for construction of diagonal- SBP operators. For triangles, quadrature rules of degree one through twenty with facet nodes that correspond to the Legendre-Gauss-Lobatto (LGL) and Legendre-Gauss (LG) quadrature rules are derived. For tetrahedra, quadrature rules of degree one through ten are presented along with the corresponding facet quadrature rules. All of the quadrature rules are provided in a supplementary data repository. The quadrature rules are used to construct novel SBP diagonal- operators, whose accuracy and maximum timestep restrictions are studied numerically.65M06 65M12 65N06 65N12 § INTRODUCTION Summation-by-parts (SBP) operators enable the construction of entropy-stable high-order discretizations of the Euler and Navier-Stokes equations <cit.>.Diagonal-norm SBP operators have collocated solution and volume quadrature nodes, which enables trivial inversion of the norm/mass matrix, facilitating efficient implementation of high-order methods with explicit time-marching schemes. Due to the collocation, the efficiency of the method is significantly affected by the number of quadrature nodes. This is even more prominent for the Hadamard-form entropy-stable discretizations <cit.> of the Euler and Navier-Stokes equations on simplices, as a volume flux computation coupling each node with all other nodes must be calculated. Therefore, the development of quadrature rules with fewer nodes on simplices is imperative to improve efficiency of entropy-stable discretizations with diagonal-norm SBP operators. While existing positive interior (PI) quadrature rules offer the fewest nodes for a given quadrature accuracy, their use for entropy-stable SBP discretizations requires expensive element coupling computations, although there are ways to reduce this cost to some extent <cit.>. Alternatives to PI rules for SBP operators have been proposed; notably, Hicken <cit.> derived quadrature rules with a set of volume nodes on each facet of the triangle and tetrahedron, and Chen and Shu <cit.> presented rules on the triangle that enable construction of SBP operators with collocated volume and facet quadrature nodes, which are referred to as diagonal- or ^0<cit.> SBP operators. 
Diagonal- SBP operators eliminate the need to extrapolate the solution from volume to facet nodes and are of particular interest for entropy-stable SBP discretizations as they reduce the cost of element coupling operations and enable straightforward enforcement of boundary conditions. However, the number of nodes required for the quadrature rules of existing diagonal- operators is significantly larger than that of the PI rules, especially for the tetrahedron. Furthermore, only a limited number of rules of this type are available in the literature. In light of this, the goal of this paper is to find efficient quadrature rules on the triangle and tetrahedron for the purpose of constructing efficient diagonal-norm diagonal- multidimensional SBP operators.Quadrature rules with boundary nodes on simplices have been explored, although to a lesser extent than PI rules. Sets of nodes on the triangle that are typically well-suited for interpolation or quadrature accuracy have both been utilized to derive such rules, , see <cit.> among others. Similar studies for the tetrahedron, however, are lacking. A shared property of several of the nodal sets in the mentioned studies is that for a degree p operator there are p+1 nodes on each facet of the triangle, and vertices are included. This automatically excludes the rules from being applicable for construction of diagonal- SBP operators, as the facet quadrature accuracy is not sufficient. We recall that a sufficient condition on the facet quadrature rule for the existence of a degree p SBP operator is that it be at least of degree 2p accurate. Although this is only a sufficient condition, as is evident from the existence of Legendre-Gauss-Lobatto (LGL) tensor-product operators on quadrilaterals and hexahedra, to the authors' knowledge, no diagonal- SBP operator on simplices with facet quadrature rule of degree less than 2p has been constructed.Odd-degree quadrature rules on the triangle for diagonal- SBP operators with Legendre-Gauss (LG) facet nodes were first introduced in <cit.>, followed by even degree rules with LGL and LG facet nodes in <cit.> and odd-degree rules with LGL facet nodes in <cit.>. Together, these studies provide quadrature rules up to degree eight[<cit.> provides additional rules of degree 12 and 16 with LGL facet nodes.] on the triangle. For the tetrahedron, Hicken <cit.> derived even degree quadrature rules up to degree eight with PI rule facet nodes. Marchildon and Zingg <cit.> studied optimization of these operators and developed novel rules up to degree four on the tetrahedron, managing to lower the number of nodes for the degree two and four quadrature rules. This was achieved by allowing the facet nodes to be placed at the vertices and edges of the tetrahedron. Other efforts to improve the efficiency of entropy-stable discretizations with multidimensional SBP operators include, for instance, the use of staggered grids in<cit.>, entropy-split formulations in <cit.>, and collapsed coordinate tensor-product elements in<cit.>.In this paper, we derive symmetric quadrature rules for construction of diagonal-norm diagonal- SBP operators on triangles and tetrahedra using the open-source Julia code <cit.>, which employs the Levenberg-Marquardt algorithm (LMA) <cit.> to solve the nonlinear systems of equations that arise from the quadrature accuracy conditions. 
The code's capability is enhanced by enforcing a constraint to find positive weights and by combining it with a particle swarm optimization (PSO) <cit.> subroutine to mitigate issues related to initial guesses and convergence to suboptimal local minima. We extend the available set of quadrature rules for diagonal- operators up to degree twenty on triangles with both the LGL and LG facet node configurations, and up to degree ten for tetrahedra, achieving a significant reduction in the number of nodes relative to many of the existing rules. The new rules are used to construct novel diagonal- SBP operators, whose accuracy and timestep restrictions are studied numerically. The rest of the paper is organized as follows: <ref> describes the problem statement and symmetry groups on the reference triangle and tetrahedron, <ref> details the methodology employed, <ref> presents the derived quadrature rules along with a description of multidimensional SBP operators and their construction, and numerical results are presented in <ref> followed by conclusions in <ref>. § PRELIMINARIES Constructing quadrature rules over a domain of interest requires solving highly nonlinear systems of equations to find the nodal locations and weights. Usually, quadrature rules are designed to be exact for a desired degree of polynomial functions. The problem can be stated as: find x and w such that ∫_Ω P_j(x) dΩ = ∑_i=1^n_p w_i P_j(x_i), j∈{1,…,n_b}, where x = [x_1,…,x_d]^T, d is the spatial dimension, and n_p and n_b denote the number of quadrature points and polynomial basis functions, respectively. For a degree q_v accurate quadrature rule on a simplex, there are n_b = (q_v+d)!/(q_v! d!) polynomial basis functions. Rewriting the problem statement in matrix form, we have g ≡ V^T w - f = 0, where V is the Vandermonde matrix containing evaluations of the basis functions at each node along its columns and f=[∫_Ω P_1(x) dΩ,…,∫_Ω P_n_b(x) dΩ]^T. The basis functions used to construct the Vandermonde matrix affect its condition number. It is well-known that, for high-order polynomials, monomial basis functions result in an ill-conditioned V. In contrast, the orthonormal Proriol-Koornwinder-Dubiner (PKD) <cit.> basis functions offer better conditioning and a convenient f vector; all except the first entry of f are zero due to the orthogonality of the basis functions. For the purpose of constructing a degree p diagonal-norm diagonal- multidimensional SBP operator, we require that: * all quadrature points lie in the closure of the simplex, * all weights are positive, * the volume quadrature is at least degree q_v=2p-1 accurate, * a subset of the quadrature points lying on each facet form a positive-weight facet quadrature rule of at least degree q_f=2p, and * both the facet and volume quadrature rules are symmetric. The symmetry requirement ensures that a solution obtained using the SBP operators is not spatially biased within an element. Furthermore, the symmetry constraints reduce the number of unknowns in <ref> substantially <cit.>. The reference triangle and tetrahedron are defined, respectively, as Ω_tri ={(x,y)| x,y≥-1; x+y≤0}, Ω_tet ={(x,y,z)| x,y,z≥-1; x+y+z≤-1}. There are three symmetry groups on the triangle, and five on the tetrahedron <cit.>. However, we will identify symmetric nodes that lie on the facets of the simplices as being in separate symmetry groups.
On the triangle, the symmetry groups, in barycentric coordinates, are permutations ofS_1 = (1/3, 1/3, 1/3), S_21 = (α,α,1-2α), S_111= (α,β,1-α-β), S_vert = (1,0,0), S_mid-edge= (1/2,1/2,0), S_edge =(α,1-α,0),where α and β are parameters such that the quadrature points lie in the closure of the domain. The symmetry groups in the first line of <ref> represent interior points, while those in the second line denote points on the facets. Similarly, the symmetry groups on the reference tetrahedron are permutations ofS_1 = (1/4,1/4,1/4,1/4), S_face-cent = (1/3, 1/3, 1/3, 0),S_31 = (α,α,α,1-3α), S_vert= (1,0,0,0),S_22 = (α,α,1-α,1-α), S_mid-edge= (1/2,1/2,0, 0),S_211 = (α,α,β,1-2α-β), S_face-21= (α,α,1-2α, 0),S_1111 = (α,β,γ,1-α-β-γ), S_edge = (β,1-β, 0, 0),S_face-111 = (α,β,1-α-β, 0).The facet symmetry groups in the right columns of <ref> are special cases of the symmetry groups in the left column, which are used in this work to denote interior points exclusively. § METHODOLOGYThe Vandermonde matrix in <ref> is a function of the quadrature points, x; hence, the algorithm to solve the equation starts by guessing the nodal locations and weights. This is done indirectly by providing the type and number of symmetry groups and the values of the associated parameters and weights. Using the initial guess of the parameters, it is possible to compute the coordinates of the i-th node using the transformation,x_i = ^Tλ_k,where ∈(d+1)d contains the coordinates of the d+1 vertices in its rows and λ_k is the k-th permutation of the barycentric coordinates of the symmetry group that corresponds to the i-th node. The weight vector, w, is constructed by assigning equal weights to all nodes in the same symmetry group. To derive a degree q_v quadrature rule satisfying all the properties required to construct a degree p SBP diagonal- operator, we first need to find a facet quadrature rule of degree q_f≥ 2p. On the reference triangle, we use either the LGL rule with n_f = p+2 facet nodes (including the vertices) or the LG rule with n_f = p+1. In construction of the volume quadrature rule, we fix the facet quadrature points; hence, the parameters in the facet symmetry groups are kept constant, , we solve for the weights at all points and for the parameters associated with the interior symmetry groups. A similar strategy is followed to find the quadrature rules on the tetrahedron. While existing PI rules can be used as facet quadrature rules for the tetrahedron, they generally lead to more volume nodes than necessary. Hence, we first construct facet quadrature rules of degree 2p that would result in fewer volume quadrature points on the tetrahedron by placing some of the nodes at the vertices and/or edges of the facets. We note that, depending on their nodal locations, symmetry groups with the same number of nodes on the triangle can produce a different number of nodes on the tetrahedron when applied to its facets. For example, each of the S_vert and S_face-21 symmetry groups results in three nodes per facet, but four and twelve volume nodes, respectively. The number of volume nodes due to inclusion of the various facet symmetry groups of the tetrahedron is presented in <ref>. The root finding problem in <ref> can be written equivalently as a least-square minimization problem,min_τ1/2g^Tg,where τ = [λ,w]^T, and λ and ware vectors of all the parameters and weights associated with each symmetry group.The LMA is widely used to solve <ref> as it is less sensitive to initial guesses than Newton's method. 
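Before turning to the details of the LMA step, a small self-contained illustration of the node construction just described may help: each symmetry-group orbit is expanded into all distinct permutations of its barycentric coordinates, which are then mapped to Cartesian nodes through the vertex matrix. The code below is our own Python sketch (the paper's implementation is in Julia), and the parameter value used for the S_21 orbit is arbitrary.

```python
# A small illustration (not taken from the paper's Julia code): expanding barycentric
# symmetry orbits into Cartesian nodes on the reference triangle and checking the
# degree-1 exactness of the one-point centroid rule.  Function names are ours.
import numpy as np
from itertools import permutations

# Vertices of the reference triangle {(x, y) | x, y >= -1, x + y <= 0}
VERTS = np.array([[-1.0, -1.0],
                  [ 1.0, -1.0],
                  [-1.0,  1.0]])

def orbit(barycentric):
    """All distinct permutations of a barycentric point, mapped to Cartesian nodes."""
    perms = sorted(set(permutations(barycentric)))
    return np.array([np.array(lam) @ VERTS for lam in perms])

# S_1 (centroid) and an S_21 orbit with an arbitrary parameter alpha as examples
centroid = orbit((1/3, 1/3, 1/3))          # one node
s21_nodes = orbit((0.2, 0.2, 0.6))         # three nodes

# Degree-1 check of the centroid rule: w = |Omega_tri| = 2 at the centroid
w = np.array([2.0])
exact = {"1": 2.0, "x": -2.0/3.0, "y": -2.0/3.0}   # analytic moments on Omega_tri
approx = {"1": w.sum(),
          "x": float(w @ centroid[:, 0]),
          "y": float(w @ centroid[:, 1])}
for key in exact:
    assert abs(exact[key] - approx[key]) < 1e-14, key
print("centroid rule integrates degree-1 polynomials exactly; S_21 orbit nodes:\n",
      s21_nodes)
```

The same expansion applies on the tetrahedron, with the vertex matrix replaced by its 4×3 counterpart.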
The LMA computes the step direction, h, which is initialized as the zero vector, as h = -M̄^+ J̄^T g, where M = J^T J + ν diag(J^T J), and ν > 0 controls the scale of exploration. Note that the notation (·)^+ in <ref> denotes the Moore-Penrose pseudo-inverse, the overbar (·̄) denotes the extraction of the rows and columns of a vector or matrix that correspond to parameters of the interior symmetry groups and all weights, J ∈ ℝ^n_b × n_τ is the Jacobian matrix given by J_(i,j) = ∂ g_i/∂ τ_j, and n_τ is the sum of the number of parameters and weights. The Jacobian can also be written in terms of matrices as J = [∑_k=1^d V_x_k^T diag(w) ∂ x_(:,k)/∂λ, V^T ∂ w/∂ w], where V_x_k is the k-th direction derivative of V and x_(:,k) is the k-th direction component vector of x. The matrix ∂ x_(:,k)/∂λ is computed using the relation in <ref>, and ∂ w/∂ w is a matrix of zeros and ones. The value of ν is initially set to 1000, but it is gradually reduced or increased depending on the convergence of the objective function. The algorithm starts with an initial guess, τ^(0), and the value of τ at the n-th iteration is updated as τ^(n+1) = τ^(n) + η^(n) h^(n), where η^(n) = 1 is used unless a negative weight is encountered. If a negative weight is encountered at the i-th entry of τ^(n+1), then the update is recomputed using η^(n) = (ε - τ^(n)_i)/h^(n)_i, where ε > 0 is an arbitrary lower bound for the update of the negative weight, and is set to ε = 10^-4 in all of our cases. Despite being more robust than Newton's method, the LMA still suffers from bad initial guesses; especially as the number of parameters grows and the quadrature accuracy increases, it often stagnates at a suboptimal local minimum. To mitigate these issues, the LMA is coupled with a particle swarm optimization (PSO) algorithm. The PSO algorithm starts with an initialization of n_c particles, each with a random initial guess of τ. The objective function, g, given in <ref>, is computed for each particle and the personal best, τ_pb, and global best, τ_gb, approximations are tracked throughout the iterations. The PSO step size or velocity vector, v, which is initialized as the zero vector, is computed for each particle as v^(n+1) = b v^(n) + c_1 r_1∘(τ^(n)_pb - τ^(n)) + c_2 r_2∘(τ^(n)_gb - τ^(n)), where b=0.6 is the inertial weight parameter, c_1 = 1.5 is the cognitive parameter, c_2 = 1.5 is the social parameter, r_1 and r_2 are vectors of length equal to that of τ with uniform random entries on [0,1], and ∘ denotes an elementwise multiplication. The vector, τ, is updated at each iteration as τ^(n+1) = τ^(n) + v^(n+1). If a negative weight is encountered for any particle, it is simply replaced by a small positive number, e.g., 10^-4, and the update is recomputed. As the quadrature accuracy is increased, the algorithm sometimes stagnates at a local minimum and further exploration is hindered. If the same local minimum is obtained over a number of iterations, then τ is perturbed as (1-δ)τ + δ r, where r is a vector of length equal to the length of τ with uniform random entries on [0,1] and δ>0 is an arbitrary small number. The choice of the value of δ depends on the quadrature degree, as the sensitivity to perturbation increases with the quadrature degree. The PSO and LMA are coupled in such a way that the output vector of one is used as an initial vector of the other in a loop until convergence. The PSO mitigates issues related to initial guesses and convergence to suboptimal local minima, while the LMA offers fast convergence when good initial values are provided.
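A minimal Python transcription of the safeguarded LMA update described above is given below; it is a sketch for illustration only (the actual solver is the Julia code cited earlier), and the toy residual, the choice of ν, and the assumption that the weights occupy the trailing entries of τ are ours.

```python
# Hedged sketch of the safeguarded Levenberg-Marquardt update described above (our own
# minimal Python re-implementation, not the paper's Julia code).  `residual` and
# `jacobian` stand in for g(tau) and its Jacobian J; only the step logic is shown.
import numpy as np

def lma_step(tau, residual, jacobian, nu=1000.0, eps=1e-4, n_weights=None):
    """One LMA step with the negative-weight safeguard on the step length."""
    g = residual(tau)
    J = jacobian(tau)
    M = J.T @ J + nu * np.diag(np.diag(J.T @ J))
    h = -np.linalg.pinv(M) @ J.T @ g          # Moore-Penrose pseudo-inverse
    eta = 1.0
    new = tau + h
    # the weights are assumed to occupy the trailing n_weights entries of tau
    if n_weights:
        for i in range(len(tau) - n_weights, len(tau)):
            if new[i] <= 0.0 and h[i] != 0.0:
                eta = min(eta, (eps - tau[i]) / h[i])
    return tau + eta * h

# Toy usage: fit one parameter/weight pair to two "moment" equations
residual = lambda t: np.array([t[0] * t[1] - 1.0, t[1] - 2.0])
jacobian = lambda t: np.array([[t[1], t[0]], [0.0, 1.0]])
tau = np.array([0.3, 0.5])
for _ in range(50):
    tau = lma_step(tau, residual, jacobian, nu=1.0, n_weights=1)
print(tau)   # should approach [0.5, 2.0]
```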
Despite the efficiency of the coupled algorithm, at very high quadrature degrees, the minimization sometimes stagnates before convergence is realized. In such cases, the minimization is restarted, and in some instances, the parameters associated with the interior nodes are initialized using parameters of known PI rules.§ QUADRATURE RULES AND SBP OPERATORSAn SBP operator on a compact reference domain, Ω̂, with a piecewise smooth boundary, Γ̂, is defined as <cit.>: ∈n_pn_p is a degree p SBP operator in the i-direction approximating the first derivative x̂_i on the set of nodes S={x_j}_j=1^n_p if * [ p]_j = Px̂_i(x_j)for all P∈p * =^-1, whereis a symmetric positive definite matrix, and * + ^T=, where p^T q = ∫_Γ̂PQ n_x̂_iΓ,∀P,Q∈ℙ^r(Γ̂), where r ≥ p, n_x̂_i is the i-component of the outward pointing unit normal vector on Γ̂, and ℙ^q denotes a polynomial space of degree q. A diagonal-norm SBP operator has a diagonal $̋ matrix containing the weights of a volume quadrature rule of degree at least q_v = 2p-1 <cit.>. The existence of a sufficiently accurate positive-weight quadrature rule on Ω̂ is necessary and sufficient for the existence of a degree p first-derivative diagonal-norm SBP operator <cit.>. The boundary operator, , is also constructed using a degree 2p accurate positive-weight facet quadrature rule as <cit.> = ∑_γ∈Γ̂^T ,where is a diagonal matrix containing the i -component of the outward unit normal vector on facet γ, is an extrapolation operator from the volume to the facet nodes, and is a diagonal matrix containing the facet quadrature weights. If the volume and facet quadrature nodes are collocated, then simply picks out function values at the facet nodes, resulting in a diagonal matrix. The collocation of the facet and volume nodes reduces the cost of element coupling via simultaneous approximation terms (SATs), especially for entropy-stable discretizations. For further discussion on construction of multidimensional SBP operators, we refer the reader to <cit.>. Construction of SATs for SBP discretizations of various model equations in CFD can be found in <cit.>. Using the methodology outlined in the previous section, we have derived quadrature rules that satisfy conditions i. – v. stated in <ref>, and constructedSBP diagonal- operators on the reference triangle and tetrahedron. Quadrature rules with LGL and LG facet nodes of degree up to twenty on the triangle and up to ten on the tetrahedron are derived, which can be found in the supplementary data repository[<https://github.com/OptimalDesignLab/SummationByParts.jl/tree/master/quadrature_data>]. <ref> illustrates the nodal configurations of some of the quadrature rules in 2D and 3D. Many of the rules are novel to the authors' knowledge, and substantial improvements, in terms of number of quadrature points, have been achieved for several of the existing rules on the tetrahedron, as illustrated in <ref>. These improvements result in more efficient SBP diagonal- operators; for instance, new operators of degree 3and 4 are constructed with 44 and 76 nodes, respectively, instead of 69 and 99 nodes. The derived quadrature rules extend the available set of SBP diagonal- operators from degree 4 to 10 in 2D and from degree 4 to 5 in 3D. Unless specified otherwise, all the numerical studies in this work use SBP diagonal- operators with quadrature rules stated in the first columns of the different types of quadrature rules presented in <ref>, which also provides the minimum nodal spacing of the quadrature rules on the reference elements. 
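As a small aside before the nodal-spacing observations that follow, the minimum nodal spacing quoted for each rule can be computed with a few lines of code; the helper below is our own sketch, and the node set shown is a placeholder rather than one of the derived rules.

```python
# A small helper of the kind one might use to tabulate minimum nodal spacings (our own
# sketch; the node coordinates here are placeholders, not a rule from the supplementary
# data repository).
import numpy as np

def min_nodal_spacing(nodes):
    """Smallest pairwise Euclidean distance within a set of nodes (shape (n, d))."""
    diff = nodes[:, None, :] - nodes[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)            # ignore zero self-distances
    return dist.min()

# Placeholder node set on the reference triangle (vertices plus centroid)
nodes = np.array([[-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0/3.0, -1.0/3.0]])
print(min_nodal_spacing(nodes))   # ~0.943 for this toy set
```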
It is noted that all rules on the triangle with LGL facet nodes have larger minimum nodal spacing than those with LG facet nodes. Furthermore, the rules obtained on the tetrahedron have equal or larger minimum nodal spacing than existing rules.§ NUMERICAL RESULTSIn this section, the SBP diagonal- operators constructed using the proposed quadrature rules are applied to linear and nonlinear problems. First, a mesh is generated by partitioning the spatial domain,Ω, intomsquares or cubes in each direction and subdividing them into two triangles or six tetrahedra, respectively. The nodes on the physical elements are obtained by affinely mapping the nodes on the reference elements. The 2D and 3D meshes are refined for mesh convergence studies using m_k = 60 - 5p + (12-p)kand m_k = 10 + 5k number of edges in each direction, respectively, where k={1,2,…} denotes the refinement level. The number of edges are chosen to ensure that errors are sufficiently larger than machine precision, enabling calculations of convergence rates for the highest-degree operators.The standard fourth-order Runge-Kutta (RK4) scheme is applied to march the numerical solution in time. For the accuracy studies, sufficiently small timesteps are used such that the temporal errors are negligible compared to the spatial errors. As in <cit.>, the L^2 solution error in the domain is computed by interpolating the numerical solution from the SBP nodes to a quadrature rule of degree 3p + 1 , integrating the square of the solution error, summing the result over all elements, and taking the square root of the sum.§.§ Linear advection problemWe consider the linear advection equation,[U]t + ∑_i=1^dc_i[U]x̂_i = 0, on the periodic domain Ω=[0,1]^d . The problem is used to test the accuracy and timestep stability limits of the operators. The initial condition is obtained from the exact solution,U( x̂) =∏_i=1^dsin(ωπ (x̂_i-c _it)),where c=[5/4,√(7)/4]^T in 2D or c=[3/2,1/2,1/√(2)]^T in 3D is used in all cases. The values ofcare chosen to set the wave speed magnitude at√(d)but are otherwise chosen randomly. The direction of the wave propagation depends on c and affects numerical errors and mesh convergence rates in some cases.The advection equation, <ref>, is discretized using the diagonal- SBP operators and an upwind SAT, see, , <cit.> for the details of the SBP-SAT discretization.The problem is run up to t=1 with the ωparameters in <ref> set to 8 and 2 for the 2D and 3D cases, respectively. The solution errors and convergence rates are tabulated in <ref>, which shows convergence rates close to p+1 on the finest meshes.In addition to the accuracy of the operators, we are also interested in their maximum timestep limits for explicit time marching schemes. A large timestep limit is desired for stability-bounded problems, where the maximum stable timestep can be applied without compromising accuracy. The maximum timestep is computed for each operator using golden section optimization. For this study, the triangular and tetrahedral meshes are obtained by subdividing quadrilateral and hexahedral meshes with four elements in each direction. The discretization is considered to be stable if the change in energy is less than or equal to zero after five periods. 
The change in energy at a given timestep is computed as,Δ E = ∑_Ω_k∈T_h(u^T_k_̋ku_k - u_0,k^T_̋ku_0,k),where u_k , u_0,k, and _̋k are the solution vector at the specified timestep, the initial solution vector, and the norm matrix on element Ω_k , respectively, and T_h is a set containing all physical elements. <ref> presents the maximum timestep values for each SBP-SAT discretization. On the triangle, we have not made improvements on existing quadrature rules except in the case of the degree 16 rule with the LGL facet nodes; hence, comparisons of the maximum timesteps with previously existing operators are not presented. On the tetrahedron, the new quadrature rule for the degree 2 SBP diagonal-operator yields a smaller stable timestep than the existing rules, while the degree 3 and 4 operators with q_v=2p quadrature rules lead to slightly lower but comparable stable timesteps relative to the existing rules. The degree 3 and 4 diagonal- operators constructed with q_v=2p-1 quadrature rules have about 1.72 and 1.28 times larger maximum timesteps, respectively, than the existing degree 3 and 4 operators. This, combined with their lower node count, leads to substantial efficiency improvements for stability-bounded problems. §.§ Isentropic vortex problemThe isentropic vortex problem, governed by the Euler equations, is another common test case used to study the accuracy of high-order methods. We consider the 3D case on the periodic domainΩ= [-10,10]^3with the initial conditions <cit.> ρ=(1-2/25(γ-1)exp(1-(x_2-t)^2-x_1^2))^1/γ-1, e =ρ^γ/γ(γ-1)+ρ/2(u^2+v^2+w^2), u =-2/5(x_2-t)exp(1/2[1-(x_2-t)^2-x_1^2]), v =1+2/5x_1exp(1/2[1-(x_2-t)^2-x_1^2]), w =0,where ρis the density, e is the total energy per unit volume, u , v , and w are the velocities in thex_1 ,x_2 , and x_3 directions, respectively, andγ=7/5is the ratio of specific heats. We use the Hadamard-form entropy-stable discretization <cit.> on tetrahedral elements with the Ismail-Roe two-point fluxes <cit.>. Furthermore, the matrix-type interface dissipation operator of <cit.> is applied. The problem is run until t=1 , and a mesh convergence study is conducted. The L^2 solution errors and their rates of convergence are shown in <ref>. Convergence rates greater than p+0.5 are attained on the finest meshes for all operators constructed using the new quadrature rules.§ CONCLUSIONSSeveral novel quadrature rules that are applicable for construction of diagonal-norm diagonal- SBP operators on triangles and tetrahedra are derived. The quadrature rules are obtained by coupling the LMA and PSO methods to solve the nonlinear systems of equations arising from the quadrature accuracy conditions. The LMA provides fast convergence when a good initial condition is provided, while the PSO enables efficient exploration of the design space while also mitigating stagnation issues at suboptimal local minima. Quadrature rules of degrees one through twenty on triangles with both the LGL and LG type facet nodes are presented. For tetrahedra, quadrature rules of degree one through ten are reported, which, in most cases, have substantially fewer nodes than previously known rules for SBP diagonal- operators. The newly derived quadrature rules lead to SBP diagonal- operators with enhanced efficiency relative to many of the existing operators of the same degree. 
They also extend the available set of SBP diagonal- operators from degree 4 to 10 in 2D and from degree 4 to 5 in 3D.The diagonal-norm diagonal- multidimensional SBP operators are applied to solve the linear advection and isentropic vortex problems on periodic domains. Mesh refinement studies for the problems show that convergence rates on the finest meshes are greater than p+0.5 in all cases.We have found that the quadrature points with the LGL facet nodes on triangles have larger minimum nodal spacing than those with the LG facet nodes. For tetrahedra, the rules constructed in this work provide equal or larger minimum nodal spacing than those found in the literature. We have also investigated the maximum timestep values for stability, and found that most of the new rules offer comparable or larger stable timesteps relative to previously reported rules.§ DECLARATIONS§.§ Conflicts of InterestThe authors have no conflicts of interest to declare.§.§ Data AvailabilityAll the quadrature rules reported in this work can be found in the supplementary data repository at <https://github.com/OptimalDesignLab/SummationByParts.jl/tree/master/quadrature_data>. The software implementation used to obtain the quadrature rules is publicly accessible at <https://github.com/OptimalDesignLab/SummationByParts.jl>. §.§ AcknowledgmentsThe authors would like to thank Professor Masayuki Yano and his Aerospace Computational Engineering Lab at the University of Toronto for the use of their software, the Automated PDE Solver (APS).The first author would also like to thank André Marchildon for the discussions on placement of quadrature points on tetrahedral elements. Computation were performed on the Niagara supercomputer at the SciNet HPC Consortium <cit.>. SciNet is funded by: the Canada Foundation for Innovation; the Government of Ontario; Ontario Research Fund - Research Excellence; and the University of Toronto. spmpscitocsection | http://arxiv.org/abs/2311.15576v1 | {
"authors": [
"Zelalem Arega Worku",
"Jason E. Hicken",
"David W. Zingg"
],
"categories": [
"math.NA",
"cs.NA",
"65M06, 65M12, 65N06, 65N12"
],
"primary_category": "math.NA",
"published": "20231127070429",
"title": "Quadrature Rules on Triangles and Tetrahedra for Multidimensional Summation-By-Parts Operators"
} |
http://arxiv.org/abs/2311.15739v1 | {
"authors": [
"Jesse Daas",
"Cristobal Laporte",
"Frank Saueressig"
],
"categories": [
"gr-qc",
"hep-th"
],
"primary_category": "gr-qc",
"published": "20231127114417",
"title": "On the impact of perturbative counterterms on black holes"
} |
C>XDépartement de Physique Nucléaire et Corpusculaire, Université de Genève, Genève; SwitzerlandSearching for New Physics in Hadronic Final States with Run 2 Proton–Proton Collision Data at the LHC Steven Schramm January 14, 2024 ===================================================================================================== The symmetries of the Standard Model give rise to the forces that act on particles, and the corresponding force mediators. While the Standard Model is an excellent description of particle interactions, it has known limitations; it is therefore important to search for new physics beyond the Standard Model, potentially indicating as-of-yet unknown symmetries of nature. The ATLAS and CMS collaborations have detailed physics programmes, involving a large number of searches for new physics in hadronic final states. As the start of Run 3 of the LHC is imminent, now is a good time to review the progress made and the status of hadronic searches during Run 2 at a centre-of-mass collision energy of √(s)=13 TeV. This review provides an overview of the motivations and challenges of hadronic final states at the LHC, followed by an introduction to jet reconstruction, calibration, and tagging. Three classes of searches for new physics in hadronic final states are discussed: di-jet searches, searches for missing transverse momentum in association with another object, and searches for hadronic di-boson resonances. The complementarity of these different analysis strategies is discussed, emphasising the importance of a varied hadronic physics programme in the search for new physics.§ INTRODUCTIONThe search for phenomena that are not described by the Standard Model of particle physics, often referred to as the search for physics beyond the Standard Model, is of fundamental importance to modern physics. The Standard Model describes the basic constituents of matter, and their interactions, where the interactions arise from symmetries in nature. While the Standard Model has stood up to a plethora of tests so far, it also has limitations, and it must break down at some higher energy scale at or before the Planck scale; probing higher and higher energy scales and looking for deviations from Standard Model expectations, potentially representing the existence of new fundamental symmetries, is thus one of the key methods to search for new physics.High-energy particle interactions are statistical rather than deterministic in nature; thus, it is important to be able to gather a large amount of data when searching for new elusive phenomena. The LHC <cit.> at CERN is instrumental to this approach: it is the highest-energy particle accelerator in the world, providing proton–proton collisions at a centre-of-mass energy of 13, and has delivered a huge dataset of roughly 150 to both the ATLAS <cit.> and CMS <cit.> Experiments during the Run 2 (2015–2018) data-taking period. As the LHC is preparing to begin Run 3 in 2022, now is an important time to review the status of searches for physics beyond the Standard Model.Searches for new physics represent a major part of both the ATLAS and CMS physics programmes, where each collaboration consists of thousands of scientific authors, pursuing a large variety of different possible types of new physics. A single review is insufficient to accurately represent the diversity of work that has been conducted by ATLAS and CMS, and thus this review will focus on signature-driven searches for new physics in hadronic final states. 
This includes both classical hadronic signatures, where the final state involves individual light quarks and/or gluons, as well as more recently studied hadronic signatures, where the final state includes collimated hadronic decays of massive particles, such as W/Z bosons or top quarks. This review is complementary to others covering more specific types of searches for new physics: searches with third-generation quarks <cit.>, extended Higgs sectors <cit.>, or di-Higgs final states <cit.>. Analyses involving hadronic final states, but which are closely related to those other topics, are thus covered in the other review of relevance.This review begins by summarizing the motivations and challenges of hadronic searches at the LHC in Section <ref>, and discussing how hadronic final states are reconstructed as different types of jets by the ATLAS and CMS experiments in Section <ref>. This background is then applied to three classes of signature-based hadronic final state searches: di-jet searches in Section <ref>, missing transverse momentum searches in Section <ref>, and hadronic di-boson searches in Section <ref>. The complementarity of these different final states is discussed in Section <ref>, before the review concludes in Section <ref>.§ MOTIVATIONS AND CHALLENGES FOR HADRONIC SEARCHES AT THE LHC Hadronic signatures can be more challenging than the equivalent lepton- or photon-based searches, especially at the LHC. A clear example of this is the discovery of the Higgs boson in 2012: this historic observation of a new particle by ATLAS <cit.> and CMS <cit.> was significantly driven by the leptonic and photon decay modes of the Higgs boson in both experiments, with only minor contributions to the discovery sensitivity from the hadronic decay modes. With this in mind, it is worth discussing both where hadronic final states can be powerful tools in the search for new physics at the LHC, as well as the challenges that must be overcome in such searches. §.§ MotivationsOne of the most striking motivations for the use of hadronic final states in the search for new physics at the LHC relates to limiting the assumptions that must be made when searching for phenomena beyond the Standard Model. In proton–proton collisions, such as those delivered by the LHC, the overwhelming majority of the high-energy collisions will occur as the result of quark–quark, quark–gluon, or gluon–gluon interactions. If a new particle is produced at tree-level by such interactions, then it can also decay via the same couplings to quarks and/or gluons, unless there are kinematic constraints. In this way, some types of searches in hadronic final states can avoid making any additional assumptions about the couplings the new particle may or may not have to the other Standard Model particles. This motivation is particularly relevant to di-jet searches, which will be discussed in Section <ref>. Feynman diagrams demonstrating this motivation are provided in Figure <ref>.When searching for some types of particles beyond the Standard Model, it is important that the new particle is not produced in isolation, but rather it is produced together with an initial-state radiated (ISR) Standard Model particle. This alters the momentum balance of the event, which is the entire basis of the searches described in Section <ref>, and which also allows for circumventing experimental challenges faced by some of the searches discussed in Section <ref>. 
As the initial collision at the LHC occurs between quarks and/or gluons, the most probable source of ISR radiation is also a quark or gluon, due to the dominance of the strong coupling. In such cases, there is therefore a statistical advantage to hadronic final states, which can lead to hadronic signatures dominating the sensitivity to new physics when the statistical sensitivity is the primary limitation. Examples of how this can work for a possible new particle produced through quark–antiquark annihilation include the radiation of a gluon before the annihilation occurs, or one of the quarks in the annihilation originating from a gluon splitting process. Feynman diagrams demonstrating these specific examples are provided in Figure <ref>. Another motivation for hadronic signatures comes from the branching ratios of the decays of massive particles: the W, Z, and H bosons, as well as the top quark, all decay primarily to hadronic final states. As such, searches for new physics involving such massive particles can benefit from the larger branching ratios and thus increased statistical power of hadronic decays, if they can overcome the associated challenges. This is the primary motivation for the hadronic di-boson searches described in Section <ref>, although it is also relevant to some of the searches in Section <ref>. Some representative branching ratios motivating the statistical potential of using hadronic decays in such cases are provided in Table <ref>. Comparing the branching fractions in this table to the aforementioned example of the Higgs boson discovery, which was primarily driven by the low-statistics H→γγ, H→ZZ→ℓℓℓℓ, and H→WW→ℓνℓν channels, it is clear that the statistical power of hadronic final states needs to be carefully balanced against the associated challenges. §.§ Challenges In the search for new physics in hadronic final states at the LHC, one of the immediate challenges is the enormous background from Standard Model processes; the dominant visible scattering product from proton–proton collisions is hadronic physics. As shown in Figure <ref>, the dominant hadronic physics process (referred to as “Jets”, to be discussed more in Section <ref>) has a cross-section of roughly 10^6, while the inclusive Higgs boson production cross-section is more than four orders of magnitude lower. This huge difference in cross-section means that hadronic final state analyses must either measure the Standard Model background to extreme precision before searching for deviations, such as is done for the searches in Section <ref>, or find a way to suppress the Standard Model background while enhancing the beyond Standard Model signal of interest, as is the case for the searches described in Sections <ref> and <ref>.The enormous cross-section of Standard Model hadronic physics processes also imposes significant experimental constraints on the analysis of hadronic final states. In particular, it is not possible to record every hadronic physics process resulting from proton–proton collisions, as that would overwhelm the detector readout and data storage capabilities of the experiments. In order to mitigate this data volume constraint, the ATLAS and CMS experiments only record very-high-energy collisions involving hadronic final states, while the lower cross-sections of leptonic Standard Model processes allow for recording all leptonic collisions down to much lower energies. 
Table <ref> gives example hadronic and leptonic triggers used by ATLAS and CMS during Run 2, demonstrating the order of magnitude difference in the collision energy scale that is recorded. Searches for new physics in hadronic final states at the LHC are thus confined to the high energy regime, unless the analysis has a way to mitigate this trigger constraint, as is the case for some of the searches presented in Section <ref>.In order to maximize the potential to discover new physics at the LHC, it is important to have as large of a dataset as possible, therefore mitigating the probabilistic nature of both proton–proton collisions and the subsequent particle interactions. One way to increase the dataset size is to collide multiple pairs of protons simultaneously, as the probability to produce a very rare interaction is thus enhanced by the number of concurrent collisions; the LHC did this during Run 2, with an average of roughly 30 simultaneous collisions. In cases where a rare process occurs, this typically results in one rare high-energy process (usually called thecollision) and several other lower-energy processes (usually calledcollisions). Particles from thesecollisions can pollute theprocess, as the detector records all of the sources of energy, and it is left to reconstruction procedures to tell apart the different origins of each energy deposit in the detector. This is a challenge for all types of analyses, but it is particularly challenging for hadronic final states as the jets used to reconstruct hadronic interactions are large and thus more susceptible to randomly overlapping contributions fromcollisions. Furthermore, as shown in Figure <ref>, hadronic processes are the most common visible by-product of proton–proton collisions at ATLAS and CMS. It can therefore be difficult to differentiate between hadronic physics originating from theprocess, as opposed to thecollisions, which is a problem that many searches for new physics in hadronic final states must address. The searches presented in Sections <ref>–<ref> use a variety of different techniques to mitigate the impact ofon their respective sensitivities to new physics. § HADRONIC PHYSICS RECONSTRUCTION AND PERFORMANCEQuarks and gluons both carry a colour charge; colour confinement states that they are thus unable to exist in isolation. Instead, they fragment and hadronise to form collimated streams of colour-neutral particles, such as pions, kaons, and other hadrons. These collimated streams of particles are typically referred to as hadronic showers, especially when discussing their subsequent interactions with a particle detector.While quarks and gluons cannot exist in isolation, their properties can be inferred by gathering all of the produced particles in the corresponding hadronic shower and summing the resulting set of four-vectors. There is no single definitive way to do this, as various effects make it difficult to identify which particles or which detector energy deposits come from which originating quark and/or gluon. Procedures that define how to group the individual four-vectors of hadronic showers are referred to as jet algorithms, and the resulting summed four-vectors are referred to as jets. Jets are the backbone of hadronic physics analyses at the LHC, where both ATLAS and CMS predominantly use thejet algorithm <cit.>.The same jet algorithm can be applied to a variety of different types of input four-vectors, resulting in distinct sets of jets with different strengths and limitations. 
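For readers unfamiliar with the algorithm, the distance measures that drive anti-k_t clustering (the jet algorithm referred to above) are simple to write down; the following sketch is purely illustrative, with invented constituent kinematics, and real analyses use the FastJet implementation rather than code of this kind.

```python
# Illustrative sketch of the anti-kt distance measures; in a full clustering loop the
# smallest entry of d_ij (ignoring the i = j diagonal) or d_iB is processed at each step.
import numpy as np

def antikt_distances(pt, eta, phi, R=0.4):
    """Pairwise anti-kt distances d_ij and beam distances d_iB for a set of inputs."""
    dphi = np.abs(phi[:, None] - phi[None, :])
    dphi = np.minimum(dphi, 2 * np.pi - dphi)           # wrap the azimuthal angle
    dR2 = (eta[:, None] - eta[None, :]) ** 2 + dphi ** 2
    inv_pt2 = 1.0 / pt ** 2                              # anti-kt: soft particles weighted down
    d_ij = np.minimum(inv_pt2[:, None], inv_pt2[None, :]) * dR2 / R ** 2
    d_iB = inv_pt2                                       # distance to the beam
    return d_ij, d_iB

# Three toy constituents (pT in GeV): the two nearby hard ones would be combined first
pt = np.array([100.0, 80.0, 5.0])
eta = np.array([0.0, 0.1, 2.0])
phi = np.array([0.0, 0.05, 1.0])
d_ij, d_iB = antikt_distances(pt, eta, phi)
print(d_ij, d_iB, sep="\n")
```

Pairs with the smallest d_ij are merged first, and an object becomes a jet once its beam distance d_iB is the smallest remaining entry; because of the 1/p_T^2 weighting, clustering is seeded by the hardest constituents, which makes the resulting jets regular and cone-like.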
The algorithm can also be run with different algorithmic parameters: the radius or distance parameter R in particular is often manipulated to adapt the jet algorithm for different intended use cases. Standard jets in both ATLAS and CMS, often referred to asjets, currently use R=0.4; this has been found to provide robust performance for the reconstruction of hadronic showers originating from individual light quarks (up, down, strange, charm, and bottom; also called not-top quarks) and/or gluons.In some cases, it is advantageous to represent multiple hadronic showers as a single jet. This is often the case for analyses involving hadronic decays of massive particles, such as W/Z/H bosons or top quarks, if the parent particle has a high transverse momentum. While the decays of such particles are back-to-back in their own reference frame, the boost to the experimental frame leads to overlapping hadronic showers, which cannot be easily disentangled. It therefore makes sense to reconstruct the entire decay (including the subsequent parton shower and hadronisation processes) as a single jet, leading to the production of four-vectors that should correspond to the properties of the initial massive particles of interest rather than the daughter particles. The reconstruction of such boosted jets, often referred to asjets, currently differs between ATLAS and CMS. From the jet algorithm perspective, ATLAS currently uses R=1.0 in contrast to R=0.8 as used by CMS, but there are also other differences as will be discussed.§.§ Inputs to Jet ReconstructionATLAS and CMS are general-purpose particle physics detectors, and are built in layers; starting from the interaction point and moving radially outwards, they consist of tracking detectors, electromagnetic calorimeters, hadronic calorimeters, and muon systems <cit.>. Hadronic showers, as collections of particles, produce quite a complex signature in such detectors, and are primarily of relevance to the tracking detectors and both types of calorimeters. Hadronic showers can also occasionally reach the muon systems, such as from heavy flavour decays or calorimeter punch-through; these are not the focus of this section and will thus not be discussed further here.To first order, hadronic showers are comprised of equal fractions of the three different types of pions: π^+, π^0, and π^-. Charged and neutral pions interact very differently with the detector. Charged pions (π^±) live long enough to traverse the detector and are electrically charged; thus, they create tracks in the tracking detectors, and interact primarily through the strong force within both electromagnetic and hadronic calorimeters. In contrast, neutral pions (π^0) have a very short lifetime, decaying quickly to pairs of photons; as photons are neutral electromagnetic particles, they do not leave tracks in the tracking detectors, and typically deposit their energy within the electromagnetic calorimeters.§.§.§ Calorimeter-Based Inputs As hadronic showers contain both neutral and charged particles, but trackers only observe charged particles, the ATLAS and CMS calorimeters play a key role in jet reconstruction. To this end, both ATLAS and CMS have used calorimeter energy deposits as the inputs to jet reconstruction, with ATLAS building an object referred to as a topological cluster () <cit.> and CMS using geometrically projected calorimeter towers <cit.>. CMS has primarily moved on from such calorimeter-tower-based jet reconstruction, and thus they are not described further here. 
are reconstructed in ATLAS from topologically-adjacent calorimeter cells, using an algorithm consisting of four steps: seed-finding, expansion, boundary addition, and splitting. Seed-finding proceeds by identifying all cells in the calorimeter which have at least four times the amount of energy as expected from noise (|E/σ_E|>4); such cells form the starting points of individual clusters. These seeds are then expanded iteratively in all three dimensions, incorporating adjacent cells with at least two times the amount of energy expected from noise (|E/σ_E|>2). Once this iterative expansion is finished, a final layer of all adjacent cells are added to each , regardless of their energy value (|E/σ_E|>0). After forming this initial set of , a search for multiple local maxima within each cluster is performed, after which point a given cluster may be split if multiple maxima are identified. The resultingcan either be left as-is, or they can be further calibrated before being used as input to jet reconstruction. If they are left uncalibrated, they are referred to as electromagnetic (EM) , as they do not account for calorimeter non-compensation and thus correspond to the energy scale of an electromagnetic particle (electron or photon) in the calorimeter. They can alternatively be further calibrated using the Local Cell Weighting (LCW) procedure, which determines a probability that a givencorresponds to an electromagnetic or hadronic shower, and then applies a calibration weighted by that probability. During Run 2, ATLAS used both EM and LCWas inputs to jet reconstruction in different contexts, as will be described further. §.§.§ Particle Flow Inputs While the calorimeter is instrumental in the reconstruction of hadronic showers, the tracking detector provides complementary information, which can dramatically improve jet performance in certain contexts. Sampling calorimeters, such as are used by ATLAS and CMS, record only a fraction of the deposited shower momentum; they are therefore more sensitive to variations in shower development at low momentum, and better measure the shower properties at high momentum. In contrast, tracking detectors rely on measuring the curvature of charged particles traversing a magnetic field in order to evaluate their momentum; the curvature is proportional to the inverse of the transverse momentum, and thus this can be done more precisely at low momentum, as at high momentum the tracks become straight. It is therefore natural to consider combining the information from both types of detectors in order to maximally benefit across the full kinematic regime of interest; this procedure is typically referred to as particle flow.While the ideas behind particle flow are generally similar, the actual algorithms implementing particle flow are usually experiment-specific, as the optimal balance between the tracking detector and calorimeter is detector-dependent. ATLAS <cit.> and CMS <cit.> have both developed particle flow algorithms oriented around their detectors and experimental objectives, and have used these algorithms during Run 2 of the LHC.One of the core pieces of particle flow algorithms is a procedure to match tracks to calorimeter energy deposits, thereby avoiding double-counting of the energy from any given particle. This matching takes place by extrapolating charged-particle tracks from the tracking detector to the calorimeter, and comparing the momentum observed in the two detectors. 
If the track momentum and the matched calorimeter energy are consistent, the calorimeter signal is interpreted as corresponding to that one charged particle. If there is instead much more energy in the calorimeter than is expected for the track in question, then the calorimeter signal is interpreted as containing both charged and neutral components, as the tracker cannot see neutral particles. The expected energy of the track is then subtracted from the calorimeter signal, leaving two four-vectors: one corresponding to the charged-particle track, and the other corresponding to the remaining neutral calorimeter energy deposit. If there is no track matched to a given calorimeter signal, then the energy deposit is interpreted as originating from a neutral particle.

The output of particle flow algorithms is therefore a set of four-vectors corresponding to hybrid objects: sometimes they are charged-particle tracks, other times they are the original calorimeter signals, and they can also represent subtracted calorimeter signals. The resulting particle flow objects can then be used as the inputs to jet reconstruction, which comes with multiple advantages over calorimeter-only inputs. Beyond improving the low-momentum measurements of charged particles, the ATLAS and CMS tracking detectors also have the spatial resolution to link a given track to a specific proton–proton collision, thereby mitigating pileup-related effects for charged particles. As hadronic showers are composed of roughly 2/3 charged particles, the improved momentum measurements and pileup mitigation for individual particle flow objects can translate to a sizeable improvement in the precision of the final jet four-vector.

§.§ Standard Jet Reconstruction and Performance

The majority of Run 2 analyses in ATLAS and CMS focus on hadronic showers originating from not-top quarks and gluons, and thus small-R jets with R=0.4 are the most appropriate choice. While CMS made extensive use of particle flow inputs to jet reconstruction for all of Run 2, ATLAS started Run 2 using topo-cluster-based jet reconstruction, and switched to particle flow inputs closer to the end of Run 2; the ATLAS analyses that will be presented thus use a mixture of the two types of jets. After building jets from a given set of inputs, they must be calibrated to account for a variety of effects, including (but not limited to) calorimeter non-compensation and differences between data and simulation. In addition to calibrating jets, the impact of pileup on jet reconstruction and performance must be suppressed; this is critical both to minimizing jet-related uncertainties and to mitigating the effect of pileup contamination in searches for new physics (hadronic or otherwise). The jets may also be further evaluated to determine their consistency with the hypothesis of originating from a given type of particle, especially in the context of bottom and charm quarks.

§.§.§ Correcting the Jet Scale and Resolution

Jet calibrations are designed to correct the scale and resolution of jets, where the scale represents the mean of the energy or momentum response distribution and the resolution is defined as the width of the same distribution. The jet momentum scale and resolution have a direct impact on the ability to observe new physics: taking a di-jet resonance search as an example, the scale controls the location of the invariant mass peak corresponding to the resonant mass, while the resolution impacts the width of the peak. It is thus important to properly calibrate jets when searching for new physics in hadronic final states.
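As a toy illustration of what "scale" and "resolution" mean in this context, the sketch below bins truth-matched jets in truth pT and summarises the response distribution in each bin. Real calibrations use more robust estimators (e.g. fits to the core of the response), and all numbers below are invented.

```python
import numpy as np

def jet_scale_and_resolution(pt_reco, pt_true, pt_bins):
    """Schematic jet response summary: in bins of truth pT, take the scale as the mean
    of the response pT_reco/pT_true and the resolution as its relative width."""
    pt_reco, pt_true = np.asarray(pt_reco), np.asarray(pt_true)
    response = pt_reco / pt_true
    results = []
    for lo, hi in zip(pt_bins[:-1], pt_bins[1:]):
        in_bin = (pt_true >= lo) & (pt_true < hi)
        if not np.any(in_bin):
            continue
        scale = response[in_bin].mean()
        resolution = response[in_bin].std() / scale
        results.append({"pt_range": (lo, hi), "scale": scale, "resolution": resolution})
    return results

# Toy usage with made-up numbers: a calibration would then scale jets by 1/scale per bin.
rng = np.random.default_rng(0)
pt_true = rng.uniform(30, 500, 10000)
pt_reco = pt_true * rng.normal(0.95, 0.10, pt_true.size)   # assumed 95% response, 10% smearing
for row in jet_scale_and_resolution(pt_reco, pt_true, [30, 60, 120, 250, 500]):
    print(row)
```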
The full jet calibration chain is quite complex, although ATLAS <cit.> and CMS <cit.> have converged on the motivations behind and procedures for the majority of the corrections within the full calibration chain. The scale is first corrected using simulated samples, adjusting the mean of the reconstructed jet response to match the truth expectation, thereby correcting for the calorimeter response, which varies across the ATLAS and CMS detectors. This truth expectation is defined using truth jets, which are built by applying the anti-kt jet algorithm with R=0.4 to the set of detector-stable and detector-interacting truth particles generated by a given simulated event generator. The most significant simulation-based correction for ATLAS and CMS is shown in Figure <ref>; the correction factors are expected to be different, as they account for detector-specific effects. The usage of particle flow inputs reduces the magnitude of this correction: the full track momentum is observed, in contrast to sampling calorimeters observing only a fraction of the total deposited energy.

These simulation-based corrections are applied to both simulated events and data, under the assumption that the detector response is reasonably well modelled in simulation. While this is true to first order, it is important to subsequently correct the scale for differences in the jet response between data and simulation. This is done through the derivation of a set of in situ corrections, where a jet of interest is balanced against a well-defined reference object. Reference objects include Z bosons decaying to e^+e^-/μ^+μ^-, photons, the vector sum of a system of already-calibrated lower-momentum jets, or jets in different regions of the detector. As each of these techniques has different sensitivities and covers a different kinematic regime, the individual in situ correction factors are statistically combined to define the final calibration factor; this is then applied to data such that the average jet in data matches the average jet in simulation, and thus also the truth expectation. The resulting data-to-simulation correction factors are shown in Figure <ref>, where a similar shift is seen for both ATLAS and CMS, suggesting that the difference between jets in data and simulated events is to first order common across the experiments.

Uncertainties on the scale of jets in data with respect to simulation are evaluated by combining the aforementioned measurements with other possible effects not directly evaluated in situ. These include pileup uncertainties, which are dominant for the lowest momentum regime, and flavour (light-quark vs. gluon) uncertainties, which are the limiting effect for the intermediate momentum range. The total uncertainties are quite similar for ATLAS and CMS, as shown in Figure <ref>, peaking at roughly 5% at very low momentum and reaching a minimum a bit below 1% for transverse momenta above roughly 200 GeV.

In addition to the scale, it is also important to quantify the jet momentum resolution. ATLAS and CMS agree on a model to define the relative jet resolution <cit.>: N/p_T + S/√(p_T) + C. In this equation, N represents the electronic and pileup noise within the jet, S represents the stochastic nature of hadronic showers in calorimeters, C is a constant term defining the calorimeter's fundamental limitations, and p_T is the transverse momentum of the jet.
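A minimal sketch of how such a noise/stochastic/constant parameterisation might be fit to a set of resolution measurements is given below; the data points, uncertainties, and starting values are invented, and the experiments' actual fits are considerably more involved.

```python
import numpy as np
from scipy.optimize import curve_fit

def nsc_model(pt, N, S, C):
    """Noise/stochastic/constant parameterisation of the relative jet pT resolution."""
    return N / pt + S / np.sqrt(pt) + C

# Toy "measurements": relative resolution vs jet pT in GeV (numbers are illustrative only)
pt_points = np.array([30., 60., 120., 250., 500., 1000.])
resolution = np.array([0.22, 0.15, 0.11, 0.085, 0.07, 0.06])
uncertainty = np.array([0.02, 0.012, 0.008, 0.006, 0.005, 0.005])

popt, pcov = curve_fit(nsc_model, pt_points, resolution, sigma=uncertainty,
                       p0=[3.0, 0.8, 0.05], absolute_sigma=True)
N_fit, S_fit, C_fit = popt
print(f"N = {N_fit:.2f} GeV, S = {S_fit:.2f} sqrt(GeV), C = {C_fit:.3f}")
```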
This functional form is used in fits to measurements of the jet resolution in order to extract a smooth trend, where the individual measurements are derived following in situ methods, including the balance of jets in events containing exactly two jets. ATLAS also considers the balance of randomly-defined cones as a means of constraining the noise term, while CMS evaluates the balance of a probe jet against a reference Z boson decaying to e^+e^-/μ^+μ^-. The resulting resolution measurements are shown in Figure <ref>, where ATLAS and CMS present their results slightly differently, but in the end they see similar trends.

§.§.§ Mitigating Pileup Effects

Pileup collisions impact jet reconstruction and performance in multiple ways, and the suppression of such effects is of key importance for analyses involving hadronic final states. As already mentioned, pileup can degrade the resolution of hard-scatter jets of interest, as particles from other collisions may happen to overlap with the jet of interest, thereby impacting the subsequent measurement of that jet. Particle flow already helps to mitigate such effects by linking charged energy contributions to a given vertex, a technique known as Charged Hadron Subtraction (CHS), but neutral particles from pileup collisions escape such constraints. In addition to contaminating jets from the hard-scatter collision, pileup can also produce jets entirely separate from the hard-scatter collision, which must be removed from the event to properly quantify the event's properties; this is especially important when selecting events based on the number of jets in the event or the balance of the hadronic activity in the event. As both of these selections are employed in hadronic searches for new physics, it is thus important for such searches to mitigate pileup-related effects; we will now discuss some ways in which this can be done.

ATLAS and CMS have both developed pileup mitigation strategies beyond CHS, and the two strategies take very different directions. ATLAS uses a jet-based discriminant, known as the Jet Vertex Tagger (JVT), in order to reject pileup jets while retaining hard-scatter jets with high efficiency <cit.>. This approach works very well for suppressing entire pileup jets, and can be used for jets built from either electromagnetic-scale topo-clusters or particle flow objects as inputs, as shown in Figure <ref>a.

While such a jet-based discriminant can efficiently reject jets originating from pileup vertices, it does not help with removing neutral pileup contributions within hard-scatter jets. In order to improve on this, CMS has also developed an algorithm for PileUp Per Particle Identification (PUPPI) <cit.>. This algorithm evaluates the consistency of each individual four-vector used in the jet reconstruction process with the hypothesis of originating from the hard-scatter collision, as opposed to originating from a pileup collision. Only the four-vectors that appear to originate from the hard-scatter collision are retained, thereby suppressing both pileup contributions to hard-scatter jets as well as the creation of additional pileup-originating jets. The PUPPI algorithm is seen to work very well, and even outperforms the CMS jet-based pileup selection algorithm, as shown in Figure <ref>b.

§.§.§ Identifying Heavy Flavour Jets

As previously mentioned, small-R jets are primarily used in the context of interpreting hadronic showers from not-top quarks and gluons; so far, we have not further differentiated between the possible sources. The heavier quarks, namely charm and bottom, form hadrons with sufficiently long lifetimes for those hadrons to travel a non-negligible distance before decaying.
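To give a sense of scale for this displacement, the toy calculation below estimates the mean flight distance of a b hadron using L = βγcτ; the lifetime and mass values are approximate PDG-level numbers, and the momentum choices are purely illustrative.

```python
# Rough flight-distance estimate for a b hadron inside a jet.
C_TAU_MM = 0.45      # approximate proper decay length of a typical B hadron [mm]
MASS_GEV = 5.3       # approximate B-hadron mass [GeV]

def mean_flight_distance_mm(p_gev, c_tau_mm=C_TAU_MM, mass_gev=MASS_GEV):
    """Mean lab-frame decay distance: L = beta*gamma * c*tau, with beta*gamma = p/m."""
    return (p_gev / mass_gev) * c_tau_mm

for p in (20, 50, 100):
    print(f"p = {p:4d} GeV  ->  <L> ~ {mean_flight_distance_mm(p):.1f} mm")
# A few millimetres is resolvable by the ATLAS/CMS tracking detectors, which is
# what makes displaced-vertex-based flavour tagging possible.
```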
This has an important experimental implication: it is possible to observe this displacement, and thus to differentiate jets involving bottom quarks, and to a lesser extent charm quarks, from those originating from lighter quarks or gluons. This experimental capability is very useful in the context of both searches for new physics and measurements of the Standard Model.

The most straightforward experimental signature for the presence of a heavy flavour quark is the observation of a displaced vertex in the tracking detector, where displaced vertices are points from which charged-particle tracks originate, but which are spatially inconsistent with being a proton–proton collision from the crossing of the LHC proton beams. The observation of a displaced vertex is a strong indication of the presence of a particle with a long lifetime, such as a heavy flavour hadron. Such a displaced vertex can also come from other sources, such as tau leptons, but the ATLAS and CMS detectors provide sufficient experimental information to differentiate between displaced vertices from heavy flavour decays and those from other sources. Displaced vertices are the most striking way in which jets involving heavy flavour can be identified, but they are not the only way: ATLAS <cit.> and CMS <cit.> have both designed and used a variety of increasingly complex algorithms, exploiting a variety of experimental features, in order to identify jets consistent with originating from heavy flavour decays. This process is often referred to as flavour tagging, and a jet which passes the selection is said to be a b-tagged (or c-tagged) jet.

The flavour-tagging community has a history of developing algorithms employing modern machine learning tools in order to obtain the maximum possible flavour-tagging performance. The exact algorithms used for flavour tagging have changed quite a bit during Run 2, and the ability to differentiate heavy-flavour jets from background jets has continued to improve. It is not possible to discuss all of the algorithms used; rather, Figure <ref> shows comparisons performed by both collaborations of the performance of their respective flavour-tagging algorithms, evaluated using partial Run 2 datasets. Figure <ref>b in particular shows the b-tagging performance at CMS as evaluated with respect to backgrounds of both light-quark jets and c-quark jets, demonstrating that b-tagging is more efficient at rejecting light-quark jets than charm jets, due to bottom and charm decays being more similar to each other. Both collaborations find that the best performance is obtained using modern deep learning tools (DL1 in ATLAS, DeepCSV in CMS), although the performance of another type of machine learning classifier, boosted decision trees, is only slightly degraded in the case of b-tagging (MV2 in ATLAS, cMVAv2 in CMS).

The variety of tagging algorithms developed by ATLAS and CMS are optimised based on comparisons of different simulated samples, and it is possible that the features learned by the machine learning tools are not well-modelled. It is thus of great importance to also study the behaviour of the final taggers in both data and simulation, to extract scale factors associated with any potential differences, and to define an uncertainty on the extent to which simulation matches data. ATLAS and CMS both have multiple methods for extracting such scale factors <cit.>; representative scale factors for b-tagging using deep learning classifiers are shown in Figure <ref>.
While the scale factors do differ from unity, indicating that simulation does not perfectly model the data, such deviations are generally quite small, and thus the tagger performance is reasonably well modelled.

§.§ Boosted Jet Reconstruction and Performance

The very high energy scale of the LHC is sufficient to produce massive particles, such as W/Z/H bosons and top quarks, with a substantial momentum in the experimental reference frame. If this momentum is much larger than the mass of the parent particle, this implies a large Lorentz boost between the massive particle decay frame and the experimental frame; thus, the decay products are collimated in the detector. For a two-body decay of a massive particle, the angular separation Δ R between the two daughter particles d_1 and d_2 follows the formula:

Δ R_d_1d_2 ≈ 1/√(z(1-z)) · m^p/p_T^p ,   Δ R_d_1d_2 ≳ 2m^p/p_T^p ,

where m^p and p_T^p are the mass and transverse momentum of the parent particle p, and z is the momentum split between the two decay particles. Assuming an even momentum split of z=0.5 leads to the simplified version of the equation, which is a lower bound on the angular separation, but which is also often used as an approximation for the typical angular separation of a two-body decay.

As an example, consider the decay of a W boson to a pair of light quarks. If we require the two quarks to be fully distinguishable for an R=0.4 jet algorithm, then the separation between the two quarks should be at least Δ R=0.8. Using the known mass of the W boson of 80.379 GeV <cit.>, we can invert the equation to determine that this angular separation occurs when p_T^W ≈ 200 GeV. The decays of higher-p_T W bosons will thus start to overlap for an R=0.4 jet algorithm, signalling the start of the boosted regime: rather than dealing with the increasingly challenging task of reconstructing the two daughter particles as separate jets, it makes sense to instead reconstruct the entire massive particle decay as a single large-R jet.

This transition from reconstructing massive particles using their daughter particles to reconstructing the entire decay has many implications, and is a powerful way to mitigate the challenges associated with searches for new physics in hadronic final states. As the large-R jet contains the entire decay, the jet mass should now correspond to the parent particle mass. Additionally, the angular energy distribution of the jet should be consistent with the hypothesis of multiple regions of high energy density corresponding to the multiple daughter particles, as opposed to the single energy-dense region expected from not-top quarks and gluons. These properties allow for the suppression of the otherwise overwhelming Standard Model hadronic physics background, supporting searches for new rare physics; such techniques were first widely discussed in the context of recovering the bb̅ final state as a promising channel in the search for the Higgs boson <cit.>. The use of such techniques, where entire hadronic decays are reconstructed as a single jet and the internal properties of the jet are exploited, forms the basis of what is called jet substructure.

The use of jet substructure has grown dramatically since the LHC started taking data, and jet substructure techniques have become a key component of the ATLAS and CMS physics programmes, especially in the context of searches for new physics. While reconstructing jets using the anti-kt algorithm with a larger value of the radius parameter is a good start, more advanced strategies must be employed to really benefit from modern jet substructure techniques.
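To make the onset of the boosted regime discussed above concrete, the small calculation below applies the approximate Δ R ≳ 2m/p_T relation to a few massive particles; the masses are approximate PDG values, and for the three-body top-quark decay the estimate is only a rough guide.

```python
MASSES_GEV = {"W": 80.4, "Z": 91.2, "H": 125.0, "top": 172.5}

def pt_boosted_threshold(mass_gev, delta_r=0.8):
    """Parent pT above which the typical two-body opening angle drops below delta_r."""
    return 2.0 * mass_gev / delta_r

for name, mass in MASSES_GEV.items():
    # Below DeltaR ~ 0.8 the two daughters can no longer be resolved as separate
    # R=0.4 jets, so a single large-R jet becomes the natural representation.
    print(f"{name:>3}: boosted regime starts around pT ~ {pt_boosted_threshold(mass):.0f} GeV")
```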
Following the reconstruction of large-R jets, they must be calibrated; the momentum must be corrected for similar reasons as for small-R jets, but now the mass of the jet also has an important meaning and must be calibrated for analysis usage. Searches for new physics using large-R jets usually only want to study hadronic decays of massive particles, but large-R jets by default also include the huge Standard Model background from not-top quarks and gluons; it is thus also important to suppress these large backgrounds and to quantify the amount of background remaining.

§.§.§ Boosted Jet Reconstruction

Just like small-R jets, the reconstruction of large-R jets also starts by running the anti-kt algorithm over a given set of inputs. However, that is not sufficient: large-R jets, as the name implies, cover a large area of the detector. This makes them particularly susceptible to both underlying event and pileup effects, which must be removed in order to see the hadronic decay of interest. This is typically done by processing all of the inputs that were grouped into a given jet by the anti-kt algorithm, and deciding which of the inputs to keep or remove based on some criterion aimed at only retaining four-vectors consistent with a high momentum hadronic decay of a massive particle. This procedure of further processing the inputs used in each jet, after reconstructing the original jet, is known as jet grooming; one such algorithm was used already in the aforementioned pre-LHC studies of boosted H→bb̅ decays <cit.>.

There are now a large number of different jet grooming algorithms, which typically have a few different parameters that can be adjusted to optimize how aggressive the grooming procedure is. The optimal algorithm and configuration thereof depends on many factors, and ATLAS and CMS have settled on different choices for the majority of their Run 2 searches for new physics. In the context of a search for new physics, an ideal jet grooming algorithm will remove the underlying event and other sources of soft radiation while keeping the entire hadronic decay of interest, and will remove pileup effects to the extent that the dependence of jet substructure variables on pileup is negligible. An example from ATLAS showing such positive results from the application of one type of grooming, known as trimming <cit.>, is shown in Figure <ref>. By grooming the jets, the Z boson mass is recovered, background jets from not-top quarks and gluons have their masses suppressed (thereby enhancing the ability to identify Z bosons), and the pileup dependence is removed. Other properly optimized grooming algorithms will provide similar benefits.

The large majority of the relevant ATLAS searches in Run 2 have used calibrated LCW topo-clusters as inputs, although there are a small number of analyses that have used a particle flow input type that was designed specifically for high momentum large-R jet reconstruction, named Track-CaloClusters (TCCs) <cit.>. In both cases, these inputs are used to reconstruct anti-kt R=1.0 jets, which are subsequently trimmed <cit.> using a kt sub-jet radius of R_sub=0.2 and a p_T fraction cut of f_cut=5%. For jets built using topo-clusters, the mass of the jet is then further refined through a combination of the calorimeter jet mass with tracking information in a form of minimal particle flow, which is referred to as the combined mass <cit.>.
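The following is a highly simplified sketch of the trimming idea using the quoted R_sub and f_cut values; it assumes the kt R_sub=0.2 subjets have already been obtained (in practice via a reclustering tool such as FastJet), and all momenta are invented.

```python
def trim_jet(subjets_pt, fcut=0.05):
    """Schematic trimming step: given the pT values of the kt R_sub=0.2 subjets obtained
    by reclustering the constituents of a large-R jet, keep only subjets carrying at
    least fcut of the original jet pT and rebuild the jet from what survives."""
    jet_pt = sum(subjets_pt)                      # approximates the ungroomed jet pT
    kept = [pt for pt in subjets_pt if pt >= fcut * jet_pt]
    return kept, sum(kept)

# Illustrative example: a hard two-prong decay plus soft pileup/underlying-event subjets
subjets = [310.0, 265.0, 18.0, 9.0, 4.0]          # GeV, made-up numbers
kept, trimmed_pt = trim_jet(subjets, fcut=0.05)
print(kept, trimmed_pt)                           # subjets below 5% of the jet pT are discarded
```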
The subsequent ATLAS plots will focus on jets built using calibrated topo-cluster inputs and using the combined mass, as that was the most commonly used strategy in the relevant searches.

Most of the CMS searches that will be presented have used particle flow inputs to the anti-kt algorithm with R=0.8, although in some cases Cambridge–Aachen (CA) jets <cit.> with R=1.5 have been used instead, especially for searches where top quarks are at a lower momentum and thus a larger radius is needed to contain the full decay. Most of the Run 2 results are then groomed using the Soft Drop algorithm <cit.> with a soft threshold cut z_cut=0.1 and an angular exponent β=0. In addition to grooming, it is possible to apply further selections on the jet inputs; CMS does so by applying the same PUPPI algorithm discussed in the context of small-R jets to the reconstruction of large-R jets. The subsequent CMS plots will focus on jets built using the anti-kt algorithm with R=0.8 and with Soft Drop grooming plus PUPPI applied, as that was the most common large-R jet strategy employed by the relevant searches.

Plots showing the mass in simulated samples, for a variety of different signal types of interest and the primary background to mitigate, are provided in Figure <ref>. These plots show the most commonly used mass definition and jet reconstruction strategy for both ATLAS and CMS. The mass already provides a strong first means of differentiating between jets from hadronic decays of massive particles as opposed to background sources, although much more can be done, as will be discussed soon. It is first important to discuss how large-R jets, and especially their mass, can be calibrated.

§.§.§ Correcting the Jet Scale and Resolution

Just like for small-R jets, it is important to correct the scale and resolution of large-R jets. The momentum scale and resolution still have a direct impact on searches for new physics in hadronic final states, including being responsible for the peak position and width in hadronic resonance searches. However, for large-R jets, it is also very important to correct the mass scale and resolution: this impacts some searches directly, where the large-R jet is itself expected to contain a new physics resonance, but it is also key to the concept of identifying large-R jets consistent with a given signal interpretation (W/Z/H boson or top quark) and rejecting jets from background sources.

The procedure for both momentum and mass calibrations starts by comparing a given reconstructed jet against its truth jet reference, but in this context the truth jet must also have the same grooming algorithm applied, to remove underlying event contributions in the same way and thus represent the same type of shower in the detector. The corrections are derived sequentially, as the momentum calibration scales the full four-vector (including the mass), and the mass is then further corrected <cit.>. The resulting momentum and mass calibrations, correcting for the ATLAS detector response, are provided in Figure <ref>. In order to properly correct the response, the mass calibration is actually dependent on the jet mass; a plot corresponding to the correction for a jet with the W boson mass is shown.

After applying a simulation-based correction, it is once again necessary to evaluate possible differences between data and simulation. In ATLAS, this is done for the momentum scale following the same strategy as for small-R jets, where the p_T balance of a large-R jet against a well-known reference is used <cit.>. The resulting momentum calibration and uncertainties, including flavour- and topology-related effects, are shown in Figure <ref>.
The jet momentum resolution is also evaluated in a similar way, but with only the method using pairs of jets, as large-R jets are of most relevance at higher momentum, where the approach using two jets is very precise <cit.>; plots for the large-R jet momentum resolution are not shown here for brevity.

Correcting for possible differences in the jet mass between data and simulation requires a new approach with respect to the momentum calibrations, as the jet mass is not an event-conserved quantity, unlike the total transverse momentum. It is thus important to identify a high-purity selection of signal jets, where the mass distribution should be the same, and to correct for any observed differences. Both ATLAS and CMS do this using tt̅ events, where the W boson from one of the top quarks decays leptonically, and the W boson from the other top quark decays hadronically. These semi-leptonic tt̅ events can be selected with high purity, and provide access to signal jets with either the W boson mass or the top quark mass, depending on whether the b-quark from the hadronically decaying top quark is inside or outside of the large-R jet under study. These high-purity selections allow for a direct extraction of the mass scale and resolution by fitting the signal component in both data and simulation, and any subsequent differences can then be corrected for. Examples of this procedure are shown in Figure <ref> for top quarks in ATLAS and W bosons in CMS.

§.§.§ Identifying Boosted Hadronic Decays

With a well-calibrated jet mass, it is already possible to start to differentiate large-R jets containing boosted hadronic decays of massive particles from large-R jets containing not-top quarks and gluons. However, the background jet mass distribution has a long tail, as seen in Figure <ref>, and the cross-section for such background events is dramatically higher than for signals such as W/Z/H bosons and top quarks. As such, it makes sense to design more complex algorithms to differentiate between signal and background jets; these algorithms are commonly referred to as jet taggers.

The idea of such taggers relies primarily upon the different angular energy structure of signal and background processes within large-R jets. For example, a background jet originating from a not-top quark or gluon will usually have a single region of high energy density, signal jets originating from W/Z/H bosons will typically have two regions of high energy density, and signal jets originating from top quarks will typically have three regions of high energy density. One way that this can be quantified is to impose a given number of axes on the jet, and evaluate the consistency of the jet constituent four-vectors with that number of axes. This forms the basic idea of one of the early jet substructure variables, named N-subjettiness <cit.>, which is still used today.

In the nomenclature of N-subjettiness, a measure of the consistency of a jet with the interpretation of having one axis is τ_1, two axes is τ_2, and so on. Ratios of N-subjettiness variables, τ_XY = τ_X/τ_Y, then provide separation between jets with X or more axes as opposed to Y or fewer axes. This can then be used to differentiate between signal and background jets, as shown in Figure <ref>, where one of the plots shows the separation after having already applied a mass cut, therefore demonstrating that N-subjettiness ratios provide complementary information to the jet mass.
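The sketch below gives a minimal implementation of the τ_N definition for a fixed set of candidate axes, together with a toy two-prong jet; real implementations also optimise the axis choice, and all constituent and axis values here are invented.

```python
import math

def delta_r(a, b):
    """DeltaR between two (pt, eta, phi) tuples."""
    dphi = (a[2] - b[2] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(a[1] - b[1], dphi)

def tau_n(constituents, axes, r0=1.0):
    """N-subjettiness for a given set of candidate axes: pT-weighted sum of each
    constituent's distance to its nearest axis, normalised by sum(pT) * R0."""
    d0 = sum(pt for pt, _, _ in constituents) * r0
    return sum(pt * min(delta_r((pt, eta, phi), ax) for ax in axes)
               for pt, eta, phi in constituents) / d0

# Toy two-prong jet: constituents bunched around two cores -> tau21 comes out small
constituents = [(120., 0.02, 0.01), (90., 0.05, -0.03),
                (110., 0.52, 0.48), (80., 0.55, 0.51), (6., 0.30, 0.20)]
axes1 = [(0., 0.28, 0.24)]                        # single-axis hypothesis
axes2 = [(0., 0.03, -0.01), (0., 0.53, 0.50)]     # two-axis hypothesis
tau1, tau2 = tau_n(constituents, axes1), tau_n(constituents, axes2)
print(f"tau21 = {tau2 / tau1:.2f}")               # well below 1 for a genuine two-prong jet
```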
Simple cut-based jet taggers, consisting of jet mass cuts plus a single additional substructure variable, such as τ_32 or τ_21, have been used by many ATLAS and CMS searches in the earlier stages of Run 2. However, more recently both ATLAS and CMS have switched to a variety of advanced jet taggers exploiting modern machine learning techniques (Boosted Decision Trees, BDTs; Deep Neural Networks, DNNs; and other modern tools). These more advanced taggers are able to exploit non-linear correlations between the different substructure variables to further improve on few-variable cut-based taggers, or even to use the four-vectors of the individual jet constituents directly.

ATLAS and CMS have both designed and used a large variety of different jet taggers, and it would take an entire separate review to fully explore the different options that have been used during Run 2. A few summary plots of different taggers used for the identification of jets originating from top quarks, W bosons, and H bosons are shown in Figure <ref>; Z boson tagging performance is generally very similar to W boson tagging. The figures show either the background rejection (ATLAS) or the background misidentification rate (CMS) for a fixed signal efficiency, where one quantity is the inverse of the other: a good tagger will have a large background rejection or, equivalently, a small misidentification rate. Taggers using different forms of machine learning techniques are shown to provide large improvements in performance over simple cut-based taggers, as expected, and further benefits are seen from adding in jet constituent information, such as is done in DeepAK8 from CMS <cit.>.

While the ability of the tagger to reject background jets for a given signal efficiency is one very important metric, it is not always the deciding factor when choosing which tagger to use for a given search for new physics. Taggers providing the largest background rejection often significantly sculpt the background jet mass distribution, which can be a major problem. As an example, searches looking for resonances in the mass distribution of a single large-R jet require a smooth jet mass distribution for their background estimation, and must avoid introducing artificial bumps from the use of the tagger. In this context, the extent to which a tagger sculpts the jet mass distribution becomes a key metric. Originally, individual substructure variables, such as τ_21, were transformed in a way that made them independent of the jet mass; this is referred to as a Designed Decorrelated Tagger (DDT) <cit.>. However, modern advances in machine learning techniques have allowed for this to be extended to a variety of different advanced taggers, which are typically then referred to as being mass-decorrelated (MD). ATLAS <cit.> and CMS <cit.> have both studied the development of DDT and MD taggers, some of which can be seen in the CMS plots in Figure <ref>; ATLAS DDT/MD results are not shown here. The plots show that DDT techniques can actually increase the background rejection for cut-based taggers, as the transformation includes more information, while MD taggers sacrifice some of their performance to avoid sculpting the mass distribution, although they remain more powerful than cut-based DDT approaches. MD taggers can also open up new analysis opportunities, such as enabling the use of mass-related control regions, which further supports the usage of such advanced taggers.
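As a toy illustration of the decorrelation idea behind DDT-style taggers, the sketch below fits the linear dependence of τ_21 on a mass/p_T scaling variable in a background-like sample and subtracts it; the choice of scaling variable and all numbers are illustrative and are not the published DDT prescription.

```python
import numpy as np

def fit_ddt_slope(rho, tau21):
    """Fit a straight line to the background profile of tau21 versus a scaling variable
    such as rho = log(m^2 / pT^2); the fitted slope defines the correction."""
    slope, intercept = np.polyfit(rho, tau21, 1)
    return slope, intercept

def tau21_decorrelated(tau21, rho, slope):
    """Shift tau21 by the fitted linear trend so that, on average, the transformed
    variable no longer depends on the jet mass; cutting on it then avoids sculpting
    the background mass distribution."""
    return tau21 - slope * rho

# Toy background sample with an artificial linear rho dependence (illustrative only)
rng = np.random.default_rng(1)
rho = rng.uniform(-6.0, -1.5, 50000)
tau21 = 0.35 - 0.08 * rho + rng.normal(0.0, 0.05, rho.size)
slope, _ = fit_ddt_slope(rho, tau21)
tau21_dec = tau21_decorrelated(tau21, rho, slope)   # cut on this instead of raw tau21
```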
After the development of complex jet taggers, it is necessary to derive correction factors to ensure that the taggers are selecting the same types of jets in data and simulation. This is not a trivial statement, as the simulation may model the correlations between variables or jet constituent four-vectors differently than in data, and the taggers optimized in simulated events may then exploit these differences. Similar to the procedure used to correct the jet mass scale and resolution, it is important to identify a high-purity selection of the object of interest and to compare the resulting data and simulated efficiencies of the taggers. Focusing on W-boson and top-quark taggers, as those are of most relevance to the hadronic searches that will be discussed in this review, ATLAS and CMS both use semi-leptonic tt̅ events to measure signal efficiencies; the resulting agreement between data and simulation can be seen in Figure <ref>. Inclusive selections of not-top quark and gluon events, or of photon plus jet events, are used to evaluate the corresponding background efficiencies for W-boson and top-quark taggers <cit.>. The resulting scale factors required to correct the simulation to match the data are typically reasonably close to one, suggesting that the modelling of the quantities and correlations being exploited in the taggers is of reasonable quality.

§ DI-JET SEARCHES

Di-jet searches have a long history in hadron collider experiments, and are typically among the first searches conducted upon accessing a new centre-of-mass-energy scale, as they rapidly become sensitive to new very massive physics. Such searches are also sensitive to a wide variety of new physics models, due to their minimal assumptions about the properties of the new physics sector, as discussed in Section <ref>. This motivates a variety of different types of di-jet searches for new high-mass phenomena, as will be discussed in Section <ref>.

Rather than focusing solely on pushing the search for new physics to ever-higher masses, modern di-jet searches are also increasingly extending towards the lower-mass regime. While this regime has been studied at previous colliders, new physics may still have been missed; reaching the mass scale of new physics is only sufficient to discover that new particle if the search also reaches the sensitivity required to extract the signal from the background. In other words, new physics may be hiding in the low-mass regime if the new phenomenon is too weakly coupled to quarks and gluons to have been observed in past searches.

Given that the LHC produces an enormous number of di-jet events, it provides the potential to probe low-mass hadronic physics well beyond what was possible at previous experiments, so long as that data can be accessed. This is the main challenge that low-mass di-jet searches must overcome: the majority of the relevant low-mass data is not recorded by ATLAS and CMS due to trigger constraints, as discussed in Section <ref>. Low-mass di-jet searches must therefore find some way to mitigate these trigger constraints, and ATLAS and CMS have now found several complementary ways to do so, as will be discussed in Sections <ref>–<ref>.

These different di-jet search strategies are all complementary, and must be considered as a whole to properly understand the sensitivity of the ATLAS and CMS di-jet search programmes to new physics.
During Run 2, ATLAS and CMS moved towards a harmonised means of comparing the different analyses, which are interpreted as a search for a new axial-vector Z^' boson of a given mass m_Z^' and with a given coupling g_q of that boson to quarks. Tree-level Feynman diagrams for the production and subsequent decay of the searched-for Z^' boson are shown in Figure <ref>, and more details on this model as applied to di-jet searches at the LHC can be found in, for example, the LPCC Dark Matter Working Group recommendation documents <cit.>. The resulting limits of the different types of searches conducted by both ATLAS and CMS are shown in Figure <ref>. While these figures provide an excellent summary of the analyses conducted up to the date at which the plots were last updated, it is also useful to discuss how the individual results were obtained.

§.§ High-Mass Di-Jet Searches

High-mass di-jet searches work in the regime where the corresponding jet triggers are fully efficient, and thus probe new physics with the full statistical power of the LHC. The classic example is the di-jet resonance search, which both ATLAS and CMS have published using the full Run 2 dataset <cit.>. This type of search has such extensive statistical power that it includes observed events with di-jet masses of 8 TeV; this is the highest mass range seen by any of the searches presented in this review. The ATLAS di-jet resonance search makes use of anti-kt jets built from topological clusters, while the CMS search uses anti-kt jets built from particle flow objects. CMS furthermore uses the two leading jets in the event as seeds in the creation of "wide jets", whereby all other anti-kt jets within Δ R < 1.1 are added to the four-vectors of the leading jets, and the wide jets are used to define the di-jet system; this procedure reduces the impact of gluon radiation on the search.

ATLAS and CMS both fit the large and smoothly falling Standard Model background directly from data, using functional forms, and look for deviations from that background corresponding to new particle resonances. CMS additionally considers another data-driven background estimation method, referred to as the ratio method, which defines signal and control regions in terms of the pseudorapidity separation of the di-jet system (|Δη|); these regions are then used to derive a mass-dependent transfer factor to correct the simulation to match the data expectation. Both background estimation methods work well, and no significant deviation is observed by either ATLAS or CMS, as shown in Figure <ref>a,b, respectively. Limits are therefore set on a wide variety of different signal models, including both MC-based models and generic Gaussian signals of various widths; examples are shown for ATLAS and CMS in Figure <ref>c,d, respectively.

The di-jet resonance search provides access to the highest energy scales at the LHC, but it is possible that new resonant physics lies beyond the LHC energy scale. In this case, it may still be possible to observe the effects of new physics through modifications to the angular structure of the highest-energy di-jet events, typically characterised using the variable χ = e^|Δ y| ≈ (1+cosθ^*)/(1-cosθ^*), where Δ y is the rapidity separation of the two jets and θ^* is the polar angle in the di-jet centre-of-mass frame.
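A minimal sketch of the χ variable is given below, computed either from the jet rapidities or from the centre-of-mass polar angle; the example values are arbitrary.

```python
import math

def chi_dijet(y1, y2):
    """Angular variable chi = exp(|y1 - y2|) for the two leading jets."""
    return math.exp(abs(y1 - y2))

def chi_from_theta_star(theta_star):
    """Equivalent form (1 + cos(theta*)) / (1 - cos(theta*)) in the di-jet rest frame."""
    return (1.0 + math.cos(theta_star)) / (1.0 - math.cos(theta_star))

# Illustrative values: a wide rapidity gap (t-channel-like) gives large chi,
# while a central, isotropic-like configuration gives chi close to 1.
print(chi_dijet(1.7, -1.3))    # ~ e^3.0 ≈ 20
print(chi_dijet(0.2, -0.1))    # ~ e^0.3 ≈ 1.35
```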
Most Standard Model di-jet processes are t-channel and result in small values of θ^* (large values of χ), while new physics is expected to be more isotropic, and thus may show up at smaller values of χ. ATLAS and CMS have conducted such searches using Run 2 data, but in both cases the published searches have only made use of a portion of the full dataset <cit.>. The angular searches are very similar to the aforementioned resonance searches, although for CMS the angular search uses standard anti-kt jets rather than "wide jets". For the angular search, the background is taken from simulated samples, and the resulting shape is compared to data distributions in a variety of signal regions defined in terms of di-jet mass window selections. The resulting signal regions are shown for ATLAS in Figure <ref>a, while the highest mass region is shown for CMS in Figure <ref>b. No significant deviations from the predicted shape are observed, and thus limits are set on the scale of new physics.

It is also possible that new physics is not uniform in its couplings to the different types of quarks. In particular, new physics may couple preferentially to the more massive bottom and top quarks, and thus not be immediately apparent in the previously discussed inclusive di-jet searches. Di-jet searches making use of flavour tagging algorithms can probe such possibilities by suppressing events consisting of pairs of light quarks and/or gluons, while accepting the signal events of interest. ATLAS has conducted both single-b-tagged and di-b-tagged di-jet searches using the full Run 2 dataset, which use the same type of jets and the same background strategy as the inclusive search <cit.>. The resulting di-b-tagged di-jet mass spectrum is shown in Figure <ref>a; no significant deviations are observed, and thus limits are set on a variety of new signal models, including Gaussian resonances, as shown in Figure <ref>b. CMS has also conducted such searches during Run 1 <cit.>, but does not yet have such a publication using Run 2 data.

Searches for di-top-quark resonances require a bit more care, as the top quarks decay immediately, and the resulting decays produce a variety of different final states. Focusing on the fully hadronic final state, as is the topic of this review, results in a final state with six partons: each of the two top quarks decays to a b quark and a W boson, and each of the W bosons subsequently decays to a pair of quarks. At the energy scale where fully hadronic di-top-quark resonance searches are conducted at the LHC, the top quarks are boosted, and thus their decays are collimated: the entire hadronic decay of a top quark is thus reconstructed as a single large-R jet, turning di-top-quark resonance searches into di-large-R-jet searches.

Non-top-quark and gluon jets can also be reconstructed as large-R jets, forming a large background to the search for tt̅ resonances. Such backgrounds must be suppressed by tagging the large-R jets, thereby accepting jets originating from hadronic top-quark decays and suppressing those originating from other processes. These large-R jet taggers are supplemented with b-tagging information to further suppress the background from light quarks and gluons. The events that pass these criteria are a mixture of Standard Model tt̅ events and mis-tagged not-top-quark events, where the latter category is strongly suppressed when a two-b-tag requirement is used.

Following this set of complex taggers, the resulting mixture of background events must be evaluated in a data-driven way.
ATLAS and CMS both make use of simulated events for the tt̅ background, which is generally well modelled. In contrast, the surviving not-top-quark background is determined using a series of control regions enhanced in not-top-quark events, where these control regions are used to determine the corresponding selection efficiency and thus the expected contribution to the invariant mass distribution.

ATLAS has conducted the fully hadronic tt̅ resonance search using the full Run 2 dataset <cit.>, and the resulting di-b-tagged mass distribution is shown in Figure <ref>a. CMS has also conducted such a search, but using a partial Run 2 dataset <cit.>, where a similar plot is shown in Figure <ref>b. In both cases, no significant deviations are found beyond the background expectation, and thus limits are set on the production of axial-vector Z^' resonances, as shown in Figure <ref>c,d for ATLAS and CMS, respectively. The ATLAS limit plot is shown only for the fully hadronic tt̅ resonance search, while the CMS limit plot combines the fully hadronic result with the semi-leptonic and fully-leptonic decay modes of the tt̅ system.

§.§ Trigger-Based Di-Jet Searches

High-mass di-jet searches start at roughly 1 TeV, which is due to the hadronic trigger thresholds used by both ATLAS and CMS. In order to access the lower-mass regime, it is necessary to find a way around this trigger constraint. This could be done by using prescaled triggers, which record only a fraction of the events that would have otherwise passed the trigger selection, but such triggers are typically afforded minimal rate; the effective luminosity of such an approach is therefore too small to provide sensitivity to new physics beyond what has previously been studied.

There is, however, a way around the trigger constraint: the analysis can be performed in the trigger. In practice, it is a bit more complex, but this idea is the foundation of the approach referred to as a Trigger-Level Analysis (TLA) by ATLAS, or Data Scouting (DS) by CMS. Such searches exploit the fact that the trigger is bandwidth-limited, and bandwidth is the product of the event size and the trigger rate, meaning that a very small event size can enable the recording of a very-high-rate process. As such, if all of the information needed to conduct the analysis can be calculated within the trigger system, that very small amount of information can be written out alone, while the rest of the much larger event can be discarded. Such an approach is only useful if the precision of the information available in the trigger is sufficient for the analysis objectives. For this reason, TLA/DS works well with jets reconstructed by the experiments' software-based triggers, but not with jets from the hardware-based triggers; the point at which the hardware triggers are fully efficient therefore defines a lower boundary on what this approach can reasonably access.

The TLA/DS approach has allowed both ATLAS <cit.> and CMS <cit.> to probe much lower mass di-jet resonances with the full statistical power of the LHC dataset, albeit with some small additional uncertainties or other performance degradations related to how the trigger-reconstructed jets are typically less precisely known than those available for offline data analysis. For example, the ATLAS di-jet TLA lacks the tracking information needed to apply the part of the calibration sequence that mitigates quark-vs-gluon differences, while the CMS DS di-jet analysis uses calorimeter inputs to jet reconstruction instead of particle flow inputs.
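The bandwidth arithmetic that underpins the TLA/DS approach can be made explicit with invented round numbers, as in the sketch below; these are assumed values for illustration only, not actual ATLAS or CMS event sizes or bandwidth allocations.

```python
# With bandwidth fixed, the recordable rate scales inversely with the event size.
BANDWIDTH_MB_PER_S = 1000.0     # assumed available output bandwidth
FULL_EVENT_MB = 1.0             # assumed size of a fully recorded event
TRIGGER_JETS_KB = 5.0           # assumed size of a trigger-level jet record

full_rate = BANDWIDTH_MB_PER_S / FULL_EVENT_MB                  # events per second
tla_rate = BANDWIDTH_MB_PER_S / (TRIGGER_JETS_KB / 1024.0)      # events per second
print(f"full events: ~{full_rate:.0f} Hz, trigger-level records: ~{tla_rate:.0f} Hz "
      f"({tla_rate / full_rate:.0f}x more events for the same bandwidth)")
```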
These degradations are, however, small prices to pay for the ability to extend the search for di-jet resonances to lower masses with unprecedented statistical precision. The resulting invariant mass spectra are fit using functional forms, similar to the high-mass di-jet resonance search, as shown in Figure <ref>a for ATLAS and Figure <ref>b for CMS, both of which use a partial Run 2 dataset. No significant deviations from the background expectation are observed, and thus the limits on axial-vector Z^' production provided by the high-mass di-jet resonance search are extended down to masses of roughly 500 GeV, as shown in Figure <ref>c,d.

§.§ Di-Jet Searches in Association with Other Objects

After reaching the hardware-based trigger constraints, other methods must be found to continue to probe lower-mass di-jet resonances. One way to do this is to trigger the event based on other activity that happens to be present, but which does not relate to or otherwise impact the process of interest. This is the case for searches for di-jet resonances in association with initial state radiation (ISR), whether that radiation is another jet, a photon, a lepton (from an ISR W boson), or otherwise. This approach can provide access to much lower di-jet masses, but it comes at the cost of requiring both the ISR to occur and the ISR object to be energetic enough to pass the associated trigger. ISR-based approaches thus have less statistical power than conventional approaches, but they can access regimes inaccessible to either standard or trigger-level searches, and thus they are an integral part of the search for new physics.

Once the event has been triggered, the next step is to differentiate between the di-jet system of interest and the ISR object. This may be quite straightforward, such as in the case of a muon clearly being distinct from the di-jet system, or it may be more complex, especially when the ISR object is itself another jet. After identifying the di-jet system, the analysis becomes more similar to one of the previously discussed di-jet searches, where a background expectation is defined (typically from a functional form fit to the data) and a search for narrow resonances over that background is performed.

The strategy of searching for di-jet resonances in association with other objects is quite new at the LHC, with the first preliminary result appearing in 2016 <cit.>. As a result, ATLAS and CMS have not yet independently considered each of the different possible types of associated objects. However, when the ATLAS and CMS searches are considered together, they provide good coverage of the different possible associated objects.

Photons are a promising ISR object to use, as they are typically easily distinguishable from jets, and thus there is little confusion about which part of the event is the di-jet system of interest. In addition, ATLAS and CMS photon triggers have much lower thresholds than jet triggers, as per Table <ref>. ATLAS has therefore conducted a search for di-jet resonances in association with photons using a partial Run 2 dataset <cit.>, which uses both a single-photon trigger for the lowest di-jet masses and a combined photon+di-jet trigger for higher di-jet masses, resulting in two separate search regions.
While both triggers record nearly the same amount of luminosity, the photon+di-jet trigger provides more statistics where it is active, due to its use of a lower photon energy cut; this demonstrates the statistical impact of requiring an ISR object of a given energy, which is independent of the system under study. The same search moreover considers both the inclusive di-jet spectrum and the di-b-tagged di-jet spectrum, which are then fit using functional forms, as done for traditional di-jet searches. The resulting di-jet mass distributions for the inclusive and di-b-tagged selections can be found in Figure <ref>a,b, respectively. No significant deviations are observed from the background fit expectations, and thus limits are set on the quark coupling to an axial-vector Z^' under the assumption of a flavour-universal quark coupling, as shown in Figure <ref>c,d. Even under this flavour-universal assumption, the di-b-tagged selection provides improved sensitivity to the Z^' model under study, as the selection reduces the background contributions of the di-gluon and quark+gluon production processes, which are large at low energies.

The possibility of di-jet resonances in association with an extra jet, from either quark or gluon radiation, has been studied by CMS using a partial Run 2 dataset <cit.>. Using jets as the associated object introduces multiple challenges, such as being subject to the same trigger constraints as before, given the absence of dedicated three-jet triggers. The CMS result therefore developed and used a DS stream optimised for multi-jet events, with cuts on H_T at both the hardware and software levels, where H_T is the scalar sum of the p_T of the jets in the event. This stream was then used to build three "wide jets", similar to what has been previously described, from the calorimeter-based anti-kt jets available in the trigger system.

An additional complication of using a jet as the associated object is that there is no a priori way to define which two of the three jets correspond to the di-jet system of interest, and which is the additional jet. The analysis investigated multiple ways to select the jets constituting the di-jet system, but settled on using the two jets with the highest p_T; it was found that this choice was correct more often than not, especially for the higher-mass resonances considered. After identifying the di-jet system, the di-jet mass spectrum is fit with a functional form, defining the background estimation. No significant deviation from the background estimation is observed, as seen in Figure <ref>a; thus, limits are set on the coupling of quarks to an axial-vector Z^' model, as shown in Figure <ref>b.

A third type of associated object that can be used is a lepton (an electron or muon), with the interpretation of it originating from initial state W boson radiation. ATLAS has conducted such a search using the full Run 2 dataset <cit.>, where the di-jet resonance in association with a lepton is one of the interpretations considered. As listed in Table <ref>, lepton triggers in ATLAS and CMS have very low thresholds, which can help to overcome the rarity of W boson radiation with respect to photon or jet radiation. Leptons are also very easy to differentiate from jets, and thus it is easy to define the di-jet system and fit the data using a functional form to obtain a background estimation.
The resulting di-jet mass spectrum is shown in Figure <ref>a; no significant deviations from the prediction are observed, and thus limits are set on the production of an axial-vector Z^' in Figure <ref>b.

§.§ Boosted Di-Jet Searches in Association with Other Objects

The previous analyses have focused on di-jet masses at the level of a few hundred GeV, which means that there is still the possibility that new physics is hiding at even lower mass scales. Accessing lower mass scales with such techniques is challenging, as the di-jet system eventually becomes so low in mass that the decay becomes collimated and the jets overlap. At this point, it is still possible to search for di-jet resonances, but a new technique is needed: the di-jet system must now be represented as a single large-R jet, and the mass of that single jet represents the possible resonance of a new particle. This approach is referred to as the boosted di-jet topology, in contrast with the resolved topology that we have been discussing so far.

The first LHC search for a boosted di-jet system in association with another object happened at almost the same time as the first resolved search, also occurring in 2016 <cit.>. Any non-top quark or gluon can be reconstructed as a large-R jet, which complicates such searches, as the Standard Model di-jet production process is a background to boosted jet + ISR jet searches, while the Standard Model photon+jet process is a background to boosted jet + ISR photon searches. Advanced jet substructure techniques are the solution to this problem, as jet substructure variables can be used to reject jets from non-top quarks and gluons while accepting jets consistent with the interpretation of containing a di-jet system.

This rejection of the backgrounds must be done carefully, as jet substructure variables are generally correlated with the jet mass; cutting on the substructure variable can therefore bias the observable of interest to the search for new physics. The solution to this challenge lies in specifically designing a substructure variable to be uncorrelated with the jet mass, so that cuts on the variable can reject the background without biasing the search. This idea was first proposed via Designing Decorrelated Taggers (DDT) <cit.> in the context of such searches, but has since grown to include additional analytic and machine learning strategies for designing selections that reject the Standard Model background without shaping the mass distribution, which are now used in a variety of applications.

ATLAS has conducted a search for such boosted di-jet resonances in association with both photons and jets <cit.>, where the DDT technique is used to suppress the Standard Model photon+jet and di-jet backgrounds, respectively. The background is evaluated by inverting the DDT cut and calculating transfer factors from the control to signal regions, which are then subsequently smoothed using a Gaussian process regression. The resulting background estimation strategy is validated by applying it to the W/Z boson mass peak and confirming that the extracted significance of the mass peak matches the Standard Model expectation. With this validation done, the analysis then proceeds to evaluate the consistency of data with the background estimation in both the photon and jet channels, which are shown in Figure <ref>a,b, respectively.
No significant deviation is observed beyond the Standard Model background, and thus limits are set on the coupling of quarks to an axial-vector Z^' mediator, as shown in Figure <ref>c, where the photon and jet channels are combined in the limit-setting procedure.

CMS has conducted a search for boosted di-jet systems in association with a photon <cit.>, using a partial Run 2 dataset, which holds the current record in accessing the low-mass regime. The analysis makes use of the DDT procedure to suppress the Standard Model photon+jet background, although the remaining background is still dominated by photon+jet events which survive this cut. Resonant backgrounds from W/Z+photons and tt̅ are taken from simulated samples, while the photon+jet background is estimated by defining a transfer factor from a control region in which the DDT cut has been inverted. The resulting background estimation is compared to data in Figure <ref>a, and no significant deviations are observed; thus, limits are set on the quark coupling to an axial-vector Z^', as shown in Figure <ref>b. These limits are exceptional in that they probe all the way down to Z^' masses of 10 GeV, which is the lowest mass scale probed by any current di-jet resonance search at the LHC.

The search for boosted di-jet systems in association with another jet has also been studied by CMS <cit.>, again using a partial Run 2 dataset. The analysis is split into two separate regions corresponding to different mass regimes: the lower mass regime uses the standard CMS definition of anti-kt R=0.8 jets, while the higher mass regime expands to using the Cambridge–Aachen algorithm with R=1.5; in this way, the R=1.5 jet can contain a more massive di-jet resonance than the R=0.8 jet at the same di-jet system p_T. The DDT procedure is again used to define a substructure variable that can reject the Standard Model background from not-top quarks and gluons while retaining possible di-jet resonances, without sculpting the large-R jet mass distribution. This decorrelation is performed separately for the two different jet definitions, and is done differently than in the previous analyses: the decorrelation is defined to reject 95% of Standard Model background jets from di-jet processes in all regions of the 2D parameter space considered. Due to this choice, it is known that only 5% of the di-jet background is accepted in all regions studied, up to possible simulation limitations in modelling the parameter space of interest. The background estimation procedure thus focuses on evaluating potential differences in the DDT modelling, and is done by simultaneously fitting a function to the events passing and failing the DDT cut. This procedure defines the background estimate for the dominant background process, while the smaller backgrounds from W/Z+jets, tt̅, and single-top production are taken from simulation.

The resulting background estimates are derived in different jet p_T bins; one such bin is shown for anti-kt R=0.8 jets in Figure <ref>a and for Cambridge–Aachen R=1.5 jets in Figure <ref>b. No significant deviations are observed from the respective background expectations, and thus limits are set on the quark coupling to an axial-vector Z^', as shown in Figure <ref>c. The resulting limits in the lower-mass regime are further combined with a previous analysis, as shown in Figure <ref>d; the previous analysis was performed similarly, using a distinct partial Run 2 dataset, and is detailed in Ref. <cit.>.
The previous result of that earlier analysis saw a potential excess at a resonance mass a bit above 100 GeV, but the result discussed here does not confirm it, and thus the combined significance is reduced with respect to Ref. <cit.>. § MISSING TRANSVERSE MOMENTUM PLUS X SEARCHES Searches for invisible or otherwise very weakly interacting particles are challenging at ATLAS and CMS, as they escape the detector without leaving any visible energy signature to indicate their presence. The Standard Model already includes one such type of particle, the neutrino, the production of which forms an irreducible background to any search for other detector-invisible particles. There is, however, a candidate for an invisible particle beyond the Standard Model that is of great interest to the particle physics community: particulate dark matter. If dark matter has a particle origin, then a weakly interacting massive particle (WIMP) with a mass at the weak scale would naturally produce the observed relic abundance of dark matter in the universe <cit.>; this is known as the “WIMP Miracle”. As the LHC is particularly sensitive to particles at the weak scale, there is both strong interest in and motivation for searches for such dark matter candidates. There is therefore a large physics programme at the LHC oriented around the search for dark matter, including common LHC recommendations on how to interpret the results of such searches <cit.>. One of the original approaches to such searches at the LHC, and one which is still of great relevance, is the search for the pair-production of dark matter particles χ through the decay of a new s-channel mediator, such as a Z^' boson; more details on such Z^' models as used at the LHC can be found in, for example, the above-mentioned LPCC Dark Matter Working Group recommendation documents. However, if the collision is entirely described by the process qq̅→Z^'→χχ̅, then the events will be invisible to the ATLAS and CMS detectors, as the final state only involves detector-invisible particles. Searches therefore must add an additional experimental constraint, in the form of requiring that ISR accompanies the production of the Z^', as shown in Figure <ref>. This ISR requirement does not add any additional assumption about the new physics production or decay couplings, as the radiation occurs independently of the new physics of interest, and thus does not bias the search to specific models. The presence of ISR is rather an experimental consideration, which comes with the price of a reduced cross-section, but which is required to observe such events. The addition of ISR adds a visible component to the collision by-products, which the Z^' and thus the dark matter particles must recoil against. Thanks to the conservation of transverse momentum in LHC collisions, this imbalance between visible activity in one part of the detector and nothing in the opposite part of the detector can be quantified; the imbalance is referred to as missing transverse momentum, and large values of missing transverse momentum imply the presence of invisible particles. Searches for missing transverse momentum balancing some other visible object, usually assumed to be from ISR, are thus a prominent means of searching for the production of dark matter at the LHC. There are many such searches, and they will not be covered in detail here, as that could be the subject of an entire separate review.
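As a concrete illustration of the quantity just introduced, the following sketch computes missing transverse momentum as the magnitude of the negative vector sum of the transverse momenta of the visible objects; real reconstructions additionally involve object calibrations and soft-term corrections, which are omitted here.

```python
import numpy as np

def missing_et(pts, phis):
    """Magnitude of -(vector sum of visible transverse momenta)."""
    px = np.sum(pts * np.cos(phis))
    py = np.sum(pts * np.sin(phis))
    return np.hypot(px, py)

# a single 250 GeV jet recoiling against an invisible system -> MET of 250 GeV
print(missing_et(np.array([250.0]), np.array([0.0])))
```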
This review will instead focus on analyses related to hadronic final states, which happen to have the leading sensitivity to a variety of different types of possible mediators between the Standard Model and postulated dark sectors. §.§ Missing Transverse Momentum Plus Jet Searches Quarks and gluons, or collectively jets, are the most common source of ISR at the LHC. From a statistical perspective, this means that searches for invisible particles using missing transverse momentum in association with ISR jets should lead the sensitivity. This is indeed generally true, and the resulting search is often referred to as a mono-jet search due to the presence of only a single visible jet in the detector. This name has stuck, even though modern iterations of the mono-jet search allow for more than one jet to be present, so long as there is at least one high-energy jet in association with large missing transverse momentum. ATLAS <cit.> and CMS <cit.> have both published mono-jet analyses using the full Run 2 dataset. The analyses are generally quite similar in concept, with a focus on precisely evaluating the expected contribution of the irreducible Standard Model background of Z(→νν) + jets from a variety of control regions. ATLAS and CMS both use Z(→ℓℓ) + jets and W(→ℓν) + jets control regions, for ℓ={e,μ}, while ATLAS defines an additional tt̅ + single-top control region, and CMS benefits from an additional γ+jets control region. The combination of all of these control regions allows for a very precise determination of the dominant and irreducible Z(→νν) + jets process in the signal region, as well as the secondary contributions from other Z+jets and W+jets processes; these high-precision estimations of the signal region contributions are key to the final sensitivity of the analysis. Both analyses also have dedicated control regions to estimate the contributions of other processes to the signal region, such as those from multi-jet backgrounds. This use of many dedicated control regions to estimate the relevant background processes in the signal region results in a very sensitive analysis, which is predominantly limited by systematic uncertainties, both experimental (object reconstruction and scale) and theoretical (in the process of extrapolating from control regions to signal regions). An example of the W(→eν) + jets control region is shown in Figure <ref>a for ATLAS and Figure <ref>c for CMS, while the signal region expectations are shown in Figure <ref>b,d, respectively. A small deviation is seen in one bin of the ATLAS signal region, but there is a related fluctuation in the W(→eν)+jets control region; thus, it is possible that the effect is correlated with a feature in the control region. Another similar but distinct hadronic final state considers the possibility of a hadronically decaying W or Z boson. The resulting analysis, often referred to as the hadronic mono-V search, follows a very similar background estimation strategy to the mono-jet search. In ATLAS, the search has been conducted using a partial Run 2 dataset <cit.>, and with the aforementioned Z(→ℓℓ) + jets and W(→ℓν) + jets control regions. The CMS hadronic mono-V search was conducted together with the mono-jet search, and thus includes all of the different control regions discussed previously, and uses the full Run 2 dataset <cit.>. While the background estimation procedure is similar, the analysis definition is quite different.
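A heavily simplified version of the control-region logic for the dominant Z(→νν)+jets background is sketched below: the observed Z(→ℓℓ)+jets yield in data (after subtracting the other simulated contributions) is transferred to the signal region using the simulated ratio of the two processes in each missing-transverse-momentum bin. The real analyses instead perform a simultaneous likelihood fit of all control regions with correlated uncertainties; the function and argument names here are illustrative only.

```python
import numpy as np

def znunu_from_zll(data_cr, other_mc_cr, zll_mc_cr, znunu_mc_sr):
    """Per-bin Z(->nunu)+jets prediction in the signal region from a Z(->ll)+jets control region."""
    data_cr = np.asarray(data_cr, dtype=float)
    other_mc_cr = np.asarray(other_mc_cr, dtype=float)
    zll_mc_cr = np.asarray(zll_mc_cr, dtype=float)
    znunu_mc_sr = np.asarray(znunu_mc_sr, dtype=float)
    zll_in_data = data_cr - other_mc_cr               # background-subtracted CR data
    ratio = np.divide(znunu_mc_sr, zll_mc_cr,
                      out=np.zeros_like(znunu_mc_sr), where=zll_mc_cr > 0)
    return ratio * zll_in_data
```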
For the hadronic mono-V selections, both ATLAS and CMS use large-R jets to represent the hadronically decaying W/Z boson, and employ jet taggers along the lines of those discussed in Section <ref> to reject backgrounds from non-top-quark or gluon jets while selecting those consistent with hadronic W or Z boson decays. ATLAS and CMS additionally define both low-purity (LP) and high-purity (HP) selections; the signal regions for the high-purity selections are shown in Figure <ref> for both ATLAS and CMS. ATLAS additionally considers single- and double-b-tagged selections, which further enhance the purity with which W and Z boson decays can be retained. ATLAS further studies resolved categories, where the two quarks from the W or Z decay are not sufficiently collimated to be adequately represented by a single large-R jet; pairs of smaller-radius jets are thus used instead, and the invariant mass of that pair of jets is required to be consistent with the interpretation that they originate from the decay of a W or Z boson. No significant deviations from the background prediction are observed in any of the mono-jet or hadronic mono-V searches, and thus limits are set on the production of various different mediators that couple the Standard Model to the dark sector. Two key benchmark models studied by both ATLAS and CMS are the production of axial-vector mediators, and the production of pseudo-scalar mediators. Limits on the production of both of these types of processes, as a function of the mediator mass and the dark matter mass, are shown for a given choice of the coupling between quarks and the mediator (g_q=0.25 for axial-vector, g_q=1.0 for pseudo-scalar) and the coupling between the mediator and dark matter (g_χ=1.0), in Figure <ref>. These limits are all shown for the full Run 2 dataset, and are for the mono-jet analysis signal region for ATLAS <cit.>, while the CMS results include contributions from both the mono-jet and hadronic mono-V signal regions <cit.>. §.§ Other Missing Transverse Momentum Searches While the mono-jet final state is generally the most sensitive to the previously presented dark matter models, the second most abundant source of ISR at ATLAS and CMS is photons, not W and Z bosons. The missing transverse momentum plus ISR photon search is thus also an important part of the search programme, and along the same lines as the mono-jet and hadronic mono-V searches, it is often referred to as the mono-photon analysis. ATLAS has published a mono-photon analysis using the full Run 2 dataset <cit.>, while the corresponding CMS analysis currently uses a partial Run 2 dataset <cit.>. Similar control regions to the mono-jet search are used to estimate the Z(→νν)+γ background and other W/Z+γ backgrounds, just with the associated jet replaced by an associated photon. The CMS search is further divided into “vertical” and “horizontal” signal regions, which are defined in such a way that the contribution from background beam halo events can be determined. The resulting signal regions for ATLAS and CMS are shown in Figure <ref>a,b, respectively. No significant deviations from the background prediction are observed, and thus both analyses proceed to set limits on the production of axial-vector mediators coupling the Standard Model to dark matter, which are correspondingly shown in Figure <ref>c,d. Comparing these limits with those shown in Figure <ref>, it is clear that the mono-jet analysis is more sensitive to the models shown here.
In addition to the mono-(jet/V/γ) searches presented so far, there are a vast number of other searches for missing transverse momentum in association with other objects that can be interpreted in the context of mediators connecting the Standard Model to dark matter. In particular, the sensitivity to scalar mediators with Yukawa couplings is enhanced by the presence of massive objects, such as top or bottom quarks. Such signatures include large amounts of missing transverse momentum in addition to single-top quarks, tt̅ pairs, single-bottom quarks, bb̅ pairs, or top+bottom. While many of these signatures include hadronic final states, they are not discussed in detail here; these types of signatures are of great relevance to other types of models, and are discussed in a parallel review <cit.>. In order to briefly demonstrate the relevance of such other final states in the search for scalar and pseudo-scalar mediators decaying to dark matter, plots showing the sensitivity of a variety of different signatures are shown for both ATLAS <cit.> and CMS <cit.> in Figure <ref>. The mono-jet process may have the largest cross-section, but the quark–antiquark annihilation usually occurs between low-mass quarks. As such, much rarer processes, including the production of pairs of top quarks in association with the scalar or pseudo-scalar mediator, can actually have better sensitivity, as the coupling, which scales with the top quark mass, compensates for the lower cross-section. These other signatures are therefore an important part of the ATLAS and CMS physics programme in the context of the search for the pair-production of dark matter at the LHC. § HADRONIC DI-BOSON SEARCHES Searches for new particles decaying to pairs of electroweak bosons are sensitive to a wide variety of models of new physics. Pairs of bosons can be the by-product of new mediators of spin 0, spin 1, or spin 2, thus probing many interesting possibilities. The link with spin 0 mediators means that such di-boson final states are also of great interest to searches for Higgs couplings that do not match Standard Model expectations, or to new scalar particles; most di-boson searches have thus been covered in a parallel review to this one <cit.>. This review will therefore focus on searches for di-boson production in the fully hadronic final state, and where neither boson is a Higgs boson, the Feynman diagram for which is shown in Figure <ref>. Results of such searches are typically interpreted in terms of benchmark models, including spin 0 Radions <cit.>, spin 1 Heavy Vector Triplets (HVTs) or W^' and Z^' bosons <cit.>, and spin 2 bulk RS gravitons <cit.>. §.§ Searches with Standard Model Bosons Searches for di-boson production in the fully hadronic final state are primarily motivated by the large branching ratios of the Standard Model bosons to pairs of quarks, as discussed in Section <ref> and Table <ref>. The hadronic final state can thus have a larger statistical power than fully leptonic or semi-leptonic final states, and may therefore be the first final state to observe new physics, especially in the highest-accessible-energy regime. This means that fully hadronic searches are primarily of interest where the resulting W and Z bosons are produced at very high energy, and thus their subsequent decays to pairs of quarks are highly collimated.
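The branching-ratio argument can be made quantitative with a couple of lines of arithmetic. Using the approximate values BR(W→qq̄′) ≈ 0.67 and BR(Z→qq̄) ≈ 0.70 (rounded here for illustration), roughly half of all di-boson decays end up in the fully hadronic final state:

```python
# Approximate fractions of di-boson events in the fully hadronic final state,
# assuming BR(W -> qq') ~ 0.67 and BR(Z -> qq) ~ 0.70.
bw_had, bz_had = 0.67, 0.70

print("WW fully hadronic fraction:", round(bw_had ** 2, 3))        # ~0.449
print("WZ fully hadronic fraction:", round(bw_had * bz_had, 3))    # ~0.469
print("ZZ fully hadronic fraction:", round(bz_had ** 2, 3))        # ~0.49
```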
These collimated decays are then reconstructed as pairs of large-R jets, where each of the W and/or Z bosons forms one such jet, using the techniques discussed in Section <ref>. Fully hadronic di-boson searches thus take the form of searches for di-large-R-jet events; many techniques thus overlap with the di-jet searches discussed in Section <ref>. However, in the case of di-boson searches, the enormous Standard Model multi-jet and W/Z+jet backgrounds must be overcome in order to be sensitive to rare new di-boson physics. The key to achieving this requirement is the development of powerful taggers, which can differentiate between large-R jets consistent with originating from W or Z bosons and those originating from non-top quarks or gluons, as discussed in Section <ref>. These taggers suppress the Standard Model backgrounds by several orders of magnitude, leaving behind a small but not insignificant fraction of multi-jet and W/Z+jet events. The search for fully hadronic di-boson resonances starts from this baseline. The background estimate must be derived in a data-driven way, as the simulated samples cannot be trusted to properly represent the tiny fractions of background events that survive the jet taggers, yet these surviving background events remain a sizeable contribution with respect to the expected number of signal events. The strategy of searching for resonances therefore allows for the smoothly falling background to be estimated using functional forms, with the signal hypothesis confined to a narrow region of the spectrum. This is the approach followed by both ATLAS <cit.> and CMS <cit.> in their latest searches for fully hadronic di-boson resonances, where the published ATLAS result and the preliminary CMS result both use the full Run 2 dataset. Both searches cut very tightly on their jet taggers in order to suppress the Standard Model background, leaving only a small fraction of background events behind. However, in order to obtain the required background suppression, they must also discard a large number of potential signal events. The fraction of signal events which survive these strong selection criteria is shown in Figure <ref>; ATLAS only considers one category, where the final signal acceptance is at the level of 5%, while CMS considers five separate categories, which altogether bring the signal acceptance to the level of 20%. The five CMS categories come from the tagger targets (V=W/Z, or H) and requirements on the tagger (HP = high purity, or LP = low purity), where the combinations considered are VV-HPHP, VV-HPLP, VH-HPHP, VH-HPLP, and VH-LPHP. HP denotes the use of tighter selections on the boson candidate(s), while LP indicates looser selections, thus increasing the background contamination in order to retain additional signal events. The VV-dedicated regions contribute roughly 10% of the 20% signal acceptance for VV final states; the remaining 10% of retained VV signal events comes from the VH categories, due to the possibility of confusing H bosons and W/Z bosons, and thus the VH categories are important even for VV interpretations as presented in this review. As the jet taggers are so crucial to the analysis sensitivity, it is also important to understand the extent to which the tagger performance differs between data and simulated events. ATLAS and CMS thus evaluate the performance of their taggers in control regions, although the techniques used by the two collaborations differ.
ATLAS makes use of a dedicated W/Z+jets (V+jets) control region, where one jet is required to pass the tagger requirements other than the jet mass, and the other is required to fail one of the tagger selections. The resulting distribution, shown in Figure <ref>a, is still dominated by multi-jet events; however, there is a clear W/Z+jets peak on top of the smooth multi-jet distribution, which can therefore be fit in both data and simulation in order to extract the required tagger efficiency scale factors. This approach works, but it is sensitive to the ability to extract the W/Z+jet peak from the larger multi-jet background, which limits the precision of the method. In contrast, CMS uses a dedicated semi-leptonic tt̅ control region, as shown in Figure <ref>b, where the selected events are dominated by W bosons. This approach may have a higher purity of the object of interest, but does not include Z bosons, and must be performed at lower energy than the analysis regime of interest, as otherwise the selection becomes dominated by top quarks instead of W bosons. The two techniques used by ATLAS and CMS thus both come with their own benefits and limitations with regard to their ability to accurately measure the tagger efficiency in the kinematic regime of interest to fully hadronic di-boson resonance searches. With the tagger performance under control, the remaining step is to define the background expectation in the signal region. ATLAS does this by fitting the di-large-R-jet invariant mass spectrum directly, using a functional form to describe the smoothly falling background shape. The analysis further handles the fact that the W and Z taggers are not orthogonal by pre-combining the WW and WZ events into one signal region, and the WW and ZZ events into a second signal region, as these two combinations are useful in probing different possible signal interpretations. This therefore directly handles the events that are identified as falling into both of the two tagger categories of interest; the resulting WW+WZ and WW+ZZ signal regions are shown in Figure <ref>a,b, respectively. As no significant deviations are observed from the background expectation, the analysis proceeds to set limits on spin-0 Radion production using the WW+ZZ region, spin-1 Heavy Vector Triplet (HVT) W^' and Z^' production using the WW+WZ region, and spin-2 bulk RS graviton production using the WW+ZZ region. The V^' production limits are shown in Figure <ref>c, while the graviton production limits are shown in Figure <ref>d. Instead of fitting the di-large-R-jet invariant mass spectrum alone, the CMS analysis simultaneously fits the invariant mass spectrum together with the individual jet mass spectra of both of the boson candidate jets. By searching for peaks in this set of three distributions, it is possible that a resonance could be discovered in the di-large-R-jet invariant mass spectrum that corresponds to boson masses other than those expected for W and/or Z bosons. This possibility is further supported by the choice of the analysis to use a mass-decorrelated tagger, using methods discussed in Section <ref> to make the tagger mass-independent; thus, the tagger used corresponds to a generic two-body-decay structure rather than strictly a W or Z boson decay. The resulting di-large-R-jet invariant mass spectra, with the background estimates taken from the three-dimensional simultaneous fits, are shown for the HPHP category and the HPLP category in Figure <ref>a,b, respectively.
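The functional-form approach can be illustrated with the generic "dijet function" often used for smoothly falling mass spectra. The exact parameterisation and the statistical treatment (a binned signal-plus-background likelihood fit) differ in the real analyses, so the following chi-square fit is only schematic; the centre-of-mass energy and starting parameters are example values.

```python
import numpy as np
from scipy.optimize import curve_fit

SQRT_S = 13000.0  # GeV, Run 2 centre-of-mass energy

def dijet_func(m, p0, p1, p2, p3):
    """Generic smoothly falling form: p0 * (1-x)^p1 / x^(p2 + p3*ln x), with x = m/sqrt(s)."""
    x = m / SQRT_S
    return p0 * (1.0 - x) ** p1 * x ** (-(p2 + p3 * np.log(x)))

def fit_background(bin_centres, counts):
    errors = np.sqrt(np.maximum(counts, 1.0))   # crude Poisson errors
    popt, _ = curve_fit(dijet_func, bin_centres, counts, sigma=errors,
                        p0=[counts[0], 10.0, 5.0, 0.0], maxfev=20000)
    return popt
```

A resonance search then compares the observed counts with this smooth prediction, allowing a localized signal shape on top of it.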
In the CMS spectra, two local excesses are observed, both at the level of 3.6 standard deviations, at 2.1 and 2.9 TeV; these excesses both have a global significance of 2.3 standard deviations. As no globally significant deviations are observed from the background expectation in either category, limits are set on spin-0 Radion models, spin-1 HVT W^' and Z^' models, and spin-2 bulk RS graviton models. The resulting limits on Z^'→WW and G_bulk→ZZ are shown in Figure <ref>c,d, respectively. The need for fully hadronic di-boson searches to cut so tightly on the signal in order to suppress the background, as shown earlier in Figure <ref>, counteracts the aforementioned benefits of the larger branching fractions of W and Z bosons to hadronic final states. Nonetheless, the hadronic final states are still complementary to or competitive with other final states, even using the taggers that were available early in Run 2 <cit.>. The taggers used to reject the Standard Model backgrounds have improved considerably during Run 2; these improvements are likely to continue as advanced techniques are increasingly applied to this task. Further tagger improvements would increase the fraction of signal events that can be retained, and thus have the potential to make the hadronic final state play an ever more important role in the search for new di-boson physics. §.§ Searches with Generic Bosons The previously described ATLAS and CMS searches for fully hadronic di-boson resonances were primarily optimized around the interpretation of Standard Model W and Z bosons. However, a new physics particle A may instead decay to other new particles B and C, which then in turn decay back to pairs of quarks. The CMS fully hadronic di-boson resonance search takes a first step in this direction in that the background estimation procedure fits the invariant mass distribution simultaneously with the individual jet mass distributions, but the analysis is not directly designed for new physics based around such alternative decays. ATLAS has now conducted a search using the full Run 2 dataset, which has been optimised for the generic process A→BC, where B and C both decay hadronically <cit.>. This result is based on a very different analysis strategy from the aforementioned searches, built around the idea of Classification WithOut LAbels (CWOLA) <cit.>. In this approach, the invariant mass spectrum is divided into eight regions. The method scans over all but the extreme regions, considering one-by-one each region as a signal region, and the regions on either side as sidebands; this results in the study of six separate signal regions. The analysis then trains a neural network to differentiate between the events in the signal window and the two sidebands. If new physics is present more abundantly in the signal window than in the sidebands, the network will learn a proxy for signal-versus-background discrimination. The resulting network can then be applied to the full spectrum to enhance the contribution of signal events, and the spectrum can be fit to define the background expectation for a resonance search conducted within the signal window. The network used to differentiate between signal-region-like and sideband-region-like events is trained using only the masses of the two large-R jets. It is furthermore applied using two different selections, one corresponding to keeping the 10% most signal-region-like events (ϵ=10%), and another corresponding to keeping the 1% most signal-region-like events (ϵ=1%).
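The weak-supervision step can be sketched as follows: events are labelled only by whether they fall in the mass window under test or in its sidebands, a small classifier is trained on the two large-R jet masses, and the most window-like fraction ϵ of events is kept. The window boundaries, network size, and other settings below are illustrative choices, not those of the ATLAS analysis.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def cwola_select(mjj, m_jet1, m_jet2, window, sidebands, eps=0.10):
    """Return a boolean mask keeping the fraction `eps` of most window-like events."""
    mjj = np.asarray(mjj, dtype=float)
    in_window = (mjj > window[0]) & (mjj < window[1])
    in_sideband = (((mjj > sidebands[0][0]) & (mjj < sidebands[0][1])) |
                   ((mjj > sidebands[1][0]) & (mjj < sidebands[1][1])))
    train = in_window | in_sideband
    X = np.column_stack([m_jet1, m_jet2])
    clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500)
    clf.fit(X[train], in_window[train].astype(int))
    score = clf.predict_proba(X)[:, 1]           # applied to the full spectrum
    threshold = np.quantile(score, 1.0 - eps)    # keep the top eps fraction
    return score >= threshold
```

The same function can be re-run for each of the six windows, retraining the classifier each time.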
The resulting signal windows for ϵ=10% are stitched together to form a single plot, shown in Figure <ref>a. As the signal windows each apply a different neural network, there is no expectation that the resulting spectrum will be smooth at the stitching boundaries, as is quite clear at the 5.68 TeV boundary. The approach described so far has no dependence on any simulated model, but if desired, a model can be injected into the neural network training process. If this is done, the resulting network will become more sensitive to that specific model, at the cost of reduced sensitivity to other possible types of new physics. This is useful to allow this generic search to be compared with other analyses searching for specific benchmark models, and thus an example of injecting 3 and 5 TeV W^'→WZ signals is shown for ϵ=10% in Figure <ref>b. Comparing Figure <ref>a to Figure <ref>b, it is clear that the injection of a signal into the training process has resulted in a stronger classifier, and thus the number of events near 5 TeV is further suppressed with respect to the signal-model-independent training. No significant deviations are observed from the background expectation in any of the six regions, at either ϵ=10% or ϵ=1%, and thus limits are set on the benchmark A→BC model for a variety of different values of the masses of A, B, and C. The results for an A mass of 3 TeV are shown for ϵ=10% and ϵ=1% in Figure <ref>c,d, respectively, where the limits from dedicated analyses are overlaid. The dedicated fully hadronic di-boson resonance search from ATLAS is more sensitive than the generic search when B and C have the W boson mass, but for other values of B and C the traditional analysis has no sensitivity, as the taggers used are heavily optimised around the Standard Model W/Z boson interpretation. Instead, the limits from the high-mass di-jet search discussed in Section <ref> are relevant, as the A can decay back directly to pairs of quarks instead of decaying to B and C. The same figures show that the search for generic A→BC resonances is often much more sensitive than the di-jet search to this model, as the neural network is exploiting the structure of the final state to reject the Standard Model multi-jet background while retaining the signal candidates of interest. This nicely demonstrates the utility of such an approach to searching for generic A→BC di-boson resonances: it cannot outperform a dedicated search for a given combination of the masses of B and C, but it can increase the sensitivity to other mass assumptions beyond what is possible from re-interpreting searches that do not consider the structure of the decay process. § COMPLEMENTARITY OF HADRONIC PHYSICS SEARCHES The di-jet searches presented in Section <ref> are of great importance when considering complementarity between different methods of looking for new physics beyond the Standard Model. Di-jet searches provide a generic means of constraining the presence of a whole class of new physics models: if a model assumes that the mediator can be produced through an s-channel process involving the annihilation of quark–antiquark pairs, then di-jet searches can probe that model, as the mediator can also decay back to quark–antiquark pairs (excluding mediators that are sufficiently long-lived for the decay to occur after leaving the detector volume).
This means that di-jet searches are directly complementary to the missing transverse momentum plus X searches discussed in Section <ref> and the hadronic di-boson searches discussed in Section <ref>.The complementarity of di-jet searches and missing transverse momentum plus X searches has been studied in detail during Run 2, with particular emphasis on the search for new axial-vector Z^' bosons with couplings to dark matter. ATLAS <cit.> and CMS <cit.> have both created plots overlaying the sensitivity of their different types of searches to such a Z^' model, as shown in Figure <ref>. The couplings of the postulated Z^' boson to quarks (g_q), leptons (g_ℓ), and dark matter (g_χ) are not fixed parameters, and thus different coupling scenarios are considered, following the LPCC Dark Matter Working Group recommendations <cit.>. The different scenarios nicely demonstrate the complementary sensitivity of the different types of searches: if g_q is large compared to g_ℓ, di-jet searches lead the sensitivity across the full parameter space; if g_q and g_ℓ are of similar size, di-lepton searches (not discussed in this review) take the lead; and if g_q and g_ℓ are both small, missing transverse momentum plus X searches are of great importance. Di-jet and di-boson searches are similarly complementary, but their joint sensitivity has not yet been compared to the same extent. However, it is clear that similar behaviour would be observed, namely, di-jet searches would be of great importance when the coupling of the new mediator to quarks is large, and di-boson searches would provide leading sensitivity when the coupling of the new mediator to bosons is large.§ SUMMARY AND OUTLOOKThis review provides an overview of ATLAS and CMS searches for new physics in hadronic final states, shortly before the start of Run 3. Following a discussion on the motivations and challenges of physics involving hadronic final states at the LHC in Section <ref>, the different jet reconstruction, calibration, and tagging strategies employed by ATLAS and CMS were presented in Section <ref>. These provide the necessary background to understand how hadronic final states are observed and the precision that such hadronic observables have attained during Run 2 of the LHC.With this baseline in place, the review shifted to searches for di-jet resonances in Section <ref>. These searches are of fundamental importance to the ATLAS and CMS physics programmes, as they are sensitive to a wide range of possible new physics models, due to their minimal set of assumptions: the new particle under study must be possible to create through quark–antiquark annihilation. The di-jet search programme has increased in scope considerably during Run 2, including new strategies to circumvent the trigger barrier and to access the lower-mediator-mass regime with unprecedented precision. New techniques to access this regime continue to be deployed, and several of the analyses presented have only been conducted by one of ATLAS or CMS so far; there is thus still scope for further improvements and possible discovery of new physics in these low-mass di-jet searches using only the Run 2 dataset, which will only be further improved during Run 3.Many di-jet searches are conducted in order to probe the existence of new mediators between the Standard Model and dark matter. 
Another means of probing the existence of such mediators is to directly study events including the production of dark matter particles, which must balance some initial-state-radiated object in order to be visible to the detector. Searches for missing transverse momentum in association with jets, and briefly also with other objects, were presented in Section <ref>. While the flagship search of this type has already been published using the full Run 2 dataset by both ATLAS and CMS, there remain other signatures that are competitive for some models of mediators to the dark sector, and not all of those searches have yet been extended to the full dataset. Moreover the flagship mono-jet analysis is generally systematically limited; thus, further refinements to the analysis strategy, the object reconstruction uncertainties, or theoretical uncertainties could provide sizeable improvements to the analysis sensitivity.Another possibility is that the new mediator preferentially decays to pairs of bosons, and thus searches for resonances in fully hadronic decays of pairs of electroweak bosons were discussed in Section <ref>. These searches depend crucially on modern developments in jet tagging, whereby jets consistent with originating from W and/or Z bosons are selected, while jets originating from quarks or gluons are rejected. This is necessary to overcome the otherwise enormous Standard Model multi-jet background, and the ability to do so has improved substantially during Run 2. These searches will continue to benefit from collecting more data, but ultimately further improvements to the tagger design or analysis strategy are likely to provide more significant gains in the coming years. Additional searches for generic boson resonances, where the bosons do not necessarily have the W or Z boson mass, may also yield discoveries in regions that are not covered by dedicated analyses.These three types of searches for new physics in hadronic final states are complementary, as discussed in Section <ref>. The di-jet searches and missing transverse momentum plus associated object searches can both be directly interpreted under the context of the same model, which was shown for a new axial-vector mediator. As there is no well-defined expectation for the couplings of such a new mediator to quarks, leptons, and dark matter particles, different signatures must all be studied in order to maximally cover the new physics parameter space. Di-jet searches are also complementary to the fully hadronic di-boson searches for new mediators for similar reasons: di-jet searches would provide the strongest sensitivity when the quark coupling is large, while di-boson searches would provide better sensitivity when the coupling to bosons is large. There is substantial scope for further comparisons and combinations of different search strategies, both hadronic or otherwise, which may indicate regions that are being missed by the current ATLAS and CMS search programmes.The analyses presented in this review are only a subset of all possible searches for new physics in hadronic final states. Hadronic final states are already of great interest at the LHC, yet the precision of hadronic physics observables has continued to improve, taggers to identify specific types of hadronic objects frequently provide large gains with respect to previous versions, and new hadronic analysis strategies are being developed and deployed. 
The challenges of hadronic physics at the LHC are being slowly but surely mitigated, and with these advances, there will surely be many new opportunities for searches in hadronic final states in the years to come. § ACKNOWLEDGEMENTSThis review is part of a project that has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 948254) and from the Swiss National Science Foundation (SNSF) under Eccellenza grant number PCEFP2_194658. tocsectionBibliography [title=References] | http://arxiv.org/abs/2311.16040v1 | {
"authors": [
"Steven Schramm"
],
"categories": [
"hep-ex"
],
"primary_category": "hep-ex",
"published": "20231127180127",
"title": "Searching for New Physics in Hadronic Final States with Run 2 Proton-Proton Collision Data at the LHC"
} |
Performance Analysis of MDMA-Based Cooperative MRC Networks with Relays in Dissimilar Rayleigh Fading Channels. This work was supported by the Natural Science Foundation of China (Project Numbers: U22B2003 and U2001208). (Corresponding author: Chen Dong). Lei Teng, Wannian An, Chen Dong, Xiaoqi Qin and Xiaodong Xu are with the School of Information and Communication Engineering, Beijing University of Posts and Telecommunications. Lei Teng^2, Wannian An^2, Chen Dong^2, Xiaoqi Qin^2, Xiaodong Xu^2; ^2 State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, 100876, China. Email: [email protected], [email protected], [email protected], [email protected], [email protected]. January 14, 2024. Multiple access technology is a key technology in various generations of wireless communication systems. As a potential multiple access technology for next-generation wireless communication systems, model division multiple access (MDMA) improves spectrum efficiency and enlarges the feasibility region. This implies that the MDMA scheme can achieve greater performance gains than traditional schemes. Relay-assisted cooperative networks, as an infrastructure of wireless communication, can effectively utilize resources and improve performance when MDMA is applied. In this paper, a cooperative relay communication network based on MDMA in dissimilar Rayleigh fading channels is proposed, which consists of two source nodes, any number of decode-and-forward (DF) relay nodes, and one destination node, with maximal ratio combining (MRC) used at the destination to combine the signals received from the source and the relays. By applying the state transition matrix (STM) and the moment generating function (MGF), closed-form analytical solutions for the outage probability and the resource utilization efficiency are derived. Simulation results are presented to verify the validity of the theoretical analysis. Keywords: Model division multiple access, Cooperative network, State transition matrix, Moment generating function, Maximal ratio combining. § INTRODUCTION Semantic communication is considered a potential paradigm for next-generation communication systems <cit.>, which involves the selective extraction, compression, and transmission of features from the original signals, as well as the utilization of semantic-level information for communication purposes. The goal of the text-based semantic communication system <cit.>, namely the deep learning based semantic communication system (DeepSC), is to recover the meaning of sentences, rather than the bit or symbol errors targeted by traditional communication systems, with the aim of maximizing system capacity and minimizing semantic errors. It exhibits significant semantic transmission advantages under low signal-to-noise ratio (SNR) conditions compared to traditional communication systems. According to <cit.>, the layer-based semantic communication system for image (LSCI) model serves as an intelligent carrier, and semantic transmission is essentially the propagation of artificial intelligence models.
Therefore, the semantic slice model (SeSM) is designed to achieve semantic intelligent propagation. In <cit.>, nonlinear transform source-channel coding (NTSCC) is applied to map image and video sources into a nonlinear latent space for more efficient semantic extraction and transmission.The aforementioned semantic communication systems are designed for point-to-point communication, while multi-user communication systems have always been a focus of research <cit.>. From the first generation (1G) to the fifth generation (5G) of communication system development, each generation has featured representative and groundbreaking multiple access (MA) technologies <cit.>. A multi-user semantic communication system based on Non-Orthogonal Multiple Access (NOMA) is proposed in <cit.>, which supports semantic transmission for multiple users with different source information modalities. In order to harness the performance potential of semantic information, a novel multiple access technology based on semantic domain resources is proposed in <cit.>, known as Model Division Multiple Access (MDMA). It explores the shared and personalized information from a high-dimensional semantic space. The shared semantic information is transmitted within the same time-frequency resources, while personalized semantic information is transmitted separately. Compared to other MA technologies, MDMA's gain primarily stems from the reuse of shared information among different users in the model information space.Furthermore, relay-assisted cooperative networks are an important research area in communication systems. In <cit.>, an exact closed-form expression for the outage probability of DF relay in a communication network with m relays is derived in different Rayleigh fading channels. In <cit.>, a cooperative communication network based on energy harvesting (EH) and DF relaying is considered. A STM is proposed to obtain the probability distribution of the candidate broadcast node set. The combination of relay-assisted cooperative networks and multiple access technology is also a research focus. In <cit.>, a time division multiple access (TDMA)-based cooperative medium access control protocol for wireless multi-hop relaying networks is proposed and analyzed. In <cit.>, the impact of relay selection (RS) on the performance of cooperative NOMA is studied. In addition, diversity gain techniques are of significant importance for relay-assisted cooperative networks. In <cit.>, the performance of NOMA in a collaborative system is studied when a source node communicates with two EH user devices using MRC technique with multiple antenna hybrid AF/DF relay nodes. In <cit.>, a cooperative communication network based on two energy harvesting DF relays is proposed, where the diversity gain is obtained at the destination node using MRC technique, and the probability density function of the total SNR collected at the destination node under different transmitter-receiver node states is calculated using STM.Based on the above, it can be observed that the combination of multiple access technologies and cooperative networks for next-generation communication systems has received less attention. This paper aims to propose a communication cooperative network utilizing MDMA. The main contributions of this paper are as follows: * A DF relays-assisted MRC network based on MDMA, which consists of two source nodes, an arbitrary number of DF relay nodes, and one destination node, is proposed. 
* By utilizing the MGF and the STM, closed-form theoretical expressions for the outage probability, the resource utilization efficiency, and the average number of time slots required for each data transmission are derived for this network. * Through numerous simulations, it is demonstrated that the simulation results match the corresponding theoretical results. Notations: In TABLE <ref>, all notations are listed. The remainder of this paper is organized as follows: Section II describes the system model. In Section III, the expressions for the outage probability, the resource utilization efficiency, and the average number of time slots required for each image transmission are derived. In Section IV, simulation performance results are presented. Section V concludes the paper. § SYSTEM MODEL Consider the wireless network depicted in Fig. <ref>, where the source nodes S1 and S2 employ a set of m relay nodes c = {R1, ..., Rm}. Let d_ab denote the distance between nodes a and b. The mutually independent complex channel gains between the nodes in the j-th time slot are modeled as h_S1D(j)∼𝒞𝒩(0, d_S1D^-α), h_S2D(j)∼𝒞𝒩(0, d_S2D^-α), h_S1Ri(j)∼𝒞𝒩(0, d_S1Ri^-α), h_S2Ri(j)∼𝒞𝒩(0, d_S2Ri^-α), and h_RiD(j)∼𝒞𝒩(0, d_RiD^-α), where i = 1, ..., m and the term α represents the path-loss exponent. The source S1 broadcasts unit-energy symbols x_S(j) to relay Ri and destination D at rate R_0 with the constant power P_S. The received signals y_S1Ri(j) and y_S1D(j) at relay Ri and destination D in the j-th time slot are given by y_S1Ri(j) = √(P_S)h_S1Ri(j)x_S(j) + n_S1Ri(j) and y_S1D(j) = √(P_S)h_S1D(j)x_S(j) + n_S1D(j), where n_S1Ri(j) and n_S1D(j)∼𝒞𝒩(0, N_0) represent the received additive white Gaussian noise (AWGN) at Ri and D, respectively. Thus, the received signal-to-noise ratios (SNRs) γ_S1Ri(j) and γ_S1D(j) at Ri and D in the j-th time slot are given by γ_S1Ri(j) = P_S|h_S1Ri(j)|^2/N_0 and γ_S1D(j) = P_S|h_S1D(j)|^2/N_0. Similarly, y_S2Ri(j) = √(P_S)h_S2Ri(j)x_S(j) + n_S2Ri(j), y_S2D(j) = √(P_S)h_S2D(j)x_S(j) + n_S2D(j), y_RiD(j) = √(P_S)h_RiD(j)x_S(j) + n_RiD(j), γ_S2Ri(j) = P_S|h_S2Ri(j)|^2/N_0, γ_S2D(j) = P_S|h_S2D(j)|^2/N_0, and γ_RiD(j) = P_S|h_RiD(j)|^2/N_0. Let Γ_th = 2^R_0 - 1 be the network SNR threshold, where R_0 represents the target data transmission rate. When the SNR is greater than Γ_th, the receiver is assumed to successfully decode the information. Furthermore, we assume that the receivers at the relays and the destination have accurate channel state information, enabling MRC. However, there is no transmitter channel state information available at the source or relays. In addition, in Fig. <ref>, S1 and S2 extract their source semantic information S_x and S_y, respectively, as follows: S_x = ϕ_1(X), S_y = ϕ_2(Y), where S_x and S_y represent the semantic information extracted from X and Y, respectively. We define the decoding set C as the set of relays in c that are able to successfully decode the source messages. As shown in <cit.>, the shared information S_xs and S_ys and the personalized information S_xp and S_yp can be extracted. With the help of MDMA, D only needs to successfully receive S_xs, S_xp, and S_yp to reconstruct X and Y. This is because S_xs and S_ys have a high degree of similarity. The ratio of the number of shared information bits B_s to the total number of information bits B_t is defined as η, and correspondingly, 1-η represents the ratio of the number of personalized information bits B_p to the total number of information bits B_t.
As a result, the number of time slots required for transmitting shared information β_s and personalized information β_p can be defined as follows:β_s = ⌈B_s/R_0⌉ = ⌈η B_t/R_0⌉,β_p = ⌈B_p/R_0⌉ = ⌈(1-η )B_t/R_0⌉,To meet the multiple access requirements of S1 and S2, MDMA is applied in the system. The information transmission in MDMA is divided into two phases:Phase I:step1: S1 broadcasts the shared semantic information S_xs to D. If γ_S1D is greater than the threshold Γ_th, D successfully decodes the information and broadcast positive acknowledgement (ACK). Otherwise, step2 proceeds in the next time slot. If D has already successfully received β_s times the shared information, Phase II proceeds in the next time slot. Otherwise, repeat Phase I in the next time slot.step2: If C is empty, it is recorded as a failure and repeat step1 in the current time slot. Otherwise, the relays in C simultaneously transmit it to D. If the total SNR exceeds the threshold through MRC, D is considered to successfully decode the information and broadcast ACK. If D has already successfully received β_s times the shared information, Phase II begins. Otherwise, repeat step1 in the next time slot.Phase II: S1 and S2 use TDMA to transmit personalized semantic information (S_xp and S_yp) to D. First, S1 transmits its information S_xp, followed by S2's information S_yp. The personalized transmission process is consistent with Phase I but β_p times. Once S_xp and S_yp are both transmitted, S1 and S2 send new information.§ PERFORMANCE OF DF RELAYING IN RAYLEIGH FADING CHANNELS To obtain theoretical performance expressions for the system, we can calculate the performance for different steps in MDMA separately. For Phase I step1 (pIS1s1 for short), the outage probability OP_pIS1s1 can be expressed as follows: OP_pIS1s1 = Pr{γ_S1D < Γ_th} = 1-e^-d_S1D^αΓ_th/SNR. For Phase I step2, the combined SNR γ_combined can be expressed as follows: γ_combined1 =γ_S1D+∑_Ri∈ Cγ_RiD Through the Theorem of Total Probability, the outage probability OP_pIS1s2 can be given by OP_pIS1s2 = ∑_C^C ≠∅Pr{γ_combined < Γ_th | γ_S1D < Γ_th}Pr{C}/1-Pr{C = ∅}. Let path 0 represent the direct link from S1 to D, and path i represent the cascaded link from S1 to Ri to D, where i = 1, ..., m. Let random variable y_S1,i denote the square gain on the i-th cascaded link. The random variable y_S1,i takes into account both the fading on the source-to-i-th relay link and the fading on the i-th relay-to-destination link. Then, y_S1,i has a probability density function (PDF) f_y_S1,i(x)= f_y_S1,i|negativePr{negative} +f_y_S1,i|positivePr{positive}. If the i-th path is negative, the conditional PDF f_y_S1,i|negative = δ(0), where δ(0) represents the Dirac Delta function, which is a pulse function at zero. The probability of this event occurring is denoted as A_i: A_i = Pr{γ_S1Ri < Γ_th} = 1-e^-d_S1Ri^αΓ_th/SNR. Obviously, the probability that the i-th path is positive is 1-A_i and f_y_S1,i|positive(x)= d_RiD^α/SNRe^-d_RiD^α/SNRx. Therefore, f_y_S1,i(x)= δ(0)A_i+d_RiD^α/SNRe^-d_RiD^α/SNRx(1-A_i) i=1, 2,..., m. To obtain Eq.(<ref>), the CDF of ∑_C^C ≠∅∑_Ri∈ Cγ_RiDPr{C} can be gotten firstly. The PDF can be determined through the moment generating functions, and then obtain the CDF from the PDF by integration. The MGF of i-th path is M_i(s) = A_i+(1-A_i)d_RiD^α/SNR/s+d_RiD^α/SNR. Due to the mutual independence of y_S1,i, the MGF of ∑_C^C ≠∅∑_Ri∈ Cγ_RiDPr{C} can be expressed as follows: M_sum(s) = ∏_i=1^m M_i(s). 
After some simplification and applying the Laplace inverse transform, the CDF can be obtained as Eq.(<ref>), where θ_x,y=d_RyD^α/d_RyD^α-d_RxD^α. Let p_γ_overall1(j) be the probability which are random variable ∑_C^C ≠∅∑_Ri∈ Cγ_RiD of value (j-1/𝒩Γ_th≤∑_C^C ≠∅∑_Ri∈ Cγ_RiD<j/𝒩Γ_th). Hence, p_γ_overall1(j)=F_sum(j/𝒩Γ_th)-F_sum(j-1/𝒩Γ_th) j=1,2,...,𝒩.where 𝒩 represents the granularity of differentiation. Similarly, p_γ_S1D(j) is the probability which are random variable γ_S1D of value {(j-1/𝒩Γ_th≤γ_S1D<j/𝒩Γ_th)|γ_S1D < Γ_th},p_γ_S1D(j)=e^-d_S1D^αj-1/𝒩Γ_th/SNR-e^-d_S1D^αj/𝒩Γ_th/SNR/1-e^-d_S1D^αΓ_th/SNR j=1,2,...,𝒩.Let p_γ_combined1(j) be the probability which are random variable ∑_C^C ≠∅{γ_combined < Γ_th | γ_S1D < Γ_th} of value (j-1/𝒩Γ_th≤∑_C^C ≠∅{γ_combined < Γ_th | γ_S1D < Γ_th}<j/𝒩Γ_th). Obviously, p_γ_combined1= conv(p_γ_overall1,p_γ_S1D).and Eq.(<ref>)can be written as followsOP_pIS1s2= ∑_j=1^𝒩p_γ_combined1(j)/1-∏_i=1^mA_i.According to the system model, the transmission process of Phase II S1 step 1 and Phase II S1 step 2 is similar to that of Phase I step 1 and Phase I step 2, with the exception of varying sizes of information. Therefore, the outage probability OP_pIIS1s1 and OP_pIIS1s2 can be given byOP_pIIS1s1= OP_pIS1s1, OP_pIIS1s2=OP_pIS1s2.Regarding Phase II S2 step 1 and Phase II S2 step 2, similar to the calculation of Phase I step 1 and Phase I step 2 mentioned earlier, obtaining OP_pIIS2s1 and OP_pIIS2s2 only requires replacing S1 with S2 in the calculation formulas. Thus, the expressions of the outage probability for each phase and each step are obtained. The final expression for the outage probability OP is as follows:OP=OP_pIS1s1∑_i=1^β_sp_pIS1s1,i+OP_pIS1s2∑_i=1^β_sp_pIS1s2,i+OP_pIIS1s1∑_i=1^β_pp_pIIS1s1,i+OP_pIIS1s2∑_i=1^β_pp_pIIS1s2,i+OP_pIS2s1∑_i=1^β_pp_pIIS2s1,i+OP_pIS2s2∑_i=1^β_pp_pIIS2s2,i,where p_pIS1s1,i represents the probability of phase I i-th S1 transmitting step1 and so on. Through STM, these probabilities can be obtained. Let p be the probability distribution p(1)= [p_pIS1s1,1, p_pIS1s2,1,p_pIS1s1,2, p_pIS1s2,2,...,p_pIS1s1,β_s,p_pIS1s2,β_s,p_pIIS1s1,1,p_pIIS1s2,1,...,p_pIIS1s1,β_p,p_pIIS1s2,β_p,p_pIIS2s1,1,p_pIIS2s2,1,...,p_pIIS2s1,β_p,p_pIIS2s2,β_p].Using STM,p(i+1)=p(i)T, where T=[ p_(pIS1s1,1)-(pIS1s1,1) p_(pIS1s1,1)-(pIS1s2,1) p_(pIS1s1,1)-(pIS1s1,2) ...p_(pIS1s1,1)-(pIIS2s1,β_p)p_(pIS1s1,1)-(pIIS2s2,β_p); p_(pIS1s2,1)-(pIS1s1,1) p_(pIS1s2,1)-(pIS1s2,1) p_(pIS1s2,1)-(pIS1s1,2) ...p_(pIS1s2,1)-(pIIS2s1,β_p)p_(pIS1s2,1)-(pIIS2s2,β_p); p_(pIS1s1,2)-(pIS1s1,1) p_(pIS1s1,2)-(pIS1s2,1) p_(pIS1s1,2)-(pIS1s1,2) ...p_(pIS1s1,2)-(pIIS2s1,β_p)p_(pIS1s1,2)-(pIIS2s2,β_p); ... ... ... ... ... ...;p_(pIIS2s1,β_p)-(pIS1s1,1)p_(pIIS2s1,β_p)-(pIS1s2,1)p_(pIIS2s1,β_p)-(pIS1s1,2) ... p_(pIIS2s1,β_p)-(pIIS2s1,β_p) p_(pIIS2s1,β_p)-(pIIS2s2,β_p);p_(pIIS2s2,β_p)-(pIS1s1,1)p_(pIIS2s2,β_p)-(pIS1s2,1)p_(pIIS2s2,β_p)-(pIS1s1,2) ... p_(pIIS2s2,β_p)-(pIIS2s1,β_p) p_(pIIS2s2,β_p)-(pIIS2s2,β_p); ],where p_(pIS1s1,1)-(pIS1s2,1) represents the probability of a state transition from (pIS1s1,1) to (pIS1s2,1) and so on. 
According to the system model, we have p_(pIS1s1,j)-(pIS1s1,j)=(1-e^-d_S1D^αΓ_th/SNR)∏_i=1^mA_i j=1,2,...,β_s, p_(pIS1s1,j)-(pIS1s2,j)=(1-e^-d_S1D^αΓ_th/SNR)(1-∏_i=1^mA_i) j=1,2,...,β_s, p_(pIS1s1,j)-(pIS1s1,j+1)=e^-d_S1D^αΓ_th/SNR j=1,2,...,β_s-1, p_(pIS1s1,β_s)-(pIIS1s1,1)=e^-d_S1D^αΓ_th/SNR,p_(pIS1s1,j)-(pIS1s1,j)=OP_pIS1s1∏_i=1^mA_ip_(pIS1s1,j)-(pIS1s2,j)=OP_pIS1s1(1-∏_i=1^mA_i) j=1,2,...,β_s, p_(pIS1s2,j)-(pIS1s1,j) =OP_pIS1s2 p_(pIS1s2,j)-(pIS1s1,j+1) =1-OP_pIS1s2 p_(pIS1s1,j)-(pIS1s1,j+1) =1-OP_pIS1s1 j=1,2,...,β_s-1, p_(pIS1s1,β_s)-(pIIS1s1,1)=1-OP_pIS1s1, p_(pIIS1s1,j)-(pIIS1s1,j)=OP_pIIS1s1∏_i=1^mA_ip_(pIIS1s1,j)-(pIIS1s2,j)=OP_pIIS1s1(1-∏_i=1^mA_i) j=1,2,...,β_p, p_(pIIS1s2,j)-(pIIS1s1,j) =OP_pIIS1s2 p_(pIIS1s2,j)-(pIIS1s1,j+1) =1-OP_pIIS1s2 p_(pIIS1s1,j)-(pIIS1s1,j+1) =1-OP_pIIS1s1 j=1,2,...,β_p-1, p_(pIIS1s1,β_p)-(pIIS1s1,1)=1-OP_pIIS1s1, p_(pIIS2s1,j)-(pIIS2s1,j)=OP_pIIS2s1∏_i=1^mÂ_ip_(pIIS2s1,j)-(pIIS2s2,j)=OP_pIIS2s1(1-∏_i=1^mÂ_i) j=1,2,...,β_p, p_(pIIS2s2,j)-(pIIS2s1,j) =OP_pIIS2s2 p_(pIIS2s2,j)-(pIIS2s1,j+1) =1-OP_pIIS2s2 p_(pIIS2s1,j)-(pIIS2s1,j+1) =1-OP_pIIS2s1 j=1,2,...,β_p-1, p_(pIIS2s1,β_p)-(pIS1s1,1)=1-OP_pIIS2s1, Â_i = Pr{γ_S2Ri < Γ_th} = 1-e^-d_S2Ri^αΓ_th/SNR i=1,2,...,m,and the remaining elements In T that have not been mentioned are all assumed to be zero.Based on the algorithm 1 described in <cit.> and utilizing T and p(1), we can obtain the final probability distribution of the states. By substituting this distribution into Eq.(<ref>), we can derive the closed-form analytical expression for the outage probability. Furthermore, the time slot cost for each data T_c is a performance metric worthy of attention, which can be calculated as follows:T_c= lim_N →∞∑_i=1^N OP^i-1(1-OP)i=lim_N →∞ (1-OP) 1/1-OP[1+(OP)(1-(OP)^N-1)/1-OP-N(OP)^N]=1/1-OP.In addition, the resource utilization efficiency of the system φ can be defined as:φ= 2/T_c(β_s+2β_p)BW,where B represents the size of the utilized bandwidth, while W represents the size of the utilized power. p_(pIIS1s1,j)-(pIIS1s1,j)=(1-e^-d_S1D^αΓ_th/SNR)∏_i=1^mA_i j=1,2,...,β_p, p_(pIIS1s1,j)-(pIIS1s2,j)=(1-e^-d_S1D^αΓ_th/SNR)(1-∏_i=1^mA_i) j=1,2,...,β_p, p_(pIIS1s1,j)-(pIIS1s1,j+1)=e^-d_S1D^αΓ_th/SNR j=1,2,...,β_p-1, p_(pIIS1s1,β_p)-(pIIS2s1,1)=e^-d_S1D^αΓ_th/SNR, p_(pIIS2s1,j)-(pIIS2s1,j)=(1-e^-d_S2D^αΓ_th/SNR)∏_i=1^mA_i j=1,2,...,β_p, p_(pIIS2s1,j)-(pIIS2s2,j)=(1-e^-d_S2D^αΓ_th/SNR)(1-∏_i=1^mA_i) j=1,2,...,β_p, p_(pIIS2s1,j)-(pIIS2s1,j+1)=e^-d_S2D^αΓ_th/SNR j=1,2,...,β_p-1, p_(pIIS2s1,β_p)-(pIS1s1,1)=e^-d_S2D^αΓ_th/SNR, § SIMULATION PERFORMANCE RESULTSIn this section, simulation results are presented to demonstrate the validity of the derived theoretical analytical expressions. Additionally, performance comparisons are made between the cooperative network employing MDMA and the cooperative networks employing NOMA, FDMA, or TDMA. For all simulations, the following system parameters are taken into account unless otherwise specified. Suppose S1, S2, Ri (i=1,2,...,8) and D are all located on a two-dimensional plane, and their position coordinates are (20,20), (0,20), (50,50-100(i-0.5)/8+5), and (100,0), respectively. Path-loss exponent α = 3. The target data transmitting rate R_0=1 bit/s/Hz. The total number of one image information bits B_t=10 bits. The channel noise variance N_0=-50dBm.Furthermore, in all figures, blue markers represent simulation values, and red lines indicate the STM-based theoretical values, while the performance of other MA systems is based on simulation results. 
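As a rough illustration of how the closed-form link outages, the MRC combining step, and the STM iteration can be evaluated numerically, a sketch is given below. It is not the exact Algorithm 1 of the cited reference; the distances, SNR, and threshold are example values only, and the Monte-Carlo step-2 outage should correspond to the conditional probability used in the analysis above.

```python
import numpy as np

rng = np.random.default_rng(1)

def link_outage(d, alpha, snr, gamma_th):
    """Closed-form Rayleigh link outage: 1 - exp(-d^alpha * Gamma_th / SNR)."""
    return 1.0 - np.exp(-(d ** alpha) * gamma_th / snr)

def step2_outage_mc(d_sd, d_sr, d_rd, alpha, snr, gamma_th, n=100_000):
    """Pr{MRC sum < Gamma_th | direct link failed, decoding set non-empty}, by Monte Carlo."""
    d_sr = np.asarray(d_sr, dtype=float)
    d_rd = np.asarray(d_rd, dtype=float)
    a = link_outage(d_sr, alpha, snr, gamma_th)           # relay decoding-failure probabilities A_i
    mean_sd = snr * d_sd ** (-alpha)                       # mean of the direct-link SNR
    u = rng.random(n)                                      # gamma_S1D conditioned on being below the threshold
    g_sd = -mean_sd * np.log(1.0 - u * (1.0 - np.exp(-gamma_th / mean_sd)))
    decoded = rng.random((n, len(d_sr))) > a               # decoding set per trial
    g_rd = rng.exponential(snr * d_rd ** (-alpha), size=(n, len(d_rd)))
    combined = g_sd + np.sum(decoded * g_rd, axis=1)       # MRC combining at the destination
    nonempty = decoded.any(axis=1)
    return np.mean(combined[nonempty] < gamma_th)

def stationary(T, n_iter=10_000):
    """Iterate p <- pT from a uniform start, as a stand-in for the STM evaluation."""
    p = np.full(T.shape[0], 1.0 / T.shape[0])
    for _ in range(n_iter):
        p = p @ T
    return p

# With the full transition matrix T of the protocol and the per-state outage
# probabilities, the overall OP follows as a weighted sum, and the per-packet
# time-slot cost is T_c = 1 / (1 - OP); for example, OP = 0.05 gives:
print("T_c for OP = 0.05:", 1.0 / (1.0 - 0.05))
```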
In Fig. <ref>, it can be observed that the outage probability of the cooperative network employing MDMA is the lowest among all the compared systems. Moreover, as the parameter η approaches 1, the outage probability decreases further. This can be attributed to the fact that S1 is closer to the relay and destination nodes, resulting in a higher proportion of transmissions utilizing S1 in the MDMA-based network; as η increases, the usage of S1 for transmission becomes more dominant. Additionally, the outage probability of the TDMA-based network is similar to that of the MDMA-based network, while the highest outage probability is observed in the NOMA-based network. This is because in NOMA, in a power-constrained system, X and Y can only be transmitted simultaneously using the maximum transmission power of the node, which greatly increases the difficulty of successful decoding. From Fig. <ref>, it can be observed that when the transmission power of the S node P_Ts is low, the MDMA-based system requires the minimum number of time slots for transmission. As P_Ts increases, the number of time slots required for the FDMA-based system decreases rapidly. When P_Ts exceeds 8 dBm, the FDMA-based system becomes second only to the MDMA system with η=0.9. As P_Ts further increases, the number of time slots required by NOMA also decreases rapidly and approaches that of FDMA. According to Fig. <ref>, it can be observed that the MDMA system achieves the highest resource utilization when the transmission power of the nodes P_T is less than 14 dBm. When P_T exceeds 14 dBm, the NOMA system gradually surpasses the MDMA systems with η=0.5 and η=0.7, but it still remains lower than the MDMA system with η=0.9. Additionally, it can be seen that the FDMA system consistently maintains a lower resource utilization. This is due to the fact that FDMA utilizes two sets of power and two sets of bandwidth, while the remaining MA systems are assumed to use just one set of resources. In the NOMA system, the resource utilization increases rapidly with the increase of P_T. This is because the NOMA system can transmit both X and Y simultaneously using one set of resources, which significantly improves the transmission efficiency, especially at high signal-to-noise ratios, and fully utilizes the available resources. Moreover, it can be observed from all the result figures that the theoretical results and the simulation results are well matched. § CONCLUSION This paper combines a cooperative MRC communication network with MDMA. By applying the MGF and the STM, closed-form theoretical expressions for the outage probability, the resource utilization efficiency, and the average number of time slots required for each image transmission are derived. Furthermore, through numerous simulation results, it is demonstrated that our work outperforms traditional approaches. b1 P. Zhang, W. Xu, H. Gao, K. Niu, X. Xu, X. Qin, C. Yuan, Z. Qin, H. Zhao, J. Wei, F. Zhang, "Toward Wisdom-Evolutionary and Primitive-Concise 6G: A New Paradigm of Semantic Communication Networks", Engineering, Vol. 8, pp. 60-73, 2022. b2 H. Xie, Z. Qin, G. Y. Li and B. -H. Juang, "Deep Learning Enabled Semantic Communication Systems," IEEE Transactions on Signal Processing, vol. 69, pp. 2663-2675, 2021. b3 C. Dong, H. Liang, X. Xu, S. Han, B. Wang and P. Zhang, "Semantic Communication System Based on Semantic Slice Models Propagation," IEEE Journal on Selected Areas in Communications, vol. 41, no. 1, pp. 202-213, Jan. 2023. b4 S.
Wang et al., "Wireless Deep Video Semantic Transmission," IEEE Journal on Selected Areas in Communications, vol. 41, no. 1, pp. 214-229, Jan. 2023. b5X. Luo, R. Gao, H. -H. Chen, S. Chen, Q. Guo and P. N. Suganthan, "Multi-Modal and Multi-User Semantic Communications for Channel-Level Information Fusion," IEEE Wireless Communications, Early Access . b6 W. Li, H. Liang, C. Dong, X. Xu, P. Zhang and K. Liu, "Non-Orthogonal Multiple Access Enhanced Multi-User Semantic Communication," IEEE Transactions on Cognitive Communications and Networking, Early Access. b7P. Zhang, X. Xu, C. Dong, K. Niu, H. Liang, Z. Liang, X. Qin, M. Sun, H. Chen, N. Ma, W. Xu, G. Wang, and X. Tao, “Model division multiple access for semantic communications,”Frontiers of Information Technology Electronic Engineering, pp. 801–812, 2023. b8 Y. Mao, O. Dizdar, B. Clerckx, R. Schober, P. Popovski and H. V. Poor, "Rate-Splitting Multiple Access: Fundamentals, Survey, and Future Research Trends,"IEEE Communications Surveys & Tutorials, vol. 24, no. 4, pp. 2073-2126, 2022. b9N. C. Beaulieu and J. Hu, "A closed-form expression for the outage probability of decode-and-forward relaying in dissimilar Rayleigh fading channels," IEEE Communications Letters, vol. 10, no. 12, pp. 813-815, December 2006. b10W. An, C. Dong, X. Xu, C. Xu, S. Han and L. Teng, "Opportunistic Routing-Aided Cooperative Communication Network With Energy Harvesting," IEEE Internet of Things Journal, vol. 10, no. 8, pp. 6928-6945, 15 April, 2023. b11J. -K. Lee, H. -J. Noh and J. Lim, "TDMA-Based Cooperative MAC Protocol for Multi-Hop Relaying Networks," IEEE Communications Letters, vol. 18, no. 3, pp. 435-438, March 2014. b12Z. Ding, H. Dai and H. V. Poor, "Relay Selection for Cooperative NOMA," IEEE Wireless Communications Letters, vol. 5, no. 4, pp. 416-419, Aug. 2016. b13A. Salem and L. Musavian, "NOMA in Cooperative Communication Systems With Energy-Harvesting Nodes and Wireless Secure Transmission," IEEE Transactions on Wireless Communications, vol. 20, no. 2, pp. 1023-1037, Feb. 2021. b14L. Teng, W. An, C. Dong, X. Xu and B. Han, "Opportunistic Routing Aided Cooperative Communication MRC Network With Energy-Harvesting Nodes," IEEE Open Journal of the Communications Society, vol. 4, pp. 1091-1110, 2023. | http://arxiv.org/abs/2311.15593v1 | {
"authors": [
"Lei Teng",
"Wannian An",
"Chen Dong",
"Xiaoqi Qin",
"Xiaodong Xu"
],
"categories": [
"cs.IT",
"cs.PF",
"eess.SP",
"math.IT"
],
"primary_category": "cs.IT",
"published": "20231127073756",
"title": "Performance Analysis of MDMA-Based Cooperative MRC Networks with Relays in Dissimilar Rayleigh Fading Channels"
} |
[Figure: Action customization results of our method. By inverting representative action-related features, the learned identifiers “<A>” can be paired with a variety of characters and animals to contribute to the generation of accurate, diverse and high-quality images.] *Work done during internship at Alibaba Group. ^†Corresponding author. This study focuses on a novel task in text-to-image (T2I) generation, namely action customization. The objective of this task is to learn the co-existing action from limited data and generalize it to unseen humans or even animals. Experimental results show that existing subject-driven customization methods fail to learn the representative characteristics of actions and struggle in decoupling actions from context features, including appearance. To overcome the preference for low-level features and the entanglement of high-level features, we propose an inversion-based method (ADI) to learn action-specific identifiers from the exemplar images. ADI first expands the semantic conditioning space by introducing layer-wise identifier tokens, thereby increasing the representational richness while distributing the inversion across different features. Then, to block the inversion of action-agnostic features, ADI extracts the gradient invariance from the constructed sample triples and masks the updates of irrelevant channels. To comprehensively evaluate the task, we present an action customization benchmark that includes a variety of actions, each accompanied by meticulously selected samples. Both quantitative and qualitative results show that our ADI outperforms existing baselines in action-customized T2I generation. Our project page is at <https://adi-t2i.github.io/ADI>. § INTRODUCTION Thanks to the remarkable advances in text-to-image generation models <cit.>, in particular the recent diffusion model <cit.>, high-quality and diverse images can be synthesized under the control of text descriptions. However, it is difficult to provide precise descriptions of the desired actions, which are highly abstracted and summarized concepts. Therefore, relying solely on textual descriptions to generate actions tends to reduce fidelity to user requirements. Additionally, controllable generation methods <cit.> that rely on the conditioning of a skeleton or sketch image suffer from limited diversity and freedom, and they show difficulty generalizing to unseen subjects without retraining. In this paper, we study the action customization task, capturing the common action in the given images to generate new images with various new subjects. To better understand the challenge of action customization, we start by examining existing subject-driven customization methods. Observations shown in <ref> can be divided into two categories. Several methods, including DreamBooth <cit.>, Textual Inversion <cit.>, and ReVersion <cit.>, generate images that are unrelated to specific actions, suggesting that they fail to capture the representative characteristics of the actions. Since most of them are designed to invert appearance features with a pixel-level reconstruction loss, low-level details are emphasized during optimization while high-level action features are neglected.
Benefiting from fine-tuning cross-attention or utilizing per-layer tokens, Custom Diffusion <cit.> and P+ <cit.> offer a larger semantic conditioning space for learning new concepts. Consequently, they are capable of encoding action-related knowledge such as “raises one finger” or “raises both arms for cheering” from exemplar images. However, they fail to decouple the focus from action-agnostic features, such as the appearance of the human body. These pieces of information are also encoded into the learned identifiers and “contaminate” the generation of animals during inference. As a result, the intended gorilla is replaced by a woman, and the tigers generated by the two methods exhibit human arms instead. To avoid the appearance leakage while accurately modeling the target action, we propose ADI to learn the optimal action-specific identifiers. Firstly, we expand the semantic conditioning space by applying layer-wise identifier tokens. Since existing works have analyzed that different layers have varying degrees of control over low-level and high-level features <cit.>, such an expansion increases the accommodation of various features, making it easier to invert action-related features. Furthermore, we would like to decouple the action-agnostic features from the learning of action identifiers. To achieve this, we discover invariant mechanisms in the data that are difficult to vary across examples. Specifically, given an exemplar image with the specific action, another same-action image can be randomly sampled from the training data, forming a context-different tuple. Meanwhile, leveraging mature subject-driven customization techniques, an image that shares a similar context can be quickly synthesized to form an action-different tuple. To decouple the highly-coupled features, we disentangle action-agnostic features at the gradient level, and construct two gradient masks by comparing the differences between the gradients over the input tuples. By overwriting the merged gradient mask onto the gradient of the anchor image, the update of action-agnostic channels on the identifiers is discarded. Moreover, as a pioneering effort in this direction, we also contribute a new action customization benchmark, which provides a testbed of unique actions with diverse images for the under-explored task. We conduct extensive experiments on this benchmark, and a quick glance at the performance of ADI is illustrated in <ref>, where users can freely combine the designated action identifiers with various unseen humans and even animals. In summary, the main contributions of our work are three-fold: * We propose a novel action customization task, which requires learning the desired action from limited data for future generation. While existing customization methods focus on reproducing appearances, we highlight this under-studied but important problem. * We contribute a benchmark where a variety of unique actions with manually filtered images provide the evaluation conditions for the task. * We devise the ADI method, which successfully inverts action-related features into the learned identifiers that can be freely combined with various characters and animals to generate high-quality images. § RELATED WORK §.§ Text-to-Image (T2I) Generation Generating high-quality and diverse images from textual conditions has received considerable attention from both the research community and the general public.
The previous dominant generative adversarial networks (GANs) <cit.>, consisting of a generator and a discriminator, suffer from unstable optimization and less diverse generations due to the adversarial training <cit.>. And variational autoencoders (VAEs) <cit.>, which apply a probabilistic encoder-decoder architecture, are also prone to posterior collapse and over-smoothed generations <cit.>. Text-conditional auto-regressive models <cit.> have shown more impressive results, but require time-consuming iterative processes to achieve high-quality image sampling. More recently, diffusion models have emerged as a promising alternative, achieving impressive results with open-vocabulary text descriptions through their natural fitting to inductive biases of image data <cit.>. GLIDE <cit.> introduces text conditions into the diffusion process through the use of an unclassified guide. DALL-E 2 <cit.> employs a diffusion prior module and cascading diffusion decoders to generate high-resolution images based on the CLIP <cit.> text encoder. Imagen <cit.> focuses on language understanding by using a large T5 language model to better represent semantics. The latent diffusion model <cit.> improves computational efficiency by performing the diffusion process in low-dimension latent space with an autoencoder. Finally, Stable Diffusion (SD) <cit.> employs a cross-attention mechanism to inject textual conditions into the diffusion generation process, aligning with the provided textual input. However, it is difficult to provide precise action description in text,since user intent and machine understanding are not aligned. Furthermore, experimental results in <ref> show that some actions are difficult to generate correctly without re-training, e.g., “performs a handstand”.§.§ Controllable Action GenerationIn this paper, we focus on transferring the desired action to unseen people, characters, and even animals for photorealistic image generation. Existing efforts take source images and pose information (e.g., skeletal images or body parsing) as conditions to control the generation. Previous controllable solutions based on GANs <cit.> and VAEs <cit.> suffer from training difficulties and poor generation results. Some subsequent works <cit.> introduce text conditions to guide the action generation, yet fail with open vocabulary due to the small size of the vocabulary pools. Thanks to the significant advances of T2I diffusion models, recent methods <cit.>, in particular the popular ControlNet <cit.>, involve additional trainable modules to add arbitrary conditions, improving the versatility and controllability. While gaining a tremendous amount of traction from the community, ControlNet refers to the provided skeleton image to generate the action, which reduces the flexibility and diversity. In addition, the objective of designing a general framework makes it not well-targeted to animals. In this work, we investigate customized solutions for action generation.§.§ Subject-Driven CustomizationDue to the demand for generating images with user-specified subjects, customization methods <cit.> tailored to the appearance have been studied in context of T2I generation. Specifically, DreamBooth <cit.> binds rare new words with specific subjects through fine-tuning the whole T2I generator. Textual Inversion <cit.> learns an extra identifier to represent the subject and adds the identifier as a new word to the dictionary of the text encoder. 
Custom Diffusion <cit.> only fine-tunes the key and value matrices of the cross-attention to represent new concepts. P+ <cit.> extends the textual-conditioning space with per-layer tokens to allow for greater disentangling and control. Despite the success achieved, the experimental results in <ref> show their failure in action customization. A recent work, ReVersion <cit.>, makes progress in learning specific relations, including some interactions, from exemplar images. However, the design of the method, which specializes in learning spatial relations, makes it difficult to invert action information. § ACTION CUSTOMIZATION BENCHMARK Given a set of exemplar images 𝒳 = {𝐱_1, 𝐱_2, ⋯, 𝐱_N }, we assume that all images contain the same action performed by different people. The action-agnostic descriptions associated with the exemplar images are also provided, which can be used as prompt templates during training. The objective of the action customization task is to extract the co-existing action and transfer it to the synthesis of action-specific images with different new subjects. In order to provide suitable conditions for systematic comparisons on this task, we present a new benchmark, which consists of diverse actions accompanied by meticulously selected sample images. The benchmark can be used for both quantitative and qualitative comparisons. Action Categories. To determine the involved actions, we first request GPT-4 <cit.> to provide 50 candidate action categories, and then attempt to collect images for these candidates. Only actions for which sufficient high-quality images can be collected are preserved. We finally define eight unique actions, ranging from single-handed gestures (e.g., “raises one finger”) to full-body movements (e.g., “performs a handstand”). Exemplar Images and Prompts. For each action, we collect ten example images with corresponding textual descriptions, featuring different people. We manually remove action-related descriptions from the textual content to make them suitable as prompt templates. Evaluation Subjects. We provide a list containing 23 subjects, including generic humans (e.g., “An old man”), well-known personalities (e.g., “David Beckham”), and animals (e.g., “A panda”). The latter two are guaranteed to be completely unseen, which tests the generalization of the methods. § METHODOLOGY We start with the technical background in <ref>. Then, we provide a comprehensive description of our proposed ADI in <ref>. §.§ Preliminaries Our study is based on the Stable Diffusion (SD) <cit.> model, which is considered to be the public state-of-the-art text-to-image generator. Specifically, to operate the diffusion process <cit.> in a low-dimensional latent space, SD employs a hierarchical VAE that consists of an encoder ℰ and a decoder 𝒟. The encoder ℰ is tasked with encoding the given image 𝐱 into latent features 𝐳, and the decoder 𝒟 reconstructs the image 𝐱 from the latent, i.e., 𝐱 = 𝒟(𝐳) = 𝒟(ℰ(𝐱)). To control the generation with the textual conditions, given the noisy latent 𝐳_t, the current time step t and text tokens 𝐲, a conditional U-Net <cit.> denoiser is trained to predict the noise ϵ added to the latent 𝐳: ℒ = 𝔼_{𝐳∼ℰ(𝐱), 𝐲, ϵ∼𝒩(0,1), t} [ ‖ϵ - ϵ_θ(𝐳_t, t, 𝐲)‖_2^2 ], where 𝐲 is obtained by feeding the prompt into a CLIP <cit.> text encoder. During inference, the pre-trained SD first samples a latent 𝐳_T from the standard normal distribution 𝒩(0,1). Iteratively, 𝐳_{t-1} can be obtained by removing noise from 𝐳_t conditioned on 𝐲.
After the final denoising step, the latent 𝐳_0 is mapped to generate an image 𝐱 with the decoder 𝒟. §.§ ADI Given exemplar images that all contain a specific entity, existing subject-driven inversion methods <cit.> learn to represent the entity as an identifier token v ∈ℝ^d. The learned v can then be employed in text prompts to produce diverse and novel images, where the entity can be generated in different contexts. In this paper, we continue this vein by capturing the common action in exemplar images through finding the optimal identifiers. An overview of our proposed ADI is illustrated in <ref>. Expanding Semantic Inversion. To overcome the preference for low-level appearance features, we apply layer-wise identifier tokens to increase the accommodation of various features. Specifically, for the l-th layer, where l ∈ [1, L] and L is the number of cross-attention layers in the T2I model, a new identifier token v_l ∈ℝ^d is initialized. Feeding the prompt with v_l into the text encoder, the output tokens 𝐲_l control the update of the latents in the l-th layer of SD, thus influencing the generation of the visual content. The learned tokens from all layers form a token set 𝒱, which can then be paired with different subjects for generation. Rather than having a single identifier token take on the responsibility of reconstruction, having separate identifiers at different layers effectively ensures that more features are inverted, including the action-related features we care about. Learning Gradient Mask with Context-Different Tuple. The next step is to prevent the identifiers from learning features that are not relevant to the action and thus contaminating the subsequent image generation. Given 𝐱^(a, c) ∈𝒳 as an anchor sample, where a denotes the specific action and c denotes the action-agnostic context contained in the image, including human appearance and background, we can randomly sample another image 𝐱^(a, c̄) from 𝒳, where c̄ indicates that the context is different from c. Taking the context-different tuple 𝐱^(a, c) and 𝐱^(a, c̄) as the input, we can calculate two gradients of the denoising loss ℒ with respect to the identifier token v: g^(a, c) = ∂ℒ^(a, c)/∂ v and g^(a, c̄) = ∂ℒ^(a, c̄)/∂ v. Note that the subscript l is omitted for the sake of uniformity and clarity. Each identifier token contains multiple channels, each carrying semantically distinct and independent information, and the gradient consistency of a channel indicates that the channel is likely to carry information about the specific action. Therefore, we calculate the absolute value of the difference between the two gradients, △g^c = | g^(a, c) - g^(a, c̄) |, where the semantic channels with a small difference can be regarded as action-related feature channels of the action a, which are expected to be preserved. Specifically, we sort the differences from the largest to the smallest, denote the value at the β percent position as γ^β, and take it as a threshold. In other words, β is the proportion of channels that are masked. Then, the mask, which shares the same dimension as v, can be calculated for the k-th channel as m^c_k = 0 if △g^c_k ⩾ γ^β, and m^c_k = 1 if △g^c_k < γ^β. By overwriting the mask onto the gradient of the anchor sample, the action-related knowledge is preserved and incorporated into the update of v, while the updates on action-agnostic channels are ignored. Note that since the specific visual invariance about the action changes slightly depending on the sample pair, the channels that are masked may not be exactly the same each time.
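In code, the context-difference mask reduces to a per-channel absolute difference followed by a quantile threshold. The sketch below uses toy tensors; in the actual pipeline both gradients would be obtained by backpropagating the denoising loss to the same identifier token for the two images of the tuple.

```python
import torch

def context_mask(grad_anchor, grad_context_diff, beta=0.6):
    """Mask the beta fraction of channels whose gradients differ most across
    a context-different pair; surviving channels are treated as action-related."""
    diff = (grad_anchor - grad_context_diff).abs()
    gamma = torch.quantile(diff, 1.0 - beta)   # threshold separating the top beta fraction
    return (diff < gamma).float()              # 1 = keep channel, 0 = mask channel

# toy stand-ins for gradients of the denoising loss w.r.t. one identifier token v
g_anchor = torch.randn(768)
g_context = torch.randn(768)
m_c = context_mask(g_anchor, g_context, beta=0.6)
```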
Another point worth highlighting is that both samples use the prompt of the anchor sample 𝐱^(a, c) when calculating the gradients. Since the visual context of 𝐱^(a, c̄) is inconsistent with the description in the prompt, the reconstruction loss favours larger gradients in the context-related channels. In this way, the action-related channels found through the threshold will be more accurate. Learning Gradient Mask with Action-Different Tuple. Although the context-different tuples have the same action semantics, there may be differences in the visualisation of the actions, and therefore the channels associated with the most representative action features do not necessarily have a smaller gradient difference. Since learning the gradient mask with the context-different tuple alone is not stable and effective enough, we also construct action-different tuples to generate the gradient mask from another perspective. For each sample 𝐱^(a, c) in 𝒳, we can use it to quickly train a subject-driven customization model (e.g., DreamBooth) that effectively inverts most of the low-level context information. Therefore, by filling the prompt template of 𝐱^(a, c) with action descriptions that are different from a, the trained customization model can generate various action images as 𝒳^(ā, c). Due to the one-shot training and the concise text, these images may not be consistent with the action descriptions, or the context may differ from the original 𝐱^(a, c), but in practice we have found that the quality is sufficient to diversify the action variation. In this way, when 𝐱^(a, c) is sampled during training, we can randomly sample an image 𝐱^(ā, c) from 𝒳^(ā, c) to construct the action-different tuple. The gradient of 𝐱^(ā, c) with respect to the token v can be calculated as g^(ā, c) = ∂ℒ^(ā, c)/∂ v. Similarly, both samples use the prompt of the anchor sample 𝐱^(a, c). We can also calculate the absolute value of the gradient difference, △g^a = | g^(a, c) - g^(ā, c) |, where the semantic channels with a small difference can be regarded as context-related feature channels of the action a, which are expected to be masked. Therefore, we have m^a_k = 0 if △g^a_k < λ^β, and m^a_k = 1 if △g^a_k ⩾ λ^β, where λ^β is the threshold chosen to mask β percent of the channels. Merging Gradient Masks for Context. According to the above discussion, the final input of the model is a triplet ℐ = {𝐱^(a, c), 𝐱^(a, c̄), 𝐱^(ā, c)}, and the two masks are created from these two tuples. Therefore, we can merge m^a and m^c to get the final context mask m. In practice, we keep only the intersection of the unmasked channels as unmasked, as we find this merging strategy performs better. Formally, we have m = m^c ∩ m^a. Then, we overwrite m onto the gradient of the anchor sample: ĝ^(a, c) = m ⊙ g^(a, c). Note that the masked gradient ĝ^(a, c), in which the action-agnostic channels are masked, is the only gradient used to update v. Therefore, our identifiers can adequately invert action-related features. § EXPERIMENTS §.§ Experiment Setup Baselines. For the baselines included in the comparison, we select Stable Diffusion <cit.>, ControlNet <cit.>, DreamBooth <cit.>, Textual Inversion <cit.>, ReVersion <cit.>, Custom Diffusion <cit.> and P+ <cit.>. Implementation Details. For our ADI, we set the masking ratio β to 0.6 and use the AdamW <cit.> optimizer with a learning rate of 2e-4. For a fair comparison, we use 50 steps of the DDIM <cit.> sampler with a scale of 7.5 for all methods.
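Concretely, the merged-mask update amounts to intersecting the two masks and multiplying them into the anchor gradient before the optimizer step. The snippet below is only a sketch: the loss and the masks are dummies standing in for the real denoising loss and the tuple-derived masks.

```python
import torch

v = torch.randn(768, requires_grad=True)          # one layer-wise identifier token
optimizer = torch.optim.AdamW([v], lr=2e-4)

loss = (v ** 2).sum()                             # dummy stand-in for the anchor denoising loss
loss.backward()

m_c = (torch.rand(768) > 0.6).float()             # toy context-different mask
m_a = (torch.rand(768) > 0.6).float()             # toy action-different mask
m = m_c * m_a                                     # intersection of the unmasked channels

with torch.no_grad():
    v.grad *= m                                   # discard updates on action-agnostic channels

optimizer.step()                                  # only the surviving channels are updated
```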
Unless otherwise specified, Stable Diffusion v2-1-base is selected as the default pre-trained model, and images are generated at a resolution of 512×512. All experiments are conducted on one A100 GPU. §.§ Quantitative Comparison We perform the quantitative comparison with human evaluators. For each subject-action pair, four images are randomly sampled from the images generated by the different methods. Given (1) the exemplar images of a specific action, and (2) the textual name of the subject, human evaluators are asked to determine whether (1) the generated action is consistent with those in the exemplar images, and (2) the generated character corresponds with the name without obvious deformations, defects, or abnormalities. A generated image is only considered totally correct if both the action and the character are correctly generated. <ref> reports the action, subject and total accuracy for all methods. Some observations are worth highlighting: (1) Given the textual descriptions of the actions, Stable Diffusion yields the highest total accuracy of all baseline methods. This suggests that the existing baselines do not take full advantage of the exemplar images. (2) Despite relying on the skeleton as the condition to improve action generation, ControlNet fails to maintain the performance of subject generation, resulting in an unsatisfactory total accuracy. (3) The action accuracy of DreamBooth, Textual Inversion, and ReVersion is incredibly low, reflecting their complete failure to invert the action-related features. (4) Custom Diffusion and P+ improve action accuracy more or less at the expense of subject accuracy. (5) Attributed to the extended semantic conditioning space and the gradient masking strategy, our ADI dramatically improves the accuracy of action generation while maintaining excellent subject accuracy. As a result, ADI achieves the best total accuracy, outperforming the baselines by 23.92%. §.§ Qualitative Comparison <ref> illustrates the qualitative comparison of all methods involved. It can be observed that although text descriptions of the actions are provided, the actions generated by Stable Diffusion still differ from the examples. ControlNet can only maintain a rough consistency in posture and struggles to match the generated subjects to the desired requirements, resulting in incomplete or distorted body structures, while sacrificing diversity. The subject-driven customization methods, as discussed earlier, either fail to generate the actions or exhibit appearance characteristics that differ from the specified subjects. This suggests that they are unable to invert only the features associated with the actions. Owing to its gradient-level design, our ADI decouples action-related features from action-agnostic information and blocks the inversion of the latter. This allows ADI to effectively model the invariance of the action and transfer it to different characters and animals without sacrificing image quality and variety. §.§ Ablation Study We conduct ablation experiments to verify the individual effects of the proposed contributions. From the generation results in <ref>, it can be observed that (1) The removal of the extension to the semantic conditioning space diminishes the inversion ability of ADI. (2) Both the gradient masks learned from the context-different and the action-different tuples are essential.
Removing either one can lead to inadequate learning of action knowledge or a degradation in the quality of the subject's appearance. (3) We also attempt to reverse the gradient masks, i.e., updates to channels that should have been masked are preserved, and updates to the other channels are cancelled. Obviously, this results in action-related features not being inverted. §.§ Further Analysis Impact of Masking Strategy. To validate the masking strategy in our ADI, we compare it with four other strategies in <ref>. Specifically, on the gradients for each update: (1) Uniform: we uniformly mask β percent of the channels. (2) Random: we randomly mask β percent of the channels. (3) Min: we mask the β percent of channels with the lowest values. (4) Max: we mask the β percent of channels with the highest values. We observe that none of these four strategies successfully captures high-level features related to actions, since the images they generate are independent of the specified action. The comparison also shows that the effectiveness of our ADI not only depends on the masking itself, but also requires learning the action-agnostic channels by modeling the invariance of action and context. Impact of Gradient Mask Merging Strategy. As shown in <ref>, ADI takes the intersection of the two gradient masks as the default merging strategy. We compare this with selecting the union of the two masks, and illustrate the generation results in <ref>. Since only channels that are preserved by both masks are updated, taking the intersection can effectively filter out action-agnostic features, leading to better customization of the actions. In contrast, taking the union may dilute the most representative action features due to the preserved context information. Impact of Masking Ratio β. In <ref>, we vary the masking ratio β from 0.2 to 0.8. When β is small, fewer dimensions of the gradient are masked, and more action-agnostic features are retained, hindering the generation of the subject's appearance. This situation improves as β is gradually increased. However, when β is relatively large, due to the large number of masked dimensions, some of the most discriminative features of the actions may not be inverted, resulting in incomplete learning of the actions. Note that the optimal value of β may differ between actions. § CONCLUSION In this paper, we investigate an under-explored text-to-image generation task, namely action customization. To understand the challenge of the task, we first visualize the inadequacy of existing subject-driven methods in extracting action-related features from the entanglement of action-agnostic context features. Then, we propose a novel method named ADI to learn action-specific identifiers from the given images. To increase the accommodation of knowledge relevant to the action, ADI extends the inversion process with layer-wise identifier tokens. Furthermore, ADI generates gradient masks to block the contamination of action-agnostic features at the gradient level. We also contribute an action customization benchmark for evaluating performance on the task. Since there is a growing need to synthesize action-specific images with various new subjects, we hope that our work can highlight this important direction. § ACKNOWLEDGEMENT This work was supported by STI 2030—Major Projects (2022ZD0208800), NSFC General Program (Grant No. 62176215). This work was supported by Alibaba Group through the Alibaba Research Intern Program. § BENCHMARK DETAILS In this section, we describe the presented benchmark in detail. The full benchmark will be publicly available.
§.§ Actions We define eight diverse, unique and representative actions as follows:* salute: “salutes”* gesture: “raises one finger”* cheer: “raises both arms for cheering”* pray: “has hands together in prayer”* sit: “sits”* squat: “squats” * meditate: “meditates”* handstand: “performs a handstand”where the action categories (displayed in boldface) are used only to distinguish between actions, and the actions can be best described with the exemplar images. And the text descriptions (displayed in italics) that are used for Stable Diffusion are obtained using an image captioning model. §.§ Subjects We provide 23 subjects for evaluation as follows:* generic human: “A boy”, “A girl”, “A man”, “A woman”, “An old man”* well-known personalities: “Barack Obama”, “Michael Jackson”, “David Beckham”, “Leonardo DiCaprio”, “Messi”, “Spiderman”, “Batman”* animals: “A dog”, “A cat”, “A lion”, “A tiger”, “A bear”, “A polar bear”, “A fox”, “A cheetah”, “A monkey”, “A gorilla”, “A panda”where diverse and unseen subjects and the introduction of animals demand that, models not only retain pre-trained knowledge without forgetting, but also accurately generate animal representations without distortion or anomalies. § BASELINE DETAILS * ControlNet <cit.>: We use OpenPose <cit.> as a preprocessor to estimate the human pose of the given reference image.* DreamBooth <cit.>:The training is with a batch size of 2 and a learning rate of 5e-5. The number of training steps is set to 1000, and 50 images are generated for prior preservation.* Textual Inversion <cit.>: The training is with a batch size of 2 and a learning rate of 2.5e-4. The number of training steps is set to 3000.* ReVersion <cit.>: The training is with a batch size of 2 and a learning rate of 2.5e-4. The number of training steps is set to 3000. The weighting factors of the denoising loss and the steering loss are set to 1.0 and 0.01. The temperature parameter in the steering loss is set to 0.07. And in each iteration, 8 positive samples are randomly selected from the basis preposition set.* Custom Diffusion <cit.>: The training is with a batch size of 2 and a learning rate 1e-5. The number of training steps is 2000.And the number of regularization images is 200.* P+ <cit.>: The training is with a batch size of 8 and a learning rate 5e-3. The number of training steps is 500. § ADDITIONAL EXPERIMENTAL RESULTS§.§ Comparison with Action-Prior DreamBooth Our utilizes the generated action-different samples with the same context to capture the context-related features. To analyze the advantages of controlling updates with these data rather than directly employing them in training, we present a new baseline named action-prior DreamBooth, which replaces the class prior generated by original Stable Diffusion with these action-different samples. Therefore, in addition to the inherent action invariance, contextual invariance also emerges in the training data. However, as shown in <ref>, this new baseline still struggles with inverting action-specific features. This observation suggests a lack of ability to capture high-level invariance. §.§ Additional Qualitative ResultsTo show the effectiveness of , we illustrate additional generation results in <ref>, covering all actions within . | http://arxiv.org/abs/2311.15841v2 | {
"authors": [
"Siteng Huang",
"Biao Gong",
"Yutong Feng",
"Xi Chen",
"Yuqian Fu",
"Yu Liu",
"Donglin Wang"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231127140713",
"title": "Learning Disentangled Identifiers for Action-Customized Text-to-Image Generation"
} |
Lung adenocarcinoma is a morphologically heterogeneous disease, characterized by five primary histologic growth patterns. The quantity of these patterns can be related to tumor behavior and has a significant impact on patient prognosis. In this work, we propose a novel machine learning pipeline capable of classifying tissue tiles into one of the five patterns or as non-tumor, with an Area Under the Receiver Operating Characteristic Curve (AUC-ROC) score of 0.97. Our model's strength lies in its comprehensive consideration of cellular spatial patterns, where it first generates cell maps from Hematoxylin and Eosin (H&E) whole slide images (WSIs), which are then fed into a convolutional neural network classification model. Exploiting these cell maps provides the model with robust generalizability to new data, achieving ≈ 30% higher accuracy on unseen test sets compared to current state-of-the-art approaches. The insights derived from our model can be used to predict prognosis, enhancing patient outcomes. Keywords: computational pathology, histology, LUAD, growth patterns. § INTRODUCTION Lung cancer is one of the most prevalent forms of cancer worldwide, being the second most common cancer (after breast) and the leading cause of cancer-related deaths, accounting for approximately 18% of all cancer deaths globally <cit.>. Around 85% of all reported lung cancer cases are classified as non-small cell lung cancer (NSCLC), with lung adenocarcinoma (LUAD) as its prevalent sub-type. According to the latest World Health Organization (WHO) classification, invasive nonmucinous LUAD grows into five primary histologic growth patterns: lepidic, acinar, papillary, micropapillary, and solid <cit.>; examples are shown in Fig. <ref>(a). These patterns can be related to prognosis and affect the patient outcome, with lepidic having the most favorable prognosis, followed by acinar and papillary, whereas solid and micropapillary are known to have the worst prognosis of all <cit.>. As cancer is heterogeneous, LUAD often presents a varied combination of multiple growth patterns within the same tumor. The World Health Organization recommends a three-tiered grading system for lung adenocarcinomas, which involves assessing the predominant histological growth pattern and determining the presence or absence of high-grade patterns (solid and micropapillary). This approach has been shown to provide more accurate information about patient outcomes. However, accurately assessing the various growth patterns can be challenging in routine practice due to the presence of mixed patterns, tumor heterogeneity and time constraints. Additionally, quantifying these growth patterns through visual assessment is subjective and can vary between different observers <cit.>.
The automation of growth pattern classification using machine learning would be a valuable addition to pathology, enhancing the precision and objectivity of such task while alleviating its labor-intensive and time-consuming nature.Due to the high dimensionality and large size of Whole Slide Images (WSIs), it is necessary to divide them into smaller tiles before feeding them to deep learning models. As a result, two different approaches for splitting training and testing data have emerged in the literature <cit.>:* A WSI-based split: Where the splitting of data is done at the WSI level, before the tiling. This ensures that the tiles in the test set originate from WSIs that none of its tiles have been utilized for training. Thus creating a truly unseen test set.* A tile-based split: Where the WSIs are first divided into tiles, followed by mixing these tiles for subsequent division into training and testing sets. In this approach, training and test sets could have tiles originating from the same WSI.The classification of LUAD growth patterns is an active area of research. The majority of the machine learning algorithms designed for that task predict a single pattern for each WSI, namely the predominant pattern <cit.>. These methods achieve good accuracies and can be used to assist pathologists in their routine work. However, they do not deliver adequate performance when assessed on the tile level. This might be explained that the aggregation method producing the slide level prediction diminishes the error by neglecting the misclassified tiles. Only a limited number of studies <cit.>have developed tile based classifiers. However, in all these studies a tile-based split was adopted. Consequently, a tile in the test set might originate from the same WSI or even be adjacent to a tile used in training the model. This notably enhances the classifier's performance because tiles from the same pattern within a WSI possess substantial visual resemblance, unlike similar pattern tiles from other WSIs. As expected, these models often encounter failure when new data or WSIs that have not been seen before are used for external validation.In this paper, we introduce a novel machine learning pipeline capable of classifying tissue tiles into one of the five growth patterns or as non-tumor. Our model’s strength lies in its comprehensive consideration of tissue cellular composition, where we first generate cell maps from Hematoxylin and eosin (H&E) WSIs, which are then fed into a convolutional neural network classification model. Specifically, our contributions are:* We introduce the concept of cell maps for predicting pathology specific tasks that can add generalizability to current machine learning approaches.* We propose a pipeline for LUAD growth pattern classification that outperforms the current state of the art approaches on unseen test sets, when adopting the WSI-based splitting approach. § THE PROPOSED METHOD The proposed approach can be broken down into two major parts: (1) The construction of the cell maps, (2) training a CNN to predict the different growth patterns. An overview of the proposed method is illustrated in Fig.<ref>.§.§ Cell maps construction We leveraged Hover-Net <cit.> to find the locations and type of cells in a given tissue. The Hover-Net model was trained on the PanNuke dataset <cit.> composed from 19 different tissue types, including lung. We processed the entire WSI at 20 × magnification. 
From those resulting nuclei only the neoplastic epithelial, non-neoplastic epithelial, and connective cells were selected. Where the arrangement of the neoplastic cells is the main visual aspect differentiating the different growth patterns. The non-neoplastic cells were used to define the normal tissue, and the connective cells were incorporated to aid in the classification of the papillary pattern; which is defined as tumor papillae arranged around a fibrovascular core. Finally, a binary mask was created for each nuclei class, where a value of 1 was given to a pixel if it corresponds to a nucleus centroid, of that given class, or was in a radius of 4 pixels from it. The three binary masks where then stacked to form a 3-channel cell map image and resized to the dimensions of its WSI at 5 × magnification. As shown in Fig<ref>(b), the visual distinction between the different patterns is maintained in the generated cell maps.§.§ Growth pattern classification We finetuned a ResNet50, pretrained on ImageNet, after adapting the last fully connected layer to predict one of the six classes (lepidic, acinar, papillary, micropapillary, solid, and normal). Cell map tiles of size 256 ×256were passed to the network, after applying random horizontal and vertical flipping. The model was trained for 20 epochs with a learning rate of 1e-5, which due to the simplicity of the input image representation, we found to be sufficient for the model to converge. § EXPERIMENTS§.§ Dataset The dataset used in this paper consists of 1,034 tiles obtained from 18 WSIs across 12 different centers from the TCGA-LUAD dataset <cit.>. Two pulmonary pathologists independently annotated growth patterns regions on the WSIs. A tile was included if both pathologists opinions agreed on at least 80% of its area. Each WSI includes on average 2-3 patterns. The most common pattern was solid having 311 tiles, and the least common patterns were papillary and micropapillary having only 54 and 39 tiles, respectively. §.§ Results In this section, we compare our proposed cell maps approach with other state of the art methods and baselines including: * A multiscale approach: where we proposed to train two identical ResNet-34 models: one on H&E tiles at 20 × magnification and the other on H&E tiles at 5 × magnification. The final classification result is determined by selecting the prediction with the highest probability among the two models. This approach mirrors the way pathologists typically analyze patterns, where they make an initial decision at lower magnifications and then confirm or alter their decision at higher magnifications.* CoAtNet-1 <cit.>: a recently published network architecture combining the self-attention mechanism with convolutions. This architecture achieved the best performance on the GasHisSDB dataset <cit.>, a histology dataset of H&E images for the diagnosis of gastric cancer. We used H&E tiles at 20 × magnification as an input to the model.* ResNet50: (the architecture used in <cit.> and <cit.>) pre-trained on ImageNet fed with H&E tiles at 20 × magnification. This architecture was used in * The approach proposed by AlSubaie et. 
al.<cit.> :The input to the model is a 6-channel image, where the tile at 20 × magnification, and at 10 × magnification of an H&E image were stacked and aligned in the center to form the input to a modified ResNet50.* HoverNet <cit.> cellular features: A tile is formulated as a 12-feature vector composed of the count of cells, maximum and minimum distance between cell centroids for each of following cell types : neoplastic, non-neoplastic, connective, and inflammatory cells present in that tile. A support vector machine is then used to classify these feature vectors into the same six classes. §.§.§ Weak validation For our first experiment, we adopted a tile-based data splitting approach, where we mixed all the tiles and used 5-fold cross validation to prove the effectiveness of the proposed model and its comparability to the currently published work. Quantitative results are listed in Table<ref>.§.§.§ Strong validation For our main experiments, we adopt a WSI-based data splitting approach, where we trained and evaluated each model 5 times. In each trial 6 WSIs were randomly sampled from the dataset to be the unseen test set, ensuring that it includes samples from all the six classes. The remaining slides were divided into tiles and split into 90% training and 10% validation set. The average and standard deviation of the performance measures (AUC-ROC, accuracy, and macro F1-score) are reported in Table <ref>. It has been shown in Table <ref> that when performing proper validation by adopting a WSI-based data splitting, the proposed cell maps approach performs better on unseen test sets compared to other methods, some of which perform worse than random guessing. Although using the H&E image directly for classification initially suggests better performance, when properly validating the models via a WSI-based splitting, the performance dramatically drops by over 60%. Conversely, when employing cell maps, the decline in performance is notably less, dropping by under 20%. § CONCLUSIONSWe proposed a new approach for LUAD growth pattern classification, that outperforms previously proposed approaches in the literature and baseline methods, when evaluated using WSI-based splitting. Additionally, we present a new representation of WSIs, called cell maps, which effectively captures cellular composition in more compact and lighter images. This innovative approach can significantly expedite the training of machine learning algorithms for some pathological tasks. Our future directions include acquiring more annotations to expand our dataset along with testing the model on external cohorts. We aim to project the model predictions on the entire WSI for prognosis and survival analysis. § ACKNOWLEDGMENTSAA is fully funded by the Saudia Arabia Cultural Bureau in London.IEEEbib | http://arxiv.org/abs/2311.15847v1 | {
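The tile-based and WSI-based splits discussed in this paper differ only in how folds are formed: plain K-fold over tiles versus a grouped split keyed on the slide identifier. A minimal scikit-learn sketch (tile counts and slide identifiers below are placeholders) is:

```python
import numpy as np
from sklearn.model_selection import KFold, GroupKFold

tile_ids = np.arange(1034)                        # one entry per tile
slide_ids = np.random.randint(0, 18, size=1034)   # hypothetical WSI of origin for each tile

# tile-based split (weak validation): tiles of one WSI can fall on both sides
weak_folds = list(KFold(n_splits=5, shuffle=True, random_state=0).split(tile_ids))

# WSI-based split (strong validation): all tiles of a WSI stay on the same side
strong_folds = list(GroupKFold(n_splits=5).split(tile_ids, groups=slide_ids))
```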
"authors": [
"Arwa Al-Rubaian",
"Gozde N. Gunesli",
"Wajd A. Althakfi",
"Ayesha Azam",
"Nasir Rajpoot",
"Shan E Ahmed Raza"
],
"categories": [
"eess.IV",
"cs.CV",
"cs.LG"
],
"primary_category": "eess.IV",
"published": "20231127141251",
"title": "Cell Maps Representation For Lung Adenocarcinoma Growth Patterns Classification In Whole Slide Images"
} |
Folded symplectic forms in contact topology Joseph Breen January 14, 2024 =========================================== The clinical management of breast cancer depends on an accurate understanding of the tumor and its anatomical context to adjacent tissues and landmark structures. This context may be provided by semantic segmentation methods; however, previous works have been largely limited to a singular focus on the tumor alone and rarely other tissue types. In contrast, we present a method that exploits tissue-tissue interactions to accurately segment every major tissue type in the breast including: chest wall, skin, adipose tissue, fibroglandular tissue, vasculature and tumor via standard-of-care Dynamic Contrast Enhanced MRI. Comparing our method to prior state-of-the-art, we achieved a superior Dice score on tumor segmentation while maintaining competitive performance on other studied tissues across multiple institutions. Briefly, our method proceeds by localizing the tumor using 2D object detectors, then segmenting the tumor and surrounding tissues independently using two 3D U-nets, and finally integrating these results while mitigating false positives by checking for anatomically plausible tissue-tissue contacts. The object detection models were pre-trained on ImageNet and COCO, and operated on MIP (maximum intensity projection) images in the axial and sagittal planes, establishing a 3D tumor bounding box. By integrating multiple relevant peri-tumoral tissues, our work enables clinical applications in breast cancer staging, prognosis and surgical planning.§ INTRODUCTIONOver the past decade, Dynamic Contrast-Enhanced MRI (DCE MRI) has emerged as a standard for breast imaging. Contemporaneously, work in medical image analysis for breast cancer has focused on the segmentation of the tumor, which enables assessments of tumor dimension <cit.>. Few segmentation methods address the surrounding non-neoplastic tissues <cit.>. Unfortunately, this limits the clinical applicability of these methods since the determination of the T stage, a key component of the American Joint Committee on Cancer (AJCC) breast cancer staging guidelines, requires not only tumor dimensions but also an evaluation of surrounding skin or chest wall involvement <cit.>. Additionally, tumor adjacent tissues also affect surgical planning and the growth characteristics of the cancer . The need to segment the surrounding tissues as well as the tumor highlights a need for multi-class breast MRI semantic segmentation which includes all clinically relevant tissues.Several factors pose unique challenges for breast MRI analysis. The variability of the DCE MRI acquisition process impairs the robustness of many methods of image analysis. Scanner artifacts can include ghosting, geometric deformations, low signal-to-noise ratio, and intensity non-uniformity, which often exhibit variation over time for a single scanner, and even more dramatically between scanners <cit.>. Other hardware variability sources include breast coil placement, and interference resulting in Moiré artifacts. Additionally, variability is introduced by the acquisition parameters such as the spatial and temporal intervals. Temporal intervals can vary from 1-30 seconds for ultra-fast DCE MRI, or up to 80-100 seconds for standard DCE MRI <cit.>. Training over a variety of image acquisition parameters, scanners and sites can mitigate unexpected performance degradation.Additionally, breast anatomy is unique in its heterogeneous presentation. 
It is common for the breasts to vary greatly in size and shape between patients. Breast volume can vary by 4-fold or more <cit.>. There is also significant variation in breast density (the ratio of glandular tissue to breast volume) <cit.>. A major source of variability is background parenchymal enhancement (BPE), which often occurs in dense breast cases and is a major confounding factor in breast MRI interpretation <cit.>. These are compounded by the heterogeneous nature of breast cancer, which varies greatly in size and morphology, especially between grades such as DCIS vs invasive breast cancer. Breast cancer can also induce variation in other tissues, such as skin tagging or thickening.§ PRIOR WORK AND NOVEL CONTRIBUTION Variability of DCE MRI can be addressed using harmonization techniques. MRI harmonization techniques were generally developed for brain MRI and adapted for use in breast MRI by prior works. It can also be addressed by using data augmentation during training to develop robust models. The use of affine or noise-based image transformations is common in medical image segmentation, however the use of elastic deformations and simulations of common MRI artifacts remains rare <cit.>.The limited availability of training data is a common concern as deep learning is best suited to settings with broad data availability. Previous works address this using self-supervised learning <cit.>, transfer learning <cit.>, and weakly supervised learning <cit.>. These approaches are rarely used in breast MRI analysis.Semantic segmentation methods for breast MR largely target the tumor, rarely segmenting glandular or adipose tissues. Methods that use standard-of-care T1w DCE MRI are limited to segmenting tumor, adipose and glandular tissues, and typically make use of standard U-Net techniques <cit.>. Methods using multi-parametric MRI may segment a larger variety of tissues, but clinical applicability is limited due to the non-standard MRI sequences used.Object detection methods have previously been applied in medical imaging domains but remain rare in breast MRI analysis. They are most prevalent in lung CT analysis <cit.>. These methods are useful for reducing false positive rates, which is necessary to enable automated tumor segmentation.Contributions. We present an automated method for segmenting multiple tissues on standard-of-care breast DCE MRI for clinical usage as a visual aid for surgical planning. 
Unlike previous works, our method delineates all clinically relevant anatomical structures of the breast simultaneously, including skin, adipose, fibroglandular, vascular, chest wall, and tumor tissues while training and evaluating on a diverse multi-institutional dataset.This paper makes both methodological and applied contributions:* A method for utilizing 2D object detectors pre-trained on natural images for 3D object localization.* A method for filtering false positive tissues in breast MR by tissue-tissue contact.* We show applicability to a broader set of breast tissues than previous works.* We show applicability to breast MRIs from unseen institutions with radiologist vetted ground truth.§ METHODSOur method is a multi-phase system which uses a collection of deep learning models and post-processing steps, * Bi-MIP (Maximum Intensity Projection) Tumor Localization: The tumor is detected by 2D object detection networks in both the sagittal and axial MIPs.* Tumor Semantic Segmentation: The tumor is segmented by a 3D semantic segmentation network.* Multi-Tissue Semantic Segmentation: All tissues of interest are segmented by a 3D semantic segmentation network.* Fusion of Tumor Localization and Tumor Segmentation: The 2D tumor localization bounding boxes are projected into 3D and used to crop the tumor segmentation.* Tissue-Tissue Interaction Heuristics: The predictions of the tumor and multi-tissue models are merged according to heuristics developed to utilize anatomical knowledge to minimize common neural network failure modes. The input to the overall system is a T1-weighted (T1w) fat-suppressed DCE MRI acquired with standard-of-care parameters, and with a known disease laterality (left, right or bilateral). The output is a discrete labeling of the image with air, skin, adipose tissue, fibroglandular tissue, vasculature, tumor and chest wall. All model training was conducted on a single NVIDIA GPU with no more than 24 GB VRAM. §.§ Data Preparation and Augmentation For each DCE MRI series, we chose three timepoints, the pre-contrast, first post-contrast (early post-contrast) and a post-contrast series at least 5 minutes after the first (late post-contrast). The images were resampled to 1 mm in all three spatial dimensions and linearly registered using phase cross correlation.The preprocessing for the 2D localization networks consisted of applying maximum intensity projections (MIPs) through the axial and sagittal planes. The sagittal projection was limited to the half with the diseased breast. To reduce background enhancement from glandular tissue, blood vessels, and other non-tumor enhancing regions, an anisotropic median filter (window size of 10 mm) was applied along the MIP axis. Finally, the intensities are normalized to match ImageNet statistics where the timepoints selected during image processing map to the RGB channels.When training the 3D semantic segmentation networks, we performed an extensive image augmentation routine. First, we used random cropping to a spatial extent of 128 x 128 x 64 mm^3 (the short axis being the Superior-Inferior axis). Then we randomly applied one or more image augmentation methods from the following list: additive gaussian noise, spatially correlated multiplicative gaussian noise, rotations, scaling, elastic deformations, and drift <cit.>. Finally, the image is normalized to the standard normal distribution. 
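A rough sketch of the Bi-MIP preprocessing described above is given below. The array layout, the handling of laterality and the exact normalisation details are our assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import median_filter

def mip_inputs(volumes, diseased_left=True, window_mm=10):
    """Build axial and sagittal MIP images from three registered DCE timepoints.

    volumes: float array of shape (3, Z, Y, X) holding the pre-, early- and
    late post-contrast volumes resampled to 1 mm isotropic spacing.
    """
    # anisotropic median filter along the projection axis, then project along Z (axial MIP)
    axial = median_filter(volumes, size=(1, window_mm, 1, 1)).max(axis=1)      # (3, Y, X)

    # restrict to the diseased half; which half is "left" depends on image orientation
    x_mid = volumes.shape[-1] // 2
    half = volumes[..., :x_mid] if diseased_left else volumes[..., x_mid:]
    sagittal = median_filter(half, size=(1, 1, 1, window_mm)).max(axis=-1)     # (3, Z, Y)

    # map the three timepoints to RGB channels and normalise toward ImageNet statistics
    mean = np.array([0.485, 0.456, 0.406])[:, None, None]
    std = np.array([0.229, 0.224, 0.225])[:, None, None]
    norm = lambda img: (img / img.max() - mean) / std
    return norm(axial), norm(sagittal)
```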
§.§ Bi-MIP Tumor LocalizationThe 2D tumor localization networks include an axial projection localization network and a sagittal projection localization network. Both networks are constructed as Shifting Windows (Swin) Transformers trained on ImageNet-1K for the backbone network of a Mask-RCNN that was trained on COCO <cit.>. We use the “mm_detection” software library, version 2.27 to construct and train the networks <cit.>.The axial and sagittal networks were both trained for 50 epochs, with a batch size of 2 images per iteration. The target was the bounding box and mask computed from the projection of the 3D tumor segmentation onto the axial or sagittal plane. Cross entropy loss was used on the object detection and mask outputs of the networks. We used the AdamW optimizer with 1 x 10-4 learning rate, betas of 0.9 and 0.999, and weight decay of 0.05 <cit.>. §.§ Tumor Semantic SegmentationThe tumor semantic segmentation network uses a U-net architecture constructed using the “MONAI” software library, version 0.9 <cit.>. We used the “DynUNet” architecture with [48, 96, 192, 384, 768] filters per block, residual skip connections, and instance normalization. Overall, the model had 51M parameters. We used the Adam optimizer, with a learning rate of 5 x 10-5 for 200 epochs. The models were trained using combo loss where the contribution of Dice component was 80% <cit.>. The batch size for the training procedure was 4 volumes per iteration. A positive and negative ratio of 9:1 was used for the tumor class. §.§ Multi-Tissue Semantic SegmentationWe used a Residual U-net architecture, with 107M parameters <cit.>. The filter sizes were [48, 96, 192, 384] with 3 layers per block. The model was trained using the “pytorch” library, version 1.8 <cit.>. Optimization was performed with stochastic gradient descent set to a learning rate of 1 x 10-4, Nesterov momentum of 0.9, and weight decay of 5 x 10-5, for 60 epochs over the training data split. The batch size was 1. Image crops were sampled uniformly. Finally, the multi-tissue segmentation model was calibrated using temperature scaling to minimize the expected calibration error.§.§ Fusion of Tumor Localization and Tumor Segmentation The axial and sagittal box proposals form a 3D bounding box through a series of fusion and projection operations. The axial bounding box proposals are fused through a weighted average to form a single box in the diseased breast. The sagittal box proposals that overlap with the processed axial box along the anterior-posterior direction are also fused. The 3D box is then merged by averaging the dimensions from the processed boxes along the intersecting anterior-posterior plane; the remaining non-intersecting dimensions (left-right and superior-inferior) are copied over from the processed boxes. When no bounding box proposals are produced from either model, the imaging axis is used as the bounding box dimension (half the axis for the left-right plane of the diseased breast).§.§ Tissue-Tissue Interaction HeuristicsWe utilize the non-tumor probabilities estimated by the multi-tissue model and the tumor probability from the tumor segmentation model. Then, we suppressed tumor connected components outside of the 3D bounding box identified above. Finally, we suppress the tumor probability by subtracting the vasculature probability and re-normalize probabilities to sum to 1. We analyze the contacts of various tissue components to each other to further mitigate false positives. 
We drop air that does not contact the edge of the image, skin that does not contact air, and tumor whose contact area with glandular tissue is less than 64 mm^2. Then, we use hysteresis thresholding of the tumor prediction, including up to an additional 4 mm radius around the tumor prediction, which mitigates jagged and under-segmented boundaries.
§ EXPERIMENTS
§.§ Development Dataset
We sourced de-identified data retrospectively from a variety of public clinical trials and private collaborations. We collected standard-of-care T1-weighted fat-suppressed DCE MRI from the ISPY1 (32 patients) and ISPY2 (202 patients) public datasets, and seven other institutions (268 patients) <cit.>. The dataset included a variety of MRI from scanners made by General Electric, Philips, Toshiba, and Siemens. The magnetic field strength of the scanners also varied between 1.5 and 3.0 Tesla.
§.§.§ Tumor Development Data and Ground Truthing
The development set for the localization and tumor segmentation models consisted of 358 training, 44 validation, 46 test, and 48 held-out patients. The hold-out set was collected from 2 institutions excluded from the training set. All tumor ground truth data were 3D annotations with 1 mm cubic resolution, identifying the extent of the solid mass of the cancer, informed by radiology reports when available. To annotate the ground truth data, a convolutional neural network was initially used to segment the tumor, and the segmentation boundaries were manually adjusted as necessary. A US board-certified radiologist advised as needed in dispute cases.
§.§.§ Multi-Tissue Development Data and Ground Truthing
For the multi-tissue segmentation model development dataset, there were 50 training and 15 validation patients. Annotations were performed using 3D Slicer <cit.>. The annotation classes were air, skin, fibroglandular tissue, adipose tissue, tumor, vasculature, and chest. The chest tissue class was a catch-all for the thoracic cavity, muscles, bones, and lymph nodes. All annotations had 1 mm cubic resolution. Vasculature annotations were limited to vessels above 2 mm in diameter. The computer vision tools built into 3D Slicer were used to assist in the annotation efforts. The authors performed quality control on this set.
§.§.§ Multi-Tissue Evaluation Dataset and Ground Truthing
We evaluate our method on 32 complete segmentations with the tissues described previously. The patients used for the evaluation set were sourced from 3 institutions distinct from the training dataset, and the segmentations were sent to radiologists for quality control. The radiologists reviewed 37 completed segmentations, rejecting 5 due to insufficient quality (a requirement of 70% estimated Dice score in general, and 80% for chest and adipose tissue in particular). Four exclusions were due to tumor labeling quality, and one was due to vasculature.
§.§ Segmentation Results with Tumor Hold-out Dataset
We evaluated the tissue-tissue heuristic method and the tumor localization and segmentation methods for tumor segmentation performance on the hold-out data split (Table <ref>). We found significant (paired t-test with α=5%) improvements in mean Dice score and Robust Hausdorff Distance for tumor segmentations compared to the multi-tissue model alone. Also, false-positive tumor components per patient fell from 38 for the multi-tissue model alone to 0.25 when applying all methods.
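For concreteness, the contact-based filtering rules from the Tissue-Tissue Interaction Heuristics subsection above can be sketched as follows. The helper names, the connectivity used for components and contacts, and the assumption of 1 mm isotropic voxels are illustrative choices, not the authors' exact implementation.

```python
# Minimal sketch of the tissue-contact false-positive filtering: drop air not
# touching the image edge, skin not touching air, and tumor components with
# less than 64 mm^2 of contact with fibroglandular tissue (1 mm isotropic voxels).
import numpy as np
from scipy.ndimage import label, binary_dilation

def keep_components(mask, keep_fn):
    """Keep only connected components of `mask` for which keep_fn(component) is True."""
    labels, n = label(mask)
    out = np.zeros_like(mask, dtype=bool)
    for i in range(1, n + 1):
        comp = labels == i
        if keep_fn(comp):
            out |= comp
    return out

def contact_area_mm2(comp, other):
    # Voxels of `other` adjacent to the component approximate the contact surface
    # (roughly 1 mm^2 per voxel at 1 mm isotropic resolution).
    return np.count_nonzero(binary_dilation(comp) & other)

def touches_image_edge(comp):
    interior = np.zeros(comp.shape, dtype=bool)
    interior[1:-1, 1:-1, 1:-1] = True
    return bool((comp & ~interior).any())

# air = keep_components(air, touches_image_edge)
# skin = keep_components(skin, lambda c: contact_area_mm2(c, air) > 0)
# tumor = keep_components(tumor, lambda c: contact_area_mm2(c, fibroglandular) >= 64)
```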
§.§ Segmentation Results with Multi-Tissue Evaluation Dataset
We compared our methodology to the popular nnU-Net methodology <cit.>, since U-Net methods have been applied extensively to breast MRI analysis <cit.>. We trained nnU-Net on a single data split, similarly to our multi-tissue model. In evaluation on the multi-tissue evaluation dataset (Table <ref>), the presented method was on par with nnU-Net on non-tumor tissues, while being significantly better in terms of tumor segmentation (two-tailed, paired t-test, p=0.018). The presented method was also much better in terms of false-positive tumor connected-component counts, with an average of 0.48 per patient, compared to 3.8 for nnU-Net. We qualitatively compare the inference results in Figure <ref>.
§ CONCLUDING REMARKS
Previous models were limited in robustly segmenting multiple breast tissues using standard-of-care DCE MRI across sites. Here, we presented the first 3D multi-tissue deep learning model for breast MRI that encompasses clinically relevant non-neoplastic tissues, including adipose tissue, glandular tissue, vasculature, skin, and chest wall, while also achieving excellent tumor segmentation performance. Our work is a step towards the broader vision of panoptic segmentation in medical imaging and opens new avenues for incorporating 3D visualization into clinical applications for comprehensive tumor staging, surgical planning, biomedical imaging research, investigations into tumor biology, and patient education.
Thanks to the data annotators for providing the segmentations needed for development of the model. Thanks to Anant Madabhushi for helpful discussions regarding the presentation of the work. | http://arxiv.org/abs/2311.16213v1 | {
"authors": [
"Arda Pekis",
"Vignesh Kannan",
"Evandros Kaklamanos",
"Anu Antony",
"Snehal Patel",
"Tyler Earnest"
],
"categories": [
"eess.IV",
"cs.CV",
"cs.LG",
"I.4.6; J.3"
],
"primary_category": "eess.IV",
"published": "20231127182207",
"title": "Seeing Beyond Cancer: Multi-Institutional Validation of Object Localization and 3D Semantic Segmentation using Deep Learning for Breast MRI"
} |
Quadrature Rules on Triangles and Tetrahedra for Multidimensional Summation-By-Parts Operators. Zelalem Arega Worku, Jason E. Hicken, David W. Zingg.
Conventional Federated Domain Adaptation (FDA) approaches usually demand an abundance of assumptions, which makes them significantly less feasible for real-world situations and introduces security hazards. This paper relaxes the assumptions of previous FDA work and studies a more practical scenario named Universal Federated Domain Adaptation (UFDA). It only requires the black-box model and the label-set information of each source domain, while the label sets of different source domains may be inconsistent and the target-domain label set is completely unknown. Towards a more effective solution for our newly proposed UFDA scenario, we propose a corresponding methodology called Hot-Learning with Contrastive Label Disambiguation (HCLD). It tackles UFDA's domain-shift and category-gap problems by using one-hot outputs from the black-box models of the various source domains. Moreover, to better distinguish the shared and unknown classes, we further present a cluster-level strategy named Mutual-Voting Decision (MVD) to extract robust consensus knowledge across peer classes from both source and target domains. Extensive experiments on three benchmark datasets demonstrate that our method achieves comparable performance for our UFDA scenario with far fewer assumptions, compared to previous methodologies that require comprehensive additional assumptions.
§ INTRODUCTION
Federated Learning (FL) <cit.> allows models to be optimized across decentralized devices while keeping data localized, where no clients are required to share their local confidential data with other clients or the centralized server. Traditional FL often struggles to produce models that can effectively generalize to new unlabeled domains from clients due to the barrier presented by domain shifts <cit.>. To address this, Federated Domain Adaptation (FDA) <cit.> was proposed and has achieved tremendous success, as it allows knowledge transfer from decentralized source domains to an unlabeled target domain using Domain Adaptation (DA) techniques.
Nonetheless, current FDA scenarios often operate under the presumption that the model parameters or gradients optimized on the source domains are available to the target client. However, acquiring such information in real-world situations is exceptionally challenging due to commercial confidentiality. Also, exposing such information introduces potential risks such as model misuse and white-box attacks. To establish a relaxed condition, Federated Domain Adaptation with Black-Box Models (B^2FDA) <cit.> is introduced, where the target-domain client can only access the application programming interfaces (APIs) of the various source domains. However, most existing B^2FDA approaches assume that the label sets of the different source domains must perfectly align with each other and with that of the target domain. This assumption is particularly challenging to fulfill in real-world scenarios. First, source data can originate from vastly diverse domains. For example, the biometric data of a single client could stem from unrelated sources like the medical domain (e.g., clinical records from different hospitals) or the financial domain (e.g., user records from different banks).
Second, in real-world scenarios, acquiring information about the label set of the target domain samples is often a formidable task. Consequently, attempting to align the label sets of source and target domains becomes impractical.To further minimize those in-practical assumptions from B^2FDA, we introduce a new scenario Universal Federated Domain Adaptation (UFDA) towards a practical FDA set-up with practical assumptions.As shown in Figure. <ref>, in UFDA, the target domain solely requires a black-box model, devoid of its specifics (e.g., gradients). Meanwhile, we only need to know the source domains' label sets, which are not required to be identical as in B^2FDA scenarios, and the target domain's label set will remain entirely unknown as most real-world DA scenarios. On the other hand, different from most existing B^2FDA setups <cit.>, our UFDA presents two unique challenges: First, as the target domain's label set is completely unknown, the model optimized based on each individual source domain could be particularly imprecise for those unique categories of the target domain.Second, the completion uncertainty of the target domain's label set also makes it impossible to distinguish the shared and unknown classes among source and target domains. However, it is important in FDA problems to guarantee the consistency of label sets between source and target domains. To tackle the first challenge, we propose a methodology called Hot-Learning with Contrastive Label Disambiguation (HCLD).It adopts one-hot outputs (without confidence) produced by various source APIs, which generate more than one candidate pseudo-labels for each target sample. Compared with previous FDA methods, which directly adopt one candidate (with confidence) from source APIs by using the probability function (e.g., Softmax), our method can mitigate the impact caused by the falsely higher confidence in these non-existent categories.To obtain more credible pseudo-labels, we propose a Gaussian Mixture Model (GMM) based Contrastive Label Disambiguation (GCLD) method, which sharpens the shared-class confidence and smooths the unknown-class confidence. Specifically, it leverages contrastive learning (CL) <cit.> strategy to dynamically generate prototype-based clustering, which will fit a GMM <cit.> based on its self-entropy distribution for sample divisions.Therefore, the easy-to-learn sample can be treated as a shared-class sample while the hard-to-learn sample can be treated as an unknown-class sample.Furthermore, to address the second challenge, we propose a cluster-level Mutual-Voting Decision (MVD) strategy by leveraging the consensus knowledge of shared classes among source and target domains.We calculate a “mutual voting score" for each class based on the overlapping samples recognized as the same category from all APIs (i.e., source + target). Then, we use this score to distinguish each class as “shared" or “unknown" type.Our contributions are summarized as follows: * We introduce a new FDA scenario, UFDA, which not only inherits relaxed assumptions as in B^2FDA, but also eliminates the consistency requirement of label sets among source domains and keeps the target domain's label sets completely unknown, towards a practical scenario for real-world situations.* We proposed a novel methodology, HCLD, to address the imprecision issue for samples from non-existent categories. 
It adopts ensemble one-hot outputs from multi-source APIs to produce multiple candidate pseudo-labels and uses a GMM-based strategy GCLD to disambiguate those candidates.* We present a cluster-level strategy MVD to distinguish shared and unknown classes by leveraging consensus knowledge across peer classes from source and target domains. * We conduct extensive experiments on three DA benchmarks. The results demonstrate that our method exhibits performance on par with previous MDA approaches, yet relies on significantly fewer assumptions. This substantiates the practicality of our method.§ RELATED WORKS §.§ Multi-source Domain Adaptation (MDA)MDA <cit.> has gained significant attention as a means to mitigate performance degradation caused by domain shifts.Despite the achievements of MDA, many existing approaches <cit.> are limited to the assumption of perfectly matched label sets and have to access the raw multi-source, which can be inefficient and may raise concerns regarding data protection policies <cit.>. To tackle category shift issues, the UniMDA scenario is introduced <cit.>, where the label set among multi-sources differ, and no prior knowledge about the target label sets is accessible.In UniMDA, the concept of category shift was first introduced in DCTN <cit.>, which acknowledged that the number of categories in each source domain may differ from the target domain.DCTN learns transferable and discriminative representations via an alternating adaptation algorithm and a distribution-weighted combining rule.To address data privacy issues, source-free domain adaptation (SFDA) <cit.> and federated domain adaptation (FDA) <cit.> have attracted increasing attention. Instead of accessing the raw data directly, SFDA utilizing the well-trained model rather than the raw labeled data has emerged as a possible solution to this problem.Another setting that deals with unavailable source data is the FDA, where the goal is to develop a global model from decentralized datasets by aggregating the parameters of each local client <cit.>. Inspired by FL, <cit.> first raised the concept of the FDA. This work provides a solution named Federated Adversarial Domain Adaptation, which aims to address the FDA problem in a federated learning system using adversarial techniques. However, these approaches do not address both of these limitations simultaneously. Recently, a few works <cit.> explore SFDA under category shift.Despite their effectiveness, they require dedicated multi-source model specifics, which can be restricted due to their commercial value and associated risks, such as model misuse and white-box attacks.In this work, we deal with a practical scenario of UFDA, which requires neither the shared data and model specifics, consistency of label sets among source domains, and information on the target domain label set.§.§ Contrastive learning (CL) Due to the success of CL <cit.>, numerous efforts have been made to improve the robustness of classification tasks by harnessing the advantages of CL. For instance, <cit.> employed CL as a pre-training technique for their classification model. Another approach, RRL <cit.> introduced label cleaning utilizing two thresholds on soft labels, which are calculated from the predictions of previous epochs and their nearest neighbors. Similarly, Sel-CL <cit.> leveraged nearest neighbors to select confident pairs for supervised CL <cit.>. 
Despite their demonstrated effectiveness, these methods are not explicitly designed to tackle the category shift between the noise-label sets and the ground-truth label set.§ METHODOLOGY §.§ Preliminaries We are given M source datasets from different clients {_S^m}_m=1^M and an unlabeled target client _T, where each source client contains N_m labeled source samples _S^m:={(x_i^m,y_i^m)}_i=1^N_m and the target client comprising N_T unlabeled samples {x_i}_i=1^N_T, s.t., x_i∈X^T. In most real-world scenarios, each client's data and model specifics are stored exclusively on local systems, ensuring that they are not shared with other clients or a centralized server. Therefore, the label sets between the aforementioned multi-source and target clients may exhibit significant variations. While, most existing FDA studies intuitively assume that multi-source and target clients share the same label sets, which is not practical. Inspired by the research of UniMDA, we define _s_m as the label sets for the m-th source node and _t as the label set for the target node. The label sets _m represents the common labels between _s_m and _t. Furthermore, _s_m=_s_m\_m represent the label sets exclusive to _S^m. Similarly, _t=_t\{∪_m_m} indicates the classes in the target domain _T that are unknown in the multi-source domains, as they should never appear in any source label sets.The label setsrepresent the union of shared classes, i.e., = ∪_m_m. It is important to note that the target data are fully unlabeled and the target label set (which is inaccessible during training) is only used to define the UFDA problem.§.§ HCLDOur proposed HCLD aims to establish an effective mapping that can accurately classify target samples if they correspond to the shared class , or confuse the samples with an "unknown" class.As shown in Figure. <ref>, HCLD consists of two key components: 1) Pseudo-Hot-Label (PHL) Generation; 2) Gaussian Mixture Model-based Contrastive Label Disambiguation (GCLD). Firstly, to mitigate the impact caused by multi-source APIs' falsely higher confidence for the non-existent categories, we calculate the pseudo-labels for each target sample with the proposed PHL Generation strategy. Then, we adopt the GCLD manner to obtain more credible pseudo-labels, which sharpens the shared-class confidence and smooths the unknown-class confidence.§.§.§ PHL Generation In UFDA, only the label sets {_s_m}_m=1^M in each source domain and the softmax output Y^T_m in each source APIs for target samples are acceptable for the target party: Y^T_m= f^m_S(X^T).Considering the domain shift between multiple source and target domains, the key challenge lies in obtaining more reliable pseudo-labels for each target sample. Empirically, individual source APIs often display increased confidence levels for both shared- and non-existent categories. Such a trend adversely affects the accuracy of pseudo-labels that are produced using these confidence scores. To address the aforementioned limitation, we suggest the use of an ensemble of multiple one-hot outputs to create the pseudo-labels, referred to as PHL C_pse (i.e., Figure. <ref>), which generates multiple candidate pseudo-labels for each target sample, providing a broader and potentially more accurate range of labeling options. 
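A minimal sketch of this one-hot ensemble step is given below; it assumes the candidate vector is a multi-hot indicator over the union of the source label sets (formalized just below) that is normalized to sum to one, and all function and variable names are hypothetical rather than the authors' implementation.

```python
# Minimal sketch of Pseudo-Hot-Label (PHL) generation from black-box source APIs:
# each source model votes with its one-hot (argmax) prediction mapped into the
# union label set; the candidate vector is the (assumed normalized) multi-hot union.
import numpy as np

def generate_phl(source_probs, source_label_sets, union_labels):
    """source_probs: list of (N, |C_sm|) softmax outputs from the M source APIs.
    source_label_sets: list of label-name lists, one per source API.
    union_labels: ordered union of all source label sets (the target pseudo-label set)."""
    index = {c: j for j, c in enumerate(union_labels)}
    n_samples = source_probs[0].shape[0]
    c_pse = np.zeros((n_samples, len(union_labels)), dtype=np.float32)
    for probs, labels in zip(source_probs, source_label_sets):
        winners = probs.argmax(axis=1)            # one-hot vote, confidence discarded
        for i, w in enumerate(winners):
            c_pse[i, index[labels[w]]] = 1.0
    return c_pse / c_pse.sum(axis=1, keepdims=True)  # assumed normalization
```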
Given the lack of pre-existing knowledge about the target label sets, we determine the Pseudo-Label Sets (PLS) for the target domain by the following method: _T= ∪_m_s_m. This strategy ensures that the accurate labels identified by each API are encompassed within the candidate pseudo-labels.
§.§.§ GCLD
The candidate pseudo-labels in the above PHL C_pse inevitably contain unknown categories due to the gap between the multi-source and target domains. We adopt GCLD, which iteratively sharpens the possible shared-class confidence, smooths the possible unknown-class confidence, and obtains more credible pseudo-labels. The critical challenge is distinguishing between shared- and unknown-class samples. Inspired by <cit.>, a GMM can better distinguish clean and noisy samples due to its flexibility in the sharpness of the distribution. Treating easy-to-learn samples as shared-class instances and challenging samples as unknown-class instances, we facilitate the acquisition of discriminative image representations through CL and construct a GMM over the representations for sample divisions. Typically, the dimension of the contrastive prototypes is limited by the pseudo-label sets, making it difficult for a GMM to handle this scenario effectively. Therefore, we utilize a comprehensive Memory Bank denoted U^e={u_1^e, …, u_N_T^e} that maintains the running average of the features of all target samples. Here, U^e represents the Memory Bank in epoch e. We initialize U^e with random unit vectors and update its values by mixing U^e and U^e-1 during training (details in the next subsection): U^e ←δU^e+(1-δ) U^e-1, where δ is a mixing parameter. The self-entropy of U^e for each sample can be defined as l_ce(i) = -∑ u^e_i log(u^e_i), i ∈{1, …, N_T}.
1) GMM-based Sample Divisions. To distinguish between shared- and unknown-class samples, we fit a two-component GMM to the self-entropy distribution l_ce using the Expectation-Maximization algorithm. Each sample is assigned a shared probability w_i, which is the posterior probability p(θ|ℓ_ce), where θ corresponds to the Gaussian component with the smaller mean (indicating a smaller self-entropy). Based on the shared probability, we divide all target samples into two sets, W^1 (samples likely belonging to shared classes) and W^0=D_T\W^1 (samples likely belonging to unknown classes), by setting a threshold σ.
2) Pseudo Target Updating. Given the shared-class samples W^1 and unknown-class samples W^0 distinguished above, we sharpen the shared-class confidence and smooth the unknown-class confidence to update the pseudo-labels C_pse as follows: C_pse^e←ϕ(ϕC_pse^e + (1-ϕ)C_pse^e-1)+ (1-ϕ)z^e, where z^e = Onehot(u^e_i) if x_i ∈W^1 and z^e = 1/n_C otherwise, and where ϕ is a tunable hyperparameter. Ultimately, the pseudo-labels of shared-class samples are clustered around each cluster center, while the pseudo-labels of unknown-class samples are confused.
3) Prototype Updating. Since the contrastive loss induces a clustering effect in the embedding space, we maintain a prototype embedding vector μ_c corresponding to each class in 𝒞̂_T, which serves as a set of representative embedding vectors. We update μ_c in a moving-average style: μ_c = Normalize(γμ_c+(1-γ) q), if c=max _j ∈𝒞̂_T f^j(Aug_q(x)), where the momentum parameter γ is set to 0.99.
Then, we iteratively update the above-mentioned Memory Bank U^e with the moving-updating mechanism U^e ←q * μ_c^T (details of q in the next subsection).§.§.§ Training ObjectiveGiven the target samples with PHL {x^i, c_pse^i}_i=1^N_T, we generate a query view Aug_q(x) and a key view Aug_k(x) with the randomized data augmentation Aug(x). Then, HCLD employs the query network g(.) and the key network g^'(.) to encode the query q = g(Aug_q(x)) and keys k = g^'(Aug_k(x)). Similar to MoCo <cit.>, the key network employs a momentum update using the query network. Additionally, we maintain a queue that stores the most recent key embeddings k and chronologically update the queue. This enables us to establish a contrastive embedding pool A = B_q∪ B_k∪ queue,where B_q and B_k represent vectorial embeddings corresponding to the query and key views, respectively. For each sample, the contrastive loss can be calculated by contrasting its query embedding with the remaining embeddings in pool A. ℒ_cont(g ; x, τ, A) = -1/|P(x)|∑_k_+∈ P(x)logexp(q^⊤k_+ / τ)/∑_k^'∈ A(x)exp(q^⊤k^' / τ), where A(x) = A \{q} and τ≥ 0 is the temperature parameter. Inspired by <cit.>, DNNs first memorize the training data of easy-learning samples, then gradually adapt to noisy labels. We construct the positive set P(x) with the predicted label from the Classifier (See Figure. <ref>). About the query view, we train the classifier f using cross-entropy loss, ℒ_cls(f; x_i, c_pse^i) = ∑_n=1^n_C-s_n^ilog(f^n(x_i)), x_i ∈ X^Twhere n_C indicates the number of categories in Ĉ_T, n denotes the indices of labels, s_n^i denotes the n-th vector of c_pse^i, and f^n denotes the n-th output of f. Putting it all together, the overall loss function can be defined as ℒ=ℒ_cls+βℒ_cont, where β is set as 0.01 to balance each loss component.§.§ MVDThrough the above HCLD strategy, we could optimize a model that is well-performing in shared classes and ambiguous in unknown classes.However, better adaptation performance depends on accurate inference for shared and unknown classes, which becomes challenging without multi-source data or parameters.Inspired by the consensus knowledge of shared classes among different domains, we consider utilizing cluster-level consensus from multi-source and target APIs to distinguish between shared and unknown classes.As the source and target APIs rarely misunderstand the non-existent category as the same shared class, we introduce an MVD strategy, which leverages the knowledge voting among the source and target views.Specifically, it calculates the voting scores in each class (the proportion of overlapping samples recognized as the same category in the dataset by all APIs, compared to the minimum number of all samples recognized as that category among these APIs) and calculates the mutual voting scores among the source and target views, which can be used to determine if it reaches a consensus. The overview of MVD is shown in Figure. <ref>.For the source view, given a pair of matched class clusters B_S^m, i (obtain B_S^m, i across the outputs of the m-th source model) and B_T^m, i, we measure the cluster-level consensus via calculating the voting score {d_s^1, …,d_s^n_C}, d_s^i,m = (B_S^m,i∩ B_T^m,i)/min _j ∈{1, …, M} (B_S^j,i, B_T^j,i), i ∈𝒞_S_m d_s^i = max _j ∈{1, …, M} d_s^i,jSimilarly, we calculate the voting score {d_t^1, …,d_t^n_} in the target view. 
Then, the mutual-voting score of two views for each union source class can be calculated as:𝒮_c =d_t^c+ d_s^c/2, c ∈𝒞̂_TFor each _c, we can predict the class c with a validated threshold λ.This either assigns class c to one of the union source classes (_cλ) or rejects it as an "unknown" class. § EXPERIMENTS §.§ Experimental Setup§.§.§ Datasets.Office-Home <cit.> is a DA benchmark that consists of four domains: Art (Ar), Clipart (Cl), Product (Pr), and Real World (Re). Office-31 <cit.> is another popular benchmark that consists of three domains: Amazon (A), Webcam (W), and Dslr (D). VisDA2017+ImageCLEF-DA is a combination of two datasets. VisDA2017 <cit.> is a DA dataset where the source domain contains simulated images (S) and the target domain contains real-world images (R). ImageCLEF-DA, on the other hand, is organized by selecting the common categories shared by three large-scale datasets: ImageCLEF (C), ImageNet (I), and Pascal VOC (P). Classes in the combined dataset are numbered as follows: Classes No. 1–7 represent the shared classes among the five datasets in alphabetical order. Classes No. 8–12 are the remaining classes from S and R domains. Classes No. 13–17 are the remaining classes from the C, I, and P domains.In UFDA, each domain contains two types of labels: shared and unknown. We use a matrix to describe the specific UniMDA setting, called UMDA-Matrix <cit.>, which is defined as [ |_1|...|_M||||_S_1| ... |_S_M| |_t|]. The first row is the size of the shared class of all the domains, and the second row denotes the unknown class. The first m columns are the label set of the multi-source domains, and the last one denotes the target domain. In this way, UniMDA settings can be determined by the division rule. To ensure a fair comparison with previous UniMDA works, we maintain the same UMDA-Matrix settings with UMAN. §.§.§ Baseline Methods.The proposed HCLD^2 (HCLD & MVD) is compared with a range of State-Of-The-Art (SOTA) DA approaches. i.e., including DANN <cit.>, RTN <cit.>, OSBP <cit.>, MDAN <cit.>, MDDA <cit.>, UAN <cit.>, DCTN <cit.>, and UMAN <cit.>.To ensure a fair comparison, we still the same evaluation metrics as those in the previous study <cit.>, which represents the mean per-class accuracy over both the shared classes and the unknown class. Since the UFDA setting is fairly new in this field, we also compare the other setting HCLD^2⋆ based on the proposed HCLD^2. Different from HCLD^2, HCLD^2⋆ implements HCLD^2 with the Pseudo-Soft-Label (PSL) which is generated by averaging the output of source models, we weigh each class by the number of source models containing this class. §.§.§ Implementation details. In UFDA, the architecture in each node can be either identical or radically different. However, to ensure a fair comparison with previous UniMDA works, we maintain a common model architecture. Specifically, we utilize ResNet-50 as the backbone for all tasks. The projection head of the contrastive network is a 2-layer MLPs that outputs 128-dimensional embeddings. For model optimization, we employ stochastic gradient descent (SGD) training with a momentum of 0.9. The learning rate is decayed using the cosine schedule, starting from a high value (e.g., 0.005 for Office-31, Office-Home, and VisDA2017+ImageCLEF-DA) and decaying to zero. To follow the standard UniMDA training protocol, we use the same source and target samples, network architecture, learning rate, and batch size as in the UMAN <cit.>. 
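As a concrete illustration of the GMM-based sample division used in GCLD (described in the GCLD subsection of the Methodology section above), a minimal scikit-learn sketch is given below; the variable names and the default value of the threshold σ are assumptions, not the authors' settings.

```python
# Minimal sketch of the two-component GMM fit on per-sample self-entropy that
# splits target samples into the shared-class set W^1 and the unknown-class set W^0.
import numpy as np
from sklearn.mixture import GaussianMixture

def split_shared_unknown(memory_bank, sigma=0.5):
    """memory_bank: (N, n_C) array of running-average soft labels U^e for the target samples.
    Returns a boolean mask that is True for samples assigned to the shared-class set W^1."""
    u = np.clip(memory_bank, 1e-8, 1.0)
    self_entropy = -(u * np.log(u)).sum(axis=1, keepdims=True)   # l_ce(i)
    gmm = GaussianMixture(n_components=2).fit(self_entropy)
    shared_component = int(np.argmin(gmm.means_.ravel()))        # low-entropy component
    w = gmm.predict_proba(self_entropy)[:, shared_component]     # shared probability w_i
    return w > sigma
```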
In decentralized training, the number of communication rounds r plays a crucial role. To ensure a fair comparison with traditional UniMDA works, we adopt r=1 for all tasks. Furthermore, we implement all methods using PyTorch and conduct all experiments on an NVIDIA GeForce GTX 4*2080Ti, utilizing the default parameters for each method.§.§ Experimental ResultsHere we present the comparison between our method and the above baseline methods. Some results are directly chosen from <cit.>. From the results in Table <ref>, despite the raw data and model specifics are not available, HCLD^2 still can perform comparably across almost all tasks compared with the traditional UniMDA setting. It also shows that, although HCLD^2⋆'s performance on VisDA+ImageCLEF-DA is not ideal, it achieves SOTA results across several tasks on Office-Home and Office-31. The results highlight the efficacy of our proposed HCLD^2 again and demonstrate the instability of directly using the soft outputs for the pseudo-label generation. §.§ Ablation Study§.§.§ Overall Component Effectiveness.We study the effectiveness of three key components (PHL Generation, GCLD, and MVD) in HCLD^2, with results shown in Table <ref>. Results show that both GCLD and MVD significantly improved accuracy compared to the approach that removes MVD and GCLD only trains a classifier with the pseudo-labels (PHL or PSL). By combining these two components we can obtain the best performance.Suffer from the one-hot setting, the method exclusively trains a classifier employing the PHL, resulting in consistently lower accuracy compared to the PSL.However, intriguingly, the integration of GCLD yields a remarkable outcome where the PHL-based approach significantly outperforms the PSL-based approach by a substantial margin. §.§.§ Effectiveness of the PHL Generation.To further analyze the impact of different pseudo-label generated methods, we report the performance of HCLD^2⋆ and HCLD^2 with varying settings of category in Table <ref>. We can see that HCLD^2⋆ works better than HCLD^2 when the intersection of the multi-source label sets is non-empty. However, when the intersection is empty, the performance of HCLD^2⋆ will suddenly decline along with the accuracy of PSL. On the other hand, HCLD^2 performs well with all category settings and shows a more stable performance compared with HCLD^2⋆, which is sensitive to different category settings.§.§.§ Effectiveness of GCLD. In Figure. <ref>, we report the performance of PSL with and without GCLD, and PHL with GCLD. As illustrated, PSL with GCLD outperforms the approach without GCLD by a large margin. In the initial epochs, PHL with GCLD may suffer from the one-hot setting. As the number of training epochs increases, PHL with GCLD will surpass the performance of PSL with GCLD.§.§.§ Effectiveness of MVD. In Table <ref>, we show results for the single-view and mutual-voting decision strategies. As a baseline, we implement HCLD^2 without incorporating any shared-class decision strategy. We establish the shared classes through voting outcomes within the source or target node for the single view. As we can see, MVD yields the most favorable results compared to any other single-view approach, although every single strategy exhibits improved performance over the baseline. Moreover, we study the parameter λ on task Dslr. As shown in Figure. <ref>b, within a wide range of λ (0.3-0.5), the performance only varies to a small degree, showing that our method is robust to different choices of λ.§.§.§ Why not Source-Free DA. 
Although FDA and SFDA are similar to some extent (e.g., only the pre-trained source model is accessible to the target domain), they are essentially different. FDA has an important assumption, i.e., the decentralized source clients keep communicating the updated source black-box models during the training process, whereas this does not hold in SFDA at all. Both our proposed scenario UFDA and our method HCLD^2 rely heavily on this assumption and aim to make black-box model communication practical under more realistic conditions. Indeed, the difference between the SFDA and FDA settings of our method is reflected in Table <ref>. As seen, without black-box model communication in SFDA, the performance of our model drops significantly. Moreover, the FDA performance increases with the communication round r.
§ CONCLUSION
This work investigated a more practical scenario, UFDA, in which we relax comprehensive assumptions such as model configuration specifics and prior label-set overlap across the multi-source and target domains, as required in most FDA scenarios. We propose a new optimization methodology, HCLD^2, to address UFDA, and a cluster-level strategy called MVD to distinguish shared and unknown classes during inference. Through extensive evaluations on three benchmark datasets, we demonstrate that HCLD^2 is capable of achieving performance comparable to conventional MDA baselines even with much less source knowledge. In the future, we may explore methods to further minimize additional assumptions (e.g., source label sets) in our UFDA, aiming for a more relaxed FDA scenario.
§.§.§ Acknowledgment
This work was supported in part by the National Key R&D Program of China 2022YFF0901800, in part by the NSFC Grants (No. 61832008, 62176205, and 62072367), in part by the Hong Kong Research Grants Council General Research Fund (17203023), in part by The Hong Kong Jockey Club Charities Trust under Grant 2022-0174, in part by the Startup Funding and the Seed Funding for Basic Research for New Staff from The University of Hong Kong, and in part by funding from UBTECH Robotics. | http://arxiv.org/abs/2311.15570v2 | {
"authors": [
"Xinhui Liu",
"Zhenghao Chen",
"Luping Zhou",
"Dong Xu",
"Wei Xi",
"Gairui Bai",
"Yihan Zhao",
"Jizhong Zhao"
],
"categories": [
"cs.LG",
"cs.CV"
],
"primary_category": "cs.LG",
"published": "20231127063807",
"title": "UFDA: Universal Federated Domain Adaptation with Practical Assumptions"
} |
Controlling Formal Fibers of Countably Many Principal Prime Ideals
David Baron, Ammar Eltigani, S. Loepp, AnaMaria Perez, M. Teplitskiy
January 14, 2024
Let T be a complete local (Noetherian) ring. For each i ∈ℕ, let C_i be a nonempty countable set of nonmaximal pairwise incomparable prime ideals of T, and suppose that if i ≠ j, then either C_i = C_j or no element of C_i is contained in an element of C_j. We provide necessary and sufficient conditions for T to be the completion of a local integral domain A satisfying the condition that, for all i ∈ℕ, there is a nonzero prime element p_i of A such that C_i is exactly the set of maximal elements of the formal fiber of A at p_iA. We then prove related results where the domain A is required to be countable and/or excellent.
§ INTRODUCTION
Completions of local (Noetherian) rings have proven to be a useful tool in commutative algebra. Unfortunately, not all aspects of the relationship between a local ring and its completion with respect to its maximal ideal are well understood. In this paper, we are interested in gaining a better understanding of the relationship between the prime ideals of a local ring and the prime ideals of its completion. Recall that the formal fiber of a local ring A at a prime ideal P of A is defined to be Spec(T ⊗_A k(P)) where k(P)= A_P/PA_P and T is the completion of A with respect to its maximal ideal. We note that there is a one-to-one correspondence between the formal fiber of A at P and the prime ideals Q of T satisfying Q ∩ A = P. Thus, it is often useful to think of the formal fiber of A at P as the set of prime ideals of the completion of A that lie over P, and this is how we will think of the formal fiber of A at P for the remainder of this paper. Since T is a faithfully flat extension of A, the map Spec(T) ⟶ Spec(A) given by Q ⟶ Q ∩ A is onto. It follows that every prime ideal of T is in the formal fiber of some prime ideal of A. Hence, the formal fibers of a local ring A form a partition of the prime ideals of T. We aim to understand which partitions of the prime ideals of T are possible. More specifically, we are interested in the following question.
Question: Let T be a complete local ring and let 𝒞 = {C_α}_α∈Ω be a partition of Spec(T). Under what conditions does there exist a local ring A such that the completion of A with respect to its maximal ideal is T and such that, for all C_α∈𝒞, there exists a prime ideal P_α of A such that C_α is exactly the formal fiber of A at P_α?
In other words, we ask, given a complete local ring T, under what conditions is it possible to find a local ring A with completion T such that all formal fibers of A are controlled?
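As a small, standard illustration of the formal-fiber correspondence recalled above (not taken from this paper, included only for orientation), consider A = ℤ_(p) with completion T = ℤ_p:

```latex
\[
  A = \mathbb{Z}_{(p)}, \qquad T = \widehat{A} = \mathbb{Z}_p .
\]
\[
  T \otimes_A k\bigl((0)\bigr) = \mathbb{Z}_p \otimes_{\mathbb{Z}_{(p)}} \mathbb{Q} \cong \mathbb{Q}_p ,
  \quad\text{so the formal fiber of } A \text{ at } (0) \text{ is } \operatorname{Spec}(\mathbb{Q}_p) = \{(0)\},
\]
\[
  T \otimes_A k\bigl(p\mathbb{Z}_{(p)}\bigr) \cong \mathbb{F}_p ,
  \quad\text{so the formal fiber of } A \text{ at } p\mathbb{Z}_{(p)} \text{ is the single point } p\mathbb{Z}_p .
\]
```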
We believe that answering this question is very difficult, and so we focus on controlling formal fibers of specific prime ideals of A. For example, one could ask how well the formal fibers of minimal prime ideals can be controlled. The results in <cit.> give insight into the answer to this question. In this paper, we are interested in how well formal fibers of height one prime ideals can be controlled, furthering previous work on the topic (see, for example, <cit.> and <cit.>). In particular, in <cit.>, it is shown that the formal fiber of exactly one height one prime ideal can be controlled. We extend this result by showing that the formal fibers of countably many height one prime ideals can be controlled. Specifically, in Section <ref>, we prove the following result.
<ref> Let T be a complete local ring and let Π denote the prime subring of T. For each i ∈ℕ, let C_i be a nonempty countable set of nonmaximal pairwise incomparable prime ideals of T and suppose that, if i ≠ j, then either C_i = C_j or no element of C_i is contained in an element of C_j. Then T is the completion of a local domain A ⊆ T satisfying the condition that, for all i ∈ℕ, there is a nonzero prime element p_i of A such that C_i is exactly the set of maximal elements of the formal fiber of A at p_iA if and only if there exists a set of nonzero elements 𝔮 = {q_i}_i = 1^∞ of T satisfying the following conditions:
* For i ∈ℕ, we have q_i ∈⋂_Q ∈ C_iQ, and if C_j ≠ C_i and Q' ∈ C_j, then q_i ∉Q',
* P ∩Π[𝔮]=(0) for all P ∈ Ass(T),
* If i ∈ℕ and P' ∈ Ass(T/q_iT), then P' ⊆ Q for some Q ∈ C_i, and
* If i ∈ℕ and Q ∈ C_i, then F_Π[𝔮]∩ Q ⊆ q_iT, where F_Π[𝔮] is the quotient field of Π[𝔮].
Showing that the above four conditions are sufficient is much more difficult than showing that they are necessary, and so the bulk of our work will be showing that the conditions are sufficient. To do this, we start with a complete local ring T, the sets C_i satisfying our hypotheses, and a set of elements 𝔮 = {q_i}_i = 1^∞ of T satisfying the four conditions. We then construct a local domain A that satisfies the properties given in Theorem <ref>. We define the notion of a Ct𝔮-subring of T which will aid in our construction of A. We end Section 2 by showing these conditions are in fact necessary, thereby completing the proof of our theorem. In Section 3, we show that we can have some control over formal fibers of countably many height one prime ideals of countable domains, and in Section 4, we show that we can have some control over formal fibers of countably many height one prime ideals of quasi-excellent and excellent domains. In Section 5, we combine the results from Section 3 and Section 4 to show that we can have some control over formal fibers of countably many height one prime ideals of countable quasi-excellent domains and of countable excellent domains.
Throughout the paper, ℕ will be assumed to be the positive integers. Moreover, we say a ring R is quasi-local if it has a unique maximal ideal M, whilst R is local if it is both quasi-local and Noetherian. We let (R,M) denote a quasi-local ring R with maximal ideal M, and we denote the completion of a local ring R with respect to its maximal ideal by R̂. We call a local ring A a precompletion of a complete local ring T if Â ≅ T. When we say that C is a set of incomparable (or pairwise incomparable) prime ideals of a ring R, we mean that for all pairs of prime ideals P,P' ∈ C, we have P ⊈P'. Finally, if R is an integral domain, we use F_R to denote the quotient field of R.
§ THE MAIN THEOREM
The goal of this section is to prove Theorem <ref>. As mentioned in the previous section, most of our work is dedicated to showing that the four conditions given in Theorem <ref> are sufficient. To show that they are sufficient, we start with a complete local ring T, the sets C_i, and the set of elements 𝔮 of T satisfying the four conditions. We then adjoin the set 𝔮 to the prime subring of T. The next step is to carefully successively adjoin elements to this ring to create an ascending chain of subrings of T with each subring satisfying very specific properties. The union of these subrings will be the desired precompletion of T.
§.§ Preliminaries
We begin with preliminary results that will help with the construction of our precompletion. Cardinality arguments will play a central role in our construction. The following proposition, taken from <cit.>, will be used for some of these cardinality arguments.
(<cit.>, Lemma 2.2) Let (T,M) be a complete local ring with dim(T) ≥ 1. Let P be a nonmaximal prime ideal of T. Then, |T/P| = |T| ≥ c, where c denotes the cardinality of ℝ.
Recall that, given a complete local ring T, one of the conditions we want our local domain A to satisfy is that its completion is T. The following proposition from <cit.> provides conditions for a quasi-local subring of T to be Noetherian and have T as its completion.
(<cit.>, Proposition 1) If (R, R ∩ M) is a quasi-local subring of a complete local ring (T,M), the map R → T/M^2 is onto, and IT ∩ R = I for every finitely generated ideal I of R, then R is Noetherian and the natural homomorphism R̂ → T is an isomorphism.
Our final local domain A will be a subring of T that satisfies the conditions of Proposition <ref>. To ensure that A satisfies these conditions, we repeatedly use the following result from <cit.>. Note that Proposition <ref> can be thought of as a generalization of the countable prime avoidance theorem for complete local rings.
(<cit.>, Lemma 2.7) Let (T,M) be a complete local ring such that dim(T) ≥ 1, let C be a countable set of incomparable nonmaximal prime ideals of T and let D be a subset of T such that |D| < |T|. Let I be an ideal of T such that I⊈ P for all P∈ C. Then I⊈⋃{r+P : r∈ D, P∈ C}.
The next four lemmas are useful results regarding subrings of Noetherian rings where the subrings are not assumed to be Noetherian.
Let T be a ring and let R be a subring of T that contains no zerodivisors of T. Let q be a nonzero element of R such that qT∩ R = qR. Then for every ℓ∈ℕ, we have (qT)^ℓ∩ R = (qR)^ℓ.
We induct on ℓ. By hypothesis, the statement holds for ℓ = 1. Assume (qT)^ℓ∩ R = (qR)^ℓ. Let x ∈ (qT)^ℓ+1∩ R and write x=q(q^ℓt) for some t ∈ T. Then x ∈ qT ∩ R = qR, and so x = qr for some r ∈ R. Thus q(q^ℓt) = qr and since q is not a zerodivisor, we have r = q^ℓt ∈ (qT)^ℓ∩ R = (qR)^ℓ. It follows that x = q(q^ℓt) ∈ (qR)(qR)^ℓ = (qR)^ℓ+1. Therefore, (qT)^ℓ+1∩ R ⊆ (qR)^ℓ+1. Since the reverse inclusion holds, the result follows.
Let (T,M) be a local ring and let R be a subring of T. Suppose q and a are elements of R such that a ≠ 0 and q ∈ M. If a∈ qR, then there is a positive integer k such that a∈ q^kR and a∉q^k+1R.
Assume that no such positive integer exists. Then a ∈⋂_n = 1^∞ (qR)^n ⊆⋂_n = 1^∞ (qT)^n ⊆⋂_n = 1^∞ (M)^n. Since T is Noetherian, Krull's intersection theorem gives us that ⋂_n = 1^∞ (M)^n = (0). It follows that a = 0, a contradiction.
Let (T,M) be a local ring and let (R,R ∩ M) be a quasi-local subring of T that contains no zerodivisors of T.
If y is a nonzero element of R, then y is contained in at most finitely many principal prime ideals of R. Let y ∈ R, and suppose {q_iR}_i ∈ℕ are prime ideals of R with y ∈ q_iR for all i ∈ℕ and with q_iR = q_jR if and only if i = j. We claim that, for k ∈ℕ, there is an element r_k in R such that y = (q_1q_2⋯ q_k)r_k. To show this, we induct on k. Since y ∈ q_1R, there is an r_1 ∈ R such that y = q_1r_1.Now suppose k ≥ 1 and assume there is an r_k ∈ R such that y = (q_1q_2⋯ q_k)r_k. Now y ∈ q_k + 1R, and since q_k + 1R is a prime ideal of R, we have that r_k ∈ q_k + 1R or q_i ∈ q_k + 1R for some i = 1,2, … ,k. If r_k ∈ q_k + 1R, then r_k = q_k + 1r_k + 1 for some r_k + 1∈ R.Hence, y = (q_1q_2⋯ q_kq_k + 1)r_k+ 1, and our claim holds by induction. So suppose q_i ∈ q_k + 1R for some i = 1,2, … ,k. Then q_iR ⊆ q_k + 1R and so q_i = q_k + 1r for some r ∈ R. Since q_iR is a prime ideal of R, q_k + 1∈ q_iR or r ∈ q_iR. If q_k + 1∈ q_iR then q_iR = q_k + 1R, a contradiction. If r ∈ q_iR then r = q_ir' for some r' ∈ R and we have q_i = q_k + 1q_ir'. Since q_i is not a zerodivisor in T, we have 1 = q_k + 1r', and so q_k + 1 is a unit, contradicting that q_k + 1R is a prime ideal of R. Thus our claim holds.It follows that y ∈⋂_k ∈ℕ (M)^k,and so by Krull's intersection theorem, y = 0. If R is a subring of the ring T and Q is an ideal of T, then there is an injective map R/(R ∩ Q) ⟶ T/Q. Therefore, it makes sense to say that an element t + Q ∈ T/Q is either algebraic or transcendental over the ring R/(R ∩ Q).Let (T,M) be a local ring and let R be a subring of T. Let q be an element of R such that q ∈ M and q is a regular element of T.Suppose Q is an ideal of T such thatR∩ Q = qR. If t ∈ T with t+Q∈ T/Q transcendental over R/(R∩ Q), then t is transcendental over R.Suppose t∈ T is algebraic over R. Then a_0 + a_1t + a_2t^2 +…+ a_nt^n = 0 for some a_i ∈ R with a_n ≠ 0. Since t+Q∈ T/Q is transcendental over R/(R∩ Q), we have a_i ∈ R ∩ Q = qR for all i ∈{0,1, … ,n}. If a_i ≠ 0, let k_i be the largest positive integer such that a_i∈ q^k_iR. Note that k_i exists by Lemma <ref>.Let j = mink_i. Since j ≤ k_i for all i,0 = a_0 + a_1t + a_2t^2 +…+ a_nt^n = q^j(b_0 + b_1t + b_2t^2 +…+ b_nt^n) for some b_i ∈ R where at least one b_i is not in qR. Since q is a regular element of T, we have b_0 + b_1t + b_2t^2 +…+ b_nt^n = 0. Since t+Q∈ T/Q is transcendental over R/(R∩ Q), we have b_i ∈ R ∩ Q = qR for all i, a contradiction. In this paper, we generalize the main results of <cit.>. In particular, in <cit.>, it is shown that, given a complete local ring T along with a finite set of prime ideals C of T, there are conditions for which T has a precompletion A such that A has a principal prime ideal whose formal fiber's maximal elements are exactly the elements of C.We generalize this result in two ways.First, we show that the set C can be infinitely countable.Second, we find conditions for which T has a precompletion A such that A has countably many principal prime ideals that have countably many maximal elements of their formal fibers.Moreover, this set of maximal elements can be controlled. Many of our results and the proofs for our results are analogous to those from <cit.>. Therefore, before stating our own results, we first state the following important definition from <cit.>. Let (T,M) be a complete local ring, and suppose we have a finite, pairwise incomparable set C = {Q_1, Q_2,… ,Q_k}⊆ (T). Let p ∈⋂_i=1^k Q_i be a nonzero regular element of T. 
Suppose that (R,R ∩ M) is a quasi-local subring of T containing p with the following properties: * Γ(R) < |T|, where Γ(R)=sup{ ||, |R|},* If P is an associated prime ideal of T then R ∩ P = (0), and* For all i ∈{1,…,k}, F_R ∩ Q_i ⊆ pT.Then we call R a pca-subring of T.§.§ The ConstructionWe introduce the definition of a Ct𝔮-subring of a complete local ring T, an essential component in our construction. Note that Ct𝔮-subrings are generalizations of pca-subrings from <cit.>. Let (T,M) be a complete local ring. For each i ∈ℕ, let C_i be a nonempty countable set of nonmaximal pairwise incomparable prime ideals of T and such that, if i ≠ j, then either C_i = C_j or no element of C_i is contained in an element of C_j. For each i ∈ℕ, let q_i be a nonzero regular element of T such that q_i ∈⋂_Q ∈ C_iQ and, if C_j ≠ C_i and Q' ∈ C_j, then q_i ∉Q'. Suppose that (R, R∩ M) is an infinite quasi-local subring of T containing q_i for all i ∈ℕ such that: * |R| < |T|,* If P ∈(T), then R ∩ P = (0),and* If i ∈ℕ, then F_R ∩ Q ⊆ q_iT for all Q∈ C_i.Let 𝔮 = {q_ii ∈ℕ} and let C = {C_ii ∈ℕ}. Then we call R a C to 𝔮 subring of T, or a Ct𝔮-subring of T for short. When the ring T is understood, we will often call R a Ct𝔮-subring instead of a Ct𝔮-subring of T. Notice that condition (ii) implies that Ct𝔮-subrings are integral domains. Moreover, Ct𝔮-subrings contain regular elements in T of the form q_i for all i ∈ℕ. These elements will be used to generate principal prime ideals p_iA in our final local domain A such that the formal fiber of A at p_iA has maximal elements exactly the elements of C_i. We note that if T is a complete local ring that contains a Ct𝔮-subring, then T has prime ideals that are not maximal that contain regular elements of T. It follows that the dimension of T is at least two. In order to construct the desired local domain A with completion T, we adjoin elements to Ct𝔮-subrings and then localize. By doing this, we construct a chain of Ct𝔮-subrings and show that their union is our desired precompletion A. We next show that, under certain circumstances, a localization of a Ct𝔮-subring is a Ct𝔮-subring, and the union of a chain of Ct𝔮-subrings is a Ct𝔮-subring. These results help us maintain the Ct𝔮-subring properties throughout most of our construction. We note that Lemma <ref> is the analog of Lemma 2.4 from <cit.> and Lemma <ref> is the analog of Lemma 2.5 from <cit.>, and the proofs of Lemmas <ref> and <ref> are largely based on the proofs of those lemmas in <cit.>. Let (T,M), C_i, and q_i for i ∈ℕ be as in Definition <ref>. Suppose R is an infinite subring of T. If R satisfies conditions (i)-(iii) from Definition <ref>, then R_(R∩ M) satisfies conditions (i)-(iii) from Definition <ref>.In particular, if R contains q_i for all i ∈ℕ, then R_(R∩ M) is a Ct𝔮-subring of T. Let R be an infinite subring of T satisfying properties (i), (ii), and (iii) from Definition <ref>. Since R is infinite, we have |R_(R ∩ M)| = |R| < |T|. Now suppose P ∈(T) and let x ∈ R_(R ∩ M)∩ P. Then r_1x = r_2 for r_1,r_2 ∈ R with r_1 ≠ 0. Thus r_2 ∈ R ∩ P = (0) and it follows that x = 0.Hence, condition (ii) is satisfied. Finally, let i ∈ℕ and let Q ∈ C_i. Suppose y ∈ F_R_(R ∩ M)∩ Q. Then y ∈ F_R ∩ Q ⊆ q_iT and so condition (iii) is satisfied. Finally, note that R_(R∩ M) is an infinite quasi-local subring of T. 
If R contains q_i for all i, then so does R_(R ∩ M) and so R_(R ∩ M) is a Ct𝔮-subring of T.A main part of our construction is to build a chain of Ct𝔮-subrings of T and then claim that the union of the Ct𝔮-subrings in the chain satisfies most properties of being a Ct𝔮-subring of T.We use the next lemma to prove this claim.Let (T,M), C_i, and q_i for i ∈ℕ be as in Definition <ref>. Let Ω be a well-ordered set, and let {(R_α, R_α∩ M) α∈Ω} be a set of Ct𝔮-subrings of T indexed by Ω with the property that R_α⊆ R_β for all α and β such that α < β. Let S = ⋃_α∈ΩR_α. Then S is quasi-local with maximal ideal S ∩ M and satisfies conditions (ii), and (iii) of Definition <ref>. Furthermore, if, for some λ, |R_α| ≤λ for all α∈Ω, we have |S| ≤λ |Ω|. So, if |Ω| ≤λ and |R_α|=λ for some α, |S|=λ. The cardinality statement does not require a proof. Suppose {(R_α, R_α∩ M)|α∈Ω} is a set of Ct𝔮-subrings indexed by Ω with the property R_α⊆ R_β for all α and β such that α < β. Since each R_α is quasi-local with maximal ideal R_α∩ M, S is quasi-local with maximal ideal S ∩ M. Let P ∈(T). Since R_α∩ P = (0) for every α∈Ω, we have S ∩ P = (0). Now let i ∈ℕ and suppose Q ∈ C_i.Let x ∈ F_S ∩ Q. Then s_1x = s_2 for s_1,s_2 ∈ S. Choose α∈Ω such that s_1,s_2 ∈ R_α. Then x ∈ F_R_α∩ Q ⊆ q_iT. A crucial part of our construction will be finding Ct𝔮-subrings S of T that satisfy the condition that q_iT ∩ S = q_iS for all i ∈ℕ. Although not all of our Ct𝔮-subrings will satisfy this condition, we show that, for any given Ct𝔮-subring R, we can find another Ct𝔮-subring S containing R that does satisfy the condition that q_iT ∩ S = q_iS for all i ∈ℕ.Let (T,M), C_i, and q_i for i ∈ℕ be as in Definition <ref>. Let (R, R ∩ M) be a quasi-local subring of T satisfying all conditions to be a Ct𝔮-subring of T except that R may be finite. Then S = F_R ∩ T satisfies all conditions to be a Ct𝔮-subring of T except that S might be finite. Moreover, R ⊆ S ⊆ T and q_iT ∩ S = q_iS for all i ∈ℕ. If R is infinite, then S is a Ct𝔮-subring of T satisfying |S| = |R|.We first show that S is quasi-local with maximal ideal S ∩ M. In particular, we show that x ∈ F_R ∩ T is a non-unit if and only if x ∈ M ∩ F_R ∩ T. Let x ∈ F_R ∩ T. Suppose x ∉ M. Then x is a unit in T, and so there exists t ∈ T such that tx=1. Now,xr_1=r_2 for r_1, r_2 ∈ R, and so r_1 = tr_2. This implies that t ∈ F_R, and so x is a unit in F_R ∩ T. Thus, if x ∈ F_R ∩ T is a non-unit then x ∈ M ∩ F_R ∩ T. The other direction follows since if x ∈ M, then x is not a unit. Thus, S=F_R∩ T is quasi-local with maximal ideal S ∩ M.Note that R ⊆ S ⊆ T and since R contains q_i for all i ∈ℕ, so does S. If R is finite, then so is S and if R is infinite, then |S| = |R|. As a consequence, |S|<|T|. Now let P ∈(T) and suppose x ∈ S ∩ P. Then r_1x = r_2 for some r_1,r_2 ∈ R with r_1 ≠ 0 and so r_1x ∈ R ∩ P = (0). Since r_1 is not a zerodivisor of T, we have x = 0. Therefore, S ∩ P = (0). Now let i ∈ℕ and suppose Q ∈ C_i. Let y ∈ F_S ∩ Q. Then y ∈ F_R ∩ Q ⊆ q_iT. It follows that S satisfies all conditions to be a Ct𝔮-subring of T except that it might be finite. If R is infinite, then S is infinite, and so it is a Ct𝔮-subring of T satisfying |S| = |R|.Finally, we show that q_iT ∩ S = q_iS for all i ∈ℕ. Fix i ∈ℕ and let x ∈ q_iT ∩ S. Then, we can write q_it=x for t ∈ T and xr_2=r_1 with r_1, r_2 ∈ R. This implies that q_itr_2=r_1 and so t ∈ S. Therefore, x ∈ q_iS. It follows that q_iT ∩ S ⊆ q_iS for all i ∈ℕ. 
The reverse inclusion is a consequence of the fact that S⊆ T.When we adjoin an element of T to a Ct𝔮-subring, we want the resulting subring of T to also be a Ct𝔮-subring. The next lemma shows that there are conditions under which we can do this. Let (T,M), C_i, and q_i for i ∈ℕ be as in Definition <ref>. Moreover, suppose that for every i ∈ℕ and for every P ∈ (T/q_iT), we have that P ⊆ Q for some Q ∈ C_i. Let (R, R∩ M) be a quasi-local subring of T containing q_i for all i ∈ℕ and such that, for all i ∈ℕ, q_iT ∩ R = q_iR. Suppose also that R satisfies conditions (i), (ii), and (iii) from Definition <ref>. If x+Q ∈ T/Q is transcendental over R/(R∩ Q) for all Q ∈⋃_i ∈ℕC_i then S = R[x]_(R[x]∩ M) is a Ct𝔮-subring of T. Our proof follows parts of the proof of Lemma 2.8 in <cit.> very closely. First note that, since R contains q_i for all i ∈ℕ, so does S. Since x + Q is transcendental over R/(R ∩ Q) for some prime ideal Q of T, we have that R[x] is infinite and so S is infinite.In addition, if R is finite, then S is countable and if R is infinite then |S| = |R|. Since C_1 is a nonempty set of prime ideals that are not maximal, dimT ≥ 1. By Proposition <ref>, T is uncountable, and so we have |S| < |T|.Fix i and suppose P ∈(T). We claim P ⊆ Q for some Q ∈ C_i. Let z ∈ P. Then there is a nonzero y ∈ T such that zy = 0.Since T is Noetherian, there is a nonnegative integer k such that y ∈ (q_i)^kT and y ∉(q_i)^k+ 1T. Hence, y = q_i^kt for some t ∈ T with t ∉q_iT. Now zq_i^kt = 0 and since q_i is a regular element of T, we have zt = 0. Therefore, z(t + q_iT) = 0 + q_iT with t + q_iT ≠ 0 + q_iT and it follows that z ∈ P' for some P' ∈(T/q_iT). By hypothesis, P' ⊆ Q for some Q ∈ C_i.It follows that P ⊆⋃_Q ∈ C_iQ. Now use Proposition <ref> with C = C_i, D = {0}, and I = P to conclude thatP ⊆ Q for some Q ∈ C_i as claimed. Let f ∈ R[x] ∩ P ⊆ R[x] ∩ Q with f ≠ 0. Then f = r_nx^n + ⋯ + r_1x + r_0 ∈ Q for some r_j ∈ R with r_n ≠ 0. Since x + Q is transcendental over R/(R ∩ Q), we have r_j ∈ R ∩ Q for all j = 1,2, …,n, and since R satisfies condition (iii) of Ct𝔮-subrings of T, we have R ∩ Q ⊆ q_iT. Let m be the largest positive integer such that r_j ∈ q_i^mT for all j = 1,2, … ,n. Now, q_i^mT ∩ R = q_i^mR by Lemma <ref>. Thus, r_j = q_i^mr'_j for r'_j ∈ R with at least one r'_j ∉q_iT, and, since R ∩ Q ⊆ q_iT we have that at least one r'_j is not in Q. Observe that f = q_i^m(r'_nx^n + ⋯ + r'_1x + r'_0) ∈ P and since q_i is a regular element of T, it is not in P. Hence r'_nx^n + ⋯ + r'_1x + r'_0 ∈ P ⊆ Q. This contradicts that x+Q ∈ T/Q is transcendental over R/(R∩ Q), and so f = 0 and we have that R[x] ∩ P = (0). It follows that R[x] satisfies condition (ii) of Definition <ref>. Now fix i and suppose Q ∈ C_i and f/g ∈ F_R[x]∩ Q where f,g ∈ R[x] with g ≠ 0. Then there is an element p ∈ Q such that f = pg. So there exist r_j,s_j ∈ R such that f = r_nx^n + ⋯ + r_1x + r_0, g = s_mx^m + ⋯ + s_1x + s_0 with s_j ≠ 0 for some j andr_nx^n + ⋯ + r_1x + r_0 = p(s_mx^m + ⋯ + s_1x + s_0).If f = 0 then f/g ∈ q_iT so suppose f ≠ 0. Then r_j ≠ 0 for some j.Let n' be the largest integer such that r_j ∈ q_i^n'T for all j = 1,2, … ,n and let m' be the largest integer such that s_j ∈ q_i^m'T for all j = 1,2, … ,m. Then we haveq_i^n'(r'_nx^n + ⋯ + r'_1x + r'_0) = pq_i^m'(s'_mx^m + ⋯ + s'_1x + s'_0)where at least one r'_j ∉q_iT and at least one s'_j ∉q_iT. If n' ≤ m' then r'_nx^n + ⋯ + r'_1x + r'_0 ∈ Q, contradicting that x+Q ∈ T/Q is transcendental over R/(R∩ Q). So n' > m'. It follows that p(s'_mx^m + ⋯ + s'_1x + s'_0) ∈ q_iT. 
If p ∉q_iT then s'_mx^m + ⋯ + s'_1x + s'_0 ∈ P' for some P' ∈(q_iT). By hypothesis, we have s'_mx^m + ⋯ + s'_1x + s'_0 ∈ Q' for some Q' ∈ C_i. Now x+Q' ∈ T/Q' is transcendental over R/(R∩ Q') and so s'_j ∈ R ∩ Q' for all j = 1,2, …,m. Since R ∩ Q' ⊆ q_iT we have s'_j ∈ q_iT for all j = 1,2, …,m, a contradiction. It follows that p ∈ q_iT and we have F_R[x]∩ Q ⊆ q_iT. In other words, R[x] satisfies condition (iii) of Definition <ref>. By Lemma <ref>, S = R[x]_(R[x]∩ M) is a Ct𝔮-subring of T.Lemma <ref> gives us conditions on a subring R of T and an element x of T that ensures that the ring S = R[x]_(R[x] ∩ M) is a Ct𝔮-subring of T. We rely heavily on this result for our construction.We now work to ensure that the domain A we construct has completion T. Recall from Proposition <ref> that, to do this, it suffices to construct A so that the map A ⟶ T/M^2 is onto and IT ∩ A = I for every finitely generated ideal I of A. Given a prime ideal J that is not in any element of any C_i and given an element u + J of T/J, the next result allows us to adjoin an element of u + J to a Ct𝔮-subring of T to obtain another of Ct𝔮-subring of T. We repeatedly apply this lemma to ensure that the map A ⟶ T/M^2 is onto. The lemma will also be useful in Section 4 and in Section 5, when we construct A to be quasi-excellent.Let (T,M), C_i, and q_i for i ∈ℕ be as in Definition <ref>. Moreover, suppose that for every i ∈ℕ and for every P ∈ (T/q_iT), we have that P ⊆ Q for some Q ∈ C_i. Let (R, R ∩ M) be a Ct𝔮-subring of T such that q_iT ∩ R = q_iR for all i ∈ℕ and let u+J ∈ T/J where J is an ideal of T with J ⊈Q for all Q ∈⋃_i ∈ℕC_i. Then there exists a Ct𝔮-subring S of T that satisfies the following conditions: * R ⊆ S ⊆ T,* |R|=|S|,* q_iT ∩ S=q_iS for all i ∈ℕ,* u+J is in the image of the map S → T/J, and* If u ∈ J, then S ∩ J ⊈Q for all Q ∈⋃_i ∈ℕC_i.Let Q ∈⋃_i ∈ℕC_i and let D_(Q) be a full set of coset representatives of the cosets t+Q ∈ T/Q that make (u+t)+Q algebraic over R/(R∩ Q). Note that, since R is infinite, |D_(Q)|≤|R| for each Q ∈⋃_i ∈ℕC_i. Let D=⋃_Q ∈⋃_i ∈ℕC_i D_(Q) and note that |D|<|T|.Now use Proposition <ref> with C = ⋃_i ∈ℕC_i and I=J, to find x ∈ J such that x ∉⋃{r+Qr ∈ D,Q ∈ C }. Since (u+x)+Q is transcendental over R/(R∩ Q) for all Q ∈⋃_i ∈ℕC_i and the hypotheses of Lemma <ref> are met, S'=R[u+x]_(R[u+x]∩ M) is a Ct𝔮-subring of T. Now, use Lemma <ref> to obtain a Ct𝔮-subring S of T such that |R|=|S'|=|S|, S' ⊆ S ⊆ T, and q_iT ∩ S = q_iS for all i ∈ℕ.Since S' ⊆ S, the image of S in T/J contains (u+x)+J = u+J. Furthermore, if u ∈ J, then u+x ∈ J∩ S, but since (u+x)+Q is transcendental over R/R ∩ Q for each Q ∈⋃_i ∈ℕC_i, we have that u +x ∉ Q. Therefore, J ∩ S ⊈Q for all Q ∈⋃_i ∈ℕC_i. We now prove a sequence of lemmas that are fundamental in showing that IT ∩ A = I for every finitely generated ideal I of A. Let (T,M), C_i, and q_i for i ∈ℕ be as in Definition <ref>. Let (R, R∩ M) be a quasi-local subring of T containing q_i for all i ∈ℕ. Also suppose that for all i ∈ℕ and for all Q ∈ C_i we have q_iT ∩ R = q_iR and F_R ∩ Q ⊆ q_iT. Then Q ∩ R =q_iR for all Q ∈ C_i. In particular, q_iR is a prime ideal of R. Let i ∈ℕ and let Q ∈ C_i. Notice that q_iR ⊆ q_iT ∩ R ⊆ Q ∩ R. 
Now, if x ∈ R ∩ Q then x ∈ R ∩ Q ⊆ F_R ∩ Q ⊆ q_iT, and so x ∈ q_iT ∩ R = q_iR.It follows that Q ∩ R = q_iR.Many of our results for Ct𝔮-subrings have analogous statements for pca-subrings in <cit.>.We note that the next lemma does not have an analogous result in <cit.> and it is a key component in generalizing the main theorem from <cit.>.Let (T,M), C_i, and q_i for i ∈ℕ be as in Definition <ref>. Moreover, suppose that for every i ∈ℕ and for every P ∈ (T/q_iT), we have that P ⊆ Q for some Q ∈ C_i. Let (R, R ∩ M) be a Ct𝔮-subring of T such that q_iT ∩ R = q_iR for all i ∈ℕ and let I=(y_1,…,y_m)R be a finitely generated ideal of R. If I ⊈ q_i R for all i ∈ℕ, then there exists a Ct𝔮-subring R' of T such that * IR' = (y_1,…,y_m)R' = (y, y_2, …, y_m)R' where y∉ q_i R' for all i ∈ℕ, * R ⊆ R' ⊆ T,* |R'| = |R|, and * q_i T ∩ R' = q_i R' for all i ∈ℕ. Without loss of generality, we may assume that y_j ≠ 0 for all j = 1,2, …,m. By Lemma <ref>, q_iR is a prime ideal of R for all i ∈ℕ.Thus, by Lemma <ref>, y_1 is contained in at most finitely manyideals of the form q_i R. If y_1 ∉q_iR for all i ∈ℕ then define R' = R and y = y_1, and observe that the lemma holds in this case. So for the rest of the proof, assume that y_1 ∈ q_iR for at least one i ∈ℕ.We will define y = y_1 + at for some carefully chosen a ∈ I and t ∈ T. Assume, without loss of generality, that y_1 ∈ q_kR for all 1 ≤ k ≤ n and if k > n with y_1 ∈ q_kR then q_kR = q_jR for some 1 ≤ j ≤ n. Use the Prime Avoidance Theorem to find a ∈ R such that a ∈ I and a ∉⋃_k=1^n q_kR. Note that we have chosen a ∈ I such that for i ∈ℕ, if y_1 ∈ q_i R, then a ∉ q_i R.To choose t, let Q ∈⋃_i ∈ℕC_i and let D_(Q) be a full set of coset representatives of the elements t + Q of T/Q that are algebraic over R / (R ∩ Q). Let D = ⋃_Q ∈⋃_i ∈ℕC_i D_(Q). Now, T is uncountable by Proposition <ref>. Since |R| < |T|, we have |R / (R ∩ Q)| < |T| and the algebraic closure of R / (R ∩ Q) in T/Q also has cardinality smaller than |T|. Therefore, for every Q ∈⋃_i ∈ℕC_i, |D_(Q)| < |T|. Recall that there are countably many C_i's and each C_ihas countably many elements. Thus, |D|<|T|.Note that M ⊈ Q for all Q ∈⋃_i ∈ℕC_i because the elements of each C_i are nonmaximal. Apply Proposition <ref> with C = ⋃_i ∈ℕC_i and I = M to find t ∈ M such that t + Q is transcendental over R / (R ∩ Q) for all Q ∈⋃_i ∈ℕC_i.By Lemma <ref>, S = R[t]_(R[t] ∩ M) is a Ct𝔮-subring of T.Note that R ⊆ S ⊆ T and |S| = |R| < |T|. If Q ∈ C_1 then, by Lemma <ref>, Q ∩ R = q_1R. By Lemma <ref>, t is transcendental over R. We claim that y = y_1 + a t ∉q_i R[t] for all i ∈ℕ. By way of contradiction, assume that y∈ q_i R[t] for some i ∈ℕ. Since t is transcendental over R, y_1+at ∈ q_i R[t] implies that y_1 ∈ q_iR and a ∈ q_iR. However, this is a contradiction since we chose a such that y_1 ∈ q_i R implies that a ∉ q_i R. Therefore, y∉ q_i R[t] for all i ∈ℕ. Now suppose that i ∈ℕ and f ∈ q_iT ∩ R[t]. Then, for some a_j ∈ R we have f = a_nt^n + ⋯ + a_1t + a_0 ∈ q_iT ⊆ Q for all Q ∈ C_i. Since t + Q is transcendental over R / (R ∩ Q), we have a_j ∈ R ∩ Q = q_iR. It follows that f ∈ q_iR[t] and so we have q_iT ∩ R[t] = q_iR[t].As a consequence, y∉ q_i T for all i ∈ℕ. We now show that (y_1,…,y_m)S = (y,…,y_m)S. Since y = y_1 + at with a ∈ (y_1, …, y_m)S, we have that (y,…,y_m)S ⊆ (y_1,…,y_m)S = IS.Notice that y - y_1 = at ∈ (M ∩ S)IS, and so we have (y, y_2, …, y_m)S + (M ∩ S)IS = IS. 
By Nakayama's Lemma, we have IS = (y, y_2, …, y_m)S.By Lemma <ref>, R'= F_S∩ T is a Ct𝔮-subring of T such that S ⊆ R' ⊆ T, |R'| = |S|, and and q_i T ∩ R' = q_i R' for all i ∈ℕ. Note also that, since IS = (y, y_2, …, y_m)S, we have IR' = (y_1,…,y_m)R' = (y, y_2, …, y_m)R'. We now claim that y∉ q_i R' for all i ∈ℕ. Suppose on the contrary that y∈ q_i R' for some i ∈ℕ. Then y∈ q_iR' ⊆ q_iT, a contradiction.Hence, R' is the desired Ct𝔮-subring of T.We are ready to show that, if I is a finitely generated ideal of a Ct𝔮-subring R of T, and c ∈ IT∩ R, then we can find a larger Ct𝔮-subring S of T with c∈ IS. The statement of the lemma corresponds to the statement of Lemma 2.9 from <cit.>, but for Ct𝔮-subrings rather than pca-subrings. The first part of the proof of Lemma <ref> below, in particular, is very close to the first part of the proof of Lemma 2.9 in <cit.>. Let (T,M), C_i, and q_i for i ∈ℕ be as in Definition <ref>. Moreover, suppose that for every i ∈ℕ and for every P ∈ (T/q_iT), we have that P ⊆ Q for some Q ∈ C_i. Let (R, R ∩ M) be a Ct𝔮-subring of T such that q_iT ∩ R = q_iR for all i ∈ℕ. Let I be a finitely generated ideal of R and let c ∈ IT ∩ R. Then there exists a Ct𝔮-subring S of T meeting the following conditions: * R ⊆ S ⊆ T,* |S| = |R|,* c ∈ IS, and* q_iT ∩ S = q_iS for all i ∈ℕ.We induct on the number of generators of I. Suppose I = aR. If a = 0, then c = 0. So, S = R is the desired Ct𝔮-subring. If a ≠ 0, then let c = au for some u ∈ T.We show that S' = R[u]_(R[u] ∩ M) is a Ct𝔮-subring satisfying the first three conditions and then we apply Lemma <ref> to find S such that all conditions are satisfied. Note that |R|=|R[u]|, so R[u] satisfies condition (i) of Definition <ref>. Let P ∈(T) and let f ∈ R[u] ∩ P. Then f = r_nu^n + ⋯ + r_1u + r_0 where r_n, … ,r_1, r_0 ∈ R. It follows thata^nf = r_nc^n + ⋯ + r_1ca^n-1 + r_0a^n,and we see that a^nf ∈ R ∩ P=(0) because by assumption, R is a Ct 𝔮-subring. Since R is a domain, a is not a zero-divisor in T. It follows that f=0 and R[u] ∩ P=(0) for all P ∈(T). Therefore, R[u] satisfies condition (ii) of Definition <ref>.Now let i ∈ℕ and suppose x ∈ F_R[u]∩ Q for some Q ∈ C_i. Then, f_2x=f_1 for f_1,f_2 ∈ R[u]. Find a positive integer m such that a^mf_1, a^mf_2 ∈ R. Then, a^mf_2x=a^mf_1, and so x ∈ F_R∩ Q ⊆ q_iT by the properties of Ct𝔮-subrings. It follows that R[u] satisfies condition (iii) of Definition <ref>. By Lemma <ref>, S'= R[u]_(R[u] ∩ M) is a Ct𝔮-subring of T. Lastly, we apply Lemma <ref> to find a Ct𝔮-subring S of T that satisfies all four properties of the lemma.Now suppose that I = (y_1, …,y_m)R is generated by m > 1 elements, and assume that the statement holds for all ideals with m - 1 generators. Assume, without loss of generality, that y_j ≠ 0 for all 1 ≤ j ≤ m. Since c ∈ IT, there exists t_1, t_2,… ,t_m ∈ T such that c = y_1 t_1 + y_2 t_2 + ⋯ + y_m t_m. By Lemma <ref>, q_iR is a prime ideal of R for all i ∈ℕ. If I were contained in infinitely many ideals of the form q_iR for i ∈ℕ, then y_1 would be in infinitely many ideals of the form q_iR, violating Lemma <ref>.It follows that I is contained in at most finitely many ideals of the form q_iR. If I ⊈q_iR for all i ∈ℕ, then let I' = I. 
On the other hand, if I ⊆ q_iR for at least one i ∈ℕ then, without loss of generality, let I be contained in q_1 R, …, q_s R where, if i,j ∈{1,2, … ,s}, then q_iR = q_j R if and only if i = j, and, if j > s and I ⊆ q_jR, then q_jR = q_iR for some i ∈{1,2, … ,s}.Now, y_1 ∈ q_1R and so, by Lemma <ref>, there is a largest positive integer k_1 such that y_1 ∈ q_1^k_1R. Similarly, there is a largest positive integer k_2 such that y_2 ∈ q_1^k_2R. Continue to find k_3, … ,k_s.Let ℓ_1 = min{k_1,…, k_s}. In a similar manner, define ℓ_2, … ,ℓ_s using q_2R, … ,q_sR.Now c = q_1^ℓ_1⋯ q_s^ℓ_s (y_1' t_1 + ⋯ + y_m' t_m) where y'_1, … ,y'_m ∈ R with y_k = q_1^ℓ_1⋯ q_s^ℓ_sy_k' for all 1 ≤ k ≤ m and, if j ∈{1,2, … ,s} then there is a y' ∈{y'_1, … ,y'_m} satisfying y' ∉q_jR. Letting I' = (y_1', …, y_m'), it follows that I' ⊈q_jR for all j ∈{1,2, …,s}. If j > s and I' ⊆ q_jR, then I ⊆ I' ⊆ q_jR, a contradiction.Hence, I' ⊈q_iR for all i ∈ℕ.Let c' = y_1' t_1 + ⋯ + y_m' t_m and notice that c = q_1^ℓ_1⋯ q_s^ℓ_s c'. Now c ∈ q_1^ℓ_1T ∩ R = q_1^ℓ_1R by Lemma <ref> and so q_1^ℓ_1⋯ q_s^ℓ_s c' = q_1^ℓ_1c_1 for some c_1 ∈ R. Cancelling, we get that q_2^ℓ_2⋯ q_s^ℓ_sc' = c_1 ∈ R. Repeat the argument to cancel q_2^ℓ_2, …, q_s^ℓ_s and conclude that c' ∈ R.To proceed, use Lemma <ref> to find a Ct𝔮-subring R' of T such that * (y'_1,y_2',…,y_m')R' = (y,y_2',…,y_m')R' where y∉q_i R' for all i ∈ℕ,* R ⊆ R' ⊆ T,* |R'| = |R|, and* q_i T ∩ R' = q_i R' for all i ∈ℕ. Without loss of generality, reorder the generators of I'R' = (y,y_2',…,y_m')R' so that y_2' ∉ q_i R' for all i ∈ℕ. Our goal is now to find a t ∈ T that allows us to adjoin t_1 + y_2' t to R' without disturbing the Ct𝔮-subring properties. First note that if Q ∈⋃_i ∈ℕC_i and(t_1 + y_2' t) + Q = (t_1 + y_2' t') + Q for t,t' ∈ T, then we have that y_2'(t - t') ∈ Q. By Lemma <ref>, Q ∩ R' = q_i R' for all Q ∈ C_i. Since y_2' ∉ q_iR' for all i ∈ℕ, we have y_2' ∉ Q. Thus, (t - t') ∈ Q, and therefore, t + Q = t' + Q. As a result, if t + Q ≠ t' + Q, then (t_1 + y_2' t) + Q ≠ (t_1 + y_2' t') + Q. Let Q ∈⋃_i ∈ℕC_i and let D_(Q) be a full set of coset representatives of elements t' + Q of T/Q that make (t_1 + y_2' t') + Q algebraic over R' / (R' ∩ Q). Let D = ⋃_Q ∈⋃_i ∈ℕC_i D_(Q) and note that |D| < |T|. Use Proposition <ref> with I = T and C = ⋃_i ∈ℕC_i to find an element t ∈ T such that t ∉⋃{r + P | r ∈ D, P ∈ C }. Let x = t_1 + y_2' t. Since x+Q is transcendental over R'/ (R' ∩ Q) for all Q ∈⋃_i ∈ℕC_i, R'_1 = R'[x]_(R'[x] ∩ M) is a Ct𝔮-subring of T by Lemma <ref>. Note that |R'_1| = |R'| = |R|. Using Lemma <ref>, let R” be a Ct𝔮-subring of T such that R'_1 ⊆ R”⊆ T, |R”| = |R'_1| = |R|, and q_iT ∩ R” = q_iR” for all i ∈ℕ.We now both add and subtract y_1' y_2' t to see that c'= y_1' t_1 + y_1' y_2' t - y_1' y_2' t + y_2' t_2 + ⋯ + y_m' t_m= y_1' x + y_2'(t_2 - y_1' t) + y_3' t_3 + ⋯ + y_m' t_m. Let I” = (y_2',…,y_m')R” and c” = c' - y_1' x. Then c”∈ I”T ∩ R”. Use the induction assumption to find a Ct𝔮-subring S of T such that |S| = |R”| = |R|, R ⊆ R' ⊆ R”⊆ S ⊆ T, c”∈ I”S, and q_iT ∩ S = q_iS for all i ∈ℕ. Then c' = y_1' x + c”∈ I'S since x ∈ R”⊆ S by construction of R”. Now, recall that c = q_1^ℓ_1⋯ q_s^ℓ_s c', so, c ∈ (q_1^ℓ_1⋯q_s^ℓ_s)I'S, which implies that c ∈ IS. It follows that S is the desired Ct𝔮-subring of T.Lemma <ref> is the analogous version of Lemma 2.11 in <cit.>, and so the proofs are very similar. The lemma shows that we can find a Ct𝔮-subring S of T that satisfies several of our desired properties. 
In particular, S satisfies the condition that IT ∩ S = I for every finitely generated ideal I of S. Before stating Lemma <ref>, we introduce the following useful definition. Let Ω be a well-ordered set and let α∈Ω. We define γ(α) = sup{β∈Ω|β < α}. Let (T,M), C_i, and q_i for i ∈ℕ be as in Definition <ref>. Moreover, suppose that for every i ∈ℕ and for every P ∈ (T/q_iT), we have that P ⊆ Q for some Q ∈ C_i. Let (R, R ∩ M) be a Ct𝔮-subring of T such that q_iT ∩ R = q_iR for all i ∈ℕ. Let J be an ideal of T with J ⊈ Q for all Q ∈⋃_i ∈ℕC_i, and let u + J ∈ T/J. Then there exists a Ct𝔮-subring S of T such that* R ⊆ S ⊆ T,* |R|=|S|,* u+J is in the image of the map S ⟶ T/J,* If u ∈ J, then S ∩ J ⊈ Q for all Q ∈⋃_i ∈ℕC_i, and* For every finitely generated ideal I of S, we have IT ∩ S = I. First use Lemma <ref> to find a Ct𝔮-subring R_0 of T such that R ⊆ R_0 ⊆ T, |R_0| = |R|, q_iT ∩ R_0 = q_iR_0 for all i ∈ℕ, u + J is in the image of the map R_0 ⟶ T/J, and if u ∈ J, then R_0 ∩ J ⊈ Q for all Q ∈⋃_i ∈ℕC_i. Our final ring S will contain R_0 and so S will satisfy conditions (iii) and (iv) automatically. Let Ω = {(I,c) | I is a finitely generated ideal of R_0 and c ∈ IT ∩ R_0}, and note that |Ω| = |R_0| = |R|. Well order Ω such that it does not have a maximal element, and let 0 denote its initial element. We now inductively define an increasing chain of Ct𝔮-subrings of T, one for every element of Ω. The subrings will satisfy the condition that |R_α| = |R| for every α∈Ω and q_iT ∩ R_α = q_iR_α for every i ∈ℕ. Note that R_0 has already been defined. Now let α∈Ω and assume that R_β has been defined for all β < α such that R_β is a Ct𝔮-subring of T with |R_β| = |R|, if μ, ρ∈Ω with μ < ρ < β, then R_μ⊆ R_ρ⊆ R_β, and q_iT ∩ R_β = q_iR_β for all i ∈ℕ. If γ(α) < α then γ(α) = (I,c) for some (I,c) ∈Ω. Define R_α to be the Ct𝔮-subring of T obtained from Lemma <ref> such that R_γ(α)⊆ R_α⊆ T, |R_α| = |R_γ(α)| = |R|, c ∈ IR_α, and q_iT ∩ R_α = q_iR_α for all i ∈ℕ. If γ(α) = α then define R_α = ⋃_β < αR_β. By Lemma <ref>, R_α is a Ct𝔮-subring of T with |R_α| = |R|. If x ∈ q_i T ∩ R_α for some i ∈ℕ, then x ∈ R_β for some β < α and so x ∈ q_iT ∩ R_β = q_iR_β⊆ q_iR_α. Therefore, q_iT ∩ R_α = q_iR_α for every i ∈ℕ. Now let R_1 = ⋃_α∈Ω R_α. By Lemma <ref>, R_1 is a Ct𝔮-subring of T with |R_1| = |R|, and by the argument at the end of the previous paragraph, q_iT ∩ R_1 = q_iR_1 for every i ∈ℕ. Let I be a finitely generated ideal of R_0 and let c ∈ IT ∩ R_0. Then (I,c) = γ(α) for some α∈Ω satisfying γ(α) < α. By construction, c ∈ IR_α⊆ IR_1. It follows that IT ∩ R_0 ⊆ IR_1 for every finitely generated ideal I of R_0. Repeat this construction with R_0 replaced by R_1 to obtain a Ct𝔮-subring R_2 of T with |R_2| = |R|, q_iT ∩ R_2 = q_iR_2 for every i ∈ℕ, and IT ∩ R_1 ⊆ IR_2 for all finitely generated ideals I of R_1. Continue to obtain R_j for every j ∈ℕ. Then, for every j ∈ℕ, we have that R_j is a Ct𝔮-subring of T with |R_j| = |R|, q_iT ∩ R_j = q_iR_j for every i ∈ℕ, and IT ∩ R_j ⊆ IR_j + 1 for every finitely generated ideal I of R_j. We claim that S = ⋃_j ∈ℕR_j is the desired Ct𝔮-subring of T. Note that R ⊆ S ⊆ T. By Lemma <ref>, S is a Ct𝔮-subring of T with |S| = |R|. Let I be a finitely generated ideal of S and let x ∈ IT ∩ S. Let I = (s_1, … ,s_k) where s_j ∈ S. Choose K ∈ℕ so that x,s_1, … ,s_k ∈ R_K. Then x ∈ (s_1, …,s_k)T ∩ R_K ⊆ (s_1, …,s_k)R_K + 1⊆ IS and it follows that IT ∩ S = I for every finitely generated ideal I of S.
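To fix ideas about the function γ defined above, here is a small illustration; the particular well-ordered sets below are chosen only for orientation and play no role in the arguments. If Ω = {0, 1, 2, …} with its usual order, then γ(3) = sup{0, 1, 2} = 2 < 3, so 3 is a successor element, and the recursive constructions define R_3 from R_2 = R_γ(3). If instead Ω = {0, 1, 2, …}∪{ω}, where ω is larger than every natural number, then γ(ω) = sup{0, 1, 2, …} = ω, so γ(ω) = ω and ω is a limit element; in that case the constructions define R_ω = ⋃_β < ωR_β. The two cases γ(α) < α and γ(α) = α used repeatedly in the proofs above and below correspond exactly to this successor/limit dichotomy.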
We are now ready to construct a local domain A containing elements q_i for i ∈ℕ that satisfies the needed conditions to be a precompletion of T, namely, the map A → T/M^2 is onto and IT ∩ A = I for every finitely generated ideal I of A. Furthermore, A satisfies the condition that for each i ∈ℕ, q_iA is a prime ideal of A and the maximal elements in the formal fiber of A at q_iA are exactly the elements of C_i. The proof of Lemma <ref> is largely based on the proof of Lemma 2.12 from <cit.>. Let (T,M), C_i, q_i, and 𝔮 for i ∈ℕ be as in Definition <ref>. Moreover, suppose that for every i ∈ℕ and for every P ∈ (T/q_iT), we have that P ⊆ Q for some Q ∈ C_i. Let Π denote the prime subring of T. Suppose P ∩Π[𝔮]=(0) for every P ∈ (T) and for all i ∈ℕ, if Q ∈ C_i then F_Π[𝔮]∩ Q ⊆ q_iT. Then there exists a local domain A ⊆ T containing q_i for every i ∈ℕ such that the following conditions hold. * A≅ T,* For every i ∈ℕ, q_iA is a prime ideal of A and the maximal elements of the formal fiber of A at q_iA are exactly the elements of C_i,* If J is an ideal of T satisfying that J ⊈ Q for all Q ∈⋃_i ∈ℕC_i, then the map A → T/J is onto and J ∩ A ⊈ Q for all Q ∈⋃_i ∈ℕC_i, * If P' is a nonzero prime ideal of A with P' ≠ q_iA for all i ∈ℕ, then T ⊗_A k(P') ≅ k(P'), where k(P')=A_P'/P'A_P'. By assumption and since T is uncountable, R”_0 = Π[𝔮]_(Π[𝔮] ∩ M) satisfies all conditions to be a Ct𝔮-subring of T except that it might be finite. By Lemma <ref>, R'_0 = F_R”_0∩ T satisfies all conditions to be a Ct𝔮-subring of T except that it might be finite. Moreover, R”_0 ⊆ R'_0 ⊆ T and q_iT ∩ R'_0 = q_iR'_0 for all i ∈ℕ. Note that R'_0 is countable since R”_0 is countable. Let Q ∈⋃_i ∈ℕC_i and let D_(Q) be a full set of coset representatives for the cosets t + Q ∈ T/Q that are algebraic over R'_0/(R'_0 ∩ Q). Let D = ⋃_Q ∈⋃_i ∈ℕC_iD_(Q), and note that |D| < |T|. Use Proposition <ref> with I = T and C = ⋃_i ∈ℕC_i to find x ∈ T with x ∉⋃{r + P | r ∈ D, P ∈ C}. Then, for all Q ∈⋃_i ∈ℕC_i, x + Q ∈ T/Q is transcendental over R'_0/(R'_0 ∩ Q). By Lemma <ref>, we may replace R'_0 with R'_0[x]_(R'_0[x] ∩ M), which is a Ct𝔮-subring of T. Note that R'_0 is countable. Now use Lemma <ref> to find a Ct𝔮-subring S of T such that R'_0 ⊆ S ⊆ T, |S| = |R'_0|, and q_iT ∩ S = q_iS for all i ∈ℕ. Finally, use Lemma <ref> with J = T and u = 0 to find a Ct𝔮-subring R_0 of T such that S ⊆ R_0 ⊆ T, R_0 is countable and, for every finitely generated ideal I of R_0, we have IT ∩ R_0 = I. Now let Ω = {u + J | J is an ideal of T with J ⊈ Q for all Q ∈⋃_i ∈ℕC_i and u ∈ T}. Since T is infinite and Noetherian, |Ω| ≤ |T|. Well order Ω so that each element has fewer than |Ω| predecessors, and let 0 denote the initial element of Ω. We recursively define a chain of Ct𝔮-subrings of T, one for each element of Ω. We have already defined R_0. Let α∈Ω and assume that R_β has been defined for all β < α such that R_β is a Ct𝔮-subring of T, |R_β| ≤max{ℵ_0, |{μ∈Ω|μ≤β}|}, if ρ < μ≤β, then R_ρ⊆ R_μ, and IT ∩ R_β = I for every finitely generated ideal I of R_β. First suppose γ(α) < α. Then γ(α) = u + J for some u + J ∈Ω. Define R_α to be the Ct𝔮-subring of T obtained from Lemma <ref> so that R_γ(α)⊆ R_α, |R_α| = |R_γ(α)|, u + J is in the image of the map R_α⟶ T/J, if u ∈ J, then R_α∩ J ⊈ Q for all Q ∈⋃_i ∈ℕC_i, and for every finitely generated ideal I of R_α we have IT ∩ R_α = I. Now suppose that γ(α) = α. Then define R_α = ⋃_β < αR_β. Suppose I = (a_1, … ,a_k) is a finitely generated ideal of R_α.
Then if x ∈ IT ∩ R_α, we have for some β < α that x ∈ (a_1, … ,a_k)T ∩ R_β = (a_1, … ,a_k)R_β⊆ I.It follows that IT ∩ R_α = I for every finitely generated ideal I of R_α. Note that |R_α| ≤max{ℵ_0, |{μ∈Ω|μ≤α}|} < |T| and so by Lemma <ref>, R_α is a Ct𝔮-subring of T.We claim that A = ⋃_α∈Ω R_α is the desired domain. By Lemma <ref>, A is quasi-local and satisfies conditions (ii) and (iii) of Definition <ref>. Since A ∩ P = (0) for all associated prime ideals P of T, A is a domain. Since IT ∩ R_α = I for every finitely generated ideal I of R_α we can show, using the same argument as in the previous paragraph, that IT ∩ A = I for every finitely generated ideal I of A. By construction, condition (iii) of the lemma is satisfied. In particular, if Q ∈⋃_i∈ℕC_i, then Q ≠ M and so M^2 ⊈Q. It follows that the map A ⟶ T/M^2 is onto.By Proposition <ref>, A≅ T.Now we show conditions (ii) and (iv) of the lemma are satisfied. If Q ∈ C_i, then by Lemma <ref>, Q ∩ A = q_iA, and so Q is in the formal fiber of A at q_iA. If J is a prime ideal of T such that J ⊈Q for all Q ∈⋃_i ∈ℕC_i, then, by construction, A ∩ J ⊈Q for all Q ∈⋃_i ∈ℕC_i. It follows that J is not in the formal fiber of q_iA for all i ∈ℕ. Therefore, for every i ∈ℕ, the maximal elements of the formal fiber of A at q_iA are exactly the elements of C_i. Now let P' be a nonzero prime ideal of A such that P' ≠ q_iA for all i ∈ℕ. Let J=P'T. First, suppose that J ⊆ Q for some Q ∈⋃_i∈ℕC_i. Then, P' ⊆ J ∩ A ⊆ Q∩ A = q_iA. This implies that ht(P') ≤ (q_iA). But, A is a domain and q_iA is prime, so ht(q_iA)=1. Therefore, either P'=q_iA or P'=(0), which is a contradiction. It follows that J ⊈Q for all Q ∈⋃_i ∈ℕC_i. By construction, we have that A → T/J is onto. Now, since J ∩ A = P'T ∩ A = P', the map A/P' → T/J is an isomorphism, and so, A/P' is complete. Then it follows that T ⊗_A k(P') ≅ (T/P'T)_A-P'≅ (A/P')_A-P'≅ A_P'/P'A_P'≅ k(P').Assume the setting of Lemma <ref> and consider the map f from the setX = {J ∈(T) | J ⊈Q Q ∈⋃_i ∈ℕC_i}to the setY = {P ∈(A) | P ≠ (0)P ≠ q_iAi ∈ℕ}given by J ⟶ J ∩ A. By Lemma <ref>, and since T is a faithfully flat extension of A, we know that f does map elements of X to elements of Y and f is onto. Now let J ∈ X and let P = J ∩ A ∈ Y. Then the map A ⟶ T/J is onto and since J ∩ A = P we have that A/P ≅ T/J. In particular, A/P is complete and so A/P ≅ T/PT. Letting A/P denote its image in T/PT, we have (J/PT) ∩ (A/P) = (J ∩ A)/P = P/P = (0).But, since A/P ≅ T/PT, there can only be one ideal I of T/PT such that I ∩ (A/P) = (0). Thus, J/PT = PT/PT. It follows that J = PT and as a consequence f is injective. Therefore, f is bijective and so there is a one to one correspondence between elements of X and elements of Y. We are now ready to prove the main theorem of this section. Let T be a complete local ring and let Π denote the prime subring of T. For each i ∈ℕ, let C_i be a nonempty countable set of nonmaximal pairwise incomparable prime ideals of T and suppose that, if i ≠ j, then either C_i = C_j or no element of C_i is contained in an element of C_j. 
Then there exists a local domain A ⊆ T with A≅ T and, for all i ∈ℕ, there is a nonzero prime element p_i of A such that C_i is exactly the set of maximal elements of the formal fiber of A at p_iA if and only if there exists a set of nonzero elements 𝔮 = {q_i}_i = 1^∞ of T satisfying the following conditions * For i ∈ℕ we have q_i ∈⋂_Q ∈ C_iQ and, if C_j ≠ C_i and Q' ∈ C_j, then q_i ∉Q',* P ∩Π[𝔮]=(0) for all P ∈(T),* If i ∈ℕ and P' ∈(T/q_iT), then P' ⊆ Q for some Q ∈ C_i, and* If i ∈ℕ and Q ∈ C_i, then F_Π[𝔮]∩ Q ⊆ q_iT where F_Π[𝔮] is the quotient field of Π[𝔮].Suppose there exists a set of nonzero elements 𝔮 = {q_i}_i = 1^∞ of T such that conditions (i)-(iv) hold. Since P ∩Π[𝔮] = (0) for all P ∈(T), q_i is a regular element of T for all i ∈ℕ. By Lemma <ref> the desired A exists with p_i = q_i for every i ∈ℕ. Conversely, suppose A ⊆ T is a local domain with A≅ T and suppose that, for every i ∈ℕ, there is a nonzero prime element p_i of A such that C_i is exactly the set of maximal elements of the formal fiber of A at p_iA. Let q_i = p_i for every i ∈ℕ.We claim that 𝔮 = {q_i}_i = 1^∞ is the desired set of elements of T.Fix i ∈ℕ and suppose Q ∈ C_i. Then Q is in the formal fiber of A at q_iA and so Q ∩ A = q_iA. It follows that q_i ∈ Q. Thus, q_i ∈⋂_Q ∈ C_iQ. Now suppose C_j ≠ C_i and Q' ∈ C_j. By hypothesis, no element of C_j is contained in an element of C_i. Note that Q' ∩ A = q_jA. If q_i ∈ Q' then q_iA ⊆ Q' ∩ A = q_jA and so q_iA = q_jA. Thus Q' is in the formal fiber of A at q_iA. Therefore, Q' ⊆ Q for some Q ∈ C_i, a contradiction.It follows that condition (i) of the theorem holds.To prove condition (ii) is satisfied, observe that, since the extension A ⊆A = T is faithfully flat, any zerodivisor of T which is in A must be a zerodivisor of A. Since A is a domain and Π[𝔮] ⊆ A, we must have that P ∩Π[𝔮] = (0) for all P ∈ (T). Now we show condition (iii) holds. Since the completion of A / (q_iT ∩ A) = A / q_iA is T/q_iT, all zerodivisors of T/q_iT contained in A/q_iA are zero-divisors of A/q_iA. But, A/q_iA is a domain since q_iA is prime. Thus if P'∈(T/q_iT) then P' ∩ A ⊆ q_iT ∩ A = q_iA. Since q_i ∈ P', we also have q_iA ⊆ A ∩ P', which gives us that P' ∩ A = q_iA. Thus, P' is in the formal fiber of A at q_iA, and it follows that P' ⊆ Q for some Q ∈ C_i.Finally, to show condition (iv) holds, suppose i ∈ℕ and let Q ∈ C_i. Suppose x ∈ F_Π[𝔮]∩ Q. Then xg=h for some g,h ∈Π[𝔮] ⊆ A with g ≠ 0. Now h ∈ gT ∩ A = gA. Since P ∩Π[𝔮] = (0) for all P ∈ (T), we know that g is not a zero-divisor of T. It follows that x ∈ A and so x ∈ Q ∩ A=q_iA ⊆ q_iT. We end this section with two examples illustrating Theorem <ref>. Let T = ℚ[[x_1,x_2,x_3,x_4,x_5,x_6]], C_1 = {(x_1,x_2 - α x_3) |α∈ℚ}, C_2 = {(x_4,x_5 - α x_6) |α∈ℚ}, q_1 = x_1, q_2 = x_4, and if j > 2, then let C_j = C_2 and q_j = q_2. Then the conditions for Theorem <ref> are satisfied.Therefore, T contains a local domain A with A≅ T and A contains prime elements p_i for every i ∈ℕ such that C_i is exactly the set of maximal elements of the formal fiber of A at p_iA. In fact, by the proof of Theorem <ref>, it can be arranged so that p_1 = x_1, p_2 = x_4, and p_j = x_4 for j > 2. We use the next lemma to aid with our second example. (<cit.>, Lemma 16) Let (T,M) be a complete local ring and let C be a countable set of nonmaximal prime ideals of T.Let D be a countable set of elements of T.Let y ∈ M such that y ∉P for all P ∈ C.Then there exists a unit t of T such thatyt ∉⋃{P + r |P ∈ C, r ∈ D}. Let T = ℚ[[x_1, …,x_K]] for K ≥ 2. 
Then T has infinitely many height one prime ideals and, since T is a unique factorization domain, all height one prime ideals are principal. Let 𝒫 = {p_i}_i∈ℕ be a countable collection of prime elements of T such that p_iT = p_jT if and only if i = j. Choose C_i for i ∈ℕ so that each C_i is a nonempty finite set of prime ideals of the form p_kT where p_k ∈𝒫 and such that if i ≠ j then C_i ∩ C_j = ∅. For i ∈ℕ, let C_i = {p'_1T, …, p'_sT} where p'_k ∈𝒫 for 1 ≤ k ≤ s. Define q'_i = ∏_k = 1^s p'_k. We now use Lemma <ref> to define the set {q_i}_i ∈ℕ so that condition (iv) of Theorem <ref> holds. Given i ∈ℕ, q_i will be an associate of q_i' and so conditions (i) - (iii) of Theorem <ref> will be satisfied. First, let R_0 = ℚ, and note that R_0 ∩ Q = (0) for all Q ∈⋃_i ∈ℕC_i. For j ≥ 0, we use R_j to define q_j + 1, and then we use q_j + 1 to define a countable subring R_j + 1 of T. We proceed in this manner to define q_i for every i ∈ℕ and R_j for every j ≥ 0. We ensure that if i ∈ℕ and Q ∈ C_i thenthen R_j ∩ Q = (0) if i > j and R_j ∩ Q = q_iR_j if i ≤ j. Assume that j ≥ 0 and that R_j and q_i for i ≤ j have been defined to satisfy these conditions. We now define q_j+ 1 and R_j + 1. Let X_j+ 1 = {Q ∈⋃_i ∈ℕC_i | Q ∉C_j+ 1}. Note that q'_j+ 1∉Q for all Q ∈ X_j+ 1. For Q ∈ X_j+ 1, let D_(Q) be a full set of coset representatives of the cosets t + Q ∈ T/Q that are algebraic over R_j/(R_j ∩ Q). Define D = ⋃_Q ∈ X_j+ 1D_(Q) and note that D is countable. Use Lemma <ref> with C = X_j+ 1 and y = q'_j+ 1 to find a unit t_j+ 1∈ T such that q'_j+ 1t_j+ 1 + Q is transcendental over R_j/(R_j ∩ Q) for all Q ∈ X_j+ 1.Define q_j+ 1 = q'_j+ 1t_j+ 1, and define R_j + 1 = R_j[q_j+ 1]. To see that, for this choice of q_j+ 1 and R_j + 1 our desired properties hold, suppose i ∈ℕ and Q ∈ C_i. Let f ∈ R_j + 1∩ Q. Then f = r_nq_j+ 1^n + ⋯ + r_1q_j+ 1 + r_0 ∈ Q for r_k ∈ R_j. First suppose i ≠ j + 1. Then, since q_j+ 1 + Q is transcendental over R_j/(R_j ∩ Q) for all Q ∈ X_j+ 1, we have that r_k ∈ Q ∩ R_j for 1 ≤ k ≤ n. If i > j + 1 then Q ∩ R_j = (0) and so R_j + 1∩ Q = (0).If i < j + 1, then Q ∩ R_j = q_iR_j and so f ∈ q_iR_j + 1. It follows that R_j + 1∩ Q = q_iR_j + 1. It remains to consider the case where i = j + 1. In this case, q_j + 1∈ Q, and so r_0 ∈ R_j ∩ Q = (0). Thus, f = q_j + 1(r_nq_j + 1^n - 1 + ⋯ + r_1) ∈ q_j + 1R_j + 1, and it follows that R_j + 1∩ Q = q_j + 1R_j + 1.We now show that, for this choice of the set 𝔮 = {q_i}_i ∈ℕ, condition (iv) of Theorem <ref> is satisfied. Let i ∈ℕ and let Q ∈ C_i. Suppose x ∈ F_Π[𝔮]∩ Q. Then there is a N ∈ℕ such that x = f/g for f,g ∈ R_N with g ≠ 0. If i > N, then f = xg ∈ R_N ∩ Q = (0) and so 0 = x ∈ q_iT and we have that F_Π[𝔮]∩ Q ⊆ q_iT. Now suppose i ≤ N. If f,g ∈ q_iT then f,g ∈ R_N ∩ Q = q_iR_N, and so f = q_if' and g = q_ig' for f',g' ∈ R_N. Then we have x = f'/g'. If f',g' ∈ q_iT, we can repeat this. Note that this process must stop since if it does not, g ∈⋂_ℓ∈ℕ(q_i)^ℓT = (0), a contradiction. Therefore, without loss of generality, we may assume that at least one of f and g is not in q_iT. Now, f = xg ∈ R_N ∩ Q = q_iR_N ⊆ q_iT = (t_i∏_k = 1^sp'_k)T. Since f ∈ q_iT, we have that g ∉q_iT. Suppose g ∈ p'_mT for some m ∈{1,2, …,s}. Then g ∈ R_N ∩ p'_mT = q_iR_N ⊆ q_iT, a contradiction. It follows that x ∈ p'_kT for all k ∈{1,2, …, s}, and so x ∈ q_iT. Hence, F_Π[𝔮]∩ Q ⊆ q_iT. 
By Theorem <ref>, T contains a local domain A with A≅ T and A contains prime elements p”_i for every i ∈ℕ such that C_i is exactly the set of maximal elements of the formal fiber of A at p”_iA. In fact, by the proof of Theorem <ref>, it can be arranged so that p”_i = q_i for all i ∈ℕ. § COUNTABLE PRECOMPLETIONS It can be shown using Remark <ref> that the precompletion constructed in Lemma <ref> is necessarily uncountable. Therefore, we have established that it is possible to control the formal fibers of countably many height one prime ideals for an uncountable precompletion. It is interesting to ask whether we can do the same for a countable one. In this section, we prove a result analogous to Theorem <ref> where we require that the domain A be countable. Before we begin, consider the following illustrative example. Suppose that A is a countable local domain with dim(A) = 3 and let p be a prime element of A. Let T be the completion of A with respect to its maximal ideal. The completion of the dimension two domain A' = A/pA is T' = T/pT. Note that A' is countable and, by Proposition <ref>, T' has uncountably many prime ideals. A nonzero element of A' is contained in only finitely many height one prime ideals of T', and it follows that only countably many height one prime ideals of T' contain nonzero elements of A'. Hence, the formal fiber of A' at its zero ideal contains uncountably many height one prime ideals of T'. As a result, the formal fiber of A at pA contains uncountably many height two prime ideals of T. So, the formal fiber of A at pA cannot have only countably many maximal elements. This example shows that Theorem <ref> does not hold if we simply replace the phrase “Then there exists a local domain..." with the phrase “Then there exists a countable local domain..." In the countable version, therefore, we weaken the condition that C_i is exactly the set of maximal elements of the formal fiber of A at p_iA to the condition that the elements of C_i are merely contained in the formal fiber of A at p_iA. We now state a lemma about cardinalities of residue fields of local rings. (<cit.>, Lemma 2.12) Let (A,M) be a local ring. If A/M is finite, then A/M^n is finite for all n. If A/M is infinite, then |A/M^n| = |A/M| for all n. For our precompletion A to be countable, it is necessary that T/M is countable. We now prove a lemma analogous to Lemma <ref> in the previous section, where we assume T/M is countable. The proof of Lemma <ref> follows the proof of Lemma <ref> with only minor adjustments. Let (T,M), C_i, q_i, and 𝔮 for i ∈ℕ be as in Definition <ref> with the added condition that T/M is countable. Moreover, suppose that for every i ∈ℕ and for every P ∈ (T/q_iT), we have that P ⊆ Q for some Q ∈ C_i. Let Π denote the prime subring of T. Suppose P ∩Π[𝔮]=(0) for every P ∈ (T) and for all i ∈ℕ, if Q ∈ C_i then F_Π[𝔮]∩ Q ⊆ q_iT. Then there exists a countable local domain A ⊆ T containing q_i for every i ∈ℕ such that the following conditions hold.* A≅ T, and* For every i ∈ℕ, q_iA is a prime ideal of A and the formal fiber of A at q_iA contains the elements of C_i. Follow the proof of Lemma <ref> with Ω = T/M^2. By assumption, T/M is countable, so by Lemma <ref>, Ω is countable. Note that, since elements in each C_i are nonmaximal, we have that M^2 ⊈ Q for all Q ∈⋃_i ∈ℕC_i. Now make the following minor adjustment to the proof of Lemma <ref>. When constructing the chain of Ct𝔮-subrings of T, if γ(α) < α then γ(α) = u + M^2 for some u + M^2 ∈Ω.
Define R_α to be the Ct𝔮-subring of T obtained from Lemma <ref> so that |R_α| = |R_γ(α)|, u + M^2 is in the image of the map R_α⟶ T/M^2,and for every finitely generated ideal I of R_α we have IT ∩ R_α = I. Then by the proof of Lemma <ref>, A is a domain containing q_i for every i ∈ℕ, A≅ T, q_iA is a prime ideal of A, and the formal fiber of A at q_iA contains the elements of C_i. Moreover, since Ω is countable, A is countable.We are now ready to state and prove the analog of Theorem <ref> where we require A to be countable.Let (T,M) be a complete local ring and let Π denote the prime subring of T. For each i ∈ℕ, let C_i be a nonempty countable set of nonmaximal pairwise incomparable prime ideals of T and suppose that, if i ≠ j, then either C_i = C_j or no element of C_i is contained in an element of C_j. Then, there exists a countable local domain A ⊆ T with A≅ T and, for all i ∈ℕ there is a nonzero prime element p_i of A such that all elements of C_i are in the formal fiber of A at p_iA if and only if there exists a set of nonzero elements 𝔮 = {q_i}_i = 1^∞ of T satisfying the following conditions * For i ∈ℕ we have q_i ∈⋂_Q ∈ C_iQ and, if q_jT ≠ q_iT and Q' ∈ C_j, then q_i ∉Q',* P ∩Π[𝔮]=(0) for all P ∈(T),* If i ∈ℕ and Q ∈ C_i or Q ∈(T/q_iT), then F_Π[𝔮]∩ Q ⊆ q_iT, * T/M is countable, and* If q_iT ≠ q_jT and P_i ∈(T/q_iT), P_j ∈(T/q_jT), then P_i ⊈P_j.Suppose there is a set of nonzero elements 𝔮 = {q_i}_i = 1^∞ of T such that conditions (i)-(v) hold. Since P ∩Π[𝔮] = (0) for all P ∈(T), q_i is regular in T for all i ∈ℕ. For every i ∈ℕ, let X_i be the set of maximal elements of (T/q_iT). DefineC'_i = C_i ∪{Q ∈ C_j | q_jT = q_iT }and defineC”_i = C'_i ∪{P ∈ X_i | P ⊈QQ ∈ C'_i}.Then T, C”_i, q_i, and 𝔮 for i ∈ℕ satisfies the conditions in Definition <ref>. By definition of C”_i, we have that if P ∈(T/q_iT), then P ⊆ Q for some Q ∈ C”_i. By Lemma <ref>, the desired domain A exists with p_i = q_i for every i ∈ℕ.Now suppose there exists a countable local domain A ⊆ T with A≅ T and, for all i ∈ℕ there is a nonzero prime element p_i of A such that all elements of C_i are in the formal fiber of A at p_iA. Let q_i = p_i for every i ∈ℕ. We claim that 𝔮 = {q_i}_i = 1^∞ is the desired set of elements of T. Let i ∈ℕ and suppose Q ∈ C_i. Then Q is in the formal fiber of A at q_iA and so q_i ∈ Q. It follows that q_i ∈⋂_Q ∈ C_iQ. Now suppose q_jT ≠ q_iT and let Q' ∈ C_j. If q_i ∈ Q' then q_i ∈ Q' ∩ A = q_jA. Since q_i and q_j are both prime elements of A, we have q_iA = q_jA and so q_iT = q_jT, a contradiction.Hence, condition (i) of the theorem holds. The arguments that conditions (ii) and (iii) hold follow from the arguments in the last three paragraphs of the proof of Theorem <ref>. Since A = T, we have that |T/M| = |A/(A ∩ M)| ≤ |A|. So, T/M is countable and condition (iv) is satisfied. Now suppose q_iT ≠ q_jT with P_i ∈(T/q_iT), P_j ∈(T/q_jT) and P_i ⊆ P_j.The completion of the domain A' = A/q_iA is T' = T/q_iT. All associated prime ideals of T' must be in the formal fiber of A' at (0). It follows that A ∩ P_i = q_iA.Similarly, A ∩ P_j = q_jA. Therefore, q_iA ⊆ q_jA. Since q_i and q_j are nonzero prime elements of A, we have q_iA = q_jA. Therefore q_iT = q_jT, a contradiction. It follows that condition (v) holds. 
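To see what condition (v) rules out, consider the following small example; it is included only as an illustration, and the specific elements are not taken from the construction above. Let T = ℚ[[x,y]], q_1 = x, and q_2 = x(x+y). Then q_1T ≠ q_2T, the only associated prime ideal of T/q_1T is xT, and the associated prime ideals of T/q_2T are xT and (x+y)T, so condition (v) fails for this pair. The argument just given explains why such a pair can never arise as prime elements p_1, p_2 of a precompletion A as in the theorem: we would obtain xT ∩ A = p_1A and xT ∩ A = p_2A, hence p_1A = p_2A and therefore p_1T = p_2T, contradicting xT ≠ x(x+y)T.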
Note that the conditions of Theorem <ref> are satisfied for both Example <ref> and Example <ref>, and so, for those examples, there exists a countable local domain A ⊆ T with A≅ T and A contains prime elements p_i for every i ∈ℕ such that all elements of C_i are in the formal fiber of A at p_iA. § EXCELLENT AND QUASI-EXCELLENT PRECOMPLETIONS In this section we prove results analogous to those of Section <ref>, where we require the domain A to be quasi-excellent, and where we require the domain A to be excellent. We begin with definitions and several important results about quasi-excellent rings and excellent rings. For the remainder of this paper, if A is a ring and P is a prime ideal of A, we use k(P) to denote the field A_P/PA_P. Recall that a Noetherian local ring A is said to be quasi-excellent if it satisfies the following two conditions:* for all P ∈ Spec(A), the ring Â⊗_A L is regular for every finite field extension L of k(P);* Reg(B) ⊂ Spec(B) is open for every finitely generated A-algebra B. Recall, also, that a quasi-excellent ring is said to be excellent if it is universally catenary. The next lemma gives necessary and sufficient conditions for a precompletion of a complete local ring T to be quasi-excellent in the case that T contains the rationals. Let (T, M) be a complete local ring containing the rationals. Given a local subring (A, A ∩ M) of T such that Â = T, the ring A is quasi-excellent if and only if, for every P ∈ Spec(A) and every Q ∈ Spec(T) satisfying Q ∩ A = P, the ring (T/PT)_Q is a regular local ring. Note that, using the language of formal fibers, Lemma <ref> says that A is quasi-excellent if and only if, for every prime ideal P of A, the ring (T/PT)_Q is a regular local ring for all elements Q in the formal fiber of A at P. The following two theorems follow from Theorem 31.6 and Theorem 31.7 in <cit.>. Let A be a local ring such that its completion is equidimensional. Then A is universally catenary. Suppose A is a local domain that is universally catenary. Then the completion of A is equidimensional. We are now ready to prove the main theorems of this section. In particular, we show the quasi-excellent and excellent analogs to Theorem <ref>. Let T be a complete local ring containing the rationals, and let Π≅ℤ denote the prime subring of T. For each i ∈ℕ, let C_i be a nonempty countable set of nonmaximal pairwise incomparable prime ideals of T and suppose that, if i ≠ j, then either C_i = C_j or no element of C_i is contained in an element of C_j.
Conditions (i), (ii), (iii), and (iv) follow from the proof of Theorem <ref>.If Q ∈⋃_i ∈ℕC_i and q_i ∈ Q then Q is in the formal fiber of A at q_iA and so by Lemma <ref>, the ring (T/q_iT)_Q is a regular local ring.It follows that condition (v) holds. To show that condition (vi) holds, suppose P is a prime ideal of T such that there is an i ∈ℕ with with P ⊆ Q for some Q ∈ C_i and q_i ∉P. Then we have A ∩ P ⊆ A ∩ Q = q_iA and it follows that A ∩ P = (0).Hence P is in the formal fiber of A at (0). By Lemma <ref> the ring (T/(0)T)_P ≅ T_P is a regular local ring.Now suppose there exists a set of nonzero elements 𝔮 = {q_i}_i = 1^∞ of T such that conditions (i)-(vi) hold. By Lemma <ref>, there exists a local domain A ⊆ T such that A≅ T, and, for every i ∈ℕ, q_i ∈ A and q_iA is a prime ideal of A with the set of maximal elements of the formal fiber of A at q_iA exactly C_i. Moreover, if J is an ideal of T satisfying that J ⊈ Q for all Q ∈⋃_i ∈ℕC_i, then the map A → T/J is onto and J ∩ A ⊈ Q for all Q ∈⋃_i ∈ℕC_i. Further, if P' is a nonzero prime ideal of A with P' ≠ q_iA for all i ∈ℕ, then T ⊗_A k(P') ≅ k(P'), where k(P')=A_P'/P'A_P'.We now show using Lemma <ref> that A is quasi-excellent. First assume that P' is a nonzero prime ideal of A with P' ≠ q_iA for all i ∈ℕ. Then if Q is in the formal fiber of A at P', the ring (T/P'T)_A - P'≅ T ⊗_A k(P') is isomorphic to k(P'), a field. Since (T/P'T)_Q is a localization of the ring (T/P'T)_A - P', we have that (T/P'T)_Q is also a field and so it is a regular local ring.Now suppose P' = q_iA for some i ∈ℕ. Let Q' be in the formal fiber of A at q_iA. Then Q' ⊆ Q for some Q ∈ C_i. Since condition (v) holds, we have (T/q_iT)_Q ≅ (T_Q/q_iT_Q) is a regular local ring. It follows that (T/q_iT)_Q' is a regular local ring.Finally, we consider the case where P' = (0). If the prime ideal J of T is in the formal fiber of A at (0), then J ∩ A = (0). It follows by condition (iii) of Lemma <ref> that there is some i ∈ℕ such that J ⊆ Q for some Q ∈ C_i. Since J ∩ A = (0), we have q_i ∉J. By condition (vi), T_J is a regular local ring, and so we have that (T/(0)T)_J ≅ T_J is a regular local ring. By Lemma <ref>, A is quasi-excellent. Let T be a complete local ring containing the rationals, and let Π≅ℤ denote the prime subring of T. For each i ∈ℕ, let C_i be a nonempty countable set of nonmaximal pairwise incomparable prime ideals of T and suppose that, if i ≠ j, then either C_i = C_j or no element of C_i is contained in an element of C_j. Then, there exists an excellent local domain A ⊆ T with A≅ T and, for all i ∈ℕ, there is a nonzero prime element p_i of A, such that C_i is exactly the set of maximal elements of the formal fiber of A at p_iA if and only if T is equidimensional and there exists a set of nonzero elements 𝔮 = {q_i}_i = 1^∞ of T satisfying the following conditions * For i ∈ℕ we have q_i ∈⋂_Q ∈ C_iQ and, if C_j ≠ C_i and Q' ∈ C_j, then q_i ∉Q',* P ∩Π[𝔮]=(0) for all P ∈(T),* If i ∈ℕ and P' ∈(T/q_iT), then P' ⊆ Q for some Q ∈ C_i,* If i ∈ℕ and Q ∈ C_i, then F_Π[𝔮]∩ Q ⊆ q_iT, * If Q ∈⋃_i ∈ℕC_i and q_i ∈ Q then T_Q/q_iT_Q is a regular local ring, and* If P is a prime ideal of T such that there is an i ∈ℕ with P ⊆ Q for some Q ∈ C_i and q_i ∉P, then T_P is a regular local ring.Suppose there exists an excellent local domain A ⊆ T with A≅ T and, for all i ∈ℕ, there is a nonzero prime element p_i of A, such that C_i is exactly the set of maximal elements of the formal fiber of A at p_iA. 
Since A is excellent, it is quasi-excellent, and so by Theorem <ref> there exists a set of nonzero elements 𝔮 = {q_i}_i = 1^∞ of T satisfying conditions (i) - (vi). Since A is universally catenary, we have by Theorem <ref> that T is equidimensional.Now, suppose T is equidimensional and there exists a set of nonzero elements 𝔮 = {q_i}_i = 1^∞ of T satisfying conditions (i)-(vi). By Theorem <ref>, there exists a quasi-excellent local domain A ⊆ T with A≅ T and, for all i ∈ℕ, there is a nonzero prime element p_i of A, such that C_i is exactly the set of maximal elements of the formal fiber of A at p_iA. By Theorem <ref>, A is universally catenary, and so A is excellent. Note that the conditions of Theorem <ref> are satisfied for both Example <ref> and Example <ref>, and so for those examples, T contains an excellent local domain A with A≅ T and A contains prime elements p_i for every i ∈ℕ such that C_i is exactly the set of maximal elements of the formal fiber of A at p_iA.§ COUNTABLE EXCELLENT AND QUASI-EXCELLENT PRECOMPLETIONS In this section we prove analogous results to Theorems <ref> and <ref>, but where the domain A is required to be countable. Remark <ref> can be used to show that the precompletions constructed in the proofs of Theorems <ref> and <ref> are uncountable. In order to make our quasi-excellent and excellent precompletions countable, then, we need a different approach. The ideas in this section are inspired by results presented in <cit.>. Given a complete local ring T, our strategy for constructing a countable precompletion of T is to start with a countable Ct𝔮-subring of T given by Theorem <ref>. We then inductively build a countable ascending chain of countable Ct𝔮-subrings of T. Each Ct𝔮-subring in the chain will have completion T. In addition, we adjoin generators of carefully chosen prime ideals of T to each of our subrings. The union of this chain will be a countable quasi-excellent Ct𝔮-subring of T whose completion is T. Before we begin our construction, we state two preliminary results. We note that, if R is a Noetherian ring, we use Sing(R) to denote the set {P ∈(R) | R_P }, and if I is an ideal of R, we use V(I) to denote the set of prime ideals of R that contain I.If R is excellent, then Sing(R) is closed in the Zariski topology, i.e., Sing(R)=V(I) for some ideal I of R. Let (T,M) be a complete local reduced ring and let (R, R∩ M) be a countable local domain with R⊆ T and R = T. Then Σ = ⋃_P∈(R)Q∈T Q∈ (I)for I where Sing(T/PT) = V(I/PT)is a countable set. Furthermore, for any prime ideal Q in this set, Q ⊈P for all P ∈(T).We are ready to begin our construction. We use the next lemma to adjoin generators of specific prime ideals of T to a Ct𝔮-subring of T to obtain another Ct𝔮-subring of T. The result is crucial in showing that our final ring is quasi-excellent. Let (T,M), C_i, q_i, and 𝔮 for i ∈ℕ be as in Definition <ref>. Moreover, suppose that for every i ∈ℕ and for every P ∈ (T/q_iT), we have that P ⊆ Q for some Q ∈ C_i.Suppose (R, R ∩ M) is a Ct𝔮-subring of T such that, for all i ∈ℕ, q_iT ∩ R = q_iR. Let J ∈(T) such that J ⊈ P for all P ∈⋃_i ∈ℕC_i. Then there exists a Ct𝔮-subring of T, (R',R' ∩ M), such that R ⊆ R', |R'| = |R|, and R' contains a generating set for J. Let J = (x_1,…,x_n). We inductively define a chain of Ct𝔮-subrings of T, R=R_1 ⊆ R_2 ⊆…⊆ R_n+1 such that R_n+1 contains a generating set for J. 
To construct R_i+1 from R_i, we show that there exists an element x'_i of T so that R_i+1 = R_i[x'_i]_(R_i[x'_i]∩ M) is a Ct𝔮-subring of T and such that we can replace x_i with x'_i in the generating set of J. By Proposition <ref>, there exists y ∈ J such that y ∉ P for all P ∈⋃_i ∈ℕC_i. We construct R_2 by adjoining a carefully chosen element of T to R_1 = R. We find α_1 ∈ M so that x'_1 = x_1+α_1y satisfies that x'_1+P ∈ T/P is transcendental over R_1/(R_1 ∩ P) = R/(R ∩ P) for all P ∈⋃_i ∈ℕC_i. To do this, first let P ∈⋃_i ∈ℕC_i and let D_(P) be a full set of coset representatives of the cosets t+P ∈ T/P that make x_1+ty+P algebraic over R_1/(R_1 ∩ P). Let D_1=⋃_P ∈⋃_i ∈ℕC_iD_(P) and note that |D_1| ≤ |R|. It follows that |D_1| < |T|. Applying Proposition <ref> with I = M, we have that M ⊈⋃{r+P | r ∈ D_1, P ∈⋃_i ∈ℕC_i}, so there exists α_1 ∈ M such that x_1 + α_1y + P is transcendental over R_1/(R_1 ∩ P) for every P ∈⋃_i ∈ℕC_i. Let x'_1 = x_1 + α_1y. By Lemma <ref>, we have that R_2 = R_1[x'_1]_(R_1[x'_1]∩ M) is a Ct𝔮-subring of T. We now show that J = (x'_1, x_2,…,x_n). Since x'_1-x_1 ∈ MJ, it follows that (x'_1, x_2, …, x_n)+MJ=J, and thus, by Nakayama's Lemma, (x'_1, x_2, …, x_n)=J. To construct R_3, use the above argument with R_1 replaced by R_2 to find α_2 ∈ M such that x'_2 = x_2 + α_2y satisfies that R_3 = R_2[x'_2]_(R_2[x'_2] ∩ M) is a Ct𝔮-subring of T. We have that J = (x'_1, x'_2, x_3, …, x_n) by an argument similar to the way we argued that J = (x'_1, x_2, …, x_n). Repeat the above process for each i = 4,…,n+1 to obtain a chain of Ct𝔮-subrings of T, R_1 ⊆…⊆ R_n+1 and J = (x'_1, x'_2, …, x'_n). By construction, x'_i ∈ R_i+1, and so R_n+1 contains a generating set for J. Note that |R_n + 1| = |R|. Thus, R' = R_n+1 is our desired Ct𝔮-subring of T. We are now ready to state and prove the main theorems for this section. Let (T,M) be a complete local ring containing the rationals and let Π≅ℤ denote the prime subring of T. For each i ∈ℕ, let C_i be a nonempty countable set of nonmaximal pairwise incomparable prime ideals of T and suppose that, if i ≠ j, then either C_i = C_j or no element of C_i is contained in an element of C_j. Then, there exists a countable quasi-excellent local domain A ⊆ T with A≅ T and, for all i ∈ℕ there is a nonzero prime element p_i of A such that all elements of C_i are in the formal fiber of A at p_iA if and only if there exists a set of nonzero elements 𝔮 = {q_i}_i = 1^∞ of T satisfying the following conditions * For i ∈ℕ we have q_i ∈⋂_Q ∈ C_iQ and, if q_jT ≠ q_iT and Q' ∈ C_j, then q_i ∉Q',* P ∩Π[𝔮]=(0) for all P ∈(T),* If i ∈ℕ and Q ∈ C_i or Q ∈(T/q_iT), then F_Π[𝔮]∩ Q ⊆ q_iT, * T/M is countable,* If q_iT ≠ q_jT and P_i ∈(T/q_iT), P_j ∈(T/q_jT), then P_i ⊈P_j, and * Suppose 𝒞 = {Q ∈(T) | Q ∈⋃_i ∈ℕC_i or Q ∈(T/q_iT) for some i ∈ℕ} and J ∈(T) satisfies J ⊆ Q for some Q ∈𝒞. If q_i ∈ J for some i ∈ℕ then (T/q_iT)_J is a regular local ring, while if q_i ∉J for all i ∈ℕ, then T_J is a regular local ring. Suppose there exists a countable quasi-excellent local domain A ⊆ T with A≅ T and, for all i ∈ℕ there is a nonzero prime element p_i of A such that all elements of C_i are in the formal fiber of A at p_iA. Let q_i = p_i for every i ∈ℕ. By Theorem <ref>, the first five conditions hold. Now let J be a prime ideal of T and suppose J ⊆ Q for some Q ∈𝒞. Then A ∩ J ⊆ A ∩ Q = q_iA for some i ∈ℕ. If q_i ∈ J then A ∩ J = q_iA and so J is in the formal fiber of A at q_iA. By Lemma <ref>, the ring (T/q_iT)_J is a regular local ring. Now suppose q_i ∉J for all i ∈ℕ. Then we have A ∩ J =(0).
Hence J is in the formal fiber of A at (0). By Lemma <ref> the ring (T/(0)T)_J ≅ T_J is a regular local ring. Now suppose conditions (i) - (vi) hold. By Theorem <ref>, there is a countable local domain R_0 ⊆ T with R_0 ≅ T and, for all i ∈ℕ, there is a nonzero prime element p_i of A such that all elements of C_i are in the formal fiber of A at p_iA. In fact, by the proof of Theorem <ref>, we can choose p_i = q_i for every i ∈ℕ.For every i ∈ℕ, let X_i be the set of maximal elements of (T/q_iT). DefineC'_i = C_i ∪{Q ∈ C_j | q_jT = q_iT }and defineC”_i = C'_i ∪{P ∈ X_i | P ⊈QQ ∈ C'_i}.Then T, C”_i, q_i, and 𝔮 for i ∈ℕ satisfies the conditions in Definition <ref>. By the definition of C”_i, we have that if P ∈(T/q_iT), then P ⊆ Q for some Q ∈ C”_i. Suppose i ∈ℕ and Q ∈ C”_i. If x ∈ F_R_0∩ Q then r = xs for some r,s ∈ R_0. So r ∈ sT ∩ R_0 = sR_0. Since s is not a zerodivisor in T we have that x ∈ R_0. Thus x ∈ R_0 ∩ Q = q_iR_0 ⊆ q_iT. It follows that R_0 is a Ct𝔮-subring of T. We note that in this case, and for the rest of the proof, when we say that a ring is a Ct𝔮-subring of T, we mean with respect to the C”_i's. We now show that T is reduced. Let P ∈(T).Then there is an i ∈ℕ such that P ⊆ Q for some Q ∈ C”_i. (To see this, recall the second paragraph of the proof of Lemma <ref>). Since q_i is not a zerodivisor, q_i ∉P. By condition (vi), T_P is a regular local ring. As a consequence, T satisfies Serre's (R_0) condition. In addition, if P ∈(T) then PT_P ∈(T_P). If htP > 0, then T_P is a regular local ring of dimension at least one and depth zero, a contradiction. Thus, T satisfies Serre's (S_1) condition and it follows that T is reduced.We define a countable ascending chain of countable Ct𝔮-subrings of T recursively, starting with R_0. For each Ct𝔮-subring in the chain we ensure its completion is T. Let j ≥ 1 and assume we have defined R_j so that it is a countable Ct𝔮-subring of T with R_j≅ T. We now define R_j + 1. By Lemma <ref>, the setΣ =⋃_P ∈(R_j){J| J ∈(I)for Iwhere Sing(T/PT)=V(I/PT)}is countable. DefineΩ = {J ∈Σ| J ⊈QQ ∈⋃_i ∈ℕC”_i},and note that Ω is countable. Enumerate Ω, (J_k)_k ∈ℕ. We recursively define a countable ascending chain of countable Ct𝔮-subrings of T, S_0 ⊆ S_1 ⊆⋯. Let S_0 = R_j. For k ∈ℕ, we ensure that S_k + 1 contains a generating set for all ideals in the set {J_1, J_2, … , J_k}. Assume that k ≥ 0 and S_k has been defined so that it is a countable Ct𝔮-subring of T, q_iT ∩ S_k = q_iS_k for all i ∈ℕ, and S_k contains a generating set for all ideals in the set {J_1, … ,J_k - 1}. Let S'_k + 1 be the countable Ct𝔮-subring of T obtained from Lemma <ref> so that S_k ⊆ S'_k + 1 and S'_k + 1 contains a generating set for J_k. Define S_k + 1 to be thecountable Ct𝔮-subring of T obtained from Lemma <ref> so that S'_k + 1⊆ S_k + 1, and q_iT ∩ S_k + 1 = q_iS_k + 1 for all i ∈ℕ. Define R'_j + 1 = ⋃_k = 1^∞S_k. By Lemma <ref>, R'_j + 1 is a countable Ct𝔮-subring of T, and by construction, R'_j + 1 contains a generating set for every element of Ω. For every i ∈ℕ and for every k ∈ℕ we have that q_iT ∩ S_k = q_iS_k, and it follows that q_iT ∩ R'_j + 1 = q_iR'_j + 1 for every i ∈ℕ. Now use Lemma <ref> to obtain a countable Ct𝔮-subring of T, R_j + 1 such that R'_j + 1⊆ R_j + 1 and IT ∩ R_j + 1 = IR_j + 1 for every finitely generated ideal I of R_j + 1. Since R_j + 1 contains R_0 and the map R_0 ⟶ T/M^2 is onto, we have the the map R_j + 1⟶ T/M^2 is onto.By Proposition <ref>, R_j + 1≅ T.Define A = ⋃_j=0^∞R_j. Then A is a countable Ct𝔮-subring of T. 
Since the completion of R_j is T for all j ∈ℕ, we have that IT ∩ R_j = I for every finitely generated ideal I of R_j. It follows (see the proof of Lemma <ref>) that IT ∩ A = I for every finitely generated ideal I of A. In addtion, we have that the map A ⟶ T/M^2 is onto.By Proposition <ref>, A is Noetherian and A≅ T. Since A is a Ct𝔮-subring of T, it is a domain and, if i ∈ℕ and Q ∈ C”_i then by Lemma <ref>, Q ∩ A = q_iA and so all elements of C”_i are in the formal fiber of A at q_iA.It follows that all elements of C_i are in the formal fiber of A at q_iA.It remains to show that A is quasi-excellent. To do this, we use Lemma <ref>. Let Q' ∈(T) and let P = Q' ∩ A. Suppose that (T/PT)_Q' is not a regular local ring. Then Q'/PT ∈(T/PT) = V(I/PT) for some ideal I of T. Thus there is a prime ideal J of T that is minimal over I and J ⊆ Q'. Then J/PT ∈ V(I/PT) and so (T/PT)_J is not a regular local ring. Note that A ∩ J = P. Suppose that J ⊆ Q for some Q ∈⋃_i ∈ℕC”_i. Then P = A ∩ J ⊆ A ∩ Q and so P = (0) or P = q_iA for some i ∈ℕ. If P = (0), then q_i ∉J for all i ∈ℕ and so by condition (vi), T_J ≅ (T/PT)_J is a regular local ring, a contradiction. If P = q_iA for some i ∈ℕ, then by condition (vi), (T/q_iT)_J ≅ (T/PT)_J is a regular local ring, also a contradiction.It follows that J ⊈Q for all Q ∈⋃_i ∈ℕC”_i. Let P = (a_1,…, a_m), and choose j ∈ℕ so that a_k ∈ R_j for all 1≤ k ≤ m. Let P_j = (a_1, …,a_m)R_j. Then T/PT = T/P_jT and so by construction, R_j + 1 contains a generating set for J. It follows that A contains a generating set for J. Therefore, (T/PT)_J ≅ (T/(A ∩ J)T)_J ≅ (T/JT)_J, which is a field. This contradicts that (T/PT)_J is not a regular local ring. By Theorem <ref>, it follows that A is quasi-excellent.Let (T,M) be a complete local ring containing the rationals and let Π≅ℤ denote the prime subring of T. For each i ∈ℕ, let C_i be a nonempty countable set of nonmaximal pairwise incomparable prime ideals of T and suppose that, if i ≠ j, then either C_i = C_j or no element of C_i is contained in an element of C_j. Then, there exists a countable excellent local domain A ⊆ T with A≅ T and, for all i ∈ℕ there is a nonzero prime element p_i of A such that all elements of C_i are in the formal fiber of A at p_iA if and only if T is equidimensional and there exists a set of nonzero elements 𝔮 = {q_i}_i = 1^∞ of T satisfying the following conditions * For i ∈ℕ we have q_i ∈⋂_Q ∈ C_iQ and, if q_jT ≠ q_iT and Q' ∈ C_j, then q_i ∉Q',* P ∩Π[𝔮]=(0) for all P ∈(T),* If i ∈ℕ and Q ∈ C_i or Q ∈(T/q_iT), then F_Π[𝔮]∩ Q ⊆ q_iT, * T/M is countable,* If q_iT ≠ q_jT and P_i ∈(T/q_iT), P_j ∈(T/q_jT), then P_i ⊈P_j, and* Suppose 𝒞 = {Q ∈(T) | Q ∈⋃_i ∈ℕC_iQ ∈(T/q_iT) i ∈ℕ} and J ∈(T) satisfies J ⊆ Q for some Q ∈𝒞. If q_i ∈ J for some i ∈ℕ then (T/q_iT)_J is a regular local ring, while if q_i ∉J for all i ∈ℕ, then T_J is a regular local ring. Suppose there exists a countable excellent local domain A ⊆ T with A≅ T and, for all i ∈ℕ there is a nonzero prime element p_i of A such that all elements of C_i are in the formal fiber of A at p_iA. By Theorem <ref>, there is a set of nonzero elements 𝔮 = {q_i}_i = 1^∞ of T that satisfy the six conditions given in the theorem. Since A is universally catenary, T is equidimensional by Theorem <ref>.Now suppose T is equidimensional and there is a set of nonzero elements 𝔮 = {q_i}_i = 1^∞ of T that satisfy the six conditions given in the theorem. 
By Theorem <ref>, there exists a countable quasi-excellent local domain A ⊆ T with A≅ T and, for all i ∈ℕ there is a nonzero prime element p_i of A such that all elements of C_i are in the formal fiber of A at p_iA. By Theorem <ref>, since T is equidimensional, A is universally catenary, and so A is excellent. Note that the conditions of Theorem <ref> are satisfied for both Example <ref> and Example <ref> and so for those examples, there exists a countable excellent local domain A ⊆ T with A≅ T and A contains prime elements p_i for every i ∈ℕ such that all elements of C_i are in the formal fiber of A at p_iA.

§ ACKNOWLEDGEMENTS We thank Williams College and the National Science Foundation, via NSF Grant DMS2241623, and NSF Grant DMS1947438 for their generous funding of our research. | http://arxiv.org/abs/2311.15797v1 | {
"authors": [
"David Baron",
"Ammar Eltigani",
"S. Loepp",
"AnaMaria Perez",
"M. Teplitskiy"
],
"categories": [
"math.AC",
"13B35, 13J10 (Primary) 13F40 (Secondary)"
],
"primary_category": "math.AC",
"published": "20231127132024",
"title": "Controlling Formal Fibers of Countably Many Principal Prime Ideals"
} |
| http://arxiv.org/abs/2311.16244v1 | {
"authors": [
"Dmitry S. Ageev",
"Irina Ya. Aref'eva",
"Timofei A. Rusalev"
],
"categories": [
"hep-th",
"gr-qc",
"quant-ph"
],
"primary_category": "hep-th",
"published": "20231127190022",
"title": "Black Holes, Cavities and Blinking Islands"
} |
Scale-Dropout: Estimating Uncertainty in Deep Neural Networks Using Stochastic Scale Soyed Tuhin Ahmed92, Kamal Danouchi3, Michael Hefenbrock4, Guillaume Prenat3, Lorena Anghel3, Mehdi B. Tahoori22Karlsruhe Institute of Technology, Karlsruhe, Germany, 9corresponding author, email: [email protected] 3Univ. Grenoble Alpes, CEA, CNRS, Grenoble INP, and IRIG-Spintec, Grenoble, France 4RevoAI GmbH, Karlsruhe, Germany =====================================================================================================================================================================================================================================================================================================================================================plain As system parallelism at chip- and server-level increases, challenges that arose with network-level systems a decade ago, are now being encountered with these massively parallel systems that have become an important workhorse for Machine Learning workloads as well as Graph and Sparse workloads. To tackle the communication bottlenecks, recent works have introduced task-based parallelization schemes to accelerate graph search and sparse data-structure traversal, where some solutions scale up to thousands of processing units (PUs) on a single chip. However, existing communication schemes do not scale to larger than thousands of processing tilesTo address these challenges we propose Tascade, a system that offers hardware-supported, efficient and balanced reduction trees to reduce communication overheads in task-based parallelization schemes and scales up to a million PUs. achieves this by implementing an execution model utilizing proxy regions and cascading updates, along with a supporting hardware design that enables the execution of the reduction tree at the chip level. The approach reduces overall communication and improves load balancing. We evaluate six applications and four datasets to provide a detailed analysis of 's performance, power, and traffic-reduction gains over prior work. Our parallelization of Breadth-First-Search with RMAT-26 across a million PUs—the largest of the literature—reaches 5305 GTEPS.§ INTRODUCTIONIn the last decade, we have seen the rise of massive manycore systems <cit.>. While many of these systems target AI workloads via dataflow computation,there is also an unmet demand for systems that can massively accelerate applications that utilize sparse data structures that rival AI model sizes <cit.>. These workloads are more challenging to accelerate at massive scales due to their irregular memory-access patterns.Recent works have proposed task-based parallelization schemes that accelerate them by splitting the program at irregular memory accesses <cit.>, offering a promising path for parallelizing these communication- and data-intensive applications.While some solutions (e.g. Dalorex <cit.>) scale up to thousands of processing units (PUs) on-chip, the challenge of parallelizing these applications across millions of cores has not been solved.The problems with scaling task-based parallelization: In task-based parallelization schemes the original program is divided into parallelized tasks by slicing at irregular memory accesses to hide latency for data accesses through a pipelining effect <cit.>. Task-based parallelization alone suffers from serialization due to atomic updates, as well as memory-bandwidth saturation, severely limiting the scaling of parallelization. 
To counter these issues, other task-based parallelization schemes have been proposed (e.g. Dalorex), where datasets are sharded and stored across the tile grid allocated to run a program such that each data-segment has a single owner that can operate on it, eliminating the need for atomic updates and making all accesses local. Any updates to a tile's data that result from computations executed on other tiles are communicated across the network and accumulated at the owner tile, which executes a reduction task to process each update. However, as the system and therefore grid size increases beyond thousands of cores, there is increasing communication overhead, work imbalance (due to the imbalance in the reduction array), and network clogging around reduction-heavy tiles. Therefore, in order to continue scaling, one must solve these challenges, thereby achieving: (1) scale-invariant communication load, (2) work balance, and (3) network-load balance. We present Tascade, which makes strides in these three goals by implementing a reduction-tree approach where select tiles along the communication path to the owner can reduce incoming data updates for a target data element, leading to scalable parallelization up to a million cores. Reduction Tree Implementation Through Proxy Owners: To reduce long-distance communication, Tascade divides the tile grid into subgrids which we call proxy regions (Fig.<ref>). In each region, each tile is assigned proxy ownership of a chunk of data as if the entire data array were distributed only within its region. Tascade relaxes the single-owner-per-data constraint of Dalorex, and permits proxy owners to cache an update to remotely stored/owned original data in a cache structure called the proxy cache, which has special configuration registers. From here on in this paper, we call the tile that owns the original data array chunk the “owner tile” to distinguish it from the proxy owners. Tasks requiring long-distance communication are first sent to the proxy owner within the sender's region. Eventually, when the update is sent to the owner tile, additional proxy owners en route may also process the task as proxy tasks through a mechanism called Selective Cascading. Hardware Support for Cascading Updates: Tascade introduces a hardware-software co-design for coalescing and reduction trees using the task-based programming model. This is efficiently executed utilizing two key hardware components: Task-integrated Proxy Caches and the Cascading Router Logic. Proxy caches, in addition to allowing coalescing of updates, also contain the logic needed to trigger reduction tasks based on the write-propagation policy of the proxy cache. This efficiently limits the footprint of the data copies when using a reduction tree, while introducing the ability to merge updates asynchronously. The Cascading Router Logic enables selective cascading, where invocations sent to the owner tile can be routed to proxy owners en route for further processing based on whether the NoC queue toward the owner is full or whether the proxy tile is “available” to process it (see <ref>). When applied to tasks that perform associative operations, this results in accumulating updates in a cascading manner where proxy tiles closer to the owner tile have more reduced values and most invocations are filtered out before reaching the owner tile, significantly reducing the average communication distance of task invocations (see <ref>).
This dynamic approach also improves work balance and network contention, as many of the task invocations targeting a tile that owns very hot data are filtered out by the proxy tiles. We evaluate strong scaling performance when parallelizing sparse applications with up to one million PUs across a thousand chips, with no dataset preprocessing or partitioning. We demonstrate these benefits for the task that is responsible for vertex-update in graph applications and for the reduction phase of sparse linear algebra and histogram. The technical contributions of this paper are:
* Hardware-software co-design of a reduction tree approach for task-based parallelization schemes.
* Software-configurable proxy region sizes for coalescing and filtering reduction operations (the leaves of the tree).
* Opportunistic and asynchronous propagation of updates through the tree thanks to the router's selective cascading and the proxy cache's write-propagation policy.
* Efficient handling of temporal storage at each level of the tree with proxy caches that are integrated into the task-invocation mechanism.
We evaluate and demonstrate that:
* Tascade gets additive improvements from coalescing and filtering at the proxy regions, and opportunistic and asynchronous cascading. These benefits are network topology-dependent, ranging from 5.5× on a multi-chip Torus, to 8.5× on a monolithic Mesh, with 128x128 (16K) PUs.
* Tascade's performance gains grow with the parallelization, achieving 5.6× geomean speedup over Dalorex for 16K PUs, and 13.7× when scaling to 64K PUs.
* Tascade scales well up to 1 million PUs for graphs of a billion edges, while prior work starts to plateau in the tens of thousands of PUs.
* Using Tascade for BFS yields 6× higher performance than the top entry of the Graph500 list for RMAT-26, and 25× faster for RMAT-22.
§ BACKGROUND AND MOTIVATION Memory accesses in graph and sparse linear algebra applications do not exhibit spatial or temporal locality, resulting in poor cache behavior and intense traffic in the memory hierarchy <cit.>. Prior work aiming to accelerate these workloads mitigates memory latency via decoupling, prefetching, and hardware pipelining techniques <cit.>. Fifer <cit.> and Polygraph <cit.> increase utilization further through spatio-temporal parallelization, while Hive <cit.> provides ordered parallelization. However, these works do not solve the network and memory bandwidth scalability issues when parallelizing across a large number of PUs. While Tesseract <cit.> and GraphQ <cit.> alleviate the memory-bandwidth problem via processing-in-memory, their proposed integration of PUs on the logic layer of a memory cube <cit.> constrains the parallelization degree for a given dataset size. To scale further, Dalorex <cit.> proposed a task-based parallelization scheme along with processing near-memory, where datasets are sharded across the grid of tiles allocated to run a program such that each data segment has a single owner that can operate on it. This eliminates the need for atomic updates and makes all accesses local <cit.>, and allows scaling to thousands of cores.
However, for larger grid sizes, communication becomes a bottleneck as the longer average distance needed for task invocations results in increased network traffic.In addition, when parallelizing across larger numbers of tiles, work imbalance starts to rise since the variance in the number of update operations seen per tile increases.offers an efficient reduction-tree approach to reduce communication when scaling by using a task-based cascading-update process along the path of proxy tiles.We detail the data-local execution model of Dalorex in <ref> which this paperutilizes, and Software-based approaches to reduction trees in <ref>. §.§ The Data-Local Execution Model Tasking and queuing: In Dalorex, the original program is split into tasks that are executed at the tile co-located with the memory region (data) that the task operates on. A task can spawn dependent tasks by injecting inputs for each dependent task in the output queue (OQ) of the tile it is executed at. At each tile, there is one input queue (IQ) per task type for incoming task invocations.An IQ is populated with invocation parameters that are either (a) directly pushed by a prior task executed in the local tile, or (b) coming as task messages from the network channels. Task prioritization: In Dalorex, work efficiency and PU utilization are highly impacted by the order of task executions. The task scheduling unit (TSU) determines the order of execution of tasks based on the occupancy of queues. It prioritizes tasks whose IQ is highly populated, or OQ is empty. The TSU “senses” the network pressure and executes tasks that will relieve pressure when it is high (IQs full) or increase it when it is low (OQs rarely pushing).Distant task communication: A program in data-local execution is a sequence of tasks that invokes other tasks upon pointer indirection. It has no concept of execution threads (no main task). Thus, spawned tasks may target any random tile in the grid, i.e., whoever owns the data to be processed next. As the size of the grid increases, so would the average number of router hops of a task message and, thus, the network contention.introduces proxy ownership and selective cascading (see <ref>, <ref>) to coalesce and merge data through a reduction tree, thereby reducing the communication overhead in large tile grids. Tile grid and network routing: The grid size a program runs on is determined by the user at compile time. Thus, the logical tile ID is defined when a workload is selected to run on a given grid, and it is used for X-Y coordinate routing. Since the dataset is statically divided across the tiles on the grid, and the first parameter of a task message is a global index to a data array, this index is used to route the message, avoiding message headers altogether. The router selects the bits that indicate the destination tile ID, based on the size of the array associated with a network channel for routing purposes.§.§ Relevant Software ApproachesWhile it is possible to implement a reduction tree approach managed in software, this presents significant limitations such as synchronization <cit.>,or otherwise active waiting to handle messages, not easily managed in general-purpose software systems <cit.>. 
In addition, storage requirements for software-managed reduction are often the size of the entire array to be reduced, times the number of copies. Tascade offers asynchronous merging executed through write-back or write-through of the proxy cache, or by opportunistically flushing cache lines (see <ref>). Another related software-based approach is Gluon <cit.>, which offers a lightweight API to enable optimization of communication when running programs on distributed systems that process partitioned graph data. Gluon's approach is complementary to Tascade, and we expect an additive effect if they were to be combined. Tascade works with a single partition or unpartitioned graph and decreases communication through reduction trees, whereas Gluon decreases communication between compute nodes processing each graph partition by optimizing the update process of vertices that are shared across partitions.

Listing 1: Snippet of the Verilog code needed to implement proxy regions and selective cascading. The proxy region is configured by setting the proxy_mask and proxy_enable registers (i.e., flip flops). The id_x_within and id_y_within registers are the coordinates of the tile within the proxy region. The select_msg wire determines whether the message should be captured by the proxy region or let through. The sequential depth of select_msg is not worse than the proxy comparison, which is on par with the sequential depth of is_dest. Therefore, the critical path of route_to_core is only an OR gate more than the existing logic. Note that the critical path of the >= operators involved in calculating the cardinal directions is longer than the is_dest logic, and thus, our addition is probably not affecting the overall critical path in most router designs.

input [15:0] dest_x, dest_y;
input [1:0] input_port; // N,S,E,W
reg [15:0] id_x, id_y; // Existing Tile ID Registers
// Proxy Configuration Registers
reg proxy_enabled_r; // To enable usage of proxy regions
reg [3:0] proxy_mask_x, proxy_mask_y; // 4'b0011 for 16x16
// Whether the opposite-facing port from the inputs N,S,E,W had its buffer full last cycle, and proxy is enabled
reg [1:0] opposite_port_buffer_full_r;
// The occupancy of the input queue of the PU is <= half
reg PU_IQ_lt_half_full_r; // It includes proxy_enabled_r
wire select_msg = PU_IQ_lt_half_full_r || opposite_port_buffer_full_r[input_port];
// We flop id_within to remove a gate from the critical path
reg [5:0] id_x_within = {id_x[5:2] & proxy_mask_x, id_x[1:0]};
reg [5:0] id_y_within = {id_y[5:2] & proxy_mask_y, id_y[1:0]};
wire is_proxy_x = ({dest_x[5:2] & proxy_mask_x, dest_x[1:0]} == id_x_within);
wire is_proxy_y = ({dest_y[5:2] & proxy_mask_y, dest_y[1:0]} == id_y_within);
// Sequential depth: 6-bit comparator + three AND
wire go_to_proxy = is_proxy_x && is_proxy_y && select_msg;
// Existing logic: 16-bit comparator + AND
wire is_dest = (dest_x == id_x) && (dest_y == id_y);
// Adding an OR gate to the critical path of is_dest
wire route_to_core = go_to_proxy || is_dest;

§ THE TASCADE APPROACH
Tascade targets reducing the long-distance communication overhead of task-based programming models on tile-based architectures by implementing hardware-supported coalescing and filtering of updates.
The updates then propagate across the grid through a reduction tree approach. Task-based data-local execution schemes rely on associative and commutative operations where the ordering of operations do not change the output, i.e., reduction operations.In , each tile is the root of a reduction tree (<ref>). The nodes of each reduction tree are the proxy owner tiles for the data that this tile (root of the tree) owns.Proxy owners are distributed across the grid, one per region. The coordinate of a proxy owner within each region is the same as the data owner within its region. This means much of the traffic flowing towards the owners—with dimension-ordered routing—is filtered by the proxy owners and the updates coalesced.Combined with the asynchronous nature of the proxy cache update-propagation policy, decreases and balances the NoC traffic across the grid.In this section, we first explain the proxy ownership concept and proxy tasks enabled through proxy caches in <ref>. Then, <ref> describes the reduction tree implementation and selective cascading. Finally, we detail the software-configurable aspects of the approach in <ref>.§.§ Proxy Owners and Tasks Programs that can be parallelized for loop iterations have the underlying assumption that the order of iterations, as well as interleavings of operations across different iterations, preserves correctness.Many graph and sparse applications have commutative operations, making them amenable to such parallelization, provided that all writes to the same data are atomic. This is the underlying power of models such as Bulk-Synchronous Parallelization (BSP) as they guarantee eventual correctness even with arbitrary interleavings of all other read/write operations.Distributed-memory MapReduce <cit.> implementations may avoid atomic operations by updating copies of the result array and merging them at the end, in a reduction manner. However, since pre-merge computations and merging cannot be effectively overlapped in such software schemes, it leads to idle PUs towards the end of the computation phase. This under-utilization is exacerbated in the context of BSP, where each epoch has a barrier.The data-local programming model proposed by Dalorex eliminates the need for atomic updates with a single-owner-per-data task-based model. However, large parallelizations using this model suffer from work imbalance when the underlying dataset is skewed, as only a single tile's PU can operate on a given data.Tascade relaxes the single-owner-per-data constraint of Dalorex for reduction arrays and employs two modes of ownership: (1) Data owner: As in Dalorex, the memory address space is distributed across all tiles, and the dataset arrays are then laid out in memory so that every tile owns an equal-sized chunk of each data array; (2) Proxy owner:We map proxy regions by dividing the tile grid into smaller subgrids. For reduction tasks, we distribute the responsibility for temporary storage of the updates to individual chunks of the reduction array(s) across the tiles of each proxy region.We call this proxy ownership. The most recent updates coming from within and other regions can be stored in a proxy owner tile's proxy cache (detailed in <ref>).Proxy regions enable temporary storage to perform reduction operations of remote data through proxy tasks. Applicability: Fig.<ref> explains how proxy regions are configured in software. Each proxy region is responsible for temporary storage of one set of updates for the entirety of the reduction array(s). 
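To make these two ownership modes concrete, the sketch below shows one possible mapping from a reduction-array index to its owner tile and to its proxy owner within the sending tile's region. This is only an illustration: the Python function names, the row-major tile numbering, and the equal-chunk block partition are our own assumptions, and the grid and region sizes are simply the defaults used later in the evaluation.

# Illustrative sketch (not the exact Tascade data layout).
GRID_X, GRID_Y = 128, 128   # tiles per dimension
REGION = 16                 # 16x16 proxy regions
N = 1 << 26                 # reduction-array length (e.g., one entry per vertex)

def owner_tile(index):
    """(x, y) of the tile that owns the original chunk holding `index`."""
    tiles = GRID_X * GRID_Y
    chunk = (N + tiles - 1) // tiles     # equal-sized chunk per tile
    tile_id = index // chunk
    return tile_id % GRID_X, tile_id // GRID_X

def proxy_owner_tile(index, src_x, src_y):
    """Proxy owner of `index` in the sender's region: it sits at the same
    within-region coordinate as the owner tile does in its own region."""
    own_x, own_y = owner_tile(index)
    base_x = (src_x // REGION) * REGION  # origin of the sender's region
    base_y = (src_y // REGION) * REGION
    return base_x + own_x % REGION, base_y + own_y % REGION

For example, an update produced at tile (3, 5) for element 12345 would first be sent to proxy_owner_tile(12345, 3, 5) inside the sender's region and only later, possibly after further coalescing, toward owner_tile(12345).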
Any reduction task in the data-local execution model can have a proxy task. In our evaluation (<ref>), we apply this to the vertex update task of graph applications and the output vector for histogram and sparse matrix-vector multiplication. The aggregated data updates in the proxy caches are eventually sent to the owners of each piece of the reduction array, resulting in eventual consistency. Fig.<ref> depicts this scenario with two proxy tasks (T3') being invoked on the same tile, coming from different T2 tasks.§.§ Proxy cache DesignData updates to a proxy tile eventually trigger a task invocation towards the owner tile. This happens transparently to software, thanks to the design of the write-propagation policy of the Proxy cache—which is software configurable.When configured as write-back, upon a cacheline being evicted or flushed, the data is written back by invoking a proxy task. The task message contains the evicted address and value, and is routed towards the owner tile. To enable this, the proxy cache—similar to the PU—has the ability to push into the OQs. The cache self-invalidates cachelines when it is idle and all its OQs are empty. This policy enables the merging of the proxy values to the original tile's data, asynchronously and opportunistically, as well as at the end of the computation phase. The write-back policy enables maximal update coalescing, which is suited for time-insensitive reductions that have either a single computation phase (e.g. Histogram) or a barrier between search epochs (e.g. Pagerank).Alternatively, when configured as write-through, any update to cached data triggers a task invocation towards the owner tile using the same process. The write-through policy enables data filtering for minimization operators. This enables the updates to reach the owner tile as soon as possible.This avoids redundant explorations of vertices in the frontier and is suitable for barrier-less implementations of graph applications, as in Dalorex's SSSP implementation.In write-through mode, there is no need for the proxy cache to self-invalidate, as it is always up-to-date with the owner tile.To avoid additional buffers, the TSU ensures that the OQ has sufficient space for the proxy cache to push a task invocation before scheduling any task that can result in proxy cache eviction or update. The proxy cache has five memory-mapped configuration registers: (1) Proxy array starting address, (2) Proxy array length, (3) Write-propagation policy, (4) Default value for cache misses, and (5) Size of the cache. The first two registers determine the range of the tile's local address space for which memory operations are directed to the proxy cache. We call this the proxy segment. When the proxy region is first configured, the start address and the length are stored in these configuration registers. The write-propagation policy can be set based on the application needs. The default value for misses is the value returned upon a cache miss. In reduction tasks, this would correspond to the initial value of the reduction array(e.g. zero in Histogram or infinite in BFS). 
The size of the cache determines the chunk of the tile's SRAM that is reserved for the proxy cache.This hardware support enables to improve on Dalorex's single-owner-per-data task parallelization model with minimal storage and software overhead.§.§ Cascading Since proxy ownership allocation within each region is the same, the proxy tiles for a particular array element are on the same row/column for horizontally/vertically aligned proxy regions.Therefore, when a task invocation moves towards the owner tile across the NoC in a dimension-ordered manner, it will naturally pass by its corresponding proxy tiles en route, as shown in Fig. <ref> for two different owner tiles on a 2D-Mesh. As a task invocation passes by a proxy tile, the router has the option to grab it as a proxy task and execute on this tile.We have two configurable modes for en route proxy task processing: (1) Always Cascading: Every proxy tile en route (i.e. one per proxy region) processes the proxy task; (2) Selective Cascading: If the proxy tile is not too busy (based on its IQ occupancy) or there is contention on the router's output port towards the owner tile, the router grabs the proxy task for the tile to process it. If that is not the case, the task message continues its way towards the owner tile.With the write-through policy, the Proxy tiles closer to the owner tile are more likely to have the most up-to-date data on their cache, leading to cascaded filtering of updates. Listing 1 shows the logic that we added to the router to support cascading and its selective mode. We added two configuration registers (for coordinates X and Y) to store the masks of the bit-selection that determines whether the tile is a proxy owner for a given message (line 7) and a register to enable/disable proxy usage. For every incoming message, the router determines if the current tile is the owner or proxy tile of the data (lines 16-23). If it is neither, the router moves the data in the direction of the owner tile. If it is the owner tile, then it directs the data to the corresponding task's IQ ( in Fig. <ref>). Alternatively, if it is a proxy tile, it may capture the message into the proxy task's IQ based on the occupancy of the IQ and of the buffer of the outgoing network port (lines 10-13 and ( in Fig. <ref>)).As described in Listing <ref> the logic for identifying a tile as a proxy for a message is done in parallel with the existing logic that determines whether the tile is the destination, and thus, we only add one OR-gate to this path (line 25). As described in the code, the critical path of calculatingis not longer than the existinglogic since the proxy-mask comparator employs fewer bits, andis determined without using the message address. In terms of total logic added, we count 24 flip flops and a few dozen of logic gates, which represents a negligible overhead to the overall area of the router given its complex per-port multiplexing logic and message buffering space. Takeaway 1. 
Proxy regions and selective cascading provide the following advantages: (a) reduces work imbalance by allowing multiple tiles to operate on a given data; and (b) reduces the number of bytes traversing the NoC by coalescing/filtering updates en route to the owner tile; (c) balances NoC and PU contention by opportunistically deciding whether proxy tasks are executed at proxy tiles (when the NoC is busy), or they continue towards the owner (if the PU is busy).§.§ Software-configurable Features In Dalorex, all the dataset arrays had to fit within the aggregated SRAM memory of the grid of tiles. While this is suited for large levels of parallelization that aim to achieve the fastest time-to-solution, being forced to scale out with the dataset size can become very costly. Keeping the same tile design as in Dalorex, we manage the SRAM scratchpad differently to allow for both data and proxy caches.Inspired by prior work on reconfigurable caches <cit.>, offers to the software the appearance of a tile's local address space, which is mapped to the SRAM in two ways: (1) as a cache, or (2) as a scratchpad.Cache mode: The cache mode allocates a portion of the SRAM as a direct-mapped cache, for a given address range of the tile's local address space (a segment). It stores cacheline tags and the valid bit in SRAM too, so the area overhead of the cache is only the logic for tag comparison. This mode is used to configure a proxy cache, which holds a proxy segment, and the data cache, which holds the data segment (including the dataset and the reduction array, backed up by DRAM).When multiple tasks use proxies, each task configures its own logical proxy cache, although they all use the same tag-comparison logic. Only one data cache per tile can be configured, and its line width equals the bitline width of the DRAM memory controller (512 bits in our experiments).Scratchpad mode: When scaling out the parallelization of a dataset, if the memory footprint per tile fits in the local SRAM, the data cache would not be configured. Instead, the data segment would map directly into the SRAM scratchpad.The proxy segment size decreases when the size of the proxy region increases for the same grid of tiles (the performance impact of region sizes is studied in Fig. <ref>). Alternatively, the segment size increases for the same region size on increasing grid sizes. However, for this strong scaling case, the footprint of the dataset per tile decreases, so the proxy cache and the data cache can balance the size allocated to them via the configuration registers that indicate the maximum size available to them. Either the proxy cache or the data cache size can be set to a maximum size, leaving the other to use the rest of the SRAM. By default, the proxy cache can use up to three-quarters of the SRAM size. Data cache misses and evictions: Upon a miss, the data cache fetches the full cacheline from DRAM without checking for coherence since the data arrays in the data segment are not shared. The data cache has a dirty bit per line to write back to DRAM upon eviction. Since the data cache of each tile only contains the part of the dataset that the tile is responsible for, there are no coherence issues for modified data. Proxy cache misses and evictions: A miss in the proxy cache returns the preconfigured default value such as zero for Histogram or infinity for SSSP. On eviction, the data is either ignored (write-through policy) or sent as a task invocation to the owner tile (write-back). 
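The miss and eviction behavior just described can be captured with a small behavioral model. The Python sketch below is only an illustration of the two policies, not the actual hardware or simulator code: the class, method, and callback names are ours, and the reduction operator and default value stand in for the software-configured registers.

class ProxyCache:
    """Behavioral sketch of a direct-mapped proxy cache (illustrative only)."""
    def __init__(self, n_lines, default, reduce_fn, policy, send_to_owner):
        self.n_lines = n_lines        # number of (single-element) cache lines
        self.default = default        # value returned on a miss, e.g., 0 or float('inf')
        self.reduce = reduce_fn       # e.g., addition for Histogram, min for SSSP/BFS
        self.policy = policy          # "write-back" or "write-through"
        self.send = send_to_owner     # callback: push a proxy-task invocation to an OQ
        self.tags = [None] * n_lines
        self.vals = [default] * n_lines

    def update(self, index, value):
        s = index % self.n_lines
        if self.tags[s] is not None and self.tags[s] != index:
            if self.policy == "write-back":
                self.send(self.tags[s], self.vals[s])   # eviction triggers a task invocation
            self.tags[s], self.vals[s] = None, self.default
        if self.tags[s] is None:
            self.tags[s] = index                        # misses start from the default value
        new = self.reduce(self.vals[s], value)
        if self.policy == "write-through" and new != self.vals[s]:
            self.send(index, new)                       # e.g., only a new minimum propagates
        self.vals[s] = new                              # coalesce/filter locally

    def flush(self):
        if self.policy == "write-back":
            for t, v in zip(self.tags, self.vals):
                if t is not None:
                    self.send(t, v)
        self.tags = [None] * self.n_lines
        self.vals = [self.default] * self.n_lines

Instantiated with reduce_fn=min and default=float('inf'), this mimics the filtering behavior used for SSSP, BFS, and WCC; with addition and a default of zero, it mimics the accumulate-then-merge behavior of Histogram, SPMV, and PageRank.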
In our experiments, a proxy cache line contains one element to avoid sending multiple updates upon eviction. Multi-element cachelines would increase effective proxy cache size but use less SRAM for the tags. Takeaway 2. Software-configurable caches enable utilizing the SRAM efficiently by balancing the resources dedicated to the data cache and proxy cache, depending on their footprint. When sharding a dataset across more tiles, its footprint per tile decreases, leaving more SRAM space to proxy cache. Data partitioning is a preprocessing step used in distributed graph processing to minimize cross-node communication <cit.>.Our approach is orthogonal to data partitioning as it can be applied within each subgraph or problem partition. Our evaluation did not use data partitioning as we are interested in achieving scalability while processing one problem partition. § EVALUATION METHODOLOGY Applications: We evaluate the performance of on four graph workloads, one sparse linear algebra, and a histogram benchmark <cit.> to demonstrate the generality of our approach for memory-intensive applications.Breadth-First Search (BFS) computes the number of hops from a root vertex to all vertices reachable from it; Single-Source Shortest Path (SSSP) finds the shortest path from the root to each reachable vertex; PageRank (PAGE) ranks vertices based on the flow rate of traffic to each vertex <cit.>; Weakly Connected Components (WCC) finds and labels each set of vertices reachable from one to all others using graph coloring <cit.>; Sparse Matrix-Vector Multiplication (SPMV); and Histogram.Datasets: We use three sizes of the RMAT graphs <cit.> —standard on the Graph500 list <cit.>—RMAT-22, RMAT-25 and RMAT-26, which are named after their number of vertices. For example, RMAT-26 (R26) contains 2^26 (67M) vertices (V) and 1.3B edges (E), and has a memory footprint of 12GB. We also use the Wikipedia (WK) graph (V=4.2M, E=101M) in our evaluation to exercise different graph topologies. For SPMV we use the same datasets, since a graph is a square sparse matrix of V × V dimensions and E elements. The graph data is stored in Compressed Sparse Row (CSR) format without any partitioning, resulting in three input arrays, one for the values of the non-zeros, one for the column indices of those non-zeros, and one for the pointers to the beginning of each row in the previous two arrays. The output array has size V, and its meaning depends on the application, e.g., for Histogram, it is the count of the column indices of the non-zeros.Simulation: We built our evaluation infrastructure on top of the functional simulator from Dalorex <cit.>—a cycle-accurate simulator for the NoC and based on performance models for the PUs. We chose this to make it easier to compare over Dalorex and because faithfully modeling the NoC is the most critical part for these large parallelizations of data-dependant applications. We extended this simulator to support the two modes for using the SRAM—the direct-mapped cache and the scratchpad mode—so that we could implement the proxy cache. In addition, we added the router support for the proxy regions and router cascading logic. 
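As a rough behavioral counterpart to that router extension, the decision of whether a passing message is captured by a proxy tile can be modeled as below. This Python sketch mirrors the intent of Listing 1 rather than the simulator's actual code; the function and argument names are illustrative, and the modulo-based proxy check assumes the default 16x16 regions described earlier.

def route_decision(dest_xy, tile_xy, region, proxy_enabled,
                   pu_iq_le_half_full, out_port_was_full):
    """Return 'local' to deliver the message to this tile's PU, or 'forward'
    to keep it moving toward the owner tile (dimension-ordered routing)."""
    dx, dy = dest_xy
    x, y = tile_xy
    if (dx, dy) == (x, y):
        return "local"   # this tile owns the destination data
    is_proxy = (dx % region == x % region) and (dy % region == y % region)
    # Capture as a proxy task only if this tile is the proxy owner for the
    # destination AND it is worthwhile: the PU's input queue has headroom,
    # or the output port toward the owner was backed up last cycle.
    if proxy_enabled and is_proxy and (pu_iq_le_half_full or out_port_was_full):
        return "local"
    return "forward"

In the hardware, the modulo comparison is implemented with the masked bit-selection shown in Listing 1, and the two pressure signals correspond to PU_IQ_lt_half_full_r and opposite_port_buffer_full_r.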
Moreover, we consider two types of systems for our experiments: (a) a multi-chip system where every 32x32-tile chip is attached to a 8GB HBM2E DRAM with eight 64GB/s memory channels, for the multi-chip experiments on Figures <ref> and <ref>, or (b) a large monolithic grid of tiles without DRAM, as in Dalorex.In the multi-chip case, we model inter-node latency as 20ns <cit.> and 1.17pJ/bit (up to 80mm) <cit.>, and the interconnect between chips is a 2D Torus. In all experiments, SRAM per tile size is 512KB, network-on-chip (NoC) is 64-bit. The NoC is a 2D Torus in all experiments except for the one characterizing network topologies (<ref>). We always use the same hardware configuration for Dalorex and , except for our hardware support for the proxy cache and cascading. More details of our latency and energy parameters and our simulation logs can be found in our .§.§ Evaluating The Impact of Design Contributions The performance impact of the approach depends on a number of factors including (1) the number of PUs per proxy region and the number of proxy regions in the grid, (2) the proxy cache size and its ratio to the proxy segment size, (3) how the proxy data is merged to the reduction array (always, selective or no cascading), (4) the network interconnect, and (5) the dataset size and topology (e.g. skewed graph).While a fine-grained dissection of each of these factors or the combinatorial analysis of them all is not feasible (given the large tile grids evaluated), we evaluate the following set of configurations that we believe is capable of quantifying the impact of each significant contribution. We evaluate the impact of: (a) proxy region sizes in the range of 8x8, 16x16, and 32x32—for all the other experiments the proxy size is 16x16;(b) different proxy cache SRAM budgets, as fractions of the proxy segment size (the effective proxy size is smaller than the SRAM allocated for it due to the tags), for two region sizes; (c) three options for merging the proxies; (d) three network interconnect options, i.e., 2D-Torus and 2D-Mesh in the monolithic experiments, and a two-level hierarchical Torus for the multi-chip experiments; (e) graph structures, by using RMAT and Wikipedia graphs. In addition, we characterize the benefit of asynchronous cascading vs synchronous merging of proxy data, where the latter also serves as an upper bound for the improvement of a hypothetical software-managed proxy cache over no proxy.§.§ Comparing with the State-of-the-Art We use Dalorex as the SotA system for comparison since we target a tile-based architecture and task-based parallelization scheme similar to it. Moreover, Dalorex has shown the highest level of parallelization for graph processing with publicly available detailed architectural design.To understand where the performance of would stand on the Graph500 list <cit.>, we adhere to their guidelines as much as we can and provide a comparison to the performance listed there for RMAT-26. Graph500 requires timing separately the reading, preparing, and loading of the graph onto the system from the graph traversal itself. In our case, we do not perform any dataset pre-processing and directly read the CSR structure from the disk. Based on the Graph500 guidelines, we begin measuring runtime when the search key is loaded onto the system, and we stop when the last vertex is visited. We report traversed edges per second (TEPS). 
for one search key (id=0), as opposed to randomly sampling and averaging time across 64 search keys, given the long simulation time.Since we evaluate other workloads than graph traversal, we consider the non-zero elements of the sparse matrix for SPMV, and the input array for Histogram, when reporting TEPS.§ RESULTS Large parallelization of Graph and Sparse workloads suffers from communication bottlenecks. Dalorex recently pushed the parallelization level of graph processing to 2^12 vertices per PE tile (RMAT-26 across 2^14 tiles)<cit.>. However, even this task-based data-local approachloses scalability beyond this point.<ref> shows Dalorex scaling across three grid sizes of 64×64 (2^12 tiles), 128×128 (2^14 tiles) and 256×256 (2^16 tiles) processing RMAT-22 and Wikipedia datasets. The starting point of this scaling experiment is the minimum grid size that can hold the entire dataset on SRAM in this monolithic architecture (with 512KB per tile). Dalorex shows a plateau and in some cases lower performance as the grid size increases. This plateau in performance is accompanied by a steep increase in NoC traffic (lower panel in <ref>) due to the longer average distance task invocations must travel. This analysis showcases how increasing grid sizes necessitate communication-reduction techniques like .mitigates the scalability problem of parallel processing graph and sparse applications (<ref> orange points), and achieves parallelization up to a million PUs for RMAT-26 (<ref>).The two key features of that enable this are: (1) Coalescing and filtering of updates to distant data—via the proxy caches—coupled with asynchronous task invocation for sending these updates to the owner tile, and (2) Cascading the reduction operations sent to the owner tile through proxy tiles en route which is equivalent to concurrent reduction trees across the grid where owner tiles act as the root and the proxy tiles as the nodes of the tree.The proxy cache and the cascading router logic are the key hardware contributions that enable these two features.In <ref>, we first evaluate the contribution of proxy cache-mediated filtering and coalescing in local proxy regions with no cascading, then show the additional contribution of cascading.In <ref> we evaluate the impact of the proxy region size, and in <ref> we analyze the sensitivity of performance to proxy cache size. <ref> evaluates adding synchronization before merging the proxy updates, measuring the benefit of asynchronicity and providing an upper-bound for the performance of a software-managed proxy. In <ref> we evaluate the improvements that provides with different NoC topologies and evaluate its applicability to multi-chip systems. Finally, <ref> studies strong scaling (by parallelizing RMAT-26 for grid sizes ranging from 1024 tiles to 1,048,576 tiles) and checks where its performance stands regarding the Graph500 list and other works <cit.>. 
§.§ Proxy Caching and Cascading Improve PerformanceProxy Caching: While able to scale up to thousands of PUs, the single-owner-per-data scheme that Dalorex proposed already starts to show sub-linear performance at thousand-tile scales <cit.>.This is because increasing grid sizes causes an increase in average NoC distance traveled for task invocations and causing additional load in the average network load.In addition, the single-owner-per-data constraint requires each task on a given data segment to be handled by a single tile.Since increasing graph sizes results in a higher variance this creates increasing imbalance in load distribution. In this section, we first demonstrate the performance improvement of utilizing proxy regions where updates can be coalesced or filtered at proxy tiles.We use Dalorex's 128×128 monolithic grid—the largest grid evaluated in their paper—as our baseline. We set the proxy region size to 16×16, for which the proxy segment fits entirely on the tile's SRAM. Compared to the baseline, the option with no cascading but merging the proxy data directly into the owner tile (Proxy & Merge Owner) provides a geomean of 3.6× over all datasets and apps performance improvement. Additionally, it improves energy efficiency by 1.2×, in part due to the reduced NoC traffic (1.6×).For applications operating in write-back mode (PageRank, SPMV and Histogram), the proxy caches mainly provide a coalescing as well as filtering benefit reducing overall traffic. For applications operating in write-through mode (SSSP, BFS and WCC), updates are propagated immediately, and the advantage of proxy comes from filtering non-minimal updates, i.e. reducing traffic.Cascading: While proxy caching alone significantly improves the performance, there is additional processing that can be utilized when the task invocations are en route to the owner.Our cascading approach effectively implements a reduction tree across the grid where the owner tile acts as the root and the proxy tiles as the nodes of the tree. Data can be reduced at every node of the tree (Always-Cascading) or opportunistically on each nodes (Selective-Cascading). <ref>, yellow (#3) and orange (#4) bars show the performance improvement of these two options over the baseline (#1).The Always-Cascading option (Proxy & Cascade) improves performance by 1.4× geomean over Proxy & Merge, and 5× over the baseline. However, its energy efficiency is 10% worse than the baseline on geomean. Although this option theoretically reduces the traffic the most, it increases data staleness for the time-sensitive reductions of barrier-less graph applications (SSSP, BFS and WCC). This partly because Always-Cascading policy requires a write-back proxy cache to avoid dead-lock.This is because when combined with a write-through policy having to stall at each proxy tile causes a dependency cycle between a regular task and a proxy task. While Always-Cascading ensures that all the proxy owners store the most up-to-date values that pass by them, it takes longer for invocations that need to go to the owner tile to get there, causing staleness. In addition, since all the proxy owners must process a passing invocation, it increases work for PUs in general, reducing their effective utility.Selective Cascading on the other hand (denoted as Tascade in <ref>), allows the proxy tiles en route to capture a task invocation when there is network traffic ahead or when the proxy tile is eager to receive an update (low occupancy on the proxy task's IQ). 
In addition, Selective Cascading supports write-through as a task message does not stall if the IQ of the proxy tile is full.<ref> shows that with this selective policy, improves the performance further to a 5.6× geomean over the baseline, while also improving energy efficiency by 1.2×. While increases PU energy by increasing the number of tasks to be processed, it also reduces one of the main sources of energy—the NoC—since the number of hops per task decreases significantly, resulting in geomean reduction in NoC traffic of 2.2× over Dalorex.§.§ Optimal proxy region Size The importance of the choice of proxy region size is evident when one considers the two extremes of proxy region size of a single tile versus the entire grid size. On one end, with a single tile proxy region, all tiles would have to cache the entire array creating a high storage cost or low proxy cache efficiency when the proxy cache size is limited. In addition, the cascading would be considered at every tile. With the proxy region size equal to the entire grid, one would recover the same configuration as Dalorex. Therefore we expect there to be a middle ground where the optimal proxy region size is something in between.Decreasing the size of the proxy region increases the size of the proxy segment (the address range that a tile is a proxy for), and thus, the pressure on the proxy cache increases. We expect this optimal point to be impacted by the available SRAM size that can be dedicated to the proxy cache. For a total grid size of 128×128, we evaluate the performance of proxy region sizes of 32×32, 16×16 and 8×8, shown in <ref>. The bars for the last two options overlay two cases: when the proxy segment fits in the proxy cache in full (light-colored bars) and when the proxy cache size is limited toproxy segment size of the 32×32 case (dark-colored bars).We show this to understand the peak improvement but also to understand the tradeoff between a smaller proxy region and a larger proxy segment to cache.In the unlimited case, performance increases as the proxy region size decreases. However, with limited cache size (16KiB), the 8×8 does not significantly improve over the 16×16 case. This is because with a smaller cache size values get evicted more often, leading to less coalescing of the updates.In the next section, we examine the impact of proxy cache size using a proxy region size of 16×16. §.§ Impact of Limiting proxy cache Capacity In order to understand the performance impact of limited proxy cache capacity, we simulated the configurations of with 128×128 tile grid, 16×16 proxy regions and halved the SRAM budget allocated for the proxy cache at each step. These sizes range from 64KiB (the size of the proxy segment for this dataset and region size) to 1/16 of that, i.e., 4KiB. Note that the effective proxy cache size is smaller than the SRAM allocated for it due to the tags.<ref> shows significant performance differences between applications and datasets, where some steadily decay performance from the beginning or after the half-size datapoint (e.g., BFS, SSSP and Histo), while others remain high despite the pressure on the proxy cache (e.g., PageRank and WCC). The increase in performance with smaller proxy cache sizes in Pagerank and SPMV is caused by having fewer elements to flush from the cache towards the end of the program since they were already merged into the owner tile upon eviction. On geomean, the performance remains around the 5.6× mark until the proxy cache budget is reduced by 16×. 
Nonetheless, the performance at the last datapoint is 88% of the full one, and still 4.9× over the baseline.<ref> also shows the gains in energy efficiency with proxy are correlated with the savings in NoC traffic. NoC traffic reduction rates over the baseline range from 2.2× to 1.8× on geomean from the full proxy cache size to the smallest one (1/16). Energy efficiency also decays with more constrained cache sizes and they remain above the baseline until the last datapoint. These results show dataset- and application-dependence, however overall trend is highly consistent.§.§ Asynchronicity Improves Performance One of the main advantages of implementing proxy caching and selective cascading with a task-based data-local parallelization scheme like Dalorex is that it allows for coalescing and reduction tree processes to be executed asynchronously. This is implemented seamlessly thanks to the hardware support integrated into the approach. Estimating the impact of asynchronicity is especially important since software approaches to reduction trees often utilize such a synchronization step. We evaluate the cost of imposing synchronization by evaluating proxy in the presence and absence of a barrier before starting to merge the proxy regions.<ref> shows the performance of merging the proxies directly (Sync & Merge) or via cascading (Sync & Cascade), after the barrier is reached by all the PUs. asynchronicity yields a 1.6× geomean improvement over Sync & Cascade and 2.2× with Tascade. Since in this experiment the proxy cache stores the entire segment (proxy regions of size 16x16), <ref> also showcases the runtime improvement of starting to flush the proxy caches towards the end of the program when PUs are often idle, instead of waiting for the barrier. Moreover, merging asynchronously in improves energy efficiency by 13% geomean over the synchronous cascade version. Comparison to Software-managed Reduction Tree Approach: The synchronous versions of the proxy approach evaluated above represent an upper bound to the performance expected from a software-managed approach since the hardware components introduced by provide additional benefits. For example, these versions still use the proxy cache in hardware instead of a software-managed one.Moreover, the cascade version shown in <ref> is only synchronous before the cascading starts, and the cascading itself is asynchronous and selective once it starts.§.§ Impact of NoC ChoiceWe envision that the hardware-enabled asynchronous reduction tree approach of can be utilized in a broader set of systems ranging from server-class <cit.> to wafer-scale manycores <cit.>, to clusters of these chips connected <cit.>. Since the 2D-torus NoC utilized in Dalorex is not a common NoC found in AI-oriented manycores <cit.>, we also wanted to evaluate the performance improvement provides with a 2D-Mesh NoC. In addition, as an alternative to the monolithic implementation, one may use sever-class-sized chips, connected with a board-level or cluster-level interconnect. Thus, we also performed experiments evaluating the improvements of when using an inter-chip interconnect as well. <ref> shows the performance improvement of with Mesh, Torus, and Inter-chip networks, over the baseline of Dalorex (Torus), for the same grid sizes (128×128). 
yields large performance improvement over no proxy for all NoC types with 5.5×, 5.6× and 8.5× for inter-chip interconnected multichip system, monolithic chip with torus and monolithic chip with Mesh NoC, respectively.Mesh NoC has twice the diameter and half the bisection bandwidth of a Torus, therefore twice the communication overhead for these applications, therefore benefits more from traffic reduction.When using proxy regions, much of these communications are reduced to within proxy region communications which scales with the number of columns in a proxy region times the number of proxy regions.<ref> demonstrates this by depicting the heatmap of router activity for a Mesh NoC utilizing .This figure can display an animation of the time-evolution of the router activity throughout the application execution when visualized as GIF—also available on our .In comparison, the inter-chip datapoint uses two hierarchical torus, one that connects each chip, and one that connects tiles within the chip. The hierarchical torus connectivity reduces the average distance that task invocations must travel to some extent, and thus, the performance improvement is not as high as with the Mesh NoC, but still very significant, 5.5× geomean. Moreover, improves energy efficiency across the board. §.§ Strong Scaling Up to a Million Tiles Up to this point, we have shown the performance improvement provides over previous work and the contribution of different design components using the 128×128 grid (2^14 tiles) as the main frame of comparison. We now analyze the ability of performance to scale, given the benefits it provides. <ref> presents our evaluation of , scaling the parallelization from 256 PUs to over a million PUs (2^8 to 2^20) by quadrupling the number at each step. In these experiments we use a 16×16 proxy region size until 2^16 tiles and beyond that we increase the proxy region size to 32×32.[We increase the region size beyond 2^20 to decrease the memory footprint of the simulator itself, as the number of proxy regions scale quadratically with the grid size when keeping the proxy region size constant.] As shown in <ref>, Dalorex does not scale much beyond 64×64. Thus, we only show the performance of for this analysis.increases performance across the scaling range, going significantly beyond the scaling capability for processing sparse applications than has been demonstrated before.<ref> shows how throughput scales well with the number of tiles, with some signs of plateauing at the last datapoints. The gap between Operations/s and TEPS shows the number of instructions needed to traverse an edge (or multiply a non-zero element in the case of SPMV). This gap—more clearly seen in the bottom plot of <ref>—also shows the work efficiency in barrier-less graph applications, which decreases with data staleness. It is noteworthy that the energy efficiency of —measured by TEPS/Watt and Ops/Watt—remains fairly stable in this range of scaling, only decaying towards the end. Note that these are extreme parallelization levels already, i.e., on the last datapoints the 2^26-vertex graph is parallelized across 2^20 tiles, equating to 64 vertices per tile (and 20 times as many edges). Improving the scaling of through better data placement methods and by combining it with methodologies for graph partitioning <cit.> are possible avenues that we have not explored. Throughput-per-watt likes mid grids sizes: <ref> (bottom) shows that throughput-per-watt peaks at 2^16 PUs, i.e., 2^10 vertices per PU. 
This is the parallelization level at which the entire dataset fits on-chip, and thus, the SRAM behaves as the main level of memory as in Dalorex. Throughput-per-watt drops significantly after that level, where there are 64 chips.The inter-package links are more power-hungry than the ones inside the package, hence, the drop in efficiency. Petabyte/s of memory bandwidth: As mentioned earlier, data-structure traversal has a low arithmetic intensity. Fig.<ref> demonstrates how much memory bandwidth is required to maintain a high target throughput. For the 1-million-tile configuration, SPMV reads, on average, more than half a PB/s from their local memories with an arithmetic intensity of 0.09 FLOPs/byte. At peak throughput of execution, SPMV reads 1.4 PB/s to perform 68 TeraFlop/s. This configuration uses 1024 chip packages and draws 15KW of power on average and 24KW at its peak—power density stays within the tens of mW/mm^2 suitable for air cooling. In the context of the Graph500 list: The largest dataset size that we simulated is R26 due to memory and time requirements of the simulations. The top entry for BFS on R26 is the Tianhe Exa-node (Prototype@GraphV) <cit.>, delivering884 GTEPS.For that size, achieves 2930 GTEPS with 2^18 PUs (256 chips) and 5300 GTEPS with 2^20 (1024 chips). The smallest dataset size with an entry higher than 5300 GTEPS is R36, which is 1024× larger than R26. Since weak-scaling is more easily achievable than strong-scaling (e.g., Argonne's Mira or Fugaku <cit.>),we would expect to achieve even higher throughput for datasets of this size.For smaller datasets like R22, the best performing prior work demonstrates up to 70 GTEPS <cit.> running these codes <cit.> on a V100-SXM3 GPU. On , the 2^16 PU configuration evaluated in Fig.<ref> yields 1,744 GTEPS (25× higher). § CONCLUSIONThis paper presents significant advancements in task-based parallelization schemes through a hardware-software co-design for efficient and scalable execution of reduction trees.The novel integration of a reduction tree approach, coupled with software-reconfigurable region sizes, enables efficient coalescing and filtering of reduction operations. This is further enhanced by the introduction of opportunistic and asynchronous update propagation, leveraging our router's selective cascading and proxy cache's write-propagation policy. The proxy cache reduces the temporal storage needed for reduction trees and does not introduce additional hardware storage, as it efficiently utilizes the existing SRAM per tile along with the data cache.The evaluation of underlines its effectiveness by demonstrating substantial performance improvements across various network topologies and scales. We characterize the accumulated benefits of software-configurable proxy regions, proxy caches, and selective cascading, over prior work Dalorex, where improvements increase with scale, ranging from 5.6× geomean for 16K PUs to 13.7× for 64K PUs, across several graph and irregular applications.Evaluating on a billion-edge graph, scales performance up to 1 million PUs. Putting it in the context of the Graph500 list for BFS, achieves 5305 GTEPS, which is 6× higher than the top entry for that problem size. These results not only validate the technical contributions of this paper but also establish as a highly scalable and efficient solution for graph processing. § ARTIFACT EVALUATIONWe have proactively created the scripts for the artifact evaluation of the experiments presented in this paper. 
Our  includes the simulation framework implementing ; we are planning to document it with the publication of this paper to allow researchers to explore further optimizations, e.g., exploring more applications, or tuning or adding hardware support for reduction trees. | http://arxiv.org/abs/2311.15810v1 | {
"authors": [
"Marcelo Orenes-Vera",
"Esin Tureci",
"David Wentzlaff",
"Margaret Martonosi"
],
"categories": [
"cs.AR",
"cs.DC"
],
"primary_category": "cs.AR",
"published": "20231127133233",
"title": "Tascade: Hardware Support for Atomic-free, Asynchronous and Efficient Reduction Trees"
} |
Controlling Formal Fibers of Countably Many Principal Prime Ideals David Baron, Ammar Eltigani, S. Loepp, AnaMaria Perez, M. Teplitskiy January 14, 2024 ========================================================================
In this paper, we study the graph condensation problem by compressing the large, complex graph into a concise, synthetic representation that preserves the most essential and discriminative information of structure and features. We propose the novel concept of a Shock Absorber (a type of perturbation) that enhances the robustness and stability of the original graphs against changes in an adversarial training fashion. Concretely, (I) we forcibly match the gradients between pre-selected graph neural networks (GNNs) trained on a synthetic, simplified graph and the original training graph at regularly spaced intervals. (II) Before each update of the synthetic graph, a Shock Absorber serves as a gradient attacker to maximize the distance between the synthetic dataset and the original graph by selectively perturbing the parts that are underrepresented or insufficiently informative. We iteratively repeat the above two processes (I and II) in an adversarial training fashion to maintain the highly informative context without losing correlation with the original dataset. More importantly, our shock absorber and the synthesized graph share the backward pass in parallel, in a free-training manner. Compared to the original adversarial training, it introduces almost no additional time overhead. We validate our framework across 8 datasets (3 graph and 5 node classification datasets) and achieve prominent results: for example, on Cora, Citeseer and Ogbn-Arxiv, we gain nearly 1.13%∼5.03% improvements compared with SOTA models. Moreover, our algorithm adds only about 0.2% to 2.2% additional time overhead over Flickr, Citeseer and Ogbn-Arxiv. Compared to general adversarial training, our approach improves time efficiency by nearly 4-fold. The code is available in the supplementary material.
§ INTRODUCTION
Graphs serve as a ubiquitous representation for a diverse range of real-world data, encompassing domains such as social networks <cit.>, chemical molecules <cit.>, transportation systems <cit.>, and recommender systems <cit.>, among many others. As tailor-made designs for such data, graph neural networks (GNNs) <cit.> have become a prevalent solution for machine learning tasks on graph-structured data, and have exhibited outstanding performance across a broad spectrum of graph-related applications <cit.>. However, real-world scenarios often involve large-scale graphs with millions of nodes and edges <cit.>, which present significant computational overheads when training GNNs <cit.>. Worse still, fine-tuning hyperparameters and identifying suitable training schemes for self-supervised models can be both expensive and resource-intensive, particularly for large-scale graphs with dense connections. To this end, when a GNN is making a prediction, one naturally raises a question: is it possible to effectively simplify or reduce the graph, not only to accelerate graph algorithms, including GNNs, but also to aid storage, visualization, and retrieval for associated graph data analysis tasks <cit.>? To address this inefficiency, existing approaches typically fall into two research lines – graph sampling and graph distillation.
Within the first class, many endeavors <cit.> have investigated the use of custom-built sampling approaches to reduce the computational footprint of GNNs (including some pruning methods). These methods aim to identify the discriminative edges or nodes to enhance training or inference efficiency. Nevertheless, sampling or pruning graph nodes or edges may cause massive information loss, resulting in performance collapse <cit.>. To this end, many studies focus on the graph distillation research line. In contrast to simplifying the graph structure or nodes, this second research line targets condensing the large original graph into a small, synthetic, and highly informative graph. The objective is to train GNNs on the condensed graph such that their performance is comparable to those trained on the original graph <cit.>. It is worth emphasizing that there are fewer prior studies on pruning or compressing GNNs <cit.>, which can bring salient benefits for power-efficient graph representation learning; however, they cannot be cast as data-level optimization, which goes beyond the scope of our work. Generally, graph distillation draws inspiration from data distillation techniques <cit.> and aims to ensure consistency between raw and synthetic datasets by constraining the soft labels across both sets. Recently, some trajectory matching algorithms have shown great prominence in the image <cit.> and graph realms <cit.>. Concretely, these frameworks adopt a parameter or gradient matching scheme w.r.t. the condensed set and raw data during the training process. Though promising, enforcing gradient matching results in an inelastic compression process, as shown in Fig. <ref> (Left). When we summarize a paper into an abstract, we can easily see that replacing some words does not change the meaning of the abstract. The appearance of these synonyms in the synthetic dataset will not change its meaning, yet trajectory matching expects the synthetic set to be completely consistent with the original set at each step of the training process.
Research gap. Given a more intuitive instance (Fig. <ref> (Middle)), graph trajectory matching algorithms enforce gradient consistency and provide a rigid parameter space that limits flexibility during training. In fact, these methods may not explore the impact of certain parameters that are similar in the vicinity of the optimal matching point. This paper targets overcoming this tricky hurdle and explores a more robust graph condensation framework. We propose a robust graph condensation algorithm, GroC, a principled adversarial training (bi-level optimization) framework that explores the neighborhood space of the parameters that have the greatest impact on the original matching process. To achieve this, we carefully design a Shock Absorber operator that attaches adversarial-training perturbations at specifically positioned locations of the synthetic dataset. To highlight, the optimization process of our GroC is: (i) robust, as the training compression process is more stable and robust, which can better find a compressed subset (see the example in Fig. <ref> Right); (ii) one-stop-shop, since it is completely free of human trial-and-error on perturbation and location choices; (iii) time-efficient, since through a free-training algorithm we parallelize the entire process of optimizing adversarial perturbations and synthesizing datasets, ensuring that virtually no additional time overhead is introduced. Contributions.
Our contributions can be summarized as follows: (1) We propose a robust adversarial-training-based graph condensation framework called GroC for more effectively learning the robust representation space of the synthetic data, by attaching perturbations to the synthetic graph during the gradient matching process. (2) The Shock Absorber operator not only helps our model achieve prominent performance, but can also serve as a general operator for other compression frameworks. (3) Building on our insights, we train our framework on graph/node classification tasks. Our model yields SOTA results on various graph benchmarks: for example, on Cora, Citeseer and Ogbn-Arxiv, we gain nearly 1.13%∼5.03% improvements compared with SOTA models under a smaller variance. Moreover, our algorithm adds only about 0.2% to 2.2% additional time overhead over Flickr, Citeseer and Ogbn-Arxiv. These results empirically demonstrate the effectiveness and robustness of our proposal.
§ PRELIMINARIES & RELATED WORK
Graph Neural Networks (GNNs). Graph neural networks (GNNs) <cit.> are capable of processing variable-sized, permutation-invariant graphs. They learn low-dimensional representations through an iterative process that transfers and aggregates the representations of topological neighbors. Although GNNs have shown promising results, they face significant inefficiencies when scaling up to large or dense graphs. Towards this end, several research streams have focused on addressing this issue, such as graph sampling and graph distillation. Graph Sampling & Distillation. Graph sampling reduces the computational burden of GNNs by selectively sampling sub-graphs or applying pruning methods <cit.>. However, an aggressive sampling strategy may lead to a significant loss of information, potentially reducing the representation ability of the sampled subset. To address this, the graph distillation research line <cit.> draws inspiration from dataset distillation (DD), which aims to distill (compress) the knowledge embedded in raw data into synthetic data, ensuring that models trained on this synthetic data maintain performance <cit.>. Remarkably, <cit.> take the first step by proposing to optimize both the nodes and edges of the graph through training-gradient matching, which is the focus of our research. Adversarial training & Robustness. Adversarial training was introduced as a defense mechanism against adversarial attacks, where a model is trained not only on the clean data but also on adversarial samples generated during training <cit.>. They demonstrated that adversarial training can make deep neural networks more robust. Building upon these observations, many subsequent studies focus on designing different adversarial examples <cit.>. In our work, we adapt existing adversarial training methods to address the inelasticity of synthetic data. Following Projected Gradient Descent (PGD) <cit.>, our framework performs iterative gradient descent with backtracking to generate adversarial perturbations.
§ METHODOLOGY
As shown in Fig. <ref>, in this section we explain how our GroC framework deepens and enhances robustness in the graph condensation task. Going beyond this, we take a comprehensive look at our key operator, the Shock Absorber. Finally, we introduce free training for the efficient implementation of GroC. In the following parts, we delineate our model components by walking through Fig. <ref> from left to right.
For ease of understanding, we summarize all notations and depict our algorithm in the Appendix, which can be found in the supplemental materials.
§.§ Graph Condensation via Gradient Matching
In this work, we first review the process of graph condensation, starting from the original graph T=(A, X, Y), where A∈ℝ^N × N is the adjacency matrix, N is the number of nodes, and X∈ℝ^N × d contains the d-dimensional node feature attributes. As in the traditional graph condensation task <cit.>, Y={0,1, …, C-1}^N denotes the node labels over C classes. Our work targets training a synthetic graph S=(A', X', Y') with adjacency matrix A'∈ℝ^N'× N' and feature attributes X'∈ℝ^N'× d. We design GroC to obtain a synthetic graph with N' ≪ N such that a general GNN trained on S reaches accuracy commensurate with training on the large graph T. Gradient Matching as the Objective. As our goal is to learn highly informative synthetic graphs, one prominent approach is to enable GNNs trained on synthetic graphs to mimic the training trajectory on the original large data. To achieve this goal, dataset condensation <cit.> introduces a gradient matching scheme. More specifically, it tries to minimize the discrepancy between the gradients of the model with respect to the parameters, as computed on the large real data T and the small synthetic data S, at each training step. Therefore, the parameters of the model trained on synthetic data will closely resemble those trained on real data at every training step. We first formalize the problem as:
\min_{S}\; L\big(\mathrm{GNN}_{\theta_{S}}(A, X),\, Y\big) \quad \text{s.t.} \quad \theta_{S} = \arg\min_{\theta} L\big(\mathrm{GNN}_{\theta}(A', X'),\, Y'\big)
where GNN_θ stands for the GNN initialized with θ, and L represents the loss function. θ_S denotes the model parameters trained on S. The labels of the synthetic graph are pre-defined. First, we generate a specific number of labels, ensuring an equal number of labels per class. Then, we randomly select the corresponding nodes from the original dataset to serve as the initial features for each class in the synthetic dataset. Following <cit.>, we employ multiple initializations to mitigate the risk of over-fitting. For ease of understanding, we describe the gradient matching procedure based on a single initialization in the following part, unless otherwise specified.
\min_{S} \sum_{t=0}^{T} D\big(\nabla_{\theta}\ell_{t}^{S}(f_{\theta_t}(S_t), Y'),\; \nabla_{\theta}\ell_{t}^{T}(f_{\theta_t}(T), Y)\big) \quad \text{s.t.} \quad \theta_{t+1} = \mathrm{opt}(\theta_t, S_t)
In Eq. <ref>, D(·,·) is a distance function, f_{θ_t} denotes the GNN model parameterized with θ at time point t, S_t is the synthetic data at the t-th iteration of optimization, T represents the total number of steps in the training process, and opt(·,·) is the optimization operator used for updating the parameters θ. This equation represents a bi-level problem, where we learn the synthetic graph S in the outer optimization loop and update the model parameters θ_t in the inner optimization loop. ℓ_t^S and ℓ_t^T are the negative log-likelihood losses on the synthetic and original datasets, respectively. We conduct the gradient matching process at different time points, and the parameter updates for the two datasets can be written as:
\theta_{t+1}^{S} = \mathrm{opt}_{\theta}\big(\ell_{t}^{S}(\mathrm{GNN}_{\theta_t^{S}}(A', X'),\, Y')\big), \qquad \theta_{t+1}^{T} = \mathrm{opt}_{\theta}\big(\ell_{t}^{T}(\mathrm{GNN}_{\theta_t^{T}}(A, X),\, Y)\big)
Here we use backpropagation to update the model parameters θ_{t+1}^S and θ_{t+1}^T, and we proceed to match the gradient distance of the two graph sets. Similar to <cit.>, we define the distance D as the sum of the distances dis at each layer.
For a specific layer, given two GNN model gradients G^S∈ℝ^{d_1× d_2} and G^T∈ℝ^{d_1× d_2}, the distance dis(·,·) used for condensation is defined as follows:
\mathrm{dis}(G^{S}, G^{T}) = \sum_{i=1}^{d_2}\left(1 - \frac{G_i^{S}\cdot G_i^{T}}{\lVert G_i^{S}\rVert\,\lVert G_i^{T}\rVert}\right)
In Eq. <ref>, G_i^S and G_i^T represent the i-th column vectors of the gradient matrices. By employing these formulations, we can efficiently realize the gradient matching strategy. However, traditional methods are notably unstable, and an excessive emphasis on gradient consistency can lead to synthesized graphs that often lack the desired generalization capability. A promising remedy is to introduce adversarial training, enabling the model to explore a wider latent space during its training process. To this end, we introduce adversarial training into the graph condensation research line for the first time, setting the context for our investigation. In the following parts, for convenience, we borrow the concept of left and right limits from mathematical notation: we use the superscript t^+ for the right limit and the superscript t^- for the left limit.[Assuming that t^+ is the right limit of t, for any ς > 0 it satisfies t^+ - t > ς; we can draw a similar conclusion for the left limit t^-: t - t^- > ς.] Fig. <ref> showcases our GroC algorithm. In the initial stage, we do not update the GNN parameters. Instead, we optimize the trainable synthesized graph (outer loop). We refer to this particular time as the left limit of t, e.g., t^-, at which we update the synthetic dataset S_{t-1}→ S_t using the gradient computed by the GNN:
X' ← X' - \eta_1 \nabla_{X'} D' \quad \text{if } t \bmod (\omega_1+\omega_2) < \omega_1
In Eq. <ref>, D' is the updated distance between the two datasets; we leverage the gradient of this distance to propagate optimized features of the synthetic dataset in a backpropagation fashion. However, matching gradients too frequently may make the whole optimization process notoriously time-consuming. As a trade-off, we match the gradients periodically at regular intervals of ω_1+ω_2. Concretely, in every ω_1+ω_2 epochs, we match gradients for ω_1 epochs to optimize the feature attributes, and in the next ω_2 epochs we only update the adjacency matrix A':
g_\phi ← g_\phi - \eta_2 \nabla_{\phi} D' \;\Rightarrow\; A' = g_\phi(X') \quad \text{with} \quad A'_{ij} = \sigma\!\left(\tfrac{1}{2}\big(\mathrm{MLP}_\phi([X'_i; X'_j]) + \mathrm{MLP}_\phi([X'_j; X'_i])\big)\right)
Here g_ϕ denotes the MLP parameterized with ϕ. We generate the adjacency matrix by conditioning on the synthetic features through g_ϕ. We then use a hyper-parameter ρ to control the sparsity of the adjacency matrix.
§.§ Robust Learning via Shock Absorber
Min-Max (Adversarial) Optimization. Starting from the conclusion of gradient matching at time point t^-, we introduce our shock absorber operator to enhance the gradient matching process and thereby expand the optimization space explored at time point t. We propose to regularly and automatically learn to add a perturbation δ (generated by adversarial training and referred to as the shock absorber) to the attributes of the synthetic graph S_t. Further, we update our adversarial training framework via the following min-max optimization:
\min_{\theta_{t+1}} \mathbb{E}_{\theta_0\sim P_{\theta_0}}\Big\{\max_{\theta_t^{*},\,\lVert\delta\rVert_p\le\varepsilon} D\big(\nabla_{\theta_{t+1}}\ell_t^{\mathcal{S}},\; \nabla_{\theta_{t+1}}\ell_t^{\mathcal{T}}\big)\Big\}, \quad \ell_t^{\mathcal{S}} := \ell_t^{\mathcal{S}}\big(f_{\theta_t^{*}}(\mathcal{S}_t+\delta_\gamma)\big), \quad \ell_t^{\mathcal{T}} := \ell_t^{\mathcal{T}}\big(f_{\theta_t^{*}}(\mathcal{T}),\, Y\big)
where f_{θ_t^*} represents the GNN model parameterized with the fixed optimal θ^* at the t-th iteration, ‖·‖_p is some ℓ_p-norm distance metric, ε is the perturbation budget, and D is the distance function from Eq. <ref>.
𝔼_{θ_0∼ P_{θ_0}} denotes initializing multiple times (with θ_0 drawn from the distribution P_{θ_0}) and computing the expectation. Fully realizing Eq. <ref> requires finding a temporary and intrusive variable, i.e., the Shock Absorber, which helps to explore the gradient field of the synthetic dataset as much as possible within a limited scope. Towards this end, we resort to previous research <cit.>, which has demonstrated that the saddle-point optimization problem of Eq. <ref> can be effectively solved using Stochastic Gradient Descent (SGD) for the outer minimization and Projected Gradient Descent (PGD) for the inner maximization. Similarly, the approximation of the inner maximization under an l_∞ constraint is as follows:
\delta_{\gamma+1} = \Pi_{\lVert\delta\rVert_\infty\le\varepsilon}\Big(\delta_\gamma + \alpha\cdot D\big(\nabla_{\theta_t^{*}}\ell_t^{S},\; \nabla_{\theta_t^{*}}\ell_t^{T}\big)\Big)
where the perturbation δ is updated iteratively for M rounds, and the function Π_{‖δ‖_∞≤ε} performs projection onto the ε-ball in the l_∞-norm. Compared to traditional adversarial attack or training algorithms <cit.>, we remove the sign function, as we aim for updates within a more granular range; the resulting diversified perturbations help make the process more robust. We iteratively update M times to generate the perturbations as depicted in Eq. <ref>; this process requires M end-to-end forward and backward passes. For ease of understanding, we illustrate the process through which the shock absorber operates in Fig. <ref>. Over M rounds of updating, we iteratively fuse the perturbations with the synthetic graph. In this fashion, the most severe perturbations δ_M are applied to the input features, upon which the model weights are optimized. It is worth emphasizing that there is no parameter update in this procedure; we preserve the parameter gradient throughout the entire M iterations and subsequently eliminate the perturbation after M rounds of Shock Absorber influence. At the next time point (t^+), we use the following function to continue updating the synthetic dataset:
D_{t^{+}}^{S} = \frac{1}{M}\sum_{\gamma=1}^{M} D_t^{\gamma}\big(\nabla_{\theta_t^{*}}\ell_t^{S},\; \nabla_{\theta_t^{*}}\ell_t^{T}\big)
Here D_t^γ denotes the distance at round γ of time point t, and D_{t^+}^S represents the distance after attaching M rounds of the shock absorber. At time point t^+, we only use the average gradient values to update the synthetic dataset.
§.§ A time-efficient version of GroC
To better generalize to large datasets and reduce the computational complexity, we provide a time-efficient version of GroC called TimGroC. Compared to GroC, TimGroC achieves a significant time benefit, enhancing model robustness with almost no additional time overhead. In the implementation of TimGroC, we remove the training loop that updates the adversarial perturbation during adversarial training optimization (as illustrated in Fig. <ref>). This allows the M iterations at time t^+ to be integrated into the outer loop that optimizes the synthesized dataset. Specifically, the adversarial perturbation is set as a persistent variable and added to the synthesized data for gradient matching. This process involves both forward and backward passes, simultaneously obtaining gradients for the synthesized dataset and the shock absorber. This process can be understood as free adversarial training <cit.>. Based on this, we reap the benefits brought by adversarial training, with virtually no additional time cost introduced.
§.§ Gradient Locating in Synthesized Data
In this work, we focus on using the shock absorber to help make the graph data condensation process more robust.
However, employing excessively large perturbations may diminish the expressive power of the entire synthetic dataset. Therefore, we selectively apply perturbations solely to the most vulnerable portion of the synthetic dataset. We refer to this process as gradient localization, as it involves identifying the optimal locations for applying perturbations. Concretely, we perform element-wise multiplication between a differentiable all-ones matrix m_δ and the perturbations δ. The purpose of this operation is to incorporate the all-ones matrix into the optimization computation. Note that the shape of δ is identical to that of the synthetic data 𝒮.
R = \left| \nabla_{m_\delta}\Big( D\big(\nabla_{\theta}\ell(f_{\theta_t}(S + \delta\odot m_\delta\odot m_g),\, Y'),\; \nabla_{\theta}\ell(f_{\theta_t}(T),\, Y)\big) \Big) \right|
where R is the absolute value of the gradient with respect to m_δ. The adversarial training performs M forward and backward passes; the noise δ_0 is initialized as uniform noise, and m_g is initialized as an all-ones matrix. Using the above gradient information, we apply a top-k selection to obtain the position mask m_g:
m_{g\,(i,j)} = \begin{cases} 1, & \text{if } R_{i,j} \text{ is among the top-}k\text{ largest entries}\\ 0, & \text{otherwise} \end{cases}
The mask m_g obtained from the gradients of the previous round acts on the noise δ_{γ+1} of the next round:
\delta_\gamma = \delta'_\gamma \odot m_\delta \odot m_{g,\gamma}
where m_{g,0} is the all-ones matrix; after computing the gradient information for the position matrix m_δ at the first step, it becomes a sparse matrix with only 0 and 1 entries. Therefore, after incorporating our method into Eq. <ref>, it can be rewritten as:
\min_{S} \sum_{t=0}^{T-1} D\big(\nabla_{\theta}\ell(f_{\theta_t}(S + \delta\odot m_\delta\odot m_g),\, Y'),\; \nabla_{\theta}\ell(f_{\theta_t}(T),\, Y)\big)
§ EXPERIMENTS
We present empirical results to demonstrate the effectiveness of our proposed methods GroC and TimGroC. The experiments aim to answer the following research questions: * RQ1. How is the evaluation quality of GroC and TimGroC compared to that of existing SOTAs? * RQ2. How effective is the shock absorber? * RQ3. What is the time overhead of our model? * RQ4. Does the graph compressed by our model exhibit transferability across backbones?
§.§ Experimental Setting
Datasets & Baselines. We conduct experiments on three transductive datasets, i.e., Citeseer, Cora <cit.> and Ogbn-arxiv <cit.>, and on two inductive datasets, i.e., Flickr <cit.> and Reddit <cit.>. For the dataset setting, we follow the setup in <cit.>. In addition, we also examine the transfer ability of our Shock Absorber on the graph classification task. We utilize the Ogbg-molhiv molecular dataset from the Open Graph Benchmark (OGB) <cit.> and TUDatasets (DD and NCI1) <cit.> for graph-level property classification. On the node classification datasets, we compare our method with one state-of-the-art condensation method and three coreset methods. (1) GCond <cit.> models the condensed graph structure based on the condensed node features. (2) The Random coreset <cit.> randomly selects nodes for graph sampling. (3) The Herding coreset <cit.> is often used in continual learning to select samples closest to the cluster center. (4) The K-Center method <cit.> minimizes the maximum distance between a sample and its nearest center to select center samples. For the four baselines—Random, Herding, K-Center, and GCond—we use the implementations from <cit.>. Evaluation. We train our method and the SOTAs with the same settings, including learning rate, optimizer, etc. Firstly, we create three condensed graphs by training the methods with different seeds. Then, we train a GNN on each graph, repeating the process three times. To assess the information in the condensed graphs, we train GNN classifiers and evaluate them on real test nodes or graphs.
By comparing model performance on real graphs, we obtain the informativeness and effectiveness of the condensed graphs. Experiments are repeated 3 times, and we report average performance and variance. Backbones. To ensure fairness, we utilize the identical model as GCond <cit.>, specifically GCN <cit.>, for evaluation. In the condensation process, we apply SGC <cit.>, configured as a 2-layer model with 256 hidden units.
§.§ Main Results (RQ1)
In this part, we thoroughly investigate the performance of GroC and TimGroC across various datasets. We conduct a comprehensive comparison of our frameworks with Random, Herding, K-Center, and GCond for node classification tasks on the Cora, Citeseer, Ogbn-arxiv and Flickr datasets. In Tab <ref>, we present a comparison of sparsity performance and other related parameters between our model and the current state-of-the-art models. In Tab <ref>, we further extend the comparison by including several clustering/compression algorithms. The observations (obs) can be listed as follows. * Obs 1: GroC/TimGroC consistently outperform GCond under extremely large condensation rates, verifying their extraordinary performance. For instance, on the Citeseer dataset, at a compression rate under 0.003 our model achieves accuracy nearly 5.2% higher than the current SOTA GCond. These results demonstrate the significance of adversarial training for graph condensation (Tab. <ref>). Interestingly, from our visualization results (Fig <ref>) and Table <ref>, it can be observed that the graphs we compress are highly dense, with edges acting as dense information carriers, which facilitates information storage. * Obs 2: GroC demonstrates superior performance and lower variance, which attests to the robustness of our algorithm. For instance, in Tab <ref>, both GroC and TimGroC achieve nearly the lowest variance and the highest performance. On the Citeseer dataset, they show an improvement of almost 5.2% accompanied by a decline in variance of nearly 2.0%. Meanwhile, on Reddit, our framework exhibits a variance of only 0.05%, which is significantly lower than other models, further corroborating the robustness of our GroC and TimGroC frameworks. * Obs 3: GroC exhibits stronger training robustness. In Fig <ref>, it is readily apparent that, during the training phase, GroC's curve is predominantly above that of GCond, demonstrating significant potential. Particularly on the Citeseer and Ogbn-Arxiv datasets, our model excels and surpasses the current best models, still possessing the capability for further improvement towards the end of the training.
§.§ Scalability of Shock Absorber (RQ2)
Additionally, we evaluate the Shock Absorber and DoSCond on the graph classification task using the Ogbg-molhiv <cit.>, DD, and NCI1 datasets <cit.>. This comparative analysis allows us to assess the effectiveness and superiority of the shock absorber method in different scenarios. Following the methodology of DoSCond <cit.>, a condensation method for graph classification, we generated one condensed graph for each class. In the training phase of the condensed graphs, we integrated our proposed shock absorber into the one-step gradient matching process. For this purpose, we employed a 3-layer GCN for gradient matching. During the test phase, we used a GCN with the same architecture and trained the model on the condensed graphs for 500 epochs with learning rate 0.001. The classification accuracy results are summarized in Tab. <ref>. Scalability in Graph Classification.
As shown in Tab. <ref>, in the graph classification scenario our GroC shows decent generalization. Its effectiveness can be attributed to the adversarial perturbation, which improves robustness during the graph condensation process. With gradient localization in the synthesized data, the condensed graphs contain more effective information, which is beneficial for model training. Moreover, our experiments on graph-level property classification demonstrate superior interpretability and generalization ability for graph classification, surpassing leading baselines.
§.§ Study of time consumption (RQ3)
In this subsection, we examine the time cost of our algorithm through experiments to further assess whether our model introduces excessive time overhead while enhancing robustness (on a 3070 GPU). Since our aim is to enhance robustness, our model incorporates an adversarial training process. We observed that the model achieves optimal performance when M is between 3 and 4. Consequently, we adopted the more efficient M=3 as the setting for GroC to compare time efficiency. We find that TimGroC, compared to GroC, achieves a speedup ranging from 3.19× to 4.11× while maintaining optimal performance. This further substantiates that our algorithm enhances robustness without introducing additional computational cost.
§.§ Study of transferability (RQ4)
Lastly, we employed GCN as the training backbone and then trained other backbones on the synthesized smaller graph to evaluate its transferability. As shown in Tab <ref>, we choose Cora and Citeseer as benchmarks and follow the reduction ratio of <cit.>; we can easily observe that our synthesized data also achieves commendable performance on GraphSAGE, SGC, and MLP. This further attests to the excellent transferability of our compression algorithm, offering a reliable solution for future data compression.
§ CONCLUSION
In this study, we introduce GroC, a robust adversarial-training-based graph condensation framework. GroC leverages principled adversarial training (min-max optimization) to explore the parameter space surrounding the influential parameters in the original matching process. Based on this, we further introduce the shock absorber operator, which enhances the gradient matching process and maximizes the exploration of synthetic dataset gradients within a limited scope. Our experimental results demonstrate that our approach surpasses other SOTA methods in terms of accuracy and efficiency across multiple node classification datasets and graph classification datasets. The evaluation highlights that our condensed graphs effectively retain important structural properties of the original graphs while significantly reducing dimensionality and computational complexity.
§ NOTATIONS
§ ALGORITHM OF OUR GROC METHOD
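The algorithm referenced by this heading appears as a figure in the original appendix and is not reproduced in this text dump. As a stand-in, below is a minimal, self-contained PyTorch sketch of one GroC matching step as described in the methodology: M PGD-style shock-absorber rounds with a no-sign ascent step, an l_∞ projection, and a top-k position mask, followed by the averaged distance that drives the synthetic-feature update. The one-layer linear stand-in "GNN", all helper names, and the hyper-parameter values are illustrative assumptions rather than the authors' implementation, and the ascent step is interpreted as a gradient step on the matching distance.

```python
import torch
import torch.nn.functional as F

def forward(A, X, W):
    # A tiny linear graph layer used as a stand-in GNN: logits = A X W.
    return A @ X @ W

def loss_and_grads(A, X, y, W):
    loss = F.cross_entropy(forward(A, X, W), y)
    return torch.autograd.grad(loss, W, create_graph=True)

def gradient_distance(grads_s, grads_t):
    # Sum over layers and columns of (1 - cosine similarity), mirroring dis(.,.).
    d = 0.0
    for gs, gt in zip(grads_s, grads_t):
        d = d + (1.0 - F.cosine_similarity(gs, gt, dim=0)).sum()
    return d

def shock_absorber_distance(A, X, y, A_s, X_s, y_s, W,
                            eps=0.05, alpha=0.01, M=3, k=64):
    """M PGD-style rounds on a perturbation restricted to the top-k most
    sensitive entries; returns the averaged gradient-matching distance."""
    grads_t = [g.detach() for g in loss_and_grads(A, X, y, W)]
    delta = (torch.rand_like(X_s) * 2 - 1) * eps      # uniform initial noise
    m_g = torch.ones_like(X_s)                        # position mask, all ones at round 0
    dist_sum = 0.0
    for _ in range(M):
        delta = delta.detach().requires_grad_(True)
        grads_s = loss_and_grads(A_s, X_s + delta * m_g, y_s, W)
        dist = gradient_distance(grads_s, grads_t)
        g_delta, = torch.autograd.grad(dist, delta, retain_graph=True)
        with torch.no_grad():
            delta = (delta + alpha * g_delta).clamp(-eps, eps)   # no-sign ascent + l_inf projection
            thr = g_delta.abs().flatten().topk(k).values[-1]
            m_g = (g_delta.abs() >= thr).float()                 # keep top-k sensitive positions
        dist_sum = dist_sum + dist
    return dist_sum / M   # differentiable w.r.t. X_s for the outer (synthetic-data) update

# Toy usage: a 50-node original graph condensed to 10 synthetic nodes, 3 classes.
N, n, d, C = 50, 10, 16, 3
A, X, y = torch.rand(N, N), torch.randn(N, d), torch.randint(0, C, (N,))
A_s, y_s = torch.rand(n, n), torch.randint(0, C, (n,))
X_s = torch.randn(n, d, requires_grad=True)
W = torch.randn(d, C, requires_grad=True)
avg_dist = shock_absorber_distance(A, X, y, A_s, X_s, y_s, W)
avg_dist.backward()   # gradients on X_s drive the synthetic-feature update X' <- X' - eta * grad
print(float(avg_dist), float(X_s.grad.norm()))
```

In the full method, a step of this kind would alternate with the periodic feature/adjacency updates and with refreshing the GNN parameters over multiple initializations, as described in the methodology above.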
"authors": [
"Xinglin Li",
"Kun Wang",
"Hanhui Deng",
"Yuxuan Liang",
"Di Wu"
],
"categories": [
"cs.LG"
],
"primary_category": "cs.LG",
"published": "20231127124442",
"title": "Attend Who is Weak: Enhancing Graph Condensation via Cross-Free Adversarial Training"
} |
[Model-agnostic Body Part Relevance Assessment for Pedestrian Detection
Maurice Günder^1,2 (0000-0001-9308-8889), Sneha Banerjee^1,3 (0000-0002-9950-2873), Rafet Sifa^1,2 (0009-0004-6680-8210), Christian Bauckhage^1,2 (0000-0001-6615-2128) January 14, 2024 ==============================================================================================================================================================
Figure: With just a single input image (left), RLDF can generate diverse semantically similar images (bottom and right). RLDF requires no text guidance or fine-tuning and can be personalized by simple actions in the semantic encoding space.]
Large vision-language models are steadily gaining personalization capabilities at the cost of fine-tuning or data augmentation. We present two models for image generation using model-agnostic learning that align semantic priors with generative capabilities. RLDF, or Reinforcement Learning from Diffusion Feedback, is a singular approach for visual imitation through prior-preserving reward function guidance. This employs Q-learning (with standard Q*) for generation and follows a semantic-rewarded trajectory for image search through finite encoding-tailored actions. The second proposed method, noisy diffusion gradient, is optimization driven. At the root of both methods is a special CFG encoding that we propose for continual semantic guidance. Using only a single input image and no text input, RLDF generates high-quality images over varied domains including retail, sports and agriculture, showcasing class-consistency and strong visual diversity. Project website: https://infernolia.github.io/RLDF.
§ INTRODUCTION
Recent breakthroughs in text-to-image models were quickly followed by the adoption of human intervention in visual prompt engineering. Humans typically use a visual feedback loop to prompt VLMs and modify the prompts until they arrive at the desired images — an afternoon of merriment for some, yet an expensive ordeal for another. What if we could align generative models with proxies for human perception using semantic diffusion feedback? We aim to single-handedly eliminate this bottleneck of human feedback through context-driven image generation guided by semantic priors. Critically, we attempt to avoid subject-driven generation to mitigate propagation of memorized examples and instead focus on the class of subject-driven generations. The task can be summarized as zero-shot semantic-guided generation for imitation using visual prompting of VLMs. To simulate the long tail, we impute bias-equalizing corrections into the generative process to obtain diversity in the generated examples. Our contributions are many-fold. * We propose the RLDF and nDg models for class-driven semantic imitation using only a single image input. We test these models on multiple domains (Figure <ref>) and on a full ImageNet clone (Figure <ref>), for evaluating usability on real-world benchmarks. * We demonstrate highly effective model-agnostic stability across DALLE-2, SD 1.4 and SD 2.1 models in a plug-and-play mechanism with convergence guarantees.
* RLDF demonstrates generalization across object and action spaces, allowing extensive personalization capabilities. * The models implicitly attempt to ablate training concepts, which can assist with copyright protection and style removal.
§ RELATED WORK
Of the rapidly improving photorealistic text-to-image models <cit.>, we select Stable Diffusion <cit.> in favor of reproducibility, and demonstrate model-agnostic capabilities across both open and closed models in the experiments. Guided diffusion models Fine-tuning and guidance have been shown to be useful for personalized generation <cit.> in diffusion models. Depending on the target task, these signals can be set, or the models can be further tuned for specific data generation <cit.>. Prompt Engineering for Diffusion Models Prompt-image alignment has been tackled using approaches like gradient-based optimization <cit.>, universal guidance <cit.>, and diffusion process inversion <cit.>, and extends to applications in video generation <cit.>. Synthetic Data from Diffusion Models Augmenting real-world data with synthetic examples <cit.> has been of interest, consequently extending to diffusion models, both fine-tuned and off-the-shelf <cit.>. We create similar ImageNet clones with RLDF for comparison in this work. Diffusion Models and Reinforcement Learning Previous works have explored RL for guiding diffusion processes <cit.>, applied generative modeling for decision-making <cit.>, and enhanced fine-tuning <cit.>. Specifically, for prompt-image alignment, closely related work in reinforcement learning includes decision-making frameworks <cit.> and prompt optimization <cit.>. We learn through rewards extracted from diffusion generations for image-prompt alignment and do not perform objective-driven diffusion model training or data augmentation. Additionally, we eliminate the need for human prompting (and guidance) or text input by starting generation from random noise and directly learning from visual semantics. Among previous works <cit.> on other tasks, Q* search <cit.> is a search algorithm showcasing great scalability and experimental results on problems including the Rubik's cube. Memorization in Diffusion Models Due to the volume and varied sources of training data, certain concerning patterns have been observed to emerge in diffusion models. Diffusion models tend to replicate training examples <cit.>, which may contain copyrighted material or artistic styles. Recent work has demonstrated the treatment of this problem as ablation <cit.>. We demonstrate the effectiveness of our proposed method RLDF in ablating input concepts like artistic styles and copyrighted characters in this work.
§ METHODOLOGY
In this section, we provide a brief background on the RL problem formulation. The RLDF presentation is heavily inspired by classical RL theory <cit.> and borrows the standard learning objectives to create a novel model for semantic-guided image search. Formulation: Imagine an n-dimensional gridworld problem where the agent navigates through the object-action space under reinforcement learning policies incentivized by diffusion feedback and semantic guidance. A countable (finite) MDP can be defined by a tuple ⟨Υ, 𝒜, P, R, γ⟩ where Υ is the "state space", 𝒜 is the "action space", P : Υ×𝒜→Δ(Υ) is a "transition kernel", R is the reward function and γ∈ [0, 1) is the "discount factor". In RLDF, we formulate the image search problem as an MDP <ref> with the classic RL objective <cit.> of maximizing the reward R, as shown in Figure <ref>.
Over a discrete time step sequence t=0,1,2,3,… we efficiently simulate the interaction between the diffusion environment and the actor. The environment initialization places the agent at a random noise encoding state. The agent receives the environment's encoded state, Υ_t∈Υ, at every time step t, and selects an action, A_t∈𝒜(υ), from this knowledge of the state. After the passing of a time step, the agent obtains a numerical semantic reward, R_{t+1}∈ℛ⊂ℝ, and is transported to the new encoded state, Υ_{t+1}. This interaction between the agent and diffusion feedback is well understood as a trajectory: Υ_0, A_0, R_1, Υ_1, A_1, R_2, Υ_2, A_2, R_3, …. The RLDF trajectory can be visualized as a sequence of images as shown in Figure <ref>. In each step, the encoding state is the basis of the diffusion model generation, acting as a semantic representation equivalent to a positional state. The dynamics of this system are captured in probability by p: Υ×ℛ×Υ×𝒜→[0,1], a deterministic function computing a probability for the values of R_t and Υ_t that depends only on the preceding state and action:
p(υ', r \mid υ, a) ≐ \Pr\{Υ_t=υ', R_t=r \mid Υ_{t-1}=υ, A_{t-1}=a\}
Semantic Encoding In RLDF, we propose representing a state as an encoded state derived from Context Free Grammar rules written for image search. Classically, a grammar that generates a language L is given by G=⟨T, N, S, R⟩, where T is the set of terminals, N the set of non-terminals, S the start symbol, and R the set of rules of the form X → W, with X a non-terminal and W a sequence of terminals and non-terminals. Here, N = {S, P, NP, A, DP, PM, I, LC, H, C, F, NOM, VP, Noun, Verb, Frequency, Density, Scene}, T = {vocabulary of objects, verbs, actions, scenes, conjunctions, numbers}, and the start symbol is S. The rules are R = { S → P NP A DP I LC;  P → "a photo of";  NP → Frequency NOM;  Frequency → one | many | …;  DP → Density PM;  Density → no | one | …;  NOM → Noun;  Noun → banana | monkey | dog | …;  LC → Scene;  Scene → farm | playground | …;  VP → Verb;  Verb → playing | teaching | …;  PM → H;  H → people;  A → F;  F → and;  I → C;  C → in }.
Intuition The raw state ψ derived from the above grammar rules is then compressed into a single vector which contains the semantic elements label-encoded by mapping onto the structure of natural language. This represents the encoded state υ, which maps the semantics of an entire image as a vector in the object-action-scene space. Thus, each point in the encoded space can represent one or more images (a many-to-one mapping) that contain the same semantic information. Intuitively, our goal state or input image (gold coins; see Figure <ref>) is the end point of a reward-guided path. This can be conceptualized as semantic goal-conditional RL, as the target image semantics are used in reward computation. The entire environment is in the encoding space, and each axis represents an image property; thus each action (moving forward, up, etc.) leads to a new state produced by the diffusion model. However, due to the reward-seeking behavior, one does not need to stop at the goal, but can continue training with the same objective to seek more rewards and simulate finer semantic attributes. Semantic Locality The axes representing image properties are the backbone of the control-enabled output that this model generates. From both computational and interpretability perspectives, we design the vocabulary tree such that semantic properties that are visually similar are also closely located in the encoding space.
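To make the encoding concrete, the sketch below shows one plausible way to label-encode a grammar-derived raw state into a single integer vector, with vocabularies ordered so that visually similar entries receive adjacent codes. The vocabulary contents, their ordering, and the field layout are illustrative assumptions for the example; the paper does not publish its exact vocabulary tree.

```python
# Illustrative label-encoding of a grammar-derived state into a vector.
# Vocabulary contents and ordering are assumptions; nearby indices are meant
# to correspond to visually similar concepts (semantic locality).
NOUNS  = ["banana", "apple", "carrot", "dog", "monkey", "person"]
VERBS  = ["sitting", "walking", "running", "playing", "teaching"]
SCENES = ["park", "vegetable garden", "farm", "playground",
          "train station platform"]          # nearby indices = visually similar scenes
FREQUENCY = ["one", "two", "many"]
DENSITY   = ["no", "one", "many"]            # count of people in the scene

FIELDS = [NOUNS, VERBS, SCENES, FREQUENCY, DENSITY]

def encode(noun, verb, scene, frequency, density):
    """Map a raw grammar-derived state to a point in the integer object-action-scene space."""
    raw = (noun, verb, scene, frequency, density)
    return [vocab.index(value) for vocab, value in zip(FIELDS, raw)]

def decode(state):
    return [vocab[i] for vocab, i in zip(FIELDS, state)]

goal  = encode("banana", "sitting", "park", "one", "no")
start = encode("dog", "running", "farm", "many", "one")
print(goal, decode(goal))    # all images with these semantics map to the same point
print(start, decode(start))  # an action moves the agent along exactly one of these axes
```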
Taking the "scene or background" visual property as an example, the encodings of the park and vegetable garden scenes are closer than the encodings of the park and the train station platform. This is useful when we use RLDF for precise control (Figure <ref>). Reward Seeking Behavior A strong estimation of the gains of a given encoded state or given action is the ideal outcome of this learning paradigm. The state-value function of a state υ under a policy π (a policy is a state → action selection probability mapping), denoted v_π(υ), is the expected return for following π from υ. Expanding on this using a recursive relation:
v_π(υ) ≐ 𝔼_π[G_t \mid Υ_t=υ] = \sum_{a} π(a \mid υ) \sum_{υ', r} p(υ', r \mid υ, a)\big[r + γ v_π(υ')\big], \quad \text{for all } υ∈Υ
The corresponding action-value function from following a policy π, denoted q_π(υ, a), defined as the expected return for taking action a from υ, is given by:
q_π(υ, a) ≐ 𝔼_π[G_t \mid Υ_t=υ, A_t=a] = 𝔼_π\Big[\sum_{k=0}^{∞} γ^k R_{t+k+1} \mid Υ_t=υ, A_t=a\Big]
Reward Engineering We propose 3 reward functions (see Figure <ref>) which vary in their respective mechanisms of computing semantic gains ("diffusion feedback"). The first reward is the "Multi-Semantic Reward" and returns high rewards for matching semantic elements between the generations and the ground truth:
R = \mathbb{1}_{\{g∈ G\}} + \mathbb{1}_{\{s\}}, \quad \mathbb{1}_{\{g∈ G\}} = \begin{cases} C & \text{if } g ∈ GT[\text{objects}]\\ 0 & \text{if } g ∉ GT[\text{objects}] \end{cases}, \quad \mathbb{1}_{\{s\}} = \begin{cases} +C_s & \text{if } g ∈ GT[\text{scenes}]\\ -C_s & \text{if } g ∉ GT[\text{scenes}] \end{cases}
The second reward is the "Partial-Semantic Reward" and returns high rewards for matching scene semantics between the generations and the ground truth:
R = \mathbb{1}_{\{s\}} = \begin{cases} +C_s & \text{if } g ∈ GT[\text{scenes}]\\ -C_s & \text{if } g ∉ GT[\text{scenes}] \end{cases}
The third reward is the "CLIP Reward" <cit.> and returns CLIP embedding similarity as a reward between the generations and the ground truth. Here x and y are the CLIP feature embeddings of the ground truth image and the generation:
R = \mathrm{Cosine}(x, y) = \frac{x · y}{|x|\,|y|}
Bellman Optimality Under an optimal policy π_*, the expected return for the best action from a state must be equal to that state's value. The optimality equations for v_* and q_* are given by:
v_*(υ) = \max_{a ∈ 𝒜(υ)} q_{π_*}(υ, a) = \max_{a} 𝔼_{π_*}[G_t \mid Υ_t=υ, A_t=a] = \max_{a} \sum_{υ', r} p(υ', r \mid υ, a)\big[r + γ v_*(υ')\big]
where the expected discounted return is given by G_t, not to be confused with the grammar notation G.
q_*(υ, a) = 𝔼\big[R_{t+1} + γ \max_{a'} q_*(Υ_{t+1}, a') \mid Υ_t=υ, A_t=a\big] = \sum_{υ', r} p(υ', r \mid υ, a)\big[r + γ \max_{a'} q_*(υ', a')\big]
Convergence Under the conditions of a bounded deterministic reward and an infinite exploration horizon in our discounted (γ) finite MDP, we can obtain convergence guarantees <cit.> for RLDF using proven theorems. The Q-learning variant in RLDF, given by the update rule with step size α_t(υ,a):
Q_{t+1}(υ_t, a_t) = Q_t(υ_t, a_t) + α_t(υ_t, a_t)\big[r_t + γ \max_{b ∈ 𝒜} Q_t(υ_{t+1}, b) - Q_t(υ_t, a_t)\big]
converges almost surely to the optimal Q-function for all (υ, a) ∈Υ×𝒜 under the conditions
\sum_t α_t(υ, a) = ∞, \qquad \sum_t α_t^2(υ, a) < ∞
*The upper-cased functions (e.g., Q) are the corresponding array estimates. Υ^+ is the set of all states, including the terminal state.
§.§ Implementation details
Dataset Size For cloning ImageNet <cit.>, we use examples (both published in this paper and used in RLDF) from the popular subset (the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012-2017 image classification and localization dataset) <cit.>. This contains 1000 object classes and respective train-val-test splits with 1,281,167, 50,000 and 100,000 images. We use RLDF for cloning ImageNet, and build our clone at a similar scale for fair comparison.
We reproduce the dataset by generating synthetic images for 1000 object classes with train-val-test splits of 1,508,000, 56,000 and 100,000 images. Vision-language models Stable Diffusion <cit.> is a latent text-to-image diffusion model that was used in this work for its photo-realistic generations and reproducibility. For the key experiments that include the ImageNet clone, we use the HuggingFace Stable Diffusion v1.4 <cit.> model. For the extended model-agnostic plug-and-play experiments, we use the Stable Diffusion v2.1 <cit.> model from HuggingFace and DALLE-2 (model version on the day of access) <cit.>. Image Generation We run inference on a distributed multi-GPU (approximately 5 A100s) setup that takes approximately 7 days for ImageNet cloning. For efficiency, we ran inference with float16, with the better DPMSolverMultistepScheduler scheduler <cit.>, 20 inference steps, attention slicing, and the latest (on access date) autoencoder <cit.>. The images were of high quality, generated at 512 × 512 pixels. Along with the positive CFG-derived prompt, we also prompt models with a large list of negative prompts <cit.> for better image quality. Reward Functions For the three proposed reward functions, we compute the coarse semantic reward using representations learned from <cit.> and the fine semantic reward from smaller entity recognition. Our implementation is based on different architectures and datasets, including <cit.>. The CLIP rewards <cit.> were calculated based on open-source implementations <cit.>. RLDF The basic setup of the environment was based on the GridWorld problem given in the textbook <cit.> and subsequent open-source implementations <cit.>. We claim that this method is highly efficient due to the avoidance of diffusion-model fine-tuning and the focused search guided by semantic rewards. The model cost can be computed as (diffusion model inference cost per step + nominal action-encoding modification cost per step + reward computation cost per step) × (number of steps). For environments as small as 480 states, this can be as low as 100 steps of training for baseline (high-level) encodings. For ImageNet cloning we restrict the environment to searching along the object axis, but extend to (count + class) multi-object, action, scene and (count + class) people-specific axes for the remaining results. For diversity in the inference step, we increase the control axes vastly and include object and person attributes like location (city), weather conditions, times of day, artistic styles, colors, ages, race (for diversity and inclusion of humans from all across the world), emotions, and gender axes. The goal is to create well-balanced images that can be used for model training in future work. Evaluation We evaluate the RLDF datasets with two key experiments: * Classification: We trained ResNet-18 <cit.> on synthetic RLDF ImageNet-100 data and tested on the real ImageNet-100 data <cit.>, using standard classes <cit.> and training recipes. We also tested on synthetic data from a previous baseline <cit.> for an extended comparison. * Image Distributions: We compare the RLDF synthetic data with ImageNet <cit.> and its natural-distribution-shift datasets including ImageNet-Sketch <cit.>, ImageNet-R <cit.>, ImageNet-A <cit.>, ImageNet-O <cit.>, and compare with results from a previous baseline <cit.>. The FID scores <cit.> and KID scores <cit.> were implemented using <cit.>. Methodology The design of the context-free grammar encoding was inspired by early work <cit.>.
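For reference, the image-generation settings listed above (Stable Diffusion v1.4, the DPM-Solver multistep scheduler, 20 inference steps, float16, attention slicing, 512×512 outputs, fixed seeding, and negative prompts) map onto a diffusers configuration roughly as follows. The prompt strings are illustrative placeholders, not the paper's actual CFG outputs or negative-prompt list.

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_attention_slicing()   # reduces peak memory during attention
pipe = pipe.to("cuda")

image = pipe(
    prompt="a photo of one banana in a park",          # CFG-derived positive prompt (placeholder)
    negative_prompt="blurry, low quality, distorted",  # abbreviated stand-in for the full list
    num_inference_steps=20,
    height=512, width=512,
    generator=torch.Generator("cuda").manual_seed(0),  # fixed seed keeps state-reward consistent
).images[0]
image.save("sample.png")
```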
In this work we also use data samples or reference diagram elements from open sources <cit.>. Applications Of the many interesting generative applications, we focus on <cit.> for previous ImageNet experiments, <cit.> for demonstrating the use of RLDF for similar ablation of memorized concepts, and <cit.> for using RLDF for similar precise control over attributes by sliding over axes of the RLDF environment. Noisy Diffusion Gradient While formulating the RLDF MDP, the intermediate outputs are semantic encodings. By directly computing the gradients (Δ f) on these encodings, we propose a fast noisy diffusion gradient model to reach the global optimum (the input image encoding). This does not have similar guarantees on convergence, due to the noise from the diffusion model, and often gets stuck in plateaus. Noise here means the possibility of the diffusion model generating artifacts that harm semantic registration. The key disadvantages of this proposed method are its failure modes under low signals and fast divergence under post-goal training conditions.
§ RESULTS
The goal of RLDF for ImageNet cloning was to imitate natural semantic elements through diffusion feedback learning. We do not aim to improve the image-generation process (e.g. diffusion) itself in any way, and rather focus on obtaining semantically similar images. We find that RLDF is able to overcome previous semantic challenges and generate consistent distributions across ImageNet-1k classes, as visible in Figures <ref> and <ref>. For this task, we generate 1.5 M images for all 1000 classes, following the ImageNet <cit.> dataset structure, and train the classifier using standard recipes. Congruent with findings from previous works, we do find that training classifiers on purely synthetic data does not generalize well to real-world performance, and we present our results for 100 classes (ImageNet-100) standalone in Table <ref>. Interestingly, in Table <ref>, we see that under the FID metrics the RLDF ImageNet clone lies comparatively closer to the original ImageNet distribution and farther from its natural distribution-shift datasets, such as ImageNet-Sketch <cit.>, among the compared datasets.
§ DISCUSSION
We present some key insights from the RLDF training runs (see Figure <ref>), including: * The training patterns displayed in Table <ref> heavily depend on the initial position of the agent in the encoding space (as this governs the extent to which the agent can explore the near neighborhood to find the terminal state faster), which has been treated through random initialization. * To maintain reward and state consistency, we are required to perform appropriate seeding, which determines the state and reward generated by the diffusion model. Using dynamic seeds during the training process is not recommended, as the model may produce diverging semantics, giving looping rewards along the same initial trajectory. * CLIP Rewards tend to provide better fine-grained guidance; however, they can get stuck at local optima in the reward landscape, which may cause the agent to miss the desired location in its exploratory phase. In contrast, Partial Semantic Guidance provides a more gradual reward landscape, thus keeping the immediate reward aligned with the final convergence goal, which lies semantically closer to the input image distribution. * We intentionally do not include distance from the terminal state as a reward, due to the ambiguity of Euclidean distance: two images equally far away from the destination but with large semantic differences should ideally have different rewards.
* Increasing the world dimensions (we scale the base environment by 9x) can increase compute by over 2x, yet the RLDF learner is still able to efficiently locate the desired properties in the encoding space. * The individual task difficulty can often be domain specific. For example, in the indoor navigation domain, when a bowl is kept on a table, the random agent was also able to reach the end of the trials. In other domains, RL agents performed significantly better. * In small state-space exploration environments, random action sampling (ϵ=1) appears to yield instances of randomly good performance. However, when we scale this for deeper exploration, one finds that the earlier observed performance was spurious, as the agent finds itself much farther away from the ideal state through random sampling versus learning better actions by exploiting seen information. * Design Idea: In the problem setup, one can penalize the agent for unrealistic generations by setting their encodings as terminal states or negative rewards. We avoid this in our work to isolate the model learning from human feedback and guidance.
§ CONCLUSION AND FUTURE WORK
We show that the RLDF model can produce semantically rich generations through class-aware guidance and diffusion feedback, showcased in Figure <ref>. Our second proposed method (Noisy Diffusion Gradient) also obtains semantically accurate results, as shown in Figure <ref>. The key contribution is the efficient compression of semantics into encoded vectors that can represent any real-world image. Critically, RLDF requires no text input, text guidance, or fine-tuning of TTI models. In future work, one can plug in better text-to-image models to improve performance on real-world benchmarks. Applications RLDF generalizes across object (Figure <ref>) and action (Figure <ref>) spaces. The plug-and-play mechanism allows for substitution of the underlying TTI models (Figure <ref>). We show that feeding captions (Figure <ref>) may not supply sufficient semantic information, which RLDF attempts to remedy. RLDF can be applied for ablation of memorized concepts (Figure <ref>) and for similar precise control over attributes (Figure <ref>) by sliding over axes of the RLDF environment. Limitations Some limitations of RLDF include computational costs in larger environments, subject inconsistency (we focus on class-consistency instead), and the constraint of being bounded in performance by the quality of the underlying text-to-image model.
§ ACKNOWLEDGEMENTS
Gratitude to God, my parents, teachers, and friends for providing me with resources, guidance, knowledge and cheesy pasta in this journey. Thanks to the AlphaGo documentary, which kindled my interest in reinforcement learning. This work is but a small sapling of my ideas, with the desire to grow and blossom with feedback (human/diffusion). | http://arxiv.org/abs/2311.15648v1 | {
"authors": [
"Aboli Marathe"
],
"categories": [
"cs.CV",
"cs.AI",
"cs.CL",
"cs.LG",
"cs.RO"
],
"primary_category": "cs.CV",
"published": "20231127092012",
"title": "Reinforcement Learning from Diffusion Feedback: Q* for Image Search"
} |
[email protected] Fondazione Bruno Kessler (FBK), I-38123, Trento, Italy Univ. Grenoble Alpes, CNRS, Grenoble INP, Institut Néel, 38000 Grenoble, France The axion is a hypothetical beyond-the-Standard-Model particle. Its experimental search is an ongoing effort, and an expanding number of techniques keeps narrowing its parameter space. Leveraging the interaction between dark matter axions and spins, a fermionic interferometer is an experiment which aims at detecting the axion-induced precession of a spin resonance. We describe the detection scheme, and outline the possible experimental implementations, their sensitive axion-mass range and discovery potential. Furthermore, the building and characterisation of an axion interferometer is explained in detail, and the resulting setup is used to search for sub-neV dark matter. The Fermionic Axion Interferometer Nicolò Crescini January 14, 2024 ==================================
§.§ Axions and spins
The main pieces of evidence for physics beyond the Standard Model <cit.> are the strong CP problem, the asymmetry between matter and anti-matter, neutrino oscillations, dark matter and dark energy <cit.>. The axion <cit.> is a hypothetical particle which has the potential to solve two of these matters. Originally proposed to account for the absence of charge-parity symmetry violation in quantum chromodynamics, i.e. the strong CP problem <cit.>, it later became a compelling dark matter candidate <cit.>. This theoretical sparkle triggered the first experimental searches that excluded early axion models, and which were followed by new models and new experiments <cit.>. To this day, no signal compatible with axions has been reported. The first thing to consider when detecting a particle is its mass, and the axion's is unknown. A virtually infinite mass range can be constrained by means of models <cit.>, astrophysics <cit.> and cosmology <cit.>, suggesting a preferred window for the axion mass m_a between micro- and milli-electronvolts. Still, many theoretical efforts and experimental searches are directed towards lighter or heavier axions or axion-like particles <cit.>. Typically, the most sensitive experiments cover a narrow mass range, as in the case of haloscopes <cit.>, while broadband techniques cover a wide mass range at the expense of a reduced sensitivity <cit.>. Among these are spin-resonance axion searches, which can be based on electrons <cit.> or on nuclei, the latter giving access to low-frequency axions <cit.>. Dine-Fischler-Srednicki-Zhitnitsky (DFSZ) axions <cit.> interact with fermions with a pseudoscalar interaction constant g_p <cit.> through the Lagrangian
\mathcal{L}_f = \frac{g_p}{2 m_f}\left(\bar{\psi}\gamma_\mu\gamma_5\,\partial^\mu a\,\psi\right),
where ∂^μ a is the four-gradient of the axion field, γ_μ and γ_5 are Dirac matrices, and ψ is the quantum field of a fermion with mass m_f and charge e. One can notice that \mathcal{L}_f is analogous to the interaction Lagrangian of quantum electrodynamics
\mathcal{L}_{\mathrm{QED}} \supset -e\left(\bar{\psi}\gamma_\mu A^\mu\psi\right),
up to the extra γ_5. Physically, this means that the axion field gradient acts on fermionic spins as an effective magnetic field. In the non-relativistic limit, Eq. (<ref>) can be recast in terms of the fermion's spin s_i = μ_f σ_i, where μ_f = e/(2m_f) is the fermion magneton and σ_i is the Pauli vector. The remaining terms form a pseudo-magnetic axionic field
b_i = \frac{g_p}{2e}\,\partial_i a,
which oscillates at the frequency of the axion mass m_a, such that b_i ∝β_i e^{-i m_a t}, where β_i=(β,0,0) is the axion velocity relative to the speed of light c.
If the spin s_i is in a static magnetic field B_i and in an axion effective field, both parallel to its direction, its equations of motion are Bloch equations <cit.>ṡ_̇i̇/γ = ϵ_ijks^j (B^k + b^k),where γ is the fermion gyromagnetic ratio. This equation displays a Larmor precession with an additional axion-induced spin precession <cit.>, which is the core observable of the present work. Taking the non-relativistic limit of Eq. (<ref>), and treating the axion as a classical field are justified by β_i ≪ 1 and by the large axion occupation number n_a = ϱ_dm/m_a <cit.>, respectively. An Earth-based laboratory is immersed in the dark matter halo of the Milky Way, whose estimated density is ϱ_dm=0.4 GeV/cm^3, and travels through it at a speed β≃10^-3. The resulting axion effective field isb_a = g_p/2eβ n_a^1/2≃ 3×10^-18( m_a / 1 eV) T.centered at the frequency corresponding to the axion mass, and with a quality factor which is approximately Q_a ≃β^-2≃ 10^6 <cit.>.The detection of an axion precession can be understood in a general experimental scheme which can be realised at different energy scales—i.e. axion masses—and therefore with different physical systems and technologies, outlined in Table <ref>. Let's first discuss the experimental scheme, outlined in Fig. <ref>. The spin resonance under consideration is phase-modulated by the dark matter axion field gradient which, being dependent on β_i, is a directional effect. This suggests that two spin resonances with static magnetic fields oriented perpendicularly to each other can be used as the two arms of an interferometer. The east arm—(E)—is parallel to the axion field gradient and carries the dark matter signal, while the north arm—(N)—acts as a reference. In the east arm B^(E)_i = (B_0,0,0) and b_i = (b_a,0,0), so Eq. (<ref>) describes a spin resonance whose resonant frequency ω_0 = γ B_0 is periodically modulated by an axionic field with amplitude b_a and frequency ω_a. The north arm has B^(N)_i = (0, B_0, 0), giving the same resonance frequency and no axionic modulation. One can therefore imagine to shine the same light beam to both the resonances, and then let the outputs interfere with themselves, providing an interferometric readout of a dark matter axion signal.This scheme is analogous to the one of gravitational wave observatories, where the Fabry-Pérot cavities are substituted by the spin resonances, as shown in Fig. <ref>. Although existing proposals use interferometric techniques to search for axions <cit.>, the present one differs from them in both detection principle and practical realisation. As is detailed in the following, the advantages of this approach are that it leverages the considerable experimental assets of interferometry, it is broadband in the axion mass parameter space, and it can be engineered at different energy scales.§.§ Detection schemeAs shown in the introduction, axionic dark matter produces coherent oscillations in the frequency of a spin resonance. A suitable way to detect this effect is based on phase modulation, and is detailed in the following. A monochromatic input light on-resonance with the spin precession and with amplitude A_0 is phase modulated by the oscillating resonance, yielding a signalξ(t) = A_0 e^-iω_0 t e^-ixsin(ω_a t),where x=2π^2 γ b_a / k_0 is the modulation index, and k_0 is the spin resonance linewidth.This effect is illustrated in Fig. 
<ref> by using a magnified effect of axions acting on a frequency ω_0/2π = 5 MHz.If the modulation index x is small, ξ(t) can be recast in terms of the first kind Bessel functionsξ(t)= ∑_n=0^∞ξ_n(x)= A_0 e^-iω_0t∑_n=0^∞x^n/2^n n!e^inω_at.At zero order Eq. (<ref>) gives the monochromatic tone amplitude itself |ξ_0| = A_0, while for n=1 it yields the first sideband amplitude|ξ_1| = πγ A_0 b_a / 2k_0which is the searched-for signal of the scheme. One can notice that the sensitivity of phase-modulation schemes to first order does not depend on the amount of material used, but rather on its coherence properties—i.e. the spin resonance linewidth k_0—making them ideal table-top experiments <cit.>.Eq. (<ref>) needs to be compared with a background to get the experimental sensitivity. While the signal calculation is general enough to accommodate for different experimental schemes, the treatment of the noise is more involved, and is discussed hereafter in some details. Assuming that the sensitivity is limited by a white noise with power spectral density N^2_0, we can calculate the minimum detectable effective axion field asσ_a= 2 k_0 N_0/πγ A_0= 2.27 (k_0/10 kHz) (N_0/A_0/10^-5/√(Hz)) pT/Hz^1/2,where γ is the electron gyromagnetic ratio, and the other quantities are roughly the experimental parameters of the setup described in the next part of this work. For instance, considering an axion signal at 1 kHz and therefore an integration time of 10^3 s—matching Q_a—we obtain a sensitivity of about 72 fT. The interferometer bandwidth—the range of axion masses probed by the experiment—is in first approximation k_0, as when ω_a>k_0 the sideband ξ_1 falls out of resonance, and its amplitude decreases. However, depending on the background origin, the signal-to-noise ratio could be preserved, and the experimental bandwidth could be increased depending on the measurement technique.Let's briefly discuss the three experimental cases of Table <ref>, which are referred to as radiofrequency, microwave, and laser interferometer, respectively. A radiofrequency interferometer can be realised with both nuclei or electrons. The former resembles some configurations of the cosmic axion spin precession experiment <cit.>, although not searching for axion-induced oscillating dipole moments or magnetisation. However, one could use a different detection configuration of the same setup to implement the scheme described in this work. Electron spin based experiments work with low magnetic fields, which is advantageous in terms of complexity but makes it particularly sensitive to environmental fluctuations.A non-negligible advantage of radiofrequency setups is that all the interferometry can be handled digitally thanks to fast electronics, reducing the hardware requirements to a minimum. At microwave frequencies nuclear spin resonance becomes inaccessible, although some nuclei like Holmium show up to GHz precession <cit.> thanks to a large effective γ. Electron spin resonances on the other hand are widely studied in this frequency range. A major problem occurring with spin resonances at high frequencies is the broadening of the resonance linewidth, often referred to as radiation damping <cit.>, which is eliminated by embedding the sample in a resonant cavity. The resulting photon-magnon hybrid systems are applied e. g. to quantum technologies <cit.> and Dark Matter searches <cit.>.A laser axion interferometer has several intriguing features.Let us consider as an example a magnetic dipole atomic transition in the infrared range. 
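Before turning to the laser implementation in detail, a quick numerical sanity check of the phase-modulation picture above: the sketch below generates a carrier phase-modulated at ω_a and verifies that the first sideband-to-carrier amplitude ratio approaches x/2 for a small modulation index, which is why the sideband amplitude |ξ_1| scales linearly with b_a. The parameters are illustrative, not the experimental ones, and the sign convention of the exponent does not affect the amplitudes.

```python
import numpy as np

fs, f0, fa, x = 1.0e6, 5.0e4, 1.0e3, 0.1   # sampling rate, carrier, modulation, index
t = np.arange(int(fs)) / fs                # 1 s of data -> 1 Hz frequency resolution
xi = np.exp(1j * (2 * np.pi * f0 * t + x * np.sin(2 * np.pi * fa * t)))

spec = np.abs(np.fft.fft(xi)) / t.size
freqs = np.fft.fftfreq(t.size, d=1 / fs)
carrier = spec[np.argmin(np.abs(freqs - f0))]
sideband = spec[np.argmin(np.abs(freqs - (f0 + fa)))]
print(sideband / carrier)                  # ~ x/2 = 0.05 (Jacobi-Anger expansion)
```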
A resonant laser beam transmitted through the sample will experience phase modulation like the carrier signals treated beforehand, and with a quality factor of order 10^4 the bandwidth of the interferometer would encompass the whole QCD axion range <cit.>. On top of that, the experimental apparatus can benefit from all the advantages of interferometers, similarly to gravitational-wave observatories. However, the effective modulation of a magnetic dipole resonance should be estimated and would probably depend on the nature of the transition itself [The effective amplitude modulation and the noise estimation are not trivial, therefore a thorough discussion of this setup will be carried out in a forthcoming work]. §.§ Experimental axion search This section concerns the building and operation of a radiofrequency axion interferometer sensitive to the axion-electron interaction at sub-megahertz frequencies <cit.>; a scheme of the experimental setup is presented in Fig. <ref>. The sensing material is a ferrimagnet, NiZn, shaped into two rods oriented perpendicular to each other. The rods are the two arms of the interferometer and are labelled north (N) and east (E) after the directions they point towards. Each rod is read out by coupling it to a coil, resulting in a resonance frequency of about 5 MHz, and is then connected to the electronics (for more details see the appendix). Given the relatively low frequency of the resonances, the signal is generated and read out by a 125 MS/s analog-to-digital and digital-to-analog field-programmable-gate-array board, allowing the signal processing to be handled digitally. In addition, several environmental parameters, such as temperature, pressure and the local magnetic field, are recorded with dedicated sensors, allowing a possible axion signal to be disentangled from environmental systematics. The setup characterisation relies on the transmission measurement of the resonances, which gives the resonance frequencies of the arms and their linewidths, as presented in Fig. <ref>a. The north arm has resonance frequency ω_N = 5.45 MHz and linewidth k_N = 19.4 kHz, while the east arm has ω_E = 5.63 MHz and k_E = 21.4 kHz, obtained by fitting the transmission measurement with Lorentzian functions. The haloscope operation requires sending two tones, one on resonance with each arm, to achieve the maximum phase modulation, as shown in Fig. <ref>b and <ref>c. The tones' frequencies are ω_N and ω_E, while their amplitudes are set to A_N=A_E=1.0 V. The two analog signals are summed and down-converted according to Fig. <ref>, where the down-conversion frequency ω_LO=(2π)f_LO needs to be ω_LO=(ω_N + ω_E)/2. The interferometer can suppress common phase or amplitude noise depending on the phase of the down-conversion tone <cit.>, and given the nature of the axion signal, we opt for phase noise reduction. Eq. (<ref>) shows that it is possible to arbitrarily improve the sensitivity by increasing A_0, i.e. using stronger tones. However, in a realistic experimental configuration N_0 is directly proportional to A_0, or—in the best case scenario—dominated by quantum fluctuations. The limit of the present setup is the phase noise of the signal generators, which is about -100 dBc and fixes N_0/A_0 ≃ 10^-5 Hz^-1/2. As mentioned in the introduction, the virialised axion field has Q_a≃10^6, and therefore the optimal resolution bandwidth to detect it is RBW=ω_a/Q_a. For frequencies below 200 kHz the corresponding integration time 1/RBW is above one second, but in the present setup the resolution bandwidth is set to RBW=1 kHz, the minimum allowed by the electronics.
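As a cross-check of the quoted sensitivity, the following sketch evaluates the expression for σ_a given earlier with the parameters above (k_0 ≈ 10 kHz, N_0/A_0 ≈ 10^-5 Hz^-1/2). The units assumed for γ (28 GHz/T, i.e. the electron gyromagnetic ratio in frequency units) are an interpretation chosen here because they reproduce the quoted figures; they are not stated explicitly in the text.

```python
import numpy as np

gamma = 28.0e9        # electron gyromagnetic ratio [Hz/T] (assumed units)
k0 = 10.0e3           # spin-resonance linewidth [Hz]
noise_ratio = 1.0e-5  # N0/A0 [1/sqrt(Hz)], set by the generators' ~-100 dBc phase noise

sigma_a = 2 * k0 * noise_ratio / (np.pi * gamma)    # minimum detectable effective field
print(sigma_a * 1e12, "pT/sqrt(Hz)")                # ~2.27 pT/sqrt(Hz)

t_int = 1.0e3                                       # integration time matching Q_a at 1 kHz
print(sigma_a / np.sqrt(t_int) * 1e15, "fT")        # ~72 fT
```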
The duty cycle of the acquisition is also limited to roughly 2%. These last two limitations are purely technical, and will be optimised in an updated version of the setup. The experiment is controlled using a dedicated characterisation, acquisition and analysis code which can be found in Ref. <cit.>. If one arm is oriented towards Vega <cit.>, its resonance frequency is modulated by the axion field b_a, while that of the remaining arm is unaltered. A single radiofrequency signal probes the on-resonance transmission of the ferrite and indicates whether the resonance frequency shifts, while two signals, each on one arm of the interferometer, can detect relative shifts of one spin resonance with respect to the other <cit.>. The measurement scheme is interferometric, and therefore resilient to systematic uncertainties. The output data (see Fig. <ref>) is a spectrum which contains information on whether ω_N shifted with respect to ω_E or vice versa, which can be used to put a limit on the presence of a dark matter wind. An experimental run consists of the acquisition of an arbitrary number of 1 ms-long spectra which are then averaged into a single spectrum. We collect two runs of 10^4 s, called signal and background. The former is collected while the signal is at the maximum of its daily modulation and the latter while no signal is expected, so that by comparing them one can extract an upper limit on the axion-electron interaction. Signal and background are found to be compatible at all the measured frequencies, so the result is consistent with no axion signal. Following Eq. (<ref>), the magnetic field limit is recast as the 2σ (90% confidence level) upper limit on g_p reported in Fig. <ref>. The most stringent limit on |g_p| is 1.2×10^-7, obtained for an axion mass of about 0.36 neV. §.§ Perspectives and conclusions A fermionic interferometer is a broadband dark matter axion experiment based on the modulation of spin resonances. Two light beams interact with a magnetic material whose properties depend on the local dark matter density, and then interfere, eliminating systematic errors and preserving only the searched-for signal. The scheme can be realised with radiofrequency, microwave or laser light. We present the experimental implementation and operation of a radiofrequency fermionic interferometer with which we perform an axion search in the sub-neV mass range. This preliminary setup of the interferometer is realised in a minimal experimental scheme which is amenable to improvements, and which is simple enough to be openly reproduced <cit.>. Short-term improvements include an apparatus where the interferometry is analogue, and digitisation happens only after the down-conversion. This allows for continuous monitoring of the signal, and therefore an arbitrary resolution bandwidth and a duty cycle of one, also making it possible to search for transient signals. For future axion searches, in particular towards μeV-mass axions (see Table <ref>), one can envision a laser interferometer probing an infrared magnetic resonance of, e.g., Er:YLF or Er:YSO <cit.>. This axion experiment is particularly interesting because it leverages the analogy with gravitational-wave detectors, where the magnetic resonance plays the role of the Fabry-Pérot cavity.
However, a detailed study of the atomic transition to be used is required to design and operate such an experiment. Beyond axion searches, an intriguing possibility is the use of this interferometer to detect gravitational-wave effects via the gravitational Larmor effect <cit.> or gravitational precession <cit.>. No change is necessary to the experimental setup, as it is already sensitive to any effective magnetic field modulating the spin resonance, but the analysis procedure would need to be modified in order to measure transient effects instead of persistent ones. Finally, we mention that the magnetic field sensitivity of the setup is interesting also beyond fundamental physics, as, when optimised, it can compete with state-of-the-art magnetometers <cit.>. §.§ Acknowledgements The support of Amberlab, in the persons of Davide Fasoli and Manuel Pachera, is gratefully acknowledged for the help in building the experimental apparatus. Federico Chiossi is also acknowledged for the advice on the use of atomic transitions to realise the fermionic interferometer. Finally, the author would like to thank Gianni Carugno and Giuseppe Ruoso for the discussion on the experimental scheme. §.§ Appendix: open source project The simple hardware requirements of the experiment make it interesting for open-source projects to be carried out in schools, universities or by amateurs. The described experimental setup is inexpensive and all of the necessary code, from drivers to analysis routines, is publicly available <cit.>. A circuit-board scheme of the experiment is sketched in Fig. <ref>. | http://arxiv.org/abs/2311.16364v2 | {
"authors": [
"Nicolò Crescini"
],
"categories": [
"hep-ex",
"hep-ph"
],
"primary_category": "hep-ex",
"published": "20231127230607",
"title": "The Fermionic Axion Interferometer"
} |
Physics-Informed Neural Network for Discovering Systems with Unmeasurable States with Application to Lithium-Ion Batteries Yuichi Kajiura, Jorge Espin, and Dong Zhang Y. Kajiura, J. Espin and D. Zhang are with the School of Aerospace and Mechanical Engineering, The University of Oklahoma, Norman, OK, 73019, USA. E-mail: {yuichi, jorge.espin, dzhang}@ou.edu.January 14, 2024 =================================================================================================================================================================================================================================================Recent advances in neural methods have led to substantial improvement in the quality of Neural Machine Translation (NMT) systems. However, these systems frequently produce translations with inaccurate gender <cit.>, which can be traced to bias in training data. <cit.> tackle this problem with a handcrafted dataset containing balanced gendered profession words. By using this data to fine-tune an existing NMT model, they show that gender bias can be significantly mitigated, albeit at the expense of translation quality due to catastrophic forgetting. They recover some of the lost quality with modified training objectives or additional models at inference. We find, however, that simply supplementing the handcrafted dataset with a random sample from the base model training corpus is enough to significantly reduce the catastrophic forgetting. We also propose a novel domain-adaptation technique that leverages in-domain data created with the counterfactual data generation techniques proposed by <cit.> to further improve accuracy on the WinoMT challenge test set <cit.> without significant loss in translation quality. We show its effectiveness in NMT systems from English into three morphologically rich languages – French, Spanish, and Italian.The relevant dataset and code will be available at <Github>§ INTRODUCTION Neural Machine Translation (NMT) is now mainstream, both academically and commercially, and has become one of the biggest success stories of Deep Learning in Natural Language Processing (NLP). However, there has been a growing concern about bias in NMT systems, where models optimized to capture the statistical properties of data collected from the real world inadvertently learn or even amplify social biases found in training data. Specifically, gender bias is prevalent in widely used industrial NMT systems <cit.>.Gender bias in NMT can be divided into two main categories of problems: (1) Translating sentences where gender is ambiguous. In this scenario, NMT systems tend to produce stereotypical gender roles in the target. For instance, while translating sentences from English to Spanish, engineer is usually translated as being male while nurse is usually translated as being female. (2) Translating sentences where a human can reasonably infer gender in the source from the available context. In this case, NMT output disagrees with the source gender, reflecting potential bias.In this work, we propose solutions to address the second class of problems described above – where gender can be inferred from source context. The most prevalent approaches to mitigating gender bias in NMT models are retraining and fine-tuning. While <cit.>; <cit.>; <cit.> explore approaches involving retraining from scratch, <cit.> and <cit.> treat gender bias as a domain-adaptation problem. Since retraining is uneconomical, we explore fine-tuning approaches to mitigate the bias. 
<cit.> fine-tune base NMT models on a gender balanced dataset extracted from Wikipedia. While they show an accuracy improvement of 10% on the WinoMT pro-stereotypical subset, improvement on the anti-stereotypical subset is limited. On the other hand, <cit.> demonstrate that by fine-tuning an existing NMT model on a handcrafted dataset containing gendered profession words, gender bias can be reduced significantly, though at the expense of translation quality due to catastrophic forgetting. They overcome that catastrophic forgetting through a regularized training technique or through inference with a lattice rescoring procedure. Conversely, our approach uses a subset of the training corpus itself to generate finetuning data that is in-domain. As a result, we avoid the catastrophic forgetting otherwise seen during domain adaptation. We construct our finetuning corpus by using the counterfactual data generation technique proposed by <cit.> to convert between masculine-inflected and feminine-inflected sentences in morphologically rich target languages. We achieve 19%, 23%, and 21.6% WinoMT accuracy improvements over the baseline for Italian, Spanish, and French respectively, without significant loss in general translation quality. The advantages of our approach are three-fold nolistsep * Since our approach uses a subset of the in-domain training corpus to generate finetuning data, we avoid the catastrophic forgetting otherwise seen during domain adaptation.* Our approach is purely data centric and therefore requires no modification to training objective or additional models at decoding. It therefore incurs no additional cost during training or inference.* Using counterfactual data generation techniques, one can leverage a much more dynamic and diverse data set in model training. The rest of the paper is organized as follows: Section (2) covers related work. Section (3) has ethics consideration of our work. Section (4) details our data generation and fine-tuning techniques. In Section (5) we explain our experimental setup. Result and error analysis are covered in section (6)Finally, in Section (7) we discuss future work and other concluding thoughts. § RELATED WORK <cit.> investigate the problem of gender bias in machine translation. They construct a templatized test set, using a list of job positions from the U.S. Bureau of Labor Statistics (BLS), in 12 gender-neutral languages and translate these sentences into English using Google Translate. They observe that Google shows a strong inclination towards male defaults, especially for fields related to science, engineering, and mathematics. <cit.> approach gender bias as a domain-adaptation problem. They fine-tune a base model on a handcrafted, gender-balanced profession dataset. They show significant improvements in gender bias on WinoMT challenge set, but with a loss in general translation quality due to catastrophic forgetting. They propose two strategies for overcoming this catastrophic forgetting. Elastic Weight Consolidation (EWC), a regularized adaptation technique, involves adding a regularization term to the original loss function to maintain the general translation quality. The alternative solution, lattice rescoring, avoids forgetting by using constrained decoding to keep the translations close to the previously generated translation. These solutions require modification to the objective function or additional cost during inference. <cit.> fine-tune base NMT models on a gender balanced dataset extracted from Wikipedia. 
While they show an accuracy improvement of 10% on the WinoMT pro-stereotypical subset, improvement on the anti-stereotypical subset is limited.On the other hand, <cit.>; <cit.>; <cit.> explore approaches involving retraining from scratch. <cit.> prepend source sentences with a tag indicating the speaker’s gender, both during training and inference. Though this doesn’t remove the gender bias from the model, it does provide some control of the gender of entities in the produced translations. <cit.> train their NMT models from scratch by annotating source language words with the grammatical gender of the corresponding target words. During inference, since they do not have access to gender information of the target sentence, they use co-reference resolution tools to infer gender information from the source sentence instead. They show accuracy improvements of up to 25% on the WinoMT challenges set. While this improvement is substantial, their approach requires annotation of the training corpus, as well as full retraining of NMT models from scratch, each of which may be prohibitively expensive for some purposes. <cit.> approach gender errors in NMT output as a correction problem and explore methods of guiding NMT inference towards finding better gender translations. They experiment with a combination of two techniques - (1) applying gender constraints while decoding to improve nbest list gender diversity and (2) reranking nbest lists using gender features obtained automatically from the source sentence. They show WinoMT accuracy improvements similar to <cit.>.§ BIAS STATEMENTIn this work, we measure and attempt to mitigate gender bias in NMT systems. Our work only deals with scenarios where the gender can be inferred in the source. Our proposed solution involves fine-tuning a base model on a gender-balanced in-domain dataset built from the training corpus. We show substantial accuracy improvements as measured by the WinoMT test set. Both our work and the WinoMT test set are geared toward profession words, thus we may fall short in other areas. We haven't analyzed the skew in gender representation in the training data. Finally, we have only looked at bias with respect to male and female genders and our work does not address non-binarygender identities.§ GENDER-BALANCED DATA GENERATION In this section we detail the core algorithm for generating gender-balanced fine-tuning data with counterfactuals.We generate counterfactual sentence pairs from the training data used to train our base model. The goal of this process is to identify sentences pairs that contain a masculine or feminine form of a profession animate noun and produce a modified version with the opposite gender. The modified sentence pair should be an adequate translation and the source and target should each be fluent sentences. The methodology is summarized in Algorithm 1.§.§ In-Domain Data GenerationSelecting Gendered SentencesWe begin the finetuning data generation process by selecting a subset of the base model training corpus to use for counterfactual data generation. To generate high quality data which also works with the counterfactual generation tools, we reject sentences that do not meet the following criteria: * Length: Maximum of 20 whitespace-delimlited tokens.* Length Ratio: Ratio between tokens on each side does not exceed 3.* Animacy: English sentence must contain exactly one gendered pronoun and one profession noun from the list extracted from the handcrafted dataset. 
Additionally, we use Stanza (<cit.>) to check that the Part-of-speech (POS) of the matched profession word is noun, as some have adjective senses as well.* Wellformedness: English side must begin with a capital letter and end with a punctuation mark.* Proper Nouns: English side contains no tokens tagged as proper nouns by Stanza. §.§ Counterfactual Data Generation Generating Counterfactuals for Source Data: We produce a gender-swapped version of each English sentence in the set extracted in<ref> by replacing the pronoun in our replacement set with its opposite-gendered counterpart. Her can be swapped to either him or his, and we use the Spacy POS tagger to disambiguate. His is always swapped to her, as cases where it should be swapped to hers are sufficiently rare.Generating Counterfactuals for Target Data: In morphologically rich languages, this generally requires changing the form for more words than just the identified animate noun in the target language. For example:Le soldat allemand est très content.La soldate allemande est très contente. We leverage <cit.> for generating the counterfactual data for Spanish, French and Italian sentences.This begins by using Stanza <cit.> to tokenize and parse the target sentences. We then use Udify <cit.> to add universal dependency features such as number and gender to tokens in the parse tree. Each target sentence has been preselected to contain exactly one profession animate noun from our animacy list. We mark this animate noun to be gender-swapped and apply a Markov Random Field (MRF) model as proposed by <cit.> to identify additional tokens that also need to have their gender changed so that the generated counterfactual sentence has correct gender agreement. For French and Spanish, we use the models provided by <cit.>, and for Italian we trained a model from treebank data[<https://universaldependencies.org/#download>]. We found that for French and Italian, the MRF model rarely marked determiners for reinflection, so we additionally marked any determiner token whose head is the selected animate noun.Next we reinflect the lemmata of the tokens marked in the previous step to their new forms. During initial experimentation we used a Morphological Inflection (MI) model to reinflect all identified tokens, but we found that it had low accuracy on determiners and often left the profession animate nouns themselves unchanged from their original form. Furthermore, in Italian and French the correct form for some determiners depends on sentence context and therefore cannot be correctly predicted by an MI model without access to that context. We determined that using dictionary lookup for determiners and nouns in the profession animacy list produces better results. We fall back to the MI model for words not covered in the dictionary, which most often were adjectives modifying the animate noun.We apply a set of hand-crafted rules to form contractions between prepositions and articles as necessary for each language. For example, in Spanish de el (glossed as of the in English) is contracted to del. We then detokenize the sentences. Generating counterfactuals through forward and backtranslation: During early testing we explored alternative approaches for generating each side of the data in the gender-balanced dataset. We backtranslated the target counterfactual to produce a corresponding English sentence. Likewise, we forward translated the gender-swapped source sentences in English to generate counterfactuals in the target language. 
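Stepping back to the source-side swap described above, here is a minimal sketch of the pronoun-replacement rules, using spaCy's POS tags to disambiguate her (PRP vs. PRP$). The he/she handling and the helper name are illustrative additions and may differ from the exact rules in our pipeline.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def swap_pronoun_gender(sentence: str) -> str:
    """Gender-swap third-person singular pronouns in an English sentence."""
    out = []
    for tok in nlp(sentence):
        low = tok.lower_
        if low == "he":
            new = "she"
        elif low == "she":
            new = "he"
        elif low in ("his", "him"):
            new = "her"
        elif low == "her":
            # possessive 'her' (PRP$) -> 'his'; object 'her' (PRP) -> 'him'
            new = "his" if tok.tag_ == "PRP$" else "him"
        else:
            out.append(tok.text_with_ws)
            continue
        if tok.text[0].isupper():
            new = new.capitalize()
        out.append(new + tok.whitespace_)
    return "".join(out)

print(swap_pronoun_gender("The logistician finished her work."))
# -> "The logistician finished his work."
```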
However, we found the results of this forward- and back-translation alternative to be inferior. One possible explanation for this inferior quality is that both forward and back translation systems have the same gender bias problem that we are trying to solve, and so they were not able to produce translations of the needed quality. §.§ Selecting Random Dataset We also sample another subset of the training corpus to use as neutral data. Our aim is to select a dataset whose distribution is more similar to the original base model training corpus. This data may or may not contain any gendered pronouns or words from our profession animate noun list. Our hypothesis is that including this data along with the handcrafted or counterfactual datasets during fine-tuning will help mitigate the catastrophic forgetting that is observed when training on those datasets alone. We apply the following filters from <ref> to the base model training corpus before randomly sampling: Length (maximum of 100 tokens), Length Ratio and Wellformedness. §.§ Handcrafted Profession Dataset We utilize and extend the handcrafted profession dataset developed by <cit.> in this work. This data set consists of 388 English sentences human-translated into each target language of interest. Each English sentence consists of a profession word from a list of 194, embedded into a masculine or feminine version of the same template. For example: The logistician finished his work. The logistician finished her work. We use the Spanish translations from <cit.> and have additionally translated the English data into French and Italian. § EXPERIMENTS §.§ Languages and Data We evaluate our approach on three language pairs in a high-resource setting – English →French (en →fr), English →Spanish (en →es) and English →Italian (en →it). The WinoMT framework supports the evaluation of gender bias in translations from English into these morphologically rich target languages from the Romance language family. Following the procedure described in <ref>, we sample at most 10 training examples for each word in our profession animate noun list and generate counterfactuals for those sentences. We then remove any sentence pair for which there is no difference between the counterfactual and the original target sentence. We then create sentence pairs from each gender-swapped English sentence and corresponding counterfactual target sentence. We add to this the original non-counterfactual sentence pairs so that for each sentence pair, we have a balanced masculine and feminine version. Combining original and counterfactual data in this way yields two interesting benefits. First, each profession word will now be balanced in this dataset between male and female versions, which should guide the model towards a gender-balanced state rather than overshooting. Second, because the sentential context is identical in the male and female versions, we hope not to teach the model to erroneously associate male and female versions with unrelated contextual features. Table <ref> summarizes the corpus statistics for train, validation and fine-tune datasets for the three language pairs. We use the IWSLT dataset for validation and WMT datasets for testing. §.§ Training and inference Our base NMT models are transformers <cit.> with an RNN-based decoder using SSRU <cit.>, implemented in the Marian Toolkit <cit.>. We use 6 layers in the encoder and decoder, 8 attention heads, and a 2048-dimensional feed-forward layer with ReLU activation. We apply a dropout of 0.1 between transformer layers.
The embedding dimension is 512 and we tie target embeddings and output embeddings. We learn a joint vocabulary with 32K tokens, computed with SentencePiece on the training data. We set label smoothing to 0.1. We optimize the model parameters with Adam, with a learning rate schedule of (learning rate, warmup) = (0.0002, 8000). We train baselines with AfterBatches set to 1000000 and early stopping on validation set cross-entropy-mean-word set to 10. We decode with beam size=4. During fine-tuning we use the vocabulary of the base model. We optimize the model parameters with Adam, with a learning rate schedule of (learning rate, warmup) = (5e-05, 100). We set AfterBatches to 4000 with early stopping on validation BLEU set to 20. § RESULTS §.§ WinoMT Challenge Set and Evaluation Metrics Following other recent work, we evaluate our NMT systems on the WinoMT <cit.> challenge set to quantify gender bias. This dataset consists of English sentences, each containing two animate nouns, one of which is coreferent with a gendered pronoun. From context within the sentence, a human can clearly determine which animate noun is coreferent, and thus the gender of the person described by that noun. By checking how often an MT system produces a translation of the correct gender for that animate noun, we can determine to what extent the system relies on gender stereotypes as opposed to relevant context. The following are the key metrics: Accuracy (Acc) – Percentage of translations with correct gender for the primary entity. Pro – Accuracy on the pro-stereotypical subset (e.g. female nurse). Anti – Accuracy on the anti-stereotypical subset (e.g. male receptionist). ΔS – Difference in accuracy between pro-stereotypical and anti-stereotypical gender role assignments. A higher value indicates that models are better at translating gender when it aligns with stereotypical gender roles. ΔG – Difference between male and female F1 scores. A higher positive value indicates that the system does a better job in translating male entities. §.§ Result Analysis We fine-tuned our base models on various combinations of the three datasets described in Section 4: (1) the handcrafted dataset from <cit.> (S&B), (2) a random sample of the base model training data (Random), and (3) the gender-balanced in-domain dataset we describe in Section 4.2 (GB). We report WinoMT metrics in Table <ref>, and BLEU scores for each system are presented in Table <ref>. Our base models achieve accuracy, ΔG, and ΔS comparable to the best performing commercial translation systems reported in <cit.>. The WinoMT scores for the base models show that they are heavily gender-biased. High positive values of ΔS indicate much higher gender accuracy on sentences with pro-stereotypical assignments than anti-stereotypical ones. Fine-tuning on S&B alone yields substantial improvements on WinoMT accuracy relative to the baseline, up to 25% for French, as well as reducing ΔG and ΔS dramatically. However, catastrophic forgetting leads to a consistent drop in BLEU of up to around 1 point, which is significant. This was also observed in the original experiments in <cit.>.
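As a reference for how the metrics above are computed, here is a small sketch operating on per-example records. The record format is assumed for illustration; the WinoMT tooling reports these scores directly.

```python
def winomt_metrics(examples):
    """Compute Acc, ΔG, and ΔS from per-example dicts with keys
    'gold' and 'pred' ('male'/'female') and 'stereo' (True = pro-stereotypical)."""
    acc = sum(e["pred"] == e["gold"] for e in examples) / len(examples)

    def subset_acc(flag):
        sub = [e for e in examples if e["stereo"] == flag]
        return sum(e["pred"] == e["gold"] for e in sub) / len(sub)

    delta_s = subset_acc(True) - subset_acc(False)   # Pro accuracy - Anti accuracy

    def f1(label):
        tp = sum(e["pred"] == label and e["gold"] == label for e in examples)
        fp = sum(e["pred"] == label and e["gold"] != label for e in examples)
        fn = sum(e["pred"] != label and e["gold"] == label for e in examples)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

    delta_g = f1("male") - f1("female")              # male F1 - female F1
    return acc, delta_g, delta_s
```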
This demonstrates that the method is a suitable alternative for their more complicated and costly approaches of EWC and lattice rescoring.Fine-tuning on our gender-balanced dataset (GB) achieves better WinoMT accuracy than S&B and S&B+Random for Spanish and Italian with only minimal loss in BLEU relative to base models. The French system has slightly lower accuracy than S&B , though with much less BLEU degradation and still 17% improved accuracy over the baseline. Further adding random data (GB+random) had almost no effect on Spanish and a slight improvement in accuracy and BLEU for Italian. Because GB and Random are both in-domain, the additional benefit of including Random appears to be less than for adding it to S&B. For French, we see a slight improvement in BLEU and slight regression in WinoMT accuracy.Finally, our models finetuned on all three datasets combined (GB+Random+S&B) show the strongest WinoMT accuracy among systems with acceptable levels of BLEU loss relative to base models. These systems show accuracy improvements of 19%, 23% and 21.6% over the baseline for Italian, Spanish, and French respectively, as well as substantial improvement inΔS for all the language pairs. Though accuracy for the S&B French model is slightly higher, that system shows a drop in BLEU of around 1 point relative to the base model, which may be considered unacceptably large. §.§ Error Analysis en →fr models finetuned on data including our GB dataset exhibit a problematic pattern of mistranslating the English word she into il (he), despite large gains in predicting correct gender of profession nouns themselves. This pattern is observed in both the WinoMT output and WMT test sets, but is not seen in the base model output or finetuning experiments that exclude the GB dataset.For example, in Table <ref>, she is translated to il in the GB+Random+S&B translation, despite both profession nouns in the sentence being changed to feminine forms (WinoMT counts this sentence correct if the translation of the driver/la conductrice is feminine, but the hairdresser/la coiffeuse has also been changed). In the base model and S&B+Random translations, she is correctly translated to elle.Examining the GB dataset, we notice several examples that may be contributing to this pattern. In Table <ref>, we see an example where the English pronoun refers to a different individual from the profession noun that we modify in the counterfactual, and our method of gender swapping English introduces an erroneous mapping between she and il. In Table <ref> we see an example where the pronoun and profession noun refer to the same individual, but the pronoun is not identified by our MRF model. Hence, it does not get changed to feminine on the French side, but does in the English side due to the simple gender swapping rule.We do not see the same error in our Spanish or Italian output. One likely reason for this is that subject pronouns are frequently elided in those two languages, so far fewer erroneous mappings are introduced. The model is also more able to hide uncertainty about correct pronoun gender by simply omitting subject pronouns. We also do not see the reverse pattern, i.e. he translating to elle (she). This may be because the sampled gender dataset contains about 3 times as many instances of he as she. 
The gender swapping process produces many more she/il mappings. We also see another interesting pattern in Table <ref> – in the base model output both profession nouns are masculine, but in both finetuned models, the gender of both has changed to feminine, matching the pronoun, she. This suggests that the models may not be learning to solve the coreference resolution problem so much as simply conditioning on the gender of any present pronoun. This pattern is observed frequently in all three of our tested languages. § CONCLUSION In this work we demonstrate two fine-tuning-based approaches for mitigating gender bias in NMT in scenarios where gender is clear in the source. We show substantial improvements in WinoMT accuracy for three language pairs without significant degradation in BLEU. First, we extend the approach described in <cit.> by adding a random in-domain sample to their handcrafted profession dataset before finetuning. We show that this simple method is sufficient to minimize catastrophic forgetting and provides an attractive alternative to their potentially more complicated proposals of EWC regularization and lattice rescoring. Next, we adapt the counterfactual-data-generation technique of <cit.> to an NMT setting to synthesize a gender-balanced in-domain dataset for finetuning. This data-centric approach constructs the finetuning data from the original training corpus, and therefore keeps the model focused on the original domain and incurs no additional cost during training or inference. Among various interesting directions for future work, we would like to extend our techniques for addressing non-binary gender. Also, our current techniques work well in simple sentences when only a single individual is mentioned, such as "The doctor finished his work for the day". When multiple people are mentioned in a single sentence this method can produce sentence pairs where different entities get gender-swapped in the source and target. We plan to explore using coreference resolution systems to filter out incorrect sentence pairs in our data selection algorithm. | http://arxiv.org/abs/2311.16362v1 | {
"authors": [
"Ranjita Naik",
"Spencer Rarrick",
"Vishal Chowdhary"
],
"categories": [
"cs.CL"
],
"primary_category": "cs.CL",
"published": "20231127230301",
"title": "Reducing Gender Bias in Machine Translation through Counterfactual Data Generation"
} |
Bayesian Spectral Graph Denoising with Smoothness Prior Sam Leone1†, Xingzhi Sun2†, Michael Perlmutter3 and Smita Krishnaswamy1,2,4,5,6,7,8 1Program for Applied Mathematics, Yale University 2Department of Computer Science, Yale University 3Department of Mathematics, Boise State University 4Department of Genetics, Yale School of Medicine 5Department of Genetics, Yale School of Medicine 6Wu Tsai Institute, Yale University 7FAIR, Meta AI 8Computational Biology and Bioinformatics Program, Yale University †Equal Contribution January 14, 2024 ================================== Here we consider the problem of denoising features associated to complex data, modeled as signals on a graph, via a smoothness prior. This is motivated in part by settings such as single-cell RNA where the data is very high-dimensional, but its structure can be captured via an affinity graph. This allows us to utilize ideas from graph signal processing. In particular, we present algorithms for the cases where the signal is perturbed by Gaussian noise, dropout, and uniformly distributed noise. The signals are assumed to follow a prior distribution defined in the frequency domain which favors signals which are smooth across the edges of the graph. By pairing this prior distribution with our three models of noise generation, we propose Maximum A Posteriori (M.A.P.) estimates of the true signal in the presence of noisy data and provide algorithms for computing the M.A.P. Finally, we demonstrate the algorithms' ability to effectively restore signals from white noise on image data and from severe dropout in single-cell RNA sequence data. Keywords: denoising, graph signal processing, estimation. § INTRODUCTION Signals defined on modern large-scale, irregularly structured data sets are often corrupted by large amounts of noise such as measurement error or missing measurements. This motivates one to estimate the most likely true, uncorrupted values of the signal based on both the noisy observations and one's prior beliefs about the signal, which often take the form of a smoothness assumption. We shall present an approach for producing such Maximum A Posteriori (M.A.P.) estimates which utilizes tools from spectral graph theory. Our method is motivated by the explosion in recent decades of complex high-dimensional data, and associated signals, with very high noise levels. Such data may explicitly reside on a graph, e.g., social, energy, transportation, sensor, or neuronal networks <cit.>, or it may implicitly have relationships between entities from which a graph can be built, for example, physical and biological systems, text, image, time series <cit.>. With the graph (either existing or built from data), we can treat features as signals (functions) on the graph, and apply methods in graph signal processing, especially spectral graph theory. Typically, a well-behaved signal defined on the vertices will take similar values at vertices that are more connected.
This leads us to the prior that many functions of interest will be smooth on the graph, where the concept of smoothness can be quantified using tools from spectral graph theory and the eigendecomposition of the graph Laplacian. This intuition motivates the following approach. First, we assume a priori that the signal of interest is likely "fairly smooth" on the graph. Then, we model the noise of the observations. Finally, we produce an estimate of the true signal with the highest likelihood based on our prior beliefs and the observed measurements. Importantly, we note that the assumption that the signal is smooth is very mild and we do not assume that the signal (or the data on which it is defined) has any specific form. We provide details on how to implement this approach under several noise models and then demonstrate the effectiveness of our method on real-world and synthetic data sets. We also note that our method fills the gap of theoretical guarantees in the popular method MAGIC <cit.>, which it outperforms due to the specific modeling of noise types. § BACKGROUND & RELATED WORK §.§ An example for high-dimensional data We first motivate our method via an example of denoising features associated to complex high-dimensional data. Single-cell RNA sequence (scRNA-seq) provides high resolution information about gene expression and is of great interest in molecular biology, clinical studies, and biotechnology <cit.>. scRNA-seq data is high-dimensional, as it measures the expression of tens of thousands of genes on up to millions of cells <cit.>, and suffers from high noise levels due to multiple sources. Reducing this noise is a crucial step, which is needed prior to downstream analysis <cit.>. In single-cell RNA sequence data, one obtains the gene-expression counts for a variety of genes in each cell. Each cell can then be viewed as a high-dimensional vector (whose i-th coordinate corresponds to the amount of gene i expressed). It is a common practice to turn this data, consisting of high-dimensional vectors (cells), into a graph by placing edges between cells which are close together in high-dimensional space, and viewing the expression of each gene as a signal (function) defined on the cellular graph <cit.>. §.§ Graph Signal Processing with Bayesian inference Spectral graph theory concerns itself with the distribution of eigenvalues and eigenvectors of matrices associated with graphs. The set of eigenvalue-eigenvector pairs is known to uncover the geometric and algebraic properties of a graph. This is the observation that drives algorithms like spectral clustering and diffusion maps <cit.>, the main intuition being that low frequency eigenpairs capture low-resolution, key information about a graph's structure. Graph Signal Processing (GSP) utilizes tools from spectral graph theory to extend the Fourier transform from classical signal processing and time series analysis to the graph setting <cit.>. In the classical methods, signals can be denoised by mapping the signal to the Fourier domain, reducing the high-frequency component of the function, and inverting the Fourier transform to achieve a "smoother" version of the signal. In much the same way, GSP operates by representing graph signals in a basis of eigenvectors for the graph Laplacian (defined below), whose corresponding eigenvalues may be interpreted as (squared) frequencies, and then reducing the high-frequency components.
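To make this concrete before the formal definitions in the next subsections, here is a minimal NumPy sketch (with a small, assumed adjacency matrix) of the graph Fourier transform and of smoothing a noisy signal by attenuating its high-frequency components; the particular spectral response used is only illustrative.

```python
import numpy as np

W = np.array([[0., 1., 1., 0.],        # small example weighted adjacency matrix (assumed)
              [1., 0., 1., 0.],
              [1., 1., 0., 2.],
              [0., 0., 2., 0.]])
D = np.diag(W.sum(axis=1))             # diagonal degree matrix
L = D - W                              # combinatorial graph Laplacian

lam, psi = np.linalg.eigh(L)           # (squared) frequencies and Fourier modes
g = np.array([0.9, 1.1, 1.0, -0.8])    # a noisy graph signal
g_hat = psi.T @ g                      # graph Fourier transform of g

h = 1.0 / (1.0 + lam)                  # an illustrative low-pass spectral response
g_smooth = psi @ (h * g_hat)           # filtered signal, h(L) applied to g
```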
Filtering in this manner has been applied to use cases such as point cloud data, biological imaging, sensor networks, and more <cit.>. Bayesian inference is a fundamental method of parameter estimation. The typical form of this problem is that there is a random variable x drawn from some prior distribution and another random variable y whose distribution depends on x. The ambition of Bayesian estimation is, given only y, to estimate the underlying value of x using both prior information on x and the interaction between x and y. Notably, two important, nonstandard aspects of our method are: (1) we do not have any explicit prior on the data on which the signals are defined, but rather directly build the graph from the data (if it does not already exist) and treat it as a deterministic structure; (2) we do not assume the signal has any specific form, but rather use a mild prior of its smoothness on the graph. These distinctions free us from the limitations of Bayesian models caused by model misspecification <cit.>, and make our method generally applicable to a vast range of data sets regardless of the data distributions. §.§ MAGIC: Markov Affinity-based Graph Imputation of Cells MAGIC <cit.> is a commonly used method for denoising single-cell data. It is based on the idea that the high-dimensional data lies on a low-dimensional manifold, represented by a nearest neighbor graph. After building the graph, it uses data diffusion, which is a random walk on the graph, to denoise the data. It has been tremendously successful; however, it lacks a solid theoretical model. Our method fills this gap with GSP and Bayesian inference. Furthermore, by specifying cases of common noise models, we are able to adjust our model accordingly, allowing us to outperform MAGIC in these cases. §.§ Notation and Definitions Throughout, we shall let G = (V, E, w) denote a weighted, connected, and undirected graph with |V| = n and |E| = m. Without loss of generality, we will assume V = {1,…,n}. We shall refer to functions f : V → ℝ as graph signals. In a slight abuse of notation, we will not distinguish between f and the vector in ℝ^n whose a-th entry is f(a), 1 ≤ a ≤ n. We shall let W denote the weighted adjacency matrix and let D = diag(W1) denote the corresponding diagonal degree matrix. Given W and D, the combinatorial Laplacian is defined as L = D − W. It is well-known that L admits an orthonormal basis of eigenvectors ψ_i, with Lψ_i = λ_i ψ_i, 1 ≤ i ≤ n, where ψ_1 ∝ 1 and 0 = λ_1 < λ_2 ≤ … ≤ λ_n. It follows that L is a positive semi-definite matrix whose null space is equal to span{1}. One may compute that the quadratic form corresponding to L is given by f^⊤ L f = ∑_{(a,b)∈ E} w(a,b) (f(a) − f(b))^2. Thus, setting f = ψ_i, the λ_i are interpreted as (squared) frequencies, representing the rate at which ψ_i oscillates over the graph, and the ψ_i are interpreted as generalized Fourier modes, where f̂(λ_i) = ⟨ f, ψ_i ⟩ represents the portion of f at frequency λ_i. Since the ψ_i are an orthonormal basis, we have f = ∑_i=1^n f̂(λ_i) ψ_i. Therefore, for a real-valued function h, we can define a corresponding filter by h(L) f = ∑_i=1^n h(λ_i) f̂(λ_i) ψ_i. We shall let B denote the weighted m × n incidence matrix, where rows correspond to edges and columns to vertices, whose entries are given by B(e,a) = −B(e,b) = √(w(a,b)) if e = (a,b), and B(e,v) = 0 for all v ∉ {a,b} <cit.>. One may verify that the Laplacian can be factored as L = B^⊤ B. (Here, we implicitly assume an arbitrary, but fixed, ordering of each edge (a,b).
This arbitrary choice does not affect the identity L = B^⊤ B nor any of our analysis.) We shall let p(x) denote the probability distribution of a random variable x and shall let p(y|x) denote the conditional distribution of y given another random variable x. We shall make use of the fact that, by Bayes' theorem, p(x|y) ∝ p(x) p(y|x), where ∝ denotes proportionality and the implied constant depends on y. § METHODS Our goal is to estimate an unknown signal f ∈ ℝ^n based on an observation g ∈ ℝ^n, which we interpret as a noisy version of f under various settings. In each case, we will assume that an observed g is obtained from a corruption of a true signal f which lies within a corresponding admissibility class Ω_g. We shall then define the maximum a posteriori estimate of f to be the most likely value of f based on (i) the fact that g was observed and (ii) our a priori beliefs on f discussed in the following subsection. §.§ A Prior Distribution Based on Smoothness We define prior distributions on f̂(λ_i) for i = 2,…, n, assuming that each f̂(λ_i) follows the probability distribution p_κ(f̂(λ_i)) ∝ exp(−κ λ_i f̂(λ_i)^2), where κ is a fixed smoothing parameter. We further assume that the f̂(λ_i) are independent, which implies that, for any fixed value of f̂(λ_1), the probability distribution of f̂ satisfies p_κ(f̂) ∝ ∏_i=2^n exp(−κ λ_i f̂(λ_i)^2) = exp(−κ ∑_i=2^n λ_i f̂(λ_i)^2). We then give f the probability distribution defined by taking the inverse GFT of f̂. We note that since the ψ_i are an orthonormal eigenbasis and f = ∑_i=1^n f̂(λ_i) ψ_i, we have p_κ(f) ∝ exp(−κ ∑_i=2^n λ_i f̂(λ_i)^2) = exp(−κ f^⊤ L f). Therefore, we see that this probability distribution is defined so that the likelihood of f decreases with its variation across the graph, and κ acts as a parameter controlling the tolerance towards fluctuation. Notably, we do not assume any prior distribution on f̂(λ_1) (although in some cases f̂(λ_1) will be implicitly constrained by the admissibility class Ω_g). Therefore, our maximum a posteriori estimate is simply the most likely value of f based on the observation g and our prior beliefs about f̂(λ_2),…,f̂(λ_n). §.§ Gaussian Noise on the Graph We first consider the setting where each of the Fourier coefficients is corrupted by Gaussian noise, i.e., ĝ(λ_i) = f̂(λ_i) + z_i, 2 ≤ i ≤ n, where each z_i ∼ 𝒩(0,σ^2) is an independent normal random variable. We will further assume that the total noise g − f has zero mean, which motivates us to define the admissibility class Ω_g = { f : f̂(λ_1) = ĝ(λ_1)}. By expanding the conditional and a priori densities and utilizing the fact that, for a given g, we have p_κ(f | g) ∝ p_κ(f) p_κ(g|f), one may derive a maximum a posteriori estimate of f given g [Further details on the derivation of Theorem <ref>, and all of our other theoretical results, are available at <https://arxiv.org/abs/2311.16378>]. [Gaussian Denoising] Let g be given, and let Ω_g = { f : f̂(λ_1) = ĝ(λ_1)}. As above, assume that ĝ(λ_i) = f̂(λ_i) + z_i, 2 ≤ i ≤ n, z_i ∼ 𝒩(0,σ^2), and that our prior beliefs on f are as described in Section <ref>. Then, the maximum a posteriori estimate of f given g is f_map = h(L) g, where h(L) is a filter as described in Section <ref> with h(λ_i) = 1/(1 + 2κσ^2 λ_i). Moreover, f_map can be computed, to within ϵ accuracy in the L-norm (‖f‖_L^2 = f^⊤ L f), in time 𝒪̃(m log(ϵ^-1) min{√(log(n)), √((2κσ^2 λ_max + 1)/(2κσ^2 λ_min + 1))}). We note that the minimum in the term describing the time complexity arises from the existence of two possible methods of computation, both of which are algorithms for solving linear systems in an implicit matrix with a condition number of β = (2κσ^2 λ_max + 1)/(2κσ^2 λ_min + 1).
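A minimal sketch of this estimate in practice: since h(λ_i) = 1/(1 + 2κσ^2 λ_i), computing f_map = h(L)g is equivalent to solving the linear system (I + 2κσ^2 L) f = g, which automatically leaves the λ_1 = 0 component unchanged as required by Ω_g. The conjugate gradient call below is one of the two solver options discussed next; the helper name is illustrative, and L is assumed to be a SciPy sparse matrix.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def gaussian_map_denoise(L, g, tau):
    """Solve (I + tau * L) f = g, with tau = 2 * kappa * sigma^2."""
    n = L.shape[0]
    A = sp.identity(n, format="csr") + tau * L
    f_map, info = cg(A, g)   # conjugate gradients; A is symmetric positive definite
    assert info == 0         # 0 means the solver converged
    return f_map
```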
When 2κσ^2 is small, β is small and the conjugate gradient algorithm will terminate rapidly. Alternatively, when β is large, one may use the solver from <cit.>, which requires 𝒪̃(m log(ϵ^-1) √(log(n))) time. In practice, σ^2 and κ are generally unknown. However, as the filter depends only on the product 2κσ^2, it suffices to estimate this quantity, which we denote by τ. We propose a method of moments estimator which calculates the expectations of g^⊤ L g and g^⊤ L^2 g in terms of σ and κ and backsolves using the empirical values: τ ≈ (tr(L) g^⊤ L g − (n−1)(Lg)^⊤(Lg)) / (tr(L)(Lg)^⊤(Lg) − tr(L^2) g^⊤ L g). Note that, by the handshake lemma, tr(L) = ∑_{a ∈ V} d(a) = 2 ∑_{(a,b) ∈ E} w(a,b). Furthermore, tr(L^2) = ∑_{a ∈ V} (d(a)^2 + ∑_{b : (a,b) ∈ E} w(a,b)^2), and so both of these quantities can be calculated in 𝒪(m) time. Alternatively, we may regard τ as a smoothing parameter to be tuned, rather than a quantity needing estimation. We note that this filter may be viewed as a form of Tikhonov regularization <cit.>. §.§ Uniformly Distributed Noise Next, we consider the case when the noise is a random uniform scaling in the vertex domain: g(a) = u(a) f(a), where each u(a) ∼ Unif[0,1] is an independent uniform random variable. In this case, since 0 ≤ u(a) ≤ 1, we set the admissibility class Ω_g = { f : |f(a)| ≥ |g(a)|, sign(f(a)) = sign(g(a)), ∀ a ∈ V}. For such a g, one may compute that the a posteriori likelihood of f ∈ Ω_g given g is p_κ(f|g) ∝ p_κ(f) ∏_{a ∈ V} 1/|f(a)|. We will maximize the a posteriori likelihood by minimizing the negative log likelihood, which, using basic properties of the logarithm, leads us to the optimization problem min_{f ∈ Ω_g} ℒ(f), ℒ(f) = κ f^⊤ L f + ∑_{a ∈ V} log |f(a)|. In order to (approximately) solve this problem, we adopt a constrained Convex-Concave Procedure (CCP) <cit.>. The CCP operates by splitting a function of the form f(x) = f_concave(x) + f_convex(x) and approximating the concave portion linearly about the current solution; the relaxed problem is convex and can be solved more efficiently. The procedure is repeated until convergence, and it is known to be a descent algorithm. Applied to this particular optimization, the CCP update of f^{t+1} from f^t is as follows: f^{t+1} = argmin_{f ∈ Ω_g} κ f^⊤ L f + ∑_{a ∈ V} f(a)/|f^t(a)|. We remark that f^{t+1} can be computed as a quadratic program and that the update provides a descent algorithm, ℒ(f^{t+1}) ≤ ℒ(f^t). This is because the loss function is a quadratic function of f and the feasible region Ω_g is a convex polyhedron. §.§ Partial Observations & Bernoulli Dropout In our final two models, we consider two settings where the noise behaves differently at different vertices. We assume that there is some (possibly unknown) set S ⊆ V where f(a) is exactly equal to g(a) for all a ∈ S. We make no assumption regarding the relationship between f(a) and g(a) for a ∉ S. This leads us to define the admissibility class Ω_g = { f : f(a) = g(a) for all a ∈ S }. We consider two practically useful variations of this problem: * Basic Interpolation: The set S is known. * Bernoulli Dropout: There is a "set of suspicion" ζ where we are unsure whether a ∈ S or a ∈ S^c. There is also a (possibly empty) set ζ^c for which the observer is certain of their observations (i.e., we know ζ^c ⊆ S). For each a ∈ ζ, we assume that a is corrupted (i.e., a ∉ S) with probability p and that a ∈ S with probability 1−p. In the first scenario, the maximum a posteriori estimate of f is the most likely f that is equal to g over the observation set S: f_map = argmax_{f ∈ Ω_g} p_κ(f).
Because of the monotonicity of the exponential function, this is equivalent to computing min_f ∈Ω_^⊤. This problem was studied in <cit.>, which proved the following result. Notably, <cit.> predated the development ofefficient solvers which could be used to compute _map as in (<ref>). However, now that such solvers exist<cit.>, one may use them to compute the proposed estimate to accuracy ϵ in 𝒪̃(m√(logn)log(ϵ^-1)) time, where ∂ S is the boundary of S, n = |S^c ∪∂ S| and m = |E(S^c,S^c) ∪ E(S^c, S)|, where E(S_1,S_2) denotes the set of edges going from S_1 ⊆ to S_2 ⊆. We also denote (A):=((a_1),(a_2),…,(a_k)), where {a_1,a_2,…,a_k}= A ⊆ V; (S_1,S_2) and (S_1,S_2) are the restrictions ofandto rows in S_1 and columns in S_2, respectively; (:,S_1) is the restriction ofto columns in S_1; ∀ S_1, S_2⊆ V.[Restated from <cit.>] Suppose S has at least one edge going to S^c. Then there exists a unique solution to min_∈Ω^⊤. The interpolation ofto S^c is given by_map(S^c) = (S^c,S^c)^-1(S^c,S)(S).Now, we consider the second, more challenging scenario where we observe a signalwhich is equal to , except in a set of suspicion ζ where, with probability p, g(a) is corrupted (i.e., not equal to f(a)). Based on the observationalone, there is no obvious way to identify the set S={a∈ V: (a)=(a)} (although we do know ζ^c⊆ S). However, wenote that forto take a given value, there must be (ζ)-(ζ)_0 corrupted entries and |ζ|-(ζ)-(ζ)_0 uncorrupted entries. Since each entry is corrupted with probability p, we model p_κ(|)∝ p^(ζ)-(ζ)_0(1-p)^|ζ| - (ζ)-(ζ)_0. Thus, for ∈_, the negative log of the posterior likelihoodofcan be estimated as: -log p_κ( | )= κ^⊤ +(ζ) - (ζ)_0 (log(1-p)-log(p) ) + constant.Therefore, if we define τ = κ^-1(log(1-p)-log(p)), we observe the MAP is produced by the following minimization problem: _map∈min_∈Ω_^⊤ + τ(ζ)-(ζ)_0.Note that the sign of τ is going to depend on log(1-p)-log(p) = log(1/p-1). When p < 1/2, then the penalty term τ is positive; otherwise, τ < 0 so we may assume all values have changed and estimateusing Theorem <ref>. When p < 1/2, the penalty term is positive. By breaking upinto (ζ) and (ζ^c), we may write the optimization as a regression problem:When p < 1/2, the _map is the min of the following sparse regression problem: _map(ζ)∈(ζ) + min_x{τx_0 + (:,ζ)x - (:,ζ^c)(ζ^c) + (:,ζ)(ζ)_2^2 }.And when p ≥ 1/2, the solution is given by, _map(ζ) = (ζ^c,ζ^c)^-1(ζ^c, ζ)(ζ^c). In general, the problem of ℓ_0-regularized regression is NP-Hard <cit.> . However, numerous approximation methods exist including branch and bound <cit.> and an ℓ_2-based greedy algorithm <cit.>. Alternatively, we may consider a relaxed version of the minimization problem in which the ℓ_0 penalty term is replaced with an ℓ_1 penalty term. In this case, the relaxed _map can be found via LASSO regression <cit.>, for which many efficient algorithms exist. Finally, we draw special attention to the “no-trust" case where ζ =, i.e. we are skeptical of all observations. Then the optimization can be written more simply:_map(ζ)∈(ζ) + min_xx-g_2^2 + τx_0The benefit of the no-trust estimate is that it makes few assumptions about the nature of the noise and does not require the user to come up with ζ. § EXPERIMENTS & APPLICATIONS§.§ Gaussian Noise on an ImageWe first consider1000 images belonging to the CIFAR-10 data set modeled as signals on 32 × 32 grid graph ; we use the convention of treating each pixel as a vertex connected to adjacent pixels. Importantly, we note that images are not our primary motivation. 
We include this example primarily to allow visualization of our method before proceeding to more complex graphs. For a fixed image, we add Gaussian noise with different variances σ^2. We then apply the filter proposed in Theorem <ref> and consider the ℓ_2 norm between the restored signal and the ground truth. We compare to two other algorithms: local averaging and weighted nuclear norm minimization. For the local averaging, we repeatedly set the value of a vertex to be the average of its neighbors for some number of iterations t. Note that this is equivalent to applying the powered diffusion operator <cit.> (or equivalently the random walk matrix) to the noisy signal g. The nuclear norm minimization based estimate is parameterized by τ and is given by the solution to min_f (1/2) f - g ^2 + τ f _⋆, where f _⋆ is the nuclear norm of f viewed as a matrix. The penalty τ corresponds to a convex relaxation of a low-rank penalty and is designed with the assumption that the noise lives on the excess left and right singular vectors of the image. For the spectral estimate, we use the method of moments estimate for 2κσ^2 given by Equation <ref>. We then calculate, for every image, the restored signal using each possible t and τ. The average percent error is provided in Table <ref>. We see that in the high-noise setting, the spectral denoising algorithm outperforms both local averaging and the nuclear norm estimate as a consequence of our low-frequency prior.
§.§ High Frequency Preservation: Comparison to MAGIC
We revisit MAGIC under our proposed framework. MAGIC treats our prior assumption that the true signal likely lies in the low frequencies as a fact, rather than as a probabilistic statement. It can be interpreted as choosing h(λ) in advance to be equal to h(λ) = (1-λ/2)^t (where t is a tuned parameter), rather than finding the optimal filter by combining our prior beliefs with the observed signal. To illustrate the advantages of our method over MAGIC, we conduct a comparison using Bernoulli dropout. We generate a set of C = 5 cluster centers in two-dimensional space. Around each, we generate m = 200 points. We construct an affinity-based graph with 1000 vertices. We then consider low-frequency and high-frequency signals. Low-frequency signals vary between clusters, while high-frequency signals vary within clusters. Finally, we randomly set a proportion p of the observations to zero for different values of p and apply each algorithm. Table 3 examines the resulting correlations between estimated and ground truth signals. Our algorithm consistently outperforms MAGIC, and the effect is most notable for high-frequency signals. This can be explained, at a high level, by the fact that the powered diffusion operator <cit.> rapidly depresses high-frequency information, which our algorithm is better able to preserve.
§.§ Denoising Simulated Single-cell Data
We next apply our method to single-cell RNA sequencing (scRNA-seq) data. scRNA-seq data involves counting mRNA molecules in each cell, which is prone to two types of noise that we test our method's ability to remove (a small simulation sketch follows the list):
* Bernoulli dropout. Because of the small number of mRNA molecules in the cell, there can be Bernoulli dropouts when mRNAs are present but not captured by the experimental equipment <cit.>.
* Uniform noise. For a given gene, considering the fact that the failure of mRNA capture does not happen for all the mRNAs, but only for a percentage, we model the noise as uniform: the counts are randomly reduced by a uniformly distributed percentage <cit.>.
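To make the two corruption models concrete, here is a small simulation sketch (our own; the array names, sizes, and noise levels are illustrative and not taken from the paper) that applies Bernoulli dropout and uniform-scaling noise to a toy cells-by-genes count matrix.

```python
# Sketch (ours): simulating the two noise models above on a cells x genes
# count matrix `X_true`. Names and parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def bernoulli_dropout(X_true, p=0.3):
    """Each entry is zeroed out (not captured) independently with probability p."""
    mask = rng.random(X_true.shape) >= p      # True where the entry survives
    return X_true * mask

def uniform_scaling(X_true):
    """Each entry is reduced by a uniformly distributed percentage, g = u * f."""
    u = rng.random(X_true.shape)              # u ~ Unif[0, 1]
    return X_true * u

X_true = rng.poisson(lam=5.0, size=(200, 50)).astype(float)  # toy ground truth
X_dropout = bernoulli_dropout(X_true, p=0.3)
X_uniform = uniform_scaling(X_true)
```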
From the data matrix, we build a nearest neighbor graph <cit.> where each vertex is a cell (modeled as a row vector of gene counts). A column of the matrix (a gene's counts on all the cells) is considered a signal on the graph that we can denoise with our models. By applying the models to all the columns, we obtain a matrix of denoised data. We compare the denoising performance of our method with four existing methods. On top of the ground truth, we add different types of noise and then compare the performance of our method with MAGIC and other denoising methods: low-pass and high-pass band-limited filters defined w.r.t. the graph Laplacian, and local averaging, which is a 1-step random walk on the graph using the row-normalized adjacency matrix. We compute the relative error of the denoised signals with respect to the ground truth. In order to assess the efficacy of our method, we use the bulk gene expression data of C. elegans containing 164 worms and 2448 genes <cit.> to simulate the ground truth single-cell data, because it does not exhibit the zero-inflation present in noisy scRNA-seq data. As shown in Table <ref>, we are better able to recover the true signal than the competing methods.
§.§ Bernoulli Noise on EHR Data
Electronic Health Record (EHR) data are another noisy real-world data source of interest. The amount of EHR data has grown exponentially in recent decades and has spurred extensive research interest in clinical applications and computational methods <cit.>. EHR data contain repeated measurements of patients, such as vitals, with severe missingness because the measurements are not taken on a regular time basis; this missingness can be modeled with Bernoulli dropout. We apply our Bernoulli denoising to the vital variables of a patient to demonstrate its ability to impute the missing entries. We treat each time point as a vertex and build a nearest-neighbor graph where edges connect nearby times, on the principle that measurements close in time should be similar. Then, we treat each variable's measurements as a signal on that graph and apply our Bernoulli model. We obtain the vital signals of a patient during 140 days of hospital stay from the MIMIC-IV data set <cit.>. The data is preprocessed using kernel smoothing to simulate a plausible ground truth. Then Bernoulli dropout is applied to the data, followed by the Bernoulli model for denoising. Figure <ref> visualizes the ground truth, the noisy signal, and the denoised signal. Our model recovers the true signals well.
§.§ Analysis of Bernoulli Noise on Mouse Myeloid Data
In this experiment, we consider levels of gene expression over a set of cells in mouse bone marrow. This data set suffers from severe dropout, with 82% of cell-gene pairs equaling 0. In order to impute missing values, we first generate a data graph based on affinities between points in Euclidean space. We then apply Algorithm <ref> in order to estimate the true gene counts. CD14 is a known gene marker for differentiation of myeloid cells <cit.>. Furthermore, CD34 and FcgR are known surface markers for separating myeloid progenitors into CMP, MEP, and GMP. Such canonical markers are typically lowly expressed. However, because of the sparsity of the original data, it is difficult to capture trends among these genes. 98.7%, 72.0%, and 73.7% of monocytes do not express CD14, CD34, and FcgR, respectively. But patterns emerge upon imputation. We compare the gene correlations which emerge from MAGIC and from our algorithm, as well as from the raw data; a sketch of this kind of comparison is given below.
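The following sketch (ours, with hypothetical names) illustrates the kind of before/after correlation comparison described above: it builds a kNN graph over cells, denoises every gene column, and then compares gene-gene correlations. For simplicity it uses the Gaussian-noise filter from earlier as a stand-in denoiser; the Bernoulli-dropout estimator could be substituted.

```python
# Sketch (ours): kNN cell graph -> column-wise graph denoising -> gene-gene
# correlations before and after. Parameter values and names are illustrative.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve
from sklearn.neighbors import kneighbors_graph

def denoise_columns(X, tau=1.0, k=15):
    """Apply the spectral filter (I + tau*L)^{-1} to every gene column of X."""
    A = kneighbors_graph(X, n_neighbors=k, mode="connectivity")
    W = 0.5 * (A + A.T)                              # symmetrize the kNN graph
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W
    M = (sp.identity(X.shape[0]) + tau * L).tocsc()
    return np.column_stack([spsolve(M, X[:, j]) for j in range(X.shape[1])])

def gene_correlation(X, i, j):
    """Pearson correlation between gene columns i and j across cells."""
    return np.corrcoef(X[:, i], X[:, j])[0, 1]

# X_raw: cells x genes matrix; i, j: indices of two marker genes of interest.
# print(gene_correlation(X_raw, i, j), gene_correlation(denoise_columns(X_raw), i, j))
```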
For example, the correlation between CD14 and CD34 is 0.03 without imputation versus 0.755 with imputation. For CD14 and FcgR, the corresponding values are 0.11 and 0.765. In general, the average correlation between all pairs of distinct genes rises from 0.44 to 0.71. Furthermore, we create another data graph based on the imputed values to compare graph connectivity and, in turn, the separation of cells into distinct modules. In particular, the Fiedler value is 0.031 for the original graph and 0.00297 for the modified graph. This lower Fiedler value indicates lower connectivity and thus greater differentiation of cells into distinct modes.
§ CONCLUSION
We have introduced a method that denoises high-dimensional data by building a graph, treating the features as signals on the graph, and performing M.A.P. estimation to recover the true features as denoised signals. We rely only on a mild prior of smoothness on the graph, making our model general and applicable to a vast variety of data modalities. We produce estimators and efficient algorithms for three types of noise common in real data: Gaussian, uniform-scaling, and Bernoulli dropout. Our model outperforms MAGIC and other methods, thanks to the explicit modeling of the noise.
§.§ On the Rigor of Statistical Formalizations
We first remark on an unconventional use of Bayesian inference. Consider the following inference problem: f is generated from a prior distribution with support over Ω, and g is generated from a distribution which nontrivially depends on f. From g, we would like to determine f. Traditionally, the support Ω does not depend on g. However, we find it practical for our analysis to assume that it does. The main reason that this device is necessary is that exp(-κ f^⊤ L f) ∉ ℒ^1(ℝ^n, ℬ(ℝ^n), λ^n) (where λ^n denotes the n-dimensional Lebesgue measure). For instance, in the analysis of the Gaussian M.A.P., we elected to let Ω_g be the set of those f whose mean agrees with that of the observed g. We remark that we could have also supposed that the mean of f is random with an unknown distribution (or even another normal distribution), but that the mean of g will always be that of f. In this case, the corresponding M.A.P. would coincide with the one that we derive. However, this assumption only complicates the analysis. Furthermore, this method would not generalize well to the statistical model of uniform noise: it is not in general true that assuming f(λ_1) = g(λ_1) would yield the same admissible region Ω_g, and so it is simpler to let Ω depend on the observed value. Yet another brute-force approach would be to assume that f belongs to some ball in ℝ^n of sufficiently large radius to contain all reasonable solutions. It is a matter of both theoretical and practical interest to determine whether there is in fact a distribution with support ℝ^n and density p_κ. It is desirable that, in such a distribution, we may determine the expected value of random variables X that are measurable with respect to the map f ↦ f(λ_i) for i = 2 … n; in other words, functions of the GFT over the non-mean frequencies have expectations. Indeed, this is the case, and the result is captured by Theorem <ref>. We also make note of a somewhat hazy definition of the statistical model of Bernoulli dropout. Recall that we insisted a vertex a belongs to S^c with probability p and S with probability 1-p, but did not specify how g(a) is generated from f(a) when it is the case that a ∈ S^c.
The most simple statistical justification for this would be to simply say that the distribution of (a) when a ∈ S is deterministic, but unknown.Let p_κ() be proportional to exp(-κ^⊤) in Ω and p_κ() = 0 in Ω^c. Suppose that Ω = Ω + span{1}. Then there exists a measure space (Ω, ℱ, ℙ such that, 0em* ℙ is a probability measure. * The restricted Fourier transform ↦fn(λ_i) is ℱ-measurable for i = 2 … n.* When μ is the n-1 dimensional Lebesgue measure acting on the vector space span{1}^⊤, then there is a Radon-Nikodym derivative dℙ/dμ() = p_κ(). The main reason such a probability distribution may be unusual is this:has a null space. And therefore, ∫_ℝ^n p_κ(x)dx = + ∞There are two main ways to combat this. The first way described is to simply, when possible, avoid constructing a probability distribution on all of ℝ^n and over a manifold (or manifold with boundary) M instead. The most natural M is one which is “bounded” in the direction of the constant vectors: there exists constants B_0, B_1 such that B_0 ≤1^⊤ x ≤ B_1 for all x ∈ M. The integral is then bounded. For the simplest proof, we can write down a rectangular manifold R of the form [-B_0,B_1]_0 + ∑_i=2^n L_i _i that contains M (each L_i is some interval, possibly equal to ℝ). Then, ∫_M p_κ(x)dx ≤∫_Rp_κ(x)dx p_κ≥ 0 = ∫_R∏_i=2^n exp(- κ∑_i=1^n λ_i x(λ_i)^2)dx= ∫_B_0^B_1(∏_i=2^n ∫_L_iexp(-κ∑_i=2^n λ_i u_i^2)_someβ_idu_i) du_1change of varsu_i = x(λ_i) = ∫_B_0^B_1β_2 …β_n du_1= (B_1-B_0)β_1…β_n The exactly constants involved are not particularly relevant, although for many applications, they can be easily calculated from the normalizing constant of some normal distribution. However, there is still the question of what to do when the manifold M is not of this form. In particular, we consider the case of M= ℝ^n.The trick to constructing our distribution is the realization that integration along the direction of 1 is the obstacle to a traditional density. Therefore, it is more natural to limit ourselves to consideration of only full lines parallel to 1. Specifically, for a set S, we define the following set E(S):E(S) = { x + s : s ∈ S, x ∈span{1}}ℓ(S) can be most naturally understood as the result of “extruding” S in the ones direction. Let ℱ_0 be the usual Borel σ-algebra generated by open sets of Ω. It can be shown by a generating class argument that whenever F ∈ℱ_0, then E(F) ∈ℱ_0 as well. Thus, ℱ := {E(F) : F ∈ℱ_0 } is a sub-σ algebra of ℱ. We build the desired probability measure ℙ from a simpler measure ℚ. Define for any set G ∈ℱ_0 the measure ℚ such that,ℚ{G} = 1/Z_κ∫_Gexp(-κ x^⊤ ( + 11^⊤ )x )dxWe let Z_κ be whatever normalizing constant is necessary. Because + 11^⊤ is now positive definite, ℚ{G} is a standard probability distribution with a density with respect to the n dimensional Lebesgue measure. We then simply let ℙ be the restriction of ℚ to ℱ. By assumption, any set G can be decomposed into the Minkowski sum of two orthogonal sets G_0 + G_1, where G_0 = span{1}, G_1 ∈span{1}^⊥. When this is the case, ℙ{G}= ℚ{G} by definition of ℙ = 1/Z_κ∫_x ∈ G_1∫_y ∈ G_0exp(-κ (x + y)^⊤ ( + 11^⊤) (x + y) )dydxℚ's density function =1/Z_κ∫_G_1∫_-∞^∞exp(-κ (x + t 1)^⊤ ( + 11^⊤) (x + t 1) )dtdxchange of variablesy = t1 = 1/Z_κ∫_G_1∫_-∞^∞exp(-κ x^⊤ x)exp( - κ (nt)^2 )dtdx t1^⊤11^⊤1 t = nt^2 ∝∫_G_1exp(-κ x^⊤ x) dxNote that when we integrate x over G_1, we are implicitly using the n-1 dimensional Lebesgue measure acting on the vector space span{1}^⊤. The Radon-Nikdoym derivative is exp(-κ x^⊤ x) multiplied by the necessary normalizing constant. 
It now remains only to prove that the Fourier transform is measurable, which follows from a simple argument. Fix i = 2 … n and let X_i : ↦f(λ_i). Note that the Fourier Transform is a linear function and is clearly ℱ_0 measurable. Thus, for any set S ∈ℬ(ℝ^n), X_i^-1(S) ∈ℱ_0. Note also that the fourier transform of span{1} in frequency λ_i is 0. Thus, X_i(E(X_i^-1(S)) = X_i(X_i^-1(S) = S. And since E(X_i^-1(S)) ∈ℱ and S and i are arbitrary, the measurability property holds.§.§ Proof of Theorem <ref>[Gaussian Denoising] Suppose we observe some valueand let Ω_ = { : f(λ_1) =(λ_1)}. Suppose also thatis generated fromby additive Gaussian noise with variance σ^2 in frequencies λ_2…λ_n and thathas density p_κ in Ω_. Then if we let h(λ_i) = 1/1 + 2κσ^2 λ_i be the filter function, the the maximum a posteriori likelihood ofgivenis, _map = h()Furthermore, the MAP can be computed in time 𝒪̃(m log(ϵ^-1) min{√(log(n)), √(λ_max + 1/2κσ^2/λ_min + 1/2κσ^2)}) to ϵ accuracy in the -norm. We first prove that the MAP is produced by the given filter. Notice first that the following expression holds for the likelihood ofgiven , up to a multiplicative constant:p( | )∝∏_i=2^n exp(- 1/2σ^2 ((λ_i) - f(λ_i))^2)independence= exp(- 1/2σ^2∑_i=2^n ((λ_i) - f(λ_i))^2)log properties = exp(- 1/2σ^2∑_i=1^n ((λ_i) - f(λ_i))^2)(λ_1) = f(λ_1) when ∈Ω_ = exp( - 1/2σ^2f-^2 ) = exp( -1/2σ^2) -^2Parseval's IdentityWe can multiply this by the a priori probability ofto obtain the a posteriori likelihood p_κ( | ):p_κ( | )∝ p_κ() exp( - -^2 )= exp( - κ^⊤ - 1/2σ^2-^2 )Of course, this relation holds among thosewhose means are the same as . To remedy this, we simply parameterize the set ofwho share a mean with . Let Π be the projection matrix onto span{1}^⊥. We can write = _1 + _2 where _2 = a1 and a = 1/n1^⊤. Then _2 = ( - Π). In this case, ^⊤ = _1^⊤_1. Furthermore, - ^2 = _1 - Π^2. Thus, by this observation and monotonicity of the exponent, it suffices to minimize the following loss function in _1, ℒ(_1) =1/2σ^2_1 - Π^2 + κ_1^⊤_1Taking a gradient, ∇ℒ = 1/σ^2(_1 - Π) + 2κ_1 = (1/σ^2 +2κ) _1 - 1/σ^2ΠSetting the gradient equal to zero, we obtain the necessary expression for _1:_1 =( +2κσ^2 )^-1ΠIf we plug in our value for _2 and recognize that the operation (1/σ^2 +2κ)^-1 is mean preserving, then we finally obtain the following expression:=( + 2κσ^2)^-1Π +( - Π)=( +2κσ^2 )^-1We now prove that this is produced by the given filter. Let = ΨΛΨ^⊤ be the eigendecomposition of . Then we write,( +2κσ^2 )^-1= (ΨΨ^⊤ + 2κσ^2 ΨΛΨ^⊤)^-1 ΨΨ^⊤ == (Ψ( + 2κσ^2 Λ)Ψ^⊤)^-1 = (Ψ^⊤)^-1( + 2κσ^2 Λ)^-1Ψ^-1 = Ψdiag(1 + 2κσ^2λ_i : i ∈ 1 … n )^-1Ψ^⊤ = Ψdiag(1/1 + 2κσ^2λ_i : i ∈ 1 … n) Ψ^⊤ = Ψ h(Λ)Ψ^⊤Which demonstrates that the claimed filter. To prove that the filter can be computed efficiently, we make an appeal to the solver of <cit.>. Note that to compute= ( +2κσ^2 )^-1It suffices to solve, ( +2κσ^2 ) = This can be done in time Õ(m√(log(n))) because +2κσ^2 belongs to the class SDDM_0. This is true because i. rescaling a Laplacian by a positive constant is still a Laplacian and ii. matrices of the form Laplacian + Diagonal belong to SDDM_0.§.§ The Gaussian Parameter EstimateSuppose thatis generated in the way that is described. For brevity, let a = 1/√(n)1^⊤ be the component ofin _1. Note that when ∈Ω_, then the distribution of f(λ_i) is 𝒩(0, 1/2κλ_i). Furthermore, the distribution of f(λ-i) - (λ_i) is 𝒩(0,σ^2) (it is known that if a signal is 𝒩(0,σ^2) distributed, then its distribution is preserved under rotations). 
And thus, the distribution of (λ_i) is 𝒩(0, σ^2 + 1/2κλ_i). In this case, we can compute a moment ofwith respect to . 𝔼[^⊤] = 𝔼[∑_i=2^nλ_i (λ_i)^2 ]= ∑_i=2^nλ_i 𝔼[(λ_i)^2]linearity of expectation = ∑_i=2^nλ_i ( σ^2 + 1/2κλ_i)normal distribution properties = ∑_i=2^n σ^2 λ_i + 1/2κ = σ^2 () + 1/2κ(n-1)Similarly, we compute again, 𝔼[^⊤^2 ] =∑_i=2^n λ_i^2( σ^2 + 1/2κλ_i) = ∑_i=2^n λ_i^2 σ^2+ λ_i/2κ = σ^2 (^2) + 1/2κ()The premise of our method of estimating τ by τ̂ is finding values of κ̂ and σ̂^̂2̂ for which these equations coincide with our observations and setting τ̂ = 2κ̂σ̂^2. Depending on the nature of our observations, it could be by chance that σ̂^̂2̂ or κ̂ are nonpositive.§.§.§ Case 1Consider first the case when such a set of κ and σ^2 exist and are positive. To find κ and σ^2 amounts to merely solving the following 2 × 2 system of equations,[() (n-1);(^2)() ][ σ̂^2; (2κ̂)^-1 ] =[ ^⊤; ^⊤^2 ]Letdenote the above matrix. We use the standard formula of a 2 × 2 inverse: 1/(M)[ () -(n-1);-(^2) () ][ ^⊤; ^⊤^2 ]And so, σ̂^2 = ()^⊤ - (n-1)^⊤^2/() (2κ̂)^-1 = ()^⊤^2- (^2)^⊤/()Combining these,τ̂= 2κ̂σ̂^2=2()^⊤ - (n-1)^⊤^2/()·()/2( ()^⊤^2- (^2)^⊤) = ()^⊤ - (n-1)^⊤^2/()^⊤^2- (^2)^⊤ = (n-1)()^⊤- ()^⊤/(^2)^⊤ - ()()^⊤ §.§.§ Case 2In this case, rather than solving the equations exactly, we may attempt to solve the system as best as possible but with positive coefficients. This amounts to doing a change of variables x_1 = σ̂^2, x_2 = (2κ̂)^-1, computing,x_1^⋆, x_2^⋆ = min_(x,y) ≽ (0,0)[ x; y ]We would then compute σ̂^2, κ̂ from x_1^⋆, x_2^⋆. And again, we compute τ̂ from κ̂,σ̂^̂2̂.Thus, it remains to find the best x_1^⋆, x_2^⋆. To compute them, we use the following proposition about the behavior of such systems in ℝ^2:Suppose = [c_1 c_2] ∈ℝ^2 × 2 andb∈ℝ^2 both have all positive entries. Then, min_x≼ 0x - b = ^-1proj_convex-cone(c_1, c_2)(b)And if b∉convex-cone(c_1, c_2) then there exists an i ∈{1,2} for which, proj_convex-cone(c_1, c_2)(b) =proj_span(c_i)(b) = c_i^⊤b_i/c_i^2c_iWe first begin with the observation that the set C := {x : x≽ 0} is, by definition, convex-cone(c_1, c_2). Thus, the vector in this set closest to b will be precisely the projection of b onto C; denote this projection by y. To reproduce the argument x from which y = x, we simply compute ^-1x. It now remains to prove the second part of the claim. The first realization is that when b is not in C, then its projection onto C will belong to the boundary of C, where the boundary of C is comprised of {αc_i : α≥ 0, i ∈{1,2}}. Therefore, there will exist an i for which y = αc_i for some α≥ 0. We claim that not only is this the case, but for this i, y is the projection of b onto the span of c_i. This is true for the following reason: because c_i and y both belong to the nonnegative quadrant, the angle between c_i and y is less than 90^o. Therefore, c_i^⊤b≥ 0. Equivalently, α = c_i^⊤b / c_i^2 ≥ 0. This proves the desired result. We point out that the particular i can be determined by either i. calculating both such projections and determining which has smaller distance to b or ii. sorting b, c_1, and c_2 by increasing angle they make with the positive x-axis to determine which side of C it is that b lies on.§.§.§ Estimation Based on Many Signals & CorrectnessSuppose instead of one signal we have k independently generated signals _1 …_k. We may substitute ^⊤^2 and ^⊤ with 1/k∑_i=1^k _i ^2_i and 1/k∑_i=1^k _i _i, respectively. As the expected value of an i.i.d. 
average is the same as the expected value of each entry, the same analysis produces an additional estimate of τ:τ̂ = (n-1) (1/k∑_i=1^k _i ^2_i )- ()(1/k∑_i=1^k _i _i )/(^2)(1/k∑_i=1^k _i _i ) - ()(1/k∑_i=1^k _i ^2_i )We remark that, as a consequence of the Law of Large Numbers, as k →∞, then τ̂ converges almost surely to the true τ:(n-1) (1/k∑_i=1^k _i ^2_i )- ()(1/k∑_i=1^k _i _i )/(^2)(1/k∑_i=1^k _i _i ) - ()(1/k∑_i=1^k _i ^2_i ) →(n-1) (σ^2 (^2) + 1/2κ() )- ()(σ^2 () + 1/2κ(n-1) )/(^2)(σ^2 () + 1/2κ(n-1)) - ()(σ^2 (^2) + 1/2κ() ) = 2κσ^2 ((^2)(n-1) - ()^2) + (n-1)() - (n-1)()/2κσ^2 ((^2)() - (^2)()) + (^2)(n-1) - ()^2 = 2κσ^2 ((^2)(n-1) - ()^2)/((^2)(n-1) - ()^2) = τ§.§.§ CalculationWe make some final observations that will allow us to compute these observations relatively efficiently. First, we observe that () = ( -) = () - 0 is the total degree of the graph. Likewise, ^2 has on its ath diagonal entry the squared norm of the ath row of , the off diagonal terms contribute ∑_(a,b) ∈ Ew(a,b) and the diagonal term contributes (a)^2, so () = ∑_a((a)^2 + ∑_(a,b) ∈ Ew(a,b)^2). §.§ Additional PropertiesWe can also attempt to compute the distribution of the map estimate. Note, _map - = ∑_i=2^n_i (f_map(λ_i) - f(λ_i) ) = ∑_i=2^n _i 1/1 + 2κσ^2λ_i(λ_i) - f(λ_i))= ∑_i=2^n _i( 1/1 + 2κσ^2λ_i(f(λ_i) + z_i) - f(λ_i) ) = ∑_i=2^n_i(f(λ_i)_𝒩(0, (2κλ_i)^-1)(1/1 + 2κσ^2λ_i - 1) + z_i_𝒩(0,σ^2)1/1 + 2κσ^2λ_i) = ∑_i=2^n_i(f(λ_i)(1/1 + 2κσ^2λ_i - 1)_𝒩(0, (2κλ_i)^-1(1/1+2κσ^2λ_i-1)^2) + z_i(1/1 + 2κσ^2λ_i)_𝒩(0,σ^2/(2κσ^2λ_i + 1)^2)) = ∑_i=2^n _i( f(λ_i)(1/1 + 2κσ^2λ_i - 1) + z_i(1/1 + 2κσ^2λ_i)_𝒩(0,(2κλ_i)^-1(1/1+2κσ^2λ_i-1)^2 + σ^2/(2κσ^2λ_i + 1)^2))We then compute, (2κλ_i)^-1(1/1+2κσ^2λ_i-1)^2 + σ^2/(2κσ^2 + 1)^2= 1/2κλ_i( 2κσ^2λ_i/1 + 2κσ^2λ_i)^2 + σ^2/(2κσ^2 + 1)^2 = 1/(2κσ^2λ_i + 1)^2( (2κσ^2λ_i)^2/2κλ_i + σ^2 )= 1/(2κσ^2λ_i + 1)^2( 2κσ^4 λ_i + σ^2 )= σ^2 2κσ^2λ_i + 1/(2κσ^2λ_i + 1)^2 = σ^2/2κσ^2λ_i + 1Therefore, there exists coefficients c_2 … c_n each distributed 𝒩(0,σ^2/2κσ^2λ_i + 1). If we compile these coefficients into a vector c = (0,c_2 … c_n)_map- = ∑_i=2^n _ic_i = ΨcAs c follows a multivariate normal distribution, so does Ψc. It is mean 0 and has covariance matrix, Cov(Ψc)= ΨCov(c)Ψ^⊤ = Ψdiag(0,c_2 … c_n) Ψ^⊤ = Ψdiag(0, σ^2/2κσ^2λ_i + 1 : i ∈ 2… n ) Ψ^⊤ = σ^2 (Π + 2κσ^2 )^+From this expression, we can determine the behavior of the error for different values of σ and κ. When σ = 0, we recover the exact signal, and the covariance matrix goes to zero. On the other hand, when κ→ 0, the covariance matrix approaches σ^2 Π. This is because in the limit for such κ, we rely less and less on our prior information. In such a case, the map estimate is _map→, and the above formula is simply the covariance matrix of z =-.We might also wonder, what κ,σ^2 maximize the likelihood of ? Write, p_κ()∝∏_i=2^nexp(- (λ_i)^2/2(σ^2 + 1/2κλ_i))= exp( - ∑_i=2^n (λ_i)^2/2σ^2 + 1/κλ_i) = exp( - ∑_i=2^n (λ_i)^2/2σ^2 κλ_i+ 1 /κλ_i) = exp( - κ∑_i=2^n(λ_i)^2 λ_i /2σ^2 κλ_i+ 1)Let us do a change of variables. Let u = κ, v = 2σ^2 κ. Then the likelihood is monotonic in, u ∑_i=2^n (λ_i)^2 1/v + 1/λ_iTaking a derivative with respect to u, we find v satisfies,∑_i=2^n (λ_i)^2 1/v + 1/λ_i = 0And taking a v derivative, u ∑_i=2^n (λ_i)^2 (-1/(v + 1/λ_i)^2) = 0§.§ A Linear Time Denoising Algorithm on TreesWe note that the algorithm of <cit.> can be used to find in linear time the solution of SDDM matrices whose graph of nonzero entries is a tree. 
Note that this has an immediate application to time series data, if we choose to identify each of n values with a vertex on the path graph. Note that this is an improvement on many other smoothing methods, such as the Fast Fourier Transform, which generally require time 𝒪(n log(n)). We (re)prove the existence of a linear-time SDDM_0 solver for matrices whose graph of nonzero entries is a tree. Do note that this is completely equivalent to the partial Cholesky factorization proposed in <cit.>. Our contribution is not novel theory, but rather an algorithm which is elementary to program. We begin, however, with an observation motivated by linear algebra.§.§.§ Mathematical Preliminaries Define an (a,b,α) elimination matrix to be the matrix U_a,b,α =+ αχ_aχ_b^⊤. That is, U_a,b,α is the matrix equal to the identity, except U_a,b,α(a,b) = α. Furthermore, we define p(i) to be the index of the DFS predecessor of vertex i. Importantly, when vertex i is eliminated, p(i) is its unique neighbor. Note that p(i) ≥ i in the reverse order.Suppose the vertices of a tree T = (V,E) are put in reverse DFS order, starting from an arbitrary root s. Then when vertices 1… i - 1 are removed from T, i is a leaf of T ∖{1… i-1}.Consider DFS initiated with root s and a vertex v with reverse DFS order i. All descendents of v in the tree rooted at s have a higher DFS order than v and thus a lower reverse DFS order. Therefore, by the time v is considered, all its descendents will have been eliminated, and so it is a leaf of T ∖{1… i-1}.Letbe a diagonal matrix (not the usual degree matrix) and letbe the Laplacian of a tree. Let =+ be an SDDM_0 matrix. Consider a reverse DFS order 1… n of . Then there exists a sequence α_1 …α_n such that, with U = ∏_i=1^n-1 U_i,p(i),α_i such that U^⊤ U is a diagonal matrix. We will prove the claim by induction. First, suppose without loss of generality that the rows are already indexed by the reverse DFS order. So then we eliminate row / column 1, then 2, and so on. We will prove a slightly smaller claim, which will be the engine of our induction. Suppose _0 is a matrix which can be written as, _0 = [ _00;0 _0 + C_0;]Where _0 is some ℝ^i× i nonnegative diagonal matrix, _0 is a Laplacian matrix of some G, and C_0 is a nonnegative diagonal matrix in ℝ^(n-i) × (n-i). Suppose that the subgraph induced by _0 + C_0 has (without renumbering) vertex i+1 as a leaf. Then with, _1 = U_i+1,p(i+1),α^⊤_0 U_i+1,p(i+1),αα = w(i+1,p(i+1))/C_0(i+1,i+1) + w(i+1,p(i+1))_1 can be written as, _1 =[ _10;0 _1 + C_1;]Where _1 is a nonnegative diagonal matrix in ℝ^(i+1)×(i+1), B_1 is Laplacian matrix of G with i eliminated, and C_1 is a nonnegative diagonal matrix in ℝ^(n-(i+1)) × (n-(i+1)). Now consider the matrix U_i+1,p(i+1),α with α given as in the statement of the theorem. Recall that U_i+1,p(i+1),α is equal to the identity except its (i+1,p(i+1))th entry is α. Thus, for a matrix X, U_i+1,p(i+1),α^⊤X U_i+1,p(i+1),α= ( + αχ_i+1χ_p(i+1)^⊤)^⊤X ( + αχ_i+1χ_p(i+1)^⊤)= X + αXχ_i+1χ_p(i+1)^⊤ + αχ_p(i+1)χ_i+1^⊤X + α^2 χ_p(i+1)χ_i+1^⊤Xχ_i+1χ_p(i+1)^⊤ = X + αX(:,i+1)χ_p(i+1)^⊤ + αχ_p(i+1)X(i:,) + α^2 X(i+1,i+1) χ_p(i+1)χ_p(i+1)^⊤Thus, U_i+1,p(i+1),α^⊤X U_i+1,p(i+1),α is the same as X but with αX(i+1,:) added to column p(i+1), αX(:,i+1), added to row p(i+1), and α^2 X(i+1,i+1) added to the p(i+1),p(i+1)th entry. 
Applying this to X = _0,U_i+1,p(i+1),α^⊤_0U_i+1,p(i+1),α = U_i,p(i),α^⊤[_0 0 0 0 0; 0w(i+1,p(i+1)) + C_0(i,i) …-w(i+1,p(i+1)) …; … … … … …; 0-w(i+1,p(i+1)) … (p(i+1)) + C_0(p(i+1),p(i+1)) …; … … … … … ]U_i+1,p(i+1),α = [_0 0 0 0 0; 0w(i+1,p(i+1)) + C_0(i+1,i+1) … 0 …; … … … … …; 0 0 … (p(i+1)) - α w(i+1,p(i+1)) + C_0(p(i+1),p(i+1)) …; … … … … … ]It now simply remains to check that this matrix is of the stated form. Note first that the upper left block of this matrix is an (i+1) × (i+1) nonnegative diagonal matrix (because we've removed the off diagonal entries of row / column i+1). Furthermore, (p(i+1)) - α w(i+1,p(i+1))= (p(i+1)) - w(i+1,p(i+1)) + w(i+1,p(i+1)) α w(i+1,p(i+1))= (p(i+1)) - w(i+1,p(i+1)) + w(i+1,p(i+1)) (1-α)= (p(i+1)) - w(i+1,p(i+1)) + w(i+1,p(i+1))C_0(i+1,i+1)/w(i+1,p(i+1)) + C_0(i+1,i+1)Note that (p(i+1)) - w(i+1,p(i+1)) is the degree of p(i+1) in G with i+1 removed. Define C_1 to be the (n-(i+1)) × (n-(i+1)) submatrix of C_0 induced by removing column / row i but with C_1(p(i),p(i)) = C_0(p(i),p(i)) + w(i+1,p(i+1))C_0(i+1,i+1)/w(i+1,p(i+1)) + C_0(i+1,i+1). Clearly, C_1 is nonnegative and diagonal, and so the claim holds. We first note that since P_0^-1 = P_0^⊤ that, P_0^⊤_0 P_0 = [ _00;0 _0 + C_0;]Let P_2 be the permutation matrix such thatNow consider the matrix U_a,p(a),α with α given as in the statement of the theorem. Recall that U_a,p(a),α is equal to the identity except its (a,p(a))th entry is α. Thus, for a matrix X, U_a,p(a),α^⊤X U_a,p(a),α= ( + αχ_aχ_p(a)^⊤)^⊤X ( + αχ_aχ_p(a)^⊤)= X + αXχ_aχ_p(a)^⊤ + αχ_p(a)χ_a^⊤X + α^2 χ_p(a)χ_a^⊤Xχ_aχ_p(a)^⊤ = X + αX(:,a)χ_p(a)^⊤ + αχ_p(a)X(a:,) + α^2 X(a,a) χ_p(a)χ_p(a)^⊤Thus, U_a,p(a),α^⊤X U_a,p(a),α is the same as X but with αX(a,:) added to column p(a), αX(:,a), added to row p(a), and α^2 X(a,a) added to the p(a),p(a)th entry. Thus, note that with α = w(a,p(a))/C_0(a,a) + w(a,p(a)),U_a,p(a),α^⊤[w(a,p(a)) + C_0(a,a) - w(a,p(a)) 0; - w(a,p(a)) (p(a)) + C_0(p(a),p(a)) …; … … … ] U_a,p(a),α^⊤= [w(a,p(a)) + C_0(a,a) 0 0; 0 -α w(a,p(a)) +(p(a)) + C_0(p(a),p(a)) …; … … … ]We note that, (p(a)) - α w(a,p(a)) =(p(a)) - w(a,p(a)) + w(a,p(a))(1- α)= (p(a)) - w(a,p(a)) +C_0(a,a)w(a,p(a))/C_0(a,a) + w(a,p(a))Where (p(a)) - w(a,p(a)) yields the degree of p(a) once a is eliminated. We define C_2 to be the matrix Supposeis an n × n matrix of the form, = [ _00;0 _i-1 + ]Where _0 ∈ℝ^i-1 × i-1 and ∈ℝ^(n-i+1) × (n-i+1) are diagonal and _i-1 is the Laplacian matrix resulting from elimination of 1 … i-1 in T. Then there exists an α for which, U_i,n(i),α^⊤ U_i,n(i),α = And, = [ _00;0 _i + ]Where _0 is diagonal with dimension i,is diagonal with dimension n-i, and _i is the Laplacian arising from eliminating vertex i.The idea is relatively simple. For brevity, let _T_i-1(a) denote the degree of vertex a in T with vertices 1 … i-1 eliminated. First, expand _i-1+ as, _i-1+ = [w(i,p(i)) + (i,i) -w(i,p(i))…;- w(i,p(i)) w(i,p(i)) + _i(p(i)) + (p(i),p(i))…;……_i-1(n) + (n,n) ]And let α = w(i,p(i))/(i,i) + w(i,p(i)). Consider the action of,* Adding a multiple of α× row i to row p(i)* Then adding a multiple of α× column i to column p(i)The resulting of these operations is a matrix that looks like, [_0 …; … w(i,p(i)) + (i,i) 0 …; … 0 (i,i)w(i,p(i))/(i,i) + w(i,p(i) + _i(p(i) + (p(i),p(i)) …; … _i-1(n) + (n,n) ]Thus if we let, _̃0̃ = [ _00;0 w(i,p(i) + (i,i) ] = [ (i,i)w(i,p(i))/(i,i) + w(i,p(i)) + (p(i),p(i))0;0 (i+2:n, i+2:n) ]We find thattakes the desired form. Finally, observe that these operations can be stated in terms of U_i,p(i),α. 
The multiplication U_1,p(i),α^⊤ corresponds to adding α× row i to row p(i). Likewise, U_i,p(i) corresponds to the same action for the columns. Therefore, = U_i,p(i),α^⊤ U_i,p(i),αWhich proves the desired claim. With the main technical claim done, we simply repeat it inductively. The result is a sequence α_1 …α_n-1 such that, U_n-1,p(n-1),α_n-1^⊤…U_1,p(1),α_1^⊤( + ) U_1,p(1),α_1… U_n-1,p(n-1),α_n-1 = _final Note that _final is going to have a diagonal matrix on the submatrix induced by the first n-1 vertices. The lower right entry is going to be a Laplacian + a Diagonal matrix. But as this is simply a number, the matrix _final is simply diagonal. By letting U = U_1,p(1),α_1… U_n-1,p(n-1),α_n-1, we prove the desired claim.§.§.§ The AlgorithmWe now apply the above proposition to develop an algorithm. It is elementary to check that x = U^-1 U^⊤b solves the system. This leaves only two loose ends: i. efficiently computing Ub, U^⊤y and ii. storing . To address point (1), I point out that we don't want to ever compute all of U. Rather, we use the fact that U = ∏_i=1^n-1U_i,p(i),α_i and U^⊤ = ∏_i=n-1^1U_i,p(i),α_i^⊤. Observe that U_a,b,αx corresponds simply to adding αx[b] to x[a]. Likewise, U^⊤_a,b,α adds αx[a] to x[b]. To address point (2), reconsider the elimination proved in Proposition <ref>. Note that (i,i) is determined at precisely the step when vertex i is eliminated. While (i,i) does not change during this step, (p(i),p(i)) might. Therefore, we can keep track of the diagonal entries ofin a vector m which is computed in a dynamic program. Such a dynamic program may also compute the needed α's.With this in mind, we have an algorithm for solving SDDM_0 equations on matrices whose graph of nonzero entries is a tree: Remarks It is worth noting that this algorithm can be generalized modestly. In particular, if upon eliminating all degree one vertices and then eliminating all degree two vertices, no vertices remain. (Eliminating a degree two vertex b with neighbors a and c amounts to connecting a to c directly and adjusting the weight). <cit.> Given that no obviously practical graphs take this form and the Spielman-Teng solver can be used out of the box to solve them, we do not present the algorithm.§.§ The Bernoulli Model§.§.§ Harmonic Interpolation & its RuntimeIt is worth beginning with an explanation of the harmonic interpolation algorithm suggested by <cit.>. The authors consider, as we do, the situation in which a signalis known in a set S and interpolated to a new signaldefined over S^c such that the total energy ^⊤ is minimal. The authors state that the optimal (S^c) is given by (S^c,S^c)^-1(S^c,S)(S). [Restated from <cit.>] Suppose S has at least one edge going to S^c. Then there exists a unique solution to min_∈Ω^⊤. The interpolation ofto S^c is given by_map(S^c) = (S^c,S^c)^-1(S^c,S)(S).We emphasize that the interpolation problem does not depend on the entirety of S, but rather those vertices s ∈ S which is connected by an edge to some s̅∈ S^c; these vertices are precisely ∂ S. Therefore, it suffices to consider the subgraph H induced by the vertices S^c ∪∂ S, which has by definition n̂ vertices and m̂ edges. We remark that (S^c,S)(S) may be computed in time 𝒪(m̂), since for each a ∈ S^c,((S^c,S)(S))(a) = ∑_(a,b) ∈ E : b ∈ Sw(a,b)(b)By iterating over all such vertices a, we count once each edge e ∈ E(S^c,S). As this is no more than the total number of edges m̂ in H, this may be done in time 𝒪(m̂). We now consider the matrix (S^c, S^c). 
This can be written as a sum +, whereis the Laplacian of the graph induced by S^c (a subgraph of H) andis a diagonal matrix. This is a SDDM matrix with no more than 𝒪(m̂) entries. Applying the solver of <cit.>, we may solve the equation (S^c,S^c)(S^c) = (S^c,S)(S) in time 𝒪̃(m̂√(log(n̂))).§.§.§ Derivation of the M.A.P. EstimateThankfully, most of the computations involved in the Bernoulli model are fairly elementary. Suppose we have our “set of suspicion” ζ and an estimateof the true signal. In order forto be produced by , it needs to be that (ζ)-(ζ)_0 observations get sent to S^c, all independently and with probability p. Otherwise, (a) = (a) with probability 1-p for each of |ζ| - (ζ) - (ζ)_0 observations. The result is that the conditional likelihood ofgivenis,p^(ζ)-(ζ)_0(1-p)^|ζ| - (ζ)-(ζ)_0Multiplying this by the prior probability p_κ(), we obtain the conditional likelihood ofgiven : p_κ( | ) ∝exp(-κ^⊤)p^|(ζ)-(ζ)_0(1-p)^|ζ| - (ζ)-(ζ)_0And thus, log(p_κ(|)) = -κ^⊤ + (ζ)-(ζ)_0log(p) + (|ζ| - (ζ)-(ζ)_0)log(1-p) + constant = -κ^⊤ + (ζ)-(ζ)_0(log(p) - log(1-p)) + constant Thus, to maximize likelihood, we minimize,κ^⊤ - (ζ)-(ζ)_0(log(p) - log(1-p)) Which is equivalent to minimizing, ^⊤ + (ζ)-(ζ)_0log(1-p) - log(p)/κ_τ as definedAgain, the sign of τ is dictated by the sign of log(1-p)-log(p), which is positive when 1-p > p (i.e. p < 1/2), negative when 1-p < p (i.e. p > 1/2) and zero when p = 1/2. When this is the case, the ℓ_0 penalty term is actually negative. Thus, we benefit from insisting that (a) ∈ S^c, since that increases the conditional likelihood ofand allows us to varyover the smoothness term to optimality.§.§.§ The Language of Ridge RegressionWe stated an optimization for the Bernoulli model in terms of the incidence matrix. We make that formal now. First, we do assume the known property that = ^⊤. If we write this out in block notation, that means, [ (ζ,ζ) (ζ,ζ^c); (ζ^c,ζ) (ζ^c,ζ^c) ]== ^⊤ =[ (E,ζ)^⊤; (E,ζ^c)^⊤ ][ (E,ζ) (E,ζ^c) ] = [ (E,ζ)^⊤(E,ζ) (E,ζ)^⊤(E,ζ^c); (E,ζ^c)^⊤(E,ζ) (E,ζ^c)^⊤(E,ζ^c) ]We compare both of these blocks elementwise to equate outer products of the incidence matrix to submatarices of the Laplacian. Additionally,^⊤= [ (ζ)^⊤ (ζ^c)^⊤ ][ (ζ,ζ) (ζ,ζ^c); (ζ^c,ζ) (ζ^c,ζ^c) ][ (ζ); (ζ^c) ] = (ζ)^⊤(ζ,ζ)(ζ) + 2(ζ)(ζ,ζ^c)(ζ^c) + (ζ^c)(ζ^c,ζ^c)(ζ^c)= (ζ)^⊤(E,ζ)^⊤(E,ζ) (ζ) + 2(ζ) (E,ζ)^⊤(E,ζ^c)(ζ^c) + (ζ^c)(E,ζ^c)^⊤(E,ζ^c) (ζ^c)= ((E,ζ)(ζ)-(E,ζ^c)(ζ^c) )^⊤((E,ζ)(ζ)-(E,ζ^c)(ζ^c) ) = (E,ζ)(ζ)-(E,ζ^c)(ζ^c)_2^2Finally, we use the fact that for all valid , (ζ^c) =(ζ^c) since ζ^c ⊆ S. Combining all of these observations, it suffices to minimize, (E,ζ)(ζ)-(E,ζ^c)(ζ^c)_2^2 + τ(ζ)-(ζ)_0Among all (ζ) (and simply set (ζ^c) =(ζ^c). Note that a LASSO solver might prefer an ℓ_1 penalty term which uses the coefficients, and not their difference with another vector. For this reason, we also consider writing (ζ) = (ζ) + x for some “difference” variable x. In this case, we can compute the best difference: min_x(E,ζ)x + (E,ζ)(ζ) - (E,ζ)(ζ^c)_2^2 + τx_0And add the result back to (ζ). It is worth that any matrix B can be used such that B^⊤ B =; the square root ofis another logical choice. Indeed, this would size down the problem, but at much greater initial computational cost. An approximate approach for massive graphs may be to reduce the dimension of each feature using the JLT <cit.>.§.§ A Randomized Algorithm for Huge Graphs For truly massive problems, another heuristic might be to reduce the dimensionality of the problem using the Johnson-Lindenstrauss transform <cit.>. 
To start, fix an ϵ > 0. This would reduce the dimension of each feature to d = 𝒪(log m ϵ^-2) =𝒪(log n ϵ^-2) to approximate distances within a fraction ϵ. This will reduce a regression problem that looks like, Bx - z_2^2 + x_0 This will reduce the regression to one which looks like, B̃x - z̃_2^2 + x_0 Where B ∈ℝ^d × n, z̃∈ℝ^d. With high probability, B̃x = z is solvable (note it is solvable in the first place pre-projection). And because the system is overdetermined, a solution exists that activates d features. The result is that is that for this x_greedy, 0em* With high probability, (1-ϵ)x_greedy -z< x_greedy-z̃ < (1+ϵ) x_greedy-z. In other words, with high probability, * x_0 = dWhat this means§.§ The Update Rule of the CCP The algorithm of the CCP is to approximate a function of the form convex + concave by taking the first-order Taylor expansion of the concave portion; as the sum of a convex function and a linear function is convex, so is the new problem. In this case, we write, ℒ()= κ^⊤ + ∑_a ∈log |(x)|=ℒ_vex() + ℒ_cave()Note that, ∂/∂(a)ℒ_cave = 1/|(a)|Thus, a Taylor expansion of ℒ_cave about some center ^t is, ℒ_cave( ; ^t) = ℒ_cave(^t) + (-^t)^⊤∇ℒ_cave(^t)= ℒ_cave(^t) + ∑_i=1^n 1/|^t(a)|((a)-^t(a))= ∑_i=1^n (a)/|^t(a)| + constantThus, the function we try to minimize at each step of the iteration is, ℒ_vex()+ ℒ_cave( ; ^t) = κ^⊤ +∑_i=1^n (a)/|^t(a)| + constantOf course, it suffices to ignore the constant terms for the purpose of optimization. Thus, the iteration is as claimed, ^t+1∈min_Ω_κ^⊤ +∑_i=1^n (a)/|^t(a)|Because the CCP is generally a descent algorithm, this special case is a descent algorithm as well. It remains to explain why this is a Quadratic Program. Of course, the loss function is quadratic in , so it remains to discuss the feasible region. One might recall that the feasible region is Ω_ = { : 0 ≤(a) ≤(a)or (a) ≤(a) ≤ 0}. This is a rectangular set and thus falls within the framework of QPs.Finally, we give some discussion to when (a) = 0 exactly. This is a probability zero event according to our statistical model, and so there is some expectation that it doesn't occur. Alternatively, we can simply insist that (a) = 0 and optimize over the remaining terms.§ ADDITIONAL EXPERIMENTSWe validate the use of the CCP by a brief experimental comparison to projected gradient descent. To evaluate these models, we will run these algorithms on an artificial example using an image. First, we regard each color channel as a signal on a 50 × 50 grid graph. Then, we artificially generate the uniform noise, independently for each pixel, to create our signal . We then run, for each channel, the projected gradient algorithm as well as the Convex-Concave Procedure. For the initialization, we use a bit of chemistry: although the observed signalis of course feasible, we perturb it each coordinate by a small amount so that ^0 is strictly feasible (heuristically, this seems to eliminate spotting in the output). For each algorithm, we use a stopping criteria that the error change by no more than 10^-7×ℒ(^0) (we multiply by the initial loss to provide scale). Finally, a learning rate of γ = 1 is chosen for projected gradient.Notice that this requires knowing the ground truth signal, so this is questionable in practice. But for the purposes of experimentation, since that estimation is not the focus, this will do. Both algorithms were run in python in the Yale Zoo. In order to solve the QP <ref>, the package CVXOPT is used. 
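As an illustration of how one such QP can be set up, the following sketch (ours; it is not the authors' experiment code) performs a single CCP iteration for the uniform-noise model with CVXOPT, encoding the feasible region Ω_g through elementwise inequalities. Variable names and the dense-matrix setup are our own simplifications.

```python
# Sketch (ours): one CCP iteration for the uniform-noise model, written as the
# box-constrained QP described above and solved with CVXOPT.
import numpy as np
from cvxopt import matrix, solvers

def ccp_step(L, g, f_t, kappa):
    """One CCP update: minimize kappa*f'Lf + sum_a f(a)/|f_t(a)| over Omega_g."""
    P = matrix(2.0 * kappa * L)          # (1/2) f^T P f  =  kappa * f^T L f
    q = matrix(1.0 / np.abs(f_t))        # linearization of sum_a log|f(a)| at f_t
    # Omega_g: f(a) >= g(a) when g(a) >= 0, and f(a) <= g(a) when g(a) < 0.
    s = np.where(g >= 0, -1.0, 1.0)      # encode both cases as s*f <= s*g
    G = matrix(np.diag(s))
    h = matrix(s * g)
    sol = solvers.qp(P, q, G, h)
    return np.array(sol["x"]).ravel()
```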
Note first that, theoretically, no algorithm would necessarily outperform the other. In fact, unless projected gradient were implemented with exact line search, it is not a guaranteed descent algorithm in the non-convex case. The CCP, of course, is, but we cannot make a statement about the value achieved or the rate of convergence. Comparison We compare the two algorithms in terms of runtime (both in terms of number of steps and total time), rate of decay, and accuracy. For all three channels, the CCP converged within 10 iterations (impressive)! Each iteration took 1.63 seconds on average to complete. In contrast, projected gradient descent took 1950, 1425, and 985 iterations to meet the same convergence criteria. While the iterations for the CCP are more involved than those for gradient descent, the CCP is still faster overall. We find that the CCP took 15.6, 15.5, and 17.6 seconds to converge, while projected gradient required 75.9, 54.5, and 37.5 seconds. What is much more interesting is the rate of decay. From figure <ref>, it is clear that the CCPnot only converges faster overall to itsfinal value, but it does so incredibly quickly compared to projected gradient, especially so for the early iterations. However, we also see that gradient descent is steadily decreasing at what appears to be an exponential rate. There is of course nothing inherent about the problem that guarantees this, although it is suggestive that the concavity due to log is dominated by the convexity of the quadratic form part of ℒ, and so our problem is “nearly-convex.” Of course, the secondary question remains: how do the algorithms compare in accuracy? Here, we have two notions: final value of the objective function, and actual closeness to the ground truth signal. In all channels, the case is the same. Both algorithms pass the most natural benchmark: they are able to achieve a lower loss value than the ground truth. Therefore, differences from the true signal can be regarded as due to modeling errors or random chance, rather than poor search of the loss landscape. However, the final loss value for projected gradient is consistently lower than that for the CCP, so in this regard, projected gradient outperforms. Likewise, the min from projected gradient has a lower squared error from the ground truth than the min from the CCP. Given that projected gradient algorithm both outperforms the CCP in final loss and decreases smoothly, this is suggestive that the CCP, while a descent algorithm that converges fairly rapidly, is prone to getting stuck in regions of the loss landscape. It is also apparent in figure <ref> that the CCP asymptotes to a sub-optimal loss, so this is not a consequence of early stopping. | http://arxiv.org/abs/2311.16378v2 | {
"authors": [
"Sam Leone",
"Xingzhi Sun",
"Michael Perlmutter",
"Smita Krishnaswamy"
],
"categories": [
"cs.LG",
"eess.SP"
],
"primary_category": "cs.LG",
"published": "20231127235319",
"title": "Bayesian Formulations for Graph Spectral Denoising"
} |
Towards Transfer Learning for Large-Scale Image Classification Using Annealing-based Quantum Boltzmann Machines
© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The authors acknowledge funding from the German Federal Ministry for Economic Affairs and Climate Action, project PlanQK, 01MK20005I.
Daniëlle Schuman, Leo Sünkel, Philipp Altmann, Jonas Stein, Christoph Roch, Thomas Gabor, Claudia Linnhoff-Popien (LMU)
14, 2024
A key challenge of modern machine learning systems is to achieve Out-of-Distribution (OOD) generalization—generalizing to target data whose distribution differs from that of source data. Despite its significant importance, the fundamental question of “what are the most effective algorithms for OOD generalization” remains open even under the standard setting of covariate shift. This paper addresses this fundamental question by proving that, surprisingly, classical Maximum Likelihood Estimation (MLE) purely using source data (without any modification) achieves minimax optimality for covariate shift under the well-specified setting. That is, no algorithm performs better than MLE in this setting (up to a constant factor), justifying that “MLE is all you need”. Our result holds for a very rich class of parametric models and does not require any boundedness condition on the density ratio. We illustrate the wide applicability of our framework by instantiating it to three concrete examples—linear regression, logistic regression, and phase retrieval. This paper further complements the study by proving that, under the misspecified setting, MLE is no longer the optimal choice, whereas the Maximum Weighted Likelihood Estimator (MWLE) emerges as minimax optimal in certain scenarios.
§ INTRODUCTION
Distribution shift, where the distribution of test data (target data) significantly differs from the distribution of training data (source data), is commonly encountered in practical machine learning scenarios <cit.>. A central challenge of modern machine learning is to achieve Out-of-Distribution (OOD) generalization, where learned models maintain good performance in the target domain despite the presence of distribution shifts.
To address this challenge, a variety of algorithms and techniques have been proposed, including vanilla empirical risk minimization (ERM) <cit.>, importance weighting <cit.>, learning invariant representations <cit.>, distributionally robust optimization (DRO) <cit.>, etc. See the recent survey <cit.> for more details. These results claim the effectiveness of the corresponding proposed algorithms in different regimes. This leads to a natural fundamental question: What are the most effective algorithms for OOD generalization? This paper considers a widely-studied formulation of OOD generalization—covariate shift. Under covariate shift, the marginal distributions of the input covariates X vary between the source and target domains, while the conditional distribution of the output given the covariates Y | X remains the same across domains. We consider learning a model from a known parametric model class under the well-specified setting, where well-specification refers to problems in which the true conditional distribution of Y | X lies in the given parametric model class. We argue that the well-specified setting becomes increasingly relevant in modern learning applications, because these applications typically use large-scale models with an enormous number of parameters, which are highly expressive and thus make the settings “approximately” well-specified. Unfortunately, even under the basic setup of well-specified covariate shift, the aforementioned highlighted problem remains elusive — while the seminal work <cit.> provides the first asymptotic guarantees for the classical Maximum Likelihood Estimation (MLE) algorithm under this setup, and proves its optimality among a specific class of weighted likelihood estimators, his results leave two critical questions open: (1) Does MLE remain effective in the practical non-asymptotic scenario when the number of samples is limited? (2) Do there exist smart algorithms beyond the class of weighted likelihood estimators that outperform MLE? This paper precisely addresses both critical questions, thus resolving the highlighted problem under well-specified covariate shift. Our contributions. Concretely, this paper makes the following contributions:
* We prove that, for a large set of well-specified covariate shift problems, the classical Maximum Likelihood Estimation (MLE) — which is computed purely based on source data without using any target data — finds the optimal predictor on the target domain with prediction loss decreasing as Õ(Tr(I_T I_S^-1)/n). Here Tr(·) stands for the trace, I_S and I_T are the Fisher information matrices under the source and target data distributions respectively, and n is the number of source samples. Our result does not require any boundedness condition on the density ratio, and is, to our best knowledge, the first general, non-asymptotic, sharp result for MLE on a rich class of covariate shift problems.
* We provide the first minimax lower bound under well-specified covariate shift for any algorithm, matching the error rate of MLE. This implies that MLE is minimax optimal, and no algorithm is better than MLE in this setting (up to a constant factor), justifying “MLE is all you need”.
* We instantiate our generic results by considering three representative examples with distinct problem structures: linear regression, logistic regression and phase retrieval. We verify preconditions, compute key quantities, and directly give covariate shift guarantees for these applications.
* We further complement the study of this paper by considering the mis-specfied setting where MLE ceases to work. We establish the first general, non-asymptotic upper bound for the Maximum Weighted Likelihood Estimator (MWLE) provided bounded likelihood ratio. We prove that MWLE is minimax optimal under certain worst-case mis-specification. MLE versus MWLE. This paper shows that importance weighting should not always be the go-to algorithm for covariate shift problems. Despite MWLE works under more general mis-specified setting given bounded density ratio, in the well-specified regime, MLE does not require bounded density ratio, and is provably more efficient than MWLE in terms of sample complexity. MLE is all you need for well-specified covariate shift problem. §.§ Related work Parametric covariate shift. The statistical study of covariate shift under parametric models can be dated back to <cit.>, which established the asymptotic normality of MWLE and pointed out that vanilla MLE is asymptotically optimal among all the weighted likelihood estimators when the model is well-specified. However, no finite sample guarantees were provided, and the optimality of MLE is only proved within the restricted class of weighted likelihood estimators. In contrast, this paper establishes non-asymptotic results and proves the optimality of MLE among all possible estimators under well-specified models. <cit.> studied the importance weighting under the statistical learning framework and gave a non-asymptotic upper bound for the generalization error of the weighted estimator. However, their rate scales as (1/√(n)) compared to our rate (1/n), where n is the sample size.A recent line of work also provide non-asymptotic analyses for covariate shift under well-specified setting, however they focus on linear regression or a few specific models which are more restrictive than our setting: <cit.> introduces a statistical minimax framework and provides lower bounds for OOD generalization in the context of linear and one-hidden layer neural network regression models. When applied to covariate shift, their lower bounds are loose and no longer minimax optimal. <cit.> considers the minimax optimal estimator for linear regression under fixed design, the estimator they proposed is not MLE and is much more complicated in certain regimes. Finally, <cit.> considers covariate shift in linear regression where the learner can have access to a small number of target labels, this is beyond the scope of this paper, where we focus on the classical covariate shift setup in which target labels are not known.Nonparametric covariate shift. Another line of work focuses on well-specified nonparametric models under covariate shift. <cit.> presented minimax results for nonparametric classification problem, which was controlled by a transfer-exponent that measures the discrepancy between source and target. Inspired by the aforementioned work, <cit.> studied nonparametric regression problem over the class of Hölder continuous functions with a more fine-grained similarity measure. When considering reproducing kernel Hilbert space (RKHS), <cit.> showedkernel ridge regression (KRR) estimator with a properly chosen penalty is minimax optimal for a large family of RKHS when the likelihood ratio is uniformly bounded, and a reweighted KRR using truncated likelihood ratios is minimax optimal when the likelihood ratio has a finite second moment. Later, <cit.> proposed a learning strategy based on pseudo-labels. 
When the likelihood ratio is bounded, their estimator enjoys optimality guarantees without prior knowledge of the amount of covariate shift. Although these works focus on covariate shift problems, they consider the nonparametric setting and hence are not directly comparable to our work. As an example, <cit.> showed that MLE (empirical risk minimization in their language) is provably suboptimal for addressing covariate shift under nonparametric RKHS assumptions. In contrast, we show that MLE is optimal for covariate shift under a well-specified parametric model. We also highlight that our lower bound is instance dependent in the sense that it depends on the source and target distributions. This is in contrast to prior work (e.g., <cit.>, <cit.>) that considers the worst-case scenario over certain classes of source-target pairs (e.g., bounded density ratios).

Maximum likelihood estimation. A crucial part of this work is analyzing MLE, which is a dominant approach in statistical inference. There is a large body of work studying the behavior of MLE in the standard no-distribution-shift setting. It is well known that MLE is asymptotically normal <cit.>, with the inverse of the Fisher information as the asymptotic variance. <cit.> established the famous Cramér-Rao bound for unbiased estimators and also showed that no consistent estimator has lower asymptotic mean squared error than the MLE. <cit.> gave the asymptotic distribution of MLE in the mis-specified setting. More recently, the non-asymptotic behavior of MLE has been studied under specific models; for example, <cit.> established non-asymptotic error bounds for MLE in logistic regression using self-concordance. This line of work does not consider covariate shift, which is an indispensable part of this paper.

Importance reweighting algorithms. Lastly, importance reweighting (or importance sampling) is a classical method that uses independent samples from a proposal distribution to approximate expectations w.r.t. a target measure <cit.>. <cit.> studied the sample size (depending on the KL divergence between the two distributions) required for importance sampling to approximate a single function. <cit.> extended the analysis to general f-divergences. Beyond correcting covariate shift, importance reweighting has been central in offline reinforcement learning. For instance, <cit.> showed that a truncated version of importance reweighting is minimax optimal for estimating the value of a target policy using data from a behavior policy. For learning the optimal policy from behavior data, <cit.> presented upper bounds for an importance-reweighted estimator. This spurred a long line of work on using importance weighting in offline RL; see the recent work <cit.> and the references therein.

§ BACKGROUND AND PROBLEM FORMULATION

In this section, we provide background on the problem of learning under covariate shift. We also review two widely adopted estimators: the maximum likelihood estimator and the maximum weighted likelihood estimator.

Notations. Throughout the paper, we use c to denote universal constants, which may vary from line to line.

§.§ Covariate shift and excess risk

Let X∈𝒳 be the covariates and Y∈𝒴 be the response variable that we aim to predict. In a general out-of-distribution (OOD) generalization problem, we have two domains of interest, namely a source domain S and a target domain T. Each domain is associated with a data-generating distribution over (X,Y): _S(X,Y) for the source domain and _T(X,Y) for the target domain. Given n i.i.d.
labeled samples {(x_i,y_i)}^n_i=1∼_S(X,Y) from the source domain, the goal of OOD generalization is to learn a prediction rule X → Y that performs well in the target domain. In this paper, we focus on the covariate shift version of the OOD generalization problem, in which the marginal distributions _S(X) and _T(X) of the covariates may differ between the source and target domains, while the conditional distribution Y | X is assumed to be the same on both domains.

More precisely, we adopt the notion of excess risk to measure the performance of an estimator under covariate shift. Let :={f(y | x;β) | β∈^d} be a parameterized function class used to model the conditional density function p(y | x) of Y | X. A typical loss function is defined via the negative log-likelihood:
ℓ(x,y,β):=-log f(y | x;β).
The excess risk at β is then defined as
R(β):= _T[ℓ(x,y,β)]-inf_β_T[ℓ(x,y,β)],
where the expectation _T is taken over _T(X,Y). When the model is well-specified, i.e., when the true density p(y | x)=f(y | x;β^⋆) for some β^⋆, we have inf_β_T[ℓ(x,y,β)]= _T[ℓ(x,y,β^⋆)]. As a result, we evaluate the loss at β against the loss at the true parameter β^⋆. In contrast, in the case of mis-specification, i.e., when p(y | x) ∉, the loss at β is compared against the loss of the best fit in the model class.

§.§ Maximum likelihood estimation and its weighted version

In the no-covariate-shift case, maximum likelihood estimation (MLE) is arguably the most popular approach. Let
ℓ_n(β):=1/n∑^n_i=1ℓ(x_i,y_i,β)
be the empirical negative log-likelihood computed from the samples {(x_i,y_i)}^n_i=1 drawn from the source domain. The vanilla MLE is defined as
β_:=argmin_β∈^dℓ_n(β).
One potential "criticism" of MLE in the covariate shift setting is that the empirical negative log-likelihood is not a faithful estimate of the out-of-distribution generalization performance, i.e., _T[ℓ(x,y,β)]. In light of this, a weighted version of MLE has been proposed. Let w(x):=d_T(x)/d_S(x) be the density ratio function and let
ℓ^w_n(β):=1/n∑^n_i=1 w(x_i) ℓ(x_i,y_i,β)
be the weighted loss. The maximum weighted likelihood estimator is then defined as
β_:=argmin_β∈^dℓ^w_n(β).
It is easy to see that the weighted loss is an unbiased estimate of _T[ℓ(x,y,β)].

To ease the presentation later, we also recall the classical notion of Fisher information, an important quantity that measures the difficulty of parameter estimation. The Fisher information evaluated at β on the source and target domains is defined as
_S(β):=_x∼_S(X), y | x∼ f(y | x;β)[∇^2 ℓ(x,y,β)],
_T(β):=_x∼_T(X), y | x∼ f(y | x;β)[∇^2 ℓ(x,y,β)].
Here, the gradient and Hessian are taken with respect to the parameter β.

§ WELL-SPECIFIED PARAMETRIC MODEL UNDER COVARIATE SHIFT

In this section, we focus on covariate shift with a well-specified model, that is, the true conditional distribution falls in our parametric function class. This setting aligns with practice, since in modern machine learning we often deploy large models whose representational power is so strong that essentially any true data distribution falls (at least approximately) within the function class. We assume there exists some β^⋆ such that p(y | x)=f(y | x;β^⋆), and denote the excess risk evaluated at β under the true model parameter β^⋆ by R_β^⋆(β), i.e.,
R_β^⋆(β):= _x∼_T(X) y|x∼ f(y|x;β^⋆)[ℓ(x,y,β)]-_x∼_T(X) y|x∼ f(y|x;β^⋆)[ℓ(x,y,β^⋆)].
While the objective of MLE (cf.
(<ref>)) is not an unbiased estimate of the risk under the target domain, we will show in this section that MLE is in fact optimal for addressing covariate shift under well-specified models.More specifically, in Section <ref>, we provide the performance upper bound for MLE under generic assumptions on the parametric model. Then in Section <ref>, we characterize the performance limit of any estimator in the presence of covariate shift. As we will see, MLE is minimax optimal as it matches the performance limit. §.§ Upper bound for MLEIn this subsection, we establish a non-asymptotic upper bound for MLE under generic assumptions on the model class.We make the following assumptions on the model class : * There exist B_1, B_2, N(δ), and absolute constants c, γ such that for any fixedmatrix A ∈ℝ^d × d, any δ∈ (0, 1), and any n > N(δ), with probability at least 1-δ:A(∇ℓ_n(β^⋆)-[∇ℓ_n(β^⋆)])_2 ≤ c √( V logd/δ/n)+ B_1A_2 log^γ(B_1A_2/√(V)) logd/δ/n, ∇^2ℓ_n(β^⋆)-[∇^2ℓ_n(β^⋆)]_2≤ B_2 √(logd/δ/n),where V = n ·𝔼A(∇ℓ_n(β^⋆)-[∇ℓ_n(β^⋆)])_2^2 is the variance. *There exists some constant B_3≥ 0 such that ∇^3 ℓ (x,y,β)_2≤ B_3 for all x∈_S∪_T, y∈, β∈^d, where _S (resp. _T) is the support of _S(X) (resp. _T(X)). * The empirical loss ℓ_n(·) defined in (<ref>) has a unique local minimum in ^d, which is also the global minimum. Several remarks on Assumption <ref> are in order.Assumption <ref> is a general version of Bernstein inequality (when γ=0 it reduces to classical Bernstein inequality), which gives concentration on gradient and Hessian. This assumption is naturally satisfied when the gradient and Hessian are bounded (see Proposition <ref> for details). Assumption <ref> requires the third order derivative of log-likelihood to be bounded, which is easy to satisfy (e.g., linear regression satisfies this assumption with B_3=0). Assumption <ref> ensures the MLE is unique, which is standard in the study of the behaviour of MLE. We can see that it naturally applies to traditional convex losses. It is worthnoting that our general theorem can also be applied under a relaxed version of Assumption <ref>, which will be shown in Theorem <ref>. In Section <ref>, we will see that Assumption <ref> is mild and easily satisfied for a wide range of models.Now we are ready to present the performance upper bound for MLE under covariate shift. Suppose that the model classsatisfies Assumption <ref>. Let _T:=_T(β^⋆) and _S:=_S(β^⋆). For any δ∈ (0,1), if n≥ c max{N^⋆log(d/δ), N(δ)}, then with probability at least 1-2δ, we haveR_β^⋆(β_)≤ c(_T^-1_S)logd/δ/nfor an absolute constant c. HereN^⋆:=Poly (d, B_1, B_2, B_3, _S^-1_2, _T^1/2_S^-1_T^1/2_2^-1).For an exact characterization of the threshold N^⋆, one can refer to Theorem <ref> in the appendix. Theorem <ref> gives a non-asymptotic upper bound for the excess risk of MLE: when the sample size exceeds a certain threshold of max{N^⋆log(d/δ), N(δ)}, MLE achieves an instance dependent risk bound (_T _S^-1)/n.It is worth noting that our analysis does not require boundedness on the density ratios between the target and source distributions (as have been assumed in prior art <cit.>), which yields broader applicability. In Section <ref>, we will instantiate our generic analysis on three different examples: linear regression, logistric regression and phase retrieval. §.§ Minimax lower boundIn the previous section, we have established the upper bound for the vanilla MLE.Now we turn to the complementary question regarding the fundamental limit of covariate shift under well-specified models. 
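Before doing so, it may help to see the instance-dependent quantity (_T_S^-1) concretely, since the same quantity reappears in the lower bound below. The following sketch is purely illustrative: the dimension d, mean shift α, noise scale σ, sample size n, and number of trials are arbitrary choices of ours rather than values from the analysis. It evaluates (_T_S^-1) in closed form for the Gaussian linear design studied in the applications section (source covariates drawn from N(0,I_d), target covariates from N(α,σ^2 I_d)) and checks by Monte Carlo that the target excess risk of the MLE scales like (_T_S^-1)/n, in line with the upper bound above.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, sigma, n_trials = 10, 2000, 2.0, 200            # illustrative sizes, not values from the paper
beta_star = rng.normal(size=d) / np.sqrt(d)            # arbitrary true parameter
alpha = 3.0 * np.ones(d) / np.sqrt(d)                  # mean shift of the target covariates

# Fisher information for the Gaussian linear model with unit noise:
#   I_S = E_S[x x^T] = I_d,   I_T = E_T[x x^T] = alpha alpha^T + sigma^2 I_d.
I_S = np.eye(d)
I_T = np.outer(alpha, alpha) + sigma ** 2 * np.eye(d)
key_qty = np.trace(I_T @ np.linalg.inv(I_S))           # equals ||alpha||^2 + sigma^2 * d here
print("Tr(I_T I_S^{-1}) =", key_qty, "  closed form:", alpha @ alpha + sigma ** 2 * d)

# Monte-Carlo estimate of the target excess risk of the MLE
# (for this model the MLE is ordinary least squares on the source data).
excess = []
for _ in range(n_trials):
    X = rng.normal(size=(n, d))                        # source covariates ~ N(0, I_d)
    y = X @ beta_star + rng.normal(size=n)             # well-specified responses
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]    # MLE = least squares
    diff = beta_hat - beta_star
    excess.append(0.5 * diff @ I_T @ diff)             # excess risk = (1/2)(b - b*)^T I_T (b - b*)
print("average excess risk     :", np.mean(excess))
print("Tr(I_T I_S^{-1}) / (2n) :", key_qty / (2 * n))
```

Analogous checks can be run for the logistic regression and phase retrieval designs, although there the MLE has no closed form and the Fisher information matrices must themselves be estimated by simulation.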
To establish the lower bound, we will need the following Assumption <ref> that is a slight variant of Assumption <ref>. Different from the upper bound, the lower bound is algorithm independent and involve a model class rather than a fixed ground truth. Hence, Assumption <ref> focuses on population properties of our model as opposed to Assumption <ref>, which is on the sample level.Let β_0∈^d and B>0.We make the following assumptions on the model class : * Assumption <ref> holds. *There exist some constants L_S, L_T≥ 0 such that for any β_1,β_2∈_β_0(B):_S(β_1)-_S(β_2)_2≤ L_S β_1-β_2_2,_T(β_1)-_T(β_2)_2≤ L_T β_1-β_2_2. *For any β^⋆∈_β_0(B), the excess risk R_β^⋆(β) defined in (<ref>) is convex in β∈^d. *We assume _S(β) and _T(β) are positive definite for all β∈_β_0(B). Assumption <ref> essentially requires the Fisher information will not vary drastically in a small neighbourhood of β_0. This assumption is easy to hold when the fisher information has certain smoothness (e.g., in linear regression, the fisher information does not change when β varies). Since Assumption <ref> is a slight variant of Assumption <ref>, both assumptions are often satisfied simultaneously for a wide range of models, as we will show in Section <ref>. Suppose the model classsatisfies Assumption <ref>. As long as n ≥ N_0, we haveinf_β̂sup_β^⋆∈_β_0(B)(_T(β^⋆)^-1_S(β^⋆))^-1_x_i∼_S(X) y_i|x_i∼ f(y|x;β^⋆)[R_β^⋆(β̂)]≥1/50n,where N_0:= Poly (d,B^-1, B_3, L_S, L_T, _S(β_0)_2, _T(β_0)_2, _S(β_0)^-1_2, _T(β_0)^-1_2).For an exact characterization of the threshold N_0, one can refer to Theorem <ref> in the appendix. Comparing Theorem <ref> and <ref>, we can see that, under[It is worthy to point out that, it is not hard for Assumptions <ref> and <ref> to be satisfied simultaneously. These assumptions will hold naturally when the domain is bounded and the log-likelihood is of certain convexity and smoothness, as we will show in the next section by several concrete examples. ] Assumptions <ref> and <ref>, then for large enough sample size n, (_T(β^⋆)^-1_S(β^⋆))/n exactly characterizes the fundamental hardness of covariate shift under well-specified parametric models. It also reveals that vanilla MLE is minimax optimal under this scenario.To gain some intuitions, _S^-1 captures the variance of the parameter estimation, and _T measures how the excess risk on the target depends on the estimation accuracy of the parameter.Therefore what really affects the excess risk (on target) is the accuracy of estimating the parameter, and vanilla MLE is naturally the most efficient choice. We also highlight that our lower bound is instance dependent in the sense that it depends on the source and target distributions. This is in contrast to prior work (e.g. <cit.>, <cit.>) that consider the worst-case scenario over certain classes of source-target pairs (e.g., bounded density ratios). § APPLICATIONSIn this section, we illustrate the broad applicability of our framework by delving into three distinct statistical models, namely linear regression, logistic regression and phase retrieval.For each model, we will demonstrate the validity of the assumptions, and give the explicit non-asymptotic upper bound on the vanilla MLE obtained by our framework as well as the threshold of sample size needed to obtain the upper bound.§.§ Linear regressionIn linear regression, we have Y = X^Tβ^⋆+ε, where ε∼(0,1) and ε X. The corresponding negative log-likelihood function (i.e. 
the loss function) is given byℓ(x,y,β):=1/2(y-x^Tβ)^2.We assume X∼(0,I_d) on the source domain and X∼(α,σ^2 I_d) on the target domain.The aforementioned linear regression model satisfies Assumption <ref> and <ref> with γ=1, N(δ)=dlog (1/δ), B_1=c√(d), B_2=c√(d), B_3=0 and L_S=L_T=0. Moreover, we have (_T _S^-1)=α_2^2+σ^2 d.By Theorem <ref> and Theorem <ref>, since Assumption <ref> and <ref> are satisfied, we immediately demonstrate the optimality of MLE under linear regression. The following theorem gives the explicit form of excess risk bound by applying Theorem <ref>:For any δ∈ (0,1), if n≥ (Nlogd/δ), then with probability at least 1-2δ, we haveR_β^⋆(β_)≤ c(α^2_2+σ^2 d)logd/δ/n,where N:=d(1+α^2_2d+σ^2 d/α^2_2+σ^2 d)^2 . Regarding the upper bound of the excess risk, we categorize it into two scenarios: large shift and small shift.In the small shift scenarios (i.e., α_2^2≤σ^2 d), the result is the same as that in scenarios without any mean shift, with a rate of σ^2 d/n.On the other hand, in the large shift scenarios (i.e., α_2^2≥σ^2 d), the upper bound of the excess risk increases with the mean shift at a rate of α_2^2/n. For a minor mean shift, specifically when α_2=cσ for a given constant c, the threshold is N=d. This aligns with the results from linear regression without any covariate shift. On the other hand, as the mean shift increases (i.e., |α|_2=σ d^k for some 0< k< 1/2), the threshold becomes N=d^4k+1, increasing with the growth of k. In scenarios where the mean shift significantly surpasses the scaling shift, denoted as α≥σ√(d), the threshold reaches N=d^3.§.§ Logistic regressionIn the logistic regression, the response variable Y∈{0,1} obeys(Y=1 | X=x)=1/1+e^x^Tβ^⋆,(Y=0 | X=x)=1/1+e^-x^Tβ^⋆.The corresponding negative log-likelihood function (i.e. the loss function) is given byℓ(x,y,β):=log(1+e^x^Tβ)-y(x^Tβ).We assume X∼ (^d-1(√(d))) on the source domain and X∼(^d-1(√(d)))+v on the target domain, where ^d-1(√(d)):={x∈^d|x_2=√(d)}. In the following, we will give the upper bound of the excess risk for MLE when v= rβ_⊥^⋆, where β_⊥^⋆ represents a vector perpendicular to β^⋆ (i.e., β_⊥^⋆ Tβ^⋆=0). Without loss of generality, we assume β^⋆_2=β_⊥^⋆_2=1. The aforementioned logistic regression model satisfies Assumption <ref> and <ref> with γ=0, N(δ)=0, B_1=c√(d), B_2=cd, B_3=(√(d)+r)^3, L_S=d^1.5 and L_T=(√(d)+r)^3. Moreover, we have (_T _S^-1) ≍ d + r^2.By Theorem <ref> and Theorem <ref>, since Assumption <ref> and <ref> are satisfied, we immediately demonstrate the optimality of MLE under logistic regression. The following theorem gives the explicit form of excess risk bound by applying Theorem <ref>:For any δ∈ (0,1), if n≥ (Nlogd/δ), then with probability at least 1-2δ, we haveR_β^⋆(β_)≤ c(d+r^2)logd/δ/n,where N:=d^4(1+r^6). The bound on the excess risk incorporates a r^2 term, which is a measurement of the mean shift. This is due to the fact that the MLE does not utilize the information that v^Tβ^⋆=0. Therefore, v^Tβ_ is not necessarily zero, which will lead to an additional bias. Similar to linear regression, we can categorize the upper bound of the excess risk into two scenarios: large shift (r ≥√(d)) and small shift (r ≤√(d)).We admit that the N here may not be tight, as we lean on a general framework designed for a variety of models rather than a specific one.§.§ Phase retrievalAs we have mentioned, our generic framework can also be applied to the scenarios where some of the assumptions are relaxed. 
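Before turning to that relaxation, we note that the logistic regression guarantee above can be probed numerically in the same spirit. The sketch below is again only illustrative (the dimension d, shift magnitude r, sample size n, and the plain gradient-descent solver are our own choices, not part of the analysis): it draws source covariates uniformly from the sphere of radius √(d), fits the MLE from source data alone, and estimates the excess risk on the target design shifted by rβ_⊥^⋆; the output should be of the same order as (d+r^2)/n, constants aside.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, r = 5, 5000, 3.0                                 # illustrative choices

beta_star = np.zeros(d); beta_star[0] = 1.0            # ||beta*||_2 = 1
beta_perp = np.zeros(d); beta_perp[1] = 1.0            # unit vector orthogonal to beta*

def sphere(m):                                         # uniform draws from the sphere of radius sqrt(d)
    z = rng.normal(size=(m, d))
    return np.sqrt(d) * z / np.linalg.norm(z, axis=1, keepdims=True)

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Source sample: x ~ Unif(S^{d-1}(sqrt(d))),  y | x ~ Bernoulli(sigmoid(x^T beta*)).
X = sphere(n)
y = rng.binomial(1, sigmoid(X @ beta_star))

# MLE via gradient descent on the logistic loss; no target data is used.
beta = np.zeros(d)
for _ in range(2000):
    beta -= 0.5 * X.T @ (sigmoid(X @ beta) - y) / n

# Population risk on the shifted target design, approximated with a large fresh sample.
Xt = sphere(200_000) + r * beta_perp
def target_risk(b):
    s = Xt @ b
    return np.mean(np.logaddexp(0.0, s) - sigmoid(Xt @ beta_star) * s)

print("estimated excess risk:", target_risk(beta) - target_risk(beta_star))
print("(d + r^2) / n        :", (d + r ** 2) / n)
```

A similar check for the phase retrieval model of the next subsection would additionally need to account for the sign ambiguity between β and -β discussed there.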
In this subsection, we will further illustrate this point by delving into the phase retrieval model. In the phase retrieval, the response variable Y = (X^Tβ^⋆)^2+ε, where ε∼(0,1) and ε X.We assume _S(X) and _T(X) follow the same distribution as that in the logistic regression model (i.e., Section <ref>).Note that both the phase retrieval model and the logistic regression model belong to generalized linear model (GLM), thus they are expected to have similar properties. However, given the loss function ℓ(x,y,β):=1/2(y-(x^Tβ)^2)^2, it is obvious that Assumption <ref> is not satisfied, since if β is a global minimum of ℓ_n, -β is also a global minimum. The following theorem shows that we can still obtain results similar to logistic regression though Assumption <ref> fails to hold.For any δ∈ (0,1), if n≥(Nlogd/δ), then with probability at least 1-2δ, we haveR_β^⋆(β_)≤ c(d+r^2)logd/δ/n,where N:=d^8(1+r^8). § MIS-SPECIFIED PARAMETRIC MODEL UNDER COVARIATE SHIFT In the case of model mis-specification, we still employ a parameterized function class :={f(y | x;β) | β∈^d} to model the conditional density function of Y | X. However, the true density p(y | x) might not be in . As we previously showed, under a well-specified parametric model, the vanilla MLE is minimax optimal up to constants. However, when the model is mis-specified, the classical MLE may not necessarily provide a good estimator.There exist certain mis-specified scenarios such that classical MLE is not consistent, whereas MWLE is. Proposition <ref> illustrates the necessity of adaptation under model mis-specification since the classical MLE asymptotically gives the wrong estimator. In this section, we study the non-asymptotic property of MWLE. Letbe the model class of the ground truth Y | X, and M ∈ be the ground truth model for Y | X. We denote the optimal fit on target asβ^⋆(M):=_β_x∼_T(X) y|x∼ M[ℓ(x,y,β)].The excess risk evaluated at β is then given byR_M(β)= _x∼_T(X) y|x∼ M[ℓ(x,y,β)]-_x∼_T(X) y|x∼ M[ℓ(x,y,β^⋆(M))]. §.§ Upper bound for MWLEIn this subsection, we establish the non-asymptotic upper bound for MWLE, as an analog to Theorem <ref>. We make the following assumption which is a modification of Assumption <ref>.We assume the function classsatisfies the follows:* There exists some constant W>1 such that the density ratio w(x)≤ W for all x∈_S∪_T. *There exist B_1, B_2 and N(δ), and absolute constants c, γ such that for any fixedmatrix A ∈ℝ^d × d, any δ∈ (0, 1), and any n > N(δ), with probability at least 1-δ:A(∇ℓ^w_n(β^⋆(M))-[∇ℓ^w_n(β^⋆(M))])_2 ≤ c √( V logd/δ/n)+ WB_1A_2 log^γ(WB_1A_2/√(V)) logd/δ/n, ∇^2ℓ^w_n(β^⋆(M))-[∇^2ℓ^w_n(β^⋆(M))]_2≤ WB_2 √(logd/δ/n),where V = n ·𝔼A(∇ℓ^w_n(β^⋆(M))-[∇ℓ^w_n(β^⋆(M))])_2^2 is the variance. *Assumption <ref> holds. *There exists N'(δ) such that for any δ∈ (0,1) and any n≥ N'(δ), with probability at least 1-δ, the empirical loss ℓ^w_n(·) defined in (<ref>) has a unique local minimum in ^d, which is also the global minimum. Assumption <ref> is a density ratio upper bound (not required for analyzing MLE), which is essential for the analysis of MWLE. Assumption <ref> is an analog of Assumption <ref>, in the sense that the empirical loss ℓ_n is replaced by its weighted version ℓ_n^w. Assumption <ref> is a weaker version of Assumption <ref> in the sense that it only requires ℓ_n^w has a unique local minimum with high probability. This is due to the nature of reweighting: when applying MWLE, w(x_i) can sometimes be zero, which lead to the degeneration of ℓ_n^w (with a small probability). 
Therefore we only require the uniqueness of local minimum holds with high probability.To state our non-asymptotic upper bound for MWLE, we define the following “weighted version” of Fisher information:G_w(M):=_x∼_S(X) y|x∼ M[w(x)^2∇ℓ(x,y,β^⋆(M))∇ℓ(x,y,β^⋆(M))^T],H_w(M):=_x∼_S(X) y|x∼ M[w(x)∇^2ℓ(x,y,β^⋆(M))]=_x∼_T(X) y|x∼ M[∇^2ℓ(x,y,β^⋆(M))]. Suppose the function classsatisfies Assumption <ref>. Let G_w := G_w(M) and H_w := H_w(M). For any δ∈ (0,1), if n≥ c max{N^⋆log(d/δ), N(δ), N'(δ)}, then with probability at least 1-3δ, we haveR_M(β_)≤ c(G_w H^-1_w)logd/δ/nfor an absolute constant c. Here N^⋆ := Poly(W,B_1,B_2,B_3,H_w^-1_2, (G_w H_w^-2), (G_w H_w^-2)^-1).For an exact characterization of the threshold N^⋆, one can refer to Theorem <ref> in the appendix.Compared with Theorem <ref>, Theorem <ref> does not require well-specification of the model, demonstrating the wide applicability of MWLE. The excess risk upper bound can be explained as follows: note that (G_w H^-1_w) can be expanded as (H_w H^-1_w G_w H^-1_w). As shown by <cit.>, the term √(n)(β_-β^⋆) converges asymptotically to a normal distribution, denoted as (0,H_w^-1G_wH_w^-1). Thus, the component H^-1_w G_w H^-1_w characterizes the variance of the estimator, corresponding to the ^-1_S term in Theorem <ref>.Additionally, the excess risk's dependence on the parameter estimation is captured by H_w as a counterpart of _T in Theorem <ref>. However, to establish Theorem <ref>, it is necessary to assume the bounded density ratio, which does not appear in Theorem <ref>. Moreover, when the model is well-specified, by Cauchy-Schwarz ineqaulity, we have (G_w H_w^-1)≥(_T_S^-1), which implies the upper bound for MWLE is larger than the vanilla MLE. This observation aligns with the results presented in <cit.>, which point out that when the model is well specified, MLE is more efficient than MWLE in terms of the asymptotic variance. §.§ Optimality of MWLE To understand the optimality of MWLE, it is necessary to establish a matching lower bound. However, deriving a lower bound similar to Theorem <ref>, which holds for any model classes that satisfies certain mild conditions, is challenging due to hardness of capturing the difference betweenand . As a solution, we present a lower bound tailored for certain model classes and data distributions in the following. There exist _S(X)≠_T(X), a model class and a prediction classsatisfying Assumption <ref> such that when n is sufficiently large, we haveinf_β̂sup_M ∈(G_w(M) H^-1_w(M))^-1_x_i∼_S(X) y_i|x_i∼ M[R_M(β̂)] ≳1/n. By Theorem <ref>, the excess risk of MWLE is upper bounded by (G_w H_w^-1)/n. Therefore, Theorem <ref> shows that there exists a non-trivial scenario where MWLE is minimax optimal. Notice that Theorem <ref> presents a weaker lower bound compared toTheorem <ref>. The lower bound presented in Theorem <ref> holds only for certain meticulously chosen _S(X), _T(X), model classand prediction class . In contrast, the lower bound in Theorem <ref> applies to any _S(X), _T(X), and classthat meet the required assumptions. § CONCLUSION AND DISCUSSIONTo conclude, we prove that MLE achieves the minimax optimality for covariate shift under a well-specified parametric model. Along the way, we demonstrate that the term (_T _S^-1) characterizes the foundamental hardness of covariate shift, where _S and _T are the Fisher information on the source domain and the target domain, respectively. 
To complement the study, we also consider the misspecified setting and show that Maximum Weighted Likelihood Estimator (MWLE) emerges as minimax optimal in specific scenarios, outperforming MLE. Our work opens up several interesting avenues for future study.First, it is of great interest to extend our analysis to other types of OOD generalization problems, e.g., imbalanced data, posterior shift, etc. Second, our analyses relies on standard regularity assumptions, such as the positive definiteness of the Fisher information (which implies certain identifiability of the parameter) and the uniqueness of the minimum of the loss function. Addressing covariate shift without these assumptions is also important future directions.iclr2024_conference§ PROOFS FOR SECTION <REF> §.§ Proofs for Theorem <ref>The detailed version of Theorem <ref> is stated as the following. Suppose that the model classsatisfies Assumption <ref>. Let _T:=_T(β^⋆) and _S:=_S(β^⋆). For any δ∈ (0,1), if n≥ c max{N^⋆log(d/δ), N(δ)}, then with probability at least 1-2δ, we haveR_β^⋆(β_)≤ c(_T^-1_S)logd/δ/nfor an absolute constant c. Here N^⋆:=(1 + κ̃/κ)^2 ·max{κ̃^-1α_1^2log^2γ((1+κ̃/κ)κ̃^-1α^2_1), α_2^2,κ̃(1 + _T^1/2_S^-1_T^1/2_2^-2)α_3^2 },where α_1 := B_1 _S^-1_2^1/2, α_2 := B_2 _S^-1_2, α_3 := B_3 _S^-1_2^3/2,κ:=(_T_S^-1) /_T^1/2_S^-1_T^1/2_2,κ̃ := (_S^-1)/_S^-1_2.For proving Theorem <ref>, we first state two main lemmas. Informally speaking, Lemma <ref> and Lemma <ref> capture the distance between β_ and β^⋆ under different measurements.Suppose Assumption <ref> holds. For any δ∈ (0,1) and any n≥ cmax{N_1log(d/δ), N(δ)}, with probability at least 1-δ, we have β_∈_β^⋆(c√((_S^-1)logd/δ/n)) for some absolute constant c. Here N_1:=max{ B^2_2_S^-1^2_2, B^2_3_S^-1^2_2(_S^-1), (B^2_1B_2_S^-1_2^3log^2γ (κ̃^-1/2α_1)/(_S^-1))^2/3,(B^3_1B_3_S^-1_2^4log^3γ (κ̃^-1/2α_1)/(_S^-1))^1/2, B^2_1_S^-1_2^2log^2γ (κ̃^-1/2α_1)/(_S^-1)} .Suppose Assumption <ref> holds. For any δ∈ (0,1) and any n≥ cmax{N_1log(d/δ), N_2log(d/δ), N(δ)}, with probability at least 1-2δ, we have_T^1/2(β_-β^⋆)_2^2≤ c(_T_S^-1) logd/δ/nfor some absolute constant c. Here N_1 is defined in Lemma <ref> andN_2:=max{ (B_2_T^1/2_S^-1/2_2^2(_S^-1)/(_T_S^-1) )^2, (B_3_T^1/2_S^-1/2_2^2(_S^-1)^1.5/(_T_S^-1) )^2,(B^2_1B_2_T^1/2_S^-1/2_2^2_S^-1_2^2log^2γ (κ̃^-1/2α_1)/(_T_S^-1) )^2/3, (B^3_1B_3_T^1/2_S^-1/2_2^2_S^-1_2^3log^3γ (κ̃^-1/2α_1)/(_T_S^-1) )^1/2,B^2_1_T^1/2_S^-1/2_2^2_S^-1_2log^2γ (κ^-1/2α_1)/(_T_S^-1) }. The proofs for Lemma <ref> and <ref> are delayed to the end of this subsection. With these two lemmas, we can now state the proof for Theorem <ref>.By Assumption <ref>, we can do Taylor expansion w.r.t. β as the following:R_β^⋆(β_) = _x∼_T(X) y|x∼ f(y|x;β^⋆)[ℓ(x,y,β_)-ℓ(x,y,β^⋆)] ≤_x∼_T(X) y|x∼ f(y|x;β^⋆) [∇ℓ(x,y,β^⋆)]^T(β_-β^⋆) + 1/2(β_-β^⋆)^T_T(β_-β^⋆) +B_3/6β_-β^⋆_2^3.Applying Lemma <ref> and <ref>, we know for any δ and any n≥ cmax{N_1log(d/δ), N_2log(d/δ), N(δ)}, with probability at least 1-2δ, we have(β_-β^⋆)^T_T(β_-β^⋆) ≤ c(_T_S^-1) logd/δ/nandβ_-β^⋆_2≤ c√((_S^-1)logd/δ/n).Also notice that, _x∼_T(X) y|x∼ f(y|x;β^⋆) [∇ℓ(x,y,β^⋆)]=0. Therefore, with probability at least 1-2δ, we haveR_β^⋆(β_)≤c/2(_T_S^-1) logd/δ/n + c^3/6B_3(_S^-1)^1.5(logd/δ/n)^1.5for any δ and any n≥ cmax{N_1log(d/δ), N_2log(d/δ), N(δ)}. 
If we further assume n≥ c(B_3(_S^-1)^1.5/(_T_S^-1) )^2log(d/δ), it then holds thatR_β^⋆(β_)≤ c (_T_S^-1) logd/δ/n.Note thatmax{N_1,N_2, (B_3(_S^-1)^1.5/(_T_S^-1) )^2}=max{ B^2_2_S^-1^2_2, B^2_3_S^-1^2_2(_S^-1), (B^2_1B_2_S^-1_2^3log^2γ (κ̃^-1/2α_1)/(_S^-1))^2/3, (B^3_1B_3_S^-1_2^4log^3γ (κ̃^-1/2α_1)/(_S^-1))^1/2,B^2_1_S^-1_2^2log^2γ (κ̃^-1/2α_1)/(_S^-1), (B_2_T^1/2_S^-1/2_2^2(_S^-1)/(_T_S^-1) )^2, (B_3_T^1/2_S^-1/2_2^2(_S^-1)^1.5/(_T_S^-1) )^2,(B^2_1B_2_T^1/2_S^-1/2_2^2_S^-1_2^2log^2γ (κ̃^-1/2α_1)/(_T_S^-1) )^2/3, (B^3_1B_3_T^1/2_S^-1/2_2^2_S^-1_2^3log^3γ (κ̃^-1/2α_1)/(_T_S^-1) )^1/2,B^2_1_T^1/2_S^-1/2_2^2_S^-1_2log^2γ (κ^-1/2α_1)/(_T_S^-1) , (B_3(_S^-1)^1.5/(_T_S^-1) )^2}=max{α_2^2, κ̃α^2_3, α_1^4/3α_2^2/3κ̃ ^-2/3log^4γ/3 (κ̃^-1/2α_1),α_1^3/2α_3^1/2κ̃ ^-1/2log^3γ/2 (κ̃^-1/2α_1), α_1^2κ̃^-1log^2γ (κ̃^-1/2α_1), α_2^2 (κ̃/κ)^2, α_3^2 κ̃^3/κ^2,α_1^4/3α_2^2/3κ ^-2/3log^4γ/3 (κ̃^-1/2α_1),α_1^3/2α_3^1/2κ ^-1/2log^3γ/2 (κ̃^-1/2α_1), α_1^2κ^-1log^2γ (κ^-1/2α_1), α_3^2 κ̃^3 κ^-2_T^1/2_S^-1_T^1/2_2^-2}≤max{κ̃^-1α_1^2log^2γ((1+κ̃/κ)κ̃^-1α^2_1), κ^-1α_1^2log^2γ((1+κ̃/κ)κ̃^-1α^2_1), α_2^2,(κ̃/κ)^2α_2^2 ,κ̃α^2_3, (κ̃^3/κ^2)α_3^2 , κ̃^3 κ^-2_T^1/2_S^-1_T^1/2_2^-2α_3^2}≤ (1 + κ̃/κ)^2 ·max{κ̃^-1α_1^2log^2γ((1+κ̃/κ)κ̃^-1α^2_1), α_2^2,κ̃(1 + _T^1/2_S^-1_T^1/2_2^-2)α_3^2 }=:N^⋆.To summarize, for any δ, any n≥ cmax{N^⋆log(d/δ), N(δ)}, with probability at least 1-2δ, we haveR_β^⋆(β_)≤ c (_T_S^-1) logd/δ/n. In the following, we prove Lemma <ref> and <ref>.Proof of Lemma <ref> For notation simplicity, we denote g:=∇ℓ_n(β^⋆)-[∇ℓ_n(β^⋆)]. Note that V = n ·𝔼 [A(∇ℓ_n(β^⋆)-[∇ℓ_n(β^⋆)])_2^2]=n·[∇ℓ_n(β^⋆)^TA^TA∇ℓ_n(β^⋆)]=n·[(A∇ℓ_n(β^⋆)∇ℓ_n(β^⋆)^TA^T)]=(A_SA^T).By taking A= _S^-1 in Assumption <ref>, for any δ, any n > N(δ), we have with probability at least 1-δ:_S^-1g_2≤ c√((_S^-1) logd/δ/n)+ B_1_S^-1_2 log^γ(B_1_S^-1_2/√((_S^-1))) logd/δ/n =c√((_S^-1) logd/δ/n)+ B_1_S^-1_2 log^γ (κ̃^-1/2α_1)logd/δ/n,∇^2ℓ_n(β^⋆)-[∇^2ℓ_n(β^⋆)]_2≤ B_2 √(logd/δ/n).Let event A:={(<ref>),(<ref>) holds}.Under the event A, we have the following Taylor expansion:ℓ_n(β) - ℓ_n(β^⋆) by Assumption <ref>≤(β - β^⋆)^T∇ℓ_n (β^⋆) +1/2 (β - β^⋆)^T∇^2ℓ_n (β^⋆) (β - β^⋆) + B_3/6β-β^⋆_2^3∇ℓ(β^⋆)=0= (β - β^⋆)^T g+1/2 (β - β^⋆)^T∇^2ℓ_n (β^⋆) (β - β^⋆) + B_3/6β-β^⋆_2^3by (<ref>)≤ (β - β^⋆)^T g + 1/2 (β - β^⋆)^T_S (β - β^⋆) + B_2√(logd/δ/n)β-β^⋆_2^2 + B_3/6β-β^⋆_2^3Δ_β:=β-β^⋆=Δ_β^Tg + 1/2Δ_β^T_SΔ_β + B_2√(logd/δ/n)Δ_β_2^2 + B_3/6Δ_β_2^3= 1/2(Δ_β-z)^T_S (Δ_β-z) - 1/2z^T_Sz + B_2√(logd/δ/n)Δ_β_2^2+ B_3/6Δ_β_2^3 where z:=-_S^-1g. Similarlyℓ_n(β) - ℓ_n(β^⋆) ≥1/2(Δ_β-z)^T_S (Δ_β-z) - 1/2z^T_Sz - B_2√(logd/δ/n)Δ_β_2^2 - B_3/6Δ_β_2^3.Notice that Δ_β^⋆+z = z, by (<ref>) and (<ref>), we haveℓ_n(β^⋆+z)- ℓ_n(β^⋆) ≤ - 1/2z^T_Sz+B_2√(logd/δ/n)(c√((_S^-1) logd/δ/n)+ B_1_S^-1_2 log^γ (κ̃^-1/2α_1)logd/δ/n)^2+ B_3/6(c√((_S^-1) logd/δ/n)+ B_1_S^-1_2 log^γ (κ̃^-1/2α_1)logd/δ/n)^3≤ - 1/2z^T_Sz+2c^2B_2 (_S^-1)(logd/δ/n)^1.5+2B^2_1B_2_S^-1_2^2log^2γ (κ̃^-1/2α_1)(logd/δ/n)^2.5 + 2/3c^3B_3 (_S^-1)^1.5(logd/δ/n)^1.5+2/3B^3_1B_3_S^-1_2^3log^3γ (κ̃^-1/2α_1)(logd/δ/n)^3,where we use the fact that (a+b)^n≤ 2^n-1(a^n+b^n) in the last inequality. 
For any β∈_β^⋆(3c√((_S^-1)logd/δ/n)), by (<ref>), we haveℓ_n(β)-ℓ_n(β^⋆)≥1/2(Δ_β-z)^T_S (Δ_β-z) - 1/2z^T_Sz -9c^2B_2(_S^-1)(logd/δ/n)^1.5-9/2c^3B_3(_S^-1)^1.5(logd/δ/n)^1.5.(<ref>) - (<ref>) givesℓ_n(β)-ℓ_n(β^⋆+z)≥1/2(Δ_β-z)^T_S (Δ_β-z) - (9c^2 B_2(_S^-1)(logd/δ/n)^1.5+9/2c^3B_3(_S^-1)^1.5(logd/δ/n)^1.5+2c^2B_2 (_S^-1)(logd/δ/n)^1.5+2B^2_1B_2_S^-1_2^2log^2γ (κ̃^-1/2α_1) (logd/δ/n)^2.5 + 2/3c^3 B_3 (_S^-1)^1.5(logd/δ/n)^1.5+2/3B^3_1B_3_S^-1_2^3log^3γ (κ̃^-1/2α_1)(logd/δ/n)^3) = 1/2(Δ_β-z)^T_S (Δ_β-z)- (11c^2B_2(_S^-1)(logd/δ/n)^1.5+31/6c^3B_3(_S^-1)^1.5(logd/δ/n)^1.5+2B^2_1B_2_S^-1_2^2log^2γ (κ̃^-1/2α_1)(logd/δ/n)^2.5+ 2/3B^3_1B_3_S^-1_2^3log^3γ (κ̃^-1/2α_1)(logd/δ/n)^3)Consider the ellipsoid :={β∈^d | 1/2(Δ_β-z)^T_S (Δ_β-z) ≤ 11c^2 B_2(_S^-1)(logd/δ/n)^1.5+31/6c^3B_3(_S^-1)^1.5(logd/δ/n)^1.5+2B^2_1B_2_S^-1_2^2log^2γ (κ̃^-1/2α_1)(logd/δ/n)^2.5+2/3 B^3_1B_3_S^-1_2^3log^3γ (κ̃^-1/2α_1)(logd/δ/n)^3}.Then by (<ref>), for any β∈_β^⋆(3c√((_S^-1)logd/δ/n)) ∩^C, ℓ_n(β)-ℓ_n(β^⋆+z) > 0.Notice that by the definition of , using λ_min^-1(_S)= _S^-1_2, we have for any β∈,Δ_β-z_2^2 ≤ 22c^2B_2_S^-1_2(_S^-1)(logd/δ/n)^1.5+31/3c^3B_3_S^-1_2(_S^-1)^1.5(logd/δ/n)^1.5+4B^2_1B_2_S^-1_2^3log^2γ (κ̃^-1/2α_1) (logd/δ/n)^2.5+ 4/3B^3_1B_3_S^-1_2^4log^3γ (κ̃^-1/2α_1)(logd/δ/n)^3.Thus for any β∈, we haveΔ_β_2^2 ≤ 2(Δ_β-z_2^2+z_2^2)by (<ref>)≤ 44c^2B_2_S^-1_2(_S^-1)(logd/δ/n)^1.5+62/3c^3B_3_S^-1_2(_S^-1)^1.5(logd/δ/n)^1.5+8B^2_1B_2_S^-1_2^3log^2γ (κ̃^-1/2α_1)(logd/δ/n)^2.5+ 8/3 B^3_1B_3_S^-1_2^4log^3γ (κ̃^-1/2α_1)(logd/δ/n)^3+4c^2(_S^-1)logd/δ/n+4B^2_1_S^-1_2^2log^2γ (κ̃^-1/2α_1)(logd/δ/n)^2 .To guarantee (_S^-1)logd/δ/n is the leading term, we only need (_S^-1)logd/δ/n to dominate the rest of the terms. Hence, if we further have n≥ c N_1log(d/δ), it then holds thatΔ_β_2^2≤ 9c^2(_S^-1)logd/δ/n,i.e., β∈_β^⋆(3c√((_S^-1)logd/δ/n)). Here N_1:=max{ B^2_2_S^-1^2_2, B^2_3_S^-1^2_2(_S^-1), (B^2_1B_2_S^-1_2^3log^2γ (κ̃^-1/2α_1)/(_S^-1))^2/3,(B^3_1B_3_S^-1_2^4log^3γ (κ̃^-1/2α_1)/(_S^-1))^1/2, B^2_1_S^-1_2^2log^2γ (κ̃^-1/2α_1)/(_S^-1)} .In other words, we show that ⊂_β^⋆(3c√((_S^-1)logd/δ/n)) when n≥ c max{N_1log(d/δ),N(δ)}.Recall that by (<ref>), we know that for any β∈_β^⋆(3c√((_S^-1)logd/δ/n)) ∩^C, ℓ_n(β)-ℓ_n(β^⋆+z) > 0.Note that β^⋆+z∈. Hence there is a local minimum of ℓ_n(β) in . By Assumption <ref>, we know that the global minimum of ℓ_n(β) is in , i.e., β_∈⊂_β^⋆(3c√((_S^-1)logd/δ/n)). Proof of Lemma <ref>Let E:= {β_∈⊂_β^⋆(c√((_S^-1)logd/δ/n)) }. For any δ and any n≥ c max{N_1log(d/δ),N(δ)}, by the proof of Lemma <ref>, we have (E) ≥ 1-δ.By taking A= _T^1/2_S^-1 in Assumption <ref>, for any δ, any n > N(δ), we have with probability at least 1-δ:_T^1/2_S^-1g_2 ≤ c√((_S^-1_T)logd/δ/n)+B_1_T^1/2_S^-1_2 log^γ(B_1_T^1/2_S^-1_2/√((_S^-1_T))) logd/δ/n≤ c√((_S^-1_T)logd/δ/n)+B_1_T^1/2_S^-1_2 log^γ(κ^-1/2α_1) logd/δ/n.We denote E':={(<ref>) holds}. 
For any δ and any n≥ c max{N_1log(d/δ),N(δ)}, we have (E ∩ E') ≥ 1-2δ.Under E ∩ E', β_∈, i.e., 1/2(Δ_β_-z)^T_S(Δ_β_-z) ≤ 11c^2B_2(_S^-1)(logd/δ/n)^1.5+31/6c^3B_3(_S^-1)^1.5(logd/δ/n)^1.5+2B^2_1B_2_S^-1_2^2log^2γ (κ̃^-1/2α_1)(logd/δ/n)^2.5+ 2/3B^3_1B_3_S^-1_2^3log^3γ (κ̃^-1/2α_1)(logd/δ/n)^3.In other words,_S^1/2(Δ_β_-z)_2^2≤ 22c^2 B_2(_S^-1)(logd/δ/n)^1.5+31/3c^3B_3(_S^-1)^1.5(logd/δ/n)^1.5+4B^2_1B_2_S^-1_2^2log^2γ (κ̃^-1/2α_1)(logd/δ/n)^2.5+ 4/3B^3_1B_3_S^-1_2^3log^3γ (κ̃^-1/2α_1)(logd/δ/n)^3Thus we have_T^1/2(β_-β^⋆)_2^2 = _T^1/2Δ_β__2^2 = _T^1/2(Δ_β_-z) +_T^1/2z _2^2≤ 2_T^1/2(Δ_β_-z)_2^2 + 2_T^1/2z _2^2 = 2_T^1/2_S^-1/2 (_S^1/2 (Δ_β_-z))_2^2 + 2_T^1/2_S^-1g _2^2≤ 2_T^1/2_S^-1/2_2^2_S^1/2 (Δ_β_-z)_2^2 + 2_T^1/2_S^-1g _2^2by (<ref>) and (<ref>)≤ 4c^2(_T_S^-1) logd/δ/n+44c^2B_2_T^1/2_S^-1/2_2^2(_S^-1)(logd/δ/n)^1.5 +62/3c^3B_3_T^1/2_S^-1/2_2^2(_S^-1)^1.5(logd/δ/n)^1.5+8B^2_1B_2_T^1/2_S^-1/2_2^2_S^-1_2^2log^2γ (κ̃^-1/2α_1)(logd/δ/n)^2.5 +8/3 B^3_1B_3_T^1/2_S^-1/2_2^2_S^-1_2^3log^3γ (κ̃^-1/2α_1)(logd/δ/n)^3 +4B^2_1_T^1/2_S^-1/2_2^2_S^-1_2log^2γ (κ^-1/2α_1) (logd/δ/n)^2To guarantee (_T_S^-1) logd/δ/n is the leading term, we only need (_T_S^-1) logd/δ/n to dominate the rest of the terms. Hence, if we further have n≥ cN_2log(d/δ), we have_T^1/2(β_-β^⋆)_2^2≤ 9c^2(_T_S^-1) logd/δ/n.HereN_2:=max{ (B_2_T^1/2_S^-1/2_2^2(_S^-1)/(_T_S^-1) )^2, (B_3_T^1/2_S^-1/2_2^2(_S^-1)^1.5/(_T_S^-1) )^2,(B^2_1B_2_T^1/2_S^-1/2_2^2_S^-1_2^2log^2γ (κ̃^-1/2α_1)/(_T_S^-1) )^2/3, (B^3_1B_3_T^1/2_S^-1/2_2^2_S^-1_2^3log^3γ (κ̃^-1/2α_1)/(_T_S^-1) )^1/2,B^2_1_T^1/2_S^-1/2_2^2_S^-1_2log^2γ (κ^-1/2α_1)/(_T_S^-1) }.To summarize, we show that for any δ∈ (0,1) and any n≥ cmax{N_1log(d/δ), N_2log(d/δ), N(δ)}, with probability at least 1-2δ, we have_T^1/2(β_-β^⋆)_2^2≤ 9c^2(_T_S^-1) logd/δ/n. §.§ Proofs for Theorem <ref>The detailed version of Theorem <ref> is stated as the following.Suppose the model classsatisfies Assumption <ref>. Then we haveinf_β̂sup_β^⋆∈_β_0(B)(_T(β^⋆)^-1_S(β^⋆))^-1_x_i∼_S(X) y_i|x_i∼ f(y|x;β^⋆)[R_β^⋆(β̂)]≥1/16·1/2n+π^2d/R^2_1(_T(β_0)^-2_S(β_0))(_T(β_0)^-1_S(β_0))^-1,whereR_1:=1/4√(λ_min(_T(β_0))/λ_max(_T(β_0)))·min{λ^2_min(_S(β_0))/4L_Sλ_max(_S(β_0)),λ_min(_T(β_0))/4B_3+2L_T,B}. We first present some useful lemmas that will be used in the proof of Theorem <ref>. Under Assumptions <ref>, <ref> and <ref>, we can choose R_0≤ B such that for any β,β^⋆∈_β_0(R_0):1/2·^-1_S(β_0)≼^-1_S(β)≼ 2·^-1_S(β_0),1/2·_T(β_0)≼_x∼_T(X) y|x∼ f(y|x;β^⋆)[∇^2ℓ(x,y,β)]≼ 2·_T(β_0).We can further choose R_1≤ R_0 such that for any β^⋆∈_β_0(R_1),β∉_β_0(R_0): R_β^⋆(β)≥ R_β^⋆(β_0). Taking β^⋆=β, Lemma <ref> (<ref>) implies for any β∈_β_0(R_0):1/2·_T(β_0)≼_T(β)≼ 2·_T(β_0). Let C_β_0(B):={β∈^d | β-β_0∈[-B,B]^d} be a cube around β_0. For any β_0∈^d and B>0, there exists a prior density λ(β) supported on C_β_0(B) such that for any estimator β̂, we have_β^⋆∼λ(β)_x_i∼_S(X) y_i|x_i∼ f(y|x;β^⋆)[(β̂-β^⋆)^T_T(β_0)(β̂-β^⋆)]≥(_T(β_0)^-1_S(β_0))^2/n_β^⋆∼λ(β)[(^-1_S(β_0)_S(β^⋆)^-1_S(β_0)_T(β_0))]+π^2/B^2(_T(β_0)^-2_S(β_0))The proofs for the above lemmas are delivered to the end of this subsection. With Lemma <ref> and Lemma <ref> in hand, we are now ready to prove Theorem <ref>. For any estimator β̂, we defineβ̂^p:= β̂ β̂∈_β_0(R_0)β_0β̂∉_β_0(R_0).By Lemma <ref>, for any β^⋆∈_β_0(R_1), we have R_β^⋆(β̂)≥ R_β^⋆(β̂^p). 
We then haveinf_β̂sup_β^⋆∈_β_0(B)(_T(β^⋆)^-1_S(β^⋆))^-1_x_i∼_S(X) y_i|x_i∼ f(y|x;β^⋆)[R_β^⋆(β̂)]≥inf_β̂sup_β^⋆∈_β_0(R_1)(_T(β^⋆)^-1_S(β^⋆))^-1_x_i∼_S(X) y_i|x_i∼ f(y|x;β^⋆)[R_β^⋆(β̂)]≥inf_β̂^psup_β^⋆∈_β_0(R_1)(_T(β^⋆)^-1_S(β^⋆))^-1_x_i∼_S(X) y_i|x_i∼ f(y|x;β^⋆)[R_β^⋆(β̂^p)]≥inf_β̂∈_β_0(R_0)sup_β^⋆∈_β_0(R_1)(_T(β^⋆)^-1_S(β^⋆))^-1_x_i∼_S(X) y_i|x_i∼ f(y|x;β^⋆)[R_β^⋆(β̂)],where the first inequality follows from the fact that R_1≤ R_0≤ B, the second inequality follows from R_β^⋆(β̂)≥ R_β^⋆(β̂^p), and the third inequality follows from β̂^p∈_β_0(R_0). For any β^⋆∈_β_0(R_1)⊆_β_0(R_0), by (<ref>) and (<ref>), we have_T(β^⋆)≼ 2_T(β_0),^-1_S(β^⋆)≼ 2^-1_S(β_0),which implies(_T(β^⋆)^-1_S(β^⋆))^-1≥1/4(_T(β_0)^-1_S(β_0))^-1 .Combine (<ref>) and (<ref>), we haveinf_β̂sup_β^⋆∈_β_0(B)(_T(β^⋆)^-1_S(β^⋆))^-1_x_i∼_S(X) y_i|x_i∼ f(y|x;β^⋆)[R_β^⋆(β̂)]≥1/4(_T(β_0)^-1_S(β_0))^-1inf_β̂∈_β_0(R_0)sup_β^⋆∈_β_0(R_1)_x_i∼_S(X) y_i|x_i∼ f(y|x;β^⋆)[R_β^⋆(β̂)].By Taylor expansion, for any β̂∈_β_0(R_0),β^⋆∈_β_0(R_1), we haveR_β^⋆(β̂) =R_β^⋆(β^⋆)+(β̂-β^⋆)^T_x∼_T(X) y|x∼ f(y|x;β^⋆)[∇ℓ(x,y,β^⋆)]+1/2 (β̂-β^⋆)^T_x∼_T(X) y|x∼ f(y|x;β^⋆)[∇^2ℓ(x,y,β̃)](β̂-β^⋆)=1/2 (β̂-β^⋆)^T_x∼_T(X) y|x∼ f(y|x;β^⋆)[∇^2ℓ(x,y,β̃)](β̂-β^⋆)for some β̃∈_β_0(R_0). By Lemma <ref> (<ref>), it then holds thatR_β^⋆(β̂)≥1/4(β̂-β^⋆)^T_T(β_0)(β̂-β^⋆) .By (<ref>) and (<ref>), we then haveinf_β̂sup_β^⋆∈_β_0(B)(_T(β^⋆)^-1_S(β^⋆))^-1_x_i∼_S(X) y_i|x_i∼ f(y|x;β^⋆)[R_β^⋆(β̂)]≥1/16(_T(β_0)^-1_S(β_0))^-1inf_β̂∈_β_0(R_0)sup_β^⋆∈_β_0(R_1)_x_i∼_S(X) y_i|x_i∼ f(y|x;β^⋆)[(β̂-β^⋆)^T_T(β_0)(β̂-β^⋆)]≥1/16(_T(β_0)^-1_S(β_0))^-1inf_β̂∈_β_0(R_0)sup_β^⋆∈ C_β_0(R_1/√(d))_x_i∼_S(X) y_i|x_i∼ f(y|x;β^⋆)[(β̂-β^⋆)^T_T(β_0)(β̂-β^⋆)],where the last inequality follows from the fact that C_β_0(R_1/√(d))⊆_β_0(R_1). By Lemma <ref>, there exists a prior density λ(β) supported on C_β_0(R_1/√(d)) such that for any estimator β̂, we have_β^⋆∼λ(β)_x_i∼_S(X) y_i|x_i∼ f(y|x;β^⋆)[(β̂-β^⋆)^T_T(β_0)(β̂-β^⋆)]≥(_T(β_0)^-1_S(β_0))^2/n_β^⋆∼λ(β)[(^-1_S(β_0)_S(β^⋆)^-1_S(β_0)_T(β_0))]+π^2d/R^2_1(_T(β_0)^-2_S(β_0))≥(_T(β_0)^-1_S(β_0))^2/2n(_T(β_0)^-1_S(β_0))+π^2d/R^2_1(_T(β_0)^-2_S(β_0)).Here the last inequality uses the fact that for any β^⋆∈ C_β_0(R_1/√(d))⊆_β_0(R_0), by Lemma <ref> (<ref>), we have ^-1_S(β_0)≼ 2^-1_S(β^⋆), which implies_β^⋆∼λ(β)[(^-1_S(β_0)_S(β^⋆)^-1_S(β_0)_T(β_0))]≤_β^⋆∼λ(β)[(2^-1_S(β^⋆)_S(β^⋆)^-1_S(β_0)_T(β_0))] =2(_T(β_0)^-1_S(β_0)).We then conclude for any estimator β̂sup_β^⋆∈ C_β_0(R_1/√(d))_x_i∼_S(X) y_i|x_i∼ f(y|x;β^⋆)[(β̂-β^⋆)^T_T(β_0)(β̂-β^⋆)]≥_β^⋆∼λ(β)_x_i∼_S(X) y_i|x_i∼ f(y|x;β^⋆)[(β̂-β^⋆)^T_T(β_0)(β̂-β^⋆)]≥(_T(β_0)^-1_S(β_0))^2/2n(_T(β_0)^-1_S(β_0))+π^2d/R^2_1(_T(β_0)^-2_S(β_0)).Combine (<ref>) and (<ref>), we haveinf_β̂sup_β^⋆∈_β_0(B)(_T(β^⋆)^-1_S(β^⋆))^-1_x_i∼_S(X) y_i|x_i∼ f(y|x;β^⋆)[R_β^⋆(β̂)]≥1/16(_T(β_0)^-1_S(β_0))^-1·(_T(β_0)^-1_S(β_0))^2/2n(_T(β_0)^-1_S(β_0))+π^2d/R^2_1(_T(β_0)^-2_S(β_0))=1/16·1/2n+π^2d/R^2_1(_T(β_0)^-2_S(β_0))(_T(β_0)^-1_S(β_0))^-1.Thus we prove Theorem <ref>. In the following, we prove Lemma <ref> and Lemma <ref>. Proofs for Lemma <ref>We choose R_0 := min{λ^2_min(_S(β_0))/4L_Sλ_max(_S(β_0)),λ_min(_T(β_0))/4B_3+2L_T,B}, R_1:=1/4√(λ_min(_T(β_0))/λ_max(_T(β_0)))· R_0.In the sequel, we will show the aforementioned choices of R_0 and R_1 satisfy the conditions outlined in Lemma <ref>.First of all, we show (<ref>) holds. Fix any β∈_β_0(R_0). 
By Assumption <ref>, we have_S(β)-_S(β_0)_2≤ L_Sβ-β_0_2≤ L_S R_0,which implies^-1_S(β)-^-1_S(β_0)_2 ≤^-1_S(β_0)_2·_S(β)-_S(β_0)_2·^-1_S(β)_2 ≤L_S R_0/λ_min(_S(β_0))λ_min(_S(β)).By Weyl's inequality (Lemma 2.2 in <cit.>), we have|λ_min(_S(β))-λ_min(_S(β_0))| ≤_S(β)-_S(β_0)_2 ≤ L_SR_0.Note thatR_0 ≤λ^2_min(_S(β_0))/4L_Sλ_max(_S(β_0))≤λ_min(_S(β_0))/2L_S.Thus we haveλ_min(_S(β)) ≥λ_min(_S(β_0))-L_SR_0 ≥1/2λ_min(_S(β_0)),which implies ^-1_S(β)-^-1_S(β_0)_2 ≤L_S R_0/λ_min(_S(β_0))λ_min(_S(β))≤2L_S R_0/λ^2_min(_S(β_0))≤1/2λ_max(_S(β_0)).Then for any x∈^d, we havex^T(^-1_S(β)-1/2^-1_S(β_0))x=1/2 x^T^-1_S(β_0)x+x^T(^-1_S(β)-^-1_S(β_0))x≥x^2_2/2λ_max(_S(β_0))-x^2_2·^-1_S(β)-^-1_S(β_0)_2=x^2_2( 1/2λ_max(_S(β_0))-^-1_S(β)-^-1_S(β_0)_2)≥ 0,where the last inequality follows from (<ref>).Thus we conclude ^-1_S(β)≽1/2^-1_S(β_0). Similarly, we can show that ^-1_S(β)≼ 2^-1_S(β_0).As a result, we show that (<ref>) holds.Next, we show (<ref>) holds. Fix any β^⋆, β∈_β_0(R_0). By Assumption <ref>, for any x∈, y∈, we have∇^2ℓ(x,y,β)-∇^2ℓ(x,y,β^⋆)_2 ≤ B_3β-β^⋆_2 ≤ 2B_3R_0,which implies _x∼_T(X) y|x∼ f(y|x;β^⋆)[∇^2ℓ(x,y,β)]-_x∼_T(X) y|x∼ f(y|x;β^⋆)[∇^2ℓ(x,y,β^⋆)]_2≤_x∼_T(X) y|x∼ f(y|x;β^⋆)[∇^2ℓ(x,y,β)-∇^2ℓ(x,y,β^⋆)_2] ≤ 2B_3R_0.By Assumption <ref>, we have_T(β^⋆) - _T(β_0)_2 ≤ L_Tβ^⋆-β_0_2 ≤ L_T R_0Thus, by (<ref>) and (<ref>), we have_x∼_T(X) y|x∼ f(y|x;β^⋆)[∇^2ℓ(x,y,β)]-_T(β_0)_2≤_x∼_T(X) y|x∼ f(y|x;β^⋆)[∇^2ℓ(x,y,β)]-_x∼_T(X) y|x∼ f(y|x;β^⋆)[∇^2ℓ(x,y,β^⋆)]_2+_T(β^⋆) - _T(β_0)_2≤ (2B_3+L_T)R_0≤1/2λ_min(_T(β_0)),where the last inequality follows from the choice of R_0. Consequently, for any x∈^d, we havex^T(_x∼_T(X) y|x∼ f(y|x;β^⋆)[∇^2ℓ(x,y,β)]-1/2_T(β_0))x=1/2 x^T_T(β_0) x+x^T(_x∼_T(X) y|x∼ f(y|x;β^⋆)[∇^2ℓ(x,y,β)]-_T(β_0))x≥1/2x^2_2λ_min(_T(β_0))-x^2_2_x∼_T(X) y|x∼ f(y|x;β^⋆)[∇^2ℓ(x,y,β)]-_T(β_0)_2≥1/2x^2_2λ_min(_T(β_0))-1/2x^2_2λ_min(_T(β_0))=0.We then conclude _x∼_T(X) y|x∼ f(y|x;β^⋆)[∇^2ℓ(x,y,β)]≽1/2_T(β_0). Similarly, we can show that_x∼_T(X) y|x∼ f(y|x;β^⋆)[∇^2ℓ(x,y,β)]≼ 2_T(β_0). Thus we show that (<ref>) holds.Finally, we need to show that for any β^⋆∈_β_0(R_1),β∉_β_0(R_0): R_β^⋆(β)≥ R_β^⋆(β_0). Fix any β^⋆∈_β_0(R_1),β∉_β_0(R_0). We denote β':={λβ+(1-λ)β^⋆ | λ∈ [0,1]}∩{β' | β'-β_0_2=R_0}.By the choice of R_1, we know that R_1≤ R_0/2, which impliesβ'-β^⋆_2 ≥β'-β_0_2-β_0-β^⋆_2 ≥ R_0-R_1 ≥R_0/2.By convexity of R_β^⋆(·) assumed in Assumption <ref> and R_β^⋆(β)≥ R_β^⋆(β^⋆), we have R_β^⋆(β)≥ R_β^⋆(β'). Thus, we obtainR_β^⋆(β)- R_β^⋆(β^⋆)≥ R_β^⋆(β')- R_β^⋆(β^⋆)Taylor=1/2 (β'-β^⋆)^T_x∼_T(X) y|x∼ f(y|x;β^⋆)[∇^2ℓ(x,y,β̃)](β'-β^⋆)by (<ref>)≥1/4 (β'-β^⋆)^T_T(β_0)(β'-β^⋆)≥1/4λ_min(_T(β_0))β'-β^⋆^2_2by (<ref>)≥R^2_0/16λ_min(_T(β_0)).Note thatR_β^⋆(β_0)- R_β^⋆(β^⋆)Taylor=1/2 (β_0-β^⋆)^T_x∼_T(X) y|x∼ f(y|x;β^⋆)[∇^2ℓ(x,y,β̃)](β_0-β^⋆)by (<ref>)≤(β_0-β^⋆)^T_T(β_0)(β_0-β^⋆)≤λ_max(_T(β_0))β_0-β^⋆^2_2≤ R^2_1λ_max(_T(β_0))=R^2_0/16λ_min(_T(β_0)),where the last equation follows from the choice of R_1. By (<ref>) and (<ref>), we obtain R_β^⋆(β)≥ R_β^⋆(β_0). Thus, we finish the proof of Lemma <ref>. Proofs for Lemma <ref>Let β_0=[β_0,1,…,β_0,d]^T, β=[β_1,…,β_d]^T andf_i(x) := π/4Bcos(π/2B(x-β_0,i)),i=1,…,d.We define the prior density asλ(β):= Π^d_i=1f_i(β_i)β∈ C_β_0(B)0β∉ C_β_0(B) ,which is supported on C_β_0(B). 
In the sequel, we will show this prior density satisfies the condition outlined in Lemma <ref>.For notation simplicity, we denote A = (A_ij):=^-1_T(β_0),C = (C_ij):=_T(β_0)^-1_S(β_0).By multivariate van Trees inequality (Theorem 1 in <cit.>), for any estimator β̂, we have_β^⋆∼λ(β)_x_i∼_S(X) y_i|x_i∼ f(y|x;β^⋆)[(β̂-β^⋆)^T_T(β_0)(β̂-β^⋆)]≥(_T(β_0)^-1_S(β_0))^2/n_β^⋆∼λ(β)[(^-1_S(β_0)_S(β^⋆)^-1_S(β_0)_T(β_0))]+(λ),where (λ) = ∫_C_β_0(B)(∑_i,j,k,ℓA_ijC_ikC_jℓ∂/∂β_kλ(β)∂/∂β_ℓλ(β))1/λ(β) dβ.By the choice of λ(β), we have∫_C_β_0(B)(∑_i,j,k,ℓ k≠ℓA_ijC_ikC_jℓ∂/∂β_kλ(β)∂/∂β_ℓλ(β))1/λ(β) dβ=∫_C_β_0(B)∑_i,j,k,ℓ k≠ℓA_ijC_ikC_jℓf'_k(β_k)f'_ℓ(β_ℓ)Π_i≠ k,ℓf_i(β_i)dβ=∑_i,j,k,ℓ k≠ℓA_ijC_ikC_jℓ∫_C_β_0(B)f'_k(β_k)f'_ℓ(β_ℓ)Π_i≠ k,ℓf_i(β_i)dβ=0.Here the last equation follows from the fact∫^β_0,k+B_β_0,k-Bf'_k(β_k)dβ_k=∫^β_0,ℓ+B_β_0,ℓ-Bf'_ℓ(β_ℓ)dβ_ℓ=0. Note that∫_C_β_0(B)(∑_i,j,k,ℓ k=ℓA_ijC_ikC_jℓ∂/∂β_kλ(β)∂/∂β_ℓλ(β))1/λ(β) dβ=∑_i,j,kA_ijC_ikC_jk∫_C_β_0(B)(f'_k(β_k))^2/f_k(β_k)Π_i≠ kf_i(β_i)dβ=∑_i,j,kA_ijC_ikC_jk∫^β_0,k+B_β_0,k-B(f'_k(β_k))^2/f_k(β_k)dβ_k=π^2/B^2∑_i,j,kA_ijC_ikC_jk=π^2/B^2(ACC^T).Thus, we have(λ)= ∫_C_β_0(B)(∑_i,j,k,ℓ k≠ℓA_ijC_ikC_jℓ∂/∂β_kλ(β)∂/∂β_ℓλ(β))1/λ(β) dβ+ ∫_C_β_0(B)(∑_i,j,k,ℓ k=ℓA_ijC_ikC_jℓ∂/∂β_kλ(β)∂/∂β_ℓλ(β))1/λ(β) dβ=π^2/B^2(ACC^T)=π^2/B^2(_T(β_0)^-2_S(β_0)).Combine (<ref>) and (<ref>), we prove Lemma <ref>.§ PROOFS FOR SECTION <REF> §.§ Proofs for Proposition <ref> and Theorem <ref>For our linear regression model, ℓ(x,y,β) = 1/2log (2 π) + 1/2 (y-x^Tβ)^2.The convexity of ℓ in β immediately implies Assumption <ref>.We then have∇ℓ(x,y,β) =-x(y-x^Tβ),∇^2ℓ(x,y,β) =xx^T,∇^3ℓ(x,y,β) =0, _S=_x∼_S(X)[xx^T]=I_d , _T=_x∼_T(X)[xx^T] =αα^T+σ^2 I_d.Therefore Assumption <ref> is satisfied with L_S=L_T=0 and Assumption <ref> trivially holds. Note that ∇ℓ(x_i,y_i,β^⋆)=-x_iε_i. Since x_i_2 is √(d)-subgaussian and |ε_i| is 1-subgaussian, by Lemma 2.7.7 in <cit.>, it holds that x_i_2|ε_i| is √(d)-subexponential random variable. Thus A∇ℓ(x_i,y_i,β^⋆)_2 is A_2√(d)-subexponential random variable. Then, by Lemma <ref> with u_i=A(∇ℓ(x_i,y_i,β^⋆)-[∇ℓ(x_i,y_i,β^⋆)])=A∇ℓ(x_i,y_i,β^⋆), V=[u_i^2_2]=n ·𝔼A(∇ℓ_n(β^⋆)-[∇ℓ_n(β^⋆)])_2^2, α=1 and B_u^(α)=c√(d)A_2, we have for any matrix A∈^d× d, and any δ∈ (0,1), with probability at least 1-δ:A(∇ℓ_n(β^⋆)-[∇ℓ_n(β^⋆)])_2≤ c(√(Vlogd/δ/n)+√(d)A_2log (√(d)A_2/√(V))logd/δ/n),which satisfies the gradient concentration in Assumption <ref> with B_1=c√(d) and γ=1.Note that x_i∼(0,I_d). Thus, by Theorem 13.3 in <cit.>, for any δ∈ (0,1), with probability at least 1-δ, we have∇^2ℓ_n(β^⋆)-[∇^2ℓ_n(β^⋆]_2=1/n∑^n_i=1x_ix_i^T-I_d_2≤ c(√(dlog (1/δ)/n)+dlog (1/δ)/n)≤ 2c√(dlog (1/δ)/n),where the last inequality holds if n≥(dlog1/δ). Hence linear regression model satisfies the matrix concentration in Assumption <ref> with B_2=c√(d), N(δ)= d log1/δ. Since ∇^3ℓ≡ 0, we know Assumption <ref> holds with B_3=0.Note that ∇^2ℓ_n(β)=1/n∑^n_i=1x_ix_i^T=1/n X^TX,where X:=[x_1,…,x_n]^T. Given that {x_i}^n_i=1 are i.i.d (0,I_d), it follows that X is almost surely full rank when n≥ d. Hence, when n≥ d, we have∇^2ℓ_n(β)=1/n∑^n_i=1x_ix_i^T=1/n X^TX≻ 0.Consequently, ℓ_n(·) is strictly convex and thus satisfies Assumption <ref>. Finally, Theorem <ref> follows directly from Theorem <ref> with γ=1, B_1=c√(d), B_2=c√(d), B_3=0, N(δ)= d log1/δ, _S=I_d and _T=αα^T+σ^2 I_d. §.§ Proofs for Proposition <ref> and Theorem <ref> In the following, we will show the logistic regression model satisfies Assumptions <ref> and <ref>. 
For logistic regression, the loss function is defined as ℓ(x,y,β)=log(1+e^x^Tβ)-y(x^Tβ).We then have∇ℓ(x,y,β) =x/1+e^-x^Tβ-xy,∇^2ℓ(x,y,β) =xx^T/2+e^-x^Tβ+e^x^Tβ,∇^3ℓ(x,y,β) =e^-x^Tβ-e^x^Tβ/(2+e^-x^Tβ+e^x^Tβ)^2· x⊗ x ⊗ x.Here ⊗ represents the tensor product and x⊗ x ⊗ x∈^d× d× d with (x⊗ x ⊗ x)_ijk=x_i x_j x_k. The convexity of ℓ in β immediately implies Assumption <ref>; Assumption <ref> trivially holds.Note that on source domain x_2 = √(d) and |y|≤ 1. Hence we have for any (x,y) on source domain:∇ℓ(x,y,β^⋆)_2 =x/1+e^-x^Tβ^⋆-xy_2 ≤x/1+e^-x^Tβ^⋆_2+xy_2 ≤x_2+x_2=2√(d),∇^2ℓ(x,y,β^⋆)_2 =xx^T/2+e^-x^Tβ^⋆+e^x^Tβ^⋆_2 ≤xx^T_2≤x^2_2≤ d. By Lemma <ref> with u_i=A(∇ℓ(x_i,y_i,β^⋆)-[∇ℓ(x_i,y_i,β^⋆)])=A∇ℓ(x_i,y_i,β^⋆), V=[u_i^2_2], α=+∞, B_u^(α)=2√(d)A_2, we have for any matrix A∈^d× d, and any δ∈ (0,1), with probability at least 1-δ:A(∇ℓ_n(β^⋆)-[∇ℓ_n(β^⋆)])_2≤ c(√(Vlogd/δ/n)+√(d)A_2logd/δ/n),which satisfies the gradient concentration in Assumption <ref> with B_1=c√(d) and γ=0. By matrix Hoeffding inequality, logistic regression model satisfies the matrix concentration in Assumption <ref> with B_2=cd. We conclude that logistic regression model satisfies Assumption <ref> with N(δ)=0, B_1=c√(d), γ=0, B_2=cd.Note that for x on source domain, we have x_2≤√(d); for x on target domain, we have x_2≤√(d)+r. Thus, it holds that∇^3ℓ(x,y,β)_2 =e^-x^Tβ-e^x^Tβ/(2+e^-x^Tβ+e^x^Tβ)^2· x⊗ x ⊗ x_2 ≤_(i)x⊗ x ⊗ x_2 ≤x^3_2≤ (√(d)+r)^3.Here (i) uses the fact that|e^-x^Tβ-e^x^Tβ/(2+e^-x^Tβ+e^x^Tβ)^2| ≤e^-x^Tβ+e^x^Tβ/(2+e^-x^Tβ+e^x^Tβ)^2≤1/2+e^-x^Tβ+e^x^Tβ≤ 1.Hence logistic regression satisfies Assumptions <ref> with B_3=(√(d)+r)^3. Notice that this also implies Assumption <ref>: By definition,_S(β) := _x∼_S(X)[∇^2ℓ (x,y,β)],therefore _S(β_1)-_S(β_2) =_x∼_S(X)[∇^2ℓ (x,y,β_1)-∇^2ℓ (x,y,β_2)] ≤_x∼_S(X)[∇^2ℓ (x,y,β_1)-∇^2ℓ (x,y,β_2)] ≤ (√(d))^3 β_1-β_2.Similarly_T(β_1)-_T(β_2)≤ (√(d)+r)^3 β_1-β_2.These inequlities shows that logistic regression model satisfies Assumption <ref> with L_S=d^1.5 and L_T=(√(d)+r)^3. Note that ∇^2ℓ_n(β) =1/n∑^n_i=1∇^2ℓ(x_i,y_i,β)=1/n∑^n_i=1x_ix^T_i/2+e^-x^T_iβ+e^x^T_iβ =1/n X^TAX,where X:=[x_1,…,x_n]^T∈^n× d and A:=(1/(2+e^-x^T_iβ+e^x^T_iβ))≻ 0. When n ≥d, X is full rank (i.e., (X)=d) almost surely, consequently, ℓ_n(·) is strictly convex and thus satisfies Assumption <ref>. By Theorem <ref>, we have when n≥(N^⋆logd/δ),R_β^⋆(β_)≲(_T^-1_S)logd/δ/n.Here N^⋆:= (1 + κ̃/κ)^2 ·max{κ̃^-1α_1^2log^2γ((1+κ̃/κ)κ̃^-1α^2_1), α_2^2,κ̃(1 + _T^1/2_S^-1_T^1/2_2^-2)α_3^2 },where α_1 := B_1 _S^-1_2^0.5, α_2 := B_2 _S^-1_2, α_3 := B_3 _S^-1_2^1.5,κ:=(_T_S^-1) /_T^1/2_S^-1_T^1/2_2,κ̃ := (_S^-1)/_S^-1_2. Now it remains to calculate the quantities N^⋆ and (_T^-1_S) for this instance, where the crucial part is to identify what are _S and _T. The following two lemmas give the characterization of _S and _T. Under the conditions of Theorem <ref>, we have _S=U(λ_1,λ_2,…,λ_2)U^T and _T=U(λ_1,λ_2+r^2λ_3,λ_2,…,λ_2)U^T for an orthonormal matrix U. Where λ_1:=_x∼ (^d-1(√(d)))[(β^⋆ Tx)^2/2+exp (β^⋆ Tx) +exp (-β^⋆ Tx)],λ_2:=_x∼ (^d-1(√(d)))[(β_⊥^⋆ Tx)^2/2+exp (β^⋆ Tx) +exp (-β^⋆ Tx)],λ_3:=_x∼ (^d-1(√(d)))[1/2+exp (β^⋆ Tx) +exp (-β^⋆ Tx)].Under the conditions of Theorem <ref>, there exist absolute constants c,C, c'>0 such that c<λ_1, λ_2, λ_3 <C, for d ≥ c'.The proofs for these two lemmas are in the next section. With Lemma <ref>, we have _T _S^-1 = U(1,1+ r^2λ_3/λ_2,…,1)U^T, _S^-1 = U(1/λ_1,1/λ_2,…,1/λ_2)U^T. By Lemma <ref>, since λ_1, λ_2, λ_3 = O(1), we have (_T _S^-1) = d + r^2λ_3/λ_2≍ d + r^2, _T _S^-1_2 = 1 + r^2λ_3/λ_2≍ 1 + r^2. 
Similarly(_S^-1) = λ_1^-1 + (d-1) λ_2^-1≍ d, _S^-1_2 = max{λ_1^-1,λ_2^-1}≍ 1. Also recall that B_1=√(d), B_2=d, B_3=(√(d)+r)^3, plug in all those quantities we have κ =(_T _S^-1)/_T _S^-1_2≍d+r^2/1+r^2, κ̃ =(_S^-1)/_S^-1_2≍ d, α_1 = B_1 _S^-1_2^0.5≍√(d), α_2 = B_2 _S^-1_2≍ d, α_3 = B_3 _S^-1_2^1.5≍(√(d) + r)^3. Therefore we have when n≥(N^⋆logd/δ),R_β^⋆(β_)≲(_T^-1_S)logd/δ/n≍(d+r^2)logd/δ/n,whereN^⋆ = (1 + κ̃/κ)^2 ·max{κ̃^-1α_1^2log^2γ((1+κ̃/κ)κ̃^-1α^2_1), α_2^2,κ̃(1 + _T^1/2_S^-1_T^1/2_2^-2)α_3^2 }≍(1+d+r^2d/d+r^2)^2·max{1, d^2, d (1 + (1+r^2)^-2) (√(d)+r)^6} = (1+d+r^2d/d+r^2)^2· d (√(d)+r)^6 . When r ≲ 1, N^⋆≍ d^4. When 1 ≲ r ≲√(d), N^⋆≍ r^4d^4. When √(d)≲ r, N^⋆≍ r^6d^3.§.§.§ Proofs for Lemma <ref> and <ref>The intuition of proving Lemma <ref> and <ref> is that,when d is large, distribution (^d-1(√(d))) behaves similar to distribution (0, I_d) which has good properties (isotropic, independence of each entry, etc.)By definition,_S := _x∼ (^d-1(√(d)))[xx^T/2+exp (β^⋆ Tx) +exp (-β^⋆ Tx)]Let z ∼(0,I_d), then x and z√(d)/z_2 have the same distribution. Therefore _S= _x∼ (^d-1(√(d)))[xx^T/2+exp (β^⋆ Tx) +exp (-β^⋆ Tx)] = _z∼ (0,I_d)[zz^Td/z^2_2/2+exp (β^⋆ Tz ·√(d)/z_2) +exp (-β^⋆ Tz·√(d)/z_2)] = _z∼ (0,I_d)[(β^⋆β^⋆ T + U_⊥U_⊥^T)zz^Td/z^2_2/2+exp (β^⋆ Tz ·√(d)/z_2) +exp (-β^⋆ Tz·√(d)/z_2)]where [β^⋆,U_⊥] ∈^d × d is a orthogonal basis. With this expression, we first prove β^⋆ is an eigenvector of _S with corresponding eigenvalue λ_1._S β^⋆ = _z∼ (0,I_d)[(β^⋆β^⋆ T + U_⊥U_⊥^T)zz^Td/z^2_2/2+exp (β^⋆ Tz ·√(d)/z_2) +exp (-β^⋆ Tz·√(d)/z_2)] β^⋆= _z∼ (0,I_d)[β^⋆β^⋆ Tzz^Td/z^2_2β^⋆/2+exp (β^⋆ Tz ·√(d)/z_2) +exp (-β^⋆ Tz·√(d)/z_2)] + _z∼ (0,I_d)[U_⊥U_⊥^Tzz^Td/z^2_2β^⋆/2+exp (β^⋆ Tz ·√(d)/z_2) +exp (-β^⋆ Tz·√(d)/z_2)] = _z∼ (0,I_d)[(β^⋆ Tz)^2d/z^2_2/2+exp (β^⋆ Tz ·√(d)/z_2) +exp (-β^⋆ Tz·√(d)/z_2)] β^⋆+ _z∼ (0,I_d)[U_⊥U_⊥^Tzz^Td/z^2_2β^⋆/2+exp (β^⋆ Tz ·√(d)/z_2) +exp (-β^⋆ Tz·√(d)/z_2)] = λ_1 β^⋆ + _z∼ (0,I_d)[U_⊥U_⊥^Tzz^Td/z^2_2β^⋆/2+exp (β^⋆ Tz ·√(d)/z_2) +exp (-β^⋆ Tz·√(d)/z_2)].Therefore we only need to prove _z∼ (0,I_d)[U_⊥U_⊥^Tzz^Td/z^2_2β^⋆/2+exp (β^⋆ Tz ·√(d)/z_2) +exp (-β^⋆ Tz·√(d)/z_2)]=0.In fact,_z∼ (0,I_d)[U_⊥^Tzz^Td/z^2_2β^⋆/2+exp (β^⋆ Tz ·√(d)/z_2) +exp (-β^⋆ Tz·√(d)/z_2) ]= _z∼ (0,I_d)[d/z^2_2 (U_⊥^Tz)(z^Tβ^⋆)/2+exp (β^⋆ Tz ·√(d)/z_2) +exp (-β^⋆ Tz·√(d)/z_2) ]= _z∼ (0,I_d)[d/|A|^2 + B^2 AB/2+exp (A ·√(d)/√(|A|^2 + B^2)) +exp (-A·√(d)/|A|^2 + B^2) ]where we let A:=z^Tβ^⋆, B:=U_⊥^Tz. Notice that by the property of z∼(0,I_d), A and B are independent. Also, B is symmetric, i.e., B and -B have the same distribution. Therefore _z∼ (0,I_d)[d/|A|^2 + B^2 AB/2+exp (A ·√(d)/√(|A|^2 + B^2)) +exp (-A·√(d)/|A|^2 + B^2)] replace B by -B=_z∼ (0,I_d)[ -d/|A|^2 + B^2 AB/2+exp (A ·√(d)/√(|A|^2 + B^2)) +exp (-A·√(d)/|A|^2 + B^2)] = -_z∼ (0,I_d)[d/|A|^2 + B^2 AB/2+exp (A ·√(d)/√(|A|^2 + B^2)) +exp (-A·√(d)/|A|^2 + B^2)],which implies _z∼ (0,I_d)[U_⊥U_⊥^Tzz^Td/z^2_2β^⋆/2+exp (β^⋆ Tz ·√(d)/z_2) +exp (-β^⋆ Tz·√(d)/z_2)]=0. Next we will prove that for any β_⊥ such that β_⊥_2=1, β^⋆ Tβ_⊥=0, β_⊥ is an eigenvector of _S with corresponding eigenvalue λ_2.Let [β_⊥, U] be an orthogonal basis (β^⋆ is the first column of U). 
_S β_⊥ = _z∼ (0,I_d)[(β_⊥β_⊥^T + UU^T)zz^Td/z^2_2/2+exp (β^⋆ Tz ·√(d)/z_2) +exp (-β^⋆ Tz·√(d)/z_2)] β_⊥= _z∼ (0,I_d)[β_⊥β_⊥^Tzz^Td/z^2_2β_⊥/2+exp (β^⋆ Tz ·√(d)/z_2) +exp (-β^⋆ Tz·√(d)/z_2)] + _z∼ (0,I_d)[UU^Tzz^Td/z^2_2β_⊥/2+exp (β^⋆ Tz ·√(d)/z_2) +exp (-β^⋆ Tz·√(d)/z_2)] = _z∼ (0,I_d)[(β_⊥^Tz)^2d/z^2_2/2+exp (β^⋆ Tz ·√(d)/z_2) +exp (-β^⋆ Tz·√(d)/z_2)] β_⊥+ _z∼ (0,I_d)[UU^Tzz^Td/z^2_2β_⊥/2+exp (β^⋆ Tz ·√(d)/z_2) +exp (-β^⋆ Tz·√(d)/z_2)] = λ_2 β_⊥ + 0= λ_2 β_⊥Here _z∼ (0,I_d)[UU^Tzz^Td/z^2_2β_⊥/2+exp (β^⋆ Tz ·√(d)/z_2) +exp (-β^⋆ Tz·√(d)/z_2)]=0because of a similar reason as in the previous part.For _T, the proving strategy is similar.For x ∼(^d-1(√(d)))+v on the target domain, where v= rβ_⊥^⋆, let w=x-v=x-rβ_⊥^⋆, then w ∼(^d-1(√(d))). Let z ∼(0,I_d), then w and z√(d)/z_2 have the same distribution. We have_T= _x∼ (^d-1(√(d))) +v[xx^T/2+exp (β^⋆ Tx) +exp (-β^⋆ Tx)] =_w∼ (^d-1(√(d)))[(w+v)(w+v)^T/2+exp (β^⋆ T(w+v)) +exp (-β^⋆ T(w+v))] v^Tβ^⋆=0=_w∼ (^d-1(√(d)))[ww^T+wv^T+vw^T+vv^T/2+exp (β^⋆ Tw) +exp (-β^⋆ Tw)]Therefore _T β^⋆ =_w∼ (^d-1(√(d)))[ww^T+wv^T+vw^T+vv^T/2+exp (β^⋆ Tw) +exp (-β^⋆ Tw)] β^⋆v^Tβ^⋆=0=_w∼ (^d-1(√(d)))[ww^T/2+exp (β^⋆ Tw) +exp (-β^⋆ Tw)] β^⋆= _S β^⋆= λ_1 β^⋆,where the last line follows from the previous proofs. Similarly, for any β̃_̃⊥̃ such that β̃_̃⊥̃_2=1, β_⊥^⋆ Tβ̃_̃⊥̃=0, _T β̃_̃⊥̃ =_w∼ (^d-1(√(d)))[ww^T+wv^T+vw^T+vv^T/2+exp (β^⋆ Tw) +exp (-β^⋆ Tw)] β̃_̃⊥̃v^Tβ̃_̃⊥̃=0=_w∼ (^d-1(√(d)))[ww^T/2+exp (β^⋆ Tw) +exp (-β^⋆ Tw)] β̃_̃⊥̃= _S β̃_̃⊥̃= λ_2 β̃_̃⊥̃.For β_⊥^⋆, _T β_⊥^⋆ =_w∼ (^d-1(√(d)))[ww^T+wv^T+vw^T+vv^T/2+exp (β^⋆ Tw) +exp (-β^⋆ Tw)] β_⊥^⋆=_w∼ (^d-1(√(d)))[ww^T/2+exp (β^⋆ Tw) +exp (-β^⋆ Tw)] β_⊥^⋆+_w∼ (^d-1(√(d)))[wv^T/2+exp (β^⋆ Tw) +exp (-β^⋆ Tw)] β_⊥^⋆+_w∼ (^d-1(√(d)))[vw^T/2+exp (β^⋆ Tw) +exp (-β^⋆ Tw)] β_⊥^⋆+_w∼ (^d-1(√(d)))[vv^T/2+exp (β^⋆ Tw) +exp (-β^⋆ Tw)] β_⊥^⋆ := I_1+ I_2+ I_3 +I_4.As in the previous proofs, I_1=_S β_⊥^⋆ = λ_2 β_⊥^⋆. 
I_2=_w∼ (^d-1(√(d)))[wv^T/2+exp (β^⋆ Tw) +exp (-β^⋆ Tw)] β_⊥^⋆v=rβ_⊥^⋆= r _w∼ (^d-1(√(d)))[w β_⊥^⋆ Tβ_⊥^⋆/2+exp (β^⋆ Tw) +exp (-β^⋆ Tw)]β_⊥^⋆=1= r _w∼ (^d-1(√(d)))[w/2+exp (β^⋆ Tw) +exp (-β^⋆ Tw)]=0.where the last lines follows from w is symmetric and w/2+exp (β^⋆ Tw) +exp (-β^⋆ Tw) is a odd function of w.I_3=_w∼ (^d-1(√(d)))[vw^T/2+exp (β^⋆ Tw) +exp (-β^⋆ Tw)] β_⊥^⋆v=rβ_⊥^⋆= r _w∼ (^d-1(√(d)))[β_⊥^⋆ w^Tβ_⊥^⋆/2+exp (β^⋆ Tw) +exp (-β^⋆ Tw)]= r _w∼ (^d-1(√(d)))[ w^Tβ_⊥^⋆/2+exp (β^⋆ Tw) +exp (-β^⋆ Tw)] β_⊥^⋆=0.where the last lines follows from w is symmetric and w^Tβ_⊥^⋆/2+exp (β^⋆ Tw) +exp (-β^⋆ Tw) is a odd function of w.I_4=_w∼ (^d-1(√(d)))[vv^T/2+exp (β^⋆ Tw) +exp (-β^⋆ Tw)] β_⊥^⋆v=rβ_⊥^⋆= r^2_w∼ (^d-1(√(d)))[β_⊥^⋆β_⊥^⋆ Tβ_⊥^⋆/2+exp (β^⋆ Tw) +exp (-β^⋆ Tw)]β_⊥^⋆=1= r^2_w∼ (^d-1(√(d)))[1/2+exp (β^⋆ Tw) +exp (-β^⋆ Tw)] β_⊥^⋆=r^2λ_3 β_⊥^⋆.Combine the calculations of I_1,I_2,I_3,I_4, we have _Tβ_⊥^⋆ = I_1 + I_2 + I_3 + I_4 = λ_2 β_⊥^⋆ + r^2λ_3 β_⊥^⋆= (λ_2 + r^2λ_3) β_⊥^⋆.In conclusion, we have _S=U(λ_1,λ_2,…,λ_2)U^T and _T=U(λ_1,λ_2+r^2λ_3,λ_2,…,λ_2)U^T for an orthonormal matrix U, where U= [β^⋆, β_⊥^⋆, ⋯].Recall the definition of λ_1, λ_2, λ_3:λ_1:=_x∼ (^d-1(√(d)))[(β^⋆ Tx)^2/2+exp (β^⋆ Tx) +exp (-β^⋆ Tx)]=_z∼(0,I_d)[d/z_2^2(β^⋆ Tz)^2/2+exp (√(d)/z_2β^⋆ Tz) +exp (-√(d)/z_2β^⋆ Tz)],λ_2:=_x∼ (^d-1(√(d)))[(β_⊥^⋆ Tx)^2/2+exp (β^⋆ Tx) +exp (-β^⋆ Tx)]=_z∼(0,I_d)[d/z_2^2(β_⊥^⋆ Tz)^2/2+exp (√(d)/z_2β^⋆ Tz) +exp (-√(d)/z_2β^⋆ Tz)],λ_3:=_x∼ (^d-1(√(d)))[1/2+exp (β^⋆ Tx) +exp (-β^⋆ Tx)]=_z∼(0,I_d)[1/2+exp (√(d)/z_2β^⋆ Tz) +exp (-√(d)/z_2β^⋆ Tz)].Next we will show that there exists constants c,C,c'>0 such that when d≥ c', we have c ≤λ_1 ≤ C. The proofs for λ_2 and λ_3 are similar. Notice that, when d is large, d/z_2^2 concentrates around 1. If we replace d/z_2^2 by 1 in the above expressions, we have λ_1≈_z∼(0,I_d)[(β^⋆ Tz)^2/2+exp (β^⋆ Tz) +exp (-β^⋆ Tz)]Since β^⋆ Tz ∼(0,1) when z ∼(0,I_d) and β^⋆=1,we have _z∼(0,I_d)[(β^⋆ Tz)^2/2+exp (β^⋆ Tz) +exp (-β^⋆ Tz)] = _y∼(0,1)[y^2/2+exp (y) +exp (-y)]which is a absolute constant greater than zero and not related to d. Following this intuition, we can bound λ_1 as the following. We first state the concentration of the norm of (0,I_d). By <cit.> (3.7), (|z-√(d)| ≥ t) ≤ 2 e^-4ct^2for some absolute constant c>0. Take t=√(d)/2, we have(z/√(d)∉ [1/2, 3/2]) ≤ 2e^-cd.With this concentration, we do the following truncation:λ_1 =_z∼(0,I_d)[d/z_2^2(β^⋆ Tz)^2/2+exp (√(d)/z_2β^⋆ Tz) +exp (-√(d)/z_2β^⋆ Tz)] = _z∼(0,I_d)[d/z_2^2(β^⋆ Tz)^2/2+exp (√(d)/z_2β^⋆ Tz) +exp (-√(d)/z_2β^⋆ Tz)_z/√(d)∈ [1/2, 3/2]] + _z∼(0,I_d)[d/z_2^2(β^⋆ Tz)^2/2+exp (√(d)/z_2β^⋆ Tz) +exp (-√(d)/z_2β^⋆ Tz)_z/√(d)∉ [1/2, 3/2]] := J_1 + J_2.For J_2, it is obvious that0≤ J_2 ≤d/4(z/√(d)∉ [1/2, 3/2]) ≤d/2 e^-cd. 
For upper bound of J_1, J_1= _z∼(0,I_d)[d/z_2^2(β^⋆ Tz)^2/2+exp (√(d)/z_2β^⋆ Tz) +exp (-√(d)/z_2β^⋆ Tz)_z/√(d)∈ [1/2, 3/2]] ≤_z∼(0,I_d) [4(β^⋆ Tz)^2/4] =1.Therefore λ_1= J_1 + J_2 ≤ 1 + d/2e^-cd.It's obvious that there exists an absolute constant c' such that when d ≥ c', λ_1 ≤ 2.For lower bound of J_1, we have J_1= _z∼(0,I_d)[d/z_2^2(β^⋆ Tz)^2/2+exp (√(d)/z_2β^⋆ Tz) +exp (-√(d)/z_2β^⋆ Tz)_z/√(d)∈ [1/2, 3/2]] ≥_z∼(0,I_d)[4/9(β^⋆ Tz)^2/2+exp (2β^⋆ Tz) +exp (-2β^⋆ Tz)_z/√(d)∈ [1/2, 3/2]]= _z∼(0,I_d)[4/9(β^⋆ Tz)^2/2+exp (2β^⋆ Tz) +exp (-2β^⋆ Tz)] - _z∼(0,I_d)[4/9(β^⋆ Tz)^2/2+exp (2β^⋆ Tz) +exp (-2β^⋆ Tz)_z/√(d)∉ [1/2, 3/2]] ≥_z∼(0,I_d)[4/9(β^⋆ Tz)^2/2+exp (2β^⋆ Tz) +exp (-2β^⋆ Tz)] - _z∼(0,I_d)[4/9(β^⋆ Tz)^2/4_z/√(d)∉ [1/2, 3/2]] ≥_z∼(0,I_d)[4/9(β^⋆ Tz)^2/2+exp (2β^⋆ Tz) +exp (-2β^⋆ Tz)] - _z∼(0,I_d)[z_2^2/9_z/√(d)∉ [1/2, 3/2]] = _y∼(0,1)[4/9y^2/2+exp (2y) +exp (-2y)] - _z∼(0,I_d)[z_2^2/9_z/√(d)∉ [1/2, 3/2]] := c_1 - _z∼(0,I_d)[z_2^2/9_z/√(d)∉ [1/2, 3/2]]Notice that here c_1 is a positive constant not related to d. For the second term,_z∼(0,I_d)[z_2^2/9_z/√(d)∉ [1/2, 3/2]] =_z∼(0,I_d)[z_2^2/9_z/√(d)≤1/2] + _z∼(0,I_d)[z_2^2/9_z/√(d)≥3/2] ≤d/36(z/√(d)≤1/2) + 1/9∫_9/4 d^∞(z_2^2≥ t) d t + 1/9·9/4 d (z_2^2≥9/4 d) by (<ref>)≤d/36 2e^-cd + 1/9∫_9/4 d^∞(z_2^2≥ t) d t + d/4 2e^-cdt = d(y+1)^2≤ de^-cd + 1/9∫_1/2^∞2d(y+1) (z_2≥√(d) +√(d)y ) d y by (<ref>)≤ de^-cd + 1/9∫_1/2^∞2d(y+1) 2e^-4cdy^2d y ≤ de^-cd + 2d∫_1/2^∞y e^-4cdy^2d y ≤ de^-cd + 1/4c e^-cdCombine this inequality and previous inequalities of J_1 and J_2, we haveλ_1= J_1 + J_2 ≥ c_1 - de^-cd - 1/4c e^-cdTherefore it's obvious that there exists an absolute constant c' such that when d ≥ c', λ_1 ≥c_1/2. The proof for λ_2 is almost the same, the only difference is that in the numerator, we replace β^⋆ Tz by β_⊥^⋆ T z. The proof for λ_3 is even simpler. For upper bound, λ_3 =_z∼(0,I_d)[1/2+exp (√(d)/z_2β^⋆ Tz) +exp (-√(d)/z_2β^⋆ Tz)] ≤1/4.For lower bound, λ_3 =_z∼(0,I_d)[1/2+exp (√(d)/z_2β^⋆ Tz) +exp (-√(d)/z_2β^⋆ Tz)]≥_z∼(0,I_d)[1/2+exp (√(d)/z_2β^⋆ Tz) +exp (-√(d)/z_2β^⋆ Tz)_z/√(d)∈ [1/2, 3/2]] ≥_z∼(0,I_d)[1/2+exp (2β^⋆ Tz) +exp (-2β^⋆ Tz)_z/√(d)∈ [1/2, 3/2]] = _z∼(0,I_d)[1/2+exp (2β^⋆ Tz) +exp (-2β^⋆ Tz)] -_z∼(0,I_d)[1/2+exp (2β^⋆ Tz) +exp (-2β^⋆ Tz)_z/√(d)∉ [1/2, 3/2]] = c_2 - 1/4(z/√(d)∉ [1/2, 3/2]) ≥ c_2 -1/2 e^-cd.Therefore there exists constant c' such that when d ≥ c', λ_3 ≤c_2/2. §.§ Proofs for Theorem <ref>In this section, our objective is to establish the upper bound of MLE for the phase retrieval model.A direct application of Theorem <ref> is impractical, as Assumption <ref> is not met; notably, both β^⋆, -β^⋆ serve as global minimums of population loss.To circumvent the issue of non-unique global minimums, we employ a methodology similar to that used in proving Theorem <ref>, though with a slightly refined analysis.In the sequel, we will use the same notations as in the proof of Theorem <ref>. Even though the global minimum of population loss for the phase retrieval model isn't unique, meaning it could be either β^⋆ or -β^⋆, we can still show that the MLE falls into a small ball around either β^⋆ or -β^⋆.Under the settings of Theorem <ref>, if n≥(d^4 logd/δ), then with probability at least 1-δ, we havemin{β_-β^⋆_2,β_+β^⋆_2}≲√(d^2logd/δ/n). Without loss of generality, in the sequel, we consider n≥(d^4 logd/δ) and assume β_-β^⋆_2≲√(d^2logd/δ/n),which implies β_∈_β^⋆(1). 
Recall that for the phase retrieval model,ℓ(x,y,β)=1/2log (2π)+1/2(y-(x^Tβ)^2)^2.It then holds that∇ℓ(x,y,β)=2(x^Tβ)^3 x-2(x^Tβ)yx,∇^2 ℓ(x,y,β)=6(x^Tβ)^2xx^T-2yxx^T,∇^3 ℓ(x,y,β)=12 (x^Tβ) x ⊗ x ⊗ x.Note that for Y=(X^Tβ^⋆)^2+ε, we have ∇ℓ(X,Y,β^⋆)=-2(X^Tβ^⋆)Xε. Therefore (recall that β^⋆=1) ∇ℓ(x_i,y_i,β^⋆) is 2d-subgaussian, by Lemma <ref>, we have for any δ, with probability at least 1- δ,_S^-1g_2≲√((_S^-1) logd/δ/n)+ d_S^-1√(logd^2_S^-1^2/(_S^-1))logd/δ/n.Which can be viewed as setting B_1= d and γ=1/2 in Assumption <ref>. Hence β^⋆+z=β^⋆-_S^-1g∈_β^⋆(1) when n≥(max{(_S^-1)logd/δ, d_S^-1_2 √(logd^2_S^-1^2/(_S^-1))logd/δ}).We then show the concentration inequality for the Hessian matrix. Note that∇^2ℓ_n(β^⋆)=1/n∑^n_i=1∇^2ℓ(x_i,y_i,β^⋆)=4/n∑^n_i=1(x^T_iβ^⋆)^2x_ix^T_i-2/n∑^n_i=1ε_ix_ix^T_i.Since (x^Tβ^⋆)^2xx^T≤ d^2, by matrix Hoeffding, with probability at least 1-δ, we have__S[(x^Tβ^⋆)^2xx^T] - d^2√(8logd/δ/n) I_d≼1/n∑^n_i=1(x^T_iβ^⋆)^2x_ix^T_i ≼__S[(x^Tβ^⋆)^2xx^T]+ d^2√(8logd/δ/n) I_dMoreover, by matrix Chernoff bound, with probability at least 1-δ, we have- d√(8logd/δ/n) I_d≼-1/n∑^n_i=1ε_ix_ix^T_i≼ d√(8logd/δ/n) I_d.Combine (<ref>) and (<ref>), we obtain∇^2ℓ(β^⋆) - 6d^2√(8logd/δ/n) I_d≼∇^2ℓ_n(β^⋆) ≼∇^2ℓ(β^⋆) + 6d^2√(8logd/δ/n) I_d,which can be viewed as setting B_2=d^2 in (<ref>). For any β∈_β^⋆(1), we have∇^3ℓ(x,y,β)_2=12(x^Tβ)x⊗ x⊗ x≤ 24 (√(d)+r)^4.Thus, we can view as if this model satisfies B_3 = (√(d)+r)^4 in Assumption <ref>. Then same as (<ref>) we have with probability 1-δ,ℓ_n(β^⋆+z)- ℓ_n(β^⋆) ≤ - 1/2z^T_Sz+2c^2B_2 (_S^-1)(logd/δ/n)^1.5+2B^2_1B_2_S^-1_2^2log (κ̃^-1/2α_1)(logd/δ/n)^2.5 + 2/3c^3B_3 (_S^-1)^1.5(logd/δ/n)^1.5+2/3B^3_1B_3_S^-1_2^3log^1.5 (κ̃^-1/2α_1)(logd/δ/n)^3,By Lemma <ref>, we have (<ref>). Then same as (<ref>) we have with probability at least 1-δ, ℓ_n(β_)-ℓ_n(β^⋆)≥1/2(Δ_β_-z)^T_S (Δ_β_-z) - 1/2z^T_Sz - (B_2 d^2(logd/δ/n)^1.5 + B_3 d^3(logd/δ/n)^1.5).Consequently, by (<ref>), (<ref>) and the fact that ℓ_n(β_)-ℓ_n(β^⋆+z)≤ 0, we have (Δ_β_-z)^T_S (Δ_β_-z) ≤(B_2 (_S^-1)(logd/δ/n)^1.5+B^2_1B_2_S^-1_2^2log (κ̃^-1/2α_1) (logd/δ/n)^2.5 + B_3 (_S^-1)^1.5(logd/δ/n)^1.5+B^3_1B_3_S^-1_2^3(log (κ̃^-1/2α_1))^1.5(logd/δ/n)^3+ B_2 d^2(logd/δ/n)^1.5+B_3 d^3 (logd/δ/n)^1.5)Then, same as the proof of Lemma <ref>, we further have for any δ, with probability at least 1-2δ,(β_-β^⋆)^T_T(β_-β^⋆) ≲(_T_S^-1) logd/δ/n+ (B_2_T^1/2_S^-1/2_2^2(_S^-1)(logd/δ/n)^1.5 +B^2_1B_2_T^1/2_S^-1/2_2^2_S^-1_2^2log (κ̃^-1/2α_1) (logd/δ/n)^2.5 + B_3 _T^1/2_S^-1/2_2^2(_S^-1)^1.5(logd/δ/n)^1.5 +B^3_1B_3_T^1/2_S^-1/2_2^2_S^-1_2^3(log (κ̃^-1/2α_1))^1.5(logd/δ/n)^3+ B_2_T^1/2_S^-1/2_2^2 d^2 (logd/δ/n)^1.5 +B_3_T^1/2_S^-1/2_2^2 d^3 (logd/δ/n)^1.5+B^2_1_T^1/2_S^-1/2_2^2_S^-1_2log (κ^-1/2α_1) (logd/δ/n)^2 )= (_T_S^-1) logd/δ/n+ (d^2_T^1/2_S^-1/2_2^2(_S^-1)(logd/δ/n)^1.5 +d^4 _T^1/2_S^-1/2_2^2_S^-1_2^2log (κ̃^-1/2α_1) (logd/δ/n)^2.5 + (√(d)+r)^4_T^1/2_S^-1/2_2^2(_S^-1)^1.5(logd/δ/n)^1.5 +d^3 (√(d)+r)^4_T^1/2_S^-1/2_2^2_S^-1_2^3(log (κ̃^-1/2α_1))^1.5(logd/δ/n)^3+ d^4_T^1/2_S^-1/2_2^2 (logd/δ/n)^1.5 +d^3 (√(d)+r)^4_T^1/2_S^-1/2_2^2 (logd/δ/n)^1.5+d^2_T^1/2_S^-1/2_2^2_S^-1_2log (κ^-1/2α_1) (logd/δ/n)^2 )To guarantee (_T_S^-1) logd/δ/n is the leading term, we only need n ≥ (N_1 logd/δ), where N_1:=max{ (d^2_T^1/2_S^-1/2_2^2(_S^-1)/(_T_S^-1) )^2 , (d^4_T^1/2_S^-1/2_2^2_S^-1_2^2 log (κ̃^-1/2α_1)/(_T_S^-1) )^2/3, ((√(d)+r)^4_T^1/2_S^-1/2_2^2(_S^-1)^1.5/(_T_S^-1) )^2, (d^3 (√(d)+r)^4_T^1/2_S^-1/2_2^2_S^-1_2^3 (log (κ̃^-1/2α_1))^1.5/(_T_S^-1) )^1/2,(d^4_T^1/2_S^-1/2_2^2/(_T_S^-1) )^2, (d^3 
(√(d)+r)^4_T^1/2_S^-1/2_2^2/(_T_S^-1) )^2, d^2 _T^1/2_S^-1/2_2^2_S^-1_2log (κ^-1/2α_1)/(_T_S^-1) }.That is, for any δ, when n ≥ (max{d^4,(_S^-1), d_S^-1_2 log^0.5(κ̃^-1/2α_1), N_1}logd/δ),with probability 1-2δ,(β_-β^⋆)^T_T(β_-β^⋆) ≲(_T_S^-1) logd/δ/n.Then following the proof of Theorem <ref>, do Taylor expansion w.r.t. β as the following:R_β^⋆(β_) = _x∼_T(X) y|x∼ f(y|x;β^⋆)[ℓ(x,y,β_)-ℓ(x,y,β^⋆)] ≤_x∼_T(X) y|x∼ f(y|x;β^⋆) [∇ℓ(x,y,β^⋆)]^T(β_-β^⋆) + 1/2(β_-β^⋆)^T_T(β_-β^⋆) +B_3/6β_-β^⋆_2^3. ≤c/2(_T_S^-1) logd/δ/n + c^3/6d^3 (√(d)+r)^4 (logd/δ/n)^1.5.with probability at least 1-2 δ. If we further assume n≥((d^3 (√(d)+r)^4/(_T_S^-1) )^2logd/δ), it then holds thatR_β^⋆(β_)≤ c (_T_S^-1) logd/δ/n.Therefore we conclude that for any δ, when n ≥ (N logd/δ),with probability at least 1-2δ, R_β^⋆(β_)≤ c (_T_S^-1) logd/δ/n,whereN :=max{d^4, (_S^-1), d_S^-1_2log^0.5(κ̃^-1/2α_1), N_1, (d^3 (√(d)+r)^4/(_T_S^-1) )^2}=max{(d^2_T^1/2_S^-1/2_2^2(_S^-1)/(_T_S^-1) )^2, (d^4_T^1/2_S^-1/2_2^2_S^-1_2^2log (κ̃^-1/2α_1)/(_T_S^-1) )^2/3, ((√(d)+r)^4_T^1/2_S^-1/2_2^2(_S^-1)^1.5/(_T_S^-1) )^2, (d^3 (√(d)+r)^4_T^1/2_S^-1/2_2^2_S^-1_2^3(log (κ̃^-1/2α_1))^1.5/(_T_S^-1) )^1/2,(d^4_T^1/2_S^-1/2_2^2/(_T_S^-1) )^2, (d^3 (√(d)+r)^4_T^1/2_S^-1/2_2^2/(_T_S^-1) )^2, d^2 _T^1/2_S^-1/2_2^2_S^-1_2log (κ^-1/2α_1)/(_T_S^-1), d^4, (_S^-1), d_S^-1_2log^0.5(κ̃^-1/2α_1), (d^3 (√(d)+r)^4/(_T_S^-1) )^2}.Now it remains to calculate N and (_T _S^-1).Similar to logistic regression (see Lemma <ref> and <ref>), we have the following two lemmas that characterize _S and _T. Under the conditions of Theorem <ref>, we have _S=U(λ_1,λ_2,…,λ_2)U^T and _T=U(λ_1,λ_2+r^2λ_3,λ_2,…,λ_2)U^T for an orthonormal matrix U. Where λ_1:=4_x∼ (^d-1(√(d)))[(β^⋆ Tx)^4],λ_2:=4_x∼ (^d-1(√(d)))[(β^⋆ Tx)^2(β_⊥^⋆ Tx)^2],λ_3:=4_x∼ (^d-1(√(d)))[(β^⋆ Tx)^2].Under the conditions of Theorem <ref>, there exist absolute constants c,C, c'>0 such that c<λ_1, λ_2, λ_3 <C, for d ≥ c'. The proofs for these two lemmas are in the next section. With Lemma <ref>, we have _T _S^-1 = U(1,1+ r^2λ_3/λ_2,…,1)U^T, _S^-1 = U(1/λ_1,1/λ_2,…,1/λ_2)U^T. By Lemma <ref>, since λ_1, λ_2, λ_3 = O(1), we have (_T _S^-1) = d + r^2λ_3/λ_2≍ d + r^2, _T _S^-1_2 = 1 + r^2λ_3/λ_2≍ 1 + r^2. Similarly(_S^-1) = λ_1^-1 + (d-1) λ_2^-1≍ d, _S^-1_2 = max{λ_1^-1,λ_2^-1}≍ 1, α_1=B_1_S^-1_2^1/2≍ d. Plug in these quantities, recallκ:=(_T_S^-1) /_T^1/2_S^-1_T^1/2_2≍d+r^2/1+r^2we have N =max{ d^6 κ^-2, d^8/3κ^-2/3log^2/3 (κ̃^-1/2α_1), d^3 (√(d)+r)^8 κ^-2, d^3/2 (√(d)+r)^2 κ^-1/2log^3/4 (κ̃^-1/2α_1), d^8 κ^-2, d^6 (√(d)+r)^8 κ^-2, d^2 κ^-1log (κ^-1/2α_1), d^4, d,dlog^1/2 (κ̃^-1/2α_1), d^6 (√(d)+r)^8 κ^-2_T _S^-1^-2}1 ≤κ≤ d=max{d^6 (√(d)+r)^8 κ^-2, d^6 (√(d)+r)^8 κ^-2_T _S^-1^-2}_T _S^-1≍ 1+r^2 ≥ 1= d^6 (√(d)+r)^8 κ^-2≍d^6 (√(d)+r)^8 (1+r^2)^2/(d+r^2)^2≍ d^6 (d+r^2)^2 (1+r^2)^2 We can see that when r ≤ 1, N ≍ d^8. When 1 ≤ r ≤√(d), N ≍ d^8 r^4.When r ≥√(d), N ≍ d^6 r^8.§.§.§ Proof of Lemma <ref>In the following, we prove Lemma <ref>. 
The intuition is that, although ℓ is not convex in β,ℓ is quadratic in M:=ββ^T.With a little bit abuse of notation, for matrix M∈^d× d, we denote ℓ(x,y,M):=1/2(y-⟨ xx^T, M⟩)^2.Under the case where M=ββ^T, we haveℓ(x,y,M):=1/2(y-⟨ xx^T, ββ^T⟩)^2 =1/2(y-(x^Tβ)^2)^2=ℓ(x,y,β).We further denoteℓ_n(M):=1/n∑^n_i=1ℓ(x_i,y_i,M)=1/2n∑^n_i=1(y_i-⟨ x_ix_i^T, M⟩)^2.and M^⋆:=β^⋆β^⋆ T.It then holds that∇ℓ_n(M^⋆)=-1/n∑^n_i=1(⃗x_i x_i^T)ε_i,∇^2ℓ_n(M^⋆)=1/n∑^n_i=1(⃗x_i x_i^T)(⃗x_i x_i^T)^T,∇^3ℓ_n(M)=0.Denote Σ_S:=_x∼_S(X)[(⃗xx^T)(⃗xx^T)^T], then by Lemma <ref> with V=(Σ_S), α=2, B_u^α=cd for some absolute constants c,c', we have with probability at least 1-δ,∇ℓ_n(M^⋆)_2 ≤ c'(√((Σ_S)logd/δ/n)+d (logc^2 d^2/(Σ_S))^1/2logd/δ/n).By matrix Hoeffding, we have with probability at least 1-δ,Σ_S -d^2√(8logd/δ/n) I_d≼∇^2ℓ_n(M^⋆) ≼Σ_S +d^2√(8logd/δ/n) I_d. Before conducting further analysis, we need some characterizations of Σ_S. By the definition of Σ_S, we can see that the ((i,j),(k,l)) entry of Σ_S is _X ∼_S (X)[X_i X_j X_k X_l]. Since X is symmetric and isotropic, we have_X ∼_S (X)[X_i X_j X_k X_l] =_X ∼_S (X)[X_i^2 X_k^2]if i=j, k=l and i ≠ k _X ∼_S (X)[X_i^2 X_j^2] if {i,j} = {k,l} and i ≠ j _X ∼_S (X)[X_i^4] if i=j=k=l0OtherwiseFor the calculation of moments, using (3a) in <cit.> with a=(1,0,⋯, 0)^T and ϵ= 1/√(d) X,we have _X ∼_S (X)[X_1^4]=3d/d+2, _X ∼_S (X)[X_1^2 X_2^2]=d/d+2. Since X is isotropic, we have(Σ_S)_((i,j),(k,l)) =d/d+2 if i=j, k=l and i ≠ k d/d+2 if {i,j} = {k,l} and i ≠ j 3d/d+2 if i=j=k=l0OtherwiseTherefore (Σ_S)=∑_i,j[X_i^2 X_j^2]=d(d-1)d/d+2 + d3d/d+2=d^2. The following lemma characterizes the "minimum eigenvalue" of Σ_S on a special subspace, which will be useful in our analysis.For any vector a=(a_ij)_(i,j) ∈ [d] × [d]∈^d^2 satisfies a_ij = a_ji,a^TΣ_S a ≥2d/d+2a_2^2.a^TΣ_S a=∑_i,j,k,la_ija_kl (Σ_S)_((i,j),(k,l))by (<ref>)=d/d+2 (∑_i ≠ ja_ij^2 + ∑_i ≠ ja_ija_ji+ ∑_i ≠ ja_iia_jj + 3∑_ia_ii^2) a_ij=a_ji=d/d+2 (2∑_i ≠ ja_ij^2 + ∑_i ≠ ja_iia_jj + 3∑_ia_ii^2) = d/d+2 (2(∑_i ≠ ja_ij^2+∑_ia_ii^2) + (∑_i ≠ ja_iia_jj +∑_ia_ii^2)) = d/d+2(2a_2^2 + (∑_ia_ii)^2) ≥2d/d+2a_2^2.With Lemma <ref> and (<ref>), we are now able to prove Lemma <ref>. By Taylor expansion, we have for M=ββ^T, M^⋆=β^⋆β^⋆ T, with probability at least 1-δ, ℓ_n(M)-ℓ_n(M^⋆)∇^3ℓ_n ≡ 0=(⃗M-M^⋆)^T∇ℓ_n(M^⋆)+1/2(⃗M-M^⋆)^T∇^2ℓ_n(M^⋆)(⃗M-M^⋆)by (<ref>), (<ref>)≥-c'M-M^⋆_F(√((Σ_S)logd/δ/n)+d (logc^2 d^2/(Σ_S))^1/2logd/δ/n)+1/2(⃗M-M^⋆)^TΣ_S(⃗M-M^⋆)-M-M^⋆^2_Fd^2√(8logd/δ/n)by Lemma <ref> and(<ref>)≥(d/d+2-d^2√(8logd/δ/n))M-M^⋆^2_F-c”(√(d^2logd/δ/n)+dlogd/δ/n)M-M^⋆_F≥1/2M-M^⋆^2_F-c”(√(d^2logd/δ/n)+dlogd/δ/n)M-M^⋆_Fwhen n≥(d^4 logd/δ). We denote M_:=β_β_^T. Note that ℓ_n(M_)-ℓ_n(M^⋆)=ℓ_n(β_)-ℓ_n(β^⋆)≤ 0. Thus we have1/2M_-M^⋆^2_F-c”(√(d^2logd/δ/n)+dlogd/δ/n)M_-M^⋆_F≤ 0 ,which impliesM_-M^⋆_F≲(√(d^2logd/δ/n)+dlogd/δ/n)≲√(d^2logd/δ/n).Thus so far we have shown, if n≥(d^4 logd/δ), then with probability at least 1-δ, we haveM_-M^⋆_F≲√(d^2logd/δ/n).By Lemma 6 in <cit.>, we further havemin{β_-β^⋆_2,β_+β^⋆_2}≲1/β^⋆_2M_-M^⋆_F≲√(d^2logd/δ/n). §.§.§ Proofs for Lemma <ref> and <ref>The proofs for Lemma <ref> and <ref> are similar to proofs for Lemma <ref> and <ref>.By definition,_S := 4_x∼ (^d-1(√(d)))[xx^T(x^Tβ^⋆)^2]Let z ∼(0,I_d), then x and z√(d)/z_2 have the same distribution. Therefore _S= 4_x∼ (^d-1(√(d)))[xx^T(x^Tβ^⋆)^2] = 4_z∼ (0,I_d)[zz^Td/z^2_2 (β^⋆ Tz ·√(d)/z_2)^2] = 4_z∼ (0,I_d)[(β^⋆β^⋆ T + U_⊥U_⊥^T)zz^T (β^⋆ Tz)^2 d^2/z^4_2]where [β^⋆,U_⊥] ∈^d × d is a orthogonal basis. 
With this expression, we first prove β^⋆ is an eigenvector of _S with corresponding eigenvalue λ_1._S β^⋆ = 4_z∼ (0,I_d)[(β^⋆β^⋆ T + U_⊥U_⊥^T)zz^T (β^⋆ Tz)^2 d^2/z^4_2]β^⋆= 4_z∼ (0,I_d)[β^⋆β^⋆ Tzz^T (β^⋆ Tz)^2 d^2/z^4_2β^⋆]+ 4_z∼ (0,I_d)[ U_⊥U_⊥^Tzz^T (β^⋆ Tz)^2 d^2/z^4_2β^⋆]= 4_z∼ (0,I_d)[(β^⋆ Tz)^4 d^2/z^4_2] β^⋆+ 4_z∼ (0,I_d)[ U_⊥U_⊥^Tzz^T (β^⋆ Tz)^2 d^2/z^4_2β^⋆]= λ_1 β^⋆ + 4_z∼ (0,I_d)[ U_⊥U_⊥^Tzz^T (β^⋆ Tz)^2 d^2/z^4_2β^⋆].Therefore we only need to prove _z∼ (0,I_d)[ U_⊥U_⊥^Tzz^T (β^⋆ Tz)^2 d^2/z^4_2β^⋆]=0.In fact,_z∼ (0,I_d)[ U_⊥^Tzz^T (β^⋆ Tz)^2 d^2/z^4_2β^⋆]= _z∼ (0,I_d)[ (U_⊥^Tz) (β^⋆ Tz)^3 d^2/z^4_2]= _z∼ (0,I_d)[(d/|A|^2 + B^2)^2 A^3 B ]where we let A:=z^Tβ^⋆, B:=U_⊥^Tz. Notice that by the property of z∼(0,I_d), A and B are independent. Also, B is symmetric, i.e., B and -B have the same distribution. Therefore _z∼ (0,I_d)[ U_⊥U_⊥^Tzz^T (β^⋆ Tz)^2 d^2/z^4_2β^⋆]=_z∼ (0,I_d)[(d/|A|^2 + B^2)^2 A^3 B ]=0. Next we will prove that for any β_⊥ such that β_⊥_2=1, β^⋆ Tβ_⊥=0, β_⊥ is an eigenvector of _S with corresponding eigenvalue λ_2.Let [β_⊥, U] be an orthogonal basis (β^⋆ is the first column of U). _S β_⊥ = 4_z∼ (0,I_d)[(β_⊥β_⊥^T + UU^T)zz^T (β^⋆ Tz)^2 d^2/z^4_2]β_⊥= 4_z∼ (0,I_d)[β_⊥β_⊥^Tzz^T (β^⋆ Tz)^2 d^2/z^4_2β_⊥]+ 4_z∼ (0,I_d)[ UU^Tzz^T (β^⋆ Tz)^2 d^2/z^4_2β_⊥]= 4_z∼ (0,I_d)[(β_⊥^T z)^2 (β^⋆ Tz)^2 d^2/z^4_2] β_⊥+ 4_z∼ (0,I_d)[ UU^Tzz^T (β^⋆ Tz)^2 d^2/z^4_2β_⊥]= λ_2 β_⊥ + 0= λ_2 β_⊥Here 4_z∼ (0,I_d)[ UU^Tzz^T (β^⋆ Tz)^2 d^2/z^4_2β_⊥] =0because of a similar reason as in the previous part.For _T, the proving strategy is similar.For x ∼(^d-1(√(d)))+v on the target domain, where v= rβ_⊥^⋆, let w=x-v=x-rβ_⊥^⋆, then w ∼(^d-1(√(d))). Let z ∼(0,I_d), then w and z√(d)/z_2 have the same distribution. We have_T= 4_x∼ (^d-1(√(d))) +v[xx^T(x^Tβ^⋆)^2]=4_w∼ (^d-1(√(d))) [(w+v)(w+v)^T((w+v)^Tβ^⋆)^2]v^Tβ^⋆=0= 4_w∼ (^d-1(√(d)))[(ww^T+wv^T+vw^T+vv^T)(w^Tβ^⋆)^2]Therefore _T β^⋆ = 4 _w∼ (^d-1(√(d)))[(ww^T+wv^T+vw^T+vv^T)(w^Tβ^⋆)^2] β^⋆v^Tβ^⋆=0= 4_w∼ (^d-1(√(d)))[ww^T(w^Tβ^⋆)^2] β^⋆= _S β^⋆= λ_1 β^⋆,where the last line follows from the previous proofs. Similarly, for any β̃_̃⊥̃ such that β̃_̃⊥̃_2=1, β_⊥^⋆ Tβ̃_̃⊥̃=0, _T β̃_̃⊥̃ = 4 _w∼ (^d-1(√(d)))[(ww^T+wv^T+vw^T+vv^T)(w^Tβ^⋆)^2] β̃_̃⊥̃v^Tβ̃_̃⊥̃=0= 4_w∼ (^d-1(√(d)))[ww^T(w^Tβ^⋆)^2]β̃_̃⊥̃= _S β̃_̃⊥̃= λ_2 β̃_̃⊥̃.For β_⊥^⋆, _T β_⊥^⋆ =4 _w∼ (^d-1(√(d)))[(ww^T+wv^T+vw^T+vv^T)(w^Tβ^⋆)^2] β_⊥^⋆=4 _w∼ (^d-1(√(d)))[ww^T(w^Tβ^⋆)^2]β_⊥^⋆ +4 _w∼ (^d-1(√(d)))[wv^T(w^Tβ^⋆)^2] β_⊥^⋆+4 _w∼ (^d-1(√(d)))[vw^T(w^Tβ^⋆)^2]β_⊥^⋆ +4 _w∼ (^d-1(√(d)))[vv^T(w^Tβ^⋆)^2] β_⊥^⋆ := I_1+ I_2+ I_3 +I_4.As in the previous proofs, I_1=_S β_⊥^⋆ = λ_2 β_⊥^⋆. 
I_2=4 _w∼ (^d-1(√(d)))[wv^T(w^Tβ^⋆)^2] β_⊥^⋆v=rβ_⊥^⋆= 4r _w∼ (^d-1(√(d)))[w(β_⊥^⋆ Tβ_⊥^⋆)(w^Tβ^⋆)^2]β_⊥^⋆=1= 4r _w∼ (^d-1(√(d)))[w(w^Tβ^⋆)^2]=0.where the last lines follows from w is symmetric and w(w^Tβ^⋆)^2 is a odd function of w.I_3= 4 _w∼ (^d-1(√(d)))[vw^T(w^Tβ^⋆)^2]β_⊥^⋆v=rβ_⊥^⋆= 4r _w∼ (^d-1(√(d)))[β_⊥^⋆ w^Tβ_⊥^⋆(w^Tβ^⋆)^2]= 4r _w∼ (^d-1(√(d)))[ (w^Tβ_⊥^⋆)(w^Tβ^⋆)^2] β_⊥^⋆=0.where the last lines follows from w is symmetric and (w^Tβ_⊥^⋆)(w^Tβ^⋆)^2 is a odd function of w.I_4=4 _w∼ (^d-1(√(d)))[vv^T(w^Tβ^⋆)^2] β_⊥^⋆v=rβ_⊥^⋆= 4 r^2_w∼ (^d-1(√(d)))[β_⊥^⋆β_⊥^⋆ Tβ_⊥^⋆ (w^Tβ^⋆)^2]β_⊥^⋆=1= 4 r^2_w∼ (^d-1(√(d)))[β_⊥^⋆(w^Tβ^⋆)^2]=r^2λ_3 β_⊥^⋆.Combine the calculations of I_1,I_2,I_3,I_4, we have _Tβ_⊥^⋆ = I_1 + I_2 + I_3 + I_4 = λ_2 β_⊥^⋆ + r^2λ_3 β_⊥^⋆= (λ_2 + r^2λ_3) β_⊥^⋆.In conclusion, we have _S=U(λ_1,λ_2,…,λ_2)U^T and _T=U(λ_1,λ_2+r^2λ_3,λ_2,…,λ_2)U^T for an orthonormal matrix U, where U= [β^⋆, β_⊥^⋆, ⋯].Recall the definition of λ_1, λ_2, λ_3:λ_1:=4_x∼ (^d-1(√(d)))[(β^⋆ Tx)^4] = 4_z∼ (0,I_d)[(β^⋆ Tz)^4 d^2/z^4_2],λ_2:=4_x∼ (^d-1(√(d)))[(β^⋆ Tx)^2(β_⊥^⋆ Tx)^2]= 4_z∼ (0,I_d)[(β^⋆ Tz)^2 (β_⊥^⋆ T z)^2 d^2/z^4_2] ,λ_3:=4_x∼ (^d-1(√(d)))[(β^⋆ Tx)^2]= 4_z∼ (0,I_d)[(β^⋆ Tz)^2 d/z^2_2].Next we will show that there exists constants c,C,c'>0 such that when d≥ c', we have c ≤λ_1 ≤ C. The proofs for λ_2 and λ_3 are similar.<ref> With this concentration, we do the following truncation:1/4λ_1 = _z∼ (0,I_d)[(β^⋆ Tz)^4 d^2/z^4_2] =_z∼ (0,I_d)[(β^⋆ Tz)^4 d^2/z^4_2_z/√(d)∈ [1/2, 3/2]] + _z∼ (0,I_d)[(β^⋆ Tz)^4 d^2/z^4_2_z/√(d)∉ [1/2, 3/2]] := J_1 + J_2.For J_2, it is obvious that0≤ J_2 ≤ d^2 (z/√(d)∉ [1/2, 3/2]) ≤ 2d^2 e^-cd. For upper bound of J_1, J_1= _z∼ (0,I_d)[(β^⋆ Tz)^4 d^2/z^4_2_z/√(d)∈ [1/2, 3/2]] ≤_z∼(0,I_d) [16(β^⋆ Tz)^4] =48.Therefore 1/4λ_1 = J_1 + J_2 ≤ 48 + 2d^2 e^-cd.It's obvious that there exists an absolute constant c' such that when d ≥ c', 1/4λ_1 ≤ 50.For lower bound of J_1, we have J_1= _z∼ (0,I_d)[(β^⋆ Tz)^4 d^2/z^4_2_z/√(d)∈ [1/2, 3/2]] ≥_z∼(0,I_d) [(2/3)^4(β^⋆ Tz)^4] =(2/3)^4 · 3.Therefore 1/4λ_1 = J_1 + J_2 ≥ (2/3)^4 · 3Therefore it's obvious that there exists an absolute constant c' such that when d ≥ c', 1/4λ_1 ≥1/2. The proofs for λ_2 and λ_3 are almost the same. § PROOFS FOR SECTION <REF> §.§ Poofs for Proposition <ref> We consider the case where Y=X^2+ε, ε∼(0,1), ε X, and we have X∼(-10,1) on the source domain and X∼(10,1) on the target domain. Then the optimal linear fit on the target is given by β^⋆ = _β∈_(x,y)∼_T(X,Y)[(y-xβ)^2]=(_x∼(10,1)[x^2])^-1_x∼(10,1)[x^3]>0.However, the linear fit learned via classical MLE asymptotically behaves asβ_ =_β∈1/2n∑^n_i=1(y_i-x_iβ)^2 =(1/n∑^n_i=1x^2_i)^-1(1/n∑^n_i=1x_i y_i)(_x∼(-10,1)[x^2])^-1_x∼(-10,1)[x^3]<0.Hence, the classical MLE losses consistency. For MWLE, we haveβ_ =_β∈1/2n∑^n_i=1w(x_i) (y_i-x_iβ)^2=(1/n∑^n_i=1w(x_i) x^2_i)^-1(1/n∑^n_i=1w(x_i) x_i y_i)β^⋆,which asymptotically provides a good estimator. §.§ Proofs for Theorem <ref>The detailed version of Theorem <ref> is stated as the following.Suppose the function classsatisfies Assumption <ref>. Let G_w := G_w(M) and H_w := H_w(M). For any δ∈ (0,1), if n≥ c max{N^⋆log(d/δ), N(δ), N'(δ)}, then with probability at least 1-3δ, we haveR_M(β_)≤ c(G_w H^-1_w)logd/δ/nfor an absolute constant c. Here N^⋆ := W^2 ·max{λ^-1α̃_1^2log^2γ(W^2λ^-1α̃^2_1),α̃_2^2,λα̃_3^2},where α̃_1:=B_1H^-1_w_2^0.5, α̃_2:=B_2H^-1_w_2, α̃_3:=B_3H^-1_w_2^1.5, and λ:=(G_w H_w^-2)/H^-1_w_2. The proofs for Theorem <ref> is similar to proofs for Theorem <ref>. 
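The consistency failure established in the proof of Proposition <ref> above can also be checked numerically. The following sketch (illustrative only; the sample size and seed are arbitrary choices) estimates the population least-squares targets on the source and target domains by direct sampling and reproduces the sign flip of the fitted slope.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def linear_fit(x, y):
    # One-dimensional least squares without intercept: argmin_b sum (y - b*x)^2.
    return float(np.sum(x * y) / np.sum(x * x))

# Source domain: X ~ N(-10, 1); target domain: X ~ N(10, 1); Y = X^2 + eps.
x_s = rng.normal(-10.0, 1.0, n)
y_s = x_s ** 2 + rng.normal(0.0, 1.0, n)
x_t = rng.normal(10.0, 1.0, n)
y_t = x_t ** 2 + rng.normal(0.0, 1.0, n)

beta_mle = linear_fit(x_s, y_s)    # limit of the unweighted MLE: E_S[X^3] / E_S[X^2] < 0
beta_star = linear_fit(x_t, y_t)   # optimal target fit: E_T[X^3] / E_T[X^2] > 0

print("plain MLE fit on source data:", round(beta_mle, 2))   # about -10.2
print("optimal linear fit on target:", round(beta_star, 2))  # about +10.2
```

Note that the importance weights w(x) = dP_T(x)/dP_S(x) = exp(20x) are astronomically small on typical source samples, so the sketch contrasts the two population targets by direct sampling rather than attempting a finite-sample MWLE run at this extreme shift.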
For notation simplicity, through out the proofs for Theorem <ref>, let β^⋆:=β^⋆(M), H_w:= H_w(M), G_w:= G_w(M).We first state two main lemmas, which capture the distance between β_ and β^⋆ under different measurements.Suppose Assumption <ref> holds. For any δ∈ (0,1) and any n≥ cmax{N_1log(d/δ), N(δ), N'(δ)}, with probability at least 1-2δ, we have β_∈_β^⋆(c√((G_w H^-2_w)logd/δ/n)) for some absolute constant c. Here N_1:=max{ W^2 B_2^2H_w^-1_2^2, W^2 B_3^2(G_w H_w^-2)H_w^-1_2^2, (W^3 B^2_1B_2H_w^-1_2^3log^2γ(Wλ^-1/2α̃_1)/(G_w H_w^-2))^2/3,(W^4B^3_1B_3H_w^-1_2^4log^3γ(Wλ^-1/2α̃_1)/(G_w H_w^-2))^1/2,W^2 B_1^2H_w^-1_2^2log^2γ(Wλ^-1/2α̃_1)/(G_w H_w^-2)}.Suppose Assumption <ref> holds. For any δ∈ (0,1) and any n≥ cmax{N_1log(d/δ), N_2log(d/δ), N(δ), N'(δ)}, with probability at least 1-3δ, we haveH_w^1/2(β_-β^⋆)_2^2≤c (G_w H_w^-1) logd/δ/n .for some absolute constant c. Here N_1 is defined in Lemma <ref> andN_2:=max{ (W B_2(G_w H_w^-2)/(G_w H_w^-1) )^2, (W B_3(G_w H_w^-2)^1.5/(G_w H_w^-1) )^2, ( W^3 B^2_1B_2H_w^-1_2^2log^2γ(Wλ^-1/2α̃_1)/(G_w H_w^-1))^2/3,(W^4B^3_1B_3H_w^-1_2^3log^3γ(Wλ^-1/2α̃_1)/(G_w H_w^-1) )^1/2, W^2 B_1^2H_w^-1_2log^2γ(Wλ^-1/2α̃_1)/(G_w H_w^-1) }. The proofs for Lemma <ref> and <ref> are delayed to the end of this subsection. With these two lemmas, we can now state the proof for Theorem <ref>.By Assumption <ref> and <ref>, we can do Taylor expansion w.r.t. β as the following:R_M(β_) = _(x,y) ∼_T(x,y)[ℓ(x,y,β_)-ℓ(x,y,β^⋆)] ≤_(x,y) ∼_T(x,y) [∇ℓ(x,y,β^⋆)]^T(β_-β^⋆) + 1/2(β_-β^⋆)^TH_w(β_-β^⋆) +WB_3/6β_-β^⋆_2^3.Applying Lemma <ref> and <ref>, we know for any δ and any n≥ cmax{N_1log(d/δ), N_2log(d/δ), N(δ), N'(δ)}, with probability at least 1-3δ, we have(β_-β^⋆)^TH_w(β_-β^⋆)≤ c(G_w H_w^-1) logd/δ/nandβ_-β^⋆_2≤ c√((G_w H_w^-2)logd/δ/n).Also notice that, _(x,y) ∼_T(x,y) [∇ℓ(x,y,β^⋆)]=0. Therefore, with probability at least 1- 3δ, we haveR_M(β_) ≤c/2(G_w H_w^-1) logd/δ/n + c^3/6WB_3(G_w H_w^-2)^1.5(logd/δ/n)^1.5.If we further have n≥ c(WB_3(G_w H_w^-2)^1.5/(G_w H_w^-1))^2log (d/δ), it then holds thatR_M(β_) ≤ c(G_w H_w^-1) logd/δ/n .Note thatmax{N_1,N_2, (WB_3(G_w H_w^-2)^1.5/(G_w H_w^-1))^2}=max{W^2 B_2^2H_w^-1_2^2,W^2 B_3^2(G_w H_w^-2)H_w^-1_2^2, (W^3 B^2_1B_2H_w^-1_2^3log^2γ(Wλ^-1/2α̃_1)/(G_w H_w^-2))^2/3,(W^4B^3_1B_3H_w^-1_2^4log^3γ(Wλ^-1/2α̃_1)/(G_w H_w^-2))^1/2,W^2 B_1^2H_w^-1_2^2log^2γ(Wλ^-1/2α̃_1)/(G_w H_w^-2)}=W^2·max{α̃_2^2,λα̃_3^2, α̃_1^4/3α̃_2^2/3λ^-2/3log^4γ/3(Wλ^-1/2α̃_1), α̃_1^3/2α̃_3^1/2λ^-1/2log^3γ/2(Wλ^-1/2α̃_1), λ^-1α̃_1^2log^2γ(Wλ^-1/2α̃_1)}≤ W^2 ·max{λ^-1α̃_1^2log^2γ(W^2λ^-1α̃^2_1),α̃_2^2,λα̃_3^2}=:N^⋆.Here the first equation follows from the fact that(G_w H_w^-2)=(H_w^-1/2G_w H_w^-1/2 H_w^-1)≤H_w^-1_2(H_w^-1/2G_w H_w^-1/2)=H_w^-1_2(G_w H_w^-1).To summarize, for any δ∈ (0,1) and any n≥ cmax{N^⋆log(d/δ),N(δ), N'(δ)}, with probability at least 1- 3δ, we haveR_M(β_) ≤ c(G_w H_w^-1) logd/δ/n .In the following, we prove Lemma <ref> and <ref>.Proof of Lemma <ref>For notation simplicity, we denote g:= ∇ℓ_n^w(β^⋆) - __S [∇ℓ_n^w(β^⋆)]. 
Note that V = n ·𝔼 [A(∇ℓ_n^w(β^⋆)-[∇ℓ_n^w(β^⋆)])_2^2]=n·[∇ℓ_n^w(β^⋆)^TA^TA∇ℓ_n^w(β^⋆)]=n·[(A∇ℓ_n^w(β^⋆)∇ℓ_n^w(β^⋆)^TA^T)]=(A G_w A^T).By taking A= H_w^-1 in Assumption <ref>, for any δ and any n>N(δ), we have with probability at least 1-δ:H_w^-1g_2≤c√((G_w H_w^-2) logd/δ/n)+ W B_1H_w^-1_2log^γ(W B_1H_w^-1_2/√((G_w H_w^-2))) logd/δ/n= c√((G_w H_w^-2) logd/δ/n)+ W B_1H_w^-1_2log^γ(Wλ^-1/2α̃_1) logd/δ/n∇^2ℓ_n^w(β^⋆)-[∇^2ℓ_n^w(β^⋆)]_2≤ W B_2 √(logd/δ/n).Let event Ã:={(<ref>), (<ref>) holds} and Ã':={ℓ^w_n(·) has a unique local minimum, which is also global minimum}. By Assumption <ref> and Assumption <ref>, it then holds for any δ and any n≥max{N(δ),N'(δ)} that (Ã∩Ã')≥ 1-2δ. Under the event Ã∩Ã', we have the following Taylor expansion:ℓ_n^w(β) - ℓ_n^w(β^⋆) by Assumption <ref>, <ref>≤ (β - β^⋆)^T∇ℓ_n^w (β^⋆) +1/2 (β - β^⋆)^T∇^2ℓ_n^w (β^⋆) (β - β^⋆) + W B_3/6β-β^⋆_2^3__S [∇ℓ_n^w(β^⋆)]=0= (β - β^⋆)^T g+1/2 (β - β^⋆)^T∇^2ℓ_n^w (β^⋆) (β - β^⋆) + W B_3/6β-β^⋆_2^3by (<ref>)≤ (β - β^⋆)^T g + 1/2 (β - β^⋆)^T H_w (β - β^⋆) + W B_2√(logd/δ/n)β-β^⋆_2^2 + W B_3/6β-β^⋆_2^3Δ_β:=β-β^⋆=Δ_β^Tg + 1/2Δ_β^T H_w Δ_β + W B_2√(logd/δ/n)Δ_β_2^2 + W B_3/6Δ_β_2^3= 1/2(Δ_β-z)^T H_w (Δ_β-z) - 1/2z^TH_wz + W B_2√(logd/δ/n)Δ_β_2^2 + W B_3/6Δ_β_2^3 where z:=-H_w^-1g. Similarlyℓ_n^w(β) - ℓ_n^w(β^⋆) ≥1/2(Δ_β-z)^T H_w (Δ_β-z) - 1/2z^TH_w z - W B_2√(logd/δ/n)Δ_β_2^2 - W B_3/6Δ_β_2^3. Notice that Δ_β^⋆+z = z, by (<ref>) and (<ref>), we haveℓ_n^w(β^⋆+z)- ℓ_n^w(β^⋆) ≤ - 1/2z^T H_w z+ WB_2√(logd/δ/n)( c√((G_w H_w^-2) logd/δ/n)+ W B_1H_w^-1_2log^γ(Wλ^-1/2α̃_1) logd/δ/n)^2+ W B_3/6( c√((G_w H_w^-2) logd/δ/n)+ W B_1H_w^-1_2log^γ(Wλ^-1/2α̃_1) logd/δ/n)^3≤ - 1/2z^T H_w z+2c^2W B_2(G_w H_w^-2)(logd/δ/n)^1.5+2W ^3 B^2_1B_2H_w^-1_2^2log^2γ(Wλ^-1/2α̃_1) (logd/δ/n)^2.5 +2/3 c^3W B_3 (G_w H_w^-2)^1.5(logd/δ/n)^1.5+2/3 W^4B^3_1B_3H_w^-1_2^3log^3γ(Wλ^-1/2α̃_1)(logd/δ/n)^3. For any β∈_β^⋆(3c√((G_w H_w^-2)logd/δ/n)), by (<ref>), we haveℓ_n^w(β)-ℓ_n^w(β^⋆) ≥1/2(Δ_β-z)^T H_w (Δ_β-z) - 1/2z^TH_w z -9c^2W B_2(G_w H_w^-2)(logd/δ/n)^1.5-9/2 c^3W B_3(G_w H_w^-2)^1.5(logd/δ/n)^1.5.(<ref>) - (<ref>) givesℓ_n^w(β)-ℓ_n^w(β^⋆+z)≥1/2(Δ_β-z)^T H_w (Δ_β-z) -(11c^2W B_2(G_w H_w^-2)(logd/δ/n)^1.5+31/6c^3W B_3(G_w H_w^-2)^1.5(logd/δ/n)^1.5+ 2W ^3 B^2_1B_2H_w^-1_2^2log^2γ(Wλ^-1/2α̃_1) (logd/δ/n)^2.5 +2/3 W^4B^3_1B_3H_w^-1_2^3log^3γ(Wλ^-1/2α̃_1)(logd/δ/n)^3)Consider the ellipsoid :={β∈^d | 1/2 (Δ_β-z)^T H_w (Δ_β-z) ≤11c^2W B_2(G_w H_w^-2)(logd/δ/n)^1.5+31/6c^3W B_3(G_w H_w^-2)^1.5(logd/δ/n)^1.5+ 2W ^3 B^2_1B_2H_w^-1_2^2log^2γ(Wλ^-1/2α̃_1) (logd/δ/n)^2.5+2/3 W^4B^3_1B_3H_w^-1_2^3log^3γ(Wλ^-1/2α̃_1)(logd/δ/n)^3}Then by (<ref>), for any β∈_β^⋆(3c√((G_w H_w^-2)logd/δ/n)) ∩^C, we haveℓ_n^w(β)-ℓ_n^w(β^⋆+z) > 0. Notice that by the definition of , using λ_min^-1(H_w)= H_w^-1_2, we have for any β∈,Δ_β-z_2^2 ≤ 22c^2H_w^-1_2W B_2(G_w H_w^-2)(logd/δ/n)^1.5+31/3c^3H_w^-1_2W B_3(G_w H_w^-2)^1.5(logd/δ/n)^1.5 +4H_w^-1_2W^3 B^2_1B_2H_w^-1_2^2log^2γ(Wλ^-1/2α̃_1) (logd/δ/n)^2.5+4/3H_w^-1_2W^4B^3_1B_3H_w^-1_2^3log^3γ(Wλ^-1/2α̃_1)(logd/δ/n)^3.Thus for any β∈, Δ_β_2^2 ≤ 2(Δ_β-z_2^2+z_2^2)by (<ref>)≤44c^2H_w^-1_2W B_2(G_w H_w^-2)(logd/δ/n)^1.5+62/3c^3H_w^-1_2W B_3(G_w H_w^-2)^1.5(logd/δ/n)^1.5 + 8H_w^-1_2W^3 B^2_1B_2H_w^-1_2^2log^2γ(Wλ^-1/2α̃_1) (logd/δ/n)^2.5 +8/3H_w^-1_2W^4B^3_1B_3H_w^-1_2^3log^3γ(Wλ^-1/2α̃_1)(logd/δ/n)^3 +4c^2(G_w H_w^-2) logd/δ/n+4 W^2 B_1^2H_w^-1_2^2 log^2γ(Wλ^-1/2α̃_1)(logd/δ/n)^2.To guarantee (G_w H_w^-2) logd/δ/n is the leading term, we only need (G_w H_w^-2) logd/δ/n to dominate the rest of the terms. 
Hence, if we further have n≥ cN_1log(d/δ), it then holds thatΔ_β_2^2≤ 9c^2(G_w H_w^-2) logd/δ/n,i.e., β∈_β^⋆(3c√((G_w H_w^-2)logd/δ/n)).Here N_1:=max{ W^2 B_2^2H_w^-1_2^2, W^2 B_3^2(G_w H_w^-2)H_w^-1_2^2, (W^3 B^2_1B_2H_w^-1_2^3log^2γ(Wλ^-1/2α̃_1)/(G_w H_w^-2))^2/3,(W^4B^3_1B_3H_w^-1_2^4log^3γ(Wλ^-1/2α̃_1)/(G_w H_w^-2))^1/2,W^2 B_1^2H_w^-1_2^2log^2γ(Wλ^-1/2α̃_1)/(G_w H_w^-2)}.In other words, we show that ⊂_β^⋆(3c√((G_w H_w^-2)logd/δ/n)). Recall that by (<ref>), we know that for any β∈_β^⋆(3c√((G_w H_w^-2)logd/δ/n)) ∩^C, ℓ_n^w(β)-ℓ_n^w(β^⋆+z) > 0.Note that β^⋆+z∈. Hence there is a local minimum of ℓ_n^w(β) in . Under the event Ã', we know that the global minimum of ℓ_n^w(β) is in , i.e., β_∈⊂_β^⋆(3c√((G_w H_w^-2)logd/δ/n)).Proof of Lemma <ref>Let Ẽ:= {β_∈⊂_β^⋆(√((G_w H_w^-2)logd/δ/n)) }. Then by the proof of Lemma <ref>, for any δ∈ (0,1) and any n≥ cmax{N_1log(d/δ), N(δ), N'(δ)}, we have (Ẽ) ≥ 1-2δ.By taking A= H_w^-1/2 in Assumption <ref>, for any δ∈ (0,1) and any n≥ N(δ), with probability at least 1-δ, we have:H_w^-1/2g_2 ≤ c√((G_w H_w^-1) logd/δ/n)+ W B_1H_w^-1/2_2log^γ(W B_1H_w^-1/2_2/√((G_w H_w^-1))) logd/δ/n≤ c√((G_w H_w^-1) logd/δ/n)+ W B_1H_w^-1/2_2log^γ(W B_1H_w^-1/2_2/√((G_w H_w^-2)H_w^-1^-1_2)) logd/δ/n= c√((G_w H_w^-1) logd/δ/n)+ W B_1H_w^-1/2_2log^γ(Wλ^-1/2α̃_1)logd/δ/nWe denote Ẽ':={(<ref>) holds}. Then for any δ and any n≥ cmax{N_1(M)log(d/δ), N(δ), N'(δ)}, we have (Ẽ∩Ẽ')≥ 1-3δ. Under Ẽ∩Ẽ', β_∈, i.e., 1/2(Δ_β_-z)^TH_w(Δ_β_-z) ≤ 11c^2 W B_2(G_w H_w^-2)(logd/δ/n)^1.5+31/6c^3W B_3(G_w H_w^-2)^1.5(logd/δ/n)^1.5+ 2W ^3 B^2_1B_2H_w^-1_2^2log^2γ(Wλ^-1/2α̃_1) (logd/δ/n)^2.5 +2/3 W^4B^3_1B_3H_w^-1_2^3log^3γ(Wλ^-1/2α̃_1)(logd/δ/n)^3.In other words,H_w^1/2(Δ_β_-z)_2^2≤ 22c^2 W B_2(G_w H_w^-2)(logd/δ/n)^1.5+31/3c^3W B_3(G_w H_w^-2)^1.5(logd/δ/n)^1.5+ 4W ^3 B^2_1B_2H_w^-1_2^2log^2γ(Wλ^-1/2α̃_1) (logd/δ/n)^2.5 +4/3 W^4B^3_1B_3H_w^-1_2^3log^3γ(Wλ^-1/2α̃_1)(logd/δ/n)^3.Thus we haveH_w^1/2(β_-β^⋆)_2^2 = H_w^1/2Δ_β__2^2 = H_w^1/2(Δ_β_-z) +H_w^1/2z _2^2≤ 2H_w^1/2(Δ_β_-z)_2^2 + 2H_w^1/2z _2^2 = 2H_w^1/2 (Δ_β_-z))_2^2 + 2H_w^-1/2g _2^2by (<ref>) and (<ref>)≤4c^2 (G_w H_w^-1) logd/δ/n+ 44c^2W B_2(G_w H_w^-2)(logd/δ/n)^1.5 +62/3c^3W B_3(G_w H_w^-2)^1.5(logd/δ/n)^1.5 + 8W^3 B^2_1B_2H_w^-1_2^2log^2γ(Wλ^-1/2α̃_1) (logd/δ/n)^2.5 +8/3 W^4B^3_1B_3H_w^-1_2^3log^3γ(Wλ^-1/2α̃_1)(logd/δ/n)^3+4W^2 B_1^2H_w^-1_2log^2γ(Wλ^-1/2α̃_1) (logd/δ/n)^2To guarantee (G_w H_w^-1) logd/δ/n is the leading term, we only need (G_w H_w^-1) logd/δ/n to dominate the rest of the terms. Hence, if we further have n≥ cN_2log(d/δ), we haveH_w^1/2(β_-β^⋆)_2^2≤ 9c^2(G_w H_w^-1) logd/δ/n .Here N_2:=max{ (W B_2(G_w H_w^-2)/(G_w H_w^-1) )^2, (W B_3(G_w H_w^-2)^1.5/(G_w H_w^-1) )^2, ( W^3 B^2_1B_2H_w^-1_2^2log^2γ(Wλ^-1/2α̃_1)/(G_w H_w^-1))^2/3,(W^4B^3_1B_3H_w^-1_2^3log^3γ(Wλ^-1/2α̃_1)/(G_w H_w^-1) )^1/2, W^2 B_1^2H_w^-1_2log^2γ(Wλ^-1/2α̃_1)/(G_w H_w^-1) }.To summarize, we show that for any δ and any n≥ cmax{N_1log(d/δ), N_2log(d/δ), N(δ), N'(δ)}, with probability at least 1-3δ, we haveH_w^1/2(β_-β^⋆)_2^2≤ 9c^2(G_w H_w^-1) logd/δ/n . §.§ Proofs for Theorem <ref> For any W > 1, we construct _S(X), _T(X),andas follows.We define _T(X):=((1)) and _S(X):=((W^1/d)), where (1) and (W^1/d) are d-dimensional balls centered around the original with radius 1 and W^1/d, respectively. For notation simplicity, we denote Q:=(1) and P:=(W^1/d) in the following. The density ratios is then given byw(x):=d_T(x)/d_S(x)=W x∈ Q0 x∉ Q ,which is upper bounded by W. 
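A quick numerical sanity check of this density ratio (a sketch with arbitrarily chosen d and W) confirms that the unit ball Q occupies exactly a 1/W fraction of P, which is why w(x) equals W on Q and 0 outside Q.

```python
import numpy as np

rng = np.random.default_rng(1)
d, W, n = 5, 8.0, 1_000_000

# Uniform sampling from the d-ball of radius R = W^(1/d): uniform direction on
# the sphere and radius R * U^(1/d) for U ~ Uniform(0, 1).
R = W ** (1.0 / d)
u = rng.normal(size=(n, d))
u /= np.linalg.norm(u, axis=1, keepdims=True)
x = u * (R * rng.random(n) ** (1.0 / d))[:, None]

in_Q = np.linalg.norm(x, axis=1) <= 1.0
print("P(x in Q) =", in_Q.mean(), " vs  1/W =", 1.0 / W)
```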
We further have _S(β)=_x∼_S(X)[xx^T]= W^2/d/3dI_d≻ 0,_T(β)=_x∼_T(X)[xx^T]= 1/3dI_d≻ 0.Let :={f(y | x;β) | β∈^d} be the linear regression class, i.e., -log f(y | x;β)=(log 2π)/2+(y-x^Tβ)^2/2. We assume the true conditional distribution belongs to a classthat is defined as:={Y | Xs.tp(y | x)=f(y | x;β^⋆_1)1_{x∈ Q}+f(y | x;β^⋆_2)1_{x∈ P∖ Q }, β^⋆_1,β^⋆_2∈_β_0(B)}for some β_0∈^d and B>0. We utilize the function classto approximate the true conditional density function, which subsequently results in model mis-specification. In the sequel, we will show the lower bound of excess risk for any estimators under this model class .Fix any ground truth model M∈, that is, we are assuming the true conditional distribution follows the form:p(y | x)=f(y | x;β^⋆_1)1_{x∈ Q}+f(y | x;β^⋆_2)1_{x∈ P∖ Q },where β^⋆_1 and β^⋆_2 are arbitrarily chosen fixed points from _β_0(B). Note that the model is actually well-specified on the target domain. Hence the optimal fit on the target is given byβ^⋆(M)=_β_(x,y)∼_T(X,Y)[ℓ(x,y,β)]=β^⋆_1. For linear regression, it is easy to verify that Assumption <ref>, <ref> and <ref> hold. Let R_0 and R_1 be the parameters chosen by Lemma <ref>. Then similar to the proofs of Theorem <ref>, we haveinf_β̂sup_M∈_(x_i,y_i)∼_S(X,Y)[R_M(β̂)]=inf_β̂sup_β^⋆_1,β^⋆_2∈_β_0(B)_(x_i,y_i)∼_S(X,Y)[R_β^⋆_1(β̂)]≥inf_β̂sup_β^⋆_1,β^⋆_2∈_β_0(R_1)_(x_i,y_i)∼_S(X,Y)[R_β^⋆_1(β̂)]≥inf_β̂∈_β_0(R_0)sup_β^⋆_1,β^⋆_2∈_β_0(R_1)_(x_i,y_i)∼_S(X,Y)[R_β^⋆_1(β̂)]≥1/4inf_β̂∈_β_0(R_0)sup_β^⋆_1,β^⋆_2∈_β_0(R_1)_(x_i,y_i)∼_S(X,Y)[(β̂-β^⋆_1)^T_T(β_0)(β̂-β^⋆_1)]≥1/4inf_β̂∈_β_0(R_0)sup_β^⋆_1,β^⋆_2∈ C_β_0(R_1/√(d))_(x_i,y_i)∼_S(X,Y)[(β̂-β^⋆_1)^T_T(β_0)(β̂-β^⋆_1)]= 1/4inf_β̂∈_β_0(R_0)sup_[β^⋆ T_1,β^⋆ T_2]∈ C_[β^T_0,β^T_0](R_1/√(d))_(x_i,y_i)∼_S(X,Y)[(β̂-β^⋆_1)^T_T(β_0)(β̂-β^⋆_1)]By Theorem 1 in <cit.> (multivariate van Trees inequality) with ψ(β_1^⋆,β_2^⋆)=β_1^⋆, C(β_1^⋆,β_2^⋆)≡ C:=[WI_d,0]∈^d× 2d and B(β_1^⋆,β_2^⋆)≡ B:=_T^-1(β_0), we have for any estimator β̂ and good prior density λ that supported on C_[β^T_0,β^T_0](R_1/√(d)),_[β^⋆ T_1,β^⋆ T_2]∼λ_(x_i,y_i)∼_S(X,Y)[(β̂-β^⋆_1)^T_T(β_0)(β̂-β^⋆_1)]≥(Wd)^2/2nWd+(λ),where(λ) = ∫_C_[β^T_0,β^T_0](R_1/√(d))(∑_i,j,k,ℓB_ijC_ikC_jℓ∂/∂β̃_kλ(β̃)∂/∂β̃_ℓλ(β̃))1/λ(β̃) dβ̃.Let β̃_0=[β_0,1,…,β_0,d,β_0,1,…,β_0,d]^T, β̃=[β_1,…,β_2d]^T andf_i(x) := π√(d)/4R_1cos(π√(d)/2R_1(x-β̃_0,i)),i=1,…,2d.We define the prior density asλ(β̃):= Π^2d_i=1f_i(β_i)β̃∈ C_[β^T_0,β^T_0](R_1/√(d))0β̃∉ C_[β^T_0,β^T_0](R_1/√(d)) .Then following the same argument as in the proof of Lemma <ref>, we have(λ)=π^2d/R_1^2(BCC^T)=π^2W^2 d/R_1^2(^-1_T(β_0)).As a result, for any estimator β̂, we have_[β^⋆ T_1,β^⋆ T_2]∼λ_(x_i,y_i)∼_S(X,Y)[(β̂-β^⋆_1)^T_T(β_0)(β̂-β^⋆_1)]≥(Wd)^2/2nWd+π^2W^2 d/R_1^2(^-1_T(β_0)), which impliessup_[β^⋆ T_1,β^⋆ T_2]∈ C_[β^T_0,β^T_0](R_1/√(d))_(x_i,y_i)∼_S(X,Y)[(β̂-β^⋆_1)^T_T(β_0)(β̂-β^⋆_1)]≥_[β^⋆ T_1,β^⋆ T_2]∼λ_(x_i,y_i)∼_S(X,Y)[(β̂-β^⋆)^T_T(β_0)(β̂-β^⋆)]≥(Wd)^2/2nWd+π^2W^2 d/R_1(^-1_T(β_0)).Combine (<ref>) and (<ref>), we haveinf_β̂sup_M∈_(x_i,y_i)∼_S(X,Y)[R_M(β̂)]≥1/4·(Wd)^2/2nWd+π^2W^2 d/R_1(^-1_T(β_0))≳Wd/nwhen n is sufficiently large.Recall thatH_w(M)=_(x,y)∼_T(X,Y)[∇^2ℓ(x,y,β^⋆(M))]=_(x,y)∼_T(X,Y)[∇^2ℓ(x,y,β^⋆_1)]=_T(β^⋆_1).and by the definition of w(x), we further haveG_w(M)=_(x,y)∼_S(X,Y)[w(x)^2∇ℓ(x,y,β^⋆(M))∇ℓ(x,y,β^⋆(M))^T]=_(x,y)∼_T(X,Y)[w(x)∇ℓ(x,y,β^⋆(M))∇ℓ(x,y,β^⋆(M))^T]=W_(x,y)∼_T(X,Y)[∇ℓ(x,y,β^⋆(M))∇ℓ(x,y,β^⋆(M))^T]=W_(x,y)∼_T(X,Y)[∇ℓ(x,y,β^⋆_1)∇ℓ(x,y,β^⋆_1)^T]=W_T(β^⋆_1).Therefore (G_w(M)H_w(M)^-1)= Wd, which gives the desired result. 
What remains is to verify thatsatisfies Assumption <ref>, <ref>, <ref> and <ref>. Assumption <ref> is trivially satisfied. For Assumption <ref> and <ref>, notice that ∇ℓ(x,y,β) =-x(y-x^Tβ),∇^2ℓ(x,y,β) =xx^T,∇^3ℓ(x,y,β) =0.andw(x):=d_T(x)/d_S(x)=W x∈ Q0 x∉ Q ,By the definition of , we can write the distribution of y as y_i=x_i^Tβ_1^⋆ + ϵ_i x_i∈ Qx_i^Tβ_2^⋆ + ϵ_i x_i∉ Q ,where ϵ_i is a (0,1) noise independent of all x_i's. Therefore let u_i := A w(x_i)∇ℓ(x_i,y_i,β^⋆(M)), we have u_i=-WA x_i ϵ_i x_i∈ Q0 x_i∉ Q ,which indicates that u_i is AW-subgaussian. Therefore by Lemma <ref>, the vector concentration in Assumption <ref> is satisfied with γ=0.5, B_1=1. For the matrix concentration, notice that w(x_i)∇^2ℓ(x_i,y_i,β^⋆(M))=W x_i x_i^Tx_i∈ Q0 x_i∉ Q ,therefore my matrix Hoeffding, w(x_i)∇^2ℓ(x_i,y_i,β^⋆(M))_2 ≤ W,thus the matrix concentration in Assumption <ref> is satisfied with B_2=1. Further more, N(δ)=0 is enough for satisfying Assumption <ref>.Assumption <ref> is satisfied with B_3=0 since∇^3ℓ(x,y,β) =0.For Assumption <ref>, we can prove that it is satisfied with N'(δ)= max{8W log1/δ, 2dW}. This is because, (∇^2ℓ_n^w(β) ≻ 0for all β)= (W/n∑_i=1^nx_i x_i^T_x_i ∈ Q ≻ 0) ≥(#{x_i ∈ Q} > d) = 1- (#{x_i ∈ Q}≤ d) by Chernoff bound≥1- exp (-μ/2(1-d/μ)^2) ≥ 1- δ,where μ:=n/W, and the last inequality hold when n≥ N'(δ). Therefore when n≥ N'(δ), with probability at least 1-δ, ℓ_n^w is strictly convex, therefore has a unique local minimum which is also the global minimum.§ AUXILIARIESIn this section, we present several auxiliary lemmas and propositions. §.§ Concentration for gradient and HessianThe following lemma gives a generic version of Bernstein inequality for vectors.Let u, u_1,⋯,u_n be i.i.d. mean-zero random vectors. We denote V = [u^2_2] and B^(α)_u:=inf{t>0: [exp(u^α/t^α)]≤ 2},α≥ 1.Suppose B^(α)_u<∞ for some α≥ 1. Then there exists an absolute constant c>0 such that for all δ∈ (0,1), with probability at least 1-δ:1/n∑_i=1^nu_i_2≤ c(√(Vlogd/δ/n)+B^(α)_u(logB^(α)_u/√(V))^1/αlogd/δ/n).See Proposition 2 in <cit.> for the proof. The following proposition shows that when gradient and Hessian are bounded or sub-Gaussian (sub-exponential), Assumption <ref> is naturally satisfied.If ∇ℓ (x_i,y_i,β^⋆)_2≤ b_1 for all i∈ [n], then the vector concentration (<ref>) is satisfied with B_1=b_1 and γ=0. Alternatively, if ∇ℓ (x_i,y_i,β^⋆)_2 is b_1-subgaussian, then (<ref>) is satisfied with B_1=b_1 and γ=1/2. When ∇ℓ (x_i,y_i,β^⋆)_2 is b_1-subexponential, then (<ref>) is satisfied with B_1=b_1 and γ=1. For the Hessian concenntration, if ∇^2 ℓ (x_i,y_i,β^⋆)_2≤ b_2 for all i∈ [n], then (<ref>) is satisfied with B_2=b_2. The vector concentration (<ref>) is a direct proposition of Lemma <ref>. The Hessian concentration (<ref>) is a direct consequence of matrix Hoeffiding inequality. | http://arxiv.org/abs/2311.15961v1 | {
"authors": [
"Jiawei Ge",
"Shange Tang",
"Jianqing Fan",
"Cong Ma",
"Chi Jin"
],
"categories": [
"stat.ML",
"cs.LG",
"math.ST",
"stat.TH"
],
"primary_category": "stat.ML",
"published": "20231127160648",
"title": "Maximum Likelihood Estimation is All You Need for Well-Specified Covariate Shift"
} |
Properties of Steady Sub-Alfvénic Solar Wind in Comparison with Super-Alfvénic Wind from Measurements of Parker Solar Probe [ January 14, 2024 =========================================================================================================================== Federated Unlearning (FU) aims to delete specific training data from an ML model trained using Federated Learning (FL). We introduce , an efficient and original FU method that utilizes dataset distillation (DD) to accelerate unlearning and drastically reduces computational overhead compared to existing approaches. In , each client uses DD to generate a compact dataset representative of the original training dataset, called a distilled dataset, and uses this compact dataset during unlearning. To unlearn specific knowledge from the global model, has clients execute Stochastic Gradient Ascent with samples from the distilled datasets, thus significantly reducing computational overhead compared to conventional FU methods. We further increase the efficiency of by ingeniously integrating DD into the FL training process. By reusing the gradient updates produced during FL training for DD, the overhead of creating distilled datasets becomes close to negligible. Evaluations on three standard datasets show that, with comparable accuracy guarantees, reduces the duration of unlearning by 463.8× compared to model retraining from scratch and 65.1× compared to existing FU approaches. We also demonstrate the scalability of with 100 clients and show its effectiveness while handling multiple unlearning operations.§ INTRODUCTIONThe vast amount of data produced by computing devices is increasingly being used to train large-scale ML models that empower industrial processes and personal experiences <cit.>. However, this data is often privacy sensitive or very large in volume, making it prohibitively expensive or impossible to upload it to a central server <cit.>. To sidestep this issue, FL is increasingly being applied to collaboratively train ML models in a privacy-preserving manner <cit.>. FL obviates the need to move the data to a central location by having participants only exchange model updates with a server. In each round, this server aggregates all incoming trained models and sends the updated global model to participants. Recent privacy regulations like the GDPR grant data owners with the right to be forgotten <cit.>. In the realm of ML, this requires organizations to remove the influence of this data on the trained model <cit.>. The latter is called machine unlearning <cit.>. For instance, hospitals that collaboratively trained a model using FL might have to unlearn particular data samples in response to patient requests. However, the distributed nature of FL and the inability to access training data directly makes FU a challenging task.A naive way to unlearn particular samples is to retrain the model from scratch while omitting these samples. As the model and data size increase, complete retraining quickly becomes prohibitively expensive regarding the time and compute resources required. A more effective approach is to use SGA on the samples being unlearned, , optimizing in the direction that maximizes the loss function <cit.>. However, since SGA updates the entire model, it also deteriorates the model performance on the remaining samples.Hence, this approach typically utilizes a recovery phase, which retrains on the remaining samples for a few rounds to restore performance. 
While this method is more efficient than full retraining, the unlearning and recovery phases are still computationally demanding when dealing with a large volume of training data.This paper presents , a novel FU approach that efficiently performs unlearning and recovery. achieves this efficiency by harnessing DD, a technique to condense a large training dataset into a compact synthetic dataset that is orders of magnitude smaller in volume <cit.>. This synthetic dataset is generated such that it largely preserves the features of the original training dataset. then utilizes this compact dataset during the unlearning and recovery phases. Figure <ref> depicts the complete workflow. Clients first engage in regular FL training with their train dataset to collaboratively produce a trained model. Each client also generates a compact synthetic dataset using DD. Upon receiving the initial unlearning request, the network executes unlearning rounds, during which each client performs SGA on their local distilled dataset. The network then executes recovery rounds, during which they also use the distilled data. The key to efficiency in is the tiny volume of distilled data, resulting in efficient downstream unlearning compared to when using the original datasets of clients.The distilled datasets can be generated independently from the FL training as the distilled dataset becomes necessary only after training finishes. More specifically, by leveraging a recent DD technique based on gradient matching <cit.>, smartly re-uses the gradient updates produced during FL training for DD, thus significantly reducing the overhead of the DD process. Contributions. This paper makes the following three key contributions: * We introduce , a novel and efficient federated unlearning approach that leverages dataset distillation to unlearn specific knowledge from a trained model (<Ref>).* To reduce the computational overhead of DD, we seamlessly integrate DD into FL by reusing the gradient updates generated during FL training for creating distilled datasets (<Ref>).* We implement and open-source , and evaluate its unlearning performance in terms of efficiency and accuracy on three standard datasets (Section <ref>). We find that reduces the duration of class unlearning by 463.8× compared to model retraining from scratch and 65.1× compared to state-of-the-art FU approaches. § BACKGROUND AND RELATED WORK Federated Unlearning. <cit.> first propose the concept of MU to eliminate the contribution of one specific training data sample from a well-trained model. Since then, many other algorithms for machine unlearning have been introduced <cit.>. These works focus mainly on unlearning knowledge from linear classification models, , for logistic regression, but are unsuitable for more complex models, , deep neural networks. Some algorithms have other restrictions and can only be applied to specific model architectures or scenarios, , <cit.> only fits random forests model and <cit.> is only for Bayesian learning.Federated Unlearning FU is a MU technique where knowledge is removed from a trained model in a distributed and collaborative manner. Generally, FU is more challenging than MU for the following two reasons. First, the model aggregation of FL intertwines the information of all clients, making it hard to identify and remove specific data samples. Secondly, client data is only available locally and cannot be moved to a central server, therefore mandating active participation by clients to perform unlearning. 
<cit.> propose FedEraser to achieve client-level unlearning, , eliminating the impact of data of one FL client from the FL global model. FedEraser reconstructs the global model by utilizing all historical update gradients of clients stored in the server during training. The key idea here is to trade the additional storage cost for less computation cost (faster unlearning). <cit.> mainly focus on class-level unlearning; they measure the class discrimination of channels in the model, , the relevance of different classes on the model channel) and then prune the most relevant channel of the target class to unlearn it. <cit.> proposed a more general federated unlearning framework by inverse gradient ascent, which achieves unlearning on class, client, and sample levels. However, this process remains inefficient, particularly when the volume of data is large or when multiple unlearning requests need to be executed.Dataset Distillation.The goal of DD is to replace a large training dataset with a significantly smaller one that can substitute the original dataset during model training <cit.>. DD can potentially speed up downstream tasks such as continual learning <cit.> and neural architecture search <cit.>. employs DD to speed up the unlearning process significantly. Early DD approaches are based on coreset selection that identifies a subset of influential samples during training. Sample selection, for example, can aim to maximize diversity <cit.>, detect clusters of similar samples <cit.>, or identify individual samples that lead to high accuracy <cit.>. Another class of algorithms synthesizes a set of new samples from the original dataset. The approach described in <cit.> is to match the gradients of a model trained on the original and synthetic data. Follow-up work has introduced distillation techniques based on trajectory gradient matching <cit.>, differential data augmentation functions <cit.>, distribution matching <cit.> and feature alignment <cit.>. Other work utilizes DD for one-shot FL, which significantly reduces communication cost compared to multi-round FL <cit.>.§ DESIGN OF In this section, we first formally define our notation and problem setup.We then describe the unlearning algorithm in <Ref>.<Ref> describes how we leverage DD to unlock efficient unlearning and presents the DD algorithm. <Ref> shows how integrates DD with FL training before summarizing end-to-end in <Ref>.Problem Setup.We consider an FL system containing total N clients (, mobile devices), where each client i ∈ N holds a local training dataset D_i. The clients collaboratively train a global FL modelfollowing a standard FL algorithm (<cit.>).Once the global modelis trained, the federated server may receive an unlearning request for the subset D_f. We refer to D_f as the forgetting dataset that needs to be unlearned from the global model .The characterization of D_f defines the type of unlearning performed. For instance, when D_f contains the data of an entire class, we perform class-level unlearning, whereas when D_f contains a single client's data, we perform client-level unlearning. We define a FU algorithm 𝒰 as _f = 𝒰(, D_f), where _f is the unlearned model. Unlearning aims to obtain a model _f that is equivalent in performance to a model trained only on the remaining dataset D \ D_f. In other words, the unlearning model _f should achieve good performance on D \ D_f while showing poor performance on D_f. 
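In practice this success criterion is measured by contrasting accuracy on the forgotten data with accuracy on the retained data. The following PyTorch-style sketch illustrates such a check; the helper names and data loaders are illustrative and are not taken from our implementation.

```python
import torch

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    """Top-1 accuracy of a classification model over a DataLoader."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        pred = model(x).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / max(total, 1)

def unlearning_report(model, forget_loader, retain_loader):
    # A successful unlearning run drives the forget accuracy toward chance level
    # while keeping the retain accuracy close to that of the original model.
    return {"forget_acc": accuracy(model, forget_loader),
            "retain_acc": accuracy(model, retain_loader)}
```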
§.§ The unlearning algorithm When the server receives an unlearning request for D_f, it initiates unlearning rounds that resemble traditional FL training rounds. However, each client now performs instead of regular on the portion of samples in D_f. In each round, the server aggregates the models received from clients. However, training often introduces noise that affects the performance of remaining data <cit.>. This noise necessitates subsequent recovery rounds during which clients engage in regular training on the remaining data, , D \ D_f. We refer to D \ D_f as the recovery set. As we experimentally show later, this recovery phase rapidly restores the accuracy of the remaining classes. The execution of an unlearning request, therefore, encompasses unlearning on D_f and recovery on D \ D_f, thus updating the model with the entire dataset. Therefore, this process remains inefficient, in particular with high data volumes or in the presence of many unlearning requests.§.§ Dataset Distillation for Efficient UnlearningTo significantly reduce the volume of data involved when executing an unlearning request, we utilize DD to condense all critical information from the original training dataset into a significantly smaller synthetic or distilled dataset S. In our FL setting, each client i ∈ N independently distills its local dataset D_i into S_i such that |S_i| ≪ |D_i|. The unlearning algorithm 𝒰 can thus be modified as _f = 𝒰(, S_f), where S_f is the counterpart of the unlearning dataset D_f in the distilled dataset. Since the distilled data is orders of magnitude smaller in volume, the unlearning task can be carried out very efficiently.adopts the recent algorithm of <cit.> to perform dataset distillation on each FL client. The primary reason for choosing this algorithm is the striking similarity of the algorithmic structure to standard FL algorithms. As one of the main contributions in this paper, we leverage this similarity to integrate the process of distilling data while training the FL model. This in-situ distillation avoids expensive computational overheads of distilling data separately.Dataset Distillation. Before describing the algorithm of <cit.>, we first formalize the task of DD. We adopt the DD formulation from <cit.>. Suppose we are given a training datasetwith || pairs of a training images and class labels ={(_i,y_i)}|_i=1^|| where ∈⊂ℝ^d, y∈{0,…,C-1},is a d-dimensional input space and C is the number of classes.The goal is to learn a differentiable function ϕ(a deep neural network) with parametersthat correctly predicts labels of unseen images. One can learn the parameters of this function by minimizing an empirical loss term over the training set:^=_^()where ^()=1/||∑_(,y)∈ℓ(ϕ_(),y), ℓ(·,·) is a task-specific loss (cross-entropy) and ^ is the minimizer of ^. The generalization performance of the obtained model ϕ_^ can be written as 𝔼_∼ P_𝒟[ℓ(ϕ_^(),y)] where P_𝒟 is the data distribution. The goal of DD is to generate a small set of condensed synthetic samples with their labels, ={(_i,y_i)}|_i=1^|| where ∈ℝ^d and y∈𝒴, || ≪ ||. Similar to <ref>, one can train ϕ with these synthetic samples as follows:^=_^𝒮()where ^()=1/||∑_(,y)∈ℓ(ϕ_(),y) and ^ is the minimizer of ^.The goal of DD is to obtain S such that the generalization performance of ϕ_^ is as close as possible to ϕ_^, 𝔼_∼ P_𝒟[ℓ(ϕ_^(),y)]≃𝔼_∼ P_𝒟[ℓ(ϕ_^(),y)] over the real data distribution P_𝒟.DD with Gradient Matching. 
The goal of obtaining comparable generalization performance by training on the condensed data can be formulated in multiple ways. <cit.> formulate the problem such that the model ϕ_^ trained on S achieves not only comparable generalization performance to ϕ_^ but also converges to a similar solution in the parameter space (^≈^). This is achieved by matching gradients obtained over training and synthetic data over several time steps, thereby making ^ follow a similar path to ^ throughout the optimization.Precisely,is obtained as minimizer of the following: min_E__0∼ P__0[∑_t=0^T-1 d(∇_^(_t),∇_^(_t))]where d(.;.) is a function that measures the distance between the gradients for ^ and ^ w.r.t . Since deep neural are sensitive to initialization, the optimization in <Ref> aims to obtain an optimum set of synthetic images to generate samples that can work with a distribution of random initializations P__0. At each time step, the synthetic data samples are updated by running a few local steps of an optimization algorithm(, ): ←_(d(∇_^(_t),∇_^(_t)),ς_,η_),where ς_ and η_ correspond to the number of steps and the learning rate. The model parameters are then updated through the loss on the updated synthetic data:_t+1←_t -η_∇^(_t)where η_ is the learning rate for the update. The gradient matching continues for T steps along the optimization path, progressively improving the quality of S.This process is repeated for several initializations K drawn from P__0.<Ref>, adapted from <cit.>, shows the complete distillation process.§.§ Integrating DD with FL training by Re-using GradientsProducing good-quality synthetic data that can reliably be used as a substitute for the training data requires many local update steps. Thus, DD is computationally expensive. To reduce this overhead, we integrate the DD process with FL training such that clients reuse the gradient updates computed during FL training to perform DD. We achieve this by exploiting the algorithmic similarity of <cit.> (<Ref>) with standard FL algorithms. <Ref> shows how integrates DD with FL training, with the key differences from <Ref> colored in red.The outer loop of <Ref> corresponds to global rounds in FL (line 1), while the inner loop corresponds to the local update steps in FL (line 4).The random initialization of model parameters in DD is substituted with initialization to the global model parameters received from the server in round k (line 3). A main difference between Algorithm <ref> and <ref> is that Algorithm <ref> distills using K randomly initialized models, whereas Algorithm <ref> only uses a single initialized model. This still works for unlearning as we do not prioritize generalization across different initializations in .Our goal is instead to create a synthetic dataset with which can be utilized to unlearn knowledge from a model that was trained with a specific initialization. Clients generate gradients on mini-batches of their original training data to perform local updates (line 6). effectively utilizes these gradients to update the synthetic data by matching the gradients computed on local synthetic data (lines 7-8). Finally, the model update step on synthetic data in DD is substituted by the local update, which the client performs for FL training (line 10). The federated server aggregates the received models before commencing the next round (line 11). Thus, exploits gradients from FL training to efficiently generate distilled data for downstream unlearning. 
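The per-client logic of the integrated procedure above can be summarized by the following PyTorch-style sketch. It is a simplified illustration rather than our actual implementation: the gradient-matching distance d is passed in as a placeholder function, a single random batch is matched instead of the class-wise matching discussed below, and the learning rates and step counts are illustrative.

```python
import torch

def local_update_with_distillation(model, real_batch, syn_data, syn_labels,
                                   match_distance, lr_model=0.01, lr_syn=0.1,
                                   syn_steps=1):
    """One local FL step that reuses its training gradient to refine synthetic data.

    syn_data must be a leaf tensor with requires_grad=True; match_distance is a
    placeholder for the gradient-matching distance d(.,.) and returns a scalar.
    """
    loss_fn = torch.nn.CrossEntropyLoss()
    x, y = real_batch

    # (1) Gradient on real data: the ordinary FL training gradient.
    real_loss = loss_fn(model(x), y)
    real_grads = [g.detach()
                  for g in torch.autograd.grad(real_loss, list(model.parameters()))]

    # (2) Reuse that gradient to update the synthetic samples by gradient matching.
    syn_opt = torch.optim.SGD([syn_data], lr=lr_syn)
    for _ in range(syn_steps):
        syn_loss = loss_fn(model(syn_data), syn_labels)
        syn_grads = torch.autograd.grad(syn_loss, list(model.parameters()),
                                        create_graph=True)
        syn_opt.zero_grad()
        match_distance(real_grads, syn_grads).backward()
        syn_opt.step()

    # (3) Ordinary local SGD step on the real batch, i.e., the usual FL update.
    model_opt = torch.optim.SGD(model.parameters(), lr=lr_model)
    model_opt.zero_grad()
    loss_fn(model(x), y).backward()
    model_opt.step()
```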
We note that the algorithm in <cit.> proposes a class-wise gradient matching which performs better than random batches. We also utilize it in but omit from the pseudo code for presentation clarity.Fine-Tuning distilled data. As the FL model training approaches convergence, the magnitude of gradients computed on both the training data and the synthetic data approaches zero. This prevents the synthetic data from being updated, resulting in slightly lower quality samples than when performing DD independently of FL. Therefore, to match the performance of independent DD, we let clients conduct additional fine-tuning steps to further improve the quality of their synthetic data. In this fine-tuning phase, clients perform <Ref> on their synthetic data.§.§ End-to-end Workflow ofFinally, we summarize the end-to-end workflow of , which is also depicted in Figure <ref>. Clients initially train a global model via FL while also conducting dataset distillation (DD) to generate distilled datasets. For this, we use the integrated DD approach based on gradient matching described in Sections <ref> and <ref>. The quality of the distilled dataset can be improved using fine-tuning. The distilled dataset is then employed for unlearning and recovery, also see Section <ref>.Mixing original data samples. Clients also utilize the distilled dataset for the recovery phase.In our experiments, we observed that while using distilled data works well for unlearning, it slightly hurts the achieved performance in the recovery phase as the distilled samples are not a perfect representation of the original datasets. We found that even including a few original samples into the distilled datasets can nullify this performance drop. Thus, clients in perform recovery on the merged dataset comprising distilled data and a few original samples.§ EVALUATIONWe evaluate and compare its efficiency and performance with state-of-the-art FU baselines. We first describe the experimental setup and then show the unlearning performance of , the computational efficiency of our integrated FL and DD approach, the impact of additional fine-tuning steps, and finally the scalability of with 100 clients. §.§ Experimental Setup We evaluate the performance of on three standard image classification datasets: <cit.>, <cit.>, and <cit.>. For all datasets, we use a ConvNet as the deep neural network backbone <cit.>. All experiments are conducted on a machine equipped with an i9-10900K CPU and an RTX 3090 GPU. All source code is available online. Link omitted due to double-blind review.Federated Learning. To generate heterogeneous client local datasets, we adopt the Dirichlet distribution based approach from several previous works <cit.>.The degree of non-IIDness is controlled by a parameter α∈ [0, ∞), with lower values corresponding to more heterogeneity. In this section, we fix α = 0.1 which is highly non-IID, and show experiments with IID distributions in <Ref>. We conduct all experiments in this section with ten clients, and we quantify the scalability of with 100 clients on the dataset in Section <ref>. We use full client participation in each round and train for 200 global rounds (, K=200), sufficient to reach convergence. All other FL-related hyper-parameters follow <cit.>.Dataset Distillation. We initialize the synthetic samples {_i}_i=1^N as randomly selected real training samples from the original local client dataset. We found this to be more effective in our setting than when initializing these samples from Gaussian noise. 
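For completeness, the Dirichlet-based partitioning described above can be sketched as follows (function and parameter names are illustrative). The per-class proportions are drawn independently for each class, so smaller α concentrates each class on fewer clients and therefore yields more heterogeneous local datasets.

```python
import numpy as np

def dirichlet_partition(labels, num_clients=10, alpha=0.1, seed=0):
    """Split sample indices across clients with per-class Dirichlet(alpha) proportions."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        # Cumulative split points over this class's samples.
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client_id, part in enumerate(np.split(idx, cuts)):
            client_indices[client_id].extend(part.tolist())
    return [np.asarray(ci, dtype=int) for ci in client_indices]
```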
We use the same distance function d for gradient matching as in <cit.>. Our evaluation mainly focuses on class-level unlearning and shows extensive evaluations comparing to state-of-the-art baselines. Throughout this section, we refer to the class(es) being unlearned as target class(es). However, we note that also supports client-level unlearning, which we experimentally show in <Ref>.To guarantee that the distribution of the distilled dataset of each client reflects their original dataset distribution, we scale the number of distilled samples for each class for different clients by a factor s (, distillated sample size per class=original data size per class/s). For any class with a distilled sample size of zero after scaling, we will round it up to 1 to ensure that this class has at least one distilled sample. We fix s=100 for all experiments, which we found to yield a reasonable balance between efficiency and effectiveness. All other DD-related hyper-parameters follow <cit.>. Baselines. We compare the performance of to the following three baselines: * : This baseline retrains a model from scratch with FL using the original dataset minus the forgetting dataset.* : This baseline performs SGA using the original dataset <cit.>. When the deletion request of a target class arrives, every involved client executes SGA on the data of the target class it owns to unlearn the contribution of the target class in the model.* : This baseline uses model pruning by first measuring the class discrimination of channels in the model (, the relevance of different classes on the model channel) and then prunes the most relevant channel of the target class to unlearn it <cit.>. All reported testing accuracies are the Top-1 accuracy achieved on the testing data of each dataset. We turn off fine-tuning for all experiments (F = 0), except for the experiments reported in Section <ref>. For all experiments with , we perform a single round of unlearning and two rounds for recovery, which we found to be the minimum for the model to sufficiently unlearn particular knowledge and restore the performance of the remaining classes. Doing more unlearning rounds would introduce noise into the model and lower the accuracy of the remaining classes, making it more difficult to recover them later. We run each experiment five times and average all results. Additional notes on the experimental setup and parameters can be found in <Ref>. §.§ Performance Evaluation on a Single Unlearning Requestr6cm< g r a p h i c s >The testing accuracy for all classes with when unlearning class 9 and recovering the accuracy of the other classes with .Unlearning a Single Class. We first quantify the change in testing accuracy of target and non-target classes after the unlearning and recovery stages, using the dataset and 10 clients. The network collaboratively unlearns from the model the knowledge corresponding to class 9 (digit 9) by performing one round of unlearning and two rounds of recovery. Figure <ref> shows the testing accuracy for each class over time in different colors when unlearning with . When starts the unlearning stage, we observe a rapid accuracy drop on the target class while the accuracy of non-target classes decreases as well. This is because SGA introduces some noise that affects non-target classes, even though the model parameters changed by SGA are mainly for unlearning the knowledge of the target class. 
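Schematically, the per-client unlearning step, i.e. one round of stochastic gradient ascent on the distilled samples of the target class, can be sketched as below; the learning rate is a placeholder and the server-side aggregation of the unlearned models is omitted.

```python
import torch

def unlearn_round(model, syn_x, syn_y, target_class, lr=0.01):
    """One client-side unlearning round: stochastic gradient *ascent* on the
    distilled samples of the class to be forgotten (lr is a placeholder)."""
    loss_fn = torch.nn.CrossEntropyLoss()
    mask = (syn_y == target_class)
    loss = loss_fn(model(syn_x[mask]), syn_y[mask])
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p.add_(lr * p.grad)      # move *up* the loss surface
    return model
```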
The recovery stage that starts at round 3, therefore, restores the accuracy of non-target classes by training the global model on all distilled data of the remaining classes using SGD.Figure <ref> shows that the accuracies of non-target classes after two recovery rounds are almost restored to their original values.Testing Accuracy. Next, we compare the testing accuracy of with our baselines on when unlearning a single class. Table <ref> shows the testing accuracy on the and R-Set for different FU approaches and after each stage (unlearning and recovery). Theis the set of samples being unlearned, and the R-Set is comprised of all other samples. Ideally, any FU approach should achieve near-zero testing accuracy on the after unlearning. We remark that there is no recovery stage for . Table <ref> shows that, after the unlearning stage, all approaches effectively eliminate the knowledge of the target class in the model as the testing accuracy is near zero (0.85% for ). After recovery, and restore the accuracy of the R-Set close to the values achieved with . The accuracy of after recovery, 71.98%, is lower than that of the baselines. This is because the synthetic samples generated do not represent the original dataset perfectly. However, additional fine-tuning of distilled datasets can close this gap, at the cost of additional computation (see Section <ref>). Nonetheless, we conclude that effectively unlearns data samples with minimal impact on the remaining samples.Computation Efficiency. Table <ref> also shows the computational cost for unlearning and recovery in terms of rounds, time required, and data samples involved in executing these rounds. We observe significant differences in computation cost between different FU approaches. Since DD reduces the number of samples for each client used during unlearning, the unlearning stage (one round) in only takes 5.03, and 10.58 in the recovery stage; both stages are completed in just 15.61. This efficiency is because a round of unlearning and recovery with only involves 100 and 900 data samples, respectively. Although only needs two rounds to unlearn a target class adequately, it takes 247.58 to complete a round as completing this round updates the model with all clients' original data (5000 data samples in the unlearning stage and 45000 samples in the recovery stage). While is the simplest method with high testing accuracies after unlearning, its computational time renders this approach infeasible in many scenarios, which is 1447× higher than and 14× higher than . We note that employs a different technique, model pruning, than the other approaches in its unlearning stage, while the recovery stage is the same as others. Since model pruning only depends on information obtained from inference, which can be done relatively quickly, the time to complete a single round (61.36) is relatively small compared to (247.58), but its gap with (5.03) is still significant. From <Ref> we conclude that achieves quick unlearning and recovery using only a few data samples and with little computational overhead. is not only suitable for efficient unlearning but can also be applied for efficient relearning when the deleted knowledge is needed again. 
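Since recovery and relearning both reduce to ordinary SGD on the distilled data (excluding or re-including the affected classes), they can be sketched in a few lines; the learning rate and number of steps are placeholders, and the mixing-in of a few original samples is omitted for brevity.

```python
import torch

def recovery_round(model, syn_x, syn_y, forgotten_classes, lr=0.01, steps=1):
    """Recovery (and, symmetrically, relearning): plain SGD on the distilled
    samples of the classes that remain in (or rejoin) the model."""
    keep = ~torch.isin(syn_y, torch.as_tensor(list(forgotten_classes)))
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(syn_x[keep]), syn_y[keep]).backward()
        opt.step()
    return model
```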
We use the distilled datasets to relearn particular samples and present experiments showing the performance of unlearning in <Ref>.r7cm 0.84Stage2cUnlearning 2|cRecovery(r)2-3 (r)4-5FU approach c]@c@c]@c@R-Set c]@c@c]@c@ R-Set(r)1-1 (r)2-3 (r)4-50.93% 53.39% 0.81% 71.62% 0.37% 77.25% — —0.68% 65.71% 0.72% 74.21%0.79% 37.68% 0.94% 72.56% The membership inference attack (MIA) accuracy of all baselines in different stages.Membership Inference Attack. We conduct a MIA on the unlearned model to further assess the effectiveness of unlearning with , which follows related work on MU <cit.>. The MIA aims to determine whether a particular sample is included in the model's knowledge. We implement the MIA according to the settings in <cit.> and measure how often the attack model will classify a data sample from deleted data as a training sample of the unlearning model. Table <ref> shows these MIA accuracies of different methods and stages. The performance of can be considered optimal since the produced model has never seen the unlearned samples. We find that for all approaches, the MIA accuracy on the after the unlearning stage is below 1%. The MIA accuracy on the R-Set for evaluated FU methods show similar trends as the values observed in Table <ref>, , shows a slightly lower MIA accuracy on the R-Set compared to other baselines. §.§ The Performance of with Sequential Unlearning Requests In real-world settings, clients may continually launch multiple, sequential unlearning requests. Therefore, we go beyond existing work on FU and evaluate the performance of with sequential unlearning requests. Figure <ref> shows the accuracies when sequentially unlearning all ten classes in random order. We observe that the accuracy of a target class drops to near zero after the unlearning phase for that target class ends, illustrating the effectiveness of in handling sequential unlearning requests. The accuracies of non-target classes also drop after the unlearning stage, which is due to the noise introduced by . In the subsequent recovery stages, rapidly recovers the accuracies of remaining classes by training on the distilled datasets while leaving the low accuracy of the unlearned classes unaffected. Therefore, Figure <ref> shows the capability of in executing multiple unlearning requests. In particular situations, the network can process multiple unlearning requests in parallel. We discuss this optimization in <Ref>. §.§ Dataset Distillation and Additional Fine-tuning r6.7cm< g r a p h i c s >The testing accuracy on the R-Set after recovery (left) and gradient steps on original data (right) when doing additional fine-tuning. As discussed in <Ref>, integrating DD into FL lowers the quality of the distilled dataset compared to when conducting FL and DD separately. To offset this, allows clients to perform additional fine-tuning steps (F) by executing <Ref>. Figure <ref> (left) shows the testing accuracy of on the R-Set after the recovery stage when doing more fine-tuning (, increasing F from 0 to 200). We also show the testing accuracy reached when separately performing DD and FL (74.78%), which we consider as an optimal baseline with a dashed horizontal line. We observe an increase in accuracy as F increases: from 70.48% with F = 0 to 74.55% with F = 200. More fine-tuning, however, comes at the cost of additional computation. Figure <ref> (right) shows the number of gradient steps performed on the original dataset as F increases. 
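A rough sketch of this fine-tuning phase is given below: the converged model is kept fixed and only the synthetic images are refined for F additional gradient-matching steps, each of which requires a fresh real-data gradient, which is exactly the extra cost quantified in the figure. The loop structure and the cosine matching distance are our simplifications.

```python
import torch
import torch.nn.functional as F
from itertools import cycle, islice

def fine_tune_distilled(model, real_loader, syn_x, syn_y, F_steps=200, eta_s=0.1):
    """Post-FL fine-tuning: the converged model stays fixed and only the
    synthetic images are refined for F extra gradient-matching steps,
    each needing a fresh real-data gradient."""
    loss_fn = torch.nn.CrossEntropyLoss()
    opt_s = torch.optim.SGD([syn_x], lr=eta_s)   # syn_x: requires_grad=True
    for x, y in islice(cycle(real_loader), F_steps):
        g_real = [g.detach() for g in torch.autograd.grad(
            loss_fn(model(x), y), model.parameters())]
        g_syn = torch.autograd.grad(loss_fn(model(syn_x), syn_y),
                                    model.parameters(), create_graph=True)
        dist = sum(1.0 - F.cosine_similarity(a.flatten(), b.flatten(), dim=0)
                   for a, b in zip(g_real, g_syn))
        opt_s.zero_grad()
        dist.backward()
        opt_s.step()
    return syn_x.detach()
```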
This figure marks the portion of gradients we re-use for FL and DD in orange, which indicates the savings in computation by integrating DD and FL. When performing FL and DD separately, we require 25000 gradient steps on the original dataset (indicated by a dashed line). However, when integrating FL and DD, we only need 10000 gradient steps on the original dataset with F = 0, and we can re-use all these gradients for DD. With F = 200, this number increases to 20000.§.§ Performance and Efficiency of in Larger Networksr7cm 0.84Stage2cUnlearning 2|cRecovery(r)2-3 (r)4-5Baseline c]@c@c]@c@R-Set c]@c@c]@c@ R-Set(r)1-1 (r)2-3 (r)4-50.72% 58.57% 0.81% 84.96% 0.34% 88.39% — —0.53% 52.48% 0.66% 86.47%0.58% 39.46% 0.73% 85.63% Testing accuracy of different FU approaches with 100 clients and the SVHN dataset.Finally, we analyze the unlearning performance of and other baselines in a 100-client network and with the dataset. is a large dataset containing more than 600000 samples and 10 classes. In each round, the server selects a random 10% subset of clients to update the model. <Ref> shows for each approach and stage the testing accuracy on the and R-Set when unlearning class 9. Even with 100 clients, effectively unlearns class knowledge, and the accuracy after the unlearning stage on the F-set is just 0.72%. Compared to other baselines, shows good testing accuracy after the recovery stage on the R-Set, even with more clients and samples in the training dataset. Additional fine-tuning steps can further reduce this gap. We also observe that executes a complete unlearning request 475.2× faster than , highlighting the superior advantage in computation efficiency stemming from dataset distillation. § CONCLUSIONIn this paper, we introduced , a novel and efficient federated unlearning method that incorporates dataset distillation to address the challenges of erasing data from a trained ML model. has clients produce and use a compact, distilled dataset that drastically reduces computational overhead during the unlearning and recovery phases. elegantly combines the gradient matching DD and FL processes, allowing for gradient reuse and thereby further reducing computational overhead. Empirical evaluations on three standard datasets confirmed the effectiveness and efficiency of , demonstrating a remarkable acceleration in the unlearning process compared to existing federated unlearning approaches. § REPRODUCIBILITY STATEMENTWe have undertaken several steps to ensure the integrity, reproducibility and replicability of . We have provided an extensive description of the proposed in the main text, specifying its workflow in <Ref>, its theoretical formulation in <ref> and the integration with <cit.> in <Ref>. To facilitate the reproducibility of the experimental results, the complete source code used for the evaluation of will be made publicly available and a link will be added in the final paper version. We have used publicly available ML models and datasets. The details provided in Section <ref>, as well as the information provided in Section <ref>, should be sufficient to reproduce our results. We believe that these efforts will aid researchers in understanding, replicating, and building upon . iclr2024_conference§ ADDITIONAL NOTES ON EXPERIMENTAL SETUP We have tested and baselines with a commonly used deep neural network architecture, , ConvNet <cit.>. 
Its modular architecture contains D duplicate blocks, and each block has a convolutional layer with W (3× 3) filters, a normalization layer N, an activation layer A, and a pooling layer P, denoted as [W, N, A, P]× D. The default ConvNet (unless specified otherwise) includes 3 blocks, each with 128 filters, followed by InstanceNorm, ReLU and AvgPooling modules. The final block is followed by a linear classifier.After the integration of DD and FL training, we have the following hyper-parameters – the number of FL global rounds K, the number of local update steps T, the number of optimization step ς_ for synthetic sample updating, and the learning rate η_. In all experiments, we set K=200, T=50, ς_=1, η_ = 0.1. The number of FL global rounds is set to K=200 since our trials indicated that the model will converge before that. Other hyperparameters follow previous work <cit.>.In mini-batch sampling for DD with gradient matching, we randomly sample 256 real images of a class as a mini-batch to calculate the gradients. We employ Stochastic Gradient Descent (SGD) as the optimizer.For the evaluation of independent DD and FL, the definition of hyperparameters is slightly different — we have the number of outer-loop steps K, the number of inner-loop steps T, the number of optimization steps ς_ for synthetic sample updating, and the learning rate η_. We set K=500, T=50, ς_=1, η_ = 0.1 following the same settings in previous work <cit.>.§ ADDITIONAL RESULTS WITH DIFFERENT DATASETS, NETWORK SIZES, AND RELEARNINGIn this section, we present additional accuracy results on a single unlearning request with different datasets (and ) and network sizes (10 and 20 clients). Since we already included the results on with 10 clients in <Ref>, we include in this section the remaining combinations in <Ref> (with 20 clients), <Ref> (with 10 clients), and <Ref> (with 20 clients), respectively. All these experiments follow the same setup as the results shown in <Ref>, , we use a non-IID data distribution (α = 0.1). We also attach the additional results of the Relearning stage in each Table to show the effectiveness of different methods in relearning the eliminated knowledge again. The approach used in the relearning stage is the same for different baselines, we adopt the traditional SGD-based model training to update the “unlearning model" over the rejoined data. Note that our still uses the distilled data in the relearning stage while other baselines use the original data, thus can still keep its superiority in computation efficiency. For all reported combinations of dataset and network sizes, we observe that all methods effectively eliminate the knowledge of a target class from the model as the testing accuracy on the is near-zero after the unlearning stage. Then, after the recovery stage, , , and all restore the accuracy of the R-Set close to the value on . Consistent with our observations from <Ref>, the accuracy on the R-Set by after the recovery stage is slightly below the baselines. This is because the distilled data is not a perfect representation of the original training data. This accuracy gap can be reduced by additional fine-tuning of the distilled dataset, at the expense of computation overhead.Table <ref>-<ref> also report the testing accuracy on the and R-Set after relearning. Ideally, we want these accuracies to be high since we attempt to restore the model the state before unlearning. 
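For reference, the default ConvNet backbone described at the start of this appendix ([128, InstanceNorm, ReLU, AvgPool] x 3 followed by a linear classifier) could be written, for instance in PyTorch, roughly as follows; the framework choice, the input resolution, and the number of input channels are assumptions on our part.

```python
import torch.nn as nn

def conv_block(in_ch, width=128):
    return nn.Sequential(
        nn.Conv2d(in_ch, width, kernel_size=3, padding=1),
        nn.InstanceNorm2d(width, affine=True),
        nn.ReLU(inplace=True),
        nn.AvgPool2d(kernel_size=2, stride=2),
    )

class ConvNet(nn.Module):
    """[W, InstanceNorm, ReLU, AvgPool] x D blocks plus a linear classifier."""
    def __init__(self, in_ch=3, width=128, depth=3, num_classes=10, im_size=32):
        super().__init__()
        blocks, ch = [], in_ch
        for _ in range(depth):
            blocks.append(conv_block(ch, width))
            ch = width
        self.features = nn.Sequential(*blocks)
        feat_size = im_size // (2 ** depth)      # each block halves H and W
        self.classifier = nn.Linear(width * feat_size * feat_size, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))
```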
Table <ref>-<ref> show that all evaluated FU approaches successfully relearn the previously eliminated knowledge again, while our can still keep its superiority in computation efficiency since the relearning stage uses the compact, distilled dataset (66.7 × faster than and 47.29 × than ).We are unable to relearning using . This is because the unlearning method of is based on model pruning, which irreversibly destroys the model structure during the unlearning stage. In particular, all channels related to the target class are pruned, and it is impossible to recover the knowledge of that particular class with such a damaged model. § CLIENT-LEVEL UNLEARNING EVALUATION Our evaluation in <Ref> establishes the performance of and baselines when performing class-level unlearning. We now evaluate the effectiveness of when performing client-level unlearning. The goal of client-level unlearning is to erase the data samples of a specific target client from the trained model. Being able to quickly perform client-level unlearning is essential to adhere to privacy regulations such as the right to be forgotten <cit.>. We illustrate the performance of our on client-level unlearning by comparing it with other baselines. is unable to perform client-level unlearning as this approach is specifically designed for class-level unlearning. We conduct experiments on the dataset using two different data distributions: Non-IID (α=0.1) and IID (uniform distribution). The target unlearning client is selected randomly from all available clients and we reset the random seed to change the data distribution of clients in each run of the experiments. Table <ref> shows the results on client-level unlearning with Non-IID distribution (using α=0.1). This table shows that for all evaluated FU approaches, the testing accuracy on the after unlearning is not near zero (8.37% for ), unlike when doing class-level unlearning (see <Ref>). The reason for this is that even though we unlearned the data samples of a particular client, some features associated with the classes that a particular target client holds might still be embedded in the model's knowledge. Because of this, it happens that these forgotten samples are correctly classified, even after unlearning. Furthermore, a target client t may have the majority of data for a particular class c, while it only holds small amounts of data for other classes. Therefore, the model performance after the recovery stage now critically depends on the individual data distribution of clients as unlearning the data of client t may significantly hurt the model performance on class c. Conversely, the knowledge on the classes of which the target client holds a small amount of data will not be completely eliminated from the model.<Ref> also shows that after the recovery stage, the testing accuracy on the R-Set (70.89%) is a bit lower than, but close to the performance of on the R-Set (73.69%). These results are consistent with the accuracies obtained for class-level unlearning. Table <ref>, shows the results for client-level unlearning with IID data distributions.Similar to the results in <Ref>, we observe relatively high accuracies on the after the unlearning stage. When comparing the accuracies of on the (70.81%) and R-Set (71.64%), we find that unlearning the data samples of the target client has minimal impact on the overall model performance. This is because with an IID distribution, each client holds the same number of data samples for all classes. 
Therefore, when we unlearn the target client, much of its contributed knowledge is still represented by the remaining data (R-Set) in the system and the departure of the target client will barely impact the model performance.Sample-level Unlearning. So far, we have shown the effectiveness and efficiency of when performing class-level and client-level unlearning. These two levels of unlearning already cover many applications of machine unlearning. One might want to perform sample-level unlearning, where the goal is to unlearn a subset of data samples of a particular client. This is difficult to achieve with since each client creates a distilled dataset that contains the knowledge of individual training samples in a compressed format. Even though the algorithm can be performed with a subset of a client's samples, the recovery phase cannot be performed with the distilled dataset as this dataset again contains the knowledge of the samples being unlearned. Therefore, we consider this challenge beyond the scope of our work. However, we remark that can be used to unlearn all samples of a particular class that a client holds since distilled datasets on the granularity of a class. § EXECUTING MULTIPLE UNLEARNING REQUESTS IN PARALLEL In <Ref>, we have demonstrated how is able to execute subsequent, multiple unlearning requests for different classes. While we assume in this experiment that unlearning requests are processed one-by-one, batching multiple unlearning requests could save time and compute resources. supports the processing of multiple unlearning requests at the same time by having clients execute using the distilled data representing the samples being unlearned, and then execute the recovery stage with the distilled data representing the remaining data. This enables the network to unlearn multiple classes, or the data of multiple clients using a single unlearning and recovery stage. | http://arxiv.org/abs/2311.15603v1 | {
"authors": [
"Akash Dhasade",
"Yaohong Ding",
"Song Guo",
"Anne-marie Kermarrec",
"Martijn De Vos",
"Leijie Wu"
],
"categories": [
"cs.LG",
"cs.AI"
],
"primary_category": "cs.LG",
"published": "20231127075344",
"title": "QuickDrop: Efficient Federated Unlearning by Integrated Dataset Distillation"
} |
[email protected] Helmholtz Research Academy Hesse for FAIR (HFHF), GSI Helmholtz Center for Heavy Ion Physics, Campus Frankfurt, 60438 Frankfurt, Germany Institut für Theoretische Physik, Johann Wolfgang Goethe-Universität, Max-von-Laue-Str. 1, D-60438 Frankfurt am Main, [email protected] Center for Nuclear Theory, Department of Physics and Astronomy, Stony Brook University, Stony Brook, New York 11794–3800, USAHelmholtz Research Academy Hesse for FAIR (HFHF), GSI Helmholtz Center for Heavy Ion Physics, Campus Frankfurt, 60438 Frankfurt, Germany Institut für Theoretische Physik, Johann Wolfgang Goethe-Universität, Max-von-Laue-Str. 1, D-60438 Frankfurt am Main, Germany GSI Helmholtzzentrum für Schwerionenforschung GmbH,Planckstrasse 1, D-64291 Darmstadt, Germany We use deep neural networks (DNN) to obtain the microscopic characteristics of partons in terms of dynamical degrees of freedom on the basis of an off-shell quasiparticle description. We aim to infer masses and widths of quasi-gluons, up/down, and strange quarks using constraints on the macroscopic thermodynamic observables obtained by the first-principles calculations lattice QCD. In this work, we use 3 independent dimensionless thermodynamic observables from lQCD for minimization.First, we train our DNN using the DQPM (Dynamical QuasiParticle Model) Ansatz for the masses and widths.Furthermore, we use the DNN capabilities to generalize this Ansatz, to evaluate which quasiparticle characteristics are desirable to describe different thermodynamic functions simultaneously. To evaluate consistently the microscopic properties obtained by the DNNin the case of off-shell quarks and gluons, we compute transport coefficients using the spectral function within Kubo-Zubarev formalism in different setups. In particular, we make a comprehensive comparison in the case of the dimensionless ratios of shear viscosity over entropy density η/s and electric conductivity over temperature σ_Q/T, which provide additional constraints for the parameter generalization of the considered models. Extraction of the microscopic properties of quasi-particles using deep neural networks Elena Bratkovskaya January 14, 2024 ====================================================================================== § INTRODUCTION In the exploration of quark-gluon plasma (QGP) matter produced during heavy-ion collisions, a critical step is establishing the equation of state (EoS) and transport coefficients of the matter corresponding to a specific range of parameters, such as energy density, which is associated with temperature, and baryon density, associated with baryon chemical potential. Moreover, for the fully dynamical evolution of the system, it is required to consider microscopic characteristics for the partonic degrees of freedom.Although first-principles Lattice Quantum Chromodynamics (lQCD) simulations provide vital predictions, especially for temperatures above the critical temperature T_c, they are complex and challenging to execute.While lQCD has successfully outlined the main thermodynamic quantities, finer details of microscopic characteristics and dynamic properties, such as transport coefficients, remain somewhat elusive. These properties are crucial as they depend on the microscopic interactions among quarks and gluons. However, assessing the microscopic properties of QGP matter at finite temperatures and baryon chemical potentials from first principles is a challenging task. 
To bridge this gap, it is worthwhile to investigate new methodologies that can further elucidate the established lQCD predictions, thereby enhancing our understanding of the QGP matter in these extreme conditions. In the past decade, methods of machine learning (ML) have been developed as a powerful computational tool and novel problem-solving perspective for physics, which offers new avenues for exploring strongly interacting QCD matter under extreme conditions such as finite T and baryon chemical potential <cit.>. Generally, ML stands apart from conventional minimization methods by aiming for predictions instead of just fitting, thus offering enhanced adaptability for applications in physics <cit.>. In the context ofRelativistic Heavy-Ion Collisions (HICs) and phenomenology of the QGP, this includes the use of novel techniques such as active learning <cit.>, transfer learning <cit.>, among other methods <cit.>. In this work, we use machine learning techniques, in particular, Deep Neural Networks (DNNs) <cit.>, to gain insights into the phenomenological description of deconfined QCD matter and test the applicability of these methods to phenomenology. We confine ourselves to the case of 2+1 flavours and colours QCD, N_f=N_c=3, with degenerate light quarks at vanishing quark chemical potential, μ_q=0.To train the DNNs used in this work we use lQCD data provided by the WB Collaboration, in particular results on the entropy density s(T) and baryon susceptibilities χ_2^B(T), χ_2^S(T) <cit.> (T ≤ 3T_c) and latest estimates for the strange susceptibility χ_2^S(T) <cit.> (T ≤ 1.7 T_c).We choose these thermodynamic quantities in order to fix consistently the strange quark, the light quark, and gluon sectors independently. The functional form of the masses and widths of quarks and gluons is inferred by multi-layer Feed-forward neural networks (FFNNs), which are trained on the entropy density, baryon, and strange susceptibilities. The feasibility of this approach is grounded in the universal approximation theorem <cit.>. This theorem states that multi-layer feed-forward neural networks (FFNNs), owing to their non-linear characteristics and given a sufficient number of hidden neurons, can effectively approximate any function.To evaluate the thermodynamic quantities required for the training of DNNs we consider an off-shell quasi-particle description of the QGP, where quasi-quarks and gluons acquire a thermal width as in the original Dynamical Quasi-Particle Model (DQPM) <cit.>. The DQPM is based on a propagator representation with effective (anti-)quarks and gluons whose properties are defined by complex self-energies and off-shell spectral functions.The advantage of the DQPM compared to other QPMis that it is by construction a 2PI (two particle irreducible) model while the other on-shell quasiparticle models are 1PI (one particle irreducible) in nature.This avoids the introduction of an extra “bag constant”often employed for on-shell quasi-particle models in order to describe the thermodynamic properties of the QGP.Quasiparticle models are highly favored for their ease of integration into transport frameworks, which are crucial for simulating the evolution of QGP matter. Specifically, the DQPM has been incorporated into the Parton-Hadron-String Dynamics (PHSD) transport approach <cit.>. Meanwhile, the Quasiparticle Model (QPM) has been adopted in the Catania transport approach <cit.> and <cit.>. 
Additionally, recent developments have seen the implementation of approximate QCD models, like the Nambu–Jona-Lasinio (NJL) and Polyakov–Nambu–Jona-Lasinio (PNJL) models, within the A Multi-Phase Transport (AMPT) model <cit.>. These implementations utilize scalar and vector potentials to approximate QCD dynamics. In addition to analyzing thermodynamic quantities, we also explore transport coefficients, using the resulting microscopic properties, i.e. spectral functions.Our previous systematic comparison reveals that these transport coefficients are significantly influenced by the properties of the constituent degrees of freedom <cit.>. This leads to the observation that different theoretical models, despite having an almost identical EoS, can yield markedly different transport coefficients.The goal of this study is to explore the properties of strongly interacting quasi-particles, i.e. the T- dependence of masses and widths which are consistent with the lQCD thermodynamics at vanishing chemical potential, by using DNN methods. The techniques used here can be also applied at finite μ_b. Starting from the DQPM Ansatz we gradually generalize it, aiming at a simultaneous description of both the entropy density s(T) and the susceptibilities χ_2^B,S(T). In general, a substantial amount of data is essential to effectively train a neural network to uncover established physics. In the phenomenology of theQGP, these get even worse since the only available lQCD data represents a small amount of points. Therefore the simple architecture and small number of hyperparameters of FFNNs compared to CNNs and ResNets is beneficial for this task.The structure of the paper is as follows. In Sec. <ref> we brieflydescribe the framework of the off-shell Quasi-particle model including the evaluation of thermodynamic observables and transport coefficients. In Sec. <ref> we describe our method to determine the microscopic quantities using DNNs, starting from the DQPM and generalizing it further in Sec. <ref>. Finally, our results and conclusions are summarized in Sec. <ref>.§FRAMEWORK OF THE OFF-SHELL QUASIPARTICLE MODEL In the quasiparticle approach, the degrees of freedom are strongly interacting dynamical quasiparticles: quarks and gluons with a broad spectral function, whose `thermal' masses and widths increase with temperature. The microscopic and macroscopic properties of the off-shell quasiparticles are defined by the complex self-energies and spectral functions.The spectral functions can be taken in various forms. Here we consider the most successful physically motivated Lorentzian form <cit.>: ρ_i(ω, p) =γ_i/Ẽ_i,𝐩( 1/(ω-Ẽ_i,𝐩)^2+γ_i^2 - 1/(ω+Ẽ_i,𝐩)^2+γ_i^2)= 4ωγ_i/( ω^2 - 𝐩^2 - m^2_i )^2 + 4γ^2_i ω^2,where i runs over the particle species (i=q, q̅, g). Here, we introduced the off-shell energy Ẽ_i,𝐩 = √( p^2+m_i^2-γ_i^2 ), m_i and γ_i being the particle's pole mass and width. ρ_i is a real function, odd in ω and for all p it fulfills the sum rule∫_-∞^∞dω/2πωρ_i(ω, p) = 1. Using the spectral function, the quasiparticle (retarded) propagators can be expressed in the Lehmann representation as:Δ_i(ω, p)=∫_-∞^∞dω'/2πρ_i(ω', p)/ω-ω'=1/ω^2 - p^2 -m_i^2 +2 iγ_iω. In (<ref>), we are considering all the (retarded) quasiparticle self-energies to be equal, Π=Σ =Σ_q ≈Σ_q̅, and they are expressed via dynamical masses and widths as: Π_i = m_i^2 -2iγ_i ω.In the off-shell case, ω is an independent off-shell energy.§.§ DQPM AnsatzIn the DQPM, the microscopic quantities, i.e. 
effective masses and thermal widths, depend on the effective coupling constant which, in turn, acquires an explicit temperature and chemical potential dependence <cit.>. This is the main characteristic of the DQPM parametrization used for our first model study.Now we briefly recall the main details of the parametrization of the DQPM, which has been studied in many variations in Refs <cit.>: ∙ In the DQPM, the coupling constant is adjusted by fixing the quasiparticle entropy density to reproduce the entropy density s(T,μ_B = 0) from lQCD, e.g. <cit.> (see a parametrization method introduced in Ref. <cit.>).It has been shown that, for a given value of g^2, the ratio s(T,g^2)/T^3 is almost constant for different temperatures but identical g^2, i.e. ∂/∂ T (s(T,g^2)/T^3)=0.Therefore the entropy density s and the EoS in the DQPM is a function of g^2 only, i.e. s(T,g^2)/s_SB(T) = f(g^2) where s^QCD_SB = 19/9 π^2T^3 corresponds to the Stefan-Boltzmann limit of the entropy density for massless quarks and gluons. Thus, by inverting the f(g^2) function, the coupling constant g^2 can be directly obtained from the parametrization of lQCD data for the entropy density s(T,μ_B=0). The resulting parametrization for the coupling constant reads <cit.>: g^2(T,μ_B = 0) = d ·[ (s(T)/s^QCD_SB)^e -1 ]^f.Herethe parameters d = 169.934, e = -0.178434, and f = 1.14631 are fixed in accordance with the s(T) calculations by the WB Collaboration from Refs. <cit.>.∙The exact form of pole masses, in the DQPM, depends on the effective coupling constant as predicted by Hard Thermal Loop (HTL) calculations in the high-temperature regime <cit.>. Considering three flavors and colors, N_f=N_c=3, and for vanishing chemical potential, μ_q=0, the gluon (g) and light quarks (u and d, both labeled with l) masses read <cit.>: m^2_g(T) =C_a g^2(T)/6T^2(1+N_f/2N_c) = 3/4g^2(T)T^2 , m^2_l(l̅)(T) = C_f g^2(T)/4T^2 = 1/3g^2(T)T^2,whereC_F = N_c^2 - 12 N_c = 4/3 and C_A = N_c = 3 are the QCD color factors for quarks and gluons, respectively. It is important to note that here g^2(T) is the effective temperature-dependent coupling constant.∙ The strange quark mass, whose scaling has not been studied in detail within the HTL framework, is given by the phenomenological equation:m_s(s̅)(T)= m_l(l̅)(T)+ Δ m_ls,where Δ m_ls=0.03 GeV is a constant mass shift. Later on, we will consider how the change of this parameter affect transport coefficients. However, this simple shift is justified by the larger bare mass of the strange quark, which enhances its dynamic mass. Previously, the value of Δ m_ls has been fixed by comparing experimental data for strange hadrons abundances and the K^+/π^+ ratio in relativistic heavy-ion collisions using the PHSD, a microscopic covariant transport approach <cit.>. There, the microscopic properties of the partonic degrees of freedom are described by the DQPM (see latest results and discusions in <cit.>). ∙ In contrast to the on-shell quasi-particle models <cit.>, where thermal widths are absent and a bag constant must be introduced, in the DQPM the thermal widths are chosen to follow the smooth increase of the interaction rate Γ with T. Therefore, the thermal widths have a physical meaning, as they reflect the collision frequency of particles at finite temperature T.In the DQPM parametrization, the thermal widths read <cit.>: γ_i(T) = 1/3 C_i g^2(T)T/8πln(2c_m/g^2(T)+1). The constant parameter c_m = 14.4 was fixed in Ref. <cit.> and is related to a magnetic cut-off. 
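For reference, the DQPM parametrization above (the effective coupling extracted from the lQCD entropy density, the HTL-type masses, the shifted strange-quark mass, and the thermal widths) together with the Lorentzian spectral function can be collected in a short numerical sketch; the lQCD entropy parametrization s_lqcd(T) is assumed to be supplied externally, and all function names are ours.

```python
import numpy as np

FIT_D, FIT_E, FIT_F = 169.934, -0.178434, 1.14631   # constants of the g^2 parametrization
C_M = 14.4                                           # magnetic cut-off parameter
C_F, C_A = 4.0 / 3.0, 3.0                            # quark / gluon color factors

def s_stefan_boltzmann(T):
    """Massless N_c = N_f = 3 entropy density, s_SB = 19/9 pi^2 T^3."""
    return 19.0 / 9.0 * np.pi**2 * T**3

def g2(T, s_lqcd):
    """Effective coupling g^2(T) from the lQCD entropy density;
    s_lqcd is an external parametrization of the WB data (GeV^3)."""
    return FIT_D * ((s_lqcd(T) / s_stefan_boltzmann(T)) ** FIT_E - 1.0) ** FIT_F

def masses_widths(T, s_lqcd, dm_ls=0.03):
    """DQPM pole masses and widths (GeV) for gluons, light and strange quarks."""
    g2T = g2(T, s_lqcd)
    m_g = np.sqrt(0.75 * g2T) * T
    m_l = np.sqrt(g2T / 3.0) * T
    m_s = m_l + dm_ls
    def width(C_i):
        return C_i * g2T * T / (3.0 * 8.0 * np.pi) * np.log(2.0 * C_M / g2T + 1.0)
    return {"m": (m_g, m_l, m_s), "gamma": (width(C_A), width(C_F), width(C_F))}

def rho(omega, p, m, gamma):
    """Lorentzian spectral function used for all quasiparticle species."""
    return 4.0 * omega * gamma / ((omega**2 - p**2 - m**2) ** 2
                                  + 4.0 * gamma**2 * omega**2)
```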
The finite width reflects the dynamical modification of the spectral function of quasiparticles during their propagation in the sQGP medium.Furthermore, the thermal widths of all (anti-)quarks are assumed to be equal and completely fixed by g^2: γ_u=γ_d=γ_s.This assumption has been checked in Ref. <cit.>, where it was shown that the interaction rates of strange and light quarks coincide, apart from the small difference in the vicinity of the phase transition T<1.5T_c. In the next section, we will discuss possible generalizations of this model using DNNs.§.§ ThermodynamicsIn this subsection, we detail the precise method by which thermodynamic observables are derived from the microscopic characteristics of off-shell quasiparticles. For the analysis of the main thermodynamic properties within thermal QCD, employing the Φ-functional approach is beneficial. This technique represents the thermodynamic potential Ω via the dressed propagators Δ_i <cit.>. Following this representation, the derivatives of Ω can be efficiently calculated.In this formalism, the entropy density s^dqp = - ∂Ω/V∂ T for off-shell quasiparticles can be written as:s^dqp =2 ∫_0^+∞dωdpp^24 π^3 F_s(ω, p, T, μ),withF_s(ω, p, T, μ) = -d_g ∂ f_g ( ω )/∂ T( Im(lnΔ^-1)- ImΠReΔ)-d_q∂ f_q(ω-μ_q)/∂ T( Im(ln S_q^-1)-ImΣ_q ReS_q )-d_q̅∂ f_q̅(ω+μ_q)/∂ T( Im(ln S_q̅^-1)- ImΣ_q̅ReS_q̅) ,where Δ_i =(p^2-Π_i)^-1, S_q = (p^2-Σ_q)^-1 and S_q̅ = (p^2-Σ_q̅)^-1 stand for the full (scalar) quasiparticle propagator of gluons g, quarks q, and antiquarks q̅. Similarly, the quark density of the off-shell quasi-particlesn^dqp = -∂Ω/V ∂μ reads:n^dqp =2 ∫_0^+∞dωdpp^24 π^3 F_n(ω, p, T, μ),withF_n(ω, p, T, μ) =-d_q ∂ f_q(ω-μ_q)/∂μ_q( Im(ln S_q^-1)- ImΣ_q ReS_q )-d_q̅∂ f_q̅(ω+μ_q)/∂μ_q( Im(ln S_q̅^-1)- ImΣ_q̅ReS_q̅).In the above formulae, d_g=2 × (N_c^2-1) is the number of transverse gluonic degrees of freedom while d_q=d_q̅= 2 × N_c is the fermionic one. Furthermore, f_g,q=f_B,Fis the Bose or Fermi distribution for gluon or quark respectively:f_B,F=[exp( (E_i - μ_i)/T) ± 1]^-1,where μ_i is the quark chemical potential. The quasi-particle energy is E_i=√(𝐩_i^2+m_i^2) for the on-shell case or ω_i for the considered off-shell case. Here we emphasize that the entropy and quark number densityfunctionals are not restricted to narrow quasiparticles, i.e. with spectral widths γ_i much smaller than the typical energy.Baryon and strange charge densities follow from the quark densities: = n^dqp/3 = 1/3 (++ ) = - . At vanishing chemical potential, the properties of the quasi-particles are explored using quark-number susceptibilities instead of the densities: χ_i(T,m_i,γ_i) = ∂ n_i∂μ_i|_μ_i=0. 
Quark-number susceptibilities are related to the pressure byχ_q (T,μ_q)/T^2 =∂^2 (P/T^4)/∂ (μ_q/T)^2,and therefore can be related to theconventional second-order susceptibilities used in the Taylor expansion χ_2^i j at vanishing quark chemical potential as <cit.> P(T,μ_i)/T^4 = P(T, 0)/T^4 + 1/2∑_i, jμ_i μ_j/T^2χ_2^i j,withχ_2^i j = 1/T^2∂ n_j (T, μ_i)/∂μ_i|_μ_i = μ_j = 0.For the N_f =3 case and transverse gluons, the total entropy and the baryon and strange susceptibilities read: s(T) =-d_g I^B_g(T) -d_q∑_i=q,s I^F_i(T) , χ_2^B(T)= 1/T^2d_q/9(2χ_2^l(T) + χ_2^s(T)), χ_2^S(T)= 1/T^2 d_qχ_2^s(T), where we have introduced the integrals:I^B,F_i(T,m,γ) = 1/2π^2 T∫p^24p^2+3m_i^2/3√(ω^2+p^2)f_B,F(ω,T)+2∫_0^∞ω/2π∫^3p/(2π)^3∂ f_B,F(ω,T)/∂ Th(ω,p,m_i,γ_i), χ^i_2(T,m,γ) = 1/2π^2 T∫_0^∞ p p^2/1+cosh(√(m_i^2+p^2)/T)+ 2∫_0^∞ω/2π∫^3p/(2π)^3sinh(ω/T)/T^2(1+cosh(ω/T))^2h(ω,p,m_i,γ_i), where the function h is the auxiliary function:h(ω,p,m_i,γ_i)=2γ_iωω^2-p^2-m_i^2/(ω^2-p^2-m_i^2)^2+4γ_i^2ω^2-arctan(2γ_iω/ω^2-p^2-m_i^2). Notice that all thermodynamical functions implicitly depend on g^2 through the dressed masses and widths if we consider a specific parametric ansatz as in the DQPM. An alternative approach involves a traditional perturbative series expansion of the entropy with respect to the coupling constant g as in references <cit.>:s/T^3 = c_0 + c_2g^2 + c_3g^3 + …Here, c_0 = π^2/45(4(N_c^2 - 1) + 7 N_c N_f) corresponds to the Stefan-Boltzmann limit. Moreover, there are different resummations techniques for thermal QCD such as hard-thermal-loop perturbation theory (HTLpt), which are not discussed here (for further reading we refer to Ref. <cit.>). To enhance the convergence of the HTL resummation, a self-consistent quasi-particle expansion has been adopted, as described in Ref. <cit.>. From the entropy density and quark densities, other thermodynamic quantities follow. The pressure at vanishing baryon chemical potential μ_B=0 is defined employing the entropy density asp(T) =p^lqcd(T_0)+ ∫_T_0^T s(T') dT' ,where p^lqcd(T_0) is taken from lQCD after fixing T_0. The energy density ϵ then follows from the Euler relationϵ = T s - p. Another important thermodynamic observable is the trace of the energy-momentum tensor, also known as interaction measure or trace anomaly:I = ϵ - 3 p = T s - 4p.From lQCD calculations <cit.>, the trace anomaly I is expected to be sizeable in the vicinity of the cross-over transition, indicating a strong interaction of the medium. Consequently, one can expect the shear viscosity close to T_c to be correspondingly smaller than for a weakly interacting medium, as considered in the pQCD limit. §.§ Transport coefficientsAs complementary observables, transport coefficients can reveal how well the considered models can microscopically describe thedense QGP medium. Therefore, we aim to evaluate transport coefficients using the Kubo-Zubarev formalism, where we employ the parton spectral functions without relying on the relaxation time approximation or effective coupling constant. 
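Before turning to the transport coefficients, we note that the entropy and susceptibility integrals above are straightforward to evaluate by direct quadrature. A minimal sketch for the quark-number susceptibility of a single flavour is given below; the finite momentum and energy cutoffs, and the naive treatment of the near-on-shell region of the auxiliary function h, are simplifications on our part.

```python
import numpy as np
from scipy.integrate import quad, dblquad

def h_aux(w, p, m, gam):
    """Auxiliary function h(omega, p, m, gamma) as written above
    (the near-on-shell region w^2 = p^2 + m^2 is treated naively)."""
    x = w**2 - p**2 - m**2
    return 2.0 * gam * w * x / (x**2 + 4.0 * gam**2 * w**2) \
        - np.arctan(2.0 * gam * w / x)

def chi2_quark(T, m, gam, p_max=None, w_max=None):
    """Susceptibility of a single off-shell flavour: pole part plus
    finite-width correction. Cutoffs replace the infinite limits."""
    p_max = p_max or 20.0 * T
    w_max = w_max or 20.0 * T
    pole, _ = quad(lambda p: p**2 / (1.0 + np.cosh(np.sqrt(m**2 + p**2) / T)),
                   0.0, p_max)
    pole /= 2.0 * np.pi**2 * T
    weight = lambda w: np.sinh(w / T) / (T**2 * (1.0 + np.cosh(w / T))**2)
    off, _ = dblquad(lambda p, w: p**2 * weight(w) * h_aux(w, p, m, gam),
                     0.0, w_max, 0.0, p_max)
    off /= 2.0 * np.pi**3
    return pole + off

# e.g. the strange susceptibility follows from the relation above:
# chi2_S = 6 * chi2_quark(T, m_s, gam_s) / T**2      (d_q = 2 N_c = 6)
```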
Since quasiparticles masses and widths - obtained from the DNNs - can be larger than those in the original DQPM, this method is better suited.The shear viscosity is evaluated from the slope of the Fourier transform of the spectral function for the spatial traceless part of the stress tensor ⟨[π_ij(x),π_ij(0)]⟩ in the limit ω→ 0.Here we employ the following formula for the shear viscosity:η^Kubo(T) = - ∫d^4p/(2π)^4 ∑_i=q,q̅,g d_i ∂ f_i(ω)/∂ω ρ_i(ω,𝐩)^2 Π_i=1/15T∫ d^4p/(2π)^4 ∑_i=q,q̅,g d_i [ (1 ± f_i(ω)) f_i(ω) ] ρ_i(ω,𝐩)^2 Π_i ,where the notation f_i(ω) = f_i(ω,T,μ_q)= f_B,F is used for the distribution functions. The corresponding derivative of the distribution function accounts for the Pauli-blocking (-) and Bose-enhancement (+) factors, and ρ_i denotes the spectral functions from Eq. (<ref>).Using the notation Π_i=q,g we differentiate the contribution from transverse gluons <cit.>:Π_g = 7 ω^4-10 (ω𝐩)^2+ 7𝐩^4,and from quarks <cit.>:Π_q =p_x^2 p_y^2. We note that (for weak coupling) it is common to derive η from a Boltzmann equation to next-to-leading log (NLL) order <cit.>: η^NLL≈T^3g^2 ln (1/g) . This approach is suited for on-shell or narrow quasi-particles, whereas we don't assume small g and employ the more rigorous Kubo-formalism. It is expected that, near T_c≈ 158 MeV, the results of Eq.(<ref>) are not applicable.We consider here another important transport coefficient, i.e. the electric conductivity for stationary electric fieldsσ_Q(T), which describes the response of the system to an external electric field. In the case of the electrical conductivity only quark degrees of freedom contribute, resulting in an analogous expression <cit.>:σ_Q^Kubo(T) = - ∫d^4p/(2π)^2 ∑_i=q,q̅ d_i ∂ f_i(ω)/∂ωρ_i(ω,𝐩)^2=1/3T∫ d^4p/(2π)^4 𝐩^2 ∑_i=q,q̅ d_i [ (1 ± f_i(ω)) f_i(ω) ] ρ_i(ω,𝐩)^2, for q=u,d,s. By separating the contributions of the strange flavor one can also single out the strange quark's contribution to the conductivity, which gives us a complementary observable to infer the properties of the strange quarks. §NEURAL NETWORKS FOR THE REGRESSION APPLIED TO THE QUASI-PARTICLE MODELIn this section, we describe the technical details of our framework, covering aspects such as the input and output observables, the architecture of the DNNs, the training process, and the evaluation methods. The code of the neural network is written in Python and it is implemented using the Keras Deep Learning API v2.13.1 <cit.> together with the Tensorflow v2.13.0 library <cit.>.For the regression task, we utilize Feed-forward neural networks (FFNNs), which are a preferred choice over CNN especially for simple, non-spatially-distributed datasets, due to their efficiency and the smaller number of hyperparameters, allowing a faster training. The number of layers of the DNNs is chosen heuristically, balancing between the quality of the fit, the smaller number of parameters, and the training speed. Three hidden layers, with 24, 12, and 12 neurons respectively, turn out to be sufficient for the purpose of this study. For all layers, the activation function is sigmoid:σ(x)=1/e^x+1.For the minimization procedure, we employ the Adam algorithm <cit.>. For all of the presented results, the learning rate is initially set to 0.005 and decays by 90% every 1000 epochs of the training. 
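A minimal Keras sketch of this network and of the learning-rate schedule is given below; the linear output activation and the placeholder mse loss in the compile call are our assumptions, since in practice the training minimizes the physics-motivated loss introduced in the next subsection.

```python
from tensorflow import keras

def build_dqpmnn(n_outputs=1):
    """Feed-forward network with the 24-12-12 sigmoid hidden layers used here;
    the single input is T (GeV); the linear output activation is our choice."""
    return keras.Sequential([
        keras.Input(shape=(1,)),
        keras.layers.Dense(24, activation="sigmoid"),
        keras.layers.Dense(12, activation="sigmoid"),
        keras.layers.Dense(12, activation="sigmoid"),
        keras.layers.Dense(n_outputs),
    ])

# initial learning rate 0.005, reduced by 90% every 1000 epochs
lr_schedule = keras.callbacks.LearningRateScheduler(
    lambda epoch: 5e-3 * 0.1 ** (epoch // 1000))

model = build_dqpmnn()
model.compile(optimizer=keras.optimizers.Adam(learning_rate=5e-3), loss="mse")
# model.fit(T_grid, targets, epochs=4000, callbacks=[lr_schedule])
# (the mse loss is a placeholder; the actual training minimizes the
#  physics loss built from the quasiparticle thermodynamics)
```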
The number of epochs used in the training is 4× 10^3, and we observed that increasing this number by a factor of three doesn't change our results and leads to overfitting.In this study, we use machine learning to infer the functional form of microscopic quasiparticles properties (masses, widths and coupling constant) as a function of temperature, such that the formulae obtained with the Φ-functional approach - detailed in Sec. <ref> - are in agreement with lattice QCD data. Therefore, we consider a DNN-based DQPM model, which we will refer to as DQPMnn. The input of the DQPMnn is the temperature, expressed in GeV, in the range T>T_c, T_c = 0.158GeV in accordance with the employed lQCD data from the WB collaboration. We will explore two distinct regression tasks. As outputs for these tasks, we consider (i) the squared coupling constant g^2(T); and (ii) a combination of g^2(T), the masses m_i(T), and the widths γ_i(T) of the quasi-particles. The next subsections describe the details of the loss function, and the extraction of the physical properties of the quasiparticles. §.§ DQPMnn: ℒ_0 - extraction of g^2(T)The original idea of DQPM relies on adjusting the effective coupling to fit the entropy density, employing an HTL-based parametrization for the effective masses and widths. In this section, as proof of principle, we employ the DQPM parametrization, Eqs. (<ref>), (<ref>) and (<ref>), and compare the outputs of the DQPMnn to different lattice observables, such as the EoS and the transport coefficients. In what follows, and throughout the paper, in order to distinguish quantities from the the DQPMnn, we use an underline to identify them. For example, g^2(T) is the function associated with the running coupling output by the DQPMnn, whereas g^2(T) is the one used in the original DQPM.To extract the coupling constant g^2(T) with the DQPMnn, we use a loss function: ℒ_0= β_G[s(T)/T^3-s_lQCD/T^3/Δ s_lQCD/T^3]^2 + β_L[χ_2^B(T)-χ^B_2_lQCD/Δχ^B_2_lQCD]^2+ β_S[χ_2^S(T)-χ^S_2_lQCD/Δχ^S_2_lQCD]^2 .The minimization of the above loss function is analogous to the minimization of the χ^2 in the standard fitting procedures, but the parameters β_G, β_L, β_S allow to regulate the contribution of each thermodynamic quantity to the loss function. Notice that the loss function in eq. (<ref>) is not only a minimum squared error, as it was used for example in <cit.>, but also takes into account the uncertainties associated with the lattice measurements.The case β_G = 1 and β_L=β_S=0, where microscopic quantities are inferred from the entropy density only, should reproduce the results of the DQPM. The thermodynamic functions are computed from Eqs. (<ref>) and the subscript “lQCD” labels the lattice data, which have been taken from <cit.>. All the thermodynamic functions appearing in the loss have an implicit dependence on g(T) through the mass and width of the different quasi-particles. 
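Schematically, this loss can be assembled from callables that map the network output to the dimensionless quasiparticle observables and from interpolations of the lattice data and their uncertainties, as in the sketch below; the container names and the averaging over the sampled temperatures are our choices.

```python
import tensorflow as tf

def make_loss_L0(s_model, chiB_model, chiS_model, lat,
                 beta_G=1.0, beta_L=0.0, beta_S=0.0):
    """Weighted chi^2-like loss built from the dimensionless observables.
    s_model, chiB_model, chiS_model map (T, network output) to s/T^3, chi2_B
    and chi2_S of the quasiparticle model; `lat` holds interpolations of the
    lattice values and of their uncertainties (all names are ours)."""
    def loss(T, nn_out):
        r_s = (s_model(T, nn_out) - lat["s"](T)) / lat["ds"](T)
        r_B = (chiB_model(T, nn_out) - lat["chiB"](T)) / lat["dchiB"](T)
        r_S = (chiS_model(T, nn_out) - lat["chiS"](T)) / lat["dchiS"](T)
        return tf.reduce_mean(beta_G * r_s**2 + beta_L * r_B**2 + beta_S * r_S**2)
    return loss
```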
To be more explicit, one should write, for example, s(T)→ s(T,g(T)), but we choose to simplify the notation and drop the implicit dependence of the thermodynamic functions.Notice that, to train the DQPMnn, we have chosen thermodynamic quantities that are dimensionless.The main advantage of the use of dimensionless observables such as s/T^3, χ_2^B, χ_2^S is that they mitigate the impact of discrepancies between lattice EoS results obtained by different groups: HotQCD <cit.> or WB Collaboration <cit.> results influence less the final results for the desired microscopic quantities: masses, widths.In contrast, if we consider s(T) in GeV^3 as an input for the training,the difference between the two EoS would be larger.This is in contrast with a recent work, Ref. <cit.>, where DNNs were used for an on-shell quasi-particle model with a bag constant and s(T) and Δ=I (T) were used for training. However, the loss function in Eq. (<ref>) requires the evaluation of thermodynamic integrals at each step during the training process. While the computation of one of the integrals in Eqs. (<ref>) is rather fast, the frequency at which they must be calculated renders the use of Eq. (<ref>) not optimal, as it would significantly prolong the training duration. To overcome this complication, we use other 2 neural networks as surrogate models to approximate the integrals in Eqs. (<ref>), and hence s(T) and χ_2^B,S(T). These neural networks take T, m/T and γ/T as input and are trained to reproduce the values of the integrals I^B,F and χ^i_2(i = l,s)in Eqs. (<ref>). A schematic representation of the surrogate models is given in Fig. <ref>.Denoting the functions resulting from the surrogate model with a tilde (Ĩ^F,B and χ̃_̃2̃) to distinguish them from the actual values of Eqs. (<ref>), we can use a loss function with the substitutions s→s̃ and χ_2^B,S→χ̃_2^B,S, where the tilded thermodynamic functions are computed from Eqs. (<ref>) using Ĩ^F,B and χ̃_2 in place of I^F,B and χ_2: s̃(T) =-d_g Ĩ^B_g(T) -d_q∑_i=q,sĨ^F_i(T) , χ̃_2^B(T)= 1/T^2d_q/9(2χ̃_2^l(T) + χ̃_2^s(T)), χ̃_2^S(T)= 1/T^2 d_qχ̃_2^s(T). The use of the surrogate model leads to a significant improvement in the speed of the training.The actual loss function used in the training therefore becomes: ℒ_0= β_G[s̃(T)/T^3-s_lQCD/T^3/Δ s_lQCD/T^3]^2 + β_L[χ̃_2^B(T)-χ^B_2_lQCD/Δχ^B_2_lQCD]^2+ β_S[χ̃_2^S(T)-χ^S_2_lQCD/Δχ_2^S_lQCD]^2 .Further details about the surrogate model are reported in Appendix <ref>.The structure of the DQPMnn is depicted schematically in Figure <ref>. However, in this section, we are only considering g as an output of the DQPMnn, whereas the masses and the width are taken from the DQPM parametrization, Eqs. (<ref>), (<ref>) and (<ref>), with the substitution g→g. The consequences of relaxing the DQPM parametrization will be explored in the forthcoming sections.We have examined different values of the weightsβ_G, β_L, β_Sin the loss function ℒ_0 to understand how these adjustments influence the microscopic quantities, such as masses, widths, and the coupling constant, based on the selected thermodynamic observable.In particular, we have tested the three setups resulting from setting one of the β_i to one and the others to zero. 
These setups called “I”, “II” and “III”, are explicitly summarized in Table <ref>.They correspond to * setup “I” – best DNN fit of ℒ_0 to s/T^3 (notations in the figures “ℒ_0: s/T^3”),* setup “II” – best DNN fit of ℒ_0 to χ_2^B (notations in the figures “ℒ_0: χ_2^B”),* setup “III” – best DNN fit of ℒ_0 to χ_2^S (notations in the figures“ℒ_0: χ_2^S”).We begin with the comparison of the thermodynamic observables used for the minimization of the loss function ℒ_0 to the true values - the lQCD estimates. Fig. <ref> shows the dimensionless entropy (s/T^3) in the top panel, baryon susceptibility (χ_2^B) in the middle panel, and strangeness susceptibility (χ_2^S) in the bottom panel, all as functions of the scaled temperature (T/T_c). The lines represent predictions from the DQPMnn with ℒ_0 in setups I (red solid lines), II (blue solid lines), and III (green solid lines). The symbols correspond to the true values - estimates from lattice QCD by the WB Collaboration <cit.>. We observe that the setup I underestimates the susceptibility, similar to the DQPM model. Setups II and III are very similar to each other and overestimate the entropy. §.§ Results: g^2, m, γ and transport coefficients.Now, let's delve into the microscopic characteristics, starting with the output of DQPMnn - g^2. For a comparison with the previous results, the running coupling α_S = g^2/(4 π) is displayed in Fig <ref> as a function of the scaled temperature T/T_c in the setups I-III and is compared to the DQPM result. Additionally, we show lQCD estimates:quenched QCD N_f = 0 results (grey circles) are taken from Ref. <cit.>,N_f = 2 (black inverted triangles) from Ref. <cit.> and N_f = 2+1 (brown squares) from Ref. <cit.> (from Fig. 7). It is important to note that the coupling constant obtained by lQCD strongly depends on the definition of α_s extracted from the static potential <cit.>. The main feature of the resulting coupling constants is a significant increase approaching T_c, comparable to the predictions from lQCD, showing the importance of nonperturbative effects for T< 3T_c. One can see that the initial choice of fitting s/T^3 in the case of setup I aligns with the DQPM parametrization.On the other hand, setups II and III lead to an unrealistically low coupling.In Figs. <ref>, <ref> we show the masses and widths for gluons (top panels) and light quarks (bottom panels) as a function of T/T_c for different setups and compare them to the benchmark DQPM results (black dash-dotted line). As seen from the figures, in the DQPM and DQPMnn ℒ_0 setup I, the masses and widths of quarks and gluons increase with T. For all presented setups the results respect the colour factors M_q = 2/3 M_g, γ_q = 4/9γ_g. The temperature dependence of the masses and widths are rather different for setups II and III since they follow the extracted g^2. Similarly the m_i, γ_i for setups II and III are smaller than the DQPM one. Notable small differences are observed between setups II and III, particularly near the critical temperature T_c and for temperatures exceeding 2.5 T_c. This observation aligns well with our initial expectations, which are based on the comparative analysis of thermodynamic observables used for training and prediction, as depicted in Fig. <ref>. It's important to highlight that fitting all three thermodynamic observables using the basic DQPM Ansatz is not feasible: as we have just shown fitting the entropy density underestimates the susceptibilities, whereas fitting the latter overestimates the entropy density. 
Thus, in the subsequent section, we will modify the DQPM parametrization to potentially extend the model for a more comprehensive representation of all thermodynamic observables.Similarly to the thermodynamic observables, we compare the transport coefficients from the Kubo-Zubarev method for three setups to the estimates of the original DQPM spectral functions. To ensure an accurate comparison we focus on dimensionless transport coefficients. These include the shear viscosity to entropy density ratio, denoted as η/s, and the ratio of electric conductivity to temperature, expressed as σ_Q/T.Fig. <ref> displays the specific shear viscosity η/s as a function of the scaled temperature T/T_c in comparison to various results from the literature. Multiple colored lines represent the DQPMnn estimates from Eq. (<ref>) using three different setups for the weights. The black and cyan dashed lines correspond to the DQPM estimates for Kubo (Eq. (<ref>)) η^Kubo/s <cit.> and RTA with the interaction rate η^RTA/s <cit.>, respectively.The RTA estimate of the shear viscosity is found to be very close to the one from the Kubo formalism<cit.> indicating that the quasiparticle limit (γ≪ M) holds in the DQPM Ansatz. We also present the primary estimates obtained from well-established methods such as lQCD calculations, AdS/CFT correspondence, and Bayesian analysis, which can serve as valuable guidance for further improvements, although these estimates still rely on certain assumptions about the QGP. A dashed gray line highlights the well-known Kovtun-Son-Starinets bound (η/s)_KSS = 1/(4π) <cit.>.Symbols on the plot represent lQCD data for pure SU(3) gauge theory from various references: black squares from <cit.>, a blue line from <cit.>, green triangles from <cit.>, and magenta circles from <cit.>. The grey area encompasses estimates from the Bayesian analysis conducted by the JETSCAPE Collaboration <cit.>. We see that the DQPM results are in agreement with the lattice estimations and close to the range provided by JETSCAPE. For a reasonably good description of the QGP it is important to haveη/s> (η/s)_KSS = 1/(4π). Comparing results for different setups we see that estimates from setup I are in agreement with the standard DQPM result and with the lattice results, while in the case of setups II and III the values obtained are higher. This can be explained by the smaller masses and widths, resulting from unrealistically small coupling. The setup I, as well as the original DQPM, is in good agreement with the lattice data for gluodynamics, however, the phenomenologically constrained results from state-of-the-art Bayesian statistical analyses for dynamical, i.e. out of equilibrium, full QCD medium indicate smaller values of η/s <cit.>. In this respect, we expect that an improved description of the QGP matter should correspond to η/s > (η/s)_KSS = 1/(4π)and within the range of Bayesian estimates. Fig. <ref> presents the scaled electric conductivity, σ_Q/T, for all flavors, versus the scaled temperature, T/T_c. The DQPMnn results derived from Eq. (<ref>) are characterized by red solid lines (representing calculations using ℒ_0 in the setup I), blue solid lines (setup II), and green dashed lines (setup III). Lattice QCD data points are represented using various symbols: for N_f=2, black squares denote the results from Ref. <cit.>, while for N_f=2+1, brown circles represent estimates from Ref. <cit.>, and blue triangles and stars from Ref. <cit.>.The bottom panel of Fig. 
<ref> focuses specifically on the scaled electric conductivity of strange quarks, maintaining a structure and representation similar to the upper plot. One can see the same tendency across the different setups, i.e., setups II and III predict larger conductivities due to the smaller effective coupling. The consistent overestimation of σ_Q(strange)/T across all setups compellingly suggests the need for revising the masses and widths of strange quarks to achieve closer alignment with the lQCD data. This observation is not merely an anomaly but a clear indication that our current understanding and modeling of the strange quark behavior requires refinement. Furthermore, the simultaneous pursuit of a more accurate agreement with the lQCD data for both σ_Q(full)/T and σ_Q(strange)/T demands a thoughtful reevaluation and potential modification of the Ansatz employed in the DQPM. In the forthcoming two subsections we will explore possible generalizations of the model, which can shed light on the direction in which the microscopic quantities should be adjusted to improve χ_2^S and σ_Q(strange)/T. In summary, we have found that setups II and III, which do not fit the lQCD entropy density, provide a better description of χ^B_2(T) and χ^S_2(T). However, because of the DQPM Ansatz, these two setups yield an unrealistically small value of α_s = g^2(T)/(4π) at T > 1.5 T_c compared to the estimates from lQCD and the analytical 2/1-loop running coupling <cit.>. In particular, one can see that the effective coupling g^2(T)→ 0 already at T ≈ 2.5 T_c. In the next section we relax the form of the Ansatz, aiming at a consistent simultaneous description of s/T^3, χ^B_2, and χ^S_2 in a quasiparticle model.
§ MODIFIED QUASIPARTICLE MODEL DNN ℒ_1: EXTRACTION OF M_I(T), Γ_I(T), AND G^2(T)
Now we explore a generalization of the DQPM ansatz considered in the previous section. The properties of each quasiparticle are now outputs of the fully connected NN model depicted in Fig. <ref>. For this purpose we consider six quantities as outputs of the DQPMnn: the coupling constant g^2(T), the gluon and light-quark masses M_g and M_l, and three independent quasiparticle widths γ_g, γ_l, γ_s. The mass of the strange quark is still assumed to be M_s = M_l + 0.03 GeV. In order to make the extraction of the effective coupling g(T) possible, we follow the idea proposed in <cit.> of absorbing the nonperturbative corrections into the masses rather than into the effective coupling constant. In the case of gluodynamics, the screening masses were generalized taking into account non-perturbative corrections in the partonic phase in the following way: m_D(T)/T = A(T) (1+N_f/6)^{1/2} g(T), where A(T) describes the non-perturbative corrections. As in Ref. <cit.>, A(T) turns out to be larger than the perturbative limit even at high T. Based on these observations we modify the effective masses using the following ansatz: m_i^2(T)/T^2 = A_i(T) g^2(T), i = q, g, where A_i and g are computed from the neural network. In the DQPM, A_g = 3/4 and A_q = 1/3. It is important to note that, using the above parametrization, g(T) remains relatively similar to the original effective coupling g^DQPM(T) whereas the masses can differ. This allows us to obtain a better description of the thermodynamic quantities while protecting the transport coefficients and cross sections from dramatic changes.
The widths γ_i(T) are left completely free, and no parametrization in terms of g(T) is assumed.This differentiation in non-perturbative corrections within the masses emphasizes that gluons and quarks may exhibit distinct temperature dependence in their non-perturbative adjustments. Additionally, this distinction leaves more freedom to the DQPMnn to learn and adapt the T-dependence of masses and widths, thus achieving a better fit for the susceptibilities χ_2^B,S.In this case study we are relaxing the DQPM assumptions in order to achieve a better agreement with thermodynamic functions. However, to meet physical requirements such as the asymptotic HTL scaling and the hierarchy between masses and widths, we need to modify the loss function by adding a regularization term.The loss function used in this section reads: ℒ_1= ℒ_0 + β_regℒ_DQPM,where the regularization term is:ℒ_DQPM = ∑_i=g,l,s[γ_i(T)-γ_i^DQPM(T)]^2+[A_g(T) - 3/4 ]^2 +[A_q(T)- 1/3 ]^2.The inclusion of ℒ_DQPM acts as a constraint on the neural network, ensuring that the produced outputs stay relatively close to the expectations derived from the DQPM. In order to regulate the influence of ℒ_0 on the predictions, one has to fix β_reg taking into account the relative contribution for this term in the loss function. In this section, we present two distinct setups for β_i, as given in Table <ref>. * In the setup “A” the weights were chosen such that all the contributions to the loss function are of the same order of magnitude. The use of a larger β_G compared to β_L,S is due to the larger errors Δ s_lQCD/T^3 associated with the lattice data for s/T^3. Choosing a β_G of order one would result in the NN regarding the entropy as an irrelevant feature and learning only from the susceptibilities. Furthermore, the regularization loss is not enhanced by dividing by the variance, hence using a smaller β_reg would effectively remove any sizeable effect from the regularization loss.* The setup “B”, on the other hand, is such that the entropy and the DQPM regularization term have a bigger role compared to the susceptibilities, yielding an intermediate result between the setup “I” of the previous section and the setup “A”. Furthermore, in this section we train the DQPMnn multiple times randomizing the initial weight distribution each time. The results of these variations are plotted as shaded areas.Fig. <ref> shows the resulting thermodynamic observables for the setups A (blue areas), and B (orange areas) as a function of the scaled temperature T/T_c: dimensionless entropy s/T^3 (top panel), baryon susceptibility χ_2^B (middle panel), and strangeness susceptibilityχ_2^S (bottom panel). The red dashed lines depict predictions generated by the DQPMnn with the loss function ℒ_0 :s/T^3 (Eq. (<ref>)),as discussed in Section <ref> and trained in setup I. The symbols correspond to the lQCD results from the WB Collaboration <cit.>. The dimensionless entropy s/T^3 is described well for both setups, while in the case of setup A, the DQPMnn can describe χ_2^B and χ_2^S better than the original DQPM or the DQPMnn ℒ_0:s/T^3. This is expected, as the setup B is closer to the standard DQPM due to the larger weight of the entropy and regularization terms β_g and β_reg in the loss function. In both setups, due to the regularization term in the loss function, χ_2^B and χ_2^S start to approach DQPM asymptotics for T>2.5 - 3T_c, as the absence of lattice data leads the DQPMnn to learn only from the regularization loss term. 
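To make the structure of the regularized objective explicit, the following sketch (continuing the illustrative PyTorch fragment above) implements ℒ_DQPM and ℒ_1. The ordering of the six outputs, the network architecture, and the helper loss_L0_from_outputs are assumptions of the sketch; γ_i^DQPM(T) is taken as a given set of reference curves.

```python
import torch
import torch.nn as nn

# Six-output DQPMnn of the relaxed ansatz; assumed output ordering:
# (g^2, A_g, A_q, gamma_g, gamma_l, gamma_s), all positive.
dqpmnn6 = nn.Sequential(
    nn.Linear(1, 64), nn.Softplus(),
    nn.Linear(64, 64), nn.Softplus(),
    nn.Linear(64, 6), nn.Softplus(),
)

def loss_DQPM(T, outputs, gamma_dqpm):
    """Regularization term: keeps the widths close to the DQPM curves
    gamma_i^DQPM(T) and the A_i factors close to A_g = 3/4, A_q = 1/3.
    gamma_dqpm is a dict of callables returning the DQPM width curves."""
    g2, A_g, A_q, gam_g, gam_l, gam_s = outputs.unbind(dim=-1)
    reg = ((gam_g - gamma_dqpm["g"](T)) ** 2
           + (gam_l - gamma_dqpm["l"](T)) ** 2
           + (gam_s - gamma_dqpm["s"](T)) ** 2
           + (A_g - 0.75) ** 2
           + (A_q - 1.0 / 3.0) ** 2)
    return reg.mean()

def loss_L1(T, lattice, beta, beta_reg, gamma_dqpm):
    """L_1 = L_0 + beta_reg * L_DQPM; loss_L0_from_outputs is a hypothetical
    helper evaluating the thermodynamic part of the loss from the six outputs."""
    outputs = dqpmnn6(T.unsqueeze(-1))        # T: tensor of shape (n_T,)
    return (loss_L0_from_outputs(T, outputs, lattice, beta)
            + beta_reg * loss_DQPM(T, outputs, gamma_dqpm))
```

Setups A and B then correspond only to different numerical choices for the β weights and β_reg.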
In general, one can introduce other asymptotic terms for high T, such as the Stefan-Boltzmann limit. The main objective here is to obtain physically motivated values of microscopic observables - non-vanishing effective masses and widths, therefore we choose the DQPM form.§.§ Results: g^2, m, γ and transport coefficients.Now we look closely into the resulting microscopic observables and compare them to the original DQPM parametrization.Fig. <ref> depicts the effective coupling g^2 as a function of the scaled temperature T/T_c constrained in the setups ℒ_1: A-B in comparison to the DQPMnn with ℒ_0:s/T^3 (setup I, red dashed line).In setup A, the coupling exhibits higher values within the T<3T_c range. Unlike setups II and IIIdiscussed in the previous section, the improved description of χ_2^B and χ_2^S in setup A does not lead to unphysically small values of the effective coupling.This observation is less surprising considering that, in the setups we are using in this section, macroscopic quantities do not depend solely on g^2(T) but also to the additional A_i(T) parameters, and the widths γ(T), which we are leaving completely free. Allowing a general form of the widths,we can keep the masses of quasi-particles in the physical range of 1- 1.4 GeV for gluons and 0.3-0.7 GeV for T= T_c- 3T_c. Examining the dimensionless factors A_q and A_g as functions of the scaled temperature T/T_c (as depicted in Fig. <ref>)in setups A-B (colored areas), in comparison to the DQPM values (indicated by the black dashed lines) we can quantify the difference between these scenarios. Specifically for gluons the observed A_g values are higher than those suggested by the DQPM, in agreement with the results of Ref. <cit.>. The convergence of these values between the different setups becomes more pronounced for temperatures near 3T_c, eventually aligning with the DQPM values. This trend is dominantly attributed to the regularization term in the loss function ℒ_1. On the contrary, for quarks, the variation among the setups is more pronounced, and the overall values remain consistently lower than those predicted by the DQPM. The smaller value of A_q can be understood from the results about the susceptibilities obtained in the previous section. Indeed, the setups II and III show that, in order to fit χ_2^B,S, a smaller coupling is needed. In the previous section, this can also be rephrased by saying that a smaller mass is needed. However, in the present section, the mass is given by the product between A(T) and g^2, therefore a favorable condition to fit the susceptibilities can also be obtained by decreasing the value of A, as observed in Fig. <ref>. This underlines a fundamental difference in the behavior of quarks when compared to gluons within the framework provided by the DQPMnn with a relaxed DQPM Ansatz. Fig. <ref> illustrates masses in GeV as functions of the scaled temperature T/T_c for gluons (top panel) and light quarks (bottom panel). Here predictions generated by the DQPMnn with ℒ_1 are depicted by colored areas within setups A-B contrasted with the DQPM values represented by the black dashed lines. As expected from the results on g^2 and A_q/g the gluon masses are higher, while quark masses are smaller than the DQPM parametrization. Not imposing the DQPM asymptomatic behavior for T>3 T_c we would expect smaller masses for light quarks and less-pronounced T-dependence for gluons. Fig. 
<ref> presents the widths in GeV as a function of the scaled temperature T/T_c across three panels, covering gluons (top panel), light quarks (middle panel), and strange quarks (bottom panel). Similarly to the masses, we show the comparison to the DQPM values represented by the black dashed lines. This comparative analysis reveals the behavior of widths as a function of temperature within these distinct setups depicted by colored areas. For gluon widths, both setup A and B are close to the DQPM, with setup B showing smaller values. In contrast, for light quarks the overall values are higher than the DQPM, while setup B yields smaller widths. Similarly, for the strange quarks, both setups predict higher values of γ. Therefore we expect that transport properties will change for setup A and B compared to the DQPM predictions.Now we look closely into the influence of the change in microscopic quantities on the transport coefficients. The comparison between DQPMnn results with different loss functions in the case of η/s vs T/T_c is depicted in Fig. <ref>. The red dashed line shows the predictions generated by the DQPMnn with the loss function ℒ_0:s/T^3, as discussed in Section <ref> and trained in the setup I.One can see that for both setups predictions by DQPMnn with the modified description are smaller than in the case of ℒ_0:s/T^3. Since the microscopic properties of the setup B are closer to the DQPM parametrization (as have shown above), the η/s approaches the value of ℒ_0:s/T^3 at T≈ 2.5 T_c. The main difference is due to the difference in quark and gluon masses compared to the DQPM values: smaller values of quark masses and increased quark width. In the case of setup A the specific shear viscosity is even smaller due to the larger widths and gluon mass.More interesting is to look into the quark sector and compare predictions of DQPMnn with different setups while considering the full and strange electric conductivity. We show the resulting σ_Q/T, for all flavors (top panel) and strange quarks (bottom panel) as a function of the scaled temperature T/T_c in Fig. <ref>. For the total conductivity, we see that predictions from the DQPMnn with ℒ_1 in the setup B are larger than setup A, but overall values are smaller than in the case of DQPMnn with ℒ_0:s/T^3. In the strange sector, both setups perform better than the standard DQPM model as compared to the lattice data.§.§ Strange sector refinements: DNN ℒ_1 set A - modifications Δ m_lsHere we present results on the influence of Δ m_ls - the difference between the masses of strange and light quarks/antiquarks:Δ m_ls = m_s(s̅)- m_l(l̅) .For this purpose, we train the DQPMnn with the loss function ℒ_1 in the setup A for different values of Δ m_ls. Now comparing to the standard DQPM Ansatz we have the following modifications for the strange quarks: the value of γ_s is not equal to γ_l, and different values of Δ m_ls. In this section, we present results from four different setups, characterized by Δ m_ls values of 0.0, 0.03, 0.09, and 0.15 GeV. With these values, we obtain a good fit for the EoS and reasonable values of the transport coefficients. If we look into the training observables, s/T^3, χ_2^B, χ_2^S depicted in Fig. <ref> in three panels from top to bottom, respectively, we find that a higher value ofΔ m_ls slightly improves the fit of χ_2^B, however χ_2^S deviates from the data for T>2 T_C. Now we look at the microscopic properties predicted by the DQPMnn when we tweak the mass shift Δ m_ls. In Fig. 
<ref>, we show the masses for the four scenarios, utilizing the ℒ_1 loss function (set A) with Δ m_ls set to 0.0 GeV (blue areas), 0.03 GeV (orange areas), 0.09 GeV (green areas) and 0.15 GeV (magenta areas), compared to the DQPM values (depicted by the black dashed line).We see that the light-quarks masses are most affected by the mass-shift, becoming smaller as Δ m_ls increases, especially at T<2T_c. The gluon mass, on the other hand, stays approximately the same. Similarly, for the widths shown in Fig. <ref> we see that the main deviations between the different scenarios are seen for the quarks.The plot displays widths [GeV] as functions of the scaled temperature T/T_c for gluons (top panel), light quarks (middle panel), and strange quarks (bottom panel), derived from the DQPMnn. We can see that, as Δ m_ls increases, the light-quarks width becomes smaller, whereas the opposite trend is found in the strange-quark width. For Δ m_ls≥0.09 GeV the widths of the strange quarks become larger than the light quarks, and the difference is more prominent for the larger strange mass quark in the region of T<2.5T_c, where lattice data for χ_2^S are present.It is noticeable that the asymptotic behaviour of γ_l has also changed, and shows different asymptotics forT>2 T_c compared to the standard DQPM parametrization and predictions from set A.Now let's look how the mass shift Δ m_ls changes the transport properties. We start with the shear viscosity, which is less affected by the strange quark. Fig . <ref> depicts the comparison of the specific shear viscosity (η/s) as a function the scaled temperature (T/T_c). Colored areas represent estimates from the DQPMnn with the loss function ℒ_1 (set A) for four values of Δ m_ls (0.0 GeV in blue, 0.03 GeV in orange, 0.09 GeV in green and 0.15 GeV in magenta) fromEq. (<ref>). Only a subtle difference can be seen in the vicinity of T_c, where the widths and the masses change more drastically.In Fig. <ref> the scaled electric conductivity (σ_Q/T) is shown for all flavors (top panel) and strange quarks (bottom panel), plotted against the scaled temperature (T/T_c) computed from Eq. (<ref>).A more precise description is obtained when using the larger Δ m_ls value, particularly within the temperature range of T<1.5T_c, for both full and strange conductivities. Nevertheless, there is a decrease in the value of the full conductivity observed in the T>2T_c region, which can be attributed to the temperature-dependent behavior of γ_l.We find that for larger masses of the strange quarks the agreement for the electric conductivity from strange quarks becomes better, however, the full conductivity suffers in that case. At this stage, it is unclear which specific value within the range of Δ m_ls = 0.0 - 0.15 GeV would be optimal. It would be more beneficial to consider Δ m_ls = f(T) as an additional output of the DQPMnn, however, in the current framework and without additional input from lattice data or HTL calculations, the results obtained are challenging to interpret physically. If this approach is to be pursued, which would be desirable according to the findings discussed, more data are needed. We expect a T dependent mass shift for the strange quark to be larger than Δ m_ls=0.03 GeV (DQPM value) in the region T < 1.5T_c and to be decreasing with T. 
§ CONCLUSIONS AND OUTLOOKIn this study, we have addressed the phenomenological problem of extraction of the microscopic properties of the off-shell quasi-particles in the QGP using machine learning techniques based on macroscopic observables measured in lattice QCD.DNNs are used to learn the relationship between T, which serves as an input to the network, and dependent variables, i.e. microscopic observables, that are designed as outputs of the network. In particular, we have addressed a few scenarios for temperature-dependent masses, widths, and effective coupling constants inferred from the DNNs.In our framework, we trained our DNNs using three independent thermodynamic quantities, the entropy density s/T^3 and the baryon and strange susceptibilities χ_2^B,S using lQCD data provided by the WB Collaboration.The key ingredient is that our simple formulation can provide a playground for fixing the strange quark and light quark sectors and studying the interplay between different sectors by utilizing both thermodynamic and transport properties. In addition, it has the flexibility to preserve a chosen scaling by adding additional terms.Our results cover the temperature range T_c<T<3.5 T_c. To assess the quality of our model description of the QGP we have incorporated the Kubo-Zubarev formalism into our framework to compute transport coefficients (the dimensionless specific shear viscosity η/s and the ratio of electric conductivity to the temperature σ_Q/T ) and compared them to results from first principle lQCD calculations and phenomenologically constrained Bayesian estimations by the JETSCAPE Collaboration. In particular, for the comparison, we consider various lQCD results for the gluodynamics in the case of η/s, while in the case of the σ_Q/T there are recent lattice results for (2+1)-flavor QCD that separates additionally the strange quark contribution. I. Firstly, we have considered a DNN with one output microscopic quantity, the effective coupling constant g^2, and used the DQPM parametrization for quasiparticle's masses and widths. In this framework, we have tested three different setups, labeled I-III, and explored how model parameters can affect microscopic quantities and transport coefficients. We found that setups II and III, despite not fitting the lQCD entropy density, offer a more accurate description of χ_2^B(T) and χ_2^S(T). This is possible by using a smaller coupling constant and consequently quark masses are smaller compared to the ones employed in the DQPM. Furthermore, in these setups, the thermal widths are found to be smaller than in the DQPM, which generally translates into a smaller interaction rate and larger transport coefficients (notice however that the interaction rate was not employed in this work, but instead we use the more general Kubo-Zubarev formalism). These observations show that in order to achieve a good simultaneous description of both s/T^3 and χ_2^B,S, as well as of the transport coefficients, one should reconsider the parameterization for the DQPM. II. Such a generalization has been performed in the second part of this work, where we have considered a DNN with 6 output microscopic quantities: temperature-dependent masses, widths, and effective coupling constants. 
In this case, a regularization term has been introduced in the loss function in order to preserve a HTL-predicted asymptotic behaviour.The modifications introduced in this section, which have been explored in two setups A and B, allow to fit simultaneously both the entropy density and the susceptibilities. This less constrained model favors heavier gluons and lighter quarks, which is in agreement with the findings of the first part of this paper. For what concerns the thermal widths, the gluon sector is very similar to the DQPM parametrization, whereas the quark widths are larger than in the DQPM. We also observe that the DNN favors different values for the widths of the strange and the light quarks. These modifications decrease the value of the transport coefficients. The specific shear viscosity η/s now lies within the JETSCAPE predictions. Concerning the conductivities, the description of the strange quark contribution to the conductivity now is in better agreement with the lattice data, however, the total conductivity is found to be smaller than the lQCD results.III. We further tested another assumption commonly used in the quasi-particle models, namely that the light and strange quark masses are related by a temperature-independent mass shiftΔ m_ls.We have explored the value of Δ m_lsin the range [0,0.15] GeV and repeated the previous study, finding that this mass shift also has an impact on transport coefficients. We demonstrate that the strange quark electric conductivity can be better aligned with the lQCD results for a larger Δ m_ls. Our study suggests that to achieve a more realistic quark sector description and a better fit for the scaled conductivities considering a temperature-dependent mass shift Δ m_ls(T) with a higher value (Δ m_ls>0.03 GeV) near T_c is essential. However, the lack of physical constraints on its value and on its asymptotic scaling, prevents us from making any physically sensible assessment. In this sense, first-principle-based studies, such as those employing functional methods in the case of Dyson–Schwinger equations (DSE) <cit.> or functional renormalisation group (FRG) <cit.>, constraining the strange quark mass would be highly desirable.Overall, our work identifies machine learning as a promising flexible framework, which can provide hints for the improvements of a model description in the phenomenology of HICs. The parameterization of the quasiparticle properties inferred in this work is useful for transport approaches that incorporate a partonic phase, such as the Parton-Hadron-String Dynamics (PHSD), based on Kadanoff-Baym off-shell dynamics (cf. the reviews <cit.>). Furthermore, a similar scheme to the one proposed here can be used to study systems at finite baryon density, with a modest chemical potential. In this context, reliable lattice results are being produced, and the generalization capabilities of neural networks may provide yet another tool to investigate the QGP at large μ_B. The authors acknowledge inspiring discussions with L. Wang, J. Aichelin, E. Grossi, M. Ruggieri, O. Kaczmarek, and R. Pisarski. A.P. would also like to thank L. Anderlini and P. Braccia for useful suggestions. O.S. would like to thank V. Dusiak for the valuable comments. Also we thank to W. Cassing for the critical reading of our manuscript. 
Furthermore, we acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the grant CRC-TR 211 “Strong-interaction matter under extreme conditions” – project number 315477589 – TRR 211 as well as for the European Union's Horizon 2020 research and innovation program under grant agreement STRONG–2020 – No 824093. The computational resources have been provided by the Center for Scientific Computing (CSC) of Goethe University.§ APPENDIX A: SURROGATE MODEL APPROACH This appendix is devoted to the description of the surrogate model used in this work. As described in the main text, the training process of the neural networks involves the calculation of the integrals in Eqs. (<ref>) appearing in Eq.(<ref>).A direct numerical calculation of these integrals during training would be too computationally intensive and time-consuming, making the overall training process prohibitively lengthy. To circumvent this challenge, we employ a surrogate model. The primary role of the surrogate is to approximate the results of the aforementioned integrals in a computationally efficient way, allowing the neural network training to proceed faster. In practice, we have computed numerically the functions I_B, I_F, and χ_2^i appearing in Eqs. (<ref>) for different values of temperature T, mass over temperature m/T and width over temperature γ/T. The range of these parameters has been chosen to include the physical values of T, m/T, and γ/T for all quasiparticles as a subset. Therefore, we have computed the integrals in the range 155 MeV≤ T≤ 510 MeV, 0.2≤ m/T≤ 10 and 0.02≤γ/T≤ 6.2. In terms of the critical temperature T_c=158 MeV, the temperature range explored corresponds to T∈ [0.98 T_c, 3.2 T_c] approximately. For the numerical integration, we have used Mathematica <cit.>, and generated tables, which were then employed in the training of the surrogate models. These tables are formatted to represent data for s/T^3:T[GeV] |m / T|γ /T|I_B [GeV^3]|I_F[GeV^3]and for χ_2: T[GeV]| m/T| γ/T|χ^q_2 .We use the tables to train two neural networks that take as input T, m/T, and γ/T and give as output the functions Ĩ^F,B and χ̃_2, respectively. These neural networks are represented schematically in Figure <ref>. The loss function used to train these DNNs is the mean squared error, explicitly:ℒ^surr_s =∑_j=I,F(Ĩ^j(T,m/T,γ/T)-I^j(T,m/T,γ/T))^2, ℒ^surr_χ_2= (χ̃_2^q(T,m/T,γ/T)-χ_2^q(T,m/T,γ/T))^2.The surrogates are trained for 6× 10^4 epochs and at the end of the training the loss function is ℒ^surr∼ 10^-7-10^-8 for both training and validation samples in both models. The small value of the loss function testifies to the good accuracy achieved by the surrogates in approximating the numerical values of the integrals in Eqs. (<ref>).The surrogate models effectively perform a non-linear interpolation of the data collected in the training table. We prefer to use this approach with respect to standard interpolation procedures because of its speed, its non-linearity, and because of its ability to generalize outside of the range of the table.The surrogate model is used in the training process as described in the main text. Given the accuracy of the training of the surrogate model and the fact that the physical values of m/T and γ/T are well within the range of values used in the training of the surrogate, we expect that numerical errors and artifacts caused by the use of the surrogate will be much smaller than the errors on the lattice data. 
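As a concrete illustration of this training step, the sketch below sets up one surrogate network and its mean-squared-error fit. Layer widths, activation functions, the optimizer, and full-batch training are illustrative assumptions; the appendix only fixes the inputs (T, m/T, γ/T), the outputs, the MSE losses ℒ^surr, and the 6×10^4 training epochs.

```python
import torch
import torch.nn as nn

# Illustrative surrogate for the entropy-related integrals: maps the triple
# (T, m/T, gamma/T) to the pair (I_B, I_F).  A second network of the same
# shape, with a single output, plays the role of the chi_2 surrogate.
surrogate_s = nn.Sequential(
    nn.Linear(3, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 2),
)

def train_surrogate(model, inputs, targets, epochs=60_000, lr=1e-3):
    """Mean-squared-error fit to the pre-tabulated integrals.  The tables span
    155 MeV <= T <= 510 MeV, 0.2 <= m/T <= 10 and 0.02 <= gamma/T <= 6.2;
    full-batch Adam and the layer sizes above are assumptions of this sketch."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = mse(model(inputs), targets)
        loss.backward()
        opt.step()
    return model

# inputs : tensor of shape (n_rows, 3) with the (T, m/T, gamma/T) columns of the table
# targets: tensor of shape (n_rows, 2) with the tabulated (I_B, I_F) values
```

The small final value of ℒ^surr quoted above, reached for both the training and validation samples, indicates that the interpolation error introduced in this way is negligible on the scale of the lattice uncertainties.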
Therefore, we conclude that the surrogate model can be applied safely to the case studies. Notice that, despite the ability of the neural network to generalize outside of its range of training, any result out of the training intervals of the surrogate models mentioned above is, in principle, less reliable. | http://arxiv.org/abs/2311.15984v1 | {
"authors": [
"Olga Soloveva",
"Andrea Palermo",
"Elena Bratkovskaya"
],
"categories": [
"hep-ph",
"nucl-th"
],
"primary_category": "hep-ph",
"published": "20231127162840",
"title": "Extraction of the microscopic properties of quasi-particles using deep neural networks"
} |
Shell et al.: A Sample Article Using IEEEtran.cls for IEEE Journals Error Performance of Coded AFDM Systems in Doubly Selective Channels Haoran Yin This work was supported by Guangdong Natural Science Foundation under Grant 2019A1515011622. Haoran Yin is with the School of Electronics and Communication Engineering, Sun Yat-sen University, China (e-mail: [email protected]). January 14, 2024 ================================================================================================================================================================================================================================================================================Affine frequency division multiplexing (AFDM) is a strong candidate for the sixth-generation wireless network thanks to its strong resilience to delay-Doppler spreads. In this letter, we investigate the error performance of coded AFDM systems in doubly selective channels. We first study the conditional pairwise-error probability (PEP) of AFDM system and derive its conditional coding gain. Then, we show that there is a fundamental trade-off between the diversity gain and the coding gain of AFDM system, namely the coding gain declines with a descending speed with respect to the number of separable paths, while the diversity gain increases linearly. Moreover, we propose a near-optimal turbo decoder based on the sum-product algorithm for coded AFDM systems to improve its error performance. Simulation results verify our analyses and the effectiveness of the proposed turbo decoder, showing that AFDM outperforms orthogonal frequency division multiplexing (OFDM) andorthogonal time frequency space (OTFS) in both coded and uncoded cases over high-mobility channels. AFDM, DAFT domain, channel coding, coding gain, diversity analysis, doubly selective channels. § INTRODUCTIONEmerging applications including space air-ground integrated networks (SAGIN), high-speed railways, and vehicle-to-vehicle (V2V) networks call for reliable communications techniques that adapt to high-dynamic scenarios. The widely adopted orthogonal frequency division multiplexing (OFDM) fails to support reliable communications in doubly selective channels due to the serious inter-carrier interference (ICI) induced by Doppler spreads <cit.>, leading to an urgent demand for designing a new delay-Doppler-resilience waveform. In this context, the recently proposed affine frequency division multiplexing (AFDM) attracts rapidly growing attentions <cit.>. Along with some appealing features of inherent optimal diversity, low channel estimation overhead, and high backward compatibility with OFDM in doubly selective channels, AFDM is a promising candidate waveform for future wireless networks. Information symbols in AFDM are modulated on a set of orthogonal chirps via inverse discrete affine Fourier transform (DAFT), where the chirps' slope are tuned elaborately to harvest an equivalent delay-Doppler (DD) channel representation in DAFT domain. Many researches have studied the implementation of practical AFDM systems concerning channel estimation <cit.>, signal detection <cit.>, and multiple-input multiple-output techniques <cit.>.Channel coding is one of the most critical techniques to against fading and channel impairments and hence is widely-used in modern and pervious generations of wireless networks to ensure ultra-reliable communication. To the best of our knowledge, a comprehensive study on the performance of coded AFDM systems is still missing in the emerging AFDM literature. 
The main contributions of this letter are summarized as follows. ∙ We first study the conditional pairwise-error probability (PEP) of AFDM system and derive the corresponding conditional coding gain. Then we show how the number of resolvable paths, the Euclidean distance between the transmit codewords, and the maximum delay-Doppler spread of the channels influence the coding gain of coded AFDM systems with Monte Carlo simulation. In particular, we reveal that there is a fundamental trade-off between the diversity gain and the coding gain of AFDM system, i.e., the coding gain declines with a descending speed relative to the augmentation of resolvable paths, while the diversity gain increases linearly. ∙ To further improve the error performance of coded AFDM systems, we propose a near-optimal turbo decoder by exploring the sum-product <cit.> algorithm. Based on that, the analytical results are verifiedand a comparison among OFDM, orthogonal time frequency space (OTFS) <cit.>, and AFDM is conducted, showing that AFDM exhibits the best performance in terms of frame error rate (FER) in both coded and uncoded cases over high-mobility channels. Notations: ℂ denotes the set of complex numbers; a ∼𝒞𝒩(0, N_0) means that a follows the complex Gaussian distribution with zero mean and variance N_0;δ(·) denotes the Dirac delta function; diag(·) denotes a square diagonal matrix with the elements of input vector on the main diagonal;(·)^H, (·)^T, and · denote the Hermitian, transpose, and Euclidean norm operations; |·| denotes the absolute value of a complex scalar; (·)_N denotes the modulus operation with respect to N;Q(·) denotes the tail distribution function of the standard normal distribution.§ CODED AFDM SYSTEM MODEL In this section, we develop the coded AFDM system model with its block diagram shown in Fig. <ref>. 1) AFDM modulation: Let T_s denotes sample period, N denotes the number of subcarriers (chirps), then AFDM signal has a bandwidth B=1/T_s, subcarrier spacing Δ f=B/N=1/NT_s. At the transmitter, an information sequence 𝐮 is channel coded and mapped into DAFT domain vector 𝐱∈𝔸^N × 1, where 𝔸 represents the modulation alphabet. Then, N-point inverse DAFT (IDAFT) is performed to modulate 𝐱 to the time domain as <cit.>s[n]= ∑_m=0^N-1 x[m] ϕ_m[n], n=0, ⋯, N-1where n and m denote the time and DAFT domains indices, respectively, chirp subcarrier ϕ_m[n] is given by ϕ_m[n]=1/√(N) e^j 2 π(c_1 n^2+c_2 m^2+n m / N), c_1 and c_2 are two AFDM parameters, and c_1 determines the chirps' slope. Equation (<ref>) can be written in matrix form as𝐬 =Λ_c_1^H𝐅^HΛ_c_2^H𝐱 = 𝐀^H𝐱where 𝐀 = Λ_c_2𝐅Λ_c_1∈ℂ^N× N represents the DAFT matrix,𝐅 is the DFT matrix with entries e^-j 2 π m n / N / √(N), Λ_c≜diag(e^-j 2 π c n^2, n=0,1, …, N-1). 
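A minimal numerical sketch of this modulation step, with illustrative helper functions of our own naming rather than an optimized implementation, reads as follows.

```python
import numpy as np

def daft_matrix(N, c1, c2):
    """DAFT matrix A = Lambda_{c2} F Lambda_{c1}, with Lambda_c the diagonal
    chirp matrix diag(exp(-j*2*pi*c*n^2)) and F the normalized DFT matrix."""
    n = np.arange(N)
    Lam = lambda c: np.diag(np.exp(-2j * np.pi * c * n**2))
    F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
    return Lam(c2) @ F @ Lam(c1)

def afdm_modulate(x, c1, c2):
    """IDAFT in matrix form: time-domain AFDM frame s = A^H x from the
    DAFT-domain symbol vector x."""
    A = daft_matrix(len(x), c1, c2)
    return A.conj().T @ x

# Illustrative use (parameter values are examples only):
# N = 64
# alpha_max, k_nu = 1, 0
# c1 = (2 * (alpha_max + k_nu) + 1) / (2 * N)   # optimal-diversity rule quoted in Remark 1 below
# c2 = 1 / (2 * np.pi * N)                      # any (effectively) irrational value
# x = np.random.choice([-1.0, 1.0], N) + 0j     # BPSK symbols in the DAFT domain
# s = afdm_modulate(x, c1, c2)
```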
Before transmitting 𝐬, an chirp-periodic prefix (CPP) should be added, which plays the same role as the cyclic prefix (CP) in OFDM to cope with the multipath propagation and makes the channel lie in a periodic domain equivalently.2) Channel model: Consider the general doubly selective channel with followingimpulse responsewith delay τ and Doppler κ as h(τ, κ)=∑_i=1^P h_iδ(τ-l_iT_s) δ(κ-ν_iΔ f) <cit.>,where P is the number of paths, h_i denotes the channel coefficient of the i-th path, non-negative integer l_i∈ [0, l_max] is the associated delay normalized with T_s,ν_i=α_i+β_i represents the associated Doppler shift normalized with subcarrier spacing Δ f and has a finite support bounded by [-ν_max, ν_max], α_i∈ [-α_max, α_max] and β_i∈ (-1/2, 1/2] are the integer and fractional parts of ν_i respectively, ν_max denotes the maximum Doppler and α_max denotes its integer component.3) AFDM demodulation: At the receiver,the relationship between the received time domain symbols 𝐝 and 𝐬 can be expressed as d[n]=∑_i=1^P h_i e^-j2 π/Nν_in s[(n-l_i)_ N]+v[n],wherev ∼𝒞𝒩(0, N_0) represents the additive white gaussian noise (AWGN) component. It can be vectorized as𝐝 = ∑_i=1^P h_i𝐇̃_i𝐬+ 𝐯= 𝐇̃𝐬 + 𝐯where 𝐯∼𝒞𝒩(0, N_0𝐈_N) is the time domain noise vector, 𝐇̃∈ℂ^N× N denotes the effective time domain channel matrix, 𝐇̃_i= Δ_ν_iΠ^l_i represents the time domain subchannel matrix of the i-th path (each path can be viewed as one subchannel), Π denotes the forward cyclic-shift matrix which models the delay, while the digital frequency shift matrix Δ_ν_i≜diag(e^-j 2 π/Nν_i n, n=0,1, ⋯, N-1) models the Doppler. Finally, N-point DAFT is implemented and𝐝 are transformed to the DAFT domain symbols 𝐲 with y[m]=∑_m=0^N-1 d[n] ϕ_m^*[n] + w[m], where w represents the noise in the DAFT domain. Its matrix representationis 𝐲 = Λ_c_2𝐅Λ_c_1𝐝 = 𝐀𝐝.Since 𝐀 is a unitary matrix, w has the same statistical properties as v.4) Input-output relationship: The matrix form of AFDM input-output relationship in the DAFT domain can be obtained by substituting (<ref>) and(<ref>) into (<ref>) as <cit.>𝐲 =𝐇_eff𝐱 + 𝐰= ∑_i=1^P h_i𝐇_i𝐱 + 𝐰where 𝐇_i = 𝐀𝐇̃_i𝐀^H denotes the DAFT domain subchannel matrix of the i-th path, 𝐇_eff = ∑_i=1^P h_i𝐇_i is the effective channel matrix, 𝐰∼𝒞𝒩(0, N_0𝐈_N) is the DAFT domain noise vector. Remark 1: It has been proven in <cit.> (Theorem 1) that AFDM can achieve optimal diversity in doubly selective channels as long as c_1 = 2 (α_max+k_ν)+1/2N, and c_2 is set as an arbitrary irrational number (spacing factor k_ν is a non-negative integer used to combat the fractional Doppler).§ ERROR ANALYSIS OF CODED AFDM SYSTEMS In this section, we investigate the theoretical error performance of coded AFDM systems. For the convenience of illustration, we denote the vectors of channel coefficient, delay indices and Doppler indices as 𝐡, τ, and κ, respectively, i.e., 𝐡 = [h_1, h_2, …, h_P]^T∈ℂ^P× 1, τ = [l_1, l_2,…, l_p]^T∈ℂ^P× 1, and κ = [ν_1, ν_2,…, ν_p]^T∈ℂ^P× 1. Perfect channel state information (CSI) and maximum likelihood (ML) detection are assumed at the receiver. Then, according to <cit.>, Equation (<ref>) can be presented in an alternate way as 𝐲 = Φ_τ,κ(𝐱) 𝐡+𝐰where Φ_τ,κ(𝐱) = [𝐇_1𝐱,𝐇_2𝐱,…,𝐇_P𝐱 ]∈ℂ^N × P is the equivalent codeword matrix.Therefore, for a given channel realization, the conditional PEP of transmitting the symbol 𝐱 and deciding in favor of 𝐱^' at the receiver can be expressed as <cit.>P(𝐱, 𝐱^'|𝐡,τ,κ)=Q(√(Φ_τ,κ(𝐞)𝐡^2/2 N_0))where 𝐞 = 𝐱-𝐱^' is the corresponding codeword difference sequence. 
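Numerically, the quantities entering this bound can be assembled directly from the system model of the previous section. The fragment below continues the earlier sketch (reusing daft_matrix); subchannel_matrices and conditional_pep are names of our own choosing, and the Gaussian tail function from scipy stands in for Q(·).

```python
import numpy as np
from scipy.stats import norm

def subchannel_matrices(N, c1, c2, delays, dopplers):
    """DAFT-domain subchannel matrices H_i = A Delta_{nu_i} Pi^{l_i} A^H, with
    Pi the forward cyclic-shift (delay) matrix and Delta_nu the digital
    frequency-shift (Doppler) matrix; daft_matrix is reused from the sketch above."""
    n = np.arange(N)
    A = daft_matrix(N, c1, c2)
    Pi = np.roll(np.eye(N), 1, axis=0)          # (Pi s)[n] = s[(n-1) mod N]
    H = []
    for l_i, nu_i in zip(delays, dopplers):
        Delta = np.diag(np.exp(-2j * np.pi * nu_i * n / N))
        H.append(A @ Delta @ np.linalg.matrix_power(Pi, l_i) @ A.conj().T)
    return H

def conditional_pep(e, h, H_list, N0):
    """Conditional PEP Q( sqrt( ||Phi(e) h||^2 / (2 N0) ) ), with the columns of
    Phi(e) given by H_i e."""
    Phi = np.column_stack([Hi @ e for Hi in H_list])
    return norm.sf(np.sqrt(np.linalg.norm(Phi @ h) ** 2 / (2 * N0)))
```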
Define codeword difference matrix Ω_τ,κ(𝐞) ≜Φ_τ,κ(𝐞)^HΦ_τ,κ(𝐞), which is a Hermitian matrix and hence can bediagonalized by unitary transformation as Ω_τ,κ(𝐞) = 𝐔^HΛ𝐔, where Λ=diag{λ_1, λ_2, …, λ_P},λ_i is the i-th nonnegative real eigenvalue (sorted in the descending order) of Ω_τ,κ(𝐞). Then, we haveΦ_τ,κ(𝐞)𝐡^2 = 𝐡^HΦ_τ,κ(𝐞)^HΦ_τ,κ(𝐞)𝐡= 𝐡^H𝐔^HΛ𝐔𝐡 =𝐡̃^HΛ𝐡̃ = ∑_i=1^rλ_i|h̃_i|^2with 𝐡̃ = 𝐔𝐡, h̃_i being the i-th value of 𝐡̃, r denoting the rank of Ω_τ,κ(𝐞). Substituting (<ref>) into (<ref>) and applying the Chernoff bound of the Q-function, i.e., Q(γ) ≤exp(-1/2γ^2), ∀γ>0, we haveP(𝐱, 𝐱^'|𝐡,τ,κ)≤exp(-∑_i=1^rλ_i|h̃_i|^2/4N_0)where 1/N_0 denotes the signal-to-noise ratio (SNR). Moreover, applying the inequality of exp(-γ)≤1/1+γ,∀γ≥ 0, (<ref>) can beexpanded asP(𝐱, 𝐱^'|𝐡,τ,κ) ≤∏_i=1^r1/1+λ_i|h̃_i|^2/4N_0.Considering that h_i follows the distribution of 𝒞 𝒩(0,1 / P) (uniform scattering profile and |h_i| follows the Rayleigh distribution) and 𝐔 is unitary, h̃_i follows the distribution of 𝒞 𝒩(0,1 / P) as well. Therefore, in the case of Rayleigh fading and high SNR, (<ref>) can be further simplified asP(𝐱, 𝐱^'|τ,κ) ≤1/(1/4N_0)^r∏_i=1^rλ_i/P.It should be noted that (<ref>) is consistent with the analysis in <cit.>. Generally, the power of SNR 1/N_0 is defined as the diversity gain, which dominates the exponential behaviour of the error performance for AFDM systems with respect to SNR. According to Remark 1, AFDM systems always attain optimal diversity gain, i.e., r=P regardless of (τ, κ). Therefore, after some manipulations on (<ref>), we haveP(𝐱, 𝐱^'|τ,κ) ≤(1/4N_0)^-P((∏_i=1^Pλ_i)^1/P/P)^-P.In particular, we define the term (∏_i=1^Pλ_i)^1/P/P as conditional coding gain, which indicates the potential error performance improvement introduced by channel coding for a given channel realization. It is determined by the number of paths P, delay-Doppler profile (τ, κ) and codeword difference sequence 𝐞 jointly. In order to derive the unconditional coding gain, one should find the statistical distribution of the term ∏_i=1^Pλ_i regarding (τ, κ) and 𝐞, which is generally intractable <cit.>. Therefore, we resort to the Monte Carlo method to take a deep look at the unconditional coding gain of AFDM systems by approximating it with averaged conditional coding gain. Fig. <ref> shows the average coding gain versus squared error Euclidean distance d_E^2(𝐞)=𝐞^H𝐞 with differentmaximum delay and Doppler[Without loss of generality, we consider BPSK mapping and generate 𝐞by randomly selecting the indices of 𝐞 and set them to “±2", where the number of non-zero indices isd_E^2(𝐞)/4, and 𝐞 is of the form [0,2,0,-2,0,…,0]^T. Besides, the integer delay and Doppler indices are chosen randomly according to the uniform distribution among [0, l_max] and [-α_max, α_max], respectively.].We can observe that with the rise of d_E^2(𝐞), the average coding gain increases with a descending speed. Moreover, we can notice that different maximum Doppler α_max and maximum delay l_max do not have a prominent influence on the average coding gain. This implies that given the same number of paths, the error performance of AFDM systems remain nearly unchanged with various channel dynamic levels (the maximum speeds corresponding to α_max=1, 2, and 3 are 135 kmph, 270 kmph, and 405 kmph, respectively). More importantly, we can observe clearly that with the increases of P,the average coding gain decreases with a descending speed. 
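The averaging behind these curves can be reproduced with a short Monte Carlo fragment. The sketch below reuses subchannel_matrices from the previous sketch; the sampling of 𝐞 and of the delay/Doppler profiles follows the footnoted procedure (BPSK, d_E^2(𝐞)/4 entries set to ±2, uniformly drawn integer delays and Dopplers), while the function names and the number of trials are our own choices.

```python
import numpy as np

def coding_gain(e, H_list):
    """Conditional coding gain (prod_i lambda_i)^(1/P) / P from the eigenvalues
    of Omega(e) = Phi(e)^H Phi(e)."""
    P = len(H_list)
    Phi = np.column_stack([Hi @ e for Hi in H_list])
    lam = np.linalg.eigvalsh(Phi.conj().T @ Phi)
    return np.prod(lam) ** (1.0 / P) / P

def average_coding_gain(N, c1, c2, P, d_E2, l_max, alpha_max, trials=500, rng=None):
    """Monte Carlo average over random BPSK difference sequences (d_E^2/4 entries
    set to +-2) and random integer delay/Doppler profiles, reusing
    subchannel_matrices from the previous sketch."""
    rng = np.random.default_rng() if rng is None else rng
    gains = []
    for _ in range(trials):
        e = np.zeros(N, dtype=complex)
        idx = rng.choice(N, size=d_E2 // 4, replace=False)
        e[idx] = rng.choice([-2.0, 2.0], size=idx.size)
        delays = rng.integers(0, l_max + 1, size=P)
        dopplers = rng.integers(-alpha_max, alpha_max + 1, size=P)
        gains.append(coding_gain(e, subchannel_matrices(N, c1, c2, delays, dopplers)))
    return float(np.mean(gains))
```

Averaged in this way, the gain grows sub-linearly with d_E^2(𝐞) and shrinks, with a flattening slope, as the number of paths P increases.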
This indicates that there exists a fundamental trade-off between the coding gain and the diversity gain, which are framed formally as follows.Corollary 1 (Trade-Off Between Coding Gain and Diversity Gain of AFDM Systems): For a given channel code, the coding gain of AFDM systems declines at a descending speed as the number of separable paths increases, while the diversity gain grows linearly.Corollary 1 implies that when there are few notable propagation paths in the channel, the diversity gain of AFDM systems is small and the potential of error performance improvement offered by channel coding is of great tremendousness. While in rich-scattering conditions, it is expected that channel coding can only provide a limited improvement to the overall error performance of AFDM systems.§ TURBO DECODER FOR CODED AFDM SYSTEMS To unleash the potential of channel coding to improve the error performance of AFDM systems to the greatest extent, we propose a near-optimal turbo decoder by exploring the sum-product algorithm (SPA) <cit.> for coded AFDM systems.1) Symbol-wise SPA detector: We first derive the symbol-wise SPA detection for AFDM systems. For the convenience of illustration, we consider integer delay and Doppler (β_i=0, ∀ i ∈ [1,P]). In this case,due to the delay-Doppler spread and fading of the doubly selective channel, each received symbol y[m^'] consists of P impaired transmitted symbols x[m_i], m_i = m^'+ind_i,i ∈ [1,P], where loc_i≜(α_i+2 N c_1 l_i)_N is the index indicator of the i-th path <cit.>. Meanwhile, each transmitted symbol affects P received symbols. Denote the set of P received symbols that influenced by x[m] as 𝐲_m≜{y[(m- loc_i)_N] | i ∈ [1,P]}, and the set of P-1 transmitted symbols that related to the received symbol 𝐲_m[i] except for x[m] as 𝐱_m^(i)≜{x[(m-loc_i+loc_j)_N] | j ∈ [1,P], j≠ i }. Consider symbol-wise maximum a posterior (MAP) criterion, i.e., x̂[m]=max_x[m] ∈𝔸Pr{x[m] |𝐲}where the a posterior probabilitycan be expended by applying the Bayes's rule asPr{x[m] |𝐲}∝Pr{𝐲| x[m]}Pr{x[m]}.Let 𝐲_m|_i+1^P represents the vector of the (i+1)-th to the P-th entries of 𝐲_m, then (<ref>) can be further expended with the chain rule asPr{𝐲| x[m]}Pr{x[m]}=∏_i=1^PPr{𝐲_m[i]| 𝐲_m |_i+1^P, .𝐲\𝐲_m, x[m]}}Pr{x[m]}=∏_i=1^P∑_𝐱_m^(i)Pr{𝐲_m[i], 𝐱_m^(i)| 𝐲_m|_i+1^P, 𝐲\𝐲_m, x[m]}Pr{x[m]}where 𝐲\𝐲_m represents the complementary set of 𝐲_m correspending to 𝐲. Applying the chain rule again, we havePr{𝐲| x[m]}Pr{x[m]}=∏_i=1^P∑_𝐱_m^(i)Pr{𝐲_m[i] | 𝐱_m^(i), x[m]} ×Pr{𝐱_m^(i)| 𝐲_m|_i+1^P, 𝐲\𝐲_m}Pr{x[m]} .Finally, assuming that the entries of 𝐱_m^(i) are independent to the elements from 𝐲_m|_1^i-1, we obtain[According to <cit.> and <cit.>, this assumption will only introduce few errors when the channel is sufficiently sparse (N≫ P).]Pr{x[m] |𝐲}≈∏_i=1^P∑_𝐱_m^(i)Pr{𝐲_m[i] | 𝐱_m^(i), x[m]}×Pr{𝐱_m^(i)| 𝐲\𝐲_m}Pr{x[m]}where Pr{x[m] |𝐲\𝐲_m}∝∏_j=1 j ≠ i^PPr{x[m] |𝐲_m[j]} Pr{𝐲_m[i]|𝐱_m^(i),x[m]}=1/√(π N_0)×exp(-|𝐲_m[i]-ℱ(m,i)|^2/ N_0) ℱ(m,i) =∑_j=1^P-1𝐇_eff[ind_𝐲(m,i),ind_𝐱(m,i,j)] 𝐱_m^(i)[j]+𝐇_eff[ind_𝐲(m,i),m] x[m]two index extractors are defined as ind_𝐲(m,i) = (m-ind_i)_N and ind_𝐱(m,i,j)=(m-loc_i+loc_j)_N. The detailed procedures of the symbol-wise SPA detector are summarized in Algorithm <ref>.2) Near-optimal turbo decoder: Based on the above SPA detector, we propose an iterative turbo decoder for coded AFDM systems, which is illustrated in Fig. <ref>. 
After acquiring the a posterior probability ofx̂[m], m=0, ⋯, N-1, we can calculate the corresponding bit log likelihood ratios (LLR) with 𝔸, which is de-interleaved and then fed to an optimaldecoder, e.g., Bahl-Cocke-Jelinek-Raviv (BCJR) decoder <cit.> for convolutional code. The output bit LLRs from the BCJR decoder are then interleaved and converted to symbol LLRs according to 𝔸, which is the updated a priori probabilityPr{x[m]} and initiates the SPA detector again. This completes one turbo iteration.Since both the symbol-wise SPA detector and the BCJR decoder followthe MAP criterion, the proposed turbo decoder is expected to approach the optimal error performance of AFDM systems in terms of the bit error ratio (BER). Moreover, the BCJR decoder therein can be replaced by other good decoders depending on the adopted channel code, e.g.,low-density parity-check (LDPC) and turbo codes. § SIMULATION RESULTS In this section, we present the error performance of coded AFDM in terms of FER. Carrier frequency f_c=4 GHz, number of subcarriers N=128, subcarrier spacing Δ f_AFDM= 500 Hz, BPSK and 4QAM mappings, and the proposed SPA detector and turbo decoder are applied. Without loss of generality, we adopt three near-1/2 coderate convolutional codes, termed “Code A”, “Code B”, and “Code C”, with generating polynomials given by (3,1), (5,7), and (51,77) and corresponding minimum squared Euclidean distances d_min, E^2(𝐞) given by 12, 20, and 32,respectively. We only consider integer delay and Doppler with Rayleigh fading, where l_max=3 and α_max=3,corresponds to a maximum UE speed of 405 kmph. For each channel realization, the channel coefficient h_i follows the distribution of 𝒞 𝒩(0,1 / P) and the delay and Doppler indices are chosen randomly according to the uniform distribution among [0, l_max] and [-α_max, α_max], respectively. Fig. <ref> (a) shows the FER performance of AFDM systems with different codes. P=2 and the result of uncoded AFDM is also provided. We can observe that the FER curves of uncoded AFDM system and coded AFDM systems with different minimum squared Euclidean distances share the same diversity slope. This is because the uncoded AFDM already achieves the optimal diversity gain and applying channel coding does not influence the overall diversity gain of AFDM systems, as revealed in (<ref>). Moreover, we can also notice that with theincrease of d_min, E^2(𝐞), the channel gain of coded AFDM system enhances.Therefore, a preliminary guideline for the code design of AFDM systems is to maximize d_min, E^2(𝐞) among all pairs of codewords of the adopted channel code. Furthermore, the trend of coding gain enhancement over larger d_min, E^2(𝐞) gets slow down with the increase of d_min, E^2(𝐞). These observations are consistent with our analyses of Fig .<ref>. Fig. <ref> (b) shows the FER performance of AFDM systems with different number of paths. 4QAM and code A are applied. We can see that the diversity gains of the uncoded and coded cases are the same regardless of the value of P. Moreover, the diversity gain increases with the increase of P, which demonstrates the effectiveness of the symbol-wise SPA detector and turbo decoder in exploring the optimal diversity of AFDM. More importantly, the coding gain of AFDM system declines at a descending rate. In specific, at FER =10^-2,the coding gains of AFDM system with P=2, 3, and 4 are 1.43 dB, 1.03 dB, and 0.83 dB, respectively. 
This verifies the correctness of Corollary 1.We next compare the error performance of OFDM, OTFS, and AFDM with the same time-frequency resources. Frequency domain single-tap SPA detection is used in OFDM systems and we can see from Fig. <ref> (c) that the OFDM systems are paralyzed due to the serious ICI. Moreover, We can notice that the uncoded OTFS can not achieve optimal diversity, which is consistent with the analyses in <cit.>. This leads to a huge error performance gap between uncoded OTFS and uncoded AFDM. In specific, at FER ≈ 8 × 10^-3, the required SNR of uncoded OTFS is around 1.70 dB larger than that of uncoded AFDM. By observing the FER trends of the two systems, we can infer that the error gap between uncoded OTFS and AFDM system will become larger with the increase of E_b/N_0. Furthermore, we can see that the difference between coded OTFS and coded AFDM is smaller than the uncoded cases, which is consistent with the conclusion in <cit.>that the achieved diversity gain of OTFS systems can be improved via channel coding. It is worth noting that the coded AFDM still exhibits slightly better error performance than the coded OTFS. Therefore, we can conclude that AFDM outperforms OFDM and OTFS in both uncoded and coded cases.§ CONCLUSION In this letter, we provide a comprehensive study on the error performance of coded AFDM systems in doubly selective channels. We derive the conditional coding gain of AFDM systems and show that there is afundamental trade-off between coding gain and diversity gain in AFDM systems. Moreover, we explore the sum-product algorithm and design a near-optimal turbo decoder for AFDM systems. Simulation results verify our analyses and the effectiveness of the proposed SPA detector and turbo decoder. Finally, a comparison among OFDM, OTFS, and AFDM is conducted, showing that AFDM outperforms OTFS in both uncoded and coded systems, especially in the former case. In the near future, we will investigate the modern code design for AFDM systems with imperfect channel state information.99 IEEEtran23.11.14.1 T. Wang, J. G. Proakis, E. Masry, and J. R. Zeidler, “Performance degradation of OFDM systems due to doppler spreading,” IEEE Trans. Wireless Commun., vol. 5, no. 6, pp. 1422-1432, 2006.23.10.17.1 A. Bemani, N. Ksairi and M. Kountouris, “AFDM: a full diversity next generation waveform for high mobility communications,” IEEE Int. Conf. Commun. Workshops (ICC Workshops), pp. 1-6, 2021.bb101 A. Bemani, G. Cuozzo, N. Ksairi and M. Kountouris, “Affine frequency division multiplexing for next-generation wireless networks,” Int. Symp. on Wireless Commun. Systems (ISWCS), pp. 1-6, 2021. 23.10.17.2 A. Bemani, N. Ksairi and M. Kountouris, “Affine frequency division multiplexing for next generation wireless communications,” IEEE Trans. Wireless Commun., early access, 2023. 23.10.17.4 H. Yin and Y. Tang, “Pilot aided channel estimation for AFDM in doubly dispersive channels,” IEEE/CIC Int. Conf. Commun. in China (ICCC), pp. 308-313, 2022. 23.10.18.1 H. Yin, X. Wei, Y. Tang, and K. Yang, “Diagonally reconstructed channel estimation for MIMO-AFDM with inter-doppler interference in doubly selective channels,” arXiv preprint arXiv:2206.12822v6, 2023.bb102 A. Bemani, N. Ksairi and M. Kountouris, “Low complexity equalization for AFDM in doubly dispersive channels,” IEEE Int. Conf. on Acoustics, Speech, and Signal Process. (ICASSP), pp. 5273-5277, 2022. 23.10.17.3 H. 
Yin et al., “Cyclic delay-doppler shift: a simple transmit diversity technique for delay-doppler waveforms in doubly selective channels,” IEEE Int. Conf. on Acoustics, Speech, and Signal Process.Workshops (ICASSP Workshops), pp. 1-5, 2023.23.11.15.1 F. R. Kschischang, B. J. Frey, and H. -A. Loeliger, “Factor graphs and the sum-product algorithm,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 498–519, Feb. 2001.23.10.16.3 R. Hadani et al., “Orthogonal time frequency space modulation,” IEEE Wireless Commun. Netw. Conf. (WCNC), pp. 1-6, 2017. 23.11.16.1 D. Tse and P. Viswanath, Fundamentals of Wireless Communication. Cambridge, U.K.: Cambridge University Press, 2005.23.11.16.2 B. Lu and X. Wang, “Space-time code design in OFDM systems,” IEEE Global Telecommun. Conference. Conf. (Globecom), vol. 2, pp. 1000–1004. Nov. 2000.23.11.17.1 S. Li et al., “Hybrid MAP and PIC detection for OTFS modulation,” IEEE Trans. Veh. Technol., vol. 70, no. 7, pp. 7193-7198, July 2021. 23.11.15.2 L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, “Optimal decoding of linear codes for minimizing symbol error rate (Corresp.),” IEEE Trans. Inf. Theory, vol. IT-20, no. 2, pp. 284–287, Mar. 1974.23.11.18.1 E. Biglieri, P. Raviteja, and Y. Hong, “Error performance of orthogonal time frequency space (OTFS) modulation,” IEEE Int. Conf. Commun. Workshops (ICC Workshops), May 2019, pp. 1–6.23.11.18.2 S. Li, et al., “Performance analysis of coded OTFS systems over high-mobility channels," IEEE Trans. Wireless Commun., vol. 20, no. 9, pp. 6033-6048, Sept. 2021. | http://arxiv.org/abs/2311.15595v1 | {
"authors": [
"Haoran Yin"
],
"categories": [
"cs.IT",
"eess.SP",
"math.IT"
],
"primary_category": "cs.IT",
"published": "20231127074236",
"title": "Error Performance of Coded AFDM Systems in Doubly Selective Channels"
} |
Institut für Angewandte Physik, Technische UniversitätWien, Wiedner Hauptstraße 8-10/E134, A-1040 Vienna, Austria Institut für Photonik, Technische UniversitätWien, Gußhausstraße 27-29/E387, A-1040 Vienna, Austria Theiss Research, 7411 Eads Ave., La Jolla, CA 92037-5037, USA. Spectroscopy of correlated electron pairs was employed to investigate the energy dissipation process as well as thetransport and the emission of low energy electrons on a polymethylmetracylate (PMMA) surface, providing secondary electron (SE) spectra causally related to the energy loss of the primary electron.Twogroups of electrons are identified in the cascade of slow electrons, corresponding to different stages in the energy dissipation process.For both groups, the characteristic lengths for attenuation due to collective excitations and momentum relaxation are quantified and are found to be distinctly different: λ_1=(12.0±2)Å andλ_2=(61.5±11)Å.The results strongly contradict the commonly employed modelof exponential attenuationwith the electron inelastic mean free path (IMFP) as characteristic length, but essentially agree with a theoryused fordecades in astrophysicsand neutron transport, albeit with characteristic lengths expressed in units of Ångstrøms rather than lightyears.Energy Dissipation of Fast Electrons in Polymethylmetacrylate (PMMA): Towards a Universal Curve for Electron Beam Attenuation in Solidsfor Energies between ∼0 eV and 100 keV Olga Ridzel January 14, 2024 =============================================================================================================================================================================== Electrons with vacuum energies in the range of ∼0-20 eV are playing an increasingly important role in modern science and technology. While low energy electrons (LEEs)have been utilised for a century in electron microscopy<cit.>, modern applications ofnanotechnology require an improved understanding of the energy dissipation of LEEs in solidsurfaces. Thisconcerns the effective interaction volume in electron beam lithography caused by electron diffusion (proximity effect)<cit.> as well as focussed electron beamdeposition <cit.>, spacecraft surface charging<cit.>, electron cloud formation in charged particle storage rings<cit.> and plasma-wall interaction in fusion research<cit.>. LEEs are also the essential agents for DNA-strand breaks as a result of irradiation of biological tissue with ionising radiation<cit.>.The transport of LEEs near solid surfaces is particularly important for the emerging fields of plasmonics<cit.>and photonics, e.g. to quantify photoelectron delay times due to collective excitations in solids for attosecondphysics on solid surfaces<cit.>or optical-field induced correlated electron emission<cit.>. For medium energies(∼100 eV–100 keV), the electron–solid interaction relevant to electron spectroscopy for surface analysis is nowadays quantitatively understood<cit.>. A case in point is the attenuation of electron beams penetrating a surface. Jabłonski and Powell<cit.> have recently reviewed thedevelopmentof a method to reliably quantify electron attenuation, which took several decades. The commonly accepted model is an exponentialattenuation law, with theelectron inelastic mean free path (IMFP, λ_i) as characteristic length, slightly modifiedto account for the influence ofelastic electron scatteringinaccordance with the employed experimental conditions. 
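For reference, the attenuation model referred to here (a standard textbook relation, not a result of the present work) takes the simple exponential form I(z) = I_0 exp(-z/λ_i), where I_0 is the incident intensity, z the depth traversed in the solid, and λ_i the IMFP, in practice slightly modified for elastic scattering as described above.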
Measurements of the IMFP using electron reflection experiments agree quantitatively with theoreticalresults based on optical data and linear response theory<cit.>. At low energies (<100 eV), however, it is still not possibletosatisfactorily describe essential observablesupon impact of a primary electron, such as the spectrumof emitted secondary electrons(SEs) or the SE-yield, since additional physicalphenomena come into play that make the parameters of theoretical models less reliable, whileexperiments with LEEs are generally more difficult<cit.>. Refinement of any modelis complicated by the lack of benchmark experiments specifically designed to obtain information on individual physical parameters or processes.Concerning the length scale over which low energy electrons are attenuated, many authorsadopt the same approach as for medium energies, i.e. exponential attenuation withthe IMFP as characteristic length. The present work challengesthis approach for low energies. We investigate the energy dissipation of fast electronsin polymethylmetacrylate (PMMA), aphotoresist commonly used in electron beam lithography <cit.>, and study the transport and emission ofLEEs liberated upon impact of the primary. Correlated electron pairs of primary (medium-energy) electrons striking a surface and secondary (low-energy) electronsemitted as a result are measured in coincidence, yielding secondary electron spectra causally related to a given energy loss of the primaryafter a certain number of inelastic collisions. The quantitativemodelformedium-energy electron transport <cit.> is then invoked tocalculate the average depth at which a given number ofenergy losses of theprimary electrons take place. Since each energy loss createsa single secondary electron<cit.>, comparison of the intensity of energy losses of the primary, i.e. the number of secondaryelectronscreated at a certain depth with the number emitted into vacuum, then provides the length scale over whichlow energy secondary electrons are attenuated.The results strongly contradict the commonly used exponential attenuation law.This observation is not unexpected, given the dynamic interplay between energy fluctuations arising from collective excitations (governed by the IMFP) and momentum relaxation attributed to elastic scattering by the Coulomb potential of the ionic cores (described by the transport mean free path(λ_tr,TrMFP) <cit.>). This relationship changes dramatically at energies below 100 eV. A universal attenuation law adequately accounting for these phenomena developed earlierin astrophysics and neutron transport theory<cit.>describes our results satisfactorily. The present findings may thus help to resolve the ongoingcontroversy (see e.g. Refs. <cit.>) regarding low energy electron beam attenuation in solids.Thechain of processes we identify in the energy dissipation mechanism is expected to be more generallyencountered, e.g.,in biological matter exposed to ionising radiation <cit.>, since the relevant electron–solid interaction characteristics and electronic structure is similar for a large class of materials <cit.>. Spectra ofelectron pairs correlated in time were measured for electrons with energies of E_0=173, 500 and 1000 eV incident on a PMMA surface.To avoid charging of the insulator surface, the experimental conditions ensuredthat each surface atom on average was hit at the most by one primary electron during acquisition,which took about one month for each primary energy (see the SM for experimental details<cit.>). Fig. 
Fig. <ref>a shows the raw data for E_0=500 eV on a false colour scale. The SE-spectra caused by specific energy losses of the primary are given in Fig. <ref>b. Each pixel in the double differential coincidence data in Fig. <ref>a represents the intensity of detected electron pairs: a fast inelastically scattered (primary) electron with energy E_1 and a slow (secondary) electron with energy E_2 created during the impact of the primary. A simple model for the SE-emission process explaining these data is illustrated in Fig. <ref>a: in the course of an inelastic scattering process, the energy loss Δ E=E_0-E_1 of a primary electron is transferred to an occupied state in the valence band with binding energy E_b. The SE liberated inside the solid can be emitted into vacuum if its energy suffices to overcome the surface barrier U=E_g+χ, consisting of the energy gap, E_g=5.5 eV <cit.>, and the electron affinity, χ=4.5 eV <cit.>. The yellow curve in Fig. <ref>a delimits the maximum vacuum energy of a secondary electron created by a given energy loss, E_2=Δ E-U. The white curve represents the differential inverse inelastic mean free path (DIIMFP), i.e. the distribution of energy losses in individual inelastic collisions. The narrow stripe of high intensity near the plasmon resonance just below the yellow curve in Fig. <ref>a is attributed to a plasmon-assisted (e,2e)-process <cit.>. Multiple plasmon excitation by the primary is responsible for the intensity at larger losses (>30 eV). Here, the intensity along the E_2 axis approximately peaks at E_2-E_vac=ħω_p-χ-E_g∼ 11 eV (see Fig. <ref>b), in a process where a plasmon decays and the resonance energy ħω_p is transferred to a single solid-state electron near the valence band maximum that overcomes the surface barrier. The well-known phenomenon <cit.> that each energy loss leads to liberation of a single solid-state electron follows from the fact that the width of the plasmon feature along the binding energy axis in the coincidence spectrum matches the width of the valence band of PMMA (see Ref. <cit.>). The similarity of the coincidence SE-spectra for arbitrary energy loss ranges indicates that the source energy distribution of SEs depends weakly on the energy of the electrons generating them. The reason is that the shape of the DIIMFP does not significantly change with the energy of the projectile, which in our case is the primary electron after multiple plasmon losses. This follows from Fig. <ref>b <cit.> showing the DIIMFP for various energies calculated from optical data <cit.>. For projectile energies well above the plasmon resonance of ħω_p∼21 eV, their shape is practically identical. The maximum energy loss for 11 eV electrons (above vacuum) is seen to be 15.5 eV, since no allowed states exist at energies below E_vac-χ. These observations provide further evidence for the Markov-type character of multiple inelastic electron scattering leading to SE-emission <cit.>. It is perhaps surprising that the maximum in the SE spectra in Fig. <ref> is found at 11 eV, a much higher energy than typically observed in SE-spectra. It should be kept in mind that the data in Fig. <ref> are special in that they constitute SE-spectra emitted as a result of a specific energy loss. Indeed, the maximum of the SE-peak in the singles spectra, as well as the peak in the cascade region in the coincidences, is located at ∼3.7 eV (not shown, <cit.>).
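To make these kinematic relations concrete, the short Python sketch below evaluates the maximum SE vacuum energy for a given energy loss and the expected position of the plasmon-decay peak, using the PMMA parameters quoted above (E_g=5.5 eV, χ=4.5 eV, ħω_p≈21 eV); the function names are illustrative and not part of the analysis code used for the figures.

# Kinematics of SE emission in PMMA following the relations in the text (eV).
E_GAP = 5.5                     # band gap E_g
AFFINITY = 4.5                  # electron affinity chi
BARRIER = E_GAP + AFFINITY      # surface barrier U = E_g + chi
HBAR_OMEGA_P = 21.0             # plasmon resonance energy

def max_se_energy(delta_e):
    """Maximum vacuum energy of a SE created by an energy loss delta_e
    (yellow curve in Fig. 1a): E_2 = delta_e - U, zero below the barrier."""
    return max(delta_e - BARRIER, 0.0)

def plasmon_decay_peak():
    """SE peak position when a plasmon decays and transfers its full energy
    to an electron at the valence-band maximum."""
    return HBAR_OMEGA_P - BARRIER

for de in (15.0, 21.0, 30.0, 60.0):
    print(f"dE = {de:5.1f} eV -> max SE vacuum energy {max_se_energy(de):5.1f} eV")
print(f"plasmon-decay SE peak at ~{plasmon_decay_peak():.1f} eV above vacuum")

With the quoted values the plasmon-decay peak indeed falls at 11 eV above the vacuum level.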
These observations qualitatively clarify the first stage of the energy dissipation of swift electrons in PMMA: the projectile spends its energy in the course of multiple plasmon excitations, as illustrated by the filled curves in Fig. <ref>a and b. Subsequent plasmon decay induces interband transitions leading to SE emission with energy distributions practically independent of the energy loss of the primary, since the shape of the DIIMFP depends very weakly on the projectile energy (as long as it exceeds the plasmon resonance energy). The intensity of coincidences along Δ E in Fig. <ref>a is remarkable in that it increases monotonically up to an energy of ∼150 eV and decreases afterwards. A similar behaviour was observed for all primary energies and can be seen more clearly for 1000 eV in Fig. <ref>a and b: while the intensity in the singles energy loss spectrum (Fig. <ref>a) decreases monotonically with the energy loss, the total number of emitted SEs (i.e. the coincidence data integrated over E_2, Fig. <ref>b) exhibits a maximum at Δ E∼250 eV. The electron energy loss spectrum is a superposition of the n-fold self-convolutions of the DIIMFP <cit.>. Fitting the spectra to a linear combination of such functions then yields the contribution of n-fold inelastically scattered primaries to the spectrum <cit.>. The corresponding fits are shown as black curves in Fig. <ref>b and c, while the coloured filled curves represent the contributions to the spectra of individual n-fold energy losses. The areas under these curves correspond, respectively, to the number of inelastic collisions experienced by the primaries (for the singles spectrum) and the number of secondary electrons emitted as a result (for the coincidence spectrum). These quantities are referred to as partial intensities, C_n <cit.>. The reduced partial intensities, γ_n=C_n/C_1, are presented in Fig. <ref>c. For the first few scattering orders, the singles partial intensities are close to unity. It is then expected that the coincidence partial intensities should follow the relationship γ_n^coi=n (green line in Fig. <ref>c), since n energy losses create n secondary electrons. However, all coincidence partial intensities consistently lie below the green line. The probability for n-fold scattering increases with the travelled pathlength <cit.>, i.e., the average depth at which higher order collisions take place increases monotonically with collision number. Then, the decrease of the number of emitted secondary electrons with increasing scattering order, i.e., the deviation of the coincident intensity from the expected behaviour γ_n^coi=n, is attributable to a corresponding increase of the depth of creation ⟨ z_n⟩. At this stage we invoke the quantitative model for medium-energy electron-solid interaction <cit.> to calculate the average depth ⟨ z_n⟩ at which n-fold scattering takes place, using a Monte Carlo (MC) model (see SM <cit.>). Since n-fold scattering leads to the generation of n secondary electrons at the corresponding depths, the quantity γ_n^coi./(n γ_n^sing.) as a function of ⟨ z_n⟩ describes the attenuation of SEs created at a certain depth before they reach the surface.
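A sketch of how the reduced partial intensities and this depth-dependent emission ratio can be extracted is given below; the array names are hypothetical and the actual analysis of the measured spectra is more involved, but the idea of fitting a linear combination of n-fold self-convolutions of the DIIMFP is the same.

import numpy as np
from scipy.optimize import nnls

def partial_intensities(loss_spectrum, diimfp, n_max, de):
    """Fit a loss spectrum to a linear combination of n-fold self-convolutions
    of the DIIMFP (all on the same energy-loss grid with spacing de) and
    return the partial intensities C_n for n = 1..n_max."""
    w = diimfp / (diimfp.sum() * de)                 # unit-area single-loss distribution
    basis, conv = [], w.copy()
    for _ in range(n_max):
        basis.append(conv.copy())
        conv = np.convolve(conv, w)[: len(w)] * de   # next self-convolution on the grid
    A = np.stack(basis, axis=1)                      # columns: n-fold loss distributions
    c_n, _ = nnls(A, loss_spectrum)                  # non-negative least-squares fit
    return c_n

def emission_ratio(c_coi, c_sing):
    """gamma_n^coi / (n * gamma_n^sing), with gamma_n = C_n / C_1, to be plotted
    against the simulated mean creation depths <z_n>."""
    n = np.arange(1, len(c_coi) + 1)
    return (c_coi / c_coi[0]) / (n * (c_sing / c_sing[0]))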
This relationship is shown in Fig. <ref>a on a semilogarithmic plot. The accessible depth ranges are widely different for the three considered primary energies, but their depth dependence is satisfactorily described by the same attenuation law, which is clearly not a simple exponential function: the solid (red) curves represent a fit to a double exponential function, α_1 exp(-z/λ_1)+α_2 exp(-z/λ_2), yielding distinctly different characteristic lengths of λ_1=(12.0±2) Å and λ_2=(61.5±11) Å. The same analysis was applied to simulated spectra using our MC model. An essential aspect of this MC model is that deflections in the course of inelastic collisions are taken into account using a quantum-mechanical approach <cit.>. The MC results for 1000 eV are shown by the red circles in Fig. <ref>b, along with a fit (solid red curve) to a double exponential curve with the same characteristic lengths λ_1 and λ_2 as in (a). The blue and green points are for subsets of these data for depths of origin z_0 smaller (triangles, blue) and larger than 12 Å (diamonds, green). The solid blue curve is a double exponential function with the same characteristic lengths as above; the solid green curve is a single exponential function with characteristic length λ_2. The MC calculations also yield the mean vacuum energies in the above two groups as ⟨ E_λ_1⟩ =11 eV and ⟨ E_λ_2⟩=4 eV. These results suggest a rather simple chain of processes for the first stages of energy dissipation of fast electrons in PMMA. Multiple plasmon excitation of the fast primary electron leads to the creation of secondaries with an energy distribution peaking around E-E_vac=ħω_p-E_g-χ∼11 eV. During the transport to the surface such an electron has a significant probability to undergo an inelastic collision: the area under the curves for 11 and 1000 eV in Fig. <ref>b is of the same order of magnitude. Let the corresponding characteristic length be denoted by λ_1. If such a first generation "11 eV"-secondary electron is created at a depth larger than λ_1, it is likely to suffer another energy loss before escape. In case this energy loss is smaller than the surface barrier (Δ E < U), it is transferred to an electron in the valence band, and the latter (liberated) electron can only be promoted to a hot-electron state in the conduction band below the vacuum level. It cannot escape into vacuum. The inelastically scattered electron itself will have an energy just above vacuum after the collision. The other case, when the energy loss of the first generation secondary electron exceeds the surface barrier (Δ E>U), leads to a situation where, in the final state, the roles of the scattered and liberated electron are reversed: the scattered electron will be a hot electron below the vacuum level, while the liberated electron will have a positive vacuum energy and can be emitted as a secondary electron (of the second generation). In both cases, the energy of the electrons with a positive vacuum energy will be small (typically of the order of a few eV above vacuum) and their IMFP will be large due to the limited availability of final states in further scattering processes. Hence the characteristic length for attenuation for the second generation (λ_2) will also be large (green diamonds, z_0>12 Å). If a first generation "11 eV"-electron is created at a depth smaller than λ_1 (blue triangles, z_0<12 Å), it can escape without further loss if its initial direction points outward; otherwise it will scatter and belong to the λ_2-group thereafter. The mechanism outlined above corresponds exactly to the results shown in Fig. <ref>b.
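The double-exponential fit quoted above can be reproduced with a standard least-squares routine; the following sketch uses hypothetical arrays z_n (mean creation depths) and ratio (the emission ratios of the previous step).

import numpy as np
from scipy.optimize import curve_fit

def double_exp(z, a1, lam1, a2, lam2):
    """alpha_1 exp(-z/lambda_1) + alpha_2 exp(-z/lambda_2); z and lambdas in Angstrom."""
    return a1 * np.exp(-z / lam1) + a2 * np.exp(-z / lam2)

def fit_attenuation(z_n, ratio):
    """Fit the depth dependence of the emission ratio to a double exponential
    and return the best-fit parameters with one-sigma uncertainties."""
    p0 = (0.5, 10.0, 0.5, 60.0)                              # rough initial guess
    popt, pcov = curve_fit(double_exp, z_n, ratio, p0=p0, bounds=(0, np.inf))
    return popt, np.sqrt(np.diag(pcov))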
In the framework of linear transport theory, the expression for the effective attenuation length (EAL), λ_a, that takes into account the combined influence of energy fluctuations (inelastic scattering) and momentum relaxation (deflections) is given by <cit.>: λ_a = λ_i λ_tr/[(λ_i + λ_tr) ν_0] = c λ_tr/ν_0, where the single scattering albedo is given by c=λ_i/(λ_i+λ_tr) and the quantity ν_0 is the positive root of the characteristic equation 2/c = ν_0 ln[(ν_0+1)/(ν_0-1)]. In the medium energy range, the TrMFP exceeds the IMFP by a significant factor, yielding a value for ν_0 very close to unity, and the EAL is slightly smaller than the IMFP, the difference being ∼10% or less. For small values of c ≪ 1, the particle will approximately move along a straight line and the attenuation is dominated by the IMFP (see Fig. <ref>a). For low energies, as the TrMFP assumes values of the order of the IMFP or less and the influence of momentum relaxation becomes more pronounced, the EAL and IMFP are essentially different. For values of c∼1, many deflections occur before an inelastic process can take place. Identifying ⟨ E_λ_1,2⟩ as the energies associated with the characteristic lengths of the two stages of the energy dissipation process, λ_1,2 are shown as green diamonds in Fig. <ref>b and are compared with the mean free path for inelastic scattering λ_i (IMFP <cit.>) and momentum relaxation λ_tr (TrMFP <cit.>) as well as the effective attenuation length λ_a according to Eq. <ref>. The (magenta) circles in Fig. <ref> were calculated with the MC technique and agree quantitatively with Eq. <ref>, which adequately accounts for the relative importance of energy fluctuations and momentum relaxation. The present results for λ_1 and λ_2 differ by more than a factor of two from the IMFP and agree significantly better with Eq. <ref>, underscoring the importance of adequately accounting for the combined influence of collective excitations and momentum relaxation. In summary, the energy dissipation process of fast electrons in PMMA begins with multiple plasmon excitation of the primary. Plasmon decay induces interband transitions acting as sources of SEs. The SE source energy distribution depends weakly on the energy of the primary since the shape of the DIIMFP is very similar for any projectile energy (above the plasmon resonance). The subsequent analysis identifies two groups in the SE cascade which correspond to different stages in the energy dissipation process. The associated characteristic lengths for electron beam attenuation have been determined using the quantitative model for medium energy transport to calculate the corresponding depth scale. Comparison of the characteristic lengths λ_1,2 with the universal curve, Eq. <ref>, suggests that the transport of low energy electrons can be described by the same physical laws as those governing light scattering in interplanetary nebulae, impressively demonstrating the scaling of physical laws over 26 orders of magnitude. There is consensus in the community working on attenuation of electron beams at medium energies that adopting elements of linear transport theory to compare experiment and theory constituted an essential step <cit.>. While for medium energies the influence of momentum relaxation leads to a rather minor correction of the order of ∼10%, it plays an essential role in the electron transport for low energies.
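Numerically, the characteristic equation has a single root ν_0 > 1 that can be bracketed and solved for directly, as in the following sketch, which then returns the EAL of the equation above; the parameter values in the example are illustrative only.

import numpy as np
from scipy.optimize import brentq

def effective_attenuation_length(lam_i, lam_tr):
    """EAL from one-speed linear transport theory: solve
    2/c = nu0 ln[(nu0+1)/(nu0-1)] for nu0 > 1 and return lambda_a = c lam_tr / nu0."""
    c = lam_i / (lam_i + lam_tr)                        # single-scattering albedo
    f = lambda nu: nu * np.log1p(2.0 / (nu - 1.0)) - 2.0 / c
    lo = 1.0 + np.exp(-2.0 / c)                         # f(lo) ~ ln(2) > 0
    nu0 = brentq(f, lo, 1.0e8)                          # f(nu -> infinity) -> 2 - 2/c < 0
    return c * lam_tr / nu0

# Medium-energy-like case (lam_tr >> lam_i): lambda_a stays close to the IMFP.
print(effective_attenuation_length(lam_i=20.0, lam_tr=200.0))
# Low-energy-like case (lam_tr ~ lam_i): lambda_a is strongly reduced.
print(effective_attenuation_length(lam_i=20.0, lam_tr=10.0))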
The reasonable agreement in Fig. <ref>b between the experimental attenuation lengths λ_1,2 and Eq. <ref> indeed suggests that the scientific debate on low energy electron attenuation should explore the merits of linear transport theory at the earliest stage possible. Acknowledgments The computational results have been achieved using the Vienna Scientific Cluster (VSC). TU Wien Bibliothek is acknowledged for financial support through its Open Access Funding Programme. Supplemental Material for "Energy dissipation of fast electrons in polymethylmetacrylate (PMMA): Towards a universal curve for electron beam attenuation in solids for energies between 0 eV and 100 keV" Wolfgang S.M. Werner, Florian Simperl, Felix Blödorn, Julian Brunner, Johannes Kero, Institut für Angewandte Physik, Technische Universität Wien, Wiedner Hauptstraße 8-10/E134, A-1040 Vienna, Austria Alessandra Bellissimo, Institut für Photonik, Technische Universität Wien, Gußhausstraße 27-29/E387, A-1040 Vienna, Austria Olga Ridzel, Theiss Research, 7411 Eads Ave., La Jolla, CA 92037-5037, USA. § EXPERIMENTAL A schematic illustration of the experimental setup is shown in Fig. <ref>: an electron gun provides a stable low-current electron beam incident on the surface, and electron pairs leaving the surface as a result are detected by a hemispherical analyser (HMA) and a time-of-flight analyser (TOF). The experiment is conducted in a UHV chamber with a pressure during the experiment not exceeding 2× 10^-10 mbar. The arrival times in either analyser are written to disk and coincidences are retrieved from the data retrospectively. The HMA is equipped with 5 channeltron detectors and is operated in the constant analyser energy mode with E_pass=200 eV (energy resolution 5 eV) for coincidence measurements and E_pass=20 eV (energy resolution 0.5 eV) for singles spectra. The electrons in the TOF analyser are detected by a stack of two multi-channel plates and a delay-line anode. The energy resolution of the TOF is a tenth of an eV at an energy of 1 eV and tens of eV at energies of a few hundred eV. The entrance aperture of the TOF is kept at a potential of +5 V to accelerate slow electrons towards it. The directions of incidence and of detection with the HMA both make an angle of 60^∘ with the surface normal, while the TOF axis is parallel to the surface normal. During the coincidence measurements, the Kimball Physics ELG-2 electron gun is operated with a continuous beam at a low current of ∼1 pA. Before a coincidence measurement, a pulsed electron beam is used to calibrate the time it takes an electron with a given energy to leave the sample and to reach one of the channeltrons in the HMA. This is repeated for each energy used later during the coincidence run. Within the experimental resolution, the two electrons in the pair are emitted simultaneously, since the duration of the emission process (of the order of femtoseconds) is orders of magnitude smaller than the net time resolution of our experiment, which amounts to a few nanoseconds.
For the coincidence measurement, we then use the calibrated flight times of electrons in the HMA to trace back the starting time of the pair, yielding the TOF flight time in spite of the use of a continuous electron beam. The above procedure is advantageous compared to pulsed beam experiments in that it allows one to use larger currents and gives rise to higher coincidence rates. With this setup, a histogram of arrival time differences (between electrons arriving in the HMA and TOF) exhibits a peak of true coincidences superimposed on a flat background of random coincidences, which is subtracted. When the current is increased, the background intensity increases quadratically, while the intensity in the peak increases linearly, proving that it is made up of true coincidences <cit.>. Coincidence measurements were conducted for three different primary energies E_0 - E_vac, i.e. 173 eV, 500 eV and 1000 eV. We use a 1 × 1 cm polymethylmethacrylate (C_5H_8O_2) sample with a thickness of 50 nm on a silicon substrate. The position of irradiation is changed every 24 hours by 1 mm to reduce surface charging. Under these conditions, each atom on the surface on average is hit by one primary electron or less during acquisition. The total acquisition time for each measurement amounted to about 1 month. The number of detected electrons in the HMA during the coincidence run is recorded as a "singles" spectrum and used in the analysis in Fig. 4 of the main text to determine the fraction of secondary electrons generated at a certain depth which are eventually emitted from the surface. In this way, spurious influences due to, e.g., drift in the beam current, the transmission function of the analyser, etc., are eliminated. § DATA INTERPRETATION We designate the scattered and ejected electron by the indices "s" and "e", while in the main text we merely distinguish between events where electrons are detected in analyser 1 and 2 and label the energy scales accordingly. This is strictly speaking correct and necessary because of the indistinguishability of electrons, but generally identifying electron 1 with the scattered (fast) electron and electron 2 with the ejected (slow) electron will be correct in most cases. The binding energy E_b of the bound electron before it is liberated in the collision with the primary is found by requiring that the energy loss of the primary electron, Δ E=E_0-E_s (where the index "0" indicates the primary electron), is used to liberate the bound electron from the solid, by overcoming the surface potential barrier U, and that it is ultimately ejected from the solid with an energy E_e above the vacuum level: Δ E=E_0-E_s=E_e+U-E_b, where the binding energy is counted from the top of the valence band and is negative. The surface potential barrier U is the sum of the band gap energy E_g and the electron affinity χ. The binding energy of the solid state electron before liberation follows from the above as E_b=-E_0+(E_e+E_s)+U, or, in other words, the spectrum along the energy sum axis in fact represents the binding energy spectrum of the solid state electrons. The double-differential (e,2e)-coincidence spectrum presented in Fig. 1(a) in the main text displays the intensity of time-correlated electron pairs given as a function of the energy loss of the fast electron and the energy of an emitted electron.
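As an illustration of how individual detected pairs are turned into the quantities discussed here, the following sketch maps a pair (E_s, E_e) onto energy loss and binding energy and subtracts the flat random-coincidence background from the arrival-time-difference histogram; the window values and array names are hypothetical and not those of the acquisition software.

import numpy as np

U = 5.5 + 4.5            # surface barrier U = E_g + chi for PMMA (eV)

def pair_kinematics(e0, e_s, e_e):
    """Energy loss and binding energy of a coincident pair:
    dE = E_0 - E_s and E_b = -E_0 + (E_e + E_s) + U (counted from the
    valence-band top, negative for bound states)."""
    return e0 - e_s, -e0 + (e_e + e_s) + U

def true_coincidences(dt_ns, peak_window=(-5.0, 5.0), bg_window=(20.0, 120.0)):
    """Histogram the HMA-TOF arrival-time differences (ns); the true-coincidence
    peak sits on a flat random background, estimated far from the peak and
    subtracted bin by bin."""
    hist, edges = np.histogram(dt_ns, bins=np.arange(-150.0, 150.0, 1.0))
    centers = 0.5 * (edges[:-1] + edges[1:])
    bg = hist[(centers > bg_window[0]) & (centers < bg_window[1])].mean()
    peak = hist[(centers > peak_window[0]) & (centers < peak_window[1])]
    return peak.sum() - bg * peak.size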
The energy loss directly results from the energy transfer of a primary electron that undergoes an inelastic collision (with Δ E=E_0 - E_s) in the sample, releasing it as a secondary electron with kinetic energy E_e=Δ E-U+E_b, either via a direct knock-on collision with a single solid state electron or after excitation and decay of a collective excitation, such as a plasmon. Hence, each pixel shown in the spectrum of Fig. 1a represents the intensity of one such correlated electron pair. The fact that each energy loss results in the liberation of exactly one solid-state electron is one of the central ideas of the present work, since it allows one to construct the depth dependent attenuation curves shown in Fig. 4a in the main text. Only in this case, and only without depth dependent attenuation of the outgoing beam, should the reduced partial intensities in the coincidence loss spectrum follow the green curve, γ_n^coi.=n, in Fig. 3c (in the main text). This issue has been discussed previously in several works <cit.> and is clarified here for completeness in Fig. <ref>, which shows the details around the plasmon feature in Fig. 1a (in the main text). The yellow curve is defined by E_e=Δ E-E_g-χ and represents the top of the valence band. The red curve indicates the binding energy corresponding to the bottom of the valence band according to the value of Δ E_v=15.8 eV found in Ref. <cit.>. If more than one electron were released for a given energy loss Δ E (e.g. in the plasmon feature), then, on any realistic model for the energy sharing between the released electrons, one would expect intensity at all energies E_e<Δ E-U for the considered energy loss Δ E. This is clearly not the case. Rather, the range of binding energies in the plasmon feature matches the width of the valence band quite well. The above applies to the single scattering region (Δ E ≈ 10–30 eV). For larger energy losses, consecutive (multiple) plasmon features overlap, leading to intensity at any energy E_e. Coherent multiple plasmon excitation and decay, in which k plasmons are excited and the total energy loss Δ E_tot=kħω_p is transferred to a single ejected electron in a coherent process, would give rise to intensity near (E_s,E_e)=(E_0-kħω_p,E_0-kħω_p-U). This can neither be observed in Fig. 1a (of the main text) nor has it been observed in previous works <cit.>, which include measurements on single crystals <cit.>. Rather, at energy losses corresponding to multiple plasmon excitation, a peak is observed with its maximum always at E_e=ħω_p-U∼ 11 eV (see Fig. 1b in the main text). Thus, multiple plasmon excitation is governed by a Markov-type process and each energy loss leads to the liberation of a single solid-state electron. § PHYSICAL MODEL FOR ELECTRON SCATTERING The findings presented in the main text and discussed in the following section exclusively pertain to polycrystalline or amorphous solids. In this case, the off-diagonal elements of the density matrix, responsible for quantum mechanical interference effects, can be neglected. This leads to a Boltzmann-type kinetic equation describing electron transport, which was implicitly used throughout the present work <cit.>. However, a quantum mechanical approach was used to calculate all interaction parameters, as described below. Assuming that spin can be neglected, the electron transport in solids depends on processes that alter the direction and energy of electrons propagating in solids, described in terms of elastic and inelastic scattering.
Elastic scattering is defined by the interaction of an electron with the screened Coulomb potential of the nucleus, leading to a deflection and a small recoil energy loss which is negligible for the present work by virtue of the large mass difference between the scattering partners. Inelastic scattering describes the process of electron interaction with solid state electrons, leading to an appreciable energy loss and a small (but in the present case non-negligible) momentum transfer. The differential inverse inelastic mean free path (DIIMFP) W_in(ω,E) describes the probability for an electron with energy E to lose the energy ħω in a single inelastic collision. Based on the model for non-conductors by Tosatti and Parravicini <cit.>, several authors <cit.> express the DIIMFP for semiconductors and insulators in terms of the dielectric function ϵ(ω,q) as W_in(ω,E) = 1/[π(E - E_g)] ∫_q_-^q_+ Im[-1/ϵ(ω,q)] dq/q, where E is the incident energy and the boundaries for the momentum transfer q are given as q_± = √(2(E-E_g))±√(2(E-E_g-ω)). Here and below, atomic units are used (ħ=m_e=e≡ 1). The IMFP is obtained by integrating the DIIMFP over all allowed energy losses <cit.>: λ_in(E)^-1= 1/[π (E-E_g)] ∫_E_g^E-(Δ E_v+E_g) dω∫_q_-^q_+ Im[-1/ϵ(ω,q)] dq/q. The lower integration boundary in Eq. <ref> is defined by the smallest possible loss, which corresponds to a HOMO-LUMO transition, i.e. ω_min=E_g. The upper integration boundary is defined by the lowest available state for the primary electron after the collision, i.e., the bottom of the conduction band: ω_max=E-(Δ E_v+E_g). For low energy electrons, deflections due to inelastic collisions become important and we find that the classical approach leads to unphysical spectral shapes. Hence, we rely on the formula given by Ding and Shimizu <cit.> for the distribution of scattering angles associated with a given energy loss ω: d^2λ_in^-1/(dΩ dω) = 1/(π^2 q^2) √(1-ω/(E - E_g)) Im[-1/ϵ(ω,q)], where q^2/2 = 2(E-E_g - ω) - 2√((E-E_g)(E - E_g - ω))cosθ and θ is the polar scattering angle. To evaluate the above quantities, we used the dielectric function given by Ridzel et al. <cit.>, employing a quadratic dispersion. Deflections occurring during elastic scattering were modelled using the values of the differential Mott cross section for elastic scattering, dσ(θ)/dΩ, provided by the ELSEPA package <cit.>. The elastic mean free path λ_e is obtained by integrating the cross section over the unit sphere. The transport mean free path λ_tr in essence gives the characteristic length for momentum transfer <cit.>: 1/λ_tr=N_a ∫_4π dσ(θ)/dΩ (1-cosθ) sinθ dθ dϕ, where N_a is the density of scattering centers. For medium energies, above a few hundred eV, the inelastic interaction characteristics have been extensively tested, mainly by comparison of Monte Carlo model calculations and experiments <cit.>. The uncertainty in IMFP values for medium energies is nowadays believed to be better than 10%, while for lower energies the uncertainty is essentially unknown. The same can be said to be true for the elastic interaction characteristics, such as the transport mean free path. Here, for lower energies, the commonly made assumptions that exchange and polarisation effects are negligible make the resulting quantities less reliable.
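A minimal numerical sketch of these integrations is given below; it replaces the fitted dielectric function used in the text with a single-pole Drude-type loss function with invented width and dispersion parameters, so the numbers it produces are purely illustrative.

import numpy as np
from scipy.integrate import trapezoid

HARTREE = 27.2114        # eV per Hartree (atomic units as in the text)

def elf_model(omega, q, wp=21.0 / HARTREE, gamma=5.0 / HARTREE, alpha=0.5):
    """Stand-in energy-loss function Im[-1/eps(omega, q)]: one Drude pole with
    quadratic dispersion wq = wp + alpha q^2/2 (illustrative parameters only)."""
    wq = wp + alpha * q**2 / 2.0
    return gamma * omega * wq**2 / ((omega**2 - wq**2)**2 + (gamma * omega)**2)

def inverse_imfp(E, Eg=5.5 / HARTREE, dEv=15.8 / HARTREE, n_w=300, n_q=300):
    """1/lambda_in = 1/(pi (E-Eg)) Int_{Eg}^{E-(dEv+Eg)} dw Int_{q-}^{q+} dq/q Im[-1/eps]."""
    w = np.linspace(Eg, E - (dEv + Eg), n_w)
    dw = w[1] - w[0]
    total = 0.0
    for wi in w[1:]:
        qm = np.sqrt(2 * (E - Eg)) - np.sqrt(2 * (E - Eg - wi))
        qp = np.sqrt(2 * (E - Eg)) + np.sqrt(2 * (E - Eg - wi))
        q = np.linspace(qm, qp, n_q)
        total += trapezoid(elf_model(wi, q) / q, q) * dw
    return total / (np.pi * (E - Eg))

def inverse_trmfp(dcs, theta, n_a):
    """1/lambda_tr = N_a Int (1-cos t) sin t (dsigma/dOmega) dt dphi, from a
    tabulated elastic DCS (e.g. ELSEPA output) on a polar-angle grid."""
    return n_a * 2.0 * np.pi * trapezoid(dcs * (1.0 - np.cos(theta)) * np.sin(theta), theta)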
Finally, an electron reaches vacuum only if it can overcome the surface potential barrier, represented by a step potential in the one-dimensional Schrödinger equation. Two cases need to be distinguished for the transmission function T(E,ϕ): either the electron crosses the surface and is refracted, or it is internally reflected back: T(E,ϕ) = 4√(1-(E_g+χ)/(E cos^2ϕ)) / [1+√(1-(E_g+χ)/(E cos^2ϕ))]^2 if E cos^2ϕ > E_g+χ, and T(E,ϕ) = 0 if E cos^2ϕ≤ E_g+χ, where ϕ is the angle of the electron relative to the surface normal and the barrier height is taken to be the electron affinity χ, as illustrated in Fig. 2(a) in the main text. § MONTE CARLO SIMULATION In general, MC simulations allow one to approximate the solution to the complicated multidimensional transport equations by statistical sampling. The present algorithm essentially follows algorithms which can be found in the literature (e.g. <cit.>). The electron trajectory is assumed to consist of piece-wise straight line segments in between scattering processes. The step lengths are sampled analytically using the inverse cumulative distribution method applied to an exponential distribution: s=-λ_tot ln(ξ), where λ_tot=1/(λ_in^-1+λ_el^-1), λ_in and λ_el are the inelastic and elastic mean free path, respectively, and ξ is a uniform random number on the interval (0,1]. After travelling a step, the position is updated and another random number ξ is used to decide whether the scattering process is elastic (ξ < λ_tot/λ_el) or inelastic (ξ > λ_tot/λ_el = 1-λ_tot/λ_in). To generate stochastic values for the energy loss, scattering angle, etc., the corresponding distributions, discussed in the previous section, are sampled using the accept-and-reject method. The slowing-down process during the electron transport leads to a change of scattering characteristics with the projectile's energy. After each inelastic process the relevant parameters are updated accordingly using extensive lookup tables. Each inelastic scattering process leads to an electronic transition from an occupied state in the valence band with binding energy E_b to an available unoccupied state in the conduction band, as described in the main text. We assume that after each inelastic collision, the energy loss and change in momentum of the primary are transferred to a single secondary electron in the valence band (assuming a width of the valence band Δ E_v=15.8 eV <cit.>), either in a direct knock-on collision with a solid state electron, or after decay of a collective excitation, e.g., a plasmon. At the solid-vacuum interface, the electron energy (E_sol = E_vac + χ) and direction (sinθ_vac E_vac = sinθ_sol E_sol) are updated as explained in the previous section. If the maximum angle defining the so-called "escape cone" is exceeded, the electron suffers a total internal reflection instead of escaping to vacuum. The excited secondary electron can reach its final state in vacuum if the energy loss of the primary electron is large enough to overcome the surface barrier. Whenever a SE is generated, all the information pertinent to this electron is stored on a stack until its trajectory is terminated, i.e. until the electron is either detected or abandoned (surpassing the maximum simulation depth, reaching the minimum escape energy in the solid, or missing the detector in vacuum). The SE cascade is simulated by keeping track of the additional energy losses of the previous-generation secondary electrons. The algorithm terminates when the desired number of trajectories has been simulated.
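The core of the trajectory loop can be summarized by the following simplified sketch, which tracks only the depth coordinate, uses isotropic elastic deflections, and leaves the sampling of energy losses to a user-supplied callable; it is not the production code, but it follows the same sequence of free-path sampling, collision-type selection and surface transmission described above.

import numpy as np

rng = np.random.default_rng(0)

def sample_step(lam_in, lam_el):
    """Exponential free path with lam_tot = 1/(1/lam_in + 1/lam_el)."""
    lam_tot = 1.0 / (1.0 / lam_in + 1.0 / lam_el)
    return -lam_tot * np.log(rng.random()), lam_tot

def escapes(E, cos_phi, barrier):
    """Quantum transmission through the surface step; False outside the escape cone."""
    e_perp = E * cos_phi**2
    if e_perp <= barrier:
        return False
    x = np.sqrt(1.0 - barrier / e_perp)
    return rng.random() < 4.0 * x / (1.0 + x)**2

def primary_trajectory(E0, lam_in, lam_el, sample_loss, barrier, max_depth=500.0):
    """Return the depths of successive inelastic events (= SE creation depths)
    for one primary electron; lam_in, lam_el and sample_loss are callables of E."""
    z, cos_t, E, depths = 0.0, 1.0, E0, []
    while 0.0 <= z <= max_depth and E > barrier:
        s, lam_tot = sample_step(lam_in(E), lam_el(E))
        z += s * cos_t
        if z < 0.0:                                  # electron reaches the surface
            if escapes(E, abs(cos_t), barrier):
                break
            z, cos_t = 0.0, abs(cos_t)               # total internal reflection
            continue
        if rng.random() < lam_tot / lam_el(E):       # elastic: deflect only
            cos_t = 2.0 * rng.random() - 1.0
        else:                                        # inelastic: lose energy, create one SE
            E -= sample_loss(E)
            depths.append(z)
    return depths

Averaging the n-th entry of depths over many such trajectories gives an estimate of the mean creation depth ⟨z_n⟩ used in the main text.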
The above algorithm was used in the present work for three main goals: (1.) to calculate the average depth ⟨ z_n ⟩ at which n-fold inelastic scattering (i.e. creation of n secondary electrons) takes place (inset of Fig. 4a in the main text); (2.) simulation of the singles and coincidence spectra (results displayed in Fig. 4b in the main text); and (3.) calculation of the energy corresponding to the groups of electrons with attenuation length λ_1,2 (green diamonds in Fig. 5b in the main text). We trust the results for ⟨ z_n ⟩ to be realistic within 10%, since these model calculations only concern the transport of the primary electron, which is believed to be quantitatively understood. To simulate the coincidence data, for each escaping electron that reaches the detector, its history is stored in the detector stack. After finishing a given trajectory, all N detected electrons are collected into all possible pairs (N(N-1)/2 combinations), which are subsequently added as a contribution to a two-dimensional histogram representing the double-differential coincidence spectrum. Since this involves both the penetration of the primary into the surface as well as the generation and emission of secondaries, for which the transport parameters are far less reliable, we cannot make a realistic error estimate. It should nonetheless be noted that an attempt to simulate the data in Fig. 4 in the main text using a classical description of deflection angles in inelastic collisions gave results which were qualitatively different from experiment. Concerning the calculation of the energies ⟨ E_λ_1,λ_2⟩, one can say that the error bars shown in Fig. 5 of the main text (which are difficult to discern since they are about the same size as the green diamonds) are believed to give a realistic estimate of the uncertainty. They were calculated from the width of the resulting distribution of energies E_λ_1,λ_2 and amounted to about Δ(E_λ_1,λ_2-E_vac)∼ 1 eV in both cases. | http://arxiv.org/abs/2311.16046v1 | {
"authors": [
"Wolfgang S. M. Werner",
"Florian Simperl",
"Felix Bloedorn",
"Julian Brunner",
"Johannes Kero",
"Alessandra Bellissimo",
"Olga Ridzel"
],
"categories": [
"cond-mat.mtrl-sci",
"physics.app-ph"
],
"primary_category": "cond-mat.mtrl-sci",
"published": "20231127181131",
"title": "Energy Dissipation of Fast Electrons in Polymethylmetacrylate (PMMA): Towards a Universal Curve for Electron Beam Attenuation in Solids for Energies between ~0 eV and 100 keV"
} |
Istituto Nazionale di Ricerca Metrologica, Strada delle Cacce 91, I-10135 Torino, Italy University of Vienna, Faculty of Physics, Vienna Center for Quantum Science and Technology (VCQ), Boltzmanngasse 5, 1090 Vienna, AustriaIstituto Nazionale di Ricerca Metrologica, Strada delle Cacce 91, I-10135 Torino, [email protected] Istituto Nazionale di Ricerca Metrologica, Strada delle Cacce 91, I-10135 Torino, [email protected] Dipartimento di Fisica “Aldo Pontremoli”, Università degli Studi di Milano, via Celoria 16, I-20133 Milano, Italy INFN, Sezione di Milano, I-20133 Milano, [email protected] Istituto Nazionale di Ricerca Metrologica, Strada delle Cacce 91, I-10135 Torino, Italy We analyze the generation of spin-squeezed states by coupling three-level atoms to an optical cavity and continuously measuring the cavity transmission in order to monitor the evolution of the atomic ensemble. Using analytical treatment and microscopic simulations of the dynamics, we show that one can achieve significant spin squeezing even without the continuous feedback that is proposed in optimal approaches. In the adiabatic cavity removal approximation and large number of atoms N limit, we find the scaling exponents N^-2/3 for spin squeezing and N^-1/3 for the corresponding protocol duration, which are crucially impacted by the collective Bloch sphere curvature. With full simulations, we characterize how spin-squeezing generation depends on the system parameters and departs from the bad cavity regime, by gradually mixing with cavity-filling dynamics until metrological advantage is lost. Finally, we discuss the relevance of this spin-squeezing protocol to state-of-the-art optical clocks. Analysis of spin-squeezing generation in cavity-coupled atomic ensembles with continuous measurements G. Bertaina January 14, 2024 =====================================================================================================Keywords Spin squeezing, Continuous measurements, atomic clocks, cavity quantum electrodynamics§ INTRODUCTION Quantum sensors based on atomic ensembles, such as atomic clocks, gyroscopes, magnetometers, etc., have nowadays reached and surpassed their classical counterparts. Their standard quantum limit due to the measurement noise (quantum projection noise <cit.>) determines the optimal precision obtainable using uncorrelated atoms. It can be surpassed by a squeezing factor ξ^2<1, by introducing quantum correlations <cit.>. The simplest entangled state offering metrological gain is the spin-squeezed state (SSS) <cit.>. Over the past decade, SSSs have been demonstrated in several systems <cit.>, including interacting Bose-Einstein condensates <cit.>, ions <cit.>, and neutral atomic ensembles. Among different techniques, SSSs have been produced by quantum non-demolition (QND) measurement <cit.>, collective spin ensembles with cavity-mediated interactions <cit.>, and Rydberg coupling <cit.>. In neutral atoms, cavity-aided collective spin measurements enabled up to 20 of metrologically useful spin squeezing <cit.> involving transitions in the radio frequency (RF) domain, i.e. 5-10 orders of magnitude smaller than optical frequencies where the best atomic clocks currently work <cit.>. Recently, proof-of-principle experiments employing cavity-aided measurements have achieved spin squeezing on an optical transition <cit.> and improved clock performances of a state-of-the-art optical clock <cit.>. 
Measurement protocols based on continuous monitoring have been extensively studied for quantum state engineering purposes <cit.>, leading to the outstanding experimental results observed in Refs. <cit.> for the cooling of a quantum mechanical oscillator towards its quantum ground state. In particular much theoretical effort has been devoted to the exploitation of this kind of protocols for the generation of metrologically useful quantum states, such as squeezed states of quantum harmonic oscillators or of spin-squeezed states for atomic ensembles <cit.>. The physical intuition behind these protocols is the following: by continuously monitoring a particular observable of the quantum system, for example a quadrature operator for a quantum harmonic oscillator, or a spin operator for an atomic ensemble, the variance of such operators will decrease reaching eventually values below the so-called standard quantum limit, fixed by the fluctuations of the corresponding coherent (classical) states.Continuous monitoring of the collective spin operator of an atomic system can be achieved by engineering a dispersive coupling between the atoms and a cavity field driven by an external laser. By performing a continuous homodyne detection on the cavity output, one is indeed implementing a quantum non-demolition (QND) measurement of the spin operator <cit.>. In this work we will describe in more detail, employing both analytical treatment and full cavity quantum electrodynamics simulations, how and under which assumptions this kind of interaction and consequent dynamics can be achieved in specific atomic ensembles, with a particular attention to the application on future optical clocks.One of the major questions when devising spin-squeezing protocols is determining the scaling exponent of the spin-squeezing parameter for large number of particles ξ^2∝ N^-α. While continuous feedback protocols often reach Heisenberg scaling α=1 <cit.>, the celebrated one-axis twisting Hamiltonian (OAT) <cit.> realizes α=2/3. Reduction from Heisenberg scaling in spin systems is often due to the curvature of the collective Bloch sphere, which causes the backaction of the squeezing operation to reduce contrast. A relevant experimental parameter to be considered is the collective state preparation time, which in the case of the atom-cavity coupled system coincides with the cavity interaction time t. This time must be optimized in order to reduce atom-cavity scattering and decoherence, and to minimize aliasing noise due to the added dead-time in the atomic sensor <cit.>.The article is organized as follows. In Sec. <ref> we introduce in detail the considered model of continuously measured cavity-coupled atoms, and describe the master equations used to study its dynamics. In Sec. <ref> we discuss the analytical treatment of the cavity-removal regime and the results of our full simulations, concerning the optimal spin squeezing, the time at which this is expected, and their scaling with atom number, which is impacted by the interplay between the absence of continuous feedback, Bloch sphere curvature, and atom-cavity coupling. In Sec. <ref> we discuss the relevance of our results for optical clocks, and in Sec. <ref> we draw our conclusions. The Appendices detail the adiabatic elimination of the atomic excited state, the tangential spin-squeezing parameter evaluation, the analytical derivations, and the computational details concerning our simulations.§ MODEL AND METHODS The considered system, as schematically depicted in Fig. 
<ref>, is the simplest model of a cavity-enhanced atomic optical clock: an ensemble of N three-level uncorrelated atoms placed in a driven-dissipative optical cavity, which mediates an effective interaction between them. We assume that a deep optical lattice freezes the translational degrees of freedom of the atoms (Lamb-Dicke regime), so that only the internal states are relevant. The interaction between an atomic ensemble and a light mode in a high-finesse optical cavity has been intensively studied for the generation of both atom-light and atom-atom entanglement <cit.>. We focus on the generation of the input (spin-squeezed) collective state of a Ramsey protocol, deferring to future work the analysis of the entire preparation/interrogation cycle, including the role of clock laser noise and dead time in a closed-loop optical clock <cit.>. Throughout the paper we set ħ=1, meaning that we measure energy in units of angular frequency. The clock states are labeled ↓, ↑, and the clock frequency is ω_0. For the clock-states subspace, we use the standard pseudo-spin-1/2 representation: ŝ_x = (|↑⟩⟨↓| + |↓⟩⟨↑|)/2, ŝ_y = i(|↓⟩⟨↑| - |↑⟩⟨↓|)/2, ŝ_z = (|↑⟩⟨↑| - |↓⟩⟨↓|)/2, obeying the algebra [ŝ_j, ŝ_k]=iϵ_jklŝ_l. The global atomic ensemble is characterized by a collective spin vector Ĵ = ∑_i^N ŝ^(i), and Ĵ_z = (N̂_↑ - N̂_↓)/2 corresponds in particular to the difference of population of the two clock states. We initially focus on the Λ-level configuration (Fig. <ref>b) interacting with a single cavity mode ĉ with balanced couplings g_↑=g_↓≡ g and symmetric cavity detunings Δ_↑=-Δ_↓≡Δ=ω_0/2. We thus consider the quantized Stark-shift Hamiltonian Ĥ_a = ∑_i (g^2/Δ) ĉ^†ĉ (|↑⟩_i ⟨↑| - |↓⟩_i ⟨↓|) = (2 g^2/Δ) n̂ Ĵ_z in the rotating frame of the bare atomic levels and cavity mode, whose derivation from the cavity-coupled three-level Hamiltonian is reported in App. <ref>. Here n̂=ĉ^†ĉ is the cavity photon number operator. Having removed the atomic auxiliary excited state e, the population difference of the clock states remains constant, as the operator Ĵ_z commutes with the effective Hamiltonian. This dispersive interaction thus provides a means of QND measurement of Ĵ_z. The fundamental requirement for performing the above excited-state adiabatic elimination is that the detuning (and thus the clock frequency ω_0) be much larger than any other frequency, so that the transitions to and from the excited state happen on a much smaller time-scale than any other process. This also translates into a requirement regarding the cavity dynamics: from the point of view of the atomic ensemble, the interaction factor 2 g^2 n̂/Δ corresponds to a frequency shift, which must, for consistency, be much smaller than Δ. This corresponds to the requirement that the average number of photons satisfies ⟨n̂⟩≪(Δ/g)^2. Up to now we have described the dynamics of the atomic component and its interaction with the cavity mode. The internal dynamics of the cavity is given by a usual single-mode bosonic Hamiltonian (neglecting the zero-point energy) and an additional driving term, which, in the laboratory reference frame, is given by Ĥ_c^lab = ω_c n̂ + ε(ĉ e^iω_D t + ĉ^† e^-iω_D t), where ω_D is the driving laser frequency and ε is the driving amplitude. We consider a loss term characterized by a transmission rate κ corresponding to photon decay to the external environment through the cavity walls. The driving amplitude and transmission rate are not independent, but related by ε=√(κ P/ω_D), where P is the experimentally widely tunable pumping power.
When working in the cavity frame of reference, the cavity Hamiltonian is Ĥ_c = ε(ĉ e^-iδ_D t + ĉ^† e^iδ_D t), where δ_D = ω_c - ω_D is the detuning between the cavity mode and the driving laser. In this work, we focus on the case of a resonant driving laser, δ_D=0, where there is no explicit time dependence. This regime enhances the feasibility of measurement-induced spin-squeezing generation, while the nearly-detuned regime has also been considered for a deterministic generation of induced-interaction squeezing, which has often been dubbed "coherent cavity feedback" <cit.> (not to be confused with the feedback used in some continuous measurement protocols). In the absence of coupling to the atomic transitions, the number of photons occupying the cavity in the steady state at large times would stabilize at n_0 = (2ε/κ)^2 = 4 P/(κω_D). Therefore the total Hamiltonian of the atom-cavity system that we consider here is Ĥ = Ĥ_a + Ĥ_c. The main figure of merit of the considered protocol is the spin-squeezing parameter. In a general sense, squeezed states have reduced variance for a certain observable, at the cost of increased variance for a non-commuting observable <cit.>. Following the definition by Kitagawa and Ueda <cit.>, N two-level atoms described by a collective spin with maximum magnitude J = N/2 are in a spin-squeezed state (SSS) if the variance of one spin component Ĵ_⊥, normal to the mean spin vector ⟨Ĵ⟩, is smaller than the variance of a coherent spin state (CSS), Δ^2 Ĵ_⊥ < J/2. To be metrologically relevant, such variance is weighted by the contrast 𝒞=|⟨Ĵ⟩|^2/J^2, yielding Wineland's spin-squeezing parameter <cit.>: ξ^2 = min_⊥ [Δ^2Ĵ_⊥/(J𝒞/2)]. The spin-squeezing parameter of a CSS is ξ^2_CSS= 1, corresponding to the standard quantum limit (SQL). This represents the best scaling available using uncorrelated atoms. Metrologically useful spin squeezing corresponds to ξ^2 < 1. §.§ Continuous measurement dynamics The main idea in the scheme that we analyze to generate spin squeezing is that, since the dynamics described by the Hamiltonian in Eq. (<ref>) couples the collective spin z-component directly to the bosonic field, we may obtain information on that particular observable from measurements on the photonic degrees of freedom, without having to directly perturb the atomic ensemble. A QND measurement does not in fact perform a destructive projective measurement on the system itself, but instead acts on the environment coupled to the considered system <cit.>. In particular, through continuous homodyne sensing of the transmitted photonic field √(κ)ĉ <cit.>, one can detect the phase shift proportional to the atomic population difference, thus obtaining information regarding Ĵ_z <cit.>. The dynamics of the internal system is described by a stochastic master equation (SME) for the density matrix conditioned on the measurement outcome, ρ̂_c, which contains a decoherence term, due to the interaction with the external environment, and a stochastic term which instead describes the non-linear evolution of the system due to the performed measurement <cit.>: dρ̂_c = -i[Ĥ,ρ̂_c] dt + κ 𝒟[ĉ]ρ̂_c dt + √(ηκ) ℋ[ĉe^-iφ] ρ̂_c dW_t, I(t) dt = √(ηκ)⟨ĉe^-iφ+ĉ^† e^iφ⟩_c dt + dW_t, where the notation ⟨Â⟩_c indicates the expectation value of operator  with the conditional density matrix ρ̂_c and we have introduced the photocurrent measured at each time step, I(t), the Lindbladian superoperator 𝒟[A]∙ = A∙ A^† - 1/2{ A^† A,∙}, and the non-linear superoperator ℋ[A]∙ = A∙+∙ A^† - Tr[∙( A + A^†)]∙.
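As an illustration of how this conditional dynamics can be integrated in practice, the following minimal sketch builds the operators introduced above and calls the QuTiP stochastic solver (QuTiP 4-style interface, illustrative parameter values, unit efficiency); it is not the production code used for the figures.

import numpy as np
import qutip as qt

# Illustrative parameters (not those of the figures)
N, j = 8, 4.0                    # atom number and collective spin J = N/2
g, Delta, kappa, eps = 1.0, 50.0, 20.0, 5.0
n_fock = 15                      # photon-space truncation

a  = qt.tensor(qt.destroy(n_fock), qt.qeye(int(2 * j + 1)))
Jz = qt.tensor(qt.qeye(n_fock), qt.jmat(j, 'z'))
H  = (2 * g**2 / Delta) * a.dag() * a * Jz + eps * (a + a.dag())

# Initial state: empty cavity tensored with a coherent spin state along +x
psi0 = qt.tensor(qt.basis(n_fock, 0), qt.spin_coherent(j, np.pi / 2, 0.0))

times = np.linspace(0.0, 2.0, 400)
res = qt.smesolve(H, psi0, times,
                  sc_ops=[np.sqrt(kappa) * a],      # monitored transmission channel
                  e_ops=[Jz, a.dag() * a],
                  ntraj=1, nsubsteps=200,
                  method='homodyne', store_measurement=True)
# res.expect: conditional <J_z>_c and <n>_c; res.measurement: the photocurrent I(t)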
The photocurrent is biased by system observables, but its measurement noise is modeled by the Wiener increment dW_t. This is a stochastic variable following a Gaussian distribution with mean E[dW_t]=0 and variance E[dW^2_t] = dt. Its characteristic property is that, in the infinitesimal time-step limit, its square is not random but deterministically dW^2_t ≡ dt. The parameter φ represents the phase of the local oscillator to which the photons exiting the cavity are coupled in order to perform the homodyne sensing of the bosonic field. In particular, we choose φ = 0, which corresponds to a measurement of the field quadrature (ĉ+ĉ^†)/√(2) = x̂ in the standard basis. Here, the only source of decoherence is given by the photon losses through the cavity, ignoring for now other sources of noise like atomic decay. We also assume ideal measurements with efficiency η = 1, where all photons which leaked outside the cavity are successfully detected. Notice that in the opposite case of null efficiency, Eq. (<ref>) reduces to a Lindblad master equation where the only effect of the cavity transmission is to introduce dissipation. The master equation defined in Eq. (<ref>) can be used to determine the evolution of the expectation value of relevant quantities, for example ⟨Ĵ_z⟩_c. From the definition of the expectation value as ⟨Ĵ_z⟩ = Tr[ρ̂ Ĵ_z], we get the conditional evolution equation d⟨ Ĵ_z ⟩_c = √(2κ) [⟨Ĵ_z x̂⟩_c - ⟨ Ĵ_z⟩_c ⟨x̂⟩_c] dW_t, with I(t) dt = √(2κ)⟨x̂⟩_c dt + dW_t. Since Ĵ_z commutes with the Hamiltonian, as expected, the evolution of its average value is determined only by the stochastic increment that depends on the measurement outcome. At first it may seem that the evolution of the expectation value, and thus of the spin-squeezing parameter, may be obtained solely from the photocurrent measurements and the evolution of the measurement quadrature. However, even though the state density matrix does not appear directly in Eq. (<ref>), it is still necessary to determine the conditional increment. By definition, the Wiener increment dW_t can be found as the difference between the actual measured photocurrent I(t) and its expected value at each time step. At any given time, it is thus necessary to know the full conditional density matrix in order to determine the value of this random increment, and it is not possible to determine exactly the conditional evolution of the expectation value of relevant observables without also knowing the conditional trajectory of the full state. However, as we also remark in the following, it is possible in certain scenarios to approximately determine its value from the photocurrent and also cancel this stochastic contribution via real-time feedback. §.§ Adiabatic cavity removal As shown in Eq. (<ref>), the cavity interacts with the atomic ensemble with an effective frequency shift 2 (g^2/Δ)⟨ Ĵ_z ⟩≃ (g^2/Δ) N ≡δω. The other process in which the cavity photons are involved is the cavity loss, which happens at a rate κ and corresponds to the information acquisition rate when η=1. When this rate is much larger than the effective shift per photon, κ≫δω, the system is said to be in the so-called "bad cavity regime", where the information on the atoms encoded in the photon leaving the cavity is transferred directly to the detector (when the efficiency is maximal), as if the measurements were performed directly on the spin system. The optical cavity thus represents a "medium" through which information is transferred and, much like the excited state in Eq. (<ref>), it can be adiabatically removed <cit.>.
Following the same scheme, one obtains the effective dynamics described by the following SME: dρ̂_c = κ̃ 𝒟[Ĵ_z]ρ̂_c dt + √(κ̃) ℋ[Ĵ_z] ρ̂_c dW_t, I(t) dt = 2 √(κ̃)⟨Ĵ_z ⟩_c dt + dW_t, where the density matrix now refers only to the atomic Hilbert space, and the effective transmission rate is <cit.>: κ̃ = 4 (2g^2/Δ)^2 n_0/κ. Here, since the photons are no longer dynamical and their frequency shift is negligible, we have assumed that their number is equal to the stationary one in the noninteracting cavity, Eq. (<ref>). We notice how there is no Hamiltonian term anymore, apart from a constant Stark shift that has been included in the reference frame, as the cavity-atom interaction is directly embodied by the dissipative and measurement terms of the effective SME. The generation of spin squeezing under this evolution has been investigated in great detail in <cit.>. As for Eqs. (<ref>), also in this case one observes a stochastic evolution for ⟨Ĵ_z⟩, given by the equation d⟨Ĵ_z⟩_c = 2√(κ̃) (Δ^2Ĵ_z)_c dW_t. This may be corrected exactly via Markovian feedback by solving the full trajectory of the conditional state; the corresponding feedback scheme leads to an unconditional Heisenberg-limited spin squeezing. It was also shown that an approximate feedback, depending only on the photocurrent results and not on the full trajectory, allows for deriving the following approximate analytical solution valid for short-to-intermediate times: ξ^2_F = e^κ̃t/(1+Nκ̃t), from which one obtains a minimum spin-squeezing parameter following Heisenberg scaling, ξ^2_F,m = e/N, reached at the optimal time t_F,m = 1/κ̃. We now focus on the assumptions needed to perform the adiabatic cavity removal: first of all, the cavity is assumed to be in the stationary regime, so that photons follow no dynamics other than the decay into free space; given a weak interaction with the atomic ensemble, this requirement relates their number only to the parameters ε and κ, as in Eq. (<ref>). Secondly, the "bad cavity" requirement that κ must be the highest frequency (besides Δ) imposes a further condition besides Eq. (<ref>), namely that g^2 ⟨n̂⟩/Δ≪κ. This provides a tighter bound on the maximum average number of photons expected in the cavity than the one expressed in Eq. (<ref>): ⟨n̂⟩≪κΔ/g^2 ≪ (Δ/g)^2. §.§ Simulation of system dynamics We solve the SMEs (<ref>) and (<ref>) using the QuTiP library <cit.>. A very significant speed-up is obtained by considering only the atomic Dicke sector with maximum eigenvalue J(J+1) of Ĵ^2, with J=N/2 <cit.>. We are allowed to do so because we do not consider atomic depolarization and we choose an initial pure state in this subspace, namely a spin-coherent state with J=N/2. In the case of Eq. (<ref>), the initial atomic state is in a tensor product with an empty cavity. Since we focus on unit efficiency, this also allows us to reduce our simulations to the corresponding stochastic Schrödinger equations (SSE) <cit.>, with a tremendous reduction of memory and computational requirements. See Appendix <ref> for details on the simulation setup. The metrological spin-squeezing parameter (<ref>) is not simply proportional to the spin variance along z, but is estimated at any given time by determining the minimal variance of the collective spin components which are perpendicular to the instantaneous mean spin vector.
This corresponds to the smallest eigenvalue (normalized by the contrast) of the covariance matrix:cov_i,j(Ĵ) = 1/2⟨Ĵ_iĴ_j+Ĵ_jĴ_i⟩ - ⟨Ĵ_i⟩⟨Ĵ_j⟩where i,j∈{1,2} and Ĵ_i=Ĵ·n_i, with n_i⊥⟨Ĵ⟩_c. The polar and azimuthal angles are defined by⟨Ĵ⟩_c bycosθ=⟨Ĵ_z⟩_c/|⟨Ĵ⟩_c| ,tanϕ = ⟨Ĵ_y⟩_c/⟨Ĵ_x⟩_c .Details on this evaluation can be found in Appendix <ref>.§ RESULTS§.§ Analytical results in the cavity-removal approximation In this section, we determine an analytical expression for the conditional and average spin-squeezing parameter in the cavity-removal approximation, by analyzing the time evolution of the conditional mean spin and spin covariance. Here, we outline the derivation, while details are reported in App. <ref>.We introduce the scaled time τ≡κ̃ t and recall that the evolution of the conditional expectation value of an observable  is determined by the SME (<ref>) via d⟨Â⟩_c=(Âdρ̂_c). Since the off-diagonal covariances are zero for τ=0, we assume that they remain negligible along the dynamics. This is equivalent to a third order cumulant truncation, namely a Gaussian approximation, and will be confirmed by the simulations. Inspection of the SMEs for Ĵ_x, Ĵ_y, Ĵ_x^2 and Ĵ_y^2 shows then that their evolution is purely dissipative and unconditional. By taking into account the initial condition of a CSS along the positive x axis, we obtain:⟨Ĵ_x(τ)⟩ = J e^-τ/2 ⟨Ĵ_y(τ)⟩ = 0⟨Ĵ_x^2(τ)⟩ = J^2/2( 1+e^-2τ)+ J/4( 1-e^-2τ)⟨Ĵ_y^2(τ)⟩ = J^2/2( 1-e^-2τ)+ J/4( 1+e^-2τ)resulting in the following closed expressions:Δ^2Ĵ_x(τ)/J/2 = J( 1-e^-τ)^2+ 1/2( 1-e^-2τ)Δ^2Ĵ_y(τ)/J/2 = J( 1-e^-2τ)+ 1/2( 1+e^-2τ) . Conversely, it is clear that all powers of Ĵ_z evolve only via the stochastic term, due to them commuting with the dissipator.However, the evolution of Δ^2Ĵ_z contains a stochastic term corresponding to the third order cumulant ⟨Ĵ_z^3⟩_C that we approximate to zero, and an additional unconditional term stemming from Itô calculus, resulting indΔ^2Ĵ_z=-4 (Δ^2Ĵ_z)^2dτ ,whose solution is Δ^2Ĵ_z/J/2= 1/1+2Jτ ,This result saturates the uncertainty bound in the y-z plane, for moderate times: Δ^2Ĵ_zΔ^2Ĵ_y≥ |⟨Ĵ_x⟩|^2/4 (See panel a of Fig. <ref>). For large J, the contrast is determined by the x component, yielding 𝒞(τ)≃⟨Ĵ_x(τ)⟩^2/J^2= e^-τ. Notice that the expressions found for the y and z components are consistent in the limit of large J with the results from the Holstein-Primakoff approximation (see Ref. <cit.>), which is however unable to correctly describe variations of ⟨Ĵ_x⟩ and Δ^2 Ĵ_x: crucially, in our case these observables should not be fixed to their initial values, J and 0, respectively, as we demonstrate in the following.We are interested in the tangential spin-squeezing parameter, corresponding toξ^2(τ,cos^2θ)=Δ^2Ĵ_⊥(τ)/J𝒞(τ)/2=Δ^2Ĵ_z(τ)sin^2θ+Δ^2Ĵ_x(τ)cos^2θ/J𝒞(τ)/2 ,where cosθ is defined by the mean spin via Eq. (<ref>) and we again neglected the off-diagonal x-z covariance. Eq. (<ref>) implies that Δ^2Ĵ_x increases quadratically with small time. Panel b of Fig. <ref> then shows that the x contribution to spin squeezing may become dominant if the mean spin is far from the equator. Indeed, the absence of continuous feedback in our approach implies that, even in the J→∞ limit, θ should not simply be set equal to π/2. The reason is that the statistical distribution P of conditional values of ⟨Ĵ_z⟩_c is constant in time and equivalent to the initial one, which is Gaussian, in the large J limit, and reads:P(⟨Ĵ_z⟩_c)=1/[2πΔ^2 Ĵ_z(0)]^1/2exp(-⟨Ĵ_z⟩_c^2/2Δ^2 Ĵ_z(0)) . 
We can now analytically evaluate the trajectory average of the conditional spin-squeezing parameter in the absence of continuous feedback of Eq. (<ref>), ξ^2_NF(τ)=E[ξ^2(τ,⟨Ĵ_z⟩_c^2/|⟨Ĵ⟩|^2)], by noticing that Eq. (<ref>) depends quadratically on ⟨Ĵ_z⟩_c, while the rest of the expression is unconditional in our approximations, and we obtain: ξ^2_NF = ∫_-∞^+∞ P(q) ξ^2(τ,q^2/|⟨Ĵ⟩|^2) dq = ξ^2(τ,Δ^2 Ĵ_z(0)/[J^2𝒞(τ)]) = (1-e^τ/N) e^τ/(1+Nτ) + [N(e^τ-1)^2 + e^2τ-1]/(2N). We have thus found that Gaussianity implies that the average spin-squeezing parameter is the one corresponding to a trajectory where ⟨Ĵ_z⟩_c is equal to the initial standard deviation of Ĵ_z. To infer the scaling of the optimal time and minimal squeezing with the number of particles, we first keep only the dominant terms of the previous expression for N→∞, and then expand for small time, presuming that the minimum occurs for τ≪ 1: ξ^2_NF(τ) ≈ 1/(Nτ) + (e^τ-1)^2/2 for N→∞, which for τ→ 0 reduces to ξ^2_NF(τ) ≈ 1/(Nτ) + τ^2/2. It is clear here that the second term, stemming from the Δ^2Ĵ_x contribution, causes an increase of the tangential spin-squeezing parameter, in competition with Δ^2Ĵ_z, which is decreasing. The time at which the minimum is reached is dubbed the optimal time t_m and is relevant when devising an experimental protocol. In our analytical approximation, it occurs for t_NF,m = τ_NF,m/κ̃ = 1/(κ̃ N^1/3), corresponding to the optimal average spin-squeezing parameter ξ^2_NF,m = (3/2) N^-2/3. We highlight here that, even in the N→∞ limit, tangential spin squeezing in the absence of continuous feedback is worsened by the contribution coming from the increasing variance of Ĵ_x. This implies a minimum average spin-squeezing parameter that loses Heisenberg scaling and is on par with the OAT result of Ref. <cit.>, and a corresponding optimal time which is not size independent. The found spin-squeezing exponent is slightly smaller than the one, α≃0.73, recently obtained with a mean-field approach in a similar setup when also the pump detuning is optimized <cit.>. The scaling exponents α, for the minimum average spin-squeezing parameter, and β, for the optimal time, are consistent with the relation α+β = 1, which also holds for the OAT model and the cavity-mediated interaction model of Refs. <cit.> (see App. <ref>). §.§ Comparison with numerical results in the cavity-removal approximation We now check the analytical results and assumptions of the previous section via numerical solution of the SME (<ref>). The results of an SME are crucially conditioned on the sampled noise, which is not a computational artifact, but has the physical meaning of representing a possible realization of the noise of the continuous measurements. Therefore, we evaluate both the distribution of conditional results ⟨Â(t)⟩_c = Tr[Âρ̂_c(t)] of some relevant physical quantities Â, and their statistical average over the M trajectories, E[A(t)]. The variance of the average, which is smaller the higher M, is not to be confused with the variance of the distributions, which increases over time due to the wandering of the average collective spin and the absence of feedback, at odds with unconditional protocols. In Fig. <ref> we show the values of relevant elements of the covariance matrix, along different trajectories, together with the contrast and the average tangential spin-squeezing parameter E[ξ^2], and compare them to our analytical predictions (dashed lines). We simulated N=160 atoms and normalized the time with the effective information rate κ̃.
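For each conditional state stored along a trajectory, the tangential spin-squeezing parameter is obtained from the covariance matrix restricted to the plane orthogonal to the mean spin, for instance as in the following sketch (the helper function is ours and assumes the maximal Dicke sector).

import numpy as np
import qutip as qt

def tangential_xi2(rho, j):
    """Wineland parameter: smallest tangential spin variance divided by (J/2)
    times the contrast, for a state rho in the spin-j (J = N/2) sector."""
    Jops = [qt.jmat(j, k) for k in 'xyz']
    mean = np.array([qt.expect(J, rho) for J in Jops])
    norm = np.linalg.norm(mean)
    n1 = np.cross(mean / norm, [0.0, 0.0, 1.0])
    if np.linalg.norm(n1) < 1e-12:                  # mean spin along z: pick x instead
        n1 = np.array([1.0, 0.0, 0.0])
    n1 /= np.linalg.norm(n1)
    n2 = np.cross(mean / norm, n1)                  # second tangential direction
    cov = np.zeros((2, 2))
    for ia, na in enumerate((n1, n2)):
        for ib, nb in enumerate((n1, n2)):
            Ja = sum(c * J for c, J in zip(na, Jops))
            Jb = sum(c * J for c, J in zip(nb, Jops))
            cov[ia, ib] = 0.5 * qt.expect(Ja * Jb + Jb * Ja, rho) \
                          - qt.expect(Ja, rho) * qt.expect(Jb, rho)
    contrast = norm**2 / j**2
    return np.linalg.eigvalsh(cov)[0] / (j * contrast / 2.0)

Applying this function to each conditional state and then averaging over trajectories yields the average tangential parameter E[ξ^2(t)] discussed below.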
The diagonal variances and the contrast are clearly almost unconditional and in excellent agreement with the results of the previous Section. Consistently, the off-diagonal covariances are essentially zero (xy and zy cases, not shown) or negligible (xz). Notice how the z and x variances have opposite behavior, implying that their weighted sum, the average spin-squeezing parameter, displays a minimum and then worsens. Also for this crucial quantity, the analytical expression of Eq. (<ref>) is in very good agreement with the numerical result in the region of the minimum (deviations at higher times stem from finite-size effects, see App. <ref>).In Fig. <ref> we now also check the analytical expression corresponding to Eq. (<ref>) (See Eq. (<ref>)) against simulations with varying number of atoms. At fixed time, such expression provides the correlation between the conditional value ⟨Ĵ_z⟩_c and the corresponding conditional spin-squeezing parameter. For each simulation we select the optimal time t_m and plot Eq. (<ref>): the agreement is again very good and discrepancies start to be noticeable only for very large ⟨Ĵ_z⟩_c, where the conditional contrast should take into account not only the unconditional x component, but also the z contribution. Notice also that the average squeezing (horizontal lines) is consistent with the conditional squeezing corresponding to ⟨Ĵ_z⟩_c equal to the initial standard deviation (vertical lines), an agreement which increases the larger the particle number. This figure manifests the effect of Bloch sphere's curvature: trajectories which keep ⟨Ĵ_z⟩_c≈ 0 display the best conditional spin squeezing, because the fixed squeezing direction z is almost perpendicular to the average spin. In a continuous feedback scheme, the state is constantly realigned with the equator, thus always obtaining the maximum possible squeezing.On the other hand, in the absence of feedback, many trajectories will result in ⟨Ĵ_z⟩_c far from the equator, corresponding to worse spin-squeezing parameter, due to the squeezing operation not acting perpendicularly to the average spin.Although one might assume that the Holstein-Primakoff approximation for the collective spin is valid in the N→∞ limit, implying that the plane tangent to the Bloch sphere is always perpendicular to the equator and thus feedback is not necessary to achieve Heisenberg scaling, we have in fact analytically and numerically demonstrated that the role of curvature persists in such limit, resulting in α=2/3 for metrological spin squeezing. Heisenberg scaling characterizes spin squeezing only if evaluated along the fixed z axis <cit.>. We numerically check the predictions concerning the scaling of the spin-squeezing parameter and the optimal time in the following Section, together with the results from the full simulations. §.§ Numerical results for the full atom-field dynamics Having analytically and numerically solved the dynamics in the bad cavity regime in cavity removal approximation, we now focus on the numerical solution of the full SME (<ref>). We are interested in inspecting the accuracy of our previous results and to investigate the main qualitative and quantitative changes to be expected when gradually exiting the bad-cavity regime.Evolution of observables for different trajectories. In Fig. <ref>, we show the distribution of relevant observables along different conditional trajectories from the full simulations in the bad-cavity (left panels) and out of the bad-cavity regimes (right panels). 
We focus on the number of photons in the cavity ⟨n̂ (t)⟩_c (panels a and d), the clock population difference ⟨Ĵ_z (t)⟩_c (panels b and e), and the spin-squeezing parameter ξ^2 defined in Eq. (<ref>) (panels c and f).In the bad cavity regime, the photon number almost deterministically fills the cavity (panel a), reaching a value close to the non-interacting case n_0. It takes a transient time δ t = c/(κ/2) with c≈ 3 for the statistical average E[n̂ (t)] to reach 90% of the stationary value. During such transient, the population differences (panel b) depart from zero and vary widely, each trajectory tending to fluctuate around a particular eigenvalue of Ĵ_z, since continuous measurement increases the precision in its knowledge. For small times, most of the trajectories of the spin-squeezing parameter (panel c) are compatible with each other, while, as time progresses, the distribution of ξ^2(t) widens. Some of the trajectories continue to decrease and follow what appears to be an optimal value. This limiting values indeed compare well with the analytical approximate expression in the case of feedback (dot-dashed line), Eq. (<ref>), provided a temporal shift equal to δ t is introduced. On the other hand, most of the trajectories tend to increase after reaching a minimum value at some time. Their statistical average also manifests a steep initial decrease, followed by a minimum and a slow increase, until metrological advantage is lost. Hence the reason for characterizing each considered system with the minimum of the average spin-squeezing parameter, ξ^2_m = min_t E[ξ^2(t)]. In panel c, we also plot E[ξ^2(t)] for the adiabatically removed cavity simulation (dotted line), and from the analytical expression of Eq. (<ref>) (solid curve); we observe that they are essentially the same as for the full system, provided the initial offset δ t is introduced. This indicates that the squeezing process begins as soon as the phase shift induced by the atoms on the photons is detected by the continuous measurement, but the generation rate of spin squeezing reaches its optimal value only when n(t) reaches its stationary value. On the other hand, the adiabatically removed cavity approximation assumes a stationary photon population, so that the information on the atomic ensemble is directly transmitted to the homodyne detector and spin squeezing is generated right away. Notice how the dynamics of the simulations without feedback (both with cavity removal and in the full system, once the offset is introduced) for small times, when most of the trajectories are compatible with each other, is compatible with the behavior of the continuous feedback system (dot-dashed line), when instead the continuous measurement outcomes are used to realign the state to the equator of the collective Bloch sphere, to guarantee that ⟨Ĵ_z⟩_c≈ 0. However, as the trajectories evolve and the average collective spins move away from the equator, the tangential planes, on which the metrological spin squeezing is evaluated, are typically less and less parallel to the fixed measurement direction z: together with the loss of contrast, this causes the departure from the feedback solution.Our full simulations allow us to consider also scenarios with smaller κ, outside the bad-cavity regime. Here, the stationary photon number (panel d) varies strongly for different trajectories and is generically significantly lower than in the non-interacting case. The transient δ t, defined as above, corresponds to c≈ 2.5 and is longer, due to smaller κ. 
It takes therefore a longer time for the population difference to stabilize (panel e), and correspondingly, the dynamics of spin-squeezing generation is strongly mixed with the cavity filling, resulting in much larger variance of the distribution of ξ^2(t) (panel f). This has two consequences: first, the minimal average spin-squeezing parameter is worse than in the bad-cavity regime, since most of its trajectories stop decreasing earlier, resulting in a smaller optimal time; second, the cavity removal simulations, with or without feedback, do not provide accurate information on the full system, even introducing a time offset as above. Dependence of ξ^2_m on coupling at fixed N. Having shown two representative cases, we now consider the dependence of the minimum average spin squeezing in the full system case on the ratio of the effective interaction frequency of the cavity with the atoms g^2/Δ and the transmission rate κ. In Fig. <ref> we report the N=45 case.For small values of this ratio, the bad-cavity condition Eq. (<ref>) is fulfilled (before the vertical dashed line), and the results converge to the cavity removal simulation with the same number of particles. For this latter case, it is natural for ξ^2_m to only depend on the number of atoms, since the spin-squeezing parameter is dimensionless and Eq. (<ref>) only contains the frequency κ̃, which sets the timescale.Ratios of other dimensional parameters that only occur in the full master equation (<ref>), such as the transmission rate κ, the driving amplitude ε, and the coupling g^2/Δ, in principle become relevant outside the bad-cavity regime, when also the details of the cavity directly affect the global dynamics. Indeed, here we observe a reduction of spin squeezing with respect to the cavity removal result, and, interestingly, we still find small dispersion of the results when plotted as a function of g^2/κΔ, the residual variance arguably related to the driving amplitude and thus the stationary number of photons.Scaling of ξ^2_m with N. We now discuss the scaling dependence of the spin-squeezing parameter on the atomic ensemble size. In Fig. <ref>, we compare the results of the full simulations of different configurations to the numerical results obtained with the adiabatic removal of the cavity, which provide the optimal spin squeezing achievable with this continuous measurement scheme in the absence of feedback. As a reference, we report also the analytical result of Eq. (<ref>) from <cit.>, for the continuous feedback scheme (dot-dashed line). This reaches the ultimate Heisenberg scaling. The efficiency of the cavity removal simulations of Eq. (<ref>) allows us to consider up to N=20000 atoms (diamond symbols).We then fit the power-law ξ^2_m=a/N^α and observe convergence in the results, provided only numbers N≥ 10^3 are considered. We obtain a= 1.89(6) and α = 0.680(6), which favorably compare to the analytical result of Eq. (<ref>) (dashed line), even though finite-size effects are noticeable. We simulate the full master equation Eq. 
(<ref>) up to N=200, for various configurations of g^2/Δ, κ, ε.We confirm the observation that, in the bad-cavity regime (full symbols), ξ^2_m is independent of any parameter other than N, as the results for different configurations are all compatible with each other.As the system size increases past N≳κΔ/g^2 (empty symbols), the results progressively start to deviate from the optimal scaling, even beginning to increase and eventually losing metrological advantage.Scaling of the optimal time on N. In Fig. <ref>, we now investigate whether a power law dependence in N holds for the optimal time t_m, for different configurations. Since in the bad cavity regime the effective dynamics is governed by the effective transmission rate from Eq. (<ref>), we scale t_m for each configuration with the corresponding 1/κ̃. We then fit the cavity removal results with the power-law t_m=b/(κ̃N^β), obtaining, when considering N≥ 10^3, b=0.9(1) and β=0.32(1), which are in agreement with the analytical result of Eq. (<ref>). Concerning the results from the full simulations, we notice that they are mostly compatible with each other and with the cavity-removal ones, once scaled with κ̃. However, discrepancies increase with larger driving amplitude (circles), corresponding to large stationary photon number ∼ n_0. As we commented when discussing Fig. <ref>, this increase of the optimal time can be modeled by adding a contribution describing the transient δ t required for filling the cavity, which is initially empty:t_m = b/κ̃N^β + 2 c/κ ,where c≃ 3 deep in the bad-cavity regime. By removing such transient contribution (inset of Fig. <ref>), we indeed observe good agreement among all data. Dependence of t_m on coupling. In Fig. <ref>, we focus on the role of coupling in the full simulations in determining the optimal time. We scale the latter also with the obtained power-law dependence on the atomic ensemble size, and compare two system sizes, N=20 and N=45. Unlike for the spin-squeezing parameter inFig. <ref>, here we notice that a good scaling variable is the ratio of the effective transition rate κ̃ with the original transmission rate κ. This hints at a prominent role of the number of stationary photons, since κ̃/κ=16(g^2/κΔ)^2 n_0. A qualitative explanation of Fig. <ref> is the following: the squeezing process begins to be considerable only after the cavity has reached the steady state. When κ≫κ̃, this filling transient is negligible compared to the squeezing characteristic time, and the bad-cavity adiabatic removal prediction is accurate. As the ratio κ̃ / κ increases, the transient time cannot be neglected, but becomes more and more relevant. As the two time-scales become comparable, κ̃ / κ∼ 1, once the cavity steady-state is reached the atomic degrees of freedom are already partially squeezed, therefore it takes less to achieve the optimal average spin squeezing than as estimated with Eq.(<ref>).§ RELEVANCE TO V-LEVEL OPTICAL CLOCKS Up to this point we have focused on the Λ-configuration, where a single excited state is coupled to two ground states: this configuration is relevant to describe alkali atoms such as Rubidium where the clock frequency is in the microwave range<cit.>. However, the same basic scheme can be adapted also to V-level atoms (as depicted in Fig. <ref>c), in which a single ground state is coupled to two different excited states. 
This configuration is relevant for the low-lying levels of alkaline earth-like atoms, such as Strontium, which is the atomic species employed in the cavity-enhanced atomic clock being developed ad INRiM<cit.>. In this case, the cavity-aided continuous measurement protocol detailed above could be adapted to operate in the proximity of the closed ^1S_0 - ^3P_1 intercombination transition, which is particularly suitable for continuous measurements because of its extremely low spontaneous emission rate. Since the ^1S_0 - ^3P_0 clock transition is far-detuned to the cavity and does not directly participate in the dynamics, the N_↑ population is constant. We can still define a collective spin observable as difference of population between the clock states <cit.>:Ĵ_z = N̂_↑ - N̂_↓/2 = N/2 - N̂_↓We stress that this definition is valid under the assumption that the number of atoms on the two clock states N does not change during the dynamics. Just as for the Λ-configuration, it is possible to choose a blue detuning Δ=ω_c-ω_e ≫ g_↓≡ g so that the excited state of the cavity-coupled transition can be adiabatically removed (See App. <ref>). In this configuration, the effective Hamiltonian couples the cavity only to the ground state projection operator:Ĥ_a =∑_i=1^Ng^2/Δ n̂ |↓⟩_i⟨↓|= g^2/Δ n̂ N̂_↓ = -g^2/Δ n̂Ĵ_z + g^2/2Δ n̂ N. The above effective Hamiltonian is basically equivalent to (<ref>), except for a factor -2 in the coupling and a constant cavity resonance shift <cit.>, which can be neglected in the measurement dynamics. Therefore the results of Sec. <ref> can be adapted to the Sr case. The optimal squeezing time t_m should be minimized, to reduce spontaneous losses due to absorption of cavity probe light. This can be obtained at the border of the bad cavity regime, g^2 N/Δ≃κ, which fixes the optimal detuning Δ = g^2 N/κ. In this regime, the excited stated adiabatic elimination, Eq. (<ref>), requires that the stationary cavity photon numbern_0 ≪(g N/κ)^2 ≡ n_limcorresponding to an input power limit P(n_lim)=g^2 N^2 ħω_D/4κ∼ N^2·10^-3 according to (<ref>) and the experimental parameters in Ref. <cit.> (namely g≃ 2π·7, κ=2π·30, ω_D≃ 2π·429). We then introduce an attenuation factor f = P(n_lim)/P=n_lim/n_0 and the optimal time for squeezing is estimated witht_m^Sr≃f κ/4 g^2 N^1/3+6/κ .In the case of N=10^4 and f=100, namely P = 1, then t_m^Sr≃150, which is within the current state-of-the-art for continuous quantum measurements of quantum systems <cit.>. The expected optimal average spin squeezing parameter would be ξ^2_m≃ (3/2)/N^2/3≃25. § CONCLUSIONS In this work we analytically and numerically analyze the dynamics of a three-level atom coupled to an optical cavity affected by a continuous measurement of the transmitted cavity field. We show how this continuous measurement observation scheme consistently generates conditional spin-squeezed states. We analyze in detail the corresponding average spin squeezing in the different regimes characterizing the cavity properties and the strength of the interaction between atoms and the cavity mode. We demonstrate that, in the bad-cavity regime and cavity removal approximation, the achievable optimal average spin squeezing depends solely on the atomic ensemble size with scaling exponent α=2/3; complementarily, the optimal duration of the squeezing operation shortens with exponent β=1/3 on particle number, and depends on an effective information rate. 
Out of the cavity-removal approximation, we observe that the first correction to this result equals to the short transient required to fill the cavity. Exiting this regime gradually complicates such simple picture and introduces explicit dependence on the pumping parameters.The scaling found does not match the ideal results obtained with a continuous feedback scheme, due to the role of the Bloch sphere curvature, as we demonstrate analytically; nevertheless, it is comparable to the scaling for other squeezing methods (e.g. OAT <cit.>) and has the additional advantage of relying on a much simpler experimental configuration that does not require a strict feedback control of the atomic system which would introduce further sources of noise <cit.>.The obtained results however rely on some ideal assumptions, neglecting information losses due to non-unity measurement efficiency and atomic scattering of the cavity field from the excited state.Relaxing these assumptions will be included in future works, to investigate how they impact the optimal expected average spin squeezing. Also, optimization of the pump laser detuning will be included to investigate the interplay between continuous measurement and cavity-induced interactions <cit.>. Finally, it would be useful to compare full simulations with the results from the cumulant expansion <cit.> and investigate whether an analytical approach can be pursued also in this case.The data that support the findings of this study are openly available at Ref. <cit.>.This work was supported in part by the European Union’ Horizon 2020 Research and Innovation Program and the EMPIR Participating States through the project EMPIR 17FUN03-USOQS. We acknowledge funding from the QuantERA project Q-Clocks, and from Italian Ministry of Research via the the PRIN 2022 project CONTRABASS (contract n.2022KB2JJM).§ ADIABATIC ELIMINATION OF THE ATOMIC EXCITED LEVEL The interaction between the single-mode cavity photon field and the ensemble of N three-level uncorrelated atoms, as depicted in Fig. <ref>, panels b and c, is described by the Tavis-Cummings Hamiltonian<cit.> extended to two atomic transitions. Each atom contributes a three-level single-mode Jaynes-Cummings term: Ĥ_JC = ∑_j=↓,↑,eω_j jj + g_↓ĉe↓ + g_↑ĉe↑ + h.c.where the energy of each each atomic level |j⟩ is ω_j and in general each transition j↔ e has a different coupling strength g_j to the single photonic mode described by the bosonic field operator ĉ. The ground-state detunings are defined as the difference between the transition frequencies and the cavity frequency ω_c: Δ_j = ω_c - (ω_e - ω_j), with j = ↓,↑. The frequency splitting ω_↑-ω_↓ = ω_0 is the reference clock frequency. We assume that both the detunings and the couplings are uniform across the system.Based on the coupling strengths g_j, the system assumes one of the possible three-level configurations: for example, by taking g_↑ = 0 we obtain a description for a V-level configuration as shown in Fig. <ref>c, typical of alkali-earth atoms such as Sr, in which only the ground state ↓ is coupled to the cavity mode, whose frequency is of magnitude similar to the clock frequency (see Sec. <ref>). In this paper, we mainly consider the Λ-level scheme (see Fig. <ref>b) in which both the ↓ and the ↑ levels are coupled to the excited state e via the cavity mode. This configuration is typical of alkali atoms such as Rb, for which ω_0≪ω_e-ω_↑≃ω_c.It is convenient to perform the transformation of Eq. 
(<ref>) to the rotating frame defined by the bare atomic and photonic energies:Ĥ_R =g_↓ e^-iΔ_↓ tĉe↓ + g_↑ e^-iΔ_↑ tĉe↑ + h.c. If the cavity mode is far-detuned from both the atomic transitions, with respect to the couplings |g_↑|,|g_↓|≪|Δ_↑|,|Δ_↓|, the excited state, if initially empty, remains very little populated at the time scales of interest. Therefore it can be adiabatically removed, in order to simplify the interaction which describes the system dynamics. We briefly recap the time-averaging technique <cit.> that allows to perform such removal. Eq. (<ref>) is of the harmonic form Ĥ=∑_n ĥ_n e^-iω_n t+h.c. which can be approximated by Ĥ_eff=∑_m,n [ĥ^†_m,ĥ_n] e^i(ω_m-ω_n) t/ω^+_mn+h.c., with (ω^+_mn)^-1=(ω_m)^-1+(ω_n)^-1, provided |ω_m+ω_n|≫|ω_m-ω_n|. In our case it is therefore convenient to choose ĥ_1=g_↑ĉe↑, with ω_1=Δ_↑, and ĥ_2=g^*_↓ĉ^†↓e, with ω_2=-Δ_↓, resulting in Ĥ_eff=2(|g_↑|^2/2Δ_↑-|g_↓|^2/2Δ_↓)ĉ^†ĉ ŝ_z +(|g_↑|^2/2Δ_↑+|g_↓|^2/2Δ_↓)[ĉ^†ĉ-(2+3ĉ^†ĉ)ee] . When both the couplings are different from zero, such as in the Λ-level case, it is convenient to tune the cavity so that Δ_↓=-Δ_↑ |g_↓|^2/|g_↑|^2 and the second term of Eq. (<ref>) vanishes, resulting in Ĥ_eff=2(|g_↑|^2/Δ_↑)ĉ^†ĉ ŝ_z. Summing this equation over the atoms yields Eq. (<ref>), where we used, without lack of generality, the simplification that the couplings are real and equal, g=g_↓=g_↑, and thus Δ_↑ = - Δ_↓ = ω_0/2 ≡Δ.If instead we consider the V-level case with g_↑=0, then Eq. (<ref>) reduces to Ĥ_eff=-(|g_↓|^2/Δ_↓)ĉ^†ĉ ŝ_z+(|g_↓|^2/2Δ_↓)[ĉ^†ĉ-(2+3ĉ^†ĉ)ee]. Summing this equation over the atoms yields Eq. (<ref>), provided the occupation of the excited state is neglected.§ DETAILS OF THE SPIN-SQUEEZING PARAMETER ESTIMATION The evaluation of the conditional spin-squeezing parameter requires the estimationof the collective spin components' averages and covariance matrix, as defined in (<ref>). The covariance matrix contains information regarding the variance of the spin components; the optimal spin-squeezing parameter is defined as the variance in the optimal direction on the tangent plane, perpendicular to the average spin ⟨Ĵ⟩_c = (⟨Ĵ_x⟩_c, ⟨Ĵ_y⟩_c, ⟨Ĵ_z⟩_c), normalized to the magnitude of such average spin.In the simulations, the expectation values ⟨Ĵ_i (t) ⟩_c, and the covariance matrix, are referred to the fixed reference system integral to the initial average spin vector along the x direction. In post-processing, therefore, the reference frame at each time step should be passively rotated to the instantaneous average spin vector, after which the relevant covariances appear in the new y-z plane. This operation is equivalent to the more efficient active rotation of the average spin vector to the x direction, and the corresponding rotation of the covariance matrix.The rotation matrix ℛ necessary to perform such operation in the Euclidean space ℝ^3 is related to the rotation operator R̂ in the collective Hilbert space which transforms the spin-coherent state |θ, ϕ⟩ on the Bloch sphere to the initial state |π/2,0⟩: .ℛ_i,j⟨Ĵ_i ⟩_c |_θ,ϕ= .⟨Ĵ_i ⟩_c |_π/2,0= .⟨R̂^-1Ĵ_i R̂⟩_c|_θ,ϕfrom which one gets ℛ_i,jĴ_j = R̂^-1Ĵ_i R̂. The direction (θ,ϕ) is related to the mean spin vector by Eqs. (<ref>) and the rotation operator is a composition of a rotation around the z and y axes:R̂ = R̂_y(π2-θ) R̂_z(-ϕ) = e^-i ( π/2-θ) Ĵ_y e^i ϕĴ_zThis operator allows to derive the proper transformation of the covariance matrix as.cov_i,j(Ĵ^')|_π/2,0=.ℛ_k,icov_k,l(Ĵ)ℛ_l,j|_θ,ϕ . 
Once the covariance matrix is rotated to the fixed reference frame, we can then reduce it to the tangent components y-z and calculate the minimal eigenvalue, from which we obtain the spin-squeezing parameter.§ ANALYTICAL SOLUTION FOR THE SPIN-SQUEEZING PARAMETER FROM THE MASTER EQUATION IN THE CAVITY REMOVAL APPROXIMATION WITHOUT FEEDBACK In this Appendix, we report the derivation of the tangential spin-squeezing parameter in the cavity removal approximation in the absence of feedback.In the cavity removal approximation described by Eqs. (<ref>), the mean spin vector always lies in the x-z plane, if the initial state is a CSS along x, since there is no Hamiltonian term. Then, the minimal variance from Eq. (<ref>) is on the rotated z direction and is related to the covariances in the original frame by the following equation:Δ^2Ĵ_⊥=Δ^2Ĵ_zsin^2θ+Δ^2Ĵ_xcos^2θ-2cov(Ĵ_zĴ_x)sinθcosθ ,where θ is defined in Eq. (<ref>). From this equation, the tangential spin-squeezing parameter is derived as ξ^2=(2/J)Δ^2Ĵ_⊥/𝒞.To determine ξ^2, we study the evolution of the mean spin and the covariance components. A generic observable Â, whose conditional expectation value is ⟨Â⟩_c=[ρ̂_cÂ], obeys the following equation, derived from the master equation (<ref>):d⟨Â⟩_c = ⟨Ĵ_zÂĴ_z -Ĵ_z^2Â/2 -ÂĴ_z^2/2⟩_cdτ+ (⟨ÂĴ_z+Ĵ_zÂ⟩_c- 2⟨Ĵ_z⟩_c ⟨Â⟩_c)dw ,where we introduced the scaled time τ≡κ̃ t and stochastic increment dw≡√(κ̃) dW_t.The initial condition for the mean spin is ⟨Ĵ_z(0)⟩_c=⟨Ĵ_y(0)⟩_c=0, ⟨Ĵ_x(0)⟩_c=J, while the initial variances are Δ^2J_z=Δ^2J_y=J/2, Δ^2J_x=0. The off-diagonal covariances are initially zero, and we make our first approximation in setting them to zero for every time:cov(Ĵ_iĴ_j(τ))_c=1/2⟨(Ĵ_iĴ_j+Ĵ_jĴ_i)(τ)⟩_c-⟨Ĵ_i(τ)⟩_c⟨Ĵ_j(τ)⟩_c = 0 ,for i≠ j. This is equivalent to third order cumulant truncation, namely Gaussian approximation, and will be confirmed by inspection of the simulation results.The resulting coupled equations follow straightforwardly from the angular momentum commutation relations and are reported here, where the "c" subscript is understood for all expectation values:d⟨Ĵ_x⟩ == -1/2⟨Ĵ_x⟩ dτ + 2cov(Ĵ_xĴ_z)dw ≈ -1/2⟨Ĵ_x⟩ dτd⟨Ĵ_y⟩ == -1/2⟨Ĵ_y⟩ dτ + 2cov(Ĵ_yĴ_z)dw ≈ -1/2⟨Ĵ_y⟩ dτd⟨Ĵ_z⟩ ==2Δ^2Ĵ_zdw d⟨Ĵ_x^2⟩ == -(⟨Ĵ_x^2⟩-⟨Ĵ_y^2⟩)dτ = + (⟨Ĵ_x^2Ĵ_z+Ĵ_zĴ_x^2⟩- 2⟨Ĵ_z⟩⟨Ĵ_x^2⟩)dw d⟨Ĵ_y^2⟩ == (⟨Ĵ_x^2⟩-⟨Ĵ_y^2⟩)dτ =+ (⟨Ĵ_y^2Ĵ_z+Ĵ_zĴ_y^2⟩- 2⟨Ĵ_z⟩⟨Ĵ_y^2⟩)dw d⟨Ĵ_z^2⟩ ==2(⟨Ĵ_z^3⟩- ⟨Ĵ_z⟩⟨Ĵ_z^2⟩)dw When consideringthe evolution of the variances, we again make use of third order cumulant truncation <cit.>. This consists in a Gaussian approximation, which is expected to be valid in the J→∞ limit and is later justified by comparison to numerical results. We then obtain the expressions in Eqs. (<ref>), which take into account the initial conditions, resulting in Eqs. (<ref>)-(<ref>). Notice that due to our approximations, these expressions are unconditional. Besides Δ^2Ĵ_x, which was not present inRefs. <cit.>, the other expressions reduce to those in the literature in the small time limit <cit.>.The evolution of ⟨Ĵ_z⟩_c is completely stochastic, and the statistical distribution of conditional values P(⟨Ĵ_z(τ)⟩_c) is thus constant in time and equivalent to the initial one. In the large J limit, this can be approximated by Eq. (<ref>), namely a Gaussian with zero mean and variance Δ^2 Ĵ_z(0)=J/2. The contrast thus evolves as:𝒞(τ)= ⟨Ĵ_x(τ)⟩^2+⟨Ĵ_y(τ)⟩^2+⟨Ĵ_z(τ)⟩_c^2/J^2≈ e^-τ ,since Δ^2 Ĵ_z(0)/J^2→ 0 in the J→∞ limit.Both the Eqs. 
for Ĵ_z and Ĵ_z^2 evolve only stochastically, and one would presume that also Δ^2Ĵ_z=Ĵ_z^2-Ĵ_z^2 evolves stochastically. However, this quantity evolves asdΔ^2Ĵ_z=d⟨Ĵ_z^2⟩ -d(Ĵ_z^2) = 2(⟨Ĵ_z^3⟩ - ⟨Ĵ_z⟩⟨Ĵ_z^2⟩)dw= -4⟨Ĵ_z⟩Δ^2Ĵ_z dw - 4 (Δ^2Ĵ_z)^2 dτ =2⟨Ĵ_z^3⟩_C dw- 4 (Δ^2Ĵ_z)^2 dτwhere the last term stems from the Itô calculus rule dx = A dτ + B dw → d(f[x])= (A f'[x] + B^2 f”[x]/2)dτ + B f'[x]dw, where f is a function of x. ⟨Ĵ_z^3⟩_C = ⟨Ĵ_z^3⟩ -3⟨Ĵ_z⟩⟨Ĵ_z^2⟩+2⟨Ĵ_z⟩^3 is a third order cumulant that we set to zero in Gaussian approximation. The resulting expression, Eq. <ref>, is consistent, for large J and moderate times, with the one following heuristically from an approach analogous to Refs. <cit.>, where one assumes that the atomic state preserves its minimal uncertainty product in the y-z plane: Δ^2Ĵ_zΔ^2Ĵ_y= |⟨Ĵ_x⟩|^2/4, yielding:Δ^2Ĵ_z(τ)/J/2≈e^-τ/J( 1-e^-2τ)+ ( 1+e^-2τ)/2 . We now have all the ingredients for determiningspin-squeezing as a function of time and of the ⟨Ĵ_z⟩_c projection, recalling its relation to cosθ in Eq. (<ref>). Neglecting again the off-diagonal covariances, we obtain:ξ^2(τ,⟨Ĵ_z⟩_c^2)=e^τ/1+2 Jτ(1-⟨Ĵ_z⟩_c^2/|⟨Ĵ⟩|^2)+ e^τ[J( 1-e^-τ)^2+ 1/2( 1-e^-2τ)]⟨Ĵ_z⟩_c^2/|⟨Ĵ⟩|^2 . We use the above expression in the main text to derive the scaling of the average spin-squeezing parameter and to that end perform the J→∞ limit, in particular for the contrast. However, since this expression should be valid even for large ⟨Ĵ_z⟩, in this case it can be more accurate to retain 𝒞(τ)= e^-τ + ⟨Ĵ_z(τ)⟩_c^2/J^2. We also notice that performing the average of this more refined expression would introduce corrections in terms of powers of e^τ/2J, which do not affect the found scaling of the minimum.§ RELATION BETWEEN SCALING EXPONENTS OF SPIN SQUEEZING AND OPTIMAL TIME The relation α+β = 1 stems from a generic behavior of the spin-squeezing parameter for small times, in which ξ^2(t) ≈ Q^-δ + f Q^γ/N^ϵ with Q∝ N t. Indeed, the minimum of this function occurs at ξ^2_m=(1+δ/γ)(δ/fγ)^δ/(γ+δ)N^-δϵ/(γ+δ) for Q̅=(δ/fγ)^1/(γ+δ)N^ϵ/(γ+δ), implying α=δϵ/(γ+δ) and β=1-ϵ/(γ+δ). The relation α+β=1 is thus valid if δ=1, which holds in our cavity-removal case, as can be seen in Fig. <ref> in the bad-cavity regime and Eq. (<ref>) for large N.§ DETAILS OF THE NUMERICAL SIMULATIONS Given the stochasticity of the evolution due to the explicit dependence on the measurement outcome, each solution of Eq. (<ref>) represents a different unraveling, a particular trajectory of the conditional dynamics, namely a model for a specific realization of an experiment.Therefore a particular configuration of physical parameters g, κ, ε, N can be generically characterized only based on the behavior of the system averaged over many trajectories. Each trajectory is found by integrating the conditional master equations, using thelibrary<cit.><cit.>. Since the initial state is pure, we use thedynamic solver for the stochastic Schrödinger equation, which needs less computational resources than the equivalent solverfor master equations.This solver implements the implicit Milstein method, which we found to be the most accurate at long times among those available, at a relatively moderate cost.The QuTiP library offers the possibility to automatically evolve different trajectories in parallel, profiting of multi-core CPUs, and finally yielding the average of the observables. 
However, since the metrological spin-squeezing parameter ξ^2 is not associated to a single quantum operator, but it is the ratio of expectation values of different operators, it cannot be evaluated directly by the QuTiP library during the evolution and must be evaluated in post-processing from the expectation values of the relevant observables. Were we to compute the spin-squeezing parameter from the average observables, we would obtain a result corresponding to the unconditional evolution where no continuous measurement is executed and no squeezing is generated. Therefore, ξ^2 must be evaluated specifically for each trajectory, and its average and standard deviation are then statistically estimated.The number of trajectories M determines the precision of the results, and we estimate that M=1001000 is sufficient for our purposes.In order to have both a statistically relevant sample of trajectories but also to speed up the computation we rely on parallel numerical methods, we employ thetool provided by QuTiP to solve simultaneously different trajectories, each parameterized by independent seeds that initialize the stochastic increments of the SSE. This implementation is embarrassingly parallel and the speed-up thus grows linearly with the number of available cores.The computational complexity of the simulations is proportional to the total number of integration steps. The minimal number of time-steps required to achieve a suitable level of accuracy for the integration of the SSE is determined in order to guarantee the resolution of any process, without accidentally time-averaging any higher-frequency effect. We therefore compare the most relevant frequencies in the master equations, including: the effective atomic shift n_0 δω, the effective cavity shift N δω, the decay rate κ and the driving strength ε. The time step is then chosen as dt = 2π/R ω_max, where the number R=1000 has been estimated to be sufficient to yield acceptable accuracy, that is compatibility with the true value, extrapolated for dt→ 0, at the precision obtained given the chosen number of trajectories M=1000. For the minimal average spin-squeezing parameter, we noticed a residual time-step bias that we estimated as δξ^2_m≃ 1.5· 10^-2 and added to the uncertainty bars.The other major contribution to computational complexity is given by the size of the quantum system: the SME resolution would require the complete density matrix, therefore the memory usage would grow as (d_c d_a)^2, where d_c is the dimension of the photonic Hilbert space and d_a is the dimension of the atomic Hilbert space. It is immediately clear that solving the SSE is beneficial because the memory requirement only grows as d_c d_a. As customary, the photonic Fock space is cut off at a maximum number of photons that we expect to be relevant in the considered dynamics. Given the predictions of Eq. (<ref>), we can estimate the expected number of photons at the steady state from the initial parameter, thus also giving an estimate of the required dimension to avoid a too low cut-off. Since for coherent photonic states in the uncoupled steady state we would have Δ^2 n̂ = n_0, to be more conservative for generic coupled dynamics, we set d_c = (3 n_0+6). The atomic Hilbert space dimension in the initial qutrit representation would scale as d_a=3^N. This allows to perform simulations with up to N≃ 10 atoms (not shown in this work) with standard resources. 
The adiabatic removal of the excited state allows for reducing the dimension to d_a=2^N, allowing for the simulation of up to N≃ 16. However, not considering atomic scattering gives us the possibility to restrict the dynamics to the atomic Dicke sector with maximum eigenvalue J(J+1) of Ĵ^2, with J=N/2, whose space dimension is d_a=N+1. This allows us to simulate up to N=200 atoms when considering the full SSE corresponding to Eq. (<ref>), and N=20000 after performing the adiabatic removal of the cavity.79 fxundefined [1]ifx#1fnum [1]#1firstoftwosecondoftwo fx [1]#1firstoftwosecondoftwonoop [0]secondoftworef[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0]rl [1]href #1 @bib@innerbibempty[Itano et al.(1993)Itano, Bergquist, Bollinger, Gilligan, Heinzen, Moore, Raizen, and Wineland]Itano_1993 author author W. M. Itano, author J. C. Bergquist, author J. J. Bollinger, author J. M. Gilligan, author D. J. Heinzen, author F. L. Moore, author M. G. Raizen, andauthor D. J. Wineland,title title Quantum projection noise population and fluctuations in two-level systems, https://doi.org/10.1103/physreva.47.3554 journal journal Phys. Rev. A volume 47, pages 3554 (year 1993)NoStop [Pezzè et al.(2018)Pezzè, Smerzi, Oberthaler, Schmied, and Treutlein]PEZZE_2018 author author L. Pezzè, author A. Smerzi, author M. K. Oberthaler, author R. Schmied, and author P. Treutlein, title title Quantum metrology with nonclassical states of atomic ensembles, https://doi.org/10.1103/RevModPhys.90.035005 journal journal Rev. Mod. Phys. volume 90, pages 035005 (year 2018)NoStop [Kitagawa and Ueda(1993)]KITAGAWA_1993 author author M. Kitagawa and author M. Ueda, title title Squeezed spin states,https://doi.org/10.1103/PhysRevA.47.5138 journal journal Phys. Rev. A volume 47,pages 5138 (year 1993)NoStop [Ma et al.(2011)Ma, Wang, Sun, and Nori]SpinSqueezingReview author author J. Ma, author X. Wang, author C. Sun, and author F. Nori, title title Quantum spin squeezing, https://doi.org/https://doi.org/10.1016/j.physrep.2011.08.003 journal journal Phys. Rep. volume 509, pages 89 (year 2011)NoStop [Riedel et al.(2010)Riedel, Böhi, Li, Hänsch, Sinatra, and Treutlein]Riedel2010 author author M. F. Riedel, author P. Böhi, author Y. Li, author T. W. Hänsch, author A. Sinatra, and author P. Treutlein, title title Atom-chip-based generation of entanglement for quantum metrology,https://doi.org/10.1038/nature08988 journal journal Nature volume 464, pages 1170 (year 2010)NoStop [Hamley et al.(2012)Hamley, Gerving, Hoang, Bookjans,and Chapman]Hamley2012 author author C. D. Hamley, author C. S. Gerving, author T. M. Hoang, author E. M. Bookjans, and author M. S. Chapman, title title Spin-nematic squeezed vacuum in a quantum gas,https://doi.org/10.1038/nphys2245 journal journal Nat. Phys. volume 8, pages 305 (year 2012)NoStop [Gross(2012)]Gross2012 author author C. Gross, title title Spin squeezing, entanglement and quantum metrology with bose-einstein condensates, https://doi.org/10.1088/0953-4075/45/10/103001 journal journal J. Phys. B: Atom. Mol. Phys. volume 45, pages 103001 (year 2012)NoStop [Bohnet et al.(2016)Bohnet, Sawyer, Britton, Wall, Rey, Foss-Feig, and Bollinger]Bohnet2016 author author J. G. Bohnet, author B. C. Sawyer, author J. W. Britton, author M. L. Wall, author A. M. Rey, author M. Foss-Feig, and author J. J. 
Bollinger, title title Quantum spin dynamics and entanglement generation with hundreds of trapped ions, https://doi.org/10.1126/science.aad9958 journal journal Science volume 352, pages 1297 (year 2016)NoStop [Kuzmich et al.(2000)Kuzmich, Mandel, and Bigelow]Kuzmich_GenerationSpinSqueezing_2000 author author A. Kuzmich, author L. Mandel,and author N. P. Bigelow,title title Generation of Spin Squeezing via Continuous Quantum Nondemolition Measurement, https://doi.org/10.1103/PhysRevLett.85.1594 journal journal Phys. Rev. Lett. volume 85, pages 1594 (year 2000)NoStop [Appel et al.(2009)Appel, Windpassinger, Oblak, Hoff, Kjaergaard, and Polzik]Appel_2009 author author J. Appel, author P. J. Windpassinger, author D. Oblak, author U. B. Hoff, author N. Kjaergaard, andauthor E. S. Polzik, title title Mesoscopic atomic entanglement for precision measurements beyond the standard quantum limit, https://doi.org/10.1073/pnas.0901550106 journal journal Proc. Natl. Acad. Sci. volume 106,pages 10960 (year 2009)NoStop [Schleier-Smith et al.(2010a)Schleier-Smith, Leroux, and Vuletić]SchleierSmith2010 author author M. H. Schleier-Smith, author I. D. Leroux, and author V. Vuletić, title title States of an ensemble of two-level atoms with reduced quantum uncertainty, https://doi.org/10.1103/physrevlett.104.073604 journal journal Phys. Rev. Lett. volume 104,pages 073604 (year 2010a)NoStop [Bohnet et al.(2014)Bohnet, Cox, Norcia, Weiner, Chen, and Thompson]Bohnet_2014 author author J. G. Bohnet, author K. C. Cox, author M. A. Norcia, author J. M. Weiner, author Z. Chen, and author J. K. Thompson, title title Reduced spin measurement back-action for a phase sensitivity ten times beyond the standard quantum limit, https://doi.org/10.1038/nphoton.2014.151 journal journal Nat. Photonics volume 8, pages 731 (year 2014)NoStop [Cox et al.(2016)Cox, Greve, Weiner, and Thompson]Cox2016 author author K. C. Cox, author G. P. Greve, author J. M. Weiner, andauthor J. K. Thompson,title title Deterministic squeezed states with collective measurements and feedback, https://doi.org/10.1103/physrevlett.116.093602 journal journal Phys. Rev. Lett. volume 116,pages 093602 (year 2016)NoStop [Hosten et al.(2016)Hosten, Engelsen, Krishnakumar, and Kasevich]Hosten2016a author author O. Hosten, author N. J. Engelsen, author R. Krishnakumar, and author M. A. Kasevich, title title Measurement noise 100 times lower than the quantum-projection limit using entangled atoms, https://doi.org/10.1038/nature16176 journal journal Nature volume 529, pages 505 (year 2016)NoStop [Huang et al.(2023)Huang, de la Paz, Mazzoni, Ott, Rosenbusch, Sinatra, Garrido Alzar, and Reichel]Huang2023 author author M.-Z. Huang, author J. A. de la Paz, author T. Mazzoni, author K. Ott, author P. Rosenbusch, author A. Sinatra, author C. L. Garrido Alzar, and author J. Reichel, title title Observing spin-squeezed states under spin-exchange collisions for a second, https://doi.org/10.1103/PRXQuantum.4.020322 journal journal PRX Quantum volume 4, pages 020322 (year 2023)NoStop [Serafin et al.(2021)Serafin, Fadel, Treutlein, andSinatra]Serafin_NuclearSpinSqueezing_2021 author author A. Serafin, author M. Fadel, author P. Treutlein, andauthor A. Sinatra, title title Nuclear Spin Squeezing in Helium-3 by Continuous Quantum Nondemolition Measurement, https://doi.org/10.1103/PhysRevLett.127.013601 journal journal Phys. Rev. Lett. volume 127,pages 013601 (year 2021)NoStop [Schleier-Smith et al.(2010b)Schleier-Smith, Leroux, and Vuletić]SSMITH_2010 author author M. H. 
Schleier-Smith, author I. D. Leroux, and author V. Vuletić, title title Squeezing the collective spin of a dilute atomic ensemble by cavity feedback, https://doi.org/10.1103/PhysRevA.81.021804 journal journal Phys. Rev. A volume 81, pages 021804(R) (year 2010b)NoStop [Braverman et al.(2019)Braverman, Kawasaki, Pedrozo-Peñafiel, Colombo, Shu, Li, Mendez, Yamoah, Salvi, Akamatsu, Xiao, andVuletić]Braverman_2019 author author B. Braverman, author A. Kawasaki, author E. Pedrozo-Peñafiel, author S. Colombo, author C. Shu, author Z. Li, author E. Mendez, author M. Yamoah, author L. Salvi, author D. Akamatsu, author Y. Xiao, and author V. Vuletić, title title Near-unitary spin squeezing in Yb171, https://doi.org/10.1103/physrevlett.122.223203 journal journal Phys. Rev. Lett. volume 122,pages 223203 (year 2019)NoStop [Li et al.(2022)Li, Braverman, Colombo, Shu, Kawasaki, Adiyatullin, Pedrozo-Peñafiel, Mendez, and Vuletić]Li_CollectiveSpinLightLightMediated_2022 author author Z. Li, author B. Braverman, author S. Colombo, author C. Shu, author A. Kawasaki, author A. F. Adiyatullin, author E. Pedrozo-Peñafiel, author E. Mendez, and author V. Vuletić, title title Collective Spin-Light and Light-Mediated Spin-Spin Interactions in an Optical Cavity, https://doi.org/10.1103/PRXQuantum.3.020308 journal journal PRX Quantum volume 3, pages 020308 (year 2022)NoStop [Eckner et al.(2023)Eckner, Darkwah Oppong, Cao, Young, Milner, Robinson, Ye, andKaufman]Eckner_Realizingspinsqueezing_2023 author author W. J. Eckner, author N. Darkwah Oppong, author A. Cao, author A. W. Young, author W. R. Milner, author J. M. Robinson, author J. Ye, and author A. M. Kaufman, title title Realizing Spin Squeezing with Rydberg Interactions in an Optical Clock, https://doi.org/10.1038/s41586-023-06360-6 journal journal Nature volume 621, pages 734 (year 2023)NoStop [Bornet et al.(2023)Bornet, Emperauger, Chen, Ye, Block, Bintz, Boyd, Barredo, Comparin, Mezzacapo, Roscilde, Lahaye, Yao, andBrowaeys]Bornet_Scalablespinsqueezing_2023 author author G. Bornet, author G. Emperauger, author C. Chen, author B. Ye, author M. Block, author M. Bintz, author J. A.Boyd, author D. Barredo, author T. Comparin, author F. Mezzacapo, author T. Roscilde, author T. Lahaye, author N. Y. Yao, and author A. Browaeys, title title Scalable spin squeezing in a dipolar Rydberg atom array, https://doi.org/10.1038/s41586-023-06414-9 journal journal Nature volume 621, pages 728 (year 2023)NoStop [Beloy et al.(2021)Beloy, Bodine, Bothwell, Brewer, Bromley, Chen, Deschênes, Diddams, Fasano, Fortier, Hassan, Hume, Kedar, Kennedy, Khader, Koepke, Leibrandt, Leopardi, Ludlow, McGrew, Milner, Newbury, Nicolodi, Oelker, Parker, Robinson, Romisch, Schäffer, Sherman, Sinclair, Sonderhouse, Swann, Yao, Ye, Zhang, and Collaboration*]Beloy2021 author author K. Beloy, author M. I. Bodine, author T. Bothwell, author S. M. Brewer, author S. L. Bromley, author J.-S. Chen, author J.-D. Deschênes, author S. A. Diddams, author R. J. Fasano, author T. M. Fortier, author Y. S. Hassan, author D. B. Hume, author D. Kedar, author C. J. Kennedy, author I. Khader, author A. Koepke, author D. R.Leibrandt, author H. Leopardi, author A. D. Ludlow, author W. F. McGrew, author W. R. Milner, author N. R. Newbury, author D. Nicolodi, author E. Oelker, author T. E. Parker, author J. M.Robinson, author S. Romisch, author S. A. Schäffer, author J. A. Sherman, author L. C. Sinclair, author L. Sonderhouse, author W. C. Swann, author J. Yao, author J. Ye, author X. Zhang, and author B. A. C. O. N. 
B. Collaboration*,title title Frequency ratio measurements at 18-digit accuracy using an optical clock network, https://doi.org/10.1038/s41586-021-03253-4 journal journal Nature volume 591, pages 564 (year 2021)NoStop [Bothwell et al.(2022)Bothwell, Kennedy, Aeppli, Kedar, Robinson, Oelker, Staron, and Ye]Bothwell2022 author author T. Bothwell, author C. J. Kennedy, author A. Aeppli, author D. Kedar, author J. M. Robinson, author E. Oelker, author A. Staron, and author J. Ye, title title Resolving the gravitational redshift across a millimetre-scale atomic sample, https://doi.org/10.1038/s41586-021-04349-7 journal journal Nature volume 602, pages 420 (year 2022)NoStop [Pedrozo-Peñafiel et al.(2020)Pedrozo-Peñafiel, Colombo, Shu, Adiyatullin, Li, Mendez, Braverman, Kawasaki, Akamatsu, Xiao, andVuletić]PedrozoPenafiel2020 author author E. Pedrozo-Peñafiel, author S. Colombo, author C. Shu, author A. F. Adiyatullin, author Z. Li, author E. Mendez, author B. Braverman, author A. Kawasaki, author D. Akamatsu, author Y. Xiao, and author V. Vuletić, title title Entanglement on an optical atomic-clock transition, https://doi.org/10.1038/s41586-020-3006-1 journal journal Nature volume 588, pages 414 (year 2020)NoStop [Robinson et al.(2022)Robinson, Miklos, Tso, Kennedy, Bothwell, Kedar, Thompson, and Ye]Robinson_Directcomparisontwo_2022 author author J. M. Robinson, author M. Miklos, author Y. M. Tso, author C. J. Kennedy, author T. Bothwell, author D. Kedar, author J. K. Thompson, and author J. Ye, https://doi.org/10.48550/arXiv.2211.08621 title Direct comparison of two spin squeezed optical clocks below the quantum projection noise limit (year 2022), https://arxiv.org/abs/2211.08621 arXiv:2211.08621 [physics, physics:quant-ph] NoStop [Bowden et al.(2020)Bowden, Vianello, Hill, Schioppo,and Hobson]Bowden2020 author author W. Bowden, author A. Vianello, author I. R. Hill, author M. Schioppo, and author R. Hobson, title title Improving the q factor of an optical atomic clock using quantum nondemolition measurement, https://doi.org/10.1103/PhysRevX.10.041052 journal journal Phys. Rev. X volume 10, pages 041052 (year 2020)NoStop [Wiseman and Milburn(2010)]WisemanMilburn author author H. M. Wiseman and author G. J. Milburn, @nooptitle Quantum Measurement and Control (publisher Cambridge University Press,address New York, year 2010)NoStop [Jacobs(2014)]JacobsBook author author K. Jacobs, @nooptitle Quantum Measurement Theory and its Applications (publisher Cambridge University Press, address Boston, year 2014)NoStop [Rossi et al.(2018)Rossi, Mason, Chen, Tsaturyan, andSchliesser]Rossi2018 author author M. Rossi, author D. Mason, author J. Chen, author Y. Tsaturyan, and author A. Schliesser, title title Measurement-based quantum control of mechanical motion,https://doi.org/10.1038/s41586-018-0643-8 journal journal Nature volume 563, pages 53 (year 2018)NoStop [Magrini et al.(2021)Magrini, Rosenzweig, Bach, Deutschmann-Olek, Hofer, Hong, Kiesel, Kugi, and Aspelmeyer]Magrini2021 author author L. Magrini, author P. Rosenzweig, author C. Bach, author A. Deutschmann-Olek, author S. G. Hofer, author S. Hong, author N. Kiesel, author A. Kugi, and author M. Aspelmeyer, title title Real-time optimal quantum control of mechanical motion at room temperature, https://doi.org/10.1038/s41586-021-03602-3 journal journal Nature volume 595, pages 373 (year 2021)NoStop [Tebbenjohanns et al.(2021)Tebbenjohanns, Mattana, Rossi, Frimmer, and Novotny]Tebbenjohanns2021 author author F. Tebbenjohanns, author M. L. 
Mattana, author M. Rossi, author M. Frimmer, and author L. Novotny, title title Quantum control of a nanoparticle optically levitated in cryogenic free space, https://doi.org/10.1038/s41586-021-03617-w journal journal Nature volume 595, pages 378 (year 2021)NoStop [Wiseman and Milburn(1993)]Wiseman1993 author author H. M. Wiseman and author G. J. Milburn, title title Quantum theory of optical feedback via homodyne detection, https://doi.org/10.1103/PhysRevLett.70.548 journal journal Phys. Rev. Lett. volume 70, pages 548 (year 1993)NoStop [Wiseman and Milburn(1994)]Wiseman1994 author author H. M. Wiseman and author G. J. Milburn, title title Squeezing via feedback,https://doi.org/10.1103/PhysRevA.49.1350 journal journal Phys. Rev. A volume 49,pages 1350 (year 1994)NoStop [Thomsen et al.(2002a)Thomsen, Mancini, and Wiseman]Thomsen2002 author author L. K. Thomsen, author S. Mancini,and author H. M. Wiseman,title title Spin squeezing via quantum feedback, https://doi.org/10.1103/PhysRevA.65.061801 journal journal Phys. Rev. A volume 65, pages 061801(R) (year 2002a)NoStop [Thomsen et al.(2002b)Thomsen, Mancini, and Wiseman]Thomsen2002a author author L. K. Thomsen, author S. Mancini,and author H. M. Wiseman,title title Continuous quantum nondemolition feedback and unconditional atomic spin squeezing, https://doi.org/10.1088/0953-4075/35/23/316 journal journal J. Phys. B volume 35, pages 4937 (year 2002b)NoStop [Geremia et al.(2003)Geremia, Stockton, Doherty, andMabuchi]GEREMIA2003 author author J. M. Geremia, author J. K. Stockton, author A. C. Doherty, and author H. Mabuchi, title title Quantum kalman filtering and the heisenberg limit in atomic magnetometry, https://doi.org/10.1103/PhysRevLett.91.250801 journal journal Phys. Rev. Lett. volume 91,pages 250801 (year 2003)NoStop [Mølmer and Madsen(2004)]MolmerMadsen2004 author author K. Mølmer and author L. B. Madsen, title title Estimation of a classical parameter with gaussian probes: Magnetometry with collective atomic spins,https://doi.org/10.1103/PhysRevA.70.052102 journal journal Phys. Rev. A volume 70,pages 052102 (year 2004)NoStop [Madsen and Mølmer(2004)]MADSEN_2004 author author L. B. Madsen and author K. Mølmer, title title Spin squeezing and precision probing with light and samples of atoms in the gaussian description, https://doi.org/10.1103/PhysRevA.70.052324 journal journal Phys. Rev. A volume 70, pages 052324 (year 2004)NoStop [Nielsen and Mølmer(2008)]Nielsen2008 author author A. E. B.Nielsen and author K. Mølmer, title title Atomic spin squeezing in an optical cavity, https://doi.org/10.1103/PhysRevA.77.063811 journal journal Phys. Rev. A volume 77, pages 063811 (year 2008)NoStop [Serafini and Mancini(2010)]SerafozziMancini author author A. Serafini and author S. Mancini, title title Determination of Maximal Gaussian Entanglement Achievable by Feedback-Controlled Dynamics, https://doi.org/10.1103/PhysRevLett.104.220501 journal journal Phys. Rev. Lett. volume 104,pages 220501 (year 2010)NoStop [Szorkovszky et al.(2011)Szorkovszky, Doherty, Harris, andBowen]Szorkovszky2011 author author A. Szorkovszky, author A. C. Doherty, author G. I. Harris, and author W. P. Bowen, title title Mechanical squeezing via parametric amplification and weak measurement, https://doi.org/10.1103/PhysRevLett.107.213603 journal journal Phys. Rev. Lett. volume 107,pages 213603 (year 2011)NoStop [Genoni et al.(2013)Genoni, Mancini, and Serafini]Genoni2013PRA author author M. G. Genoni, author S. Mancini,and author A. 
| http://arxiv.org/abs/2311.15725v2 | {
"authors": [
"A. Caprotti",
"M. Barbiero",
"M. G. Tarallo",
"M. G. Genoni",
"G. Bertaina"
],
"categories": [
"quant-ph",
"physics.atom-ph"
],
"primary_category": "quant-ph",
"published": "20231127111915",
"title": "Analysis of spin-squeezing generation in cavity-coupled atomic ensembles with continuous measurements"
} |
Spreading of information in physical systems is a common phenomenon, but when the information is quantum, tracking, describing, and quantifying it becomes a challenging task. Quantum Information (QI) scrambling refers to quantum information propagating chaotically over a physical system. This article describes the effect of QI scrambling on bound entangled states. A bound entangled state is a particular type of entangled state that carries noisy entanglement, and the distillation of this type of entangled state is very difficult. In recent times, the usefulness of these states has been demonstrated in different applications. The outcome of this study shows that QI scrambling develops entanglement in the separable portion of the bound entangled states. Although QI scrambling reduces free entanglement, it is also found that QI scrambling plays a significant role in activating the bound entangled states by introducing a certain amount of approximately stable free entanglement. Effect of Quantum Information Scrambling on Bound Entangled States Suprabhat Sinha^∗[^∗[email protected]] School of Computer Science, Engineering and Applications, D Y Patil International University, Akurdi, Pune-411044, India ================================================================================================================================================================================ § INTRODUCTION Quantum Information (QI) scrambling is the quantum manifestation of the chaotic spreading of classical information in a dynamical system. When a system interacts with another system, local information that was initially stored in the first system diffuses chaotically over the total system. It is very challenging to recover the entire information perfectly, and if the information is quantum in nature, it is impossible to recover the whole of it using local measurements. The amount of quantum information that cannot be recovered by local measurements is what defines QI scrambling. By explaining the black hole information paradox and showing that black holes process quantum information rapidly and exhibit the fastest scrambling, Hayden et al. <cit.> drew the attention of the scientific community towards QI scrambling. Since then, numerous studies have applied QI scrambling in different domains such as condensed matter physics, high energy physics, information theory and quantum thermodynamics <cit.>. To quantify this chaotically scrambled information in interacting physical systems, several different approaches have been proposed, such as the Loschmidt echo, entropy production and the Out-of-Time-Ordered Correlator (OTOC) <cit.>. Among these quantifiers, the OTOC has attracted the most attention in recent years. On the other hand, quantum entanglement, a fundamental property of quantum particles, is one of the founding pillars of quantum computation and quantum information theory. From the beginning, entanglement has proved itself to be an important resource in this field. As the field moved forward, the scientific community carried out a large number of studies on entangled quantum states from different directions. Some of these studies conclude that entangled quantum states can be split into two types.
One type of entangled quantum state is distillable, and pure entanglement can be extracted from it very easily; such states are termed free entangled states. The other type is very hard to distill and to extract pure entanglement from; such states are defined as bound entangled states <cit.>. Many different bound entangled states have already been proposed by different researchers, but due to the requirement of maximal pure entanglement, free entangled states are preferred for faithful execution of most applications in this field, and bound entangled states are usually set aside. Some recent studies have found that bound entangled states can be used in quantum information theory with the help of some free entanglement <cit.>. Since then, a variety of research works have been conducted on the dynamical analysis, distillation, and activation of bound entangled states <cit.>. These works involve a variety of methods for detecting and measuring entanglement. For free entangled states, several measures are available to quantify the free entanglement, such as concurrence, negativity and the three-π measure <cit.>. On the contrary, the characterization and detection of bound entanglement is still an open problem, although some criteria have already been developed to detect it, such as the separability criterion, the realignment criterion and the computable cross-norm or realignment (CCNR) criterion <cit.>. In the current article, the effect of QI scrambling on bound entangled states is discussed. Although QI scrambling has already been studied in different qubit and qutrit systems, to the best of my knowledge a study of QI scrambling in different bound entangled states is missing in the literature. This study is conducted on four 3×3 dimensional bipartite bound entangled quantum states provided by Bennett et al., Jurkowski et al., and Horodecki et al. <cit.>. Throughout the study, the OTOC is applied to find the effect of QI scrambling on the bound entangled quantum states, negativity is employed for quantifying the free entanglement of the states, and the CCNR criterion is selected to detect the bound entanglement of the states. This article is organized as follows. In section 2, the OTOC and its role in QI scrambling, negativity, and the CCNR criterion are discussed. Section 3 deals with brief details of the four chosen bound entangled states. In the different subsections of section 4, the effect of QI scrambling on the different bound entangled quantum states is studied. The last section contains the conclusion of the study. § OTOC, NEGATIVITY AND CCNR CRITERION In the current section, the role of the OTOC in QI scrambling, negativity and the CCNR criterion are discussed. The OTOC is a commonly used quantifier for QI scrambling at present. It quantifies QI scrambling by measuring the degree of irreversibility of the system through the mismatch between forward and backward evolution. The OTOC was first introduced by Larkin et al. <cit.> as a quasiclassical method in the theory of superconductivity, and Hashimoto et al. <cit.> introduced it in the field of quantum mechanics. The mathematical form of the OTOC can be written as f(t)=⟨[O_2(t),O_1]^†·[O_2(t),O_1] ⟩. Here, O_1 and O_2(t) are local operators which are Hermitian as well as unitary.
At the initial time (t=0), O_1 and O_2(0) commute with each other (i.e. [O_2(0),O_1]=0). As time moves forward, the operator O_1 remains unchanged but the operator O_2(t) evolves with time. Due to this evolution, the commutation relation between the two operators is generally broken because of QI scrambling. According to the Heisenberg picture of quantum mechanics, the operator O_2(t) can be written as O_2(t)=U(t)^†O_2(0)U(t), where U(t)=e^-iHt/ħ is the unitary time evolution operator under the Hamiltonian H. To calculate the QI scrambling of a quantum system with density matrix ρ, the OTOC can be written as, S(t)=Tr[([O_2(t),O_1]^†·[O_2(t),O_1])·ρ]. The above equation can be simplified as, S(t)=2[1- Re(M)]. Where M=Tr[ρ(t)] and ρ(t)=O_2(t) · O_1· O_2(t) · O_1·ρ. In the current study, O_1 is taken to be the swap operator that exchanges the qutrit levels | 0 ⟩ and | 2 ⟩, and O_1=O_2(0). It is also considered that O_2(0) evolves with time under the Dzyaloshinskii-Moriya (DM) Hamiltonian in the Z-direction, which arises due to the DM interaction <cit.> between the qutrits of the considered state. The mathematical expression of the DM Hamiltonian is written as, H_z=D · (σ_A^x ⊗σ_B^y - σ_A^y ⊗σ_B^x). Where D is the interaction strength along the Z-direction with the range 0 ≤ D ≤ 1, and σ_A^x, σ_A^y and σ_B^x, σ_B^y are the spin matrices of qutrit A and qutrit B respectively. To simplify the calculations and the discussion, ħ is set to 1 (i.e. ħ = 1) throughout the present study. At D=0 there is no interaction between the qutrits of the considered state, so the operators O_1 and O_2(t) commute and no QI scrambling takes place in the system. The matrix forms of O_1, σ^x and σ^y can be expressed as, O_1=[ [ 0 0 1; 0 1 0; 1 0 0; ]], σ^x = [ [ 0 1/√(2) 0; 1/√(2) 0 1/√(2); 0 1/√(2) 0; ]], σ^y =[ [ 0 -i/√(2) 0; i/√(2) 0 -i/√(2); 0 i/√(2) 0; ]]. In the current work, negativity and the CCNR criterion are used to detect and quantify the entanglement of the considered bound entangled states: negativity is used to quantify the free entanglement of a state, while the CCNR criterion is used to detect its bound entanglement. CCNR is a very simple and strong criterion for the separability of a density matrix; it can detect a wide range of bound entangled states and performs with good efficacy. The negativity (N) and the CCNR criterion are defined as, N=(‖ρ_AB^T‖-1)/2 and CCNR=‖(ρ_AB-ρ_A⊗ρ_B)^R‖-√((1-Tr ρ_A^2) (1-Tr ρ_B^2)). Where ‖...‖, (...)^T and (...)^R represent the trace norm, the partial transpose and the realignment matrix respectively. Further, ρ_AB is the density matrix of the bound entangled state and ρ_A, ρ_B are the reduced density matrices of qutrit A and qutrit B respectively, expressed as ρ_A=Tr_B(ρ_AB) and ρ_B=Tr_A(ρ_AB). For a system, N>0 or CCNR>0 implies that the state is entangled: N=0 together with CCNR>0 implies that the state is bound entangled, while N>0 corresponds to a free entangled state.
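As a concrete cross-check of these definitions, both quantities can be evaluated numerically for any two-qutrit density matrix. The following is a minimal Python/NumPy sketch, not part of the original analysis: the partial transpose is taken on qutrit B and the realignment map follows the standard convention, which is assumed here to match the definitions above (the trace norm is convention-independent in both cases). The OTOC evolution itself is sketched at the beginning of section 4.

import numpy as np

d = 3  # qutrit dimension

def _as4(rho):
    # view a 9x9 two-qutrit matrix as rho[iA, iB, jA, jB]
    return rho.reshape(d, d, d, d)

def trace_norm(M):
    # trace norm = sum of singular values
    return np.linalg.svd(M, compute_uv=False).sum()

def reduced(rho, keep):
    r = _as4(rho)
    if keep == 'A':
        return np.einsum('ikjk->ij', r)   # trace out qutrit B
    return np.einsum('kikj->ij', r)       # trace out qutrit A

def negativity(rho):
    # N = (||rho^{T_B}|| - 1)/2, partial transpose taken on qutrit B
    rho_tb = _as4(rho).transpose(0, 3, 2, 1).reshape(d * d, d * d)
    return 0.5 * (trace_norm(rho_tb) - 1.0)

def ccnr(rho):
    # CCNR = ||(rho - rhoA (x) rhoB)^R|| - sqrt((1 - Tr rhoA^2)(1 - Tr rhoB^2))
    rhoA, rhoB = reduced(rho, 'A'), reduced(rho, 'B')
    delta = rho - np.kron(rhoA, rhoB)
    # realignment: R_{(iA jA),(iB jB)} = delta_{(iA iB),(jA jB)}
    R = _as4(delta).transpose(0, 2, 1, 3).reshape(d * d, d * d)
    purity_term = np.sqrt((1 - np.trace(rhoA @ rhoA).real) *
                          (1 - np.trace(rhoB @ rhoB).real))
    return trace_norm(R) - purity_term

With these helpers, N=0 together with CCNR>0 flags a bound entangled input, in line with the classification stated above.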
§ BOUND ENTANGLED STATES In this section, the bound entangled states which are studied in the current article are discussed. Bound entangled states are a distinct type of entangled state which carries noisy entanglement, and it is very hard to distill this type of entangled state. The usefulness of bound entangled states has been demonstrated in different applications, and many authors have already proposed different bound entangled states. Among them, four states have been chosen for the current study. The first considered bound entangled state was suggested by Bennett et al. <cit.>. The state is a 3×3 dimensional bipartite bound entangled state dealing with two qutrits A and B. The density matrix of the considered bound entangled state can be written in the form, ρ_BS=1/4[ (I ⊗ I)-∑_i=0^4|ψ_i⟩⟨ψ_i |]. Where I is the 3×3 dimensional identity matrix, |ψ_0⟩=1/√(2)| 0⟩ (| 0⟩-|1⟩), |ψ_1⟩=1/√(2) (|0⟩-|1⟩)|2⟩, |ψ_2⟩=1/√(2)|2⟩(|1⟩-|2⟩), |ψ_3⟩=1/√(2)(|1⟩-|2⟩)|0⟩, and |ψ_4⟩=1/3 (|0⟩+|1⟩+|2⟩)(|0⟩+|1⟩+|2⟩). The second bound entangled state was proposed by Jurkowski et al. <cit.>. This is a parameterized, 3×3 dimensional bipartite bound entangled state constructed with qutrits A and B. The chosen bound entangled state depends on the three parameters ϵ_1, ϵ_2 and ϵ_3 and comes with different parameter conditions. When the parameters ϵ_1=ϵ_2=ϵ_3=1, the state behaves like a separable state. The density matrix of the state can be written as, ρ_JS=1/N[ [ 1 0 0 0 1 0 0 0 1; 0 ϵ_1 0 0 0 0 0 0 0; 0 0 1/ϵ_3 0 0 0 0 0 0; 0 0 0 1/ϵ_1 0 0 0 0 0; 1 0 0 0 1 0 0 0 1; 0 0 0 0 0 ϵ_2 0 0 0; 0 0 0 0 0 0 ϵ_3 0 0; 0 0 0 0 0 0 0 1/ϵ_2 0; 1 0 0 0 1 0 0 0 1; ]] Where, N=(ϵ_1+1/ϵ_3+1/ϵ_1+ϵ_2+ϵ_3+1/ϵ_2+3). Fig. <ref> shows the entanglement behavior of the Jurkowski et al. bound entangled state with respect to the different conditions of the parameters ϵ_1, ϵ_2 and ϵ_3. The third and fourth bound entangled states were investigated by Horodecki et al. <cit.>. Both of these states are also 3×3 dimensional bipartite bound entangled states formed by qutrits A and B with a parameter α. The density matrix of one of the states investigated by Horodecki et al. [State 1] is written as, ρ_HS_1= [ [ α/(8α+1) 0 0 0 α/(8α+1) 0 0 0 α/(8α+1); 0 α/(8α+1) 0 0 0 0 0 0 0; 0 0 α/(8α+1) 0 0 0 0 0 0; 0 0 0 α/(8α+1) 0 0 0 0 0; α/(8α+1) 0 0 0 α/(8α+1) 0 0 0 α/(8α+1); 0 0 0 0 0 α/(8α+1) 0 0 0; 0 0 0 0 0 0 (α+1)/(2(8α+1)) 0 √(1-α^2)/(2(8α+1)); 0 0 0 0 0 0 0 α/(8α+1) 0; α/(8α+1) 0 0 0 α/(8α+1) 0 √(1-α^2)/(2(8α+1)) 0 (α+1)/(2(8α+1)); ]] The range of the parameter α is 0 ≤α≤ 1 for the above-mentioned state (State 1). The density matrix of the other state [State 2] can be written in the form, ρ_HS_2= 2/7Δ+α/7δ^++(5-α)/7δ^-. Where, Δ = |ψ⟩⟨ψ| with |ψ⟩ =1/√(3)(| 00 ⟩ +| 11 ⟩+| 22 ⟩), δ^+ = 1/3(| 01 ⟩⟨ 01 |+| 12 ⟩⟨ 12 |+| 20 ⟩⟨ 20 |), and δ^- = 1/3(| 10 ⟩⟨ 10 |+| 21 ⟩⟨ 21 |+| 02 ⟩⟨ 02 |). The state (State 2) is defined over the parameter range 2 ≤α≤ 5 and satisfies the following conditions: ρ_HS_2 is a separable state for 2 ≤α≤ 3, a bound entangled state for 3 < α≤ 4, and a free entangled state for 4 < α≤ 5. Fig. <ref> depicts the entanglement behavior of both Horodecki et al. bound entangled states with respect to the parameter α. § EFFECT OF QI SCRAMBLING ON BOUND ENTANGLED STATES In this section, the effect of QI scrambling on the chosen bound entangled states is discussed. During this study, the chosen bound entangled states are passed through the forward-backward evolution under the considered swap operators O_1 and O_2(t). After passing through the evolution process, the density matrix of the evolved bound entangled state (ρ(t)) can be calculated using equation <ref>. If the trace value of this evolved bound entangled state (i.e. the value of M in equation <ref>) is 1, then it can be claimed that no QI scrambling takes place in the system. Since this study is focused on the effect of QI scrambling on bound entangled states, in this article the density matrix ρ(t) of the evolved bound entangled state is studied. Negativity and the CCNR criterion are used in the present discussion for quantification and detection of entanglement, as mentioned before.
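A minimal Python/NumPy sketch of this procedure is given below. It follows the expression ρ(t)=O_2(t)·O_1·O_2(t)·O_1·ρ literally and reuses the negativity and ccnr helpers sketched at the end of section 2. One assumption is made that the text leaves implicit: O_1 = O_2(0) is taken to act as the |0⟩↔|2⟩ swap on qutrit A, i.e. O_1 = swap ⊗ 𝟙. The Horodecki State 2 constructor and the parameter values in the example call are included purely for illustration.

import numpy as np

# single-qutrit spin-1 matrices and the |0> <-> |2> swap, as given in section 2
sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex) / np.sqrt(2)
swap02 = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=complex)
I3 = np.eye(3, dtype=complex)

def horodecki_state2(alpha):
    # rho = (2/7) Delta + (alpha/7) delta_plus + ((5-alpha)/7) delta_minus
    ket = lambda i: np.eye(3)[:, i]
    psi = sum(np.kron(ket(i), ket(i)) for i in range(3)) / np.sqrt(3)
    dplus = sum(np.outer(np.kron(ket(i), ket((i + 1) % 3)),
                         np.kron(ket(i), ket((i + 1) % 3))) for i in range(3)) / 3
    dminus = sum(np.outer(np.kron(ket((i + 1) % 3), ket(i)),
                          np.kron(ket((i + 1) % 3), ket(i))) for i in range(3)) / 3
    return (2 / 7) * np.outer(psi, psi) + (alpha / 7) * dplus + ((5 - alpha) / 7) * dminus

def scrambling_trace(rho, D, times):
    """S(t), N(t) and CCNR(t) for rho(t) = O2(t) O1 O2(t) O1 rho under the DM Hamiltonian."""
    O1 = np.kron(swap02, I3)                        # swap assumed to act on qutrit A
    H = D * (np.kron(sx, sy) - np.kron(sy, sx))     # H_z, with hbar = 1
    w, V = np.linalg.eigh(H)
    rows = []
    for t in times:
        U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
        O2t = U.conj().T @ O1 @ U                   # Heisenberg picture O_2(t)
        rho_t = O2t @ O1 @ O2t @ O1 @ rho
        S = 2.0 * (1.0 - np.trace(rho_t).real)
        # following the text, N and CCNR are evaluated directly on rho(t),
        # even though rho(t) need not remain Hermitian once D > 0
        rows.append((t, S, negativity(rho_t), ccnr(rho_t)))
    return np.array(rows)

# example: Horodecki State 2 in its bound entangled window, D = 0.6
data = scrambling_trace(horodecki_state2(3.7), D=0.6, times=np.linspace(0, 20, 201))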
The present study is conducted for the four different bound entangled states provided by the three different authors, and the results are discussed in three cases, which are described in the consecutive subsections below. §.§ Case 1: Effect on Bennett et al. state The non-parameterized bound entangled state provided by Bennett et al., given in equation <ref>, is considered in this case. The state is passed through the evolution process to examine the effect of QI scrambling on it, and the outcomes are shown in Fig. <ref> for different values of the interaction strength D. In the figure, the negativity (N) of the system is indicated by the solid red line and the CCNR criterion of the system is represented by the dashed blue line, a convention that is followed throughout this article. The figure shows that for the interaction strength D=0, the negativity (N) of the state is zero, but the CCNR criterion is non-zero. This result signifies that for this value of the interaction strength D, the state is bound entangled and no free entanglement exists in the state. As the interaction strength D is introduced in the state, the negativity (N) of the state increases with time and attains a maximum value of around 0.1. This shows that free entanglement develops in the state with the introduction of the interaction strength D. For further increments of the interaction strength D, an oscillatory behaviour arises in the system with growing frequency. As a result, the negativity (N) attains its maximum value more quickly but remains approximately stable with a minor trace of disturbance, which can be verified from Fig. <ref>. §.§ Case 2: Effect on Jurkowski et al. state In this case, the triple-parameterized bound entangled state proposed by Jurkowski et al. and given in equation <ref> is adopted. The selected state comes with different parameter conditions, which are shown in Fig. <ref>. In the present study, the parameter condition of the state is set as ϵ_1=1; ϵ_2=ϵ_3=ϵ, and the state is passed through the evolution process. For the parameter value ϵ = 4 and different values of the interaction strength D, the results are shown in Fig. <ref>, and for the interaction strength D=0.6 and multiple values of the parameter ϵ, the results are shown in Fig. <ref>. Fig. <ref> shows that for a particular value of the parameter ϵ and the interaction strength D=0, the CCNR criterion is non-zero but the negativity (N) of the state is zero. This result indicates that in this case too, the state is bound entangled without any free entanglement for the considered value of the interaction strength D. As the interaction strength D is introduced, free entanglement develops smoothly in the state through the increase of the negativity (N) with time, which attains a ceiling value of around 0.1 as in the previous case. Further advancement of the interaction strength D enhances the frequency of the oscillatory behavior of the system, for which the maximum value of the negativity is reached more quickly and a small amount of disturbance is generated in the negativity (N). In Fig. <ref>, the interaction strength D is fixed at D=0.6 and the parameter ϵ is varied. For the parameter ϵ=0.5, it is found that the negativity (N) and the CCNR criterion are both non-zero in the state, which implies that the bound entanglement and the free entanglement both exist in the state for the selected parameter values.
From the discussion of the previous section, it is known that the state is separable for the value ϵ = 1, as shown in Fig. <ref>. On the contrary, Fig. <ref> shows that the negativity (N) and the CCNR criterion both grow in the state with the forward movement of time for the parameter ϵ = 1 and the interaction strength D=0.6. This result establishes that both bound entanglement and free entanglement are developed in the separable portion of the state, and the state becomes fully entangled. The maximum values of the negativity (N) and the CCNR criterion are amplified very slowly with further increments of the value of ϵ, which can be observed in Fig. <ref>. §.§ Case 3: Effect on Horodecki et al. states In the current case, two bound entangled states are considered, both of which were proposed by Horodecki et al. The considered states are discussed in the previous section and their behavior with the parameter α is depicted in Fig. <ref>. The effect of QI scrambling on both states is described in the successive subsections below. §.§.§ Effect on State 1 The first bound entangled state [State 1] proposed by Horodecki et al., described in equation <ref>, is selected here to discuss the effect of QI scrambling on it. After passing through the evolution process, the outcomes are depicted in Figs. <ref>,<ref>. For the parameter value α = 0.5 and different values of the interaction strength D, the results are displayed in Fig. <ref>, and for the interaction strength D=0.6 and different values of the parameter α, the results are exhibited in Fig. <ref>. Fig. <ref> depicts the same behavior as the previous cases for a particular value of the parameter α and the interaction strength D=0, and leads to the same conclusion that, in the absence of the interaction strength D, the state is bound entangled without any free entanglement. With the introduction of the interaction strength D, the negativity (N) rises with time and achieves a maximum value of around 0.1. Further increments of the interaction strength D increase the frequency of the oscillatory nature of the system, which makes the negativity (N) of the system more unstable. Fig. <ref> displays the behaviour of the state for a particular value of the interaction strength D=0.6 and different values of the parameter α. The figure indicates that for the initial values of the parameter α the negativity (N) and the CCNR criterion exhibit oscillatory behavior which is sinusoidal in nature. As the value of the parameter α increases, the oscillatory pattern of the negativity (N) moves toward stability with some amount of distortion, as can be seen in Fig. <ref>. §.§.§ Effect on State 2 The effect of QI scrambling on the second bound entangled state [State 2] proposed by Horodecki et al., which is described in equation <ref>, is discussed here. After passing through the evolution process, the outcomes are exhibited in Figs. <ref>,<ref>. For the parameter value α = 3.7 and different values of the interaction strength D, the outcomes are shown in Fig. <ref>, and for the interaction strength D=0.6 and different values of the parameter α, the outcomes are shown in Fig. <ref>. For a particular value of the parameter α and the interaction strength D=0, Fig. <ref> depicts the same behavior as the previous cases and confirms that, in the absence of the interaction strength D, only bound entanglement exists in the state without any free entanglement.
With the introduction of the interaction strength D, the negativity (N) increases with time and achieves a maximum value of around 0.1. With further increments of the interaction strength D, the frequency of the oscillatory behavior of the system increases. As a result, the negativity (N) gains its maximum value more quickly, and a non-sinusoidal oscillatory behavior grows in the system, which develops a distortion in the negativity (N) that can be noticed in Fig. <ref>. Fig. <ref> displays the behavior of the state for the interaction strength D=0.6 and different values of the parameter α. The figure shows that for the parameter value α = 2.5, the negativity (N) increases in the state with the advancement of time and attains a maximum value of around 0.1, the same as in the previous cases. This outcome implies that free entanglement develops in the state with time. On the other hand, according to the previous discussion, the state is separable for the parameter value α = 2.5, as already shown in Fig. <ref>. From this discussion, it can be concluded that free entanglement develops in the separable part of this particular state and makes the whole state entangled. Further increments of the parameter α produce the same behaviour of the negativity (N) over the remaining range and do not affect the state significantly, as can be seen in Fig. <ref>. § CONCLUSION In the current article, the effect of QI scrambling on bound entangled states is studied. The study is conducted on the four different bound entangled states proposed by Bennett et al., Jurkowski et al., and Horodecki et al. During the study, the swap operator is selected as the evolution operator, and this operator is evolved under the DM interaction. The effect of QI scrambling on each of the bound entangled states is described in the respective cases with a detailed analysis. Analyzing all the cases, it is found that although QI scrambling reduces the quantum information by reducing the free entanglement of the systems, with the considered operator and interaction QI scrambling can activate the bound entangled states by introducing a certain amount of approximately stable free entanglement. It is also found that, due to QI scrambling, both free entanglement and bound entanglement develop in the separable portion of the selected bound entangled states and make the states totally entangled over the whole parameter range. The study can be continued further for different operators and interactions to understand the behavior of bound entangled states. hn Hayden, P., & Preskill, J. (2007). Black holes as mirrors: quantum information in random subsystems. Journal of high energy physics, 2007(09), 120.qs1 Iyoda, E., & Sagawa, T. (2018). Scrambling of quantum information in quantum many-body systems. Physical Review A, 97(4), 042330.qs2 Landsman, K. A., Figgatt, C., Schuster, T., Linke, N. M., Yoshida, B., Yao, N. Y., & Monroe, C. (2019). Verified quantum information scrambling. Nature, 567(7746), 61-65.qs3 Touil, A., & Deffner, S. (2020). Quantum scrambling and the growth of mutual information. Quantum Science and Technology, 5(3), 035005.qs4 Blok, M. S., Ramasesh, V. V., Schuster, T., O’Brien, K., Kreikebaum, J. M., Dahlen, D., ... & Siddiqi, I. (2021). Quantum information scrambling on a superconducting qutrit processor. Physical Review X, 11(2), 021010.le Jalabert, R. A., & Pastawski, H. M. (2001). Environment-independent decoherence rate in classically chaotic systems.
Physical review letters, 86(12), 2490.ep Spohn, H. (1978). Entropy production for quantum dynamical semigroups. Journal of Mathematical Physics, 19(5), 1227-1230.otoc1 Larkin, A. I., & Ovchinnikov, Y. N. (1969). Quasiclassical method in the theory of superconductivity. Sov Phys JETP, 28(6), 1200-1205.otoc2 Hashimoto, K., Murata, K., & Yoshii, R. (2017). Out-of-time-order correlators in quantum mechanics. Journal of High Energy Physics, 2017(10), 1-31.be1 Horodecki, M., Horodecki, P., & Horodecki, R. (1998). Mixed-state entanglement and distillation: Is there a “bound” entanglement in nature?. Physical Review Letters, 80(24), 5239.be2 Horodecki, P., & Horodecki, R. (2001). Distillation and bound entanglement. Quantum Inf. Comput., 1(1), 45-75.qk Horodecki, K., Horodecki, M., Horodecki, P., & Oppenheim, J. (2005). Secure key from bound entanglement. Physical review letters, 94(16), 160502.cri Augusiak, R., & Horodecki, P. (2006). Generalized Smolin states and their properties. Physical Review A, 73(1), 012318.tp Masanes, L. (2006). All bipartite entangled states are useful for information processing. Physical Review Letters, 96(15), 150501.cc Epping, M., & Brukner, Č. (2013). Bound entanglement helps to reduce communication complexity. Physical Review A, 87(3), 032305.gq Guo-Qiang, Z., & Xiao-Guang, W. (2008). Quantum dynamics of bound entangled states. Communications in Theoretical Physics, 49(2), 343.bz Baghbanzadeh, S., & Rezakhani, A. T. (2013). Distillation of free entanglement from bound entangled states using weak measurements. Physical Review A, 88(6), 062320.ss2 Sinha, S. (2022). Comparative Dynamical Study of a Bound Entangled State. International Journal of Theoretical Physics, 62(1), 9.con Wootters, W. K. (1998). Entanglement of formation of an arbitrary state of two qubits. Physical Review Letters, 80(10), 2245.neg Vidal, G., & Werner, R. F. (2002). Computable measure of entanglement. Physical Review A, 65(3), 032314.3pi Ou, Y. C., & Fan, H. (2007). Monogamy inequality in terms of negativity for three-qubit states. Physical Review A, 75(6), 062308.sc1 Peres, A. (1996). Separability criterion for density matrices. Physical Review Letters, 77(8), 1413.sc2 Horodecki, P. (1997). Separability criterion and inseparable mixed states with positive partial transposition. Physics Letters A, 232(5), 333-339.rc Chen, K., & Wu, L. A. (2002). The generalized partial transposition criterion for separability of multipartite quantum states. Physics Letters A, 306(1), 14-20.ccnr Rudolph, O. (2005). Further results on the cross norm criterion for separability. Quantum Information Processing, 4, 219-239.bs Bennett, C. H., DiVincenzo, D. P., Mor, T., Shor, P. W., Smolin, J. A., & Terhal, B. M. (1999). Unextendible product bases and bound entanglement. Physical Review Letters, 82(26), 5385.js Jurkowski, J., Chruściński, D., & Rutkowski, A. (2009). A class of bound entangled states of two qutrits. Open Systems & Information Dynamics, 16(02n03), 235-242.hs2 Horodecki, P. (1997). Separability criterion and inseparable mixed states with positive partial transposition. Physics Letters A, 232(5), 333-339.hs1 Horodecki, P., Horodecki, M., & Horodecki, R. (1999). Bound entanglement can be activated. Physical review letters, 82(5), 1056.dm1 Dzyaloshinsky, I. (1958). A thermodynamic theory of “weak” ferromagnetism of antiferromagnetics. Journal of physics and chemistry of solids, 4(4), 241-255.dm2 Moriya, T. (1960). Anisotropic superexchange interaction and weak ferromagnetism. 
Physical review, 120(1), 91.dm3 Moriya, T. (1960). New mechanism of anisotropic superexchange interaction. Physical Review Letters, 4(5), 228. | http://arxiv.org/abs/2311.16209v1 | {
"authors": [
"Suprabhat Sinha"
],
"categories": [
"quant-ph"
],
"primary_category": "quant-ph",
"published": "20231127172241",
"title": "Effect of Quantum Information Scrambling on Bound Entangled States"
} |
Topological invariants are a significant ingredient in the study of topological phases of matter, intertwining the supposedly contradictory concepts of bulk and boundary. The nature of the invariants differs depending on the dimensionality of the boundary at which the topologically non-trivial states manifest themselves. The primary motivation of this work is to study two distinct scenarios of topological phase, differing in the dimensionality of their boundary states, and to study the associated bulk topological invariants that characterize them. In this regard, we study the band engineered Kane Mele model, which originally is a prototypical example of a system that hosts the quantum spin Hall effect on a honeycomb lattice. Under a smooth band deformation caused by varying one of the nearest neighbor hopping amplitudes (say t_1) as compared to the other two (say t), we observe that the system transits from its first order topological insulating state (or quantum spin Hall state) to a second order topological insulating (SOTI) state via a gap closing transition. This transition occurs when the system crosses a particular threshold of the deformation parameter t_1/t (namely t_1/t=2), known as the semi-Dirac limit in the literature. We show the presence of edge and corner modes as signatures of first and second order topology respectively. Further, we observe the evolution of the Wannier charge center (WCC), a bulk property, as a function of the deformation parameter t_1/t. It is seen that the behavior of the WCC is entirely different in the quantum spin Hall (QSH) phase as compared to the second order topological state. We also find that, while the ℤ_2 invariant successfully characterizes the QSH state, it cannot characterize higher order topology (second order here). The model being mirror invariant, we also calculate the mirror winding number to show that it too is rendered trivial in the SOTI phase, while being non-trivial in the QSH phase. Finally, we observe that the spin resolved bulk polarization correctly establishes the appearance of second order topological corner modes and thus categorizes this phase as an obstructed atomic insulator. Wannier charge center, spin resolved bulk polarization and corner modes in a strained quantum spin Hall insulator Srijata Lahiri and Saurabh Basu Department of Physics, Indian Institute of Technology Guwahati, Guwahati, 781039 Assam, India January 14, 2024 ================================================================================================================== § INTRODUCTION Topological insulators (TI) have been a subject of extensive research in the past decade. TIs are novel materials that show the intriguing feature of hosting a gapped bulk but gapless edge/surface states. These topological states are robust and protected against minor perturbations that do not disturb the symmetries inherent in the system. Traditional TIs show the essence of non-trivial topology on a d-1 dimensional surface for a bulk that is d-dimensional <cit.>. A major aspect of topological materials lies in the bulk boundary correspondence, where a topological invariant, evaluated purely from the bulk eigenstates, predicts the behaviour at the boundaries of the system. Currently there are multiple extensions to the field of topological materials. These include Floquet topological insulators, which exhibit topological phases exclusive to a periodically driven system and not shown by their static counterparts <cit.>.
Furthermore, non-Hermitian (NH) TIs have been gaining growing attention recently <cit.>. Non-Hermiticity enhances the richness of topological phases of matter, which can no longer be accommodated within the conventional 10-fold classification of symmetry protected topological states. NH systems also feature topological manifestations that are absent in their Hermitian counterparts, which include the presence of exceptional points and the skin effect. Another important extension to the field of topological insulators that is being actively explored is that of higher order topological insulators (HOTI) <cit.>. Unlike a conventional TI, an n^th order HOTI exhibits the presence of non-trivial topological states on a surface/edge of dimension d-n for a bulk that is d dimensional. This gives rise to corner modes in 2D and corner/hinge modes in 3D HOTI systems. The conventional definition of bulk boundary correspondence fails here; rather, an HOTI shows a refined bulk boundary correspondence. Higher order topological insulators can largely be regarded as a `subclass' of topological crystalline insulators where rotation or mirror symmetries protect the topological phases. Research in this field received a massive boost with the advent of the electric multipole insulators in the study of Benalcazar et al. <cit.> as well as chiral and helical higher order topological states in the study of Schindler et al. <cit.>. In the latter work, prospective material candidates such as SnTe and surface modified BiTe and BiSe have been theoretically claimed to host a higher order phase. However, despite HOTI being a well studied phenomenon in recent times, it is still not fully understood how the topological conducting edge/surface states of a 2D/3D system can be gapped out to show higher order topology. Here, we study one such possibility where band deformation under strain of a quantum spin Hall insulator induces a transition from a TI to an HOTI phase of matter. Our primary aim is to study two topological phases of different order, one evolving into the other, and to track the behavior of the corresponding bulk topological invariants that characterize them. In this regard, the Kane Mele model, which is a prototypical example of the quantum spin Hall insulator, is considered <cit.>. The proposal of the Kane Mele model owes its origin to the seminal work by Haldane, who showed that an external magnetic field, and hence Landau levels, are not indispensable for the observation of the quantum Hall effect <cit.>. Haldane introduced a complex second neighbour hopping to a honeycomb lattice which causes the Dirac nodes at the K and K' points in the Brillouin zone (BZ) of bare graphene to gap out, thus giving rise to conducting edge states. This complex second neighbour hopping, however, breaks time reversal symmetry (TRS) and bestows the occupied energy subspace with a non-zero Chern number, thus yielding a non-zero conductance similar to the original quantum Hall effect. Since the Haldane model breaks TRS, it became imperative to study how topology behaves if TRS is restored. With this aim, Kane and Mele proposed a spinful model with an equal and opposite Haldane flux for the spin up and spin down particles. The spinful bands acquire opposite Chern numbers, thus causing the net Chern number of the occupied energy subspace to vanish. This is in accordance with TRS.
It is, however, observed that the difference of the Chern numbers for the two spin sectors acts as an effective topological invariant, implying that the system shows a finite spin Hall conductance although the conductance in the charge sector vanishes. Such systems fall under the class of ℤ_2 topological insulators and show the presence of helical edge states. Moreover, it was observed by Kane and Mele that an inversion symmetry breaking Rashba spin orbit coupling term, which destroys the conservation of the z component of spin, causes little qualitative difference to the original results since it leaves the TRS intact. Experimental evidence of the QSH state has been suggested in HgTe/CdTe quantum wells <cit.>, low buckled germanene <cit.>, Cl-doped ZnSe <cit.>, Pt wires <cit.>, etc. It should be mentioned here that the limit t_1/t=2 is widely known as the semi-Dirac limit, where the bulk energy spectrum shows a linear dispersion along one component of momentum and a quadratic dispersion along the direction perpendicular to the former. Evidence of such inhomogeneous dispersion is expected to be found in monolayer phosphorene subjected to pressure or doping <cit.>, deformed graphene <cit.>, etc. In this work, we smoothly deform the bands of the Kane Mele model defined on a honeycomb lattice by modifying one of the nearest neighbour hopping amplitudes (say t_1), while keeping the other two (say t) fixed. It is seen that the quantum spin Hall state with distinct edge modes is destroyed beyond the critical point t_1/t=2, and the system converts itself into a second order topological insulator, which is an HOTI with topological states manifested at the d-2 dimensional boundary. The bulk bandstructure shows a shift in the band extrema points as a function of the deformation parameter ξ=t_1/t. At t_1=t (ξ=1) the bulk bandstructure hosts the band minima at the K and K' points in the Brillouin zone (BZ). They shift towards each other along the Γ-K-M-K'-Γ line before finally merging at the M point of the BZ when t_1=2t (ξ=2). It is seen that the behaviour of the bulk topological invariants corresponding to the two different topological regimes is completely different owing to their dissimilar order. In this regard, we mention another work by Ren et al. <cit.>, where an in-plane Zeeman field applied to the Kane Mele model destroys the QSH phase and transforms the system into a higher order topological insulator. However, the TRS is broken in that system and the vital essence of the Kane Mele model is lost. On the contrary, in our work we keep the TRS of the system undisturbed while inducing a second order topological phase solely by means of band engineering. While extensive work has been done on several models featuring an HOTI phase, we focus on the transition of the system and the corresponding bulk topological invariants as it smoothly changes its topological order as a function of band deformation. We also provide a clear perspective pertaining to the occurrence of this transition, which is crucial to the study of topological phases of matter. The paper is organized as follows. In section II we define the tight binding Hamiltonian for the strained Kane Mele model and show the effect of band deformation on the bulk bandstructure. The energy spectrum of a ribbon-like configuration is also studied, which shows the existence of helical edge modes in the regime ξ<2. Further deformation destroys the QSH phase and the helical edge modes vanish.
However, beyond this critical point, a real space probability distribution shows the existence of zero energy corner modes in the system localized at two corners of a suitably formed supercell that obeys the crystal symmetries of the Hamiltonian. In section III we study the evolution of the Wannier charge center along one direction (say x) with respect to momentum along the other (say y). It is seen that the nature of this evolution is completely dissimilar for the two different regimes. Correspondingly, the ℤ_2 invariant which is finite in the region ξ<2, vanishes beyond it. Pertaining to the presence of mirror symmetry M_x in the system, we also calculate the mirror winding number which corresponds to the Berry phase picked up by the ground state of a mirror symmetry resolved effective Hamiltonian over a complete cycle in its parameter space. We observe that the mirror winding number shows a similar trend as the WCC. However, the spin resolved bulk polarization which indicates the position of the center of charge in a unit cell becomes quantized in the second order topological phase. This indicates an obstructed atomic insulator where the center of charge suffers a mismatch from the original lattice sites <cit.>. This leads to an excess charge accumulation at the corners of a rhombic supercell which manifests as second order topology. Finally we conclude with a brief summary of our results in section IV.§ THE HAMILTONIANThe Kane Mele model defined on a honeycomb lattice is shown in Fig. <ref>.The vectors connecting the nearest neighbours are given by δ⃗_1=a_0(0,1), δ⃗_2=a_0(-√(3)/2,-1/2), δ⃗_3=a_0(√(3)/2,-1/2) where a_0 is the nearest neighbour distance. The lattice vectors are given by a⃗_1=δ⃗_1-δ⃗_2 and a⃗_2=δ⃗_1-δ⃗_3. The hexagonal lattice has two sublattices denoted by A and B. In our model, the NN hopping along the direction δ̂_1 is assumed to be t_1, while it is given by t in the directions δ̂_2 and δ̂_3. We tune the bandstructure as a function of the deformation parameter ξ=t_1/t and observe the behaviour of the boundary states. The tight binding Hamiltonian for the real space Kane Mele model is given as,H =∑_⟨ i,j ⟩t_ijc_i^† c_j + iλ_so∑_⟨⟨ i,j ⟩⟩ν_ijc_i^†σ_z c_j + iλ_R∑_⟨ i,j ⟩c_i^†(σ×𝐝̂_𝐢𝐣)_zc_j+∑_iλ_vc_i^† c_iwhere c_i (c_i^†) represent annihilation (creation) operators at lattice site i. Here t_ij is the NN hopping amplitude which is equal to t_1 when the hopping occurs along the direction δ⃗_1 and is equal to t along δ⃗_2 and δ⃗_3. The second term is a spin-orbit coupling (SOC) term where λ_so corresponds to the intrinsic SOC amplitude which is a key ingredient in the formation of the QSH phase. ν_ij=1(-1) if the electron takes a left(right) turn while moving from site j to site i. The third term corresponds to Rashba SOC with λ_R as the coupling strength. The conservation of the z component of spin that is σ_z is violated in presence of λ_R. 𝐝̂_𝐢𝐣 corresponds to the nearest neighbor vector connecting site j to site i. Finally the fourth term denotes the onsite sublattice potential where λ_v assumes a positive amplitude (say m_s) for sublattice A and negative (say -m_s) for sublattice B. It is known that the QSH phase survives in the original Kane-Mele as long as λ_v<3√(3)λ_so <cit.>. 
Fourier transformation of the real space Hamiltonian gives us the tight binding Hamiltonian in momentum space, H(𝐤)=[ γ(𝐤)+m_s η(𝐤) 0 ρ(𝐤); η^*(𝐤) -γ(𝐤)-m_s -ρ(-𝐤) 0; 0 -ρ^*(-𝐤) -γ(𝐤)+m_s η(𝐤); ρ^*(𝐤) 0 η^*(𝐤) γ(𝐤)-m_s ] where η(𝐤) = t_1e^-ik_ya+2te^ik_ya/2cos(√(3)k_xa/2), γ(𝐤) = 2λ_so[2sin(√(3)k_xa/2)cos(3k_ya/2)-sin(√(3)k_xa)], and ρ(𝐤) = iλ_R[e^-ik_ya+2e^ik_ya/2cos(√(3)k_xa/2+π/3)]. The bulk bandstructure calculated using Eq. <ref> shows band extrema at the K(-2π/(3√(3)a_0), 2π/(3a_0)) and K'(2π/(3√(3)a_0), 2π/(3a_0)) points for ξ=t_1/t=1, as seen in Fig. <ref>. In this case the amplitude of the Rashba SOC and the onsite sublattice potential are kept zero, resulting in the spin-↑ and spin-↓ bands being degenerate. It is seen that as the band is slowly deformed, the extrema slowly shift towards each other, finally converging at the M point of the BZ for ξ=2. The gap closing transition at ξ=2 destroys the QSH phase and renders the system trivial from the perspective of first order topology. For non-zero values of the onsite potential λ_v and λ_so, the degeneracy of the bands is lifted, as seen in Fig. <ref>. However, the general behavior of the spectral properties with respect to the deformation parameter remains the same. Next, in order to study the behavior of the edge modes pertaining to the QSH phase, we plot the energy bandstructure of a zig-zag ribbon-like configuration with periodic boundary conditions (PBC) along the direction â_1-â_2 and open boundary conditions (OBC) along the direction â_1. The presence of PBC along the x direction (which is the same as the direction â_1-â_2) enables us to Fourier transform the Hamiltonian along the x-direction and thus plot the dispersion of this finite ribbon as a function of k_x. Distinct edge modes are seen traversing the band gap as a function of k_x in the region 1<ξ<2, as shown in Fig. <ref>. Evidently, these are conducting eigenstates confined to the edges of the system. At ξ=2, the closure of the bulk band gap causes the first order topological phase to disappear, and the conducting edge states are trivialised beyond this critical point. To investigate the topology of the phase beyond the critical point (ξ=2), we carefully construct a rhombic supercell taking into account that the system possesses a mirror symmetry M_x. A schematic representation of this supercell is shown in Fig. <ref>. The real space energy eigenspectrum is evaluated, which shows the presence of four distinct zero energy modes. In the presence of a non-zero onsite potential λ_v, the in-gap modes shift from zero energy, as shown in Fig. <ref>. The real space probability distribution of the zero energy states shows that they are confined at the two mirror invariant corners of the rhombic supercell (Fig. <ref>). § TOPOLOGICAL INVARIANTS The QSH phase seen in the regime ξ<2 is a ℤ_2 topological phase which has a zero Hall conductivity but a non-zero spin Hall conductivity. If the z-component of spin, that is σ_z, is conserved (for the case where λ_R=0), the spin Hall conductivity is also quantized. The ℤ_2 invariant in such a case is given by <cit.>, ν = (C^↑-C^↓)/2 where C^↑ (C^↓) refers to the spin-↑ (spin-↓) Chern number. However, in the presence of a Rashba SOC, the z-component of spin, that is σ_z, is not conserved and hence this form of the ℤ_2 invariant is no longer valid. However, a quantized ℤ_2 invariant pertaining to a quantum spin Hall phase still persists.
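For the σ_z-conserving case (λ_R=0) the 4×4 Bloch matrix above is block diagonal in spin, so C^↑ can be evaluated directly from the spin-↑ 2×2 block by a discretized Berry-curvature (Fukui–Hatsugai-type) sum. The Python/NumPy sketch below illustrates this; the values of λ_so and the grid size are placeholder assumptions, and the overall sign of the result depends on the orientation chosen for the reciprocal vectors.

import numpy as np

a0 = 1.0
a1 = a0 * np.array([np.sqrt(3) / 2, 3 / 2])     # a1 = d1 - d2
a2 = a0 * np.array([-np.sqrt(3) / 2, 3 / 2])    # a2 = d1 - d3
B = 2 * np.pi * np.linalg.inv(np.array([a1, a2])).T   # rows b1, b2 with b_i . a_j = 2 pi delta_ij
b1, b2 = B

def h_up(k, t1, t=1.0, lso=0.06, ms=0.0, a=a0):
    # spin-up 2x2 block of H(k) for lambda_R = 0
    kx, ky = k
    eta = t1 * np.exp(-1j * ky * a) + 2 * t * np.exp(1j * ky * a / 2) * np.cos(np.sqrt(3) * kx * a / 2)
    gam = 2 * lso * (2 * np.sin(np.sqrt(3) * kx * a / 2) * np.cos(3 * ky * a / 2)
                     - np.sin(np.sqrt(3) * kx * a))
    return np.array([[gam + ms, eta], [np.conj(eta), -gam - ms]])

def chern_up(t1, N=60, **kw):
    # occupied-band eigenvectors on an (N+1) x (N+1) grid spanning one BZ cell
    u = np.empty((N + 1, N + 1, 2), dtype=complex)
    for m in range(N + 1):
        for n in range(N + 1):
            k = (m / N) * b1 + (n / N) * b2
            _, v = np.linalg.eigh(h_up(k, t1, **kw))
            u[m, n] = v[:, 0]
    # plaquette Berry fluxes from gauge-invariant link products
    F = 0.0
    for m in range(N):
        for n in range(N):
            U = (np.vdot(u[m, n], u[m + 1, n]) * np.vdot(u[m + 1, n], u[m + 1, n + 1]) *
                 np.vdot(u[m + 1, n + 1], u[m, n + 1]) * np.vdot(u[m, n + 1], u[m, n]))
            F += np.angle(U)
    return F / (2 * np.pi)

# C_up is close to +/-1 for xi = t1/t < 2 and to 0 for xi > 2; the spin-down block
# (gamma -> -gamma) gives the opposite value, so nu = (C_up - C_down)/2.
print(chern_up(t1=1.0), chern_up(t1=2.5))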
In our work we study the hybrid Wannier charge centers as a function of the deformation parameter ξ to follow the fate of the QSH phase in the presence of σ_z non-conserving terms. In this context, the Wannier charge center refers to the center of charge in a unit cell. Mathematically, it represents the expectation value of the position operator in a basis of Wannier functions, which are a set of orthogonal functions indexed by a lattice position, say 𝐑, and maximally localized about that point with respect to all relevant spatial dimensions. The Wannier functions are represented as <cit.>, |𝐑, n⟩=V/(2π)^D∫d^Dk e^-i𝐤.𝐑|ψ_n𝐤⟩ where |ψ_n𝐤⟩ represents the Bloch wave function and D corresponds to the dimensionality of the k-space. V is the real space primitive cell volume. Hybrid Wannier functions, on the other hand, refer to wave functions which are localized along one spatial dimension (say, x) while being delocalized along the other dimensions (say, y and z) and can be written as, |R_x, k_y, k_z, n⟩=1/2π∫_-π^πdk_x e^-iR_xk_x|ψ_n𝐤⟩ The expectation value of the position operator (say, X̂) with respect to the hybrid Wannier function gives us the hybrid Wannier charge center. Mathematically, this is represented as, x̅_n(k_y, k_z) = ⟨ R_x, k_y, k_z, n|X̂|R_x, k_y, k_z, n⟩ The hybrid WCC, being proportional to the Berry phase, captures the topological details of the system efficiently and is given as <cit.>, ϕ_n(k_y)=∫_0^2πA_n(k_x,k_y)dk_x Here 𝐀_𝐧(k_x, k_y)=-i⟨ u_nk|∇_k|u_nk⟩ is known as the Berry connection, where n is the band index and |u_nk⟩ corresponds to the periodic part of the Bloch wavefunction. Here, we calculate the WCC along the x-direction and study its evolution as a function of the momentum in the y-direction, that is k_y (since the model we study is 2D and lies in the x-y plane). The ℤ_2 invariant is now defined as the number of individual hybrid WCCs crossed by a line traversing half the BZ, modulo 2 <cit.>. If the line cuts through an odd (even) number of hybrid WCCs while traversing half the BZ, the ℤ_2 invariant is non-trivial (trivial). We observe in Fig. <ref> that the ℤ_2 invariant remains non-trivial as long as ξ<2. Beyond this point, the evolution of the hybrid WCC changes and the system no longer remains in the QSH phase, as shown in Fig. <ref>. Next we focus on the crystalline symmetries of the deformed Kane-Mele Hamiltonian. The deformed Kane-Mele model possesses a mirror symmetry M_x given by s_x⊗𝕀, where s_x and 𝕀 act on the spin and the sublattice degrees of freedom respectively. Here 𝕀 corresponds to the identity and s_x corresponds to the x component of the Pauli matrices. The mirror symmetry decouples the Hamiltonian into two subspaces given by the positive and negative mirror eigenvalues. We put k_x=0 and decouple the Hamiltonian H(0, k_y) into two parts denoted by H^±, corresponding to the positive and negative eigenvalues of the mirror symmetry operator M_x. The action of the mirror symmetry operator on the Hamiltonian is given as follows, M_xH(k_x, k_y)M_x^-1=H(-k_x, k_y) Thus, on putting k_x=0, the mirror operator M_x can be used to decouple the original Hamiltonian into H^±, which is given as, H^±(k_y)=T^±_x(k_y)σ_x+T_y^±(k_y)σ_y where, T_x^±(k_y)=±2λ_Rsin(3ak_y/2)+4tcos(3ak_y/2)+2t_1 and T_y^±(k_y)=±2λ_R[cos(3ak_y/2)+1]-4tsin(3ak_y/2).
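As an illustration of this construction, the winding of (T_x^±, T_y^±) about the origin can be extracted numerically by unwrapping its polar angle over one closed loop in k_y. In the Python/NumPy sketch below the value of λ_R is an assumed placeholder, the Γ→M→Γ loop is parametrized simply as one full period of the 3ak_y/2 harmonics, and the sign of the result is fixed by the traversal direction.

import numpy as np

def mirror_winding(t1, t=1.0, lr=0.05, a=1.0, sector=+1, N=4001):
    # winding number of (T_x(k_y), T_y(k_y)) about the origin over a closed k_y loop
    ky = np.linspace(0.0, 4 * np.pi / (3 * a), N)
    phi = 3 * a * ky / 2
    Tx = sector * 2 * lr * np.sin(phi) + 4 * t * np.cos(phi) + 2 * t1
    Ty = sector * 2 * lr * (np.cos(phi) + 1) - 4 * t * np.sin(phi)
    theta = np.unwrap(np.arctan2(Ty, Tx))
    return (theta[-1] - theta[0]) / (2 * np.pi)

# |w| = 1 for t1/t < 2 (origin enclosed) and 0 for t1/t > 2; the Berry phase of the
# lower band of H^{+/-}(k_y) over the same loop is phi_m = pi * w (mod 2 pi).
print(mirror_winding(t1=1.0), mirror_winding(t1=2.5))

For λ_R=0 the curve is simply a circle of radius 4t centred at (2t_1, 0), which makes the threshold at t_1/t=2 evident by inspection.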
On studying the evolution of T_x and T_y over a complete path in the BZ (Γ→ M →Γ), we see that the winding number is 1 (that is, the origin of the T_x-T_y plane is enclosed) only when the deformation parameter ξ remains less than 2. The origin lies outside the enclosed area as soon as the QSH phase is destroyed and the second order topological phase is reached, as shown in Fig. <ref>, <ref>, <ref>. This implies that the Berry phase acquired by the ground state of either the positive or the negative subspace of the mirror resolved Hamiltonian is another alternative bulk property that correctly captures the QSH phase. However, it is trivial in the second order topological regime. This is shown in Fig. <ref>, where ϕ_m represents the Berry phase acquired by the ground state of the effective 1D mirror resolved Hamiltonian H^+(k_y) over a complete 1D path in the BZ, and is given by, ϕ_m = -i∫_Γ→ M→Γdk_y⟨ u_nk_y|∇_k_y|u_nk_y⟩. Here |u_nk_y⟩ corresponds to the periodic part of the Bloch wavefunction |ψ_nk_y⟩ belonging to the band n. The negative mirror subspace given by H^-(k_y) shows a similar behavior. Thus it is implied that both the evolution of the WCC and the 1D polarization corresponding to the effective Hamiltonian H^±(k_y) are incapable of capturing any essence of the second order topological phase (that is, the regime beyond ξ>2), whereas they accurately characterize the first order QSH phase. To characterize the second order topological states of the Kane Mele model beyond ξ>2, we resort to the spin resolved bulk polarization. Keeping the value of λ_R=0, so that the z-component of spin is conserved, we calculate the bulk polarization for the two different spin sectors, which is given by <cit.>, p^s_α=r̅_α = ⟨ w^s_n|r_α|w^s_n⟩=i/S∫_BZ d^dk ⟨ u^s_nk|∂/∂ k_r_α|u^s_nk⟩ where |w_n^s⟩=|0,n⟩_s is the Wannier function corresponding to the n^th band and p_α^s refers to the value of the bulk polarization in the direction α for the spin component s (↑,↓). S corresponds to the total area of the honeycomb BZ and is taken as 8π^2/(3√(3)a_0^2). As shown in Fig. <ref>, we observe that the bulk polarization p_y has a quantized value of a_0/2 for the spin-↑ and a value of -a_0/2 for the spin-↓ component in the regime ξ>2. For 1<ξ<2, that is, in the QSH phase, p_y bears no quantized value. Furthermore, it is seen that the value of p_x is uniformly zero both above and below the critical point ξ=2. Thus, we establish that the second order topological phase of the band deformed Kane Mele model is an obstructed atomic insulator phase where the center of charge in a unit cell is displaced from the actual lattice point in real space and lies between two consecutive sites. The displacement of the center of charge results in fractional charge accumulation at two specific corners of the rhombic supercell, thus exhibiting second order topology in the form of localized corner modes. § CONCLUSION We study a prototypical quantum spin Hall system that shows two topological phases of different orders, brought about by band deformation. The system under study is the celebrated Kane Mele model on a honeycomb lattice, which exhibits the presence of helical edge modes as a signature of the quantum spin Hall phase. We smoothly deform the bandstructure of the Kane Mele model by varying one of the nearest neighbour hopping amplitudes (say t_1) of the honeycomb lattice while keeping the other two (say t) fixed. It is observed that the system retains the QSH phase as long as t_1/t<2. We plot the bandstructure of a zig-zag ribbon-like configuration to explicitly show the helical edge states, which disappear after the system is deformed beyond t_1/t=2.
However, once this critical point is crossed, the system transitions into a second order topological phase hosting robust second order modes at the two corners of a suitably formed rhombic supercell. We study bulk properties such as the evolution of the hybrid WCC, the mirror winding number and the spin resolved bulk polarization to characterize the different topological phases and follow their evolution. The evolution of the hybrid WCC shows a stark contrast between the first order and the second order topological phases. The nature of the evolution establishes that the ℤ_2 invariant is non-trivial in the QSH phase, while being trivial in the HOTI phase. The mirror winding number shows a similar behaviour, being non-trivial only in the QSH phase. Finally, to decipher the origin of the second order topological phase we calculate the spin resolved bulk polarization, which depicts the center of charge in a unit cell. We observe that for ξ>2, the value of |p_y| is quantized at a_0/2 for both spin sectors, while it is not quantized for ξ<2. p_x, on the other hand, is uniformly zero everywhere. This quantization of p_y indicates a displacement of the center of charge with respect to the real space lattice site and causes the appearance of a fractional charge excess at the corners of the rhombic supercell. The bulk polarization p_y for the spin-↑ sector is a_0/2 while it is -a_0/2 for the spin-↓ sector. This is the origin of the second order topological phase beyond ξ>2, which is an obstructed atomic insulator. ieeetr | http://arxiv.org/abs/2311.16011v1 | {
"authors": [
"Srijata Lahiri",
"Saurabh Basu"
],
"categories": [
"cond-mat.mes-hall"
],
"primary_category": "cond-mat.mes-hall",
"published": "20231127171003",
"title": "Wannier charge center, spin resolved bulk polarization and corner modes in a strained quantum spin Hall insulator"
} |
[email protected] study of thermodynamic properties of microscopic systems, such as a colloid in a fluid, has been of great interest to researchers since the discovery of the fluctuation theorem and associated laws of stochastic thermodynamics. However, most of these studies confine themselves to systems where effective fluctuations acting on the colloid are in the form of delta-correlated Gaussian white noise (GWN). In this study, instead, we look into the work distribution function when a colloid trapped in a harmonic potential moves from one position to another in a fluid medium with an elongational flow field where the effective fluctuations are given by the Ornstein-Uhlenbeck (OU) noise, a type of coloured noise. We use path integrals to calculate this distribution function and compare and contrast its properties to the case with GWN. We find that the work distribution function turns out to be non-Gaussian as a result of the elongational flow field, but continues to obey the fluctuation theorem in both types of noise. Further, we also look into the effects of the various system parameters on the behaviour of work fluctuations and find that although the distribution tends to broaden with increasing noise intensity, increased correlation in fluctuations acts to oppose this effect. Additionally, the system is found to consume heat from the surroundings at early times and dissipate it into the media at later times. This study, therefore, is a step towards gaining a better understanding of the thermodynamic properties of colloidal systems under non-linear complex flows that also display correlated fluctuations.Work distribution of a colloid in an elongational flow field and under Ornstein-Uhlenbeck noise Rati Sharma January 14, 2024 ===============================================================================================§ INTRODUCTION Thermodynamic properties of microscopic systems display markedly different behaviours compared to those exhibited by macroscopic systems. In particular, the principles that govern classical thermodynamics for macroscopic systems are often violated in the microscopic limit. For microscopic systems that are away from equilibrium, these violations appear as broad distributions of thermodynamic quantities such as entropy, heat and work <cit.>. The corresponding distributions have been observed in several experimental studies <cit.> and also match those calculated using the principles of non-equilibrium statistical mechanics <cit.>. Additionally, these distribution functions are now known to follow a certain universal principle, called the fluctuation theorem (FT) <cit.>. The fluctuation theorem <cit.> is an important result in the field of stochastic thermodynamics that provides insight into the distributions of thermodynamic quantities of systems that are in a non-equilibrium state. The FT was first proposed in 1993 <cit.>, in which Evans, Cohen and Morriss showed that there is a finite probability of entropy being consumed particularly when the system is far from equilibrium, leading to a violation of the second law of thermodynamics. Mathematically, it states that the ratio between the probability of entropy production to that of entropy consumption varies as exp(β S), where β = 1/k_BT^' is the Boltzmann factor and T^' is the temperature of the surrounding heat bath. This mathematical expression was later found to be true not just for entropy, but for other thermodynamic quantities as well, such as work and heat. 
The principle of Fluctuation theorem was experimentally verified for the first time in 2002 in which a plastic bead was trapped by an optical tweezer <cit.> and set to move around in a solution <cit.>. Following this, over the years, multiple other experiments have validated FT for various systems, such as systems of optically trapped colloidal beads, granular systems, systems in turbulent flows, nano-scale systems, material sciences, and also in many biological processes such as protein folding, chemical kinetics, gene regulations and RNA folding <cit.>. Another area where stochastic thermodynamics and FT are studied extensively is in the systems of active baths in which the thermodynamics of passive tracers or colloids are studied under the influence of random collisions with active components <cit.>. Here, in this work, we focus on the stochastic thermodynamics of a colloidal system that is under the influence of correlated fluctuations which was earlier found to have a significant influence on different physical, chemical, and biological processes <cit.>. Recent advances in the study of active baths using the principles of non-equilibrium statistical mechanics have gained significant attention in the past few years <cit.>. Active baths are composed of self-propelling units that can convert energy from the surroundings to directed motions. Because of this reason, a bath consisting of active components is always present in a non-equilibrium state. Understanding the behaviour of such systems has become the central interest of both theoreticians and experimentalists from very diverse research backgrounds. In order to model the dynamics of a passive tracer in such an active system, an active random force is considered in the equation of motion in addition to the Gaussian white noise (GWN) arising due to the thermal fluctuations in the system. The active counterpart of the random force appears from the random collisions of the passive tracer with the surrounding active components in the system. The active force is usually modelled using the Gaussian coloured noise, also known as the Ornstein-Uhlenbeck (OU) noise which has exponential correlations in time with a characteristic time limit (τ) <cit.>. The incorporation of this OU noise then enables a study of the time evolution of thermodynamic observables of such systems away from equilibrium<cit.>. Further, in the study of active baths, the contribution of thermal fluctuations can be ignored when the velocities of the active components are very high <cit.>. Although significant efforts have gone towards the study of stochastic thermodynamics in colloidal systems, its understanding in the presence of varying background fluid flows and velocities is still lacking. Earlier studies often considered the background medium of these systems to be either at rest or having a uniform velocity. The study of the motion of colloids in the presence of a constant flow of the surrounding medium <cit.> or that of a charged Brownian oscillator in an external electric field <cit.> are a couple such examples. But, in practice, whether it is a particle moving in an air medium or a cellular entity moving around in the cytoplasm, the nature of the motion of background media has a significant effect on the dynamics of the tracer particle. The media in such real systems in which colloids move around are found to exhibit motions that are non-uniform in nature and that keep the systems away from equilibrium. 
Also in the case of active baths, the motion of active components generates disorder in the media which in turn affects the motion of the passive tracers. Therefore, for the purposes of generality, it becomes important to study the statistical behaviours of such systems. But, unfortunately, the thermodynamics of such systems are very poorly studied because of the complications that arise from the complex flow patterns present in the surrounding media, rendering the flow anisotropic. Further, although the types and complexities of non-linearity can vary from one system to another, in practice, it is preferable to study some particular types of non-linear gradient flows such as one or a combination of shear, rotation and elongation, to get insights into the dynamics of a particle under such conditions. Some of the past research works have focused on the dynamics and thermodynamics of colloids in various types of non-linear flows, including shear and elongational flow <cit.>. In a few of these works, the probability distributions of different observables were also evaluated which were found to satisfy the fluctuation theorem <cit.>. Most of these studies, however, were carried out considering delta-correlated noise, also known as Gaussian white noise (GWN) arising from thermal fluctuations that are intrinsic to the system <cit.>. But, the thermodynamics of such systems in the presence of OU noise (or coloured noise) which is often considered as an external noise <cit.>, still remains unexplored. In this article, we use the path integral technique <cit.> to evaluate the work distribution function of a colloidal particle moving from one position to another in a fluid medium that is exhibiting a particular non-linear flow and where the colloid is itself under the influence of an external harmonic potential. We define our system and the corresponding equation of motion of a colloidal particle in elongational flow and in the presence of external harmonic potential through the overdamped Langevin equation and then go on to calculate the work done by the colloid in moving from one position to another. We introduce the elongational fluid flow in this system as it is a typical example of a non-linear flow of a fluid in which the x and y components of the velocity field (v⃗ (r⃗)) are coupled with each other while the z component of velocity remains independent of other components. Because of this coupling behaviour, the work done in moving the colloid from one position to another is a non-linear function of its time-dependent coordinates and the distribution of work done (which is the primary goal of our study) becomes asymmetric. We also compare this with the work distribution for constant background flow by modifying the velocity field such that the flow rate (γ̇) is set to zero in the expression of v⃗ (r⃗). For both types of background flow, we study the dynamics of the colloid by considering different types of noise such as white noise and coloured noise to examine how the work distributions change when internal and external noise are considered, respectively. The rest of the manuscript is organized as follows. The system under consideration in this study and the amount of work done due to the motion of the colloid is described in Section <ref>. The calculation of the probability distribution function for the final positions of the colloid in an elongational flow field using OU or coloured noise is shown in Section <ref>. 
A similar calculation for delta-correlated or white noise is given in Appendix <ref>. The calculation of the work distribution function and the corresponding results and discussion are presented in Section <ref>.§ MOTION OF A COLLOID IN AN ELONGATIONAL FLOW FIELD Consider a colloidal particle that is under the influence of an external harmonic potential and is moving in a fluid medium with an elongational flow. The dynamics of this colloid is highly affected by the noise present in the system. Let, r⃗(t) be the position of the colloid at any time t. The motion of the colloid in such a system can be described by an overdamped Langevin equation <cit.>, as follows,ζṙ⃗̇-ζv⃗(r⃗)+∂ U(r⃗)/∂r⃗ = η⃗ (t)where ζ is the friction coefficient. The velocity profile of the solvent medium is given by v⃗ (r⃗) = v⃗_0 + γ̇κr⃗, where v⃗_0 = v_0 (î+ĵ+k̂) is the constant background solvent velocity, γ̇ is the flow rate and κ= ([ 0 1 0; α 0 0; 0 0 0 ]) is the velocity gradient tensor which is responsible for the non-linear flow of the medium and coupling between different velocity components. The value of α varies from -1 to +1. α = -1 corresponds to pure rotation, 0 corresponds to shear flow and +1 corresponds to elongational flow. In our calculations, we have used α = 1 which is the case for elongational type of flows. The external harmonic potential is given by U(r⃗) = k r^2/2. η⃗ (t) is the random force (or noise) acting on the colloid. This noise, can either be uncorrelated, as is the case for thermal fluctuations, or correlated, as is the case for active noise. Here, we consider the noise to be in the form of the OU process <cit.>, wherein, the noise auto-correlation function decays exponentially over time with a characteristic time constant of τ. The OU process is a stochastic process whose dynamics can be represented in terms of the following stochastic differential equation: η̇(t) = -η(t)/τ + √(D)θ(t)/τ. Here θ (t) is a Gaussian white noise of zero mean and delta-correlated autocorrelation function. D represents the strength of the noise. The OU noise, therefore, has the following statistical properties:⟨η⃗_i (t) ⟩ = 0 ⟨η⃗_i (t) η⃗_j (t^') ⟩ = D/τδ_ijexp(-|t-t^'|/τ) with higher D corresponding to larger fluctuations about the mean. The colloid moves in the fluid from one position to another during a time interval of T and during the event it performs some work due to its motion. Since the process is stochastic, the trajectories between the initial and final points for the given interval are different for each sample. As a result, the work done, which is a path-dependent function, also varies from sample to sample. This ultimately leads to a distribution of work for a specific set of system conditions. This work done by the colloid following a particular trajectory can, in general, be calculated as, W_T=∫_0^Tv⃗(r⃗)·∇ U(r⃗) dt Here, v⃗(r⃗) is the velocity of the solvent and U(r⃗) is the external harmonic potential as mentioned earlier. For the evaluation of the work distribution under the influence of the OU noise, we consider two special cases. The first case is for the constant background flow which is obtained by setting γ̇=0 in the expression of v⃗(r⃗), whereas the second case is for elongational flow with nonzero γ̇. We discuss these two cases in detail in Section <ref>. We next compute the conditional probability distribution of the position of the colloidal particle before moving on to the calculation of the work distribution function. 
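Although the analysis below proceeds through path integrals, the dynamics defined above can also be sampled directly, which provides a useful cross-check on the distributions derived later. The following minimal Python sketch is illustrative only (the name simulate_work and the Euler-Maruyama discretization are our own choices, not the procedure used for the figures): it integrates the overdamped Langevin equation with an exponentially correlated OU force whose stationary autocorrelation matches Eq. (<ref>), and accumulates the work of Eq. (<ref>) along each trajectory.

```python
import numpy as np

def simulate_work(T=1.0, dt=1e-3, n_traj=20000, zeta=1.0, k=1.0, v0=1.0,
                  gdot=1.0, D=1.0, tau=0.1, seed=0):
    """Simulate zeta*rdot = zeta*v(r) - k*r + eta(t) with OU noise eta and
    return W_T = k * int_0^T [v0*(x+y+z) + 2*gdot*x*y] dt for each trajectory."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    r = np.zeros((n_traj, 3))                               # start at the origin
    # stationary OU initial condition: <eta^2> = D/tau per component
    eta = rng.normal(0.0, np.sqrt(D / tau), size=(n_traj, 3))
    W = np.zeros(n_traj)
    for _ in range(n_steps):
        x, y, z = r[:, 0], r[:, 1], r[:, 2]
        # elongational flow v(r) = v0*(1,1,1) + gdot*kappa.r with kappa = [[0,1,0],[1,0,0],[0,0,0]]
        v = np.stack([v0 + gdot * y, v0 + gdot * x, np.full_like(z, v0)], axis=1)
        # work increment dW = v(r).grad U dt = k v(r).r dt
        W += k * np.sum(v * r, axis=1) * dt
        # overdamped position update
        r += (v - (k / zeta) * r + eta / zeta) * dt
        # OU update chosen so that <eta(t)eta(t')> = (D/tau) exp(-|t-t'|/tau)
        eta += -eta / tau * dt + (np.sqrt(2.0 * D) / tau) * rng.normal(0.0, np.sqrt(dt), size=eta.shape)
    return W
```

Setting gdot=0 in this sketch recovers the constant-background-flow case discussed below.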
In the following calculations, we have used different values of T to show the system behaviour at different times. The probability distributions for the final positions of the colloid in different cases are shown at two different times (T = 1 and 10). Further, in the plots to illustrate the fluctuation theorem, we have used different values of T (upto 2.7 for OU noise in elongational flow) to show the dynamical changes in the probability distribution of positive and negative work done along a trajectory. Also, in the study of the dynamical change of distribution properties, we used a range of time scales depending on the choice of various parameters such as friction coefficient, relaxation time, etc., for different cases. For example, in the plots showing the mean work and standard deviation, T is varied between 0 and 10 for constant flow and between 0 and 14 for elongational flow. Similarly, in the plots showing the skewness parameter, T is varied between 0 and 6 for white noise and between 0 and 15 for coloured noise. Numerical computations exhibit divergence errors beyond these values of T and therefore have not been plotted. However, all the interesting characteristics of the quantities can already be seen for the plotted ranges of time. § CONDITIONAL PROBABILITY DISTRIBUTION FOR THE FINAL POSITION OF THE COLLOIDWe now focus on computing the probable distance the colloidal particle can travel in a time T given that it was at position r⃗_0 at the initial time. The resulting conditional probability can then be used to compute the distribution function for the work performed. Since the OU process <cit.> is Gaussian distributed, the probability distribution of the colloid following a particular trajectory during a time interval T can be obtained from <cit.> P[η⃗] ∝exp{-1/4D∫_0^Tdt[η⃗ (t)^Tη⃗(t)+τ^2η̇⃗̇(t)^Tη̇⃗̇(t)]} Using η⃗ and its first order time derivative (η̇⃗̇) from Eq. <ref> in Eq. <ref>, we getP[x,y,z] ∝ J[x,y,z] exp{-1/4D∫_0^Tdt[τ^2ζ^2(ẍ^2+ÿ^2+z̈^2)+(ζ^2+τ^2k^2+τ^2ζ^2γ̇^2)(ẋ^2+ẏ^2) +(ζ^2+τ^2k^2)ż^2-2ζ^2v_0(ẋ+ẏ+ż)+2ζ k(xẋ+yẏ+zż)+2ζ kτ^2(ẋẍ+ẏÿ+żz̈) -2ζ^2γ̇(ẋy+xẏ)-2ζ^2γ̇τ^2(ẋÿ+ẍẏ)-4ζγ̇kτ^2ẋẏ-4kζγ̇xy+(k^2+ζ^2γ̇^2)(x^2+y^2) +k^2z^2+(2ζ^2γ̇v_0-2ζ kv_0)(x+y)-2ζ kv_0z+3ζ^2v_0^2]} where, J[x,y,z] is the Jacobian for the change of variable from η⃗ to r⃗ <cit.> whose calculation is shown in Appendix <ref>. The conditional probability density, P(x_f,y_f,z_f,T|x_0,y_0,z_0), of finding the particle at (x_f,y_f,z_f) after time T given that the particle started moving from (x_0,y_0,z_0) at t=0, can be expressed as P(x_f,y_f,z_f,T | x_0,y_0,z_0) ∝ e^3kT/2ζe^-ζ k/4D[(x_f^2+y_f^2+z_f^2-x_0^2-y_0^2-z_0^2)+τ^2(v_x_f^2+v_y_f^2+v_z_f^2-v_x_0^2-v_y_0^2-v_z_0^2)]×∫_x(0)=x_0^x(T)=x_f𝒟[x] ∫_y(0)=y_0^y(T)=y_f𝒟[y] ∫_z(0)=z_0^z(T)=z_f𝒟[z] e^-S[x,y,z] where 𝒟[x], 𝒟[y] and 𝒟[z] represent the path integrals over x, y and z between the end points (x_0,y_0,z_0) and (x_f,y_f,z_f), and S[x,y,z] represents the action during the time interval of T,defined as S[x,y,z] = ∫_0^T dt ℒ(x,y,z,ẋ,ẏ,ż,ẍ,ÿ,z̈,t) Here, ℒ represents the Lagrangian of the system given by, ℒ(x,y,z,ẋ,ẏ,ż,ẍ,ÿ,z̈,t) = 1/4D[ τ^2ζ^2(ẍ^2 + ÿ^2 + z̈^2) +(ζ^2 + τ^2k^2 + τ^2ζ^2γ̇^2) (ẋ^2+ẏ^2) +(ζ^2+τ^2k^2)ż^2-2ζ^2v_0(ẋ+ẏ+ż)-2ζ^2γ̇(ẋy+xẏ)-2ζ^2γ̇τ^2(ẋÿ+ẍẏ)-4ζγ̇kτ^2ẋẏ-4kζγ̇xy+(k^2+ζ^2γ̇^2)(x^2+y^2)+k^2z^2+(2ζ^2γ̇v_0-2ζ kv_0)(x+y)-2ζ kv_0z+3ζ^2v_0^2] Eq. <ref> represents the path integral for the particle moving from the initial to the final position that can be solved using Feynman’s variational technique <cit.>. 
The motion of the colloid in such a system is highly stochastic and there can be an infinitely large number of trajectories between the initial and final points. The most probable trajectory along which the action is minimum can therefore be found using the Euler-Lagrange equation of motion, given by ∂ℒ/∂ r_i - d/dt(∂ℒ/∂ṙ_i) + d^2/dt^2(∂ℒ/∂r̈_i)=0 where the index i=1,2,3 corresponds to x, y and z components. Using the Lagrangian in Eq. <ref>, the equation of motion of the colloid becomes⃜r⃗ + Mr̈⃗̈ + Nr⃗ + PI⃗ = 0 whereM=[ -α_1α_20;α_2 -α_10;00 -β_1;];N=[α_3 -α_40; -α_4α_30;00 -β_2;];P=[α_5;α_5; -β_3;]and I⃗ is the 3×3 identity matrix. Here, α_1=γ̇^2+1/τ^2+k^2/ζ^2, α_2=2kγ̇/ζ, α_3=k^2/τ^2ζ^2+γ̇^2/τ^2, α_4=2kγ̇/ζτ^2, α_5=γ̇v_0/τ^2-kv_0/ζτ^2, β_1=1/τ^2+k^2/ζ^2, β_2=k^2/τ^2ζ^2 and β_3=kv_0/ζτ^2. The x and y components of Eq. <ref> are fourth-order coupled differential equations which are difficult to solve analytically. However, in the limiting case of constant background flow that can be obtained from the general case of colloid in an elongational flow medium by setting γ̇=0, equations of motions along individual components are independent of each other and, therefore, are possible to solve analytically. We solve for x(t), y(t) and z(t) numerically using Mathematica <cit.> by first setting the parameters of Eq. <ref> to specific constant values. We set τ, ζ, k and D to unity (in appropriate units) to avoid extremely large solutions. We consider v_0 and γ̇ to be unity as well. Solutions of x(t) and y(t) contain eight constants that can be evaluated by using the boundary conditions, which are x(0)=x_0, x(T)=x_f, ẋ(0)=v_x_0, ẋ(T)=v_x_f and y(0)=y_0, y(T)=y_f, ẏ(0)=v_y_0, ẏ(T)=v_y_f. Solution for the z-component is comparatively easy since the motion of the particle along the z-direction is independent of the other components, and therefore can be solved using the boundary conditions z(0)=z_0, z(T)=z_f, ż(0)=v_z_0 and ż(T)=v_z_f. For further simplification, we consider that the motion of the colloid starts from the origin at t=0 with zero initial velocity. We took the final velocity of the colloid to be unity as well along each direction. The action was then calculated using all these values of parameters within a time scale of 0 to T. Taking all these constants, the final form of the normalized conditional PDF at some arbitrary time T is P_N(x_f,y_f,z_f,T | x_0,y_0,z_0) = exp[A_1+A_2(x_f^2+y_f^2)+A_3 (x_f+y_f)+A_4 x_f y_f+A_5z_f^2+A_6z_f)] where A_i's are numerical constants depending upon the particular choice of parameters mentioned above. A similar calculation of the conditional probability distribution for the case of a colloid in a fluid flow under the influence of delta-correlated noise is provided in Appendix <ref>. The plots for the conditional probability distributions (as obtained from Eq. <ref> and Eq. <ref>) for the final position of the colloid in the case of constant flow (upper row) and elongational flow (lower row) are shown in Fig. <ref>. For each type of flow, we compare and contrast the distributions obtained considering white noise and coloured noise at two different final times, i.e., T=1 and T=10. This helps in comparing the shift of the distributions with time in different conditions. As seen from Fig. <ref>, for the case of constant flow, the distributions are shifted towards the direction of the background flow field and the diffusion happens symmetrically along every direction. 
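As a sketch of how the same boundary-value problem can be handled with open-source tools instead of Mathematica, the fourth-order Euler-Lagrange system for the coupled x and y components can be reduced to eight first-order equations and passed to SciPy's solve_bvp; the z component decouples and can be treated in the same way. The snippet below assumes the unit parameter values quoted above and an illustrative final point (x_f, y_f)=(1,1); it is not the code behind the reported constants A_i.

```python
import numpy as np
from scipy.integrate import solve_bvp

# unit parameter values as in the text
zeta = k = v0 = gdot = tau = 1.0
T = 1.0

a1 = gdot**2 + 1/tau**2 + k**2/zeta**2
a2 = 2*k*gdot/zeta
a3 = k**2/(tau**2*zeta**2) + gdot**2/tau**2
a4 = 2*k*gdot/(zeta*tau**2)
a5 = gdot*v0/tau**2 - k*v0/(zeta*tau**2)

def rhs(t, u):
    # u = [x, x', x'', x''', y, y', y'', y''']
    x, xp, xpp, xppp, y, yp, ypp, yppp = u
    x4 = a1*xpp - a2*ypp - a3*x + a4*y - a5
    y4 = -a2*xpp + a1*ypp + a4*x - a3*y - a5
    return np.vstack([xp, xpp, xppp, x4, yp, ypp, yppp, y4])

def bc(u0, uT):
    xf, yf = 1.0, 1.0        # illustrative final point
    # x(0)=y(0)=0, xdot(0)=ydot(0)=0, x(T)=xf, y(T)=yf, xdot(T)=ydot(T)=1
    return np.array([u0[0], u0[1], u0[4], u0[5],
                     uT[0] - xf, uT[1] - 1.0, uT[4] - yf, uT[5] - 1.0])

t = np.linspace(0.0, T, 101)
sol = solve_bvp(rhs, bc, t, np.zeros((8, t.size)))
# sol.sol(t) is the most probable trajectory on which the action S[x,y] is evaluated
```

The action is then obtained by integrating the Lagrangian of Eq. (<ref>) along sol.sol(t), for instance with a simple trapezoidal rule.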
But, in the case of the elongational flow, distributions indeed shift along the flow field direction but the shape is elongated along the diagonal axis of the x-y plane. This is expected particularly due to the nature of the flow of the surrounding medium as the flow of the medium is no longer uniform and is biased in a particular direction. As a result, the motion of the particle is more probable along the direction of flow compared to that along the other directions. One can also observe that the distributions in the case of coloured noise spread slower than that in the case of white noise for both types of flow. Eq. <ref> is now further used for the calculations of work distribution in moving the colloid from the initial position to the final position which is discussed in Sec. <ref>.§ RESULTS AND DISCUSSION Having computed and studied the dynamics of the particle through the conditional probability distribution for the final position of the particle after time T (Eq. <ref> and Fig. <ref>), we can now proceed with the calculation of the work distribution. Specifically, we calculate the distribution for work performed by the colloid during its evolution from the initial position (x_0,y_0,z_0) at t=0 to the final position (x_f,y_f,z_f) at any arbitrary time T. This distribution, given by P(W,T) and representing the amount of work W_T that is being performed in time T, can be expressed as,P(W,T) = ⟨δ (W-W_T) ⟩The angular brackets here denote the ensemble average taken over all possible trajectories between the initial and the final positions. Using the Fourier representation of the Dirac-delta function and taking the ensemble average over all possible trajectories, Eq. <ref> can be re-written asP(W,T)=e^3kT/2ζ∫_-∞^∞dλ∫_-∞^∞dx_0∫_-∞^∞dy_0∫_-∞^∞dz_0∫_-∞^∞dx_f∫_-∞^∞dy_f∫_-∞^∞dz_f × P_0(x_0,y_0,z_0) P_N(x_f,y_f,z_f,T | x_0,y_0,z_0) exp[iλ (W-W_T)]where,P_0(x_0,y_0,z_0)=δ(x_0)δ(y_0)δ(z_0)is the initial distribution of the colloid assuming that the colloid begins its motion from the origin. Substituting this initial distribution in Eq. <ref> and carrying out the integration over all the possible initial and final positions of the colloid, the characteristic function of P(W,T) can be obtained as𝒞_W(λ) = ⟨exp(-iλ W_T)⟩. Making use of this characteristic function, 𝒞_W(λ), the distribution for work can be calculated as P(W,T) = ∫_-∞^∞dλ exp(iλ W) 𝒞_W(λ) Further, the moments of the work distribution function can be found analytically from the characteristic function using the formulas⟨ W ⟩ = i∂/∂λ𝒞_W(λ)|_λ=0and⟨ W^2⟩ = -∂^2/∂λ^2𝒞_W(λ)|_λ=0 The first moment, ⟨ W ⟩ represents the mean value of P(W,T) and σ= √(⟨ W^2 ⟩ - ⟨ W ⟩^2) gives the standard deviation of the distribution of work done. One can now calculate these properties of the work distribution function for different types of flow and in the presence of different types of noise. Work done in moving the colloid in the velocity field v⃗(r⃗) = v⃗_0 + γ̇κr⃗, for a duration of time T is calculated via Eq. <ref> and is given by,W_T = k∫_0^T[v_0(x+y+z) + 2γ̇xy ] dt We now substitute the above equation (Eq. <ref>) into Eq. <ref> via Eq. <ref> and compute the work distribution function. Specifically, for the limiting case of the constant background flow, we set the value of the flow rate (γ̇) to zero which makes W_T, as given by Eq. <ref>, a linear function of the colloid's position. 
The corresponding work distribution is evaluated numerically by setting the friction coefficient (ζ), stiffness constant (k), and constant background velocity of the fluid (v_0) to unity. The relaxation time (τ) is fixed to a value of 0.1. The resulting distributions for the case of delta-correlated and exponentially correlated noise at T=1 (in the regime where the mean work and standard deviation of the distribution rapidly increase with time) are shown in Fig. <ref>a (shown with empty squares and filled squares, respectively). For both the cases, the distributions under the condition of constant background flow are symmetric about the mean values resembling Gaussian distribution-like properties. One can also verify that the mean of the distribution increases with increasing background velocities and the increase is higher in the case of delta-correlated noise compared to that for OU noise. It can also be verified that the distribution is more spread out for higher velocities reflecting the corresponding probable longer excursions of the colloid in the same time duration. Additionally, the symmetry of the distribution comes from the fact that the work done by the colloid in the case of constant background flow is a linear function of its trajectory. On the other hand, in the case of elongational flow where the flow rate (γ̇) is nonzero, W_T becomes a non-linear function of the position of the colloid (see Eq. <ref>). To find P(W,T) in this case, we first calculate the characteristic function, 𝒞_W(λ), from which we obtain the moments of the distribution following Eq. <ref>. The value of 𝒞_W(λ) is then further used in Eq. <ref> and integrated over λ to obtain the exact result for P(W,T). This distribution for the work done in the case of elongational flow field of the fluid is shown in Fig. <ref>b. Since the shape of this distribution, in contrast to the case for constant flow is no longer symmetric, we also measure its skewness (α). Skewness is the parameter that determines the asymmetry of the distribution and can be evaluated as α = [⟨ W^3 ⟩ - 3⟨ W ⟩σ^2 - ⟨ W ⟩ ^3]/σ^3, where ⟨ W^3 ⟩ is the third moment of the distribution. The distribution is symmetric for α=0 and higher the value of α, the more asymmetric the distribution becomes. For α > 0, the distribution is positive-skewed and for α < 0, the distribution is negative-skewed. The resulting work distribution function of the colloid in the presence of the OU noise is shown in Fig. <ref>b (filled squares) along with the distribution in the case of white noise (empty squares) for comparison. Distributions were calculated at some arbitrary time T=1 and other parameters such as friction coefficient (ζ), stiffness constant (k), noise strength (D), and the uniform component of the background velocity (v_0) were fixed to unity. In the case of the OU noise, the value of τ was fixed at 0.1. In both the cases (under white noise and OU noise conditions), the distributions are asymmetric which is unlike the case in a constant flow. This asymmetry in the presence of elongational flow appears because of the fact that the work done in this case is a non-linear function of the trajectory of the colloid. This is also evident from Fig. <ref>, where we observed that the distributions become asymmetric in the case of elongational flow, unlike the case for constant flow. A similar effect is also observed in the case of white noise. 
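The moments and the skewness quoted here can also be estimated directly from sampled trajectories. A minimal sketch, reusing the illustrative simulate_work sampler introduced earlier and our own helper name work_statistics, is given below; it simply histograms the sampled work values and applies the skewness formula stated above.

```python
import numpy as np

def work_statistics(W, bins=80):
    """Empirical P(W,T), mean, standard deviation and skewness from sampled work values."""
    mean = W.mean()
    sigma = W.std(ddof=0)
    skew = (np.mean(W**3) - 3*mean*sigma**2 - mean**3) / sigma**3
    hist, edges = np.histogram(W, bins=bins, density=True)
    centers = 0.5*(edges[1:] + edges[:-1])
    return centers, hist, mean, sigma, skew

# Example (using the simulate_work sketch above):
# W_el = simulate_work(T=1.0, gdot=1.0, tau=0.1)   # elongational flow, OU noise
# W_cf = simulate_work(T=1.0, gdot=0.0, tau=0.1)   # constant background flow
# _, _, m, s, a = work_statistics(W_el)            # a > 0 signals the positive skew
```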
In this case, the corresponding result is similar to the work distribution of a dumbbell-shaped polymer chain in an elongational flow where the fluctuations were modeled as white noise <cit.>. The presence of asymmetry in the work distribution can be described with the help of position distribution which is shown in Fig. <ref>. In the case of constant flow, the particle moves symmetrically in every direction which in turn gives a symmetric work distribution. But, in the case of elongational flow, the particle motion is more likely along the direction of the flow, and hence work done along a particular direction is higher compared to that along the other directions, and the work distribution, therefore, becomes asymmetric. The variation of mean work and standard deviation of the work distribution for constant flow and elongational flow are shown in Fig. <ref> for both types of noise. The standard deviation (σ) of the distribution is calculated by taking the square root of the variance. The mean work increases linearly with time for higher times whereas the increase is non-linear in the early time regime for all cases. It should also be noted that the linear increase in the case of white noise appears faster than that for coloured noise for both types of flow. Additionally, the standard deviation for white noise increases in the early time limit whereas it becomes saturated for higher times in both types of flows and for both kinds of noise considered here. This particular behaviour of mean work and standard deviation has a significant role in understanding the fluctuation theorem as well which is discussed later. Further, the time evolution of the shape parameter (skewness) of work distribution under elongational flow is shown in Fig. <ref> for both the noise conditions. It shows that the skewness increases very rapidly with time initially and after reaching a maximum, it gradually decreases over time. However, the magnitude of skewness continues to be sufficiently large (greater than zero) even after a significant amount of time has elapsed for both the cases. Nevertheless, as is also evident from Fig. <ref>b, it is clear that the measure of asymmetry (or the skewness parameter) is higher when white noise is considered in comparison to the case when the system is under the influence of coloured noise. Next, the function ln[P(W,T)/P(-W,T)] (=f(W,T) let's say) is plotted with respect to W (β set to unity) for constant flow and elongational flow in the presence of white and coloured noise to test the validation of the fluctuation theorem. The results are shown in Fig. <ref> in which the curves show different behaviours of the system under different flow properties. In the case of constant flow, f(W,T) for different values of T are straight lines of varying slopes passing through the origin for both white and coloured noise which are shown in Fig. <ref>a and Fig. <ref>b. The slope of the function initially decreases with increasing value of T, but after reaching a minimum, the slope again starts increasing. The slopes themselves depend on the choice of parameter values as well as the amount of time elapsed by the system. Further, Figs. <ref>c and <ref>d, show the variation of the function f(W,T) at different T values in the case of the elongational flow field and under the influence of white and coloured noise, respectively. Similar to the case of constant flow and as shown in the figure, the slope initially decreases with time and after reaching a minimum it starts increasing. 
However, unlike the case of constant flow, f(W,T) is a non-linear function of W in the case of the elongational flow field. To further quantify the non-linearity, we have also fitted each curve corresponding to varying T with the function f(W,T) = a W^m and the values of a and m for different values of T are given in Table <ref>. It is observed that in the presence of white as well as coloured noise, the curve is non-linear for short times (small T values) with m>1. However, the value of m decreases and becomes closer to 1 with increasing time resulting in the curves gradually becoming linear. Further, one can see that, the plots follow the trend f(W,T) ≈ W for T ≫τ resembling a phenomenon known as the stationary state fluctuation theorem (SSFT), expressed as, P(W,T)/P(-W,T)≈exp(W)which is found to be valid for large time limits only <cit.>. Therefore, we see from Fig. <ref> that the work fluctuation in both constant and elongational flow satisfies the fluctuation theorem in the presence of white as well as coloured noise. The non-linear and time-dependent behaviour of FT that we see here for this system was also found earlier in many other systems such as a system in a transient and stationary state in which a harmonically trapped Brownian particle is dragged through a fluid medium <cit.>, a harmonic oscillator in contact with thermostat and under the effect of external force <cit.> and a system of a simple electrical circuit consisting of a resistor and capacitor <cit.>. In all of the above examples, it was shown that the curves deviate from the f(W,T) = W line and the slopes vary with time and, as the system evolved over a sufficient amount of time, the non-linearity of the curve gradually decreased and approached linearity. Therefore, the work distribution for the colloid in such a system in a non-equilibrium state satisfies the principle of the fluctuation theorem. The fluctuation theorem was also found to be valid in elongational flow for a sufficiently large time where fluctuations were considered as white noise <cit.>. In our study, we have reported the work distribution for a colloid and have also established the FT in the presence of OU noise, both for constant background flow and elongational flow. So far, we limited our study to making comparisons of work distributions of a moving colloid in different types of background flows and under different noise conditions. We now look into the dependencies of work distributions of a colloid moving in an elongational flow field and that is influenced by OU noise. Fig. <ref> shows how P(W,T) changes with different parameters which were taken to be fixed during earlier computations and how these parameters affect the dynamics of such a colloid. In Fig. <ref>a, we have shown work distributions for different values of the stiffness coefficient (k) of the external harmonic potential.As k increases, the peak of the distribution shifts towards higher value of W, but the variation of mean and standard deviation with k shows an oscillatory behaviour. Fig. <ref>b shows distributions for varying relaxation time constants (τ). The standard deviations of the distributions decrease with higher values of τ. This indicates that the distribution gets a short duration of time to spread over for higher relaxation time resulting in lower values of work being sampled across different trajectories. Fig. <ref>c shows P(W,T) for varying noise strength (D). 
It is evident that the fluctuations in any thermodynamic quantity will increase with increasing noise strength and the result satisfies the argument. We have also shown the variation of P(W,T) with varying flow rates (γ̇) in Fig. <ref>d. The mean of the distribution increases with increasing flow rates which suggests that the colloid performs more work in the case of higher flow rates in an elongational flow field. Until this point, we mainly focused on the behaviour of the colloid in different flow types and different kinds of noise influencing its dynamical properties. We have also studied the temporal evolution of different parameters of the work distribution function and its dependence on the four system parameters. We now look into what this means for the overall thermodynamics of the system. Having computed the work done, the amount of heat exchanged (Q_T) with the surroundings up to a time T, can be calculated by invoking the first law of thermodynamics i.e., Q_T = W_T - Δ U, where Δ U = k(x_f^2+y_f^2+z_f^2 - x_0^2- y_0^2- z_0^2)/2 is the change in internal energy of the system. Making use of W_T from Eq. <ref>, the amount of heat exchanged is then given byQ_T = k∫_0^T[v_0(x+y+z) + 2γ̇xy ] dt - k/2(x_f^2+y_f^2+z_f^2)after invoking the initial condition x_0 = y_0 = z_0 = 0, as also mentioned earlier. Similar to the work distribution function, the heat distribution function, P(Q,T), representing the probability that Q_T amount of heat energy is being exchanged in time T is given by P(Q,T) = ⟨δ(Q - Q_T) ⟩. The characteristic function for the heat exchange can then be obtained following the same technique that was used to compute Eq. <ref> and <ref> for the calculation of the work distribution. Therefore,𝒞_Q(λ) = ⟨exp(- i λ Q_T )⟩where, Q_T can be evaluated from Eq. <ref>. Using this characteristic function, the heat distribution can be calculated asP(Q,T) = ∫_-∞^∞ dλ exp(iλ Q) 𝒞_Q(λ)The mean value of the heat energy exchanged with the surrounding medium can also be evaluated from the characteristic function through⟨ Q ⟩ = i∂/∂λ𝒞_Q(λ)|_λ=0Using this set of equations (Eqs. <ref>–<ref>), we have calculated the heat distribution function (P(Q,T)) and the mean value of Q_T for a colloid moving in an elongational flow field and influenced by white and coloured noise. The final results for the heat distribution of the colloid moving in the elongational flow field and under the influence of the two different noise types are shown in Fig. <ref>. One can see that at a sufficiently small time (T=0.5), the probability of positive heat exchange is approximately zero, but it gradually increases with time. Analogous to the distribution, the mean value of the heat exchange (shown in the inset of Fig. <ref>) is also negative at short times. It indicates that the system consumes heat energy from the surroundings to perform the work required to move the colloid from one position to another. A similar phenomenon of heat consumption in the small time limit was earlier also found in a system having multiple coupled harmonic oscillators kept at different temperatures <cit.> and a system with a trapped Brownian oscillator in aging gelatin droplet <cit.>. After a sufficient time has elapsed (T > 1.5), the mean heat exchange becomes positive and the system starts dissipating heat into the surrounding medium. At the time at which this transition occurs, the distribution, P(Q,T), becomes continuous with significant probability for positive Q values as well. 
The high probability for positive heat exchange continues beyond the transition time but the probability distribution itself starts to exhibit a singularity in the case of white noise (see Fig. <ref>a). On the other hand, in the case of coloured noise, the heat distribution shows a similar transition, but no discontinuity or singularity is observed for higher T values. This kind of transition in the heat distribution was also found when the fluctuation phenomena was studied in an electrical circuit having a resistor and a capacitor in parallel <cit.>. Heat distributions are known to generally have a discontinuity in these kinds of systems in which a passive colloid is driven by an external harmonic potential and noise <cit.>. Additionally, the spread of the heat distribution is smaller in the case of coloured noise compared to that in white noise which is consistent with the results obtained for the work distribution and the conditional probability distribution for the final position of the colloid as well, as shown in Figs. <ref> and <ref>, respectively. Now let us discuss the overall thermodynamic picture of the system in terms of the fluctuation theorem, work done by the colloid, and the heat exchanged with the surrounding medium. During the study of the fluctuation theorem we noticed that at a small time limit, the probability of positive work is much higher compared to that of the negative work. The ratio initially decreases with time and after reaching a minimum, it starts increasing again. This particular phenomenon can be explained with the help of Fig. <ref>, in which the temporal evolution of the mean and standard deviation of the work distribution is shown. When T is very small, the ratio P(W,T)/P(-W,T) is very large, but as T increases, the standard deviation of P(W,T) also increases which implies that the width of the distribution becomes broader with time. As a result, the probability of negative work also increases with time. After sufficient time has elapsed (in both noise types), the standard deviation becomes saturated and the mean of the distribution continues to increase linearly with time. This basically implies that the distribution shifts towards the higher W without getting broadened in width and because of this, the probability of positive work increases and that of negative work decreases with T. This explains why the ratio again starts increasing after a certain amount of time and continues to increase monotonically. Although the slope of the ratio was previously found to oscillate with time <cit.>, it is not the same for our system. The insets of Fig. <ref> show that at early times, the system consumes heat from the surroundings and part of the heat is used to increase the internal energy, and the remaining amount is used to perform some work which is required for the system to move from one state to another. The entropy change of the medium Δ S_m = Q/T^' also becomes negative. After some time, the system stops consuming heat, in fact, it starts dissipating heat to the surroundings, and therefore the entropy of the medium increases. Further, consistent with the results of Fig. <ref> and Fig. <ref>, the average work done and the average change in internal energy increases with time as it is dependent on the amount of displacement of the colloid from the initial position. 
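The same sampled trajectories can be used to test the fluctuation theorem and the heat bookkeeping discussed in this section. The sketch below is illustrative: the helper names ft_ratio, fit_power_law and heat are ours, and the heat routine assumes the sampler is extended to return the final positions as well. It estimates f(W,T)=ln[P(W,T)/P(-W,T)] from a histogram symmetric about W=0, fits the power-law form f(W,T)=aW^m used in Table <ref>, and evaluates Q_T=W_T-Δ U for trajectories starting at the origin.

```python
import numpy as np
from scipy.optimize import curve_fit

def ft_ratio(W, bins=60):
    """f(W,T) = ln[P(W,T)/P(-W,T)] estimated from a histogram symmetric about W = 0."""
    wmax = np.abs(W).max()
    hist, edges = np.histogram(W, bins=bins, range=(-wmax, wmax), density=True)
    centers = 0.5*(edges[1:] + edges[:-1])
    p_plus, p_minus = hist[centers > 0], hist[centers < 0][::-1]
    ok = (p_plus > 0) & (p_minus > 0)          # keep bins populated on both sides
    w = centers[centers > 0][ok]
    return w, np.log(p_plus[ok] / p_minus[ok])

def fit_power_law(w, f):
    """Fit f(W,T) = a W^m; m -> 1 signals the stationary-state (SSFT) form."""
    popt, _ = curve_fit(lambda W, a, m: a*np.power(W, m), w, f, p0=(1.0, 1.0))
    return popt                                 # (a, m)

def heat(W, r_final, k=1.0):
    """Q_T = W_T - Delta U with Delta U = k|r_f|^2/2 (trajectories start at the origin)."""
    return W - 0.5*k*np.sum(r_final**2, axis=1)
```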
§ CONCLUSIONS In summary, we deduce that the work distribution of the colloid is symmetric when it moves in a constant flow from the initial position to the final position during a time interval of T irrespective of whether it is under the influence of either delta-correlated noise or OU noise. But, in the case of elongational flow, the work distribution becomes asymmetric for both types of noise. This is a direct effect of the non-linearity introduced in the work done as a result of the elongational flow field. We have also studied the temporal dynamics of different distribution parameters, such as mean work and standard deviation. Specifically, in the early time limit, the increase in mean work is nonlinear in time, whereas it becomes linear after a finite amount of time for each type of flow and under both noise types. We also observed that the linear increase of mean work appears quicker in delta-correlated noise compared to that in OU noise for each flow. Additionally, the standard deviation of each distribution becomes saturated after a finite amount of time. The fluctuation theorem is also satisfied for the system considered in this study. We have also looked at how the work distribution changes with varying system parameters, such as stiffness constant k, relaxation time τ, noise strength D, and flow rate γ̇. Out of these four system parameters, the work distribution narrows only for increasing τ values, while it, in general, broadens for increasing values of other system parameters. This is because of the correlated nature of the fluctuations, which do not allow the colloid to undergo large excursions. This phenomenon is then also reflected in the work distribution function. Similarly, a study of other thermodynamic variables also gives valuable information about the system. At the small time limit, heat is consumed by the system to perform work. However, in the long time limit, as the system performs more work as a result of the increased displacement of the colloidal particle due to the elongational flow field, heat is dissipated into the surroundings. Therefore, the present study provides an insight into a yet unexplored aspect of the dynamics of a colloidal particle in a flow field, specifically, while it is also under the influence of a type of correlated noise. Future extensions of this study looking into the effects of other kinds of noise will really open up avenues for better theoretical understanding and eventually greater control over experimental studies in real biological systems as well.§ CALCULATION OF THE JACOBIAN The equation of motion of a colloidal particle moving in a medium and under the influence of external harmonic potential can be expressed by an overdamped Langevin equation ζṙ⃗̇ - ζv⃗(r⃗) + ∂ U(r⃗)/∂r⃗ = η⃗(t) where, v⃗(r⃗) = v⃗_0 + γ̇κr⃗ is the velocity of the background medium exhibiting elongational flow. For the case of the constant flow, γ̇=0. To calculate the Jacobian of the coordinate transformation (J[x,y,z] as mentioned in Eq. <ref>) from η⃗ to r⃗ <cit.>, we modified Eq. <ref> as ζṙ⃗̇-ζv⃗_0-D⃗·r⃗ = η⃗(t) where D⃗=-kI⃗ for constant flow and D⃗=ζγ̇κ-kI⃗ for elongational flow, I⃗ is the unit tensor. We can then write Eq. <ref> in discrete form which reduces to η⃗(t_i)=ζr⃗(t_i)-r⃗(t_i-1)/Δ t - D⃗·r⃗(t_i)+r⃗(t_i-1)/2 - ζv⃗_0 where i=1,2,...,N, corresponds to different time steps. J[x,y,z] can be found by calculating det[∂η⃗(t_i)/∂r⃗(t_j)], i,j=1,2,...,N, which eventually gives a N × N lower triangular matrix. 
The determinant can then be easily calculated as follows J[x,y,z]= ∏_i=1^N(ζ/Δ t - D⃗·I⃗/2 ) = ∏_i=1^N(ζ/Δ t + k/2)^3=(ζ/Δ t)^3N∏_i=1^N(1+3kΔ t/2ζ +𝒪(Δ t^2)) =(ζ/Δ t)^3Nexp(3kΔ t/2ζ) The total time ranging from 0 to T is divided into N equal segments of equal width Δ t such that the time elapsed after i^th step is given as t_i=iΔ t. For the continuum limit N→∞, Δ t→ 0 and NΔ t→ T, the above equation reduces to J[x,y,z]=(ζ/Δ t)^3Nexp(3kT/2ζ). § CONDITIONAL PDF FOR THE CASE OF GAUSSIAN WHITE NOISEThe Gaussian white noise usually accounting for thermal fluctuations considered to be produced from random collisions of the colloid with other surrounding particles, has the following properties: ⟨η⃗_i(t) ⟩ = 0 ⟨η⃗_i(t) η⃗_j(t^') ⟩ = 2ζ k_BT^'δ_ijδ (t-t^')Since the white noise is Gaussian distributed, the probability distribution of the noise can be written as, P[η⃗] ∝exp{-1/8ζ k_BT^'∫_0^Tdtη⃗(t)^Tη⃗(t)} Substituting the value of η⃗(t) from Eq. <ref> into Eq. <ref>, the probability can be evaluated as P[x,y,z] ∝ J[x,y,z] exp{-1/8ζ k_BT^'∫_0^T[ ζ^2(ẋ^2+ẏ^2+ż^2) - 2ζ^2v_0(ẋ+ẏ+ż)+ 2ζ k(xẋ+yẏ+zż) - 2ζ^2γ̇(ẋy+xẏ) - 4ζ kγ̇xy + (k^2+ζ^2γ̇^2)(x^2+y^2) + k^2z^2+ (2ζ^2γ̇v_0-2ζ kv_0)(x+y)- 2ζ kv_0z + 3ζ^2v_0^2] } J[x,y,z] is the Jacobian for transforming the coordinates from η⃗ to r⃗ whose calculation is given in Appendix <ref>. The conditional probability distribution for finding the colloid at (x_f,y_f,z_f) after a finite time T given that it was at (x_0,y_0,z_0) at t=0 is given by P(x_f,y_f,z_f,T | x_0,y_0,z_0)∝ e^3kT/2ζ e^-k(x_f^2+y_f^2+z_f^2-x_0^2-y_0^2-z_0^2)/4k_BT^'×∫_x(0)=x_0^x(T)=x_f𝒟[x] ∫_y(0)=y_0^y(T)=y_f𝒟[y] ∫_z(0)=z_0^z(T)=z_f𝒟[z] e^-S[x,y,z] where 𝒟[x], 𝒟[y] and 𝒟[z] represent the path integrals over x, y and z between the end points (x_0,y_0,z_0) and (x_f,y_f,z_f) and S[x,y,z] represents the action, defined as S[x,y,z] = ∫_0^T dt ℒ(x,y,z,ẋ,ẏ,ż,t) Here, ℒ is the Lagrangian of the system given by ℒ (x,y,z,ẋ,ẏ,ż,t) = 1/8ζ k_BT^'[ ζ^2(ẋ^2+ẏ^2+ż^2) - 2ζ^2v_0(ẋ+ẏ+ż) - 2ζ^2γ̇(ẋy+xẏ) - 4ζ kγ̇xy + (k^2+ζ^2γ̇^2)(x^2+y^2) + k^2z^2 + (2ζ^2γ̇v_0 - 2ζ kv_0)(x+y) - 2ζ kv_0z + 3ζ^2v_0^2] The most probable trajectory of the colloid between two given endpoints in such a system can be obtained by using the Euler-Lagrange equation of motion, which is given by ∂ℒ/∂ r_i - d/dt(∂ℒ/∂ṙ_i)=0 Using the Lagrangian in Eq. <ref>, the equations of motion of the colloid along individual components can be obtained as r̈⃗̈ + Rr⃗ + SI⃗ = 0 where R=([ -α_1α_20;α_2 -α_10;00 -β_1 ]), S=([ α_3; α_3; -β2 ]), I⃗ is the 3×3 identity matrix, α_1 = k^2/ζ^2 + γ̇^2, α_2 = 2kγ̇/ζ, α_3 = γ̇v_0 - kv_0/ζ and β_1 = k^2/ζ^2, β_2 = kv_0/ζ. Solutions to the above equations can be obtained by integrating them using the boundary conditions x(0)=x_0, x(T)=x_f, y(0)=y_0, y(T)=x_f, z(0)=z_0, z(T)=z_f. Similar to the case with OU noise, we solved Eq. <ref> numerically by setting x_0=y_0=z_0=0 and the total time taken to be unity. Other parameters were set to have the same values as before. The solutions were then used to evaluate the action using Eq. <ref> from which we calculated the normalized probability distribution for the final position of the colloid, given byP_N(x_f,y_f,z_f,T | x_0,y_0,z_0) = A_1 exp[ A_2(x_f+y_f) + A_3x_fy_f+ A_4(x_f^2+y_f^2) +A_5 z_f +A_6 z_f^2 ] where A_i's are some constants that depend on a particular set of parameter values. The flow rate is taken to be unity for the calculation of the PDF. The distribution is shown in Fig. <ref> both for white noise (left) and coloured noise (right) at two different times. 
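For comparison with the coloured-noise case, the only change required in a trajectory-level simulation of this appendix is the generation of the random force: the delta-correlated force of Eq. (<ref>) becomes, in a time-stepped scheme, a Gaussian of variance 2ζ k_BT'/Δ t per step. A minimal sketch of this step (with kT standing for k_BT', set to unity as elsewhere) is given below; it is an illustration, not the procedure used for the figures.

```python
import numpy as np

def white_noise_force(n_traj, dt, zeta=1.0, kT=1.0, rng=None):
    """Delta-correlated force: <eta_i(t) eta_j(t')> = 2 zeta kT delta_ij delta(t - t').
    In a discrete scheme this is a Gaussian of variance 2 zeta kT / dt per step."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.normal(0.0, np.sqrt(2.0 * zeta * kT / dt), size=(n_traj, 3))
```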
unsrt | http://arxiv.org/abs/2311.15768v1 | {
"authors": [
"Debasish Saha",
"Rati Sharma"
],
"categories": [
"cond-mat.soft",
"cond-mat.stat-mech",
"physics.flu-dyn"
],
"primary_category": "cond-mat.soft",
"published": "20231127123920",
"title": "Work distribution of a colloid in an elongational flow field and under Ornstein-Uhlenbeck noise"
} |
Induced current in braneworld model in high-dimensional AdS bulk in the cosmic string spacetime W. Oliveira dos Santos^1 E-mail: [email protected] , E. R. Bezerra de Mello^1 E-mail: [email protected] ^1Departamento de Física, Universidade Federal da Paraí ba 58.059-970, Caixa Postal 5.008, João Pessoa, PB, Brazil January 14, 2024 =================================================================================================================================================================================================================================================================== In this paper we investigate the bosonic current induced by a brane and a magnetic flux running along the idealized cosmic string in a (D+1)-dimensional anti-de Sitter (AdS) background. We consider the brane is parallel to the AdS boundary and the cosmic string is orthogonal to them. Moreover, we assume that on the brane the charged bosonic field obeys the Robin boundary condition. The brane divides the space into two regions with different properties of the vacuum state. We show that the only nonzero component of the current density is along the azimuthal direction in both regions. In order to develop this analysis we calculate, for both regions, the positive frequency Wightman functions. Both functions present a part associated with the AdS in presence of a cosmic string only, and the other part induced by the brane. In this paper we consider only the contributions induced by the brane. We show that in both regions the azimuthal current densities are odd functions of the magnetic flux along the string. Different analytic and numerical analysis are performed and an application of our results is provided for the Randall-Sundrum braneworld model with a single brane. PACS numbers: 98.80.Cq, 11.10.Gh, 11.27.+d § INTRODUCTION One of the most fascinate topological object predicted by the Grand Unified Theory as consequence of gauge symmetry breaking is the cosmic string <cit.>. Although recent observational data on the cosmic microwave background have discarded cosmic strings as the primary source for primordial density perturbation, these objects are still candidate for the generation of a number of interesting physical effects such as gamma ray bursts <cit.>, gravitational waves <cit.> and high energy cosmic rays <cit.>. The gravitational field produced by a cosmic string may be approximated by a planar angle deficit in the two-dimensional sub-space. The strength of gravitational interactions of cosmic strings with matter is its tension, that is characterized by the dimensionless parameter, Gμ_0, defined in natural units. In this expression G represents the Newton's gravitation constant and μ_0 the linear mass density of the string, which is proportional to the square of the energy scale where the gauge symmetry is broken. The anti-de Sitter (AdS) spacetime is a solution of the Einstein equation in presence of negative cosmological constant. Being maximally symmetric spacetime, it allowed us to solve many problems in quantum fields exactly (see, for example <cit.>-<cit.>).Besides, the importance of this background has increased when it was observed that it generically arises as a ground state in extended supergravity and in string theories. Moreover, additional interest in this geometry was generated by the appearance of two models where AdS spacetime plays a special role. 
The first model, the AdS/CFT correspondence (for a review see <cit.>), represents a realization of the holographic principle and relates string theories or supergravity in the AdS bulk with a conformal field theory living on its boundary. The second model is the braneworld scenario with large extra dimensions. This model offers a solution to the hierarchy energy scale problem associated with the gravitational and electroweak interactions (for comprehensive discussions on braneworld gravity and cosmology, see <cit.>).The analysis of the spacetime geometry due to a cosmic string in AdS bulk has been considered in <cit.>. There it was shown that at distances larger than the string's core radius, the gravitational effects due to the presence of the string is well described by a planar deficit angle in the AdS metric, similarly to the case in the Minkowskian bulk. This non-trivial topology by its turn provides additional vacuum polarization effects. In this way the combined effect of the curvature and non-trivial topology contribute to the evaluation of the vacuum expectation value of several physical observables, as the energy-momentum tensor. The investigation of the vacuum expectation value (VEV) of the bosonic current density, ⟨ j^μ⟩, and the energy-momentum tensor, ⟨ T^μ_ν⟩, induced by anidealized cosmic string carrying magnetic flux running along its core in a (D+1)-dimensionalAdS spacetime, have been analyzed in <cit.> and <cit.>, respectively. Moreover, in both papers it was admitted a compactification of one dimension along the string, and the presence of an extra magnetic flux running its center. Also the study of the VEV of fermionic energy-momentum tensor and current density in a (1+4)-dimensional AdS space with a compactified cosmic string have been considered in <cit.> and <cit.>, respectively.The analysis of the effects due to a brane on the vacuum fermionic current, ⟨ j^μ⟩, and the energy-momentum tensor, ⟨ T^μ_ν⟩, with the brane parallel to the AdS boundary, were studied in <cit.> and <cit.>, respectively. The analysis of the VEV of the energy-momentum tensor associated with a charged bosonic field on the AdS background in the presence of a cosmic string consideringa brane parallel to the AdS boundary, was developed in <cit.>. Here in this paper, we want to continue in the same line of investigation, but at this time we will turn our attention to analyze the effects of the brane on the VEV of the induced current in both regions defined by the brane. The organization of the paper is the following: In section <ref> we present the geometry of the spacetime that we want to consider, and the complete set of normalized positive energy solutions of the Klein-Gordon equation considering the presence of a brane parallel to the AdS boundary. In section <ref> we construct the Wightman functions for both regions of the space separated by the brane. These regions are: between the AdS boundary and the brane (L-region) and between the brane and AdS horizon (R- region). The corresponding Wightman functions are decomposed in a part due to the AdS spacetime in presence of a cosmic sting in the absence of brane, plus the ones induced by the brane. In section <ref> we evaluate the VEV of the bosonic current densities in both regions.Because the above mentioned decomposition of the Wightman functions, the same happens for the current densities. Also in this section, various asymptotic limits of the currents are considered and numerical results are presented. 
In Section <ref> we apply our analysis to the Randal-Sundrum type model with a single brane. In Section <ref> we summarize the most relevant result obtained. Throughout the paper, we use natural units G=ħ =c=1.§ KLEIN-GORDON EQUATION The main objective of this section is to provide the complete set of normalized solutions of the Klein-Gordon equation associated with a massive scalar charged quantum field propagating in a (D+1)-dimensional AdS spacetime, with D≥ 3, in presence of a magnetic-flux-carrying cosmic string and taking into account the presence of a brane parallel to the AdS boundary.So, with this objectivewe present first, the line element, in cylindrical coordinate, in (1+3)-dimensional AdS spacetime in the presence of a cosmic stiring:ds^2=e^-2y/a[dt^2-dr^2-r^2dϕ ^2]-dy^2 .In the above line element the idealized cosmic string is along the y-axis,r⩾ 0 and ϕ∈ 0, 2π /q] define the coordinates on the conical geometry, (t,y)∈ (-∞ , ∞ ). The parameter a is associated with the curvature scale of the background spacetime; moreover, the parameter q> 1 provides the planar angle deficit, δϕ=2π(1-q^-1), produced by the cosmic string. In our analysis, we will use the Poincaré coordinate defined by w=ae^y/a. In this case the line element above is expressed in the form conformally related to the line element associated with a cosmic string in Minkowski spacetime:ds^2 = (a/w)^2[dt^2 - dr^2 - r^2dϕ^2 - dw^2 ].The new coordinate, w, is defined in the interval 0, ∞ ). Two values for this coordinates deserve to be mentioned: w=0 and w=∞. They correspond to the AdS boundary and horizon, respectively.The generalization of (<ref>) to (D+1)-dimensional, with D>3, is given byds^2 = (a/w)^2[dt^2 - dr^2 - r^2dφ^2 - dw^2 - ∑_i=4^D(dx^i)^2],with x^i∈ (-∞ , ∞ ).The curvature scale a in (<ref>) is related to the cosmological constant, Λ, and the Ricci scalar, R, by the formulasΛ =-D(D-1)/2a^2 ,R=-D(D+1)/a^2 .§.§ Klein-Gordon equationThe field equation which governs the quantum dynamics of a charged bosonic field with mass m, in a curved background and in the presence of anelectromagnetic potential vector, A_μ, reads (𝒟^2 + m^2 + ξ R)φ(x) = 0 , where the differential operator in the field equation reads 𝒟^2=1/√(|g|)𝒟_μ(√(|g|)g^μν𝒟_ν),𝒟_μ=∂ _μ+ieA_μ with g=(g_μν).In the above equation, we also consider the presence of a non-minimal coupling, ξ, between the field and the geometry represented by the Ricci scalar, R. Two specific values for the curvature coupling are of special interest: the value ξ = 0 corresponds to minimal coupling, and and ξ = D - 1/4D, the conformal coupling, respectively. Moreover, we consider only the component vector potential A_ϕ=-qΦ_ϕ/(2π) different from zero. This vector potential corresponds to a magnetic flux, Φ_ϕ, running along the string's axis. In this work we admit the presence of a codimension one flat boundary, hereafter named brane, located at w=w_0 and parallel to the AdS boundary, and that the the field operator obeys the gauge invariant Robin boundary condition, (1+β n^μ𝒟_μ)φ(x)=0,w=w_0.The parameter β in the equation above is a constant and it encodes the properties of the brane. In particular, the special cases β=0 and β=∞ correspond to the Dirichlet and Neumann boundary conditions, respectively. In addition, n^μ is the inward pointing (with respect to the region under consideration) normal to the brane at w=w_0. In the region 0≤ w≤ w_0 (referred to as the L(left)-region this normal is given as n^μ=-δ_3^μa/w and in the region w_0≤ w≤∞ R(right)-region asn^μ=δ_3^μa/w. 
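As a quick numerical illustration of the background quantities introduced above (this sketch is ours, not part of the original paper; the variable names are arbitrary), the cosmological constant, the Ricci scalar and the conformal coupling quoted in the text can be tabulated directly from D and the curvature scale a. The choice D = 3 below corresponds to the value used for the numerical results later in the paper, and the last line checks the ratio R/Λ = 2(D+1)/(D-1) implied by the two formulas.

# Illustrative check of the AdS background relations quoted above (ours, not the authors').
def ads_background(D, a):
    Lambda_ = -D * (D - 1) / (2.0 * a**2)   # cosmological constant quoted in the text
    Ricci   = -D * (D + 1) / a**2           # Ricci scalar quoted in the text
    xi_conf = (D - 1) / (4.0 * D)           # conformal curvature coupling quoted in the text
    return Lambda_, Ricci, xi_conf

D, a = 3, 1.0
Lam, R, xi = ads_background(D, a)
print(Lam, R, xi)                                        # -3.0, -12.0, 1/6 for D = 3, a = 1
print(abs(R / Lam - 2 * (D + 1) / (D - 1)) < 1e-12)      # ratio implied by the two formulas

The same bookkeeping covers the two special couplings (ξ = 0 minimal, ξ = (D-1)/(4D) conformal) and the Dirichlet (β = 0) and Neumann (β → ∞) limits of the Robin condition used below.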
In the geometry defined by (<ref>) and in the presence of the azimuthal vector potentials, A_ϕ, the Klein-Gordon (KG) equation (<ref>) becomes [∂^2/∂ t^2 - ∂^2/∂ r^2 - 1/r∂/∂ r - 1/r^2(∂/∂ϕ + ieA_ϕ)^2 - ∂^2/∂ w^2-(1-D)/w∂/∂ w. . + M(D,m,ξ)/w^2 - ∑_i=4^D∂^2/∂ (x^i)^2]φ(x) = 0,with M(D,m,ξ) = a^2m^2 - ξ D(D+1). On basis of previous analysis presented in <cit.>, the positive energy solution of (<ref>) can be expressed by φ_σ(x) = C_σw^D/2W_ν(pw)J_q|n +α|(λ r)e^- iE t + iqnϕ + ik⃗·x⃗_∥ ,with the function W_ν(pw), W_ν(pw)=C_1J_ν(pw)+C_2Y_ν(pw) ,being given by a linear combination of the Bessel and Neumann functions <cit.>. The order of these functions is, ν = √(D^2/4 + a^2m^2 - ξ D(D+1)) .The enrgy and the parameter α in the Bessl function associated with the radial coordinate are: E= √(λ^2 + p^2 + k⃗^2), α = eA_ϕ/q = -Φ_ϕ/Φ_0 , being Φ_0=2π/e, the quantum flux. Moreover, in (<ref>) x⃗_∥ corresponds the coordinates defined in the (D-3) extra dimensions, and k⃗ the corresponding momentum. In (<ref>),σ represents the set of quantum numbers (n, λ, p, k⃗), with n=0,±1,±2,…, λ≥ 0, -∞<k^j<∞ for j=4,...,D. As to the quantum number p, it is determined, separately, for each region divided by the brane. The coefficient C_σ in (<ref>) is determined from the normalization condition ∫ d^Dx√(|g|)g^00φ_σ'^*(x)φ_σ(x)= 1/2Eδ_σ,σ' ,with delta symbol on the right-hand side is understood as Dirac delta function for the continuous quantum number, and Kronecker delta for the discrete one.§.§.§ Normalized wave-functions in R-region First let us consider the R-region. Due to the Robin boundary condition (<ref>) on the flat boundary, we obtain the relation C_2/C_1=-J̅_ν(pw_0)/Y̅_ν(pw_0) for the coefficients in (<ref>). From now on, we use the compact notationF̅(x)=A_0F(x)+B_0xF^'(x),with the coefficients A_0=1+Dβ/2a,B_0=β/a,in the expressions of the wave-functions. The normalized solutions of KG equation in R-region compatible with the boundary condition (<ref>), can be written presented as, φ_(R)σ(x) = C_(R)σw^D/2g_ν(pw_0,pw)J_q|n +α|(λ r)e^- iE t + iqnϕ + ik⃗·x⃗_∥ , where we have introduced the function g_ν(u,v)=J_ν(v)Y̅_ν(u)-J̅_ν(u)Y_ν(v).Due to thecontinuous values assumed by the quantum number p, the normalization condition (<ref>), provides |C_(R)σ|^2=(2π)^2-Dqpλ/2Ea^D-1[J̅_ν^2(pw_0)+Y̅_ν^2(pw_0)] . §.§.§ Normalized wave-functions in L-region In the L-region, the integration over w in (<ref>), is restricted in the interval 0≤ w≤ w_0. In the case of C_2≠0 in (<ref>), the integral over w diverges at the lower limit w=0 for the case with ν≥1. Consequently, in that region, we should take C_2=0. On the other hand, in the interval 0≤ν <1, the solution (<ref>) with C_2≠0 is normalizable and in order to uniquely define the mode functions an additional boundary condition at the AdS boundary is required <cit.>. Here, we assume the Dirichlet boundary condition on w=0 which implies C_2=0. Thus, with this choice, the mode function in the L-region are given by φ_(L)σ(x) = C_(L)σw^D/2J_ν(pw)J_q|n +α|(λ r)e^- iE t + iqnϕ + ik⃗·x⃗_∥ .According to the Robin boundary condition (<ref>), the eigenvalues of the quantum number p obey the relation: J̅_ν(pw_0)=0,being J̅_ν(x)given by (<ref>), with A_0=1-Dβ/2a,B_0=-β/a.The eigenvalues of (<ref>) are given by p=p_ν,i/w_0, with p_ν,i being the positive zeros of the function J̅_ν(x), enumerated by i=1, 2,.... We can observe that the roots p_ν,i do not depend on the location of the brane. 
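The eigenvalue condition J̄_ν(pw_0) = 0 has no closed-form solution for general Robin coefficients, but its roots are easy to obtain numerically. The Python sketch below is our illustration only (all parameter values are arbitrary and the helper names are invented); it brackets sign changes of J̄_ν on a grid and refines them with Brent's method, using the L-region coefficients A_0 = 1 - Dβ/(2a) and B_0 = -β/a and the order ν = √(D²/4 + a²m² - ξD(D+1)) quoted above.

# Numerical illustration (ours, not from the paper): first few positive roots p_{nu,i} of
# Jbar_nu(x) = A0*J_nu(x) + B0*x*J_nu'(x) = 0, the L-region eigenvalue condition above.
import numpy as np
from scipy.special import jv, jvp
from scipy.optimize import brentq

def jbar(x, nu, A0, B0):
    return A0 * jv(nu, x) + B0 * x * jvp(nu, x, 1)

def robin_bessel_zeros(nu, beta, a, D, n_roots=5, x_max=40.0, n_grid=4000):
    A0, B0 = 1.0 - D * beta / (2.0 * a), -beta / a      # L-region coefficients quoted above
    xs = np.linspace(1e-6, x_max, n_grid)
    fs = jbar(xs, nu, A0, B0)
    roots = []
    for x1, x2, f1, f2 in zip(xs[:-1], xs[1:], fs[:-1], fs[1:]):
        if f1 * f2 < 0:                                 # a sign change brackets a root
            roots.append(brentq(jbar, x1, x2, args=(nu, A0, B0)))
        if len(roots) == n_roots:
            break
    return roots

# Example parameters (arbitrary, illustration only): D = 3, a = 1, beta = 0.5, and a
# minimally coupled field (xi = 0) with a*m = 1 in the order nu defined above.
D, a, beta, xi, am = 3, 1.0, 0.5, 0.0, 1.0
nu = np.sqrt(D**2 / 4.0 + am**2 - xi * D * (D + 1))
print(robin_bessel_zeros(nu, beta, a, D))

In the Dirichlet limit β = 0 the routine reproduces the ordinary zeros of J_ν, and, as noted above, the roots p_ν,i themselves do not depend on the brane location w_0; only the physical momenta p = p_ν,i/w_0 do.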
Considering now the normalization condition (<ref>), with δ_p,p^'=δ_i,i^', and integrating over w in the interval [0,w_0], after some algebraic manipulations involving the Bessel functions, we obtain |C_(L)σ|^2=(2π)^2-Dqp_ν,iλ T_ν(p_ν,i)/w_0a^D-1√(p_ν,i^2+w_0^2(λ^2+k⃗^2)) .In the above expression we have for the function T_ν(z), the following result:T_ν(z)=z[(z^2-ν^2)J_ν^2(z)+z^2(J_ν^'(z))^2]^-1 . § WIGHTMAN FUNCTION In this section we want to obtain the positive frequency Wightman function induced by the brane for both regions, R and L, in a closed form. The positive frequency Wightman function is defined byW(x,x^')=⟨ 0|φ̂(x)φ̂^†(x^')|0⟩, where |0⟩ stands for the vacuum state. Here we assume that the field operator, φ̂(x), is prepared in the Poincaré vacuum state. To evaluate this function, we use the mode sum formula W(x,x^')=∑_σφ_σ(x)φ_σ^∗(x^'). §.§ R-regionThe normalized positive energy solution of the KG equation in R-region, is given by combining previous results, (<ref>), (<ref>) and (<ref>). It reads,φ_(R)σ(x) = √((2π)^2-Dqpλ/2Ea^D-1[J̅_ν^2(pw_0)+Y̅_ν^2(pw_0)])w^D/2g_ν(pw_0,pw)J_q|n +α|(λ r)e^- iE t + iqnϕ + ik⃗·x⃗_∥ . Substituting the above expression into (<ref>), we get:W_R(x,x^') = q(ww')^D/2/2a^D-1(2π)^D-2∑_σpλ/Eg_ν(pw_0,pw)g_ν(pw_0,pw^')/(J̅_ν^2(pw_0)+Y̅_ν^2(pw_0))J_q|n +α|(λ r)J_q|n +α|(λ r^')× e^- iE(t-t^') + iqn(ϕ-ϕ^') + ik⃗·(x⃗_∥- x⃗^'_∥),where we are using the compact notation below for the summation over σ:∑_σ=∑_n=-∞^+∞ ∫ dp∫_0^∞ dλ ∫ dk⃗ . Now performing a Wick rotation on the time coordinate and using the identitye^-EΔτ/E=2/√(π)∫_0^∞ ds e^-s^2E^2-(Δτ)^2/(4s^2),being E=√(λ^2+p^2+k⃗^2), we can integrate over λ and k⃗ with the help of <cit.>. The final result result isW_R(x,x^') = qrr^'/2(2π)^D/2a^D-1(ww^'/rr^')^D/2∫_0^∞dv v^D/2-2e^-r^2+r^'2+Δx⃗_∥^2-Δ t^2/2rr^'v∑_ne^inqΔϕI_q|n+α|(v)× ∫_0^∞dppe^-rr^'/2vp^2g_ν(pw_0,pw)g_ν(pw_0,pw^')/J̅_ν^2(pw_0)+Y̅_ν^2(pw_0) ,where we have introduced a new variable, v=rr^'/(2s^2).The above Wightman function contains the contributions coming from the cosmic string in AdS spacetime without boundary, more the contribution induced by the boundary. Because, in this paper, we are interested to calculate the VEV of the current induced by the presence of the brane, let us subtract form (<ref>) the corresponding Wightman induced by the cosmic string only in AdS. The latter can be obtained from (<ref>), by taking the limit w_0→ 0. So we can write:W_b(R)(x,x^')=W_R(x,x^')-W_cs(x,x^'). The Wightman function, W_b(x,x^'), is obtained by using the the following identity:g_ν(pw_0,pw)g_ν(pw_0,pw^')/J̅_ν^2(pw_0)+Y̅_ν^2(pw_0)- J_ν(pw)J_ν(pw^')=-1/2∑_l=1^2J̅_ν(pw_0)/H̅_ν^(l)(pw_0)H_ν^(l)(pw)H_ν^(l)(pw^'),being H_ν^(l)(x), l=1, 2, the Hankel functions <cit.>. So, we get,W_b(R)(x,x^') = -qrr^'/4(2π)^D/2a^D-1(ww^'/rr^')^D/2∫_0^∞dv v^D/2-2e^-r^2+r^'2+Δx⃗_∥^2-Δ t^2/2rr^'v∑_ne^inqΔϕI_q|n+α|(v)× ∫_0^∞dppe^-rr^'/2vp^2∑_l=1^2J̅_ν(pw_0)/H̅_ν^(l)(pw_0)H_ν^(l)(pw)H_ν^(l)(pw^'). The parameter α can be written in the formα=n_0+α_0,with |α_0|<1/2,with n_0 being an integer number; moreover, the sum over the quantum number n in (<ref>), has been developed in <cit.>. The result is given below:∑_n=-∞^∞e^iqnΔϕI_q|n+α|(v)=1/q∑_ke^vcos(2π k/q-Δϕ)e^iα(2π k -qΔϕ)-e^-iqn_0Δϕ/2π i× ∑_j=±1je^jiπ q|α_0|∫_0^∞dy(cosh[qy(1-|α_0|)]-cosh(|α_0| qy)e^-iq(Δϕ+jπ))e^-vcosh(y)/cosh(qy)-cos(q(Δϕ+jπ) ,with the parameter k varyingin the interval:-q/2+Δϕ/Φ_0≤ k≤q/2+Δϕ/Φ_0 .Substituting (<ref>) into (<ref>), it is possible to perform the integration over v using <cit.>. 
The result isW_b(R)(x,x^') = -(ww^')^D/2/2(2π)^D/2a^D-1{∑_ke^iα(2π k -qΔϕ)/u_k^D/2-1∫_0^∞dpp^D/2∑_l=1^2J̅_ν(pw_0)/H̅_ν^(l)(pw_0)× H_ν^(l)(pw)H_ν^(l)(pw^')K_D/2-1(pu_k)-qe^-iqn_0Δϕ/2π i∑_j=±1je^jiπ q|α_0|× ∫_0^∞dycosh[qy(1-|α_0|)]-cosh(|α_0| qy)e^-iq(Δϕ+jπ)/u_y^D/2-1[cosh(qy)-cos(q(Δϕ+jπ))]∫_0^∞dpp^D/2∑_l=1^2J̅_ν(pw_0)/H̅_ν^(l)(pw_0)× H_ν^(l)(pw)H_ν^(l)(pw^')K_D/2-1(pu_y)} .In the above expression we have introduced the notationu_k^2 = r^2+r'^2-2rr'cos(2π k/q-Δϕ) +Δx⃗_∥^2-Δ t^2 u_y^2 = r^2+r'^2+2rr'cosh(y)+Δx⃗_∥^2-Δ t^2.Finally we rotate the contour integration over p by the angle π/2 (-π/2) for the term l=1 (l=2). Using the relations involving Bessel functions with imaginary argument <cit.>, the result isW_b(R)(x,x^') = -(ww^')^D/2/(2π)^D/2a^D-1∫_0^∞dpp^D-1I̅_ν(pw_0)/K̅_ν(pw_0)K_ν(pw)K_ν(pw^')× {∑_ke^iα(2π k -qΔϕ)f_D/2-1(pu_k)-qe^-iqn_0Δϕ/2π i∑_j=±1je^jiπ q|α_0|× ∫_0^∞dycosh[qy(1-|α_0|)]-cosh(|α_0| qy)e^-iq(Δϕ+jπ)/cosh(qy)-cos(q(Δϕ+jπ))f_D/2-1(pu_y)} ,where we have introduced the notationf_μ(x)=J_μ(x)/x^μ .§.§ L-RegionNow we turn our attention to calculate the positive frequency Wightman function in the L-region. Substituting the respective wave function solutions (<ref>), with the coefficient (<ref>), into (<ref>),we getW_L(x,x^') = q(ww^')^D/2/a^D-1(2π)^D-2w_0^2∑_σλ p_ν,i/√((p_ν,i/w_0)^2+λ^2+k⃗^2)T_ν(p_ν,i)J_ν(p_ν,iw)J_ν(p_ν,iw^') × J_q|n+α|(λ r)J_q|n+α|(λ r^')e^inqΔφ+ik⃗·Δx⃗-iEΔ t ,where now, we have∑_σ=∫_0^∞dλ∑_i=1^∞∑_n∫ dk⃗ . Using the identity (<ref>), we can integrate over λ and k⃗ using the formulas of <cit.>, obtaining the following expression:W_L(x,x^') = q(ww')^D/2/a^D-1(2π)^D/2w_0^2(rr')^D/2-1∫_0^∞dvv^D/2-2e^-r^2+r^'2+Δx⃗_∥^2-Δ t^2/2rr^'v∑_n=-∞^∞e^inqΔφI_q|n+α_0|(v) × ∑_i=1^∞p_ν,iT_ν(p_ν,i)J_ν(p_ν,iw/w_0)J_ν(p_ν,iw'/w_0)e^-r^2p_ν,i^2/2w_0^2v ,where we have introduced a new variable v=r^2/(2s^2).At this point we substitute (<ref>) into the above expression. After procedure the integral v, we obtain,W_L(x,x^') = 2(ww^')^D/2/(2π)^D/2a^D-1w_0^D/2+1{∑_ke^iα(2π k -qΔϕ)/u_k^D/2-1∑_i=1^∞p_ν,i^D/2T_ν(p_ν,i)J_ν(p_ν,iw/w_0)J_ν(p_ν,iw^'/w_0)× K_D/2-1(u_kp_ν,i/w_0)-qe^-iqn_0Δϕ/2π i∑_j=±1je^jiπ q|α_0|× ∫_0^∞dycosh[qy(1-|α_0|)]-cosh(|α_0| qy)e^-iq(Δϕ+jπ)/u_y^D/2-1[cosh(qy)-cos(q(Δϕ+jπ))]× ∑_i=1^∞p_ν,i^D/2T_ν(p_ν,i)J_ν(p_ν,iw/w_0)J_ν(p_ν,iw^'/w_0)K_D/2-1(u_yp_ν,i/w_0)} .Again, we use the definition (<ref>) to the variable u_k and u_y.In order to obtain an expression to the Wightman function in the L-region more convenient for the extraction of the brane induced part, we apply to the series over i a variant of the generalized Abel-Plana formula <cit.>∑_i=1^∞T_ν(p_ν,i)f(p_ν,i)=1/2∫_0^∞dzf(z)-1/2π∫_0^∞dzK̅_ν(z)/I̅_ν(z)[e^-iν zf(iz)+e^iν zf(-iz)]. For the problem that we are analyzing the function f(z) is given below,f(z)=z^D/2J_ν(zw/w_0)J_ν(zw^'/w_0)K_D/2-1(2uz/w_0).The first term provided by (<ref>) corresponds the Wightman function in the absence of brane, while the second is induced by the boundary. As mentioned in the previous subsection, here we are interested in the brane-induced Whigthman function. 
Therefore, after some intermediate steps, we obtainW_b(L)(x,x^') = -(ww^')^D/2/(2π)^D/2a^D-1∫_0^∞dpp^D/2K̅_ν(pw_0)/I̅_ν(pw_0)I_ν(pw)I_ν(pw^')× {∑_ke^iα(2π k -qΔϕ)f_D/2-1(pu_k)-qe^-iqn_0Δϕ/2π i∑_j=±1je^jiπ q|α_0|× ∫_0^∞dycosh[qy(1-|α_0|)]-cosh(|α_0| qy)e^-iq(Δϕ+jπ)/cosh(qy)-cos(q(Δϕ+jπ))f_D/2-1(pu_y)} ,where we have made change of variable z=pw_0.§ BOSONIC CURRENT DENSITY The VEV of the bosonic current density can be expressed in terms of the positive frequency Wightman function by, ⟨ j_μ⟩=ielim_x^'→ x{(∂_μ-∂_μ^')W(x,x^')+2ieA_μW(x,x^')} Because the analysis of the induced bosonic current in the (1+D)-dimensional AdS space in the presence of a carrying-magnetic-flux cosmic string has been given in <cit.>, here we are mainly interested to calculate the bosonic current induced by the presence of the brane in both regions, i.e., for 0≤ w≤ w_0 and w_0≤ w<∞. As we will see in the next subsections, the only non-zero current density components are the azimuthal ones, and it is a periodic function of the magnetic flux along the string, Φ_ϕ, with period equal to the quantum flux. §.§ Charge density Let us begin with the calculation of the charge density. Since A_0=0, we have⟨ j_0(x)⟩_b(J)=ielim_x'→ x(∂_t-∂'_t)W_b(J)(x,x'),with J=R,L, that represents the R and L regions. The analysis of the charge density depends on the behavior of the time derivative of the function f_D/2-1(pu_σ), with σ=j,y, that appear in (<ref>) and (<ref>). Using the fact that∂_zf_μ(z)=-zf_μ+1(z) ,and knowing that the arguments of the function f_D/2-1(pu_σ) depend on the time variable with (t-t'),we can see that∂_tf_D/2-1(pu_σ)=p^2(t-t')f_D/2(pu_σ). Now taking the coincidence limit on the function f_D/2(pu_σ), we can verify that for the case where σ=k=0,this function goes to a finite value for u_0→0. As to the case σ=k≠0, u_k→2rsin(kπ/q) and for σ=y, u_y→ 2rcosh(y/2). For both last cases the function f_D/2(pu_σ) assumes a finite value, even for r→0. Finally taking t'→ t, the above expression vanishes. Consequently we conclude that there is no induced charge density.Following similar procedure we can prove that there are no induced current density along the radial coordinate and extra dimensions, i.e., ⟨ j^r⟩=⟨ j^i⟩=0. As to the induced current along w, ⟨ j^w⟩, we can promptly verify that it is zero. § AZIMUTHAL CURRENT In this subsection we will proceed the calculations of the induced azimuthal currents in the regions R and L, induced by the brane. §.§ R-region Let us start the analysis in R-region. Although the Wightman in this region was given in (<ref>), todevelop the calculation of the azimuthal current density, it is more convenient to use the Wightman function provided in (<ref>). So, substituting this function into the formal expression for the VEV of the current density operator, ⟨ j_ϕ(x)⟩_b(R)=ielim_x'→ x[(∂_ϕ-∂_ϕ')W_b(R)(x',x)+2ieA_ϕ W_b(R)(x',x)] ,using A_μ=δ_μ^ϕA_ϕ=qα/e, and taking the coincidence limit also in the angular variable, we get ⟨ j_ϕ(x)⟩_b(R) = qew^D/2a^D-1(2π)^D/2r^D-2∫_0^∞dvv^D/2-2e^-v∑_nq(n+α)I_q|n+α|(v)× ∫_0^∞dpp e^-p^2r^2/2v∑_l=1^2J̅_ν^2(pw_0)/H̅_ν^(l)(pw_0)(H_ν^(l)(pw))^2. In <cit.>, a compact expression for the summation over the quantum number n above has been derived. 
We reproduce this result here, ∑_n=-∞^∞(n+α)I_q|n+α|(v) = 2v/q^2'∑_k=1^[q/2]sin(2π k/q)sin(2π kα_0)e^vcos(2π k/q)+ v/qπ∫_0^∞dysinh(y)e^-vcosh(y)g(q,α_0,y)/cosh(qy)-cos(π q), where [q/2] represents the integer part of q/2, and the prime on the sign of the summation means that in the case q=2p the term k=q/2 should be taken with the coefficient 1/2. Moreover the function, g(q,α_0,y), is defined as g(q,α_0,y)=sin(qπα_0)sinh((1-|α_0|)qy)-sinh(qα_0 y)sin((1-|α_0|)π q). Substituting the above result into (<ref>) and with the help of <cit.>, after a few straightforward steps, we get ⟨ j_ϕ(x)⟩_b(R) = 2ew^D/(4π)^D/2a^D-1r^D/2-2∫_0^∞dpp^D/2+1['∑_k=1^[q/2]sin(2π k/q)sin(2π kα_0)/s_k^D/2K_D/2(2prs_k) + q/2π∫_0^∞dysinh(y)g(q,α_0,y)/cosh^D/2(y/2)[cosh(qy)-cos(qπ)]K_D/2(2prcosh(y/2))] × ∑_l=1^2J̅_ν^2(pw_0)/H̅_ν^(l)(pw_0)(H_ν^(l)(pw))^2,s_k=sin(π k/q) . As the next step we rotate the contour of the integration over p by the angle π/2 (-π/2) for the term with l=1 (l=2). Using the relations between Bessel functions of imaginary argument, the contravariant component of the azimuthal current reads ⟨ j^ϕ⟩_b(R) = -4e/(2π)^D/2a^D+1∫_0^∞dzz^D+1I̅_ν(zw_0/w)/K̅_ν(zw_0/w)K_ν^2(z)× ['∑_k=1^[q/2]sin(2π k/q)sin(2π kα_0)f_D/2(2z(r/w)s_k) + q/2π∫_0^∞dysinh(y)g(q,α_0,y)/cosh(qy)-cos(qπ)f_D/2(2z(r/w)cosh(y/2))],where we have defined z=p w. Note that this VEV depends on the ratio r/w, which is related to the proper distance from the string, and the ratio w/w_0, which is related to the proper distance from the brane w/w_0=e^(y-y_0)/a. Let us now investigate this VEV in some special and asymptotic cases. In the conformal massless field case, we have ν=1/2 according to (<ref>), and using the corresponding expressions of the Bessel functions for this particular order, we obtain⟨ j^ϕ⟩_b(R) = -4e/(2π)^D/2a^D+1∫_0^∞dzz^De^(-2+w_0/w)z× 2B_0(w_0/w)zcosh(zw_0/w)+(2A_0-B_0)sinh(zw_0/w)/2A_0-B_0(1+2zw_0/w)× ['∑_k=1^[q/2]sin(2π k/q)sin(2π kα_0)f_D/2(2z(r/w)s_k) + q/2π∫_0^∞dysinh(y)g(q,α_0,y)/cosh(qy)-cos(qπ)f_D/2(2z(r/w)cosh(y/2))]. For distances from the brane much larger compared with the AdS radius, w/w_0≫1, we make use of the formulae for the modified Bessel functions for small values of the argument <cit.>, with the assumption that A_0-ν B_0≠0, to the leading order, we have⟨ j^ϕ⟩_b(R) ≈ -2^3-2ν-D/2e/π^D/2Γ(ν)Γ(ν+1)a^D+1(A_0+ν B_0/A_0-ν B_0)(w_0/w)^2ν∫_0^∞dzz^D+2ν+1K_ν^2(z)× ['∑_k=1^[q/2]sin(2π k/q)sin(2π kα_0)f_D/2(2z(r/w)s_k) + q/2π∫_0^∞dysinh(y)g(q,α_0,y)/cosh(qy)-cos(qπ)f_D/2(2z(r/w)cosh(y/2))]. Another interesting limiting case is the Minkowskian limit. In this asymptotic limit, we take a→∞ with fixed y, and the geometry under consideration is reduced to the background of a cosmic string in (D+1)-dimensional Minkowski spacetime. As we approach the Minkowskian limit, the coordinate w in the arguments of the Bessel functions become large and one has w≈ a+y. By taking into account that also the order becomes large in this limit, we can use the corresponding uniform asymptotic expansion of the Bessel function in (<ref>). 
After a few intermediatesteps, to the leading order, we obtain⟨ j^ϕ⟩_b^(M) = -2e/(2π)^D/2∫_m^∞du(u^2-m^2)^D/2['∑_k=1^[q/2]sin(2π k/q)sin(2π kα_0)× f_D/2(2rs_k√(u^2-m^2)) +q/2π∫_0^∞dysinh(y)g(q,α_0,y)/cosh(qy)-cos(qπ)× f_D/2(2rcosh(y/2)√(u^2-m^2))]1+β u/1-β ue^-2u(y-y_0) .For the Neumann BC, β→∞, we can perform the last integration by making the change of variable v=√(u^2-m^2) and using the identity (<ref>), the expression given in (<ref>) is reduced to⟨ j^ϕ⟩_b^(M) = 4em^D+1/(2π)^(D+1)/2['∑_k=1^[q/2]sin(2π k/q)sin(2π kα_0)× K_D+1/2(2m[r^2s_k^2+(y-y_0)^2])/(2m[r^2s_k^2+(y-y_0)^2])^D+1/2 +q/2π∫_0^∞dysinh(y)g(q,α_0,y)/cosh(qy)-cos(qπ)× K_D+1/2(2m[r^2cosh^2(y/2)+(y-y_0)^2])/(2m[r^2cosh^2(y/2)+(y-y_0)^2])^D+1/2],which coincides exactly with the result reported in <cit.> for y_0=0. On the other hand, for Dirichlet BC, β=0, the corresponding result differs from (<ref>) by the sign.In Fig. <ref> is exhibit the behavior of the current density in R-region as function of the magnetic flux, α_0, (left panel) and the ratio w/w_0 (right panel), considering Dirichlet and Neumann boundary conditions with different values of the parameter associated with the deficit angle, q. From the left panel we can see that the current density is an odd function of α_0, as expected. From the right panel we observe that the VEV is finite on the brane location and goes to zero as it approaches the AdS horizon, in accordance to our asymptotic analysis. Moreover, note that in both plots the intensities increase with q and are higher for Neumann BC.§.§ L-regionTo calculate the induced azimuthal current density in the L-region, we start by substituting the Wightman function given in (<ref>) into the expression bellow, ⟨ j_ϕ(x)⟩_b(L)=ielim_x'→ x[(∂_ϕ-∂_ϕ')W_b(L)(x',x)+2ieA_ϕ W_b(L)(x',x)],using A_μ=δ_μ^ϕA_ϕ=qα/e. After some steps we get,⟨ j^ϕ⟩_b(L) = -2eq^2w^D/a^D-1(2π)^D/2w_0^2r^D-2∫_0^∞dvv^D/2-2e^-v∑_n=-∞^∞(n+α)I_q|n+α_0|(v) × ∑_i=1^∞p_ν,iT_ν(p_ν,i)J_ν^2(p_ν,iw/w_0)e^-r^2p_ν,i^2/2w_0^2v .Substituting the summation over n in the modified Bessel function by (<ref>), we have,⟨ j_ϕ⟩_b(L) = -4ew^Dr^2-D/a^D-1(2π)^D/2w_0^2∫_0^∞dvv^D/2-1e^-v['∑_k=1^[q/2]sin(2π k/q)sin(2π kα_0)e^vcos(2π k/q).+ .q/2π∫_0^∞dysinh(y)e^-vcosh(y)g(q,α_0,y)/cosh(qy)-cos(π q)] ∑_i=1^∞p_ν,iT_ν(p_ν,i)J_ν^2(p_ν,iw/w_0)e^-r^2p_ν,i^2/2w_0^2v .Now we are in position to integrate over the variable v. Doing this we get,⟨ j_ϕ⟩_b(L) = -8ew^D/a^D-1(4π)^D/2r^D/2-2w_0^D/2+2∑_i=1^∞p^D/2+1_ν,iT_ν(p_ν,i)J_ν^2(p_ν,iw/w_0)× ['∑_k=1^[q/2]sin(2π k/q)sin(2π kα_0)/s_k^D/2K_D/2(2s_k r p_ν,i/w_0).+ .q/2π∫_0^∞dysinh(y)g(q,α_0,y)/cosh^D/2(y/2)[cosh(qy)-cos(π q)]K_D/2(2cosh(y/2) r p_ν,i/w_0)] .Now to procedure the summation over the quantum number i, we use the generalized Abel-Plana summation formula(<ref>), with f(z)=z^D/2+1J^2_ν(z(w/w_0))K_D/2(2 ρ z(r/w_0)) with ρ=(sin(π k/q),cosh(y/2) ) .Because we want to calculate the azimuthal current induced by the brane, only the second term on the right hand side of the summation formula is of our interest. 
Using the relations between the Bessel functions with imaginary arguments <cit.>, and after some intermediate steps, we obtain⟨ j_ϕ⟩_b(L) = 4ew^D/(4π)^D/2a^D-11/w_0^D/2+2r^D/2-2∫_0^∞dzz^D/2+1K̅_ν(z)/I̅_ν(z)I_ν^2(z(w/w_0))× ['∑_k=1^[q/2]sin(2π k/q)sin(2π kα_0)/s_k^D/2J_D/2(2z(r/w_0)s_k) + q/2π∫_0^∞dysinh(y)g(q,α_0,y)/cosh^D/2(y/2)[cosh(qy)-cos(qπ)]J_D/2(2z(r/w_0)cosh(y/2))].Now changing the variable z→ z(w_0/w), the contravariant component of the azimuthal current reads⟨ j^ϕ⟩_b(L) = -4e/(2π)^D/2a^D+1∫_0^∞dzz^D+1K̅_ν(zw_0/w)/I̅_ν(zw_0/w)I_ν^2(z)× ['∑_k=1^[q/2]sin(2π k/q)sin(2π kα_0)f_D/2(2z(r/w)s_k) + q/2π∫_0^∞dysinh(y)g(q,α_0,y)/cosh(qy)-cos(qπ)f_D/2(2z(r/w)cosh(y/2))].Comparing the above expressionwith (<ref>), we observe that the brane-induced azimuthal current in the L-region is obtained from the corresponding one in R-region by the replacements I→ K, K→ I of the modified Bessel functions.Proceeding similarly as in the R-region, we now investigate some special and limiting cases. In the conformal massless field case, with ν=1/2, we use the corresponding expressions of the Bessel functions, and after a few algebraic manipulations, we get⟨ j^ϕ⟩_b(L) = -4e/(2π)^D/2a^D+1∫_0^∞dzz^De^-zw_0/w× [2A_0-B_0(1+2zw_0/w)]sinh^2(z)/2B_0(w_0/w)zcosh(zw_0/w)+(2A_0-B_0)sinh(zw_0/w)I_ν^2(z)× ['∑_k=1^[q/2]sin(2π k/q)sin(2π kα_0)f_D/2(2z(r/w)s_k) + q/2π∫_0^∞dysinh(y)g(q,α_0,y)/cosh(qy)-cos(qπ)f_D/2(2z(r/w)cosh(y/2))]. For points near the AdS boundary (w = 0), w≪ w_0, the argument of the modified Bessel function I_ν(z) is small and using the corresponding asymptotic function in the leading term <cit.>, we get⟨ j^ϕ⟩_b(L) ≈ -2^2-2ν-D/2e/π^D/2Γ^2(ν+1)a^D+1(w/w_0)^2ν∫_0^∞dzz^D+2ν+1K̅_ν(zw_0/w)/I̅_ν(zw_0/w)× ['∑_k=1^[q/2]sin(2π k/q)sin(2π kα_0)f_D/2(2z(r/w)s_k) + q/2π∫_0^∞dysinh(y)g(q,α_0,y)/cosh(qy)-cos(qπ)f_D/2(2z(r/w)cosh(y/2))]. In the Minkowskian limit, we follow the same procedure described for the current density in the R-region, obtaining the following result:⟨ j^ϕ⟩_b(L)^(M) = -2e/(2π)^D/2∫_m^∞du(u^2-m^2)^D/2['∑_k=1^[q/2]sin(2π k/q)sin(2π kα_0)× f_D/2(2rs_k√(u^2-m^2)) +q/2π∫_0^∞dysinh(y)g(q,α_0,y)/cosh(qy)-cos(qπ)× f_D/2(2rcosh(y/2)√(u^2-m^2))]1+β u/1-β ue^-2u(y_0-y) ,which is similar to the result we found for the R-region with y-y_0 replaced by y_0-y. This similarity is expected since in the Minkowskian limit the VEV is symmetric to the brane. Moreover, for Neumann BC we obtain the same result present in (<ref>) with y-y_0 replaced by y_0-y. On the other hand, for Dirichlet BCthe corresponding expression differs by the sign. In Fig. <ref> is displayed the dependence of the current density in the L-region as function of the magnetic flux, α_0, (left panel) and the ratio w/w_0 (right panel), considering Dirichlet and Neumann boundary conditions with different values of the parameter associated with the deficit angle, q. Similar to the R-region, from the left panel we observe that the current density in the L-region is an odd function of α_0. On the other hand, from the right panel we can see that the VEV rapidly goes to zero near the AdS boundary and is finite on the brane. Moreover, in both plots the intensities increase with the parameter q and, differently from the R-region, are higher for Dirichlet BC. § APPLICATION TO RSII MODEL The formulas for the VEV of the current density in the setup problem under consideration and present in the previous section can be applied to the study of the influence of the cosmic string in Z_2-symmetric braneworlds models. 
In this paper we apply them to the Randall-Sundrum model with a single brane (RSII) <cit.>. In this model, the four-dimensional world is understood as aZ_2-symmetric brane with positive tension embedded in a higher dimensional AdS bulk. Moreover the existence of bulk fields is a common feature in models motivated by string theories. In the present setup problem, we consider the cosmic string perpendicular to the brane. The latter is located at y=0 (or w_0=a in the w coordinate) and divides the background geometry in two copies of the R-region which are identified by the Z_2-symmetrytransformation y⟷ -y. The corresponding line element is obtained from (<ref>) by the replacement e^-2y/a→ e^-2|y|/a. The fields in the regions -∞<y<0 and 0<y<∞ are related by the Z_2-symmetry of the model. It worths to note that for an observer located at y=0, the corresponding line element for the described geometry is reduced to the one for a cosmic string in (D+1)-dimensional flat spacetime. The bulk scalar field obeys the boundary conditions at the position of the brane and are obtained by integration of the field equations about y=0 <cit.>. Following this procedure for an untwisted scalar field (even under reflection with respect to the brane location), one can see that the received boundary condition has the Robin form (<ref>), with coefficient β=a/4Dξ. In the particular case of a minimally coupled field, ξ=0, the boundary condition is reduced to the Neumann one (β→∞). On the other hand, for twisted scalar fields (odd under reflection with respect to the brane), the boundary condition has the Dirichlet form.In models with Z_2 symmetry, the range of integration over the extra dimension y varies from -∞ to ∞ in the normalization condition. This results in an additional factor of 1/2when compared to the one obtained previously for the R-region, which has half of interval range, 0≤ y<∞, with the brane location at y_0=0. Thus, we can conclude that, similar to the VEVs of the field squared and the energy-momentum tensor <cit.>, the formulas for the VEV of the current density induced by a cosmic string in the generalized RSII braneworld model are obtained from those expressions present in subsection <ref> by directly putting w_0=a with an additional overall factor of 1/2. § CONCLUSIONSIn the present paper we have studied the combined effects of curvature, conical topology and the presence of a brane on the VEV of the current density for a massive charged scalar field propagating in the background of a (D+1)-dimensional AdS spacetime. Along the cosmic string we assume the existence of a magnetic flux. Moreover, on the brane we impose that the field operator obeys the general Robin boundary conditions, and we consider that the string is perpendicular to the brane which is parallel to the AdS boundary, which divides the manifold into two regions, L-region (0≤ w≤ w_0) and R-region (w_0≤ w<∞). In this setup the scalar modes are obtained for both regions and the corresponding Wightman functions. They are presented in closed form in Section <ref> for the R-region (<ref>) and L-region (<ref>). In Section <ref> we have calculated the VEVs of the current density for the R-region and L-region. We have shown that the only non-vanishing component is induced along the azimuthal direction. In Subsection <ref> we have developed the VEV of the azimuthal current density in the R-region and a closed expression is present in (<ref>). 
For a massless quantum scalar field, the azimuthal current density is given by (<ref>) for general Robin boundary conditions. For distances from the brane much larger compared with the AdS radius, w/w_0≫1, we found that the ⟨ j^ϕ⟩_b(R) decays as (w_0/w)^2ν. Also we have obtained its Minkowskian limit given by theexpression (<ref>) for the general Robin boundary condition. For the particular cases of Neumann and Dirichet boundary condition we found that the VEV in this limiting case differs by the sign. In order to provide a better understanding of this induced current, in figure <ref> we exhibit the plots for behaviors of the azimuthal current density, considering D=3 and fixed r/w, in the R-region, ⟨ j^ϕ⟩_b(R), as function of the magnetic flux along the string, α_0, (left panel) and the ratio w/w_0 (right panel). The left panel shows that this VEV is an odd function of the magnetic flux. The right panel shows that ⟨ j^ϕ⟩_b(R) is finite on the brane and goes to zero as it approaches the AdS horizon, confirming the asymptotic analysis. In addition, both plots show that the intensities increase with parameter associated with deficit angle, q, and are higher for Neumann BC.In Subsection <ref> we have developed the azimuthal current density induced in the L-region. The corresponding formula in closed form is given by (<ref>) and is obtained from the result for R-region by the replacement I→ K, K→ I of the Bessel functions. For a conformal massless scalar field, ⟨ j^ϕ⟩_b(L) is present in (<ref>). For points close to the AdS boundary, w≪ w_0, we found the expression (<ref>) which shows that the azimuthal current density in this region decays as (w/w_0)^2ν. In the Minkowskian limit we have obtained the expression present in (<ref>), which is similar to the result we found in the R-region with y-y_0 replaced by y_0-y. Moreover, in the particular cases of Neumann and Dirichlet boundary conditions, the VEV in this limiting case differs by the sign. For further investigation, in figure <ref> we have plotted the azimuthal current density given by (<ref>) as function of α_0 (left panel) and w/w_0 (right panel), considering D=3 and fixed r/w. Similar to the R-region, the left panel shows that ⟨ j^ϕ⟩_b(L) is an odd function of the magnetic flux long the string, α_0, as expected. The right panel shows that azimuthal current density rapidly vanishes near the AdS boundary, in accordance to the asymptotic analysis, and is finite on the brane. Moreover, both plots show that the intensities of the azimuthal current density increase with the parameter q and, in contrast with the R-region, are higher for Dirichlet BC. In the last section we have applied the results found in the R-region to investigate the cosmic string induced effects in the generalized RSII model. By integration of the field equations about the brane location, y=0, we have found that the boundary conditions in this Z_2-symmetric model has the Robin type with the coefficient β=a/4Dξ for untwisted scalar field and it is reduced to the Neumann BC for a minimally coupled field, ξ=0. On the other hand for a twisted scalar field, we have the Dirichlet BC. As a final remark, we have concluded that the VEV of the azimuthal current density induced by a cosmic string in the RSII model can be obtained from those present in the Section <ref> by putting w_0 = a with an additional factor 1/2.§ ACKNOWLEDGMENTW.O.S is supported under grant 2022/2008, Paraíba State Research Foundation (FAPESQ). 
E.R.B.M is partially supported by CNPq under Grant no 301.783/2019-3. 99 Kibble T. W. Kibble, J. Phys. A 9, 1387 (1976). V-S A. Vilenkin and E.P.S. Shellard, Cosmic Strings and Other Topological Defects (Cambridge University Press, Cambridge, England, 1994). Berezinski V. Berezinski, B. Hnatyk and A. Vilenkin, Phys. Rev. D 64, 043004 (2001). Damour T. Damour and A. Vilenkin, Phys. Rev. Lett. 85, 3761 (2000). Bhattacharjee P. Bhattacharjee and G. Sigl, Phys. Rep. 327, 109 (2000). Fronsdal C. Fronsdal, Phys. Rev. D 10, 589 (1974).Fronsdal_1 C. Fronsdal and R. B. Haugen, Phys. Rev. D 12, 3810 (1975).Avis J. S. Avis, C. J. Isham and D. Storey, Phys. Rev. D 18, 3565 (198).Allen B. Allen and T. Jacobson, Commun. Math. Phys. 103, 669 (1986).Camporesi_91 R. Camporesi, Phys. Rev. D 43, 3958 (1991).Camporesi_92 R. Camporesi and H. Higuchi, Phys. Rev. D 45, 3951 (1992).Caldareli M. M. Caldareli, Nuc. Phys. B 549, 499 (1999) .Ahar00 O. Aharony, S. S. Gubser, J. Maldacena, H. Ooguri and Y. Oz, Phys. Rep. 323, 183 (2000).Brax03 P. Brax and C. Van de Bruck, Classical Quantum Gravity 20, R201(2003).Maar10 R. Maartens, Living Rev. Relativity 13, 5 (2010). Ghe1 M. H. Dehghani, A. M. Ghezelbash and R.B. Mann, Nucl. Phys. B 625, 389 (2002). Cristine C. A. Ballon Bayona, C.N. Ferreira and V.J. Vasquez Otoya, Class. Quantum Grav. 28, 015011 (2011). Wagner_19 W. Oliveira dos Santos, H.F. Mota and E.R. Bezerra de Mello, Phys. Rev. D 99, 045005 (2019).Wagner_20 W. Oliveira dos Santos, E.R. Bezerra de Mello and H. F. Mota,Eur.Phys. J. Plus 135, 27 (2020). Wagner_20aS. Bellucci, W. Oliveira dos Santos and E.R. Bezerra de Mello, Eur. Phys. J. C 80, 963 (2020).Wagner_22S. Bellucci, W. Oliveira dos Santos, E.R. Bezerra de Mello and A. A. Saharian, JCAP 01 (2022) 010.Wagner_21 S. Bellucci, W. Oliveira dos Santos, E.R. Bezerra de Mello and A. A. Saharian, JHEP 02 (2021) 190.Wagner_22a S. Bellucci, W. Oliveira dos Santos, E.R. Bezerra de Mello and A. A. Saharian, JHEP 05 (2022) 021. Wagner_23W. Oliveira dos Santos and E.R. Bezerra de Mello, Eur. Phys. J. C 83, 726 (2023). Abra M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions (Dover, New York, 1972).Breitenlohner P. Breitenlohner, D.Z. Freedman, Ann. Phys. (NY) 1982, 144 (2), 249-281.Avis1978 S. J. Avis, C. J. Isham and D. Storey, Phys. Rev. D 18, 3565-3576 (1978).Grad I. S. Gradshteyn and I. M. Ryzhik. Table of Integrals, Series and Products (Academic Press, New York, 1980).deMello:2014ksa E. R. Bezerra de Mello, V. B. Bezerra, A. A. Saharian, and H. H. Harutyunyan, Phys. Rev. D 91, 064034 (2015). SahaRev A. A. Saharian, The Generalized Abel-Plana Formula with Applications to Bessel Functions and Casimir Effect (Yerevan State University Publishing House, Yerevan, 2008); Preprint ICTP/2007/082; arXiv:0708.1187. Braganca_15E. A. F. Bragança, H. F. Santana Mota, E. R. Bezerra de Mello, Int. J. Mod. Phys. D 24, 1550055 (2015).Braganca:2020jci E. A. F. Bragança and E. R. Bezerra de Mello,Eur. Phys. J. Plus 136,50 (2021)RSII L. Randall and R. Sundrum,Phys. Rev. Lett. 83, 4690 (1999).Saharian:2003qs A. A. Saharian,Nucl. Phys. B 712, 196-228 (2005).Gherghetta2000 T. Gherghetta, A. Pomarol, Nucl. Phys. B 586, 41 (2000).Flachi2001 A. Flachi, D. J. Toms, Nucl. Phys. B 610, 144 (2001). | http://arxiv.org/abs/2311.15774v1 | {
"authors": [
"W. Oliveira dos Santos",
"E. R. Bezerra de Mello"
],
"categories": [
"hep-th",
"gr-qc"
],
"primary_category": "hep-th",
"published": "20231127124841",
"title": "Induced current in braneworld model in high-dimensional AdS bulk in the cosmic string spacetime"
} |
| http://arxiv.org/abs/2311.16242v1 | {
"authors": [
"Souvik Banerjee",
"Ulf Danielsson",
"Maximilian Zemsch"
],
"categories": [
"hep-th"
],
"primary_category": "hep-th",
"published": "20231127190006",
"title": "The dark bubbleography"
} |
UTF8bsmivan der Waals - Zeeman Institute, University of Amsterdam, Sciencepark 904, 1098 XH Amsterdam, the Netherlands QuSoft, Science Park 123, 1098 XG Amsterdam, The Netherlands Centre for Nanoscience and Nanotechnology, Department of Physics, University of Bath, Bath BA2 7AY, United KingdomCentre for Nanoscience and Nanotechnology, Department of Physics, University of Bath, Bath BA2 7AY, United KingdomDipartimento di Fisica, Politecnico di Milano, 20133 Milano, Italy Centre for Nanoscience and Nanotechnology, Department of Physics, University of Bath, Bath BA2 7AY, United KingdomCentre for Nanoscience and Nanotechnology, Department of Physics, University of Bath, Bath BA2 7AY, United [email protected] van der Waals - Zeeman Institute, University of Amsterdam, Sciencepark 904, 1098 XH Amsterdam, the Netherlands QuSoft, Science Park 123, 1098 XG Amsterdam, The NetherlandsThe layered van der Waals material, TaS_2 features a meta-stable mosaic phase on the verge of a nearly commensurate to commensurate charge density wave transition. This meta-stable or 'hidden' phase can be reached by laser pumping the low temperature, commensurate charge density wave phase. Here we report the stabilization of a bulk, equilibrium mosaic phase in 1T-TaS_1.2Se_0.8 single crystals observed with transport and optical spectroscopy experiments. We identify a bulk pseudogap in the mosaic phase of approximately 200 meV at the lowest temperatures, while the CCDW phase can be obtained by heating and instead has a full optical gap of about 100 meV. Surprisingly, a spectral weight analysis shows that Se doping gives rise to an increased charge density despite the fact that this is formally an isovalent substitution. This finding is consistent with the recent observation that the mosaic phase is stabilized as equilibrium phase through the appearance of charged defects.Optical response of the bulk stabilized mosaic phase in Se doped TaS_2-xSe_x. Erik van Heumen January 14, 2024 ============================================================================= Transition metal dichalcogenides (TMDCs) provide a rich playground of exotic electronic ordering phenomena such as superconducting or charge density wave phases. 1T-TaS_2 stands out among them due to the observation of a correlated insulating phase, the nature of which is still under debate <cit.>. Disentangling the interplay between structural and electronic order is difficult, but the consensus now appears to be that the equilibrium groundstate is a commensurate charge density wave that originates in dimerization of layer stacking along the c-axis <cit.>. However, at elevated temperatures <cit.> or at specific surface terminations <cit.> signatures of Mott physics reemerge pointing to the interplay of several instabilities. Against this backdrop, a series of experiments employing light or current pulses demonstrated the existence of a `hidden' or meta-stable state <cit.>. The induced metallicity appears to be tied to the emergence of domain walls in the CCDW phase <cit.> and changes in the interlayer stacking <cit.>. The meta-stable state has subsequently been dubbed the `mosaic' phase. The tunability between distinct electronic phases with external control has led to suggestions for applications <cit.> and consequently stabilising the mosaic phase under equilibrium conditions would be very useful. Recently, an equilibrium version of the mosaic phase was reported to exist as a surface effect on 1T-TaS_2 single crystals <cit.>. 
Similar equilibrium states appear to exist as surface structure in doped 1T-TaS_2-xB_x crystals where B is Ti <cit.> or Se <cit.>. Here we report the existence of a meta-stable, metallic phase in bulk single crystals of Se doped 1T-TaS_2-xSe_x (x = 0.8) that can be reached under thermal equilibrium conditions. This state is observed in a large temperature window below 130 K and is distinctly different from the insulating CCDW phase. The latter can be realized after heating above 130 K and it remains the equilibrium state if the crystal is subsequently cooled down again. From a detailed analysis of the optical spectra, we conclude that there are two distinct mechanisms that give rise to the mosaic and CCDW phase. The former is characterized by the formation of a pseudogap, while the latter is characterized by a full optical gap.Single crystals of 1T-TaS_2-xSe_x with nominal compositions x = 0.8 and x = 1.0 are are grown using chemical vapor transport. Typical crystal sizes obtained in this way are approximately 2 × 3 mm with a somewhat irregular shape. Further details of the growth process are provided in the supplementary material. Transport experiments were carried out using a 4-point method using a Physical Property Measurement System (PPMS) and data was recorded between 3 and 310 K. Infrared reflectivity spectra are collected over the energy range from 6 meV to 4 eV and between 16 K and 400 K for two samples with Se contents x = 0.8 and x = 1.0. In order to study the hysteresis in this system, a temperature loop is designed to start cooling from 400 K down to 16 K. At base temperature, data is collected for 20 minutes and then the sample is heated to 400 K. Both cooling and warming experiments are carried out with a constant temperature change of 1.7 K/min. Each cycle is repeated several times to verify the reproducibility and reduce the noise in the data. The experiment is repeated right after the in situ evaporation of silver or gold on the samples to obtain reference spectra. These allow us to determine the absolute reflectivity of our samples. To obtain the optical conductivity, we use the variational dielectric function method proposed in Ref. <cit.>.A first investigation of the bulk electronic properties is carried out with transport experiments, Fig. <ref>a. The sample is briefly heated to 400 K and subsequently mounted in the PPMS for resistance measurements. These single crystals are relatively thin and have a somewhat irregular shape. This prevented us from accurately determining the resistivity and we therefore only show the measured resistance. As the crystal is cooled a first time, we observe a weak semi-conducting behaviour of the resistance (CD1, blue curve). Around 130 K, a change in slope can be observed that signals a phase transition. As temperature is further reduced, the resistance eventually starts to decrease at the lowest measured temperatures. As the crystal is heated, we again observe a transition around 130 K (WU1, red curve). Surprisingly, the resistance increases by an order of magnitude before passing through a maximum. A new transition to a low resistance state appears around 305 K. Additional cooling and heating cycles display markedly different behaviour to the first cycle. We now observe a phase transition at 180 K to an insulating phase as evidenced by the large upturn at low temperature (CD2, black curve). 
The second heating cycle (WU2, magenta curve) closely tracks the cooling cycle CD2, but displays a large hysteresis in the phase transition (305K heating compared to 180 K cooling). The difference between the insulating (WU2) and metallic (WU1) behaviour closely resembles the changes in resistivity that take place when light pulses induce the mosaic phase in the CCDW phase of 1T-TaS_2 <cit.>, which is a first indication that the mosaic phase is stabilized under equilibrium conditions in our crystals.To further explore this, we turn to the real part of the optical conductivity, σ_1(ω,T) in Fig. <ref>. Data is presented separately for cooling and heating to highlight the distinct behaviour of the optical response. The optical conductivity of the incommensurate charge density wave phase (ICCDW; T∼400 K) is characterized by a nearly frequency independent free charge response. Compared to the optical conductivity of pristine 1T-TaS_2, reported in Ref. <cit.> and Ref. <cit.>, the extrapolated DC conductivity of Se doped crystals is similar. The interband conductivity above 1 eV is significantly larger in our crystals, although the three interband transitions at 1 eV, 1.66 eV and 2.2 eV are also observed in previous measurements. We further note that the interband response show a significant temperature dependence.At the transition to the nearly commensurate state (NCCDW; T_NCCDW ≈370 K), we observe a sudden depletion of spectral weight below 1 eV (visible as a step in the temperature dependent optical resistivity in Fig. <ref>(d)). This depletion evolves similarly for a crystal with x = 1.0 (see Supplementary material). Different from the x = 1.0 crystal, the x = 0.8 crystal undergoes a second transition at 130 K where another sudden removal of spectral weight takes place. For 1T-TaS_2, this transition has been identified as the formation of the commensurate CDW phase (CCDW). As we will discus below, the x = 0.8 crystal first enters an intermediate meta-stable phase.As the screening from free charge carriers is reduced, a series of phonon modes becomes visible. A group theoretical analysis predicts 40 infrared active phonon modes in the CCDW phase of 1T-TaS_2 <cit.>. This number can be expected to become even larger for Se doped crystals, since the substitution of some of the S atoms with Se atoms will lead to mode splitting and frequency shifts. This is indeed what we observe: there are two additional phonon modes at 18 meV and 19.6 meV that are not present in the earlier infrared data<cit.>. As was pointed out in Ref. <cit.>, the frequency splitting between the phonon modes can be very small. Our experimental resolution of 0.25 meV is certainly not sufficient to observe all possible infrared active modes. Apart from the extra phonon modes, we observe a similar number of modes compared to earlier work although they are all shifted in frequency. Most modes appear broader and we attribute this to unresolved mode splitting due to Se substitution.We now return to the second transition of the x = 0.8 crystal at 130 K. In 1T-TaS_2 a transition from the NCCDW to CCDW phase takes place at 180 K. Angle resolved photoemission spectroscopy (ARPES) and scanning tunneling microscopy (STM) experiments have shown that this transition is accompanied by the opening of a large gap at the Fermi level and the formation of bands below and above the Fermi level <cit.>. 
Regardless the nature of this transition, it has been shown that the resistivity below this transition becomes insulating with an exponential enhancement of the resistivity, ρ(T)∝ exp(-A/k_BT) <cit.>. Our optical conductivity data indeed shows the formation of a gap, but still has a significant background conductivity at 16 K. We attribute this to a residual metallicity that would be expected for the mosaic phase and is consistent with the low temperature resistance data (CD1). Similar to the resistance data (WU1), we observe a further depletion of spectral weight when the crystal is heated above 130 K. The 157 K data in Fig. <ref>c shows a clear suppression of low energy spectral weight compared to the 16 K data. We can directly identify different phases observed in transport experiments with optical spectra by comparing resistance with optical resistivity data, Fig. <ref>d. The optical resistivity (inverse of the optical conductivity) shows quantitative differences with the resistance as might be expected given that the former is measured at finite energy and in the phonon range. Nevertheless, at ħω = 15 meV it compares qualitatively very well with the measured resistance and we can map the optically observed transitions one-to-one to those in the resistance. The residual conductivity observed in the optical data at 16 K is a second indication that a mosaic phase is stabilized in 1T-TaS_1.2Se_0.8. Since optical experiments probe the volume of the crystal, this is a clear demonstration that we are observing a bulk mosaic phase and not just a surface effect. The temperature dependence of the optical conductivity provides another clue that this phase is indeed meta-stable. During our experiment, we stabilize the temperature after cooling to 16 K for 20 minutes. We observe a small but significant change in the optical response that is most visible in Fig. <ref>(d) (a small difference between the red and blue curve; a more prominent difference is observed at higher photon energy, see supplementary Fig. <ref>d). Based on the above observations, we thus identify the 16 K optical conductivity as representative for the electronic spectrum of the mosaic phase. The lowest temperature optical conductivity spectrum of the CCDW phase is in our experiment only obtained above 130 K after heating the sample (Fig. <ref>(b); green curve). We can exclude that this transition is to one of the other CDW phases that have been observed in 1T-TaS_2 <cit.>: as temperature increases further, three more transitions are visible in our data. These resemble transitions to the T-phase (305 K), NCCDW phase (340 K) and ICCDW phase (385 K). The question that now emerges is whether there is a difference in the nature of the transitions between NCCDW and mosaic phase and between the mosaic and CCDW phase. These questions are often approached by making use of spectral weight analysis. However, in many TMDC's, it has been observed that the temperature dependence of the optical spectra is significant even in the visible and UV parts of the optical spectrum <cit.>. It has been speculated that this is a consequence of changes in interlayer coupling that emerge when lattice expansion or contraction takes place <cit.>. The temperature dependence of 1T-TaS_2-xSe_x appears to follow this trend. Our optical conductivity data shows significant temperature dependence over the entire spectral range, up to 2 eV. 
These significant changes in interband spectra perhaps find their origin in a reduction of the c-axis lattice constant as temperature is reduced. A significant portion of this spectral weight enhancement thus may be unrelated to the various charge density wave transitions and we have no way to disentangle this 'trivial' spectral weight change from redistributions due to the opening of new charge density wave related gaps.We therefore turn to another quantity to highlight differences in the formation of the mosaic and CCDW phases: the normalised difference of the optical conductivity. In Fig. <ref>, we calculate the difference between the optical conductivity and a reference temperature, normalised to the conductivity at the same reference temperature, Δσ_1(ω,T,T_0)/σ_1(ω,T_0). Having identified the transition temperature of the NCCDW to the mosaic phase around T_0 ≈145 K from Fig. <ref>(c), we plot in Fig. <ref>(a) Δσ_1(ω,T,145 K)/σ_1(ω,145 K). With decreasing temperature the spectral weight at low energy is depleted, with the largest changes happening at the lowest photon energy. As the photon energy increases, the changes become gradually smaller, as can be more clearly seen from Fig. <ref>b. The transition from the mosaic to the CCDW state during heating follows a different behaviour. To highlight this, we take T_0 ≈ 124 K as reference. Fig. <ref>c shows that a pronounced minimum develops around 0.1 eV and possibly a second minimum around 0.2 eV. Fig. <ref>(d) shows that the largest change in the optical response now takes place at finite photon energy between 50-100 meV. This difference in temperature dependence between cooling and heating points to different gap formation mechanisms as we will discuss next.The two likely scenarios along which a gap opens at the Fermi level are a depletion (Fig. <ref>i) or gradual opening of a gap (Fig. <ref>k) of the density of states around the Fermi level. The later case is often associated with spontaneous symmetry breaking phases that are accompanied by the formation of a temperature dependent gap and associated Goldstone modes (sliding modes in this case). The temperature dependent optical response of such a BCS type phase transition was numerically evaluated by Zimmerman et al. in Ref. <cit.> and has been implemented in the software package RefFit <cit.>. We model the temperature dependent response of Δσ(ω,T,T_c) using this numerical code and plot the result in Fig. <ref>g. The temperature dependence at selected energies is shown in Fig. <ref>h. A comparison between these panels and corresponding experimental panels (Fig. <ref>c and d respectively), shows qualitatively similar behavior. The impact of a depletion in the density of states on the optical response is harder to model: it requires a concrete theoretical backing of the phenomenon or one has to resort to approximate estimates making use of the joint density of states (JDOS). The advantage of the JDOS is that an approximate estimation of the optical response can be used for both gap opening and gap closing scenarios. However, since we have an exact method available for the BCS case, we use a similar approach to model the pseudogap formation. This is achieved by taking a sum of a Drude term and the T=0 BCS optical conductivity:σ(ω,T)=(T/T_c)^2σ_Dr.(ω,T)+(1-T/T_c)^2σ_BCS(ω,0) We have verified that this gives the same qualitative result as the JDOS approximation. Eq. <ref> allows us to introduce a small temperature dependence in the Drude response, in particular in the Drude width Γ(T). 
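The interpolation in Eq. (<ref>) is simple enough to prototype numerically. The sketch below is purely our illustration: it uses a standard Drude form for σ_Dr with a weakly temperature-dependent width Γ(T) and a crude step-like stand-in for the T = 0 gapped conductivity, rather than the full Zimmermann result employed in the paper, and all parameter values and function names are invented. It returns the normalised difference Δσ_1(ω,T,T_0)/σ_1(ω,T_0) discussed above.

# Illustrative prototype (ours) of the pseudogap interpolation in Eq. (1):
#   sigma(w,T) = (T/Tc)^2 * sigma_Drude(w,T) + (1 - T/Tc)^2 * sigma_gap(w, T=0)
# The gapped part is a crude stand-in, NOT the Zimmermann BCS conductivity used
# by the authors; all parameter values are arbitrary.
import numpy as np

def sigma_drude(w, T, sigma_dc=1.0, gamma0=0.05, dgamma=0.02, Tc=130.0):
    gamma = gamma0 + dgamma * T / Tc            # weakly T-dependent Drude width Gamma(T)
    return sigma_dc * gamma**2 / (w**2 + gamma**2)

def sigma_gap_T0(w, sigma_n=1.0, delta=0.05):
    # toy gapped response: zero below 2*Delta, approaching sigma_n well above it
    x = np.clip(1.0 - (2.0 * delta / w)**2, 0.0, None)
    return sigma_n * np.sqrt(x)

def sigma_model(w, T, Tc=130.0):
    x = min(T / Tc, 1.0)
    return x**2 * sigma_drude(w, T, Tc=Tc) + (1.0 - x)**2 * sigma_gap_T0(w)

w = np.linspace(0.005, 0.4, 400)                # photon energy grid (eV, illustrative)
T0 = 145.0                                      # reference temperature above the transition
for T in (16.0, 60.0, 100.0):
    dsig = (sigma_model(w, T) - sigma_model(w, T0)) / sigma_model(w, T0)
    print(T, dsig.min())                        # depth of the spectral-weight depletion

With these toy inputs the depletion is largest at the lowest photon energies and deepens on cooling, which is the qualitative behaviour the comparison with the data is meant to capture.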
The result of these simulations is shown in Fig. <ref>e,f. The results qualitativel reproduce the experimental results in Fig. <ref>a,b. The depletion of Δσ_1(ω,T,145 K)/σ_1(ω,145 K) above 0.1 eV is reproduced in panel <ref>e for energies ω/Δ > 1. This requires us to assume a temperature dependent Drude component for which the Γ(T) decreases. The temperature dependence in the experimental data (Fig. <ref>b) is much faster, but we are able to qualitatively reproduce the observation that the depletion is largest at the lowest energy.We now turn to the possible interpretation of our experimental results. The key observation is that the low temperature optical conductivity features a residual metallic contribution. This was observed previously in non-equilibrium experiments <cit.> and has been attributed to the formation of domain walls <cit.> arising from a collapse of layer dimerization <cit.>. Furthermore, it has been shown that these domain walls can be 'charged' <cit.>. We propose that domain walls or stacking faults in our crystals are pervasive throughout the bulk of the crystal and collectively contribute to the observed optical conductivity. As temperature increases, thermal fluctuations lead to a removal of these domain walls and the long range CCDW order with a full optical gap can set in. The analysis of our optical data shows that there is a distinct difference between the electronic mechanism driving the formation of the mosaic and long range ordered CCDW state. The former is accompanied by a depleted density of states around the Fermi level. Such a gradual depletion is often referred to as a 'pseudogap' in the context of the cuprate superconductors. Its origin in the cuprate case is unknown, but some form of pair formation without long range coherence <cit.> has been suggested as a possible source. In TaS_2-xSe_x, this scenario may hold true in the mosaic phase where the onset of long range CCDW order is suppressed. The interplay between the metallic domain boundaries and short range CCDW order could be analogous to incoherent fluctuations of the pairing field, thus providing a route to the formation of 'preformed density fluctuations'. To summarize, we have observed a bulk, meta-stable phase in 1T-TaS_1.2Se_0.8 that closely resembles the non-equilibrium mosaic phase observed in 1T-TaS_2. This phase is accompanied by the formation of a large pseudogap but has a residual metallic component. By changing temperature, we can also reach the CCDW phase which instead has a full optical gap. It would be very interesting to further explore the connection to the non-equilibrium mosaic phase with diffraction, ARPES and STM experiments.EvH would like to especially thank Dr. Lev Gasparov for making spectral weight data of Ref. <cit.> available for comparison with our work. EvH acknowledges support for this research from the research center for quantum software and technology, QuSoft. EDC acknowledges support from... § SUPPLEMENTARY INFORMATIONBulk 1T-TaS_2-xSe_x crystals with a nominal stoichiometry of x = 0.8 and x = 1.0 were grown by the chemical vapor transport method . The starting materials were weighed in a nitrogen-filled glove-box using an Ohaus Pioneer digital balance with 0.5 mg precision. The elements were sealed in a quartz ampoule in vacuum. During the evacuation, it is necessary to place the bottom of the ampoule (containing the powdered elements) within a bucket of dry ice, to ensure that the iodine powder does not sublime and escape the ampoule before sealing. 
The sealed ampoule was placed in the middle of a Lenton two-zone furnace, which was then sealed at either end using insulating blocks. The furnace was ramped up from room temperature at 0.5 K/min to final temperatures of 1173 K and 1233 K for the growth and reaction zones, respectively. The growth proceeded for a total of seven days, at the end of which time the ampoule was removed and quenched in water. The single crystals thus obtained range from one to a few mm in size and are stored in a nitrogen-filled glove-box to minimise oxidation. In addition to the x = 0.8 crystal we have also performed extensive experiments on an x = 1.0 crystal. This crystal does not display signatures of the mosaic phase or CCDW phases and provides a good contrast with the behaviour of the x = 0.8 crystal. Fig. <ref> shows a comparison between 1T-TaS_1.2Se_0.8 (top panels) and 1T-TaSSe (bottom panels). The overall spectral features are very similar. We observe a large temperature dependent interband conductivity also in this crystal, with an interband response that is quantitatively the same. This provides some evidence that this large change in interband conductivity is not directly related to the CCDW or mosaic phase transitions. | http://arxiv.org/abs/2311.15791v1 | {
"authors": [
"Xuanbo Feng",
"Liam Farrar",
"Charles J. Sayers",
"Simon J. Bending",
"Enrico Da Como",
"Erik van Heumen"
],
"categories": [
"cond-mat.str-el",
"cond-mat.mtrl-sci"
],
"primary_category": "cond-mat.str-el",
"published": "20231127131417",
"title": "Optical response of the bulk stabilized mosaic phase in Se doped TaS$_{2-x}$Se$_{x}$"
} |
MPP-2023-270

Institute of Physics, Academia Sinica, Taipei, 11529, Taiwan Institute of Astronomy and Astrophysics, Academia Sinica, Taipei, 10617, Taiwan Physics Division, National Center for Theoretical Sciences, Taipei 10617, Taiwan GSI Helmholtzzentrum für Schwerionenforschung, Planckstraße 1, D-64291 Darmstadt, Germany

In the most extreme astrophysical environments, such as core-collapse supernovae (CCSNe) and neutron star mergers (NSMs), neutrinos can undergo fast flavor conversions (FFCs) on exceedingly short scales. Intensive simulations have demonstrated that FFCs can attain equilibrium states in certain models. In this study, we utilize physics-informed neural networks (PINNs) to predict the asymptotic outcomes of FFCs, by specifically targeting the first two moments of neutrino angular distributions. This makes our approach suitable for state-of-the-art CCSN and NSM simulations. Through effective feature engineering and the incorporation of customized loss functions that penalize discrepancies in the predicted total number of ν_e and ν̅_e, our PINNs demonstrate remarkable accuracies, with an error margin of ≲3%. Our study represents a substantial leap forward in the potential incorporation of FFCs into simulations of CCSNe and NSMs, thereby enhancing our understanding of these extraordinary astrophysical events.

Physics-Informed Neural Networks for Predicting the Asymptotic Outcome of Fast Neutrino Flavor Conversions

Zewei Xiong 0000-0002-2385-6771
===========================================================================================================

§ INTRODUCTION

Core-collapse supernovae (CCSNe) and neutron star mergers (NSMs) are among the most extreme astrophysical phenomena, representing the end of massive stars' life cycles and the collision of extremely dense remnants, respectively. These events not only signify the demise of massive stars and dense objects but also reveal some of the universe's most energetic and mysterious phenomena. At the heart of these extraordinary settings lies a captivating process: the emission of neutrinos, released in copious amounts during both CCSNe and NSMs <cit.>. During their propagation through the extreme conditions within these events, neutrinos undergo a fascinating phenomenon known as collective neutrino oscillations <cit.> (for a recent review see Ref. <cit.>). This intriguing behavior emerges from the intricate interplay between the propagating neutrinos and the dense background neutrino gas, where coherent forward scatterings play a crucial role. This phenomenon behaves in a nonlinear and collective manner, leading to a complex tapestry of flavor conversions. Of particular interest are the so-called fast flavor conversions (FFCs), which occur on scales characterized by ∼ (G_F n_ν)^-1 (see, e.g., Refs. <cit.>). Here, G_F represents the Fermi coupling constant, and n_ν denotes the neutrino number density. These FFCs can take place on scales much shorter than what would be expected in the vacuum.
FFCs occur iff the angular distribution of the neutrino lepton number,defined as,G(𝐯) = √(2) G_F∫_0^∞E_ν^2 d E_ν/(2π)^3 [( f_ν_e(𝐩) -f_ν_x(𝐩) )- ( f_ν̅_e(𝐩) -f_ν̅_x(𝐩) )],crosses zero at some 𝐯 = 𝐯(μ,ϕ_ν), with μ =cosθ_ν <cit.>.Here, E_ν, θ_ν, and ϕ_ν are the neutrino energy, the zenith, and azimuthal angles of the neutrino velocity, respectively.The f_ν's are the neutrinooccupation numbers of different flavors, with ν_x and ν̅_x denoting the heavy-lepton flavor of neutrinos and antineutrinos.Whenν_x and ν̅_x have similar angular distributions, a scenario commonly observed in state-of-the-artCCSN simulations, this expression transforms into the conventional definition of the neutrino electron lepton number, νELN. FFCs tend to occur on spatial and temporal scales which are expected to be significantly shorter than those typically addressed in hydrodynamical simulations of CCSNe and NSMs. As a consequence, integrating FFC into these simulations presents a notable challenge. A potential strategy to address this challenge involves breaking down the problem into two scale hierarchies.This consideration motivatesconducting local dynamical simulations at shorter scales and subsequently integrating the findings into some practical prescriptions. Such prescriptions can then be efficiently applied to the broader astrophysical modelings and hydrodynamic simulations <cit.>.The assessment of the outcome of FFCs has undergone thorough examinations of local dynamical simulations conducted within confined spaces employing periodic boundary conditions <cit.> (see also Ref. <cit.> for the possible impact of the choice of boundary conditions). Insights from these investigations indicate a tendency toward kinematic decoherence in flavor conversions, generally resulting in quasistationary states.These stationary states can be characterized by survival probabilities, which are governed by the conservation of neutrino lepton number and have been demonstrated to be possibly modeled by analytical formulation to a good accuracy <cit.>.Despite theexistence of such analytical formulaefor the angular distribution of survival probabilities, implementing FFCsin CCSN and NSM simulations remains a challenge. The obstacle lies in the requirement of having access to complete angular distributions of neutrinos to determine FFC outcomes through these analytical expressions. However, acquiring such detailed angular information proves challenging in most cutting-edge CCSN and NSM simulations due to their computationally intensive nature.As a practical alternative to considering the full neutrino angular distributions, many state-of-the-art simulations opt for a more feasible approach by simplifying neutrino transport through a limited set of angular distribution moments <cit.>. In our specific investigation, we concentrate on radial moments, defined as,I_n = ∫_-1^1dμ μ^n ∫_0^∞∫_0^2πE_ν^2 d E_νdϕ_ν/(2π)^3f_ν(𝐩).These moments effectively capture crucial aspects of the neutrino angular distribution while facilitating a computationally more manageable treatment[Our primary focus is presently on axisymmetric crossings, particularly emphasizing radial moments where the angular distribution integrates overϕ_ν. It's crucial to highlight that our current study excludes non-axisymmetric crossings. 
Exploring these aspects is a subject reserved for future investigations.].In practical scenarios, one often encounters a situation where simulations directly provide only the first two moments, I_0 and I_1, for (anti)neutrinos.The problem then becomes determiningthe ultimate values of I_0 and I_1 following FFCs, based on their initial states. Note that despite the availability of analytical formulae for the angular distribution of the neutrino survival probabilities, determining these final values is inherently complex.This paper represents a pioneering effort in predicting the asymptotic outcomes of FFCsin the moments scenario, using artificial neural network (NNs). NNs imitate closely the brain's network of connected neurons. Specifically, they have layers of artificial neurons that handle information. NNs have been proven to be useful in solving tricky problems due their strong learning capacities, resulting fromadjusting the connections between neurons during the training phase. Their ability to learn from data without explicit programming sets NNs apart,making them attracting tools across various domains. In particular, NNs have been extensively used inthe field of astrophysics and high energy physics <cit.>. Our approach involves the utilization of a NN, which takesthe essential information extracted from the initial (anti)neutrino zeroth and first moments and then outputs thecorresponding moments regarding the asymptotic outcome ofFFCs.In particular, we employ physics-informed neural networks (PINNs), where the learning andperformance of the NN can be enhanced with the utilization of the domain knowledge (specialized information specific to the problem that can be integrated into theNN) <cit.>. Our findings demonstrate the efficacy of a single hidden layer PINN, achieving a remarkable accuracy for the asymptotic values of I_0 and I_1.The paper is structured as follows. In Sec. <ref>, we initiate by detailing our simulations of FFCs and elucidating the assumptionsto deriving the outcomes of FFCs. Moving forward in Sec. <ref>, we delve into the architecture of our NNs, shedding light on the requisite feature engineering and the deployment of customized loss functions. The ensuing discussion encompasses the results gleaned from both two- and three-flavor scenarios. Finally, we conclude in Sec. <ref>.§ FFCS SIMULATIONSTo effectively train our NN, we require a substantial number of training samples containing initial values of the (anti)neutrino moments, I_0 and I_1, within the neutrino gas. These samples should also encompass their corresponding final values, reflecting the asymptotic outcomes of FFCs.Our chosen physical model involves the evolution of FFCs within a one-dimensional (1D) box, mirroring the setup outlined in Ref. <cit.>. This model assumes translation symmetry along the x and y axes, axial symmetry around the z axis, and periodic boundary conditions in the z direction. Notably, we omit considerations of vacuum mixing and neutrino-matter forward scattering in this model. In this study, we prime the neutrino gas for tracking its flavor evolution by employing two widely utilized parametric neutrino angular distributions documented in existing literature. 
The first one is the maximum entropy distribution defined as f^max-ent_ν(μ) = exp[η + aμ], where we here consider the ϕ_ν-integrated distribution, i.e., f_ν(μ) = ∫_0^∞∫_0^2π E_ν^2 d E_ν dϕ_ν/(2π)^3 f_ν(𝐩). This is a very natural choice for the neutrino angular distribution since the maximum entropy closure <cit.> is currently very popular in the moment-based neutrino transport methods. This parametric distribution has also been used to detect νELN crossings using fitting and machine learning techniques <cit.>. Another angular distribution considered in the literature of FFCs (see, e.g., Refs. <cit.>) is the Gaussian distribution defined as f^Gauss_ν(μ) = A exp[-(1-μ)^2/ξ]. Note that both of these distributions have a parameter which determines the overall neutrino number density, namely η and A, and the other parameters determining the shape of the distribution, i.e., a and ξ. Allowing for two distinct forms of angular distributions takes into consideration potential deviations in the shape of neutrino angular distributions in realistic simulations, which can occur, e.g., due to the use of different closure relations. To ready our datasets, we begin with the initial angular distributions of neutrinos, which can either follow a maximum entropy distribution or a Gaussian distribution. Subsequently, we utilize analytical neutrino survival probabilities to determine the asymptotic outcome of FFCs. By performing integration over the neutrino angular distributions, we can obtain the initial and final values of I_0 and I_1.

In our analytical treatment of the survival probability, we follow closely our recent work in Ref. <cit.>. We assume that G(μ) (=∫_0^2π dϕ_ν G(𝐯)) has only one zero crossing μ_c. This helps us to define Γ_+ = | ∫_-1^1 dμ G(μ) Θ[G(μ)] | and Γ_- = | ∫_-1^1 dμ G(μ) Θ[-G(μ)] | as the integrals of the positive and negative parts of G(μ). Here Θ is the Heaviside theta function. In the following, we specify the μ range over which the above integral is smaller (larger) by μ^< (μ^>). For the survival probability in the two-flavor scenario, we use the analytical formula P^2f_sur(μ) = 1/2 for μ in the range μ^<, and P^2f_sur(μ) = 𝒮(μ) for μ in the range μ^>, where the distribution over μ^> is formulated as 𝒮(μ) = 1-1/2 h(|μ-μ_c|/ζ). Here, h(x) is a function that monotonically decreases from 1 to 0 as x increases from 0 to infinity. To be specific, we here assume h(x) to have a power-1/2 form, i.e., h(x) = (x^2+1)^-1/2. In addition, the parameter ζ can be found such that the survival probability function is continuous. In the three-flavor case where ν_μ and ν_τ are indistinguishable, one can simply find the survival probabilities by the expression P^3f_sur(μ) = 1-4[1-P^2f_sur(μ)]/3. As illustrated in Table 1 of Ref. <cit.>, adopting a power-1/2 form for the survival probability proves to yield a comparatively low error in computing I_0 and I_1 analytically. This explains the rationale behind opting for this analytical survival probability in this work. In the concluding part of Sec. <ref>, we also explore the scenario where the outcomes derived from actual simulations of FFCs are applied.

§ APPLICATIONS OF NEURAL NETWORKS

Before unveiling our findings, it's crucial to emphasize that to ensure high performance of our NN models in the test set, it is necessary to divide the dataset into three distinct sets.
ii) Development Set: Also known as the validation set, this subset plays a pivotal role in determining the optimal hyper-parameters of the algorithm. It serves as a testing ground to fine-tune the model for optimal performance. iii) Test Set: To assess the NN's efficacy on novel, unseen data, the test set is utilized. This set provides a critical evaluation of the model's generalization capabilities beyond the training data.

§.§ The architecture of NNs

For a given arbitrary neutrino gas, one is provided with the initial values of I_0's and I_1's of ν_e, ν̅_e, and ν_x. In this context, we make the assumption that the initial distributions of ν̅_x and ν_x are identical (though their final ones following FFCs could be different), a simplification that aligns with the majority of state-of-the-art CCSN and NSM simulations. In order to enhance the performance of our NNs, we introduce a layer of feature engineering, employing the following features as pertinent inputs in our NNs: α, α_ν_x, F_ν_e, F_ν̅_e, and F_ν_x, with α = n_ν̅_e/n_ν_e, α_ν_x = n_ν_x/n_ν_e, and F_ν = ( I_1/I_0)_ν. Note that the selection of these features offers explicit insights into the configuration of neutrino angular distributions, which plays a crucial role in understanding the asymptotic outcome of FFCs. Furthermore, it is worth highlighting that all quantities in this context are normalized by the initial ν_e number density, allowing the convenient choice of setting it to n^initial_ν_e = 1. This simplification reduces the number of inputs to our NNs, and notably, there is no input parameter related to n_ν_e. Though the aforementioned features serve as a necessary foundation for developing an NN, there remains room for further enhancement through more advanced feature engineering to optimize the performance of our NNs. This optimization can be achieved by gaining insights from the neutrino survival probability's shape, as expressed in Eq. (<ref>). Substantial information pertaining to the distribution of the survival probability can be derived by learning the position of μ_c. Another valuable piece of information, given μ_c, is determining the specific side of μ_c on which equipartition occurs, while the behavior of the survival probability on the other side is governed by conservation laws. Information regarding the side on which equipartition occurs is provided in the quantity E_RL, a binary number which is 1 if the equipartition occurs for μ_c≤μ, and 0 otherwise.

In our NN framework, we explore two distinct architectures, as illustrated in Fig. <ref>. In the foundational architecture, we integrate only α, α_ν_x, F_ν_e, F_ν̅_e, and F_ν_x into our NN. An alternative NN that we examine involves an additional layer of feature engineering, as discussed in the preceding paragraph, encompassing information about μ_c and E_RL. Practically, this augmentation is accomplished by constructing a separate regression model trained on our dataset, from which information regarding μ_c and E_RL can be readily extracted. We have confirmed that the computation of E_RL and μ_c can be performed with small errors. As illustrated in Fig. <ref>, our feedforward NN has a single hidden layer containing 50 neurons, unless stated otherwise. The rationale for this choice is illustrated in Fig. <ref> and the text around it. Also regarding the output layer, our NNs provide the values of I_0 and I_1 for both ν_e and ν̅_e, effectively utilizing a total of 4 neurons.
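As a concrete illustration of this architecture, a minimal sketch in Keras (one of the packages acknowledged by the authors) is given below. The layer sizes follow the description above: seven inputs for the variant with the engineered features, a single hidden layer of 50 neurons, and four outputs. The activation function, optimizer, output ordering, and the relative weight of the extra penalty on the predicted n_ν_e + n_ν̅_e (the customized loss mentioned in the abstract) are assumptions, since the text does not specify them.

```python
import tensorflow as tf
from tensorflow import keras

# Inputs for the PINN variant: alpha, alpha_nux, F_nue, F_nuebar, F_nux, mu_c, E_RL.
# The basic architecture would use only the first five.
N_FEATURES = 7
PENALTY_WEIGHT = 1.0  # relative weight of the extra term; not specified in the text

def pinn_loss(y_true, y_pred):
    """Mean squared error plus a penalty on the predicted n_nue + n_nuebar.

    Assumed output ordering: [I0_nue, I1_nue, I0_nuebar, I1_nuebar],
    so the electron-channel number density is column 0 plus column 2.
    """
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    n_true = y_true[:, 0] + y_true[:, 2]
    n_pred = y_pred[:, 0] + y_pred[:, 2]
    return mse + PENALTY_WEIGHT * tf.reduce_mean(tf.square(n_true - n_pred))

model = keras.Sequential([
    keras.Input(shape=(N_FEATURES,)),
    keras.layers.Dense(50, activation="tanh"),  # single hidden layer with 50 neurons
    keras.layers.Dense(4),                      # I0 and I1 for nu_e and nubar_e
])
model.compile(optimizer="adam", loss=pinn_loss)
# Training would use the split described above, e.g.
# model.fit(X_train, y_train, validation_data=(X_dev, y_dev), epochs=200)
```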
The determination of I_0 and I_1 for ν_x and ν̅_x can be deduced by applying the principles of neutrino and antineutrino number density and momentum conservation.In simpler terms, our NN's ensures that the fundamental laws governing neutrino conservation are respected withoutany exceptions.§.§ Loss functions The loss function, ℒ, is a crucial component in training NNs, serving as a measure of the model's predictive performance. It quantifies the disparity between predicted values and actual target values, providing a guide for the model to adjust its parameters during the optimization process.When it comes to neutrino flavor conversions in CCSNe and NSMs, a critical parameter of utmost significance is the number of neutrinos in the electron channel, i.e., N_ν_e + ν̅_e = n_ν_e + n_ν̅_e, as opposed to the number of neutrinos in the heavy-lepton channel, N_ν_x + ν̅_x = n_ν_x + n_ν̅_x. Leveraging this crucial physical insight to enhance the performance of our NN, we incorporate an additional loss term in the optimization of the NN model with the extra features. Thisloss term is designed to penalize discrepancies in N_ν_e + ν̅_e, and is defined as, ℒ_extra = 1/N_sampleΣ_i (Δ N_ν_e + ν̅_e, i)^2,whereΔ, N_sample, and Σ_i denote the difference between the true and predicted values, the number of samples in the training set, and the summation over the training samples, respectively. Thisloss termprovides an additional constraint for the model. The integration of the domain knowledge characterizes this particular NN architecture as a PINN, given that its distinctive nature is shaped by our insights into the underlying physics of the problem <cit.>. The PINN should be compared with our basic NN, referred to as NN with no extra features, for which the loss term only includes the ordinary mean squared errors of the output parameters.Note that the incorporation of the domain knowledge in our PINN includes botharchitectural aspects, utilizing additional features, and learning-based enhancements via the loss function.It's also important to highlight that unlike what typically observed in PINNs, our approach does not involve a loss term associated with some partial differential equations.§.§ Three-flavor scenario In this section, we discussthe evaluation of our NNs concerning their predictions for the asymptotic outcome of FFCsin the three-flavor scenario.To train and assess our model, we utilize a dataset comprising a well-balanced combination of maximum entropy and Gaussian initial neutrino angular distributions. The ultimate outcome of FFCs is determined through a three-flavor survival probability, as detailed in Eq. (<ref>) and the surrounding text. To better encapsulate realistic conditions regarding the values of n_ν's and the hierarchy among F's, weprepare each sample by randomly selecting the inputs for our NNs.Specifically, we set α∈ (0,2.5),α_ν_x∈ (0,3), F_ν_x∈ (0,1),F_ν̅_e∈ (0.4F_ν_x, F_ν_x), and F_ν_e∈ (0.4 F_ν̅_e, F_ν̅_e). This selection process ensures consistency with the expected hierarchy F_ν_e≲ F_ν̅_e≲ F_ν_x, characteristic of CCSN environment. Given these quantities, one can then determine the initial angular distributions of neutrinos. Using the analytical survival probability, the final I's can be derived.In the left panels of Fig. <ref>, we present the performance results of our PINN model.Here, an epochrefers to a single pass through the entire training dataset during the training phase. 
Notably, the relative error in the electron neutrino number density, defined as |Δ (n_ν_e + n_ν̅_e)|/(n_ν_e + n_ν̅_e), can attain values as low as 2.5%. Furthermore, we observe that the meanabsoluteerrorin the output quantities, defined as(|Δ I^ν_e_0| + |Δ I^ν_e_1| + |Δ I^ν̅_e_0| + |Δ I^ν̅_e_1|)/4,can reach values ∼ 3%. Additionally, from the lower panel, it is evident that almost 90% of the predictions exhibit errors of≲ 5% in N_ν_e + ν̅_e.In the middle panels of Fig. <ref>, we present the performance of our basic NN with no extra feature andno loss term for enhancing the accuracy of N_ν_e + ν̅_e. It is evident that the errors in this case are a bit greater than those of the PINN model. It's also worth noting that while the error in N_ν_e + ν̅_e is smaller than the absolute error in the output quantities, the gap between them has beenreduced.This could be attributed to the lack of a specific loss term targeting the reduction of errors in N_ν_e + ν̅_e. In addition and by examining the lower panel, we can observe thatalmost 85% of the predictions still exhibit errors of ≲5% in N_ν_e + ν̅_e. Instead of rigidly enforcing the conservation of neutrino quantities, an alternative approach involves a more flexible NN model for which strict conservation laws are not forcefully respected. Instead, one can aim to derive all the neutrino moments (I_0's and I_1's for all flavors) as outputs, while introducing an additional loss term, which effectively enforces the conservation of (anti)neutrino number densities and momenta.This increased flexibility, combined with the loss term addressing conservation, has the potential to enhance the training of the NN,resulting in reduced errors in its predictions. Consider that this NN architecture could also be referred to as a PINN due to its loss term encompassing domain-specific knowledge,accounting for the conservation laws governing neutrino number density and momentum. However, our findings emphasize the potential significant risks associated with applying such an informedNN to our specific problem. This concern is evident in the right panel of Fig. <ref>.While the absolute error and the error observed in N_ν_e + ν̅_e are comparable to what oneobserves in the left and middle panels, there is a new type of error which appears in the totalnumber of neutrinosand its first moment, soaring to values as high as 1-3%. Sucherrors, whether in total neutrino number density or momentum, have the potential todistort the physics of CCSNe and NSMs. Note that such errors can generate greater risks in scenarios where a disparity might exist between the training and test datasets. All calculations presented so faremploy a feedforward NN with a single hidden layer containing n_h = 50 neurons. The rationale for this choice of the number of neurons is illustrated in Fig. <ref>, where different errors are shown for different NN architectures. It is evident that the NNs perform best on the validation set once n_h≳ 50. It is also illuminating to note that if, for any reason such as computational constraints, a simpler NN with a smaller n_h is used, the performance of the PINN surpasses notably that of the model without additional features. However, this performance gap diminishes as larger n_h values are utilized. Furthermore, it is evident that even the model without explicitly enforced conservation laws achieves its optimal performance when n_h≳ 50. In Fig. <ref>, we present an analysis of the performance of our PINN as a function of the size of the training set. 
The red curve represents the absolute error in the PINN's output, while the blue curve illustrates the relative error in N_ν_e + ν̅_e. It is noteworthy that as the training dataset expands to incorporate several thousand data points, the error rapidly diminishes to valuesbelow 5% and the difference between the error in the validation and training set disappears. This establishes the absolute minimum number of data points essential for conducting dependable calculations using NN's. However, it's important to bear in mindthat this requisite number is expected to inherently grow as one explores increasingly intricate models involving more inputs and outputs. It is also interesting toobserve that the performance of our NN remains satisfactory even when trained on relatively small datasets, comprising just a few hundred data points. §.§ Two-flavor scenario In this section, we conduct an evaluation of the performance of our NNs within a two-flavor scenario. The architectural configuration of the NNs and the training process closely mirror those detailed in the preceding section.A notable departure from the prior section lies in the fact that, in this case, we employ the two-flavor version of the survival probability (Eq. (<ref>)). Additionally, we haveconsidered the results derived from our 1D box simulations of FFCs. These simulations encompass 10,000 data points characterized by the Gaussian angular distributions. It is crucial to emphasize that when working with the simulation results, no analytical prescription is employed for the survival probability. Instead, we derive the outcomes of FFCs directly from the simulations. This distinctive approach offers the advantage of enabling us to assess the reliability of the NN models trained on artificial data, when tested on outcomes derived from actual simulations of FFCs. The left panels of Fig. <ref> present the performance of our NNs trained using the artificial data, when tested on the simulation data. Notably, the PINN consistently outperforms the basic NN with no extra features, particularly in the error associated with N_ν_e + ν̅_e. It is crucial to observe that the errors exhibit more pronounced variations compared to previous cases. This increased variability might be attributed to inherent systematic errors in the analytical prescription (for an estimation of the error associated with the analytical prescription, see Table.1 of Ref. <cit.>).During the evaluation of our NNs' performance on simulation data, we noted the critical importance of ensuring the equivalence between the input parameter space covered in the training set and the test set. This holds particularly true for the ranges of α and α_ν_x, as well as forthe hierarchical relationship among F_ν's. To elaborate further, if the test set encounters regions within the input parameter space that were completely unseen during the training phase, it could significantly degrade the performance of the NN, potentially resulting in very poor performance. The middle and right panels of Fig. <ref> display errors encountered during the computations in which both training and testing were performed on identical datasets—either artificial data generated using analytical formulas or data obtained from simulations. As discussed before, we have considered herea single form of neutrino angular distributions, namely the Gaussian one. Thisanalysis providesinsights into theinherent errors presented in each of the training sets. 
Then by comparing them with the left panels, one gets an idea of the error existing in the analytical formula.The results depicted in Fig. <ref> demonstrate the notable enhancement achieved by utilizing our PINN method in the case of two-flavor scenario. Specifically, a noticeable disparity is evident betweenthe performance of PINN and that of the NN with no extra features,surpassing the distinctions observed in Fig. <ref> for the three-flavor scenario. This fundamental discrepancy between two- and three-flavor scenarios highlights a greater degeneracy in the former, ultimately resulting in a moreoverall performance improvement when additional information, such as the implementation of PINN, is incorporated. It is also worth noting a shift in the hierarchy between the absolute error and the error in N_ν_e + ν̅_e, in the two- and three-flavor scenarios (compare Fig. <ref> with the left and middle panels of Fig. <ref>). Although this observation is intriguing, it is crucial to recognize that comparing an absolute error with a relative error may not be entirely equitable. Such a hierarchy is anticipated to be sensitive to changes in the data, and thus, its intrinsic merit is limited. To address this concern, we investigated the hierarchy between the relative absolute error and the relative error in N_ν_e + ν̅_e. This led us to find that the hierarchy remains consistent when considering these two types of errors, providing a fair basis for comparison.Despite what discussed above, we opted to utilize absolute error instead of relative absolute error throughout this study for two primary reasons. Firstly, in our calculations, we have already normalized all quantities by n_ν_e, resulting in having relative values for each quantity. This implies that any absolute error could be interpreted already to be relative in spirit. Secondly, to avoid excessive sensitivity to the small values associated with some of the neutrino quantities, we found it more appropriate to employ absolute error as a metric in our study. This decision ensures a balanced and meaningful evaluation of our results.§ DISCUSSION AND OUTLOOKIntensive simulations have demonstrated that FFCs can achieve equilibriumstates in some models. In this study, we have employed neural networks (NNs) to predict the asymptotic outcome of FFCs in a three-flavor neutrino gas within a 1D box with periodic boundary conditions. Specifically, our focus was on the first two moments of neutrino angular distributions as inputs/outputs, making our NN models applicable to cutting-edge CCSNe and NSM simulations. We have shown thatour NNscan predict the asymptotic outcomes of the (anti)neutrino I_0's and I_1's with a notable accuracy, corresponding to an error of ≲ 3%.In order to enhance the performance of our NNs, we implement some novel features aiming at capturing thecharacteristics of the expected neutrino survival probability distributions.Firstly, we incorporate a new feature related to the position of the zero crossing in the distribution of νELN, μ_c. Additionally, we introduce another feature indicating on which side of μ_c the expected equipartition between different neutrino flavorsoccurs. Both of thesefeatures are derived through a layer of regression applied to the initial inputs of the NN (see Fig. <ref>).In the context of neutrino flavor conversions in CCSNe and NSMs, a critical parameter is the quantity of neutrinos and antineutrinos in the electron channel. 
To further optimize our NNs, we incorporate a supplementary loss term penalizing any discrepancies in predicting N_ν_e + ν̅_e.The results demonstrate a relative improvement in our customized physics-informed neural network (PINN) due to the incorporation of extra features and a tailored loss function, outperforming a basic neural network that uses a standard mean squared error loss function and lacks these extra features.We have also conducted a comprehensive evaluation of the performance of our NNs focusing on the variance between the training and validation sets (Fig. <ref>). Our findings reveal that the observed variance almost disappears when considering a minimum of a few thousand data points.This establishes an absolute minimum number of data points essential for developing a dependable NN for predicting the outcome of FFCs in our model. An intriguing observation arising from our study is that, even with the utilization of relatively small datasets, the variance remains modest, with an associated errorlimited to ≲ 15%. This insight further emphasizes the potential of NNs in scenarios where obtaining extensive datasets may be challenging or resource-intensive. Instead of rigidly adhering to the strict conservation of neutrino quantities, we have also assessed the performance of a NN with a more flexible approach, where all the neutrino moments (I_0's and I_1's for all flavors) are treated as outputs. Here, we introduced an additional loss term effectively ensuring the conservation of (anti)neutrino number densities and momenta. However, our research has underscored noteworthy concerns associated with the application of such an informed NN to our specific problem. Indeed, we have shown thatthere could exist unignorable errors in the total number of neutrinos and their first moments, indicating a capacity to distort the physics of CCSNe and NSMs. In our research, our NN models were predominantly trained on artificial data derived from two initial parametric angular distributions: the maximum entropy and Gaussian distributions. Additionally, we assessed the performance of our NN models using simulation data in a two-flavor scenario. Our findings indicate that the observed errors can be rationalized by accounting for the anticipated discrepancies between analytical and numerical results, as well as the inherent errors present in the training set.In summary, our findings underscore the viability of NNs in forecasting the asymptotic outcomes of FFCs, once only the initial two moments of neutrino angular distributions are taken into account. This marks a significant advancement in the potential integration of FFCs into simulations of CCSNe and NSMs. However, there are still vital avenues for further exploration. Firstly, our NN models were notably constrained to scenarios where neutrino distributions were assumed to be axisymmetric. Additionally, we operated under the assumption that ν_x and ν̅_x exhibit similar distributions. Relaxing these assumptions necessitates access to training datasets derived from actual simulations of FFC evolution in models without imposed axisymmetry, and where ν_x and ν̅_x distributions may differ. Furthermore, our results are founded on a single-energy neutrino gas, prompting a crucial question regarding expectations in a multi-energy neutrino environment. This consideration is especially relevant, given that almost all practical applicationsinvolve predicting FFC outcome regarding the neutrino energy spectrum. 
Given the efficacy of NNs in this domain, taking these crucial steps remarkably enhances the feasibility of incorporating FFCs into CCSN and NSM simulations.

§ ACKNOWLEDGMENTS

We are deeply grateful to Georg Raffelt for insightful conversations and reading our manuscript. We also thank Gabriel Martínez-Pinedo and Oliver Just for fruitful discussions. S.A. was supported by the German Research Foundation (DFG) through the Collaborative Research Centre “Neutrinos and Dark Matter in Astro- and Particle Physics (NDM),” Grant SFB-1258, and under Germany’s Excellence Strategy through the Cluster of Excellence ORIGINS EXC-2094-390783311. M.-R. W. acknowledges support from the National Science and Technology Council under Grant No. 111-2628-M-001-003-MY4, the Academia Sinica under Project No. AS-CDA-109-M11, and the Physics Division, National Center for Theoretical Sciences, as well as the resources of the Academia Sinica Grid-computing Center (ASGC). Z.X. was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (ERC Advanced Grant KILONOVA No. 885281). We would also like to acknowledge the use of the following software packages: Scikit-learn <cit.>, Keras <cit.>, Matplotlib <cit.>, Numpy <cit.>, SciPy <cit.>, and IPython <cit.>. | http://arxiv.org/abs/2311.15656v1 | {
"authors": [
"Sajad Abbar",
"Meng-Ru Wu",
"Zewei Xiong"
],
"categories": [
"astro-ph.HE"
],
"primary_category": "astro-ph.HE",
"published": "20231127093630",
"title": "Physics-Informed Neural Networks for Predicting the Asymptotic Outcome of Fast Neutrino Flavor Conversions"
} |
Descent with algebraic structures for symplectic cohomology

We formulate and prove a chain level descent property of symplectic cohomology for involutive covers by compact subsets that takes into account the natural algebraic structures that are present. The notion of an involutive cover is reviewed. We indicate the role that the statement plays in mirror symmetry.

Umut Varolgunes
27 November 2024
====================

§ INTRODUCTION

Let (M^2n,ω) be a geometrically bounded symplectic manifold <cit.> and let k be a field of characteristic 0. If c_1(T M)=0, we also fix a grading of M, that is, a homotopy class of non-vanishing sections of the complex line bundle Λ_ℂ^n(TM)→ M for some compatible almost complex structure, to equip all the chain complexes below with ℤ-gradings; otherwise we only have ℤ_2-gradings. The k-algebra Λ_≥ 0:={∑_i≥ 0 a_iT^α_i | a_i∈ k, α_i∈ℝ_≥ 0, where α_i→∞ as i→∞} is called the Novikov ring and its quotient field Λ is called the Novikov field.

The relative symplectic cohomology SH_M^*(K) of a compact subset K⊂ M was defined in <cit.> using the geometric ideas of <cit.> (see <cit.> for similar constructions). Due to the reasoning explained in Remark <ref>, in this note we will try out a new terminology where we replace “relative symplectic cohomology" with “symplectic cohomology with supports" and “relative symplectic cohomology of K inside M" with “symplectic cohomology of M with support on K". SH_M^*(K) is the homology of a canonically defined chain complex SC^*_M(K) over Λ_≥ 0 <cit.>, which we now call symplectic cochains of M with support on K. SC^*_M(K) is obtained as the completed homotopy colimit of the universal homotopy coherent diagram of continuation maps of Floer complexes CF^*(- ;Λ_≥ 0) of Hamiltonians that are negative on K [Using SC^*_M(K) for this canonical model is a change in notation as well. Previously, SC^*_M(K) was used to denote the telescope model from <cit.>, which was not ideal notation since it hid the choice of acceleration data that was involved.]. It can intuitively be thought of as the Floer complex of the upper semi-continuous function that is 0 on K and +∞ outside.

There are canonical restriction chain maps SC^*_M(K')→ SC^*_M(K), for K⊂ K', which upgrade the structure to a presheaf of chain complexes over the compact subsets of M. We also have a canonical PSS chain map <cit.>: C^*(M;ℤ)⊗Λ_> 0→ SC^*_M(K). Crucially, this map is a quasi-isomorphism if M is closed and M=K <cit.>.

We cannot hope to have a local-to-global property for symplectic cohomology with supports in general because of the PSS isomorphism that we just mentioned and the fact that SH_M^*(K)⊗_Λ_≥ 0Λ is known to vanish for displaceable sets <cit.>, e.g. for sufficiently small Darboux balls. It turns out that for a special class of covers we do have a satisfactory positive result. We say that the compact subsets K_1,…, K_N⊂ M are Poisson commuting if there exists a smooth map F: M→ℝ^k with Poisson commuting components and compact P_1,…, P_N⊂ℝ^k such that K_m=F^-1(P_m) for all m=1,… ,N. If K=K_1∪…∪ K_N and K_1,…, K_N are Poisson commuting compact subsets, we call K_1,…, K_N an involutive cover of K.
The Čech chain complex of SC^*_M(-) for a cover K=K_1∪…∪ K_N is defined as (SC_M; K_1,…,K_N):=⊕_p≥ 0⊕_|J|=p+1 SC^*_M(⋂_m∈ JK_m)[p], where [-] denotes a degree shift and J runs through subsets of {1,… ,N}, and with differential the sum of the Čech differential and the Floer differential, ignoring the signs for the moment.

Assume that K=K_1∪…∪ K_N is an involutive cover. Then, the canonical chain map SC^*_M(K)→(SC_M; K_1,…,K_N) is a quasi-isomorphism.

In fact, below we will give a very mild generalization of this result to weakly involutive covers (Definition <ref>) in Theorem <ref>, which in particular gives us a chance to review the proof.

In <cit.>, with Yoel Groman and Mohammed Abouzaid, we construct a bigger model SC^*_M,big(K) of SC^*_M(K), which has a natural action of the KSV operad and restriction maps that are compatible with this action. Here bigger model precisely means that we have canonical inclusions of chain complexes SC^*_M(K) → SC^*_M,big(K), compatible with restriction maps, that induce isomorphisms on homology.

The goal of this note is to incorporate this algebraic structure into the local-to-global principle for weakly involutive covers. We achieve this task in Theorem <ref> using existing techniques from the literature. We then indicate how this result would be used in mirror symmetry in the final section.

§.§ Acknowledgements

The author thanks Bertrand Toën for a useful conversation about Section 2.3 and Mohammed Abouzaid for explaining the role of gerbes in Section 5. U.V. was supported by the TÜBİTAK 2236 (CoCirc2) programme with a grant numbered 121C034.

§ BACKGROUND

§.§ Weakly involutive covers

Let M be a symplectic manifold, which has an induced Poisson bracket {-,-}: C^∞(M)× C^∞(M)→ C^∞(M). We note the following well-known result, e.g. see <cit.>.

Let f_1,… ,f_N: M →ℝ, and g_1, g_2 : ℝ^N →ℝ be smooth functions. Assume that {f_i , f_j} = 0, for all i,j. Then the functions G_l : M →ℝ, l = 1, 2, defined by G_l(x)=g_l(f_1(x),… ,f_N(x)) also satisfy {G_1 , G_2} = 0.

We call compact subsets K_1,...,K_N⊂ M weakly Poisson commuting if there exist smooth functions f_m,i:M→ℝ for m=1,…, N and i=1,2,… such that
* f_m,i|_K_m<0 for all m and i.
* f_m,i< f_m,i+1 for all m and i.
* For all m, if f_m,i(x)<0 for all i, then x∈ K_m.
* The Poisson bracket {f_m,i,f_m',i}=0 for all i and m,m'.
If K=K_1∪…∪ K_N and K_1,…, K_N are weakly Poisson commuting compact subsets, we call K_1,…, K_N a weakly involutive cover of K.

The order of the compact subsets is of course unimportant. We used this notation for ease of reading. The weak Poisson commutation property is inherited by sublists.

If K_1,...,K_N are Poisson commuting as in Definition <ref>, then they are weakly Poisson commuting. This immediately follows from the fact that for compact Z ⊂ℝ^k, we can always find a non-negative smooth function on ℝ^k which vanishes precisely on Z, along with Lemma <ref>.

We do not know whether weakly Poisson commuting is a strictly weaker condition than Poisson commuting. It is tailored to be more convenient in the “degeneration setup". This is easiest to explain when we are given a semi-stable polarized degeneration of a smooth complex manifold over a disk <cit.>. Roughly speaking, we use symplectic parallel transport to move the decomposition of the central fiber into its irreducible components to the general fiber in order to obtain an involutive cover <cit.>.
Let K be a finite union of finite intersections of the compact subsets K_1,…,K_N for which there exist smooth functions f_m,i:M→ℝ with m=1,…, N and i=1,2,… such that
* f_m,i|_K_m<0 for all m and i.
* f_m,i< f_m,i+1 for all m and i.
* For all m, if f_m,i(x)<0 for all i, then x∈ K_m.
Then, we can construct smooth functions g_i:M→ℝ, i=1,2,… such that
* g_i|_K<0 for all i.
* g_i< g_i+1 for all i.
* If g_i(x)<0 for all i, then x∈ K.
* There are smooth h_i:ℝ^N→ℝ such that g_i(x)=h_i(f_1,i(x),… ,f_N,i(x)) for all i and x.

By a simple induction, it suffices to prove the case N=2, where K is either K_1∩ K_2 or K_1∪ K_2. We deal with the intersection case. Consider the maps F_i:= (f_1,i, f_2,i): M→ℝ^2. By our assumption, F_i maps K_1 to the open left half space and K_2 to the open lower half space. Let C_i be a curve that is a smoothing of the union of the non-positive parts of the two axes in ℝ^2, which is contained in the negative quadrant. We assume that the slopes of all of its tangent lines are non-positive and no negative slope is attained twice. The complement of C_i in the plane has two components. We call the one that is contained in the negative quadrant B_i. We can make sure that F_i(K_1∩ K_2)⊂ B_i and B_i+1⊃ B_i for all i. We define a smooth function h_i:ℝ^2→ℝ which is 0 on C_i, negative on B_i and positive elsewhere as follows. Let x∈ℝ^2, draw the straight line with slope 1 from x and define h_i(x) to be the signed length of the segment between x and where it intersects C_i. It is now elementary to check that g_i:=h_i∘ F_i satisfies the conditions. The union case is similar and we omit it.

If K_1,...,K_N are weakly Poisson commuting, then the list of all compact subsets that can be obtained as a finite union of finite intersections of K_1,…,K_N is also weakly Poisson commuting. This is immediate from Proposition <ref> and Lemma <ref>. Note that this corollary is true when we remove the words “weakly" as well and the proof is easier.

§.§ The KSV operad

We will use the notions of operads and algebras over operads freely. For definitions see <cit.>. Unless otherwise specified our operads are symmetric and over Ch(k). The operad that is relevant here is the KSV operad {KSV(n)=C_*(fℳ_0,n+1^ℝ;k)}_n≥ 1. Here fℳ_0,2^ℝ:=S^1 and, for n≥ 2, an element of fℳ_0,n+1^ℝ is a stable nodal curve from ℳ_0,n+1 equipped with tangent rays at the branches of each node up to simultaneous rotation (in opposite directions) as well as at the marked points. The details are unimportant for the purposes of this paper.

It is well-known that algebras over the homology operad of KSV are precisely BV algebras <cit.>. The definition of the latter is reviewed in Section <ref>. In other words, the homology operad of KSV is isomorphic to the BV operad. Let us note that the KSV operad is formal <cit.>, see also <cit.>.

In <cit.>, a cofibrant replacement for the BV operad (and by its formality, for the KSV operad) was constructed. Following the same paper, we call algebras over this operad homotopy BV algebras. For these there is a more flexible notion of a morphism that we call BV_∞-maps <cit.>. We have a functor from KSV-algebras with strict morphisms to homotopy BV algebras with BV_∞-maps.

§.§ Homotopy limits of cosimplicial diagrams

Let ℕ^inj_aug be the category with objects -1,0,1,2,… and morphisms from p to q the set of injective maps {n∈ℤ| 1≤ n≤ p+1}→{n∈ℤ| 1≤ n≤ q+1}. ℕ^inj denotes the full subcategory with non-negative integers as objects.

A contravariant/covariant functor ℕ^inj→ Ch(k) is called a semi simplicial/cosimplicial chain complex. An augmentation of a semi cosimplicial chain complex ℕ^inj→ Ch(k) is simply an extension of the functor to ℕ^inj_aug→ Ch(k).
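Concretely (this is only an unpacking of functoriality, in the form it is used below): since there is exactly one morphism from -1 to each p≥ 0 in ℕ^inj_aug, an augmentation in particular provides a chain map from the value at -1 to the value at 0 whose two compositions with the coface maps into level 1 agree,

ε: 𝒟^-1→𝒟^0 with d_0∘ε=d_1∘ε.

In the Čech situation below, ε is (up to the suppressed signs) the map assembled from the restriction maps out of SC^*_M(K_1∪…∪ K_N), the point being that restricting to a pairwise intersection does not depend on which piece one restricts through first.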
An augmentation of a semi cosimplicial chain complex ℕ^inj→ Ch() is simply an extension of the functor to ℕ^inj_aug→ Ch().Let us denote by Δ^p the p dimensional simplex inside ℝ^p+1 and the affine, vertex order preserving inclusions into codimension one faces byδ_i: Δ^p→Δ^p+1, with i=0,… ,p+1.We start with an alternative interpretation of (SC_M; K_1,…,K_N). Let (𝒟^·,d_i) be a semi cosimplicial chain complex. In our case, this will be given by𝒮𝒞^·:=p↦⊕_|J|=p+1SC^*_M(⋂_m∈ JK_m),with face maps d_i obtained from the restriction maps in the standard way. An augmentation of 𝒮𝒞^· is given by-1↦ SC^*_M(K_1∪…∪ K_N). Let us define thesemi simplicial chain complexp↦ NC^*(Δ^p)where NC denotes the normalized simplical cochain complex whose elements are arbitrary assignments of scalars to each face of Δ^p with grading given by dimension and differential obtained by the usual alternating sum of duals of codimension one face maps. The coface maps are the ones induced from Δ^p→Δ^q.We can define the totalization of 𝒟^· asTot(𝒟^·):=eq(∏_p=0^∞ D^p⊗ NC^*(Δ^p)⇉∏_r→ q D^q⊗ NC^*(Δ^r)).Here the second product is over all maps in ℕ^inj and equalizer means that we are taking the kernel of the difference of the two natural maps. An augmentation of 𝒟^· gives rise to a canonical mapD^-1→ Tot(𝒟^·).It is important to notice that the inclusion ofTot(𝒟^·) into {(x_p=(f↦ x_p(f))) ∈ ∏_p=0^∞𝒟^p⊗ NC^*(Δ^p)| d_i(x_p(f))=x_p+1(δ_if) for allp≥ 0, i=0,… ,p+1 and face f of Δ^p}is surjective. The following is easy to see from this description.There is an isomorphism of chain complexesTot(𝒮𝒞^·)→(SC_M; K_1,…,K_N)by (x_p)↦ (x_p(F)), where F denotes the unique codimension 0 faces. Moreover, the isomorphism intertwines the canonical maps received fromSC_M^*(K), where K=K_1∪…∪ K_N. In fact, NC^*(Δ^·) is a semi simplicial dga using the standard cup product. Therefore, if 𝒟^· is a semi cosimplicial algebra over a non-symmetric operad, Tot(𝒟^·) has an algebra structure over the same operad (from the same reasoning as in Proposition <ref>). This is commonly used in the literature for defining product structures in Cech complexes of dga's, e.g. <cit.>. Unfortunately, the KSV operad is a symmetric operad. So we need a replacement for NC^*(Δ^·) that has a semi-simplicial commutative dga structure. Following Sullivan this is possible under the characteristic 0 assumption (see <cit.>).Let us define the commutative dga of polynomial differential forms on the p-simplex asΩ^*(Δ^p):=[t_0,… ,t_p,dt_0,… ,dt_p]/(∑ t_i-1, ∑ dt_i)with deg(t_i)=0 and deg(dt_i)=1. Using pullback of forms one can easily define the semi simplicial cdga:p↦Ω^*(Δ^p). We then defineTW(𝒟^·):=eq(∏_p=0^∞ D^p⊗Ω^*(Δ^p)⇉∏_r→ q D^q⊗Ω^*(Δ^r))using the same notation with the definition of totalization. An augmentation gives rise to a canonical mapD^-1→ TW(𝒟^·). By integrating forms we can define a map of semi cosimplicial spaces:Ω^*(Δ^·)→ NC^*(Δ^·),which induces a quasi-isomorphismTW(𝒟^·)→ Tot(𝒟^·)compatible with the maps obtained from an augmentation. One can even write down an explicit and universal homotopy equivalence between these two complexes extending the integration map, see <cit.> and the references contained therein. Using the projective model structure on chain complexes <cit.> and the induced Reedy model structure on semi-simplicial complexes <cit.>, one can show that Tot and TW both compute the homotopy limit of semi cosimplicial chain complexes. This follows from<cit.>. 
Noting that, by definition, a map of chain complexes is a fibration if it is surjective, the needed Reedy fibrancy is obvious in the case of Tot and it follows from the basic <cit.> for TW.

Let 𝒪 be a (symmetric) operad over Ch(k) and assume that we have a semi cosimplicial object 𝒜^· in 𝒪-alg. Then, TW(𝒜^·) is canonically an 𝒪-algebra so that for an augmentation as 𝒪-algebras, the canonical map 𝒜^-1→ TW(𝒜^·) is a map of 𝒪-algebras.

The key point is that if C is a commutative dga, then D↦ D⊗ C defines a symmetric lax monoidal functor Ch(k)→ Ch(k) with the natural transformation (D_1⊗ C)⊗ (D_2⊗ C)→ (D_1⊗ D_2)⊗ C given by the product structure of C. This implies that if A is an 𝒪-algebra, then so is A⊗ C automatically. This means that ∏_p=0^∞ D^p⊗Ω^*(Δ^p) has a canonical 𝒪-algebra structure. It is easy to see that this descends to the equalizer.

There is a model structure on 𝒪-algebras transferred using the free-forgetful adjunction to chain complexes <cit.>. This again induces a Reedy model structure on semi-simplicial 𝒪-algebras. By definition, again, a map of 𝒪-algebras is a fibration exactly when it is surjective. Therefore, by the same reasoning as Remark <ref>, TW computes the homotopy limit of semi cosimplicial 𝒪-algebras for this particular model structure.

§ SYMPLECTIC COHOMOLOGY WITH SUPPORTS

§.§ Definition

Let us introduce symplectic cohomology with supports following <cit.>. We note that the details are not so important here. We simply hope to give the reader an overview. The details can be found in <cit.>. We also do not include the modifications that are necessary in the case where M is geometrically bounded but not closed and refer the reader to <cit.>.

For a non-degenerate Hamiltonian H: S^1× M→ℝ we denote the Hamiltonian Floer cochain complex <cit.> of H over Λ_≥ 0 by CF^*(H;Λ_≥ 0).

We first define a dg-category ℋ^cyl whose objects are non-degenerate Hamiltonians. Morphisms from H_- to H_+ are (roughly speaking) chains on a carefully constructed cubical set of monotone continuation map data whose unbroken 0-cubes are {H_s:ℝ× S^1× M→ℝ | H_s= H_± near s=±∞ and ∂_s H_s≥ 0}. Composition is given by associating the outputs with inputs to form broken data.

We then construct the Floer functor 𝒞ℱ:ℋ^cyl→ Ch^dg(Λ_≥ 0). On objects it is defined by H↦ CF^*(H;Λ_≥ 0) and for morphisms we consider continuation maps defined by virtual dimension zero parametrized moduli spaces of Floer solutions. Moreover, ℋ^cyl has a canonical functor to the dg-category with one object and endomorphism space isomorphic to Λ_≥ 0.

Finally, for a compact K⊂ M, SC^*_M(K) is defined as the degreewise T-adic completion of the homotopy colimit of 𝒞ℱ restricted to the full subcategory ℋ^cyl_K with objects the non-degenerate Hamiltonians that are negative on K: SC^*_M(K) := hocolim(𝒞ℱ|_ℋ^cyl_K)^∧, where (-)^∧ denotes this completion. Homotopy colimit here is defined by a two sided bar complex B(⋆,ℋ^cyl_K,𝒞ℱ) <cit.>. The underlying module of the bar complex is ⊕_n=0^∞⊕_(H_0, …, H_n) ∈ Ob ℋ^cyl_K CF^*(H_0;Λ_≥ 0) ⊗ Hom_ℋ^cyl(H_0,H_1) ⊗⋯⊗ Hom_ℋ^cyl(H_n-1,H_n) and the differential is obtained using composition in the category, the module action and the canonical augmentations of the morphism complexes.

The homology of SC^*_M(K) is denoted by SH^*_M(K) and called symplectic cohomology of M with support on K. It will be important for us to work at the chain complex level (keeping in mind for example the generalized Mayer-Vietoris principle for de Rham cohomology <cit.>). As already mentioned in the introduction, we had previously called SH^*_M(K) the relative symplectic cohomology of K inside M.
Let us explain the main reason for wanting to change terminology. The theory has an open string counterpart which is currently in development (see <cit.> for baby steps in this direction). Eventually, there will be a Fukaya category that is associated to a compact subset. Without a change in terminology this would have to be called the relative Fukaya category, which clashes with the Seidel-Sheridan terminology for the Fukaya category of a symplectic manifold relative to a divisor <cit.>. We hope the new terminology catches on.

There are canonical restriction chain maps SC^*_M(K')→ SC^*_M(K), for K⊂ K', which upgrade the structure to a presheaf over the compact subsets of M. The base change of the symplectic cochains with supports presheaf SC_M^*(-) to the Novikov field will be denoted by SC_M^*(-;Λ) and its homology by SH_M^*(-;Λ).

For computations it is better to introduce a smaller model <cit.>. We define an acceleration datum for K⊂ M as a sequence H_1≤ H_2≤… of non-degenerate Hamiltonians H_i: S^1× M→ℝ satisfying H_i|_S^1× K<0 and, for every (t,x)∈ S^1× M, H_i(t,x)→ 0 if x∈ K and H_i(t,x)→ +∞ if x∉ K as i→∞, along with monotone interpolations between H_i and H_i+1 for all i≥ 1. This pointwise convergence condition is equivalent to the subcategory of ℋ^cyl_K with objects H_i and morphisms given by the span of the chosen 0-chains being homotopy cofinal. It is elementary to show that acceleration data exists.

Homotopy cofinality implies that the inclusion of the homotopy colimit of the diagram 𝒞_K:=CF^*(H_1 ;Λ_≥ 0)→ CF^*(H_2 ;Λ_≥ 0)→… into 𝒞ℱ|_ℋ^cyl_K is a quasi-isomorphism. It is well-known that this simpler homotopy colimit can also be computed by the telescope model tel(𝒞_K):= cone(⊕ CF^*(H_i ;Λ_≥ 0)κ-id⟶⊕ CF^*(H_i ;Λ_≥ 0)), where κ is the map induced by the chosen continuation maps. Denoting the degree-wise T-adic completion of tel(𝒞_K) by tel(𝒞_K)^∧, we therefore have a canonical chain map tel(𝒞_K)^∧→ SC^*_M(K), which is a quasi-isomorphism.

We note the following extension without proof: Assume that X,Y are compact subsets of M. Then, we have a homotopy commutative square whose top row is tel(𝒞_X∪ Y)^∧→ cone(tel(𝒞_X)^∧⊕ tel(𝒞_Y)^∧→ tel(𝒞_X∩ Y)^∧), whose bottom row is SC^*_M(X∪ Y)→(SC_M; X,Y), and whose vertical arrows are the canonical maps, where the top row is constructed using appropriate acceleration data as in <cit.>.

§.§ Descent at the linear level

We first state and prove the statement for two subsets.

Assume that K=K_1∪ K_2 is a weakly involutive cover. Then, the canonical chain map SC^*_M(K)→(SC_M; K_1,K_2) is a quasi-isomorphism.

Theorem 4.8.1 from <cit.> shows that we can make choices such that the upper horizontal arrow in the diagram of Proposition <ref> is a quasi-isomorphism. Homotopy commutativity of the same diagram finishes the proof. We note that <cit.> is more general in that it assumes Poisson commutativity only along the “barrier", but we do not know whether this has any use or not.

Assume that K=K_1∪…∪ K_N is a weakly involutive cover. Then, the canonical chain map SC^*_M(K)→(SC_M; K_1,…,K_N) is a quasi-isomorphism.

Before we give the proof we give a lemma that is a purely algebraic statement about the presheaf of chain complexes SC_M^*(-) paraphrasing <cit.>. If the conclusion of Theorem <ref> holds, let us say that K_1,…, K_N satisfy descent.

Let K_1,…, K_N be compact subsets with N>2. Assume that the two-element list K_1, K_2∪…∪ K_N satisfies descent, as do the lists K_2,…, K_N and K_1∩ K_2,…, K_1∩ K_N. Then, K_1,…, K_N satisfies descent.

We have that the canonical maps SC^*_M(K_1∪…∪ K_N)→(SC_M; K_1, K_2∪…∪ K_N) and (SC_M; K_1, K_2∪…∪ K_N)→ cone( SC^*_M(K_1)⊕(SC_M; K_2,…, K_N)→(SC_M; K_1∩ K_2,…, K_1∩ K_N)) are quasi-isomorphisms.
Noticing that the cone is nothing but (SC_M; K_1,…,K_N) finishes the proof.Let 𝒦 be the smallest set of subsets of M which is closed under intersection and union, and contains K_1,… ,K_n. By Corollary <ref>, the members of 𝒦 are Poisson commuting. By Theorem <ref>, for any two element list from 𝒦 descent is satisfied. Using Lemma <ref>, we finish the proof by induction. §.§ Algebraic structuresWe now discuss algebraic structures in symplectic cohomology with supports. Let us start with the homology level structure.A BV-algebra is a graded super-commutative algebra A equipped with a degree decreasing differential Δ, called the BV operator, with the following property. It does not satisfy the graded Leibniz rule, but the error can be used to define a degree -1 Lie bracket on A, which, in particular, does satisfy the graded Leibniz rule in both of its slots. A unit in a BV-algebra is a multiplicative unit that is Δ-closed. Using the techniques of <cit.>, SH_M^*(K) can be equipped with a natural BV-algebra structure, which also admits a unit after base change to . The restriction maps respect these structures.We now move on to the chain level structure, the statement of which requires the terminology from Section <ref>.Associated to any compact subset K⊂ M is a complete torsion free chain complex SC^*_M,big(K) over the Novikov ring, which is equipped with the following structures: * An action of the KSV operad. * A restriction map for each inclusion K ⊂ K' of compact subsetsSC^*_M,big(K') →SC^*_M,big(K),which is compatible with the operadic action. * There is a canonical quasi-isomorphismSC^*_M(K) → SC^*_M,big(K),which is compatible with restrictions maps. The BV algebra structure that we obtain on H^*(SC^*_M,big(K))≃ SH^*_M(K) agrees with the aforementioned one. We also obtain a canonical homotopy BV-algebra structure on SC^*_M,big(K). § DESCENT WITH ALGEBRAIC STRUCTURES Let K_1,…, K_M be compact subsets of M. Weapply the Thom-Whitney construction from Proposition <ref>to 𝒪 being the KSV algebra and 𝒜^· the semi cosimplicial KSV-algebra𝒮𝒞^·:=p↦⊕_|J|=p+1SC^*_M(⋂_m∈ JK_m),to obtain another KSV algebraTW(SC^*_M,big;K_1,…, K_M).Assume that K=K_1∪…∪ K_N is a weakly involutive cover. Then, the canonical mapSC^*_M,big(K)→ TW(SC_M,big; K_1,…,K_N)is a quasi-isomorphism of KSV-algebras. By construction, the map respects the KSV structures so all we need to show is that the map induces an isomorphism on homology. It follows from Theorem <ref> that we have a commutative diagram SC^*_M(K) rrd(SC_M; K_1,…,K_N)dSC^*_M,big(K)rr(SC_M,big; K_1,…,K_N)with vertical arrows being quasi-isomorphisms. Theorem <ref> says that the upper arrow is a quasi-isomorphism and therefore the lower arrow is a quasi-isomorphism.Moreover, by Propositions <ref> and <ref>, we have a canonical commutative diagramSC^*_M,big(K) drdlTW(SC_M,big; K_1,…,K_N) rr(SC_M,big; K_1,…,K_N)with the horizontal arrow a quasi-isomorphism. This finishes the proof. Assume that K=K_1∪…∪ K_N is an involutive cover. Then, the canonical mapSC^*_M,big(K)→ TW(SC_M,big; K_1,…,K_N)is a BV_∞ quasi-isomorphism of homotopy BV-algebras. § MIRROR SYMMETRYWe now explain with lightning speed a very long program that is aimed at a conceptualization of mirror symmetry. Our goal is to highlight the role played by the local-to-global principles of the last section. We do not touch upon homological mirror symmetry. The whole section should thought of as conjectural.Let us now assume that M is closed and graded with a weakly involutive cover M=⋃_i=1^NC_i, e.g. 
we have in mind Remark <ref> with the assumption that the pair of the total space and the special fiber form a log CY pair <cit.>. In the mirror side, which is in the world of rigid analyic geometry <cit.> for our purposes here, we will consider a smooth (meaning that the tangent sheaf is locally free) and proper rigid analytic space Y with an admissible affinoid cover Y=⋃_i=1^NY_i and a global non-vanishing section of the canonical bundle ⋀^nT^*Y. Note that we have a BV (and hence a BV_∞) structure on Sym^*_𝒪_J(Der(𝒪_J)[1]), where 𝒪_J is the algebra of functions on the affinoid domain ⋂_m∈ JY_m for non-empty J⊂{1,… ,N} <cit.>. We can now assume mirror symmetry holds locally and deduce a global form of mirror symmetry as an application of the local-to-global principles.Assume that for all non-empty J⊂{1,… ,N}, we have a BV_∞ quasi-isomorphism of homotopy BV-algebrasSC^*_M,big(⋂_m∈ JC_m; Λ)→ Sym^*_𝒪_J(Der(𝒪_J)[1])compatibly with respect to restriction maps. Then, we have a BV_∞ quasi-isomorphism of homotopy BV-algebrasSC^*_M,big(M; Λ)≃ C^*(M;Λ)→ TW(Y,⋀ TY), where ⋀ TY denotes the sheaf of polyvector fields. We are being vague about what it means for the local BV_∞ quasi-isomorphisms in Equation (<ref>) to be compatible with restriction maps as the relevant notion of BV_∞ homotopies has not been sufficiently developed in the literature. Nevertheless, a compatibility of this form is a non-trivial condition. It is best to consider an example following <cit.>. For the Thurston manifold with an involutive cover lifted from a good cover of the base of its Lagrangian torus fibration without a Lagrangian section and a non-archimedean abelian variety (and an appropriate cover) as the mirror, one can find local BV_∞ quasi-isomorphisms as in Equation (<ref>) but cannot make these compatible with restriction maps altogether. The problem can be solved by equipping the rigid analytic space with a gerbe but we will not discuss this further.We could have written a statement with a complete proof if instead of homotopy BV-algebras and BV_∞ quasi-isomorphisms we used KSV algebras (by formality, BV algebras can be functorially turned to KSV algebras) and maps of KSV algebras. This would have been a much less useful statement as one cannot hope to produce quasi-isomorphisms as in Equation (<ref>) which strictly respect the KSV actions. Given a chain complex A with a homotopy BV algebra structure and a null-homotopy of the circle action, we obtain a hypercommutative algebra structure on H(A) <cit.>. Incorporating the cyclic (in the operadic sense) structures in these results, we recover the full genus 0 cohomological field theories <cit.>. This is the main reason why it is important to state these results at the chain level. Discussing these two extra pieces of structure is beyond the scope of this note.The direct homology level implication of Theorem <ref> is that we have an isomorphism of BV-algebrasQH^*(M;Λ)→ H^*(Y,⋀ TY).This is weaker than it might appear at first sight since it is known that the BV operators vanish on both sides. Nevertheless, it says that the mirror of the small quantum product is the exterior product structure on the sheaf cohomology of polyvector fields, which is worth recording as a corollary (weakening the assumption to what is necessary)Assume that for all non-empty J⊂{1,… ,N}, we have a quasi-isomorphism of chain complexesSC^*_M,big(⋂_m∈ JC_m; Λ)→ Sym^*_𝒪_J(Der(𝒪_J)[1])compatible with the algebra structures and restriction maps up to exact terms. 
Then, we have an isomorphism of algebrasQH^*(M;Λ)→ H^*(Y,⋀ TY), where ⋀ TY denotes the sheaf of polyvector fields.We have a geometric context in which we expect to be able to construct Y from M=⋃_i∈ IC_i by gluing the affinoid domainsY_i:=MaxSpec(HF_M^*(C_i; S; Λ)),for some reference Lagrangian submanifold S. The reference Lagrangian S is an abstraction of a Lagrangian section of an SYZ fibration, and it satisfies conditions that lead to the algebras HF_M^*(C_i; S; Λ) being affinoid (in particular commutative) algebras supported in degree 0 as well as a certain local generation criterion. The latter means thatHH_-n(CF^*_M(C_i;S;Λ))→ SH^0_M(C_i;Λ) hits the unit for all 1≤ i≤ N.The smoothness of Y and the local statements highlighted in Equation (<ref>) are then expected consequences of thelocal generation package involving Cardy relations and more <cit.>; along with very souped up versions of the classical HKR theorem <cit.>. plain | http://arxiv.org/abs/2311.15934v1 | {
"authors": [
"Umut Varolgunes"
],
"categories": [
"math.SG",
"53D40"
],
"primary_category": "math.SG",
"published": "20231127154215",
"title": "Descent with algebraic structures for symplectic cohomology"
} |
Volume filtered FEM-DEM framework for simulating particle-laden flows in complex geometries [===========================================================================================§ INTRODUCTIONThis is the second one of a series of papers in trying to classify rank two theories with eight supercharges. The basic idea isto classify consistent Seiberg-Witten (SW) solution on the effective 4d Coulomb branch [See results in classifying 4d rank one SCFTs <cit.> and rank two SCFTs in <cit.>.]. In previous paper <cit.>, the local of local singularities of SW solution are classified and the corresponding IR physical theories are identified by using the associateddual graph. The purpose of this paper is to study the global SW geometry for rank two theories, namely glue the local singularitiesconsistently. There are a couple oftopological constraints: a) the product of local monodromy should be identity; b) there are simple constraints on the sum of local invariants (see discussion in <cit.>) by assuming the geometry of the total space of the SW fiberation. However, the above constraintsarenot very strong and a systematical study seems quite difficult. It is certainly very helpful if one can write down the full SW families which encode the prescribed local singularities (See <cit.>), which we will pursue elsewhere.We do not take that approach in this paper. Insteada topological approach is taken so that the construction of global SW geometry can be carried outin a combinatorial way. The following two topological facts are important for us:* We will useMastumoto-Montesinos's (MM) theory <cit.> for local singularity: the local degeneration is given by the conjugacy class of mapping class group.So there is a mapping class group element around each degeneration, whichcompletely characterizes the local singularity. * We consider the generic deformation of the theory so that only the simplest possible singularity exists at the bulk: they are called I_1 or Ĩ_̃1̃ singularities, see figure. <ref> for the geometric illustration. The physical interpretation for them is that there is an extra massless particle associated with the vanishing cycle <cit.>. One also need to add a singularity at ∞ which is also assumed to be split into I_1 and Ĩ_̃1̃ singularities.Therefore, one has a genus two SW fiberation with just I_1 or Ĩ_1 singularities, see figure. <ref>, and such fiberation is called Lefschetz pencils <cit.>.Now let's fixed a generic point at Coulomb branch, and one has an element in mapping class group by following a path around each singularity, see figure. <ref>. The mapping class group element along I_1 or Ĩ_1 singularities is rather simple: it is given by the so-called Dehn twist alongthe vanishing cycle <cit.>. Once we have thetopological picture of global SW geometry shown in figure. <ref>, the classification is achieved by solving followingproblems in genus two mapping class group M_2, which are generated by five Dehn twists, see figure. <ref>: * The singular fiber at ∞ determines the UV theory, and so there is an associated mapping class group element. Since it is now split into I_1 and Ĩ_1 singular fibers, which means that the corresponding mapping class group is given bythe product of positive Dehn twists.So the group theory question isto find out the positive factorization of the mapping class group element associated with the UV theory,and this has been solved for4d SCFTs, see table. [<ref>,<ref>,<ref>,<ref>]. 
The candidate factorization for 4d asymptotical free theories, 5d and 6d KK theoriesare also found.* For the global SW geometry, since the base space is now compact, the product of Dehn twists should satisfy the topological condition τ_i_1τ_i_2…τ_i_s=1.This amounts to find the factorization of the identity element in terms of Dehn twist. Another topological constraint is the assumption of the total space being a rational surface <cit.>, which put the constraint on the number of I_1 and Ĩ_1 singularities [The choice of (I_1, Ĩ_1)=(n,m) singularities aren+2m=20, and so the choices are (n,m)=(20,0), (18, 1), (16,2), etc.] on the Coulomb branch. Such factorization of identity element was found in this paper. * The final step of finding global SW geometry is then to rearrange the factorization of identity so that one can get a desirable UV configuration at infinity (see step one),and this problem has been solved for most of 4d UV complete theories in this paper, see table. <ref> and <ref>.The factorization of mapping class group element associated with 4d SCFT is extremely useful physically: One can use the braid move and Hurwitz moveto get configuration involving more complicated singularities, which would then determine all the IR configuration of the theory, so that one can solvethis theory completely.This paper is organized as follows: section two revisited the local singularities by using the classification of pseudo-periodic map; section three discussesthe factorization of mapping class group elements in terms of Dehn twists, and the factorization for local singularity and global SW geometry for 4d SCFTs are given; section four givesseveral representation of mapping class group of genus two curve which would be useful for the further study, such as the UV singular fiber of 5d and 6d KK theory; finally a conclusion is given in section five. § MAPPING CLASS GROUP AND GENUS TWO DEGENERATION§.§ Genus two degeneration revisitedIn the context of 4-dimensional 𝒩=2 supersymmetric field theories, the Coulomb branch is a critical part of the moduli space <cit.>. This branch is associated with the vacuum expectation values (VEVs) of scalar fields known as Coulomb branch operators. The Seiberg-Witten solution for Coulomb branch can be described as a bundle of abelian varieties over the moduli space. In the majority of cases along the Coulomb branch (they are called generic vacua), the low-energy dynamics of the theory can be effectively modeled as a U(1)^r abelian gauge theory, here r represents the rank of the theory, and it corresponds to the number of massless photon fields in the theory. The complex structure of the abelian variety is identified with coupling constants of photons.In contrast to the generic vacua, at certain special points along the Coulomb branch, the abelian variety undergoes a degeneration (i,e. it becomes singular). This results in additional degrees of freedom in the low-energy theory. Depending on the specifics of this degeneration, the low-energy behavior can become much more intricate and may exhibit features like interacting superconformal field theory (SCFT), infrared (IR) free abelian theories, or non-abelian gauge theories. The exact nature of the low-energy theory depends on the details of the degeneration of the abelian variety at these special points. 
When the family of abelian varieties associated with a Coulomb branch solution can be described using the Jacobian of Riemann surfaces, it implies that for each point on the Coulomb branch moduli space, one can associate a Riemann surfaces. This correspondence simplifies the study of the Coulomb branch, as it allows us to focus on the geometric properties of these curves, which is a more well-developed area in algebraic geometry.In the special scenario where the rank of the theory is two, it is possible to represent all abelian varieties in terms of the Jacobian of genus two curves. This reduction is particularly significant because it streamlines the investigation of the Coulomb branch, effectively reducing it to the study of the properties of genus two curves and the associated Jacobians.It's useful to note that every genus two curve is indeed hyperelliptic [The equation for a genus two hyperelliptic curve is y^2=x^5+… or y^2=x^6+….].The first key step in analyzing rank two (r=2) Coulomb branch solutionsinvolves the classification of local degenerations of genus two curves. This classification is fundamental for comprehending how the theory behaves at specific points along the Coulomb branch. It's noteworthy that this classification has already been completed, with detailed information available in <cit.>. To accomplish this classification, they make use of Hodge theory, a powerful mathematical framework within algebraic geometry.Distinguishing the various degenerations in the rank two case requires considering three essential components: Monodromy Group: This group captures the transformations on homology groups that occur as one traverses loops around singular points in the moduli space; Type of Modulus Point: Specifying the type of modulus point is critical, as it signifies the nature of the singularity where the curve degenerates;Additional Discrete Parameter m: The inclusion of the discrete parameter m serves to fine-tune the characterization of the degenerations, providing further details that refine the classification.In the context of classifying degenerations of Riemann surfaces, the topological approach put forward by Matsumoto-Montesinios (MM) is highly valuable. Their theory, detailed in <cit.>, is particularly useful for our purposes. The key insight in MM's theory is that the conjugacy class of the mapping class group action serves as a complete determinant of the degeneration type, see figure. <ref> for the description of mapping class group. This approach, aside from being systematic, is also combinatorial in nature, which greatly facilitates a comprehensive physical investigation <cit.>. We will apply MM's theory specifically to the degeneration of genus two curves, aiming to recover the classification results presented in <cit.>. In the study of Riemann surface degenerations, the conjugacy class of the mapping class group has a distinctive character known as a pseudo-periodic map of negative type <cit.>. This map, denoted as f, can be classified using specific combinatorial data:* Admissible System of Cut CurvesC=∪ C_i: The classification begins with an admissible system of cut curves, denoted as C=∪ C_i. An admissible system is one where the irreducible component B=Σ_g/ C satisfies certain conditions. Each component B_i should have a non-negative Euler number χ_i, which is calculated as χ_i=2-2g_i+n_i≥ 0. Here, n_i represents the number of boundary curves for an irreducible component C_i , and g_i is the genus of that component. 
* Finite Group Action on Oriented Graph G_ C: The map f induces a finite group action on an oriented graph G_ C defined in last step. * Screw Numbers for Annuli C_i: For each annulus C_i in the system, the screw number is given. It's worth noting that these screw numbers must be negative in accordance with the classification. * Periodic Map Action: The action of f on each irreducible component of B is a periodic map. This periodicity is, in turn, determined by the valency data, denoted as (n, g^', σ_1λ_1+σ_2/λ_2+…+σ_s/λ_s). Here: n is the order of the map (i.e., f^n=id). g^' represents the genus of the base, defined by the covering map f:Σ→Σ^',σ_i, λ_i are integral values that further specify the characteristics of the periodic map action. In summary, the classification of pseudo-periodic maps of negative type for Riemann surface degenerations relies on a systematic consideration of admissible cut curves, group actions on oriented graphs, screw numbers for annuli, and the valency data determining periodic map actions. This detailed combinatorial data provides a comprehensive understanding of the degeneration types. The first two step gives rise to a weighted graph: each node has label representing genus and the number of internal cut curves; each edge represents a separating cut curve and a multiplicity from the finite group action. The third step gives an integer K≥ -1 along each weighted curve in weighted graph. Finally, one has a periodic map for each component in the cut system. See figure. <ref> for an example.The three set of data of genus two degeneration given in <cit.> are recovered from MM's theory as follows: a) The monodromy group action is induced from the mapping group action <cit.>; b) The modulus point is given by the cut system (before the finite group action); c) The integral value m is given by thescrew number and the periodic map data on the boundaries of the annulus.Let's now revisit the classification of genus two degeneration by using MM's theory.In fact, there are some missing items in <cit.>, but will be found here.There are a total of five classes, and thecombinatorial data is described as follows.Remark: The weighted graph 4a) with the periodic data [(g=2, k=0, r=2): ord(f)=2, (C_1, C_2), 1/2+1/2+(1)+(1)] seems missing in <cit.>. Weighted graph: The weighted graph for genus two degeneration is described in figure. <ref>, and we identify them with the names in <cit.>.They are found by first imposing the non-negative Euler number constraint (This is just the constraint in Deligne-Mumford theory), and then find the finite group actions.Periodic Maps: The list of periodic maps for curves with genus one and two is given in table. <ref>. Additionally, we'll explore periodic maps on curves that include boundaries. The fundamental component in this context is labeled as (g, k, r), where k represents the count of boundary edges, and r signifies the number of internal cutting curves. We employ bold letters to indicate data associated with the boundary curves, while data enclosed within brackets pertains to the internal curves. See table. <ref> for the full data. Dual graph: One can attach a dual graph (star-shaped) for the periodic maps, andthese sub-graphs are glued together to form a full dual graph for each degeneration. The dual graph is closely related to 3d mirror of the IR theory, and is used in an essential wayin <cit.> to determine the IR theory. Here we give a short review of the results in <cit.>. First, one can define a dual graph fora period map as follows. 
The data for a periodic map is (n, g^', σ_1/λ_1+…+σ_l/λ_l), where n is the order of the map, g^' is the genus of the base of the covering map f:Σ→Σ^', and σ_i/λ_i gives the data of the i-th ramification point of the covering map. These data are constrained by the Hurwitz formula 2g-2=n[ (2g^'-2) +∑_i(1-1/λ_i)]. The dual graph is constructed from the combinatorial data (n, g^', σ_1/λ_1+…+σ_l/λ_l) as follows: * First, given a valency datum σ/λ (with mλ =n), one attaches a linear chain of spheres with the following nonzero multiplicities a_0>a_1>a_2>…>a_s=1: a_0=λ, a_1=σ, (a_{i+1}+a_{i-1})/a_i=λ_i ∈ Z. Given a_{i-1} and a_i, the above formula (together with the condition a_{i+1}<a_i) uniquely determines the integer a_{i+1}. Since n=λ m, the final chain of spheres for the valency datum σ/λ is ma_0-ma_1-ma_2-…-ma_{s-1}-m. So one gets a star-shaped dual graph, with the central node having genus g^' and all the other nodes having genus zero. * Then one can glue the dual graphs together as follows. Let's first assume C_i to be non-amphidrome. Then m^{(1)}=m^{(2)}=m, and one obtains two sequences of integers a_0>a_1>… >a_u=1 and b_0>b_1>… > b_v=1. Graphically, one gets two quiver tails from the above sequences of integers. Define an integer K=-s(C_i)-δ^{(1)}/λ^{(1)}-δ^{(2)}/λ^{(2)}, where the δ^{(j)} are integers such that σ^{(j)}δ^{(j)}=1 (mod λ^{(j)}), 0≤δ^{(j)}≤λ^{(j)}-1. If λ^{(j)}=1, one sets δ^{(j)}=0. K satisfies the condition K≥ -1, since s(C_i)<0 and 0≤δ^{(j)}/λ^{(j)} < 1. The gluing of the two quiver tails is defined as follows: * If K≥ 1, the glued tail is (ma_0, ma_1,…, ma_u, m,…,m, mb_v,…, mb_1, mb_0), with K-1 extra spheres of multiplicity m inserted between the two tails. * If K= 0, the glued tail is (ma_0, ma_1,…, ma_{u-1}, m, mb_{v-1},…, mb_1, mb_0). * Finally, if K=-1, one can find u_0<u and v_0<v so that a_{u_0}=b_{v_0}, and (a_{u_0-1}+b_{v_0-1})/a_{u_0} is an integer greater than one. The glued tail then looks like (ma_0, ma_1,…, ma_{u_0}, mb_{v_0-1},…, mb_1, mb_0). Let's now assume C_i to be amphidrome; then C_i^', C_i^” have valency data (2m, λ, σ). Similarly, one has a sequence of integers a_0>a_1>…>a_u=1, from which one gets a quiver tail. Then K=-s(C_i)/2-δ/λ is a non-negative integer, where δσ=1 (mod λ). The glued quiver tail now has u+K+2 spheres, and it is a Dynkin diagram of D type: (2m a_0, 2m a_1,…, 2m a_u, 2m,…, 2m, m, m), with K spheres of multiplicity 2m forming the tree part and the two final spheres of multiplicity m forming the terminal part. One can get a 3d quiver gauge theory from the dual graph: the multiplicities give the gauge groups U(n_i) and the edges give the bi-fundamental matter. The quiver gauge theory determines whether the degeneration is allowed or not: it is allowed if the Higgs branch dimension is equal to two (after contracting -1 curves and peeling off the quiver tails), see details in <cit.>, and this is not always possible. If the degeneration is allowed, the (modified) quiver gives the 3d mirror for the IR theory, which can then be used to determine the IR theory. Example: Let's now give an example showing how to find the classification for a given weighted graph. The weighted graph is taken to be (g=2, r=2) (4(a) in figure. <ref>), which means that there are two non-separating cutting curves on the genus two Riemann surface. There are the following situations that one needs to consider (see table. <ref>): * The action of f on the two cutting curves C_1, C_2 is non-amphidrome, with f(C_1)=C_1, f(C_2)=C_2. After the cut, one has a sphere with four marked points representing the four boundaries, see figure. <ref>. The periodic map fixes all the boundary curves, i.e. f(C_1^')=C_1^', etc., and so the order of the periodic map is one. * The action of f is f(C_1)=C_2, f(C_2)=C_1. The periodic map on the four-punctured sphere now acts as f(C_1^')=C_2^', f(C_1^”)=C_2^”.
So the periodic maphas order two,and (C_1^', C_2^'), (C_1^”, C_2^”) are inZ_2 orbits.The periodic map is then f:Σ→Σ^' with both Σ and Σ^' genus zero curve. To satisfy the Hurwitz formula, there must be two fixed points on Σ whose valency data is 1/2.Therefore the valency data on the punctured sphere is (1)+(1)+1/2+1/2, with the 1 in the bracket indicating that the boundaries of two cutting curves are in the Z_2 orbit. * The action of f on C_1 is amphidrome: Amp(C_1), f(C_2)=C_2. The periodic map on the fourth punctured sphere now acts as f(C_1^')=C_1^”, f(C_2^')=C_2^', f(C_2^”)=C_2^”.This means that C_1, C_1^' are in a Z_2 orbit, while C_2^”, C_2^' are fixed points.So the periodic map on the fourth punctured sphere has order two, and the valency data is (1)+(1/2)+(1/2).* The action of f on C_1 and C_2 is amphidrome: Amp(C_1), Amp(C_2). The periodic map on the fourth punctured sphere now acts as f(C_1^')=C_1^”, f(C_2^')=C_2^”.This means that C_1^', C_1^” are in a Z_2 orbit, while C_2^', C_2^” are another Z_2 orbit.So the periodic map on the fourth punctured sphere has order two, and the valency data is (1)+(1)+1/2+1/2.* The action of f on two cutting curves C_1, C_2 is non-amphidrome,i.e. Am(C_1, C_2). Theboundaries C_1^', C_1^”, C_2^', C_2^” are in a Z_4 orbit. So the periodic map on the fourth punctured sphere has order four,and the valency data on the punctured sphere is (1)+(1)+3/4+1/4.§.§ IR theory Let's now use the result of last subsection to classify the possible degeneration for rank two theories. The subtly is that one can not get sensible physical interpretation for all the degenerations. To determine the IR theory, we use the link between the dual graph and 3d mirror, and the basic assumption is that oneshould get a consistent 3d mirror for a physical sensible degeneration, otherwise we will not consider it.§.§.§ 4d 𝒩=2 SCFTs The basic assumption to get a SCFT is following: a): one get consistent 3d mirror from the dual graph ; b): there is no variable link number K: the link number is truncated dual to the consistency of 3d mirror; This excludes the weighted graph with separating curves, since if the gluing is possible, the link number is never truncated; for the internal cutting curve, one has to consider the amphidrome cutting. One find the following possibilities by looking at the weighted graph in figure.<ref>:* The weighted graph 1a) in figure. <ref> has no cut curve and so there is no variable link number K. The only constraint would be that the modification of the mirror quiver is possible. By looking at the list of genus two periodic map, we find thatthe degeneration (9/10+3/5+1/2), (3/10+1/5+1/2), (5/6+5/6+1/3), (2/3+1/3+1/2+1/2) are not good.The bad valency data is in put in bold letter. Others give SCFT, see <cit.>.* The weighted graph 2b) in figure. <ref> would give SCFT.The basic data for the periodic map is (g=1, r=0, k=1).The corresponding dual graph are listed in figure. <ref>. Here we need to first do the contraction of -1 curve <cit.>, and then use the modification procedure to get the 3d mirror.* The weighted graph is 3a) in figure. <ref>,and the cut curve is taken to be amphidrome. The basic building block of the periodic map is (g=2, r=1),See figure. <ref> for the dual graphs. §.§.§ Non-abelian gauge theoryLet's now consider the degeneration whose low energy theory is IR free non-abelian gauge theory. 
The rank two gauge group could be A_2, A_1× A_1, B_2(=C_2), or G_2. One can read off non-abelian gauge groups from the weighted graph as follows <cit.>: 1) if there is a weight-n edge connecting two components of the weighted graph, the gauge group is SU(n); 2) if the internal cut is amphidrome with weight m (m cutting curves are mapped to each other by the mapping class group action), the gauge group is Sp(2m). By looking at all the weighted graphs in figure. <ref>, one finds the following possibilities: 1) weighted graph 3a), with the cut taken to be non-amphidrome; the gauge group is SU(2) and is coupled with a rank one SCFT. 2) weighted graph 4a); the gauge group is Sp(4) (for weight 2) or SU(2)× SU(2) (for weight (1,1)). 3) weighted graph 5e); the gauge group is SU(3), see figure. <ref>. Example: For the weighted graph 4a in figure. <ref>, one gets a non-abelian gauge theory in the following two situations. Assuming the cut curves are C_1, C_2: a) the action of the mapping class group is Am(C_1), Am(C_2), and the gauge group is SU(2)× SU(2), with one SU(2) factor associated with each cut; b) the action is Am(C_1, C_2), and the gauge group is Sp(4). §.§.§ Other cases: abelian gauge theory The IR theory in the remaining cases involves decoupled systems: if an edge in the weighted graph has multiplicity one, the corresponding gauge group is SU(1), so the two adjacent systems are decoupled. An abelian gauge group appears when the internal cut is non-amphidrome: for an internal cut with multiplicity n, the gauge group is U(n). The full list is: * Weighted graph 2a): two decoupled rank one SCFTs, plus possible uncharged hypermultiplets; * Weighted graph 3a) with the cut taken to be non-amphidrome: the IR theory is a U(1) gauge group coupled with n free hypermultiplets plus a rank one SCFT. * Weighted graph 3b): if the cut is amphidrome, the IR theory is an SU(2) gauge group coupled with n fundamental hypermultiplets plus a rank one SCFT; if the cut is non-amphidrome, the IR theory is a U(1) gauge group coupled with n free hypermultiplets plus a rank one SCFT. * The IR theories for weighted graphs 4a), 4b), 4c) and 5c) are listed in figure. <ref>. Example: Let's consider the last item in figure. <ref>. The periodic map on each genus zero component should be (1/2)+(1)+1/2, and the dual graph is shown in figure. <ref>. To find the IR theory, we try to find the 3d mirror of the dual graph, which can be achieved using S-duality of type IIB string theory. § GLOBAL SW GEOMETRY Let's now study the global SW geometry for rank one and rank two theories from the perspective of the mapping class group. A one dimensional slice of the Coulomb branch is taken, and there are singular fibers where the low energy physics differs from that at a generic point. We add a point at ∞ to get a compact base, and this introduces a singular fiber at ∞ which determines the UV theory. We first study the generic one dimensional slice, so that the bulk singularities are only of I_1 or Ĩ_1 type, see figure. <ref> for the illustration.
The physical understanding of these singularities is as follows: a) the vanishing cycle associated with an I_1 singularity is non-separating, so its homology class is non-trivial; the low energy theory is a massless hypermultiplet charged under a U(1) gauge group (in a proper duality frame), plus other free vector multiplets; b) the vanishing cycle associated with an Ĩ_1 singularity is separating, and the extra massless hypermultiplet is not charged under the low energy gauge groups. A SW fiberation with just I_1 or Ĩ_1 singular fibers is called a Lefschetz fiberation. Notice that in our case the singular fiber at ∞ could be special; however, we assume that it can also be deformed into I_1 and Ĩ_1 singularities, see the discussion below. The Coulomb branch structure is shown in figure. <ref>. There are the following topological constraints that the global SW geometry has to satisfy: * One can define two topological invariants for a singular fiber: d_x and δ_x. δ_x can be easily computed from the dual graph and d_x can be computed from holomorphic data, see <cit.> for those numbers. The difference of d_x and δ_x counts the number of Ĩ_1 singularities in the generic deformation. The first topological constraint is ∑_x d_x = 20, which reflects the assumption that the total space of the SW fiberation is a rational surface. The topological data of an I_1 singularity is d_x=δ_x=1, while the data of an Ĩ_1 singularity is d_x=2, δ_x=1. The above constraint implies that the possible numbers of I_1 and Ĩ_1 singularities are (20,0), (18,1), (16,2), etc. * The second topological constraint is due to the compactness of the base of the fiberation. Choosing a generic point on the moduli space, there is one mapping class group element M_i for each singular fiber (obtained by following a path around it), and the ordered product of them should be trivial: M_1… M_s=1. Therefore, the global SW geometry has a simple topological meaning: a factorization of the identity element in terms of mapping class group elements associated with I_1 and Ĩ_1 singularities. We thus need to solve two problems in the mapping class group: a) find the factorization of the UV MCG element in terms of the MCG elements of I_1 and Ĩ_1 singularities; b) find the factorization of the identity element in terms of a suitable number of MCG elements of I_1 and Ĩ_1 singularities. §.§ Dehn twist As we discussed at the beginning of this section, the global SW geometry is encoded in factorizations of mapping class group (MCG) elements into I_1 or Ĩ_1 singularities. There is a vanishing cycle for each I_1 or Ĩ_1 singularity, and the associated MCG element is the Dehn twist along it, see figure. <ref>. The following are the basic relations for Dehn twists associated with two cycles a, b: * If a, b are disjoint, then T_aT_b=T_bT_a. * If b=h(a) with h an element of the mapping class group, then T_b=hT_ah^-1. * If (a,b)=1, then T_a T_b T_a=T_b T_a T_b. * If (a,b)>1, then there are no relations between T_a and T_b. The first and third relations are called braid relations, and they will play crucial roles later. §.§ Rank one theory Let's now apply our classification strategy to rank one theories, where we'd like to recover the known classification. The genus one MCG is just the SL(2,Z) group, and it is generated by the Dehn twists around the two cycles a_1, b_1, see figure. <ref>. The generators are denoted as τ_1, τ_2, and there is just one non-trivial relation: (τ_1τ_2)^6=1.
We also have thebraid relation τ_1τ_2τ_1=τ_2τ_1τ_2.The intersection form is (a_1, b_1)=1, and so the action of the Dehn twist on homology classes [a_1], [b_1] isT_1(a_1)=a_1+(a_1,a_1)a_1=a_1, T_1(b_1)=b_1+(b_1,a_1)a_1=-a_1+b_1 T_2(a_1)=a_1+(a_1,b_1)b_1=a_1+b_1, T_2(b_1)=b_1+(b_1,b_1)b_1=b_1 So the representation matrix on the basis of homology groups([a_1], [b_1]) is given asτ_1=([1 -1;01 ]), τ_2=([ 1 0; 1 1 ]) which gives the standard representation for the generators of SL(2,Z) group.Factorization for MCG element of singular fibers: The classification of rank one IR theory is given by the degeneration of elliptic curve, and it is the same as the classification of the conjugacy class of M_1 whose homology representation M satisfyingTr(M)≤ 2.We will find the factorization of these conjugacy class by requiring: * It is given by a product of positive Dehn twist, namely it involves only the generators τ_1,τ_2, but not the inverse of them.* The number of elements in the factorization is the same as the Euler number, which is also equal to the number of I_1 singularities under the generic deformation of the singularity.The factorizations for genus one degeneration is shown in table. <ref>. Given the representation of MCG element in table. <ref>, one can then easily find the possible IR configurations: one simply rearrange theword by using the braid relation and the Hurwitz moveτ_1τ_2τ_1=τ_2τ_1τ_2, (τ_1 …τ_iτ_i+1…τ_r)=(τ_1…(τ_iτ_i+1τ_i^-1)τ_i…τ_r)Since τ_iτ_i+1τ_i^-1= τ_i(i+1) is also a positive Dehn twist around the curve i(i+1) (which is the resulting curve ofthe Dehn twist τ_iacting on the curve (i+1).). The above two moves would also give the positive factorization. One can find the IR configuration by looking at the possiblecollapsing of the I_1 singularities.Example 1: Let'slook atE_8 SCFT which is represented by the word (τ_1τ_2)^5, and it is easy to find various singular fiber combinations. Here we just give several simple examples. The simplest one is the configuration with five type II singularities:(τ_1 τ_2) (τ _1 τ_2) (τ_1 τ_2) (τ_1 τ_2) (τ_1 τ_2)We use bracket to indicate that the singularities inside it is collapsed.The next one involves a I_0^* singularity (τ_1τ_2 τ_1 τ_2 τ_1τ_2) τ_1τ_2 τ_1τ_2Finally one can find a configuration with a I_3^* singularity(τ_1τ_2 τ_1) τ_2 τ_1τ_2 τ_1 τ_2 τ_1τ_2 → (τ_2 τ_1) τ_2 τ_2 τ_1τ_2 τ_1 τ_2 τ_1τ_2 →τ_2(1) (τ_2^3 ( τ_1τ_2)^3)Here one use the braid relation in the first step, and use the Hurwitz move in the second step, andfinally one get a I_3^* singularity. The interested reader can work out allthe other configurationslisted in <cit.>. Global SW geometry:The total mapping class group around all the singularities on the compactified Coulomb branch should be trivial, which implies that the ordered product of Dehn twists should be trivial. We conjecture that the corresponding elliptic fibered surface should bea rational surface, which implies that the total number of I_1 singularities should be 12 <cit.>. So one should find a positive factorization of identity element with length 12, and the only choice is(τ_1τ_2)^6=(τ_1τ_2τ_1)^4=1.We can then find the global SW geometry by using braid move and Hurwitz move to rearrange the above letters, and get the configuration for the factorization of the UV fiber, see table. <ref>. 
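As a quick consistency check, the homology representation just described can be multiplied out explicitly. The following is a minimal sketch (in Python with numpy; the helper name `word` is ours and purely illustrative) which encodes the two generator matrices above, verifies the relations (τ_1τ_2)^6=(τ_1τ_2τ_1)^4=1 and the braid relation, and prints the homology monodromy of the E_8 word (τ_1τ_2)^5:

```python
import numpy as np

# Homology action of the two Dehn twist generators of the genus-one mapping class
# group in the basis ([a_1],[b_1]) with (a_1,b_1)=1, as in the matrices above.
t1 = np.array([[1, -1], [0, 1]])   # twist along a_1
t2 = np.array([[1, 0], [1, 1]])    # twist along b_1

def word(*letters):
    """Homology monodromy of an ordered product of Dehn twists (left to right)."""
    m = np.eye(2, dtype=int)
    for g in letters:
        m = m @ g
    return m

# the relations quoted above: (tau_1 tau_2)^6 = (tau_1 tau_2 tau_1)^4 = 1, and the braid relation
assert np.array_equal(np.linalg.matrix_power(t1 @ t2, 6), np.eye(2, dtype=int))
assert np.array_equal(np.linalg.matrix_power(word(t1, t2, t1), 4), np.eye(2, dtype=int))
assert np.array_equal(word(t1, t2, t1), word(t2, t1, t2))

# homology monodromy of the E_8 word (tau_1 tau_2)^5; any of the other words in the
# table can be evaluated the same way and compared with the Kodaira monodromies
print(np.linalg.matrix_power(t1 @ t2, 5))
```

The same helper can be used to check that two words related by braid moves and Hurwitz moves have the same homology monodromy, which is a cheap sanity test on any rearranged configuration.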
4d SCFT: The configuration for SCFT is quite simple, we have ((τ_1τ_2)^i, (τ_1τ_2)^6-i), ((τ_1τ_2 τ_1)^i, (τ_1τ_2τ)^4-i).So one can get a pair of SCFT by putting oneconfiguration at ∞, and others are bulk singularities which can be moved freely. 4d asymptotical free theory:The configuration for asymptotical free theory is(I_bulk, I_∞)=( τ_1 τ_22(1)τ_2^4-k,τ_2^k(τ_1 τ_2)^3). This is the SW geometry for SU(2) with 4-k fundamental hypermultiplets. The configuration is found by doing the braid move and Hurwitz move on the fundamental factorization of identity, so that one can form a I_k^* singularity:1=τ_1τ_2 (τ_1 τ_2 τ_1) τ_2 (τ_1τ_2)^3 = τ_1(τ_2 τ_2 τ_1) τ_2 τ_2 (τ_1τ_2)^3 =τ_1 τ_22(1)τ_2^4 (τ_1τ_2)^3.One do the braid move for the first step, and do the Hurwitz move for the second step. The notation τ_22(1) means the positive Dehn twist along the curve 22(1), which is thecurve derived by first doingDehn twist along curve 2 on curve 1 to get a curve 2(1), and then do the Dehn twist along curve 2 on curve 2(1). Using the action of Dehn twist on homology, one has 22([1])=2([1]+(1,2)[2])=[1]+(1,2)[2]+(1,2)[2]=[1]+2[2].Here one use the intersection number (1,2)=1. For the pure SU(2) theory (k=4) in <ref>, the bulk singularity is τ_1 τ_22(1); So there isone vanishing cycle with homology [1],and another vanishing cycle with homology [1]+2[2].The intersection pair of these two vanishing cycles are 2, which indeed givesthe BPS quiver of this theory. Notice that the singular fiber at ∞ is I_k^* singular fiber, which is represented by τ_2^k(τ_1 τ_2)^3, and is the singular fiber at ∞ for 4d asymptotical free theory. 5d KK theory: One simply use the braid move and Hurwitz move to get a I_k singularity on the right:τ_1(τ_2 τ_1 τ_2) τ_1 (τ _2 τ_1τ_2) τ_1 (τ_2 τ_1τ_2) →τ_1 τ_1τ_2 τ_1 τ_1 τ_1τ_2τ_1 τ_1 τ_1 τ_2 τ_1 →τ_1^2(2)τ_1^5(2)τ_1^8(2)τ_1^9.and one can get a I_9 singularity. Different type of 5d theory is found by puttingI_k singularity at ∞. 6d KK theory: This one is the simplest one as there is no singularity at ∞, so the bulk singularity is just (τ_1τ_2)^6.Non-deformable singularity: To get theory with non-deformable singularity, one simply use braid move and Hurwitz move on the word for a SCFT to get the non-deformable singularity type(such as I_n, I_k^*, II^*, III^*, IV^*) on the bulk. 𝒩=2^* theory: SW geometry of 𝒩=2^* theory is found by SU(2) with N_f=4 word, which is a I_0^* singularity. It can be written as I_4I_1^2 configuration,which can be found as follows:τ_1(τ_2τ_1τ_2)τ_1τ_2=(τ_1^2τ_2)τ_1^2τ_2=τ_1^2(2)(τ_1^4) τ_2.Here we used braid move and the Hurwitz move to get a I_4 singularity. Another realization is given by I_2I_2I_2 configuration, which can be found from I_0^* word as follows:τ_1(τ_2τ_1 τ_2)τ_1τ_2=τ_1^2 (τ_2τ_1τ_1)τ_2=(τ_1^2) (τ_2(1)^2) (τ_2^2).§.§ Rank two theory The mapping class groupM_2 of genus two curveis generated by Dehn twists associated with the five curves δ_i shown in figure. <ref>. The generators are labeled asτ_1,τ_2,τ_3, τ_4,τ_5and the relations areτ_j τ_j+1τ_j=τ_j+1τ_j τ_j+1τ_iτ_j=τ_j τ_i, if |i-j|>1I τ_j=τ_j II^2=1(τ_1 τ_2 τ_3 τ_4 τ_5)^6=1Here I=τ_1τ_2τ_3τ_4τ_5^2τ_4τ_3τ_2τ_1, and is the hyperelliptic involution.The first two relations are called braid relations.An element in M_2 can be represented by a positive product of generators; However, the representationis far from unique due to the braid relations andother relations in M_2. 
Furthermore, we also need to impose following two equivalence relations: * The Hurwitz equivalence:(τ_1 …τ_iτ_i+1…τ_r)=(τ_1…(τ_iτ_i+1τ_i^-1)τ_i…τ_r)=(τ_1…τ_i(i+1)τ_i …τ_r). * Global conjugacy:ϕ(τ_1…τ_r)ϕ^-1=(ϕτ_1 ϕ^-1…ϕτ_r ϕ^-1).This is due to the fact the degeneration is given by the conjugacy class of mapping class group.The goal for finding a special factorization for the degeneration is following: a): It should be given by a positive factorization, namely,the word consists of only Dehn twist, but not its inverse; b): The number of generators are determined by the local invariant d_x and δ_x:#τ= 2δ_x-d_x, #σ= d_x-δ_x. Here #τ is the number of Dehn twist along non-separating curve, and #σ is the Dehn twist along the separating curve σ. The task of finding above factorization is a difficult one, and we will solve it for the SCFT and many other IR theories in this paper. Let's now summarize some important relations regarding group M_2, which will be quite useful for our later studies. * Let ζ_a,b=∏_i=a^b τ_i, then τ_i ζ_a,b=ζ_a,bτ_i-1Here a<i≤ b. This equation can be proven using the braid relation:Proof: τ_i (τ_a τ_a+1…τ_b)= τ_a … (τ_i τ_i-1τ_i) τ_i+1…τ_b=τ_a …τ_i-1τ_i (τ_i-1τ_i+1)…τ_b =(τ_a…τ_b) τ_i-1* Let ζ=τ_1τ_2τ_3τ_4τ_5,η=τ_1τ_2τ_3τ_4, we have the relationτ_i ζ^j=ζ^jτ_i-j, (i≠ j) τ_i ζ^i=ζ^i+1η^-1, (η=ζ^-iτ_i^-1ζ^i+1)These two equations are derived using therelation <ref>. From the second relation, one find that the generators can be expressed in terms of ζ, η:τ_i= ζ^i+1η^-1ζ^-iSo the mapping class group M_2 is generated by ζ, η subject to relation ζ^6=η^10=1.* Let ϵ=τ_1^2τ_2τ_3τ_4, η=τ_1τ_2τ_3τ_4. Wehavethe relation ϵ^4=η^5, andη^5 is conjugate with I.Proof: Let's first do the computation for η^5. Using formula. <ref>, we haveη^5=(ζ^-1τ_1^-1ζ^2) (ζ^-2τ_2^-1ζ^3)… (ζ^-5τ_5^-1ζ^6)= =ζ^-1τ_1^-1τ_2^-1τ_3^-1τ_4^-1τ_5^-1==τ_5^-1τ_4^-1τ_3^-1τ_2^-1τ_1^-1τ_1^-1τ_2^-1τ_3^-1τ_4^-1τ_5^-1==τ_5τ_4τ_3τ_2τ_1τ_1τ_2τ_3τ_4τ_5=ĨThe last expression is named Ĩ which is conjugate to I, and the corresponding conjugate element is ϕ=τ_1τ_2τ_3τ_4τ_5. On the other hand, we have ϵ=τ_1η (definition), and τ_i+1η=ητ_i(see <ref>), we haveϵ^4=τ_1(ητ_1)ητ_1ητ_1η=τ_1 τ_2 η (ητ_1)ητ_1η=τ_1τ_2 (ητ_2) ηητ_1η=τ_1τ_2 τ_3 ηη (ητ_1)η=τ_1τ_2 τ_3 ηητ_2 ηη =…=τ_1τ_2τ_3τ_4 η^4=η^5.In the following, we'd like to list some useful conjugation transformation. Let ζ_a,b=∏_i=a^b τ_i, then we have * For any a,b such that 1≤ a<b≤ 5, we haveC(ζ_a,b)(τ_i)=τ_i, if i<a-1τ_i+1, if a≤ i<b, τ_i, if i>b+1 * For any a,b such that 1≤ a<b≤ 5, let η_a,b=∏_c=0^b-aτ_a,b-c, then C(η_a,b)(ζ_i)=τ_i, if i<a-1τ_a+b-i, if a≤ i ≤ b, τ_i, if i>b+1In particular, one can take a=1, b=5, and so its action on the index would be 5→ 1, 4→ 2, 3→3.* C(ζ_a,b^2)(τ_b)=τ_a.* There is following conjugacy condition ζ_1^t_1ζ_2^t_2…ζ_5^t_5∼ζ_s(1)^t_s(1)ζ_s(1)^t_s(1)…ζ_s(5)^s(t_5)Here s is a permutation.One can prove above conjugacy equation by using braid relations.§.§.§ Mapping class elements for 4d SCFTWe are going to list the mapping class elements for rank two 𝒩=2 SCFT realized as the periodic map of genus two curve. We'd like to find a factorization ofthe mapping class element so that the number of I_1 and Ĩ_1 singularities are given as (see next section for the derivation):# I_1= 2δ_x-d_x #Ĩ_1= d_x-δ_xHere the I_1 singularity is represented by the Dehn twist along any closed curve whose homology class is nontrivial, andĨ_1 singularity is given by the Dehn twist along the cycle with trivial homology class. 
This problem has been studied in <cit.> and the results are listed in table. [<ref>, <ref>, <ref>, <ref>]. One of thecrucial relation is the following representation of the Dehn twist along the separating curve:(τ_1τ_2)^6=σHere σ is the Dehn twist along the non-separating curve, see figure. <ref>.Let's verify some of the results intable. [<ref>, <ref>, <ref>, <ref>]: * The first one would be the factorization of η^4:η^4=(ζ^-1τ_1^-1ζ^2)(ζ^-2τ_2^-1ζ^3)… (ζ^-4τ_4^-1ζ^5)=ζ^-1τ_1^-1τ_2^-1τ_3^-1τ_4^-1ζ^-1Here we used equation η=ζ^-iτ_i^-1ζ^i+1, and ζ^5=ζ^-1.We then use the relation I^2=1 to express above expressionin terms of positive productof generators:1=I^2=τ_1τ_2τ_3τ_4τ_5 (τ_5τ_4τ_3τ_2τ_1 τ_1τ_2τ_3τ_4τ_5) τ_5τ_4τ_3τ_2τ_1 =(τ_1τ_2τ_3τ_4τ_5) τ_5 τ_5τ_4τ_3τ_2τ_1 τ_1τ_2τ_3τ_4τ_5 τ_4τ_3τ_2τ_1 =τ_5^2 τ_4τ_3τ_2τ_1( τ_1τ_2τ_3τ_4τ_5 τ_4τ_3τ_2τ_1 τ_1τ_2τ_3τ_4τ_5) =τ_5^2 τ_4τ_3τ_2τ_1 (ζτ_4τ_3τ_2τ_1 ζ)→ζ^-1τ_1^-1τ_2^-1τ_3^-1τ_4^-1ζ^-1=τ_5^2 τ_4τ_3τ_2τ_1Here in the first step we used the fact Ĩ=τ_5τ_4τ_3τ_2τ_1 τ_1τ_2τ_3τ_4τ_5 commutes withall the generators (in particular τ_5), and in the second step one used the cyclic equivalence by moving letters τ_1τ_2τ_3τ_4τ_5 to the end of the word.* Second, let's compute ϵ^3. Since ϵ^4=Ĩ, and soϵ^3=ϵ^-1Ĩ^-1Next, we'd like to use the relation 1=Ĩ^2=Ĩτ_5τ_4τ_3τ_2τ_1 τ_1 τ_2τ_3τ_4 τ_5=τ_5^2τ_4τ_3τ_2Ĩτ_1^2τ_2τ_3τ_4=τ_5^2τ_4τ_3τ_2 Ĩϵ→ϵ^-1Ĩ^-1=τ_5^2τ_4τ_3τ_2Here in the first step we used the fact that Ĩ commutes with any generators, and in the second step one used conjugation to move τ_5 from the end to the beginning (cyclic relation).* Thirdly, we'd like to compute the word ζ^3.ζ^3=12(345 123)45 12345=1213243(545)12345=12132(434)5412345 =121(323)435412345=12123243(5412)345=(12)^23243 12 5(434)5=(12)^23243 12 5(343)5 =(12)^23243 12 3(545)3=(12)^23241(3 2 3)4543=(12)^23(212) (43 4)2543=(12)^2 1(323)1 43 2543 =(12)^212321432543=(12)^3321432543=(12)^6 12^3321432543We then need to compute following word:12^3321432543=2̅1̅2̅1̅2̅1̅321432543=2̅1̅2̅1̅2̅3(1̅21)432543 =2̅1̅2̅1̅(2̅32)14(2̅32)543=2̅1̅2̅3(1̅21)(3̅43)25(3̅43)=2̅1̅2̅3(212̅)(434̅)25(434̅)=2̅1̅(2̅32)14(2̅32)(4̅54)34̅=2̅1̅(323̅)14(323̅)(545̅)34̅=2̅3(1̅21)3̅4325(3̅43)5̅4̅=(2̅32)12̅3̅432543(4̅5̅4̅)=321(3̅2̅3̅)432(545̅)34̅5̅=3212̅3̅(2̅4324̅)5434̅5̅=1_32· (2̅4324̅) · 3_54=1_32· (432 3̅4̅) ·1_45=1_32· 2_43· 3_54In the process of the computation, weused the following braid relation:τ̅_i+1τ_i τ_i+1=τ_iτ_i+1τ̅_iwhich is easily derived from the standard braid relation. So finally one has the following important resultζ^3=σ· 1_32· 2_43· 3_54* We'd like to compute η^7:η^7=Ĩη^2=τ_5τ_4τ_3τ_2τ_1 τ_1τ_2τ_3τ_4τ_5 τ_1τ_2τ_3τ_4 τ_1τ_2τ_3τ_4 = τ_5τ_4τ_3τ_2τ_1τ_1τ_2τ_3τ_4τ_5 τ_1τ_2τ_3τ_4 (τ_5 τ̅_5) τ_1τ_2τ_3τ_4 (τ_5τ̅_5)= τ_5τ_4τ_3τ_2τ_1(τ_1τ_2τ_3τ_4τ_5)^2 τ_1τ_2τ_3(τ̅_5τ_4 τ_5)τ̅_5 = τ_5τ_4τ_3τ_2τ_1(τ_1τ_2τ_3τ_4τ_5)^2 τ_1τ_2τ_3 τ_4 τ_5τ̅_4τ̅_5= τ_5τ_4[τ_3τ_2τ_1(τ_1τ_2τ_3τ_4τ_5)^3 ]τ̅_4τ̅_5So η^7 is conjugate to the element τ_3τ_2τ_1(τ_1τ_2τ_3τ_4τ_5)^3= τ_3τ_2τ_1 ζ^3, and so it has a factorization with 6 I_1 singularities and one Ĩ_1 singularity byusing the factorization of ζ^3. This is also consistent with the local invariant of degeneration (3/10+1/5+1/2): (d_x=8, δ_x=7). * Let's now verify ϕ_4^3=I. First, we have trivial relationτ_1^-1τ_2^-1τ_3^-1τ_4^-1τ_5^-1=ζ I →τ_4^-1τ_5^-1=τ_3τ_2τ_1 ζ Iand so ϕ_4=(τ_1τ_2)(τ_4^-1τ^-1_5)=(ζτ_5^-1τ_4^-1τ_3^-1)(τ_3τ_2τ_1 ζ I)=ζτ_5^-1τ_4^-1τ_2τ_1 ζ I =(ζτ_2) τ_5^-1τ_1 (τ_4^-1ζ) I=τ_3 (ζτ_5^-1) (τ_1 ζ) τ_3^-1 I=τ_3ηζ^2 η^-1τ_3^-1 I =(τ_3 η) ζ^2(τ_3η)^-1 Ihere we used the relation τ_1ζ=ζ^2η^-1. 
So ϕ_4^3=I (notice that I is the central element of the mapping class group), by using ζ^6=1, I^2=1.On the other hand, using the above formula, we find thatϕ_4=(τ_3 η) ζ^2(τ_3η)^-1 I=(τ_3 η) ζ^3 ζ^-1 I (τ_3 η) ^-1=(τ_3 η) ζ^3 (τ_5τ_4 τ_3 τ_2 τ_1) (τ_3 η) ^-1Now use the fact that ζ^3 can be factorized into a Ĩ_1 singularity and three I_1 singularity, one see from above formula that ϕ_4 can be factorized into a Ĩ_1 singularity and eight I_1 singularity. This agrees with the result from the local invariant (d_x=10, δ_x=9). Singular configurations for SCFT: Once we find out the desired factorization for mapping class group elements of SCFT, one can find various singular configurations of them by doing braid moves and Hurwitz moves. Here let's just give several simple examples.Example 1: Consider the theory whose word is ζ=τ_1τ_2τ_3τ_4τ_5, one can have following singular configuration: a): (τ_1τ_2τ_3τ_4) τ_5, namelythere is a AD theory represented by η=(τ_1τ_2τ_3τ_4), and a I_1 singularity; b):(τ_1τ_2)τ_3τ_4 τ_5, namely there is a rank one AD theory represented by τ_1τ_2,and three I_1 singularities.Example 2: Consider the theory whose word is (τ_1τ_2τ_3τ_4τ_5)^2, one can have following singular configuration: a): (τ_1τ_2τ_3τ_4 τ_5)(τ_1 τ_2τ_3 τ_4 τ_5), namelythere are tworank two AD theories represented by the word ζ; b): (τ_1τ_2τ_3τ_4) τ_5 (τ_1 τ_2τ_3 τ_4) τ_5, namely there are two AD theory represented by the word η, andtwo extra I_1 singularities.It is possible to use the braid move and Hurwitz move to get undeformable singularities, see following examples for I_n type. Here we use braid movesto give the configuration with four identical letter, and then one can use Hurwitz moves to move those letters together (moving the letters from left to right). The I_4 seriescorresponding to scaling dimension (5,3) and (4,2) were discussed in <cit.>.(8,6): τ_1^2τ_2τ_3τ_4(τ_5τ_4τ_3τ_2τ_1^2τ_2τ_3τ_4τ_5) (10,4): τ_1τ_2τ_3τ_4τ_1τ_2τ_3(τ_4 τ_1τ_2τ_3τ_4)=τ_1τ_2 τ_3τ_4τ_1τ_2 τ_3τ_1τ_2 τ_3τ_4 τ_3 (5,3): (τ_1^2τ_2τ_3τ_4τ_5 )(τ_1^2τ_2τ_3τ_4τ_5)(4,2): (τ_1^2τ_2τ_3τ_4) (τ_1^2τ_2τ_3τ_4)(2,2): τ_1τ_2τ_3τ_4τ_5 τ_5 τ_4τ_3τ_2τ_1 ∼τ_2 τ_1^2 τ_2τ_3τ_4τ_5^2τ_4τ_3=τ_2(1)^2 τ_2^2τ_3τ_4 τ_5^2 τ_4 τ_3 =τ_2(1)^2 τ_2^2τ_3 τ_4(5)^2 τ_4^2 τ_3=τ_2(1)^2 τ_2^2 τ_34(5)^2 τ_3(4)^2 τ_3^2There are also underformable singularities of Z_2 type <cit.>, which is given by mapping class group element I.By looking at table. [<ref>, <ref>, <ref>, <ref>], i.e. theory with scaling dimension (10,4), (8,6) and (5,4). §.§.§ Global SW geometryThe global SW geometry is given by the positive factorization of the identity element in mapping class group.Since I^2=1, the first choice is (τ_1τ_2τ_3τ_4 τ_5^2 τ_4 τ_3τ_2τ_1)^2=1.This is the (20,0) type as there are a total of 20 I_1 singularities. The second choice would be η^10=ζ^6=1, and since one need to have the topological constraint on the number of I_1 and Ĩ_̃1̃ singularities, we use the equivalent factorizationη^7 η^3=[τ_5τ_4[τ_3τ_2τ_1(τ_1τ_2τ_3τ_4τ_5)^3 ]τ̅_4τ̅_5] η^3= [τ_5τ_4[τ_3τ_2τ_1σ· 1_32· 2_43· 3_54] τ̅_4τ̅_5] η^3=1.This is the (18,1) type. Finally, we have the relation ϕ_4^6=1, and so ϕ_4^5=ϕ_4^-1=τ_5τ_4τ_2^-1τ_1^-1,and one can find an element in mapping class group so that its action would change the index as 5→ 1, 4→2 (see <ref>, and take a+b=6), so ϕ_4^-1 is conjugate with ϕ_4, and its factorization involves a Dehn twist along σ and eight Dehn twists along non-separating curves. 
So the factorization is justϕ_4 ϕ_4^-1=1.There are now two σ Dehn twists in above factorization, and so it gives the (16,2) type.To find the global configuration for the SCFT, one need to rearrange the aboveconfiguration to get a sensible singular fiberat ∞. Using the data in table. [<ref>,<ref>,<ref>,<ref>], one can easily find the results, see table. <ref>. Let's show some moves to derive the equivalent factorization of identity which is used to derive the result in table. <ref>.* First, we have the following equivalent factorization:1=I^2=τ_1τ_2τ_3τ_4 τ_5^2 τ_4 τ_3τ_2τ_1 I=τ_1τ_2τ_3τ_4 τ_5 I τ_5 τ_4τ_3 τ_2 τ_1=(τ_1τ_2τ_3τ_4τ_5)^2(τ_5τ_4τ_3τ_2τ_1)^2.The fact I commuting with all the generators is used.* Secondly, we have the following factorization (see table. <ref>):1=ϵ^8=ϵ^2 (ϵ^3)^2=(τ_1^2τ_2τ_3τ_4)^2(τ_5^2τ_4τ_3τ_2)^2. Other 4d 𝒩=2 SCFTs: Let's now give the global SW geometry of all rank two 𝒩=2 SCFT discussed in section 2. We have found the configuration for theories engineered using periodic maps. Herewe'd list the global SW geometry for other choices, see table. <ref>. The important difference is thatthe fiber at ∞ is no longer given bythe periodic map. We first use the following factorization of identity ζ^6=1,here ζ=τ_1τ_2τ_3τ_4τ_5. We then use the factorization ζ^3=σ· 1_32· 2_43· 3_54,and finally the fact σ=(τ_1τ_2)^6, and so the factorization of identity becomes ζ^3(ζ)^3=σ· 1_32· 2_43· 3_54·σ 1_322_43 3_54= σ· 1_32· 2_43· 3_54· (τ_1τ_2)^6 · 1_32· 2_43· 3_54We then split the middle factor (τ_1τ_2)^6 to the bulk and infinity. § REPRESENTATION OF MAPPING CLASS GROUP OF GENUS TWOTo have a complete understanding of the mapping class group elements for rank two theory, one need to haveseveral useful representation for M_2. In this section, we are going to discuss three important representation: a): the action on homology groups;b): Jones representation; c): the signature function. We then use these representations to discuss the candidate configuration for 4d asymptotical free theories,5d and 6d KK theories. §.§ Symplectic representationThe Dehn twist action on homology represented by oriented curves is given as:(T_b)([a⃗])=[a⃗]+(a⃗,b⃗)[b⃗].Here (a⃗,b⃗)) is the intersection number, see figure. <ref> for the illustration. For genus two, the basis for homology group is a_1,b_1, a_2, b_2 and the intersection form is (a_1,b_1)=1, (a_2,b_2)=1, δ_3=a_1+b_2, see figure. <ref>. we have the matrix representation for those generators τ_i:τ_1=([1 -100;0100;0010;0001 ]), τ_2=([ 1 0 0 0; 1 1 0 0; 0 0 1 0; 0 0 0 1 ]),τ_3=([1 -110;0100;0010;0 -111 ]), τ_4=([1000;0100;001 -1;0001 ]), τ_5=([ 1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 1 1 ]).The representation matrix for some mapping class groupsare then:I=([ -1000;0 -100;00 -10;000 -1 ]), ζ=([0 -100;100 -1;000 -1;0 -110 ]),ϕ_4=([0 -100;1100;0001;00 -11 ]), ϵ=([ -1 -1 -11;101 -1;001 -1;0 -110 ]), η=([0 -100;101 -1;001 -1;0 -110 ]). 
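Since each generator is a symplectic transvection x↦ x+(x,δ_i)δ_i, the whole homology representation can be rebuilt from the classes of the curves δ_i and the intersection form, which gives a convenient way to check the relations used repeatedly in this section. Below is a minimal sketch (Python with numpy; the helpers `transvection` and `word` are ours). It assumes the identification δ_1=a_1, δ_2=b_1, δ_4=a_2, δ_5=b_2 read off from figure. <ref> (with δ_3=a_1+b_2 as stated above), which is consistent with the matrices printed above:

```python
import numpy as np

# Intersection form in the basis (a_1, b_1, a_2, b_2), with (a_1,b_1)=(a_2,b_2)=1
J = np.array([[0, 1, 0, 0],
              [-1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, -1, 0]])

def transvection(delta):
    """Homology action x -> x + (x, delta) delta of the Dehn twist along delta."""
    return np.eye(4, dtype=int) + np.outer(delta, J @ delta)

# homology classes of the reference curves (delta_3 = a_1 + b_2 as stated in the text)
deltas = [np.array(v) for v in
          [(1, 0, 0, 0), (0, 1, 0, 0), (1, 0, 0, 1), (0, 0, 1, 0), (0, 0, 0, 1)]]
t = [transvection(d) for d in deltas]   # these reproduce the five matrices printed above

def word(indices):
    m = np.eye(4, dtype=int)
    for i in indices:
        m = m @ t[i - 1]
    return m

zeta = word([1, 2, 3, 4, 5])
assert np.array_equal(zeta, np.array([[0, -1, 0, 0], [1, 0, 0, -1],
                                      [0, 0, 0, -1], [0, -1, 1, 0]]))       # the matrix above
assert all(np.array_equal(g.T @ J @ g, J) for g in t)                        # symplectic
assert np.array_equal(np.linalg.matrix_power(zeta, 6), np.eye(4, dtype=int)) # zeta^6 = 1
assert np.array_equal(word([1, 2, 3, 4, 5, 5, 4, 3, 2, 1]),
                      -np.eye(4, dtype=int))                                 # I acts as -1
assert np.array_equal(np.linalg.matrix_power(word([1, 2]), 6),
                      np.eye(4, dtype=int))   # sigma=(tau_1 tau_2)^6 is homologically invisible
assert np.array_equal(word([1, 2, 1]), word([2, 1, 2]))   # braid relation
assert np.array_equal(word([1, 3]), word([3, 1]))         # disjoint curves commute
```

The `word` helper can also be applied to any of the factorizations listed in the tables, for instance to compare characteristic polynomials with the general formula given next.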
Given an element α if mapping class group M_2,the characteristic polynomial of its symplectic representation Sp(α) is Det(y I_4-Sp(α))=y^4+i_1(α) y^3+i_2(α) y^2+i_1(α) y+1.§.§ Jones representationJones <cit.> gave another representation for M_2 which is given byM_5× 5[q], namely the representationis 5× 5 matrix with the entries in polynomial of q, and the explicit form for the generators are shown here:τ_1=([ -1000q;0 -1100;00q00;001 -10;0000q ]), τ_2=([ -1000q;0 -1100;00q00;001 -10;0000q ]), τ_3=([ -100q0;0 -1100;000q0;001 -10;0010 -1 ])τ_4=([q0000;1 -1000;00 -10q;100 -10;0000q ]), τ_5=([ -1q000;0q000;00q00;001 -10;0010 -1 ]).Unlike homology representation where different mapping class group would give the same monodromy group, Jones's presentation is more unique. §.§ Signature funcation One can define a Meyer's function<cit.> for a mapping class group as follows. First, given two element α, β∈ M_2, we have two 4× 4 matrices A=Sp(α) and B=Sp(β) by using the homology representation. Now define a real vector space by following equationV_A,B={(x,y) ∈ R^4× R^4|(I_4-A^-1)x+(I_4-B)y=0};The above equation defines a linear subspace inside R^4× R^4, since one can solve some components of x and y coordinates in terms of other components linearly. The dimension actually depends on the specific form of A and B. Then define the following quadratic form on V_A,B as followsψ_A,B((x_1, y_1),(x_2, y_2))=(x_1+y_1)^tJ(I_4-B)y_2;Here J=([0I_2; -I_20 ]). One then define a cocycle as followsτ_2(α, β):=sgn ψ_A,B;Here sgn counts the difference of the positive and negative eigenvalues of the quadratic form ψ_A,B. For a word, one can define a Meyer's function for it as follows: First for the generators: ϕ_2(τ_i)=3/5, and secondly for a word (τ_1τ_2…τ_r), its Meyer's function isϕ_2(τ_1τ_2…τ_r)=3/5r-∑_j=1^r-1τ_2(τ_1…τ_j,τ_j+1);Here ϕ_2 is the function defined in <ref>.In particular, the Meyer's function for I_1 and Ĩ_1 singularity are ϕ_2(I_1)=3/5, ϕ_2(Ĩ_1)=4/5. The Meyer's function satisfies the equation:ϕ_2(1)=0, ϕ_2(α^-1)=-ϕ_2(α) ,ϕ_2(βαβ^-1)=ϕ_2(α).In particular, the value of the signature depends only on the conjugacy class. The Meyer's function also satisfies the equationϕ_2(B)-ϕ_2(AB)+ϕ_2(A)=τ_2(A,B). One can then define a signature function for a singular fiber F as follows:σ(F)=-ϕ_2(F)+sgn(f^-1(D)),and D is the small disk around the singularity. On the other hand, the local signature can also be computed using the local invariant d_x, δ_x as follows:σ(F)=2/5 d_x -δ_x=-3/5(2δ_x-d_x)-1/5(d_x-δ_x).So one can compute the signature for a local singularityfrom local invariants. In particular, The signature value for the I_1 and Ĩ_1 singularity is σ(I_1)=-3/5, σ(Ĩ_1)=-1/5.The signature function is conserved under the deformation of a degeneration. if a degeneration Fis split into several singularities F_i, then thesignatures of the corresponding mapping class group satisfies the important condition <cit.>:σ(F)=∑_i σ(F_i).The above fact isuseful to find the number of terms in the factorization of the mapping class group element (see formula <ref>):# I_1= 2δ_x-d_x, #Ĩ_1= d_x-δ_x.which is derived by the conservation of the signature function.§.§ Asymptotical free theoryTo find the configuration for asymptotical free theory, one might use the method similar to rank one theory: namely one start withthe conformal theory and then move the bulk I_1 singularities to the fiber at ∞. 
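Before turning to the gauge theory examples, we note that the signature data above is also straightforward to evaluate on a computer, which is useful when checking candidate factorizations against the local invariants. The following is a minimal numerical sketch (Python with sympy and numpy; the function names are ours) implementing the cocycle τ_2(A,B) and the Meyer function of a word of Dehn twists along non-separating curves exactly as defined above. We take J to be the intersection form in the basis (a_1,b_1,a_2,b_2) used for the symplectic matrices printed earlier; the block form of J quoted above corresponds to a reordering of this basis.

```python
import numpy as np
import sympy as sp

# intersection form in the basis (a_1, b_1, a_2, b_2) of the printed Sp matrices
J = sp.Matrix([[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]])

def meyer_cocycle(A, B, J):
    """tau_2(A,B) = sgn of psi_{A,B} on V_{A,B}, following the definitions above."""
    n = A.shape[0]
    I = sp.eye(n)
    # V_{A,B}: kernel of (x, y) -> (I - A^{-1}) x + (I - B) y
    basis = sp.Matrix.hstack(I - A.inv(), I - B).nullspace()
    if not basis:
        return 0
    def psi(v, w):
        x1, y1 = v[:n, 0], v[n:, 0]
        y2 = w[n:, 0]
        return ((x1 + y1).T * J * (I - B) * y2)[0, 0]
    d = len(basis)
    S = sp.Matrix(d, d, lambda i, j:
                  sp.Rational(1, 2) * (psi(basis[i], basis[j]) + psi(basis[j], basis[i])))
    eig = np.linalg.eigvalsh(np.array(S.tolist(), dtype=float))
    return int((eig > 1e-9).sum() - (eig < -1e-9).sum())

def meyer_function(word, J):
    """phi_2 of a word of twists along non-separating curves, via
    phi_2(t_1...t_r) = (3/5) r - sum_j tau_2(t_1...t_j, t_{j+1})."""
    total = sp.Rational(3, 5) * len(word)
    prefix = word[0]
    for g in word[1:]:
        total -= meyer_cocycle(prefix, g, J)
        prefix = prefix * g
    return total

# example: the first two generators, entered from the matrices printed earlier
t1 = sp.Matrix([[1, -1, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
t2 = sp.Matrix([[1, 0, 0, 0], [1, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
print(meyer_cocycle(t1, t2, J), meyer_function([t1, t2, t1, t2], J))
```

The values obtained this way can be compared with the local signatures σ(I_1)=-3/5, σ(Ĩ_1)=-1/5 quoted above and with the additivity of σ under deformation.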
SU(3) gauge theory: The factorization for SU(3) with N_f=6 fundamental flavors is (I_bulk, I_∞)=((τ_5τ_4τ_3τ_2τ_1)^2, (τ_1τ_2τ_3τ_4τ_5)^2). To get the configuration for SU(3) gauge theory with N_f<6, we would like to move some letters from the bulk to ∞. One of the constraints is that the characteristic polynomial of the corresponding monodromy group at ∞ is y^4+2y^3+3y^2+2y+1. One possible choice is to move the letters τ_1, τ_3, τ_5 of the conformal case to ∞, and the end result is (τ_5(4)τ_3(2)τ_35^2(4)τ_13^2(2), τ_3τ_5τ_1τ_3τ_5τ_1 (τ_1τ_2τ_3τ_4τ_5)^2). We conjecture that the SW geometry for SU(3) with N_f=k fundamental flavors is just (τ_5(4)τ_3(2)τ_35^2(4)τ_13^2(2) I_k, I_6-k (τ_1τ_2τ_3τ_4τ_5)^2). Here I_k involves k letters drawn from 1, 3, 5. Since 1, 3, 5 commute with each other, the ordering in I_k is not important. An interesting check is that when k=0, the BPS quiver defined from the BPS particles associated with the vanishing cycles is given in Figure <ref>, which is exactly the same as found earlier. We have the bulk word τ_aτ_bτ_cτ_d, and the symplectic pairings are (a,b)=-1, (a,c)=2, (a,d)=-2, (b,c)=0, (b,d)=2, (c,d)=-1. Here a=5(4)=[4]+[5], b=3(2)=[2]-[3]=[2]-[1]-[5], c=35^2(4)=[4]+[3]+2[5]=[4]+[1]+3[5], d=13^2(2)=[2]-[1]-2[3]=[2]-3[1]-2[5]. We used the relation [3]=[1]+[5]. The basis for the BPS quiver is (-a, -b, c, d), which gives the BPS quiver.

SU(2)× SU(2) gauge theory: The factorization for the conformal theory is (I_bulk, I_∞)=(τ_1τ_2τ_3τ_4τ_5τ_5τ_4τ_3τ_2τ_1, τ_1τ_2τ_3τ_4τ_5τ_5τ_4τ_3τ_2τ_1). This time one should move the letters τ_2, τ_4 of the bulk word to ∞, and there are just four possibilities, which agree with the field theory expectation. The above choice might be understood from the cut system of the corresponding mapping class group element.

§.§ 5d and 6d KK theories

Let us now discuss the global SW geometry of 5d 𝒩=1 KK theories. The main difficulty here is to determine the singular fiber at ∞. There are several clues about the monodromy group that are useful: firstly, the eigenvalues of the monodromy group at ∞ for many 5d theories engineered using toric singularities are (1,1,-1,-1) [More generally, one pair of eigenvalues can be one, and the other pair is (exp(2π i/3),exp(4π i/3)).]; secondly, the monodromy group is not periodic; thirdly, the number of Dehn twists is determined by the local invariants d_x, δ_x. We would like to find the SW geometry for the 5d theory whose flavor symmetry is SO(20) <cit.>; other cases should be found in a similar way. Our basic idea is the following: one can find the UV singular fiber of the 5d theory by using the singular fiber of the corresponding 4d theory with the same flavor symmetry, namely one needs to move an I_1 factor of the 4d UV fiber to the bulk. The reason is the following: the dimension of the charge lattice of the 5d theory is one bigger than that of the 4d theory, as the BPS particles of the KK theory can carry the winding mode charge. Now the I_∞ for the 4d theory with SO(20) flavor symmetry is τ_1^2τ_2τ_3τ_4, and by moving τ_4 to the bulk, we get the word τ_1τ_1τ_2τ_3, which gives the required eigenvalues (1,1,-(-1)^1/3,(-1)^2/3), and certainly this is not a periodic map. The global SW geometry is then (I_∞, I_bulk)=(τ_1τ_1τ_2τ_3, τ_4τ_1^2τ_2τ_3τ_4(τ_5^2τ_4τ_3τ_2)^2). For the fiber at ∞ of a 6d (1,0) KK theory, one of the possible choices is the so-called I_n-p-q singularity: the eigenvalues of the monodromy group are (1,1,1,1). In MM's theory, the cut curves are shown in Figure <ref>. The periodic map on the two genus zero components is just trivial.
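The eigenvalue conditions quoted above for candidate fibers at ∞ can be checked directly in the homology representation. The short sketch below is our own check (it reuses the generator matrices t1–t5 and the word() helper defined in the symplectic-representation snippet above); it computes the characteristic polynomial and eigenvalues of the 5d candidate τ_1^2τ_2τ_3 and of the 6d candidate τ_1τ_3τ_5.

```python
import numpy as np

# Reuses t1..t5 and word() from the symplectic-representation sketch above.
w_5d = word(t1, t1, t2, t3)   # candidate UV fiber for the 5d SO(20) theory
w_6d = word(t1, t3, t5)       # candidate I_{1-1-1} fiber for the 6d KK theory

for name, w in [("t1^2 t2 t3", w_5d), ("t1 t3 t5", w_6d)]:
    print(name, "char. poly:", np.round(np.poly(w), 6),
          "eigenvalues:", np.round(np.linalg.eigvals(w), 3))

# t1^2 t2 t3 has eigenvalues {1, 1, exp(+2*pi*i/3), exp(-2*pi*i/3)}, i.e. the pair quoted
# in the text, and t1 t3 t5 is unipotent with eigenvalues (1, 1, 1, 1), as required for I_{n-p-q}.
```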
Since the mapping class group action associated with each cut curve is a Dehn twist, it is natural to identify the factorization as τ_1^n τ_3^p τ_5^q. When n=p=q=1, this should give the F_∞ such that the theory has SO(20) flavor symmetry. The global SW geometry for the I_1-1-1 case can be found as follows: 1=τ_1τ_2 τ_3 τ_4 τ_5 τ_5 τ_4τ_3τ_2τ_1 τ_1τ_2τ_3τ_4τ_5^2τ_4τ_3τ_2τ_1= τ_1(2)τ_1 τ_3 τ_4 τ_5 τ_5 τ_4τ_3τ_2τ_1τ_1τ_2τ_3τ_4τ_5^2τ_4τ_3τ_2τ_1=τ_1(2)τ_3(4) (τ_1 τ_3τ_5) τ_5 τ_4τ_3τ_2τ_1τ_1τ_2τ_3τ_4τ_5^2τ_4τ_3τ_2τ_1= (τ_1 τ_3τ_5) τ_5 τ_4τ_3τ_2τ_1(τ_1τ_2τ_3τ_4τ_5^2τ_4τ_3τ_2τ_1)τ_1(2)τ_3(4). In the first two steps, we used Hurwitz moves to bring the letters 1, 3, 5 together, and finally we used cyclic equivalence. So finally, we have (I_∞, I_bulk)=(τ_1 τ_3τ_5, τ_5 τ_4τ_3τ_2τ_1(τ_1τ_2τ_3τ_4τ_5^2τ_4τ_3τ_2τ_1)τ_1(2)τ_3(4)). Alternatively, one can move one letter of the UV fiber of the 5d KK theory with flavor symmetry SO(20) to the bulk, as the charge lattice of the 6d theory is one dimension higher, since there is one more winding charge for the 6d KK theory. To satisfy the eigenvalue condition, the choice is τ_1^2 τ_3, which might be conjugate to the simpler choice τ_1τ_3τ_5.

§ CONCLUSION

We studied the global SW geometry of rank two theories with eight supercharges by using a topological approach. The main task is to find a factorization of the mapping class group element of the degeneration in terms of positive Dehn twists (up to conjugation, braid moves, and Hurwitz moves), together with a factorization of the identity element. We achieved this for most local singularities, and found the global SW geometry for most 4d theories with generic deformations. The rank one case studied in <cit.> can now be easily recovered using the topological approach, while the rank two case is significantly more complicated, although it is still manageable. We took a topological approach to the classification; one still needs to find the SW geometries explicitly to solve for the Coulomb branch data, i.e. the photon couplings. The genus two fibrations are all hyperelliptic, which makes the study easier, and the study of those curves is in progress. The algebraic curves and the choice of SW differential can give further constraints (for example, the theory with scaling dimensions (10,8) seems not to exist although topologically we do not find any obstruction, and one may explain this using a holomorphic constraint, i.e. no such hyperelliptic family can be written down <cit.>). The approach taken in this paper can be straightforwardly generalized to higher rank theories whose SW geometries are given by fibrations of genus g curves. The reason is that the mapping class group is again generated by Dehn twists. One can easily find the factorization for some familiar theories (such as the (A_1, A_n) or (A_1, D_n) theories). A thorough study takes more effort, and we hope to report progress in the future. One of the interesting results of this paper is that the mapping class group M_2 is generated by those of the simple (A_1,A_n) and (A_1, D_n) type SCFTs. This suggests that one may start with simple SCFTs and generate other SCFTs. In this paper, we studied theories with generic deformations (with only I_1 or Ĩ_1 singularities). One can find more general theories by allowing so-called un-deformable singularities.
We have already discussed how to form those un-deformable singularities by moving the letters of the Dehn twists around; a thorough understanding needs the study of the automorphisms of the genus two fibration, which will be left for part III of the series. | http://arxiv.org/abs/2311.15986v1 | {
"authors": [
"Dan Xie"
],
"categories": [
"hep-th",
"math.AG",
"math.GT"
],
"primary_category": "hep-th",
"published": "20231127163100",
"title": "On rank two theories with eight supercharges part II: Lefschetz pencils"
} |
[email protected] [t1]© 2023. This manuscript version is made available under the CC-BY-NC-ND 4.0 license.Institute of Geosciences and Exact Sciences, São Paulo State University (UNESP) Júlio de Mesquita Filho, Rio Claro, SP, 13506-900, BrazilauthorThis paper investigates video game identification through single screenshots, utilizing five convolutional neural network (CNN) architectures (MobileNet,DenseNet, EfficientNetB0, EfficientNetB2, and EfficientNetB3) across 22 home console systems, spanning from Atari 2600 to PlayStation 5. Confirming the hypothesis, CNNs autonomously extract image features, enabling the identification of game titles from screenshots without additional features. Using ImageNet pre-trained weights, EfficientNetB3 achieves the highest average accuracy (74.51%), while DenseNet169 excels in 14 of the 22 systems. Employing alternative initial weights from another screenshots dataset boosts accuracy for EfficientNetB2 and EfficientNetB3, with the latter reaching a peak accuracy of 76.36% and demonstrating reduced convergence epochs from 23.7 to 20.5 on average. Overall, the combination of optimal architecture and weights attains 77.67% accuracy, primarily led by EfficientNetB3 in 19 systems. These findings underscore the efficacy of CNNs in video game identification through screenshots. video game identification convolutional neural networks transfer learning single screenshot analysis automated game recognition From Pixels to Titles: Video Game Identification by Screenshots using Convolutional Neural Networks Fabricio Aparecido Breve January 14, 2024 ===================================================================================================§ INTRODUCTION Humans possess the remarkable ability to easily recognize their favorite video games or titles they have played frequently from a single screenshot. This proficiency is rooted in the presence of consistent visual elements, including sprites, heads-up displays (HUDs), and distinctive game scenarios. However, extending this capability to the automated identification of random video games from an extensive console library presents a formidable challenge, even for the most dedicated gamers. Therefore, the concept of automatically identifying video games from single screenshots holds immense appeal, not only for its technical complexity but also for its vast practical applications.Automated game identification could offer substantial benefits to various sectors within the gaming industry. Video game databases, search engines, and online platforms stand to gain significantly from this technology. By analyzing user-uploaded screenshots, these platforms can automatically generate metadata, including game titles, release dates, and developer information. Such automation would not only improve the accuracy of their game libraries but also enhance cataloging efficiency. Moreover, online streaming platforms could harness screenshot recognition to provide real-time information to viewers about the games being played during live streams, enhancing the overall viewer experience. This technology opens doors to further innovation within the gaming ecosystem, potentially influencing game recommendation systems and aiding game-related research. Most of the video games classification attempts so far aimed at genre classification. <cit.> pioneering work classified game genre of gameplay videos. Their dataset comprises 700 gameplay videos spanning seven distinct game genres. 
In their research, they introduced novel descriptors known as Bossa Nova and BinBoost. The experimental outcomes demonstrated the effectiveness of their proposed approach, achieving an accuracy rate of 89.84%. <cit.> also introduced a novel method for classifying video games genres based on content. They used a dataset comprising 351 gameplay videos spanning six different genres. They employed random forest and gradient boosting trees as underlying machine-learning techniques, combined with feature selection of image-based features and motion-based features. The most promising results were achieved using the random forest classifier, which yielded an accuracy rate of 60.6%.<cit.> introduced a game classification method based on graphical and video complexity. Their approach categorizes games into three distinct classes: low-complexity, medium-complexity, and high-complexity games. To achieve this classification, they developed a decision tree capable of accurately assigning a game to its appropriate complexity class with an accuracy rate of 96%. The classification process relies on the analysis of specific attributes within the gameplay video, including the observation of a static area, assessment of the degree of freedom (DoF), and quantification of camera movement.While the majority of classification endeavors have typically focused on broader categories, there was a unique attempt to classify video games by their titles more than a decade ago. In this pioneering effort, <cit.> explored several fusion methods using a dataset containing 120,000 gameplay videos, with the objective of identifying 30 distinct game titles. Their approach integrated both audio and visual features to accurately pinpoint these specific game titles, ultimately achieving a F1-score of 0.82.In the past decade, there has been a notable surge in the adoption of deep learning methods, as exemplified by the works of <cit.>, <cit.>, and <cit.>. Among these methods, convolutional neural networks (CNNs) have played a pivotal role in driving advancements in automatic image classification, as initially demonstrated by <cit.>. CNNs represent a specific category of deep neural networks that find widespread use in the field of visual image analysis.However, one of the inherent challenges associated with CNNs lies in their substantial appetite for annotated image samples, which are essential for estimating the millions of parameters required for network training. The process of annotating images can be both expensive and time-consuming, often presenting a significant bottleneck, particularly when dealing with problems characterized by limited available training data <cit.>. This limitation has, at times, hindered the widespread application of CNNs in scenarios where access to abundant training data is restricted.Remarkably, <cit.> addressed this predicament by unveiling a breakthrough approach. Their research demonstrated that image representations acquired through CNNs trained on extensive, annotated datasets could be judiciously leveraged for other visual recognition tasks, even in cases where only a limited amount of training data is available. This innovative method involved repurposing layers from a CNN model previously trained on a large dataset to compute mid-level image representations for a different dataset, yielding remarkable improvements in classification performance. 
This powerful technique, commonly referred to as transfer learning, has found successful application across various domains and scenarios, as exemplified by the works of <cit.> <cit.> and <cit.>.CNNs have significantly influenced the landscape of automatic image classification, including game classification. <cit.> used games screenshots and icons provided in game stores to classify them by genre using convolutional neural networks and ensemble techniques. They achieved 40.3% and 46.7% classification accuracies for single icon and screenshot classification tasks, respectively. They increased these results to 40.5% and 47.6%, respectively, in a later work <cit.>, in which they also used features extracted from their trained models to perform other two tasks: similar game searching and quality assessment of game images based on the correctness of viewers' understanding of game content. Recently, <cit.> devised deep neural networks for the purpose of classifying game genres using either cover images or description text. Their dataset encompassed cover images and description texts sourced from a pool of 50,000 games, which they categorized into 15 distinct genres. In their approach, several pre-trained CNNs were fine-tuned for the cover image classification task. For the classification of description text, they employed Long Short-Term Memory (LSTM) networks and the Universal Sentence Encoder (USE). The image-based model yielded a highest accuracy rate of 31.4% when utilizing ResNet-50. They also achieved significant improvement in accuracy, up to 49.9%, by combining image and text features within a multi-modal model.In this paper, the primary focus revolves around the task of video game title classification based on single screenshots, utilizing CNN models. The hypothesis is that the inherent CNN capacity of automatically extracting relevant features from images is sufficient to identify video game titles from single screenshots in most scenarios, without relying on other features. To embark on this journey, a dataset encompassing 170,881 screenshots from 8,796 games of 22 popular home console systems was curated. The screenshots were sourced from the reputable Moby Games Database <cit.>. The proposed dataset spans a wide spectrum of gaming history, ranging from iconic consoles like the `Atari 2600' of the second generation to the cutting-edge `PlayStation 5' and `Xbox Series' of the current generation, carefully selecting the most sold consoles from each generation between them. Its richness and diversity make it an ideal playground for our research.To tackle this ambitious task, well-established CNN architectures were selected: MobileNet <cit.>, DenseNet <cit.>, and EfficientNet <cit.>. These architectures have consistently demonstrated outstanding performance in previous works with different kinds of images <cit.>, making them prime candidates for this game title classification task. The initial weights of these CNNs were first initialized with pre-trained weights from the ImageNet dataset <cit.>, a widely adopted approach in transfer learning <cit.>. Subsequently, the pre-trained weights from another dataset of screenshots were employed to enhance both classification accuracy and reduce training times.What sets this research apart is its pioneering spirit. To the best of my knowledge, this marks the first attempt to tackle the intriguing challenge of game title classification using CNNs. 
By pushing the boundaries of automated video game identification, the aim is to contribute valuable insights to game-related research and practical applications to the ever-evolving gaming industry.

The remainder of this paper is organized as follows. Section <ref> presents the Moby Games Database and how the dataset was sourced from it. Section <ref> shows the CNN architectures employed in this paper. Section <ref> displays the computer simulations comparing the CNN architectures in the task of identifying the games from their screenshots, initialized with pre-trained weights from the ImageNet dataset. Section <ref> demonstrates computer simulations using weights pre-trained on another screenshots dataset, comparing the accuracy and training epochs with those obtained with the ImageNet weights. Finally, the conclusions are drawn in Section <ref>.

§ THE DATASET

The Moby Games Database <cit.>, as stated on their website, is an ambitious project with the primary goal of meticulously cataloging comprehensive information about electronic games, encompassing computer, console, and arcade titles, on a game-by-game basis. This extensive catalog includes release details, credits, cover art, player-uploaded screenshots with captions, neutral descriptions, and much more. Impressively, the database boasts a collection of over one million screenshots, meticulously organized by game titles and systems. Additionally, they offer an API that simplifies the process of requesting and retrieving dataset entries and screenshot files. Given these advantages, the Moby Games Database was selected as the primary source for the screenshots used in this research.

To maintain a focused scope for this initial endeavor in video game identification using CNNs, the study exclusively considered home video game consoles. Handheld devices, arcade games, and computer-based titles will be addressed in future research. The selection process involved choosing the top 22 best-selling home video game consoles of all time <cit.>. These consoles originate from six different manufacturers and exhibit varying quantities of screenshots per game in the database, ranging from none to a few dozen. To ensure that each game has a minimum of five screenshots available for cross-validation purposes, only games meeting this criterion were selected. Table <ref> provides an overview of the 22 chosen systems, detailing their total game and screenshot counts, as well as the specific number of games and screenshots selected to satisfy the “at least 5 screenshots” criterion. Figure <ref> shows some screenshots from the built dataset.

§ CNN ARCHITECTURES

In this section, the CNN architectures explored in this study are introduced, along with a description of the additional layers integrated to achieve successful screenshot classification. Table <ref> offers an overview of the architectures under examination, highlighting their input image resolution, the size of their output in the final convolutional layer, the quantity of parameters involved, and citations to their respective references in the literature.

The output from the final convolutional layer of the original CNN is directed into a global average pooling layer. Subsequently, a dropout layer is implemented with a rate of 20% to mitigate overfitting, followed by a softmax classification layer.
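As a rough illustration of the classification head just described, the following Keras sketch is our own reconstruction rather than the author's released code (the repository linked in the next section is the authoritative implementation); the loss and optimizer settings follow the description given in the next section, and details such as the label encoding are assumptions.

```python
import tensorflow as tf

def build_screenshot_classifier(n_games: int, input_size: int = 300) -> tf.keras.Model:
    """ImageNet-pretrained backbone + global average pooling + 20% dropout + softmax over game titles."""
    base = tf.keras.applications.EfficientNetB3(
        include_top=False, weights="imagenet",
        input_shape=(input_size, input_size, 3),
    )
    inputs = tf.keras.Input(shape=(input_size, input_size, 3))
    x = base(inputs)                                  # final convolutional feature maps
    x = tf.keras.layers.GlobalAveragePooling2D()(x)   # collapse the spatial dimensions
    x = tf.keras.layers.Dropout(0.2)(x)               # mitigate overfitting
    outputs = tf.keras.layers.Dense(n_games, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# Hypothetical usage, e.g. for a console with 100 selected games:
# model = build_screenshot_classifier(n_games=100)
```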
This proposed architecture is visualized in Figure <ref>, with x representing the dimensions of the input size (image size), w, y, and z indicating the dimensions of the CNN output in its ultimate convolutional layer (as detailed in Table <ref>), and g denoting the output layer's dimensions, which depend on the number of games from the system being evaluated that are present in the dataset (as indicated in Table <ref>).§ CNN COMPARISONThis section presents computer simulations that compare CNN models applied to the classification of screenshots from the 22 systems shown in Table <ref>. All simulations utilized Python and TensorFlow running on three distinct desktop computers equipped with NVIDIA GeForce GPU boards: GTX 970, GTX 1080, and RTX 2060 SUPER, respectively[Access the source code at <https://github.com/fbreve/videogame>].For each CNN architecture, image preparation involved resizing to fit the CNN input size and normalizing the range, with no additional preprocessing. Networks commenced with pre-trained weights sourced from the Imagenet dataset <cit.>, which boasts millions of images and hundreds of classes, widely used in transfer learning. These pre-trained weights are available in Tensorflow.K-Fold Cross Validation, employing k=5, was applied universally across all datasets. Training utilized the Adam optimizer <cit.>, initiating with a learning rate of 10^-3 and halving whenever the validation accuracy stagnated for 2 epochs, down to a minimum of 10^-5. Within the training subset, a random 20% of images were allocated to the validation subset, ensuring consistent class proportions through stratification. All models underwent training for up to 50 epochs, with an early stopping criterion to cease training if the validation set loss failed to decrease during the last 10 epochs. The results, detailed in Table <ref>, denote averages derived from five different instances of each model, following the Cross Validation approach. They show that DenseNet169 achieved the best accuracy in 14 of the 22 systems. On the other hand, the best average accuracy is attained by EfficientNetB3 (0.7451) which is only slightly higher than that achieved by the runner-up DenseNet169 (0.7446).Regarding the systems, the best accuracy is achieved with the Xbox Series using EfficientNetB3 (0.9714). However, it is worth noticing that this system only had 37 screenshots from a total of five games. Its results were markedly worse with DenseNet169 (0.6071), for example. The Atari 2600 is the system with the best average accuracy (0.8942).This is likely related to the simpler graphics of this second-generation console compared to newer systems. Most games for the Atari 2600 do not exhibit significant screen variation.In simpler tasks, smaller architectures often perform as well as larger ones. It's valuable to consider these smaller networks when selecting the optimal architecture for a task because if a smaller network can yield comparable results in less computational time, there's no justification for employing a larger one. This rationale led to the inclusion of MobileNet and EfficientNetB0 in this comparison. However, in the context of video game detection by screenshot, it became evident that larger networks outperformed the smaller ones.§ ALTERNATIVE INITIAL WEIGHTSThe ImageNet weights are commonly used in many transfer learning scenarios with success. Through fine-tuning, these weights can be adapted to perform many different tasks. 
However, it is expected that transferring weights from a similar task might enhance accuracy and reduce training times compared to using the ImageNet weights. Hence, investigating whether this holds true for the game identification by screenshots task is worthwhile.

To conduct these simulations, Arcade screenshots were obtained from the Moby Games Database using the same criteria applied in sourcing screenshots from home console systems. Out of 3,125 games and 24,714 screenshots, 1,633 games and 24,235 screenshots were selected based on the criterion of `at least five screenshots per game.' This Arcade dataset holds particular significance due to the inclusion of games contemporary to multiple home console generations, thereby presenting screenshots with highly diverse graphics.

From the architectures that exhibited the best performance in the previous section — DenseNet169, EfficientNetB2, and EfficientNetB3 — each was trained using the entire Arcade dataset, utilizing identical parameters as outlined earlier. The weights obtained from training on the Arcade dataset were subsequently employed as initial weights for training these architectures with screenshots from each of the 22 home console systems.

Tables <ref>, <ref>, and <ref> display the accuracy achieved and the epochs required to train each network, using both the ImageNet and Arcade weights. For DenseNet169, employing the Arcade weights resulted in improved accuracy for only 6 out of the 22 systems. However, training times decreased for 21 of the 22 systems. Overall, while the average accuracy slightly decreased from 0.7446 to 0.7432, the average number of epochs needed to train the network decreased from 24.2 to 19.6.

For EfficientNetB2 (Table <ref>), employing the Arcade weights resulted in improved accuracy for 19 of the 22 systems. Additionally, training times decreased for 20 of the 22 systems. Overall, the average accuracy increased from 0.7343 to 0.7552, while the average number of epochs needed to train the network decreased from 24.4 to 19.9.

Finally, with EfficientNetB3 (Table <ref>), employing the Arcade weights similarly led to improved accuracy for 19 of the 22 systems. Moreover, training times decreased for 20 of the 22 systems. Overall, the average accuracy increased from 0.7451 to 0.7636, while the average number of epochs required to train the network decreased from 23.7 to 20.5. It is worth noting that the systems showing improved accuracy or training times are consistent between EfficientNetB2 and EfficientNetB3.

Table <ref> displays the highest accuracy achieved for each system, showcasing the best combination of architecture and initial weights. EfficientNetB3 notably outperforms the other architectures, yielding the best results in 19 out of 22 systems, with one tie alongside EfficientNetB2. EfficientNetB2 and DenseNet169 excel in two systems each. Concerning the initial weights, the 'Arcade weights' account for the best results in 17 out of 22 systems, while the remaining five systems attained their highest accuracy with the ImageNet initial weights.

§ CONCLUSIONS

This paper explores the application of five distinct CNN architectures (MobileNet, DenseNet, EfficientNetB0, EfficientNetB2, and EfficientNetB3) for identifying video games through screenshots across 22 diverse home console systems, from the Atari 2600 (first released in 1977) to the PlayStation 5 (first released in 2020).
The computer simulations confirmed the hypothesis that the CNNs' inherent capacity for automatically extracting relevant features from images is sufficient to identify video game titles from single screenshots in most scenarios, without relying on other features. Using pre-trained weights from the ImageNet dataset, an average accuracy of 74.51% over all the 22 systems is achieved with the EfficientNetB3 architecture. On the other hand, the DenseNet169 architecture is the best in 14 of the 22 tested systems. When weights pre-trained on another screenshot dataset (Arcade) are used as initial weights - instead of those from ImageNet - the accuracy improves for both EfficientNetB2 and EfficientNetB3, while the number of epochs required to converge decreases. The average accuracy over all the 22 systems increases from 74.51% to 76.36% with EfficientNetB3, while the number of epochs required for the models to converge decreases from 23.7 to 20.5. On the other hand, the Arcade weights do not improve DenseNet169 in most scenarios.

Overall, considering only the best architecture and weights for each system, an accuracy of 77.67% is achieved. EfficientNetB3 is responsible for the best results in 19 of the 22 systems. The “Arcade weights” are responsible for the best results in 17 of the 22 systems, confirming the hypothesis that starting from weights trained on a different dataset for the same task is better than starting with the more general ImageNet weights in most scenarios.

The simulations performed for this paper showed evidence of the efficacy of CNNs in the task of video game identification by screenshots. Since the largest networks explored in this paper achieved the best results, future research will explore even larger CNN architectures and CNN ensembles to further enhance accuracy. Additionally, this study suggests potential applications in other screenshot-based tasks, such as genre classification or similar game searches, leveraging the efficacy of CNNs in video game identification. | http://arxiv.org/abs/2311.15963v1 | {
"authors": [
"Fabricio Breve"
],
"categories": [
"cs.CV",
"cs.NE"
],
"primary_category": "cs.CV",
"published": "20231127160734",
"title": "From Pixels to Titles: Video Game Identification by Screenshots using Convolutional Neural Networks"
} |
[Correspondence email address: ][email protected] Department of Physics, Cornell University, Ithaca, NY 14853, USADepartment of Pediatrics, Masonic Institute for the Developing Brain, University of Minnesota[Correspondence email address: ][email protected] Department of Psychiatry, Rutgers University, Piscataway, NJ 08854, USADynamics play a critical role in computation. The principled evolution of states over time enables both biological and artificial networks to represent and integrate information to make decisions. In the past few decades, significant multidisciplinary progress has been made in bridging the gap between how we understand biological versus artificial computation, including how insights gained from one can translate to the other. Research has revealed that neurobiology is a key determinant of brain network architecture, which gives rise to spatiotemporally constrained patterns of activity that underlie computation. Here, we discuss how neural systems use dynamics for computation, and claim that the biological constraints that shape brain networks may be leveraged to improve the implementation of artificial neural networks. To formalize this discussion, we consider a natural artificial analog of the brain that has been used extensively to model neural computation: the recurrent neural network (RNN). In both the brain and the RNN, we emphasize the common computational substrate atop which dynamics occur—the connectivity between neurons—and we explore the unique computational advantages offered by biophysical constraints such as resource efficiency, spatial embedding, and neurodevelopment. Shaping dynamical neural computations using spatiotemporal constraints Linden Parkes January 14, 2024 ====================================================================== § INTRODUCTIONDynamics have long underpinned computation. From the cycles of central pattern generators that support locomotion <cit.> to the networks of large-scale brain dynamics thought to regulate decision-making <cit.>, it is clear that biological systems make ample use of their time-evolution to respond to their environment. Harnessing this dynamical computation, artificial recurrent neural networks (RNNs) have been trained to successfully perform the same computational tasks as humans <cit.>. However, while inspired by the brain, training of RNNs is typically carried out in an unconstrained manner, leading to solutions that lack biophysical realism. Additionally, decades of neuroscience research has demonstrated the importance of biological constraints for achieving the brain's unique structure and capabilities <cit.>. Here, we draw on literature from dynamical systems and neuroscience to discuss (i) how RNNs leverage dynamics to compute, and (ii) how biophysical constraints may shape this computation by guiding the formation of network structure. At the intersection of these goals exists an opportunity to study how biologically-constrained RNNs may yield more powerful and more interpretable computational models. To understand how biologically realistic neurons compute, there has been a long and rich history of modeling and interpreting neurobiological systems to leverage their computational capabilities. These quantitative models fall under the category of dynamical systems, whose evolution in time is determined by mathematical functions. 
At the scale of a single neuron, detailed circuit models of the ion channels that mediate membrane voltage have enabled quantitative understanding of signal propagation and computation in dendrites <cit.>. At the scale of neural populations, mean-field models of excitatory and inhibitory neurons have enabled the study of neural circuits for biological sensing, imitation, and attention <cit.>. At the whole-brain level, both linear <cit.> and non-linear <cit.> dynamical models have been used to simulate large-scale activity patterns, and have examined how those patterns spread across the brain's white matter tracts. Across this broad range of systems, scales, and models, there exists a diversity of ways in which dynamics can be used for computation, as well as a crucial dependence of these dynamics on biophysical parameters. More recently, with technological advances in deep learning, the study of neural computation has adopted a more functional direction that moves away from biological realism. That is, rather than seeking a direct biophysical model <cit.>, RNNs posit a general dynamical system that is Turing complete <cit.>, with parameters that are trained to solve computational tasks. We focus on RNNs because they have been used extensively to understand how general brain-like systems leverage dynamics to perform computation. Examples of this use include time-series prediction <cit.>, source-separation <cit.>, decision-making <cit.>, odor classification <cit.>, as well as concurrent performance of multiple cognitive tasks <cit.>. However, these insights and models often fail to translate into the real computational substrate of the brain—the neural architecture—because RNNs are trained without regard for biophysical constraints. To merge biological realism with computational dynamics, we must first understand the physical embedding and constraints of the brain. In contrast to artificial RNNs, the brain is embedded within a circumscribed physical space <cit.>, and its inter-connectivity is subject to limited metabolic resources <cit.>. This discrepancy makes it challenging to translate insights relating the structure, dynamics, and computation of biological brains to artificial RNNs. Neuroscience has studied these resource constraints for more than a century <cit.>, suggesting that the brain is pressured to make efficient use of space, material, and time. That is, the brain must learn to communicate efficiently (time) while leveraging limited physical (space) as well as metabolic and cellular (material) resources. Critically, many of the brain's topological features of connectivity and communication are thought to emerge as a consequence of navigating these pressures <cit.>. These findings suggest that the brain's resource constraints play a critical role in shaping its dynamic repertoire and computational capacity. Here, our goal is to lay out promising new directions for improving the computational power and interpretability of RNN models of the brain. We posit that this goal will be achieved by placing biological constraints on RNNs that shape their structure and activity in systematic ways, which will in turn produce computationally improved dynamics. We focus on two aspects of RNN computational dynamics: the diversity of information that is represented by the neurons (expressivity), and the manipulation of low-dimensional internal representations (latent-spaces). 
In each section, we examine how biology shapes brain networks—with particular emphasis on the spatially-patterned macro-scale organizing principles of the cortex—and discuss how these constraints may be ported to RNNs to improve performance with interpretable structure and dynamics. Overall, we discuss how insights from biological and artificial computation can enrich each other towards a new generation of biophysically realistic RNNs. § THE RNN MODEL To mathematically model the time-evolution of neural systems, we turn to dynamical systems which posit that the next state of a neural system can be written as a function of the current state and an input asr_t+1 = f(r_t,u_t).Here, r_t ∈^n is a vector of n neural activity states, u_t ∈^k is a vector of k inputs, and f is a function. As an example, let us consider a simple leaky integrator model with a single neuron which evolves according tor_t+1 = ar_t + bu_t,where 0 ≤ a < 1 and b are real numbers (Fig. <ref>A). As time evolves forward, the neuron state integrates the input bu_t, and the accumulated history of inputs decays at a rate set by a. While Eq. <ref> is written with t advancing in integer steps—thereby called a discrete-time dynamical system—many physical neural models evolve forward continuously in time as,/ tr(t) = f̂(r(t),u(t)).We can approximate these continuous dynamics as discrete by evolving Eq. <ref> in time using steps of Δ t as,r_t+Δ t = f(r_t,u_t) = r_t + ∫_t^t+Δ tf̂(r(τ),u(τ)) τ.For example, the continuous-time version of the leaky integrator neuron is given byd/dt r(t) = âr(t) + b̂u(t),for â≤0 (Fig. <ref>A). In this case, the parameters of these two models can be interchanged through the transformation a = e^âΔ t, b = (a-1)b̂/â, but fundamental differences exist between continuous- and discrete-time systems <cit.>. Regardless of the system type, neural models make tradeoffs between complexity in the level of detail and tractability. The RNN is a model that attempts to capture the biophysical quantity of the interactions between neurons through the connectivity matrix A. In tandem, the RNN simplifies the precise functional form of that interaction through the activation function f. In its most basic form, an RNN is a subset of dynamical systems (Eq. <ref>) that evolves in time asr_t+1 = f(Ar_t + Bu_t + d), o_t= g(r_t).where A ∈^n× n is the connectivity matrix between neurons, B ∈^n× k is a matrix that linearly maps the inputs to the neurons, d∈^n is a vector of bias terms, and f is an activation function (Fig. <ref>B). Rather than having f be a complex and biophysically motivated function, it is often approximated as a simple nonlinear function such as a sigmoid. The output of the RNN, o_t, is usually taken to be some function g of the RNN state, and is often a linear output o_t = Wr_t. Typically, A,B,d, and W are treated as learnable parameters, some or all of which can be trained using a wide variety of methods <cit.>. We focus primarily on the computational role of the connectivity matrix A, as it dictates how the information in the RNN states is integrated as in the leaky integrator example (Eq. <ref>, <ref>). Despite its crucial importance in implementing computation, most uses of RNNs do not consider the biological pressures experienced by the brain while training RNN connectivity. In the following section, we describe how diversely the RNN states can express the inputs by leveraging A, and how biological processes and constraints reflect, mediate and accentuate this diversity. 
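As a minimal numerical illustration of the leaky-integrator neuron and the generic RNN update discussed in this section, the short NumPy sketch below is our own example (it is not taken from the paper, and the parameter values are arbitrary); it simulates both systems driven by the same input channel.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100
u = np.zeros(T); u[10] = 1.0              # a single input impulse at t = 10

# --- Single leaky-integrator neuron: r_{t+1} = a r_t + b u_t ---
a, b = 0.9, 1.0
r = np.zeros(T)
for t in range(T - 1):
    r[t + 1] = a * r[t] + b * u[t]        # the impulse decays at a rate set by a

# --- Generic RNN update: r_{t+1} = f(A r_t + B u_t + d), with output o_t = W r_t ---
n, k = 50, 1                              # 50 neurons, 1 input channel
A = rng.normal(0, 1 / np.sqrt(n), (n, n)) # connectivity; its scaling controls stability
B = rng.normal(0, 1.0, (n, k))
d = np.zeros(n)
W = rng.normal(0, 1.0, (1, n))            # linear readout
x = np.zeros(n)
states = []
for t in range(T):
    x = np.tanh(A @ x + B @ u[[t]] + d)
    states.append(x.copy())
o = np.array(states) @ W.T                # RNN output time series
```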
§ TUNING EXPRESSIVITY THROUGH REGULARILIZED ACTIVITY AND NEUROMODULATIONWhen solving any computational problem, the expressivity of the language used is of crucial importance. The more expressive a language is, the greater the set of computations, formulae, and theorems comprise that language <cit.>. For example, a programming language that supports conditionalstatements can represent many more programs than an equivalent language that is without anstatement. In the same way, neural networks can be viewed from the lens of expressivity. Specifically, we can ask: given arbitrary weights, what is the space of functions or dynamics that can be achieved? Previous work has demonstrated that shallow multi-layer perceptrons (MLPs) are universal function approximators <cit.>, and that RNNs are universal dynamics approximators <cit.>. However, even if a particular function or dynamical pattern is theoretically achievable, artificial neural networks—just like biological neural networks—must be trained from an initial condition. Hence, the study of expressivity extends beyond theoretical guarantees, and has been shown to rely heavily upon architectural features such as depth <cit.>, neuron activation <cit.>, and connectivity <cit.>, which are crucial for explaining and engineering the success of modern-day neural networks. In this section, we study three consequences of biological processes for expressivity. First, we introduce RNN expressivity as a richness of time history information about the inputs, and tie this richness to spatially-patterned temporal receptive fields in the brain that underpin information integration. Second, we study constraints on expressivity induced by resource constraints on neural activity as a putative learning mechanism. Finally, we explore the potential for neural networks to modulate their expressivity at short time-scales through neuromodulation. Together, the processes of the brain offer enticing and novel paradigms for training and constructing more expressive RNNs under biological constraints.§.§ Expressivity as variable time-lagged integration of information Expressivity of RNNs is intricately tied to the concept of stability: how quickly a perturbation to the RNN state decays <cit.>. If a perturbation decays very quickly, then the information contained in the perturbation cannot be used by the RNN for extended information processing. On the other hand, if the perturbation grows uncontrollably, then the temporal information in the inputs quickly becomes too complex to be represented by the finite number of neurons. As a result, an optimal amount of controlled stability should maximally preserve temporal information without saturating the RNN's capacity. We can quantify this intuition through a simple recursive substitution of Eq. <ref>,r_t+1 = f(Ar_t + Bu_t + d)= f(A(f(Ar_t-1 + Bu_t-1 + d) + Bu_t + d)= …= h(r_0,u_0,u_1,…,u_t).Hence, in a noiseless system, the RNN state r_t+1 can be written as an explicit function h of the initial state, r_0, and the full time history of the inputs, u_τ, mediated by the recursive application of A, B, d, and the activation function f. The state of all neurons r_t+1 generates a basis for a subspace of the delay-embedded space of inputs u_τ (Fig. <ref>A), which means that the neuron states at time t+1 hold information about the time history of the inputs mediated by A, B, and d <cit.>. 
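The time-history content of the RNN state can be made explicit with another small numerical sketch (again our own illustration, with arbitrary parameter values, and deliberately weaker than the experiment described in the text: only a linear readout is trained while the internal weights stay fixed, and a shorter lag is used). A fixed, randomly connected tanh network is driven with noise, and the readout is asked to recover the input presented several steps earlier from the current state; recall typically improves with moderate connectivity and degrades again when the dynamics become unstable.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, lag = 50, 2000, 10

def recall_r2(scale):
    """Train a linear readout of the RNN state to recall u_{t-lag}; return R^2 on held-out data."""
    A = scale * rng.normal(0, 1 / np.sqrt(n), (n, n))   # connectivity strength set by `scale`
    B = rng.normal(0, 1.0, (n, 1))
    u = rng.normal(0, 1.0, T)
    X = np.zeros((T, n)); x = np.zeros(n)
    for t in range(T - 1):
        x = np.tanh(A @ x + B[:, 0] * u[t])
        X[t + 1] = x
    Y = np.roll(u, lag); Y[:lag] = 0.0                  # target: the input `lag` steps in the past
    tr, te = slice(lag, T // 2), slice(T // 2, T)       # first half train, second half test
    w = np.linalg.solve(X[tr].T @ X[tr] + 1e-3 * np.eye(n), X[tr].T @ Y[tr])   # ridge readout
    resid = Y[te] - X[te] @ w
    return 1 - resid.var() / Y[te].var()

for s in [0.0, 0.5, 1.0, 1.5]:
    # With scale 0 the state only reflects the most recent input, so distant recall fails.
    print(f"connectivity scale {s:.1f}: recall R^2 = {recall_r2(s):.2f}")
```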
Thus, when computing an output using the neural states, we are implicitly computing an output using a basis of time-lagged input terms, where the connectivity defines the basis vectors, and therefore the reconstructable subspace (Fig. <ref>A). The more expressive this time-lagged basis, the greater the diversity of output functions which can be computed. This expressivity is intimately tied to the connectivity matrix <cit.>, and has been studied through many different lenses such as computation at the edge of chaos <cit.>, criticality, and avalanches <cit.>. The RNN's stability is set by the specific activation function f and the connectivity matrix A. To gain intuition for this dependence, let us consider a simple linear 2 neuron system driven by one input. When there is no connectivity between the neurons, they store no time history of the inputs, and their state at time t=3 is purely a function of the input at time t=2 (Fig. <ref>B). When we add weak connections between the neurons, they begin to store some information about the input at the previous time point t=1 (Fig. <ref>C). When we strengthen these connections, the neurons begin to store information from further back in time at t=0 (Fig. <ref>D). To further develop this intuition for larger systems and a specific task, we consider a 50 neuron system whose connectivity is randomly initialized, and whose output is trained to recall an impulse from 30 time steps in the past. At the trivial limit of an RNN with no connectivity where |A| = 0, the recursion of Eq. <ref> yields r_t+1 = f(Bu_t + d), and we see that there is no time history of the input present in the RNN state. As a result, the RNN is unable to recall the input at a later point in time (Fig. <ref>E) <cit.>. As we increase the strength of connectivity, the RNN state stores more information about longer time lags of the input, u_t-τ, and is thus able to more accurately recall the input later in time (Fig. <ref>F). As the connectivity strength continues to further increase, the RNN state holds increasingly more time lags of the input until it saturates such that the number of neurons is smaller than the dimension of the space of time-lagged input functions, thereby forming an incomplete basis for that space (Fig. <ref>G). This link between storing long time histories and expressivity is displayed prominantly and spatially in the brain. Specifically, there exists a tight coupling between longer periods of temporal integration and higher-order computation, and this relationship varies systematically across the cortex <cit.>. At the macro-scale, cortical brain regions are thought to follow a dominant axis of variation that encodes a global processing hierarchy <cit.>. This gradient of brain organization is broadly referred to as the sensorimotor-association (S-A) axis <cit.>. The S–A axis spans from primary cortices supporting sensation and movement at the bottom, to multimodal cortices supporting multisensory processing and integration in the middle, to transmodal association cortices supporting higher-order cognition at the top. The S-A axis is observed across multiple diverse features of brain structure and function <cit.> and is conserved across species <cit.>, demonstrating its evolutionary roots. Notably, as regions traverse up the S-A axis they undergo a progressive lengthening of their temporal receptive windows <cit.>. 
Specifically, regions' intrinsic functional timescales vary over the S-A axis <cit.> with regions at the top showing slower fluctuations reflecting longer temporal receptive windows. In turn, these longer windows are thought to enable greater accumulation and integration of information over time, facilitating higher-order cognition <cit.>. Conversely, regions at the bottom of the S-A axis show relatively fast dynamics, which is thought to underpin rapid integration of recent sensory information <cit.>. This spatial patterning of receptive windows suggests that the brain—unlike naively constructed RNNs—distributes its computational expressivity systematically across the cortex, and research suggests that this may be critical for functional integration <cit.>. The above data suggest that RNNs too may benefit from spatially varying periods of temporal integration. Specifically, the stability of RNNs is typically only considered globally across the entire system, as the connectivity of many RNNs are initialized randomly. If the RNN is linear (i.e., if the activation function f is the identity matrix), then any ensuing dynamic instabilities are localized to linear subspaces, or modes, of neural activity <cit.>. However, if the RNN is nonlinear (e.g., f = tanh), then instabilities bleed into other modes, making it difficult for the RNN to form a clean segregation of time-scales. Hence, varying periods of temporal integration could be achieved by specifying spatially varying penalties on neuronal timescales into the RNN cost functions. Such penalties may give rise to spatially segregated modules responsible for processing inputs at different time-scales.§.§ Learning by suppressing activity While expressivity hinges on a careful balance of stable dynamics, biological neural networks are constrained by energy; more active neurons require more metabolic energy, which is a limited resource. As such, while artificial networks can maximize their expressivity through unconstrained backpropagation, the brain's capacity to learn is restricted by resource constraints; also, the extent to which the brain performs backpropagation remains unclear <cit.>, which has motivated the machine learning community to consider more biologically-inspired optimization approaches. Hence, penalizing activity in RNNs should intuitively penalize computational capability through reduced expressivity. However, recent work has instead demonstrated important computational benefits of minimizing energy usage during training <cit.>. For example, Ali et al. <cit.> trained an RNN to predict sequences of handwritten digits, and examined how different optimization functions impacted model architecture and behavior. Specifically, Ali et al. <cit.> did not train their RNN to minimize prediction error through backpropagation. Instead, they trained their model to minimize absolute levels of neural activity prior to passing that activity through neurons' activation functions (here, ReLU). Such preactivation minimization is akin to selectively minimizing neurons' presynaptic inputs in biological networks. Critically, executing this cost function required no information about task performance, and instead simply limited the RNN's resources in a biologically plausible way. Alongside good task performance, Ali et al. <cit.> observed dynamics in their RNN indicative of predictive coding. 
Predictive coding describes the hypothesis that the brain stores and updates expectations about its environment which it compares with incoming sensory evidence for those expectations <cit.>. Specifically, Ali et al. <cit.> observed activity patterns in their RNN suggestive of (i) selective self-inhibition in neurons receiving visual stimuli and (ii) prediction of future inputs in neurons not receiving visual stimuli. These results accord with hierarchically-organized predictive coding coupled to the S-A axis, wherein association cortices store predictions and via their distributed connectivity modulate activity in sensorimotor cortices <cit.>. Taken together, the findings of Ali et al. <cit.> indicate that while limiting the neurons' activation might intuitively reduce their expressivity—thereby limiting their computational capability in RNNs—it can also serve as a unique mechanism for distributed learning that is more biophysically realistic than backpropagation. §.§ Dynamically Tuning Expressivity via Neuromodulation The preceding sections discussed expressivity as an emergent property of a trained network that can be modified by placing certain constraints on RNN training. This static expressivity has been shown to be effective at performing a wide range of tasks, and is the basis of the success of reservoir computing <cit.>. Unlike in an RNN, the internal connectivity of the reservoir computer (RC) is not trained. Instead, only the output is trained, typically as a weighted sum of RC states<cit.>. Hence, RCs rely completely on the preexisting expressivity of their internal dynamics to generate a sufficiently expressive basis representation of their inputs. Because RCs can be trained without knowledge or modification of the internal system, a wide variety of physical systems have been explored as efficient RCs <cit.>, including photonics <cit.>, electrical circuits <cit.>, hydrodynamic systems <cit.>, and the brain <cit.>. However, expressivity in biological networks is not static, even in the presence of fixed weights. Instead it can be modulated dynamically over short time-scales and–––similar to regions' temporal receptive windows–––this too is spatially patterned. Previously, we discussed how the S-A axis tracks functional specialization and integration across the cortical mantle; cortical brain systems located at the bottom of the S-A axis are responsible for processing sensory/motor information while systems at the top of the S-A axis are involved in processing higher-order cognition <cit.>, and the brain's connectivity allows for the hierarchical flow of information across these systems <cit.>. However, the functional roles of these different brain systems are not static. Instead the brain utilizes a complex array of neuromodulatory systems to actively reconfigure the brain's dynamic repertoire <cit.>. In turn, this neuromodulation endows a relatively static network architecture (i.e., structural connectivity) with an increased capacity for functional flexibility. Reviewing all of the brain's neuromodulatory mechanisms, and their effects on neural dynamics, is beyond the scope of this piece (see <cit.> for reviews), as these mechanisms comprise myriad cortico-cortical, cortico-subcortical, and subcortical-subcortical interactions. Here, we focus on a specific example that we believe is well positioned to be integrated into RNNs: the balance and modulation of cortical excitation and inhibition. 
One fundamental neuromodulatory effect is that of dynamic changes to cortical excitation and inhibition. Neuron's in the cortex receive a complex set of excitatory and inhibitory inputs, and the ratio between these inputs (E/I ratio) plays a critical role in coordinating an action potential. Following the S-A axis <cit.>, the E/I ratio varies systematically across the cortex <cit.>, leading to baseline differences in regions' dynamics and computation <cit.>. Moreover, incorporating regional variations to the E/I ratio into biophysical models has been shown to improve their fit to empirical functional data <cit.>, demonstrating that the E/I ratio shapes large-scale brain dynamics. However, unlike features of brain stucture that track the S-A axis <cit.>, regions' baseline E/I ratio can be dynamically shifted via up- or downregulating the excitatory and inhibitory neurotransmitters of postsynaptic cells <cit.>. This regulation is achieved via multiple neurochemical pathways which can be driven exogenously—for example, via pharmaceutical agents <cit.> or chemogenetics <cit.>—or endogenously, for example via the ascending noradrenergic arousal system (AAS) <cit.>. In dynamical systems, changes to neuronal excitation and inhibition are thought to engender population-level changes in neural gain (Fig. <ref>) <cit.>; the slope of a function that maps simulated neurons' inputs to their outputs. By tuning the neural gain between coupled oscillators, Shine et al. <cit.> observed that increased gain lead to greater functional integration between neural populations. Critically, functional integration is thought to be an important computational property of the brain; in the human brain, functional integration fluctuates over short time scales <cit.> and facilitates cross-talk between the brain’s many functionally-specialized communities <cit.>. Thus, on-the-fly changes to neurons' E/I ratio facilitates a diverse range of dynamic behaviors <cit.>. This diversity allows brain function to flexibly decouple from its underlying structural architecture <cit.>, which in turn supports a broader range of computations than would otherwise be possible. The above data suggests that modulation of regions' E/I ratio gives rise to state-dependent dynamics that facilitate the brain's computational expressivity. Recently, researchers have begun examining how E/I modulation might be instantiated in RNNs, with a particular focus on the aforementioned AAS. The AAS stems from the locus coeruleus, a small brainstem structure that provides diffuse noradrenergic projections spanning the cerebral cortex <cit.>. These projections modulate neuronal excitability via the neurotransmitter noradrenaline, granting the AAS the capacity to modulate the E/I ratio. Drawing on this mechanism, Wainstein et al. <cit.> trained an RNN to perform a perceptual switching task, wherein one visual stimuli (a plane) gradually morphed into another (a shark) and the RNN was tasked with reporting which stimuli it perceived at each time point. Once trained, Wainstein et al. <cit.> modified the slope of the artificial neurons' activation function (i.e., the neural gain) and examined the corresponding change in perceptual switching. The authors observed that higher gain caused perceptual switches to occur earlier than expected, while lower gain caused the opposite. Additionally, Wainstein et al. 
<cit.> modeled the energy landscape of the RNN state-space and observed that increasing neurons' gain flattened the landscape, allowing for easier state transitions (perceptual switches). Finally, the authors supported these modeling results with task-based fMRI data as well as pupillometry data, which is thought to be an indirect measure of noradrenaline-mediated arousal <cit.>. Together, the authors' results demonstrate that a system's computational function can be dynamically modulated in behaviorally meaningful ways, and that this reconfiguration may be underpinned by an internal capacity to regulate neural excitability. Critically, this dynamic reconfiguration unfolds on top of a static network architecture, wherein only neurons' activation functions are tweaked while their trained weights are preserved.

The results of Wainstein et al. <cit.> demonstrate that the effects of neuromodulation can be introduced to RNNs, modifying their functional outcomes in behaviorally meaningful ways. However, as touched on above, the brain comprises multiple neuromodulatory systems that are capable of influencing regions' excitation and inhibition, each of which subserves different functional goals <cit.> and exhibits unique spatial patterning of its associated neurotransmitters and genes <cit.>. Future work examining how each of these neurotransmitter maps affects RNN behavior, across a diverse range of tasks, will be important to characterize how different neuromodulatory mechanisms influence expressivity. Indeed, other fields of dynamical systems (e.g., linear systems) have already begun pursuing these goals <cit.>.

§ COMPUTING WITH THE LATENT SPACES OF RNNS VIA CONSTRAINED CONNECTIVITY

While expressivity tells us what information about the input is contained in a specific state, it does not tell us about the computational meaning behind that state. Specifically, although Eq. <ref> provides us with a map of how any input series u_τ is expressed as a specific neural state x_t+1, the meaning of that state depends on the context of the problem being solved. For example, while the dynamics of the transistors in a microprocessor can be known and simulated, the computational meaning of the transistor state depends on its internal, or latent, representation <cit.>. As an illustration of latent representation, consider one of the fundamental memory elements of computers, the set-reset latch (SR-latch) <cit.>, which simply remembers which of two inputs was pulsed most recently. A single nonlinear neuron with two inputs can be designed to mimic this behavior (Fig. <ref>A), where its state remains high if input u_1 was last pulsed, and remains low if input u_2 was last pulsed. Here, the state of the neuron directly serves as the output of the latch. Alternatively, this latch functionality can be defined in a distributed manner across a system of multiple neurons, where the high state is represented as some pattern of activity r^*, the low state is represented as another pattern of activity r^†, and the input pulses transition the RNN state between these two (Fig. <ref>B). Here, no single neuron is responsible for the latch dynamics. Rather, these latent-space latch dynamics depend on the connectivity between neurons, as well as how that connectivity was formed by training. In this section, we discuss how RNNs represent and manipulate information in their latent space, and the consequence of biological constraints on these latent representations.
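Before proceeding, the single-neuron latch just described can be made concrete with a short sketch; the gain, threshold, and pulse timings below are illustrative choices of ours, not parameters from the cited literature.

import numpy as np

def step(x, u1, u2, gain=10.0, w_self=1.0, theta=0.5):
    # Self-excitation makes the neuron bistable; inputs push it between states
    return 1.0 / (1.0 + np.exp(-gain * (w_self * x + u1 - u2 - theta)))

T = 60
u1 = np.zeros(T); u2 = np.zeros(T)
u1[10] = 1.0      # "set" pulse
u2[40] = 1.0      # "reset" pulse

x = 0.0
trace = []
for t in range(T):
    x = step(x, u1[t], u2[t])
    trace.append(x)

print("after set pulse   (t=20):", round(trace[20], 3))   # stays near 1
print("after reset pulse (t=50):", round(trace[50], 3))   # stays near 0

A pulse on u_1 drives the state to the high fixed point, a pulse on u_2 drives it to the low fixed point, and the self-excitation holds either state in between, so the neuron's state is itself the stored bit.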
We then draw on recent advances from the field of neurodevelopment to put forth new directions for studying biologically-constrained RNNs.

§.§ Sparsity and attractor stability

Neural networks harness the power of internal, or latent, representations for computational tasks such as path integration <cit.>, tracking <cit.>, and spatial working memory <cit.>. In RNNs and dynamical systems, these latent representations are often referred to as attractors: sets of points, 𝒮 = {s_i}, to which the dynamics evolve after a relatively long period of time. An early example of latent representations is associative memory in Hopfield networks <cit.>, wherein a specific set of neural activity patterns, 𝒮 = {s_i}, is stored as memories that could represent information such as an image. Specifically, the patterns in 𝒮 are stored as fixed-point attractors <cit.> such that after stimulating neurons close to a specific pattern, x_0 = s_i + ϵ, the neural states will evolve toward a stored memory, x_t→∞≈s_i. In general, a fixed-point is a state x^* such that

x^* = f(x^*).

The computational power of this Hopfield network is in using inputs to retrieve pre-defined information (i.e., memories) stored in attractors, and substantial work has gone into improving their computational capability <cit.> and biophysical realism <cit.>. Hence, fixed-point attractors are latent dynamical properties that can be harnessed for computation. In addition to discrete memory states, RNNs can make use of the geometry of their attractors to form representations and make decisions. For example, continuous-attractor neural networks (CANNs) extend the concept of an attracting point to higher-dimensional manifolds, thereby forming attracting curves and surfaces such that the geometric position along these manifolds holds latent computational meaning. For instance, the geometric trajectory of the neural network state along these manifolds can reflect a path traversed in real physical space by an agent <cit.>, the continuous tracking of a moving stimulus <cit.>, and the recall of spatial location in the prefrontal cortex <cit.>. While the connectivity and dynamics of CANNs are precisely engineered to preserve translation-invariance along their structure, this continuum of attractors also emerges in trained RNN models <cit.>, and even in models of the prefrontal cortex trained to integrate information given different contexts <cit.>. Hence, rather than the activity of one or a collection of specific neurons <cit.>, it is the geometry of the attractor manifold that forms the internal representation of information in the RNN, and the RNN integrates external information by moving its representation along that manifold <cit.>.

Formation of attractors occurs as a consequence of a loss of energy in the system, which in turn results in the stabilization of dynamics <cit.>. This stabilization can be characterized using Lyapunov functions—energy-like quantities that monotonically decrease or dissipate throughout the dynamics <cit.>—and Lyapunov exponents—the rate of convergence towards an attracting manifold <cit.>. However, what is less clear is how connectivity can develop to improve the stabilization of these attractors. Intuitively, this energy dissipation usually takes the form of a loss in the neural activity (e.g., the "leaky" component of the leaky integrator in Eq. <ref>), whereas the parameters that can be learned in the RNN are the connectivity.
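As a concrete handle on these stability measures, the largest Lyapunov exponent of a simple discrete-time RNN can be estimated numerically by tracking how quickly a small perturbation grows or shrinks along a trajectory. The sketch below uses the standard two-trajectory renormalization approach; the network size, gain, and step count are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
N = 100
g = 0.8                                   # gain; larger values push toward chaos
W = g * rng.normal(0, 1.0 / np.sqrt(N), (N, N))

f = lambda x: np.tanh(W @ x)              # simple discrete-time RNN update

x = rng.normal(0, 1, N)
y = x + 1e-8 * rng.normal(0, 1, N)        # nearby "shadow" trajectory
eps0 = np.linalg.norm(y - x)

log_growth = 0.0
steps = 2000
for t in range(steps):
    x, y = f(x), f(y)
    d = np.linalg.norm(y - x)
    log_growth += np.log(d / eps0)
    y = x + (eps0 / d) * (y - x)          # renormalize so the perturbation stays small

print("largest Lyapunov exponent estimate:", log_growth / steps)

A negative estimate indicates contraction toward an attractor; raising the gain g above roughly one pushes this random network into a chaotic regime in which the estimate turns positive.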
Thus, the question becomes: how do we modify the RNN connectivity to achieve greater attractor stability? A biologically-inspired optimization process that has proven useful for stabilizing RNN dynamics is sparseness. In order to minimize energy expenditure <cit.>, the brain substantially prunes its connectivity <cit.>, retaining only a sparse set of weights that are finely tuned to achieve its functional goals. In RNNs, inducing sparseness via weight pruning has been shown to provide several computational benefits. For example, Averbeck <cit.> trained RNNs with and without weight pruning to complete a working memory task. Compared to their unpruned counterparts, moderate amounts of pruning yielded RNNs that (i) exhibited better task performance; (ii) required fewer training epochs; (iii) had stronger connectivity weights; and (iv) were more resistant to task distractors <cit.>. Notably, regarding distractor resistance, pruned RNNs showed a smaller departure from their dynamic trajectories when they were perturbed by a distracting probe within the task. This result demonstrates that the sparse connectivity in the pruned RNNs strengthened their attractor basins (Fig. <ref>), making them more stable and resistant to undesired inputs.

§.§ Deduction: Learning Problem Structure through Iterative Algorithms

While the geometry of dynamical attractors can be used for computational purposes, attractors also arise as the solutions to complex problems. For example, iterative methods are commonly used in optimization problems such as iterative refinement <cit.>, root-finding methods <cit.>, and feasibility problems <cit.>. Critically, in these and many other examples, the solution is not learned in the typical deep learning sense (i.e., training), but rather emerges as an attractor to satisfy the conditions of an iterative algorithm. In the same manner, RNNs need not only learn the attractor structure of specific input-output relations (i.e., through training), but have the potential to encode a specific algorithm in the iteration of the neural states, such that solutions to problems (i.e., tasks) emerge as attractors. Biological neural networks demonstrate the ability to run iterations within their latent representations <cit.>. A prominent example is hippocampal replay, whereby hippocampal place cells will reactivate along the same sequence as in a prior navigation experience <cit.>, even when the subject is not actively performing a navigation task. Another prominent example is dynamical inference, whereby neural activity in the dlPFC can reliably predict the future nonlinear trajectory of a ball, and RNN models which best replicate prediction behavior are trained on the sequence of the ball's trajectory <cit.>. Hence, RNNs are not only capable of learning attractor geometries, but also of learning and simulating the sequence of the problem structure, which may enable more generalizable solutions.

RNNs can be engineered to run iterative deductions through several means. One approach involves assigning to each neuron the state variable of an algorithm, and defining complex interaction dynamics such that the RNN state will settle on the solution as a stable attractor. For example, an RNN can be designed to solve k-satisfiability problems, which seek an assignment of n Boolean variables that satisfies a set of c constraints, where each constraint places a condition on subsets of k Boolean variables (Fig. <ref>A) <cit.>. These RNNs evolve until the neural states find a solution <cit.>.
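As an illustration of how a solution can emerge as an attractor rather than from training, the sketch below integrates continuous-time dynamics of this flavor for a tiny satisfiable 3-SAT instance. It is a simplified caricature in the spirit of the cited analog solvers, not a reproduction of them: the formula, constants, and Euler integration are our own illustrative assumptions. Each neuron s_i is a soft spin for one Boolean variable, each clause contributes a violation term K_m that vanishes once the clause is satisfied, and auxiliary clause weights a_m grow while their clause remains violated.

import numpy as np

# Clause matrix: +1 for x_i, -1 for NOT x_i (one row per clause)
C = np.array([[ 1,  1, -1],
              [-1,  1,  1],
              [ 1, -1,  1],
              [-1, -1, -1]], dtype=float)
M, n = C.shape

s = np.array([-0.05, -0.05, 0.05])   # soft spins; this start violates clause 1
a = np.ones(M)                        # auxiliary clause weights
dt = 0.05

for step in range(20000):
    lit = 1.0 - C * s                                       # factor is 0 when a literal is satisfied
    K = 2.0 ** (-n) * lit.prod(axis=1)                      # clause violation terms
    K_excl = np.stack([2.0 ** (-n) * np.delete(lit, i, axis=1).prod(axis=1)
                       for i in range(n)], axis=1)          # same product, excluding variable i
    s = np.clip(s + dt * 2.0 * ((a * K)[:, None] * C * K_excl).sum(axis=0), -1, 1)
    a += dt * a * K                                         # violated clauses gain weight over time

    x = np.where(s > 0, 1.0, -1.0)
    if ((C * x).max(axis=1) == 1).all():                    # every clause has a true literal
        print(f"satisfying assignment found at step {step}:", x > 0)
        break

For this toy instance the state typically settles on a satisfying corner of the hypercube within a few thousand Euler steps.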
Surprisingly, a wide variety of different dynamics and architectures can lead to different algorithms for solving the same satisfiability problem <cit.>, as well as other difficult optimization problems such as integer linear programming feasibility <cit.> or the n-queens problem <cit.>. Hence, algorithm variables can be directly represented by individual neurons, and the algorithm rules can be directly encoded in the connectivity and update rules f of RNNs. In addition to a direct, one-to-one encoding of problem structure as an RNN, we can also design algorithms into the latent spaces of RNNs <cit.>. Rather than ascribing each neuron a specific variable in an algorithm, we can embed an algorithm into the distributed connectivity of an RNN (Fig. <ref>B). One technique for this embedding is the neural engineering framework (NEF), which enables the design of iterations of the latent-space variables through the engineering of low-rank connectivity matrices <cit.>. Extensions enable the programming of iterations in pre-existing, higher-rank connectivity, and the ability to reverse-engineer representations from conventionally trained RNNs <cit.>. Other engineered architectures such as the differentiable neural computer <cit.> and the neural Turing machine <cit.> emulate the structure of conventional computers using differentiable neural elements. Hence, RNNs have the capability to explicitly run complex algorithms in their latent spaces, which is crucial for generalizable computation; rather than learning individual solutions, we posit that training RNNs on solution sequences will enable them to learn and generalize problem-solving strategies <cit.>, even into nonlinear, out-of-sample regimes <cit.>.

While the above approaches may provide more generalizable RNN solutions than task-specific training, the added computational capabilities of engineered RNNs are accompanied by a further deviation of the corresponding connectivity from biology. Whether through the enforcing of low-rank connectivity <cit.> or the segregation of memory and processing units in a neural von Neumann architecture <cit.>, engineered neural connectivities lack many of the costs and constraints experienced by biological networks. A critical question then is how the brain formulates connectivities that permit sophisticated latent space computations while adhering to biological constraints. Prior work has demonstrated the capability of largely disordered RNNs to produce sequences that rely on recurrent connectivity <cit.>, and the importance of the sequence of learning over many learning iterations—a curriculum—for RNN performance <cit.>. Hence, rather than forming low-rank structures ab initio, the brain defines its structure and dynamics through learning and plasticity on long time scales. We explore insights into the governing principles and computational advantages of this progression in neurodevelopment.

§.§ Latent-space computational capability throughout neurodevelopment

Just as the topology of an RNN is sculpted over training epochs (Fig. <ref>A), the topology of the human connectome is sculpted throughout development (Fig. <ref>B). However, unlike the RNN, which may be trained in an unconstrained manner, neurodevelopment follows a carefully orchestrated and stereotyped program that unfolds dynamically across space and time.
Specifically, cortical neurodevelopment is thought to spatially track the aforementioned S-A axis in a temporally staged manner <cit.>, and this staging is thought to underpin the emergence of cortical regions' functional specialization and inter-connectivity. Crucially, the asynchronous nature of this developmental program is thought to underpin the sequential emergence of increasingly complex cognitive functions <cit.>, suggesting that neurodevelopment stages the brain's acquisition of lower- and higher-order computational processes. Mechanistically, this program may be underpinned by windows of heightened neural plasticity that cascade up the S-A axis <cit.>, priming specific neural circuits at specific points in time for experience-dependent neural change.

Regions in the cortex are defined in part according to their laminar structure <cit.>, with different regions exhibiting variations in the number and size of their distinct layers, as well as different distributions of cells throughout those layers. Critically, cortical variations in cytoarchitecture conform to the S-A axis <cit.>, and animal research demonstrates that this spatial patterning predicts regions' extrinsic connectivity <cit.>, including their strength <cit.>, distance <cit.>, and layer-wise projections <cit.>. In humans, structural connectivity between regions at the bottom of the S-A axis refines relatively early in development, while connectivity at the top of the S-A axis does so later in development <cit.>. Furthermore, recent work has shown that cytoarchitecture plays an important role in shaping how dynamics spread across the connectome throughout development <cit.>. Thus, the spatial patterning embedded in the S-A axis plays a key role in shaping connectome topology throughout development.

But what about RNNs? Recent work by Achterberg et al. <cit.> regularized RNNs by using the Euclidean distance between regions to constrain training. The authors found that modularity <cit.> and small-worldness <cit.>—two complex topological features that are hallmarks of the human connectome <cit.>—emerged to a greater extent in these spatially-embedded RNNs compared to standard RNNs (see also recent work by Tanner et al. <cit.> for evidence of modularity in RNNs trained without spatial constraints). Additionally, this effect coincided with achieving higher out-of-sample task performance earlier in training compared to standard RNNs (though performance eventually converged; see their Figure 2A). The results of Achterberg et al. <cit.> demonstrate that incorporating space-based inductive biases into RNN training causes them to converge on topological features observed in the human connectome. However, it remains unclear whether neurodevelopmentally-informed spatial constraints, like those embedded in the S-A axis <cit.>, show similar effects. We posit that constraining RNNs using the S-A axis may outperform Euclidean distance-based spatial embedding, as the former is rooted in evolutionary programs of connectivity formation and functional specialization <cit.>. Additionally, the spatial constraints deployed by Achterberg et al. <cit.> were static throughout training. As mentioned above, the S-A axis scaffolds connectome development in a temporally varying way <cit.>, and incorporating this dynamic information will be critical to achieving realistic brain-like topology in RNNs. One approach would be to code spatially varying periods of heightened learning into RNNs, simulating traveling waves of heightened neural plasticity <cit.>.
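As a purely illustrative sketch of what such a scheme could look like (the axis coordinates, Gaussian window, and placeholder gradient below are our own assumptions, not a published protocol), each recurrent unit can be assigned a position along a one-dimensional S-A-like axis, and a window of elevated learning rate can sweep from the sensorimotor end to the association end over the course of training.

import numpy as np

rng = np.random.default_rng(0)
N, epochs = 80, 200

axis_pos = np.linspace(0.0, 1.0, N)        # 0 = sensorimotor end, 1 = association end
W = 0.1 * rng.normal(size=(N, N))          # recurrent weights to be trained

def plasticity_mask(epoch, width=0.15):
    # Gaussian window of heightened plasticity that travels up the axis over training
    center = epoch / (epochs - 1)
    return np.exp(-0.5 * ((axis_pos - center) / width) ** 2)

base_lr = 1e-3
for epoch in range(epochs):
    # grad would come from backprop (or any local rule) on the task loss;
    # here it is a random placeholder purely to show the scheduling
    grad = rng.normal(size=(N, N))
    lr = base_lr * plasticity_mask(epoch)          # per-neuron learning rate, shape (N,)
    W -= lr[:, None] * grad                        # rows (incoming weights) of early-axis
                                                   # neurons update early, late-axis ones late

print("plasticity at epoch 0 (sensorimotor vs association ends):",
      round(plasticity_mask(0)[0], 3), round(plasticity_mask(0)[-1], 3))
print("plasticity at final epoch (sensorimotor vs association ends):",
      round(plasticity_mask(epochs - 1)[0], 3), round(plasticity_mask(epochs - 1)[-1], 3))

In this toy schedule, weights onto units at the sensorimotor end are most plastic early in training while association-end units only become highly plastic late, mimicking a staged developmental program on top of an otherwise standard training loop.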
Such an inductive bias could be achieved by including temporal cascades of weight training that flow bottom-up across the S-A axis. In addition to injecting spatially constrained inductive biases into RNNs, we can also directly assess the ability of the developing connectome to support latent-space computation. This analysis can be achieved through studying the synchronization between a given RNN and a particular latent attractor. Historically, synchronization has been shown to be crucial for computation in both biological cortical networks <cit.> and artificial RNNs <cit.>, and is deeply related to consensus dynamics <cit.>. Intuitively, synchronization between two systems implies that both systems are evolving identically. This concept can be extended to generalized synchronization, which stipulates conditions under which a response system, r_t, has synchronized in a general sense to a driving system, d_t <cit.>: rather than evolving identically, r_t = d_t <cit.>, the joint system has collapsed onto a function ϕ of the driver system alone, r_t = ϕ(d_t). Under these conditions, the response system has followed the attractor structure of the driving system. If we choose the response system to be an RNN with an empirically-derived connectivity taken at a specific point in neurodevelopment, and the driver system to be a specific latent attractor dynamical system, then we can assess whether the RNN can follow the attractor structure of the driving system (Fig. <ref>C). Of course, the fact that the RNN can synchronize to a particular latent-space attractor does not guarantee that it can maintain that attractor once the driving system is gone. In order for the RNN to internalize the attractor dynamics, theories of invertible generalized synchronization (IGS) stipulate conditions for which the attractor of the driver system can be invertibly reproduced and stabilized by the response system, and thus can be learned autonomously <cit.>. Hence, rather than modifying RNN connectivity to learn a latent space conditioned on spatial and resource constraints, IGS tests whether a given RNN connectivity that already obeys those spatial and resource constraints can stably generate a latent attractor (Fig. <ref>D). The IGS theory also indicates that the ability to internalize latent attractors from driving systems depends not only on the RNN's structure, but also on the latent attractor. Thus, given RNNs with different structures across development, we may examine IGS with respect to the driving signals they are most likely to encounter given their position along the S-A axis.

§ CONCLUSIONS

Neural systems compute using dynamics, and the dynamics of biological brains evolve atop the computational substrate of a spatially and resource constrained network. Here, we sought to jointly discuss advances in neuroscience and dynamical systems with a view to improving the computational power and interpretability of RNNs. Within the context of two computational capabilities—expressivity and latent-space computing—we highlighted several avenues for future research that we believe will advance our understanding of dynamical computation. Through these avenues, we envision biologically interpretable, computationally improved RNN models of how the brain computes. Central to this research program is the application of biophysical constraints on the computational dynamics of RNNs.
While not exhaustive, the constraints discussed herein represent the diverse influences of neurobiology on network structure, and they have been shown to influence the emergence of complex behavior in humans. Furthermore, the influence of these constraints on RNN connectivity can be studied in combination. For example, the spatial patterning of regions' baseline E/I ratio emerges throughout development <cit.>, indicating its connection to the S-A axis. Another example is to explore how to extend low-rank RNN design approaches to higher-rank connectivities that more closely match the spatial gradients of the S-A axis. Thus, these biophysical constraints provide fertile ground for future experimental, computational, and theoretical work into biologically-informed RNNs. The methodological strategies for incorporating these constraints into computational RNN modeling are vast. Here, we have discussed several approaches: using known cortical structure and function as target connectivities of additional training constraints <cit.>, using resource constraints as an alternative learning mechanism <cit.>, dynamically altering connectivity with neuromodulation for increased expressivity <cit.>, stabilizing latent attractors through pruning <cit.>, engineering and modifying low-rank latent representations and algorithms <cit.>, and probing the teachability of RNNs via synchronization <cit.>, among many others. Hence, there is no one prescription that serves as a panacea for the rich problems that lie at the intersection of computation, dynamics, and neurobiology. Instead, we must continue developing diverse and creative approaches for maximizing the computational capabilities of neurodynamical models through the use of biological and developmental constraints.

§ ACKNOWLEDGEMENTS

We gratefully acknowledge comments and conversations with Dr. Zhixin Lu, Dr. Harang Ju, Dr. Dale Zhou, Dr. Melody Lim, and Dr. Stephen Hanson. JZK was supported by the Bethe/KIC/Wilkins postdoctoral fellowship. BL was supported by the National Institute Of Mental Health of the National Institutes of Health under Award Number R00MH127293. LP was supported by the National Institute Of Mental Health of the National Institutes of Health under Award Number R00MH127296.

§ REFERENCES

ijspeert2008central authorIjspeert, A. J. titleCentral pattern generators for locomotion control in animals and robots: a review. journalNeural networks volume21, pages642–653 (year2008). seeley2007dissociable authorSeeley, W. W. et al. titleDissociable intrinsic connectivity networks for salience processing and executive control. journalJournal of Neuroscience volume27, pages2349–2356 (year2007). mante2013context authorMante, V., authorSussillo, D., authorShenoy, K. V. & authorNewsome, W. T. titleContext-dependent computation by recurrent dynamics in prefrontal cortex. journalnature volume503, pages78–84 (year2013). werbos1990backpropagation authorWerbos, P. J. titleBackpropagation through time: what it does and how to do it. journalProceedings of the IEEE volume78, pages1550–1560 (year1990). betzel_generative_2016 authorBetzel, R. F. et al. titleGenerative models of the human connectome. journalNeuroImage volume124, pages1054–1064 (year2016). oldham_development_2019 authorOldham, S. & authorFornito, A. titleThe development of brain network hubs. journalDevelopmental Cognitive Neuroscience volume36, pages100607 (year2019). oldham_modeling_2022 authorOldham, S. et al.
titleModeling spatial, developmental, physiological, and topological constraints on human brain connectivity. journalScience Advances volume8, pageseabm6127 (year2022). akarca_generative_2021 authorAkarca, D. et al. titleA generative network model of neurodevelopmental diversity in structural brain organization. journalNature Communications volume12, pages4216 (year2021). hodgkin1952quantitative authorHodgkin, A. L. & authorHuxley, A. F. titleA quantitative description of membrane current and its application to conduction and excitation in nerve. journalThe Journal of physiology volume117, pages500 (year1952). petousakis2022impact authorPetousakis, K.-E., authorApostolopoulou, A. A. & authorPoirazi, P. titleThe impact of hodgkin–huxley models on dendritic research. journalThe Journal of Physiology (year2022). wilson1972excitatory authorWilson, H. R. & authorCowan, J. D. titleExcitatory and inhibitory interactions in localized populations of model neurons. journalBiophysical journal volume12, pages1–24 (year1972). pinto1996quantitative authorPinto, D. J., authorBrumberg, J. C., authorSimons, D. J., authorErmentrout, G. B. & authorTraub, R. titleA quantitative population model of whisker barrels: re-examining the wilson-cowan equations. journalJournal of computational neuroscience volume3, pages247–264 (year1996). sadeghi2020dynamic authorSadeghi, S., authorMier, D., authorGerchen, M. F., authorSchmidt, S. N. & authorHass, J. titleDynamic causal modeling for fmri with wilson-cowan-based neuronal equations. journalFrontiers in Neuroscience volume14, pages593867 (year2020). parkes_using_2023 authorParkes, L. et al. titleUsing network control theory to study the dynamics of the structural connectome. typepreprint, institutionNeuroscience (year2023). lynn_physics_2019 authorLynn, C. W. & authorBassett, D. S. titleThe physics of brain network structure, function and control. journalNature Reviews Physics volume1, pages318–332 (year2019). seguin_brain_2023 authorSeguin, C., authorSporns, O. & authorZalesky, A. titleBrain network communication: concepts, models and applications. journalNature Reviews Neuroscience (year2023). srivastava_models_2020 authorSrivastava, P. et al. titleModels of communication and control for brain networks: distinctions, convergence, and future outlook. journalNetwork Neuroscience volume4, pages1122–1159 (year2020). breakspear_dynamic_2017 authorBreakspear, M. titleDynamic models of large-scale brain activity. journalNature Neuroscience volume20, pages340–352 (year2017). roberts_metastable_2019 authorRoberts, J. A. et al. titleMetastable brain waves. journalNature Communications volume10, pages1056 (year2019). fitzhugh1961impulses authorFitzHugh, R. titleImpulses and physiological states in theoretical models of nerve membrane. journalBiophysical journal volume1, pages445–466 (year1961). mcculloch1943logical authorMcCulloch, W. S. & authorPitts, W. titleA logical calculus of the ideas immanent in nervous activity. journalThe bulletin of mathematical biophysics volume5, pages115–133 (year1943). chung2021turing authorChung, S. & authorSiegelmann, H. titleTuring completeness of bounded-precision recurrent neural networks. journalAdvances in Neural Information Processing Systems volume34, pages28431–28441 (year2021). liu2020dstp authorLiu, Y., authorGong, C., authorYang, L. & authorChen, Y. titleDstp-rnn: A dual-stage two-phase attention-based recurrent neural network for long-term and multivariate time series prediction. 
journalExpert Systems with Applications volume143, pages113082 (year2020). lu2020supervised authorLu, Z., authorKim, J. Z. & authorBassett, D. S. titleSupervised chaotic source separation by a tank of water. journalChaos: An Interdisciplinary Journal of Nonlinear Science volume30 (year2020). wang_evolving_2021 authorWang, P. Y., authorSun, Y., authorAxel, R., authorAbbott, L. & authorYang, G. R. titleEvolving the olfactory system with machine learning. journalNeuron volume109, pages3879–3892.e5 (year2021). yang_task_2019 authorYang, G. R., authorJoglekar, M. R., authorSong, H. F., authorNewsome, W. T. & authorWang, X.-J. titleTask representations in neural networks trained to perform many cognitive tasks. journalNature Neuroscience volume22, pages297–306 (year2019). roberts_contribution_2016 authorRoberts, J. A. et al. titleThe contribution of geometry to the human connectome. journalNeuroImage volume124, pages379–393 (year2016). bullmore_economy_2012 authorBullmore, E. & authorSporns, O. titleThe economy of brain network organization. journalNature Reviews Neuroscience volume13, pages336–349 (year2012). ramon_y_cajal_histology_1995 authorRamón y Cajal, S. titleHistology of the nervous system of man and vertebrates. No. numberno. 6 in seriesHistory of neuroscience (publisherOxford University Press, addressNew York, year1995). dupont2019augmented authorDupont, E., authorDoucet, A. & authorTeh, Y. W. titleAugmented neural odes. journalAdvances in neural information processing systems volume32 (year2019). lukovsevivcius2009reservoir authorLukoševičius, M. & authorJaeger, H. titleReservoir computing approaches to recurrent neural network training. journalComputer science review volume3, pages127–149 (year2009). ali2022predictive authorAli, A., authorAhmad, N., authorde Groot, E., authorvan Gerven, M. A. J. & authorKietzmann, T. C. titlePredictive coding is a consequence of energy efficiency in recurrent neural networks. journalPatterns volume3 (year2022). felleisen1991expressive authorFelleisen, M. titleOn the expressive power of programming languages. journalScience of computer programming volume17, pages35–75 (year1991). sipser1996introduction authorSipser, M. titleIntroduction to the theory of computation. journalACM Sigact News volume27, pages27–29 (year1996). hornik1989multilayer authorHornik, K., authorStinchcombe, M. & authorWhite, H. titleMultilayer feedforward networks are universal approximators. journalNeural networks volume2, pages359–366 (year1989). cybenko1989approximation authorCybenko, G. titleApproximation by superpositions of a sigmoidal function. journalMathematics of control, signals and systems volume2, pages303–314 (year1989). schafer2006recurrent authorSchäfer, A. M. & authorZimmermann, H. G. titleRecurrent neural networks are universal approximators. In booktitleArtificial Neural Networks–ICANN 2006: 16th International Conference, Athens, Greece, September 10-14, 2006. Proceedings, Part I 16, pages632–640 (organizationSpringer, year2006). poole2016exponential authorPoole, B., authorLahiri, S., authorRaghu, M., authorSohl-Dickstein, J. & authorGanguli, S. titleExponential expressivity in deep neural networks through transient chaos. journalAdvances in neural information processing systems volume29 (year2016). raghu2017expressive authorRaghu, M., authorPoole, B., authorKleinberg, J., authorGanguli, S. & authorSohl-Dickstein, J. titleOn the expressive power of deep neural networks. 
In booktitleinternational conference on machine learning, pages2847–2854 (organizationPMLR, year2017). bertschinger2004real authorBertschinger, N. & authorNatschläger, T. titleReal-time computation at the edge of chaos in recurrent neural networks. journalNeural computation volume16, pages1413–1436 (year2004). sompolinsky1988chaos authorSompolinsky, H., authorCrisanti, A. & authorSommers, H.-J. titleChaos in random neural networks. journalPhysical review letters volume61, pages259 (year1988). kim2023neural authorKim, J. Z. & authorBassett, D. S. titleA neural machine code and programming framework for the reservoir computer. journalNature Machine Intelligence pages1–9 (year2023). rajan2006eigenvalue authorRajan, K. & authorAbbott, L. F. titleEigenvalue spectra of random matrices for neural networks. journalPhysical review letters volume97, pages188104 (year2006). langton1990computation authorLangton, C. G. titleComputation at the edge of chaos: Phase transitions and emergent computation. journalPhysica D: nonlinear phenomena volume42, pages12–37 (year1990). sussillo2009generating authorSussillo, D. & authorAbbott, L. F. titleGenerating coherent patterns of activity from chaotic neural networks. journalNeuron volume63, pages544–557 (year2009). munoz2018colloquium authorMunoz, M. A. titleColloquium: Criticality and dynamical scaling in living systems. journalReviews of Modern Physics volume90, pages031001 (year2018). hochstetter2021avalanches authorHochstetter, J. et al. titleAvalanches and edge-of-chaos learning in neuromorphic nanowire networks. journalNature Communications volume12, pages4008 (year2021). ju2020network authorJu, H., authorKim, J. Z., authorBeggs, J. M. & authorBassett, D. S. titleNetwork structure of cascading neural systems predicts stimulus propagation and recovery. journalJournal of Neural Engineering volume17, pages056045 (year2020). wolff_intrinsic_2022 authorWolff, A. et al. titleIntrinsic neural timescales: temporal integration and segregation. journalTrends in Cognitive Sciences pagesS1364661321002928 (year2022). mesulam_representation_2008 authorMesulam, M. titleRepresentation, inference, and transcendent encoding in neurocognitive networks of the human brain. journalAnnals of Neurology volume64, pages367–378 (year2008). sydnor_neurodevelopment_2021 authorSydnor, V. J. et al. titleNeurodevelopment of the association cortices: Patterns, mechanisms, and implications for psychopathology. journalNeuron volume109, pages2820–2846 (year2021). margulies_situating_2016 authorMargulies, D. S. et al. titleSituating the default-mode network along a principal gradient of macroscale cortical organization. journalProceedings of the National Academy of Sciences volume113, pages12574–12579 (year2016). xu_cross-species_2020 authorXu, T. et al. titleCross-species functional alignment reveals evolutionary hierarchy within the connectome. journalNeuroImage volume223, pages117346 (year2020). hasson_hierarchy_2008 authorHasson, U., authorYang, E., authorVallines, I., authorHeeger, D. J. & authorRubin, N. titleA Hierarchy of Temporal Receptive Windows in Human Cortex. journalJournal of Neuroscience volume28, pages2539–2550 (year2008). gao_neuronal_2020 authorGao, R., authorvan den Brink, R. L., authorPfeffer, T. & authorVoytek, B. titleNeuronal timescales are functionally dynamic and shaped by cortical microarchitecture. journaleLife volume9, pagese61277 (year2020). sydnor_intrinsic_2023 authorSydnor, V. J. et al. 
titleIntrinsic activity development unfolds along a sensorimotor–association cortical axis in youth. journalNature Neuroscience volume26, pages638–649 (year2023). hespanha2018linear authorHespanha, J. P. titleLinear systems theory (publisherPrinceton university press, year2018). marblestone_toward_2016 authorMarblestone, A. H., authorWayne, G. & authorKording, K. P. titleToward an Integration of Deep Learning and Neuroscience. journalFrontiers in Computational Neuroscience volume10 (year2016). ali_predictive_2022 authorAli, A., authorAhmad, N., authorDe Groot, E., authorJohannes Van Gerven, M. A. & authorKietzmann, T. C. titlePredictive coding is a consequence of energy efficiency in recurrent neural networks. journalPatterns volume3, pages100639 (year2022). millidge_predictive_2021 authorMillidge, B., authorSeth, A. & authorBuckley, C. L. titlePredictive Coding: a Theoretical and Experimental Review(year2021). notePublisher: arXiv Version Number: 4. bastos_canonical_2012 authorBastos, A. M. et al. titleCanonical Microcircuits for Predictive Coding. journalNeuron volume76, pages695–711 (year2012). singer_recurrent_2021 authorSinger, W. titleRecurrent dynamics in the cerebral cortex: Integration of sensory evidence with stored knowledge. journalProceedings of the National Academy of Sciences volume118, pagese2101043118 (year2021). friston_computational_2022 authorFriston, K. titleComputational psychiatry: from synapses to sentience. journalMolecular Psychiatry (year2022). jaeger2002tutorial authorJaeger, H. titleTutorial on training recurrent neural networks, covering bppt, rtrl, ekf and the" echo state network" approach (year2002). tanaka2019recent authorTanaka, G. et al. titleRecent advances in physical reservoir computing: A review. journalNeural Networks volume115, pages100–123 (year2019). van2017advances authorVan der Sande, G., authorBrunner, D. & authorSoriano, M. C. titleAdvances in photonic reservoir computing. journalNanophotonics volume6, pages561–576 (year2017). soriano2014delay authorSoriano, M. C. et al. titleDelay-based reservoir computing: noise effects in a combined analog and digital implementation. journalIEEE transactions on neural networks and learning systems volume26, pages388–393 (year2014). suarez_learning_2021 authorSuárez, L. E., authorRichards, B. A., authorLajoie, G. & authorMisic, B. titleLearning function from structure in neuromorphic networks. journalNature Machine Intelligence (year2021). parkes_asymmetric_2022 authorParkes, L. et al. titleAsymmetric signaling across the hierarchy of cytoarchitecture within the human connectome. journalScience Advances volume8, pageseadd2185 (year2022). pines_development_2023 authorPines, A. et al. titleDevelopment of top-down cortical propagations in youth. journalNeuron pagesS0896627323000387 (year2023). baum_modular_2017 authorBaum, G. L. et al. titleModular Segregation of Structural Brain Networks Supports the Development of Executive Function in Youth. journalCurrent Biology volume27, pages1561–1572.e8 (year2017). shine_computational_2021 authorShine, J. M. et al. titleComputational models link cellular mechanisms of neuromodulation to large-scale neural dynamics. journalNature Neuroscience volume24, pages765–776 (year2021). marder_cellular_2002 authorMarder, E. & authorThirumalai, V. titleCellular, synaptic and network effects of neuromodulation. journalNeural Networks volume15, pages479–493 (year2002). bucher_beyond_2011 authorBucher, D. & authorGoaillard, J.-M. 
titleBeyond faithful conduction: Short-term dynamics, neuromodulation, and long-term regulation of spike propagation in the axon. journalProgress in Neurobiology volume94, pages307–346 (year2011). mccormick_neuromodulation_2020 authorMcCormick, D. A., authorNestvogel, D. B. & authorHe, B. J. titleNeuromodulation of Brain State and Behavior. journalAnnual Review of Neuroscience volume43, pages391–415 (year2020). kim_brain-wide_2017 authorKim, Y. et al. titleBrain-wide Maps Reveal Stereotyped Cell-Type-Based Cortical Architecture and Subcortical Sexual Dimorphism. journalCell volume171, pages456–469.e22 (year2017). burt_hierarchy_2018 authorBurt, J. B. et al. titleHierarchy of transcriptomic specialization across human cortex captured by structural neuroimaging topography. journalNature Neuroscience volume21, pages1251–1259 (year2018). anderson_transcriptional_2020 authorAnderson, K. M. et al. titleTranscriptional and imaging-genetic association of cortical interneurons, brain function, and schizophrenia risk. journalNature Communications volume11, pages2889 (year2020). deco_how_2014 authorDeco, G. et al. titleHow Local Excitation-Inhibition Ratio Impacts the Whole Brain Dynamics. journalJournal of Neuroscience volume34, pages7886–7898 (year2014). deco_dynamical_2021 authorDeco, G. et al. titleDynamical consequences of regional heterogeneity in the brain’s transcriptional landscape. journalScience Advances volume7, pageseabf4752 (year2021). gao_inferring_2017 authorGao, R., authorPeterson, E. J. & authorVoytek, B. titleInferring synaptic excitation/inhibition balance from field potentials. journalNeuroImage volume158, pages70–78 (year2017). zhang_ei_2023 authorZhang, S. et al. titleIn-vivo whole-cortex estimation of excitation-inhibition ratio indexes cortical maturation and cognitive ability in youth. typepreprint, institutionbioRxiv (year2023). larsen_developmental_2022 authorLarsen, B. et al. titleA developmental reduction of the excitation:inhibition ratio in association cortex during adolescence. journalScience Advances volume8, pageseabj8750 (year2022). rocchi_increased_2022 authorRocchi, F. et al. titleIncreased fMRI connectivity upon chemogenetic inhibition of the mouse prefrontal cortex. journalNature Communications volume13, pages1056 (year2022). markicevic_cortical_2020 authorMarkicevic, M. et al. titleCortical Excitation:Inhibition Imbalance Causes Abnormal Brain Network Dynamics as Observed in Neurodevelopmental Disorders. journalCerebral Cortex volume30, pages4922–4937 (year2020). shine_modulation_2018 authorShine, J. M., authorAburn, M. J., authorBreakspear, M. & authorPoldrack, R. A. titleThe modulation of neural gain facilitates a transition between functional segregation and integration in the brain. journaleLife volume7, pagese31130 (year2018). shine_human_2019 authorShine, J. M. et al. titleHuman cognition involves the dynamic integration of neural activity and neuromodulatory systems. journalNature Neuroscience volume22, pages289–296 (year2019). bertolero_modular_2015 authorBertolero, M. A., authorYeo, B. T. T. & authorD’Esposito, M. titleThe modular and integrative functional architecture of the human brain. journalProceedings of the National Academy of Sciences volume112 (year2015). vazquez-rodriguez_gradients_2019 authorVázquez-Rodríguez, B. et al. titleGradients of structure–function tethering across neocortex. journalProceedings of the National Academy of Sciences volume116, pages21219–21227 (year2019). preti_decoupling_2019 authorPreti, M. G. 
& authorVan De Ville, D. titleDecoupling of brain function from structure reveals regional behavioral specialization in humans. journalNature Communications volume10, pages4747 (year2019). misic_network-level_2016 authorMišić, B. et al. titleNetwork-Level Structure-Function Relationships in Human Neocortex. journalCerebral Cortex volume26, pages3285–3296 (year2016). zamani_esfahlani_local_2022 authorZamani Esfahlani, F., authorFaskowitz, J., authorSlack, J., authorMišić, B. & authorBetzel, R. F. titleLocal structure-function relationships in human brain networks across the lifespan. journalNature Communications volume13, pages2053 (year2022). samuels_functional_2008 authorSamuels, E. & authorSzabadi, E. titleFunctional Neuroanatomy of the Noradrenergic Locus Coeruleus: Its Roles in the Regulation of Arousal and Autonomic Function Part II: Physiological and Pharmacological Manipulations and Pathological Alterations of Locus Coeruleus Activity in Humans. journalCurrent Neuropharmacology volume6, pages254–285 (year2008). wainstein_gain_2023 authorWainstein, G. et al. titleGain neuromodulation mediates perceptual switches: evidence from pupillometry, fMRI, and RNN Modelling. typepreprint, institutionResearch Square (year2023). joshi_pupil_2020 authorJoshi, S. & authorGold, J. I. titlePupil Size as a Window on Neural Substrates of Cognition. journalTrends in Cognitive Sciences volume24, pages466–480 (year2020). singleton_receptor-informed_2022 authorSingleton, S. P. et al. titleReceptor-informed network control theory links LSD and psilocybin to a flattening of the brain’s control energy landscape. journalNature Communications volume13, pages5812 (year2022). luppi_transitions_2023 authorLuppi, A. I. et al. titleTransitions between cognitive topographies: contributions of network structure, neuromodulation, and disease. typepreprint, institutionbioRxiv (year2023). aitken2023neural authorAitken, K. & authorMihalas, S. titleNeural population dynamics of computing with synaptic modulations. journalElife volume12, pagese83035 (year2023). jonas2017could authorJonas, E. & authorKording, K. P. titleCould a neuroscientist understand a microprocessor? journalPLoS computational biology volume13, pagese1005268 (year2017). torii2016asic authorTorii, N. et al. titleAsic implementation of random number generators using sr latches and its evaluation. journalEURASIP Journal on Information Security volume2016, pages1–12 (year2016). samsonovich1997path authorSamsonovich, A. & authorMcNaughton, B. L. titlePath integration and cognitive mapping in a continuous attractor neural network model. journalJournal of Neuroscience volume17, pages5900–5920 (year1997). fung2010moving authorFung, C. A., authorWong, K. M. & authorWu, S. titleA moving bump in a continuous manifold: a comprehensive study of the tracking dynamics of continuous attractor neural networks. journalNeural Computation volume22, pages752–792 (year2010). wimmer2014bump authorWimmer, K., authorNykamp, D. Q., authorConstantinidis, C. & authorCompte, A. titleBump attractor dynamics in prefrontal cortex explains behavioral precision in spatial working memory. journalNature neuroscience volume17, pages431–439 (year2014). hopfield1982neural authorHopfield, J. J. titleNeural networks and physical systems with emergent collective computational abilities. journalProceedings of the national academy of sciences volume79, pages2554–2558 (year1982). strogatz2018nonlinear authorStrogatz, S. H. 
titleNonlinear dynamics and chaos with student solutions manual: With applications to physics, biology, chemistry, and engineering (publisherCRC press, year2018). ramsauer2020hopfield authorRamsauer, H. et al. titleHopfield networks is all you need. journalarXiv preprint arXiv:2008.02217 (year2020). storkey1997increasing authorStorkey, A. titleIncreasing the capacity of a hopfield network without sacrificing functionality. In booktitleArtificial Neural Networks—ICANN'97: 7th International Conference Lausanne, Switzerland, October 8–10, 1997 Proceeedings 7, pages451–456 (organizationSpringer, year1997). smith2022learning authorSmith, L. M., authorKim, J. Z., authorLu, Z. & authorBassett, D. S. titleLearning continuous chaotic attractors with a reservoir computer. journalChaos: An Interdisciplinary Journal of Nonlinear Science volume32 (year2022). nichols2002middle authorNichols, M. J. & authorNewsome, W. T. titleMiddle temporal visual area microstimulation influences veridical judgments of motion direction. journalJournal of Neuroscience volume22, pages9530–9540 (year2002). jazayeri2021interpreting authorJazayeri, M. & authorOstojic, S. titleInterpreting neural computations by examining intrinsic and embedding dimensionality of neural activity. journalCurrent opinion in neurobiology volume70, pages113–120 (year2021). howarth_updated_2012 authorHowarth, C., authorGleeson, P. & authorAttwell, D. titleUpdated Energy Budgets for Neural Computation in the Neocortex and Cerebellum. journalJournal of Cerebral Blood Flow & Metabolism volume32, pages1222–1232 (year2012). giesl2015review authorGiesl, P. & authorHafstein, S. titleReview on computational methods for lyapunov functions. journalDiscrete and Continuous Dynamical Systems-B volume20, pages2291–2331 (year2015). bellman1962vector authorBellman, R. titleVector lyapunov functions. journalJournal of the Society for Industrial and Applied Mathematics, Series A: Control volume1, pages32–34 (year1962). wolf1985determining authorWolf, A., authorSwift, J. B., authorSwinney, H. L. & authorVastano, J. A. titleDetermining lyapunov exponents from a time series. journalPhysica D: nonlinear phenomena volume16, pages285–317 (year1985). petanjek_extraordinary_2011 authorPetanjek, Z. et al. titleExtraordinary neoteny of synaptic spines in the human prefrontal cortex. journalProceedings of the National Academy of Sciences volume108, pages13281–13286 (year2011). averbeck_pruning_2022 authorAverbeck, B. B. titlePruning recurrent neural networks replicates adolescent changes in working memory and reinforcement learning. journalProceedings of the National Academy of Sciences volume119, pagese2121331119 (year2022). moler1967iterative authorMoler, C. B. titleIterative refinement in floating point. journalJournal of the ACM (JACM) volume14, pages316–321 (year1967). ypma1995historical authorYpma, T. J. titleHistorical development of the newton–raphson method. journalSIAM review volume37, pages531–551 (year1995). aragon2020douglas authorAragón Artacho, F. J., authorCampoy, R. & authorTam, M. K. titleThe douglas–rachford algorithm for convex and nonconvex feasibility problems. journalMathematical Methods of Operations Research volume91, pages201–240 (year2020). flesch2022orthogonal authorFlesch, T., authorJuechems, K., authorDumbalska, T., authorSaxe, A. & authorSummerfield, C. titleOrthogonal representations for robust context-dependent task performance in brains and neural networks. journalNeuron volume110, pages1258–1270 (year2022). 
gillespie2021hippocampal authorGillespie, A. K. et al. titleHippocampal replay reflects specific past experiences rather than a plan for subsequent choice. journalNeuron volume109, pages3149–3163 (year2021). rajalingham2022dynamic authorRajalingham, R., authorSohn, H. & authorJazayeri, M. titleDynamic tracking of objects in the macaque dorsomedial frontal cortex. journalbioRxiv pages2022–06 (year2022). rajalingham2022recurrent authorRajalingham, R., authorPiccato, A. & authorJazayeri, M. titleRecurrent neural networks with explicit representation of dynamic latent variables can mimic behavioral patterns in a physical inference task. journalNature Communications volume13, pages5865 (year2022). ercsey2011optimization authorErcsey-Ravasz, M. & authorToroczkai, Z. titleOptimization hardness as transient chaos in an analog approach to constraint satisfaction. journalNature Physics volume7, pages966–970 (year2011). molnar2012continuous authorMolnar, B., authorToroczkai, Z. & authorErcsey-Ravasz, M. titleContinuous-time neural networks without local traps for solving boolean satisfiability. In booktitle2012 13th International Workshop on Cellular Nanoscale Networks and their Applications, pages1–6 (organizationIEEE, year2012). yamashita2019bounded authorYamashita, H., authorSuzuki, H., authorToroczkai, Z. & authorAihara, K. titleBounded continuous-time satisfiability solver. In booktitleInternational Symposium on Nonlinear Theory and its Applications (NOLTA2019) (year2019). li2019continuous authorLi, C. & authorMacLennan, B. J. titleContinuous-time systems for solving 0-1 integer linear programming feasibility problems. journalarXiv preprint arXiv:1905.04612 (year2019). ding2010high authorDing, Y., authorLi, Y., authorXiao, M., authorWang, Q. & authorLi, D. titleA high order neural network to solve n-queens problem. In booktitleThe 2010 International Joint Conference on Neural Networks (IJCNN), pages1–6 (organizationIEEE, year2010). mastrogiuseppe2018linking authorMastrogiuseppe, F. & authorOstojic, S. titleLinking connectivity, dynamics, and computations in low-rank recurrent neural networks. journalNeuron volume99, pages609–623 (year2018). eliasmith2003neural authorEliasmith, C. & authorAnderson, C. H. titleNeural engineering: Computation, representation, and dynamics in neurobiological systems (publisherMIT press, year2003). dewolf2020nengo authorDeWolf, T., authorJaworski, P. & authorEliasmith, C. titleNengo and low-power ai hardware for robust, embedded neurorobotics. journalFrontiers in Neurorobotics volume14, pages568359 (year2020). graves2016hybrid authorGraves, A. et al. titleHybrid computing using a neural network with dynamic external memory. journalNature volume538, pages471–476 (year2016). tkavcik2016neural authorTkačík, J. & authorKordík, P. titleNeural turing machine for sequential learning of human mobility patterns. In booktitle2016 International joint conference on neural networks (IJCNN), pages2790–2797 (organizationIEEE, year2016). fawzi2022discovering authorFawzi, A. et al. titleDiscovering faster matrix multiplication algorithms with reinforcement learning. journalNature volume610, pages47–53 (year2022). kim2021teaching authorKim, J. Z., authorLu, Z., authorNozari, E., authorPappas, G. J. & authorBassett, D. S. titleTeaching recurrent neural networks to infer global temporal structure from local examples. journalNature Machine Intelligence volume3, pages316–323 (year2021). valente2022probing authorValente, A., authorOstojic, S. & authorPillow, J. W. 
titleProbing the relationship between latent linear dynamical systems and low-rank recurrent neural network models. journalNeural computation volume34, pages1871–1892 (year2022). rajan2016recurrent authorRajan, K., authorHarvey, C. D. & authorTank, D. W. titleRecurrent network models of sequence generation and memory. journalNeuron volume90, pages128–142 (year2016). kepple2022curriculum authorKepple, D., authorEngelken, R. & authorRajan, K. titleCurriculum learning as a tool to uncover learning principles in the brain. In booktitleInternational Conference on Learning Representations (year2022). larsen_critical_2023 authorLarsen, B., authorSydnor, V. J., authorKeller, A. S., authorYeo, B. T. & authorSatterthwaite, T. D. titleA critical period plasticity framework for the sensorimotor–association axis of cortical neurodevelopment. journalTrends in Neurosciences pagesS0166223623001674 (year2023). kim_biased_2019 authorKim, N. Y. & authorKastner, S. titleA biased competition theory for the developmental cognitive neuroscience of visuo-spatial attention. journalCurrent Opinion in Psychology volume29, pages219–228 (year2019). tervo-clemmens_canonical_2022 authorTervo-Clemmens, B. et al. titleA Canonical Trajectory of Executive Function Maturation During the Transition from Adolescence to Adulthood. typepreprint, institutionPsyArXiv (year2022). larsen_adolescence_2018 authorLarsen, B. & authorLuna, B. titleAdolescence as a neurobiological critical period for the development of higher-order cognition. journalNeuroscience & Biobehavioral Reviews volume94, pages179–195 (year2018). garcia-cabezas_structural_2019 authorGarcía-Cabezas, M. Á., authorZikopoulos, B. & authorBarbas, H. titleThe structural model: a theory linking connections, plasticity, pathology, development and evolution of the cerebral cortex. journalBrain Structure and Function volume224, pages985–1008 (year2019). garcia-cabezas_protocol_2020 authorGarcía-Cabezas, M. Á., authorHacker, J. L. & authorZikopoulos, B. titleA protocol for cortical type analysis of the human neocortex applied on histological samples, the atlas of von economo and koskinas, and magnetic resonance imaging. journalFrontiers in Neuroanatomy volume14, pages576015 (year2020). paquola_microstructural_2019 authorPaquola, C. et al. titleMicrostructural and functional gradients are increasingly dissociated in transmodal cortices. journalPLOS Biology volume17, pagese3000284 (year2019). barbas_general_2015 authorBarbas, H. titleGeneral Cortical and Special Prefrontal Connections: Principles from Structure to Function. journalAnnual Review of Neuroscience volume38, pages269–289 (year2015). markov_anatomy_2014 authorMarkov, N. T. et al. titleAnatomy of hierarchy: Feedforward and feedback pathways in macaque visual cortex. journalJournal of Comparative Neurology volume522, pages225–259 (year2014). beul_towards_2015 authorBeul, S. F. titleTowards a "canonical" agranular cortical microcircuit. journalFrontiers in Neuroanatomy pages8 (year2015). huttenlocher_synaptogenesis_1982 authorHuttenlocher, P. R., authorde Courten, C., authorGarey, L. J. & authorVan der Loos, H. titleSynaptogenesis in human visual cortex — evidence for synapse elimination during normal development. journalNeuroscience Letters volume33, pages247–252 (year1982). peter_r_synaptic_1979 authorPeter R., H. titleSynaptic density in human frontal cortex — Developmental changes and effects of aging. journalBrain Research volume163, pages195–205 (year1979). semple_brain_2013 authorSemple, B. 
D., authorBlomgren, K., authorGimlin, K., authorFerriero, D. M. & authorNoble-Haeusslein, L. J. titleBrain development in rodents and humans: Identifying benchmarks of maturation and vulnerability to injury across species. journalProgress in Neurobiology volume106-107, pages1–16 (year2013). buckner_evolution_2013 authorBuckner, R. L. & authorKrienen, F. M. titleThe evolution of distributed association networks in the human brain. journalTrends in Cognitive Sciences volume17, pages648–665 (year2013). achterberg_spatially_2023 authorAchterberg, J., authorAkarca, D., authorStrouse, D. J., authorDuncan, J. & authorAstle, D. E. titleSpatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings. journalNature Machine Intelligence (year2023). <https://www.nature.com/articles/s42256-023-00748-9>. sporns_modular_2016 authorSporns, O. & authorBetzel, R. F. titleModular Brain Networks. journalAnnual Review of Psychology volume67, pages613–640 (year2016). bassett_small-world_2017 authorBassett, D. S. & authorBullmore, E. T. titleSmall-World Brain Networks Revisited. journalThe Neuroscientist volume23, pages499–516 (year2017). tanner_functional_2023 authorTanner, J., authorL., S. M., authorColetta, L., authorGozzi, A. & authorBetzel, R. F. titleFunctional connectivity modules in recurrent neural networks: function, origin and dynamics(year2023). <https://arxiv.org/abs/2310.20601>. notePublisher: arXiv Version Number: 1. fries2009neuronal authorFries, P. titleNeuronal gamma-band synchronization as a fundamental process in cortical computation. journalAnnual review of neuroscience volume32, pages209–224 (year2009). loebel2002computation authorLoebel, A. & authorTsodyks, M. titleComputation by ensemble synchronization in recurrent networks with synaptic depression. journalJournal of computational neuroscience volume13, pages111–124 (year2002). li2009consensus authorLi, Z., authorDuan, Z., authorChen, G. & authorHuang, L. titleConsensus of multiagent systems and synchronization of complex networks: A unified viewpoint. journalIEEE Transactions on Circuits and Systems I: Regular Papers volume57, pages213–224 (year2009). rulkov1995generalized authorRulkov, N. F., authorSushchik, M. M., authorTsimring, L. S. & authorAbarbanel, H. D. titleGeneralized synchronization of chaos in directionally coupled chaotic systems. journalPhysical Review E volume51, pages980 (year1995). pecora1990synchronization authorPecora, L. M. & authorCarroll, T. L. titleSynchronization in chaotic systems. journalPhysical review letters volume64, pages821 (year1990). lu2020invertible authorLu, Z. & authorBassett, D. S. titleInvertible generalized synchronization: A putative mechanism for implicit learning in neural systems. journalChaos: An Interdisciplinary Journal of Nonlinear Science volume30 (year2020). | http://arxiv.org/abs/2311.15572v1 | {
"authors": [
"Jason Z. Kim",
"Bart Larsen",
"Linden Parkes"
],
"categories": [
"q-bio.NC"
],
"primary_category": "q-bio.NC",
"published": "20231127064531",
"title": "Shaping dynamical neural computations using spatiotemporal constraints"
} |
Quantum data-syndrome (QDS) codes are a class of quantum error-correcting codes that protect against errors both on the data qubits and on the syndrome itself via redundant measurement of stabilizer group elements. One way to define a QDS code is to choose a syndrome measurement code, a classical block code that encodes the syndrome of the underlying quantum code by defining additional stabilizer measurements. We propose the use of primitive narrow-sense BCH codes as syndrome measurement codes. We show that these codes asymptotically require O(t log ℓ) extra measurements, where ℓ is the number of stabilizer generators of the quantum code and t is the number of errors corrected by the BCH code. Previously, the best known general method of constructing QDS codes out of quantum codes requires O(t^3 log ℓ) extra measurements. As the number of additional syndrome measurements is a reasonable metric for the amount of additional time a general QDS code requires, we conclude that our construction protects against the same number of syndrome errors with significantly less time overhead.
§ INTRODUCTION The ability to accurately detect, identify, and correct errors is essential to building functional and scalable quantum computers. This is typically achieved through the use of quantum stabilizer codes, which are defined by their stabilizer group. The measurement of the group generators produces a binary syndrome that indicates the locations of errors. These measurements involve several multi-qubit gates, which can introduce errors on the qubits involved and corrupt the measurement outcome. For syndrome fault tolerance, we can measure additional stabilizer group elements to add redundancy. One common way to do this is to repeatedly measure the same set of stabilizers <cit.>. However, the syndrome can be protected much more efficiently with the use of quantum data-syndrome (QDS) codes. Introduced over a series of papers by Fujiwara <cit.> and Ashikhmin, Lai, and Brun <cit.>, QDS codes simultaneously encode quantum information and protect against syndrome errors. Recent work includes decoding QDS codes <cit.>, connections to 2-designs <cit.>, and extensions to quantum convolutional <cit.> and subsystem codes <cit.>. QDS codes are also closely related to single-shot quantum error correction <cit.>. We propose a way of constructing quantum data-syndrome codes using primitive narrow-sense BCH codes. We will also show that such a construction that protects against t syndrome errors requires O(t log(n-k)) additional measurements, which is a significant improvement over other ways of constructing these codes. Recently, BCH codes have also been used to design good flag fault-tolerant syndrome extraction schemes <cit.>. In this paper we will look at a phenomenological error model, in which the individual gates of a stabilizer measurement have a chance of causing an error on the syndrome bit. This model does not consider hook errors propagated onto the circuit due to gate errors, but is intended to be a stepping-stone to exploring a full circuit model in the future.
§ BACKGROUND §.§ Stabilizer codes Pauli operators on n qubits are n-fold tensor products of the Pauli matrices
P_0 = I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad P_1 = X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad P_2 = Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad P_3 = Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},
and are of the form
i^c \, P_{a_0} \otimes P_{a_1} \otimes \cdots \otimes P_{a_{n-1}},
with a_i, c ∈ {0,1,2,3}. For simplicity's sake we will omit tensor products from our notation; so the three-qubit operator X ⊗ Y ⊗ I becomes XYI.
These operators form the group 𝒫^n.An n,k,d stabilizer code encoding k logical qubits into n physical qubits is defined by its ℓ:=n-k independent stabilizer generators, {g_1,g_2,…,g_ℓ}⊂𝒫^n. These operators generate the stabilizer group 𝒮⊂𝒫^n of the code, which is commutative, does not contain -I^ n, and has order 2^ℓ. Elements of the stabilizer group fix the states in the quantum code: for any stabilizer g_i∈𝒮 and any codeword |ψ⟩ of the associated quantum code, g_i|ψ⟩=|ψ⟩.A stabilizer on n qubits can be seen as a length-n vector with elements in (4), under the homomorphism τ:𝒫→(4) that maps I→0,X→1,Y→ω̅=1+ω,Z→ω and ignores any global phase of ±1,± i. This can be naturally extended to send operators on n qubits from 𝒫^n to (4)^n <cit.>. Under this homomorphism, multiplication of stabilizers corresponds to bitwise addition in (4)^n. Throughout this paper we will use g∈𝒫^n to refer to a stabilizer and 𝐠∈(4)^n to refer to the corresponding length-n (4) vector τ(g). For instance, the 3-qubit stabilizer g=IXY is equivalently represented by the length-3 GF(4) vector 𝐠=τ(IXY) =01ω̅.In general, an error E on an n,k,d quantum code can be represented as an element of 𝒫^n. The measurement of its set of ℓ stabilizer operators gives as an output a length-ℓ binary vector called the syndrome 𝐬=(s_1,s_2,…,s_ℓ). The i-th syndrome bit s_i, corresponding to stabilizer operator g_i, is 0 if E and g_i commute, and 1 if they anticommute. For vectors 𝐱,𝐲∈(4)^n with elements x_i,y_i∈(4), the analagous function is the trace inner product 𝐱⋆𝐲:(4)^n(4)^n→(2):𝐱⋆𝐲=∑_i=1^n(x_iy̅_̅i̅+x̅_̅i̅y_i),where 0̅=0,1̅=1,ω̅=1+ω,1+ω=ω, and multiplication is typical in (4) <cit.>. Essentially, the i-th element of the sum is 0 if x_i=0, y_i=0, or x_i=y_i, and 1 otherwise. If an odd number of summands are 1, the trace will be 1, and if an even number are 1, the trace will be 0. Therefore the trace inner product is 0 when the Pauli operators represented by 𝐱 and 𝐲 commute, and is 1 when they anticommute. §.§ Quantum Data-Syndrome Codes We assume that Shor-style syndrome extraction <cit.> is used for measurement, in which a weight-w stabilizer can be measured fault-tolerantly using w single-qubit measurements. This is done using transversal gates and a w-qubit ancilla cat state to prevent a single error on an ancilla qubit from propagating to more than one data qubit.In this paper, we focus on the phenomenological model where errors can either occur on the data qubits or on the syndrome bits. We only consider measurement errors—errors that result in a single syndrome bit-flip, equivalent to an error on the ancilla qubit after all transversal gates have already occurred, with no propagation to data qubits. Let p_m be the probability of a single-qubit measurement error. Then the probability of incorrectly measuring a stabilizer generator S_i of weight w_i is given byp_err(S_i)=∑_j oddw_ijp_m^j(1-p_m)^w_i-j. If any bits of the syndrome are flipped during measurement, the resulting erroneous syndrome vector 𝐬̂ =𝐬+𝐞̂ may suggest an incorrect series of gates to restore the state. Some stabilizer codes have a choice of generators that allow them to correct either a single data error or a single syndrome error <cit.>, but in general, we need to perform additional stabilizer measurements to add redundancy against syndrome errors. Shor accomplished this by repeatedly measuring each stabilizer generator multiple times <cit.>. However, this can be achieved more efficiently by measuring an overdetermined set of stabilizer elements. 
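As an illustrative sketch of the bookkeeping above (not part of the original construction; the function names and the binary (x, z) encoding of the GF(4) symbols are our own choices), the following Python snippet checks commutation of Pauli strings via the trace inner product and evaluates the odd-weight measurement-error sum for a weight-w stabilizer:

import numpy as np
from math import comb

# Encode tau(I)=0, tau(X)=1, tau(Z)=omega, tau(Y)=1+omega by the bit pair (x, z);
# with this bookkeeping the GF(4) trace inner product reduces to the symplectic form below.
PAULI_TO_XZ = {'I': (0, 0), 'X': (1, 0), 'Z': (0, 1), 'Y': (1, 1)}

def to_xz(pauli_string):
    """Map a Pauli string such as 'IXY' to its binary x- and z-vectors."""
    x = np.array([PAULI_TO_XZ[p][0] for p in pauli_string], dtype=int)
    z = np.array([PAULI_TO_XZ[p][1] for p in pauli_string], dtype=int)
    return x, z

def trace_inner_product(a, b):
    """Return 0 if the Pauli strings a and b commute, 1 if they anticommute."""
    ax, az = to_xz(a)
    bx, bz = to_xz(b)
    return int((ax @ bz + az @ bx) % 2)

def syndrome(stabilizers, error):
    """Syndrome bits s_i obtained by measuring each stabilizer on the given error."""
    return [trace_inner_product(g, error) for g in stabilizers]

def p_meas_error(w, p_m):
    """Probability that a weight-w stabilizer measurement returns the wrong bit,
    i.e. that an odd number of its w single-qubit measurements fail."""
    return sum(comb(w, j) * p_m**j * (1 - p_m)**(w - j) for j in range(1, w + 1, 2))

print(trace_inner_product("XZ", "ZX"))         # 0: the two operators commute
print(trace_inner_product("XI", "ZI"))         # 1: X and Z on the same qubit anticommute
print(syndrome(["XXI", "IXX", "ZZZ"], "IYI"))  # [1, 1, 1]
print(p_meas_error(w=4, p_m=1e-3))             # ~4e-3 for small p_m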
We call a quantum code with r extra measurements beyond the ℓ = n-k needed an n,k,d:r quantum data-syndrome (QDS) code if it can correct up to a combined ⌊(d-1)/2⌋ data and syndrome errors. A reasonable metric for the amount of extra time a QDS code takes is the number of extra syndrome measurements that take place. Assuming elements of the syndrome group are of roughly equal weight, the measurement of each syndrome bit involves a similar number of gates to be performed, and thus takes a similar amount of time <cit.>. This assumes that measurements are done sequentially; the ability to measure in parallel could produce further improvement. Because of this, the most efficient QDS codes are those that correct a certain number of errors with the fewest additional stabilizer measurements. This is one reason that encoding syndrome information by simply repeating stabilizer measurements is inefficient—an m-fold repetition of ℓ stabilizer measurements that can correct up to (m-1)/2 syndrome errors requires O(mℓ) additional measurements. A general framework for designing a QDS code with a desired total distance 2t_c+1 was proposed by Fujiwara <cit.>. It involves s-detection parity check matrices (s-DPMs), which are parity-check matrices for codes that can detect up to s errors (without necessarily being able to correct or identify them) <cit.>. Specifically, an s-DPM(m,w) is a binary m × w matrix such that any m × s submatrix contains a row of odd weight. In <cit.>, Fujiwara showed that a 2i-DPM(m,w) exists so long as
m ≥ \log_2\!\left[\binom{w}{2i} - \binom{w-2i}{2i}\right] + \log_2 e .
Fujiwara uses these parity check matrices to construct a QDS code from an n,k,2t+1 stabilizer code that can protect against all errors such that the sum of the errors on syndrome bits and data qubits is t or fewer, and at most t_c of those occur on the syndrome bits. The resulting stabilizer parity check matrix has |S| rows, where
|S| = n-k+2t_c+\sum_{i=1}^{t_c}(2t_c-2i+1)\, m_i , \qquad m_i = \log_2\!\left[\binom{n-k}{2i} - \binom{n-k-2i}{2i}\right] + \log_2 e .
The construction in <cit.> requires O(t_c^3 \log ℓ) additional stabilizer measurements. As we have ℓ = n-k, we can express m_i in terms of ℓ as
m_i = \log_2\!\left[\binom{ℓ}{2i} - \binom{ℓ-2i}{2i}\right] + \log_2 e .
As we are interested in the asymptotic behavior of this construction, we look at its behavior for large ℓ. The binomial coefficient \binom{n}{k} can in general be expressed as the polynomial
\binom{n}{k} = \sum_{i=0}^{k} s(k,i)\, \frac{n^i}{k!} ,
where s(k,i) = (-1)^{k-i} \left[{k \atop i}\right] are the Stirling numbers of the first kind <cit.>. There are several special values of Stirling numbers of the first kind; most valuable to us are s(n,n) = \left[{n \atop n}\right] = 1 and s(n,n-1) = -\left[{n \atop n-1}\right] = -\binom{n}{2} = -n(n-1)/2. Now consider the terms in these expansions with order greater than 2i-2:
\binom{ℓ}{2i} = \left[{2i \atop 2i}\right] \frac{ℓ^{2i}}{(2i)!} - \left[{2i \atop 2i-1}\right] \frac{ℓ^{2i-1}}{(2i)!} + O(ℓ^{2i-2}) ,
\binom{ℓ-2i}{2i} = \left[{2i \atop 2i}\right] \frac{(ℓ-2i)^{2i}}{(2i)!} - \left[{2i \atop 2i-1}\right] \frac{(ℓ-2i)^{2i-1}}{(2i)!} + O(ℓ^{2i-2}) .
Expanding the (ℓ-2i)^{2i} and (ℓ-2i)^{2i-1} terms, we see
(ℓ-2i)^{2i} = ℓ^{2i} - (2i)^2 ℓ^{2i-1} + O(ℓ^{2i-2}) , \qquad (ℓ-2i)^{2i-1} = ℓ^{2i-1} + O(ℓ^{2i-2}) ,
so in fact the ℓ^{2i} and ℓ^{2i-1} terms of the two polynomials are (evaluating the Stirling numbers of the first kind)
\binom{ℓ}{2i} = \frac{ℓ^{2i}}{(2i)!} - \frac{2i(2i-1)}{2} \frac{ℓ^{2i-1}}{(2i)!} + O(ℓ^{2i-2}) ,
\binom{ℓ-2i}{2i} = \frac{ℓ^{2i}}{(2i)!} - (2i)^2 \frac{ℓ^{2i-1}}{(2i)!} - \frac{2i(2i-1)}{2} \frac{ℓ^{2i-1}}{(2i)!} + O(ℓ^{2i-2}) .
Then we can see that the difference between these two binomial coefficients has a highest-order term in ℓ of the form ℓ^{2i-1}:
\binom{ℓ}{2i} - \binom{ℓ-2i}{2i} = \frac{2i\, ℓ^{2i-1}}{(2i-1)!} + O(ℓ^{2i-2}) .
We therefore know that m_i is at least (2i-1)\log_2 ℓ:
m_i = \log_2\!\left[\binom{ℓ}{2i} - \binom{ℓ-2i}{2i}\right] + \log_2 e \;\Rightarrow\; m_i > \log_2\!\left(\frac{2i\, ℓ^{2i-1}}{(2i-1)!}\right) = (2i-1)\log_2 ℓ + O(1) .
So the number of redundant stabilizer measurements required by this construction is at least2t_c+∑_i=1^t_c(2t_c-2i+1)(2i-1)log_2ℓ.This sum can actually be evaluated to a closed form; ∑_i=1^t_c(2i-1)(2t_c-2i+1)log_2ℓ=log_2ℓ(2t^3+t/3). So in terms of t and ℓ, the total number of additional stabilizers constructed is|S|-(n-k)=2t_c+log_2ℓ/3(2t_c^3+t_c).The dominant term of this is 2t_c^3log_2ℓ/3, and2t_c^3log_2ℓ/3∈ O(t_c^3logℓ). § SYNDROME MEASUREMENT CODES In general, given a stabilizer code, it is nontrivial to choose a set of stabilizer generators such that the resulting QDS code has a good total minimum distance. Instead we make use of syndrome measurement (SM) codes. A syndrome measurement code is a [n_S,ℓ,2t_S+1] classical block code that defines an overdetermined set of n_S stabilizer operators to be measured. This allows for a two-step decoding protocol that is simpler than simultaneously decoding syndrome and data errors, but can perform suboptimally in comparison <cit.>. In the classical decoding step, the measured length-n_C bit string is decoded with a decoder of the SM code. This results in a length-ℓ syndrome for the stabilizer code, which is then used to correct quantum errors in the second step. One advantage of using a SM code is that the number of correctable syndrome bit-flip errors is easy to dictate and independent from the minimum distance of the stabilizer code.If we have a stabilizer code n,k,d whose syndrome is encoded in a [n_C,ℓ,d_C] classical code, then the overall minimum distance of the QDS code is d'≥min(d,d_C), and it can correct up to a simultaneous d-1/2 errors on the data qubits and d_C-1/2 errors on the syndrome bits. This ability to control the distance of the SM code makes the codes particularly useful for systems with relatively high probability of measurement error. § PROTECTING SYNDROMES WITH BCH CODES We propose the use of primitive narrow-sense Bose-Chaudhuri-Hocquenghem (BCH) codes as SM codes to encode the syndrome bits. These codes are a class of cyclic binary codes of the form [2^m-1, 2^m-R(m,t) - 1, 2t+1], where R(m,t)≤ mt for a chosen m,t∈ℕ. The properties of a BCH code are defined by the degree of the least common multiple of certain irreducible polynomials <cit.>. Any BCH code can be shortened to create a [2^m-1-a,2^m-R(m,t)-1-a,2t+1] code. The generators of the shortened code are the codewords that are zero in their first a bits. Note that because both the number of logical and data bits both decrease by a, this process does not change the difference between the number of logical and data bits. This means that whether we use a regular or shortened BCH code as an SM code, the number of additional stabilizers measured will still be R(m,t). In order to encode a syndrome of an n,k,d quantum code (with a ℓ× 2n stabilizer matrix H) in a BCH code with distance d_S=2t_S+1, we choose m to be the smallest integer such that ℓ≤ 2^m-mt_S-1. Note that this means 2^m-2<ℓ<2^m<4ℓ, and so log_2ℓ is O(m) and m is O(logℓ). We use this m and our desired t_S to find the corresponding [2^m-1,2^m-R(m,t_S)-1,2t_S+1] BCH code, and if 2^m-R(m,t_S)-1>ℓ we shorten it by an appropriate amount so that it is a [ℓ+R(m,t_S),ℓ,2t_S+1] code. This code has a ℓ× (ℓ+R(m,t_S)) generator matrix G_B. We can use this matrix to generate the (ℓ+R(m,t_S))× 2n stabilizer matrix for our n,k,d:R(m,t_S) QDS code, H_Q:=G_B^TH. 
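For concreteness, the parameter bookkeeping just described can be written down in a few lines. The sketch below is our own illustration (helper names are ours); it applies the selection rule from the text with the upper bound R(m,t) ≤ mt, and one should keep in mind that for large codes the true R(m,t) can be smaller than mt:

def pick_m(ell, t):
    """Smallest m with ell <= 2**m - m*t - 1 (the selection rule quoted above)."""
    m = 2
    while 2**m - m * t - 1 < ell:
        m += 1
    return m

def sm_code_parameters(ell, t):
    """Parameters of the (possibly shortened) BCH syndrome measurement code.
    R = m*t is only an upper bound on the true number of check bits R(m,t)."""
    m = pick_m(ell, t)
    R = m * t
    n_full, k_full = 2**m - 1, 2**m - R - 1
    return {"m": m, "R": R,
            "full code": (n_full, k_full, 2 * t + 1),
            "shortened SM code": (ell + R, ell, 2 * t + 1),
            "shorten by": k_full - ell}

# Steane-code syndrome (ell = 6) protected against t = 3 syndrome errors,
# as worked out in the next subsection:
print(sm_code_parameters(ell=6, t=3))
# -> m = 5, the [31,16,7] BCH code shortened by 10 bits to a [21,6,7] SM code,
#    i.e. R = 15 extra stabilizer measurements (here the bound m*t happens to be tight).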
Using a BCH code or shortened BCH code that can correct up to t errors as a SM code encoding ℓ syndrome bits requires O(t log ℓ) additional stabilizer measurements. When we encode ℓ bits in a BCH code with distance 2t+1, we find the m used to construct the code as the smallest m such that 2^m - mt - 1 > ℓ, and we can see that m is in O(log ℓ). The number of additional stabilizer measurements required when using a BCH code as a syndrome measurement code is the difference between the number of encoded and data bits in the BCH code. This is definitionally R(m,t) ≤ mt, so R(m,t) ∈ O(mt) and hence R(m,t) ∈ O(t log ℓ). This R(m,t) is the same for a BCH code and any resulting shortened BCH code, so our methodology requires O(t log ℓ) additional stabilizer measurements. The significance of this improvement can be seen by considering the rotated surface code. A d^2,1,d rotated surface code is defined by ℓ = d^2-1 stabilizers; this means that ℓ ∈ O(t^2) for t = (d-1)/2. A standard way to protect against t syndrome errors, proposed by Shor, requires at least 2t and up to t^2 additional rounds of syndrome extraction <cit.>. A minimum-weight perfect-matching (MWPM) space-time decoder is the standard method for fault-tolerant syndrome extraction for the surface code <cit.>. The method uses d = 2t+1 ∈ O(t) rounds of syndrome measurements, yielding O(ℓ t) total measurements. In terms of t, this means that the most efficient t-fault-tolerant syndrome extraction for the surface code requires O(t^3) additional measurements. Fujiwara's methodology requires O(t^3 log ℓ) additional measurements, which is O(t^3 log t) in terms of t. In contrast, using our methodology, we require O(t log ℓ) additional measurements; in terms of t this means our method is O(t log t) for the surface code, significantly better than the alternatives in terms of measurements, but at the cost of a complex syndrome extraction circuit relative to MWPM.
§.§ Example: encoding the 7,1,3 Steane code in a BCH code
The Steane code is a CSS code that encodes X and Z errors using the [7,4,3] Hamming code. This Hamming code has parity check matrix
H_H = \begin{pmatrix} 1 & 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 & 1 & 1 \end{pmatrix} .
So the generators of the Steane code's stabilizer group 𝒮 and the corresponding binary parity-check matrix H_S are
𝒮 = ⟨ XIXIXIX, IXXIIXX, IIIXXXX, ZIZIZIZ, IZZIIZZ, IIIZZZZ ⟩ , \qquad H_S = \begin{pmatrix} 0 & H_H \\ H_H & 0 \end{pmatrix} .
This code gives us a syndrome of length 6. Say we want to protect against t_S = 3 possible syndrome errors. Then we determine the relevant BCH code. The smallest m such that 6 < 2^m - 3m - 1 is m = 5, so our BCH code is a [31,16,7] code. We need to shorten it by a = 10 bits in order for it to encode exactly 6 bits; the resulting code is a [21,6,7] code with generator matrix
G_B = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 1
\end{pmatrix} .
Multiplying G_B^T by our matrix H_S gives us a 21 × 14 matrix that defines a set of stabilizers whose measurement returns a length-21 syndrome vector. This can be decoded by appending a = 10 zeroes to the beginning of the vector and passing it through the decoder for the original [31,16,7] code (for more on the decoding of cyclic codes see <cit.>). The resulting length-16 vector will begin with 10 zeros that can be removed, and we are left with a 6-bit syndrome—which can be used to identify and correct errors on our 7 data qubits in the usual way.
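As a quick numerical check of this worked example (our own sketch; the block order of H_S and the rows of G_B are transcribed from the matrices above), a few lines of NumPy confirm the advertised dimensions:

import numpy as np

# [7,4,3] Hamming parity-check matrix, as above.
H_H = np.array([[1, 0, 1, 0, 1, 0, 1],
                [0, 1, 1, 0, 0, 1, 1],
                [0, 0, 0, 1, 1, 1, 1]])

# Binary parity-check matrix of the Steane code, in the block order printed above.
Z3 = np.zeros((3, 7), dtype=int)
H_S = np.block([[Z3, H_H], [H_H, Z3]])                # 6 x 14

# Rows of the [21,6,7] shortened BCH generator matrix, transcribed from G_B above.
rows = ["100000000101011010010",
        "010000000010101101001",
        "001000111100001001100",
        "000100011110000100110",
        "000010001111000010011",
        "000001111010111110001"]
G_B = np.array([[int(b) for b in r] for r in rows])   # 6 x 21

H_Q = (G_B.T @ H_S) % 2                               # the QDS stabilizer matrix
print(H_Q.shape)                                      # (21, 14): 21 measured stabilizers on 7 qubits
print(G_B.sum(axis=1))                                # row weights 7, 7, 8, 8, 8, 11 -- all >= 7,
                                                      # a necessary (not sufficient) consistency check for d = 7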
§.§ Time overhead A QDS code that uses a BCH code as an SM code will need only O(mt_S) additional measurements to be performed. Note that m∈ O(logℓ), so the number of additional measurements is O(t_Slogℓ ). Compared to Fujiwara's construction, which is O(t_c^3logℓ), we can see that ours gives a significant improvement. Note that the t_c in Fujiwara's construction and the t_S in ours are not necessarily the same. Fujiwara's construction restricts t_c to be at most the t of the quantum code the QDS code is based on. If we desire to protect against a number of syndrome errors fewer than those corrected by the base code, our construction outperforms. If we instead protect against more syndrome errors than t, our construction allows for this while Fujiwara's does not. Specifically, if we have a quantum code and want to protect against fewer than t syndrome errors, Fujiwara's construction grows with the number of errors cubed. For the same code and desired correctable syndrome errors, our BCH construction requires a number of additional stabilizer measurements that is linear in t. For a comparison of the additional stabilizer measurements needed to obtain the same error-correcting properties between our and Fujiwara's constructions see Fig. <ref>. This means that protecting against a specific number of syndrome errors can be done with significantly less time overhead using our encoding procedure than using Fujiwara's construction. Perhaps more meaningfully, it also means that if a circuit has a limited amount of time to perform stabilizer measurements, our construction can correct significantly more errors on the syndrome bits than Fujiwara's construction.As an example of this, consider a code with ℓ=10. If we want to protect against up to 3 syndrome errors, Fujiwara's construction requires up to 76 additional stabilizers. The highest-distance BCH code encoding ℓ=10 bits such that R(m,t)≤ 76 is the [80,10,23] shortened BCH code (shortened from a [127,57,23] code) that only needs 70 additional measurements while protecting against up to 12 syndrome errors. Our methodology allows for significantly more well-protected syndrome measurement in the same amount of time.§ COMPARISONSTo illustrate this, we perform Monte Carlo simulation over many error weights. To obtain accurate results despite the need for high-weight errors with low probability, we determine the probabilities of both logical and decoded syndrome errors for each pair of syndrome error weight w_s and qubit error weight w_q, a methodology outlined in <cit.>. For a system with probability of syndrome error p_s on a number of syndrome bits ℓ, and probability of qubit error p_q on number of qubits q, we can calculate the probability of a logical error:p_err(p_q,p_s)=∑_w_s=1^ℓ∑_w_q=1^qA_w_q,w_s(p_q,p_s)p_L(w_q,w_s),where A_w(p,n)=nw(p)^w(1-p)^q-n is the probability that w errors will occur on n possible locations when an error occurs with probability p, and A_w_q,w_s(p_q,p_s)=A_w_q(p_q,q)*A_w_s(p_s,ℓ) is the probablity that exactly w_q qubit errors and w_s syndrome errors will occur. Then p_L(w_q,w_s) is the Monte Carlo-determined probability of an error resulting from the decoding process when w_q qubit errors and w_s syndrome errors occur. Many of the smallest terms of this can be truncated, and the terms where both weights are below the capabilities of the code are guaranteed to be zero, so it is computationally less intense than traditional simulation. 
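The weighted recombination of fixed-weight Monte Carlo estimates described above can be implemented directly. The sketch below is illustrative only (the pL table here is random placeholder data, whereas in the simulations it is estimated at each weight pair):

import numpy as np
from math import comb

def binom_pmf(w, n, p):
    """Probability of exactly w errors among n possible locations, A_w(p, n)."""
    return comb(n, w) * p**w * (1 - p)**(n - w)

def logical_error_rate(pL, n_qubits, n_syn, p_q, p_s):
    """Combine failure probabilities pL[w_q, w_s], estimated at fixed error weights,
    into an overall error rate by weighting each weight pair with its binomial probability."""
    total = 0.0
    for wq in range(pL.shape[0]):
        for ws in range(pL.shape[1]):
            total += binom_pmf(wq, n_qubits, p_q) * binom_pmf(ws, n_syn, p_s) * pL[wq, ws]
    return total

rng = np.random.default_rng(0)
pL = rng.uniform(size=(8, 22))   # placeholder table for 7 data qubits and 21 syndrome bits
pL[:2, :4] = 0.0                 # weight pairs the code is guaranteed to handle contribute zero
print(logical_error_rate(pL, n_qubits=7, n_syn=21, p_q=1e-3, p_s=1e-3))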
If we take our encoding and Fujiwara's encoding, in similar numbers of additional stabilizer measurements, our construction performs significantly better under a wide range of possible measurement error probabilities. As an example, consider this encoding of an ℓ=10 code (Fig. <ref>). The probability of a set of bit flips resulting in a logical error is significantly lower with the BCH SM code at all bit-flip error probabilities, with a very pronounced effect at lower probabilities due to the significantly higher distance in the BCH construction.In a realistic model, it is likely that qubit errors and syndrome bit-flip error probabilities are similar to each other. It can be helpful then to look at the behavior of these codes while varying both errors. In Fig. <ref> we look at an encoding of the Steane code in a BCH code and in Fujiwara's construction for syndrome bit-flip probability 100x greater than the Pauli error probability. We can see that the codes both reach the same slope, because the probability of qubit error becomes more significant than the probability of measurement error. However, because the BCH code has a higher distance, it behaves significantly better. § CONCLUSIONWe have shown that it is possible to create quantum error-correcting codes that are robust against syndrome measurement errors. We show that using the classical BCH family of cyclic binary codes, we can protect against any number of syndrome errors t we choose, while only requiring a number of additional stabilizer measurements linear in t. This is a significant improvement on the previous construction given in <cit.>, which requires a number of additional measurements cubic in t. Our use of BCH codes confers two advantages. First, for a specific desired number of syndrome errors to correct, our construction requires significantly fewer syndrome measurements and therefore takes significantly less time than that in <cit.>. Second, if we have a certain amount of time allocated for syndrome measurement, our construction allows for significantly more syndrome errors to be corrected with the same time overhead as Fujiwara's method.One direction of future research is investigating how the use of BCH codes as syndrome measurement codes can be combined with other methods of fault-tolerant techniques such as flag error correction <cit.>, adaptive syndrome measurement <cit.>, and single-shot error correction <cit.>, and whether this improves their error-correcting properties. Another direction for future study is to go beyond the phenomenological error model and investigate the usage of SM codes under full circuit noise. Although BCH codes are generally good codes, this methodology may not the optimal choice for specific quantum codes. For highly structured families of quantum codes, using BCH codes as SM codes may not preserve the structure of the underlying quantum code. For example, a good classical code may not be a good SM code for a QLDPC code if it measures high-weight stabilizer elements. For surface codes, using a BCH code as a SM code likely will not preserve the locality of stabilizer elements. For a CSS code it may be desirable to avoid having to measure the product of both X- and Z-type stabilizers, and therefore to encode the X- and Z-type subgroups separately with SM codes. Therefore, there is significant work that is needed for determining the best SM codes for these families. 
Future work may therefore include encoding subgroups of the stabilizer generators in separate syndrome measurement codes to preserve certain properties of the base quantum code. § ACKNOWLEDGEMENTSThis work was supported by the ARO/LPS QCISS program (W911NF-21-1-0005) and the NSF QLCI for Robust Quantum Simulation (OMA-2120757). Support is also acknowledged from the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Systems Accelerator. empty | http://arxiv.org/abs/2311.16044v1 | {
"authors": [
"Eren Guttentag",
"Andrew Nemec",
"Kenneth R. Brown"
],
"categories": [
"quant-ph"
],
"primary_category": "quant-ph",
"published": "20231127180910",
"title": "Robust Syndrome Extraction via BCH Encoding"
} |
remarkRemark ./figures/wExr̊pMỹḍ D f gŁ𝕃ϵr̊TheoremTheoremcorolaryCorollarylemmaLemma[Email: ][email protected] [Email: ][email protected] of Mathematics, King's College London, London WC2R 2LS, UK We consider highly heterogeneous random networks with symmetric interactions in the limit of high connectivity. A key feature of this system is that the spectral density of the corresponding ensemble exhibits a divergence within the bulk. We study the structure of the eigenvectors associated with this divergence and find that they are multifractal with the statistics of eigenvector elements matching those of the resolvent entries. The corresponding localization mechanism relies on the statistical properties of the nodes rather than on any spatial structure around a localization centre. This “statistical localization” mechanism is potentially relevant for explaining localization in different models that display singularities in the bulk of the spectrum of eigenvalues. Multifractality and statistical localization in highly heterogeneous random networks Peter Sollich January 14, 2024 =====================================================================================§ INTRODUCTION After three decades of an extensive analysis of its main properties, the configuration model has stood its ground as the simplest but realistic theoretical model for complex networks <cit.>. A particularly appealing property of this model is that the degree distribution can be freely specified, making it possible to capture a large range of random networks varying from highly homogeneous (all nodes with same degree) to very heterogeneous <cit.>.In addition to the topological structure of the network, the edges are often weighted, physically representing the strength of an interaction between nodes <cit.>. This is encoded in a matrix that contains the weights of all edges in the network. For the theoretical approach, rather than focusing on individual matrices, one reduces them to their defining properties and represents these in a statistical ensemble of weight matrices <cit.>. The interplay between interaction and structure can be studied by analyzing, for instance, the spectral properties of the corresponding ensemble of random matrices (see for instance ref. <cit.> for a classical work in the case of symmetric sparse matrices or ref. <cit.> for a more recent perspective on non–Hermitian random matrices). Analytical methods for computing spectra of configuration model networks can be found in references <cit.>. Most of the theory is based on message–passing algorithms and the corresponding cavity (or resolvent) equations <cit.>. Recently, in references <cit.> an analysis of these equations forhighly heterogeneous networks, i.e. those with non-vanishing relative variance of the degree distribution (in the high connectivity limit), has been performed. The main result is that in the thermodynamic limit, classical results of Random Matrix Theory do not apply. This includes in particular Wigner's semicircle law to describe the spectrum of eigenvalues. Deviations from this are accompanied also by other striking properties, such as the appearance of singularities in the bulk of the spectral density. In this paper, we study highly heterogeneous random networks using the negative binomial distribution as the degree distribution for the configuration model, with interactions sampled from a Gaussian with zero mean and a variance that scales inversely with the mean connectivity. 
As has been shown in reference <cit.>, this setting leads to a family of networks controlled by a single parameter that allows one to cover the range from highly heterogeneous all the way to essentially homogeneous networks. In reference <cit.>, the resolvent equations were derived and solved for this system. Their solution revealed that in the case of highly heterogeneous graphs, there is a divergence in the spectral density at eigenvalue zero. This can be traced back to the distribution of the local Density of States, which acquires a power–law tail leading to an infinite first moment.Here we investigate the structure of the eigenvectors associated with the divergence of the spectral density. Our main finding is that they are localized and exhibit “strong multifractal” behavior. We rationalize this by finding an anticorrelation between the degree of the node and the corresponding amplitude of the eigenvector. It turns out that localization occurs in nodes with low degree (relative to the mean), which typically are far apart from each other. We denote the corresponding localization mechanism “statistical localization” as it is driven by the statistical properties of the node(s). This contrasts with the standard case of Anderson localization on random networks <cit.>, where eigenvectors are localized on nodes around a single localization centre. We conjecture that the new mechanism of statistical localization is relevant also for other types of models that exhibit singularities within the bulk of the spectral density such as the Poisson random graph with Gaussian couplings <cit.> or the sparse Barrat–Mézard trap model <cit.>.§ HETEROGENEOUS WEIGHTED RANDOM NETWORKS We provide in this section the essential pieces of information that define the ensemble of random networks that we study. For more details, we refer to the reader to the original paper <cit.>. Let us consider a simple and undirected network with N nodes. We consider the configuration model with a negative binomial distribution of degrees k,p_k = Γ(γ + k)/Γ(γ)1/k!( c/γ)^k 1/( 1+ c/γ)^γ + kwhere c is the mean connectivity, Γ(·) is the Gamma function and 0 < γ < ∞ controls the heterogeneity of the distribution [In ref. <cit.>, the parameter γ is denoted by α. We use a different convention here because we will need α later in the notation for the spectrum of fractal dimensions.]. Indeed, the relative variance of the degree distribution is given byσ^2/c^2 = 1/c + 1/γEach node i is assigned a degree k_i drawn randomly from p_k, and nodes are then randomly connected following the standard configuration model prescription <cit.>. If there is a link between nodes i and j we set the random variable c_ij = 1, otherwise it is set to zero. Additionally we consider interactions J_ij between nodes as random variables sampled independently on each link (with J_ij=J_ji) from a distribution p_J with mean zero and standard deviation J/√(c). Overall, the weight matrix A has elementsA_ij = c_ij J_ijIn the high–connectivity limit c →∞, it makes sense to consider the distribution of the rescaled degrees κ = k/c. This distribution ν(κ) is formally defined as followsν(κ) = lim_c →∞∑_k =0^∞ p_kδ( κ - k/c)For p_k as given in eq. (<ref>), one finds that ν(κ) is a Gamma distribution with shape parameter γ and scale parameter 1/γ, i.e.ν(κ) = γ^γκ^γ - 1e^-γκ/Γ(γ)The regime γ < 1 characterizes random networks with strongly heterogeneous degrees as the relative variance of the degree distribution is greater than 1 (cf. eq. (<ref>)). 
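These definitions are easy to check numerically. In the snippet below (our own illustration; NumPy's negative-binomial parameterization is mapped onto the degree distribution above via n = γ and p = γ/(γ+c), and the parameter values are arbitrary), the sampled degrees reproduce the relative variance 1/c + 1/γ, and the rescaled degrees κ = k/c approach the Gamma density with mean 1 and variance 1/γ:

import numpy as np

def sample_degrees(N, c, gamma, rng):
    """Degrees drawn from the negative binomial degree distribution with mean c and shape gamma."""
    return rng.negative_binomial(gamma, gamma / (gamma + c), size=N)

rng = np.random.default_rng(1)
N, c, gamma = 100_000, 200, 0.5
k = sample_degrees(N, c, gamma, rng)
print(k.mean())                                    # ~ c
print(k.var() / k.mean()**2, 1 / c + 1 / gamma)    # relative variance of the degree distribution
kappa = k / c                                      # rescaled degrees
print(kappa.mean(), kappa.var(), 1 / gamma)        # Gamma(gamma, 1/gamma): mean 1, variance 1/gamma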
In the limit N →∞ (taken beforec →∞),the distribution of eigenvalues, ρ(λ) = lim_N→∞1/N∑_i=1^N δ(λ - λ_i) , of the weight matrix (eq. (<ref>)) exhibits a power law divergenceρ(λ) ∼ |λ|^γ - 1, |λ| → 0with 0 < γ < 1. This behavior can be rationalized via the connection between the resolvent and the spectral density <cit.>, namelyρ(λ) =lim_ϵ→ 0lim_N→∞1/π N∑_i = 1^N ImG_ii (λ -i ϵ)where the G_ii are the diagonal elements of the resolvent matrix G = ((λ - i ϵ) I - A)^-1, with I the identity matrix. As a matter of fact, networks generated by the configuration model exhibit locally a tree–like structure <cit.>, this implies that we can use the cavity method <cit.> to estimate the diagonal elements of the resolvent matrix, also called local Density of States (lDOS). Ref. <cit.> shows that at λ = 0 the distribution ofy_i = Im G_ii inthe limit ϵ→ 0 is given byP_0(y) = γ^γ/Γ(γ) J^γexp( - γ/J y)/y^γ + 1By virtue of the resolvent identity (eq. (<ref>)), the mean value of the lDOS distribution determines the spectral density. From the power-law tail of the distribution (<ref>) one can then clearly see the origin of the divergence in equation (<ref>).A final result that we want to introduce here comes from reference <cit.>. This states that in the limit c →∞, the spectral density of the weight matrix A is given by the free multiplicative convolution <cit.> of ν(κ) with the Wigner semicircle law ρ_W(λ). As an implication, one has that the weight matrix can be decomposed as the product of two different matrices, X and D, where the latter is the degree matrix with elements D_ij = κ_i δ_ij and the former is a random (symmetric) matrix with zeros on the diagonal and off–diagonal elements drawn from a Gaussian distribution with mean zero and variance J^2/N. After symmetrization, one can writeA = D^1/2XD^1/2 This decomposition is used in reference <cit.> to compare the predictions (in particular, for the spectral density) from theory to numerical results using exact diagonalization, without explicitly constructing the configuration model. We will use the same approach to obtain the eigenvectors associated with the modes around λ = 0 numerically.§ WAVEFUNCTION STATISTICS We exploit the decomposition in equation (<ref>) to generate an interaction matrix A. Then we diagonalize it with the Lanczos algorithm to extract the eigenvectors with associated eigenvalues closest to zero [For this purpose we use an implementation of the Arpack package for the Julia programming language <cit.>.]. In Figure <ref>((a)) we show the distribution of the (scaled) squared eigenvector entries x_i = N|ψ_i|^2, which in quantum mechanical language are squared wavefunction amplitudes. The data are obtained from independent instances of A for three different values of γ. The figure suggests that the distributions have power-law tails for large x, with γ-dependent exponents [The figure also suggests the existence of a left power-law tail. This would contribute to the scaling of the exponents τ(q) for negative q (see eq. (<ref>)) and equivalently to the multifractal spectrum for γα > 1 (cf. eq. (<ref>)). As this piece is not relevant for the understanding of the localization properties, we do not consider it further in our analysis.]. In fact, we found that the exponents agree with thosefor the resolvent, i.e. P(x) ∼ P_0(x) ∼ x^-(1 + γ) for sufficiently large x, as shown in Figure <ref>((b)). 
This agreement of the statistics of the resolvent and eigenvector entries is consistent with previous results for eigenvectors with multifractal properties, see for instance <cit.>. The power-law tail motivates the following ansatz for the whole distribution above a scale x_min(N),close to the mode of the distributionP(x)= b(N) x^-(γ + 1) with b(N) and x_min(N) functions to be determined. Their scaling can be estimated by using the normalization of P(x) and its first moment <cit.>; the latter follows from the normalization of the eigenvectors, 1=∑_i |ψ_i|^2 = N^-1∑_i x_i. The first condition, namely, ∫ P(x) dx = 1 gives1 ≈∫_x_min^N P(x) dx≈γ b x_min^-γwhereas the second one, namely ∫ xP(x) dx = 1, yields 1 ≈∫_x_min^N x P(x) dx≈γ b N^-γ + 1 These two relations imply b ∼ N^γ - 1 and x_min∼ N^1 - 1/γ.As x_min sets the scale from which the power–law becomes valid, we identify this as the typical value of the distribution. This scaling is confirmed using results from Exact Diagonalization as can be appreciated in Figure <ref>.Substituting the scaling for b(N) in our ansatz (eq. (<ref>)), we write the distribution P(x) in the generic wayP(x) = A/x N^γ - 1 x^- γwith A = O(N^0) a normalization constant. The spectrum of fractal dimensions f(α), which is definedas <cit.> f(α) = lim_N →∞ln( xN P(x) ) /ln Nwhere on the r.h.s. x=N^1-α or conversely α = 1 - ln x/ln N. Substitution of eq. (<ref>) into eq. (<ref>) gives the expression f(α) = γα, forγα≤ 1 where the upper cutoff α_max = 1/γ corresponds to the lower cutoff x_min.Finally, the Legendre transform of equation (<ref>) gives the set of exponents characterizing the q–th moment of the distribution I_q(N) = ∑_i |ψ_i|^2q∝ N^-τ(q) The Legendre transform relation reads explicitly <cit.> -τ(q) = max_α [f(α) - qα] Substitution of f(α) (eq. (<ref>)) into equation (<ref>) yields τ(q) =q/γ - 1q ≤γ 0q > γFigure <ref> compares this result with Exact Diagonalization. § DISCUSSION The scaling of the moments I_q as given by equation (<ref>)has been found across different models and the corresponding phase has been described variously as “quasilocalized” <cit.>, or localized with multifractal properties <cit.>, or localized with “strong multifractal” behavior <cit.>. The corresponding models are the Gaussian Rosenzweig–Porter, random Levy matrices, and the Anderson model on small–world networks, respectively. References <cit.> point to a power–law (instead of exponential) localization as the origin of this kind of behavior. In contrast, reference <cit.> (see also <cit.>) focusses on the existence of a length scale that characterizes the decay of wave functions along the typical branches as responsible for the scaling above. These mechanisms, however, do not seem to be operative for our system. In order to find the mechanism that governs the behavior of ourmodel, we construct finite instances of the networks generated with the configuration model. Then we investigate the correlation between the squared eigenvector entry (or “mass”) and the degree of the node. Our finding is that low–degree nodes, typically leaves (i.e. nodes with degree one), concentrate the mass of the eigenvectors. For mildly heterogenous networks, i.e. those with 1 < γ < 2, the anticorrelation between degree and eigenvector mass is easily visible in a scatter plot (c.f. Figure <ref>). 
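Both the amplitude statistics and the degree–mass anticorrelation can be reproduced with a few lines of NumPy. The sketch below is our own illustration of the numerical route described above, with a modest N and full diagonalization in place of the Lanczos algorithm; all parameter values are arbitrary:

import numpy as np

def build_matrix(N, gamma, J, rng):
    """Surrogate weight matrix A = D^{1/2} X D^{1/2}: D_ii = kappa_i ~ Gamma(gamma, 1/gamma),
    X symmetric Gaussian with zero diagonal and variance J^2/N."""
    kappa = rng.gamma(shape=gamma, scale=1.0 / gamma, size=N)
    X = np.triu(rng.normal(0.0, J / np.sqrt(N), size=(N, N)), 1)
    X = X + X.T
    s = np.sqrt(kappa)
    return s[:, None] * X * s[None, :], kappa

rng = np.random.default_rng(2)
N, gamma, J = 2000, 0.5, 1.0
A, kappa = build_matrix(N, gamma, J, rng)
evals, evecs = np.linalg.eigh(A)                   # full diagonalization (Lanczos in the text)
idx = np.argsort(np.abs(evals))[:20]               # the modes closest to lambda = 0
x = N * evecs[:, idx]**2                           # scaled squared amplitudes N |psi_i|^2
print(np.median(x), x.max())                       # heavy right tail for gamma < 1
q = 0.25
Iq = np.sum(np.abs(evecs[:, idx])**(2 * q), axis=0)
print(np.exp(np.log(Iq).mean()))                   # typical moment, expected to scale as N^{-tau(q)}
mass = (evecs[:, idx]**2).sum(axis=1)
print(np.corrcoef(np.log(mass), np.log(kappa))[0, 1])  # correlation of log eigenvector mass with log degree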
If we extrapolate this finding to c →∞, we expect that for a given instance nodes with small relative degree will concentrate the mass of the eigenvector, and what is more relevant, that those nodes may be far apart from each other on the network. Indeed, we find that the dominant nodes from figure <ref> lie at distances of the order of the graph diameter from each other. Thus what matters for localization is the identity of each node and its statistical properties, rather than its spatial location in the network. In this sense, we say that the system exhibits “statistical localization” and that this leads to the multifractal behavior encoded in the exponents (<ref>).Toillustrate further the mechanism described above, consider the Bouchad trap model on a sparse random graph <cit.>. The ground state of this system is the Boltzmann distribution, and the set of exponents characterizing the scaling of the moments is of the form (<ref>), withtemperature instead of γ as the control parameter (see equation 4.15 in reference <cit.>). Thus, for finite instances, the nodes that would carry most of the mass of the vector are the ones with the largest energies, corresponding to the deepest traps. Those nodes are spatially uncorrelated and areinstead identified by local statistical properties.At this point, it is worth comparing our results with earlier observations of localization in heterogenous networks. As a representative example of this class we consider the Laplacian on Poisson (Erdös–Rényi) random graphs <cit.> (see also <cit.>). In both cases, for small |λ|, localization is driven by low degree nodes. However, for our model, this happens in the bulk of the spectrum and is accompanied by a divergence in the spectral density; whereas for the Poisson random graph, localization happens in the (Lifshitz <cit.>) tail of the spectrum, which is separated from the bulk by a mobility edge. Additionally, while it has been observed that localized states in this tail are centred on geometric defects with abnormally low connectity <cit.> and that the density of states is dominated by low degree nodes <cit.>, it remains an open question whether those states exhibit multifractal behavior and the existence of multiple localization centres. This would be interesting to study in future work.In conclusion, we have analysed the wavefunction statistics for highly heterogeneous random graphs from the configuration model defined by a negative binomial degree distribution and in the limit of high connectivity. We have found that the distribution of the eigenvector mass for the modes that contribute to the divergence of the spectral density is a power-law, with the same exponent as the one for the local Density of States. Those eigenvectors exhibit strong multifractality, and for finite graphs are essentially localized in the lowest-degree nodes. On this basis, we introduce the concept of “statistical localization” that is to be contrasted with the standard one of “spatial localization”.We leave for future work an extensive analysis of the eigenstate correlations of these modes as has been done for the Anderson model <cit.> or the Gaussian Rosenzweig-Porter model <cit.>, in order to investigate specific footprints of the statistical localization mechanism. It will also be interesting to carry out an analysis similar to the one in this paper for the sparse Barrat-Mézard trap model <cit.>, which like the model considered here exhibits divergences in the bulk of the spectral density. 
Work in this direction is in progress.Finally, we point out that the phase described in earlier studies as a “frozen phase”has similar phenomenology to the one observed here <cit.>, and thus may potentially also be viewed as an instance of the notion of statistical localization that we have introduced here. A quantitative analysis along these lines is left as an interesting task for future work.* § NUMERICAL DETAILSWe generate 2^21/Ninstancesof weight matrices of size N ∈{2^13 , 2^14}, constructed according to equation (<ref>). For each instance, we extract the 50 eigenvectors closest to λ = 0. Then, for each eigenvector ψ^(α) with entries ψ^(α)_i we compute the q–moment (of the squared entries) as I_q = ∑_i=1^N |ψ^(α)_i|^2q. Finally, we obtain the typical value over all the vectors asI_q^typ = e^⟨ln I_q ⟩∼ N^-τ(q)and estimate τ(q) as:τ(q) = - ln I_q^typ(N )- ln I_q^typ(N - δ N)/ln(δ N)with N = 2^14 and δ N=2^13. unsrt | http://arxiv.org/abs/2311.15808v1 | {
"authors": [
"Diego Tapias",
"Peter Sollich"
],
"categories": [
"cond-mat.dis-nn",
"cond-mat.stat-mech",
"math.PR"
],
"primary_category": "cond-mat.dis-nn",
"published": "20231127133111",
"title": "Multifractality and statistical localization in highly heterogeneous random networks"
} |
empty Problems in the astrophysics of accretion onto compact celestial bodies ^*^*Invited book chaptersubmitted for publication in "Highlights of theCompact Objects Sciences in the Last Decade", edited by Şölen Balman, IU Press (Istanbul University Press, Turkey)Jean-Pierre LASOTA^1,2^1 Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences,Warsaw, Poland ^2 Institut d'Astrophysique de Paris, CNRS et Sorbonne Université,Paris, FranceE-mail: [email protected] people do not read; if they read, they do not understand. And those who understand forget.Henry de Montherlant (French writer; 1895-1972) Abstract Although during the last decade new observations and new theoretical results have brought better understanding of the physics of accretion onto compact objects, many old and several new questions and problems await answers and solutions. I show how the disc thermal–viscous instability model applied to both cataclysmic variable stars and X–ray binary transients compels us to conclude that assuming the existence in these systems of a flat accretion disc extending down to the accretor's surface or to the last stable orbit and fed with matter at its outer edge is too simple and inadequate a description of these objects. It is also clear that, in most cases, these discs cannot driven by (anomalous) viscosity only. The origin of the superhumps observed in cataclysmic variables and X–ray binaries is, contrary to the common opinion, still unknown. In accreting magnetic white dwarf systems outbursts not of the dwarf–nova typecan be due to the magnetic gating instability and/or thermonuclear micronova explosions. Although the “typical” lightcvurves of X–ray transients can be described by analytical formulæ (but their decay phase is not exponential), observations show that in many cases the light variations in these systems are much more complex. An elementary argument shows the impossibility of magnetars in pulsing ultraluminous X–ray systems, but we still do not have a complete, self-consistent description of supercritical accretion onto magnetized neutron stars and the resulting (necessarily beamed) emission. Although it is (almost) universally believed that active galactic nuclei contain accretion discs of the same type as those observed in binary systems, the evidence supporting this alleged truth is slim and the structure of accretion flows onto supermassive black holes is still to be determined. Keywords: black holes, neutron stars, white dwarfs - accretion, accretion discs - binary stars§ INTRODUCTION §.§ Accretors There are three types of compact celestial bodies in the Universe: white dwarfs, neutron stars and black holes. Other types of compact objects have also been proposed, e.g. strange stars, gravastars, Q-stars, boson stars etc., but so far, no evidence of their existence has been found.Compactness is defined through the size of the body relative to its gravitational radius R ∼ R_g = GM/c^2. For white dwarfs, this is R_g/R ∼ 10^-3 - 10^-4, so they are “weakly” compact, but their efficiency of accretion ∼ R_g/R can be comparable to the efficiency of thermonuclear reactions. Neutron stars are obviously relativistic bodies with R_g/R ∼ 0.1.Black holes are purely general–relativistic objects. However, this does not mean that they must be extremely dense, or that the gravitation at their surface must be extremely strong, contrary to what is (too) often asserted in the media and even in the astrophysical literature. 
The mean density of a black hole is equal toρ(R_g)= 1.0 M/R_g^3=(7.8× 10^8/M/)^-2 ,so the mean density of a 10^9 black hole is the same as the density of water. It's not clear what the density of a black hole corresponds to, but since the space–time curvature is also ∼ M/R^3, Eq. (<ref>) tells us that the curvature near a black-hole horizon is not necessarily strong. The space-time curvature in the vicinity of the famous M87* black hole is weaker than on the Earth's surface. In this sense, gravitation is stronger around us than it is near this six–billion solar-mass compact body. But this does not exempt us from using Einstein's theory of gravitation when describing physical processes in the vicinity of a supermassive black hole. Although the pull of gravitation can be locally suppressed by free–falling, crossing a black–hole surface, despite being unnoticeable, has literally inescapable consequences. Above the black–hole's surface, its gravity pull can be locally counteracted by applying external forces (thorough engines) but at the surface itself this requires infinite energy. What is, usually, called the “black–hole surface” is, in fact a 2D slice of a stationary and axisymmetric null hypersurface – a global, space–time structure. This has consequences that are, too often, ignored by astrophysicists.The space–time curvature certainly cannot be ignored when considering motions on the scale of the curvature radius, such as propagation of light emitted near the black hole surface (see e.g. ). Also the popular “pseudo–potential” of <cit.> should be used with care, since it describes correctly only Keplerian orbits (in the Schwarzschild metric) of massive particles in the equatorial plane, but not other types of motions (and not the light propagation, of course).Black holes supposedly come in three mass categories: stellar–mass, intermediate–mass and supermassive (sometimes just called “massive”). The first category is observed mostly in binary systems with a normal star, but since 2015, mergers of couples of such black holes have been observed in gravitational waves. While the masses of black holes in electromagnetically observed binaries span the range 5 - 21 <cit.>, in the case of detection through gravitational waves this range is much larger, spanning ∼ 3 - 80 <cit.>. This difference could be due to the fact that observations in gravitational waves reaching up to z ≲ 0.7, access a much larger range of metallicities than observations in electromagnetic waves <cit.>. Low metalliticities allow the formation of high–mass (> 30) black holes <cit.>. The existence of intermediate–mass black holes (IMBHs) is still to be confirmed. It is now clear that they are not components of the ultraluminous X–ray sources <cit.>, but black hole mergers can result in ≳ 100 objects, formally in the IMBH range. The existence of supermassive black holes in galaxy centres is well established. The maximum mass of an accreting black hole could be equal to 2.7 × 10^11 <cit.>, but this result relies on the (untested, see Sect. <ref>) hypothesis that standard accretion discs are present in active galactic nuclei (AGN). It is conceivable that primordial, “stupendously” massive (10^12- 10^18M_⊙) black holes exist <cit.>.§.§.§ Accretion flows Most of accretion flows onto compact celestial bodies contain a sufficient amount of angular momentum to form a flattened rotating structure, usually called an “accretion disc” even when it looks more like a torus. 
In binary systems, an accretion disc always form when the mass–loosing star is filling, or almost filling, its Roche lobe. It is not clear what is the structure of the inner accretion flow in active galactic nuclei (see e.g.and Sect. <ref>), but in at least some cases a spectacular disc is observed far from the nuclear black hole (e.g. a warped Keplerian disc in NGC 4258; ). In the case of cataclysmic variables (CVs), thanks to observations of eclipsing systems, there is no doubt that the accreting white dwarf is surrounded by a geometrically thin Keplerian disc. Some such discs have been observed in a bright steady state (e.g., ), other in a non–steady quiescent state (e.g. ). Despite this certainty, the predictions of accretion disc models do not always correspond to observations. This is the case of disc spectra which fail to be faithfully reproduced by the standard α–disc model (see e.g. ). The problem lies in the disc's vertical structure which still not well understood <cit.>. In other systems, in cases where the existence of an accretion disc is not well established, model predictions not corresponding to the observed spectra (as e.g. ) should not be authomaticallyused as a decisive argument against the disc's presence.It seemed that the discovery by <cit.> that the magneto–rotational–instability (MRI) triggers turbulence in Keplerian discs, had finally solved the problem of the origin of the accretion driving mechanism, but in reality things are more complicated. First, the MRI works only in ionised discs, while quiescent (low temperature) discs between dwarf–nova and transient X–ray binary outbursts are neutral, hence not subject to MRI <cit.>. The same is true of protostellar accretion discs <cit.>. Therefore some other angular–momentum transport mechanisms have to be at work in such discs (e.g., winds, ). Second, in hot discs of dwarf novæ and X–transients in outburst, the observed decay times imply a viscosity parameter α (corresponding to the ratio of the (vertically averaged) total stress to thermal (vertically averaged) pressure) ≈ 0.2 (in the case of dwarf novæ, ) and ≥ 0.2 (for X–ray transients, ), while MRI simulations (with no net magnetic field) give an α∼ 0.01. In addition, the disc instability model (DIM) that correctly reproduces the main properties of dwarf–nova and X-ray transient outbursts requires a ratio 4 – 10 between the values of α in the hot and cold states of the disc <cit.>. This is not reproduced by the standard MRI simulations. α increases to ∼ 0.1 when convection appears in the MRI simulations () but this is not sufficient to produce dwarf–nova lightcurves resembling the ones observed <cit.>. It seems that additional ingredients, such as winds () must play a role in driving disc accretion.Despite its obvious weaknesses, the DIM (seefor reviews of the model) has proved to be a powerful tool to test some basic properties of accretion discs, mainly those relating to the accretion driving mechanisms. The best test-bed for the MRI simulations are CV discs, in particular those of dwarf–novæ, since they are the real structures closest to what these simulations are supposed to describe. 
Due to a couple incorrect determinations of the distance to the closest dwarf nova SS Cyg, the veracity of DIM was put out to doubt, but its reputation has been rescued by distance measurements by radio interferometry and Gaia (seeand references therein).In this chapter I present and discuss problems with understanding accretion onto compact objects, that have been solved or have arisen mainly during the preceding decade. As usual, solving some problems, gives rise to new ones. The chapter begins with a reminder of the basic DIM features. This is followed by a discussion of the problems of applying this model to the description of the observed dwarf–nova outbursts. I then present the inadequacy of the almost universally used model of superhumps but I also enumerate the weaknesses of the alternative. The next section deals with outbursts of systems with magnetised white dwarfs. I consider first the (rare) cases when such binaries appear as dwarf novæ, then I present the successful application of the magnetic–gating instability model, designed to explain the neutron–star Rapid Burster, to the case of intermediate polars. I end this section with a discussion the problems with the recently proposed phenomenon of micronovæ. The following part of the article deals with X-ray binaries. I begin with presenting an analytic method of describing the decay–from–maximum light curves of X-ray transients. Then I provide a detailed discussion of pulsing ultra–luminous X-ray sources (PULXs) in which I present a simple argument that exclude the presence of magnetars in these systems. A short subsection deals with recent results on transient ULXs. The chapter ends with a section concerned with the problem of accretion discs in AGNs.§ THE THERMAL–VISCOUS DISC–INSTABILITY If they are sufficiently large, all hot (T > 10^4K), standard, stationary accretion discs are thermally unstable. In such discs the viscous heating rate per unit surface can be written asQ^+=𝔗Ω^'/4π R=9/8ΣνΩ_K^2,where Ω is the angular velocity, the prime denotes the radial derivative, Σ=∫^+∞_-∞ρ dz is the column density and 𝔗 the total “viscous" torque (see e.g. ). In the last equality, the disc is assumed to be Keplerian (Ω=Ω_K).For Keplerian discs the angular momentum conservation in the form ofṀ (ℓ -ℓ_ in) =𝔗,ℓ being the specific (per unit mass) angular momentum, implies the following relation between viscosity and accretion rateνΣ =Ṁ/3π[1 -(R_ in/R)^1/2],where R_ in is the inner disc radius. Therefore, from Eqs. (<ref>) and (<ref>) it follows thatQ^+ ≡σ T_ eff^4= 3/8πGMṀ/R^3[1 -(R_ in/R)^1/2],hence the temperature of the disc is decreasing with distance from the centre:T_ eff∼ R^-3/4.Notice that Eq. (<ref>) assumes only the stationarity and Keplerianity of the disc, so it is a universal relation independent of the accretion mechanism.It follows that for r=R/R_S ≫ 1 (R_S=2R_g) the temperature profile of a stationary Keplerian accretion disc can be writtenT_ eff=T_ in(r/3)^-3/4,whereT_ in=(3GMṀ/8πσ (3R_S)^3)^1/4≈ 3.0 × 10^9 m^-1/2ṁ^1/4 K,if one assumes that R_ in=3R_S, i.e. the ISCO for a non-rotating black hole. M=m and Ṁ=ṁ M_ Edd, where the Eddington accretion rate is defined as ≡4π GM/ηκ_ esc = 1.4× 10^18η_0.1^-1(M/)g s^-1,where η=0.1η_0.1 is the radiative efficiency of accretion and κ_ es the electron–scattering (Thomson) opacity coefficient.For white dwarf accretors the ISCO is not relevant since the stellar radius R_⋆≫ R_S and typical accretion rates are usually well below the Eddington value. 
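A toy evaluation of the effective-temperature relation above (our own snippet, in cgs units, taking ~10^4 K, roughly the hydrogen-recombination temperature, as the critical value; the precise fit value used in the disc instability model is somewhat lower) reproduces the orders of magnitude of the critical hot-disc radii quoted in the next paragraph:

import numpy as np

G, sigma_SB, Msun = 6.674e-8, 5.67e-5, 1.989e33    # cgs units

def T_eff(R, M, Mdot, R_in):
    """Effective temperature of a steady Keplerian disc, from the relation above."""
    f = 1.0 - np.sqrt(R_in / R)
    return (3.0 * G * M * Mdot * f / (8.0 * np.pi * sigma_SB * R**3))**0.25

def hot_disc_radius(M, Mdot, R_in, T_crit=1.0e4):
    """Largest radius at which T_eff still exceeds T_crit; discs extending beyond this
    radius cannot remain hot and steady everywhere."""
    R = np.logspace(np.log10(1.01 * R_in), 14, 4000)
    T = T_eff(R, M, Mdot, R_in)
    hot = R[T > T_crit]
    return hot.max() if hot.size else R_in

# 10 Msun black hole accreting near Eddington (Mdot_Edd ~ 1.4e19 g/s for eta = 0.1):
M_bh = 10.0 * Msun
R_in_bh = 3.0 * 2.0 * G * M_bh / (2.998e10)**2      # inner edge at 3 R_S
print(f"{hot_disc_radius(M_bh, 1.4e19, R_in_bh):.1e} cm")      # ~2e11 cm
# 0.8 Msun white dwarf with R_* ~ 8.7e8 cm and Mdot = 1e16 g/s:
print(f"{hot_disc_radius(0.8 * Msun, 1.0e16, 8.7e8):.1e} cm")  # ~6e9 cm, i.e. of order 1e10 cm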
In this case the disc temperature profile is more conveniently written asT_ eff=T_ in(R/R_⋆)^-3/4,with T_ in=(3GMṀ/8πσ R_⋆^3)^1/4≈ 4.1 × 10^4 m^1/4Ṁ_16^1/4R_9^3/4 K,where R=R_910^9cm.We see that, for sufficiently large discs, depending on the accretor's mass and accretion rate, even for very hot discs, a radius will be reached where the temperature drops below ∼ 10^4K, roughly the hydrogen recombination temperature. This is where the disc not only stops being hot but also becomes thermally unstable (see, e.g. ). For neutron stars and stellar–mass black holes, the critical (maximal) hot–disc radius is 10^11 - 10^12cm for ṁ∼ 1 and≳ 10^10cm for ṁ≈ 10^-3. For white dwarfs, this radius varies from ∼ 10^10cm for Ṁ_16≲ 1, to ≳ 10^11cm for Ṁ_16≳ 100 (notice that such accretion rates do not result in luminosities close to the Eddington value, because the accretion efficiency R_g/R_* for white dwarfs is ≲ 0.001 and not ∼ 0.1).Therefore cataclysmic variables, whose accretion discs have radii larger than the critical value, cannot be steady. § CATACLYSMIC VARIABLES §.§ Dwarf novæFigure <ref> shows that they do indeed exhibit outbursts: they are dwarf-nova stars which show repeated outbursts with an amplitude larger than about 2 optical magnitudes on timescales of weeks to decades. In this figure the solid red line corresponds to the stability criterion obtained from Eq. (<ref>) by assuming (R)=T_ crit, where T_ crit is the value of the temperature at which the disc becomes thermally unstable. The critical values of parameters at which the disc becomes unstable are calculated through fits obtained from numerical models of the disc's vertical structure. The stability limit in Figure <ref> uses fits from <cit.> where ( crit)=6890K. Eq. (<ref>) provides Ṁ_ crit(R_D), so to get Ṁ_ crit(P_ orb) one uses a relation between the disc radius R_D and the orbital period P_ orb, obtained assuming that the radius of the disc is a fraction f(q) of the binary separation a: R_D= f(q)a=2.28 × 10^9 f(q) M_1^1 / 3 P_min^2 / 3 cm, where q is the mass ratio (mass-of-the-companion/white-dwarf-mass). In general, f is well approximated by f=0.6/(1+q)^2/3.In Fig. <ref> all the observed systems above the stability limit are steady (“nova–like”), as they should be; all systems below show outbursts, except for one: a very special magnetic binary AE Aqr, also known as a source of very–high energy emission, for which the method of deducing the mass–transfer rate from the secondary used in plotting the figure does not apply (seefor details).§.§ Dwarf–nova lightcurvesThe DIM for dwarf novæ is not only successful in predicting which CV must be a dwarf nova, but also is able to reproduce the lightcurves of (at least some) of these systems.Figure <ref> <cit.> compares observations of SS Cyg with the lightcurve calculated using the DIM code by <cit.>. The DIM reproduces the two types of outbursts observed in this best–observed (and brightest) dwarf nova, as well as the recurrence time. However, one should stress that the “standard” DIM cannot reproduce the observed sequences of dwarf–nova outbursts (see e.g. ). By “standard” (or “basic”) I understand the model which assumes that the disc always extends to the white-dwarf surface and is supplied in mass to its outer edge at aconstant rate. To calculate the lightcurve in Fig. <ref>, the disc was assumed to be truncated and the mass–supply was smoothly varied by 15%. However, SS Cyg, a dwarf-nova of U Gem type, has a relatively simple lightcurve, if one exclude occasional anomalies. 
In other type of systems the lightcurves are more complex. For example, in Z Cam–type dwarf novæ, the decay from outburst's peak is interrupted by a standstill. In this case the DIM can reproduce the lightcurves of Z Cam stars if one takes into account the heating of the outer disc by the impact of the mass–transfer stream and by the tidal torques and if the mass-transfer rate from the secondary varies by about 30% around the value critical for stability <cit.>. In this case, the disc during standstill is hot and stable. This, however, seemed to be false when outbursts appearing during standstills were observed <cit.>. <cit.> showed that applying the DIM to such systems requires a rather special type of mass–transfer bursts from the secondary. Such bursts should last a few days and have short rise–times and exponential decaysfollowed by short but significant mass-transfer dips.They could result from a giant flare near the Roche–lobe filling, secondary's star surface, due, for example to the absence of star spots in the L1 region.All these truncations and mass–transfer variations could look like made–up tricks had they not been observed in real CVs. Disc truncation is confirmed by X–ray observations (see e.g. ) and could be due either to the action of the white–dwarf's magnetic field (see Sect. <ref>) or to evaporation <cit.>. Huge mass–transfer variations are directly observed in polars (AM Her stars; seeand references therein). Since the strong magnetic moment of the white dwarf prevents disc formation in these systems, there can be no doubt that the observed luminosity variations are provoked by changes in the mass–transfer rate from the companion star. Also in the case of VY Scl (see Sect. <ref>) it is clear that the mass-transfer simply switches off. Short–term variations are most probably provoked by the movements of star spots, as other mechanisms involving the star's bulk, or the mass transferare excluded <cit.>. Things become even more complicated when one tries to apply the DIM to dwarf–novæ called SU UMa stars. In these systems, in addition to normal dwarf–nova outbursts, one also observes longer and brighter eruptions, so-called superoutbursts. Superoutbursts occur regularly, typically every few normal outbursts. A sub–class of SU UMa stars, the WZ Sge–type dwarf novæ, show superoutbursts only. The nature of the mechanism producing superoutbursts is still a subject of controversy. On the one hand, the tidal–thermal instability (TTI) model, proposed by <cit.> invokes the existence of a tidal instability that is supposed to occur when the mass ratio is low enough for the 3:1 resonance radius to fit inside the tidal radius defining the outermost disc radius (see Sec. <ref> for details and the allegedly related superhump phenomenon).This tidal instability is supposed to generate an increased viscous dissipation in the whole disc, thus leading to a superoutburst until the disc shrinks enough to be well inside the tidal radius. The physical mechanism behind the supposed viscosity–increase mechanism in TTI remains a total mystery.On the other hand, <cit.>, <cit.> and <cit.> proposed that superoutbursts are caused by an enhanced mass transfer (EMT) from the secondary. Next, <cit.>showed that relating the mass transfer rate to the accretion rate, i.e. assuming that irradiation of secondary increases the mass–transfer rate, allows one to reproduce the observed visual light curves. The irradiated EMT (IEMT) works quite well (see e.g. ) and succeeds where the TTI model fails, i.e. 
it reproduces the lightcurves of the frantic SU UMa stars: the ER UMa's that are never in quiescence, showing superhumps (see next Section) even when exhibiting normal outbursts. However, while in contrast to the TTI, the IEMT superoutburst mechanism is well–specified, it is not immediately clear that it can work in real binaries. The problem is that, because it is shadowed by the disc, the L1 point cannot be irradiated directly, which precludes any significant increase of the mass transfer rate <cit.>. This is true if the disc is flat. But discs in binary systems might be warped which would allow the L1 point to be irradiated directly, at least at some orbital phases. Simulations by <cit.> show that when the disc is warped, irradiation can “agitate” the mass–transfer from the secondary. The simulations leading to this conclusion are still only in 2D, we shall have to wait some time for a detailed description of the irradiated mass–transfer enhancement. As we shall see in a moment, a warped disc has another useful property: it allows the mass–transfer stream to over(under)flow the disc surface, modifying, among other things, how mass and angular momentum are delivered to the disc. One should stress, however, that in cataclysmic variables the warping mechanism is not well established. In fact, the only viable mechanism proposed is itself related to secondary's irradiation. <cit.> finds that the mass–transfer stream through L1 has a component perpendicular to the disc plane which oscillates in phase with the binary period. He suggests that this comes about because the tilted disc enables the neighbourhood of the L1 point to be heated in an asymmetric manner, which varies with the orbital period. Reproducing the extraordinary lightcurve of the dwarf nova TCP J21040470+4631129 (Fig. <ref>) with the DIM is a real challenge. This system exhibited, first a bright superoutburst, followed by two normal outbursts that were succeeded by three superoutbursts (fainter than the first one), the first two of which were separated by a normal outburst. In total 4 superoutbursts and 3 normal outbursts during 300 days. All superoutbursts exhibited superhumps.<cit.> showed that the main features of this astonishing lightcurve can be reproduced by the DIM with the following additions: the mass transfer rate from the secondary increases by several orders of magnitudes during the initial superoutburst. Then the mass–transfer rate slowly returns to its secular average and causes the observed succession of outbursts with increasing quiescence durations until the disc is steady, cold, and neutral; its inner parts are truncated either by the white dwarf magnetic field or by evaporation. The very short, quiescence phases between reflares are reproduced when the mass-transfer stream overflows the disc. The luminosity in quiescence is dominated by a hot white dwarf that cools down on timescales of months.Using similar additions to the DIM, one can also produce lightcurves containing rebrightenings closely resembling those observed in two WZ Sge stars, the prototype and EG Cnc <cit.>. 
All these supplements may look like “epicycles”[On the other hand, epicycles, deferents and equants do not deserve their bad name (see https://farside.ph.utexas.edu/books/Syntaxis/Almagest/index.htmlA Modern Almagest)],but their necessity proves that an accretion disc by itself, even when a realistic model is used to describe its behaviour, will not be able to reproduce lightcurves of TCP J21040470+4631129's complexity.§.§ The superhump problemSuperhumps are periodic light variations, with periods slightly longer (in the case of positive superhumps[There exist also negative superhumps, with slightly shorter periods.]) than the orbital period, observed in the light–curves of dwarf–nova superoubursts. Superhumps are also observed in some bright (nova–like) cataclysmic variables, the so–called “permanent superhumpers”.A popular explanation of the superhump phenomenon is given by thetidal-resonance model (TRM, Whitehurst 1988, Hirose and Osaki 1990) according to whichit results fromperiodic enhancement of tidal stresses in an eccentric accretion disc undergoing apsidal motion. The mechanism producing the disc's eccentricity is supposed to beprovided by the 3:1 resonance between the orbital frequency of the binary system andthe orbital frequency of the outer parts of the deformed disc. For this mechanism to work, the 3:1–resonance radiusR_3:1 =1/3^2/3(1+q)^1/3,where q is mass–ratio (secondary/white-dwarf), must be smaller than the disc (tidal) radius. For a long time and not very–well known reasons it was believed that the value of the maximum mass–ratiofor the R_3:1<R_ tid condition to be satisfied isq_ crit=0.25 or evenq_ crit=0.39. <cit.> calculated this critical ratio from the first principles obtaining a smaller value: q_ crit=0.22.All observed permanent superhumpers have q> 0.24. The prototypical dwarf–nova U Gem have q=0.35, but in 1985 went into a superoutburst. In dwarf novæ of the SU UMa type, superhumps appear during superoubursts, so when <cit.> have pointed out that the outburst of U Gem observed in X–rays by <cit.> is in fact a superoutburst (the only one seen during almost 170 years of continous observations), <cit.> searched in the archival data for a superhump and found one. Its statistical significance is only 2σ <cit.> but if superhumps in dwarf novæ are related to superoutbursts, its presence should be expected. The argument against its reality on account of the “incorrect” mass–ratio of the system, has been considerably weakened by the observation of equally incorrect permanent superhumpers. But there is worse.The frequency of the eccentric–disc apsidal motion Ω_ abs is related to the orbital and superhump frequencies Ω_ orb and Ω_ SH throughΩ_ abs = Ω_ orb - Ω_ SH.The superhump excess, used to quantify the superhump phenomenon is defined asε_ SH≡P_ SH - P_ orb/P_ orb,(P=2π/Ω) and can be expressed through the apsidal frequency:ε_ SH=Ω_ absΩ_ orb - Ω_ abs.The initial version of the TR model assumed that the apsidal motion can be described by orbits of free particles and that this dynamical effect is given by a function of the mass ratio and the disc's effective radius Ω_ dyn=f(q,R) Ω_ orb, where the effective radius is assumed to be the 3:1 resonance radius in Eq. (<ref>). This formulation of the model failed, however, to describe the observations (see e.g. 
).But since, in reality, particles in the discs in question do not move on exactly free orbits, one could hope that adding a pressure term to the apsidal frequency formula:Ω_ abs= Ω_ dyn + Ω_ press,would solve the problem ().<cit.> have tested this hypothesis by comparing its predictions to observations of 21 CVs exhibiting superhumps (including two helium, AM CVn systems). His results are presented in Fig. <ref>.<cit.> has determined the observed apsidal frequency, then calculated Ω_ dyn, using the system's orbital parameters. This allowed him to calculate ΔΩ = Ω_ abs - Ω_ dyn, which according to the theory should be equal to Ω_ press and plotted it as a function of q. Clearly, the result contradicts the model: not only do the points not follow the theoretical curves but they show large scatter and do not seem to represent any clear trend. Clearly, theTR model is not in agreement with observation. Unfortunately, it is widely used for their interpretation.The irradiation–modulated mass–transfer (IMMT) model (), based on purely observational evidence, explains superhumps as being due to the periodically variable dissipation of the kinetic energy of the stream which results from variations in the mass transfer rate which are produced by the modulated irradiation of the secondary star. This is a purely “observational model”, providing a description of the phenomenon but not its explanation. It does not provide the mechanism of the clock ticking at the superhump period. Since the model assumes that the mass–transfer is modulated through the irradiation of the secondary, it would imply the presence of a warped accretion disc, but the stream overflow of such a disc is supposed to produce only negative superhumps, not the excess–period ones observed. Of course the accretion stream can also overflow a flat disc but then it is unlikely that the secondary's irradiation would be able to increase the mass–transfer rate. On the other hand, there are systems showing both negative and positive superhumps, but they are rather an exception.The superhump phenomenon in CVs still awaits a complete explanation (see e.g., ).§ INSTABILITIES IN MAGNETIC WHITE–DWARF SYSTEMSWhen its moment is sufficiently large, the accretor's magnetic field can affect the structure of the inner accretion flow. White dwarfs, whose dipolar magnetic fields can have more than 10^7G, can have sometimes magnetic momentslarger than 10^34Gcm^3, 4 orders of magnitudes larger than that of the usual X-ray pulsars (XRPs) with magnetic fields ∼ 10^12 G. Only the most extreme magnetars have magnetic moments comparable to that of polars (AM Her stars), but since they are never members of binary systems, this is not relevant for accretion processes.The condition for accretion–disc formation and existence can be written asR_circ> R_Mwhere the circularisation radius is defined throughR_circ =3.5 × 10^10 m_1^1/3 f(q) P_orb(h)^2 / 3 cm,where 0.12 ≲ f(q) ≲ 0.3, while themagnetospheric radius can be written asR_M=1.4 × 10^10Ṁ_15^-2 / 7 m^-1 / 7μ_32^4/7 cm,where μ=10^32 Gcm^3μ_32 <cit.>.Therefore when μ≈ 10^34 G cm^3, for systems with P_orb < 8 hr, a disc will not form, even for Ṁ_15 = 100,This is case of polars, for which the magnetospheric radius is larger than the orbital separation and the secondary is brought by magnetic torques torotation synchronous with the orbital motion. For lower magnetic moments, depending on the orbital period and mass–transfer rate, disc might form, as is the case of intermediate polars (IPs). 
However, if the mass–transfer from the companion star diminishes or even stops, such discs will vanish when the increasing magnetospheric radius reaches and overtakes the circularisation radius. This phenomenon was used by <cit.> to explain the absence of dwarf–nova outbursts during the decay and rise phases of the VY Scl stars luminosity variations. In these nova–like CVs, with orbital periods mostly between 3 and 4 hours, the mass–transfer diminished on timescales longer than the disc viscous time, reaching a minimum during which the transfer of mass stops. Then on similar timescales the system reaches its initial, quasi–steady luminosity state. Since during the luminosity decay and rise the system crosses the thermal–viscous instability strip, one would expect to observe dwarf–nova outbursts, but none has been observed. <cit.> found that this can only happen if,during the decay, the magnetospheric radius exceeds the circularisation radius, so that the disc disappears before it enters the instability strip for dwarf nova outbursts. And on the way up, the disc reappears only when it can be stable. The principle is simple: no disc outbursts must mean: no disc <cit.>. But this must also mean that white dwarfs in VY Scl stars dwarf–nova outbursts are magnetised, with a magnetic momentμ≳ 1.5 × 10^33 f_0.12^1.75 P_orb(4h)^2.06(3 R_out/ a)^1.34 m^1.4G cm^3,where R_ out is the outer disc radius, and a the orbital separation.Polars and IPs exhibit low states similar to those observed in VY Scl stars. Since polars have no discs, the dimming of these sources can be due only to a drop of the mass–transfer from the secondary. But IPs with discs should be become discless by the same mechanism as VY Scl stars. <cit.> showed that observations of the IP FO Aqr are well accounted for by the same mechanism that we have suggested to explain the absence of outbursts during low states of VY Scl stars. This has been confirmed in detail by the observations of this system by <cit.>.Until now, only the VY Scl star DW Cnc has been confirmed to be an IP with a spin period ∼ 38.6 min <cit.>. Some other systems of this type could be magnetic according to e.g. <cit.>. Three SW Sex stars: LS Peg and V795 Her, RXJ1643.7+3402 are known to exhibit optical modulated circular polarization <cit.>, an unmistakable signal of the presence of a strong (> 10^6 G) magnetic field, §.§ Intermediate Polars as dwarf novæDwarf nova outbursts from intermediate polars are rare despite the fact that many of them having mass–transfer rates locating them in the thermal instability strip, i.e. a mass–transfer rate satisfyingṀ_tr<Ṁ^+_ crit(R_out)=9.5 × 10^15 m^-0.88(R_out/10^10 cm)^2.65 g s^-1,where R_outis the “effective” disc radius (usually ∼ 0.8 R_D; ), andṀ_tr>Ṁ^-_ crit(R_in)=8.4 × 10^12 m^-0.89(R_in/10^9 cm)^2.68 g s^-1.In general, for hydrogen–dominated discs extending down to the white dwarf's surface, the lower limit is too low to be of much interest[But stable cold helium-dominated discs of AM CVn stars, satisfying Ṁ_tr<Ṁ^-_ crit,He(R_in) are observed (see ). Because of the higher ionisation potential of helium, Ṁ^-_ crit,He≫Ṁ^-_ crit,H <cit.>.].However, in the case of IPs, R_in≈ R_M, putting Eq. (<ref>) in Eq. (<ref>) givesṀ_tr< 6.6 × 10^15 m^-0.72μ_32^0.87 g s^-1,which is quite a realistic mass–transfer rate for these systems.IPs with nodisc (R_circ< R_M) obviously cannot have dwarf–nova outbursts. 
But the majority of IPs with magnetic fields allowing the presence of discsseem to be mostly steady, and the rare observed outbursts, in particular in systems with long orbital periods, are much too short (sometimes lasting less than the orbital period) to be dwarf–nova outbursts, since only long outbursts (lasting a few days) result from the thermal–viscous disc instability. In many cases the mass transfer is low enough and the magnetic field strong enough to keep the accretion disc stable on the cold equilibrium branch <cit.>. §.§ Magnetic–gating instabilityThe nature of the short (and rare) IP outbursts therefore requires an explanation, especially now that the Transiting Exoplanet Survey Satellite (TESS), with its unprecedented monitoring of the optical sky has drastically increased their observed number.<cit.> and <cit.> proposed that the repetitive series of rapid, low-amplitude luminosity bursts in MV Lyr, TW Pic and V1025 Cen are produced by the magnetic–gating instability, originally proposed by <cit.> to explain the Type II X-ray bursts of the Rapid Burster. This instability appears in systems with a magnetised accretor, when the inner disc radius, i.e. the radius of the magnetosphere, is close to the corotation radius R_cor, defined as the radius at which the centrifugal forces on matter corotating with the white dwarf balance gravity forces:R_cor=(GM/Ω^2_spin)^1/3 =3.52 × 10^10 M_1^1 / 3 P_spin(h)^2 / 3 cm.In such a situation, the rapidly rotating magnetosphere prevents accretion and causes the accretion flow to pile up just outside the magnetospheric boundary. Eventually, this material compresses the magnetosphere until it is able to couple to the magnetic field lines (opening a gate in the magnetic wall) and accrete. Once the reservoir of matter outside the magnetosphere is depleted, the cycle repeats itself, giving rise to episodic bursts of accretion. The magnetic–gating instability model (MGIM) was developed for magnetic neutron stars by <cit.> (hereafter D'AS). However, there is a problem with such an interpretation of white-dwarf system outbursts: MV Lyr and TW Pic are not known to be magnetic (although MV Lyr is a VY Scl star) and V1025 Cen has no disc, while the MGIM assumes its existence.<cit.> adapted the MGIM to the case of accreting magnetic white dwarfs and applied it to the bona fide disc–possessing IP V1223 Sgr (Fig <ref>). The main uncertainty in the description of the disc-magnetosphere border is its width, the size of the region where the two interact. The most popular, and still widely used, is the model by <cit.> which assumes that the stellar fields invade the disc over a large range of radii. However, the problem with this picture is that to make it work, one needs a very large (and unrealistic) magnetic diffusivity (see e.g.and references therein).Taking this into account, in their model D'AS correctly assume that the width of the disc-magnetosphere interaction region is narrow.They define a critical accretion rateṀ_c = εμ^2/4Ω_ spinR^5_cor =2.63 × 10^14 P_spin(h)^-7 / 3m^-5 / 3μ_33^2 ,where ε is a numerical factor describing the distortion of the magnetic field by the disc. Following D'AS <cit.> took it to be equal to 0.1 (they call it η).Ṁ_c is the rate at which the inner (magnetospheric) disc radius is equal to the corotation radius. When the inner disc radius is less than R_cor, disc accretion proceeds in a standard way. In the opposite case, the accretion rate is vanishingly small. 
The model depends on two parameters: Δ R – the width of the disc–magnetospheric interaction, and Δ R_2 - the characteristic width of the change of accretion–rate through the interaction zone. <cit.> adapted the D'AS model to the parameters of an IP, but in contrast to D'AS they used a standard definition of the viscosity parameter α and applied it to configurations of thermally stable–hot and stable–cold discs. They found magnetic–gating instabilities in both cases, but in the cold case the outburst amplitudes are too low to be of interest and the recurrence times are too long: of the order of years. In the hot disc case, for mass–transfer rates Ṁ^+_ crit <Ṁ_tr< Ṁ_c, <cit.> find (as do D'AS) two regimes of magnetic–gating : RII and RI (see Fig. <ref>). In the RII region the outbursts have low amplitudes and short recurrence times. In this regime, the accretion rate varies smoothly with a period of a few hours, by less than a factor oftwo. Regime RI, on the other hand, corresponds well to the sequence of short outbursts observed in V1223 Sgr. Fig. <ref> (bottom) shows the variations of the accretion rate whenṀ_ tr = 0.045Ṁ_c = 1.24 × 10^17 and Δ R/R_ in = 0.04. These parameters correspond to the (white on the left, black on the right) point in the instability map in Fig. <ref>. The magnetic moment μ_33=15 is more appropriate for polars than for IPs, but was chosen in order to explore a large range of mass-transfer rates, reaching down to 0.01 Ṁ_c. For lower accretion rates (depending on Δ R), the disc would become thermally unstable. <cit.> verified that simulations for μ_33=7 produce a lightcurve similar to that of Fig. <ref>. The MGIMexplicitly assumes the presence of an accretion disc. The similarities between the lightcurves of V1223 Sgr and the discless IP V1025 Cen <cit.> suggest that this model should also apply to systems with an accretion annulus (torus; see, e.g. ), instead of a disc.It seems that isolated, short outbursts that cannot be of the dwarf–nova type are not due to the MGI. Although this instability can produce long recurrence times, of the order of a month or more, the outburst duration would then also have to be long, because the ratio between the outburst recurrence time and its duration is roughly equal to the ratio between the mean mass-transfer and the peak–accretion rates, which is less than 30 in the model. We should also notice that the profiles of isolated outbursts often have a sharp rise and a slower decay, in contrast with the almost symmetric profiles produced by the MGIM and observed in V1223 Sgr and V1025 Cen. In fact, the shape of some isolated short bursts is similar to that of X–ray bursts from neutron stars. Hence the idea that they could be of thermonuclear origin.§.§ MicronovæIn several cataclysmic variable, TESS has observed short-duration fast-rise exponential-decay bursts, grouped inpairs or triples and lasting a few hours, with recurrence times of days to months. Such events are therefore different (mainly in recurrence–time properties) from the outbursts discussed in the previous section. <cit.> noted such events in the IPs TV Col (see Fig. <ref>), EI UMa and in the CV ASASSN-19bh. <cit.> discovered similar bursts in the recurrent nova V2487 Oph during quiescence. During these bursts the optical/UV luminosity increases by a factor of more than 3 in less than an hour and decays during ≈ 10 hours. Fast outflows with velocitieslarger than 3500, comparable to the escape velocity from the white dwarf surface, have been observed in UV spectral lines. 
The bursts have a total energy ≈ 10^-6 that of classical nova explosions (“micronovæ”), and their lightcurves are similar to those observed in Type I X-ray bursts. Guided by energy considerations and these similarities, <cit.> proposed that these events result from thermonuclear runaway events in magnetically-confined accretion columns on the surface of accreting white dwarfs. Thus the series of short outbursts observed in IPs would be the analogue of Type II X-ray bursts, while those with longer recurrence times would be the equivalent of Type I X-ray bursts.The model assumes the accretion column on the magnetic poles of the white dwarf to be magnetically confined andincreasing in mass. The pressure exerted on the white–dwarf surface, resulting from the column's weight, causes its base to sink to greater depths. If this magnetic confinement can hold until the pressure at the base of the accreted column reaches P_ crit≈ 10^18 dyn /cm^2, a thermonuclear runaway (TNR) will start and burn through most of the overlaying accumulated mass in the column. The process can repeat every time the pressure at the column's base reaches the critical pressure required to drive a TNR.For the flow to remain confined in the column by the magnetic pressure P_B = B^2/8π, the conditionβ = P_ base/P_B<β_ crit,must be satisfied, whereβ = P_ gas/P_B,P_ gas is the gas pressure of the magnetically confined material, and β_ crit a critical value. As the weight of the column grows with time, the pressure at the base of the magnetically confined column (P_ base) will also grow, acting sideways on the magnetically confined boundary. When β>β_ crit the column pressure substantially distorts the magnetic field lines, and plasma from the accretion column may spread on to the white–dwarf surface. The pressure exerted at the base of the accretion column is given by P_ base(t) =GM_ WDM_ col(t)/4π f R_ WD^4,where M_ WD is the white–dwarf mass, M_ col(t)=Ṁ_ acc t the column mass, andf=( R_ col/2R_ WD) ^2,where R_ WD is the white–dwarf radius and R_ col the radius of the circular accretion–column's footprint. Then the accretion column will remain confined by the magnetic pressure P_B as long as β(t) = P_ base(t)/P_B < β_ critwhere β_ crit≈ 7α^2andα=R_ col/h,with h being the height of the accumulated material in the column <cit.>.Assuming that outbursts are produced by the freshly accreted hydrogen from the companion star, the CNO cycle flash will yield ∼ 10^16 erg/g. Since the observed micronovæ release10^38 – 10^39 erg, this implies column masses 5× 10^-12≲ M_ col≲ 5× 10^-11, which for a typical value M_ WD≈ 0.8 corresponds to a fractional accretion area f ∼ 10^-6. This is a much smaller value than that deduced from observations, which suggest rather f ∼ 10^-2 – 10^-3 (see,who, in a different context and for strongly magnetized polar systems, consider buried columns with f ∼ 10^-7, but find such a model “contrived”.). For masses M_ WD≳ 1.3, f can be increased to 10^-4, reaching 10^-3 for masses close to the Chandrasekhar limit. Such high masses and strong fieldscontradict the observationally determined white–dwarf's mass in TV Col: 0.74.The exploding–column model suffersfrom two other drawbacks. First, it assumes that the magnetic field lines are solidly anchored in the white dwarf, at the bottom of the accretion column. While such an assumption is justified in the case of a neutron star, it is rather uncertain when the accreting body is a white dwarf. 
Second, plasma confined by a magnetic field is subject to instabilities that may lead to leakage preventing accumulation of the mass required to ignite a TNR. But what is the alternative? A model in which TNRs are triggered by magnetically confined “blobs”, whose ram pressure reaches P_ crit≈ 10^18 dyn /cm^2, requires even smaller fractional areas f∼ 10^-10 <cit.>. Dwarf–nova outbursts are excluded, as are mass–transfer variationsor reconnection events <cit.>. In the end, the micronova explanation might be the best option.§ X-RAY BINARIES§.§ X-ray transientsIn X-ray binaries, the outer parts of the accretion disc are strongly X-ray irradiated by the central source <cit.>. This heating effect must be taken into account when considering the disc thermal stability <cit.>. In an irradiated disc the surface temperature is T_ surf =+ T_ irr,withT_ irr= 𝒞̃Ṁc^2/4π R^2,where the irradiation constant𝒞̃ <cit.> is usually taken to be ∼ 10^-3 (). Of course 𝒞̃, as defined by Eq. (<ref>), is unlikely to correspond to disc X–ray irradiation in all circumstances and this equation will have to be modified when more is known about the process it is supposed to be describing (see ). In practice, even small modifications appear to be useful in some cases (see, e.g. ).For an X-ray irradiated disc the stability criterion readsṀ > Ṁ_crit =9.5 × 10^14𝒞̃_-3^-0.36 m^-0.64+0.08 log𝒞̃_-3R_10^2.39-0.10 log𝒞̃_-3 g s^-1,where 𝒞̃=10^-3𝒞̃_-3, R= R_1010^10 cm and a factor very weakly depending on the viscosity parameter α has been dropped.Figure <ref> <cit.> shows the stability properties of neutron–star X–ray binaries. The non–irradiated–disc criterion is also plotted. Clearly, the irradiated–disc stability limit correctly separates steady systems from outbursting ones, i.e. X-ray transient systems. Whileboth steady and transient systems are found among low–mass, neutron–star X–ray binaries, all black–hole, low–mass X-ray binaries are transient. The reason for this difference isstill unknown <cit.>, butall such black–hole X–transients also lie below the irradiated–disc stability criterion <cit.>.§.§ Lightcurves of X-ray transients As for dwarf novæ, and for the same reason, in X-ray transients systems the simplest version of the DIM predicts fast–rise and slow–decay outbursts, but contrary to the common opinion <cit.> the decay parts of the outburstlightcurves produced by the irradiated–disc instability are not exponential. As shown by <cit.>, according to the model, the initial (irradiation controlled) decay from outburst maximum of an X–ray transientis described byṀ=Ṁ_ max[ 1+ t/t_0]^-10/3,where Ṁ_ max is the accretion rate at the outburst maximum and t_0 is given by: t_0 = 3.19α_ 0.2^-4/5 M_1^1/4 R_12^5/4Ṁ_ max,19^-3/10yr,where R=R_1210^12, Ṁ_ max,19 = Ṁ_ max/10^19 g s^-1 <cit.>.t_0 corresponds to the time it takes the accretion rate to fall to one tenth of its initial value.Based on Eq. (<ref>) and the DIM, <cit.> found analytical formulæ describing the decay lightcurves of X–transients which can be used to determine the disc parameters.The characteristic timescale of the disc evolution τ is defined asτ = M_d/Ṁ_ max = 0.92 f^-0.3 M_1^0.37 f_ irr^0.15 R_12^0.62α_0.2^-0.8yr,where[There is some confusion in the literature about the definition of the “irradiation constant” 𝒞: on the one hand <cit.> define 𝒞 (called 𝒞̃ here) through Eq. (<ref>) (this definition is also used by ), on the other, <cit.>, “extracting” the accretion efficiency,in Eq. (<ref>) use η𝒞, instead of 𝒞̃. 
Finally, <cit.> find it convenient to use f_ irr, as in Eq.(<ref>). As the co–author of three of the above–mentioned papers, I would like to apologise for this inconvenience.]f_ irr = 0.2 𝒞̃ = η𝒞/5× 10^-4and the ratio of the maximum to the critical accretion rate f=Ṁ_ max/Ṁ_ crit^+(R_ out) > 1 is given byf ∼ϕ(α_ h/α_ c) Ṁ_̇ ̇ṫṙ/ Ṁ_ crit^+(R_ out),while Ṁ_ tr is the mass transfer rate from the secondary. α_ h and α_ c are respectively the viscosity parameter in the hot and cold disc. As in the case of dwarf novæ, for the outbursts to have the observed amplitudes, this ratio has to satisfy the condition α_ h/α_ c≈ 4–10 <cit.> so that Ṁ_ max= ϕ(α_ h/α_ c) Ṁ_ tr. Since the necessary condition for disc instability is Ṁ_̇ ̇ṫṙ/ Ṁ_ crit^+(R_ out) < 1, itfollows that f<ϕ. The mass of a fully hot disc is obtained by integrating the critical surface density over the disc <cit.>:M_ d = 4.3 × 10^26α_0.2^-4/5Ṁ_19^7/10 M_1^1/4 R_12^5/4g. The critical accretion below which a hot, irradiated disc become unstable is given byṀ_ crit^+ ≈ 2.4 × 10^19 M_1^-0.4f_ irr^-0.5(R_ fr,max/10^12)^2.1g s^-1,(; slightly different fits to the critical values of the disc parameters are found in ).When the front fails to reach the outer disc edge[In X-ray binary transients outbursts are only of the inside-out type <cit.>.], Ṁ_ max, the maximum accretion rate during an outburst,is roughly equal to the rate at the maximum distance reached by the transition front R_ fr,max <cit.>, because at the outburst peak, the portion of the disc that has been brought into the hot state is almost steady, and the mass accretion rate is thus equal to the minimum (critical) rate Ṁ_ crit^+ for which a hot stable disc can still exist. Only when the heating front reaches the outer disc edge (R_ fr,max≈ R_ out) is the ratio Ṁ_ max/Ṁ_ crit^+(R_ out) > 1. The maximum accretion rate can be related to the characteristic decay time by combining Eqs. (<ref>) and (<ref>):Ṁ_ max = 2.0 × 10^19α_0.2^2.71 f^2.02 M_1^-1.65( τ/1 yr)^3.39 f_ irr^-1.01g s^-1.When both Ṁ_ max and τ are knownfrom observations, this relation determines f, and hence Ṁ_ crit^+ and the size of the accretion disc. When this size can be estimated (from the orbital period and mass ratio) this can be used to estimate α_h.Alternatively, from Eqs. (<ref>), (<ref>) and (<ref>) we can obtain the duration of the the quasi–steady phase of the outburst decay, during whichṀ/Ṁ_ crit^+(R_ out) > 1;Δ t_1=t_0[1.38 t_0^-0.50 M_1^0.25Ṁ_19, max^0.15 f_irr^0.15α_0.2^-0.4-1],from which we can determine α_h if the accretor mass is known, since here the dependence on f_irr is weak.Once the decreasing accretion rate reaches the critical value, a cooling front starts propagating (always inwards, since in the hot state the accretion rate is roughly constant but the critical values increase with radius) and switches off the outburst. Since now the hot disc is shrinking we haveṀ_ d = -Ṁ - Ṁ_ fr+ 2 π R_ frΣṘ_ fr,where R_ fr(t) is the front positional radius, the dots indicate time derivatives and Ṁ_ fr is the mass flow at the propagating–front position[Not the mass transfer rate.]. Since Ṁ=Ṁ_ crit^+(R_ fr), from Eq. <ref>, the radius R_ fr can be expressed as a function of Ṁ:Ṁ_ d = -2.47 (Ṁ + Ṁ_ fr) =ξṀ.Putting ξ=6.3 <cit.>, from Eq. (<ref>) using Eqs. 
(<ref>) and (<ref>); we obtain:Ṁ = 6.7 × 10^19α_0.2^2.71 M_1^-1.65 f_ irr^-1[(t_0^'-t)/1 yr]^3.39g s^-1,where t_0^' is a constant that is determined by the condition that, when the cooling front starts, Ṁ is equal to Ṁ_ crit^+ at the maximum front–transition radius. t_0^' can then be written as:t_0^'=0.7M_1^0.37 f_ irr^0.15α_0.2^-0.8 r_12^0.62yr.As can be seen in Fig. <ref>, Eq. <ref> represents the results of numerical simulationsquite well also for this part of the lightcurve. The shift between the two curves is due to the fact that in the simulations the accretion rate is not exactly equal to the critical rate.Using Eqs. (<ref>), (<ref>), (<ref>) and (<ref>) we can determine the propertiesof the outbursting accretion disc from observed lightcurves, in particular the value α_h[Such amethod was used by <cit.> but in those papers it was assumed thatthe quasi–steady decay phase is exponential.]. In Sect.<ref>, I will show the results of applying this method to transient ultraluminous X–ray sources.As in the case of dwarf novæ, quite often the real lightcurves of X-ray transients (see Fig. <ref>) are much more complicated than those predicted by the DIM (Fig. <ref>). The <cit.> model assumes a flat disc, while the lightcurve in Fig. <ref> requires the disc to be warped <cit.>. Also, this model describes neither the source of the hard X–ray component(s) (the corona), nor the jet ejected at some phase of the outburst. Figure <ref> compares the evolution of the accretion rate as found in numerical simulations with the analytical estimate given by Eq. (<ref>), demonstrating how well the analytical formula represents the results of numerical simulations. One should stress again that, although the decay is not far from exponential, the “-10/3” power law isa much better fit by far. § ULTRALUMINOUS X-RAY SOURCESBy definition, ultraluminous X-ray sources (ULXs) have luminosities L_X > 10^39 and are not located in galaxy centres. The defining luminosity has no astrophysical basis but was chosen for corresponding roughly to the Eddington luminosity of a 10 black hole, this critical luminosity being defined as= 1.3 × 10^38(M/)erg s^-1,while the corresponding Eddington accretion rate is given by≡/η c^2= 1.4× 10^18η_0.1^-1(M/)g s^-1 = 2.2 × 10^-8 η_0.1^-1(M/)M_⊙yr^-1,where η=0.1η_0.1 is the radiative efficiency of accretion. ULXs were identified as a separate class of objects at the end of the previous millenium <cit.>. After about fifteen years, during which the majority of researchers involved in their study believed that they contained intermediate–mass black-holes (IMBH), the discovery by <cit.> that the source ULX-2 in the galaxy M82 is a pulsar confirmed the view of the dissenters <cit.>, who had claimed from the startthat ULXs are just a phase in the life of (presumably massive) X-ray binaries containing stellar–mass accreting objects, i.e. black holes or neutron stars, or even white dwarfs. Now we know that out of the ∼ 1800 observed ULXs (seeand references therein) at least 10 contain magnetized neutron stars, detected through their periodic pulses (PULXs; see Table <ref>). Four of them are transient: they are members of Be–X binary systems, which become X–ray sources when the eccentric orbit of the compact companion (in most cases a neutron star) of the massive Be star crosses its circumstellar disc. 
In most cases this disc–crossing produces sub-Eddington–luminosity outbursts (called “Type I”), but from time to time, most probably due to von Zeipel-Kozai-Lidov oscillations <cit.>, it results in a giant (super-Eddington; “Type II”) outburst. Since the maximum mass of neutron star is ∼ 2, it is clear that the observed luminosity of all PULXs is super-Eddington - up to 1000 times larger than the Eddington luminosity of 1. Since at super-Eddington accretion rates the resulting luminosity isL = [ 1 + lnṀ/],(see e.g. ), it is clear that, for realistic values of the mass–transfer rate in stellar binary systems (≲ 10^4, say), observed luminosities in excess of 10 (∼ 2× 10^39) cannot be intrinsic if the critical luminosity of the accretion flow is equal to the Eddington luminosity. Since strong magnetic fields lower the Thomson scattering opacity <cit.>, in their presence the critical luminosity (corresponding to the equality of the radiative and gravitational forces) is equal toL_ crit≈ 2 B_12^4/3(g/2× 10^14 cm s^-2)^-1/3,(when L_ crit≫ ) where g=GM/R^2 , as shown by <cit.>. Therefore, to be intrinsic, the observed PULX luminosities ≳ 10^40 must be emitted by a plasma permeated by magnetar–strength fields > 10^14 G. As an alternative, not requiring super–strong magnetic fields, it has been proposed that a buoyancy-driven, “photon bubble” wave pattern can facilitate the escape of radiation, producing intrinsic super-Eddington luminosities from accretion columns (, see also ). However, the reality of this process has yet to be confirmed and it seems that very super–Eddington emission from such a bubbling column would also require beaming. In the absence of super–strong (magnetar–strength)magnetic fields, the emitted luminosity must be beamed and it is the apparent luminosity that is super-Eddington, Eq. (<ref>) becomingL_ app = 1/b[ 1 + lnṀ/],where the beaming factor b < 1, asalready proposed by <cit.>. There are thus three options explaining the observed hugely super-Eddington luminosities of PULXs: * Magnetar–strength fields of accreting neutron stars ().* Buoyancy–driven, “photon bubble” wave patterns allowing the escape of radiation at extremely super-Eddington rates and some amount of geometrical beaming <cit.>.* Geometrical beaming (collimation) by an accretion–flow wind <cit.>. I will show, however, that the magnetar hypothesis can easily be rejected <cit.>: a fundamental property of PULX, the value of their spin-up rate ν̇ (ν is the pulsar's spin frequency), makes the first option physically impossible, as first pointed out by <cit.>. The photon–bubble mechanism which allows large super–Eddington intrinsic luminosities has not been sufficiently developed to be compared to observations, but would require some beaming anyway <cit.>.Figure <ref> shows that the spin-up rate ν̇ for all (with sub– and super–Eddington luminosities) X-ray pulsars is strongly correlated with their X–ray luminosity L_X. This correlation (over-seven-orders of magnitude in luminosity)can be explained as resulting from the domination of the accretion torque over all other torques acting in these systems:ν̇= J̇(R_M)/2π I = Ṁ (GMR_M)^1/2/2π I∝Ṁ^6/7μ^2/7,where R_ M∝Ṁ^-2/7μ^4/7 (Eq. <ref>)is the magnetospheric radius, μ the neutron star's magnetic moment and I the neutron star's moment of inertia.Figure <ref> shows that PULXs are characterised not only by their luminosity (L_X > 10^39)but also by their spin-up rate (ν̇≳ 10^-10 s^-2): they are all located inside a rectangle delimited by the values of these two quantities. 
If we did not know the distances to the X-ray pulsars, the value of their spin-up rate alonewould allow us to separate “normal” XRPs from PULXs. Indeed, since the transient ULX Swift J0243.6+6124 is in the Galaxy (the only known such source there), its exact distance is uncertain, but while its deduced luminosity is close to the ULX-defining limit (∼ 1.5 × 10^39), the value of its spin-up rate (2.2 × 10^-10 s^-2) puts it safely into the PULX category.The magnetospheric radius is defined by the equation <cit.>R_ M = 2.6 × 10^8 q (Ṁ/10^17)^-2/7(M/)^-3/7μ_30^4/7cm,where q∼ 1 is a factor taking into account the geometry of the accretion flow at the magnetosphere and μ=10^30μ_30 Gcm^3.Putting M≈ 1 and q≈ 1, from Eqs. <ref> and <ref> one obtains Ṁ≈ 5.7 × 10^18ν̇^7/6_-10μ_30^-1/3.In general, super–Eddington luminosities are not proportional to the accretion rate. Only in the presence of very strong magnetic fields, when L_ crit≫ (see Eq. <ref>) can we assume that L_X ≈ 0.1 Ṁ c^2, sinceis no longer the critical luminosity. This would be the case if PULXs contained accreting magnetars.In such a sub–critical case, from Eq. (<ref>), one getsthe relationL_X ≈ 2 × 10^38ν̇^7/6_-10μ_31^-1/3≈.But this contradicts the condition L ≳ L_ crit≫, assumed in its derivation, since for the second inequality to be satisfied, one needs (by construction) μ_30≫ 1. Eq. (<ref>) is a direct consequence of Eq. (<ref>), i.e. assumes that the spin–up torque is dominated by accretion. Which, in this context, is inescapable.Therefore Eq. (<ref>) demonstrates that magnetars cannot be present in systems with both L_X > 10^39 and ν̇≳ 10^-10 s^-2, i.e. it shows that neutron stars with magnetar field-strengths cannot be present in PULXs. Which is fully consistent with other observational facts, such as the absence of magnetars in binary systems (for a detailed discussion see ).Since the magnetar–PULX model is directly contradicted by observations, one is necessarily left with the geometrically–beamed–emission option.This means that Eqs. (<ref>) and (<ref>) have to be completed by two equations: one providing Ṁ=Ṁ(R) that gives Ṁ(R_ M), the other defining the beaming factor b.To describe the accretion flow <cit.> (hereafter KLK17; see alsoand ) used the <cit.>, “windy” accretion–disc model, according to which the local radiative flux is never larger than its Eddington value, the “excess” power being blown away in a disc–wind. This happens inside the spherization radiusR_ sph= 15ṁ R_g,where ṁ=Ṁ/ (see ;use the original, , “27/4” factor, instead of the correct “15”), resulting inṀ(R) ≃ṁ_0 Ṁ_ EddR/R_ sph.for R < R_ sph (ṁ_0 is the mass-transfer rate in Eddington units). Following <cit.> the beaming factor is taken to beb ≃73/ṁ^2.For the given values of the observed quantities L_X and ν̇, KLK17 obtain the value of the mass-transfer rate Ṁ_0 = Ṁ(R_ sph), the beaming factor b and the neutron star's magnetic moment μ for each PULX. The results are presented in Table <ref>.Magnetic fields have normal pulsar values 10^10 G ≲ B ≲ 10^13 G and their beaming factors are moderate. The exception is ULX1 in NGC5907, rather strongly beamed and magnetized (but still not a magnetar). In a recent paper <cit.> reported observations of ULX1 in NGC5907 in a low state, during a phase of spindown, and, assuming that it is in a propeller regime, deduced a magnetic field B ≈ 2.5 × 10^13 G. This value is fully consistent with the result of the KLK17 model: 9.4× 10^12 G (Table <ref>), which was obtained assuming q=1 and m=1. 
Using q=0.5 and m=1.4, say, the resulting field is 1.6× 10^13 G. One should stress, however, that for such a field, neither the KLK17 nor the <cit.> calculations are self-consistent because they do not take into account the fact that for such a high field the Eddington luminosity is no longer critical (Eq. <ref>). But even then, ULX1 in NGC 5907 is still super-critical. Iterating, one can put the field 1.0× 10^13 G back into the KLK17 equations and obtain a solution which takes into account the fact that the critical luminosity is 43 times larger (Eq. <ref>) than the Eddington luminosity. Then for ULX1 in NGC 5970 L/L_ crit = 16.6. One then obtains a solution with ṁ_0 = 730 (∼ 7 times larger than with L_ crit=), and a beaming factor b=0.25. It has been claimed often that beaming suppresses pulsations, however, <cit.> pointed out that themagnetic axis of a neutron-star accretor is not necessarily aligned with the disc (i.e. funnel) axis, and that it is very common for the neutron star spin to be misaligned from the binary orbit defining the accretion disc plane (seefor a recent example). When these three axes are not aligned the system appears as a PULX since when the neutron-star spin axis is strongly misaligned from the central disc axis at the spherization radius, large polar caps produce the sinusoidal pulse light curves observed in pulsing ULXs because a significant part of the pulsed emission can escape without scattering, giving a large pulse fraction (seefor details). Since neutron stars in X–ray binaries have magnetic fieldsspanning the range from10^8 G to several 10^13 G <cit.>, PULXs are normal XRPs at a special phase of the evolution of their parent binary systems, as suggested a long time ago by <cit.>. This conclusion is beautifully confirmed by the Be-X binaries that become PULXs only at a certain phase of their orbital evolution.The binary SMC X-3 also illustrates how, during the ULX phase, the neutron–star spin evolution becomes dominated by the accretion torque, as assumed in the KLK17 model. Fig. <ref> <cit.> shows the spin-down observed in this system, right up to the beginning of a giant outburst on MJD 57599, when significant spin-up is observed. <cit.> deduce from the SMC X-3 spin history that the angular momentum transferred by material accreted during the 5–month giant–outburst was larger than the angular momentum lost by magnetic braking over the previous 18 years. The long-term spin-down rate of SMC X-3 is roughly 500 times lower than the spin-up rateseen during the giant outburst, showing the significantly larger torques present during this outburst. Even during previous Type I outbursts recorded by RXTE, the spin period seemed to continue increasing under low levels of accretion. But during the giant outburst, the spin-up rate is tightly correlated with the X-ray luminosity during the super-Eddington phase <cit.>. In other words, in PULXs the spin-up rate is strongly correlated with the X-ray luminosity both in time and over the population.<cit.> and <cit.> deduce the value of the magnetic field in SMC X-3. The first team gets 6.8 × 10^12 G, the second ∼ 1 × 10^12 G, both values well below the magnetar–strength. The KLK17 model gives a lower value of 2.3× 10^10 G. But <cit.> use the <cit.> model describing the accretion-disc – magnetosphere interaction. Although widely used, this model is known to use very unrealistic assumptions as mentioned in Sect. <ref>. On the other hand <cit.> use a “simple” model describing the presumed spin–equilibrium of the system. 
<cit.> use the <cit.> framework for the description the orbital spin evolution and do not get satisfactory results. In view of this, the discrepancies between these various methods of magnetic–field determination do not seem to be a serious problem. Neither of them lead to the conclusion that SMC X-3 contains a magnetar. <cit.> claim that close to the neutron-star surface the magnetic field may contain stronger components than those of the dipole. But, of course, what counts at the magnetosphere is the dipole.Quite recently, <cit.> found through X-ray polarimetry observations that the WR X-ray binary Cyg X-3 is a ULX with a beaming factor[I am using here the symbol “b” as defined in this chapter;use b to denote our 1/b.]b=0.02, which corresponds to an Eddington factor ṁ = 69 (), but seen from the side. This system is supposed to contain a black hole. Puzzlingly, the authors of this paper do not mention the famous source SS433, which until now had been the best documented case of a “sideways–seen” ULX, nor do they cite any paper on the beamed–radiation interpretation of the ULXs (see the epigraph of this chapter).§.§ Transient ultraluminous X-ray sourcesIt hasrecently been established that many, and most probably most, ULXs are transient (). As mentioned above, some of the ULX transients are Be–X transient systems that occasionally become super–Eddington. Although the peak luminosities of the transient sources in <cit.>: L ∼ 2 -4 × 10^39 are similar to those of Be-ULXs (≤ 4 × 10^39; see, ), their rise to outburst peak is faster than that observed in Be-ULX sources and three regions of their occurrence show varying ages of the possible stellar counterparts, while Be stars are massive and young. The transient ULX in the galaxy M51 reached a luminosity of ∼ 10^40 <cit.>, which rather excludes a Be-ULX source. It is therefore worth trying to apply the DIM to transient ULXs, especially since it has already been successfully applied to the description of the lightcurve of XT1 in M51 <cit.>.<cit.> used the formulæ from Sec. <ref> to fit the lightcurves of the five transient ULXs described in this paper. The results are shown in Fig. <ref>.Neither the accretor masses nor the orbital parameters of these transient ULXs are known, which precludes a univocal determination of α_h. Also,one can determine both Δ t_1 and t_0^' only in two cases, because in the other three cases a change of slope has not been observed, which leaves us with only an upper limit on the hot–disc viscosity parameter. The fits correspond to rather high values of α: 0.3 ≲α_h ≲ 7 (Brightman et al. 2023). The highest α–value determined is 1.39, for a 10 accretor, but the fit for the same system with 1.4 gives 0.37. High values (> 0.2) were also determined by <cit.> for sub–Eddington transient outbursts of black-hole X-ray binaries. § AGNAlmost 40 years ago, Martin Rees ended his seminal AGN review <cit.> with the words: “There has been progress toward a consensus, in that some bizarre ideas that could be seriously discussed a decade ago have been generally discarded. But if we compare present ideas with the most insightful proposals advanced when quasars were first discovered 20 years ago (such proposals being selected, of course, with benefit of hindsight), progress indeed seems meager. It is especially instructive to read <cit.> paper entitled `The Mass of Quasi-Stellar Objects'. 
In this paper, on the basis of early data on 3C 273, they conjectured the following: (a) Radiation pressure perhaps balances gravity, so the central mass is 10^8. (b) For a likely efficiency of 10%, the accretion rate would be 3 (c) The radiation would come from an effective “photosphere” at a radius2 × 10^15cm (i.e. ≫ R_g), outside of which line opacity would cause radiationto drive a wind. (d) The accretion may be self-regulatory, with a characteristictime scale of 3 yrs. These suggestionsaccordwiththeideasthatremain populartoday, andwe cannotyet make many firmly based statements that are more specific.”When in 2023 one reads the recent assessment, “In marked contrast to models of accretion discs around stellar mass black holes, neutron stars, and in cataclysmic variables, existing theoretical models of accretion discs around supermassive black holes do a very poor job of explaining, never mind predicting, the observed properties of luminous active galactic nuclei." (), one is tempted to conclude that during the last 60 years, theoretical progress in the field was still meagre (see also an older, but still valid diagnosis in ). One of the reasons for this lack of progress could be deduced from the two panels on Figure <ref>. These types of figures might be thought to illustrate what are often called “toy models”, even if these two particular examples are not supposed to belong to this category. Nevertheless they correspond to what could be called “Lego models”, since they join together various, more or less physical, models as if they were Lego bricks taken from different sets. Is it the fault of astrophysicists, as suggested by <cit.>, or are AGNs too complex and too distant to be understood with the theoretical and observational tools available to us? Probably both.As an illustration, I will address two problems with applying the standard disc model to AGNs: the presumed disc size and the variability timescales. §.§.§ Disc radiusThe disc size can be defined as corresponding to the radius R_λ at which the disc temperature matches the wavelength λ:kT(R_λ) = hc/λ,i.e., R_λ=[45 G λ^4 M Ṁ/16 π^6 h c^2]^1 / 3=2.1 × 10^15(λ/μm)^4 / 3m_8^2 / 3(L/η L_E)^1 / 3cm.λ is the wavelength in the rest-frame of the AGN.Thus the prediction of the thin-disc model is that (for a given Eddington ratio) the size of the disc satisfies the relation R ∼ M^2/3, which is confirmed by observations <cit.> and that R∼λ^4/3. Determinations of discsizes through microlensing and reverberation mapping give values several times larger than those expected from Eq. (<ref>). For example, for the AGN MCG 08-11-011 <cit.> obtainlags that are larger by a factor of ∼ 3 - 7 than predictions based on the standard thin–disc model . They also detect a size-wavelength relation significantly steeper than predicted by the model: R∼λ^4.74. However, the derivation of Eq. (<ref>) assumes that emission at wavelength λ originates solely at radius R_λ, while in real discs this emission also comes from other radii. Therefore a more appropriate size for comparison with observations would be a flux–weighted mean radius R_λ= 𝒳R_λ with 𝒳≈ 2 - 3. Also, the formula for the radius assumes its stationarity, but if the disc variability is taken into account this factor could even be ∼ 5. In addition, R_λ depends on the black hole mass and the accretion rate (through radiative efficiency), so uncertainties in the values of these quantities might influence the comparison of the model-size with observations. However, the main weak point of Eq. 
However, the main weak point of Eq. (<ref>) is the assumption that every ring of the putative disc radiates like a black body, and that what is observed is the effective temperature. This is not the case with discs in cataclysmic variables (see Sect. <ref>), and it is not the case here. The emitted spectrum depends on the details of the disc's vertical structure, which are not well known, and what is observed is the colour temperature

T_col(r) = f_col(T_eff(r)) T_eff(r),

where f_col(T_eff(r)) ≥ 1 is the colour-temperature correction. This formula was used by <cit.> to reevaluate AGN disc radii. Instead of the disc radius, they calculate the half-light radius R_1/2,ν, inside which half of the emission at the frequency ν is produced. They apply the disc model to the quasar SDSS 0924+0219, whose half-light radius at ν ≈ 4.8 × 10^14 Hz, from microlensing, is R_1/2,ν ≈ 150 <cit.>. Using two different colour corrections, <cit.> obtain the same result: R_1/2,ν ≈ 107. Still too small, but just by a factor of 1.4. When the effect of the disc's inner truncation and of disc winds is added, the half-light radius rises to 128. The authors mention that AGN discs are supposed to have X-ray coronas, whose effect would be to multiply the disc size by a factor of (1 - f_c)^-1/2, where f_c < 1 is the fraction of the disc emission that is lost due both to its covering by a corona and to the coronal dissipation. This might be so, but by adding all these ingredients (except for the colour correction) we end up trying to force a more complex system into the shape of a thin accretion disc, just as Ptolemy was attempting to make an ellipse into a circle with “equants” and “epicycles”. Therefore, to the question of whether observations of AGN disc sizes are compatible with the presence there of a stationary, geometrically thin, optically thick Keplerian disc[This does not have to be a <cit.> disc, which is a power-law solution assuming a specific form of the viscosity torque and opacity law; the result is much more general.], the answer is clearly “no”.

§.§.§ Timescales

The most conspicuous challenge to the thin-disc model is probably the extreme variability observed in so-called changing-look AGNs (CLAGNs). In these sources, the UV-optical continuum and the broad-emission-line spectral components appear or disappear on timescales of months to years. In the case of Seyferts, this corresponds to a transition between AGN spectra that contain broad emission lines (i.e. Seyfert 1 type) and those with only narrow lines (Seyfert 2 type). For thin discs in the gas-pressure-dominated regime the viscous time is

t_vis ≈ 1.5 × 10^4 α^-1 T_4^1/2 m_8 r^1/2 yr,

far longer than the observed changes, so these variability timescales are related rather to the thermal timescale

t_th = 1/α t_dyn = 1.4 × 10^4 α_0.1^-1 m_8 r^3/2 s,

but we still need to explain how huge changes in luminosity can be triggered without affecting the accretion rate.

<cit.> remark that there is a very important difference between accretion discs in bright AGNs and those in cataclysmic variables and X-ray binaries: discs around supermassive black holes “have thermal pressures that are hugely dominated by radiation pressure”, which, in the case of AGN discs, could explain the CLAGN phenomenon. At first glance this assertion might seem surprising, since (when κ_abs ≫ κ_es), according to the <cit.> model,

P_g/P_r = 0.32 α^-1/10 m_8^-1/10 r^3/8 ṁ^-7/20 f^-7/20,

so that this ratio, close to a 10^8 M_⊙ black hole, say, is only 6 times smaller than it would be near a neutron star.
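To make these orders of magnitude concrete, the short Python sketch below evaluates the viscous and thermal timescales and the pressure ratio quoted above for illustrative, assumed parameters (α = 0.1, T = 10^4 K, r = 100 in units of R_S, m_8 = 1); it also reproduces the factor-of-6 statement, which follows from the weak m_8^-1/10 dependence.

def t_visc_yr(alpha, T4, m8, r):
    # Viscous time of a gas-pressure-dominated thin disc (expression above), in yr.
    return 1.5e4 / alpha * T4**0.5 * m8 * r**0.5

def t_th_s(alpha, m8, r):
    # Thermal time t_th = t_dyn / alpha (expression above), in s; alpha_0.1 = alpha/0.1.
    return 1.4e4 / (alpha / 0.1) * m8 * r**1.5

def Pg_over_Pr(alpha, m8, mdot, r, f=1.0):
    # Shakura-Sunyaev ratio quoted above (valid where kappa_abs >> kappa_es).
    return 0.32 * alpha**-0.1 * m8**-0.1 * r**0.375 * mdot**-0.35 * f**-0.35

# Illustrative (assumed) values: alpha = 0.1, T = 1e4 K, r = 100 R_S, m_8 = 1.
alpha, m8, r = 0.1, 1.0, 100.0
print(f"t_visc ~ {t_visc_yr(alpha, 1.0, m8, r):.1e} yr,  "
      f"t_th ~ {t_th_s(alpha, m8, r) / 86400:.0f} d")
# The m_8^(-1/10) dependence makes the ratio near a 10^8 Msun black hole only
# about 6 times smaller than near a 1.4 Msun neutron star (same alpha, mdot, r):
print(Pg_over_Pr(alpha, 1.0, 1.0, r) / Pg_over_Pr(alpha, 1.4e-8, 1.0, r))  # ~ 1/6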
On the other hand, only hugely super-Eddington accretion rates (ṁ ≳ 50) would make P_r ≫ P_g. However, true absorption dominates the opacities only for r ≳ 7400 ṁ^2/3, where I used the Kramers opacities

κ_R = 5 × 10^24 ρ T^-7/2 = 5 × 10^-4 ṁ^-1/2 r^3/4 cm^2 g^-1,

so it seems that, as in the original <cit.> solution, the P_r ≫ P_g regime can exist only when κ_es ≫ κ_abs. This will happen for radii

r ≲ 100 α^2/21 ṁ^16/21 m_8^2/21,

and there, i.e. for radii less than about a few tens of R_S,

P_r/P_g ∝ α^1/4 m^1/4 ṁ^2 r^-21/8,

so that, at comparable Eddington (accretion-rate) factors, radiation pressure in discs around supermassive BHs can indeed be 100 times larger than in discs around NSs and stellar-mass BHs. But the real difference between accretion at high rates onto supermassive BHs and onto stellar-mass compact bodies is the temperature of the inner parts of the accreting matter. In the case of NSs and stellar-mass BHs the (effective) temperature is ≳ 10^8 K, for white dwarfs it is ≳ 10^4 K, while for supermassive black holes it is ≳ 10^5 K, which corresponds to UV radiation. In addition, the density is much lower than in discs around stellar-mass accretors: ρ < 10^-8 g cm^-3. In such conditions the inner region of the accretion disc can be in a regime that does not exist in the standard, three-zone Shakura-Sunyaev model: κ_abs > κ_es and P_r ≫ P_g. The reason is illustrated in Fig. <ref>, taken from <cit.>. Therefore, for typical conditions in the inner-disc region of bright AGNs, the Rosseland mean opacity is expected to be larger than the electron-scattering value.

<cit.> show that the iron opacity bump (around 1.8 × 10^5 K) causes the disc to be convectively unstable. Their simulations show that turbulence generated by convection increases the disc thickness due to additional turbulent-pressure support and enhances the local angular-momentum transport. They find that this also results in strong fluctuations in surface density and heating of the disc. When the opacity drops with increasing temperature, the convection is suppressed, the disc cools down, and the whole cycle repeats again. As a result, the disc scale height strongly oscillates, causing luminosity variations of more than a factor of ≈ 3 - 6 on a timescale of a few years. The authors propose that this is the physical mechanism which explains AGN variability with a wide range of amplitudes over timescales of years to decades.

This is, however, doubtful, as demonstrated by the example of Mrk 590, a nearby CLAGN <cit.>. In its high state, it is a Seyfert 1, i.e. a moderate-accretion-rate source, but it shows X-ray and UV variability amplitudes higher than those typically observed in steady-state AGNs of this type. Its variability is similar to that of highly accreting AGNs, i.e. quasars. The characteristic timescale of the Mrk 590 flares is ∼ 100 d. Since its mass is 4.75 × 10^7 M_⊙, this corresponds to the thermal timescale of its putative accretion disc (see Eq. <ref> and Fig. 10 in ). However, with an accretion rate ṁ = 0.05 at maximum, it is unlikely that the <cit.> mechanism is at work here. At such accretion rates the disc is radiation-pressure dominated only at its innermost tip. The X-ray and UV flares in Mrk 590 have a complex structure in space and time, which seems to exclude a simple disc-like geometry.
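The radiation-pressure estimates above can be checked with a few lines of Python. The sketch below is illustrative only: the factor of 100 follows directly from the m^1/4 scaling, and the last line applies the same estimate to Mrk 590 using the mass and peak accretion rate quoted above together with an assumed α = 0.1.

def r_pr_dominated(alpha, mdot, m8):
    # Radius (in units of R_S) inside which P_r >> P_g, from the estimate above.
    return 100.0 * alpha**(2.0 / 21.0) * mdot**(16.0 / 21.0) * m8**(2.0 / 21.0)

# P_r/P_g ~ alpha^(1/4) m^(1/4) mdot^2 r^(-21/8): at fixed mdot and r the ratio
# grows with the accretor mass as m^(1/4), i.e. by (1e8)^(1/4) = 100 between a
# stellar-mass and a 10^8 Msun black hole, as stated in the text.
print(f"mass enhancement of P_r/P_g: {(1e8)**0.25:.0f}x")

# Illustrative (assumed) parameters: alpha = 0.1, mdot = 1, m_8 = 1.
print(f"P_r >> P_g inside r ~ {r_pr_dominated(0.1, 1.0, 1.0):.0f} R_S")

# Mrk 590: m_8 = 0.475, mdot = 0.05 (values quoted above), alpha = 0.1 assumed;
# the radiation-pressure-dominated region shrinks to the innermost few R_S.
print(f"Mrk 590: P_r >> P_g only inside r ~ {r_pr_dominated(0.1, 0.05, 0.475):.0f} R_S")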
Incidentally, the X-rays in this source irradiate the UV-emitting region, which might suppress convection <cit.>, if any.

Recently, <cit.> suggested that the changes observed in changing-look quasars (CLQs) are due to changing accretion rates, with the multiwavelength emission varying accordingly. In this they find “promising analogies to the accretion states of X-ray binaries”. One should, however, be very careful with the interpretation of apparent analogies in the behaviour of accreting systems at different mass and space scales. For example, it has been shown that the hystereses observed in the hardness-intensity diagrams of LMXBs (in X-rays) and of dwarf novæ (in optical vs EUV/X-ray), despite their apparent similarity, are due to two completely different mechanisms (; see also ). In the case of apparent analogies between CLQs and binary X-ray transients, one should stress that in the latter the viscous timescale in the relevant disc region is much shorter than the observed variability times, in striking contrast to CLQs and CLAGNs, so that the existence of a common, or similar, mechanism explaining both these classes of phenomena is rather doubtful.

Interestingly, at ṁ ≲ 0.05 the accretion disc of Mrk 590 should be subject to the dwarf-nova-type thermal-viscous instability <cit.>, but the observed variability of this AGN is nothing like the lightcurves produced by this mechanism (see Fig. 7 in ). The reason is that, although heating and cooling fronts propagate throughout the disc, their movement is too fast to significantly affect the disc's density. The front propagation timescale is

t_front ≈ R/(α c_s) = (R/H) t_th,

where t_th is the thermal timescale. Hence t_front is shorter than the viscous time t_visc = (R/H)^2 t_th by a factor of H/R, i.e. by several orders of magnitude, since (in a gas-pressure-dominated disc)

H/R ≈ c_s/v_K = 5.5 × 10^-5 T_4^1/2 r^1/2,

where c_s is the sound speed and v_K the Keplerian speed. Another difference between the discs in binaries and in AGNs is that in the former the optical emission region is at r ≳ 1000, while in the latter it is at r ≳ 10, much deeper in the gravitational potential well, if the AGNs have stationary, flat, geometrically thin, optically thick Keplerian accretion discs. Which is less than certain. But there are alternatives: see e.g. chapter 5 in <cit.>.

§ ACKNOWLEDGEMENTS

I am grateful to Robert Antonucci, Mitchell Begelman, Jean-Marie Hameury and Andrew King for inspiring comments, criticism and discussions. I acknowledge with thanks the variable star observations from the AAVSO International Database contributed by observers worldwide and used in this research. | http://arxiv.org/abs/2311.16013v1 | {
"authors": [
"Jean-Pierre Lasota"
],
"categories": [
"astro-ph.HE"
],
"primary_category": "astro-ph.HE",
"published": "20231127171349",
"title": "Problems in the astrophysics of accretion onto compact celestial bodies"
} |
Theoretical Physics III, Center for Electronic Correlations and Magnetism, Institute of Physics, University of Augsburg, 86135 Augsburg, Germany

Motivated by the recent impressive experimental progress in quantum simulators, the study of quantum matter with local constraints has gained significant attention. In this work, we investigate a paradigmatic class of constrained matter - the hard-disk problem. We introduce a quantum version on lattices, which exhibits a natural realization in Rydberg atom arrays due to the Rydberg blockade mechanism. While the static properties on a general level turn out to be equivalent to the classical case, yielding crystalline phases at sufficiently high particle densities, we find that the dynamical properties are fundamentally different. In one dimension, we identify genuine quantum features in the melting process of a finite-size crystal, which displays ballistic behavior, whereas the classical scenario exhibits sub-diffusion governed by the Kardar-Parisi-Zhang universality class. On two-dimensional square lattices, we show that in the quantum domain crystals remain intact against most defects, whereas classically the initial crystal structure is washed out completely. We link this peculiar quantum behavior to the presence of quantum many-body scars, breaking conventional expectations of ergodicity. Our study highlights the potential of constrained two-dimensional quantum matter to display unique dynamical behaviors.

Quantum hard disks on a lattice Markus Heyl January 14, 2024 ===============================

Introduction.—The recent tremendous advances in quantum simulators have led to unprecedented experimental control of the real-time dynamics in constrained quantum matter. This includes constraints generated by symmetries such as dipole conservation relating to fractonic systems <cit.> or by strong local interactions <cit.>, which can lead to the emergence of quantum many-body scars <cit.>. The latter case nowadays finds a natural realization in Rydberg atom arrays due to the famous Rydberg blockade mechanism originating from the strong underlying Rydberg interactions <cit.>. It is of key importance to recognize that Rydberg blockade, on a general level, is equivalent to associating with each excitation an excluded volume, which on the classical level represents a paradigmatic class of systems with unique properties, such as in the context of the hard-disk problem <cit.>. In the quantum domain, however, the hard-disk problem, despite its fundamental nature, has so far remained unexplored.

In this work, we introduce a quantum hard-disk model on lattices inspired by current experiments in Rydberg atom arrays. We show that the static properties are identical to the classical analogue, yielding crystalline phases at sufficiently high particle densities. However, we find that the dynamical features turn out to be fundamentally different, which we illustrate with a crystal melting process in one spatial dimension (1D) and with the dynamics of crystal defects in two-dimensional (2D) quantum hard-disk systems. In particular, for the 2D case we observe that, generically, the crystals remain intact in the presence of defects, while they are washed out on the classical level. We associate this unique quantum feature with the presence of quantum many-body scars and Hilbert-space fragmentation. We discuss the feasibility of implementing the quantum hard-disk problem experimentally in Rydberg atom arrays as the natural platform to realize excluded volumes through the Rydberg blockade mechanism.
The quantum hard-disk model on a lattice.—We model the quantum hard-disk problem as a system of hard-core bosons on a lattice with nearest-neighbor hopping:

H = J ∑_⟨ i, j ⟩ P_i ( a_i^† a_j + a_j^† a_i ) P_j.

Here, a_i^† and a_i are the creation and annihilation operators for a hard-core boson on lattice site i = 1, …, L^d, respectively, with L^d denoting the total number of lattice sites and d the dimension of the lattice. We set the lattice constant a = 1 in the following. The excluded volume due to the hard disk we include through the projection operators

P_i = ∏_j: |r⃗_i - r⃗_j| = 1 (1 - n_j),

which prevent particles from occupying nearest-neighboring sites (see Fig. <ref>A for an illustration), with n_i ≡ a_i^† a_i. Notice that it is straightforward to adjust the disk radius by adjusting these projection operators to include more lattice sites. The above Hamiltonian is equivalent to a model of spin-1/2's upon identifying a_i ↦ S_i^-, a_i^† ↦ S_i^+, and n_i ↦ S_i^z + 1/2 <cit.>, which links directly to a system of Rydberg atoms; see below for a more detailed discussion of the potential experimental realization. We will consider open boundary conditions for convenience, which is also motivated by potential experimental scenarios in quantum simulators.

Hilbert-space fragmentation.—A general feature of strong local constraints is the fragmentation of the Hilbert space into kinetically disconnected regions <cit.>, which can have drastic consequences for dynamical properties, such as the emergence of quantum many-body scars <cit.> or disorder-free localization in gauge theories <cit.>. Depending on the ratio between the size of the largest fragment 𝒩_max and the Hilbert-space dimension 𝒩, one can distinguish weakly and strongly fragmented regimes <cit.>, with 𝒩_max/𝒩 → 0 in the thermodynamic limit representing strong and 𝒩_max/𝒩 → 1 weak fragmentation, respectively. For our hard-disk model in Eq. (<ref>), certain aspects of Hilbert-space fragmentation have already been studied in Ref. <cit.>, which will be summarized below and extended by further analysis. In the 1D case, we find that there is no fragmentation except at the maximum particle density η = 1/2. However, in 2D we observe both weak and strong fragmentation depending on the particle density η = M/L^d, with M the total number of particles. In Ref. <cit.> it was conjectured that an equivalent model exhibits a weak-strong fragmentation crossover for finite system sizes L^d. Here we determine the precise threshold densities between the different fragmentation regimes, which we have also illustrated in Fig. <ref>B. If there are enough particles to fully fill the diagonal of the lattice (such a completely filled diagonal is termed a snake), a barrier is created that confines the remaining particles to one region, because the particles on the diagonal cannot move due to the excluded volume.
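The constrained Hilbert space and the projected Hamiltonian are straightforward to construct explicitly for small systems. The following Python sketch (a minimal illustration written for this text, not the authors' code) enumerates all particle configurations without nearest-neighbor pairs on a small open L × L lattice and builds the corresponding matrix of Eq. (<ref>); a hop out of an allowed configuration is accepted only if the target configuration is also allowed, which reproduces the effect of the projectors P_i and P_j within the constrained sector.

import numpy as np
from itertools import combinations

def neighbours(L):
    """Nearest-neighbour pairs of an L x L square lattice with open boundaries."""
    idx = lambda x, y: x * L + y
    pairs = []
    for x in range(L):
        for y in range(L):
            if x + 1 < L: pairs.append((idx(x, y), idx(x + 1, y)))
            if y + 1 < L: pairs.append((idx(x, y), idx(x, y + 1)))
    return pairs

def allowed_configs(L, M):
    """All M-particle configurations respecting the hard-disk constraint."""
    nn = neighbours(L)
    configs = []
    for occ in combinations(range(L * L), M):
        s = set(occ)
        if all(not (i in s and j in s) for i, j in nn):
            configs.append(frozenset(occ))
    return configs

def hamiltonian(L, M, J=1.0):
    """Projected hopping Hamiltonian in the constrained basis; returns (H, basis)."""
    nn = neighbours(L)
    basis = allowed_configs(L, M)
    index = {c: k for k, c in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for k, c in enumerate(basis):
        for i, j in nn:
            if (i in c) != (j in c):              # one site occupied, the other empty
                target = frozenset(c ^ {i, j})    # hop the particle
                if target in index:               # projectors: target must stay allowed
                    H[index[target], k] += J
    return H, basis

H, basis = hamiltonian(L=4, M=3)
print(len(basis), "allowed configurations; H symmetric:", np.allclose(H, H.T))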
Consequently, we obtain Hilbert-space fragmentation whenever the particle density η ≥ 1/L, or equivalently whenever the particle number M is at least as large as the linear extent L of the system. We find that for not too high densities this leads to a weakly fragmented situation, with the Hilbert space breaking up into several small fragments containing the states with snakes and a large fragment without snakes. However, when η ≥ 1/2 - ⌈ L/2 ⌉/L^2 (with ⌈ L/2 ⌉ denoting the ceiling operation), each allowed configuration has to contain at least one snake or completely filled diagonals adjacent to the main diagonal. Hence no large fragment can form, and the Hilbert space then splits into tiny fragments, leading to strong fragmentation. As we will show in detail below, fragmentation has a decisive impact on the dynamics of our model. Additionally, however, it can also be exploited to push the numerical solution by means of exact diagonalization: instead of addressing the full Hamiltonian, we can separately solve for the much smaller individual fragment blocks. Concretely, to simulate the quantum dynamics following some initial condition, we first construct the Hamiltonian block corresponding to the fragment where the initial state resides. Subsequently, we employ a Lanczos algorithm with 7 Krylov vectors at each time step. We choose the time steps as large as possible while still achieving convergence. In a later section of this letter, we will compare the resulting quantum dynamics to a classical analog. Considering that the quantum dynamics can be viewed as a quantum walk of hard disks on the lattice, the natural classical counterpart is the random walk. In this context, we simulate the classical dynamics at each time step by randomly selecting a particle and moving it to one of the available neighboring sites with equal probability. In the end, we perform an average over many trajectories until we reach convergence.

Static properties.—Let us first discuss the static equilibrium properties of the quantum hard-disk problem on a lattice. Since we will be mostly interested in the dynamics at high energies, we focus in the following on the case of infinite temperature T = ∞. Classically, it is known that there is a dilute-crystalline phase transition in 2D <cit.>. Since the crystalline phase can be characterized by the structure factor, here we also take it as the natural order parameter:

C_∞(η) = N^-1 (2π/L^d)^2 ∑_i,j e^iπ⃗·(r⃗_i - r⃗_j) Tr(n_i n_j).

The trace is taken over the Hamiltonian blocks with particle density η, N denotes the number of states in the block, and π⃗ is π for 1D and equal to (π, π) for 2D. Clearly, there is no difference from the classical case, because the quantum trace becomes equal to the sum over all classical configurations, and therefore the static phase diagrams are identical. In 1D, this implies that there is crystalline order only at the maximum particle density η = 1/2. For 2D, the system exhibits a critical particle density η_c = 0.37 with a crystalline phase for η > η_c <cit.>. Notice that the crystalline phase transition is located in the weakly fragmented regime and is detached from the crossover towards strong fragmentation at high densities. Solving the full quantum problem for finite system sizes up to 81 lattice sites, we find numerical evidence of this phase transition (see Fig. <ref>C).
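The time-stepping scheme described above is standard Krylov-space propagation. A minimal Python sketch of a single step is given below (again an illustration written for this text, not the authors' implementation); it assumes the Hamiltonian block H of the relevant fragment is available as a dense or sparse matrix and uses m = 7 Krylov vectors, as quoted above.

import numpy as np
from scipy.linalg import expm

def krylov_step(H, psi, dt, m=7):
    """One step |psi(t+dt)> ~ exp(-i H dt)|psi(t)> in an m-dimensional
    Krylov space built by the Lanczos recursion."""
    norm0 = np.linalg.norm(psi)
    V = np.zeros((m, len(psi)), dtype=complex)
    alpha, beta = np.zeros(m), np.zeros(m)
    V[0] = psi / norm0
    k_used = m
    for k in range(m):
        w = H @ V[k]
        alpha[k] = np.vdot(V[k], w).real
        w = w - alpha[k] * V[k]
        if k > 0:
            w = w - beta[k - 1] * V[k - 1]
        if k < m - 1:
            beta[k] = np.linalg.norm(w)
            if beta[k] < 1e-12:          # invariant subspace reached early
                k_used = k + 1
                break
            V[k + 1] = w / beta[k]
    T = (np.diag(alpha[:k_used])
         + np.diag(beta[:k_used - 1], 1)
         + np.diag(beta[:k_used - 1], -1))
    small = expm(-1j * dt * T)            # exponential of the small tridiagonal matrix
    return norm0 * (V[:k_used].T @ small[:, 0])

In practice one repeats the step with decreasing dt, as described above, until the observables of interest are converged.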
In the next step, we aim to study the dynamical properties of the quantum hard-disk problem, which, as we will show, exhibit distinct quantum features.

Crystal melting in 1D.—For the 1D case, we study the melting of a finite-size crystal of M tightly packed particles embedded in an otherwise empty lattice. In Fig. <ref>, we display the local occupations ⟨ n_i(t) ⟩ obtained numerically via exact diagonalization. We observe that the spreading of the occupations exhibits fundamentally different behaviors, being linear in time for the quantum case and non-linear for the classical one. This result can be proven analytically for both cases by mapping to effective models in which the excluded volume is eliminated. In the quantum domain, this can be achieved by removing the site to the right of each particle in every basis configuration. Therefore, the quantum dynamics for our particular initial configuration is identical to the case where the particles are non-interacting. The unitary dynamics is then equivalent to the single-particle quantum walk, which is known to be ballistic; this is consistent with the numerical results obtained in Fig. <ref> and with the Bethe-ansatz integrability of the 1D model <cit.>. In the classical domain, one can perform a mapping from the hard-disk model to one where the disks have no radius <cit.>, where, however, it is key to take into account that due to the hard-core interaction particles cannot pass through each other, implying preservation of the order of the particles. The dynamics then turn out to be sub-diffusive, governed by the Kardar-Parisi-Zhang universality class <cit.>. As a key consequence, we find that the 1D quantum crystal melts much faster than the classical one; a minimal single-particle illustration of this contrast is sketched further below.

Defect dynamics in 2D.—While in 1D a crystal exists only at maximal filling in equilibrium, in 2D our results for the structure factor in Fig. <ref> imply that the long-range order remains also in the presence of defects as long as η > η_c. In the following, we explore the nonequilibrium real-time evolution of crystal defects and show that their dynamics exhibit genuine quantum features. For that purpose, we prepare the system in a specific many-body configuration and monitor the subsequent dynamics, see Fig. <ref>. To begin with, let us quickly discuss the extreme limits of low and high particle densities. Clearly, for a configuration with low particle density η < 1/L in the non-fragmented regime, see Fig. <ref>(B), the excluded volume becomes irrelevant, the dynamics of the particles becomes essentially equivalent to a simple tight-binding problem, and any initial configuration is smeared out uniformly over the lattice. In the opposite regime η ≥ 1/2 - ⌈ L/2 ⌉/L^2, the strong fragmentation of the Hilbert space naturally leads to nonergodic behavior, leaving any accordingly initialized crystal intact. Most importantly, however, in the regime of weak fragmentation the dynamics yields unexpected behavior, a typical instance of which is shown in Fig. <ref>.
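As announced above, the contrast between ballistic and diffusive spreading in 1D can be visualized with a single-particle toy calculation. The Python sketch below is an illustration only, not the many-body calculation shown in the figures; in particular, the square-root width quoted for the classical case is just the single-particle random walk and does not capture the many-body KPZ sub-diffusion discussed above.

import numpy as np
from scipy.linalg import expm

L, J = 101, 1.0
h = np.zeros((L, L))
for i in range(L - 1):                      # open chain, nearest-neighbour hopping
    h[i, i + 1] = h[i + 1, i] = J

center = L // 2
for t in (5.0, 10.0, 20.0):
    U = expm(-1j * h * t)
    prob = np.abs(U[:, center])**2          # quantum-walk probability profile
    sites = np.arange(L)
    width_q = np.sqrt(np.sum(prob * (sites - center)**2))
    width_c = np.sqrt(2.0 * J * t)          # continuous-time random walk, for comparison
    print(f"t = {t:5.1f}:  quantum width ~ {width_q:6.2f} (grows ~ t),"
          f"  classical width ~ {width_c:5.2f} (grows ~ sqrt(t))")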
For the initial configuration in Fig. <ref>(A), involving a large number of defects in the center of the crystal, we observe that even in the long-time limit the crystal remains stable under quantum real-time evolution. For the equivalent classical random walk with excluded volume, we find, however, that the crystal structure is completely washed out. Accordingly, such defects in the quantum material retain a memory of the initial pattern, as opposed to the classical domain; this points to the importance of quantum interference effects, which are the one key aspect missing in the classical random-walk dynamics. Let us emphasize that this is a generic behavior, not fine-tuned to this particular initial condition; see the discussion further below. We quantify the memory of the initial crystal pattern by means of the following autocorrelation function:

G(t) = 1/L^2 ∑_i=1^L^2 ⟨(2n_i(t) - 1) (2n_i(0) - 1)⟩ - G^*.

Here, ⟨…⟩ = ⟨ψ | … | ψ⟩ represents the usual expectation value in the initial condition |ψ⟩ for the quantum case and an average over trajectories for the classical one. A constant G^* = (2η - 1)^2 is subtracted such that G(t) → 0 when the crystal melts, because in the ergodic limit we have that ⟨ n_i(t)⟩ → η. In Fig. <ref>(D), we display G(t) as a function of time t for the initial condition in Fig. <ref>(A), including both the quantum and the classical dynamics. For the quantum real-time evolution, G(t) saturates to a finite value, while in the classical domain it approaches zero. Notice that also in the classical case G(t→∞) can retain a small non-zero value due to finite-size effects, which originate from the weak crystalline pattern in the vicinity of the edges of the system, as also visible in Fig. <ref>C. In the Supplemental Material <cit.> we demonstrate that this is a finite-size effect vanishing in the thermodynamic limit.

The defect dynamics in Fig. <ref> highlights the crucial role of quantum unitary dynamics, indicating nonergodic behavior. In the following, we provide compelling numerical evidence that this nonergodic behavior originates from the presence of a large number of quantum many-body scars in the spectrum. For that purpose, we study the many-body eigenstate properties by means of the bipartite entanglement entropy S_A:

S_A = -Tr_A[ρ_A log ρ_A], with ρ_A = Tr_A̅(|Ψ⟩⟨Ψ |).

Here, |Ψ⟩ denotes an eigenstate, A the subsystem, which we choose as the bottom half of our 2D square lattice, and A̅ is the complement of A. Additionally, we aim to quantify how close a given eigenstate is to a single many-body configuration, such as the one for the dynamics shown in Fig. <ref>. This can be achieved by means of an Edwards-Anderson (EA) order parameter

Q_EA = L^-4 ∑_i,j |⟨Ψ |(2 n_i - 1) (2 n_j - 1 ) |Ψ⟩ |^2,

where |Ψ⟩ again denotes an eigenstate of the system. In order to quantitatively compare the EA order parameter for different particle densities η, we further introduce a normalized version,

Q = [Q_EA - (2η - 1)^4] / [1 - (2η - 1)^4],

so that Q = 1 when |Ψ⟩ is a single configuration and Q → 0 when |Ψ⟩ represents a superposition over all basis states. In Fig. <ref> we display S_A of all the eigenstates as a function of the energy density E/L^2 for three different densities η representing the three different levels of fragmentation. We include the respective value of the normalized EA order parameter Q by coloring the data points.
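Both diagnostics are simple to evaluate once the occupations and eigenstate amplitudes are at hand. The Python sketch below mirrors the definitions of G(t), Q_EA and Q given above; the data layout is an assumption made for this illustration (occupations as 0/1 arrays, an eigenstate as a vector of amplitudes over the constrained basis, and the basis stored as a matrix of occupations), and the autocorrelation helper assumes the initial state is a single basis configuration, so that n_i(0) is a definite number.

import numpy as np

def autocorrelation(n_t, n_0, eta):
    """G(t) from the definition above, given the occupations <n_i(t)> and the
    initial 0/1 occupations n_i(0); eta is the particle density."""
    sigma_t, sigma_0 = 2 * np.asarray(n_t) - 1, 2 * np.asarray(n_0) - 1
    return np.mean(sigma_t * sigma_0) - (2 * eta - 1)**2

def normalized_EA(psi, basis_occupations, eta):
    """Normalized Edwards-Anderson parameter Q of an eigenstate |psi>.
    basis_occupations: (dim, L^2) array of 0/1 occupations of each basis state."""
    w = np.abs(psi)**2                              # weights of basis configurations
    sigma = 2 * basis_occupations - 1               # (dim, L^2) array of +-1
    corr = sigma.T @ (w[:, None] * sigma)           # <Psi|(2n_i-1)(2n_j-1)|Psi>, diagonal operators
    n_sites = basis_occupations.shape[1]
    Q_EA = np.sum(corr**2) / n_sites**2
    return (Q_EA - (2 * eta - 1)**4) / (1 - (2 * eta - 1)**4)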
For low densities in the non-fragmented domain, see Fig. <ref>(A), we observe a dome of the entanglement-entropy data originating from the single Hilbert-space component. This is consistent with conventional ergodic behavior, which is further supported by the uniformly small values of Q. At the other end, in the strong-fragmentation regime, see Fig. <ref>(C), there is no ergodic component but rather a large number of small fragments, each of which contains only eigenstates with small values of S_A and high values of Q. Consequently, the eigenstates in this domain represent almost individual many-body configurations, displaying only a weak superposition. Concerning the stability of defects, which we observed in Fig. <ref>, the regime of most interest is the weakly fragmented region at intermediate particle densities η, see Fig. <ref>(B). Here, we observe a diverse structure. On the one hand, one can identify a set of ergodic eigenstates in the large fragment, accompanied by small values of Q. On the other hand, there is a large number of eigenstates that do not follow the conventional ergodic paradigm and which we identify as quantum many-body scars, with entanglement entropies S_A and values of Q departing significantly from the ergodic expectations. Most importantly, these nonergodic eigenstates emerge not only from the small Hilbert-space fragments but also from the large one, where they arrange along towers at specific energy densities. As these nonergodic eigenstates are mostly also associated with large values of Q, we conclude that there is a large number of many-body configurations that do not thermalize but rather retain their original structure over time. This is exactly related to the stability of the crystal defects displayed in Fig. <ref>. Notably, we observe from Fig. <ref>(B) that our many-body configuration in Fig. <ref> was not fine-tuned; rather, there are many crystal-defect configurations that leave the quantum crystal stable.

Conclusions.—In this letter, we have studied the quantum hard-disk model on lattices, which, as we have shown, displays unique quantum features in its dynamics. The considered hard-disk model exhibits a direct realization in Rydberg atomic systems. The Rydberg blockade mechanism naturally implements the excluded volume, so that a Rydberg excitation (identified here with a hard-core bosonic particle) blocks the presence of a second excitation within the blockade radius <cit.>. When considering Rydberg atoms on a lattice, it remains to adjust the lattice spacing between the atoms such that the Rydberg blockade radius covers exactly the nearest-neighboring sites. The typical microscopic interactions realized in Rydberg atomic systems are either of Ising character, leading to PXP-type models <cit.>, or of XY character <cit.>. Importantly, our hard-disk model exhibits particle-number conservation, which would be implementable either directly via XY-type interactions or via Ising-type models upon enforcing particle-number conservation, for instance through a strong longitudinal magnetic field. We envision a wide range of interesting research questions emerging from this work. A natural extension would be to consider disks or excluded volumes with larger radii as well as different shapes. Further, it might be interesting to consider how the dynamics are modified upon relaxing the hard-disk constraint to a soft one, such as for the soft-shoulder potentials realized naturally in Rydberg dressing <cit.>.
Data availability.—The data to generate all figures in this letter are available on Zenodo <cit.>.

Acknowledgements.—We thank Werner Krauth, Alessio Lerose, and Tobias Wiener for fruitful discussions. V.D.N. and F.B.T. contributed equally to this work. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 853443).

Supplemental Material: Quantum hard disks on a lattice

§ BIPARTITE ENTANGLEMENT ENTROPY VERSUS EIGENSTATE ENERGY DENSITY

In Fig. <ref>, we show a plot analogous to that of Fig. <ref> for a 5 × 5 lattice. In the weak-fragmentation region, Fig. <ref>(B-C), we observe that the scar states also appear and that their number increases with density.

§ DIFFERENT INITIAL DEFECTS

In this section, we present results for the comparison between the 2D classical and quantum dynamics for different system sizes and initial defect configurations in the weak-fragmentation region. In Fig. <ref> we show the dynamics for an initial crystal in a 6 × 6 lattice with a very low density. Interestingly, even at this low density, we see evidence of quantum effects coming from the overlap of the initial state with scar states. To give further examples for the same system size as the one presented in Fig. <ref>, we show different initial defect configurations that do not melt (see Figs. <ref> and <ref>). Lastly, in Fig. <ref>, we show a larger system size of 10 × 10 with a high density, where the system still retains a large memory of the initial state, whereas in the classical case it melts completely.

§ FINITE SIZE EFFECTS IN 2D CLASSICAL DYNAMICS

In this section, we discuss the boundary effects present in the 2D classical dynamics, concretely the crystal structure that appears near the boundaries in the final occupation. This structure prevents G(t) from taking a value of exactly 0 even though all memory of the initial state is lost, as all initial states evolve to the same final occupation. To show that these boundary effects are negligible in the thermodynamic limit, we measure the mean deviation of the final occupation from a homogeneous occupation fixed by the particle density η, defined as

δη = (1/L^2) ∑_i |⟨ n_i⟩ - η|.

In Fig. <ref>, we show the mean deviation as a function of system size, with the initial state being maximally packed in the bottom half of the lattice. We find that the deviation decreases as a power law with increasing system size, showing that in the thermodynamic limit the autocorrelation G(t) indeed approaches 0. | http://arxiv.org/abs/2311.16240v1 | {
"authors": [
"Vighnesh Dattatraya Naik",
"Fabian Ballar Trigueros",
"Markus Heyl"
],
"categories": [
"quant-ph",
"cond-mat.stat-mech"
],
"primary_category": "quant-ph",
"published": "20231127190004",
"title": "Quantum hard disks on a lattice"
} |