Trapping Ultracold Atoms in a Sub-Micron-Period Triangular Magnetic Lattice

Yibo Wang, Tien Tran, Prince Surendran, Ivan Herrera, Armandas Balcytis, Dennis Nissen, Manfred Albrecht, Andrei Sidorov, and Peter Hannaford

Centre for Quantum and Optical Science, Swinburne University of Technology, Hawthorn, Victoria 3122, Australia
Department of Physics, University of Auckland, Private Bag 92019, Auckland, New Zealand
Centre for Micro-Photonics, Swinburne University of Technology, Hawthorn, Victoria 3122, Australia
Melbourne Centre for Nanofabrication, Victorian Node of the Australian National Fabrication Facility, 151 Wellington Rd., Clayton, Victoria 3168, Australia
Centre for Physical Sciences and Technology, Savanoriu Ave 2131, LT-02300 Vilnius, Lithuania
Experimental Physics IV, Institute of Physics, University of Augsburg, Universitätstrasse 1, D-86159 Augsburg, Germany

Abstract. We report the trapping of ultracold ^87Rb atoms in a 0.7 μm-period two-dimensional triangular magnetic lattice on an atom chip. The magnetic lattice is created by a lithographically patterned magnetic Co/Pd multilayer film plus bias fields. Rubidium atoms in the |F=1,m_F=-1⟩ low-field seeking state are trapped at estimated distances down to about 100nm from the chip surface and with calculated mean trapping frequencies up to about 800kHz. The measured lifetimes of the atoms trapped in the magnetic lattice are in the range 0.4 - 1.7ms, depending on distance from the chip surface. Model calculations suggest the trap lifetimes are currently limited mainly by losses due to one-dimensional thermal evaporation following loading of the atoms from the Z-wire trap into the very tight magnetic lattice traps, rather than by fundamental loss processes such as surface interactions, three-body recombination or spin flips due to Johnson magnetic noise. The trapping of atoms in a 0.7 μm-period magnetic lattice represents a significant step towards using magnetic lattices for quantum tunneling experiments and to simulate condensed matter and many-body phenomena in nontrivial lattice geometries.

§ INTRODUCTION

Magnetic lattices consisting of periodic arrays of microtraps created by patterned magnetic films on an atom chip provide a potential complementary tool to optical lattices for simulating condensed matter and many-body phenomena (e.g., <cit.>). Such lattices have a high degree of flexibility and may, in principle, be fabricated with almost arbitrary two-dimensional (2D) and one-dimensional (1D) geometries and lattice spacings <cit.>, and may be readily scaled up.
In addition, magnetic lattices do not require high power, stable laser beams and precise beam alignment, they operate with relatively little technical noise, power consumption, or heating, and they involve state-selective atom trapping, allowing rf evaporative cooling to be performed in the lattice and rf spectroscopy to be used to characterize the lattice-trapped atoms in situ. Finally, magnetic lattices have the potential to enable miniaturized integrated quantum technologies exploiting many-body states of ultracold atoms and hybrid quantum systems such as quantum registers with on-chip readout.

However, magnetic lattices are still in their infancy compared with optical lattices, due largely to the difficulty in fabricating high-quality magnetic microstructures, especially lattices with sufficiently small periods to enable quantum tunneling experiments. To date, 1D magnetic lattices <cit.> and 2D rectangular <cit.>, square <cit.> and triangular <cit.> magnetic lattices with periods down to 10 μm have been produced, and clouds of ultracold atoms have been trapped in them <cit.>. In the case of the 10 μm-period 1D magnetic lattice, ^87Rb atoms have been cooled to degeneracy to create a periodic array of isolated Bose-Einstein condensates <cit.>. In order to conduct experiments involving quantum tunneling, lattices with periods in the sub-micron regime are required (e.g., <cit.>).

In this paper we report the trapping of ultracold ^87Rb |F=1,m_F=-1⟩ atoms in a 0.7 μm-period triangular magnetic lattice on an atom chip. The magnetic lattice is created by a lithographically patterned magnetic Co/Pd multilayer film plus bias fields <cit.>. The design of the triangular magnetic lattice and calculations of the lattice trapping potentials, including the effect of the Casimir-Polder surface interaction, are presented in Sec. <ref>. Sec. <ref> gives experimental details, including the fabrication and characterization of the 0.7 μm-period triangular magnetic lattice structure. In Sec. <ref> we present experimental results for the interaction of the ultracold atoms with the magnetic lattice potential, loading of atoms into the magnetic lattice traps, and lifetime measurements of the lattice-trapped atoms at various distances from the chip surface. In Sec. <ref> we discuss possible ways for improving the lifetimes and the loading procedure, and in Sec. <ref> we summarize our results.

§ THE SUB-MICRON-PERIOD TRIANGULAR MAGNETIC LATTICE

The triangular magnetic lattice structure is designed using the linear programming algorithm developed by Schmied et al. <cit.>. Figure <ref>(a) shows the magnetic film pattern designed to create a triangular lattice optimized for a trap distance z = z_min = a/2 from the surface of the magnetic film, where a is the lattice period. For a = 0.7 μm and a film with perpendicular magnetization 4πM_z = 5.9kG (or M_z = 470emu/cm^3) and nominal thickness t_m = 10.3nm, the required bias magnetic fields are B_x = 0.5G, B_y = 4.5G, where the x- and y-directions are defined in Fig. <ref>. A 2D contour plot for these parameters is shown in Fig. <ref>(b). In the present experiment, the magnetic lattice is loaded with atoms from a Z-wire magnetic trap operating with a bias field B_x ≈ 52G (parallel to the ends of the Z-wire). Figure <ref>(c) shows a 2D contour plot for the 0.7 μm-period triangular lattice structure with bias fields B_x = 52G, B_y = 0 and the above parameters.
For this magnetic lattice, the traps are more elongated and tighter than for the optimized triangular lattice with B_x = 0.5G, B_y = 4.5G, and each trap is surrounded by four rather than six potential maxima.

For a magnetic film structure magnetized in the z-direction, the magnetization can be modeled as a virtual current circulating around the edges of the patterned structure, as indicated by the arrows in Fig. <ref>(a). A bias field B_y applied along the +y-direction can cancel the magnetic field produced by the virtual current flowing along the horizontal black edge of the patterned structure shown in Fig. <ref>(a), to create a periodic array of magnetic traps aligned along the short horizontal black edges (Fig. <ref>(b)). On the other hand, a bias field B_x applied along the +x-direction can cancel the magnetic field produced by the virtual current flowing along the vertical red edge, to create a periodic array of elongated magnetic traps aligned along the long vertical red edges (Fig. <ref>(c)). In general, a larger bias field B_x produces lattice traps which are closer to the magnetic film, and which are tighter and deeper.

For the 0.7 μm-period magnetic lattice, the atoms are trapped at distances down to about 100nm from the chip surface, so that effects of surface interactions need to be considered. The trapping potential at distance z from the magnetic film surface may be expressed as

V(z) = V_M(z) + V_CP(d),

where V_M(z) is the magnetic lattice potential, V_CP(d) is the combined Casimir-Polder and van der Waals potential, and d = z_min - (t_Au + t_SiO_2) is the distance of the trap centre from the surface of the atom chip (allowing a thickness (t_Au + t_SiO_2) = 75nm for the gold and silica surface layers in the present experiment). V_CP(d) may be expressed as (e.g., <cit.>)

V_CP(d) = - C_4 / [d^3 (d + 3λ_opt/2π^2)],

where C_4 = (1/4πϵ_0)(3ħc α_0/8π)[(ϵ_r - 1)/(ϵ_r + 1)] ϕ(ϵ_r) <cit.> is the Casimir-Polder coefficient, α_0 is the static atomic polarizability, ϕ(ϵ_r) is a numerical factor <cit.> that depends on the relative permittivity ϵ_r of the top surface layer, ϵ_0 is the vacuum permittivity, and λ_opt is the wavelength of the strongest electric dipole transition of the atom. The gravitational potential is negligible compared with the strong magnetic lattice potential and is not included in Eq. (<ref>).

Figures <ref>(d)-(f) present calculations of the trapping potentials for different bias fields B_x, where C_4 is taken to be 8.2×10^-56 Jm^4 for a dielectric surface of silica film, for which ϵ_r = 4.0 and ϕ(ϵ_r) = 0.771, and α_0 = 5.25×10^-39 Fm^2 for a ground-state Rb atom. The vertical orange lines in Fig. <ref>(d)-(f) indicate the position of the chip surface, which is taken here to be 75nm from the magnetic film. According to these calculations, the trapping potential for B_x = 52G is very shallow (trap depth Δ E_in/k_B ∼ 1.5 μK). Introducing an offset δd = +25nm (see Sec. <ref>D) for the distance d = z_min - (t_Au + t_SiO_2) of the lattice traps from the chip surface gives Δ E_in/k_B = 655 μK for B_x = 52G.

The calculated trap parameters for different bias fields B_x with δd = 25nm are listed in Table <ref>. For B_x < 26G, the trap centre is located at distances d > 150nm from the chip surface and the effect of the Casimir-Polder interaction is small, so that the effective depth of the lattice traps Δ E_eff ≡ Δ E_z.
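The Casimir-Polder term above is easy to evaluate directly. The following minimal Python sketch is our own illustration (not from the paper): it uses the quoted C_4 for silica and assumes λ_opt = 780nm, the Rb D2 line, for the strongest dipole transition.

```python
import numpy as np

kB = 1.380649e-23        # J/K
C4 = 8.2e-56             # J m^4, Casimir-Polder coefficient for silica
lam_opt = 780e-9         # m, strongest Rb dipole transition (assumed D2 line)

def V_CP(d):
    """Combined Casimir-Polder / van der Waals potential of Eq. (2)."""
    return -C4 / (d**3 * (d + 3 * lam_opt / (2 * np.pi**2)))

# Surface attraction (in microkelvin) at a few trap-centre distances:
for d in (100e-9, 150e-9, 300e-9):
    print(f"d = {d*1e9:3.0f} nm:  V_CP/kB = {V_CP(d)/kB*1e6:8.2f} uK")
```

Comparing this attraction with the calculated magnetic trap depths shows why Δ E_in collapses for the traps closest to the surface, as discussed next.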
For B_x > 40G, the trap centre is located < 110nm from the chip surface and, depending on the distance d, the magnetic potential is deformed by the attractive Casimir-Polder interaction, so that Δ E_eff ≡ Δ E_in. For these very tight magnetic lattice traps, the atom densities are very high; for example, for B_x = 26G and assuming two atoms per lattice site, the calculated peak atom density is n_0 ≈ 2 × 10^15 cm^-3.

§ EXPERIMENT

§.§ Fabrication of the 0.7 μm-period triangular magnetic lattice structure

The magnetic film used for fabrication of the 0.7 μm-period magnetic lattice structure consists of a stack of eight bi-layers of alternating Pd (0.9nm) and Co (0.28nm) <cit.>. Such multilayer films have a large perpendicular magnetic anisotropy, and a high degree of magnetic homogeneity is expected. In addition, they exhibit a large saturation magnetization (4πM_z = 5.9kG), square-shaped hysteresis loops <cit.>, a high coercivity (H_c ∼ 1kOe), a high Curie temperature (300 - 400 ^∘C) and a very small grain size (down to ∼ 6nm). Alternating layers of 0.9nm Pd and 0.28nm Co are known to exhibit an enhanced (∼ 20%) magnetization relative to bulk cobalt, due to polarization of the Pd atoms by the nearby Co layers (e.g., <cit.>).

The Co/Pd multilayers are deposited by dc-magnetron sputtering onto a seed layer of 3nm-thick Pd plus 3nm-thick Ta on a 500 μm-thick Si(100) substrate <cit.>. A 1.1nm protective layer of Pd is deposited on top of the Co/Pd stack. The active magnetic thickness of the stack is taken to be t_m = 10.3nm (in <cit.> the active magnetic thickness of the Co/Pd stack was given as 2.24nm, which represents the total Co thickness only), where an additional 0.9nm of Pd is included to allow for polarization of the 3nm Pd in contact with the bottom Co layer.

The 0.7 μm-period triangular magnetic lattice structure was fabricated using electron-beam lithography (EBL) plus reactive ion etching <cit.>. A 300nm-thick layer of positive tone resist (PMMA 495k polymer, MicroChem Corp) is spin-coated onto a Co/Pd film-coated silicon wafer and the triangular lattice pattern (Fig. <ref>(a)) is written onto the resist using an e-beam lithography machine operating at 100kV (Raith EBPG5000plusES). A 5nm electron spot is scanned along the designated pattern at a 50MHz rate by the pattern generator. The 1mm^2 write field of the e-beam machine allows exposure of an entire magnetic lattice structure without the need to move the sample stage. When a uniform EBL exposure is performed over a large (1mm^2) area, the electron beam can be scattered in the resist to produce a pattern that is deformed towards the edges. To compensate for these proximity effects, an exposure dose proximity map is designed using Monte-Carlo simulations to evaluate the scattering of the electron beam <cit.>. The duration of the EBL exposure is about two hours. After development of the resist, the triangular pattern is etched into the Co/Pd film by argon-ion bombardment in an inductively-coupled plasma reactive ion etching tool (Samco RIE-101iPH). The patterned Co/Pd magnetic film is coated with a reflective 50nm layer of gold plus a 25nm layer of silica to prevent rubidium atoms reacting with the gold surface. The patterned Co/Pd magnetic film is then glued onto a direct bonded copper (DBC) 50mm × 55mm atom chip <cit.> comprising 130 μm-thick current-carrying U-wire and Z-wire structures <cit.>. The atom chip can accommodate four separate 1mm^2 magnetic lattice structures, each of which has a U-wire and Z-wire structure directly beneath it (Fig.
<ref>(a)).

Finally, the 0.7 μm-period Co/Pd triangular magnetic lattice structure is magnetized and then characterized by magnetic/atomic force microscopy and scanning electron microscopy, prior to mounting in the vacuum chamber. The period of the triangular magnetic structure is measured from scanning electron microscopy (SEM) scans (Fig. <ref>(b)) to be 0.70 μm to within about 1%. The quality of the present 0.7 μm-period triangular magnetic lattice structure is significantly improved over that reported earlier <cit.>. The parameters of the triangular magnetic lattice structure are summarized in Table <ref>.

§.§ Atom trapping and cooling and atom imaging

Rubidium atoms released from a pulsed dispenser are trapped in a standard four-beam mirror magneto-optical trap (MMOT) on the atom chip with a gold reflecting surface. The beams, derived from a 1W tapered amplifier laser system, consist of an atom trapping beam detuned 15MHz below the F = 2 → F' = 3 cycling transition combined with a repumper beam locked to the F = 1 → F' = 2 transition. We trap typically 2 × 10^8 atoms in 25s in the MMOT at 1 - 2mm below the chip surface. The atoms are then transferred to a compressed MMOT formed by passing 20A through a U-wire on the atom chip plus a bias field B_x = 12G to create the quadrupole trap. This is followed by a polarization gradient cooling stage, resulting in ∼ 1.5 × 10^8 atoms cooled to ∼ 40 μK.

The atoms are then optically pumped to the required |F=1, m_F=-1⟩ low-field seeking ground state, which is chosen because of its smaller three-body recombination rate <cit.> compared with the |F=2, m_F=+2⟩ state. Next, the atoms are transferred to a Z-wire magnetic trap formed by passing a current I_z = 35A and raising the bias field to B_x = 33G. The trap bottom is adjusted to ∼ 3G, to prevent spin-flip loss, by applying a bias field B_y = 7G. To enhance the elastic collision rate, the atom cloud is then compressed by ramping I_z, B_x and B_y up to 37A, 52G and 8G, respectively, in 100ms, resulting in ∼ 5 × 10^7 atoms at a temperature of ∼ 200 μK at ∼ 700 μm below the chip surface, with a Z-wire trap lifetime of ∼ 20s. Forced rf evaporative cooling is then applied to the atoms in the Z-wire trap for 12s by logarithmically ramping the rf field from 30MHz down to various final evaporation frequencies. For a final evaporation frequency of 0.5MHz, about 2 × 10^5 ^87Rb atoms are left in the Z-wire trap at a temperature of ∼ 200nK, to produce a Bose-Einstein condensate (BEC).

The atom clouds are imaged in situ using reflection absorption imaging <cit.>, in which the imaging beam is sent at a small angle (θ ∼ 2^∘) to the reflecting gold surface on the atom chip, so that two beam paths traverse the atom cloud, creating a direct image and a mirror image of the cloud (Fig. <ref>(a), inset). The atoms are pumped into the |F=2, m_F=+2⟩ state and a spatially filtered σ^+-polarized imaging beam tuned to the F = 2 → F' = 3 cycling transition is focussed by a 50.8mm-diameter achromatic lens doublet (f_1 = 120mm, f_2 = 500mm). The light transmitted by the atoms is imaged by the first lens, which is positioned against one of the vacuum viewports at a distance f_1 from the atom cloud. The magnification is M = f_2/f_1, the effective pixel size in the object plane is 3.5 μm, and the measured resolution is about 9 μm.
The images are recorded by a CCD camera operated in frame-transfer mode.

§ RESULTS

§.§ Bringing the Z-wire trapped atoms close to the chip surface

To determine the distance of the centre of the Z-wire trapped atom cloud from the chip surface, we measure the separation of the centres of the direct and mirror images of clouds recorded by reflection absorption imaging (Fig. <ref>(a), inset). The distance between the direct and mirror images is 2d cosθ ≈ 2d, where d is the distance of the trap centre to the chip surface. The data points (Fig. <ref>(a)) fit well to a straight line, where the intercept d(I_z=0) = -718 μm corresponds approximately to the estimated distance of the gold mirror from the current-carrying copper wires. At very small distances from the chip surface the direct and mirror images merge into one, owing to the finite size of the atom cloud and the finite resolution of the imaging system. To determine these small distances, we use an extrapolation based on the best fit to the data points in Fig. <ref>(a).

To investigate effects of the chip surface, we measure the fraction of remaining atoms χ(d) versus distance d = z - 75nm from the chip surface (where z is the distance from the magnetic film). The atom cloud in the Z-wire trap is moved to a final position d, where it is held for t_0 = 10ms, before moving back quickly to its original position for imaging. Figure <ref>(b) shows the measured atom fraction χ(d) versus distance d for a condensate (T = 200nK) well below the critical temperature (T_c ≈ 520nK), and for thermal clouds at 600nK, 1 μK and 2 μK. The different temperatures are obtained by changing the final evaporation frequency during the rf evaporative cooling and are measured by time of flight.

To model the atom fraction χ(d) versus distance d from the chip surface, we consider the combined potential of the Z-wire magnetic trap and the attractive Casimir-Polder interaction, V(z) = V_Z(z) + V_CP(d), where the Z-wire trap potential is approximated by a harmonic potential V_Z(z) = (1/2)Mω_r^2 (z - z_min)^2 truncated at the chip surface z = t_Au + t_SiO_2, and V_CP(d) is given by Eq. (<ref>) <cit.>. The attractive Casimir-Polder interaction lowers the trap depth slightly, to Δ E_b, and causes the trap to disappear at a finite distance from the surface, e.g., at d ≈ 1 μm for C_4 = 8.2 × 10^-56 Jm^4 and ω_r/2π = 280Hz.

The finite trap depth produced by the Z-wire magnetic potential plus the Casimir-Polder interaction results in a sudden truncation of the high-energy tail of the Boltzmann distribution of atoms in the Z-wire trap, so that the remaining atom fraction is χ(d) = 1 - e^-η, where η = Δ E_b/(k_B T) is the truncation parameter. The radial trap frequency (ω_r/2π = 280Hz) is estimated from dipole oscillations taken over a range of distances as the Z-trap approaches the chip surface and extrapolating to the region of interest. Using C_4 = 8.2 × 10^-56 Jm^4, the main fitting parameter is the cloud temperature T, which for the four data sets in Fig. <ref>(b) is 190nK, 430nK, 0.85 μK and 1.5 μK. These values are comparable to the above temperatures measured by time of flight.

The above simple truncation model can be extended to include the effect of 1D surface evaporation, in which the more energetic atoms in the trap region near the chip surface preferentially escape the Z-wire trap.
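Before turning to that extension, the simple truncation model itself can be evaluated in a few lines. The sketch below is our own illustrative implementation (assuming the quoted ω_r, C_4 and an assumed λ_opt = 780nm): it locates the Casimir-Polder-reduced barrier Δ E_b of the truncated harmonic Z-trap numerically and evaluates χ(d) = 1 - e^-η.

```python
import numpy as np

kB, M = 1.380649e-23, 1.4431e-25      # J/K; kg (87Rb)
C4, lam = 8.2e-56, 780e-9             # J m^4; m (assumed D2 wavelength)
wr = 2 * np.pi * 280.0                # rad/s, measured radial trap frequency

def V_CP(z):
    return -C4 / (z**3 * (z + 3 * lam / (2 * np.pi**2)))

def remaining_fraction(d0, T):
    """chi = 1 - exp(-eta) for a harmonic Z-trap centred a height d0 above
    the chip surface, truncated there and tilted by the CP attraction."""
    z = np.linspace(2e-9, 3 * d0, 60000)           # heights above the surface
    V = 0.5 * M * wr**2 * (z - d0)**2 + V_CP(z)
    core = z > 0.5 * d0                            # avoid the CP divergence
    i_min = int(np.argmin(np.where(core, V, np.inf)))  # trap minimum
    dE_b = V[:i_min].max() - V[i_min]              # barrier towards surface
    if dE_b <= 0:
        return 0.0                                 # trap has vanished
    return 1.0 - np.exp(-dE_b / (kB * T))

for d0 in (1.5e-6, 3e-6, 5e-6):                    # illustrative distances
    print(f"d0 = {d0*1e6:.1f} um: chi = {remaining_fraction(d0, 190e-9):.2f}")
```

The numbers are illustrative only (the paper uses the exact Z-trap potential), but the sketch reproduces the qualitative behaviour: χ rises with distance and collapses as the trap opens up near d ≈ 1 μm.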
Using a classical 1D surface evaporation model <cit.>, the remaining atom fraction becomes χ(d) = (1 - e^-η) e^-Γ_ev t_0, where Γ_ev = f(η) e^-η/τ_el is the loss rate due to 1D surface evaporation, f(η) ≈ 2^-5/2 (1 - η^-1 + (3/2)η^-2) <cit.>, τ_el = [n_0 σ_el v_rel]^-1 is the elastic collision time, v_rel = √(16k_BT/(πM)) is the mean relative velocity, n_0 = N/[(2π)^3/2 σ_r^2 σ_ax] is the peak atom density in the Z-wire trap, σ_r,ax = (k_BT/M)^1/2/ω_r,ax, N is the number of atoms in the Z-wire trap, σ_el = 8πa_s^2 is the elastic collision cross section, and a_s = 5.3nm is the s-wave scattering length for ^87Rb |F=1, m_F=-1⟩ atoms.

In Fig. <ref>(b) we compare fits for the 1D surface evaporation model using T = 130nK and τ_el = 0.6ms (dashed blue curve) and the simple truncation model (solid blue curve) for the condensate at 200nK. The discrepancy for χ < 0.4 is likely due to limitations of the simple 1D surface evaporation model, which ignores evaporation-induced temperature changes and the effect of collisions which can redistribute the atom directions. The redistribution of atom directions results in a larger atom loss rate and a χ(d) vs d curve with a shape similar to the experimental data in Fig. <ref>(b) <cit.>.

§.§ Interaction of ultracold atoms with the 0.7 μm-period magnetic potential

To check that the ultracold atoms can interact with the magnetic potential very close (about 100nm) to the chip surface, we project an ultracold atom cloud from the Z-wire trap towards the lattice potential and monitor the reflection dynamics, similar to previous experiments with a 1D magnetic lattice potential <cit.>. This is performed for the 0.7 μm-period triangular magnetic lattice structure without bias fields, which produces a sinusoidal corrugated potential with period ∼ a in the y-direction and ∼ a/2 in the x-direction <cit.>.

An ultracold atom cloud at ∼ 200nK, i.e., below the critical temperature, is prepared in the Z-wire trap and brought to various distances d_0 = 145 - 65 μm from the chip surface by ramping down I_z. The Z-wire trap is switched off suddenly by turning off I_z and the bias field B_x. I_z rapidly decreases to zero in ∼ 0.1ms, while B_x, which is produced by large Helmholtz coils, decreases slowly in ∼ 10ms. The resulting delay provides a momentum kick to the atom cloud, launching it vertically towards the magnetic lattice potential close to the chip surface. When the launching position is far from the chip surface, e.g., d_0 = 145 μm (Fig. <ref>(a)), the atom cloud falls down under gravity before reaching the magnetic lattice potential and no reflection is observed. Reflection signals start to appear when the launching position approaches d_0 = 128 μm (Fig. <ref>(b)); both the free-falling part (no lateral (y) expansion) and the reflected part (with lateral expansion) are observed. When d_0 ≤ 76 μm (Fig. <ref>(c)-(d)), clear reflection signals are observed, which exhibit “half-moon” shapes due to the sinusoidal corrugation, with a lateral expansion of up to a factor of about 3. With a 2D corrugated potential, the lateral expansion occurs in two dimensions, and since one of the directions is along the imaging beam path, the reflected cloud exhibits a half-moon shape.

Figures <ref>(e),(f) show the lateral width along y and the vertical position of the ultracold atom cloud versus projection time t for the different launching positions d_0. Without reflection, the lateral width remains almost constant at ∼ 50 μm and the trajectory of the cloud in the vertical direction fits well to a single quadratic function.
For the case of reflection, the lateral width increases approximately linearly with time after reflection, with a slope corresponding to lateral velocities of 30 and 21 μm/ms for d_0 = 67 and 76 μm, respectively. The fitted equations for the cloud trajectories in the caption to Fig. <ref> indicate (i) for d_0 = 128 μm (green (top) curve) the atom cloud reaches its turning point after 5.3ms and at about 8 μm below the chip surface, and (ii) for d_0 = 67 μm (blue (bottom) curve) and d_0 = 76 μm (orange (second) curve), the atom cloud interacts with the magnetic potential after 1.0ms and 1.3ms with an incident velocity of 60 μm/ms and 52 μm/ms, and is reflected back after 2.0ms and 2.6ms with an exit velocity of 45 and 45 μm/ms, respectively. When the atom cloud is launched towards a region of the magnetic film where there is no magnetic lattice structure, the atom cloud disappears almost immediately upon touching the surface.

From the above results, we conclude that the observed reflection of the atom cloud is caused by the magnetic lattice potential and that the ultracold atom cloud can interact with the short-range magnetic potential.

§.§ Loading atoms into the 0.7 μm-period triangular magnetic lattice

The loading stage starts with a thermal cloud of ∼ 5 × 10^5 ^87Rb |F=1, m_F=-1⟩ atoms at ∼ 1 μK prepared in the Z-wire trap at ∼ 670 μm from the chip surface with I_z = 38A and B_x = 52G. Loading of the magnetic lattice is performed using a range of bias fields B_x = 9, 14, 26, 40 and 52G. For B_x = 52G, there is no change in B_x when the atoms are transferred from the Z-wire trap to the magnetic lattice traps, and the procedure involves simply ramping down I_z. For smaller B_x, the procedure is more complex, since B_x needs to be reduced first before loading atoms into the magnetic lattice traps, which results in the Z-wire cloud being pushed away from the surface. To compensate for the change in position of the Z-wire trap, I_z is reduced at the same time.

The atom cloud is loaded into the magnetic lattice traps by further reducing I_z while keeping B_x fixed, which brings the atoms closer to the surface until the Z-wire trap merges smoothly with the lattice potential a few hundred nanometres from the chip surface. The ramping speed for I_z is optimized so that it is sufficiently slow to prevent the Z-wire trapped atoms acquiring enough momentum to penetrate the magnetic lattice potential and hit the surface, but not so slow that at distances very close to the chip surface the atoms are lost by surface interactions and sloshing. After the loading stage, the Z-wire cloud is brought further from the surface for imaging by rapidly ramping up I_z. A representative reflection absorption image is shown in Fig. <ref>(a) for B_x = 52G. The clouds at the bottom and top of the figure are the direct and mirror images of the atoms remaining in the Z-wire trap, while the smaller cloud in the middle is attributed to atoms trapped in the magnetic lattice very close to the chip surface. The direct and mirror images of the lattice-trapped cloud cannot be resolved, owing to their very small (∼ 0.2 μm) separation, and atoms in individual lattice sites (separated by 0.7 μm) are not resolved because of the limited resolution of the imaging system.
Similar images of the small atom cloud trapped very close to the chip surface are observed for the other values of the bias field B_x.

The small atom cloud mid-way between the two larger images remains when the atoms in the Z-wire trap are removed by quickly reducing I_z to project them vertically to hit the chip surface (Fig. <ref>(b)), and also when the Z-wire current is completely turned off. We estimate that typically ∼ 2 × 10^4 atoms are trapped in the magnetic lattice, initially in an area of ∼ 180 μm × 13 μm (FWHM) containing about 4900 lattice sites, which corresponds to N_site ≈ 4 atoms per site.

In a second experiment, the atom cloud is launched from a distance d_0 = 130 μm from the chip surface by quickly switching off both B_x and I_z together, so that the fast response of I_z relative to B_x projects the atom cloud vertically towards the magnetic lattice potential. Immediately after launching the atom cloud, small bias fields of B_x = -5.3G and B_y = 6G are applied for 3ms. The small negative B_x bias field, which is produced by small fast-response Helmholtz coils, approximately cancels the residual B_x field from the large Helmholtz coils, while the B_y bias field creates a triangular magnetic lattice similar to the optimized lattice in Fig. <ref>(b). With careful optimization of the launching velocity, the atom cloud can merge with the magnetic lattice potential such that a fraction of the atoms remain trapped, while the rest fall down under gravity (Fig. <ref>(c), right panel). To remain trapped in the conservative potential of the magnetic lattice, the atoms need to experience some dissipation, which may be provided by surface evaporative cooling. After 3ms time of flight the small trapped cloud appears mid-way between the direct and mirror images of the falling cloud, and then disappears after a further 1.5ms, which is consistent with the measured lifetime of the lattice-trapped atoms (Sect. <ref>D).

Further discussion about the loading of the 0.7 μm-period magnetic lattice is given in Sec. <ref>.

§.§ Lifetimes of atoms trapped in the 0.7 μm-period triangular magnetic lattice

The lifetime of the lattice-trapped atoms is measured by recording the number of remaining atoms versus holding time for a range of bias fields B_x, and hence for a range of distances z = z_min from the magnetic film surface (Table <ref>). Figure <ref>(a) shows a representative decay curve for B_x = 14G. Within our detection sensitivity, the decay curves are well fitted with a single exponential, with lifetimes varying from 0.43 ± 0.06ms for B_x = 52G to 1.69 ± 0.11ms for B_x = 9G. These lifetimes are much longer than the corresponding lattice trap periods (1 - 3 μs), and they are found to increase approximately linearly with distance d = z - (t_Au + t_SiO_2) from the chip surface over the range investigated (Fig. <ref>(b)).

To interpret the short lifetimes and their approximately linear increase with distance d, we consider possible loss mechanisms. When the thermal cloud of atoms is transferred from the Z-wire trap to the very tight magnetic lattice traps, the atoms are heated by adiabatic compression from ∼ 1 μK to an estimated initial 3 - 8mK (depending on distance d from the chip surface) in the magnetic lattice. Atoms with energies higher than the effective trap depth Δ E_eff = Min{Δ E_z, Δ E_in} (Fig. <ref>(e)) rapidly escape the traps, resulting in a sudden truncation of the high-energy tail of the Boltzmann energy distribution.
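The compression-heating figure quoted above follows from a one-line scaling argument: for a classical gas compressed adiabatically in a 3D harmonic trap, T scales with the mean trap frequency. In the sketch below (our own estimate), the mean Z-trap frequency is an assumption of order 100-300 Hz, guided by the quoted 280Hz radial frequency; the lattice value is the calculated ∼ 800kHz mean frequency.

```python
import numpy as np

T_Z = 1e-6                        # K, cloud temperature in the Z-wire trap
w_lattice = 2 * np.pi * 800e3     # rad/s, calculated mean lattice frequency
for f_Z in (100.0, 300.0):        # Hz, assumed mean Z-trap frequencies
    T_lat = T_Z * w_lattice / (2 * np.pi * f_Z)   # T scales with mean freq.
    print(f"mean Z-trap frequency {f_Z:3.0f} Hz -> T ~ {T_lat*1e3:.1f} mK")
```

This gives ∼ 3 - 8mK, consistent with the estimate in the text.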
We estimate that, initially, there are many (∼ 100) atoms available for elastic collisions and evaporative cooling, which provides dissipation to allow the atoms to be trapped in the conservative potential. The remaining more energetic atoms that populate the outer region of the lattice traps, with energies comparable to the effective trap depth Δ E_eff, are rapidly lost, spill over into neighboring lattice traps, or are lost by rapid three-body recombination. The remaining atoms reach a quasi-equilibrium at a lower temperature T ≈ Δ E_eff/(η k_B), where η is the truncation parameter. In the presence of the attractive Casimir-Polder interaction, the barrier height for distances d very close to the chip surface is lowest in the z- (vertical) direction (Table <ref>). Using the 1D evaporation model <cit.> in Sect. <ref>A, the lifetime for one-dimensional thermal evaporation is τ_ev = τ_el/[f(η) e^-η], where τ_el = [n_0 σ_el v_rel]^-1 and n_0 = [N_site/(2π)^3/2] (M/k_BT)^3/2 ω^3 is the peak atom density in the magnetic lattice traps. According to this model, τ_ev scales as Δ E_eff/[ω^3 N_site η f(η) e^-η], where the truncation parameter η is assumed to remain constant. For decreasing B_x < 40G (where Δ E_eff ≡ Δ E_z), the trap minima move away from the chip surface and ω^-3 increases at a faster rate than Δ E_z decreases (Table <ref>), so that τ_ev exhibits an almost linear increase with increasing distance d from the chip surface (Fig. <ref>, red (second) curve). On the other hand, for increasing B_x ≥ 40G (where Δ E_eff ≡ Δ E_in), the trap minima move very close to the chip surface and Δ E_in and ω^-3 both decrease together with decreasing z, resulting in a sharp decrease in τ_ev.

A second possible loss process is three-body recombination in the very tight magnetic lattice traps. The lifetime for (non-exponential) decay by 3-body recombination is τ_3b = 1/(K_3 n_0^2), where K_3 = 4.3(1.8) × 10^-29 cm^6 s^-1 for non-condensed ^87Rb |F=1, m_F=-1⟩ atoms <cit.>. Thus, τ_3b scales as Δ E_eff^3/[ω^6 N_site^2 η^3]. For decreasing B_x < 40G (where Δ E_eff ≡ Δ E_z), the trap minima move away from the chip surface and Δ E_z^3 decreases at about the same rate as ω^-6 increases (Table <ref>), so that τ_3b remains almost constant for distances z > 170nm (Fig. <ref>, blue (top) curve). For increasing B_x ≥ 40G (where Δ E_eff ≡ Δ E_in), the trap minima move very close to the chip surface and Δ E_in^3 and ω^-6 both decrease strongly together with decreasing z, resulting in a rapid decrease in τ_3b.

A further possible loss process can result from spin flips caused by Johnson magnetic noise from the gold conducting layer on the magnetic film <cit.>. The spin-flip lifetime for the state |F=1, m_F=-1⟩ is given by <cit.>

τ_s = 256πħ^2 d / [3μ_0^2 μ_B^2 σ k_B T g(d, t_Au, δ)],

where g(d, t_Au, δ) ≈ t_Au/(t_Au + d) for δ ≫ Max{d, t_Au} <cit.>; δ = √(2/(σμ_0ω_L)) is the skin depth at the spin-flip transition frequency ω_L = m_F g_F μ_B B_IP/ħ; σ is the electrical conductivity of the conducting layer (at temperature T); and μ_0 is the vacuum permeability. For t_Au = 50nm, we obtain spin-flip lifetimes (Fig. <ref>, dashed orange curve) that are much longer than the measured trap lifetimes, for example, τ_s = 48ms and 230ms for d = 110nm and 290nm, respectively.

The calculated one-dimensional evaporation lifetime τ_ev versus distance (Fig. <ref>, red (second) curve) has a positive slope, given approximately by Δ E_eff/(ω^3 d), which closely matches the slope of the measured lifetime versus distance (Fig. <ref>), with no adjustable parameters.
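The relative size of the three loss channels can be illustrated numerically. In the sketch below (our own illustration), the trap depth and mean frequency are assumptions of the same order as the Table values, and the gold layer is assumed to be at room temperature:

```python
import numpy as np

hbar, kB = 1.0546e-34, 1.380649e-23
M, a_s = 1.4431e-25, 5.3e-9          # 87Rb mass (kg); scattering length (m)
K3 = 4.3e-29 * 1e-12                 # m^6/s  (= 4.3e-29 cm^6 s^-1)
mu0, muB = 4e-7 * np.pi, 9.274e-24
sigma_Au = 1 / 0.22e-7               # S/m, gold, rho = 0.22e-7 Ohm m
T_surf = 300.0                       # K, assumed temperature of the gold layer

def lifetimes(dE_over_kB, w_mean, d, N_site=1.5, eta=4.0, t_Au=50e-9):
    """Evaporation, three-body and spin-flip lifetimes for one lattice trap.
    dE_over_kB: effective trap depth in K; w_mean: mean frequency (rad/s);
    d: trap distance from the chip surface (m)."""
    T = dE_over_kB / eta                                  # quasi-equilibrium T
    n0 = N_site / (2 * np.pi)**1.5 * (M / (kB * T))**1.5 * w_mean**3
    v_rel = np.sqrt(16 * kB * T / (np.pi * M))
    tau_el = 1 / (n0 * 8 * np.pi * a_s**2 * v_rel)        # elastic collisions
    f_eta = 2**-2.5 * (1 - 1 / eta + 1.5 / eta**2)
    tau_ev = tau_el / (f_eta * np.exp(-eta))              # 1D evaporation
    tau_3b = 1 / (K3 * n0**2)                             # 3-body losses
    g = t_Au / (t_Au + d)                                 # thin-layer factor
    tau_s = 256 * np.pi * hbar**2 * d / (
        3 * mu0**2 * muB**2 * sigma_Au * kB * T_surf * g) # Johnson noise
    return tau_ev, tau_3b, tau_s

# Illustrative trap: depth/kB ~ 4 mK, mean frequency ~ 800 kHz, d = 110 nm.
ev, b3, s = lifetimes(4e-3, 2 * np.pi * 800e3, 110e-9)
print(f"tau_ev ~ {ev*1e3:.1f} ms, tau_3b ~ {b3*1e3:.0f} ms, tau_s ~ {s*1e3:.0f} ms")
```

With these assumptions τ_ev comes out near 2ms, while τ_3b and τ_s are tens to hundreds of milliseconds, reproducing the ordering on which the conclusion below rests.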
On the other hand, the calculated τ_3b versus distance (Fig. <ref>, blue (top) curve) remains almost constant for z > 170nm. This suggests that the dominant loss mechanism limiting the trap lifetimes is one-dimensional thermal evaporation, rather than three-body recombination or spin flips due to Johnson magnetic noise. With thermal evaporation, one might expect some atoms to remain in the lattice traps for times much longer than 1ms. Within our detection sensitivity, there is no indication of a non-exponential tail in the decay curves, e.g., Fig. <ref>(a).

The red curve in Fig. <ref>(b) shows the calculated evaporation lifetime τ_ev with fitted scaling parameters N_site = 1.5, η = 4, a fitted offset δd = 25nm (see below) and the fixed parameters given in Tables <ref> and <ref>. To obtain a reasonable fit, such that the evaporation lifetime is much shorter than the three-body recombination lifetime, requires a value N_site ≈ 1.5, which is smaller than the N_site ≈ 4 estimated from the number of atoms (∼ 2 × 10^4) initially trapped in ∼ 4900 lattice sites. The smaller value of N_site ≈ 1.5 could be a result of atoms spilling over into neighboring lattice sites during the initial transfer of atoms from the Z-wire trap into the tight magnetic lattice traps, so that more than 4900 lattice sites are occupied at the time of measurement of the atom number, and/or it could be a result of uncertainties in the size of the Z-trap cloud or the total number of lattice-trapped atoms. An average of 1.5 atoms per lattice site over the occupied lattice is consistent with the end-product of rapid three-body recombination prior to the observation period, leaving zero, one or two atoms on any given site.

To obtain a reasonable fit to the measured lifetimes at very small distances d from the chip surface, where the calculated lifetime is very sensitive to the distance d due to the Casimir-Polder interaction, requires either the calculated C_4 = 8.2 × 10^-56 Jm^4 to be smaller by an order of magnitude, or the calculated distances of the trapped atoms from the chip surface d = z_min - (t_Au + t_SiO_2) to be larger by δd ≈ 25nm. The above C_4 value is expected to be accurate to within ∼ 40%, based on the level of agreement between the calculated C_4 value and the measured value <cit.> for a dielectric sapphire surface film. A value of δd = 25nm is within the estimated uncertainty (+40/-30nm) in d = z_min - (t_Au + t_SiO_2) for B_x = 40G and 52G, which has contributions from a systematic error of about +10nm due to the effect of the 20nm-deep etching of the magnetic film, estimated uncertainties in t_Au + t_SiO_2 (± 5nm) and z_min (± 25nm), and the effect of the estimated uncertainty in C_4 (± 2nm).

§ DISCUSSION

The measured lifetimes of the atoms trapped in the 0.7 μm-period magnetic lattice are short, 0.4 - 1.7ms for distances d = 90 - 260nm from the chip surface, and need to be increased to enable quantum tunneling.
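To set the scale for tunneling, the recoil energy for a 0.7 μm period can be evaluated quickly. The sketch below (our own check, assuming k = π/a for the lattice and the |m_F g_F| = 1/2 Zeeman shift of the |1,-1⟩ state) reproduces the 12E_r ≈ 20mG trap depth and ħ/J ≈ 9ms tunneling time quoted in the example that follows:

```python
import numpy as np

hbar, kB = 1.0546e-34, 1.380649e-23
M, muB = 1.4431e-25, 9.274e-24       # 87Rb mass; Bohr magneton
a = 0.7e-6                           # m, lattice period
k = np.pi / a                        # assumed lattice wave number
E_r = hbar**2 * k**2 / (2 * M)       # recoil energy
B_depth = 12 * E_r / (0.5 * muB)     # field giving a 12 E_r deep trap
print(f"E_r/kB = {E_r/kB*1e9:.0f} nK, 12 E_r <-> {B_depth*1e7:.0f} mG")
J = 0.82e-9 * kB                     # J/kB = 0.82 nK from the text's example
print(f"tunneling time hbar/J ~ {hbar/J*1e3:.0f} ms")
```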
For example, for an atomic system trapped in a 0.7 μm-period square lattice with a trap depth of 12E_r ∼ 20mG (where E_r = ħ^2k^2/(2M) is the recoil energy), the Bose-Hubbard model for the Mott-insulator transition predicts that at the critical point, which occurs at (J/U)_c ∼ 0.06 <cit.>, the tunneling matrix element J/k_B = 0.82nK, the on-site interaction energy U/k_B = 14nK, and the tunneling time is 9ms <cit.>.

Our model calculations suggest that the short lifetimes of the atoms trapped in the magnetic lattice are currently limited mainly by losses due to one-dimensional thermal evaporation following transfer of the thermal atom cloud from the Z-wire trap into the very tight magnetic lattice traps, rather than by fundamental loss processes such as surface interactions, three-body recombination or spin flips due to Johnson magnetic noise. Therefore, it should be technically feasible to reach longer lifetimes in the magnetic lattice traps by reducing the effect of one-dimensional thermal evaporation following the loading process. One possible way is to increase the distance of the trapped atoms from the magnetic surface, for example, by using a thicker magnetic film and/or by using an optimized triangular magnetic lattice with z_min ≈ a ≈ 700nm. However, increasing z_min reduces not only the mean trap frequency but also the trap depth, thereby resulting in only a marginal increase in the trap lifetime, as exemplified in Fig. <ref>(b).

A bigger gain is likely to come from improving the transfer of atoms from the Z-wire trap to the very tight magnetic lattice traps. Heating due to adiabatic compression during transfer of the thermal cloud to the magnetic lattice traps could be reduced by loading the atoms from a magnetic trap with trap frequency higher than ∼ 100Hz. Trap frequencies as high as 5kHz <cit.> or even tens of kilohertz <cit.> have previously been achieved for a current-carrying conductor microtrap on an atom chip. A further gain in transfer efficiency could be obtained by ensuring that the direction of the trap bottom field (B_IP) of the magnetic lattice traps is aligned with that of the Z-wire trap. It should also be possible to reduce heating due to adiabatic compression if a BEC, rather than a thermal cloud, can be loaded directly from the Z-wire trap into the magnetic lattice.

If trap lifetimes ∼ 100ms can be achieved, losses due to spin flips caused by Johnson magnetic noise may become significant (Fig. <ref>, dashed orange curve). Such losses could be reduced, for example, by replacing the reflecting 50nm gold layer (ρ = 0.22 × 10^-7 Ωm) on the chip with a reflecting material with higher resistivity, such as palladium (ρ = 1.05 × 10^-7 Ωm), and by operating at larger distances from the conducting layer, as discussed above.

To gain a more complete understanding of the loss processes presently limiting the trap lifetimes, it would be informative to study magnetic lattices with periods in between 0.7 μm and 10 μm (for which trap lifetimes of 10s have been achieved <cit.>).

§ SUMMARY AND CONCLUSIONS

We have demonstrated trapping of ultracold ^87Rb atoms in a 0.7 μm-period triangular magnetic lattice on an atom chip, based on the following observations:

(1) The atom cloud is found to interact with the magnetic lattice potential very close to the chip surface when it is projected vertically towards the surface.

(2) A small atom cloud appears mid-way between the direct and mirror images of the Z-trapped atom cloud when it is brought very close to the chip surface.
The small cloud remains when the atoms remaining in the Z-wire trap are removed and when the Z-wire current is completely turned off.

(3) A small atom cloud also appears very close to the chip surface when a cloud of atoms is projected vertically from the Z-wire trap with optimized velocity to almost touch the chip surface.

(4) The lifetimes of the small atom cloud (0.4 - 1.7ms) are much longer than the corresponding lattice trap periods (1 - 3 μs) and increase significantly with increasing distance from the chip surface, approximately in accordance with model calculations.

Our model calculations suggest that the trap lifetimes are currently limited mainly by losses due to one-dimensional thermal evaporation following transfer of atoms from the Z-wire trap to the very tight magnetic lattice traps, rather than by fundamental loss processes such as surface interactions, three-body recombination or spin flips due to Johnson magnetic noise. It should be feasible to overcome one-dimensional thermal evaporation losses by improving the transfer of atoms from the Z-wire trap to the very tight magnetic lattice traps, for example, by loading the atoms from a magnetic trap with higher trap frequency.

The trapping of atoms in a 0.7 μm-period magnetic lattice represents a significant step towards using magnetic lattices for quantum tunneling experiments and to simulate condensed matter and many-body phenomena in nontrivial lattice geometries. To the best of our knowledge, the trapping of atoms at distances of about 100nm from the chip surface and at trap frequencies as high as 800kHz represents new territory for trapping ultracold atoms.

§ ACKNOWLEDGMENTS

We are indebted to Shannon Whitlock, Russell McLean, Saulius Juodkazis and Peter Krüger for fruitful discussions. We thank Pierette Michaux for fabricating early versions of the magnetic lattice structures and James Wang for assistance with the magnetic force/atomic force microscope measurements. The electron beam lithography was performed at the Melbourne Centre for Nanofabrication (MCN) in the Victorian Node of the Australian National Fabrication Facility (ANFF). The atom chip was fabricated using the nanofabrication facility at Swinburne University. Funding from the Australian Research Council (Discovery Project Grant No. DP130101160) is acknowledged.

§ REFERENCES

[Yibo2016] Y. Wang, P. Surendran, S. Jose, T. Tran, I. Herrera, S. Whitlock, R. McLean, A. Sidorov, and P. Hannaford, Sci. Bulletin 61, 1097 (2016).
[Schmied2010] R. Schmied, D. Leibfried, R. J. C. Spreeuw, and S. Whitlock, New J. Phys. 12, 103029 (2010).
[Singh2008] M. Singh, M. Volk, A. Akulshin, A. Sidorov, R. McLean, and P. Hannaford, J. Phys. B 41, 065301 (2008).
[Jose2014] S. Jose, P. Surendran, Y. Wang, I. Herrera, L. Krzemien, S. Whitlock, R. McLean, A. Sidorov, and P. Hannaford, Phys. Rev. A 89, 051602(R) (2014).
[Surendran2015] P. Surendran, S. Jose, Y. Wang, I. Herrera, H. Hu, X. Liu, S. Whitlock, R. McLean, A. Sidorov, and P. Hannaford, Phys. Rev. A 91, 023605 (2015).
[Gerritsma2007] R. Gerritsma, S. Whitlock, T. Fernholz, H. Schlatter, J. A. Luigjes, J.-U. Thiele, J. B. Goedkoop, and R. J. C. Spreeuw, Phys. Rev. A 76, 033408 (2007).
[Whitlock2009] S. Whitlock, R. Gerritsma, T. Fernholz, and R. J. C. Spreeuw, New J. Phys. 11, 023021 (2009).
[Leung2011] V. Y. F. Leung, A. Tauschinsky, N. J. van Druten, and R. J. C. Spreeuw, Quant. Inf. Process. 10, 955 (2011).
[Herrera2016] I. Herrera, Y. Wang, P. Michaux, D. Nissen, P. Surendran, S. Juodkazis, S. Whitlock, R. McLean, A. Sidorov, M. Albrecht, and P. Hannaford, J. Phys.
D 48, 115002 (2015).
[Leung2014] V. Y. F. Leung, D. R. M. Pijn, H. Schlatter, L. Torralbo-Campo, A. L. La Rooij, G. B. Mulder, J. Naber, M. L. Soudijn, A. Tauschinsky, C. Abarbanel, B. Hadad, E. Golan, R. Folman, and R. J. C. Spreeuw, Rev. Sci. Instrum. 85, 053102 (2014).
[Bloch2008] I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. 80, 885 (2008).
[Bakr2009] W. S. Bakr, J. I. Gillen, A. Peng, S. Fölling, and M. Greiner, Nature (London) 462, 74 (2009).
[Pasquini2004] T. A. Pasquini, Y. Shin, C. Sanner, M. Saba, A. Schirotzek, D. E. Pritchard, and W. Ketterle, Phys. Rev. Lett. 93, 223201 (2004).
[Lin2004] Y. Lin, I. Teper, C. Chin, and V. Vuletic, Phys. Rev. Lett. 92, 050404 (2004).
[Yan1997] Z. Yan, A. Dalgarno, and J. F. Babb, Phys. Rev. A 55, 2882 (1997).
[Stark2015] M. Stärk, F. Schlickeiser, D. Nissen, B. Hebler, P. Graus, D. Hinzke, E. Scheer, P. Leiderer, M. Fonin, M. Albrecht, U. Nowak, and J. Boneberg, Nanotechnology 26, 205302 (2015).
[Stinson1990] D. G. Stinson and S.-C. Shin, J. Appl. Phys. 67, 4459 (1990).
[Yibo2017] Y. Wang, PhD thesis, Swinburne University of Technology (2017).
[Squires2011] M. B. Squires, J. A. Stickney, E. J. Carlson, P. M. Baker, W. R. Buchwald, S. Wentzell, and S. M. Miller, Rev. Sci. Instrum. 82, 023101 (2011).
[Burt1997] E. A. Burt, R. W. Ghrist, C. J. Myatt, M. J. Holland, E. A. Cornell, and C. E. Wieman, Phys. Rev. Lett. 79, 337 (1997).
[Soding1999] J. Söding, D. Guéry-Odelin, P. Desbiolles, F. Chevy, H. Inamori, and J. Dalibard, Appl. Phys. B 69, 257 (1999).
[Smith2011] D. A. Smith, S. Aigner, S. Hofferberth, M. Gring, M. Andersson, S. Wildermuth, P. Krüger, S. Schneider, T. Schumm, and J. Schmiedmayer, Opt. Express 19, 8471 (2011).
[Surkov1996] E. L. Surkov, J. T. M. Walraven, and G. V. Shlyapnikov, Phys. Rev. A 53, 3403 (1996).
[Markle2014] J. Märkle, A. J. Allen, P. Federsel, B. Jetter, A. Günther, J. Fortágh, N. P. Proukakis, and T. E. Judd, Phys. Rev. A 90, 023614 (2014).
[Singh2009] M. Singh, R. McLean, A. Sidorov, and P. Hannaford, Phys. Rev. A 79, 053407 (2009).
[Singh2010] M. Singh and P. Hannaford, Phys. Rev. A 82, 013416 (2010).
[Treutlein2008] P. Treutlein, PhD Thesis, Ludwig-Maximilians University Munich (2008).
[Jones2003] M. P. A. Jones, C. J. Vale, D. Sahagun, B. V. Hall, and E. A. Hinds, Phys. Rev. Lett. 91, 080401 (2003).
[Henkel2005] C. Henkel, Eur. Phys. J. D 35, 59 (2005).
[Stehle2011] C. Stehle, H. Bender, C. Zimmermann, D. Kern, M. Fleischer, and S. Slama, Nat. Photon. 5, 494 (2011).
[Jacqmin2012] T. Jacqmin, B. Fang, T. Berrada, T. Roscilde, and I. Bouchoule, Phys. Rev. A 86, 043626 (2012).
[Batrouni2002] G. Batrouni, V. Rousseau, R. Scalettar, M. Rigol, A. Muramatsu, P. Denteneer, and M. Troyer, Phys. Rev. Lett. 89, 117203 (2002).
http://arxiv.org/abs/1705.09419v2
{ "authors": [ "Yibo Wang", "Tien Tran", "Prince Surendran", "Ivan Herrera", "Armandas Balcytis", "Dennis Nissen", "Manfred Albrecht", "Andrei Sidorov", "Peter Hannaford" ], "categories": [ "physics.atom-ph" ], "primary_category": "physics.atom-ph", "published": "20170526030424", "title": "Trapping ultracold atoms at 100 nm from a chip surface in a 0.7-micrometer-period magnetic lattice" }
On the (parameterized) complexity of recognizing well-covered (r,ℓ)-graphs

This work was supported by FAPERJ, CNPq, CAPES Brazilian Research Agencies, EPSRC (EP/K025090/1), the Leverhulme Trust (RPG-2016-258), and the French ANR projects DEMOGRAPH (ANR-16-CE40-0028) and ESIGMA (ANR-17-CE40-0028).

Sancrey Rodrigues Alves (FAETEC, Fundação de Apoio à Escola Técnica do Estado do Rio de Janeiro, Brazil), Konrad K. Dabrowski (Department of Computer Science, Durham University, Durham, United Kingdom), Luerbio Faria (UERJ, DICC, Universidade do Estado do Rio de Janeiro, Brazil), Sulamita Klein (UFRJ, COPPE-Sistemas, Universidade Federal do Rio de Janeiro, Brazil), Ignasi Sau (CNRS, LIRMM, Université de Montpellier, Montpellier, France, and Departamento de Matemática, Universidade Federal do Ceará, Fortaleza, Brazil), and Uéverton S. Souza (UFF, IC, Universidade Federal Fluminense, Niterói, Brazil)

Abstract. An (r, ℓ)-partition of a graph G is a partition of its vertex set into r independent sets and ℓ cliques. A graph is (r, ℓ) if it admits an (r, ℓ)-partition. A graph is well-covered if every maximal independent set is also maximum. A graph is (r,ℓ)-well-covered if it is both (r,ℓ) and well-covered. In this paper we consider two different decision problems. In the (r,ℓ)-Well-Covered Graph problem ((r,ℓ)wc-g for short), we are given a graph G, and the question is whether G is an (r,ℓ)-well-covered graph. In the Well-Covered (r,ℓ)-Graph problem (wc-(r,ℓ)g for short), we are given an (r,ℓ)-graph G together with an (r,ℓ)-partition, and the question is whether G is well-covered. This generates two infinite families of problems, for any fixed non-negative integers r and ℓ, which we classify as being P, coNP-complete, NP-complete, NP-hard, or coNP-hard. Only the cases wc-(r,0)g for r ≥ 3 remain open. In addition, we consider the parameterized complexity of these problems for several choices of parameters, such as the size α of a maximum independent set of the input graph, its neighborhood diversity, its clique-width, or the number ℓ of cliques in an (r, ℓ)-partition. In particular, we show that the parameterized problem of determining whether every maximal independent set of an input graph G has cardinality equal to k can be reduced to the wc-(0,ℓ)g problem parameterized by ℓ. In addition, we prove that both problems are coW[2]-hard but can be solved in XP-time.

Keywords: well-covered graph; (r, ℓ)-graph; coNP-completeness; FPT-algorithm; parameterized complexity; coW[2]-hardness.

§ INTRODUCTION

One of the most important combinatorial problems is Maximum Independent Set (MIS), where the objective is to find a maximum sized subset S ⊆ V of pairwise non-adjacent vertices in a graph G = (V,E).
Maximum independent sets appear naturally in a wide range of situations, and MIS also finds a number of “real world” relevant applications. Unfortunately, the decision version of MIS is an NP-complete problem <cit.>, and thus it cannot be solved in polynomial time unless P = NP. In spite of the fact that finding a maximum independent set is a computationally hard problem, a maximal independent set of a graph can easily be found in linear time. Indeed, a naive greedy algorithm for finding maximal independent sets consists simply of selecting an arbitrary vertex v to add to a set S, and updating the current graph by removing the closed neighborhood N[v] of v. This algorithm always outputs a maximal independent set in linear time, but clearly not all choices lead to a maximum independent set.

Well-covered graphs were first introduced by Plummer <cit.> in 1970. Plummer defined that “a graph is said to be well-covered if every minimal point cover is also a minimum cover”. This is equivalent to demanding that all maximal independent sets have the same cardinality. Therefore, well-covered graphs can be equivalently defined as the class of graphs for which the naive greedy algorithm discussed above always outputs a maximum independent set.

The problem of recognizing a well-covered graph, which we denote by Well-Covered Graph, was proved to be coNP-complete by Chvátal and Slater <cit.> and independently by Sankaranarayana and Stewart <cit.>. On the other hand, the Well-Covered Graph problem is in P when the input is known to be a perfect graph of bounded clique size <cit.> or a claw-free graph <cit.>.

Let r, ℓ ≥ 0 be two fixed integers. An (r, ℓ)-partition of a graph G = (V,E) is a partition of V into r independent sets S^1, …, S^r and ℓ cliques K^1, …, K^ℓ. For convenience, we allow these sets to be empty. A graph is (r, ℓ) if it admits an (r, ℓ)-partition. Note that the notion of (r,ℓ)-graphs is a generalization of that of r-colorable graphs. A P versus NP-complete dichotomy for recognizing (r, ℓ)-graphs was proved by Brandstädt <cit.>: the problem is in P if max{r, ℓ} ≤ 2, and NP-complete otherwise. The class of (r,ℓ)-graphs and its subclasses have been extensively studied in the literature. For instance, list partitions of (r,ℓ)-graphs were studied by Feder et al. <cit.>. In another paper, Feder et al. <cit.> proved that recognizing graphs that are both chordal and (r,ℓ) is in P.

A graph is (r, ℓ)-well-covered if it is both (r, ℓ) and well-covered. In this paper we analyze the complexity of the (r,ℓ)-Well-Covered Graph problem, which consists of deciding whether a graph is (r,ℓ)-well-covered. In particular, we give a complete classification of the complexity of this problem. Additionally, we analyze the complexity of the Well-Covered (r,ℓ)-Graph problem, which consists of deciding, given an (r,ℓ)-graph G = (V,E) together with an (r,ℓ)-partition, whether G is well-covered or not. We classify the complexity of this problem for every pair (r,ℓ), except for the cases when ℓ = 0 and r ≥ 3, which we leave open.

We note that similar restrictions have been considered in the literature. For instance, Kolay et al. <cit.> recently considered the problem of removing a small number of vertices from a perfect graph so that it additionally becomes (r, ℓ).

To the best of our knowledge, this is the first time in the literature that a decision problem obtained by “intersecting” an NP-complete and a coNP-complete recognition property has been studied.
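A well-covered graph is precisely one on which the naive greedy routine described above can never return a set smaller than the maximum. A minimal Python sketch of the routine (our own illustration; graphs are plain adjacency dictionaries):

```python
def greedy_maximal_independent_set(adj, order=None):
    """Greedily build a maximal (not necessarily maximum) independent set.
    adj maps each vertex to the set of its neighbours."""
    remaining, S = set(adj), set()
    for v in (order or list(adj)):     # any visiting order may be used
        if v in remaining:
            S.add(v)                   # take v ...
            remaining -= adj[v] | {v}  # ... and discard its closed nbhd N[v]
    return S

# The path a-b-c is not well-covered: picking b first stops with the maximal
# set {b}, while picking a first yields the maximum independent set {a, c}.
P3 = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
print(greedy_maximal_independent_set(P3, order=['b', 'a', 'c']))  # {'b'}
print(greedy_maximal_independent_set(P3, order=['a', 'c', 'b']))  # {'a', 'c'}
```

On a well-covered graph every run of this routine returns a set of the same (maximum) cardinality, which is exactly why the class can be viewed as the graphs on which the greedy heuristic is always optimal.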
From our results, the (r,ℓ)wc-g problem has a very peculiar property, namely that some cases of the problem are in NP, but other cases are in coNP. And if P ≠ NP, there are some cases where the decision problem is neither in NP nor in coNP.

In addition, according to the state of the art for the Well-Covered Graph problem, to the best of our knowledge this is the first work that associates the hardness of Well-Covered Graph with the number of independent sets and the number of cliques of an (r,ℓ)-partition of the input graph. This shows an important structural property for classifying the complexity of subclasses of well-covered graphs.

As a by-product of this paper, an infinite class of decision problems was classified as being both NP-hard and coNP-hard. Hence, unless NP = coNP, these decision problems are neither in NP nor in coNP.

More formally, in this paper we focus on the following two decision problems.

(r,ℓ)-Well-Covered Graph ((r,ℓ)wc-g)
Input: A graph G.
Question: Is G (r, ℓ)-well-covered?

Well-Covered (r,ℓ)-Graph (wc-(r,ℓ)g)
Input: An (r,ℓ)-graph G, together with a partition of V(G) into r independent sets and ℓ cliques.
Question: Is G well-covered?

We establish an almost complete characterization of the complexity of the (r,ℓ)wc-g and wc-(r,ℓ)g problems. Our results are shown in the following tables, where r (resp. ℓ) corresponds to the rows (resp. columns) of the tables, and where P stands for polynomial-time solvable, coNPc for coNP-complete, NPc for NP-complete, NPh for NP-hard, coNPh for coNP-hard, and NPh∧coNPh for both NP-hard and coNP-hard. The symbol `?' denotes that the complexity of the corresponding problem is open.

(r,ℓ)wc-g:
 r\ℓ    0      1           2           ≥3
 0      -      P           P           NPc
 1      P      P           P           NPc
 2      P      coNPc       coNPc       coNPh
 ≥3     NPh    NPh∧coNPh   NPh∧coNPh   NPh∧coNPh

wc-(r,ℓ)g:
 r\ℓ    0      1           2           ≥3
 0      -      P           P           P
 1      P      P           P           P
 2      P      coNPc       coNPc       coNPc
 ≥3     ?      coNPc       coNPc       coNPc

We note the following simple facts, which we will use to fill the above tables:

Fact 1. If (r,ℓ)wc-g is in P, then wc-(r,ℓ)g is in P.

Fact 2. If wc-(r,ℓ)g is coNP-hard, then (r,ℓ)wc-g is coNP-hard.

Note that wc-(r,ℓ)g is in coNP, since a certificate for a NO-instance consists just of two maximal independent sets of different size. On the other hand, for (r,ℓ)wc-g we have the following facts, which are easy to verify:

Fact 3. For any pair of integers (r,ℓ) such that the problem of recognizing an (r,ℓ)-graph is in P, the (r,ℓ)wc-g problem is in coNP.

Fact 4. For any pair of integers (r,ℓ) such that the wc-(r,ℓ)g problem is in P, the (r,ℓ)wc-g problem is in NP.

In this paper we prove that (r,ℓ)wc-g with (r,ℓ) ∈ {(0,1), (1,0), (0,2), (1,1), (2,0), (1,2)} can be solved in polynomial time, which by Fact 1 yields that wc-(r,ℓ)g with (r,ℓ) ∈ {(0,1), (1,0), (0,2), (1,1), (2,0), (1,2)} can also be solved in polynomial time. On the other hand, we prove that wc-(2,1)g is coNP-complete, which by Fact 2 and Fact 3 yields that (2,1)wc-g is also coNP-complete. Furthermore, we also prove that wc-(0,ℓ)g and wc-(1,ℓ)g are in XP (parameterized by ℓ), and that (r,ℓ)wc-g with (r,ℓ) ∈ {(0,3), (3,0), (1,3)} are NP-hard. Finally, we state and prove a “monotonicity” result, namely Theorem <ref>, stating how to extend the NP-hardness or coNP-hardness of wc-(r,ℓ)g (resp. (r,ℓ)wc-g) to wc-(r+1,ℓ)g (resp. (r+1,ℓ)wc-g), and wc-(r,ℓ+1)g (resp. (r,ℓ+1)wc-g). Together, these results correspond to those shown in the above tables.

In addition, we consider the parameterized complexity of these problems for several choices of the parameters, such as the size α of a maximum independent set of the input graph, its neighborhood diversity, its clique-width or the number ℓ of cliques in an (r, ℓ)-partition. We obtain several positive and negative results.
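Both problems ultimately hinge on testing well-coveredness, i.e., on whether all maximal independent sets share one common size. For intuition (and as the naive exponential baseline that the XP algorithms discussed next improve upon), a brute-force checker in Python, again our own illustration:

```python
from itertools import combinations

def is_well_covered(adj):
    """Check, by exhaustive enumeration, that every maximal independent set
    of the graph has the same cardinality. adj: vertex -> set of neighbours."""
    V = list(adj)
    def independent(S):
        return all(u not in adj[v] for u, v in combinations(S, 2))
    def maximal(S):  # every vertex outside S has a neighbour inside S
        return all(adj[v] & S for v in V if v not in S)
    sizes = {len(S)
             for k in range(len(V) + 1)
             for S in map(set, combinations(V, k))
             if independent(S) and maximal(S)}
    return len(sizes) <= 1

P3 = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}         # not well-covered
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}      # well-covered
print(is_well_covered(P3), is_well_covered(C4))        # False True
```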
In particular, we show that the parameterized problem of determining whether every maximal independent set of an input graph G has cardinality equal to k can be reduced to the wc-(0,ℓ)g problem parameterized by ℓ. In addition, we prove that both problems are coW[2]-hard, but can be solved in XP-time.

The rest of this paper is organized as follows. We start in Section <ref> with some basic preliminaries about graphs, parameterized complexity, and width parameters. In Section <ref> we prove our results concerning the classical complexity of both problems, and in Section <ref> we focus on their parameterized complexity. We conclude the paper with Section <ref>.

§ PRELIMINARIES

Graphs. We use standard graph-theoretic notation, and we refer the reader to <cit.> for any undefined notation. A graph G = (V,E) consists of a finite non-empty set V of vertices and a set E of unordered pairs (edges) of distinct elements of V. If uv ∈ E(G), then u,v are said to be adjacent, and u is said to be a neighbor of v. A clique (resp. independent set) is a set of pairwise adjacent (resp. non-adjacent) vertices. A vertex cover is a set of vertices containing at least one endpoint of every edge in the graph. The open neighborhood N(v), or neighborhood for short, of a vertex v ∈ V is the set of vertices adjacent to v. The closed neighborhood of a vertex v is defined as N[v]=N(v)∪{v}. A dominating set is a set of vertices S ⊆ V such that ⋃_v∈ S N[v] = V. Given S⊆ V and v∈ V, the neighborhood N_S(v) of v in S is the set N_S(v)=N(v)∩ S. Throughout the paper, we let n denote the number of vertices in the input graph for the problem under consideration.

Parameterized complexity. We refer the reader to <cit.> for basic background on parameterized complexity, and we recall here only some basic definitions. A parameterized problem is a language L ⊆Σ^* ×ℕ. For an instance I=(x,k) ∈Σ^* ×ℕ, k is called the parameter. A parameterized problem is fixed-parameter tractable (FPT) if there exists an algorithm 𝒜, a computable function f, and a constant c such that given an instance I=(x,k), 𝒜 (called an FPT-algorithm) correctly decides whether I ∈ L in time bounded by f(k) |I|^c. Within parameterized problems, the class W[1] may be seen as the parameterized equivalent to the class NP of classical optimization problems. Without entering into details (see <cit.> for the formal definitions), a parameterized problem being W[1]-hard can be seen as strong evidence that this problem is not fixed-parameter tractable. The canonical example of a W[1]-hard problem is Independent Set parameterized by the size of the solution [Given a graph G and a parameter k, the problem is to decide whether there exists an independent set S ⊆ V(G) such that |S| ≥ k.]. The class W[2] of parameterized problems is a class that contains W[1], and so the problems that are W[2]-hard are even more unlikely to be fixed-parameter tractable than those that are W[1]-hard (again, see <cit.> for the formal definitions). The canonical example of a W[2]-hard problem is Dominating Set parameterized by the size of the solution [Given a graph G and a parameter k, the problem is to decide whether there exists a dominating set S ⊆ V(G) such that |S| ≤ k.]. For i ∈ {1,2}, to transfer W[i]-hardness from one problem to another, one uses an fpt-reduction, which given an input I=(x,k) of the source problem, computes in time f(k) |I|^c, for some computable function f and a constant c, an equivalent instance I'=(x',k') of the target problem, such that k' is bounded by a function depending only on k. Hence, an equivalent definition of a W[1]-hard (resp.
W[2]-hard) problem is any problem that admits an fpt-reduction from Independent Set (resp. Dominating Set) parameterized by the size of the solution. Even if a parameterized problem is W[1]-hard or W[2]-hard, it may still be solvable in polynomial time for fixed values of the parameter; such problems are said to belong to the complexity class XP. Formally, a parameterized problem whose instances consist of a pair (x,k) is in XP if it can be solved by an algorithm with running time f(k) |x|^g(k), where f,g are computable functions depending only on the parameter and |x| represents the input size. For example, Independent Set and Dominating Set parameterized by the solution size are easily seen to belong to XP.

Width parameters. A tree-decomposition of a graph G = (V,E) is a pair (T,𝒳), where T = (I,F) is a tree, and 𝒳 = {B_i}, i∈ I, is a family of subsets of V(G), called bags and indexed by the nodes of T, such that

* each vertex v ∈ V appears in at least one bag, i.e., ⋃_i∈ I B_i = V;
* for each edge e = {x,y}∈ E, there is an i∈ I such that x,y ∈ B_i; and
* for each v ∈ V the set of nodes indexed by {i | i ∈ I, v ∈ B_i} forms a subtree of T.

The width of a tree-decomposition is defined as max_i ∈ I{|B_i| - 1}. The treewidth of G, denoted by tw(G), is the minimum width of a tree-decomposition of G.

The clique-width of a graph G, denoted by cw(G), is defined as the minimum number of labels needed to construct G, using the following four operations:

* Create a single vertex v with an integer label ℓ (denoted by ℓ(v));
* Take the disjoint union (i.e., co-join) of two graphs (denoted by ⊕);
* Join by an edge every vertex labeled i to every vertex labeled j for i ≠ j (denoted by η(i,j));
* Relabel all vertices with label i by label j (denoted by ρ(i,j)).

An algebraic term that represents such a construction of G and uses at most k labels is said to be a k-expression of G (i.e., the clique-width of G is the minimum k for which G has a k-expression). Graph classes with bounded clique-width include cographs <cit.>, distance-hereditary graphs <cit.>, graphs of bounded treewidth <cit.>, graphs of bounded branchwidth <cit.>, and graphs of bounded rank-width <cit.>.

§ CLASSICAL COMPLEXITY OF THE PROBLEMS

We start with a monotonicity theorem that will be very helpful to fill the tables presented in Section <ref>. The remainder of this section is divided into four subsections according to whether (r,ℓ)wc-g and wc-(r,ℓ)g are polynomial or "hard" problems. Let r,ℓ≥ 0 be two fixed integers. Then it holds that:

* if wc-(r,ℓ)g is coNP-complete then wc-(r+1,ℓ)g and wc-(r,ℓ+1)g are coNP-complete;
* if (r,ℓ)wc-g is NP-hard (resp. coNP-hard) then (r,ℓ+1)wc-g is NP-hard (resp. coNP-hard);
* supposing that r ≥ 1, if (r,ℓ)wc-g is NP-hard (resp. coNP-hard) then (r+1,ℓ)wc-g is NP-hard (resp. coNP-hard).

(i) This follows immediately from the fact that every (r,ℓ)-graph is also an (r+1,ℓ)-graph and an (r,ℓ+1)-graph. (ii) Let G be an instance of (r,ℓ)wc-g. Let H be an (r,ℓ+1)wc-g instance defined as the disjoint union of G and a clique Z with V(Z)={z_1,…,z_r+1}. Clearly G is well-covered if and only if H is well-covered. If G is an (r,ℓ)-well-covered graph then H is an (r,ℓ+1)-well-covered graph. Suppose H is an (r,ℓ+1)-well-covered graph, with a partition into r independent sets S^1,…,S^r and ℓ+1 cliques K^1,…,K^ℓ+1. Each independent set S^i can contain at most one vertex of the clique Z. Therefore, there must be a vertex z_i in some clique K^j. Assume without loss of generality that there is a vertex of Z in K^ℓ+1.
Then K^ℓ+1 cannot contain any vertex outside of V(Z), so we may assume that K^ℓ+1 contains all vertices of Z. Now S^1,…,S^r,K^1,…,K^ℓ is an (r,ℓ)-partition of G, so G is an (r,ℓ)-well-covered graph. Hence, H is a YES-instance of (r,ℓ+1)wc-g if and only if G is a YES-instance of (r,ℓ)wc-g.

(iii) Let G be an instance of (r,ℓ)wc-g. Let G' be an (r+1,ℓ)wc-g instance obtained from G by adding ℓ+1 isolated vertices. (This guarantees that every maximal independent set in G' contains at least ℓ+1 vertices.) Since r≥ 1, it follows that G' is an (r,ℓ)-graph if and only if G is. Clearly G' is well-covered if and only if G is. Next, find an arbitrary maximal independent set in G' and let p be the number of vertices in this set. Note that p ≥ℓ+1. Let H be the join of G' and a set of p independent vertices Z={z_1,…,z_p}, i.e., N_H(z_i)=V(G') for all i. Every maximal independent set of H is either Z or a maximal independent set of G', and every maximal independent set of G' is a maximal independent set of H. Therefore, H is well-covered if and only if G' is well-covered. Clearly, if G' is an (r,ℓ)-graph then H is an (r+1,ℓ)-graph. Suppose H is an (r+1,ℓ)-graph, with a partition into r+1 independent sets S^1,…,S^r+1 and ℓ cliques K^1,…,K^ℓ. Each clique K^i can contain at most one vertex of Z. Therefore there must be a vertex z_i in some independent set S^j. Suppose that there is a vertex of Z in S^r+1. Then S^r+1 cannot contain any vertex outside of Z. Without loss of generality, we may assume that S^r+1 contains all vertices of Z. Now S^1,…,S^r,K^1,…,K^ℓ is an (r,ℓ)-partition of G', so G' (and hence G) is an (r,ℓ)-graph. Thus H is a YES-instance of (r+1,ℓ)wc-g if and only if G is a YES-instance of (r,ℓ)wc-g.

§.§ Polynomial cases for wc-(r,ℓ)g

wc-(0,ℓ)g and wc-(1,ℓ)g are in P for every integer ℓ≥ 0. It is enough to prove that wc-(1,ℓ)g is in P. Let V=(S,K^1,K^2,K^3,…,K^ℓ) be a (1,ℓ)-partition for G. Then each maximal independent set I of G admits a partition I=(I_K, S∖ N_S(I_K)), where I_K is an independent set of K^1∪ K^2∪ K^3∪⋯∪ K^ℓ. Observe that there are at most O(n^ℓ) choices for an independent set I_K of K^1∪ K^2∪ K^3∪⋯∪ K^ℓ, which can be listed in time O(n^ℓ), since ℓ is constant and (K^1,K^2,K^3,…,K^ℓ) is given. For each of them, we consider the independent set I=I_K∪(S∖ N_S(I_K)). If I is not maximal (which may happen if a vertex in (K^1∪ K^2∪ K^3∪⋯∪ K^ℓ)∖ I_K has no neighbors in I), we discard this choice of I_K. Hence, we have a polynomial number O(n^ℓ) of maximal independent sets to check in order to decide whether G is a well-covered graph.

§.§ Polynomial cases for (r,ℓ)wc-g

The graph induced by a clique or by an independent set is well-covered. The following corollary is a simple application of Fact <ref>. G is a (0,1)-well-covered graph if and only if G is a (0,1)-graph. Similarly, G is a (1,0)-well-covered graph if and only if G is a (1,0)-graph. The following is an easy observation. (0,2)wc-g can be solved in polynomial time. By definition, a graph G=(V,E) is a (0,2)-graph if and only if its vertex set can be partitioned into two cliques, and this can be tested in polynomial time. It follows that every (0,2)-graph has maximum independent sets of size at most 2. Let G be a (0,2)-graph with (0,2)-partition (K^1,K^2). If V is a clique, then G is a (0,1)-well-covered graph, and hence a (0,2)-well-covered graph. If V is not a clique, then G is a (0,2)-well-covered graph if and only if G has no universal vertex.
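The enumeration in the proof above that wc-(1,ℓ)g is in P translates directly into an n^O(ℓ)-time procedure. The sketch below is a minimal Python rendering under our own assumed conventions (an adjacency dictionary, the independent set S, and the list of cliques of the given (1,ℓ)-partition as inputs); it is an illustration, not code from the paper.

from itertools import product

def well_covered_given_1l_partition(adj, S, cliques):
    # Every maximal independent set I splits as I = I_K u (S \ N_S(I_K)),
    # where I_K meets each clique in at most one vertex, so there are at
    # most (n+1)^l candidates for I_K.
    sizes = set()
    for choice in product(*[[None] + sorted(K) for K in cliques]):
        I_K = {v for v in choice if v is not None}
        if any(u in adj[v] for u in I_K for v in I_K if u != v):
            continue                       # I_K is not independent
        I = I_K | {v for v in S if not (adj[v] & I_K)}   # S \ N_S(I_K)
        # discard I if some unused clique vertex could still be added
        if any(v not in I and not (adj[v] & I) for K in cliques for v in K):
            continue                       # I is not maximal
        sizes.add(len(I))
    return len(sizes) <= 1                 # well-covered iff one size

Since ℓ is fixed, the loop ranges over at most (n+1)^ℓ candidate sets, matching the O(n^ℓ) bound claimed in the proof.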
In the next three lemmas we give a characterization of (1,1)-well-covered graphs in terms of their degree sequence. Note that (1,1)-graphs are better known in the literature as split graphs. Let G=(V,E) be a (1,1)-well-covered graph with (1,1)-partition V=(S,K), where S is an independent set and K is a clique. If x∈ K, then |N_S(x)|≤ 1. Suppose that G is a (1,1)-well-covered graph with (1,1)-partition V=(S,K), where S is an independent set and K is a clique. Let I be a maximal independent set of G such that x∈ I∩ K. Suppose for contradiction that |N_S(x)|≥ 2, and let y,z∈ N_S(x). Since y,z∈ S, we have N_G(y), N_G(z)⊆ K. Since K is a clique, vertex x is the only vertex of I in K. Hence, we have that N_G(y)∩ (I∖{x}) = N_G(z)∩ (I∖{x}) = ∅. Therefore I'=(I∖{x})∪{y,z} is an independent set of G such that |I'|=|I|+1. Thus, I is a maximal independent set that is not maximum, so G is not well-covered, a contradiction. Thus, |N_S(x)|≤ 1.

A graph G is a (1,1)-well-covered graph if and only if it admits a (1,1)-partition V=(S,K) such that either for every x∈ K, |N_S(x)|=0, or for every x∈ K, |N_S(x)|=1. Let G be a (1,1)-well-covered graph. By Lemma <ref> we have that, given a vertex x∈ K, either |N_S(x)|=0 or |N_S(x)|=1. Suppose for contradiction that there are two vertices x,y∈ K such that |N_S(x)|=0 and |N_S(y)|=1. Let z be the vertex of S adjacent to y. Let I be a maximal independent set containing vertex y. Note that the vertex x is non-adjacent to every vertex of I∖{y}, since there is at most one vertex of I in K. The same applies to the vertex z. Hence, a larger independent set I', with size |I'|=|I|+1, can be obtained from I by replacing vertex y with the non-adjacent vertices x,z; i.e., I is a maximal independent set of G that is not maximum, a contradiction. Thus, either for every x∈ K, |N_S(x)|=0, or for every x∈ K, |N_S(x)|=1. Conversely, suppose that there is a (1,1)-partition V=(S,K) of G such that either for every x∈ K, |N_S(x)|=0, or for every x∈ K, |N_S(x)|=1. If K=∅, then G is (1,0) and then G is well-covered. Hence we assume K≠∅. If for every x∈ K, |N_S(x)|=0, then every maximal independent set consists of all the vertices of S and exactly one vertex v ∈ K. If for every x∈ K, |N_S(x)|=1, then every maximal independent set is either I=S, or I={x}∪ (S∖ N_S(x)) for some x ∈ K. Since |N_S(x)|=1 we have |I|=1+|S|-1=|S|, and hence G is a (1,1)-well-covered graph.

(1,1)wc-g can be solved in polynomial time. Since we can check in polynomial time whether G is a (1,1)-graph <cit.>, and one can enumerate all (1,1)-partitions of a split graph in polynomial time, we can solve the (1,1)wc-g problem in polynomial time.

The next lemma shows that (1,1)-well-covered graphs can be recognized from their degree sequences. G is a (1,1)-well-covered graph if and only if there is a positive integer k such that G is a graph with a (1,1)-partition V=(S,K) where |K|=k, such that the degree sequence of V is either (k,k,k,…,k, i_1,i_2,…,i_s, 0,0,0,…,0) with ∑_j=1^s i_j = k, or (k-1,k-1,k-1,…,k-1, 0,0,0,…,0), where the subsequences k,…,k (resp. k-1,…,k-1) have length k. Let G be a (1,1)-well-covered graph. Then G admits a (1,1)-partition V=(S,K) where k:=|K|, k≥ 0. If k=0, then the degree sequence is (0,0,0,…,0). If k≥ 1, then by Lemma <ref> either for every x∈ K, |N_S(x)|=0, or for every x∈ K, |N_S(x)|=1. If for every x∈ K, |N_S(x)|=0, then the degree sequence of G is (k-1,k-1,k-1,…,k-1, 0,0,0,…,0).
If for every x∈ K, |N_S(x)|=1, then the degree sequence of G is (k,k,k,…,k, i_1,i_2,…,i_s, 0,0,0,…,0), with ∑_j=1^s i_j = k. Suppose that there is a positive integer k such that G is a graph with (1,1)-partition V=(S,K) where |K|=k, with degree sequence either (k,k,k,…,k, i_1,i_2,…,i_s, 0,0,0,…,0), or (k-1,k-1,k-1,…,k-1, 0,0,0,…,0), such that ∑_j=1^s i_j = k. If the degree sequence of G is (k,k,k,…,k, i_1,i_2,…,i_s, 0,0,0,…,0), then the vertices of K are adjacent to k-1 vertices of K and exactly one vertex of S, since the vertices with degree i_1,i_2,…,i_s have degree at most k and the vertices with degree 0 are isolated. If the degree sequence of G is (k-1,k-1,k-1,…,k-1, 0,0,0,…,0), then the vertices of K are adjacent to k-1 vertices of K and none of S, and the vertices with degree 0 are isolated. By Lemma <ref> we have that G is a well-covered graph.

Ravindra <cit.> gave the following characterization of (2,0)-well-covered graphs. Let G be a connected graph. G is a (2,0)-well-covered graph if and only if G contains a perfect matching F such that for every edge e = uv in F, G[N(u)∪ N(v)] is a complete bipartite graph. We now prove that Proposition <ref> leads to a polynomial-time algorithm. (2,0)wc-g can be solved in polynomial time. Assume that G is connected and consider the weighted graph (G,ω) with ω : E(G) →{0,1} satisfying ω(uv)=1 if G[N(u)∪ N(v)] is a complete bipartite graph, and 0 otherwise. By Proposition <ref>, G is well-covered if and only if (G,ω) has a weighted perfect matching with weight at least n/2, and this can be decided in polynomial time <cit.>. (1,2)wc-g can be solved in polynomial time. We can find a (1,2)-partition of a graph G (if such a partition exists) in polynomial time <cit.>. After that, we use the algorithm for wc-(1,ℓ)g given by Theorem <ref>.

Below we summarize the cases for which we have shown that wc-(r,ℓ)g or (r,ℓ)wc-g can be solved in polynomial time. (r,ℓ)wc-g with (r,ℓ) ∈ {(0,1), (0,2), (1,0), (1,1), (1,2), (2,0)} and wc-(r,ℓ)g with r ∈ {0,1} or (r,ℓ) = (2,0) can be solved in polynomial time. The first part follows from Corollary <ref>, Lemma <ref>, Corollary <ref>, Lemma <ref>, and Lemma <ref>, respectively. The second part follows from Theorem <ref>, and Lemma <ref> together with Fact <ref>.

§.§ coNP-complete cases for wc-(r,ℓ)g

We note that the Well-Covered Graph instance G constructed in the reduction of Chvátal and Slater <cit.> is a (2,1)-graph, directly implying that wc-(2,1)g is coNP-complete. Indeed, Chvátal and Slater <cit.> take a 3-sat instance I=(U,C)=({u_1,u_2,u_3,…,u_n}, {c_1,c_2,c_3,…,c_m}), and construct a Well-Covered Graph instance G=(V,E) with V = {u_1,u_2,u_3,…,u_n, u̅_1,u̅_2,u̅_3,…,u̅_n, c_1,c_2,c_3,…,c_m} and E = {xc_j : literal x occurs in clause c_j} ∪ {u_iu̅_i : 1 ≤ i ≤ n} ∪ {c_ic_j : 1≤ i<j≤ m}. Note that {c_1,…,c_m} is a clique, and that {u_1,u_2,u_3,…,u_n} and {u̅_1,u̅_2,u̅_3,…,u̅_n} are independent sets. Hence, G is a (2,1)-graph. An illustration of this construction can be found in Figure <ref>. This discussion can be summarized as follows. wc-(2,1)g is coNP-complete. As (2,1)-graphs can be recognized in polynomial time <cit.>, we obtain the following. (2,1)wc-g is coNP-complete.

§.§ NP-hard cases for (r,ℓ)wc-g

Now we prove that (0,3)wc-g is NP-complete. For this purpose, we slightly modify an NP-completeness proof of Stockmeyer <cit.>.
Stockmeyer's <cit.> NP-completeness proof of 3-coloring considers a 3-sat instance I=(U,C)=({u_1,u_2,u_3,…,u_n}, {c_1,c_2,c_3,…,c_m}), and constructs a 3-coloring instance G=(V,E) with V = {u_1,u_2,u_3,…,u_n, u̅_1,u̅_2,u̅_3,…,u̅_n} ∪ {v_1[j],v_2[j],v_3[j],v_4[j],v_5[j],v_6[j] : j∈{1,2,3,…,m}} ∪ {t_1,t_2}, and E = {u_iu̅_i : i∈{1,2,3,…,n}} ∪ {v_1[j]v_2[j], v_2[j]v_4[j], v_4[j]v_1[j], v_4[j]v_5[j], v_5[j]v_6[j], v_6[j]v_3[j], v_3[j]v_5[j] : j∈{1,2,3,…,m}} ∪ {v_1[j]x, v_2[j]y, v_3[j]z : c_j=(x,y,z)} ∪ {t_1u_i, t_1u̅_i : i∈{1,2,3,…,n}} ∪ {t_2v_6[j] : j∈{1,2,3,…,m}}; see Figure <ref>(a).

(0,3)wc-g is NP-complete. As by Theorem <ref> the Well-Covered Graph problem can be solved in polynomial time on (0,3)-graphs, by Fact <ref> (0,3)wc-g is in NP. Let I=(U,C) be a 3-sat instance. We produce, in time polynomial in the size of I, a (0,3)wc-g instance H such that I is satisfiable if and only if H is (0,3)-well-covered. Let G=(V,E) be the graph of <cit.> obtained from I, and let G' be the graph obtained from G by adding to V a vertex x_uv for every edge uv of G not belonging to a triangle, and by adding to E the edges ux_uv and vx_uv; see Figure <ref>(b). Finally, we define H to be the complement of G'. Note that, by <cit.>, I is satisfiable if and only if G is 3-colorable. Since x_uv is adjacent to only two vertices of G, which receive two different colors, clearly G is 3-colorable if and only if G' is 3-colorable. Hence, I is satisfiable if and only if H is a (0,3)-graph. We prove next that I is satisfiable if and only if H is a (0,3)-well-covered graph. Suppose that I is satisfiable. Then, since H is a (0,3)-graph, every maximal independent set of H has size 3, 2, or 1. If there were a maximal independent set I in H of size 1 or 2, then I would be a maximal clique of G' of size 1 or 2. This contradicts the construction of G', since every maximal clique of G' is a triangle. Therefore, H is well-covered. Suppose that H is (0,3)-well-covered. Then G' is 3-colorable, so G is also 3-colorable. Thus, by <cit.>, I is satisfiable.

We next prove that (3,0)wc-g is NP-hard. For this, we again use the proof of Stockmeyer <cit.>, together with the following theorem. Let G=(V,E) be an n-vertex graph, V={v_1,v_2,v_3,…,v_n}, and let H be obtained from G such that V(H)=V∪{u_1,u_2,u_3,…,u_n} and E(H)=E∪{v_iu_i : i∈{1,2,3,…,n}}. Then H is a well-covered graph where every maximal independent set has size n. Observe that every maximal independent set I of H has a subset I_G = I ∩ V. Let U⊆{1,2,3,…,n} be the set of indices i such that v_i∈ I. Since I is maximal, the set {u_i : i∈{1,2,3,…,n}∖ U} must be contained in I, so |I|=n.

(3,0)wc-g is NP-hard. Let I=(U,C) be a 3-sat instance; let G=(V,E) be the graph obtained from I in Stockmeyer's <cit.> NP-completeness proof for 3-coloring; and let H be the graph obtained from G by the transformation described in Proposition <ref>. We prove that I is satisfiable if and only if H is a (3,0)-well-covered graph. Suppose that I is satisfiable. Then by <cit.> we have that G is 3-colorable. Since a vertex v ∈ V(H)∖ V(G) has just one neighbor, there are 2 colors left for v to extend a 3-coloring of G, and so H is a (3,0)-graph. Hence, by Proposition <ref>, H is a (3,0)-well-covered graph. Suppose that H is a (3,0)-well-covered graph. Then we have that G is a (3,0)-graph. By <cit.>, I is satisfiable.
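The pendant-vertex construction of the proposition above is easy to experiment with. The following sketch is our own illustration (networkx and the node-naming scheme are our choices, not the authors'): it attaches a pendant to every vertex of an arbitrary graph and verifies by brute force that every maximal independent set of the result has size n.

import networkx as nx
from itertools import combinations

def add_pendants(G):
    # The construction of the proposition: one pendant ('p', v)
    # attached to each original vertex v.
    H = G.copy()
    for v in list(G.nodes):
        H.add_edge(v, ('p', v))
    return H

def maximal_is_sizes(H):
    # Brute-force enumeration of the sizes of all maximal
    # independent sets of H (fine for small graphs).
    V, sizes = list(H.nodes), set()
    for r in range(1, len(V) + 1):
        for T in combinations(V, r):
            S = set(T)
            if any(H.has_edge(u, v) for u, v in combinations(S, 2)):
                continue
            if all(any(H.has_edge(u, v) for u in S) for v in set(V) - S):
                sizes.add(len(S))
    return sizes

G = nx.gnp_random_graph(6, 0.5, seed=1)    # any 6-vertex graph works
print(maximal_is_sizes(add_pendants(G)))   # {6}: well-covered, as claimed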
Note that Theorem <ref> combined with Lemma <ref> does not imply that (1,3)wc-g is NP-complete. (1,3)wc-g is NP-complete. As by Theorem <ref> the Well-Covered Graph problem can be solved in polynomial time on (1,3)-graphs, by Fact <ref> (1,3)wc-g is in NP. Let I=(U,C) be a 3-sat instance. Without loss of generality, I has more than two clauses. We produce a (1,3)wc-g instance H, in time polynomial in the size of I, such that I is satisfiable if and only if H is (1,3)-well-covered. Let G=(V,E) be the graph of Stockmeyer <cit.> obtained from I (see Figure <ref>(a)), and let H be the graph obtained from G̅ (the complement of the graph G) by adding one pendant vertex p_v for each vertex v of G̅. Note that V(H)=V(G)∪{p_v : v∈ V(G)}, E(H)=E(G̅)∪{p_vv : v∈ V(G)}, and N_H(p_v)={v}.

First suppose that I is satisfiable. Then by <cit.>, G is a (3,0)-graph, and so G̅ is a (0,3)-graph with a partition into cliques V(G̅)=(K_G^1,K_G^2,K_G^3). Thus it follows that (S={p_v : v∈ V(G)}, K_G^1, K_G^2, K_G^3) is a (1,3)-partition of V(H). In addition, from Proposition <ref> and by the construction of H, H is a well-covered graph. Hence H is (1,3)-well-covered. Conversely, suppose that H is (1,3)-well-covered, and let V(H)=(S,K^1,K^2,K^3) be a (1,3)-partition for H. Then we claim that no vertex p_v∈ V(H)∖ V(G) belongs to K^i, i∈{1,2,3}. Indeed, suppose for contradiction that p_v∈ K^i for some i∈{1,2,3}. Then K^i ⊆{p_v, v}. Hence, H∖ K^i is a (1,2)-graph and, taking complements, G∖{v} is an induced subgraph of a (2,1)-graph. But by the construction of G, G∖{v} (for any v∈ V(G)) contains at least one 2K_3 (that is, two vertex-disjoint copies of K_3) as an induced subgraph, which is a contradiction, given that 2K_3 is clearly a forbidden subgraph for (2,1)-graphs. Therefore, {p_v : v∈ V(G)}⊆ S, and since {p_v : v∈ V(G)} is a dominating set of H, S={p_v : v∈ V(G)}. Thus, G̅ is a (0,3)-graph with partition V(G̅)=(K^1,K^2,K^3), and therefore G is a (3,0)-graph, i.e., a 3-colorable graph. Therefore, by <cit.>, I is satisfiable.

If r≥ 3 and ℓ=0, then (r,ℓ)wc-g is NP-hard. If r∈{0,1} and ℓ≥ 3, then (r,ℓ)wc-g is NP-complete. (r,ℓ)wc-g is NP-hard in all of these cases by combining Theorem <ref>, and Lemmas <ref>, <ref> and <ref>. For r∈{0,1} and ℓ≥ 3, the Well-Covered Graph problem can be solved in polynomial time on (r,ℓ)-graphs, so by Fact <ref> (r,ℓ)wc-g is in NP.

Below we summarize the cases for which we have shown that wc-(r,ℓ)g or (r,ℓ)wc-g is computationally hard. The following classification holds:

* wc-(r,ℓ)g with r≥ 2 and ℓ≥ 1 are coNP-complete;
* (0,ℓ)wc-g and (1,ℓ)wc-g with ℓ≥ 3 are NP-complete;
* (2,1)wc-g and (2,2)wc-g are coNP-complete;
* (r,ℓ)wc-g with r≥ 0 and ℓ≥ 3 is NP-hard;
* (r,ℓ)wc-g with r≥ 3 and ℓ≥ 0 is NP-hard;
* (r,ℓ)wc-g with r≥ 2 and ℓ≥ 1 is coNP-hard.

Statement 1 follows from Proposition <ref> and Theorem <ref>(i). Statement 2 follows from Corollary <ref>. Statement 3 follows from Statement 1, Facts <ref> and <ref>, and the fact that recognizing (r,ℓ)-graphs is in P if max{r,ℓ}≤ 2 <cit.>. Statement 4 follows from Statement 2 and Theorem <ref>(ii)-(iii). Statement 5 follows from Lemma <ref> and Theorem <ref>(ii)-(iii). Finally, Statement 6 follows from Corollary <ref> and Theorem <ref>(ii)-(iii).

§ PARAMETERIZED COMPLEXITY OF THE PROBLEMS

In this section we focus on the parameterized complexity of the Well-Covered Graph problem, with special emphasis on the case where the input graph is an (r,ℓ)-graph. Recall that the results presented in Section 2 show that wc-(r,ℓ)g is para-coNP-complete when parameterized by r and ℓ. Thus, additional parameters should be considered.
Henceforth we let α (resp. ω) denote the size of a maximum independent set (resp. maximum clique) in the input graph G for the problem under consideration. Note that wc-(r,ℓ)g parameterized by r, ℓ, and ω generalizes wc-(r,0)g, whose complexity was left open in the previous sections. Therefore, we focus on the complexity of wc-(r,ℓ)g parameterized by r, ℓ, and α, and on the complexity of the natural parameterized version of Well-Covered Graph, defined as follows:

k-Well-Covered Graph
Input: A graph G and an integer k.
Parameter: k.
Question: Does every maximal independent set of G have size exactly k?

The next lemma provides further motivation for the study of the wc-(0,ℓ)g problem, as it shows that k-Well-Covered Graph (on general graphs) can be reduced to the wc-(0,ℓ)g problem parameterized by ℓ. The k-Well-Covered Graph problem can be fpt-reduced to the wc-(0,ℓ)g problem parameterized by ℓ. Consider an arbitrary input graph G with vertices u_1,…,u_n. First, we find an arbitrary maximal (with respect to set-inclusion) independent set I in G. Without loss of generality we may assume that |I|=k and I={u_1,…,u_k}. Let ℓ=k+1. We construct a (0,ℓ)-graph G' with vertex set {v_i,j : i ∈{1,…,ℓ}, j ∈{1,…,n}} as follows:

* For all i ∈{1,…,ℓ}, add edges to make V_i:={v_i,j : j ∈{1,…,n}} into a clique.
* For all j ∈{1,…,n}, add edges to make W_j:={v_i,j : i ∈{1,…,ℓ}} into a clique.
* For all pairs of adjacent vertices u_a, u_b in G, add edges between v_i,a and v_j,b for all i,j ∈{1,…,ℓ} (so that V_a is complete to V_b).

Note that the sets V_i partition G' into ℓ cliques, so G' is indeed a (0,ℓ)-graph, where ℓ=k+1. The graph G' has a maximal independent set of size k, namely {v_1,1,…,v_k,k}, so G' is well-covered if and only if every maximal independent set in G' has size exactly k. Every maximal independent set in G' has at most one vertex in any set V_i and at most one vertex in any set W_j, since V_i and W_j are cliques. As there are ℓ=k+1 sets V_i, it follows that every independent set in G' contains at most k+1 vertices. If G' contains an independent set {v_i_1,j_1,…,v_i_x,j_x} for some x, then {u_j_1,…,u_j_x} is an independent set in G. If G contains an independent set {u_j_1,…,u_j_x} for some x, then {v_1,j_1,…,v_min(x,k+1),j_min(x,k+1)} is an independent set in G'. Therefore G contains a maximal independent set of size smaller than k if and only if G' contains a maximal independent set of size smaller than k, and G contains a (not necessarily maximal) independent set of size at least k+1 if and only if G' contains a maximal independent set of size exactly k+1. It follows that G' is well-covered if and only if every maximal independent set of G has size exactly k. As ℓ=k+1, this completes the proof.

Recall that the Well-Covered Graph problem is coNP-complete <cit.>. In order to analyze the parameterized complexity of the problem, we will need the following definition. The class coW[2] is the class of all parameterized problems whose complement is in W[2]. For an overview of parameterized complexity classes, see <cit.>. We are now ready to show the next result. The wc-(0,ℓ)g problem parameterized by ℓ is coW[2]-hard. Red-Blue Dominating Set (RBDS) is a well-known W[2]-complete problem <cit.>, which consists of determining whether a given bipartite graph G = (R ∪ B, E) admits a set D⊆ R of size k (the parameter) such that D dominates B (that is, every vertex in B has a neighbor in D). To show the coW[2]-hardness of our problem, we present an fpt-reduction from Red-Blue Dominating Set to the problem of determining whether a given (0,ℓ)-graph is not well-covered, where ℓ=k+1.
From an instance (G,k) of RBDS we construct a (0,ℓ)-graph G' as follows. Replace the set R={r_1,r_2,…,r_m} by k copies: R_1={r_1^1,r_2^1,…,r_m^1}, R_2={r_1^2,r_2^2,…,r_m^2}, …, R_k={r_1^k,r_2^k,…,r_m^k}, where each new vertex has the same neighborhood as the corresponding vertex did in G. Add edges to make B, as well as each R_i for 1≤ i ≤ k, induce a clique. For each i ∈{1,…,k}, create a vertex s_i, and add all possible edges between s_i and the vertices in R_i. Let G' be the resulting graph. Note that the vertex set of G' can be partitioned into ℓ=k+1 cliques: B, R_1∪{s_1}, R_2∪{s_2}, …, R_k∪{s_k}. Clearly, for every b∈ B, the set {s_1,s_2,…,s_k}∪{b} is an independent set of G' of size k+1. Note that such an independent set is maximum, as it contains one vertex from each of the k+1 cliques that partition V(G'). In addition, any maximal independent set of G' has size at least k, since every maximal independent set contains either s_i or a vertex of R_i. At this point, we claim that G has a set D⊆ R of size k which dominates B if and only if G' has a maximal independent set of size k (i.e., G' is not well-covered). If D = {r_i_1, r_i_2, …, r_i_k} is a subset of R of size k which dominates B in G, then D'={r_i_1^1, r_i_2^2, …, r_i_k^k} is a maximal independent set of G', implying that G' is not well-covered. Conversely, if G' is not well-covered then there exists in G' a maximal independent set D' of size k. Note that D'∩ B = ∅ and each vertex in B has at least one neighbor in D', as otherwise D' would not be a maximal independent set of size k. Therefore, by letting D be the set of vertices in R that have copies in D'∩(R_1∪ R_2∪…∪ R_k), we find that D is a subset of R of size at most k which dominates B in G.

From the previous theorem we immediately obtain the following corollaries. The k-Well-Covered Graph problem is coW[2]-hard. This follows immediately from Lemma <ref> and Theorem <ref>. Unless FPT = coW[2], the wc-(r,ℓ)g problem cannot be solved in time f(α+ℓ) n^g(r) for any computable functions f and g. This follows from the fact that an algorithm running in time f(α+ℓ) n^g(r) would be an FPT-algorithm for wc-(0,ℓ)g parameterized by ℓ, and from the coW[2]-hardness of the problem demonstrated in Theorem <ref>.

In contrast to Corollary <ref>, Lemma <ref> shows that the wc-(r,ℓ)g problem can be solved in time 2^rα n^O(ℓ). The wc-(r,ℓ)g problem can be solved in time 2^rα n^O(ℓ). In particular, it is FPT when ℓ is fixed and r, α are parameters. Note that each of the r independent sets S^1,…,S^r of the given partition of V(G) must have size at most α. On the other hand, any maximal independent set of G contains at most one vertex in each of the ℓ cliques. The algorithm exhaustively constructs all maximal independent sets of G as follows: we start by guessing a subset of ⋃_i=1^r S^i, and then choose at most one vertex in each clique. For each choice, we just have to verify whether the constructed set is a maximal independent set, and then check that all the constructed maximal independent sets have the same size. The claimed running time follows. In fact, in the statement of the lemma, one could replace rα with ∑_1≤ i≤ r |S^i|, which yields a stronger result. Although wc-(1,ℓ)g parameterized by ℓ is coW[2]-hard (see Theorem <ref>), Theorem <ref> shows that the problem is in XP. The wc-(1,ℓ)g problem can be solved in time n^O(ℓ). In other words, it is in XP when parameterized by ℓ. This follows from Theorem <ref> by considering ℓ to not be a constant.
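The construction in the coW[2]-hardness proof above is mechanical to implement. Here is a sketch — again our own rendering, with networkx and the tuple-based node names as assumptions — that builds the (0,k+1)-graph G' from a red-blue instance.

import networkx as nx
from itertools import combinations

def rbds_to_wc_0l(G, R, B, k):
    # Build G': k clique copies R_i of R, each with an apex s_i joined
    # to all of R_i, B turned into a clique, and the original red-blue
    # adjacencies repeated inside every copy.
    Gp = nx.Graph()
    Gp.add_nodes_from(B)
    Gp.add_edges_from(combinations(B, 2))             # B becomes a clique
    for i in range(k):
        Ri = [('r', i, r) for r in R]
        Gp.add_edges_from(combinations(Ri, 2))        # R_i is a clique
        Gp.add_edges_from((('s', i), v) for v in Ri)  # s_i complete to R_i
        for r in R:
            Gp.add_edges_from((('r', i, r), b) for b in G.neighbors(r))
    return Gp  # a (0, k+1)-graph: cliques B, R_1+{s_1}, ..., R_k+{s_k}

By the argument above, the returned graph has a maximal independent set of size k — i.e., it is not well-covered — exactly when some k red vertices dominate B.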
Table <ref> summarizes the results presented so far. Note that, by Ramsey's Theorem <cit.>, when both ω and α are parameters the input graph itself is a trivial kernel.

§.§ Taking the neighborhood diversity as the parameter

Neighborhood diversity is a structural parameter based on a special way of partitioning a graph into independent sets and cliques. Therefore, it seems a natural parameter to consider for our problem, since an (r,ℓ)-partition of a graph G is also a partition of its vertex set into cliques and independent sets. The neighborhood diversity nd(G) of a graph G=(V,E) is the minimum integer t such that V can be partitioned into t sets V_1,…,V_t where for every v∈ V(G) and every i∈{1,…,t}, either v is adjacent to every vertex in V_i or it is adjacent to none of them. Note that each part V_i of G is either a clique or an independent set. Another natural parameter to consider is the vertex cover number, because well-covered graphs can be equivalently defined as graphs in which every minimal vertex cover has the same size. However, neighborhood diversity is stronger than vertex cover, in the sense that every class of graphs with bounded vertex cover number is also a class of graphs with bounded neighborhood diversity, but the reverse is not true <cit.>. Thus, for our analysis, it is enough to consider the neighborhood diversity as the parameter. In addition, neighborhood diversity is a graph parameter that captures more precisely than the vertex cover number the property that two vertices with the same neighborhood are "equivalent". It is worth mentioning that an optimal neighborhood diversity decomposition of a graph G can be computed in time O(n^3); see <cit.> for more details.

The Well-Covered Graph problem is FPT when parameterized by neighborhood diversity. Given a graph G, we first obtain a neighborhood partition of G with minimum width using the polynomial-time algorithm of Lampis <cit.>. Let t := nd(G) and let V_1,…,V_t be the partition of V(G). As we can observe, for any pair u,v of non-adjacent vertices belonging to the same part V_i, if u is in a maximal independent set S then v also belongs to S, as otherwise S could not be maximal. On the other hand, if N[u] = N[v], then for any maximal independent set S_u such that u∈ S_u there exists another maximal independent set S_v such that S_v = (S_u∖{u})∪{v}. Hence, we can contract each part V_i that is an independent set into a single vertex v_i with weight τ(v_i)=|V_i|, and contract each part V_i that is a clique into a single vertex v_i with weight τ(v_i)=1, in order to obtain a graph G_t with |V(G_t)|=t, where the weight of a vertex v_i of G_t means that any maximal independent set of G uses either none or exactly τ(v_i) vertices of V_i. At this point, we just need to analyze whether all maximal independent sets of G_t have the same weight (sum of the weights of its vertices), which can be done in time 2^t n^O(1). The Well-Covered Graph problem is FPT when parameterized by the vertex cover number n-α.

§.§ Taking the clique-width as the parameter

In the 90's, Courcelle proved that for every graph property Π that can be formulated in monadic second order logic (MSOL_1), there is an f(k) n^O(1) algorithm that decides if a graph G of clique-width at most k satisfies Π (see <cit.>), provided that a k-expression is given. LinEMSOL is an extension of MSOL_1 which allows searching for sets of vertices which are optimal with respect to some linear evaluation functions. Courcelle et al.
<cit.> showed that every graph problem definable in LinEMSOL is linear-time solvable on graphs with clique-width at most k (i.e., FPT when parameterized by clique-width) if a k-expression is given as input. Using a result of Oum <cit.>, the same result follows even if no k-expression is given. The Well-Covered Graph problem is FPT when parameterized by clique-width. Given S⊆ V(G), first observe that the property "S is a maximal independent set" is MSOL_1-expressible. Indeed, we can construct a formula φ(G,S) such that "S is a maximal independent set" ⇔φ(G,S) as follows: [∄ u,v ∈ S : edge(u,v)] ∧ [∄ S' : (S⊊ S') ∧ (∄ x,y ∈ S' : edge(x,y))]. Since φ(G,S) is an MSOL_1-expression, the problem of finding goal(S) : φ(G,S) for goal∈{max, min} is definable in LinEMSOL. Thus we can find max(S) and min(S) satisfying φ(G,S) in time f(cw(G)) n^O(1). Finally, G is well-covered if and only if |max(S)|=|min(S)|. The Well-Covered Graph problem is FPT when parameterized by treewidth. This follows from the fact that graphs with treewidth bounded by k have clique-width bounded by a function of k <cit.>. For any fixed r and ℓ, the (r,ℓ)-Well-Covered Graph problem is FPT when parameterized by clique-width. As r and ℓ are constants, the problem of determining whether G is an (r,ℓ)-graph is also MSOL_1-expressible. Note that, since for every graph G we have cw(G)≤ nd(G)+1 <cit.>, Lemma <ref> is also a corollary of Theorem <ref>. Nevertheless, the algorithm derived from the proof of Lemma <ref> is much simpler and faster than the one that follows from the meta-theorem of Courcelle et al. <cit.>.

§ FURTHER RESEARCH

Concerning the complexity of the (r,ℓ)wc-g and wc-(r,ℓ)g problems, note that the only remaining open cases are wc-(r,0)g for r≥ 3 (see the tables in Section <ref>). We do not even know if there exists some integer r≥ 3 such that wc-(r,0)g is coNP-complete, although we conjecture that this is indeed the case. As another avenue for further research, it would be interesting to provide a complete characterization of well-covered tripartite graphs, as has been done for bipartite graphs <cit.>. So far, only partial characterizations exist <cit.>.

Acknowledgement. We would like to thank the anonymous reviewers for helpful remarks that improved the presentation of the manuscript.
http://arxiv.org/abs/1705.09177v2
{ "authors": [ "Sancrey R. Alves", "Konrad K. Dabrowski", "Luerbio Faria", "Sulamita Klein", "Ignasi Sau", "Uéverton S. Souza" ], "categories": [ "cs.DS", "cs.CC", "05C85", "G.2.2; F.2.2" ], "primary_category": "cs.DS", "published": "20170525134733", "title": "On the (parameterized) complexity of recognizing well-covered (r,l)-graphs" }
Department of Physics, Renmin University of China, Beijing 100872, China
Beijing National Laboratory for Condensed Matter Physics, and Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China
School of Physical Sciences, University of Chinese Academy of Sciences, Beijing, 100190, China
Physics Department, University of Michigan, Ann Arbor, MI 48109, USA
Institut für Theoretische Physik und Astrophysik, Universität Würzburg, 97074 Würzburg, Germany
Beijing National Laboratory for Condensed Matter Physics, and Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China
School of Physical Sciences, University of Chinese Academy of Sciences, Beijing, 100190, China
Department of Physics, Renmin University of China, Beijing 100872, China
We report the discovery of a topological Mott insulator in strongly-correlated Dirac semimetals. Such an interaction-driven topological state has been theoretically proposed but not yet observed with unbiased large scale numerical simulations. In our model, interactions between electrons are mediated by Ising spins in a transverse field. The results indicate that the topological mass term is dynamically generated and the resulting quantum phase transition belongs to the (2+1)D N=8 chiral Ising universality class. These conclusions stem from large scale sign-free quantum Monte Carlo simulations. 71.10.Fd, 02.70.Ss, 05.30.Rt, 11.30.Rd Dynamical Generation of Topological Masses in Dirac Fermions Zhong-Yi Lu December 30, 2023 ============================================================

Introduction. The combination of the richness of quantum many-body effects and the elegance of topological physics <cit.> has revealed remarkable phenomena and new principles of physics, such as the fractional quantum Hall effect <cit.> and topological order <cit.>. Among these discoveries, one intriguing example is interaction-driven topological states, where strong correlations among particles convert a conventional state of matter into a topological one. One pathway towards such states is to utilize the phenomenon of spontaneous symmetry breaking <cit.>, i.e., in a system where nontrivial topological structures are prohibited by symmetry, strong interactions can spontaneously break the symmetry and thus stabilize a topologically nontrivial ground state. As proposed in Ref. <cit.>, such a phenomenon can arise in a 2D Dirac semimetal (DSM) through a quantum phase transition that spontaneously breaks the time-reversal or the spin-rotational symmetry, resulting in an interaction-driven, quantum-Hall or quantum-spin-Hall (QSH), topological insulator, dubbed topological Mott insulators (TMI).

Although the general principle of the TMI has been well understood, finding such a state via unbiased theoretical/numerical methods turns out to be challenging due to the strong-coupling nature of the problem and the presence of competing orders. Extensive numerical efforts on interacting DSMs <cit.> report negative results, suggesting that in all explored parameter regimes, topologically-trivial competing states always have lower energy and thus the proposed TMI states cannot be stabilized. A successful alternative came later: by substituting the DSM with a semimetal with a quadratic band crossing <cit.>, an interaction-driven quantum Hall state was observed numerically <cit.>. Furthermore, experimental realization of such a scenario has very recently been proposed in functionalized α-Fe_2O_3 nanosheets <cit.>.
However, whether a TMI can emerge from a DSM without the assistance of a quadratic band crossing point, as in the original proposal <cit.>, still remains an open question. It is also worthwhile to highlight that between the two possible types of TMIs, quantum Hall and quantum spin Hall <cit.>, only the former has been observed in numerical studies <cit.>. Hence, to find a time-reversal invariant TMI is one key objective of this study.

On the other hand, in a seemingly unrelated research area, recent developments in sign-problem-free quantum Monte Carlo (QMC) approaches for itinerant fermions coupled to fluctuating bosonic fields open the door to investigating many intriguing strongly-correlated systems, such as antiferromagnetic-fluctuation-mediated superconductivity in metals <cit.>, nematic quantum critical points in itinerant systems <cit.>, as well as non-Fermi liquids in itinerant quantum critical regions <cit.>. The strong-coupling nature of these systems makes analytical approaches challenging <cit.>, and hence sign-problem-free QMC solutions pave a new avenue towards quantitative understanding of these systems. These QMC approaches also offer a new platform for studying strongly-correlated topological states, and have recently been utilized to study topological phase transitions in DSMs <cit.> and exotic states with topological order <cit.>.

In this Letter, we study interaction-driven topological Mott insulators in Dirac semimetals with the aforementioned QMC approach. Instead of bare interactions, our model utilizes fluctuating bosonic fields to mediate interactions between fermions. At the level of the effective field theory, the model is equivalent to the originally proposed TMI model in Ref. <cit.>, except for a minor difference in symmetry that is irrelevant to topology. For the study of the TMI, our modified model shows two advantages: (1) other competing orders are strongly suppressed, allowing a clear TMI phase; (2) the sign problem is avoided and thus the model can be solved via QMC techniques. Compared to previous exact diagonalization studies <cit.>, the QMC approach can access much larger system sizes and reveals detailed information about the critical properties associated with the interaction-driven topological transition. Our QMC results show a continuous quantum phase transition from a DSM state to a QSH-type TMI phase, with the critical scaling at the quantum critical point agreeing nicely with the N=8 chiral Ising universality <cit.>.

Model and Method. Our model describes Dirac fermions coupled to a transverse-field Ising model. As illustrated in Fig. <ref>(a), fermions in this model reside on the lattice sites (disks), while Ising spins are placed on each dual lattice site (squares) at the plaquette centers. The Hamiltonian consists of three parts,

H = H_Fermion + H_Ising + H_Coupling,
H_Fermion = -t∑_⟨ ij ⟩σ ( e^+iσϕ c_iσ^† c_jσ + e^-iσϕ c_jσ^† c_iσ ),
H_Ising = -J∑_⟨ pq ⟩ s_p^z s_q^z - h∑_p s_p^x,
H_Coupling = ∑_⟨⟨ ij ⟩⟩σ ξ_ij s_p^z ( c_iσ^† c_jσ + c_jσ^† c_iσ ),

where indices i,j represent fermion sites and p,q label the dual lattice sites for the Ising spins s^z. Fermion spins are labeled by the subindex σ. H_Fermion describes the nearest-neighbor (NN) hopping for fermions, which contains a staggered flux ±4ϕ for each plaquette. Here, we require spin-up and spin-down fermions to carry opposite flux patterns, to preserve the time-reversal symmetry. The Ising spins are governed by H_Ising, which describes a ferromagnetic (J>0) transverse-field Ising model.
The last term H_Coupling couples the Ising spins with the next-nearest-neighbor (NNN) fermion hoppings, where the coupling constant ξ_ij=±ξ t has a staggered sign structure alternating between neighboring plaquettes, i.e., + (-) for solid (dashed) NNN bonds, as illustrated in Fig. <ref>(a). Up to a basis change, the low-energy physics of this model can be described by the following effective field theory: S=∑_σ∫ d𝐫 dt Ψ̅_σ( iγ^μ∂_μ + gσφγ^3γ^5 )Ψ_σ + S_φ, where γ^μ are gamma matrices and φ is a bosonic field governed by the φ^4-theory S_φ. Here, σ=±1 (up or down) is the fermion spin index, and g is the coupling constant for the boson-fermion interactions. This effective field theory is in strong analogy to the model proposed early on in Ref. <cit.>, provided that we decouple the fermion-fermion interactions with a Hubbard-Stratonovich auxiliary field, as appropriate in the limit h/J →∞ <cit.>. It is also worthwhile to emphasize that in our model, the fermion spins only preserve a U(1) symmetry, while the model in Ref. <cit.> has an SU(2) spin symmetry. This difference has little effect on topological properties, but as discussed below it changes the critical scaling as well as the finite-temperature phase diagram.

As in the original model of the TMI, our Hamiltonian also contains a symmetry which prohibits nontrivial topology. It is easy to verify that our Hamiltonian is invariant under the following Z_2 transformation, P̂=R̂_x(π)×T̂_A→ B, where R̂_x(π) stands for the π-rotation about the x-axis for both Ising and fermion spins, and T̂_A→ B represents the space translation from sublattice A to B inside a unit cell. Because the topological index (the spin Chern number) flips sign under this transformation, this symmetry requires the index to vanish, and thus any (quantum spin Hall) topological insulator is prohibited, unless this Z_2 symmetry is broken spontaneously.

To explore the ground-state phase diagram of this model, we employ the projector quantum Monte Carlo (PQMC) method <cit.>, with details presented in Sec. I.A of the supplemental material (SM) <cit.>. In addition to the usual local updates of Ising spins, both Wolff <cit.> and geometric cluster updates <cit.> are applied in our simulations, as shown in Sec. I.B of the SM <cit.>. Our QMC simulations are free of the sign problem at and away from half filling <cit.>. In this Letter, we focus on the coupling strength 0≤ξ≤1 with J=t=1; the system sizes simulated in this work are L=4,6,8,10,12,14, with N=L^2 unit cells and N_s=2L^2 lattice sites.

Ground state phase diagram. The ground state phase diagram in the ξ-h plane is shown in Fig. <ref>(c). Several regimes in the phase diagram can be solved exactly. At ξ=0, the fermions and Ising spins decouple: the fermions form a non-interacting Dirac semimetal, and the Ising spins undergo a paramagnetic to ferromagnetic (PM-FM) quantum phase transition at h_c=3.046(3) in the 3D Ising universality class <cit.>. At h=0, quantum fluctuations of the Ising spins vanish and the Ising spins form a fully-polarized FM state. As a result, the fermions turn into a non-interacting quantum-spin-Hall topological insulator, whose Hamiltonian is H_Fermion+H_Coupling with fully polarized Ising spins s^z=+1 (-1) (see Sec. V.A in the SM <cit.> for details). At h→∞, the Ising spins are aligned along the x-axis. Second-order perturbation theory around this point gives rise to an interaction of order ξ^2/h between the fermions.
Since the Dirac semimetal is a stable state of matter, we expect that it will be realized in the limit h →∞. At ξ>0 and intermediate h, we find a direct second-order quantum transition between the PM and FM phases. This transition is also the topological phase transition for the fermions, in which the Dirac semimetal acquires a topological mass gap corresponding to the quantum spin Hall topological insulator. This conclusion is consistent with the symmetry analysis above, where the PM (FM) phase preserves (spontaneously breaks) the Z_2 symmetry, and thus a quantum spin Hall insulator is prohibited (allowed). At ξ>0, the scaling exponents at the transition deviate from the 3D Ising universality class. Due to the coupling between fermions and bosons, the ξ>0 phase transition flows to a different universality class, namely the N=8 component chiral Ising universality class <cit.>.

FM-PM phase transition for Ising spins. We determine the location of the QCP via the Binder cumulant <cit.>: U_2=1/2( 3 - ⟨ m^4⟩/⟨ m^2 ⟩^2 ), and the correlation ratio <cit.>: R_Corr=1-S^Ising(𝐐+𝐪)/S^Ising(𝐐), where m=1/N_s∑_p s_p^z and S^Ising(𝐤) is the trace of the (2×2) structure factor matrix of the Ising magnetic order at momentum 𝐤. Here, 𝐐=Γ=(0,0) is the ordering vector for the Ising spins, and 𝐪 is the smallest momentum on the lattice, i.e., (0,2π/L) or (2π/L,0). Both U_2 and R_Corr converge to 0 (1) in the PM (FM) phase in the thermodynamic limit. The crossing points of the finite-size results for U_2 and R_Corr, respectively, provide the location of the QCP. In this way, we first determine the position of the QCP and then perform a finite-size scaling analysis of ⟨ m^2 ⟩ close to it to extract the critical exponents.

The results for U_2 and R_Corr, as well as the data collapse of ⟨ m^2 ⟩, for ξ=0.5 and ϕ=π/4 (π flux in each plaquette) are presented in Fig. <ref>. Up to system size L=12, we obtain the finite-size crossing points h=4.06 for U_2 and h=4.10 for R_Corr as the approximate location of the QCP. In Fig. <ref>(c), we collapse the data as ⟨ m^2⟩ L^z+η=f(L^1/ν(h-h_c)/h_c) for L=6,8,10,12,14 and L=10,12,14, respectively. The critical exponents extracted from these two collapses are slightly different, especially in η, indicating some finite-size effect. As will be discussed below, this shifting of exponents is due to a crossover phenomenon. Combining both collapses, we take the exponents as ν=0.85(2), η=0.61(7) (taking z=1) with h_c=4.11(1), which are well consistent with the results presented in Ref. <cit.>, ν=0.83(1), η=0.62(1), for the N=8 component chiral Ising universality class.

We employed two additional measurements to further corroborate the critical exponents. First, we performed a finite-size scaling analysis for S^Ising(𝐤) at ξ=0.50 and ϕ=π/4, which is shown in Sec. II.B of the SM <cit.>, with the extracted critical exponents ν=0.84(4), η=0.62(6). Second, we also simulated the model with ξ=0.50 and ϕ=π/8 (half-π flux) and obtained the critical exponents from the finite-size scaling of ⟨ m^2 ⟩; the results are presented in Sec. III.A of the SM <cit.>. The obtained critical exponents are ν=0.85(3), η=0.63(7) with h_c=4.242(3). These exponents are well consistent with those in Fig. <ref>(c), rendering the N=8 component chiral Ising universality class.

The properties of the QCPs of the PM-FM phase transitions of the Ising spins for ξ=0.25,0.75,1.00, as presented in the phase diagram of Fig.
<ref>(c), are also determined with U_2 and R_Corr, as well as the finite-size scaling of ⟨ m^2 ⟩ and the excitation gaps of the fermions.

Topological phase transition for fermions. Our numerical results further show a single phase transition from the Dirac semimetal to the topological Mott insulator with decreasing transverse field h, which comes hand in hand with the PM-FM phase transition of the Ising spins. As shown in Fig. <ref>, we find that the fermions remain gapless in the PM phase, with a vanishing gap at the Dirac point. Here, both Δ_sp(𝐗), the average single-particle gap at the two Dirac points 𝐗_1 and 𝐗_2, and Δ_s(𝐌), the two-particle spin gap at M, vanish in the thermodynamic limit, consistent with the Dirac semimetal spectrum. In the FM phase, both gaps start to emerge at h/J=4.10∼4.15, consistent with the location of the PM-FM phase transition point for the Ising spins. It is worthwhile to highlight that the gaps remain finite in the whole FM phase with h<h_c, indicating the absence of a further topological phase transition. Since the fermions form a quantum spin Hall insulator in the exactly solvable limit at h=0, this finite gap implies that the whole FM phase shares the same nontrivial topology. To further verify this conclusion, we compute directly the topological invariant, the spin Chern number C_s=(C_↑-C_↓)/2. As shown in Sec. V.B of the SM <cit.>, we obtain C_s=+1 for the whole h<h_c region, indicating that the FM phase is a quantum-spin-Hall topological insulator.

In Sec. IV of the SM <cit.>, we present raw data of the dynamic quantities G(𝐤,τ) and S^xy(𝐤,τ), from which Δ_sp(𝐗) and Δ_s(𝐌) are extrapolated. The comparisons between 2Δ_sp(𝐗) and Δ_s(𝐌) are also shown, to reveal the effect of electron-electron interactions. Furthermore, the gap openings of Δ_sp(𝐗) and Δ_s(𝐌) at ξ=0.25,0.75,1.00 match the QCPs of the PM-FM phase transition for the Ising spins, thus supporting the picture of a semimetal-TMI topological phase transition.

Finite-size scaling crossover. As discussed above, at ξ=0 and ξ>0 the PM-FM transition belongs to two different universality classes, 3D Ising and N=8 chiral Ising. As a result, in the thermodynamic limit, the scaling exponents will change discontinuously as we change the value of ξ away from 0. In numerical studies, because of the finite size, such a discontinuous change will not show up. Instead, a crossover behavior is expected, i.e., at small ξ, a crossover length scale L_c(ξ) arises. For L<L_c (L>L_c), the scaling behavior merges towards the 3D Ising (N=8 chiral Ising) universality class. As ξ approaches zero (increases), L_c diverges to infinity (decreases to microscopic values), and thus the 3D Ising (N=8 chiral Ising) universality class is fully recovered. Such an effect is indeed observed in our data. In Sec. VI of the SM <cit.>, we present the finite-size scalings of ⟨ m^2⟩ from L=6,8,10,12 and L=8,10,12, respectively, for ξ=0.25,0.50,0.75. At ξ=0.25, the data collapse suffers strongly from finite-size effects, and the chiral Ising exponents only arise at very large system sizes, especially for η. However, as ξ increases, the chiral Ising exponents emerge even if the smallest size L=6 is included in the fitting.

Discussions. Because the fermion spin in our model only preserves a U(1) symmetry, instead of SU(2), our topological Mott insulator breaks a Z_2 symmetry, in contrast to the SU(2) symmetry breaking in Ref. <cit.>. This difference in symmetry-breaking patterns is irrelevant for topology. However, it leads to different scaling exponents at the transition <cit.>.
Furthermore, at finite temperature, the symmetry-breaking phase in our model survives, while the SU(2) symmetry breaking arises only at T=0. To the best of our knowledge, our study demonstrates the first interaction-driven quantum-spin-Hall topological Mott insulator obtained from an unbiased numerical method, and for the first time, this novel topological phenomenon becomes accessible to large-scale lattice QMC simulations. Our work points out a new route to realize interaction-driven topological phases and phase transitions. It has experimental relevance, since the interaction-driven quantum anomalous Hall effect has recently been suggested in functionalized α-Fe_2O_3 nanosheets <cit.>.

We (YYH, XYX, ZYM and ZYL) acknowledge funding from the Ministry of Science and Technology of China through the National Key Research and Development Program under Grant No. 2016YFA0300502 and from the National Science Foundation of China under Grant Nos. 91421304, 11421092, 11474356, 11574359, 11674370, as well as the National Thousand-Young Talents Program of China. Y.Y.H is also supported by the Outstanding Innovative Talents Cultivation Funded Programs 2016 of Renmin University of China. K.S. acknowledges support from the National Science Foundation under Grant No. PHY1402971 and the Alfred P. Sloan Foundation. F.F.A thanks the German Research Foundation (DFG) for financial support through the SFB 1170 ToCoTronics. We thank the Physical Laboratory of High Performance Computing in Renmin University of China, the Center for Quantum Simulation Sciences in the Institute of Physics, Chinese Academy of Sciences and the Tianhe-1A platform at the National Supercomputer Center in Tianjin for their technical support and generous allocation of CPU time.

Supplemental Material: Chiral Ising transition between Dirac semimetal and quantum spin Hall insulators

Department of Physics, Renmin University of China, Beijing 100872, China
Beijing National Laboratory for Condensed Matter Physics, and Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China
Department of Physics, University of Michigan, Ann Arbor, MI 48109, USA
Institut für Theoretische Physik und Astrophysik, Universität Würzburg, 97074 Würzburg, Germany
Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China
School of Physical Sciences, University of Chinese Academy of Sciences, Beijing, 100190, China
Department of Physics, Renmin University of China, Beijing 100872, China

§ I. PROJECTOR QMC ALGORITHM FOR THE MODEL

§.§ A. PQMC Formalism

The fermion-Ising model in the main text is expressed as

H = H_Fermion + H_Ising + H_Coupling,
H_Fermion = -t∑_⟨ ij ⟩σ ( e^+iσϕ c_iσ^† c_jσ + e^-iσϕ c_jσ^† c_iσ ),
H_Ising = -J∑_⟨ pq ⟩ s_p^z s_q^z - h∑_p s_p^x,
H_Coupling = ∑_⟨⟨ ij ⟩⟩σ ξ_ij s_p^z ( c_iσ^† c_jσ + c_jσ^† c_iσ ).

In the projector QMC setup, the projected partition function of this model can be expressed as 𝒵 = ⟨ψ_T| Tr_{𝐬}{e^-Θ H} |ψ_T⟩, where the trace is taken over all Ising spin configurations, and |ψ_T⟩ is the Slater-determinant trial wave function. In the projector QMC framework, we first break the projection length Θ into M slices (Θ=MΔτ). Since the s_p^z operator in the H_Coupling term is diagonal in the Ising spin configurations, the Boltzmann weight from H_Ising is separable.
After tracing out the fermion degrees of freedom, we can obtain the partition function of the fermion-Ising coupling model as <cit.>

𝒵 = ⟨ψ_T| Tr_{𝐬}{ e^-ΘĤ} |ψ_T⟩≈ ∑_{s_i,ℓ^z=±1} 𝒲_Ising∏_σ=↑,↓(P_σ^†B_M^σ⋯ B_2^σB_1^σP_σ),

where

𝒲_Ising = λ^MN_sexp[-( -Δτ J∑_ℓ=1^M∑_⟨ pq ⟩ s_p,ℓ^z s_q,ℓ^z - γ∑_p∑_ℓ=1^M s_p,ℓ+1^z s_p,ℓ^z )]

comes from H_Ising when the 2D transverse-field Ising model is mapped to a 3D classical Ising model <cit.>, with

{[ λ = √(sinh(Δτ h) cosh(Δτ h)); γ = -1/2ln[tanh(Δτ h)] ]. .

Due to the spin-staggered phase e^iσϕ in the H_Fermion term in Eq. (<ref>), the spin-up determinant (P_↑^†B_M^↑⋯ B_2^↑B_1^↑P_↑) in Eq. (<ref>) is the complex conjugate of the spin-down one (P_↓^†B_M^↓⋯ B_2^↓B_1^↓P_↓). Since the configuration weight for the classical Ising spin part is always positive, the configuration weight containing both the fermion and classical spin parts is non-negative; thus there is no sign problem in the QMC simulations of the model Hamiltonian in Eq. (<ref>). To calculate the B_ℓ^σ, we have applied the Trotter decomposition to both the e^-Δτ H_Fermion and e^-Δτ H_Coupling terms. For the H_Fermion term, we have applied the checkerboard decomposition, which divides all the NN hopping terms into two parts such that within each part all the hopping terms commute with one another. The H_Coupling term, i.e. the fermion-Ising spin coupling, is separated into four parts such that within each part all the hopping terms commute with one another. Including both of these decompositions, we have the B_ℓ^σ matrix as

B_ℓ^σ = e^𝒞_ℓ,4e^𝒞_ℓ,3e^𝒞_ℓ,2e^𝒞_ℓ,1 e^𝒦_2 e^𝒦_1,

where 𝒞_ℓ,4,𝒞_ℓ,3,𝒞_ℓ,2,𝒞_ℓ,1 correspond to the H_Coupling term while 𝒦_2,𝒦_1 represent the H_Fermion term. Here, all of 𝒞_ℓ,4,𝒞_ℓ,3,𝒞_ℓ,2,𝒞_ℓ,1 and 𝒦_2,𝒦_1 are effectively 4×4 matrices. For the results presented in the main text, we set Δτ=0.04/t and purposely increase the projection parameter Θ with increasing system size, e.g. Θ=20 for L=4 and Θ=50 for L=14, to ensure convergence to the ground state.

§.§ B. Cluster Updates for Ising spins

As for the updates of the classical Ising spins during the PQMC simulations: in one sweep, we first apply local updates scanning through the space-time configuration and then perform one cluster update. For the case studied here, the 3D classical Ising model (2D transverse-field Ising model) has ferromagnetic interactions in both the space and time directions,

β^'Ĥ^' = -Δτ J∑_ℓ=1^M∑_⟨ pq⟩ s_p,ℓ^z s_q,ℓ^z - γ∑_p∑_ℓ=1^M s_p,ℓ+1^z s_p,ℓ^z,

with J>0 and γ=-1/2ln[tanh(Δτ h)]>0. This is a highly anisotropic model, and the ratio between the coupling strengths in time and space, r=γ/(Δτ J), can reach 10^1∼10^3. To guarantee both efficiency and correctness, we employ local updates, Wolff cluster updates <cit.> and geometric cluster updates <cit.> in the simulation. The local update is, of course, the simplest one to implement. The two cluster updates, which greatly suppress the critical slowing down, are complementary. Close to the quantum critical point, the Wolff cluster can sometimes grow as large as the entire space-time lattice, rendering this update rather inefficient around quantum criticality. The geometric cluster update implicitly imposes a restriction on the size of the cluster, preventing it from growing too large. On the other hand, the geometric cluster update cannot change the magnetization of the system, whereas the Wolff cluster update can explicitly change the value of ∑_p s_p,ℓ^z.
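To make the cluster construction concrete, the following is a minimal sketch in Python (with numpy) of how a single Wolff cluster could be grown on the (2+1)-dimensional space-time lattice and accepted against the fermionic weight. It is an illustration only, not the code used for the simulations: for simplicity it uses a plain L×L spatial grid instead of the checkerboard geometry, and all names (wolff_cluster, wolff_step, fermion_weight_ratio) are ours. The bond-activation probabilities 1-e^{-2Δτ J} (space) and 1-e^{-2γ} (time) implement detailed balance with respect to the classical spin action, while the fermionic ratio π_f(b)/π_f(a), supplied here as a user-defined callback, enters only through the final acceptance test derived just below.

```python
import numpy as np

def wolff_cluster(spins, dtau_J, gamma, rng):
    """Grow one Wolff cluster for the anisotropic classical Ising action.

    spins : int array of shape (L, L, M) with entries +/-1 (space, space, imaginary time).
    dtau_J: ferromagnetic coupling in the two spatial directions.
    gamma : ferromagnetic coupling in the imaginary-time direction.
    Returns a boolean mask marking the proposed cluster.
    """
    L, _, M = spins.shape
    p_space = 1.0 - np.exp(-2.0 * dtau_J)   # bond-activation probability in space
    p_time = 1.0 - np.exp(-2.0 * gamma)     # bond-activation probability in time
    seed = (rng.integers(L), rng.integers(L), rng.integers(M))
    in_cluster = np.zeros(spins.shape, dtype=bool)
    in_cluster[seed] = True
    stack = [seed]
    while stack:
        x, y, t = stack.pop()
        s = spins[x, y, t]
        # periodic neighbours: four in space, two in imaginary time
        neighbours = [((x + 1) % L, y, t, p_space), ((x - 1) % L, y, t, p_space),
                      (x, (y + 1) % L, t, p_space), (x, (y - 1) % L, t, p_space),
                      (x, y, (t + 1) % M, p_time), (x, y, (t - 1) % M, p_time)]
        for nx, ny, nt, p in neighbours:
            if (not in_cluster[nx, ny, nt] and spins[nx, ny, nt] == s
                    and rng.random() < p):
                in_cluster[nx, ny, nt] = True
                stack.append((nx, ny, nt))
    return in_cluster

def wolff_step(spins, dtau_J, gamma, fermion_weight_ratio, rng):
    """One cluster update: the spin part cancels by construction, so the
    acceptance is min{1, pi_f(b)/pi_f(a)} with only the fermionic ratio left."""
    cluster = wolff_cluster(spins, dtau_J, gamma, rng)
    if rng.random() < min(1.0, fermion_weight_ratio(spins, cluster)):
        spins[cluster] *= -1
    return spins

# Example usage with a trivial (free-spin) fermionic ratio:
# rng = np.random.default_rng(0)
# spins = rng.choice([-1, 1], size=(12, 12, 1250))
# spins = wolff_step(spins, 0.04, 1.6, lambda s, c: 1.0, rng)
```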
In PQMC, a sweep contains the propagation of the configurations in imaginary time from τ=Θ to τ=0 and the successive propagation from τ=0 to τ=Θ. In both propagations, τ=Θ→0 and τ=0→Θ, the local updates of the classical Ising spins are carried out sequentially through the space-time lattice. Then, at τ=0, the cluster updates are performed to flip the spins in a cluster constructed by the Wolff and geometric algorithms. The detailed balance condition of the Wolff or geometric updates is given by

π(a)P(a→ b)𝒜(a→ b) = π(b)P(b→ a)𝒜(b→ a),

where a,b are configurations of Ising spins, and P(a→ b), 𝒜(a→ b) are the a priori and acceptance probabilities. With π(a)=π_s(a)π_f(a) as the total configuration weight (where π_s(a) is the weight of the spin part and π_f(a) is the weight of the fermion part), both the Wolff and geometric cluster algorithms choose the cluster carefully such that the a priori probability ratio has the following property

P(b→ a)/P(a→ b) = π_s(a)/π_s(b).

Then we can obtain the acceptance probability as

𝒜(a→ b) = min{P(b→ a)/P(a→ b)π(b)/π(a) , 1 } = min{π_f(b)/π_f(a), 1 }.

One can see that the acceptance probability is simply the ratio of the fermion parts of the weights for the classical Ising spin configurations. At τ=0, an arbitrary number of steps of Wolff and geometric cluster updates can be performed, and the overall acceptance probability is

𝒜(a_1→ a_n) = 𝒜(a_1→ a_2)𝒜(a_2→ a_3)⋯𝒜(a_n-1→ a_n) = min{π_f(a_n)/π_f(a_1), 1 }.

Hence the implementation of cluster updates in PQMC simulations goes as follows: first, construct the cluster of Ising spins according to the Wolff and geometric cluster algorithms, and then calculate the ratio of the fermion parts of the weights to obtain the acceptance probability, so that the cluster update can be accepted or rejected. From our simulations, we find that performing one Wolff cluster update and one geometric cluster update per sweep reaches the highest acceptance ratio for the overall cluster updates. For example, the acceptance ratio for cluster updates is typically four times larger than that for the local updates. Table <ref> lists the number of spins flipped, i.e. the sizes of the Wolff and geometric clusters, for linear system size L=12 with ξ/t=0.50, h/J=3.35, close to the FM-PM phase transition. One can see that in one step of a cluster update, thousands of Ising spins are flipped.

§ II. QUANTUM CRITICAL POINTS OF ISING SPINS

In this section, we present some additional results for the FM-PM phase transition of the Ising spins. These include: Sec. II. A., the determination of the QCPs for ξ=0.25,0.75,1.00 via the Binder cumulant U_2 and the correlation ratio R_Corr; Sec. II. B., the data collapse of the structure factor S^Ising(Γ)/L^2 to extract the critical exponents.

§.§ A. QCPs for different ξ

As shown in the main text, we first determine the FM-PM phase transition points from the crossings of the Binder cumulant U_2 and the correlation ratio R_Corr, defined as

U_2 = 1/2( 3 - ⟨ m^4 ⟩/⟨ m^2 ⟩^2),
R_Corr = 1 - S^Ising(𝐐+𝐪)/S^Ising(𝐐),

with the Ising spin magnetization m=1/N_s∑_ps_p^z (here we have N_s=2L^2). The structure factor of the classical Ising spins is defined on the checkerboard lattice (two sublattices) as

S^Ising(𝐤) = 1/2N∑_a=A,B∑_mn e^-i𝐤·(𝐑_m-𝐑_n)⟨ s_ma^z s_na^z ⟩,

with m,n the unit-cell indices and a=A,B indicating the sublattice (N=L^2 is the number of unit cells). In Eq. (<ref>), 𝐐 is the ordering wave vector, which is (0,0) for the ferromagnetic order, and 𝐪 is the smallest distance vector away from 𝐐, equal to (0,2π/L) or (2π/L,0).
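As an illustration of how these diagnostics can be evaluated from Monte Carlo samples, here is a small sketch in Python/numpy; the function names and array layouts are our own conventions and the snippet is not part of the production code.

```python
import numpy as np

def binder_cumulant(m_samples):
    """U_2 = (3 - <m^4>/<m^2>^2) / 2 from samples of m = (1/N_s) sum_p s_p^z."""
    m = np.asarray(m_samples, dtype=float)
    m2, m4 = np.mean(m ** 2), np.mean(m ** 4)
    return 0.5 * (3.0 - m4 / m2 ** 2)

def structure_factor(cfgs_A, cfgs_B, k, positions):
    """S^Ising(k) = (1/2N) sum_{a=A,B} <|sum_m e^{-i k.R_m} s^z_{ma}|^2>.

    cfgs_A, cfgs_B : (n_samples, N) arrays of +/-1 spins on the two sublattices.
    positions      : (N, 2) array of unit-cell coordinates R_m.
    """
    phases = np.exp(-1j * positions @ np.asarray(k, dtype=float))
    S = sum(np.mean(np.abs(cfgs @ phases) ** 2) for cfgs in (cfgs_A, cfgs_B))
    return S / (2 * positions.shape[0])

def correlation_ratio(cfgs_A, cfgs_B, positions, L):
    """R_Corr = 1 - S(Q+q)/S(Q) with Q = (0,0) and q = (2*pi/L, 0)."""
    S_Q = structure_factor(cfgs_A, cfgs_B, (0.0, 0.0), positions)
    S_Qq = structure_factor(cfgs_A, cfgs_B, (2.0 * np.pi / L, 0.0), positions)
    return 1.0 - S_Qq / S_Q
```

The quadratic form in the structure factor is evaluated as the mean squared modulus of the Fourier amplitude per sublattice, which is equivalent to the double sum over m,n above but linear in the number of sites.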
It is expected that both U_2 and R_Corr converge to 1 inside the ordered phase in the thermodynamic limit, while they are both 0 in the disordered phase. So there is a jump from 1 to 0 when going from the ordered to the disordered phase in the thermodynamic limit; for finite-size systems, the data for different system sizes cross, and the location of the crossing point estimates the quantum critical point. Also, both ⟨ m^2⟩ and S^Ising(Γ)/L^2 can tell whether the FM-PM phase transition is continuous or of first order, depending on whether they are smooth or exhibit drops close to the phase transition point. In Figs. <ref>, <ref> and <ref>, we present the results of both U_2 and R_Corr for ξ=0.25,0.75,1.00 across the QCPs with L=4,6,8,10,12. Similar to the results for ξ=0.50 shown in the main text, both U_2 and R_Corr converge to 1 deep inside the ordered phase and to 0 inside the disordered phase. The crossing points, for example h/J=3.368,4.103,5.210,6.78 for L=10,12 of R_Corr for ξ=0.25,0.50,0.75,1.00, are good estimates of the QCPs, from which we can further determine the more precise positions of the thermodynamic QCPs by combining the data collapse of ⟨ m^2 ⟩ and the extrapolation of the excitation gaps.

§.§ B. Data Collapse for the Structure factor S^Ising(Γ)/L^2

In the main text, we have obtained the critical exponents of the FM-PM phase transition of the Ising spins from the data collapse of the ⟨ m^2⟩ data with the relation ⟨ m^2⟩ L^z+η=f(h-h_c/h_cL^1/ν). In this part, we show that we can obtain the same results for the ν,η exponents from the data collapse of S^Ising(Γ)/L^2 with the analogous relation L^z+η·[S^Ising(Γ)/L^2]=f(h-h_c/h_cL^1/ν). According to the definition of S^Ising(𝐤) in Eq. (<ref>), we have

S^Ising(Γ)/L^2 = 1/2N^2∑_a=A,B∑_mn⟨ s_ma^z s_na^z ⟩ = 1/2⟨(1/N∑_p∈ As_p^z )^2 + (1/N∑_p∈ Bs_p^z )^2 ⟩ = 1/2(⟨ m_A^2⟩ + ⟨ m_B^2⟩),

with m_α=1/N∑_p∈αs_p^z (α=A,B) the sublattice magnetization of the Ising spins. Similarly, we can express ⟨ m^2 ⟩ through m_A and m_B as

⟨ m^2 ⟩ = 1/4( ⟨ m_A^2⟩ + ⟨ m_B^2⟩ + 2⟨ m_A m_B⟩).

We then observe that at h→0 the system is almost classically ordered and we have ⟨ m_A^2⟩=⟨ m_B^2⟩=⟨ m_A m_B⟩, due to the classical decoupling of the correlation as ⟨ m_A m_B⟩=⟨ m_A⟩⟨ m_B⟩. Thus, in the h→0 limit, we have ⟨ m^2 ⟩=S^Ising(Γ)/L^2. For finite h, however, the equality does not hold anymore, and ⟨ m_A^2⟩ can differ from ⟨ m_A m_B⟩ across the QCPs. Nevertheless, both quantities carry the same physical meaning of a magnetization. Thus, we can perform the data collapse for them independently, and the critical exponents extracted from them are expected to be the same. Below we perform the data collapse of S^Ising(Γ)/L^2 for ξ=0.50 in the π-flux case (ϕ=π/4). The results are presented in Fig. <ref>. The collapse of the data from L=6,8,10,12,14 yields ν=0.81(1),η=0.57(2),h_c/J=4.084(5), while the collapse of the data from L=10,12,14 gives ν=0.85(2),η=0.67(1),h_c/J=4.108(3). Combining them, we conclude ν=0.84(4),η=0.62(6) for the FM-PM phase transition, which is fully consistent with the results extracted from the data collapse of ⟨ m^2⟩, as shown in Fig. 2 in the main text.

§ III. QUANTUM PHASE TRANSITIONS FOR THE HALF-Π FLUX CASE (Φ=Π/8)

To further confirm the general properties of both the Ising spins and the fermions across the QCPs, we have also simulated the half-π-flux case, ϕ=π/8. In Sec. III. A., we show that the FM-PM phase transition has almost the same critical exponents as in the ϕ=π/4 case, indicating the same universality class.
Moreover, the excitation gaps of the fermions also show a gap-opening behavior with decreasing h, indicating the same DSM-TMI phase transition as in the ϕ=π/4 case; these results are presented in Sec. III. B. For this half-π flux case (ϕ=π/8), we concentrate on ξ=0.50 only.

§.§ A. FM-PM phase transition for Ising spins

Again, the location of the QCP is determined from the crossings of U_2 and R_Corr for different system sizes, which are shown in Fig. <ref>. Considering the convergence to the thermodynamic limit, the QCP for ξ=0.50 with ϕ=π/8 should be at h_c/J>4.262 (see the insets of Fig. <ref>), which is larger than that for the ϕ=π/4 case with the same ξ parameter. We then take a closer look at the QCP and perform the data collapse of ⟨ m^2⟩ around it. The results are shown in Fig. <ref>. Again, we have performed the collapse of the data for L=6,8,10,12 and L=8,10,12 to obtain reliable values of ν and η. The collapse of the data from L=6,8,10,12 yields ν=0.84(2),η=0.58(2),h_c/J=4.242(3), while the collapse of the data from L=8,10,12 gives ν=0.86(2),η=0.67(2),h_c/J=4.272(6). Combining them, we conclude ν=0.85(3),η=0.63(7) for the FM-PM phase transition, which is fully consistent with the results extracted from the data collapse in the ϕ=π/4 case, as shown in Fig. 2 in the main text as well as in Fig. <ref> in Sec. II. B. These results imply that the FM-PM phase transitions of the Ising spins for both the ϕ=π/4 and ϕ=π/8 cases belong to the N=8 Chiral Ising universality class, with critical exponents consistent with previous work in Refs. Chandrasekharan2013 and Otsuka2016.

§.§ B. DSM-TMI phase transition for fermions

Besides the PM-FM phase transition for the Ising spins, we have also confirmed that it is accompanied by the DSM-TMI phase transition, by monitoring the opening of the excitation gaps. We have calculated the single-particle gap Δ_sp(𝐗) (averaged over the 𝐗_1 and 𝐗_2 points) and the spin gap Δ_s(𝐌). The extrapolations of Δ_sp(𝐗)/t and Δ_s(𝐌)/t over 1/L are shown in Fig. <ref>. We observe that, with decreasing h/J, both Δ_sp(𝐗)/t and Δ_s(𝐌)/t open at h/J∈[4.25,4.30], indicating the DSM-TMI phase transition at h_c/J∈[4.25,4.30]. This location of the QCP is consistent with the results for the Binder cumulant U_2 and the correlation ratio R_Corr shown in Fig. <ref>, and also with the QCP from the data collapse presented in Fig. <ref>. All these consistent results suggest that the PM-FM phase transition for the Ising spins and the DSM-TMI phase transition for the fermions happen simultaneously, and in the N=8 Chiral Ising universality class.

§ IV. RAW DATA OF DYNAMIC PROPERTIES AND EXCITATION GAPS

In this section, we present raw data on the dynamic properties, including the dynamic single-particle Green's function G(𝐗,τ) and the dynamic spin-spin correlation function S^xy(𝐌,τ) in Sec. IV. A., and the comparison of the single-particle gap Δ_sp(𝐗)/t and the spin gap Δ_s(𝐌)/t in Sec. IV. B.

§.§ A. Dynamic single-particle Green's function and spin-spin correlation function

Since we extract the excitation gaps Δ_sp(𝐗)/t and Δ_s(𝐌)/t from G(𝐗,τ)∝ e^-Δ_sp(𝐗)τ and S^xy(𝐌,τ)∝ e^-Δ_s(𝐌)τ in the large-τ limit, we need to ensure that the data for G(𝐗,τ) and S^xy(𝐌,τ) are of high quality. Here, we demonstrate that the dynamic data obtained from our simulations indeed have very good quality. The raw data of G(𝐗,τ) and S^xy(𝐌,τ) for ξ=0.50,ϕ=π/4 and L=10 are shown in Fig. <ref> in both linear and semi-log coordinates. The nearly perfect straight lines of ln[G(𝐗,τ)] and ln[S^xy(𝐌,τ)] versus τ allow us to extract the gaps Δ_sp(𝐗)/t and Δ_s(𝐌)/t with high precision.
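In practice, the extraction of a gap from such exponentially decaying data amounts to a linear fit of ln C(τ) at large τ. The sketch below (Python/numpy, our own naming, shown only to fix ideas) returns the gap from a user-chosen large-τ fit window, together with a simple polynomial extrapolation of the finite-size gaps towards 1/L → 0.

```python
import numpy as np

def extract_gap(tau, corr, fit_window):
    """Fit ln C(tau) = const - Delta * tau on a window where the decay is a pure exponential.

    tau        : 1D array of imaginary times.
    corr       : 1D array with G(X, tau) or S^xy(M, tau) at those times.
    fit_window : (tau_min, tau_max) selecting the large-tau linear region of ln C.
    Returns the excitation gap Delta (in units of t if tau is measured in 1/t).
    """
    tau, corr = np.asarray(tau, float), np.asarray(corr, float)
    mask = (tau >= fit_window[0]) & (tau <= fit_window[1]) & (corr > 0)
    slope, _ = np.polyfit(tau[mask], np.log(corr[mask]), 1)
    return -slope

def extrapolate_gap(inv_L, gaps, order=2):
    """Polynomial extrapolation of Delta(L) to the thermodynamic limit 1/L -> 0.

    Requires at least order + 1 system sizes; order=2 mimics a quadratic fit in 1/L.
    """
    coeffs = np.polyfit(np.asarray(inv_L, float), np.asarray(gaps, float), order)
    return np.polyval(coeffs, 0.0)
```

The quality of the semi-log linearity mentioned above is what guarantees that the fitted slope is insensitive to the exact choice of fit_window.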
This good quality of the dynamic data also holds for different system sizes L, different ξ parameters and different ϕ, such as ϕ=π/8. Notice that here we have calculated the single-particle gap at the 𝐗_1 and 𝐗_2 points in the BZ and the spin gap at the 𝐌 point. In the large-h limit, the fermions have a DSM ground state and the Dirac cones are located at the 𝐗_1 and 𝐗_2 points. With decreasing h, the fermions open gaps at the 𝐗_1 and 𝐗_2 points and enter the TMI phase. As for the spin gap, it has its smallest value at the 𝐌 point, which can be understood as follows: since the coupling term Ĥ_Coupling in the model Hamiltonian of Eq. (<ref>) couples the Ising spins to the next-nearest-neighbor (NNN) hopping of the fermions, it effectively induces a NNN density-density interaction between the fermions when the Ising spins are integrated out, and that NNN density-density interaction favors collinear order on the square lattice, which has ordering wave vector 𝐐=𝐌 on the checkerboard lattice. We note that this 𝐐=𝐌 spin order of the fermions should not be confused with the usual antiferromagnetic long-range order on the square lattice.

§.§ B. Excitation Gaps

In Fig. <ref> in the main text, the extrapolations of Δ_sp(𝐗)/t and Δ_s(𝐌)/t over 1/L are presented, and one can observe that the values of 2×Δ_sp(𝐗)/t are very close to those of Δ_s(𝐌)/t for the same model parameters and system size. It is well known that the two-particle excitation gap is twice the single-particle gap in noninteracting fermion systems. Indeed, we have observed Δ_s(𝐌)/t≈ 2×Δ_sp(𝐗)/t at small h, since there the system is very close to a noninteracting fermion system. Close to the QCP in the phase diagram, however, this relation no longer holds. In Fig. <ref>, the comparison between twice the single-particle gap, 2×Δ_sp(𝐗)/t, and the spin gap Δ_s(𝐌)/t for ξ=0.50 in the π-flux case (ϕ=π/4) is shown for systems with L=4,6,8,10,12 across the DSM-TMI phase transition. One can see that 2×Δ_sp(𝐗)/t is larger than Δ_s(𝐌)/t. This indicates the presence of effective electron-electron interactions, mediated by the Ising spin fluctuations in the model Hamiltonian. The observation that 2×Δ_sp(𝐗)/t is larger than Δ_s(𝐌)/t also holds for different ξ parameters and for the ϕ=π/8 case close to the DSM-TMI transition. In Fig. <ref> in the main text, we have shown the extrapolations of the excitation gaps Δ_sp(𝐗)/t and Δ_s(𝐌)/t over 1/L as proof of the DSM-TMI phase transition for the fermions only for ξ=0.50 with ϕ=π/4. Here, we also present the extrapolations of Δ_sp(𝐗)/t for ξ=0.25,0.75,1.00 with ϕ=π/4 and show the gap opening across the DSM-TMI phase transition. The results are shown in Fig. <ref>. We only present the data for Δ_sp(𝐗)/t, as the spin gap Δ_s(𝐌)/t shows almost the same gap-opening behavior with decreasing h. For the ξ=0.25,0.75,1.00 cases with ϕ=π/4, the excitation gaps open at h_c/J∈[3.35,3.40], h_c/J∈[5.20,5.25] and h_c/J∈[6.75,6.80], respectively, which is fully consistent with the data crossing points of the Binder cumulant and the correlation ratio for the Ising spins in Fig. <ref>, Fig. <ref> and Fig. <ref>.

§ V. TOPOLOGICAL NATURE OF THE TMI

In addition to the opening of the excitation gaps, in this section we demonstrate that the fermions in the TMI indeed have a QSHI ground state at h<h_c. In the h=0 limit, the Ising spins are classically ordered and the fermions become non-interacting; here one can show analytically that the system has a QSHI ground state. This part is presented in Sec. V. A.
On the other hand, at finite h<h_c, we can also provide theoretical arguments and, more importantly, numerical evidence that the system is in a QSHI ground state as well, as demonstrated in Sec. V. B.

§.§ A. h=0 Limit

At h=0, the Ising spins in Eq. (<ref>) order in a classical way: ferromagnetic order without quantum fluctuations. Thus, the Hamiltonian becomes non-interacting and the remaining fermion part can be written as

Ĥ_Fermion = -t∑_⟨ ij ⟩σ ( e^+iσϕc_iσ^†c_jσ + e^-iσϕc_jσ^†c_iσ ) + ∑_⟨⟨ ij ⟩⟩σ t_ij^'ξ( c_iσ^†c_jσ + c_jσ^†c_iσ).

With the choice t_ij^'=± t_2, this simple tight-binding model has a QSHI ground state <cit.>. This model has U(1)_spin× U(1)_charge⋊ Z_2^T symmetry (here Z_2^T stands for time-reversal symmetry), which renders the topological index Z-classified. In Fig. <ref>, we present the spectral functions of the spin-up and spin-down parts along one of the edges for the model in Eq. (<ref>) on a ribbon geometry (periodic boundary conditions in the x-direction, open boundary conditions in the y-direction), shown in Fig. <ref>(a). We can observe that it has helical edge states within a large bulk band gap. We note that the topological invariant, the spin Chern number C_s=(C_↑-C_↓)/2=+1, can also be calculated via the standard zero-frequency Green's function formalism <cit.>. Hence, combining this information, one can see that at h=0 the system is indeed in a QSHI ground state for any finite ξ.

§.§ B. Finite h with h<h_c

As shown both in the main text and in Sec. III. B and Sec. VI. A, the fermions in the coupled model Eq. (<ref>) have a DSM ground state for h>h_c, namely when the Ising spins are in the paramagnetic state. Decreasing h to h<h_c, we have observed the opening of the single-particle and two-particle excitation gaps; the system enters the TMI phase with QSHI character. But since the fermions are now interacting, mediated by the quantum fluctuations of the Ising spins, we cannot obtain the band structure analytically as in Sec. V. A., and hence we rely on the QMC results. To numerically verify the QSHI ground state of the fermions at h<h_c with finite ξ, we calculate the topological invariant: the spin Chern number C_s=(C_↑-C_↓)/2, with C_σ the Chern number of the spin-σ channel. Due to time-reversal symmetry, C_↑=-C_↓, and thus C_s=C_↑. We have applied the zero-frequency single-particle Green's function method to calculate C_s, which was successfully demonstrated by some of us in Ref. YuaoYao2016B. As mentioned in the main text, since the coupled model Eq. (<ref>) has a Z_2 symmetry, we need to add a pinning field Ĥ_z=B_z∑_ps_p^z to the Ising spins to break this Z_2 symmetry, incorporating the effect of the spontaneous Z_2 symmetry breaking in the thermodynamic limit of the TMI phase. In practical simulations, we choose B_z=0.001J. As for the calculation of C_s, we first obtain the 𝐆_σ(τ,𝐤) data for both τ≥0 and τ<0 from the QMC simulations as

[𝐆_σ(τ,𝐤)]_pq = -⟨ T_τ[c_𝐤pσ(τ)c_𝐤qσ^†(0)] ⟩ = - 1/N∑_i,j=1^N e^-i𝐤·(𝐑_i-𝐑_j)⟨ T_τ[c_ipσ(τ)c_jqσ^†(0)] ⟩
τ>0 → [𝐆_σ^>(τ,𝐤)]_pq = - 1/N∑_i,j=1^N e^-i𝐤·(𝐑_i-𝐑_j)⟨ c_ipσ(τ)c_jqσ^†⟩
τ<0 → [𝐆_σ^<(τ,𝐤)]_pq = + 1/N∑_i,j=1^N e^-i𝐤·(𝐑_i-𝐑_j)⟨ c_jqσ^†(-τ)c_ipσ⟩

where p,q=A,B stand for the sublattice indices. Then we can obtain the zero-frequency single-particle Green's function 𝐆_σ(iω=0,𝐤) as

𝐆_σ(iω=0,𝐤) = ∫_-∞^+∞𝐆_σ(τ,𝐤) dτ = ∫_0^+∞[𝐆_σ^>(τ,𝐤)+𝐆_σ^<(τ,𝐤)] dτ≃∫_0^+θ[𝐆_σ^>(τ,𝐤)+𝐆_σ^<(τ,𝐤)] dτ.

In the last step of Eq. (<ref>), a cut-off θ is applied to the integral, justified by the exponential decay of 𝐆_σ(τ,𝐤) at large τ.
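A direct numerical transcription of this last step could look as follows. This is a Python/numpy sketch under our own naming and array-layout assumptions: it presumes the QMC code already provides G^>(τ,𝐤) and G^<(τ,𝐤) as 2×2 matrices in the sublattice basis on a common τ-grid, and it simply accumulates the cut-off integral by the trapezoidal rule.

```python
import numpy as np

def zero_frequency_greens(tau_grid, G_gtr, G_lss, theta):
    """G_sigma(i w = 0, k) ~ int_0^theta [G^>(tau, k) + G^<(tau, k)] dtau.

    tau_grid : (n_tau,) grid of imaginary times tau >= 0.
    G_gtr    : (n_tau, 2, 2) array with G^>(tau, k) in the A/B sublattice basis.
    G_lss    : (n_tau, 2, 2) array with G^<(tau, k) at the same tau values.
    theta    : integration cut-off, justified by the exponential decay at large tau.
    Returns the (2, 2) matrix G(i w = 0, k).
    """
    tau = np.asarray(tau_grid, dtype=float)
    mask = tau <= theta
    integrand = G_gtr[mask] + G_lss[mask]
    # trapezoidal rule applied element-wise to the 2x2 matrices
    return np.trapz(integrand, tau[mask], axis=0)
```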
Then we can use the following formula with 𝐆_σ(iω=0,𝐤) to calculate C_σ for a finite-size system,

𝒞=1/2π i∬_𝐤∈ BZdk_xdk_y·Tr{P(𝐤)[∂_k_xP(𝐤)∂_k_yP(𝐤)-∂_k_yP(𝐤)∂_k_xP(𝐤)]},

where P(𝐤) is a projection operator matrix constructed from the eigenvectors |ϕ_m(0,𝐤)⟩ of 𝐆_σ(iω=0,𝐤):

P(𝐤)=∑_μ_m>0|ϕ_m(0,𝐤)⟩⟨ϕ_m(0,𝐤)|,

and μ_m is the eigenvalue of 𝐆_σ(0,𝐤) corresponding to the eigenvector |ϕ_m(0,𝐤)⟩. Due to the finite momentum mesh of an L× L system, the spin Chern number C_s calculated from Eq. (<ref>) suffers from finite-size effects and can deviate from the quantized integer for small L. We therefore apply an interpolation scheme to 𝐆_σ(iω=0,𝐤) to achieve a denser momentum resolution and approach the quantized spin Chern number, as demonstrated in Ref. YuaoYao2016B. The results for the spin Chern number C_s for L=4,8,12 systems are shown in Fig. <ref>(a) for ξ=0.50 with ϕ=π/4. Since the spin Chern number in Eq. (<ref>) is not well defined in the gapless DSM phase, we only measure it in the QSHI phase with h/J∈[0,4], i.e., h<h_c. As one can see, with increasing linear system size, from L=8 to L=12, C_↑(=C_s) increases gradually, though it is not quantized. Since h/J=4 is close to the QCP at h_c=4.11(1) and the single-particle gap Δ_sp(𝐗)/t is small there, the cut-off θ causes the dip in C_↑, although the system is still in the QSHI phase. After the interpolation with IL=512 from the L=8,12 systems, C_↑=1 reaches the quantized integer perfectly, meaning that the system is indeed inside the TMI phase with QSHI character. We have also measured the excitation gaps for ξ=0.50,ϕ=π/4 with decreasing h from h/J=4 to h/J=0, as shown in Fig. <ref>(b), where we observe that both the single-particle gap Δ_sp(𝐗)/t and the spin gap Δ_s(𝐌)/t increase monotonically towards the analytical values Δ_sp(𝐗)/t=2 and Δ_s(𝐌)/t=4 at the h=0 point. This demonstrates that there is no further topological phase transition in the h<h_c region; the QSHI at h=0 smoothly crosses over into the TMI at finite h. Furthermore, we have scanned a path inside the gapped region at h/J=3 with ξ∈[0,1], and the results for the energy and the structure factors of several fermion bilinears (not shown) exhibit no signature of a phase transition, suggesting that the fermions in the whole h<h_c region are in the TMI ground state.

§ VI. FINITE-SIZE SCALING CROSSOVER OF THE FM-PM PHASE TRANSITION FOR ISING SPINS

Without coupling to the fermions, the 2D transverse-field Ising model has a continuous h-tuned quantum phase transition in the 3D Ising universality class. Thus, at the ξ=0 point of the model in Eq. (<ref>), the h-tuned quantum phase transition belongs to the 3D Ising universality class with ν=0.629971(4),η=0.026298(2) <cit.>. With coupling to the fermions, however, the universality class is altered to N=8 Chiral Ising. Therefore, there are two different universality classes along the phase transition line (the red line) in the ground-state phase diagram in Fig. <ref>(c) in the main text. A natural question is how the 3D Ising universality class evolves into the N=8 Chiral Ising universality class. In this section, we present some results on this problem. We argue that there exists a finite-size scaling crossover behavior of the universality class at the QCPs of the Ising spins. In the thermodynamic limit, the QCP should belong to the 3D Ising universality class at the ξ=0 point and to the N=8 Chiral Ising universality class for any infinitesimally small ξ>0. However, in the finite-size systems simulated, we expect to see the following features.
First, at small ξ, the critical exponents obtained from finite-size scaling applied to the data of small system sizes are closer to the 3D Ising universality class, while those obtained from the data of larger system sizes are closer to the N=8 Chiral Ising universality class. Second, the finite-size scalings of the ⟨ m^2⟩ data near the QCPs for the same system sizes but increasing ξ should yield critical exponents closer to those of the N=8 Chiral Ising universality class. These features imply that there exists a length scale L_c for each ξ, below which the critical exponents obtained from scaling are close to the 3D Ising universality class and above which they are close to the N=8 Chiral Ising universality class. Clearly, L_c should be larger for smaller ξ and smaller for larger ξ. The finite-size scalings of the ⟨ m^2⟩ data for ξ=0.00,0.25,0.50,0.75 are shown in Fig. <ref>. First of all, we have applied the critical exponents ν=0.629971(4),η=0.026298(2) to collapse the ⟨ m^2⟩ data for ξ=0 and obtained h_c/J=3.046(3), which is fully consistent with previous results <cit.>; the data collapse is also of high quality. Then, for ξ=0.25,0.50,0.75, we have performed the data collapses with free ν,η,h_c using the data from L=6,8,10,12 and L=8,10,12, respectively. For ξ=0.25, we observe a dramatic change of the η exponent from η=0.30(2) to η=0.61(5) simply upon adding the ⟨ m^2⟩ data of the L=12 system. This fact explicitly shows that for smaller system sizes like L=6,8, the critical behavior is closer to the 3D Ising universality class, while it is closer to the N=8 Chiral Ising universality class for the L=12 system. We can conclude that L_c≈10 for ξ=0.25, signifying the finite-size scaling crossover behavior. The critical exponents from the data collapses of L=6,8,10,12 and L=8,10,12 both converge to the values of the N=8 Chiral Ising universality class with increasing ξ=0.25,0.50,0.75, indicating a decreasing length scale L_c in the finite-size scaling as ξ grows. Third, we can also observe that the η exponent suffers from a much stronger finite-size effect than the ν exponent, especially at small ξ. This is simply due to the fact that the two universality classes have similar ν exponents, while their η exponents differ considerably. As a result, the crossover between these two universality classes gives a much larger deviation for η than for ν. All of these numerical results support the finite-size scaling crossover behavior in the ground-state phase diagram.
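For completeness, the kind of data collapse used throughout this section can be automated. Below is a minimal sketch in Python with numpy/scipy; the cost function and all names are our own choices (not the procedure of the original analysis), and z=1 is taken as in the main text. The idea is to rescale all (h, ⟨m^2⟩) points according to ⟨m^2⟩L^{z+η} = f((h-h_c)/h_c · L^{1/ν}) and to minimise the scatter of the rescaled points around a common master curve over (ν, η, h_c); it assumes the values of the scaling variable are distinct.

```python
import numpy as np
from scipy.optimize import minimize

def collapse_cost(params, data, z=1.0):
    """Scatter of the rescaled data around a common master curve.

    data : dict mapping system size L to a pair (h_values, m2_values).
    After sorting in the scaling variable x, each interior point is compared
    with the straight line through its two neighbours.
    """
    nu, eta, h_c = params
    xs, ys = [], []
    for L, (h, m2) in data.items():
        xs.append((np.asarray(h, float) - h_c) / h_c * L ** (1.0 / nu))
        ys.append(np.asarray(m2, float) * L ** (z + eta))
    x, y = np.concatenate(xs), np.concatenate(ys)
    order = np.argsort(x)
    x, y = x[order], y[order]
    t = (x[1:-1] - x[:-2]) / (x[2:] - x[:-2] + 1e-12)
    y_interp = y[:-2] + t * (y[2:] - y[:-2])
    return float(np.mean((y[1:-1] - y_interp) ** 2))

# Example usage with a hypothetical data dictionary {L: (h, m2)}:
# result = minimize(collapse_cost, x0=[0.85, 0.63, 4.11], args=(data,),
#                   method="Nelder-Mead")
# nu_fit, eta_fit, hc_fit = result.x
```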
http://arxiv.org/abs/1705.09192v2
{ "authors": [ "Yuan-Yao He", "Xiao Yan Xu", "Kai Sun", "Fakher F. Assaad", "Zi Yang Meng", "Zhong-Yi Lu" ], "categories": [ "cond-mat.str-el" ], "primary_category": "cond-mat.str-el", "published": "20170525142033", "title": "Dynamical Generation of Topological Masses in Dirac Fermions" }
On shortened and punctured cyclic codes

Arti Yardi^* IRIT/INP-ENSEEIHT, University of Toulouse, Toulouse, France

Ruud Pellikaan Department of Mathematics and Computer Science, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands

(Communicated by xxx)

The problem of identifying whether the family of cyclic codes is asymptotically good or not is a long-standing open problem in the field of coding theory. It is known in the literature that some families of cyclic codes, such as BCH codes and Reed-Solomon codes, are asymptotically bad; however, in general the answer to this question is not known. A recent result by Nelson and Van Zwam shows that all linear codes can be obtained by a sequence of puncturing and/or shortening of a collection of asymptotically good codes <cit.>. In this paper, we prove that any linear code can be obtained by a sequence of puncturing and/or shortening of some cyclic code. Therefore the result that all codes can be obtained by shortening and/or puncturing cyclic codes leaves the possibility open that cyclic codes are asymptotically good.

§ INTRODUCTION

A family of codes is called asymptotically good if it contains an infinite sequence of codes such that both the rate and the ratio of minimum distance to length of every code in this sequence are bounded away from zero <cit.>. The problem of identifying asymptotically good families of codes has been well studied in the literature <cit.>. Justesen codes, AG codes, and Expander codes are some examples of known families of asymptotically good codes. While the family of BCH codes is known to be asymptotically bad <cit.>, some families of quasi-cyclic codes have been identified as asymptotically good <cit.>. However, whether the family of cyclic codes is asymptotically good or not is a long-standing open problem in the literature <cit.>. This problem was first addressed by Assmus, Mattson, and Turyn <cit.>, and since then it has been studied by various researchers <cit.>. Lin and Weldon proved that the family of BCH codes is asymptotically bad <cit.>. Berlekamp and Justesen constructed a family of cyclic codes that performs better than BCH codes; however, this family also turned out to be asymptotically bad <cit.>. Berman provided a necessary condition for a sequence of cyclic codes to be asymptotically good <cit.>: the number of distinct prime factors of the lengths of the cyclic codes must tend to infinity. Martínez-Pérez and Willems have also provided similar necessary conditions for cyclic codes to be asymptotically good, and exhibited some classes of cyclic codes that were likewise identified as asymptotically bad <cit.>. Castagnoli et al. showed that if there exists an asymptotically good sequence of cyclic codes (simple-root or repeated-root), then an asymptotically good sequence of simple-root cyclic codes can also be identified <cit.>. While the existing literature provides sequences of asymptotically bad cyclic codes, the possibility of the existence of a good sequence of cyclic codes cannot be ruled out.
A recent result by Nelson and Van Zwam shows that all linear codes can be obtained by a sequence of puncturing and/or shortening of a collection of asymptotically good codes <cit.>. This result provides a necessary condition for a given class of codes to be asymptotically good. One can use this result to give an independent proof of the fact that graphic and co-graphic codes are not asymptotically good, as shown by Kashyap <cit.>. In this paper, we prove that the class of cyclic codes satisfies this necessary condition, i.e., any linear code can be obtained by a sequence of puncturing and/or shortening of some cyclic code. Therefore the result that all codes can be obtained by shortening and puncturing cyclic codes leaves the possibility open that cyclic codes are asymptotically good.

The remainder of the paper is organized as follows. In Section <ref>, we provide some notation and preliminaries required in the paper. The main result of the paper is provided in Section <ref>, followed by some concluding remarks in Section <ref>.

§ NOTATION AND PRELIMINARIES

The finite field with q elements is denoted by 𝔽_q and the polynomial ring with coefficients from 𝔽_q is denoted by 𝔽_q[X]. Without loss of generality, we consider any vector as a row vector and use boldface letters to indicate vectors. The components of a vector are indicated by lowercase letters. For example, a vector 𝐯∈𝔽_q^n is given by 𝐯 = [ v_0 v_1 … v_n-1 ], where v_i ∈𝔽_q is the ith component of 𝐯, for i = 0,1,…,n-1. The polynomial representation of 𝐯 is given by 𝐯(X)=v_0+v_1X+v_2X^2+…+v_n-1X^n-1. An all-zero vector of length n is denoted by 0_n. The collection of linear block codes of length n and dimension k is denoted by C(n,k), and a linear block code in this collection is denoted by C. The cyclic code of length n with generator polynomial g(X) ∈𝔽_q[X] is denoted by C(n,g). Throughout this paper, we consider linear block codes that are defined over 𝔽_q. We next recall the definitions of the puncturing and shortening operations on a linear block code C.

Puncturing of a code <cit.>: Let C be an [n, k] linear block code over 𝔽_q and let ℒ be a set of any l coordinate locations. Then the puncturing operation on C at the coordinate locations in the set ℒ consists of deleting the entries of every codeword in C at the locations in the set ℒ.

Shortening of a code <cit.>: Let C and ℒ be as defined in Definition <ref>. Then the shortening operation on C at the coordinate locations in the set ℒ consists of two steps. In the first step, consider the set 𝒲 of codewords in C that have zeros at the locations in the set ℒ. In the second step, the puncturing operation is performed on 𝒲 at the coordinate locations in the set ℒ.

It is known that the code obtained after the above mentioned puncturing and shortening operations is a linear block code of length n-l <cit.>.

§ MAIN RESULT

In this section, we provide the main result of the paper in the following theorem.

Any 𝔽_q linear block code can be obtained by a sequence of puncturing and/or shortening of some cyclic code.

Let C be any 𝔽_q linear block code of length n and dimension k. Let {𝐯_1, 𝐯_2, …, 𝐯_k } be a basis of C. Using this basis, we now construct a cyclic code such that, by a sequence of puncturing and/or shortening of this cyclic code, it is possible to obtain the code C. Corresponding to {𝐯_1, 𝐯_2, …, 𝐯_k }, define a vector 𝐟 as follows

𝐟 := [ 1  [𝐯_1 0_n]  [𝐯_2 0_n]  [𝐯_3 0_n] ⋯ [𝐯_k-1 0_n]  𝐯_k  1 ] = [ f_0  f_1  f_2 … f_m ],

where m = 2n(k-1) + n + 1 and f_0, f_1, …, f_m are the components of 𝐟, so that f_0=f_m=1.
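To fix ideas, the construction of 𝐟 above, together with the stretched vector 𝐠 introduced immediately below (obtained by interleaving p-1 zeros between consecutive coefficients), can be transcribed directly. The following Python/numpy sketch is purely illustrative; the function names are ours, and entries are integers understood modulo q.

```python
import numpy as np

def build_f(V):
    """Assemble f = [1, v_1, 0_n, v_2, 0_n, ..., v_{k-1}, 0_n, v_k, 1].

    V : integer array of shape (k, n); rows are the basis codewords of C (mod q).
    Returns f with m + 1 components, where m = 2n(k-1) + n + 1 and f_0 = f_m = 1.
    """
    k, n = V.shape
    pieces = [np.array([1])]
    for j in range(k - 1):
        pieces.append(V[j])
        pieces.append(np.zeros(n, dtype=int))
    pieces.append(V[k - 1])
    pieces.append(np.array([1]))
    f = np.concatenate(pieces)
    assert f.size == 2 * n * (k - 1) + n + 2   # m + 1 components
    return f

def stretch_by_p(f, p):
    """Insert p-1 zeros between consecutive coefficients of f, giving the
    coefficient vector of g(X) = sum_i f_i X^{p i} + X^{(m-1)p + 1}."""
    m = f.size - 1
    g = np.zeros((m - 1) * p + 2, dtype=int)
    g[: m * p : p] = f[:-1]   # f_0, ..., f_{m-1} at positions 0, p, 2p, ...
    g[-1] = f[-1]             # trailing f_m = 1 at position (m-1)p + 1
    return g
```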
Let f(X) be the polynomial corresponding to 𝐟. Let p be the characteristic of the finite field 𝔽_q. Considering the coefficients of f(X), define a vector 𝐠 as follows

𝐠 := [ f_0  0_p-1  f_1  0_p-1 ⋯ 0_p-1  f_m-1  f_m ],

where 0_p-1 is an all-zero vector of length p-1. Let g(X) be the polynomial corresponding to 𝐠. It is given by

g(X) := ∑_i=0^m-1 f_i X^pi + X^[(m-1)p + 1].

The derivative g^'(X) of g(X) is given by

g^'(X) = ∑_i=0^m-1 pf_i X^pi-1 + [(m-1)p + 1] X^(m-1)p = X^(m-1)p,

where the last equality is obtained since p=0 in 𝔽_q. Since g(0) is not equal to zero and the only zero of g^'(X)=X^(m-1)p is X=0, it follows that the greatest common divisor of g(X) and g^'(X) is equal to one. This implies that g(X) does not have multiple zeros <cit.>. Furthermore, there exists an extension 𝔽_q^e of 𝔽_q such that all zeros of g(X) are non-zero elements of this finite extension, where e is some positive integer. Hence g(X) divides X^N-1, where N=q^e-1 and (N,q)=1. Let n^' be the smallest integer such that g(X) divides X^n^'-1, (n^',q)=1, and 2(n^'-deg(g)) ≥ n^'. Note that if g(X) had multiple zeros, it would never divide X^n^'-1 for any integer n^' with (n^',q)=1 <cit.>. The necessity of the condition 2(n^'-deg(g)) ≥ n^' will be explained later in the proof. From (<ref>) we have f_m=1, which implies that g(X) is a monic polynomial. It is known that a g(X) satisfying the above mentioned conditions generates the cyclic code C(n^',g) of length n^' <cit.>. Let k^' = n^'-deg(g) be the dimension of C(n^',g). The condition 2(n^'-deg(g)) ≥ n^' implies that 2(n^'-deg(g)) = 2 k^'≥ n^', i.e., the rate of the code C(n^',g) is greater than or equal to 1/2. We next prove that C(n^',g) is the required cyclic code of the theorem.

Let G be a generator matrix of C(n^',g). In order to perform a sequence of puncturing and/or shortening operations on C(n^',g), we first write the rows of G in a convenient form. The first row 𝐠_0 of G can be given by

𝐠_0 := [ f_0  0_p-1  f_1  0_p-1 ⋯ 0_p-1  f_m-1  f_m  0_k^'-1 ].

The ith row 𝐠_i of G can be obtained by considering i right cyclic shifts of 𝐠_0 <cit.>, i.e., 𝐠_i is given by

𝐠_i := [ 0_i  f_0  0_p-1  f_1  0_p-1 ⋯ 0_p-1  f_m-1  f_m  0_k^'-1-i ],

where i = 0, 1,…, k^'-1. The condition 2k^'≥ n^' implies that k^'≥ n^' - k^' = deg(g). Recall that deg(g) = (m-1)p+1, and this implies that k^'-1 ≥ (m-1)p. Thus the condition 2k^'≥ n^' ensures that G has at least (m-1)p rows. The generator matrix G can now be written as

[𝐠_0;⋮;𝐠_p;⋮; 𝐠_(m-1)p;⋮;𝐠_k^'-1 ]= [ [ f_0 0_p-1 f_1 0_p-1 · f_m-1 f_m 0_p-2 0 · · · 0; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; 0 0_p-1 f_0 0_p-1 · f_m-2 0 0_p-2 f_m-1 f_m 0 · 0; ⋮ ⋮ ⋮ ⋮ ⋱ ⋮; 0 · · · · f_0 0 0_p-2 f_1 · f_m · 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 · · · · · · · · · · f_m; ] ],

where the coordinate location of the coefficient f_m-1 in row 𝐠_0 is the same as the coordinate location of the coefficient f_m-i-1 in row 𝐠_ip, for i = 1,2,…, m-1, which is indicated using the two vertical dashed lines. The code C can now be obtained by a sequence of puncturing and/or shortening of C(n^',g) in the following five steps.

A] Shortening of C(n^',g) at coordinate locations in the range [ (j-1)p + 2, jp], for j = 1,2,…, m-1

We first choose j=1 and shorten C(n^',g) in the range [2, p]. Let 𝒲_1 be the vector space spanned by the rows 𝐠_0 and 𝐠_p, 𝐠_p+1, …, 𝐠_k^'-1, i.e., the first row of G and all the rows of G after 𝐠_p in (<ref>). The vectors in 𝒲_1 have zeros in the coordinate location range [2,p].
We now prove that the set of codewords in C(n^',g) that have zeros in this range is exactlythe vector space 𝒲_1.Consider the matrix formed by the rows 𝐠_0, 𝐠_1, …, 𝐠_p of G, i.e.,the initial p+1 rows of G as follows,[ [ 𝐠_0; 𝐠_1; ⋮; 𝐠_p-1; 𝐠_p; ]] = [ [ f_0 0 · · · · · 0 f_1 0 · · · · · 0; 0 f_0 0 · · · · 0 0 f_1 · · · · · 0; ⋮ ⋱ ⋮ ⋱; 0 · · · · · 0 f_0 0 · · · · · · 0; 0 · · · · · · 0 f_0 · · · · · · 0; ] ], where the coordinate location range between the two vertical dashed lines in (<ref>) is [2,p].Recall that f_0=1 (see (<ref>)). From (<ref>) and (<ref>) this implies that a non-zero codeword in C(n^',g) that is a linear combination of the rows 𝐠_1, 𝐠_2, …, 𝐠_p-1,will have at least one non-zero coordinate in the range [2,p]. Thus the set of codewords in C(n^',g) that have zeros in the range [2,p]is exactly the vector space 𝒲_1.We will now shorten C(n^',g) in the range [2,p].In the first step of shortening, we will get subspace 𝒲_1 (see Definition <ref>).In the second step, puncturing operation will be performed in the coordinate location range [2,p] of 𝒲_1.A generator matrix G_1^' of the code obtained by shortening C(n^',g) in this range is given by,G_1^' = [ [ f_0 f_1 0_p-1 · · f_m-1 f_m 0_p-2 0 · · · · · 0; 0 f_0 0_p-1 · · f_m-2 0 0_p-2 f_m-1 f_m 0 · · · 0; ⋮ ⋱ ⋮ ⋮ ⋱ ⋮; 0 · · · · f_0 0 0_p-2 f_1 · · f_m 0 · 0; ⋮ ⋮ ⋱ ⋮; 0 · · · · · · · · · · · · f_m; ] ].Observe that shortening of C(n^',g) in the coordinate location range [2,p] eliminated the rows 𝐠_1, 𝐠_2, …, 𝐠_p-1 of G. Thus the number of rows of G_1^' is equal to k^'-(p-1). Using similar arguments it can be proved that shortening of C(n^',g) in the range [ (j-1)p + 2, jp]will eliminate the rows 𝐠_(j-1)p + 1, 𝐠_(j-1)p+2, …, 𝐠_jp-1 of G for j = 1,2,…, m-1. A generator matrix G_1 of the code obtained by shortening in these sets is given by,G_1 = [ [ f_0 f_1 f_2 · · f_m-1 f_m 0_p-2 0 · · · · 0; 0 f_0 f_1 · · f_m-2 0 0_p-2 f_m-1 f_m · · · 0; ⋮ ⋮ ⋮ ⋮ ⋱ ⋮; 0 · · · · f_0 0 0_p-2 f_1 · · f_m · 0; ⋮ ⋮ ⋮ ⋱; 0 · · · · · · · · · · · f_m; ]] [ 𝐡_0; 𝐡_1; ⋮; 𝐡_m-1; ⋮; 𝐡_k_1 ],where k_1 = [k^' -1] - [(m-1)(p-1)] and 𝐡_0, 𝐡_1, …, 𝐡_k_1 aredefined as the rows of G_1. Let C_1 be the linear block code generated by G_1 defined in (<ref>).   B] Shortening of C_1 at the last k^'-1 - [2np(k-1)] locationsFrom (<ref>) we have k_1 ≥ m-1.Substituting the value of m = 2n(k-1) + n + 1 we get k_1 ≥ 2n(k-1) + n, which implies that k_1 > 2n(k-1). Thus the condition k_1 ≥ m-1 ensures that G_1 has at least 2n(k-1) rows.In order to perform this shortening operation, we first write the generator matrix G_1in a convenient form.We substitute the value of the vector[f_0f_1 … f_m-1] = [1 𝐯_10_n𝐯_20_n⋯0_n𝐯_k]from (<ref>) and form a matrix using the rows 𝐡_0 and 𝐡_2n of G_1 as follows, [ [𝐡_0; 𝐡_2n;]] = [ [ [1 𝐯_10_n]𝐯_2··𝐯_kf_m0·····0; 0_2n 1 ]𝐯_1··𝐯_k-10··f_m0··0;]]. From (<ref>), it can be seen that the coordinate location of the vector 𝐯_k in 𝐡_0 and the location of the vector 𝐯_k-1 in 𝐡_2n is the same, which is indicated using the two vertical dashed lines.In general, the coordinate location of 𝐯_k-j in 𝐡_2nj will be same as that ofthe location of 𝐯_k in 𝐡_0, for j = 1,2, …, k-1. Using this the generator matrix G_1 can be written as[ 𝐡_0; ⋮;𝐡_2n; ⋮; 𝐡_2n(k-1); 𝐡_2n(k-1)+1; ⋮; 𝐡_k_1 ]= [ [ [1 𝐯_10_n]𝐯_2··𝐯_kf_m0· ····0;⋮⋮⋮⋱⋮; 0_2n 1 ]𝐯_1··𝐯_k-1··f_m ····0;⋮⋮⋮⋱⋮; 0_2n+10_n··𝐯_1····f_m0··0; 0_2n+10_n······· f_m0·0;⋮⋮⋮⋮⋱⋮; 0_2n+10_n······· ···f_m;] ]. 
Note that the row 𝐡_2n(k-1) of G_1 corresponds to the row 𝐠_2np(k-1) of G.From (<ref>) this implies that the number of columns on the right hand sidethe vertical dashed line in (<ref>) is equal to k^'-1 - [2np(k-1)]. Using similar arguments as in step A], it can be shown that the shortening of C_1 in the last k^'-1 - [2np(k-1)] locations will provide a code with generator matrix G_2 given by G_2= [ [ [1 𝐯_10_n]𝐯_2··𝐯_kf_m000···0;⋮⋮⋮⋱⋮; 0_2n 1 ]𝐯_1··𝐯_k-1···f_m0··0;⋮⋮⋮ ⋱ ⋮; 0_2n+10_n··𝐯_1·······f_m ] ]. Let C_2 be the linear block code generated by G_2 defined in (<ref>).   C] Puncturing of C_2 at the locations other than the initial m columnsComparing (<ref>) and (<ref>), the number of columns on the left hand side of the vertical dashed line in (<ref>) is equal to m. Thus the generator matrix G_3 of the code obtained after this puncturing operation is given by, G_3= [ [ [1 𝐯_10_n] [𝐯_20_n]··𝐯_k;⋮⋮⋮; 0_2n 1 ] [𝐯_10_n]··𝐯_k-1;⋮⋮⋮; 0_2n+1 0_2n··𝐯_1 ] ] [ [ 𝐛_0; ⋮;𝐛_2n; ⋮; 𝐛_2n(k-1); ]], where 𝐛_0, 𝐛_1, …, 𝐛_2n(k-1) are defined as the rows of G_3.Suppose the codeword 𝐯_j is given by𝐯_j= [ 0_l_j 𝐰_j 0_r_j ] where 0_l_j and 0_r_j are all-zero vectors of lengths l_j and r_j respectively and 𝐰_j is the vector obtained by puncturing the initial l_j and the last r_j entries of 𝐯_j such that the first and the last entry of 𝐰_j are not zero, for j = 1,2,…,k. Let d_1, d_2, …, d_k be the lengths of 𝐰_1, 𝐰_2, …, 𝐰_k respectively, i.e.,l_j+d_j+r_j=n. Without loss of generality we assume that r_1 ≤ r_2 ≤…≤ r_k. Substituting the value of 𝐯_j, the generator matrix G_3 can be written as [ 𝐛_0; ⋮;𝐛_2n;𝐛_2n+1; ⋮; 𝐛_2n(k-1) ]= [ [10_l_1𝐰_10_r_10_n-10[0_l_2𝐰_20_r_2]···𝐯_k;⋮ ⋮⋮⋮;00_l_10_d_10_r_10_n-11 [ 0_l_1𝐰_10_r_1]···𝐯_k-1;00_l_10_d_10_r_10_n-10 [ 1 0_l_1𝐰_10_r_1-1]····;⋮ ⋮⋮⋮ ⋮;00_l_10_d_10_r_10_n-100_n···𝐯_1 ] ], where the number of coefficients between the two vertical dashed lies in (<ref>) is equal to n. Let C_3 be the linear block code generated by G_3.   D] Shortening of the code C_3 at coordinate locations in the range [(2j-1)n+1-r_1,2jn ], for j=1,2, …, k-1We first choose j=1 and shorten C_3 in the range [n+1-r_1, 2n]. Let 𝒲_2 be the vector space spanned by the rows 𝐛_0 and 𝐛_2n, 𝐛_2n+1, …, 𝐛_2n(k-1),of G_3, i.e., the first row of G_3 and all the rows after 𝐛_2n in (<ref>).A generator matrix of 𝒲_2 is given by [ 𝐛_0;𝐛_2n;𝐛_2n+1; ⋮; 𝐛_2n(k-1) ]= [ [10_l_1𝐰_10_r_10_n-10[0_l_2𝐰_20_r_2]···𝐯_k;00_l_10_d_10_r_10_n-11 [ 0_l_1𝐰_10_r_1]···𝐯_k-1;00_l_10_d_10_r_10_n-10 [ 1 0_l_1𝐰_10_r_1-1]····;⋮ ⋮⋮⋮ ⋮;00_l_10_d_10_r_10_n-100_n···𝐯_1 ] ], where the range [n+1-r_1, 2n] is indicated by the two vertical dashed lines. It can be seen that every vector in 𝒲_2 has zeros in this range. We next prove that the set of codewords in C_3 that has zeros in this range is exactly the vector space 𝒲_2.Suppose [ 0_l_1𝐰_1] = [w_1 w_2… w_d], where d=l_1+d_1. 
Using this the matrix formed by the rows 𝐛_0, 𝐛_1, …, 𝐛_2n of G_3, i.e., the initial 2n+1 rows of G_3 is as follows,[𝐛_0;𝐛_1;⋮;𝐛_d;𝐛_d+1;⋮; 𝐛_2n-1; 𝐛_2n ]= [ [ 1 w_1 · w_d 0 · · · · · · 0 0 [0_l_2𝐰_2 0_r_2 ] · 𝐯_k; 0 1 · · w_d 0 · · · · · 0 0·· · ·; ⋮ ⋱ ⋮; 0 · · 1 w_1 · · w_d 0 · · 0 0·· · ·; 0 · · · 1 w_1 · · w_d 0 · 0 0·· · ·; ⋮ ⋱ ⋮; 0 · · · · · · · · · · 1 ··· · ·; 0 · · · · · · · · · · 0 1 [0_l_1𝐰_1 0_r_1 ] · 𝐯_k-1 ] ].where the range [n+1-r_1, 2n] is indicated by the two vertical dashed lines.Since w_d is not equal to zero (see (<ref>)),it can be seen that a non-zero codeword in C_3 that is a linear combination of the rows 𝐛_1, 𝐛_2, …, 𝐛_d,will have at least one non-zero entry in the range [d+2, 2d+2 ] ⊆[n+1-r_1, 2n]. Further, a non-zero codeword in C_3 that is a linear combinationof the rows 𝐛_d+1, 𝐛_d+2, …, 𝐛_2n-1,will have at least one non-zero entry in the range [n+1-r_1, 2n ].Therefore the set of codewords in C_3 that have zeros in the range [n+1-r_1, 2n] is exactly the vector space 𝒲_2. Using steps similar to step A], it can be shown that shortening of C_3 in the range [n+1-r_1, 2n] will eliminate the rows 𝐛_1, 𝐛_2, …, 𝐛_2n-1 of G_3, i.e.,when j=1, the rows of G_3 between 𝐛_0 to 𝐛_2n were eliminated.We next prove that shortening of C_3 in the range [(2j-1)n+1-r_1, 2jn],the rows of G_3 between 𝐛_2n(j-1) to 𝐛_2jn will get eliminated, for j = 1,2,…, k-1.The generator matrix G_3 with a focus on these rows is given by[ 𝐛_0; ⋮;𝐛_2n; ⋮; 𝐛_2n(j-2); ⋮; 𝐛_2n(j-1); 𝐛_2n(j-1)+1; ⋮; 𝐛_2nj; ⋮; 𝐛_2n(k-1) ]= [ [1·[0_l_j𝐰_j 0_r_j ]0_n-10 [0_l_j+1𝐰_j+10_r_j+1 ]··𝐯_k;⋮ ⋮⋮⋮;0· [0_l_j-1𝐰_j-10_r_j-1 ]0_n-10[0_l_j𝐰_j 0_r_j ]··𝐯_k-1;⋮ ⋮⋮⋮;0· [0_l_2𝐰_20_r_2 ]0_n-10[0_l_3𝐰_3 0_r_3 ]···;⋮ ⋮⋮⋮;0· [0_l_1𝐰_10_r_1 ]0_n-10[0_l_2𝐰_2 0_r_2 ]···;0· [1 0_l_1𝐰_10_r_1-1 ]0_n-10[0 0_l_2𝐰_2 0_r_2-1 ]···;⋮ ⋮⋮⋮;0· ··0_n-11[0_l_1𝐰_1 0_r_1 ]··𝐯_k-j;⋮ ⋮⋮⋮;0· ·······𝐯_1 ] ].Observe that the structure of the rows 𝐛_2n(j-1), 𝐛_2n(j-1)+1, …, 𝐛_2nj of the matrix in (<ref>) is the same as that of the rows 𝐛_0, 𝐛_1, …, 𝐛_2n the matrix G_3 in (<ref>).Further, since r_1 ≤ r_2 ≤…≤ r_k, it can be seen that the vector space spanned by therows 𝐛_0, 𝐛_2n, …, 𝐛_2n(j-1)and the all the rows after row 𝐛_2n(j-1) have zeros in the range [(2j-1)n+1-r_1,2jn ]. Using similar arguments as that of the case when j=1 it can be proved that shortening in this range will eliminate the rows of G_3 between 𝐛_2n(j-1) to 𝐛_2jn.A generator matrix G_4 of the code obtained after the above mentioned set of shortening operations is given by G_4= [ [ 𝐛_0;𝐛_2n; ⋮; 𝐛_2n(k-1); ]] = [ [ 1 0_l_1 𝐰_1 0 [0_l_2𝐰_20_r_2-r_1] · · · 𝐯_k; 0 0_l_1 0_d_1 1 [ 0_l_1𝐰_1] · · · 𝐯_k-1; ⋮ ⋮ ⋮ ⋮; 0 · · · · · · · 𝐯_1 ] ]. Let C_4 be the code corresponding to the generator matrix G_4 in (<ref>).  E] Puncturing of C_4 at the coordinate locations other than the last n locationsThe puncturing operation in this range will delete the columns of G_4 that are on the left hand side of the vertical dashed line in (<ref>). A generator matrix G_5 of the code obtained by this puncturing operation is given by, G_3= [ [ 𝐯_k; 𝐯_k-1; ⋮; 𝐯_1 ] ].Observe that, G_5 is a generator matrix of the required linear block code C of the theorem and this completes the proof.§ CONCLUSIONIn this paper, we proved that any linear block code can be obtained by a sequence of puncturing and/or shortening of some cyclic code. 
While the result is in itself interesting, due to the recent result by Nelson and Van Zwam <cit.>, our result may have applications in studying the long-standing open problem of deciding whether the family of cyclic codes is asymptotically good or not. The result of Nelson and Van Zwam says that, given a family of asymptotically good codes, any linear block code can be obtained by a sequence of puncturing and/or shortening of some code in this family. Our result essentially proves that the family of cyclic codes satisfies this condition.

§ ACKNOWLEDGMENTS

The first author is supported by ANR-11-LABEX-0040-CIMI within the program ANR-11-IDEX-0002-02 of the Centre International de Mathématiques et Informatique de Toulouse, France. The first author would also like to acknowledge the support of the Bharti Centre for Communication at IIT Bombay, India.

10

Nelson_2015 P. Nelson and S. H. M. van Zwam, On the existence of asymptotically good linear codes in minor-closed classes, in IEEE Transactions on Information Theory, 61 (2015), 1153–1158.

Macwilliams_Sloane_1977 F. MacWilliams and N. Sloane, The Theory of Error Correcting Codes, Amsterdam, Netherlands: North-Holland Publishing Company, (1977).

Justesen_1972 J. Justesen, Class of constructive asymptotically good algebraic codes, in IEEE Transactions on Information Theory, 18 (1972), 652–656.

Justesen_1973 J. Justesen, New convolutional code constructions and a class of asymptotically good time-varying codes, in IEEE Transactions on Information Theory, 19 (1973), 220–225.

Lin_67 S. Lin and E. J. Weldon Jr., Long BCH codes are bad, in Information and Control, 11 (1967), 445–451.

Berlekamp_74 E. Berlekamp and J. Justesen, Some long cyclic linear binary codes are not so bad, in IEEE Transactions on Information Theory, 20 (1974), 351–356.

Roth_1992 N. Alon, J. Bruck, J. Naor, M. Naor, and R. M. Roth, Construction of asymptotically good low-rate error-correcting codes through pseudo-random graphs, in IEEE Transactions on Information Theory, 38 (1992), 509–516.

Schulman_2002 L. J. Schulman and D. Zuckerman, Asymptotically good codes correcting insertions, deletions, and transpositions, in IEEE Transactions on Information Theory, 45 (1999), 2552–2557.

Ling_2003 S. Ling and P. Solé, Good self-dual quasi-cyclic codes exist, in IEEE Transactions on Information Theory, 49 (2003), 1052–1053.

Assmus1965cyclic E. Assmus Jr, H. Mattson Jr, and R. Turyn, Cyclic codes, in Air Force Cambridge Research Labs, Bedford, Massachusetts, Scientific report, CRL-65-332 (1965).

P_Charpin_open P. Charpin, Open problems on cyclic codes, in Handbook of Coding Theory, 1 (1998), 969–1063.

Berman_1967 S. D. Berman, Semisimple cyclic and abelian codes II, in Cybernetics, 3 (1967), 17–23.

Massey_repeated_cyclic_1991 G. Castagnoli, J. L. Massey, P. A. Schoeller, and N. von Seemann, On repeated-root cyclic codes, in IEEE Transactions on Information Theory, 37 (1991), 337–342.

Perez_2006 C. Martínez-Pérez and W. Willems, Is the class of cyclic codes asymptotically good? in IEEE Transactions on Information Theory, 52 (2006), 696–700.

N_Kashyap_2008 N. Kashyap, A decomposition theory for binary linear codes, in IEEE Transactions on Information Theory, 54 (2008), 3035–3058.

Huffman_Pless_ECC W. Huffman and V. Pless, Fundamentals of Error-Correcting Codes, Cambridge, United Kingdom: Cambridge University Press, (2003).

Lidl86 R. Lidl and H. Niederreiter, Introduction to Finite Fields and Their Applications, Cambridge, United Kingdom: Cambridge University Press, (1986).

LinCostello2004 S. Lin and D.
Costello, Error Control Coding, 2nd ed., Englewood Cliffs, New Jersey, USA: Prentice-Hall, (2004).

Received for publication xxx.

E-mail address: [email protected]
E-mail address: [email protected]
http://arxiv.org/abs/1705.09859v1
{ "authors": [ "Arti Yardi", "Ruud Pellikaan" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20170527200241", "title": "On shortened and punctured cyclic codes" }
A polarity theory for sets of desirable gambles

Alessio Benavoli [email protected]
Alessandro Facchini [email protected]
Marco Zaffalon [email protected]
Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA), Lugano (Switzerland)

José Vicente-Pérez [email protected]
Departamento de Fundamentos del Análisis Económico, Universidad de Alicante (Spain)

December 30, 2023
================================================================================================================================================================================================================================================================================================================================================

Coherent sets of almost desirable gambles and credal sets are known to be equivalent models. That is, there exists a bijection between the two collections of sets preserving the usual operations, e.g. conditioning. Such a correspondence is based on the polarity theory for closed convex cones. Learning from this simple observation, in this paper we introduce a new (lexicographic) polarity theory for general convex cones and then we apply it in order to establish an analogous correspondence between coherent sets of desirable gambles and convex sets of lexicographic probabilities.

Desirability; Credal sets; Lexicographic probabilities; Separation theorem; Polarity.

§ INTRODUCTION

De Finetti established a foundation of probability theory based on the notion of "coherence" (self-consistency). The idea was that a subject is considered rational if she chooses her odds so that there is no bet that leads her to a sure loss (no Dutch books are possible). In this way, since numerically odds are the inverse of probabilities, de Finetti's approach provides a justification of Kolmogorov's axioms of probability as a rationality criterion on a gambling system. Later, building on de Finetti's betting setup, <cit.> and then <cit.> have shown that it is possible to justify probability in a way that is even simpler, more general and elegant. The basic idea is that an agent's knowledge about the outcome of an experiment to be performed (e.g. tossing a coin) is provided by her set of desirable gambles, that is, the set of gambles she is ready to accept. A gamble is modelled as a real-valued function g on the set Ω of outcomes of the experiment. Hence, by accepting a gamble g, an agent commits herself to receive g(ω) utiles in case the experiment is performed and the outcome of the experiment eventually happens to be the event ω∈Ω.

Among all the sets of desirable gambles, we can single out those satisfying certain properties, called coherent sets of desirable gambles, as they represent rational choices. Mathematically, those properties boil down to asking for a coherent set of desirable gambles to be a convex cone without the origin that contains all positive gambles, and thus avoids the negative ones (avoids partial loss).

In spite of its simplicity, the theory of desirable gambles encompasses not only the Bayesian theory of probability but also other important mathematical models like upper and lower previsions or (credal) sets of probabilities. An important variant of the traditional theory of probability is the probabilistic model of lexicographic probabilities <cit.>, that is, a sequence of standard probability measures. Developed to deal with the problem of conditioning on events of measure 0, it shares several features not only with models such as conditional probabilities or non-standard probabilities, but also with the theory of desirable gambles <cit.>.
In particular, <cit.> notices that (conditional) sets of desirable gambles expressed via preference relations can be represented by sets of (conditional) lexicographic probabilities. This fact leads us to wonder whether, analogously to the case of sets of almost desirable gambles and sets of probabilities, a stronger, more fundamental correspondence exists between sets of desirable gambles and sets of lexicographic probabilities. The goal of the present paper is to show that this is the case. That is, we verify that (conditional) sets of lexicographic probabilities and (conditional) sets of desirable gambles are isomorphic structures. In doing so, we provide a duality transformation (via orthogonal matrices) that allows us to go from a coherent set of desirable gambles to an equivalent set of lexicographic probabilities and vice versa. This transformation is an important contribution to uncertainty modelling because having access to dual models of uncertainty enables greater freedom of expression. In particular, we believe that the possibility of transferring through duality constructions from one theory to the other can be used to better understand issues related to lexicographic probabilities, such as defining independence.

§ PRELIMINARIES

We start by introducing the necessary notation and basic definitions to be used later. Assume that the set of outcomes of an experiment is finite, say Ω={ω_1,…,ω_n}, and that there is an unknown true value in Ω. A gamble g on Ω is a mapping g:Ω→ℝ, and so g(ω) represents the reward the gambler would obtain if ω is the true unknown value. As the cardinality of Ω is n (a natural number), every gamble g on Ω can be thought of as a point in the Euclidean space ℝ^n, and hence we write g=(g_1,…,g_n) with g_i∈ℝ for every i∈ N:={1,…,n}. In line with the tradition within the imprecise probability community, the set of all gambles defined on Ω is denoted by ℒ(Ω), although at times we simply write ℝ^n. The elements of ℝ^n will be considered column vectors and the symbol ^⊤ will mean transpose. We denote by 0_n (-1_n, respectively) the vector whose components are all equal to 0 (-1, respectively). The vectors e^1,…,e^n stand for the canonical basis of ℝ^n, that is, e^i is the vector of zeros with a one in the i-th position, for all i∈ N. Given g,f∈ℝ^n, the standard inner product of g and f is ⟨ g,f⟩ := g^⊤f and the Euclidean norm of g is ‖g‖:=√(⟨ g,g⟩). For any subset C ⊂ℝ^n, we denote by posi(C) the set of all positive linear combinations of gambles in C, that is, posi(C):= {∑_j=1^mλ_j g^j : g^j ∈ C, λ_j >0, m ∈ℕ}. We say that g is less than or equal to f (in short, g ≤ f) whenever g_i ≤ f_i for all i ∈ N, and we write g < f whenever g ≤ f and g≠ f. The set of non-negative gambles is ℝ^n_+ := {g∈ℝ^n : g ≥ 0_n }. Furthermore, g is said to be lexicographically less than f (in short, g <_L f) if g≠ f and g_k < f_k for k:=min{ i∈ N : g_i≠ f_i}. We also write g≤_L f if either g<_Lf or g=f. The following properties of a subset 𝒦⊂ℝ^n will be needed below.

A1. If g∈𝒦 and f∈𝒦, then g+f ∈𝒦 (addition).
A2. If g∈𝒦 and λ>0, then λ g∈𝒦 (positive homogeneity).
A3. If g > 0_n, then g∈𝒦 (accepting partial gain).
A4. 0_n∉𝒦 (avoiding status quo).
A5. If g < 0_n, then g∉𝒦 (avoiding partial loss).
A6. -1_n ∉𝒦 (avoiding sure loss).
A7. If g +f ∈𝒦 for all f>0_n, then g∈𝒦 (closure).
A8. 0_n∈𝒦 (accepting status quo).

A subset 𝒦⊂ℝ^n is said to be a coherent set of ∙ desirable gambles if it satisfies properties A1, A2, A3, A4; ∙ almost desirable gambles if it satisfies properties A1, A2, A3, A6, A7.
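As a computational aside, the lexicographic order and the behavioural properties above are straightforward to check for finitely generated sets. The sketch below (Python with numpy and scipy, all names ours, illustrative only) implements the comparison g <_L f and a standard linear-programming test, closely related to A6, for whether the cone posi of a finite set of accepted gambles incurs a sure loss.

```python
import numpy as np
from scipy.optimize import linprog

def lex_less(g, f):
    """g <_L f : at the first index where g and f differ, g is smaller."""
    g, f = np.asarray(g, float), np.asarray(f, float)
    idx = np.flatnonzero(g != f)
    return idx.size > 0 and g[idx[0]] < f[idx[0]]

def avoids_sure_loss(G):
    """LP check for the cone posi of the rows of G (shape (m, n)).

    Solves  min t  s.t.  G^T lam <= t 1_n,  lam >= 0,  sum(lam) = 1.
    A minimiser with t < 0 exhibits a positive combination of accepted
    gambles that loses in every outcome, i.e. a sure loss.
    """
    m, n = G.shape
    c = np.zeros(m + 1); c[-1] = 1.0                  # variables (lam, t)
    A_ub = np.hstack([G.T, -np.ones((n, 1))])         # G^T lam - t 1_n <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    if not res.success:
        raise RuntimeError("LP solver failed")
    return res.x[-1] >= -1e-9
```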
It easily follows from the definition that a coherent set of desirable gambles also satisfies properties A5 and A6, and a coherent set of almost desirable gambles also satisfies property A8. By definition, one has that the elements of 𝔻_n, the family of all coherent sets of desirable gambles on Ω, are convex cones in ℝ^n omitting their apex (the origin), whereas the elements of 𝔸_n, the family of all coherent sets of almost desirable gambles on Ω, are closed convex cones (containing the origin) in ℝ^n. However, not every convex cone omitting its apex (closed convex cone, respectively) belongs to 𝔻_n (𝔸_n, respectively).

A crucial tool for duality within the framework of Convex Analysis is the polarity operator. Given a convex cone K⊂ℝ^n, the (positive) polar of K is defined to be K^∘:={v∈ℝ^n : ⟨v,g⟩≥ 0 for all g∈ K}. Note that K^∘ is a closed convex cone (containing the origin). Furthermore, one has K^∘∘ = cl K <cit.>, and for closed convex cones K_1,K_2⊂ℝ^n, one has K_1⊂ K_2 if and only if K_2^∘⊂ K_1^∘.

Let m∈ℕ with m≤ n. The symbol 𝕄_m,n denotes the space of real matrices with m rows and n columns, whereas 𝕆_m,n denotes the subset of matrices in 𝕄_m,n with orthonormal rows, that is, those matrices A satisfying A A^⊤ = I (where I is the identity matrix of appropriate order). For A∈𝕄_m,n we denote by a_ij the element of A in row i and column j; the i-th row of A is denoted by a_i·, whereas its j-th column is denoted by a_·j. Given A∈𝕄_n,n, we write A ≥_L (>_L) 0_n <cit.> if each column of A satisfies a_·j ≥_L (>_L) 0_n for all j∈ N.

A probability mass function over Ω is any vector belonging to the set ℙ_n := { p∈ℝ^n : 0≤ p_i ≤ 1, ∑_i∈ N p_i=1}. Any closed convex subset of ℙ_n is called a credal set. We shall denote by ℂ_n the family of all credal sets within ℙ_n. A lexicographic probability over Ω is a sequence {p^j}_j=1^m with p^j ∈ℙ_n. We usually identify lexicographic probabilities over Ω with stochastic matrices, that is, 𝕊_m,n := { P ∈𝕄_m,n : p_i·∈ℙ_n for all i=1,…,m}. We shall denote by 𝕋_m,n the subset of 𝕊_m,n containing all the full-rank stochastic matrices.

§ ALMOST DESIRABILITY AND PROBABILITY

It is well known that there is a one-to-one correspondence between coherent sets of almost desirable gambles and credal sets, say 𝐂:𝔸_n →ℂ_n. Moreover, it is often claimed that this correspondence actually shows that the theory of almost desirable gambles and the theory of credal sets are equivalent. In this section, we first recall the bijection 𝐂, which is based on the polarity theory for closed convex cones <cit.>. Second, by using the point of view of model theory <cit.>, we explain how one has to understand the claim that the theory of almost desirable gambles and the theory of credal sets are equivalent. Finally, we prove the claim.

§.§ Polarity for almost desirability

The underlying tool for getting the aforementioned bijection is the classical separation theorem for closed convex sets: if 𝒦⊂ℝ^n is a nonempty closed convex cone, then for every ḡ∉𝒦 there exists a non-null v∈ℝ^n such that ⟨v,g⟩≥ 0 > ⟨v,ḡ⟩ for all g∈𝒦. Thus, every closed convex cone 𝒦⊂ℝ^n can be written as 𝒦 = {g∈ℝ^n : ⟨v^t,g⟩≥ 0, t∈ T} for certain v^t∈ℝ^n and T an arbitrary index set. In such a case, a well-known result in Convex Analysis <cit.> states that 𝒦^∘ coincides with the closure of the convex conic hull of {v^t : t∈ T}.
In particular, if 𝒦 = {g∈ℝ^n : ⟨v,g⟩≥ 0} with v∈ℝ^n, then 𝒦^∘ = ℝ_+v = {λv : λ≥ 0}. Concerning the geometry of coherent sets of almost desirable gambles, any set 𝒦∈𝔸_n is characterised as a closed convex cone containing the set ℝ^n_+ (or equivalently, containing all indicator gambles). Thus, as a particular case, since any 𝒦∈𝔸_n is a closed convex cone containing {e^1,…,e^n}, the following proposition holds.

Let 𝒦∈𝔸_n and ḡ∉𝒦. Then, there exists v∈ℝ^n with v > 0_n and ‖v‖=1 such that ⟨v,g⟩≥ 0 > ⟨v,ḡ⟩ for all g∈𝒦.

For every 𝒦∈𝔸_n, there exist an index set T and vectors v^t > 0_n with ‖v^t‖=1 for all t∈ T such that 𝒦 = {g∈ℝ^n : ⟨v^t, g⟩≥ 0, t∈ T}.

Recall that a set 𝒦∈𝔸_n is said to be maximal if there is no other element 𝒦' ∈𝔸_n such that 𝒦⊊𝒦'. Thus, we have that the maximal elements in 𝔸_n are the closed halfspaces containing the origin in the boundary and determined by vectors with non-negative components and norm 1. Hence, if we denote by (𝔸_n) the set of all maximal elements in 𝔸_n, given 𝒦∈𝔸_n one has 𝒦∈(𝔸_n) ⟺ ∃ v > 0_n, ‖v‖=1 (unique) such that 𝒦 = {g∈ℝ^n : ⟨v,g⟩≥ 0}. This means that there is a one-to-one correspondence between maximal coherent sets of almost desirable gambles and non-negative vectors with norm 1. Since a bijection between the set of non-negative vectors with norm 1 and ℙ_n exists, there is a one-to-one correspondence between maximal coherent sets of almost desirable gambles and probability mass functions over Ω. Furthermore, as a consequence of Proposition <ref>, for any 𝒦∈𝔸_n one can write 𝒦 = ⋂{𝒦' ∈ (𝔸_n) : 𝒦⊂𝒦'}. The above equality and the one in (<ref>) imply a reformulation of Proposition <ref>: if 𝒦∈𝔸_n and g ∉𝒦, then there exists 𝒦' ∈ (𝔸_n) such that 𝒦⊂𝒦' and g ∉𝒦'.

Next we define the function 𝐂:𝔸_n →ℂ_n which maps coherent sets of almost desirable gambles into credal sets and is the key for the equivalence of both theories. For a coherent set of almost desirable gambles 𝒦∈𝔸_n, we associate the credal set 𝐂(𝒦) := 𝒦^∘∩ℙ_n. Observe that if 𝒦∈(𝔸_n) is determined by v as in (<ref>), then 𝐂(𝒦) = {(∑_i∈ N v_i)^-1 v}.

The mapping 𝐂:𝔸_n →ℂ_n defined in (<ref>) is a bijection whose inverse is given by 𝐂^-1(𝒫) := 𝒫^∘ for every credal set 𝒫∈ℂ_n.

First, it is easy to see that, for any 𝒦∈𝔸_n, the set 𝐂(𝒦) is a credal set. Since ℝ^n_+ ⊂𝒦, one has 𝒦^∘⊂ (ℝ^n_+)^∘ = ℝ^n_+. Moreover, 𝒦^∘ does not reduce to 0_n (this happens only when 𝒦=ℝ^n, which does not belong to 𝔸_n) and so 𝒦^∘ contains non-null non-negative vectors, and in particular, at least one vector with the sum of its components equal to 1 (up to normalisation). Thus, the set 𝒦^∘∩ℙ_n⊂ℙ_n is nonempty. Moreover, since both 𝒦^∘ and ℙ_n are closed convex sets and closedness and convexity are preserved under intersection, 𝐂(𝒦) ∈ℂ_n. We have shown that the mapping 𝐂 is well-defined, associating a credal set to each coherent set of almost desirable gambles. Next, we verify that 𝐂 is a bijection, that is, for any credal set 𝒫∈ℂ_n, there exists a unique 𝒦∈𝔸_n such that 𝐂(𝒦) = 𝒫. Given a credal set 𝒫∈ℂ_n, it follows that ℝ_+𝒫 is a closed convex cone contained in ℝ^n_+. Thus, by taking polars one has ℝ^n_+ = (ℝ^n_+)^∘⊂ (ℝ_+𝒫)^∘ = 𝒫^∘ and so 𝐂^-1(𝒫) ∈𝔸_n, as 𝒫^∘ is a closed convex cone containing ℝ^n_+. Indeed, 𝐂^-1(𝒫) ∈𝔸_n is the unique coherent set of almost desirable gambles satisfying 𝐂(𝐂^-1(𝒫)) = 𝒫. Furthermore, for any 𝒦∈𝔸_n one has 𝐂^-1(𝐂(𝒦)) = 𝒦.
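As a toy numerical illustration of this correspondence (ours; the numbers are arbitrary), the following Python snippet builds a maximal 𝒦∈𝔸_2 from a non-negative direction v and reads off its credal set, the singleton (∑_i v_i)^-1 v:

```python
import numpy as np

v = np.array([3.0, 1.0])
v = v / np.linalg.norm(v)        # non-negative direction with norm 1
p = v / v.sum()                  # C(K) for the maximal K = {g : <v, g> >= 0}

def in_K(g):
    """Membership test for the closed halfspace K determined by v."""
    return v @ g >= 0

print(p)                              # [0.75 0.25], a probability mass function
print(in_K(np.array([1.0, -2.0])))    # True:  3*1 - 1*2 >= 0
print(in_K(np.array([-1.0, 1.0])))    # False: -3 + 1 < 0
```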
§.§ Theories as structures, and equivalence as isomorphism

The fact that 𝐂 establishes a bijection between coherent sets of almost desirable gambles and credal sets is clearly not enough for claiming that the two theories are equivalent. We also need to verify that such a mapping preserves all considered operations (like conditioning and marginalisation) and relations (like independence). In other words, we have to verify that it is an isomorphism, once the two theories, from the point of view of model theory, are formulated as structures on the same signature. To illustrate this point, let us assume that we are only interested in conditioning. From a model-theoretic point of view, this means that we are considering a signature consisting of only a unary functional symbol. The next steps are then the following: (i) we have to state how the considered operation is defined over coherent sets of almost desirable gambles and over credal sets (in model-theoretic terms, we have to specify how the elements of the signature, in this case its unique element, must be interpreted in both cases), and then (ii) we have to show that the map 𝐂 preserves the considered operation (in model-theoretic terms, we have to verify that the map is a homomorphism). We therefore recall below the definition of this operation within the theory of almost desirable gambles as given in <cit.>, a slightly different but completely equivalent version of the one in <cit.>. To this aim, given a subset Π⊊Ω of cardinality m<n, we shall denote by Π^c the set of outcomes which are not in Π, that is, Π^c := Ω∖Π. For a gamble g∈ℝ^m we define the gamble (g⌈_Π^c) ∈ℝ^n as (g⌈_Π^c)(ω) := g(ω) if ω∈Π and (g⌈_Π^c)(ω) := 0 if ω∈Π^c.

Let 𝒦⊂ℝ^n. The conditioned set of 𝒦 with respect to Π is the set (𝒦⌋_Π) := {g ∈ℝ^m : (g⌈_Π^c) ∈𝒦}.

Notice that conditioning does not necessarily preserve coherent sets of almost desirable gambles (see <cit.> for a thorough discussion on this point). As an example, consider the sets Ω={1,2}, Π={2} and 𝒦={g ∈ℝ^2 : g_1 ≥ 0}. Whereas 𝒦∈𝔸_2, it holds that (𝒦⌋_Π) = ℝ∉𝔸_1.

For a probability mass function p over Ω, let p(·|Π) denote the usual conditioning of p with respect to Π⊂Ω. Hence, if 𝒫⊂ℙ_n is a credal set over Ω, the conditioning of 𝒫 on Π is the projection on Π of all p(·|Π) ∈ℙ_n, with p ∈𝒫; that is, (𝒫⌋_Π):={p ∈ℙ_m : ∃ q ∈𝒫 such that (p⌈_Π^c) = q(·|Π)}. Notice that this definition is completely equivalent to the usual definition of conditioning for credal sets as given in <cit.>.

We can now formulate the missing property for the mapping 𝐂 to be called an isomorphism, and thus to be claimed to show the equivalence between the two theories (when the considered operation is conditioning only).

Let 𝒦∈𝔸_n and Π⊂Ω. The following statements hold: (i) (𝒦⌋_Π)∈𝔸_m if and only if (𝐂(𝒦)⌋_Π) ∈ℂ_m. (ii) If (𝒦⌋_Π) ∈𝔸_m, then 𝐂(𝒦⌋_Π) = (𝐂(𝒦)⌋_Π).

It is enough to prove both claims for 𝒦∈(𝔸_n). Let {p}= 𝐂(𝒦) ∈ℂ_n. With i_Π we denote the indicator gamble on Π. Since ⟨p, i_Π f⟩ = ⟨i_Π p, f⟩, by Theorem <ref> the following holds: (𝒦⌋_Π) = {g ∈ℝ^m : ⟨i_Π p, f⟩≥ 0, for f ∈ℝ^n such that i_Π f = g⌈_Π^c}. Hence, for both points we conclude by applying Theorem <ref> to Equation <ref>.

§ DESIRABILITY AND LEXICOGRAPHIC PROBABILITIES

As discussed by <cit.>, coherent sets of desirable gambles and lexicographic probabilities seem to share several properties.
We wonder whether these two models are somehow equivalent, that is, if there is a one-to-one correspondence 𝐆 : 𝔻_n→𝔾_n between coherent sets of desirable gambles and certain sets (to be defined later) of lexicographic probabilities, similar to the one existing for credal sets and coherent sets of almost desirable gambles described in Section <ref>.

§.§ Polarity for desirability

As done in Section <ref>, the following (lexicographic) separation theorem for convex sets will now be the key result for getting the aforementioned equivalence.

Let G⊂ℝ^n be a nonempty convex set and ḡ∉ G. Then, there exist A∈𝕄_n,n and b ∈ℝ^n such that Ag >_L b ≥_L Aḡ for all g ∈ G.

The matrix A in the above theorem can be assumed to be full-rank, or even orthonormal. Consequently, every convex set G ⊂ℝ^n can be written as G = {g∈ℝ^n : A^t g >_L b^t, t∈ T} for certain A^t∈𝕄_n,n, b^t∈ℝ^n and T an arbitrary index set. In particular, if 𝒦⊂ℝ^n is a convex cone omitting its apex, one can take b=0_n in Theorem <ref> and write 𝒦 = {g∈ℝ^n : A^t g >_L 0_n, t∈ T} for certain A^t∈𝕄_n,n (even in 𝕆_n,n) and T an arbitrary index set. At this point, we recall that in ℝ^n there exist maximal convex cones excluding their vertices, which are called semispaces (at the origin) <cit.>. Thus, a convex set 𝒦⊂ℝ^n is a semispace if and only if 0_n ∉𝒦 and, for all g∈ℝ^n∖{0_n}, exactly one of g and -g belongs to 𝒦. Furthermore, according to <cit.>, 𝒦⊂ℝ^n is a semispace if and only if there exists A∈𝕆_n,n (unique, as follows from <cit.>) such that 𝒦 = {g∈ℝ^n : Ag >_L 0_n}. Thus, every convex cone omitting its apex can be written as an intersection of semispaces. Concerning the geometry of coherent sets of desirable gambles, any set 𝒦∈𝔻_n is characterised as a convex cone omitting its apex and containing the set Q:=ℝ^n_+∖{0_n}. Thus, as a consequence of the above statement, since any 𝒦∈𝔻_n is a convex cone containing {e^1,…,e^n}, the following proposition follows.

Let 𝒦∈𝔻_n and ḡ∉𝒦. Then, there exists A∈𝕆_n,n with A >_L 0_n such that Ag >_L 0_n ≥_L Aḡ for all g∈𝒦.

For every 𝒦∈𝔻_n, there exist an index set T and matrices A^t∈𝕆_n,n with A^t >_L 0_n for all t∈ T such that 𝒦 = {g∈ℝ^n : A^t g >_L 0_n, t∈ T}.

Next we characterise the matrices which are lexicographically greater than 0_n. We say that a matrix is unitary if it has ones in the main diagonal.

Given A∈𝕄_n,n, the following statements are equivalent: (i) A >_L 0_n. (ii) Ag >_L 0_n for all g > 0_n. (iii) A=LP for some unitary lower-triangular matrix L and some P∈𝕄_n,n such that p_·j > 0_n for all j∈ N.

(i) ⇔ (ii). If Ag >_L 0_n for all g > 0_n, then in particular we have a_·j = Ae^j >_L 0_n for all j∈ N since e^j > 0_n, and that is the definition of A >_L 0_n. Conversely, assume that A >_L 0_n and so Ae^j >_L 0_n for all j∈ N. Since any g=(g_1,…,g_n)>0_n can be written as g=∑_i∈ N g_i e^i with g_i≥ 0 for all i∈ N and there is at least one index j such that g_j is strictly positive, then Ag = ∑_i∈ N g_i Ae^i >_L 0_n. (i) ⇔ (iii). Observe that A >_L 0_n if and only if A ≥_L 0_n and a_·j≠ 0_n for each j∈ N. According to <cit.>, A ≥_L 0_n if and only if A=LP for some unitary lower-triangular matrix L∈𝕄_n,n and some P ∈𝕄_n,n such that p_ij≥ 0 for all i,j ∈ N. Since a_·j = L(p_·j) and L is a regular lower-triangular matrix, a_·j = 0_n if and only if p_·j = 0_n. Thus, the conclusion follows.
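The column condition A >_L 0_n in the lemma is easy to check mechanically. A minimal Python sketch (ours; the helper is the lexicographic comparison from the Preliminaries, repeated here for self-containment):

```python
import numpy as np

def lex_less(g, f):
    d = np.flatnonzero(g != f)
    return d.size > 0 and g[d[0]] < f[d[0]]

def lex_positive_cols(A):
    """A >_L 0_n: every column of A is lexicographically greater than 0_n."""
    z = np.zeros(A.shape[0])
    return all(lex_less(z, A[:, j]) for j in range(A.shape[1]))

# this permutation matrix is orthonormal and has lexicographically positive columns
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(lex_positive_cols(A))   # True
```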
We say that a coherent set of desirable gambles 𝒦∈𝔻_n is maximal if there is no other element 𝒦' ∈𝔻_n such that 𝒦⊊𝒦'. Thus, the maximal elements in 𝔻_n are the semispaces (at the origin) given by matrices A∈𝕆_n,n satisfying A >_L 0_n. Hence, if we denote by (𝔻_n) the set of all maximal elements in 𝔻_n, given 𝒦∈𝔻_n one has 𝒦∈(𝔻_n) ⟺ ∃ A∈𝕆_n,n, A >_L 0_n (unique) such that 𝒦 = {g∈ℝ^n : Ag >_L 0_n}. This means that there is a one-to-one correspondence between maximal coherent sets of desirable gambles and orthonormal matrices whose columns are lexicographically positive. Furthermore, as a consequence of Proposition <ref>, for any 𝒦∈𝔻_n one can write 𝒦 = ⋂{𝒦' ∈ (𝔻_n) : 𝒦⊂𝒦'}, thus recovering the characterisation given in <cit.>. The above equality and the one in (<ref>) imply a reformulation of Proposition <ref>: if 𝒦∈𝔻_n and g ∉𝒦, then there exists 𝒦' ∈ (𝔻_n) such that 𝒦⊂𝒦' and g ∉𝒦'.

The following notions will be useful in the sequel. We say that 𝒜⊂𝕄_n,n is L-convex if 𝒜 = {A∈𝕄_n,n : Ag^t >_L b^t, t∈ T} for certain vectors g^t,b^t∈ℝ^n for all t∈ T. In other words, 𝒜⊂𝕄_n,n is L-convex if and only if for every Ā∉𝒜 there exist g,b∈ℝ^n such that Ag >_L b ≥_L Āg for all A∈𝒜. Analogously, we say that 𝒜⊂𝕄_n,n is an L-convex cone (omitting its apex) if 𝒜 = {A∈𝕄_n,n : Ag^t >_L 0_n, t∈ T} for certain g^t∈ℝ^n for all t∈ T. For any 𝒜⊂𝕄_n,n, we define the set Lposi(𝒜) := {B∈𝕄_n,n : Bg >_L 0_n for any g∈ℝ^n satisfying Ag >_L 0_n for all A∈𝒜}. Thus, B∉Lposi(𝒜) if and only if there is g∈ℝ^n such that Ag >_L 0_n ≥_L Bg for all A∈𝒜.

Next we define a new polarity operator which is suitable for general convex cones in ℝ^n. For a set 𝒦⊂ℝ^n, we define 𝒦^⧫ := {A∈𝕄_n,n : Ag >_L 0_n for all g ∈𝒦}. Furthermore, for a set 𝒜⊂𝕄_n,n we also define 𝒜^◊ := {g∈ℝ^n : Ag >_L 0_n for all A∈𝒜}. The following facts can be derived from these definitions:
* 𝒜^◊ is a convex cone omitting its apex in ℝ^n. Moreover, 𝒜 = (𝒜^◊)^⧫ if and only if 𝒜 is an L-convex cone omitting its apex in 𝕄_n,n.
* 𝒦^⧫ is an L-convex cone omitting its apex in 𝕄_n,n. Moreover, 𝒦 = (𝒦^⧫)^◊ if and only if 𝒦 is a convex cone omitting its apex in ℝ^n. In particular, this equality holds whenever 𝒦∈𝔻_n.
* For any 𝒦,ℋ⊂ℝ^n, if 𝒦⊂ℋ then ℋ^⧫⊂𝒦^⧫. Analogously, for any 𝒜,ℬ⊂𝕄_n,n, if 𝒜⊂ℬ then ℬ^◊⊂𝒜^◊.
* 𝒦^⧫ = {A∈𝕄_n,n : 𝒦⊂ A^◊} and 𝒜^◊ = {g∈ℝ^n : 𝒜⊂ g^⧫}.

The following statements hold: (i) If 𝒜 = {A∈𝕄_n,n : Ag^t >_L 0_n, t∈ T}, then 𝒜^◊ = posi{g^t, t∈ T}. (ii) If 𝒦 = {g∈ℝ^n : A^t g >_L 0_n, t∈ T}, then 𝒦^⧫ = Lposi{A^t, t∈ T}.

(i) Clearly, g^t ∈𝒜^◊ for all t∈ T. Since 𝒜^◊ is a convex cone omitting its apex, posi{g^t, t∈ T}⊂𝒜^◊. To prove the converse statement, assume that there is ḡ∈𝒜^◊ such that ḡ∉posi{g^t, t∈ T}. By the separation theorem, there exists A∈𝕄_n,n such that Ag >_L 0_n ≥_L Aḡ for all g∈posi{g^t, t∈ T}. In particular, Ag^t >_L 0_n for all t∈ T, which implies that A∈𝒜. Thus, as ḡ∈𝒜^◊, one has Aḡ >_L 0_n, which entails a contradiction. The proof of (ii) follows the same reasoning as for (i).

As a consequence of the above result, if we consider the sets ℋ:={g∈ℝ^n : g > 0_n} and ℬ := {A∈𝕄_n,n : A >_L 0_n}, then one has ℋ^⧫ = ℬ and ℬ^◊ = ℋ. At this point, we establish an important correspondence between orthonormal matrices with lexicographically positive columns and equivalence classes of full-rank stochastic matrices. The next result guarantees the existence of a full-rank stochastic matrix determining the same semispace as a given orthonormal matrix A >_L 0_n, and its proof provides a method for obtaining such a matrix.

Let A∈𝕆_n,n be such that A >_L 0_n.
Then, there exists a full-rank stochastic matrix P∈𝕋_n,n such that P^◊ = A^◊.

By virtue of Lemma <ref>, one can write A=LQ with L a unitary lower-triangular matrix and Q such that q_·j > 0_n for all j∈ N. Thus, one has a_1· = q_1· and a_i· = ∑_j=1^i-1 l_ij q_j· + q_i· for i∈ N∖{1}. Since A is orthonormal, it follows that q_i· > 0_n for all i∈ N, that is, Q does not have null rows, and clearly Q is full-rank as A is. By normalising each row (dividing it by its sum) so that each row becomes a probability mass function, one gets a matrix P∈𝕋_n,n. Finally, we observe that A^◊ = Q^◊ = P^◊.

The following proposition studies the way of getting an orthonormal matrix lexicographically greater than 0_n from a full-rank stochastic one.

Let P∈𝕋_n,n be a full-rank stochastic matrix. Then, there exists A∈𝕆_n,n with A >_L 0_n such that A^◊ = P^◊.

We shall denote by GS(P) the orthogonal matrix obtained from the full-rank stochastic matrix P ∈𝕋_n,n by applying the Gram–Schmidt orthogonalisation procedure according to the row order. Let A∈𝕆_n,n be the orthonormal matrix obtained from GS(P) by normalising each row. Since P has neither null rows nor null columns, it follows that GS(P) >_L 0_n and so A >_L 0_n. Finally, the Gram–Schmidt procedure guarantees that A^◊ = P^◊.

The next example illustrates that the matrix whose existence is guaranteed in Proposition <ref> is not necessarily unique. Let us consider the maximal coherent set of desirable gambles 𝒦={g ∈ℝ^3 : Ag >_L 0_3}, where

A = [ 0, 1/√2, 1/√2 ;
      0, -1/√2, 1/√2 ;
      1, 0, 0 ].

Since A >_L 0_3, following Lemma <ref> A can be written as

A = [ 1, 0, 0 ;
      τ, 1, 0 ;
      l_31, l_32, 1 ]
    [ 0, 1/√2, 1/√2 ;
      0, (-1-τ)/√2, (1-τ)/√2 ;
      1, 0, 0 ]

for any τ≤ -1 and l_31, l_32∈ℝ. According to Proposition <ref>, by normalising each row of the second matrix in the right-hand side of the equality above, we get that every matrix

P(τ) = [ 0, 1/2, 1/2 ;
         0, (τ+1)/(2τ), (τ-1)/(2τ) ;
         1, 0, 0 ],

with τ≤ -1, is a full-rank stochastic matrix which determines 𝒦. Finally, it can be checked that GS(P(τ)) = A holds for any τ≤ -1 (after normalisation).

The above results suggest the definition of the ◊-equivalence class of a given matrix A∈𝕄_n,n as the set of matrices having the same polar as A, that is, [A]_◊ := {P∈𝕄_n,n : P^◊ = A^◊}. According to this definition, we have that there is a one-to-one correspondence between maximal coherent sets of desirable gambles and ◊-equivalence classes of full-rank stochastic matrices. We say that a nonempty subset of 𝕄_n,n is an L-credal set if it is the intersection with 𝕋_n,n of some L-convex cone in 𝕄_n,n. We shall denote by 𝔾_n the family of all L-credal sets. We are now in a position to define the function 𝐆 : 𝔻_n→𝔾_n which maps coherent sets of desirable gambles into L-credal sets and is the key for the equivalence of both theories. For a coherent set of desirable gambles 𝒦∈𝔻_n, we associate the L-credal set 𝐆(𝒦) := 𝒦^⧫∩𝕋_n,n. We aim at showing that 𝐆 is a bijection.

The mapping 𝐆:𝔻_n →𝔾_n defined in (<ref>) is a bijection whose inverse is given by 𝐆^-1(𝒫) := 𝒫^◊, for every 𝒫∈𝔾_n.

From the definition of the ⧫-polarity operator, 𝐆(𝒦) is an L-credal set for any 𝒦∈𝔻_n. As ℋ⊂𝒦, then 𝒦^⧫⊂ℋ^⧫ = ℬ (see Remark <ref>). One also has 𝒦^⧫ = {A∈𝕄_n,n : 𝒦⊂ A^◊}. Since 𝒦 is determined by orthonormal matrices, 𝒦^⧫ contains orthonormal matrices with lexicographically positive columns and, as a consequence of Proposition <ref>, 𝒦^⧫ also contains full-rank stochastic matrices, which shows that 𝐆(𝒦) is nonempty.
Now, if 𝒫∈𝔾_n, one has that 𝐆^-1(𝒫) = 𝒫^◊ is a convex cone omitting its apex. On the other hand, as 𝒫⊂𝕋_n,n⊂ℬ, then Q = ℬ^◊⊂𝒫^◊ and so 𝐆^-1(𝒫) ∈𝔻_n. To see that 𝐆 is one-to-one, we just need to show 𝐆(𝐆^-1(𝒫)) = 𝒫 for any 𝒫∈𝔾_n and also 𝐆^-1(𝐆(𝒦)) = 𝒦 for 𝒦∈𝔻_n. First, 𝐆(𝐆^-1(𝒫)) = 𝐆(𝒫^◊) = 𝒫^◊⧫∩𝕋_n,n = Lposi(𝒫) ∩𝕋_n,n = 𝒫. On the other hand, 𝐆^-1(𝐆(𝒦)) = 𝐆^-1(𝒦^⧫∩𝕋_n,n) = (𝒦^⧫∩𝕋_n,n)^◊ = 𝒦^⧫◊ = 𝒦, as 𝒦 is a convex cone omitting its apex.

§.§ Closing the circle, or preserving conditioning

As for almost desirability, one wants to verify that 𝐆 is not only a bijection but also an isomorphism. To make sense of this claim, we thus have first to specify which operations and relations we decide to consider (in model-theoretic terms, the signature), and how they are defined over sets of gambles and over sets of stochastic matrices (in model-theoretic terms, the interpretation). Finally, we have to verify that the map 𝐆 preserves the considered operations and relations. As before, here we are only interested in conditioning. Without loss of generality we assume that Π⊊Ω has cardinality m. In the case of stochastic matrices, conditioning has to be defined by slightly modifying the approach by <cit.>. This is because we want to be sure that the result of the operation is a square stochastic matrix. With this aim in mind, we first define the following reduction rule for matrices:

(R) Given A ∈𝕄_n,m, for every i ∈ N, discard the i-th row a_i· whenever it is a linear combination of a_1·, …, a_i-1· (and thus in particular when it is equal to 0_m).

Let P' ∈𝕄_n,m be the matrix obtained by projecting on Π the conditioning p(·|Π), or taking 0_m when it is undefined, for each row p of P ∈𝕋_n,n. Define P⌋_Π as the matrix obtained from P' by applying rule (R). By an immediate application of properties of minors and cofactors, we get that P⌋_Π∈𝕋_m,m. Moreover, (P⌋_Π)⌋_Δ=(P⌋_Δ) for Δ⊂Π. Hence, the following operation is always defined.

Let 𝒫⊂𝕋_n,n, with n>1. Its conditioning on Π is the set (𝒫⌋_Π):={(P⌋_Π) : P ∈𝒫}⊂𝕋_m,m.

From Definition <ref>, it is immediate to verify that (𝒦⌋_Π) ∈𝔻_m whenever 𝒦∈𝔻_n, and that 𝔻_n is closed under conditioning. Moreover, (𝒦⌋_Π) ∈(𝔻_m) whenever 𝒦∈(𝔻_n). To conclude, we verify that polarity preserves conditioning.

Let 𝒦∈𝔻_n; then (𝐆(𝒦)⌋_Π) = 𝐆(𝒦⌋_Π) ∈𝔾_m.

It is enough to prove the claim for maximal coherent sets of desirable gambles. Hence, let 𝒦∈(𝔻_n). We first define a conditioning operation on orthogonal matrices. Let A ∈𝕆_n,n. Its conditioning on Π is the matrix A⌋_Π obtained by the following procedure: (i) erase the k-th columns of A, for k ∈{m+1, …, n}; (ii) apply rule (R) to the matrix obtained after the previous point; (iii) assume the matrix obtained after the previous point is B. By linear algebra, B ∈𝕌_m,m. Hence, A⌋_Π := GS(B) ∈𝕆_m,m. Note that the operation also preserves the property of having lexicographically positive columns. Thus, let A∈𝕆_n,n with A >_L 0_n be such that 𝒦 = A^◊. Both (𝒦⌋_Π) and (A⌋_Π)^◊ belong to (𝔻_m). This means that, in order to show that (𝒦⌋_Π)=(A⌋_Π)^◊, it is enough to verify one of the two inclusions. So, let f ∈ (𝒦⌋_Π). By definition f⌈_Π^c∈𝒦, and thus A(f⌈_Π^c) >_L 0_n. But this means that Bf >_L 0_n, since f⌈_Π^c agrees on Π with f, and is 0 elsewhere. Hence GS(B)f >_L 0_n, meaning that f ∈ (A⌋_Π)^◊. Now, because of the properties of the procedures given by Propositions <ref> and <ref>, it holds that P ∈ [A]_◊ if and only if P⌋_Π∈ [A⌋_Π]_◊, for P ∈𝕋_n,n. Finally, we can apply Theorem <ref> and conclude that (𝐆(𝒦)⌋_Π) = 𝐆(𝒦⌋_Π).
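As a quick numerical check of the GS map used above (our own sketch), the following Python code orthonormalises the rows of the stochastic matrix P(-1) from the earlier example and recovers the orthonormal matrix A:

```python
import numpy as np

def gs_rows(P):
    """Row-wise Gram-Schmidt with row normalisation: the map P -> GS(P) above."""
    Q = []
    for row in np.asarray(P, dtype=float):
        for q in Q:
            row = row - (row @ q) * q    # remove components along earlier rows
        Q.append(row / np.linalg.norm(row))
    return np.array(Q)

# P(-1) from the example; Gram-Schmidt recovers the orthonormal matrix A
P = np.array([[0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
A = gs_rows(P)
print(np.round(A, 3))                    # rows (0, .707, .707), (0, -.707, .707), (1, 0, 0)
print(np.allclose(A @ A.T, np.eye(3)))   # True: orthonormal rows
```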
§ CONCLUSIONS

In this paper we have shown that (conditional) sets of lexicographic probabilities and (conditional) sets of desirable gambles are isomorphic structures. In doing so, we have provided a duality transformation (via orthogonal and stochastic matrices) that allows us to go from a coherent set of desirable gambles to an equivalent (convex) set of lexicographic probabilities and vice versa. As future work we plan to complete this analysis by including other operations, such as marginalisation (this should be straightforward), and structural judgements such as independence. It would also be of great interest to study the geometric properties of lexicographic convex sets of stochastic matrices, and what happens for gambles on infinite sample spaces.

The authors are grateful to the referees for their constructive comments and helpful suggestions which have contributed to the final preparation of the paper. J. Vicente-Pérez was partially supported by MINECO of Spain and ERDF of EU, Grants MTM2014-59179-C2-1-P and ECO2016-77200-P.
http://arxiv.org/abs/1705.09574v1
{ "authors": [ "Alessio Benavoli", "Alessandro Facchini", "Jose Vicente-Perez", "Marco Zaffalon" ], "categories": [ "math.PR" ], "primary_category": "math.PR", "published": "20170526131549", "title": "A polarity theory for sets of desirable gambles" }
Half-quadratic transportation problems
Mariano Rivera
December 30, 2023
================================================================================

We present a primal–dual memory-efficient algorithm for solving a relaxed version of the general transportation problem. Our approach approximates the original cost function with a differentiable one that is solved as a sequence of weighted quadratic transportation problems. The new formulation allows us to solve differentiable, non-convex transportation problems.

Keywords: General transportation problem, Half-quadratic potentials, Sequential quadratic programming.

§ INTRODUCTION

The general transportation problem (GTP) deals with the distribution of goods from m suppliers with production capacities p = {p_i}_i=1,…,m to n destinations with demands q = {q_j}_j=1,…,n. Without loss of generality, we assume balanced production and demand: ∑_i p_i = ∑_j q_j. A classical approach to this problem assumes that the cost of transport remains constant, independently of the quantity to be transported. In real problems, this is not the case: the cost may increase or decrease according to the volume of the transported goods. We can write the general transportation problem as follows:

min_x ∑_i,j f_ij(x_ij)
s.t. ∑_j x_ij = p_i,
     ∑_i x_ij = q_j,
     x_ij ≥ 0,

where x_ij denotes the quantity of goods to be transported from the i-th supplier to the j-th destination, and f: [m × n] ×ℝ^+→ℝ^+ is a continuous cost function that depends on the supplier i, the destination j and the volume x_ij. The first instance of a transportation problem was formulated by Hitchcock assuming that the cost functions are linear: f_ij(x_ij) = c_ij x_ij <cit.>. Another popular model is the quadratic one: f_ij(x_ij) = a_ij x_ij^2 + b_ij x_ij. According to Ref. <cit.>, the quadratic model is popular because it can approximate other cost functions. Despite such flexibility, the limitations of quadratic models have been well documented in the context of robust statistics and its applications to image processing and computer vision <cit.>. In our opinion, the main limitations of quadratic models are the following:

* The impossibility of bounding the cost for large volumes; i.e., one has lim_x→+∞ |f_ij(x_ij)| = ∞.
* The inability to promote sparse solutions, i.e., solutions that use a reduced number of routes.

In this work, we present an approximation scheme that allows us to define new cost functions that overcome the aforementioned limitations. We also present a primal–dual algorithm with limited memory requirements. The GTP is relevant in modern computer science applications such as computer vision <cit.>, machine learning <cit.> and data analysis <cit.>. The Earth Mover's Distance (EMD) is an interesting application of the transportation problem, where the optimum cost is used as a metric between histograms, the vectors p and q. In Ref. <cit.>, the EMD is used as a metric for image retrieval in the context of computer vision. Recently, the EMD was proposed as a measure of reconstruction error for non-negative matrix factorisation <cit.>. The Word Mover's Distance is the metric version for comparing documents based on the transportation problem <cit.>. Recent EMD applications include the quantification of biological differences in flow cytometry samples <cit.>.
In addition, there is current interest in the learning of metrics for particular problems <cit.>; since our proposal is parametrised, the parameters involved can be learned.

§ PRELIMINARIES

Before presenting our transportation formulation, we review an important result reported in the context of robust statistics and continuous optimisation applied to image processing <cit.>. The purpose of such work was to transform some non-linear cost functions into half-quadratic functions <cit.>. A half-quadratic function is quadratic in the original variable and convex in a new auxiliary variable, where the minimum over the auxiliary variable can be computed with a closed formula. The next proposition summarises the conditions imposed on the cost function f and the transformed half-quadratic function.

Let f: ℝ^+→ℝ^+ be a function that fulfils the following conditions:
* f(t) ≥ m with f(0) = m, for t ≥ 0 and m > -∞.
* f is continuously differentiable.
* f'(t) ≥ 0.
* lim_t→+∞ f'(t)/(2t) = 0.
* lim_t→0^+ f'(t)/(2t) = M, 0 < M < +∞.

Then,
* there exists a strictly convex and decreasing function ψ:(0,M]→[0,β), where β = lim_t→+∞ {f(t) - t^2 f'(t)/(2t)}, such that f(t) = inf_0<ω≤M {ω t^2 + ψ(ω)};
* the solution to inf_0<ω≤M {ω t^2 + ψ(ω)} is unique and given by ω^* = f'(t)/(2t).

The proof is presented in Ref. <cit.>. Observe that our version of the half-quadratic proposition assumes a non-negativity constraint on the primal variables.

§ HALF-QUADRATIC TRANSPORTATION PROBLEM

In this section we present a memory-efficient primal–dual algorithm for solving GTPs whose cost functions satisfy Proposition <ref>.

Let f_ij be a cost function in (<ref>) such that each f_ij satisfies Proposition <ref>; then, a solution to the transportation problem can be computed with Algorithm <ref>.

* On the half-quadratic transportation problem. By (<ref>), the cost (<ref>) can be rewritten as min_x ∑_ij f_ij(x_ij) = min_x,ω ∑_ij {ω_ij x^2_ij + ψ(ω_ij)}.

* On the algorithm convergence. Let ℒ denote the Lagrangian of the half-quadratic transportation problem; then one can interchange the order of the minimisations, i.e., min_x,ω max_y ℒ(x,ω,y) = min_x min_ω max_y ℒ(·) = min_ω min_x max_y ℒ(·), where we denote with y the vectors of Lagrange multipliers. This suggests an alternating minimisation scheme w.r.t. ω and (x,y). Let x^k, ω^k and y^k be the current feasible values; then we define ω^k+1 = arg min_ω ℒ(x^k, ω, y^k) to be the updated ω value. Thus, x and y are updated by solving the quadratic transportation problem: (x^k+1, y^k+1) = arg min_x max_y ℒ(x, ω^k+1, y). We define F(x) = ∑_ij f_ij(x_ij) and F̂(x,ω) = ∑_ij {ω_ij x^2_ij + ψ(ω_ij)}, and observe that F(x^k) = F̂(x^k, ω^k) ≥ F̂(x^k, ω^k+1) ≥ F̂(x^k+1, ω^k+1) = F(x^k+1). Then, the alternated minimisations w.r.t. ω and x produce a feasible convergent sequence {x^k, x^k+1, x^k+2, …} that reduces the cost of the GTP: F(x^k) ≥ F(x^k+1) ≥ F(x^k+2) ≥ ….

* On the alternated minimisations. From (<ref>), for a given x the optimum ω in (<ref>) is computed as ω_ij = f'_ij(x_ij)/(2x_ij). We define y equal to (λ, γ, s), where λ and γ are the Lagrange multipliers for the equality constraints (<ref>) and (<ref>), respectively, and s are the Lagrange multipliers for the non-negativity constraint. Then, the minimisation (<ref>) corresponds to finding the vectors (x, λ, γ, s) that solve the Karush-Kuhn-Tucker conditions (KKTs) with ω fixed:

ω_ij x_ij - λ_i - γ_j - s_ij = 0,
∑_j x_ij - p_i = 0,
∑_i x_ij - q_j = 0,
s_ij x_ij = 0,
s_ij, x_ij ≥ 0.

A strategy for solving the KKTs is to use an iterative Projected Gauss–Seidel scheme <cit.>.
Thus, from (<ref>): x_ij = ω̅_ij(λ_i + γ_j + s_ij), where we define ω̅_ij = 1/ω_ij. Substituting x_ij in (<ref>), we have ∑_j ω̅_ij(λ_i + γ_j + s_ij) = p_i. We solve for λ_i and obtain

λ_i = (p_i - ∑_j (γ_j + s_ij) ω̅_ij) / ∑_j ω̅_ij.

Similarly, we substitute x_ij in (<ref>) and solve for γ_j:

γ_j = (q_j - ∑_i (λ_i + s_ij) ω̅_ij) / ∑_i ω̅_ij.

From (<ref>), (<ref>) and (<ref>), we see two cases: x_ij = 0 and s_ij = -(λ_i + γ_j) ≥ 0; or x_ij = ω̅_ij(λ_i + γ_j) ≥ 0 and s_ij = 0. Thus

s_ij = max{0, -(λ_i + γ_j)}.

The complete procedure is shown in Algorithm <ref>. In practice, we observed that the internal loop in Algorithm <ref> requires only a few iterations to approximate the dual variables, and full convergence of this inner loop is not necessary for overall convergence; in our experiments we used five iterations. The solution is the global minimum if the cost is convex, and a local one otherwise. In the following, we present two particular and interesting cases of half-quadratic transportation problems.

The linear cost can be approximated with the differentiable function <cit.> f_ij(x_ij) = c_ij(x_ij^2 + β^2)^1/2, with β ≈ 0. Thus, ω is computed with ω_ij = (1/2) c_ij/(x_ij^2 + β^2)^1/2 in Algorithm <ref>. It follows from lim_β→0 c_ij(x_ij^2 + β^2)^1/2 = c_ij x_ij for c_ij, x_ij ≥ 0. Hence, the formula for ω follows directly from (<ref>).

The cost function of the form f_ij(x_ij) = c_ij[1 - δ̂(x_ij)], for c_ij, x_ij > 0 (where δ̂(t) is the Kronecker delta), can be approximated with the half-quadratic function f̃_ij(x_ij) = c_ij x^2_ij/(β^2 + x^2_ij), with β ≈ 0. Thus, ω_ij = c_ij β^2/(β^2 + x_ij^2)^2. It follows from

lim_β→0 t^2/(β^2+t^2) = 1 - δ̂(t) = { 0 for t = 0; 1 for t ∈ ℝ∖{0} }.

The formula for ω follows directly from (<ref>). Figure <ref> plots the half-quadratic functions that approximate the L_1 and L_0 norms.

§ RELATIONSHIP WITH THE QUADRATIC TRANSPORTATION PROBLEM

The SQT problem is defined by the cost function f_ij(x_ij) = c_ij x_ij^2; thus, ω_ij = c_ij. It follows directly from (<ref>). In this case, Algorithm <ref> reduces to the dual Algorithm <ref>: from Proposition <ref>, we note that ω is constant, and that the computations of λ, γ and s are independent of x. Dorigo and Tobler discussed the relationship between the QTP and the push–pull migration laws implemented in Algorithm <ref> <cit.>.

The QT problem is defined by a cost function of the form f_ij(x_ij) = a_ij x_ij^2 + b_ij x_ij. Thus, the dual algorithm is derived with ω_ij = a_ij and using the condition ω_ij x_ij - λ_i - γ_j - s_ij = b_ij instead of (<ref>). It follows directly from the KKTs.

Remark. An alternative to Proposition <ref> is given by the half-quadratic approximation a_ij x_ij^2 + b_ij x_ij ≈ a_ij x_ij^2 + b_ij(x_ij^2 + β^2)^1/2, with β ≈ 0; thus, ω_ij = a_ij + (1/2) b_ij/(x_ij^2 + β^2)^1/2. This approximation is presented with the sole aim of illustrating the potential of our approach. It is clear that the dual algorithm derived according to Proposition <ref> is more accurate, faster and requires less memory to implement.
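The whole procedure fits in a few lines of code. The following Python/NumPy sketch (ours; the variable names, the initialisation and the fixed iteration counts are arbitrary choices, not part of the paper) implements the alternated scheme for the L_1-type cost c_ij(x_ij^2 + β^2)^1/2, with five projected Gauss–Seidel passes in the inner loop as suggested above:

```python
import numpy as np

def hq_transport(c, p, q, beta2=1e-3, outer=100, inner=5):
    """Half-quadratic sketch for min sum_ij c_ij*sqrt(x_ij^2 + beta2)
    subject to row sums p, column sums q, x >= 0 (balanced: sum p == sum q)."""
    x = np.outer(p, q) / p.sum()                 # feasible starting point
    lam, gam = np.zeros_like(p), np.zeros_like(q)
    for _ in range(outer):
        w = 0.5 * c / np.sqrt(x**2 + beta2)      # omega = f'(x)/(2x)
        wb = 1.0 / w                             # \bar{omega} = 1/omega
        for _ in range(inner):                   # projected Gauss-Seidel on the duals
            s = np.maximum(0.0, -(lam[:, None] + gam[None, :]))
            lam = (p - ((gam[None, :] + s) * wb).sum(axis=1)) / wb.sum(axis=1)
            s = np.maximum(0.0, -(lam[:, None] + gam[None, :]))
            gam = (q - ((lam[:, None] + s) * wb).sum(axis=0)) / wb.sum(axis=0)
        s = np.maximum(0.0, -(lam[:, None] + gam[None, :]))
        x = wb * (lam[:, None] + gam[None, :] + s)   # primal update
    return x

# toy example: costs grow with |i - j|
p = np.array([2.0, 1.0]); q = np.array([1.0, 1.0, 1.0])
c = 1.0 + np.abs(np.arange(2)[:, None] - np.arange(3)[None, :]).astype(float)
print(hq_transport(c, p, q).round(3))
```

Note that the primal update is non-negative by construction: since s_ij = max{0, -(λ_i + γ_j)}, one has λ_i + γ_j + s_ij = max{λ_i + γ_j, 0} ≥ 0, so no extra projection of x is needed.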
In order to demonstrate the versatility of our proposal, we generate two random vectors p and q (depicted in Figure <ref>); we compute the optimum transported volumes x with three cost function models: the quadratic (c_ijx_ij^2), the approximation L_1 (c_ij√(x_ij^2+β^2), with β^2= 1×10^-3) and the approximation L_0 (c_ijx_ij^2 / (β^2 + x_ij^2), with β^2= 1×10^-1). In all the cases, we use c_ij = |i-j|+1. Figure <ref> depicts the computed x values. One can observe that the quadratic cost function promotes dense solutions; i.e., there are many x's with small values. On the other hand, one can observe the sparseness of the solution is induced with the use of the approximated L_1–norm. Such sparsity is emphasised with the approximated L_0–norm.We have presented a model to approximate solutions to the general transportation problems by approximating the transportation cost functions with half–quadratic functions. The approach guarantees convergence using an alternated minimisation scheme. In the case of a non–convex cost function f the convergence is guaranteed to a local minimum. Although we present a minimisation algorithm with reduced memory requirements, our scheme accepts other efficient solvers for the quadratic transportation subproblem; such as those reported in Refs. <cit.>. 10 url<#>1urlprefixURL href#1#2#2 #1#1Hitchcock41 F. L. Hitchcock, http://dx.doi.org/10.1002/sapm1941201224The distribution of a product from several sources to numerous localities, Journal of Mathematics and Physics 20 (1-4) (1941) 224–230.adlakha13 V. Adlakha, K. Kowalski, On the quadratic transportation problem, Open Journal of Optimization 2 (3) (2013) 89–94.gemanyang95 D. Geman, C. Yang, http://dx.doi.org/10.1109/83.392335Nonlinear image recovery with half-quadratic regularization, Trans. Img. Proc. 4 (7) (1995) 932–946.charbonnier97 P. Charbonnier, L. Blanc-Feraud, G. Aubert, M. Barluad, Deterministic edge-preserving regularization in computed imaging, IEEE Trans. Image Processing 6 (1997) 298–311.rubner:EMD00 Y. Rubner, C. Tomasi, L. J. Guibas, http://dx.doi.org/10.1023/A:1026543900054The earth mover's distance as a metric for image retrieval, Int. J. Comput. Vision 40 (2) (2000) 99–121.baccianella13:emd S. Baccianella, A. Esuli, F. Sebastiani, http://dx.doi.org/10.1162/NECO_a_00558Feature selection for ordinal text classification, Neural Computation 26 (3) (2013) 557–591.levina:emdmallows01 E. Levina, P. Bickel, The earth mover's distance is the mallows distance: some insights from statistics, in: Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on, Vol. 2, 2001, pp. 251–256 vol.2.kusner:emd15 M. Kusner, Y. Sun, N. Kolkin, K. Q. Weinberger, http://jmlr.org/proceedings/papers/v37/kusnerb15.pdfFrom word embeddings to document distances, in: D. Blei, F. Bach (Eds.), Proceedings of the 32nd International Conference on Machine Learning (ICML-15), JMLR Workshop and Conference Proceedings, 2015, pp. 957–966.zen:emd14 G. Zen, E. Ricci, N. Sebe, http://dx.doi.org/10.1109/ICPR.2014.634Simultaneous ground metric learning and matrix factorization with earth mover's distance, in: Proceedings of the 2014 22Nd International Conference on Pattern Recognition, ICPR '14, IEEE Computer Society, Washington, DC, USA, 2014, pp. 3690–3695.orlova:emd16 D. Y. Orlova, N. Zimmerman, S. Meehan, C. Meehan, J. Waters, E. E. B. Ghosn, A. Filatenkov, G. A. Kolyagin, Y. Gernez, S. Tsuda, W. Moore, R. B. Moss, L. A. Herzenberg, G. 
Walther, Earth mover's distance (EMD): A true metric for comparing biomarker expression levels in cell populations, PLOS ONE 11 (3) (2016) 1–14.
[11] M. Cuturi, D. Avis, Ground metric learning, J. Mach. Learn. Res. 15 (1) (2014) 533–564.
[12] J. L. Morales, J. Nocedal, M. Smelyanskiy, An algorithm for the fast solution of symmetric linear complementarity problems, Numerische Mathematik 111 (2) (2008) 251–266.
[13] G. Dorigo, W. Tobler, Push-pull migration laws, Annals of the Association of American Geographers 73 (1) (1983) 1–17.
[14] N. Megiddo, A. Tamir, Linear time algorithms for some separable quadratic programming problems, Oper. Res. Lett. 13 (4) (1993) 203–211.
[15] S. Cosares, D. S. Hochbaum, Strongly polynomial algorithms for the quadratic transportation problem with a fixed number of sources, Math. Oper. Res. 19 (1) (1994) 94–111.
http://arxiv.org/abs/1705.09789v1
{ "authors": [ "Mariano Rivera" ], "categories": [ "math.OC", "cs.DS" ], "primary_category": "math.OC", "published": "20170527090208", "title": "Half-quadratic transportation problems" }
Submitted to IEEE Transactions on Image Processing, May 2017

Fast MPEG-CDVS Encoder with GPU-CPU Hybrid Computing
Ling-Yu Duan, Wei Sun, Xinfeng Zhang, Shiqi Wang, Jie Chen, Jianxiong Yin, Simon See, Tiejun Huang, Alex C. Kot, Fellow, IEEE, and Wen Gao, Fellow, IEEE
L.-Y. Duan, W. Sun, J. Chen, T. Huang, and W. Gao are with the School of Electronics Engineering and Computer Science, Institute of Digital Media, Peking University, Beijing 100871, China (e-mail: {lingyu, weisun199508, cjie, tjhuang, wgao}@pku.edu.cn). X. Zhang and Alex C. Kot are with the Rapid-Rich Object Search (ROSE) Lab, Nanyang Technological University, Singapore (e-mail: {xfzhang, EACKOT}@ntu.edu.sg). S. Wang is with the Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong (e-mail: [email protected]). J. Yin and S. See are with the NVIDIA AI Tech. Centre (e-mail: {jianxiongy, ssee}@nvidia.com). Ling-Yu Duan, Wei Sun and Xinfeng Zhang are joint first authors, and Ling-Yu Duan is the corresponding author.
Version December 30, 2023
================================================================================

The Compact Descriptors for Visual Search (CDVS) standard from the ISO/IEC Moving Picture Experts Group (MPEG) has succeeded in enabling interoperability for efficient and effective image retrieval by standardizing the bitstream syntax of compact feature descriptors. However, the intensive computation of the CDVS encoder unfortunately hinders its wide deployment in industry for large-scale visual search. In this paper, we revisit the merits of the low complexity design of the CDVS core techniques and present a very fast CDVS encoder by leveraging the massive parallel execution resources of GPUs. We elegantly shift the computation-intensive and parallel-friendly modules to state-of-the-art GPU platforms, in which the thread block allocation and the memory access are jointly optimized to eliminate performance loss. In addition, those operations with heavy data dependence are allocated to the CPU to relieve the GPU of an extra but unnecessary computation burden. Furthermore, we have demonstrated that the proposed fast CDVS encoder can work well with convolutional neural network (CNN) approaches, which have harmoniously leveraged the advantages of GPU platforms and yielded significant performance improvements.
Comprehensive experimental results over benchmarks show that the fast CDVS encoder using GPU-CPU hybrid computing is promising for scalable visual search.

MPEG-CDVS, feature compression, GPU, visual search, hybrid computing, standard

§ INTRODUCTION

Recently, there has been an exponential increase in the demand for visual search, which issues visual queries to find the images/videos representing the same object or scene. Visual search can facilitate many applications such as product identification, landmark localization, visual odometry, augmented reality, etc. In typical visual-search systems, the users send a query image or its visual feature descriptors to remote servers <cit.><cit.>. The images with the same object or scene as the query image are identified by measuring the visual feature descriptor distances between reference and query images. However, efficient and effective visual search systems are often subject to the constraints of memory footprint, bandwidth and computational cost, requiring low complexity generation and fast transmission of visual queries.

Over the past decade, numerous visual feature descriptors have been proposed from the perspectives of high accuracy, low bandwidth and fast extraction. Although the classical Scale-Invariant Feature Transform (SIFT) descriptor <cit.> has achieved outstanding performance, it imposes a severe computational burden and memory cost, especially for mobile or large-scale visual search scenarios and real-time application systems. This has led to a large body of research on compact descriptors with lower computation and bandwidth requirements. A series of representative visual feature descriptors, e.g., SURF <cit.>, ORB <cit.>, BRISK <cit.>, have been proposed. However, most of them approach the goal of reduced computational cost and improved descriptor compactness at the expense of performance loss compared with the original SIFT.

Towards high performance visual search, the Moving Picture Experts Group (MPEG) published the Compact Descriptors for Visual Search (CDVS) standard in 2015 <cit.><cit.>. The MPEG-CDVS standard provides the standardized bitstream syntax to enable interoperability for visual search, achieving comparable accuracy with much lower bandwidth requirements than SIFT. Herein, two kinds of compressed descriptors, i.e., local and global feature descriptors, are compactly represented at different bit rates to achieve bit-rate scalability (e.g., 512B, 1KB, 2KB, 4KB, 8KB, and 16KB). As such, stringent bandwidth and accuracy requirements can be well fulfilled. Besides the bandwidth and accuracy requirements, the encoding efficiency of CDVS descriptors directly determines the visual search latency and affects the interactive experience, which is becoming the bottleneck hindering wide deployment in industry. Especially when targeting large-scale video analysis, fast extraction of CDVS descriptors from huge amounts of video frames is crucial to support pervasive video analysis applications such as mobile augmented reality, robots, surveillance and media entertainment, etc. <cit.>.
Although some algorithms have been proposed to speed up the encoding process in the CDVS standard, e.g., image downsampling pre-processing <cit.> and BFLoG <cit.>, the efficiency of extracting CDVS descriptors falls far behind the practical requirements of zero-latency or real-time visual search; for example, more than 100 ms per VGA-resolution image is incurred on a CPU platform. Undoubtedly, GPUs have achieved great success in high throughput image and video processing due to their parallel-processing capability <cit.>. Especially for the state-of-the-art convolutional neural network (CNN) approaches, the GPU has become the crucial computation platform. Therefore, how to leverage GPUs to significantly speed up the CDVS encoder, and how to explore the harmonious operation and complementary effects of (handcrafted) CDVS compact descriptors and deep learning based features over GPU platforms, is becoming a promising and practically useful topic. In this paper, we first revisit the CDVS technique contributions in reducing computational cost. Then, we present the fast MPEG-CDVS encoder. The main contributions of this paper are three-fold:

* We revisited significant contributions of MPEG-CDVS from the perspective of reducing the computational cost of CDVS encoders, and its merits in accommodating parallel implementation over GPU-CPU hybrid computing platforms. Facing the challenges of big image/video data analysis, the exploration of high throughput computing of standard compliant, low complexity CDVS descriptors (or other handcrafted features) via hybrid platforms is expected to facilitate the deployment of scalable and interoperable visual search applications.

* We proposed a very fast CDVS encoder, which elegantly shifts the computation-intensive operations to the GPU platform. By leveraging the high parallel processing capability of GPUs and the strength of parallel operations in CDVS, the fast CDVS encoder has achieved up to 30× speedup over the CPU platform without noticeable performance loss. To the best of our knowledge, this is the first and the fastest CDVS standard compliant encoder over GPU platforms.

* Furthermore, we have studied the significant performance improvement obtained by combining CDVS descriptors and CNN features over benchmarks, with 0.0305 and 0.174 mAP gains over CDVS and CNNs, respectively. In particular, we propose the marriage of the higher computational efficiency of CDVS (3.27 ms for CDVS vs. 144 ms for CNNs on a 640×480 image) and the promising search performance of CNNs towards a scalable visual search framework, in which their complementary effects in terms of efficiency and performance have been well demonstrated.

The remainder of this paper is organized as follows. Section II reviews the related works. Section III revisits the techniques to speed up the CDVS encoding process. Section IV presents the fast CDVS encoder using GPU-CPU hybrid computing. Extensive experimental results and discussions are reported in Section V, and finally we conclude this paper in Section VI.

§ RELATED WORK

§.§ Review of MPEG-CDVS

The normative blocks of the CDVS standard mainly include five modules, i.e., interest point detection, local feature description, selection, compression and local feature aggregation <cit.>. For an input image, the interest points are first detected by the Laplacian of Gaussian (LoG) detector, which is based on a pyramid representation with a series of smoothing kernels. Subsequently, the interest points are selected according to their characteristics (scale, peak response, location, etc.)
<cit.>, and the scale-invariant feature transform (SIFT) descriptors <cit.> are computed for the selected interest points. The SIFT descriptors are further compressed by applying a low-complexity transform and scalar quantization scheme <cit.>, such that local feature compression with low memory and computational complexity is supported for compact feature representation. In the local feature compression procedure, the locations of these interest points are further compressed using the location histogram coding scheme for the Geometric Consistency Check (GCC) <cit.>. Besides the local feature descriptors, CDVS also standardizes the local aggregation procedure to construct a bit-rate scalable global feature descriptor or image signature <cit.> utilizing a Scalable Compressed Fisher Vector (SCFV) representation <cit.> for image retrieval applications.

The local and global feature descriptors jointly comprise the CDVS bitstream, and can be utilized separately or jointly in visual applications. For example, image retrieval is accomplished with pairwise matching by comparing the feature descriptor distances between two images. Due to the computational complexity of comparing the local feature descriptors, CDVS recommends an image retrieval strategy of first comparing the global feature descriptors to select candidates and then utilizing the local descriptor comparison with GCC to determine the final results. As such, competitive matching and retrieval accuracy can be achieved with a very low memory footprint.

§.§ Introduction of GPU Architecture

Different from a CPU, which consists of a few cores optimized for sequential processing, a GPU exhibits a massively parallel architecture consisting of thousands of smaller but more efficient cores designed for handling multiple tasks simultaneously by launching with Single Instruction Multiple Threads (SIMT), in which a set of atom operations is applied to process huge amounts of pixels in parallel. A variety of parallel computing platforms and application programming interface models have been created in recent years, e.g., CUDA, DirectCompute and OpenCL, which have significantly strengthened the parallel-processing capabilities of GPUs towards general-purpose computing. Herein, CUDA is the most widely used parallel programming framework, developed by NVIDIA. It partitions workloads into thread blocks (TBs), each of which is a batch of threads that can cooperate together by efficiently sharing data through fast shared memory and synchronizing their execution to coordinate memory accesses. Furthermore, thread blocks of the same dimensionality and size that execute the same kernel can be batched together into a grid of blocks, so that the total number of threads that can be launched in a single kernel invocation is much larger, as illustrated in Fig. <ref> <cit.>. The TBs are allocated to streaming multiprocessors (SMs) to be executed simultaneously using GPU cores. In addition, an important memory feature is that access operations can be served simultaneously when adjacent threads in the same TB access adjacent memory units, which is known as memory coalescing. Therefore, by optimizing the TB allocation and the memory access, the speedup of calculations on GPU can be further improved.

§.§ Review of GPU based feature extraction

Based on the outstanding parallel performance of GPUs, there has been fast growing interest in applying GPUs to speed up visual feature descriptor construction.
In particular, numerous algorithms have been proposed for GPU based SIFT implementations <cit.>, as SIFT descriptors require high computational cost and a huge amount of memory. In <cit.>, an efficient GPU implementation of SIFT was presented based on the vector operations of the GPU by texture packing, and 20 fps was realized with the Quadro FX 3400 GPU. In <cit.>, an open source mixed GPU/CPU implementation of SIFT was provided, achieving 27.1 fps with CUDA on an 8800GTX GPU. In <cit.>, with a GTX 1060 GPU, the CUDA-SIFT implementation consumes 2.7 ms for images with resolution 1280×960 and 3.8 ms for resolution 1920×1080. Wang et al. further analyzed the workload of SIFT in <cit.> and proposed to distribute the feature extraction tasks to CPU and GPU, such that a speed of 10 fps for a 320×256 image and a 41% energy consumption reduction can be achieved. Besides SIFT, the speeded-up robust feature (SURF) <cit.> and Fisher vectors (FV) <cit.> were also implemented on GPU platforms, and around an order of magnitude speedup was achieved compared to CPU based implementations.

Besides hand-crafted features, convolutional neural network based features have recently achieved promising performance in various computer vision tasks such as face recognition <cit.>, image classification <cit.>, retrieval <cit.>, etc. CNNs require extremely fast parallel feature extraction on GPUs. In <cit.>, a highly optimized GPU implementation of CNNs was made publicly available for training networks. A number of CNN software frameworks based on CUDA using NVIDIA GPUs have also been developed, such as Caffe <cit.> and TensorFlow <cit.>. Recently, Vasilache et al. introduced a fast Fourier transform convolution implementation based on NVIDIA's cuFFT library <cit.>, which is faster than NVIDIA's cuDNN implementation for the common convolutional layers.

However, although there are many visual feature descriptors implemented on GPU, they fall short of several practical but crucial requirements, e.g., low bandwidth cost, high performance and good compactness, and they incur excessive GPU resource consumption. Therefore, a very fast and standard compliant CDVS encoder over GPU platforms can elegantly contribute to state-of-the-art large-scale visual search.

§ CDVS REVISIT FOR SPEEDUP

Targeting high accuracy and low bandwidth, CDVS has achieved significant success. Although several technique proposals <cit.><cit.> were made to speed up the encoding process of CDVS, it cannot fulfill real-time requirements on CPU platforms. To reduce the computational complexity, CDVS first downsamples the input images to a low resolution with the longer side less than 640 pixels. However, CDVS extraction still needs more than 100 ms per image <cit.>. As illustrated in Fig. <ref>, the CDVS encoder can be divided into five major modules, i.e., interest point detection, local feature selection, description, compression and aggregation. To analyze the computation cost of the different modules, we test the running time of each module using TM14.0 on 1000 images with resolution 640×480. Fig. <ref> shows the percentage of average running time for the different modules. From the results we can see that interest point detection, local feature description and aggregation take up most of the running time, more than 99.5%. Therefore, the optimization strategies for CDVS encoders are centered on these three modules.
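Before examining the individual modules, it helps to see concretely how a pixel-parallel filtering step of the kind that dominates the encoder's runtime maps onto the CUDA execution model described above. The following Numba/CUDA sketch is ours for illustration only (it is not part of the CDVS reference software, and the kernel and variable names are arbitrary): one thread per output pixel, a 2D grid of 16×16 thread blocks, and adjacent threads reading adjacent addresses so that global loads coalesce.

```python
import numpy as np
from numba import cuda

@cuda.jit
def row_filter(src, dst, k):
    """1D horizontal convolution: one thread per output pixel."""
    x, y = cuda.grid(2)                      # x: column (fastest-varying index), y: row
    h, w = src.shape
    r = k.shape[0] // 2
    if x < w and y < h:
        acc = 0.0
        for i in range(-r, r + 1):
            xx = min(max(x + i, 0), w - 1)   # clamp at the image borders
            acc += src[y, xx] * k[i + r]
        dst[y, x] = acc                      # adjacent threads write adjacent addresses

img = np.random.rand(480, 640).astype(np.float32)
out = np.empty_like(img)
kern = (np.ones(7) / 7.0).astype(np.float32)
tpb = (16, 16)                               # 256 threads per block
bpg = ((img.shape[1] + tpb[0] - 1) // tpb[0],   # blocks along columns
       (img.shape[0] + tpb[1] - 1) // tpb[1])   # blocks along rows
row_filter[bpg, tpb](img, out, kern)         # numba transfers the arrays implicitly
```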
§.§ Interest Point Detection

The interest point detection consists of two stages, i.e., scale-space construction and extremum detection. CDVS constructs the scale space as an image pyramid generated by filtering the input image with a series of 2D separable Gaussian filters at increasing scale factors. The interest points are the scale-space extrema of the normalized derivatives at each scale of the image pyramid, which are obtained by applying a Laplacian filter to each scale image. Therefore, there are multiple convolution operations on the input image I with different scale factors as follows,

L_k = I * g_k * f,

where g_k and f are the Gaussian and Laplacian kernels, respectively. Obviously, this is a computation-intensive process.

To speed up the LoG filtering, Block-based Frequency Domain Laplacian of Gaussian (BFLoG) filtering <cit.><cit.> was proposed as an alternative to spatial-domain filtering. The input image is first decomposed into overlapped blocks, which are transformed into the frequency domain using the Discrete Fourier Transform (DFT). The spatial-domain convolution can then be equivalently implemented as an element-wise product between the frequency-domain image matrix and the frequency-domain filter kernel matrix, as illustrated in Fig. <ref>. To remove the FFTs of the filter kernels, BFLoG adopts a fixed block size so that the convolution kernels can be pre-computed in the DFT domain, and this pre-computation also reduces the memory cost. Thanks to the Fast Fourier Transform (FFT) <cit.>, the computational complexity of convolution drops from 𝒪(M^2N^2) in the spatial domain to 𝒪(4N^2logN^2-11N^2+16N), where M and N are the sizes of the square filter and the square block. By optimizing the block size and overlap size, BFLoG achieves about 47% filtering time reduction compared with spatial filtering, with negligible variations in visual search performance, as shown in Table <ref>.

Although the block-level interest point detections are independent of each other, the pixels within each block are dependent in the FFT calculation, which makes pixel-level parallelism difficult. To reduce boundary effects, BFLoG uses R overlapped pixels for each N×N block, which introduces extra computational burden. Empirically, the block size N=128 and overlap size R=16 are optimal for performance and efficiency. However, for a 640×480 image, only 35 blocks can be executed in parallel, which is far fewer than the number of GPU cores. For the subsequent extremum detection, orientation assignment and descriptor calculation, BFLoG has to recompose the (LoG and Gaussian) filtered representations, which requires extra IFFT operations and memory: there are 8 IFFTs for each block with 5 scales, as illustrated in Fig. <ref>.
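For concreteness, the following sketch shows the three-step frequency-domain filtering of one block (FFT, element-wise product with a pre-computed frequency-domain kernel, IFFT) using cuFFT. The function and variable names are illustrative and are not taken from the BFLoG reference software.

```cuda
#include <cufft.h>
#include <cuComplex.h>

__global__ void pointwiseMul(cufftComplex* img, const cufftComplex* kernel, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        img[i] = cuCmulf(img[i], kernel[i]);  // element-wise product in DFT domain
}

void filterBlock(float* d_block, cufftComplex* d_freq,
                 const cufftComplex* d_kernelFreq, int N)
{
    int nFreq = N * (N / 2 + 1);              // size of the R2C output
    cufftHandle fwd, inv;
    cufftPlan2d(&fwd, N, N, CUFFT_R2C);
    cufftPlan2d(&inv, N, N, CUFFT_C2R);

    cufftExecR2C(fwd, d_block, d_freq);       // step 1: FFT of the image block
    pointwiseMul<<<(nFreq + 255) / 256, 256>>>(d_freq, d_kernelFreq, nFreq);
    cufftExecC2R(inv, d_freq, d_block);       // step 3: IFFT back to spatial domain
                                              // (cuFFT leaves an N*N scale factor
                                              // to be normalized by the caller)
    cufftDestroy(fwd);
    cufftDestroy(inv);
}
```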
Considering that the Laplacian filter kernel has a fixed and small size for all scale images, the computational cost of the Laplacian convolution is much lower than that of the Gaussian convolution. Therefore, Duan et al. proposed a mixture-domain LoG filtering approach (BMLoG) that further reduces the computation cost of BFLoG by applying the Gaussian convolution in the frequency domain and the Laplacian convolution in the spatial domain, as shown in Fig. <ref>. As a result, the filtering process has 1 FFT and 5 IFFT operations plus 5×7 additions/subtractions per pixel for the Laplacian convolution, which achieves about 20% further filtering time reduction <cit.>.

After the LoG filtering, interest points are detected by identifying the local extrema, which requires comparing each sample point with its 8 neighbors in the current scale image and 18 neighbors in the scales above and below. CDVS adopts an alternative extremum detection algorithm with low computational complexity based on a low-degree polynomial (ALP) <cit.>. By assuming that the LoG kernel can be approximated by linear combinations of LoG kernels at different coordinates and scales, ALP approximates the LoG scale space by a third-degree polynomial of the scale σ_k for each sample point (x,y),

p(x,y,σ_k) = ∑_i=0^3α_i(x,y)σ_k^i,

where the polynomial coefficients are functions of the image coordinates (x,y),

α_i = ∑_k=0^K-1β_k,iL_k(x,y),   i= 0, 1, 2, 3.

The parameters {β_k,i} correspond to the K predefined scales σ_k, and {L_k|k=0,⋯,K-1} are the LoG filtered images in one octave. ALP first finds the scales of the extrema via the derivative of the polynomial in Eqn.(<ref>) with respect to σ_k, and then compares each point with its 8 in-scale neighbors. Compared with the previous method, ALP is more efficient because it eliminates the 18 cross-scale comparisons and keeps only the 8 in-scale comparisons per sample point. Although these fast algorithms significantly reduce both the computational and memory costs, the computational burden of interest point detection is still too high for the CPU, and interest point detection remains the most time-consuming module. Fortunately, the calculations of convolution and ALP are suitable for pixel-level parallelism, and these computations fit well into highly parallel architectures like GPUs.
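A per-pixel sketch of the ALP scale test follows: build the cubic p(σ) from the per-scale LoG responses and the precomputed β coefficients, then solve dp/dσ = 0, which is a quadratic. The data layout and thresholding are assumptions for illustration, not the normative ALP internals.

```cuda
__device__ bool alpScaleExtremum(const float L[/*K*/], const float beta[][4],
                                 int K, float sigmaMin, float sigmaMax,
                                 float* sigmaOut, float* respOut)
{
    // Polynomial coefficients alpha_i = sum_k beta[k][i] * L_k (Eqn. above).
    float a[4] = {0.f, 0.f, 0.f, 0.f};
    for (int k = 0; k < K; ++k)
        for (int i = 0; i < 4; ++i)
            a[i] += beta[k][i] * L[k];

    // dp/dsigma = a1 + 2*a2*sigma + 3*a3*sigma^2 = 0
    float A = 3.f * a[3], B = 2.f * a[2], C = a[1];
    float disc = B * B - 4.f * A * C;
    if (disc < 0.f || fabsf(A) < 1e-12f) return false;

    float r = sqrtf(disc);
    float roots[2] = {(-B + r) / (2.f * A), (-B - r) / (2.f * A)};
    for (int j = 0; j < 2; ++j) {
        float s = roots[j];
        if (s > sigmaMin && s < sigmaMax) {
            *sigmaOut = s;
            *respOut  = a[0] + s * (a[1] + s * (a[2] + s * a[3]));
            return true;  // caller still compares against the 8 in-scale neighbors
        }
    }
    return false;
}
```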
§.§ Local Feature Selection and Description

Since there are usually hundreds or thousands of interest points in an image, as illustrated in Fig. <ref>, processing all features complicates compact representation and low computational cost. Fig. <ref> shows the relationship between computational cost and feature number: the computational cost of local feature description and aggregation increases markedly with the number of features. Selecting a subset of good features can therefore save considerable computation time in the subsequent local feature description, compression and aggregation, so the local feature selection module plays an important role in reducing the computational cost. During the development of CDVS, many methods <cit.> were proposed to describe and compress only a subset of essential interest points while maintaining and even improving search accuracy. The basic rationale is to use statistical characteristics to select the interest points with a high probability of being positive matches and to remove noise points. Samsung Electronics proposed extracting features in visual attention regions <cit.>, based on the assumption that the more relevant descriptors are located in regions that are salient to the human visual system. Computation overhead can thus be significantly reduced by applying feature extraction only to regions of interest (ROI). However, this method leads to performance loss due to the inconsistency between the ROIs and the distribution of true matching points. Moreover, it is difficult to define accurate ROIs at low computational cost.

Finally, CDVS adopted a relevance measure that evaluates feature significance for image matching and retrieval based on five statistical characteristics of the interest points: the scale σ of the interest point, the scale-normalized LoG response value p obtained as p(x,y,σ) in Eqn.(<ref>), the distance d from the interest point to the image center, the ratio ρ of the squared trace of the Hessian to the determinant of the Hessian, and the second derivative p_σσ of the scale-space function with respect to σ, i.e., ∂^2p(x,y,σ)/∂σ^2. The relevance measure indicates the a priori probability that a query feature correctly matches a feature of a database image. Given a characteristic parameter denoted by the symbol y lying within region B, the conditional probability of a correct match is

f(c=1|y∈ B) = P(y∈ B, c=1 )/P(y∈ B).

CDVS learned these conditional distributions from pairwise feature matching over a large database of matching image pairs <cit.>, and stored the quantized characteristic parameters and the corresponding values of the conditional distributions in normative tables, which minimizes the computational cost via table look-ups. Assuming independence of the characteristic parameters, the relevance score for each point is obtained by multiplying these conditional probabilities:

r(σ,p,d,ρ,p_σσ) = f_1(σ)f_2(p)f_3(d)f_4(ρ)f_5(p_σσ).

A subset of N interest points is then selected by ranking the relevance measure r, and feature description is performed only on the selected points. Fig. <ref> compares the distributions before and after selecting salient interest points. Although there are many interest points in an image, only a few of them are useful for matching. The circle size of each interest point in Fig. <ref> is proportional to its relevance value. We can see that the relevant interest points selected by Eqn.(<ref>) fall largely within the salient regions, so feature selection can significantly reduce the computational cost without performance degradation. To balance computational cost and search accuracy, CDVS empirically selects around 300 local feature descriptors to represent an image, which saves more than 30% of the computational cost (see Fig. <ref>).
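The following host-side sketch illustrates the relevance ranking of Eqn. above: each characteristic is quantized into a bin of a lookup table holding the learned conditional probability, the five factors are multiplied, and the top-N features are kept. The table layout and quantization are our assumptions; the normative tables differ in detail.

```cuda
#include <algorithm>
#include <vector>

struct Feature { float sigma, p, d, rho, p_ss; float relevance; };

// tab[j] holds the learned conditional probability for characteristic j,
// quantized uniformly over [vMin, vMax] (an illustrative layout).
static float lookup(const std::vector<float>& table, float v, float vMin, float vMax)
{
    int bins = (int)table.size();
    int b = (int)((v - vMin) / (vMax - vMin) * bins);
    return table[std::min(std::max(b, 0), bins - 1)];
}

void selectTopN(std::vector<Feature>& feats,
                const std::vector<float> tab[5],
                const float lo[5], const float hi[5], size_t N)
{
    for (auto& f : feats) {
        float c[5] = {f.sigma, f.p, f.d, f.rho, f.p_ss};
        f.relevance = 1.f;
        for (int j = 0; j < 5; ++j)            // r = f1 * f2 * f3 * f4 * f5
            f.relevance *= lookup(tab[j], c[j], lo[j], hi[j]);
    }
    if (feats.size() > N) {
        std::partial_sort(feats.begin(), feats.begin() + N, feats.end(),
                          [](const Feature& a, const Feature& b)
                          { return a.relevance > b.relevance; });
        feats.resize(N);  // only the N most relevant features are described
    }
}
```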
§.§ Local Feature Compression

The uncompressed SIFT descriptors are difficult to use in practice for two reasons: 1) size limitation: each descriptor needs 1024 bits when each dimension is represented with 1 byte; 2) speed limitation: the computational cost of byte-vector distances is too high over large-scale databases. Therefore, local feature compression and fast matching in the compressed domain are necessary.

During CDVS development, two kinds of compression algorithms, based on vector quantization and on transforms, were widely discussed, and both achieve significant improvements in compression performance and computational efficiency. In the early stage of CDVS, the Test Model under Consideration (TMuC) <cit.> employed tree-structured vector quantization (TSVQ) <cit.> and product quantization (PQ) <cit.> to make descriptors compact. However, these methods need to store huge codebooks, leading to heavy computational burdens. For example, in <cit.>, the authors proposed PQ-SIFT to quantize local descriptors over a large vocabulary with 1 million centroids for 16 sub-segments. The Multi-Stage Vector Quantization (MSVQ) scheme <cit.> was adopted into the test model TM2.0 and significantly reduced the size of the quantization tables. MSVQ consists of two stages, i.e., Tree-Structured Vector Quantization (TSVQ) of the original raw descriptors at the first stage and Product Quantization (PQ) of the residuals at the second stage, as illustrated in Fig. <ref>. After training the MSVQ over 6 million SIFT descriptors extracted from the MIRFLICKR-25000 database <cit.>, MSVQ only requires a 2-level tree-structured quantization table with 2048 visual centers and a PQ table with ∼16K centers. In total, the comparison operations are reduced to 256 (1st-level TSVQ) + 8 (2nd-level TSVQ) + 16K (PQ) for each descriptor, which significantly reduces the computational cost of searching codewords.

To further reduce the computational cost, CDVS finally adopted transform coding with scalar quantization instead of MSVQ. For each SIFT subregion Histogram of Gradients (HoG) h with bins {h_0⋯ h_7}, as shown in Fig. <ref>, CDVS applies an order-8 linear transform to capture the shape of the HoG. To improve the discriminative power of the descriptors, CDVS defines two sets of transforms, as defined in Eqn.(<ref>) and Eqn.(<ref>), and applies different transforms to neighboring subregion HoGs; the transforms are implemented via addition/subtraction and shift operations with extremely low complexity. Each element of the transformed descriptor is then individually quantized to three values, -1, 0 and +1, using quantization thresholds calculated from the off-line learned probability density function of that element. This transform coding method exhibits much lower computational cost than MSVQ while keeping comparable performance, and the scalar quantization needs only two comparison operations per element. More importantly, the transform coding method is codebook free, which makes it more suitable for GPU, since I/O speed is often the bottleneck of GPUs in large-scale computing.
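A sketch of the ternary scalar quantization just described follows; the per-element thresholds are learned off-line and are hypothetical placeholders here.

```cuda
// Map each transformed element to {-1, 0, +1} with two comparisons
// against per-element thresholds.
__host__ __device__ inline signed char ternaryQuantize(float v, float tLow, float tHigh)
{
    if (v <  tLow)  return -1;   // first comparison
    if (v >= tHigh) return +1;   // second comparison
    return 0;
}

void quantizeDescriptor(const float* transformed, signed char* out,
                        const float* tLow, const float* tHigh, int dim /* 128 */)
{
    for (int i = 0; i < dim; ++i)
        out[i] = ternaryQuantize(transformed[i], tLow[i], tHigh[i]);
}
```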
§.§ Local Feature Aggregation

Global descriptors with highly efficient distance computation are crucial for fast large-scale visual search. Several global descriptors were proposed during CDVS development, such as the Residual Enhanced Visual Vector (REVV) <cit.>, the Robust Visual Descriptor (RVD) <cit.> and the Scalable Compressed Fisher Vector (SCFV) <cit.>. REVV uses a set of 190 centroids obtained off-line by k-means clustering of SIFT descriptors, and assigns each uncompressed SIFT descriptor of an input image to its nearest centroid in terms of L2 distance. The difference, or residual, between each SIFT descriptor and its nearest centroid is computed, and the mean residual of all the SIFT descriptors quantized to the same centroid is computed per centroid. A power law with exponent 0.6 is applied to the values of the mean residuals. After dimension reduction via PCA, these residual vectors are binarized according to their signs and concatenated to form a global descriptor. Although REVV has very low computational cost and memory footprint, its performance leaves room for improvement. Bober et al. proposed an enhanced global descriptor, RVD, which improves the robustness of REVV by assigning each SIFT descriptor to multiple centroids while reducing the computational cost. RVD shifts the PCA to the first stage to reduce the cost of the subsequent calculations, trains the PCA matrix from over 5 million SIFT feature descriptors, and implements the PCA via matrix multiplication instead of the traditional PCA calculation. This reduces the PCA computational complexity from 𝒪(n^2) to 𝒪(n), where n is the number of local features. In addition, matrix operations are well implemented and optimized on GPU by NVIDIA through cuBLAS, which can greatly reduce global memory operations compared with non-matrix operations. Furthermore, RVD uses the L1-norm distance between SIFT descriptors and centroids instead of the L2-norm distance of REVV, avoiding the computation-intensive multiplication operations. Although each SIFT descriptor is assigned to multiple centroids with slightly increased computation, RVD achieves better performance than REVV.

Finally, CDVS adopted SCFV as its global descriptor, which uses a Gaussian Mixture Model (GMM) with 512 components to capture the distribution of the selected local feature descriptors. SCFV also first reduces the SIFT dimensionality with a PCA matrix, but it transforms the 128D SIFT into 32D instead of the 48D of RVD, which further reduces the computational cost of the subsequent operations. For each 32D vector x_t, the major computations in SCFV include the posterior probability in Eqn.(<ref>), the accumulated gradient vector g_μ_i^x with respect to the mean of the i^th Gaussian function in Eqn.(<ref>), and its standard deviation δ(i) in Eqn.(<ref>):

γ_t(i) = w_ip_i(x_t|λ)/∑_j=1^512w_jp_j(x_t|λ),

g_μ_i^x = 1/(K√(w_i))∑_t=0^K-1γ_t(i)(x_t-μ_i)/σ_i,

δ(i) = √(1/32∑_j=0^31(g_j-1/32∑_k=0^31g_k)^2),

where p_i(x_t|λ) is the i^th Gaussian function, w_i is its weight, the denominator of γ_t(i) sums over the 512 Gaussian components, and K is the number of selected local features. The Gaussian components are then ranked in descending order of δ, and the top-ranked ones are selected according to the bit budget. Since around 250 local features and 512 Gaussian functions are used in CDVS, there are around 4.5 million multiplications/divisions for these 32D features, more than the roughly 36 thousand multiplications/divisions of RVD. However, SCFV achieves very promising search performance and offers bit rate scalability; moreover, all of these calculations can be transformed into matrix operations, which can be implemented and optimized in parallel on GPU, achieving significant speedup. The detailed speedup results for SCFV on GPU are shown in Fig. <ref>.

§ FAST CDVS ENCODER USING GPU-CPU HYBRID COMPUTING

Although great efforts have been made to reduce the computational complexity of CDVS, it is still difficult to implement a highly efficient CDVS encoder on CPU, even with multiple threads. By leveraging the massively parallel cores of the GPU, we design and implement a very fast CDVS encoder using GPU-CPU hybrid computing. The three major time-consuming modules, i.e., interest point detection, local feature description and aggregation, are shifted to the GPU, while the others remain on the CPU, as shown in Fig. <ref>. In addition, since local feature compression and aggregation are independent processes, they can be performed in parallel on CPU and GPU simultaneously, which elegantly leverages the computational resources of both. In the following subsections, we present the technical details of interest point detection, local feature description and aggregation.

§.§ Interest Point Detection on GPU

For interest point detection, we adopt separable Gaussian filters to construct the image pyramid, and use the Laplacian filter and ALP to detect the interest points.
Distinct from BFLoG, spatial-domain Gaussian and Laplacian filtering can be implemented with a much higher degree of parallelism and very low memory usage, whereas BFLoG can only be parallelized at the block level and doubles the memory usage due to the complex-valued operations of the FFT. In addition, spatial-domain filtering is completed in one step, while BFLoG needs three sequential steps, i.e., FFT, element-wise product and IFFT, which incurs more processing time on GPU.

The implementation of interest point detection comprises three basic stages, as illustrated in Fig. <ref>. Part A implements the LoG filtering and ALP detection, which outputs the interest point candidates. To achieve a high degree of parallelism, the input image is first divided into N×N blocks, and each block is assigned to a thread block (TB) that performs the LoG filtering. Since the access speed of shared memory is much higher than that of graphics memory, these image blocks are loaded into the shared memory of their corresponding thread blocks. The threads in the same TB then access the pixels from shared memory sequentially, i.e., TB0.thread0 accesses the first pixel, TB0.thread1 accesses the second pixel, and so on. This memory access pattern makes full use of global memory coalescing to reduce the memory access time. The Gaussian filtering, Laplacian filtering and ALP detection are then performed sequentially in the corresponding threads. In our implementation, we specify that the image block and the thread block have the same size, so that each pixel is processed by one thread in parallel.

In Part B, the interest point candidates are refined: unstable interest points are removed and more accurate locations are determined for the rest. To speed up this process, we reorganize the interest point data (x_i,y_i,σ_i) and the corresponding pixels in the 3×3 neighborhood into a contiguous queue. For N contiguous candidates, we assign a TB with N threads to calculate the LoG response in the 3×3 region around each candidate, which again exploits global memory coalescing. Afterwards, interest points deemed unstable are removed; for the remaining ones, more accurate positions are obtained by interpolation. In Part C, the final interest points are determined by comparing neighboring candidates in adjacent octaves. Likewise, we first construct two queues for the interest point candidates in the current octave and the preceding octave, respectively, and then apply one TB to perform the comparisons between one candidate in the current octave and all the candidates in the preceding octave using global memory coalescing. Each thread executes the comparison between two candidates; if they are close enough in (x,y,σ) space, the candidate with the lower LoG response is eliminated. The implementation details for each thread are illustrated in Algorithm <ref> using pseudo-code. The pixel-level thread allocation and efficient shared memory access significantly reduce the runtime of interest point detection.
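A sketch of the Part-C comparison kernel follows: one thread block per candidate of the current octave, one thread per candidate of the preceding octave. The structure fields and thresholds are assumed names for illustration.

```cuda
struct Candidate { float x, y, sigma, response; int alive; };

__global__ void crossOctaveSuppress(Candidate* cur, int nCur,
                                    Candidate* prev, int nPrev,
                                    float distThresh, float sigmaThresh)
{
    int c = blockIdx.x;            // one TB per current-octave candidate
    int p = threadIdx.x;           // one thread per preceding-octave candidate
    if (c >= nCur || p >= nPrev) return;

    float dx = cur[c].x - prev[p].x;
    float dy = cur[c].y - prev[p].y;
    float ds = fabsf(cur[c].sigma - prev[p].sigma);
    if (dx * dx + dy * dy < distThresh * distThresh && ds < sigmaThresh) {
        // Close in (x, y, sigma) space: keep only the stronger LoG response.
        if (cur[c].response < prev[p].response)
            atomicExch(&cur[c].alive, 0);
        else
            atomicExch(&prev[p].alive, 0);
    }
}
```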
Since there is no data dependency in these operations, our GPU implementation produces results identical to the CPU implementation up to machine precision. In our experiments, the optimized GPU implementation achieves around 26 times speedup over the CPU platform, and the running time decreases from 56.2 ms to 2.16 ms for 640×480 images.

§.§ Local Feature Description on GPU

Local feature description includes two stages, i.e., orientation computation and SIFT description. To provide rotation invariance for the local feature descriptors, each interest point is assigned one or more dominant orientations based on the distribution of the quantized gradient directions in its neighborhood of radius 3.96σ. To derive the dominant orientation, an orientation histogram with 36 bins is formed from the computed gradient orientations. The orientations whose bin values are greater than 0.8 times the highest peak are kept as orientations of the interest point. For SIFT description, the image patch centered at the interest point (x,y) is first rotated to the angle of its orientation, and then divided into 4 horizontal and 4 vertical spatial subregions referred to as cells, where each side of a cell spans 3σ pixels. From each cell, a histogram of gradients with 8 orientation bins, referred to as a cell histogram, is generated. The SIFT descriptor is formed by concatenating these cell histograms.

Based on the above description, local feature description is divided into two stages, as illustrated in Fig. <ref>, and histogram construction occupies the major part of the computation. To maximize the degree of parallelism, we assign one TB to each interest point to generate the orientation histogram and compute the dominant orientations. However, pixel-level parallelism with global memory coalescing is difficult to achieve by simply allocating one thread to compute the gradient of each pixel and summing the gradients into a histogram. This is because the number of threads in each TB must be the same, while the image patch size 3.96σ varies across interest points. If we allocated threads according to the largest image patch, the number of pixels would far exceed the available threads in one TB.

To solve the problem of varying patch sizes, we design a novel block-based pixel-level parallelization method. Each image patch is divided into non-overlapping N×N sub-patches, and each TB is allocated N×N threads. The gradient computation can then be performed with pixel-level parallelism, and the gradients are exported to shared memory to form gradient histograms via atomic addition. The detailed design of the local feature description is illustrated via pseudocode in Algorithm <ref>. Compared with feature-level parallelization, which assigns one independent thread to each interest point, the proposed block-based pixel-level parallelization is more efficient, for two reasons. First, feature-wise parallelization would lead to a very unbalanced workload among threads, due to the varying patch sizes, and thus degrade the processing efficiency of the GPU. Second, local feature selection in CDVS keeps fewer than N (N is usually smaller than 650) features per image, which means that at most N threads could be launched simultaneously under feature-wise parallelization, far fewer than the number of GPU cores.
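The following sketch shows the shared-memory histogram accumulation at the core of this design: one TB per sub-patch, gradients accumulated into a 36-bin shared histogram with atomic adds, then merged into global memory. Parameter names and the weighting are simplified assumptions (e.g., no Gaussian weighting is shown).

```cuda
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

__global__ void orientationHistogram(const float* img, int width, int height,
                                     int px, int py,      // patch origin
                                     float* globalHist /* 36 bins */)
{
    __shared__ float hist[36];
    int t = threadIdx.y * blockDim.x + threadIdx.x;
    if (t < 36) hist[t] = 0.f;
    __syncthreads();

    int x = px + blockIdx.x * blockDim.x + threadIdx.x;
    int y = py + blockIdx.y * blockDim.y + threadIdx.y;
    if (x > 0 && x < width - 1 && y > 0 && y < height - 1) {
        float dx = img[y * width + x + 1] - img[y * width + x - 1];
        float dy = img[(y + 1) * width + x] - img[(y - 1) * width + x];
        float mag = sqrtf(dx * dx + dy * dy);
        float ang = atan2f(dy, dx) + (float)M_PI;              // [0, 2*pi)
        int bin = min((int)(ang * 36.f / (2.f * (float)M_PI)), 35);
        atomicAdd(&hist[bin], mag);       // shared-memory atomic accumulation
    }
    __syncthreads();

    if (t < 36) atomicAdd(&globalHist[t], hist[t]);  // merge sub-patch histogram
}
```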
Our GPU implementation of local feature description achieves more than 90 times speedup compared with the CPU at different bitrates; the running time decreases from 33.2∼79.2 ms to 0.4∼0.83 ms.

§.§ Local Feature Aggregation on GPU

Once interest point detection and local feature description have been optimized, the proportion of time consumed by local feature aggregation increases from 10% up to 80%, making it the bottleneck of the whole CDVS encoding pipeline. Hence, the remaining issue is to speed up local feature aggregation. As introduced in Section <ref>, it has two parts: i) dimension reduction via PCA, and ii) Fisher vector aggregation. The PCA calculation in CDVS is defined as a matrix multiplication, which can be fulfilled by calling the highly optimized matrix operation library in CUDA.

To speed up the Fisher vector aggregation, we first transform the probability calculation for each 32D local descriptor in Eqn.(<ref>) into matrix multiplications and a set of element-wise operations, as shown in Eqn.(<ref>):

P_i,j = ∑_k=0^31(D_i,k-M_j,k)^2/V_j,k,

P = (D.*D)*(1./V)^T - 2D*(M./V)^T + O*((M.*M)./V)^T.

Here, P_i,j represents the probability term of the i^th descriptor under the j^th GMM component, D_i,k denotes the k^th dimension of the i^th descriptor, M_j,k and V_j,k denote the k^th dimension of the mean and variance vectors of the j^th GMM component, and O is a matrix filled with ones that has the same dimensions as D. The notations '.*' and './' represent element-wise multiplication and division. These matrix operations can be efficiently performed by calling the well-optimized matrix operation library in CUDA. Similarly, we derive the matrix implementations for the accumulated gradient vectors with respect to the mean and the variance, denoted GM and GV, whose calculations are transformed from Eqn.(<ref>) and Eqn.(<ref>) into Eqn.(<ref>) and Eqn.(<ref>):

GM_i,j = ∑_t=0^DescNum-1(D_t,j-M_i,j)/V_i,j·Q_t,i,

GM = (Q^T*D - Q^T*O .* M)./V,

GV_i,j = ∑_t=0^DescNum-1(((D_t,j-M_i,j)/V_i,j)^2-1)·Q_t,i,

GV = (Q^T*(D.*D) - Q^T*D.*2M + Q^T*O.*(M.*M-V.*V)) ./ (V.*V).

Here Q is the normalized probability matrix derived from P, and DescNum is the number of local feature descriptors. After these conversions, the local feature aggregation module can be implemented by invoking the matrix library in CUDA, which is well optimized by NVIDIA. In our experiments, the running time of local feature aggregation decreases from around 13 ms to 0.25 ms, a speedup of more than 49 times over the CPU.
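As an illustration of invoking the CUDA matrix library, the following sketch computes the Q^T*D term of the equations above with cuBLAS; the element-wise steps are assumed to follow in a separate kernel, and the buffer names are illustrative.

```cuda
#include <cublas_v2.h>

// Q (descNum x 512 posteriors) and D (descNum x 32 descriptors) are
// row-major device buffers; the output Q^T*D is 512 x 32 row-major.
void gemmQtD(cublasHandle_t handle, const float* d_Q, const float* d_D,
             float* d_QtD, int descNum)
{
    const float one = 1.f, zero = 0.f;
    // cuBLAS is column-major: a row-major (r x c) buffer reads as a
    // column-major (c x r) matrix, so Q^T*D (row-major) is D^T*Q here.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_T,
                32, 512, descNum,
                &one,  d_D, 32,      // column-major view: D^T (32 x descNum)
                       d_Q, 512,     // column-major view: Q^T, transposed -> Q
                &zero, d_QtD, 32);   // column-major 32 x 512 == row-major 512 x 32
}
```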
§ EXPERIMENTAL RESULTS AND ANALYSIS

§.§ Databases and Evaluation Criteria

To analyze the performance of the fast CDVS encoder, we perform pairwise matching and image retrieval tasks on patch-level and image-level databases. Two image-level databases are utilized in our experiments: 1) the MPEG-CDVS benchmark database <cit.>, which consists of 5 classes: graphics, paintings, video frames, landmarks and common objects; 2) the Holiday database <cit.>, which contains images from different scene types to test the robustness to transformations such as rotation, viewpoint, illumination changes and blurring, and is widely used in the academic literature. In addition, we utilize two patch-level databases, the MPEG-CDVS patch database <cit.> and the Winder and Brown database <cit.>, which contain 100K and 500K matching pairs of 64×64-pixel image patches with canonical scale and orientation.

For performance validation, we adopt the Mean Average Precision (mAP) to measure image retrieval performance, and the True Positive Rate (TPR) and False Positive Rate (FPR) to measure pairwise matching performance. The mAP for a set of queries is calculated as the mean of the average precision scores of the queries, defined as follows:

mAP = ∑_q=1^QAP(q)/Q, AP = ∫_0^1p(r)dr,

where Q is the number of queries, AP is the average precision, and p(r) is the precision at recall r. The TPR and FPR are calculated as

TPR = TP/(TP+FN), FPR = FP/(FP+TN),

where TP, FP, TN and FN are the numbers of true positive, false positive, true negative and false negative retrieval results.

The fast CDVS encoder is implemented on top of the latest CDVS reference software TM14.0 using GPU-CPU hybrid computing. In the following subsections, we validate the improvements in visual search accuracy and descriptor extraction speed, respectively. Since there are some non-normative fast algorithms for CDVS, we also compare our CDVS encoder (denoted CDVS_GPU) with the reference software and with the reference software optimized by the non-normative fast algorithms on the CPU platform, denoted CDVS_CPU and OPT_CDVS_CPU, respectively. To fully appreciate the performance of CDVS, we also compare the CDVS descriptor with state-of-the-art visual feature descriptors, including SIFT <cit.>, SURF <cit.>, ORB <cit.>, BRISK <cit.>, AKAZE <cit.> and LATCH <cit.>, taken from OpenCV 3.2.0 with default parameters.
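For reference, a small host-side sketch of the average-precision computation behind the mAP metric above, using the usual discrete form of the integral over recall:

```cuda
#include <vector>

// ranked[i] is true if the i-th retrieved image is relevant to the query.
double averagePrecision(const std::vector<bool>& ranked, int numRelevant)
{
    double ap = 0.0;
    int hits = 0;
    for (size_t i = 0; i < ranked.size(); ++i) {
        if (ranked[i]) {
            ++hits;
            ap += (double)hits / (double)(i + 1);  // precision at this recall step
        }
    }
    return numRelevant > 0 ? ap / numRelevant : 0.0;
}
// mAP is then the mean of averagePrecision over all Q queries.
```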
§.§ Performance Comparison

In this section, we first show the patch-level evaluation, as it provides an initial picture of descriptor performance while avoiding the influence of different interest point selection strategies. To examine the matching accuracy at different FPRs, the ROC curves on the MPEG-CDVS and the Winder and Brown patch databases are illustrated in Fig. <ref>. The CDVS descriptors generated on the CPU and GPU platforms (denoted CDVS_CPU and CDVS_GPU) yield almost identical pairwise matching results, which verifies that the CDVS encoder is implemented exactly on the heterogeneous architecture without performance sacrifice. The patch matching performance of CDVS is inferior only to the original SIFT descriptors, and clearly superior to the other visual feature descriptors. It is worth noting that CDVS needs only 32∼205 bits per descriptor under the different rate configurations, far less than SIFT, which costs 512 bytes per descriptor. Although SURF, ORB, BRISK, AKAZE and LATCH are also compact descriptors, their performance is clearly inferior to CDVS, and they do not outperform CDVS in compactness either: 256, 128, 256, 244 and 128 bytes are used to represent each SURF, ORB, BRISK, AKAZE and LATCH descriptor, respectively, whereas in Fig. <ref> we use only 103 bits per CDVS descriptor.

To verify the overall performance on real images, we carry out image-level evaluation on pairwise matching and image retrieval tasks. For pairwise matching, there are 10,155 matching and 112,175 non-matching image pairs in the MPEG-CDVS database, and 2072 matching and 20874 non-matching image pairs in the Holiday database. Fig. <ref> shows the ROC curves of the different visual descriptors on the image pair matching task. The CDVS descriptors extracted on the CPU and GPU platforms achieve almost the same performance, and clearly outperform the other competitors. Remarkably, the performance of the SIFT descriptors is poor when the FPR is low: we observe many falsely matched image pairs in which the matched SIFT points lie on the background, as illustrated in Fig. <ref>. This phenomenon verifies that local feature selection not only reduces the computational cost but also removes non-meaningful interest points, which contributes significantly to the high performance.

In the image retrieval experiments, we compare the performance of CDVS and its global descriptors generated at the 6 pre-defined descriptor lengths of the MPEG common test conditions: 512 bytes, 1K, 2K, 4K, 8K and 16K. From the results in Fig. <ref>, we can see that the fast CDVS encoder is well parallelized on the GPU platform with negligible performance difference. However, OPT_CDVS_CPU with the fast algorithms, e.g., BFLoG, causes some performance loss. Table <ref> shows the detailed numerical mAP results for CDVS_CPU, OPT_CDVS_CPU and CDVS_GPU, from which the same conclusion can be drawn. In addition, OPT_CDVS_CPU brings about xx%∼xx% performance loss compared with the reference software CDVS_CPU.

§.§ Speed Comparison

In this section, we further show the speedup of the proposed CDVS encoder relative to the CPU platform. Table <ref> shows the average running time of extracting the different visual descriptors on 1000 images with resolution 640×480 from the MPEG-CDVS database. The notations CDVS_CPU_L and CDVS_GPU_L represent the running time of CDVS local feature descriptor extraction on the CPU and GPU platforms. Except for CDVS_GPU and CDVS_GPU_L, all other descriptors are extracted on a CPU platform with an Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz. CDVS_GPU on a Quadro GP100 achieves significant speedup over the CPU platform when extracting 200∼300 local features per image: more than 35 times and xx times speedup over CDVS_CPU and OPT_CDVS_CPU, respectively. The average CDVS extraction time is reduced from 116.69 ms on the CPU platform to 3.27 ms on the GPU platform. The proposed CDVS encoder also significantly outperforms the other visual feature descriptors extracted on the CPU platform, and it can well satisfy practical real-time applications.

Limited by computational power, CDVS selects no more than 300 local features per image, which may be sub-optimal for visual search accuracy, especially for high-resolution images. Hence, we further analyze the relationship between the number of local features and the extraction time (see Fig. <ref>). As the number of local features increases, the extraction time on the CPU grows almost linearly; to trade off computational cost against search accuracy, CDVS has to select no more than 300 local features per image. By leveraging the GPU, however, the extraction time stays at roughly the same level, about 3 ms. Therefore, more local feature descriptors can be allowed on GPU platforms to further improve CDVS performance, without noticeable increase in computational cost. When dealing with high-resolution images, CDVS first downsamples the input image to a low resolution with the longer side no greater than 640 pixels.
The downsampling operation directly reduces the computational cost, but it also brings image distortion and information loss, especially for high-resolution images. We therefore explore how the feature extraction time of CDVS on the GPU platform varies with image resolution; the results are shown in Fig. <ref>. The most time-consuming module on GPU is still interest point detection, which has been implemented with pixel-level parallelism. When the number of pixels exceeds the number of threads, the running time increases significantly; conversely, the running time grows very slowly as long as there are enough computational resources on the GPU, e.g., for image resolutions smaller than 1080×720. The running time of local feature description and aggregation barely changes, since their computation mainly depends on the number of local features, which is in turn governed by the pre-defined descriptor lengths.

To quantify the speedup on GPU, we measure the running time of these modules on CPU and GPU, and Fig. <ref> shows their running times at the different pre-defined descriptor lengths. CDVS_GPU achieves about 26, 50 and 22 times speedup for interest point detection, local feature description and aggregation, respectively, compared with CDVS_CPU. Fig. <ref> shows the proportions of running time of the different modules under GPU-CPU hybrid computing; the running time of the local feature compression on the CPU can be hidden behind the local feature aggregation on the GPU.

To further test the speedup of the fast CDVS encoder, we run tests on different kinds of GPUs, as shown in Table <ref>. In total, 11 NVIDIA GPUs are utilized in our experiments: the Tesla M40, GTX 1060, GTX 1080 and GTX 1080Ti are popular GPUs targeting hyper-converged systems, and the Jetson TX1 is a popular GPU for embedded systems. In addition, NVIDIA provided us with up-to-date GPUs for these two kinds of systems, i.e., Tesla P40, P100, Titan X, Quadro GP100 and Jetson TX2, which show the maximum speedup. Our CDVS encoder costs only 3.38∼5.05 ms for VGA-resolution images on the popular GPUs, and 3.24∼3.75 ms on the up-to-date GPUs. Even for 1920×1080 images, our encoder can extract the descriptors in real time, with a minimum of 12.41 ms. More importantly, our CDVS encoder extracts feature descriptors from 640×480 images in real time on the Jetson TX2, a promising platform for embedded systems with low power consumption. This indicates that CDVS can be deployed on mobile devices to support very fast visual search. Based on these comprehensive experimental results, we can claim that CDVS with GPU-CPU hybrid computing can well support scalable image/video retrieval and analysis under real-time requirements.

§.§ Promising Future of CDVS and CNN Feature Descriptors

CDVS provides an MPEG-standard-compliant handcrafted visual feature descriptor with high performance and good compactness that is friendly to parallel implementation. Although the visual feature descriptors generated by convolutional neural networks (CNN) have recently shown very promising performance, that performance can be further improved by combining them with the handcrafted visual descriptors of CDVS. In <cit.>, we proposed a Nested Invariance Pooling (NIP) method to obtain compact and robust CNN descriptors, which are generated by applying three different pooling operations to the feature maps of CNNs in a nested way towards rotation- and scale-invariant feature representation.
To explore the potential performance, we combine the CDVS local and global feature descriptors with two CNN features, NIP-VGG-16 <cit.> and RMAC <cit.>, for pairwise matching and image retrieval. In our experiments, the NIP-VGG-16 and RMAC descriptors have 512 dimensions each; using 4 bytes per dimension, this yields a 2KB representation for each. Table <ref> shows the image retrieval and matching performance on the MPEG-CDVS database. Another metric, Precision@R, is adopted in this experiment: the retrieval precision at a given cut-off rank R for a single query. Although both CNN feature descriptors achieve good performance at low bitrates, their performance is further improved by combining them with the CDVS descriptors. In terms of mAP, combining NIP-VGG-16 with the CDVS global descriptors yields improvements of up to 0.0305 and 0.174 over CDVS and NIP-VGG-16, respectively. At FPR=1%, the TPR gains more than xx% and xx% improvements. To the best of our knowledge, the combination of CNN and CDVS achieves the best pairwise matching and image retrieval performance at the same descriptor length, which has also been verified in the latest proposals of the emerging MPEG Compact Descriptors for Video Analysis (CDVA) standard <cit.>.

Although CNN descriptors achieve promising results in computer vision applications, their extremely heavy computational burden makes them strongly dependent on GPU platforms. For 1000 VGA-resolution images, the average running time of CNN descriptor extraction is about 144 ms and 36.5 ms for the NIP-VGG-16 and RMAC networks, which clearly exceeds that of CDVS on the GPU platform. In addition, CNN descriptor extraction also consumes far more memory than CDVS, as shown in Table <ref>. Hence, it is promising to combine CNN feature descriptors with handcrafted feature descriptors and to implement them harmoniously on GPU platforms, providing descriptors that are scalable in both computational resources and visual search accuracy.

§ CONCLUSION

We have revisited the merits of the MPEG-CDVS standard with respect to computational cost reduction, and have implemented a very fast CDVS encoder using hybrid GPU-CPU computing. Through thorough comparisons with other state-of-the-art visual descriptors on large-scale databases, the fast CDVS encoder achieves significant speedup over the CPU platform (more than 35 times) while maintaining competitive performance for image retrieval and matching. Furthermore, by incorporating the CDVS encoder with deep-learning-based approaches on the GPU platform, we have shown that handcrafted visual feature descriptors and CNN-based feature descriptors are complementary to some extent, and that the combination of CDVS descriptors and CNN descriptors achieves state-of-the-art visual search performance on the benchmarks. Especially towards real-time (mobile) visual search and augmented reality applications, how to harmoniously leverage the merits of highly efficient, low-complexity handcrafted descriptors and state-of-the-art CNN-based descriptors via GPU or GPU-CPU hybrid computing is a competitive and promising topic.

girod2011mobile1 B. Girod, V. Chandrasekhar, R. Grzeszczuk, and Y. A. Reznik, “Mobile visual search: Architectures, technologies, and the emerging mpeg standard,” IEEE MultiMedia, vol. 18, no. 3, pp. 86–94, 2011.

lowe2004distinctive D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International journal of computer vision, vol. 60, no. 2, pp.
91–110, 2004.bay2008speeded H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-up robust features (surf),” Computer vision and image understanding, vol. 110, no. 3, pp. 346–359, 2008.Rublee_ORB2011 E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “Orb: An efficient alternative to sift or surf,” in 2011 International Conference on Computer Vision, Nov 2011, pp. 2564–2571.Leutenegger_2011 S. Leutenegger, M. Chli, and R. Y. Siegwart, “Brisk: Binary robust invariant scalable keypoints,” in 2011 International Conference on Computer Vision, Nov 2011, pp. 2548–2555.duan2016overview L.-Y. Duan, V. Chandrasekhar, J. Chen, J. Lin, Z. Wang, T. Huang, B. Girod, and W. Gao, “Overview of the MPEG-CDVS standard,” IEEE Transactions on Image Processing, vol. 25, no. 1, pp. 179–194, 2016.mpeg_cdvs_standard “Information technology-Multimedia content description interface-Part 13: Compact descriptors for visual search,” ISO/IEC JTC1/SC29/WG11/N14956, Oct 2014.Chen_BFLoG2015 J. Chen, L. Y. Duan, F. Gao, J. Cai, A. C. Kot, and T. Huang, “A low complexity interest point detector,” IEEE Signal Processing Letters, vol. 22, no. 2, pp. 172–176, Feb 2015.garland2008parallel M. Garland, S. Le Grand, J. Nickolls, J. Anderson, J. Hardwick, S. Morton, E. Phillips, Y. Zhang, and V. Volkov, “Parallel computing experiences with cuda,” Micro, IEEE, vol. 28, no. 4, pp. 13–27, 2008.clara2008nvidia S. Clara, “Nvidia cuda compute unified device architecture: Programming guide version 1.1.”heymann2007sift S. Heymann, K. Müller, A. Smolic, B. Froehlich, and T. Wiegand, “SIFT implementation and optimization for general-purpose GPU,” International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, 2007.wu2007siftgpu C. Wu, “SiftGPU: A GPU implementation of scale invariant feature transform (SIFT),” 2007.rister2013fast B. Rister, G. Wang, M. Wu, and J. R. Cavallaro, “A fast and efficient SIFT detector using the mobile GPU,” in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, 2013, pp. 2674–2678.lee2016complexity C. Lee, C. E. Rhee, and H.-J. Lee, “Complexity Reduction by Modified Scale-Space Construction in SIFT Generation Optimized for a Mobile GPU,” IEEE Transactions on Circuits and Systems for Video Technology, 2016.cudasift CUDASIFT, “https://github.com/Celebrandil/CudaSift,” 2007.wang2013workload G. Wang, B. Rister, and J. R. Cavallaro, “Workload analysis and efficient OpenCL-based implementation of SIFT algorithm on a smartphone,” in Global Conference on Signal and Information Processing (GlobalSIP), 2013 IEEE, 2013, pp. 759–762.patlolla2015gpu D. Patlolla, S. Voisin, H. Sridharan, and A. Cheriyadat, “GPU accelerated textons and dense sift features for human settlement detection from high-resolution satellite imagery,” GeoComp, 2015.cornelis2008fast N. Cornelis and L. Van Gool, “Fast scale invariant feature detection and matching on programmable graphics hardware,” in Computer Vision and Pattern Recognition Workshops, 2008. CVPRW'08. IEEE Computer Society Conference on, 2008, pp. 1–8.ma2016gpu W. Ma, L. Cao, L. Yu, G. Long, and Y. Li, “GPU-FV: Realtime Fisher Vector and Its Applications in Video Monitoring,” arXiv preprint arXiv:1604.03498, 2016.krizhevsky2012imagenet A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems, 2012, pp. 1097–1105.zheng2016sift L. Zheng, Y. Yang, and Q. 
Tian, “SIFT meets CNN: a decade survey of instance retrieval,” arXiv preprint arXiv:1608.01807, 2016.jia2014caffe Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: Convolutional architecture for fast feature embedding,” in Proceedings of the 22nd ACM international conference on Multimedia.1em plus 0.5em minus 0.4emACM, 2014, pp. 675–678.abadi2016tensorflow M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin et al., “Tensorflow: Large-scale machine learning on heterogeneous distributed systems,” arXiv preprint arXiv:1603.04467, 2016.ISO2013_MPEGCDVS28076 D. Pau, E. Napoli, G. Lopez, E. Plebani, A. BRUNA, and D. SORENSEN, “Fourier transform Based interest point detector using LoG frequency response,” ISO/IEC JTC1/SC29/WG11/M28076, Jan 2013.ISO2013_MPEGCDVS28090 Z. Liu, Q. Zhou, and X. Guojun, “Huaweis Response to CE 4: Preliminary Results by Fourier Transform Based LOG,” ISO/IEC JTC1/SC29/WG11/M28090, Jan 2013.ISO2014_MPEGCDVS33159 C. Jie, L.-Y. Duan, T. Huang, W. Gao, A. C. Kot, M. Balestri, G. Francini, and S. Lepsøy, “CDVS CE1: A low complexity detector ALP_BFLoG,” ISO/IEC JTC1/SC29/WG11/M33159, Oct 2014.ISO2013_MPEGCDVS31399 C. Jie, L.-Y. Duan, T. Huang, and W. Gao, “Peking University Response to CE1: Improved BFLoG Interest Point Detector,” ISO/IEC JTC1/SC29/WG11/M31399, Oct 2013.ISO2013_MPEGCDVS30256 G. Francini, S. Lepsoy, and M. Balestri, “CDVS: Telecom Italias response to CE1interest point detection,” ISO/IEC JTC1/SC29/WG11/M30256, Jul 2013.carr1999option P. Carr and D. Madan, “Option valuation using the fast fourier transform,” Journal of computational finance, vol. 2, no. 4, pp. 61–73, 1999.ISO2012_MPEGCDVS23822 L.-T. Cheok, J. Song, and K. Park, “CDVS: Telecom Italias response to CE1interest point detection,” ISO/IEC JTC1/SC29/WG11/M23822, Feb 2012.ISO2012_MPEGCDVS23929 W. Chunyu, L.-Y. Duan, C. Jie, T. Huang, and W. Gao, “Reference results of key point reduction,” ISO/IEC JTC1/SC29/WG11/M23929, Feb 2012.francini2013selection G. Francini, S. Lepsøy, and M. Balestri, “Selection of local features for visual search,” Signal Processing: Image Communication, vol. 28, no. 4, pp. 311–322, 2013.ISO2012_MPEGCDVS12367 G. Francini, S. Lepsoy, and M. Balestri, “Description of Test Model under Consideration for CDVS,” ISO/IEC JTC1/SC29/WG11/N12367, Feb 2012.ISO2012_MPEGCDVS24737 ——, “Telecom Italia Response to the CDVS Core Experiment 2,” ISO/IEC JTC1/SC29/WG11/M24737, Apr 2012.jegou2011product H. Jegou, M. Douze, and C. Schmid, “Product quantization for nearest neighbor search,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 33, no. 1, pp. 117–128, 2011.ISO2012_MPEGCDVS22806 J. Chen, L.-Y. Duan, C. Wang, T. Huang, and W. Gao, “Peking Univ. Response to CE 2: Improvements of the SCFV Global Descriptor,” ISO/IEC JTC1/SC29/WG11/M22806, Oct 2011.ISO2012_MPEGCDVS24780 C. Jie, L.-Y. Duan, T. Huang, and W. Gao, “CDVS:CE2: Multi-Stage Vector Quantization for Low Memory Descriptors,” ISO/IEC JTC1/SC29/WG11/M24780, Apr 2012.chen2011residual D. Chen, S. Tsai, V. Chandrasekhar, G. Takacs, H. Chen, R. Vedantham, R. Grzeszczuk, and B. Girod, “Residual enhanced visual vectors for on-device image matching,” in Signals, Systems and Computers (ASILOMAR), 2011 Conference Record of the Forty Fifth Asilomar Conference on.1em plus 0.5em minus 0.4emIEEE, 2011, pp. 850–854.ISO2012_MPEGCDVS23578 D. Chen, V. Chandrasekhar, G. Takacs, S. Tsai, M. Makar, R. Vedantham, R. 
Grzeszczuk, and B. Girod, “Improvements to the Test Model Under Consideration with a Global Descriptor,” ISO/IEC JTC1/SC29/WG11/M23578, Feb 2012.husain2016improving S. S. Husain and M. Bober, “Improving large-scale image retrieval through robust aggregation of local descriptors,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.ISO2012_MPEGCDVS31426 M. Bober, S. Husain, S. Paschalakis, and K. Wnukowicz, “Improving performance and usability of CDVS TM7 with a Robust Visual Descriptor (RVD) - CE 2 Proposal from University of Surrey and Visual Atoms,” ISO/IEC JTC1/SC29/WG11/M31426, Oct 2013.lin2014rate J. Lin, L.-Y. Duan, Y. Huang, S. Luo, T. Huang, and W. Gao, “Rate-adaptive compact fisher codes for mobile visual search,” IEEE Signal Processing Letters, vol. 21, no. 2, pp. 195–198, 2014.ISO2012_MPEGCDVS31401 J. Lin, L.-Y. Duan, Z. Wang, T. Huang, and W. Gao, “Peking Univ. Response to CE 2: Improvements of the SCFV Global Descriptor,” ISO/IEC JTC1/SC29/WG11/M31401, Oct 2013.ISO2011_MPEGCDVS “Evaluation framework for compact descriptors for visual search,” ISO/IEC JTC1/SC29/WG11/N12202, Jul 2011.jegou2008hamming H. Jegou, M. Douze, and C. Schmid, “Hamming embedding and weak geometric consistency for large scale image search,” Computer Vision–ECCV 2008, pp. 304–317, 2008.Vijay2013_PatchCDVS CDVS Patches, 2013. [Online]. Available: <http://blackhole1.stanford.edu/vijayc/cdvs patches.tar> winder2009picking S. Winder, G. Hua, and M. Brown, “Picking the best daisy,” in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on.1em plus 0.5em minus 0.4emIEEE, 2009, pp. 178–185.Alcantarilla2013Fast P. Alcantarilla, J. Nuevo, and A. Bartoli, “Fast explicit diffusion for accelerated features in nonlinear scale spaces,” in British Machine Vision Conference, 2013, pp. 13.1–13.11.Levi_2016 G. Levi and T. Hassner, “Latch: Learned arrangements of three patch codes,” in 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), March 2016, pp. 1–9.Lou_DCC2017 Y. Lou, Y. Bai, J. Lin, S. Wang, J. Chen, V. Chandrasekhar, L.-Y. Duan, T. Huang, A. C. Kot, and W. Gao, “Compact deep invariant descriptors for video retrieval,” in 2017 Data Compression Conference, 2017.tolias2015particular G. Tolias, R. Sicre, and H. Jégou, “Particular object retrieval with integral max-pooling of cnn activations,” arXiv preprint arXiv:1511.05879, 2015.ISO2012_MPEGCDVS39219 Y. Lou, F. Gao, Y. Bai, J. Lin, S. Wang, J. Chen, C. Gan, V. Chandrasekhar, L. Duan, T. Huang, and A. Kot, “Improved retrieval and matching with CNN feature for CDVA,” ISO/IEC JTC1/SC29/WG11/M39219, Oct 2016.
Hazim Shakhatreh and Abdallah Khreishah
Department of Electrical and Computer Engineering, New Jersey Institute of Technology

Bo Ji
Department of Computer and Information Sciences, Temple University

Providing Wireless Coverage to High-rise Buildings Using UAVs
December 30, 2023
=============================================================

Unmanned aerial vehicles (UAVs) can be used as aerial wireless base stations when cellular networks go down. Prior studies on UAV-based wireless coverage typically consider an Air-to-Ground path loss model, which assumes that the users are outdoor and located on a 2D plane. In this paper, we propose using a single UAV to provide wireless coverage for indoor users inside a high-rise building under disaster situations (such as earthquakes or floods), when cellular networks are down. First, we present a realistic Outdoor-Indoor path loss model and describe the tradeoff introduced by this model. Then, we study the problem of efficient UAV placement, where the objective is to minimize the total transmit power required to cover the entire high-rise building. The formulated problem is non-convex and is generally difficult to solve. To that end, we consider two cases of practical interest and provide efficient solutions to the formulated problem under these cases. In the first case, we aim to find the minimum transmit power such that an indoor user with the maximum path loss can be covered. In the second case, we assume that the locations of indoor users are symmetric across the dimensions of each floor.

Unmanned aerial vehicles, Outdoor-to-Indoor path loss model.

§ INTRODUCTION

UAVs can be used to provide wireless coverage during emergency cases, where each UAV serves as an aerial wireless base station when the cellular network goes down <cit.>. They can also be used to supplement the ground base station in order to provide better coverage and higher data rates for the users <cit.>. To enable the use of a UAV as an aerial wireless base station, the authors in <cit.> presented an Air-to-Ground path loss model that has helped academic researchers formulate many important problems. The authors of <cit.> utilized this model to study the problem of UAV placement, where the objective is to minimize the number of UAVs for covering a given area. The authors of <cit.> described the tradeoff in this model: at a low altitude, the path loss between the UAV and the ground user decreases, while the probability of line-of-sight links also decreases; on the other hand, at a high altitude, line-of-sight connections exist with a high probability, while the path loss increases. However, this model assumes that all users are outdoor and that the location of each user can be represented by an outdoor 2D point. These assumptions limit the applicability of the model when one needs to consider indoor users. Providing good wireless coverage for indoor users is very important. According to an Ericsson report <cit.>, people are indoors 90% of the time, and 80% of mobile Internet access traffic also happens indoors <cit.>. To guarantee wireless coverage, service providers face several key challenges, including providing service to a large number of indoor users and the ping-pong effect due to interference from nearby macro cells <cit.>. In this paper, we propose using a single UAV to provide wireless coverage for users inside a high-rise building during emergency cases, when the cellular network service is not available.
To the best of our knowledge, this is the first work that proposes using a UAV to provide wireless coverage for indoor users. We summarize our main contributions as follows. First, we adopt an Outdoor-Indoor path loss model <cit.>, certified by the ITU, and show the tradeoff introduced by this model. Second, we formulate the problem of efficient UAV placement, where the objective is to minimize the total transmit power required to cover the entire high-rise building. Third, since the formulated problem is non-convex and is generally difficult to solve, we consider two cases of practical interest and provide efficient solutions to the formulated problem under these cases. In the first case, we aim to find the minimum transmit power such that an indoor user with the maximum path loss can be covered. In the second case, we assume that the locations of the indoor users are symmetric across the dimensions of each floor, and propose a gradient descent algorithm for finding the efficient location of the UAV.

The rest of this paper is organized as follows. In Section II, we describe the system model and a path loss model suitable for studying indoor wireless coverage. In Section III, we formulate the problem of UAV placement with the objective of minimizing the transmit power for covering the entire building. In Section IV, we describe the tradeoff introduced by the path loss model and show how to find the efficient location of the UAV such that the total transmit power is minimized in two scenarios of practical interest. Finally, we present our numerical results in Section V and make concluding remarks in Section VI.

§ SYSTEM MODEL

§.§ System Settings

Let (x_UAV,y_UAV,z_UAV) denote the 3D location of the UAV. We assume that all users are located inside a high-rise building, as shown in Figure <ref>, and use (x_i,y_i,z_i) to denote the location of user i. The dimensions of the high-rise building are [0,x_b] × [0,y_b] × [0,z_b]. Also, let d_3D,i be the 3D distance between the UAV and indoor user i, let θ_i be the incident angle, and let d_2D,i be the 2D indoor distance of user i inside the building.

§.§ Outdoor-Indoor Path Loss Model

The Air-to-Ground path loss model presented in <cit.> is not appropriate when we consider wireless coverage for indoor users, because it assumes that all users are outdoor and located at 2D points. In this paper, we adopt the Outdoor-Indoor path loss model certified by the ITU <cit.>. The path loss is given as follows:

L_i = L_F + L_B + L_I = (wlog_10d_3D,i + wlog_10f_Ghz + g_1) + (g_2 + g_3(1-cosθ_i)^2) + g_4d_2D,i,

where L_F is the free space path loss, L_B is the building penetration loss, and L_I is the indoor loss. In this model, we have w=20, g_1=32.4, g_2=14, g_3=15, g_4=0.5 <cit.>, and f_Ghz is the carrier frequency (2 GHz). Note that there is a key tradeoff in the above model when the horizontal distance between the UAV and a user changes. When this horizontal distance increases, the free space path loss L_F increases as d_3D,i increases, while the building penetration loss L_B decreases as the incident angle θ_i decreases. Similarly, when this horizontal distance decreases, the free space path loss L_F decreases as d_3D,i decreases, while the building penetration loss L_B increases as the incident angle θ_i increases.
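For concreteness, the following host-side sketch evaluates the Outdoor-Indoor path loss above for one user, with the model constants stated in the text; the function name is our own.

```cuda
#include <cmath>

double pathLossDb(double d3D, double theta /* incident angle, rad */, double d2DIndoor)
{
    const double w = 20.0, g1 = 32.4, g2 = 14.0, g3 = 15.0, g4 = 0.5;
    const double fGhz = 2.0;                                  // carrier frequency

    double LF = w * std::log10(d3D) + w * std::log10(fGhz) + g1;  // free space loss
    double c  = 1.0 - std::cos(theta);
    double LB = g2 + g3 * c * c;                                  // penetration loss
    double LI = g4 * d2DIndoor;                                   // indoor loss
    return LF + LB + LI;                                          // L_i in dB
}
```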
§ PROBLEM FORMULATION

Consider a transmission between a UAV located at (x_UAV, y_UAV, z_UAV) and an indoor user i located at (x_i, y_i, z_i). The rate for user i is given by:

C_i = B log_2(1 + P_t,i / (L_i N))

where B is the transmission bandwidth of the UAV, P_t,i is the UAV transmit power to indoor user i, L_i is the path loss between the UAV and indoor user i, and N is the noise power. In this paper, we do not explicitly model interference; instead, we implicitly model it as noise. Let us assume that each indoor user has a channel with bandwidth equal to B/M, where M is the number of users inside the building, and that the rate requirement for each user is v. Then the minimum power required to satisfy this rate for each user is given by:

P_t,i,min = (2^(vM/B) − 1) · N · L_i

Our goal is to find the efficient location of the UAV such that the total transmit power required to satisfy the rate requirement of each indoor user is minimized. The objective function can be represented as:

P = ∑_{i=1}^{M} (2^(vM/B) − 1) · N · L_i,

where P is the UAV total transmit power. Since (2^(vM/B) − 1) · N is constant, our problem can be formulated as:

min_{x_UAV, y_UAV, z_UAV} L_Total = ∑_{i=1}^{M} L_i
subject to
x_min ≤ x_UAV ≤ x_max,
y_min ≤ y_UAV ≤ y_max,
z_min ≤ z_UAV ≤ z_max,
L_Total ≤ L_max

Here, the first three constraints represent the minimum and maximum allowed values for x_UAV, y_UAV and z_UAV. In the fourth constraint, L_max is the maximum allowable path loss and equals P_t,max / ((2^(vM/B) − 1) · N), where P_t,max is the maximum transmit power of the UAV. Finding the optimal placement of the UAV is generally difficult because the problem is non-convex. Therefore, in the next section, we consider two special cases of practical interest and derive efficient solutions for the formulated problem under these cases.

§ EFFICIENT PLACEMENT OF UAV

Due to the intractability of the problem, we study the efficient placement of the UAV under two cases. In the first case, we find the minimum transmit power required to cover the building based on the location that has the maximum path loss inside the building. In the second case, we assume that the locations of indoor users are symmetric across the dimensions of each floor, and propose a gradient descent algorithm for finding the efficient location of the UAV.

§.§ Case One: The worst location in the building

In this case, we find the minimum transmit power required to cover the building based on the location that has the maximum path loss inside the building. The location that has the maximum path loss in the building is the location that has maximum d_3D,i, maximum θ_i, and maximum d_2D,i. The locations that have the maximum path loss are located at the corners of the highest and lowest floors, at points (x_b, 0, 0), (x_b, y_b, 0), (x_b, 0, z_b) and (x_b, y_b, z_b) (see Figure 1). Since the locations that have the maximum path loss inside the building are the corners of the highest and lowest floors, we place the UAV at the middle of the building (y_UAV = 0.5 y_b and z_UAV = 0.5 z_b). Then, given the Outdoor-to-Indoor path loss model, we need to find the optimal horizontal point x_UAV for the UAV such that the total transmit power required to cover the building is minimized. Now, when the horizontal distance between the UAV and this location increases, the free space path loss increases as d_3D,i increases, while the building penetration loss decreases because the incident angle θ_i decreases.
Similarly, when the horizontal distance decreases, the free space path loss decreases as d_3D,i decreases, while the building penetration loss increases as the incident angle increases. In Figure <ref>, we demonstrate the minimum transmit power required to cover buildings of different heights, where the minimum transmit power required to cover the building is given by:

P_t,min(dB) = P_r,th + L_i
P_r,th(dB) = N + γ_th

Here, P_r,th is the minimum received power, N is the noise power (equal to −120 dBm), γ_th is the threshold SNR (equal to 10 dB), y_b = 50 meters, and x_b = 20 meters. The numerical results show that there is an optimal horizontal point that minimizes the total transmit power required to cover a building. We can also notice that when the height of the building increases, the optimal horizontal distance also increases; this compensates for the increased building penetration loss due to an increased incident angle. In Theorem 1, we characterize the optimal incident angle θ that minimizes the transmit power required to cover the building. This helps us find the optimal horizontal distance between the UAV and the building.

Theorem 1. When we place the UAV at the middle of the building, the optimal incident angle θ that minimizes the transmit power required to cover the building equals 48.654°, and the optimal horizontal distance between the UAV and the building equals ((0.5 z_b tan(48.654°))^2 − (0.5 y_b)^2)^0.5 − x_b.

In order to find the optimal horizontal point, we rewrite the equation that represents the path loss in terms of the incident angle θ_i and the altitude difference between the UAV and user i (Δh_i):

L_i(Δh_i, θ_i) = w log_10(Δh_i / sin θ_i) + w log_10 f_GHz + g_1 + g_2 + g_3 (1 − cos θ_i)^2 + g_4 d_2D,i

We know that the altitude difference between the UAV and the location that has the maximum path loss is constant for a given building. Now, when we take the first derivative with respect to θ and set it to zero, we get:

dL(θ)/dθ = (w/ln 10) · (−Δh cos θ / sin^2 θ) / (Δh / sin θ) + 2 g_3 sin θ (1 − cos θ) = 0
dL(θ)/dθ = −(w/ln 10)(cos θ / sin θ) + 2 g_3 sin θ (1 − cos θ) = 0
(w/ln 10) cos θ = 2 g_3 sin^2 θ (1 − cos θ)
(w/ln 10) cos θ = 2 g_3 (1 − cos^2 θ)(1 − cos θ)
2 g_3 cos^3 θ − 2 g_3 cos^2 θ − (w/ln 10 + 2 g_3) cos θ + 2 g_3 = 0

To prove that the function is convex, we take the second derivative and get:

d^2 L/dθ^2 = (w/ln 10)(1/sin^2 θ) + 2 g_3 cos θ (1 − cos θ) + 2 g_3 sin^2 θ > 0 for 0 < θ ≤ 90°

Equation (9) has only one valid solution, which is cos θ = 0.6606; the invalid solutions are cos θ = 1.4117 and cos θ = −1.0723. Therefore, the optimal incident angle between the UAV and the location that has the maximum path loss inside the building is 48.654°. In order to find the optimal horizontal distance between the UAV and the building, we apply the Pythagorean theorem. The optimal horizontal distance between the UAV and the location that has maximum path loss inside the building can be represented as:

d_H = ((0.5 z_b tan(48.654°))^2 − (0.5 y_b)^2)^0.5

In order to find the optimal horizontal distance between the UAV and the building, we subtract x_b from d_H and get:

d_opt = ((0.5 z_b tan(48.654°))^2 − (0.5 y_b)^2)^0.5 − x_b

In Figure <ref>, we plot the transmit power required to cover the building as a function of the incident angle; we can notice that the optimal angle characterized in Theorem 1 gives the minimum transmit power required to cover the building.
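Theorem 1 reduces the search for the optimal angle to a cubic in cos θ, which is easy to check numerically. A short sketch (ours, assuming numpy) that recovers cos θ = 0.6606 and evaluates d_opt for a sample building:

import numpy as np

W, G3 = 20.0, 15.0  # model constants from the path loss model above

# the cubic of Theorem 1: 2*g3*c^3 - 2*g3*c^2 - (w/ln10 + 2*g3)*c + 2*g3 = 0
coeffs = [2 * G3, -2 * G3, -(W / np.log(10) + 2 * G3), 2 * G3]
roots = np.roots(coeffs)
# keep the unique root that is a valid cosine value
c = next(r.real for r in roots if abs(r.imag) < 1e-9 and 0 < r.real < 1)
theta = np.degrees(np.arccos(c))
print(f"cos(theta) = {c:.4f}, theta = {theta:.3f} deg")   # ~0.6606, ~48.654 deg

def d_opt(x_b, y_b, z_b):
    """Efficient horizontal distance between the UAV and the building
    (Theorem 1), for a UAV placed at mid-height and mid-width."""
    d_h = np.sqrt((0.5 * z_b * np.tan(np.radians(theta))) ** 2 - (0.5 * y_b) ** 2)
    return d_h - x_b

print(d_opt(x_b=20.0, y_b=50.0, z_b=100.0))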
§.§ Case Two: The locations of indoor users are symmetric across the xy and xz planes

In this case, we assume that the locations of indoor users are symmetric across the xy-plane ((0,0,0.5z_b), (x_b,0,0.5z_b), (x_b,y_b,0.5z_b), (0,y_b,0.5z_b)) and the xz-plane ((0,0.5y_b,0), (x_b,0.5y_b,0), (x_b,0.5y_b,z_b), (0,0.5y_b,z_b)). First, we prove that z_UAV = 0.5z_b and y_UAV = 0.5y_b when the locations of indoor users are symmetric across the xy and xz planes; then we use the gradient descent algorithm to find the efficient x_UAV that minimizes the transmit power required to cover the building. Our simulation results show that there is only one local minimum point, and the gradient descent algorithm successfully converges to this point.

Theorem 2. When the locations of indoor users are symmetric across the xy and xz planes, the efficient z_UAV that minimizes the power required to cover the indoor users equals 0.5z_b.

Proof. Let m_1 denote the users with altitude lower than the UAV altitude and m_2 the users with altitude higher than the UAV altitude. Then:

d_3D,i = ((x_UAV − x_i)^2 + (y_UAV − y_i)^2 + (z_UAV − z_i)^2)^0.5 for z_UAV > z_i
d_3D,i = ((x_UAV − x_i)^2 + (y_UAV − y_i)^2 + (z_i − z_UAV)^2)^0.5 for z_UAV < z_i

Also, writing h_i = ((x_UAV − x_i)^2 + (y_UAV − y_i)^2)^0.5 for the horizontal distance between the UAV and user i, in both cases:

cos θ_i = h_i / d_3D,i

Rewrite the total path loss as:

L_Total = ∑_{i=1}^{m_1} (w log_10 d_3D,i + g_3 (1 − cos θ_i)^2) + ∑_{i=1}^{m_2} (w log_10 d_3D,i + g_3 (1 − cos θ_i)^2) + K

where K = ∑_{i=1}^{M} (w log_10 f_GHz + g_1 + g_2 + g_4 d_2D,i). Now, taking the derivative with respect to z_UAV, we get:

dL_Total/dz_UAV = ∑_{i=1}^{m_1} [ (w/ln 10)(z_UAV − z_i)/d_3D,i^2 + 2 g_3 (1 − h_i/d_3D,i) · h_i (z_UAV − z_i)/d_3D,i^3 ] − ∑_{i=1}^{m_2} [ (w/ln 10)(z_i − z_UAV)/d_3D,i^2 + 2 g_3 (1 − h_i/d_3D,i) · h_i (z_i − z_UAV)/d_3D,i^3 ]

When the locations of indoor users are symmetric across the xy and xz planes, every user below the UAV has a mirror user above it at the same horizontal distance, so the two sums cancel and the derivative above equals zero when the UAV altitude equals half the building height.

Theorem 3. When the locations of indoor users are symmetric across the xy and xz planes, the efficient y_UAV that minimizes the power required to cover the indoor users equals 0.5y_b.

The proof of Theorem 3 is similar to that of Theorem 2. The question now is how to find the efficient horizontal point x_UAV that minimizes the total transmit power. In order to find this point, we use the gradient descent algorithm <cit.>:

x_UAV,n+1 = x_UAV,n − a · (dL_Total/dx_UAV)_n

where a is the step size and

dL_Total/dx_UAV = ∑_{i=1}^{M} [ −(w/ln 10)(x_i − x_UAV)/d_3D,i^2 + 2 g_3 (1 − h_i/d_3D,i)(x_i − x_UAV)(1/(h_i d_3D,i) − h_i/d_3D,i^3) ]

with d_3D,i = ((x_i − x_UAV)^2 + (y_i − y_UAV)^2 + (z_i − z_UAV)^2)^0.5.
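For readers who want to experiment, here is a minimal gradient descent sketch (ours, not the paper's Algorithm 1). It is an illustration only: a central difference stands in for the closed-form derivative above, the building geometry and symmetric user grid are invented for the example, and the step size is not tuned.

import numpy as np

W, G1, G2, G3, G4, F_GHZ = 20.0, 32.4, 14.0, 15.0, 0.5, 2.0

def total_path_loss(x_uav, users, y_uav, z_uav):
    """L_Total in dB for a UAV at (x_uav, y_uav, z_uav); `users` is an array
    of (x, y, z) rows.  Facade assumed at x = 0 with the UAV at x < 0, so the
    indoor 2D distance of user i is its x coordinate."""
    dx, dy, dz = (x_uav - users[:, 0], y_uav - users[:, 1], z_uav - users[:, 2])
    d3d = np.sqrt(dx**2 + dy**2 + dz**2)
    cos_t = np.sqrt(dx**2 + dy**2) / d3d          # cos(theta_i), as in the proof above
    return np.sum(W * np.log10(d3d) + W * np.log10(F_GHZ) + G1
                  + G2 + G3 * (1 - cos_t)**2 + G4 * users[:, 0])

def descend_x(users, y_uav, z_uav, x0=-100.0, step=0.05, iters=2000, eps=1e-3):
    """Gradient descent on the single variable x_UAV; the step size is
    illustrative and may need tuning for other geometries."""
    x = x0
    for _ in range(iters):
        grad = (total_path_loss(x + eps, users, y_uav, z_uav)
                - total_path_loss(x - eps, users, y_uav, z_uav)) / (2 * eps)
        x -= step * grad
    return x

# toy building 20 m deep, 50 m wide, 150 m tall; 20 users per floor, all at
# depth 10 m, placed symmetrically across both planes as Case Two assumes
users = np.array([[10.0, y, z] for z in np.arange(3.0, 150.0, 3.0)
                  for y in np.linspace(1.0, 49.0, 20)])
print(descend_x(users, y_uav=25.0, z_uav=75.0))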
The pseudo code of this algorithm is shown in Algorithm 1.

§ NUMERICAL RESULTS

In this section, we verify our results for the second case. We assume that each floor contains 20 users. Then, we apply the gradient descent algorithm to find the efficient horizontal point x_UAV that minimizes the transmit power required to cover the indoor users. Table I lists the parameters used in the numerical analysis. In Figure <ref>, we find the efficient horizontal points for buildings of different heights. In the upper part of the figures, we show the total path loss at different locations (x_UAV, 0.5y_b, z_UAV), and in the lower part of the figures, we find the efficient horizontal point x_UAV that results in the minimum total path loss using the gradient descent algorithm. As can be seen from the figures, when the height of the building increases, the efficient horizontal point x_UAV increases; this compensates for the increased building penetration loss due to an increased incident angle. In Figure <ref>, we investigate the impact of different building widths (i.e., x_b). We fix the building height at 250 meters and vary the building width. As can be seen from the figures, when the building width increases, the efficient horizontal distance decreases; this compensates for the increased indoor path loss due to an increased building width. In <cit.>, we validate the simulation results using the particle swarm optimization algorithm and study the problem when the locations of indoor users are uniformly distributed on each floor.

§ CONCLUSION

In this paper, we study the problem of providing wireless coverage for users inside a high-rise building using a single UAV. First, we demonstrate why the Air-to-Ground path loss model is not appropriate for considering indoor users with 3D locations. Then, we present the Outdoor-to-Indoor path loss model, show the tradeoff in this model, and study the problem of minimizing the transmit power required to cover the building. Due to the intractability of the problem, we study the efficient placement of the UAV under two cases. In the first case, we find the minimum transmit power required to cover the building based on the location that has the maximum path loss inside the building. In the second case, we assume that the locations of indoor users are symmetric across the xy and xz planes, and we use the gradient descent algorithm to find the efficient placement of the UAV. In order to model more realistic scenarios, we will consider different types of user distributions in our future work. We will also study the problem of providing wireless coverage using multiple UAVs.

§ ACKNOWLEDGMENT

This work was supported in part by the NSF under Grants CNS-1647170 and CNS-1651947.
http://arxiv.org/abs/1705.09770v1
{ "authors": [ "Hazim Shakhatreh", "Abdallah Khreishah", "Bo Ji" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20170527055652", "title": "Providing Wireless Coverage to High-rise Buildings Using UAVs" }
Atom Interferometry for Dark Contents of the Vacuum Searches

December 30, 2023
=====================

§ INTRODUCTION

Atom interferometry is a high precision measurement technique [1]. Interference via atoms rather than light provides a theoretical 10^11 increase in the sensitivity of gyroscopes [2], and has achieved the world's most precise measurements of local gravity [3]. Atom interferometry can also, amongst other things, be used to measure Newton's constant [4] and the fine-structure constant [5], to test Lorentz invariance [6], to test dark sector physics, and to serve as a precision space-time sensor [7]. Here we present the initial performance of a drop-topology atom interferometer that has been developed for a search into the dark contents of the vacuum [8] and as a test stand for inertial sensing applications such as navigation and gravity scanning. Our current knowledge about the nature of the dark contents of the vacuum, such as dark energy, is entirely based upon cosmological observations. We intend to use atom interferometry as a possible probe of the dark contents of the vacuum on the laboratory scale [8]. An atom interferometer has been developed at low cost by employing common-off-the-shelf (COTS) components with minor modifications, using ultra-cold ^85Rb as the atomic medium and a simplified two-laser optical system. Drawing on extensive experience and autonomy in complex radiation detection environments, bespoke and robust DAQ, control and detection systems have been developed for the apparatus. Using stimulated Raman transitions, interference fringes have been observed. Development is underway, with upgrades and the presented results influencing the decisions and directions for future improvements.

§ THEORY

The ground states of alkali metals such as ^85Rb are split into two hyperfine states, which may be described using a semi-classical model of a two-level system [9]. Interaction of the atoms with coherent electromagnetic radiation allows a superposition of these states to be created and reliably controlled. Light-pulse atom interferometers measure the interference between the two states due to phase differences accrued by the light-atom interaction. Light pulses with a frequency tuned to a two-photon transition between the states can coherently manipulate the state amplitudes. Varying the time the pulse impinges on the atoms causes the probability to be in each state to oscillate, a phenomenon known as Rabi oscillations. A pulse of characteristic duration, a 'π/2' pulse, creates an equal superposition of the two states. A pulse of twice this length, a π pulse, acts to invert the state population. By inserting a delay of time T between two π/2 pulses, the state populations change to produce interference fringes as the two-photon transition frequency is varied. These fringes are known as Ramsey fringes and are caused by a time-dependent phase difference being accumulated between the pulses. This interference behaviour is only observable in systems which maintain coherence over the duration of the pulse sequence, requiring the atoms to be at extremely low temperatures. For a more comprehensive description see [9].
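The two-level dynamics described above are straightforward to simulate. The following Python sketch (ours; the decoherence and single-photon scattering discussed in the results below are ignored) composes the rotating-frame evolution matrices for a π/2 - T - π/2 Ramsey sequence; the 80 μs π/2 pulse length and T = 0.8 ms are taken from the results below, while the sample detunings are illustrative.

import numpy as np

def rabi_probability(t, omega, delta):
    """Excited-state probability for a two-level atom driven for time t at
    Rabi frequency omega and detuning delta (generalised Rabi formula)."""
    w_eff = np.sqrt(omega**2 + delta**2)
    return (omega / w_eff)**2 * np.sin(w_eff * t / 2)**2

def ramsey_fringe(delta, omega, tau, T):
    """Transition probability after a pi/2 - free evolution T - pi/2
    sequence, composing rotating-frame evolution matrices."""
    w_eff = np.sqrt(omega**2 + delta**2)
    c, s = np.cos(w_eff * tau / 2), np.sin(w_eff * tau / 2)
    pulse = np.array([[c - 1j * delta / w_eff * s, -1j * omega / w_eff * s],
                      [-1j * omega / w_eff * s, c + 1j * delta / w_eff * s]])
    free = np.diag([np.exp(-1j * delta * T / 2), np.exp(1j * delta * T / 2)])
    U = pulse @ free @ pulse
    return abs(U[1, 0])**2

tau = 80e-6                       # pi/2 pulse length quoted in the results
omega = (np.pi / 2) / tau         # Rabi frequency giving a pi/2 pulse
for df in (0.0, 200.0, 625.0):    # two-photon detunings in Hz
    print(df, ramsey_fringe(2 * np.pi * df, omega, tau, T=0.8e-3))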
§ METHOD

Initially 10^8 ^85Rb atoms are cooled within a magneto-optical trap (MOT) to achieve micro-Kelvin temperatures. An optical molasses is formed, from which the atoms are released at a temperature of 15 μK. The temperature is limited by deliberately not cancelling the vertical component of the Earth's magnetic field. This field thereby acts as the axis of quantisation for the apparatus and separates the Zeeman substates. The atoms then fall freely under gravity to the interferometry region. Two phase-locked laser beams with a stable frequency offset close to the hyperfine splitting ν_HFS = 3.0357 GHz [10] are required to coherently manipulate the state population. These drive stimulated Raman transitions between the F = 2 and F = 3 hyperfine states of the ^85Rb ground state. These beams are generated using an acousto-optic modulator (AOM) which splits the carrier into two frequency components [11]. Control and DAQ for the interferometer are handled by a combination of a System on Chip (SoC) and FPGAs, allowing an interferometry sequence to be run repetitively over a long time period. The system includes frequency generation in the MHz range to control the AOMs, and the frequencies can be chirped when necessary (providing compensation for the Doppler shift as the atoms accelerate under gravity).

§ RESULTS

A search for Rabi oscillations was performed using co-propagating Raman beams. This was done by varying the total interrogation time of the atoms whilst keeping the laser intensity constant. The frequency difference between the two Raman beams that maximizes the two-photon transition rate was experimentally determined, with the results in figure 1 showing damped Rabi oscillations at various detunings from this resonance. The damping is due to far-from-resonance single-photon scattering causing decoherence and a linear increase in the state population; the effect of this is subtracted in the right-hand plots of figure 1. After 500 μs no oscillations are visible, indicating the coherence limit of the present Raman system. The data are fit with a sinusoid and an exponential damping term. For a detuning equal to zero, the π/2 pulse length is determined to be approximately 80 μs. Adding a time delay T = 0.8 ms between two π/2 pulses demonstrates Ramsey interference fringes, as shown in figure 2. The central position is shifted by 200 Hz from the expected hyperfine splitting, consistent with an AC Stark shift. Figure 3 shows the central fringe for three different pulse separations. The positions of the central peaks vary by approximately 200 Hz, less than 1 part in 10^7. The fringe height did not change as a function of the pulse separation time T, displaying no loss of coherence over these time periods.

§ SUMMARY

We have developed a low-cost ^85Rb atom interferometer, successfully realising cold atom preparation, Raman population transfer, and high contrast interference fringes, all without the need for magnetic state preparation. Upgrades are currently underway towards realizing the experiment's long-term goals.

§ ACKNOWLEDGEMENTS

The authors would like to thank Themis Bowcock and the particle physics group at the University of Liverpool, STFC Daresbury Laboratory and the Cockcroft Institute for their support and encouragement, and Yuri Ovchinnikov and the NPL for useful discussions. The work was supported by the Royal Society and AWE Ltd. We would like to thank in particular Joseph Perl for their help and encouragement in this work.

References

M. Kasevich and S. Chu, Atomic interferometry using stimulated Raman transitions, Phys. Rev. Lett. 67 (1991) 181.
P. Berman, Atom Interferometry, Academic Press, 1997.
T. L. Gustavson, et al., Precision Rotation Measurements with an Atom Interferometer Gyroscope, Phys. Rev. Lett. 78 (1997) 2046.
S. M. Dickerson, et al., Multiaxis Inertial Sensing with Long-Time Point Source Atom Interferometry, Phys. Rev. Lett. 111 (2013) 083001.
G. Rosi, et al., Precision measurement of the Newtonian gravitational constant using cold atoms, Nature 510 (2014) 518.
M. Cadoret, et al., Combination of Bloch Oscillations with a Ramsey-Bordé Interferometer: New Determination of the Fine Structure Constant, Phys. Rev. Lett. 101 (2008) 230801.
K.-Y. Chung, et al., Atom interferometry tests of local Lorentz invariance in gravity and electrodynamics, Phys. Rev. D 80 (2009) 016002.
Quantum Sensors at the Intersections of Fundamental Science, Quantum Information Science & Computing, Report of DOE Roundtable (2016).
R. J. Adler, H. Mueller, and M. L. Perl, A terrestrial search for dark contents of the vacuum, such as dark energy, using atom interferometry, Int. J. Mod. Phys. A 26 (2011) 4959.
C. J. Foot, Atomic Physics, Oxford Master Series in Atomic, Optical and Laser Physics, 2005.
D. A. Steck, Rubidium 85 D Line Data, available online at http://steck.us/alkalidata (revision 2.1.6, 20 September 2013).
J. I. Soos, R. G. Rosemeier, and J. Rosenbaum, Proceedings of the Ninth International Conference on Lasers and Applications (1987) 488.
http://arxiv.org/abs/1705.09376v1
{ "authors": [ "O. Burrow", "A. Carroll", "S. Chattopadhyay", "J. Coleman", "G. Elertas", "J. Heffer", "C. Metelko", "R. Moore", "D. Morris", "M. Perl", "J. Ralph", "J. Tinsley" ], "categories": [ "physics.ins-det", "physics.atom-ph" ], "primary_category": "physics.ins-det", "published": "20170525215049", "title": "Atom Interferometry for Dark Contents of the Vacuum Searches" }
A Demazure crystal construction for Schubert polynomials

Sami Assaf
Department of Mathematics, University of Southern California, 3620 S. Vermont Ave., Los Angeles, CA 90089-2532, U.S.A.
[email protected]

Anne Schilling
Department of Mathematics, UC Davis, One Shields Ave., Davis, CA 95616-8633, U.S.A.
[email protected]

2010 Mathematics Subject Classification: Primary 14N15, 05E10; Secondary 05A05, 05E05, 05E18, 20G42

Abstract. Stanley symmetric functions are the stable limits of Schubert polynomials. In this paper, we show that, conversely, Schubert polynomials are Demazure truncations of Stanley symmetric functions. This parallels the relationship between Schur functions and Demazure characters for the general linear group. We establish this connection by imposing a Demazure crystal structure on key tableaux, recently introduced by the first author in connection with Demazure characters and Schubert polynomials, and linking this to the type A crystal structure on reduced word factorizations, recently introduced by Morse and the second author in connection with Stanley symmetric functions.

December 30, 2023
=====================

§ INTRODUCTION

Schubert polynomials 𝔖_w were first introduced by Bernstein et al. <cit.> as certain polynomial representatives of cohomology classes of Schubert cycles X_w in flag varieties. They were extensively studied by Lascoux and Schützenberger <cit.> using an explicit definition in terms of difference operators ∂_w. Subsequently, a combinatorial expression for Schubert polynomials as the generating polynomial for compatible sequences for reduced expressions of a permutation w was discovered by Billey, Jockusch, and Stanley <cit.>. In the special case of the Grassmannian subvariety, Schubert polynomials are Schur polynomials, which also arise as the irreducible characters for the general linear group.

The Stanley symmetric functions F_w were introduced by Stanley <cit.> in the pursuit of enumerations of the reduced expressions of permutations, in particular of the long permutation w_0. They are defined combinatorially as the generating functions of reduced factorizations of permutations. Stanley symmetric functions are the stable limit of Schubert polynomials <cit.>, precisely

F_w(x_1,x_2,…) = lim_{m→∞} 𝔖_{1^m × w}(x_1,x_2,…,x_{n+m}).

Edelman and Greene <cit.> showed that the coefficients of the Schur expansion of Stanley symmetric functions are nonnegative integers.

Demazure modules for the general linear group <cit.> are closely related to Schubert classes for the cohomology of the flag manifold. In certain cases these modules are irreducible polynomial representations, and so the Demazure characters also contain the Schur polynomials as a special case. Lascoux and Schützenberger <cit.> stated that Schubert polynomials are nonnegative sums of Demazure characters. This was proven by Reiner and Shimozono <cit.> using the right keys associated to Edelman–Greene insertion. Using a key tableaux interpretation for Demazure characters <cit.>, Assaf <cit.> showed that the Edelman and Greene algorithm giving the Schur expansion of a Stanley symmetric function can be modified to a weak Edelman–Greene algorithm which gives the Demazure expansion of a Schubert polynomial.

In this paper, we deepen this connection and provide a converse to (<ref>) by showing that Schubert polynomials are Demazure truncations of Stanley symmetric functions.
Specifically, we show in Theorem <ref> that the combinatorial objects underlying the Schubert polynomials, namely the compatible sequences, exhibit a Demazure crystal truncation of the full Stanley crystal of Morse and Schilling <cit.>. We prove this using Theorem <ref>, in which we give an explicit Demazure crystal structure on semi-standard key tableaux, which coincide with semi-skyline augmented fillings of Mason <cit.>. This, together with Theorem <ref>, in which we show that the crystal operators on reduced factorizations intertwine with (weak) Edelman–Greene insertion, proves our main result.

Lenart <cit.> defined crystal operators on RC graphs <cit.>, which are closely related to compatible sequences, though it was not observed there that this structure is a Demazure crystal. Earlier, Reiner and Shimozono <cit.> defined r-pairings on factorized row-frank words that can now be interpreted as crystal operators, but again, this was not observed, nor was it noted that this structure is a Demazure crystal structure. One could complete either of these perspectives to prove our main result, though we prefer the key tableaux approach given its simplicity, the natural crystal operators on these objects, and the connection with Edelman–Greene insertion.

This paper is structured as follows. In Section <ref>, we review the crystal structure on semi-standard Young tableaux and define Demazure crystals. In Section <ref>, we introduce new crystal operators on key tableaux and prove that this amounts to a Demazure crystal (Theorem <ref>). Section <ref> is reserved for the review of Stanley symmetric functions, Edelman–Greene insertion and the crystal structure on reduced factorizations, which underlie the Stanley symmetric functions. Section <ref> contains our main result (Theorem <ref>), namely a Demazure crystal structure on reduced factorizations with cutoff, which are equivalent to compatible sequences. This gives a Demazure crystal structure for Schubert polynomials and shows that Schubert polynomials are a Demazure truncation of Stanley symmetric functions.

§.§ Acknowledgments

AS was partially supported by NSF grant DMS–1500050. The authors are grateful to Per Alexandersson, Sara Billey, Jim Haglund, Cristian Lenart, Sarah Mason, Liz Milicevic, Jennifer Morse, Vic Reiner, Mark Shimozono, and Alex Yong for helpful discussions and comments on this topic. AS would also like to thank the University of Southern California for their hospitality during her talk in March 2017 and the AWM Research Symposium at UCLA in April 2017, where this work started.

§ CRYSTAL STRUCTURE ON TABLEAUX

We begin in Section <ref> by reviewing the basics of Schur polynomials via the combinatorics of Young tableaux. In Section <ref>, we review the type A crystal structure on semi-standard Young tableaux, and conclude in Section <ref> with the definition of Demazure crystals.

§.§ Combinatorics of Schur polynomials

Given a partition λ, the Young diagram of shape λ is the array of left-justified cells with λ_i boxes in row i. Here we use French notation, where the rows weakly decrease in size from bottom to top in the Young diagram. A Young tableau is a filling of the cells of a Young diagram from some totally ordered alphabet (for example the set of positive integers) such that rows and columns weakly increase. A semi-standard Young tableau is a Young tableau with distinct column entries.
Figure <ref> provides an example of semi-standard Young tableaux of a fixed shape. The weight of a semi-standard Young tableau T, denoted by wt(T), is the weak composition whose ith part is the number of occurrences of i in T. The shape λ of T is also denoted by sh(T). The Schur polynomial in n variables indexed by the partition λ is

s_λ(x) = s_λ(x_1,…,x_n) = ∑_{T ∈ SSYT_n(λ)} x_1^{wt(T)_1} ⋯ x_n^{wt(T)_n},

where SSYT_n(λ) is the set of semi-standard Young tableaux of shape λ over the alphabet {1,2,…,n}. Schur polynomials arise as characters for irreducible highest weight modules for the general linear group, with semi-standard Young tableaux giving a natural indexing set for the basis of the module.

§.§ Crystal operators on semi-standard Young tableaux

A crystal graph is a directed, colored graph with vertex set given by the crystal basis and directed edges given by deformations of the Chevalley generators. For the quantum group U_q(𝔰𝔩_n), the crystal basis can be indexed by semi-standard Young tableaux over the alphabet A={1,2,…,n}, and there is an explicit combinatorial construction of the crystal graph on tableaux <cit.>. For an introduction to crystals from the quantum group perspective, see <cit.>. For a purely combinatorial introduction to crystals, see <cit.>.

For a word w of length k with letters from the alphabet A={1,2,…,n}, an integer 0 ⩽ r ⩽ k, and an integer 1 ⩽ i < n, define

M_i(w,r) = wt(w_1 w_2 ⋯ w_r)_i − wt(w_1 w_2 ⋯ w_r)_{i+1},

where wt(w) is the weak composition whose jth part is the number of j's in w. Set M_i(w) = max_{r ⩾ 0} {M_i(w,r)}. Observe that if M_i(w) > 0 and p is the leftmost occurrence of this maximum, then w_p = i, and if q is the rightmost occurrence of this maximum, then either q = k or w_{q+1} = i+1.

For a Young tableau T, the column reading word of T, denoted by w(T), is the word obtained by reading the entries of T down columns from left to right. For example, the column reading word of the leftmost Young tableau in the top row of Figure <ref> is 32131.

Given an integer 1 ⩽ i < n, define the lowering operator f_i on semi-standard Young tableaux over the alphabet A as follows: if M_i(w(T)) ⩽ 0, then f_i(T) = 0; otherwise, let p be the smallest index such that M_i(w(T),p) = M_i(w(T)), and f_i(T) changes the entry in T corresponding to w(T)_p to i+1.

An example of the lowering operator f_2 is given in Figure <ref>. For this example, the column reading word is given below each semi-standard Young tableau with the smallest index that attains M_2(w(T)) > 0 underlined and the corresponding entry in the tableau circled.

Given an integer 1 ⩽ i < n, define the raising operator e_i on semi-standard Young tableaux over the alphabet A as follows: let q be the largest index such that M_i(w(T),q) = M_i(w(T)). If q is the length of w(T), then e_i(T) = 0; otherwise, e_i(T) changes the entry in T corresponding to w(T)_{q+1} to i.

For further examples of raising and lowering operators on semi-standard Young tableaux, see Figure <ref>. Note that we have drawn the crystal in Figure <ref> with lowering operators pointing upward to facilitate the bijection with semi-standard key tableaux as explained in Section <ref>.

For a partition λ, we may define the highest weight crystal (of type A_{n−1}) of highest weight λ, denoted B(λ), as the set SSYT_n(λ) together with the operators f_i, e_i for 1 ⩽ i < n and the weight function wt. The character of a crystal is defined as

ch B(λ) = ∑_{b ∈ B(λ)} x_1^{wt(b)_1} ⋯ x_n^{wt(b)_n},

which in this case is precisely the Schur polynomial s_λ(x_1,…,x_n).
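The operators above act on reading words by simple prefix counting, as the following Python sketch (ours) illustrates; words are Python lists, and on the tableau the operator changes the cell corresponding to the selected letter. The word 32131 is the reading word quoted above, and the printed output is for illustration only.

def f(word, i):
    """Lowering operator f_i on a reading word: locate the leftmost prefix
    maximising M_i and change that letter from i to i+1."""
    def M(r):  # number of i's minus number of (i+1)'s among the first r letters
        return word[:r].count(i) - word[:r].count(i + 1)
    best = max(M(r) for r in range(len(word) + 1))
    if best <= 0:
        return None                      # f_i annihilates this element
    p = min(r for r in range(len(word) + 1) if M(r) == best)
    return word[:p - 1] + [i + 1] + word[p:]

def e(word, i):
    """Raising operator e_i: change the letter just after the rightmost
    prefix maximising M_i from i+1 back to i (0 if that prefix is the word)."""
    def M(r):
        return word[:r].count(i) - word[:r].count(i + 1)
    best = max(M(r) for r in range(len(word) + 1))
    q = max(r for r in range(len(word) + 1) if M(r) == best)
    if q == len(word):
        return None
    return word[:q] + [i] + word[q + 1:]

w = [3, 2, 1, 3, 1]       # the column reading word quoted above
print(f(w, 1))            # [3, 2, 1, 3, 2]
print(e(f(w, 1), 1))      # recovers [3, 2, 1, 3, 1]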
§.§ Demazure crystals

Demazure characters first arose in connection with Schubert classes for the cohomology of the flag manifold in <cit.>. The divided difference operators ∂_i for 1 ⩽ i < n act on polynomials by

∂_i f(x_1,…,x_n) = (f(x_1,…,x_i,x_{i+1},…,x_n) − f(x_1,…,x_{i+1},x_i,…,x_n)) / (x_i − x_{i+1}).

For w ∈ S_n, we may define ∂_w = ∂_{i_1} ∂_{i_2} ⋯ ∂_{i_k} if w = s_{i_1} s_{i_2} ⋯ s_{i_k}. Here s_i (1 ⩽ i < n) is the simple transposition interchanging i and i+1, and k is the number of inversions (or length) of w. When k is the length of w, the expression s_{i_1} s_{i_2} ⋯ s_{i_k} for w is called a reduced expression. It can be shown that ∂_w is independent of the choice of reduced expression.

There exist degree-preserving divided difference operators π_i for 1 ⩽ i < n, which act on polynomials by

π_i f(x_1,…,x_n) = ∂_i (x_i f(x_1,…,x_n)).

As with ∂_i, we extend this definition to w ∈ S_n by π_w = π_{i_1} π_{i_2} ⋯ π_{i_k} if w = s_{i_1} s_{i_2} ⋯ s_{i_k} is a reduced expression, and π_w is independent of the choice of reduced expression.

Given a weak composition a of length n, the Demazure character κ_a is defined as

κ_a(x) = κ_a(x_1,…,x_n) = π_w(x_1^{λ_1} x_2^{λ_2} ⋯ x_n^{λ_n}),

where λ is the partition rearrangement of a and w is the shortest permutation that sorts a to λ. For example, we may compute the Demazure character κ_(0,2,1,2) by taking a = (0,2,1,2), λ = (2,2,1,0) and w = 2431, and so we have

κ_(0,2,1,2) = π_1 π_3 π_2 π_3 (x_1^2 x_2^2 x_3)
= π_1 π_3 π_2 (x_1^2 x_2^2 x_3 + x_1^2 x_2^2 x_4)
= π_1 π_3 (x_1^2 x_2^2 x_3 + x_1^2 x_2^2 x_4 + x_1^2 x_2 x_3^2 + x_1^2 x_2 x_3 x_4 + x_1^2 x_3^2 x_4)
= π_1 (x_1^2 x_2^2 x_3 + x_1^2 x_2^2 x_4 + x_1^2 x_2 x_3^2 + 2 x_1^2 x_2 x_3 x_4 + x_1^2 x_2 x_4^2 + x_1^2 x_3^2 x_4 + x_1^2 x_3 x_4^2)
= x_1^2 x_2^2 x_3 + x_1^2 x_2^2 x_4 + x_1^2 x_2 x_3^2 + 2 x_1^2 x_2 x_3 x_4 + x_1^2 x_2 x_4^2 + x_1^2 x_3^2 x_4 + x_1^2 x_3 x_4^2 + x_1 x_2^2 x_3^2 + 2 x_1 x_2^2 x_3 x_4 + x_1 x_2^2 x_4^2 + x_1 x_2 x_3^2 x_4 + x_1 x_2 x_3 x_4^2 + x_2^2 x_3^2 x_4 + x_2^2 x_3 x_4^2.

Macdonald <cit.> showed that when a is weakly increasing of length n, we have

κ_a(x_1,…,x_n) = s_{rev(a)}(x_1,…,x_n),

where rev(a) is the partition obtained by reversing (equivalently, sorting) a. In particular, Demazure characters are a polynomial generalization of irreducible characters.

Making this more precise, Demazure crystals are certain subsets of B(λ), which were first conjectured by Littelmann <cit.> to generalize the Demazure characters. This conjecture was later proven by Kashiwara <cit.>. Given a subset X ⊆ B(λ), we define 𝔇_i for 1 ⩽ i < n as

𝔇_i X = { b ∈ B(λ) | e_i^k(b) ∈ X for some k ⩾ 0 }.

For a permutation w ∈ S_n with reduced expression w = s_{i_1} s_{i_2} ⋯ s_{i_k}, we define

B_w(λ) = 𝔇_{i_1} 𝔇_{i_2} ⋯ 𝔇_{i_k} { u_λ },

where u_λ is the highest weight element in B(λ) satisfying e_i(u_λ) = 0 for all 1 ⩽ i < n. Whenever b, b' ∈ B_w(λ) ⊆ B(λ) and f_i(b) = b' in B(λ), this crystal operator is also defined in B_w(λ). Let us define the character of a Demazure crystal as

ch B_w(λ) = ∑_{b ∈ B_w(λ)} x_1^{wt(b)_1} ⋯ x_n^{wt(b)_n}.

It was proven by <cit.> that this character coincides with κ_a, where w · a = λ.
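The operators π_i are easy to experiment with. A short sympy sketch (ours; the simultaneous substitution implements s_i, and cancel performs the exact polynomial division) reproduces the 14-term expansion of κ_(0,2,1,2) computed above.

from sympy import symbols, expand, cancel

x = symbols('x1:5')   # x1, x2, x3, x4

def pi(i, poly):
    """Degree-preserving divided difference pi_i f = d_i(x_i * f), where
    d_i g = (g - s_i g)/(x_i - x_{i+1}) and s_i swaps x_i and x_{i+1}."""
    g = x[i - 1] * poly
    swapped = g.subs({x[i - 1]: x[i], x[i]: x[i - 1]}, simultaneous=True)
    return expand(cancel((g - swapped) / (x[i - 1] - x[i])))

# kappa_(0,2,1,2): lambda = (2,2,1,0), sorted by w = 2431 = s1 s3 s2 s3,
# so apply pi_3 first, then pi_2, pi_3, and finally pi_1
kappa = x[0]**2 * x[1]**2 * x[2]
for i in [3, 2, 3, 1]:
    kappa = pi(i, kappa)
print(kappa)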
§ DEMAZURE CRYSTAL STRUCTURE ON KEY TABLEAUX

In Section <ref>, we review the combinatorial model of key tableaux <cit.> that is central to our results. In Section <ref>, we introduce a new crystal structure on semi-standard key tableaux and show that this precisely realizes the Demazure character by truncating the crystal structure on semi-standard Young tableaux.

§.§ Combinatorics of Demazure characters

Combinatorial interpretations and definitions for Demazure characters for the general linear group were given by Lascoux and Schützenberger <cit.>, Kohnert <cit.>, Reiner and Shimozono <cit.>, and Mason <cit.>, all of whom refer to them as key polynomials. We use an equivalent definition in terms of semi-standard key tableaux due to Assaf <cit.>, which is combinatorially equivalent to Mason's semi-skyline augmented fillings but which replaces the triple conditions with more direct row and column conditions (see also <cit.>). Generalizing Young diagrams, given a weak composition a, the key diagram of shape a is the array of left-justified cells with a_i boxes in row i.

A key tableau is a filling of a key diagram with positive integers such that columns have distinct entries, rows weakly decrease, and, if some entry i is above and in the same column as an entry k with i < k, then there is an entry immediately right of k, say j, with i < j.

For the Schur polynomial case, we restrict entries in the Young tableaux globally, allowing entries 1 through n to appear anywhere. In the Demazure case, we must restrict the entries in the key tableaux locally, allowing entries to appear only in their row and lower.

A semi-standard key tableau is a key tableau in which no entry exceeds its row index.

For examples, see Figure <ref>. The following property of semi-standard key tableaux will be useful.

Suppose row r of a semi-standard key tableau has two entries i+1 in columns c and c+1. If there is an i above row r in column c, then there cannot be an i below row r in column c+1.

If this were the case, then there must be an entry, say k, in column c immediately left of the i in column c+1. By the weakly decreasing row condition, k ⩾ i, and so by the distinct column entries condition, k > i+1. However, since there is an i+1 above k, the entry immediately right of k, which is an i, is not larger than i+1, a contradiction to the key tableaux column inversion condition.

The weight of a semi-standard key tableau T, denoted by wt(T), is the weak composition whose ith part is the number of occurrences of i in T. The following result is proved in <cit.> by showing that the semi-standard key tableaux conditions are equivalent to the triple conditions on Mason's semi-skyline augmented fillings <cit.>. This more direct characterization facilitates the constructions to follow.

The key polynomial κ_a(x) is given by

κ_a(x) = ∑_{T ∈ SSKT(a)} x_1^{wt(T)_1} ⋯ x_n^{wt(T)_n},

where SSKT(a) is the set of semi-standard key tableaux of shape a.

The map from standard key tableaux of shape a to standard Young tableaux of shape λ, where λ is the unique partition rearrangement of a, from <cit.> relates the tableaux models for key polynomials and Schur polynomials. We extend this map to the semi-standard case as follows.

Given a weak composition a of length n, define the column sorting map on SSKT(a) by letting cells fall vertically until there are no gaps between rows, sorting the columns to decrease bottom to top, and then replacing all entries by i ↦ n−i+1.

For example, the semi-standard key tableaux in Figure <ref> map to the semi-standard Young tableaux in the first two rows of Figure <ref>, respectively. The four semi-standard Young tableaux in the bottom row of Figure <ref> are not in the image of the column sorting map.
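A short Python sketch of the column sorting map (ours; rows are stored as lists with row 1 first, and the input example is our own, the semi-standard key tableau of shape (0,2,1,2) with every entry in row i equal to i):

def column_sort(T, n):
    """Column sorting map: drop the cells of each column down, sort each
    column to decrease bottom to top, then relabel i -> n - i + 1.
    Returns the rows of the resulting SSYT, bottom row first."""
    ncols = max((len(r) for r in T), default=0)
    cols = [sorted((r[c] for r in T if len(r) > c), reverse=True)
            for c in range(ncols)]                  # decreasing bottom to top
    nrows = max((len(col) for col in cols), default=0)
    return [[n - cols[c][r] + 1 for c in range(ncols) if len(cols[c]) > r]
            for r in range(nrows)]

T = [[], [2, 2], [3], [4, 4]]     # shape a = (0, 2, 1, 2), n = 4
print(column_sort(T, n=4))        # [[1, 1], [2, 3], [3]], an SSYT of shape (2, 2, 1)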
The column sorting map is a well-defined, injective map ϕ: SSKT(a) → SSYT(λ), where λ is the partition rearrangement of a.

The column strict condition on semi-standard key tableaux ensures that columns have distinct values. Therefore, by construction, a column sorted tableau will have strictly increasing columns. By the column inversion condition for key tableaux, if row j sits above row i and is weakly longer, then column by column the entries in row j must be greater than those in row i. Consider applying the column sorting map by first rearranging rows of longest size at the bottom and reversing the relative order of rows of equal length. Since entries within rows are maintained, the weakly decreasing row condition on semi-standard key tableaux is obviously maintained by this process. The column sorting necessarily brings entries from a strictly shorter row down into a longer row. That is, row values can be increased only when the first k values all increase for some k, and entries decrease only when the entire row is changed, maintaining the weakly decreasing row condition. Hence the image of the map is indeed a semi-standard Young tableau of shape λ.

To see that the map is injective, we can define an inverse map by first applying i ↦ n−i+1 to all letters in a semi-standard Young tableau. Then fill the shape of a, column by column from right to left, and within a column from bottom to top, according to the columns of the given tableau, selecting at each step the smallest available entry that maintains the weakly decreasing row condition. To see that the column inversion condition for key tableaux still holds, suppose j is the smallest label available that can be placed in cell C in order to satisfy weakly decreasing rows. It is easy to see that the column inversion condition for key tableaux is maintained, but it could happen that an entry is placed in a row with smaller index. The tableaux for which this occurs are precisely the ones not in the image of the column sorting map.

§.§ Crystal operators on semi-standard key tableaux

We generalize the crystal structure on semi-standard Young tableaux to a Demazure crystal structure on semi-standard key tableaux as follows. For a word w of length k with letters in the alphabet A={1,2,…,n}, an integer 1 ⩽ r ⩽ k, and an integer 1 ⩽ i < n, define

m_i(w,r) = wt(w_r w_{r+1} ⋯ w_k)_{i+1} − wt(w_r w_{r+1} ⋯ w_k)_i.

Set m_i(w) = max_r {m_i(w,r)}. Observe that if m_i(w) > 0 and q is the rightmost occurrence of this maximum, then w_q = i+1, and if p is the leftmost occurrence of this maximum, then either p = 1 or w_{p−1} = i.

For T a key tableau, the column reading word of T, denoted by w(T), is the word obtained by reading the entries of T down columns from right to left. Note that columns of key tableaux are read in the reverse order of columns of Young tableaux. For example, the column reading word of the leftmost key tableau in the top row of Figure <ref> is 42432.

Given an integer 1 ⩽ i < n, define the raising operator e_i on semi-standard key tableaux of shape a of length n as follows: if m_i(w(T)) ⩽ 0, then e_i(T) = 0; otherwise, let q be the largest index such that m_i(w(T),q) = m_i(w(T)), and e_i(T) changes all entries i+1 weakly right of the entry in T corresponding to w(T)_q to i and changes all i's in the same columns as these entries to i+1's.

For an example of the raising operator e_1, see Figure <ref>. For this example, the column reading word is given below each key tableau with the largest index that attains m_1(w(T)) > 0 underlined and the corresponding entry in the tableau circled.

The raising operator e_i: SSKT(a) → SSKT(a) ∪ {0} is a well-defined map. Moreover, the restriction of e_i to the pre-image e_i^{−1}(SSKT(a)) satisfies wt(e_i(T))_i = wt(T)_i + 1, wt(e_i(T))_{i+1} = wt(T)_{i+1} − 1, and wt(e_i(T))_j = wt(T)_j for all j ≠ i, i+1.
Let T ∈ SSKT(a), set m = m_i(w(T)), and suppose m > 0. Let x, say in row r and column c, be the cell in T that attains m at the rightmost position in column reading order. We claim that every cell weakly right of x in row r with entry i+1, except for one, has an i above it. If the entry immediately right of x is h for some h < i+1, then the key tableaux conditions ensure that there cannot be an i above x since h ⩽ i. Suppose, then, that there is an i+1 immediately right of x. Since x attains the maximum m and there is an i+1 to its left, we must have an i between them in column reading order. Thus there must be an i either below row r in column c+1 or above row r in column c. If there is an i in row r' < r in column c+1, then there must be an entry, say k, in row r' in column c satisfying k ⩾ i. Moreover, by the key tableau column inversion condition, we cannot have k > i+1 since i+1 > i. Therefore k = i, in which case x cannot be the rightmost position to attain m, a contradiction. Moreover, it now follows by induction from Lemma <ref> that every i+1 right of x in row r either has an i above it or no i in its column, and the latter cannot be the case more than once, else the rightmost i+1 would have i-index greater than m. This proves the claim, from which it follows that one more cell changes entry from i+1 to i than the reverse, thus proving wt(e_i(T))_i = wt(T)_i + 1, wt(e_i(T))_{i+1} = wt(T)_{i+1} − 1, and wt(e_i(T))_j = wt(T)_j for all j ≠ i, i+1.

Next we show that rows of e_i(T) are weakly decreasing. This is clear for row r, since all entries i+1 weakly right of x change to i. If i changes to i+1 in cell y and the cell immediately left of y also contains an i, then this i also is changed to an i+1. This is clear from the previous analysis provided y is not in the column of x; if y is in the column of x and has an i immediately to its left, then x cannot be the rightmost cell in column reading order to attain m.

Next we show that columns of e_i(T) have distinct entries. Since x cannot have an i below it and be the leftmost cell in column reading order to attain m, any i+1 that changes to an i either has no i in its column or an i above it. In the latter case, this i will become an i+1.

Next we show that e_i(T) satisfies the column inversion condition: if a < c with a above c, then there is an entry b immediately right of c with a < b. If a column contains i and not i+1, then nothing is changed, and if it has both, then the i+1 appears above the i in e_i(T). Therefore the only potential problem occurs when b = i+1 in T is changed to i in e_i(T) and a = i. In this case, if the column of a has no i+1, then b does not attain m and is not changed to i, and otherwise both a and c change, removing the inversion triple from consideration.

Finally, decrementing values maintains the property that entries do not exceed their row index, and i changes to i+1 only when it sits above an i+1, so these entries lie strictly above row i+1. Therefore e_i(T) is a semi-standard key tableau.
If T ∈(a) has no column inversions,then since the column reading word of a semi-standard key tableau is right to left and the column reading wordof a semi-standard Young tableau is left to right, w(T) and w(ϕ(T)) precisely have the relationship ofw and u, and the result follows.In the general case, since e_i and f_i depend only on the letters i,i+1, we may restrict our attention to thesubword on those letters. In doing so, notice that columns with i above i+1 appear in consecutive runs separatedat least by a column immediately right of the run with an i+1 and no i. In the column reading word, this manifests itself as a string of alternating i's and i+1's that begins and ends with an i+1. If we let q^' denote the leftmosti+1 in the alternating string that attains m_i(w(T)), then k-q^'+1 is the smallest index that attainsM_n-i(w(ϕ(T))). That is, the rightmost column of T in which an i+1 changes to an i without ani also changing to an i+1 in passing to e_i(T) is precisely the column of ϕ(T) in which an n-i changesto an n-i+1 in passing to f_n-i(ϕ(T)). For example, the semi-standard key tableaux of shape (0,5,3) in Figure <ref> map by the column sortingmap to the semi-standard Young tableaux of shape (5,3) in Figure <ref>, and the raising operatore_1 on the former becomes the lowering operator f_2 on the latter. Given an integer 1⩽ i<n, define the lowering operator f_i on semi-standard key tableauxof shape a as follows:let p be the smallest index such that m_i(w(T),p) = m_i(w(T)). If p=1 or if the entry in T corresponding tow_p lies in row i, then f_i(T) = 0; otherwise f_i(T) changes all entries i weakly right of the entryin T corresponding to w_p-1 to i+1 and change all i's in the same columns as these entries to i's.For examples of lowering operators on semi-standard key tableaux, see Figure <ref> (f_i are inverses of e_i when they are defined on an element).For T∈(a) and for any 1 ⩽ i <n, if there exists S∈(a) such that e_i(S)=T, thenf_i(T)=S, and otherwise f_i(T)=0. In particular, the lowering operator f_i is well-defined and if f_i(T)≠ 0,then it satisfies (f_i(T))_i = (T)_i +1, (f_i(T))_i+1 = (T)_i+1-1, and (f_i(T))_j = (T)_j forall j ≠ i,i+1. Moreover, letting ϕ denote the column sorting map, if f_i(T)≠ 0, then we haveϕ(f_i(T)) = e_n-i(ϕ(T)).Recall from the analysis in the proof of Proposition <ref> that when e_i(S) ≠ 0, w(S) andw(e_i(S)) differ on the restriction to letters i,i+1 precisely in that an alternating string beginning and endingwith i+1 for which the last entry is the rightmost to attain m_i(w(S)) becomes an alternating string beginningand ending with i for which the first entry is immediately left of the leftmost to attain m_i(w(e_i(S))). Thereforeif e_i(S)=T, then f_i(T)=S. We have f_i(T)=0 precisely when there is no place to act(when p=1) or when acting would violate the semi-standard key tableaux condition that entries cannot exceedtheir row index. The remainder of the result follows from Proposition <ref> and Lemma <ref>. §.§ Demazure crystal on semi-standard key tableauxTo arrive at our main result, that the raising and lowering operators on semi-standard key tableaux give a Demazure crystal,we refine the column sorting map to an injective map between semi-standard key tableaux for different weak compositions. Given a weak composition a and an index i such that a_i < a_i+1, for T ∈(a) such that e_i(T)=0,there exists S ∈(s_i a) such that ϕ(T) = ϕ(S), where ϕ is the column sorting map. 
Recall from the analysis in the proof of Proposition <ref> that, when e_i(S) ≠ 0, w(S) and w(e_i(S)) differ on the restriction to letters i, i+1 precisely in that an alternating string beginning and ending with i+1, whose last entry is the rightmost to attain m_i(w(S)), becomes an alternating string beginning and ending with i, whose first entry is immediately left of the leftmost to attain m_i(w(e_i(S))). Therefore, if e_i(S) = T, then f_i(T) = S. We have f_i(T) = 0 precisely when there is no place to act (when p = 1) or when acting would violate the semi-standard key tableaux condition that entries cannot exceed their row index. The remainder of the result follows from Proposition <ref> and Lemma <ref>.

§.§ Demazure crystal on semi-standard key tableaux

To arrive at our main result, that the raising and lowering operators on semi-standard key tableaux give a Demazure crystal, we refine the column sorting map to an injective map between semi-standard key tableaux for different weak compositions.

Given a weak composition a and an index i such that a_i < a_{i+1}, for T ∈ SSKT(a) such that e_i(T) = 0, there exists S ∈ SSKT(s_i a) such that ϕ(T) = ϕ(S), where ϕ is the column sorting map.

The statement is equivalent to the assertion that there exists S ∈ SSKT(s_i a) with the same column sets as T. We may describe the map from T to S explicitly as follows. First, move the a_{i+1} − a_i rightmost cells in row i+1 down to row i. Since e_i(T) = 0, there cannot be a letter i+1 that is moved down, since if any of these cells contained an i+1, there would be a positive index allowing e_i to act non-trivially. If, after this, row i is not weakly decreasing, then swap the entries in rows i and i+1 of the offending column. Since e_i(T) = 0, there cannot be any letters i+1 that are moved down at this step either, so the resulting tableau S has no entry exceeding its row index. Rows clearly maintain their weakly decreasing status, and it is easy to see that no violations of the column inversion condition can arise. Therefore S ∈ SSKT(s_i a).

Lemma <ref> ensures that the following operators are well-defined on semi-standard key tableaux.

Given a weak composition a and an index i such that a_i < a_{i+1}, define an operator ℰ_i on SSKT(a) by ℰ_i(T) = S, where S ∈ SSKT(s_i a) satisfies ϕ(S) = ϕ(e_i^{k−1}(T)) for k minimal such that e_i^k(T) = 0.

For examples of ℰ_i, see Figure <ref>. Similar to π_w and ∂_w, we may extend this to define ℰ_w = ℰ_{i_1} ⋯ ℰ_{i_k}, where s_{i_1} ⋯ s_{i_k} is any reduced expression for w. It is easy to see that this is well-defined from the local relations of the type A crystal operators on tableaux as characterized by Stembridge <cit.>.

Given a weak composition a, for w the permutation that sorts a to partition shape λ, the operator ℰ_w takes T ∈ SSKT(a) to the highest weight element of the crystal along edges specified by w.

This is precisely the statement needed to show that the crystal operators defined on semi-standard key tableaux of shape a realize the Demazure crystal for w.

Let a be a weak composition that sorts to the partition λ. The raising and lowering operators on SSKT(a) give the Demazure crystal for highest weight λ, truncating with respect to the minimal length permutation w that sorts a to λ.

Given T ∈ SSKT(a), for w the permutation that sorts a to partition shape λ, we necessarily have ℰ_w(T) ∈ SSKT(λ). However, the constraint that entries cannot exceed their row index together with distinct column values forces SSKT(λ) to have a single element, the tableau with all entries in row i equal to i. In particular, this element maps via the column sorting map to the highest weight u_λ. By Lemma <ref>, this means T ∈ 𝔇_w { u_λ } for every T ∈ SSKT(a), and so ϕ(SSKT(a)) ⊆ B_w(λ). By Theorem <ref>, the sums of the weights on both sides agree, so we must have equality.

For example, removing the four vertices of the (2,2,1)-crystal in Figure <ref> corresponding to the four semi-standard Young tableaux of shape (2,2,1) that are not in the image of the column sorting map on semi-standard key tableaux of shape (0,2,1,2) precisely gives the (0,2,1,2)-Demazure crystal in Figure <ref>.

§ CRYSTAL STRUCTURE FOR STANLEY SYMMETRIC POLYNOMIALS

We review the combinatorics of Stanley symmetric functions and polynomials in terms of reduced factorizations of a permutation in Section <ref>. We proceed in Section <ref> to review Edelman–Greene insertion, and review the crystal structure on reduced factorizations as recently introduced in <cit.> in Section <ref>.

§.§ Combinatorics of Stanley symmetric functions

Stanley <cit.> introduced a new family of symmetric functions to enumerate reduced expressions for permutations.
A reduced word for a permutation w ∈ S_n is a word i_1 ⋯ i_k such that s_{i_1} ⋯ s_{i_k} = w, where k is the inversion number of w.

For example, there are 11 reduced words for the permutation 153264, as shown in Figure <ref>.

Given a reduced word ρ, an increasing factorization for ρ partitions the word ρ into (possibly empty) blocks (or factors) such that entries increase left to right within each block. Given a permutation w, a reduced factorization for w is an increasing factorization of a reduced word for w. Denote the set of reduced factorizations for w by RF(w).

For example, the reduced factorizations for 153264 into 4 blocks are shown in Figure <ref>. The weight of a reduced factorization r, denoted by wt(r), is the weak composition whose ith part is the number of letters in the ith block of r from the right. For example, wt((45)(3)(23)()) = (0,2,1,2).

The Stanley symmetric function indexed by the permutation w is

F_w(x) = ∑_{r ∈ RF(w^{-1})} x^{wt(r)}.

Therefore we compute F_143625 using reduced factorizations for 143625^{-1} = 153264. Note that reduced factorizations can, in principle, have an arbitrary number of blocks, and hence F_w(x) is a symmetric function in infinitely many variables x = (x_1, x_2, …). We can restrict Stanley symmetric functions to Stanley symmetric polynomials by restricting the number of blocks in the reduced factorizations. Let RF^ℓ(w) be the set of reduced factorizations of w with precisely ℓ blocks. Then the Stanley symmetric polynomial in ℓ variables is

F_w(x_1, x_2, …, x_ℓ) = ∑_{r ∈ RF^ℓ(w^{-1})} x^{wt(r)}.

§.§ Edelman–Greene correspondence

In their study of Stanley symmetric functions, Edelman and Greene <cit.> developed the following insertion algorithm, which they used to give a formula for the Schur expansion of Stanley symmetric functions.

<cit.> Let P be a Young tableau, and let x be a positive integer. Let P_i be the ith lowest row of P. Define the Edelman–Greene insertion of x into P, denoted by P ← x, as follows. Set x_0 = x and, for i ⩾ 0, insert x_i into P_{i+1} as follows. If x_i ⩾ z for all z ∈ P_{i+1}, place x_i at the end of P_{i+1} and stop. Otherwise, let x_{i+1} denote the smallest element of P_{i+1} such that x_{i+1} > x_i. If x_{i+1} ≠ x_i + 1 or x_i is not already in P_{i+1}, replace x_{i+1} by x_i in P_{i+1} and continue (we say that x_i bumps x_{i+1} in row i+1). Otherwise leave P_{i+1} unchanged and continue with x_{i+1}.

Given a reduced expression ρ, define the insertion tableau for ρ, denoted by P(ρ), to be the result of inserting the word for ρ letter by letter into the empty tableau. To track the growth of P(ρ), define the recording tableau for ρ, denoted by Q(ρ), to be the result of adding i into the new cell created when inserting the ith letter. For example, Figure <ref> constructs the insertion tableau (top) and recording tableau (bottom) for the reduced expression 45232.

<cit.> The Edelman–Greene correspondence ρ ↦ (P(ρ), Q(ρ)) is a bijection between reduced expressions and all pairs of tableaux (P,Q) such that P and Q have the same shape, P is increasing with row(P) a reduced word, and Q is standard.
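The insertion rule is short to transcribe into code. The following sketch (ours; rows are stored as Python lists, bottom row first) implements Edelman–Greene insertion together with the recording tableau, and running it on the reduced expression 45232 reproduces the tableaux constructed above.

def eg_insert(P, x):
    """Edelman-Greene insertion of x into the increasing tableau P.
    Returns the new tableau and the (row, col) of the added cell."""
    P = [row[:] for row in P]
    i = 0
    while i < len(P):
        row = P[i]
        if all(x >= z for z in row):
            row.append(x)
            return P, (i, len(row) - 1)
        y = min(z for z in row if z > x)     # the entry that x would bump
        if not (y == x + 1 and x in row):
            row[row.index(y)] = x            # ordinary bump
        # else: special EG rule - row already contains x and x+1; leave it
        x = y
        i += 1
    P.append([x])
    return P, (len(P) - 1, 0)

def eg_tableaux(word, blocks=None):
    """Insertion tableau P and recording tableau Q for a word.  If `blocks`
    gives a label per letter, Q records those labels (semi-standard case);
    otherwise Q records the positions 1, 2, ... (standard case)."""
    P, Q = [], []
    for k, x in enumerate(word):
        P, (r, c) = eg_insert(P, x)
        while len(Q) <= r:
            Q.append([])
        Q[r].append(k + 1 if blocks is None else blocks[k])
    return P, Q

P, Q = eg_tableaux([4, 5, 2, 3, 2])
print(P)   # [[2, 3], [3, 5], [4]]: shape (2,2,1), row word 43523
print(Q)   # [[1, 2], [3, 4], [5]]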
For example, the recording tableau for the reduced factorization (4)(5)(23)(2) is constructed in Figure <ref>.

The correspondence r ↦ (P(r),Q(r)) is a bijection between reduced factorizations and all pairs of tableaux (P,Q) such that P and Q have the same shape, P is increasing with row(P) a reduced word, and Q is semi-standard. Moreover, if r has ℓ blocks, then wt(Q(r))_i = wt(r)_ℓ-i+1.

For example, the Edelman–Greene correspondence gives a weight-reversing bijection RF^ℓ(153264) → ( {T_1} × SSYT_ℓ(2,2,1) ) ∪ ( {T_2} × SSYT_ℓ(3,1,1) ), where T_1 is the insertion tableau with rows 23, 35, 4 (bottom to top), T_2 is the one with rows 235, 3, 4, and SSYT_ℓ(λ) denotes the semi-standard Young tableaux of shape λ with entries at most ℓ. In particular, by the symmetry of Schur functions, we have the following expansion from <cit.>.

The Stanley symmetric function for w may be expressed as F_w(x) = ∑_T ∈ Yam(w^-1) s_sh(T)(x), where Yam(w^-1) is the set of insertion tableaux P with row(P) a reduced word for w^-1. For example, we have F_143625(x) = s_(2,2,1)(x) + s_(3,1,1)(x).

§.§ Crystal operators on reduced factorizations

Following <cit.>, we are going to define an A_ℓ-1-crystal structure on RF^ℓ(w). Let r=r^ℓ r^ℓ-1⋯ r^1 ∈ RF^ℓ(w), where r^i is the ith block from the right. The Kashiwara raising and lowering operators e_i and f_i only act on the blocks r^i+1 r^i. The action is defined by first bracketing certain letters and then moving an unbracketed letter from one factor to the other.

Let us begin by describing the bracketing procedure. Start with the largest letter b in r^i and pair it with the smallest a>b in r^i+1. If no such a exists in r^i+1, then b is unpaired. The pairing proceeds in decreasing order on elements of r^i, and with each iteration previously paired letters of r^i+1 are ignored. Define R_i(r^ℓ⋯ r^1) = { b ∈ r^i | b is unpaired in the r^i+1 r^i-pairing} and L_i(r^ℓ⋯ r^1) = { b ∈ r^i+1 | b is unpaired in the r^i+1 r^i-pairing}.

Then f_i(r^ℓ⋯ r^1) is defined by replacing the blocks r^i+1 r^i by r̃^i+1 r̃^i such that r̃^i = r^i ∖ {b} and r̃^i+1 = r^i+1 ∪ {b-t} for b=min(R_i(r^ℓ⋯ r^1)) and t=min{j ⩾ 0 | b-j-1 ∉ r^i}. If R_i(r^ℓ⋯ r^1)=∅, then f_i(r^ℓ⋯ r^1)= 0. Similarly, e_i(r^ℓ⋯ r^1) is defined by replacing the factors r^i+1 r^i by r̃^i+1 r̃^i such that r̃^i = r^i ∪ {a+s} and r̃^i+1 = r^i+1 ∖ {a} for a=max(L_i(r^ℓ⋯ r^1)) and s=min{j ⩾ 0 | a+j+1 ∉ r^i+1}. If L_i(r^ℓ⋯ r^1)=∅, then e_i(r^ℓ⋯ r^1)= 0.

Let (2)(13)(23) ∈ RF^3(w) for w = s_2 s_1 s_3 s_2 s_3 ∈ S_4. To apply f_1 we need to first bracket the letters in r^1 = 23 with those in r^2 = 13. The letter 3 in r^1 is unbracketed since there is no bigger letter in r^2, but the letter 2 in r^1 is bracketed with 3 in r^2. Hence b = min(R_1(r^3 r^2 r^1))=3 and t=min{j ⩾ 0 | b-j-1 ∉ r^1}=1. Therefore, f_1((2)(13)(23)) = (2)(123)(2). Similarly, e_1((2)(13)(23)) = (2)(3)(123).

In <cit.>, the Stanley symmetric function F_w is defined using decreasing factorizations of reduced words of w. Here we use increasing factorizations of w^-1. To relate the two, one needs to reverse the reduced factorizations. The crystal structures are related by interchanging f_i (resp.
f_ℓ-i).

<cit.> The above defined operators f_i and e_i for 1 ⩽ i < ℓ and the weight function define an A_ℓ-1-crystal structure on RF^ℓ(w).

<cit.> The Stanley symmetric function for w may be expressed as F_w(x) = ∑ s_wt(r)(x), where the sum runs over all r ∈ RF^ℓ(w^-1) such that e_i r = 0 for all 1 ⩽ i < ℓ.

For example, the highest weight reduced factorizations for 153264=143625^-1 with ℓ=4 are ()(4)(35)(23) and ()(4)(3)(235) of weights (2,2,1) and (3,1,1), respectively, confirming (<ref>). It turns out that this crystal structure on reduced factorizations relates to the crystal structure on semi-standard Young tableaux via the Edelman–Greene correspondence.

<cit.> Given r ∈ RF^ℓ(w), let P(r) denote its Edelman–Greene insertion tableau and Q(r) its Edelman–Greene semi-standard recording tableau, where letters in block i of r are recorded by the letter i. Then, if e_i(r) ≠ 0, we have P(e_i(r)) = P(r) and Q(e_i(r)) = f_ℓ-i(Q(r)).

§ DEMAZURE CRYSTAL STRUCTURE FOR SCHUBERT POLYNOMIALS

We review the combinatorial expression of Billey, Jockusch and Stanley <cit.> for Schubert polynomials in terms of compatible sequences in Section <ref> and show that it can be reformulated in terms of reduced factorizations with a cutoff condition. In Section <ref> we discuss the weak analog of the Edelman–Greene insertion presented in <cit.>. It turns out that the cutoff condition precisely amounts to a Demazure crystal structure, as shown in Section <ref>.

§.§ Combinatorics of Schubert polynomials

Schubert polynomials are generalizations of Schur polynomials which represent cohomology classes of Schubert cycles in flag varieties. They were first introduced by Bernstein et al. <cit.> and extensively studied by Lascoux and Schützenberger <cit.>. Given a permutation w, the Schubert polynomial for w is given by 𝔖_w(x) = ∂_w^-1 w_0(x_1^n-1 x_2^n-2⋯ x_n-1), where w_0 = n n-1 … 21 is the longest permutation, of length n(n-1)/2. The first proven combinatorial formula for Schubert polynomials, due to Billey, Jockusch and Stanley <cit.>, is in terms of compatible sequences for reduced expressions.

For ρ = ρ_1 … ρ_k a reduced word, a sequence α=α_1 … α_k of positive integers is ρ-compatible if α is weakly decreasing, α_j ⩽ ρ_j, and α_j > α_j+1 whenever ρ_j > ρ_j+1. For example, seven of the reduced words for 153264 have compatible sequences as shown in Figure <ref>. The Schubert polynomial 𝔖_w(x) indexed by a permutation w is given by 𝔖_w(x) = ∑_ρ∈ R(w^-1) ∑_α∈RC(ρ) x^α, where x^α is the monomial x_α_1 x_α_2 ⋯ x_α_k.

We may encode compatible sequences for the reduced words as increasing factorizations with an additional cutoff condition. Given a reduced word ρ, an increasing factorization with cutoff is an increasing factorization such that in addition the first entry in block i from the right is at least i. Given a permutation w, a reduced factorization with cutoff for w is an increasing factorization with cutoff of a reduced word for w. The set of reduced factorizations with cutoff is denoted by RFC(w). For example, the reduced factorizations with cutoff for 153264 are shown in Figure <ref>.

The weight function on reduced factorizations provides a simple bijection between compatible sequences and increasing factorizations with cutoff for a reduced word. For example, compare Figure <ref> with Figure <ref>. The Schubert polynomial 𝔖_w(x) is given by 𝔖_w(x) = ∑_r ∈ RFC(w^-1) x^wt(r).

To prove that (<ref>) is equivalent to (<ref>), we show that there is a bijection ⋃_ρ∈ R(w^-1) RC(ρ) → RFC(w^-1). Given a compatible sequence α for a reduced word ρ, the letter ρ_i belongs to the a-th factor from the right if α_i=a.
Due to the condition that α_j > α_j+1 whenever ρ_j > ρ_j+1, the letters within each factor are weakly increasing. Since the word ρ is reduced, the letters within each factor must actually be increasing. Furthermore, since α_j ⩽ ρ_j, all letters in the a-th factor must be of value at least a. Conversely, given a reduced factorization with cutoff one can immediately construct the compatible sequence α by setting α_j=a if ρ_j is in factor a.

Reduced factorizations have the advantage of tracking the reduced word along with the weight, making this a more natural indexing set for the crystal structure discussed in the next section.

§.§ Weak Edelman–Greene correspondence

We recall a generalization of the Edelman–Greene correspondence <cit.> that gives the Demazure expansion of a Schubert polynomial, parallel to the Schur expansion of a Stanley symmetric function. Following <cit.>, for P a semi-standard Young tableau with strictly increasing rows, define the lift of P, denoted by lift(P), to be the tableau of key shape obtained by raising each entry in the first column of P until it equals its row index, and, once columns 1 through c-1 have been lifted, raising entries in column c from top to bottom, maintaining their relative order, placing each entry in the highest available row such that there is an entry in column c-1 that is strictly smaller.

For ρ a reduced expression, define the weak insertion tableau P̂(ρ) by P̂(ρ) = lift(P(ρ)), where P(ρ) is the insertion tableau under the Edelman–Greene insertion. In addition, define the weak recording tableau Q̂(ρ) to be the unique standard key tableau of the same key shape as P̂(ρ) such that ϕ(Q̂(ρ)) = Q(ρ), where Q(ρ) is the Edelman–Greene recording tableau and ϕ is the column sorting map. For example, Figure <ref> constructs the weak insertion tableau (top) and weak recording tableau (bottom) for the reduced expression 45232. Compare this with Figure <ref>.

For P a key tableau, define the drop of P, denoted by drop(P), to be the Young tableau obtained by letting the entries of P fall in their columns while maintaining their relative order. It is clear that drop(lift(P))=P for any P of partition shape.

The weak Edelman–Greene correspondence ρ ↦ (P̂(ρ),Q̂(ρ)) is a bijection between reduced expressions and all pairs of tableaux (P,Q) such that P and Q have the same key shape, P has increasing rows and columns with row(P) a reduced word and lift(drop(P))=P, and Q is a standard key tableau.

Analogous to the Edelman–Greene correspondence, this extends to a bijection between reduced factorizations with cutoff and all pairs of tableaux (P,Q) such that P and Q have the same key shape, P is increasing with row(P) a reduced word and lift(drop(P))=P, and Q is a semi-standard key tableau. For example, the weak recording tableau for the reduced factorization (4)(5)(23)(2) is constructed in Figure <ref>.

The correspondence r ↦ (P̂(r),Q̂(r)) is a weight-preserving bijection between reduced factorizations with cutoff and all pairs of tableaux (P,Q) such that P and Q have the same key shape, P is increasing with row(P) a reduced word and lift(drop(P))=P, and Q is a semi-standard key tableau.

Theorem <ref> is proved in <cit.> using the standard key tableaux. To get the semi-standard case, we appeal to <cit.> where it is shown that the fundamental slide polynomial, defined in <cit.>, associated to a standard key tableau is the sum of monomials associated to the semi-standard key tableaux that standardize to it.
As shown in <cit.>, the fundamental slide polynomial associated to a reduced expression is the sum of monomials associated to the corresponding compatible sequences. The result follows from the bijection between compatible sequences and increasing factorizations.

For example, the weak Edelman–Greene correspondence gives a weight-preserving bijection RFC(153264) → ( {T̂_1} × SSK(0,2,1,2) ) ∪ ( {T̂_2} × SSK(0,3,1,1) ), where T̂_1 is the tableau of key shape (0,2,1,2) with rows 23, 3, 45 in rows 2, 3, 4, and T̂_2 is the tableau of key shape (0,3,1,1) with rows 235, 3, 4 in rows 2, 3, 4. In particular, we have the following expansion from <cit.>.

The Schubert polynomial for w may be expressed as 𝔖_w(x) = ∑_T ∈ Yam(w^-1) κ_sh(T)(x), where Yam(w^-1) is the set of increasing tableaux T of key shape with row(T) a reduced word for w^-1 and lift(drop(T))=T, and κ_a denotes the key polynomial (Demazure character) indexed by the weak composition a. For example, we have 𝔖_143625(x) = κ_(0,2,1,2)(x) + κ_(0,3,1,1)(x).

§.§ Demazure crystal operators on reduced factorizations with cutoff

Since RFC(w) ⊆ RF^n(w) for w ∈ S_n, we can restrict the crystal operators f_i and e_i on reduced factorizations to RFC(w) by defining f_i(r) as in Section <ref> if f_i(r) ∈ RFC(w) and f_i(r)=0 otherwise, and similarly for e_i. An example is given in Figure <ref>. We will show in this section that this amounts to a union of Demazure crystal structures. We begin with an analog of Theorem <ref>.

Given r ∈ RFC(w) for w ∈ S_n, denote by P̂(r) the weak Edelman–Greene insertion tableau and by Q̂(r) the weak Edelman–Greene recording tableau, where letters in block i of r are recorded by the letter i. Then, if e_i(r) ≠ 0, we have P̂(e_i(r)) = P̂(r) and Q̂(e_i(r)) = e_i(Q̂(r)) for 1 ⩽ i < n.

By Theorem <ref> we have P(e_i(r))=P(r) and Q(e_i(r)) = f_n-i(Q(r)), where P and Q are the Edelman–Greene insertion and recording tableaux, respectively. By Definition <ref>, we have P̂(r)=lift(P(r)), which proves P̂(e_i(r)) = P̂(r). Again by Definition <ref>, we have ϕ(Q̂(r)) = Q(r). By Lemma <ref>, we have ϕ(e_i Q̂(r)) = f_n-i ϕ(Q̂(r)) = f_n-i Q(r), proving that Q̂(e_i(r)) = e_i(Q̂(r)).

By Proposition <ref>, the combinatorial objects underlying the Schubert polynomials 𝔖_w^-1(x) are the reduced factorizations with cutoff RFC(w). On the other hand, RF^n(w) are the combinatorial objects underlying the Stanley symmetric polynomials F_w^-1(x) by Definition <ref>. By Theorem <ref>, there is a crystal structure on RF^n(w). Now we show that RFC(w) admits a Demazure crystal structure.

The operators f_i and e_i for 1 ⩽ i < n define a Demazure crystal structure on RFC(w). More precisely, RFC(w) ≅ ⋃ B_w(r)(wt(r)), where the union runs over all r ∈ RFC(w) with e_i r = 0 for all 1 ⩽ i < n, and w(r) is the shortest permutation that sorts sh(P̂(r)).

By Theorem <ref>, the crystal operators on reduced factorizations under weak Edelman–Greene insertion intertwine with the crystal operators on key tableaux. On the other hand, by Theorem <ref> the crystal operators on key tableaux form a Demazure crystal.

For example, the highest weight elements in RFC(153264) are ()(4)(35)(23) and ()(4)(3)(235) (see Figure <ref>), so that as Demazure crystals RFC(153264) ≅ B_s_1s_3s_2s_3(2,2,1) ∪ B_s_1s_2s_3(3,1,1).

The Schubert polynomial for w ∈ S_n may be expressed as 𝔖_w(x) = ∑ κ_sh(P̂(r))(x), where the sum runs over all r ∈ RFC(w^-1) with e_i r = 0 for all 1 ⩽ i < n.
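To make the crystal operators of Section <ref> concrete, the following Python sketch implements the bracketing rule and the operators f_i and e_i on increasing factorizations; the encoding (blocks stored as sorted lists, with r[0] the rightmost block r^1, and None playing the role of 0) and the function names are our own illustration. Restricting to RFC(w) amounts to additionally checking the cutoff condition on the output; reducedness of the result is guaranteed by the theory and not re-checked here.

    def pair(up, low):
        """r^{i+1} r^i bracketing: pair each b in r^i (largest first) with the
        smallest still-unpaired a > b in r^{i+1}; return the unpaired letters."""
        used = set()
        unpaired_low = []
        for b in sorted(low, reverse=True):
            cands = [a for a in up if a > b and a not in used]
            if cands:
                used.add(min(cands))
            else:
                unpaired_low.append(b)
        unpaired_up = [a for a in up if a not in used]
        return unpaired_low, unpaired_up      # R_i and L_i for these two blocks

    def f_i(r, i):
        low, up = set(r[i - 1]), set(r[i])
        R, _ = pair(up, low)
        if not R:
            return None                       # f_i(r) = 0
        b = min(R)
        t = 0
        while b - t - 1 in low:               # t = min{j >= 0 | b-j-1 not in r^i}
            t += 1
        low.remove(b); up.add(b - t)
        new = list(r)
        new[i - 1], new[i] = sorted(low), sorted(up)
        return new

    def e_i(r, i):
        low, up = set(r[i - 1]), set(r[i])
        _, L = pair(up, low)
        if not L:
            return None                       # e_i(r) = 0
        a = max(L)
        s = 0
        while a + s + 1 in up:                # s = min{j >= 0 | a+j+1 not in r^{i+1}}
            s += 1
        up.remove(a); low.add(a + s)
        new = list(r)
        new[i - 1], new[i] = sorted(low), sorted(up)
        return new

    r = [[2, 3], [1, 3], [2]]                 # (2)(13)(23), blocks listed right to left
    print(f_i(r, 1))  # [[2], [1, 2, 3], [2]]   i.e. (2)(123)(2)
    print(e_i(r, 1))  # [[1, 2, 3], [3], [2]]   i.e. (2)(3)(123)

On r = (2)(13)(23) this reproduces the example above: f_1(r) = (2)(123)(2) and e_1(r) = (2)(3)(123).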
{ "authors": [ "Sami Assaf", "Anne Schilling" ], "categories": [ "math.CO", "math.AG", "math.RT", "14N15, 05E10, 05A05, 05E05, 05E18, 20G42" ], "primary_category": "math.CO", "published": "20170526171622", "title": "A Demazure crystal construction for Schubert polynomials" }
Our experience of the world is multimodal — we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced, and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research.

Multimodal, machine learning, introductory, survey.

Multimodal Machine Learning: A Survey and Taxonomy
Tadas Baltrušaitis, Chaitanya Ahuja, and Louis-Philippe Morency
T. Baltrušaitis, C. Ahuja and L-P. Morency are with the Language Technologies Institute at Carnegie Mellon University, Pittsburgh, Pennsylvania. E-mail: tbaltrus, cahuja, [email protected]. 05/26/17
=========================================================================================================================================================================================================================================================================================

§ INTRODUCTION

The world surrounding us involves multiple modalities — we see objects, hear sounds, feel texture, smell odors, and so on. In general terms, a modality refers to the way in which something happens or is experienced. Most people associate the word modality with the sensory modalities which represent our primary channels of communication and sensation, such as vision or touch. A research problem or dataset is therefore characterized as multimodal when it includes multiple such modalities. In this paper we focus primarily, but not exclusively, on three modalities: natural language, which can be both written and spoken; visual signals, which are often represented with images or videos; and vocal signals, which encode sounds and para-verbal information such as prosody and vocal expressions.

In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret and reason about multimodal messages. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. From early research on audio-visual speech recognition to the recent explosion of interest in language and vision models, multimodal machine learning is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential.
The research field of Multimodal Machine Learning brings some unique challenges for computational researchers given the heterogeneity of the data. Learning from multimodal sources offers the possibility of capturing correspondences between modalities and gaining an in-depth understanding of natural phenomena. In this paper we identify and explore five core technical challenges (and related sub-challenges) surrounding multimodal machine learning. They are central to the multimodal setting and need to be tackled in order to progress the field. Our taxonomy goes beyond the typical early and late fusion split, and consists of the five following challenges:

1) Representation A first fundamental challenge is learning how to represent and summarize multimodal data in a way that exploits the complementarity and redundancy of multiple modalities. The heterogeneity of multimodal data makes it challenging to construct such representations. For example, language is often symbolic while audio and visual modalities will be represented as signals.

2) Translation A second challenge addresses how to translate (map) data from one modality to another. Not only is the data heterogeneous, but the relationship between modalities is often open-ended or subjective. For example, there exist a number of correct ways to describe an image, and one perfect translation may not exist.

3) Alignment A third challenge is to identify the direct relations between (sub)elements from two or more different modalities. For example, we may want to align the steps in a recipe to a video showing the dish being made. To tackle this challenge we need to measure similarity between different modalities and deal with possible long-range dependencies and ambiguities.

4) Fusion A fourth challenge is to join information from two or more modalities to perform a prediction. For example, for audio-visual speech recognition, the visual description of the lip motion is fused with the speech signal to predict spoken words. The information coming from different modalities may have varying predictive power and noise topology, with possibly missing data in at least one of the modalities.

5) Co-learning A fifth challenge is to transfer knowledge between modalities, their representations, and their predictive models. This is exemplified by algorithms of co-training, conceptual grounding, and zero shot learning. Co-learning explores how knowledge learned from one modality can help a computational model trained on a different modality. This challenge is particularly relevant when one of the modalities has limited resources (e.g., annotated data).
For each of these five challenges, we define taxonomic classes and sub-classes to help structure the recent work in this emerging research field of multimodal machine learning. We start with a discussion of the main applications of multimodal machine learning (Section <ref>), followed by a discussion of recent developments on all five core technical challenges facing multimodal machine learning: representation (Section <ref>), translation (Section <ref>), alignment (Section <ref>), fusion (Section <ref>), and co-learning (Section <ref>). We conclude with a discussion in Section <ref>.

§ APPLICATIONS: A HISTORICAL PERSPECTIVE

Multimodal machine learning enables a wide range of applications: from audio-visual speech recognition to image captioning. In this section we present a brief history of multimodal applications, from its beginnings in audio-visual speech recognition to the recently renewed interest in language and vision applications.

One of the earliest examples of multimodal research is audio-visual speech recognition (AVSR) <cit.>. It was motivated by the McGurk effect <cit.> — an interaction between hearing and vision during speech perception. When human subjects heard the syllable /ba-ba/ while watching the lips of a person saying /ga-ga/, they perceived a third sound: /da-da/. These results motivated many researchers from the speech community to extend their approaches with visual information. Given the prominence of hidden Markov models (HMMs) in the speech community at the time <cit.>, it is no surprise that many of the early models for AVSR were based on various HMM extensions <cit.>. While research into AVSR is not as common these days, it has seen renewed interest from the deep learning community <cit.>.

While the original vision of AVSR was to improve speech recognition performance (e.g., word error rate) in all contexts, the experimental results showed that the main advantage of visual information was when the speech signal was noisy (i.e., low signal-to-noise ratio) <cit.>. In other words, the captured interactions between modalities were supplementary rather than complementary: the same information was captured in both, improving the robustness of the multimodal models but not improving the speech recognition performance in noiseless scenarios.

A second important category of multimodal applications comes from the field of multimedia content indexing and retrieval <cit.>. With the advance of personal computers and the internet, the quantity of digitized multimedia content has increased dramatically <cit.>. While earlier approaches for indexing and searching these multimedia videos were keyword-based <cit.>, new research problems emerged when trying to search the visual and multimodal content directly. This led to new research topics in multimedia content analysis such as automatic shot-boundary detection <cit.> and video summarization <cit.>. These research projects were supported by the TrecVid initiative from the National Institute of Standards and Technology, which introduced many high-quality datasets, including the multimedia event detection (MED) tasks started in 2011 <cit.>.
A third category of applications was established in the early 2000s around the emerging field of multimodal interaction, with the goal of understanding human multimodal behaviors during social interactions. One of the first landmark datasets collected in this field is the AMI Meeting Corpus, which contains more than 100 hours of video recordings of meetings, all fully transcribed and annotated <cit.>. Another important dataset is the SEMAINE corpus, which allowed the study of interpersonal dynamics between speakers and listeners <cit.>. This dataset formed the basis of the first audio-visual emotion challenge (AVEC), organized in 2011 <cit.>. The fields of emotion recognition and affective computing bloomed in the early 2010s thanks to strong technical advances in automatic face detection, facial landmark detection, and facial expression recognition <cit.>. The AVEC challenge continued annually afterward, with later instantiations including healthcare applications such as the automatic assessment of depression and anxiety <cit.>. A great summary of recent progress in multimodal affect recognition was published by D'Mello et al. <cit.>. Their meta-analysis revealed that a majority of recent works on multimodal affect recognition show improvement when using more than one modality, but this improvement is reduced when recognizing naturally-occurring emotions.

Most recently, a new category of multimodal applications emerged with an emphasis on language and vision: media description. One of the most representative applications is image captioning, where the task is to generate a text description of the input image <cit.>. This is motivated by the ability of such systems to help the visually impaired in their daily tasks <cit.>. The main challenge in media description is evaluation: how to assess the quality of the predicted descriptions. The task of visual question-answering (VQA) was recently proposed to address some of the evaluation challenges <cit.>; its goal is to answer a specific question about the image.

In order to bring some of the mentioned applications to the real world we need to address a number of technical challenges facing multimodal machine learning. We summarize the relevant technical challenges for the above mentioned application areas in Table <ref>. One of the most important challenges is multimodal representation, the focus of our next section.

§ MULTIMODAL REPRESENTATIONS

Representing raw data in a format that a computational model can work with has always been a big challenge in machine learning. Following the work of Bengio et al. <cit.> we use the terms feature and representation interchangeably, with each referring to a vector or tensor representation of an entity, be it an image, audio sample, individual word, or a sentence. A multimodal representation is a representation of data using information from multiple such entities. Representing multiple modalities poses many difficulties: how to combine the data from heterogeneous sources; how to deal with different levels of noise; and how to deal with missing data. The ability to represent data in a meaningful way is crucial to multimodal problems, and forms the backbone of any model.

Good representations are important for the performance of machine learning models, as evidenced by the recent leaps in performance of speech recognition <cit.> and visual object classification <cit.> systems. Bengio et al.
<cit.> identify a number of properties for good representations: smoothness, temporal and spatial coherence, sparsity, and natural clustering, amongst others. Srivastava and Salakhutdinov <cit.> identify additional desirable properties for multimodal representations: similarity in the representation space should reflect the similarity of the corresponding concepts, the representation should be easy to obtain even in the absence of some modalities, and, finally, it should be possible to fill in missing modalities given the observed ones.

The development of unimodal representations has been extensively studied <cit.>. In the past decade there has been a shift from representations hand-designed for specific applications to data-driven ones. For example, one of the most famous image descriptors of the early 2000s, the scale invariant feature transform (SIFT), was hand designed <cit.>, but currently most visual descriptions are learned from data using neural architectures such as convolutional neural networks (CNNs) <cit.>. Similarly, in the audio domain, acoustic features such as Mel-frequency cepstral coefficients (MFCCs) have been superseded by data-driven deep neural networks in speech recognition <cit.> and recurrent neural networks for para-linguistic analysis <cit.>. In natural language processing, the textual features initially relied on counting word occurrences in documents, but have been replaced by data-driven word embeddings that exploit the word context <cit.>. While there has been a huge amount of work on unimodal representation, up until recently most multimodal representations involved simple concatenation of unimodal ones <cit.>, but this has been rapidly changing.

To help understand the breadth of work, we propose two categories of multimodal representation: joint and coordinated. Joint representations combine the unimodal signals into the same representation space, while coordinated representations process unimodal signals separately, but enforce certain similarity constraints on them to bring them to what we term a coordinated space. An illustration of different multimodal representation types can be seen in Figure <ref>.

Mathematically, the joint representation is expressed as:

𝐱_m = f(𝐱_1, …, 𝐱_n),

where the multimodal representation 𝐱_m is computed using a function f (e.g., a deep neural network, restricted Boltzmann machine, or a recurrent neural network) that relies on unimodal representations 𝐱_1, …, 𝐱_n.
The coordinated representation, on the other hand, is expressed as:

f(𝐱_1) ∼ g(𝐱_2),

where each modality has a corresponding projection function (f and g above) that maps it into a coordinated multimodal space. While the projection into the multimodal space is independent for each modality, the resulting space is coordinated between them (indicated by ∼). Examples of such coordination include minimizing cosine distance <cit.>, maximizing correlation <cit.>, and enforcing a partial order <cit.> between the resulting spaces.

§.§ Joint Representations

We start our discussion with joint representations that project unimodal representations together into a multimodal space (Equation <ref>). Joint representations are mostly (but not exclusively) used in tasks where multimodal data is present during both training and inference. The simplest example of a joint representation is a concatenation of individual modality features (also referred to as early fusion <cit.>). In this section we discuss more advanced methods for creating joint representations, starting with neural networks, followed by graphical models and recurrent neural networks (representative works can be seen in Table <ref>).

Neural networks have become a very popular method for unimodal data representation <cit.>. They are used to represent visual, acoustic, and textual data, and are increasingly used in the multimodal domain <cit.>. In this section we describe how neural networks can be used to construct a joint multimodal representation, how to train them, and what advantages they offer. In general, neural networks are made up of successive building blocks of inner products followed by non-linear activation functions. In order to use a neural network as a way to represent data, it is first trained to perform a specific task (e.g., recognizing objects in images). Due to the multilayer nature of deep neural networks, each successive layer is hypothesized to represent the data in a more abstract way <cit.>, hence it is common to use the final or penultimate neural layers as a form of data representation. To construct a multimodal representation using neural networks, each modality starts with several individual neural layers followed by a hidden layer that projects the modalities into a joint space <cit.>. The joint multimodal representation is then passed through multiple hidden layers itself or used directly for prediction. Such models can be trained end-to-end — learning both to represent the data and to perform a particular task. This results in a close relationship between multimodal representation learning and multimodal fusion when using neural networks.

As neural networks require a lot of labeled training data, it is common to pre-train such representations using an autoencoder on unlabeled data <cit.>. The model proposed by Ngiam et al. <cit.> extended the idea of using autoencoders to the multimodal domain. They used stacked denoising autoencoders to represent each modality individually and then fused them into a multimodal representation using another autoencoder layer.
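To illustrate the generic shape of such models, the following PyTorch sketch builds a joint representation in the sense of Equation <ref>: per-modality layers followed by a shared layer projecting into the joint space. All dimensions, layer sizes, and names here are illustrative assumptions, not taken from any cited model; a multimodal autoencoder in the spirit of the works above would add per-modality decoders and a reconstruction loss on top of this encoder.

    import torch
    import torch.nn as nn

    class JointRepresentation(nn.Module):
        """Minimal sketch: per-modality layers followed by a shared hidden
        layer that projects both modalities into one joint space."""
        def __init__(self, dim_img=2048, dim_txt=300, dim_joint=512):
            super().__init__()
            self.img_enc = nn.Sequential(nn.Linear(dim_img, 1024), nn.ReLU())
            self.txt_enc = nn.Sequential(nn.Linear(dim_txt, 1024), nn.ReLU())
            self.joint = nn.Sequential(nn.Linear(2048, dim_joint), nn.ReLU())

        def forward(self, x_img, x_txt):
            # concatenate the modality-specific representations, then fuse
            h = torch.cat([self.img_enc(x_img), self.txt_enc(x_txt)], dim=-1)
            return self.joint(h)              # x_m = f(x_1, x_2)

    x_m = JointRepresentation()(torch.randn(4, 2048), torch.randn(4, 300))
    print(x_m.shape)                          # torch.Size([4, 512])

The joint vector x_m can then be passed to further hidden layers or a task-specific prediction head, and the whole pipeline trained end-to-end as described above.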
Similarly, Silberer and Lapata <cit.> proposed to use a multimodal autoencoder for the task of semantic concept grounding (see Section <ref>). In addition to using a reconstruction loss to train the representation, they introduce a term into the loss function that uses the representation to predict object labels. It is also common to fine-tune the resulting representation on the particular task at hand, as a representation constructed using an autoencoder is generic and not necessarily optimal for a specific task <cit.>.

The major advantage of neural network based joint representations comes from their often superior performance and the ability to pre-train the representations in an unsupervised manner. The performance gain is, however, dependent on the amount of data available for training. One of the disadvantages comes from the model not being able to handle missing data naturally — although there are ways to alleviate this issue <cit.>. Finally, deep networks are often difficult to train <cit.>, but the field is making progress in better training techniques <cit.>.

Probabilistic graphical models are another popular way to construct representations, through the use of latent random variables <cit.>. In this section we describe how probabilistic graphical models are used to represent unimodal and multimodal data. The most popular approaches for graphical-model based representation are deep Boltzmann machines (DBMs) <cit.>, which stack restricted Boltzmann machines (RBMs) <cit.> as building blocks. Similar to neural networks, each successive layer of a DBM is expected to represent the data at a higher level of abstraction. The appeal of DBMs comes from the fact that they do not need supervised data for training <cit.>. As they are graphical models, the representation of the data is probabilistic; however, it is possible to convert them to a deterministic neural network — but this loses the generative aspect of the model <cit.>.

Work by Srivastava and Salakhutdinov <cit.> introduced multimodal deep belief networks as a multimodal representation. Kim et al. <cit.> used a deep belief network for each modality and then combined them into a joint representation for audiovisual emotion recognition. Huang and Kingsbury <cit.> used a similar model for AVSR, and Wu et al. <cit.> for audio and skeletal joint based gesture recognition.

Multimodal deep belief networks have been extended to multimodal DBMs by Srivastava and Salakhutdinov <cit.>. Multimodal DBMs are capable of learning joint representations from multiple modalities by merging two or more undirected graphs using a binary layer of hidden units on top of them. Due to the undirected nature of the model, they allow the low-level representations of each modality to influence each other after the joint training.

Ouyang et al. <cit.> explore the use of multimodal DBMs for the task of human pose estimation from multi-view data. They demonstrate that integrating the data at a later stage — after the unimodal data underwent nonlinear transformations — was beneficial for the model. Similarly, Suk et al. <cit.> use a multimodal DBM representation to perform Alzheimer's disease classification from positron emission tomography and magnetic resonance imaging data.
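As a rough illustration of the RBM building block that such multimodal models stack, the numpy sketch below performs contrastive-divergence (CD-1) updates on a single binary RBM over concatenated audio and video units. Sizes, data, and names are illustrative assumptions; a real multimodal DBM joins per-modality stacks with a shared top layer and is trained with the approximate variational methods discussed below, not plain CD-1.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sample(p):                        # sample binary units from probabilities
        return (rng.random(p.shape) < p).astype(float)

    def cd1_step(v0, W, b, c, lr=0.05):
        """One contrastive-divergence update for a binary RBM."""
        ph0 = sigmoid(v0 @ W + c)         # p(h = 1 | v0), positive phase
        h0 = sample(ph0)
        pv1 = sigmoid(h0 @ W.T + b)       # reconstructed visible probabilities
        ph1 = sigmoid(pv1 @ W + c)        # negative phase
        W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
        b += lr * (v0 - pv1).mean(axis=0)
        c += lr * (ph0 - ph1).mean(axis=0)

    # a joint RBM over concatenated audio (24) and video (16) binary units
    v = sample(np.full((32, 40), 0.3))    # 32 toy training vectors
    W = 0.01 * rng.standard_normal((40, 64))
    b, c = np.zeros(40), np.zeros(64)
    for _ in range(200):
        cd1_step(v, W, b, c)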
One of the big advantages of using multimodal DBMs for learning multimodal representations is their generative nature, which allows for an easy way to deal with missing data — even if a whole modality is missing, the model has a natural way to cope. It can also be used to generate samples of one modality in the presence of the other one, or both modalities from the representation. Similar to autoencoders, the representation can be trained in an unsupervised manner, enabling the use of unlabeled data. The major disadvantage of DBMs is the difficulty of training them — high computational cost, and the need to use approximate variational training methods <cit.>.

Sequential Representation. So far we have discussed models that can represent fixed length data; however, we often need to represent varying length sequences such as sentences, videos, or audio streams. In this section we describe models that can be used to represent such sequences. Recurrent neural networks (RNNs), and their variants such as long short-term memory (LSTM) networks <cit.>, have recently gained popularity due to their success in sequence modeling across various tasks <cit.>. So far RNNs have mostly been used to represent unimodal sequences of words, audio, or images, with most success in the language domain. Similar to traditional neural networks, the hidden state of an RNN can be seen as a representation of the data, i.e., the hidden state of an RNN at timestep t can be seen as a summarization of the sequence up to that timestep. This is especially apparent in RNN encoder-decoder frameworks, where the task of an encoder is to represent a sequence in the hidden state of an RNN in such a way that a decoder could reconstruct it <cit.>. The use of RNN representations has not been limited to the unimodal domain. An early use of constructing a multimodal representation using RNNs comes from work by Cosi et al. <cit.> on AVSR. They have also been used for representing audio-visual data for affect recognition <cit.> and to represent multi-view data such as different visual cues for human behavior analysis <cit.>.

§.§ Coordinated Representations

An alternative to a joint multimodal representation is a coordinated representation. Instead of projecting the modalities together into a joint space, we learn separate representations for each modality but coordinate them through a constraint. We start our discussion with coordinated representations that enforce similarity between representations, moving on to coordinated representations that enforce more structure on the resulting space (representative works of different coordinated representations can be seen in Table <ref>).

Similarity models minimize the distance between modalities in the coordinated space. For example, such models encourage the representation of the word dog and an image of a dog to have a smaller distance between them than the distance between the word dog and an image of a car <cit.>. One of the earliest examples of such a representation comes from the work by Weston et al.
<cit.> on the WSABIE (web scale annotation by image embedding) model, where a coordinated space was constructed for images and their annotations. WSABIE constructs a simple linear map from image and textual features such that corresponding annotation and image representations have a higher inner product (smaller cosine distance) between them than non-corresponding ones.

More recently, neural networks have become a popular way to construct coordinated representations, due to their ability to learn representations. Their advantage lies in the fact that they can jointly learn coordinated representations in an end-to-end manner. An example of such a coordinated representation is DeViSE — a deep visual-semantic embedding <cit.>. DeViSE uses a similar inner product and ranking loss function to WSABIE, but uses more complex image and word embeddings. Kiros et al. <cit.> extended this to sentence and image coordinated representations by using an LSTM model and a pairwise ranking loss to coordinate the feature space. Socher et al. <cit.> tackle the same task, but extend the language model to a dependency tree RNN to incorporate compositional semantics. A similar model was also proposed by Pan et al. <cit.>, but using videos instead of images. Xu et al. <cit.> also constructed a coordinated space between videos and sentences using a ⟨subject, verb, object⟩ compositional language model and a deep video model. This representation was then used for the task of cross-modal retrieval and video description.

While the above models enforced similarity between representations, structured coordinated space models go beyond that and enforce additional constraints between the modality representations. The type of structure enforced is often based on the application, with different constraints for hashing, cross-modal retrieval, and image captioning.

Structured coordinated spaces are commonly used in cross-modal hashing — compression of high dimensional data into compact binary codes with similar binary codes for similar objects <cit.>. The idea of cross-modal hashing is to create such codes for cross-modal retrieval <cit.>. Hashing enforces certain constraints on the resulting multimodal space: 1) it has to be an N-dimensional Hamming space — a binary representation with a controllable number of bits; 2) the same object from different modalities has to have a similar hash code; 3) the space has to be similarity-preserving. Learning the hash function attempts to enforce all three of these requirements <cit.>. For example, Jiang and Li <cit.> introduced a method to learn such a common binary space between sentence descriptions and corresponding images using end-to-end trainable deep learning techniques. Cao et al. <cit.> extended the approach with a more complex LSTM sentence representation and introduced an outlier insensitive bit-wise margin loss and a relevance feedback based semantic similarity constraint. Similarly, Wang et al. <cit.> constructed a coordinated space in which images (and sentences) with similar meanings are closer to each other.

Another example of a structured coordinated representation comes from order-embeddings of images and language <cit.>. The model proposed by Vendrov et al.
<cit.> enforces a dissimilarity metric that is asymmetric and implements the notion of partial order in the multimodal space. The idea is to capture a partial order of the language and image representations — enforcing a hierarchy on the space; for example, image of “a woman walking her dog” → text “woman walking her dog” → text “woman walking”. A similar model using denotation graphs was also proposed by Young et al. <cit.>, where denotation graphs are used to induce a partial ordering. Lastly, Zhang et al. show how exploiting structured representations of text and images can create concept taxonomies in an unsupervised manner <cit.>.

A special case of a structured coordinated space is one based on canonical correlation analysis (CCA) <cit.>. CCA computes a linear projection which maximizes the correlation between two random variables (in our case modalities) and enforces orthogonality of the new space. CCA models have been used extensively for cross-modal retrieval <cit.> and audiovisual signal analysis <cit.>. Extensions to CCA attempt to construct a correlation-maximizing nonlinear projection <cit.>. Kernel canonical correlation analysis (KCCA) <cit.> uses reproducing kernel Hilbert spaces for the projection. However, as the approach is nonparametric, it scales poorly with the size of the training set and has issues with very large real-world datasets. Deep canonical correlation analysis (DCCA) <cit.> was introduced as an alternative to KCCA and addresses the scalability issue; it was also shown to lead to a better correlated representation space. Similarly, correspondence autoencoders <cit.> and deep correspondence RBMs <cit.> have been proposed for cross-modal retrieval. CCA, KCCA, and DCCA are unsupervised techniques and only optimize the correlation over the representations, thus mostly capturing what is shared across the modalities. Deep canonically correlated autoencoders <cit.> also include an autoencoder based data reconstruction term, which encourages the representation to also capture modality specific information. The semantic correlation maximization method <cit.> also encourages semantic relevance, while retaining correlation maximization and orthogonality of the resulting space — this leads to a combination of CCA and cross-modal hashing techniques.

§.§ Discussion

In this section we identified two major types of multimodal representations — joint and coordinated. Joint representations project multimodal data into a common space and are best suited for situations when all of the modalities are present during inference. They have been extensively used for AVSR, affect, and multimodal gesture recognition. Coordinated representations, on the other hand, project each modality into a separate but coordinated space, making them suitable for applications where only one modality is present at test time, such as: multimodal retrieval and translation (Section <ref>), grounding (Section <ref>), and zero shot learning (Section <ref>). Finally, while joint representations have been used to construct representations of more than two modalities, coordinated spaces have, so far, been mostly limited to two modalities.

§ TRANSLATION

A big part of multimodal machine learning is concerned with translating (mapping) from one modality to another. Given an entity in one modality, the task is to generate the same entity in a different modality. For example, given an image we might want to generate a sentence describing it, or, given a textual description, generate an image matching it.
Multimodal translation is a long studied problem, with early work in speech synthesis <cit.>, visual speech generation <cit.>, video description <cit.>, and cross-modal retrieval <cit.>. More recently, multimodal translation has seen renewed interest due to the combined efforts of the computer vision and natural language processing (NLP) communities <cit.> and the recent availability of large multimodal datasets <cit.>. A particularly popular problem is visual scene description, also known as image <cit.> and video captioning <cit.>, which acts as a great test bed for a number of computer vision and NLP problems. To solve it, we not only need to fully understand the visual scene and identify its salient parts, but also to produce grammatically correct and comprehensive yet concise sentences describing it.

While the approaches to multimodal translation are very broad and are often modality specific, they share a number of unifying factors. We categorize them into two types — example-based and generative. Example-based models use a dictionary when translating between the modalities. Generative models, on the other hand, construct a model that is able to produce a translation. This distinction is similar to the one between non-parametric and parametric machine learning approaches and is illustrated in Figure <ref>, with representative examples summarized in Table <ref>. Generative models are arguably more challenging to build as they require the ability to generate signals or sequences of symbols (e.g., sentences). This is difficult for any modality — visual, acoustic, or verbal — especially when temporally and structurally consistent sequences need to be generated. This led to many of the early multimodal translation systems relying on example-based translation. However, this has been changing with the advent of deep learning models that are capable of generating images <cit.>, sounds <cit.>, and text <cit.>.

§.§ Example-based

Example-based algorithms are restricted by their training data — the dictionary (see Figure <ref>). We identify two types of such algorithms: retrieval based and combination based. Retrieval-based models directly use the retrieved translation without modifying it, while combination-based models rely on more complex rules to create translations based on a number of retrieved instances.

Retrieval-based models are arguably the simplest form of multimodal translation. They rely on finding the closest sample in the dictionary and using that as the translated result. The retrieval can be done in unimodal space or in an intermediate semantic space. Given a source modality instance to be translated, unimodal retrieval finds the closest instances in the dictionary in the space of the source — for example, visual feature space for images. Such approaches have been used for visual speech synthesis, by retrieving the closest matching visual example of the desired phoneme <cit.>. They have also been used in concatenative text-to-speech systems <cit.>. More recently, Ordonez et al. <cit.> used unimodal retrieval to generate image descriptions by using global image features to retrieve caption candidates <cit.>. Yagcioglu et al. <cit.> used a CNN-based image representation to retrieve visually similar images using adaptive neighborhood selection. Devlin et al. <cit.> demonstrated that a simple k-nearest neighbor retrieval with consensus caption selection achieves competitive translation results when compared to more complex generative approaches.
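A minimal sketch of this unimodal-retrieval baseline makes the simplicity of the approach apparent; the feature dimensionality, toy data, and function name below are illustrative assumptions, and a consensus step in the spirit of Devlin et al. would additionally pick the candidate most similar to the other retrieved captions (e.g., by mean n-gram overlap).

    import numpy as np

    def knn_captions(query_feat, feats, captions, k=5):
        """Unimodal retrieval: find the k dictionary images whose (e.g. CNN)
        features are closest to the query and return their captions."""
        sims = feats @ query_feat / (
            np.linalg.norm(feats, axis=1) * np.linalg.norm(query_feat) + 1e-9)
        top = np.argsort(-sims)[:k]      # indices of the k most similar images
        return [captions[i] for i in top]

    # toy dictionary of paired image features and captions (illustrative)
    feats = np.random.rand(1000, 512)
    captions = [f"caption {i}" for i in range(1000)]
    print(knn_captions(np.random.rand(512), feats, captions, k=3))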
The advantage of such unimodal retrieval approaches is that they only require the representation of the single modality through which we are performing retrieval. However, they often require an extra processing step such as re-ranking of the retrieved translations <cit.>. This indicates a major problem with this approach — similarity in unimodal space does not always imply a good translation.

An alternative is to use an intermediate semantic space for similarity comparison during retrieval. An early example of a hand-crafted semantic space is the one used by Farhadi et al. <cit.>. They map both sentences and images to a space of ⟨object, action, scene⟩; retrieval of a relevant caption for an image is then performed in that space. In contrast to hand-crafting a representation, Socher et al. <cit.> learn a coordinated representation of sentences and CNN visual features (see Section <ref> for a description of coordinated spaces). They use the model for both translating from text to images and from images to text. Similarly, Xu et al. <cit.> used a coordinated space of videos and their descriptions for cross-modal retrieval. Jiang and Li <cit.> and Cao et al. <cit.> use cross-modal hashing to perform multimodal translation from images to sentences and back, while Hodosh et al. <cit.> use a multimodal KCCA space for image-sentence retrieval. Instead of aligning images and sentences globally in a common space, Karpathy et al. <cit.> propose a multimodal similarity metric that internally aligns image fragments (visual objects) together with sentence fragments (dependency tree relations).

Retrieval approaches in semantic space tend to perform better than their unimodal counterparts as they retrieve examples in a more meaningful space that reflects both modalities and that is often optimized for retrieval. Furthermore, they allow for bi-directional translation, which is not straightforward with unimodal methods. However, they require manual construction or learning of such a semantic space, which often relies on the existence of large training dictionaries (datasets of paired samples).

Combination-based models take the retrieval based approaches one step further. Instead of just retrieving examples from the dictionary, they combine them in a meaningful way to construct a better translation. Combination based media description approaches are motivated by the fact that sentence descriptions of images share a common and simple structure that could be exploited. Most often the rules for combination are hand crafted or based on heuristics. Kuznetsova et al. <cit.> first retrieve phrases that describe visually similar images and then combine them to generate novel descriptions of the query image by using Integer Linear Programming with a number of hand crafted rules. Gupta et al. <cit.> first find the k images most similar to the source image, and then use the phrases extracted from their captions to generate a target sentence. Lebret et al. <cit.> use a CNN-based image representation to infer phrases that describe it.
The predicted phrases are then combined using a trigram constrained language model.

A big problem facing example-based approaches for translation is that the model is the entire dictionary — making the model large and inference slow (although optimizations such as hashing alleviate this problem). Another issue facing example-based translation is that it is unrealistic to expect that a single comprehensive and accurate translation relevant to the source example will always exist in the dictionary — unless the task is simple or the dictionary is very large. This is partly addressed by combination models, which are able to construct more complex structures. However, they are only able to perform translation in one direction, while semantic space retrieval-based models are able to perform it both ways.

§.§ Generative approaches

Generative approaches to multimodal translation construct models that can perform multimodal translation given a unimodal source instance. This is a challenging problem as it requires the ability to both understand the source modality and to generate the target sequence or signal. As discussed in the following section, this also makes such methods much more difficult to evaluate, due to the large space of possible correct answers.

In this survey we focus on the generation of three modalities: language, vision, and sound. Language generation has been explored for a long time <cit.>, with a lot of recent attention for tasks such as image and video description <cit.>. Speech and sound generation has also seen a lot of work, with a number of historical <cit.> and modern approaches <cit.>. Photo-realistic image generation has been less explored, and is still in its early stages <cit.>; however, there have been a number of attempts at generating abstract scenes <cit.>, computer graphics <cit.>, and talking heads <cit.>.

We identify three broad categories of generative models: grammar-based, encoder-decoder, and continuous generation models. Grammar-based models simplify the task by restricting the target domain by using a grammar, e.g., by generating restricted sentences based on a ⟨subject, object, verb⟩ template. Encoder-decoder models first encode the source modality to a latent representation which is then used by a decoder to generate the target modality. Continuous generation models generate the target modality continuously based on a stream of source modality inputs and are most suited for translating between temporal sequences — such as text-to-speech.

Grammar-based models rely on a pre-defined grammar for generating a particular modality. They start by detecting high level concepts from the source modality, such as objects in images and actions from videos. These detections are then incorporated together with a generation procedure based on a pre-defined grammar to result in a target modality. Kojima et al. <cit.> proposed a system to describe human behavior in a video using the detected position of the person's head and hands and rule based natural language generation that incorporates a hierarchy of concepts and actions. Barbu et al. <cit.> proposed a video description model that generates sentences of the form: who did what to whom and where and how they did it. The system was based on handcrafted object and event classifiers and used a restricted grammar suitable for the task. Guadarrama et al.
<cit.> predict ⟨subject, verb, object⟩ triplets describing a video using semantic hierarchies that use more general words in case of uncertainty. Together with a language model, their approach allows for translation of verbs and nouns not seen in the dictionary.

To describe images, Yao et al. <cit.> propose to use an and-or graph-based model together with domain-specific lexicalized grammar rules, a targeted visual representation scheme, and a hierarchical knowledge ontology. Li et al. <cit.> first detect objects, visual attributes, and spatial relationships between objects. They then use an n-gram language model on the visually extracted phrases to generate ⟨subject, preposition, object⟩ style sentences. Mitchell et al. <cit.> use a more sophisticated tree-based language model to generate syntactic trees instead of filling in templates, leading to more diverse descriptions. A majority of approaches represent the whole image jointly as a bag of visual objects without capturing their spatial and semantic relationships. To address this, Elliott et al. <cit.> propose to explicitly model proximity relationships of objects for image description generation.

Some grammar-based approaches rely on graphical models to generate the target modality. An example is BabyTalk <cit.>, which given an image generates ⟨object, preposition, object⟩ triplets, which are used together with a conditional random field to construct the sentences. Yang et al. <cit.> predict a set of ⟨noun, verb, scene, preposition⟩ candidates using visual features extracted from an image and combine them into a sentence using a statistical language model and hidden Markov model style inference. A similar approach has been proposed by Thomason et al. <cit.>, where a factor graph model is used for video description of the form ⟨subject, verb, object, place⟩. The factor model exploits language statistics to deal with noisy visual representations. Going the other way, Zitnick et al. <cit.> propose to use conditional random fields to generate abstract visual scenes based on language triplets extracted from sentences.

An advantage of grammar-based methods is that they are more likely to generate syntactically (in the case of language) or logically correct target instances as they use predefined templates and restricted grammars. However, this limits them to producing formulaic rather than creative translations. Furthermore, grammar-based methods rely on complex pipelines for concept detection, with each concept requiring a separate model and a separate training dataset.

Encoder-decoder models based on end-to-end trained neural networks are currently some of the most popular techniques for multimodal translation. The main idea behind the model is to first encode a source modality into a vectorial representation and then to use a decoder module to generate the target modality, all in a single pass pipeline. Although first used for machine translation <cit.>, such models have been successfully used for image captioning <cit.> and video description <cit.>. So far, encoder-decoder models have mostly been used to generate text, but they can also be used to generate images <cit.> and for continuous generation of speech and sound <cit.>.

The first step of the encoder-decoder model is to encode the source object; this is done in a modality specific way. Popular models to encode acoustic signals include RNNs <cit.> and DBNs <cit.>. Most of the work on encoding words and sentences uses distributional semantics <cit.> and variants of RNNs <cit.>.
Images are most often encoded using convolutional neural networks (CNNs) <cit.>. While learned CNN representations are common for encoding images, this is not the case for videos, where hand-crafted features are still commonly used <cit.>. While it is possible to use unimodal representations to encode the source modality, it has been shown that using a coordinated space (see Section <ref>) leads to better results <cit.>.

Decoding is most often performed by an RNN or an LSTM using the encoded representation as the initial hidden state <cit.>. A number of extensions have been proposed to traditional LSTM models to aid in the task of translation; for example, a guide vector can be used to keep the decoder tightly coupled to the image input <cit.>. Venugopalan et al. <cit.> demonstrate that it is beneficial to pre-train a decoder LSTM for image captioning before fine-tuning it for video description. Rohrbach et al. <cit.> explore the use of various LSTM architectures (single layer, multilayer, factored) and a number of training and regularization techniques for the task of video description.

A problem facing translation generation using an RNN is that the model has to generate a description from a single vectorial representation of the image, sentence, or video. This becomes especially difficult when generating long sequences, as these models tend to forget the initial input. This has been partly addressed by neural attention models (see Section <ref>) that allow the network to focus on certain parts of an image <cit.>, sentence <cit.>, or video <cit.> during generation. Generative attention-based RNNs have also been used for the task of generating images from sentences <cit.>; while the results are still far from photo-realistic, they show a lot of promise. More recently, a large amount of progress has been made in generating images using generative adversarial networks <cit.>, which have been used as an alternative to RNNs for image generation from text <cit.>.

While neural network based encoder-decoder systems have been very successful, they still face a number of issues. Devlin et al. <cit.> suggest that it is possible that the network is memorizing the training data rather than learning how to understand the visual scene and generate it. This is based on the observation that k-nearest neighbor models perform very similarly to those based on generation. Furthermore, such models often require large quantities of data for training.

Continuous generation models are intended for sequence translation and produce outputs at every timestep in an online manner. These models are useful when translating from a sequence to a sequence, such as text to speech, speech to text, and video to text. A number of different techniques have been proposed for such modeling — graphical models, continuous encoder-decoder approaches, and various other regression or classification techniques. The extra difficulty that needs to be tackled by these models is the requirement of temporal consistency between modalities.

A lot of early work on sequence-to-sequence translation used graphical or latent variable models. Deena and Galata <cit.> proposed to use a shared Gaussian process latent variable model for audio-based visual speech synthesis. The model creates a shared latent space between audio and visual features that can be used to generate one space from the other, while enforcing temporal consistency of visual speech across timesteps. Hidden Markov models (HMMs) have also been used for visual speech generation <cit.> and text-to-speech <cit.> tasks.
They have also been extended with cluster adaptive training to allow for training on multiple speakers, languages, and emotions, allowing for more control when generating the speech signal <cit.> or visual speech parameters <cit.>.

Encoder-decoder models have recently become popular for sequence-to-sequence modeling. Owens et al. <cit.> used an LSTM to generate the sounds produced by drumsticks based on video. While their model is capable of generating sounds by predicting a cochleogram from CNN visual features, they found that retrieving the closest audio sample based on the predicted cochleogram led to the best results. Directly modeling the raw audio signal for speech and music generation has been proposed by van den Oord et al. <cit.>. The authors propose using hierarchical fully convolutional neural networks, which show a large improvement over the previous state-of-the-art for the task of speech synthesis. RNNs have also been used for speech-to-text translation (speech recognition) <cit.>. More recently, an encoder-decoder based continuous approach was shown to be good at predicting letters from a speech signal represented as filter bank spectra <cit.>, allowing for more accurate recognition of rare and out-of-vocabulary words. Collobert et al. <cit.> demonstrate how to use a raw audio signal directly for speech recognition, eliminating the need for audio features.

A lot of earlier work used graphical models for multimodal translation between continuous signals. However, these methods are being replaced by neural network encoder-decoder based techniques, especially as the latter have recently been shown to be able to represent and generate complex visual and acoustic signals.

§.§ Model evaluation and discussion

A major challenge facing multimodal translation methods is that they are very difficult to evaluate. While some tasks such as speech recognition have a single correct translation, tasks such as speech synthesis and media description do not. Sometimes, as in language translation, multiple answers are correct, and deciding which translation is better is often subjective. Fortunately, there are a number of approximate automatic metrics that aid in model evaluation.

Often the ideal way to evaluate a subjective task is through human judgment, that is, by having a group of people evaluate each translation. This can be done on a Likert scale where each translation is evaluated on a certain dimension: naturalness and mean opinion score for speech synthesis <cit.>, realism for visual speech synthesis <cit.>, and grammatical and semantic correctness, relevance, order, and detail for media description <cit.>. Another option is to perform preference studies where two (or more) translations are presented to the participant for preference comparison <cit.>. However, while user studies result in the evaluation closest to human judgments, they are time-consuming and costly. Furthermore, they require care when being constructed and conducted to avoid fluency, age, gender, and culture biases.

While human studies are the gold standard for evaluation, a number of automatic alternatives have been proposed for the task of media description: BLEU <cit.>, ROUGE <cit.>, Meteor <cit.>, and CIDEr <cit.>. These metrics are directly taken from (or are based on) work in machine translation and compute a score that measures the similarity between the generated and ground truth text. However, their use has faced a lot of criticism. Elliott and Keller <cit.> showed that sentence-level unigram BLEU is only weakly correlated with human judgments.
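For context on what these scores actually measure, the following is a simplified, self-contained sketch of a BLEU-style score (modified n-gram precision combined with a brevity penalty). It is an illustration only, written by us, and not the reference implementation used in the cited evaluations.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """BLEU-style score: geometric mean of modified n-gram precisions,
    scaled by a brevity penalty that punishes short candidates."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = Counter(ngrams(candidate, n)), Counter(ngrams(reference, n))
        overlap = sum((cand & ref).values())           # clipped matches
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # crude zero smoothing
    log_avg = sum(math.log(p) for p in precisions) / max_n
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(log_avg)

hyp = "a man is playing a guitar".split()
ref = "a man plays the guitar".split()
# Unigram BLEU is high despite the paraphrase; 4-gram BLEU collapses,
# illustrating how sensitive the score is to the choice of n.
print(round(bleu(hyp, ref, max_n=1), 3), round(bleu(hyp, ref, max_n=4), 3))
```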
Huang et al. <cit.> demonstrated that the correlation between human judgments and BLEU and Meteor is very low for the visual storytelling task. Furthermore, the ordering of approaches based on human judgments did not match the ordering produced by the automatic metrics in the MS COCO challenge <cit.>, with a large number of algorithms outperforming humans on all the metrics. Finally, the metrics only work well when the number of reference translations is high <cit.>, which is often unavailable, especially for current video description datasets <cit.>.

These criticisms have led Hodosh et al. <cit.> to propose using retrieval as a proxy for image captioning evaluation, which they argue better reflects human judgments. Instead of generating captions, a retrieval-based system ranks the available captions based on their fit to the image, and is then evaluated by assessing whether the correct captions are given a high rank. As a number of caption generation models are generative, they can be used directly to assess the likelihood of a caption given an image, and such metrics are being adopted by the image captioning community <cit.>. Retrieval-based evaluation metrics have also been adopted by the video captioning community <cit.>.

The visual question-answering (VQA) task <cit.> was proposed partly due to the issues facing the evaluation of image captioning. VQA is a task where, given an image and a question about its content, the system has to answer it. Evaluating such systems is easier due to the presence of a correct answer. However, it still faces issues such as the ambiguity of certain questions and answers, and question bias. We believe that addressing the evaluation issue will be crucial for the further success of multimodal translation systems. This will allow not only for better comparison between approaches, but also for better objectives to optimize.

§ ALIGNMENT

We define multimodal alignment as finding relationships and correspondences between sub-components of instances from two or more modalities. For example, given an image and a caption, we want to find the areas of the image corresponding to the caption's words or phrases <cit.>. Another example is, given a movie, aligning it to the script or the book chapters it was based on <cit.>.

We categorize multimodal alignment into two types — implicit and explicit. In explicit alignment, we are explicitly interested in aligning sub-components between modalities, e.g., aligning recipe steps with the corresponding instructional video <cit.>. Implicit alignment is used as an intermediate (often latent) step for another task, e.g., image retrieval based on a text description can include an alignment step between words and image regions <cit.>. An overview of such approaches can be seen in Table <ref> and is presented in more detail in the following sections.

§.§ Explicit alignment

We categorize papers as performing explicit alignment if their main modeling objective is alignment between sub-components of instances from two or more modalities. A very important part of explicit alignment is the similarity metric. Most approaches rely on measuring similarity between sub-components in different modalities as a basic building block. These similarities can be defined manually or learned from data. We identify two types of algorithms that tackle explicit alignment — unsupervised and (weakly) supervised. The first type operates with no direct alignment labels (i.e., labeled correspondences) between instances from the different modalities.
The second type has access to such (sometimes weak) labels.

Unsupervised multimodal alignment tackles modality alignment without requiring any direct alignment labels. Most of the approaches are inspired by early work on alignment for statistical machine translation <cit.> and genome sequences <cit.>. To make the task easier, these approaches assume certain constraints on alignment, such as the temporal ordering of sequences or the existence of a similarity metric between the modalities.

Dynamic time warping (DTW) <cit.> is a dynamic programming approach that has been extensively used to align multi-view time series. DTW measures the similarity between two sequences and finds an optimal match between them by time warping (inserting frames). It requires the timesteps in the two sequences to be comparable and requires a similarity measure between them. DTW can be used directly for multimodal alignment by hand-crafting similarity metrics between modalities; for example, Anguera et al. <cit.> use a manually defined similarity between graphemes and phonemes, and Tapaswi et al. <cit.> define a similarity between visual scenes and sentences based on the appearance of the same characters <cit.> to align TV shows and plot synopses. DTW-like dynamic programming approaches have also been used for multimodal alignment of text to speech <cit.> and video <cit.>.

As the original DTW formulation requires a pre-defined similarity metric between modalities, it has been extended using canonical correlation analysis (CCA) to map the modalities to a coordinated space. This allows for both aligning (through DTW) and learning the mapping (through CCA) between different modality streams jointly and in an unsupervised manner <cit.>. While CCA-based DTW models are able to find multimodal data alignment under a linear transformation, they are not able to model non-linear relationships. This has been addressed by the deep canonical time warping approach <cit.>, which can be seen as a generalization of deep CCA and DTW.

Various graphical models have also been popular for multimodal sequence alignment in an unsupervised manner. Early work by Yu and Ballard <cit.> used a generative graphical model to align visual objects in images with spoken words. A similar approach was taken by Cour et al. <cit.> to align movie shots and scenes to the corresponding screenplay. Malmaud et al. <cit.> used a factored HMM to align recipes to cooking videos, while Noulas et al. <cit.> used a dynamic Bayesian network to align speakers to videos. Naim et al. <cit.> matched sentences with corresponding video frames using a hierarchical HMM model to align sentences with frames and a modified IBM <cit.> algorithm for word and object alignment <cit.>. This model was then extended to use latent conditional random fields for alignment <cit.> and to incorporate the alignment of verbs to actions in addition to nouns and objects <cit.>.

Both DTW and graphical model approaches to alignment allow for restrictions on the alignment, e.g., temporal consistency, no large jumps in time, and monotonicity. While DTW extensions allow for learning both the similarity metric and the alignment jointly, graphical model based approaches require expert knowledge for their construction <cit.>.

Supervised alignment methods rely on labeled aligned instances, which are used to train the similarity measures for aligning modalities. A number of supervised sequence alignment techniques take inspiration from unsupervised ones. Bojanowski et al.
<cit.> proposed a method similar to canonical time warping, but extended it to take advantage of existing (weak) supervisory alignment data for model training. Plummer et al. <cit.> used CCA to find a coordinated space between image regions and phrases for alignment. Gebru et al. <cit.> trained a Gaussian mixture model and performed semi-supervised clustering together with an unsupervised latent-variable graphical model to align speakers in an audio channel with their locations in a video. Kong et al. <cit.> trained a Markov random field to align objects in 3D scenes to nouns and pronouns in text descriptions.

Deep learning based approaches are becoming popular for explicit alignment (specifically for measuring similarity) due to the very recent availability of aligned datasets in the language and vision communities <cit.>. Zhu et al. <cit.> aligned books with their corresponding movies/scripts by training a CNN to measure similarities between scenes and text. Mao et al. <cit.> used an LSTM language model and a CNN visual one to evaluate the quality of a match between a referring expression and an object in an image. Yu et al. <cit.> extended this model to include relative appearance and context information that allows for better disambiguation between objects of the same type. Finally, Hu et al. <cit.> used an LSTM-based scoring function to find similarities between image regions and their descriptions.

§.§ Implicit alignment

In contrast to explicit alignment, implicit alignment is used as an intermediate (often latent) step for another task. This allows for better performance in a number of tasks, including speech recognition, machine translation, media description, and visual question-answering. Such models do not explicitly align data and do not rely on supervised alignment examples, but learn how to latently align the data during model training. We identify two types of implicit alignment models: earlier work based on graphical models, and more modern neural network methods.

Graphical models have seen some early use to better align words between languages for machine translation <cit.> and to align speech phonemes with their transcriptions <cit.>. However, they require manual construction of a mapping between the modalities, for example a generative phone model that maps phonemes to acoustic features <cit.>. Constructing such models requires training data or human expertise to define them manually.

Neural networks are now a popular way to perform implicit alignment. Translation (Section <ref>) is an example of a modeling task that can often be improved if alignment is performed as a latent intermediate step. As we mentioned before, neural networks are popular ways to address this translation problem, using either an encoder-decoder model or cross-modal retrieval. When translation is performed without implicit alignment, it ends up putting a lot of weight on the encoder module to be able to properly summarize the whole image, sentence, or video in a single vectorial representation. A very popular way to address this is through attention <cit.>, which allows the decoder to focus on sub-components of the source instance. This is in contrast with encoding all source sub-components together, as is done in a conventional encoder-decoder model. An attention module tells the decoder to look more at targeted sub-components of the source to be translated — areas of an image <cit.>, words of a sentence <cit.>, segments of an audio sequence <cit.>, frames and regions in a video <cit.>, and even parts of an instruction <cit.>.
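As an illustration of the mechanism, here is a minimal dot-product attention sketch in PyTorch (our own toy example with arbitrary names and sizes): given the decoder's current hidden state, it computes a weighting over source sub-component features and returns a context vector for the next generation step.

```python
import torch
import torch.nn.functional as F

def attend(decoder_state, region_feats):
    """Dot-product attention over source sub-components.

    decoder_state: (batch, hid)      current decoder hidden state
    region_feats:  (batch, k, hid)   k source sub-components (image
                                     regions, words, audio segments, ...)
    Returns the context vector and the attention weights."""
    scores = torch.bmm(region_feats, decoder_state.unsqueeze(2))  # (b, k, 1)
    weights = F.softmax(scores.squeeze(2), dim=1)                 # (b, k)
    context = torch.bmm(weights.unsqueeze(1), region_feats)       # (b, 1, hid)
    return context.squeeze(1), weights

# Toy usage: 4 images, 49 regions (a 7x7 CNN grid), 512-d features.
state = torch.randn(4, 512)
regions = torch.randn(4, 49, 512)
context, w = attend(state, regions)
print(context.shape, w.sum(dim=1))  # (4, 512); weights sum to 1 per image
```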
In image captioning, for instance, instead of encoding an entire image using a CNN, such an attention mechanism allows the decoder (typically an RNN) to focus on particular parts of the image when generating each successive word <cit.>. The attention module, which learns what part of the image to focus on, is typically a shallow neural network and is trained end-to-end together with the target task (e.g., translation).

Attention models have also been successfully applied to question answering tasks, as they allow for aligning the words in a question with sub-components of an information source such as a piece of text <cit.>, an image <cit.>, or a video sequence <cit.>. This both allows for better performance in question answering and leads to better model interpretability <cit.>. In particular, different types of attention models have been proposed to address this problem, including hierarchical <cit.>, stacked <cit.>, and episodic memory attention <cit.>.

Another neural alternative for aligning images with captions for cross-modal retrieval was proposed by Karpathy et al. <cit.>. Their model aligns sentence fragments to image regions by using a dot product similarity measure between image region and word representations. While it does not use attention, it extracts a latent alignment between modalities through a similarity measure that is learned indirectly by training a retrieval model.

§.§ Discussion

Multimodal alignment faces a number of difficulties: 1) there are few datasets with explicitly annotated alignments; 2) it is difficult to design similarity metrics between modalities; 3) there may exist multiple possible alignments, and not all elements in one modality have correspondences in another. Earlier work on multimodal alignment focused on aligning multimodal sequences in an unsupervised manner using graphical models and dynamic programming techniques; it relied on hand-defined measures of similarity between the modalities or learned them in an unsupervised manner. With the recent availability of labeled training data, supervised learning of similarities between modalities has become possible. However, unsupervised techniques that learn to jointly align and translate or fuse data have also become popular.

§ FUSION

Multimodal fusion is one of the original topics in multimodal machine learning, with previous surveys emphasizing early, late, and hybrid fusion approaches <cit.>. In technical terms, multimodal fusion is the concept of integrating information from multiple modalities with the goal of predicting an outcome measure: a class (e.g., happy vs. sad) through classification, or a continuous value (e.g., positivity of sentiment) through regression. It is one of the most researched aspects of multimodal machine learning, with work dating back 25 years <cit.>.
The interest in multimodal fusion arises from three main benefits it can provide. First, having access to multiple modalities that observe the same phenomenon may allow for more robust predictions; this has been especially explored and exploited by the AVSR community <cit.>. Second, having access to multiple modalities might allow us to capture complementary information — something that is not visible in the individual modalities on their own. Third, a multimodal system can still operate when one of the modalities is missing, for example recognizing emotions from the visual signal when the person is not speaking <cit.>.

Multimodal fusion has a very broad range of applications, including audio-visual speech recognition (AVSR) <cit.>, multimodal emotion recognition <cit.>, medical image analysis <cit.>, and multimedia event detection <cit.>. There are a number of reviews on the subject <cit.>. Most of them concentrate on multimodal fusion for a particular task, such as multimedia analysis, information retrieval, or emotion recognition. In contrast, we concentrate on the machine learning approaches themselves and the technical challenges associated with them.

While some prior work used the term multimodal fusion to include all multimodal algorithms, in this survey paper we classify approaches as fusion when the multimodal integration is performed at the later prediction stages, with the goal of predicting outcome measures. In recent work, the line between multimodal representation and fusion has been blurred for models such as deep neural networks, where representation learning is interlaced with classification or regression objectives. As we will describe in this section, this line is clearer for other approaches such as graphical models and kernel-based methods.

We classify multimodal fusion into two main categories: model-agnostic approaches (Section <ref>) that are not directly dependent on a specific machine learning method, and model-based approaches (Section <ref>) that explicitly address fusion in their construction — such as kernel-based approaches, graphical models, and neural networks. An overview of such approaches can be seen in Table <ref>.

§.§ Model-agnostic approaches

Historically, the vast majority of multimodal fusion has been done using model-agnostic approaches <cit.>.
Such approaches can be split into early (i.e., feature-based), late (i.e., decision-based), and hybrid fusion <cit.>. Early fusion integrates features immediately after they are extracted (often by simply concatenating their representations). Late fusion, on the other hand, performs integration after each of the modalities has made a decision (e.g., classification or regression). Finally, hybrid fusion combines outputs from early fusion and from individual unimodal predictors. An advantage of model-agnostic approaches is that they can be implemented using almost any unimodal classifiers or regressors.

Early fusion can be seen as an initial attempt by multimodal researchers to perform multimodal representation learning, as it can learn to exploit the correlations and interactions between the low-level features of each modality. Furthermore, it only requires the training of a single model, making the training pipeline easier compared with late and hybrid fusion. In contrast, late fusion uses unimodal decision values and fuses them using a fusion mechanism such as averaging <cit.>, voting schemes <cit.>, weighting based on channel noise <cit.> and signal variance <cit.>, or a learned model <cit.>. It allows for the use of different models for each modality, as different predictors can model each individual modality better, allowing for more flexibility. Furthermore, it makes it easier to make predictions when one or more of the modalities is missing, and even allows for training when no parallel data is available. However, late fusion ignores the low-level interactions between the modalities. Hybrid fusion attempts to exploit the advantages of both of the above-described methods in a common framework. It has been used successfully for multimodal speaker identification <cit.> and multimedia event detection (MED) <cit.>.

§.§ Model-based approaches

While model-agnostic approaches are easy to implement using unimodal machine learning methods, they end up using techniques that are not designed to cope with multimodal data. In this section we describe three categories of approaches that are designed to perform multimodal fusion: kernel-based methods, graphical models, and neural networks.

Multiple kernel learning (MKL) methods are an extension of kernel support vector machines (SVMs) that allow for the use of different kernels for different modalities/views of the data <cit.>. As kernels can be seen as similarity functions between data points, modality-specific kernels in MKL allow for better fusion of heterogeneous data; a minimal sketch of this kernel-combination idea is given below. MKL approaches have been an especially popular method for fusing visual descriptors for object detection <cit.> and have only recently been overtaken by deep learning methods for that task <cit.>. They have also seen use in multimodal affect recognition <cit.>, multimodal sentiment analysis <cit.>, and multimedia event detection (MED) <cit.>. Furthermore, McFee and Lanckriet <cit.> proposed to use MKL to perform musical artist similarity ranking from acoustic, semantic, and social view data.
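The sketch below, our own toy example, shows the kernel-combination idea in its simplest form, with fixed rather than learned kernel weights (a full MKL method would learn the weights jointly with the classifier):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X_audio = rng.normal(size=(100, 20))       # per-modality features (toy data)
X_video = rng.normal(size=(100, 50))
y = rng.integers(0, 2, size=100)

# One kernel per modality, each with its own bandwidth.
K_audio = rbf_kernel(X_audio, gamma=0.05)
K_video = rbf_kernel(X_video, gamma=0.02)

# Fixed convex combination of modality kernels (true MKL learns w).
w = 0.6
K = w * K_audio + (1 - w) * K_video

clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))                     # training accuracy on toy data
```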
Finally, Liu et al. <cit.> used MKL for multimodal fusion in Alzheimer's disease classification. This broad applicability demonstrates the strength of such approaches in various domains and across different modalities. Besides flexibility in kernel selection, an advantage of MKL is that the loss function is convex, allowing for model training using standard optimization packages and yielding globally optimal solutions <cit.>. Furthermore, MKL can be used for both regression and classification. One of the main disadvantages of MKL is the reliance on the training data (support vectors) at test time, leading to slow inference and a large memory footprint.

Graphical models are another family of popular methods for multimodal fusion. In this section we overview work on multimodal fusion using shallow graphical models; a description of deep graphical models such as deep belief networks can be found in Section <ref>. The majority of graphical models can be classified into two main categories: generative, modeling the joint probability; or discriminative, modeling the conditional probability <cit.>. Some of the earliest approaches to use graphical models for multimodal fusion include generative models such as coupled <cit.> and factorial hidden Markov models <cit.>, alongside dynamic Bayesian networks <cit.>. A more recently proposed multi-stream HMM method performs dynamic weighting of modalities for AVSR <cit.>.

Arguably, generative models lost popularity to discriminative ones such as conditional random fields (CRFs) <cit.>, which sacrifice the modeling of the joint probability for predictive power. A CRF model was used to better segment images by combining visual information with the textual information of the image description <cit.>. CRF models have been extended to model latent states using hidden conditional random fields <cit.> and have been applied to multimodal meeting segmentation <cit.>. Other multimodal uses of latent-variable discriminative graphical models include multi-view hidden CRFs <cit.> and latent variable models <cit.>. More recently, Jiang et al. <cit.> have shown the benefits of multimodal hidden conditional random fields for the task of multimedia classification. While most graphical models are aimed at classification, CRF models have been extended to a continuous version for regression <cit.> and applied in multimodal settings <cit.> for audio-visual emotion recognition. The benefit of graphical models is their ability to easily exploit the spatial and temporal structure of the data, making them especially popular for temporal modeling tasks, such as AVSR and multimodal affect recognition. They also allow human expert knowledge to be built into the models, and often lead to interpretable models.

Neural networks have been used extensively for the task of multimodal fusion <cit.>. The earliest examples of using neural networks for multimodal fusion come from work on AVSR <cit.>. Nowadays they are being used to fuse information for visual and media question answering <cit.>, gesture recognition <cit.>, affect analysis <cit.>, and video description generation <cit.>. While the modalities used, architectures, and optimization techniques might differ, the general idea of fusing information in a joint hidden layer of a neural network remains the same; a minimal sketch of this joint-layer idea follows below. Neural networks have also been used for fusing temporal multimodal information through the use of RNNs and LSTMs. One of the earlier such applications used a bidirectional LSTM to perform audio-visual emotion classification <cit.>.
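The following PyTorch fragment is our minimal illustration of that joint-hidden-layer pattern: each modality is embedded separately, the embeddings are concatenated, and a shared layer learns the fused representation used for prediction. All sizes and the emotion-classification framing are arbitrary choices for the example.

```python
import torch
import torch.nn as nn

class JointFusion(nn.Module):
    """Fuse two modalities in a shared hidden layer."""

    def __init__(self, audio_dim, visual_dim, hid=128, n_classes=6):
        super().__init__()
        self.audio_net = nn.Sequential(nn.Linear(audio_dim, hid), nn.ReLU())
        self.visual_net = nn.Sequential(nn.Linear(visual_dim, hid), nn.ReLU())
        # The joint layer sees both modality embeddings at once.
        self.joint = nn.Sequential(nn.Linear(2 * hid, hid), nn.ReLU())
        self.classify = nn.Linear(hid, n_classes)

    def forward(self, audio, visual):
        a, v = self.audio_net(audio), self.visual_net(visual)
        fused = self.joint(torch.cat([a, v], dim=1))
        return self.classify(fused)

model = JointFusion(audio_dim=40, visual_dim=512)
logits = model(torch.randn(8, 40), torch.randn(8, 512))  # (8, 6) class scores
```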
More recently, Wöllmer et al. <cit.> used LSTM models for continuous multimodal emotion recognition, demonstrating their advantage over graphical models and SVMs. Similarly, Nicolaou et al. <cit.> used LSTMs for continuous emotion prediction; their method used an LSTM to fuse the results from modality-specific (audio and facial expression) LSTMs.

Approaching modality fusion through recurrent neural networks has also been used in various image captioning tasks. Example models include neural image captioning <cit.>, where a CNN image representation is decoded using an LSTM language model, and gLSTM <cit.>, which incorporates the image data together with sentence decoding at every time step, fusing the visual and sentence data in a joint representation. A more recent example is the multi-view LSTM (MV-LSTM) model proposed by Rajagopalan et al. <cit.>. The MV-LSTM model allows for flexible fusion of modalities in the LSTM framework by explicitly modeling the modality-specific and cross-modality interactions over time.

A big advantage of deep neural network approaches to data fusion is their capacity to learn from large amounts of data. Secondly, recent neural architectures allow for end-to-end training of both the multimodal representation component and the fusion component. Finally, they show good performance compared with non-neural-network-based systems and are able to learn complex decision boundaries that other approaches struggle with. The major disadvantage of neural network approaches is their lack of interpretability: it is difficult to tell what a prediction relies on, and which modalities or features play an important role. Furthermore, neural networks require large training datasets to be successful.

§.§ Discussion

Multimodal fusion has been a widely researched topic, with a large number of approaches proposed to tackle it, including model-agnostic methods, graphical models, multiple kernel learning, and various types of neural networks. Each approach has its own strengths and weaknesses, with some more suited to smaller datasets and others performing better in noisy environments. Most recently, neural networks have become a very popular way to tackle multimodal fusion; however, graphical models and multiple kernel learning are still being used, especially in tasks with limited training data or where model interpretability is important. Despite these advances, multimodal fusion still faces the following challenges: 1) signals might not be temporally aligned (possibly a dense continuous signal and a sparse event); 2) it is difficult to build models that exploit supplementary and not only complementary information; 3) each modality might exhibit different types and levels of noise at different points in time.

§ CO-LEARNING

The final multimodal challenge in our taxonomy is co-learning — aiding the modeling of a (resource poor) modality by exploiting knowledge from another (resource rich) modality. It is particularly relevant when one of the modalities has limited resources — a lack of annotated data, noisy input, or unreliable labels. We call this challenge co-learning as, most often, the helper modality is used only during model training and is not used during test time.
We identify three types of co-learning approaches based on their training resources: parallel, non-parallel, and hybrid. Parallel-data approaches require training datasets where the observations from one modality are directly linked to observations from the other modalities — in other words, where the multimodal observations are from the same instances, such as an audio-visual speech dataset where the video and speech samples come from the same speaker. In contrast, non-parallel data approaches do not require direct links between observations from different modalities; they usually achieve co-learning through overlap in terms of categories. For example, in zero-shot learning, a conventional visual object recognition dataset can be expanded with a second, text-only dataset from Wikipedia to improve the generalization of visual object recognition. In the hybrid-data setting, the modalities are bridged through a shared modality or dataset. An overview of methods in co-learning can be seen in Table <ref>, and a summary of data parallelism in Figure <ref>.

§.§ Parallel data

In parallel-data co-learning, both modalities share a set of instances — audio recordings with the corresponding videos, or images and their sentence descriptions. This allows two types of algorithms to exploit that data to better model the modalities: co-training and representation learning.

Co-training is the process of creating more labeled training samples when we have few labeled samples in a multimodal problem <cit.>. The basic algorithm builds weak classifiers in each modality that bootstrap each other with labels for the unlabeled data. In the seminal work of Blum and Mitchell <cit.>, it was shown to discover more training samples for web-page classification based on the web page itself and the hyperlinks leading to it. By definition this task requires parallel data, as it relies on the overlap of multimodal samples. Co-training has been used for statistical parsing <cit.>, to build better visual detectors <cit.>, and for audio-visual speech recognition <cit.>. It has also been extended to deal with disagreement between modalities by filtering out unreliable samples <cit.>. While co-training is a powerful method for generating more labeled data, it can also lead to biased training samples and, as a result, overfitting.

Transfer learning is another way to exploit co-learning with parallel data. Multimodal representation learning (Section <ref>) approaches such as multimodal deep Boltzmann machines <cit.> and multimodal autoencoders <cit.> transfer information from the representation of one modality to that of another. This not only leads to multimodal representations, but also to better unimodal ones, with only one modality being used at test time <cit.>. Moon et al. <cit.> show how to transfer information from a speech recognition neural network (based on audio) to a lip-reading one (based on images), leading to a better visual representation and a model that can be used for lip-reading without the need for audio information at test time. Similarly, Arora and Livescu <cit.> build better acoustic features using CCA on acoustic and articulatory (location of lips, tongue, and jaw) data. They use the articulatory data only during CCA construction and use only the resulting acoustic (unimodal) representation at test time.
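As a toy illustration of this last idea, the sketch below (our own, with synthetic data standing in for real acoustic and articulatory features) fits CCA on parallel training data and then keeps only the acoustic projection at test time:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
shared = rng.normal(size=(500, 5))                 # shared latent structure
X_acoustic = np.hstack([shared, rng.normal(size=(500, 15))])
Y_articulatory = np.hstack([shared, rng.normal(size=(500, 7))])

# Fit on parallel training data only.
cca = CCA(n_components=5).fit(X_acoustic, Y_articulatory)

# At test time only the acoustic view is available; project it into
# the learned correlated subspace and discard the articulatory view.
X_test = np.hstack([rng.normal(size=(10, 5)), rng.normal(size=(10, 15))])
acoustic_features = cca.transform(X_test)          # (10, 5) enhanced features
print(acoustic_features.shape)
```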
§.§ Non-parallel data

Methods that rely on non-parallel data do not require the modalities to have shared instances, only shared categories or concepts. Non-parallel co-learning approaches can help when learning representations, allow for better semantic concept understanding, and even enable unseen object recognition.

Transfer learning is also possible with non-parallel data; it makes it possible to learn better representations by transferring information from a representation built using a data-rich or clean modality to a data-scarce or noisy one. This type of transfer learning is often achieved by using coordinated multimodal representations (see Section <ref>). For example, Frome et al. <cit.> used text to improve visual representations for image classification by coordinating CNN visual features with word2vec textual features <cit.> trained on separate large datasets. Visual representations trained in such a way result in more meaningful errors — mistaking objects for ones of a similar category <cit.>. Mahasseni and Todorovic <cit.> demonstrated how to regularize a color-video-based LSTM using an autoencoder LSTM trained on 3D skeleton data by enforcing similarities between their hidden states. Such an approach is able to improve the original LSTM and leads to state-of-the-art performance in action recognition.

Conceptual grounding refers to learning semantic meanings or concepts not purely based on language but also on additional modalities such as vision, sound, or even smell <cit.>. While the majority of concept learning approaches are purely language-based, representations of meaning in humans are not merely a product of our linguistic exposure, but are also grounded through our sensorimotor experience and perceptual system <cit.>. Human semantic knowledge relies heavily on perceptual information <cit.>, and many concepts are grounded in the perceptual system rather than being purely symbolic <cit.>. This implies that learning semantic meaning purely from textual information might not be optimal, and motivates the use of visual or acoustic cues to ground our linguistic representations.

Starting from the work by Feng and Lapata <cit.>, grounding is usually performed by finding a common latent space between the representations <cit.> (in the case of parallel datasets) or by learning unimodal representations separately and then concatenating them to obtain a multimodal one <cit.> (in the case of non-parallel data). Once a multimodal representation is constructed, it can be used on purely linguistic tasks. Shutova et al. <cit.> and Bruni et al. <cit.> used grounded representations for better classification of metaphors and literal language. Such representations have also been useful for measuring conceptual similarity and relatedness — identifying how semantically or conceptually related two words <cit.> or actions <cit.> are. Furthermore, concepts can be grounded not only using visual signals but also acoustic ones, leading to better performance especially on words with auditory associations <cit.>, or even olfactory signals <cit.> for words with smell associations. Finally, there is a lot of overlap between multimodal alignment and conceptual grounding, as aligning visual scenes to their descriptions leads to better textual or visual representations <cit.>.
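A minimal sketch of the concatenation route to grounding (our own toy example, with random vectors standing in for real text and visual embeddings such as word2vec and averaged CNN features): word similarity can be computed either from the text-only embedding or from a grounded embedding that appends per-word visual features.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
words = ["cat", "dog", "idea"]

# Stand-ins for real embeddings: text vectors (e.g., word2vec) and the
# mean visual feature of images tagged with each word (e.g., CNN features).
text_emb = {w: rng.normal(size=300) for w in words}
vis_emb = {w: rng.normal(size=128) for w in words}

def grounded(w):
    # L2-normalize each view before concatenating so neither dominates.
    t = text_emb[w] / np.linalg.norm(text_emb[w])
    v = vis_emb[w] / np.linalg.norm(vis_emb[w])
    return np.concatenate([t, v])

print(cosine(text_emb["cat"], text_emb["dog"]))    # text-only similarity
print(cosine(grounded("cat"), grounded("dog")))    # grounded similarity
```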
Conceptual grounding has been found to be an effective way to improve performance on a number of tasks. It also shows that language and vision (or audio) are complementary sources of information, and that combining them in multimodal models often improves performance. However, one has to be careful, as grounding does not always lead to better performance <cit.>; it only makes sense when grounding has relevance for the task — such as grounding using images for visually related concepts.

Zero-shot learning (ZSL) refers to recognizing a concept without having explicitly seen any examples of it, for example classifying a cat in an image without ever having seen (labeled) images of cats. This is an important problem to address because in a number of tasks, such as visual object classification, it is prohibitively expensive to provide training examples for every imaginable object of interest.

There are two main types of ZSL — unimodal and multimodal. Unimodal ZSL looks at the component parts or attributes of the object, such as phonemes to recognize an unheard word, or visual attributes such as color, size, and shape to predict an unseen visual class <cit.>. Multimodal ZSL recognizes objects in the primary modality through the help of a secondary one in which the object has been seen. The multimodal version of ZSL is, by definition, a problem facing non-parallel data, as the sets of seen classes differ between the modalities. Socher et al. <cit.> map image features to a conceptual word space and are able to classify both seen and unseen concepts. An unseen concept can then be assigned to a word that is close to its visual representation — this is enabled by the semantic space being trained on a separate dataset that has seen more concepts. Instead of learning a mapping from the visual to the concept space, Frome et al. <cit.> learn a coordinated multimodal representation between concepts and images that allows for ZSL. Palatucci et al. <cit.> predict the words people are thinking of based on functional magnetic resonance images; they show that it is possible to predict unseen words through the use of an intermediate semantic space. Lazaridou et al. <cit.> present a fast mapping method for ZSL that maps extracted visual feature vectors to text-based vectors through a neural network.

§.§ Hybrid data

In the hybrid-data setting, two non-parallel modalities are bridged by a shared modality or a dataset (see Figure <ref>). The most notable example is the Bridge Correlational Neural Network <cit.>, which uses a pivot modality to learn coordinated multimodal representations in the presence of non-parallel data. For example, in the case of multilingual image captioning, the image modality is always paired with at least one caption in some language. Such methods have also been used to bridge languages that might not have parallel corpora but have access to a shared pivot language, as in machine translation <cit.> and document transliteration <cit.>.

Instead of using a separate modality for bridging, some methods rely on the existence of large datasets from a similar or related task to improve performance in a task with only limited annotated data. Socher and Fei-Fei <cit.> use the existence of large text corpora to guide image segmentation.
Similarly, Hendricks et al. <cit.> use a separately trained visual model and language model to build a better image and video description system for settings where only limited paired data is available.

§.§ Discussion

Multimodal co-learning allows one modality to influence the training of another, exploiting the complementary information across modalities. It is important to note that co-learning is task independent and can be used to create better fusion, translation, and alignment models. This challenge is exemplified by algorithms such as co-training, multimodal representation learning, conceptual grounding, and zero-shot learning (ZSL), and has found many applications in visual classification, action recognition, audio-visual speech recognition, and semantic similarity estimation.

§ CONCLUSION

As part of this survey, we introduced a taxonomy of multimodal machine learning: representation, translation, fusion, alignment, and co-learning. Some of them, such as fusion, have been studied for a long time, but more recent interest in representation and translation has led to a large number of new multimodal algorithms and exciting multimodal applications. We believe that our taxonomy will help to catalog future research papers and to better understand the remaining unresolved problems facing multimodal machine learning.

Tadas Baltrušaitis is a post-doctoral associate at the Language Technologies Institute, Carnegie Mellon University. His primary research interests lie in the automatic understanding of non-verbal human behaviour, computer vision, and multimodal machine learning. In particular, he is interested in the application of such technologies to healthcare settings, with a particular focus on mental health. Before joining CMU, he was a post-doctoral researcher at the University of Cambridge, where he also received his Ph.D. and Bachelor's degrees in Computer Science. His Ph.D. research focused on automatic facial expression analysis in especially difficult real-world settings.

Chaitanya Ahuja is a doctoral candidate in the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University. His interests range over various topics in natural language, computer vision, computational music, and machine learning. Before starting graduate school, Chaitanya completed his Bachelor's at the Indian Institute of Technology, Kanpur.

Louis-Philippe Morency is an Assistant Professor in the Language Technology Institute at Carnegie Mellon University, where he leads the Multimodal Communication and Machine Learning Laboratory (MultiComp Lab). He was formerly a research assistant professor in the Computer Sciences Department at the University of Southern California and a research scientist at the USC Institute for Creative Technologies. Prof. Morency received his Ph.D. and Master's degrees from the MIT Computer Science and Artificial Intelligence Laboratory. His research focuses on building the computational foundations to enable computers with the abilities to analyze, recognize, and predict subtle human communicative behaviors during social interactions. He is currently chair of the advisory committee for the ACM International Conference on Multimodal Interaction and an associate editor at IEEE Transactions on Affective Computing.
1 Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA
2 Astrophysics, Department of Physics, University of Oxford, Keble Road, Oxford OX1 3RH, UK
3 Institute of Astronomy and Department of Physics, National Tsing Hua University, Hsinchu 30013, Taiwan
4 School of Physics and Astronomy, Sun Yat-sen University, Zhuhai 519082, China
5 Yunnan Observatories, Chinese Academy of Sciences, 396 Yangfangwang, Guandu District, Kunming 650216, P. R. China
6 Key Laboratory for the Structure and Evolution of Celestial Objects, Chinese Academy of Sciences, 396 Yangfangwang, Guandu District, Kunming 650216, P. R. China
7 Center for Astronomical Mega-Science, Chinese Academy of Sciences, 20A Datun Road, Chaoyang District, Beijing 100012, P. R. China
8 Institute of Particle Physics and Astronomy, Huazhong University of Science and Technology, China
9 Department of Astronomy and Space Science, Chungnam National University, Daejeon, Republic of Korea

We report our recent Swift, NuSTAR, and XMM-Newton X-ray and Lijiang optical observations of PSR J2032+4127/MT91 213, the γ-ray binary candidate with a period of 45–50 years. The coming periastron of the system was predicted to be in November 2017, around which high-energy flares from keV to TeV energies are expected. Recent studies with Chandra and Swift X-ray observations taken in 2015/16 showed that its X-ray emission has brightened by a factor of ∼10 compared with that before 2013, probably revealing on-going activities between the pulsar wind and the stellar wind. Our new Swift/XRT lightcurve shows no strong evidence of a single vigorous brightening trend, but rather several strong X-ray flares on weekly to monthly timescales superposed on a slowly brightening baseline, namely the low state. The NuSTAR and XMM-Newton observations, taken during the flaring and the low states, respectively, show a denser environment and a softer power-law index during the flaring state, implying that the pulsar wind interacted with stronger stellar winds of the companion to produce the flares. These precursors would be crucial in studying the predicted giant outburst from this extreme γ-ray binary during the periastron passage in late 2017.

§ INTRODUCTION

Gamma-ray binaries are a subclass of high-mass X-ray binaries (HMXBs) that harbour a compact object (a neutron star or a stellar-mass black hole) and a massive O or Be companion, and that show modulated γ-ray emission at GeV/MeV and even TeV energies <cit.>. For those with a highly eccentric orbit of e≳0.8, the periastron passage of the compact object (probably a neutron star in these cases, as a pulsar wind is usually required in modelling; see, e.g., and the references therein) through the stellar wind and/or the Be circumstellar disc (if present) can trigger extraordinary flares seen from radio to TeV γ-rays (e.g., PSR B1259-63/LS 2883; , and HESS J0632+057/MWC 148; ).

PSR J2032+4127/MT91 213 (J2032 hereafter) is a strong γ-ray binary candidate with a high eccentricity.
It was first discovered as a γ-ray and radio emitting pulsar with the Fermi Large Area Telescope (LAT; ) and the NRAO Green Bank Telescope (GBT; ), respectively, and was later identified as a binary system with further γ-ray and radio observations <cit.>. While J2032 was initially thought to be a binary with a long period of ∼20 years, <cit.> refined the binary model and suggested an even longer period of 45–50 years. According to their timing solutions, a strong radio/γ-ray pulsation at P_s=6.98 Hz with a strong spin-down rate of ∼6×10^-13 s^-2 (spin-down luminosity: Ė∼10^35) was detected, showing that it is a young pulsar with a characteristic age of ∼200 kyr. A V=11.95 mag Be star, MT91 213 (a member of the Cyg OB2 stellar association, about 1.5 kpc from us), is found at the inferred pulsar position as the high-mass companion of the pulsar. The best-fit ephemeris shows that the next periastron of the binary will be in late 2017 (i.e., MJD 58069 in Model 2 of ).

<cit.> found an X-ray counterpart of J2032 with Chandra and Swift/XRT, which was faint (i.e., F_X=(1–5)×10^-14) before 2013, but was ∼10 times brighter (i.e., F_X≈3×10^-13) after 2015. This extraordinary X-ray brightening strongly indicates an intimate interaction between the pulsar and stellar winds (see for a detailed modelling). Since the brightening, a rapidly increasing trend seemingly appears in the Swift/XRT lightcurve from 2015 September to mid-2016 <cit.>, which is reminiscent of PSR B1259-63/LS 2883 just before the disc passage (see, e.g., ).

In this letter, we report our recent Swift, NuSTAR, and XMM-Newton X-ray and Lijiang optical observations of J2032 and clarify the current status of the system based on the results.

§ THE 2016 CHANDRA OBSERVATION

We re-analysed the 4.9 ksec Chandra observation taken on 2016 February 24 (ID: 18788). While it has been well studied by <cit.> for J2032, we focus on the three bright nearby X-ray sources (Cyg OB2 4, MT91 221, and CXOU J203213.5+412711), which are only marginally resolved by XMM-Newton and Swift, and unresolved by NuSTAR. The latest spectral information on these sources from the Chandra data is extremely useful for eliminating their undesired contributions in our data. The (v4.7.2) task was used to extract the spectra, with circular source regions of r=15 and source-free background regions of r=10. The sources can be described by an absorbed power-law or an absorbed thermal model (the best-fit parameters are listed in Table <ref>). These models will be included in the NuSTAR and XMM-Newton spectral fits (with frozen parameters) to subtract the field sources' contributions. For J2032, we used and discussed the results presented in <cit.> throughout this work.

§ SWIFT/XRT OBSERVATIONS

In March 2016, we launched a bi-weekly monitoring campaign to follow up the X-ray brightening seen by Swift and Chandra <cit.>. We switched to a weekly observing cadence from December 2016 to February 2017, but changed back to the two-week cadence in March, which is the best for this study. We also note that there is another Swift program on J2032, probably with a longer cadence (PI: Coe). Some of the Swift observations (i.e., data taken before 2016 September) have been reported in <cit.> and <cit.>, and we extend the analysis with all the XRT observations taken before 2017 April 14 in this work. The exposures range from <1 ksec to 5 ksec.
Most of them are useful for building a long-term X-ray lightcurve, but the data quality is still insufficient for meaningful spectral analyses, and such analyses are therefore skipped in this work. For the XRT lightcurve extraction, we used Swift's on-line analysis tool[<http://www.swift.ac.uk/user_objects/>] <cit.> to take good care of the bad pixels, vignetting, and point spread function (PSF) corrections of the data. All parameters were left at the program default values, with the option of binning by observation chosen. Figure <ref> shows the XRT lightcurve after (i) removing the bad data (i.e., upper limits due to extremely short exposures, some data bins with S/N<3, and a fake detection in 2006 due to a noisy background), (ii) re-binning the data points taken within 24 hours, and (iii) subtracting the expected contributions from the three bright X-ray sources (i.e., 1.7×10^-3 cts/s, estimated with the parameters in Table <ref>). As previously mentioned, the spectral quality of the XRT data is poor. Given that J2032 showed strong spectral variability (cf. Table <ref>), we discuss the XRT lightcurve using the XRT count rate throughout the paper to avoid providing misleading information. We here give a counts-to-flux conversion factor of 9.5×10^-11 erg cm^-2 cts^-1 (absorption corrected), computed based on the best-fit model for the XMM-Newton data (see <ref> and Table <ref>), as a rough reference.

§ NUSTAR OBSERVATION

We obtained a 45 ksec (live time) NuSTAR ToO observation of J2032 on 2016 September 9–10 (Figure <ref>). In the NuSTAR FPMA/FPMB images, stray light from the HMXB Cygnus X-3, about 30 arcmin away from J2032, created ghost-ray patterns through single reflections (Kristin Madsen, private communication). Fortunately, the contamination, especially at energies >5 keV, is not too severe, and the source was clearly detected with a net count rate of ∼0.02 cts/s (FPMA+B). A simultaneous 4 ksec Swift observation was also obtained to extend the analysis down to 0.3 keV. The (v6.19) task with the CALDB (v20160731) was used to extract spectra and lightcurves from the FPMA/FPMB observations in the default energy range of 3–78 keV (channels: 35–1909). We adopted a circular source region of radius r=30, which is recommended for faint sources by the NuSTAR team. To minimize the effect of the stray light, we selected two source-free regions of r=30 at the respective positions of the source in the ghost patterns for the background extractions.

The spectra (together with the simultaneous Swift/XRT spectrum extracted with Swift's on-line analysis tool) can be well described (χ_ν^2=53.2/64) by an absorbed simple power-law with Γ=2.7±0.2, N_H=2.5^+1.3_-0.9×10^22, and F_3-78keV=1.25^+0.14_-0.13×10^-12 (or F_0.3-10keV=6.1^+3.2_-1.9×10^-12; absorption corrected), with no obvious high-energy exponential cutoff feature (Figure <ref>). The fitting result does not change significantly if the Swift data are not included (Table <ref>). An additional thermal component with a plasma temperature of T≈0.5 keV can slightly improve the joint NuSTAR-Swift fit, by Δχ^2=2.4. To assess the significance, we simulated 10000 spectra based on the best-fit simple power-law and then fitted the simulated spectra with both the power-law and the power-law-plus-thermal models. In 46% of the simulations the fit improved by more than Δχ^2=2.4, indicating that the improvement is not significant.
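The significance test described above amounts to a Monte Carlo calibration of Δχ^2. The sketch below is our simplified illustration with a toy spectral model and Gaussian errors, not the actual fitting pipeline (which fits the real spectra through the instrument responses): data are simulated from the null power-law model, both models are fit to each realization, and the fraction of realizations exceeding the observed Δχ^2 gives the chance probability.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)
E = np.linspace(3.0, 20.0, 40)                        # toy energy grid (keV)

def powerlaw(E, norm, gamma):
    return norm * E**(-gamma)

def powerlaw_plus_bump(E, norm, gamma, bnorm):
    # Crude stand-in for an additional soft thermal component.
    return powerlaw(E, norm, gamma) + bnorm * np.exp(-E / 2.0)

true = (200.0, 2.7)                                   # null-model parameters
sigma = np.sqrt(powerlaw(E, *true))                   # ~Poisson-like errors

def delta_chi2(counts):
    def chi2(model, p0):
        popt, _ = curve_fit(model, E, counts, p0=p0, sigma=sigma, maxfev=5000)
        return np.sum(((counts - model(E, *popt)) / sigma) ** 2)
    return chi2(powerlaw, true) - chi2(powerlaw_plus_bump, (*true, 1.0))

dchi2 = [delta_chi2(rng.normal(powerlaw(E, *true), sigma)) for _ in range(1000)]
print(np.mean(np.array(dchi2) >= 2.4))                # chance probability
```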
For the NuSTAR lightcurve, we binned the data at 5 ksec to achieve about 100 counts per bin; no strong variability can be seen.

§ XMM-NEWTON OBSERVATION

A 43 ksec XMM-Newton ToO observation, operated in the prime full window mode with the medium optical blocking filter, was obtained on 2016 November 6–7 (Figure <ref>). Following the analysis threads of the XMM-Newton Science Operation Centre[<https://www.cosmos.esa.int/web/xmm-newton/sas-threads>], we used the metatasks in the Science Analysis System (SAS, v15.0.0) to extract the scientific products from the raw data in the Observation Data File (ODF). The live times were 35 ksec for PN and 41 ksec for MOS1/2. After filtering out the high-background periods, the usable live times are reduced to 27 ksec (PN), 39 ksec (MOS1), and 38 ksec (MOS2). J2032 was detected in all EPIC cameras, with net count rates of 0.1 cts/s for PN and 0.03 cts/s for each MOS in 0.3–10 keV. Similar to the NuSTAR lightcurve, no hourly variability can be seen in the PN (1 ksec binned) and MOS1+2 (1.5 ksec binned) lightcurves. We fitted an absorbed simple power-law model to the spectra and found best-fit parameters of Γ=1.9±0.1, N_H=0.70^+0.08_-0.07×10^22, and F_0.3-10keV=0.87^+0.07_-0.06×10^-12 (absorption corrected; χ_ν^2=278.9/272) (Figure <ref>), which are all very different from those of the NuSTAR+Swift spectral fit (Table <ref> and Figure <ref>). We also tried to add a thermal component to improve the fit, but the reduced χ^2 was found to be even higher. All the best-fit parameters (including the NuSTAR+Swift ones) are shown in Table <ref>.

§ THE LIJIANG 2.4-M OBSERVATIONS

To study the evolving Hα emission line from the circumstellar disc of MT91 213 <cit.>, two spectra with exposures of 120 sec and 180 sec were taken with the Yunnan Faint Object Spectrograph and Camera (YFOSC) on the Lijiang 2.4-m telescope on 2016 November 20 and December 11, respectively. The spectral resolutions are medium, with Grism 15 (183 nm/mm) and Slit 3 (10) on December 11, and Grism 14 (92 nm/mm) and Slit 3 (18) on November 20. After the standard data reduction processes, the Hα emission line was clearly detected in both datasets, although the double-peaked line profile <cit.> is unresolved. We computed the equivalent widths (EW) of the Hα emission lines to be -5.6Å and -5.3Å on November 20 and December 11, respectively (Figure <ref>).

§ DISCUSSION

Before 2013 March, the X-ray source was marginally detected by XRT at C_XRT∼0.001 cts/s. In 2015 September, it had brightened to C_XRT≈0.008 cts/s after a 2.5-year observing gap. The X-ray emission then increased more rapidly, from C_XRT≈0.007 cts/s to ≈0.024 cts/s, in 2016 April–July <cit.>. While a continuous increase was theoretically expected (see, e.g., ), the X-ray emission returned to C_XRT≈0.006 cts/s within three months, as confirmed by the XMM-Newton observation (Figure <ref>). The flux was increasing again afterwards, but a few declines again appear later (Figure <ref>). More than one type of variation must be involved to produce the complexity seen in the X-ray lightcurve.
§.§ The Long-Term Variability

A closer look at the Swift/XRT lightcurve reveals several local flux minima, the most obvious of which are indicated by the shaded regions in Figure <ref>. We fitted these XRT minima, together with the quiescent fluxes measured before 2013 (including the three Chandra measurements taken in 2002–2010; <cit.>), with a simple power-law, and the data can be well connected by F_X∝ t_p^-1.2±0.1 (Figure <ref>), where t_p is the number of days from the periastron passage (i.e., MJD 58069; <cit.>). These low-flux intervals therefore appear to belong to the same emission state, hereafter called the low state.

The momentum ratio of the stellar wind to the pulsar wind is one of the major factors determining the X-ray luminosity of the wind–wind interaction shock in a γ-ray binary. The dependence can be even stronger if a non-constant magnetization of the shock with distance from the pulsar is considered <cit.>. In J2032, the slowly brightening low state is likely the consequence of the pulsar approaching the Be star and interacting with an increasingly strong stellar wind. Because of the currently large separation between the pulsar and the Be star, the rate of X-ray flux increase is slow; nevertheless, it is sufficient to produce the two distinct flux levels before and after the 2.5-year observing gap in 2013–2015, as observed by Swift/XRT and Chandra.

It is worth noting that the XMM-Newton data were taken during the low state. The best-fit hydrogen column density (i.e., N_H=7×10^21 cm^-2) is well consistent with the foreground value estimated from the optical color excess of MT91 213 (i.e., N_H=7.7×10^21 cm^-2; <cit.>). In addition, the best-fit photon index (i.e., Γ=1.9) is very close to those of the Chandra observations taken before 2016 (i.e., Γ=2 with the foreground N_H; <cit.>), supporting our suggestion that the source was in the same low state during the Chandra observations.

§.§ The Short-Term Variability

Besides the possible long-term brightening trend, multiple flares on weekly to monthly timescales are clearly present in the XRT lightcurve (Figure <ref>). Our NuSTAR observation provides a good X-ray spectroscopic view of one of these flares. Compared with the low-state spectrum taken by XMM-Newton, the NuSTAR spectrum has a significantly softer photon index and a heavier N_H absorption (Figure <ref>). This high N_H strongly implies a denser medium around the pulsar during the flare, probably caused by an occasional strong wind from the Be star. Using the binary orbit presented in <cit.> and the mass-loss rate ṁ=4π r^2 v_w ρ_w (where r is the distance from the star, v_w∼1000 km/s is the wind speed, and ρ_w is the density of the wind at r) for a steady and spherically symmetric wind, we integrated the density along the line-of-sight and found that ṁ∼10^-5–10^-4 M_⊙ yr^-1 is required to accumulate an intrinsic N_H of ∼10^22 cm^-2.
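A minimal sketch of the column-density integration described above. The wind density follows from ṁ=4πr²v_wρ_w; the star-pulsar separation, viewing angle, mean molecular weight, and integration cutoff used below are illustrative assumptions, not the orbital values used in the paper.

```python
# N_H accumulated through a steady, spherically symmetric stellar wind:
# n(r) = mdot / (4*pi*r^2*v_w*mu*m_H), integrated along the line of
# sight from the pulsar outward (star at the origin).

import numpy as np
from scipy.integrate import quad

M_SUN = 1.989e33   # g
YR = 3.156e7       # s
AU = 1.496e13      # cm
M_H = 1.67e-24     # g

def n_H(r_cm, mdot_msun_yr, v_w_cm_s, mu=1.3):
    """Hydrogen number density of the wind at stellar distance r."""
    mdot = mdot_msun_yr * M_SUN / YR
    return mdot / (4.0 * np.pi * r_cm**2 * v_w_cm_s * mu * M_H)

def column_density(mdot_msun_yr, d_sep_au=20.0, psi_deg=60.0,
                   v_w_km_s=1000.0, l_max_au=1e4):
    """N_H along a ray from the pulsar, at angle psi to the
    star-pulsar vector; d_sep and psi are assumed geometry."""
    d, psi, v_w = d_sep_au * AU, np.radians(psi_deg), v_w_km_s * 1e5
    def integrand(l):
        r = np.sqrt(d**2 + l**2 + 2.0 * d * l * np.cos(psi))
        return n_H(r, mdot_msun_yr, v_w)
    val, _ = quad(integrand, 0.0, l_max_au * AU, limit=200)
    return val  # cm^-2

for mdot in (1e-5, 1e-4):
    print(f"mdot = {mdot:.0e} Msun/yr -> N_H ~ {column_density(mdot):.1e} cm^-2")
```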
This inferred rate is several orders of magnitude higher than the typical value for B-type stars <cit.>, suggesting that the wind is likely compact and clumpy (i.e., the pulsar was hitting compact wind clumps rather than a homogeneous wind). When impacting the pulsar wind, this strong clumpy wind probably pushed the shock toward the pulsar side, causing a stronger magnetic field at the emission region <cit.>. In this case, NuSTAR might have observed emission from particles in both the slow- and fast-cooling regimes during the flaring state, while only emission from the slow-cooling regime was observed by XMM-Newton in the low state, possibly explaining the observed divergence in photon index.

In the Hα line study of <cit.>, the circumstellar disc of the Be star expanded from R_disc≈0.2 to 0.4 AU (i.e., EW from -3.3 to -10.2Å) in 3–4 months during the first few X-ray flares (see also Figure <ref>). Our Lijiang spectra indicate that the circumstellar disc shrank back to R_disc≈0.3 AU (converted from the EW using the equation in <cit.>) a few months later, while the X-ray source was likely in the low state. We suspect that the disc expansion is a signature of the hypothesized strong clumpy wind that triggered the observed flares. On the other hand, the activity of the circumstellar disc could be induced by the approaching pulsar, as seen in PSR B1259-63/LS 2883 <cit.>, although the separation between the pulsar and the Be star in J2032 was much larger.

Finally, we note that PSR B1259-63/LS 2883 did not previously show such pre-periastron-passage flares (see, e.g., <cit.>). However, when PSR B1259-63 entered the circumstellar disc in 2004, the X-ray flux, N_H, and the photon index were all increasing <cit.>. In J2032, a very similar spectral change is seen in the transition from the low state to the flaring state (Table <ref> and Figure <ref>), although the photon indexes of PSR B1259-63/LS 2883 are generally harder (i.e., increasing from Γ=1.2 to 1.8) than those of J2032 (i.e., from Γ=1.9 to 2.7). It would be intriguing to ask whether this is a common feature of γ-ray binaries when the pulsar moves from a more tenuous medium into a denser one.

§ CONCLUSION

With the NuSTAR and XMM-Newton X-ray observations, we identify two very different spectral states, namely the low state (i.e., low X-ray flux and N_H with a hard spectrum) and the flaring state (i.e., high X-ray flux and N_H with a soft spectrum). The Swift/XRT lightcurve suggests that the low state has been slowly evolving, possibly following F_X∝ t_p^-1.2, while the flares occur on weekly to monthly timescales. In addition, these flares could be correlated with the size of the circumstellar disc of MT91 213, as indicated by the Hα emission line studies (see also <cit.>). The physical origin of these flares and the implication of the slowly brightening low state are still not entirely clear. Continuous multi-wavelength monitoring observations (e.g., from Swift and Fermi) will be useful for studying these flares, as well as any pre-periastron activities before the periastron passage in late 2017.

We thank (i) Neil Gehrels and Brad Cenko for approving our Swift monitoring campaign, (ii) Fiona Harrison for approving the NuSTAR DDT request and Kristin Madsen for the technical support, and (iii) Norbert Schartel for approving the XMM-Newton DDT request and Rosario Gonzalez-Riestra for scheduling the observation. We acknowledge the support of the staff of the Lijiang 2.4-m telescope.
Funding for the telescope has been provided by the Chinese Academy of Sciences and the People's Government of Yunnan Province. The scientific results reported in this article are based in part on data obtained from the Chandra Data Archive. AKHK is supported by the Ministry of Science and Technology of Taiwan through grants 105-2112-M-007-033-MY2, 105-2119-M-007-028-MY3, and 106-2918-I-007-005. XH is supported by the National Natural Science Foundation of China through grant 11503078. PHT is supported by the National Natural Science Foundation of China (NSFC) through grants 11633007 and 11661161010. JT is supported by NSFC grants 11573010 and U1631103. CYH is supported by the National Research Foundation of Korea through grants 2014R1A1A2058590 and 2016R1A5A1013277.

Facilities: Swift, NuSTAR, XMM, YAO:2.4m, and CXO

§ REFERENCES

Abdo, A. A., Ackermann, M., Ajello, M., et al. (Fermi LAT Collaboration) 2009, Science, 325, 840
Abdo, A. A., Ackermann, M., Ajello, M., et al. 2011, ApJ, 736, L11
Acciari, V. A., Aliu, E., Arlen, T., et al. 2009, ApJ, 698, L94
Aharonian, F., Akhperjanian, A. G., Aye, K.-M., et al. 2005, A&A, 442, 1
Bongiorno, S. D., Falcone, A. D., Stroh, M., et al. 2011, ApJ, 737, L11
Caliandro, G. A., Cheung, C. C., Li, J., et al. 2015, ApJ, 811, 68
Camilo, F., Ray, P. S., Ransom, S. M., et al. 2009, ApJ, 705, 1
Casares, J., Ribó, M., Ribas, I., et al. 2012, MNRAS, 421, 1103
Chernyakova, M., Neronov, A., Lutovinov, A., Rodriguez, J., & Johnston, S. 2006, MNRAS, 367, 1201
Chernyakova, M., Abdo, A. A., Neronov, A., et al. 2014, MNRAS, 439, 432
Chernyakova, M., Neronov, A., van Soelen, B., et al. 2015, MNRAS, 454, 1358
Dubus, G. 2013, A&ARv, 21, 64
Evans, P. A., Beardmore, A. P., Page, K. L., et al. 2007, A&A, 469, 379
Evans, P. A., Beardmore, A. P., Page, K. L., et al. 2009, MNRAS, 397, 1177
Hanuschik, R. W. 1989, Ap&SS, 161, 61
Hinton, J. A., Skilton, J. L., Funk, S., et al. 2009, ApJ, 690, L101
Ho, W. C. G., Ng, C.-Y., Lyne, A. G., et al. 2017, MNRAS, 464, 1211
Krtička, J. 2014, A&A, 564, A70
Lyne, A. G., Stappers, B. W., Keith, M. J., et al. 2015, MNRAS, 451, 581
Moldón, J., Johnston, S., Ribó, M., Paredes, J. M., & Deller, A. T. 2011, ApJ, 732, L10
Skilton, J. L., Pandey-Pommier, M., Hinton, J. A., et al. 2009, MNRAS, 399, 317
Takata, J., Tam, P. H. T., Ng, C. W., et al. 2017, ApJ, 836, 241
Tam, P. H. T., Huang, R. H. H., Takata, J., et al. 2011, ApJ, 736, L10
Tam, P. H. T., Li, K. L., Takata, J., et al. 2015, ApJ, 798, L26
Wang, N., Johnston, S., & Manchester, R. N. 2004, MNRAS, 351, 599
Extending the "Energetic Scaling of Relativistic Jets From Black Hole Systems" to Include γ-ray-loud X-ray Binaries
Gavin P. Lamb, Shiho Kobayashi, Elena Pian
===================================================================================================================

We show that the jet power P_j and geometrically corrected γ-ray luminosity L_γ for the X-ray binaries (XRBs) Cygnus X-1, Cygnus X-3, and V404 Cygni, and the γ-ray upper limits for GRS 1915+105 and GX339-4, follow the universal scaling for the energetics of relativistic jets from black hole (BH) systems found by <cit.> for blazars and GRBs. The observed peak γ-ray luminosity for the XRBs is geometrically corrected, and the minimum jet power is estimated from the peak flux density of radio flares and the flare rise time. The L_γ-P_j correlation holds across ∼17 orders of magnitude. The correlation suggests a jet origin for the high-energy emission from X-ray binaries, and indicates a common mechanism or efficiency for the high-energy (0.1–100 GeV) emission from all relativistic BH systems.

relativistic processes – stars: black holes – jets

§ INTRODUCTION

Astrophysical jets are observed on many different scales, from proto-stars and X-ray binaries (XRBs) within our Galaxy to radio-galaxies, blazars, and γ-ray bursts (GRBs) at cosmological distances. Relativistic jets from BH systems have a broad range of luminosities and dynamics: XRBs with a BH component have bolometric luminosities that can reach ∼10^39 erg s^-1 <cit.>, with Lorentz factors constrained by observations of the jet and counter-jet to Γ≲5-10; blazars, the on-axis analogue of the kilo-parsec jet structures of radio-galaxies <cit.>, have luminosities of ∼10^48 erg s^-1 and Lorentz factors Γ≲40-50 <cit.>; GRBs have energy outputs ∼10^52 erg s^-1, where achromatic temporal breaks in the afterglow indicate a jet structure <cit.>, and Lorentz factors ≳100 can be inferred from the highly variable non-thermal emission <cit.>.

Several attempts have been made to unify the different scales of BH engines. The relativistic jets or outflows from BH systems are thought to have a common mechanism. The appearance of superluminal features in a jet following a dip in X-ray emission has been observed for both XRBs and the radio-galaxy 3C120, where the X-ray dip is associated with accretion <cit.>. A fundamental plane connecting BH mass, radio, and X-ray luminosity was found for active galactic nuclei (AGN) and XRBs by <cit.>. A scaling relation for the radio flux from the cores of AGN and XRBs with BH mass M (or accretion rate), where most accretion scenarios produce the relation F_ν∝ M^17/12-s/3 with s the spectral index (s=0 for flat-spectrum sources and s∼0.75 for optically thin emission), demonstrates that the radio-loudness of jets scales with BH mass, where the mass can range over nine orders of magnitude <cit.>. Scaling laws have similarly been found to unify low-power accreting BHs over many decades in mass <cit.>. The emission models for jets from a supermassive BH have also been successfully applied to an XRB, e.g., GRS 1915+105 <cit.>. Comparisons between the jets from different-mass BH systems led <cit.> to use the nature of episodic jets from AGN and XRBs to explain the erratic light-curves of GRBs.
A correlation between blazar jets and GRBs was demonstrated by <cit.>: by considering the power P_j of GRB and blazar jets and the collimation-corrected γ-ray luminosity L_γ, the relation P_j∝ L_γ^0.98 was found. Blazars and GRBs occupy the low and high ends of the correlation, respectively. This result implies that the efficiency of the γ-ray producing mechanism within these jets is consistent over 10 orders of magnitude in jet power.

There have been several attempts to find a unifying scheme or scaling relation between BH systems in which accretion and ejection are at work. Many results have been obtained that separately relate AGN and GRBs, or AGN and XRBs. An attempt to relate all three classes was made recently by <cit.>, who used the X-ray and radio luminosities from GRBs and an inferred BH mass to show that the fundamental plane of BH activity <cit.> holds for all jetted BH systems. Also, by considering the bolometric luminosity from jets, <cit.> demonstrated that BH XRBs and low-luminosity AGN fit the L_γ-P_j relation for GRBs and AGN. If the L_γ-P_j relation is truly universal, then the on-axis, collimation-corrected γ-ray luminosity and jet power for XRBs should fit the same relation as blazars and GRBs. A fit to this relation could indicate a ubiquitous emission mechanism for all relativistic BH jets and allow constraints to be placed on the high-energy emission models for XRBs, AGN, and GRBs.

The XRBs Cygnus X-1 <cit.>, Cygnus X-3 <cit.>, and V404 Cygni <cit.> have been detected at Fermi LAT γ-ray energies. A further two sources have Fermi LAT upper limits: GRS 1915+105 and GX339-4 <cit.>. All of these objects have evidence for a BH component <cit.>: Cygnus X-1 (Cyg X-1) has a BH confirmed by dynamical modelling <cit.>; Cygnus X-3 (Cyg X-3) has a radio/X-ray correlation which follows that found in BH X-ray binaries <cit.>; V404 Cygni (V404 Cyg) has a BH confirmed by the mass function <cit.>; for GRS 1915+105, the BH is established using a dynamical mass estimate <cit.>; for GX339-4, the K-correction and model confirm the BH <cit.>. By including these XRBs in the L_γ-P_j universal scaling found by <cit.>, we make the first attempt, using γ-ray luminosities, at comparing the energetics of three classes of accreting BH systems. The comparison is extended to ∼17 decades in both γ-ray luminosity and jet power.

In <ref> the XRB parameters are discussed. <ref> outlines the method for correcting the γ-ray luminosity and inferred jet power for inclination and collimation. The results are presented in <ref>. The discussion and conclusion are in <ref> and <ref>.

§ XRB PARAMETERS

The inclusion of XRBs on the L_γ-P_j relation requires estimates of the γ-ray luminosity from the relativistic jets, as seen by an on-axis observer, and estimates of the jet power. Unlike blazars and GRBs, the jets from XRBs are not guaranteed to be oriented along the line-of-sight. Any detected emission from an off-axis jet has to be corrected for the relativistic Doppler effect; this requires knowledge of the system inclination and the bulk Lorentz factor Γ. Additionally, any high-energy emission from the jet will be collimated within an angle 1/Γ, which is typically greater than the jet half-opening angle for XRBs.
To estimate the jet power we assume equipartition of energy between the particles and the magnetic field, and use the optically thin emission during radio flares to find the minimum power. The necessary parameters are: the detected γ-ray photon flux N; the system distance D; the jet inclination i; the jet bulk Lorentz factor Γ; and the radio flare peak flux density S_ν, observed frequency ν, and rise time Δt.

Radio emission and flares from XRBs are attributed to relativistic jets. Accretion, seen at X-ray energies, and ejection, seen in the radio, are strongly correlated <cit.>. The peak flux and rise time of radio flares can be used to constrain the power of a jet. Emission at γ-ray energies from XRBs has been associated with radio flaring and variability <cit.>. Detection of γ-rays during periods of intense radio flaring suggests that the origin of the high-energy emission is a jet <cit.>. The simultaneous detection of the 511 keV annihilation line and higher-energy γ-rays from V404 Cyg within hours of a giant radio flare likewise indicates a jet as the origin of the γ-ray emission <cit.>. XRBs are not persistent γ-ray sources at current detection sensitivities, although see <cit.>, where Cyg X-3 was detected above the background without a flare. Generally, XRBs have only been observed at these high energies during flares; we therefore use the detected peak Fermi LAT γ-ray photon flux for each source and determine an observed isotropic-equivalent γ-ray luminosity L_γ,obs,iso from the Fermi LAT photon spectral index[The high-energy photon spectral index is regularly represented using Γ; to avoid confusion with the outflow bulk Lorentz factor (Γ) we use α throughout] α at energies >100 MeV. Detections are in the 0.1–10 GeV range for Cyg X-1 and Cyg X-3 <cit.>, and 0.1–100 GeV for V404 Cyg <cit.>. Upper limits on the γ-ray photon flux from GRS 1915+105 and GX339-4, in the energy range 0.1–10 GeV, are used to estimate the maximum L_γ,obs,iso for these objects <cit.>. The detected peak photon flux and spectral index α for Cyg X-1, Cyg X-3, and V404 Cyg, and the γ-ray photon flux upper limits for GRS 1915+105 and GX339-4, are shown in Table <ref>.

The photon spectral index is defined as N_E∝ E^-α, where N_E is in units of ph s^-1 cm^-2 erg^-1 and E is the photon energy. The γ-ray luminosity is then

L_γ,obs,iso ∼ 1.9×10^35 N_-6 D_kpc^2 [(α-1)/(α-2)] (E_low^2-α - E_high^2-α)/(E_low^1-α - E_high^1-α)    erg s^-1,

where E_low and E_high are the detection band limits in GeV, N_-6=N/(10^-6 ph s^-1 cm^-2), N is the detected photon flux, and D_kpc is the distance in kpc.

The observed proper motion of radio jet components can be used to constrain the value of Γ. The proper motion is defined as μ=cβ sin i/[D(1±β cos i)] radians s^-1, where β=(1-Γ^-2)^1/2. An approaching component μ_a has 1-β cos i and a receding component μ_r has 1+β cos i. Using resolved μ_a and μ_r, a value for β cos i can be found, where β cos i=(μ_a-μ_r)/(μ_a+μ_r) <cit.>. Values of β cos i for various XRBs are listed by <cit.> (hereafter MFN06). For a system with a known inclination, the observable quantity β cos i can be used to determine the bulk Lorentz factor Γ. Where the inclination is unknown, the Lorentz factor can be determined using the approaching and receding proper motions and the distance to the system.
From the product of the proper motions μ_aμ_r,

Γ = [1 - x^2 - μ_aμ_r D^2 (1-x^2)/c^2]^-1/2,

where x is the observed value β cos i, the proper motions μ_a and μ_r are in radians s^-1, D is the distance in cm, and c is the speed of light in cm s^-1.

If the proper motions of either component are poorly constrained, a limit on Γ can be found by considering the observed jet opening angle ϕ. The angle ϕ is an upper limit found by measuring the angle between the jet central axis and a line tangential from the edge of a radio component to the system core. The jet components are assumed to be spherical plasmoids that expand uniformly with a co-moving velocity β_exp. If we assume a maximum co-moving expansion velocity of c, the jet bulk Lorentz factor is Γ≲[1+tan^-2ϕ sin^-2 i]^1/2. Where the co-moving expansion velocity is less than the maximum, Γ≲[1+β_exp^2/(tan^2ϕ sin^2 i)]^1/2. This assumes no jet confinement.

The inclination i of the system to the line of sight is well constrained for Cyg X-1, V404 Cyg, and GRS 1915+105 <cit.>. Cyg X-3 and GX339-4 have unknown system inclinations. For Cyg X-3, <cit.> showed that the jet orientation within the system is constrained to 20°≲θ_j≲80°, with a system line-of-sight inclination of i=30°; an inclination of i=30° was likewise used in the models of <cit.>. Using the β cos i values in MFN06 and the distance to the system, the bulk Lorentz factor Γ of the jet can be constrained. Given β cos i=0.5, μ_aμ_r∼7.4×10^-26 rad^2 s^-2, and D=7 kpc, the bulk Lorentz factor is Γ=1.18 and the line-of-sight inclination to the jet axis is i≃20°. For GX339-4, MFN06 measured β cos i≥0.16 and derived Γ≥4.9 from the jet opening angle; a lower limit of Γ≥2.3 is used by <cit.>. Using these values for Γ, the inclination of the system can be determined from β cos i=0.16: for Γ=4.9 the inclination is i=80°.6; for Γ=2.3 the inclination is i=79°.8. In all cases we assume that the inclination angle is the same as the line-of-sight angle to the jet axis, and that there is no significant precession. Values for the inclination are listed in Table <ref>.

XRB jets typically have Γ<5. For Cyg X-1, a Lorentz factor Γ=1.25 is used by <cit.> for modelling the lepto-hadronic broadband emission, whilst the jet opening angle and β cos i from MFN06 give a minimum value of Γ=3.3; we show results for both values. For Cyg X-3, the Lorentz factor must be Γ≤2 (MFN06); we derived the value Γ=1.18 (equation <ref>), and we show results for Γ=2 and Γ=1.18. For V404 Cyg we assume a Lorentz factor Γ=2.3 <cit.>. For GRS 1915+105, from the inclination i=60° and β cos i=0.41, we derive Γ=1.75. For GX339-4, we use the value Γ=4.9 from the jet opening angle. We assume that the γ-ray emission and radio flares are from the jet, with negligible contributions from the accretion disk or star. The peak radio flare flux density S_ν, the rise time Δt, the frequency ν, and the distance D are shown with references in Table <ref>.

§ METHODS

We use the Fermi LAT measured γ-ray photon flux for three XRBs: Cygnus X-1 <cit.>, Cygnus X-3 <cit.>, and V404 Cygni <cit.>. These are currently the only γ-ray detected XRBs. Fermi LAT upper limits exist for GRS 1915+105 and GX339-4 <cit.> and are used for these objects. Cygnus X-3 and V404 Cygni have also been detected at >100 MeV by AGILE <cit.>. The high-energy emission is associated with jet activity. Emission from a relativistic jet is beamed in the direction of the jet bulk motion; we assume a point-like emission region on the jet axis for all high-energy photons.
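For concreteness, the sketch below implements equations (1) and (2). The Cyg X-3 inputs quoted above (β cos i=0.5, μ_aμ_r∼7.4×10^-26 rad^2 s^-2, D=7 kpc) recover Γ≈1.18.

```python
# Equations (1) and (2): isotropic-equivalent gamma-ray luminosity from a
# detected photon flux, and the bulk Lorentz factor from the product of
# the approaching/receding proper motions.

import numpy as np

KPC = 3.086e21  # cm
C = 2.998e10    # cm/s

def L_gamma_obs_iso(N, D_kpc, alpha, E_low=0.1, E_high=10.0):
    """Equation (1): L in erg/s; N in ph/s/cm^2; band limits in GeV."""
    N_6 = N / 1e-6
    band = (E_low**(2 - alpha) - E_high**(2 - alpha)) / \
           (E_low**(1 - alpha) - E_high**(1 - alpha))
    return 1.9e35 * N_6 * D_kpc**2 * (alpha - 1) / (alpha - 2) * band

def lorentz_factor(x, mu_a_mu_r, D_kpc):
    """Equation (2): Gamma from x = beta*cos(i) and mu_a*mu_r (rad^2/s^2)."""
    D = D_kpc * KPC
    return 1.0 / np.sqrt(1.0 - x**2 - mu_a_mu_r * D**2 * (1.0 - x**2) / C**2)

print(lorentz_factor(0.5, 7.4e-26, 7.0))             # ~1.18 (Cyg X-3)
print(L_gamma_obs_iso(1e-6, 7.0, 2.5))               # erg/s, example flux
```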
The γ-ray luminosity is corrected for the inclination of the jet to the line of sight. The Lorentz-invariant quantity I_ν/ν^3 <cit.>, where I_ν is the specific intensity and ν the frequency, can be used to determine the specific luminosity from a relativistic source when the observer is outside the relativistic beaming angle. As ν=δν', where δ=[Γ(1-β cos i)]^-1 is the relativistic Doppler factor, Γ the bulk Lorentz factor, i the inclination, and primed quantities are in the co-moving frame, then I_ν=I'_ν'(ν/ν')^3=I'_ν'δ^3. The observed luminosity is then L_ν=4π I'_ν'δ^3. For an on-axis observer the Doppler factor becomes δ=[Γ(1-β)]^-1; the observed luminosity is then a^3 times the on-axis luminosity <cit.>, where a is the correction from an on-axis to an off-axis observer, given by a=(1-β)/(1-β cos i).

The γ-ray luminosity for an on-axis observer has detection band limits a factor a^-1 times the off-axis detection limits; a correction to the on-axis Doppler-boosted emission should therefore be made to keep the detection band consistent. All the γ-ray detections have a single power-law spectral fit with a νF_ν index 2-α, and no information on a spectral peak or on the behaviour at lower energies. A peak for the γ-ray component should exist at a few GeV <cit.> for an on-axis observer; we therefore assume a flat spectrum for the correction. The on-axis isotropic-equivalent γ-ray luminosity is then L_γ,iso=a^-3 L_γ,obs,iso, where L_γ,obs,iso is the observed isotropic-equivalent γ-ray luminosity, equation <ref>. The collimation-corrected luminosity is L_γ=f_b L_γ,iso, where f_b=1-cos(1/Γ) is the collimation factor for the jet. The intrinsic, on-axis γ-ray luminosity is then L_γ=f_b a^-3 L_γ,obs,iso.

Bright radio flares from plasmoids that travel along the relativistic jet structures can be used to estimate the minimum power of the jet. Although γ-ray emission is often correlated with radio flaring, the site of the emission within the jet is distinct. Radio flares are contained by the plasmoids, and equipartition of the energy within these structures can be assumed. The jet power is estimated by assuming equipartition of energy between the synchrotron-emitting particles and the magnetic field strength B <cit.>. The energy density in the particles, given a random magnetic field, is e∝ B^-3/2, and the energy density in the magnetic field is u∝ B^2. The total energy is E_total=V(e+u), where V is the volume of the emitting region; as the dominant component is unknown (i.e., large B and small e, or small B and large e), a minimum energy can be found at the point where dE_total/dB=0. The particle number density assumes a power-law distribution of ultra-relativistic electrons, n_e∝ E^-p; the contribution from relativistic protons is included via the factor η=1+ϵ_p/ϵ_e, where ϵ_p is the energy in protons and ϵ_e the energy in electrons. The energy in the particles is E=C(p,ν) η L_ν B^-3/2, where L_ν is the co-moving specific luminosity and C(p,ν) is a constant that depends on the particle index p, the frequency ν of the specific luminosity, and the upper and lower synchrotron frequency limits for the particle distribution.

For a distribution of particles with a power-law index p>2, the low-energy particles dominate. By assuming that ν=ν_min, the minimum synchrotron frequency, a simple estimate for the energy in the system can be made[This assumes no large flux of low-energy relativistic particles with a different energy spectrum].
We assume a particle distribution, in all cases, of p=2.5 <cit.>; the observed flux density S_ν then has a spectral index of 0.75, where S_ν∝ν^-0.75. The volume of the emitting system is assumed to be spherical, with a size inferred from the light-crossing time indicated by the radio flare rise time Δt; the volume is then V=4π(Δt c)^3/3. The jet power P_j=E_total/Δt can then be estimated from the Doppler-corrected observed flux density; for an optically thin source the Doppler correction to the flux density is δ^3+(p-1)/2 <cit.>.

The jet power P_j is a Lorentz-invariant quantity; the flux density, time, and frequency must therefore be co-moving quantities. The flux density is S'_ν'=δ^-(3+(p-1)/2) S_ν, the time is Δt'=δΔt, and the frequency is ν'=δ^-1ν. The jet power is then

P_j ∼ 3.5×10^33 η^4/7 Δt'^2/7 ν'_GHz^2/7 S'_ν',mJy^4/7 D_kpc^8/7    erg s^-1,

where Δt' is in seconds, ν' is in GHz, S'_ν' is in mJy, and D is in kpc. We assume equal energy in protons and electrons, ϵ_p/ϵ_e=1.

Uncertainties on the derived values are estimated by propagating the uncertainties on the distance, the inclination, the γ-ray flux, and the bulk Lorentz factor. The uncertainty on Γ is assumed to be dΓ=0.1 for Cyg X-1, Cyg X-3, GRS 1915+105, and GX339-4, where the estimate for Γ comes from observed proper motions, and dΓ=0.5 for V404 Cyg, where Γ is found from a model jet velocity. The choice of uncertainty for Γ reflects the estimation method and a conservative value for the minimum precision. The error on the final parameters is dominated by the uncertainty in the γ-ray flux and is only very weakly dependent on the choice of dΓ.

§ RESULTS

Figure <ref> shows the L_γ-P_j relation for the sample of XRBs. The observed luminosities (filled markers) and the collimation/Doppler-corrected values (unfilled markers) are both shown. Values for Cyg X-1 are blue squares, Cyg X-3 are red diamonds, and V404 Cyg are pink stars. For Cyg X-1, the small unfilled marker is the estimate based on Γ=1.25 and the large marker on Γ=3.3. For Cyg X-3, the small unfilled marker is Γ=1.18 and the large Γ=2. For V404 Cyg, only one estimate of the bulk Lorentz factor is used. GRS 1915+105 is an upward-pointing black triangle; GX339-4 is a downward-pointing black triangle. Errorbars are derived from the quoted uncertainties, or set to 0.5 dex where the propagated errors are large. The parameters used for the XRB sample, and the derived luminosities and powers, are listed in Table <ref>.

§ DISCUSSION

Using the observed peak Fermi LAT γ-ray flux or upper limit, the jet to line-of-sight inclination, and the jet Lorentz factor, we have made estimates of the on-axis, isotropic-equivalent γ-ray luminosity from the jets of five XRBs. The isotropic on-axis luminosity is further corrected for the collimated emission, where the fraction is given by 1-cos(1/Γ), resulting in a collimation-corrected estimate of the γ-ray luminosity. This γ-ray luminosity, along with an estimate of the jet power, can be directly compared with the universal scaling for relativistic jets from BH systems proposed by <cit.>. The Nemmen relation is based on the peak γ-ray luminosity and jet power for blazars and γ-ray bursts (GRBs). The inclusion of XRBs on this plot extends the L_γ-P_j relation to lower luminosities and powers.
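A minimal sketch of the full correction chain follows: the beaming factor a, the collimation- and Doppler-corrected luminosity f_b a^-3 L_γ,obs,iso, and the minimum jet power of equation (3) with the observed flare quantities shifted into the co-moving frame. The flare numbers used below are placeholders rather than the Table <ref> values, and the correlation coefficients are those of the scaling quoted in the following paragraphs.

```python
# Beaming/collimation corrections and the equipartition minimum jet power
# of equation (3), with a comparison to the quoted L_gamma-P_j scaling.

import numpy as np

def doppler_a(Gamma, i_deg):
    """a = (1 - beta)/(1 - beta*cos i): on-axis to off-axis correction."""
    beta = np.sqrt(1.0 - Gamma**-2)
    return (1.0 - beta) / (1.0 - beta * np.cos(np.radians(i_deg)))

def L_gamma_corrected(L_obs_iso, Gamma, i_deg):
    """Collimation- and Doppler-corrected luminosity f_b * a^-3 * L_obs."""
    f_b = 1.0 - np.cos(1.0 / Gamma)
    return f_b * doppler_a(Gamma, i_deg)**-3 * L_obs_iso

def jet_power(S_mJy, dt_s, nu_GHz, D_kpc, Gamma, i_deg, p=2.5, eta=2.0):
    """Equation (3); observed flare quantities are first Doppler-shifted
    into the co-moving frame (eta = 1 + eps_p/eps_e = 2 for equal energy)."""
    beta = np.sqrt(1.0 - Gamma**-2)
    delta = 1.0 / (Gamma * (1.0 - beta * np.cos(np.radians(i_deg))))
    S_c = S_mJy * delta**-(3.0 + (p - 1.0) / 2.0)
    dt_c = dt_s * delta
    nu_c = nu_GHz / delta
    return 3.5e33 * eta**(4/7) * dt_c**(2/7) * nu_c**(2/7) * \
           S_c**(4/7) * D_kpc**(8/7)

# Placeholder flare: 10 Jy at 15 GHz, 1 hr rise, D = 7 kpc, Gamma = 2, i = 20 deg
P_j = jet_power(1e4, 3600.0, 15.0, 7.0, 2.0, 20.0)
# Invert the quoted correlation log P_j = 0.98 log L_gamma + 1.6
L_pred = 10**((np.log10(P_j) - 1.6) / 0.98)
print(f"P_j ~ {P_j:.2e} erg/s; correlation-implied L_gamma ~ {L_pred:.2e} erg/s")
```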
The XRB fit on this plot can also be used to indicate a jet origin for the γ-ray photons from such sources.

The γ-ray luminosity for the three source types (blazars, GRBs, and XRBs) is the beamed, on-axis, collimation-corrected luminosity. The jet power is estimated for each source type in its own way. For blazars, the jet power is found from a tight correlation between the radio luminosity and the power required to inflate an X-ray cavity <cit.>: using the relation P_j∼6×10^43 L_40^0.7 erg s^-1, where L_40 is the radio luminosity in units of 10^40 erg s^-1, the power for blazars with VLA-observed extended radio emission was determined. For GRBs, the jet power is found using a collimation-corrected estimate of the kinetic energy from the peak of the radio or X-ray afterglow, assuming the fireball model; the jet power is then P_j=(1+z) f_b E_k,iso/t_90, where f_b is the collimation correction, E_k,iso the isotropic-equivalent kinetic energy, and t_90 the timescale containing 90% of the prompt emission energy. For XRBs, the jet power is found from the minimum energy assuming equipartition and the peak radio flare flux density; the jet power is given by equation <ref>. Our estimates of the Doppler- and collimation-corrected γ-ray luminosity and jet power for the five XRBs in our sample all fall within the uncertainties associated with the original L_γ-P_j relation for BH jets:

log P_j = (0.98±0.02) log L_γ + (1.6±0.9)    erg s^-1.

The L_γ-P_j correlation can also be applied to XRBs without a limit on the γ-ray luminosity. There are at least four additional XRBs with peak radio flare flux densities, rise times, β cos i values, and distance measurements: GRO J1655-40, V4641 Sgr, XTE J1550-564, and H 1743-322, all of which have BH components <cit.>. The distances to these systems are: 3.2 kpc for GRO J1655-40 <cit.>; 6.2 kpc for V4641 Sgr <cit.>; 4.5 kpc for XTE J1550-564 <cit.>; and 10 kpc for H 1743-322 <cit.>. Given the β cos i and μ_aμ_r values in MFN06, the bulk Lorentz factors are Γ=[2.5, ≥2.5, 1.3, 3.7], respectively. The jet power for these systems, using the rise time and peak flux listed in MFN06 for GRO J1655-40, V4641 Sgr, and XTE J1550-564, and the rise time and peak flux from <cit.> for H 1743-322, is then P_j=[2.93, 1.05, 0.04, 8.33]×10^38 erg s^-1, respectively. The L_γ-P_j relation can then constrain the on-axis γ-ray luminosity: as the observed upper limit on the γ-ray luminosity is L_γ,obs,iso=f_b^-1 a^3 L_γ, using equation <ref>, the maximum γ-ray photon flux at a detector for each source is N_γ≤[1.6, 0.5, 2.8, 0.1]×10^-8 photons s^-1 cm^-2, respectively, at energies >100 MeV and assuming α=2.5.

The inclination angle used for the relativistic Doppler correction is in all cases assumed to be the angle from a point source on the jet axis to the line-of-sight. However, the jets have a finite opening angle ϕ, so the angle to the jet could be as low as (i-ϕ); the Doppler-corrected luminosity would then be lower in each case than the values presented here. Cyg X-1 and Cyg X-3 have relatively small inclination angles, 27°.1 and ∼20°, and jet opening angles, <18° and <16°.5, respectively. The Doppler- and collimation-corrected values for each system using (i-ϕ) are shifted to lower γ-ray luminosities. For Cyg X-1, L_γ∼2.5×10^34 erg s^-1 when Γ=3.3, and for both Γ values used here the result is closer to the central L_γ-P_j trend.
For Cyg X-3, L_γ∼10^36 erg s^-1 for Γ=2, and for both Γ values used the results remain well within the correlation limits. Note, however, that the Doppler-corrected γ-ray energies for the five XRBs are most likely underestimates; this is due to the shift of the observed Fermi LAT band (>100 MeV) to the on-axis energy range, where ν_obs=aν_o and ν_o is the value for an on-axis observer. The observed Fermi LAT spectrum is, in all cases, assumed to be a single power-law; without information on the spectral peak or the index below the observed minimum energy of 100 MeV, we have assumed the on-axis γ-ray luminosity to be equivalent to the energy in the Doppler-corrected band, i.e., a flat spectrum. If the single power-law extended to energies lower than those observed by Fermi LAT, the on-axis Doppler-corrected luminosities would be of order L_γ∼10^41 erg s^-1; such a bright on-axis source could be detectable as a γ-ray transient in local galaxies, e.g., N_γ∼2×10^-6 ph s^-1 cm^-2 at 1 Mpc, falling to N_γ∼2×10^-8 ph s^-1 cm^-2 at 10 Mpc.

To estimate the minimum power of the jet we have assumed a ratio of energy in relativistic protons to electrons in the synchrotron-emitting region of ϵ_p/ϵ_e=1. This ratio could in reality be very small, or as high as ∼100; for GRBs the ratio is typically in the range 10≲ϵ_p/ϵ_e≲100. If the energy in the hadronic particles is larger, the jet powers presented here are underestimates: for ϵ_p/ϵ_e=2 the jet power would increase by a factor of ∼1.3, and for ϵ_p/ϵ_e=100 the power would be ∼9.4 times that presented here. Alternatively, if the energy in relativistic protons is very small, the jet power would be ∼0.7 of that presented. As noted by <cit.>, this method does not consider the contribution of cold ions in the jet bulk flow to the total power. The minimum jet powers presented here are therefore underestimates; the maximum correction factor to the presented powers is ∼50. If the minimum jet powers presented here are massively underestimated then, by considering similar arguments for the underestimated jet power in blazars <cit.>, the L_γ-P_j correlation may still hold but with a shallower index.

Figure <ref> shows the <cit.> distribution of blazars and GRBs, with the uncertainties for each population, plus the five XRBs presented here. Where two estimates of the Doppler-corrected luminosity and power exist for an XRB, we have used the values corresponding to the larger Γ. The addition of more XRBs to this distribution will help to determine the validity of the correlation and, if it holds, better constrain the index and limits for a wide range of BH jets in L_γ-P_j. <cit.> found a similar correlation for XRBs in the hard state using the bolometric jet luminosity derived from models; the power estimates for the jets in their sample were typically lower than those found here by up to 3–4 orders of magnitude. Our estimates are based on the minimum jet power during a flaring/transient event, as opposed to the compact jets seen during the hard state; this difference can explain the disagreement in jet power where the same source is compared. The luminosity used in our sample is the γ-ray flare luminosity, not the hard-state bolometric jet luminosity, and our estimates are therefore directly comparable to the original <cit.> correlation. That <cit.> find a correlation without using the γ-ray luminosity demonstrates that a common mechanism, with very small differences, links all BHs and jets through accretion.
By considering only the γ-ray flux from these jets we can probe the part of the outflow with the highest Lorentz factor and the strongest relativistic beaming. For GRBs, the emitted γ-rays are a small fraction of the total engine energy; yet despite the differences between these sources (stellar-mass BHs in XRBs, SMBHs in AGN, and SNe/mergers for GRBs), the observed relation is always the same. A confirmed L_γ-P_j correlation for jets from accreting BH systems, regardless of the phenomenological differences between the systems, could help determine a ubiquitous emission mechanism for high-energy photons from such jets. The existence of a correlation across extremes of time and mass scales points to common physical phenomena among all relativistic BH jets. If all relativistic BH jets share the same high-energy emission mechanism, then the differences between the system classes can be used to constrain the emission mechanism at γ-ray energies. Alternatively, the correlation may indicate that the efficiency of the various γ-ray emission processes in relativistic jets is similar.

Two groups of models are used to explain the high-energy emission in XRBs and blazars: hadronic/lepto-hadronic models, where the high-energy emission arises from internal jet processes such as synchrotron self-Compton (SSC), proton synchrotron, or the decay of neutral pions from proton-proton cascades; and leptonic models, where the high-energy emission is due to external Compton scattering of a strong photon field by relativistic electrons, the photons coming either from the stellar companion's black-body or the accretion-disk X-rays for XRBs, or from the accretion disk and broad-line region for blazars.

Strong polarization measured in the γ-ray tail of Cyg X-1 favours a lepto-hadronic model, and a jet origin, for the high-energy emission <cit.>. Lepto-hadronic modelling of the broadband emission of Cyg X-1 by <cit.> also favours a synchrotron or SSC, and therefore jet, origin for the high-energy tail. The low mass of the companion of V404 Cyg, the temporal association of the γ-ray excess with radio flares, and the simultaneous detection of the 511 keV annihilation line and higher-energy γ-rays all point to the jet as the origin of such emission <cit.>. Long-term monitoring of blazars indicates a correlation between γ-ray flares and optical flares; the optical emission from blazars is polarized to different degrees depending on the location of the synchrotron peak relative to the observed optical bands <cit.>; optical and γ-ray flares, and high polarizations, are associated with the jet. Polarization measurements of the early afterglow in GRBs indicate the existence of an ordered magnetic field in a magnetized baryonic jet <cit.>. Alternatively, if the high-energy emission from AGN and XRBs is due to a leptonic process, i.e., inverse-Compton scattering of external photons, then this correlation may have implications for the emission mechanism responsible for the high-energy prompt GRB emission. For a long GRB, the target photons may come from the shock breakout, the early supernova photosphere, or a companion star, whose presence may be inferred from the high degree of stripping of long-GRB progenitor supernovae, i.e., Type Ic SNe.

Throughout, we have made the assumption that all GRBs are powered by BHs, as opposed to magnetars; the overall L_γ-P_j correlation presented here may support this assumption.
XRBs, blazars, and GRBs all populating the same L_γ-P_j relation indicates a common jet emission mechanism, or efficiency, for γ-rays, where the magnitude of L_γ depends on the jet power.

§ CONCLUSIONS

We have shown that, when corrected for collimation and Doppler boosting, the γ-ray luminosity from XRBs follows the L_γ-P_j relation found by <cit.> for relativistic jets from BH systems. This correlation holds across ∼17 orders of magnitude and represents the first attempt at comparing the energetics, using γ-ray luminosities, of three classes of accreting BH systems (XRBs, AGN, and GRBs). Although the jet powers and γ-ray luminosities for XRBs are most likely underestimates, the XRBs are relatively closely grouped in the parameter space. The power of a jet from a BH system can be independently constrained by the on-axis γ-ray luminosity; alternatively, the jet power can be used to indicate the expected on-axis γ-ray luminosity of high-energy flares from BH jets. Future target-of-opportunity high-energy observations of XRBs during radio flaring events could help to further constrain this relation. If such a relation is ubiquitous amongst relativistic jets from BHs, then a common emission mechanism, or efficiency, is most likely responsible, and comparing the different systems can put constraints on the emission dynamics.

§ ACKNOWLEDGEMENTS

GPL thanks Iain Steele and Phil James for helpful comments, and the participants of the Astrophysical Jets School of Cargèse 2016 for useful discussions. This research was supported by STFC grants. EP acknowledges financial support from INAF-ASI contract I/088/06/0, from the Italian Ministry of Education and Research, and from Scuola Normale Superiore.

§ REFERENCES

Blandford, R. D., & Königl, A. 1979, ApJ, 232, 34
Bodaghee, A., Tomsick, J. A., Pottschmidt, K., et al. 2013, ApJ, 775, 98
Burbidge, G. R. 1956, Phys. Rev., 103, 264
Burbidge, G. R. 1959, ApJ, 129, 849
Casares, J., & Charles, P. A. 1994, MNRAS, 271, L5
Cavagnolo, K. W., McNamara, B. R., Nulsen, P. E. J., et al. 2010, ApJ, 720, 1066
Corbel, S., Fender, R. P., Tzioumis, A. K., et al. 2000, A&A, 359, 251
Corbel, S., Nowak, M. A., Fender, R. P., Tzioumis, A. K., & Markoff, S. 2003, A&A, 400, 1007
Corbel, S., Dubus, G., Tomsick, J. A., et al. 2012, MNRAS, 421, 2947
Corbel, S., Coriat, M., Brocksopp, C., et al. 2013, MNRAS, 428, 2500
Corral-Santana, J. M., Casares, J., Muñoz-Darias, T., et al. 2016, A&A, 587, A61
Dubus, G., Cerutti, B., & Henri, G. 2010, MNRAS, 404, L55
Falcke, H., Körding, E., & Markoff, S. 2004, A&A, 414, 895
Fender, R. P., & Pooley, G. G. 1998, MNRAS, 300, 573
Fender, R. P. 2001, MNRAS, 322, 31
Fender, R., & Belloni, T. 2004, ARA&A, 42, 317
Fender, R. P., Belloni, T. M., & Gallo, E. 2004, MNRAS, 355, 1105
Fermi LAT Collaboration, Abdo, A. A., Ackermann, M., et al. 2009, Science, 326, 1512
Ghisellini, G. 1999, in High Energy Processes in Accreting Black Holes, ASP Conf. Ser. 161, 249
Granot, J., Panaitescu, A., Kumar, P., & Woosley, S. E. 2002, ApJ, 570, L61
Heinz, S., & Sunyaev, R. A. 2003, MNRAS, 343, L59
Hjellming, R. M., & Rupen, M. P. 1995, Nature, 375, 464
Huppenkothen, D., Younes, G., Ingram, A., et al. 2017, ApJ, 834, 90
Jermak, H., Steele, I. A., Lindfors, E., et al. 2016, MNRAS, 462, 4267
Jorstad, S. G., Marscher, A. P., Smith, P. S., et al. 2013, ApJ, 773, 147
Lewin, W. H. G., & van der Klis, M. 2006, Compact Stellar X-ray Sources (Cambridge: Cambridge Univ. Press)
Ling, Z., Zhang, S. N., & Tang, S. 2009, ApJ, 695, 1111
Lister, M. L., Cohen, M. H., Homan, D. C., et al. 2009, AJ, 138, 1874
Longair, M. S. 1994, High Energy Astrophysics, Vol. 2 (Cambridge: Cambridge Univ. Press)
Loh, A., Corbel, S., Dubus, G., et al. 2016, MNRAS, 462, L111
Ma, R., Xie, F.-G., & Hou, S. 2014, ApJ, 780, L14
MacDonald, R. K. D., Bailyn, C. D., Buxton, M., et al. 2014, ApJ, 784, 2
Marscher, A. P., Jorstad, S. G., Gómez, J.-L., et al. 2002, Nature, 417, 625
Marscher, A. P. 2006, in Relativistic Jets: The Common Physics of AGN, Microquasars, and Gamma-Ray Bursts, AIP Conf. Proc. 856, 1
McClintock, J. E., Remillard, R. A., Rupen, M. P., et al. 2009, ApJ, 698, 1398
Merloni, A., Heinz, S., & di Matteo, T. 2003, MNRAS, 345, 1057
Mészáros, P. 2002, ARA&A, 40, 137
Miller-Jones, J. C. A., Fender, R. P., & Nakar, E. 2006, MNRAS, 367, 1432
Mirabel, I. F., Dhawan, V., Chaty, S., et al. 1998, A&A, 330, L9
Mirabel, I. F., & Rodríguez, L. F. 1999, ARA&A, 37, 409
Mundell, C. G., Kopač, D., Arnold, D. M., et al. 2013, Nature, 504, 119
Muñoz-Darias, T., Casares, J., & Martínez-Pais, I. G. 2008, MNRAS, 385, 2205
Nemmen, R. S., Georganopoulos, M., Guiriec, S., et al. 2012, Science, 338, 1445
Orosz, J. A., Steiner, J. F., McClintock, J. E., et al. 2011, ApJ, 730, 75
Orosz, J. A., McClintock, J. E., Aufdenberg, J. P., et al. 2011, ApJ, 742, 84
Pepe, C., Vila, G. S., & Romero, G. E. 2015, A&A, 584, A95
Piano, G., Munar-Adrover, P., Verrecchia, F., Tavani, M., & Trushkin, S. A. 2017, arXiv:1703.10085
Piran, T. 2004, Rev. Mod. Phys., 76, 1143
Reid, M. J., McClintock, J. E., Narayan, R., et al. 2011, ApJ, 742, 83
Reid, M. J., McClintock, J. E., Steiner, J. F., et al. 2014, ApJ, 796, 2
Rodriguez, J., Corbel, S., & Tomsick, J. A. 2003, ApJ, 595, 1032
Rodriguez, J., Shaw, S. E., Hannikainen, D. C., et al. 2008, ApJ, 675, 1449
Rodriguez, J., Grinberg, V., Laurent, P., et al. 2015, ApJ, 807, 17
Rybicki, G. B., & Lightman, A. P. 1986, Radiative Processes in Astrophysics (New York: Wiley)
Saikia, P., Körding, E., & Falcke, H. 2016, MNRAS, 461, 297
Sari, R., Piran, T., & Halpern, J. P. 1999, ApJ, 519, L17
Shaposhnikov, N., & Titarchuk, L. 2009, ApJ, 699, 453
Sironi, L., & Spitkovsky, A. 2011, ApJ, 726, 75
Steele, I. A., Mundell, C. G., Smith, R. J., Kobayashi, S., & Guidorzi, C. 2009, Nature, 462, 767
Szostek, A., Zdziarski, A. A., & McCollough, M. L. 2008, MNRAS, 388, 1001
Tanaka, Y. T., Itoh, R., Uemura, M., et al. 2016, ApJ, 823, 35
Tavani, M., Bulgarelli, A., Piano, G., et al. 2009, Nature, 462, 620
Tetarenko, B. E., Sivakoff, G. R., Heinke, C. O., & Gladstone, J. C. 2016, ApJS, 222, 15
Türler, M., Courvoisier, T. J.-L., Chaty, S., & Fuchs, Y. 2004, A&A, 415, L35
Urry, C. M., & Padovani, P. 1995, PASP, 107, 803
Vilhu, O., & Hannikainen, D. C. 2013, A&A, 550, A48
Wang, F. Y., & Dai, Z. G. 2017, MNRAS, 470, 1101
Yuan, F., & Zhang, B. 2012, ApJ, 757, 56
Zanin, R., Fernández-Barral, A., de Oña Wilhelmi, E., et al. 2016, A&A, 596, A55
Zdziarski, A. A., Pjanka, P., Sikora, M., & Stawarz, Ł. 2014, MNRAS, 442, 3243
Zdziarski, A. A. 2014, MNRAS, 445, 1321
Zdziarski, A. A., Malyshev, D., Chernyakova, M., & Pooley, G. G. 2016, arXiv:1607.05059
Cortexica Vision Systems Limited, Capital Tower – 91 Waterloo Road, London SE1 8RT, United Kingdom
Dept. of Engineering, University of Cambridge
Dept. of Bio-engineering, Imperial College London
Corresponding author: b.sengupta@cortexica.com

In this paper we detail Cortexica's (<https://www.cortexica.com/>) recommendation framework – particularly, we describe how a hybrid visual recommender system can be created by combining conditional random fields for segmentation and deep neural networks for object localisation and feature representation. The recommendation system that is built after localisation, segmentation and classification has two properties: first, it is knowledge-based in the sense that it learns a pairwise preference/occurrence matrix by utilising knowledge from experts (images from fashion blogs); second, it is content-based as it utilises a deep learning based framework for learning feature representations. Such a construct is especially useful when there is a scarcity of user preference data, which forms the foundation of many collaborative recommendation algorithms.

Algorithmic clothing: hybrid recommendation, from street-style-to-shop
Y Qian, P Giaccone, M Sasdelli, E Vasquez, B Sengupta
======================================================================

§ INTRODUCTION
Algorithmic clothing frameworks have been routinely operationalized using recommender systems that utilise deep convolutional networks for feature representation and prediction <cit.>. Such content-based image retrieval amounts to recommending items to users who might have similar styles (contemporary vs. retro, etc.), like similar patterns (stripes vs. polka dots, etc.) or colours. Oftentimes, frameworks rely on recommending not a single item but a pair (or triplet, etc.) of products. For the fashion vertical, this would mean recommending, for example, which trousers to wear with a given shirt. Yet service providers often do not have access to a consumer's behaviour – be it click-through rate, prior purchases, etc. This means the recommender system generally has a cold-start problem, resulting in incorrect recommendations. This problem is intrinsic to many collaborative recommendation algorithms, which are built with the a priori assumption of the availability of a user preference database. In this paper, we describe a scalable method to recommend fashion inventory (tops, trousers, etc.)
wherein user preferences are learnt from experts (oracles), which we accumulate using images from fashion blogs. We call these `street-style images'. Utilising deep neural networks enables us to parse such images into a high-dimensional feature representation that allows us to recommend a pair from a high-dimensional user preference tensor, constrained to items that exist in a retailer's database. Whilst the database presented in this paper has a 2-D form, utilising a deep neural network enables us to learn an n-D tensor. This helps in recommending multiple items that have a structure – for example, to predict not only whether a trouser fits well with a shirt but also what allied accessories like belts, cuffs, etc. can be worn.

In Section <ref>, we detail the technical infrastructure that allows us to produce recommendations starting from a single image. We start by (a) segmenting and localising the object of interest and (b) designing a knowledge base that constrains the recommendations derived from street-style images to only those commodities that are available in a retailer's database.

§ METHODS
Recommending an item that is suitable to wear with another item involves the following steps:

* Localisation/segmentation of garments from street-style images (oracle)
* Generation of associations between pairs of garments, i.e., determining which items in each image are being worn by the same person
* Construction of a joint distribution (co-occurrence matrix) based on either visual features from street-style images or items from a vendor's inventory
* Production of recommendations using (a) colour, (b) pattern or (c) a street-style oracle under a content-based retrieval framework

In the next sections, we describe the steps comprising the recommendation engine (Figure <ref>), starting with a description of deep convolutional neural networks (dCNNs). A VGG-16 model <cit.> pre-trained on the ImageNet dataset is used as a base model to fine-tune our fashion segmentation, localisation and pattern classification models.

§.§ Segmentation using a dCNN
Images are segmented using a dCNN framework so that they can be partitioned into groups of unique objects. Specifically, in order to remove background effects in the dominant colour generation, each clothing item in street-style images is segmented by using an FCN (Fully Convolutional Network) followed by a CRF (Conditional Random Field), as shown in Figure <ref>. Two methods – FCN <cit.> and DeepLab <cit.> – are evaluated on 10K street-style images in which individual items are annotated with a mask (Section <ref>). Both methods first convert the pre-trained dCNN classifier by replacing the last fully connected layers with fully convolutional layers; this produces coarse output maps. For pixel-wise prediction, upsampling and concatenation of the scores from intermediate feature maps are applied to connect these coarse outputs back to the pixels. DeepLab speeds up segmentation of dCNN feature maps by using the `atrous' (with holes) algorithm <cit.>. Instead of deconvolutional layers, the atrous algorithm is applied in the layers that follow the last two max-pooling layers. This is done by introducing zeros into the convolutional filters to increase their length. This controls the field-of-view (FOV) of the models by adjusting the input stride, without increasing the number of parameters or the amount of computation. Additionally, atrous spatial pyramid pooling (ASPP) is employed in DeepLab to encode objects as well as image context at multiple scales.
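To make the effect of the input stride concrete, the following minimal sketch (ours, using PyTorch purely for illustration; it is not tied to the implementation evaluated in this paper) compares a standard and a dilated 3x3 convolution:

import torch
import torch.nn as nn

x = torch.randn(1, 64, 32, 32)                              # a dummy feature map
conv_r1 = nn.Conv2d(64, 64, kernel_size=3, padding=1, dilation=1)
conv_r4 = nn.Conv2d(64, 64, kernel_size=3, padding=4, dilation=4)

# Both layers have exactly the same number of parameters ...
assert sum(p.numel() for p in conv_r1.parameters()) == \
       sum(p.numel() for p in conv_r4.parameters())
# ... and both preserve the spatial resolution of the feature map,
assert conv_r1(x).shape == conv_r4(x).shape == x.shape
# but the dilated filter covers a 9x9 field of view instead of 3x3:
# effective kernel size = k + (k - 1)(dilation - 1) = 3 + 2*3 = 9.

The larger dilation rate is exactly the `input stride' referred to above: zeros are inserted between filter taps, widening the FOV at no extra parameter cost.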
On top of the dCNN-based pixel-wise prediction (blob-like coarse segmentation), a fully-connected pairwise Conditional Random Field (CRF) <cit.> is applied to model the fine edge details. This is done by coupling neighbouring nodes so as to assign the same label to spatially proximal pixels.

§.§ Localisation using a dCNN
Due to the complicated backgrounds in street-style images, object detection is applied to localise the clothing items against the cluttered background. The detected garments are used as queries to find similar items from same-class inventory images; for details on the deep learning architecture refer to <cit.>, for feature encoding refer to <cit.> and for similarity measurements please see <cit.>. Additionally, detected items are classified according to different texture patterns. With the associated bounding boxes, the co-occurrence matrix of patterns is generated from street-style images.

Three state-of-the-art neural networks for object detection are evaluated by training on 45K street-style images (Section <ref>). Faster R-CNN (Faster Region-based Convolutional Neural Network) <cit.> is the first quasi-real-time network. Its architecture is based on a Region Proposal Network (RPN) for quick region proposals and a classifier network for assessing the proposals. A non-maximum suppression (NMS) algorithm suppresses the boxes that are redundant. These steps are provided by the same base network, saving computational time. SSD (Single Shot Multi-Box Detector) <cit.> is the best network for optimising speed, at the cost of a small drop in accuracy. Its structure is equivalent to a number of class-specific RPNs working at different scales; the results are then combined by NMS. R-FCN (Region-based Fully Convolutional Networks) <cit.> is yet another improvement of Faster R-CNN, with a structure based on an RPN and a classifier. It has a reduced overhead due to the smaller fully connected classifier, which classifies the different regions of the proposal independently.

§.§ Recommendation architecture
In the next sections, we describe three different methods for recommending items to users based on visual content such as the dominant colour, texture pattern, etc.

§.§.§ Recommendation by colour
A common attribute that encompasses consumer behaviour is to recommend garments based on colour. We operationalize such a scheme by (a) segmenting clothing items from street-style images, (b) extracting the dominant colour from segmented items using density estimation, (c) finding the associations of segmented items in the street-style dataset and (d) building a joint distribution of associations based on the co-occurrence of dominant colours in street-style images. First of all, a colour map is generated by using k-means on the CIELab <cit.> values of the segmented pixels in each individual garment category. Each category has its own colour map; these maps are then used to index the co-occurrence matrix. When a query is submitted, the dominant colour is extracted from the segmented garment and a search is initiated on the corresponding co-occurrence matrix to find the colour that best matches the query garment. Finally, we recommend inventory images from the corresponding category that have the matching colour to go with the query image. Figure <ref> shows the framework for dominant colour extraction and co-occurrence matrix generation from the street-style image dataset.
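As an illustration of this pipeline, the sketch below (our own condensation of the framework; the 130-bin choice mirrors the description in the results section, while the function and variable names are hypothetical) fits a per-category CIELab colour map with k-means and accumulates a dominant-colour co-occurrence matrix:

import numpy as np
from skimage.color import rgb2lab
from sklearn.cluster import KMeans

def fit_colour_map(pixel_arrays, n_bins=130):
    # pixel_arrays: list of (N_i, 3) RGB arrays in [0, 1], one per segmented garment
    lab = np.vstack([rgb2lab(p.reshape(-1, 1, 3)).reshape(-1, 3) for p in pixel_arrays])
    return KMeans(n_clusters=n_bins, n_init=10).fit(lab)

def dominant_bin(pixels, cmap):
    # index of the most frequent colour bin among a garment's segmented pixels
    lab = rgb2lab(pixels.reshape(-1, 1, 3)).reshape(-1, 3)
    return np.bincount(cmap.predict(lab), minlength=cmap.n_clusters).argmax()

cooc = np.zeros((130, 130), dtype=int)        # e.g. tops/blouses vs. skirts
# for top_px, skirt_px in associated_pairs:   # pairs worn by the same person
#     cooc[dominant_bin(top_px, top_map), dominant_bin(skirt_px, skirt_map)] += 1
# At query time, the best matching skirt colour bin is cooc[query_bin].argmax()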
Additionally, when the dominant colour is extracted from the query image, we use a knowledge-based recommendation engine wherein a colour wheel is also utilised to recommend items with specific attributes – for example, complementary colour, triadic colour, etc. <cit.>.

§.§.§ Recommendation by pattern
A similar framework for pattern recommendation, shown in Figure <ref>, is used to make a recommendation based on the pattern that is intrinsic to an object. Again, we recommend items that have a similar pattern by (a) detecting garments in street-style images, (b) classifying cropped garments into one of the texture patterns and (c) searching the corresponding co-occurrence matrix of texture patterns derived from street-style images.

§.§.§ Recommendation via content-based retrieval
In Figure <ref>, a content-based recommendation system is operationalised as follows: (a) locate and crop garments in street-style images, (b) associate top and bottom garments worn by the same person, (c) generate a table from each associated top–bottom pair of the inventory dataset. Specifically, we initiate a query on the cropped top garment from a street-style image against inventory images of the same category (e.g., query a street-style blouse against inventory blouses); similarly, we run a query on the bottom garment. For details on the specific architecture used for retrieval please see <cit.>. A joint table can be constructed by adding the scores of all possible combinations of the top 5 retrieval results for the top garment (blouse) with the top 5 retrieval results for the bottom garment (i.e., trousers). Such a table tells us how fashionable a given garment combination is. Given an image of a blouse, for example, a skirt may then be recommended by using retrieval on the query image and then looking up in the table which of the skirts gather higher frequencies when combined with the blouse.
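The joint table described above can be sketched as follows (ours; retrieve() is a hypothetical stand-in for the visual search engine cited in the text):

from collections import defaultdict

def update_joint_table(table, top_crop, bottom_crop, retrieve, k=5):
    # accumulate scores over all combinations of the top-k retrieval results
    tops = retrieve(top_crop, category="top", k=k)          # [(inventory_id, score)]
    bottoms = retrieve(bottom_crop, category="bottom", k=k)
    for top_id, s_top in tops:
        for bottom_id, s_bottom in bottoms:
            table[(top_id, bottom_id)] += s_top + s_bottom
    return table

def recommend_bottoms(table, query_top_id, n=3):
    # highest-scoring bottoms for a given inventory top
    pairs = [(b, s) for (t, b), s in table.items() if t == query_top_id]
    return sorted(pairs, key=lambda p: p[1], reverse=True)[:n]

joint_table = defaultdict(float)
# for top_crop, bottom_crop in street_style_pairs:
#     update_joint_table(joint_table, top_crop, bottom_crop, retrieve)

Accumulated over the whole street-style corpus, the table score acts as a proxy for how fashionable a given garment combination is.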
§ DATASETS AND RESULTS
§.§ Fashion Datasets
In order to recommend inventory images based on fashion trends (street-to-shop), we generate two fashion datasets, i.e., a street-style image dataset (no. 1) and an inventory image dataset (no. 2; Figure <ref>). Dataset no. 1 has 280K street-style images that were downloaded from the latest fashion blogs. Out of these, 70K street-style images were used to build co-occurrence matrices of fashion inventories. Dataset no. 2 has 100K inventory images that come from various fashion retailers. The images are categorised according to 5 classes, i.e., Coat/Jackets, Dresses, Skirts, Top/Blouses and Trousers. Inventory images are recommended to users based on colour, pattern and visual similarity by querying the co-occurrence matrices generated from street-style images. As seen in Figure <ref>, most street-style images contain cluttered backgrounds: a person with a large pose variation, multiple persons who overlap or stand side by side, etc. Due to such backgrounds, object detection and segmentation are applied to street-style images to localise the requisite fashion items. Most inventory images show a single item with a plain background; localisation is also required here to mask out the model's legs and head.

§.§.§ Dataset for segmentation
10K street-style images are generated and manually segmented by using the GrabCut algorithm <cit.>. The dataset is split into 7K images for training, 1.5K for validation and 1.5K for testing. Table <ref> shows the split for the training, testing and validation datasets with the number of masks for each fashion item.

§.§.§ Dataset for localisation
45K street-style images are generated and annotated manually by drawing a bounding box around each fashion item. The data is split into 36K images for training, 4.5K for validation and 4.5K for testing. The split for the training, testing and validation datasets with bounding boxes (BBs) for each item is shown in Table <ref>.

§.§.§ Dataset for texture classification
Combining some categories in DTD <cit.>, texture tags in the Deep Fashion dataset <cit.> and some fashion blogs, the 10 most popular pattern categories for fashion inventory are selected. 14K single fashion item images (11K for training and 3K for testing) from a client are sourced for training a neural network model to classify the pattern behind each texture; some examples for the 10 categories are shown in Figure <ref>.

§.§ Results
§.§.§ Results for segmentation
Two methods for segmenting objects (Section <ref>) are evaluated on street-style images. Both models are initialised with a VGG-16 model pre-trained on ImageNet, trained on the 8.5K (train+validation) segmentation dataset and evaluated on the 1.5K testing dataset. Table <ref> shows the Intersection over Union (IoU = True Positive/(True Positive + False Positive + False Negative)) and Pixel Accuracy (PA = True Positive/(True Positive + False Negative)) for each fashion class, together with the mean value over classes, for both models. DeepLab with multiple scales and a large FOV, along with a CRF, achieves the best performance of 59.66% mean IoU and 73.99% mean PA. It is also evident that combining a CRF with the FCN increases the mean IoU by 4% and the PA by 2%. We have also trained the model by initialising with a VGG-16 trained on our fashion classification dataset <cit.>; here, the mean IoU drops by 4%. Figure <ref> shows segmentation results on street-style images using the DeepLab-MultipleScales-LargeFOV combination with a CRF. Average test time to segment an image: FCN = 115ms and CRF = 638ms on an NVIDIA Titan X GPU with 12GB memory.
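For reference, the IoU and PA metrics quoted above can be computed per class from integer label masks as in the following standard sketch (ours):

import numpy as np

def iou_and_pa(pred, truth, n_classes):
    # IoU = TP/(TP+FP+FN) and PA = TP/(TP+FN), averaged over classes
    ious, pas = [], []
    for c in range(n_classes):
        tp = np.sum((pred == c) & (truth == c))
        fp = np.sum((pred == c) & (truth != c))
        fn = np.sum((pred != c) & (truth == c))
        ious.append(tp / (tp + fp + fn) if (tp + fp + fn) else np.nan)
        pas.append(tp / (tp + fn) if (tp + fn) else np.nan)
    return np.nanmean(ious), np.nanmean(pas)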
§.§.§ Results for localisation
In order to evaluate the performance of the three networks (Section <ref>) on localisation, three models are trained on the 40.5K (train+validation) street-style localisation dataset and tested on the 4.5K dataset. We used the default parameters chosen in the original papers. Table <ref> shows the Average Precision (AP) calculated for each object class and the mean Average Precision (mAP) for the three models. Bounding boxes are considered only if the IoU is larger than 50%. Average testing time for an image is evaluated on an NVIDIA Quadro M6000 GPU with 24GB memory. Table <ref> shows that R-FCN has an edge over the other models that were evaluated. SSD is particularly suitable when speed is the main concern. Figure <ref> shows R-FCN detection results on street-style images.

§.§.§ Results for texture classification
For the pattern recommendation system, the cropped garments are classified into 10 texture patterns. For this, a pattern classifier is trained by fine-tuning a pre-trained VGG-16. We use 11K images for training and 3K images for testing (Section <ref>). Results are listed in Table <ref>.

§.§.§ Results for garment association
After detecting garments, a person detector <cit.> is applied to constrain the cropped garments to those being worn by the same person. In total, 6 associations between pairs of garments in 70K street-style images are generated and listed in Table <ref>. The numbers indicate how many people wear the corresponding garments in the 70K street-style images.

§.§.§ Recommendation using colour
After garment association, 6 co-occurrence matrices of dominant colour are generated from 70K street-style images. In our system, a colour map with 130 bins for each category is created by using k-means on all segmented pixels of the corresponding garment in street-style images. When a query image is submitted, the dominant colour is extracted from the segmented item and a search of the corresponding co-occurrence matrix is initiated to find the colour that best matches the query item. For example, in Figure <ref>(1) the first row shows the query image and the best matching colour obtained from the tops/blouses–skirts colour co-occurrence matrix. The second row shows the skirts recommended according to the recommended colour from an inventory database; some reference examples with the same matching colour from the street-style dataset are displayed in the third row. In Figure <ref>(2-4) we show some examples of colour recommendation based on different aspects of the colour wheel. Figure <ref>(2) shows complementary-colour trousers for a query top. Figure <ref>(3) shows one of the triadic-coloured skirts for a yellow top. Figure <ref>(4) shows triadic-coloured skirts and tops/blouses for a yellow coat.

§.§.§ Recommendation using pattern
For pattern recommendation, when a query image is submitted, the garments are cropped from the image and classified into one of the ten texture patterns. We then search the corresponding 10 x 10 pattern co-occurrence matrix to find the best matching pattern with respect to the query item. This then allows us to recommend items with a matching pattern from the inventory dataset. Two examples are shown in Figure <ref>: (1) shows a top/blouse that forms the query; plain-coloured trousers are recommended to take into account the attributes of the query pattern. Figure <ref>(2) shows the query, i.e., a top/blouse with a dotted pattern; the system recommends that a plain-coloured skirt is worn with such a top. The third row in each figure shows some reference images from the street-style dataset with the same matching pattern.

§.§.§ Recommendation using content-based retrieval
Given a query image, we run a query on the image against inventory images of the same "top" category <cit.>, pick some of the higher-ranking "top" garments, and use the look-up table to find and recommend the most frequent "bottom" garments for each "top" garment to the user. Two examples are shown in Figure <ref>: (1) top/blouses with trousers and (2) coat/jackets with skirts.

§ DISCUSSION
This paper has detailed an end-to-end, commercially deployable system – starting from image segmentation and localisation through to recommending a dyad of clothing accessories. The knowledge representation is learnt by crawling fashion blogs (street-style oracles) for images that are prescriptive of the variety of styles preferred by consumers. Deep neural networks complement this knowledge by learning a latent feature representation which then enables dyadic recommendations. We propose two other, simpler recommendation schemes: utilising the colour wheel to prescribe dyads of colours, and using deep feature vectors to recommend clothing accessories based on the texture of the fabric. The framework is scalable and has been deployed on cloud-service providers.
Our work adds to the burgeoning vertical of algorithmic clothing <cit.>, which uses discriminative and probabilistic models to advise consumers on how to finesse their dressing style. For example, <cit.> learnt `urban tribes' by learning which groups of people are more likely to socialise with one another and therefore may have similar dressing styles. Classifying styles of clothing has been the focus of <cit.>'s work, where the authors use a random forest classifier to distinguish a variety of dressing styles. Similarly, <cit.> use neural networks to measure the visual compatibility of different clothing items by analysing co-occurrences of base-level elements between images of compatible objects. The combination of street-style oracles with a deep learning based feature representation framework is similar to the work by <cit.>, wherein a Siamese convolutional neural network is used to learn the compatibility of a variety of clothing items. There is a stark dissimilarity though – <cit.> used <Amazon.com>'s co-purchase dataset, which is instantiated on the assumption that two items purchased together are worn together. This may not always be true – therefore, the present work bases its recommendations on current trends in fashion (well represented by fashion blogs). The framework is flexible in that `clothing trends' can be updated to keep up with seasonality <cit.> or dissected into hierarchical models suited to the demographics or age range of the clientele. With the availability of GPUs, such a framework becomes highly scalable.

There are a few challenges that can imperil the formation of a joint occurrence matrix. The first is the sparsity of the matrix involved – this is caused by an inadequate number of street-style images with a specific combination of two clothing items. An easy way to alleviate such an issue is to use generative models <cit.>. A neural network can also be utilised as a function approximator (see below) such that the learnt features can be encoded <cit.> to reveal dependencies between inventory items. A second challenge is that our framework would recommend similar items to those previously suggested to the user. Whilst such a problem is severe for collaborative filtering approaches, our hybrid recommender system alleviates just a part of it, especially if we relax the assumption that the co-occurrence matrix has a stable probability distribution. Thus, a vital strand of our current research lies in personalisation – how can we alter the recommendations such that they take into account not only our shopping behaviour but also the granularity of our `personal' taste? One way to formulate this feature-based exploration/exploitation problem is to frame it as a contextual bandit problem <cit.>. Put simply, such an algorithm sequentially selects the dyad recommendation based on the interaction of the consumer with the recommendation system. The present work focuses on a recommendation dyad, i.e., a trouser to go with a shirt; nevertheless, the framework is equipped to make recommendations over a much larger combination of co-occurrences. As before, the next step forward would be to replace the joint-occurrence matrix with a neural network so that a non-linear function over multiple items can be learnt. This would be necessary for next-generation algorithms that can recommend an entire wardrobe rather than dyads or triads of clothing items.

§ ACKNOWLEDGMENT
This work was supported by two Technology Strategy Board (TSB) UK grants (Ref: 720499 and Ref: 720695).
Discrete statistical models supported on labelled event trees can be specified using so-called interpolating polynomials which are generalizations of generating functions. These admit a nested representation. A new algorithm exploits the primary decomposition of monomial ideals associated with an interpolating polynomial to quickly compute all nested representations of that polynomial. It hereby determines an important subclass of all trees representing the same statistical model. To illustrate this method we analyze the full polynomial equivalence class of a staged tree representing the best fitting model inferred from a real-world dataset.

Keywords: Graphical Models; Staged Tree Models; Computer Algebra; Ideal Decomposition; Algebraic Statistics.

§ INTRODUCTION
Families of finite and discrete multivariate models have been extensively studied, including many different classes of graphical models <cit.>. Because these families of probability distributions can often be expressed as polynomials – or collections of vectors of polynomials – this has spawned a deep study of their algebraic properties <cit.>. These can then be further exploited using the discipline of computational commutative algebra and computer algebra software such as <cit.>, which has proved to be a powerful though somewhat neglected tool of analysis. In this paper, we demonstrate how certain computer algebra techniques – especially the primary decomposition of ideals – can be routinely applied to the study of various finite discrete models. Throughout we pay particular attention to an important class of graphical models based on probability trees and called staged trees or chain event graph models <cit.>. These contain the familiar class of discrete (and context-specific) Bayesian networks as a special case. In particular, <cit.> gave a mathematical way of determining the statistical equivalence classes of staged tree models but did not give algorithms to actually find these. Here we use computer algebra in a novel way to systematically find a staged tree representation of a given family – if it indeed exists – and to uncover statistically equivalent staged trees in an elegant, systematic and useful way. This is an extension of the techniques developed by <cit.> and others to determine Markov-equivalence classes of Bayesian networks where, instead of algebra, graph theory was used as a main tool.
So our methodology supports a new, algebraic analysis of a very general but fairly recent class of statistical models, and serves as an illustration of how computer algebra can more generally be a useful tool not only for the study of conventional classes of graphical models but for other families of statistical models as well.

§ STAGED TREES AND INTERPOLATING POLYNOMIALS
§.§ Labeled event trees and staged trees
In this work we will exclusively consider graphs which are trees, so those which are connected and without cycles. We first review the theory of staged trees which represent interesting and very general discrete models in statistics <cit.>.

Let T be a finite directed rooted tree with vertex set V and edge set E ⊆ V×V. We denote the root vertex of T by v_0. The tree T is called an event tree if every vertex v ∈ V has either no, two or more than two emanating edges. For v ∈ V, let E(v) = {(v,w) | w ∈ V} ∩ E denote the set of edges emanating from v. The pair (v, E(v)) is called a floret. Let Θ be a non-empty set of symbols/labels and let θ: E → Θ be a function such that for any floret (v, E(v)) the labels in θ(E(v)) are all distinct. We call θ(E(v)) the floret labels of v and denote this set by θ_v. The pair 𝒯 = (T, θ) of graph and function is called a labeled event tree. When θ takes values in (0,1) and ∑_e∈E(v) θ(e) = 1 for every non-leaf vertex v, 𝒯 is called a probability tree. [We should say more precisely: when the symbols θ(e) are evaluated in (0,1) for all e ∈ E.] For v ∈ V, the labeled subtree rooted in v is 𝒯_v = (T', θ'), where T' is the largest subtree of T rooted in v, and θ' is the restriction of θ to the edges in T'. For any leaf v ∈ V, so for any vertex with no emanating edges, we trivially have that E(v) = ∅, and hence θ_v = ∅.

Labeled event trees are well-known objects in probability theory and decision theory where they are used to depict discrete unfoldings of events. The labels on the edges of a probability tree then correspond to transition probabilities from one vertex to the next, and all edge probabilities belonging to the same floret sum to unity. See <cit.> for the use of probability trees in probability theory and causal inference, and see for instance <cit.> for how such a tree representation can be used in computational statistics. In this paper, we generally do not require the labels on a labeled event tree to be probabilities.

A labeled event tree 𝒯 = (T, θ) is called a staged tree if for every pair of vertices v, w ∈ V their floret labels are either equal or disjoint, θ_v = θ_w or θ_v ∩ θ_w = ∅. A stage is a set of vertices with the same floret labels. In illustrations of staged trees, all vertices in the same stage are usually assigned a common colour: compare <ref>.

Staged trees were first defined as an intermediate step in building chain event graphs as graphical representations for certain discrete statistical models <cit.>. Every chain event graph is uniquely associated to a staged tree and vice versa. In this way, the graphical redundancy of staged trees can be avoided, and elegant conjugate analyses can be applied to staged tree models <cit.>. In particular, every discrete and context-specific Bayesian network can alternatively be represented by a staged tree where stages indicate equalities of conditional probability vectors. We give examples of this later in the text. For the development in this paper it is important to observe that staged trees with labels evaluated as probabilities are always also probability trees. This is however not the case for all labeled event trees because sum-to-1 conditions imposed on florets can be contradictory.
See also Examples <ref> and <ref> below.

[Saturated trees] A saturated tree is a labeled event tree where all edges have distinct labels. So this is a staged tree in which all floret labels are disjoint, or alternatively in which every stage contains exactly one vertex. In the development below, saturated trees are graphical representations of saturated statistical models.

<Ref> shows a staged tree where all blue-coloured vertices are in the same stage. <Ref> depicts a staged tree where the two green vertices are in the same stage. <Ref> shows a labeled event tree which is not staged because the floret labels of the two black vertices are neither equal nor disjoint.

§.§ Network polynomials and interpolating polynomials
We next define a polynomial associated to a labeled event tree which is the key tool used in this paper: see also <cit.>.

Let 𝒯 = (T, θ) be a labeled event tree and let Λ(𝒯) denote the set of root-to-leaf paths in 𝒯. For λ ∈ Λ(𝒯) let E(λ) be the set of edges of λ. We call the products of the labels along a root-to-leaf path, π_θ(λ) = ∏_e∈E(λ) θ(e), atomic monomials. Given a real-valued function g: Λ(𝒯) → ℝ, we define the network polynomial of 𝒯 and g as the linear combination of the atomic monomials with coefficients given by g:

c_g,𝒯 = ∑_λ∈Λ(𝒯) g(λ) · π_θ(λ),

with the particular case c_g,𝒯 = 1 if 𝒯 has no edges. The interpolating polynomial is the network polynomial with all coefficients g(λ) = 1 equal to one, and we write c_𝒯 = c_1,𝒯.

A network polynomial c_g,𝒯 is a polynomial in the ring ℝ[Θ] of polynomials with real coefficients whose indeterminates are the labels in Θ. An interpolating polynomial has positive integer coefficients by construction; for these we write c_𝒯 ∈ ℤ[Θ]. When 𝒯 = (T, θ) is a probability tree, every atomic monomial π_θ(λ) is the product of transition probabilities along a root-to-leaf path and thus the probability of an atomic event (or atom). Often the function g is an indicator function g = 1_A of an event A ⊆ Λ(𝒯). In this case, <ref> is a polynomial representation of the finite-additivity property of probabilities for A, so c_1_A,𝒯 = ∑_λ∈A π_θ(λ).

Interpolating polynomials have been used successfully to classify equivalence classes of staged trees which make the same distributional assumptions <cit.>, as outlined in <ref> below. They have further been used as a tool for calculating marginal and conditional probabilities in Bayesian networks and staged trees, using differentiation operations <cit.>.

In <ref> and <ref> we now present, for the purposes of this paper, two central results on interpolating polynomials. These results are given here in a reformulated, recursive form, very different from their original development <cit.>. This refinement is necessary because the new proofs we give are constructive and, most importantly, transparently illustrate the mechanisms needed for our later algorithmic implementation.

Let 𝒯 = (T, θ) be an event tree and for v ∈ V define

c_v = 1 if E(v) = ∅,  and  c_v = ∑_(v,w)∈E(v) θ(v,w) · c_w otherwise.

Then the interpolating polynomial of 𝒯 is equal to c_v_0, where v_0 is the root of 𝒯.

We prove the claim by induction on the depth of the tree, i.e. the number of edges in a longest root-to-leaf path. If 𝒯 has depth 0 then E(v_0) = ∅ and c_𝒯 = 1 = c_v_0. If 𝒯 has depth ≥ 1 then c_v_0 = ∑_(v_0,w)∈E(v_0) θ(v_0,w) · c_w. Furthermore,

c_𝒯 = ∑_λ∈Λ(𝒯) π_θ(λ) = ∑_(v_0,w)∈E(v_0) θ(v_0,w) · ∑_λ'∈Λ(𝒯_w) π_θ(λ') = ∑_(v_0,w)∈E(v_0) θ(v_0,w) · c_𝒯_w,

and c_w = c_𝒯_w by the inductive hypothesis because the subtrees 𝒯_w all have lower depth than 𝒯.
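The recursion above translates directly into code. In the following sketch (ours, in Python with sympy; it is not part of the original development) a labeled event tree is encoded as a nested list of (edge label, subtree) pairs, a leaf being the empty list:

import sympy as sp

def interpolating_poly(tree):
    if not tree:                                   # leaf: E(v) is empty, c_v = 1
        return sp.Integer(1)
    return sp.Add(*[sp.Symbol(lab) * interpolating_poly(sub) for lab, sub in tree])

# The staged tree T of the example below, written floret by floret:
T = [("theta1", [("phi1", []), ("phi2", []), ("phi3", [])]),
     ("theta2", [("phi1", []),
                 ("phi2", [("sigma1", []), ("sigma2", []), ("sigma3", [])]),
                 ("phi3", [])])]
print(sp.expand(interpolating_poly(T)))
# theta1*phi1 + theta1*phi2 + theta1*phi3 + theta2*phi1 + ... + theta2*phi3

The same encoding of trees is reused in the algorithmic sketches further below.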
The two staged trees 𝒯 = (T, θ) and 𝒮 = (S, θ') in <ref> have the same interpolating polynomial, so the same sum of atomic monomials:

c_𝒯 = c_𝒮 = θ_1ϕ_1 + θ_1ϕ_2 + θ_1ϕ_3 + θ_2ϕ_1 + θ_2ϕ_2σ_1 + θ_2ϕ_2σ_2 + θ_2ϕ_2σ_3 + θ_2ϕ_3.

Here, the functions θ and θ' assign the same labels to different edges in the graphs T and S. Following the recursive construction in <ref>, we can then write this polynomial in terms of the interpolating polynomials of subtrees:

c_𝒯 = θ_1 · c_𝒯_1 + θ_2 · c_𝒯_2,  where c_𝒯_1 = ϕ_1+ϕ_2+ϕ_3 and c_𝒯_2 = ϕ_1+ϕ_2·(σ_1+σ_2+σ_3)+ϕ_3;

or alternatively

c_𝒮 = ϕ_1 · c_𝒮_1 + ϕ_2 · c_𝒮_2 + ϕ_3 · c_𝒮_3,  where c_𝒮_1 = c_𝒮_3 = θ_1+θ_2 and c_𝒮_2 = θ_1+θ_2·(σ_1+σ_2+σ_3).

<Ref> shows that the distributive property of multiplication over addition is at the core of our work. The following corollary will be useful for studying staged trees with square-free atomic monomials: compare also <ref> below.

Let 𝒯 = (T, θ) be a labeled event tree and let c_𝒯 be its interpolating polynomial. Then we can write

c_𝒯 = ∑_(v_0,w)∈E(v_0) θ(v_0,w) · c_𝒯_w.

Moreover, if the root labels are not repeated, i.e. θ_v_0 ∩ θ_v = ∅ for all v ∈ V∖{v_0}, then no label in θ_v_0 appears in any subtree-interpolating polynomial c_𝒯_w.

The proof is a trivial consequence of the construction of the polynomial c_v_0 in <ref> above.

Consider again the two staged trees in <ref>. Their interpolating polynomial admits two different representations as a linear combination of the type in <ref>, namely the ones in <ref> and <ref>. We can see here explicitly how the polynomials above depend on the variables in the subtrees of 𝒯 and 𝒮. In particular, both sets {θ_1,θ_2} and {ϕ_1,ϕ_2,ϕ_3} provide potential root-floret labels of a corresponding tree representation.

§.§ Polynomials with a nested representation
We know now that we can straightforwardly read an interpolating polynomial, and in particular a recursive representation of that polynomial, from a labeled event tree. In this section and in <ref> we consider the inverse problem: given a polynomial in distributed form, can we tell whether it is the interpolating polynomial of a labeled event tree? In order to answer this question, first observe that the polynomials defined below admit a special structured representation and can be used as a surrogate for a labeled event tree, as shown in <ref>.

Let f ∈ ℤ[Θ] be a polynomial with positive integer coefficients. We say that f admits a nested representation if f = 1 or if it can be written as f = ∑_x∈A x · f_x, where A ⊆ Θ is such that #A ≥ 2 and, for each x ∈ A, the polynomial f_x admits a nested representation.

The recursion in <ref> is finite because deg(f_x) ≤ deg(f) − 1, for by construction polynomials with nested representations have positive coefficients. The polynomial c_v in <ref> is written in nested representation by construction. In this sense <ref> below is the inverse result of <ref>, and a polynomial admits a nested representation if and only if it is the interpolating polynomial of a labeled event tree.

If f ∈ ℤ[Θ] admits a nested representation then there exists a labeled event tree 𝒯 such that f = c_𝒯.

We prove the claim by induction on the degree of f. If deg(f) = 0 then f = 1 and therefore f = c_𝒯 where 𝒯 is formed by a single vertex with no edges and no labels. If deg(f) > 0 then f = ∑_x∈A x · f_x and therefore by <ref> and by induction f_x = c_𝒯_x for some tree 𝒯_x labeled over Θ. For all x ∈ A let v_x be the root of 𝒯_x. Then a tree 𝒯 with interpolating polynomial f can be constructed by taking a new vertex v_0 assigned as the root of 𝒯 and defining the edges of the root floret E(v_0) to be {(v_0, v_x) | x ∈ A}. Then f = c_𝒯.

The result above implies in particular that if f is a polynomial with nested representation f = ∑_x∈A x · f_x then the root labels of a tree with interpolating polynomial f are given by A.
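Recursive rewritings like these are easy to verify mechanically; a small sympy check (ours) for the two bracketings of c_𝒯 = c_𝒮 above:

import sympy as sp
t1, t2, p1, p2, p3, s1, s2, s3 = sp.symbols("theta1 theta2 phi1 phi2 phi3 sigma1 sigma2 sigma3")

c_T = t1*(p1 + p2 + p3) + t2*(p1 + p2*(s1 + s2 + s3) + p3)
c_S = p1*(t1 + t2) + p2*(t1 + t2*(s1 + s2 + s3)) + p3*(t1 + t2)
assert sp.expand(c_T - c_S) == 0    # the same sum of atomic monomials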
The nested representations of the two event trees 𝒯 and 𝒮 in <ref> are

c_𝒯 = θ_1(ϕ_1+ϕ_2+ϕ_3) + θ_2(ϕ_1+ϕ_2(σ_1+σ_2+σ_3)+ϕ_3),
c_𝒮 = ϕ_1(θ_1+θ_2) + ϕ_2(θ_1+θ_2(σ_1+σ_2+σ_3)) + ϕ_3(θ_1+θ_2),

as in Examples <ref> and <ref>. These nestings are in one-to-one correspondence with the depicted trees, just as stated in <ref>.

Let Θ = {θ_1,θ_2,θ_3} and consider the polynomial f = θ_1θ_2 + θ_2θ_3 + 2θ_1θ_3 ∈ ℤ[Θ]. Then f has nested representation θ_1·(θ_2+θ_3) + θ_3·(θ_1+θ_2), corresponding to a labeled event tree which is not staged.

Let Θ = {θ_0, θ_1, θ_2, θ_3, ϕ_1, ϕ_2} and consider the polynomial f = θ_0 + θ_1ϕ_1 + θ_1ϕ_2 + θ_2ϕ_1 + θ_2ϕ_2 + θ_3ϕ_1 + θ_3ϕ_2. Then f admits three different nested representations:

f = θ_0·(1) + θ_1·(ϕ_1+ϕ_2) + θ_2·(ϕ_1+ϕ_2) + θ_3·(ϕ_1+ϕ_2),
  = θ_0·(1) + ϕ_1·(θ_1+θ_2+θ_3) + ϕ_2·(θ_1+θ_2+θ_3),
  = θ_0·(1) + θ_1·(ϕ_1+ϕ_2) + ϕ_1·(θ_2+θ_3) + ϕ_2·(θ_2+θ_3).

In particular, <ref> corresponds to the staged tree in <ref> and <ref> to the staged tree in <ref>. In <ref> we show that there are no other staged trees with interpolating polynomial f. The third nested representation <ref> corresponds to the labeled event tree in <ref> which is not staged.

In the above examples, a given polynomial can admit several different nested representations. By the result below, this is not always the case.

For a saturated tree 𝒯, the interpolating polynomial c_𝒯 has a unique nested representation.

Let 𝒯' be a labeled event tree, not necessarily saturated nor staged, with interpolating polynomial c_𝒯' = c_𝒯. We prove that 𝒯' = 𝒯, i.e. 𝒯' is indeed the saturated tree 𝒯. Let C = supp(c_𝒯) be the set of power-products (or monomials) in c_𝒯, and for a label x denote the set of all multiples of that label by C_x = {t ∈ C | t is a multiple of x}. Let F = {θ_1,…,θ_s} and F', respectively, be the sets of root-floret labels of 𝒯 and 𝒯', i.e. θ_v_0 in <ref> with respect to 𝒯 and 𝒯'. We first prove that F = F'. For any θ_i ∈ F the power-products in C_θ_i, corresponding to the root-to-leaf paths originating from the root edge in 𝒯 which is labeled θ_i, are not multiples of any θ_j for i ≠ j, because 𝒯 is saturated. Thus, if F' ⊊ F and θ_i ∉ F' then the power-products in C_θ_i could not correspond to root-to-leaf paths in 𝒯'. It follows that if F ≠ F' then there must be a label ϕ ∈ F' with ϕ ∉ F. Since 𝒯 is saturated, ϕ is the label of only one edge in 𝒯, and this edge is, say, in the subtree starting from the root edge labeled θ_1. In terms of the power-products, this implies that C_ϕ ⊆ C_θ_1. Hence, in 𝒯' all root-to-leaf paths originating from the root edge labeled by ϕ must have an edge labeled θ_1: see the figure below. Now consider the root-to-leaf path in 𝒯' where θ_1 appears at greatest depth, i.e. with the longest path from the root vertex. The floret containing this occurrence of θ_1 must have at least one other edge, so the paths through this other edge have θ_1 at even greater depth. But this is a contradiction. Hence F = F'. The subtrees of 𝒯 rooted in the s children of its root are again saturated trees, their interpolating polynomials are ∑_t∈C_τ t/τ for τ ∈ {θ_1,…,θ_s}, and they have disjoint sets of labels because 𝒯 is saturated. Therefore we can repeat the reasoning above on these subtrees and their interpolating polynomials. We conclude in a finite number of steps that 𝒯 = 𝒯'.

Thus, when reading an interpolating polynomial from a tree, instead of summing atomic monomials as in <ref> we can directly use the tree graph to infer a bracketed, nested representation of that polynomial. This representation is in one-to-one correspondence with the labeled graph itself, so the original representation can be easily recovered.
Similarly, once we are given any polynomial in distributed form, if this polynomial admits such a nested bracketing then we can always find a corresponding tree representation. These insights open the door to replacing graphical representations of statistical models by polynomial representations, and hence enable us to employ computer algebra in their study. We will show how this can be done in the next section.

§ POLYNOMIAL AND STATISTICAL EQUIVALENCE
Computer algebra is often used to study polynomials that arise naturally in statistical inference. For instance, context-specific Bayesian networks, staged trees and chain event graphs are all parametric statistical models whose probability mass function is of monomial form: p_θ(x) = θ^α_x = θ_1^α_x,1 ⋯ θ_d^α_x,d for every atom x in an underlying sample space, where α_x = (α_x,1,…,α_x,d) ∈ ℤ_≥0^d. The monomial θ^α_x can then be thought of as, for instance, a product of potentials <cit.> or simply a product of edge probabilities in a staged tree with root-to-leaf paths as atoms. So the network and interpolating polynomials as in <ref> can be defined for all parametric models admitting a general monomial parametrization as given above <cit.>.

We can then apply the theory above to these models and employ computer algebra techniques in their study. In particular, often very different parametrizations can give rise to the same model, and the interpolating polynomial can help to determine these.

Two staged trees 𝒯 = (T, θ) and 𝒮 = (S, θ') with the same label set Θ are called polynomially equivalent if their interpolating polynomials are equal.

Two staged trees 𝒯 = (T, θ) and 𝒮 = (S, θ') with possibly different label sets, say Θ and Ξ, are called statistically equivalent if there is a bijection Ψ: Λ(𝒯) → Λ(𝒮) which identifies their root-to-leaf paths and, for any evaluation function on Θ, namely ε_Θ: Θ → (0,1) extended to λ ∈ Λ(𝒯) as ε_Θ(λ) = ∏_e∈E(λ) ε_Θ(θ(e)), there exists an evaluation on Ξ, ε_Ξ: Ξ → (0,1), such that ε_Θ(λ) = ε_Ξ(Ψ(λ)) for all λ ∈ Λ(𝒯).

By definition, two staged trees whose labels are evaluated as probabilities are statistically equivalent if and only if they represent the same statistical model. Since the interpolating polynomials of polynomially equivalent trees are equal, they are the sum of the same atomic monomials. Therefore there is a bijection between the root-to-leaf paths of polynomially equivalent trees. This implies that polynomially equivalent trees are also statistically equivalent. For instance, the trees from Examples <ref> and <ref> are polynomially, and so statistically, equivalent. In particular, the interpolating polynomial is sufficient to determine a probability distribution up to a permutation of the values it takes across an underlying sample space.

From <ref>, the class of polynomially equivalent trees is fully described by all nested representations of the interpolating polynomial. Indeed, when reordering the terms of a nested representation as in <ref>, the atomic monomials of the underlying tree do not change. So if we are given the interpolating polynomial of a staged tree and we can find all its possible nested representations, then we have automatically found all of its polynomially equivalent tree representations – and often a large subclass of the whole statistical equivalence class. For example, in the case of decomposable Bayesian networks the equivalence class of a polynomial given in clique parametrization contains the Markov-equivalence class <cit.>.

Polynomially equivalent trees can be thought of as those having the same parametrization.
However, this parametrization is often read in a different, non-commutative way for different graphical representations in that class. For instance, the staged trees in Examples <ref> and <ref> have the same atomic monomials belonging to identified atoms, but π_θ(λ) = θ_1ϕ_1 in 𝒯 and π_θ'(λ') = ϕ_1θ_1 in 𝒮 for identified atoms λ and λ'. Analogous instances of this phenomenon occur in the class of decomposable Bayesian networks, where a model parametrization can be given by potentials on cliques which are renormalized across different graphical representations of the same model. Statistically equivalent trees, however, can be thought of as reparametrizations of each other, very much like in Bayesian networks where a parametrization can either be based on parent relations between single nodes in a graph or alternatively on clique margins. See also <ref>.

Polynomially equivalent trees can often be described by a variety of different graphs. For instance, the polynomial c = θ_0+(θ_1+θ_2+θ_3)(ϕ_1+ϕ_2) has at least three different labeled trees associated: see <ref> and <ref>. The two trees in Figs. <ref> and <ref> are polynomially equivalent representations of the same model on seven atoms. The tree in <ref> is not, because it is not a staged tree. In particular, this tree is not a probability tree because the sum-to-1 conditions imposed on its florets would be contradictory.

[Maximal representations] For any labeled event tree there exists a statistically equivalent binary labeled event tree whose graph is such that #E(v) ∈ {0,2} for all v ∈ V. This can be thought of as a maximal representation within the class of statistically equivalent trees. We can easily obtain a binary tree by splitting up each floret with strictly more than two edges, as shown in <ref>. In particular, for a floret in a probability tree labeled by θ_1, θ_2, θ_3, we would obtain new labels σ_1, σ_2, σ_3, σ_4 which are renormalizations of the original parameters such that the sum-to-1 conditions σ_1+σ_2 = 1 and σ_3+σ_4 = 1 hold, while retaining the distribution over the three depicted atoms: σ_1 = θ_1+θ_2, σ_2 = 1−σ_1 = θ_3, σ_3 = θ_1/(θ_1+θ_2) and σ_4 = θ_2/(θ_1+θ_2).

[Minimal representations] In the polynomial equivalence class of a saturated tree there is exactly one member, namely the tree itself. This is because, by <ref>, for saturated trees the nested representation of an interpolating polynomial is unique. The statistical equivalence class of a saturated tree however is much bigger. This is a consequence of <ref> above. In particular, for every saturated tree there is a unique minimal graphical representation given by a single floret whose labels are the atomic monomials (or joint probabilities) and whose number of edges coincides with the number of root-to-leaf paths in any equivalent representation.
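That the split in the maximal-representation example above is a genuine reparametrization can also be checked symbolically (a sketch, ours):

import sympy as sp
t1, t2, t3 = sp.symbols("theta1 theta2 theta3", positive=True)
s1, s2 = t1 + t2, t3                      # first binary floret; s2 = 1 - s1 when t1+t2+t3 = 1
s3, s4 = t1/(t1 + t2), t2/(t1 + t2)       # second binary floret, s3 + s4 = 1
assert sp.simplify(s1*s3 - t1) == 0       # first atom unchanged
assert sp.simplify(s1*s4 - t2) == 0       # second atom unchanged
assert sp.simplify(s2 - t3) == 0          # third atom unchanged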
In the development in this paper we mainly focus on a parametric characterization of staged tree and other statistical models. This naturally links in with an alternative implicit characterization which is well known in algebraic statistics. For instance, a polynomial representation of a Bayesian network involving exclusively the joint probabilities – i.e. the values of the associated probability mass function p(x) as x varies in the sample space – can be derived from the equalities p(x) = θ^α_x using ring operations. The algebraic theory behind this is called elimination theory <cit.>, of which Gaussian elimination for solving systems of linear equations is a simple example. The representation of a Bayesian network as such a set of polynomials is an algebraic structure called a toric ideal and has great importance in algebraic statistics: see e.g. <cit.>. Notably, this alternative characterization can also be used to describe statistical equivalence – though in a less constructive way than the method we present here, and without immediate links to a graphical representation of a model.

The labeled event tree in <ref> is a staged tree on four atoms with labels Θ = {θ_0, θ_1, θ_2, θ_3}. The equalities holding for the four atomic monomials p_1 = θ_0θ_2, p_2 = θ_0θ_3, p_3 = θ_1θ_2, p_4 = θ_1θ_3 imply the equality p_1p_4 = p_2p_3. This parametrization of the model in <ref> is not to be confused with the minimal representation of the saturated model on four atoms in <ref>. An interpretation of this equation is as follows. Assume two binary random variables X, Y ∈ {0,1} are such that Pr(Y=1, X=1) = p_1, Pr(Y=0, X=1) = p_2, Pr(Y=1, X=0) = p_3, Pr(Y=0, X=0) = p_4. Then p_1p_4 = p_2p_3 is an instance of a fundamental relationship in algebraic statistics for representing conditional independence of discrete random variables: see e.g. <cit.> and <cit.>. In this specific case the equality implies that X and Y are independent.

§ FROM POLYNOMIALS TO TREES: FINDING THE NESTED REPRESENTATIONS
§.§ Potential root-floret labels and square-free monomials
Building on the results above, we can now use methods from commutative algebra to compute all the staged trees with a given interpolating polynomial, and so to compute a complete polynomial equivalence class. The two key notions we use to build an algorithm which determines these classes are those of a monomial ideal and of its primary decomposition which, for square-free monomials, coincides with the prime decomposition. These notions are recalled in the appendix. The key to the proposed algorithm is <ref> below. This states in algebraic terms that for any tree 𝒯 = (T, θ) each monomial in c_𝒯 is divisible by some label in the set F = θ_v_0 of the floret labels belonging to the root of 𝒯, and that F is minimal (with respect to inclusion) with this property.

Let 𝒯 be a staged tree. The monomial ideal ⟨θ_v_0⟩ generated by the root-floret labels is a minimal prime of the ideal ⟨supp(c_𝒯)⟩ generated by the support of c_𝒯.

Let F = θ_v_0 = {θ_1, …, θ_s} be the set of root-floret labels. Then each power-product in c_𝒯 is a multiple of some label in F. Because it is generated by indeterminates, ⟨F⟩ is a prime ideal containing all power-products in c_𝒯. Suppose, by contradiction, that ⟨F⟩ is not minimal. Then there exists F̃ ⊊ F with ⟨F̃⟩ containing all power-products in c_𝒯. Without loss of generality let F̃ = {θ_2, …, θ_s}. Now, each root-to-leaf path starting with the root edge labeled θ_1 has an associated atomic monomial θ_1t ∈ supp(c_𝒯) ⊆ ⟨F̃⟩. Therefore θ_1t = θ_1θ_jt' for some θ_j ∈ F̃, j ≥ 2. As 𝒯 is staged, this implies that the whole root floret F must appear again in the subtree: see the illustration below.

[Figure: the root floret F = {θ_1,…,θ_s} repeated at the end of the edge labeled θ_1.]

Next consider the subtree containing the repeated root-floret labels at minimum depth and repeat the reasoning above: each root-to-leaf path containing the two edges labeled θ_1 corresponds to an atom θ_1^2t ∈ supp(c_𝒯) and is therefore a multiple of some label in F̃. Then the whole root floret is repeated again deeper in the subtree, producing some atom divisible by θ_1^3. Since this reasoning can be repeated only a finite number of times, we obtain the contradiction that there is an atomic monomial divisible by a power of θ_1 and by no label in F̃. Therefore F = {θ_1, …, θ_s} is minimal.
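For square-free monomial ideals this minimality has a purely combinatorial reading: the minimal primes are generated exactly by the minimal sets of labels that hit the support of every power-product (minimal vertex covers of the associated hypergraph). The brute-force sketch below (ours; a computer algebra system computes such decompositions directly and far more efficiently) suffices for the examples in this paper:

from itertools import combinations

def minimal_primes(C):
    # C: collection of frozensets, the supports of the power-products in c_T
    labels = sorted(set().union(*C))
    found = []
    for r in range(1, len(labels) + 1):
        for F in combinations(labels, r):
            Fs = set(F)
            covers = all(m & Fs for m in C)              # every monomial is hit
            minimal = not any(set(G) < Fs for G in found)
            if covers and minimal:
                found.append(F)
    return found

Applied to the support displayed in the example that follows, this returns the three generating sets of the primary decomposition computed there.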
The interpolating polynomial c_𝒯 in <ref> has support

supp(c_𝒯) = {θ_1ϕ_1, θ_1ϕ_2, θ_1ϕ_3, θ_2ϕ_1, θ_2ϕ_2σ_1, θ_2ϕ_2σ_2, θ_2ϕ_2σ_3, θ_2ϕ_3}.

The primary decomposition of the corresponding square-free monomial ideal is

⟨supp(c_𝒯)⟩ = ⟨ϕ_1, ϕ_2, ϕ_3⟩ ∩ ⟨θ_1, θ_2⟩ ∩ ⟨ϕ_1, ϕ_3, θ_1, σ_1, σ_2, σ_3⟩.

Therefore, by <ref>, there are three different sets of possible root labels for a staged tree with interpolating polynomial c_𝒯. We show in <ref> below that the polynomial equivalence class of c_𝒯 is given by just two trees.

Consider the interpolating polynomial c_𝒯 = θ_0 + θ_1ϕ_1 + θ_1ϕ_0ϕ_1 + θ_1ϕ_0^2. The minimal prime decomposition of ⟨supp(c_𝒯)⟩ is given by two sets, namely {θ_0, θ_1} and {ϕ_0, θ_0, ϕ_1}. The first one leads to the tree in <ref>. It can be shown by exhaustive search that the second does not give the labels of a root floret in a labeled event tree.

The key assumption in <ref> is that the input tree 𝒯 is staged; otherwise the result need not be true. This theorem is central to the algorithm we present in the following section because it shows that, instead of searching for root-floret labels among all subsets of the labels Θ, the search can be limited to those subsets which are the generators of the minimal primes of ⟨supp(c_𝒯)⟩. If Θ has d elements, the number of minimal primes is bounded above by the binomial coefficient d choose ⌈d/2⌉, whereas the number of subsets of Θ is 2^d. So considering all possible subsets of Θ, and having to repeat this recursively, may lead to a combinatorial explosion of cases to analyze. As a consequence, <ref> gives a drastic reduction of the set of candidate root-floret labels.

Staged trees whose interpolating polynomials are sums of square-free power-products are interesting cases both from an algebraic viewpoint and for their interpretation in statistical inference. For instance, if all power-products in c_𝒯 are square-free then the proof of <ref> can be shortened, obtaining the contradiction by <ref> directly. In terms of staged tree models, this condition implies that if a unit passes through a vertex in a given stage it cannot subsequently pass through another vertex in the same stage. By making this requirement we can avoid various complex ambiguities associated with exactly how we relate a sample distribution to a polynomial family. Although less useful in modeling time series, in most cross-sectional statistical models this constraint will almost always apply. The restriction to polynomials with square-free support enables us to prove the second and third central results for our algorithmic implementation.

Let 𝒯 be a staged tree whose interpolating polynomial c_𝒯 = ∑_(v_0,w)∈E(v_0) θ(v_0,w) · c_𝒯_w is a sum of square-free power-products. Then no label in θ_v_0 appears in any subtree-polynomial c_𝒯_w.

Because 𝒯 is a staged tree we have θ_v_0 ∩ θ_v = ∅ or θ_v_0 = θ_v for all v ∈ V∖{v_0} by <ref>. By contradiction, suppose there is a subtree 𝒯_w containing a floret with labels θ_v_0. Let θ_1 be the label of the edge (v_0,w) for some w ∈ V. Then there is a root-to-leaf path with at least two edges labeled θ_1: see also the illustration in the proof of <ref>. Hence there is a multiple of θ_1^2 in c_𝒯. This is a contradiction because c_𝒯 is a sum of square-free power-products. So there is no subtree 𝒯_w containing a floret with labels θ_v_0. The claim follows from <ref>.

Let 𝒯 be a staged tree whose interpolating polynomial c_𝒯 is a sum of square-free power-products.
Then all coefficients in c_𝒯 are equal to 1.

The claim follows from <ref> and its recursive application to the subtrees of 𝒯.

So when searching for staged trees using square-free interpolating polynomials, coefficients may be ignored. This is not true for general labeled event trees, by <ref>. In <ref> we will see that this result allows the application of the algorithm in <ref> to network polynomials of staged trees.

§.§ The algorithm
Given a polynomial f whose power-products are square-free and whose coefficients are all equal to one, there is an obvious algorithm which determines all its nested representations, and in particular all staged trees for which f is the interpolating polynomial. This algorithm is given in pseudo-code in <ref>. Following the notation in <ref>, the proposed algorithm searches over subsets A ⊆ Θ of the indeterminates appearing in f and recursively checks whether it is possible to construct the polynomials f_x for x ∈ A. The choices of A are hereby constrained to the minimal primes of the monomial ideal associated to f, as determined by <ref>. This algorithm works even when it is not known a priori whether or not f is the interpolating polynomial of a staged tree. Since the support of f is finite, it is clear that the recursion terminates. The corresponding function is part of the CoCoA distribution from version 5.1.6.

The base steps of the recursion in <ref> are given by the simplest trees: a single-vertex tree for C = {1} (Step 2), or a floret without subtrees for C ⊆ Θ (Step 4) with at least two edges (Step 3); compare also the recursive description in <ref>. In Step 5, <ref> is applied to determine the candidate root florets F_1, …, F_k. The main loop in Step 6 considers each F_i one at a time and determines all the staged trees having root floret F_i, i = 1, …, k. In the main loop, Step 6.2 checks whether the subsets defined in Step 6.1 give a partition of C, which is a necessary condition from <ref>: since ⟨F_i⟩ is a minimal prime for ⟨C⟩ it follows that C = ∪_x∈F_i C_x, and therefore only disjointness needs to be verified. Then the inner loop in Step 6.3, with its sub-steps, considers each x ∈ F_i one at a time and determines (if possible) all the subtrees emanating from the second vertex of the edge labeled x. In particular, Step 6.3.1 stops the search for F_i if there is a single emanating edge, as this is by definition not an event tree. Step 6.3.3 makes the recursive call on C'_x (defined in Step 6.3.2) to determine the set W_x of all possible subtrees from x. If W_x is empty then Step 6.3.4 stops the search for F_i. Concluding the main loop, Step 6.4 is reached if for each edge having a label in F_i there is at least one subtree. Then the floret labeled by F_i together with all combinations of its subtrees makes a set W' of event trees, with root-floret labels F_i, whose interpolating polynomial is the sum of the monomials in C. At this point Step 6.5 discards those which are not staged. In particular, the subtrees are staged, and compatibility of stages across the subtrees is checked here in the obvious way. Finally, Step 6.6 stores them in W.
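A condensed re-implementation of the recursion (ours, in Python; minimal_primes() is the hitting-set sketch given earlier, and the stage-compatibility filtering of Step 6.5 is omitted, so the output may also contain non-staged event trees whose florets happen to be minimal primes) reads:

from itertools import product

def staged_trees(C):
    # C: set of frozensets of labels (the square-free support); trees are
    # returned as nested lists of (label, subtree) pairs, a leaf being [].
    if C == {frozenset()}:                     # Step 2: C = {1}, a single leaf
        return [[]]
    if len(C) < 2:                             # Step 6.3.1: a single emanating edge
        return []
    trees = []
    for F in minimal_primes(C):                # Step 5: candidate root florets
        Cx = {x: [m for m in C if x in m] for x in F}
        if sum(len(v) for v in Cx.values()) != len(C):
            continue                           # Step 6.2: the C_x must partition C
        Wx = [staged_trees({m - {x} for m in Cx[x]}) for x in F]   # Steps 6.3.2-6.3.3
        if all(Wx):                            # Steps 6.3.4 and 6.4
            for subtrees in product(*Wx):
                trees.append(list(zip(F, subtrees)))
    return trees

Note how florets with exactly two edges are handled implicitly: for C = {{a},{b}} the only minimal prime is {a,b}, the partition check succeeds, and both recursive calls return a leaf.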
These are:

F_1 = {ϕ_1, ϕ_3, θ_1, σ_1, σ_2, σ_3},
F_2 = {ϕ_1, ϕ_2, ϕ_3},
F_3 = {θ_1, θ_2}.

The first set F_1 cannot be a floret-label set because C_ϕ_1 ∩ C_θ_1 ≠ ∅, see Step 6.2 in the algorithm. Indeed the two sets

C_ϕ_1 = {θ_1ϕ_1, θ_2ϕ_1} = ϕ_1{θ_1, θ_2},
C_θ_1 = {θ_1ϕ_1, θ_1ϕ_2, θ_1ϕ_3} = θ_1{ϕ_1, ϕ_2, ϕ_3}

show that, if F_1 were a floret-label set, then the tree would include a structure such as in <ref>, which cannot be part of a staged tree: see also <ref> and <ref>. Above we have used the convention that the product of a single label with a set of labels is defined as the set of all elementwise products.

With F_2 in the first step of the algorithm we have

C_ϕ_3 = {ϕ_3θ_1, ϕ_3θ_2} = ϕ_3{θ_1, θ_2},
C_ϕ_2 = {θ_1ϕ_2, θ_2ϕ_2σ_1, θ_2ϕ_2σ_2, θ_2ϕ_2σ_3} = ϕ_2{θ_1, θ_2σ_1, θ_2σ_2, θ_2σ_3},
C_ϕ_1 = {θ_1ϕ_1, θ_2ϕ_1} = ϕ_1{θ_1, θ_2}.

The algorithm calls recursively on the sets C'_ϕ_3 and C'_ϕ_1 but stops immediately (Step 4 in the algorithm), as summarized in <ref>. For the middle branch we need to continue the recursion by working on C'_ϕ_2. The monomial ideal generated by C'_ϕ_2 has the primary decomposition C'_ϕ_2 = θ_1,θ_2 ∩ θ_1,σ_1,σ_2,σ_3. Taking F = {θ_1,θ_2} gives the tree in <ref>, while F = {θ_1,σ_1,σ_2,σ_3} leads to the situation in <ref>, which does not correspond to an event tree. In conclusion, F_2 gives the tree in <ref> only. The result of the algorithm starting from F_3 is analogous and leads to the tree in <ref>.

§.§ Discussion of the algorithm

It was shown in <cit.> that the application of two graphical operators called the swap and the resize on a staged tree can be used to traverse a statistical equivalence class. However, these authors did not provide an implementation of their graphical methods in algebraic or computational terms. So <ref> fills that gap and enables us to determine the full polynomial equivalence class of a given staged tree. We hereby focus on staged as opposed to labeled event trees because these can always be interpreted as representations of statistical models as in <ref>. Of course our new algorithm can easily be adapted to discover more general representations. We will now discuss some of the properties of this algorithm.

First, the algorithm can be modified to work on non-square-free power-products. For this purpose Step 6.2 must be disabled and all the possible partitions of C need to be checked, making the algorithm more expensive. For example, the only minimal prime for the monomial ideal of the support of θ_1·(θ_1+θ_2) + θ_2·(θ_1+θ_2) is θ_1,θ_2, which leads to two partitions {θ_1^2, θ_1θ_2}, {θ_2^2} and {θ_1^2}, {θ_1θ_2, θ_2^2}. Calling the algorithm on the first partition gives no answer because it leads to a tree which is not an event tree, whereas the second gives the original nested representation. Moreover, in this partitioning one also needs to keep track of the coefficients, as illustrated by the nested representation θ_1·(θ_1+θ_2) + θ_2·(θ_1+θ_2) = θ_1^2 + θ_1θ_2 + θ_2^2.

Second, so far we have often emphasized the use of the interpolating polynomial as opposed to the network polynomial in <ref>. This was to highlight the structure of the tree, as opposed to the real values associated with its root-to-leaf paths: compare also <ref>.
However, if c_g,𝒯 is the network polynomial associated to a staged tree 𝒯 and its power-products are square-free, from <ref> it follows that the root-to-leaf paths λ ∈ Λ(𝒯) are labeled by distinct monomials. This means that in the network polynomial the coefficients g(λ) are kept distinct. In conclusion, all staged trees with a given network polynomial c_g,𝒯 are found by the algorithm applied to C = Supp(c_g,𝒯). Afterwards the coefficients g(λ) can be associated to the corresponding root-to-leaf paths.

Third, thanks to the reduction to minimal primes, the algorithm is very fast also in real-world settings. In <ref> we will apply StagedTree to discover the polynomial equivalence class of a staged tree describing a real problem with 24 atomic events. This computation takes much less than a second on a laptop with a 2.4 GHz Intel Core 2 Duo processor. Similarly, it takes 2.3 seconds to compute the 576 staged trees sharing the interpolating polynomial (θ_0+θ_1)(ϕ_1+ϕ_2)(τ_0+τ_1)(σ_0+σ_1) representing four independent binary random variables: compare <ref>. Computing the polynomial equivalence class of four independent random variables taking three levels each takes significantly longer, at 12:23 min, but produces 55,296 different staged trees, each having 81 atoms. Naturally, the more stage structure is present, the more different polynomially equivalent representations are possible, so the latter two are somewhat extreme cases. On medium-sized real-world applications like the one presented below our computations are very fast. So this algorithm allows us to systematically enumerate and analyze staged trees of the same order as, or even bigger than, the study we will consider.

Fourth, every Bayesian network, context-specific Bayesian network <cit.> and object-oriented Bayesian network <cit.> can be represented by a staged tree where inner vertices correspond to conditional random variables and the emanating edges correspond to the different states of these variables. Two vertices are then in the same stage if and only if the corresponding rows of the conditional probability tables are identified. For instance, the independence model of two binary random variables can be represented by the staged tree depicted in <ref>. The complete Bayesian network on two binary random variables can be represented by the staged tree in <ref>. However, staged trees allow for much less symmetric – and hence more general – modeling assumptions. In particular, they do not rely on an underlying product-space structure but can express relationships directly in terms of events. So this class of models is much larger than the Bayesian network class, and as a consequence the algorithm can be optimized to traverse this wider class as well as the class of Bayesian networks. The methodology we developed for the StagedTree algorithm will thus serve as a springboard for very fast algorithms to analyze equivalence classes of staged trees and, in the future, causal discovery algorithms over this class: see also <ref>. We illustrate below that these computer-algebra analyses enable us to obtain further insights into the properties of the underlying class of statistical models.
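Since the pseudo-code itself is given in <ref> and the CoCoA implementation is distributed separately, the following is only a minimal Python sketch of the recursion, under stated assumptions: square-free monomials are represented as frozensets of label names, the minimal primes are found by brute-force minimal hitting sets (a stand-in for the Alexander dual recalled in the Appendix), Step 6.5's stage-compatibility check is omitted (so the output is the set of labeled event trees), and only one combination of subtrees per root floret is kept. The label names t1, t2, f1, …, s3 are shorthand for the θ, ϕ, σ of the running example.

```python
from itertools import combinations

def minimal_primes(monomials):
    """Minimal primes of the square-free monomial ideal generated by
    `monomials` (frozensets of labels), computed as the minimal hitting
    sets of the supports -- a brute-force stand-in for the Alexander dual."""
    labels = sorted(set().union(*monomials))
    primes = []
    for r in range(1, len(labels) + 1):
        for cand in map(set, combinations(labels, r)):
            if all(m & cand for m in monomials) and \
               not any(p < cand for p in primes):   # p < cand: proper subset
                primes.append(cand)
    return primes

def event_trees(C):
    """Nested representations of the square-free support C, following
    Steps 2-6 of the pseudo-code; Step 6.5 is omitted here."""
    if C == [frozenset()]:                      # Step 2: a single leaf
        return ["leaf"]
    trees = []
    for F in minimal_primes(C):                 # Step 5: candidate root florets
        if len(F) < 2:                          # Step 3: a floret needs >= 2 edges
            continue
        parts = {x: [m for m in C if x in m] for x in sorted(F)}
        if sum(len(p) for p in parts.values()) != len(C):
            continue                            # Step 6.2: the C_x are not disjoint
        children = {}
        for x, Cx in parts.items():             # Step 6.3: recurse on C'_x
            Wx = event_trees([m - {x} for m in Cx])
            if not Wx:
                break                           # Step 6.3.4: dead end for this F
            children[x] = Wx
        else:                                   # Step 6.4: keep one combination
            trees.append({x: W[0] for x, W in children.items()})
    return trees

# Support of the running example's interpolating polynomial:
C = [frozenset(m) for m in (
    {"t1", "f1"}, {"t1", "f2"}, {"t1", "f3"}, {"t2", "f1"}, {"t2", "f3"},
    {"t2", "f2", "s1"}, {"t2", "f2", "s2"}, {"t2", "f2", "s3"})]
print(event_trees(C))   # two trees, rooted at {t1,t2} and at {f1,f2,f3}
```

On the example support this sketch reproduces exactly the two event trees found above, and the dead ends it hits correspond to Steps 6.2 and 6.3.4 as traced in the worked example.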
§ ADDITIONAL PROPERTIES OF INTERPOLATING POLYNOMIALS

A natural question to ask is whether or not a given polynomial can be seen to be the interpolating polynomial of an event tree without having to construct a nested representation first. The following proposition gives some necessary conditions for a polynomial to be an interpolating polynomial of a labeled event tree. Recall that for a power-product θ^a = θ_1^a_1⋯θ_d^a_d the degree is the sum of the exponents, deg(θ^a) = ∑_j=1^d a_j, and for a polynomial c = ∑_i=1^n θ^α_i the degree is deg(c) = max_i{deg(θ^α_i)}.

Let c(θ) = ∑_i=1^n θ^α_i be a polynomial with square-free support, i.e. α_i = (a_i1,…,a_id) ∈ {0,1}^d for all i=1,…,n and some d ≥ 1. If there exists a labeled event tree such that c is its interpolating polynomial then the following conditions hold:

* If c ≠ 1 then d, n ≥ 2, d ≤ 2n-2, and d > deg(c).
* The frequency with which each root label appears in the monomials θ^α_i, i=1,…,n, is greater than the degree of the monomials in which it appears.
* If the degree of θ^α_i is equal to the degree of c, then there exists θ^α_j with j ≠ i of the same degree as θ^α_i such that the degree of the greatest common divisor of θ^α_j and θ^α_i is equal to the degree of c minus one.
* No power-product in the support of c can be a proper multiple of another.

* The root floret of a labeled event tree with at least one edge has at least two edges with distinct labels, thus d, n ≥ 2. We prove the bound d ≤ 2n-2 by induction on the number of florets in a labeled event tree. Let E be the set of edges and L the set of leaves of the tree. If a tree is formed by a single vertex then #E = 0 and #L = 1, and therefore #E = 0 = 2#L - 2. By induction suppose that #E ≤ 2#L - 2 for the tree 𝒯. Consider the tree 𝒯' obtained by adding to a leaf of 𝒯 a floret with s edges. Because s ≥ 2, we have s ≤ 2s-2, while #E' = #E + s and #L' = #L + s - 1. As a result, #E' = #E + s ≤ (2#L - 2) + (2s - 2) = 2(#L + s - 1) - 2 = 2#L' - 2. We conclude by noticing that d ≤ #E and n = #L.
* Consider <ref>. In labeled event trees, an atomic monomial of degree l ∈ ℕ is associated to a root-to-leaf path of length l. This path has one bifurcation at every vertex, so it is embedded in a graph with at least l+1 distinct root-to-leaf paths. So every root label θ_1 occurs in monomials of maximal degree l and there are at least l+1 of those.
* Because #v ≥ 2 for all v ∈ V, i.e. every floret has at least two edges, every leaf-floret has at least two edges. There are hence at least two monomials of the same maximal degree, namely those belonging to the longest paths in the tree: these are equal until they split at a leaf-floret.
* Let t_1 and t_2 in c be multiples of each other, say t_1 | t_2. They are atomic monomials of two root-to-leaf paths, λ_1 and λ_2, which are not empty if the tree is not trivial. Let e be the root edge labeled θ_1, the first edge in λ_1. Then λ_2 starts with the same edge: otherwise θ_1 | t_1 but θ_1 ∤ t_2 by <ref>, contradicting t_1 | t_2. Therefore we can repeat the reasoning on λ_1∖{e} and λ_2∖{e} in the subtree 𝒯_w. After a finite number of steps we can then conclude λ_1 = λ_2 and thus t_1 = t_2.

The conditions in <ref> are necessary but not sufficient. The polynomial θ_1ϕ_1 + θ_1ϕ_2 + θ_2θ_3θ_4 + θ_2θ_3ϕ_1 + θ_2θ_4ϕ_2 satisfies all points in <ref>. However, it cannot be written in the form of a nested representation. It is thus not the interpolating polynomial of a labeled event tree.
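The proposition lends itself to a direct computational filter. Below is a small, hedged Python sketch that checks conditions 1, 3 and 4 on a square-free support given as frozensets of labels; condition 2 refers to the root labels, which are not determined by the polynomial alone, so it is skipped. Running the sketch on the counterexample above returns True even though no event tree exists, illustrating that the conditions are not sufficient.

```python
def necessary_conditions(monomials):
    """Check conditions 1, 3 and 4 of the proposition for a square-free
    support (a list of frozensets of labels)."""
    n = len(monomials)
    labels = set().union(*monomials)
    d, deg = len(labels), max(map(len, monomials))
    # Condition 1: d, n >= 2, d <= 2n - 2, and d > deg(c).
    ok1 = n >= 2 and d >= 2 and d <= 2 * n - 2 and d > deg
    # Condition 3: every maximal-degree monomial has a partner of the
    # same degree whose gcd with it has degree deg(c) - 1.
    ok3 = all(any(len(b) == deg and len(a & b) == deg - 1
                  for b in monomials if b is not a)
              for a in monomials if len(a) == deg)
    # Condition 4: no monomial is a proper multiple (superset) of another.
    ok4 = not any(a < b for a in monomials for b in monomials)
    return ok1 and ok3 and ok4

# The counterexample from the text passes all three checks ...
counter = [frozenset(m) for m in ({"t1", "f1"}, {"t1", "f2"},
           {"t2", "t3", "t4"}, {"t2", "t3", "f1"}, {"t2", "t4", "f2"})]
print(necessary_conditions(counter))   # True, yet no event tree exists
```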
§ TWO OTHER REPRESENTATIONS OF LABELED EVENT TREES

From the previous section we see that if there is a labeled event tree for a square-free polynomial c with n terms then that tree has n root-to-leaf paths. Every such path is labeled by a monomial θ^α which is a power-product in Supp(c). We next present two well-known alternative representations of these atomic monomials of a staged tree.

The first representation is based on the notion of an abstract simplicial complex, i.e. a family Δ of subsets of a finite set (the nodes of the simplicial complex) such that if A ∈ Δ and B ⊆ A then B ∈ Δ. In our case the nodes of the simplicial complex are the labels Θ of a labeled event tree 𝒯 and the family is given by the monomials π_θ(λ_i) = θ^α_i, i=1,…,n, and all of their divisors. For an illustration see <ref>. This graphical representation for a set of monomials has been successfully used in the data analysis of complex systems <cit.>.

A labeled event tree is saturated with root labels θ_1,…,θ_k if and only if its associated simplicial complex Δ = Δ_1 ⊕ Δ_2 ⊕ … ⊕ Δ_k is the disjoint union of k connected simplicial complexes and the vertex of maximal degree within each complex is a root label.

Let 𝒯 be a saturated tree. If no edge labels are identified, then writing <ref> as c_𝒯 = ∑_i=1^k θ_i c_i we find that no two c_i and c_j, i ≠ j, have any indeterminates in common, i,j = 1,…,k. Thus we can split the set of atomic monomials θ^α_i, i=1,…,n, into k disjoint sets, each given by the monomial terms in one θ_i c_i. This gives us the disjoint union Δ = Δ_1 ⊕ Δ_2 ⊕ … ⊕ Δ_k. By the linear expansion of the interpolating polynomial, the vertex θ_i is connected to every other monomial in Δ_i. It is thus of highest degree in the sense that it has the highest number of emanating edges. For if in Δ_i there were a second vertex θ_j, i ≠ j, of equally high degree then both θ_i and θ_j would divide every monomial in that subset. But by definition a sequence of single edges, here labeled θ_i and θ_j, is not possible. Conversely, assume we have a set of monomials belonging to an event tree. Then the associated simplicial complex is the disjoint union of simplicial complexes Δ = Δ_1 ⊕ Δ_2 ⊕ … ⊕ Δ_k where each Δ_i has a vertex θ_i of highest degree, i=1,…,k. Thus we can write the corresponding interpolating polynomial in the form <ref>. Because no Δ_i is connected to any Δ_j for i ≠ j, the terms belonging to one sub-simplicial complex have no indeterminates in common with those belonging to the others. Thus the subtrees rooted after the root do not have any labels in common. Therefore the original tree is saturated.

The proposition enables us to use this simplicial-complex representation of an interpolating polynomial to quickly decide whether or not the corresponding labeled event tree is saturated. Thus, by <ref>, we will know whether or not we need to check for different nested representations of its interpolating polynomial, or whether any representation that is discovered is unique. If a tree is saturated, we can then resize it to a simpler graphical representation as in <ref>.
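A hedged sketch of how the proposition can be used computationally: the simplicial complex is represented only through the monomials themselves, connectivity is computed by union-find over labels that co-occur in a monomial, and, as a proxy for the vertex degree in the complex, each label is weighted by the number of monomials it divides. Under these assumptions the function below returns the candidate root labels of a saturated tree, or None. The symbol names in the demonstration are hypothetical.

```python
from collections import defaultdict

def components(monomials):
    """Connected components of the simplicial complex spanned by the
    monomials: two labels are joined whenever they co-occur in a monomial."""
    parent = {x: x for x in set().union(*monomials)}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for m in monomials:
        first, *rest = sorted(m)
        for x in rest:
            parent[find(x)] = find(first)
    comps = defaultdict(set)
    for x in parent:
        comps[find(x)].add(x)
    return list(comps.values())

def saturated_root_labels(monomials):
    """Candidate root labels of a saturated tree, or None. A root floret
    has at least two edges, so at least two components are required, each
    with a unique label of maximal frequency."""
    freq = defaultdict(int)
    for m in monomials:
        for x in m:
            freq[x] += 1
    comps = components(monomials)
    if len(comps) < 2:
        return None
    roots = []
    for comp in comps:
        top = max(freq[x] for x in comp)
        best = [x for x in comp if freq[x] == top]
        if len(best) != 1:
            return None      # no unique maximal vertex: not saturated
        roots.append(best[0])
    return roots

# A saturated tree on two binary variables: root labels t0, t1 recovered.
sat = [frozenset(m) for m in ({"t0", "f0"}, {"t0", "f1"},
                              {"t1", "f2"}, {"t1", "f3"})]
print(saturated_root_labels(sat))   # e.g. ['t0', 't1']
```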
The other natural representation of these monomials is via an incidence matrix. Let 𝒯 be a labeled event tree with monomials θ_1^α_1,j θ_2^α_2,j ⋯ θ_d^α_d,j = θ^α_j, for α_j ∈ ℤ_≥0^d and j=1,…,n. The interpolating polynomial of 𝒯 can be visualized by a d × n matrix A_𝒯 = (a_ij)_ij with non-negative integer entries such that a_ij = m if θ_i^m divides θ^α_j and m ∈ ℕ is maximal, and a_ij = 0 otherwise. If the atomic monomials in 𝒯 are square-free then A_𝒯 is a matrix with entries 0 or 1.

The matrix A_𝒯 codes a number of properties of the atomic monomials of 𝒯. In particular, every column encodes those indeterminates which divide the associated monomial, so column sums are the degrees of the monomials indexing the columns. Every row sum codes the number of monomials which are divided by a certain indeterminate. In order for a set of monomials to be associated to a tree, we need that ∑_i=1^d a_il < ∑_j=1^n a_kj for all pairs k,l for which θ_k is a root label with a_kl ≠ 0. This follows from <ref>.2. Submatrices of A_𝒯 can easily be associated to subtrees of 𝒯. For instance, for a subtree 𝒯_v ⊆ 𝒯 rooted after an edge (·,v) labeled θ_i, we cancel the row a_i· and all columns a_·j with entry a_ij = 0. The remaining matrix A_𝒯,i = A_𝒯_v is then the incidence matrix of 𝒯_v.

For example, the incidence matrix A_𝒯 for the interpolating polynomial in <ref> of the trees in <ref> is

        θ_1ϕ_1  θ_1ϕ_2  θ_1ϕ_3  θ_2ϕ_1  θ_2ϕ_3  θ_2ϕ_2σ_1  θ_2ϕ_2σ_2  θ_2ϕ_2σ_3
θ_1       1       1       1       0       0        0          0          0
θ_2       0       0       0       1       1        1          1          1
ϕ_1       1       0       0       1       0        0          0          0
ϕ_2       0       1       0       0       0        1          1          1
ϕ_3       0       0       1       0       1        0          0          0
σ_1       0       0       0       0       0        1          0          0
σ_2       0       0       0       0       0        0          1          0
σ_3       0       0       0       0       0        0          0          1

The sum of the first two rows in this matrix is a vector with all entries equal to one, and the labels indexing these first two rows are root-floret labels. This is not by chance. In fact, the full tree can be retrieved by splitting the set of columns into those which have a one in the first row or in the second row and proceeding recursively. This procedure can be turned into a matrix version of the StagedTree algorithm. This matrix representation enables us to link model representations given by labeled or staged trees to log-linear models and well-known results in algebraic statistics <cit.>.

§ AN APPLICATION

In this section we will apply the algorithm presented in <ref> to determine the full polynomial equivalence class of a staged tree representing the best-fitting model inferred from a real-world dataset.

The work of <cit.> provides an early analysis of what we will refer to as the Christchurch dataset. These data have been collected on a cohort of nearly one thousand children over the course of thirty years and include measurements of a number of factors possibly relevant to determining the likelihood of child illness. These measurements can be grouped into the very broad categories of socio-economic background and number of life events – like divorce of its parents or a death in the family – of a child, with respective states high, average and low. The state of health of a child is then assessed as hospital admission yes or no <cit.>.

An MAP algorithm running on the Christchurch dataset determined the highest-scoring staged tree representation among those which have all vertices that are in the same stage also at the same depth <cit.>. Later, <cit.> found a statistically equivalent but graphically simpler representation with no saturated subtrees. This staged tree 𝒯 is shown in <ref>. Here, the socio-economic background of a child has been modified to a measure of the access to credit, which can be high (++), moderately high (+- or -+) or low (–).
The colouring of the staged tree then indicates a number of interesting conditional independence statements. For instance, the red stages on the first level of the tree state that the likelihood of hospital admission was inferred to be the same for all children from a family with high or moderately high access to credit. The blue stages on the subsequent level add that the number of life events of a child is independent of it being admitted to hospital given that its family's access to credit was high, but different given that its access to credit was low. From the green stages we can see that for children with moderate access to credit the likelihood of a certain quantity of life events is not independent of admission to hospital.

The order of events depicted by the staged tree in <ref> suggests that the number of life events of a child might be a putative cause of its admission to hospital. The analysis of <cit.> then showed that, in fact, when keeping the original problem variables intact across the class of staged trees which are statistically equivalent to 𝒯, this order is preserved. This interpretation of the tree's directionality thus seems to be supported by the Christchurch data.

We will now use the algorithm in <ref> to automatically determine the polynomial equivalence class of 𝒯. To this end we first specify the interpolating polynomial for the tree in <ref>, using labels as specified in <ref>:

c_𝒯(a,h,l) = a_1h_1l_1+a_1h_1l_2+a_1h_1l_3+a_1h_2l_1+a_1h_2l_2+a_1h_2l_3+a_2h_1l_1+a_2h_1l_2+a_2h_1l_3+a_2h_2l_4+a_2h_2l_5+a_2h_2l_6+a_3h_1l_1+a_3h_1l_2+a_3h_1l_3+a_3h_2l_4+a_3h_2l_5+a_3h_2l_6+a_4l_4+a_4l_5+a_4l_6+a_5l_4+a_5l_5+a_5l_6,

where a=(a_1,a_2,a_3,a_4,a_5), h=(h_1,h_2) and l=(l_1,l_2,l_3,l_4,l_5,l_6) are the respective (conditional) probabilities of the different degrees of access to credit, hospital admission and numbers of life events, read from left to right and from top to bottom along the root-to-leaf paths of 𝒯. Running the algorithm, we find precisely four different nested representations of c_𝒯. These are:

r_0(𝒯) = a_1(h_1(l_1+l_2+l_3)+h_2(l_1+l_2+l_3)) + a_2(h_1(l_1+l_2+l_3)+h_2(l_4+l_5+l_6)) + a_3(h_1(l_1+l_2+l_3)+h_2(l_4+l_5+l_6)) + a_4(l_4+l_5+l_6) + a_5(l_4+l_5+l_6)

r_1(𝒯) = h_1(l_1(a_1+a_2+a_3)+l_2(a_1+a_2+a_3)+l_3(a_1+a_2+a_3)) + h_2(a_1(l_1+l_2+l_3)+a_2(l_4+l_5+l_6)+a_3(l_4+l_5+l_6)) + a_4(l_4+l_5+l_6) + a_5(l_4+l_5+l_6)

r_2(𝒯) = h_1(a_1(l_1+l_2+l_3)+a_2(l_1+l_2+l_3)+a_3(l_1+l_2+l_3)) + h_2(a_1(l_1+l_2+l_3)+a_2(l_4+l_5+l_6)+a_3(l_4+l_5+l_6)) + a_4(l_4+l_5+l_6) + a_5(l_4+l_5+l_6)

r_3(𝒯) = a_1(l_1(h_1+h_2)+l_2(h_1+h_2)+l_3(h_1+h_2)) + a_2(h_1(l_1+l_2+l_3)+h_2(l_4+l_5+l_6)) + a_3(h_1(l_1+l_2+l_3)+h_2(l_4+l_5+l_6)) + a_4(l_4+l_5+l_6) + a_5(l_4+l_5+l_6)

where for now r_i denotes one fixed order of summation in a nested representation, i=0,1,2,3. By <ref>, r_0(𝒯) is the nested factorisation of 𝒯. In <ref> we have drawn the staged tree 𝒯_1 corresponding to the representation r_1(𝒯), in <ref> the staged tree 𝒯_2 corresponding to r_2(𝒯) and in <ref> the staged tree 𝒯_3 corresponding to r_3(𝒯). These staged trees are the only labeled event trees with the above interpolating polynomial for which the conditions imposed on florets induce a probability distribution over the depicted atoms. So in <ref> we see all four elements of the polynomial equivalence class of 𝒯. By <ref>, these staged trees all represent the same underlying model.
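As a quick consistency check, the four nested representations can be expanded symbolically and compared. The following sympy sketch uses flat symbol names a1,…,a5, h1, h2, l1,…,l6 in place of the subscripted labels.

```python
import sympy as sp

a1, a2, a3, a4, a5 = sp.symbols("a1:6")
h1, h2 = sp.symbols("h1:3")
l1, l2, l3, l4, l5, l6 = sp.symbols("l1:7")

L123, L456 = l1 + l2 + l3, l4 + l5 + l6
tail = a4 * L456 + a5 * L456    # low-credit branches, common to all four

r0 = a1*(h1*L123 + h2*L123) + a2*(h1*L123 + h2*L456) \
     + a3*(h1*L123 + h2*L456) + tail
r1 = h1*(l1*(a1 + a2 + a3) + l2*(a1 + a2 + a3) + l3*(a1 + a2 + a3)) \
     + h2*(a1*L123 + a2*L456 + a3*L456) + tail
r2 = h1*(a1*L123 + a2*L123 + a3*L123) \
     + h2*(a1*L123 + a2*L456 + a3*L456) + tail
r3 = a1*(l1*(h1 + h2) + l2*(h1 + h2) + l3*(h1 + h2)) \
     + a2*(h1*L123 + h2*L456) + a3*(h1*L123 + h2*L456) + tail

assert sp.expand(r0) == sp.expand(r1) == sp.expand(r2) == sp.expand(r3)
print(len(sp.expand(r0).args))   # 24 atomic monomials
```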
So we can now analyse the orders in which the same events are depicted across the different graphs. Because in <ref> and <ref> all vertices in the same stage are also at the same distance from the leaves, we can in this case assign an interpretation to each such level of the tree. So in <ref> the first level of 𝒯 depicts all states of the random variable access to credit, the second level depicts all states of the random variable hospital admission, and the third and last level depicts all states of the random variable life events. Now this interpretation has been reversed in <ref>. In 𝒯_2, the third level still depicts life events but the first two levels have been interchanged. The first level now represents the states of a joint random variable of hospital admission and low access to credit. The second level then depicts access to credit with states high and moderately high. So because both 𝒯 and 𝒯_2 represent the same model, with 𝒯 showing access to credit before hospital admission and 𝒯_2 reversing that order, we cannot hypothesize a putative causal relationship on these (conditionally independent) variables: see <cit.> for a more thorough presentation of this very subtle point.

It is less straightforward to assign a meaning in terms of problem variables to the staged trees in <ref> and <ref>. However, we can still see when comparing 𝒯_1 with 𝒯_2 or 𝒯 with 𝒯_3 that only for children from a family with high access to credit is the order of hospital admission and life events reversible. In all other circumstances the model depicts hospital admission before life events. As in <cit.>, we therefore might want to assign this a putative causal interpretation.

§ ACKNOWLEDGMENTS

Christiane Görgen was supported by the EPSRC grant EP/L505110/1. Part of this research was supported through the programme Oberwolfach Leibniz Fellows by the Mathematisches Forschungsinstitut Oberwolfach in 2017. During some of this development Jim Q. Smith was supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1.

§ APPENDIX

§.§ Square-free monomial ideals

We summarize here the notions from commutative algebra which have been mentioned in this paper. Given a non-zero polynomial f ∈ ℝ[x_1,…,x_d], with coefficients in ℝ and indeterminates (or variables) x_1,…,x_d, f is uniquely written as f = ∑_i=1^s β_i t_i, with coefficients β_i ≠ 0 and power-products (or terms, or monomials) t_i = x_1^α_i,1⋯x_d^α_i,d all distinct, for every i = 1,…,s. The support of a polynomial f is the set of the power-products actually occurring in f. With the notation above, Supp(f) = {t_i | i = 1,…,s}.

An ideal generated by a set of polynomials, say I = ⟨f_1, …, f_k⟩, is the set of all their linear combinations with polynomial coefficients, i.e. I = { g_1f_1 + … + g_kf_k | g_i ∈ ℝ[x_1,…,x_d] for i=1,…,k }. In particular, if all the f_i are power-products, I is called a monomial ideal. If a power-product has all exponents in {0,1}, it is said to be square-free, and an ideal generated by square-free power-products is called a square-free monomial ideal.

Given a monomial ideal I, a minimal prime of I is an ideal P generated by a subset of the indeterminates {x_1,…,x_d} such that I is contained in P, but I is not contained in any ideal generated by a proper subset of the generators of P (used in <ref>). An ideal is primary if fg ∈ I implies either f ∈ I or some power g^m ∈ I (for some integer m > 0). All ideals in ℝ[x_1,…,x_d] admit a primary decomposition, i.e.
may be written as an intersection of primary ideals. In the particular case of interest in this paper, a square-free monomial ideal has primary decomposition I = P_1 ∩ … ∩ P_ℓ, where the primary ideals P_i are indeed the minimal primes of I. In general, the prime decomposition of an ideal is given by the minimal primes of the ideal (used in <ref>), and is the primary decomposition of the radical of the ideal. In general, computing the primary decomposition of a polynomial ideal is quite difficult, but for monomial ideals the operations are much easier. In particular, for square-free monomial ideals there is a very simple and efficient algorithm based on the Alexander dual.

Authors' addresses

Christiane Görgen, Max-Planck-Institute for Mathematics in the Sciences, Leipzig, Germany.
Anna Bigatti, Dipartimento di Matematica, Università degli Studi di Genova, 16146 Genova, Italy.
Eva Riccomagno, Institute of Intelligent Systems for Automation, National Research Council, Italy; and Dipartimento di Matematica, Università degli Studi di Genova, 16146 Genova, Italy.
Jim Q. Smith, Department of Statistics, University of Warwick, Coventry CV5 7AL, U.K.; and The Alan Turing Institute, British Library, 96 Euston Road, NW1 2DB London, U.K.
{ "authors": [ "Christiane Görgen", "Anna Bigatti", "Eva Riccomagno", "Jim Q. Smith" ], "categories": [ "math.ST", "stat.CO", "stat.TH" ], "primary_category": "math.ST", "published": "20170526071724", "title": "Discovery of statistical equivalence classes using computer algebra" }
Quantum quench dynamics of the attractive one-dimensional Bose gas via the coordinate Bethe ansatz

J. C. Zill^1, T. M. Wright^1, K. V. Kheruntsyan^1, T. Gasenzer^2,3, M. J. Davis^4,5,*

1 School of Mathematics and Physics, The University of Queensland, Brisbane QLD 4072, Australia
2 Kirchhoff-Institut für Physik, Universität Heidelberg, Im Neuenheimer Feld 227, 69120 Heidelberg, Germany
3 ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung, 64291 Darmstadt, Germany
4 ARC Centre of Excellence in Future Low-Energy Electronics Technologies, School of Mathematics and Physics, The University of Queensland, Brisbane QLD 4072, Australia
5 JILA, University of Colorado, 440 UCB, Boulder, Colorado 80309, USA

* [email protected]

December 30, 2023

§ ABSTRACT

We use the coordinate Bethe ansatz to study the Lieb–Liniger model of a one-dimensional gas of bosons on a finite-sized ring interacting via an attractive delta-function potential. We calculate zero-temperature correlation functions for seven particles in the vicinity of the crossover to a localized solitonic state and study the dynamics of a system of four particles quenched to attractive interactions from the ideal-gas ground state. We determine the time evolution of correlation functions, as well as their temporal averages, and discuss the role of bound states in shaping the postquench correlations and relaxation dynamics.

§ INTRODUCTION

The near-perfect isolation and exquisite control possible for many experimental parameters in ultra-cold atomic gases have enabled the study of the nonequilibrium dynamics of closed many-body quantum systems <cit.>. A number of different trapping geometries have led to the realization of quasi-one-dimensional systems <cit.> that are well described by the paradigmatic exactly solvable Lieb–Liniger model of pointlike interacting bosons <cit.>. As this model is integrable, the various forms of the Bethe ansatz provide powerful methodologies with which to investigate the physics it describes <cit.>. One of the simplest methods of taking a quantum system out of equilibrium is to effect an instantaneous change of a parameter in its Hamiltonian — a so-called quantum quench. Several authors have considered the nonequilibrium dynamics of repulsively interacting systems, where one particularly well-studied scenario is an interaction quench starting from the zero-temperature ideal gas <cit.>. Here we study quantum quenches in which a one-dimensional Bose gas, initially prepared in its noninteracting ground state, is subjected to the abrupt introduction of attractive interparticle interactions <cit.>.

The ground-state wave function for the attractive one-dimensional (1D) Bose gas on the infinite line with finite particle number N was constructed by McGuire <cit.> and consists of a single bound state of all the particles. For systems with finite spatial extent, the coordinate Bethe ansatz provides solutions in terms of quasi-momenta (or rapidities), which for attractive interactions are in general complex-valued. Ground-state solutions on a finite ring were found numerically in Refs. <cit.>. Since the energy of the ground state is proportional to -N^3, where N is the particle number, a proper thermodynamic limit with N,L→∞ and fixed density n=N/L does not exist <cit.>. However, the limit N,L→∞ with N^3/L = const is well defined, and was recently analysed in Ref. <cit.>. The zero-density limit L→∞, N=const is also well defined and nontrivial for attractive interactions.
In this limit, some correlation functions are accessible with the algebraic Bethe ansatz <cit.>.

An alternative large-system limit is given by N→∞ in a finite ring of circumference L. In particular, in the Bogoliubov limit c→0, N→∞, cN=const, where c is the interaction strength, a mean-field Gross–Pitaevskii description of the finite-circumference system predicts the appearance of a localized bright-soliton state beyond some threshold interaction strength <cit.>. This has been interpreted as evidence for spontaneous breaking of translational symmetry in the infinite-N, finite-L limit <cit.>. However, Bogoliubov theory predicts a diverging quantum depletion in the vicinity of the threshold interaction strength, invalidating the mean-field description in this regime <cit.>. A many-body analysis for finite N reveals a smooth crossover between a uniform condensate and a state with solitonic correlations, as expected in a finite system <cit.>. Such an analysis also indicates that the gap at the crossover point vanishes as N^-1/3 <cit.>. The Bogoliubov-theory prediction of a vanishing gap at the crossover point in the semiclassical limit N→∞ is thus regained. The crossover to the correlated state has therefore been interpreted <cit.> as a kind of effective quantum phase transition in the finite-L system, though it should be stressed that the crossover in a system of finite particle number N cannot be considered a finite-size precursor of a true quantum phase transition, as no proper thermodynamic limit exists.

In a full many-body quantum-mechanical treatment, energy eigenstates on the localized side of the crossover respect the symmetry of the Hamiltonian, but may contain solitonic structure in (pair) correlations. Localized bright solitons can thus be constructed from superpositions of certain exact many-body wave functions <cit.>, which are given by the Bethe ansatz <cit.>. An integral equation for the density of Bethe rapidities of the ground state for particle number N→∞, valid across the crossover, has recently been derived, and signatures of the crossover were observed in this density <cit.>. Bright-soliton-like structures have also been observed experimentally in elongated quantum-gas samples <cit.>.

A particular nonequilibrium scenario for the attractive 1D Bose gas was proposed in Refs. <cit.> and subsequently realized experimentally in Ref. <cit.>. In the latter work the system was prepared near the ground state at strong repulsive interactions, before the interactions were suddenly switched to strongly attractive using a confinement-induced resonance <cit.>.
In doing so a metastable state was created: the so-called super-Tonks gas <cit.>. This highly excited state of the attractive gas has a “fermionized” character <cit.> that both stabilizes it against decay via recombination losses and implies a large overlap with the Tonks–Girardeau-like prequench state, leading to efficient state preparation via the interaction quench <cit.>. This comparatively tractable regime also allows for a Luttinger-liquid description <cit.>, as well as numerical studies with algebraic Bethe-ansatz <cit.> and tensor-network methods <cit.>. Local correlations in the super-Tonks regime can be obtained via an identification of the Lieb–Liniger gas with a particular nonrelativistic limit of the sinh-Gordon model <cit.>, as well as by combining the equation of state of the super-Tonks gas with the Hellmann–Feynman theorem <cit.>.

There are fewer results available for more general quench scenarios of the one-dimensional Bose gas involving attractive interparticle interactions. References <cit.> introduced a Bethe-ansatz method, based on the Yudson contour-integral representation <cit.>, for calculations of nonequilibrium correlation functions in systems of a few particles in the infinite-volume limit. Recently, the local second-order correlation function in the relaxed state following a quench from the ideal-gas ground state to attractive interactions was determined in the thermodynamic limit[The quench from the ideal gas to attractive interactions leaves the system with a finite energy per unit length, and the thermodynamic limit is therefore well defined in this case <cit.>.] <cit.> using the quench-action method <cit.>.

In Refs. <cit.> we developed a methodology for the calculation of equilibrium and nonequilibrium correlation functions of the repulsively interacting Lieb–Liniger gas based on the semi-analytical evaluation of matrix elements between the eigenstates of the Lieb–Liniger Hamiltonian given by the coordinate Bethe ansatz. Here we extend this approach to the attractively interacting gas, for which the Bethe rapidities that characterize the eigenstates are in general complex-valued, indicating the presence of multiparticle bound states. We apply our method to calculate results for the time evolution of correlation functions following a quench to attractive interactions from the ideal-gas ground state, for a system of four particles. As in our previous studies of quenches to repulsive interactions <cit.>, we find that finite-size effects are significant for quenches to weak final interaction strengths. For strong final interaction strengths our results for the time-averaged local second-order correlation function are consistent with the stationary values in the thermodynamic limit calculated in Refs. <cit.>. In contrast to that work, however, our approach allows us to also calculate the time-averaged value of the postquench third-order correlation function, which we find to be dramatically enhanced over the ideal-gas value, implying that three-body recombination losses would be significant in experimental realizations of the quench. Our approach also allows us to calculate the dynamical evolution of correlation functions following the quench, and for a quench to strong attractive interactions we observe behaviour similar to that following a quench to repulsive interactions of the same magnitude, superposed with characteristic contributions of bound states at small interparticle separations.

This paper is organised as follows.
We provide a brief summary of the Lieb–Liniger model in Sec. <ref>. We also discuss the complications that arise in numerically solving the Bethe equations due to the appearance of complex Bethe rapidities, and explain how we manage these. In Sec. <ref>, we calculate ground-state correlation functions for up to seven particles in the vicinity of the mean-field crossover point where solitonic correlations emerge. We also present results for the ground state of four particles subject to strongly attractive interactions. In Sec. <ref>, we compute representative nonequilibrium correlation functions following quenches of the interaction strength from zero to attractive values for up to four particles. We discuss quenches to the weakly interacting regime in the vicinity of the mean-field crossover, as well as those to the more strongly interacting regime. We also compare the nonequilibrium dynamics to that following an interaction quench to repulsive interactions of the same magnitude. In Sec. <ref> we present results for time-averaged correlation functions, before concluding in Sec. <ref>.

§ METHODOLOGY

§.§ Lieb–Liniger model

The Lieb–Liniger model <cit.> describes a system of N indistinguishable bosons subject to a delta-function interaction potential in a one-dimensional geometry. The Hamiltonian is

Ĥ = - ∑_i=1^N ∂^2/∂x^2_i + 2c ∑_i<j^N δ(x_i - x_j),

where c is the interaction strength, and we have set ħ=1 and the particle mass m=1/2. The interactions are attractive for c<0 and repulsive for c>0. The eigenstates of Hamiltonian (<ref>) in the ordered spatial permutation sector R_p (x_1 ≤ x_2 ≤ … ≤ x_N) are given by the coordinate Bethe ansatz in the form <cit.>

ζ_{λ_j}({x_i}) ≡ ⟨{x_i}|{λ_j}⟩ = A_{λ_j} ∑_σ (-1)^[σ] a(σ) exp[ i ∑_m=1^N x_m λ_σ(m) ],

where the sum runs over all permutations σ = {σ(1), σ(2), ⋯, σ(N)} of {1, 2, ⋯, N}, (-1)^[σ] denotes the sign of the permutation, and the scattering factors are

a(σ) = ∏_k>l ( λ_σ(k) - λ_σ(l) - ic ).

The quantities λ_j are termed the rapidities, or quasimomenta, of the Bethe-ansatz wave function. The normalization constant A_{λ_j} is given by <cit.>

A_{λ_j} = [ N! det{M_{λ_j}} ∏_k>l [(λ_k-λ_l)^2 + c^2] ]^-1/2,

where M_{λ_j} is the N × N matrix with elements

[M_{λ_j}]_kl = δ_kl ( L + ∑_m=1^N 2c/[c^2 + (λ_k-λ_m)^2] ) - 2c/[c^2 + (λ_k-λ_l)^2].

Imposing periodic boundary conditions leads to a set of N equations for the N rapidities, the so-called Bethe equations

e^{iLλ_j} = ∏_l≠j [(λ_j-λ_l) + ic] / [(λ_j-λ_l) - ic],

where L is the length of the periodic geometry. The rapidities determine the total momentum P = ∑_j=1^N λ_j and energy E = ∑_j=1^N λ_j^2 of the system in each eigenstate. The ground state of the system for attractive interactions is an N-body bound state (the finite-system analogue of the McGuire cluster state <cit.>) and has purely imaginary rapidities <cit.>. All eigenstates corresponding to bound states have some Bethe rapidities with imaginary components. This is in contrast to the repulsively interacting system (c>0), for which the solutions {λ_j} to the Bethe equations (<ref>) are purely real. These are usually parameterized by a set of quantum numbers {m_j}, which for c→+∞ are proportional to {λ_j}; see, e.g., Ref. <cit.>.
For the attractively interacting gas, it is more convenient to enumerate the solutions of the Bethe equations (<ref>) by their corresponding N ideal-gas (i.e., c=0) quantum numbers {n_j}, where k_j = 2π n_j/L are the quantized free single-particle momenta in the finite ring and n_j is an integer[The energy of an eigenstate with {n_j} for c→0^- connects to the energy of the eigenstate with {m_j^(0) + n_j} for c→0^+. Here, {m_j^(0)} are the quantum numbers of the “Fermi-sea” ground state for c>0. In the remainder of this article, we will label states of the repulsive gas by their reduced quantum numbers {n_j} ≡ {m_j - m_j^(0)}.] <cit.>. In this paper, in which we consider ground-state correlations and quenches from the ideal-gas ground state, we only need to consider eigenstates that are parity invariant, i.e., those for which we can order the n_j such that n_j = -n_N+1-j for j∈[1,N]. Thus, we can label all eigenstates by ⌊N/2⌋ quantum numbers {n_j}, where ⌊…⌋ is the floor function. By convention we choose these numbers to be the nonnegative values {n_j}, which we regard as sorted in descending order (for odd N, n_(N+1)/2 = 0).

Our results depend explicitly on the number of particles N in our system, though the extent L of our periodic geometry, and consequently the density n ≡ N/L of the gas, is arbitrary. We follow Refs. <cit.> in absorbing the density into the dimensionless interaction-strength parameter γ = c/n. Our finite-sized system is then identified by the specification of both γ and N. The Fermi momentum k_F = (2π/L)(N-1)/2, which is the magnitude of the largest rapidity in the ground state in the Tonks–Girardeau limit of infinitely strong repulsive interactions <cit.>, is a convenient unit of inverse length, and so we specify lengths in units of k_F^-1, energies in units of k_F^2, and times in units of k_F^-2.

§.§ Correlation functions

The static and dynamic behaviour of the Lieb–Liniger gas can be characterized by the normalized m^th-order correlation functions

g^(m)(x_1, …, x_m, x_1', …, x_m'; t) ≡ ⟨Ψ̂^†(x_1) ⋯ Ψ̂^†(x_m) Ψ̂(x_1') ⋯ Ψ̂(x_m')⟩ / [⟨n̂(x_1)⟩ ⋯ ⟨n̂(x_m)⟩ ⟨n̂(x_1')⟩ ⋯ ⟨n̂(x_m')⟩]^1/2,

where Ψ̂^(†)(x) is the annihilation (creation) operator for the Bose field, n̂(x) ≡ Ψ̂^†(x)Ψ̂(x) is the particle-density operator, and ⟨⋯⟩ ≡ Tr{ρ̂(t)⋯} denotes an expectation value with respect to a Schrödinger-picture density matrix ρ̂(t). Due to the translational invariance of the system the density is constant [i.e., ⟨n̂(x)⟩ ≡ n] and the correlation functions are invariant under global coordinate shifts x → x + d. Without loss of generality, we therefore set one of the spatial coordinates to zero and focus on the first-order correlation function g^(1)(x) ≡ g^(1)(0,x), the second-order correlation function g^(2)(x) ≡ g^(2)(0,x,x,0), and the local third-order correlation g^(3)(0) ≡ ⟨[Ψ̂^†(0)]^3 [Ψ̂(0)]^3⟩/n^3. We also consider the momentum distribution

n(k) = n ∫_0^L dx e^{-ikx} g^(1)(x),

which we evaluate at the discrete momenta k_j. For a system in a pure state |ψ(t)⟩, Eq. (<ref>) reads

g^(m)(x_1, …, x_m, x_1', …, x_m'; t) = (1/n^m) ⟨ψ(t)| Ψ̂^†(x_1) ⋯ Ψ̂^†(x_m) Ψ̂(x_1') ⋯ Ψ̂(x_m') |ψ(t)⟩
 = [N!/(N-m)!] (1/n^m) ∫_0^L dx_m+1 ⋯ dx_N ψ^*(x_1,…,x_m,x_m+1,…,x_N,t) ψ(x_1',…,x_m',x_m+1,…,x_N,t).

By expressing the wave function ψ({x_j},t) in terms of Lieb–Liniger eigenstates ζ_{λ_j}({x_i}) [Eq. (<ref>)], we can calculate the integrals in Eq. (<ref>) semi-analytically with the methodology of Ref. <cit.>.
This approach also allows for the evaluation of the overlaps of the initial state with Lieb–Liniger eigenstates necessary for our nonequilibrium calculations in Sec. <ref>[We note that direct evaluation of the normalization constant A_{λ_j} via Eq. (<ref>) is susceptible to catastrophic cancellations similar to those discussed in Appendix <ref>. In practice, we therefore obtain the constants A_{λ_j} by evaluating the self-overlaps of unnormalized Bethe eigenfunctions using the methodology of Ref. <cit.>.]. In Sec. <ref>, we consider the relaxed state of the system, as described by the diagonal-ensemble <cit.> density matrix ρ̂_DE ≡ ∑_{λ_j} ρ^DE_{λ_j} |{λ_j}⟩⟨{λ_j}|, for which Eq. (<ref>) reads

g_DE^(m)(x_1, …, x_m, x_1', …, x_m') = (1/n^m) Tr{ρ̂_DE Ψ̂^†(x_1) ⋯ Ψ̂^†(x_m) Ψ̂(x_1') ⋯ Ψ̂(x_m')}
 = (1/n^m) ∑_{λ_j} ρ^DE_{λ_j} ⟨{λ_j}| Ψ̂^†(x_1) ⋯ Ψ̂^†(x_m) Ψ̂(x_1') ⋯ Ψ̂(x_m') |{λ_j}⟩
 = [N!/(N-m)!] (1/n^m) ∑_{λ_j} ρ^DE_{λ_j} ∫_0^L dx_m+1 ⋯ dx_N ζ_{λ_j}^*(x_1,…,x_m,x_m+1,…,x_N) ζ_{λ_j}(x_1',…,x_m',x_m+1,…,x_N).

§.§ Numerical considerations

For repulsive interactions the solutions to the Bethe equations (<ref>) are characterized by purely real rapidities {λ_j}, and finding these numerically is relatively straightforward — see, e.g., Ref. <cit.>. However, for attractive interactions solutions with complex rapidities are possible, and the associated Yang–Yang action <cit.> of the problem is nonconvex (see, e.g., Ref. <cit.>), which significantly complicates the root-finding procedure. To find the rapidities for attractive interactions, we start our root-finding routine close to γ=0. Here the rapidities {λ_j} are close to the free-particle momenta corresponding to {n_j}, and these can be used as an initial guess for a Newton-method root finder. We then decrease γ in small steps, using linear extrapolation of the previous solutions to form initial guesses for the rapidities at each new value of γ. We have found that this procedure gives good convergence of the rapidities to machine precision.

Eigenstates with complex rapidities arrange themselves in so-called string patterns in the complex plane for large values of |c|L ≡ N|γ|, with deviations from these strings exponentially small in the system length L <cit.>. For these states, some of the scattering factors a(σ) in Eq. (<ref>) become increasingly smaller with increasing |γ|, cancelling the extremely large exponential factor to give a finite result. Naïve evaluation of the wave function would therefore lead to numerical inaccuracies due to catastrophic cancellations as soon as the string deviations shrink to the order of machine precision. This problem can be overcome by using the Bethe equations (<ref>) to rewrite the problematic factors in a(σ) in terms of exponentials, thereby rendering the expressions more amenable to numerical calculation, as we discuss in Appendix <ref>. For N=4, this enables us to calculate correlation functions for attractive interaction-strength values γ ≥ -40 using standard double-precision floating-point arithmetic, with the exception of a single eigenstate that we treat with high-precision arithmetic, as we discuss in Appendix <ref>. For larger values of |γ|, the bound states become increasingly localized, leading to factors in Eq. (<ref>) that are too large to be represented with double-precision floating-point arithmetic.
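A minimal numerical sketch of this continuation strategy is given below, with SciPy's generic root finder standing in for the Newton iteration used in our calculations. The residuals are written directly in the exponential form of the Bethe equations, so the sketch inherits the same overflow problems at strongly negative γ that are discussed above; the quantum numbers and parameters in the usage line are illustrative only.

```python
import numpy as np
from scipy.optimize import fsolve

def bethe_residuals(lam_parts, c, L):
    """Residuals of exp(i L lam_j) = prod_{l != j}
    (lam_j - lam_l + i c)/(lam_j - lam_l - i c), with the N complex
    rapidities packed as 2N reals [Re, Im, Re, Im, ...]."""
    lam = lam_parts[::2] + 1j * lam_parts[1::2]
    res = np.empty_like(lam)
    for j, lj in enumerate(lam):
        rhs = np.prod([(lj - ll + 1j * c) / (lj - ll - 1j * c)
                       for l, ll in enumerate(lam) if l != j])
        # exp(i L lam_j) overflows for strongly bound states -- the same
        # double-precision limitation described in the text.
        res[j] = np.exp(1j * L * lj) - rhs
    return np.column_stack([res.real, res.imag]).ravel()

def rapidities(n_j, c_final, L, steps=400):
    """Continuation in c from the free momenta k_j = 2*pi*n_j/L at c = 0
    down to c_final < 0, linearly extrapolating the previous two
    solutions to build each new initial guess."""
    lam = 2 * np.pi * np.asarray(n_j, float) / L      # c = 0 solution
    guess = np.column_stack([lam, np.zeros_like(lam)]).ravel()
    prev = guess.copy()
    for c in np.linspace(c_final / steps, c_final, steps):
        new = fsolve(bethe_residuals, 2 * guess - prev, args=(c, L))
        prev, guess = guess, new
    return guess[::2] + 1j * guess[1::2]

# Illustrative scattering state for N = 4 on a ring of circumference 2*pi.
# For bound states (repeated n_j) the degenerate all-real starting guess
# should be seeded with small imaginary offsets to break the symmetry.
print(rapidities([-2, -1, 1, 2], c_final=-2.0, L=2 * np.pi))
```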
We could in principle treat systems with γ < -40 through extensive use of high-precision arithmetic, but find that the regime γ ≥ -40 to which we restrict our analysis reveals many important features of the physics of the attractively interacting system.

§ GROUND-STATE CORRELATION FUNCTIONS

The ground-state correlation functions of the one-dimensional Bose gas with attractive interactions have so far been investigated both in the mean-field regime <cit.> and with beyond-mean-field methodologies <cit.>. The corresponding Bose–Hubbard lattice approximation was considered in Ref. <cit.>. Systems in the limit L→∞ were studied in Refs. <cit.>, while in Ref. <cit.> correlation functions for up to N=4 particles under hard-wall boundary conditions were obtained via the coordinate Bethe ansatz. References <cit.> used the algebraic Bethe ansatz to calculate the dynamic structure factor to first order in the string deviations under periodic boundary conditions. Piroli and Calabrese recently computed the local two- and three-body correlations in the limit where the interaction strength goes to zero as the system size increases at fixed particle density <cit.>. Here we compute exact correlation functions for a finite system of length L with periodic boundary conditions and compare them with the predictions of mean-field theory, first for N=7 particles in the vicinity of the uniform-density to bright-soliton crossover, -0.7 ≤ γ ≤ 0, before considering more strongly attractive systems of N=4 particles with -40 ≤ γ ≤ -2.

§.§ Correlations near the crossover

In Fig. <ref> we plot the first- and second-order correlation functions of the ground state for N=7 particles for a range of γ. Figure <ref>(a) shows the first-order correlation g^(1)(x) in the spatial domain. For γ=-0.1 (red dashed line), the proximity to the noninteracting gas results in a nearly constant g^(1)(x). For more attractive values of γ, g^(1)(x) begins to decay towards zero at larger separations x. For γ=-0.7 (pink dot-dot-dashed line), g^(1)(x) comes close to zero for x = 3π k_F^-1, which corresponds to x=L/2 for N=7. [Due to the periodic nature of our geometry, g^(1)(x) is symmetric around x=L/2, and we therefore only show g^(1)(x) up to this point.]

Mean-field theory predicts a crossover from a uniform mean-field wave function to a localized bright-soliton state at an interaction strength γ_crit = -π^2/N^2 ≃ -0.201 <cit.>. In our exact quantum-mechanical treatment of the translationally invariant (and particle-number conserving) system, the density is necessarily constant. However, a signature of the bright-soliton-like state can be found in the first-order correlation function. In the finite-sized system the crossover is broad, but there is clearly a significant change in g^(1)(x) between γ=-0.1 [red dashed line in Fig. <ref>(a)] and γ=-0.3 (blue dot-dashed line). In the mean-field description, the many-body wave function is approximated by a translationally symmetrized Hartree–Fock product of single-particle wave functions <cit.>.
In this approximation correlation functions for the small system sizes we consider here are comparatively straightforward to compute numerically; see Appendix <ref> for details. Whereas the mean-field analysis predicts a sharp transition to the localized regime at the threshold interaction strength, the inclusion of quantum fluctuations leads to a smooth crossover between the delocalized and localized regimes in a system of finite N <cit.>.

To characterize the breadth of the crossover in our system, we calculate the single-particle entanglement entropy, i.e., the von Neumann entropy S = -Tr[ρ^(1) log(ρ^(1))] of the single-particle density matrix ρ^(1)(x,x') = n g^(1)(x,x') <cit.>. In translationally invariant systems S = -∑_j [n(k_j)/N] log[n(k_j)/N], where the n(k_j) are the momentum-mode populations. In the (symmetrized) mean-field description, the ground state for γ > γ_crit is a pure product state, and hence S=0. For γ < γ_crit, the ground state is a superposition of bright solitons, and S>0 <cit.>. This can indeed be seen in the inset of Fig. <ref>(d), where we plot the single-particle entanglement entropy of the exact solution (black line) and of the mean-field solution (grey line) for N=7 particles. The mean-field entropy S(γ) exhibits a slope discontinuity at the crossover point, whereas the von Neumann entropy of the exact ground state (black line) varies smoothly.

For γ > γ_crit the mean-field wave function is uniform, leading to a constant g^(1)(x). In Fig. <ref>(a) we compare our exact results to the mean-field solution just on the localized side of the crossover at γ=-0.21 (green crosses), and find that the exact many-body solution (green dotted line) is slightly more localized. By contrast, for γ=-0.3, i.e., further from the crossover point, the mean-field solution (blue diamonds) is more localized than the exact solution (blue dot-dashed line). For γ=-0.7 the mean-field solution (pink triangles) and the exact g^(1)(x) (pink dot-dot-dashed line) are reasonably similar, though the mean-field solution is again somewhat more localized than the exact solution. We note that this behaviour is consistent with that of the entanglement entropy [inset to Fig. <ref>(d)], which is smaller for the exact solution than for the mean-field approximation for |γ| ≳ 0.23. By contrast, at weaker interaction strengths finite-size rounding of the crossover yields an entropy for the exact system larger than the mean-field value.

In Fig. <ref>(c), we plot the momentum distribution n(k) corresponding to the first-order correlations shown in Fig. <ref>(a). [For our system n(k_j,t) ≡ n(-k_j,t), and hence we only plot positive momenta.] We note that for all interaction strengths we consider here, the exact momentum distributions exhibit a power-law decay n(k) ∝ k^-4 at high momenta — the universal large-momentum behaviour for systems with short-range interactions <cit.>. For the case of γ=-0.1 (red empty circles), interactions are sufficiently weak that no deviation from this scaling is visible at the smallest nonzero momenta k_j resolvable in our finite geometry. By contrast, for γ=-0.21 (green triangles), less trivial behaviour of the momentum distribution can be seen, with the lowest nonzero momentum modes deviating visibly from the ∝ k^-4 scaling. As |γ| increases, the deviations from this scaling extend to higher momenta, and a broad hump in the momentum distribution develops. This broadening can be more clearly seen in Fig.
<ref>(d), where we plot the momentum distribution for low momenta k ≤ 1 k_F on a linear scale. For γ=-0.1 (red empty circles), the zero-momentum occupancy is close to its ideal-gas value of n(k=0) = N. The zero-momentum mode occupation decreases with increasing |γ|, and much of this population is redistributed to the first few nonzero momentum modes, resulting in, e.g., a broad distribution n(k) for γ=-0.7 (pink empty squares).

The ground-state mean-field momentum distributions in Fig. <ref>(c) do not show the ∝ k^-4 scaling for large k — this feature appears with a first-order Bogoliubov analysis <cit.>. For an interaction strength γ=-0.21, i.e., close to the crossover point, the exact n(k) (green dotted line) and the mean-field solution (green crosses) are clearly different away from k=0. For larger attractive values of γ, however, the two momentum distributions start to agree more closely. For example, from Figs. <ref>(c) and <ref>(d) we observe reasonable agreement between the exact and mean-field results for the lowest three modes at γ=-0.3 (blue diamonds for the mean-field solution, blue dot-dashed line for the exact solution). Even closer agreement is observed for γ=-0.7, where the lowest six modes of the exact solution (pink dot-dot-dashed line) agree well with the mean-field solution (pink triangles), before the ∝ k^-4 tail of the exact momentum distribution takes over.

In Fig. <ref>(b), we plot the second-order correlation g^(2)(x) for the same values of γ as before. For γ=-0.1 (red dashed line), g^(2)(x) is close to the ideal-gas value g^(2)_γ=0(x) = 1 - 1/N (horizontal grey line). For γ=-0.21 (green dotted line), g^(2)(x) is increased over the ideal-gas value at distances x ≲ 1.3π k_F^-1 and correspondingly decreased at larger distances. This behaviour is even more pronounced for γ=-0.3 (blue dot-dashed line), and the trend continues for larger attractive values of γ, for which there is significant bunching of particles. Comparing the exact results to the mean-field solutions, we again observe a clear difference at γ=-0.21, where the exact solution (green dotted line) is more localized than the mean-field solution (green crosses). For γ=-0.3, the exact solution (blue dot-dashed line) has a slightly increased value at zero separation compared to the mean-field solution (blue diamonds), but at intermediate separations the latter is marginally broader. For γ=-0.7, the local value g^(2)(0) of the exact solution (pink dot-dot-dashed line) is again slightly larger than the mean-field value (pink triangles). At separations x ≳ π/4 k_F^-1, the mean-field and exact distributions show good agreement.

§.§ Correlations for strongly interacting systems

In Fig. <ref>, we plot the first- and second-order correlation functions of the ground state for N=4 particles and for a larger range of values of the interaction strength, -40 ≤ γ ≤ -2. For N=4, the mean-field critical interaction strength is γ_crit ≃ -0.617, and all ground states we consider here are therefore well in the localized regime. Figure <ref>(a) indicates the first-order correlation function g^(1)(x), which shows that the soliton-like state becomes increasingly tightly localized with increasing |γ|. This can also be observed in momentum space, Fig.
<ref>(c), where the corresponding momentum distributions n(k) become broader with increasing |γ|. We note that the momentum distributions for the most strongly interacting systems considered here are much broader than the “hump” that forms in the ground-state momentum distribution of the repulsive gas in the strongly interacting Tonks limit, which extends to ≃ 2k_F <cit.>. For comparison, we also plot the mean-field correlation functions for γ=-40 in Figs. <ref>(a) and (c) (grey diamonds). The mean-field first-order correlation function is similar to that of the exact solution but slightly more localized, and its momentum distribution is correspondingly somewhat broader than the exact distribution for small values of k. Nevertheless, the two momentum distributions agree well over a wide range of momenta up to k ≃ 30 k_F, where the universal ∝ k^-4 scaling of the exact momentum distribution begins.

Figure <ref>(b) shows the second-order correlation g^(2)(x) for separations up to x = π/4 k_F^-1 (which corresponds to x = L/12 for N=4). We again observe that the system becomes more tightly bound with increasingly attractive interactions. In order to ensure that the form of the correlation function at moderate separations x is visible in this figure, we have limited the extent of the y axis. The maximum value of the second-order correlation function for γ=-40 (solid black line), g^(2)(x=0) = 100, is therefore not shown. The mean-field correlation function for γ=-40 (grey diamonds) shows good agreement with the exact solution, though its value at zero separation, g^(2)_MF(x=0) = 80 (not shown), is reduced compared to that of the exact solution.

Figure <ref>(d) shows the local second- and third-order correlations for a wide range of interaction strengths. For small values of |γ|, these correlations are close to their respective ideal-gas values, g^(2)(0) = 1 - 1/N = 0.75 and g^(3)(0) = N(N-1)(N-2)N^-3 = 0.375 <cit.>. In the vicinity of the mean-field crossover point (indicated by the vertical grey line), both g^(2)(0) and g^(3)(0) begin to increase significantly with increasing |γ|. For larger values of |γ|, we observe a linear scaling of the second-order correlation, g^(2)(0) ∝ -γ, and a quadratic scaling of the third-order correlation, g^(3)(0) ∝ γ^2, both of which we indicate by black dot-dashed lines in Fig. <ref>(d). The former scaling can be understood by noting that the McGuire cluster energy scales as E_G ∝ -n^2γ^2 <cit.>, and that g^(2)_γ(0) = n^-2 N^-1 dE_G(γ)/dγ <cit.>.

In summary, the exact finite-system correlation functions show behaviour consistent with a broad crossover around the mean-field critical value. At stronger interactions, our exact results for small atom numbers are in close agreement with the predictions of mean-field theory.

§ DYNAMICS FOLLOWING AN INTERACTION QUENCH

In this section we investigate the nonequilibrium evolution of the attractively interacting Lieb–Liniger gas following an interaction quench for N=4 particles at time t=0. Initially the system is prepared in the ideal-gas ground state, for which the wave function is constant in space, ψ_0({x_i}) = ⟨{x_i}|ψ_0⟩ = L^-N/2. Formally, the state of the system at time t>0 is given by

|ψ(t)⟩ = ∑_{λ_j} C_{λ_j} e^{-iE_{λ_j}t} |{λ_j}⟩,

where the C_{λ_j} ≡ ⟨{λ_j}|ψ_0⟩ are the overlaps of the initial state with the Lieb–Liniger eigenstates |{λ_j}⟩ at the postquench interaction strength γ, and the E_{λ_j} are the corresponding energies. The evolution of equal-time correlation functions (Sec.
<ref>) is calculated by noting that the time evolution of the expectation value of an arbitrary operator Ô in the time-dependent state |ψ(t)⟩ is given by ⟨Ô(t)⟩ ≡ ⟨ψ(t)|Ô|ψ(t)⟩ = ∑_{λ_j}∑_{λ_j'} C_{λ_j'}^* C_{λ_j} e^i(E_{λ_j'} - E_{λ_j})t ⟨{λ'_j}|Ô|{λ_j}⟩. The matrix elements ⟨{λ'_j}|Ô|{λ_j}⟩ and overlaps C_{λ_j} are calculated with the method described in Ref. <cit.>. Numerically it is necessary to truncate the infinite sum in Eq. (<ref>), and our truncation procedure is analogous to that described in Appendix A of Ref. <cit.>: we include all eigenstates for which the populations |C_{λ_j}|^2 are larger than some threshold value, thereby minimizing the normalization sum-rule violation Δ N = 1 - ∑_{λ_j} |C_{λ_j}|^2 for the corresponding basis size. For calculations of n(k_j, t) and g^(2)(x,t) for interaction-strength quenches to γ=-40 we use a cutoff |C_{λ_j}|^2 ≥ 10^-8, leading to a sum-rule violation of Δ N = 9×10^-6. All other correlation functions are calculated with a more stringent cutoff |C_{λ_j}|^2 ≥ 10^-10, and the sum-rule violations are correspondingly smaller. We have checked that increasing the cutoff does not visibly alter any of our results. §.§ Influence of bound states following a quench Before investigating the detailed nonequilibrium dynamics of the Lieb–Liniger gas following a quench to attractive interactions, we first consider the populations |C_{λ_j}|^2 of the eigenstates of the postquench Hamiltonian, which are constant at all times t>0 [cf. Eq. (<ref>)]. Comparing these populations to those resulting from quenches to repulsive interactions helps provide an understanding of the contribution of bound states to the nonequilibrium dynamics in the attractive case. In Fig. <ref> we plot the populations |C_{λ_j}|^2 of several representative Lieb–Liniger eigenstates following quenches of the interaction strength from zero to a wide range of final interaction strengths γ. [Recall from Sec. <ref> that for N=4 there are two independent n_j to be specified, which we indicate by the legend in Fig. <ref>(b)[Note that for repulsive interactions the quantum-number pairs {n_j} quoted here refer to the “reduced” quantum numbers, i.e., the excitation numbers relative to the Fermi-sea ground state (cf. Sec. <ref>).].] For attractive interactions [Fig. <ref>(a)] several eigenstates containing bound states have significant populations for small values of |γ|≲ 5. (Note that the number of particles in the bound state can be inferred from the distribution of the rapidities in the complex plane.) The populations of the ground state {n_j} = {0,0} (red solid line), which is a four-particle bound state, and the three-particle bound state {n_j} = {1,0} (green dotted line) are dominant for quenches to γ≳ -4. However, their populations decrease rapidly with increasing absolute interaction strength beyond |γ| = 4. At intermediate interaction strengths γ≃ -5, two-body bound states start to dominate the populations [e.g., the states with {n_j}={2,0} (blue dot-dashed line) and {n_j}={1,1} (pink dot-dot-dashed line)]. For increasingly attractive values of γ, the populations of gas-like states with no bound-state component grow [e.g., {n_j}={3,1} (black solid line) and {n_j}={4,1} (pink dotted line)]. Indeed, at γ≃ -24, the population of the super-Tonks state {n_j}={3,1} — the lowest-energy gas-like state at strong interactions — begins to dominate.
However, the two-body bound state with {n_j}={2,0} (blue dot-dashed line) still has a significant population in the strongly interacting regime[We note that at γ=-40 this state has an energy of E=-143.9 k_F^2, which is close to the energy of the two-particle McGuire cluster state with E=-144.1 k_F^2 <cit.>.]. Consequently, we expect bound states to influence the dynamical evolution of correlation functions following a quench from the ideal gas to all attractive interaction strengths that we consider. Comparing the populations of eigenstates for attractive postquench interactions to those for repulsive interactions, Fig. <ref>(b), we can see that there is significantly less structure in the latter, which are all gas-like. We observe that the populations of excited gas-like eigenstates increase monotonically with increasing |γ| for both repulsive and attractive interactions, whereas the results of Fig. <ref>(a) suggest that the populations of the eigenstates containing bound states all eventually decrease as γ→ -∞. We note that although scattering states of the attractive gas connect adiabatically to states of the repulsive gas in the limit γ→±∞ <cit.>, the quantum-number labels of the states differ on either side of the infinite-interaction-strength limit. For example, for N=4 particles, the super-Tonks state with {n_j} = {3,1} connects on to the ground state for repulsive interactions, {n_j} = {0,0}. To better understand the eigenstate contributions to the nonequilibrium dynamics following a quench to attractive interactions, we focus on quenches of N=4 particles from the ideal-gas ground state to attractive and repulsive interactions with γ=±40, and plot in Fig. <ref> the populations |C_{λ_j}|^2 of the contributing eigenstates against their energies E_{λ_j}. We see that there are additional families of populated states for the attractive gas (sequences of blue crosses that extend to negative energies) that are not present for the repulsive gas (red circles). These are due to four different types of contributing bound states, which we now describe. The first two types of bound states are four-body and three-body bound states, and each of these types contains only a single populated state. These are, respectively, the ground state {n_j} = {0,0} at E≃-1441 k_F^2 with |C_0|^2 ≃ 10^-5 and the first parity-invariant excited state {n_j} = {1,0} at E≃-576 k_F^2 with |C_1|^2 ≃ 3.7 × 10^-3. We note that the parity invariance of eigenstates for quenches from the initial ideal gas <cit.> restricts the appearance of bound states with more than two bound particles to only these two states. The third type is represented by the eigenstate with {n_j} = {2,0}, which has two bound particles and two free particles, and is the first in a family of similar states {2+l,0} (l a nonnegative integer) whose populations decrease gradually with increasing l. The fourth type is represented by the eigenstate with {n_j} = {1,1}, which contains two two-particle bound states, and is the first in a family with decreasing populations for higher excitations which alternate between the quantum numbers {1+l,1+l} and {1+l,l}, with l a positive integer. For larger l, the two two-body bound states have higher “centre-of-mass” momenta with opposite sign (recall that only eigenstates with total momentum P=0 have nonzero occupations following the quench), and for l>12 the corresponding positive centre-of-mass energy of the pairs exceeds their binding energy. We can see from Fig.
<ref> that the distributions of populations over gas-like eigenstates are similar for quenches to γ = ± 40, aside from a shift in energy and a small decrease in populations for the attractive gas due to the appearance of the additional bound states. In particular, the number of eigenstates with populations |C_{λ_j}|^2 ≥ 10^-10 is 7815 (7462) for the attractive (repulsive) gas. The shift in energy can be explained by noting that for γ=±40, the system is in the strongly interacting regime and the Bethe rapidities of scattering states (i.e., states with no bound particles) can be obtained by a strong-coupling expansion around the Tonks–Girardeau limit of infinitely strong interactions (see, e.g., Ref. <cit.>). This yields λ_j ≃ (1-2/γ) k_j, where the k_j are the Tonks–Girardeau values, implying opposite energy shifts in the attractive and repulsive cases. §.§ Dynamics of local correlations We now consider the nonequilibrium dynamics following the quench. In Fig. <ref>(a) we plot the local second-order correlation g^(2)(x=0,t) for N=4 particles following a quench from γ=0 to four representative final interaction strengths. Initially, g^(2)(0,t=0)=1-1/N = 0.75 (cf. Sec. <ref>). For a quench to γ=-0.5 (pink dot-dashed line), g^(2)(0,t) shows nearly monochromatic oscillatory behaviour. This is similar to the behaviour following quenches to small repulsive interaction strengths analyzed in Ref. <cit.>. Because the difference between the postquench energy E ≡ ⟨ψ(0^+)|Ĥ|ψ(0^+)⟩ = (N-1)n^2γ <cit.> and the ground-state energy of the system is small compared to the finite-size energy gap to the first (parity-invariant) excited state, the ensuing dynamics are dominated by these two states, and the energy difference between them determines the dominant frequency of the oscillations. Quenches to more attractive values of γ show the generic behaviour of an initially rising g^(2)(0,t) that eventually fluctuates about a seemingly well-defined average value. The frequencies of the oscillations are determined by the energy differences between the Lieb–Liniger eigenstates with the largest populations. For example, for γ=-40 (solid red line), the postquench wave function is dominated by the super-Tonks state and the first two-body bound state, cf. Fig. <ref>, and the dominant frequency in the oscillations at early times matches the energy difference between these two eigenstates. At later times, the shape of g^(2)(0,t) is more irregular, but the large oscillations due to the two dominant eigenstates persist. In Fig. <ref>(b) we plot the local third-order correlation g^(3)(x=0,t) for N=4 particles following a quench from γ=0 to the same four final interaction strengths as before. Initially, g^(3)(0,t=0)=N(N-1)(N-2)N^-3 = 0.375 (see Sec. <ref>). For small postquench interaction strengths, γ=-0.5 (pink dot-dashed line) and γ=-2 (blue dashed line), the evolution is similar to that of g^(2)(x=0,t) for the same interaction strengths. For larger attractive values of the postquench interaction strength, on the other hand, the shape of g^(3)(x=0,t) is more regular compared to g^(2)(x=0,t), reflecting the fact that only one three-body bound state contributes to the postquench wave function, whereas multiple states containing bound pairs are present. Indeed, for γ=-10 (green dotted line) and γ=-40 (solid red line), g^(3)(0,t) is dominated by a single frequency, given by the energy difference between the three-body bound state {n_j}={1,0} and the predominant two-body bound state {n_j}={2,0}.
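Time traces such as these can be reconstructed directly from the truncated spectral sum introduced above. The following is a minimal numerical sketch of that reconstruction; the arrays C, E, and O are placeholders for the overlaps, eigenstate energies, and operator matrix elements (which in our calculations follow from the coordinate Bethe ansatz method of Ref. <cit.>), and the toy two-state values below are purely illustrative.

```python
import numpy as np

def truncate(C, threshold=1e-10):
    """Keep eigenstates with population |C_n|^2 >= threshold and report
    the normalization sum-rule violation Delta N = 1 - sum |C_n|^2."""
    keep = np.abs(C)**2 >= threshold
    return keep, 1.0 - np.sum(np.abs(C[keep])**2)

def expectation_vs_time(C, E, O, times):
    """<O(t)> from the double spectral sum over the truncated basis,
    sum_{m,n} C_m^* C_n exp(i(E_m - E_n) t) O_mn, evaluated as a
    matrix element in the time-evolved coefficient vector."""
    out = np.empty(len(times))
    for i, t in enumerate(times):
        psi_t = C * np.exp(-1j * E * t)      # coefficients C_n e^{-i E_n t}
        out[i] = np.real(np.conj(psi_t) @ O @ psi_t)
    return out

# Toy illustration: two dominant states produce oscillations at their
# energy difference, as for the super-Tonks/bound-state pair in the text.
C = np.array([0.9, 0.436], dtype=complex)    # approximately normalized overlaps
E = np.array([10.0, -144.0])                 # eigenenergies (units of k_F^2)
O = np.array([[1.0, 0.5], [0.5, 3.0]])       # Hermitian matrix elements
keep, dN = truncate(C)
print("sum-rule violation:", dN)
print(expectation_vs_time(C[keep], E[keep], O, times=[0.0, 0.01, 0.02]))
```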
The initial rise of both g^(2)(0,t) and g^(3)(0,t) terminates on an increasingly short time scale with increasingly attractive postquench interaction strength. This time scale corresponds to about half the period of the ensuing oscillations and is proportional to γ^-2, corresponding to the scaling of the energy E_{λ_j} ∝ γ^2 of eigenstates containing bound states <cit.>. For quenches from the ideal-gas initial state, we find that the population of the bound states leads to significantly increased values of both g^(2)(0,t) and g^(3)(0,t) — in stark contrast to the decay of the same quantities following quenches to repulsive interactions <cit.> due to the “fermionization” of the system. Such large values of these local correlation functions would lead to strong particle losses in experiments <cit.>. This is in contrast to the observations in the quench experiments performed in Ref. <cit.>, where the quasi-one-dimensional gas was quenched from strongly repulsive interactions to strongly attractive interactions, and no significant losses were observed. In such a scenario the overlap of the initial strongly repulsive ground state with the super-Tonks state is dominant, and the bound states thus acquire only small populations in the course of the quench <cit.>. To investigate the influence of the initial state on the populations of the two most dominant postquench eigenstates (cf. Fig. <ref>), we find the (correlated) ground state |ψ_0⟩ of the system at γ_0>0 and then compute the populations of the eigenstates following a quench to γ=-40. In Fig. <ref>, we plot the populations |⟨{2,0}|ψ_0⟩|^2 and |⟨{3,1}|ψ_0⟩|^2 of the aforementioned two-body bound state and the super-Tonks state, respectively, for a wide range of initial values γ_0. Starting in the strongly interacting regime γ_0 = 10^3, the overlap between the initial (Tonks–Girardeau) state and the super-Tonks state is close to unity. As γ_0 is decreased, the population of the super-Tonks state decreases, while the population of the bound state increases. At γ_0 ≃ 1, the two populations are already near their respective values following a quench from the ideal-gas initial state (indicated by black arrows on the left-hand side). The results of Fig. <ref> suggest that the postquench values of g^(2)(0,t) and g^(3)(0,t) would be much smaller for quenches from initial values of γ_0 ≳ 10 compared to those from the noninteracting initial state. §.§ Dynamics of the momentum distribution We now turn our attention to the postquench dynamics of the momentum distribution. Quenches from the ideal-gas ground state with N=4 particles to three different values of γ are compared in Fig. <ref>. In each case we plot the time evolution of the momentum-mode occupations n(k_j,t) [cf. Eq. (<ref>)] for the first six nonnegative momentum modes k_j (j = 0,1,…,5). Initially, all particles occupy the zero-momentum single-particle orbital, n(k_j,t=0) = N δ_j0. At times t>0, the interaction quench leads to a redistribution of this population over other single-particle modes. At early times, all nonzero modes rise at the same rate, independent of k, due to the local nature of the interaction potential, which corresponds to a momentum-independent coupling <cit.>.
This applies to all postquench interaction strengths γ, but the time at which deviations from this behaviour first appear depends on γ. All quenches show the same generic behaviour — the momentum-mode populations eventually level off and fluctuate about a well-defined value. These populations undergo oscillations with frequencies determined by the energy differences between the dominant Lieb–Liniger eigenstates. For example, for the γ = -40 case of Fig. <ref>(c) each mode exhibits fast oscillations at a single frequency given by the energy difference between the super-Tonks state {n_j}={3,1} and the two-body bound state {n_j}={2,0}, superposed with some irregular envelope function. In Fig. <ref>, we compare n(k=0,t) for quenches from the ideal gas to repulsive and attractive interaction strengths of the same magnitude. In Fig. <ref>(a), we plot the time evolution of the zero-momentum mode occupation n(0,t) for quenches from γ=0 to γ=-10 (solid red line) and γ=10 (blue dashed line). The envelope of n(0,t) for attractive interactions is similar to the shape of n(0,t) for repulsive interactions. On top of this envelope for quenches to attractive interactions, n(0,t) shows large regular oscillations. This also applies for quenches to γ=±40, Fig. <ref>(b), but the oscillations for quenches to γ=-40 (solid red line) are faster than for quenches to γ=-10. The correspondence between n(0,t) following a quench to strong attractive interactions and that following a quench to equally strong repulsive interactions reflects the fact that the two postquench wave functions are similar in their composition, aside from the additional presence of two-body bound states for attractive interactions, as illustrated in Fig. <ref>. We also observe a partial revival in n(0,t) for quenches to γ=±40. This revival is due to the proximity of the system at γ=40 to the Tonks–Girardeau limit of infinitely strong interactions, where the spectrum of the repulsive Lieb–Liniger model is identical to that of free fermions <cit.>. This also applies to the scattering states of the attractive system. For γ=±∞, this would lead to recurrences at integer multiples of t_rev = 3.5 k_F^-2 <cit.> due to the commensurability of eigenstate energies <cit.>. However, for the finite interaction strengths considered here, the revival time is shifted to a later time t_rev ≃ 3.9 k_F^-2 for repulsive interactions <cit.> and to an earlier time t_rev ≃ 3.2 k_F^-2 for attractive interactions, due to the finite-coupling corrections to the Bethe rapidities discussed in Sec. <ref>. §.§ Dynamics of nonlocal pair correlations We now consider the evolution of the full nonlocal second-order correlation g^(2)(x,t). In Fig. <ref> we plot the behaviour of this quantity for an interaction quench from zero to γ=-40 for N=4 particles. Figure <ref>(a) shows g^(2)(x,t) at four representative times t. Initially, g^(2)(x,0)=1-1/N (horizontal line). At t=0.01 k_F^-2 (red dashed line), the local value is already greatly enhanced, g^(2)(0,t=0.01 k_F^-2)≃ 3.5, cf. Fig. <ref>(a). [The scale of the y axis is chosen so that the long-range features of g^(2)(x) are visible, and the large values for x≲ 0.02 × (2π k_F^-1) are therefore cut off.] In addition to the central peak, at separations x≃ 0.1 × (2π k_F^-1) a secondary peak emerges, while at larger distances g^(2)(x) exhibits a decaying oscillatory structure. As time progresses, this secondary peak propagates away from the origin and broadens, as can be seen at, e.g.,
t=0.1 k_F^-2 (green dotted line) and t=0.25 k_F^-2 (blue dot-dashed line). The build-up of this secondary correlation peak and its propagation through the system can be more clearly seen in Fig. <ref>(b), where we plot the time evolution of g^(2)(x,t) up to t=0.25 k_F^-2. The propagation of this peak is consistent with x(t) ∝ t^1/2, which was also observed for quenches from the same initial state to strongly repulsive interactions <cit.>. (Note that the color scale is chosen so that the long-range behaviour is visible, and the local second-order correlation is again not resolved.) Figure <ref>(c) shows g^(2)(x,t) for longer times up to t=4 k_F^-2. The overall structure on this longer time scale is more complicated, with several soliton-like correlation dips propagating through the system <cit.> and a partial revival of g^(2)(x,t=0) at t ≃ 3.2 k_F^-2 [cf. Figs. <ref>(c) and <ref>(b)]. Besides the greatly increased value at small distances, the behaviour of g^(2)(x,t) is strikingly similar to the results obtained in Ref. <cit.> for quenches from the same noninteracting ground state to repulsive final interaction strengths. In summary, quenches from the ideal-gas ground state to attractive values of γ result in the occupation of energy eigenstates containing bound states in addition to the gas-like scattering states of the attractively interacting model, which are analogous to the eigenstates of the repulsively interacting Lieb–Liniger gas. As the magnitude |γ| of the final interaction strength is increased, the postquench occupations of the gas-like excited states approach those of their counterparts following a quench to the corresponding repulsive interaction strength, and the occupations of bound states eventually decrease. However, these bound states significantly influence the dynamics of postquench correlation functions for all final interaction strengths we have considered, causing large oscillations in local correlations and in the occupation of the zero-momentum mode. For large attractive values of γ, bound states are highly localized and thus influence the second-order correlation function only at small separations, whereas at larger separations this function exhibits postquench dynamics similar to those observed following quenches to repulsive interactions <cit.>. § TIME-AVERAGED CORRELATIONS A closed quantum-mechanical system prepared in a pure state will remain in a pure state for all time. However, for a nondegenerate postquench energy spectrum, as is the case here (cf. Refs. <cit.>), the energy eigenstates will dephase, and the time-averaged expectation value of any operator Ô can be expressed in terms of its diagonal matrix elements between energy eigenstates, ⟨Ô⟩_DE = lim_τ→∞ (1/τ) ∫_0^τ dt ⟨ψ(t)|Ô|ψ(t)⟩ = ∑_{λ_j} |C_{λ_j}|^2 ⟨{λ_j}|Ô|{λ_j}⟩. This quantity can be viewed as the expectation value of Ô in the diagonal-ensemble density matrix <cit.> ρ̂_DE = ∑_{λ_j} |C_{λ_j}|^2 |{λ_j}⟩⟨{λ_j}|. We note that in practice the sum in Eq. (<ref>) runs over a finite set of energy eigenstates with populations |C_{λ_j}|^2 exceeding some threshold value. If the expectation value of an operator relaxes at all, it must relax to the corresponding diagonal-ensemble value <cit.>. Although expectation values may exhibit rather large fluctuations around their time-averaged values for system sizes as small as those considered here, in general the relative magnitude of these fluctuations should decrease with increasing system size and vanish in the thermodynamic limit.
However, establishing this behaviour is beyond the scope of the current work and we will simply regard the diagonal ensemble defined by Eq. (<ref>) as the ensemble appropriate to describe the relaxed state of our finite-sized system. In the following we consider the time-averaged properties of the quenched system. §.§ Local correlations In Fig. <ref>(a), we plot the enhancement of the diagonal-ensemble value g^(2)_DE(0) of the local second-order correlation over the initial noninteracting value g^(2)_γ=0(0) of this function following an interaction quench from zero to γ for particle numbers N=2, 3, and 4. For all particle numbers N considered, as |γ| is increased from the ideal-gas limit, g^(2)_DE(0) initially increases rapidly before reaching a local maximum, which occurs at smaller values of |γ| for larger particle numbers N. For N=4 particles (solid blue line) this local maximum in g^(2)_DE(0) occurs at γ = -1 and coincides with the crossing of the population of the three-particle bound state {n_j}={1,0} and that of the ground state [see Fig. <ref>(a)]. The local minimum of g^(2)_DE(0) at γ=-1.5 coincides with the maximum population of this three-particle bound state, and as soon as the population of this state starts to decrease, g^(2)_DE(0) begins to increase monotonically with increasing |γ|. For large attractive values of γ, the local second-order correlation tends to a constant value g^(2)_DE(0)/g^(2)_γ=0(0) ≃ 4, which is much larger than the ideal gas and super-Tonks values <cit.>. The decrease of g^(2)_DE(0) with increasing particle number at fixed large |γ| appears consistent with an approach toward the quench-action thermodynamic-limit strong-coupling value obtained to third order in 1/γ in Refs. <cit.>, indicated by the solid grey line, as N →∞. Using the quench-action approach <cit.> in the thermodynamic limit, Refs. <cit.> found that g^(2)_DE(0)=2 for γ→ 0^-. Our methodology does not recover this result for small values of |γ|, as our small system sizes lead to a finite-size gap for excitations and therefore the energy added by the quench is small in this case. Additionally, eigenstates with more than four bound particles are trivially absent in our calculations, whereas for small postquench values of |γ| they contribute significantly in the analysis of Refs. <cit.>. For larger values of |γ|, however, states with more than two bound particles are strongly suppressed and we expect our results to be less influenced by finite-size effects <cit.>. In Fig. <ref>(b), we plot the enhancement of the diagonal-ensemble value of the local third-order correlation g^(3)_DE(0) over its noninteracting initial value following an interaction quench from zero to γ for particle numbers N=3 and 4. The qualitative behaviour is similar to that of g^(2)_DE(0). For strong interactions, g^(3)_DE(0) also tends to a constant value that is much larger than the initial value. Whether this result persists for larger atom numbers is an important open question, given that large values of g^(3)(0) lead to strong recombination losses in experiments with ultracold gases <cit.>. §.§ Nonlocal correlations In Fig. <ref>(a) we plot the momentum distribution n_DE(k) in the diagonal ensemble for N=4 particles and for several postquench interaction strengths γ. At high momenta and for all interaction strengths γ, n_DE(k) exhibits a scaling of n_DE(k) ∝ k^-4. This behaviour is due to the universal character of short-range two-body interactions <cit.>.
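As a simple consistency check on such distributions, the weight of a ∝ k^-4 tail can be estimated directly from computed mode occupations. The short sketch below does this for synthetic data; the momentum grid, the cutoff k_min, and the tail weight C_true are illustrative choices, not values from our calculations.

```python
import numpy as np

def tail_coefficient(k, nk, k_min):
    """Estimate C in n(k) ~ C k^-4 by averaging k^4 n(k) over
    modes with k >= k_min, assumed to lie deep in the tail."""
    mask = k >= k_min
    return np.mean(k[mask]**4 * nk[mask])

k = np.arange(1.0, 200.0, 2.0)        # momenta in units of k_F
C_true = 0.8
nk = np.exp(-k) + C_true / k**4       # low-k structure plus a k^-4 tail
print(tail_coefficient(k, nk, k_min=30.0))   # recovers ~0.8
```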
For γ=-0.5 (pink squares), the functional form of n_DE(k) is nearly perfectly given by this ∝ k^-4 scaling, and only the three lowest resolvable nonzero momentum modes in our finite periodic system deviate slightly from it. For a quench to γ=-2 (blue filled circles), the low-momentum part of n_DE(k) starts to deviate more strongly from the ∝ k^-4 scaling, and the distribution appears to broaden at low momenta. This low-k “hump” broadens with increasing postquench interaction strength. This behaviour is qualitatively similar to our earlier results for quenches to repulsive values of γ, where an infrared scaling of n_DE(k) ∝ k^-2 extends to larger values of k with increasing γ <cit.>, consistent with the dependence of the populations |C_{λ_j}|^2 on the rapidities {λ_j} and with analytic results for the postquench momentum distribution in the limit of a quench to infinitely strong repulsive interactions <cit.>. From the results presented in Fig. <ref>(a) it is unclear if the emerging hump in the present case of quenches to attractive interactions is consistent with ∝ k^-2 scaling. In Fig. <ref>(b), we plot the second-order correlation function g^(2)_DE(x) in the diagonal ensemble for several postquench interaction strengths γ and compare these to the initial-state form g^(2)(x,t=0)=1-1/N of this function (horizontal line). The first feature we notice is that for all values of the postquench interaction strength, g^(2)_DE(x) is increased at small separations x compared to its initial value [cf. Fig. <ref>(a)]. For the quench to γ=-0.5 (pink dot-dashed line), g^(2)_DE(x) decreases monotonically with increasing x. [Due to the periodic nature of our geometry, correlation functions are symmetric around x=L/2, and we therefore only show g^(2)_DE(x) up to this point.] For γ=-2 (blue dashed line), g^(2)_DE(x) exhibits a local minimum at a finite separation x≃ 0.3 × (2π k_F^-1), before increasing again at larger separations. This behaviour can also be observed for γ=-10 (green dotted line), where the minimum in g^(2)_DE(x) moves to smaller separations x≃ 0.1 × (2π k_F^-1) and becomes more pronounced. This trend continues for quenches to larger attractive values of the interaction strength. For γ=-40 (solid red line), the minimum is located at x≃ 0.03 × (2π k_F^-1) and its magnitude is again decreased compared to the quench to γ=-10. We note that the increase of g^(2)_DE(x) for x≳ 0.6 × (2π k_F^-1) is a finite-size effect (cf. Ref. <cit.>). In Fig. <ref>(c), we compare g^(2)_DE(x) following a quench to γ=-40 (red solid line) to that following a quench to γ=40 (black dot-dashed line). The shape of g^(2)_DE(x) for interparticle separations x≳ 0.05 × (2π k_F^-1) is similar for both quenches. The main difference is in the short-range behaviour, which is significantly influenced by the highly localized bound states for the quench to attractive interactions. For the quench considered here, the dominant bound states are two-particle clusters (cf. Fig. <ref>). In Fig.
<ref>(c) we plot the matrix element ⟨{λ_j}|ĝ^(2)(x)|{λ_j}⟩ of the two-body correlation function in the dominant two-body bound state {n_j} = {2,0} (blue dashed line). For N=2 particles, the wave function of such a bound state is Ψ(x_1,x_2) ∝ exp(-|x_1-x_2|/a_1D) = exp(-|x_1-x_2| n|γ|/2) <cit.>, where a_1D is the 1D scattering length <cit.>. This implies a two-body correlation g^(2)(x) ∝ |Ψ(0,x)|^2 = exp(-x n|γ|), which is indeed consistent with the form of g^(2)(x) in the state {n_j} = {2,0} at small separations, whereas at larger separations g^(2)(x) in this state tends to a constant finite value, due to the unbound particles it contains. Away from small separations, a small proportion of g^(2)_DE(x) is due to such contributions of free particles in eigenstates containing bound particles, but this function is dominated by the contributions of scattering states. For attractive interactions these scattering states are expected to be identical to states of the one-dimensional Bose gas with hard-sphere interactions outside the corresponding hard-sphere radius a_hs ≃ a_1D = -2(γ n)^-1 = 0.01875 × (2π k_F^-1) <cit.>. Indeed, from the inset to Fig. <ref>(c) we observe that the form of g^(2)(x) in the super-Tonks state {n_j} = {3,1} (pink dot-dashed line, multiplied by a factor of 10 for visibility) and that of g^(2)_DE following a quench to γ=-40 without the contribution of bound states (green dotted line) are consistent with this expectation. In summary, our results for the time-averaged local second-order correlation function g^(2)_DE(0) are consistent with an enhancement of this quantity over the initial ideal-gas value by a factor of ≃ 4 in the limit of strong final interaction strengths, and thus with the predictions of Refs. <cit.> in this limit. Our calculations also reveal an enhancement of the local third-order correlation function g^(3)_DE(0) over the ideal-gas value by a factor of ≃ 20 for strong interactions, suggesting that the postquench state would be susceptible to large three-body recombination losses in practice. Results for time-averaged correlation functions at interparticle separations larger than the characteristic extent of bound states are comparable to those obtained previously <cit.> for quenches to repulsive interactions. § CONCLUSIONS We have studied the nonequilibrium dynamics of the one-dimensional Bose gas following a quantum quench from the noninteracting ground state to attractive interaction strengths γ<0. In particular we calculated equilibrium, nonequilibrium, and time-averaged correlation functions of the system and investigated their dependence on the final interaction strength. To achieve this we extended a previously developed coordinate Bethe ansatz method for the nonequilibrium dynamics of the Lieb–Liniger model <cit.> to the attractively interacting regime. Compared with the case of repulsive interactions, the computational evaluation is found to be significantly more demanding. This is a consequence of near cancellations in the scattering factors of Bethe ansatz wave functions for strongly negative interaction strengths.
We calculated first-, second-, and third-order correlation functions of the ground state for up to seven particles and a wide range of negative interaction strengths γ, and observed the emergence of bright-soliton-like correlations. As the interaction strength γ becomes more negative, the correlation functions approach a form corresponding to bright-soliton solutions of the mean-field approximation. We then calculated the nonequilibrium correlation functions of a system of four particles following quenches of the interaction strength from γ = 0 to several different values of γ<0. For a small postquench interaction strength γ = -0.5, the excitation energy imparted to the system by the quench is of the order of the finite-size energy gap, and consequently excitations are strongly suppressed. This results in correlation functions exhibiting quasi-two-level dynamics. For quenches to intermediate attractive values of the interaction strength, the local correlations are found to increase on short time scales and at later times fluctuate about a well-defined value, which is greatly enhanced compared to the noninteracting prequench state. For quenches to large attractive interaction strengths |γ| ≳ 10, single-frequency oscillations in the local second-order correlation function on top of an overall irregular behaviour are observed, with the oscillations persisting at late times. The oscillatory behaviour also occurs in the momentum distribution for large postquench interaction strengths, and the frequency of oscillation is determined by the energy difference between the dominant super-Tonks eigenstate and the most highly occupied two-body bound state following the quench. Similar oscillations in the local third-order correlation function occur at a frequency given by the energy difference between two- and three-body bound states of the postquench Hamiltonian. Time-averaged values of the postquench local second-order correlation function appear consistent with a tendency towards a constant value in the limit of infinitely strong attractive interactions. In particular, our results for this quantity indicate an enhancement by a factor of ≃4 over the initial ideal-gas value, consistent with a recently obtained thermodynamic-limit result <cit.>. Our calculations similarly suggest that the time-averaged local third-order correlation function following the quench tends to a constant, greatly enhanced value in the strongly interacting limit. Outside interparticle separations of the order of the extent of bound states of the Lieb–Liniger model, the dynamical behaviour and time-averaged form of the second-order correlation function following a quench to attractive interactions are remarkably similar to those following a quench to repulsive interactions of the same magnitude. § ACKNOWLEDGEMENTS M.J.D. acknowledges the support of the JILA Visiting Fellows program. Funding information This work was partially supported by ARC Discovery Projects, Grant Nos. DP110101047 (J.C.Z., T.M.W., K.V.K., and M.J.D.), DP140101763 (K.V.K.), DP160103311 (M.J.D.) and by the EU-FET Proactive grant AQuS, Project No. 640800 (T.G.). § MEAN-FIELD CORRELATION FUNCTIONS In this appendix we describe how we obtained the mean-field results for comparison with the Lieb–Liniger results plotted in Figs. <ref> and <ref>. The solution of the 1D Gross–Pitaevskii equation on a ring of finite circumference L is conveniently expressed in terms of the angular coordinate θ ∈ [0,2π) around the ring circumference (see e.g. Refs.
<cit.>) as Ψ_GP(θ,Θ) = √(1/2π) for γ^(r) ≥ γ^(r)_crit, and Ψ_GP(θ,Θ) = √(K(m)/(2π E(m))) dn((K(m)/π)(θ - Θ) | m) for γ^(r) < γ^(r)_crit, where γ^(r) = γ N^2/(2π^2) is the interaction strength, Θ is the centre of the soliton, and we have assumed periodic boundary conditions Ψ_GP(0)=Ψ_GP(2π). In these units the critical value of the interaction strength is γ^(r)_crit = -0.5. The functions K(m) and E(m) are the complete elliptic integrals of the first and second kind, respectively, and dn(x | m) is one of the Jacobian elliptic functions. The parameter m ∈ [0,1] is fixed by the solution to K(m) E(m) = π^2 |γ^(r)|/2. The Gross–Pitaevskii equation arises by approximating the many-body wave function using a Hartree–Fock ansatz Ψ(θ_1,…,θ_N) = ∏_j=1^N Ψ_GP(θ_j, Θ), where the single-particle wave function depends on the centre-of-mass variable Θ [Eq. (<ref>)]. Following Ref. <cit.>, we restore the translational symmetry of the many-body wave function by taking a coherent superposition of symmetry-broken Gross–Pitaevskii states with different soliton locations, Ψ(θ_1,…,θ_N) = (1/√(2π)) ∫_0^2π dΘ ∏_j=1^N Ψ_GP(θ_j,Θ). The normalized correlation functions are then given by g^(1)(θ,θ') = G^(1)(θ,θ')/√(G^(1)(θ,θ) G^(1)(θ',θ')) and g^(2)(θ,θ') = G^(2)(θ,θ')/[G^(1)(θ,θ) G^(1)(θ',θ')], where G^(1)(θ,θ') = (N/2π) ∫_0^2π dΘ Ψ^*_GP(θ,Θ) Ψ_GP(θ',Θ), and similarly G^(2)(θ,θ') = [N(N-1)/2π] ∫_0^2π dΘ Ψ^*_GP(θ,Θ) Ψ_GP(θ,Θ) Ψ^*_GP(θ',Θ) Ψ_GP(θ',Θ). § DETAILS OF NUMERICAL ALGORITHM FOR FINDING EIGENSTATES WITH BOUND STATES Eigenstates with complex rapidities arrange themselves in so-called string patterns in the complex plane for large values of |c| L ≡ N|γ|, up to deviations from these strings that are exponentially small in the system size L at fixed |c| <cit.>. This requires a reformulation of the algorithm previously described in Ref. <cit.> so as to avoid a loss of numerical accuracy due to calculating the difference between two nearly equal values. In this appendix we describe the details of this procedure for N=2, 3, and 4 particles. Extending this procedure to N>4 particles is possible, but the number of factors that have to be considered increases rapidly with increasing N. §.§ N=2 particles We begin by considering the N=2 particle ground state, for which the rapidities are imaginary for all c<0. For intermediate and large |c| L the rapidities in this case are λ_j = ∓ic/2 + iδ_j, where the minus (plus) sign applies to λ_1 (λ_2) by convention. The string deviations δ_j ∝ e^-η L, where η is a positive constant. The (unnormalized) two-particle wave function reads ζ(x_1,x_2) = (λ_2 - λ_1 - ic) e^i(λ_1 x_1+λ_2 x_2) - (λ_1 - λ_2 - ic) e^i(λ_2 x_1+λ_1 x_2) ≡ -i[(2λ+c) e^λr + (2λ-c) e^-λr], where we defined the relative coordinate r=x_2-x_1 and λ=λ_1/i=-λ_2/i. In light of Eq. (<ref>), the first term in the last line of Eq. (<ref>) is a product of a small number (2λ+c) and a large number (e^λr) away from r=0. The former is a difference of two numbers that are nearly equal, leading to catastrophic cancellations in double-precision arithmetic. However, from Eqs. (<ref>) and (<ref>) we find 2λ+c ≡ 2δ_1 = e^-λL(2λ-c), and substituting this expression into Eq. (<ref>) renders it amenable to numerical evaluation.
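As a concrete illustration of this reformulation, the following minimal sketch (with illustrative values of c and L, and working up to the overall factor -i) compares the naive and stabilized evaluations of ζ(r):

```python
import numpy as np

def zeta_naive(r, lam, c):
    """Direct evaluation of (2*lam + c) e^{lam r} + (2*lam - c) e^{-lam r};
    the factor (2*lam + c) suffers catastrophic cancellation when
    lam -> -c/2 in double precision."""
    return (2*lam + c)*np.exp(lam*r) + (2*lam - c)*np.exp(-lam*r)

def zeta_stable(r, lam, c, L):
    """Stabilized form using the identity 2*lam + c = e^{-lam L}(2*lam - c):
    zeta(r) = (2*lam - c) * (e^{lam (r - L)} + e^{-lam r})."""
    return (2*lam - c)*(np.exp(lam*(r - L)) + np.exp(-lam*r))

# Deep in the bound regime the string deviation ~ e^{-lam L} falls below
# the double-precision resolution of lam, so lam rounds to exactly -c/2.
c, L = -20.0, 2*np.pi
lam = -c/2.0
r = L/2.0
print(zeta_naive(r, lam, c))       # first branch lost entirely
print(zeta_stable(r, lam, c, L))   # both branches retained (twice as large here)
```

At r=L/2 the two exponential branches contribute equally by symmetry, so the naive evaluation is wrong by a factor of two; at other separations the error can be far larger.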
§.§ N=3 particles For particle numbers N>2, in addition to the ground state, which always has imaginary rapidities, excited parity-invariant states may possess complex rapidities at interaction strengths c<c_crit, where c_crit is an N-dependent “phase-crossover” point in the vicinity of the mean-field transition point <cit.>. For N=3, there are two parity-invariant eigenstates with complex rapidities: (i) The ground state is a three-body bound state with imaginary rapidities λ_1 = -λ_3, and λ_2=0. By convention λ_1/i > 0. For small string deviations, the factor λ_2 - λ_1 - ic ≡ -(λ_1 + ic) needs to be rewritten. The Bethe equation (<ref>) for λ_1 is e^iλ_1 L = [(λ_1+ic)/(λ_1-ic)][(2λ_1+ic)/(2λ_1-ic)], which can be rearranged to find an expression λ_1 + ic = e^iλ_1 L (λ_1 - ic)(2λ_1-ic)/(2λ_1+ic) for the critical factor in this case. (ii) First excited parity-invariant state. Here, the rapidities λ_1=-λ_3 are real for c>c_crit <cit.> and are otherwise imaginary, in which case we again follow the convention that λ_1/i>0. The critical factor to be replaced is 2λ_1 + ic. From Eq. (<ref>) we obtain the appropriate expression 2λ_1 + ic = e^iλ_1 L (2λ_1-ic)(λ_1 - ic)/(λ_1+ic). §.§ N=4 particles For N=4 particles, an infinite number of parity-invariant bound states contribute to the postquench dynamics, and they can be grouped into the following categories, cf. Sec. <ref>. In the following we write λ_j ≡ μ_j + iν_j with μ_j, ν_j real numbers, and assume that μ_1,μ_2 ≥ 0, ν_1,ν_2 ≥ 0, λ_3 = -λ_2, and λ_4 = -λ_1. (i) The ground state with {n_j}={0,0}. The rapidities are purely imaginary, μ_j=0. Substituting this into Eq. (<ref>) leads to the following two equations: e^-ν_1 L = [(ν_1 - ν_2 + c)/(ν_1 - ν_2 - c)][(ν_1 + ν_2 + c)/(ν_1 + ν_2 - c)][(2ν_1 + c)/(2ν_1 - c)] and e^-ν_2 L = [(ν_2 - ν_1 + c)/(ν_2 - ν_1 - c)][(ν_2 + ν_1 + c)/(ν_2 + ν_1 - c)][(2ν_2 + c)/(2ν_2 - c)]. There are two critical factors: ν_1-ν_2+c and 2ν_2 + c. Rewriting Eq. (<ref>) leads to ν_1 - ν_2 + c = e^-ν_1 L (ν_1 - ν_2 - c)[(ν_1 + ν_2 - c)/(ν_1 + ν_2 + c)][(2ν_1 - c)/(2ν_1 + c)] ≡ α. Equation (<ref>) can be expressed as 2ν_2 + c = -e^-ν_2 L α [(ν_2 + ν_1 - c)/(ν_2 + ν_1 + c)][(2ν_2 - c)/(ν_2 - ν_1 + c)], where α is the first critical factor defined in Eq. (<ref>). (ii) The three-body bound state with {n_j} = {1,0}. This is the first parity-invariant excited state and has real rapidities λ_1 and λ_4 that tend to zero for large attractive values of cL. Following Ref. <cit.>, Appendix B, we can reparameterize the rapidities in this case via their deviations δ = e^-|c|L/2 from the string solution: λ_1 = δα, λ_2 = -ic + iδ^2 β. Substituting this into the Bethe equations (<ref>), Ref. <cit.> obtained in the limit of small string deviations α = √(12)|c| and β = 6Lc^2. We did not find a suitable double-precision strategy for this particular eigenstate, and so resorted to high-precision arithmetic for numerical calculations. To obtain sufficiently precise Bethe rapidities for large attractive values of γ, we used Eqs. (<ref>) as the starting point for our root-finding algorithm. (iii) Eigenstates with {n_j} = {n,0} for all integers n≥2. In this case, λ_1 is real, λ_2 imaginary, λ_1=μ_1, λ_2=iν_2. The critical factor is 2ν_2+c. Rewriting the Bethe equation for λ_2 leads to 2ν_2 + c = e^-ν_2 L (2ν_2-c) |μ_1 + i(ν_2 - c)|^2/|μ_1 + i(ν_2 + c)|^2. (iv) Eigenstates with {n_j} = {n,n} for all integers n≥1. The Bethe rapidities are complex and satisfy λ_1 = λ_2^*.
Rewriting the first Bethe equation with μ ≡ μ_1 = μ_2 and ν ≡ ν_1 = -ν_2 and taking the real part leads to 2ν + c = [(2ν-c)/(2μ)] e^-νL Re[(2μ + i(2ν-c)) ((2μ - ic)/(2μ + ic)) e^iμL], where Re[x] denotes the real part of x. (v) Eigenstates with {n_j} = {n,n-1} for all integers n≥2. For c>c_crit, the Bethe rapidities are real. For more attractive interactions, they become complex-conjugate pairs, λ_1 = λ_2^*, and this case becomes equivalent to the preceding one.
[email protected] for Space Research, North-West University, Potchefstroom, 2522, South Africa 2National Institute for Theoretical Physics (NITheP), Gauteng, South Africa 3Center for Space Plasma and Aeronomic Research, University of Alabama in Huntsville, Huntsville, AL 3585, USA 4Department of Space Science, University of Alabama in Huntsville, Huntsville, AL 35899, USADrift effects play a significant role in the transport of charged particles in the heliosphere. A turbulent magnetic field is also known to reduce the effects of particle drifts. The exact nature of this reduction, however, is not clear. This study aims to provide some insight into this reduction, and proposes a relatively simple, tractable means of modelling it that provides results in reasonable agreement with numerical simulations of the drift coefficient in a turbulent magnetic field.§ INTRODUCTIONDrift due to magnetic field gradients and curvatures play a central role in the transport of charged particles in a plasma. Cosmic rays in the heliosphere experience drifts, not only due to the gradient and curvature of the heliospheric magnetic field, but also due to the heliospheric current sheet, a surface over which the sign of the heliospheric magnetic field is reversed. These drifts have long been known to have significant effects on cosmic-ray transport, and hence on cosmic-ray modulation <cit.>, even in the heliosheath <cit.>. Drift effects account for the 22-year cycle observed in cosmic-ray intensities <cit.>, lead to a strong dependence of observed cosmic-ray intensities on the solar tilt angle <cit.> and heliospheric magnetic field polarity <cit.>, as well as having a significant influence on observed global cosmic-ray modulation phenomena such as observed latitude gradients <cit.>. Drift effects may even be of importance to the study of solar energetic particles <cit.>. The drift coefficient, which enters the <cit.> cosmic-ray transport equation via the off-diagonal elements of the diffusion tensor, can, in the weak scattering limit, be expressed by <cit.>κ_A^ws=v/3R_L,with R_L the maximal gyroradius and v the particle speed. Particle drift coefficients have been shown theoretically and by means of numerical test-particle simulations <cit.> to be reduced in the presence of turbulence. It is interesting to note that <cit.> incorporated a reduction factor in the off-diagonal elements of the diffusion tensor, albeit due to isotropic scattering. Given the importance of drift in any study of cosmic-ray modulation, this reduction needs to be carefully modelled, as numerical cosmic-ray modulation studies also indicate that better agreement of model results with spacecraft observations can be found if the cosmic-ray drift coefficient at low to intermediate values were smaller than the weak-scattering value of Eq. <ref> <cit.>, and are very sensitive to the choice made as to the drift-reduction factor <cit.>. Furthermore, such modulation studies have shown a marked solar-cycle dependence of the factor by which the weak-scattering drift coefficient needs to be reduced, so as to fit spacecraft observations of cosmic ray intensities <cit.>. 
Cosmic ray modulation studies have long employed an ad hoc form for the reduced drift coefficient <cit.>, given by κ_A = (βP/3B_0)[(P/P_0)^2/(1+(P/P_0)^2)], with B_0 the background magnetic field magnitude, β the ratio of the particle's speed to that of light, P the particle rigidity, and P_0 an ad hoc parameter, in units of GV, that is chosen so as to achieve model agreement with a particular spacecraft dataset. Different values for P_0, however, are required to fit different sets of spacecraft data <cit.>, and it must be noted that, in such modulation studies, a large perpendicular coefficient would act so as to mask the effects of drift, even for large values of the drift coefficient <cit.>. Numerical test particle simulations, where the Newton–Lorentz equation is solved for an ensemble of test particles in various pre-specified turbulent magnetic field conditions, do reveal some details as to the exact nature of the reduction of the drift coefficient. <cit.> first showed, by means of such simulations for simulated composite slab/2D turbulence <cit.> and isotropic turbulence, that drift coefficients are indeed reduced under such circumstances, a result confirmed for isotropic turbulence by <cit.> and for composite slab/2D turbulence by <cit.>. <cit.> studied this effect for both a uniform background magnetic field as well as a background field with an imposed spatial gradient, finding the same levels of reduction for the drift coefficient in each case when the same turbulence conditions are used. They also showed that the total drift motion of the particle is not completely described by the off-diagonal elements of the diffusion tensor, and that, due to the scattering of particles, a proper understanding of the drifts of these particles requires an understanding of the symmetric elements of the diffusion tensor, i.e., the parts that govern diffusion parallel and perpendicular to the background field. Furthermore, for their simulations incorporating a background field with a gradient, these authors also report a reduction in the drift velocity of particles in the presence of turbulence, as would be expected from the behaviour of the corresponding drift coefficient, which <cit.> found to agree with that calculated from their simulations performed assuming a uniform background magnetic field. <cit.> performed extensive simulations of the drift coefficient, for different turbulent geometries and different wavenumber dependences of the energy-containing range of the assumed turbulence power spectrum. In line with the previously mentioned studies, <cit.> find, for isotropic and composite turbulence, that the drift coefficient is essentially the weak scattering coefficient given in Eq. <ref> for very low levels of turbulence, becoming ever more reduced as turbulence levels increase, with the amount of reduction decreasing for a given turbulence level as particle energy is increased. Interestingly, <cit.> show that, no matter the strength, pure slab turbulence simply does not reduce the computed drift coefficient from the weak scattering value. These authors also report a relatively weak dependence of the drift-reduction factor on particle rigidity and on the energy-range spectral index of the 2D fluctuation spectrum.
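As an illustration of the behaviour of the ad hoc form above, the sketch below evaluates it directly; since rigidity in volts divided by field strength in tesla already carries units of m^2/s, no further constants are required. The values of P_0, B_0, and β used here are purely illustrative.

```python
def kappa_A_adhoc(P_GV, B0_nT, P0_GV, beta):
    """Ad hoc reduced drift coefficient
    kappa_A = (beta P / 3 B_0) * (P/P_0)^2 / (1 + (P/P_0)^2),
    returned in m^2/s (note P [V] / B_0 [T] has units of m^2/s)."""
    P_V = P_GV * 1e9
    B0_T = B0_nT * 1e-9
    reduction = (P_GV / P0_GV)**2 / (1.0 + (P_GV / P0_GV)**2)
    return beta * P_V / (3.0 * B0_T) * reduction

# Strongly suppressed well below P_0, approaching beta*P/(3 B_0) above it.
for P in [0.1, 1.0, 10.0]:
    print(P, kappa_A_adhoc(P, B0_nT=5.0, P0_GV=0.9, beta=1.0))
```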
It should be noted that all the abovementioned simulations were performed assuming axisymmetric, transverse magnetostatic turbulent fluctuations, and also that, although the simulations of <cit.> agree qualitatively (where comparable) with the results of the studies of <cit.> and <cit.>, they do not agree quantitatively. Due to the extreme complexity of a self-consistent theoretical approach to the reduction of drift effects in the presence of turbulence <cit.>, there have been relatively few attempts at theoretical treatments of this problem. Numerical studies of the drift coefficient report on fits to the turbulence-reduced drift coefficient <cit.>, but these are potentially of limited use to, e.g., modulation studies, as the simulated turbulence conditions assumed in these studies may not necessarily be representative of heliospheric conditions. An example of such a fit is presented by <cit.>, where κ_A = (vR_L/3)[1/(1 + a(δB_T^2/B_0^2)^d)], with a and d fitting constants that change with different turbulence geometries assumed in the simulations, and δB_T^2 the (total) magnetic variance. Note that this expression is similar to what was suggested by <cit.>. <cit.>, considering the effects of transverse turbulent fluctuations on the unperturbed particle orbits, find that the drift coefficient is given by κ_A = (vR_L/3)[(Ωτ)^2/(1+(Ωτ)^2)], where Ω is the (unperturbed) particle gyrofrequency, and τ some decorrelation time. The product of these two quantities they model using Ωτ = 2R_L/(3D_⊥), where R_L is the maximal (unperturbed) particle Larmor radius, and D_⊥ the field line random walk (FLRW) diffusion coefficient <cit.>, given for slab/2D composite turbulence by D_⊥ = (1/2)(D_sl + √(D_sl^2 + 4D_2D^2)), where D_sl = (1/2)(δB_s^2/B_0^2)λ_c,s and D_2D = (√(δB_2D^2/2)/B_0)λ_u, with δB_s^2 and δB_2D^2 the slab and 2D variances, respectively, λ_c,s the slab correlation scale, and λ_u the 2D ultrascale <cit.>. Although providing a tractable expression for the turbulence-reduced drift coefficient, <cit.> showed that the <cit.> drift-reduction factor simply does not fit the simulation results of <cit.>, whether they pertained to the drift coefficient or the drift velocity. These authors went on to propose another form for Ωτ, such that Ωτ = (11/3)√(R_L/λ_c)/(D_⊥/λ_c)^g, where g = 0.3log(R_L/λ_c) + 1.0 and λ_c is the slab correlation length. This form then fit the <cit.> simulations very well, and has been used with some success in cosmic ray modulation studies <cit.>, but the generality of this result is questionable, as it remains to be seen whether this highly parametrized fit would also agree with the results of simulations performed assuming turbulence conditions very different to those assumed by <cit.>. Lastly, only the complicated results presented by <cit.> and <cit.> predict the lack of drift reduction seen in the simulation results of <cit.> for pure magnetostatic slab turbulence.
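For reference in what follows, the sketch below implements these expressions. The turbulence parameters (variance ratios and the lengthscales λ_c,s and λ_u) are illustrative placeholders, and the logarithm in the exponent g is assumed to be base 10.

```python
import numpy as np

def D_perp_flrw(dBs2_B02, dB2D2_B02, lam_cs, lam_u):
    """Composite slab/2D FLRW coefficient:
    D_sl = (1/2)(dB_s^2/B_0^2) lam_c,s ; D_2D = sqrt(dB_2D^2/2)/B_0 * lam_u."""
    D_sl = 0.5 * dBs2_B02 * lam_cs
    D_2D = np.sqrt(dB2D2_B02 / 2.0) * lam_u
    return 0.5 * (D_sl + np.sqrt(D_sl**2 + 4.0 * D_2D**2))

def f_s_from_omega_tau(omega_tau):
    """Drift-reduction factor (Omega*tau)^2 / (1 + (Omega*tau)^2)."""
    return omega_tau**2 / (1.0 + omega_tau**2)

def omega_tau_bm97(R_L, D_perp):
    """Bieber & Matthaeus (1997)-style scaling Omega*tau = 2 R_L/(3 D_perp)."""
    return 2.0 * R_L / (3.0 * D_perp)

def omega_tau_fit(R_L, D_perp, lam_c):
    """Alternative fitted form, with g = 0.3 log10(R_L/lam_c) + 1.0 assumed."""
    g = 0.3 * np.log10(R_L / lam_c) + 1.0
    return (11.0 / 3.0) * np.sqrt(R_L / lam_c) / (D_perp / lam_c)**g

# Illustrative values; lengths in units of the slab correlation scale.
lam_c = 1.0
Dp = D_perp_flrw(dBs2_B02=0.2, dB2D2_B02=0.8, lam_cs=lam_c, lam_u=0.1*lam_c)
for R_L in [0.1, 1.0]:
    print(R_L, f_s_from_omega_tau(omega_tau_bm97(R_L, Dp)),
          f_s_from_omega_tau(omega_tau_fit(R_L, Dp, lam_c)))
```

The corresponding drift coefficient in each case is simply κ_A = f_s · vR_L/3.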
Firstly, from a simplistic analysis of the drift velocity of a charged particle in a turbulent magnetic field we show that one can readily derive an expression for the drift reduction factor similar to the fits proposed by <cit.> that yields results that, in limiting cases, bound the simulation results of e.g. <cit.>. In Section <ref> a new drift-reduction factor is derived, broadly following the approach taken by <cit.>, which not only produces results in reasonably good agreement with the simulations of <cit.> for both the drift velocity and drift coefficient, but also returns the weak-scattering drift coefficient should the assumption of magnetostatic, purely slab turbulence be made. The last section provides a discussion of the abovementioned results. § A FIRST-ORDER APPROACH TO THE EFFECT OF TURBULENCE ON COSMIC RAY DRIFT COEFFICIENTS In general, the pitch-angle averaged guiding center drift velocity of a particle with momentum p and charge q in a fluctuating magnetic field B is given by ⟨v⃗_d⟩ = ⟨(pv/3q) ∇×(B⃗/B^2)⟩, with angle brackets denoting a suitable time average. Using a Reynolds decomposition of the magnetic field, the magnetic field can be written as the sum of a large scale B⃗_0 and fluctuating transverse b⃗ components such that B⃗ = B⃗_0 + b⃗. Note that the assumption of transverse fluctuations is made throughout this study. The above, when substituted into Eq. <ref>, yields ⟨v⃗_d⟩ ≈ (pv/3q) ∇×⟨B⃗/B^2⟩ ≈ (pv/3q) ∇×[B⃗_0/(B_0^2 + ⟨b^2⟩)] = ∇×[(pv/3qB_0)(B⃗_0/B_0)(B_0^2/(B_0^2 + ⟨b^2⟩))]. In the above equations it is assumed that the turbulence is weak (such that b ≪ B_0) and vanishes when an appropriate long-term time-averaging is performed (⟨b⃗⟩ = 0). Moreover, due to the assumption of transverse turbulence, B^2 = ⟨B⃗·B⃗⟩ = ⟨B⃗_0·B⃗_0⟩ + 2⟨b⃗·B⃗_0⟩ + ⟨b⃗·b⃗⟩ = B_0^2 + ⟨b^2⟩. In terms of the drift coefficient κ_A, Eq. <ref> is therefore equal to ⟨v⃗_d⟩ = ∇×(κ_A^ws f_s 𝐞_B_0), with 𝐞_B_0 := B⃗_0/B_0 a unit vector along the mean uniform field B⃗_0, and f_s some factor by which the weak-scattering value of the drift coefficient κ_A^ws is altered. This leads us to conclude, from inspection of Eq. <ref>, that the drift coefficient is suppressed by a factor given by f_s := 1/(1 + ⟨b^2⟩/B_0^2). Some care must be taken in the interpretation of ⟨b^2⟩, as the exact nature of the implied time-averaging is not clear. One possible approach to this problem is as follows. Defining the total variance of the fluctuating field as δB_T^2 := ∫_0^∞ g(k⃗) dk⃗, where g(k⃗) denotes the turbulence power spectrum associated with the fluctuating magnetic field component, the drifting particle is only expected to be influenced by fluctuations on scales comparable to, or larger than, its Larmor radius. Hence, we define ⟨b^2⟩ := ∫_0^R_L^-1 g(k⃗) dk⃗. For ease of comparison between the results of this section and those of previous studies mentioned in the previous section, we introduce the factor ϵ := ⟨b^2⟩/δB_T^2, where a comparison between Equations <ref> and <ref> indicates that ϵ ≤ 1. Then Eq. <ref> becomes f_s = 1/(1 + ϵ δB_T^2/B_0^2), where δB_T^2 denotes the total variance as defined in Eq. <ref>. This result is similar to that of <cit.>. A brief consideration of various limits shows that Eq. <ref> satisfies, at least to first order, what is expected of such a reduction factor from prior simulations such as those performed by <cit.>.
In the very low turbulence limit, where δB_T^2 ≪ B_0^2, we have that f_s ≈ 1, which returns the weak scattering drift coefficient, while for the case where the turbulence is strong (δB_T^2 ≫ B_0^2), we have f_s → 0. Furthermore, at low particle energies, R_L^-1 becomes very large, implying that ϵ approaches unity and hence that f_s → (1+δB_T^2/B_0^2)^-1, so that there would be a maximum reduction of the weak scattering drift coefficient. Conversely, at high particle energies there is no drift reduction, as R_L^-1 becomes very small, implying that ϵ → 0, which in turn yields f_s → 1. Also, it is immediately apparent that the form of Eq. <ref> resembles strongly that of the functions <cit.> fit to their simulations of the turbulence-reduced drift coefficient. This is further reinforced by a cursory inspection of Fig. <ref>, which shows examples of f_s as a function of δB_T^2/B_0^2 for varying values of ϵ, along with the simulation fits proposed by <cit.> for the cases of isotropic and composite (85% / 15% 2D/slab) turbulence. Although the <cit.> fit for their reduction factor in the presence of isotropic turbulence falls below the ϵ=1 case for Eq. <ref>, the composite result falls neatly within the range expected of that equation. Also shown on the same figure are the results of numerical simulations performed by <cit.>, for two different ratios of the proton Larmor radius to the slab correlation scale assumed in that model such that R_L/λ_c is equal to 0.1 and 1.0, as well as the results reported by <cit.> for R_L/λ_c=0.1. Note that these simulations were performed for approximately the same composite turbulence conditions as those of <cit.>, the difference being that <cit.> assume 80% / 20% 2D/slab turbulence. It is clear that these simulation results fall within the range delineated by the limiting cases of ϵ=0 and 1. The similarity of the drift reduction coefficient of Eq. <ref> in form to the fits presented by <cit.>, as well as the fact that the limiting cases for Eq. <ref> effectively bound the simulation results of that study as well as those of <cit.>, suggests that, at least to first order, the approach presented here will yield a reasonable approximation to the factor by which turbulence reduces the weak scattering drift coefficient, even though the uncertainty implicit in the averaging performed in Eq. <ref> makes it difficult to accurately and self-consistently estimate the effect of turbulent fluctuations likely to affect the drift of the particles in question. Furthermore, the drift reduction coefficient of Eq. <ref>, which was derived without making assumptions as to the geometry of the turbulence apart from it being transverse to the background field, cannot explain the simulated drift coefficients reported by <cit.> for purely slab turbulence, which essentially remained at the weak scattering level, except by assuming a posteriori that only 2D turbulent fluctuations act so as to reduce the drift coefficient. Lastly, the sensitivity of numerically simulated cosmic ray intensities demonstrated by <cit.> to the form of the turbulence-reduced drift coefficient employed also implies that a first-order result for f_s may prove to be of limited use in modulation studies, given the uncertainty in the averaging of Eq. <ref>.
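The energy dependence of ϵ, and hence of f_s, can be made concrete by carrying out the spectral integrals numerically. The sketch below does this for a simple model spectrum, assumed here to have a flat energy-containing range below a bendover wavenumber k0 and a Kolmogorov k^(-5/3) inertial range above it; this spectral form and the parameter values are illustrative assumptions only.

```python
import numpy as np
from scipy.integrate import quad

def g_spectrum(k, k0=1.0):
    """Model power spectrum: flat below the bendover wavenumber k0,
    k^(-5/3) above. The normalization cancels in the ratio epsilon."""
    return 1.0 if k < k0 else (k / k0)**(-5.0/3.0)

def epsilon(R_L, k0=1.0):
    """epsilon = [int_0^{1/R_L} g(k) dk] / [int_0^inf g(k) dk]."""
    total = quad(g_spectrum, 0.0, k0, args=(k0,))[0] \
          + quad(g_spectrum, k0, np.inf, args=(k0,))[0]
    k_cut = 1.0 / R_L
    if k_cut <= k0:
        partial = quad(g_spectrum, 0.0, k_cut, args=(k0,))[0]
    else:
        partial = quad(g_spectrum, 0.0, k0, args=(k0,))[0] \
                + quad(g_spectrum, k0, k_cut, args=(k0,))[0]
    return partial / total

def f_s(R_L, dBT2_over_B02, k0=1.0):
    """First-order reduction factor f_s = 1/(1 + eps * dB_T^2/B_0^2)."""
    return 1.0 / (1.0 + epsilon(R_L, k0) * dBT2_over_B02)

# Larmor radii in units of the bendover scale 1/k0, at dB_T^2/B_0^2 = 1:
for R_L in [0.01, 1.0, 100.0]:
    print(R_L, f_s(R_L, 1.0))
```

Small R_L (low energies) gives ϵ near unity and maximal reduction, while large R_L gives f_s near unity, in line with the limits discussed above.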
The following section outlines an alternative approach to the calculation of this quantity, based on the work of <cit.>, which does not suffer from these limitations.

§ A MODIFICATION TO THE RESULTS OF BIEBER & MATTHAEUS (1997)

In their approach, <cit.> invoke the TGK (<cit.>-<cit.>-<cit.>) formula for a diffusion coefficient in terms of some relevant velocity correlation function:
D_ij=∫_0^∞dt R_ij(t),
where the subscripts i and j denote Cartesian coordinates (in this study the background magnetic field is assumed to be uniform and pointed in the z-direction), and R_ij(t)=⟨ v_i(t_o)v_j(t_o+t)⟩ is the velocity correlation function, which is assumed to be independent of the reference time t_o, and to go to zero at a rate greater than 1/t as t goes to infinity. The assumption that the decay of this correlation function depends on the time lag t alone amounts to assuming that particles interact with stationary, homogeneous turbulence. <cit.> note that the calculation of this correlation function from first principles, that is to say without making the simplifying assumptions outlined in Section <ref>, is extraordinarily difficult, as information is required as to the spatial and temporal dependences of the turbulent fluctuations. These authors proceed in their derivation of a turbulence-reduced drift coefficient by choosing physically and theoretically motivated forms for the required correlation functions, obtained by considering the effect of magnetic fluctuations on the unperturbed gyromotion of a particle in a uniform magnetic field, and arguing that such fluctuations would cause R_ij to go to zero after a sufficient amount of time has elapsed. The form chosen by <cit.> that is of interest to this study is
R_yx=v^2/3 sin(Ω t)e^-ν_⊥t,
with Ω the gyrofrequency of the unperturbed particle, v its speed, and ν_⊥ some perpendicular decorrelation rate. Integration of this correlation function in Eq. <ref> with ν_⊥=0 then yields the weak-scattering drift coefficient, while for non-zero values of the decorrelation rate it yields Eq. <ref> with τ=1/ν_⊥. <cit.> then argue that the field line random walk (FLRW) process will be the major factor in the perpendicular decorrelation process, introducing a lengthscale z_c=R_L^2/D_⊥ over which the perpendicular correlation function would significantly decrease. This then leads to a decorrelation time of
τ∼R_L^2/vD_⊥.
This scaling forms the basis of the drift-reduction term proposed by these authors, as discussed in Section <ref>. In the present study, we do not assume that decorrelation is entirely due to the FLRW, as the drift process would act so as to cause particles to leave field lines. We assume that the perpendicular decorrelation scale is inversely proportional to the lengthscale over which decorrelation perpendicular to the uniform background field occurs, which we approximate as the particle's perpendicular mean free path, so that z_c=R_L^2/λ_⊥. The choice of λ_⊥, as opposed to the turbulence correlation length, is motivated by the fact that we are interested in the particle velocity decorrelation in particular. Furthermore, due to the fact that particles drift perpendicular to the background field, we assume that the perpendicular decorrelation rate is influenced only by the particle's speed perpendicular to the uniform background field, v_⊥.
This then gives the decorrelation time as
τ = R_L^2/v_⊥λ_⊥.
The perpendicular decorrelation speed is unaffected by the drift velocity term, as the latter will not contribute to this perpendicular speed under the assumption of a uniform constant background magnetic field, even in the presence of turbulent fluctuations, as indicated by the simulation results of <cit.>. To get an estimate of this perpendicular speed, then, consider a Reynolds-decomposed turbulent magnetic field in two dimensions, B⃗=B_0e⃗_z+b_xe⃗_x, where B_0 is uniform, b_x is a fluctuating, transverse component, and ⟨ B ⟩ = B_0. Then at any particular point along B⃗, the sine of the angle θ between B⃗ and B_0e⃗_z will be given by b_x/B ≈ b_x/B_0, assuming small fluctuations. This angle will then be the same as the average angle between the particle velocity v⃗ and its component parallel to e⃗_z, such that sinθ = v_x/v, again assuming small fluctuations. This then leads to v_x≈ v(b_x/B_0). As it follows that ⟨ v_x⟩ =0, we model v_⊥ as the root-mean-square value of this quantity. Therefore, we use v_⊥≈ v(δ B_T/B_0), which then leads to
Ωτ = (R_L/λ_⊥)(B_0/δ B_T),
which, after substitution into Eq. <ref> and a little rearrangement, yields
f_s = 1/(1 + (λ_⊥^2/R_L^2)(δ B_T^2/B_0^2)).
This expression is reminiscent of the form of the reduction term derived in Section <ref>. Perpendicular particle transport has been shown from simulations <cit.> to be subdiffusive in the presence of pure slab turbulence. In this case, then, the perpendicular diffusion coefficient, and thus the perpendicular mean free path, would be zero (see, e.g., <cit.>). It should be noted here that both of the theoretical treatments of the drift coefficient in the presence of turbulence proposed by <cit.> and <cit.> predict that there will be no drift reduction in the presence of pure magnetostatic slab turbulence. Under these conditions, then, Eq. <ref> automatically yields the weak-scattering result, as seen in the simulations of <cit.>, as λ_⊥ would be zero <cit.>. The fact that Eq. <ref> is a function of the perpendicular mean free path, and thus implicitly of the parallel mean free path (assuming a nonlinear guiding center theory prediction for λ_⊥), is also in line with the findings of <cit.>, who report that knowledge of the spatial variation of these mean free paths would be required to fully describe particle drifts. The asymptotic behaviour of this drift-reduction term now depends on the various implicit dependences of the perpendicular mean free path on, for example, the Larmor radius and the ratio of the variance to the background field strength. Assuming that λ_⊥ remains relatively uniform as a function of rigidity (and therefore of R_L), as implied by the <cit.> consensus range as well as by various numerical simulations (e.g. <cit.>) and theoretical results (see, e.g., <cit.>), the ratio λ_⊥/R_L would correspond to small values of the quantity ϵ in Eq. <ref> at large energies, and large values of ϵ at the lowest energies, based on an assumed value of δ B_T^2/B_0^2. This would then imply a significant reduction of the drift coefficient from the weak-scattering value at low energies, and limited reduction at high energies, as expected from simulations. Furthermore, if one were to hold the ratio λ_⊥/R_L constant, it is clear that the drift coefficient would be more reduced at high turbulence levels, and less reduced at the lowest values of δ B_T^2/B_0^2, again as expected from simulations.
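The algebra leading from the assumed correlation function to Eq. <ref> can be checked symbolically. The SymPy sketch below performs the TGK time integration, forms the ratio to the weak-scattering (ν_⊥ → 0) value, and confirms that substituting Ωτ from above gives the quoted reduction factor; it is a consistency check under the stated assumptions only.

import sympy as sp

t, Omega, nu_p, v = sp.symbols('t Omega nu_perp v', positive=True)

# TGK integral of the assumed correlation function R_yx = (v**2/3)*sin(Omega*t)*exp(-nu_perp*t)
kappa_A = sp.integrate((v**2 / 3) * sp.sin(Omega * t) * sp.exp(-nu_p * t), (t, 0, sp.oo))
kappa_ws = (v**2 / 3) / Omega          # weak-scattering value (nu_perp -> 0)

f_s = sp.simplify(kappa_A / kappa_ws)  # Omega**2 / (Omega**2 + nu_perp**2)
print(sp.simplify(f_s - 1 / (1 + (nu_p / Omega) ** 2)))   # prints 0

# With Omega*tau = Omega/nu_perp = (R_L/lambda_perp)*(B_0/dB_T), this becomes
# f_s = 1 / (1 + (lambda_perp/R_L)**2 * dB_T**2/B_0**2), as in the text.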
We also do not expect a strong dependence of this drift-reduction factor on the spectral index of the energy-containing range of the 2D turbulence power spectrum, as from theoretical results (see, e.g., <cit.> and <cit.>) the rigidity dependence of λ_⊥ for different values of this spectral index is never as steep as that of R_L^2, again in qualitative agreement with the simulation results of <cit.>. It remains, however, to be seen whether Eq. <ref> can yield results comparable to those of numerical simulations of the drift-reduction term. In order to make this comparison, a choice needs to be made as to an expression for λ_⊥. This is not a trivial matter, as the implicit dependence of λ_⊥ on, e.g., turbulence quantities will have a significant effect on f_s. We choose an analytical approximation for the perpendicular mean free path derived from the nonlinear guiding center (NLGC) theory of <cit.> by <cit.>, as modified by <cit.>. This choice is motivated by the tractability of the expression, which allows for ease of comparison with simulation results (as opposed to the nonlinear results of, say, <cit.>), as well as by the fact that it is derived for a 2D turbulence spectral form identical to that employed as an input to the numerical simulations of <cit.> (and some of the simulations of <cit.>), which contains a flat energy-containing range and a Kolmogorov inertial range. Furthermore, this result also automatically satisfies the Shalchi slab hypothesis (see <cit.>), as it becomes zero when the 2D variance is zero. This perpendicular mean free path expression is
λ_⊥=[α^2√(3π)(2ν-1/ν)(Γ(ν)/Γ(ν-1/2))λ_2D δ B_2D^2/B_0^2]^2/3 λ_∥^1/3,
where we assume that α^2=1/3 (from <cit.>), λ_2D is the turnover scale where the inertial range commences on the assumed 2D turbulence power spectrum, and ν denotes half the assumed inertial range spectral index. As input for the parallel mean free path we use a quasilinear theory expression based on the results of <cit.>:
λ_∥=3s/π(s-1) λ_s R^2 B_0^2/δ B_sl^2 [1/4+2R^-s/(2-s)(4-s)],
where R=R_L/λ_s is a function of the lengthscale λ_s at which the inertial range on the slab turbulence power spectrum commences, that spectrum being assumed to have an inertial range spectral index s. This choice is also motivated by the tractability of Eq. <ref>, as well as by the fact that it is derived assuming a slab spectral form similar to that assumed in the simulation results we compare our results with. In order to properly compare our result with the simulations of <cit.>, we choose values for the turbulence parameters identical to those used in that study, so that s=2ν=5/3, δ B_2D^2=0.8δ B_T^2, δ B_sl^2=0.2δ B_T^2 and λ_s=10λ_2D=1.0. The results of these choices for the parallel and perpendicular mean free paths as inputs for Eq. <ref>, using the values for the turbulence quantities listed above, are plotted in the top panel of Fig. <ref> as a function of the turbulence level δ B_T^2/B_0^2, for the two values of the ratio of R_L to the slab correlation length λ_c considered, along with the numerical simulation results of <cit.> and <cit.>. Note that the slab correlation length is given, for the particular slab turbulence spectral form employed here, by λ_c=√(π)Γ(ν-0.5)λ_s/Γ(ν). As expected, Eq. <ref> predicts that at higher particle energies only the highest levels of turbulence cause a reduction in the drift coefficient. Agreement with the <cit.> simulations at R_L/λ_c=1.0 is good, but less so for R_L/λ_c=0.1. The latter prediction, however, falls within the error bars reported by <cit.>.
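To make the above recipe concrete, the sketch below chains the quasilinear parallel mean free path and the NLGC-based perpendicular mean free path into the reduction factor of Eq. <ref>, for the parameter values listed above. Since the prefactor groupings in the formulae as reproduced here are ambiguous, the groupings below (and the normalisation λ_s = 1) are assumptions of this illustration rather than definitive transcriptions of the expressions used in the cited studies.

from math import gamma, sqrt, pi

s = 5.0 / 3.0            # slab inertial-range index
nu = s / 2.0             # half the 2D inertial-range index
lam_s = 1.0              # slab turnover scale (sets the unit of length)
lam_2d = lam_s / 10.0    # 2D turnover scale
alpha2 = 1.0 / 3.0       # NLGC alpha^2

def lambda_par(R_L, dBT2_B02):
    dBsl2 = 0.2 * dBT2_B02                       # slab variance fraction
    R = R_L / lam_s
    return (3.0 * s / (pi * (s - 1.0))) * lam_s * R**2 / dBsl2 \
           * (0.25 + 2.0 * R**(-s) / ((2.0 - s) * (4.0 - s)))

def lambda_perp(R_L, dBT2_B02):
    dB2d2 = 0.8 * dBT2_B02                       # 2D variance fraction
    pref = alpha2 * sqrt(3.0 * pi) * (2.0 * nu - 1.0) / nu \
           * gamma(nu) / gamma(nu - 0.5) * lam_2d * dB2d2
    return pref ** (2.0 / 3.0) * lambda_par(R_L, dBT2_B02) ** (1.0 / 3.0)

def f_s(R_L, dBT2_B02):
    return 1.0 / (1.0 + (lambda_perp(R_L, dBT2_B02) / R_L) ** 2 * dBT2_B02)

lam_c = sqrt(pi) * gamma(nu - 0.5) * lam_s / gamma(nu)   # slab correlation length
for ratio in (0.1, 1.0):                                 # R_L / lambda_c
    print(ratio, [round(f_s(ratio * lam_c, x), 3) for x in (0.1, 1.0, 10.0)])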
Note that for lower levels of turbulence (δ B_T^2/B_0^2≲ 0.1 and, to a lesser degree given the extent of the uncertainties in the simulations, δ B_T^2/B_0^2 ≲ 1), Eq. <ref> is in good agreement with the simulation results. From observations <cit.> and turbulence transport modelling <cit.>, it is this range of turbulence levels that is typical of the heliosphere. It should be noted that, for their simulations assuming a gradient in the background magnetic field, <cit.> report turbulence-reduced drift coefficients essentially identical to those they calculate using a uniform background field. A comparison between the drift velocity calculated using Eq. <ref> and the simulation results of <cit.>, assuming a background magnetic field with a gradient, is shown in the bottom panel of Fig. <ref>. Here the drift velocities are calculated using
v⃗_d=∇×(κ_A e⃗_B)=f_s ∇×(κ^ws_A e⃗_B)+ ∇ f_s ×(κ^ws_A e⃗_B),
again assuming parameters identical to those employed by <cit.>, and are normalised to the zero-turbulence drift velocity. Note that we only compare our results with those pertaining to the y-component of the drift velocities calculated by <cit.>, as the gradient imposed by these authors on their simulated background field (which points in the z-direction) has only an x-component. Here, the use of Eq. <ref> again leads to good agreement with the simulation results at the smaller levels of turbulence relevant to heliospheric conditions, for both values of R_L/λ_c considered. At higher turbulence levels, the y-component of the drift velocity calculated using Eq. <ref> does not agree well with the simulations, a consequence of the assumption of relatively weak turbulence in the derivation of that expression. It is interesting to note that the simulations for the case where R_L/λ_c=0.1 yield negative values for the y-component of the drift velocity, as is the case for the results calculated using Eq. <ref>, even though the latter approach overestimates this effect. However, given the range of turbulence levels relevant to the heliosphere as discussed above, such an effect would not be expected to have significant consequences for the transport of charged particles.

§ DISCUSSION

The form of the drift-reduction factor contained in Eq. <ref> provides a relatively simple, tractable way of describing and modelling the effects of a range of turbulence conditions on the drift coefficient of charged particles. It satisfies the conditions prescribed by extant numerical simulations of both the drift coefficient and the drift velocity, and yields results in reasonably good agreement with said simulation results for turbulence levels corresponding to what is expected in the heliosphere. Due to its explicit dependence on λ_⊥, this quantity should, if used in conjunction with an expression for the perpendicular mean free path and a turbulence transport model, yield complicated spatial dependences for f_s throughout the heliosphere, as has been shown by <cit.> for the drift-reduction factors discussed in Section <ref>. The dependence of Eq. <ref> on basic turbulence quantities should have consequences for studies of the transport of particles such as low-energy electrons of galactic and Jovian origin. These particles' parallel and perpendicular mean free paths are expected to remain at a relatively constant value for a given (small) rigidity <cit.>, which, in combination with the explicit Larmor radius dependence of Eq.
<ref>, would lead to small values of f_s for a given turbulence level, and thus to a greatly reduced drift coefficient relative to the diffusion coefficients, in line with what is expected from prior modulation studies <cit.>. Furthermore, the transport of solar energetic particles would also be affected, in that the higher levels of turbulence closer to the Sun <cit.> would feed into Eq. <ref> in such a way as to suppress any drifts such particles may encounter, a prediction in contrast to the simulation results reported by, e.g., <cit.> and <cit.>. Lastly, the implicit dependence of Eq. <ref> on basic turbulence quantities leads to an implicit solar-cycle dependence for this drift-reduction factor. <cit.> report an increase in the total magnetic variance at Earth as solar activity increases <cit.>. This increase would act so as to decrease f_s, and thus lead to greatly reduced drift effects during solar maximum as opposed to solar minimum, as expected from the modulation studies of, e.g., <cit.> and <cit.>. Some caution has to be exercised in the use of Eq. <ref> due to the assumptions made as to the forms used for the perpendicular decorrelation lengthscale and speed that enter into Eq. <ref>. Furthermore, use of Eq. <ref> in modulation studies requires the assumption of some form for the perpendicular mean free path, which, given the number of expressions for this quantity currently in the literature <cit.>, can also lead to further uncertainty. To model the drift-reduction factor throughout the heliosphere would also require one to employ a turbulence transport model to provide information as to how the basic turbulence quantities that λ_⊥ is a function of vary throughout the heliosphere. Lastly, Eq. <ref> does not take into account the possibility of non-axisymmetric turbulence, which could potentially play a role in the drift of charged particles <cit.>. These considerations point to the fact that the predictions of Eq. <ref> should be further tested, firstly by means of numerical test particle simulations, using as input for λ_⊥ the perpendicular mean free path calculated from the simulations themselves and assuming a broader range of turbulence conditions than that hitherto considered, and secondly by means of particle transport studies such as the numerical study of cosmic ray modulation or solar energetic particle transport.

NEE, RDS and RAB acknowledge support from the National Research Foundation (Grant 96478). Opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the NRF.

[Adhikari et al.(2015)]zank Adhikari, L., Zank, G. P., Bruno, R., Telloni, D., Hunana, P., Dosch, A., Marino, R., & Hu, Q. 2015, , 805, 63
[Bieber & Matthaeus(1997)]bm1997 Bieber, J. W., & Matthaeus, W. H. 1997, , 485, 655-659
[Bieber et al.(1993)]bieberetal1993 Bieber, J. W., Chen, J., Matthaeus, W. H., Smith, C. W., & Pomerantz, M. A. 1993, , 98, 3585
[Bieber et al.(1994)]bieberetal1994 Bieber, J. W., Matthaeus, W. H., Smith, C. W., Wanner, W., Kallenrode, M.-B., & Wibberenz, G. 1994, , 420, 294-306
[Burger(1990)]b90 Burger, R. A. 1990, in Physics of the Outer Heliosphere, ed. S. Grzedzielski & D. E. Page (Oxford: Pergamon), 179
[Burger et al.(2000)]burgeretal2000 Burger, R. A., Potgieter, M. S., & Heber, B. 2000, , 105, 27447-27455
[Burger et al.(2008)]Burger2008 Burger, R. A., Krüger, T. P. J., Hitge, M., & Engelbrecht, N. E. 2008, , 674, 511-519
[Burger et al.(2014)]Burger2014 Burger, R. A., Nel, A. E., & Engelbrecht, N. E. 2014, AGU Fall Meeting Abstracts, A4152
[Burger & Visser(2010)]Burger2010 Burger, R. A., & Visser, D. J. 2010, , 725, 1366-1372
[Bruno & Carbone(2013)]Bruno2013 Bruno, R., & Carbone, V. 2013, Living Rev. Solar Phys., 10, 2
[Candia & Roulet(2004)]Candia2004 Candia, J., & Roulet, E. 2004, J. Cosmol. Astropart. Phys., 10, 007
[Dalla et al.(2013)]dalla2013 Dalla, S., Marsh, M. S., Kelly, J., & Laitinen, T. 2013, , 118, 5979-5985
[Dalla et al.(2015)]dalla2015 Dalla, S., Marsh, M. S., & Laitinen, T. 2015, , 808, 62
[de Simone et al.(2011)]desimone2011 de Simone, N., di Felice, V., Gieseler, J., Boezio, M., Casolino, M., Picozza, P., Heber, B., & PAMELA Collaboration 2011, Astrophysics and Space Sciences Transactions, 7, 425-434
[Engelbrecht & Burger(2013a)]EB2013 Engelbrecht, N. E., & Burger, R. A. 2013a, , 772, 46-57
[Engelbrecht & Burger(2013b)]EB2013b Engelbrecht, N. E., & Burger, R. A. 2013b, , 779, 158
[Engelbrecht & Burger(2015a)]EB2015 Engelbrecht, N. E., & Burger, R. A. 2015a, Adv. Space Res., 55, 390-400
[Engelbrecht & Burger(2015b)]EB2015b Engelbrecht, N. E., & Burger, R. A. 2015b, , 814, 152
[Fisk & Schwadron(1995)]fs95 Fisk, L. A., & Schwadron, N. A. 1995, , 100, 7865-7871
[Forman et al.(1974)]formanetal1974 Forman, M. A., Jokipii, J. R., & Owens, A. J. 1974, , 192, 535-540
[Giacalone & Jokipii(1999)]gj1999 Giacalone, J., & Jokipii, J. R. 1999, , 520, 204-214
[Giacalone et al.(1999)]giacaloneetal1999 Giacalone, J., Jokipii, J. R., & Kota, J. 1999, Proc. Int. Conf. Cosmic Ray 26th (Salt Lake City), 7, 37-40
[Green(1951)]green Green, M. S. 1951, J. Chem. Phys., 19, 1036
[Heber et al.(1996)]heber1996 Heber, B., Dröge, W., Ferrando, P., Haasbroek, L. J., Kunow, R., Müller-Mellin, R., Paizis, C., Potgieter, M. S., Raviart, A., & Wibberenz, G. 1996, , 316, 538-546
[Jokipii(1993)]jokipii1993 Jokipii, J. R. 1993, Proc. Int. Conf. Cosmic Ray 23rd (Calgary), 3, 497
[Jokipii et al.(1977)]Jokipii1977 Jokipii, J. R., Levy, E. H., & Hubbard, W. B. 1977, , 213, 861-868
[Jokipii & Levy(1977)]jl1977 Jokipii, J. R., & Levy, E. H. 1977, , 213, L85-L88
[Jokipii & Kopriva(1979)]Jokipii1979 Jokipii, J. R., & Kopriva, D. A. 1979, , 234, 384-392
[Jokipii & Thomas(1981)]jt1981 Jokipii, J. R., & Thomas, B. 1981, , 243, 1115-1122
[Jokipii & Kota(1989)]jk1989 Jokipii, J. R., & Kota, J. 1989, , 16, 1-4
[Kota(1989)]k89 Kota, J. 1989, in Physics of the Outer Heliosphere, ed. S. Grzedzielski & D. E. Page (Oxford: Pergamon), 119
[Kota(2016)]kota Kota, J. 2016, Journal of Physics: Conference Series, 767, 012014
[Kubo(1957)]kubo Kubo, R. 1957, J. Phys. Soc. Jpn., 12, 570
[le Roux & Webb(2007)]leroux2007 le Roux, J. A., & Webb, G. M. 2007, , 667, 930-955
[Lockwood & Webber(2005)]lockwoodwebber2005 Lockwood, J. A., & Webber, W. R. 2005, , 110, 4102
[Manuel et al.(2011)]rex Manuel, R., Ferreira, S. E. S., Potgieter, M. S., Strauss, R. D., & Engelbrecht, N. E. 2011, Adv. Space Res., 47, 1529
[Matthaeus et al.(1995)]matthaeusetal1995 Matthaeus, W. H., Gray, P. C., Pontius Jr., D. H., & Bieber, J. W. 1995, , 75, 2136-2139
[Matthaeus et al.(2003)]Matthaeusetal2003 Matthaeus, W. H., Qin, G., Bieber, J. W., & Zank, G. P. 2003, , 590, L53
[Matthaeus et al.(2007)]2007_Mattheaus_etal_ApJ Matthaeus, W. H., Bieber, J. W., Ruffolo, D., Chuychai, P., & Minnie, J. 2007, , 667, 956-962
[Minnie et al.(2007a)]Minnieetal2007a Minnie, J., Bieber, J. W., Matthaeus, W. H., & Burger, R. A. 2007a, , 663, 1049-1054
[Minnie et al.(2007b)]Minnie_etal2007b Minnie, J., Bieber, J. W., Matthaeus, W. H., & Burger, R. A. 2007b, , 670, 1149-1158
[Ndiitwani et al.(2005)]Ndii Ndiitwani, D. C., Ferreira, S. E. S., Potgieter, M. S., & Heber, B. 2005, Ann. Geophys., 23, 1061
[Nndanganeni & Potgieter(2016)]rendani Nndanganeni, R. R., & Potgieter, M. S. 2016, Adv. Space Res., 58, 453
[Ngobeni & Potgieter(2015)]np2015 Ngobeni, M. D., & Potgieter, M. S. 2015, Adv. Space Res., 56, 1525-1537
[Palmer(1982)]palmer1982 Palmer, I. D. 1982, Rev. Geophys. Space Phys., 20, 335-351
[Parker(1965a)]parker1965 Parker, E. N. 1965a, Planet. Space Sci., 13, 9-49
[Parker(1965b)]park65 Parker, E. N. 1965b, Proc. Int. Conf. Cosmic Ray 9th (London), 1, 126
[Potgieter(1996)]Potgieter1996 Potgieter, M. S. 1996, , 101, 24411
[Potgieter & Burger(1990)]pb90 Potgieter, M. S., & Burger, R. A. 1990, , 233, 598
[Qin et al.(2002a)]qin2002a Qin, G., Matthaeus, W. H., & Bieber, J. W. 2002a, , 578, L117-L120
[Qin et al.(2002b)]qin2002b Qin, G., Matthaeus, W. H., & Bieber, J. W. 2002b, , 29(4), 1048
[Qin & Zhang(2014)]qz14 Qin, G., & Zhang, L.-H. 2014, , 787, 12
[Ruffolo et al.(2012)]ruffolo12 Ruffolo, D., Pianpanit, T., Matthaeus, W. H., & Chuychai, P. 2012, , 747, L34
[Shalchi(2006)]Shalchi2006a Shalchi, A. 2006, , 453, L43-L46
[Shalchi(2009)]shalchibook Shalchi, A. 2009, Nonlinear Cosmic Ray Diffusion Theories (Germany: Springer)
[Shalchi et al.(2004)]shalchietal2004 Shalchi, A., Bieber, J. W., & Matthaeus, W. H. 2004, , 604, 675-686
[Shalchi et al.(2010)]shalchietal2010 Shalchi, A., Li, G., & Zank, G. P. 2010, Astrophys. Space Sci., 325, 99-111
[Stawicki(2005)]Stawicki2005 Stawicki, O. 2005, , 624, 178-188
[Tautz & Shalchi(2012)]ts2012 Tautz, R. C., & Shalchi, A. 2012, , 744, 125
[Taylor(1922)]taylor Taylor, G. I. 1922, Proc. Lond. Math. Soc., 20, 196
[Teufel & Schlickeiser(2003)]ts2003 Teufel, A., & Schlickeiser, R. 2003, , 397, 15-25
[Usmanov et al.(2016)]usmanov Usmanov, A. V., Goldstein, M. L., & Matthaeus, W. H. 2016, , 820, 17
[Vos & Potgieter(2016)]ep2016 Vos, E. E., & Potgieter, M. S. 2016, Sol. Phys., 291, 2181
[Webber et al.(2005)]webberlockwood2005 Webber, W. R., Heber, B., & Lockwood, J. A. 2005, , 110, 12107
[Wiengarten et al.(2016)]tobias Wiengarten, T., Oughton, S., Engelbrecht, N. E., Fichtner, H., Kleimann, J., & Scherer, K. 2016, , 833, 17
[Weinhorst et al.(2008)]Weinhorst2008 Weinhorst, B., Shalchi, A., & Fichtner, H. 2008, , 677, 671-675
[Zank et al.(1996)]zanketal1996 Zank, G. P., Matthaeus, W. H., & Smith, C. W. 1996, , 101, 17093-17107
[Zhang(1997)]zhang1997 Zhang, M. 1997, , 488, 841-853
Classical and quantum Chaplygin gas Hořava-Lifshitz scalar-metric cosmology H. Ardehali^1, P. Pedram^1 [email protected], and B. Vakili^2 [email protected] ^1Department of Physics, Science and Research Branch, Islamic Azad University, Tehran, Iran ^2Department of Physics, Central Tehran Branch, Islamic Azad University, Tehran, Iran
====================================================================================================================================================================================================================================================================================

In this work, we study the Friedmann-Robertson-Walker cosmology in which a Chaplygin gas is coupled to a non-linear scalar field in the framework of the Hořava-Lifshitz theory. In writing the action of the matter part, we use Schutz's formalism, so that the only degree of freedom of the Chaplygin gas plays the role of an evolutionary parameter. In a minisuperspace perspective, we construct the Lagrangian for this model and show that, in comparison with the usual Einstein-Hilbert gravity, there are some correction terms coming from the Hořava theory. In such a set-up, and by using some approximations, the classical dynamics of the model is investigated and some discussion of its possible singularities is presented. We then deal with the quantization of the model in the context of the Wheeler-DeWitt approach to quantum cosmology to find the cosmological wave function. We use the resulting wave functions to investigate the possibility of the avoidance of classical singularities due to quantum effects.

PACS numbers: 04.50.Kd, 98.80.Qc, 04.60.Ds Keywords: Hořava-Lifshitz gravity, Quantum cosmology

§ INTRODUCTION

Various modern cosmological theories, such as grand unified theories, imply the existence of classical and semiclassical scalar fields <cit.>. From a cosmological viewpoint, scalar-tensor models, in which a non-minimal coupling appears between the space-time geometry and a scalar field, have attracted much attention <cit.>. This is due to the fact that various research areas in cosmology, such as the spatially flat and accelerated expanding universe at the present time <cit.>, inflation <cit.>, dark matter and dark energy <cit.>, and many other behaviors, can be explained phenomenologically by scalar fields. Cosmological models are usually described by a single scalar field with a canonical kinetic term of the form 1/2g^μν∂_μϕ∂_νϕ and a self-interaction potential V(ϕ), where the scalar field is often minimally coupled to gravity. However, in scalar-tensor theories, the scalar field is not simply added to the action. Indeed, it is added to the tensor gravitational field by a non-minimal coupling term <cit.>. In recent years, the so-called Hořava-Lifshitz (HL) gravity theory, presented by Hořava, has been shown to be power-counting renormalizable. It is based on the anisotropic scaling of space 𝐱 and time t as
𝐱→ b𝐱, t→ b^z t,
where b is a scaling parameter and z is the dynamical critical exponent. Notice that for z = 1 the standard relativistic scale invariance obeying Lorentz symmetry is recovered in the IR limit, whereas the UV gravitational theory implies z = 3 <cit.>.
Due to the asymmetry of space and time in the HL theory, it is common to use the Arnowitt-Deser-Misner (ADM) formalism to represent the space-time metric g_μν(t,𝐱) in terms of the three-dimensional metric γ_ab(t,𝐱), the shift vector N_a(t,𝐱) and the lapse function N(t,𝐱) as <cit.>
g_μν(t, x)=( [ -N^2(t, x)+N_a(t, x)N^a(t, x) N_b(t, x); N_a(t, x) γ_ab(t, x); ] ).
If the lapse function is a function of t only, the theory is projectable; otherwise, in the case where N is a function of (t, 𝐱), the theory is called non-projectable. General cases in which the lapse function is taken as a non-projectable function are studied in Ref. <cit.>. However, we assume the lapse function is constrained to be a function only of the time coordinate, N=N(t) <cit.>. The most general action for HL gravity (without the detailed balance condition) is given by S_HL=S_K+S_V, where S_K is the kinetic part,
S_K ∼∫ d^4𝐱 √(-g)(K_ijK^ij-λ K^2),
in which K_ij is the extrinsic curvature tensor (with trace K) defined by
K_ij=1/2N(γ̇_ij-∇_iN_j-∇_jN_i).
Also, for the potential part the following general form is proposed:
S_V=∫ d^4x √(-g)V[γ_ij],
in which
V[γ_ij] = g_0ζ^6+g_1ζ^4R+g_2ζ^2R^2+g_3ζ^2R_ijR^ij +g_4R^3+g_5RR_ijR^ij+g_6R^i_jR^j_kR^k_i +g_7R∇^2R+g_8∇_iR_jk∇^iR^jk.
The constants λ and g_i (i=0,1,...,8) in the above relations denote the HL corrections to the usual Einstein gravity, and ζ is introduced to make the constants g_i dimensionless. Under these conditions, the full HL action that we shall study is <cit.>
S_HL = M_PL^2/2∫_ℳd^4𝐱√(-g)[K_ijK^ij-λ K^2+R-2Λ -g_2/M_PL^2R^2-g_3/M_PL^2R_ijR^ij-g_4/M_PL^4R^3 -g_5/M_PL^4RR_ijR^ij -g_6/M_PL^4R_ijR^jkR^i_k -g_7/M_PL^4R∇^2R-g_8/M_PL^4∇_iR_jk∇^iR^jk],
in which M_PL=1/√(8π G) and we have set c=1, ζ=1, Λ=g_0M_PL^2/2 and g_1=-1.

All cosmological evidence has revealed that the universe is undergoing an accelerated expansion, which can be described by an exotic cosmic fluid, the so-called dark energy, one of the first models of which is the cosmological constant. On the other hand, scalar fields play an important role in unified theories of interactions and also in inflationary scenarios in cosmology. Indeed, a rich variety of dark energy and inflationary models can be accommodated phenomenologically by scalar fields, in which the inflatons produce the initial acceleration. Another attempt, originally raised in string theory <cit.>, is to change the equation of state from that of ordinary matter to that of the Chaplygin gas, an exotic fluid with negative pressure. The Chaplygin gas, as a candidate for explaining the current observation of cosmic acceleration, has been thoroughly investigated in recent years. The generalized Chaplygin gas with negative pressure is described by the exotic equation of state
P=-A/ρ^α,
where P is the pressure, A is a positive constant, and 0≤α≤1 is the equation of state parameter, such that α=1 denotes the standard Chaplygin gas <cit.>. In this sense, since string theory deals with high-energy phenomena such as the very early universe, considering Chaplygin gas quantum cosmology may have physical grounds. It is shown in <cit.> that the generalized Chaplygin gas (<ref>) can play the role of a mixture of cosmological constant and radiation, by means of which the cosmological dynamics shows a transition from a dust dominated era to a de Sitter phase, and thus it interpolates between dust matter and the cosmological constant. Cosmology with the generalized Chaplygin gas (<ref>) results in an expanding universe which begins from a non-relativistic matter dominated phase and ends at a cosmological constant dominated era <cit.>.
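The interpolating behaviour just described can be made explicit: inserting the equation of state (<ref>) into the continuity equation ρ̇+3(ȧ/a)(ρ+P)=0 gives the standard solution ρ(a)=[A+B a^-3(1+α)]^1/(1+α), with B an integration constant, which behaves like dust (ρ∼ a^-3) for small a and tends to the constant A^1/(1+α) for large a. The Python sketch below, with arbitrary illustrative values of A, B and α, simply verifies this numerically.

import numpy as np

A, B, alpha = 1.0, 1.0, 0.5     # illustrative values only

def rho(a):
    # Standard generalized Chaplygin gas solution of the continuity equation
    return (A + B * a ** (-3.0 * (1.0 + alpha))) ** (1.0 / (1.0 + alpha))

# Residual of d(rho)/da + 3*(rho + P)/a with P = -A/rho^alpha (finite differences)
a = np.linspace(0.5, 5.0, 2000)
drho = np.gradient(rho(a), a)
residual = drho + 3.0 * (rho(a) - A / rho(a) ** alpha) / a
print(np.max(np.abs(residual)))                  # small: solution checks out

print(rho(1e-3) * (1e-3) ** 3, B ** (1.0 / (1.0 + alpha)))   # dust-like limit
print(rho(1e3), A ** (1.0 / (1.0 + alpha)))                  # de Sitter-like limit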
Also, the idea of this fluid has been used to find a solution to the coincidence problem in cosmology <cit.>. Quantum cosmological models with a Chaplygin gas have been studied in Refs. <cit.>; especially in Ref. <cit.>, a scalar field is also added to the Chaplygin gas quantum cosmology and its effects are investigated. In summary, since the Chaplygin gas models are able to describe the smooth transition from a decelerated expansion to an accelerated universe, and also since they try to give a unified picture of dark matter and dark energy, one may use them as an alternative to the traditional ΛCDM models.

In this paper we consider a cosmological model in the framework of a projectable HL gravity without the detailed balance condition. A Chaplygin gas will play the role of the matter source, and a scalar field is coupled to the metric with a generic coupling function F(ϕ). The classical version of such models has been used to address the missing-matter problem in cosmology <cit.>, and their quantum cosmology is studied in Refs. <cit.>. Since our aim in the quantum part of the model is to investigate the time evolution of the wave function, we prefer to use the Chaplygin gas in the framework of the Schutz formalism <cit.>. In such a setup the Hamiltonian of the gas contains a linear momentum, the variable canonically conjugate to which may play the role of a time parameter (see Refs. <cit.> for details of this formalism).

The paper is organized as follows: In Sec. <ref>, we construct the action of HL gravity with a Chaplygin gas and a scalar field in terms of minisuperspace variables. In Secs. <ref> and <ref>, we approximate the super-Hamiltonian in the two cases Sp_ϵ^α+1≫ Aa^3(α+1) and Sp_ϵ^α+1≪ Aa^3(α+1) separately. The Schutz formalism for the Chaplygin gas allows us to obtain a Schrödinger-Wheeler-DeWitt (SWD) equation in which the only remaining matter degree of freedom plays the role of time. After choosing the coupling function between the scalar field and the metric as F(ϕ)=λϕ^m, we obtain the classical dynamics of the scale factor and scalar field in terms of the Schutz time parameter and see that they exhibit some types of singularities. We then deal with the quantization of the model and, by computing the expectation values of the scale factor and scalar field, we show that the evolution of the universe based on the quantum picture is free of classical singularities. Section <ref> is devoted to a summary and conclusions.

§ THE MODEL

The total action (without the detailed balance condition) of our model consists of three parts, namely the Hořava-Lifshitz gravity, scalar field and Chaplygin gas actions:
S=S_HL+S_ϕ+S_P,
where S_HL, S_ϕ and S_P are the Hořava-Lifshitz, scalar field and Chaplygin gas actions, respectively. Now, we expand them separately.

§.§ Hořava-Lifshitz action

The action for the projectable HL gravity without detailed balance is given in (<ref>). In a quasi-spherical polar coordinate system, we assume that the geometry of space-time is described by the FRW metric
ds^2 = g_μνdx^μdx^ν = -N^2(t)dt^2+a^2(t)[dr^2/(1-kr^2)+r^2(dϑ^2+sin^2ϑ dφ^2)],
in which N(t) is the lapse function, a(t) is the scale factor, and k=-1,0,+1 denotes the open, flat, and closed universes, respectively.
Now, in the language of the ADM variables, the above metric can be rewritten as
ds^2=-N^2(t)dt^2+γ_ijdx^idx^j,
where
γ_ij=a^2(t) diag(1/(1-kr^2), r^2, r^2sin^2ϑ)
is the intrinsic metric induced on the 3-dimensional spatial hypersurfaces, from which we obtain the Ricci and extrinsic curvature tensors as
R_ij=2k/a^2 γ_ij, K_ij=ȧ/Na γ_ij.
The gravitational part of the action for the model may now be written by substituting the above results into action (<ref>), giving
S_HL = 3(3λ-1)M^2_PLV_0/2∫ dt Na^3[-ȧ^2/N^2a^2 +6k/3(3λ-1)1/a^2 -2Λ/3(3λ-1)-12k^2/a^43g_2+g_3/3(3λ-1)M^2_PL -24k^3/a^69g_4+3g_5+g_6/3(3λ-1)M^4_PL] = ∫ dt N(-aȧ^2/N^2+g_ca-g_Λa^3-g_r/a-g_s/a^3),
where V_0=∫ d^3x r^2sinϑ/√(1-kr^2) is the integral over the spatial dimensions. Also, we have defined the coefficients g_c, g_Λ, g_r and g_s as
{[ g_c=6k/3(3λ-1),; g_Λ=2Λ/3(3λ-1),; g_r=12k^2(3g_2+g_3)/3(3λ-1)M^2_PL,; g_s=24k^3(9g_4+3g_5+g_6)/3(3λ-1)M^4_PL, ].
in which we have set 3V_0 M_PL^2(3λ-1)/2=1. Now, the gravitational part of the Hamiltonian for this model can be obtained by the standard procedure. Noting that
p_a=-2aȧ/N,
we get
H_HL = p_a ȧ-ℒ_HL = N(-p_a^2/4a-g_ca+g_Λa^3+g_r/a+g_s/a^3).

§.§ The Chaplygin gas

In the Schutz formalism, the four-velocity of a fluid can be expressed in terms of six scalar potentials as <cit.>
u_ν=1/μ(∂_νϵ+ϖ∂_νβ+θ∂_ν S),
where μ and S are the specific enthalpy and entropy, respectively, while the potentials ϖ and β are related to torsion and are absent in FRW models. The potentials ϵ and θ have no direct physical interpretation in this formalism. The four-velocity obeys the condition u_νu^ν=1. Hence, the four-velocity of the fluid in its rest frame reads
u_ν=Nδ^0_ν ⇒ μ=(ϵ̇+θṠ)/N.
Following the thermodynamical description of <cit.>, the basic thermodynamic relations of the Chaplygin gas are given by
ρ=ρ_0(1+Π), μ=1+Π+P/ρ_0,
where ρ_0 and Π are the rest mass density and the specific internal energy of the gas, respectively. These quantities, together with the temperature τ of the system, obey the first law of thermodynamics, which can be rewritten as
τ dS = dΠ+Pd(1/ρ_0) = 1/(1+α)(1+Π)^α d[(1+Π)^1+α-A/ρ_0^1+α],
where we have used the equation of state (<ref>). Therefore, the temperature and entropy of the gas are obtained as
τ=1/(1+α)(1+Π)^α, S=(1+Π)^1+α-A/ρ_0^1+α.
Now, we can express the energy density and pressure as functions of μ and S:
ρ = [1/A(1-μ^α+1/α/S^1/α)]^-1/(α+1),
P = -A[1/A(1-μ^α+1/α/S^1/α)]^α/(α+1).
Finally, with the help of these relations, the action of the Chaplygin gas takes the form
S_P = ∫ dtd^3x N√(γ)P = -A∫ dt Na^3[1/A(1-(ϵ̇+θṠ)^α+1/α/N^α+1/αS^1/α)]^α/(α+1).
Now, in terms of the conjugate momenta
[p_a=p_θ=0,; p_ϵ=a^3(ϵ̇+θṠ/NS)^1/α[1/A(1-(ϵ̇+θṠ)^α+1/α/N^α+1/αS^1/α)]^-1/(α+1),; p_S=θ p_ϵ, ]
the Chaplygin gas Hamiltonian can be written as follows:
H_P = (ϵ̇+θṠ)p_ϵ-ℒ_P = Na^3[1/A(1-(ϵ̇+θṠ)^α+1/α/N^α+1/αS^1/α)]^-1/(α+1) = N(Sp_ϵ^α+1+Aa^3(α+1))^1/(α+1).

§.§ The scalar field

As mentioned before, we consider a non-linear self-coupling scalar field, minimally coupled to gravity, with the coupling function F(ϕ). The action of such a scalar field is
S_ϕ=-M_PL^2/2∫ d^4x √(-g) F(ϕ)g^μν∂_μϕ∂_νϕ,
where, by substituting the metric (<ref>), one gets
S_ϕ=∫ dt 1/N F(ϕ)a^3ϕ̇^2.
Noting that the momentum conjugate to ϕ is
p_ϕ=2/N F(ϕ)a^3ϕ̇,
the Hamiltonian of the scalar field is obtained as
H_ϕ=N p_ϕ^2/4F(ϕ)a^3.
Now, we are ready to write the total Hamiltonian of our model as
H = H_HL+H_P+H_ϕ = N[-p_a^2/4a-g_ca+g_Λa^3+g_r/a+g_s/a^3 +p_ϕ^2/4F(ϕ)a^3 +(Sp_ϵ^α+1+Aa^3(α+1))^1/(α+1)].
The setup for constructing the phase space and writing the Lagrangian and Hamiltonian of the model is now complete.
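As a quick consistency check of the gravitational Hamiltonian above, the Legendre transformation can be carried out symbolically from the minisuperspace Lagrangian read off from the action, L_HL = -aȧ^2/N + N(g_c a - g_Λ a^3 - g_r/a - g_s/a^3). The SymPy sketch below reproduces the quoted form of H_HL; it is an illustration added here, not part of the original derivation.

import sympy as sp

a, adot, N = sp.symbols('a adot N', positive=True)
p_a, gc, gL, gr, gs = sp.symbols('p_a g_c g_Lambda g_r g_s')

L = -a * adot**2 / N + N * (gc * a - gL * a**3 - gr / a - gs / a**3)

pa_def = sp.diff(L, adot)                         # p_a = -2*a*adot/N
adot_sol = sp.solve(sp.Eq(p_a, pa_def), adot)[0]  # adot = -N*p_a/(2*a)

H = sp.simplify((p_a * adot - L).subs(adot, adot_sol))
target = N * (-p_a**2 / (4 * a) - gc * a + gL * a**3 + gr / a + gs / a**3)
print(sp.simplify(H - target))                    # prints 0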
However, the resulting classical (and quantum) equations of motion do not seem to have analytical solutions. To extract exact solutions, we first apply some approximations to the above Hamiltonian <cit.>, and then deal with the behavior of its classical and quantum pictures.

§ THE Sp_ϵ^α+1≫ Aa^3(α+1) LIMIT

In the early times of cosmic evolution, when the scale factor is small, we can use the following expansion <cit.>:
(Sp_ϵ^α+1+Aa^3(α+1))^1/(α+1) = S^1/(α+1)p_ϵ(1+Aa^3(α+1)/Sp_ϵ^α+1)^1/(α+1) = S^1/(α+1)p_ϵ(1+1/(α+1) Aa^3(α+1)/Sp_ϵ^α+1+…) ≃ S^1/(α+1)p_ϵ.
Therefore, the super-Hamiltonian takes the form
H=N(-p_a^2/4a-g_ca+g_Λa^3+g_r/a+g_s/a^3 +p_ϕ^2/4F(ϕ)a^3+S^1/(α+1)p_ϵ).
Now, consider the following canonical transformation <cit.>:
[ T=-(α+1)S^α/(α+1)p_ϵ^-1p_S,; p_T=S^1/(α+1)p_ϵ, ]
under the action of which Hamiltonian (<ref>) takes the form
H=N(-p_a^2/4a-g_ca+g_Λa^3+g_r/a+g_s/a^3 +p_ϕ^2/4F(ϕ)a^3+p_T).
We see that the momentum p_T is the only remaining canonical variable associated with the Chaplygin gas, and it appears linearly in the Hamiltonian.

§.§ The classical model

The classical dynamics of the system is governed by the Hamiltonian equation of motion q̇={q,H} for each variable. The result is
{[ ȧ=-Np_a/2a,; ṗ_a=N(-p_a^2/4a^2+g_c-3g_Λa^2+g_r/a^2+3g_s/a^4 +3p_ϕ^2/4Fa^4),; ϕ̇=Np_ϕ/2Fa^3,; ṗ_ϕ=Np_ϕ^2/4a^3 F'/F^2,; Ṫ=N,; ṗ_T=0→ p_T=const., ].
where F'=dF(ϕ)/dϕ. Up to this point the cosmological model has, of course, been under-determined with respect to the issue of time. Before trying to solve these equations, we must decide on a choice of time in the theory. The under-determinacy problem at the classical level may be resolved by using the gauge freedom, via fixing the gauge. A glance at the above equations shows that by choosing the gauge N=1 we have Ṫ=1⇒ T=t, which means that the variable T may play the role of time in the model. With this time gauge we obtain the following equation of motion for ϕ:
2ϕ̈/ϕ̇+F'/F ϕ̇+6ȧ/a=0.
This equation can easily be integrated to yield
F(ϕ)ϕ̇^2=Ca^-6,
where C is an integration constant. Also, eliminating the momenta from the system (<ref>) results in
ȧ^2+g_c-g_Λa^2-g_r/a^2-(g_s+C)/a^4-p_T/a=0,
in which we have used Eq. (<ref>). In general, this equation does not seem to have an exact solution, so we restrict ourselves to the special case in which g_c=g_Λ=g_r=0, g_s≠0, for which the solution to Eq. (<ref>) reads
a(t)=(9p_T t^2/4-(g_s+C)/p_T)^1/3.
What remains to be found is an expression for the scalar field ϕ(t). In the following, we shall consider the case of a coupling function of the form F(ϕ)=λϕ^m. With this choice for the function F(ϕ), and with the help of Eqs. (<ref>) and (<ref>), we are able to calculate the time evolution of the scalar field as
ϕ(t)=[ϕ_0-(m+2)/6 √(C/(g_s+C)λ) ln((3p_T t-2√(g_s+C))/(3p_T t+2√(g_s+C)))]^2/(m+2),
where ϕ_0 is an integration constant and we have assumed m≠ -2. Finally, to understand the relation between the big-bang singularity a→ 0 and the blow-up singularity ϕ→±∞, we are going to find a classical trajectory in the configuration space (a,ϕ), where the time parameter t is eliminated. From (<ref>) and (<ref>) one gets
ϕ^m(dϕ/da)^2=(Ca^-6/λ)(-g_c+g_Λa^2+g_r/a^2+(g_s+C)/a^4+p_T/a)^-1,
which for the case g_c=g_Λ=g_r=0, g_s≠0, after integration reads
ϕ(a)=[ϕ_0-(m+2)/6 √(C/(g_s+C)λ) ln((√(p_Ta^3+g_s+C)-√(g_s+C))/(√(p_Ta^3+g_s+C)+√(g_s+C)))]^2/(m+2).
We see that the evolution of the universe based on (<ref>) has big-bang-like singularities at t=± t_*, where t_*=2√(g_s+C)/3p_T. Indeed, the condition a(t)≥ 0 separates two sets of solutions, each of which is valid for t≤ -t_* and t≥ +t_*, respectively.
For the former, we have a contracting universe which decreases its size according to a power-law relation and ends its evolution in a singularity at t=-t_*, while for the latter, the evolution of the universe begins with a big-bang singularity at t=+t_* and then follows the power-law expansion a(t)∼ t^2/3 at late times of the cosmic evolution. On the other hand, the scalar field has a monotonically decreasing behavior, coming from ϕ→ +∞ at early times and reaching zero as time grows; see Fig. <ref>. We shall see in the next subsection how this classical picture may be modified if one takes quantum-mechanical considerations into account.

§.§ The quantum model

We now focus attention on the study of the quantum cosmology of the model described above. We start by writing the Wheeler-DeWitt equation from the Hamiltonian (<ref>). Since the lapse function appears as a Lagrange multiplier in the Hamiltonian, we have the Hamiltonian constraint H=0. Thus, application of the Dirac quantization procedure demands that the quantum states of the universe should be annihilated by the operator version of H, that is, HΨ(a,ϕ,T)=0, where Ψ(a,ϕ,T) is the wave function of the universe. Using the usual representation P_q → -i∂_q, we are led to the following SWD equation:
1/4a(∂^2/∂ a^2 +β/a ∂/∂ a)Ψ(a,ϕ,T)+ (-g_c a+g_Λa^3+g_r/a+g_s/a^3)Ψ(a,ϕ,T) -1/4Fa^3(∂^2/∂ϕ^2 +κ F'/F ∂/∂ϕ)Ψ(a,ϕ,T) =i∂Ψ(a,ϕ,T)/∂ T,
where the parameters β and κ represent the ambiguity in the ordering of the factors (a,P_a) and (ϕ,P_ϕ), respectively. This equation takes the form of a Schrödinger equation i∂Ψ/∂ T=HΨ, in which the Hamiltonian operator is Hermitian with the standard inner product
⟨Ψ_1|Ψ_2 ⟩=∫_(a,ϕ)dadϕ a Ψ^*_1Ψ_2.
We separate the variables in the above equation as Ψ(a,ϕ,T)=e^iETψ(a,ϕ), leading to
1/4a(∂^2/∂ a^2 +β/a ∂/∂ a)ψ(a,ϕ) -1/4Fa^3(∂^2/∂ϕ^2 +κ F'/F ∂/∂ϕ)ψ(a,ϕ) +(-g_c a+g_Λa^3+g_r/a+g_s/a^3+E)ψ(a,ϕ)=0,
where E is a separation constant. The solutions of the above differential equation are separable and may be written in the form ψ(a,ϕ)=A(a)Φ(ϕ), which yields
d^2A(a)/da^2+β/a dA(a)/da +4(-g_ca^2+g_Λa^4+g_r+(g_s+w)/a^2+Ea)A(a)=0,
d^2Φ(ϕ)/dϕ^2 +κ F'(ϕ)/F(ϕ) dΦ(ϕ)/dϕ+4wF(ϕ)Φ(ϕ)=0,
where w is another constant of separation. The factor-ordering parameters do not affect the semiclassical probabilities <cit.>, so in what follows we choose β=0 and κ=-1 to make the differential equations solvable. Upon substituting the relation F(ϕ)=λϕ^m into (<ref>), its solutions read, in terms of the Bessel functions J and Y, as
Φ(ϕ)= C_1 ϕ^(1+m)/2 J_(m+1)/(m+2)(4√(λ w)/(m+2) ϕ^(m+2)/2)+C_2 ϕ^(1+m)/2 Y_(m+1)/(m+2)(4√(λ w)/(m+2) ϕ^(m+2)/2)
for m≠-2, and
Φ(ϕ)= C_1 ϕ^(-1+√(1-16λ w))/2 + C_2 ϕ^(-1-√(1-16λ w))/2
for m=-2. Also, if we set (as in the classical solutions) g_c=g_Λ=g_r=0, Eq. (<ref>) admits the solution
A(a)=c_1√(a) J_ν(4/3√(E)a^3/2) +c_2√(a) Y_ν(4/3√(E)a^3/2),
where ν=1/3√(1-16(g_s+w)). Thus, the eigenfunctions of the SWD equation can be written as
Ψ_E,w(a,ϕ,T) = e^iETA(a)Φ(ϕ) = e^iET √(a) J_ν(4/3√(E)a^3/2) ϕ^(m+1)/2 J_(m+1)/(m+2)(4√(λ w)/(m+2) ϕ^(m+2)/2),
where we have chosen C_2=c_2=0 to have well-defined functions over the whole ranges of the variables a and ϕ. We may now write the general solution to the SWD equation as a superposition of the eigenfunctions, that is,
Ψ(a,ϕ,T) = ∫ dE dw f(E) g(w) Ψ_E,w(a,ϕ,T) = √(a) ϕ^(m+1)/2∫_0^w_0dw g(w) J_(m+1)/(m+2)(4√(λ w)/(m+2) ϕ^(m+2)/2) ×∫_0^∞dE f(E) e^iET J_ν(4/3√(E)a^3/2),
where w_0=1/16-g_s and f(E) and g(w) are suitable weight functions used to construct the wave packets.
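Before evaluating this superposition, one can verify numerically that the a-sector eigenfunctions indeed solve Eq. <ref>: with g_c=g_Λ=g_r=0 and β=0, A(a)=√(a) J_ν(4/3√(E)a^3/2) should satisfy A''+4(Ea+(g_s+w)/a^2)A=0. The finite-difference check below uses arbitrary illustrative parameter values (chosen so that 16(g_s+w)<1, keeping ν real).

import numpy as np
from scipy.special import jv

E, g_s, w = 2.0, 0.01, 0.02
nu = np.sqrt(1.0 - 16.0 * (g_s + w)) / 3.0

def A(a):
    return np.sqrt(a) * jv(nu, (4.0 / 3.0) * np.sqrt(E) * a ** 1.5)

a = np.linspace(0.2, 3.0, 4000)
h = a[1] - a[0]
Av = A(a)
A_dd = (Av[2:] - 2.0 * Av[1:-1] + Av[:-2]) / h**2       # second derivative
lhs = A_dd + 4.0 * (E * a[1:-1] + (g_s + w) / a[1:-1] ** 2) * Av[1:-1]
print(np.max(np.abs(lhs)))     # small: only finite-difference error remains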
By using the equality <cit.>
∫_0^∞dx e^-Zx^2 x^ν+1 J_ν(bx)=b^ν/(2Z)^ν+1 e^-b^2/4Z,
we can evaluate the integral over E in (<ref>); a simple analytical expression for this integral is found if we choose the function f(E) to be
f(E)=E^ν/2e^-σ E,
where σ is an arbitrary positive constant. With this procedure we get
Ψ(a,ϕ,T) = √(a) ϕ^(m+1)/2∫_0^w_0dw g(w) J_(m+1)/(m+2)(4√(λ w)/(m+2) ϕ^(m+2)/2) ×(4/3 a^3/2)^(1/3)√(1-16(g_s+w))/(2Z)^1+(1/3)√(1-16(g_s+w)) e^-4a^3/9Z,
where Z=σ-iT. To achieve a closed analytical expression for the wave function, we assume that the above superposition is taken over such values of w for which one can use the approximation √(1-16(g_s+w))≃√(1-16g_s), that is,
Ψ(a,ϕ,T) = √(a) ϕ^(m+1)/2 (4/3 a^3/2)^(1/3)√(1-16g_s)/(2Z)^1+(1/3)√(1-16g_s) e^-4a^3/9Z ×∫_0^w_0dw g(w) J_(m+1)/(m+2)(4√(λ w)/(m+2) ϕ^(m+2)/2).
Now, by using the equality <cit.>
∫_0^1dν ν^r+1(1-ν^2)^s/2 J_r(zν)=2^s Γ(s+1)/z^s+1 J_r+s+1(z),
and choosing the weight function
g(w)=(w/w_0)^(m+1)/2(m+2)(1-w/w_0)^s/2,
we are led to the following expression for the wave function:
Ψ(a,ϕ,T) =N a^(1+√(1-16g_s))/2/(σ-iT)^1+(1/3)√(1-16g_s) exp(-4a^3/9(σ-iT)) ×ϕ^(-1+(m+2)s)/2 J_(2m+3)/(m+2)+s(√((1-16g_s)λ)/(m+2) ϕ^(m+2)/2),
where N is a normalization coefficient. Now, having the above expression for the wave function of the universe, we are going to obtain the predictions for the behavior of the dynamical variables in the corresponding cosmological model. In general, one of the most important features in quantum cosmology is the recovery of classical cosmology from the corresponding quantum model or, in other words, the question of how the WD wave functions can predict a classical universe. In this approach, one usually constructs a coherent wave packet with good asymptotic behavior in the minisuperspace, peaking in the vicinity of the classical trajectory. On the other hand, in another approach to exhibiting the correlations between the classical and quantum patterns, following the many-worlds interpretation of quantum mechanics, one may calculate the time dependence of the expectation value of a dynamical variable q as
⟨ q⟩(t)=⟨Ψ|q|Ψ⟩/⟨Ψ|Ψ⟩.
Following this approach, we may write the expectation value for the scale factor as
⟨ a⟩(T) = ∫_a=0^∞∫_ϕ=-∞^+∞ da dϕ a^2 |Ψ|^2/∫_a=0^∞∫_ϕ=-∞^+∞ da dϕ a |Ψ|^2 = 3/2 Γ((4+√(1-16g_s))/3)/Γ((3+√(1-16g_s))/3) ((σ^2+T^2)/3σ)^1/3.
It is important to classify the nature of the quantum model as concerns the presence or absence of singularities. For the wave function (<ref>), the expectation value (<ref>) of a never vanishes, showing that these states are nonsingular. Indeed, the expression (<ref>) represents a bouncing universe with no singularity, whose late-time behavior coincides with the late-time behavior of the classical solution (<ref>), that is, a(t)∼ t^2/3. We have plotted this behavior in Fig. <ref>. As this figure shows, instead of the two separate contracting and expanding classical solutions, the quantum expectation value consists of two branches: in one branch the universe contracts and, when it reaches a minimum size, undergoes an expansion period. Therefore, we have a bouncing cosmology in which the bounce occurs at the classical singularity. In a similar manner, the expectation value for the scalar field reads
⟨ϕ⟩(T)=∫ da dϕ aϕ |Ψ|^2/∫ da dϕ a |Ψ|^2=const.
We see that the expectation value of ϕ does not depend on time. This result is comparable with those obtained in <cit.>, where a constant expectation value was obtained for the dilatonic field in a quantum cosmological model based on the string effective action coupled to matter.
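The contrast between the singular classical solution (<ref>) and the bouncing expectation value (<ref>) is easily visualised numerically. In the sketch below (all parameter values are illustrative choices, not values fixed by the text), the classical scale factor vanishes at t=± t_*, while ⟨a⟩(T) stays strictly positive, with its minimum at T=0.

import numpy as np
from math import gamma

p_T, g_s, C, sigma = 1.0, 0.05, 0.1, 1.0     # illustrative values only
t_star = 2.0 * np.sqrt(g_s + C) / (3.0 * p_T)

def a_classical(t):
    # Valid for |t| >= t_*; vanishes (big-bang-like singularity) at t = +/- t_*
    return (9.0 * p_T * t**2 / 4.0 - (g_s + C) / p_T) ** (1.0 / 3.0)

def a_quantum(T):
    # Expectation value <a>(T): strictly positive, bouncing at T = 0
    root = np.sqrt(1.0 - 16.0 * g_s)
    pref = 1.5 * gamma((4.0 + root) / 3.0) / gamma((3.0 + root) / 3.0)
    return pref * ((sigma**2 + T**2) / (3.0 * sigma)) ** (1.0 / 3.0)

print(a_classical(t_star))                    # ~0: classical singularity
print(a_quantum(0.0))                         # > 0: quantum bounce
print(np.round(a_quantum(np.linspace(-5, 5, 11)), 3))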
§ THE Sp_ϵ^α+1≪ Aa^3(α+1) LIMIT

Now, let us return to the Hamiltonian (<ref>), but this time expand it in the late-time limit Sp_ϵ^α+1≪ Aa^3(α+1) as
(Sp_ϵ^α+1+Aa^3(α+1))^1/(α+1) = A^1/(α+1)a^3(1+Sp_ϵ^α+1/Aa^3(α+1))^1/(α+1) = A^1/(α+1)a^3[1+1/(α+1) Sp_ϵ^α+1/Aa^3(α+1) +1/2 1/(α+1)(1/(α+1)-1)(Sp_ϵ^α+1/Aa^3(α+1))^2+…] ≃ A^1/(α+1)a^3+1/(α+1) A^-α/(α+1) Sp_ϵ^α+1/a^3α.
Therefore, the super-Hamiltonian takes the form
H=N(-p_a^2/4a-g_ca+g̅_Λa^3+g_r/a+g_s/a^3 +p_ϕ^2/4F(ϕ)a^3+1/(α+1) A^-α/(α+1) Sp_ϵ^α+1/a^3α),
where g̅_Λ=g_Λ+A^1/(α+1). Now, consider the following canonical transformation <cit.>:
[ T=-(α+1)A^α/(α+1)p_ϵ^-(α+1)p_S,; p_T=1/(α+1) A^-α/(α+1) Sp_ϵ^α+1, ]
under the action of which the above Hamiltonian becomes
H=N(-p_a^2/4a-g_ca+g̅_Λa^3+g_r/a+g_s/a^3 +p_ϕ^2/4F(ϕ)a^3+p_T/a^3α).
We may now repeat the steps taken in the previous section to obtain the classical and quantum cosmological dynamics based on the Hamiltonian (<ref>).

§.§ The classical model

With the Hamiltonian (<ref>), the classical equations of motion are
{[ ȧ=-Np_a/2a,; ṗ_a=N(-p_a^2/4a^2+g_c-3g̅_Λa^2+g_r/a^2+3g_s/a^4 +3p_ϕ^2/4Fa^4+3α p_T/a^3α+1),; ϕ̇=Np_ϕ/2Fa^3,; ṗ_ϕ=Np_ϕ^2/4a^3 F'/F^2,; Ṫ=N/a^3α,; ṗ_T=0→ p_T=const. ].
To have the clock parameter as T=t, we should choose the lapse function N=a^3α. Since the third and fourth equations of this system are the same as their counterparts in the system (<ref>), the dynamical equations for the scalar field are the same as Eqs. (<ref>) and (<ref>). Also, with the constraint equation H=0 we obtain
ȧ^2+a^6α(g_c-g̅_Λa^2-g_r/a^2-(g_s+C)/a^4 -p_T/a^3α+1)=0.
To solve this equation we suppose g_c=g̅_Λ=0 and g_r,g_s≠ 0, which simplifies the above equation to
ȧ^2=a^6α(g_r/a^2+(g_s+C)/a^4+p_T/a^3α+1).
This equation still does not have an exact solution for the general case with arbitrary α. So, from now on we restrict ourselves to the case α=1/3, for which the solution to Eq. (<ref>) is
a(t)=√((g_r+p_T)t^2-(g_s+C)/(g_r+p_T)).
By means of this relation, with the help of (<ref>) and (<ref>), and with the same details as in the previous section, we get the following expressions for ϕ(t) and ϕ(a):
ϕ(t)=[ϕ_0-(m+2)/4 √(C/(g_s+C)λ) ln(((g_r+p_T)t-√(g_s+C))/((g_r+p_T)t+√(g_s+C)))]^2/(m+2),
and
ϕ(a)=[ϕ_0+(m+2)/2 √(C/(g_s+C)λ) ln((√(g_s+C)+√((g_r+p_T)a^2+g_s+C))/a)]^2/(m+2).

§.§ The quantum model

The standard quantization process based on the Hamiltonian (<ref>) gives us the following SWD equation:
1/4a(∂^2/∂ a^2 +β/a ∂/∂ a)Ψ(a,ϕ,T)+ (-g_c a+g̅_Λa^3+g_r/a+g_s/a^3)Ψ(a,ϕ,T) -1/4Fa^3(∂^2/∂ϕ^2 +κ F'/F ∂/∂ϕ)Ψ(a,ϕ,T) =i/a^3α ∂Ψ(a,ϕ,T)/∂ T,
where β and κ are again factor-ordering parameters which, as before, we set as β=0 and κ=-1. This time the Hamiltonian operator is Hermitian with the inner product
⟨Ψ_1,Ψ_2⟩=∫_(a,ϕ)dadϕ a^1-3α Ψ^*_1Ψ_2.
Separation of the variables as Ψ(a,ϕ,T)=e^iETA(a)Φ(ϕ) leads to Eq. (<ref>), with solution (<ref>) for the ϕ-sector of the eigenfunctions, while for A(a) we arrive at the following equation (with g_c=g̅_Λ=0):
d^2A/da^2+4(g_r+(g_s+w)/a^2+E/a^(3α-1))A=0.
For α=1/3 this equation has the solutions
A(a)=c_1√(a) J_ν(2√(g_r+E)a)+c_2√(a) Y_ν(2√(g_r+E)a),
with ν=1/2√(1-16(g_s+w)). Therefore, the eigenfunctions of the corresponding SWD equation read
Ψ_E,w(a,ϕ,T)=e^iET √(a) J_ν(2√(g_r+E)a) ϕ^(m+1)/2 J_(m+1)/(m+2)(4√(λ w)/(m+2) ϕ^(m+2)/2),
in which we have again removed the Bessel functions Y from the solutions.
Following the same steps which led us to the wave function (<ref>), we obtain the wave function as
Ψ(a,ϕ,T) =N e^-ig_rT a^(1+√(1-16g_s))/2/(σ-iT)^1+(1/2)√(1-16g_s) exp(-a^2/(σ-iT)) ×ϕ^(-1+(m+2)s)/2 J_(2m+3)/(m+2)+s(√((1-16g_s)λ)/(m+2) ϕ^(m+2)/2),
from which the expectation values are obtained as
⟨ a⟩(T) = ∫ da dϕ a |Ψ|^2/∫ da dϕ |Ψ|^2 = Γ((3+√(1-16g_s))/2)/Γ((2+√(1-16g_s))/2) ((σ^2+T^2)/2σ)^1/2,
⟨ϕ⟩(T) = ∫ da dϕ ϕ |Ψ|^2/∫ da dϕ |Ψ|^2=const.
In Fig. <ref> we have plotted the classical scale factor (<ref>) and its quantum expectation value (<ref>). The comparison between the quantum cosmological solutions and their classical counterparts parallels that of the previous section, and a similar discussion applies to this case as well.

§ CONCLUSION

In this paper we have applied the Hořava theory of gravity to a FRW cosmological model minimally coupled to a scalar field, in which a generalized Chaplygin gas, in the context of the Schutz representation, plays the role of the matter field. The use of the Schutz formalism for the Chaplygin gas allowed us to introduce the only remaining matter degree of freedom as a time parameter in the model. After a very brief review of the HL theory of gravity, we considered a FRW cosmological setting in the framework of the projectable HL gravity without the detailed balance condition and presented its Hamiltonian in terms of the minisuperspace variables. Though the corresponding classical equations did not have exact solutions, we analyzed their behavior in the limiting cases of the early and late times of the cosmic evolution and obtained analytical expressions for the scale factor and the scalar field in these regions. We have seen that these solutions consist of two separate branches, each of which exhibits some kind of classical singularity. Indeed, the classical solutions have either contracting or expanding branches, which are disconnected from each other by some classically forbidden regions. The other part of the paper was devoted to the quantization of the model described above, in which we saw that the classical singular behavior is modified. In the quantum models, we showed that the SWD equation can be separated and its eigenfunctions can be obtained in terms of analytical functions. By an appropriate superposition of the eigenfunctions, we constructed the corresponding wave packets. Using the Schutz representation for the Chaplygin gas, under a particular gauge choice, we were led to the identification of a time parameter which allowed us to study the time evolution of the resulting wave function. Investigation of the expectation value of the scale factor shows a bouncing behavior near the classical singularity. In addition to the singularity avoidance, the appearance of a bounce in the quantum model is also interesting in its own right, due to the prediction of a minimal size for the corresponding universe. It is well known that the idea of the existence of a minimal length in nature is supported by almost all candidates for quantum gravity.

Acknowledgement: The research of P. Pedram is supported by the Iran National Science Foundation (INSF), Grant No. 93047987.

SF1 E.J. Copeland, M. Sami and S. Tsujikawa, Int. J. Mod. Phys. D 15 (2006) 1753 (arXiv:hep-th/0603057).
SF2 C. Brans and R.H. Dicke, Phys. Rev. 124 (1961) 925.
SF3 R.H. Dicke, Phys. Rev. 125 (1962) 2163.
SF4 T. Damour and G. Esposito-Fares, Class. Quantum Grav. 9 (1992) 2093.
SF5 N. Banerjee and D. Pavon, Phys. Rev. D 63 (2001) 043504.
SF6 A.G. Riess et al., Astron. J. 116 (1998) 1009.
SF7 B.P. Schmidt et al., Astrophys. J. 507 (1998) 46.
SF8 W. Chakraborty and U. Debnath, Role of Brans-Dicke Theory with or without self-interacting potential in cosmic acceleration (arXiv:0807.1776).
inflation1 A. Linde, Contemp. Concepts Phys. 5 (2005) 1 (arXiv:hep-th/0503203).
inflation2 D.H. Lyth and A. Riotto, Phys. Rep. 314 (1999) 1.
dark1 R.R. Caldwell, R. Dave and P.J. Steinhardt, Phys. Rev. Lett. 80 (1998) 1582.
dark2 T. Padmanabhan, Phys. Rep. 380 (2003) 235.
non-minimalcoupling Y. Fujii and K.-I. Maeda, The Scalar Tensor Theory of Gravitation, Cambridge University Press, Cambridge, 2003.
Horava1 P. Hořava, J. High Energy Phys. 0903 (2009) 020 (arXiv:0812.4287).
Horava2 P. Hořava, Phys. Rev. D 79 (2009) 084008 (arXiv:0901.3775).
Horava3 P. Hořava, Phys. Rev. Lett. 102 (2009) 161301 (arXiv:0902.3657).
Horava4 P. Hořava, Phys. Lett. B 694 (2010) 172 (arXiv:0811.2217).
ADM E. Gourgoulhon, 3+1 Formalism and Bases of Numerical Relativity (arXiv:gr-qc/0703035).
non-projectable1 D. Blas, O. Pujolas and S. Sibiryakov, Phys. Rev. Lett. 104 (2010) 181302 (arXiv:0909.3525).
non-projectable2 D. Blas, O. Pujolas and S. Sibiryakov, J. High Energy Phys. 04 (2011) 018 (arXiv:1007.3503).
vakili kord B. Vakili and V. Kord, Gen. Rel. Grav. 45 (2013) 1313 (arXiv:1301.0809).
Sotiriou T.P. Sotiriou, M. Visser and S. Weinfurtner, Phys. Rev. Lett. 102 (2009) 251601.
Pitelli Saa J.P.M. Pitelli and A. Saa, Phys. Rev. D 86 (2012) 063506.
John1 John D. Barrow, Phys. Lett. B 180 (1986) 335; John D. Barrow, Nucl. Phys. B 310 (1988) 743.
CG1 A.Y. Kamenshchik, U. Moschella and V. Pasquier, Phys. Lett. B 511 (2001) 265 (arXiv:gr-qc/0103004).
CG2 M.C. Bento, O. Bertolami and A.A. Sen, Phys. Rev. D 66 (2002) 043507 (arXiv:gr-qc/0202064).
Herrera R. Herrera, M. Olivares and N. Videla, Eur. Phys. J. C 73 (2013) 2295 (arXiv:1303.5658).
CG3 R. Jackiw, A Particle Field Theorist's Lectures on Supersymmetric, Non-Abelian Fluid Mechanics and d-Branes (arXiv:physics/0010042).
CG4 M.C. Bento, O. Bertolami and A.A. Sen, Phys. Rev. D 67 (2003) 063003 (arXiv:astro-ph/0210468).
CG5 R. Bean and O. Dore, Phys. Rev. D 68 (2003) 023515 (arXiv:astro-ph/0301308).
CG6 J.C. Fabris, S.V. Goncalves and P.E. de Souza, Gen. Rel. Grav. 34 (2002) 53 (arXiv:gr-qc/0103083).
CG8 N. Ogawa, Phys. Rev. D 62 (2000) 085023 (arXiv:hep-th/0003288).
CG9 G.M. Kremer, Gen. Rel. Grav. 35 (2003) 1459 (arXiv:gr-qc/0303103).
CG10 M.R. Setare, Phys. Lett. B 644 (2007) 99.
CG7 R. Colistete, J.C. Fabris, S.V. Goncalves and P.E. de Souza, Dark energy, dark matter and the Chaplygin gas (arXiv:gr-qc/0210079).
Pedram Jalalzadeh Gousheh P. Pedram, S. Jalalzadeh and S.S. Gousheh, Int. J. Theor. Phys. 46 (2007) 3201.
ardehali pedram H. Ardehali and P. Pedram, Phys. Rev. D 93 (2016) 043532.
Majumder B. Majumder, Phys. Lett. B 697 (2011) 101 (arXiv:1103.5543).
Saez Ballester D. Saez and V.J. Ballester, Phys. Lett. A 113 (1986) 467.
vakili B. Vakili, Phys. Lett. B 688 (2010) 129.
Socorro Sabido Urena-Lopez J. Socorro, M. Sabido and A. Urena-Lopez, Fizika B 19 (2010) 177 (arXiv:0904.0422).
schutz1 B.F. Schutz, Phys. Rev. D 2 (1970) 2762.
schutz2 B.F. Schutz, Phys. Rev. D 4 (1971) 3559.
Lapchinskii Rubakov V.G. Lapchinskii and V.A. Rubakov, Theor. Math. Phys. 33 (1977) 1076.
Bertolami Zarro O. Bertolami and C.A.D. Zarro, Phys. Rev. D 84 (2011) 044042 (arXiv:1106.0126).
Bouhmadi-Lopez Moniz M. Bouhmadi-Lopez and P.V. Moniz, Phys. Rev. D 71 (2005) 063521.
Bouhmadi-Lopez Gonzalez-Diaz Martin-Moruno M. Bouhmadi-Lopez, P.F. Gonzalez-Diaz and P. Martin-Moruno, Phys. Lett. B 659 (2008) 1.
Pedram Jalalzadeh P. Pedram and S. Jalalzadeh, Phys. Lett. B 659 (2008) 6. Hartle Hawking J.B. Hartle and S.W. Hawking, Phys. Rev. D 28 (1983) 2960. book1 M. Abramowitz and I.A. Stegun, Handbook of Mathematical Functions, Dover, New York, 1972. 14-1 F.G. Alvarenga, A.B. Batista and J.C. Fabris, Int. J. Mod. Phys. D 14 (2005) 291 (arXiv:gr-qc/0404034).
http://arxiv.org/abs/1705.09618v1
{ "authors": [ "H. Ardehali", "P. Pedram", "B. Vakili" ], "categories": [ "gr-qc" ], "primary_category": "gr-qc", "published": "20170525113241", "title": "Classical and quantum Chaplygin gas Hořava-Lifshitz scalar-metric cosmology" }
http://arxiv.org/abs/1705.09576v3
{ "authors": [ "Debabrata Sinha", "Satyaki Kar" ], "categories": [ "cond-mat.str-el", "cond-mat.mes-hall" ], "primary_category": "cond-mat.str-el", "published": "20170526132330", "title": "Andreev tunnelling and Josephson current in light irradiated graphene" }
Regrasp Planning using 10,000s of Grasps Weiwei Wan, Member, IEEE, and Kensuke Harada, Member, IEEE Weiwei Wan and Kensuke Harada are with National Institute of Advanced Industrial Science and Technology (AIST), Japan. Kensuke Harada is also affiliated with Osaka University, Japan. [email protected] December 30, 2023 =================================================================================================================================================================================================================================================================================== This paper develops intelligent algorithms for robots to reorient objects. Given the initial and goal poses of an object, the proposed algorithms plan a sequence of robot poses and grasp configurations that reorient the object from its initial pose to the goal. While the topic has been studied extensively in previous work, this paper makes important improvements in grasp planning by using over-segmented meshes, in data storage by using a relational database, and in regrasp planning by mixing real-world roadmaps. The improvements enable robots to perform robust regrasp planning using 10,000s of grasps and their relationships in interactive time. The proposed algorithms are validated using various objects and robots. Grasp Planning, Manipulation Planning, Reorient Objects, Preparatory Planning

§ INTRODUCTION

This paper develops intelligent algorithms for robots to reorient objects. Given the mesh models of objects, their initial and goal poses, and the kinematic and dimensional parameters of robots, the algorithms developed in this paper find a sequence of robot poses and grasp configurations that reorients the objects from their initial poses to the goals. The algorithms developed include grasp planning algorithms, placement planning algorithms, and graph searching and motion planning algorithms. Developing these intelligent algorithms is important for industrial robots. In factories, objects are sent to robots in boxes. The objects arrive in various poses. A robot is required to recognize the objects, pick them up, and reorient them to specific poses for particular uses. Examples include: (1) Packing items. Each item should be reoriented to the same orientation and packed into a box. (2) Assembly. Each part in an assembly should be reoriented to specific poses to fit the others. (3) Using tools. A tool should be reoriented to have its tool center point facing the target. These tasks require a robot to be equipped with both high-precision vision systems and robust grasp and manipulation planning systems. This paper studies the grasp and manipulation planning systems. It develops robust algorithms for regrasp using 10,000s of auto-planned grasps. The developed algorithms include: (1) A grasp planner which automatically plans available grasp configurations using the mesh models of objects. (2) A placement planner which automatically plans stable poses of an object on a planar surface (table top). (3) A regrasp planner which builds and searches regrasp graphs to generate a sequence of robot poses and grasp configurations that reorients objects from their initial poses to the goals. While these algorithms have been studied for thirty years and have also been extensively discussed and re-developed in several of our previous works, this paper makes important improvements to make them robust.
It leverages the computational ability of modern computers to deal with various robots, objects, and combinatorics. The main contributions are as follows. (1) We propose a grasp planning system which uses over-segmented mesh surfaces to sample contact points. The over-segmented surfaces provide more even segments and a more robust measurement of contact regions. (2) We employ an RDB (Relational DataBase) to manage the large amount of data generated by the planning algorithms. The RDB makes it easy to maintain the relationships among grasp configurations, placements, objects, and robots. It enables saving and retrieving gigabyte-level data to build regrasp graphs and to select grasps and placements all over a table in front of a robot. (3) We build the regrasp graph like a roadmap in a robot's workspace and search the graph to find a sequence of robot poses and grasp configurations that reorients the objects from their initial poses to the goals. The graph, together with contributions (1) and (2), makes it possible for different robots to reuse 10,000s of grasps and their relationships to reorient objects with various initial and goal poses in interactive time.

§ REORIENTING OBJECTS USING REGRASP PLANNING

The seminal work that studied reorienting objects using regrasp is <cit.>. The work motivated many researchers. <cit.><cit.><cit.><cit.> are some of the early publications that applied similar techniques to various robots and grippers. These early works concentrated on the regrasp aspect. Their grasp and motion planning was limited by the computational capacities of the time: the number of grasps was small and the grasp planning was based on block models, primitive matching, or manually selected values. More recent work involved better grasp and motion planning. For example, Xue et al. <cit.> used shape primitives <cit.> to plan grasps for a cup and implemented the regrasp and reorientation planning of the cup using multi-finger hands. Saut et al. <cit.> used decomposition to plan grasps and implemented dual-arm regrasp of complicated models. King et al. <cit.> used regrasp planning for preparatory reorientation; they implemented primitive-based prehensile and non-prehensile grasp planning to prepare for optimal motion planning. Simeon et al. <cit.> presented a framework which integrated motion planning and transfer-transit regrasp. Hauser et al. <cit.> made a concrete description of multi-modal motion planning and presented several implementations. Yoshida et al. <cit.> applied regrasp and motion planning to a humanoid robot that transported a box. Cohen et al. <cit.> developed algorithms to sequentially handle an object using several manipulators and regrasp. Nguyen et al. <cit.> developed algorithms for a WALK-MAN robot to reorient an electric drill, using some carefully selected grasps to make the manipulation robust. Chang et al. <cit.> studied the preparatory grasps of human beings and used non-prehensile re-grasp (pushing) to reorient pans and pots. Lertkultanon et al. <cit.> presented an integrated regrasp and motion planning system to reorient furniture parts. Their grasps were based on box primitives. Krontiris et al. <cit.> developed algorithms to rearrange objects. Their focus was on the high-level planning of the manipulation sequence. Similarly, Jentzsch et al. <cit.> used regrasp to solve multi-modal pick-and-place problems. Lee et al.
<cit.> also used non-prehensile grasps to plan sequential manipulation and reorientation. The essential algorithms for reorienting objects using regrasp include: (1) a grasp planner, (2) a placement planner, and (3) a regrasp planner. The grasp planner plans a redundant number of available grasps. The placement planner finds stable placements of the object in the environment. It also associates the grasps found by the grasp planner with the stable placements. Following the grasp planner and placement planner, the regrasp planner builds a regrasp graph by considering the shared grasps of the stable placements, connects the initial and goal poses to the graph by solving inverse kinematics and detecting collisions, and plans a sequence of robot poses and grasp configurations to reorient the object from its initial pose to the goal. This paper makes improvements to grasp planning and data management. It proposes an improved grasp planning system using over-segmented facets, employs a relational database to manage the large amount of data generated by the planning algorithms, and builds and searches regrasp graphs like a roadmap in a robot's workspace using the saved data and their relationships. These improvements make it possible for a robot to reuse 10,000s of grasps and relationships to reorient objects with various initial and goal poses in interactive time. To the best of our knowledge, this is the first work that reorients objects using such a large amount of data.

§ GRASP PLANNING USING OVER-SEGMENTED SURFACES

Over-segmentation: We plan grasps by over-segmenting an object mesh into a redundant number of facets. The pseudocode of the over-segmentation algorithm is shown in the upper part of Fig.<ref>. Compared with the conventional algorithm shown in the lower part, the over-segmentation algorithm allows overlap between facets by repeatedly examining all triangle meshes. The conventional algorithm removes the expanded triangle meshes during segmentation, leading to uneven segmentation. Fig.<ref> compares some results. Fig.<ref>(a) and (b) are segmented using a conventional segmentation method which does not allow overlapping facets; they are the results of two different thresholds (parameter τ in Fig.<ref>). Fig.<ref>(c) is the result of the proposed over-segmentation method. The conventional segmentation method has several disadvantages: (1) It is difficult to judge whether a facet is safe to touch. Take Fig.<ref>(d) and (e) for example. In Fig.<ref>(d), a curved mesh surface is segmented into several flat facets using the conventional segmentation method. Some of them are large, others are small. Once triangle meshes have been assigned to adjacent facets, the remaining ones will be small. Large facets are touchable, but small ones might be either touchable or untouchable: they are segmented independently of their surrounding facets, and their real touchability is difficult to judge. (2) It is difficult to tune the parameter τ which is used to decide coplanar triangle meshes. A large τ may result in non-planar facets. For example, the whole cylindrical surface in Fig.<ref>(f) is mistaken for one facet, yet it is not planar. In contrast, a small τ results in many small facets in Fig.<ref>(g), making it difficult to do surface sampling and compute parallel facet pairs. (3) The conventional segmentation method degrades the performance of grasp planning.
If a facet is too small to support a finger pad, there will be no available grasps gripping at that facet, leading to fewer automatically planned grasps and a low success rate during regrasp. For example, the facets in Fig.<ref>(h) are segmented by the conventional segmentation method. They are too small to support a finger pad and no grips on the facet are available (the red hand in Fig.<ref>(h) indicates an unavailable grasp). For this reason, the number of planned grasps using the conventional method is much smaller compared to the over-segmentation method (Fig.<ref>(i) vs. Fig.<ref>(j)). Considering these disadvantages, we propose grasp planning using over-segmentation. Since the facets are over-segmented, they include the information of a local region. Planning grasps using the over-segmented facets is more robust and complete.

Mesh sampling: The next step is to sample contact points on the facets. While probabilistically sampling points on each over-segmented facet is an intuitive method, it leads to redundancy. To avoid redundancy and repeated force-closure computation and collision detection, we pre-sample on the whole mesh surface and distribute the pre-sampled results to the over-segmented facets. One sampled point could be distributed to multiple facets. In this way, we use one sampling process to generate contact points for all over-segmented facets. Fig.<ref>(a) and (b) show the results of the surface sampling and the sampled points distributed to one over-segmented facet. The sampled points are further refined using the following filters. (1) Distance filter. A sampled point must be neither too near to nor too far from the boundary of the facet. If a sampled point is too near to the boundary, the finger pad touching that point might lie on an edge of the object, leading to unstable grasp configurations. On the other hand, if a sampled point is too far from the boundary, the palm of the hand might collide with the object. The distance filter removes these unstable and colliding grasp configurations. A result of the distance filter is shown in Fig.<ref>(c). (2) Near-neighbour filter. Two sampled points should not be too near to each other. Nearby sampled points increase the density of auto-planned grasp configurations, which is unnecessary and leads to high computational cost. We merge the sampled points in a region by representing them with a single point. The process is done using the fixed-radius nearest neighbour algorithm: the neighbours that fall inside the radius of a chosen sampled point are removed. The result of the near-neighbour filter is shown in Fig.<ref>(d).

Parallel facets: For a parallel gripper, the contact points of the two finger pads must be on two parallel facets. Therefore we compute the force-closure grasps by finding all parallel facets where the sampled points on one facet can be projected to the inner region of the other facet along its inverse normal direction. Some results of the parallel facets are shown in Fig.<ref>(e)-(h). The contact points, their projections, and the normals of the contact points and projections are illustrated using colored arrows. One sampled point together with its projection on one of its parallel facets is called a pair of contact points. For each pair of contact points, we pose the two finger pads on them and sample the rotation of the hand around the axis passing through the contact pair.
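As an aside, this rotation sampling can be sketched as follows. This is our illustration (assuming numpy and scipy), not the authors' implementation; in particular, aligning the hand x-axis with the grasp axis and the eight-sample default are assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def sample_hand_rotations(p, q, n_samples=8):
    """Sample candidate hand frames about the axis through a contact pair.

    p, q: the sampled contact point and its projection on the parallel
    facet (3-vectors). Returns 4x4 homogeneous poses; aligning the hand
    x-axis with the grasp axis is a modeling assumption of this sketch.
    """
    axis = (q - p) / np.linalg.norm(q - p)
    center = 0.5 * (p + q)
    # Any frame whose first column is the grasp axis serves as a reference.
    up = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(up, axis)) > 0.99:          # grasp axis nearly vertical
        up = np.array([0.0, 1.0, 0.0])
    y = np.cross(axis, up)
    y /= np.linalg.norm(y)
    z = np.cross(axis, y)
    base = np.column_stack((axis, y, z))      # 3x3 reference orientation
    poses = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False):
        pose = np.eye(4)
        # Spin the reference frame about the grasp axis; the axis stays fixed.
        pose[:3, :3] = R.from_rotvec(theta * axis).as_matrix() @ base
        pose[:3, 3] = center
        poses.append(pose)
    return poses
```

Each candidate pose would then be subjected to the gravity-torque and collision filters described next.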
An example of the sampled rotations is shown in Fig.<ref>(m) (ignore the color for this subsection; the rotated hands are gripping at a pair of contact points shown in Fig.<ref>(k)).

Resistance to gravity torques: The resistance of a grasp to gravity torques is measured using the distance between the contact pair and the com (center of mass) of the object. Since the object will be reoriented during manipulation, the maximum gravity torque would be mg|p_com − p_grp|, where m is the mass of the object, g is the gravitational constant, p_com is the center of mass, and p_grp is the center of the contact pair. If |p_com − p_grp| is larger than a threshold, the contact pair is judged to be unstable during reorientation, and the candidate grasp configurations at the contact pair are removed.

Collision detection: Two levels of collision detection are used to remove colliding grasps. The first level uses the swept volumes of the finger pads as the gripper closes to detect collisions between the finger pads and the object. The swept volumes are modeled as two cylinders, since cylinder models are invariant to rotation around the axis passing through the contact pair and only one collision detection is needed. Some examples of the first-level collision detection are shown in Fig.<ref>(i)-(l). The second level uses the model of the robotic hand to remove collisions between the whole hand and the object. This collision detection is performed at each sampled rotation around the axis passing through the contact pair. An example is shown in Fig.<ref>(m) and (n), where the red hands indicate the colliding grasp configurations and the white hands indicate the collision-free grasp configurations. The first level of collision detection is fast and reduces the need to check hand-object collisions at some contact pairs in the second level. Together they expedite the collision detection process. A fast grasp planner that plans robust grasp configurations to grasp objects of various shapes can be implemented using the aforementioned algorithms.

§ USING RDB TO MANAGE THE PLANNED RESULTS

The auto-planned grasps, together with the stable placements and other pre-computed results, are saved in a relational database for reuse and analysis. An RDB (Relational DataBase), rather than a file system, is used to help process the large amount of data and their relationships. The ERG (Entity Relation Graph) of the database is shown in Fig.<ref>. The database is composed of ten tables named object, robot, freeairgrip, freetabletopplacement, freetabletopgrip, angle, tabletopplacements, tabletopgrips, ikret, and ik, respectively. The contents of the tables are shown in Fig.<ref>. The object and robot tables save the names of the objects and robots. Each of their rows has a primary key (an id column) and a second column storing the name. The freeairgrip table saves the grasp configurations in the local coordinate systems of the objects, without considering surrounding obstacles. The freeairgrip table has a foreign key pointing to the id of object; it has a 1:n relationship with object (Fig.<ref>). The freetabletopplacement table saves the placements of objects on a table. It is named free since the horizontal coordinates are always at (0,0) and the rotation around the vertical axis is always 0 (the placements are ready to be displaced and rotated freely). The pose of each placement is saved in a pose column of freetabletopplacement. The freetabletopplacement table has a 1:n relationship with the object table and has a foreign key pointing to the id of object.
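As a rough illustration of these tables and their 1:n relationships, the following sketch declares a fragment of such a schema. The table names follow the text, but all column names are illustrative assumptions (the paper's exact column names are not reproduced here), and SQLite stands in for the production RDB.

```python
import sqlite3

# Illustrative fragment of the schema; all column names are assumptions.
schema = """
CREATE TABLE object (idobject INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE robot  (idrobot  INTEGER PRIMARY KEY, name TEXT);

-- Grasps in the object's local frame, ignoring obstacles (1:n with object).
CREATE TABLE freeairgrip (
    idfreeairgrip  INTEGER PRIMARY KEY,
    idobject       INTEGER REFERENCES object(idobject),
    contactpoint0  TEXT, contactpoint1  TEXT,   -- serialized 3-vectors
    contactnormal0 TEXT, contactnormal1 TEXT,
    handpose       TEXT,                        -- serialized 4x4 pose
    jawwidth       REAL
);

-- Stable placements at (0,0) with zero yaw (1:n with object).
CREATE TABLE freetabletopplacement (
    idfreetabletopplacement INTEGER PRIMARY KEY,
    idobject INTEGER REFERENCES object(idobject),
    pose     TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(schema)
```

The foreign keys make the 1:n relationships explicit and let a planner retrieve, for example, all free-air grasps of a given object with a single join.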
The freetabletopgrip table saves the available grasp configurations of freetabletopplacement. Its columns include the two contact points, the two contact normals, the pose of the hand, and the opening width of the jaw. It has 1:n relationships with freeairgrip and freetabletopplacement. The grasp configurations of freetabletopplacement are based on the freeairgrip table. We do not re-plan them, but transform the freeairgrip entries to the coordinate systems of the freetabletopplacement. We re-compute the availability of the transformed grasp configurations using collision detection, and save the available ones to freetabletopgrip. The tabletopplacements table also saves the placements of objects on a table. Compared with freetabletopplacement, the horizontal coordinates are not fixed to (0,0); they are discretized to specific positions. Also, the rotation around the vertical axis is discretized to specific values saved in the angle table. The tabletopplacements table is re-computed from the freetabletopplacement table, and therefore has a 1:n relationship with freetabletopplacement. It also has a 1:n relationship with the angle table. The tabletopgrips table is similar to the freetabletopgrip table. It saves the available grasp configurations of tabletopplacements. The tabletopgrips table is re-computed using freeairgrip and has 1:n relationships with freeairgrip and tabletopplacements. The ik table saves the feasibility of the grasp configurations in tabletopgrips with respect to specific robots. It therefore has 1:n relationships with tabletopgrips and robot. The primary key of ik is composed of two foreign keys: the id of robot and the id of tabletopgrips. It saves the feasibility of the grasp configurations in a feasibility column, and also saves several other feasibility flags after pre-defined retractions. For example, one column stores the feasibility of IK after retracting the hand configurations along their x directions; another stores the feasibility of IK after first retracting the hand configurations along their x directions and then along the z direction of the world. The retraction distances are pre-defined and saved in the ikret table. Fig.<ref> visualizes some of the tables. Fig.<ref>(a) and (b) show the freeairgrip and freetabletopgrip of an electric drill object. Fig.<ref>(c) and (d) show two rotated tabletopgrips. Fig.<ref>(e)-(g) show some feasible IKs with respect to a humanoid robot with an 8-DoF (Degree of Freedom) arm (name: HRP5P).

§ ROADMAP-BASED REGRASP GRAPH

The data and relationships saved in the RDB make it easy to build the graph. Using them, we can analyze the combinatorics of grasps and placements, and build regrasp graphs over a table in front of a robot. The graph encodes the positions all over the table like a roadmap (Fig.<ref>).

Building the graph: The nodes of the graph come from the tabletopgrips table. Each node indicates one grasp configuration. The edges of the graph are converted from the relationships of tabletopplacements, tabletopgrips, and freeairgrip. Two nodes are connected using a transfer edge when the two rows of tabletopgrips share the same freeairgrip id, which means a robot could grasp the object at one placement (one row of tabletopplacements) using a grasp configuration and transfer the object to another placement (another row of tabletopplacements) using the same grasp configuration. Two nodes are connected using a transit edge when the two rows of tabletopgrips share the same placement id, which means a robot could release the object grasped by one grasp configuration (the node at one end of the edge), transit to a different grasp configuration (the node at the other end of the edge), and grasp the object again using the second grasp configuration.
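In code, the roadmap construction and query might be sketched as follows; this is our illustration using networkx, and the tuple projection of the tables is assumed for brevity.

```python
import networkx as nx

def build_regrasp_graph(tabletopgrips):
    """Build a regrasp roadmap from rows of the tabletopgrips table.

    tabletopgrips: iterable of (grip_id, placement_id, freeairgrip_id)
    tuples, an illustrative projection of the table described above.
    """
    g = nx.Graph()
    by_placement, by_freegrip = {}, {}
    for grip_id, placement_id, freeairgrip_id in tabletopgrips:
        g.add_node(grip_id)
        by_placement.setdefault(placement_id, []).append(grip_id)
        by_freegrip.setdefault(freeairgrip_id, []).append(grip_id)
    # Transit edges: same placement, different grasp (release and re-grasp).
    for grips in by_placement.values():
        g.add_edges_from((a, b) for i, a in enumerate(grips) for b in grips[i + 1:])
    # Transfer edges: same freeairgrip shared across placements (carry the object).
    for grips in by_freegrip.values():
        g.add_edges_from((a, b) for i, a in enumerate(grips) for b in grips[i + 1:])
    return g

# After connecting the IK-feasible initial and goal grasps to the roadmap,
# a manipulation sequence corresponds to a path, e.g.:
# nx.shortest_path(g, source=init_grip, target=goal_grip)
```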
One example of the regrasp graph using tabletopplacements is shown in Fig.<ref>. There are 2912 placements in the graph. These placements are at 91 positions on the table in front of a robot. At each position, the object can be posed at 4 different stable placements with 8 discretized rotations around the vertical axis. The available grasps of a placement (a stable placement at a specific orientation) are connected to each other using transit edges (cyan). The shared grasps of different placements are connected to each other using transfer edges (black).

Searching the graph: Once built, the regrasp graph can be used repeatedly for the same object. A vision system detects the initial pose of the object, and the user inputs the goal pose. The graph searching algorithm computes the available grasps of the initial and goal poses and connects them to the roadmap. An example is shown in Fig.<ref>. The initial placement and its available grasps are connected to the graph using red edges. The goal placement and its available grasps are connected using blue edges. The graph searching algorithm finds a path (green) from one of the initial grasps to one of the goal grasps. There might be several candidate paths, which can be selected among using further criteria.

§ EXPERIMENTS AND EXPERIENCES

The proposed algorithms are validated using various objects and robots. The objects used include (Fig.<ref>): (1) a plastic tube (TU), (2) a toy plane body (PB), (3) a toy plane wheel (PW), (4) a toy plane support (PS), (5) a toy plane tail (PT), and (6) an electric drill (ED). The robots used include: (1) HRP5P, a humanoid robot developed by our institute, and (2) Kawada Nextage, a commercially available industrial dual-arm robot. For each robot, a single arm is used. The processor of our computer is an Intel Xeon 2.8GHz, and its graphics card is an NVIDIA Quadro M3000M. The tasks used to validate the algorithms are shown in Fig.<ref>. The robots need to reorient the objects from the poses rendered in solid yellow to the poses in transparent yellow.

Computational cost: The size of the database and the time costs of the various planning algorithms are shown in Table <ref>. The meanings of the abbreviations are given in the footnote of the table. The columns before the vertical separator give the volume of the data and the general cost to prepare the data. In particular, the columns colored in gray are fully off-line. The columns colored in brown are flexible, depending on how complete practitioners would like the planner to be: for a table with a fixed height, they can be fully off-line; for tables with varying heights, they must be recomputed. 10,000s of grasps are planned and saved for regrasp planning (see the #-tpg column). The columns after the vertical separator show the specific costs of the tasks in Fig.<ref>. The times used to compute the IK-feasible grasps at the initial and goal poses, connect the grasps to the regrasp graph, and search the graph for the two robots are shown in the gs_n (Nextage) and gs_h (HRP5P) columns.
They are in interactive time for the shown data volume. The values of gs depend on #-fpg and #-tpg (the columns colored in light purple). For example, the PW task has more #-fpg and #-tpg and therefore costs more. The PB task has few #-fpg and #-tpg and costs less than 0.2s (its success rate is lower). The numbers of regrasps are shown in the nr_r (Nextage) and nr_h (HRP5P) columns. The Nextage robot has a better kinematic design for reorienting the objects in some tasks: it used 1, 0, and 1 regrasps to reorient objects TU, PW, and PT, respectively. In contrast, the HRP5P robot used 3, 1, and 3 regrasps to reorient them. Also, one arm of HRP5P has 8 DoFs (including the shoulder and waist) and costs more for connecting and searching; one arm of Nextage has 7 DoFs.

Planned sequences: Some planned manipulation sequences for the two robots to reorient object PT (task Fig.<ref>(PT)) are shown in Fig.<ref> and Fig.<ref>. The Nextage robot has more flexibility than the HRP5P robot in this workspace; it used one regrasp to finish the task, whereas the HRP5P robot used three regrasps. The perspective view of Fig.<ref>(b-4) is shown in the upper left corner of Fig.<ref>. The robot is at a pose which is not reachable by HRP5P.

Experiences: During the implementation, we carefully designed several parameters including: (1) the density of surface sampling, (2) the stability of a placement, and (3) the resistance to torque caused by gravity. The density of surface sampling is crucial to the number of automatically planned grasps and to computational feasibility. The stability is essential to reduce accumulated errors and achieve a high success rate during regrasp. The resistance to torque caused by gravity is important to the certainty of grasps and to stability during reorientation. To make them general, we computed the density of sampling using the size of the mesh surfaces, computed the stability of a placement using the ratio between the height of the com and the distance to the boundary of the supporting polygon, and filtered the resistance to torque caused by gravity by thresholding the distance between the contact center and the com of the object. These strategies are adaptive to varying model geometry (see Fig.<ref> and the #-tri column of Table <ref>), but they cannot adapt to varying physical properties like Coulomb friction coefficients, uneven density, etc. Dealing with uncertainties caused by these physical properties is an open problem.

§ CONCLUSIONS

In this paper, we developed intelligent algorithms for robots to reorient objects using 10,000s of grasps. We developed robust grasp planning algorithms to plan the grasps and used an RDB to manage the automatically planned data. These data were reused during regrasp planning to build regrasp roadmaps and find robot-pose and grasp-configuration sequences that reorient objects. Experiments showed that the developed algorithms, with the support of the database, can reuse 10,000s of grasps to reorient objects at various poses in interactive time. We conclude that (1) the grasp planning algorithms are robust and find more grasps, (2) the relational database successfully manages the large amount of data generated by the planning algorithms, and (3) the algorithms leverage modern computational ability to handle the relationships and the combinatorics of the data. They are applicable to various robots and objects.

§ ACKNOWLEDGMENT

The paper is based on results obtained from a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO).
http://arxiv.org/abs/1705.09400v1
{ "authors": [ "Weiwei Wan", "Kensuke Harada" ], "categories": [ "cs.RO" ], "primary_category": "cs.RO", "published": "20170526001333", "title": "Regrasp Planning using 10,000s of Grasps" }
Department of Physics, William & Mary, Williamsburg, Virginia 23185

Large eddy simulations (LES) of a lattice Boltzmann magnetohydrodynamic (LB-MHD) model are performed for the unstable magnetized Kelvin-Helmholtz jet instability. This algorithm is an extension of Ansumali et al. <cit.> to MHD, in which one first performs an expansion in the filter width on the kinetic equations, followed by the usual low Knudsen number expansion. These two perturbation operations do not commute. Closure is achieved by invoking the physical constraint that subgrid effects occur at transport time scales. The simulations are in very good agreement with direct numerical simulations. lattice Boltzmann MHD large eddy simulations Kelvin-Helmholtz instability

A large eddy lattice Boltzmann simulation of magnetohydrodynamic turbulence George Vahala December 30, 2023 ===========================================================================

§ INTRODUCTION

Recently <cit.> we derived a first-principles two-dimensional (2D) magnetohydrodynamic (MHD) large eddy simulation (LES) model based on first filtering the lattice Boltzmann (LB) representation of MHD <cit.>, after which one applies the Chapman-Enskog limits to recover the final LES-MHD fluid equations. In essence, we extended to MHD the 2D Navier-Stokes (NS) LES-LB model of Ansumali et al. <cit.>, who exploited the non-commutativity of these two operations. (Of course, if one first applied the Chapman-Enskog limit to LB and then filtering, one would land in the conventional quagmire of an LES closure problem.) A technical difficulty with the Ansumali et al. model is that in 2D NS there is an inverse energy cascade to large spatial scales, thereby rendering subgrid modeling non-essential. In 2D MHD, however, the energy cascades to small spatial scales as in 3D, which makes it attractive to perform LES-LB-MHD simulations in which there can be a substantial amount of excited subgrid modes. Here we present some preliminary LES-LB-MHD simulations of our model and compare the results with some direct numerical simulations (DNS). As Ansumali et al. <cit.> did not perform any simulations on their LES-LB-NS model, these are the first such LES-LB-MHD simulations in which one first filters the underlying LB representation and then applies the conventional small Knudsen number expansion.

The backbone of any LES <cit.> is the introduction of a spatial filter function to smooth out field fluctuations on the order of the filter width Δ. Thus for the mean velocity

𝐮̅(r⃗,Δ) = ∫_-∞^∞ 𝐮(r⃗' − r⃗) G(r⃗',Δ) dr⃗'.

In general, the filtering results in the standard closure problem. Previous LB-LES-NS modeling <cit.> has first considered the Chapman-Enskog expansion followed by filtering, and has thus concentrated on the Smagorinsky closure for the subgrid stresses. It has been pointed out <cit.> that in the conventional NS-LES closure, the subgrid stresses are assumed to be in equilibrium with the filtered strain.
However, in LB-LES-NS the stresses relax towards the filtered strain at a rate dictated by the current eddy-viscosity, thereby permitting some spatio-temporal memory effects that are absent when applying LES directly to the continuum equations. In essence this gives an edge to any LB approach. In the Ansumali approach, however, one first performs a perturbation expansion in the filter width Δ followed by the standard LB Chapman-Enskog expansion in the Knudsen number Kn. These two perturbation expansions do not commute. Closure is now achieved by making the physically plausible assumption that eddy transport effects occur at the transport time scale, and this results in the scaling Δ ≃ Kn^1/2. One still retains the LB effects of spatio-temporal memory as noted earlier <cit.>.

§ LES-LB-MHD MODEL

For completeness we briefly review the essentials of our LES-LB-MHD model <cit.>, which yields the following closed set of filtered MHD equations (without further approximations) for the filtered density ρ̅, momentum ρ𝐮 and magnetic field 𝐁 in multiple-relaxation-time (MRT) form:

∂_t ρ̅ + ∇·ρ𝐮 = 0,  ∇·𝐁 = 0

∂_t(ρ𝐮) + ∇·( ρ𝐮 ρ𝐮 / ρ̅ ) = −∇p̅ + ∇·(𝐁𝐁) − (1/2)∇(𝐁·𝐁) + ( ξ + ν/3 ) ∇(∇·ρ𝐮) + ν∇²ρ𝐮
− ∇·{ [6ν/(6ν+1)] [Δ²/(12ρ̅)] [ (∂_β(ρ𝐮))(∂_β(ρ𝐮)) − (∂_β p̅/p̅)( ρ𝐮(∂_β(ρ𝐮)) + (∂_β(ρ𝐮))ρ𝐮 − ρ𝐮 ρ𝐮 ∂_β p̅/p̅ ) ] }
− ∇{ ( s_4/4 + s_7/20 − 3s_8/10 ) [Δ²/(12ρ̅)] [ (∂_β(ρ𝐮))·(∂_β(ρ𝐮)) − (∂_β p̅/p̅)( 2ρ𝐮·(∂_β(ρ𝐮)) − ρ𝐮·ρ𝐮 ∂_β p̅/p̅ ) ] }
− [6ν/(6ν+1)] (Δ²/12) { (1/2)∇[ (∂_β𝐁)·(∂_β𝐁) ] − ∇·[ (∂_β𝐁)(∂_β𝐁) ] }

∂_t𝐁 = ∇( ρ𝐮 𝐁 / ρ̅ ) + η∇²𝐁 + ∇[ [Δ²/(12ρ̅)] [6η/(6η+1)] { (∂_β(ρ𝐮))(∂_β𝐁) − (∂_β p̅/p̅)( (ρ𝐮)(∂_β𝐁) + (∂_β(ρ𝐮))𝐁 − (∂_β p̅/p̅)(ρ𝐮)𝐁 ) } ]

where s_3 … s_8 are relaxation rates; in this isothermal model the pressure is directly related to the density, p̅ = ρ̅c_s² = ρ̅/3 in lattice units (c_s is the sound speed). The transport coefficients (shear viscosity ν, bulk viscosity ξ and resistivity η) are determined from the MRT relaxation rates for the particle distribution function (the s's) and the single relaxation rate s_m for the magnetic distribution function:

ν = 1/(3s_3) − 1/6 = 1/(3s_4) − 1/6,  ξ = −1/9 − 1/(9s_4) − 1/(15s_7) + 2/(5s_8),  η = 1/(3s_m) − 1/6.

We now summarize the computational LB-LES-MHD model that underlies Eqs. (<ref>). For 2D MHD, we consider an LB model on a 9-bit lattice:

( ∂_t + c_γi ∂_γ ) f_i = ∑_j s'_ij ( f_j^(eq) − f_j ),  i = 0 … 8
( ∂_t + c_γi ∂_γ ) g⃗_i = s'_m ( g⃗_i^(eq) − g⃗_i ),  i = 0 … 8

with the moments ∑_i f_i = ρ, ∑_i f_i c⃗_i = ρu⃗, and ∑_k g⃗_k = B⃗. Here the summation convention is employed on the vector (Greek) indices of the fields, while Roman indices label the lattice vectors of the kinetic velocities c⃗_i, with no implied summation. The lattice is just the axes and diagonals of a square (along with the rest particle i = 0). s'_ij is the MRT collisional relaxation rate tensor for the f_i, while the SRT rate s'_m is the collisional relaxation rate for the g⃗_i. These kinetic relaxation rates determine the MHD viscosity and resistivity transport coefficients. (Of course, more sophisticated LB models can be formed by applying MRT to the g⃗_k equations, but for this first reported LB-LES-MHD simulation we restrict ourselves to the simpler SRT model.)

A convenient choice of the relaxation distribution functions, which under Chapman-Enskog yields the MHD equations, is

f_i^(eq) = w_i ρ[ 1 + 3(c⃗_i·u⃗) + (9/2)(c⃗_i·u⃗)² − (3/2)u⃗² ] + (9/2)w_i [ (1/2)B⃗²c⃗_i² − (B⃗·c⃗_i)² ],  i = 0, …, 8
g⃗_i^(eq) = w'_i [ B⃗ + 3{ (c⃗_i·u⃗)B⃗ − (c⃗_i·B⃗)u⃗ } ],  i = 0, …, 8

where the w's are appropriate lattice weights. In the operator-splitting collide-stream solution method, it is most convenient to perform the collision step in moment space (because of the collisional invariants: the zeroth and first moments of f_i and the zeroth moment of g⃗_i), while the streaming is optimally done in (f_i, g⃗_i)-space. Moment space (M_i, N⃗_i) is defined by

M_i = ∑_{j=0}^{8} T_ij f_j,  N⃗_i = ∑_{q=0}^{8} T_m,iq g⃗_q

with the one-to-one constant transformation matrices T and T_m. The rows of T are the moments {1, c_x, c_y, c_x c_y, c_x² − c_y², 3c_x c_y² − 2c_x, 3c_y c_x² − 2c_y, 4 − 9(c_x² + c_y² − 2c_x²c_y²), 4 − 4(c_x² + c_y²) + 3c_x²c_y²} evaluated on the lattice:

T =
( 1  1  1  1  1  1  1  1  1 )
( 0  1  0 −1  0  1 −1 −1  1 )
( 0  0  1  0 −1  1  1 −1 −1 )
( 0  0  0  0  0  1 −1  1 −1 )
( 0  1 −1  1 −1  0  0  0  0 )
( 0 −2  0  2  0  1 −1 −1  1 )
( 0  0 −2  0  2  1  1 −1 −1 )
( 4 −5 −5 −5 −5  4  4  4  4 )
( 4  0  0  0  0 −1 −1 −1 −1 )

while the rows of T_m are the moments {1, c_x, c_y, c_x c_y, c_x², c_y², c_x²c_y, c_x c_y², c_x²c_y²}:

T_m =
( 1  1  1  1  1  1  1  1  1 )
( 0  1  0 −1  0  1 −1 −1  1 )
( 0  0  1  0 −1  1  1 −1 −1 )
( 0  0  0  0  0  1 −1  1 −1 )
( 0  1  0  1  0  1  1  1  1 )
( 0  0  1  0  1  1  1  1  1 )
( 0  0  0  0  0  1  1 −1 −1 )
( 0  0  0  0  0  1 −1 −1  1 )
( 0  0  0  0  0  1  1  1  1 )

The x and y components of the 9-dimensional lattice vectors are

c_x = {0, 1, 0, −1, 0, 1, −1, −1, 1},  c_y = {0, 0, 1, 0, −1, 1, 1, −1, −1}.

In terms of the conserved moments we can write

M_0^(eq) = M_0 = ρ,  M_1^(eq) = M_1 = ρu_x,  M_2^(eq) = M_2 = ρu_y
M_3^(eq) = ρu_x ρu_y/ρ − B_x B_y,  M_4^(eq) = [(ρu_x)² − (ρu_y)²]/ρ − B_x² + B_y²
M_5^(eq) = −ρu_x,  M_6^(eq) = −ρu_y
M_7^(eq) = −3[(ρu_x)² + (ρu_y)²]/ρ,  M_8^(eq) = (5/3)ρ − 3[(ρu_x)² + (ρu_y)²]/ρ

and

N_α0^(eq) = N_α0 = B_α,  N_α1^(eq) = ρu_x B_α − ρu_α B_x,  N_α2^(eq) = ρu_y B_α − ρu_α B_y
N_α3^(eq) = 0,  N_α4^(eq) = B_α/3,  N_α5^(eq) = B_α/3
N_α6^(eq) = (ρu_y B_α − ρu_α B_y)/3,  N_α7^(eq) = (ρu_x B_α − ρu_α B_x)/3,  N_α8^(eq) = B_α/9.

§.§ Filtering LB

In applying filtering to the LB Eqs. (<ref>) and (<ref>), only the nonlinear terms in the relaxation distributions, Eqs. (<ref>) and (<ref>), require further attention. On applying perturbations in the filter width Δ we immediately see that

\overline{XY} = X̄ Ȳ + (Δ²/12)(∂_β X̄)(∂_β Ȳ) + O(Δ⁴)

and

\overline{XY/Z} = X̄ Ȳ/Z̄ + [Δ²/(12Z̄)][ (∂_β X̄)(∂_β Ȳ) − (∂_β Z̄/Z̄)( X̄(∂_β Ȳ) + Ȳ(∂_β X̄) − X̄ Ȳ ∂_β Z̄/Z̄ ) ] + O(Δ⁴)

for arbitrary fields X, Y, and Z. Moreover, since collisions are performed in moment space, we first need to transform from f^(eq), g⃗^(eq) to M^(eq), N⃗^(eq) and then apply filtering in terms of the filtered collisional invariants M̄_0, M̄_1, M̄_2, N̄_x0, N̄_y0:

M̄_i^(eq) = M_i^(eq)(M̄_0, M̄_1, M̄_2, N̄_x0, N̄_y0) + Δ² M_i^(Δ),  i = 0 … 8

where the O(Δ²) term arises from the nonlinearities. In particular, for the M_3^(eq) term:

M̄_3^(eq) = ρu_x ρu_y/ρ̅ − B̄_x B̄_y + [Δ²/(12ρ̅)][ (∂_β ρu_x)(∂_β ρu_y) − (∂_β ρ̅/ρ̅)( ρu_x(∂_β ρu_y) + ρu_y(∂_β ρu_x) − ρu_x ρu_y ∂_β ρ̅/ρ̅ ) ] − (Δ²/12)(∂_β B̄_x)(∂_β B̄_y) + 𝒪(Δ⁴).

Similarly for the other filtered equilibrium moments.
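The Δ²/12 coefficient is simply the second moment of a top-hat filter of width Δ, and the product rule above can be checked numerically. The following minimal sketch (ours, using numpy; the resolution and filter widths are arbitrary choices) approximately verifies the O(Δ⁴) residual for smooth periodic fields.

```python
import numpy as np

# Numerical check of overline(XY) ~ Xbar*Ybar + (Delta^2/12) dXbar*dYbar
# for a top-hat filter on a periodic domain.
N, L = 4096, 2.0 * np.pi
dx = L / N
x = np.arange(N) * dx
X, Y = np.sin(x), np.cos(2.0 * x)

def tophat(f, w):
    """Periodic top-hat filter over w (odd) grid points, applied via FFT."""
    kernel = np.zeros(N)
    kernel[:w] = 1.0 / w
    kernel = np.roll(kernel, -(w // 2))      # center the stencil on the point
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(kernel)))

for w in (9, 17, 33):                        # filter widths Delta = w*dx
    delta = w * dx
    Xb, Yb, XYb = tophat(X, w), tophat(Y, w), tophat(X * Y, w)
    resid = XYb - (Xb * Yb + delta**2 / 12.0
                   * np.gradient(Xb, dx) * np.gradient(Yb, dx))
    print(delta, np.abs(resid).max())        # residual falls off roughly as Delta^4
```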
§ LES-LB-MHD SIMULATION

The filtered LB equations are now solved, with streaming performed in distribution space and collisions in moment space. As this is the first simulation of the LB-filtered LES approach, we have made a significant number of simplifications. We first restrict the evolution of the filtered scalar distribution function to an SRT collision operator. In this case the relaxation rates s_i are all equal, so that the 3rd term in Eq. (<ref>) is automatically zero. Moreover, since nearly all LB simulations are quasi-incompressible at the fluid level, we neglect (filtered) density gradients in the moment representation of the collision operator. Thus, for example, we approximate M̄_3^(eq) by

M̄_3^(eq) = ρu_x ρu_y/ρ̅ − B̄_x B̄_y + [Δ²/(12ρ̅)](∂_β ρu_x)(∂_β ρu_y) − (Δ²/12)(∂_β B̄_x)(∂_β B̄_y) + 𝒪(Δ⁴).

Also, since the last term in Eq. (<ref>) depends on the filtered density gradient, its effects at the filtered MHD level will not be significant when we code the filtered LB system. It should be noted that, as in regular LB-MHD, the filtered ∇·𝐁 = 0 is maintained to machine accuracy. There is a little subtlety in that not all the spatial derivatives in the filtered collision moments can be determined from local perturbed moments <cit.>. This limitation is thought to arise from the low D2Q9 lattice. It is expected that on a D3Q27 lattice the linearly independent set of derivatives can be represented by the now larger number of local perturbed moments. While we solve the filtered LB equations, resulting in the filtered LES-MHD Eqs. (<ref>), there is some similarity between our final MHD model and the "tensor diffusivity" model of Müller-Carati <cit.>. However, it must be stressed that we perform a first-principles derivation of the eddy transport coefficients from a kinetic (LB) model, while Müller-Carati propose an ad hoc scheme that minimizes the error between two filters at each time step to determine their model's transport coefficients.

We now evolve our filtered LB equations in time and consider the magnetized Kelvin-Helmholtz instability in a sufficiently weak magnetic field that the 2D velocity jet is not stabilized <cit.>. The initial jet velocity profile is U_y = U_0 sech²(2π·4x/L). The corresponding vorticity is shown in Fig. <ref>. The initial Reynolds number is chosen to be Re = U_0 L/ν = 50k = const., with U_0 = 4.88×10⁻² and B_0 = 0.005 U_0. The viscosity and resistivity on a 1024² grid are ν = η = 10⁻³ and scale with the grid so as to maintain a constant Re and a constant magnetic Reynolds number U_0 L/η. The initial perturbations to the fields are: U_y = 0.01 U_0 sin(2π·4x/L), B_y = 0.01 B_0 sin(2π·4x/L), U_x = 0.01 U_0 sin(2π·4y/L), and B_x = 0.01 B_0 sin(2π·4y/L). Note that initially ∇·B⃗ = 0 = ∇·U⃗.

In Fig. <ref> we compare the evolution of vorticity in time from DNS on a 2048² grid with that determined from our LES-LB-MHD model on a 1024² grid. The DNS results are determined by solving the direct, unfiltered LB Eqs. (<ref>) and (<ref>). For constant-Reynolds-number simulations at different grid sizes, the kinematic viscosity is adjusted appropriately. Thus, on halving the spatial grid, a DNS time step of 2t_0 corresponds to time step t_0 in LES-LB-MHD. At relatively early times the jet profile slightly widens, while within the vorticity layers the Kelvin-Helmholtz instability breaks these layers into the familiar vortex street (Fig. <ref>). Since we have chosen a weak magnetic field insufficient to stabilize the jet, the vortex streets break apart with like-vortex reconnection (Fig. <ref>). There is very good agreement between DNS and LES-LB-MHD with filter width Δ = 2 (in lattice units) on a grid of linear size L/2.
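For concreteness, a minimal sketch (ours) of the perturbed-jet initialization described above; it assumes the sech² (Bickley) form of the jet as reconstructed in the text, and the array layout is an illustrative choice.

```python
import numpy as np

L = 1024                                  # lattice sites per side
U0 = 4.88e-2                              # jet speed (lattice units)
B0 = 0.005 * U0                           # weak field: no KH stabilization
k = 2.0 * np.pi * 4.0 / L                 # four wavelengths across the box

x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")

# Bickley-type jet along y plus the 1% seed perturbations from the text.
Uy = U0 / np.cosh(k * x) ** 2 + 0.01 * U0 * np.sin(k * x)
Ux = 0.01 * U0 * np.sin(k * y)
By = 0.01 * B0 * np.sin(k * x)
Bx = 0.01 * B0 * np.sin(k * y)

# Each component varies along a single coordinate, so both fields are
# divergence-free at t = 0, consistent with the text.
```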
Finally, we show the corresponding vorticity (Fig. <ref>), total energy spectrum (Fig. <ref>), and current (Fig. <ref>) plots at t = 780k for simulations on 1024² grids, and their counterparts on 2048² grids at time t = 1.56M. We consider 4 cases: (a) DNS on 1024², (b) filtered LB-LES-MHD on a 1024² grid with small filter width, Δ = 1, (c) DNS on 2048², and (d) filtered LES-LB-MHD on a 1024² grid but with filter width Δ = 2. The effect of the filter width Δ in our LB-LES-MHD model on the evolution of the vorticity is evident when comparing Fig. <ref>b to Fig. <ref>d, both in the location and strength of the main vortices and in the fine-grained small-scale vorticity. As the filter width increases to Δ = 2 (Fig. <ref>d), there is now stronger agreement between the DNS (Fig. <ref>c) on the L² grid and our filtered LES-LB-MHD model on the (L/2)² grid. This shows that the subgrid terms are now influencing the larger scales with some accuracy. The spectral plots (Fig. <ref>) are somewhat similar in all simulations, with a very localized Kolmogorov energy spectrum; presumably this is because the turbulence is limited and relatively weak. There appears to be good agreement in both the vorticity and the current between DNS and LES-LB-MHD with Δ = 2 on half the grid.

§ CONCLUSION

Here we have presented some preliminary 2D filtered SRT LB-MHD simulation results based on an extension of the ideas of Ansumali et al. <cit.>, which leads to a self-consistent LES-LB closure scheme based solely on expansions in the filter width Δ together with the constraint that eddy transport effects can only occur on transport time scales. We find very good agreement between DNS and our LES-LB-MHD models. This warrants further investigation of other filters used in LES, as well as of the dynamic subgridding commonly used in LES of Navier-Stokes turbulence. Finally, an exploration of the effects of MRT on this LES algorithm should be quite interesting, as a somewhat unexpected term related to the gradient of a pressure appears in the subgrid viscosity. This term reveals that higher-order (non-stress) moments can have a first-order effect on the subgrid viscosity when MRT is employed. Given that this subgrid pressure term relies on the existence of higher-order moments, it suggests that the extra parameters in lattice Boltzmann (i.e., the distribution velocities/moments) introduce new physics naturally absent from LES in computational fluid dynamics. It would be very interesting to see whether this new term enhances LES accuracy or increases stability at even higher Reynolds number flows. Further study could examine how this term affects other, well-established LES approaches in computational fluid dynamics. These ideas are under consideration.

§ ACKNOWLEDGMENTS

This work was partially supported by an AFOSR and NSF grant. The computations were performed on Department of Defense supercomputers.
http://arxiv.org/abs/1705.09807v4
{ "authors": [ "Christopher Flint", "George Vahala" ], "categories": [ "physics.plasm-ph" ], "primary_category": "physics.plasm-ph", "published": "20170527112048", "title": "A large eddy lattice Boltzmann simulation of magnetohydrodynamic turbulence" }
Semi-Supervised Model Training for Unbounded Conversational Speech Recognition Shane Walker, Morten Pedersen, Iroro Orife and Jason Flaks Marchex Inc., 520 Pike, Seattle, WA, 98101 December 30, 2023 ============================================================================== For conversational large-vocabulary continuous speech recognition (LVCSR) tasks, up to about two thousand hours of audio is commonly used to train state of the art models. Collection of labeled conversational audio, however, is prohibitively expensive, laborious and error-prone. Furthermore, academic corpora like Fisher English (2004) or Switchboard (1992) are inadequate to train models with sufficient accuracy in the unbounded space of conversational speech. These corpora are also timeworn due to dated acoustic telephony features and the rapid advancement of colloquial vocabulary and idiomatic speech over the last decades. Utilizing the colossal scale of our unlabeled telephony dataset, we propose a technique to construct a modern, high quality conversational speech training corpus on the order of hundreds of millions of utterances (or tens of thousands of hours) for both acoustic and language model training. We describe the data collection, selection and training, evaluating the results of our updated speech recognition system on a test corpus of 7K manually transcribed utterances. We show relative word error rate (WER) reductions of {35%, 19%} on {agent, caller} utterances over our seed model and 5% absolute WER improvements over IBM Watson STT on this conversational speech task. Index Terms: conversational speech recognition, acoustic modeling, language modeling, large unsupervised training sets, data selection, data augmentation

§ INTRODUCTION

This paper examines a semi-supervised approach that aims to increase the quantity of conversational telephony speech transcripts available to train a LVCSR system. We define dataset construction and training as semi-supervised because we employ a seed model to transcribe a vast quantity of unlabeled audio, perform data selection on the new transcripts, retrain the seed model and then repeat the process with the improved decoder <cit.>. Our approach works by running a large-beam decoder tuned for high accuracy on our unlabeled telephony dataset. Lattices generated during the decoding process are used to compute Minimum Bayes Risk (MBR) confidences. The transcribed text is filtered to select minimum-length utterances with the lowest MBR confidence <cit.> and the lowest language model (LM) perplexity. Perplexity is a measurement of how well a probabilistic LM will predict new sample text. We use an LM trained on 20K manually transcribed in-domain conversational utterances. This method takes advantage of the scale of Marchex's call traffic, enabling us to rapidly construct a very large-scale speech dataset, covering all types of language contexts, speaker demographics, accents and noise conditions. It also permits tracking of changes in quotidian vernacular as well as changes in acoustic channel features based on shifts in device and codec technology. Because the error rate of the confidence-filtered training data can limit the gains due to poor acoustic modeling alignments <cit.><cit.>, we use various natural language processing (NLP) heuristics to algorithmically identify the highest prevalence, unique mistranscriptions for correction.
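In outline, one data-selection pass of the loop sketched above might be written as follows; this is our illustration, and every name and threshold is an assumed placeholder rather than the production pipeline (the perplexity cutoff echoes the degenerate-utterance threshold quoted later in the paper).

```python
MIN_WORDS, MBR_MAX, PPL_MAX = 3, 0.1, 1000.0   # illustrative thresholds

def select_utterances(decoded, lm_perplexity, transforms):
    """One data-selection pass of the semi-supervised loop.

    decoded: iterable of (utt_id, text, mbr) from the large-beam decoder;
    lm_perplexity: callable scoring text with the in-domain n-gram LM;
    transforms: (pattern, replacement) corrective text transforms.
    """
    selected = []
    for utt_id, text, mbr in decoded:
        for pattern, replacement in transforms:
            text = text.replace(pattern, replacement)       # fix systemic errors
        if (len(text.split()) >= MIN_WORDS
                and mbr <= MBR_MAX                          # low MBR = confident
                and lm_perplexity(text) <= PPL_MAX):        # fluent in-domain text
            selected.append((utt_id, text))
    return selected
```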
We have developed tools to facilitate the creation and application of corrective text transforms over the full corpus of automatic transcriptions. The updated, post-processed text improves the quality of our acoustic model training alignments, and so we iterate anew by retraining our acoustic model from scratch on the transformed text. For language modeling, the set of unique text transform targets (i.e. the applied corrections) is added to the ground truth set used to build a new LM for the next iteration of filtering and utterance perplexity scoring. The core of our approach, an iterative method of taking qualified output from a seed model and using various NLP heuristics to further correct and select utterances for use in subsequent rounds of training, allows us to progressively reduce the error rate of our ASR models while operating at a scale that allows us to generalize well on in-domain speech.

The paper is organized as follows. Section 2 provides some perspective on the complexity of the conversational ASR task. Section 3 reviews recent schemes for speech dataset construction, especially for large-scale and low-resourced tasks. Section 4 describes our speech recognition system. Section 5 introduces the Marchex U.S. English Corpus of conversational North American English and discusses semi-supervised training and the data pipeline. Section 6 presents our results and contrasts our corpus with other conversational corpora. Section 7 describes how we scale up and areas of future work.

§ CONVERSATIONAL ASR

Automatic speech recognition (ASR) of spontaneous conversations is a different and more complex task than ASR for Voice Command or Voice Search applications performed by modern digital assistants. In addition to the usual challenges in LVCSR (e.g. speaker-independence, coarticulation, variable speech rates, noise-robustness and LM capacity), additional factors come into play in natural, unscripted conversations. These include disfluencies such as mid-sentence hesitations, stutters, ungrammatical or filled pauses (uh, um, ah, er), back-channels (yeah, mhm, uh-huh), discourse markers (like, so, you know), self-editing terms (or rather, I mean), cut-off phrases, restarts, repetitions, final lengthening of syllables, coughs and laughter <cit.><cit.>. In an article which tackles the linguistic appropriations and interpretations of Chomsky's "Colorless Green Ideas Sleep Furiously", Manfred discusses a co-operative principle in human communication which binds two speakers to conversational maxims <cit.>. For speakers and listeners, this amounts to a set of interpretive assumptions that are very flexible in the presence of ungrammatical, rhetorical, figurative or completely novel utterances. This means that in free-flowing conversation, semi-grammatical incongruities and semantic ill-formedness are always admissible when the utterance is well-chosen and/or the listener obtains a meaningful interpretation. Now, considering that the English language has approximately half a million words, excluding many colloquial forms, with unabridged English dictionaries listing between 300,000 and 600,000 words, we see that the space of valid, correct transcriptions of an arbitrary spontaneous utterance, though finite, is effectively unbounded <cit.>.

§ BUILDING ASR TRAINING CORPORA

There are many approaches to building speech training sets, including acoustic data perturbation and data synthesis <cit.>.
Our survey of the literature will be restricted to unsupervised and semi-supervised approaches to corpus assembly. Google takes advantage of their large scale in constructing a training set for their Voice Search and Voice Input tasks for low-resource languages such as Brazilian Portuguese, Italian, Russian and French <cit.>. Their unsupervised approach makes use of a slow but accurate decoder, confidence scores, transcript length and transcript flattening heuristics to select the utterances for acoustic modeling. In conjunction with owner-uploaded transcripts, YouTube applies "island of confidence" filtering heuristics to generate additional semi-supervised training data for the deep neural network (DNN) based acoustic model (AM) driving their closed captions feature <cit.>. Kapralova et al. and Yu et al. <cit.><cit.> train acoustic models on a Mandarin-language Broadcast News (BN) and Broadcast Conversation (BC) dataset created with semi-supervised techniques. Due to the prevalence of English loan words and code-switching, data selection starts with a dual Mandarin-English language classifier, followed by the computation of utterance and word-level decoder confidence scores for the Mandarin-only utterances. Ragni et al. <cit.> use a semi-supervised system to build corpora for the low-resource languages Zulu and Assamese, using weighted word-confusion-network confidences for data selection. Li et al. <cit.> employ semi-supervised methods to construct a Mandarin training corpus based on a Chinese television spoken lecture series, using conditional random fields (CRF) for confidence estimation instead of the raw ASR decoder confidence measure. Enarvi et al. <cit.><cit.> tackle a conversational Finnish-language ASR task with a novel semi-supervised approach to training text selection. In lieu of adding new transcribed candidate utterances to the corpus based on low in-domain LM perplexity, they score utterances by the decrease in in-domain perplexity when the utterance is removed from the set of candidate utterances. For a low-resource English, German and Spanish LVCSR task, Thomas et al. <cit.> use a hybrid confidence score based on word-level ASR confidence as well as a posteriogram-based phoneme occurrence confidence. This latter confidence uses a posteriogram representation of an utterance computed by passing utterance acoustic features through a trained DNN classifier.

§ SPEECH RECOGNITION SYSTEM

The seed ASR system is based on an online decoder written using Kaldi, a free, open-source C++ toolkit for speech recognition research <cit.>. In online decoding, the input audio features are processed buffer by buffer, progressively emitting the output text with minimal latency and without having to ingest the entire input before producing output. The seed decoder uses a "prebuilt" deep neural network / hidden Markov model (DNN-HMM) hybrid model provided with Kaldi. In this hybrid model, a DNN is trained via minibatch asynchronous stochastic gradient descent to emit HMM posterior probabilities. These are then converted into "scaled likelihoods" for the states of an HMM. In contrast to the Gaussian mixture models (GMM) traditionally used in speech recognition, DNN models are superior classifiers that generalize better with a smaller number of model parameters, even when the dimensionality of the input features is very high <cit.>.
Cross-entropy loss is the DNN training objective function, a standard choice for classification tasks. The seed decoder's DNN is a four-hidden-layer neural network whose final layer is a softmax layer with a dimension corresponding to each of the 3500 context-dependent HMM states <cit.>. The input feature pipeline consumes 25 millisecond frames, processed to generate 13-dimensional Mel-frequency cepstral coefficients (MFCCs), which are spliced together with ±3 frames of context, for a total of 7 · 13 = 91 features. The input dimensionality is then reduced to 40 by applying linear discriminant analysis (LDA) followed by a decorrelation step using a maximum likelihood linear transform (MLLT). Finally, a speaker normalization transform is applied, called feature-space maximum likelihood linear regression (fMLLR) <cit.>. During decoding the DNN takes an input feature vector 140 elements wide, comprising the 40 "cooked" features described above and a 100-element iVector. The same iVector is used for all acoustic feature vectors associated with a given speaker's utterances in the training set. Augmenting a new speaker's input feature vector with a corresponding iVector projection before DNN processing permits the DNN to discriminate better between phonetic events in an adaptive, speaker-independent fashion <cit.>, with minimal impact on the DNN training cycle. The seed model was trained on 1935 hours of conversational audio. We extend it by rebuilding its decoding graph to incorporate an additional 20K manually transcribed in-domain language modeling utterances and by expanding the lexicon with an additional 600 domain-specific phonetic pronunciations. The lexicon is based on CMUdict, but with numeric stress markers removed. The seed decoder's language model is a trigram model created from the text of 1.6M (Fisher English) utterances using the SRILM toolkit. The lexicon, acoustic and language models are compiled down to weighted finite state transducers (WFSTs), which are composed into a single structure called the decoding graph. Each letter stands for a separate WFST performing a specific input-to-output transduction: HCLG = min(det(H ∘ C ∘ L ∘ G)).

* H maps multiple HMM states to context-dependent triphones.
* C maps triphone sequences to monophones.
* L is the lexicon; it maps monophone sequences to words.
* G represents a language model FST converted from an ARPA-format n-gram model.

When the graph is composed with an utterance's per-frame DNN output (i.e. HMM state likelihoods) it produces a lattice. The best path through the lattice produces text. For further details on ASR with weighted finite-state transducers refer to <cit.>. To summarize, our seed ASR system is a prebuilt Kaldi online-nnet2, cross-entropy trained, hybrid DNN-HMM model. It has an updated lexicon and language model and provides a competitive and well-understood baseline upon which we iterate.

§ TRAINING SYSTEM DESCRIPTION

First we introduce a brand new conversational telephony speech corpus of North American English and then describe our semi-supervised training and data selection methods in detail.

§.§ The Marchex US English Corpus

Marchex's call and speech analytics business securely fields over one million calls per business day, or decades of encrypted audio recordings per week.
§ TRAINING SYSTEM DESCRIPTION

First we introduce a brand new conversational telephony speech corpus of North American English and then describe our semi-supervised training and data selection methods in detail.

§.§ The Marchex US English Corpus

Marchex's call and speech analytics business securely fields over one million calls per business day, or decades of encrypted audio recordings per week. These are conversational, consumer-to-business phone calls occurring via a modern mixture of mobile phones on various telephone networks or landlines, capturing everyday North American dialog in every possible accent variant and level of English-language fluency, under broad environmental and noise conditions, with a comprehensive, colloquial vocabulary. Speaker demographics are extensive, from teenagers to octogenarians. Example conversations may be sales related, e.g. calling to book a hotel, buying a mobile phone or cable service, or renegotiating insurance rates. Other examples are service related, e.g. scheduling a dentist appointment, an oil change, car repair or a house-moving service. The average conversation is four minutes long. Both the caller and the answering agent channels are recorded. This unlabeled corpus of calls is current, exhibiting natural and spontaneous conversations on business matters, in addition to popular culture, sports, politics and chitchat on uncontroversial topics like the weather.

§.§ Data Collection and Processing

To make use of this telephony dataset, we programmatically gather call audio from the fleet of Marchex call processor servers. Mono 8kHz μ-law-decoded files from the caller and agent channels are passed to a Voice Activity Detector (VAD) which creates single utterances, usually shorter than 5 seconds. For our initial experiments, we decoded a subset of 35 million utterances, or some 25,000 hours of raw conversational audio, with roughly a 44%-56% split between caller and agent. This split is less than even due to the VAD's rejection of silent or degenerate caller-side audio, e.g. voicemail or fax machines calling phones.

The system architecture is shown in Figure 1. Solid lines show the flow of data towards the AM and LM training corpora. Dotted lines denote updates to the ASR decoder, as well as updates to the language modeling text used for perplexity-based data selection. Similarly to <cit.>, we decode using a slower but more accurate, non-production decoder tuned to have a large beam. The decoder emits N-best lattices which we use to compute MBR confidences per utterance, with Kaldi's lattice-mbr-decode. Our 20K manually transcribed corpus was similarly decoded and MBR scores were compared to Word Error Rate (WER). As shown in Figure 2, a strong correlation between low MBR score and low WER suggests that MBR will be useful for selecting accurately transcribed utterances. In the table below, we outline WER statistics for two values of very low MBR. Interestingly, the 90%+ WER utterances turn out to be either systemic mistranscriptions or Spanish language IVR prompts.

WER Statistics   MBR=0.0   MBR=0.1
count            337       1263
mean             4.14      20.3
std              0.0       0.0
min              0.0       0.0
25%              0.0       0.0
50%              0.0       0.0
75%              0.0       0.0
90%              6.25      28.5
95%              20.0      50.0
max              100.0     100.0

Kaldi's lattice-confidence is another confidence measure that we examined. Its value is the difference in total costs between the best and second-best paths through the N-best lattice. We ultimately rejected it due to lack of performance and low correlation with WER. While MBR is used as a measure of expected risk in the "whole system" based on the full N-best lattice, language model perplexity is a measurement of how well a probabilistic LM will predict a new sample text. Low perplexity indicates the LM is good at predicting the new text and is not "confused".
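The selection logic this motivates is a simple threshold filter. A minimal sketch, assuming per-utterance MBR scores (e.g. from lattice-mbr-decode) and reference WERs have already been computed; the numeric values below are illustrative stand-ins, not our measured data.

import numpy as np

# Hypothetical per-utterance scores: MBR risk and WER against the 20K
# manual transcripts. We check the correlation that justifies MBR-based
# selection, then keep only low-risk utterances.
mbr = np.array([0.0, 0.02, 0.05, 0.1, 0.3, 0.6])        # stand-in values
wer = np.array([0.0, 2.0, 5.0, 20.0, 45.0, 80.0])       # stand-in values

print("corr(MBR, WER) =", np.corrcoef(mbr, wer)[0, 1])

MBR_THRESHOLD = 0.1                                      # AM-corpus cutoff
selected = np.flatnonzero(mbr <= MBR_THRESHOLD)
print("kept utterance indices:", selected)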
When combined, MBR and perplexity provide good intuition about how "hard" it was for the system to arrive at its 1-best text output, the assertion being that lower-WER utterances are "easier" to decode. For selecting utterances, we compute perplexity with a Kneser-Ney smoothed 5-gram model with a 125K vocabulary and 5M n-grams <cit.>.

Given the scale of our audio retrieval and the simplicity of the VAD used, there are a few different kinds of non-speech audio that get automatically transcribed that we certainly do not want in our training set. These include: hold-time muzak, telephony signaling tones, Spanish language utterances (especially in IVR), pseudo-random impulsive noises from typing on keyboards, cellphones dropping, Rihanna, laughter, coughing and other environmental noise. Utterances like those in the table below will have high perplexity, i.e. greater than 1000, when scored with a 5-gram LM trained on our 20K manually transcribed corpus. We remove up to 7M such degenerate utterances, along with utterances with very short transcripts.

Removed Utterances      Audio content
be in they need to      "bienvenidos" (Spanish)
but i spend you own     "para espagnol" (Spanish)
bull pretty men dogs    "oprima dos" (Spanish)
much guess seem go      "marque cinqo" (Spanish)
it it it's it's it      telephony signaling noise
whole whole whole       telephonic beeps
mhm mhm mhm mhm         impulsive noise
mm mm mm mm mm          impulsive noise
[noise] i i i           environmental noise
and uh and uh and uh    hold music
or or or or or or       hold music
and uh and uh           Rihanna song
in a in a in in a       hold music

Next we look at other systemic errors that we can correct. The approach taken is based on the global and sub-structure frequency of the seed-transcribed text. By sorting and counting the highest-prevalence unique full utterances, we identify common elements where incomplete language representation and/or missing audio context can be fixed. Sub-structure frequencies are counted by using n-gram or part-of-speech tagging to isolate sub-elements to be amended. For example, in the table below "a grey day" can be part of "you have a great day" or "it is a great day".

Caller Mistranscriptions   Ground truth
have a grey day            have a great day
yeah that be great         yeah that'd be great
okay think so much         okay thanks so much
b. e. as in boy            b. as in boy
a two one zero             eight two one zero
i don't have any count     i don't have an account

Identification and creation of targeted replacements are prepared manually via custom tools developed to present top candidates for correction. We distinguish caller-side versus agent-side text because the nature of conversational speech on the caller side is much more diverse. Additionally, agent audio quality is usually higher, as agents may be in a call center or quiet office, while the caller may be in the car, on the street or on the bus.
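The frequency counting that surfaces these correction candidates can be sketched in a few lines, assuming the automatic transcripts are available as plain strings (the toy data below is illustrative):

from collections import Counter

# Surface systemic-error candidates by global and sub-structure frequency:
# count the most common full utterances and trigrams in the automatic
# transcripts (real input is millions of decoded utterances).
transcripts = ["have a grey day", "have a grey day", "yeah that be great"]

full_counts = Counter(transcripts)

def ngrams(words, n=3):
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

sub_counts = Counter(g for t in transcripts for g in ngrams(t.split()))
print(full_counts.most_common(2))
print(sub_counts.most_common(3))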
Below is a small selection of agent-side mistranscriptions, a number of which are from Interactive Voice Response (IVR) utterances.

Agent Mistranscriptions       Ground truth
horror leather increase       for all other inquiries (IVR)
rest you                      press two (IVR)
oppressed you or              press two (IVR)
arrest three                  press three (IVR)
um or cared                   customer care
call back drone normal        call back during normal
for parts in excess serious   for parts and accessories
active eight                  activate
chevy taco                    chevy tahoe
now and if you words          now in a few words
retire fritcher jack          free tire pressure check

Once transforms have been created and applied to automatic utterances, the corrected text is ready to be filtered with both MBR and perplexity at thresholds appropriate for acoustic modeling. Language modeling text is also derived from the same corrected text, but with much tighter perplexity thresholds, usually 40-80. From the original batch of 35M utterances, we are left with between 2.5M and 5M pristine utterances. We contrast these figures with the 1.6M utterances that comprise the Fisher English corpus <cit.>.

Figure 3 details the Text Processing and Selection block shown in Figure 1. We note two paths through the system. The first (solid magenta) is strictly for generating corrected text to build the AM and LM training corpora. The second path (yellow dotted) is for the generation of new LM-building text derived from transform targets (i.e. text corrections). These manual contributions are language modeling ground truth and are admissible to improve the capacity and the ability of our LM to generalize in subsequent training iterations. Starting with just the 20K manually transcribed corpus for language modeling, through this iterative process we grew the LM text used to compute perplexity in all parts of the system by another 6K items. This is a 30% increase in the amount of high quality LM text and, more importantly, text which comprises the correct labels for the most common in-domain conversational phrases.
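Putting the pieces together, the routing of a corrected utterance into the AM and LM corpora can be sketched as below. The threshold constants are illustrative stand-ins for the values discussed above (loose cutoffs for acoustic modeling, tight 40-80 perplexity for LM text).

# Two-threshold selection: loose thresholds feed the acoustic-model corpus,
# much tighter perplexity feeds the language-model corpus.
AM_MBR_MAX, AM_PPL_MAX, LM_PPL_MAX = 0.1, 1000.0, 80.0

def route(utt):
    """utt: dict with corrected 'text', MBR risk 'mbr' and LM 'ppl' scores."""
    if utt["mbr"] <= AM_MBR_MAX and utt["ppl"] <= AM_PPL_MAX:
        yield ("am_corpus", utt["text"])
        if utt["ppl"] <= LM_PPL_MAX:
            yield ("lm_corpus", utt["text"])

for dest, text in route({"text": "have a great day", "mbr": 0.02, "ppl": 55.0}):
    print(dest, "<-", text)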
§.§ Retraining the ASR Model

After we have successfully prepared the corrected, filtered automatic transcripts, it is time to retrain our ASR model. For retraining, we choose the cleanest 5,000 hours or 11.7M utterances. This figure was selected to have minimal impact on the training recipe's hyperparameters, with an eye out for the maximum data capacity of the training model. For AM training, we add 13K utterances from our 20K manually transcribed set to the automatic, cleaned corpus. Because our manually transcribed utterances are the most accurate, these 13K are used for the training recipe's initial monophone and triphone steps. The remaining 7K utterances are excluded as a test set. Given the relative data increase over the seed model, we use a larger-capacity multi-splice version of the online-nnet2 recipe described in Section 4. This recipe uses 2 additional fully-connected hidden layers, for a total of 6, and a more elaborate input splicing scheme. We train this model on a single 12GB NVIDIA Titan X (Pascal) GPU for 6 full epochs over a period of 2 weeks.

§ EXPERIMENTAL RESULTS

Our results are as follows on the Marchex North American English conversational task, a test set based on our 7K manually transcribed utterances, excluded from the training process described above. We compute WER scores for the seed model, the IBM Watson Speech to Text service <cit.><cit.>, as well as our updated production model. Our updated model shows strong performance against IBM and demonstrates our ability to generalize well on an unseen dataset with a model trained on a mixture of manual and automatic transcriptions. While IBM have not trained their models on Marchex English, their results are valid benchmarks because of their published results on the Hub5 2000 Evaluation conversational task, a corpus of 40 test telephone conversations from the SWBD and CALLHOME corpora <cit.><cit.>. We contrast the proportions of Hub5 2000 with the size of our test set of 7K utterances, or 4.5 hours of no-filler, manually transcribed conversational audio, sourced from more than 3,000 calls.

Model                Agent WER    Relative gain
Seed model           22.1         -
IBM Watson STT       20.0         9.5%
Marchex production   14.27        35%

Model                Caller WER   Relative gain
Seed model           21.6         -
IBM Watson STT       22.6         -4.6%
Marchex production   17.5         19%

§.§ Comparing conversational corpora

To better understand our performance with respect to the IBM models trained on Fisher English (FE) or Switchboard (SWBD), we now examine more closely how Marchex English (ME) is different. By ME, we refer only to this first, post-seed iteration of 11.7M utterances or 5,000 hours used to train an updated model. During the collection of FE, topics were pre-assigned or worked out between the contributors <cit.>. For SWBD, a prompt suggested a topic of conversation <cit.>. ME, on the other hand, captures real-world conversations in their full naturalness. FE furthermore excludes greetings and leave-takings, which we consider essential to decode correctly. SWBD transcribers were asked post-facto to rate the naturalness of conversations on a 5-point scale from "very natural" to "artificial-sounding". The mean rating for SWBD utterances is 1.48 <cit.>.

FE calls lasted no more than 10 minutes, from which 8 minutes were deemed useful. ME calls last 4 minutes on average, but can be as short as 30 seconds, as in a voicemail or wrong number. They can also be as long as an hour in the case that a contract is being negotiated or there are terms and conditions to be agreed upon. The much longer temporal context under which ME utterances are automatically collected adds to the diversity of the corpus. In Figure 4 we show the distribution of durations in minutes of a 37K sampling of ME calls. In the table below, we draw further contrast between FE, SWBD and Marchex English <cit.><cit.>.

               SWBD    FE        ME
Hours          309     2,000     5,000+
Speakers       543     20,407    605K
Utterances     391K    1.6M      11.7M
Conversations  2,400   16,000+   288K
Words          3M      18M       79.5M

§ CONCLUSIONS

In this report we have outlined results from only one iteration of our semi-supervised approach. We review our plan to scale up and promising next steps.

§.§ Scaling Up

Encouraged by our very competitive error rates, we see a lot of potential as our corpus grows. A natural question at this point, given an iterative process which produces increasingly large quantities of audio and text, is: how do we scale up processing in a time- and cost-efficient manner? Given our goals of iterating on a monthly cadence, our solution is to use modern, cloud-based distributed computation. Initial work to collect, decode and clean utterances and train our models took place over a couple of months in a small local-cluster environment. So our first step was to move our corpus of 30M post-VAD utterances from the first round, as well as 30M brand new utterances, hot off the wire, into an Amazon Web Services (AWS) S3 bucket.
An S3 bucket is a logical unit of storage used to store data objects (audio and text) as well as any corresponding metadata like utterance ids, speaker-to-utterance mappings, etc. Next, we build an Amazon Machine Image (AMI), which provides the information required to launch a virtual machine instance pre-configured with the requisite 64-bit system architecture, operating system, Python environment, Kaldi decoder and other software dependencies. Now we can spin up a dynamic and configurable cluster of virtual machines for re-decoding and post-processing.

To reliably manage the scheduling and distribution of audio to be re-decoded among the VMs ready to accept work, we use Amazon Simple Queue Service (Amazon SQS). This service offers a highly scalable hosted queue for sending, storing and receiving messages and is designed to guarantee that messages are processed exactly once, in the exact order that they are sent, with limited throughput <cit.>. Now with an SQS queue, a fleet of VMs and S3 data, we are ready to re-decode. We start off by populating our SQS queue with work items. This is done by generating an S3 Inventory Report, which is an enumeration of the 60M (audio) data objects to re-decode. We initialize our SQS queue from this report. Then we simply turn on the fleet and our SQS queue distributes messages to it. Messages are simply locations in S3 of audio (utterances) to decode. Our fleet consists of "spot instances", a flexible, cost-effective alternative VM provisioning solution, especially for data analysis, batch and background processing jobs where applications can be interrupted. If at any time during processing a VM goes away or stops, the message will merely time out and go back into the queue to be rescheduled for processing by another VM. With a fleet of 100 mixed-class {cc2.8xlarge, r4.8xlarge, x1.16xlarge, m4.16xlarge} VM instances, we are able to re-decode 30M utterances in 20 hours.

Finally, to do utterance filtering, processing and ASR model retraining, we employ a GPU-enabled AWS P2 instance like the p2.16xlarge. With 16 NVIDIA K80 GPUs each with 12 GB of memory, 64 virtual CPUs, 700+ GB of memory and low-latency, peer-to-peer GPU-to-GPU transfers, this class of machine is most commonly used for scientific and industrial-scale deep learning tasks. While true distributed end-to-end ASR model training is an eventual objective, the P2 instance solution is most compatible with the parallelization tools in our Kaldi-based training recipe and provides immediate performance gains. In lieu of the weeks it took to train the first iteration using a local GPU, our AWS solution completes in a matter of days.
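The per-VM worker loop is a standard SQS consume-process-delete pattern. A minimal boto3 sketch follows; the queue URL, bucket name and decode script are hypothetical placeholders. Deleting the message only after a successful decode is what lets interrupted spot instances hand their work back to the queue.

import boto3, subprocess

QUEUE_URL = "https://sqs.us-west-2.amazonaws.com/123456789012/utterances"  # hypothetical
BUCKET = "marchex-utterances"                                              # hypothetical

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

while True:  # worker runs until the fleet is shut down
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1,
                               WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        key = msg["Body"]                    # S3 location of one utterance
        s3.download_file(BUCKET, key, "/tmp/utt.wav")
        # Placeholder for the large-beam Kaldi re-decode of this utterance.
        subprocess.run(["./decode_one.sh", "/tmp/utt.wav"], check=True)
        # Delete only after success; on spot-instance loss the message
        # times out and is redelivered to another VM.
        sqs.delete_message(QueueUrl=QUEUE_URL,
                           ReceiptHandle=msg["ReceiptHandle"])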
§.§ Future Work

Our semi-supervised training process permits us to compile very large, high quality conversational speech datasets orders of magnitude greater than what is possible via manual transcription. The manual effort is highly focused on specific tasks that have the highest impact on WER reduction and that improve the compounding effect of rinsing and repeating with a bigger and better decoder, trained on cleaner and larger quantities of correctly labeled audio. Future work includes making the VAD more selective, improving language detection and speech signal conditioning. There are also opportunities to use RNN-LM or CNN models for text classification to do more powerful data selection <cit.>. Furthermore, we see a lot of potential in the algorithmic superiority of bleeding-edge ASR methods using attention-based models or sequence-trained neural networks with lattice-free MMI or CTC objectives <cit.><cit.>.
http://arxiv.org/abs/1705.09724v1
{ "authors": [ "Shane Walker", "Morten Pedersen", "Iroro Orife", "Jason Flaks" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20170526211015", "title": "Semi-Supervised Model Training for Unbounded Conversational Speech Recognition" }
Atomic-scale structure analysis of a molecule at a (6-nanometer)^3 ice crystal

Xi Kong,^1,2,3,† Fazhan Shi,^1,2,3,† Zhiping Yang,^1,† Pengfei Wang,^1,3 Nicole Raatz,^4 Jan Meijer,^4 Jiangfeng Du^1,2,3,∗

^1CAS Key Laboratory of Microscale Magnetic Resonance and Department of Modern Physics, University of Science and Technology of China (USTC), Hefei, 230026, China
^2Hefei National Laboratory for Physical Sciences at the Microscale, USTC
^3Synergetic Innovation Center of Quantum Information and Quantum Physics, USTC
^4Felix-Bloch Institute for Solid State Physics, University Leipzig, Linnéstr. 5, D-04103 Leipzig, Germany
^† These authors contributed equally to this work.
^∗ Corresponding author. E-mail: [email protected].
==============================================================================

Water is the most important solvent in nature, and it is a crucial issue to study the interactions among water molecules. Nuclear magnetic resonance (NMR) spectroscopy is one of the most powerful tools to detect magnetic interactions for the structure analysis of a molecule, with broad applications <cit.>. But conventional NMR spectroscopy requires macroscopic sample quantities, which hampers the investigation of nanoscale structures <cit.>. Through quantum control of a single-spin quantum sensor, magnetic resonance spectroscopy of nanoscale organic molecules <cit.> and single molecules <cit.> has been achieved. However, the measurement of the dipolar interaction of nuclear spins within a molecule at the nanoscale, and the analysis of its structure, remain a big challenge. Here we succeed in detecting the NMR spectrum from an ice crystal with a (6-nanometer)^3 detection volume. More importantly, the magnetic dipolar coupling between two proton nuclear spins of a water molecule was recorded. The resolved intra-molecule magnetic dipolar interactions are about 15 kHz and 33 kHz, with a spectral resolution of a few kHz. Analysis of the interaction-resolved NMR spectroscopy provides a spatial view of the nanoscale ice crystal, from which the orientation of a water-molecule bond is derived and, further, the length of the bond can be obtained. This work enables NMR spectroscopy applications in single-molecule structure analysis and provides a further tool for nanocrystalline and confined water research <cit.>.

Water is the most important solvent in nature. The interaction among water molecules is a very crucial issue; for example, it gives rise to the "hydrophobic force" which is responsible for membrane formation and contributes to protein structure. Yet, information on water structure at the nanoscale is scarce. On the one hand, water comprises light elements, which makes it hard to observe by X-ray <cit.> or electron microscopy <cit.>.
On the other hand, bulk methods like dielectric spectroscopy <cit.> do not allow access to local information, which is essential when water is interacting with solutes. Here we use nanoscale NMR to provide an unprecedented insight into water structure formation. Because conventional NMR spectroscopy requires macroscopic sample quantities <cit.>, extending NMR spectroscopy to structure analysis of molecules at the nanoscale is a long-standing goal. Recently, a single quantum spin sensor, the nitrogen-vacancy (NV) defect in diamond, has been developed to realize nanoscale magnetic resonance spectroscopy <cit.>. During the last several years, magnetic resonance spectroscopy of nanoscale organic molecules <cit.> and single molecules <cit.> has been achieved. However, the measurement of the dipole-dipole interaction of nuclear spins within a molecule at the nanoscale and the analysis of its structure remain elusive. In this work, we report the NMR spectrum from an ice crystal with a (6-nanometer)^3 detection volume. More importantly, the magnetic dipolar coupling between two proton nuclear spins of a water molecule was recorded. The analysis of the interaction-resolved NMR spectra provides a spatial view of the ice crystal and the structure of the water molecules inside.

The NV center is a highly sensitive atomic-scale magnetic sensor <cit.>. It consists of a nitrogen impurity and a neighboring vacancy in diamond (Fig. 1a). The spin-triplet ground state of an NV center can be initialized and read out by 532 nm illumination. Such a physical system can be used for detecting magnetic fields. It performs high-sensitivity and high-resolution spin spectroscopy of targets both in diamond <cit.> and near the surface <cit.>. In our experiments, NV centers are implanted by 5 keV N^+ ions into a diamond of 50 μm thickness. The depths of the shallow NV centers are identified by NMR-based methods measuring the distance to protons, and are found to be 5-7 nm (Table S1 in Supplementary Information). The experimental setup is shown schematically in Fig. <ref>a. The magnetic sensor in diamond is mounted between a coplanar waveguide and a glass plate. The water fills the gap between the diamond and the coplanar waveguide, while paraffin wax is dropped around the gaps to prevent the water from evaporating. The whole sample, together with the magnetic sensor and waveguide, is connected to two semiconductor coolers in a nitrogen gas atmosphere. The water sample is frozen solid and its structure is a hexagonal crystal of I_h, as shown in Fig. <ref>b. The sensing volume of the protons in ice is around (6nm)^3 (Fig. <ref>b).

The sensor-sample system Hamiltonian is H=H_NV+H_hf+H_nuc. The NV sensor spin Hamiltonian is H_NV=D S_z^2+γ_e B·S, where D denotes the zero-field splitting and γ_e=2.8MHz/G is the electron spin gyromagnetic ratio. The NV sensor couples to the proton nuclear spins through the hyperfine Hamiltonian, H_hf=S^z∑_m=1^N(A^zz_mI_m^z+A^zx_mI_m^x), where A^αβ_m is the hyperfine tensor, and S^z and I_m^α are the NV spin and the proton nuclear spins, respectively. The proton nuclear spin Hamiltonian is H_nuc=ω_L∑_m=1^N I_m^z+ ∑_m=1^N∑_n=1^m-1I⃗_m·𝔇_m,n·I⃗_n, where ω_L = γ_H B_0 is the Larmor frequency of the nuclear spins and 𝔇_m,n is the dipolar interaction between nuclear spins m and n.

To detect the NMR signal of protons in water, a "lock-in detection" method is used. A periodic dynamical decoupling pulse sequence, XY8-K, is used to control the NV center. When the sensor is driven in synchrony with the nuclear evolution (π pulse intervals τ are adjusted to τ=1/(2ω_L) in Fig.
<ref>a), the effective evolution of the nuclear spins is given by A^zx_m I^x_m/π <cit.>. An a.c. magnetic signal from the nuclear spins causes decoherence of the sensor spin state, which is then read out optically after another π/2 pulse. By sweeping τ and converting it to the frequency domain (Fig. S6), NMR spectra are observed (Fig. <ref>b). The NMR spectrum of water at room temperature is shown in the upper panel in Fig. <ref>b. Surprisingly, the full width at half maximum (FWHM) of the liquid spectrum is much broader than that of the liquid spectrum from a conventional 400MHz NMR spectrometer (Fig. S5). The liquid spectrum is even broader than the solid spectrum measured by the same NV sensor (lower panel in Fig. <ref>b), which is opposite to conventional NMR spectra. In fact, this is a special phenomenon in nano-NMR spectroscopy due to the diffusion of the sample through the nanoscale sensing volume. The spectral broadening of liquid water comes from the fast diffusion of water molecules through the detection volume of the NV sensor <cit.>. To eliminate the diffusion and preserve the dipolar interaction, the water is frozen solid. The spectral FWHM of frozen water is about 36 kHz, which mainly results from the dipolar interactions and the detection sequences. It is comparable with the solid spectrum from a conventional spectrometer. NMR spectra at various fields are observed (Fig. <ref>c). The fitted resonant frequencies are proportional to the external magnetic fields. The observed gyromagnetic ratio of 4.250(3) kHz/G matches well with proton nuclear spins. All spectra are verified with correlation spectroscopy (Fig. S8 and Ref. <cit.>).
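The field calibration implied by these numbers is easy to cross-check. A short sketch, assuming the proton gyromagnetic ratio 4.2576 kHz/G and an illustrative field chosen to reproduce the ~1.85 MHz proton Larmor frequency quoted below:

GAMMA_H_KHZ_PER_G = 4.2576

def proton_larmor_khz(b0_gauss):
    """Proton Larmor frequency nu_L = gamma_H * B0, in kHz."""
    return GAMMA_H_KHZ_PER_G * b0_gauss

def xy8_tau_us(b0_gauss):
    """Resonant pi-pulse spacing tau = 1/(2 nu_L), in microseconds."""
    return 1.0e3 / (2.0 * proton_larmor_khz(b0_gauss))

b0 = 434.0                                          # G, illustrative field
print(f"nu_L = {proton_larmor_khz(b0):.0f} kHz")    # ~1848 kHz
print(f"tau  = {xy8_tau_us(b0) * 1e3:.0f} ns")      # ~271 ns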
The dipolar coupling is usually less than a hundred kHz. To detect this kind of weak coupling, a high-resolution strategy, correlation spectroscopy <cit.>, is then carried out to resolve the nuclear spin interactions. With the periodic dynamical decoupling method, the coherence time T_2 of NV centers with depths of a few nanometers is on the order of a few tens of microseconds. Correlation spectroscopy extends the frequency resolution from 1/T_2 to 1/T_1, which is about a few kHz for most shallow NV centers, sufficient to resolve the intra-molecule nuclear spin dipolar interaction. The pulse sequence is described in Fig. <ref>a. The protocol consists of two dynamical decoupling sequences with a free evolution time T in between <cit.>. Both of the dynamical decoupling sequences are applied to correlate the NV sensor with the proton nuclear spins. During the free evolution time T, the protons evolve freely under the external magnetic field and the mutual proton-proton dipolar interactions. The method allows us to measure the free evolution of the transverse proton spin for an extended time interval T, as shown in Fig. <ref>b. The frequency is 1838 ± 27 kHz, which matches the Larmor frequency γ_H B_0 = 1847 kHz of proton nuclear spins under the external field B_0. To resolve the weak couplings, a correlation protocol with T = 110μs is taken to detect the NMR spectrum with a linewidth on the order of ∼ 5 kHz. Without losing any spectral information, an under-sampling protocol is carried out to record the evolution envelope of the proton nuclear spins. A modulation is observed on the correlation spectrum (Fig. <ref>c) and its fast Fourier transform (FFT) shows the frequency components more directly (Fig. <ref>d). The original spectrum, shown in Fig. <ref>a, is reconstructed from the spectrum in Fig. <ref>d according to the Nyquist-Shannon sampling theorem <cit.> (see Supplementary Information for details).

The central peak, marked by the yellow arrow with Δ f_0 = 0 kHz, comes both from zero-coupling protons in H_2O and from the unpaired protons in HDO. It is further known that the other four peaks correspond to two frequency splittings, Δ f_1,2 = 15.1 kHz and 33.6 kHz, caused by magnetic dipolar interactions of proton nuclear spins. Through the analysis of the spectrum, we can derive the orientation of an ice nanocrystal, the directions of proton dimers and even the distance d between two protons in a molecule. The distance d can be extracted by measuring the NMR spectra under different θ. The homonuclear magnetic dipolar interaction is formulated by H_D= δ(1-3cos^2θ)(3I_1^z I_2^z - I⃗_1·I⃗_2)/2, where the dipolar coupling parameter is δ = μ_0/4πγ_H^2ħ/d^3 and θ is the angle between the proton dimer orientation vector and the external magnetic field B_0. The dipolar splitting frequency relative to the Larmor frequency of the protons is Δ f =3/4δ(1-3cos^2θ). The scaling δ∝ 1/d^3 makes the inter-molecule interaction much smaller than the intra-molecule interaction in most cases. The inter-molecular dipolar interaction in our experiment is further reduced by dilution of the proton nuclear spins (a 1:1 mixture of light and heavy water). The dipolar coupling parameter δ is approximately 30.5 kHz for a distance d = 1.58Å between the protons of a molecule. The average inter-molecular proton nuclear spin interaction strength decreases from 4 kHz to 2 kHz, which is smaller than the resolution in the experiment. In the following analysis of the spectrum, only the intra-molecule interaction is considered.

We assume that the ice on the diamond surface is a single crystal, as our sensor only detects a sample with a volume of a few cubic nanometers. Under our experimental conditions, the ice single crystal has I_h symmetry with azimuth angles (α,β) (inset in Fig. <ref>a). In each I_h ice crystal cell, there are water molecules with different orientations. In the ice, there are 12 different proton dimer orientations θ_i in total and all of them depend only on the crystal azimuth angles (α,β). The dipolar interaction of a proton dimer bond depends on the angle θ_i(α,β) between the dimer orientation and the external magnetic field, where i=1, … ,12. The spectral splitting, 3/4δ(1-3cos^2θ), is determined by each proton dimer angle θ_i and the distance d in Fig. <ref>b. The spectra are calculated by ∑_i=1^N3/4δ(1-3cos^2θ_i). Matching the calculated spectra with experiments yields the crystal azimuth angles α,β and thus determines the proton dimer bond angles θ_i.

The spectra are dominated by proton nuclear spins in both light water and semi-heavy water molecules, as the signal from deuterium is mitigated due to its nearly one order of magnitude smaller magnetic moment. The ratio D_2O:H_2O is originally 1:1, which may have slowly decreased due to absorption and exchange between the liquid water and the water molecules in air. We simulate the spectra with various ratios of semi-heavy water molecules HDO to light water molecules H_2O step by step. The simulated curve matches the experimental spectra well when the molecular ratio HDO:H_2O is 1:2 (Fig. 4b). The crystal orientation (α,β) = (65^∘, 79^∘) is first derived as the optimal azimuth angle (see Supplementary Information for details). Thus 12 different proton dimer orientations θ_i(α,β) are resolved and listed in Table <ref>.
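For orientation, the dipolar coupling parameter and the angular dependence of the splitting can be evaluated directly. A minimal sketch in SI units, reproducing the δ ≈ 30.5 kHz quoted above for d = 1.58 Å:

import math

# delta = (mu0/4pi) * gamma_H^2 * hbar / d^3 and
# Delta_f = (3/4) * delta * (1 - 3 cos^2 theta).
MU0_4PI = 1.0e-7                  # T m / A
GAMMA_H = 2.675e8                 # rad s^-1 T^-1
HBAR = 1.0546e-34                 # J s

def delta_khz(d_m):
    return MU0_4PI * GAMMA_H**2 * HBAR / d_m**3 / (2 * math.pi) / 1e3

def splitting_khz(d_m, theta_rad):
    return 0.75 * delta_khz(d_m) * (1 - 3 * math.cos(theta_rad) ** 2)

d = 1.58e-10                      # proton-proton distance, m
print(f"delta = {delta_khz(d):.1f} kHz")            # ~30.5 kHz
for deg in (0.0, 54.7, 90.0):     # 54.7 deg: magic angle, zero splitting
    print(f"theta={deg:5.1f} deg -> {splitting_khz(d, math.radians(deg)):+.1f} kHz")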
Three peaks in the spectrum, Δ f_0,1,2, indicated by the yellow, blue and red arrows in turn (Fig. <ref>a), are contributed by proton nuclear spin dimers with directions θ_1,2, θ_3∼10, and θ_11,12, respectively. The orientations are shown as circular conical surfaces relative to B_0 in Fig. <ref>c. In the above analysis, we assume the proton dimer bond length to be 1.58Å. In principle, the distance d can be extracted by measuring the dipolar splitting under different θ. The dimer bond length resolution is about 0.1Å, which is estimated from a broadened spectrum with FWHM ≈ 6 kHz.

In conclusion, we observe for the first time the liquid and solid NMR spectra and resolve for the first time the magnetic dipolar coupling of two protons within a water molecule. Through the analysis of the magnetic dipolar coupling spectrum, we resolved the orientation of an ice nanocrystal, (α,β) = (65^∘, 79^∘), and the directions of the proton dimers θ_i:1∼12, which are listed in Table <ref>. We can further obtain the distance d between two protons in a molecule using the same spectral analysis method. Equipped with new technology with high sensitivity <cit.> and an angle-adjustable magnetic field <cit.>, there will be more constraints on the crystal orientations. Thus our method can uniquely determine both the dimer bond length and the bond angle. Detection of magnetic interactions is an essential part of nano-NMR spectroscopy. This work shows that NMR spectroscopy by an NV sensor yields the structure analysis of an ice crystal and a water molecule at the nanoscale. This work provides a further tool for single-molecule structure analysis. Together with the previous work on nano-NMR spectroscopy <cit.>, spectrally resolved dipolar interactions in small sample volumes can yield valuable structural information, as opposed to bulk NMR, where such interactions typically hamper structure analysis. Combined with detection of chemical shifts, J-couplings and the widely used nuclear spin labeling methods of conventional NMR, the structure of a single complex bio-molecule could be fully resolved by NV-based nano-NMR spectroscopy. More importantly, the detection of nanoscale water is a highly challenging scientific field in itself. As nanoscale proton configurations are especially difficult to observe under ambient conditions for both STM and X-ray, NV-based sensing provides a unique way to take a close snapshot of nanoscale configurations. For these reasons, NV-based quantum sensing has the potential to shed light on a long-standing challenge in structural biology: the role of water layers on protein structures. Water is the most important solvent in nature, and water layers on surfaces or around molecules determine protein folding or the energetics of cell membranes. For example, the structure of water bound to surfaces such as proteins is of outstanding importance for their function but is only accessible by molecular dynamics simulation. Here we provide a method which enables the detection of such structure. Besides, this work opens a way to the research of nanoscale confined water with a few atomic layers by magnetic resonance, which may reveal novel physical phenomena and provide fundamental knowledge of water-related science <cit.>.

§ COOLING SYSTEM

The whole sample area is cooled by two semiconductor coolers. The coolers are pasted on a copper heat sink with silicone grease. The temperature of the copper sink is measured to be -20^∘C when 17^∘C cooling water is applied to the hot side of the semiconductor coolers. The whole setup is mounted in a nitrogen gas atmosphere to prevent frost.
The copper heat sink is glued on the coplanar waveguide with silicone grease to cool down the water stored between the waveguide and the NV sensor in diamond. The lowest temperature measured by the NV sensor is -7^∘C.

§ CALCULATION AND FITTING

To simulate the spectrum, we consider the proton nuclear spins to interact with adjacent proton or adjacent deuterium nuclear spins. The dominant coupling, the adjacent proton-proton dipolar interaction, gives the splitting Δ f_i(α,β). The coupling splitting caused by the proton-deuterium dipolar interaction is Δ f^D_i(α,β). There is a portion p of deuterium-proton water molecules, such that the deuterium peaks appear with weight p/(1-p) relative to the proton-proton-only peaks. Considering the peak broadening Δ, the frequency shift f_shift and the linewidth σ of the filter function, we have the fitting function

S(α,β,f_shift)= ( 2p/1-p∑_i=1^121/3(e^-(f-Δ f^D_i(α,β))^2/Δ^2+e^-f^2/Δ^2 +e^-(f+Δ f^D_i(α,β))^2/Δ^2) +∑_i=1^12( e^-(f-Δ f_i(α,β))^2/Δ^2 +e^-(f+Δ f_i(α,β))^2/Δ^2) ) e^-2(f-f_shift)^2/σ^2.

The experimental spectrum is fitted by S(α,β,f_shift) with least squares to find the optimized azimuth angles (α,β).

Acknowledgements The authors thank J. Wrachtrup for his many helpful suggestions on improving the work and manuscript. We also thank F. Jelezko, Y.F. Yao and F. Reinhard for their helpful discussions. This work is supported by the 973 Program (Grant No. 2013CB921800, No. 2016YFA0502400), the NNSFC (Grants No. 11227901, No. 31470835, and No. 91636217), the CAS (Grant No. XDB01030400, No. QYZDYSSW-SLH004, 2015370), CEBioM and the Fundamental Research Funds for the Central Universities (WK2340000064).

Author contributions J.D. supervised the entire project. J.D. and F.S. designed the experiments. X.K., Z.Y., P.W., and F.S. prepared the setup. N.R. and J.M. prepared the NV centers by ion implantation. X.K. and Z.Y. performed the experiments. X.K. and F.S. performed the simulation. J.D., F.S., and X.K. wrote the manuscript. All authors discussed the results and commented on the manuscript.

Competing Interests The authors declare that they have no competing financial interests.

Additional information Supplementary information accompanies the paper on http://www.nature.com/nat. Correspondence and requests for materials should be addressed to J.D.
http://arxiv.org/abs/1705.09201v1
{ "authors": [ "Xi Kong", "Fazhan Shi", "Zhiping Yang", "Pengfei Wang", "Nicole Raatz", "Jan Meijer", "Jiangfeng Du" ], "categories": [ "quant-ph", "physics.chem-ph" ], "primary_category": "quant-ph", "published": "20170525143656", "title": "Atomic-scale structure analysis of a molecule at a (6-nanometer)$^3$ ice crystal" }
[email protected]@phys.msu.ru Faculty of physics, Lomonosov Moscow State University, Moscow, Russian Federation. A consideration of waves propagating parallel to the external magnetic field is presented. The dielectric permeability tensor is derived from quantum kinetic equations with non-trivial equilibrium spin-distribution functions in the linear approximation on amplitude of wave perturbations. It is possible to consider equilibrium spin-distribution functions with nonzero z-projection proportional to the difference of the spin distribution function while x- and y-projections are equal to zero. It is called trivial equilibrium spin-distribution functions. In general case, x- and y-projections of the spin-distribution functions are nonzero which is called the non-trivial regime. Corresponding equilibrium solution is found in [Phys. Plasmas 23, 062103 (2016)]. Contribution of the nontrivial part of the spin-distribution function appears in the dielectric permeability tensor in the additive form. It is explicitly found here. Corresponding modification in the dispersion equation for the transverse waves is derived. Contribution of nontrivial part of the spin-distribution function in the spectrum of transverse waves is calculated numerically. It is found that the term caused by the nontrivial part of the spin-distribution function can be comparable with the classic terms for the relatively small wave vectors and frequencies above the cyclotron frequency. In majority of regimes, the extra spin caused term dominates over the spin term found earlier, except the small frequency regime, where their contributions in the whistler spectrum are comparable. A decrease of the left-hand circularly polarized wave frequency, an increase of the high-frequency right-hand circularly polarized wave frequency, and a decrease of frequency changing by an increase of frequency at the growth of the wave vector for the whistler are found. A dramatic decrease of the spin wave frequency resulting in several times larger group velocity of the spin wave is found either. Found dispersion equations are used for obtaining of an effective quantum hydrodynamics reproducing these results. This generalization requires the introduction of corresponding equation of state for the thermal part of the spin current in the spin evolution equation. 52.25.Xz, 52.25.Dg, 52.35.Hr, 75.30.Ds Dielectric permeability tensor and linear waves in spin-1/2 quantum kinetics with non-trivial equilibrium spin-distribution functions L. S. Kuz'menkov December 30, 2023 =====================================================================================================================================§ INTRODUCTION Spin effects modify properties of plasmas <cit.> as well as electron gas in other mediums <cit.>. They play a role for the degenerate and non-degenerate plasmas. However, the spin effects are more prominent for the degenerate plasmas since spin polarization clearly splits the Fermi step on two Fermi steps of different width for the spin-up and spin-down electrons <cit.>. It allows to distinguish two types of electrons and consider them as two species <cit.>. Consideration of the independent evolution of electrons with different spin projections leads to discovery of the spin-electron acoustic waves.Spin evolution modifies hydrodynamic and kinetic properties of plasma <cit.>. 
In both cases the dynamical equations contain the force of the spin-spin interaction: S^β(r,t)∇_rB^β in hydrodynamics <cit.> and ∇_pS^β(r,p,t)·∇_rB^β in kinetics <cit.>, where B^β=B^β(r,t) is the magnetic field, ∇_r is the gradient in coordinate space, and ∇_p is the gradient in momentum space. This force contains the spin density S. It is the coordinate-space density of spin in hydrodynamics, S(r,t), and it is the phase-space density of spin (the spin distribution function) in kinetics, S(r,p,t). Therefore, the complete model requires an equation for the spin density evolution. The time evolution of the spin density happens due to two mechanisms: a kinematic mechanism, where the flow of spinning particles in and out of the vicinity of a point in space changes the local spin density, and a dynamical one, where the change of spin happens due to the interparticle interaction. The kinematic mechanism gives the spin current. In hydrodynamics, the spin current J^αβ has a structure similar to the structure of the momentum current Π^αβ existing in the Euler equation <cit.>. Tensor Π^αβ contains the flow of the local center of mass nv^αv^β, the flow on the thermal velocities (the thermal pressure, or the Fermi pressure for degenerate fermions) p^αβ, and the quantum part which is usually called the quantum Bohm potential. The spin current J^αβ contains the flow of spin on the velocity of the local center of mass S^αv^β, the thermal part of the spin current J_th^αβ (or the Fermi spin current for degenerate fermions <cit.>), and the quantum part calculated by Takabayasi <cit.>.

The majority of works on the spin evolution in plasmas are focused on interaction and drop the Fermi spin current <cit.>. The thermal spin current is not considered in ferrofluids either <cit.>. Hence, usually, the Fermi spin current is assumed to be equal to zero. However, recently, an equation of state has been derived for the Fermi spin current <cit.>. A more detailed study of physical effects similar to the Fermi spin current requires kinetic modeling. Corresponding research is performed in Refs. <cit.>. It is shown that the kinetic analysis can be done with a non-zero equilibrium scalar distribution function f_0 and a non-zero z-projection of the equilibrium spin distribution function S_0z while S_0x=S_0y=0 <cit.>. However, the general model requires consideration of S_0x≠0 and S_0y≠0, which are found in Ref. <cit.>. The required analysis is performed in this paper for waves propagating parallel to the external magnetic field.

Influence of the spin on the properties of magnetized plasmas is studied in many papers <cit.>. Quantum hydrodynamics <cit.> and quantum kinetics <cit.> are applied to this research. There are considered ordinary electromagnetic waves <cit.>, spin damping corrections <cit.>, waves propagating parallel <cit.> and perpendicular <cit.> to the external magnetic field, and obliquely propagating waves <cit.>, the quantum vorticity <cit.>, and the ponderomotive force <cit.>. The majority of these papers consider all electrons as one species. Separate spin evolution quantum hydrodynamics and separate spin evolution quantum kinetics are developed for the study of electrons as two fluids <cit.>. The spin-electron acoustic waves found from these models are studied in different regimes <cit.>. The study of the nontrivial part of the equilibrium distribution functions continues this research.

This paper is organized as follows. In Sec. II, basic quantum kinetic equations for spin-1/2 plasmas are presented. In Sec.
III, the equilibrium distribution functions are presented and the main structure of the dielectric permeability tensor is described. In Sec. IV, the dispersion equation and the spectrum of the transverse waves propagating parallel to the external magnetic field are studied under the influence of the extra spin effects caused by the non-trivial part of the equilibrium distribution functions. In Sec. V, an equation of state for the Fermi spin current entering the hydrodynamic spin evolution equation is deduced from the spectra derived from the kinetic model. In Sec. VI, a summary of the obtained results is presented. In Sec. VII, Appendix A is presented, where the linearized kinetic equations and their solutions are found. In Sec. VIII, Appendix B is presented, where details of the dielectric permeability tensor and some details of its calculation are described. In Sec. IX, Appendix C is presented, where an approximate form of the dielectric permeability tensor is found. In Sec. X, Appendix D is presented, where the dimensionless form of the dispersion equation is demonstrated.

§ QUANTUM KINETIC MODEL FOR SPIN-1/2 PLASMAS

The quantum kinetics of spin-1/2 particles can be modeled by distribution functions (the scalar function f and the vector (spin) function S) defined in the six-dimensional phase space <cit.>. The equation for the scalar distribution function f=f(r,p,t) is the generalized Vlasov equation <cit.>:

∂_tf+v·∇_rf +q_e(E+1/cv×B)·∇_pf +μ_e∇^α_r B^β·∇_p^α S^β=0,

which contains an extra term (the last term) caused by the spin-spin interaction. The kinetic equation for the vector distribution function S=S(r,p,t) has the following form <cit.>:

∂_tS^α+v·∇_rS^α +q_e(E+1/cv×B)·∇_pS^α +μ_e∇^β_r B^α·∇_p^βf -2μ_e/ħε^αβγS^βB^γ=0.

The last two terms are caused by the spin-spin interaction. Kinetic equations (<ref>) and (<ref>) contain the following notations: E and B are the electric and magnetic fields, q_e=-| e| is the charge of the electron, μ_e=-gμ_B is the magnetic moment of the electron, μ_B=| e|ħ/2mc is the Bohr magneton, g=1.00116, r (p=mv) is the coordinate in coordinate (momentum) space, t is time, ∂_t is the time derivative, ∇_r (∇_p) is the gradient with respect to the space coordinate (the momentum), ∇_r^α and ∇_p^α are projections of the described gradients on the coordinate axes, ħ is the reduced Planck constant, c is the speed of light, and ε^αβγ is the antisymmetric symbol (the Levi-Civita symbol).

The kinetic equations are coupled to the Maxwell equations ∇·E=4πρ, ∇×E=-1/c∂_tB, ∇·B=0, and ∇×B=1/c∂_tE+4π/cj +4π∇×M, where ρ=q_e∫ f(r,p,t)dp+q_in_0i, j=q_e∫vf(r,p,t)dp, and M=μ_e∫S(r,p,t)dp is the magnetization.

§ STRUCTURE OF THE DIELECTRIC PERMEABILITY TENSOR

Kinetic equations linearized with respect to the small perturbations are needed for the derivation of the dielectric permeability tensor. Assume that the following functions have non-zero values in the equilibrium state: f_0(p), S_0(p,φ), B_0=B_ext=B_0e_z.
Moreover, the explicit forms of the equilibrium distribution functions for the partially spin-polarized degenerate electrons appear as the sum or the difference of the Fermi steps for the spin-up and spin-down electrons <cit.>:

f_0(p)=1/(2πħ)^3[Θ(p_F↑-p) +Θ(p_F↓-p)],

and

[ S_0x=Σ(p)cosφ, S_0y=Σ(p)sinφ, S_0z=Σ(p), ]

with

Σ(p)=1/(2πħ)^3[Θ(p_F↑-p) -Θ(p_F↓-p)],

where Θ is the step function, p=|p| is the module of the momentum, p_F↑ and p_F↓ are the Fermi momenta for the spin-up and spin-down electrons, v_Fs=p_Fs/m=(6π^2n_0s)^1/3ħ/m, s=↑, ↓, φ is the polar angle of the cylindrical coordinates in momentum space, and n_0s are the concentrations of the spin-up and spin-down electrons.
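For concreteness, the Fermi momenta entering these Fermi steps are easily evaluated. A short sketch, assuming an illustrative density and spin polarization and taking spin-up as the majority species:

import math

# p_Fs = (6 pi^2 n_0s)^(1/3) * hbar, with n_0(up/down) = n_0 (1 +/- eta)/2
# (cgs units; the density and polarization below are illustrative).
hbar, m = 1.055e-27, 9.109e-28
n0, eta = 1.0e27, 0.5                          # cm^-3, spin polarization

for label, ns in (("up", 0.5 * n0 * (1 + eta)), ("down", 0.5 * n0 * (1 - eta))):
    p_f = (6 * math.pi**2 * ns) ** (1.0 / 3.0) * hbar
    print(f"spin-{label}: n0s={ns:.2e} cm^-3, p_F={p_f:.2e} g cm/s, "
          f"v_F={p_f / m:.2e} cm/s")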
Present the small perturbations of the described equilibrium state as plane waves. For instance, the scalar distribution function can be presented as f=f_0+δ f, where δ f=F e^-ıω t+ıkr and F is the amplitude of the perturbation. Moreover, waves propagating parallel to the external magnetic field are under consideration, k={0,0,k_z}. Below, the following notations are used: the charge cyclotron frequency Ω_e=q_eB_0/mc, the magnetic moment cyclotron frequency Ω_μ=2μ_eB_0/ħ, and the partial Langmuir frequency ω_Ls^2=4π e^2n_0s/m. Parameters Ω_e=q_eB_0/mc and Ω_μ=2μ_eB_0/ħ are equal to each other if the anomalous part of the magnetic moment of the electron is neglected. The linearized kinetic equations and their solutions are presented in Appendix A.

The linear kinetic theory allows one to calculate the dielectric permeability tensor, which enters the equation for the small perturbations of the electric field:

[k^2δ^αβ-k^αk^β-ω^2/c^2ε^αβ(ω)]δ E_β=0,

where the dielectric permeability tensor appears as

ε^αβ(ω)=δ^αβ+ε^αβ_1(ω)+ε^αβ_2(ω),

where δ^αβ is the Kronecker symbol, ε^αβ_1(ω) is the part of the dielectric permeability tensor caused by the current,

ε^αβ_1(ω)δ E_β=4π/ωq_e/m∫ p^αδ f dp,

and ε^αβ_2(ω) is the part caused by the curl of the magnetization,

ε^αβ_2(ω)δ E_β=4πμ_e/ωε^αγ z k_zc ∫δ S^γdp.

The explicit forms of ε^αβ_1 and ε^αβ_2 are presented in Appendix B. Meanwhile, consider the structure of the tensors ε^αβ_1 and ε^αβ_2. Tensor ε^αβ_1 can be separated into three parts: ε^αβ_1=ε^αβ_10+ε^αβ_11+ε^αβ_12. The first part is the quasi-classic term ε^αβ_10 existing with no account of the spin evolution. Tensor ε^αβ_10 consists of two terms, ε^αβ_10=-∑_s=↑, ↓∫sinθ dθΠ^αβ_Cl(θ,s), since the spin-polarized plasma with the equilibrium scalar distribution function split into two terms (<ref>) is considered, where θ is the angle of the spherical coordinates in velocity space defined as cosθ=v_z/v. Despite this fact, tensor ε^αβ_10 has the well-known structure presented in textbooks (see for instance <cit.>). Tensor ε^αβ_2 can be separated into two parts: ε^αβ_2=ε^αβ_21+ε^αβ_22. Tensors ε^αβ_11, ε^αβ_12, ε^αβ_21, ε^αβ_22 appear due to the spin evolution. Tensors ε^αβ_11 and ε^αβ_21 are found at the account of the trivial part of the equilibrium distribution functions. It means that f_0 and S_0z are given by equations (<ref>) and (<ref>) while S_0x=S_0y=0. The account of non-zero S_0x, S_0y given by equations (<ref>) (the nontrivial part of the equilibrium distribution functions) leads to the existence of the tensors ε^αβ_12, ε^αβ_22. The calculation of the tensors ε^αβ_12 and ε^αβ_22 and the derivation of their contribution to the plasma properties are the main subjects of this paper. The parts of the dielectric permeability tensor ε^αβ_10, ε^αβ_11, ε^αβ_21 are in accordance with the earlier developed models <cit.>, while the tensors ε^αβ_12 and ε^αβ_22 are a generalization of the mentioned papers.

§ TRANSVERSE WAVES PROPAGATING PARALLEL TO THE EXTERNAL MAGNETIC FIELD

A partially explicit form of the dispersion equation for the transverse waves appears as follows:

k_z^2c^2/ω^2=1 -∑_s=↑, ↓3/4ω_Ls^2/ω1/k_zv_Fs[2(ω∓Ω_e)/k_zv_Fs +(1-(ω∓Ω_e)^2/(k_zv_Fs)^2) ln(ω+k_zv_Fs∓Ω_e/ω-k_zv_Fs∓Ω_e)] +Σ_∓,

for δ E_x=±δ E_y (left/right-hand circular polarization) correspondingly, where Σ_∓ are the terms caused by the spin evolution, presented in nonexplicit form. Consider the new term in the long-wavelength regime k_zv_Fs/|ω±Ω_e|≪1. Start with the regime ω±|Ω_e|>0. In this regime, equation (<ref>) has the following form:

k_z^2c^2/ω^2=1 -ω_Le^2/ω (ω±|Ω_e|) +ω_Le^2/ω^2ħ k_z^2/2m n_0e[n_0↑-n_0↓/ω±|Ω_μ|] ∓ (6π^2)^2/3/32πω_Le^2/ω(ω±|Ω_μ|)n_0↑^2/3 -n_0↓^2/3/n_0ek_z ,

where ω_Le^2=ω_L↑^2+ω_L↓^2=4π e^2n_0e/m is the Langmuir frequency for all electrons and n_0e=n_0↑+n_0↓ is the concentration of all electrons. Equation (<ref>) contains coefficients proportional to ħ k_z^2/m. It bears a similarity to the well-known quantum Bohm potential. However, it comes from the spin evolution <cit.>. Equation (<ref>) is a generalization of the equation which is well known from spin-1/2 hydrodynamics <cit.>. The fourth term on the right-hand side of equation (<ref>) is proportional to the difference of the Fermi energies for electrons with different spin projections, ε_F↑-ε_F↓∼ n_0↑^2/3 -n_0↓^2/3, which is a signature of the Fermi spin current (see the equation of state derived in <cit.>, and the discussion in the introductions of Refs. <cit.>).

Compare the fourth term on the right-hand side of equation (<ref>) with k_z^2c^2/ω^2. Consider the ratio of these terms and find Ξ=(3π^2)^2/3 rτ (n_0e^1/3/k_z)ω/8(ω±|Ω_μ|), where r=e^2 n_0e^1/3/mc^2 and τ=[(1+η)^2/3-(1-η)^2/3], with η=| n_0↑-n_0↓|/n_0e∈[0,1] the spin polarization. Basically, this ratio is proportional to the parameter r. Parameter r is small even for n_0=10^27 cm^-3 (r≈2×10^-4). The ratio Ξ is not affected by the frequency in the high-frequency regime ω≫|Ω_μ|. It is decreased by ω/|Ω_μ| at small frequencies, while Ξ grows at the intermediate frequencies ω≈|Ω_μ| for the right-hand circular polarization. A small spin polarization η≪1 decreases the parameter Ξ. The ratio Ξ increases at large spin polarization η∼1 and small wave vectors k_z≪ n_0e^1/3.

Next, compare the second term and the fourth term on the right-hand side of equation (<ref>). Their ratio has the following form: Λ=(3π^2)^2/3(k_z/n_0e^1/3)τ/32π. The ratio Λ grows with an increase of the wave vector k_z, but large values of k_z cannot be considered since equation (<ref>) is derived in the limit ω±|Ω_e|≫ k_zv_Fs. Find Λ∼10^-2 at k_z=0.1n_0^1/3, η=0.5, and ω≫|Ω_e|. It is a relatively small value, but it is a good value for a spin effect in plasma. In this regime Λ≫Ξ. Moreover, compare the third term and the fourth term on the right-hand side of equation (<ref>). Both of them present contributions of spin effects, while the fourth term is derived in this paper. Their ratio has the following form:

Π=8π/(3π^2)^2/3k_z/n_0e^1/3ħ n_0e^2/3/mω(1+η)^2/3+(1-η^2)^1/3+(1-η)^2/3/(1+η)^1/3+(1-η)^1/3.

It is decreased by the factor k_z/n_0e^1/3, but for small frequencies ω the parameter ħ n_0e^2/3/(mω) leads to an increase of Π.
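These order-of-magnitude estimates can be checked numerically. A short sketch in cgs units, with the illustrative parameters used in the text (n_0 = 10^27 cm^-3, η = 0.5, k_z = 0.1 n_0^1/3):

import math

# r = e^2 n^(1/3) / (m c^2) and
# Lambda = (3 pi^2)^(2/3) (k_z / n^(1/3)) tau / (32 pi),
# with tau = (1+eta)^(2/3) - (1-eta)^(2/3).
e, m, c = 4.803e-10, 9.109e-28, 2.998e10      # esu, g, cm/s
n0, eta = 1.0e27, 0.5                          # cm^-3, polarization
kz = 0.1 * n0 ** (1.0 / 3.0)                   # cm^-1

r = e**2 * n0 ** (1.0 / 3.0) / (m * c**2)
tau = (1 + eta) ** (2.0 / 3.0) - (1 - eta) ** (2.0 / 3.0)
lam = (3 * math.pi**2) ** (2.0 / 3.0) * (kz / n0 ** (1.0 / 3.0)) * tau \
      / (32 * math.pi)

print(f"r ~ {r:.1e}")        # ~3e-4, the order of the r~2e-4 estimate above
print(f"Lambda ~ {lam:.1e}") # ~1e-2 for these parameters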
Consider the left-hand circularly polarized electromagnetic waves, taking the upper sign in equation (<ref>). This wave appears due to the classic terms in equation (<ref>). Consider the modifications of its properties arising due to the spin terms: the third and fourth terms on the right-hand side of equation (<ref>). The third term (∼ħ k_z^2) is small in the considered regime of small wave vectors. Hence, this analysis is focused on the last term in equation (<ref>). It has a positive sign (n_0↑<n_0↓), while the classic term coming from the charge evolution (the second term on the right-hand side) is negative. Hence, they give opposite influences. The classic term gives a considerable increase of the frequency in comparison with the frequency of a wave propagating in vacuum. So, the spin term decreases the frequency of the wave. This effect is presented in Fig. <ref>. The spin-caused third and fourth terms have opposite signs in this regime.

Next, consider the right-hand circularly polarized electromagnetic waves. In this regime, the lower sign should be taken in equation (<ref>) (for ω>|Ω_e|). Consider frequencies ω larger than 1.1|Ω_e| and obtain the classic right-hand circularly polarized electromagnetic wave. Its analysis is similar to that presented above for the left-hand circularly polarized waves, but the sign of the last term in equation (<ref>) is different. It is negative, like the classic term caused by the electron motion. Therefore, both terms lead to an increase of the frequency. The increase of the frequency of the right-hand circularly polarized electromagnetic wave caused by the spin effects, in comparison to the classic regime, is demonstrated in Fig. <ref>. The spin-caused third and fourth terms have the same sign in this regime.

The regime of right-hand circularly polarized waves demonstrates the spin wave solution at 0<ω-|Ω_e|<0.1|Ω_e|. Equation (<ref>) corresponds to a relatively large deviation of the frequency ω from the cyclotron frequency |Ω_e| for large variations of the wave vector, or small deviations of the frequency ω from the cyclotron frequency |Ω_e| for small values of the wave vector. Hence, it allows consideration of the area ω≈|Ω_e| and analysis of the spin waves with ω≈|Ω_μ| for small wave vectors only. It is illustrated in Fig. <ref>. It shows that the nontrivial part of the equilibrium distribution functions increases the deviation of the spin wave frequency from the cyclotron frequency several times in comparison with the results following from the third term on the right-hand side in equation (<ref>).

Next, consider the regime ω-|Ω_e|<0, which is meaningful for the right-hand circularly polarized waves. As a result, find the following dispersion equation:

k_z^2c^2/ω^2=1 -ω_Le^2/ω (ω-|Ω_e|) +ω_Le^2/ω^2ħ k_z^2/2m n_0e[n_0↑-n_0↓/ω-|Ω_μ|] - (6π^2)^2/3/32πω_Le^2/ω(ω-|Ω_μ|)n_0↑^2/3 -n_0↓^2/3/n_0ek_z .

Equation (<ref>) demonstrates that the last term changes its sign for the right-hand circularly polarized waves at the transition to the small-frequency regime ω<|Ω_μ|, in comparison with the large-frequency regime ω>|Ω_μ| presented by equation (<ref>) with the lower sign. Considering the low-frequency limit of the dispersion equation for the right-hand circularly polarized waves (whistlers) (<ref>), find the following analytical solution:

ω=|Ω_e|[ k_z^2c^2/ω^2_Le-ηħ k_z^2/2m|Ω_e| + k_z^2c^2/ω^2_Le((3π^2)^2/3/32π((1+η)^2/3-(1-η)^2/3) k_z/n_0e^1/3) ].

It is found by the iteration method, assuming that the spin contribution is relatively small.
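The relative weight of the three contributions in this whistler solution can be evaluated directly. A sketch in cgs units; the field and density below are illustrative choices keeping the spin terms small, not parameters from the figures:

import math

# Relative size of the three contributions to the whistler frequency above.
e, m, c, hbar = 4.803e-10, 9.109e-28, 2.998e10, 1.055e-27
n0, eta, B0 = 1.0e27, 0.5, 1.0e9               # cm^-3, polarization, G

omega_e = e * B0 / (m * c)                     # |Omega_e|, rad/s
omega_le2 = 4 * math.pi * e**2 * n0 / m        # omega_Le^2
tau = (1 + eta) ** (2 / 3) - (1 - eta) ** (2 / 3)

for kz in (1.0e5, 1.0e6):                      # cm^-1, long-wavelength regime
    classic = kz**2 * c**2 / omega_le2
    spin_trivial = -eta * hbar * kz**2 / (2 * m * omega_e)
    spin_nontrivial = classic * (3 * math.pi**2) ** (2 / 3) / (32 * math.pi) \
                      * tau * kz / n0 ** (1 / 3)
    omega = omega_e * (classic + spin_trivial + spin_nontrivial)
    print(f"kz={kz:.0e}: omega={omega:.2e} rad/s, "
          f"trivial/classic={spin_trivial / classic:+.3f}, "
          f"nontrivial/classic={spin_nontrivial / classic:+.1e}")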
The second term on the right-hand side was found earlier in the literature <cit.>. In the present model it comes from the trivial part of the equilibrium distribution functions; its contribution decreases the frequency of the whistler. The last term in equation (<ref>) comes from the non-trivial part of the equilibrium distribution functions and gives an increase of the whistler frequency, as demonstrated in Fig. <ref>. As described above, at small frequencies the last term is decreased by the factor ω/|Ω_e|. Hence, it becomes comparable with the third term, and the competition between them is revealed in a nonmonotonic shift of the whistler spectrum (see Fig. <ref>). § HYDRODYNAMIC FERMI SPIN CURRENT The analysis shows that a phenomenological generalization of the quantum hydrodynamic equations allows one to derive the last term in equations (<ref>), (<ref>), and (<ref>), which is the main spin correction appearing in the considered regime. It is found that an additional term should appear in the spin (magnetization) evolution equation; it can be interpreted as the divergence of the spin current (spin flux). Therefore, an equation of state for the hydrodynamic Fermi spin current entering the hydrodynamic spin evolution equation can be extracted from the obtained results. The equation of state corresponds to the long-wavelength limit k→0. In this limit, the Fermi spin current leading to the last terms in equations (<ref>) and (<ref>) can be captured for future study of the long-wavelength excitations. The magnetization evolution equation has the following structure <cit.> n(∂_t+u·∇)𝐦 -ħ/2mμ_e∂^β[n 𝐦×∂^β𝐦] +𝒥=2μ_e/ħn[𝐦×B], where 𝐦=M/n, M=M(r,t) is the magnetization, n=n(r,t) is the concentration of particles, u(r,t) is the velocity field, and 𝒥 is the divergence of the thermal part of the spin current (which is called the Fermi spin current for the degenerate electron gas); its explicit form is found to be 𝒥_x= ∓(6π^2)^2/3/32πω_Le^2/(ω±|Ω_μ|) cn_0↑^2/3-n_0↓^2/3/n_0^2 (Ω_μδ E_x+ωδ E_y) , 𝒥_y= ∓(6π^2)^2/3/32πω_Le^2/(ω±|Ω_μ|)c n_0↑^2/3-n_0↓^2/3/n_0^2 (ωδ E_x-Ω_μδ E_y) , 𝒥_z=0, for left/right-hand polarized transverse waves respectively, at ω±|Ω_μ|>0. At ω<|Ω_μ|, the lower sign should be chosen in the denominator and the upper sign should be chosen in front of the expression. This is a frequency-dependent equation of state. Hence, it can be suitable for linear or weakly nonlinear phenomena. An advantage of this equation of state is that it is derived for the perturbation evolution, while other equations of state in the literature are derived for equilibrium regimes <cit.>. However, as mentioned above, the equation of state is found in a small range of parameters. § CONCLUSION It has been demonstrated that the non-trivial part of the equilibrium distribution functions gives a considerable contribution in the long-wavelength limit. The spectra of all classic transverse waves propagating parallel to the external field are changed. In this regime, there are three classic waves and the spin wave. The spectra of the classic waves are shifted, while the spin wave spectrum is modified dramatically. The well-known dispersion equation, which follows from the trivial part of the distribution functions, leads to a decreasing frequency as a function of the wave vector. The non-trivial part of the equilibrium distribution functions leads to a further decrease of the frequency as a function of the wave vector.
Moreover, the modulus of the group velocity dω/dk_z increases severalfold. It has been found that the dispersion equation is different for the right-hand circularly polarized waves with ω<|Ω_e|, while the hydrodynamic model and the earlier developed kinetic models give the same dispersion equation in both regimes. The modifications are caused by the non-trivial part of the equilibrium distribution functions. A corresponding generalization of the hydrodynamic equations has been developed, where the frequency-dependent equation of state for the spin current has been found and included. An analytical expression is found for the spectrum of the whistler. This spectrum explicitly presents the contribution of the spin effects, including the effects caused by the non-trivial equilibrium distribution functions. All of the above shows that the developed kinetic model is necessary for the description of spin effects in plasmas. This model is an essential generalization of existing models. It allows one to discover new spin-related effects in linear and nonlinear waves propagating parallel or perpendicular to the external magnetic field, or for oblique wave propagation. The equation of state for the hydrodynamic spin current has been extracted from the obtained results. Hence, the generalized hydrodynamics will provide a simple approximate description of phenomena related to the non-trivial part of the equilibrium distribution functions. The work of P.A. was supported by the Russian Foundation for Basic Research (grant no. 16-32-00886) and the Dynasty foundation. § APPENDIX A: LINEAR SOLUTIONS OF KINETIC EQUATIONS The following linearized Fourier-transformed kinetic equations can be found from kinetic equations (<ref>) and (<ref>) when considering small-amplitude plane-wave perturbations of the described equilibrium state: -ωδ f +v·kδ f +q_e/cB_0 (v×e_z)·∇_pδ f +q_eδE·∇_pf_0+μ_e(k·∇_p)(S_0·δB)=0, and -ωδS+ (v·k) δS +q_e/cB_0 ((v×e_z)·∇_p)δS +q_e/c((v×δB)·∇_p)S_0+μ_e (k·∇_p)f_0δB +q_e(δE·∇_p)S_0 +2μ_e/ħ(B_0×δS-S_0×δB)=0, where the wave vector has the structure k={0,0,k_z}, which corresponds to the propagation of waves parallel to the external magnetic field and leads to δ B_z=0. Consider the part q_e/c((v×δB)·∇_p)S_0 of the Lorentz-like force in the spin evolution kinetic equation, which is non-zero since S_0x and S_0y are non-isotropic functions. It can be represented in the following form: -(q_e/mc)ε^α zγS_0γ(v^2δ B_z-v_z(v·δB))/v_⊥^2. We also present a more explicit form of the sixth term in the spin evolution kinetic equation: q_e(δE·∇_p)S_0=q_e(δE·v)∂_εS_0+(e_z×S_0) ((v×δE)·e_z)/mv_⊥^2, where ∂_ε is the derivative with respect to the kinetic energy ε=p^2/2m. The solution of the linearized kinetic equations (<ref>) and (<ref>) leads to the following perturbations of the distribution functions: δ f=1/Ω_e∫_C_0^φ(q_e(v·δE)∂ f_0/∂ε +μ_e(k_zv_z)(δB·∂S_0/∂ε)) exp((ω-k_zv_z)/Ω_e(φ'-φ))dφ', δ S_z=1/Ω_e∫_C_3^φ(q_e(v·δE)∂ S_0z/∂ε +2μ_e/ħ(δ B_xS_0y-δ B_yS_0x))exp((ω-k_zv_z)/Ω_e(φ'-φ))dφ', δ S_x=1/2[ ∫_C_1^φexp(-Ω_μ/Ω_e(φ'-φ))(Π_x(φ')+Π_y(φ'))dφ' +∫_C_2^φexp(Ω_μ/Ω_e(φ'-φ))(Π_x(φ')-Π_y(φ'))dφ'] exp(-(ω-k_zv_z)/Ω_eφ), and δ S_y=1/2[ ∫_C_1^φexp(-Ω_μ/Ω_e(φ'-φ))(Π_x(φ')+Π_y(φ'))dφ' -∫_C_2^φexp(Ω_μ/Ω_e(φ'-φ))(Π_x(φ')-Π_y(φ'))dφ'] exp(-(ω-k_zv_z)/Ω_eφ).
The solutions for δ S_x and δ S_y (presented by equations (<ref>) and (<ref>)) contain the following functions: Π_x(φ)=1/Ω_eexp((ω-k_zv_z)/Ω_eφ) ×(μ_e(k·∇_p)f_0δ B_x+2μ_e/ħS_0zδ B_y +S_0y(v·δB)q_e/cv_z/mv_⊥^2 +q_e(v·δE)∂_εS_0x-q_eS_0y((v×δE)e_z)/mv_⊥^2), and Π_y(φ)=1/Ω_eexp((ω-k_zv_z)/Ω_eφ) ×(μ_e(k·∇_p)f_0δ B_y-2μ_e/ħS_0zδ B_x -S_0x(v·δB)q_e/cv_z/mv_⊥^2 +q_e(v·δE)∂_εS_0y+q_eS_0x((v×δE)e_z)/mv_⊥^2). The constants C_0, C_1, C_2 and C_3 are chosen such that the distribution functions δ f and δS are periodic functions of the angle φ: δ f(φ+2π)=δ f(φ) and δS(φ+2π)=δS(φ). § APPENDIX B: EXPLICIT FORM OF THE DIELECTRIC PERMEABILITY TENSOR The tensor Π^αβ_Cl(θ,s) has the following explicit form: Π_Cl(θ,s)=3ω_Ls^2/2ω([1/4(sin^2θ/ω-k_zv_Fscosθ-Ω_e+sin^2θ/ω-k_zv_Fscosθ+Ω_e) 1/4(sin^2θ/ω-k_zv_Fscosθ-Ω_e-sin^2θ/ω-k_zv_Fscosθ+Ω_e) 0; -1/4(sin^2θ/ω-k_zv_Fscosθ-Ω_e-sin^2θ/ω-k_zv_Fscosθ+Ω_e) 1/4(sin^2θ/ω-k_zv_Fscosθ-Ω_e+sin^2θ/ω-k_zv_Fscosθ+Ω_e) 0; 0 0 cos^2θ/ω-k_zv_Fscosθ ]). The tensors ε^αβ_11 and ε^αβ_21 are calculated in Refs. <cit.>, <cit.> and have the following form: ε^αβ_11=0, ε^αβ_21,a=-m^2/πħ^3μ_e^2c^2/2ω^2×∑_s=↑, ↓∫sinθ dθ∑_r=+,-v_Fs^2k_zcosθκ^αβ_r/ω-k_zv_Fscosθ+rΩ_μ , and ε^αβ_21,b=m^3/πħ^3μ_e^2c^2/ħω^2×∑_s=↑, ↓∫sinθ dθ∑_r=+,-∫_0^v_Fsrκ^αβ_r (-1)^i_s v^2dv/ω-k_zv cosθ+rΩ_μ, where κ^αβ_-=(K^αβ_∥)^*, κ^αβ_+=K^αβ_∥, i_↑=0, i_↓=1, K̂_∥=k_z^2([ 1 -i 0; i 1 0; 0 0 0; ]), where ω_Ls^2=4π e^2n_0s/m. Π_Cl(θ,s) is similar to the traditional result for degenerate electrons presented in many textbooks (see for instance <cit.>), but it also includes the spin separation effect. The elements of the dielectric permeability tensor caused by the nontrivial part of δ f have the following structure: ε_12=4π/ω([α_+-α_- -(α_++α_-) 0; (α_++α_-) α_+-α_- 0; 0 0 0;]) , where α_±=1/4q_eμ_ek_z^2c/ω∫v_zv_⊥∂_pΣ(p)/ω-k_zv_z±Ω_edp/v, and Σ(p)=1/(2πħ)^3[Θ(p_F↑-p) -Θ(p_F↓-p)]. The elements of the dielectric permeability tensor caused by the nontrivial part of δS have the following structure ε_22=ε_22,a+ε_22,b+ε_22,c: ε_22,a= ( [κ_--κ_+ (κ_++κ_-) 0; -(κ_++κ_-) κ_--κ_+ 0; 0 0 0;]), ε_22,b= ( [χ_+-χ_- -(χ_++χ_-) 0; (χ_++χ_-) χ_+-χ_- 0; 0 0 0;]), and ε_22,c=4π/ω( [β_+-β_- -(β_++β_-) 0; (β_++β_-) β_+-β_- 0; 0 0 0;]), where β_±=q_eμ_e/4mk_z^2c/ω∫v_z/v_⊥Σ(p)/ω-k_zv_z±Ω_μdp, κ_±=q_eμ_e/(2πħ)^3k_zc/ω∑_s∫2π^2(-1)^i_sp_Fs^2sin^2θ dθ/ω-k_zv_Fscosθ±Ω_μ, and χ_±=k_zc/mωπ q_eμ_e/(2πħ)^3∑_s(-1)^i_s∫dp/v_⊥Θ(p_Fs-p)/ω-k_zv_z±Ω_μ. Further integration in the dielectric permeability tensor gives the following result: ε^αβ_10=-∑_s=↑, ↓Π^αβ_Cl(s), with Π_Cl(s)=3ω_Ls^2/2ω×([1/4[G_-+G_+] 1/4[G_--G_+] 0; -1/4[G_--G_+] 1/4[G_-+G_+] 0; 0 0 G_zz ]), where G_±=G(ω±Ω_e)= 1/k_zv_Fs[2(ω±Ω_e)/k_zv_Fs +(1-(ω±Ω_e)^2/(k_zv_Fs)^2)ln(ω+k_zv_Fs±Ω_e/ω-k_zv_Fs±Ω_e)], and G_zz=ω/(k_zv_Fs)^2[ω/k_zv_Fsln(ω+k_zv_Fs/ω-k_zv_Fs)-2]. The tensors ε_11^αβ and ε_21^αβ are caused by the contributions of f_0 and S_0z to the spin evolution. They are derived in <cit.> and <cit.>. Their structure is as follows: ε_21,a=k_z^2([γ_++γ_- -(γ_+-γ_-) 0; (γ_+-γ_-) γ_++γ_- 0; 0 0 0;]), and ε_21,b=k_z^2([δ_+-δ_- -(δ_++δ_-) 0; (δ_++δ_-) δ_+-δ_- 0; 0 0 0;]) . Next, we present the explicit forms of the elements γ_± and δ_±: γ_±=∑_s=↑, ↓m^2v_Fs/πħ^3μ_e^2c^2/2ω^2×(2-ω±Ω_μ/k_zv_Fsln(ω+k_zv_Fs±Ω_μ/ω-k_zv_Fs±Ω_μ)), and δ_±=∑_s=↑, ↓m^3/πħ^3μ_e^2c^2/ħω^2(-1)^i_s/k_z^2(v_Fs(ω±Ω_μ) -(ω±Ω_μ)^2-(k_zv_Fs)^2/2k_zln(ω+k_zv_Fs±Ω_μ/ω-k_zv_Fs±Ω_μ)). The integral in α_± contains the Dirac delta function of the momentum modulus.
Hence, it can easily be represented as an integral over the angle θ: α_±=-π q_eμ_e/2(2πħ)^3k_z^2c/ω∑_s(-1)^i_s× m^2v_Fs^3∫sin^2θcosθ dθ/ω-k_zv_Fscosθ±Ω_e. After taking the last integral, the explicit form of α_± is found: α_±=-q_eμ_e/16πħ^3k_zc/ω∑_s(-1)^i_s m^2v_Fs^2[-1/2+(ω±Ω_e/k_zv_Fs)^2 -ω±Ω_e/k_zv_Fs(1-ω±Ω_e/k_zv_Fs) √(ω±Ω_e+k_zv_Fs/ω±Ω_e-k_zv_Fs)]. Using the explicit form of Σ(p), we represent β_± in the following form: β_±=k_zc/ωq_eμ_e/16π^2ħ^3∑_s(-1)^i_s×∫_0^π dθ∫_0^p_Fs pdpcosθ/ω±Ω_μ/k_zv-cosθ. Taking the integral over the angle θ, we find: β_±=k_zc/ωq_eμ_e/16πħ^3∑_s(-1)^i_s∫_0^p_Fs pdp(-1 + (ω±Ω_μ/ω±Ω_μ+k_zv) √(ω±Ω_μ+k_zv/ω±Ω_μ-k_zv)). Finally, taking the integral over the momentum modulus, we find the explicit form of the function β_±: β_±=q_eμ_e/16πħ^3k_zc/ω∑_s(-1)^i_s(-1/2p_Fs^2 +m^2/k_z^2(ω±Ω_μ)( (ω±Ω_μ)-√((ω±Ω_μ)^2-k_z^2v_Fs^2))) for ω±Ω_μ>0 and ω+Ω_μ<-k_zv_Fs, or β_+=q_eμ_e/16πħ^3k_zc/ω∑_s(-1)^i_s(-1/2p_Fs^2 +m^2/k_z^2(ω+Ω_μ)( (ω+Ω_μ)+√((ω+Ω_μ)^2-k_z^2v_Fs^2))) for -k_zv_Fs<ω+Ω_μ<0. The final form of the functions κ_± can be found after the integration over the angle in equation (<ref>). It appears as follows: κ_±=q_eμ_ek_zc/ω1/k_zm^2/4ħ^3∑_s(-1)^i_sv_Fs×[ω±Ω_μ/k_zv_Fs+(1-ω±Ω_μ/k_zv_Fs) √(ω±Ω_μ+k_zv_Fs/ω±Ω_μ-k_zv_Fs)]. To find the final form of the functions χ_±, we take the integrals over the angles φ and θ and then over the velocity modulus, obtaining: χ_±=q_eμ_ec/ωm^2/4ħ^3∑_s(-1)^i_s×∫_0^v_Fsdv1/1+ω±Ω_μ/k_zv√(ω±Ω_μ+k_zv/ω±Ω_μ-k_zv) =q_eμ_ek_zc/ω1/k_z^2m^2/2ħ^3∑_s(-1)^i_s(ω±Ω_μ -√((ω±Ω_μ)^2-k_z^2v_Fs^2)). With the found structure of the dielectric permeability tensor, the dispersion equation appears in the following form: det( [ ε_xx-k_z^2c^2/ω^2 ε_xy 0; ε_yx ε_yy-k_z^2c^2/ω^2 0; 0 0 ε_zz; ])=0, with ε_xx=ε_yy≡ϵ, and ε_yx^*=ε_xy=Ξ, where ϵ=k_z^2(γ_++γ_-+δ_+-δ_-)+4π/ω(α_+-α_-+β_+-β_-) +κ_--κ_+-χ_-+χ_+ -∑_s3/8ω_Ls^2/ω(G_++G_-), Ξ=-k_z^2(γ_+-γ_-+δ_++δ_-) -4π/ω(α_++α_-+β_++β_-) +κ_-+κ_+-χ_--χ_+ -∑_s3/8ω_Ls^2/ω(G_--G_+). All the functions α_±, β_±, γ_±, δ_±, κ_±, χ_±, G_± are described above; they are related to the elements ε_12, ε_22,c, ε_21,a, ε_21,b, ε_22,a, ε_22,b, ε_10 of the dielectric permeability tensor, respectively, where ε_21=ε_21,a+ε_21,b, ε_22=ε_22,a+ε_22,b+ε_22,c and ε_11=0 for the waves propagating parallel to the external field. The tensor ε_21,a comes from μ_e∇_r^αδ B^α·∇_p^βf_0 in equation (<ref>). The tensor ε_21,b comes from the torque-like term in equation (<ref>): -(2μ_e/ħ)S_0z(r,p,t)e_z×δB(r,t). The tensors ε_22,a and ε_22,b appear from q_e(δE·∇_p)S_0x and q_e(δE·∇_p)S_0y in equation (<ref>). The tensor ε_22,a comes from q_e(δE·v)∂_εS_0x and q_e(δE·v)∂_εS_0y. The tensor ε_22,b comes from (e_z×S_0)((v×δE)·e_z)/mv_⊥^2. The tensor ε_22,c appears from (q_e/c)((v×δB)·∇_p)S_0. All ± and ∓ presented above describe parts of terms containing ω+Ω_a or ω-Ω_a, where a=e, μ. The dielectric permeability tensor (<ref>) contains a superposition of these terms. The dispersion equation splits into three equations: one equation for the longitudinal waves, ε_zz=0, which is discussed in several papers <cit.>, <cit.>, and two equations for the transverse waves, k_z^2c^2/ω^2=ϵ±Ξ, where ϵ+Ξ=2k_z^2(γ_--δ_-)-8π/ω(α_-+β_-) +2κ_--2χ_- -∑_s3/4ω_Ls^2/ωG_-, and ϵ-Ξ=2k_z^2(γ_++δ_+)+8π/ω(α_++β_+) -2κ_++2χ_+ -∑_s3/4ω_Ls^2/ωG_+. The coefficient ± in equation (<ref>) appears as the solution of a quadratic equation. Hence, this coefficient is independent of all the ± and ∓ presented above. In equation (<ref>), ± corresponds to transverse waves with the different circular polarizations δ E_x=±δ E_y.
Here and below, ± and ∓ are produced by the ± in equation (<ref>). The above leads to the following structure of the function Σ_∓ introduced in equation (<ref>): Σ_∓=2k_z^2(γ_∓∓δ_∓)∓8π/ω(α_∓+β_∓) ±2κ_∓∓2χ_∓. § APPENDIX C: APPROXIMATE FORM OF THE FUNCTIONS α_±, β_±, γ_±, δ_± Consider the small-frequency and small-wave-vector limit, where k_zv_Fs/|ω±Ω_e|≪1, and the approximate forms of the functions α_±, β_±, γ_±, δ_±, which are elements of the dispersion equation (<ref>): ε_10=1 -∑_s=↑, ↓ω_Ls^2/ω k_zv_Fs[k_zv_Fs/ω±|Ω_e| +1/5(k_zv_Fs/ω±|Ω_e|)^3] ≈ 1-ω_Le^2/ω1/ω±|Ω_e|, ε_21,a=2k_z^2γ_∓= -ω_Ls^2/ω^2ħ^2k_z^4/m^21/(ω±|Ω_μ|)^2, ε_21,b=∓ 2k_z^2δ_∓ =∑_s=↑, ↓(-1)^i_sω_Ls^2/ω^2ħ k_z/2mv_Fs[k_zv_Fs/ω±|Ω_μ| +1/5(k_zv_Fs/ω±|Ω_μ|)^3] ≈ω_L↑^2-ω_L↓^2/ω^2ħ k_z^2/2m1/ω±|Ω_μ|, ε_12=∓8π/ωα_∓ =∓3π(6π^2)^1/3/16ω_Le^2/ω^2ħ^2k_z^3/4m^2n_0↑^4/3-n_0↓^4/3/n_0e(ω±|Ω_e|)^2 , ε_22,c= ∓8π/ωβ_∓ =∓3π(6π^2)^1/3/16ω_Le^2/ω^2ħ^2k_z^3/4m^2n_0↑^4/3-n_0↓^4/3/n_0e(ω±|Ω_μ|)^2 , at ω±Ω_μ>0 and ω+Ω_μ<0; the elements ε_12 and ε_22,c presented by equations (<ref>) and (<ref>) have the same form, but differ by the cyclotron frequencies entering their expressions; ε_22,a=± 2κ_∓= ± 2q_eμ_ek_zc/ω1/k_zm^2/8ħ^3∑_s(-1)^i_sv_Fs×[k_zv_Fs/ω±|Ω_μ| +1/4k_z^3v_Fs^3/(ω±|Ω_μ|)^3] ≈±(6π^2)^2/3/32πω_Le^2/ω (ω±|Ω_μ|)n_0↑^2/3-n_0↓^2/3/n_0ek_z. If ω±|Ω_μ|>0, the functions χ_∓ give the following approximations: ε_22,b=∓ 2χ_∓=∓ 2 q_eμ_ek_zc/ω1/k_zm^2/4ħ^3∑_s(-1)^i_sv_Fs×k_zv_Fs/ω±|Ω_μ|(1+1/4k_z^2v_Fs^2/(ω±|Ω_μ|)^2) ≈∓(6π^2)^2/3/16πω_Le^2/ω (ω±|Ω_μ|)n_0↑^2/3-n_0↓^2/3/n_0ek_z≈∓4κ_∓. If ω-|Ω_μ|<0, the function χ_+ gives the following approximation: ε_22,b=2χ_+= k_zc/ωq_eμ_e/k_z^2m^2/ħ^3×∑_s(-1)^i_s(2(ω-|Ω_μ|)-1/2k_z^2v_Fs^2/ω-|Ω_μ|)≈0. The functions α_±, β_± and γ_± give no contribution to equation (<ref>). The functions δ_± give the third term on the right-hand side. The functions κ_± and χ_± lead to the last term. An earlier result can be found, for instance, in Ref. <cit.>. The fourth term is an extra term in comparison with earlier papers. § APPENDIX D: DIMENSIONLESS FORM OF THE DISPERSION EQUATION FOR TRANSVERSE WAVES Dimensionless variables are used for the numerical analysis of the obtained results: ξ=ω/ω_Le, κ=k_zc/ω_Le, f=|Ω_e|/ω_Le, g=1.00116. Equation (<ref>) takes the following form in these dimensionless variables: ξ^2-κ^2-ξ/ξ± f-ηΥκ^21/ξ± g f ±(3π^2)^2/3/32π((1+η)^2/3-(1-η)^2/3)κ/Rξ/ξ± g f=0 for ξ± f>0, where R=n_0e^1/3c/ω_Le and Υ=ħω_Le/mc^2; for n_0e=10^27 cm^-3 we find R≈16.7 and Υ≈2.4×10^-3. The signs in equation (<ref>) correspond to the circular polarization of the plane wave δ E_x=±δ E_y. If ξ-f<0, equation (<ref>) changes to ξ^2-κ^2-ξ/ξ- f-ηΥκ^21/ξ- g f +(3π^2)^2/3/32π((1+η)^2/3-(1-η)^2/3) κ/Rξ/ξ- g f=0.
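As an illustration of this numerical analysis (ours, not from the paper; the parameter values f = 0.5 and η = 0.5 are chosen for the example), the dimensionless dispersion equation for the left-hand polarized branch can be solved by simple bisection in pure Python:

import math

R, Upsilon, g = 16.7, 2.4e-3, 1.00116   # values for n_0e = 10^27 cm^-3, as above

def D(xi, kappa, f=0.5, eta=0.5, sign=+1):
    # sign=+1 corresponds to delta E_x = +delta E_y (left-hand circular polarization)
    tau = (1 + eta)**(2.0/3.0) - (1 - eta)**(2.0/3.0)
    spin = sign * (3 * math.pi**2)**(2.0/3.0) / (32 * math.pi) \
           * tau * (kappa / R) * xi / (xi + sign * g * f)
    return xi**2 - kappa**2 - xi / (xi + sign * f) \
           - eta * Upsilon * kappa**2 / (xi + sign * g * f) + spin

def solve(kappa, lo=0.6, hi=5.0, **kw):
    # plain bisection; the bracket [lo, hi] contains one sign change of D
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if D(lo, kappa, **kw) * D(mid, kappa, **kw) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

for kappa in (0.5, 1.0, 2.0):
    print(kappa, solve(kappa))   # dimensionless frequency xi of the EM branch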
§ REFERENCES
[Dyson PR 55] F. J. Dyson, Phys. Rev. 98, 349 (1955).
[MaksimovTMP 2001] L. S. Kuz'menkov, S. G. Maksimov, V. V. Fedoseev, Theor. Math. Phys. 126, 110 (2001).
[MaksimovTMP 2001 b] L. S. Kuz'menkov, S. G. Maksimov, and V. V. Fedoseev, Theor. Math. Phys. 126, 212 (2001).
[Brodin PRL 08 Cl Reg] G. Brodin, M. Marklund, and G. Manfredi, Phys. Rev. Lett. 100, 175001 (2008).
[Brodin PRL 08 g Kin] G. Brodin, M. Marklund, J. Zamanian, A. Ericsson, and P. L. Mana, Phys. Rev. Lett. 101, 245002 (2008).
[Andreev PoP kinetics 17 a] P. A. Andreev, Phys. Plasmas 24, 022114 (2017).
[Andreev PoP kinetics 17 b] P. A. Andreev, Phys. Plasmas 24, 022115 (2017).
[Dodin PRA 15 First-principle] D. E. Ruiz, I. Y. Dodin, Phys. Rev. A 92, 043805 (2015).
[Dodin PRA 15 Relativistic] D. E. Ruiz, C. L. Ellison, and I. Y. Dodin, Phys. Rev. A 92, 062124 (2015).
[Dodin PRA 17 Ponderomotive] D. E. Ruiz, I. Y. Dodin, Phys. Rev. A 95, 032114 (2017).
[Dodin PoP 17] D. E. Ruiz, I. Y. Dodin, Phys. Plasmas 24, 055704 (2017).
[Koide PRC 13] T. Koide, Phys. Rev. C 87, 034902 (2013).
[Ekman 1702] R. Ekman, F. A. Asenjo, J. Zamanian, arXiv:1702.00722.
[Mahajan IJTP 14] S. M. Mahajan, F. A. Asenjo, Int. J. Theor. Phys. 54, 1435 (2014).
[Barth JPC 72] U. von Barth, L. Hedin, J. Phys. C 5, 1629 (1972).
[Rajagopal PRB 73] A. K. Rajagopal, J. Callaway, Phys. Rev. B 7, 1929 (1973).
[Bloch ZP 29] F. Bloch, Z. Phys. 57, 545 (1929).
[Jones RMP 15] R. O. Jones, Rev. Mod. Phys. 87, 897 (2015).
[Ryan PRB 91] J. C. Ryan, Phys. Rev. B 43, 4499 (1991).
[Agarwal PRL 11] A. Agarwal, M. Polini, R. Fazio, and G. Vignale, Phys. Rev. Lett. 107, 077004 (2011).
[Andreev PRE 15 SEAW] P. A. Andreev, Phys. Rev. E 91, 033111 (2015).
[Kasuya PTP 56] T. Kasuya, Progr. Theor. Phys. 16, 58 (1956).
[Andreev AoP 15 SEAW] P. A. Andreev, L. S. Kuz'menkov, Ann. Phys. 361, 278 (2015).
[Lundin PRE 10] J. Lundin and G. Brodin, Phys. Rev. E 82, 056407 (2010).
[Andreev PRE 16] P. A. Andreev, Z. Iqbal, Phys. Rev. E 93, 033209 (2016).
[Hussain PP 14 spin bernst] A. Hussain, M. Stefan, and G. Brodin, Phys. Plasmas 21, 032104 (2014).
[Takabayasi comb] T. Takabayasi, Progr. Theor. Phys. 14, 283 (1955); Prog. Theor. Phys. 13, 222 (1955); Phys. Rev. 102, 297 (1956); Nuovo Cimento 3, 233 (1956).
[Andreev kinetics 12] P. A. Andreev, arXiv:1212.0099.
[Andreev Phys A 15] P. A. Andreev, Physica A 432, 108 (2015).
[Torrey PR 57] H. C. Torrey, Phys. Rev. 104, 563 (1957).
[Andreev 1510 Spin Current] P. A. Andreev, L. S. Kuz'menkov, arXiv:1510.03468.
[Shukla UFN 10] P. K. Shukla, B. Eliasson, Phys. Usp. 53, 51 (2010).
[Shukla RMP 11] P. K. Shukla, B. Eliasson, Rev. Mod. Phys. 83, 885 (2011).
[Uzdensky RPP 14] D. A. Uzdensky, S. Rightley, Rep. Progr. Phys. 77, 036902 (2014).
[Felderhof PRE 00] B. U. Felderhof, Phys. Rev. E 62, 3848 (2000).
[Andreev PoP 16 sep kin] P. A. Andreev, Phys. Plasmas 23, 062103 (2016).
[Marklund PRL07] M. Marklund and G. Brodin, Phys. Rev. Lett. 98, 025001 (2007).
[Oraevsky AP 02] V. N. Oraevsky, V. B. Semikoz, Astroparticle Physics 18, 261 (2002).
[Oraevsky PAN] V. N. Oraevsky, V. B. Semikoz, Phys. At. Nucl. 66, 466 (2003).
[Andreev VestnMSU 2007] P. A. Andreev, L. S. Kuz'menkov, Moscow University Physics Bulletin 62, N.5, 271 (2007).
[Mahajan PoP 16] S. M. Mahajan, F. A. Asenjo, Phys. Plasmas 23, 056301 (2016).
[Asenjo PL A 09] F. A. Asenjo, Phys. Lett. A 373, 4460 (2009).
[Zhu PPCF 12] J. Zhu and P. Ji, Plasma Phys. Control. Fusion 54, 065004 (2012).
[Misra JPP 10] A. P. Misra, G. Brodin, M. Marklund, and P. K. Shukla, J. Plasma Physics 76, 857 (2010).
[Andreev PoP 17 extr SEAWs] P. A. Andreev, Phys. Plasmas 24, 022123 (2017).
[Andreev PoP 17 2D] P. A. Andreev, Phys. Plasmas 24, 022106 (2017).
[Zamanian PoP 10] J. Zamanian, M. Stefan, M. Marklund, and G. Brodin, Phys. Plasmas 17, 102109 (2010).
[Asenjo PLA 12] F. A. Asenjo, Phys. Lett. A 376, 2496 (2012).
[Yoshida JPA 16] Z. Yoshida, S. M. Mahajan, J. Phys. A: Math. Theor. 49, 055501 (2016).
[Mahajan PRL 11] S. M. Mahajan, F. A. Asenjo, Phys. Rev. Lett. 107, 195003 (2011).
[Andreev PP 15 Positrons] P. A. Andreev, Phys. Plasmas 22, 062113 (2015).
[Braun PRL 12] S. Braun, F. A. Asenjo, and S. M. Mahajan, Phys. Rev. Lett. 109, 175003 (2012).
[Brodin PRL 10 SPF] G. Brodin, A. P. Misra, and M. Marklund, Phys. Rev. Lett. 105, 105004 (2010).
[Andreev EPL 16] P. A. Andreev, L. S. Kuz'menkov, EPL 113, 17001 (2016).
[Andreev APL 16] P. A. Andreev and L. S. Kuz'menkov, Appl. Phys. Lett. 108, 191605 (2016).
[Andreev_Iqbal PoP 16] Z. Iqbal, P. A. Andreev, Phys. Plasmas 23, 062320 (2016).
[Hurst EPJD 14] J. Hurst, O. Morandi, G. Manfredi, and P.-A. Hervieux, Eur. Phys. J. D 68, 176 (2014).
[Rukhadze book 84] A. F. Aleksandrov, L. S. Bogdankevich, A. A. Rukhadze, Principles of Plasma Electrodynamics (Springer-Verlag, Berlin, New York, 1984).
http://arxiv.org/abs/1705.09738v1
{ "authors": [ "Pavel A. Andreev", "L. S. Kuz'menkov" ], "categories": [ "physics.plasm-ph" ], "primary_category": "physics.plasm-ph", "published": "20170526232621", "title": "Dielectric permeability tensor and linear waves in spin-1/2 quantum kinetics with non-trivial equilibrium spin-distribution functions" }
http://arxiv.org/abs/1705.09592v1
{ "authors": [ "J. G. G. de Oliveira Jr.", "Gustavo de Souza", "L. A. Cabral", "I. G. da Paz", "Marcos Sampaio" ], "categories": [ "quant-ph", "physics.atom-ph" ], "primary_category": "quant-ph", "published": "20170526142120", "title": "Exotic looped trajectories via quantum marking" }
Blowup constructions for Lie groupoids and a Boutet de Monvel type calculus [The authors were partially supported by ANR-14-CE25-0012-01 (SINGSTAR). AMS subject classification: Primary 58H05, 19K56. Secondary 58B34, 22A22, 46L80, 19K35, 47L80.] by Claire Debord and Georges Skandalis. Université Clermont Auvergne, LMBP, UMR 6620 - CNRS, Campus des Cézeaux, 3 Place Vasarely, TSA 60026, CS 60026, 63178 Aubière cedex, France, [email protected]. Université Paris Diderot, Sorbonne Paris Cité, Sorbonne Universités, UPMC Paris 06, CNRS, IMJ-PRG, UFR de Mathématiques, CP 7012 - Bâtiment Sophie Germain, 5 rue Thomas Mann, 75205 Paris CEDEX 13, France, [email protected]. We present natural and general ways of building Lie groupoids, by using the classical procedures of blowups and of deformations to the normal cone. Our constructions are seen to recover many known ones involved in index theory. The deformation and blowup groupoids obtained give rise to several extensions of C^*-algebras and to full index problems. We compute the corresponding K-theory maps. Finally, the blowup of a manifold sitting in a transverse way in the space of objects of a Lie groupoid leads to a calculus, quite similar to the Boutet de Monvel calculus for manifolds with boundary. § INTRODUCTION Let G⇉ M be a Lie groupoid. The Lie groupoid G comes with its natural family of elliptic pseudodifferential operators. For example: * if the groupoid G is just the pair groupoid M× M, the associated calculus is the ordinary (pseudo)differential calculus on M; * if the groupoid G is a family groupoid M×_B M associated with a fibration p:M→ B, the associated (pseudo)differential operators are families of operators acting on the fibers of p (those of <cit.>); * if the groupoid G is the holonomy groupoid of a foliation, the associated (pseudo)differential operators are longitudinal operators as defined by Connes in <cit.>; * if the groupoid G is the monodromy groupoid, the groupoid of homotopy classes (with fixed endpoints) of paths in a (compact) manifold M, the associated (pseudo)differential operators are the π_1(M)-invariant operators on the universal cover of M... The groupoid G therefore defines a class of partial differential equations. Our study will focus here on the corresponding index problems on M. The index takes place naturally in the K-theory of the C^*-algebra of G. Let then V be a submanifold of M. We will consider V as bringing a singularity into the problem: it forces operators of G to “slow down” near V, at least in the normal directions. Inside V, they should only propagate along a sub-Lie-groupoid Γ⇉ V of G. This behavior is very nicely encoded by a groupoid SBlup_r,s(G,Γ) obtained by using a blowup construction for the inclusion Γ→ G. The blowup construction and the deformation to the normal cone are well-known constructions in algebraic geometry as well as in differential geometry. Let X be a submanifold of a manifold Y. Denote by N_X^Y the normal bundle. * The deformation to the normal cone of X in Y is a smooth manifold DNC(Y,X) obtained by naturally gluing N_X^Y×{0} with Y×ℝ^*. * The blowup of X in Y is a smooth manifold Blup(Y,X) where X is inflated to the projective space ℙN_X^Y. It is obtained by gluing Y∖ X with ℙN_X^Y in a natural way. We will mainly consider its variant the spherical blowup SBlup(Y,X) (which is a manifold with boundary), in which the sphere bundle 𝕊N_X^Y replaces the projective bundle ℙN_X^Y.
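For the reader's convenience, here is the standard description of the smooth structure of the deformation to the normal cone (a well-known construction, recalled informally; the actual construction is given in section 4). Once an exponential map θ identifying N_X^Y with a tubular neighborhood of X in Y is chosen, one glues via

\[
\mathrm{DNC}(Y,X)=N_X^Y\times\{0\}\ \sqcup\ Y\times\mathbb{R}^{*},
\qquad
\Theta(x,\xi,t)=\big(\theta(x,t\xi),\,t\big)\quad(t\neq 0),
\]

requiring Θ to be a diffeomorphism onto its image near t=0. Connes' tangent groupoid, discussed just below, is the special case

\[
\mathrm{DNC}(V\times V,V)=TV\times\{0\}\ \sqcup\ V\times V\times\mathbb{R}^{*}.
\]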
The first use of deformation groupoids in connection with index theory appears in <cit.>. A. Connes showed there that the analytic index on a compact manifold V can be described using a groupoid, called the “tangent groupoid”. This groupoid is obtained as a deformation to the normal cone of the diagonal inclusion of V into the pair groupoid V× V. Since Connes' construction, deformation groupoids have been used by many authors in various contexts. * This idea of Connes was extended in <cit.> by considering the same construction of a deformation to the normal cone for smooth immersions which are groupoid morphisms. The groupoid obtained was used in order to define the wrong-way functoriality for immersions of foliations (<cit.>). An analogous construction for submersions of foliations was also given in a remark (<cit.>). * In <cit.> Monthubert-Pierrot and Nistor-Weinstein-Xu considered the deformation to the normal cone of the inclusion G^(0)→ G of the space of units of a smooth groupoid G. This generalization of Connes' tangent groupoid was called the adiabatic groupoid of G and denoted by G_ad. It was shown that this adiabatic groupoid still encodes the analytic index associated with G. * Many other important articles use this idea of deformation groupoids. We will briefly discuss some of them in the sequel of the paper. It is certainly beyond the scope of the present paper to review them all... Let us briefly present the objectives of our paper. §.§ The groupoids DNC(G,Γ) and SBlup_r,s(G,Γ). In the present paper, we give a systematic construction of deformations to the normal cone and define the blowup deformations of groupoids. More precisely, we use the functoriality of these two constructions and note that any smooth subgroupoid Γ⇉ V of a Lie groupoid G⇉ M gives rise to a deformation to the normal cone Lie groupoid DNC(G,Γ)⇉ DNC(M,V) and to a blowup Lie groupoid Blup_r,s(G,Γ)⇉ Blup(M,V), as well as its variant the spherical blowup Lie groupoid SBlup_r,s(G,Γ)⇉ SBlup(M,V). §.§ Connecting maps and index maps These groupoids give rise to connecting maps and to index problems that will be the main object of our study here. Connecting maps. The (restriction DNC_+(G,Γ) to ℝ_+ of the) deformation groupoid DNC(G,Γ) is the disjoint union of an open subgroupoid G×ℝ_+^* and a closed subgroupoid 𝒩_Γ^G×{0}. The blowup groupoid SBlup_r,s(G,Γ) is the disjoint union of an open subgroupoid, which is the restriction G_M∖ V^M∖ V of G to M∖ V, and a boundary, which is a groupoid 𝕊𝒩_Γ^G fibered over Γ, the sphere bundle of a VB-groupoid (in the sense of Pradines <cit.>). This decomposition gives rise to exact sequences of C^*-algebras that we wish to “compute”: 0⟶ C^*(G_M∖ V^M∖ V)⟶ C^*(SBlup_r,s(G,Γ))⟶ C^*(𝕊𝒩_Γ^G)⟶ 0 (E^∂_SBlup) and 0⟶ C^*(G×ℝ_+^*)⟶ C^*(DNC_+(G,Γ))⟶ C^*(𝒩_Γ^G)⟶ 0 (E^∂_DNC_+) Full index maps. Denote by Ψ^*(DNC_+(G,Γ)) and Ψ^*(SBlup_r,s(G,Γ)) the C^*-algebras of order 0 pseudodifferential operators on the Lie groupoids DNC_+(G,Γ) and SBlup_r,s(G,Γ) respectively. The above decomposition of groupoids gives rise to extensions of groupoid C^*-algebras of pseudodifferential type 0⟶ C^*(G_M∖ V^M∖ V)⟶Ψ^*(SBlup_r,s(G,Γ)) σ_full⟶ Σ_SBlup(G,Γ)⟶ 0 (E^ind_SBlup) and 0⟶ C^*(G×ℝ_+^*)⟶Ψ^*(DNC_+(G,Γ)) σ_full⟶ Σ_DNC_+(G,Γ)⟶ 0 (E^ind_DNC_+) where Σ_DNC_+(G,Γ) and Σ_SBlup(G,Γ) are called the full symbol algebras, and the morphisms σ_full the full symbol maps. The full symbol maps. The full symbol algebras are naturally fibered products: Σ_SBlup(G,Γ)=C(𝕊^*SBlup_r,s(G,Γ))×_C(𝕊^*𝕊𝒩_Γ^G)Ψ^*(𝕊𝒩_Γ^G) and Σ_DNC_+(G,Γ)=C(𝕊^*DNC_+(G,Γ))×_C(𝕊^*𝒩_Γ^G)Ψ^*(𝒩_Γ^G). Thus, the full symbol maps have two components: * The usual commutative symbol of the groupoid.
They are morphisms Ψ^*(DNC_+(G,Γ))→ C(𝕊^*DNC_+(G,Γ)) and Ψ^*(SBlup_r,s(G,Γ))→ C(𝕊^*SBlup_r,s(G,Γ)). The commutative symbol takes its values in the algebra of continuous functions on the sphere bundle of the algebroid of the Lie groupoids (with boundary) DNC_+(G,Γ) and SBlup_r,s(G,Γ). * The restriction to the boundary: σ_∂:Ψ^*(SBlup_r,s(G,Γ))→Ψ^*(𝕊𝒩_Γ^G) and σ_∂:Ψ^*(DNC_+(G,Γ))→Ψ^*(𝒩_Γ^G). Associated KK-elements. Assume that the groupoid Γ is amenable. Then the groupoids 𝒩_Γ^G and 𝕊𝒩_Γ^G are also amenable, and the exact sequences (E^∂_SBlup) and (E^∂_DNC_+) give rise to connecting elements ∂_SBlup^G,Γ∈ KK^1(C^*(𝕊𝒩_Γ^G),C^*(G_M∖ V^M∖ V)) and ∂_DNC_+^G,Γ∈ KK^1(C^*(𝒩_Γ^G),C^*(G×ℝ_+^*)) (<cit.>). Also, the full symbol C^*-algebras Σ_SBlup(G,Γ) and Σ_DNC_+(G,Γ) are nuclear, and we also get KK-elements ind_SBlup^G,Γ∈ KK^1(Σ_SBlup(G,Γ),C^*(G_M∖ V^M∖ V)) and ind_DNC_+^G,Γ∈ KK^1(Σ_DNC_+(G,Γ),C^*(G×ℝ_+^*)). If Γ is not amenable, these constructions can be carried over in E-theory (of maximal groupoid C^*-algebras). Connes-Thom elements. We will establish the following facts. * There is a natural Connes-Thom element β∈ KK^1(C^*(SBlup_r,s(G,Γ)),C^*(DNC_+(G,Γ))). This element restricts to very natural elements β'∈ KK^1(C^*(G_M∖ V^M∖ V),C^*(G×ℝ_+^*)) and β''∈ KK^1(C^*(𝕊𝒩_Γ^G),C^*(𝒩_Γ^G)). These elements extend to elements β_Ψ∈ KK^1(Ψ^*(SBlup_r,s(G,Γ)),Ψ^*(DNC_+(G,Γ))) and β_Σ∈ KK^1(Σ_SBlup(G,Γ),Σ_DNC_+(G,Γ)). We have ∂_SBlup^G,Γ⊗β'=-β''⊗∂_DNC_+^G,Γ (facts <ref> and <ref>) and ind_SBlup^G,Γ⊗β'=-β_Σ⊗ ind_DNC_+^G,Γ (fact <ref>). * If M∖ V meets all the orbits of G, then β' is KK-invertible. Therefore, in that case, ∂_DNC_+^G,Γ determines ∂_SBlup^G,Γ and ind_DNC_+^G,Γ determines ind_SBlup^G,Γ. * We will say that V is G-small if the transverse action of G on V is nowhere 0, i.e. if for every x∈ V, the image by the anchor of the algebroid 𝔄G of G is not contained in T_xV (definition <ref>). In that case, β', β'', β_Σ are KK-invertible: the connecting elements ∂_DNC_+^G,Γ and ∂_SBlup^G,Γ determine each other, and the full index maps ind_DNC_+^G,Γ and ind_SBlup^G,Γ determine each other. Computation. If Γ=V, then C^*(𝒩_V^G) is KK-equivalent to C_0(N_V^G) using a Connes-Thom isomorphism, and the element ∂_DNC_+^G,Γ is the Kasparov product of the inclusion of N_V^G in the algebroid 𝔄G=N_M^G of G (using a tubular neighborhood) and the index element ind_G∈ KK(C_0(𝔄^*G),C^*(G)) of the groupoid G (prop. <ref>.<ref>). Of course, if V is G-small, we obtain the analogous result for ∂_SBlup^G,Γ. The computation of the corresponding full index is also obtained in the same way in prop. <ref>.<ref>. Full index and relative K-theory. Assume that Γ is just a G-small submanifold V of M. We will actually obtain a finer construction by using relative K-theory. It is a general fact that relative K-theory gives more precise index theorems than connecting maps (<cit.>). In particular, the relative K-theory point of view allows one to take into account symbols from a vector bundle to another one. Let ψ:C_0(SBlup(M,V))→Ψ^*(SBlup_r,s(G,Γ)) be the natural inclusion and consider the morphism μ_SBlup=σ_full∘ψ:C_0(SBlup(M,V))→Σ_SBlup(G,V). The relative index theorem computes the map ind_rel:K_*(μ_SBlup)→ K_*(C^*(G_M∖ V^M∖ V)): * the relative K-group K_*(μ_SBlup) is canonically isomorphic to K_*(C_0(𝔄^*G_M∖ V^M∖ V)); * under this isomorphism ind_rel identifies with the index map of the groupoid G_M∖ V^M∖ V. We prove an analogous result for the morphism μ_DNC_+:C_0(DNC_+(M,V))→Σ_DNC_+(G,V). In fact, most of the computations involved come from a quite more general situation studied in section <ref>. There we consider a groupoid G and a partition of G^(0) into an open and a closed saturated subset and study the connecting elements of the associated exact sequences. §.§ A Boutet de Monvel type calculus. Let H be a Lie groupoid.
In <cit.>, extending ideas of Aastrup, Melo, Monthubert and Schrohe <cit.>, we studied the gauge adiabatic groupoid H_ga: the crossed product of the adiabatic groupoid of H by the natural action of ℝ_+^*. We constructed a bimodule ℰ_H giving a Morita equivalence between the algebra of order 0 pseudodifferential operators on H and a natural ideal in the convolution C^*-algebra C^*(H_ga) of this gauge adiabatic groupoid. The gauge adiabatic groupoid H_ga is in fact a blowup groupoid, namely SBlup_r,s(H×(ℝ×ℝ),H^(0)) (restricted to the clopen subset H^(0)×ℝ_+ of SBlup(H^(0)×ℝ,H^(0))=H^(0)×(ℝ_-⊔ℝ_+)). Let now G⇉ M be a Lie groupoid and let V be a submanifold of M which is transverse to the action of G (see def. <ref>). We construct a bimodule: it is a C^*(SBlup_r,s(G,V)), Ψ^*(G_V^V)-bimodule ℰ(G,V), which is a full Ψ^*(G_V^V) Hilbert module. When G is the direct product of G_V^V with the pair groupoid ℝ×ℝ, the bimodule ℰ(G,V) coincides with ℰ_G_V^V constructed in <cit.>. In the general case, thanks to a convenient (spherical) blowup construction, we construct a linking space between the groupoids SBlup_r,s(G,V) and (G_V^V)_ga=SBlup_r,s(G_V^V×(ℝ×ℝ),V). This linking space defines a C^*(SBlup_r,s(G,V)), C^*((G_V^V)_ga)-bimodule ℱ(G,V), which is a Morita equivalence of groupoids when V meets all the orbits of G. The bimodule ℰ(G,V) is then the composition of ℰ_G_V^V with ℱ(G,V). Denote by Ψ^*_BM(G;V) the Boutet de Monvel type algebra consisting of matrices R=[ Φ P; T Q ] with Φ∈Ψ^*(SBlup_r,s(G,V)), P∈ℰ(G,V), T∈ℰ^*(G,V) and Q∈Ψ^*(G_V^V), and C^*_BM(G;V) its ideal, where Φ∈ C^*(SBlup_r,s(G,V)). This algebra has obvious similarities with the ones involved in the Boutet de Monvel calculus for manifolds with boundary <cit.>. We will examine its relationship with these algebras in a forthcoming paper. We still have two natural symbol maps: the classical symbol σ_c:Ψ^*_BM(G,V)→ C(𝕊^*SBlup_r,s(G,V)) given by σ_c[ Φ P; T Q ]=σ_c(Φ), and the boundary symbol r_V, which is the restriction to the boundary. We have an exact sequence: 0→ C^*(G_(M∖ V)⊔ V^(M∖ V)⊔ V)→Ψ^*_BM(G;V) σ_BM⟶ Σ_BM(G,V) → 0 where Σ_BM(G,V)=Ψ^*_BM(G;V)/C^*(G_(M∖ V)⊔ V^(M∖ V)⊔ V) and σ_BM is defined using both σ_c and r_V. We may note that Ψ^*(SBlup_r,s(G,V)) identifies with the full hereditary subalgebra of Ψ^*_BM(G,V) consisting of elements of the form [ Φ 0; 0 0 ]. We thus obtain Boutet de Monvel type index theorems for the connecting map of this exact sequence, as well as for the corresponding relative K-theory. The paper is organized as follows: * In section 2 we recall some classical facts, constructions and notation involving groupoids. * Section 3 is devoted to the description and computation of various KK-elements associated with groupoid C^*-algebras. The first and second type are encountered in the situation where a Lie groupoid G can be cut in two pieces G=G|_W⊔ G|_F, where W is an open saturated subset of the units G^(0) and F=G^(0)∖ W. They are respectively built from exact sequences of C^*-algebras of the form: 0⟶ C^*(G_W)⟶ C^*(G)⟶ C^*(G_F)⟶ 0 (E_∂) and 0⟶ C^*(G_W)⟶Ψ^*(G)⟶Σ^W(G)⟶0 (E_ind_full) The other KK-elements are Connes-Thom type elements arising when an ℝ_+^*-action is involved in those situations. * In section 4 we review two geometric constructions: deformation to the normal cone and blowup, and their functorial properties. * In section 5, using this functoriality, we study deformation to the normal cone and blowup in the Lie groupoid context.
We present examples which recover groupoids constructed previously by several authors. * In section 6, applying the results obtained in section 3, we compute the connecting maps and index maps of the groupoids constructed in section 5. * In section 7, we describe the above mentioned Boutet de Monvel type calculus. * Finally, in the appendix, we make a few remarks on VB-groupoids. In particular, we give a presentation of the dual E^* of a VB-groupoid E and show that C^*(E) and C^*(E^*) are isomorphic. * Our constructions involve a large amount of notation, which we tried to choose as coherently as possible. We found it however helpful to list several items in an index at the end of the paper. We will use the following notation: * If E is a real vector bundle over a manifold (or over a locally compact space) M, the corresponding projective bundle ℙ(E) is the bundle over M whose fiber over a point x of M is the projective space ℙ(E_x). The bundle ℙ(E) is simply the quotient of E∖ M by the natural action of ℝ^* by dilation. The quotient of E∖ M under the action of ℝ_+^* by dilation is the (total space of the) sphere bundle 𝕊(E). * If E is a real vector bundle over a manifold (or a locally compact space) M, we will denote by B^*E, B̄^*E and 𝕊^*E the total spaces of the fiber bundles of open balls, closed balls and spheres of the dual vector bundle E^* of E. If F⊂ M is a closed subset of M, we will denote by B̄^*_FE the quotient of B̄^*E where we identify two points (x,ξ) and (x,η) for x∈ F, and by 𝕊^*_FE the image of 𝕊^*E in B̄^*_FE. Acknowledgements. We would like to thank Vito Zenobi for his careful reading and for pointing out quite a few typos in an earlier version of the manuscript. [B, 02] ℙ(E), 𝕊(E): the projective and sphere bundles associated to a real vector bundle E over M, whose fibers over x∈ M are respectively the projective space ℙ(E_x) and the sphere 𝕊(E_x) [B, 04] B^*E, B̄^*E, 𝕊^*E: the total spaces of the fiber bundles of open balls, closed balls and spheres of the dual vector bundle E^* of E [B, 05] B̄^*_FE: the quotient of B̄^*E where we identify two points (x,ξ) and (x,η) for x∈ F, F being a closed subset of the zero section M of the bundle E [B, 06] 𝕊^*_FE: the image of 𝕊^*E in B̄^*_FE § SOME QUITE CLASSICAL CONSTRUCTIONS INVOLVING GROUPOIDS §.§ Some classical notation Let G be a Lie groupoid. We denote by G^(0) its space of objects and by r,s:G→ G^(0) the range and source maps. [G, 01] G r,s⇉ G^(0): a Lie groupoid with source s, range r and space of units G^(0) The algebroid of G is denoted by 𝔄G, and its anchor by ♮:𝔄G→ TG^(0). Recall that (the total space of) 𝔄G is the normal bundle N_G^(0)^G and that the anchor map is induced by (dr-ds). [G, 02] 𝔄G: the Lie algebroid of the groupoid G We denote by 𝔄^*G the dual bundle of 𝔄G and by 𝕊^*G the sphere bundle of 𝔄^*G. * We denote by C^*(G) its (full or reduced) C^*-algebra. We denote by Ψ^*(G) the C^*-algebra of order ≤0 (classical, polyhomogeneous) pseudodifferential operators on G vanishing at infinity on G^(0) (if G^(0) is not compact). More precisely, it is the norm closure in the multiplier algebra of C^*(G) of the algebra of classical pseudodifferential operators on G with compact support in G.
[H, 01] C^*(G): the (either maximal or reduced) C^*-algebra of the groupoid G [H, 02] Ψ^*(G): the C^*-algebra of order ≤0 pseudodifferential operators on G vanishing at infinity on G^(0) We have an exact sequence of C^*-algebras 0→ C^*(G)→Ψ^*(G)→ C_0(𝕊^*G)→ 0. As mentioned in the introduction, our constructions involve connecting maps associated to short exact sequences of groupoid C^*-algebras; therefore they make sense a priori for the full C^*-algebras, and give rise to E-theory elements (<cit.>). Nevertheless, in many interesting situations, the quotient C^*-algebra will be the C^*-algebra of an amenable groupoid, so that the corresponding exact sequence is semi-split for the reduced as well as the full C^*-algebras, and it moreover defines a KK-element. In these situations C^*(G) may be either the reduced or the full C^*-algebra of the groupoid G, and we have preferred to leave the choice to the reader. * For any maps f:A→ G^(0) and g:B→ G^(0), define G^f={(a,x)∈ A× G; r(x)=f(a)}, G_g={(x,b)∈ G× B; s(x)=g(b)} and G^f_g={(a,x,b)∈ A× G× B; r(x)=f(a), s(x)=g(b)}. In particular for A,B⊂ G^(0), we put G^A={x∈ G; r(x)∈ A} and G_A={x∈ G; s(x)∈ A}; we also put G_A^B=G_A∩ G^B. Notice that A is a saturated subset of G^(0) if and only if G_A=G^A=G_A^A. [G, 05] G^f, G_g, G^f_g: if f:A→ G^(0) and g:B→ G^(0) are maps, G^f={(a,x)∈ A× G; r(x)=f(a)}, G_g={(x,b)∈ G× B; s(x)=g(b)} and G^f_g=G^f∩ G_g [G, 03] G^A, G_B, G^A_B: if A and B are subsets of G^(0), G^A={x∈ G; r(x)∈ A}, G_B={x∈ G; s(x)∈ B} and G_A^B=G_A∩ G^B * We denote by G_ad the adiabatic groupoid of G (<cit.>); it is obtained by using the deformation to the normal cone construction for the inclusion of G^(0) as a Lie subgroupoid of G (see sections <ref> and <ref> below for a complete description). Thus: G_ad=G×ℝ^* ∪ 𝔄G×{0}⇉ G^(0)×ℝ. If X is a locally closed saturated subset of M×ℝ, we will sometimes denote by G_ad(X) the restriction (G_ad)_X^X of G_ad to X: it is a locally compact groupoid. In the sequel of the paper, we let G_ad^[0,1]=G_ad(G^(0)×[0,1]) and G_ad^[0,1)=G_ad(G^(0)×[0,1)), that is, G_ad^[0,1]=G×(0,1] ∪ 𝔄G×{0}⇉ G^(0)×[0,1] and G_ad^[0,1)=G×(0,1) ∪ 𝔄G×{0}⇉ G^(0)×[0,1). [B] N_V^M: the normal bundle of a submanifold V of a manifold M [G, 07] G_ad, G_ad^[0,1], G_ad^[0,1): the adiabatic groupoid of G and its restrictions respectively to G^(0)×[0,1] and to G^(0)×[0,1) [G, 08] G_ad(X): the restriction of G_ad to a locally closed saturated subset X of G^(0)×[0,1] Many manifolds and groupoids that occur in our constructions have boundaries or corners. In fact all the groupoids we consider sit naturally inside Lie groupoids without boundaries as restrictions to closed saturated subsets. This means that we consider subgroupoids G_V^V=G_V of a Lie groupoid G r,s⇉ G^(0), where V is a closed subset of G^(0). Such groupoids have a natural algebroid, adiabatic deformation, pseudodifferential calculus, etc. that are restrictions to V and G_V of the corresponding objects on G^(0) and G. We chose to give our definitions and constructions for Lie groupoids for the clarity of the exposition. The case of a longitudinally smooth groupoid over a manifold with corners is a straightforward generalization using a convenient restriction. §.§ Transversality and Morita equivalence Let us recall the following definition (see <cit.> for details): Let G r,s⇉ M be a Lie groupoid with set of objects G^(0)=M. Let V be a manifold. A smooth map f:V→ M is said to be transverse to (the action of the groupoid) G if for every x∈ V, df_x(T_xV)+♮_f(x)(𝔄_f(x)G)=T_f(x)M.
An equivalent condition is that the map (γ,y)↦ r(γ), defined on the fibered product G_f=G s,f× V, is a submersion G_f→ M. A submanifold V of M is transverse to G if the inclusion V→ M is transverse to G - equivalently, if for every x∈ V, the composition q_x=p_x∘♮_x:𝔄_x G→ (N_V^M)_x=T_xM/T_xV is onto. Let V be a (locally) closed submanifold of M transverse to a groupoid G r,s⇉ M. Denote by N_V^M the (total space of the) normal bundle of V in M. Upon arguing locally, we can assume that V is compact. By the transversality assumption, the anchor ♮:𝔄G_|V→ TM_|V induces a surjective bundle morphism 𝔄G_|V→ N_V^M. Choosing a subbundle W' of the restriction 𝔄G_|V such that W'→ N_V^M is an isomorphism and using an exponential map, we thus obtain a submanifold W⊂ G such that r:W→ M is a diffeomorphism onto an open neighborhood of V in M and s is a submersion from W onto V. Replacing W by an open subspace, we may assume that r(W) is a tubular neighborhood of V in M, diffeomorphic to N_V^M. The map W×_VG_V^V×_VW→ G defined by (γ_1,γ_2,γ_3)↦γ_1∘γ_2∘γ_3^-1 is a diffeomorphism and a groupoid isomorphism from the pull back groupoid (see next section) (G_V^V)_s^s=W×_VG_V^V×_VW onto the open subgroupoid G_r(W)^r(W) of G. §.§.§ Pull back If f:V→ M is transverse to a Lie groupoid G r,s⇉ M, then the pull back groupoid G_f^f is naturally a Lie groupoid (a submanifold of V× G× V). If f_i:V_i→ M are transverse to G (for i=1,2), then we obtain a Lie groupoid G_f_1⊔ f_2^f_1⊔ f_2⇉ V_1⊔ V_2. The linking manifold G_f_2^f_1 is a clopen submanifold. We denote by C^*(G_f_2^f_1) the closure in C^*(G_f_1⊔ f_2^f_1⊔ f_2) of the space of functions (half densities) with support in G_f_2^f_1; it is a C^*(G_f_1^f_1)-C^*(G_f_2^f_2) bimodule. The bimodule C^*(G_f_2^f_1) is full if all the G-orbits meeting f_2(V_2) also meet f_1(V_1). §.§.§ Morita equivalence Two Lie groupoids G_1 r,s⇉ M_1 and G_2 r,s⇉ M_2 are Morita equivalent if there exist a groupoid G r,s⇉ M and smooth maps f_i:M_i→ M transverse to G such that the pull back groupoids G_f_i^f_i identify to G_i and f_i(M_i) meets all the orbits of G. Equivalently, a Morita equivalence is given by a linking manifold X with extra data: surjective smooth submersions r:X→ G_1^(0) and s:X→ G_2^(0), and compositions G_1×_s,rX→ X, X×_s,r G_2→ X, X×_r,r X→ G_2 and X×_s,s X→ G_1 with natural associativity conditions (see <cit.> for details). In the above situation, X is the manifold G_f_2^f_1 and the extra data are the range and source maps and the composition rules of the groupoid G_f_1⊔ f_2^f_1⊔ f_2⇉ M_1⊔ M_2 (see <cit.>). If the map r:X→ G_1^(0) is surjective but s:X→ G_2^(0) is not necessarily surjective, then G_1 is Morita equivalent to the restriction of G_2 to the open saturated subspace s(X). We say that G_1 is sub-Morita equivalent to G_2. §.§ Semi-direct products Action of a groupoid on a space. Recall that an action of a groupoid G r,s⇉ G^(0) on a space V is given by a map p:V→ G^(0) and the action G×_s,pV→ V, denoted by (g,x)↦ g.x, with the requirements p(g.x)=r(g), g.(h.x)=(gh).x and u.x=x if u=p(x). In that case, we may form the crossed product groupoid V⋊ G: * as a set, V⋊ G is the fibered product V×_p,r G; * the unit space (V⋊ G)^(0) is V; the range and source maps are r(x,g)=x and s(x,g)=g^-1.x; * the composition is given by (x,g)(y,h)=(x,gh) (with g.y=x). If G is a Lie groupoid, V is a manifold, all the maps defined are smooth and p is a submersion, then V⋊ G is a Lie groupoid.
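As an elementary illustration of this construction (a standard remark, not needed in the sequel): taking V=G^(0) and p=id, every Lie groupoid acts on its own space of units, and the crossed product recovers G itself:

\[
G^{(0)}\rtimes G=G^{(0)}\times_{\mathrm{id},r}G\ \xrightarrow{\ \simeq\ }\ G,\qquad (x,g)\mapsto g,
\]
\[
r(x,g)=x=r(g),\qquad s(x,g)=g^{-1}\!\cdot x=s(g),\qquad (x,g)(y,h)=(x,gh).
\]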
Action of a group on a groupoid. Let Γ be a Lie group acting on a Lie groupoid G r,s⇉ M by Lie groupoid automorphisms. The set G×Γ is naturally a Lie groupoid G⋊Γ r_⋊,s_⋊⇉ M: we put r_⋊(g,γ)=r(g), s_⋊(g,γ)=γ^-1(s(g)) and, when (g_1,γ_1) and (g_2,γ_2) are composable, their product is (g_1,γ_1)(g_2,γ_2)=(g_1γ_1(g_2),γ_1γ_2). Note that the semi-direct product groupoid G⋊Γ is canonically isomorphic to the quotient 𝒢/Γ of the product 𝒢=G×(Γ×Γ) of G by the pair groupoid Γ×Γ, where the Γ-action on 𝒢 is the diagonal one: γ·(g,γ_1,γ_2)=(γ(g),γ^-1γ_1,γ^-1γ_2). Free and proper action of a group on a groupoid. When the action of Γ on G (and therefore on its closed subset M=G^(0)) is free and proper, we may define the quotient groupoid G/Γ r,s⇉ M/Γ. In that case, the groupoid G/Γ acts on M and the groupoid G identifies with the action groupoid M⋊(G/Γ). Indeed, let p:M→ M/Γ and q:G→ G/Γ be the quotient maps. If x∈ M and h∈ G/Γ are such that s(h)=p(x), then there exists a unique g∈ G such that q(g)=h and s(g)=x; we then put h.x=r(g). It is then immediate that φ:G→ M×_p,r(G/Γ) given by φ(g)=(r(g),q(g)) is a groupoid isomorphism. The groupoid G/Γ is Morita equivalent to G⋊Γ: indeed one easily identifies G⋊Γ with the pull back groupoid (G/Γ)_q^q, where q:M→ M/Γ is the quotient map. Note also that in this situation the action of Γ on G leads to an action of Γ on the Lie algebroid 𝔄G, and 𝔄(G/Γ) identifies with 𝔄G/Γ. As the Lie groupoids we are considering need not be Hausdorff, the properness condition has to be relaxed. We will just assume that the action is locally proper, i.e. that every point in G has a Γ-invariant neighborhood on which the action of Γ is proper. Action of a groupoid on a groupoid. Recall that an action of a groupoid G r,s⇉ G^(0) on a groupoid H r_H,s_H⇉ H^(0) is by groupoid automorphisms (<cit.>) if G acts on H^(0) through a map p_0:H^(0)→ G^(0), we have p=p_0∘ r_H=p_0∘ s_H, and g.(xy)=(g.x)(g.y). In that case, we may form the crossed product groupoid 𝒦=H⋊ G: * as a set, H⋊ G is the fibered product H×_p,r G; * the unit space 𝒦^(0) of 𝒦=H⋊ G is H^(0); the range and source maps are r_𝒦(x,g)=r_H(x) and s_𝒦(x,g)=g^-1.s_H(x); * the composition is given by (x,g)(y,h)=(x(g.y),gh). If G and H are Lie groupoids and if all the maps defined are smooth and p is a submersion, then 𝒦=H⋊ G is a Lie groupoid. §.§ Index maps for Lie groupoids Recall (<cit.>) that if G is any Lie groupoid, the index map is an element ind_G in KK(C_0(𝔄^*G),C^*(G)) which can be constructed thanks to the adiabatic groupoid G_ad^[0,1] of G as ind_G=[ev_0]^-1⊗[ev_1], where ev_0:C^*(G_ad^[0,1])→ C^*(G_ad(0))≃ C_0(𝔄^*G) and ev_1:C^*(G_ad^[0,1])→ C^*(G_ad(1))≃ C^*(G) are the evaluation morphisms (recall that [ev_0] is invertible).
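As an illustration (standard, recalled here for convenience): for the pair groupoid G=V×V of a compact manifold V, one has 𝔄G=TV and C^*(G)≅𝒦(L^2(V)), so that the index element becomes the Atiyah-Singer analytic index:

\[
\mathrm{ind}_{V\times V}\in KK\big(C_0(T^{*}V),\mathcal{K}\big),
\qquad
K^{0}(T^{*}V)\xrightarrow{\ \cdot\,\otimes\,\mathrm{ind}_{V\times V}\ }K_{0}(\mathcal{K})\simeq\mathbb{Z},
\]

which is exactly Connes' description of the analytic index via the tangent groupoid recalled in the introduction.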
[I, 01] ind_G: the KK-element [ev_0]^-1⊗[ev_1], which belongs to KK(C_0(𝔄^*G),C^*(G)), associated to the deformation groupoid G_ad^[0,1]=G×(0,1] ∪ 𝔄G×{0}⇉ G^(0)×[0,1] [I, 02] ĩnd_G: the connecting element, which belongs to KK^1(C(𝕊^*G),C^*(G)), associated to the short exact sequence 0→ C^*(G)→Ψ^*(G)→ C(𝕊^*G)→ 0 It follows quite immediately that the element ĩnd_G∈ KK^1(C(𝕊^*G),C^*(G)) corresponding to the pseudodifferential exact sequence 0→ C^*(G)→Ψ^*(G)→ C(𝕊^*G)→ 0 (E_Ψ^*(G)) is the composition ĩnd_G=q_𝕊^*G⊗ ind_G, where q_𝕊^*G∈ KK^1(C(𝕊^*G),C_0(𝔄^*G)) corresponds to the pseudodifferential exact sequence for 𝔄G, which is 0→ C_0(𝔄^*G)→ C(B̄^*G)→ C(𝕊^*G)→ 0 (E_Ψ^*(𝔄G)) This connecting element is immediately seen to be the element of KK(C_0(𝕊^*G×ℝ_+^*),C_0(𝔄^*G)) associated to the inclusion of 𝕊^*G×ℝ_+^* as the open subset 𝔄^*G∖ G^(0) - where G^(0) sits in 𝔄^*G as the zero section. § REMARKS ON EXACT SEQUENCES, CONNES-THOM ELEMENTS, CONNECTING MAPS AND INDEX MAPS The first part of this section is a brief reminder of some quite classical facts about connecting elements associated to short exact sequences of C^*-algebras. The second part is crucial for our main results of section <ref>: given a Lie groupoid and an open saturated subset of its unit space, we consider connecting maps and full index maps, compare them, and compute them in some cases. In particular, we study a Fredholm realizability problem generalizing works of Albin and Melrose (<cit.>) and study index maps using relative K-theory. In the last part we study a proper action of ℝ_+^* on a Lie groupoid G with an open saturated subset which is ℝ_+^*-invariant. We compare the connecting maps and the index maps of G with those of G/ℝ_+^*, using Connes-Thom morphisms. §.§ A (well known) remark on exact sequences We will use the following quite immediate (and well known) result. Consider a commutative diagram of semi-split exact sequences of C^*-algebras 0→ J_1→ A_1 q_1→ B_1→ 0 and 0→ J_2→ A_2 q_2→ B_2→ 0, with vertical morphisms f_J:J_1→ J_2, f_A:A_1→ A_2 and f_B:B_1→ B_2.
[H, 03]Ψ^*(G_F)The quotient Ψ^*(G)/Ψ^*(G_W) where F is a closed subset of G^(0) saturated for G and W=G^(0)∖ F [H, 06]Σ^W(G)The quotient Ψ^*(G)/C^*(G_W) [I, 04]_full^W(G)The connecting element, which belongs to KK^1(Σ^W(G),C^*(G_W)) associated to the short exact sequence 0⟶ C^*(G_W)⟶Ψ^*(G)⟶Σ^W(G)⟶0 [I, 03]∂_G^WThe connecting element, which belongs to KK^1(C^*(G|_F),C^*(G|_W)),associated to the short exact sequence 0[r]C^*(G|_W)[r] C^*(G)[r]C^*(G|_F)[r] 0 where W is a saturated open subset of G^(0) and F=G^(0)∖ WDefine the full symbol algebra Σ^W(G) to be the quotient Ψ^*(G)/C^*(G_W).In this section we will be interested in the description of elements ∂_G^W∈ KK^1(C^*(G_F),C^*(G_W)) and _full^W(G)∈ KK^1(Σ^W(G),C^*(G_W)) associated to the exact sequences0⟶ C^*(G_W)⟶ C^*(G)⟶ C^*(G_F)⟶ 0 E_∂and0⟶ C^*(G_W)⟶Ψ^*(G)⟶Σ^W(G)⟶0 . E__fullTo that end, it will be natural to assume that the restriction G_F of G to F is amenable - so that the above sequences are exact and semi-split for the reduced as well as the full groupoid algebra. At some point, we wish to better control the K-theory of the C^*-algebras C^*(G_F) and Σ^W(G). We will assume that the index element _G_F∈ KK(C_0(( ^*G)_|F),C^*(G_F)) is invertible. This assumption is satisfied in our main applications in section <ref>.§.§.§ Connecting map and indexAssume that the groupoid G_F is amenable. We have a diagram0[d] 0[d] E_∂ : 0[r]C^*(G_W)[r]@=[d] C^*(G)[r][d]C^*(G_F) [r][d]^j 0 E__full : 0[r]C^*(G_W)[r]Ψ^*(G)[d][r]Σ^W(G)[r][d] 0 C_0(^* G)[d]@=[r] C_0(^* G)[d]0 0It follows that ∂ _G^W=j^*(_full^W(G)) (proposition <ref>). §.§.§ Connecting mapsLet ∂_G^W ∈ KK^1(C^*(G_F),C^*(G_W)) be the elementassociated with the exact sequence0⟶ C^*(G_W)⟶ C^*(G)⟶ C^*(G_F)⟶ 0.Similarly, let∂_ G^W∈ KK^1(C_0(( ^*G)_|F),C_0(( ^*G)_|W)) be associated with the exact sequence0⟶ C_0(( ^*G)_|W)⟶ C_0( ^*G)⟶ C_0(( ^*G)_|F)⟶ 0.We have ∂_ G^W⊗_G_W=_G_F⊗∂_G^W.In particular, if the index element _G_F∈ KK(C_0(( ^*G)_|F),C^*(G_F)) is invertible, then the element ∂_G^Wis the composition _G_F^-1⊗∂_ G^W⊗_G_W.Indeed, we just have to apply twice proposition <ref> using the adiabatic deformation G_ad^[0,1] and the diagram:0[r] C_0(( ^*G)_|W)) [r] C_0( ^*G) [r]C_0(( ^*G)_|F))[r]00[r]C^*(G_ad(W× [0,1]) [u]^ev_0 [r][d]_ev_1C^*(G_ad) [u]^ev_0 [r][d]_ev_1 C^*(G_ad(F× [0,1]))[u]^ev_0[d]_ev_1[r]00[r]C^*(G_W) [r] C^*(G) [r]C^*(G_F)[r]0 §.§.§ A general remark on the index In the same way as the index _G∈ KK(C_0(^*G),C^*(G)) constructed using the adiabatic groupoid is more primitive and to some extent easier to handle than _G∈ KK^1(C_0(^* G),C^*(G)) constructed using the exact sequence of pseudodifferential operators, there is in this “relative” situation a natural more primitive element. Denote by _W G=G_ad(F× [0,1)∪ W×{0}) the restriction of G_ad to the saturated locally closed subset F× [0,1)∪ W×{0}. Note that, since we assume that G_F is amenable, and since G is also amenable (it is a bundle groupoid), the groupoid _W G is amenable. [G, 09]_W GThe restriction of the adiabatic groupoid G_ad to F× [0,1)∪ W×{0} where F is a closed subset of G^(0) saturated for G and W=G^(0)∖ FSimilarly to <cit.>, we define the noncommutative algebroid of G relative to F to be C^*( _W G). Note that by definition we have:C^*(G_ad^ [0,1))/C^*(G_ad(W× (0,1))=C^*(G_ad(F× [0,1)∪ W×{0})=C^*( _W G) We have an exact sequence0→ C^*(G_W× (0,1])⟶ C^*(G_ad(F× [0,1)∪ W× [0,1]))ev_0⟶ C^*( _W G)→ 0, where ev_0 : C^*(G_ad(F× [0,1)∪ W× [0,1])) → C^*(G_ad(F× [0,1)∪ W×{0})=C^*( _W G) is the restriction morphism. 
As C^*(G_W×(0,1]) is contractible, the KK-class [ev_0]∈ KK(C^*(G_ad(F×[0,1)∪ W×[0,1])),C^*(𝔄_W G)) is invertible. Let, as usual, ev_1:C^*(G_ad(F×[0,1)∪ W×[0,1]))→ C^*(G_W) be the evaluation at 1. We put: ind_G^W=[ev_0]^-1⊗[ev_1]∈ KK(C^*(𝔄_W G),C^*(G_W)). [I, 05] ind_G^W: the KK-element [ev_0]^-1⊗[ev_1], which belongs to KK(C^*(𝔄_W G),C^*(G_W)), associated to the groupoid G_ad(F×[0,1)∪ W×[0,1])=G_W×(0,1]⊔𝔄_W G Recall from <cit.> and <cit.> that there is a natural action of ℝ on Ψ^*(G) such that Ψ^*(G)⋊ℝ is an ideal in C^*(G_ad^[0,1)) (using a homeomorphism of [0,1) with ℝ_+). This ideal is the kernel of the composition C^*(G_ad^[0,1)) ev_0⟶ C_0(𝔄^*G)→ C(M). Recall that the restriction to C^*(G) of the action of ℝ is inner. It follows that C^*(G_W)⊂Ψ^*(G) is invariant by the action of ℝ - and C^*(G_W)⋊ℝ=C^*(G_W)⊗ C_0(ℝ)=C^*(G_ad(W×(0,1))). We thus obtain an action of ℝ on Σ^W(G)=Ψ^*(G)/C^*(G_W) and an inclusion i:Σ^W(G)⋊ℝ↪ C^*(𝔄_WG). The element ind_full^W∈ KK^1(Σ^W(G),C^*(G_W)) corresponding to the exact sequence 0⟶ C^*(G_W)⟶Ψ^*(G)⟶Σ^W(G)⟶0 (E_ind_full) is the Kasparov product of: * the Connes-Thom element [th]∈ KK^1(Σ^W(G),Σ^W(G)⋊ℝ); * the inclusion i:Σ^W(G)⋊ℝ↪ C^*(𝔄_W G); * the index ind_G^W=[ev_0]^-1⊗[ev_1] defined above. By naturality of the Connes-Thom element, it follows that ind^W_full⊗[B]=-[th]⊗[∂], where ∂∈ KK^1(C^*(𝔄_W G),C^*(G_W×(0,1))) is the KK^1-element corresponding to the exact sequence 0→ C^*(G_W)⋊ℝ→Ψ^*(G)⋊ℝ→Σ^W(G)⋊ℝ→ 0, and [B]∈ KK^1(C^*(G_W),C^*(G_W)⋊ℝ) is the Connes-Thom element. Note that, since the action is inner, [B] is actually the Bott element. By the diagram whose rows are 0→ C^*(G_W)⋊ℝ→Ψ^*(G)⋊ℝ→Σ^W(G)⋊ℝ→ 0 and 0→ C^*(G_W×(0,1))→ C^*(G_ad^[0,1))→ C^*(𝔄_W G)→ 0 (the vertical maps being the natural inclusions, the right one being i), we deduce that [∂]=i^*[∂'], where ∂' corresponds to the second exact sequence. Finally, we have a diagram of semisplit exact sequences, with first row 0→ C^*(G_W×(0,1))→ C^*(G_ad^[0,1))→ C^*(𝔄_W G)→ 0, second row 0→ C^*(G_W×(0,1])→ C^*(G_ad(F×[0,1)∪ W×[0,1])) ev_0⟶ C^*(𝔄_W G)→ 0, where the left and middle arrows of the first row include into those of the second, the right column is the identity, and the quotient of the left and middle columns is C^*(G_W) (via ev_1). Now, the connecting element corresponding to the exact sequence 0→ C^*(G_W×(0,1))→ C^*(G_ad(F×[0,1)∪ W×[0,1])) ev_0⊕ ev_1⟶ C^*(𝔄_W G)⊕ C^*(G_W)→ 0 is [∂']⊕[B], and it follows that [ev_0]⊗[∂']+[ev_1]⊗[B]=0. As ind_full^W⊗[B]=-[th]⊗[i]⊗[∂'] and [∂']=-[ev_0]^-1⊗[ev_1]⊗[B], the result follows from the invertibility of the Bott element. §.§.§ Full symbol algebra and index Denote by Ψ_F^*(G) the subalgebra C_0(M)+Ψ^*(G_W) of Ψ^*(G). It is the algebra of pseudodifferential operators which become trivial (multiplication operators) on F. Let Σ_F(G)=Ψ_F^*(G)/C^*(G_W) be the algebra of the corresponding symbols. It is the subalgebra C_0(M)+C_0(𝕊^*G_W) of C_0(𝕊^*G) of symbols a(x,ξ), with x∈ M and ξ∈(𝕊^*G)_x, whose restriction to F does not depend on ξ. [H, 05] Ψ_F^*(G): the subalgebra C_0(M)+Ψ^*(G_W) of Ψ^*(G) [H, 07] Σ_F(G): the algebra Ψ_F^*(G)/C^*(G_W) Assume that the index element ind_G_F∈ KK(C_0((𝔄^*G)_|F),C^*(G_F)) is invertible, i.e. that the C^*-algebra of the adiabatic groupoid C^*(G_ad(F×[0,1))) is K-contractible. * The inclusion j_ψ:C_0(F)→Ψ^*(G_F) is a KK-equivalence. * The inclusion j_σ:Σ_F(G)=Ψ_F^*(G)/C^*(G_W)→Σ^W(G) is also a KK-equivalence. * Consider the diagram whose rows are 0→ C_0((𝔄^*G)_|F)→ C_0((B̄^*G)_|F)→ C_0((𝕊^*G)_|F)→ 0, 0→ C^*(G_ad(F×[0,1]))→Ψ^*(G_ad(F×[0,1]))→ C((𝕊^*G)_|F×[0,1])→ 0 and 0→ C^*(G_F)→Ψ^*(G_F)→ C((𝕊^*G)_|F)→ 0, with vertical maps the evaluations ev_0 (towards the first row) and ev_1 (towards the last row), and where the horizontal exact sequences are the pseudodifferential exact sequences (E_Ψ^*(𝔄G_F)), (E_Ψ^*(G_ad(F×[0,1]))) and (E_Ψ^*(G_F)).
Since ind_{G_F} is invertible, ev_1:C^*(G_ad(F×[0,1]))→ C^*(G_F) is a KK-equivalence. Hence the left and right vertical arrows are all KK-equivalences, and therefore so are the middle ones. The inclusion of C_0(F) in C_0((B^*G)_{|F}) is a homotopy equivalence, and therefore the inclusions C_0(F)→Ψ^*(G_ad(F×[0,1])) and C_0(F)→Ψ^*(G_F) are KK-equivalences.

(b) Apply Lemma <ref> to the diagrams whose rows are

0⟶Ψ^*(G_W)⟶Ψ_F^*(G)⟶ C_0(F)⟶ 0 over 0⟶Ψ^*(G_W)⟶Ψ^*(G)⟶Ψ^*(G_F)⟶ 0

(vertical maps: the identity of Ψ^*(G_W), J_ψ and j_ψ) and

0⟶ C^*(G_W)⟶Ψ_F^*(G)⟶Σ_F(G)⟶ 0 over 0⟶ C^*(G_W)⟶Ψ^*(G)⟶Σ^W(G)⟶ 0

(vertical maps: the identity of C^*(G_W), J_ψ and j_σ); we find that J_ψ and j_σ are KK-equivalences.

The diagram in lemma <ref>.<ref>) shows that ∂_F=j_σ^*(ind_full^W(G)), where ∂_F∈ KK^1(Σ_F(G),C^*(G_W)) is the KK-element associated with the exact sequence

0⟶ C^*(G_W)⟶Ψ_F^*(G)⟶Σ_F(G)⟶ 0.

So, let us compute the KK-theory of Σ_F(G) and the connecting element ∂_F.

Consider the vector bundle 𝔄G as a groupoid (with objects M). It is its own algebroid - with anchor 0. With the notation of <ref>:
* C^*(𝔄G) identifies with C_0(𝔄^*G) and C^*((𝔄G)_W) with C_0((𝔄^*G)_{|W});
* Ψ^*(𝔄G) identifies with C_0(B^*G); it is homotopy equivalent to C_0(M);
* the spectrum of Ψ_F^*(𝔄G) is B_F^*G, the quotient of B^*G where we identify two points (x,ξ) and (x,η) for x∈ F; it is also homotopy equivalent to C_0(M);
* the algebroid of the groupoid 𝔄G is 𝔄G itself; therefore Σ_F(𝔄G)=Σ_F(G); its spectrum is 𝔖_F^*G, which is the image of 𝔖^*G in B_F^*G.

We further note:
* Let k:C_0((𝔄^*G)_{|W})→ C_0(M) be given by k(f)(x)=f(x,0) if x∈ W and k(f)(x)=0 if x∈ F. We find a commutative diagram

C_0((B̊^*G)_{|W}) ⟶ C_0(B_F^*G)
(vertical arrows down)
C_0((𝔄^*G)_{|W}) ⟶ C_0(M)

whose bottom arrow is k and where the vertical arrows are homotopy equivalences.
* The exact sequence 0→ C^*((𝔄G)_W)→Ψ_F^*(𝔄G)→Σ_F(𝔄G)→ 0 reads 0→ C_0((B̊^*G)_{|W})→ C_0(B_F^*G)→ C_0(𝔖_F^*G)→ 0 (identifying (𝔄^*G)_{|W} with the open ball bundle (B̊^*G)_{|W}).

We deduce, using successively (<ref>) and (<ref>):
* The algebra C_0(𝔖_F^*G) is KK^1-equivalent with the mapping cone of the inclusion C_0((B̊^*G)_{|W})→ C_0(B_F^*G).
* This mapping cone is homotopy equivalent to the mapping cone of the morphism k. □

Note finally that we have a diagram whose rows are

0⟶ C_0((𝔄^*G)_{|W})⟶Ψ_F^*(𝔄G)⟶Σ_F(𝔄G)⟶ 0
0⟶ C^*(G_ad(W×[0,1]))⟶Ψ_{F×[0,1]}^*(G_ad)⟶Σ_{F×[0,1]}(G_ad)⟶ 0
0⟶ C^*(G_W)⟶Ψ_F^*(G)⟶Σ_F(G)⟶ 0,

the middle row being connected to the first one by the evaluations ev_0 and to the last one by the evaluations ev_1. The right vertical arrows are KK-equivalences, and therefore we find ∂⊗[ev_0]^{-1}⊗[ev_1]=∂_F, where ∂ is the connecting element of the first horizontal exact sequence. To summarize, we have proved:

Assume that the index element ind_{G_F}∈ KK(C_0((𝔄^*G)_{|F}),C^*(G_F)) is invertible.
* The inclusion j_σ:Σ_F(G)→Σ^W(G) is a KK-equivalence.
* The analytic index ind_full^W(G)∈ KK^1(Σ^W(G),C^*(G_W)) corresponding to the exact sequence

0⟶ C^*(G_W)⟶Ψ^*(G)⟶Σ^W(G)⟶ 0

is the Kasparov product of
  * the element [j_σ]^{-1}∈ KK(Σ^W(G),Σ_F(G));
  * the connecting element ∂∈ KK^1(Σ_F(𝔄G),C_0((𝔄^*G)_{|W})) associated with the exact sequence of commutative C^*-algebras 0⟶ C_0((𝔄^*G)_{|W})⟶Ψ_F^*(𝔄G)⟶Σ_F(𝔄G)⟶ 0;
  * the analytic index element ind_{G_W} of G_W, i.e. the element [ev_0]^{-1}⊗[ev_1]∈ KK(C_0((𝔄^*G)_{|W}),C^*(G_W)). □

§.§.§ Fredholm realization

Let σ be a classical symbol which defines an element in K_1(C_0(𝔖^*G)).
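A typical example to keep in mind: σ may be the principal symbol of an elliptic order 0 pseudodifferential operator P∈ M_n(Ψ^*(G)); ellipticity means precisely that σ is invertible in M_n(C_0(𝔖^*G)^+), so that it defines a class [σ]∈ K_1(C_0(𝔖^*G)).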
A natural question is: when can this symbol be lifted to a pseudodifferential element which is invertible modulo C^*(G_W)?In particular, if G_W is the pair groupoid W× W, this question reads: when can this symbol be extended to a Fredholm operator? Particular cases of this question were studied in <cit.>.Consider the exact sequences:0 0 E:0[r]C^*(G_F)[r] [u]Σ^W(G)[r]^q[u]C_0(^* G)[r] @=[d] 00[r] C^*(G)[r] [u]Ψ^*(G) [u][r] C_0(^* G) [r] 0C^*(G_W) @=[r] [u] C^*(G_W) [u]0 [u] 0 [u] The element σ is an invertible element in M_n(C_0(^* G)^+) (where C_0(^* G)^+ is obtained by adjoining a unit to C_0(^* G) - if G^(0) is not compact). The question is: when can σ be lifted to an invertible element of M_n( Σ^W(G)^+). By the K-theory exact sequence, if this happens then the class of σ is in the image of K_1(Σ^W(G)) and therefore its image via the connecting map of the exact sequence E is 0 in K_0(C^*(G_F)). Conversely, ifthe image of σ via the connecting map of E vanishes, then the class of σ in K_1(C_0(^* G)) is in the image of K_1(Σ^W(G)). This means that there exists p∈ and an invertible element x∈ M_n+p(Σ^W(G)^+) such that q(x) and σ⊕ 1_p are in the same path connected component of GL_n+p( C_0(^* G)^+). Now the morphism q:M_n+p( Σ^W(G)^+)→ M_n+p(C_0(^* G)^+) is open and therefore the image of the connected component GL_n+p(Σ^W(G)^+)_(0) of 1_n+p in GL_n+p(Σ^W(G)^+) is an open (and therefore also closed) subgroup of GL_n+p(C_0(^* G)^+). It follows immediately that q(GL_n+p(Σ^W(G)^+)_(0))=GL_n+p(C_0(^* G)^+)_(0). Finally (σ⊕ 1_p)x^-1 is in the image of GL_n+p(Σ^W(G)^+)_(0), therefore σ⊕ 1_p can be lifted to aninvertible element of M_n(Σ^W(G)^+). Let us make a few comments:* Considering the diagram0[r]C^*(G_F)[r]@=[d] Σ^W(G)[r][d]C_0(^* G)[d][r]00[r]C^*(G_F)[r] Ψ^*(G_F)[r]C_0(^* G_F)[r]0 we find that the image of σ in K_0(C^*(G_F)) is the index (σ_F) of the restriction σ_F of σ to F. * Considering the diagram0[d] 0[d] 0[d]0[r]C^*(G_W)[r][d] Ψ^*(G_W)[r][d]C_0(^* G_W)[r][d]00[r]C^*(G)[r][d]Ψ^*(G)[r][d]C_0(^* G)[d][r]00[r]C^*(G_F)[r][d] Ψ^*(G_F)[r][d]C_0(^* G_F)[r][d]00 0 0 we could also say that our question is: when is the index (σ)∈ K_0(C^*(G)) in the image of K_0(C^*(G_W)), and of course this happens if and only if the image of (σ) in K_0(C^*(G_F)) vanishes. Again we may notice that the image of (σ) in K_0(C^*(G_F)) is (σ_F). * Of course, the same remark holds if we start with a symbol defining a class in K_0(C_0(^* G)). §.§.§ Relative K-theory and full index It is actually better to consider the index map in a relative K-theory setting. Indeed, the starting point of the index problem is a pair of bundles E_± over M together with a pseudodifferential operator P from sections of E_+ to sections of E_- which is invertible modulo C^*(G_W). Consider the morphism ψ:C_0(M)→Ψ^*(G) which associates to a (smooth) function f the order 0 (pseudo)differential operator multiplication by f and σ_full:Ψ^*(G)→Σ^W(G) the full symbol map. Put μ=σ_full∘ψ. By definition, for any P∈Ψ^*(G),the triple (E_±,σ_full(P)) is an element in the relative K-theory of the morphism μ. 
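Recall that, for a morphism f:A→ B of C^*-algebras, the relative group K_0(f) can be described by triples (E_+,E_-,u), where E_± are finitely generated projective modules over (matrices over) A^+ and u:f_*(E_+)→ f_*(E_-) is an isomorphism of the induced B^+-modules. Here A=C_0(M), B=Σ^W(G), the bundles E_± are viewed as projective C_0(M)-modules, and u=σ_full(P).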
The index ·⊗_full^W(G) considered in the previous section is the composition of the morphism K_1(Σ^W(G))→ K_0(μ)[Recall that if f:A→ B is a morphism of C^*-algebras, we have a natural morphism u:K_*+1(B)→ K_*(f) corresponding to the inclusion of the suspension of B in the cone of f.]with the index map _rel:K_0(μ)→ K_0(C^*(G_W)) which to (E_±,σ_full(P)) associates the class of P.The morphism _rel can be thought of as the composition of the obvious morphism K_0(μ)→ K_0(σ_full)≃ K_0((σ_full))=K_0(C^*(G_W)).Let us now compute the group K_*(μ) and the morphism _rel when the index element _G_F∈ KK(C_0(( ^*G)_|F),C^*(G_F)) is invertible. Assume that the index element _G_F∈ KK(C_0(( ^*G)_|F),C^*(G_F)) is invertible. Then K_*(μ) is naturally isomorphic to K_*(C_0(^* G_W)). Under this isomorphism, _rel identifies with_G_W. We have a diagram0[r] C_0(W)[r][d]^iC_0(M)[r][d]^μ C_0(F)[d]^j_Ψ[r] 00[r] C_0(^* G_W)[r] Σ^W(G)[r] Ψ^*(G_F)[r] 0As j_Ψ is an isomorphism in K-theory, the map K_*(i)→ K_*(μ) induced by the first commutative square of this diagram is an isomorphism. As K_*(i)=K_*(_i) and _i=C_0(^* G_W), we obtained the desired isomorphism K_*(C_0(^* G_W))≃ K_*(μ).Comparing the diagramsC_0(W)[r][d]^iC_0(M)[r][d]^μ Ψ^*(G)[d]^σ_fullC_0(^* G_W)[r] Σ^W(G)@=[r] Σ^W(G) andC_0(W)[r][d]^i Ψ^*(G_W)[r][d]^σ_W Ψ^*(G)[d]^σ_fullC_0(^* G_W)@=[r] C_0(^* G_W)[r] Σ^W(G)we find that the composition K_*(i)∼⟶ K_*(μ)⟶ K_*(σ_full) coincides with the index K_*(i)⟶ K_*(σ_W)∼⟶ K_*(σ_full). We wrote the relative index map in terms of morphisms of K-groups. One can also write everything in terms KK-theory, by replacing relative K-theory by mapping cones, construct the relative index as the element ofKK(_μ,C^*(G_W)) given as ψ_^*([e]^-1) where e:C^*(G_W)→_σ_full is the (KK-invertible) “excision map” associated with the (semi-split) exact sequence 0→ C^*(G_W)→Ψ^*(G)σ_full⟶Σ_F(G)→ 0 and ψ_:_μ→_σ_full is the morphism associated with ψ.§.§ Connes-Thom elements and quotient of a groupoid by _+^* §.§.§ Proper action on a manifoldLet _+^* act smoothly (freely and) properly on a manifold M. We have a canonical invertible KK-element α=(H,D)∈ KK^1(C_0(M),C_0(M/_+^*)). * The Hilbert module H is obtained as a completion of C_c(M) with respect to the C_0(M/_+^*) valued inner product ⟨ξ|η⟩(p(x))=∫_0^+∞ξ(t.x) η(t.x) dt/t for ξ,η∈ C_c(M), where p:M→ M/_+^* is the quotient map. * The operator D is 1/t∂/∂ t. The inverse element β∈ KK^1(C_0(M/_+^*),C_0(M)) is constructed in the following way: C_0(M/_+^*) sits in the multipliers of C_0(M). One may define a continuous function f:M→ [-1,1] such that, uniformly on compact sets of M, lim_t→±∞f(e^t.x)=±1. The pair (C_0(M),f) is then an element in KK^1(C_0(M/_+^*),C_0(M)). To construct f, one may note that, by properness, we actually have a section φ:M/_+^* → M and we may thus construct a homeomorphism _+^*× M/_+^*→ M defined by (t,x)↦ t.φ(x). Then put f(e^t,x)=t(1+t^2)^-1/2.As an extension of C^*-algebras the element β is given by consideringP=(M×_+)/_+^* (where _+^* acts -properly - diagonally). Then M sits as an open subset (M×_+^*)/_+^* and we have an exact sequence 0→ C_0(M)→ C_0(P)→ C_0(M/_+^*)→ 0.§.§.§ Proper action on a groupoidLet now _+^* act smoothly (locally remark <ref>) properly on a Lie groupoid G⇉ M. The groupoid G/_+^* acts on M, and the element α is G invariant - and β is almost G invariant in the sense of <cit.>. 
In other words, we obtain elements α∈ KK^1_{G/ℝ_+^*}(C_0(M),C_0(M/ℝ_+^*)) and β∈ KK^1_{G/ℝ_+^*}(C_0(M/ℝ_+^*),C_0(M)) which are inverses of each other in Le Gall's equivariant KK-theory for groupoids. Using the descent morphism of Kasparov (<cit.>) and Le Gall (<cit.>), we obtain elements j_{G/ℝ_+^*}(α)∈ KK^1(C^*(G),C^*(G/ℝ_+^*)) and j_{G/ℝ_+^*}(β)∈ KK^1(C^*(G/ℝ_+^*),C^*(G)) that are also inverses of each other. Note also that the element β_G=j_{G/ℝ_+^*}(β) is the connecting element of the extension of groupoid C^*-algebras

0⟶ C^*(G)⟶ C^*(𝒢)⟶ C^*(G/ℝ_+^*)⟶ 0,

where 𝒢=(G×ℝ_+)/ℝ_+^* and the quotient map ev_0 comes from the evaluation at G×{0}. Using the pseudodifferential operators on the groupoid 𝒢, we obtain a KK-element β_G^Ψ∈ KK^1(Ψ^*(G/ℝ_+^*),Ψ^*(G)). We obtain a commutative diagram with exact rows

0⟶ C^*(G)⟶ C^*(𝒢)⟶ C^*(G/ℝ_+^*)⟶ 0
0⟶Ψ^*(G)⟶Ψ^*(𝒢)⟶Ψ^*(G/ℝ_+^*)⟶ 0
0⟶ C_0(𝔖^*G)⟶ C_0(𝔖^*𝒢)⟶ C_0(𝔖^*(G/ℝ_+^*))⟶ 0

whose columns are the corresponding pseudodifferential extensions. The third horizontal exact sequence corresponds to the proper action of ℝ_+^* on 𝔖^*G. In fact 𝔖^*𝒢 is homeomorphic (using a cross section) to 𝔖^*(G/ℝ_+^*)×ℝ_+. As the connecting elements of the first and third horizontal (semi-split) exact sequences are invertible, it follows that C^*(𝒢) and C_0(𝔖^*𝒢) are K-contractible, whence so is Ψ^*(𝒢), and therefore β_G^Ψ is a KK-equivalence. Hence we have obtained:

If ℝ_+^* acts smoothly (locally) properly on the Lie groupoid G, the connecting elements β_G∈ KK^1(C^*(G/ℝ_+^*),C^*(G)), β_{𝔖^*G}∈ KK^1(C_0(𝔖^*(G/ℝ_+^*)),C_0(𝔖^*G)) and β_G^Ψ∈ KK^1(Ψ^*(G/ℝ_+^*),Ψ^*(G)) are KK-equivalences.

§.§.§ Closed saturated subsets and connecting maps

If W is an open saturated subset in M for the actions of G and of ℝ_+^* and F=M∖ W, one compares the corresponding elements. We then obtain a diagram

0⟶ C^*(G_W^W/ℝ_+^*)⟶ C^*(G/ℝ_+^*)⟶ C^*(G_F^F/ℝ_+^*)⟶ 0
0⟶ C^*(G_W^W)⟶ C^*(G)⟶ C^*(G_F^F)⟶ 0

where the horizontal arrows are morphisms and the vertical ones (β', β and β″, from the first row to the second) are KK^1-equivalences. Using the deformation groupoid

𝒯=G_F^F×[0,1)∪ G×{0},

which is the restriction of the groupoid G×[0,1)⇉ M×[0,1) to the closed saturated subset F×[0,1)∪ M×{0}, we obtain:

If G_F^F is amenable,

∂_{G/ℝ_+^*}^W⊗β'=-β″⊗∂_G^W∈ KK(C^*(G_F^F/ℝ_+^*),C^*(G_W^W)),

where ∂_G^W∈ KK^1(C^*(G_F^F),C^*(G_W^W)) and ∂_{G/ℝ_+^*}^W∈ KK^1(C^*(G_F^F/ℝ_+^*),C^*(G_W^W/ℝ_+^*)) denote the KK-elements associated with the above exact sequences.

Indeed, the connecting map of a semi-split exact sequence 0→ J→ A→ A/J→ 0 (with quotient map p) is obtained as the KK-product of the morphism (A/J)(0,1)→ C_p with the KK-inverse of the morphism J→ C_p, where C_p is the mapping cone of p. The minus sign comes from the fact that we naturally obtain elements of KK(C^*(G_F^F/ℝ_+^*×(0,1)^2),C^*(G_W^W)) which are equal but correspond to opposite orientations of (0,1)^2. Note also that the same holds for Ψ^* in place of C^*.

§.§.§ Connes-Thom invariance of the full index

Let W be as above: an open subset of M saturated for G and invariant under the action of ℝ_+^*. One compares the corresponding ind_full elements. Indeed, we have a diagram

0⟶ C^*(G_W^W/ℝ_+^*)⟶Ψ^*(G/ℝ_+^*)⟶Σ^{W/ℝ_+^*}(G/ℝ_+^*)⟶ 0
(E_ind_full) 0⟶ C^*(G_W^W)⟶Ψ^*(G)⟶Σ^W(G)⟶ 0

where the horizontal arrows are morphisms and the vertical ones (β^{G_W}, β_Ψ^G and β_Σ^{(G,W)}, from the first row to the second) are KK^1-elements. As β^{G_W} and β_Ψ^G are invertible, we deduce as in prop. <ref>:

* The element β_Σ^{(G,W)} is invertible.
* We have β_Σ^{(G,W)}⊗ ind_full^W(G)=- ind_full^{W/ℝ_+^*}(G/ℝ_+^*)⊗β^{G_W}. □

§ TWO CLASSICAL GEOMETRIC CONSTRUCTIONS: BLOWUP AND DEFORMATION TO THE NORMAL CONE

One of the main objects in our study is a Lie groupoid based on a groupoid restricted to a half-space.
This corresponds to the inclusion of a hypersurface V of G^(0) into G and gives rise to the “gauge adiabatic groupoid” . The construction ofis in fact a particular case of the blowup construction corresponding to the inclusion of a Lie subgroupoid into a groupoid. In this section, we will explain this general construction. We will give a more detailed description in the case of an inclusion V→ G when V is a submanifold of G^(0).Let Y be a manifold and X a locally closed submanifold (the same constructions hold if we are given an injective immersion X→ Y). Denote by N_X^Y the (total space) of the normal bundle of X in Y.§.§ Deformation to the normal cone [G, 10](Y,X)The deformation to the normal cone of the inclusion of a submanifold X in a manifold Y, (Y,X)=Y×^* ∪ N_X^Y The deformation to the normal cone (Y,X) is obtained by gluing N_X^Y×{0} with Y×^*. The smooth structure of (Y,X) is described by use of any exponential map θ:U'→ U which is a diffeomorphism from an open neighborhood U' of the 0-section in N_X^Y to an open neighborhood U of X. The map θ is required to satisfy θ (x,0)=x for all x∈ X and p_x∘ dθ_x=p'_x where p_x:T_xY→ (N_X^Y)_x=(T_xY)/(T_xX) and p'_x:T_xN_X^Y≃ (N_X^Y)_x⊕ (T_xX) → (N_X^Y)_x are the projections. The manifold structure of (Y,X) is described by the requirement that:* the inclusion Y×^*→(Y,X) and* the map Θ:Ω'={((x,ξ),λ)∈ N_X^Y×; (x,λξ)∈ U'}→(Y,X) defined by Θ((x,ξ),0)=((x,ξ),0) andΘ((x,ξ),λ)=(θ(x,λξ),λ)∈ Y×^* if λ 0.are diffeomorphisms onto open subsets of (Y,X).It is easily shown that (Y,X) has indeed a smooth structure satisfying these requirements and that this smooth structure does not depend on the choice of θ. (See for example <cit.> for a detailed description of this structure).In other words, (Y,X) is obtained by gluing Y×^* with Ω' by means of the diffeomorphism Θ:Ω'∩ (N_X^Y×^*)→ U×^*.Let us recall the following facts which are essential in our construction. The gauge action of ^*. The group ^* acts on (Y,X) by λ.(w,t)=(w,λ t) and λ.((x,ξ),0)=((x,λ^-1ξ),0) (with λ,t∈^*, w∈ Y, x∈ X and ξ∈ (N_X^Y)_x).Functoriality. Given a commutative diagram of smooth mapsX@^(->[r][d]_f_X Y[d]^f_Y X'@^(->[r] Y'where the horizontal arrows are inclusions of submanifolds, we naturally obtain a smooth map (f):(Y,X)→(Y',X'). This map is defined by (f)(y,λ)=(f_Y(y),λ) for y∈ Y and λ∈_* and (f)(x,ξ,0)=(f_X(x),f_N(ξ),0) for x∈ X and ξ∈ (N_X^Y)_x=T_xY/T_xX where f_N:N_x→ (N_X'^Y')_f_X(x)=T_f_X(x)Y'/T_f_X(x)X' is the linear map induced by the differential (df_Y)_x at x. This map is of course equivariant with respect to the gauge action of ^*.Let us make a few remarks concerning the DNC construction. * The map equal to identity on X×^* and sending X×{0} to the zero section of N_X^Y leads to an embedding of X× into (Y,X), we may often identifyX× with its image in (Y,X). As (X,X)=X×, this corresponds to the naturality of the diagramX@^(->[r][d]_ X[d]^ X@^(->[r] Y* We have a natural smooth map π:(Y,X)→ Y× defined by π(y,λ)=(y,λ) (for y∈ Y and λ∈^*) and π((x,ξ),0)=(x,0) (for x∈ X⊂ Y and ξ∈ (N_X^Y)_x a normal vector). This corresponds to the naturality of the diagramX@^(->[r][d]_ Y[d]^ Y@^(->[r] Y * If Y_1 is an open subset of Y_2 such that X⊂ Y_1, then (Y_1,X) is an open subset of (Y_2,X) and (Y_2,X) is the union of the open subsets (Y_1,X) and Y_2×^*. 
This reduces to the case when Y_1 is a tubular neighborhood - and therefore to the case where Y is (diffeomorphic to) the total space of a real vector bundle over X.In that case one gets (Y,X)=Y× and the gauge action of ^* on (Y,X)=Y× is given by λ.((x,ξ),t)= ((x,λ^-1ξ), λ t) (with λ∈^*, t∈, x∈ X and ξ∈ Y_x). * More generally, let E be (the total space of) a real vector bundle over Y. Then (E,X) identifies with the total space of the pull back vector bundle π̂^*(E) over (Y,X), where π̂ is the composition of π:(Y,X)→ Y× (remark <ref>) with the projection Y×→ Y. The gauge action of ^* is λ.(w,ξ)= (λ .w,λ^-1ξ) for w∈(Y,X) and ξ∈ E_π̂(w). *Let X_1 be a (locally closed) smooth submanifold of a smooth manifold Y_1 and let f:Y_2→ Y_1 be a smooth map transverse to X_1. Put X_2=f^-1(X_1). Then the normal bundle N_X_2^Y_2 identifies with the pull back of N_X_1^Y_1 by the restriction X_2→ X_1 of f. It follows that (Y_2,X_2) identifies with the fibered product (Y_1,X_1)×_Y_1Y_2.*More generally, let Y,Y_1,Y_2 be smooth manifolds and f_i:Y_i→ Y be smooth maps. Assume that f_1 is transverse to f_2. Let X⊂ Y and X_i⊂ Y_i be (locally closed) smooth submanifolds. Assume that f_i(X_i)⊂ X and that the restrictions g_i:X_i→ X of f_i are transverse also. We thus have a diagram X_1 @^(->[d][r]^g_1 X@^(->[d] X_2@^(->[d][l]_g_2 Y_1[r]^f_1 Y Y_2[l]_f_2Then the maps (f_i):(Y_i,X_i)→(Y,X) are transverse and the deformation to the normal cone of fibered products (Y_1×_YY_2,X_1×_XX_2) identifies with the fibered product (Y_1,X_1)×_(Y,X)(Y_2,X_2). §.§ Blowup constructions[G, 13](Y,X)The blowup of the inclusion of a submanifold X in a manifold Y, (Y,X)=Y∖ X ∪(N_X^Y) [G, 14](Y,X)The spherical blowup of the inclusion of a submanifold X in a manifold Y, (Y,X)=Y ∖ X ∪(N_X^Y)The blowup (Y,X) is a smooth manifold which is a union of Y∖ X with the (total space) (N_X^Y) of the projective space of the normal bundle N_X^Y of X in Y. We will also use the “spherical version” (Y,X) of (Y,X) which is a manifold with boundary obtained by gluing Y∖ X with the (total space of the)sphere bundle (N_X^Y). We have an obvious smooth onto map (Y,X)→(Y,X) with fibers 1 or 2 points. These spaces are of course similar and we will often give details in our constructions to the one of them which is the most convenient for our purposes.We may view (Y,X) as the quotient space of a submanifold of the deformation to the normal cone (Y,X) under the gauge action of ^*. Recall that the group ^* acts on (Y,X) by λ.(w,t)=(w,λ t) and λ.((x,ξ),0)=((x,λ^-1ξ),0) (with λ,t∈^*, w∈ Y, x∈ X and ξ∈ (N_X^Y)_x). This action is easily seen to be free and (locally remark <ref>) proper on the open subset (Y,X)∖ X× (see remark <ref> below).For every locally closed subset T ofcontaining 0, we define _T(Y,X)=Y× (T∖{0})∪ N_X^Y×{0}=π^-1(Y× T) (with the notation of remark <ref>.<ref>). It is the restriction of (Y,X) to T. We put_+(Y,X)=__+(Y,X)=Y×_+^*∪ N_X^Y×{0}.[G, 11]_T(Y,X)The restriction Y× (T∖{0})∪ N_X^Y×{0} of (Y,X) to aclosed subset T ofcontaining 0 [G, 12]_+(Y,X)The restriction __+(Y,X)We put(Y,X)=((Y,X)∖ X×)/^*and(Y,X)=(_+(Y,X)∖ X×_+)/_+^*. With the notation of section <ref>, (Y,X) is thus obtained by gluing Y∖ X=((Y∖ X)×^*)/^*, with (Ω'∖ (X×))/^* using the map Θ which is equivariant with respect to the gauge action of ^*.Choose a euclidean metric on N_X^Y. Let ={((x,ξ),λ)∈Ω'; ξ=1}. 
The map Θ induces a diffeomorphism of 𝒮/τ with an open neighborhood Ω of ℙ(N_X^Y) in Blup(Y,X), where τ is the map ((x,ξ),λ)↦((x,-ξ),-λ). In this way, with a Riemannian metric on Y, we may naturally associate a Riemannian metric on Blup(Y,X) (using a partition of unity to glue the metric of Y∖ X with that of Ω).

Since π̂:DNC(Y,X)→ Y is invariant under the gauge action of ℝ^*, we obtain a natural smooth map π̃:Blup(Y,X)→ Y whose restriction to Y∖ X is the identity and whose restriction to ℙ(N_X^Y) is the canonical projection ℙ(N_X^Y)→ X⊂ Y. This map is easily seen to be proper.

Note that, according to remark <ref>.<ref>), DNC(Y,X) canonically identifies with the open subset Blup(Y×ℝ,X×{0})∖ Blup(Y×{0},X×{0}) of Blup(Y×ℝ,X×{0}). Thus, one may think of Blup(Y×ℝ,X×{0}) as a "local compactification" of DNC(Y,X) (since the map Blup(Y×ℝ,X×{0})→ Y×ℝ is proper).

In the case where Y is a real vector bundle over X, Blup(Y,X) identifies non-canonically with an open submanifold of the bundle of projective spaces ℙ(Y×ℝ) over X. Indeed, in that case DNC(Y,X)=Y×ℝ; choose a euclidean structure on the bundle Y. Consider the smooth involution Φ from (Y∖ X)×ℝ onto itself which to (x,ξ,t) associates (x,ξ/‖ξ‖^2,t) (for x∈ X, ξ∈ Y_x∖{0}, t∈ℝ). This map transforms the gauge action of ℝ^* on DNC(Y,X) into the action of ℝ^* by dilations on the vector bundle Y×ℝ over X, and thus defines a diffeomorphism of Blup(Y,X) onto its image, which is the open set ℙ(Y×ℝ)∖ X, where X embeds into ℙ(Y×ℝ) by mapping x∈ X to the line {(x,0,t); t∈ℝ}.

Since we will apply this construction to morphisms of groupoids that need not be proper, we have to relax properness as in remark <ref>: we will say that f:Y→ X is locally proper if every point in X has a neighborhood V such that the restriction f^{-1}(V)→ V of f is proper. In particular, if Y is a non-Hausdorff manifold and X is a locally closed submanifold of Y, then the map Blup(Y×ℝ,X×{0})→ Y×ℝ is locally proper.

§.§.§ Functoriality

Let a commutative diagram of smooth maps be given, with horizontal arrows the inclusions of closed submanifolds X↪ Y and X'↪ Y' and vertical maps f_X:X→ X' and f_Y:Y→ Y'. Let U_f=DNC(Y,X)∖ DNC(f)^{-1}(X'×ℝ) be the inverse image by DNC(f) of the complement in DNC(Y',X') of the subset X'×ℝ. We thus obtain a smooth map Blup(f):Blup_f(Y,X)→ Blup(Y',X'), where Blup_f(Y,X)⊂ Blup(Y,X) is the quotient of U_f by the gauge action of ℝ^*.

In particular,
* If X⊂ Y_1 are (locally) closed submanifolds of a manifold Y_2, then Blup(Y_1,X) is a submanifold of Blup(Y_2,X).
* Also, if Y_1 is an open subset of Y_2 such that X⊂ Y_1, then Blup(Y_1,X) is an open subset of Blup(Y_2,X) and Blup(Y_2,X) is the union of the open subsets Blup(Y_1,X) and Y_2∖ X. This reduces to the case when Y_1 is a tubular neighborhood.

§.§.§ Fibered products

Let X_1 be a (locally closed) smooth submanifold of a smooth manifold Y_1 and let f:Y_2→ Y_1 be a smooth map transverse to X_1. Put X_2=f^{-1}(X_1). Recall from remark <ref>.<ref> that in this situation DNC(Y_2,X_2) identifies with the fibered product DNC(Y_1,X_1)×_{Y_1}Y_2. Thus Blup(Y_2,X_2) identifies with the fibered product Blup(Y_1,X_1)×_{Y_1}Y_2.

§ CONSTRUCTIONS OF GROUPOIDS

§.§ Linear groupoids

We will encounter groupoids with an extra linear structure which are special cases of VB-groupoids in the sense of Pradines <cit.>. We will also need to consider the spherical and projective analogues. Let E be a vector space over a field 𝕂 and let F be a vector subspace. Let r,s:E→ F be linear retractions of the inclusion F→ E.
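The simplest example to keep in mind: for E=𝕂^2 and F=𝕂×{0}, every such pair of retractions is of the form

r(x_1,x_2)=(x_1+ax_2,0), s(x_1,x_2)=(x_1+bx_2,0)

with a,b∈𝕂, and r=s exactly when a=b.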
§.§.§ The linear groupoid

The space E is endowed with a groupoid structure E⇉ F with base F. The range and source maps are r and s, and the product is (x,y)↦ x· y=x+y-s(x) for (x,y) composable, i.e. such that s(x)=r(y).

One can easily check:
* Since r and s are linear retractions: r(x· y)=r(x) and s(x· y)=s(y).
* If (x,y,z) are composable, then (x· y)· z=x+y+z-(r+s)(y)=x·(y· z).
* The inverse of x is (r+s)(x)-x.
* Note that, given E and linear retractions r and s on F, E⇉ F is the only possible linear groupoid structure on E (a linear groupoid being a groupoid G such that G^(0) and G are vector spaces and all the structure maps (unit, range, source, product) are linear). Indeed, for any x∈ E one must have x· s(x)=x and r(x)· x=x. By linearity, it follows that for every composable pair (x,y)=(x,s(x))+(0,y-s(x)) we have x· y=x· s(x)+0·(y-s(x))=x+y-s(x).
* The morphism r-s:E/F→ F gives an action of E/F on F by addition. The groupoid associated to this action is in fact E⇉ F.
* Given a linear groupoid structure on a vector space E, we obtain the "dual" linear groupoid structure E^*⇉ F^⊥ on the dual space E^*, given by the subspace F^⊥={ξ∈ E^*; ξ|_F=0} and the two retractions r^*,s^*:E^*→ F^⊥ with kernels (ker r)^⊥ and (ker s)^⊥: for ξ∈ E^* and x∈ E, r^*(ξ)(x)=ξ(x-r(x)) and similarly s^*(ξ)(x)=ξ(x-s(x)).

§.§.§ The projective groupoid

The multiplicative group 𝕂^* acts on E⇉ F by groupoid automorphisms. This action is free on the restriction Ẽ=E∖(ker r∪ker s) of the groupoid to the subset F∖{0} of E^(0)=F. The projective groupoid is the quotient groupoid ℙE=Ẽ/𝕂^*. It is described as follows. As a set, ℙE=ℙ(E)∖(ℙ(ker r)∪ℙ(ker s)) and (ℙE)^(0)=ℙ(F)⊂ℙ(E). The source and range maps r,s:ℙE→ℙ(F) are those induced by r,s:E→ F. The product of x,y∈ℙE with s(x)=r(y) is the line x· y={u+v-s(u); u∈ x, v∈ y, s(u)=r(v)}. The inverse of x∈ℙE is (r+s-id)(x).

* When F is just a vector line, ℙE is a group. Let us describe it: we have a canonical morphism h:ℙE→𝕂^* defined by r(u)=h(x)s(u) for u∈ x. The kernel of h is ℙ(ker(r-s))∖ℙ(ker r). Note that F⊂ker(r-s) and therefore ker(r-s)⊄ker r, whence ker r∩ker(r-s) is a hyperplane in ker(r-s). The group ker h is then easily seen to be isomorphic to ker(r)∩ker(s). Indeed, choose a nonzero vector w in F; then the map which assigns to u∈ker(r)∩ker(s) the line with direction w+u gives such an isomorphism onto ker h. Then:
  * If r=s, ℙE is isomorphic to the abelian group ker(r)=ker(s).
  * If r≠ s, choose x such that r and s do not coincide on x and let P be the plane F⊕ x. The subgroup ℙ(P)∖{ker r∩ P, ker s∩ P} of ℙE is isomorphic through h with 𝕂^*. It thus defines a section of h. In that case ℙE is the group of dilations (ker(r)∩ker(s))⋊𝕂^*.
* In the general case, let d∈ℙ(F). Put E_d^d=r^{-1}(d)∩ s^{-1}(d).
  * The stabilizer (ℙE)_d^d is the group ℙE_d^d=ℙ(E_d^d)∖(ℙ(ker r)∪ℙ(ker s)) described above.
  * The orbit of a line d is the set of r(x) for x∈ℙE such that s(x)=d. It is therefore ℙ(d+r(ker s)).
  * The following are equivalent: (i) (r,s):E→ F× F is onto; (ii) r(ker s)=F; (iii) (r-s):E/F→ F is onto; (iv) the groupoid ℙE has just one orbit.
* When r=s, the groupoid ℙE is the product of the abelian group E/F by the space ℙ(F). When r≠ s, the groupoid Ẽ is Morita equivalent to E⇉ F since F∖{0} meets all the orbits of E. If 𝕂 is a locally compact field and r≠ s, the smooth groupoid ℙE is Morita equivalent to the groupoid crossed-product Ẽ⋊𝕂^*. In all cases, when 𝕂 is a locally compact field, ℙE is amenable.

§.§.§ The spherical groupoid

If the field is ℝ, we may just take the quotient by ℝ_+^* instead of ℝ^*. We then obtain similarly the spherical groupoid 𝕊E=𝕊(E)∖(𝕊(ker r)∪𝕊(ker s)), where (𝕊E)^(0)=𝕊(F)⊂𝕊(E). The involutive automorphism u↦ -u of 𝕊E leads to a ℤ/2 action, by groupoid automorphisms, on 𝕊E.
Since this action is free (and proper!), it follows that the quotient groupoid ℙE and the crossed product groupoid 𝕊E⋊ℤ/2 are Morita equivalent. Thus 𝕊E is also amenable. As for the projective case, if (r,s):E→ F× F is onto, the groupoid 𝕊E has just one orbit. The stabilizer of d∈𝕊(F) identifies with the group (ker r∩ker s)⋊ℝ_+^*, and therefore the groupoid 𝕊E is Morita equivalent to the group (ker r∩ker s)⋊ℝ_+^*.

§.§.§ Bundle groupoids

We may of course perform the constructions of section <ref> (with say 𝕂=ℝ) when E is a (real) vector bundle over a space V, F is a subbundle and r,s are bundle maps. We obtain respectively vector bundle groupoids, projective bundle groupoids and spherical bundle groupoids: E⇉ F, ℙ(E,r,s) and 𝕊(E,r,s), which are respectively families of linear, projective and spherical groupoids.
* A vector bundle groupoid is just given by a bundle morphism α=(r-s):E/F→ F. It is isomorphic to the semi-direct product F⋊_α E/F.
* All the groupoids defined here are amenable, since they are continuous fields of amenable groupoids (<cit.>).

The analytic index element ind_G∈ KK(C_0(𝔄^*G),C^*(G)) of a vector bundle groupoid G is a KK-equivalence. Here the groupoid G is a vector bundle E over a locally compact space X, G^(0) is a vector subbundle F, and G is given by a linear bundle map (r-s):E/F→ F.

Let E be a vector bundle groupoid. Then C^*(E) is KK-equivalent to C_0(𝔄^*E). More precisely, the index ind_E∈ KK(C_0(𝔄^*E),C^*(E)) is invertible.

Put F=E^(0) and H=E/F. Then H acts on C_0(F) and C^*(E)=C_0(F)⋊ H. We use the equivariant KK-theory of Le Gall (<cit.>), KK_H(A,B). The Thom element of the complex bundle H⊕ H defines an invertible element t_H∈ KK_H(C_0(X),C_0(H⊕ H)). We deduce that, for every pair A,B of H-algebras, the morphism τ_{C_0(H)}:KK_H(A,B)→ KK_H(A⊗_{C_0(X)}C_0(H),B⊗_{C_0(X)}C_0(H)) is an isomorphism. Its inverse is x↦ t_H⊗τ_{C_0(H)}(x)⊗ t_H^{-1}. Denote by A_0 the C_0(X)-algebra A endowed with the trivial action of H. We have an isomorphism of H-algebras u_A:C_0(H)⊗_{C_0(X)}A≅ C_0(H)⊗_{C_0(X)}A_0. It follows that the restriction map KK_H(A,B)→ KK_X(A,B) (associated to the groupoid morphism X→ H) is an isomorphism - compatible of course with the Kasparov product. Let v_A∈ KK_H(A_0,A) be the element whose image in KK_X(A_0,A) is the identity. The descent element j_H(v_A)∈ KK(C_0(H^*)⊗_{C_0(X)}A,A⋊ H) is a KK-equivalence. The proposition follows by letting A=C_0(F).

§.§.§ VB-groupoids

Recall from <cit.> that a VB-groupoid is a groupoid which is a vector bundle over a groupoid G. More precisely:

Let G⇉ G^(0) be a groupoid (with range and source maps r_G, s_G). A VB-groupoid over G is a vector bundle p:E→ G with a groupoid structure E⇉ E^(0) (with range and source maps r_E, s_E) such that all the groupoid maps are linear vector bundle morphisms.

This means that E^(0)⊂ E is a vector subbundle of the restriction of E to G^(0) and that r_E, s_E, x↦ x^{-1} and the composition are linear bundle maps. We also assume that the bundle maps r_E:E→ r_G^*(E^(0)) and s_E:E→ s_G^*(E^(0)) are surjective. We will come back to VB-groupoids in the appendix.

§.§ Normal groupoids, deformation groupoids and blowup groupoids

§.§.§ Definitions

Let Γ be a closed Lie subgroupoid of a Lie groupoid G. Using the functoriality (Definition <ref>) of the DNC and Blup constructions, we may construct a normal and a blowup groupoid.
* The normal bundle N_Γ^G carries a Lie groupoid structure with objects N_{Γ^(0)}^{G^(0)}. We denote by 𝒩_Γ^G⇉ N_{Γ^(0)}^{G^(0)} this groupoid. The projection 𝒩_Γ^G→Γ is a groupoid morphism and it follows that 𝒩_Γ^G is a VB-groupoid over Γ.
* The manifold DNC(G,Γ) is naturally a Lie groupoid (unlike what was asserted in remark 3.19 of <cit.>).
Its unit space is (G^(0),Γ ^(0)); its source and range maps are (s) and (r); the space of composable arrows identifies with (G^(2),Γ ^(2)) and its product with (m) where m denotes both productsG^(2)→ G and is Γ^(2)→Γ. [G, 17](G,Γ )⇉(G^(0)_2,G^(0)_1)The deformation groupoid where Γ is a closed Lie subgroupoid of a Lie groupoid G *The subset (G,Γ )=U_r∩ U_s of (G,Γ ) consisting of elements whose image by (r) and (s) is not in G^(0)_1× is an open subgroupoid of (G,Γ ): it is the restriction of (G,Γ ) to the open subspace (G^(0),G^(0)_1)∖ G^(0)_1×. *The group ^* acts on (G,Γ ) via the gauge action by groupoid morphisms. Its action on (G,Γ ) is (locally) proper. Therefore the open subset _r,s(G,Γ )=(G,Γ )/^* of (G,Γ )inherits a groupoid structure as well: its space of units is (G^(0)_2,G^(0)_1); its source and range maps are (s) and (r) and the product is (m). [G, 18]_r,s(G,Γ )⇉(G^(0),Γ ^(0))The blowup groupoid _r(G,Γ ) ∩_s(G,Γ ) where Γ is a closed Lie subgroupoid of a Lie groupoid G * In the same way, we define the groupoid _r,s(G,Γ ). It is the quotient of the restriction _+(G,Γ ) of (G,Γ ) to _+ by the action of _+^*. Similarly _r,s(G,Γ ) will be the quotient of (G,Γ ) by the action of _+^*. This is the “double” of the Lie groupoid with boundary _r,s(G,Γ ).[G, 19]_r,s(G,Γ ) The spherical version of _r,s(G,Γ )[G, 20](G,Γ )The open subgroupoid of (G,Γ ) of (G,Γ ) consisting of elements whose image by (r) and (s) is not in G^(0)_1×[G, 20]_+(G,Γ )The restriction of (G,Γ ) to _+An analogous result about the groupoid structure on_r,s(G,Γ )in the case ofΓ ^(0)being a hypersurface ofG^(0)can be found in <cit.> ( also <cit.>).§.§.§ Algebroid and anchor The (total space of the) Lie algebroidΓis a closed submanifold (and a subbundle) of G. The Lie algebroid of(G,Γ )is( G,Γ ). Its anchor map is(♮_G):( G,Γ )→(TG^(0),TΓ ^(0)).The groupoid(G,Γ )is the union of its open subgroupoidG×^*with its closed Lie sub-groupoid_Γ^G. The algebroid ofG×^*is G×^*and the anchor is just the map♮ _G×𝕀: G×^*→ T(G^(0)×_+^*).§.§.§ Morita equivalence LetG_1⇉ G_1^(0)andG_2⇉ G_2^(0)be Lie groupoids,Γ_1⊂ G_1andΓ_2⊂ G_2Lie subgroupoids. A Morita equivalence of the pair(Γ_1⊂ G_1)with the pair(Γ_2⊂ G_2)is given by a pair(X⊂ Y)whereYis a linking manifold which is a Morita equivalence betweenG_1andG_2andX⊂ Yis a submanifold ofYsuch that the mapsr,sand products ofY(see page Moreq1) restrict to a Morita equivalenceXbetweenΓ_1andΓ_2.Then, by functoriality,* (Y,X) is a Morita equivalence between (G_1,Γ_1) and (G_2,Γ_2), * _+(Y,X) is a Morita equivalence between _+(G_1,Γ_1) and _+(G_2,Γ_2),* _r,s(Y,X) is a Morita equivalence between _r,s(G_1,Γ_1) and _r,s(G_2,Γ_2), * _r,s(Y,X) is a Morita equivalence between _r,s(G_1,Γ_1) and _r,s(G_2,Γ_2)... Note that ifYandXare sub-Morita equivalences, the above linking spaces are also sub-Morita equivalences. §.§.§ Groupoids on manifolds with boundaryLetMbe a manifold andVan hypersurface inMand suppose thatVcutsMinto two manifolds with boundaryM=M_- ∪ M_+withV=M_- ∩ M_+. Then by considering a tubular neighborhood ofVinM,(M,V)=M×^* ∪_V^M×{0}identifies withM×, the quotient(M,V)/_+^*identifies with two copies ofMand(M,V)identifies with the disjoint unionM_- ⊔ M_+. Under this last identification, the class under the gauge action of a normal vector in_V^M∖ V×{0}pointing in the direction ofM_+is an element ofV⊂ M_+.LetM_bbe manifold with boundaryV. A piece ofLie groupoid is the restrictionG=G_M_b^M_btoM_bof a Lie groupoidG⇉ MwhereMis a neighborhood ofM_bandGis a groupoid without boundary. 
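For instance, with M̃=ℝ^n, M_b=ℝ_+×ℝ^{n-1} and G̃=M̃×M̃ the pair groupoid, the piece G=M_b× M_b is the pair groupoid of the half-space; similarly, the holonomy groupoid of a foliation of M̃ restricts to a piece of Lie groupoid over M_b.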
Note that when the boundaryVis transverse to the groupoidG,Gis in fact a manifold with corners. With the above notation, sinceVis of codimention1inM,(M,V)=M_b⊔ M_-whereM_-= M∖is the complement inMof the interior=M_b∖ VofM_binM. Let thenΓ⇉ Vbe a Lie subgroupoid ofG.We may construct_r,s(G,Γ)and consider its restriction to the open subsetM_bof(M,V). We thus obtain a longitudinally smooth groupoid that will be denoted_r,s(G,Γ).Note that the groupoid_r,s(G,Γ) ⇉ M_bis the restriction toM_bof a Lie groupoid⇉ Mfor whichM_bis saturated. Indeed_r,s(G,Γ)is an open subgroupoid of_r,s(G,Γ)⇉ M_b⊔ M_-which is a piece of the Lie groupoid(G,Γ)/_+^*⇉(M,V)/_+^*≃ M⊔ M. We may then letbe the restriction of( M,V)/_+^*to one of the copies of M.In this way, we may treat by induction a finite number of boundary componentsa groupoid on a manifold with corners. *If M is a manifold with boundary V and G=M× M is the pair groupoid, then _r,s(G,V) is in fact the groupoid associated with the 0 calculus in the sense of Mazzeo (<cit.>),the canonical pseudodifferential calculs associated with _r,s(G,V) is the Mazzeo-Melrose's 0-calculus. Indeed, the sections of the algebroid of _r,s(G,V) are exactly the vector fields of M vanishing at the boundary V,those generating the 0-calculus.*In a recent paper <cit.>, an alternative description of _r,s(G,V) is given under the name ofedge modification for G along the “G-tame manifold" V, thus in particular V is transverse to G. This is essentially the gluing construction described in <ref> below.§.§ Examples of normal groupoids,deformation groupoids and blowup groupoids We examine some particular cases of inclusions of groupoidsG_1⊂ G_2. The various constructions of deformation to the normal cone and blow-up allow us to recover many well known groupoids. As already noted in the introduction, our constructions immediately extend to the case where we restrict to a closed saturated subset of a smooth groupoid, in particular for manifolds with corners.§.§.§ Inclusion F⊂ E of vector spacesLetEbe a real vector space - considered as a group - andFa vector subspace ofE. The inclusion of groupsF→ Egives rise to a groupoid(E,F). Using any supplementary subspace ofFinE, we may identify the groupoid(E,F)withE×⇉. ItsC^*-algebra identifies then withC_0(E^*×).More generally, ifFis a vector-subbundle of a vector bundleEover a manifoldM(considered as a family of groups indexed byM), then the groupoid(E,F)⇉ M×identifies withE×and itsC^*-algebra isC_0(E^*×).Letp_E:E→ Mbe a vector bundle over a manifoldMand letVbe a submanifold ofM. Letp_F:F→ Vbe a subbundle of the restriction ofEtoV. We use a tubular construction and find an open subsetUofMwhich is a vector bundleπ:Q→ V. Usingπ, we may extendFto a subbundleF_Uof the restriction toFonU. Using that, we may identify(E,F)with the open subsetE×^*∪ p_E^-1(U)×ofE×. ItsC^*-algebraidentifies then withC_0(E^*×^*∪ p_E^*^-1(U)×).§.§.§ Inclusion G^(0)⊂ G: adiabatic groupoid The deformation to the normal cone(G,G^(0))is the adiabatic groupoidG_ad(<cit.>), it is obtained by using the deformation to the normal cone construction for the inclusion ofG^(0)as a Lie subgroupoid ofG. The normal bundleN_G^(0)^Gis the total space of the Lie algebroid (G) of G. Note that its groupoid structure coincides with its vector bundle structure. Thus,(G,G^(0))=G×^* ∪(G)×{0}⇉ G^(0)× . The particular case whereGis the pair groupoidM× Mis the original construction of the “tangent groupoid” of Alain Connes (<cit.>). Note that(G^(0),G^(0))=∅=_r,s(G,G^(0)). 
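For G=M× M this is Connes' tangent groupoid:

DNC(M× M,M)=M× M×ℝ^*∪ TM×{0},

with, in a local chart, (x,y,t)→(m,ξ,0) precisely when x→ m, y→ m, t→ 0 and (x-y)/t→ξ.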
§.§.§ Gauge adiabatic groupoid Start with a Lie groupoidG⇉ V.LetG× (×)r̃,s̃⇉ V×be the product groupoid ofGwith the pair groupoid over. First notice that since V×{0} is a codimension 1 submanifold in V×, (V×,V×{0}) is canonically isomorphic to V× (_-⊔_+). Then_r̃,s̃(G× (×),V×{(0,0)})_V×_+^V×_+is the semi-direct product groupoidG_ad(V×_+)⋊_+^*:_r̃,s̃(G× (×),V×{(0,0)})_V×_+^V×_+= G_ad(V×_+)⋊^*⇉ V×_+. In other words,_r̃,s̃(G× (×),V×{(0,0)})_V×_+^V×_+is the gauge adiabatic groupoid used in <cit.>. Indeed, asG× (×)is a vector bundle overG,(G× (×),V×{(0,0)})≃(G,V)×^2(remark <ref>.<ref>). Under this identification, the gauge action of^*is given byλ.(w,t,t')=(λ.w,λ^-1t,λ^-1t'). The maps(s̃)and(r̃)are respectively(w,t,t')↦ ((s)(w),t')and(w,t,t')↦ ((r)(w),t).It follows that_r̃,s̃(G× (×),V×{(0,0)})is the quotient by the diagonal action of_+^*of the open subset(G,V)× (^*)^2of_+(G,V)×^2. According to the description of the groupoid of a group action on a groupoid given in section <ref> it is isomorphic to(G,V)_+⋊_+^* ×{-1,+1}^2where{-1,+1}^2is the pair groupoid over{-1,+1}. §.§.§ Inclusion of a transverse submanifold of the unit space LetGbe a Lie groupoid with set of objectsM=G^(0) and letVbe a submanifold ofM. We now study the special case of normal and blowup groupoids(G,V)and_r,s(G,V)(as well as_r,s(G,V)) associated to the groupoid morphismV→ G. Put=M∖ V.LetN=N_V^GandN'=N_V^Mbe the normal bundles. We identifyN'with a subbundle ofNby means of the inclusionM⊂ G. The submersionsr,s:G→ Mgive rise to bundle morphismsr^N,s^N:N→ N'that are sections of the inclusionN'→ N. By construction, using remark <ref>.a), the groupoid(G,V)is the union ofG×^*with the family of linear groupoids_r^N,s^N(N). It follows that_r,s(G,V)is the union ofG_^with the family( N,r^N,s^N)of projective groupoids.IfVis transverse toG, the bundle mapr^N-s^N:N=N_V^G→ N'=N_V^Mis surjective; it follows that* _r^N,s^N(N) identifies with the pull-back groupoid ((G_V^V))_q^q where q:N'→ V is the projection,* ( N,r^N,s^N)with the pull-back groupoid ((G_V^V)⋊^*)_ρ^ρ where ρ: (N')→ V is the projection,* ( N,r^N,s^N)with the pull-back groupoid ((G_V^V)⋊_+^*)_p^p where p: (N')→ V is the projection.Let us give a local description of these groupoids in the neighborhood of the transverse submanifoldV. PutG=G_M∖ V^M∖ V. Upon arguing locally, we can assume thatVis compact. By Remark <ref>,Vadmits a tubular neighborhoodW≃ N_V^Msuch thatG_W^Wis the pull back ofG_V^Vby the retractionq:W→ V.The normal groupoid(G_W^W,V)identifies with the pull back groupoid((G_V^V,V))_q^qof the adiabatic deformation(G_V^V,V)=(G_V^V)_adby the mapq:N_V^M→ V. The (spherical) blowup groupoid_r,s(G_W^W,V)identifies with the pull back groupoid(_+(G_V^V,V)⋊_+^*)_p^pof the gauge adiabatic deformation_+(G_V^V,V)⋊_+^*=(G_V^V)_gaby the mapp: N_V^M→ V.In order to get_r,s(G,V), we then may glue(_+(G_V^V,V)⋊_+^*)_p^pwithGin their common open subset((G_V^V)_q^q)_W∖ V^W∖ V≃ G_W∖ V^W∖ V.§.§.§ Inclusion G_V^V⊂ G for a transverse hypersurface V of G: b-groupoid IfVis a hypersurface ofM, the blowup(M× M,V× V)is just the construction of Melrose of theb-space. Its open subspace_r,s(M× M,V× V)is the associated groupoid of Monthubert <cit.>. Moreover, ifGis a groupoid onMandVis transverse toGwe can form the restriction groupoidG_V^V⊂ Gwhich is a submanifold ofG. The corresponding blow up construction_r,s(G,G_V^V)identifies with the fibered product_r,s(M× M,V× V)× _M× MG( remark <ref>.<ref>). 
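In the local model M=ℝ, V={0}, the restriction of Blup_{r,s}(ℝ×ℝ,{(0,0)}) to ℝ_+ is the action groupoid ℝ_+⋊ℝ_+^* of the dilation action; its algebroid is generated by the vector field t∂/∂ t, i.e. by the vector fields on ℝ_+ tangent to the boundary, which are exactly the b-vector fields generating the b-calculus.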
Iterating (at least locally) this construction, we obtain theb-groupoid of Monthubert for manifolds with corners - <cit.>. The groupoid _r,s(G,V) corresponds to inflating all the distances when getting close to V. The groupoid _r,s(G,G_V^V) is a kind of cylindric deformation groupoid which is obtained bypushing the boundary V at infinity but keeping the distances along V constant.Intermediate examples between these two are given by a subgroupoid Γ⇉ V of G_V^V. In the case where G=M× M, such a groupoid Γ is nothing else than the holonomy groupoid Hol(V,) of a regular foliationof V (with trivial holonomy groups). The groupoid _r,s(M× M,Hol(V,)) is a holonomy groupoid of a singular foliation of M: the sections of its algebroid. Its leaves are M∖ V and the leaves of (V,). The corresponding calculus, when M is a manifold with a boundary V is Rochon's generalization (<cit.>) of the ϕ calculus of Mazzeo and Melrose (<cit.>). Iterating (at least locally) this construction, we obtain the holonomy groupoid associated to a stratified space in <cit.>. §.§.§ Inclusion G_V^V⊂ G for a saturated submanifold V of G: shriek map for immersion Suppose now thatV is saturated thus G_V^V=G_V=G^V. In such a situation the groupoid G_V^V acts on the normal bundle N_G_V^V^G=r^*(N_V^G^(0)) and (G,G_V^V)⇉(G^(0),V) coincides with the normal groupoid of the immersionφ: G_V^V → G. This construction was defined in the case of foliation groupoids in <cit.> and was used in order to define φ_! as associated KK-element.§.§.§ Inclusion G_1⊂ G_2 with G_1^(0)=G_2^(0) This is the case for the tangent and adiabatic groupoid discussed above. Two other kinds of this situation[Note that in this case (G_2^(0),G_1^(0))=∅, whence _r,s(G_2,G_1)=∅.] can be encountered in the literature:*The case of a subfoliation _1 of a foliation _2 on a manifold M: shriek map for submersion. As pointed out in remark 3.19 of <cit.> the corresponding deformation groupoid (G_2,G_1) gives an alternative construction of the element φ_! where φ:M/_1→ M/_2 is a submersion of leaf spaces. *The case of a subgroup of a Lie group.*If K is a maximal compact subgroup of a reductive Lie group G, the connecting map associated to the exact sequence of (G,K) is the Dirac extension mapping the twisted K-theory of K to the K-theory of C^*_r(G) (see <cit.>).*In the case where Γ is a dense (non amenable) countable subgroup of a compact Lie group K, the groupoid (K,Γ) was used in <cit.> in order to produce a Hausdorff groupoid for which the Baum-Connes map is not injective. §.§.§ Wrong way functoriality Letf:G_1→ G_2be a morphism of Lie groupoids. Iffis an (injective) immersion the construction of_+(G_2,G_1)gives rise to a short exact sequence0⟶ C^*(G_2×_+^*)⟶ C^*(_+(G_2,G_1))⟶ C^*(_G_1^G_2)⟶ 0.and consequently to aconnecting map from theK-theory of theC^*-algebra of the groupoid_G_1^G_2, which is aoverG_1, to theK-theory ofC^*(G_2). This wrong way functoriality map will be discussed in the next section. More generally let=G_1^(0)× G_2× G_1^(0)be the product ofG_2by the pair groupoid ofG_1^(0). Assume that the mapx↦ (r(x),f(x),s(x))is an immersion fromG_1→. The above construction gives a map fromK_*(C^*(_G_1^))toK_*(C^*())which is isomorphic toK_*(C^*(G_2))since the groupoidsG_2andare canonically equivalent. §LetG⇉ Mbe a Lie groupoid andΓ⇉ Va Lie subgroupoid ofG. 
The groupoids DNC_+(G,Γ) and SBlup_{r,s}(G,Γ) that we constructed admit the closed saturated subsets N_V^M×{0} and 𝕊N_V^M respectively. In order to shorten the notation we put M̊=M∖ V, and we define the corresponding full symbol algebras:
* Σ_{DNC_+}(G,Γ)=Σ^{M×ℝ_+^*}(DNC_+(G,Γ));
* Σ_{DNC̊_+}(G,Γ)=Σ^{M̊×ℝ_+^*}(DNC̊_+(G,Γ));
* Σ_{Blup}(G,Γ)=Σ^{M̊}(SBlup_{r,s}(G,Γ)).

They give rise to the exact sequences of groupoid C^*-algebras

0⟶ C^*(G_M̊^M̊)⟶ C^*(SBlup_{r,s}(G,Γ))⟶ C^*(𝒩̊_Γ^G/ℝ_+^*)⟶ 0   (E_∂^Blup)

and

0⟶ C^*(G×ℝ_+^*)⟶ C^*(DNC_+(G,Γ))⟶ C^*(𝒩_Γ^G)⟶ 0   (E_∂^DNC_+)

as well as index type ones

0⟶ C^*(G_M̊^M̊)⟶Ψ^*(SBlup_{r,s}(G,Γ))⟶Σ_{Blup}(G,Γ)⟶ 0   (E_ind^Blup)

and

0⟶ C^*(G×ℝ_+^*)⟶Ψ^*(DNC_+(G,Γ))⟶Σ_{DNC_+}(G,Γ)⟶ 0.   (E_ind^DNC_+)

We will compare the exact sequences given by DNC_+ and by SBlup_{r,s}. If V is G-small (see notation <ref> below), we will show that, in a sense, DNC_+ and SBlup_{r,s} give rise to equivalent exact sequences - both for the "connecting" ones and for the "index" ones. We will then compare these elements with a coboundary construction. We will compute these exact sequences when Γ=V⊂ M. Finally, we will study a refinement of these constructions using relative K-theory.

§.§ "DNC" versus "Blup"

Let Γ be a submanifold and a subgroupoid of a Lie groupoid G. We will further assume that the groupoid Γ is amenable. Put M=G^(0) and V=Γ^(0). Put also M̊=M∖ V, and let 𝒩̊_Γ^G be the restriction of the groupoid 𝒩_Γ^G to the open subset N̊_V^M=N_V^M∖ V of its unit space N_V^M.

§.§.§ The connecting element

As the groupoid Γ is amenable, we have exact sequences both for the reduced and for the maximal C^*-algebras:

0⟶ C^*(G_M̊^M̊)⟶ C^*(SBlup_{r,s}(G,Γ))⟶ C^*(𝒩̊_Γ^G/ℝ_+^*)⟶ 0   (E_∂^Blup)

and

0⟶ C^*(G×ℝ_+^*)⟶ C^*(DNC_+(G,Γ))⟶ C^*(𝒩_Γ^G)⟶ 0.   (E_∂^DNC_+)

By amenability, these exact sequences admit completely positive cross sections and therefore define elements

∂_{Blup}^{G,Γ}=∂_{SBlup_{r,s}(G,Γ)}^{M̊}∈ KK^1(C^*(𝒩̊_Γ^G/ℝ_+^*),C^*(G_M̊^M̊))

and

∂_{DNC_+}^{G,Γ}=∂_{DNC_+(G,Γ)}^{M×ℝ_+^*}∈ KK^1(C^*(𝒩_Γ^G),C^*(G×ℝ_+^*)).

With the notation of section <ref>, write DNC̊(G,Γ) for the restriction of DNC(G,Γ) to the open saturated subset DNC(M,V)∖(V×ℝ) and DNC̊_+(G,Γ) for its restriction to ℝ_+. By section <ref>, we have a diagram where the vertical arrows are KK^1-equivalences and the squares commute in KK-theory:

(E_∂^Blup) 0⟶ C^*(G_M̊^M̊)⟶ C^*(SBlup_{r,s}(G,Γ))⟶ C^*(𝒩̊_Γ^G/ℝ_+^*)⟶ 0
(E_∂^DNC̊_+) 0⟶ C^*(G_M̊^M̊×ℝ_+^*)⟶ C^*(DNC̊_+(G,Γ))⟶ C^*(𝒩̊_Γ^G)⟶ 0

with vertical KK^1-equivalences β', β and β″ from the first row to the second. Denote by ∂_{DNC̊_+}^{G,Γ}=∂_{DNC̊_+(G,Γ)}^{M̊×ℝ_+^*} the connecting element associated to E_∂^DNC̊_+. We thus have, according to proposition <ref>:

∂_{Blup}^{G,Γ}⊗β'=-β″⊗∂_{DNC̊_+}^{G,Γ}∈ KK(C^*(𝒩̊_Γ^G/ℝ_+^*),C^*(G_M̊^M̊×ℝ_+^*)).

We also have a commutative diagram where the vertical maps are inclusions:

0⟶ C^*(G_M̊^M̊×ℝ_+^*)⟶ C^*(DNC̊_+(G,Γ))⟶ C^*(𝒩̊_Γ^G)⟶ 0
0⟶ C^*(G×ℝ_+^*)⟶ C^*(DNC_+(G,Γ))⟶ C^*(𝒩_Γ^G)⟶ 0

with vertical inclusions j', j and j″. We thus find

(j″)^*(∂_{DNC_+}^{G,Γ})=j'_*(∂_{DNC̊_+}^{G,Γ})∈ KK^1(C^*(𝒩̊_Γ^G),C^*(G×ℝ_+^*)).

§.§.§ The full symbol index

We now compare the elements

ind_{Blup}^{G,Γ}=ind_full^{M̊}(SBlup_{r,s}(G,Γ))∈ KK^1(Σ_{Blup}(G,Γ),C^*(G_M̊^M̊))

and

ind_{DNC_+}^{G,Γ}=ind_full^{M×ℝ_+^*}(DNC_+(G,Γ))∈ KK^1(Σ_{DNC_+}(G,Γ),C^*(G×ℝ_+^*))

defined by the semi-split exact sequences:

0⟶ C^*(G_M̊^M̊)⟶Ψ^*(SBlup_{r,s}(G,Γ))⟶Σ_{Blup}(G,Γ)⟶ 0   (E_ind^Blup)

and

0⟶ C^*(G×ℝ_+^*)⟶Ψ^*(DNC_+(G,Γ))⟶Σ_{DNC_+}(G,Γ)⟶ 0.   (E_ind^DNC_+)

By prop. <ref>, we have a diagram where the vertical arrows are KK^1-equivalences and the squares commute in KK-theory:

0⟶ C^*(G_M̊^M̊)⟶Ψ^*(SBlup_{r,s}(G,Γ))⟶Σ_{Blup}(G,Γ)⟶ 0
0⟶ C^*(G_M̊^M̊×ℝ_+^*)⟶Ψ^*(DNC̊_+(G,Γ))⟶Σ_{DNC̊_+}(G,Γ)⟶ 0

with vertical KK^1-equivalences β', β_Ψ and β_Σ from the first row to the second. We let

ind_{DNC̊_+}^{G,Γ}=ind_full^{M̊×ℝ_+^*}(DNC̊_+(G,Γ))∈ KK^1(Σ_{DNC̊_+}(G,Γ),C^*(G_M̊^M̊×ℝ_+^*)).

We thus have:

ind_{Blup}^{G,Γ}⊗β'=-β_Σ⊗ ind_{DNC̊_+}^{G,Γ}∈ KK(Σ_{Blup}(G,Γ),C^*(G_M̊^M̊×ℝ_+^*)).
We also have a commutative diagramwhere the vertical maps are inclusions:0[r]C^*(G_^×_+^*)[r][d]^j' Ψ^*(_+(G,Γ))[r][d]^j_Ψ Σ__+(G,Γ)[r][d]^j_Σ 00[r]C^*(G×_+^*)[r]Ψ^*(_+(G,Γ))[r]Σ__+(G,Γ)[r] 0 We thus find: j_Σ^*(__+^G,Γ)=j'_*(__+^G,Γ)∈ KK^1(Σ__+(G,Γ),C^*(G×_+^*)). §.§.§ When V is G-small IfVis small in eachGorbit,if the Lebesgue measure (in the manifoldG^x) ofG_V^xis0for everyx, it follows fromprop. <ref> belowthat the inclusioni: C^*(G_^)↪ C^*(G)is an isomorphism. Also, ifmeets all the orbits ofG, the inclusioniis a Morita equivalence. In these cases∂__+^G,Γdetermines∂_^G,Γ. We will say that V is G-small if for every x∈ V, the composition G_x♮_x⟶ T_xM⟶ (N_V^M)_x is not the zero map IfVis G-small, then the orbits of the groupoid_Γ^Gare never contained in the0section,they meet the open subset_V^M, and in fact the setV×{0}is small in every orbit of the groupoid(G,Γ). It follows that the mapjis an isomorphism - as well of course asj'andj”of diagram (<ref>). In that case,∂__+^G,Γand∂_^G,Γcorrespond to each other under these isomorphisms.(<cit.>) Let ⇉ Y be a Lie groupoid and let X⊂ Y be a (locally closed) submanifold. Assume that, for every x∈ X, the composition _x♮_x⟶ T_xY⟶ (N_X^Y)_x is not the zero map. Then the inclusion C^*(G_Y∖ X^Y∖ X)→ C^*(G) is an isomorphism.For every x∈ V, we can find a neighborhood U of x∈ M, a section X of G such that, for every y∈ U, ♮_y (X(y)) 0 and, if y∈ U∩ V, ♮_y (X(y))∉T_y(V). Denote bythe foliation of U associated with the vector field X.It follows from <cit.> that C_0(U∖ V)C^*(U,)=C^*(U,); as C^*(U,) acts in a non degenerate way on the Hilbert-C^*(G) module C^*(G^U), we deduce that C_0(U∖ V)C^*(G^U)=C^*(G_U). We conclude using a partition of the identity argument that C_c(M∖ V)C^*(G)=C_c(M)C^*(G), whence C_0(M∖ V)C^*(G)=C_0(M)C^*(G)=C^*(G).We assume that Γ is amenable and that V is G-small.Then, the inclusions j_Σ:Σ__+(G,Γ)→Σ__+(G,Γ), j_Ψ: Ψ^*(_+(G,Γ))→Ψ^*(_+(G,Γ)) and j_symb:C_0(^*(_+(G,Γ)))→ C_0(^*(_+(G,Γ))) are KK-equivalences. We havea diagram0[d]0[r]C^*(_+(G,Γ))[r][d]^j Ψ^*(_+(G,Γ))[r][d]^j_ΨC_0(^*(_+(G,Γ)) [r][d]^j_symb 00[r]C^*(_+(G,Γ))[r]Ψ^*(_+(G,Γ))[r]C_0(^*(_+(G,Γ))[r][d] 0C_0(^* G_|V×_+)[d]0 As j is an equality, we find an exact sequence 0⟶Ψ^*(_+(G,Γ))j_Ψ⟶Ψ^*(_+(G,Γ))⟶ C_0(^* G_|V×_+)⟶0. As j': C^*(×_+^*)→ C^*(G×_+^*) is also an equality, we find (using diagram (<ref>)) an exact sequence 0⟶Σ__+(G,Γ))j_Σ⟶Σ__+(G,V)⟶ C_0(^* G_|V×_+)⟶0.As the algebra C_0(^* G_|V×_+) is contractible, we deduce that j_symb and then j_Ψ and j_Σ are KK-equivalences.As a summary of these considerations, we find: Let G⇉ M be a Lie groupoid and Γ⇉ V a Lie subgroupoid of G.Assume that Γ is amenable and put =M∖ V. Let i:C^*(G_^)→ C^*(G) be the inclusion. Put β̂”=j”_*(β”)∈ KK^1(C^*( N_Γ^G),C^*(_Γ^G)) and β̂_Σ=(j_Σ)_*(β_Σ)∈ KK^1(Σ_(G,Γ),Σ__+(G,V)).*We have equalities* ∂_^G,Γ⊗[i]=β̂”⊗∂ __+^G,Γ∈ KK^1(C^*( N_Γ^G),C^*(G)) and* _^G,Γ⊗ [i]=β̂_Σ⊗ __+^G,Γ∈ KK^1(C^*(Σ_(G,Γ),C^*(G)) *If V is G-small, then i is an isomorphism and the elements β̂” and β̂_Σ are invertible.□§.§ The KK-element associated with DNC The connecting element∂__+^G,Γcan be expressed in the following way: letbe the restriction of(G,Γ)to[0,1],=_[0,1](G,Γ)=_Γ^G×{0}∪ G× (0,1]. We have a semi-split exact sequence:0→ C^*(G× (0,1])→ C^*()_0⟶ C^*(_Γ^G)→ 0.AsC^*(G× (0,1])is contractible,_0is aKK-equivalence. Letev_1:C^*()→ C^*(G)be evaluation at1and letδ_Γ^G=[ev_0]^-1⊗ [ev_1]∈ KK(C^*(_Γ^G),C^*(G)). Let[Bott]∈ KK^1(, C_0(_+^*))be the Bott element. We find∂__+^G,Γ=δ_Γ^G ⊗ [Bott].Consider now the groupoid_ad^[0,1]. 
It is a family of groupoids indexed by[0,1]× [0,1]: * its restriction to {s}× [0,1] for s 0 is G_ad^[0,1];* its restriction to {0}× [0,1] is (_Γ^G)_ad^[0,1];* its restriction to [0,1]×{s} for s 0 is =_[0,1](G,Γ);* its restriction to [0,1]×{0} is the algebroid =_[0,1]( G,Γ) of .For every locally closed subsetX⊂ [0,1]× [0,1], denote by_ad^Xthe restriction of_ad^[0,1]toX.For every closed subsetX⊂ [0,1]× [0,1], denote byq_X:C^*(_ad^[0,1])→ C^*(^X_ad)the restriction map. We thus have the following commutative diagram:C^*(_Γ^G) @/^1pc/@.>[rr]^δ_Γ^G _[0,1](G,Γ) [r][l]C^*(G) C^*((_Γ^G)_ad^[0,1]) [d][u]C^*(_ad^[0,1])[ld]^q_(0,0)[lu]_q_(0,1)[ru]^q_(1,1)[rd]^q_(1,0)[r]^q_{1}× [0,1][d]^q_[0,1]×{0}[l]_q_{0}× [0,1][u]_q_[0,1]×{1}C^*(G_ad^[0,1]) [u] [d] C^*(N_Γ^ G) @/^3pc/@.>[uu]^__Γ^G@/_1pc/@.>[rr]_δ_Γ^ G C^*(_[0,1]( G,Γ))[l] [r] C_0(^* G) @/_3 pc/@.>[uu]__GFor every locally closed subsetT⊂ [0,1], theC^*-algebrasC^*(_ad^(0,1]× T)andC^*(_ad^T× (0,1])are null homotopic as well asC^*(_ad^[0,1]^2∖{0,0)}). It follows thatq_{0}× [0,1],q_[0,1]×{0}andq_{(0,0)}areKK-equivalences.Now[q_(0,0)]^-1⊗ [q_(0,1)]=__Γ^Gand it follows that[q_(0,0)]^-1⊗ [q_(1,1)]=__Γ^G⊗δ_Γ^G.In the same way,[q_(0,0)]^-1⊗ [q_(1,0)]=δ_Γ^ Gand it follows that[q_(0,0)]^-1⊗ [q_(1,1)]= δ_Γ^ G⊗_G.Finally, it follows from example <ref> thatδ_Γ^ Gis associated with a morphismφ:C_0(^*(_V^G))↪ C_0(^* G)corresponding to an inclusion of^*(_Γ^G)in^* Gas a tubular neighborhood.We thus have established:__Γ^G⊗δ_Γ^G=[φ]⊗_G.§.§ The case of a submanifold of the space of unitsLetGbe a Lie groupoid with objectsMand letΓ=V⊂ Mbe a closed submanifold ofM. In this section, we push further the computations the connecting maps and indicesthe connecting maps of the exact sequencesE^∂_, E^∂__+, E^_andE^__+.§.§.§ Connecting map and index map From propositions <ref>, <ref>, <ref> and fact <ref>, we find *The index element __V^G∈ KK(C_0(^* N_V^G),C^*(_V^G)) is invertible.*The inclusion j:Σ_N_V^M×{0}(_+(G,V))↪Σ__+(G,V) is invertible in KK-theory.*The C^*-algebra Σ__+(G,V) is naturally KK^1-equivalent with the mapping cone of the map χ:C_0(^*G×_+^*)→ C_0(_+(M,V)) defined by χ(f)(x)=f(x,0) ifx∈ M×_+^* 0 ifx∈ N_V^M. * The connecting element ∂__+^G,V∈ KK^1(C^*(_V^G),C^*(G×_+^*))=KK(C^*(_V^G),C^*(G)) isδ_V^G=__V^G^-1⊗ [φ]⊗_G where φ:C_0(^* N_V^G)→ C_0(^* G) is the inclusion using the tubular neighborhood construction.* Under the KK^1 equivalence of c), the full index element__+^G,V∈ KK^1(Σ__+(G,V),C^*(G×_+^*))=KK^1(_χ,C^*(G)) is q^*([]⊗_G) where q:_χ→ C_0(^*G×_+^*) is evaluation at 0.□The element[χ]∈ KK(C_0(^*G×_+^*),C_0(_+(M,V)))is the Kasparov product of the “Euler element" of the bundle^*Gwhich is the class inKK(C_0(^*G),C_0(M))=KK(C_0(^*G×_+^*),C_0(M×_+^*))of the mapx↦ (x,0)with the inclusionC_0(M×_+^*)→ C_0(_+(M,V)) . It follows that[χ]is often the zero element ofKK(C_0(^*G×_+^*),C_0(_+(M,V))). In particular, this is the case when the Euler class of the bundle^*Gvanishes. In that case, the algebraΣ__+(G,V)isKK-equivalent toC_0(^*G)⊕ C_0(_+(M,V)). IfVis Gsmall, then, by theorem <ref>,∂_^G,Vand _^G,Vare immediately deduced from proposition <ref>. Let M_b be a manifold with boundary andV=∂ M_b. Put =M_b∖ V. Let G be apiece ofLie groupoid on M_b in the sense of section <ref>. Thus G is the restriction of a Lie groupoid G⇉ M, where M is a neighborhood of M_b. Recall that in this situation, (M,V)=M_b ⊔ M_-, where M=M_b∪ M_- and M∩ M_-=V, and we let _r,s(G,V) ⇉ M_b be the restriction of _r,s(G,V) to M_b. 
Let us denote by _V^G the open subset of N_V^G made of (normal) tangent vectors whose image under the differential of the source and range maps of G are non vanishing elements of N_V^M pointing in the direction of M_b.The groupoid _r,s(G,V)is the union _V^G/_+^* ∪ G_^. We have exact sequences 0→ C^*(G_^)→ C^*(_r,s(G,V))→ C^*(_V^G/_+^*)→ 00→ C^*(G_^)→Ψ^*(_r,s(G,V))→Σ_(G,V)→ 0.As V is of codimension 1, we find that V is G-small if and only if it is transverse to G. In that case, Proposition <ref> computes the KK-theory of C^*(_V^G/_+^*) and of Σ_(G,V) and the KK-class of the connecting maps of these exact sequences.In particular, we obtain a six term exact sequence K_0(C(M_b))[r]K_0(Σ_(G,V))[r] K_1(C_0(^*G_^))[d]^χ K_0(C_0(^*G_^))[u]^χ K_1(Σ_(G,V))[l] K_0(C(M_b))[l] and the index map K_*(Σ_(G,V))→ K_*+1(G_^) is the composition of K_*(Σ_(G,V))→ K_*+1(C_0(^*G_^)) with the index map of the groupoid G_^.This holds, in particular, if G=M_b× M_b since the boundary V=∂ M_b is transverse to G= M×M. Note that in that case, χ=0 (in KK(C_0(T^*),C_0(M_b))) so that we obtain a (noncanonically) split short exact sequence: 0[r]K_*(C_0(M_b))[r]K_*(Σ_(G,V))[r] K_*+1(C_0(^*G_^))[r] 0.§.§.§ The index map via relative K-theoryIt follows now from prop. <ref>:Let ψ_:C_0(_+(M,V))→Ψ^*(_+(G,V)) be the inclusion map which associates to a (smooth) function f the order 0 (pseudo)differential operator multiplication by f and σ_full:Ψ^*(_+(G,V))→Σ__+(G,V) the full symbol map. Put μ_=σ_full∘ψ_. Then the relative K-group K_*(μ_)is naturally isomorphic to K_*+1(C_0(^* G)). Under this isomorphism, _rel:K_*(μ_)→ K_*(C^*(G×_+^*))=K_*+1(C^*(G)) identifies with_G. □Let us say also just a few words on the relative index map for_r,s(G,V),for the mapμ_:C_0(_+(M,V))→Σ_(G,V)which is the composition of the inclusionψ_:C_0( (M,V)→Ψ^*(_r,s(G,V))with the full index mapσ_full: Ψ^*(_r,s(G,V))→Σ_(G,V)), and the corresponding relative index map_rel:K_*(μ_)→ K_*(C^*()). Equivalently we wish to compute the relativeindex map_rel:K_*(μ_)→ K_*+1(C^*()), whereμ_:C_0(_+(M,V))→Σ__+(G,V). We restrict to the case whenVis Gsmall. We have a diagram0[r] C_0(_+(M,V))[r]C_0(_+(M,V))[r]C_0(V×_+)[r] 0and it follows that the inclusionC_0(_+(M,V))→ C_0(_+(M,V))is aKK-equivalence. Since the inclusionsΨ^*(_+(G,V))→Ψ^*(_+(G,V))andΣ__+(G,V)→Σ__+(G,V)are alsoKK-equivalences (prop. <ref>), it follows that the inclusion_μ_→_μ_is aKK-equivalence - and therefore the relativeK-groupsK_*(μ_)andK_*(μ_)are naturally isomorphic. Using this, together with the Connes-Thom isomorphism, we deduce: We assume that V is G small * The relative K-group K_*(μ_)is naturally isomorphic to K_*+1(C_0(^* G)). Under this isomorphism, _rel:K_*(μ_)→ K_*(C^*(G×_+^*))=K_*+1(C^*(G)) identifies with_G. * The relative K-group K_*(μ_)is naturally isomorphic to K_*(C_0(^* G)). Under this isomorphism, _rel:K_*(μ_)→ K_*(C^*(G)) identifies with_G. □§ A BOUTET DE MONVEL TYPE CALCULUS From now on, we suppose that V is a transverse submanifold ofMwith respect to the Lie groupoid G⇉ M. In particularVis G-small- of course, we assume that (in every connected component ofV), the dimension ofVis strictly smaller than the dimension ofM. §.§ Thebimodule AsVis transverse toG, the groupoidG_V^Vis a Lie groupoid, so that we can construct its “gauge adiabatic groupoid”(G_V^V)_ga(see section <ref>).In <cit.>, we constructed a bi-module relating theC^*-algebra of the groupoid(G_V^V)_gaand theC^*-algebra of pseudodifferential operators ofG_V^V. 
In this section,* We first show that the groupoid (G_V^V)_ga, is (sub-) Morita equivalent to _r,s(G,V) ( also section <ref> for a local construction).*Composing the resulting bimodules, we obtain the “Poisson-trace” bimodule relating C^*(_r,s(G,V)) and Ψ^*(G_V^V).§.§.§ The _r,s(G,V)-(G_V^V)_ga-bimodule (G,V)Define the mapj:M⊔ (V×)→ Mby lettingj_0:M→ Mbe the identity andj_1:V×→ Mthe composition of the projectionV×→ Vwith the inclusion. Let=G_j^j. AsVis assumed to be transverse, the mapjis also transverse, and thereforeis a Lie groupoid.It is the union of four clopen subsets *the groupoids G_j_0^j_0=G=_M^M and G_j_1^j_1=G_V^V× (×)=_V×^V×.*the linking spacesG_j_1^j_0=_V×^M=G_V× and G_j_0^j_1=^V×_M=G^V×. By functoriality, we obtain a sub-Morita equivalence of_r,s(G_V^V××,V)and_r,s(G,V)(see section <ref>).Let us describe this sub-Morita equivalence in a slightly different way:Let alsoΓ =V×{0,1}^2, sitting in: [V×{(0,0)}⊂ G=G_j_0^j_0 ;V×{(0,1)}⊂ G_V×{ 0}⊂ G_j_1^j_0 ;;V×{(1,0)}⊂ G^V×{ 0}⊂ G^j_1_j_0 ; V×{(1,1)}⊂ G_V^V×{(0,0)}⊂ G_j_1^j_1 . ] It is a subgroupoid of. The blowup construction applied toΓ⊂gives then a groupoid_r,s(,Γ)which is the union of:[_r,s(G,V) ; _r,s(G_V×,V) ;; _r,s(G^V×,V) ; _r,s(G_V^V××,V). ]Recall that (V×,V×{0})≃ V× (_-⊔_+). Thus_r,s(,Γ)is a groupoid with objects(M,V)⊔ V×_-⊔ V×_+. The restriction of_r,s(,Γ)toV×_+coincides with the restriction of_r,s(G_V^V××,V)toV×_+: it is the gauge adiabatic(G_V^V)_gagroupoid ofG_V^V( section <ref>). Put_r,s(G_V×,V)_+=_r,s(,Γ)_V×_+^(M,V). It is a linking space between the groupoids_r,s(G,V)and(G_V^V)_ga. Put also_r,s(G^V×,V)_+=_r,s(,Γ)^V×_+_(M,V). With the notation used in fact <ref>, we define theC^*(_r,s(G,V))-C^*((G_V^V)_ga)-bimodule(G,V)to beC^*(_r,s(G_V×,V)_+).It is the closure ofC_c(_r,s(G_V×,V)_+)inC^*(_r,s(,Γ)). It is a full Hilbert-C^*(_r,s(G,V))-C^*((G_V^V)_ga)-module.The Hilbert-C^*((G_V^V)_ga)-module(G,V)is full and((G,V))is the idealC^*(_r,s(G_Ω^Ω,V))whereΩ=r(G_V)is the union of orbits which meetV. Notice thatΩ=M∖ V⊔ V×^*andF= N_V^M ⊔ V⊔ Vgives a partition by respectively open and closed satured subsets of the units of_r,s(,Γ). Furthermore_r,s(,Γ)_Ω^Ω=_Ω^ΩandC^*(_Ω^Ω)=C^*()according to proposition <ref>. This decomposition gives rise to an exact sequence of C^*-algebras.0[r]C^*()[r]C^*(_r,s(,Γ))[r] C^*( N_Γ^)[r]0This exact sequence gives rise to an exact sequence of bimodules:0[r]C^*(G)[r]@-[d]^(G,V) C^*(_r,s(G,V))[r]@-[d]^(G,V)C^*( N_V^G)[r]@-[d]^^∂(G,Γ) 0 0[r]C^*(G_V×_+^*^V×_+^*)[r] C^*((G_V^V)_ga)[r]C^*( G_V^V⋊_+^*)[r] 0where(G,V)=C^*(^M∖ V_V×_+^*)and^∂(G,Γ)=C^*(( N_Γ^)^ N_V^M_V)=(G,V)/(G,V). §.§.§ Thebimodule In <cit.>, we constructed, for every Lie groupoidHaC^*(H_ga)-Ψ^*(H)-bimodule_H. Recall that the HilbertΨ^*(H)-module_His full and that( _H)⊂ C^*(H_ga)is the kernel of a natural*-homomorphismC^*(H_ga)→ C_0(H^(0)×). We also showed that the bimodule _Hgives rise to an exact sequence of bimodule as above:0[r]C^*(H×_+^* ×_+^*)[r]@-[d]^ _H C^*(H_ga)[r]@-[d]^ _HC^*( H ⋊_+^*)[r]@-[d]^ ^∂_H 0 0[r] C^*(H) [r] Ψ^*(H) [r]C_0(^*H) [r] 0Putting together the bimodule (G,V)and _G_V^Vwe obtain aC^*(_r,s(G,V))-Ψ^*(G_V^V)bimodule (G,V)⊗_C^*((G_V^V)_ga) _G_V^Vthat we call thebimodule and denote by(G,V)- or just. It leads to the exact sequence of bimodule:0[r]C^*(G)[r]@-[d]^(G,V) C^*(_r,s(G,V))[r]@-[d]^(G,V)C^*( N_V^G)[r]@-[d]^^∂(G,V) 0 0[r] C^*(G_V^V) [r] Ψ^*(G_V^V) [r]C_0(^*G_V^V) [r] 0Thebimoduleis a full HilbertΨ^*(G_V^V)-module and((G,V))is a two sided ideal ofC^*(_r,s(G,V)). 
Denote by (G,V)^*its dual module,theΨ^*(G_V^V)-C^*(_r,s(G,V))-bimodule((G,V),Ψ^*(G_V^V)).§.§ A algebra TheC^*-algebraC^*_BM(G,V)=(C^*(_r,s(G,V)) ⊕(G,V)^* )is an algebra made of matrices of the form[ K P; T Q ]whereK∈ C^*(_r,s(G,V)), P∈(G,V), T∈(G,V)^*, Q∈Ψ^*(G_V^V).We have an exact sequence (where⊔ V Mdenotes the topological disjoint union ofMwithV):0→ C^*(G_⊔ V^⊔ V)→ C^*_BM(G,V)r_V^C^*⟶→ 0,where the quotientis the algebra of the Boutet de Monvel type boundary symbols. It is the algebra of matrices of the form[ k p; t q ]wherek∈ C^*( N_V^G),q∈ C(^* G_V^V),p,t^*∈(G,V):= (G,V)⊗_Ψ^*(G_V^V)C(^* G_V^V). The mapr_V^C^*is of the formr_V^C^*[ K P; T Q ]=[ r_V^(K) r_V^(P); r_V^(T)σ_V(Q) ]where:*the quotient map σ_V is the ordinary order 0 principal symbol map on the groupoid G_V^V; *the quotient maps r_V^,r_V^,r_V^ are restrictions to the boundary N_V^M: r_V^:C^*(_r,s(G,V))→ C^*( N_V^G)=C^*(_r,s(G,V))/C^*(G^_),r_V^:(G,V)→(G,V)=(G,V)/C^*(G^_V), and r_V^(T)=r_V^(T^*)^*. The mapr_V^C^*is called the zero order symbol map of the Boutet de Monvel type calculus. §.§ Apseudodifferential algebraWe denote byΨ^*_BM(G,V)the algebra of matrices[ Φ P; T Q ]withΦ∈Ψ^*(_r,s(G,V)), P∈(G,V), T∈(G,V)^*andQ∈Ψ^*(G_V^V).Such an operatorR=[ Φ P; T Q ]has two symbols: * the classical symbolσ_c:Ψ^*_BM(G,V)→ C_0(^*_r,s(G,V)) given by σ_c[ Φ P; T Q ]=σ_c(Φ);* the boundary symbolr_V^BM:Ψ^*_BM(G,V)→ defined by r_V[ Φ P; T Q ]=[ r_V^(Φ) r_V^(P); r_V^(T)σ_V(Q) ] where r_V^:Ψ^*(_r,s(G,V))→Ψ^*( N_V^G) is the restriction. Heredenotes the algebra of matrices of the form[ ϕ p; t q ]withϕ∈Ψ^*( N_V^G),p,t^*∈^V(G,V)andq∈ C(^* G_V^V). The full symbol map is the morphismσ_BM:Ψ^*_BM(G,V)→Σ_BM(G,V):=C_0(^*_r,s(G,V))× _C_0(^* N_V^G)defined byσ_BM(R)=(σ_c(R),r_V(R)). We have an exact sequence:0→ C^*(G_⊔ V^⊔ V)→Ψ^*_BM(G,V)σ_BM⟶Σ_BM(G,V) → 0.We may note thatΨ^*(_r,s(G,V))(Ψ^*( N_V^G)) identifies with the full hereditary subalgebra ofΨ^*_BM(G,V)( ofΣ_BM(G,V)) consisting of elements of the form[ x 0; 0 0 ].§.§ K-theory of the symbol algebras and index maps In this section we examine the index map corresponding to the Boutet de Monvel type calculus and in particular to the exact sequence. We compute theK-theory of the symbol algebraΣ_BMand the connecting element_BM∈ KK^1(Σ_BM,C^*(G))([We use the Morita equivalence of C^*(G) with C^*(G_⊔ V^⊔ V).]). We then extend this computation by including bundles into the pictureby computing a relativeK-theory map.§.§.§ K-theory of Σ_BM and computation of the index As the HilbertΨ^*(G_V^V)module(G,V)is full,*the subalgebra {[ K 0; 0 0 ]; K∈ C^*(_r,s(G,V))} is a full hereditary subalgebra of C^*_BM(G,V);*the subalgebra {[ Φ 0; 0 0 ]; Φ∈Ψ^*(_r,s(G,V))} is a full hereditary subalgebra of Ψ^*_BM(G,V); *the subalgebra {[ x 0; 0 0 ]; x∈Σ_(G,V)} is a full hereditary subalgebra of Σ_BM(G,V); *the subalgebra {[ k 0; 0 0 ]; k∈ C^*( N_V^G)} is a full hereditary subalgebra of ;*the subalgebra {[ ϕ 0; 0 0 ]; ϕ∈Ψ^*( N_V^G)} is a full hereditary subalgebra of . We have a diagram of exact sequences where the vertical inclusions are Morita equivalences:0[r]C^*(G_^)[r]@^(->[d] Ψ^*(_r,s(G,V))[r]^σ_full@^(->[d] Σ_(G,V)[r]@^(->[d] 00[r]C^*(G_⊔ V^⊔ V)[r]Ψ^*_BM(G,V)[r]^σ_BM Σ_BM(G,V)[r] 0We thus deduce immediately from theorem <ref> and prop. <ref>: The algebra Σ_BM(G,V)) is KK-equivalent with the mapping cone _χ and, under this K-equivalence,the index _BM is q^*([Bott]⊗__G) where q:_χ→ C_0(^*G×_+^*) is evaluation at 0. 
§.§.§ Index in relative K-theoryOne may also consider more general index problems, which are concerned with generalized boundary value problems in the sense of <cit.>: those are concerned with index of fully elliptic operators of the form R=[ Φ P; T Q ], where we are given hermitian complex vector bundlesE_±over(M,V)andF_±overV, and * Φ is an order 0 pseudodifferential operator of the Lie groupoid _r,s(G,V)from sections of E_+ to sections of E_-;* P is an order 0“Poisson type” operator from sections of F_+ to sections of E_-;* T is an order 0“trace type” operator from sections of E_+ to sections of F_-;* Q is an order 0 pseudodifferential operator of the Lie groupoid G_V^V from sections of F_+ to sections of F_-. In other words, writingE_±as associated with projectionsp_±∈ M_N(C^∞((M,V)))andF_±as associated with projectionsq_±∈ M_N(C^∞(V)), thenR∈ (p_-⊕ q_-)M_N(Ψ^*_BM(G,V))(p_+⊕ q_+). Full ellipticity forRmeans just that the full symbol ofRis invertible,that there is a quasi-inverseR'∈ (p_+⊕ q_+)M_N(Ψ^*_BM(G,V))(p_-⊕ q_-), such that(p_+⊕ q_+)-R'R∈ M_N(C^*(G_⊔ V^⊔ V))and(p_-⊕ q_-)-RR'∈ M_N(C^*(G_⊔ V^⊔ V)).In other words, we wish to compute the morphism_rel:K_*(μ_BM)→ K_*(C^*_BM(G,V))whereμ_BMis thenatural morphismμ_BM:C_0((M,V))⊕ C_0(V)→Σ_BM(G,V). Let us outline here this computation. We start with a remark. Let H⇉V be a Lie groupoid. The bimodule ^∂_H is a Morita equivalence of an ideal C^*( H ⋊_+^*) with C_0(^*H) and therefore defines an elementζ_H∈ KK(C_0(^*H),C^*( H ⋊_+^*)).Let μ_H:C_0(V)→ C_0(^*H) be the inclusion (given by the map ^*H→ V). The composition μ_H^*(ζ_H)is the zero element in KK(C_0(V),C^*( H ⋊_+^*)). Indeed μ_H^*(ζ_H) can be decomposed as * the Morita equivalence C_0(V)∼ C_0((V×_+^*)⋊_+^*),*the inclusion C_0(V×_+^*)⋊_+^* ⊂ C_0(V×_+)⋊_+^*,*the inclusion C_0(V×_+)⋊_+^*→ C_0(^* H) ⋊_+^* corresponding to the map (x,ξ)↦ (x,ξ) from ^* H to V×_+.Now, the Toeplitz algebra C_0(_+)⋊_+^* is K-contractible. From this remark, we immediately deduce: The inclusion C_0(V)→Σ_BM(G,V) is the zero element in KK-theory. □We have a diagramC_0((M,V))⊕ C_0(V)[rr]^μ_⊕μ_V@=[d]Σ_(G,V)⊕ C_0(^*G_V^V)[d]C_0((M,V))⊕ C_0(V)[rr]^ψ_BMΣ_BM(G,V)The mapping cone_μ̌_of the morphismμ̌_: C_0((M,V))⊕ 0→Ψ^*_BM(G,V)is Morita-equivalent to the mapping cone of the morphismμ_:C_0(_+(M,V))→Σ_(G,V))and therefore it isKK-equivalent toC_0(^* G×)by Cor. <ref>. We then deduce: *The relative K-theory ofμ_BM is naturally isomorphic to K_*(^* G)⊕ K_*+1(C_0(V)).*Under this equivalence, the relative index map identifies with_G on K_*(^* G) and the zero map on K_*+1(C_0(V).□§ APPENDIX§.§ A characterization of groupoids via elements composable to a unitLet G be a groupoid. For n∈, we may define the subsets U^n(G)={(x_1,…,x_n)∈ G^n; s(x_i)=r(x_i+1); x_1· x_2… x_n∈ G^(0). We have U^1(G)=G^(0), the sets U^k(G) are invariant under cyclic permutations; moreover, we havenatural maps δ_k:U^n(G)→ U^n+1(G) (1≤ k≤ n) defined by δ_k(x_1,…,x_n)=(x_1,…,x_k,s(x_k),x_k+1,… ,x_n) and boundaries b_k:U^n+1(G)→ U^n(G) defined by b_k(x_1,… ,x_n+1)=(x_1,…,x_k-1,x_kx_k+1,x_k+2,… x_n+1) if k n+1 and b_n+1(x_1,… ,x_n+1)=(x_n+1x_1,x_2,… ,x_n). LetGbe a set. For(x,y,z)∈ G^3, putq_1(x,y,z)=xandq_1,2(x,y,z)=(x,y).Let G be a set and U^1(G)⊂ G, U^3(G) ⊂ G^3 be subsets satisfying the following conditions. *The subset U^3(G) of G^3 is invariant under cyclic permutation. *For all x∈ U^1(G), (x,x,x)∈ U^3(G). *The map q_12:(x,y,z)↦ (x,y) from U^3(G) to G^2 is injective. * Let R={(x,y,z)∈ U^3(G); z∈ U^1(G)}. 
The map q_1:R→ G is a bijection and U^2(G)=q_12(R) is invariant under (cyclic) permutation. * The subset U^4(G)={(x,y,z,t)∈ G^4; ∃ (u,v)∈ U^2(G); (x,y,v)∈ U^3(G) and (u,z,t)∈ U^3(G)} is invariant under cyclic permutation in G^4.Then there is a unique groupoid structure on G such that U^1(G) is its set of units, and U^3(G)={(x,y,z)∈ G^3; (x,y)∈ G^(2), z=(xy)^-1}.Uniqueness is easy:one defines the range and the inverse of x by saying that (x,x^-1,r(x)) is the unique element w in R such that q_1(w)=x; the source is defined by s(x)=r(x^-1); the product for composable elements (x,y) is then defined by the fact (x,y,(xy)^-1)∈ U^3(G). Let us pass to existence. *By condition (<ref>), U^2(G) is the graph of an involution that we denote by x↦ x^-1. By condition (<ref>), if x∈ U^1(G), then x^-1=x. *Define also r:G→ U^1(G) to be the (unique) element in U^1(G) such that (x,x^-1,r(x))∈ U^3(G) and put s(x)=r(x^-1).*Put G^(2)=q_12(U^3(G)). If (x,y)∈ G^(2), then there exists z such that (x,y,z)∈ U^3(G). As (x,s(x),x^-1)∈ U^3(G), it follows that (x,s(x),y,z)∈ U^4(G) and thus (z,x,s(x),y)∈ U^4(G), and therefore (s(x),y)∈ G^(2), and r(y)=s(x).Conversely, if (x,y)∈ G^2 satisfy s(x)=r(y), as (x^-1,x,s(x))∈ U^3(G) and (s(x),y,y^-1)∈ U^3(G), it follows that (x^-1,x,y,y^-1)∈ U^4(G), whence (x,y,y^-1,x^-1)∈ U^4(G) and (x,y)∈ G^(2).In other words, G^(2)={(x,y)∈ G^2; s(x)=r(y)}. *For (x,y)∈ G^(2), we may define thanks to condition (<ref>) the element xy∈ G, by the requirement (x,y,(xy)^-1)∈ U^3(G).*Since (y,(xy)^-1)∈ G^(2) and ((xy)^-1,x)∈ G^(2), it follows that s(xy)=r((xy)^-1)=s(y) and r(xy)=s((xy)^-1)=r(x). *For x∈ G, since (r(x),x,x^-1)∈ U^3(G) and (x,s(x),x^-1)∈ U^3(G), it follows that r(x)x=x and xs(x)=x - thus units are units. As (x,x^-1,r(x))∈ U^3(G) and (x^-1,x,s(x))∈ U^3(G) we find xx^-1=r(x) and x^-1x=s(x) and thus x^-1 is the inverse of x. *Finally, let (x,y,z)∈ G^3 be such that (x,y)∈ G^(2) and (y,z)∈ G^(2). We saw that s(xy)=s(y)=r(z). Put w=((xy)z)^-1. Then (x,y,(xy)^-1)∈ U^3(G) and (xy,z,w)∈ U^3(G), and thus (x,y,z,w)∈ U^4(G), whence (y,z,w,x)∈ U^4(G) and therefore (yz,w,x)∈ U^3(G) and finally, (x,yz,w)∈ U^3(G), which means that x(yz)=w^-1=(xy)z.§.§and their duals (<cit.>)Aover a groupoidGis a vector bundleEoverGwith a groupoid structure such thatE^(0)⊂ Eis a vector subbundle of the restriction ofEtoG^(0)and such that all the structure maps of the groupoidE(r_E,s_E,x↦ x^-1and the composition) are linear bundle maps andr_E:E→ r_G^*(E^(0))is surjective. Let E→ G be a . For all k∈ (k≥ 1) U^k(E)→ U^k(G) is a subbundle of the restriction to U^k(G)⊂ G^k of the bundle E^k→ G^k. We identify the dual bundle of E^k→ G^k with (E^*)^k. Then the dual bundle E^* is a VB-groupoid over G with U^k(E^*)=U^k(E)^⊥ for all k.We prove that U^1(E^*)=(E^(0))^⊥ and U^3(E^*)=U^3(E)^⊥ satisfy the conditions of prop. <ref>. * Condition (<ref>) is obvious: since U^3(E) is invariant under cyclic permutations, so is U^3(E)^⊥. * Taking the restriction of E over a point of G^(0), we have a linear groupoid, and we have already proved that its orthogonal, is a linear groupoid. Condition (<ref>) follows immediately. *By condition (<ref>) for E, it follows that q_1:U^3(E)→ E is onto, whence (by condition <ref>) q_3:U^3(E)→ E is onto. Therefore q_1,2:U^3(E)^⊥→ E^*× E^* is injective. * Since q_2,3:U^3(E)→ E× E injective, it follows thatq_1:U^3(E)^⊥→ E^* is onto. Since U^2(E) is the graph of an involution, the same holds for U^2(E)^⊥. Note also that condition (<ref>) ensures that q_1:U^2(E)^⊥→ E^* is an isomorphism. 
We then just have to show that {(x,y,z)∈ U^3(E)^⊥; z∈ (U^1(E))^⊥}={(x,y,z)∈ U^3(E)^⊥;(x,y)∈ U^2(E)^⊥}. The first term is the orthogonal of U^3(E)+F_1 where F_1={(0,0,z); z∈ U^1(E)} and the second the orthogonal of U^3(E)+F_2 where F_2={(x,y,0); (x,y)∈ U^2(E)}.Now, for every (x,y)∈ U^2(E) there exists z∈ U^1(E), namely z=r_E(x)=s_E(y) such that (x,y,z)∈ U^3(E); by surjectivity of r_E, it follows that for every γ∈ G and every z∈ E^(0)_γ there exists x∈ E_γ such that r_E(x)=z, thus (x,x^-1,z)∈ U^3(E). In other words, U^3(E)+F_1=U^3(E)+F_2. Condition (<ref>) follows. * We just need to show that U^4(E)^⊥={(w,x,y,z)∈ (E^*)^4; ∃ (u,u')∈ U^2(E)^⊥,(w,x,u)∈ U^3(E)^⊥ and (u',y,z)∈ U^3(E)^⊥}. As U^4(E)^⊥ is cyclicly invariant, condition (<ref>) will follow.If there exists (u,u')∈ U^2(E)^⊥ such that (w,x,u)∈ U^3(E)^⊥ and (u',y,z)∈ U^3(E)^⊥, then, for every (a,b,c,d)∈ U^4(E), there exists (v,v')∈ U^2(E) such that (a,b,v)∈ U^3(E) and (v',c,d)∈ U^3(E). It follows that (w,x,y,z)∈ U^4(E)^⊥. The inclusion {(w,x,y,z)∈ (E^*)^4; ∃ (u,u')∈ U^2(E)^⊥,(w,x,u)∈ U^3(E)^⊥ and (u',y,z)∈ U^3(E)^⊥}⊂ U^4(E)^⊥ follows.Now, as vector bundles, if E=n and E^(0)=p, it follows that (U^k(E))=(k-1)n-(k-2)p (for all k≥ 1); therefore U^3(E)^⊥=n+p and U^4(E)^⊥=n+2p. As the projection q_1:U^3(E)^⊥→ E^* is onto, we find that for (γ_1,γ_2,γ_3,γ_4)∈ U^4(G), we have {(w,x,u),(v,y,z)∈ U^3(E)^⊥_(γ_1,γ_2,γ_3γ_4)× U^3(E)^⊥_(γ_1γ_2,γ_3,γ_4); v=u^-1} is (n+p)+(n+p)-n and we find the desired equality by dimension equality. It is then quite immediately seen, using induction and dimension equality, that, for every k, we have U^k(E^*)=U^k(E)^⊥.§.§ Fourier transformLet F_1,F_2 be real vector spaces and let H be a subspace of F_1× F_2. Assume that p_1:H→ F_1 is injective and p_2:H→ F_2 is surjective. For f∈(F_1) we put q_H(f)=(p_2)_!(p_1^*)(f).Then, for every f∈(F_1), q_H(f)=q_H^⊥(f̂) (taking a good normalization for the Fourier transform).Indeed we can write F_1=F_2× L× K, H={((x,y,0),x), x∈ F_2, y∈ L}. Then H^⊥ ={((ξ,0,η),-ξ), ξ∈ F_2^*, η∈ K^*}. For x∈ F_2, we have q_H(f)(x)=∫ _Lf(x,y,0) dy.For ξ∈ F_2^*, we have q_H(f)(ξ)=∫ _F_1× Lf(x,y,0)e^-i⟨ x|ξ⟩ dx dy and q_H^⊥(f̂)(ξ)=∫_K^*f̂(ξ,0,η) dη=∫_K^*(∫_F_2× L× K f(x,y,z)e^-i(⟨ x|ξ⟩+⟨ z|η⟩)dx dy dz)dη. But ∫_K^*(∫_K f(x,y,z)e^-i ⟨ z|η⟩dz)dη=f(x,y,0). Let E→ G be a . For γ∈ G, E_x is a linking space between the groupoids E_s(x) and E_r(x). The family of Hilbert bimodules C^*(E_x)_x∈ G is a Fell bundle and C^*(E) is the C^*-algebra associated with this Fell bundle (<cit.>).Let E→ G be a , and let E^* be the dual . Then C^*(E)≃ C^*(E^*) - via Fourier transform. For (γ,γ')∈ G^(2), let F_1⊂ E_γ× E_γ' be the set of composable elements; let F_2=E_(γγ')^-1 and H⊂ F_1× F_2 the set of (x,y,z)∈ E_γ× E_γ'× E_(γγ')^-1 that compose to a unit. Remark <ref> implies that for f∈(E_γ) and g∈(E_γ), we have (f· g)=f̂·ĝ - where f· g∈ E_γγ' is the “Fell bundle product”. In other words, the Fourier transform map isan isomorphism of the Fell-bundles and therefore the corresponding C^*-algebras are isomorphic.[3cm] amsplain
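As a quick finite sanity check of the characterization of groupoids via elements composable to a unit (the first proposition of the appendix), the following short script — our own illustration, not part of the development above — instantiates U^1 and U^3 for the pair groupoid G = X × X and verifies the stated conditions (cyclic invariance of U^3, units composing to units, injectivity of q_{1,2}, and bijectivity of q_1 on R):

from itertools import product

X = range(3)
U1 = {(a, a) for a in X}                                   # units: the diagonal
U3 = {((a, b), (b, c), (c, a)) for a, b, c in product(X, repeat=3)}

assert all((y, z, x) in U3 for (x, y, z) in U3)            # U^3 cyclically invariant
assert all((x, x, x) in U3 for x in U1)                    # (x, x, x) in U^3 for units
assert len({(x, y) for (x, y, z) in U3}) == len(U3)        # q_12 : U^3 -> G^2 injective
R = {(x, y, z) for (x, y, z) in U3 if z in U1}             # triples with z a unit
assert {x for (x, y, z) in R} == set(product(X, repeat=2)) # q_1 : R -> G surjective
assert len(R) == len(X) ** 2                               # ... and injective
U2 = {(x, y) for (x, y, z) in R}
assert all((y, x) in U2 for (x, y) in U2)                  # U^2 is the graph of an involution
print("pair-groupoid model satisfies the composable-to-a-unit axioms")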
http://arxiv.org/abs/1705.09588v2
{ "authors": [ "Claire Debord", "Georges Skandalis" ], "categories": [ "math.OA", "math.DG", "math.KT", "Primary 58H05, 19K56, Secondary 58B34, 22A22, 46L80, 19K35, 47L80" ], "primary_category": "math.OA", "published": "20170526140813", "title": "Blowup constructions for Lie groupoids and a Boutet de Monvel type calculus" }
http://arxiv.org/abs/1705.09374v1
{ "authors": [ "Yoram Zarai", "Alexander Ovseevich", "Michael Margaliot" ], "categories": [ "q-bio.SC" ], "primary_category": "q-bio.SC", "published": "20170525214350", "title": "Optimal Translation Along a Circular mRNA" }
Rejection-Cascade of GaussiansKiran et al.Navya, Paris, France Detection Vision Systems, Valeo India Valeo Vision Systems, Galway, Ireland [email protected], {arindam.das,senthil.yogamani}@valeo.comRejection-Cascade of Gaussians: Real-time adaptive background subtraction framework B Ravi Kiran^1 Arindam Das^2 Senthil Yogamani^3 December 30, 2023 =================================================================================== Background-Foreground classification is a well-studied problem in computer vision. Due to the pixel-wise nature of modeling and processing in the algorithm, it is usually difficult to satisfy real-time constraints. There is a trade-off between the speed (because of model complexity) and accuracy. Inspired by the rejection cascade of Viola-Jones classifier, we decompose the Gaussian Mixture Model (GMM) into an adaptive cascade of Gaussians(CoG). We achieve a good improvement in speed without compromising the accuracy with respect to the baseline GMM model. We demonstrate a speed-up factor of 4-5x and 17 percent average improvement in accuracy over Wallflowers surveillance datasets. The CoG is then demonstrated to over the latent space representation of images of a convolutional variational autoencoder(VAE). We provide initial results over CDW-2014 dataset, which could speed up background subtraction for deep architectures.§ INTRODUCTION Background subtraction is critical component of surveillance applications (indoor and outdoor), action recognition, human computer interactions, tracking, experimental chemical procedures that require significant change detection. Work on background subtraction started since the 1970s and even today it is an active open problem. There have been a host of methods which have been developed and below is a short review which will serve to aid understanding our algorithm. A survey by  <cit.> provides an overview of common methods which includes Frame differencing (FD), Running Gaussian average (RGA), Gaussian Mixture Model (GMM) and Kernel Density Estimation (KDE). We employ these basic methods in a structured methodology to develop our algorithm.A survey of variants of GMM, issues and analysis are presented in  <cit.>. In our work, we focus on solving the variable-rate adaptation problem and improving the performance. Abstractly, our work tries to fuse several algorithms to achieve speed and accuracy and we list similar methods here. Similar attempts have been made by the following researchers.  <cit.> and  <cit.> used a Hierarchical background subtraction method that operates in different scales over the image : namely pixel, region and image level, while their models themselves are not hierarchical. Authors  <cit.> switch between GMM and RGA models, while choosing a complex model for complicated backgrounds and simple model for simpler backgrounds. They use an entropy based measure to switch between the different models. We briefly describe our observations and improvement over the standard GMM from  <cit.>. We observe in most cases, background subtraction is an asymmetric classification problem with probability of foreground pixel being much lesser than that of background. This assumption fails in the case of scenes like highways, a busy street, etc. In our work, we focus mainly on surveillance scenarios where there is very low foreground occupancy. Our framework exploits this fact and at the same time handles variable rate changes in background and improves accuracy. Our key contributions in this paper include: 1. 
Decomposition of GMM to form an adaptive cascade of classifiers - Cascade of Gaussians (CoG) which handles complex scenes in an efficient way to obtain real-time performance. 2. A confidence estimate for each pixel's classification, which is used to vary the learning rate and the thresholds for the classifiers, and to drive adaptive sampling. 3. Learning a time-windowed KDE from the training data-set, which acts as a prior to the Adaptive Rejection Cascade and also supports the confidence estimate.The decomposition of the GMM into the cascade, with its increasing true positive detection rate, is inspired by the Viola-Jones Rejection Cascade <cit.>. Authors <cit.> provided an optimized lookup for highly probable colors in the incoming background pixels, thus speeding up model access. § COMPONENTS OF THE CASCADEThis section describes the different components of the rejection cascade and how they were determined. The rejection cascade is accompanied by the confidence measure to make an accurate background classification at each level of the cascade. Scene Prior in Background Model:Distinguishing linearly varying background from noisy pixels is critical and challenging, since the background subtraction model intrinsically has no additional attribute to separate them. For this scenario, our approach introduces a prior probability for every pixel (eqn <ref>). Assuming independent R, G, B channels, the non-parametric probability distribution of a pixel over the N training frames is given by the kernel density estimate P(I(x,y)) = 1/N∑_i=1^N K_σ(I(x,y) - I_i(x,y)). The Scene Prior thus provides a non-parametric estimate of pixel values over N frames during training. The choice of N is empirical and depends on how much dynamic background and foreground is present in the training frames. To capture the full variability we choose N as large as possible. Henceforth we refer to the Scene Prior as the prior. In the training phase we estimate the underlying temporal distribution of pixels by calculating the kernel function that approximates the said distribution.Our case primarily concentrates on long surveillance videos with sufficient information (minimal foreground) available in the training sequence, which decides N. For the standard GMM model (assuming a diagonal covariance matrix) the parameter updates are:P(I_n(x,y))=∑_i=1^K ω_i,n η(I_n(x,y),μ_i,n,σ_i,n), ω_n+1,k(x,y) ⟵(1-α)ω_n,k(x,y) + α M_k,n+1, μ_n+1,k(x,y) ⟵(1-ρ)μ_n,k(x,y) + ρ I_n(x,y), σ_n+1,k^2(x,y) ⟵(1-ρ)σ_n,k^2(x,y) + ρ (I_n(x,y) - μ_n+1,k(x,y))^2.Here K_σ represents the Gaussian kernel and σ its scale or bandwidth; this kernel function is calculated to provide the modes of the different pixels. η represents the pixel mode distribution obtained in equation <ref>, ω_i represents the weight of component i in the distribution of pixel I_n(x,y), μ_i and σ_i are the parameters of the component, M is 0 or 1 based on a component match, and α represents the learning rate of the pixel model. The α is usually initialised to the same value for all pixels; there has been work on adapting it based on the pixel entropy. We use the pixel gradient value distribution to do the same.Determining Learning Rate Hyper-parameters:Besides the kernel density, we also estimate the dynamic nature of the pixels in the scene. This is obtained by clustering the residue between consecutive frames into 3 categories: static/drifting, oscillating and dynamic pixels (Fig <ref> top right). This helps resolve a pixel drift versus a pixel jump, as shown in the example in the figure below.
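The per-pixel update loop implied by the equations above fits in a few lines. The sketch below is ours and follows the standard Stauffer-Grimson conventions; the 2.5σ match test, the simplification ρ ≈ α, and the initialization constants are illustrative assumptions, not the exact values used in our experiments.

K, ALPHA = 3, 0.01                    # modes per pixel and learning rate (illustrative)

def update_pixel(modes, x):
    """modes: list of [w, mu, sigma2] for one pixel; x: incoming intensity."""
    matched = next((m for m in modes if (x - m[1]) ** 2 < 6.25 * m[2]), None)
    for m in modes:                   # w <- (1 - a) w + a M, M = 1 only for the match
        m[0] = (1 - ALPHA) * m[0] + ALPHA * (m is matched)
    if matched is None:               # no mode explains x: replace the weakest mode
        modes.sort(key=lambda m: m[0])
        modes[0] = [ALPHA, float(x), 15.0 ** 2]
        return "FG"
    rho = ALPHA                       # rho ~ alpha here for brevity
    matched[1] = (1 - rho) * matched[1] + rho * x                        # mu update
    matched[2] = (1 - rho) * matched[2] + rho * (x - matched[1]) ** 2    # sigma^2 update
    return "BG" if matched[0] > 1.0 / K else "FG"   # crude dominant-mode test

modes = [[1.0 / K, 120.0, 15.0 ** 2] for _ in range(K)]
print([update_pixel(modes, x) for x in (121, 122, 240, 241)])   # -> BG BG FG FG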
Once we have the residue R_n(x,y) = I_n(x,y) - I_n-1(x,y), n ∈ [1,N], we evaluate the normalized histogram over the residue values. We select bin intervals to extract the 3 classes based on the dynamic nature of pixels. A peaky first bin implies a near-zero residue, and thus drifting or static pixels. A peaky second bin implies oscillating pixels, and the remaining cases are considered dynamic pixels. Based on these values we choose the weights for the confidence measure (explained in the next section). The frequency over each bin sets the learning rate for the pixel. The mapping from the normalized binned histogram values to the learning rates α, β and γ of the confidence function was determined empirically by shape matching the histograms. Clustering Similar Background - Spatio-Temporal Grouping: The next step in the training phase is to determine background regions of pixels in the frame that behave similarly in terms of adapted variance and number of modes, so as to use fewer parameters and fewer instructions to update the pixel models of that specific region. The problem can be formalized as follows: we are given N×(frame size) pixels, and for each pixel I_n(x,y) we have a set of matches of the form (I_n(x,y), I_n(x',y'))_t_n, which means that pixel I_n(x,y) correlated with pixel I_n(x',y') at frame number n. From these N matches, we construct a discrete time series x_i(t) by sampling pixel F_x,y^n at intervals of t frames, i.e., a time series of the pixel I_n(x,y) values starting at frame n_0. Intuitively, x_i measures the correlation in behavior of pixels over the time window t. For convenience we assume that the time series x_i have the same length. We group together pixel value time series so that similar behavior is captured by similarity of the time series x_i(t). This way we can infer which pixels have similar temporal variances and modalities, and we can then consider the center of each cluster as the representative common pattern of the group. This helps us cluster similarly behaving pixels together. This can be seen as a spectral clustering problem, as described in <cit.>. We first try a simpler approach, clustering the adapted pixel variances (matrix V) and weights (matrix R) of the first dominant mode of pixels within a mixture model: * Get N frames & estimate pixel-wise μ(t), σ(t), ω(t) * Form matrices whose rows are adapted variance and ranked weight observations, while columns are the variables V and R, V(t_k,i) = I(t_k), k=1:N* Obtain covariance matrices R_cov = Cov(R), V_cov = Cov(V)* Perform K-means clustering with K=3 (for temporal pixel residue due to dynamic, oscillating, or drifting BG).* Threshold for pixels within 0.7-0.5 σ* Calculate the KDE of the given cluster & the joint occurrence distribution and associated weight ω_1, μ_1 and σ_1, where μ_1 is the first dominant common cascade level of the grouped pixels. This approach suffers from the setback that the variances chosen temporally do not correspond to the mean values associated with the maximum eigenvalue, as would be obtained with spectral clustering. So we have the pixel variance and adapted weight (dominant mode) covariance matrices R(x_i,y_i) = Cov(Var(I_n(x_i,y_i))) and W(x_i,y_i) = Cov(Var(W_n(x_i,y_i))).
A single gaussian is fit over the thresholded covariance matrices (adapted variance and first dominant mode weight): r_n: μ_advar-σ_advar < var(R_cov) < μ_advar+σ_advar, w_n: μ_adw-σ_adw < var(W_cov) < μ_adw+σ_adw. The parameters μ_advar, σ_advar and μ_adw, σ_adw represent the mean and standard deviation of the cluster of pixel variances and adapted weights of the first dominant modes. The fundamental clustering algorithm requires: the data sets R_cov and V_cov; the number of clusters - a quantization of the adapted weights or variances; and the Gram matrix <cit.>. One critical point to note here is that, when we choose not to employ spatio-temporal grouping, and so do not reduce the number of parameters and consequent updates, we can use the Scene Prior covariance estimation to increase the accuracy of the foreground detection. This is very similar to background subtraction based on the Co-occurrence of Image Variations. Confidence Measure: The confidence measure is a latent variable used to aid the Rejection Cascade by providing a measure of fitness for the classification of a pixel based on various criteria. The Confidence C_n(x,y) for a pixel I_n(x,y) is given by C_n(x,y) = α P(x,y) + β Δ_n I(x,y) + γ M(I_n(x,y)). Here, M() represents the difference between the current pixel value I_n(x,y) and the parameters of the model occurring at the top of the ordered Rejection Cascade described below, while Δ_n I(x,y) = I_n(x,y)-I_n-1(x,y). As seen in the ordered tree, the first set of parameters would be the first dominant mode - (μ_1+σ_1,μ_1-σ_1). This is carried out based on the level at which the pixel gets successfully classified. P() represents the probability of occurrence of the pixel from the KDE. The values of α, β and γ are determined by the normalized temporal residue distribution (explained above). The physical significance and implications of α, β and γ are as follows: α says how confident the region is, and regions that are stable (for example the segments from clustering adapted variances and weights of training-phase pixel models) would have high α values. The value of β determines how fast the pixel would need to adapt to new incoming values, which would mean a lower effect of the prior distribution. The final parameter γ determines the consistency of the pixel belonging to a model, and changes whenever the pixel's behavior is much more dynamic (as opposed to a temporal residue weighting it). Confidence based temporal sampling: By applying multiple modes of background classifiers and observing the consistency in their model parameters (mean, variance, and connectivity), we predict the future values of these pixels. A threshold on the confidence function value, determined by using stable regions (obtained by region growing) as a reference, is used to select the pixels both spatially and temporally. The description of the confidence measure is given in more detail in section 2.3. Pixels with low confidence reflect regions R of the frame with activity, and thus a high probability of finding pixels whose labels are in transition (FG-BG). Thus by thresholding the confidence function we sub-sample the incoming pixels spatio-temporally. The intuition is that when pixel values arriving now are within the first dominant mode's 0.7σ region, and even more so within the CHP level, for a large number of frames, the confidence value saturates. The region R(x_i,y_i) = C_n(x_i,y_i) > C_ScenePrior(x_i,y_i) is just a thresholded binary map of this confidence value.
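A compact sketch of the confidence map and the resulting sampling mask follows; it is our own illustration, where the array shapes, the weights a, b, g, the reference level, and the randomly generated input maps are all placeholders — in practice the three inputs come from the KDE prior, the frame residue, and the agreement with the active cascade level.

import numpy as np

np.random.seed(0)
a, b, g, C_ref = 0.5, 0.3, 0.2, 0.6        # alpha, beta, gamma and threshold (made up)
h, w = 120, 160
prior_p   = np.random.rand(h, w)           # P(): KDE prior probability of the pixel
residue   = np.abs(np.random.randn(h, w))  # |I_n - I_{n-1}|, normalised
model_fit = np.random.rand(h, w)           # M(): agreement with the active cascade level

C = a * prior_p + b * residue + g * model_fit   # C_n = a P + b dI + g M
stable = C > C_ref                 # R: saturated-confidence (stable) pixels
active = ~stable                   # low confidence -> re-examined every frame
print("pixels updated per frame:", active.sum(), "of", h * w)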
This sub-sampling is demonstrated in the analysis in section 3. Cascade of Gaussians CoG The proposed method can be viewed as a decomposition of the GMM in an adaptive framework, reducing complexity and improving accuracy by using a strong prior to determine the scenarios under which said gains can be achieved. The prior is used to determine the modality of the pixel distribution, and any new value is treated as a new mean-with-variance model. The Cascade can be seen to consist of K Gaussians which are ordered based on the successful classification of the pixel. During steady state the ordered cascade conforms to the Viola-Jones Rejection Cascade, with decreasing positive detection rates.The cascade is headed by a Consistent Hypothesis Propagation (CHP) classifier, which simply repeats the labeling of the current pixel if its value is equal to its value in the previous frame. This CHP classifier is then followed by an ordered set of Gaussians ω_i.η(μ_i,σ_i), including the spatio-temporally grouped parameters. The tree ordering is different for different pixels, and the order is decided based on the prior distribution (KDE) of the pixel and the temporal consistency of the pixel in the different levels. When the pixel values do not belong to any of the dominant modes of the prior, we have a scenario where only the beta and gamma weights are considered and alpha is rejected (prior nullified).The rejection cascade assumes that the frequency of occurrence of foreground detections is lower than that of the background. This idea was first introduced in the classic Viola-Jones paper <cit.>. For the rejection cascade the training phase produces a sequence of features with decreasing rates of negative rejections. In our case we arrange the different classifiers in increasing complexity to maximize the speed. We observe in practice that this cascade also produces decreasing rates of negative rejections. The critical difference in this rejection cascade is that the classifier in each level of the cascade is evolving over time. To make adaptation efficient we adapt only the active level of the cascade, resulting in only one active update at a time; during a transition the parameters are updated.The performance of the different rejection cascade elements is depicted in Figure <ref>: cascade elements with increasing complexity (and consequently accuracy) incur higher run times. These times were obtained over 4 videos from the Wallflower data set by <cit.>, covering different types of dynamic background. This by itself indicates the possible speedup that can be obtained when the Rejection Cascade is operated on pixels adaptively, based on the nature of the pixel. Similarly, we observed that the number of pixels in each of these 4 videos was distributed differently amongst the 4 levels, as seen in figure <ref>. Thus, even though the number of pixels corresponding to each dynamic class varies with the nature of the video, on average more pixels correspond to the low-complexity Cascade elements. The rejection cascade for BG subtraction was formed by determining (as in <cit.>) the set of background pixel classifiers (or in our case models, like the attentional operator in Viola-Jones), organized as a degenerate tree with decreasing false positive rate as we proceed down the cascade.
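The per-pixel test order of the cascade can be summarized in a few lines; in this sketch (ours) the 2.5σ acceptance band, the weight threshold in the BG/FG rule, and the level bookkeeping are illustrative assumptions rather than the exact rules profiled in Section 3.

def classify(x, prev_x, prev_label, modes, n_sigma=2.5):
    """modes: [(w, mu, sigma2), ...] ordered by recent classification success."""
    # Level 0 -- CHP: a pixel equal to its previous value repeats its label
    if x == prev_x:
        return prev_label, 0
    # Levels 1..K -- ordered Gaussians (incl. spatio-temporally grouped params)
    for level, (w, mu, sigma2) in enumerate(modes, start=1):
        if (x - mu) ** 2 <= (n_sigma ** 2) * sigma2:
            return ("BG" if w > 0.25 else "FG"), level
    return "FG", len(modes) + 1     # unmatched by every level: foreground

label, level = classify(130, 128, "BG", [(0.6, 129.0, 20.0), (0.2, 60.0, 9.0)])
print(label, level)                 # -> BG 1 (matched at the first, cheapest Gaussian)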
The learning rate for the model is calculated as a function of the confidence measure of the pixels. Abrupt illumination change is detected in the final level of the rejection cascade by adding a conditional counter. This counter measures the number of pixels that are not modeled by the penultimate cascade element. If this value is above a threshold, we can assume an abrupt illumination change scenario. This threshold is around seven tenths of the total number of pixels in the frame <cit.>.§ ANALYSIS & VAE-COG§.§ Scene Prior AnalysisHere we discuss the Scene Prior and its different components. First, with regard to clustering pixels based on the similarity of their dynamic nature, we show results of various clustering methods and their intuitions. The first model considers the time series of variances of said pixels in the N training frames. The covariance matrix is calculated for the variances of the pixels; this can loosely act as the affinity matrix describing the similar behavior of a pair of pixels. The weight of the first dominant mode is also considered to form the affinity matrix.§.§ Cascade AnalysisThe CoG is faster on two accounts. Firstly, it is a cascade of simple-to-complex classifiers, CHP to RGA, and averaging over the performance (seen in the figure), we see an improvement in speed of operation, since the simpler cases of classification outweigh the complex ones. Secondly, it models the image as a spatio-temporal group of super-pixels that needs only a single set of parameters to update; moreover, when the confidence of the pixel saturates, the Cascade updates are halted, providing huge speedups. It is necessary to mention, though, that the window of sampling is chosen empirically, in scale with the confidence saturation values. The average speedup of the rejection tree algorithm is calculated as I(x,y)/∑_i s_in_i, where (x,y) runs over all indices of the image, n_i refers to the ratio of background pixels labeled at level i (mean, or mean with variance) with respect to the total number of background pixels in the image, and s_i is the normalized ratio of the time it takes for the level-i BG model to evaluate and label a pixel as background. The values of n and s were profiled over various videos for different durations. We also show the distribution of the CHP pixels as well as the first 3 dominant modes within different frames of the Waving Tree and Time of Day videos, with 40 frames of training each. We can see a huge occupancy of red (CHP) for both background and foreground pixels. Here we also explain the effect of the confidence measure on the accuracy of the GMM model. We obtain a speedup of 2x-3x with the use of the Adaptive Rejection Cascade based GMM. This speedup goes up to 4-5x with effective confidence-based spatio-temporal sampling.
This is evident in the Cascade level population (in figure <ref>). We observe a 17% improvement in accuracy over the baseline model because of adaptive modelling that handles difficult scenarios explicitly using scene priors.§.§ Latent space CoG with VAEsCNNs have become the state-of-the-art models for various computer vision tasks. Our proposed framework is generic and can be extended to CNN models. In this section, we study a possible future extension of the Rejection Cascade to the Variational AutoEncoder (VAE). There has been recent work on using auto-encoders to learn dynamic background for the subtraction task <cit.>. Rejection cascades have also been employed within convolutional neural network architectures for object detection <cit.>. VAEs are among the most interpretable deep generative models.VAEs are deep generative models that approximate the distribution of high-dimensional vectors 𝐱 that correspond to pixel values in the image domain. Like a classical auto-encoder, a VAE consists of a probabilistic encoder q_ϕ(𝐳|𝐱) that reduces the input image to a latent space vector 𝐳 and enforces a Gaussian prior, and a probabilistic decoder p_θ(𝐱|𝐳) that reconstructs these latent vectors back to the original images. The loss function consists of the expected negative reconstruction error plus a KL-divergence regularization term between the approximate posterior over the latent vector (parameterized by a mean vector and a standard deviation vector) and the Gaussian prior; minimizing it optimizes the variational lower bound on the marginal log-likelihood of each observation <cit.>.The classical cascade — CHP, followed by the ordered sequence of modes of the GMM (μ_i, σ_i) — can now be envisaged in the latent space for a multivariate Gaussian 𝒩(𝐳; 0, I). The future goal would be to create early rejection classifiers as in <cit.> for classification tasks, where within each layer of the probabilistic encoder we are capable of measuring the log-likelihood of being foreground. Storing previous latent space vectors for the CHP test would require additional memory beyond that assigned to the latent space mean and variance vectors. VAEs are an ideal extension to the rejection cascade since the pixel-level tests in CoG are now performed by the VAE in the latent space, over which a likelihood can be evaluated. We also gain invariance to positions, orientations, pixel-level perturbations, and deformations in mid-level features due to the convolutional architecture. A convolutional VAE with a latent space of 16 dimensions was trained on the CDW-2014 datasets <cit.>; preliminary results are shown in figure <ref>. § CONCLUSIONThe CoG was evaluated on the Wallflower dataset, and its autoencoder counterpart VAE-CoG on the CDW-2014 datasets. We observed a speedup of 4-5x over the baseline GMM, with an average improvement of 17% in the mis-classification rate. This study has demonstrated conceptually how a GMM can be re-factored into a prior scene-based pixel density and a rejection cascade of simpler models, ordered based on the probability of occurrence of each level of the cascade and the accuracy (and complexity) of the model at each cascade level.
http://arxiv.org/abs/1705.09339v2
{ "authors": [ "B Ravi Kiran", "Arindam Das", "Senthil Yogamani" ], "categories": [ "stat.ML", "cs.CV" ], "primary_category": "stat.ML", "published": "20170525195045", "title": "Rejection-Cascade of Gaussians: Real-time adaptive background subtraction framework" }
In this paper we improve the layered implementation of arbitrary stabilizer circuits introduced by Aaronson and Gottesman in Phys. Rev. A 70(052328), 2004: to obtain a general stabilizer circuit, we reduce their 11-stage computation -H-C-P-C-P-C-H-P-C-P-C- over the gate set consisting of Hadamard, Controlled-NOT, and Phase gates, into a 7-stage computation of the form -C-CZ-P-H-P-CZ-C-.We show arguments in support of using -CZ- stages over the -C- stages: not only does the use of -CZ- stages allow a shorter layered expression, but -CZ- stages are simpler and appear to be easier to implement compared to the -C- stages.Based on this decomposition, we develop a two-qubit gate depth-(14n-4) implementation of stabilizer circuits over the gate library {H, P, CNOT}, executable in the Linear Nearest Neighbor (LNN) architecture, improving the best previously known depth-25n circuit, also executable in the LNN architecture.Our constructions rely on the Bruhat decomposition of the symplectic group and on folding arbitrarily long sequences of the form (-P-C-)^m into a 3-stage computation -P-CZ-C-.Our results include the reduction of the 11-stage decomposition -H-C-P-C-P-C-H-P-C-P-C- into a 9-stage decomposition of the form -C-P-C-P-H-C-P-C-P-.This reduction is based on the Bruhat decomposition of the symplectic group.This result also implies a new normal form for stabilizer circuits.We show that a circuit in this normal form is optimal in the number of Hadamard gates used.We also show that the normal form has an asymptotically optimal number of parameters. Shorter Stabilizer Circuits via Bruhat Decomposition and Quantum Circuit Transformations D. Maslov M. Roetteler December 30, 2023 =========================================================================================§ INTRODUCTIONStabilizer circuits are of particular interest in quantum information processing (QIP) due to their prominent role in fault tolerance <cit.>, the study of entanglement <cit.>, and in evaluating quantum information processing proposals via randomized benchmarking <cit.>, to name a few.Stabilizer circuits are composed of the Hadamard gate H, the Phase gate P, and the controlled-NOT gate CNOT, defined as:H=1/√(2)[[11;1 -1 ]], P:=[[ 1 0; 0 i ]], and CNOT:=[[ 1 0 0 0; 0 1 0 0; 0 0 0 1; 0 0 1 0 ]],where in the case of H and P each of these gates is allowed to act on any of a given number n of qubits, and on any pair of qubits in the case of the CNOT gate.
The stabilizer circuits over n qubits, such as defined above form a finite group which is known to be equivalent <cit.> to the group of binary 2n × 2n symplectic matrices, .Knowing this equivalence allows to evaluate the stabilizer group size, through employing the well-known formula to calculate the number of elements in the respective symplectic group,||=2^n^2∏_j=1^n(2^2j-1)=2^2n^2+O(n).In this paper, we rely on the phase polynomial representation of {,} circuits.Specifically, arbitrary quantum circuits over P and CNOT gates can be described in an alternate form, which we refer to as phase polynomial description, and vice versa, each phase polynomial description can be written as a P and CNOT gate circuit.We use this result to induce circuit transformations via rewriting the respective phase polynomials.We adopt the phase polynomial expression result from <cit.> to this paper as follows: Any circuit C on n qubits over {,} library with k Phase gates can be described by the combination of a phase polynomial p(x_1, x_2, ..., x_n)=f_1(x_1, x_2, ..., x_n) + f_2(x_1, x_2, ..., x_n) + ⋯ + f_k(x_1, x_2, ..., x_n) and a linear reversible function g(x_1, x_2, ..., x_n), such that the action of C can be constructed as C|x_1x_2...x_n⟩ = i^p(x_1, x_2, ..., x_n)|g(x_1, x_2, ..., x_n)⟩,where i denotes the complex imaginary unit. Functions f_j corresponding to the j^th Phase gate are obtained from the circuit C via devising Boolean linear functions computed by the CNOT gates in the circuit C leading to the position of the respective Phase gate.In the following we focus on finding a short layered sequence of gates capable of representing an arbitrary stabilizer circuit over n primary inputs.The layers are defined as follows:* -H- layer contains all unitaries representable by arbitrary circuits composed of the Hadamard gates. Since ^̋2=Id, and Hadamard gate is a single-qubit gate, -H- layer has zero or one gates acting on each of the respective qubits.The number of distinct layers -H- on n qubits is thus 2^n.We say -H- has n Boolean degrees of freedom.* -P- layer is composed of an arbitrary set of Phase gates.Since ^4=Id, and the Phase gate is also a single-qubit gate, -P- layer has anywhere between zero to three gates on each of the respective qubits.Note that ^2=, and therefore the gate sequencemay be better implemented as the Pauli- gate; ^3=^†, and frequently ^† is constructible with the same cost as .This means that the -P- layer is essentially analogous to the -H- layer in the sense that it consists of at most n individual single-qubit gates.The number of different unitaries represented by -P- layers on n qubits is 2^2n.We say -P- has 2n Boolean degrees of freedom.* -C- layer contains all unitaries computable by thegates.The number of different -C- layers corresponds to the number of affine linear reversible functions, and it is equal to ∏_j=0^n-1(2^n-2^j)=2^n^2+O(n) <cit.>.We say -C- has n^2+O(n) Boolean degrees of freedom.* -CZ- layer contains all unitaries computable by thegates, wheregate is defined as:=[[1000;0100;0010;000 -1 ]] . 
Since allgates commute, and due tobeing self-inverse, i.e., ^2=Id, the number of different unitaries computable by -CZ- layers is ∏_j=1^n2^n-j=2^n^2/2+O(n).We say -CZ- has n^2/2+O(n) Boolean degrees of freedom.Observe that the above count of the degrees of freedom suggests that -P- and -H- layers are “simple”.Indeed, each requires no more than the linear number of single-qubit gates to be constructed via a circuit.The number of the degrees of freedom in -C- and -CZ- stages is quadratic in n.Other than the two-qubit gates often being more expensive than the single-qubit gates <cit.>, the comparison of the degrees of freedom suggests that we will need more of the respective gates to construct each such stage.The -CZ- layer has roughly half the number of the degrees of freedom compared to the -C- layer.We may thus reasonably expect that the -CZ- layer can be easier to obtain.Unlike the -C- circuits, the problem of optimizing -CZ- circuits does not seem to have been studied in the literature.Part of the reason could be due to thegate complexity of -CZ- circuits being a very inconspicuous problem to study: indeed, worst case optimal circuit has (n-1)n/2gates, and optimal circuits are easy to construct, as they are determined by the presence or lack ofgates acting on the individual pairs of qubits.However, we claim that using onlygates to construct -CZ- layer is not the best solution, and a better approach would be to also employ theandgates.Indeed, bothandgates must have a comparable cost of the implementation, since they are related by the formula (a,b)=(̋b)(a,b)(̋b), and single-qubit gates are “easy” <cit.>. is furthermore the elementary gate in superconducting circuits QIP <cit.>, and as such, technically, it costs less than the , and in the trapped ions QIP the costs of the two are comparable <cit.>.Further discussion of the relation of implementation costs between -C- and -CZ- layers is postponed to Section <ref>.The different layers can be interleaved to obtain stabilizer circuits not computable by a single layer.A remarkable result of <cit.> shows that 11 stages over a computation of the form -H-C-P-C-P-C-H-P-C-P-C- suffices to compute an arbitrary stabilizer circuit.The number of Boolean degrees of freedom in the group of stabilizer unitaries, defined as the logarithm base-2 of their total count, is given by the formula log_2||=2n^2+O(n).This suggests that the 11-stage circuit by Aaronson and Gottesman <cit.> is suboptimal, as it relies on 5n^2+O(n) degrees of freedom, whereas only 2n^2+O(n) are necessary.Indeed, we find (Section <ref>) a shorter 9-stage decomposition of the form -P-C-P-C-H-C-P-C-P- in which all -C- stages correspond to upper triangular matrices having n^2/2 degrees of freedom each, leading to an asymptotically tight parameterization of all stabilizer circuits. Notation. We denote withthe group of invertible n× n matrices, with S_n the full permutation group on n letters, and with (A,B) the (block) diagonal operator that has diagonal elements A and B.§ (-P-C-)^M CIRCUITS In this section we show that an arbitrary length n-qubit computation described by the stages -P-C-P-C-...-P-C- folds into an equivalent three-stage computation -P-CZ-C-. (-P-C-)^m=-P-CZ-C-. A (-P-C-)^m circuit has no more than k≤ 3nm Phase gates.Name those gates _j=1..k, denote Boolean linear functions they apply phases to as f_j=1..k(x_1,x_2,...,x_n), and name the reversible linear function computed by (-P-C-)^m (Theorem <ref>) as g(x_1,x_2,...,x_n). 
Phase polynomial computed by the original circuit is f_1(x_1,x_2,...,x_n)+f_2(x_1,x_2,...,x_n)+...+f_k(x_1,x_2,...,x_n).We will next transform phase polynomial to an equivalent one, that will be easier to write as a compact circuit.To accomplish this, observe that i^a+b+c+(a⊕ b) + (a⊕ c) + (b⊕ c) + (a⊕ b ⊕ c) = i^4= 1, where a, b, and c are arbitrary Boolean linear functions of the primary variables.This equality can be verified by inspection through trying all 8 possible combinations for Boolean values a, b, and c. The equality can be rewritten as i^a⊕ b ⊕ c = i^3a+3b+3c+3(a⊕ b)+3(a⊕ c)+3(b⊕ c), suggesting how it will be used.The following algorithm takes n-2 steps.Step n. Consolidate terms in the phase polynomial f_1(x_1,x_2,...,x_n)+f_2(x_1,x_2,...,x_n)+...+f_k(x_1,x_2,...,x_n) by replacing uf_j(x_1,x_2,...,x_n)+vf_k(x_1,x_2,...,x_n) with (u+v4)f_j(x_1,x_2,...,x_n) whenever f_j=f_k.Once done, look for f_j=x_1 ⊕ x_2 ⊕ ... ⊕ x_n, being the maximal length linear function of the primary inputs.If no such function found, move to the next step.If it is found with a non-zero coefficient u, as an additive term u(x_1 ⊕ x_2 ⊕ ... ⊕ x_n), replace it by the equivalent 6-term mixed arithmetic polynomial (4-u)x_1 + (4-u)x_2 + (4-u)(x_3 ⊕ x_4 ⊕ ... ⊕ x_n) + (4-u)(x_1 ⊕ x_2) + (4-u)(x_1 ⊕ x_3 ⊕ x_4 ⊕ ... ⊕ x_n) + (4-u)(x_2 ⊕ x_3 ⊕ ... ⊕ x_n). This transformation is derived from eq. (<ref>) by assigning a=x_1, b=x_2, and c=x_3 ⊕ x_4 ⊕... ⊕ x_n.Consolidate all equal terms.The transformed phase polynomial is equivalent to the original one in the sense of the overall combination of phases it prescribes to compute, however, it is expressed over linear terms with at most n-1 variables.Step s, s=(n-1)..3. From the previous step we have phase polynomial of the form u^'_1f^'_1(x_1,x_2,...,x_n)+u^'_2f^'_2(x_1,x_2,..., x_n)+...+u^'_k^'f^'_k^'(x_1,x_2,...,x_n). By construction it is guaranteed that the functions f^'_j=1..k^' EXOR no more than s literals.For each f^'_j=x_j_1⊕ x_j_2⊕ ... ⊕ x_j_s, with the coefficient u^'_j ≢04 replace this term with the sum of six terms, each having no more than s-1 literals by using eq. (<ref>) and setting a, b, and c to carry linear functions over the non-overlapping non-empty subsets of {x_j_1, x_j_2, ..., x_j_s} whose union gives the entire set {x_j_1, x_j_2, ..., x_j_s}. Value s=3 marks the last opportunity to break down a term in the phase polynomial expression into a set of terms over smaller numbers of variables.Upon completion of this step, the linear functions participating in the phase polynomial expression contain at most two literals each.The transformed phase polynomial description of the original circuit now has the following form:phase polynomial ∑_j=1^nu_jx_j + ∑_j=1^n∑_k=j+1^nu_j,k(x_j ⊕ x_k), where u_·, u_·,·∈ℤ_4, and the linear reversible function g(x_1,x_2,...,x_n). We next show how to implement such a unitary as a -P-CZ-C- circuit, focusing separately on the phase polynomial and the linear reversible part. We synthesize individual terms in the phase polynomial as follows.* For j=1..n, the term u_jx_j is obtained as the single-qubit gate circuit ^u_j(x_j);* For j=1..n, k=j+1 ..n, the term u_j,k(x_j ⊕ x_k) is obtained as follows:* if u_j,k≡ 24, by the circuit ^2(x_j)^2(x_k) = (x_j)(x_k);* if u_j,k≡ 1or34, by the circuit^u_j,k(x_j)^u_j,k(x_k) (x_j,x_k).The resulting circuit containsandgates; it implements phase polynomial ∑_j=1^nu_jx_j + ∑_j=1^n∑_k=j+1^nu_j,k(x_j ⊕ x_k) and the identity linear reversible function. 
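Before assembling the stages, note that the Boolean-to-phase identity driving Steps n..3 is easy to machine-check. The few lines below (our own aside, not part of the proof) verify both i^(a+b+c+(a⊕b)+(a⊕c)+(b⊕c)+(a⊕b⊕c)) = 1 and the derived 6-term replacement over all eight assignments, with exponents reduced mod 4 since P^4 = Id:

from itertools import product

for a, b, c in product((0, 1), repeat=3):
    # exponent of i must vanish mod 4 for every Boolean assignment
    assert (a + b + c + (a ^ b) + (a ^ c) + (b ^ c) + (a ^ b ^ c)) % 4 == 0
    # rewriting used in Step n: u*(a XOR b XOR c) -> (4-u)*(six shorter terms)
    for u in (1, 2, 3):
        lhs = (u * (a ^ b ^ c)) % 4
        rhs = ((4 - u) * (a + b + c + (a ^ b) + (a ^ c) + (b ^ c))) % 4
        assert lhs == rhs
print("phase identity holds for all 8 Boolean assignments")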
Since allandgates commute, Phase gates can be collected on the left side of the circuit. This results in the ability to express phase polynomial construction as a -P-CZ- circuit.We conclude the entire construction via obtaining the linear reversible function g(x_1,x_2,...,x_n) as a -C- stage, with the overall computation described as a -P-CZ-C- circuit. Note that -P-CZ-C- can also be written as -C-P-CZ-, if one first synthesizes the linear reversible function g(x_1,x_2,...,x_n)=(g_1(x_1,x_2,...,x_n),g_2(x_1,x_2,...,x_n),...,g_n(x_1,x_2,...,x_n)), and then expresses the phase polynomial in terms of the degree-2 terms over the set {g_1,g_2,...,g_n}. Other ways to write such a computation include -CZ-P-C- and -C-CZ-P-, that are obtained from the first two by commuting -P- and -CZ- stages. -H-C-P-C-P-C-H-P-C-P-C- <cit.> = -H-C-CZ-P-H-P-CZ-C-.§ -C- VS -CZ- We have previously noted thatandgates have a comparable cost as far as their implementation within some QIP proposals is concerned.In this section, we study{, , } implementations of stages -C- and -CZ-.The goal is to provide further evidence in support of the statement that -CZ- can be thought of as a simpler stage compared to the -C- stage, and going beyond counting the degrees of freedom argument. Optimal quantum circuit over {} library for a -CZ- stage has at most n(n-1)/2gates. Indeed, allgates commute, which limits the expressive power of the circuits overgates.However, once we add the non-commutinggate, and after that the Phase gate, the situation changes.We can now implement -CZ- circuits more efficiently, such as illustrated by the circuit identities shown in Fig. <ref>.The unitary implemented by the circuitry shown in Fig. <ref> requires 7gates as a {} circuit, 6 gates as a {,} circuit, and only 5 two-qubit gates as a {,,} circuit.This illustrates that theandgates are important in constructing efficient -CZ- circuits.We may consider adding theandgates to the {} library in hopes of constructing more efficient circuits implementing the -C- stage.However, as the following lemma shows, this does not help.Any {, , } circuit implementing an element of the layer -C- using a non-zero number ofandgates is suboptimal. Eachgate applied to a qubit x can be expressed as a phase polynomial 1· x over the identity reversible linear function.Eachgate applied to a set of qubits y and z can be expressed as a phase polynomial y + z + 3(y ⊕ z) and the identity reversible function.Removing allandgates from the given circuit thus modifies only the phase polynomial part of its phase polynomial description.Removing allandgates from the {, , } circuit guarantees that the phase polynomial of the resulting circuit equals to the identity, such as required in the -C- stage.This results in the construction of a shorter circuit in cases when the originalandgate count was non-zero. We next show in Table <ref> optimal counts and upper bounds on the number of gates it takes to synthesize the most difficult function from stages -C- and -CZ- for some small n. Observe how the two-qubit gate counts for the -CZ- stage, when constructed as a circuit over{, , } library, remain lower than those for the -C- stage. 
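The degrees-of-freedom comparison underlying this section can also be reproduced numerically. The sketch below (ours) simply evaluates the counting formulas quoted in Section 1 — the number of -C- layers (invertible linear maps over F_2^n), the number of -CZ- layers, and the stabilizer group size — and prints their base-2 logarithms, which scale as n^2, n^2/2, and 2n^2 respectively:

from math import log2

def c_layers(n):      # prod_{j=0}^{n-1} (2^n - 2^j), i.e. |GL(n, 2)|
    r = 1
    for j in range(n):
        r *= (1 << n) - (1 << j)
    return r

def cz_layers(n):     # one optional CZ per qubit pair: 2^(n(n-1)/2)
    return 1 << (n * (n - 1) // 2)

def sp_size(n):       # |Sp(2n, 2)| = 2^(n^2) * prod_{j=1}^{n} (2^(2j) - 1)
    r = 1 << (n * n)
    for j in range(1, n + 1):
        r *= (1 << (2 * j)) - 1
    return r

for n in (2, 4, 8, 16):
    print(n, round(log2(c_layers(n)), 1), round(log2(cz_layers(n)), 1),
          round(log2(sp_size(n)), 1))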
In <cit.> an asymptotically optimal algorithm for the {CNOT} synthesis of arbitrary -C- stage functions was reported, leading to a worst-case gate complexity of O(n^2/log n). It is possible that an asymptotically optimal algorithm for {CNOT, CZ, P} circuits implementing arbitrary -CZ- stage functions can be developed, at which point its complexity has to be O(n^2/log n). To determine which of the two results in shorter circuits, one has to develop the constants in front of the leading complexity terms. We point out that gate count is only one of several possible metrics of efficiency. For instance, two-qubit gate depth over the Linear Nearest Neighbour (LNN) architecture is also an important metric of efficiency. This metric has been applied in <cit.> to show an asymptotically optimal upper bound of 5n CNOT layers required to obtain an arbitrary -C- stage. Define -CZ↕- to be -CZ- accompanied by the complete qubit reversal (i.e., the linear reversible mapping |x_1x_2...x_n⟩ ↦ |x_nx_{n-1}...x_1⟩). We next show that -CZ↕- can be executed as a two-qubit gate depth-(2n+2) computation over LNN. This result will be used to reduce the depth in the implementation of arbitrary stabilizer circuits. -CZ↕- can be implemented as a CNOT depth-(2n+2) circuit. Consider the phase polynomial description of the circuit -CZ↕-. However, rather than describe both parts of the expression, the phase polynomial itself and the linear reversible transformation, over the set of primary variables, we will describe the phase polynomial over the variables y_1,y_2,...,y_n defined as follows: y_1 := x_1, y_2 := x_1 ⊕ x_2, ..., y_n := x_1 ⊕ x_2 ⊕ ... ⊕ x_n. This constitutes the change of basis {x_1,x_2,...,x_n} ↦ {y_1,y_2,...,y_n}. Similarly to how it was done in the proof of Theorem <ref>, we reduce the phase polynomial representation of -CZ↕- to the application of Phase gates to the EXORs of pairs and the individual variables from the set {y_1,y_2,...,y_n}, ∑_{j=1}^n u_j y_j + ∑_{j=1}^n ∑_{k=j+1}^n u_{j,k}(y_j ⊕ y_k), and the linear reversible function g(x_1,x_2,...,x_n): |x_1x_2...x_n⟩ ↦ |x_nx_{n-1}...x_1⟩. Observe that y_{j-1} ⊕ y_k = x_j ⊕ x_{j+1} ⊕ ... ⊕ x_k (setting y_0 := 0), and thereby this linear function can be encoded by the integer segment [j,k]. The primary variable x_j admits the encoding [j,j]. We use this notation next. In the following we implement the pair of the phase polynomial expression and the reversal of qubits (a linear reversible function) via a quantum circuit. Observe that the swapping operation g(x_1,x_2,...,x_n): |x_1x_2...x_n⟩ ↦ |x_nx_{n-1}...x_1⟩ can be implemented as a circuit similar to the one from Theorem 5.1 of <cit.> in depth 2n+2. The rest of the proof concerns the ability to insert Phase gates into the circuit accomplishing the reversal of qubits so as to allow the implementation of each term in the phase polynomial, eq. (<ref>). Since our qubit reversal circuit is slightly different from the one used in <cit.>, and we explore its structure more extensively, we describe it next. It consists of n+1 alternating stages, S_1 and S_2, where S_1 := CNOT(x_1;x_2)CNOT(x_3;x_4)...CNOT(x_{n-2};x_{n-1}) · CNOT(x_3;x_2)CNOT(x_5;x_4)...CNOT(x_n;x_{n-1}) for odd n, and S_1 := CNOT(x_1;x_2)CNOT(x_3;x_4)...CNOT(x_{n-1};x_n) · CNOT(x_3;x_2)CNOT(x_5;x_4)...CNOT(x_{n-1};x_{n-2}) for even n, is a depth-2 circuit composed of CNOT gates, where CNOT(x_a;x_b) denotes the CNOT with control x_a and target x_b.
Similarly, S_2 := CNOT(x_2;x_1)CNOT(x_4;x_3)...CNOT(x_{n-1};x_{n-2}) · CNOT(x_2;x_3)CNOT(x_4;x_5)...CNOT(x_{n-1};x_n) for odd n, and S_2 := CNOT(x_2;x_1)CNOT(x_4;x_3)...CNOT(x_n;x_{n-1}) · CNOT(x_2;x_3)CNOT(x_4;x_5)...CNOT(x_{n-2};x_{n-1}) for even n, is also a depth-2 circuit composed of CNOT gates. We refer to the concatenation of S_1 and S_2 as S. The goal is to show that after ⌈n/2⌉ applications of the circuit S we are able to cycle through all n(n+1)/2 linear functions [j,k], j ≤ k. The remainder of the proof works slightly differently depending on the parity of n. First, choose odd n=2m+1. Consider two patterns of length 2n-3, Pj := (n-1, n-3, n-3, ..., 4, 4, 2, 2, 1, 1, 3, 3, ..., n-2, n-2) and Pk := (3, 3, 5, 5, ..., n, n, n-1, n-1, n-3, n-3, ..., 6, 6, 4, 4, 2). Observe by inspection that the i-th linear function computed by a single application of the stage S is given by the formula [Pj(n-3+i), Pk(i)], where Pj(l) and Pk(l) return the l-th component of the respective pattern. It may further be observed, via direct multiplication by the linear reversible matrix corresponding to the transformation S, that the i-th component upon t (t ≤ m) applications of the circuit S is computable by the following formula, [Pj(n-1-2t+i), Pk(2t-2+i)] = [Pj(n-3-2(t-1)+i), Pk(2(t-1)+i)]. A simple visual explanation can be given: at each application of S the pattern Pj is shifted by two positions to the left (down, Fig. <ref>), whereas the pattern Pk gets shifted by two positions to the right (up, Fig. <ref>). Observe that every [j,k], j=1..n, k=1..n, j ≤ k is generated. Indeed, a given [j,k] may only be generated at most once by the 0 to m applications of the circuit S. This is because once a given j meets a given k for the first time, at each following step the respective value k gets shifted away from j, never to meet again. We next employ a counting argument to show that all functions [j,k] are generated. Indeed, the total number of functions generated by 0 to m applications of the stage S is (m+1)n = ((n-1)/2 + 1)n = n(n+1)/2, each linear function generated is of the type [j,k] (j=1..n, k=1..n, j ≤ k), none of which can be generated more than once, and their total number is n(n+1)/2. This means that every [j,k] is generated. We illustrate the construction of the circuit implementing -CZ↕- for n=7 in Fig. <ref>. For even n=2m the construction works similarly. The patterns Pj and Pk are (n, n-2, n-2, n-4, n-4, ..., 2, 2, 1, 1, 3, 3, ..., n-3, n-3, n-1) and (3, 3, 5, 5, ..., n-1, n-1, n, n, n-2, n-2, ..., 4, 4, 2, 2), respectively. The formula for computing the linear function [j,k] for the i-th coordinate after t applications of S is [Pj(n-2t+i), Pk(2t-2+i)].
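Both claims, that every interval function [j,k] is generated and that the net effect of the n+1 stages is the qubit reversal, can be checked by direct simulation over F_2. A sketch for odd n (zero-based indexing; our own code, not the authors'):

import numpy as np

def cnot(state, c, t):
    state[t] ^= state[c]                           # target picks up the control's function

def apply_S(state, n):                             # one application of S = S_1 then S_2 (odd n)
    for a in range(0, n - 2, 2): cnot(state, a, a + 1)   # CNOT(x1;x2)CNOT(x3;x4)...
    for a in range(2, n, 2):     cnot(state, a, a - 1)   # CNOT(x3;x2)CNOT(x5;x4)...
    for a in range(1, n - 1, 2): cnot(state, a, a - 1)   # CNOT(x2;x1)CNOT(x4;x3)...
    for a in range(1, n - 1, 2): cnot(state, a, a + 1)   # CNOT(x2;x3)CNOT(x4;x5)...

n = 7
state = np.eye(n, dtype=np.uint8)                  # wire i carries x_{i+1}
seen = set()

def record(state, seen):
    for row in state:                              # each wire carries a contiguous interval
        idx = np.nonzero(row)[0]
        assert idx.max() - idx.min() + 1 == len(idx)
        seen.add((idx.min() + 1, idx.max() + 1))   # record [j,k], 1-based

record(state, seen)
for _ in range((n + 1) // 2):                      # ceil(n/2) applications, i.e. n+1 stages
    apply_S(state, n)
    record(state, seen)

assert len(seen) == n * (n + 1) // 2               # all intervals [j,k], j <= k, appear
assert (state == np.eye(n, dtype=np.uint8)[::-1]).all()   # the net map is qubit reversal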
After m applications of the circuit S we generate the linear functions x_n, x_{n-1}, ..., x_4, x_2 in addition to the m new linear functions of the form [j,k] (j<k). Considering circuit depth makes most sense when depth is measured across the most computationally intensive operations. In both of the two leading approaches to quantum information processing, superconducting circuits <cit.> and trapped ions <cit.> (limiting the attention to fully programmable universal quantum machines), the two-qubit gates take longer to execute and are associated with lower fidelity. As such, they constitute the most expensive resource and motivate our choice to measure depth in terms of the two-qubit operations. The selection of the LNN architecture to measure the depth over is motivated by the desire to restrict arbitrary interaction patterns to a reasonable set. Both superconducting and trapped-ion qubit-to-qubit connectivity patterns <cit.> are furthermore such that they allow embedding the linear chain in them. A further observation is that the two-qubit CNOT gate may not be native to a physical implementation, and therefore the implementation may likely use correcting single-qubit gates before and after using a specific two-qubit interaction. This means that interleaving the two-qubit gates with the single-qubit gates, as done in the proof of Theorem <ref>, may not increase the depth, and restricting the depth calculation to just the two-qubit stages is appropriate. We did, however, report enough detail to develop the depth figure over both single- and two-qubit gates for the implementations of stabilizer circuits relying on our construction. An arbitrary n-qubit stabilizer unitary can be executed in two-qubit gate depth 14n-4 as an {H, CNOT, P} circuit over the LNN architecture. Firstly, observe that -H-C-CZ↕-P-H-P-CZ↕-C- = -H-C-CZ-P-H-P-CZ-C-. This is because both -CZ↕- stages reverse the order of qubits, and therefore the effect of the qubit reversal cancels out. The two-qubit gate depth of the -C- stage is 5n <cit.>, and the two-qubit gate depth of the -CZ↕- stage is 2n+2, per Theorem <ref>. This means that the overall two-qubit gate depth is 2·5n + 2·(2n+2) = 14n+4. This number can be reduced somewhat by the following two observations. Name the individual stages in the target decomposition as follows: -H-C_1-CZ_1-P-H-P-CZ_2-C_2-. Using the construction in Theorem <ref>, we can implement -CZ_1- without the first S circuit, by applying the Phase gates at the end of it (see Fig. <ref> for an illustration). The first S circuit can then be combined with the -C_1- stage preceding it. This results in a saving of 4 layers of two-qubit computations. Similarly, -CZ_2- can be implemented up to S if it is implemented in reverse, with the phases applied in the beginning (i.e., at the end, but inverting the circuit). This allows merging the depth-4 computation S with the stage -C_2- that follows. These two modifications result in the improved depth figure of 14n+4 - 2·4 = 14n-4.
Observe how the aggregate contribution to the depth from both -CZ↕- stages used in our construction, ∼4n, is less than that from a single -C- stage, 5n. The result of <cit.> can be applied to the 11-stage decomposition -H-C-P-C-P-C-H-P-C-P-C- of <cit.> to obtain a two-qubit gate depth-25n LNN-executable implementation of an arbitrary stabilizer unitary. In comparison, our reduced 8-stage decomposition -H-C-CZ-P-H-P-CZ-C- allows execution in the LNN architecture in only 14n-4 two-qubit stages. § STABILIZERS AND THE SYMPLECTIC GROUP We now establish a normal form for stabilizer circuits that eliminates two of the layers of the 11-layer form given in <cit.>, while using the same types of layers. As already mentioned, the stabilizer circuits form a finite group which, modulo the group that is generated by the center and the Pauli subgroup, is isomorphic to the binary symplectic group defined as follows (see also <cit.> and <cit.>): The group Sp(2n, F_2) of symplectic matrices of size 2n × 2n with entries over the finite field F_2 = {0,1} is defined as Sp(2n, F_2) := { A ∈ F_2^{2n×2n} : A^t J A = J }, where J = [[0_n, 1_n], [1_n, 0_n]], and 1_n and 0_n denote the identity matrix and the all-zero matrix of size n × n (the subscript may furthermore be dropped when it is clear what the dimension is; 0 may furthermore be used to denote a rectangular all-zero matrix), respectively. Similarly to <cit.> we can work with a tableau representation for symplectic matrices, where we omit the column vector r of <cit.>, which corresponds to an overall sign that, if needed, can be obtained via a single layer of Z gates. Definition <ref> implies that the square block matrix M = [[A, B], [C, D]] is symplectic if and only if the following four conditions hold: A^tC = C^tA, A^tD + C^tB = 1_n, B^tD = D^tB, B^tC + D^tA = 1_n. In other words, two columns c_i and c_j of M are perpendicular with respect to the symplectic inner product, unless they form one out of the n symplectic pairs (c_i, c_{n+i}), where i=0,1,...,n-1, in which case the symplectic inner product evaluates to 1. It should be noted that if M is symplectic, so is M^{-1}, as the symplectic matrices form a group. As M^{-1} = [[D^t, -B^t], [-C^t, A^t]], the equation (M^{-1})^t J M^{-1} = J implies that the following four conditions hold for M as well: AB^t = BA^t, AD^t + BC^t = 1_n, CD^t = DC^t, CB^t + DA^t = 1_n. In other words, two rows r_i and r_j of M are also perpendicular with respect to the symplectic inner product, unless they form one out of the n symplectic pairs (r_i, r_{n+i}), where i=0,1,...,n-1, in which case the symplectic inner product evaluates to 1. Equations (<ref>) and (<ref>) will be useful later when we bring a given stabilizer circuit, represented as a symplectic matrix, into a suitable normal form. The right-side action of the stabilizer circuit layers -H-, -P-, and -C- on a symplectic matrix M can be described as follows (see also <cit.>): * Right multiplication with a Hadamard gate on qubit k corresponds to exchanging columns k and n+k of M. * Right multiplication with a Phase gate on qubit k corresponds to the addition modulo 2 of column k of M to column n+k; * Right multiplication with a CNOT gate with control j and target k, 1 ≤ j,k ≤ n, corresponds to the addition modulo 2 of column j to column k of M and the addition modulo 2 of column n+k to column n+j. Similarly, the left-side action on the rows of M can be defined.
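These column actions are straightforward to express on the 2n × 2n tableau. A minimal sketch over F_2 (helper names are of our choosing):

import numpy as np

def is_symplectic(M):
    n = M.shape[0] // 2
    Z, I = np.zeros((n, n), dtype=int), np.eye(n, dtype=int)
    J = np.block([[Z, I], [I, Z]])
    return ((M.T @ J @ M) % 2 == J).all()          # A^t J A = J over F_2

def right_H(M, k, n):                              # H on qubit k: swap columns k and n+k
    M[:, [k, n + k]] = M[:, [n + k, k]]

def right_P(M, k, n):                              # P on qubit k: column n+k += column k
    M[:, n + k] ^= M[:, k]

def right_CNOT(M, j, k, n):                        # CNOT(j;k): col k += col j, col n+j += col n+k
    M[:, k] ^= M[:, j]
    M[:, n + j] ^= M[:, n + k]

n = 3
M = np.eye(2 * n, dtype=int)
right_H(M, 0, n); right_CNOT(M, 0, 1, n); right_P(M, 2, n)
assert is_symplectic(M)                            # each generator preserves the group property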
§ BN-PAIRS AND BRUHAT DECOMPOSITION A property of the symplectic group that we exploit to show an asymptotically optimal decomposition is that this group can be written as a disjoint union Sp(2n, F_2) = ⊔_{w ∈ W} BwB, where B is the Borel subgroup of Sp(2n, F_2) and W labels a system of representatives of the Weyl group of Sp(2n, F_2). For complex Lie groups this decomposition is also known as the Bruhat decomposition <cit.>. However, even over a finite field such as F_2 the decomposition eq. (<ref>) can be suitably defined using the notion of BN-pairs <cit.>, <cit.>. As we will see below, we can identify B with a subgroup of Sp(2n, F_2) that is isomorphic to a subgroup of the upper triangular matrices, and we can identify W with a wreath product of Z_2 with S_n, which corresponds to the group generated by all qubit permutations together with all possible Hadamard gate combinations on n qubits. (BN-pair) Let G be a group and B, N ⊆ G be two subgroups such that G = ⟨B, N⟩ and T := B ∩ N is a normal subgroup of N. Let S be a set of generators for W := N/T. Denote by C(w) = BwB the double coset corresponding to the representative w ∈ W. If the following two properties hold for all s ∈ S and all representatives w ∈ W: * C(s)C(w) ⊆ C(w) ∪ C(sw), * sBs^{-1} ⊄ B, then (B, N) is called a BN-pair and the data (G, B, N, S) is called a Tits system; see also <cit.>. For the group G = Sp(2n, F_2) the subgroup B can be identified with the set B_n defined in Fig. <ref>. Completing the description of the BN-pair in the case of Sp(2n, F_2), we have to determine the subgroups N, T, and W. In the case of the finite field F_2 it turns out that T is trivial and N consists of the group generated by all permutation matrices and all Hadamard gates. This means that a set S of generators for W can be defined as S := { [[0_k, 0, 1_k, 0], [0, 1_{n-k}, 0, 0], [1_k, 0, 0_k, 0], [0, 0, 0, 1_{n-k}]] : k = 0..n } ∪ { [[τ_i, 0], [0, τ_i]] : τ_i = (i, i+1), i = 1..n-1 }. The first set in eq. (<ref>) corresponds to the tensor products of Hadamard matrices, namely {w_k : k = 0,1,...,n}, where w_k = H_2^{⊗k} ⊗ 1_2^{⊗(n-k)}, whereas the second set corresponds to wire permutations of adjacent wires. Furthermore, we note that B_n = { [[A, 0_n], [0_n, (A^t)^{-1}]] [[1_n, B], [0_n, 1_n]] : A ∈ F_2^{n×n} invertible and upper triangular, B ∈ F_2^{n×n}, B = B^t }, which implies that B_n is isomorphic to a subgroup of the upper triangular matrices, i.e., in particular, it is a solvable group. This decomposition also implies that there are n^2/2 Boolean degrees of freedom in the part corresponding to [[A, 0_n], [0_n, (A^t)^{-1}]] and n^2/2 Boolean degrees of freedom in the part corresponding to [[1_n, B], [0_n, 1_n]], as B is symmetric. Hence, matrices in B_n have an overall of n^2 Boolean degrees of freedom. Finally, note that the elements of the form (τ, τ) stabilize the set B_n^0 as they leave the diagonal part invariant and map the set of symmetric matrices into itself. § COMPUTING THE BRUHAT DECOMPOSITION We first state two lemmas that will be useful later for a step-wise decomposition of a given stabilizer circuit. For any symmetric matrix A ∈ F_2^{n×n} there exist matrices Λ, U ∈ F_2^{n×n} such that A = UU^t + Λ, where Λ is diagonal and U invertible and upper triangular. In <cit.> a decomposition M = LL^t + Λ was derived, where L is lower triangular. By conjugating this expression with a permutation matrix that exchanges the rows (1,n), (2,n-1), ...
we see that the same proof also gives rise to a decomposition into M = UU^t + Λ' with U upper triangular and some diagonal matrix Λ'. Any matrix in B_n can be written in the form -C-P-C-P-, or alternatively in the form -P-C-P-C-, with all -C- layers consisting of gates in C_n^↓. To see this, we first apply Lemma <ref> to decompose a given matrix A = [[1_n, B], [0_n, 1_n]] into the product A = [[1_n, UU^t], [0_n, 1_n]] [[1_n, Λ], [0_n, 1_n]]. Now, the first factor can be implemented in the form -C-P-C- and we get the following overall circuit of the form -C-P-C-P- for A: A = [[U, 0], [0, (U^t)^{-1}]] [[1_n, 1_n], [0_n, 1_n]] [[U^{-1}, 0], [0, U^t]] [[1_n, Λ], [0_n, 1_n]]. Clearly, the -C- layers are in C_n^↓. The other decomposition -P-C-P-C- is obtained similarly, by factoring out the Λ component on the left. For any matrix M ∈ F_2^{n×2n} that is the lower n × 2n part of a 2n × 2n symplectic matrix, there exist a lower triangular matrix L, an upper triangular matrix U, permutation matrices σ, τ ∈ S_n, and k, 0 ≤ k ≤ n, such that M = Lσ [[1_k, 0, D_1, D_2], [0, 0, 0, 1_{n-k}]] (τ, τ)(U, (U^{-1})^t). The main idea is to use the fact that any matrix over F_2 can be decomposed into the product of a triangular matrix, a permutation pattern (i.e., a matrix that has at most one non-zero entry in each row and column), and another triangular matrix. LU decomposition with pivoting is a special case of this decomposition <cit.>, <cit.>; however, in our situation we cannot assume that we know the pivoting of the matrix. Using L, P, and U as shorthand for lower triangular, permutation pattern, and upper triangular matrices, it is known that all four combinations M = LPL = LPU = UPL = UPU are possible; see, e.g., <cit.> for a discussion. For instance, for LPL we start in the upper right corner of M and eliminate the non-zero matrix entries going down and left. For LPU, we start in the upper left corner and eliminate the non-zero matrix entries going down and right. The remaining pattern defines the P-part of the matrix. Since, by assumption, M is a part of a 2n × 2n symplectic matrix, we obtain that rk(M) = n, which means that using an LPU decomposition on the left n × n block of M we can find L_1 and U_1 such that L_1 M (U_1, (U_1^{-1})^t) = [P_1 | M_1], where P_1 is a permutation pattern and M_1 is another matrix. By considering the support of P_1 we can define row indices R := {i ∈ {1,2,...,n} : (P_1)_{i,*} = 0^n} and column indices C := {j ∈ {1,2,...,n} : (P_1)_{*,j} = 0^n}. If k := rk(P_1), then clearly |R| = |C| = n-k. Using an LPL decomposition on the restriction of the right block M_1 of this new matrix to the rows and columns in R × C, we therefore obtain L_2, L_2', and permutation matrices σ, τ such that σ L_2 L_1 M (U_1, (U_1^{-1})^t)(((L_2')^{-1})^t, L_2')(τ, τ) = [[1_k, 0, D_1, D_2], [0, 0, 0, 1_{n-k}]] for certain D_1 ∈ F_2^{k×k} and D_2 ∈ F_2^{k×(n-k)}. Any Clifford circuit on n qubits can be written in the form -P-C-P-C-H-C-P-C-P-. We start with the 2n × 2n symplectic matrix M of the form M = [[A, B], [C, D]], where A, B, C, and D are in F_2^{n×n}. We next give an algorithm that synthesizes M in a canonical form. The algorithm proceeds in several steps, by clearing out the entries of M via left-hand and right-hand multiplications by other matrices, until finally only a permutation matrix remains, which then corresponds to a Hadamard layer up to a permutation of qubits.
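Before stepping through the algorithm, the four-factor -C-P-C-P- product above can be verified numerically over F_2. The following sketch (our own helper names) checks it for a random invertible upper triangular U and diagonal Λ:

import numpy as np
rng = np.random.default_rng(1)

def gf2_inv(A):                                    # Gauss-Jordan inverse over F_2
    n = len(A); M = np.concatenate([A % 2, np.eye(n, dtype=int)], axis=1)
    for c in range(n):
        p = c + np.nonzero(M[c:, c])[0][0]
        M[[c, p]] = M[[p, c]]
        for r in range(n):
            if r != c and M[r, c]:
                M[r] ^= M[c]
    return M[:, n:]

def upper_block(X):                                # [[I, X], [0, I]]: a -P- layer when X = X^t
    n = len(X); Z, I = np.zeros((n, n), dtype=int), np.eye(n, dtype=int)
    return np.block([[I, X % 2], [Z, I]])

def diag_block(A):                                 # [[A, 0], [0, (A^t)^{-1}]]: a -C- layer
    n = len(A); Z = np.zeros((n, n), dtype=int)
    return np.block([[A % 2, Z], [Z, gf2_inv(A.T)]])

n = 4
U = np.triu(rng.integers(0, 2, (n, n)), 1) + np.eye(n, dtype=int)   # invertible upper triangular
Lam = np.diag(rng.integers(0, 2, n))
lhs = upper_block((U @ U.T + Lam) % 2)             # [[I, U U^t + Lam], [0, I]]
rhs = (diag_block(U) @ upper_block(np.eye(n, dtype=int)) @ diag_block(gf2_inv(U)) @ upper_block(Lam)) % 2
assert (lhs == rhs).all()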
Step 1. We apply Lemma <ref> to the submatrix [C | D]. Note that since M ∈ Sp(2n, F_2) we have C^tD = D^tC, i.e., the conditions of the lemma are satisfied and we can find a lower triangular matrix L, an upper triangular matrix U, and two permutation matrices σ, τ ∈ S_n such that σL[C | D][[U, 0], [0, (U^t)^{-1}]][[τ, 0], [0, τ]] = [[1_k, 0, D_1, D_2], [0, 0, 0, 1_{n-k}]], where 0 ≤ k ≤ n, D_1 ∈ F_2^{k×k}, D_2 ∈ F_2^{k×(n-k)}, and 0 denotes all-zero matrices of the appropriate sizes. Application of these operations to the initial matrix forces some simplifications: [[σ, 0], [0, σ]][[(L^t)^{-1}, 0], [0, L]] M [[U, 0], [0, (U^t)^{-1}]][[τ, 0], [0, τ]] = [[A_1, A_2, B_1, B_2], [A_3, A_4, B_3, B_4], [1_k, 0, D_1, D_2], [0, 0, 0, 1_{n-k}]] =: M_1. Here, A_2 = 0 (implying A_4 = 1_{n-k}) and A_1 is symmetric, because of the symplectic conditions between the last two block rows and the first two block rows of this matrix. Step 2. We left-multiply the matrix M_1 by a matrix in B_n^0, as follows: [[1_k, 0, A_1, A_3^t], [0, 1_{n-k}, A_3, 0], [0, 0, 1_k, 0], [0, 0, 0, 1_{n-k}]] [[A_1, 0, B_1, B_2], [A_3, 1_{n-k}, B_3, B_4], [1_k, 0, D_1, D_2], [0, 0, 0, 1_{n-k}]] = [[0, 0, B_1', B_2'], [0, 1_{n-k}, B_3', B_4'], [1_k, 0, D_1, D_2], [0, 0, 0, 1_{n-k}]] =: M_2. Note that since A_1 is symmetric the matrix [[A_1, A_3^t], [A_3, 0]] is symmetric as well. We can apply Lemma <ref> to obtain a decomposition of this upper triangular symplectic matrix applied from the left as -P-C-P-C-, where all -C- layers are in C_n^↓. Step 3. Note that because of the symplectic condition between columns one and three of M_2 we must have B_1' = 1_k and that D_1 is symmetric. Similarly, the symplectic condition between columns two and four of M_2 implies that B_2' = 0 and B_4' is symmetric. Moreover, by considering the symplectic condition between rows two and three, which needs to be zero, we obtain that B_3' = D_2^t. We can therefore apply a final column operation to M_2 to clear out the remaining entries by multiplying on the right: [[0, 0, 1_k, 0], [0, 1_{n-k}, D_2^t, B_4'], [1_k, 0, D_1, D_2], [0, 0, 0, 1_{n-k}]] [[1_k, 0, D_1, D_2], [0, 1_{n-k}, D_2^t, B_4'], [0, 0, 1_k, 0], [0, 0, 0, 1_{n-k}]] = [[0, 0, 1_k, 0], [0, 1_{n-k}, 0, 0], [1_k, 0, 0, 0], [0, 0, 0, 1_{n-k}]] =: M_3. As in Step 2, the symmetric matrix [[D_1, D_2], [D_2^t, B_4']] (its symmetry follows from its block expression and the previously established fact that both D_1 and B_4' are symmetric) can be decomposed using Lemma <ref> to obtain a representation of the overall upper triangular matrix applied from the right in the form -P-C-P-C-, where again all -C- layers are in C_n^↓. The final matrix M_3 corresponds to a sequence of Hadamard gates applied to the first k qubits. Overall, we applied the sequence U_1 π_1 T_1 M T_2 π_2 U_2 = H, where H is a product of Hadamard matrices applied to the first k basis states, T_1, T_2 ∈ C_n^↓, π_1, π_2 ∈ S_n, and U_1, U_2 ∈ B_n^0. Multiplication by inverses from both sides yields M = T_1^{-1} π_1^{-1} U_1^{-1} H U_2^{-1} π_2^{-1} T_2^{-1}. Now, notice that permutations stabilize B_n^0, i.e., we can find V_1, V_2 ∈ B_n^0 such that M = T_1^{-1} V_1 π_1^{-1} H π_2^{-1} V_2 T_2^{-1}. Note that V_1 is of the form -C-P-C-P- with the first -C- layer in C^↓, i.e., T_1^{-1} and V_1 can be combined into one matrix W_1 ∈ B_n. Similarly, V_2 can be written in the form -P-C-P-C- and therefore V_2 and T_2^{-1} can be combined into one matrix W_2 ∈ B_n. Note finally that we can implement π_1^{-1} H π_2^{-1} using a single layer of Hadamard gates H_1 acting non-trivially on some k qubits, and merge the qubit swapping stage with either W_1 or W_2.
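As an aside, the B_n and B_n^0 elements manipulated in these steps are easy to sanity-check numerically; a sketch under our own naming, with construction choices (upper triangular A, symmetric B) matching the set B_n as described above:

import numpy as np
rng = np.random.default_rng(3)

def gf2_inv(A):
    n = len(A); M = np.concatenate([A % 2, np.eye(n, dtype=int)], axis=1)
    for c in range(n):
        p = c + np.nonzero(M[c:, c])[0][0]
        M[[c, p]] = M[[p, c]]
        for r in range(n):
            if r != c and M[r, c]:
                M[r] ^= M[c]
    return M[:, n:]

def is_symplectic(M):
    n = M.shape[0] // 2
    Z, I = np.zeros((n, n), dtype=int), np.eye(n, dtype=int)
    J = np.block([[Z, I], [I, Z]])
    return ((M.T @ J @ M) % 2 == J).all()

n = 4
Z, I = np.zeros((n, n), dtype=int), np.eye(n, dtype=int)
A = np.triu(rng.integers(0, 2, (n, n)), 1) + I        # invertible upper triangular
R = rng.integers(0, 2, (n, n))
B = (np.triu(R) + np.triu(R, 1).T) % 2                # symmetric
elem = (np.block([[A, Z], [Z, gf2_inv(A).T]]) @ np.block([[I, B], [Z, I]])) % 2
assert is_symplectic(elem)                            # an element of B_n
assert is_symplectic(np.block([[I, B], [Z, I]]))      # an element of B_n^0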
Overall, we have that M can be written as M = W_1 H π W_2 ∈ -C-P-C-P-H-P-C-P-C-. Since a -C-P-C-P- circuit can be written as a -P-C-P-C- circuit, the claimed decomposition follows. Combining the results of Theorems <ref> and <ref>, and Corollary <ref>, allows us to obtain the main result of this paper. An arbitrary stabilizer circuit can be written as a 7-stage layered decomposition -C-CZ-P-H-P-CZ-C-. It is executable in the LNN architecture as a two-qubit gate depth-(14n-4) circuit. Defining B := B_n and N (= W) to be the group generated by -H- and all wire permutations, we obtain that B and N define a BN-pair for Sp(2n, F_2). From Theorem <ref> we obtain, in particular, that B and N generate the entire group Sp(2n, F_2). Clearly we have that T = B ∩ N is trivial, i.e., it is normal in N. The stated property sBs^{-1} ⊄ B for all s ∈ S clearly holds for our choice of the generator set S in eq. (<ref>), as Hadamard gates as well as qubit swaps do not preserve the directional CNOT gates. Finally, to establish the coset multiplication rule C(s)C(w) ⊆ C(w) ∪ C(sw) we use <cit.>. The Bruhat decomposition gives rise to an asymptotically tight parametrization of all 2^{2n^2 + O(n)} stabilizer circuits. This is a direct consequence of the decomposition into layers of the form -C-P-C-P-H-P-C-P-C- proved in Theorem <ref>. From the proof of the theorem we see that the -C-P-C-P- and the -P-C-P-C- layers correspond to the elements of B_n, each of which has n^2 + o(n^2) Boolean degrees of freedom. This yields the claimed statement. § NORMAL FORM FOR STABILIZER CIRCUITS The Bruhat decomposition eq. (<ref>) allows us to characterize the possible block structures that stabilizer operators might have when considered as unitary matrices of size 2^n × 2^n, and how these behave under multiplication. Let C be a stabilizer circuit. Let B · w(C) · B denote the unique double coset that C lies in. Then we can represent w(C) by an element in Z_2^n ⋊ S_n, or equivalently by a matrix of the form Uπ, where U is a tensor product of k Hadamard matrices, where 1 ≤ k ≤ n, and π is a permutation matrix of n wires. By rearranging the non-identity Hadamard operators, we can represent such an element Uπ in the form σ(1_2^{⊗(n-k)} ⊗ H^{⊗k})τ, where π = στ. We call (k, σ, τ) the block structure of C. Note that whereas U and π are unique, in general σ and τ are not, as there is a degree of freedom corresponding to elements in S_{n-k} × S_k. However, the collection I ⊂ Z_{2^n}^2 defined as I := {(i,j) : |C_{i,j}| = (1/√2)^k} is uniquely defined by C and the corresponding block structure (k, σ, τ). As a corollary to Theorem <ref> we obtain the following multiplication rule for block structures. Let C_1 and C_2 be stabilizer circuits with block structures (k_1, σ_1, τ_1) and (k_2, σ_2, τ_2), respectively. Then the block structure of C_1C_2 is of the form (m, σ_3, σ_3^{-1}σ_1τ_1σ_2τ_2), where 0 ≤ m ≤ k_1 + k_2 and σ_3 ∈ S_n. Let w(C_1) = Uπ denote the representative of C_1 in the Weyl group. Write w(C_1) as a product over the generators S = S_h ∪ S_p, where S_h = {h_i = H_i : i ∈ {1,2,...,n}} and S_p = {p_i = (i, i+1) : i ∈ {1,2,...,n-1}}. As the Weyl group is a semidirect product, we can collect the factors corresponding to S_h and S_p together and write w(C_1) = ∏_{i=1}^{k_1} h_{t_{1,i}} ∏_{j=1}^{n_1} p_{s_{1,j}}. We similarly write w(C_2) and note that as W is a semidirect product, we get w(C_1)w(C_2) = ∏_{i=1}^{k_1} h_{t_{1,i}} ∏_{i=1}^{k_2} h_{π t_{2,i}} ∏_{j=1}^{n_1} p_{s_{1,j}} ∏_{j=1}^{n_2} p_{s_{2,j}}.
As there might be cancellation between the Hadamard matrices ∏_{i=1}^{k_1} h_{t_{1,i}} and ∏_{i=1}^{k_2} h_{π t_{2,i}}, we obtain that the Hadamard block can have m non-trivial factors, where 0 ≤ m ≤ k_1 + k_2. The permutational parts multiply, i.e., we conclude that the permuted block structure of the product is of the claimed form. § CONCLUSION In this paper, we reduced the 11-stage computation -H-C-P-C-P-C-H-P-C-P-C- <cit.> to the 9-stage decomposition -C-P-C-P-H-P-C-P-C-, relying on the Bruhat decomposition of the symplectic group. We showed that all -C- stages in our 9-stage decomposition correspond to upper triangular matrices. This leads to an asymptotically tight parameterization of the stabilizer group, matching its number of 2^{2n^2+O(n)} degrees of freedom. We then derived a 7-stage decomposition of the form -C-CZ-P-H-P-CZ-C-, relying on the stage -CZ-, not considered by <cit.>. We showed evidence that the -CZ- stage is likely superior to the comparable -C- stage. Indeed, the number of Boolean degrees of freedom in the -CZ- stage is only about a half of that in the -C- stage, two-qubit gate counts for optimal implementations of -CZ- circuits remain smaller than those for -C- circuits (see Table <ref>), and -CZ↕- computations were possible to implement in a factor of 2.5 less depth than -C- stage computations over the LNN architecture. We reported a two-qubit gate depth-(14n-4) implementation of stabilizer unitaries over the gate library {H, CNOT, P}, executable in the LNN architecture. This improves the previous result, a depth-25n circuit <cit.> executable over the LNN architecture. Our 7-stage construction can be written in 16 different ways, by observing that -C-CZ-P- can be written in 4 different ways: -C-CZ-P-, -C-P-CZ-, -P-CZ-C-, and -CZ-P-C-. For the purpose of practical implementation we believe a holistic approach to the implementation of the 3-layer stage -P-CZ-C- may be due, where the linear reversible function g(x_1,x_2,...,x_n) is implemented by the CNOT gates such that the intermediate linear Boolean functions generated go through a set that allows the implementation of the phase polynomial part. § ACKNOWLEDGEMENTS DM thanks Yunseong Nam from IonQ for helpful discussions. MR thanks Jeongwan Haah and Vadym Kliuchnikov from Microsoft Research for discussions. Circuit diagrams were drawn using the qcircuit.tex package, http://physics.unm.edu/CQuIC/Qcircuit/. ar:ag S. Aaronson and D. Gottesman. Improved simulation of stabilizer circuits. Phys. Rev. A, 70, 052328, 2004, arXiv:quant-ph/0406196. ar:ammr M. Amy, D. Maslov, M. Mosca, and M. Roetteler. A meet-in-the-middle algorithm for fast synthesis of depth-optimal quantum circuits. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 32(6):818–830, 2013, arXiv:1206.0758. Aschbacher:2000 M. Aschbacher. Finite Group Theory. Cambridge University Press, 2000. ar:bdsw C. H. Bennett, D. P. DiVincenzo, J. A. Smolin, and W. K. Wootters. Mixed state entanglement and quantum error correction. Phys. Rev. A 54:3824–3851, 1996, arXiv:quant-ph/9604024. Bourbaki:68 N. Bourbaki. Elements of Mathematics – Lie Groups and Lie Algebras, Chapters 4–6. Springer, 1968. Brown:89 K. S. Brown. Buildings. Springer, New York, 1989. ar:crss A. R. Calderbank, E. M. Rains, P. W. Shor, and N. J. A. Sloane. Quantum error correction and orthogonal geometry. Phys. Rev. Lett.
78:405–408, 1997, arXiv:quant-ph/9605005. ar:crss2 A. R. Calderbank, E. M. Rains, P. W. Shor, and N. J. A. Sloane. Quantum Error Correction Via Codes Over GF(4). IEEE Transactions on Information Theory, 44(4):1369–1387, 1998, arXiv:quant-ph/9608006. ATLAS J. H. Conway, R. T. Curtis, S. P. Norton, R. A. Parker, and R. A. Wilson. ATLAS of Finite Groups. Clarendon Press, Oxford, 1985. ar:deb S. Debnath, N. M. Linke, C. Figgatt, K. A. Landsman, K. Wright, and C. Monroe. Demonstration of a programmable quantum computer module. Nature 536:63–66, 2016, arXiv:1603.04512. ar:ggz J. Ghosh, A. Galiautdinov, Z. Zhou, A. N. Korotkov, J. M. Martinis, and M. R. Geller. High-fidelity controlled-σ_z gate for resonator-based superconducting quantum computers. Phys. Rev. A 87, 022309, 2013, arXiv:1301.1719. GvL:2000 G. H. Golub and Ch. F. van Loan. Matrix Computations. 3rd ed. Johns Hopkins University Press, 1996. www:g M. Grassl. Code Tables: Bounds on the parameters of various types of codes. http://www.codetables.de/, last accessed February 27, 2017. HJ:1985 R. A. Horn and Ch. R. Johnson. Matrix Analysis. Cambridge University Press, 1985. www:IBM IBM Quantum Experience, http://www.research.ibm.com/quantum/, last accessed February 27, 2017. ar:klr E. Knill, D. Leibfried, R. Reichle, J. Britton, R. B. Blakestad, J. D. Jost, C. Langer, R. Ozeri, S. Seidelin, and D. J. Wineland. Randomized benchmarking of quantum gates. Phys. Rev. A 77, 012307, 2008, arXiv:0707.0963. KS:2015 R. Koenig and J. A. Smolin. How to efficiently select an arbitrary Clifford group element. J. Math. Phys. 55, 122202, 2014, arXiv:1406.2170. ar:kms S. A. Kutin, D. P. Moulton, and L. M. Smithline. Computation at a distance. 2007, arXiv:quant-ph/0701194. ar:m D. Maslov. Basic circuit compilation techniques for an ion-trap quantum machine. New J. Phys. 19, 023035, 2017, arXiv:1603.07678. bk:nc M. A. Nielsen and I. L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, New York, 2000. ar:pmh K. N. Patel, I. L. Markov, and J. P. Hayes. Optimal synthesis of linear reversible circuits. Quantum Information and Computation 8(3&4):282–294, 2008. ar:s A. M. Steane. Quantum computing and error correction. 2003, arXiv:quant-ph/0304016. Strang:2012 G. Strang. Banded matrices with banded inverses and A = LPU. Proc. Fifth Intl. Congress of Chinese Mathematicians (ICCM2010), American Mathematical Society, Providence, RI, pp. 771–784, 2012. Tits:74 J. Tits. Buildings of spherical type and finite BN-pairs. Lecture Notes in Mathematics, vol. 386, Springer, 1974.
http://arxiv.org/abs/1705.09176v2
{ "authors": [ "Dmitri Maslov", "Martin Roetteler" ], "categories": [ "quant-ph", "cs.ET", "cs.IT", "math.IT" ], "primary_category": "quant-ph", "published": "20170525134617", "title": "Shorter stabilizer circuits via Bruhat decomposition and quantum circuit transformations" }
We introduce a neural network that represents sentences by composing their words according to induced binary parse trees. We use Tree-LSTM as our composition function, applied along a tree structure found by a fully differentiable natural language chart parser. Our model simultaneously optimises both the composition function and the parser, thus eliminating the need for externally-provided parse trees which are normally required for Tree-LSTM. It can therefore be seen as a tree-based RNN that is unsupervised with respect to the parse trees. As it is fully differentiable, our model is easily trained with an off-the-shelf gradient descent method and backpropagation. We demonstrate that it achieves better performance compared to various supervised Tree-LSTM architectures on a textual entailment task and a reverse dictionary task. § INTRODUCTION Recurrent neural networks, in particular the Long Short-Term Memory (LSTM) architecture <cit.> and some of its variants <cit.>, have been widely applied to problems in natural language processing. Examples include language modelling <cit.>, textual entailment <cit.>, and machine translation <cit.> amongst others. The topology of an LSTM network is linear: words are read sequentially, normally in left-to-right order. However, language is known to have an underlying hierarchical, tree-like structure <cit.>. How to capture this structure in a neural network, and whether doing so leads to improved performance on common linguistic tasks, is an open question. The Tree-LSTM network <cit.> provides a possible answer, by generalising the LSTM to tree-structured topologies. It was shown to be more effective than a standard LSTM in semantic relatedness and sentiment analysis tasks. Despite their superior performance on these tasks, Tree-LSTM networks have the drawback of requiring an extra labelling of the input sentences in the form of parse trees. These can be either provided by an automatic parser <cit.>, or taken from a gold-standard resource such as the Penn Treebank <cit.>. <cit.> proposed to remove this requirement by including a shift-reduce parser in the model, to be optimised alongside the composition function based on a downstream task. This makes the full model non-differentiable so it needs to be trained with reinforcement learning, which can be slow due to high variance. Our proposed approach is to include a chart parser in the model, inspired by the CYK constituency parser <cit.>. Due to the parser being fully differentiable, the entire model can be trained end-to-end for a downstream task by using stochastic gradient descent. Our model is also unsupervised with respect to the parse trees, similar to <cit.>. We show that the proposed method outperforms other Tree-LSTM architectures based on fully left-branching, right-branching, and supervised parse trees on a textual entailment task and a reverse dictionary task. § RELATED WORK Our work can be seen as part of a wider class of sentence embedding models that let their composition order be guided by a tree structure.
These can be further split into two groups: (1) models that rely on traditional syntactic parse trees, usually provided as input, and (2) models that induce a tree structure based on some downstream task.In the first group, <cit.> take inspiration from the standard Montagovian semantic treatment of composition. They model nouns as vectors, and relational words that take arguments (such as adjectives, that combine with nouns) as tensors, with tensor contraction representing application <cit.>. These tensors are trained via linear regression based on a downstream task, but the tree that determines their order of application is expected to be provided as input. <cit.> and <cit.> also rely on external trees, but use recursive neural networks as the composition function.Instead of using a single parse tree, <cit.> propose a model that takes as input a parse forest from an external parser, in order to deal with uncertainty. The authors use a convolutional neural network composition function and, like our model, rely on a mechanism similar to the one employed by the CYK parser to process the trees. <cit.> propose a related model, also making use of syntactic information and convolutional networks to obtain a representation in a bottom-up manner. Convolutional neural networks can also be used to produce embeddings without the use of tree structures, such as in <cit.>.<cit.> propose an RNN that produces sentence embeddings optimised for a downstream task, with a composition function that works similarly to a shift-reduce parser. The model is able to operate on unparsed data by using an integrated parser. However, it is trained to mimic the decisions that would be taken by an external parser, and is therefore not free to explore using different tree structures. <cit.> introduce a probabilistic model of sentences that explicitly models nested, hierarchical relationships among words and phrases. They too rely on a shift-reduce parsing mechanism to obtain trees, trained on a corpus of gold-standard trees.In the second group, <cit.> shows the most similarities to our proposed model. The authors use reinforcement learning to learn tree structures for a neural network model similar to <cit.>, taking performance on a downstream task that uses the computed sentence representations as the reward signal. <cit.> take a slightly different approach: they formalise a dependency parser as a graphical model, viewed as an extension to attention mechanisms, and hand-optimise the backpropagation step through the inference algorithm. § MODELSAll the models take a sentence as input, represented as an ordered sequence of words. Each word w_i ∈𝒱 in the vocabulary is encoded as a (learned) word embedding w_i ∈ℝ^d. The models then output a sentence representation h∈ℝ^D, where the output space ℝ^D does not necessarily coincide with the input space ℝ^d. §.§ Bag of WordsOur simplest baseline is a bag-of-words (BoW) model. Due to its reliance on addition, which is commutative, any information on the original order of words is lost. Given a sentence encoded by embeddings w_1,…,w_n it computes h = ∑_i=1^n tanh(𝐖w_i + b), where 𝐖 is a learned input projection matrix.§.§ LSTMAn obvious choice for a baseline is the popularLong Short-Term Memory (LSTM) architecture of <cit.>. 
It is a recurrent neural network that, given a sentence encoded by embeddings w_1,…,w_T, runs for T time steps t=1… T and computes [ i_t; f_t; u_t; o_t ] = 𝐖w_t + 𝐔h_t-1 + b, c_t= c_t-1⊙σ (f_t) + tanh (u_t) ⊙σ (i_t), h_t= σ(o_t) ⊙tanh ( c_t), where σ(x) = 1/1+e^-x is the standard logistic function. The LSTM is parametrised by the matrices 𝐖∈ℝ^4D × d, 𝐔∈ℝ^4D × D, and the bias vector b∈ℝ^4D. The vectors σ(i_t), σ(f_t), σ(o_t) ∈ℝ^D are known as input, forget, and output gates respectively, while we call the vector tanh(u_t) the candidate update. We take h_T, the h-state of the last time step, as the final representation of the sentence.Following the recommendation of <cit.>, we deviate slightly from the vanilla LSTM architecture described above by also adding a bias of 1 to the forget gate, which was found to improve performance. §.§ Tree-LSTMTree-LSTMs are a family of extensions of the LSTM architecture to tree structures <cit.>. We implement the version designed for binary constituency trees. Given a node with children labelled L and R, its representation is computed as [ i; f_L; f_R; u; o ] = 𝐖w + 𝐔h_L + 𝐕h_R + b,c = c_L ⊙σ (f_L) + c_R ⊙σ (f_R) + tanh (u) ⊙σ (i), h = σ(o) ⊙tanh ( c), where w in (<ref>) is a word embedding, only nonzero at the leaves of the parse tree; and h_L,h_R and c_L,c_R are the node children's h- and c-states, only nonzero at the branches. These computations are repeated recursively following the tree structure, and the representation of the whole sentence is given by the h-state of the root node. Analogously to our LSTM implementation, here we also add a bias of 1 to the forget gates. §.§ Unsupervised Tree-LSTMWhile the Tree-LSTM is very powerful, it requires as input not only the sentence, but also a parse tree structure defined over it. Our proposed extension optimises this step away, by including a basic CYK-style <cit.> chart parser in the model. The parser has the property of being fully differentiable, and can therefore be trained jointly with the Tree-LSTM composition function for some downstream task. The CYK parser relies on a chart data structure, which provides a convenient way of representing the possible binary parse trees of a sentence, according to some grammar. Here we use the chart as an efficient means to store all possible binary-branching trees, effectively using a grammar with only a single non-terminal. This is sketched in simplified form in Table <ref> for an example input. The chart is drawn as a diagonal matrix, where the bottom row contains the individual words of the input sentence. The n^th row contains all cells with branch nodes spanning n words (here each cell is represented simply by the span – see Figure <ref> below for a forest representation of the nodes in all possible trees). By combining nodes in this chart in various ways it is possible to efficiently represent every binary parse tree of the input sentence.The unsupervised Tree-LSTM uses an analogous chart to guide the order of composition. Instead of storing sequences of words however, here each cell is made up of a pair of vectors (h,c) representing the state of the Tree-LSTM RNN at that particular node in the tree. The process starts at the bottom row, where each cell is filled in by calculating the Tree-LSTM output (<ref>)-(<ref>) with w set to the embedding of the corresponding word. These are the leaves of the parse tree. Then, the second row is computed by repeatedly calling the Tree-LSTM with the appropriate children. 
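To make the composition function concrete, here is a minimal NumPy sketch of the binary Tree-LSTM node update of eqs. (<ref>)-(<ref>); the class layout, initialisation, and all names are our own assumptions, not the authors' implementation:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TreeLSTMCell:
    # Binary-constituency Tree-LSTM node: gate blocks (i, f_L, f_R, u, o).
    def __init__(self, d, D, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (5 * D, d))   # word projection (nonzero only at leaves)
        self.U = rng.normal(0.0, 0.1, (5 * D, D))   # left-child h projection
        self.V = rng.normal(0.0, 0.1, (5 * D, D))   # right-child h projection
        self.b = np.zeros(5 * D)
        self.b[D:3 * D] = 1.0                       # bias of 1 on both forget gates
        self.D = D

    def __call__(self, w=None, left=None, right=None):
        D = self.D
        hL, cL = left if left is not None else (np.zeros(D), np.zeros(D))
        hR, cR = right if right is not None else (np.zeros(D), np.zeros(D))
        x = w if w is not None else np.zeros(self.W.shape[1])
        g = self.W @ x + self.U @ hL + self.V @ hR + self.b
        i, fL, fR, u, o = (g[k * D:(k + 1) * D] for k in range(5))
        c = cL * sigmoid(fL) + cR * sigmoid(fR) + np.tanh(u) * sigmoid(i)
        h = sigmoid(o) * np.tanh(c)
        return h, c

cell = TreeLSTMCell(d=4, D=3)
leaf_a, leaf_b = cell(w=np.ones(4)), cell(w=-np.ones(4))
root_h, root_c = cell(left=leaf_a, right=leaf_b)    # composing two leaves into a branch node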
This row contains the nodes that are directly combining two leaves. They might not all be needed for the final parse tree: some leaves might connect directly to higher-level nodes, which have not yet been considered. However, they are all computed, as we cannot yet know whether there are better ways of connecting them to the tree. This decision is made at a later stage. Starting from the third row, ambiguity arises since constituents can be built up in more than one way: for example, the constituent "neuro linguistic programming" in Table <ref> can be made up either by combining the leaf "neuro" and the second-row node "linguistic programming", or by combining the second-row node "neuro linguistic" and the leaf "programming". In these cases, all possible compositions are performed, leading to a set of candidate constituents (c_1,h_1),…,(c_n,h_n). Each is assigned an energy, given by e_i = cos(u, h_i), where cos(·,·) indicates the cosine similarity function and u is a (trained) vector of weights. All energies are then passed through a softmax function to normalise them, and the cell representation is finally calculated as a weighted sum of all candidates using the softmax output: s_i = softmax(e_i/t), c = ∑_{i=1}^n s_i c_i, h = ∑_{i=1}^n s_i h_i. The softmax uses a temperature hyperparameter t which, for small values, has the effect of making the distribution sparse by making the highest score tend to 1. In all our experiments the temperature is initialised as t=1, and is smoothly decreasing as t=1/2^e, where e∈ℚ is the fraction of training epochs that have been completed. In the limit t→ 0^+, this mechanism will only select the highest scoring option, and is equivalent to the argmax operation. The same procedure is repeated for all higher rows, and the final output is given by the h-state of the top cell of the chart. The whole process is sketched in Figure <ref> for an example sentence. Note how, for instance, the final sentence representation can be obtained in three different ways, each represented by a coloured circle. All are computed, and the final representation is a weighted sum of the three, represented by the dotted lines. When the temperature t in (<ref>) reaches very low values, this effectively reduces to the single "best" tree, as selected by gradient descent. § EXPERIMENTS All models are implemented in Python 3.5.2 with the DyNet neural network library <cit.> at commit . The code for all following experiments can be found on the first author's website.[<https://www.maillard.it/>] For training we use stochastic gradient descent with a batch size of 16, which was found to perform better than AdaGrad <cit.> and similar methods on our development data. Performance on the development data is used to determine when to stop training. The textual entailment model was trained on a 2.2 GHz Intel Xeon E5-2660 CPU, and took one and a half weeks to converge. The reverse dictionary model was trained on a NVIDIA GeForce GTX TITAN Black GPU, and took five days to converge. On top of the baselines already described in <ref>, for the following experiments we also train two additional Tree-LSTM models that use a fixed composition order: one that uses a fully left-branching tree, and one that uses a fully right-branching tree. §.§ Textual Entailment We test our model and baselines on the Stanford Natural Language Inference task <cit.>, consisting of 570 k manually annotated pairs of sentences.
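Before presenting the entailment results, the chart computation just described can be summarised in a short sketch. It reuses the TreeLSTMCell interface from the previous sketch and is an illustration only, not the authors' DyNet code:

import numpy as np

def softmax(e, t):
    z = np.exp((e - e.max()) / t)
    return z / z.sum()

def unsupervised_treelstm(embeddings, cell, u, t=1.0):
    n = len(embeddings)
    chart = [[None] * n for _ in range(n)]       # chart[l-1][i]: span of l words at position i
    for i, w in enumerate(embeddings):           # bottom row: the leaves
        chart[0][i] = cell(w=w)
    for length in range(2, n + 1):               # higher rows
        for i in range(n - length + 1):
            cands = [cell(left=chart[k - 1][i], right=chart[length - k - 1][i + k])
                     for k in range(1, length)]  # every split point is a candidate
            e = np.array([np.dot(u, h) / (np.linalg.norm(u) * np.linalg.norm(h) + 1e-9)
                          for h, _ in cands])    # cosine energies, eq. (<ref>)
            s = softmax(e, t)
            chart[length - 1][i] = (sum(si * h for si, (h, _) in zip(s, cands)),
                                    sum(si * c for si, (_, c) in zip(s, cands)))
    return chart[n - 1][0][0]                    # h-state of the top cell of the chart

# Example: five 4-d word embeddings, using the cell (D=3) from the sketch above.
# root_h = unsupervised_treelstm([np.random.randn(4) for _ in range(5)],
#                                cell, u=np.ones(3), t=1.0)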
Given two sentences, the aim is to predict whether the first entails, contradicts, or is neutral with respect to the second. For example, given “children smiling and waving at camera” and “there are children present”, the model would be expected to predict entailment.For this experiment, we choose 100D input embeddings, initialised with 100D GloVe vectors <cit.> and with out-of-vocabulary words set to the average of all other vectors. This results in a 100× 37 369 word embedding matrix, fine-tuned during training. For the supervised Tree-LSTM model, we used the parse trees included in the dataset.Given a pair of sentences, one of the models is used to produce the embeddings s_1,s_2∈ℝ^100. Following <cit.> and <cit.>, we then computeu = (s_1-s_2)^2, v = s_1⊙s_2, q = ReLU(𝐀[ u; v; s_1; s_2 ]+a),where 𝐀∈ℝ^200× 400 and a∈ℝ^200 are trained model parameters. Finally, the correct label is predicted by p(ŷ=c|q;𝐁,b) ∝exp(𝐁_cq + b_c), where 𝐁∈ℝ^3× 200 and b∈ℝ^3 are trained model parameters.Table <ref> lists the accuracy and number of parameters for our model, baselines, as well as other sentence embedding models in the literature. When the information is available, we report both the number of intrinsic model parameters as well as the number of word embedding parameters. For other models these figures are based on the data from the SNLI website[<https://nlp.stanford.edu/projects/snli/>] and the original papers. §.§ Reverse DictionaryWe also test our model and baselines on the reverse dictionary task of <cit.>, which consists of 852 k word-definition pairs. The aim is to retrieve the name of a concept from a list of words, given its definition. For example, when provided with the sentence “control consisting of a mechanical device for controlling fluid flow”, a model would be expected to rank the word “valve” above other confounders in a list. We use three test sets provided by the authors: two sets involving word definitions, either seen during training or held out; and one set involving concept descriptions instead of formal definitions. Performance is measured via three statistics: the median rank of the correct answer over a list of over 66 k words; and the proportion of cases in which the correct answer appears in the top 10 and 100 ranked words (top 10 accuracy and top 100 accuracy).As output embeddings, we use the 500D CBOW vectors <cit.> provided by the authors. As input embeddings we use the same vectors, reduced to 256 dimensions with PCA. Given a training definition as a sequence of (input) embeddings w_1,…,w_n∈ℝ^256, the model produces an embedding s∈ℝ^256 which is then mapped to the output space via a trained projection matrix 𝐖∈ℝ^500× 256. The training objective to be maximised is then the cosine similarity cos(𝐖s,d) between the definition embedding and the (output) embedding d of the word being defined. For the supervised Tree-LSTM model, we additionally parsed the definitions with Stanford CoreNLP <cit.> to obtain parse trees.We hold out 128 batches from the training set to be used as development data. The softmax temperature in (<ref>) is allowed to decrease as described in <ref> until it reaches a value of 0.005, and then kept constant. This was found to have the best performance on the development set.Table <ref> shows the results for our model and baselines, as well as the models of <cit.> which are based on the same cosine training objective. 
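The classifier head of eqs. (<ref>)-(<ref>) is a small feed-forward computation on top of the two sentence embeddings. A NumPy sketch with randomly initialised, purely hypothetical parameters:

import numpy as np
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def entailment_logits(s1, s2, A, a, B, b):
    u = (s1 - s2) ** 2                     # squared-difference features
    v = s1 * s2                            # elementwise-product features
    q = relu(A @ np.concatenate([u, v, s1, s2]) + a)   # 200-d hidden layer
    return B @ q + b                       # 3 logits: entailment / contradiction / neutral

A = rng.normal(0, 0.05, (200, 400)); a = np.zeros(200)
B = rng.normal(0, 0.05, (3, 200));   b = np.zeros(3)
s1, s2 = rng.normal(size=100), rng.normal(size=100)
probs = np.exp(entailment_logits(s1, s2, A, a, B, b))
probs /= probs.sum()                       # softmax over the three labels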
Our bag-of-words model consists of 193.8 k parameters; our LSTM uses 653 k parameters; the fixed-branching, supervised, and unsupervised Tree-LSTM models all use 1.1 M parameters. On top of these, the input word embeddings consist of 113 123× 256 parameters. Output embeddings are not counted as they are not updated during training. § DISCUSSION The results in Tables <ref> and <ref> show that the unsupervised Tree-LSTM matches or outperforms all tested baselines.For the textual entailment task, our model compares favourably to all baselines including the supervised Tree-LSTM, as well as some of the other sentence embedding models in the literature that have a higher number of parameters. Our model could be plausibly improved by combining it with aspects of other models, and we make some concrete suggestions in that direction in <ref>.In the reversed dictionary task, the very poor performance of the supervised Tree-LSTM can be explained by the unusual tokenisation algorithm used in the dataset of <cit.>: all punctuation is simply stripped, turning for instance “(archaic) a section of a poem” into “archaic a section of a poem”, or stripping away the semicolons in long lists of synonyms. On the one hand, this might seem unfair on the supervised Tree-LSTM, which received suboptimal trees as input. On the other, it demonstrates the robustness of our method to noisy data. Our model also performed well in comparison to the LSTM and the other Tree-LSTM baselines. Despite the slower training time due to the additional complexity, Figure <ref> shows how our model needed fewer training examples to reach convergence in this task. Following <cit.>, we also manually inspect the learned trees to see how closely they match conventional syntax trees, as would typically be assigned by trained linguists. We analyse the same four sentences they chose. The trees produced by our model are shown in Figure <ref>. One notable feature of the trees is the fact that verbs are joined with their subject noun phrases first, which differs from the standard verb phrase structure. Type-raising and composition in formalisms such as combinatory categorial grammar <cit.> do however allow such constituents. The spans of prepositional phrases in (b), (c) and (d) are correctly identified at the highest level; but only in (d) does the structure of the subtree match convention. As could be expected, other features such as the attachment of the full stops or of some determiners do not appear to match human intuition.§ CONCLUSIONSWe presented a fully differentiable model to jointly learn sentence embeddings and syntax, based on the Tree-LSTM composition function. We demonstrated its benefits over standard Tree-LSTM on a textual entailment task and a reverse dictionary task. The model is conceptually simple, and easy to train via backpropagation and stochastic gradient descent with popular deep learning toolkits based on dynamic computation graphs such as DyNet <cit.> and PyTorch.[<https://github.com/pytorch/pytorch>]The unsupervised Tree-LSTM we presented is relatively simple, but could be plausibly improved by combining it with aspects of other models. It should be noted in particular that (<ref>), the function assigning an energy to alternative ways of forming constituents, is extremely basic and does not rely on any global information on the sentence.Using a more complex function, perhaps relying on a mechanism such as the tracking LSTM in <cit.>, might lead to improvements in performance. 
Techniques such as batch normalization <cit.> or layer normalization <cit.> might also lead to further improvements. In future work, it might be possible to obtain trees closer to human intuition by training a model to perform well on multiple tasks instead of a single one, an important feature for intelligent agents to demonstrate <cit.>. Techniques such as elastic weight consolidation <cit.> have been shown to help with multitask learning, and could be readily applied to our model.
http://arxiv.org/abs/1705.09189v1
{ "authors": [ "Jean Maillard", "Stephen Clark", "Dani Yogatama" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20170525140948", "title": "Jointly Learning Sentence Embeddings and Syntax with Unsupervised Tree-LSTMs" }
Mesh Model (MeMo): A Systematic Approach to Agile System Engineering Amit Kumar Mishra and Alan Langman University of Cape Town, South Africa 7700 [email protected] § INTRODUCTION Innovation and entrepreneurship have a very special role to play in creating sustainable development in the world. Engineering design plays a major role in innovation. These are not new facts. What is new is that knowledge now appears to grow at an exponential rate, doubling every few months [bostoncommons.net/knowledge-doubling]. This creates a need for newer methods of innovation that leave very little scope for falling short of customer expectations. In terms of reliable design, system design tools and methodologies have been very helpful and have been in use in most engineering industries for decades. But traditional system design is rigorous and rigid. As we can see, we need an innovation system that is rigorous and flexible at the same time. We take our inspiration from the biosphere, where some of the most rugged yet flexible plants are creepers, which grow to create a mesh. In this thematic paper we shall explain our approach to system engineering, which we call the MeMo (Mesh Model), that fuses the rigor of system engineering with the flexibility of agile methods to create a scheme that can give rise to reliable innovation in the high-risk market of today. § RECENT CHALLENGES AND TRENDS The current generation of system engineering (SE) practices has been extremely helpful in managing big, complex, system-of-systems-natured projects. However, as we enter the exponential zone of knowledge growth, the current practices will face some major limitations. Some of these (taken from the INCOSE Vision 2025 document) are as follows. * Because of rapid development at the component level, old-fashioned architecture-based system design is not reliable. This means we depend on a system design which emerges from pieces. This makes the system difficult to design and verify. * Again, due to the rapid development of different components, there is a need for agile change of the design architecture. However, traditional system engineering practices are not able to handle this agility, thereby causing loss of investment and knowledge. * A holistic approach to SE is good for taking care of the programmatic side of projects but not the technical side. This fosters risky decisions. Hence, we need an approach which can * be inclusive of rapid technical development; * foster agility; * use the knowledge created at project life-cycle phase boundaries; and * give an avenue to reduce risky decisions by creating enough local feedback in the process. We shall keep the above points in mind while discussing our proposed model. It can be noted that there are two major challenges driving recent trends in innovation (e.g. Design Thinking, the Innovator's Method, Agile Development, etc.). The first is the exponential growth of knowledge and, hence, the arrival of newer technologies at a faster pace every year. The second is the emergence of demand uncertainty. Conventional system engineering and design processes are well adapted to deal with technical uncertainty (in the presence of demand certainty). However, they are not good enough to handle either demand uncertainty with technological certainty or (even worse) demand uncertainty with technological uncertainty.
§ SPIRAL MEETS V TO CREATE A MESH As discussed in the previous section, we need a process that is agile enough to run through extreme technical and demand uncertainties. One of the important ways to achieve this is to have rapid customer-feedback-based design thinking. This helps us to take care of the demand uncertainty. A spiral approach, almost, takes care of technological uncertainties. One of the lacunae of the Spiral Model is the lack of a designated verification phase. In our proposed model we combine the spiral model with well-defined Design Blocks (DBs). The Design Blocks include all or some of the design steps, such as requirement analysis, solution choice, STTPLE analysis, rapid prototyping, detailed design and acceptance test procedure definition. The choice to run the full Design Block or only part of it depends on the technical uncertainty facing the development. In addition to a spiral way of developing the Design Blocks, we introduce the Feedback Collection Block (FCB) to take care of the demand uncertainty. Each industry can set its own (semi-automated) process to run the FCBs and its own process to store and interpret the results from these. It can be noted here that the outcomes from FCBs are mostly technology independent. Hence, these can be used not only in the immediate-next DB but also by further DBs later in the product life-cycle. This also helps in preserving innovations and knowledge across design cycles and versions. § CONCLUSION In this brief white paper we have discussed the status quo of engineering innovation and some of the current and upcoming challenges. We then proposed a process for engineering system design and development in the face of demand and technological uncertainty. The process also takes into account all the key challenges faced by current system design processes (as detailed in the INCOSE vision document).
http://arxiv.org/abs/1705.09170v1
{ "authors": [ "Amit Kumar Mishra" ], "categories": [ "cs.SE" ], "primary_category": "cs.SE", "published": "20170525132947", "title": "Mesh Model (MeMo): A Systematic Approach to Agile System Engineering" }
Approximating Constrained Minimum Cost Input-Output Selection for Generic Arbitrary Pole Placement in Structured Systems
Shana Moothedath, Prasanna Chaporkar, Madhu N. Belur
==========================================================
This paper is about minimum cost constrained selection of inputs and outputs in structured systems for generic arbitrary pole placement. The input-output set is constrained in the sense that the set of states that each input can influence and the set of states that each output can sense are pre-specified. Our goal is to optimally select an input-output set such that the system has no structurally fixed modes. Polynomial time algorithms do not exist for solving this problem unless P = NP. To this end, we propose an approximation algorithm by splitting the problem into three sub-problems: a) a minimum cost accessibility problem, b) a minimum cost sensability problem and c) a minimum cost disjoint cycle problem. We prove that problems a) and b) are equivalent to the weighted set cover problem. We also show that problem c) can be solved using a minimum cost perfect matching algorithm. Using these, we give an approximation algorithm which solves the minimum cost generic arbitrary pole placement problem. The proposed algorithm incorporates an approximation algorithm for the weighted set cover problem to solve a) and b), and a minimum cost perfect matching algorithm to solve c). Further, we show that the algorithm has polynomial complexity and gives an order optimal O(log n) approximate solution to the minimum cost input-output selection for generic arbitrary pole placement problem, where n denotes the number of states in the system.
Large scale control system design, Linear structured systems, Arbitrary pole placement, Input-output selection, Approximation algorithms.
§ INTRODUCTION
Consider structured matrices Ā ∈ {*,0}^{n × n}, B̄ ∈ {*,0}^{n × m} and C̄ ∈ {*,0}^{p × n} whose entries are either * or 0. The matrices Ā, B̄ and C̄ structurally represent the state, input and output matrices, respectively, of any control system ẋ = Ax + Bu, y = Cx such that:
A_ij = 0 whenever Ā_ij = 0, B_ij = 0 whenever B̄_ij = 0, C_ij = 0 whenever C̄_ij = 0.
Any triple (A, B, C) that satisfies (<ref>) is said to be a numerical realization of the structural system (Ā, B̄, C̄). Further, the matrix K̄ ∈ {*,0}^{m × p}, where K̄_ij = * if the j-th output is available for static output feedback to the i-th input, is referred to as the feedback matrix. Let [K] be the collection of all numerical realizations of K̄, i.e., [K] := {K : K_ij = 0 whenever K̄_ij = 0}. The structural system (Ā, B̄, C̄) is said not to have structurally fixed modes (SFMs) with respect to an information pattern K̄ if there exists one numerical realization (A, B, C) of (Ā, B̄, C̄) such that ∩_{K ∈ [K]} σ(A + BKC) = ϕ, where the function σ(T) denotes the set of eigenvalues of any square matrix T. Let p_u ∈ ℝ^m, where every entry p_u(i), i = 1, …, m, indicates the cost of using the i-th input. Also, p_y ∈ ℝ^p, where every entry p_y(j), j = 1, …, p, indicates the cost of using the j-th output. For I ⊆ {1, …, m} and O ⊆ {1, …, p}, let B̄_I be the restriction of B̄ to columns only in I and C̄_O be the restriction of C̄ to rows only in O. Furthermore, let S = {(I, O) : (Ā, B̄_I, C̄_O, K̄_(I × O)) has no SFMs}. Our aim is to find (I, O) ∈ S such that the cost of inputs and outputs is minimized.
Specifically, we wish to solve the following optimization: for any (I, O), define p(I, O) = ∑_{i ∈ I} p_u(i) + ∑_{j ∈ O} p_y(j). Given a structural system (Ā, B̄, C̄), feedback matrix K̄ and cost vectors p_u, p_y, find

(I*, O*) ∈ arg min_{(I, O) ∈ S} p(I, O).

We refer to Problem <ref> as minimum cost constrained input-output selection for generic arbitrary pole placement. Let p* = p(I*, O*). Thus, p* denotes the minimum cost for constrained input-output selection that ensures generic arbitrary pole placement. Without loss of generality, assume (Ā, B̄, C̄, K̄) has no SFMs; thus S is non-empty. In this paper we consider a special case in which K̄ is complete, i.e., K̄_ij = * for all i, j. Even with this restriction the problem is NP-hard. As our main contribution, we propose an approximation algorithm of computational complexity O(n^3). In the worst case, the proposed algorithm achieves an approximation ratio of 6 log n, and the ratio can be improved significantly in many practical systems. We also establish a negative result which states that no polynomial time algorithm can achieve an approximation ratio of (1/4) log n. Thus our algorithm is order optimal, as it provides an O(log n) approximation. Formally, the main result of our paper is the following: Consider a structural system (Ā, B̄, C̄), a complete feedback matrix K̄ and cost vectors p_u, p_y. Let n be the number of states in the system and (I_a, O_a) be an output of Algorithm <ref>. Then the following hold: i) (I_a, O_a) ∈ S, i.e., (Ā, B̄_{I_a}, C̄_{O_a}, K̄_{(I_a × O_a)}) has no SFMs, and ii) p(I_a, O_a) ⩽ (2 log n) p*. Moreover, there does not exist any polynomial time algorithm to solve Problem <ref> that has approximation ratio (1-o(1)) log n. Thus the proposed algorithm is an order optimal approximation algorithm.

The organization of this paper is as follows: in Section <ref> we discuss preliminaries, existing results and related work in this area. In Section <ref> we explain our approach to solving the minimum cost input-output selection problem for generic arbitrary pole placement by splitting it into three sub-problems: the minimum cost accessibility, minimum cost sensability and minimum cost disjoint cycle problems. In Section <ref> we discuss an approximation algorithm for solving the problem and then prove the main results of the paper. In Section <ref> we explain the approximation result in the context of a few special cases. In Section <ref> we give final concluding remarks.

§ PRELIMINARIES, EXISTING RESULTS AND RELATED WORK
In this section we first discuss a few graph theoretic concepts used in the sequel and some existing results. Then we discuss related work in this area.

§.§ Preliminaries and Existing Results
Arbitrary pole placement is said to be possible in a structural system if it has no structurally fixed modes (SFMs). There are two types of fixed modes, Type-1 and Type-2 (see <cit.>, <cit.> for more details). To ensure the non-existence of SFMs one has to ensure that both these types are absent in the system. The presence of Type-1 SFMs can be checked using the concept of strong connectedness of the system digraph, which is constructed as follows: first, we construct the state digraph D(Ā) := (V_X, E_X), where V_X = {x_1, …, x_n} and (x_j, x_i) ∈ E_X if Ā_ij ≠ 0. Thus a directed edge (x_j, x_i) exists if state x_j can influence state x_i. Next we construct the system digraph D(Ā, B̄, C̄, K̄) := (V_X ∪ V_U ∪ V_Y, E_X ∪ E_U ∪ E_Y ∪ E_K), where V_U = {u_1, …, u_m} and V_Y = {y_1, …, y_p}. An edge (u_j, x_i) ∈ E_U if B̄_ij ≠ 0, (x_j, y_i) ∈ E_Y if C̄_ij ≠ 0 and (y_j, u_i) ∈ E_K if K̄_ij ≠ 0.
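These constructions are purely combinatorial and straightforward to reproduce. The following minimal Python sketch (ours, not code from the paper; the use of networkx is an assumption made only for illustration) builds D(Ā) and D(Ā, B̄, C̄, K̄) from 0/1 numpy-style arrays standing in for the structured matrices:

```python
import networkx as nx

def state_digraph(Abar):
    """D(A-bar): one vertex x_i per state; edge (x_j -> x_i) whenever
    Abar[i, j] != 0. Abar is a 0/1 array standing in for the *-pattern."""
    n = Abar.shape[0]
    G = nx.DiGraph()
    G.add_nodes_from(f"x{i}" for i in range(n))
    G.add_edges_from((f"x{j}", f"x{i}")
                     for i in range(n) for j in range(n) if Abar[i, j])
    return G

def system_digraph(Abar, Bbar, Cbar, Kbar):
    """D(A,B,C,K): adds input/output vertices and the E_U, E_Y, E_K edges."""
    n, m = Bbar.shape
    p = Cbar.shape[0]
    G = state_digraph(Abar)
    G.add_edges_from((f"u{j}", f"x{i}")
                     for i in range(n) for j in range(m) if Bbar[i, j])
    G.add_edges_from((f"x{j}", f"y{i}")
                     for i in range(p) for j in range(n) if Cbar[i, j])
    G.add_edges_from((f"y{j}", f"u{i}")
                     for i in range(m) for j in range(p) if Kbar[i, j])
    return G
```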
Thus a directed edge (u_j, x_i) exists if input u_j can actuate state x_i and a directed edge (x_j, y_i) exists if output y_i can sense state x_j. Construction of state digraph () and system digraph (, , , ) is illustrated through an example in Figure <ref>. Next we define two concepts, namely accessibility and sensability, that we need for explaining our algorithm.A state x_i is said to be accessible if there exists a directed simple path from some input u_j to x_i in the digraph (, , , ). Also, a state x_i is said to be sensable if there exists a directed simple path from x_i to some output y_j in the digraph (, , , ). A digraph is said to be strongly connected if for each ordered pair of vertices (v_1,v_k) there exists an elementary path from v_1 to v_k. A strongly connected component (SCC) of a digraph is a maximal strongly connected subgraph of it. If () is a single SCC, then the system is said to be irreducible.Using the digraph (, , , ) a necessary and sufficient graph theoretic condition for absence of SFMs is given in the following result.A structural system (, , ) has no structurally fixed modes with respect to a feedback matrixif and only if the following conditions hold:a) in the digraph (, , , ), each state node x_i is contained in an SCC which includes an edge in E_K, and b) there exists a finite disjoint union of cycles _g = (V_g, E_g) in (, , , ) where gis a positive integer such that V_X ⊂∪_gV_g. In Proposition <ref>, condition a) corresponds to SFMs of Type 1 and condition b) corresponds to SFMs of Type 2. In order to characterize condition a) we first generate a directed acyclic graph (DAG) associated with () by condensing each SCC to a supernode. In this DAG, vertex set comprises of all SCCs in (). A directed edge exists between two nodes of the DAG if and only if there exists a directed edge connecting two states in the respective SCCs in (). Using this DAG we have the following definition that characterizes SCCs in (). An SCC is said to be linked if it has atleast one incoming or outgoing edge from another SCC. Further, an SCC is said to be non-top linked (non-bottom linked, resp.) if it has no incoming (outgoing, resp.) edges to (from, resp.) its vertices from (to, resp.) the vertices of another SCC. Without loss of generality we will assume that () has q non-top linked SCCs, _1, …,_q and k non-bottom linked SCCs, _1, …, _k. We have the following definition. An SCC is said to be covered by input u_j if there exists a state x_i in the SCC such that _ij =. Similarly, an SCC is said to be covered by output y_j if there exists a state x_i in the SCC such that _ji =.We define μ_i := {j:_ju_i} and η_i := {j:_jy_i}. Let μ_ max :=max_ iμ_i and η_ max :=max_i η_i. In the example given in Figure <ref> each state is individually an SCC. Moreover, there are two non-top linked SCCs, _1 = x_2 and _2 = x_4 and one non-bottom linked SCC, _1 = x_3. Note that x_1 is neither non-top linked nor non-bottom linked SCC. Also, μ_1 = 0, μ_2 = 1, μ_3 = 2, η_1 = 1 and η_2 =0. Thus μ_ max = 2 and η_ max=1. Following is an important observation.All states are accessible (sensable, resp.) if all non-top (non-bottom, resp.) linked SCCs are covered by input (output, resp.).Corollary <ref> is an immediate consequence of Definitions <ref> and <ref>. For a generic system (, , ) with feedback matrix , verifying absence of SFMs has polynomial complexity. Specifically, condition a) can be verified in O(n^2) computations using the concept of SCCs in (, , , ) <cit.>. 
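Condition a) and Corollary <ref> translate directly into code. Continuing the sketch above, the snippet below finds the non-top linked SCCs and then selects a covering input set greedily; the greedy ratio rule anticipates the weighted set cover reduction of Section <ref> and is our assumption about a reasonable implementation, not the paper's algorithm verbatim. Feasibility (every non-top linked SCC covered by at least one input) is assumed.

```python
def non_top_linked_sccs(G_state):
    """SCCs of D(A-bar) with no incoming edge from another SCC."""
    cond = nx.condensation(G_state)   # DAG of SCCs; members kept as node data
    return [cond.nodes[c]["members"] for c in cond.nodes
            if cond.in_degree(c) == 0]

def greedy_accessibility_inputs(Abar, Bbar, p_u):
    """Greedy weighted set cover over the non-top linked SCCs: the set S_i of
    SCCs covered by input i carries weight p_u(i); repeatedly pick the input
    with the smallest cost per newly covered SCC."""
    sccs = non_top_linked_sccs(state_digraph(Abar))
    covers = {j: {k for k, scc in enumerate(sccs)
                  if any(Bbar[int(x[1:]), j] for x in scc)}
              for j in range(Bbar.shape[1])}
    uncovered, chosen = set(range(len(sccs))), []
    while uncovered:
        j = min((j for j in covers if covers[j] & uncovered),
                key=lambda j: p_u[j] / len(covers[j] & uncovered))
        chosen.append(j)
        uncovered -= covers[j]
    return chosen   # log(mu_max)-approximate by the standard greedy bound
```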
Condition b) can be verified in O(n^2.5) computations using concepts of information paths given in <cit.> or using bipartite matching as proposed in <cit.>. In our work, we use bipartite matching condition and so we explain this in detail. Given an undirected bipartite graph G(V, V, ), where V ∪V denotes the set of vertices and ⊆ V ×V denotes the set of edges, a matching M is a collection of edges M ⊆ such that for any two edges (i,j), (u,v) ∈ M, i ≠ u and j ≠ v. A perfect matching is a matching M such that |M| =min(|V|, |V|). Now for checking condition b) in a structural system, we use the bipartite graph (, , , ) constructed in <cit.>. Let (, , , ) := (V_X'∪ V_U'∪ V_Y', V_X∪ V_U∪ V_Y, _∪_∪_Y ∪_K ∪_𝕌∪_𝕐), where V_X'={x'_1, …, x'_n }, V_U'={u'_1, …, u'_m }, V_Y' = {y'_1, …, y'_p } and V_X={x_1, …, x_n }, V_U={u_1, …, u_m } and V_Y = {y_1, …, y_p }. Also, (x'_i, x_j) ∈_⇔ (x_j, x_i) ∈ E_, (x'_i, u_j) ∈_⇔ (u_j, x_i) ∈ E_, (y'_j, x_i) ∈_Y⇔ (x_i, y_j) ∈ E_Y and (u'_i, y_j) ∈_K⇔ (y_j, u_i) ∈ E_K. Moreover, _𝕌 include edges (u'_i, u_i) for i =1,…,m and_𝕐 include edges (y'_j, y_j) for j =1,…,p. We show that there exists a perfect matching in (, , , ) if and only if the system (, , ) along with feedback matrixsatisfies condition b) (see Section <ref>).Note that (), (,, , ) are digraphs, but (, , , ) is an undirected graph. Also, E denotes directed edges anddenotes undirected edges. The system bipartite graph (, , , ) for the structural system given in Figure <ref> is shown in Figure <ref>. Summarizing, a structural closed-loop system is said not to have SFMs if and only if all state vertices lie in some SCC of (, , , ) with an edge in E_K and the system bipartite graph (, , , ) has a perfect matching. Thus, using the two graph theoretic conditions explained in this section, we conclude that presence of SFMs in a structural closed-loop system can be checked in O(n^2.5) computations. Hence one can conclude if generic arbitrary pole placement is possible in a structural system in polynomial time. However, optimal selection of input-output set that guarantee arbitrary pole placement cannot be solved in polynomial time unless P = NP <cit.>. §.§ Related Work In large scale systems, including biological systems, the web, power grids and social network to name a few, more often only the connections in the graph are known. The exact parameters are unavailable. In this context, structural analysis of the system is performed to study the various system properties generically (see <cit.>, <cit.>, <cit.>, <cit.> and references therein). Study of controllability and observability of the system generically using the structure of the system is referred to as structural controllability and structural observability. Structural controllability was introduced by Lin in <cit.>. Since then various associated problems including minimum input selection <cit.>, <cit.>, <cit.> and <cit.>, input addition for structural controllability <cit.>, strong structural controllability <cit.>, minimum cost control selection and control configuration selection <cit.> are addressed in the literature. In most of these papers the structure of the input (output, resp.) matrix is not constrained. For example <cit.> discusses the problem of finding sparsest set (,,) for a givensuch that arbitrary pole placement is possible. This problem can be solved in polynomial complexity. However, constrained input (output, resp.) selection for structural controllability (observability, resp.) is NP-hard <cit.>. 
A special class of systems where the state bipartite graph () has a perfect matching and every input can influence a single state (dedicated input) is discussed in <cit.>. Note that under these assumptions the problem is not NP-hard. However, for the general case there are no known approximation results. Given (, , ) finding the sparsestsuch that the closed-loop system has no SFMs is proved to be NP-hard in <cit.>.This paper focusses on minimum cost constrained input-output selection for generic arbitrary pole placement of structural systems. It is shown to be NP-hard in <cit.>. This paper is motivated by <cit.> where Pequito et.al investigated Problem <ref>along with costs foron a class of systems whose graph is irreducible. For this class of systems Problem <ref> is not NP-hard. However, for general systems there are no known results. We address Problem <ref> in its full generality. Note that we do not assume cost on . Unfortunately there do not exist polynomial algorithms for solving this unless P = NP. To this end, we propose an approximation algorithm for solving Problem <ref>. Our key contributions in this paper are threefold:∙ We provide a polynomial time approximation algorithm that gives approximation ratio 6log n for solving Problem <ref>.∙ We prove that no polynomial time algorithm can achieve approximation ratio 1/4 log n. Thus the proposed algorithm is order optimal∙ We show that the approximation can be much tighter in practical systems.In the next section we detail our approach.§ APPROXIMATING MINIMUM COST CONSTRAINED INPUT-OUTPUT SELECTION PROBLEM FOR GENERIC ARBITRARY POLE PLACEMENT Our approach for solving Problem <ref> is to split the problem in to three sub-problems listed below: ∙ Minimum cost accessibility problem∙ Minimum cost sensability problem∙ Minimum cost disjoint cycle problemBroadly, minimum cost accessibility (sensability, resp.) problem aims at finding minimum cost sub-collection of inputs (outputs, resp.) that cover all states. In minimum cost disjoint cycle problem, our aim is to find minimum cost sub-collection of inputs and outputs such that condition b) is satisfied given that all chosen outputs connect to all chosen inputs (recall that _ij = for all i,j). For better readability and notational brevity we denote the structural system (, , ) with feedback matrixand cost vectors p_u, p_y as (, , p_u) (without output) while discussing the accessibility problemand as (, , p_y) (without input) while discussing the sensability problem. Firstly we show that the minimum cost accessibility (sensability, resp.) problem is “equivalent to" the weighted set cover problem. On account of the equivalence any algorithm for weighted set cover can be used for solving the minimum cost accessibility and sensability problems with the same performance guarantees and vice-versa. Weighted set cover problem is a well studied NP-hard problem <cit.>. There exist approximation algorithms that give solution to the weighted set cover problem up to log factor in problem size <cit.>. However, there also exist inapproximability result showing that it cannot be approximated up to a constant factor <cit.>. Thus, using the equivalence of the problems we provide an order optimal approximation algorithm to solve the minimum cost accessibility and sensability problems.Then we show that the minimum cost disjoint cycle problem can be solved using a minimum cost perfect matching problem defined on (, , , ). 
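As a concrete illustration of the matching machinery used throughout, the condition b) feasibility test of Section <ref>, a perfect matching in B(Ā, B̄, C̄, K̄), can be sketched as follows (continuing our illustrative Python; networkx's Hopcroft–Karp routine is the assumed workhorse, not a choice made by the paper):

```python
def has_perfect_matching(Abar, Bbar, Cbar, Kbar):
    """Condition b) test: B(A,B,C,K) has a perfect matching iff all states
    lie in a disjoint union of cycles of D(A,B,C,K)."""
    n, m = Bbar.shape
    p = Cbar.shape[0]
    left = ([f"x'{i}" for i in range(n)] + [f"u'{i}" for i in range(m)]
            + [f"y'{j}" for j in range(p)])
    H = nx.Graph()
    H.add_nodes_from(left)
    H.add_edges_from((f"x'{i}", f"x{j}")
                     for i in range(n) for j in range(n) if Abar[i, j])
    H.add_edges_from((f"x'{i}", f"u{j}")
                     for i in range(n) for j in range(m) if Bbar[i, j])
    H.add_edges_from((f"y'{j}", f"x{i}")
                     for j in range(p) for i in range(n) if Cbar[j, i])
    H.add_edges_from((f"u'{i}", f"y{j}")
                     for i in range(m) for j in range(p) if Kbar[i, j])
    H.add_edges_from((f"u'{i}", f"u{i}") for i in range(m))  # E_U "idle" edges
    H.add_edges_from((f"y'{j}", f"y{j}") for j in range(p))  # E_Y "idle" edges
    match = nx.bipartite.maximum_matching(H, top_nodes=left)
    return len(match) // 2 == n + m + p   # every vertex matched
```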
Bipartite matching is also a well studied area and there exist polynomial time algorithm of complexity O(ℓ^3) that find minimum cost perfect matching in a bipartite graph with ℓ nodes on one side <cit.>. Using the minimum cost perfect matching algorithm we provide a polynomial time algorithm to solve the minimum cost disjoint cycle problem optimally. Then we prove that combining the solutions to these sub-problems we can obtain an approximate solution to Problem <ref>.Now we formally define and tackle each of these sub-problems separately in the following subsections. §.§ Solving Minimum Cost Accessibility ProblemIn this subsection, we establish a relation between the accessibility condition for structural controllability and the weighted set cover problem. Specifically, we show that when the inputs are constrained and each input is associated with a cost, then satisfying minimum cost accessibility condition is equivalent to solving a weighted set cover problem defined on the structural system.Consider a structural system (,) and a cost vector p_u denoted as (, , p_u). This system is said to satisfy the minimum cost accessibility condition if all the non-top linked SCCs in () are covered using the least cost input set possible. That is, we need to find a set of inputs ^_⊆{1,…,m} such that all state nodes are accessible in (, _^_, , ) and p(^_) ⩽ p(_) for any _⊆{1,…,m} that satisfy accessibility of all state nodes in (, __, , ). Specifically, we need to solve the following optimization: for any ⊆{1,…,m}, define p() = ∑_i∈p_u(i).Given (, , p_u), find ^_ ^_ ∈ min_0n _⊆{1,…,m} p(_), such that all state nodes are accessible in (, __, , ).We refer to Problem <ref> as the minimum cost accessibility problem. Before showing the equivalence between Problem <ref> and the weighted set cover problem, we first describe the weighted set cover problem for the sake of completeness. Weighted set cover problem is a well studied NP-hard problem<cit.>. Given a universe of N elements = { 1,2, ⋯, N}, a set of r sets = {_1, _2, ⋯, _r } with _i ⊂ and ⋃_i = 1^r _i = and a weight function w fromto the set of non-negative real numbers, weighted set cover problem consists of finding a set ^⊆ such that ∪__i ∈^_i = and ∑__i ∈^w(i) ⩽∑__i ∈w(i) for anythat satisfies ∪__i ∈ =. Now we reduce Problem <ref> to an instance of the weighted set cover problem in polynomial time.The pseudo-code showing a reduction of Problem <ref> to an instance of weighted set cover problem is presented in Algorithm <ref>. Given (, , p_u), we define a weighted set cover problem as follows: the universeconsists of all non-top linked SCCs {_1,…,_q} in () (see Step <ref>). The Sets _1, …, _m is defined in such a way that set _i consists of all non-top linked SCCs that are covered by the i^ th input (see Step <ref>). Further, for each set _i we define weight w(i) as shown in Step <ref>.Given a solutionto the weighted set cover problem, we define the associated weight w() as the sum of the weights of all sets selected under(see Step <ref>). Also, the indices of the sets selected inis denoted as () and its cost is denoted as p(()) as shown in Steps <ref> and <ref> respectively. We denote an optimal solution to Problem <ref> as ^_ and its cost as p^_. Also an optimal solution to the weighted set cover problem given in Algorithm <ref> is denoted by ^_ and its weight is denoted by w^_. Now we prove the following preliminary results. Consider any structural system (, , ), feedback matrixand cost vectors p_u, p_y. 
Then, Algorithm <ref> reduces Problem <ref> to a weighted set cover problem in O(n^2) time. Moreover, for any cover , the set () and cost p(()) given in Steps <ref> and <ref> respectively can be obtained in O(n) computations, where n denotes the number of states in the system. Given state digraph () = (V_X, E_X) all the non-top linked SCCs can be found in O( max(|V_X|,|E_X|)) computations. Here |V_X| = n and |E_X| is atmost |V_X|^2. Thus the reduction in Algorithm <ref> has O(n^2) computations. Also, given a coverwe can obtain () and p(()) in linear time and this completes the proof.Consider any structural system (, , ), feedback matrixand cost vectors p_u, p_y and the corresponding weighted set cover problem obtained using Algorithm <ref>. Letbe a feasible solution to the weighted set cover problem and () be the index set selected in Step <ref>. Then, all states are accessible in (, _(), , ) and p(()) = w(). Givenis a feasible solution to the weighted set cover problem. Thus ∪__i ∈_i =. Hence, () = {i:_i ∈} covers all the non-top linked SCCs in (). By Corollary <ref> this implies that all states are accessible in (, _(), , ). Now steps <ref>, <ref>, <ref> and <ref> of Algorithm <ref> proves p(()) = w(). In the following lemma we show that an ϵ-approximation algorithm for the weighted set cover problem can be used to obtain an ϵ-approximate solution to Problem <ref>.Consider any structural system (, , ), feedback matrixand cost vectors p_u, p_y and the corresponding weighted set cover problem obtained using Algorithm <ref>. Then, for ϵ > 1, ifis an ϵ-optimal solution to the weighted set cover problem, then () is anϵ-optimal solution to the minimum cost accessibility problem.The proof of this lemma is twofold: (i) we show that an optimal solution ^_ to the weighted set cover problem gives an optimal solution ^_ to Problem <ref>, and (ii) we show that if w() ⩽ϵ w^_, then p(()) ⩽ϵ p^_. Given ^_ is an optimal solution to the weighted set cover problem with cost w^_. For (i) we show that input set (^_) selected under ^_ is a minimum cost input set that satisfy the accessibilityof all states, i.e., all states are accessible in (, _(^_), , ) and p((^_)) = p^_. Since ^_ is a solution to the weighted set cover problem, using Lemma <ref> all states are accessible in (, _(^_), , ). Thus (^_) is a feasible solution to Problem <ref>. To prove minimality, we use a contradiction argument. Let us assume that ^_ is an optimal solution to the weighted set cover problem but (^_) = {i:_i ∈^_ } is not a minimum cost input set that satisfy the accessibility condition. Then there exists '_⊆{1,…,m} such that all state nodes are accessible in (, _'_, , ) and p('_) < p((^_)). Note that for , ∪__i ∈'_i =. Using Lemma <ref>, . This gives a contradiction to the assumption that ^_ is a minimal solution to the weighted set cover problem. This completes the proof of (i). Now (ii) follows from Lemma <ref> and Step <ref> of Algorithm <ref> and this completes the proof. As an immediate consequence of the above result we can now show that approximation algorithm for minimum cost accessibility problem can be obtained from an approximation algorithm for the weighted set cover problem. If there exists a polynomial time ϵ-optimal algorithm for solving the weighted set cover problem, then there exists a polynomial time ϵ-optimal algorithm for solving Problem <ref>. Thus, we can find a log μ_ max-optimal solution to Problem <ref>, where μ_ max is the maximum number of non-top linked SCCs covered by a single input. 
From Lemma <ref>, a polynomial time ϵ-optimal algorithm for solving the weighted set cover problem gives a polynomial time ϵ-optimal algorithm for solving Problem <ref>. Now, using the greedy approximation algorithm for solving the weighted set cover problem given in <cit.>, we can obtain a log μ_ max-optimal solution to Problem <ref>. Note that through Algorithm <ref> we have shown that any instance of Problem <ref> can be reduced in polynomial time to an instance of the weighted set cover problem. Now, we prove constant factor inapproximability of Problem <ref>. That is, there does not exist any polynomial time algorithm that give ϵ-optimal solution to Problem <ref> for any ϵ > 1. To achieve this we give a polynomial time reduction of the weighted set cover problem to an instance of Problem <ref> in Algorithm <ref>. Using this, we will show that any polynomial time ϵ-optimal algorithm for solving Problem <ref> can be used to get polynomial time ϵ-optimal algorithm for the weighted set cover problem. Thus, since weighted set cover problem cannot be approximated up to constant factor, Problem <ref> also cannot be approximated up to constant factor. The pseudo-code showing a reduction of the weighted set cover problem to an instance of Problem <ref> is presented in Algorithm <ref>. Given , and w, we reduce the weighted set cover problem to an instance of the minimum cost accessibility problem. Here,is a diagonal N × N matrix with all diagonal entries 's (see Step <ref>). Now,is defined in such a way that its j^ th column corresponds to the set _j (see Step <ref>) and cost of j^ th input is same as the weight w(j) of _j (see Step <ref>). Given a solutionto the accessibility problem, we define the associated cost p(), the sets selected () and its weight w(()) as shown in Steps <ref>,<ref> and <ref> respectively. We denote an optimal solution to the set cover problem in Algorithm <ref> as ^ and its weight as w^. Now we prove the following preliminary results. Consider any weighted set cover problem with universe , setand weight w. Let || = N. Then, Algorithm <ref> reduces the weighted set cover problem to Problem <ref> in O(N^2) computations. Moreover, for any set , the cover () and weight w(()) given in Steps <ref> and <ref> respectively can be obtained in O(N) computations. Given any weighted set cover problem , , w, matrices ,can be found in O(N), O(N^2) computations respectively. Also, cost vector p_u can be found in linear time. Thus the reduction of the set cover problem to an instance of Problem <ref> given in Algorithm <ref> has O(N^2) computations. Also, given a setwe can obtain () and w(())in linear time and this completes the proof. Consider any weighted set cover problem given by , , w and the corresponding structural system obtained using Algorithm <ref>. Letbe a feasible solution to Problem <ref> and () consists of the sets selected under . Then, () coversand w(()) = p(). Givenis a feasible solution to Problem <ref>. Thus all states are accessible in(, _, , ). This implies for () = {_i: i ∈}, ∪__i ∈()_i =. Thus by Corollary <ref> () covers .Now Steps <ref>, <ref>, <ref> and <ref> of Algorithm <ref> gives w(()) = p().In the following lemma we show that an ϵ-approximation algorithm for Problem <ref> can be used to obtain an ϵ-approximate solution to the weighted set cover problem.Consider any weighted set cover problem and the corresponding structural system (, , p_u) obtained using Algorithm <ref>. 
For ϵ > 1, ifis an ϵ-optimal solution to the minimum cost accessibility problem, then () is anϵ-optimal solution to the weighted set cover problem.The proof of this lemma is twofold: (i) we show that an optimal solution ^_ to Problem <ref> gives an optimal solution ^_ to the weighted set cover problem, and (ii) we show that, if p() ⩽ϵ p^_, then w(()) ⩽ϵ w^_.For proving (i) we assume that ^_ is an optimal solution to Problem <ref> and then prove that (^_) is an optimal solution to the weighted set cover problem, i.e, ∪__i ∈(^_) = andw((^_)) = w^_. Given ^_ is an optimal solution to Problem <ref>. Thus all states are accessible in (, _^_, , ). Hence, by Lemma <ref>, (^_) is a feasible solution to the weighted set cover problem. Now we prove optimality using a contradiction argument. Let ^_ is an optimal solution to Problem <ref>, but (^_) is not an optimal solution to the weighted set cover problem. Then there exists ⊂{_1,…,_r} such that ∪__i ∈_i = and w() <w((^_)). Then = {i:_i ∈} covers all the non-top linked SCCs in (). Also, from Lemma <ref>, p() < p^_. This gives a contradiction to the assumption that ^_ is a minimum cost input setthat satisfies accessibility condition. This completes the proof of (i). Now (ii) follows directly from Lemma <ref> and Step <ref> of Algorithm <ref>. This completes the proof.Lemmas <ref> and <ref> prove the equivalence of Problem <ref> and the weighted set cover problem. There areno polynomial algorithms for solving weighted set cover problem unless P = NP. However, there exist various approximation algorithms that find approximate solution to the weighted set cover problem. Specifically, the greedy approximation algorithm given in <cit.> gives a logd approximation, where d is the cardinality of the largest set _i in .In addition to this, we also know strong negative approximability result for the set cover problem. The set cover problem is a special case of weighted set cover problem, where all weights are non-zero and uniform. Thus the inapproximability result of the set cover problem applies to the weighted set cover problem also. <cit.> If there is some ϵ > 0 such that a polynomial time algorithm can approximate the set cover problem within (1-ϵ)log L, then NP ⊂ NTIME(L^ loglogL), where L denotes the number of items in the universe.Using Lemma <ref> and Proposition <ref> we can now show that inapproximability result of the weighted set cover problem implies inapproximability result of Problem <ref>.If there does not exist a polynomial time ϵ-optimal algorithm for solving the weighted set cover problem, then there does not exist a polynomial time ϵ-optimal algorithm for solving Problem <ref>. Moreover, there does not exist a polynomial time algorithm that can approximate Problem <ref> to factor (1-o(1))log q, where q denotes the number of non-top linked SCCs in (). From Lemma <ref>, a polynomial time ϵ-optimal algorithm for solving Problem <ref> gives a polynomial time ϵ-optimal algorithm for solving the weighted set cover problem. Now, from Proposition <ref> weighted set cover problem cannot be approximated up to factor (1-O(1))log N, where N is the cardinality of the universe. The weighted set cover reduction of Problem <ref> has || = q. Thus Problem <ref> cannot be approximated to factor (1-o(1))log q. This shows the hardness of the problem. The number of non-top linked SCCs is atmost n. This happens when each state is decoupled. However, in practical cases the states are not decoupled. 
The more connected the graph is, the number of non-top linked SCCs are less. In such cases the above result gives a tighter bound. In the following sub-section we discuss briefly about the minimum cost sensability problem. §.§ Solving Minimum Cost Sensability ProblemIn this section, we establish a relation between the sensability condition for structural observability and a set cover problem. Specifically, we show that when the outputs are constrained and each output is associated with a cost, then satisfying minimum cost sensability condition is equivalent to solving a weighted set cover problem defined on the structural system.Consider a structural system (,) and a cost vector p_y denoted as (, , p_y). This system is said to satisfy the minimum cost sensability condition if all the non-bottom linked SCCs in () are covered by the least cost output set possible.That is, we need to find a set of outputs ^_⊆{1,…,p} such that all state nodes are sensable in (,, _^_, ) and p(^_) ⩽ p(_) for any _⊆{1,…,p} that satisfy sensability of all state nodes in (, , __, ). We refer to the above problem as the minimum cost sensability problem. However, because of duality between controllability and observability solving minimum cost sensability problem is equivalent to solving minimum cost accessability problem of the structural system (^T, ^T, p_y). Thus the weighted set cover reformulation of Problem <ref> for (^T, ^T, p_y) solves the minimum cost sensability problem of (, , p_y). Hence the following result immediately follows from the analysis done in the previous sub-section.Consider a structurally observable system (, , p_y). We can find a log η_ max-optimal solution to the minimum cost sensability problem, where η_ max is the maximum number of non-bottom linked SCCs covered by a single output. Also, there does not exist polynomial time algorithm that can approximate minimum cost sensability problem to factor (1-o(1)))log k, where k is the number of non-bottom linked SCCs in ().Now we will find a relation between minimum cost disjoint cycle condition and a bipartite matching problem.§.§ Solving Minimum Cost Disjoint Cycle ProblemIn this subsection we establish a relation between disjoint cycle condition and perfect matching problem. Specifically, we show that when the inputs and outputs are constrained and each input and output are associated with costs, then satisfying disjoint cycle condition using a minimum cost input-output set is equivalent to solving a minimum cost perfect matching problem on a bipartite graph defined on the structural system. A structural system (, ,) with feedback matrixand cost vectors p_u, p_y is said to satisfy the minimum cost disjoint cycle condition if all state vertices are spanned by disjoint union of cycles in the system digraph by using the least possible cost input-output set. That is, we need to find an input set ^_⊆{1,…,m} and an output set ^_⊆{1,…,p} such that all x_i's are spanned by disjoint cycles in (, _^_, _^_, _(^_×^_)) and p(^_) + p(^_) ⩽ p() + p() for any ⊆{1,…,m } and ⊆{1,…,p } that satisfy disjoint cycle condition in (, _, _, _(×)). Specifically, we need to solve the following optimization problem. Given (, , , ) and cost vectors p_u and p_y, find(^_, ^_)  ∈ min_0n _⊆{1,…,m} _⊆{1,…,p} p(_, _),such that all x_i's lie in finite disjoint union of cycles in (, __, __, _(_×_)). We refer to Problem <ref> as the minimum cost disjoint cycle problem. 
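As shown in the next subsection, Problem <ref> reduces to a minimum cost perfect matching on B(Ā, B̄, C̄, K̄). Anticipating that reduction, the sketch below solves it end to end; the placement of costs on edges and the use of networkx's scipy-backed minimum_weight_full_matching are our assumptions, consistent with but not copied from the paper's pseudo-code.

```python
def min_cost_disjoint_cycles(Abar, Bbar, Cbar, Kbar, p_u, p_y):
    """Minimum cost perfect matching on B(A,B,C,K): an edge entering input
    u_j costs p_u[j], an edge leaving output copy y'_j costs p_y[j], and the
    "idle" edges (u'_i,u_i), (y'_j,y_j) are free, so unused inputs/outputs
    incur no cost. Raises ValueError if no perfect matching exists."""
    n, m = Bbar.shape
    p = Cbar.shape[0]
    H = nx.Graph()
    H.add_nodes_from([f"x'{i}" for i in range(n)] + [f"x{i}" for i in range(n)]
                     + [f"u'{j}" for j in range(m)] + [f"u{j}" for j in range(m)]
                     + [f"y'{j}" for j in range(p)] + [f"y{j}" for j in range(p)])
    for i in range(n):
        for j in range(n):
            if Abar[i, j]:
                H.add_edge(f"x'{i}", f"x{j}", weight=0.0)
        for j in range(m):
            if Bbar[i, j]:
                H.add_edge(f"x'{i}", f"u{j}", weight=p_u[j])
        for j in range(p):
            if Cbar[j, i]:
                H.add_edge(f"y'{j}", f"x{i}", weight=p_y[j])
    for i in range(m):
        for j in range(p):
            if Kbar[i, j]:
                H.add_edge(f"u'{i}", f"y{j}", weight=0.0)
        H.add_edge(f"u'{i}", f"u{i}", weight=0.0)
    for j in range(p):
        H.add_edge(f"y'{j}", f"y{j}", weight=0.0)
    left = [v for v in H if "'" in v]
    match = nx.bipartite.minimum_weight_full_matching(H, top_nodes=left)
    used_u = sorted({int(v[1:]) for k, v in match.items()
                     if k.startswith("x'") and v.startswith("u")})
    used_y = sorted({int(k[2:]) for k, v in match.items()
                     if k.startswith("y'") and v.startswith("x")})
    return used_u, used_y
```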
Now we reduce the minimum cost disjoint cycle problem to a minimum cost perfect matching problem.Pseudo-code for reducing the minimum cost disjoint cycle problem to a minimum cost perfect matching problem is presented in Algorithm <ref>. The bipartite graph (, , , ) constructed in <cit.> for a special case is used here to guarantee condition b) in Proposition <ref> for a general case. Given the bipartite graph (, , , ) and the cost function c defined as in Step <ref>, we find a perfect matching M_. On obtaining a perfect matching M_, we define the associated cost c(M_) as the sum of the costs of edges that are present in M_ (see Step <ref>). The input index set selected under M_ defined as (M_) is the set of indices of u_i's that are connected to some state vertices in M_ (see Step <ref>) and its cost is defined as p((M_)) (see Step <ref>). Now, the output index set selected under M_ defined as (M_) consists of indices of all outputs y_j's that are connected to some state vertices in M_ (see Step <ref>) and its cost is defined as p((M_)) (see Step <ref>).We denote an optimal solution to the minimum cost perfect matching problem as M^ and the optimal cost as c^. Also, an optimalsolution to Problem <ref> is denoted as (^_, ^_) and the optimal input-output cost is denoted as (p^_ u + p^_ y). We prove the following theorem to give a necessary and sufficient condition for condition b) in Proposition <ref> for the sake of completeness. Consider a structural system (, , ) with feedback matrix . Then, the bipartite graph (, , , ) has a perfect matching if and only if all states are spanned by disjoint union of cycles in (, , , ). Only-if part: We assume that the bipartite graph (, , , ) has a perfect matching and prove that all state nodes are spanned by disjoint union of cycles in (, , , ). Let M be a perfect matching in (, , , ). Let ' = {(u'_i, u_i), (y'_j, y_j)}∈ M for i ∈{1,…,m} and j ∈{1,…,p}.Thus edges in M ∖' correspond to edges in (, , , ) such that there exist one incoming edge and one outgoing edge corresponding to every vertex in (, , , ) except nodes u_i's and y_j's that has edges in '. Since corresponding to edges in M ∖' every vertex has both in-degree and out-degree one, these edges corresponds to disjoint cycles in (, , , ). Note that all state vertices lie in M ∖'. Hence, all x_i's are spanned by disjoint union of cycles. This completes the proof of only-if part.If part: We assume that there exist disjoint union of cycles that span all state nodes in (, , , ) and prove that there exists a perfect matching in (, , , ). Since the cycles are disjoint, each node in it has one incoming edge and one outgoing edge. Each edge in the cycle corresponds to an edge in the bipartite graph. Vertices in (, , , ) that are not covered by these cycles will belong to the set of input and output nodes only. For such nodes there exist edges (u'_i, u_i) for all i ∈{1,…,m} and (y'_j, y_j) for all j ∈{1,…,p} in (, , , ). These edges along with the cycle edges results in a perfect matching. This completes the proof. Let M_ be a perfect matching in (, , , ) and (M_), (M_) denote the index set of inputs and index set of outputs selected under M_ respectively. Then, all x_i's lie in disjoint cycles in (, _(M_), _(M_), _((M_) ×(M_))) and p((M_))+ p((M_))= p((M_), (M_)) = c(M_).Given M_ is a perfect matching in the bipartite graph (, , , ) with cost function c. Using Theorem <ref>, there exist disjoint cycles that cover all state nodes in (, _(M_), _(M_), _((M_) ×(M_))). 
Now, Step <ref> and Steps <ref> to <ref> in Algorithm <ref> gives p((M_)) + p((M_)) = c(M_).Now we prove that minimum cost perfect matching problem on (, , , ) with cost ccan be used to solve the minimum cost disjoint cycle problem. Consider a structural system (, , ) with feedback matrixand cost vectors p_u, p_y. Let (^_, _^) be an optimal solutionto Problem <ref> and p(^_, _^) be the optimal cost of Problem <ref>. Let c^ is the optimal cost of the minimum cost perfect matching problem on (, , , ). Then, c^ = p(^_, ^_). Moreover, the input index set and output index set selected under Algorithm <ref> provide an optimal solution to Problem <ref>. Given (^_, _^) is an optimal solution to Problem <ref>. Then, from Theorem <ref> there exists a perfect matching in (, _^_, _^_, _(^_×^_)). Let M be an optimum matching in (, _^_, _^_, _(^_×^_)). Then, c(M) ⩽ p(^_, _^). Note that M = M ∪{(u'_i, u_i):i ∉^_}∪{(y'_j, y_j):j ∉^_} is an optimum matching in (, , , ). Also c(M) = c(M). Thus c(M) = c^⩽ p(^_, _^).Now let M^ is an optimal solution to the minimum cost perfect matching problem in (, , , ). Then c(M^) = c^. By Theorem <ref> there exists disjoint cycles whose union span all x_i's in (, _(M^), _(M^), _((M^) ×(M^))). Let the input-output set used in these cycles are (, ). Now p(, ) ⩽ c^. Also, p(^_, _^) ⩽ p(, ). Thus p(^_, _^) ⩽ c^. Combining both, we get p(^_, _^) = c^.Now we assume that M^ is an optimal solution to the minimum cost perfect matching problem with cost c^ and then show that input-output set ((M^), (M^)) selected under M^ is an optimal solution to Problem <ref>, i.e., all x_i's lie in disjoint union of cycles in (, _(M^), _(M^), _((M^) ×(M^))) and p((M^), (M^)) = p(^_, _^).Since M^ is a solution to the minimum cost perfect matching problem, by Lemma <ref> there are disjoint cycles in (, _(M^), _(M^), _((M^), (M^))) such that all state nodes lie in their union. Thus ((M^), (M^)) is a feasible solution to Problem <ref>. To prove minimality we use a contradiction argument. Let us assume that M^ is an optimal matching but ((M^), (M^)) is not an optimal solution to Problem <ref>. Then there exists '_⊂{1,…,m} and '_⊂{1,…,p} that satisfy the disjoint cycle condition in (, _'_, _'_, _('_×'_)) and p('_,'_) < p((M^), (M^)). Then by Theorem <ref> there exists a perfect matching M' such that (M') = '_ and (M') = '_. Using Lemma <ref>, . This gives a contradiction to the assumption that M^ is an optimal matching. This completes the proof.Hence, an optimal solution M^ to the minimum cost perfect matching problem gives a minimum cost input-output set (^_, ^_) that satisfies the disjoint cycle condition. There exist efficient polynomial time algorithms to solve the minimum cost perfect matching problem <cit.>. Thus using these algorithms we can solve Problem <ref> optimally in polynomial time.In the next section we give an approximation algorithm to solve Problem <ref>.§ APPROXIMATING CONSTRAINED INPUT-OUTPUT SELECTION FOR GENERIC ARBITRARY POLE PLACEMENTIn this section we give a polynomial time approximation algorithm for solving Problem <ref>. We propose a three stage algorithm for solving Problem <ref>. The pseudo-code for the proposed algorithm is given in Algorithm <ref>. In the first stage of Algorithm <ref> we solve a weighted set cover problem defined on the structural system (, , p_u) using a greedy approximation algorithm given in <cit.> to obtain an approximate solution to the minimum cost accessibility problem. 
We define the input index set selected under its solution as ^_ (see Step <ref>). Subsequently, in stage two we solve a weighted set cover problemdefined on the structural system (, , p_y) to approximate the minimum cost sensability problem. We define the output index set selected under its solution as ^_ (see Step <ref>). In the third stage of the algorithm a minimum cost perfect matching problem is solved on (, , , ) with cost function c. We define the input-output index set selected under solution to this problem as (^_, ^_) (see Step <ref>). In one of our main result we prove that (^_∪^_, ^_∪^_) is an approximate solution to Problem <ref>. Firstly, we prove the following preliminary result. Let () denote the state digraph of a structural system. Then, either one of the following happens:∙ an SCC in () is both non-top linked and non-bottom linked, ∙ an SCC in () lies in a path starting at some non-top linked SCC and ending at some non-bottom linked SCC.Consider the Directed Acyclic Graph (DAG) whose vertices are the SCCs in () and an edge exists between two nodes if and only if there exists an edge connecting two states in those respective SCCs in (). The nodes in the DAG are of two types: (i) isolated, and (ii) has an incoming and/or outgoing edge. In case (i) the corresponding SCC is both non-top linked and non-bottom linked. In case (ii) it has either an incoming edge or an outgoing edge or both. Thus those SCCs lie in some path from some non-top linked SCC to some non-bottom linked SCC since the DAG is acyclic. This completes the proof.Now we prove our main result.Proof of Theorem <ref>: Given (_a, _a) is an output of Algorithm <ref>. Hence, all states are accessible in (, __a, , ) and states are sensable in (, , __a, ). Thus, in (, __a, __a, _(_a ×_a)) all states are both accessible and sensable. Consider an arbitrary state x which belongs to some SCC 𝒩. By Lemma <ref>, 𝒩 lies on some path from a non-top linked SCC, say , to a non-bottom linked SCC, say , in the SCC DAG. Since U = {u_i:i ∈_a } are enough for accessibility, there exists u ∈ U such that u covers . Similarly, since Y = {y_j: j ∈_a } are enough for sensability there exists y ∈ Y such that y covers . Sinceis complete (y,u) belong to (, __a, __a, _(_a ×_a)). Thus in this digraph all states in all the SCCs of () that lie in the path fromtonow belong to a single SCC in (, __a, __a, _(_a ×_a)) which has edge (y,u). Thus x belongs to an SCC in (, __a, __a, _(_a ×_a)) with a (y,u) edge. Since x is arbitrary condition a) in Proposition <ref> follows. Since (_a, _a) is an output of Algorithm <ref>, by Theorem <ref> there exists disjoint cycles that cover all state nodes using inputs whose indices are in _a and outputs whose indices are in _a. Thus (_a, _a) satisfies condition b) in Proposition <ref>. Thus (_a, _a) ∈. This completes the proof of i).Let ^_ and ^_ are optimal solutions to the minimum cost accessibility problem and minimum cost sensability problem respectively. Given (_a, _a) is an output of Algorithm <ref>. Let _a = ^_∪^_, where ^_ is an ϵ_1-optimal solution to the minimum cost accessibility problem and ^_ is a minimum cost set that satisfy the disjoint cycle condition. Similarly, _a = ^_∪^_, where ^_ is an ϵ_2-optimal solution to the minimum cost sensability problem and ^_ is a minimum cost set that satisfy the disjoint cycle condition.Now by Theorem <ref>, ϵ_1 ⩽ log μ_ max and by Corollary <ref>, ϵ_2 ⩽ log η_ max. 
Since (I*, O*) is an optimal solution to Problem <ref>, its cost is at least the cost of satisfying each of the two conditions in Proposition <ref> separately. This gives Equations (<ref>) and (<ref>).

p(I*, O*) ⩾ p(I*_A) + p(O*_S),
p(I*, O*) ⩾ p(I*_M, O*_M),
2 p(I*, O*) ⩾ p(I*_A) + p(O*_S) + p(I*_M) + p(O*_M),
p(I^a_A) + p(O^a_S) ⩽ log n (p(I*_A) + p(O*_S)),
p(I*, O*) ⩾ [p(I^a_A) + p(O^a_S)] / [2 log n] + [p(I^a_M, O^a_M)] / 2
⩾ [p(I^a_A, O^a_S) + p(I^a_M, O^a_M)] / [2 log n]
⩾ p(I_a, O_a) / [2 log n].

Equation (<ref>) holds as I^a_A and O^a_S are approximate solutions to the minimum cost accessibility problem and the minimum cost sensability problem respectively, obtained using greedy approximation of their weighted set cover formulations. Equation (<ref>) holds as 2 log n ⩾ 1. This proves ii). From Proposition <ref> we know that the weighted set cover problem cannot be approximated to factor (1-o(1)) log N, where N is the cardinality of the universe. Hence, there does not exist any polynomial algorithm that has approximation ratio (1-o(1)) log(max(q,k)) for Problem <ref>. Note that max(q,k) ⩽ n. Thus there does not exist any polynomial algorithm that has approximation ratio (1-o(1)) log n for solving Problem <ref>, and the proposed algorithm is an order optimal approximation algorithm for Problem <ref>. In the following theorem we give the complexity of the proposed approximation algorithm.

Algorithm <ref>, which takes as input a structural system (Ā, B̄, C̄) with complete feedback matrix K̄ and cost vectors p_u, p_y and gives as output an approximate solution (I_a, O_a) to Problem <ref>, has complexity O(n^3), where n denotes the number of states in the system. Given the state digraph D(Ā) = (V_X, E_X), all the non-top linked SCCs can be found in O(max(|V_X|, |E_X|)) computations. Here |V_X| = n and |E_X| is at most |V_X|^2. Thus the set cover problems can be formulated in O(n^2) computations. The greedy selection scheme for finding the approximate solution to the set cover problem has O(n) complexity <cit.>. The minimum cost bipartite matching can be solved in O(n^3) computations. Thus Algorithm <ref> has O(n^3) complexity. In the next section we discuss a few special classes of systems in the context of Problem <ref>.

§ SPECIAL CASES
In this section we consider a few special cases. Using the approximation algorithm given in Section <ref> we obtain approximation results for these cases. In the following subsections we explain each of these cases briefly.

§.§ Irreducible Systems
In this sub-section we consider systems whose digraph D(Ā) is irreducible, that is, D(Ā) is a single SCC. Note that for this class of systems Problem <ref> is not NP-hard <cit.>. Pequito et al. addressed Problem <ref> along with costs for feedback edges in <cit.> and obtained a polynomial time optimal algorithm. In the following result we prove that the polynomial time algorithm given in this paper also gives an optimal solution to Problem <ref>.

Consider a structural system (Ā, B̄, C̄), complete feedback matrix K̄ and cost vectors p_u, p_y. Let D(Ā) be irreducible. Then Algorithm <ref> returns an optimal solution to Problem <ref>. Given that D(Ā) is irreducible and K̄ is complete, condition a) is satisfied by any (y_j, u_i) edge. Hence Algorithm <ref> only needs to solve the minimum cost perfect matching problem, which handles condition b) optimally. Without loss of generality, let u_i be an input and y_j be an output obtained in the solution, i.e., i ∈ I_a and j ∈ O_a. Then the edge (y_j, u_i) satisfies both conditions in Proposition <ref>. In case B(Ā) has a perfect matching, connecting the minimum cost input to the minimum cost output satisfies both the conditions in Proposition <ref>.
Thus p(_a, _a) = p^. Hence, Algorithm <ref> gives an optimal solution to Problem <ref>. §.§ Systems with Perfect matching in ()In this sub-section we consider systems whose bipartite graph () has a perfect matching. In this case condition b) in Proposition <ref> is satisfied without using any input or output. Thus condition a) alone has to be considered. That is, only minimum cost accessibility and minimum cost sensability problems need to be solved. We have the following result for these class of systems.Consider a structural system (, , ), complete feedback matrixand cost vectors p_u, p_y. Let () has a perfect matching. Then, Algorithm <ref> gives a 2 ( logμ_ max +logη_ max)-optimal solution to Problem <ref>, where μ_ max denotes the maximum number of non-top linked SCCs covered by a single input and η_ max denotes the maximum number of non-bottom linked SCCs covered by a single output. Given () has a perfect matching. Thus condition b) is satisfied. Thus we need to solve only the minimum cost accessibility problem and the minimum cost sensability problem. Now following the similar lines given in the proof of Theorem <ref>, we get p(_a, _a) ⩽ 2 ( logμ_ max +logη_ max)p^. Hence, Algorithm <ref> gives a 2 ( logμ_ max +logη_ max)-optimal solution to Problem <ref>. §.§ Systems with a Single non-top/non-bottom linked SCCIn this sub-section we consider systems that has a single non-top linked SCC or a single non-bottom linked SCC. For this class of systems we have the following result. Consider a structural system (, , ), complete feedback matrixand cost vectors p_u, p_y. Let () has a single non-top linked SCC. Then, Algorithm <ref> gives a 3 ( log η_ max)-optimal solution to Problem <ref>, where η_ max denotes the maximum number of non-bottom linked SCCs covered by a single output. Given () has a single non-top linked SCC. Thus μ_ max = 1. Thus p(_a, _a) ⩽ 3 ( log η_ max) p^. Hence, Algorithm <ref> gives a 3 ( log η_ max)-optimal solution to Problem <ref>. Note that if () has a single non-bottom linked SCC using the same argument we will get a 3log (μ_ max)-optimal solution to Problem <ref> using Algorithm <ref>. §.§ Discrete SystemsIn this subsection we consider linear time invariant discrete control system given by, x(t+1) = Ax(t) + Bu(t), y(t) = Cx(t). For discrete systems we have the following result.Consider a discrete structural system (, , ), complete feedback matrixand cost vectors p_u, p_y.Then, Algorithm <ref> gives a 2 ( log μ_ max +log η_ max)-optimal solution to Problem <ref>. In discrete linear time invariant systems, only condition a) in Proposition <ref> has to be satisfied, since uncontrollable and unobservable modes of the system at origin is not of concern. Thus Algorithm <ref> need to solve only the minimum cost accessibility problem and the minimum cost sensability problem. Hence, we can get a 2 ( log μ_ max +log η_ max)-optimal solution to the minimum cost constrained input-output selection for generic arbitrary pole placement of discrete systems. This completes the discussion of the approximation results for various special classes of systems considered. § CONCLUSIONThis paper deals with minimum cost constrained input-output selection problem for generic arbitrary pole placement when the input and output matrices are constrained and each input and output is associated with costs. Our aim is to find a minimum cost input-output set that generic arbitrary pole placement is possible. There do not exist polynomial time algorithms for solving this unless P = NP. 
To this end, we proposed a polynomial time algorithm for finding an approximate solution to the problem by splitting it into three sub-problems: minimum cost accessibility, minimum cost sensability and minimum cost disjoint cycle. We proved that the minimum cost accessibility and minimum cost sensability problems are equivalent to the weighted set cover problem. Further, we proved that the minimum cost disjoint cycle problem can be solved using a minimum cost perfect matching problem on a system bipartite graph with a suitably defined cost function. Using these results we proposed a polynomial time algorithm for solving the minimum cost constrained input-output selection problem for generic arbitrary pole placement. The proposed algorithm gives a 3(log μ_max + log η_max)-optimal solution. We also proved that there does not exist any polynomial time algorithm that can give a (1-o(1)) log n-optimal solution. Thus the proposed algorithm gives an order optimal O(log n) approximate solution to the minimum cost input-output selection for generic arbitrary pole placement problem.
myIEEEtran
http://arxiv.org/abs/1705.09600v2
{ "authors": [ "Shana Moothedath", "Prasanna Chaporkar", "Madhu N. Belur" ], "categories": [ "math.OC", "cs.DS" ], "primary_category": "math.OC", "published": "20170526144128", "title": "Approximating Constrained Minimum Cost Input-Output Selection for Generic Arbitrary Pole Placement in Structured Systems" }
Compression of turbulent magnetized gas in Giant Molecular Clouds
Yuval Birnboim^1,2 (contact email: [email protected]), Christoph Federrath^1 & Mark Krumholz^1
^1 Research School of Astronomy & Astrophysics, Australian National University, Canberra, ACT, Australia
^2 Racah Institute of Physics, The Hebrew University, Jerusalem 91904, Israel
Last updated 2017 May 23; in original form 2017 May 23
==========================================================
Interstellar gas clouds are often both highly magnetized and supersonically turbulent, with velocity dispersions set by a competition between driving and dissipation. This balance has been studied extensively in the context of gases with constant mean density. However, many astrophysical systems are contracting under the influence of external pressure or gravity, and the balance between driving and dissipation in a contracting, magnetized medium has yet to be studied. In this paper we present three-dimensional (3D) magnetohydrodynamic (MHD) simulations of compression in a turbulent, magnetized medium that resembles the physical conditions inside molecular clouds. We find that in some circumstances the combination of compression and magnetic fields leads to a rate of turbulent dissipation far less than that observed in non-magnetized gas, or in non-compressing magnetized gas. As a result, a compressing, magnetized gas reaches an equilibrium velocity dispersion much greater than would be expected for either the hydrodynamic or the non-compressing case. We use the simulation results to construct an analytic model that gives an effective equation of state for a coarse-grained parcel of the gas, in the form of an ideal equation of state with a polytropic index that depends on the dissipation and energy transfer rates between the magnetic and turbulent components. We argue that the reduced dissipation rate and larger equilibrium velocity dispersion have important implications for the driving and maintenance of turbulence in molecular clouds, and for the rates of chemical and radiative processes that are sensitive to shocks and dissipation.
dynamo — ISM: clouds — ISM: magnetic fields — magnetohydrodynamics (MHD) — plasmas — turbulence

§ INTRODUCTION
Magnetized plasma is ubiquitous in astrophysical systems. In particular, gas in the interstellar medium (ISM) is observed to be magnetized, and a large fraction of its energy content is in the form of magnetic fields <cit.>. The magnetic field in the ISM of disk galaxies consists of an ordered rotating component on galactic disk scales that is consistent with slow winding of the magnetic field via macroscopic dynamo processes, and small-scale magnetic fields that are generated by the winding of the magnetic field via turbulent dynamo processes <cit.>. For the Galaxy, the values of the two components are comparable, with typical values of 2–5 μG.

When portions of this magnetized fluid are subject to rapid radiative cooling, for example in molecular clouds, the result is a highly supersonic, strongly magnetized flow. Within such a flow, the velocity dispersion is dictated by the balance between driving and dissipation processes. This balance, particularly the dissipation part of it, has been studied extensively for both non-magnetized and magnetized flows in the context of periodic boxes with constant mean density <cit.>.
The general result from these simulations is that the turbulence decays on a timescale comparable to a large eddy turnaround time, and that the rates of decay are not substantially altered by the presence or absence of a magnetic field.

The problem of the balance between driving and decay for magnetized turbulence is most acute in molecular clouds. Since these have linewidths indicating the presence of supersonic flow, the fast dissipation of turbulence found by these simulations necessitates a mechanism to reinject the energy equally quickly. A number of candidates have been proposed, including internal feedback from H ii regions <cit.> or protostellar outflows <cit.>, driving of turbulence by ongoing accretion <cit.> or gravitational contraction on small scales <cit.>, thermal instability driving, and injection of energy from external supernova shocks <cit.>. Alternately, it is possible that the linewidths do not reflect turbulent motion at all, and instead indicate global gravitational collapse <cit.>. Each of these proposals, however, faces challenges – internal feedback must maintain large linewidths without destroying the clouds in which it occurs, driving by accretion faces the problem of what happens when the accretion eventually ends, thermal instability seems unlikely to be a viable mechanism in molecule-dominated galaxies that lack a significant warm phase, and external driving requires efficient coupling between the low density external medium and the dense clouds. The view that clouds are in global collapse is hard to reconcile with the observed very low rates of star formation found even in gas at densities ≳ 10^5 cm^-3 <cit.>.

The problem of the persistence of turbulence in molecular clouds is significantly eased if gravitational compression is able to pump energy into turbulent motion, since this would provide a mechanism to both power the turbulence and slow the collapse. The phenomenon has been explored for non-magnetized flows by <cit.>. In their work, an initially-turbulent gas is compressed in a scale-free manner by renormalizing the thermodynamic variables according to the expected values from a uniform collapse. As gas is compressed, the amplitude of the velocity field increases because the compression does PdV work against the kinetic pressure. On the other hand, the typical size of the eddies is reduced by the compression, and this accelerates the decay of turbulence. Depending on the compression rate, one process or the other dominates, and the turbulence either increases or decays. Qualitatively, the results are consistent with what one would have derived by naively equating the rate of PdV work with a decay timescale of ∼ 1 eddy turnover time derived from non-compressing driven turbulence simulations: the turbulence is amplified when the box compression time is short compared to the eddy turnover time, and decays if the converse holds. However, <cit.> did not include magnetic fields in their simulations, and we know that all clouds in the ISM are magnetized to a level that corresponds to near equipartition between turbulent and magnetic energy densities <cit.>.

In this work we seek to determine whether the result of <cit.> is altered in the presence of a magnetic field. We have already noted that, in driven turbulence simulations, magnetic fields make no qualitative difference. However, driving turbulence by global compression is qualitatively different than direct driving of the gas.
In the first case, the scaling relations of velocity and distance for global compression enhance all modes similarly, and some modes, for which the dissipation is faster and that are not replenished quickly enough by a turbulent cascade, can decay and disappear. In the second, the turbulence forcing arbitrarily sets the geometry of the flow and phases of the various modes, preventing the flow from achieving a more relaxed state.

A magnetic field might change the situation for a compressing flow in two ways. First, a magnetic field and gas motions can exchange energy via a turbulent dynamo <cit.>. For driven turbulence, the amount of energy stored in the dynamo is limited by the back reaction of the Lorentz forces on the gas <cit.>, and as a result the energy stored in the magnetic field is always subdominant compared to the turbulence. However, gravitational compression will amplify magnetic fields differently than gas motions <cit.>, potentially leading to magnetic-turbulent interactions not found in driven, non-compressing boxes. Second, magnetic fields will impose anisotropy on the flow, and anisotropic turbulence shows a different cascade pattern and a different decay rate than isotropic turbulence <cit.>.

In this paper we examine the effect of magnetic fields on global compression of turbulent gas in idealized 3D MHD simulations. We distinguish between cases of zero net magnetic flux and cases with non-zero net flux of various amplitudes. This is of theoretical importance because a magnetic field with finite net flux increases monotonically as gas contracts, and of practical interest because fields with non-zero net flux are likely present in proto-GMCs <cit.>. Rather than introducing cooling, we assume that the gas is isothermal, which is a reasonable approximation for GMCs over a wide range of densities. We describe our setup in sims and our simulation results in results. We then construct an analytic prediction for the effective equation of state of a system with mixed thermal, kinetic and magnetic pressure components (model), and use some of the physical insights to further analyze the dissipation in the simulations and compare our predictions to the simulations (comparison). In discussion we discuss possible implications of our results for the ISM and GMCs, and in conclusions we summarize and conclude.

§ SIMULATIONS

§.§ The FLASH code

We use a modified version of the grid-based code FLASH <cit.> (<http://www.flash.uchicago.edu/site/flashcode/>) to solve the three-dimensional (3D), compressible, ideal magnetohydrodynamical (MHD) equations,

∂ρ/∂t + ∇·(ρ𝐯) = 0,
∂(ρ𝐯)/∂t + ∇·(ρ𝐯⊗𝐯 - 1/4π 𝐁⊗𝐁) + ∇P_tot = 0,
∂e/∂t + ∇·[(e+P_tot)𝐯 - 1/4π(𝐁·𝐯)𝐁] = 0,
∂𝐁/∂t - ∇×(𝐯×𝐁) = 0,
∇·𝐁 = 0.

Here, ρ, 𝐯, P_tot=P_th+ (1/8π)|𝐁|^2, 𝐁, and e=ρϵ_int + (1/2)ρ|𝐯|^2 + (1/8π)|𝐁|^2 denote the gas density, velocity, pressure (thermal plus magnetic), magnetic field, and total energy density (internal, plus kinetic, plus magnetic), respectively. The MHD equations are closed with a quasi-isothermal equation of state (EoS), P_th=(γ-1)ρϵ_int, where we set γ=1.00001. Using this setting we model a gas with an extremely high number of degrees of freedom, f=2/(γ-1)∼2×10^5, effectively resulting in a gas that is isothermal. This is a standard procedure to obtain a quasi-isothermal EoS and results in the same thermodynamic response of the gas as a polytropic EoS, P_th∝ρ^Γ with Γ=1 <cit.>.
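To see how close γ=1.00001 comes to a strictly isothermal response, recall that adiabatic compression of an ideal gas gives T ∝ ρ^(γ-1). The short sketch below (our own Python illustration, not part of the FLASH setup) evaluates this scaling for compression factors of the order reached in the runs:

# Minimal check (assumption: simple ideal-gas adiabatic scaling, not FLASH code):
# for P = (gamma - 1) * rho * eps_int with gamma = 1.00001, adiabatic
# compression gives T ~ rho**(gamma - 1), so even huge density increases
# change the temperature by a negligible amount.
gamma = 1.00001
for compression in (1e2, 1e4, 1e6):
    temperature_ratio = compression ** (gamma - 1.0)
    print(f"rho x{compression:.0e}: T rises by factor {temperature_ratio:.6f}")

Even a factor of 10^6 rise in density changes the temperature by less than 0.02%, so the gas is effectively indistinguishable from Γ=1 while the energy equation still tracks the dissipated energy.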
The practical reason for choosing the simple ideal gas EoS with γ=1.00001 is to keep track of how much energy is dissipated by the turbulence, i.e., we solve the energy equation (<ref>) and record the change in energy every timestep. The system of ideal MHD equations (<ref>–<ref>) is solved with the robust HLL3R Riemann scheme by <cit.>, based on previous developments in applied mathematics to preserve positive density and pressure by construction <cit.>. For our particular simulations, the magnetic field is shown to remain divergence free to a reasonable degree (Appendix divB).

§.§ Numerical scheme for solving the MHD equations in an expanding or contracting coordinate system

The cosmology unit in FLASH allows one to use any hydrodynamics solver written for a non-expanding universe to work unmodified in a cosmological context. This is achieved by solving the MHD equations in the co-moving reference frame and accounting for the additional terms in the MHD equations that appear due to the expansion or contraction of the system. All calculations are assumed to take place in co-moving coordinates 𝐱 = 𝐫/a, where 𝐫 is the physical (proper) position vector and a(t) is the time-dependent cosmological scale factor. When transforming the MHD equations to the co-moving frame, the spatial derivative transforms as ∇_𝐱 = a∇_𝐫 and the time derivative transforms as (∂/∂ t)_𝐱 = (∂/∂ t)_𝐫 + H𝐫·∇_𝐫, where the Hubble constant is defined as H=ȧ/a. The physical (proper) velocity is given as 𝐯̃ = H𝐫 + a𝐱̇, where the first term is the Hubble flow and the second term contains the co-moving velocity 𝐯=𝐱̇.

Using these relations between the physical and co-moving derivatives in addition to the following transformations from physical (with tilde) to co-moving hydrodynamical quantities (without tilde),

ρ = a^3 ρ̃, 𝐁 = a^1/2 𝐁̃, P_tot = a P̃_tot, e = a ẽ, ϵ_int = a^-2 ϵ̃_int,

the MHD equations in co-moving coordinates have exactly the same form as Equations (<ref>–<ref>) with additional Hubble source terms on the right-hand sides of the momentum, energy and induction equations:

∂ρ/∂t + ∇·(ρ𝐯) = 0,
∂(ρ𝐯)/∂t + ∇·(ρ𝐯⊗𝐯 - 1/4π 𝐁⊗𝐁) + ∇P_tot = -2Hρ𝐯,
∂e/∂t + ∇·[(e+P_tot)𝐯 - 1/4π(𝐁·𝐯)𝐁] = -H[(3γ-1)ρϵ_int + 2ρ|𝐯|^2],
∂𝐁/∂t - ∇×(𝐯×𝐁) = -(3/2)H𝐁,
∇·𝐁 = 0.

Note that we have changed all time and space derivatives in these equations to the co-moving frame, i.e., ∂/∂ t ≡ (∂/∂ t)_𝐱 and ∇≡∇_𝐱. Since the form of these equations is identical to the conservation Equations (<ref>–<ref>) without the Hubble source terms, we can use any existing hydrodynamical scheme to solve this set of equations in the co-moving frame. In order to account for the Hubble source terms on the right-hand side of these equations, we use an operator-splitting approach, where the co-moving hydrodynamical variables are modified in each time step (after the hydro step) to account for the source terms.

First we note that the mass continuity equation is unchanged between physical and co-moving coordinates. The momentum equation has the Hubble source term -2Hρ𝐯. Expanding the co-moving momentum equation (<ref>) with respect to the change in a, we find ρ̇𝐯 + ρ𝐯̇ + ∇(…) = -2Hρ𝐯, where ρ̇=(dρ/da)(da/dt)=0 because dρ/da=0, and any spatial derivatives cancel, because a does not depend on space.
This leaves us with the simple differential equation, v̇/v = -2ȧ/a, for which the solution is v' = v(a/a')^2, where v and a are the velocity and scale factor before accounting for the Hubble term (i.e., before the hydro step) and v' and a' = a(t+Δt) are the velocity and scale factor after the current time step Δt. An analogous correction has to be made in the co-moving energy equation to account for the Hubble source term, i.e., ϵ_int' = ϵ_int(a/a')^(3γ-1).

These procedures to account for the Hubble flow in pure hydrodynamics (without magnetic fields) were already implemented in the cosmology module of the public version of FLASH. However, MHD was not supported. Here we implemented the necessary modifications of the induction equation with the Hubble source term -(3/2)H𝐁 in Equation (<ref>), which requires a modification of the co-moving magnetic field with B' = B(a/a')^(3/2), analogous to the operator-split corrections for the velocity and energy explained in the previous paragraph.

§.§ Initial driving of turbulence

In order to establish a fully-developed turbulent state, we first drive turbulence for a few crossing times. The state after this initial driving phase serves as the initial condition for our numerical experiments on the statistics of MHD turbulence in a contracting reference frame. Since we are focussing on MHD turbulence in molecular clouds, we drive turbulence to a target mass-weighted (MW) Mach number ℳ = ⟨v_rms/c_s⟩_MW = 9–10 <cit.> by applying a driving field ρ𝐅 as a source term in the momentum equation (<ref>). The sound speed is chosen as c_s = 1 in normalised units.

The turbulence driving field is constructed with a stochastic Ornstein-Uhlenbeck (OU) process <cit.>, implemented by <cit.> and available in the public version of the FLASH code. The OU process creates a spatial and temporal driving pattern that varies smoothly in space and time with an auto-correlation timescale equal to the turbulent turnover time (also called turbulent box-crossing time), t_turb = L/(2ℳc_s) = 0.05 for ℳ = 10 on the largest scales (L/2) in our periodic simulation domain of side length L = 1 (normalised units). The driving field 𝐅 is constructed in Fourier space such that most power is injected at the smallest wave numbers, 1 < |𝐤|L/2π < 3. The peak of energy injection is on scale L/2, i.e., k = 2, and falls off as a parabola towards smaller and larger wave numbers, such that the driving power is identically zero at k = 1 and k = 3, as in our previous studies of driven turbulence <cit.>.

In constructing the driving field, we apply a Helmholtz decomposition in Fourier space, in order to separate the driving field into its solenoidal and compressive parts. This allows us to construct a solenoidal (divergence-free) driving field (∇·𝐅 = 0) or a compressive (curl-free) driving field (∇×𝐅 = 0). The influence of different driving on the statistics of turbulence, the amplification of magnetic fields, and on the star formation rate has been determined in <cit.>, <cit.>, <cit.>, and <cit.>. For simplicity, and since here we simply want to seed a fully-developed initial turbulent state before starting the contraction, we chose to use purely solenoidal (divergence-free) driving.
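To make the operator-split treatment of the Hubble source terms (§2.2 above) concrete, the following minimal Python sketch (our own illustration, not the actual FLASH implementation; array shapes and the time step are arbitrary) applies the three closed-form solutions to the co-moving variables after a hydro step:

import numpy as np

def apply_hubble_source_terms(v, eps_int, B, a_old, a_new, gamma=1.00001):
    """Operator-split update over one time step, using the closed-form
    solutions quoted in the text:
      v'   = v   * (a/a')^2          (momentum source term -2*H*rho*v)
      eps' = eps * (a/a')^(3*g - 1)  (internal-energy source term)
      B'   = B   * (a/a')^(3/2)      (induction source term -(3/2)*H*B)
    A sketch under stated assumptions, not the FLASH implementation."""
    r = a_old / a_new
    return v * r**2, eps_int * r**(3.0 * gamma - 1.0), B * r**1.5

# Example: one step of fast contraction with H = -200 and dt = 1e-4
a_old, H, dt = 1.0, -200.0, 1e-4
a_new = a_old * np.exp(H * dt)
v, eps, B = np.ones(8), np.ones(8), np.full(8, 3.5)
v, eps, B = apply_hubble_source_terms(v, eps, B, a_old, a_new)
print(v[0], eps[0], B[0])  # the co-moving v, eps and B all grow as the box contracts

Because each source term has an exact solution over a step, the splitting introduces no stiffness of its own; the hydro solver remains unmodified, exactly as the text describes.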
§.§ Initial conditions and list of simulations

We start from gas with uniform density ρ_0 = 1 (normalised units) at rest and drive turbulence for t_0 = 4t_turb = 0.2 in a fixed (non-contracting) reference frame (a = 1), which establishes fully-developed turbulence. After this, we begin the contraction phase, a(t) < 1, at which point the driving is deactivated and turbulence as well as magnetic-field dynamics are solely determined by the contraction of the gas in the co-moving reference frame given by Equations (<ref>–<ref>).

Table <ref> provides a list of all the simulations performed. We distinguish between three main cases: two purely hydrodynamical (HD) runs, four MHD runs without magnetic guide field (noGF), and four MHD runs that include a constant guide field (GF) ⟨B_z⟩ in the z-direction of the simulation domain; for each of the MHD cases we consider multiple field strengths in order to determine the sensitivity of the results to this parameter. The simulations without guide field use an initial turbulent field generated with a flat power spectrum in the range k/(2π) = 2–20, which produces initial turbulent fields (after the driving phase) of B_rms = 9.2, 24, and 21, respectively (Table <ref>, middle section). The simulations with guide field were initialised with ⟨B_z⟩ = 0.35, 3.5, and 35, respectively, giving rise to initial turbulent fields (after the driving phase) of B_rms = 5.5, 18, and 21, respectively (Table <ref>, bottom section). We note that the field strength is in normalised units, so the Alfvén speed is v_A = B/(4πρ_0)^1/2 in normalised units. This means that B ∼ 3.5 corresponds to an Alfvén speed of one and B ∼ 35 to an Alfvén speed of 10, comparable to the turbulent velocity dispersion. Thus we can think of our three guide field cases as representing three regimes of plasma β and Alfvén Mach number ℳ_A (computed with respect to the guide field): GF-Weak has β ≫ 1, ℳ_A ≫ 1, GF-Medium has β ∼ 1, ℳ_A ≫ 1, and GF-Strong has β ≪ 1, ℳ_A ∼ 1.

All our simulations use the same resolution of N_res^3 = 512^3 grid cells (except MHD-noGF-Strong-LR with N_res = 256, used to investigate numerical convergence in Appendix convergence).

Finally, our simulations (Tab. <ref>) use the same time evolution for the scale factor, a(t) = exp[H(t-t_0)] for t ≥ t_0 with H = -(t_turb/10)^-1 = -200, i.e., fast contraction on a time scale ten times shorter than the initial turbulent crossing time. However, we also run HD-H1, MHD-noGF-Strong-H1 and MHD-GF-Medium-H1 simulations with H = -1, in order to demonstrate that our main conclusions do not depend on the choice of H (see Appendix H1). <cit.> discussed three different cases for the contraction law in the pure HD limit, while here we are primarily interested in the case of fast contraction (compression), focussing on the effect of the magnetic field (MHD runs).

§ SIMULATION RESULTS

§.§ Evolution of the Mach number and energy

mach_a and E_t show the time evolution of the rms Mach number and the kinetic and thermal energies, respectively, in all simulations. We show these quantities as a function of time during the initial driving phase, and as a function of the scale factor once compression begins, with the two phases separated by the solid vertical lines in the plots. Since the scale factor is exponential in time (H = ȧ/a = -200) and the plots use a logarithmic scale, position on the x-axis is proportional to time during both phases, albeit with different scalings. The top panels show our noGF simulations (without magnetic guide field) and the bottom panels show our GF simulations.
We show the pure hydrodynamic simulation (HD) in both panels to help guide the eye.

First examine mach_a. We see that, starting from the fully-developed turbulent state at a = 1, all simulations start compression (decreasing a), which drives turbulence, i.e., increasing v and ℳ. The turbulence is initially supersonic with ℳ ∼ 9–10 (Tab. <ref>), and increases to a peak of ℳ ≈ 20. However, at a ∼ 0.2, in all simulations except noGF-Strong, the evolution reverses and turbulence begins to decay. This change is also apparent in the kinetic energy density evolution shown in E_t, which increases sharply from a = 1 to a ≈ 0.2, but then shows an inflection point and increases less steeply thereafter.

This qualitative change from increasing to decaying turbulence is not related to the turbulence becoming sonic or subsonic, which only happens much later. Instead, it can be explained by the change of dissipation with a. The dissipation timescale is proportional to the largest eddy turnover time <cit.> and the dissipation rate becomes comparable to the compression rate when

η^* v_rms(a)/(aλ) = ηv_rms/(aL) = |H| = -ȧ/a,

with v_rms the root mean square of the velocity field, λ = L/2 the largest eddy size and H = -200, the compression rate in our simulations (<ref>). The coefficient η^* = η/2 is a dimensionless dissipation efficiency (see model) and has been calibrated to be η^* ≈ 0.9 (see comparison). We can solve dissip numerically for a using the value of v_rms(a) measured from the simulations, and the result is a ≈ 0.2; we show the exact solution for the HD run as the vertical dashed line in mach_a. It is evident that this typical timescale for equality between compression and dissipation successfully predicts the onset of efficient dissipation and roughly coincides with the beginning of the decaying stage of turbulence.

From a = 1 to a ≈ 0.2, the magnetic field has only minor effects on the evolution in all runs except GF-Strong. Compared to the HD case, in the noGF models the magnetic field stores additional energy which replenishes some of the kinetic energy that is dissipated. This slightly delays the onset of the decaying stage, and allows higher maximum velocities or Mach numbers by about 10–20%, but this is clearly a modest effect.

However, at later times the MHD and HD runs show profound differences. In all the MHD runs the Mach number (mach_a) eventually stops decreasing and begins to increase again. Corresponding to this, the slope of the kinetic energy versus a curve (E_t) steepens again. The value of a at which the switch from decaying to increasing turbulence happens appears to depend both on whether there is a guide field, and on the saturation level of the turbulent dynamo, as parameterized by σ_sat ≡ e_B/e_kin at the onset of compression; we report this quantity in Tab. <ref>. The noGF-Medium run has σ_sat = 8%, and does not switch from decaying to increasing until a ≈ 0.001, while the noGF-Strong (σ_sat = 56%), GF-Weak (σ_sat = 3%), and GF-Medium (σ_sat = 30%) runs all reverse at a ≈ 0.01. The GF-Strong case (σ_sat = 130%) never goes through a decaying phase at all, and instead has a Mach number that increases almost monotonically. In all cases, the difference between the MHD and HD cases is large and growing with time. Even the noGF-Medium case, with σ_sat = 8%, has ∼ 10 times as much kinetic energy as the pure HD case by a = 0.001.
The GF-Strong case has 10 times the kinetic energy of the HD case even at a ≈ 0.1, and by a = 0.01 this gap has grown to more than two orders of magnitude.

§.§ Dissipationless flows

§.§.§ The transition to dissipationless flow

Having seen that the presence of a magnetic field causes a major change in the behavior of compressive turbulence, we now investigate in more detail the origin of this behavior. We shall show that this change is the result of a shift in the flow pattern to one that is nearly dissipationless. As a first step in this direction, we note that the switch from decaying to increasing Mach number is associated with the ratio of magnetic to kinetic pressure. In our dimensionless units, the volume-averaged thermal pressure is 1/V, where V is the box volume, and we define the volume-averaged kinetic and magnetic pressures by

P_kin = (1/V)∫(1/2)ρv^2 dV and P_B = (1/V)∫(1/3)(B^2/8π) dV.

Note that the factor of 1/3 in the definition of P_B might at first seem surprising, but we shall see the justification for it in <ref>. We plot the time evolution of P_kin and P_B in all our runs in beta_mach. As in <ref>, the x-axis is separated (by the vertical solid line) into the initial driving stage on the left, and the contraction phase on the right.

At the beginning of compression, dissipation is comparatively unimportant because the compression timescale is small compared to the eddy turnover timescale. Thus the flow is nearly dissipationless. We show below that, for adiabatic contraction, kinetic pressure acts as a gas with γ = 5/3 and turbulent magnetic pressure acts as a gas with γ = 4/3, and we expect these pressures to scale as

P/P_th ∝ ρ^(γ-1) ∝ a^(-3(γ-1)).

Thus we expect the kinetic-to-thermal ratio to scale as a^-2, and the magnetic-to-thermal ratio to scale as a^-1 for the case without a guide field. With a guide field, flux conservation requires that the mean magnetic field rise as B_mean ∝ a^-2, and thus the scaling is the same, though for a somewhat different reason. We show lines with slopes of -1 and -2 in beta_mach, and they are indeed good descriptions of the slope at early times.

Unsurprisingly, the kinetic and magnetic pressures begin to drop when the dissipation rate becomes comparable to the compression rate. However, the kinetic term drops more steeply than the magnetic term. This effect is due to the fact that the only true dissipation channel in the system is via the kinetic term, and that the dissipation of the magnetic component is bottlenecked by the rate at which the now over-magnetized gas can transfer energy back into the kinetic component.

As a result of the difference in the dissipation rates, the magnetic pressure ultimately exceeds the kinetic pressure in all cases except GF-Weak. In all the other cases, the transition from decreasing to increasing Mach number occurs almost exactly when this crossover happens, although if one closely compares GF-Medium to noGF-Medium, it is clear that at equal field strength the transition occurs earlier, in terms of both a and the ratio of P_kin to P_B, in the presence of a guide field.
Whether the flow is subsonic or supersonic appears to make little difference to the transition, consistent with the findings of <cit.> that the dissipation rate is not greatly affected by whether the flow is subsonic or supersonic.

§.§.§ The nature of the dissipationless flow

When the magnetic pressure begins to dominate, or even earlier in the presence of a net magnetic flux, the flow re-arranges itself into a fundamentally different topology, characterized by a much lower rate of dissipation. We illustrate this topology in morphology, which shows density field maps along the three major axes, and velocity streamlines colour-coded by the z-component of the Mach number. The left column presents the initial state (a = 1) and the right column a highly compressed stage for which the flow has had time to settle into a self-consistent non-driven mode (a = 10^-3). The runs presented here are the HD run (top), noGF-Strong (middle) and GF-Medium (bottom panels), but the other MHD runs are qualitatively the same as the two shown in the figure.

Both MHD runs exhibit a behaviour such that, after compression has taken place, the flow settles into two main sheets sliding across each other at supersonic velocities. Since the simulation setup has no preferred direction in the noGF run, nor in the x-y plane of the GF run, the division into the two domains is arbitrary (for this specific simulation it roughly coincides with the x-axis). It is clear that this flow, which has naturally developed from standard turbulence, is highly non-random, and that the expected dissipation of these flows is greatly reduced compared to the standard flow of the HD run or the initial state.

We can illustrate the reduced dissipation more directly by examining the ratio of flow power in compressible modes to the total power in all modes, which we refer to as the compressive ratio. We show the time evolution of this quantity for all runs in Elgt. We compute the energy in solenoidal (⟨v_s^2⟩) and compressible (⟨v_c^2⟩) modes by performing a Helmholtz decomposition of the velocity field <cit.>. The ratio ⟨v_c^2⟩/(⟨v_s^2⟩+⟨v_c^2⟩) is ∼ 0.2–0.6 for supersonic turbulence, depending on the driving mode <cit.>. In the absence of magnetic fields, the flow remains in this range of values even after driving ceases, during the compressive phase (the HD case). However, Elgt demonstrates that when the magnetic field begins to dominate (a ≲ 0.01 for noGF-Medium, and a ≲ 0.03 for noGF-Strong; see also beta_mach, top panel), the compressive ratio drops rapidly. These values are marked by the vertical dashed lines in Elgt.

The GF-Strong run is particularly noteworthy in that it has a compressive ratio ≲ 10% even before the onset of compression, simply as a result of the strong magnetic field that prevents flows across field lines. As a result, it never experiences significant dissipation, and never goes through a phase when the turbulence decays.

§ THEORETICAL FRAMEWORK FOR COMPRESSIBLE MHD TURBULENCE

Having seen that magnetic fields lead to novel and initially-unexpected effects in MHD turbulence, we now seek to construct a theoretical model that we can use to interpret the results. Our basic approach will be to think of the region we are simulating as a small portion of a much larger cloud. We will then coarse-grain the MHD equations over the scale of our box, allowing us to write down an effective pressure in the box.
We will use the results of our numerical experiments, together with some basic physical arguments, to provide an effective equation of state to describe this pressure and its evolution, so that we can interpret our numerical results in thermodynamic terms.

§.§ Coarse-grained pressures

We begin by following the usual method of constructing a set of coarse-grained equations <cit.>. We define a spatial filter F_Δ(𝐱) with characteristic scale Δ with which we can convolve all the fluid variables. For any field ϕ(𝐱), we define

ϕ̄ ≡ ∫ϕ(𝐱') F_Δ(𝐱-𝐱') d𝐱',
ϕ' ≡ ϕ - ϕ̄,
ϕ̃ ≡ \overline{ρϕ}/ρ̄.

Here ϕ̄ is the filtered variable, obtained by convolving ϕ with the filter, ϕ' is the fluctuating part that remains after the filtered part has been removed, and ϕ̃ is the density-weighted (Favre) filtered variable.

Convolving the MHD equation of momentum conservation, mhd2, with the filter F_Δ(𝐱) gives

0 = ∂(ρ̄𝐯̃)/∂t + ∇·(\overline{ρ𝐯⊗𝐯} - (1/4π)\overline{𝐁⊗𝐁} + (1/8π)\overline{B^2}𝐈) + ∇P̄_th,

where 𝐈 is the identity tensor. Per the usual approach, we now write the averages over correlated terms as differences of the filtered quantities and the sub-filter-scale (SFS) quantities,

0 = ∂(ρ̄𝐯̃)/∂t + ∇·(ρ̄𝐯̃⊗𝐯̃ - (1/4π)𝐁̄⊗𝐁̄ + (1/8π)B̄^2𝐈) + ∇P̄_th - ∇·(T_R,SFS + T_M,SFS),

where

T_R,SFS = ρ̄𝐯̃⊗𝐯̃ - \overline{ρ𝐯⊗𝐯},
T_M,SFS = -(1/4π)𝐁̄⊗𝐁̄ + (1/8π)B̄^2𝐈 + (1/4π)\overline{𝐁⊗𝐁} - (1/8π)\overline{B^2}𝐈

are the Reynolds stress and Maxwell stress exerted by the SFS components of the fluid velocity and magnetic field, respectively.

As standard for the microphysical stress tensor, we decompose the SFS stresses T_R,SFS and T_M,SFS into on- and off-diagonal components, and identify the former as effective pressures. That is, we define the effective kinetic and magnetic pressures by

P_kin = -(1/3)Tr T_R,SFS, T_R,SFS = -P_kin𝐈 + π_R,SFS,
P_B = -(1/3)Tr T_M,SFS, T_M,SFS = -P_B𝐈 + π_M,SFS.

We use the notation P_B for the effective magnetic pressure to distinguish it from P_mag, the true, microphysical magnetic pressure, since we shall see below that they are somewhat different. For homogeneous, isotropic turbulence the tensors π_R,SFS and π_M,SFS have zero on their diagonals. In the presence of a large-scale guide field where isotropy is broken, this is not necessarily the case, and in principle the on-diagonal components of π_R,SFS and π_M,SFS can be as large as P_kin and P_B. However, since we are only after a heuristic model, we will ignore this complication. With these definitions, the filtered momentum equation reads

0 = ∂(ρ̄𝐯̃)/∂t + ∇·(ρ̄𝐯̃⊗𝐯̃ - (1/4π)𝐁̄⊗𝐁̄ + (1/8π)B̄^2𝐈) + ∇(P̄_th + P_kin + P_B) - ∇·(π_R,SFS + π_M,SFS).

The final step in defining the coarse-grained pressures via an equation of state is to relate the pressures as we have defined them to the energy content of the gas. The SFS kinetic and magnetic energies per unit volume are simply the differences between the true energies per unit volume and their analogs defined using the filtered quantities, i.e.,

e_kin,SFS = \overline{(1/2)ρv^2} - (1/2)ρ̄|𝐯̃|^2,
e_B,SFS = \overline{B^2}/8π - B̄^2/8π.

From the definitions of T_R,SFS, T_M,SFS, P_kin, and P_B, it is immediately clear that we have

P_kin = (2/3)e_kin,SFS and P_B = (1/3)e_B,SFS,

i.e., the kinetic pressure is simply 2/3 of the sub-filter-scale kinetic energy density, and the magnetic pressure is 1/3 of the sub-filter-scale magnetic energy density. Note that this relationship between pressure and energy density is different from the ones that obtain between the microscopic pressures and energy densities, for which P_th = (γ-1)e_th and P_mag = e_B = B^2/8π. This difference is the reason for the factor of 1/3 we introduce into P_B as computed in eq. <ref>.
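As a concrete (if crude) illustration of these definitions, the sketch below (our own Python example; a simple periodic top-hat average stands in for F_Δ, and random fields stand in for simulation data) computes the SFS energies and the corresponding effective pressures:

import numpy as np

def box_filter(f, w=5):
    """Periodic top-hat filter over the last three (spatial) axes;
    a crude stand-in for the kernel F_Delta in the text."""
    out = f
    for axis in (-3, -2, -1):
        out = sum(np.roll(out, s, axis=axis) for s in range(-(w//2), w//2 + 1)) / w
    return out

rng = np.random.default_rng(0)
N = 32
rho = 1.0 + 0.1 * rng.standard_normal((N, N, N))
v   = rng.standard_normal((3, N, N, N))   # velocity components
B   = rng.standard_normal((3, N, N, N))   # magnetic field components

rho_bar = box_filter(rho)
v_tilde = box_filter(rho * v) / rho_bar   # Favre-filtered velocity
B_bar   = box_filter(B)

# SFS energy densities: filtered true energies minus energies of the filtered fields
e_kin_sfs = box_filter(0.5 * rho * (v**2).sum(0)) - 0.5 * rho_bar * (v_tilde**2).sum(0)
e_B_sfs   = box_filter((B**2).sum(0) / (8*np.pi)) - (B_bar**2).sum(0) / (8*np.pi)

P_kin = (2.0/3.0) * e_kin_sfs.mean()   # SFS kinetic pressure
P_B   = (1.0/3.0) * e_B_sfs.mean()     # SFS magnetic pressure
print(P_kin, P_B)

Any smooth kernel could replace the top-hat average; the essential point is that P_kin and P_B are built from the difference between filtered energies and the energies of the filtered fields.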
Interpreted in terms of an adiabatic index γ, we see that SFS kinetic pressure acts like a fluid with a γ = 5/3 equation of state, while SFS magnetic pressure acts like a fluid with a γ = 4/3 equation of state.

If the thermal pressure also obeys an equation of state

P = (γ-1)ρϵ,

where ϵ is the energy per unit mass, then we can write the total coarse-grained pressure as

P_tot = P_th + P_kin + P_B = ρ[(γ-1)ϵ_th + (2/3)ϵ_kin,SFS + (1/3)ϵ_B,SFS],

where the various ϵ terms are the thermal, SFS kinetic, or SFS magnetic energy per unit mass.

§.§ An effective EoS for supersonic magnetized gas

We now wish to model the reaction of the full system of isothermal, turbulent and magnetized gas to compression, taking into account dissipation terms and interactions between the various components. Since these additional energy transfers are time dependent, they cannot be modelled as a proper EoS, that is only a function of the thermodynamic state. Instead, we model this behaviour as an effective EoS which also depends on the thermodynamic trajectory of a parcel of gas. A similar approach has been successfully implemented for stability analysis of gravitationally collapsing haloes, filaments and sheets in a cosmological context in <cit.>, <cit.>, and <cit.>. For the sake of notational simplicity, we shall from this point forward drop the overlines and the SFS notation, and we will understand that, unless otherwise stated, all quantities except energies per unit mass are filtered quantities, while specific energies are SFS quantities.

In analogy to the definition of the adiabatic index (with the subscript s indicating constant entropy),

γ = (∂ln P/∂lnρ)_s,

we define γ_eff as the full derivative of the pressure with respect to density of a fixed parcel of gas along a Lagrangian path,

γ_eff = dln P_tot/dlnρ = (ρ/P_tot)(Ṗ_tot/ρ̇),

with the upper dot indicating a full time derivative and P_tot as defined in eq. (<ref>).

§.§.§ Ideal EoS with dissipation

The time derivative of an ideal EoS (EoS) can be separated into its isentropic and non-isentropic parts. We first differentiate the pressure of a Lagrangian parcel of gas:

Ṗ = (γ-1)(ϵ̇ρ + ϵρ̇).

The time derivative of the specific energy, ϵ̇, can be taken from its thermodynamic definition,

ϵ̇ = -PV̇ - q,

with V the specific volume (V = ρ^-1) and q a general non-adiabatic energy sink rate. A negative value of q corresponds to an energy source. Inserting edot into pdot and using EoS again, we find

Ṗ = γ(ρ̇/ρ)P - (γ-1)ρq.

The interaction between two forms of energy, such as the transfer between kinetic and magnetic components associated with dynamo action, can be incorporated into this framework by introducing a positive q term into one form of energy (for example the kinetic), compensated by a negative contribution of equal magnitude to the other (for example magnetic energy).

§.§.§ Kinetic EoS with dissipation

Following <cit.> and <cit.> we model the dissipation rate of turbulence as inversely proportional to the largest eddy turnover time (see dissip),

q_dis = [ηv/(aλ)](v^2/2) = ηv^3/(aL),

with η a dimensionless free parameter (that depends on the numerical scheme and resolution) and λ the largest eddy scale (L/2). When decay is efficient, we expect η to be of order unity, but once a dissipationless flow pattern develops, it will be much smaller. This term dissipates kinetic energy into thermal energy, and formally should appear with a negative sign in the thermal component. However, by using an isothermal EoS for the gas, the thermal energy of the gas is fixed, and any heating of the gas is assumed to radiate out instantly.
We simply introduce this term as cooling, directly from the kinetic pressure.

§.§.§ Energy transfer between the kinetic and magnetic components

Turbulence enhances initially small seeds of the magnetic field via small-scale dynamo processes <cit.>. The amplification rate is initially exponential and eventually decreases to zero as the magnetic energy and the turbulent energy approach their saturation ratio, which is a function both of the Mach number and turbulent driving pattern <cit.> and of the ratio of turbulent and magnetic dissipation, i.e., the magnetic Prandtl number <cit.>. Inspired by this behavior, we model the energy transfer rate from the kinetic to the magnetic component as

ϵ̇_KB = Γϵ_B(1 - ϵ_B/(σ_sat ϵ_kin)),

where we recall that ϵ_B and ϵ_kin are the sub-filter-scale magnetic and kinetic specific energies. The saturation ratio, σ_sat = ϵ_B^sat/ϵ_kin^sat, is the ratio of the two components before compression, and will be calibrated for each simulation depending on its setup. By analogy with the dissipation rate, the growth rate of the magnetic field, Γ, is also taken as a fraction of the largest eddy turnover rate,

Γ = η_B v/(aL),

with the dimensionless coefficient η_B of order unity.

While this internal energy transfer does not change the total energy content of the gas, it does change the total pressure, because of the difference in γ for each component. We thus treat it as a sink term in ϵ̇_kin and as a source term in ϵ̇_B. We also note that this rate can become negative if the magnetic energy exceeds its saturation level with respect to the kinetic energy. While energy in such a physical case will flow from magnetic to kinetic, as the fields are strong enough to rearrange the material into a lower magnetic energy state, there is no justification to assume that the rate at which this happens is related to the eddy turnover rate. Regardless, we use ekb even for that case. As we show later, this modeling, with the same coefficients for the ϵ_B→ϵ_kin transfer as for ϵ_kin→ϵ_B, leads to reasonable results, although the value of η_B that we end up requiring is considerably less than unity. We leave further investigation into this point for later studies.

§.§ Calculation of γ_eff

We now have everything in place for the calculation of γ_eff. It is convenient to write the total pressure (eq. <ref>) as:

P_tot = P_th(1 + α_k + β^-1),

with

α_k = P_kin/P_th = (1/3)v^2/c_s^2 = (1/3)ℳ^2,
β = P_th/P_B = c_s^2 ρ[(1/3)(B^2/8π)]^-1,

with ℳ the rms Mach number of the flow, and β the plasma β parameter for the coarse-grained case.

If the sound speed is constant, as we have assumed, and we use the relation in pdot for the kinetic and magnetic parts, we get:

Ṗ_tot = (ρ̇/ρ)(P_th + (5/3)P_kin + (4/3)P_B) - (2/3)ρq_dis - (1/3)ρϵ̇_KB.

By noting that (cf. qdis, ekb)

ρq_dis = 3[ηv/(aL)]P_kin,
ρϵ̇_KB = 3[η_B v/(aL)]P_B(1 - ϵ_B/(σ_sat ϵ_kin)),

and using the relation

ρ̇/ρ = -3ȧ/a,

and alphabeta, we get:

Ṗ_tot = (ρ̇/ρ)P_th[1 + α_k(5/3 + (2/3)ηv/(ȧL)) + β^-1(4/3 + (1/3)η_B v/(ȧL)(1 - ϵ_B/(σ_sat ϵ_kin)))].

Plugging pdot2 into geff, we finally get:

γ_eff = [1/(1 + α_k + β^-1)][1 + α_k(5/3 + (2/3)ηv/(ȧL)) + β^-1(4/3 + (1/3)η_B v/(ȧL)(1 - ϵ_B/(σ_sat ϵ_kin)))].

Eq. (<ref>) manifests a few important physical properties. The equation properly interpolates between the various extremes: when the thermal component is dominant over the kinetic and magnetic components, 1 ≫ α_k, β^-1, γ_eff reduces to 1, as expected (for an isothermal gas). Likewise, when the kinetic component dominates, α_k ≫ 1, β^-1, γ_eff reduces to 5/3 (and to 4/3 when the magnetic component dominates). The sign of the dissipation term and the energy transfer term depends on the sign of ȧ. This is consistent with the expected behaviour that the dissipation always acts to reduce the pressure.
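These limiting values can be checked numerically. The following self-contained Python sketch implements eq. (geff2) directly (a hedged illustration: the defaults for η and η_B follow the by-eye calibration quoted later in comparison, and sat_frac, the ratio ϵ_B/(σ_sat ϵ_kin), is an input rather than something the sketch evolves):

def gamma_eff(alpha_k, beta_inv, v_over_adotL, eta=1.8, eta_B=0.01, sat_frac=0.5):
    """Effective adiabatic index of eq. (geff2); v_over_adotL = v/(adot*L)
    is negative for contraction (adot < 0)."""
    num = (1.0
           + alpha_k  * (5.0/3.0 + (2.0/3.0) * eta   * v_over_adotL)
           + beta_inv * (4.0/3.0 + (1.0/3.0) * eta_B * v_over_adotL * (1.0 - sat_frac)))
    return num / (1.0 + alpha_k + beta_inv)

# Limiting cases with no dissipation (v_over_adotL -> 0):
print(gamma_eff(0.0, 0.0, 0.0))   # thermal-dominated  -> 1
print(gamma_eff(1e6, 0.0, 0.0))   # kinetic-dominated  -> 5/3
print(gamma_eff(0.0, 1e6, 0.0))   # magnetic-dominated -> 4/3
# Supersonic gas (Mach ~ 10, alpha_k ~ 33) under contraction with strong
# dissipation: gamma_eff can become negative, as discussed in the text.
print(gamma_eff(33.3, 1.0, -5.0))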
When gas contracts (ȧ < 0), γ_eff drops, and the pressure growth is reduced. When gas expands (ȧ > 0), γ_eff increases, and the pressure drop due to the expansion, P ∝ ρ^γ_eff, drops even faster because of the dissipation. The magnitude of the dissipation term is not theoretically bound, and γ_eff can become negative or very large. This does not contradict our physical understanding. When large dissipation is present (v ≫ |ȧ|L), it is possible that as gas is compressed its pressure decreases, corresponding to a negative γ_eff.

§.§ Comparison to simulations

In this section we analyze the reaction of the multi-component gas to contraction, and compare the simulated runs (sims and results) to our analytic predictions. To quantify this reaction we use γ_eff (geff), and, in order to compare the simulations to our predictions, we derive this quantity in two separate ways. First, we take the logarithmic derivative of the total pressure P_tot (eqs. <ref>, <ref>, <ref>) with respect to density directly from the simulation, γ_sim. Then, we compare it to our analytic prediction, γ_pred, according to geff2.

It is of pedagogical and practical value to consider first a simplified version of our analytic model, for which zero dissipation is assumed. This is discussed in gamma_nodissip. A comparison to the full model is presented in gamma_full.

§.§.§ γ_eff comparison without dissipation

Dissipation sinks energy from the gas as it is compressed. Therefore, we expect gas to be more compressible (i.e., to have lower γ_eff) when dissipation cannot be neglected. By contrast, if dissipation is weak enough, γ_eff approaches the value expected for a gas with a mixture of thermal, turbulent, and magnetic pressure, with the different components weighted according to the relative pressures of each component. In terms of our model, this amounts to setting η = η_B = 0. As discussed in the previous sections, we expect this approximation to be valid at the initial stage, and then, to some degree, at late stages when a dissipationless flow (DLF) forms.

gamma_nodissip compares γ_sim (solid lines) with γ_pred (dashed lines). The horizontal dashed lines mark values of γ_eff = 1, 4/3, and 5/3 that correspond to the expected behavior for purely isothermal, purely magnetic and purely kinetic gas, respectively. For the HD simulation, the gas initially starts close to the value predicted for supersonic turbulence, γ_eff ∼ 5/3, and as dissipation becomes dominant γ_sim drops to ≈ 1/2. Once the turbulence has decayed significantly (mach_a, top panel) and the kinetic pressure drops below the thermal pressure (beta_mach), no energy is left to dissipate, and the gas behaves like an isothermal gas with γ_eff = 1.

The MHD runs without magnetic guide fields (noGF runs, top panel) initially behave similarly, with γ_eff dropping as dissipation becomes important. However, the dissipation stops for a different reason. At late times, after the transition to DLF, the gas behaves as magnetic-field dominated, with γ_eff ≈ 4/3. The asymptotic behavior at the initial stage and then again for a ≲ 10^-2 is well recovered by our analytic model without dissipation (top panel, dashed lines) for the HD and noGF runs. Note that at a ≈ 1 the predicted values are slightly higher than the simulated ones, indicating that some dissipation exists even at the very early stages when compression starts.

The bottom panel presents a similar calculation, but for the runs with guide fields (GF runs). In all three GF runs, γ_sim approaches ≈ 4/3 at large compressions.
This occurs even when the magnetic pressure is sub-dominant to the kinetic one (GF-Weak) or comparable (GF-Medium and GF-Strong), and our model predicts that the gas behaves like a purely kinetic gas (GF-Weak, γ_pred ≈ 5/3) or intermediate (GF-Medium and GF-Strong, 4/3 < γ_pred < 5/3). This discrepancy indicates that some dissipation is present even at those late stages. In summary, the simplified non-dissipative model provides a very good prediction of the asymptotic values of γ_eff for the HD and noGF runs, and provides a reasonable prediction, but with a slight overestimate, for the asymptotic γ_eff of the GF runs.

§.§.§ γ_eff comparison of the full model

gamma shows a similar plot to gamma_nodissip, but with γ_pred that includes dissipation as the dashed lines. First examine the top panel. In this plot, we have used η = 1.8 and η_B = 0.01 at compressions a ≳ 10^-2. These are by-eye fits; a more systematic calibration is certainly possible, but since it is likely that η depends on the numerical scheme and resolution, and η_B depends on the specific geometry of the driving, we find little practical value in estimating them more robustly here. To obtain meaningful physical results, we would need to explicitly include physical dissipation terms, i.e., kinematic viscosity and magnetic resistivity <cit.>. For our purpose, it suffices to show that we can fit the numerical results with reasonable values of η and η_B. The inclusion of the kinetic dissipation term, ηv/(aL), improves the small mismatch at ρ ≈ 1 that was present in gamma_nodissip. This illustrates that for our simulation setup, dissipation makes a minor difference from the beginning of the compression phase, even though it was not directly observed in beta_mach. For the HD and noGF runs (top panel), the predicted γ_eff curve is a reasonably good fit to the simulated one, even when the dissipation dominates.

However, the fit would fail miserably when flows enter the self-avoiding channel-flow phase. At that stage, our model would predict an ever-increasing dissipation because of the rising turbulent velocity when, in fact, very little dissipation takes place. Similarly, as the magnetic energy exceeds its saturation level, more and more energy would be predicted by our model to be pumped from magnetic to kinetic when, in fact, little energy is. To mimic this drop in the transfer terms, we simply set η = η_B = 0 when (α_kβ)^-1 = P_B/P_kin > 3. In the figure, the shutdown of the transfer terms produces the discontinuity in the predicted γ_eff. Without it, γ_eff would first shoot up because of the large ϵ̇_KB, and then shoot down because of the large q_dis. Obviously, the transition to the ineffective dissipation regime is not sharp, and a better fit between the calculated and predicted models could be obtained with a more sophisticated transition model. However, at this stage, we value the simplicity of the model over its accuracy. The success of the dissipation model when P_B ≲ P_kin, combined with its failure beyond that point and the success of the dissipationless model there, is the key physical finding of this paper.

Our model is somewhat less successful in the case when a guide field is present, as shown in the lower panel of gamma. The model is reasonable at first, but, as noted above, the switch to DLF appears to depend not just on the ratio of magnetic to kinetic pressures, but also on the field topology.
Our switch at P_B/P_kin > 3 is too conservative for this case, and as a result predicts strong dissipation at intermediate values of a when in fact our simulations are already switching to a low-dissipation state. We could improve the fits by hand-tuning when we switch to η = η_B = 0, but without an understanding of exactly how this switch depends on the magnetic topology, we would have to do so independently for each run, which seems of little value.

Regardless of our ability to predict exactly when the switch to DLF will occur, we can still give a physical interpretation to our results as follows. With our simulation setup, the turbulence at the compression stage is driven by globally re-normalizing the velocity and distance. This type of driving enhances existing modes, and allows the system to settle into "natural modes" which are not forced by an arbitrary external driving. From our results it appears that even a relatively small decay of some of the modes (e.g., even in the case of a sub-dominant magnetic field) is enough to significantly decrease the power in these modes and allow the flow to settle into a DLF.

§ IMPLICATIONS FOR THE ISM

While the study of dissipation in scale-free compression of turbulence described above is general, our motivation for conducting this study is not. We seek to find how magnetic fields alter the collapse of GMCs and, in particular, to test whether realistic initial conditions may delay the dissipation and consequent collapse of the GMCs. The ISM, in which GMCs form, is a multi-phase medium, with subsonic turbulence in the diffuse, warm and hot phases and supersonic turbulence in the denser, cold phases such as GMCs. The entire ISM is furthermore interlaced with magnetic fields and immersed in cosmic rays and radiation fields. The turbulence is maintained by various energy sources, including driving by feedback from star formation, gravitational collapse, and galaxy dynamics <cit.>. In this work we simplify this system to manageable levels by focusing on its most basic aspects: we start with gas that is supersonic and magnetized and test its response to quick compression. The compression pumps energy into the turbulence and into the magnetic field on all scales, without imposing any particular scale or randomness through external driving. We find that even weak magnetic fields eventually force the gas to settle into a channel-flow pattern with greatly-reduced dissipation. This is in stark contrast to the case where magnetic fields are absent, where we find (in agreement with previous purely hydrodynamical simulations) that the kinetic energy steadily decays and dissipation remains significant throughout the entire evolution of the gas.

The greatly reduced dissipation rate that we find in the presence of a magnetic field indicates that significantly less energy input may be required to produce the observed linewidths in GMCs than had previously been conjectured. However, a full exploration of this issue will need to take into account many more processes, among them the multiphase nature of the ISM and realistic stellar feedback. The latter process, which will drive random turbulence on a typical scale corresponding to the distance between stars within each GMC, may actually increase the dissipation rate by disturbing the dissipationless flow. Thus, while stellar feedback is highly energetic, it could prove to be an effective cooling agent by increasing the dissipation.
Even within our toy model, a more systematic test of various compression models and rates is needed to test the universality (or lack thereof) of the dissipation model and the calibrated rates. We leave these tests to future work.

At a minimum, however, we note that our results suggest that dissipation inside a forming GMC will be much less than has commonly been assumed. The gas from which GMCs form is observed to be threaded by a significant net magnetic flux <cit.>, and in this case the flow can be nearly dissipationless almost immediately after compression begins. Even in the limiting case of zero net flux but fields close to equipartition, as observed, the dissipation rate is substantially reduced once the cloud has compressed by a factor of ∼ 100 in linear dimension. This is not all that much by interstellar standards: the mean density of the Milky Way's ISM is n ∼ 1 cm^-3, so a factor of 100 linear compression corresponds to a density n ∼ 10^6 cm^-3, i.e., typical of a prestellar core.

The onset of dissipationless flow will not necessarily halt collapse. Even in the most favorable case, when no dissipation occurs, highly magnetized gas is unstable to scale-free collapse because its effective EoS is γ_eff = 4/3, which is right at the critical value for hydrostatic gravitational stability (γ_crit = 4/3). This indicates that, as a cloud compresses, the outward force exerted by the magnetic field grows just as fast as the inward gravitational force increases. This general point has been overlooked in the past, because the focus was on comparing timescales rather than using the formal requirement for hydrostatic atmospheres. However, we note that if the external compression is filamentary, as is often found, the critical value for stability drops to γ_crit = 1, and, if linear magnetic fields prevent compression perpendicular to the magnetic-field direction, the collapse can only occur along one direction, for which the critical value is γ_crit = 0 <cit.>.

The compression rate of observed GMCs can be characterized in a way that relates it to our idealized simulations. We define the non-dimensional compression rate

H_ND = -H t_sc,

which is the (absolute value of the) Hubble coefficient for the compression in units of the sound crossing time t_sc. If compression is driven by gravitational collapse, this non-dimensional compression rate is (up to order of unity corrections) t_sc/t_ff, with t_ff the gravitational free-fall time. By replacing t_sc of our turbulent cloud by t_turb (which is, again, correct up to order of unity corrections) we get H_ND = t_turb/t_ff. The ratio of the turbulent timescale to the free-fall timescale for reasonable GMCs is t_turb/t_ff = 2/√(α_vir), with α_vir ≈ 1, the virial parameter for virialized GMCs <cit.>, yielding H_ND ≈ 2. In our numerical simulations t_sc = 1 and H_ND = -H spans a range between 1 and 200 (see Table <ref> and Appendix H1). Since typical Mach numbers for local GMCs <cit.> and in the Central Molecular Zone Cloud G0.253+0.016 <cit.> are ℳ ≈ 10, and for high-z GMCs or ULIRGs can be as high as ℳ ≈ 100, we argue that our simulations bracket the observed range. Furthermore, as gas is compressed, dissipation always dominates eventually over the adiabatic compression. A faster compression rate simply extends the initial stage in which dissipation can be neglected. We note, however, that unlike realistic GMCs, our Mach number is independent of the compression rate and is set by the initial driving. Our Mach numbers (ℳ ≈ 10) are smaller than those observed for ULIRGs, and perhaps comparable to those of local GMCs.
Additionally, it does not follow the correlation set by H_ND ≈ 2 that is expected from observations. Since the onset of the dissipation is predicted by comparing the dissipation timescale (i.e., the turbulent timescale) and the compression timescale, we expect that realistic GMCs will always start near the onset of the dissipation phase, without the long adiabatic phase seen for H = -200 in our simulations. We leave a more systematic test of the dependence of our conclusions on the compression rate and Mach numbers for future work.

§ SUMMARY AND CONCLUSIONS

In this paper, we study the evolution of magnetized, supersonically turbulent gas as it is compressed. The compression is scale-free and corresponds to gravitational compression that operates on all scales, much like the expansion of the universe in cosmology, but with a scale factor that decreases with time (negative Hubble constant). Our simulations of this scale-free compression are performed by using a modified version of the cosmological expansion model in the FLASH MHD code (<ref>), and explore a range of magnetic field strengths and net fluxes.

The scale-free compression enhances all turbulent and magnetic modes by the same factor, and does not impose any arbitrary scale or randomness of phases. Consequently, the system is allowed to relax into a self-consistent state for which naturally decaying modes decay away while non-decaying modes are enhanced by the compression. We find that this relaxation, combined with a magnetic field, produces a surprising result: after some time the gas re-arranges itself into a self-avoiding channel flow, in which state the dissipation rate is nearly zero. This occurs whether or not there is a net magnetic flux, but the transition happens more readily for non-zero magnetic flux and for stronger fields, and it does not occur at all in the absence of a magnetic field.

We interpret the simulations by comparing them to a theoretical model for coarse-grained MHD turbulence. Our model treats the flow as having three distinct energy reservoirs (thermal, kinetic, and magnetic) that are coupled by dynamo action and dissipation. This model allows us to construct an equation of state with an effective adiabatic index γ_eff, whose value depends on the relative balance between the different energy reservoirs and on the overall rate of dissipation. We calibrate the transfer and dissipation terms from the simulations, and show that, once calibrated, the model provides a good match to our numerical experiments. A key feature of the model, and of the numerical results it describes, is that once the flow is sufficiently magnetically-dominated, the dissipation rate for the flow is nearly zero, and compression drives a continual increase in the kinetic energy per unit mass and the Mach number.
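To illustrate how these three reservoirs interact under contraction, the toy integration below (a Python sketch under stated assumptions: illustrative initial conditions and values of η, η_B and σ_sat, the dissipation and transfer terms of model, and the P_B/P_kin > 3 shutdown criterion of comparison; it is not the calibrated analysis used in the paper) evolves the specific kinetic and magnetic energies of a contracting, isothermal parcel:

import numpy as np

# Toy Euler integration of the two non-thermal reservoirs (thermal energy fixed).
H, L = -200.0, 1.0                 # compression rate and box size (normalised units)
eta, eta_B, sigma_sat = 1.8, 0.01, 0.3   # illustrative coefficients, not fits
e_k, e_b = 50.0, 0.3 * 50.0        # specific SFS kinetic/magnetic energies (Mach ~ 10)
a, dlna = 1.0, -1e-5
while a > 0.01:
    dt = dlna / H                  # time step implied by the log-a step (positive)
    v = np.sqrt(2.0 * e_k)
    if e_b / (2.0 * e_k) > 3.0:    # P_B/P_kin > 3: dissipationless-flow regime
        eta_eff, eta_B_eff = 0.0, 0.0
    else:
        eta_eff, eta_B_eff = eta, eta_B
    transfer = eta_B_eff * (v / (a * L)) * e_b * (1.0 - e_b / (sigma_sat * e_k))
    # adiabatic scalings e_k ~ a^-2 and e_b ~ a^-1; dissipation treated semi-implicitly
    e_k = (e_k - 2.0 * e_k * dlna - transfer * dt) / (1.0 + 2.0 * eta_eff * v / (a * L) * dt)
    e_b = e_b - e_b * dlna + transfer * dt
    a *= np.exp(dlna)
print(a, e_k, e_b, e_b / (2.0 * e_k))  # compare P_B/P_kin at the end of contraction

In this sketch the kinetic reservoir first grows nearly adiabatically while |H| exceeds the dissipation rate, then decays once dissipation becomes competitive, while the magnetic reservoir declines more slowly; when P_B/P_kin exceeds the shutdown threshold the dissipation and transfer terms switch off, crudely mimicking the DLF phase.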
This in turn might significantly ease the problem of how GMCs maintain their large linewidths while still forming stars with very low efficiencies.

Finally, we note that the analytic model derived here constitutes a first step towards a physically-motivated sub-resolution model for the ISM. Given some idea <cit.> about the content of the turbulence, thermodynamics and magnetic fields of the ISM, our model predicts the behavior of the gas in a way that can be implemented into large-scale, low-resolution simulations that only resolve the ISM as a coarse-grained mixture of the components. This too will be investigated in future work.

§ ACKNOWLEDGEMENTS

We thank Chalence Safranek-Shrader and Romain Teyssier for their help during the implementation of the Hubble source terms for MHD (<ref>). Y.B. wishes to thank the Research School of Astronomy and Astrophysics at The Australian National University for hosting him on a sabbatical during 2016–2017. C.F. gratefully acknowledges funding provided by the Australian Research Council's Discovery Projects (grants DP150104329 and DP170100603). M. R. K.'s work was supported under the Australian Research Council's Discovery Projects funding scheme (project DP160100695). The simulations presented in this work used high performance computing resources provided by the Leibniz Rechenzentrum and the Gauss Centre for Supercomputing (grants pr32lo, pr48pi and GCS Large-scale project 10391), the Partnership for Advanced Computing in Europe (PRACE grant pr89mu), the Australian National Computational Infrastructure (grant ek9), and the Pawsey Supercomputing Centre with funding from the Australian Government and the Government of Western Australia, in the framework of the National Computational Merit Allocation Scheme and the ANU Allocation Scheme. The simulation software FLASH was in part developed by the DOE-supported Flash Center for Computational Science at the University of Chicago.

§ REFERENCES
Beck R., Brandenburg A., Moss D., Shukurov A., Sokoloff D., 1996, ARA&A, 34, 155
Birnboim Y., Dekel A., 2003, MNRAS, 345, 349
Birnboim Y., Balberg S., Teyssier R., 2015, MNRAS, 447, 3678
Birnboim Y., Padnos D., Zinger E., 2016, ApJ, 832, L4
Bouchut F., Klingenberg C., Waagan K., 2007, Numer. Math., 108, 7
Bouchut F., Klingenberg C., Waagan K., 2010, Numer. Math., 115, 647
Bovino S., Schleicher D. R. G., Schober J., 2013, New J. Phys., 15, 013055
Brandenburg A., Subramanian K., 2005, Phys. Rep., 417, 1
Brandenburg A., Sokoloff D., Subramanian K., 2012, Space Sci. Rev., 169, 123
Cho J., Lazarian A., 2003, MNRAS, 345, 325
Cho J., Lazarian A., Vishniac E. T., 2002, ApJ, 564, 291
Crutcher R. M., 2012, ARA&A, 50, 29
Dekel A., Birnboim Y., 2006, MNRAS, 368, 2
Dubey A., et al., 2008, in Pogorelov N. V., Audit E., Zank G. P., eds, ASP Conference Series Vol. 385, Numerical Modeling of Space Plasma Flows, p. 145
Eswaran V., Pope S. B., 1988, Comput. Fluids, 16, 257
Evans II N. J., Heiderman A., Vutisalchavakul N., 2014, ApJ, 782, 114
Federrath C., 2013, MNRAS, 436, 1245
Federrath C., 2016, J. Plasma Phys., 82, 535820601
Federrath C., Klessen R. S., 2012, ApJ, 761, 156
Federrath C., Klessen R. S., 2013, ApJ, 763, 51
Federrath C., Klessen R. S., Schmidt W., 2008, ApJ, 688, L79
Federrath C., Klessen R. S., Schmidt W., 2009, ApJ, 692, 364
Federrath C., Roman-Duval J., Klessen R. S., Schmidt W., Mac Low M.-M., 2010, A&A, 512, A81
Federrath C., Chabrier G., Schober J., Banerjee R., Klessen R. S., Schleicher D. R. G., 2011a, Phys. Rev. Lett., 107, 114504
Federrath C., Sur S., Schleicher D. R. G., Banerjee R., Klessen R. S., 2011b, ApJ, 731, 62
Federrath C., Schrön M., Banerjee R., Klessen R. S., 2014a, ApJ, 790, 128
Federrath C., Schober J., Bovino S., Schleicher D. R. G., 2014b, ApJ, 797, L19
Federrath C., et al., 2016, ApJ, 832, 143
Federrath C., et al., 2017, in Crocker R. M., Longmore S. N., Bicknell G. V., eds, IAU Symposium Vol. 322, pp 123–128 (arXiv:1609.08726)
Ferrière K. M., 2001, Rev. Mod. Phys., 73, 1031
Fryxell B., et al., 2000, ApJS, 131, 273
Germano M., 1992, J. Fluid Mech., 238, 325
Goldbaum N. J., Krumholz M. R., Matzner C. D., McKee C. F., 2011, ApJ, 738, 101
Hansen C. E., McKee C. F., Klein R. I., 2011, ApJ, 738, 88
Hennebelle P., Inutsuka S.-i., 2006, ApJ, 647, 404
Heyer M. H., Brunt C. M., 2004, ApJ, 615, L45
Heyer M., Krawczyk C., Duval J., Jackson J. M., 2009, ApJ, 699, 1092
Heyer M., Gutermuth R., Urquhart J. S., Csengeri T., Wienen M., Leurini S., Menten K., Wyrowski F., 2016, A&A
Jin K., Salim D. M., Federrath C., Tasker E. J., Habe A., Kainulainen J. T., 2017, arXiv:1703.09709
Kazantsev A. P., 1968, Sov. Phys. JETP, 26, 1031
Kitsionas S., et al., 2009, A&A
Klein R. I., Inutsuka S.-I., Padoan P., Tomisaka K., 2007, in Reipurth B., Jewitt D., Keil K., eds, Protostars and Planets V, pp 99–116
Klessen R. S., Hennebelle P., 2010, A&A, 520, A17
Klingenberg C., Schmidt W., Waagan K., 2007, J. Comput. Phys., 227, 12
Koyama H., Inutsuka S., 2002, ApJ, 564, L97
Kritsuk A. G., Norman M. L., Padoan P., Wagner R., 2007, ApJ, 665, 416
Kritsuk A. G., et al., 2011, ApJ, 737, 13
Krumholz M. R., 2017, Star Formation, World Scientific Series in Astrophysics, World Scientific Publishing, Singapore
Krumholz M. R., Tan J. C., 2007, ApJ, 654, 304
Krumholz M. R., Matzner C. D., McKee C. F., 2006, ApJ, 653, 361
Krumholz M. R., Dekel A., McKee C. F., 2012, ApJ, 745, 69
Kuncic Z., Bicknell G. V., 2004, ApJ, 616, 669
Larson R. B., 1981, MNRAS, 194, 809
Lee Y.-N., Hennebelle P., 2016, A&A
Lemaster M. N., Stone J. M., 2009, ApJ, 691, 1092
Li H.-B., Henning T., 2011, Nature, 479, 499
Li Z.-Y., Nakamura F., 2006, ApJ, 640, L187
Li H.-B., Blundell R., Hedden A., Kawamura J., Paine S., Tong E., 2011, MNRAS, 411, 2067
Mac Low M.-M., 1999, ApJ, 524, 169
Mac Low M.-M., Klessen R. S., 2004, Rev. Mod. Phys., 76, 125
Mac Low M.-M., Klessen R. S., Burkert A., Smith M. D., 1998, Phys. Rev. Lett., 80, 2754
Matzner C. D., 2002, ApJ, 566, 302
Nakamura F., Li Z.-Y., 2007, ApJ, 662, 395
Ossenkopf V., Mac Low M.-M., 2002, A&A
Ostriker E. C., Gammie C. F., Stone J. M., 1999, ApJ, 513, 259
Padoan P., Nordlund Å., 1999, ApJ, 526, 279
Padoan P., Federrath C., Chabrier G., Evans II N. J., Johnstone D., Jørgensen J. K., McKee C. F., Nordlund Å., 2014, in Protostars and Planets VI, pp 77–100
Padoan P., Pan L., Haugbølle T., Nordlund Å., 2016a, ApJ, 822, 11
Padoan P., Juvela M., Pan L., Haugbølle T., Nordlund Å., 2016b, ApJ, 826, 140
Pan L., Padoan P., Haugbølle T., Nordlund Å., 2016, ApJ, 825, 30
Pillai T., Kauffmann J., Tan J. C., Goldsmith P. F., Carey S. J., Menten K. M., 2015, ApJ, 799, 74
Price D. J., Federrath C., 2010, MNRAS, 406, 1659
Robertson B., Goldreich P., 2012, ApJ, 750, L31
Roman-Duval J., Federrath C., Brunt C., Heyer M., Jackson J., Klessen R. S., 2011, ApJ, 740, 120
Salim D. M., Federrath C., Kewley L. J., 2015, ApJ, 806, L36
Schekochihin A. A., Iskakov A. B., Cowley S. C., McWilliams J. C., Proctor M. R. E., Yousef T. A., 2007, New J. Phys., 9, 300
Schleicher D. R. G., Banerjee R., Sur S., Arshakian T. G., Klessen R. S., Beck R., Spaans M., 2010, A&A
Schmidt W., Federrath C., 2011, A&A
Schmidt W., Niemeyer J. C., Hillebrandt W., 2006, A&A
Schmidt W., Federrath C., Hupp M., Kern S., Niemeyer J. C., 2009, A&A, 494, 127
Schneider N., et al., 2013, ApJ, 766, L17
Schober J., Schleicher D., Federrath C., Klessen R., Banerjee R., 2012a, Phys. Rev. E, 85, 026303
Schober J., Schleicher D., Bovino S., Klessen R. S., 2012b, Phys. Rev. E, 86, 066412
Schober J., Schleicher D., Federrath C., Glover S., Klessen R. S., Banerjee R., 2012c, ApJ, 754, 99
Schober J., Schleicher D. R. G., Federrath C., Bovino S., Klessen R. S., 2015, Phys. Rev. E, 92, 023010
Solomon P. M., Rivolo A. R., Barrett J., Yahil A., 1987, ApJ, 319, 730
Stone J. M., Ostriker E. C., Gammie C. F., 1998, ApJ, 508, L99
Subramanian K., 1997, arXiv:astro-ph/9708216
Subramanian K., 1999, Phys. Rev. Lett., 83, 2957
Sur S., Schleicher D. R. G., Banerjee R., Federrath C., Klessen R. S., 2010, ApJ, 721, L134
Sur S., Federrath C., Schleicher D. R. G., Banerjee R., Klessen R. S., 2012, MNRAS, 423, 3148
Usero A., et al., 2015, AJ, 150, 115
Vutisalchavakul N., Evans II N. J., Heyer M., 2016, ApJ, 831, 73
Waagan K., 2009, J. Comput. Phys., 228, 8609
Waagan K., Federrath C., Klingenberg C., 2011, J. Comput. Phys., 230, 3331
Wang P., Li Z., Abel T., Nakamura F., 2010, ApJ, 709, 27
Zamora-Avilés M., Vázquez-Semadeni E., 2014, ApJ, 793, 84

§ NUMERICAL TESTS

§.§ The ∇·B = 0 constraint

Physically, no energy should propagate into longitudinal modes of the magnetic field. For highly supersonic, low plasma-β turbulence simulations in FLASH using the HLL3R and HLL5R solvers <cit.>, this has been demonstrated and compared with alternative schemes in <cit.>. However, the compression in the simulations presented here could, in principle, change this conclusion, because the amplitude of the magnetic fields is enhanced by adiabatic compression. Fig. divB presents the ratio of magnetic field energy in longitudinal modes to the total energy in magnetic fields. As is evident, this value is never larger than 10^-4 and, except for GF-Weak, smaller than 10^-5. Additionally, this value drops as compression occurs, and is largest near the commencement of compression. We therefore do not expect the numerical errors in ∇·B to affect the conclusions of this paper.
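For reference, this diagnostic is straightforward to reproduce: the sketch below computes the fraction of magnetic energy residing in longitudinal (div-B-carrying) Fourier modes for a periodic field on a uniform grid. The random test field is a placeholder standing in for FLASH output; a numerically divergence-free field would give a fraction near zero.

```python
import numpy as np

def longitudinal_fraction(bx, by, bz):
    """Fraction of magnetic energy in longitudinal Fourier modes, i.e. the
    curl-free modes that carry a non-zero div(B) on a periodic grid."""
    k = [np.fft.fftfreq(n) for n in bx.shape]
    kx, ky, kz = np.meshgrid(*k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                        # guard k = 0 (its projection is zero)
    bxk, byk, bzk = (np.fft.fftn(b) for b in (bx, by, bz))
    kdotb = kx * bxk + ky * byk + kz * bzk   # proportional to div(B) in k-space
    e_long = np.sum(np.abs(kdotb)**2 / k2)   # energy of the k-parallel component
    e_tot = sum(np.sum(np.abs(bk)**2) for bk in (bxk, byk, bzk))
    return e_long / e_tot

rng = np.random.default_rng(0)
b = rng.standard_normal((3, 32, 32, 32))     # placeholder field, not FLASH output
print(f"longitudinal fraction: {longitudinal_fraction(*b):.3e}")
```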
§.§ Convergence with numerical grid resolution

It is well known that the grid resolution necessary for simulating fully-developed turbulence with an inertial subrange is at least 1024^3 cells <cit.>. However, this was not our goal in this paper. We expect that for our particular focus on coarse-grained values averaged over large scales, such a high resolution is not critical. In the interest of computational efficiency, and to allow for many different simulations, we only ran 512^3-cell boxes. Indeed, previous studies have demonstrated that large-scale averages converge even at a resolution of 256^3 grid cells <cit.>. In this appendix we briefly demonstrate that our resolution of 512^3 cells is sufficient for our needs.

We check for convergence by comparing two physically similar MHD runs: noGF-Strong and noGF-Strong-LR (see <ref>). The two simulations differ only in that the latter has a lower resolution of 256^3, while the former has our standard 512^3 resolution. We find that, at the end of the initial driving stage, the Mach numbers for the two runs differ by 5%, while the saturation levels for the magnetic field strength differ by about 30%. While these differences are not negligible, they do not change the qualitative results once compression begins. We demonstrate this in Fig. M_at_converge, which shows the Mach number evolution, and Fig. gam_converge, which shows γ; these figures can be compared to Figs. mach_a and gamma in the main text. We see that the overall behaviour of both the Mach number and the adiabatic index is nearly identical in the two runs.

§ SLOW (H = -1) COMPRESSION

All the simulations used in the main text have H = -200, corresponding to very fast compression. The motivation for this choice is that, given our high Mach numbers (ℳ ≈ 10, motivated by the properties of observed molecular clouds), the dissipation timescale is very short. Consequently, values of H near unity would not show a distinct amplification stage before the onset of compression, and instead would proceed directly to the dissipation stage. We find that ensuring the presence of a distinct amplification phase helps elucidate the physics of the problem, which is why we elected to use H = -200 as our standard choice. However, it is important to demonstrate that our central result, the onset of dissipationless flow, is independent of this choice. For this reason, in Fig. M_at_H1 we show the evolution of the Mach number for three simulations (HD-H1, noGF-Strong-H1 and GF-Medium-H1; see <ref>) with a compression rate H = -1, as well as our fiducial HD run to guide the eye. This compression rate corresponds to the gas compressing at roughly the sound speed, significantly slower than what would be expected if the gas were to contract at free-fall.
In these runs, the Mach number declines immediately once driving is turned off and compression starts, as we would expect for such slow compression. However, in these runs we still see the characteristic increase in Mach number at late times that occurs due to the onset of dissipationless flow. We demonstrate this more clearly in Fig. gam_H1, which shows γ measured for the H = -1 simulations, compared to our theoretical model using the same values of η and η_B (including the condition under which we set these terms to zero) calibrated from the H = -200 simulations and used for Fig. gamma in the main text. We first note that, as in the H = -200 simulations, at late times the MHD runs have γ substantially above unity, demonstrating the reduced dissipation that is the central result of this work. Moreover, the plot shows that our model continues to provide a very good description of the value of γ. Since dissipation is dominant from the beginning, our model predicts the initial value of γ to be very low, as is the case. At late times, our model predicts that γ should increase as energy is pumped into the magnetic component, and that eventually dissipation should turn off, leading to an increase in γ. Both of these predictions are borne out by the simulations. The success of our model's γ_pred at reproducing the value γ_sim we measure from the simulations, using the same calibration of η and η_B, suggests that our results are not dependent on the choice of a particular compression rate H.
http://arxiv.org/abs/1705.09657v2
{ "authors": [ "Yuval Birnboim", "Christoph Federrath", "Mark Krumholz" ], "categories": [ "astro-ph.GA", "astro-ph.SR", "physics.flu-dyn", "physics.plasm-ph" ], "primary_category": "astro-ph.GA", "published": "20170526175959", "title": "Compression of turbulent magnetized gas in Giant Molecular Clouds" }
A. Adamatzky, On dynamics of excitation in F-actin. University of the West of England, Bristol BS16 1QT, United Kingdom. [email protected]

We represent a filamentous actin molecule as a graph of finite-state machines (F-actin automaton). Each node in the graph takes three states — resting, excited, refractory. All nodes update their states simultaneously and by the same rule, in discrete time steps. Two rules are considered: the threshold rule — a resting node is excited if it has at least one excited neighbour — and the narrow excitation interval rule — a resting node is excited if it has exactly one excited neighbour. We analyse distributions of transient periods and lengths of limit cycles in the evolution of the F-actin automaton, propose mechanisms for the formation of limit cycles and evaluate the density of information storage in F-actin automata.

On dynamics of excitation in F-actin: automaton model
Andrew Adamatzky
05/26/17

§ INTRODUCTION

Actin is a protein present in all eukaryotic cells in the forms of globular actin (G-actin) and filamentous actin (F-actin) <cit.>. G-actin polymerises into a double helix of filamentous actin (Fig. <ref>a); during polymerisation G-actin units slightly change their shapes and thus become F-actin units <cit.>. The actin filaments form a skeleton of single cells, where they play key roles in motility and shape changing — together with myosin — and in signal transduction — together with tubulin microtubules <cit.>. Actin filament networks are key components of neural synapses <cit.>. The actin network is a substrate of cell-level learning <cit.> and information processing <cit.>. Actin filaments process information in synapses and cells; they compute in a hardwired sense, as specialised processors. If we did manage to uncover the exact mechanisms of information transmission and processing in actin filaments and establish an interface with them, we would be able to make large-scale massively parallel nano-computing devices. In <cit.> we proposed a model of actin filaments as two chains of one-dimensional binary-state semi-totalistic automaton arrays. We discovered automaton state-transition rules that support travelling localisations, compact clusters of non-resting states. These travelling localisations are analogous to ionic waves proposed in actin filaments <cit.>. We speculated that a computation in actin filaments could be implemented when localisations (defects, conformation changes, ionic clouds, solitons), which represent data, collide with each other and change their velocity vectors or states. Parameters of the localisations before a collision are interpreted as values of input variables; parameters of the localisations after the collision are values of output variables. We implemented a range of computing schemes in several families of actin filament models, from quantum automata to lattices with a Morse potential <cit.>. These models considered a unit (F-actin) of an actin filament as a single, discrete entity which can take just two or three states, and carriers of information occupied one or two actin units. These were models of rather coarse-grained computation <cit.>. To take the paradigm of computation via interacting travelling localisations to the sub-molecular level, we must understand how information, presented by a perturbation of some part of an F-actin unit from its resting state, propagates in the F-actin unit. The paper is structured as follows. We define a model of F-actin automata in Sect. <ref>.
In Sect. <ref> we study the excitation dynamics of automata with a threshold excitation rule, and in Sect. <ref> with a rule of narrow excitation interval. Implications of our findings for designs of actin-based information storage devices are discussed in Sect. <ref>.

§ MODEL

We use a structure of the F-actin molecule produced using X-ray fibre diffraction intensities obtained from well-oriented sols of rabbit skeletal muscles <cit.>. The structure was calculated with resolution 3.3 Å in the radial direction and 5.6 Å along the axis (Fig. <ref>b) <cit.>. The molecular structure was converted to a non-directed graph 𝒜, where every node represents an atom and an edge corresponds to a bond between atoms. The graph 𝒜 has 2961 nodes and 3025 edges. The minimum degree is 1, the maximum is 4, the average is 2.044 (with standard deviation 0.8), and the median degree is 2. There are 883 nodes with degree 1, 1009 nodes with degree 2, 1066 nodes with degree 3 and two nodes with degree 4. The graph 𝒜 has a diameter (longest shortest path) of 1130 nodes, a mean distance (mean shortest path between any two nodes) of 376, and a median distance of 338.

We study the dynamics of excitation on the actin graph 𝒜 using the following models. Each node s of 𝒜 takes three states: resting (∘), excited (⊕) and refractory (⊖). Each node s has a neighbourhood u(s), which is the set of nodes connected to s by edges of 𝒜. The nodes update their states simultaneously, in discrete time, by the same rule. Each step of simulated discrete time corresponds to one attosecond of real time. A resting node s^t = ∘ excites depending on the number σ_s^t of excited neighbours in its neighbourhood u(s):

σ_s^t = ∑_{w ∈ u(s)} {w^t = ⊕}.

We consider two excitation rules. In rule 𝒜_0 a resting node excites if it has at least one excited neighbour: σ_s^t > 0. In rule 𝒜_1 a resting node excites if it has exactly one excited neighbour: σ_s^t = 1 (we do not consider rules requiring σ_s^t > 1 because excitation there extincts quickly). Transitions from the excited state to the refractory state, and from the refractory state to the resting state, are unconditional, i.e. they take place independently of the neighbourhood state. The rules can be written as follows:

Rule 𝒜_0: s^{t+1} = ⊕ if s^t = ∘ and σ_s^t > 0; ⊖ if s^t = ⊕; ∘ otherwise.

Rule 𝒜_1: s^{t+1} = ⊕ if s^t = ∘ and σ_s^t = 1; ⊖ if s^t = ⊕; ∘ otherwise.

At the beginning of each computational experiment the F-actin automaton 𝒜 is in the global resting state: every node is assigned state ∘. Excitation dynamics are initiated by assigning non-resting states ⊕ or ⊖ (or both) to a portion of randomly selected nodes. Three stimulation scenarios are considered:

* Single-node stimulation: a single node is selected at random and assigned the excited state ⊕.
* (+)-stimulation: a specified ratio of nodes is selected at random and the selected nodes are assigned the excited state ⊕.
* (+-)-stimulation: a specified ratio of nodes is selected at random and the selected nodes are assigned either the excited state ⊕ or the refractory state ⊖ at random.

The automaton 𝒜 is deterministic; therefore, from any initial configuration the automaton evolves into a limit cycle in its state space (where its configuration is repeated after a finite number of steps) or into an absorbing state (a limit cycle of length one). For the rules selected there is only one absorbing state: all nodes in the resting state. A limit cycle is comprised of configurations where compact patterns of excitation travel along closed paths. A transient period is the interval of automaton evolution from the initial configuration to entering a limit cycle or an absorbing state.
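The model above translates directly into a short synchronous update loop. The sketch below implements both rules on an arbitrary adjacency list and measures transient periods and limit-cycle lengths by hashing configurations; the five-node path used at the end is a placeholder, not the 2961-node F-actin graph.

```python
RESTING, EXCITED, REFRACTORY = 0, 1, 2

def step(state, adj, rule):
    """One synchronous update of the three-state excitable automaton.
    rule 'A0': a resting node fires if it has >= 1 excited neighbour;
    rule 'A1': a resting node fires if it has exactly 1 excited neighbour."""
    new = []
    for v, s in enumerate(state):
        if s == EXCITED:
            new.append(REFRACTORY)        # excited -> refractory, unconditionally
        elif s == REFRACTORY:
            new.append(RESTING)           # refractory -> resting, unconditionally
        else:
            n_exc = sum(1 for u in adj[v] if state[u] == EXCITED)
            fires = n_exc >= 1 if rule == "A0" else n_exc == 1
            new.append(EXCITED if fires else RESTING)
    return tuple(new)

def evolve(state, adj, rule, max_steps=10_000):
    """Iterate until a configuration repeats; return (transient, cycle length).
    A cycle of length 1 on the all-resting configuration is the absorbing state."""
    seen = {state: 0}
    for t in range(1, max_steps + 1):
        state = step(state, adj, rule)
        if state in seen:
            return seen[state], t - seen[state]
        seen[state] = t
    raise RuntimeError("no repeat found within max_steps")

# Placeholder graph: a 5-node path stimulated at one end (single-node scenario).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
initial = (EXCITED, RESTING, RESTING, RESTING, RESTING)
print(evolve(initial, adj, "A0"))   # -> (6, 1): short transient into absorption
```

On the path graph the single excitation runs to the cul-de-sac and dies, illustrating the absorbing-state behaviour discussed in the next section.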
For modelling we used C and Processing; for visualisation and analyses we used R, iGraph and Chimera.

§ DYNAMICS OF 𝒜_0

§.§ Single node stimulation

The excitation propagates as a localised pattern (Fig. <ref>a-f). The number of nodes excited at every single step of time varies between one and five (Fig. <ref>a). Sometimes an excitation pattern splits into two localisations which travel along their independent pathways. The automaton 𝒜_0 always evolves into the absorbing state where all nodes are resting. This is because travelling localisations either cancel each other when they collide or reach the cul-de-sacs of their pathways. A distribution of transient periods is shown in Fig. <ref>b. The mean transient period is 840 time steps, the median 847, the minimum 2 and the maximum 1131. Only 29 nodes, when stimulated, lead to excitation development with a transient period between 2 and 15 steps. Stimulation of any of the other 2932 nodes triggers excitation dynamics lasting at least 568 steps. The longest transient period is observed when the localised excitation runs along a longest shortest path whose source is the initially stimulated node. The path of the longest excitation is visualised in Fig. <ref>a; the path matches the backbone of the actin unit.

§.§ (+)-stimulation

When we stimulate more than one node, the automaton 𝒜_0 exhibits several `epicentres' of excitation; the patterns of excitation propagate away from their origins (Fig. <ref>) and populate the graph. This stage is manifested in an increasing number of excited states at each step of the evolution (Fig. <ref>a). Eventually, depending on the distances between sources of excitation, the graph becomes filled with waves and localisations; e.g., in the illustration of Fig. <ref>a a peak is reached in 7-8 steps. Then patterns of excitation start colliding with each other. They annihilate as a result of the collisions. The number of excited nodes decreases over time (Fig. <ref>a). The graph returns to the totally resting state. The larger the portion of initially excited nodes, the quicker the evolution halts in the resting state (Fig. <ref>b). The `quicker' can be quantified by a power-law fit p = 4.7 · ρ^{-0.6}, where p is the length of the transient period and ρ is the ratio of initially excited nodes.

§.§ (+-)-stimulation

In a `classical' two-dimensional discrete excitable medium, stimulation of the medium with an excited node neighbouring a refractory node leads to the formation of a spiral wave. Due to the spiral waves, excitation can persist in a modelled medium indefinitely. F-actin automata follow this principle. When we stimulate nodes of 𝒜_0 such that some of the nodes get excited states and some refractory states, we evoke excitation patterns. The average level of excitation over trials is proportional to the number of nodes stimulated (see row e in Tab. <ref>a). The automaton enters a limit cycle (Fig. <ref>). The limit cycle's length varies between 5 and 14 time steps (see row c in Tab. <ref>a). The automaton appears to fall into the longest limit cycles when nearly half of the nodes are stimulated; however, due to the high deviation of the results (see row σ(c) in Tab. <ref>a), we would not state this as a fact. The lengths of transient periods, from stimulation to entering the limit cycle, are over half the number of nodes in 𝒜.

§ DYNAMICS OF 𝒜_1

§.§ Single node stimulation

When a single node is excited initially, the automaton 𝒜_1 always evolves to a globally resting state.
In a sample of seventy trials we found that the average length of the transient period is 862 time steps (standard deviation 230) and the median transient period is 869. The average transient period to the resting state is 22 steps longer than that in the automaton 𝒜_0.

§.§ (+)-stimulation

In contrast to automaton 𝒜_0, automaton 𝒜_1 does not show a pronounced sensitivity to the ratio ρ of initially excited nodes. Transient periods for all values of ρ are grouped around 1112 (Tab. <ref>b). The automata always evolve to limit cycles. Cycle lengths are around 15 time steps, with an excitation level (number of excited nodes) of just below 600 nodes. The system shows a high degree of variability in the lengths of transient periods and cycles, as manifested in the large values of the standard deviations σ(p) and σ(c). The level of excitation typically remains preserved.

§.§ (+-)-stimulation

𝒜_1 behaves similarly to the scenario of (+)-stimulation: there are many travelling localisations, which collide and mostly annihilate each other. Few localisations survive by finding a cyclic path to travel: if no other localisation enters their path, the remaining localisations can cycle indefinitely. The surviving localisations are responsible for 𝒜_1 falling into the limit cycle. An automaton starting with a mix of randomly excited and refractory states usually travels one and a half times longer to its limit cycle than the same automaton starting only with randomly excited states (compare Tab. <ref>b and Tab. <ref>c).

§ STABILITY OF THE DYNAMICS

How does repeated stimulation affect the excitation dynamics of 𝒜_0 and 𝒜_1? (+)-stimulation of 𝒜_0 at any stage of its evolution raises the level of excitation by an amount equivalent to that of a stimulated resting automaton (Fig. <ref>). Thus repeated stimulation prolongs the return of the automaton to its resting state. In the scenario of (+-)-stimulation, 𝒜_0 evolves to a limit cycle. Repeated (+-)-stimulation of the automaton while it is in the limit cycle causes the automaton to change its trajectory in the state space. This change is characterised by an initially reduced level of excitation. Typically, the excitation level drops by 100-150 nodes at the moment of stimulation. The level of excitation returns to its `pre-stimulation' value in 400-500 time steps.

§ IMPLEMENTATION OF MEMORY

F-actin automata entering limit cycles could play the role of information storage in actin filaments. The minimal length of a limit cycle detected is 5 time steps. Thus aromatic rings could be a substrate responsible for some patterns of cycling excitation dynamics. Let an aromatic ring automaton be stimulated such that a node is assigned the excited state and one of its neighbours the refractory state. The wave of excitation (comprised of one excited and one refractory state) propagates in the direction of its excited head (Fig. <ref>a). The excitation running along the aromatic ring cannot be extinguished by stimulation of one resting node (Fig. <ref>bcde) or two resting nodes (Fig. <ref>fgh). This is because an excited node surrounded by two resting neighbours excites both resting neighbours; thus excitation waves propagate along the ring in both directions. Therefore, even if the original excitation wave is cancelled by external stimulation, a similar running wave will emerge. To extinguish the excitation in an aromatic ring we must externally excite all four resting nodes or force them into the refractory state. Excited aromatic rings act as generators of excitation in F-actin automata. Let us consider an example.
In Fig. <ref> we see the aromatic ring of a histidine residue stimulated: one node is assigned the excited state and its neighbour the refractory state. The wave of excitation travels along the ring clockwise (Fig. <ref>abc). When the excitation reaches a node linked to the rest of the graph, the excitation propagates along the `bridge' (Fig. <ref>d). The excitation then propagates further inside the graph (Fig. <ref>ef), splitting into two compact excitation patterns at the junction (Fig. <ref>gh). The overall pattern of excitation in 𝒜_0 recorded at the 90th step of evolution is shown in Fig. <ref>.

§ DISCUSSION

The automaton model of an F-actin unit is a fast prototyping tool for studying the dynamics of excitation in actin filaments, allowing for controlled propagation of localisations at the atomic level. Two rules of excitation were analysed. The first rule states that a resting node is excited if it has at least one excited neighbour (𝒜_0): this is a classical threshold excitation rule. The second rule states that a resting node is excited if it has exactly one excited neighbour (𝒜_1): this may be seen as a rule of non-linear excitation, because only a narrow band of local activity triggers excitation in the node. We did not consider other ranges of thresholds or excitation intervals, because they always lead to extinction of excitation at the very beginning of the evolution. Both rules support travelling patterns of excitation. Automata 𝒜_0 show longer transient periods, smaller limit cycles and larger average levels of excitation than automata 𝒜_1 (Tab. <ref>d). When a resting automaton 𝒜_0 is stimulated by external excitation of some nodes, the excitation patterns spread all over the automaton graph, but then the activity declines to the global resting state. Stimulation of actin automata with a mix of excited and refractory states leads to excitation dynamics with longer transient periods and the formation of repeated patterns of excitation, analogous to oscillatory structures. The limit cycles are stable: an automaton subjected to repeated stimulation always slides back to its pre-stimulation activity level. Due to the substantial noise-tolerance of excitation waves propagating in aromatic rings, the rings could be seen as memory devices in a hypothetical actin computer. Assume an excited aromatic ring represents one bit. To write a bit, we excite one node and inhibit (force into the refractory state) one of its neighbours. To erase a bit, we must excite or inhibit all resting nodes. An F-actin unit contains 40 rings (8 of histidine, 12 of phenylalanine, 4 of tryptophan, and 16 of tyrosine); see the configuration of the aromatic rings in Fig. <ref>. Thus an F-actin unit can store 40 bits. The maximum diameter of an actin filament is 8 nm <cit.>. An actin filament is composed of overlapping units of F-actin (Fig. <ref>a); thus, the diameter of a single unit is c. 4 nm. The persistence length of an F-actin polymer is 17 μm <cit.>; therefore we can assume that it is feasible to write 32 · 10^4 bits on a double-strand actin filament. Given appropriate tools to read and write the dynamics of excitation in predetermined parts of the F-actin molecule, we can assume the actin polymer offers us a memory density of 64 Petabit per square inch (6.452 · 10^16 per square inch).
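As a quick consistency check of the write protocol described above, the following self-contained snippet runs rule 𝒜_0 on a five-membered ring (the size of histidine's imidazole ring) and confirms that a single written bit, one excited node with a refractory neighbour, circulates with period 5.

```python
def step_a0(state, adj):
    """Rule A0 on states 0 (resting), 1 (excited), 2 (refractory):
    resting -> excited if any neighbour is excited;
    excited -> refractory; refractory -> resting."""
    def nxt(v, s):
        if s == 1:
            return 2
        if s == 2:
            return 0
        return 1 if any(state[u] == 1 for u in adj[v]) else 0
    return tuple(nxt(v, s) for v, s in enumerate(state))

n = 5                                        # imidazole-like five-membered ring
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}

# "Write a bit": one excited node with one refractory neighbour seeds a
# unidirectional wave; the refractory tail blocks backward propagation.
state = (1, 2, 0, 0, 0)
history = [state]
for _ in range(2 * n):
    state = step_a0(state, adj)
    history.append(state)

assert history[n] == history[0]              # the written bit cycles with period n
print("one period of the stored wave:", history[:n + 1])
```

Erasing the bit would require forcing all resting nodes of the ring simultaneously, in line with the noise-tolerance argument above.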
http://arxiv.org/abs/1705.09402v1
{ "authors": [ "Andrew Adamatzky" ], "categories": [ "cs.ET" ], "primary_category": "cs.ET", "published": "20170526003628", "title": "On dynamics of excitation in F-actin: automaton model" }
^1 Laboratoire de Chimie Théorique, Université Pierre et Marie Curie, Sorbonne Universités, CNRS, Paris, France
^2 Institut des Sciences du Calcul et des Données, Université Pierre et Marie Curie, Sorbonne Universités, Paris, France
^3 Dipartimento di Scienze Fisiche e Chimiche, Università degli Studi dell'Aquila, L'Aquila, Italy
^4 Laboratory of Atomic and Solid-State Physics, Cornell University, Ithaca, NY, USA

We present the extension of variational Monte Carlo (VMC) to the calculation of electronic excitation energies and oscillator strengths using time-dependent linear-response theory. By exploiting the analogy existing between the linear method for wave-function optimisation and the generalised eigenvalue equation of linear-response theory, we formulate the equations of linear-response VMC (LR-VMC). This LR-VMC approach involves the first- and second-order derivatives of the wave function with respect to the parameters. We perform first tests of the LR-VMC method within the Tamm-Dancoff approximation using single-determinant Jastrow-Slater wave functions with different Slater basis sets on some singlet and triplet excitations of the beryllium atom. Comparison with reference experimental data and with configuration-interaction-singles (CIS) results shows that LR-VMC generally outperforms CIS for excitation energies and is thus a promising approach for calculating electronic excited-state properties of atoms and molecules.

Keywords: excitation energies - linear method - Tamm-Dancoff approximation - oscillator strengths - beryllium

Time-dependent linear-response variational Monte Carlo
Julien Toulouse^1
May 8, 2017

§ INTRODUCTION

Quantum Monte Carlo (QMC) methods <cit.> are a powerful and reliable alternative to wave-function methods and density-functional theory (DFT) for quantum chemistry calculations, thanks to their favorable scaling with system size and their suitability for high-performance computing infrastructures, such as petascale architectures. Variational Monte Carlo (VMC) <cit.> combines Monte Carlo integration, for computing the expectation value of the electronic Hamiltonian Ĥ, with the variational principle for the ground state. VMC scales as N^{3-4} (where N is the number of electrons), similar to DFT scaling. The main drawback of any QMC approach is the very large prefactor in this scaling, preventing the systematic use of QMC in quantum chemistry calculations of medium- and large-size systems. This drawback is alleviated by performing massively parallel calculations on supercomputers <cit.>. A fundamental role is played by the trial wave function, often written as a product of a determinantal part and a bosonic Jastrow factor <cit.> which depends on interparticle distances (electron-nucleus, electron-electron, and higher many-body terms, …). For example, one can use for the determinantal part a linear combination of configuration state functions (CSFs, i.e. spatial- and spin-symmetry adapted linear combinations of Slater determinants of one-electron molecular orbitals) <cit.>, or the antisymmetrised geminal power (AGP) ansatz (a single determinant of geminal pairing functions <cit.>). Furthermore, the optimisation of the wave function is crucial for an accurate description of both static and dynamic electron correlation.
The linear method <cit.> allows one to efficiently perform such an optimisation for all the parameters of the wave function, using only the first-order derivatives of the wave function with respect to the parameters.

The calculation of excited-state properties of molecules (from prototypical models to complex organic dyes and biochromophores) still represents an open challenge for theoreticians. The two commonly used approaches are time-dependent density-functional theory, which is not computationally demanding but sometimes lacks accuracy, and wave-function methods, which are more accurate but very computationally demanding. QMC methods were originally formulated for ground-state problems and their extension to excited states is not straightforward. Relatively few applications of QMC to electronic excitations are present in the literature; see, e.g., the singlet and triplet energies for the benchmark CH_2 diradical <cit.>, the low-lying singlet excited states of biochromophores <cit.>, the n → π^* transition in acrolein <cit.>, and the recent extension of the AGP ansatz for calculating excited-state energies <cit.>.

The basic idea of the present work stems from the formal analogy existing between the linear method for wave-function optimisation and time-dependent linear-response theory <cit.>. Indeed, the generalised eigenvalue equations of linear-response theory in the Tamm-Dancoff approximation (TDA) and of the linear method at the ground-state minimum coincide. Starting from this observation, we derive and implement the linear-response equations in VMC (LR-VMC). This represents an extension of the well-established time-dependent linear-response Hartree-Fock or multiconfiguration self-consistent-field methods, taking into account both static and dynamic electron correlation.

The paper is organised as follows. In Section <ref>, VMC and linear-response theory are briefly reviewed, and the LR-VMC method is presented and discussed in detail. Results of LR-VMC calculations in the TDA for some singlet and triplet excitations of the beryllium atom are reported and discussed in Section <ref>. Conclusions and perspectives for future work are given in Section <ref>.

§ THEORY

We first briefly review the form of the wave function that we use and the linear optimisation method. We then derive the time-dependent linear-response equations and show how to implement them in VMC.

§.§ Wave-function parametrisation

We consider Jastrow-Slater-type wave functions parametrised as <cit.>

|Ψ(p̱)⟩ = Ĵ(α) e^κ̂(κ) ∑_{I=1}^{N_CSF} c_I |C_I⟩,

where Ĵ(α) is a Jastrow-factor operator depending on a set of parameters α, e^κ̂(κ) is the orbital-rotation operator depending on a set of orbital-rotation parameters κ, and |C_I⟩ are CSFs with associated coefficients c̱ = {c_I}. The CSFs are linear combinations of Slater determinants of orbitals |ϕ_i⟩, which are expanded in a basis of Slater functions {|χ_μ⟩},

|ϕ_i⟩ = ∑_{μ=1}^{N_basis} λ_{iμ} |χ_μ⟩.

The Slater functions are centered on the nuclei and their spatial representation is

⟨ṟ|χ_μ⟩ = N_n(ζ) r^{n-1} e^{-ζ r} Y_{ℓ,m}(θ,ϕ),

each function being characterised by a set of quantum numbers n, ℓ, m and an exponent ζ; Y_{ℓ,m}(θ,ϕ) are real spherical harmonics, and N_n(ζ) is a normalisation factor. The full set of parameters to consider is thus p̱ = {α, c̱, κ, ζ}, where ζ stands for the set of exponents.
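For concreteness, a real Slater basis function of the form in Eq. (<ref>) can be evaluated as in the sketch below. The radial normalisation N_n(ζ) = (2ζ)^n √(2ζ/(2n)!) and the p_z spherical harmonic are standard choices shown here for illustration; the grid check of the norm is a crude placeholder.

```python
import numpy as np
from math import factorial, pi, sqrt

def sto_pz(x, y, z, n=2, zeta=1.0):
    """Real Slater function chi(r) = N_n(zeta) r^(n-1) exp(-zeta r) Y_{1,0},
    with Y_{1,0} = sqrt(3/(4 pi)) z/r (the p_z harmonic) and radial norm
    N_n(zeta) = (2 zeta)^n sqrt(2 zeta / (2n)!)."""
    r = np.sqrt(x**2 + y**2 + z**2)
    r = np.where(r < 1e-12, 1e-12, r)        # guard the single grid point at r = 0
    norm = (2.0 * zeta)**n * sqrt(2.0 * zeta / factorial(2 * n))
    return norm * r**(n - 1) * np.exp(-zeta * r) * sqrt(3.0 / (4.0 * pi)) * z / r

# Crude real-space grid check that <chi|chi> is close to 1 (placeholder values).
g = np.linspace(-8.0, 8.0, 81)
dx = g[1] - g[0]
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
chi = sto_pz(X, Y, Z)
print("approx <chi|chi> =", float((chi**2).sum() * dx**3))
```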
§.§ Linear optimisation method

The linear optimisation method <cit.> allows one to find the optimal parameters p̱ using an iterative procedure. At each iteration, we consider the intermediate-normalised wave function

|Ψ̄(p̱)⟩ = |Ψ(p̱)⟩ / ⟨Ψ_0|Ψ(p̱)⟩,

where |Ψ_0⟩ = |Ψ(p̱^0)⟩ is the wave function for the parameters p̱^0 at the current iteration (taken as normalised to unity, i.e. ⟨Ψ_0|Ψ_0⟩ = 1), and we expand it to linear order in the parameter variations Δp̱ = p̱ - p̱^0,

|Ψ̄_lin(p̱)⟩ = |Ψ_0⟩ + ∑_i Δp_i |Ψ̄_i⟩,

where |Ψ̄_i⟩ are the first-order derivatives of the wave function |Ψ̄(p̱)⟩,

|Ψ̄_i⟩ = (∂|Ψ̄(p̱)⟩/∂p_i)_{p̱=p̱^0} = |Ψ_i⟩ - ⟨Ψ_0|Ψ_i⟩ |Ψ_0⟩,

where |Ψ_i⟩ = (∂|Ψ(p̱)⟩/∂p_i)_{p̱=p̱^0} are the first-order derivatives of the original wave function |Ψ(p̱)⟩. Using the intermediate-normalised wave function has the advantage that the derivatives in Eq. (<ref>) are orthogonal to |Ψ_0⟩, i.e. ⟨Ψ_0|Ψ̄_i⟩ = 0. We then determine the parameter variations Δp̱ by minimising the corresponding energy,

E_lin = min_p̱ ⟨Ψ̄_lin(p̱)|Ĥ|Ψ̄_lin(p̱)⟩ / ⟨Ψ̄_lin(p̱)|Ψ̄_lin(p̱)⟩,

we update the parameters as p̱^0 → p̱^0 + Δp̱, and iterate until convergence. The minimisation in Eq. (<ref>) leads to the following generalised eigenvalue equation to be solved at each iteration:

[ E_0, g̱_R^T/2 ; g̱_L/2, H̱ ] [ 1 ; Δp̱ ] = E_lin [ 1, 0̱^T ; 0̱, S̱ ] [ 1 ; Δp̱ ],

where E_0 = ⟨Ψ_0|Ĥ|Ψ_0⟩ is the current energy, g_{L,i} = 2⟨Ψ̄_i|Ĥ|Ψ_0⟩ and g_{R,j} = 2⟨Ψ_0|Ĥ|Ψ̄_j⟩ are the left and right energy gradients (identical except on a finite Monte Carlo sample), H_ij = ⟨Ψ̄_i|Ĥ|Ψ̄_j⟩ is the Hamiltonian matrix in the basis of the first-order wave-function derivatives, and S_ij = ⟨Ψ̄_i|Ψ̄_j⟩ is the overlap matrix in this basis. Note that in Eq. (<ref>), 0̱ and 0̱^T stand for the zero column vector and the zero row vector, respectively.

§.§ Linear-response theory

Starting from the previously optimised wave function, we now introduce a time-dependent perturbation (e.g., the interaction with an electric field) in the Hamiltonian,

Ĥ(t) = Ĥ + γ V̂(t),

where γ is a coupling constant. The approximate ground-state wave function |Ψ(p̱(t))⟩ evolves in time through its parameters p̱(t), which now become generally complex. As before, it is convenient to introduce the intermediate-normalised wave function

|Ψ̄(p̱(t))⟩ = |Ψ(p̱(t))⟩ / ⟨Ψ_0|Ψ(p̱(t))⟩,

where |Ψ_0⟩ = |Ψ(p̱^0)⟩ is the wave function for the initial parameters p̱^0, again taken as normalised to unity (i.e., ⟨Ψ_0|Ψ_0⟩ = 1). At each time, the time-dependent parameters p̱(t) can be found from the Dirac-Frenkel variational principle (see, e.g., Ref. ),

∂/∂p_i^* [ ⟨Ψ̄(p̱(t))| Ĥ(t) - i ∂/∂t |Ψ̄(p̱(t))⟩ / ⟨Ψ̄(p̱(t))|Ψ̄(p̱(t))⟩ ] = 0.

To apply Eq. (<ref>) to linear order in γ, we start by expanding the wave function |Ψ̄(p̱(t))⟩ around p̱^0 to second order in the parameter variations Δp̱(t) = p̱(t) - p̱^0,

|Ψ̄(p̱(t))⟩ = |Ψ_0⟩ + ∑_i Δp_i(t) |Ψ̄_i⟩ + (1/2) ∑_{i,j} Δp_i(t) Δp_j(t) |Ψ̄_ij⟩ + ⋯,

where |Ψ̄_i⟩ are the first-order derivatives of |Ψ̄(p̱)⟩ already introduced in Eq. (<ref>), and |Ψ̄_ij⟩ are the second-order derivatives of the wave function |Ψ̄(p̱)⟩,

|Ψ̄_ij⟩ = (∂²|Ψ̄(p̱)⟩/∂p_i∂p_j)_{p̱=p̱^0} = |Ψ_ij⟩ - ⟨Ψ_0|Ψ_j⟩|Ψ_i⟩ - ⟨Ψ_0|Ψ_i⟩|Ψ_j⟩ + (2⟨Ψ_0|Ψ_i⟩⟨Ψ_0|Ψ_j⟩ - ⟨Ψ_0|Ψ_ij⟩)|Ψ_0⟩,

where |Ψ_ij⟩ = (∂²|Ψ(p̱)⟩/∂p_i∂p_j)_{p̱=p̱^0} are the second-order derivatives of the original wave function |Ψ(p̱)⟩. Again, the advantage of using the intermediate-normalised wave function is that the second-order derivatives are orthogonal to |Ψ_0⟩, i.e. ⟨Ψ_0|Ψ̄_ij⟩ = 0. Plugging Eq. (<ref>) into Eq. (<ref>) and keeping only first-order terms in Δp̱(t), in the limit of a vanishing perturbation (γ → 0), we find

A̱ Δp̱(t) + Ḇ Δp̱(t)^* = i S̱ ∂Δp̱(t)/∂t,

with the matrices A_ij = ⟨Ψ̄_i|Ĥ - E_0|Ψ̄_j⟩ = H_ij - E_0 S_ij, where E_0 is the ground-state energy, B_ij = ⟨Ψ̄_ij|Ĥ|Ψ_0⟩, and S_ij = ⟨Ψ̄_i|Ψ̄_j⟩.
If we look for free-oscillation solutions of the form

Δp̱(t) = X̱ e^{-iω_n t} + Y̱^* e^{iω_n t},

where ω_n corresponds to an excitation (or de-excitation) energy, we arrive at the linear-response equation in the form of a non-Hermitian generalised eigenvalue equation <cit.>:

[ A̱, Ḇ ; Ḇ^*, A̱^* ] [ X̱_n ; Y̱_n ] = ω_n [ S̱, 0̱ ; 0̱, -S̱^* ] [ X̱_n ; Y̱_n ].

The Tamm-Dancoff approximation (TDA) corresponds to neglecting the contributions from Ḇ, leading to

A̱ X̱_n = ω_n S̱ X̱_n.

At the ground-state minimum, i.e. when the energy gradient is zero, the generalised eigenvalue equation of the linear method in Eq. (<ref>) is equivalent to the TDA equation (<ref>), which directly gives excitation energies ω_n = E_lin - E_0.

Finally, the oscillator strength f_n for the transition from the ground state to the excited state n (with excitation energy ω_n) can be easily extracted from the response vector (X̱_n, Y̱_n),

f_n = (2/3) ω_n ∑_{α=x,y,z} [ (X̱_n + Y̱_n)^T μ^α ]²,

where μ^α is the vector containing the transition dipole moments for the component α (x, y, or z) between the ground-state wave function |Ψ_0⟩ and the wave-function derivatives |Ψ̄_i⟩,

μ^α_i = ⟨Ψ̄_i|μ̂^α|Ψ_0⟩,

and μ̂^α is the electronic dipole operator.

§.§ Realisation in VMC

We now give the expressions for performing linear-response calculations in VMC, referred to as LR-VMC, i.e. for calculating the expressions of Section <ref> in a VMC run. For convenience, we also recall the expressions necessary for the linear optimisation method. The current ground-state energy is calculated as

E_0 = ⟨ E_L(Ṟ) ⟩,

where E_L(Ṟ) = [ĤΨ_0(Ṟ)]/Ψ_0(Ṟ) is the local energy and ⟨...⟩ stands for an average over a finite Monte Carlo sample of points Ṟ_k distributed according to Ψ_0(Ṟ)², with Ṟ = (ṟ_1, ṟ_2, ..., ṟ_N) designating the electron coordinates. The left and right energy gradients are evaluated as (leaving the dependence on Ṟ implicit)

g_{L,i} = 2 ⟨ (Ψ̄_i/Ψ_0)(ĤΨ_0/Ψ_0) ⟩ = 2 [ ⟨ (Ψ_i/Ψ_0) E_L ⟩ - ⟨ Ψ_i/Ψ_0 ⟩⟨ E_L ⟩ ]

and

g_{R,j} = 2 ⟨ ĤΨ̄_j/Ψ_0 ⟩ = 2 [ ⟨ (Ψ_j/Ψ_0) E_L ⟩ - ⟨ Ψ_j/Ψ_0 ⟩⟨ E_L ⟩ + ⟨ E_{L,j} ⟩ ],

where E_{L,j}(Ṟ) is the first-order derivative of the local energy,

E_{L,j}(Ṟ) = ĤΨ_j(Ṟ)/Ψ_0(Ṟ) - (Ψ_j(Ṟ)/Ψ_0(Ṟ)) E_L(Ṟ).

Note that, in the limit of an infinite sample, ⟨E_{L,j}⟩ = 0 due to the hermiticity of the Hamiltonian, and therefore the left and right gradients become identical. The elements of the overlap matrix S̱ are calculated as

S_ij = ⟨ (Ψ̄_i/Ψ_0)(Ψ̄_j/Ψ_0) ⟩ = ⟨ (Ψ_i/Ψ_0)(Ψ_j/Ψ_0) ⟩ - ⟨ Ψ_i/Ψ_0 ⟩⟨ Ψ_j/Ψ_0 ⟩,

and the elements of the matrix H̱ are evaluated as

H_ij = ⟨ (Ψ̄_i/Ψ_0)(ĤΨ̄_j/Ψ_0) ⟩ = ⟨ (Ψ_i/Ψ_0)(Ψ_j/Ψ_0) E_L ⟩ - ⟨ Ψ_i/Ψ_0 ⟩⟨ (Ψ_j/Ψ_0) E_L ⟩ - ⟨ Ψ_j/Ψ_0 ⟩⟨ (Ψ_i/Ψ_0) E_L ⟩ + ⟨ (Ψ_i/Ψ_0) E_{L,j} ⟩ - ⟨ Ψ_i/Ψ_0 ⟩⟨ E_{L,j} ⟩ + ⟨ Ψ_i/Ψ_0 ⟩⟨ Ψ_j/Ψ_0 ⟩⟨ E_L ⟩.

The elements of the matrix A̱ are then given by

A_ij = H_ij - E_0 S_ij = ⟨ (Ψ_i/Ψ_0)(Ψ_j/Ψ_0) E_L ⟩ - ⟨ Ψ_i/Ψ_0 ⟩⟨ (Ψ_j/Ψ_0) E_L ⟩ - ⟨ Ψ_j/Ψ_0 ⟩⟨ (Ψ_i/Ψ_0) E_L ⟩ + ⟨ (Ψ_i/Ψ_0) E_{L,j} ⟩ - ⟨ Ψ_i/Ψ_0 ⟩⟨ E_{L,j} ⟩ - ⟨ (Ψ_i/Ψ_0)(Ψ_j/Ψ_0) ⟩⟨ E_L ⟩ + 2 ⟨ Ψ_i/Ψ_0 ⟩⟨ Ψ_j/Ψ_0 ⟩⟨ E_L ⟩,

and the elements of the matrix Ḇ are

B_ij = ⟨ (Ψ̄_ij/Ψ_0)(ĤΨ_0/Ψ_0) ⟩ = ⟨ (Ψ_ij/Ψ_0) E_L ⟩ - ⟨ Ψ_ij/Ψ_0 ⟩⟨ E_L ⟩ - ⟨ Ψ_i/Ψ_0 ⟩⟨ (Ψ_j/Ψ_0) E_L ⟩ - ⟨ Ψ_j/Ψ_0 ⟩⟨ (Ψ_i/Ψ_0) E_L ⟩ + 2 ⟨ Ψ_i/Ψ_0 ⟩⟨ Ψ_j/Ψ_0 ⟩⟨ E_L ⟩.

Finally, the expression of the transition dipole moments needed for calculating the oscillator strengths is

μ^α_i = ⟨ (Ψ̄_i/Ψ_0) μ^α ⟩ = ⟨ (Ψ_i/Ψ_0) μ^α ⟩ - ⟨ Ψ_i/Ψ_0 ⟩⟨ μ^α ⟩,

where μ^α(Ṟ) = -∑_{k=1}^N r_{k,α} is the α-component of the electronic dipole moment.
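To make the estimators above concrete, the following sketch assembles S̱ and A̱ from sampled quantities and solves the TDA generalised eigenvalue equation. The arrays are random placeholders standing in for Ψ_i/Ψ_0, E_L and E_{L,i} evaluated along a VMC walk, so only the bookkeeping, not the physics, is meaningful here.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(1)
M, P = 20000, 4   # Monte Carlo sample size and number of variational parameters

# Placeholder samples standing in for quantities accumulated along a VMC walk:
# ratios[i, k] ~ Psi_i(R_k)/Psi_0(R_k), eloc[k] ~ E_L(R_k),
# eloc_d[i, k] ~ E_{L,i}(R_k), the first-order derivative of the local energy.
ratios = rng.standard_normal((P, M))
eloc = -14.66 + 0.05 * rng.standard_normal(M)
eloc_d = 0.01 * rng.standard_normal((P, M))

def av(f):                                           # Monte Carlo average <...>
    return f.mean(axis=-1)

r_av = av(ratios)                                    # <Psi_i/Psi_0>
rr = av(ratios[:, None, :] * ratios[None, :, :])     # <(Psi_i/Psi_0)(Psi_j/Psi_0)>
rrE = av(ratios[:, None, :] * ratios[None, :, :] * eloc)
rE = av(ratios * eloc)                               # <(Psi_i/Psi_0) E_L>
rdE = av(ratios[:, None, :] * eloc_d[None, :, :])    # <(Psi_i/Psi_0) E_{L,j}>
E0 = eloc.mean()

S = rr - np.outer(r_av, r_av)
A = (rrE - np.outer(r_av, rE) - np.outer(rE, r_av)
     + rdE - np.outer(r_av, av(eloc_d))
     - rr * E0 + 2.0 * np.outer(r_av, r_av) * E0)

# TDA linear response: A x = omega S x. With noisy placeholder data the
# "excitation energies" are meaningless; the linear algebra is what matters.
omega, X = eig(A, S)
print(np.sort(omega.real))
```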
In the linear optimisation method, using the non-symmetric estimator of the matrix H̱ in Eq. (<ref>) instead of a symmetrised one has the advantage of leading to the strong zero-variance principle of Nightingale and Melik-Alaverdian <cit.>: in the limit where the current wave function |Ψ_0⟩ and its first-order derivatives |Ψ̄_i⟩ form a complete basis for expanding an exact eigenstate of the Hamiltonian, the parameter variations Δp̱ and the associated energy E_lin are found from Eq. (<ref>) with zero variance, provided that the Monte Carlo sample size is larger than the number of parameters (see the discussion in Ref. ). Unfortunately, this strong zero-variance principle does not apply when solving the linear-response equation (<ref>). However, in the limit where |Ψ_0⟩ is an exact eigenstate of the Hamiltonian, the left energy gradient g_{L,i} in Eq. (<ref>) vanishes with zero variance, and thus the TDA linear-response equation (<ref>) becomes equivalent to Eq. (<ref>) for calculating excited-state energies even on a finite Monte Carlo sample. Therefore, in this case, the strong zero-variance principle applies to the calculation of the response vectors X̱_n and excitation energies ω_n.

§.§ Computational details

The calculations shown here were performed using the QMC program CHAMP <cit.>, starting from Hartree-Fock calculations done with GAMESS <cit.>. Two Slater basis sets of different sizes were used, namely the VB1 and VB2 basis sets from Ref. . The VB1 basis set has five s and one p Slater functions ([5s,1p]), whereas the VB2 basis set has six s, two p, and one d Slater functions ([6s,2p,1d]). We use a flexible Jastrow factor consisting of the exponential of the sum of electron-nucleus, electron-electron and electron-electron-nucleus terms, written as systematic polynomial and Padé expansions <cit.>, with 4 electron-nucleus parameters, 5 electron-electron parameters and 15 electron-electron-nucleus parameters. For each VMC calculation, 10^4 blocks were employed with 10^4 steps each. One block was used for equilibration of the VMC distribution.

§ RESULTS

The beryllium atom was used as a first test of the LR-VMC approach, since accurate experimental reference values for the excitation energies are available from Ref. . An accurate description of the Be ground state requires a multiconfigurational wave function to account for the near-degeneracy between the 2s and 2p orbitals. However, for these preliminary tests, we present only results of calculations using a Jastrow-Slater single-determinant wave function for the ground state within TDA linear-response theory. This choice is motivated by the fact that a direct comparison between the LR-VMC/TDA method and configuration-interaction-singles (CIS) calculations represents a simple but essential first step for validating our approach. We expect LR-VMC/TDA to outperform CIS because the Jastrow factor in LR-VMC should account for a substantial part of the electronic correlation, and we find this to be the case for most of the excitations studied. The results are presented both as errors with respect to the experimental values in Figure <ref> and as detailed excitation energies in the subsequent tables.

In Table <ref>, results for the singlet 2s3s (^1S) state are reported. The effect of the Slater basis set adopted is dramatic at the CIS level, as reasonable agreement with the reference experimental value of 0.249 Hartree is found only when the VB2 basis set is used. LR-VMC/TDA values are labelled as follows: (j) designates the response of the Jastrow parameters only, while (j+o) is the response of both the Jastrow and orbital parameters.
The response of the Jastrow factor substantially improves upon the CIS VB1 estimate, going from 0.378 to 0.2888(1) Hartree. The excitation energy improves further when the response of the orbital parameters is included in the LR-VMC/TDA calculation, yielding an error of around 0.02 Hartree with respect to the experimental value. On increasing the size of the Slater basis set, i.e., moving from VB1 to VB2, we obtain a fair agreement with the experimental data when both the Jastrow and the orbital parameters are included in the response (0.2378(2) Hartree).

The singlet 2s4s (^1S) excitation is higher in energy, and CIS fails to recover the experimental result of 0.297 Hartree for both basis sets, as shown in Table <ref>. As already mentioned for the 2s3s excitation, the response of the Jastrow factor plays an important role for the VB1 basis set, reducing the error in the excitation energy by around 2 Hartree. Including the orbital parameters in the response lowers the excitation energy further to 0.5578(2) Hartree, but this is still a large overestimate of the experimental value. With the VB2 basis, the LR-VMC/TDA(j+o) calculation outperforms CIS, but a substantial error (>0.02 Hartree) still remains for this high-lying excitation. The failure of VB1 and, to a lesser extent, of VB2 is likely due to the poor description of the 4s orbital.

The extension of our proposed approach to P excitations is straightforward, requiring only a relaxation of the spatial symmetry constraints on the orbital rotation parameters. Note that the Jastrow factor employed in this work only depends on interparticle distances, i.e., it has spherical symmetry, and therefore excited states with P symmetry cannot be represented by the wave-function derivatives with respect to the Jastrow parameters. For this reason, only results concerning the response of the orbitals (o) are reported for the P excitations. In Table <ref>, results for the singlet 2s2p (^1P) state are given, which is the lowest-energy excitation in the beryllium atom. The CIS calculations with the VB1 and VB2 basis sets show a fair agreement with the reference value of 0.194 Hartree, the CIS calculation using the VB2 basis set being only 5 mHartree below it. The LR-VMC/TDA(o) estimate is also close to the experimental reference when the VB2 basis set is employed (0.1873(2) Hartree), while for the VB1 basis set LR-VMC/TDA(o) greatly overestimates the excitation energy.

Similarly, our implementation of linear response allows us to easily compute triplet excitations by considering triplet orbital rotation parameters. The CIS calculation underestimates the correct excitation energy by more than 30 mHartree, while the LR-VMC/TDA(o) excitation energies are very close to the reference value of 0.100 Hartree. The basis set effects are small in this case. Finally, we computed the oscillator strength f (Table <ref>) corresponding to the singlet 2s2p (^1P) excitation, which is nonzero according to selection rules. The LR-VMC(o) oscillator strengths seem more sensitive to the basis set than the CIS oscillator strengths. Moreover, the inclusion of the Jastrow factor does not improve the oscillator strength.

§ CONCLUSIONS AND PERSPECTIVES

In this work we have presented a formulation of time-dependent linear-response theory in the VMC framework using a Jastrow-Slater wave function.
Compared to state-specific or state-average excited-state QMC methods, the advantage of this LR-VMC approach is that, after optimizing only one ground-state wave function, one can easily calculate several excitation energies of different spatial or spin symmetry. Compared to similar linear-response quantum chemistry methods, the presence of the Jastrow factor in LR-VMC allows one to explicitly treat a part of the dynamical correlation. A disadvantage of the method is that the excitation energies are much more sensitive than the ground-state energy to the quality of the optimized ground-state wave function. This is true in other linear-response quantum-chemistry methods as well, but is a bigger drawback in a method that employs stochastic optimization.

Using a Jastrow-Slater single-determinant wave function and the TDA, the LR-VMC method was shown to be more accurate than CIS for most of the excitation energies of the beryllium atom that were studied. The LR-VMC approach thus seems a promising method for calculating electronic excitation energies. In the near future, a systematic study on a set of molecules will be an essential step to further validate the proposed methodology, together with calculations using the full response equation beyond the TDA. Also, we will explore using multideterminant wave functions, larger basis sets, and including the wave-function derivatives with respect to the exponents of the Slater functions.

§ ACKNOWLEDGMENTS

EC thanks the University of L'Aquila for financial support and the Laboratoire de Chimie Théorique for computational resources. MO and CJU were supported in part by NSF grant ACI-1534965.

bk:hammond B. L. Hammond, W. A. Lester, Jr., and P. J. Reynolds, Monte Carlo Methods in Ab-Initio Quantum Chemistry. World Scientific, 1994.
fou+01rmp W. M. C. Foulkes, L. Mitas, R. J. Needs, and G. Rajagopal, “Quantum Monte Carlo simulations of solids,” Rev. Mod. Phys., vol. 73, pp. 33–83, 2001.
tou16 J. Toulouse, R. Assaraf, and C. J. Umrigar, “Introduction to the variational and diffusion Monte Carlo methods,” Adv. Quantum Chem., vol. 73, pp. 285–314, 2016.
bres98 D. Bressanini and P. J. Reynolds, “Between classical and quantum Monte Carlo methods: ‘variational’ QMC,” Advances in Chemical Physics, Monte Carlo Methods in Chemical Physics, vol. 105, pp. 5345–5350, 1998.
Coccia:2012kz E. Coccia and L. Guidoni, “Quantum Monte Carlo study of the retinal minimal model C_5H_6NH_2^+,” J. Comput. Chem., vol. 33, pp. 2332–2339, 2012.
Coccia14 E. Coccia, D. Varsano, and L. Guidoni, “Ab initio geometry and bright excitation of carotenoids: Quantum Monte Carlo and many body Green's function theory calculations on peridinin,” J. Chem. Theory Comput., vol. 10, pp. 501–506, 2014.
dru+04prb N. D. Drummond, M. D. Towler, and R. J. Needs, “Jastrow correlation factor for atoms, molecules, and solids,” Phys. Rev. B, vol. 70, p. 235119, 2004.
pet12 F. R. Petruzielo, J. Toulouse, and C. J. Umrigar, “Approaching chemical accuracy with quantum Monte Carlo,” J. Chem. Phys., vol. 136, p. 124116, 2012.
cas+03jcp M. Casula and S. Sorella, “Geminal wave function with Jastrow correlation: A first application to atoms,” J. Chem. Phys., vol. 119, p. 6500, 2003.
cas+04jcp M. Casula, C. Attaccalite, and S. Sorella, “Correlated geminal wave function for molecules: An efficient resonating valence bond approach,” J. Chem. Phys., vol. 121, p. 7110, 2004.
zen14 A. Zen, E. Coccia, Y. Luo, S. Sorella, and L.
Guidoni, “Static and dynamical correlation in diradical molecules by quantum Monte Carlo using the Jastrow antisymmetrized geminal power ansatz,” J. Chem. Theory Comput., vol. 10, pp. 1048–1061, 2014.
tou07 J. Toulouse and C. J. Umrigar, “Optimization of quantum Monte Carlo wave functions by energy minimization,” J. Chem. Phys., vol. 126, p. 084102, 2007.
umr+07prl C. J. Umrigar, J. Toulouse, C. Filippi, S. Sorella, and R. G. Hennig, “Alleviation of the fermion-sign problem by optimization of many-body wave functions,” Phys. Rev. Lett., vol. 98, p. 110201, 2007.
tou08 J. Toulouse and C. J. Umrigar, “Full optimization of Jastrow-Slater wave functions with application to the first-row atoms and homonuclear diatomic molecules,” J. Chem. Phys., vol. 128, p. 174101, 2008.
zim09 P. M. Zimmerman, J. Toulouse, Z. Zhang, C. B. Musgrave, and C. J. Umrigar, “Excited states of methylene from quantum Monte Carlo,” J. Chem. Phys., vol. 131, p. 124103, 2009.
fili10 O. Valsson and C. Filippi, “Photoisomerization of model retinal chromophores: Insight from quantum Monte Carlo and multiconfigurational perturbation theory,” J. Chem. Theory Comput., vol. 6, p. 1275, 2010.
filippi2011bathochromic C. Filippi, F. Buda, L. Guidoni, and A. Sinicropi, “Bathochromic shift in green fluorescent protein: a puzzle for QM/MM approaches,” J. Chem. Theory Comput., vol. 8, no. 1, pp. 112–124, 2011.
val12 O. Valsson, C. Angeli, and C. Filippi, “Excitation energies of retinal chromophores: Critical role of the structural model,” Phys. Chem. Chem. Phys., vol. 14, p. 11015, 2012.
val13 O. Valsson, P. Campomanes, I. Tavernelli, U. Rothlisberger, and C. Filippi, “Rhodopsin absorption from first principles: Bypassing common pitfalls,” J. Chem. Theory Comput., vol. 9, p. 2441, 2013.
tou12 J. Toulouse, M. Caffarel, P. Reinhardt, P. E. Hoggan, and C. J. Umrigar, “Quantum Monte Carlo calculations of electronic excitation energies: The case of the singlet n→π* (CO) transition in acrolein,” in Advances in the Theory of Quantum Systems in Chemistry and Physics, Progress in Theoretical Chemistry and Physics, vol. 22, pp. 345–353, 2012.
flo14 F. M. Floris, C. Filippi, and C. Amovilli, “Electronic excitations in a dielectric continuum solvent with quantum Monte Carlo: Acrolein in water,” J. Chem. Phys., vol. 140, p. 034109, 2014.
zen+15jctc A. Zen, E. Coccia, S. Gozem, M. Olivucci, and L. Guidoni, “Quantum Monte Carlo treatment of the charge transfer and diradical electronic character in a retinal chromophore minimal model,” J. Chem. Theory Comput., vol. 11, pp. 992–1005, 2015.
dup15 N. Dupuy, S. Bouaouli, F. Mauri, S. Sorella, and M. Casula, “Vertical and adiabatic excitations in anthracene from quantum Monte Carlo: Constrained energy minimisation for structural and electronic excited-state properties in the JAGP ansatz,” J. Chem. Phys., vol. 142, p. 214109, 2015.
zha16a L. Zhao and E. Neuscamman, “Equation of motion theory for excited states in variational Monte Carlo and the Jastrow antisymmetric geminal power in Hilbert space,” J. Chem. Theory Comput., vol. 12, pp. 3719–3726, 2016.
zha16b L. Zhao and E. Neuscamman, “An efficient variational principle for the direct optimisation of excited states,” J. Chem. Theory Comput., vol. 12, pp. 3436–3440, 2016.
neu16 E. Neuscamman, “Variation after response in quantum Monte Carlo,” J. Chem. Phys., vol. 145, p. 081103, 2016.
bk:mc R. McWeeny, Methods of Molecular Quantum Mechanics. Academic Press, 1992.
zero M. P. Nightingale and V.
Melik-Alaverdian, “Optimisation of ground- and excited-state wave functions and van der Waals clusters,” Phys. Rev. Lett., vol. 87, p. 043401, 2001.
CHAMP CHAMP, a quantum Monte Carlo program written by C. J. Umrigar, C. Filippi and J. Toulouse; see http://www.physics.cornell.edu/cyrus/champ.html.
SchBalBoaElbGorJenKosMatNguSuWinDupMon-JCC-93 M. W. Schmidt, K. K. Baldridge, J. A. Boatz, S. T. Elbert, M. S. Gordon, J. H. Jensen, S. Koseki, N. Matsunaga, K. A. Nguyen, S. J. Su, T. L. Windus, M. Dupuis, and J. A. Montgomery, “General atomic and molecular electronic structure system,” J. Comput. Chem., vol. 14, p. 1347, 1993.
ema03 I. Ema, J. M. G. de la Vega, G. Ramirez, R. Lopez, J. F. Rico, H. Meissner, and J. Paldus, “Polarized basis sets of Slater-type orbitals: H to Ne atoms,” J. Comput. Chem., vol. 24, pp. 859–868, 2003.
Umr-UNP-XX C. J. Umrigar, unpublished.
FilUmr-JCP-96 C. Filippi and C. J. Umrigar, “Multiconfiguration wave functions for quantum Monte Carlo calculations of first-row diatomic molecules,” J. Chem. Phys., vol. 105, p. 213, 1996.
GucSanUmrJai-PRB-05 A. D. Güçlü, G. S. Jeon, C. J. Umrigar, and J. K. Jain, “Quantum Monte Carlo study of composite fermions in quantum dots: The effect of Landau-level mixing,” Phys. Rev. B, vol. 72, p. 205327, 2005.
kra97 A. Kramida and W. C. Martin, “A compilation of energy levels and wavelengths for the spectrum of neutral beryllium (Be I),” J. Phys. Chem. Ref. Data, vol. 26, p. 1185, 1997.
SchKoc-PRA-00 R. Schnabel and M. Kock, “f-value measurement of the Be I resonance line using a nonlinear time-resolved laser-induced-fluorescence technique,” Phys. Rev. A, vol. 61, p. 062506, 2000.
http://arxiv.org/abs/1705.09813v1
{ "authors": [ "Bastien Mussard", "Emanuele Coccia", "Roland Assaraf", "Matt Otten", "C. J. Umrigar", "Julien Toulouse" ], "categories": [ "physics.chem-ph", "physics.comp-ph" ], "primary_category": "physics.chem-ph", "published": "20170527120954", "title": "Time-dependent linear-response variational Monte Carlo" }
The Indoor Mobile Coverage Problem Using UAVs
Hazim Shakhatreh, Abdallah Khreishah, and Issa Khalil
Hazim Shakhatreh and Abdallah Khreishah are with the Department of Electrical and Computer Engineering, New Jersey Institute of Technology (email: {hms35,abdallah}@njit.edu). Issa Khalil is with Qatar Computing Research Institute (QCRI), HBKU, Doha, Qatar (email: [email protected]). Part of this work was presented in ICC 2017 <cit.> and ICICS 2017 <cit.>.
December 30, 2023
=====================================================================================================================================================================================================================================================================================================================================================================================================================================

Unmanned aerial vehicles (UAVs) can be used as aerial wireless base stations when cellular networks are not operational due to natural disasters. They can also be used to supplement the ground base station in order to provide better coverage and higher data rates for the users. Prior studies on UAV-based wireless coverage typically consider an Air-to-Ground path loss model, which assumes that the users are outdoor and located on a 2D plane. In this paper, we propose using UAVs to provide wireless coverage for indoor users inside a high-rise building. First, we present realistic Outdoor-Indoor path loss models and describe the tradeoff introduced by these models. Then we study the problem of efficient placement of a single UAV, where the objective is to minimize the total transmit power required to cover the entire high-rise building. The formulated problem is non-convex and is generally difficult to solve. To that end, we consider three cases of practical interest and provide efficient solutions to the formulated problem under these cases. Then we study the problem of minimizing the number of UAVs required to provide wireless coverage to high-rise buildings and prove that this problem is NP-complete. Due to the intractability of the problem, we use clustering to minimize the number of UAVs required to cover the indoor users. We demonstrate through simulations that the method that clusters the building into regular structures and places the UAVs in each cluster requires 80% more UAVs than our clustering algorithm. Unmanned aerial vehicles, Outdoor-to-Indoor path loss model, gradient descent algorithm, particle swarm optimization, k-means clustering.

§ INTRODUCTION

UAVs can be used to provide wireless coverage during emergency cases where each UAV serves as an aerial wireless base station when the cellular network goes down <cit.>. They can also be used to supplement the ground base station in order to provide better coverage and higher data rates for the users <cit.>. In order to use a UAV as an aerial wireless base station, the authors in <cit.> presented an Air-to-Ground path loss model that has helped researchers formulate many important UAV-based coverage problems. The authors in <cit.> utilized this model to evaluate the impact of a UAV altitude on the downlink ground coverage and to determine the optimal values for altitude which lead to maximum coverage and minimum required transmit power. In <cit.>, the authors used the path loss model to propose a power-efficient deployment for UAVs under the constraint of satisfying the rate requirement for all ground users.
The authors in <cit.> utilized the path loss model to study the optimal deployment of multiple UAVs equipped with directional antennas, using circle packing theory. The 3D locations of the UAVs are determined in a way that the total coverage area is maximized. In <cit.>, the authors used the path loss model to find the minimum number of UAVs and their 3D locations so that all outdoor ground users are served. However, it is assumed that all users are outdoor and that the location of each user can be represented by an outdoor 2D point. These assumptions limit the applicability of this model when one needs to consider indoor users.

Providing good wireless coverage for indoor users is very important. According to an Ericsson report <cit.>, 90% of the time people are indoors, and 80% of the mobile Internet access traffic also happens indoors <cit.>. To guarantee wireless coverage, service providers are faced with several key challenges, including providing service to a large number of indoor users and the ping-pong effect due to interference from nearby macro cells <cit.>. In this paper, we propose using UAVs to provide wireless coverage for users inside a high-rise building during emergency cases and special events (such as concerts, indoor sporting events, etc.), when the cellular network service is not available or is unable to serve all indoor users. To the best of our knowledge, this is the first work that proposes using UAVs to provide wireless coverage for indoor users. We summarize our main contributions as follows:

* We utilize an Outdoor-Indoor path loss model for the low-SHF band (450 MHz to 6 GHz) <cit.>, certified by the ITU, and an Outdoor-Indoor path loss model for the high-SHF band (over 6 GHz) <cit.>, then we show the tradeoff introduced by these models.
* We formulate the problem of efficient placement of a single UAV, where the objective is to minimize the total transmit power required to cover the entire high-rise building.
* Since the formulated problem is non-convex and is generally difficult to solve, we consider three cases of practical interest and provide efficient solutions to the formulated problem under these cases and for different operating frequencies (low-SHF and high-SHF bands). In the first case, we aim to find the minimum transmit power such that an indoor user with the maximum path loss can be covered. In the second case, we assume that the locations of indoor users are symmetric across the dimensions of each floor (such as office buildings or hotels), and propose a gradient descent algorithm for finding an efficient location of a UAV. In the third case, we assume that the locations of indoor users are uniformly distributed in each floor, and propose a particle swarm optimization algorithm to find an efficient 3D placement of a UAV that tries to minimize the total transmit power required to cover the indoor users.
* Due to the limited transmit power of a UAV, we formulate the problem of minimizing the number of UAVs required to provide wireless coverage to a high-rise building and prove that this problem is NP-complete.
* Due to the intractability of the problem, we use clustering to minimize the number of UAVs required to cover indoor users. We demonstrate through simulations that the method that clusters the building into regular structures and places the UAVs in each cluster requires 80% more UAVs than our clustering algorithm.

§ SYSTEM MODEL

§.§ System Settings

Let (x_UAV,y_UAV,z_UAV) denote the 3D location of the UAV.
We assume that all users are located inside a high-rise building as shown in Figure <ref>, and use (x_i,y_i,z_i) to denote the location of user i. The dimensions of the high-rise building, in the shape of a rectangular prism, are [0,x_b] × [0,y_b] × [0,z_b]. Also, let d_out,i be the distance between the UAV and indoor user i, let θ_i be the incident angle, and let d_in,i be the distance between the building wall and indoor user i.

§.§ Outdoor-Indoor Path Loss Models

The Air-to-Ground path loss model presented in <cit.> is not appropriate when we consider wireless coverage for indoor users, because this model assumes that all users are outdoor and located at 2D points. In this paper, we adopt the Outdoor-Indoor path loss model, certified by the ITU <cit.>, for the low-SHF operating frequency. The path loss is given as follows:

L_i = L_F + L_B + L_I = (w log_10 d_out,i + w log_10 f_Ghz + g_1) + (g_2 + g_3(1-cosθ_i)^2) + (g_4 d_in,i),

where L_F is the free space path loss, L_B is the building penetration loss, and L_I is the indoor loss. In this model, we also have w=20, g_1=32.4, g_2=14, g_3=15, g_4=0.5 <cit.>, and f_Ghz is the carrier frequency. In <cit.>, the authors clarify the Outdoor-to-Indoor path loss characteristics based on measurements for the 0.8 to 37 GHz frequency band. We adopt this path loss model for the high-SHF operating frequency. The path loss is given as follows:

L_i = L_F + L_B + L_I = (α_1 + α_2 log_10 d_out,i + α_3 log_10 f_Ghz) + (β_1 + (β_2-β_1)/(1+exp(-β_3(θ_i-β_4)))) + (γ_1 d_in,i).

In this model, we have α_1=31.4, α_2=20, α_3=21.5, β_1=6.8, β_2=21.8, β_3=0.453, β_4=19.7 and γ_1=0.49. Note that there is a key tradeoff in the path loss models when the horizontal distance between the UAV and a user changes. When this horizontal distance increases, the free space path loss (i.e., L_F) increases as d_out,i increases, while the building penetration loss (i.e., L_B) decreases as the incident angle (i.e., θ_i) decreases (Figure <ref> shows the penetration loss for the high-SHF band). Similarly, when this horizontal distance decreases, the free space path loss (i.e., L_F) decreases as d_out,i decreases, while the building penetration loss (i.e., L_B) increases as the incident angle (i.e., θ_i) increases.

§ PROVIDING WIRELESS COVERAGE USING A SINGLE UAV

§.§ Problem Formulation

Consider a transmission between a UAV located at (x_UAV,y_UAV,z_UAV) and an indoor user i located at (x_i,y_i,z_i). The rate for user i is given by:

C_i = B log_2(1 + P_t,i/(L_i N)),

where B is the transmission bandwidth of the UAV, P_t,i is the UAV transmit power to indoor user i, L_i is the path loss between the UAV and indoor user i, and N is the noise power. In this paper, we do not explicitly model interference; instead, we implicitly model it as noise. Let us assume that each indoor user has a channel with bandwidth equal to B/M, where M is the number of users inside the building, and that the rate requirement for each user is v. Then the minimum power required to satisfy this rate for each user is given by:

P_t,i,min = (2^vM/B - 1)·N·L_i.

Our goal is to find the optimal location of the UAV such that the total transmit power required to satisfy the downlink rate requirement of each indoor user is minimized. The objective function can be represented as:

P = ∑_i=1^M (2^vM/B - 1)·N·L_i,

where P is the UAV total transmit power.
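As a concrete illustration of the two path loss models and of the per-user minimum power above, the following Python sketch evaluates them numerically. The function names and the default noise power are our own choices, the model constants are those quoted in the text, and the incident angle is taken in degrees; this is an illustrative sketch, not the authors' code.

import numpy as np

def path_loss_low_shf(d_out, theta_deg, d_in, f_ghz):
    """Low-SHF Outdoor-to-Indoor path loss (dB):
    free-space + building-penetration + indoor losses."""
    w, g1, g2, g3, g4 = 20.0, 32.4, 14.0, 15.0, 0.5
    theta = np.radians(theta_deg)
    L_F = w * np.log10(d_out) + w * np.log10(f_ghz) + g1
    L_B = g2 + g3 * (1.0 - np.cos(theta)) ** 2
    return L_F + L_B + g4 * d_in

def path_loss_high_shf(d_out, theta_deg, d_in, f_ghz):
    """High-SHF Outdoor-to-Indoor path loss (dB); the logistic
    building-penetration term takes the incident angle in degrees."""
    a1, a2, a3 = 31.4, 20.0, 21.5
    b1, b2, b3, b4, g1 = 6.8, 21.8, 0.453, 19.7, 0.49
    L_F = a1 + a2 * np.log10(d_out) + a3 * np.log10(f_ghz)
    L_B = b1 + (b2 - b1) / (1.0 + np.exp(-b3 * (theta_deg - b4)))
    return L_F + L_B + g1 * d_in

def min_power_dbm(L_db, rate_bps, n_users, bandwidth_hz, noise_dbm=-120.0):
    """Minimum transmit power (dBm) for one user at path loss L_db,
    with rate requirement rate_bps on a bandwidth_hz/n_users channel."""
    snr_lin = 2.0 ** (rate_bps * n_users / bandwidth_hz) - 1.0
    return 10.0 * np.log10(snr_lin) + noise_dbm + L_db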
Since (2^vM/B - 1)·N is constant, our problem can be formulated as:

min_{x_UAV,y_UAV,z_UAV} L_Total = ∑_i=1^M L_i
subject to
x_min ≤ x_UAV ≤ x_max,
y_min ≤ y_UAV ≤ y_max,
z_min ≤ z_UAV ≤ z_max,
L_Total ≤ L_max.

Here, the first three constraints represent the minimum and maximum allowed values for x_UAV, y_UAV and z_UAV. In the fourth constraint, L_max is the maximum allowable path loss and equals P_t,max/((2^vM/B - 1)·N), where P_t,max is the maximum transmit power of the UAV. Finding the optimal placement of the UAV is generally difficult because the problem is non-convex. Therefore, in the next subsection, we consider three special cases of practical interest and derive efficient solutions under these cases.

§.§ Efficient Placement of a Single UAV

Case 1. The worst location in the building: In this case, we find the minimum transmit power required to cover the building based on the location that has the maximum path loss inside the building, i.e., the location with maximum d_out,i, maximum θ_i, and maximum d_in,i. Such locations are the corners of the highest and lowest floors; we therefore place the UAV at the middle of the building (y_UAV = 0.5y_b and z_UAV = 0.5z_b). Then, given the Outdoor-to-Indoor path loss models for the low-SHF and high-SHF bands, we need to find an efficient horizontal point x_UAV for the UAV such that the total transmit power required to cover the building is minimized. Now, when the horizontal distance between the UAV and this location increases, the free space path loss also increases as d_out,i increases, while the building penetration loss decreases because we decrease the incident angle θ_i. In Figure <ref>, we demonstrate the minimum transmit power required to cover a building of different heights, where the minimum transmit power required to cover the building is given by:

P_t,min(dB) = P_r,th + L_i,
P_r,th(dB) = N + γ_th.

Here, P_r,th is the minimum received power, N is the noise power (equal to -120 dBm), γ_th is the threshold SNR (equal to 10 dB), y_b=50 meters, x_b=20 meters, and the carrier frequency is 2 GHz. The numerical results show that there is an optimal horizontal point that minimizes the total transmit power required to cover a building. Also, we note that when the height of the building increases, the optimal horizontal distance increases. This is to compensate for the increased building penetration loss due to an increased incident angle. In Theorem 1, we characterize the optimal incident angle θ for the low-SHF band that minimizes the transmit power required to cover the building. This helps us find the optimal horizontal distance between the UAV and the building.

Theorem 1. For the low-SHF operating frequency case, when we place the UAV at the middle of the building, the optimal incident angle θ that minimizes the transmit power required to cover the building equals 48.654^o, and the optimal horizontal distance between the UAV and the building equals ((0.5z_b·tan(48.654^o))^2 - (0.5y_b)^2)^0.5 - x_b.
In order to find the optimal horizontal point, we rewrite the equation that represents the path loss in terms of the incident angle (θ_i) and the altitude difference between the UAV and the user i (Δh_i):

L_i(Δh_i, θ_i) = w log_10(Δh_i/sinθ_i) + w log_10 f_Ghz + g_1 + g_2 + g_3(1-cosθ_i)^2 + g_4 d_in,i.

We know that the altitude difference between the UAV and the location that has the maximum path loss is constant for a given building. Now, when we take the first derivative with respect to θ and set it to zero, we get:

dL(θ)/dθ = (w/ln 10)·(-Δh·cosθ/sin^2θ)·(sinθ/Δh) + 2g_3·sinθ(1-cosθ) = 0
⇒ -(w/ln 10)·(cosθ/sinθ) + 2g_3·sinθ(1-cosθ) = 0
⇒ (w/ln 10)·cosθ = 2g_3·sin^2θ(1-cosθ)
⇒ (w/ln 10)·cosθ = 2g_3(1-cos^2θ)(1-cosθ)
⇒ 2g_3·cos^3θ - 2g_3·cos^2θ - (w/ln 10 + 2g_3)cosθ + 2g_3 = 0.

To prove that the function is convex, we take the second derivative and get:

d^2L/dθ^2 = (w/ln 10)·(1/sin^2θ) + 2g_3·cosθ(1-cosθ) + 2g_3·sin^2θ > 0    for 0 < θ ≤ 90^o.

Eq. (<ref>) has only one valid solution, which is cosθ = 0.6606. Therefore, the optimal incident angle between the UAV and the location that has the maximum path loss inside the building is 48.654^o. In order to find the optimal horizontal distance between the UAV and the building, we apply the Pythagorean theorem. This gives us:

d_H = ((0.5z_b·tan(48.654^o))^2 - (0.5y_b)^2)^0.5.

Therefore, the optimal horizontal distance between the UAV and the building is given by:

d_opt = ((0.5z_b·tan(48.654^o))^2 - (0.5y_b)^2)^0.5 - x_b.

In Figure <ref>, we demonstrate the transmit power required to cover the building as a function of the incident angle; we notice that the optimal angle that we characterize in Theorem 1 gives us the minimum transmit power. Now, we find an efficient incident angle θ for the high-SHF band that minimizes the transmit power required to cover the building. In order to find an efficient angle, we rewrite the equation that represents the path loss in terms of the incident angle (θ) and the altitude difference between the UAV and the location that has the maximum path loss inside the building (Δh):

L(Δh, θ) = (α_1 + α_2 log_10(Δh/sinθ) + α_3 log_10 f_Ghz) + (β_1 + (β_2-β_1)/(1+exp(-β_3(θ-β_4)))) + (γ_1 d_in,i).

By numerically plotting the transmit power required to cover the location that has the maximum path loss inside the building (see Figure <ref> and Figure <ref>), where y_b=50 meters and x_b=20 meters, we show that for different building heights and different operating frequencies there exists only one global minimum value. As can be seen from the figures, to provide wireless coverage to small buildings, the UAV transmit power must be very high, due to the high free space path loss; this demonstrates the need for multiple UAVs to cover a high-rise building when we use the high-SHF operating frequency. To find an efficient incident angle that could give us the global minimum value, we use the ternary search algorithm. A ternary search algorithm is a method for finding the minimum of a unimodal function; it iteratively splits the domain into three separate regions and discards the one that cannot contain the minimum. The pseudo code of this algorithm is shown in Algorithm 1. From our numerical results, we found that the angle that minimizes the power is always 15^o. This is because the building penetration loss is minimized at this angle (see Figure <ref>). Angles less than 15^o would also give minimum building penetration loss, but the free space path loss increases as the incident angle θ_i decreases.
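The text gives the pseudo code in Algorithm 1; a minimal Python sketch of the same ternary search follows. It reuses the hypothetical path_loss_high_shf helper from the earlier sketch, and the example constants (altitude difference, indoor distance, carrier frequency) are ours, chosen only for illustration.

import numpy as np

def ternary_search(f, lo, hi, tol=1e-6):
    """Minimize a unimodal function f on [lo, hi] by repeatedly
    discarding the third of the interval that cannot hold the minimum."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2          # the minimum cannot lie in (m2, hi]
        else:
            lo = m1          # the minimum cannot lie in [lo, m1)
    return 0.5 * (lo + hi)

# Efficient incident angle (degrees) for the high-SHF model, at a fixed
# altitude difference dh between the UAV and the worst-case location.
dh, d_in, f_ghz = 12.5, 10.0, 15.0   # hypothetical example values
loss = lambda t: path_loss_high_shf(dh / np.sin(np.radians(t)), t, d_in, f_ghz)
theta_star = ternary_search(loss, 1.0, 89.0)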
Note that for the high-SHF case the incident angle that results in the minimum path loss is smaller than that for the low-SHF case. This is due to the fact that the building penetration loss at a high operating frequency is higher than that at a low operating frequency.

Case 2. The locations of indoor users are symmetric across the xy and xz planes: In this case, we assume that the locations of indoor users are symmetric across the xy-plane ((0,0,0.5z_b), (x_b,0,0.5z_b), (x_b,y_b,0.5z_b), (0,y_b,0.5z_b)) and the xz-plane ((0,0.5y_b,0), (x_b,0.5y_b,0), (x_b,0.5y_b,z_b), (0,0.5y_b,z_b)). First, we prove that z_UAV=0.5z_b and y_UAV=0.5y_b when the locations of indoor users are symmetric across the xy and xz planes and the operating frequency is low-SHF (Theorem 2) or high-SHF (Theorem 3). Then we use the gradient descent algorithm to find an efficient x_UAV that minimizes the transmit power required to cover the building.

Theorem 2. For the low-SHF operating frequency case, when the locations of indoor users are symmetric across the xy and xz planes, the optimal (y_UAV,z_UAV) that minimizes the power required to cover the indoor users equals (0.5y_b,0.5z_b).

The proof is presented in Appendix A. The question now is how to find an efficient horizontal point x_UAV that minimizes the total transmit power. In order to find this point, we use the gradient descent algorithm <cit.>:

x_UAV,n+1 = x_UAV,n - a·(dL_Total/dx_UAV)|_{x_UAV = x_UAV,n},

where a is the step size and

dL_Total/dx_UAV = ∑_i=1^M { (w/ln 10)·[-(x_i-x_UAV)]/d_out,i^2 + 2g_3·[1 - ((x_i-x_UAV)^2+(y_i-y_UAV)^2)^0.5/d_out,i]·[((x_i-x_UAV)·d_out,i·((x_i-x_UAV)^2+(y_i-y_UAV)^2)^-0.5 - ((x_i-x_UAV)^2+(y_i-y_UAV)^2)^0.5·(x_i-x_UAV)·d_out,i^-1)/d_out,i^2] },

with d_out,i = ((x_i-x_UAV)^2+(y_i-y_UAV)^2+(z_i-z_UAV)^2)^0.5. The pseudo code of this algorithm is shown in Algorithm 2.

Now, we prove that z_UAV=0.5z_b and y_UAV=0.5y_b when the locations of indoor users are symmetric across the xy and xz planes and the operating frequency is high-SHF.

Theorem 3. For the high-SHF operating frequency case, when the locations of indoor users are symmetric across the xy and xz planes, the optimal (y_UAV,z_UAV) that minimizes the power required to cover the indoor users equals (0.5y_b,0.5z_b).

The proof is presented in Appendix B. To find an efficient horizontal point x_UAV that minimizes the total transmit power, we use the gradient descent algorithm, where:

dL_Total/dx_UAV = ∑_i=1^M { (α_2/ln 10)·(x_UAV-x_i)/d_out,i^2 - (β_2-β_1)·(-β_3/√(1-u^2))·(-(z_UAV-z_i)(x_UAV-x_i)/d_out,i^3)·exp(-β_3(sin^-1u-β_4))/[1+exp(-β_3(sin^-1u-β_4))]^2 },

with d_out,i = ((x_i-x_UAV)^2+(y_i-y_UAV)^2+(z_i-z_UAV)^2)^0.5 and u = (z_UAV-z_i)/d_out,i.

Case 3. The locations of indoor users are uniformly distributed in each floor: In this case, we propose particle swarm optimization (PSO) <cit.> to find an efficient 3D placement of the UAV, when the locations of indoor users are uniformly distributed in each floor. The particle swarm optimization algorithm starts with npop random solutions and iteratively tries to improve the candidate solutions based on the best experience of each candidate (particle(i).best.location) and the best global experience (globalbest.location). In each iteration, the best location for each particle (particle(i).best.location) and the best global location (globalbest.location) are updated, and the velocities and locations of the particles are calculated based on them <cit.>.
The velocity value indicates how much the location can be changed (see Eq. (<ref>)). The velocity is given by:

particle(i).velocity = w*particle(i).velocity + c_1*rand(varsize)*(particle(i).best.location - particle(i).location) + c_2*rand(varsize)*(globalbest.location - particle(i).location),

where w is the inertia weight, c_1 and c_2 are the personal and global learning coefficients, and rand(varsize) are random positive numbers. Also, the location of each particle is updated as:

particle(i).location = particle(i).location + particle(i).velocity.

The pseudo code of the PSO algorithm is shown in Algorithm 3. Convergence of the candidate solutions has been investigated for PSO <cit.>. This analysis has resulted in guidelines for selecting a set of coefficients (κ,ϕ_1,ϕ_2) that are believed to cause convergence to a point and prevent divergence of the swarm's particles. We selected our parameters according to this analysis (see Table <ref> and Algorithm 3).

§ PROVIDING WIRELESS COVERAGE USING MULTIPLE UAVS

Providing wireless coverage to a high-rise building using a single UAV can be impractical, due to the limited transmit power of a UAV: the transmit power required to cover the building is in the range of 50 dBm to 65 dBm (see Figures 3, 5 and 6), which corresponds to 100-3000 watts. Our problem can be formulated as:

min |k|
subject to
∑_j=1^|k| y_ij = 1,  ∀ i ∈ m,   (3.a)
∑_i=1^|m| (2^v|m|/B - 1)·N·L_ij·y_ij ≤ P,  ∀ j ∈ k,   (3.b)
x_min ≤ x_j ≤ x_max,  ∀ j ∈ k,   (3.c)
y_min ≤ y_j ≤ y_max,  ∀ j ∈ k,   (3.d)
z_min ≤ z_j ≤ z_max,  ∀ j ∈ k,   (3.e)

where k is a set of fully charged UAVs, m is a set of indoor users, v is the rate requirement for each user (constant), N is the noise power (constant), B is the transmission bandwidth (constant), L_ij is the total path loss between UAV j and user i, and P is the maximum transmit power of a UAV (constant). We also introduce the binary variable y_ij that takes the value of 1 if indoor user i is connected to UAV j and equals 0 otherwise. The objective is to minimize the number of UAVs needed to provide wireless coverage for the indoor users. Constraint set (3.a) ensures that each indoor user is connected to one UAV. Constraint set (3.b) ensures that the total power consumed by a UAV does not exceed its maximum power consumption limit. Constraints (3.c)-(3.e) represent the minimum and maximum allowed values for x_j, y_j and z_j.

The problem represented by (3) is NP-complete. The number of constraints is polynomial in terms of the number of indoor users, UAVs and 3D locations. Given any solution to our problem, we can check the solution's feasibility in polynomial time, so the problem is in NP. To prove that the problem is NP-hard, we reduce the Bin Packing Problem, which is NP-hard <cit.>, to a special case of our problem. In the Bin Packing Problem, we have a set of items G={1,2,...,N} in which each item has volume z_n, where n ∈ G. All items must be packed into a finite number of bins (b_1, b_2, ..., b_B), each of volume V, in a way that minimizes the number of bins used. The reduction steps are: 1) The b-th bin in the Bin Packing Problem is mapped to the j-th UAV in our problem, where the volume V of each bin is mapped to the maximum transmit power P of the UAV. 2) The n-th item is mapped to the i-th indoor user, where the volume of each item n is mapped to the power required to cover the i-th indoor user.
3) All UAVs have the same maximum transmit power P. 4) The power required to cover the i-th indoor user from any 3D location is constant. If there exists a solution to the Bin Packing Problem with cost C, then the selected bins represent the UAVs that are selected, the items in each bin represent the indoor users that the corresponding UAV must cover, and the total cost of our problem is C.

Due to the intractability of the problem, we study clustering of the indoor users. In the k-means clustering algorithm <cit.>, we are given a set of points m and want to group the points into k clusters such that each point belongs to the cluster with the nearest mean. The first step in the algorithm is to choose the number of clusters k; then, k cluster centroids are randomly initialized. In each iteration, the algorithm does two things: 1) a cluster assignment step and 2) a move centroids step. In the cluster assignment step, the algorithm goes through each point, chooses the closest centroid, and assigns the point to it. In the move centroids step, the algorithm calculates the average of each group and moves the centroid there. The algorithm repeats these two steps until it converges, i.e., when the assignments no longer change. To find the minimum number of UAVs required to cover the indoor users, we utilize this algorithm to cluster the indoor users. In our algorithm, we assume that each cluster will be covered by only one UAV. We start the algorithm with k=2; after it finishes clustering the indoor users, it applies particle swarm optimization <cit.> to find the UAV 3D location and the UAV transmit power needed to cover each cluster. Then, it checks whether the maximum transmit power is sufficient to cover each cluster; if not, the number of clusters k is incremented by one and the problem is solved again. The pseudo code of this algorithm is shown in Algorithm 4.

§ NUMERICAL RESULTS

§.§ Simulation results for a single UAV

First, we verify our results for the second case, when the locations of indoor users are symmetric across the xy and xz planes, using different operating frequencies: 2 GHz for the low-SHF band and 15 GHz for the high-SHF band. We assume that each floor contains 20 users. Then we apply the gradient descent (GD) algorithm to find the optimal horizontal point x_UAV that minimizes the transmit power required to cover the indoor users. Table <ref> lists the parameters used in the numerical analysis for the single-UAV cases. In Figures <ref> and <ref>, we find the optimal horizontal points for a building of different heights. In the upper part of the figures, we show the total path loss at different locations (x_UAV,0.5y_b,z_UAV) and the optimal horizontal point x_UAV that results in the minimum total path loss using the GD algorithm. In the lower part of the figures, we show the convergence speed of the GD algorithm. As can be seen from the figures, when the height of the building increases, the optimal horizontal point x_UAV increases. This is to compensate for the increased building penetration loss due to an increased incident angle. In Figures <ref> and <ref>, we investigate the impact of different building widths (i.e., x_b). We fix the building height to 250 meters for the low-SHF operating frequency and to 25 meters for the high-SHF operating frequency, then we vary the building width. As can be seen from the figures, when the building width increases, the optimal horizontal distance decreases.
This is to compensate for the increased indoor path loss due to an increased building width.

Now, we validate the simulation results for the low-SHF operating frequency by using the particle swarm optimization (PSO) algorithm, and verify our result for the third case, when the locations of indoor users are uniformly distributed in each floor, using the low-SHF operating frequency. As can be seen from the simulation results in Table II, both algorithms converge to the same 3D placement when the locations of indoor users are symmetric across the xy and xz planes. After that, we assume that each floor contains 20 users and that the locations of these users are uniformly distributed in each floor. When we apply the GD algorithm, the efficient 3D placements and the total costs for 200-meter, 250-meter and 300-meter buildings are (24.7254, 25, 100) (7.8853*10^4), (33.8180, 25, 125) (9.9855*10^4) and (43.1170, 25, 150) (1.2154*10^5), respectively. The efficient UAV placement and the convergence speed of the PSO algorithm for different building heights are shown in Figure 11. The efficient 3D placements and the total costs for 200-meter, 250-meter and 300-meter buildings are (21.7995, 37.3891, 111.7901) (7.8645*10^4), (32.9212, 28.7125, 124.0291) (9.9725*10^4) and (46.5898, 31.5061, 143.8588) (1.2117*10^5), respectively. As can be seen from the simulation results, the PSO algorithm provides better results: it yields a total cost lower than that of the GD algorithm by 37-208 dB. This is because the PSO algorithm is designed for the case in which the locations of indoor users are uniformly distributed in each floor, whereas the GD algorithm is designed for the case in which the locations of indoor users are symmetric across the dimensions of each floor.

We also investigate the impact of different building widths (i.e., x_b) using the GD and PSO algorithms (see Figure <ref>). We fix the building height to 250 meters and vary the building width. As can be seen from the simulation results, the PSO algorithm again provides better results: it yields a total cost lower than that of the GD algorithm by 57-161 dB. We can notice that the tradeoff in Case 3 is similar to that in Case 2: when the height of the building increases, the efficient horizontal point x_UAV computed by our algorithm increases, to compensate for the increased building penetration loss due to an increased incident angle; and when the building width increases, the efficient horizontal distance computed by our algorithm decreases, to compensate for the increased indoor path loss due to an increased building width.

§.§ Simulation results for multiple UAVs

In this section, we verify our results for the multiple-UAV scenario. First, we assume that a building will host a special event (such as a concert, conference, etc.); the dimensions of the building are [0,20]×[0,50]×[0,100]. The organizers of the event reserve all floors higher than 75 meters and expect that 200 people will attend the event. Due to interference from nearby macro cells, the organizers decide to use UAVs to provide wireless coverage to the indoor users. We assume that 200 indoor users are uniformly distributed in the upper part of the building (higher than 75 meters) and 200 indoor users are uniformly distributed in the lower part (lower than 75 meters). Then, we apply the indoor-user clustering algorithm to find the minimum number of UAVs required to cover the indoor users.
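The following Python sketch outlines the clustering loop of Algorithm 4: a Lloyd-style k-means pass over the user positions, followed by a per-cluster placement-and-power check, with k incremented until the power budget suffices. The callable place_and_power stands in for the PSO placement routine described above; all names are ours and the snippet is illustrative rather than the authors' implementation.

import numpy as np

def min_uavs_by_clustering(users, place_and_power, p_max_dbm, k0=2, iters=100):
    """Increase the number of clusters k until every cluster of indoor
    users can be served within the per-UAV power budget p_max_dbm.

    users           : (M, 3) array of user positions (x, y, z).
    place_and_power : callable(cluster) -> (uav_xyz, power_dbm), e.g. PSO.
    """
    k = k0
    while True:
        centroids = users[np.random.choice(len(users), k, replace=False)]
        for _ in range(iters):                       # Lloyd iterations
            d2 = ((users[:, None, :] - centroids[None]) ** 2).sum(-1)
            labels = d2.argmin(axis=1)               # cluster assignment step
            new = np.array([users[labels == j].mean(axis=0)
                            if np.any(labels == j) else centroids[j]
                            for j in range(k)])      # move centroids step
            if np.allclose(new, centroids):
                break
            centroids = new
        placements = [place_and_power(users[labels == j]) for j in range(k)]
        if all(p <= p_max_dbm for _, p in placements):
            return k, placements                     # all clusters covered
        k += 1                                       # otherwise refine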
Table III lists the parameters used in the numerical analysis for multiple UAVs. The algorithm starts with k=2; after it finishes clustering the indoor users, it applies particle swarm optimization to find the UAV 3D location and the UAV transmit power needed to cover each cluster. Then, it checks whether the maximum transmit power is sufficient to cover each cluster; if not, the number of clusters k is incremented by one and the problem is solved again. As can be seen from the simulation results in Figure <ref>, we need 5 UAVs to cover the indoor users. We can notice that the efficient horizontal point is the same for all UAV 3D locations, x_UAV=25, the minimum allowed value for x_UAV; this is because the tradeoff (shown in Figure <ref>) disappears when a UAV covers only a small portion of the building height.

In Figure <ref>, we uniformly split the building into k parts and cover it with k UAVs. As can be seen from the simulation results, we then need 9 UAVs to cover the indoor users. The clustering algorithm provides better results because it utilizes the distribution of indoor users to divide them into clusters, whereas the uniform-split method is designed for the case in which the locations of indoor users are uniformly distributed in the whole building.

§ CONCLUSION

In this paper, we study the problem of providing wireless coverage for users inside a high-rise building using UAVs. First, we demonstrate why the Air-to-Ground path loss model is not appropriate for considering indoor users with 3D locations. Then, we present Outdoor-to-Indoor path loss models, show the tradeoff in these models, and study the problem of minimizing the transmit power required to cover the building. Due to the intractability of the problem, we study an efficient placement of a single UAV under three cases. Due to the limited transmit power of a UAV, we formulate the problem of minimizing the number of UAVs required to provide wireless coverage to a high-rise building and prove that this problem is NP-complete. Due to the intractability of the problem, we use clustering to minimize the number of UAVs required to cover the indoor users. In order to model more realistic scenarios, we will study the problem of providing wireless coverage for multiple buildings in our future work.

§ PROOF OF THEOREM 2

Consider that m_1 represents the users that have altitude lower than the UAV altitude and m_2 represents the users that have altitude higher than the UAV altitude; then:

d_out,i = ((x_UAV-x_i)^2+(y_UAV-y_i)^2+(z_UAV-z_i)^2)^0.5,  ∀ z_UAV > z_i,
d_out,i = ((x_UAV-x_i)^2+(y_UAV-y_i)^2+(z_i-z_UAV)^2)^0.5,  ∀ z_UAV < z_i.

Also,

cosθ_i = ((x_UAV-x_i)^2+(y_UAV-y_i)^2)^0.5/((x_UAV-x_i)^2+(y_UAV-y_i)^2+(z_UAV-z_i)^2)^0.5,  ∀ z_UAV > z_i,
cosθ_i = ((x_UAV-x_i)^2+(y_UAV-y_i)^2)^0.5/((x_UAV-x_i)^2+(y_UAV-y_i)^2+(z_i-z_UAV)^2)^0.5,  ∀ z_UAV < z_i.

Rewrite the total path loss:

L_Total = ∑_i=1^m_1 (w log_10(d_out,i) + g_3(1-cosθ_i)^2) + ∑_i=1^m_2 (w log_10(d_out,i) + g_3(1-cosθ_i)^2) + K,

where

K = ∑_i=1^M (w log_10 f_Ghz + g_1 + g_2 + g_4 d_in,i).

Now, taking the derivative with respect to z_UAV, we get:

dL_Total/dz_UAV = ∑_i=1^m_1 { (w/ln 10)·(z_UAV-z_i)/((x_UAV-x_i)^2+(y_UAV-y_i)^2+(z_UAV-z_i)^2) + 2g_3·[1 - ((x_UAV-x_i)^2+(y_UAV-y_i)^2)^0.5/((x_UAV-x_i)^2+(y_UAV-y_i)^2+(z_UAV-z_i)^2)^0.5]·[((x_UAV-x_i)^2+(y_UAV-y_i)^2)^0.5·(z_UAV-z_i)/((x_UAV-x_i)^2+(y_UAV-y_i)^2+(z_UAV-z_i)^2)^3/2] } + ∑_i=1^m_2 { (w/ln 10)·[-(z_i-z_UAV)]/((x_UAV-x_i)^2+(y_UAV-y_i)^2+(z_i-z_UAV)^2) + 2g_3·
[1 - ((x_UAV-x_i)^2+(y_UAV-y_i)^2)^0.5/((x_UAV-x_i)^2+(y_UAV-y_i)^2+(z_i-z_UAV)^2)^0.5]·[-((x_UAV-x_i)^2+(y_UAV-y_i)^2)^0.5·(z_i-z_UAV)/((x_UAV-x_i)^2+(y_UAV-y_i)^2+(z_i-z_UAV)^2)^3/2] }.

Rewriting dL_Total/dz_UAV again, we have:

dL_Total/dz_UAV = ∑_i=1^m_1 { (w/ln 10)·(z_UAV-z_i)/d_out,i^2 + 2g_3·[1 - ((x_UAV-x_i)^2+(y_UAV-y_i)^2)^0.5/d_out,i]·[((x_UAV-x_i)^2+(y_UAV-y_i)^2)^0.5·(z_UAV-z_i)/d_out,i^3] } + ∑_i=1^m_2 { (w/ln 10)·[-(z_i-z_UAV)]/d_out,i^2 + 2g_3·[1 - ((x_UAV-x_i)^2+(y_UAV-y_i)^2)^0.5/d_out,i]·[-((x_UAV-x_i)^2+(y_UAV-y_i)^2)^0.5·(z_i-z_UAV)/d_out,i^3] }.

The equation above equals zero when the UAV altitude equals half of the building height, given that the locations of indoor users are symmetric across the xy and xz planes.

§ PROOF OF THEOREM 3

Consider that m_1 represents the users that have altitude lower than the UAV altitude and m_2 represents the users that have altitude higher than the UAV altitude; then:

d_out,i = ((x_UAV-x_i)^2+(y_UAV-y_i)^2+(z_UAV-z_i)^2)^0.5,  ∀ z_UAV > z_i,
d_out,i = ((x_UAV-x_i)^2+(y_UAV-y_i)^2+(z_i-z_UAV)^2)^0.5,  ∀ z_UAV < z_i.

Also,

θ_i = sin^-1((z_UAV-z_i)/((x_UAV-x_i)^2+(y_UAV-y_i)^2+(z_UAV-z_i)^2)^0.5),  ∀ z_UAV > z_i,
θ_i = sin^-1((z_i-z_UAV)/((x_UAV-x_i)^2+(y_UAV-y_i)^2+(z_i-z_UAV)^2)^0.5),  ∀ z_UAV < z_i.

Rewrite the total path loss:

L_Total = ∑_i=1^m_1 [α_2 log_10(d_out,i) + (β_2-β_1)/(1+exp(-β_3(sin^-1(u)-β_4)))] + ∑_i=1^m_2 [α_2 log_10(d_out,i) + (β_2-β_1)/(1+exp(-β_3(sin^-1(u)-β_4)))] + K,

where

u = (z_UAV-z_i)/((x_UAV-x_i)^2+(y_UAV-y_i)^2+(z_UAV-z_i)^2)^0.5,  ∀ z_UAV > z_i,
u = (z_i-z_UAV)/((x_UAV-x_i)^2+(y_UAV-y_i)^2+(z_i-z_UAV)^2)^0.5,  ∀ z_UAV < z_i,
K = ∑_i=1^M (α_1 + α_3 log_10 f_Ghz + β_1 + γ_1 d_in,i).

Now, taking the derivative with respect to z_UAV, we get:

dL_Total/dz_UAV = ∑_i=1^m_1 { (α_2/ln 10)·(z_UAV-z_i)/((x_UAV-x_i)^2+(y_UAV-y_i)^2+(z_UAV-z_i)^2) - (β_2-β_1)·(-β_3/√(1-u^2))·[(d_out,i - (z_UAV-z_i)^2·d_out,i^-1)/d_out,i^2]·exp(-β_3(sin^-1u-β_4))/[1+exp(-β_3(sin^-1u-β_4))]^2 } + ∑_i=1^m_2 { (α_2/ln 10)·[-(z_i-z_UAV)]/((x_UAV-x_i)^2+(y_UAV-y_i)^2+(z_i-z_UAV)^2) - (β_2-β_1)·(-β_3/√(1-u^2))·[(-d_out,i + (z_UAV-z_i)^2·d_out,i^-1)/d_out,i^2]·exp(-β_3(sin^-1u-β_4))/[1+exp(-β_3(sin^-1u-β_4))]^2 }.

The equation above equals zero when the UAV altitude equals half of the building height, given that the locations of indoor users are symmetric across the xy and xz planes.
http://arxiv.org/abs/1705.09771v1
{ "authors": [ "Hazim Shakhatreh", "Abdallah Khreishah", "Issa Khalil" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20170527060826", "title": "The Indoor Mobile Coverage Problem Using UAVs" }
Department of Physics, National Cheng Kung University, Tainan 701, Taiwan
Department of Physics, National Cheng Kung University, Tainan 701, Taiwan
NTT Basic Research Laboratories, NTT Corporation, 3-1 Morinosato-Wakamiya, Atsugi, Kanagawa 243-0198, Japan
Institute for Quantum Computing, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada
Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada
Department of Physics, University of Basel, Klingelbergstrasse 82, 4056 Basel, Switzerland
[email protected]
Department of Physics, National Cheng Kung University, Tainan 701, Taiwan

The device-independent approach to physics is one where conclusions are drawn directly from the observed correlations between measurement outcomes. In quantum information, this approach allows one to make strong statements about the properties of the underlying systems or devices solely via the observation of Bell-inequality-violating correlations. However, since one can only perform a finite number of experimental trials, statistical fluctuations necessarily accompany any estimation of these correlations. Consequently, an important gap remains between the many theoretical tools developed for the asymptotic scenario and the experimentally obtained raw data. In particular, a physical and concurrently practical way to estimate the underlying quantum distribution has so far remained elusive. Here, we show that the natural analogs of the maximum-likelihood estimation technique and the least-square-error estimation technique in the device-independent context result in point estimates of the true distribution that are physical, unique, computationally tractable and consistent. They thus serve as sound algorithmic tools allowing one to bridge the aforementioned gap. As an application, we demonstrate how such estimates of the underlying quantum distribution can be used to provide, in certain cases, trustworthy estimates of the amount of entanglement present in the measured system. In stark contrast to existing approaches to device-independent parameter estimations, our estimation does not require the prior knowledge of any Bell inequality tailored for the specific property and the specific distribution of interest.

Device-independent point estimation from finite data and its application to device-independent property estimation
Yeong-Cherng Liang
December 30, 2023
===================================================================================================================

§ INTRODUCTION

The proper analysis of empirical data is an indispensable part of the development of both science and technologies. In quantum information, for instance, the careful preparation followed by the proper characterization of quantum systems (which includes estimating reliably the prepared quantum state or confirming that it possesses certain desired properties) is often the first step to many quantum information processing protocols. In practice, however, the execution of this preliminary task is far from trivial. For example, systematic uncertainties arising from various imperfections in the setup may compromise the reliability of the estimate <cit.>. Moreover, unavoidable statistical fluctuations result in situations where the ideal, theoretical description becomes inapplicable.
Hence, quantum state estimation <cit.> using real data is a daunting task <cit.>, where there remains an ongoing debate on the preferred approach (see, e.g., <cit.> and references therein).

Interestingly, the first of these problems can be circumvented, to some extent, by the so-called device-independent approach <cit.>. There, the nature of the devices employed is deduced directly from the measurement statistics <cit.>, without relying on any assumption about the devices' detailed functioning or the associated Hilbert space dimension. Consequently, robust characterizations of quantum systems and instruments are now in principle possible with minimal assumptions. Likewise, the distribution of shared secret keys <cit.> and the generation of random bits—secured by the laws of physics <cit.>—are now possibilities at our disposal. Crucially, in order to make nontrivial statements from the empirical data, the latter have to be Bell-inequality <cit.> violating, which cannot arise from local measurements on a separable quantum state <cit.>. Indeed, the extent of the observed Bell-inequality violation can be used to provide an estimate, e.g., of the amount of shared entanglement <cit.>, or even of the incompatibility between the measurements <cit.> employed. Nonetheless, as with the case where the measurement devices are fully characterized, the underlying distribution—which serves as the analog of a quantum state in this black-box setting—contains all the available information and thus generally provides a much better estimate of the system's properties <cit.>.

Indeed, various theoretical tools taking into account the full quantum distribution have been developed for device-independent characterizations: from the nature of the (multipartite) entanglement <cit.> present to its quantification <cit.>, from the steerability <cit.> of the underlying state to the incompatibility of the measurements employed <cit.>, and from the minimal compatible Hilbert space dimension <cit.> to the self-testing <cit.> of the quantum apparatus <cit.>. Stemming from the algorithmic characterization of the set of quantum distributions due to Navascués-Pironio-Acín (NPA) <cit.>, they share the common assumption that the estimated distributions satisfy the physically motivated nonsignaling conditions <cit.>. However, raw distributions estimated from the relative frequencies of experimental outcomes—due to statistical fluctuations—generically do not satisfy these conditions. As such, none of the aforementioned tools can be directly applied to experimentally observed statistics. In other words, while the device-independent approach offers an elegant solution to overcome the problem of mistrusting the measurement devices, there remains an important gap between the theoretical tools developed for such purposes and the actual data available from any Bell experiment.

For the very specific problem of device-independent randomness certification and quantum key distribution, techniques based on hypothesis testing have been shown, respectively, in <cit.> and in <cit.> to be applicable even in the presence of finite statistics. These approaches are, however, very problem specific, and it is not yet known how to generalize them for the general problem of device-independent characterizations.
Here, we consider an alternative approach inspired by estimation theory, which consists in constructing a point estimate for the underlying quantum distribution from the observed frequencies. In particular, we show that the natural analogs of two physical estimators employed in usual quantum state tomography, namely, maximum-likelihood (ML) estimation and least-square-error estimation, also serve as sound estimators in the device-independent context, thereby allowing us to regularize these raw data and obtain a direct estimation of the properties of interest through the respective theoretical techniques. Although there have been attempts to perform regularizations for device-independent property estimations <cit.> and for the quantification of nonlocality <cit.>, these proposals turn out to suffer from the drawback of generating estimates that can be either nonphysical or nonunique. In contrast, our methods are provably free from such problems. Armed with these point estimates of the underlying distribution, a device-independent estimation of the property of interest then follows naturally by applying the algorithmic tools mentioned above (see Fig. <ref>).

§ PRELIMINARIES

The starting point of device-independent estimations is a Bell experiment. Consider the simplest Bell scenario where Alice and Bob each randomly performs one of two possible measurements (labeled, respectively, by x, y ∈ {0,1}), and where each measurement gives binary outcomes (labeled, respectively, by a, b ∈ {0,1}). Generalization of our discussion to other finite Bell scenarios is obvious from the context. The correlations between their measurement outcomes can be summarized by a vector of joint conditional probability distributions P = {P(a,b|x,y)}_a,b,x,y ∈ ℝ^16. Denote by ρ the state shared by Alice and Bob, and by M^A_a|x (M^B_b|y) the positive-operator-valued-measure elements associated with their measurements. Born's rule dictates that for all a, b, x, y, the conditional probability distributions read as P(a,b|x,y) = tr( ρ M^A_a|x ⊗ M^B_b|y ), where the positivity and the normalization of probabilities demand that M^A_a|x, M^B_b|y ≽ 0 (matrix positivity) and ∑_a M^A_a|x = 𝟙_A, ∑_b M^B_b|y = 𝟙_B, with 𝟙_A, 𝟙_B being identity operators. Throughout, we use Q to denote the set of quantum distributions, i.e., the collection of P that follow from Born's rule.

Importantly, quantum distributions satisfy the nonsignaling conditions <cit.>, i.e., their marginal distributions are independent of the measurement choice of the distant party:

P(a|x,y) ≡ ∑_b P(a,b|x,y) = P(a|x,y'), ∀ a, x, y, y',
P(b|x,y) ≡ ∑_a P(a,b|x,y) = P(b|x',y), ∀ b, x, x', y.

In an experiment, the underlying quantum distribution P(a,b|x,y) is often estimated by computing the relative frequency, i.e.,

P(a,b|x,y) ≈ f(a,b|x,y) = N_a,b,x,y/N_x,y,

where N_a,b,x,y is the number of coincidences registered for the combination of outcomes and settings (a,b,x,y), while N_x,y = ∑_a,b N_a,b,x,y is the total number of trials pertaining to the measurement choice (x,y). Of course, in the asymptotic limit of a large number of trials, i.e., when min_x,y N_x,y → ∞, the difference between the true distribution P and the relative frequency f = {f(a,b|x,y)}_a,b,x,y vanishes. In practice, as max_x,y N_x,y is necessarily finite, not only is this difference nonzero, but f typically also violates the weaker requirement of the nonsignaling conditions [see Eq. (<ref>)]. As mentioned above, this discrepancy between theory and practice immediately renders many of the tools developed for device-independent characterizations inapplicable.
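Before discussing regularization, it is instructive to see how the raw frequencies and their (generic) violation of the nonsignaling conditions can be computed from counted data. The Python sketch below is our own illustration for the two-input two-output scenario, with array axes ordered as (a, b, x, y); the function names are our choices.

import numpy as np

def frequencies(counts):
    """Relative frequencies f(a,b|x,y) from coincidence counts
    N[a,b,x,y], normalizing by N_{x,y} = sum over a,b of N[a,b,x,y]."""
    N = counts.astype(float)
    return N / N.sum(axis=(0, 1), keepdims=True)

def signaling_deviation(f):
    """Largest violation of the nonsignaling conditions: exactly zero
    for a nonsignaling point, generically positive for finite data."""
    pa = f.sum(axis=1)   # marginals P(a|x,y), shape (a, x, y)
    pb = f.sum(axis=0)   # marginals P(b|x,y), shape (b, x, y)
    dev_a = np.abs(pa[:, :, :, None] - pa[:, :, None, :]).max()  # over y, y'
    dev_b = np.abs(pb[:, None, :, :] - pb[:, :, None, :]).max()  # over x, x'
    return max(dev_a, dev_b)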
§ REGULARIZATION METHODS (ESTIMATORS)

To overcome this mismatch, one may project the observed frequency f⃗ onto an affine subspace 𝒜 of ℝ^16 which contains only P⃗'s that satisfy Eq. (<ref>). For example, if one demands that this projection (via the corresponding projector Π) commutes <cit.> with all possible permutations of the labels for parties, settings, and outcomes (e.g., a=0↔ a=1), then it happens to be equivalent to finding the unique minimizer of the least-square-error problem: Π(f⃗) = argmin_{P⃗∈𝒜} ||f⃗-P⃗||_2, where ||·||_p denotes the p-norm; the regularization invoked in <cit.> is precisely an application of such a projection (see Appendix <ref>). Albeit intuitive and straightforward, such a projection suffers from the serious drawback that it may give “negative probabilities" (see Appendix <ref> for an explicit example). Indeed, the possibility of giving rise to an unphysical estimate is a problem that such a projection shares with the linear inversion technique employed in standard quantum state tomography (see, e.g., <cit.>). Moreover, even when Π(f⃗) represents a legitimate probability vector, it may well be outside the quantum set 𝒬. To overcome these issues, one is naturally led to the least-square (LS) estimator in the device-independent context, i.e., P⃗_LS = argmin_{P⃗∈𝒬} ||f⃗-P⃗||_2.

We thus see that various estimators (see also <cit.> and Appendix <ref>) map f⃗ to a regularized distribution P⃗_Reg(f⃗) that is non-negative and normalized and which satisfies the nonsignaling conditions. However, for a regularization method to be relevant for subsequent property estimation, it is also convenient that P⃗_Reg(f⃗) is uniquely determined by f⃗. In particular, for a given f⃗, a nonunique estimator may give rise to P⃗_Reg(f⃗)'s with drastically different properties, e.g., some being Bell-inequality violating (and therefore implying some nontrivial features of the underlying system) and some not [which renders that particular P⃗_Reg(f⃗) useless for device-independent property estimation]. An ambiguity then arises: which of these estimates should we rely on for subsequent property estimation? A possibility would be to consider the worst case over all such estimates, but this clearly complicates the property estimation, as one would now need to consider an entire solution set {P⃗_Reg(f⃗)} (the characterization of which is generally nontrivial). This makes evident the inconveniences of 1-norm estimators, and more generally of nonunique estimators, in the present context. The regularization procedures previously considered in <cit.>—both being 1-norm estimators—precisely suffer from this nonuniqueness drawback.

In this regard, note that a regularized distribution P⃗_Reg(f⃗) obtained from minimizing a strictly convex function g over a convex set (such as 𝒬) is provably unique, and is determined by f⃗ and g (see, e.g., Theorem 8.3 of <cit.>). Using this observation, we show in Appendix <ref> that the aforementioned LS estimator is unique. Similarly, the device-independent analog of the ML estimator <cit.> is provably unique (see Appendix <ref> for a proof). To this end, consider the Kullback-Leibler (KL) divergence <cit.> (i.e., the relative entropy <cit.>) from some P⃗∈𝒬 to f⃗:

D_KL(f⃗||P⃗) = ∑_a,b,x,y f(x,y) f(a,b|x,y) log_2 [ f(a,b|x,y)/P(a,b|x,y) ],

where f(x,y) is the relative frequency of choosing the measurement settings labeled by (x,y). The quantity D_KL(f⃗||P⃗) can be seen as a measure of “statistical closeness" <cit.> between f⃗ and P⃗. Indeed, its minimization over P⃗∈𝒬 is equivalent <cit.> to maximizing the likelihood of producing the observed frequency f⃗ by P⃗∈𝒬 (we provide in Appendix <ref> a proof adapted to the present context).
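To make this minimization concrete, the sketch below implements it with CVXPY, taking for simplicity the nonsignaling polytope 𝒩𝒮 as the target set (the ML_𝒩𝒮 variant of Appendix <ref>) instead of an NPA relaxation 𝒬_ℓ, and assuming uniform setting frequencies f(x,y); all names and the array layout are our own choices, not the paper's code:

import cvxpy as cp
import numpy as np

def ns_constraints(P):
    # P is a cvxpy Variable of shape (4, 4); row index r = 2*x + y,
    # column index c = 2*a + b.  Returns normalization plus the
    # nonsignaling equalities of Eq. (1).
    cons = [cp.sum(P, axis=1) == 1]
    for a in range(2):  # Alice's marginals independent of y
        cons += [P[0, 2*a] + P[0, 2*a+1] == P[1, 2*a] + P[1, 2*a+1],
                 P[2, 2*a] + P[2, 2*a+1] == P[3, 2*a] + P[3, 2*a+1]]
    for b in range(2):  # Bob's marginals independent of x
        cons += [P[0, b] + P[0, 2+b] == P[2, b] + P[2, 2+b],
                 P[1, b] + P[1, 2+b] == P[3, b] + P[3, 2+b]]
    return cons

def ml_regularize(f):
    # Minimize sum_i f_i log(f_i / P_i).  rel_entr uses the natural log;
    # together with the uniform weight f(x, y), this only rescales the
    # objective of Eq. (2) and does not change the minimizer.  Requires a
    # solver with exponential-cone support, e.g., ECOS or SCS.
    P = cp.Variable((4, 4), nonneg=True)
    objective = cp.Minimize(cp.sum(cp.rel_entr(f.reshape(4, 4), P)))
    cp.Problem(objective, ns_constraints(P)).solve()
    return np.asarray(P.value).reshape(2, 2, 2, 2)

Replacing the linear nonsignaling constraints by the moment-matrix (semidefinite) constraints of a level 𝒬_ℓ yields the ML estimator proper; the objective is unchanged.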
The unique minimizer of D_KL(f⃗||P⃗) over P⃗∈𝒬, i.e., P⃗_ML = argmin_{P⃗∈𝒬} D_KL(f⃗||P⃗), therefore serves as the equivalent of the ML estimator in the device-independent context. Hereafter, we focus predominantly on this operationally well-motivated estimator. For further details of the LS estimator and some other plausible regularization methods, see, respectively, Appendix <ref> and Appendix <ref>. As it stands, since there is no known exact characterization of 𝒬 using only finite resources, the ML estimator cannot be computed exactly. Nonetheless, via a converging hierarchy of semidefinite programs (SDPs) <cit.>, one can in principle obtain an arbitrarily good approximation to P⃗_ML. To fix ideas, we hereafter focus on employing the hierarchy 𝒬_ℓ of approximations to 𝒬 discussed in <cit.> and <cit.>. The lowest level of this hierarchy, 𝒬_1, gives a decent outer approximation of 𝒬 known as the almost-quantum set <cit.>. In general, 𝒬_ℓ ⊆ 𝒬_ℓ-1 for all ℓ≥2 and lim_ℓ→∞ 𝒬_ℓ → 𝒬. For any fixed ℓ, although the nonlinear optimization problem argmin_{P⃗∈𝒬_ℓ} D_KL(f⃗||P⃗) does not appear to be a semidefinite program, we show in Appendix <ref> that it belongs to a more general class of convex optimization problems <cit.>—an exponential conic program. A minimization of the KL divergence with NPA constraints is thus also efficiently solvable on a computer, with a numerical precision of 10^-6 or better.

§ NOTABLE PROPERTIES OF POINT ESTIMATES

The uniqueness of P⃗_ML and the nonnegativity of the KL divergence ensure that our estimators are consistent <cit.>, in the sense that they provide an estimate that converges to the true distribution P⃗ in the asymptotic limit N_min → ∞. This can be seen by noting that in the asymptotic limit, f⃗ → P⃗; the nonnegativity of the KL divergence then implies that the unique minimizer of D_KL(P⃗||P⃗') over P⃗'∈𝒬 is necessarily given by P⃗_ML = P⃗. [Likewise, the LS estimator P⃗_LS is provably consistent.] In practice, one would evidently be more interested in how these methods fare for finite values of N_min. To gain insights into this, we carry out extensive numerical simulations by (i) picking some ideal P⃗, (ii) numerically simulating the outcomes of a Bell experiment according to P⃗ and computing the relative frequency f⃗, (iii) computing the point estimate P⃗_Reg(f⃗) and calculating various quantities of interest, and (iv) repeating steps i-iii 10^4 times for N_x,y = 10^2, 10^3, …, 10^10 for all x,y. For simplicity, we take N_x,y = N, i.e., a constant independent of x,y [this amounts to setting f(x,y) as a constant in Eq. (<ref>)]. Our numerical results in Appendix <ref> suggest that, in general, the difference between P⃗_Reg(f⃗) and the ideal distribution P⃗, quantified, e.g., via ||P⃗_Reg(f⃗)-P⃗||_1 (or other p-norms), diminishes, as with ||f⃗-P⃗||_1, at a rate proportional to 1/√(N). Similar convergence is also observed for the corresponding KL divergence. Moreover, although we have only employed an outer approximation to 𝒬 in the regularization step iii, as our example below illustrates, the regularized distribution P⃗_Reg(f⃗) can already be used to perform reasonable device-independent property estimations.

§ APPLICATION TO DEVICE-INDEPENDENT ESTIMATIONS

As a concrete example of such property estimations via regularization (see Fig. <ref>), consider P⃗ = P⃗^τ_1.25, a quantum distribution considered in a recent Bell test <cit.> (see Appendix <ref> for details). Device-independent estimations of the underlying negativity <cit.> (a well-known entanglement measure) based on ideal quantum distributions are known to be possible <cit.>. Here, we illustrate how such an estimation can be realized for finite data through the regularization of f⃗. To facilitate comparison, we plot in Fig.
<ref> the average negativity N(ρ) of the underlying state ρ estimated from the regularized distribution P⃗_Reg(f⃗) (via the SDP described in <cit.>) against that deducible from the amount of Clauser-Horne-Shimony-Holt <cit.> (CHSH) Bell-inequality violation I_CHSH <cit.>: N(ρ) ≥ (I_CHSH-2)/(4√(2)-4) for I_CHSH∈[2,2√(2)]. A few features of this comparison are worth noting. First, since the negativity estimated directly from the CHSH Bell-inequality violation I_CHSH of f⃗ depends linearly on this violation, the mean value of the negativity so estimated hardly depends on N, and is suboptimal. In fact, even in the asymptotic limit, a negativity estimation based on P⃗^τ_1.25 and I_CHSH is suboptimal. In contrast, the mean value of the negativity estimated from the regularized distribution P⃗_Reg(f⃗) rapidly converges to the true value as N increases; already at N=10^4, this mean value only differs from the true value by a few percent. Second, note that our negativity estimations based on P⃗_Reg(f⃗) clearly systematically underestimate the amount of negativity present, which is in strong contrast with the results presented in <cit.> for the non-device-independent scenario using both least-square and maximum-likelihood estimators. (For further examples of underestimation using f⃗ sampled from other quantum distributions, see Appendix <ref>.) Of course, instead of the CHSH Bell inequality, one could hope to improve the negativity estimation by considering a Bell inequality the quantum violation of which is optimized for the negativity estimation of P⃗^τ_1.25 [see Eq. (<ref>) and Appendix <ref> for details]. Such an optimized device-independent witness may be obtained, e.g., by feeding the DI algorithm with some regularized distribution sampled from P⃗^τ_1.25, as indicated in Fig. <ref>. In practice, however, the relative frequencies sampled from P⃗^τ_1.25 turn out to give—independent of N—about half of the time a Bell violation larger than that allowed by quantum theory, thereby rendering negativity estimation from this Bell violation impossible in all these cases.

§ DISCUSSION

The device-independent state estimation problem is at the core of all state estimation problems from finite data, as it addresses the generic problem of matching empirical data (subject to statistical fluctuations) with the ideal, theoretical description given by Born's rule. The recent demonstration of loophole-free Bell tests <cit.> has made it clear that the development of reliable techniques for the device-independent estimation of underlying properties from finite data not only is of fundamental interest but also would play an indispensable role in the next generation of quantum information protocols. In fact, although our focus is on a fully device-independent setting, the insights obtained therefrom are also relevant to analogous problems in a partially device-independent scenario, such as those incurred in a quantum steering experiment <cit.>. In this paper, we have provided the device-independent analogs of the maximum-likelihood and the least-square estimators and shown that they are physical, computationally tractable, and unique. These features render them ideal for bridging the device-independent tools developed for ideal quantum distributions and the experimentally obtained raw data.
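For concreteness, the sketch below (our own illustrative code) evaluates the CHSH functional and the above negativity bound for a distribution stored as P[x, y, a, b]; as a check, it uses the ideal CHSH-maximizing distribution written out explicitly in Appendix <ref>:

import numpy as np

def chsh_value(P):
    # I_CHSH = sum_{a,b,x,y} (-1)^(a+b+x*y) P(a,b|x,y)
    I = 0.0
    for x in range(2):
        for y in range(2):
            for a in range(2):
                for b in range(2):
                    I += (-1) ** (a + b + x * y) * P[x, y, a, b]
    return I

def negativity_from_chsh(I):
    # Device-independent bound N(rho) >= (I - 2)/(4*sqrt(2) - 4),
    # valid for I in [2, 2*sqrt(2)].
    return max(0.0, (I - 2) / (4 * np.sqrt(2) - 4))

# ideal distribution with entries 1/4 + (-1)^(a+b+x*y) * sqrt(2)/8
P_CHSH = np.array([[[[0.25 + (-1) ** (a + b + x * y) * np.sqrt(2) / 8
                      for b in range(2)] for a in range(2)]
                    for y in range(2)] for x in range(2)])
print(chsh_value(P_CHSH))                            # 2*sqrt(2), the Tsirelson bound
print(negativity_from_chsh(chsh_value(P_CHSH)))      # 0.5, a maximally entangled pair

On such ideal data both routes behave as expected; the situation for finite-statistics point estimates is more subtle.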
Generalizing the arguments in <cit.>, however, it can be shown that all quantum estimators are necessarily biased, as summarized in the following proposition (see Appendix <ref> for a proof and the corresponding definition of strict nonorthogonality).

Proposition: Let 𝒯 be a closed convex subset of the nonsignaling polytope 𝒩𝒮 (such as 𝒬) with two strictly nonorthogonal extreme points. Any point estimator constrained to give P⃗_Reg(f⃗)∈𝒯 is necessarily biased.

As stressed above, one of the goals of performing a device-independent state estimation is to obtain therefrom various device-independent parameter (property) estimates (see Fig. <ref>). Given Proposition <ref>, one may expect that any such parameter estimates are also biased. Indeed, for data sampled from a distribution that is not Bell-inequality violating, but which is sufficiently near to the boundary of the local polytope, an overestimation of the corresponding negativity is to be expected. [Evidently, due to statistical fluctuations, some of the P⃗_Reg(f⃗) would lie inside the local polytope, giving a zero negativity value, while some other P⃗_Reg(f⃗) would violate a Bell inequality, giving a strictly positive negativity value. Their average is thus positive, thereby resulting in an overestimation.] On the other hand, our numerical results show that for data sampled from some extremal quantum distribution, we observe, instead, an underestimation of the underlying negativity (see Fig. <ref>) and/or Bell violation (see Fig. <ref>). Admittedly, for such point estimation to be useful, a more thorough investigation is needed in order to determine when such device-independent estimations are trustworthy (in the sense of not leading to an overestimation).

As one can perform analogous property estimation directly from the observed Bell-inequality violation, our approach of estimation by regularization may seem redundant at first glance. However, as we illustrate with the negativity example, the quality of an estimate obtained from a Bell-inequality violation depends heavily on the choice of the inequality, and it is a priori not always obvious which Bell inequality (the violation of which is used as a device-independent witness) will provide an optimal estimate. In contrast, our approach yields an optimized Bell-like inequality for witnessing the desired property as a byproduct. Moreover, due to the signaling nature of the relative frequencies, even physically equivalent <cit.> Bell inequalities may give rise to different estimates <cit.>, thereby resulting in further ambiguities.

We now briefly comment on some other possibilities for future research. Implicit in our discussion is the assumption that the experimental trials are independent and identically distributed. While this is an often adopted assumption in the non-device-independent setup, its justification in a device-independent setting is far from trivial. A natural line of research thus consists of relaxing this assumption while maintaining the possibility to perform a reliable state estimation. In close connection to this is the problem of establishing a confidence region: how does one generalize the tools presented here to construct a region of estimates in accordance with, say, some predefined likelihood <cit.>? For some device-independent tasks, specific techniques <cit.> for dealing with finite statistics (possibly with the inclusion of confidence regions) have been developed, but general techniques for establishing confidence regions associated with generic quantum properties are still lacking.
To appreciate the importance of constructing these confidence regions, recall from our example that the Bell violation given by the observed relative frequency—due to statistical fluctuations—may take a value beyond that allowed by quantum theory, thus rendering this Bell violation useless for the estimate of a quantum parameter. To this end, we remark that the work of <cit.> suggests that the hypothesis-testing technique of <cit.>, together with the numerical technique developed here, can indeed be used to provide a confidence region for general device-independent estimates. Addressing these questions, however, clearly goes beyond the scope of the present paper and is something that we plan to take up in the sequel to this paper.

We are grateful to Bänz Bessire, Daniel Cavalcanti, Flavien Hirsch, Sacha Schwarz and Paul Skrzypczyk for useful discussions, and to Boris Bourdoncle, Jedrzej Kaniewski, Lukas Knips, as well as a few anonymous referees, for useful comments on an earlier version of this paper. This work is supported by the Ministry of Science and Technology, Taiwan (Grant No. 104-2112-M-006-021-MY3), the National Center for Theoretical Sciences of Taiwan, the Swiss National Science Foundation [through the NCCR QSIT as well as Grants No. PP00P2-150579 and No. P2GEP2_162060 (Early Postdoc.Mobility)], the Ontario Research Fund, and the Natural Sciences and Engineering Research Council of Canada.

§ NOTATIONS AND DEFINITIONS

Throughout, we label the measurement settings (inputs) by x,y,z,… and the corresponding measurement outcomes by a,b,c,…, where each of these labels is an element of some finite set. The correlations between the measurement outcomes are succinctly summarized by the vector of joint conditional distributions P⃗ := {P(a,b,c,…|x,y,z,…)}. For simplicity, we will focus our discussion on the bipartite case. We denote by 𝒫 the set of all legitimate probability distributions, i.e., those that obey:

∑_a,b P(a,b|x,y) = 1 ∀ x,y,
P(a,b|x,y) ⩾ 0 ∀ x,y,a,b.

Moreover, we denote by 𝒩𝒮 the subset of 𝒫 which also satisfies the nonsignaling conditions <cit.>:

P(a|x,y) ≡ ∑_b P(a,b|x,y) = P(a|x,y'), ∀ a,x,y,y',
P(b|x,y) ≡ ∑_a P(a,b|x,y) = P(b|x',y), ∀ b,x,x',y.

It is worth noting that both 𝒫 and 𝒩𝒮 <cit.> are convex polytopes, i.e., convex sets having a finite number of extreme points, and are conventionally referred to, respectively, as the signaling and the nonsignaling polytope. Here, we describe both 𝒫 and 𝒩𝒮 in their H-representation, using linear equations and inequalities. The nonsignaling affine space 𝒜 ⊃ 𝒩𝒮 is the smallest-dimensional affine space containing the set 𝒩𝒮 and is given by the distributions satisfying Eqs. (<ref>) and (<ref>).

§ FURTHER DETAILS ABOUT THE PROJECTION METHOD

Here, we provide some further details about the projection mentioned in the main text.

§.§ Equivalent definitions

The projection method can be defined in three equivalent ways.

* It is the minimizer f⃗_NS of the following optimization problem: min_{P⃗∈𝒜} ||f⃗-P⃗||_2 = ||f⃗-f⃗_NS||_2.
* It is the nonsignaling part f⃗_NS ∈ 𝒜, i.e., the first component of the decomposition f⃗ = f⃗_NS + f⃗_SI, where f⃗_SI is the signaling component of f⃗ and is orthogonal to all vectors in the affine subspace 𝒜.
* It is the result f⃗_NS = Πf⃗ of the projection onto 𝒜 by the linear operator Π, where Π is uniquely determined by the set of commutation relations Π M = M Π, with M being any permutation matrix corresponding to a relabeling <cit.> of outputs, inputs and parties.

For the bipartite Bell scenario with binary inputs and outputs, their equivalence can be shown using the decomposition given in <cit.>.
For more general Bell scenarios, a proof of their equivalence analogously follows from group representation theory, and will be made available in <cit.>.

§.§ Explicit form of the projection matrix in the simplest Bell scenario

Using the notation of <cit.>, the projection operator Π in the bipartite Bell scenario with binary inputs and outputs admits the explicit form

Π_abxy,a'b'x'y' = [1_16]_abxy,a'b'x'y' - (1/16) ∑_(i,j,k,l)∈ℐ i^a+a' j^b+b' k^x+x' l^y+y',

where 1_16 is the 16×16 identity matrix, the sum is carried out over the four quadruplets ℐ = {(+1,-1,-1,±1), (-1,+1,±1,-1)}, the rows of Π are indexed by (a,b,x,y), and its columns are indexed by (a',b',x',y').

§.§ An algorithm for performing the projection for more general Bell scenarios

In general, in (bipartite) Bell scenarios where the parties have binary outputs, it is customary to write A = (-1)^a and B = (-1)^b, and to compute the expectation values (correlators <cit.>):

< A_x > = ∑_A=±1 A P(A|x) = ∑_a=0,1 (-1)^a P(a|x),

and similarly for < B_y >, while

< A_x B_y > = ∑_A,B A B P(a,b|x,y) = ∑_a,b (-1)^a+b P(a,b|x,y).

Together, the <A_x>, <B_y> and <A_x B_y> represent a parametrization of the nonsignaling subspace. In particular, the above relations can be inverted to give:

P(a,b|x,y) = (1/4)[1 + (-1)^a <A_x> + (-1)^b <B_y> + (-1)^a+b <A_x B_y>].

However, when the distribution P(a,b|x,y) is signaling, the marginals P(a|x,y), P(b|x,y) depend on both inputs (x,y). As shown in <cit.>, in the case of binary inputs, the (nonsignaling) correlators <A_x> should be computed as < A_x > = ∑_a (-1)^a P̃_A(a|x), where P̃_A(a|x) is given by:

P̃_A(a|x) = (1/2) ∑_b,y P(a,b|x,y),

i.e., averaged uniformly over y=0,1. This is the only choice that keeps the projection invariant under permutations of y. We make the same choice to compute < B_y >. Now, the correlators <A_x>, <B_y> and < A_x B_y > correspond to a unique distribution P_Π(a,b|x,y) in the nonsignaling subspace, cf. Eq. (<ref>), which is the result of the projection. This is the essence of the regularization method employed in <cit.>.

The construction above extends to an arbitrary number of parties, inputs, and outputs, specified by the tuple (n,m,k). We give a sketch below of this generalization, which satisfies criteria (<ref>)–(<ref>) (a proper proof will be discussed in a future work <cit.>). The generalization to additional parties and inputs is simple. Write, for example:

P̃_A(a|x) = (1/m^{n-1}) ∑_b,y,c,z,… P(a,b,c,…|x,y,z,…),

and similarly for the other marginal distributions P̃_…, by averaging uniformly over all inputs not fixed by the indices of the marginals P̃_…. Then, use these P̃_… to compute the correlators according to the straightforward multipartite generalizations of Eqs. (<ref>) and (<ref>). To cater for scenarios with nonbinary outputs, the correlators have to be generalized. We use the framework proposed in <cit.>, adapting slightly the notation to the present paper:

< A^i_x > = ∑_a c_ia P̃(a|x), 0 ≤ i ≤ k-2,

where c_ia = k δ_ia - 1 and we omitted the coefficient for i=k-1, as it is linearly dependent on the others. Note that < A^0_x > = < A_x > in the case of binary outcomes (k=2). Multipartite correlators are written, for example, as:

< A^i_x B^j_y > = ∑_a,b c_ia c_jb P̃(a,b|x,y), 0 ≤ i,j ≤ k-2.

Starting from a signaling distribution P(a,b,…|x,y,…), we compute the generalized correlators using the averaged marginals of Eq. (<ref>). Then we interpret the resulting correlators as a description of the nonsignaling distribution P_Π(a,b,…|x,y,…).
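The following sketch (our own; Python with the layout f[x, y, a, b]) transcribes the correlator-based algorithm just described for the simplest scenario; applied to the frequency of Eq. (<ref>) in the next subsection, it reproduces the projected distribution of Eq. (<ref>), including its negative entry:

import numpy as np

def project_ns(f):
    # Correlator-based projection onto the nonsignaling subspace.
    # The output may contain negative entries.
    sgn = np.array([1.0, -1.0])
    PA = 0.5 * f.sum(axis=(1, 3))                 # input-averaged marginal PA[x, a]
    PB = 0.5 * f.sum(axis=(0, 2))                 # input-averaged marginal PB[y, b]
    A = PA @ sgn                                  # <A_x>
    B = PB @ sgn                                  # <B_y>
    AB = np.einsum('xyab,a,b->xy', f, sgn, sgn)   # <A_x B_y>
    P = np.empty_like(f)
    for x in range(2):
        for y in range(2):
            for a in range(2):
                for b in range(2):
                    P[x, y, a, b] = 0.25 * (1 + sgn[a] * A[x] + sgn[b] * B[y]
                                            + sgn[a] * sgn[b] * AB[x, y])
    return P

# the signaling frequency of Eq. (B2) below
f = np.empty((2, 2, 2, 2))
f[0, 0] = [[0.3, 0.0], [0.1, 0.6]]
f[0, 1] = [[0.7, 0.0], [0.1, 0.2]]
f[1, 0] = [[0.5, 0.1], [0.1, 0.3]]
f[1, 1] = [[0.1, 0.6], [0.3, 0.0]]
print(project_ns(f)[1, 1, 1, 1])   # -0.075 = -3/40, cf. Eq. (B3)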
The whole process is a projection: all operations are linear, and the averaging [e.g., P(a|x,y) → P̃_A(a|x)] is injective for nonsignaling distributions. Importantly, the projection defined by this algorithm commutes with relabelings and thus corresponds to the three equivalent definitions given in Appendix <ref>.

§.§ An explicit example showing that the output of the projection method may be nonphysical

Although easy to compute, the projection method may give rise to coefficients of Π(f⃗) that are negative. Indeed, the output space of the projection method is the nonsignaling affine space 𝒜. We now give an explicit example to illustrate this fact. Consider some relative frequency f⃗ given in the compact matrix representation:

f⃗ = [[ f(a,b|0,0) f(a,b|0,1) ; f(a,b|1,0) f(a,b|1,1) ]] = (1/10) [[ 3 0 7 0 ; 1 6 1 2 ; 5 1 1 6 ; 1 3 3 0 ]],

where the entries in each block are arranged such that the value of a (b) increases downward (rightward). By applying the projection matrix Π given in Eq. (<ref>) to this signaling distribution, one obtains:

Π(f⃗) = [[ P(a,b|0,0) P(a,b|0,1) ; P(a,b|1,0) P(a,b|1,1) ]] = (1/40) [[ 18 2 20 0 ; 2 18 4 16 ; 19 7 7 19 ; 1 13 17 -3 ]],

which is easily seen to satisfy the nonsignaling conditions of Eq. (<ref>). However, this Π(f⃗) is evidently nonphysical, as its entry for x=y=a=b=1 is negative.

§ FURTHER DETAILS ABOUT THE DEVICE-INDEPENDENT LEAST-SQUARE METHOD

Here, we present the details of the device-independent analog of the least-square tomography method. Formally, the method amounts to finding the unique minimizer of the least-square problem: P⃗_LS = argmin_{P⃗∈𝒬} ||f⃗-P⃗||_2.

§.§ Equivalence to performing a projection and minimization of the 2-norm distance from Π(f⃗) to 𝒬

We now prove that the above optimization is equivalent to first performing the projection method, followed by a minimization of the 2-norm distance from Π(f⃗) to 𝒬. For convenience, we shall prove this equivalence, instead, for any converging superset relaxation <cit.> 𝒬_ℓ of the quantum set 𝒬. The desired equivalence then follows from the fact that lim_ℓ→∞ 𝒬_ℓ = 𝒬. Given the relative frequency f⃗, the least-square estimator (when 𝒬_ℓ is used to approximate 𝒬) satisfies P⃗_LS = argmin_{P⃗∈𝒬_ℓ} ||P⃗-f⃗||_2. First, note from Appendix <ref> that any given relative frequency f⃗ can be decomposed as f⃗ = f⃗_NS + f⃗_SI, where f⃗_NS ∈ 𝒜 is orthogonal to f⃗_SI. Similarly, for any P⃗∈𝒬_ℓ ⊂ 𝒩𝒮 ⊂ 𝒜, the difference P⃗-f⃗_NS lies in (the linear subspace parallel to) 𝒜 and is thus orthogonal to f⃗_SI. It then follows from the definition of the least-square method that

P⃗_LS = argmin_{P⃗∈𝒬_ℓ} ||P⃗-f⃗||_2 = argmin_{P⃗∈𝒬_ℓ} ||(P⃗-f⃗_NS)-f⃗_SI||_2 = argmin_{P⃗∈𝒬_ℓ} ||P⃗-f⃗_NS||_2 = argmin_{P⃗∈𝒬_ℓ} ||P⃗-Π(f⃗)||_2,

where the second-to-last equality follows from the orthogonality of P⃗-f⃗_NS and f⃗_SI and the fact that ||f⃗_SI||^2 is a constant in the 2-norm minimization, while the last equality follows from the equivalence between the second and the third definition of the projection method. Note that although Π(f⃗) is not necessarily in 𝒬_ℓ, if it happens that Π(f⃗)∈𝒬_ℓ, then the equivalent regularization shown in Appendix <ref> implies that P⃗_LS = Π(f⃗).

§.§ Formulation as a semidefinite program

When the quantum set 𝒬 is approximated by a superset relaxation 𝒬_ℓ that admits a semidefinite programming <cit.> characterization, the optimization problem of Eq. (<ref>) can also be solved as an SDP. To this end, note that the SDP characterization of 𝒬_ℓ is achieved in terms of some moment matrix χ that contains all the entries of P⃗ as some of its matrix elements. Using the characterization of positive semidefinite matrices via their Schur complements (see, e.g., Theorem 7.7 of <cit.>), we can then reformulate Eq.
(<ref>) (with 𝒬_ℓ approximating 𝒬) as:

min_{P⃗∈𝒬_ℓ, s} s   s.t.   [ s1   f⃗-P⃗ ; (f⃗-P⃗)^⊤   s ] ≽ 0,

where 1 is the identity matrix having the same dimension as the column vector f⃗-P⃗, and ^⊤ denotes transposition. Evidently, we see from Eq. (<ref>) that the equivalent optimization problem of Eq. (<ref>) now involves only an objective function and matrix inequality constraints that are linear in all their optimization variables: s, P⃗ and some other entries of χ (that cannot be estimated from experimental data). Thus, as claimed, the minimization of the 2-norm of f⃗-P⃗ over P⃗∈𝒬_ℓ can indeed be cast as an SDP.

§ FURTHER DETAILS ABOUT THE KULLBACK-LEIBLER DIVERGENCE AND THE CORRESPONDING REGULARIZATION METHOD

The Kullback-Leibler (KL) divergence:

D_KL(v⃗||w⃗) = ∑_i v_i log_2 ( v_i / w_i ),

is conventionally defined for unconditional probability distributions v⃗, w⃗ [such as P(a,b,x,y) and f(a,b,x,y)]. In the explicit examples studied, we fixed P(x,y) = f(x,y) = 1/|𝒳×𝒴|, where 𝒳×𝒴 denotes the set of input pairs. This allows us to keep our definition of the ML method valid when f(x,y), sampled from P(x,y) = constant, has itself statistical fluctuations, as reflected in our definition of the KL divergence given in Eq. (2) in the main text. Note that while the KL divergence is a statistical distance, it is not a metric, as it is asymmetric and violates the triangle inequality. To appreciate the relevance of this asymmetry, see <cit.>.

§.§ Connection to maximum likelihood

The equivalence between the minimization of the KL divergence D_KL(f⃗||P⃗) over P⃗∈𝒯 (for some set 𝒯) and the maximization of the likelihood of generating f⃗ from P⃗∈𝒯 can be seen as follows:

min_{P⃗∈𝒯} D_KL(f⃗||P⃗)
= min_{P⃗∈𝒯} ∑_a,b,x,y f(x,y) f(a,b|x,y) log_2 [ f(a,b|x,y)/P(a,b|x,y) ]
= κ + min_{P⃗∈𝒯} (-1/N) ∑_a,b,x,y N_a,b,x,y log_2 P(a,b|x,y)
= κ - (1/N) max_{P⃗∈𝒯} ∑_a,b,x,y log_2 P(a,b|x,y)^N_a,b,x,y
= κ - (1/N) max_{P⃗∈𝒯} log_2 ∏_a,b,x,y P(a,b|x,y)^N_a,b,x,y,

where κ := ∑_a,b,x,y f(x,y) f(a,b|x,y) log_2 f(a,b|x,y) is a constant of the optimization, N := ∑_a,b,x,y N_a,b,x,y, and in the second equality we have used the definition of the relative frequency f⃗ and the fact that f(x,y) = N_x,y/N. In the last line of Eq. (<ref>), the argument of the maximization is the log likelihood of observing N_a,b,x,y times the event labeled by (x,y,a,b) with probability P(a,b|x,y). Hence, we see that the minimization of the KL divergence is equivalent to maximizing the likelihood of generating f⃗ given P(a,b|x,y).

§.§ Formulation as a conic program

Here, we briefly explain how the device-independent ML method can be formulated and solved as a conic program (CP) with an exponential cone. Recall from the main text that for any given relative frequency f⃗, the ML estimator (with the quantum set approximated by 𝒬_ℓ) works by solving:

P⃗_ML = argmin_{P⃗∈𝒬_ℓ} ∑_a,b,x,y f(x,y) f(a,b|x,y) log_2 [ f(a,b|x,y)/P(a,b|x,y) ].

A conic program takes the canonical form:

min c⃗·x⃗,  s.t.  Ax⃗ = b⃗,  x⃗ ∈ K,

where K is a convex cone, such as the exponential cone:

K_exp = {(u,v,w) | v e^u/v ≤ w, v ≥ 0}.

After discarding the constant term and folding the factors into f(a,b,x,y) = f(x,y) f(a,b|x,y), the minimizer of Eq. (<ref>) can be obtained by solving:

min ∑_abxy f(a,b,x,y) log_2 [ 1/P(a,b|x,y) ]
s.t.  χ ≽ 0,
tr[ F_abxy χ ] = P(a,b|x,y) ∀ a,b,x,y,
tr[ G_k χ ] = 0,  k = 1,2,...,

where χ is the moment matrix associated with 𝒬_ℓ, while the F_abxy and G_k encode the equality constraints associated with the structure of this moment matrix. This problem has the conic form of Eq. (<ref>):

max ∑_abxy f(a,b,x,y) u_abxy
s.t.  χ ≽ 0,
e^u_abxy ≤ P(a,b|x,y) ∀ a,b,x,y,
tr[ F_abxy χ ] = P(a,b|x,y) ∀ a,b,x,y,
tr[ G_k χ ] = 0,  k = 1,2,...,
where the constraint (<ref>) is that of a positive semidefinite cone and the constraint (<ref>) is that of copies of the exponential cone (<ref>) (with dummy variables v_abxy = 1 for all a,b,x,y). Thus, we see that the problem of Eq. (<ref>) is indeed an exponential conic program.

§ DETAILS OF NUMERICAL INVESTIGATIONS

§.§ Explicit form of the quantum distributions P⃗ considered

Here, we provide the explicit form of the ideal quantum distributions P⃗ employed in our numerical studies and an explicit quantum strategy realizing each of these correlations. First, we consider P⃗ = P⃗_CHSH, with entries given by 1/4 + (-1)^a+b+xy √(2)/8. P⃗_CHSH is known to violate maximally the CHSH <cit.> Bell inequality:

I_CHSH = ∑_a,b,x,y=0^1 (-1)^a+b+xy P(a,b|x,y) ≤_L 2,

up to the limit of 2√(2) allowed by quantum theory (here ≤_L denotes the bound satisfied by all local models). P⃗_CHSH can be realized by both parties locally measuring cos(3π/8)σ_z + sin(3π/8)σ_x and cos(7π/8)σ_z + sin(7π/8)σ_x for, respectively, input 0 and 1 on the shared state |Ψ^+⟩ = (1/√(2))(|01⟩+|10⟩).

Next, we consider P⃗_90%CHSH, which consists of a mixture of P⃗_CHSH with the uniformly random distribution whose entries are all 1/4. This allows us to gain some insight into how the regularization methods fare for noisy quantum distributions, which are more readily accessible in the laboratory. Explicitly, the entries of this distribution are:

P_90%CHSH(a,b|x,y) = 1/4 + (9/10)(-1)^a+b+xy √(2)/8,

which can be realized by performing the measurements mentioned above on the mixed state ρ = (9/10)|Ψ^+⟩⟨Ψ^+| + (1/10)(1/4), where 1/4 is the maximally mixed two-qubit state.

For the purpose of device-independent property estimations, it is known <cit.> that 𝒬_1 generally does not provide a tight estimate of, e.g., the amount of negativity <cit.> present in the system. An example of this is given by the quantum distribution P⃗ = P⃗^τ_1.25, which arises from both parties locally measuring the state |ψ⟩ ≃ 0.91|00⟩ + 0.42|11⟩ in the bases of n⃗_0·σ⃗ and n⃗_1·σ⃗, where n⃗_0 ≃ (0.26, 0, -0.97) and n⃗_1 ≃ -(0.87, 0, 0.49) are Bloch vectors. Explicitly, using the matrix representation given just below Eq. (<ref>), P⃗^τ_1.25 reads as:

P⃗^τ_1.25 = [[ α_00 β_00 α_01 β_01 ; γ_00 ϵ_00 γ_01 ϵ_01 ; α_10 β_10 α_11 β_11 ; γ_10 ϵ_10 γ_11 ϵ_11 ]],

where α_00 ≃ 0.00, β_00 = γ_00 ≃ 0.01, α_01 = α_10 ≃ 0.00, β_01 = γ_10 ≃ 0.01, β_10 = γ_01 ≃ 0.21, α_11 ≃ 0.05, β_11 = γ_11 ≃ 0.16 and ϵ_xy = 1 - α_xy - β_xy - γ_xy for all x,y∈{0,1}. Note that P⃗^τ_1.25 can be used to demonstrate more nonlocality with less entanglement <cit.>, as was achieved in <cit.>. In particular, being on the boundary of 𝒬, P⃗^τ_1.25 maximally violates the τ = 5/4 version of the Bell inequality from <cit.>:

I_τ := ∑_x,y (-1)^xy P(0,0|x,y) - τ ∑_a [P(a,0|1,0) + P(0,a|0,1)] ≤_L 0,

which is provably <cit.> satisfied by all finite-dimensional maximally entangled states whenever 1/√(2) + 1/2 ≤ τ ≤ 3/2.

Finally, we also consider the correlation P⃗_MDL of <cit.>:

P_MDL(a,b|x,y) = (1/12)(8ab+1)δ_x,0 δ_y,0 + (1/3)(1-δ_a,0 δ_b,0)δ_xy,1 + (1/6)(3ab+1)(1-δ_a,x δ_b,y)δ_x⊕y,1,

which can be realized with both parties locally measuring σ_x and σ_z for, respectively, input 0 and 1 on the shared state |Ψ⟩ = (1/√(3))(|01⟩+|10⟩-|11⟩). P⃗_MDL can be used to demonstrate the Hardy paradox <cit.>, as well as a violation of measurement-dependent locality <cit.> (MDL), e.g., via the following MDL inequality:

I_MDL = l P(0,0,0,0) - h [P(0,1,0,1) + P(1,0,1,0) + P(1,1,1,1)] ≤_MDL 0,

where h > l > 0. Note also that, as opposed to P⃗_CHSH, which lies on the boundary of 𝒬 but strictly inside 𝒩𝒮, P⃗_MDL lies on the boundary of both 𝒬 and 𝒩𝒮. Experimental realizations of a correlation analogous to P⃗_MDL have been achieved in <cit.>.

§.§ Rate of convergence to the true distribution

We provide in Fig.
<ref> the plots of the mean value of the 1-norm deviation ||P⃗_Reg(f⃗)-P⃗||_1 between the regularized distribution P⃗_Reg(f⃗) and the various P⃗ discussed above, as a function of the number of trials N = 10^2, 10^3, …, 10^10, for the regularization methods discussed in the main text and two additional regularization methods discussed in Appendix <ref>. For ease of comparison, we also include in each of these figures the corresponding plot for f⃗. Notice that, from some basic numerical fitting, one finds that for all these methods the mean value of ||P⃗_Reg(f⃗)-P⃗||_1, as with the mean value of ||f⃗-P⃗||_1, diminishes at a rate of 1/√(N). In addition, since the 1-norm upper bounds all other p-norms with p an integer greater than or equal to 2, our results for the 1-norm deviation also upper bound the deviation when measured in terms of other p-norms.

§.§ Bias and mean squared errors of estimates

To gain further insight into the bias and the mean squared errors of the various regularization methods, we provide in Fig. <ref> and Fig. <ref> our simulation results for the mean Bell-inequality violation and the corresponding mean squared error based on the regularized distributions, as a function of the number of trials N = 10^2, 10^3, …, 10^10. Our results clearly suggest that the bias in the Bell value obtained from Π(f⃗) is essentially negligible,[As the projection method involves a linear transformation of f⃗, its bias is in theory identically zero. The nonzero bias that we observe in this case arises from the fact that our numerical simulations involve only a finite number of samples (10^4).] whereas that obtained from P⃗_LS and P⃗_ML—for extremal P⃗—systematically underestimates (on average) the true value. However, as can be seen from the corresponding insets, such underestimations rapidly shrink with N, diminishing at a rate of the order of 1/√(N). For the case of nonextremal P⃗, such as P⃗_90%CHSH, we see that the bias present is essentially of the same order as that given by the projection method, which is basically negligible already for small N. Similarly, as can be seen from Fig. <ref>, the mean squared error rapidly decreases with N, at a rate of the order of 1/N, in all the cases investigated. In particular, it is worth noting that for the case of P⃗_MDL, the mean squared error present for the ML method is approximately three orders of magnitude less than those given by all the other methods. This superiority of the ML method over the others is, to some extent, anticipated from the fact that the KL divergence is superior as a statistical distance over, e.g., the total variation distance in terms of discriminating probability distributions that contain zero entries, such as P⃗_MDL (see page 28 of the preprint version of <cit.> for a discussion). In general, the biased nature of the few physical point estimators considered in Fig. <ref> can be rigorously shown. See Appendix <ref> for a proof.

§ PROOF OF CERTAIN PROPERTIES OF THE ESTIMATORS

§.§ Uniqueness of estimators

We give here a proof that the output of certain regularization methods, P⃗_Reg(f⃗), is determined uniquely by f⃗. To this end, we first recall from Theorem 8.3 of <cit.> that the minimizer of a strictly convex function over a convex set is unique. To see that the LS method provides a unique estimate, let us first note that the Euclidean norm squared, (||x⃗||_2)^2, is strictly convex in x⃗, since its Hessian is two times the identity matrix. Moreover, min_{P⃗∈𝒬_ℓ} (||f⃗-P⃗||_2)^2 and min_{P⃗∈𝒬_ℓ} ||f⃗-P⃗||_2 share exactly the same set of minimizer(s). Thus, the output of the LS method with P⃗∈𝒯, for any convex set 𝒯 ⊆ 𝒫, is necessarily unique.
Likewise, since -log_2(x) is a strictly convex function of x, the KL divergence D_KL(f⃗||P⃗), viewed as a function of P⃗∈𝒯, is also strictly convex. In other words, the output of the regularization via both the LS method and the ML method is unique. In the main text, we have focused on 𝒯 being some superset relaxation 𝒬_ℓ of 𝒬, but this uniqueness clearly applies also to the nonsignaling polytope 𝒩𝒮, thus allowing one to show the uniqueness of the LS and the ML estimators even when the target set is 𝒩𝒮 (see Appendix <ref>).

§.§ Bias of estimators

We give here a proof of the biased nature of the physical estimators provided by the ML method and the LS method. To this end, we first introduce the following definition for identifying correlations that may give rise to the same set of relative frequencies.

Definition: Let P⃗_1 and P⃗_2 be two correlation vectors. We say that P⃗_1 and P⃗_2 are strictly nonorthogonal if, for all input combinations, there is at least one combination of outcomes where the corresponding probability distributions of both P⃗_1 and P⃗_2 are nonvanishing. Explicitly, in the bipartite scenario, the nonnegativity of probabilities means that P⃗_1 and P⃗_2 are strictly nonorthogonal if and only if ∑_a,b P_1(a,b|x,y) P_2(a,b|x,y) > 0 for all x,y.

The biased nature of P⃗_ML and P⃗_LS is then an immediate consequence of Proposition <ref>, which we prove as follows. We will give a proof by contradiction. The steps follow closely those given in the proof of the main proposition of <cit.>. For simplicity, we give the proof in a finite bipartite Bell scenario; the generalization to more complicated but still finite Bell scenarios follows analogously. Suppose a Bell experiment is carried out with P⃗_j governing the underlying joint distribution of measurement outcomes. If the experiment involves only a finite number of trials N, and P⃗_j is nondeterministic, one ends up with a finite (nonsingleton) set of possible relative frequencies f⃗_i, indexed by i. Let ℱ_j = {f⃗_i | q_P⃗_j(f⃗_i) > 0} be the set of all such relative frequencies obtainable from P⃗_j, where q_P⃗_j(f⃗_i) ≥ 0 is the probability of observing the relative frequency f⃗_i given that the underlying distribution is P⃗_j.

By assumption, one can find two strictly nonorthogonal extremal distributions of 𝒯, say, P⃗_1 and P⃗_2, with P⃗_1 ≠ P⃗_2. Due to statistical fluctuations and the fact that P⃗_1 and P⃗_2 are strictly nonorthogonal, the set of relative frequencies simultaneously obtainable from both P⃗_1 and P⃗_2 is not empty, i.e., ℱ_1 ∩ ℱ_2 ≠ ∅. Now, suppose that a regularization procedure gives an unbiased estimator. The expected distribution returned by the regularization scheme is then exactly the true distribution:

𝔼_P⃗_j(P⃗_Reg) = ∑_f⃗∈ℱ_j q_P⃗_j(f⃗) P⃗_Reg(f⃗) = P⃗_j  ∀ P⃗_j∈𝒯,

where ∑_f⃗∈ℱ_j q_P⃗_j(f⃗) = 1. In particular, since P⃗_1∈𝒯, we have ∑_f⃗∈ℱ_1 q_P⃗_1(f⃗) P⃗_Reg(f⃗) = P⃗_1. By assumption, P⃗_1 is extremal in 𝒯 and P⃗_Reg(f⃗)∈𝒯 for all f⃗∈ℱ_1. Thus, the unbiased nature of P⃗_Reg(f⃗), see Eq. (<ref>), implies that P⃗_Reg(f⃗) = P⃗_1 for all f⃗∈ℱ_1. Exactly the same line of reasoning can be applied to P⃗_2 to give P⃗_Reg(f⃗) = P⃗_2 for all f⃗∈ℱ_2. But, as argued above, there exists f⃗ = f⃗_c such that f⃗_c∈ℱ_1 and f⃗_c∈ℱ_2. This implies that P⃗_1 = P⃗_Reg(f⃗_c) = P⃗_2, which contradicts our assumption that P⃗_1 ≠ P⃗_2. Hence, given the premise that there exist extremal P⃗_1, P⃗_2∈𝒯 that are strictly nonorthogonal, the estimator P⃗_Reg(f⃗), which is constrained to be a member of 𝒯, must be biased.

To complete the proof that P⃗_ML and P⃗_LS are biased, it suffices to choose P⃗_1 = P⃗_CHSH and P⃗_2 as the deterministic point with P_2(a,b|x,y) = δ_a,x δ_b,y. Indeed, both these quantum distributions are known to be extremal in 𝒬 (as well as in any of its relaxations 𝒬_ℓ).
Evidently, since no two extreme points of the nonsignaling polytope 𝒩𝒮 in the simplest Bell scenario are strictly nonorthogonal, Proposition <ref> cannot be used to show that a regularization method having 𝒩𝒮 as its target is biased. Nonetheless, we conjecture that such estimators (which include all the other unique estimators given in Table <ref>) must also be biased.

§ DEVICE-INDEPENDENT NEGATIVITY ESTIMATION, OPTIMIZED WITNESSES AND BELL INEQUALITY

In the scenarios we studied, the sets 𝒬_1, 𝒬_2, …, 𝒬_4 correspond to outer approximations of the quantum set 𝒬 using moment matrices of (local) level ℓ = 1, 2, … (see <cit.> for details). As shown in <cit.>, the same relaxations provide a way to lower bound the amount of entanglement present in a quantum system. We discuss here a geometrical formulation of their method. Let 𝒬^≤ν be the set of all correlations P⃗ obtained from Born's rule using a state ρ of maximal negativity ν. Following <cit.>, we write 𝒬_ℓ^≤ν ⊇ 𝒬^≤ν for the corresponding semidefinite relaxation of level ℓ. Then, given correlations P⃗ and approximation level ℓ, a lower bound on the negativity of ρ can be obtained by computing the supremum of the ν such that P⃗∉𝒬_ℓ^≤ν.

What are the requirements on the distributions used as input to the negativity estimation algorithm? We first remark that 𝒬_ℓ^≤ν is a subset of the nonsignaling set, so the negativity estimation algorithm can never be performed on the relative frequency f⃗, but should instead employ one of the regularized distributions P⃗_Reg(f⃗). Secondly, the set 𝒬_ℓ^≤ν is a subset of 𝒬_ℓ, so that f⃗ should be regularized to a target set 𝒬_ℓ' with ℓ' ≥ ℓ. Otherwise, we run the risk of having P⃗_Reg(f⃗)∈𝒬_ℓ'∖𝒬_ℓ, in which case the semidefinite program will turn out to be infeasible. In practice, this discrepancy could happen even when regularizing to ℓ' = ℓ, due to insufficient numerical precision, in which case a small amount of white noise can be added to restore feasibility. For both ℓ = 1,2, this typical fraction of white noise is found to be of the order of 10^-8 or less.

In our negativity estimation shown in Fig. <ref> of the main text, we have employed 𝒬_2 as our approximation to the quantum set 𝒬. This may seem unnecessary, as 𝒬_1 <cit.> is known to be a pretty good approximation of the quantum set. However, as in the case of obtaining a negativity bound from the CHSH Bell-inequality violation <cit.>, one finds that 𝒬_1 generally does not provide a tight negativity bound. This becomes evident by plotting 𝒬_ℓ and 𝒬_ℓ^≤ν around the distribution P⃗^τ_1.25, as shown in Fig. <ref>. There, we see that although P⃗^τ_1.25 lies visually on the boundary of both 𝒬_1 and 𝒬_2, the approximations 𝒬_ℓ^≤ν of the bounded-negativity sets differ significantly, to the point that, in the considered slice, the approximation level ℓ=1 is unable to certify any point with a negativity higher than 0.3.

Let us also clarify here the connection between the optimized device-independent witness obtained by running a DI algorithm and an ordinary Bell inequality. By solving an SDP analogous to that given in <cit.> with a regularized distribution P⃗_Reg(f⃗), the dual of the SDP gives an optimized device-independent witness (a Bell-like inequality) of the form:[Extracting this witness follows a procedure analogous to that given in Appendix A of <cit.>.]

∑_a,b,x,y β^xy_ab P(a,b|x,y) ≤_N(ρ)≤α I_α,

where α ≥ 0 and I_α are some fixed numbers, the bound holds for every quantum distribution arising from a state with N(ρ) ≤ α, and ∑_a,b,x,y β^xy_ab P_Reg(a,b|x,y) > I_α. Inequality (<ref>) is an (optimized) device-independent negativity witness in the sense that, for any (nonsignaling) P⃗, if it gives rise to the left-hand-side of Eq.
(<ref>) a value that is within the limit allowed by quantum theory and is greater than I_α, then whatever quantum state gives rise to P⃗ must have a negativity greater than α. Thus, assuming that P⃗_Reg(f⃗) is a reasonable estimate of the underlying distribution, we see that it gives a negativity estimate of the underlying state that is at least α.

Alternatively, <cit.> provides a way to estimate negativity, in a device-independent manner, by starting from a given Bell-inequality violation: given any vector of coefficients β_ab^xy, one can compute, for each value I = ∑_abxy β_ab^xy P(a,b|x,y), a corresponding lower bound α on the negativity. For example, when one uses the coefficients of the CHSH inequality as β_ab^xy, we obtain the negativity bound mentioned in the main text: N(ρ) ≥ (I_CHSH-2)/(4√(2)-4) for I_CHSH∈[2,2√(2)], which gives a monotonic relation between α and I. Note that only when I_CHSH is greater than the local bound 2 can one hope to obtain a nonzero lower bound on the negativity. Also, I_CHSH > 2√(2) is not quantum realizable and thus cannot be used for the device-independent estimation of negativity. In the same spirit, any vector of coefficients β_ab^xy can be converted to a family of witnesses. In this process, one can even reuse the optimized Bell inequality given in (<ref>), providing a way to interpret values of I ≥ I_α for which the coefficients β_ab^xy were originally computed.

§ SOME OTHER PLAUSIBLE REGULARIZATION METHODS AND THEIR PROPERTIES

Here, we discuss a few other plausible regularization methods with a target set 𝒯∈{𝒬,𝒩𝒮}. Properties of these methods are summarized in Table <ref>.

§.§ Nearest quantum or nonsignaling approximation via p-norms

Obviously, one can regularize a given f⃗ to 𝒬 (resp. 𝒩𝒮) by determining the nearest quantum (resp. nonsignaling) approximation NQA (resp. NNA) of f⃗ to 𝒬 (resp. 𝒩𝒮), with a metric induced by any of the p-norms:

P⃗_NQA_p(f⃗) = argmin_{P⃗∈𝒬} ||f⃗-P⃗||_p,
P⃗_NNA_p(f⃗) = argmin_{P⃗∈𝒩𝒮} ||f⃗-P⃗||_p.

When p=2, NQA_2 gives the LS method described in Appendix <ref>. By the same token, we shall refer to NNA_2 of Eq. (<ref>) as LS_𝒩𝒮. We can rewrite it as the following second-order cone program (SOCP) <cit.>:

min s
s.t.  ||f⃗-P⃗||_2 ≤ s,
d_i ≤ c⃗_i·P⃗  ∀ i=1,2,…,m,

where the inequalities in the last line are positivity constraints used to define the nonsignaling polytope. Hence, the LS_𝒩𝒮 regularization can also be efficiently computed using an SOCP solver. Note that the inequality constraints in the last line of Eq. (<ref>) can be replaced by imposing the nonsignaling constraints of Eq. (<ref>). The regularization method LS_𝒩𝒮 is then evidently a least-square minimization problem with linear equality constraints. This regularization method has previously been implemented in <cit.> as part of their data analysis. For the case of p=1 (previously considered in <cit.>) or p=∞, the optimization problems of Eqs. (<ref>) and (<ref>) can be cast, respectively, as a semidefinite program and a linear program. For both values of p, one can easily find an example of f⃗ where some P⃗_NQA_p(f⃗) [P⃗_NNA_p(f⃗)] is Bell-inequality violating but some other is not. For explicit examples, see the Supplemental Material <cit.>. Similarly, the variant of NNA_1 employed in <cit.> is known to give nonunique estimators <cit.>.

§.§ Minimizing the KL divergence to 𝒩𝒮

As with the LS_𝒩𝒮 method, one can also consider minimizing the KL divergence from the nonsignaling polytope 𝒩𝒮 to some given relative frequency f⃗:

min_{P⃗∈𝒩𝒮} ∑_a,b,x,y f(x,y) f(a,b|x,y) log_2 [ f(a,b|x,y)/P(a,b|x,y) ].

We shall refer to the corresponding regularization method as the ML_𝒩𝒮 method.
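A sketch of the LS_𝒩𝒮 regularization in the same CVXPY setting as before (reusing our illustrative ns_constraints helper from the earlier ML sketch; again our own code, not the authors') is:

import cvxpy as cp
import numpy as np

def ls_regularize_ns(f):
    # Nearest nonsignaling approximation in 2-norm (the LS_NS method),
    # solved as a second-order cone program.
    P = cp.Variable((4, 4), nonneg=True)
    objective = cp.Minimize(cp.norm(P - f.reshape(4, 4), 'fro'))
    cp.Problem(objective, ns_constraints(P)).solve()
    return np.asarray(P.value).reshape(2, 2, 2, 2)

Swapping this 2-norm objective for the KL objective of the ML sketch gives precisely the ML_𝒩𝒮 method defined above.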
Note that, as with the ML method, performing the ML_𝒩𝒮 regularization amounts to solving a conic program, which can be achieved using, e.g., the SCS solver. Although not explicitly discussed as a regularization method, the ML_𝒩𝒮 method has been employed in <cit.> and was noted to help in the analysis of the hypothesis testing of local causality, as discussed in Appendix 2 of <cit.> [see Eqs. (A1) and (A2) therein]. Conceivably, the ML method may help in the analysis of Bell tests further.

[Rosset:2012] D. Rosset, R. Ferretti-Schöbitz, J.-D. Bancal, N. Gisin, and Y.-C. Liang, Phys. Rev. A 86, 062325 (2012).
[Moroder2013b] T. Moroder, M. Kleinmann, P. Schindler, T. Monz, O. Gühne, and R. Blatt, Phys. Rev. Lett. 110, 180401 (2013).
[vanEnk:2013] S. J. van Enk and R. Blume-Kohout, New J. Phys. 15, 025024 (2013).
[Hradil1997] Z. Hradil, Phys. Rev. A 55, R1561(R) (1997).
[Blume-Kohout2010] R. Blume-Kohout, New J. Phys. 12, 043034 (2010).
[arXiv:1202.5270] R. Blume-Kohout, eprint quant-ph arXiv:1202.5270 (2012).
[Christandl2012] M. Christandl and R. Renner, Phys. Rev. Lett. 109, 120403 (2012).
[Sugiyama2013] T. Sugiyama, P. S. Turner, and M. Murao, Phys. Rev. Lett. 111, 160406 (2013).
[Shang2013] J. Shang, H. K. Ng, A. Sehrawat, X. Li, and B.-G. Englert, New J. Phys. 15, 123026 (2013).
[Faist2016] P. Faist and R. Renner, Phys. Rev. Lett. 117, 010404 (2016).
[arXiv:1405.5350:brief] J. Shang, H. K. Ng, and B.-G. Englert, eprint quant-ph arXiv:1405.5350 (2014).
[Schwemmer2015] C. Schwemmer, L. Knips, D. Richart, H. Weinfurter, T. Moroder, M. Kleinmann, and O. Gühne, Phys. Rev. Lett. 114, 080403 (2015).
[Scarani2012] V. Scarani, Acta Phys. Slovaca 62, 347 (2012).
[Brunner:RMP] N. Brunner, D. Cavalcanti, S. Pironio, V. Scarani, and S. Wehner, Rev. Mod. Phys. 86, 419 (2014).
[Mayers1998] D. Mayers and A. Yao, in Proceedings of the 39th Annual Symposium on Foundations of Computer Science, Proc. No. 98CB36280 (1998), pp. 503-509.
[Mayers2004] D. Mayers and A. Yao, Quant. Inf. Comput. 4, 273 (2004).
[Ekert1991] A. K. Ekert, Phys. Rev. Lett. 67, 661 (1991).
[Acin2007] A. Acín, N. Brunner, N. Gisin, S. Massar, S. Pironio, and V. Scarani, Phys. Rev. Lett. 98, 230501 (2007).
[Vazirani2014] U. Vazirani and T. Vidick, Phys. Rev. Lett. 113, 140501 (2014).
[ColbeckPhD] R. Colbeck, Quantum And Relativistic Protocols For Secure Multi-Party Computation, Ph.D. thesis, University of Cambridge (2006).
[Pironio:2010aa] S. Pironio et al., Nature (London) 464, 1021 (2010).
[Colbeck2011] R. Colbeck and A. Kent, J. Phys. A: Math. Theo. 44, 095305 (2011).
[Bell1964] J. S. Bell, Physics 1, 195 (1964).
[Werner1989] R. F. Werner, Phys. Rev. A 40, 4277 (1989).
[Moroder2013] T. Moroder, J.-D. Bancal, Y.-C. Liang, M. Hofmann, and O. Gühne, Phys. Rev. Lett. 111, 030501 (2013).
[Toth2015] G. Tóth, T. Moroder, and O. Gühne, Phys. Rev. Lett. 114, 160501 (2015).
[Chen2016] S.-L. Chen, C. Budroni, Y.-C. Liang, and Y.-N. Chen, Phys. Rev. Lett. 116, 240401 (2016).
[Cavalcanti2016] D. Cavalcanti and P. Skrzypczyk, Phys. Rev. A 93, 052112 (2016).
[Zhang2011] Y. Zhang, S. Glancy, and E. Knill, Phys. Rev. A 84, 062118 (2011).
[Zhang2013] Y. Zhang, S. Glancy, and E. Knill, Phys. Rev. A 88, 052119 (2013).
[Bancal2014] J.-D. Bancal, L. Sheridan, and V. Scarani, New J. Phys. 16, 033011 (2014).
[Nieto2014] O. Nieto-Silleras, S. Pironio, and J. Silman, New J. Phys. 16, 013035 (2014).
[Bancal2011] J.-D. Bancal, N. Gisin, Y.-C. Liang, and S. Pironio, Phys. Rev. Lett. 106, 250404 (2011).
[LiangPRL2015] Y.-C. Liang, D. Rosset, J.-D. Bancal, G. Pütz, T. J. Barnea, and N. Gisin, Phys. Rev. Lett. 114, 190401 (2015).
[Baccari2017] F. Baccari, D. Cavalcanti, P. Wittek, and A. Acín, Phys. Rev. X 7, 021042 (2017).
X 7, 021042 (2017).Wiseman2007 H. M. Wiseman, S. J. Jones, and A. C. Doherty, Phys. Rev. Lett. 98, 140402 (2007).Brunner2008 N. Brunner, S. Pironio, A. Acín, N. Gisin, A. A. Méthot, and V. Scarani, Phys. Rev. Lett. 100, 210503 (2008).NavascuesPRX M. Navascués, G. de la Torre, T. Vértesi, Phys. Rev. X 4, 011011 (2014).NavascuesPRL2015 M. Navascués and T. Vértesi, Phys. Rev. Lett. 115, 020501 (2015).Yang2014 T. H. Yang, T. Vértesi, J.-D. Bancal,V. Scarani, M. Navascués, Phys. Rev. Lett. 113, 040401 (2014).BancalPRA2015 J.-D. Bancal, M. Navascués, V. Scarani, T. Vértesi, and T. H. Yang, Phys. Rev. A 91, 022115 (2015).Navascues2007 M. Navascués, S. Pironio, andA. Acín,98, 010401 (2007). Navascues2008a M. Navascués, S. Pironio, andA. Acín, New J. Phys., 10, 073013 (2008).Popescu1994 S. Popescu and D. Rohrlich, Found. Phys. 24, 379 (1994).Barrett2005 J. Barrett, N. Linden, S. Massar, S. Pironio, S. Popescu, and D. Roberts, Phys. Rev. A 71, 022101 (2005).Pironio2013 S. Pironio and S. Massar, Phys. Rev. A 87, 012336 (2013).arXiv:1611.00352:brief O. Nieto-Silleras, C. Bamps, J. Silman, and S. Pironio, New J. Phys. 20, 023049 (2018)arXiv:1702.05178:brief P. Bierhorst, E. Knill, S. Glancy, A. Mink, S. Jordan, A. Rommal, Y.-K. Liu, B. Christensen, S. W. Nam, and L. K. Shalm, eprint quant-ph arXiv:1702.05178 (2017).Knill2017E. Knill, Y. Zhang, and P. Bierhorst, eprint quant-ph arXiv:1709.06159 (2017).Dupuis:1607.01796:brief F. Dupuis, O. Fawzi, and R. Renner, eprint quant-ph arXiv:1607.01796 (2016).Rotem2017R. Arnon-Friedman, R. Renner, and T. Vidick, eprint quant-ph arXiv:1607.01797 (2016).Schwarz2016 S. Schwarz, B. Bessire, A. Stefanov, and Y.-C. Liang, New J. Phys. 18, 035001 (2016).Bernhard2014 C. Bernhard, B. Bessire, A. Montina, M. Pfaffhauser, A. Stefanov, and S. Wolf, J. Phys. A: Math. Theo. 47, 424013 (2014).Renou2017 M. O. Renou, D. Rosset, A. Martin, and N. Gisin, J. Phys. A: Math. Theo. 50, 255301 (2017).Beck2018 Amir Beck, Introduction to Nonlinear Optimization: Theory, Algorithms, and Applications with MATLAB (Society for Industrial and Applied Mathematics, 2014). Banaszek1999K. Banaszek, G. M. D'Ariano, M. G. A. Paris, and M. F. Sacchi, Phys. Rev. A 61, 010304(R) (1999).vanDam2005 W. van Dam, R. D. Gill, and P. D. Grunwald, IEEE Trans. Inf. Theory 51, 2812-2835 (2005); eprint quant-ph: arXiv:0307125 (2005).Acin2005 A. Acín, R. Gill, and N. Gisin, Phys. Rev. Lett. 95, 210402 (2005).Kullback1951 S. Kullback and R. A. Leibler, Ann. Math. Statist. 22, 79 (1951).Cover:Book T. M. Cover and J. A. Thomas, Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing) (Wiley-Interscience, 2006).Doherty2008 A. C. Doherty, Y.-C. Liang, B. Toner, and S. Wehner, in 23rd Annual IEEE Conference on Computational Complexity, 2008, CCC'08 (IEEE, Los Alamitos, CA, 2008), pp. 199-210.Vallins2017 J. Vallins, A. B. Sainz, and Y.-C. Liang, Phys. Rev. A 95, 022111 (2017).Navascues2015 M. Navascués, Y. Guryanova, M. J. Hoban, and A. Acín, Nat. Commun. 6, 6288 EP (2015).Boyd2004Book S. Boyd and L. Vandenberghe, Convex Optimization (Cambridge University Press, New York, NY, USA, 2004).Shao2003Book J. Shao, Mathematical Statistics (Springer, New York, NY, USA, 2003).Christensen2015 B. G. Christensen, Y.-C. Liang, N. Brunner, N. Gisin, and P. G. Kwiat, Phys. Rev. X 5, 041052 (2015).Vidal2002G. Vidal and R. F. Werner, Phys. Rev. A 65, 032314 (2002).CHSH J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, Phys. Rev. Lett. 23, 880 (1969).hensen_loophole-free_2015B. 
[shalm_strong_2015] L. K. Shalm et al., Phys. Rev. Lett. 115, 250402 (2015).
[giustina_significant-loophole-free_2015] M. Giustina et al., Phys. Rev. Lett. 115, 250401 (2015).
[rosenfeld_event-ready_2017] W. Rosenfeld, D. Burchardt, R. Garthoff, K. Redeker, N. Ortegel, M. Rau, and H. Weinfurter, Phys. Rev. Lett. 119, 010402 (2017).
[Cavalcanti:2015aa] D. Cavalcanti, P. Skrzypczyk, G. H. Aguilar, R. V. Nery, P. H. S. Ribeiro, and S. P. Walborn, Nat. Commun. 6, 7941 (2015).
[Collins2004] D. Collins and N. Gisin, J. Phys. A: Math. Theo. 37, 1775 (2004).
[Wills:Unpublished] P. Wills, E. Knill, K. Coakley, and Y. Zhang, eprint quant-ph arXiv:1709.04078 (2017).
[Rosset:Unpublished] D. Rosset, M.-O. Renou, J.-D. Bancal, and N. Gisin (in preparation).
[Bancal2012] J.-D. Bancal, C. Branciard, N. Brunner, N. Gisin, and Y.-C. Liang, J. Phys. A: Math. Theo. 45, 125301 (2012).
[Bancal2010] J.-D. Bancal, N. Gisin, and S. Pironio, J. Phys. A: Math. Theo. 43, 385303 (2010).
[Horn1985Book] R. A. Horn and C. R. Johnson, eds., Matrix Analysis (Cambridge University Press, New York, NY, USA, 1986).
[Junge2011] M. Junge and C. Palazuelos, Commun. Math. Phys. 306, 695 (2011).
[Liang2011] Y.-C. Liang, T. Vértesi, and N. Brunner, Phys. Rev. A 83, 022108 (2011).
[Vidick2011] T. Vidick and S. Wehner, Phys. Rev. A 83, 052310 (2011).
[Putz:NJP] G. Pütz and N. Gisin, New J. Phys. 18, 055006 (2016).
[Hardy1993] L. Hardy, Phys. Rev. Lett. 71, 1665 (1993).
[Puetz2014] G. Pütz, D. Rosset, T. J. Barnea, Y.-C. Liang, and N. Gisin, Phys. Rev. Lett. 113, 190402 (2014).
[Aktas2015] D. Aktas, S. Tanzilli, A. Martin, G. Pütz, R. Thew, and N. Gisin, Phys. Rev. Lett. 114, 220404 (2015).
[Putz2016] G. Pütz, A. Martin, N. Gisin, D. Aktas, B. Fedrici, and S. Tanzilli, Phys. Rev. Lett. 116, 010401 (2016).
[MatFile] See Supplemental Material at <https://journals.aps.org/pra/supplemental/10.1103/PhysRevA.97.032309> for the MATLAB data file “ExamplesOfNonuniqueEstimators.mat."
[Schwarz:Private] S. Schwarz (private communication).
http://arxiv.org/abs/1705.09245v5
{ "authors": [ "Pei-Sheng Lin", "Denis Rosset", "Yanbao Zhang", "Jean-Daniel Bancal", "Yeong-Cherng Liang" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170525161412", "title": "Device-independent point estimation from finite data and its application to device-independent property estimation" }
Domenico Tallarico^1,2 [Corresponding author. Office 405, Department of Mathematical Sciences, The University of Liverpool, L69 7ZL, Liverpool, United Kingdom.], Alessio Trevisan^2, Natalia V. Movchan^1, Alexander B. Movchan^1

^1 Department of Mathematical Sciences, Peach St, University of Liverpool, Liverpool L69 7ZL, United Kingdom.
^2 EnginSoft SPA, Via Giambellino 7, Padova 35129, Italy.

Edge waves and localisation in lattices containing tilted resonators
=======================================================================

The paper presents the study of waves in a structured geometrically chiral solid. Special attention is given to the analysis of the Bloch-Floquet waves in a doubly periodic high-contrast lattice containing tilted resonators. Dirac-like dispersion of Bloch waves in the structure is identified, studied and applied to wave-guiding and wave-defect interaction problems. The work is extended to transmission problems and models of fracture, where localisation and edge waves occur. The theoretical derivations are accompanied by numerical simulations and illustrations.

§ INTRODUCTION

We introduce a novel concept of a multi-scale shield/filter, which couples pressure waves and rotational motion in an elastic lattice. Such a structure incorporates high-contrast tilted resonators, and their dynamic response is linked to the rotational wave forms. The interest in elastic waves in chiral media is high, as reflected by the series of papers on micro-structured media which incorporate active gyroscopes <cit.> and <cit.>. Waves in such periodic structures possess fascinating, sometimes counter-intuitive, properties. These include filtering, polarisation, as well as directional preference and/or localisation. The present paper, in contrast with <cit.>, deals with a lattice that does not include any active chiral mechanical elements, such as gyroscopic inclusions or a gyroscopic foundation. However, the geometry of the multi-structure considered here is chiral, and this, in turn, contributes to the coupling between the pressure and shear waves supported by the lattice. The Bloch-Floquet waves in doubly periodic structures with tilted lattice resonators, and their dispersion properties, were studied in <cit.>. Other geometrically chiral lattices were studied in <cit.> in the continuum approximation. When dealing with effective properties of periodic media, high-frequency homogenisation techniques <cit.> can be used. The notion of the multi-scale multi-structure <cit.> was used in <cit.> to approximate the frequencies of standing waves of a multi-scale periodic structure with resonators, consisting of discs connected with the ambient medium by thin ligaments. In particular, the issue of degeneracies was noted for configurations of resonators with special inclinations of the thin ligaments. The influence of the micro-structure on a dynamic crack in a lattice was discussed in <cit.>. For a transient propagating crack, the crack edge emanates waves, which interact with the ambient medium. Even in subsonic regimes, the problem of a crack advancing in a micro-structured solid is a challenge. Analytical approaches applicable to cracks propagating at an average constant speed are presented in <cit.>. We draw the attention of the reader to the papers <cit.>, which addressed the formation of unidirectional edge waves in active chiral elastic systems by achieving time-reversal symmetry breaking.
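As a numerical companion to the geometry introduced in Section <ref> below, the following sketch (our own; the symbols follow the definitions given there, and the snippet is illustrative rather than part of the original analysis) assembles the lattice vectors, the high-symmetry points of the Brillouin zone, and the rotation and projector constructions used in the governing equations:

import numpy as np

L = 1.0                       # nearest-neighbour spacing
t1 = L * np.array([1.0, 0.0])
t2 = (L / 2) * np.array([1.0, np.sqrt(3)])

# high-symmetry points of the first Brillouin zone
Gamma = np.array([0.0, 0.0])
M = (2 * np.pi / (np.sqrt(3) * L)) * np.array([1 / np.sqrt(3), 1.0])
X = (2 * np.pi / (np.sqrt(3) * L)) * np.array([0.0, 1.0])

def R(theta):
    # clockwise rotation matrix R_theta
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

def projector(v):
    # dyadic projector v v^T / ||v||^2, as used for tau_i and Pi_i
    return np.outer(v, v) / (v @ v)

# reciprocal-space consistency check: t_i . g_j = 2 pi delta_ij
g1 = 2 * np.pi * np.array([1.0, -1 / np.sqrt(3)]) / L
g2 = 4 * np.pi * np.array([0.0, 1 / np.sqrt(3)]) / L
assert np.allclose([t1 @ g1, t1 @ g2, t2 @ g1, t2 @ g2],
                   [2 * np.pi, 0.0, 0.0, 2 * np.pi])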
In the present work, we give special attention to micro-structured solids containing cracks, and we show how a coating built of a tilted-resonator lattice can absorb vibrations, or otherwise channel the energy away from the crack tip. An adaptive finite element computation has been performed to model the transient propagation of a crack inside a channel of the micro-structured material. The earlier work <cit.> addressed the question of a transient advance of a crack subjected to a dynamic load. The influence of a geometrically chiral multi-scale lattice on the field around the crack is demonstrated in the present paper. An additional focus of this paper is on the effect of geometric chirality on the edge waves propagating along structured interfaces. In this context, we would like to mention the earlier work <cit.>, where asymptotics for elastic waves propagating along line defects in triangular and square lattices were investigated. Here we analyse waves around a "coated" crack, where the coating is introduced as a multi-scale structure of tilted resonators. We show examples of dynamic localisation and edge waves.

The structure of the paper is as follows. The formulation of the problem and an outline of the dispersion properties of the Bloch-Floquet waves in a lattice with tilted resonators are included in Section <ref>. Wave localisation and edge states are discussed in Section <ref>. In Section <ref>, we model a crack in a triangular lattice, surrounded by a structured coating containing tilted resonators. In Section <ref>, we study an edge crack sandwiched between two strips of resonators and subjected to a pulsating thermal load. The advance of the crack is studied in the transient regime. In Section <ref> we draw our main conclusions.

§ BLOCH-FLOQUET WAVES IN A TRIANGULAR LATTICE WITH TILTED RESONATORS

In this section, we refer to the earlier paper <cit.> and give an outline describing the propagation of Bloch-Floquet waves in a triangular lattice with tilted rotational resonators. A schematic representation of the triangular lattice with resonators (TLR) is given in Fig. <ref>(a). Here we demonstrate that the Bloch-Floquet frequency dispersion surfaces for the TLR can exhibit Dirac-like dispersion. Dirac-like dispersion arises from the triple degeneracy of two conical bands and one flat band, as also stated in <cit.>. In contrast, pure Dirac dispersion is represented by a conical surface, incorporating two cones above and below the common vertex, called the "Dirac point". Such dispersion surfaces are observed, for example, for lattices of high order of symmetry, such as graphene. Dirac-like dispersion can be achieved via the fine tuning of the unit cell's eigenvalues in a plethora of phononic and photonic metamaterials. Dirac-like phononic lattices remain highly attractive because of their interesting physical properties: dynamic neutrality has recently been observed in a platonic crystal <cit.> and <cit.>. Perfect transmission and tunnelling were reported in <cit.>, which focussed on a photonic crystal governed by the Helmholtz wave equation and exhibiting Dirac-like dispersion.

§.§ Governing equations

We consider an elastic triangular lattice (TL) containing tilted rotational resonators, as the one represented in Fig. <ref>(a). Point-wise masses m (black full circles) are considered at the triangular lattice nodes in Fig. <ref>(a), the lattice vectors being

t_1 = L (1, 0)^T and t_2 = (L/2) (1, √(3))^T,

where L is the distance between nearest neighbours.
Fig. <ref>(b) shows the first Brillouin zone of the TL, together with its irreducible part (grey area). The high-symmetry points are

Γ = (0, 0)^T, M = 2π/(√(3)L) (1/√(3), 1)^T and X = 2π/(√(3)L) (0, 1)^T.

The nodal points of the lattice, whose mass is m, are linked to each other by non-flexible, massless, extensible rods (thin lines) of longitudinal stiffness c_ℓ. The unit cell of the lattice (semitransparent yellow region in Fig. <ref>(a)) contains a resonator, an equilateral triangle of side ℓ with point masses m_o attached to its vertices (empty circles in Fig. <ref>(a)). The vertices of the resonators are linked to the nodal points of the TL by non-flexible, extensible rods of longitudinal stiffness c_ℓo (medium thickness black lines in Fig. <ref>(a)). In this paper the resonators are assumed to be rigid, i.e. the longitudinal stiffness c_o of the links connecting the vertices of the resonators is such that c_o/c_ℓo → +∞ and c_o/c_ℓ → +∞. The resonators are tilted with respect to the external triangular lattice by an angle ϑ_0, marked in Fig. <ref>(a).

We now give some geometric definitions useful to represent the dispersion equation for the triangular lattice with resonators. We denote by b̃_i, i = {1,2,3}, the position vector of the i-th mass relative to the centre of mass r̃_cm = (L/2) (1, 1/√(3))^T, where "T" denotes transposition. The explicit expression is

b̃_i = b R̂_i β̃_1 = b R̂_i (sin ϑ_0, cos ϑ_0)^T, with R̂_i = R̂_ϑ|_{ϑ = 2π(i-1)/3}, i = {1,2,3},

where ϑ_0 is the tilting angle, b = ℓ/√(3), and

R̂_ϑ = [ cos ϑ, sin ϑ ; -sin ϑ, cos ϑ ]

is the clockwise rotation matrix. The vector linking the triangular lattice to the i-th mass of the resonator in the reference cell n = 0 is α̃_i = R̂_i α̃_1, i = {1,2,3}, with

α̃_1 = r̃_cm + b̃_1 - t_2 = (b sin ϑ_0, -(B - b cos ϑ_0))^T,

where B = L/√(3), b has been introduced in Eq. (<ref>), and the matrix R̂_i is given in Eq. (<ref>). Given the set of vectors (<ref>) and (<ref>), we introduce the corresponding projector matrices

τ̂_1 = (1/L^2) t_1 t_1^T, τ̂_2 = (1/L^2) t_2 t_2^T, τ̂_3 = (1/L^2) (t_1 - t_2)(t_1 - t_2)^T, Π̂_i = (1/ℓ_r^2) α̃_i α̃_i^T, i = {1,2,3},

with ℓ_r = ||α̃_i|| = (1/√(3)) √(L^2 + ℓ^2 - 2 ℓ L cos ϑ_0). The notation v u^T in Eqs (<ref>) is used to denote the dyadic product v ⊗ u of two vectors u and v.

We consider time-harmonic elastic Bloch-Floquet waves propagating through the lattice. Following <cit.>, the Bloch-Floquet displacement wave's amplitude with Bloch-Floquet wave vector k is

U_k = [ u_0^T(k), u_cm^T(k), ϑ(k) ]^T,

where the vector quantities u_0(k) and u_cm(k) are the in-plane displacements of the TL nodal points and of the centre of mass of the resonators, respectively. In Eq. (<ref>), ϑ(k) represents the angular displacement with respect to the equilibrium ϑ_0. In the time-harmonic regime, the equations of motion in the lattice characterised by the displacement (<ref>) have the matrix form

(Σ̂_k - ω^2 M̂) U_k = 0,

where ω is the Bloch-Floquet radian frequency and the vector U_k is given in Eq. (<ref>). The inertia matrix which appears in Eq. (<ref>) is M̂ = diag(m, m, M, M, I), where M = 3 m_o is the total mass of the resonator, I = m_o ℓ^2 is its moment of inertia, and m is the mass of the nodal points of the triangular lattice.
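The geometric quantities defined above are straightforward to assemble numerically. The following Python/NumPy sketch (an illustration of ours, using the parameter values of Set 1 introduced later in the paper; variable names are our own choices) constructs the lattice vectors, the rotated arms and the projector matrices, and verifies the closed-form expression for ℓ_r:

    import numpy as np

    L, ell, theta0 = 1.0, 0.21, 0.82          # lattice spacing, resonator side, tilt angle
    b, B = ell / np.sqrt(3.0), L / np.sqrt(3.0)

    def R(theta):
        # clockwise rotation matrix R_theta
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, s], [-s, c]])

    t1 = L * np.array([1.0, 0.0])
    t2 = 0.5 * L * np.array([1.0, np.sqrt(3.0)])
    r_cm = 0.5 * L * np.array([1.0, 1.0 / np.sqrt(3.0)])

    Ri = [R(2.0 * np.pi * i / 3.0) for i in range(3)]            # R_1, R_2, R_3
    b_arm = [Ri[i] @ (b * np.array([np.sin(theta0), np.cos(theta0)])) for i in range(3)]
    alpha1 = r_cm + b_arm[0] - t2                                # arm linking the TL node to mass 1
    alpha = [Ri[i] @ alpha1 for i in range(3)]

    ell_r = np.sqrt(L**2 + ell**2 - 2.0 * ell * L * np.cos(theta0)) / np.sqrt(3.0)
    assert np.isclose(np.linalg.norm(alpha1), ell_r)             # closed form for ||alpha_i||

    tau = [np.outer(t, t) / L**2 for t in (t1, t2, t1 - t2)]     # projectors tau_1..3
    Pi = [np.outer(a, a) / ell_r**2 for a in alpha]              # projectors Pi_1..3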
In <cit.> it has been shown that the stiffness matrix in Eq. (<ref>) is

Σ̂_k = [ Σ̂_{0,0}(k), Σ̂_{0,cm}(k), Σ_{0,ϑ}(k) ; Σ̂†_{0,cm}(k), Σ̂_{cm,cm}, Σ_{cm,ϑ} ; Σ†_{0,ϑ}(k), Σ†_{cm,ϑ}, Σ_{ϑ,ϑ} ]

= ∑_{i=1}^{3} [ -2 c_ℓ (cos(k·t_i) - 1) τ̂_i + c_ℓo Π̂_i , -c_ℓo φ_i(k) Π̂_i , -c_ℓo φ_i(k) Π̂_i R̂'_i b̃_1 ;
                -c_ℓo φ*_i(k) Π̂_i , c_ℓo Π̂_i , c_ℓo Π̂_i R̂'_i b̃_1 ;
                -c_ℓo (φ_i(k) Π̂_i R̂'_i b̃_1)† , c_ℓo (Π̂_i R̂'_i b̃_1)† , c_ℓo b̃_1^T (R̂'_i)^T Π̂_i R̂'_i b̃_1 ],

where t_3 ≡ t_1 - t_2, φ_1(k) = exp(-i k·t_2), φ_2(k) = exp(-i k·t_1), φ_3(k) = 1, and R̂'_i = d(R̂_ϑ)/dϑ|_{ϑ = 2π(i-1)/3}. Consider the 3×3 block independent of k which appears in Eq. (<ref>). We observe that

σ = [ Σ̂_{cm,cm}, Σ_{cm,ϑ} ; Σ†_{cm,ϑ}, Σ_{ϑ,ϑ} ] = [ (3 c_ℓo/2) Î_{2×2}, 0 ; 0^T, c_ℓo ℓ^2 sin^2 ϑ_0 / (1 + ℓ^2/L^2 - 2 (ℓ/L) cos ϑ_0) ],

where Î_{2×2} is the 2×2 identity matrix. The diagonal matrix (<ref>) is the stiffness matrix of a single resonator, for which the natural frequencies squared are <cit.>

Ω_cm = 3 c_ℓo / (2M), and Ω_ϑ = (c_ℓo ℓ^2 / I) sin^2 ϑ_0 / (1 + ℓ^2/L^2 - 2 (ℓ/L) cos ϑ_0).

In Eq. (<ref>), Ω_cm corresponds to the oscillation of the centre of mass of a single resonator, whereas Ω_ϑ describes the harmonic rotation of the resonator.

§.§ Triple eigenvalue and Dirac-like dispersion surfaces near k = 0

The elastic Bloch-Floquet waves in the doubly-periodic structure of tilted resonators have interesting dispersion properties, shown in Fig. <ref>. A special feature is the Dirac-like cone with the vertex corresponding to k = 0, which is the main focus of this paragraph. Seeking non-trivial solutions of Eq. (<ref>) requires

D(k, ω) = det(Σ̂_k - ω^2 M̂) = 0,

whose roots ω vs k determine the dispersion of Bloch waves (see, e.g., Fig. <ref>). At k = 0, the roots of the fifth-degree polynomial equation (<ref>) in Ω = ω^2 can be found in closed form. Introducing the notation Ω^(i)_Γ = Ω^(i)_k|_{k=0}, with i the index of the root, we find

Ω^(1)_Γ = 0, Ω^(2)_Γ = Ω_cm (1 + 3 m_o/m) and Ω^(3)_Γ = Ω_ϑ,

where Ω_cm and Ω_ϑ have been introduced in Eq. (<ref>). The first and second eigenvalues in Eqs (<ref>) have multiplicity two, and the third one has multiplicity one. The geometric conditions

0 < ℓ/L < 1/2 and |ϑ_0| < ϑ_max ≡ arccos(ℓ/L)

guarantee that the trusses do not cross each other. We observe that it is possible to obtain a triple eigenvalue, corresponding to Ω^(2)_Γ = Ω^(3)_Γ, if there exists

m̄ = (cos 2ϑ_0 + ℓ̄^2 - 2 ℓ̄ cos ϑ_0) / (2 ℓ̄ cos ϑ_0 - ℓ̄^2 - 1) > 0,

with m̄ = 3 m_o/m and ℓ̄ = ℓ/L. We observe that

m̄ > 0 ⟺ cos ϑ_0 - |sin ϑ_0| < ℓ̄ < cos ϑ_0 + |sin ϑ_0|.

The substitution of the expression (<ref>) for m_o into the Bloch frequencies at Γ in Eq. (<ref>) gives the frequency squared for the triple eigenvalue

Ω_Γ^(te) = -3 (c_ℓo/m) sin^2 ϑ_0 / (ℓ̄^2 - 2 ℓ̄ cos ϑ_0 + cos 2ϑ_0),

which is a positive quantity if the condition on ℓ̄ and ϑ_0 of Eq. (<ref>) is satisfied.

Fig. <ref>(a) represents the frequency dispersion surfaces for a TLR as a function of the Bloch wave vectors spanning the first Brillouin zone (see Fig. <ref>(b)). The lattice parameters have been chosen in such a way that Eq. (<ref>) is satisfied. This implies the occurrence of a triple eigenvalue at Γ, as can be seen by direct inspection of the optical part of the dispersion diagram. Specifically, we choose ℓ̄ = 0.21 and ϑ_0 = 0.82, which gives m̄ = 0.41. Moreover, we fix L = c_ℓ = 1 and m = 0.8, which influences the maximum frequency of the acoustic modes. Finally, the choice c_ℓo = 1.53 guarantees that the triple-eigenvalue frequency is √(Ω_Γ^(te)) = π.
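The closed-form conditions above are easy to verify numerically. The short script below (ours, using only the formulas quoted in this section) evaluates m̄ and √(Ω_Γ^(te)) for the parameters of Set 1; the small offset from π comes from the rounding of c_ℓo:

    import numpy as np

    ell_bar, theta0 = 0.21, 0.82        # l/L and the tilting angle (Set 1)
    c_lo, m = 1.53, 0.8                 # link stiffness c_lo and nodal mass m

    # mass ratio m_bar = 3 m_o / m enforcing the triple eigenvalue at Gamma
    m_bar = (np.cos(2 * theta0) + ell_bar**2 - 2 * ell_bar * np.cos(theta0)) \
            / (2 * ell_bar * np.cos(theta0) - ell_bar**2 - 1)

    # m_bar > 0 is equivalent to cos(theta0) - |sin(theta0)| < ell_bar < cos(theta0) + |sin(theta0)|
    assert np.cos(theta0) - abs(np.sin(theta0)) < ell_bar < np.cos(theta0) + abs(np.sin(theta0))

    # frequency squared of the triple eigenvalue
    Omega_te = -3 * (c_lo / m) * np.sin(theta0)**2 \
               / (ell_bar**2 - 2 * ell_bar * np.cos(theta0) + np.cos(2 * theta0))

    print(round(m_bar, 2), round(np.sqrt(Omega_te), 3))   # -> 0.41 3.137 (approximately pi)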
Figs <ref>(b) and <ref>(c) show the slowness contours of Fig. <ref>(a) around the triple-eigenvalue frequency ω = π. Fig. <ref>(b) (Fig. <ref>(c)) refers to frequencies slightly above (slightly below) ω = π. Figs <ref>(b) and <ref>(c) show that the dispersion in the vicinity of the triple eigenvalue is isotropic. In Fig. <ref>(d) we compare, along the path M Γ X M, the optical branches of three different TLRs whose lattice parameters are listed in Table <ref>. The black solid line refers to Set 1 in Table <ref>, which has already been used in Fig. <ref>(a). Hence, the dispersion around the triple-eigenvalue frequency ω = π is linear, suggesting that the triple eigenvalue is a Dirac-like point. Other choices of the parameters are possible, resulting in different effective group velocities at Γ. In Fig. <ref>(d) we use Set 2 (red dashed line) and Set 3 (blue dotted line) listed in Table <ref>. The chosen sets of parameters satisfy (<ref>), which corresponds to the occurrence of a triple eigenvalue at Γ and ω = π. We observe that the Dirac-like dispersion is robust over the chosen sets of lattice parameters.

§ LOCALISATION AND EDGE WAVES AT THE DIRAC-LIKE POINT

In this section, we investigate the wave forms which correspond to frequencies in the neighbourhood of the Dirac-like point. In addition, we study the propagation of edge waves along interfaces obtained by modifying the bulk homogeneous lattices. The periodic lattice's dynamic response to point loads of different orientations is studied using the Finite Element Method (COMSOL Multiphysics). In the computations we truncate the lattice, retaining an N×N cluster of TLR cells, where N ≈ 50. In order to reduce spurious reflections from the boundaries of the computational window, the dynamic equations of the nodal points close to the sides of the grid include a damping term. The damping layer has width L_D = 4L and is non-uniform, with spatial distribution η(x) = η_0 (1 - exp(-σ|x|)), where σ = 1/L, η_0 is a frequency-dependent factor, and x ∈ [0, L_D] spans from the inner to the outer boundary of the damping frame.

The harmonic responses shown in this section are triggered by a point force of frequency ω = π rad/s, linear polarisation and amplitude F = 0.1 N. We assume that the force is exerted on a triangular lattice node located at the centre of the clusters. The lattice parameters considered here are listed in Table <ref>, where SI units of measurement and angles in radians are understood. These parameters have been chosen to reproduce a triple eigenvalue at Γ with frequency ω = π rad/s at the Dirac-like point (see Section <ref>). The effective properties of the dispersion surfaces emanating from the Dirac-like point strongly influence the harmonic response of the structure. Special attention is given to the influence of the effective mass of the parabolic-in-k mode, and of the effective group velocities of the conical modes, on the localisation patterns and on the amplitude and wavelength of the edge waves propagating along interfaces obtained from the bulk TLRs.

§.§ Edge waves along the interface between non-homogeneously tilted TLRs

Figs <ref>(a), <ref>(b) and <ref>(c) show the harmonic responses of a cluster with lattice parameters as in Set 1 of Table <ref>. In these computations, three different linearly polarised forces have been used, oriented at 0, π/3 and π/6 with respect to the horizontal axis (see black arrows).
In Figs <ref>(a), <ref>(b) and <ref>(c), we observe a localisation pattern consistent with the flat band intersecting the Dirac cone at the triple eigenvalue. The symmetry axis of the localisation pattern follows the polarisation angle of the force. Figs <ref>(d), <ref>(e) and <ref>(f) show the harmonic responses of a special cluster of resonators in which an inhomogeneity has been introduced via the tilting angle. The remaining parameters are listed in Set 1 of Table <ref>, and the harmonic force is the same as in panels (a), (c) and (e). Above the thin black line the resonators are tilted in the anticlockwise direction (ϑ_0 = -0.82), while below the line a clockwise tilting (ϑ_0 = 0.82) is implemented. This inhomogeneity introduces an interface which runs along the thin black line. It shall be pointed out that the dispersion surfaces of the lattices of resonators with clockwise and anticlockwise tilting are identical. In particular, the effective group velocities at the Dirac-like point are identical. Nevertheless, the harmonic response of the non-homogeneous cluster differs significantly from the corresponding responses of the homogeneously tilted cluster. In fact, we observe that a point force of frequency ω = π rad/s, corresponding to the Dirac-like point, triggers an edge wave travelling along the interface. The amplitude of the edge wave depends on the orientation of the harmonic point force, being larger for larger deflections from the horizontal direction (cf. Figs <ref>(d), <ref>(e) and <ref>(f)). The three panels suggest that the elastic edge wave propagating along the interface has elliptic polarisation, whose principal axis is oriented at π/3 with respect to the interface. When the linear polarisation angle of the source matches π/3 (see Fig. <ref>(f)), the amplitude of the edge wave is greater than in the other two cases, for geometrical reasons.

In the same spirit as in Figs <ref>, Figs <ref> show the harmonic responses of clusters whose lattice parameters are listed in Set 2 (panels (a) and (b)) and Set 3 (panels (c) and (d)) of Table <ref>. The aim here is to illustrate how different dispersive properties near the Dirac-like point, already highlighted in Fig. <ref>(d), affect the harmonic responses of homogeneously tilted clusters (panels (a) and (c)) and non-homogeneously tilted clusters (panels (b) and (d)). The non-homogeneity considered here has the same meaning as in Figs <ref>. Figs <ref>(a) and <ref>(c) show localised patterns similar to those encountered in Fig. <ref>(a). Figs <ref>(b) and <ref>(d) show an edge wave travelling along the interface. We remark that the wavelength of the edge waves is larger for smaller effective group velocities at the Dirac-like frequency ω = π rad/s. This suggests that the dynamics of the edge waves is controlled by the effective group velocities at the Dirac-like point.

§.§ Edge waves along a line defect in a non-homogeneously tilted TLR

Fig. <ref> shows the modulus of the displacement field for a forced TLR containing a defect, which consists of a missing line of resonators, as shown in the magnifying inset highlighted in yellow on the left of the figure. The lattice parameters used in this computation are listed in Set 1 of Table <ref>, and the tilting angle is anticlockwise and clockwise above and below the defect, respectively. The harmonic force is identical to the one used in Fig. <ref>(a) and is exerted on a triangular lattice nodal point below the line defect (see blue arrow in the inset).
We observe that the defect acts as a waveguide for an edge wave whose wavelength differs from the one in Fig. <ref>(b). We emphasise again that the wave-guiding behaviour in Fig. <ref> differs significantly from the localisation pattern of Fig. <ref>(a), its bulk homogeneous counterpart.

§ WAVE-FORMS AROUND A CRACK SURROUNDED BY A MICRO-STRUCTURED COATING

In this section we study a special coating for one-dimensional cracks inside a TL. We consider a shear plane wave of angular frequency ω = π rad/s impinging on the crack. The coating is obtained by introducing resonators around the crack. The physical parameters of the exterior triangular lattice in which the plane wave propagates can be chosen in order to guarantee an isotropic dynamic response; in this section, the maximum plane-wave frequency is ω = π rad/s. The stiffness of the links, c_TL = 50 N/m, together with the mass of the nodal points, m_TL = m + 3m_o = 1.43 kg (see Set 1 in Table <ref>), guarantees an isotropic dynamic response. We observe that the aforementioned choice of the mass minimises spurious scattering effects associated with a contrast of inertia. Fig. <ref>(a) shows a shear plane wave of frequency ω = π rad/s propagating through the isotropic triangular lattice. The wave is excited by applying a time-harmonic horizontal displacement to the nodal points of the lattice close to the horizontal line y = 45. In Fig. <ref>(b), a crack obtained by removing some links from the triangular lattice scatters the shear plane wave. In this section, the lattice parameters of the structured coating are given in Set 1 of Table <ref>. The corresponding dispersion surfaces are reported in Fig. <ref>(a). The different frequency regimes are discussed via the analysis of the scattered displacement fields: in subsection <ref> we address frequencies close to the Dirac-like point, and in subsection <ref> we focus on the band-gap regime.

§.§ Dirac-like regime

In Fig. <ref> we compare the modulus of the displacement field resulting from the interaction of an elastic shear wave with a cluster of resonators (panel (a)) and with a cluster of resonators containing a crack (panel (b)). The source of the excitation is a plane wave of frequency ω = π rad/s, which corresponds to the Dirac-like point of the periodic TLR (see Fig. <ref>). Panel (a) shows that the scattering of elastic waves is highly anisotropic, the displacement field being concentrated on the right side of the cluster. It is worthwhile noting that if the resonators are rotated in the anticlockwise direction (ϑ_0 = -0.82), the displacement field is mirror-symmetric compared to the one of Fig. <ref>(a). The introduction of a crack within the cluster (Fig. <ref>(b)) triggers the propagation of elastic waves around the crack itself. The displacement field and the corresponding stresses are still visibly concentrated around the right tip of the crack. This suggests that a coating of resonators in the Dirac-like regime is likely to lead to a left-right asymmetry in the propagation of the crack.

In Figs <ref>, long strips of resonators containing a crack interact with a shear plane wave impinging on the strip from above. Several arrangements of the resonators are considered. In panel (a) (panel (b)) the resonators in the strip are homogeneously tilted in the clockwise (anticlockwise) direction. Similarly to Fig. <ref>(a), this leads to an enhancement of the displacement field close to the tips of the crack. Moreover, the results are mirror-symmetric about the vertical line passing through the centre of the crack.
This is consistent with what we observe in Fig. <ref>(b). In Fig. <ref>(c) the homogeneously tilted strip analysed in panel (a) has been replaced by a strip with an interface. The interface is represented by a line of missing resonators. The stiffness of the triangular lattice links which define the interface is assumed to be c_TL = 50 N/m, as in the exterior triangular lattice. In Fig. <ref>(d), the strip is similar to the one of Fig. <ref>(c), but anticlockwise tilting above the line and clockwise tilting below the line is implemented. In Figs <ref>(c) and <ref>(d) the displacement field is mirror-symmetric with respect to a vertical line passing through the crack.

§.§ Band gap regime

In Figs <ref>, a shear plane wave coming from above impinges at normal incidence on a cluster of resonators (panel (a)) and on clusters of resonators containing a crack (panels (b), (c) and (d)). The frequency of the excitation is ω = 2.4 rad/s, corresponding to the band gap of Fig. <ref>(a). It is remarked that the coating is not penetrated by the incident wave. In particular, panel (b) shows that the structured cluster acts as a protective layer for the crack, as one would expect from the analysis of the dispersion diagram for Bloch waves. In Figs <ref>(c) and <ref>(d) we introduce a defect consisting of a missing line of resonators along the extension of the crack. In Fig. <ref>(c) the tilting angle is homogeneous, whereas in panel <ref>(d) the resonators are tilted in opposite directions above and below the line defect. The stiffness of the links of the line defects is the same as that of the exterior triangular lattice. Figs <ref>(c) and <ref>(d) show a displacement enhancement at the perimeter of the cluster, but away from the crack tip. Fig. <ref>(e) highlights an edge wave travelling along the boundary of the cluster.

In Figs <ref>(a), <ref>(b) and <ref>(c), the angular frequency ω = 2.1 rad/s of the plane wave corresponds to the lower edge of the band gap of Fig. <ref>(a). In Figs <ref>(d), <ref>(e) and <ref>(f), the frequency ω = 2.7 rad/s corresponds to the upper edge of the band gap. For the lower edge frequency, although the cluster is partially protective (see panel (b)), the introduction of the one-dimensional defect increases the stress concentration around the crack (panel (c)), compared to the uncoated configuration (panel (a)). A similar effect is reported for the upper edge of the band gap in Fig. <ref>(e) and Fig. <ref>(f). In the vicinity of the band gap edges, the coating of resonators enhances the displacement field around the crack, increasing the chances for the crack to propagate.

§ EDGE CRACK SUBJECTED TO A TRANSIENT THERMAL LOAD

The governing equations, loading configuration and fracture criterion are the same as in the earlier computations for a thermoelastic crack advancing through a homogeneous triangular lattice <cit.>. Here, a geometrically chiral coating surrounding the crack is introduced into the model. An elastic wave is generated as a result of a rapid variation of the boundary temperature. The fracture criterion is based on a normalised threshold elongation ϵ = ΔL/L: the crack advances when the ligament at the crack tip reaches the critical threshold elongation. The loading configuration is made of square pulses applied to the left edge of the computational domain. The period of the load is θ = 4τ, where τ = 16 s is the duration of a single pulse. The radian frequency of the pulses is ω_s = 2π/θ = 0.0982 rad/s, where the subscript s stands for "striping". The total duration of the loading is 60θ.
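The line spectrum of this pulse train, discussed next, can be reproduced with a few lines of Python (an illustrative sketch of ours; the sampling step dt is an arbitrary choice):

    import numpy as np

    tau = 16.0                      # single pulse duration [s]
    theta = 4.0 * tau               # loading period [s]
    T, dt = 60.0 * theta, 0.01      # total loading duration and sampling step [s]

    t = np.arange(0.0, T, dt)
    load = ((t % theta) < tau).astype(float)      # square pulses of unit amplitude

    spectrum = np.abs(np.fft.rfft(load)) * dt
    omega = 2.0 * np.pi * np.fft.rfftfreq(load.size, d=dt)
    omega_s = 2.0 * np.pi / theta                 # = 0.0982 rad/s

    idx = np.argsort(spectrum)[-4:]               # DC term and the three strongest spikes
    print(np.sort(omega[idx]) / omega_s)          # -> [0. 1. 2. 3.], i.e. multiples of omega_s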
Fig. <ref> shows the Fourier spectrum of the temperature loading. We observe that the spectrum is dominated by spikes occurring at multiples of ω_s. We limit the plot to ω̄ ∈ [0, 2.1 ω_s], where the most pronounced spikes of the spectrum appear. Tilted resonators are added as four layers (two above and two below the crack). The trusses which link the resonators to the TL nodal points are thermally insulating. The mass of the unit cell containing a resonator is not equal to the mass of the exterior triangular lattice nodal points. In Table <ref>, we list the thermoelastic parameters used in the transient non-linear simulations. The dispersion diagrams corresponding to the periodic lattices are represented in Fig. <ref>. Fig. <ref>(a) represents the dispersion surfaces for the triangular lattice outside the cracked strip. Figs <ref>(b) and <ref>(c) show the dispersion diagrams for two triangular lattices with resonators which differ from each other by the tilting angle (47 deg and 78 deg, respectively). The structured lattices are deliberately designed in such a way that ω_s lies in the stop band for the lattice of Fig. <ref>(b) and in the passband for the lattice of Fig. <ref>(c), as highlighted by the horizontal red lines.

From the transient solution of the thermoelastic problem described above, we extracted the crack length L_c at several times. The results are represented in Fig. <ref> for different normalised elongation thresholds ϵ. Panel (a) corresponds to the lower tilting angle and panel (b) to the higher one. At the same elongation thresholds, the average crack speeds in panel (a) are slightly higher than those in panel (b). We provide a qualitative interpretation of this phenomenon as follows. The thermal shocks trigger elastic waves whose amplitudes vs frequency at the left edge of the computational window differ from Fig. <ref> by a multiplicative constant. When ω_s is in the passband, i.e. when ϑ_0 = 78 deg, elastic waves can propagate along the strip of tilted resonators (see Fig. <ref>), resulting in a reduction of the strain concentration at the crack tip compared to the ϑ_0 = 47 deg configuration. Equivalently, the strip of resonators acts as a structured waveguide which channels the energy away from the crack tip, as illustrated in Fig. <ref>. On the contrary, when ϑ_0 = 47 deg, the waveguide action is suppressed, which leads to field localisation around the crack and hence a stronger advance of the fracture through the lattice.

§ CONCLUDING REMARKS

We have identified several important applications of a novel geometrically chiral micro-structure in the design of advanced materials, used as filters/polarisers of elastic waves. A transient advance of a crack, whose instantaneous snapshot is given in Fig. <ref>, has been studied in a micro-structured layer where tilted resonators are present in the lattice. The analysis of the transient crack advance illustrated by Figs <ref>(a) and <ref>(b) is linked to the tunable dispersion properties of the lattices (see Fig. <ref>) and to the guiding features of the structured coating around the crack, as shown in Fig. <ref>. The Dirac-like dynamic regime deserves special mention. It has been achieved and studied here in relation to wave-guiding and wave-defect interaction problems. Asymmetries in the scattered elastic field have been identified for waves at the Dirac-like frequency. This in turn empowers further studies in the context of asymmetric crack initiation mechanisms (see Figs <ref> and <ref>).
Shielding of a defect from an incident elastic shear wave has been achieved in the regimes which correspond to the complete band gap of the triangular lattice with resonators. In addition to the usual low penetration of external waves within the protecting coating, we emphasise that, in our model, edge waves occur around the perimeter of the coating (see Fig. <ref>). This is a "fingerprint" of the lattice's geometric chirality, and cannot be achieved by a straightforward adjustment of the triangular lattice parameters, e.g. by introducing a contrast in the inertia or stiffness.

§ ACKNOWLEDGEMENTS

D.T. gratefully acknowledges the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme FP7/2007-2013/ under REA grant agreement number PITN-GA-2013-606878. The paper was completed while D.T. was in a work secondment at EnginSoft (Italy), whose stimulating and welcoming environment is gratefully acknowledged. A.B.M. and N.V.M. acknowledge the financial support of the EPSRC through programme grant EP/L024926/1. The paper was completed while A.B.M. was visiting the University of Trento; the support from the ERC Advanced Grant 'Instabilities and nonlocal multiscale modelling of materials' FP7-PEOPLE-IDEAS-ERC-2013-AdG is gratefully acknowledged.
http://arxiv.org/abs/1705.09726v1
{ "authors": [ "Domenico Tallarico", "Alessio Trevisan", "Natalia V. Movchan", "Alexander B. Movchan" ], "categories": [ "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170526212321", "title": "Edge waves and localisation in lattices containing tilted resonators" }
Unsupervised Feature Learning for Writer Identification and Writer Retrieval

Vincent Christlein^1, Martin Gropp^1, Stefan Fiel^2, and Andreas Maier^1
^1 Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
^2 Computer Vision Lab, TU Wien, 1040 Vienna, Austria
[email protected], [email protected], [email protected], [email protected]

Deep Convolutional Neural Networks (CNN) have shown great success in supervised classification tasks such as character classification or dating. Deep learning methods typically need a lot of annotated training data, which is not available in many scenarios. In these cases, traditional methods are often better than or equivalent to deep learning methods. In this paper, we propose a simple, yet effective, way to learn CNN activation features in an unsupervised manner. Therefore, we train a deep residual network using surrogate classes. The surrogate classes are created by clustering the training dataset, where each cluster index represents one surrogate class. The activations from the penultimate CNN layer serve as features for subsequent classification tasks. We evaluate the feature representations on two publicly available datasets. The focus lies on the ICDAR17 competition dataset on historical document writer identification. We show that the activation features trained without supervision are superior to descriptors of state-of-the-art writer identification methods. Additionally, we achieve comparable results in the case of handwriting classification using the ICFHR16 competition dataset on historical Latin script types.

Keywords: unsupervised feature learning; writer identification; writer retrieval; deep learning; document analysis

§ INTRODUCTION

The analysis of historical data is typically a task for experts in history or paleography. However, due to the digitization process of archives and libraries, a manual analysis of a large data corpus might not be feasible anymore. We believe that automatic methods can support people working in the field of humanities. In this paper, we focus on the task of writer identification and writer retrieval. Writer identification refers to the problem of assigning the correct writer to a query image by comparing it with images of known scribal attribution. For writer retrieval, the task consists of finding all relevant documents of a specific writer. Additionally, we evaluate our method in a classification task to classify historical script types.

We make use of deep Convolutional Neural Networks (CNN), which are able to create powerful feature representations <cit.> and have been the state-of-the-art tool for image classification since the AlexNet CNN of Krizhevsky <cit.> won the ImageNet competition. Deep-learning-based methods also achieve great performance in the field of handwritten document classification, dating <cit.>, word spotting <cit.>, or handwritten text recognition <cit.>. However, such methods typically require a lot of labeled data for each class. We face another problem in the case of writer identification, where the writers of the training set are different from those of the test set in the typically used benchmark datasets.
On top of that, current datasets have only one to five images per writer. While a form of writer adaptation with exemplar Support Vector Machines (E-SVM) is possible <cit.>, CNN training for each query image would be very cost-intensive. Thus, deep-learning-based methods are solely used to create robust features <cit.>. In these cases, the writers of the training set serve as surrogate classes. In comparison to this supervised feature learning, we show that i) cluster memberships learned in an unsupervised manner serve as better surrogate classes than writer labels, and ii) the resulting deep activation features outperform handcrafted features from current state-of-the-art methods. In detail, our contributions are as follows:

* We present a simple method for feature learning using deep neural networks without the need for labeled data. <Ref> gives an overview of our method. First, SIFT descriptors <cit.> are computed on the training dataset and subsequently clustered. A deep residual network (ResNet) <cit.> is trained using patches extracted at each SIFT location (keypoint), with the cluster membership as target. The activations of the penultimate layer serve as local feature descriptors that are subsequently encoded and classified.
* We thoroughly evaluate all steps of our pipeline using a publicly available dataset on historical document writer identification.
* We show that our method outperforms the state of the art in the case of writer identification and retrieval.
* Additionally, we evaluate our method for the classification of medieval script types. On this task, we achieve equally good results as the competition winner.

The rest of the paper is organized as follows. <Ref> gives an overview of the related work in the fields of unsupervised feature learning and writer identification. The unsupervised feature learning and encoding steps are presented in <ref>. The evaluation protocol is given in <ref>, and the results in <ref>. <Ref> gives a summary and an outlook.

§ RELATED WORK

We focus our evaluation on the task of writer identification and writer retrieval. Method-wise, writer identification / retrieval can be divided into two groups: statistical methods (a. k. a. textural methods <cit.>) and codebook-based methods. The differentiation lies in the creation of the global descriptor which is going to be compared, or classified, respectively. Global statistics of the handwriting are computed in the former group, such as the width of the ink trace or the angles of stroke directions <cit.>. More recently, Nicolaou <cit.> employed local binary patterns evaluated densely over the image. Conversely, codebook-based descriptors are based on the well-known Bag-of-(Visual)-Words (BoW) principle: a global descriptor is created by encoding local descriptors using statistics obtained from a pre-trained dictionary. Fisher vectors <cit.>, VLAD <cit.> or self-organizing maps <cit.> have been employed for writer identification and retrieval. Popular local descriptors for writer identification are based on the Scale Invariant Feature Transform <cit.> (SIFT), see <cit.>. However, handcrafted descriptors specifically designed to work well on handwriting have also been developed. One example is the work by He <cit.>, who characterize script by computing junctions of the handwriting. In contrast, the work presented here learns the descriptors using a deep CNN. In previous works, the writers of the training datasets have been used as targets for the CNN training <cit.>.
While the output neurons of the last layer were aggregated using sum-pooling by Xing and Qiao <cit.>, the activation features of the penultimate layer were encoded using Fisher vectors <cit.> and GMM supervectors <cit.>. In contrast, we do not rely on any writer label information, but use the cluster memberships of image patches as surrogate targets. Clustering has also been used to create unsupervised attributes for historical document dating in the work of He <cit.>. However, they use handcrafted features in conjunction with SVMs. Instead, we learn the features in an unsupervised manner using a deep CNN.

The most closely related work comes from Dosovitskiy <cit.>, where surrogate classes are created by a variety of image transformations such as rotation or scale. Using these classes to train a CNN, they generate features which are invariant to many transformations and are advantageous in comparison to handcrafted features. They also suggest clustering the images in advance in order to apply their transformations to each cluster image, and then use the cluster indices as surrogate classes. A similar procedure is applied by Huang <cit.> to discover shared attributes and visual representations. In comparison to the datasets used in the evaluations of Dosovitskiy and Huang, we have many more training samples available, since we consider small handwriting patches. Thus, an exhaustive augmentation of the dataset is not necessary; instead, one cluster directly represents a surrogate class. Another interesting approach for deep unsupervised feature learning is the work of Paulin <cit.>, where Convolutional Kernel Networks (CKN) are employed. CKNs are similar to CNNs but are trained layer-wise to approximate a particular non-linear kernel.

§ METHODOLOGY

Our goal is to learn robust local features in an unsupervised manner. These features can then be used for subsequent classification tasks such as writer identification or script type classification. Therefore, a state-of-the-art CNN architecture is employed to train a powerful patch representation using cluster memberships as targets. A global image descriptor is created by means of VLAD encoding.

§.§ Unsupervised Feature Learning

First, SIFT keypoints are extracted. At each keypoint location, a SIFT descriptor and a 32×32 patch are extracted. The SIFT descriptors of the training set are clustered. While the patches are the inputs for the CNN training, the cluster memberships of the corresponding SIFT descriptors are used as targets; cf. also <ref> for an overview of the feature learning process. SIFT keypoint localization is based on blob detection <cit.>. The keypoints rely on finding both minima and maxima in the Difference-of-Gaussian (DoG) scale space and, in addition to document coordinates, also contain information about rotation and size, i.e., their location in scale space. The keypoints commonly occur between text lines, as can be seen in <ref>. These gratuitous locations can be filtered out afterwards, either by analyzing the keypoint size or by using the binarized image as a mask. Another possibility is to restrict the SIFT keypoint algorithm to finding only minima in the scale space, thus obtaining only dark-on-bright blobs. We employ this technique to mainly obtain patches containing text, further referred to as R-SIFT (restricted SIFT). Note that we also filter keypoints positioned at the same location to always obtain distinct input patches. For an improved cluster association, we also normalize the SIFT descriptors by applying the Hellinger kernel <cit.>.
In practice, the Hellinger normalization of SIFT descriptors consists of an l_1 normalization followed by an element-wise application of the square root. This normalization effectively helps to reduce the occurrence of visual bursts, i.e., dominating bins in the SIFT descriptor, and has been shown to improve image recognition <cit.> and writer identification / retrieval <cit.>. The descriptors are dimensionality-reduced from 128 to 32 dimensions and whitened using principal component analysis (PCA) to lower the computational cost of the clustering process. For clustering, we use a subset of 500k randomly chosen R-SIFT descriptors of the training set. We use the mini-batch version of k-means <cit.> for fast clustering. After the clustering process, we filter out descriptors (and corresponding patches) that lie on the border between two clusters. Therefore, the ratio ρ between the distances of the input descriptor x⃗ to the closest cluster center μ⃗_1 and to the second closest one μ⃗_2 is computed, i.e.:

ρ = ‖x⃗ - μ⃗_1‖_2 / ‖x⃗ - μ⃗_2‖_2.

If this ratio is too large, the descriptor is removed. In practice, we use a maximum allowed ratio of 0.9.
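A minimal sketch of this target-generation step is given below (our own illustration based on scikit-learn; the function and variable names are assumptions, not the authors' code):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import MiniBatchKMeans

    def surrogate_labels(descriptors, n_classes=5000, max_ratio=0.9):
        # Hellinger (RootSIFT) mapping: l1-normalize, then element-wise square root
        d = descriptors / (descriptors.sum(axis=1, keepdims=True) + 1e-12)
        d = np.sqrt(d)

        d = PCA(n_components=32, whiten=True).fit_transform(d)    # 128 -> 32 dims

        km = MiniBatchKMeans(n_clusters=n_classes).fit(d)
        # distances to all centers (for brevity; for huge n_classes one would
        # compute the two nearest centers blockwise instead)
        dist = np.sort(km.transform(d), axis=1)
        keep = np.where(dist[:, 0] / dist[:, 1] < max_ratio)[0]   # drop border descriptors
        return keep, km.labels_[keep]                             # surviving patches + targets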
Given the 32×32 image patches and their cluster memberships, a deep CNN is trained. We employ a deep residual network <cit.> (ResNet) with 20 layers. Residual networks have shown great results in image classification and object recognition. A ResNet consists of residual building blocks that have two branches. One branch has two or more convolutional layers, while the other one just forwards the result of the previous layer, thus bypassing the other branch. These building blocks help to preserve the identity and allow training deeper models. As the residual building block, we use the pre-resnet building block of <cit.>. For training, we follow the architectural design and procedure of He <cit.> for the CIFAR10 dataset. Following previous works <cit.>, we use the activations of the penultimate layer as feature descriptors. Note that typically the features of the penultimate layer are most distinctive <cit.>, but other layers are possible, too <cit.>. In our case, the penultimate layer is a pooling layer that pools the filters from the previous residual block. It consists of 64 hidden nodes, resulting in a feature descriptor dimensionality of 64.

§.§ Encoding

A global image descriptor is created by encoding the obtained CNN activation features. We use VLAD encoding <cit.>, which can be seen as a non-probabilistic version of the Fisher Kernel. It encodes first order statistics by aggregating the residuals of local descriptors to their corresponding nearest cluster center. VLAD is a standard encoding method, which has already been used for writer identification <cit.>. It has also successfully been used to encode CNN activation features for classification and retrieval tasks <cit.>. Formally, a VLAD is constructed as follows <cit.>. First, a codebook D⃗ = {μ⃗_1, …, μ⃗_K} is computed from random descriptors of the training set using k-means with K clusters. Every local image descriptor x⃗ of one image is assigned to its nearest cluster center. Then, all residuals between the cluster center and the assigned descriptors are accumulated for each cluster:

v⃗_k = ∑_{x⃗_t : NN(x⃗_t) = μ⃗_k} (x⃗_t - μ⃗_k),

where NN(x⃗_t) refers to the nearest neighbor of x⃗_t in the dictionary D⃗. The final VLAD encoding is the concatenation of all v⃗_k:

v⃗ = (v⃗_1^⊤, …, v⃗_K^⊤)^⊤.

We use power normalization <cit.> instead of the more recent intra normalization <cit.>. Power normalization is preferable here, since we employ keypoints for the patch extraction instead of dense sampling <cit.>. In power normalization, the normalized vector v̂ follows as:

v̂_i ← sign(v_i) |v_i|^ρ, ∀ i = {1, …, |v⃗|}, 0 < ρ ≤ 1,

where we set ρ to 0.5. Afterwards, the vector is l_2-normalized.
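The VLAD aggregation and normalization described above fit in a few lines of NumPy (again an illustrative sketch under our own naming, not the authors' implementation):

    import numpy as np

    def vlad(x, codebook, rho=0.5):
        # x: n x d local descriptors; codebook: K x d cluster centers
        K, d = codebook.shape
        dist = ((x[:, None, :] - codebook[None, :, :])**2).sum(axis=2)
        nn = dist.argmin(axis=1)                 # hard assignment to the nearest center

        v = np.zeros((K, d))
        for k in range(K):
            if np.any(nn == k):
                v[k] = (x[nn == k] - codebook[k]).sum(axis=0)   # residual accumulation
        v = v.ravel()

        v = np.sign(v) * np.abs(v)**rho                         # power normalization
        return v / (np.linalg.norm(v) + 1e-12)                  # l2 normalization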
Similar to the work of Christlein <cit.>, multiple codebooks are computed from different random training descriptors. For each of these codebooks, a VLAD encoding is computed. The encodings are subsequently decorrelated and optionally dimensionality-reduced by means of PCA whitening. This step has been shown to be very beneficial for writer and image retrieval <cit.>. We refer to this approach as multiple codebook VLAD, or m-VLAD for short.

§.§ Exemplar SVM

Additionally, we train linear support vector machines (SVM) for each individual query sample. Such an Exemplar SVM (E-SVM) is trained with only a single positive sample and multiple negative samples. This method was originally proposed for object detection <cit.>, where an ensemble of E-SVMs is used for each object class. Conversely, E-SVMs can also be used to adapt to a specific face image <cit.> or writer <cit.>. In principle, we follow the approach of Christlein <cit.> and use E-SVMs at query time. Since we know that the writers of the training set are independent from those of the test set, an E-SVM is trained using the query VLAD encoding as the positive sample and all the training encodings as negatives. This has the effect of computing an individual similarity for the query descriptor. The SVM large margin formulation with l_2 regularization and squared hinge loss h(x) = max(0, 1-x)^2 is defined as:

min_w⃗ (1/2) ‖w⃗‖_2^2 + c_p h(w⃗^⊤ x⃗_p) + c_n ∑_{x⃗_n ∈ 𝒩} h(-w⃗^⊤ x⃗_n),

where x⃗_p is the single positive sample and x⃗_n are the samples of the negative training set 𝒩. c_p and c_n are regularization parameters for balancing the positive and negative costs. We choose to set them inversely proportional to the respective number of samples, such that only one parameter C needs to be cross-validated in advance; see <cit.> for details. Unlike the work of Christlein <cit.>, we do not rank the other images according to the SVM score. Instead, we use the linear SVM as a feature encoder <cit.>, i.e., we directly use the normalized weight vector as our new feature representation for x⃗:

x⃗ ↦ x̂⃗̂ = w⃗ / ‖w⃗‖_2.

The new representations are ranked according to their cosine similarity.

§ EVALUATION PROTOCOL

The focus of our evaluation lies on writer identification and retrieval, where we thoroughly explore the effects of different pipeline decisions. Additionally, the features are employed for the classification of medieval handwriting. In the following subsections, the datasets, evaluation metrics and implementation details are presented.

§.§ Datasets

The proposed method is evaluated on the dataset of the ICDAR 2017 Competition on Historical Document Writer Identification <cit.>. The test set consists of 3600 document images written by 720 different writers. Each writer contributed 5 pages to the dataset, which have been sampled equidistantly from all available documents to ensure a high variability of the data. The documents have been written between the 13th and 20th century and contain mostly correspondences in German, Latin, and French. The training set contains 1182 document images written by 394 writers. Again, the number of pages per writer is equally distributed.

Additionally, the method is evaluated on a document classification task using the dataset of the ICFHR2016 competition on the classification of medieval handwritings in Latin script <cit.>. It consists of 3000 images of Latin scripts scanned from handwritten books dated between 500 and 1600 CE. The dataset is split into 2000 training and 1000 test images. The task is to automatically classify the test images into one of twelve Latin script types.

§.§ Evaluation Metrics

To evaluate our method, we use a leave-one-image-out procedure, where each image in the test set is used once as a query and the system has to retrieve a ranked list of documents from the remaining images. Ideally, the top entries of these lists would be the relevant documents written by the same scribe as the query image. We use several common metrics to assess the quality of these results. Soft Top N (Soft-N) examines the N items ranked at the top of a retrieved list. A list is considered an acceptable result if there is at least one relevant document in the top N items. The final score for this metric is then the percentage of acceptable results. Hard Top N (Hard-N), by comparison, is much stricter and requires all of the top N items to be relevant for an acceptable result. Precision at N (p@N) computes the percentage of relevant documents in the top N items of a result. The numbers reported for p@N are the means over all queries. The average precision (AP) measure considers the average p@N over all positions N of relevant documents in a result. Taking the mean AP over all queries finally yields the Mean Average Precision score (mAP). Since for N = 1 Hard-N, Soft-N, and p@N are equivalent, we record these scores only once as TOP-1.
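For concreteness, the retrieval scores for a single query can be computed from its ranked list of binary relevance flags as follows (a hypothetical helper of ours, not part of the competition tooling):

    import numpy as np

    def retrieval_scores(relevant, n=5):
        # relevant: ranked boolean array for one query (True = same writer)
        r = np.asarray(relevant, dtype=bool)
        soft_n = bool(r[:n].any())              # at least one hit in the top n
        hard_n = bool(r[:n].all())              # all top n items are hits
        p_at_n = r[:n].mean()                   # precision at n
        ranks = np.flatnonzero(r) + 1           # 1-based positions of the hits
        ap = float(np.mean((np.arange(len(ranks)) + 1) / ranks)) if len(ranks) else 0.0
        return soft_n, hard_n, p_at_n, ap

    # Averaging soft_n / hard_n over all queries gives the Soft-N / Hard-N
    # percentages; averaging ap over all queries gives the mAP.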
§.§ Implementation Details

If not stated otherwise, the standard pipeline consists of 5000 cluster indices as surrogate classes for 32×32 patches. The patches were extracted from the binarized images in the case of the writer identification dataset, and from the grayscale images in the case of the script classification dataset. The patches are extracted around the restricted SIFT keypoints (see <ref>). We extract RootSIFT descriptors and apply a PCA for whitening and for reducing the dimensionality to 32. These vectors are then used for the clustering step. A deep residual network (number of layers L = 20) is trained using stochastic gradient descent with an adaptive learning rate (if the error increases, the learning rate is divided by 10), a Nesterov momentum of 0.9 and a weight decay of 0.0001. The training runs for a maximum of 50 epochs, stopping early if the validation error (measured on 20k random patches not part of the training procedure) increases. Note that the maximum epoch number is sufficient given the large number of handwriting patches (480k). The activations of the penultimate layer are used as local descriptors. They are encoded using m-VLAD with five vocabularies. The final descriptors are PCA-whitened and compared using the cosine distance. For the comparison with the state of the art, we also employ linear SVMs. The SVM margin parameter C is cross-evaluated in the range [10^-5, 10^4] using an inner stratified 5-fold cross-validation for script type classification. In the case of writer identification / retrieval, a 2-fold cross-validation is employed; the training set is split into two writer-independent parts to have more E-SVMs for the validation.

§ RESULTS

First, the use of writers as surrogate classes is evaluated, similar to the work of Christlein <cit.> and Fiel <cit.>. Afterwards, our proposed method for feature learning, different encoding strategies and the parameters used are evaluated and eventually compared to the state-of-the-art methods.

§.§ Writers as Surrogate Classes

A natural choice for the training targets are the writers of the training set. This has been successfully used by recent works for smaller, non-historical benchmark datasets such as the ICDAR 2013 competition dataset for writer identification <cit.>. Thus, we employ the same scheme also for the historical dataset considered here. On the one hand, we employ the LeNet architecture used by Christlein <cit.>: two subsequent blocks of a convolutional layer followed by a pooling layer, and a final fully connected layer before the target layer with its 394 nodes. On the other hand, we employ the same architecture we propose for our method: a residual network (ResNet) with 20 layers. <ref> reveals that the use of writers as the surrogate class does not work as intended. Independent of the architecture, we achieve much worse results than a standard approach using SIFT descriptors or Zernike moments, cf. <ref>.

§.§ Influence of the Encoding Method

For the following experiments, we now train our network using the cluster indices as surrogate classes. Babenko <cit.> state that sum-pooling CNN activation features is superior to other encoding techniques such as VLAD or Fisher vectors. In <ref>, we compare sum-pooling to three other encoding methods:
I) Fisher vectors <cit.> using first and second order statistics, which have also been employed for writer identification <cit.>; we normalize them in a manner similar to the proposed VLAD normalization, i.e., power normalization followed by an l_2 normalization.
II) GMM supervectors <cit.>, which were used for writer identification by Christlein <cit.>, normalized by a Kullback-Leibler normalization scheme.
III) The proposed VLAD encoding <cit.>.

<Ref> shows that sum pooling (+ Sum) performs significantly worse than the other encoding schemes. While Fisher vectors (+ FV) trail the GMM supervectors (+ SV) and VLAD encodings (+ VLAD / m-VLAD / m-VLAD_400), GMM supervectors perform slightly better than the average of the non-whitened versions of the five VLAD encodings (+ VLAD). However, when using the m-VLAD approach (+ m-VLAD), jointly decorrelating the five VLAD encodings by PCA whitening, we achieve a much higher precision. Even if we incorporate a dimensionality reduction to 400 components (+ m-VLAD_400) during the PCA whitening process, the results are significantly better than those of the other encoding schemes with 6400 dimensions in the case of the GMM supervectors, or 12 800 in the case of the Fisher vectors.

§.§ Parameter Evaluation

<Ref> plots the writer retrieval performance for different numbers of surrogate classes, i.e., the number of clusters used as training targets. Interestingly, even a small number of 2 clusters is sufficient to produce better results than using the writers as surrogate classes. When using more than 1 000 clusters, the results are very similar to each other, with a peak at 5 000 clusters. To evaluate the importance of the number of layers, we employed a much deeper residual network consisting in total of 44 layers (instead of 20). Since the results in <ref> show that the increase in depth (L = 44) produces only a slight improvement, and comes with greater resource consumption, we stick to the smaller 20-layer network for the following experiments.
Next, we evaluate the influence of the parameter ρ, which is used to remove patches that do not clearly fall into one Voronoi cell computed by k-means, cf. <ref>. When using a factor of 1.0 (instead of 0.9), and thus not removing any patches, the performance drops from 74.1% to 72.4%.

§.§ Sampling Importance

Finally, we also evaluate the impact of the proposed restricted SIFT keypoint computation (R-SIFT) in comparison to standard SIFT, as well as the influence of binarization (bin.) in comparison to grayscale patches (gray). We standardize the grayscale patches to zero mean and unit standard deviation. <Ref> shows that binarization is in general beneficial for an improvement in precision. This is even more astonishing considering that several images belong to the same handwritten letter; thus, the background information should actually improve the results. A possible explanation could be that binary image patches are easier to train with, resulting in a better representation. When comparing SIFT with its restricted version (R-SIFT), the former consistently outperforms the restricted version by about 0.7%. It seems that completely blank patches do not harm the CNN classification. This might be related to the clustering process, since all these patches typically end up in one cluster. Furthermore, the extracted training patches are more diverse, and keypoints located right next to the contour are preserved, cf. <ref>.

In summary, we can state that: 1) m-VLAD encoding is the best encoding candidate. 2) Our method is quite robust to the number of clusters; given enough surrogate classes, the method outperforms surrogate classes that need label information. 3) The removal of descriptors (and corresponding patches) using a simple ratio criterion is beneficial. 4) Deeper networks do not seem to be necessary for the task of writer identification. 5) Patches extracted at SIFT keypoint locations computed on binarized images are preferable to other modalities.

§.§ Comparison with the state of the art

We compare our method with the state-of-the-art methods of Fiel <cit.> (SIFT + FV) and Christlein <cit.> (C-Zernike + m-VLAD). While the former uses SIFT descriptors encoded with Fisher vectors <cit.>, the latter relies on Zernike moments evaluated densely at the contour, subsequently encoded using the m-VLAD approach. <Ref> shows that our proposed method achieves superior results in comparison to these methods. Note that the encoding stage of the Contour-Zernike-based method is similar to ours; it differs only in the post-processing, where we use power normalization in preference to intra normalization <cit.>. However, the difference in accuracy is very small, see <cit.>. It follows that the improvement in performance relies solely on the better feature descriptors. The use of Exemplar SVMs for feature encoding gives another improvement of nearly 1.5%.

Additionally, we evaluate the method on the classification of medieval Latin script types. <Ref> shows that our method is slightly, but not significantly, better than state-of-the-art methods <cit.> (Soft-5: 98.1%).
Possible reasons are: a) the text areas in the images are not segmented, i.e., the images contain many more non-text elements, such as decorations, which might hamper the actual feature learning process; b) the images are not binarized, although binarization proves beneficial, cf. <ref>; c) one can train here with 166 instances per class on average, while only an exemplar classifier is trainable in the case of writer identification.

§ CONCLUSION

We have presented a simple method for deep feature learning using cluster memberships as surrogate classes for locally extracted image patches. The main advantage is that no training labels are necessary. All necessary training parameters have been evaluated thoroughly. We show that this approach outperforms supervised surrogate classes and traditional features in the case of writer identification and writer retrieval. The method also achieves results comparable to other methods on the task of classification of script types. As a secondary result, we found that binarized images are preferable to grayscale versions for the training of our proposed feature learning process. In the future, we want to investigate this further by evaluating only single handwritten lines instead of full paragraphs, to examine the influence of inter-linear spaces. Activations from layers other than the penultimate one are also worth examining. Another idea relates to the use of the last neural network layer, i.e., the predicted cluster membership for each patch. Since VLAD encoding relies on cluster memberships, this could be directly incorporated into the pipeline.
http://arxiv.org/abs/1705.09369v3
{ "authors": [ "Vincent Christlein", "Martin Gropp", "Stefan Fiel", "Andreas Maier" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170525213040", "title": "Unsupervised Feature Learning for Writer Identification and Writer Retrieval" }
[email protected] International School for Advanced Studies (SISSA), Via Bonomea 265, 34136 Trieste, Italy. Dipartimento di Fisica, Università degli Studi di Milano, Via Celoria 16, 20133 Milano, Italy Institute for Molecules and Materials, Radboud University Nijmegen, Heyendaalseweg 135, 6525 AJ Nijmegen, The Netherlands. CNR-IOM Democritos National Simulation Center, Via Bonomea 265, 34136 Trieste, Italy. International School for Advanced Studies (SISSA), Via Bonomea 265, 34136 Trieste, Italy. Institute for Molecules and Materials, Radboud University Nijmegen, Heyendaalseweg 135, 6525 AJ Nijmegen, The Netherlands. International School for Advanced Studies (SISSA), Via Bonomea 265, 34136 Trieste, Italy. The Abdus Salam International Centre for Theoretical Physics (ICTP), Strada Costiera 11, 34151 Trieste, Italy. The contact strength, adhesion and friction, between graphene and an incommensurate crystalline substrate such as h-BN depends on their relative alignment angle θ. The well-established Novaco-McTague (NM) theory predicts for a monolayer graphene on a hard bulk h-BN crystal face a small spontaneous misalignment, here θ_NM ≃ 0.45 degrees, which, if realized, would be relevant to a host of electronic properties besides the mechanical ones. Because experimental equilibrium is hard to achieve, we inquire theoretically about alignment or misalignment by simulations based on dependable state-of-the-art interatomic force fields. Surprisingly at first, we find compelling evidence for θ = 0, i.e., full energy-driven alignment in the equilibrium state of graphene on h-BN. Two factors drive this deviation from NM theory. First, graphene is not flat, developing on h-BN a long-wavelength out-of-plane corrugation. Second, h-BN is not hard, releasing its contact stress by planar contractions/expansions that accompany the interface moiré structure. Repeated simulations by artificially forcing graphene to keep flat, and h-BN to keep rigid, indeed yield an equilibrium misalignment similar to θ_NM as expected. Subsequent sliding simulations show that friction of graphene on h-BN, small and essentially independent of misalignments in the artificial frozen state, strongly increases in the more realistic corrugated, strain-modulated, aligned state. Graphene on h-BN: to align or not to align? E. Tosatti October 2, 2017 ===========================================§ INTRODUCTION Graphene, h-BN, MoS2, and other materials provide strong 2D monolayers of great importance in physics and technology. Practical use of such monolayers generally requires deposition on a substrate, often a crystal surface. Understanding the alignment, adhesion and friction between the two is instrumental to that end. The monolayer 2D graphene lattice and that of a substrate such as h-BN are generally incommensurate – not related through a rational fraction. That situation may lead to structural lubricity (sometimes called superlubricity), involving the possible vanishing of static friction and smooth sliding in the absence of defects.<cit.> On the other hand, the theory of incommensurate epitaxy, developed long ago in the context of adsorbed rare gas monolayers by Novaco & McTague<cit.> and others,<cit.> predicted a striking structural effect – immediately confirmed experimentally<cit.> – consisting of a small spontaneous misalignment angle θ = θ_NM of the adsorbed monolayer as a whole relative to the substrate axes.
Such misalignment influences the contact strength between lattices, with several consequences including a change of friction, as recently found in a different context.<cit.> As of now, however, a sharp assessment of the equilibrium alignment or misalignment and of the friction of graphene on pertinent substrates, such as h-BN, Cu, and others, is missing. Existing experimental work with h-BN deposited graphene<cit.> is abundant, and a variety of observed deposition angles are reported. Generally, it appears that the deposition history and kinetics dominate the relative angle much more than the subtle energy differences connected with misalignment. The precision of the observed self-rotation of micron-sized graphene flakes on h-BN towards small angles after annealing<cit.> is limited to θ < 0.7^∘. Most experiments concerning equilibrium alignment near θ = 0 remain inconclusive in this respect. The question whether the equilibrium graphene geometry is aligned or misaligned with h-BN or other substrates must therefore be resolved theoretically. According to Novaco-McTague,<cit.> the predicted equilibrium misalignment angle between an adsorbed monolayer and an underlying substrate lattice is generally nonzero and equal to θ_NM = arccos( [1+ρ^2(1+2δ)] / ρ[2+δ(1+ρ^2)] ), where ρ = a_s / a_C, with a_s the lattice constant of the hard substrate and a_C the lattice constant of the adsorbed layer. The misalignment depends on the ratio of the sound velocities c_L and c_T of longitudinal acoustic (LA) and transverse acoustic (TA) phonon modes of the adsorbed layer through the parameter δ = (c_L/c_T)^2 - 1. Importantly, there will only be a misalignment if ρδ > 1. The assumptions of this theory include a) incommensurate contact; b) weakness of the interaction between the two lattices; c) rigidity of the substrate; d) flatness of the adsorbate (i.e., negligible surface-normal displacements of the monolayer), making the problem strictly two-dimensional. The physics of this misalignment has been clear for a long time. At perfect alignment, θ = 0, the misfit dislocations ("solitons") composing the moiré pattern formed at the adsorbate/substrate contact concentrate the necessary 2D compression-expansion waves into stripes, strictly longitudinal in character and therefore energetically costly. Even a small misalignment angle allows the energy balance to change drastically. The moiré pattern size shrinks and therefore the soliton density increases: but the solitons' nature turns at the same time from longitudinal to shear. The latter is energetically cheaper because the shear sound velocity is generally much lower than the longitudinal one. As soon as the parameters in Eq. <ref> are such that the elastic energy drop overcompensates the cost, the misalignment becomes energetically favorable, and is realized in full equilibrium. Soon after its prediction it was indeed observed experimentally for Ar monolayers on graphite.<cit.> So universal is this misalignment mechanism that it even occurs for a colloid monolayer in an optical lattice, where characteristic distances are three-four orders of magnitude larger than for the rare gas adsorbed monolayers for which it was developed.<cit.> The question we therefore address is: should monolayer graphene, or h-BN, or MoS2, etc., also exhibit a misalignment by some small Novaco-McTague-type angle, once deposited on an incommensurate substrate? The effects of misalignment, if present, should be relevant to friction, which is generally influenced by the mutual lattice orientation.
It would also influence a variety of important physical and technological phenomena, from growth to mechanical, electrical and electronic ones. Last but not least, misalignment changes the length of the moiré patterns, which is used in experiments to establish the effective lattice mismatch. Experimentally, for graphene (a_C^exp = 1.4197 Å) on bulk h-BN (a_s^exp = 1.4460 Å) it is ρ = 1.018. The sound velocity ratio is c_L/c_T ≈ 1.6 for graphene (see e.g. Ref. ). This means δ = 1.56, leading to a predicted theoretical misalignment by θ_NM ≃ ±0.45^∘. A lattice misalignment smaller than 1^∘ may appear hard to detect, but the much larger moiré pattern rotation angle ψ, satisfying<cit.> tanψ = sinθ/(ρ - cosθ), will yield ψ ≫ θ_NM, much easier to visualize, effectively acting as a magnifying lens. Moreover, for a general θ the adsorbed graphene and the incommensurate substrate lattices form a moiré coincidence pattern of length L,<cit.> L = a_C √(3)(1+σ)/√(2(1+σ)(1-cosθ)+σ^2), where σ = (a_s-a_C)/a_C = ρ-1. The Novaco-McTague misaligned state of graphene, θ_NM ≃ 0.45^∘, would imply a moiré pattern of length L = 12.4 nm and moiré rotation angle ψ = ±22.9^∘. Simulated moiré patterns of graphene on h-BN (see Method) at θ = 0^∘, 0.45^∘ and 1.5^∘ are compared in Fig. <ref>. The decrease of L for graphene/h-BN obtained through Eq. <ref> is shown as a function of θ in Figure <ref>. Experimentally, moiré patterns with a length of approximately 14 nm have been reported,<cit.> consistent with perfect alignment, θ = 0^∘, and in disagreement with L = 12.4 nm predicted by the above theory. It was also observed that after long annealing, micron-sized graphene flakes initially at different angles slowly rotated towards θ ≈ 0^∘, the slow kinetics indicating a flat dependence of energy on angle for θ < 0.7^∘.<cit.> In general, it is likely that the observed deposition patterns could be out of equilibrium. The moiré lengths might also depend on graphene stretching, which if present would decrease the lattice mismatch σ. Figure <ref> shows that the length of the moiré pattern at 1.8% mismatch and θ = 0^∘ is close to that of the marginally smaller 1.7% mismatch (globally stretched graphene layer) and θ_NM ≃ 0.45^∘. Our present goal is to establish a more precise theoretical understanding of the equilibrium alignment to be expected for unstretched graphene on h-BN.§ METHOD For our simulations we modeled the graphene/h-BN system as a fully mobile single layer graphene on a flat h-BN monolayer substrate whose out-of-plane motion was inhibited, while in-plane motion was allowed. Even if in reality h-BN is not vertically rigid, its top layer, resting on the semi-infinite lattice underneath (that would be much more cumbersome to simulate), has a substantially smaller flexibility than the graphene membrane. The interatomic interactions within the graphene and h-BN layers were described by an optimized Tersoff potential (a_C = 1.439 Å, a_s = 1.442 Å).<cit.> Graphene interactions with h-BN were described with the Kolmogorov-Crespi (KC) potential<cit.> modified as described in Ref. , where the strength of the KC potential was doubled for C–N interactions and reduced to 60% for C–B ones. Also, since the C–C bond length depends on the chosen graphene potential, we adopted a slightly rescaled planar simulation cell size so that the graphene/h-BN size ratio exactly matches the experimental ratio ρ.
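Since the choice of simulation cells described next hinges on these geometric relations, the following minimal sketch (Python; input values rounded as quoted above, so the printed numbers agree with the text only to within that rounding) evaluates θ_NM, ψ and L for graphene/h-BN:

```python
import numpy as np

a_C, a_s = 1.4197, 1.4460     # graphene and h-BN lattice constants (Angstrom)
rho = a_s / a_C               # ~1.0185
delta = 1.6**2 - 1.0          # (c_L/c_T)^2 - 1 ~ 1.56

# Novaco-McTague misalignment angle (Eq. above); ~0.5 deg with these
# rounded inputs, vs. the 0.45 deg quoted from more precise sound speeds.
theta_NM = np.arccos((1 + rho**2 * (1 + 2*delta)) /
                     (rho * (2 + delta * (1 + rho**2))))

def moire(theta, a=a_C, sigma=rho - 1.0):
    """Moire rotation angle psi and moire length L at misalignment theta."""
    psi = np.arctan2(np.sin(theta), rho - np.cos(theta))
    L = a*np.sqrt(3)*(1 + sigma) / np.sqrt(2*(1 + sigma)*(1 - np.cos(theta)) + sigma**2)
    return psi, L

for th in (0.0, theta_NM):
    psi, L = moire(th)
    print(f"theta = {np.degrees(th):4.2f} deg:  psi = {np.degrees(psi):5.1f} deg,"
          f"  L = {L/10:5.1f} nm")
# theta = 0 gives L close to the ~14 nm aligned moire; the NM angle gives a
# visibly rotated and shorter (~12 nm) pattern, as discussed above.
```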
A sequence of 21 unit cells was created describing graphene adsorbed on h-BN with increasing misalignment angles from 0 to 30 degrees, especially focusing on the small angles. For each angle we constructed a sample as in Ref.  and carefully minimized its classical energy (T = 0), by allowing all atoms to relax their positions, while keeping at the same time the chosen overall alignment angle θ blocked by the periodic boundary conditions. A convergence test on the graphene corrugation Δz as a function of the number of underlying h-BN layers showed unreasonably large vertical h-BN displacements when vertical mobility was allowed, even for up to a dozen layers.<cit.> Eventually, in the semi-infinite limit, the vertical displacements would heal out: but that limit is very far away. In line with that, the simple assumption of a z-rigid, in-plane mobile h-BN substrate, adopted in the rest of this work, immediately provided very good agreement with the experimentally determined geometry (see Fig. <ref>: Δz ≃ 40 pm vs. Δz^exp ≃ 35 pm).<cit.> As it turned out, the two crucial elements that influenced alignment or misalignment were the graphene corrugation and the in-plane deformability of h-BN. To understand the role of the out-of-plane motion of graphene, easily permitted by its soft ZA modes, we repeated all calculations by imposing a rigid collective motion of the graphene atoms in the out-of-plane direction, which blocked corrugations. To investigate the effect of h-BN in-plane mobility, we also considered the case of a fully rigid h-BN plane. By combining the above cases we were able to extract the contribution to the energetics of some of the fundamental degrees of freedom involved.§ RESULTS The results of Fig. <ref> show the resulting angle-dependent changes of the total energy Δ E, obtained by considering the interplay between the intra-graphene (elastic) energy E_intra and the graphene/h-BN (adhesive) interlayer contribution E_inter. The total energy profile is very flat up to 0.26^∘ and, contrary to theoretical expectations, there is no well-defined minimum at or near θ_NM. In conclusion, the simulations show that misalignment does not really occur. While that is compatible with several observations, it contradicts the Novaco-McTague theory, which ought to have been applicable to this case. We must clarify why. The crucial clues are provided by structure: equilibrated graphene does not lie flat on h-BN. Fig. <ref> shows the range of z-distances between individual carbons in graphene and the rigid h-BN substrate plane. At the same time, and equally important, the BN planar lattice does not remain unperturbed, but to some extent mirrors the moiré. The vertical corrugation displacements of graphene over the substrate are as large as ±8% near θ = 0. In the Novaco theory, strictly 2D, the monolayer at θ = 0 has a higher energy than at θ = θ_NM. However, graphene as a flexible membrane is free to relax in the third, vertical direction. The vertical relaxation will reduce the energy, both for θ = 0 and θ = θ_NM, but the two energy gains need not be the same. Because at θ = 0 the misfit solitons are longitudinal and initially cost more energy than the shear misfit solitons at θ_NM, it is natural that vertical relaxation at θ = 0 will gain more energy than that at θ_NM. The result is that in general the Novaco-McTague rotation is weakened by vertical corrugation, and can therefore even disappear depending on the actual numbers. The vertical corrugation is accompanied by a nontrivial in-plane distortion of BN. As shown in Fig.
<ref>, the BN lattice squeezes into quasi-commensurability in the regions where graphene and BN adhere closely,<cit.> and releases back, compensating its strain, in the soliton regions where graphene bulges outwards. The combined result of the vertical graphene corrugation and of the concurrent in-plane BN lattice modulation is to eliminate the Novaco-McTague misalignment. As a decisive step to verify this hypothesis we repeated the simulations by keeping graphene artificially flat, allowing only in-plane relaxations and impeding corrugations, while also keeping the h-BN planar lattice fully rigid. Once we fulfil in this manner all the ideal Novaco-McTague conditions, we indeed recover, as shown in Fig. <ref>, a small but nonzero equilibrium rotation of about 0.26^∘. That confirms that the vertical corrugations of the graphene monolayer, along with a matching in-plane strain pattern of the h-BN substrate, are responsible for the weakening and essential suppression of the misalignment on h-BN that would otherwise be expected from the flat Novaco-McTague theory. There is to our knowledge no direct modification of the Novaco-McTague formulation that would theoretically describe the reason why corrugation on the one hand and the accompanying modulation of in-plane substrate strain on the other hand reduce and eventually eliminate the tendency to misalign the graphene lattice over the substrate. However, the simple structural analysis makes the physical reasons clear enough. With direct reference to the moiré shown in Fig. <ref>, the graphene/h-BN epitaxy comprises two regions: the closely adhesive hexagons, and the vertically corrugated soliton lines where graphene detaches from the substrate. As mentioned, the detachment reduces – screens, as it were – the cost of the soliton. That reduction will be larger for the very costly longitudinal solitons in the θ = 0 case than for the cheaper shear solitons at θ > 0, favoring alignment.
On the other hand, the in-plane h-BN strain brings the two lattices locally closer to commensurability, with a lattice mismatch reduction from 1.8% down to nearly zero, in agreement with experiment.<cit.> The larger moiré size at θ = 0 implies larger locally commensurate hexagons, inside which the graphene/h-BN adhesion is stronger. As a result, both factors conspire to stabilize the aligned state θ = 0.§ FRICTION The next and last point of our concern is the sliding friction of graphene on incommensurate h-BN. Friction is an important property with respect to anchoring and moving one system with respect to the other. In our case, we examine the question of how much sliding friction is influenced by either alignment or small misalignments, whichever the case. We do not try to address static friction, which is important but hard to study experimentally, and also hard to pursue computationally given the very large supercells required to reduce the finite-size effects, which are especially relevant at low velocities. Actually, in a structurally lubric (“superlubric”) system like the present one, the dynamic friction per unit area at infinite size is expected to be proportional to the velocity v, and therefore always larger than static friction, the latter arbitrarily small in the v = 0 limit. To obtain the dynamic friction, we simulated the sliding of graphene through non-equilibrium molecular dynamics, by applying a force F = 0.001 meV/Å to each C atom in the x-direction (perpendicular to the B–N substrate bond), typically producing a center-of-mass speed of 0.5 m/s. Application of a sliding force to the center of mass is not too different from pushing graphene by means of a large on-top AFM tip, while it does differ from the action of a tip exerting a side push on a bordered system. Although this speed is large, the viscous-like proportionality between speed and friction, typical of a lubric situation like ours, suggests that relative results would not change at lower speed regimes, harder to access in simulation. The frictional heat was absorbed through a Langevin damping mvγ, where γ = 0.1 ps^-1 and m, v are the C-atom mass and velocity, respectively. We conducted frictional simulations at θ = 0, 0.45, and 1.5^∘, for a fully mobile graphene over a rigid or a z-frozen h-BN substrate. The average dissipated frictional power (per C atom) was evaluated by<cit.> p_fric = F·<v_cm> - γ m <|v_cm|^2>, where v_cm is the center-of-mass velocity along x of the N atoms in the graphene sheet. The frictional powers obtained are reported in Table <ref>. For the more realistic case of a mobile h-BN substrate, the sliding friction generally remains constant up to about 0.45^∘, where the energy landscape and the interface strain map remain practically unchanged due to the C-BN squeezing into quasi-commensurability (see Fig. <ref>). Conversely, moving towards larger misalignment angles, e.g. 1.5^∘, dissipation experiences a significant drop, resembling the tribological response of an almost incommensurate interface, as evidenced by the more sinusoidal profile of Fig. <ref>, right panel. Because squeezing into such quasi-commensurability is much harder for graphene over a rigid substrate, the frictional values obtained in this case are much less influenced by the misalignment angle.
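In practice, the evaluation of p_fric amounts to simple post-processing of the center-of-mass velocity trace extracted from the trajectory; the sketch below uses a synthetic trace (our assumption, standing in for simulation output) with the force and damping values quoted above:

```python
import numpy as np

m = 12.011 * 1.66054e-27           # carbon atom mass (kg)
gamma = 0.1e12                     # Langevin damping: 0.1 1/ps, in 1/s
F = 0.001 * 1.602177e-22 / 1e-10   # 0.001 meV/Angstrom per atom, in N

t = np.linspace(0.0, 1e-9, 20001)            # 1 ns trace
v_cm = 0.5 + 0.05*np.sin(2*np.pi*t/1e-11)    # m/s: drift plus a ripple

# Average dissipated frictional power per atom, per the equation above:
p_fric = F*v_cm.mean() - gamma*m*(v_cm**2).mean()
print(f"dissipated power per atom: {p_fric:.2e} W")
```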
The increase in the measured friction due to substrate mobility is much more pronounced at 0-0.45^∘ (∼+50%) than for the misaligned system at 1.5^∘ (∼+25%).§ CONCLUSIONS In summary, we found that the spontaneous corrugation due to vertical z-relaxations of the adsorbed graphene monolayer, with an associated in-plane strain pattern of the h-BN substrate leading to locally quasi-commensurate portions of the incommensurate moiré, are the elements that remove the small misalignment predicted by the classic flat-adsorbate over rigid-substrate epitaxial theory. The closeness in energy between the aligned and the slightly misaligned geometries is in turn reflected by the not too dissimilar sliding frictions obtained for aligned or misaligned lattices. On the other hand, overlayer corrugation and substrate strain bring about an increase of sliding friction, the better interdigitation of the two lattices being reflected in a better anchoring between the two. The above information is of importance for a correct engineering of the graphene/h-BN bulk alloy, which thanks to its structurally lubric interface could combine flexibility with extreme strength, suitable for flexible coating applications,<cit.> high performance cables,<cit.> and probably more. The type of interplay between corrugation and strain which we found to be responsible for the stabilization of the aligned state will also bear consequences for electronics applications, as it is likely to influence transport in a nontrivial manner. More generally, it can be expected that this physical picture, once modified to account for different parameters, will be relevant to all strong layered sheets deposited on crystalline substrates.§ ACKNOWLEDGEMENTS We acknowledge COST Action MP1303. Work in Nijmegen is part of the research program of the Foundation for Fundamental Research on Matter (FOM), Netherlands Organisation for Scientific Research (NWO), and funding from the European Union Seventh Framework Programme under grant agreement No. 604391 Graphene Flagship. Work in Trieste was carried out under the ERC Advanced Grant No. 320796-MODPHYSFRICT. Discussions with D. Mandelli, M. Urbakh, and O. Hod are gratefully acknowledged. vanossi2013 A. Vanossi, N. Manini, M. Urbakh, S. Zapperi, and E. Tosatti, Rev. Mod. Phys., 2013, 85, 529. NM1977 A.D. Novaco and J.P. McTague, Phys. Rev. Lett., 1977, 38, 1286. NMPRB1979 J.P. McTague and A.D. Novaco, Phys. Rev. B, 1979, 19, 5299. shiba1979 H. Shiba, J. Phys. Soc. Jpn., 1979, 46, 1852. shiba1980 H. Shiba, J. Phys. Soc. Jpn., 1980, 48, 211. shaw78 C. G. Shaw, S. C. Fain, Jr., M. D. Chinn, Phys. Rev. Lett., 1978, 41, 955. mandelli2015 D. Mandelli, A. Vanossi, N. Manini, and E. Tosatti, Phys. Rev. Lett., 2015, 114, 108302. woods2016macroscopic C.R. Woods, F. Withers, M.J. Zhu, Y. Cao, G. Yu, A. Kozikov, M. Ben Shalom, S.V. Morozov, M.M. van Wijk, A. Fasolino, M.I. Katsnelson, K. Watanabe, T. Taniguchi, A.K. Geim, A. Mishchenko, and K.S. Novoselov, Nat. Comm., 2016, 7, 10800. zhang2015 X. Li, X. Lu, T. Li, W. Yang, J. Fang, G. Zhang, and Y. Wu, ACS Nano, 2015, 9, 11382. yankowitz2012 M. Yankowitz, J. Xue, D. Cormode, J.D. Sanchez-Yamagishi, K. Watanabe, T. Taniguchi, P. Jarillo-Herrero, P. Jacquod, and B.J. LeRoy, Nat. Phys., 2012, 8, 382. hod2016 I. Leven, T. Maaravi, I. Azuri, L. Kronik, and O. Hod, J. Chem. Theory Comput., 2016, 12, 2896. hermann2012periodic K. Hermann, J. Phys.: Cond. Mat., 2012, 24, 314210. koukaras2015phonon E.N. Koukaras, G. Kalosakas, C. Galiotis, and K. Papagelis, Sci. Rep., 2015, 5, 12923. tang2013precisely S.
Tang, H. Wang, Y. Zhang, A. Li, H. Xie, X. Liu, L. Liu, T. Li, F. Huang, X. Xie, et al., Sci. Rep., 2013, 3, 2666. woods2014commensurate C.R. Woods, L. Britnell, A. Eckmann, R.S. Ma, J.C. Lu, H.M. Guo, X. Lin, G.L. Yu, Y. Cao, R.V. Gorbachev, et al., Nat. Phys., 2014, 10, 451. LeRoy2011 J. Xue, J. Sanchez-Yamagishi, D. Bulmash, P. Jacquod, A. Deshpande, K. Watanabe, T. Taniguchi, P. Jarillo-Herrero and B. J. LeRoy, Nat. Mater., 2011, 10, 282. lammps S. Plimpton, J. Comput. Phys., 1995, 117, 1. brenner2002second D.W. Brenner, O.A. Shenderova, J.A. Harrison, S.J. Stuart, B. Ni, and S.B. Sinnott, J. Phys.: Cond. Mat., 2002, 14, 783. KC2005 A.N. Kolmogorov and V.H. Crespi, Phys. Rev. B, 2005, 71, 235415. slotman2015effect G.J. Slotman, M.M. van Wijk, P-L Zhao, A. Fasolino, M.I. Katsnelson, and S. Yuan, Phys. Rev. Lett., 2015, 115, 186801. lindsay2010optimized L. Lindsay and D.A. Broido, Phys. Rev. B, 2010, 81, 205441. cagin2012 A. Kınacı, J.B. Haskins, C. Sevik, and T. Çaǧın, Phys. Rev. B, 2012, 86, 115410. los2003intrinsic J.H. Los and A. Fasolino, Phys. Rev. B, 2003, 68, 024107. vanossi2012 A. Vanossi, N. Manini, and E. Tosatti, Proc. Nat. Ac. Sci., 2012, 109, 16429. lee2014science J-H Lee, P.E. Loya, J. Lou, and E.L. Thomas, Science, 2014, 346, 1092. landi2011nanoscale P. Jarosz, C. Schauerman, J. Alvarenga, B. Moses, T. Mastrangelo, R. Raffaelle, R. Ridgley, and B. Landi, Nanoscale, 2011, 3, 4542. tai2016apl Z. Zhang, W. Guo, and G. Tai, Appl. Phys. Lett., 2016, 90, 133103.
http://arxiv.org/abs/1705.09522v1
{ "authors": [ "Roberto Guerra", "Merel van Wijk", "Andrea Vanossi", "Annalisa Fasolino", "Erio Tosatti" ], "categories": [ "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170526104515", "title": "Graphene on h-BN: to align or not to align?" }
This work introduces and analyzes a finite element scheme for evolution problems involving fractional-in-time and in-space differentiation operators up to order two. The left-sided fractional-order derivative in time we consider is employed to represent memory effects, while a nonlocal differentiation operator in space accounts for long-range dispersion processes. We discuss well-posedness and obtain regularity estimates for the evolution problems under consideration. The discrete scheme we develop is based on piecewise linear elements for the space variable and a convolution quadrature for the time component. We illustrate the method's performance with numerical experiments in one- and two-dimensional domains.§ INTRODUCTION Since the introduction of Continuous Time Random Walks (CTRW) by Montroll and Weiss <cit.>, anomalous diffusion phenomena have been an active area of research among the scientific community. The CTRW assigns a joint space-time distribution to individual particle motions: when the tails of these distributions are heavy enough, non-Fickian dispersion results for all time and space scales. A heavy-tailed jump (waiting time) distribution implies the absence of a characteristic space (time) scale. The equivalence between these heavy-tailed motions and transport equations that use fractional-order derivatives has been shown by several authors; see, for example <cit.>. Space nonlocality is a direct consequence of the existence of arbitrarily large jumps in space, whereas time nonlocality is due to the history dependence introduced in the dynamics by the presence of anomalously large waiting times. The evidence of anomalous diffusion phenomena has been thoroughly reported in physical and social environments, such as plasma turbulence <cit.>, hydrology <cit.>, finance <cit.>, and human travel <cit.> and predator search <cit.> patterns. Models of transport dynamics in complex systems taking into account this non-Fickian behavior have been proposed accordingly. Also, evolution processes intermediate between diffusion and wave propagation have been shown to govern the propagation of stress waves in viscoelastic materials <cit.>. Integer-order differentiation operators are local, because the derivative of a function at a given point depends only on the values of the function in a neighborhood of it. In contrast, fractional-order derivatives are nonlocal, integro-differential operators. A left-sided fractional-order derivative in time may be employed to represent memory effects, while a nonlocal differentiation operator in space accounts for long-range dispersion processes. We now describe the problems we are going to consider in this work. Let Ω ⊂ ℝ^n be a domain with smooth enough boundary, α ∈ (0,2], s ∈ (0,1) and a forcing term f: Ω × (0,T) → ℝ. We aim to solve the fractional differential equation ∂_t^α u + (-Δ)^s u = f in Ω×(0,T).
Here, ∂_t^α denotes the Caputo derivative, given by ∂_t^α u(x,t) = 1/Γ(k-α) ∫_0^t 1/(t-r)^(α-k+1) ∂^k u/∂r^k (x,r) dr if k-1 < α < k, k ∈ ℕ, and ∂_t^α u(x,t) = ∂^k u/∂t^k (x,t) if α = k ∈ ℕ, while (-Δ)^s is the fractional Laplace operator, defined as (-Δ)^s u(x) = C(n,s) ∫_ℝ^n (u(x)-u(y))/|x-y|^(n+2s) dy. Above, C(n,s) = 2^(2s) s Γ(s+n/2)/(π^(n/2) Γ(1-s)) is a normalization constant. Closely related to the Caputo derivative, the Riemann-Liouville fractional derivative is needed in the sequel. Let us recall here its definition, ^R∂_t^α u(x,t) = 1/Γ(k-α) ∂^k/∂t^k ∫_0^t 1/(t-r)^(α-k+1) u(x,r) dr if k-1 < α < k, k ∈ ℕ, and ^R∂_t^α u(x,t) = ∂^k u/∂t^k (x,t) if α = k ∈ ℕ. For 0<α≤ 1, problem (<ref>) is usually called a fractional diffusion equation. On the other hand, for 1<α≤2 it is sometimes called a fractional diffusion-wave equation. Analyzing scaling and similarity properties of the Green function G_α,s associated to the operator ∂_t^α + (-Δ)^s, in <cit.> it is shown that G_α,s(x,t) = t^(-α/2s) Φ_α,s(x/t^(α/2s)), for a certain one-variable function Φ_α,s. Notice that in the case α = s, although the CTRW associated to equation (<ref>) has the same scaling properties as Brownian motion, the lack of finite moments makes the diffusion process anomalous. On the other hand, the term fractional wave equation has been utilized to refer to the problem with 1<α=2s<2, since for this choice of the parameters some features of the standard wave equation are preserved <cit.>. For example, the maximum, gravity center and mass center of the fundamental solution possess constant propagation velocity. Let v, b ∈ L^2(Ω) be given data. We complement problem (<ref>) with the initial and boundary value conditions u = 0 in Ω^c × [0,T), u(·, 0) = v, and, additionally for 1<α≤ 2, we require that ∂_t u(·, 0) = b. A variational formulation for the time-fractional problem involving a Caputo derivative of order α ∈ (0,1) has been recently studied by Karkulik <cit.>. In that work, the author shows that the Caputo derivative is a linear and bounded operator on a time-fractional Sobolev-Bochner space, considers a variational formulation based exclusively on Sobolev regularity, and proves that if the initial condition belongs to H^(1-1/α+ε)(Ω) for some ε > 0, then the time-fractional problem is well posed. As fractional-order differential operators involve singular kernels, the numerical approximation of equations involving them is a delicate task. Moreover, their nonlocal character calls for the design of efficient numerical schemes for the discretization. Several strategies have been shown to succeed in the one-dimensional context. For example, finite element schemes were treated in <cit.>; finite difference methods have been employed to discretize both the fractional-order space diffusion term <cit.> and the time derivative <cit.>; pseudospectral methods were considered in <cit.>. A comparison between these approaches may be found in <cit.>. Also, a convolution quadrature rule has been proposed in <cit.>, and, in turn, has been utilized for the approximation of fractional time derivatives in <cit.>. Nevertheless, the analysis and implementation of two-dimensional schemes is scarce. In <cit.>, the Dirichlet problem for the fractional Laplace operator was analyzed and a simple finite element implementation for the two-dimensional problem was introduced. Later on, the authors presented the code employed for such implementation in <cit.>. The space discretization in the current work is based on such a code.
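Before moving on, the Caputo definition above admits a quick numerical sanity check: approximate u' as piecewise constant and integrate the kernel exactly on each subinterval (an L1-type quadrature, used here only as an illustration; the scheme developed below relies on convolution quadrature instead). For u(t) = t and α = 1/2 the definition gives (1/Γ(1/2)) ∫_0^t (t-r)^(-1/2) dr = 2√(t/π):

```python
import numpy as np
from math import gamma, sqrt, pi

def caputo(u, t, alpha, n=2000):
    """Caputo derivative of order alpha in (0,1) of the callable u at time t."""
    r = np.linspace(0.0, t, n + 1)
    du = np.diff(u(r)) / np.diff(r)        # piecewise-constant u'
    # exact integral of (t-r)^(-alpha) over each subinterval [r_j, r_{j+1}]
    w = ((t - r[:-1])**(1 - alpha) - (t - r[1:])**(1 - alpha)) / (1 - alpha)
    return (du * w).sum() / gamma(1 - alpha)

t = 0.7
print(caputo(lambda s: s, t, 0.5), 2*sqrt(t/pi))   # both ~ 0.94408
```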
Recently, implementations based on finite elements <cit.> and integral representation formulas <cit.> have been proposed as well. It is noteworthy that the fractional Laplace operator defined by (<ref>) does not coincide with the operator considered, for example, in <cit.>. Indeed, the spatial operator considered in those works is a power of the Laplacian in the spectral sense. Our work does not include the case s=1, which corresponds to a local-in-space process, as it is already covered by other authors' work. For the range 0<α≤ 1, reference <cit.> develops a semidiscrete Galerkin method and studies the error both for smooth and non-smooth data. Naturally, the local-in-space case is also covered by the previously mentioned works <cit.> regarding spectral fractional powers of the Laplacian. For the full range of time derivatives we are considering in this work, <cit.> deals with an alternative formulation of (<ref>) and develops a method based on the Laplace transform, while in <cit.> an approach via discontinuous Galerkin discretization in time is introduced.§.§ Organization of the paper Preliminary concepts regarding fractional Sobolev spaces, elliptic regularity for the fractional Laplacian and the Mittag-Leffler function are discussed in Section <ref>. Moreover, that section also deals with the well-posedness of (<ref>) with conditions (<ref>), (<ref>). Afterwards, in Section <ref> we take advantage of the representation of solutions to derive regularity estimates for the fractional evolution problems under consideration. A numerical scheme, based on standard Galerkin finite element approximations for the space variable and a convolution quadrature for the time component, is proposed in Section <ref>, and error bounds for this scheme are presented in Section <ref>. In Section <ref> we present some numerical examples that illustrate the accuracy of our convergence estimates and the qualitative behavior of solutions to fractional evolution problems. Finally, Appendices <ref> and <ref> include details on the convolution quadrature rule utilized for the time discretization and on the error analysis of the semi-discrete scheme, respectively.§ PRELIMINARIES In this section we set the basic notation and present some preliminary concepts necessary for the analysis of the fractional evolution problems under consideration. We recall elliptic regularity results for the fractional Laplacian and some important properties of the Mittag-Leffler function. These properties are then utilized to derive a representation formula for solutions that allows us to prove the well-posedness of problem (<ref>).§.§ Sobolev spaces and fractional Laplace operator Let Ω ⊂ ℝ^n be an open set and s ∈ (0,1). The fractional Sobolev space H^s(Ω) is defined by H^s(Ω) = { w ∈ L^2(Ω) : |w|_H^s(Ω) := ( ∬_Ω^2 |w(x)-w(y)|^2/|x-y|^(n+2s) dx dy )^(1/2) < ∞}. This set, furnished with the norm ·_H^s(Ω) = ·_L^2(Ω) + |·|_H^s(Ω), constitutes a Hilbert space. If s>1 and it is not an integer, considering the decomposition s = m + σ, where m ∈ ℕ and σ ∈ (0,1), allows us to define H^s(Ω) by means of H^s(Ω) = { w ∈ H^m(Ω) : |D^α w|_H^σ(Ω) < ∞ for all α s.t. |α| = m }. Another important space of interest for the problems under consideration is that of functions in H^s(ℝ^n) supported within Ω̄, H̃^s(Ω) = { w ∈ H^s(ℝ^n) : supp w ⊂ Ω̄ }. The bilinear form ⟨ u, w ⟩_s := C(n,s) ∬_(ℝ^n×ℝ^n) ∖ (Ω^c × Ω^c) (u(x)-u(y))(w(x)-w(y))/|x-y|^(n+2s) dy dx constitutes an inner product on H̃^s(Ω).
The norm it induces, which is just a multiple of the H^s(ℝ^n)-seminorm, is equivalent to the full H^s(ℝ^n)-norm on this space, because a Poincaré-type inequality holds in it. See, for example, <cit.> for details. We emphasize that functions in H̃^s(Ω) are defined in the whole space ℝ^n. In particular, this allows us to consider the action of the fractional Laplacian over this set. Finally, Sobolev spaces of negative order are defined by duality, using L^2(Ω) as pivot space. Of interest in the problems we are considering is the space H^(-s)(Ω) = (H̃^s(Ω))'. Recalling definition (<ref>) and denoting by (·,·) the inner product in L^2(Ω), the weak formulation of problem (<ref>) reads: find u ∈ L^2(0,T; H̃^s(Ω)) such that ∫_0^T (∂_t^α u(·,t), w(·,t)) dt + ∫_0^T ⟨ u(·,t), w(·,t) ⟩_s dt = ∫_0^T (f(·,t), w(·,t)) dt for all w ∈ L^2(0,T; H̃^s(Ω)).§.§ Elliptic regularity We recall regularity results for the homogeneous problem (-Δ)^s u = g in Ω, u = 0 in Ω^c. Even though the fractional Laplacian is an operator of order 2s in ℝ^n, in the sense that (-Δ)^s : H^ℓ(ℝ^n) → H^(ℓ-2s)(ℝ^n) is bounded and invertible, the theory is much more delicate for problems posed on bounded domains. Grubb <cit.> provides regularity estimates for solutions of (<ref>) in the setting of Hörmander μ-spaces. We express these results in terms of standard Sobolev spaces, and refer to <cit.> for further details. Let Ω ⊂ ℝ^n be a bounded domain with smooth boundary, g ∈ H^r(Ω) for some r ≥ -s, and consider u ∈ H̃^s(Ω), the solution of the Dirichlet problem (<ref>). Then, there exists a constant C(n,s) such that |u|_H^(s+γ)(Ω) ≤ C g_H^r(Ω). Here, γ = min{ s, 1/2 - ε }, with ε > 0 arbitrarily small. Observe that, in general, it is not true that solutions of (<ref>) have 2s more derivatives than the right-hand side function g. No matter how regular g is, the solution of (<ref>) is not expected to be more regular than H^(s+γ)(Ω). In spite of this, the singular behavior of solutions can be localized at the boundary and described more appropriately in weighted spaces <cit.>. In view of Proposition <ref>, given v ∈ H̃^s(Ω), assuming that it satisfies (-Δ)^s v ∈ L^2(Ω) is weaker than assuming that v ∈ H^2s(Ω). Hypotheses of this type on the initial/boundary conditions are employed throughout this work.§.§ Mittag-Leffler function Let α > 0 and μ ∈ ℝ; then the Mittag-Leffler function E_α,μ : ℂ → ℂ is defined by E_α,μ(z) = ∑_k=0^∞ z^k/Γ(α k + μ). This is a complex function that depends on two parameters; in particular, it generalizes the exponential, in view of the identity E_1,1(z) = e^z for all z ∈ ℂ. The following properties of this family of functions are useful to derive the regularity estimates we present in Section <ref>. If α, λ > 0, then ∂_t^α E_α,1(-λ t^α) = -λ E_α,1(-λ t^α). Moreover, the following identities hold for integer-order differentiation: d/dt E_α,1(-λ t^α) = -λ t^(α-1) E_α,α(-λ t^α), and d/dt (t E_α,2(-λ t^α)) = E_α,1(-λ t^α). Let 0<α<2, μ ∈ ℝ, and let δ satisfy πα/2 < δ < min{π, πα}; then there exists a constant C(α,μ,δ) > 0 such that |E_α,μ(z)| ≤ C/(1+|z|) for δ ≤ |arg(z)| ≤ π.§.§ Solution representation Let {(ϕ_k, λ_k)}_k=1^∞ denote the solutions of the fractional eigenvalue problem (-Δ)^s u = λ u in Ω, u = 0 in Ω^c. It is well-known that the fractional Laplacian has a sequence of eigenvalues 0 < λ_1 < λ_2 ≤ …, λ_k → ∞ as k→∞, and that the eigenfunctions' set {ϕ_k}_k=1^∞ may be taken to constitute an orthonormal basis of L^2(Ω). Unlike the classical Laplacian, eigenfunctions of the fractional Laplacian are in general non-smooth <cit.>.
Indeed, considering a smooth function d that behaves like dist(x,∂Ω) near ∂Ω, all eigenfunctions ϕ_k belong to the space d^s C^(2s(-ε))(Ω̄) (the ε is active only if s=1/2) and ϕ_k/d^s does not vanish near ∂Ω. The best Sobolev regularity guaranteed for solutions of (<ref>) is ϕ_k ∈ H^(s+1/2-ε)(Ω) for ε>0 (see <cit.>). The reduced Sobolev regularity of eigenfunctions precludes the possibility of solutions to equation (<ref>) being smooth, even for α = 1. This is in stark contrast with the case of the classical Laplacian. However, solutions of diffusion equations with memory – local in space but fractional in time – are known to be less regular than their classical counterparts <cit.>. The effect of fractional differentiation in time is that high-frequency modes are less strongly damped than in classical diffusion, and the time derivatives of the solution are unbounded as t→0. In the next section we shall show that solutions of (<ref>) present the same behavior. See also the numerical experiments in Subsection <ref>. We write solutions of (<ref>) by means of separation of variables, u(x,t) = ∑_k=1^∞ u_k(t) ϕ_k(x). Then, for every k ≥ 1 it must hold that ∂_t^α u_k + λ_k u_k = f_k, u_k(0) = v_k, and u'_k(0) = b_k if α ∈ (1,2], where f_k = (f, ϕ_k), v_k = (v, ϕ_k), and b_k = (b, ϕ_k). Existence and uniqueness of solutions to (<ref>) follow from standard theory for fractional-order differential equations <cit.>. Moreover, solutions of (<ref>) may be represented as the superposition of the solution of the problem with vanishing initial data and the solution of the problem with vanishing forcing term. Namely, defining F_k(t) w = ∫_0^t (t-r)^(α-1) E_α,α(-λ_k (t-r)^α) w(r) dr, the solution of (<ref>) may be written as u_k(t) = F_k(t) f_k + v_k E_α,1(-λ_k t^α) (0 < α ≤ 1), u_k(t) = F_k(t) f_k + v_k E_α,1(-λ_k t^α) + b_k t E_α,2(-λ_k t^α) (1 < α ≤ 2). For the particular value α = 1 the above expression yields the well-known formula u_k(t) = ∫_0^t e^(-λ_k(t-r)) f_k(r) dr + v_k e^(-λ_k t), usually derived by the method of variation of parameters. Also, considering α = 2, by virtue of the identities E_2,1(z) = cosh(√(z)) and E_2,2(z) = sinh(√(z))/√(z), expression (<ref>) becomes u_k(t) = 1/√(λ_k) ∫_0^t sin(√(λ_k)(t-r)) f_k(r) dr + v_k cos(√(λ_k) t) + b_k sin(√(λ_k) t)/√(λ_k). Summing the solutions for every eigenmode, we obtain the following result. Let Ω be a bounded, smooth domain, s ∈ (0,1) and α ∈ (0,2]. Assume that f ∈ L^∞(0,T;L^2(Ω)), v ∈ L^2(Ω) and b ∈ L^2(Ω) are given. Then, problem (<ref>), with boundary conditions (<ref>) (and (<ref>) if α ∈ (1,2]), admits a solution u ∈ L^2(0,T; H̃^s(Ω)) that can be represented by (<ref>). The modes u_k are given, accordingly, by (<ref>) if α ∈ (0,1] and (<ref>) if α ∈ (1,2].§ REGULARITY OF SOLUTIONS In this section we state some regularity results for solutions of the problems under consideration. We split the estimates according to whether the initial values or the forcing term are null. For the sake of brevity we omit the proofs; these can be obtained following the arguments of <cit.>, and are based on the expansion (<ref>), utilizing Lemmas <ref> and <ref> to bound the corresponding coefficients. We also recall that throughout this paper we are assuming that Ω is a domain with smooth boundary, so that Proposition <ref> holds. According to that proposition, we fix the notation γ := min{s, 1/2-ε}, with ε > 0 arbitrarily small. Let 0<α≤ 1 and suppose that f ≡ 0.
Let u, given by (<ref>), be the solution of (<ref>) with initial and boundary conditions according to (<ref>). * If v ∈ L^2(Ω), then u ∈ C([0,T];L^2(Ω)) ∩ C((0,T];H̃^s(Ω) ∩ H^(s+γ)(Ω)) and ∂_t^α u ∈ C((0,T]; L^2(Ω)). Moreover, there exists a constant C>0 such that u_C([0,T];L^2(Ω)) ≤ C v_L^2(Ω), and u(·,t)_H^(s+γ)(Ω) + ∂_t^α u(·,t)_L^2(Ω) ≤ C t^(-α) v_L^2(Ω). * Assume that v ∈ H̃^s(Ω). Then, u ∈ L^2(0,T; H̃^s(Ω) ∩ H^(s+γ)(Ω)), ∂_t^α u ∈ L^2(Ω×(0,T)), and the following estimate holds: u_L^2(0,T;H^(s+γ)(Ω)) + ∂_t^α u_L^2(Ω×(0,T)) ≤ C v_H^s(Ω). * Furthermore, if v ∈ H̃^s(Ω) is such that (-Δ)^s v ∈ L^2(Ω), then u ∈ C([0,T];H̃^s(Ω) ∩ H^(s+γ)(Ω)), ∂_t^α u ∈ C([0,T]; L^2(Ω)), and the bound u_C([0,T];H^(s+γ)(Ω)) + ∂_t^α u_C([0,T];L^2(Ω)) ≤ C (-Δ)^s v_L^2(Ω) is satisfied for some C>0 independent of v. Regularity estimates for the fractional diffusion problem with a non-homogeneous right-hand side function f are also attainable. Let 0<α≤ 1 and v ≡ 0. Consider u, given by (<ref>), the solution of (<ref>) with homogeneous initial and boundary conditions. If f ∈ L^∞(0,T;L^2(Ω)), then u ∈ L^2(0,T;H̃^s(Ω) ∩ H^(s+γ)(Ω)), ∂_t^α u ∈ L^2(Ω×(0,T)), and u_L^2(0,T;H^(s+γ)(Ω)) + ∂_t^α u_L^2(Ω×(0,T)) ≤ C f_L^∞(0,T;L^2(Ω)). Estimates for the fractional diffusion-wave case are obtained similarly. Let 1<α≤ 2 and suppose that f ≡ 0. Let u, given by (<ref>), be the solution of (<ref>) with initial/boundary conditions (<ref>) and (<ref>). * Assume that v ∈ L^2(Ω) and b ∈ L^2(Ω). Then, u ∈ C([0,T];L^2(Ω)) ∩ C((0,T];H̃^s(Ω) ∩ H^(s+γ)(Ω)) and ∂_t^α u ∈ C((0,T]; L^2(Ω)). Moreover, there exists a constant C>0 such that u_C([0,T];L^2(Ω)) ≤ C (v_L^2(Ω) + b_L^2(Ω)), and u(·,t)_H^(s+γ)(Ω) + ∂_t^α u(·,t)_L^2(Ω) ≤ C (t^(-α) v_L^2(Ω) + t^(1-α) b_L^2(Ω)). * If v ∈ H̃^s(Ω) and b ∈ L^2(Ω), then ∂_t u ∈ C([0,T];H^(-s)(Ω)), and ∂_t u_C([0,T];H^(-s)(Ω)) ≤ C (v_H^s(Ω) + b_L^2(Ω)). * Moreover, if v ∈ H̃^s(Ω) is such that (-Δ)^s v ∈ L^2(Ω) and b ∈ H̃^s(Ω), then u ∈ C([0,T]; H̃^s(Ω) ∩ H^(s+γ)(Ω)) ∩ C^1([0,T];L^2(Ω)), ∂_t^α u ∈ C([0,T];L^2(Ω)), and the following estimates hold: u_C([0,T];H^(s+γ)(Ω)) + ∂_t^α u_C([0,T];L^2(Ω)) ≤ C ((-Δ)^s v_L^2(Ω) + b_H^s(Ω)), u_C^1([0,T];L^2(Ω)) ≤ C ((-Δ)^s v_L^2(Ω) + b_L^2(Ω)). Finally, estimates for problems with non-null forcing term and α ∈ (1,2] have the following form. Let 1<α≤ 2, v ≡ 0 and b ≡ 0. Consider u, given by (<ref>), the solution of (<ref>) with homogeneous initial and boundary conditions. If f ∈ C([0,T]; L^2(Ω)) is such that (-Δ)^s f ∈ L^2(Ω×(0,T)), then u ∈ C([0,T];H̃^s(Ω) ∩ H^(s+γ)(Ω)), ∂_t^α u ∈ C([0,T];L^2(Ω)), and u_C([0,T];H^(s+γ)(Ω)) + ∂_t^α u_C([0,T];L^2(Ω)) ≤ C ((-Δ)^s f_L^2(Ω×(0,T)) + f_C([0,T];L^2(Ω))).§ NUMERICAL APPROXIMATIONS In this section we devise a discrete scheme to approximate (<ref>). To this end, standard Galerkin finite elements are utilized for the spatial discretization (following <cit.>) and a convolution quadrature is used for the time variable (following <cit.>).§.§ Semi-discrete scheme For an appropriate treatment, it is convenient to derive the numerical scheme in two steps. First we discretize in space, and afterwards in time. We follow the ideas developed in <cit.>, taking advantage of the fact that, from the theoretical point of view, minor changes are required to handle the fractional Laplacian instead of its classical counterpart. Indeed, let 𝒯_h be a shape-regular and quasi-uniform admissible simplicial mesh on Ω, and let X_h ⊂ H̃^s(Ω) be the piecewise linear finite element space associated with 𝒯_h, namely, X_h := { u_h ∈ C(Ω̄) : u_h|_T ∈ 𝒫^1 ∀ T ∈ 𝒯_h, u_h|_∂Ω = 0 }. The semidiscrete problem reads: find u_h : [0,T] → X_h such that (∂_t^α u_h, w) + ⟨ u_h, w ⟩_s = (f, w) ∀ w ∈ X_h, with u_h(0) = v_h, and u'_h(0) = b_h if α ∈ (1,2].
Here, v_h = P_h v, b_h = P_h b, and P_h denotes the L^2(Ω)-projection onto X_h. Observe that, defining the discrete fractional Laplacian A_h : X_h → X_h as the unique operator that satisfies (A_h w, v) = ⟨ w, v ⟩_s for all w, v ∈ X_h, and considering f_h := P_h f, we may rewrite (<ref>) as ∂_t^α u_h + A_h u_h = f_h, with u_h(0) = v_h, and u'_h(0) = b_h if α ∈ (1,2].§.§ Fully discrete scheme At this point, a suitable discretization of the Caputo differentiation operator is required to obtain a fully discrete scheme. To this end, we employ the convolution quadrature technique described by Lubich in <cit.>, which allows us to derive discrete estimates of integrals involving singular kernels. Upon dividing [0,T] uniformly with a time step size τ = T/N, and letting t = nτ (n ∈ {1,…,N}), by means of the convolution quadrature rule we are able to estimate the Riemann-Liouville operator of a function g by ^R∂_t^α g(t) ≈ ∑_j=0^n ω_j g(t - jτ), where the weights {ω_j}_j∈ℕ_0 are obtained as the coefficients of the power series ((1-ξ)/τ)^α = ∑_j=0^∞ ω_j ξ^j. For the reader's convenience we give an overview of the main ideas of this technique in Appendix <ref> and refer the reader, for example, to <cit.> for further details. We are now able to suitably discretize the Caputo differentiation operator. To this end, we need to reformulate (<ref>) using the Riemann-Liouville derivative instead of the Caputo one. It is well-known that these two operators are related by (see, for example, <cit.>) ∂_t^α u(t) = ^R∂_t^α ( u(t) - ∑_k=0^⌊α⌋ u^(k)(0) t^k / k! ). Thus, we rewrite (<ref>) for the fractional diffusion case as ^R∂_t^α (u_h - v_h) + A_h u_h = f_h, u_h(0) = v_h, and for the fractional diffusion-wave case as ^R∂_t^α (u_h - v_h - t b_h) + A_h u_h = f_h, u_h(0) = v_h, u'_h(0) = b_h. Replacing the Riemann-Liouville derivative by its discrete version given by (<ref>), which we denote by ∂̄_τ^α, we formulate the fully discrete problem as: find U_h^n ∈ X_h, with n ∈ {1,…,N}, such that ∂̄_τ^α U_h^n + A_h U_h^n = ∂̄_τ^α v_h + F_h^n, U_h^0 = v_h, or ∂̄_τ^α U_h^n + A_h U_h^n = ∂̄_τ^α v_h + (∂̄_τ^α t) b_h + F_h^n, U_h^0 = v_h, for the fractional diffusion and fractional diffusion-wave problems respectively, where F_h^n = P_h f(t_n). In order to obtain a better error estimate in the diffusion-wave case, it is necessary to replace F_h^n with a corrected term G_h^n := ∂^(-1)_t f_h(t_n). See <cit.> for further details. For the sake of the reader's convenience, we conclude this section by giving the vectorial form of the fully discrete scheme. Let {φ_i}_i=1,…,𝒩 be the Lagrange nodal basis that generates X_h. Let U^n, F^n and G^n ∈ ℝ^𝒩, n=0,…,N, be such that U_h^n = ∑_i=1^𝒩 U_i^n φ_i, F_h^n = ∑_i=1^𝒩 F_i^n φ_i and G_h^n = ∑_i=1^𝒩 G_i^n φ_i, where U_h^n denotes the solution of the fully discrete problem. Then we formulate (<ref>) and (<ref>), respectively, as the following vectorial equations: M^(-1)·(ω_0 M + K)·U^n = (∑_j=0^n ω_j) U^0 - ∑_j=1^n ω_j U^(n-j) + F^n and M^(-1)·(ω_0 M + K)·U^n = (∑_j=0^n ω_j) U^0 + (∑_j=0^n ω_j τ(n-j)) B^0 - ∑_j=1^n ω_j U^(n-j) + G^n, where B^0 denotes the coefficient vector of b_h. Above, M, K ∈ ℝ^(𝒩×𝒩) are the mass and stiffness matrices, respectively. Namely, M_i,j = (φ_i, φ_j) and K_i,j = ⟨ φ_i, φ_j ⟩_s. The computation and assembly of the stiffness matrix in dimension greater than one is not a trivial task. However, this problem for two-dimensional domains has been tackled in <cit.>, where the authors provide a short MATLAB implementation. There are several options to compute the coefficients {ω_j}_j∈ℕ_0.
Recalling that ((1-ξ)/τ)^α = ∑_j=0^∞ ω_j ξ^j, the Fast Fourier Transform can be used for an efficient computation of {ω_j}_j∈ℕ_0 (see <cit.>). Alternatively, a useful recursive expression is also given in <cit.>: ω_0 = τ^(-α), ω_j = (1 - (α+1)/j) ω_(j-1), ∀ j>0. For the numerical experiments we exhibit in Section <ref> we have taken advantage of this identity.§ ERROR BOUNDS This section shows error estimates for the numerical scheme discussed in Section <ref>. The derivation of the error bounds can be carried out following the guidelines from <cit.>. Most of these results can be extrapolated to our case. Details of the proofs in those cases where the generalization is not direct are provided in Appendix <ref>.§.§ Error bounds for the semi-discrete scheme Here we focus only on the diffusion-wave case (1<α<2), where the generalization of the error bounds becomes more laborious. In this context, the error bounds are given by the generalization of two theorems. The first one, in the spirit of <cit.>, provides an error estimate for the homogeneous case. Let 1<α<2, let u be the solution of (<ref>) with v ∈ H̃^q(Ω), b ∈ H̃^r(Ω), q,r ∈ [0,2s], and f = 0; and let u_h be the solution of (<ref>) with v_h = P_h v, b_h = P_h b, and f_h = 0. Writing e_h(t) = u(t) - u_h(t), there exists a positive constant C = C(s,n) such that e_h_L^2(Ω) + h^γ |e_h|_H^s(Ω) ≤ C h^(2γ) ( t^(-α(2s-q)/(2s)) v_H^q(Ω) + t^(1-α(2s-r)/(2s)) b_H^r(Ω) ). To complete the error estimate for the semi-discrete scheme, it still remains to analyze the case v = 0, b = 0 and f ≠ 0. A proper generalization of <cit.> can be carried out following the guidelines outlined in that work. Let 1<α<2, f ∈ L^∞(0,T;L^2(Ω)), and let u and u_h be the solutions of (<ref>) and (<ref>) respectively, with f_h = P_h f and all the initial data equal to zero. Then, there exists a positive constant C = C(s,n) such that u - u_h_L^2(Ω) + h^γ |u - u_h|_H^s(Ω) ≤ C h^(2γ) |log h|^2 f_L^∞([0,T];L^2(Ω)).§.§ Error bounds for the fully-discrete scheme Considering all the theory displayed up to this point, error estimates for the fully-discrete scheme can be derived in the same way as in <cit.>. We refer the reader to that work for the details. Let u be the solution of problem (<ref>) with v ∈ H̃^q(Ω), b ∈ H̃^r(Ω), q,r ∈ [0,2s], and f = 0; and let U_h^n be the solution of (<ref>) or (<ref>) with v_h = P_h v, b_h = P_h b, and F_h^n = 0. Then, there exists a positive constant C = C(s,n) such that * If 0<α<1, then u(t_n) - U_h^n_L^2(Ω) ≤ C ( t_n^(αq/(2s) - 1) τ + t_n^(-α(2s-q)/(2s)) h^(s+γ) ) v_H^q(Ω). * If 1<α<2, then u(t_n) - U_h^n_L^2(Ω) ≤ C ( t_n^(αq/(2s) - 1) τ + t_n^(-α(2s-q)/(2s)) h^(s+γ) ) v_H^q(Ω) + C ( t_n^(αr/(2s)) τ + t_n^(1-α(2s-r)/(2s)) h^(2γ) ) b_H^r(Ω). In the previous theorem – and in Theorem <ref> as well – we wrote the orders of convergence in terms of various Sobolev norms of the data. For clarity, the hypotheses in Theorems <ref> and <ref> just involved either L^2 or H^s norms of the data. For instance, assuming that v ∈ H̃^s(Ω) is such that (-Δ)^s v ∈ L^2(Ω) and b ∈ H̃^s(Ω), the conclusions of Theorem <ref> read u(t_n) - U_h^n_L^2(Ω) ≤ C ( t_n^(α-1) τ + h^(s+γ) ) (-Δ)^s v_L^2(Ω) if 0<α<1, and u(t_n) - U_h^n_L^2(Ω) ≤ C ( t_n^(α-1) τ + h^(s+γ) ) (-Δ)^s v_L^2(Ω) + C ( t_n^(α/2) τ + t_n^(1-α/2) h^(2γ) ) |b|_H^s(Ω) if 1<α<2.
We emphasize that, as stated in Remark <ref>, the estimate (-Δ)^s v_L^2(Ω) ≤ C |v|_H^2s(Ω) holds for all v ∈ H̃^2s(Ω). Finally, we state the order of convergence of the fully-discrete scheme for the problems with a non-null source term. Let u be the solution of (<ref>) with homogeneous initial data and with f ∈ L^∞(0,T;L^2(Ω)); and let U_h^n be the solution of (<ref>) or (<ref>) with f_h = P_h f. Then, there exists a positive constant C = C(s,n) such that * For 0<α<1, if ∫_0^t (t-s)^(α-1) f'(s)_L^2(Ω) ds < ∞ for t ∈ (0,T], then u(t_n) - U_h^n_L^2(Ω) ≤ C ( h^(2γ) ℓ_h^2 f_L^∞([0,T];L^2(Ω)) + t_n^(α-1) τ f(0)_L^2(Ω) + τ ∫_0^t_n (t_n-s)^(α-1) f'(s)_L^2(Ω) ds ), where ℓ_h = |log h|. * If 1<α<2, then u(t_n) - U_h^n_L^2(Ω) ≤ C ( h^(2γ) ℓ_h^2 + τ ) f_L^∞([0,T];L^2(Ω)).§ NUMERICAL EXPERIMENTS This section exhibits the results of numerical tests for discretizations of problems posed in one- and two-dimensional domains. Numerical solutions of (<ref>) were obtained by applying the scheme described in Section <ref>. The experiments in two-dimensional geometries were carried out with a code based on the one presented in <cit.>.§.§ Explicit Solutions In <cit.> it is shown how some families of non-trivial solutions for the fractional Poisson problem can be constructed. For the sake of brevity, we refer the reader to that work for details. Here we summarize these results so that they can be applied to the evolution equation in the cases in which Ω corresponds to a) (-1,1) ⊂ ℝ and, more generally, b) B(0,1) ⊂ ℝ^n. Define, for n ≥ 1, the function ω^s : ℝ^n → ℝ, ω^s(x) = (1-|x|^2)_+^s. Then, u(x) := ω^s(x) g_k^(s)(x) solves (-Δ)^s u = f in Ω, u = 0 in Ω^c, with f(x) = μ_s^k g_k^(s)(x), where in case a) μ_s^k = Γ(2s+k+1)/k! and g_k^(s)(x) := C_k^(s+1/2)(x), and in case b) μ_s^k = 2^(2s) Γ(1+s+k) Γ(n/2+s+k) / (k! Γ(n/2+k)) and g_k^(s)(x) := P_k^(s,n/2-1)(2|x|^2-1). Above, C_k^(s+1/2) and P_k^(s,n/2-1) denote a Gegenbauer and a Jacobi polynomial <cit.>, respectively. Next, let h(t) be a function such that ∂_t^α h(t) can be easily computed. By means of separation of variables we can construct explicit solutions of the fractional evolution problem of the form u(x,t) = h(t) · ω^s(x) g_k^(s)(x).§.§ Orders of convergence In order to confirm the predicted convergence rates, we show the results we obtained in three example problems: * u(x,t) = E_α,1(-t^α) · ω^s(x) C_3^(s)(x), Ω = (-1,1); * u(x,t) = sin(t) · ω^s(x) C_3^(s)(x), Ω = (-1,1); * u(x,t) = E_α,1(-t^α) · ω^s(x) P_k^(s,0)(2|x|^2-1), Ω = B(0,1) ⊂ ℝ^2. For examples (a) and (b) we examine the time and spatial convergence at a fixed time t = 0.1. A fixed small time step is taken to observe the spatial convergence, and vice versa. Our results are summarized in Tables <ref>, <ref>, <ref> and <ref>. The experimental orders of convergence (e.o.c.) are in agreement with the theory in the case s>1/2, while our numerical examples exhibit e.o.c. (in space) higher than those predicted if s<1/2 (see Tables <ref> and <ref>). This behavior seems to be due to the fact that the extra regularity of the data present in our examples cannot be exploited in our arguments; the actual solutions are more regular than what is predicted by Theorems <ref> and <ref>.§.§ Qualitative aspects Finally, we present experiments that illustrate some qualitative effects of the fractional derivatives. In Figure <ref>, we fix s=0.5 and show the evolution in time for different values of the parameter α, ranging from fractional diffusion to fractional diffusion-wave. Memory effects are present for α = 0.5, while the solution oscillates for α = 1.5.
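The time profiles h(t) = E_α,1(-t^α) entering the exact solutions of the previous subsection, and underlying the reference curves in these figures, can be tabulated by direct summation of the defining series of the Mittag-Leffler function; a plain sketch (our choice of truncation, adequate only for the small-to-moderate arguments arising on the time windows used here):

```python
import numpy as np
from math import gamma

def mittag_leffler(z, alpha, mu=1.0, kmax=100):
    """Partial sum of E_{alpha,mu}(z); reliable for moderate |z| only."""
    return sum(z**k / gamma(alpha*k + mu) for k in range(kmax))

ts = np.linspace(0.0, 2.0, 9)
for alpha in (0.5, 1.0, 1.5):
    h = [mittag_leffler(-t**alpha, alpha) for t in ts]
    print(alpha, np.round(h, 4))
# alpha = 1 reproduces exp(-t); the fractional cases display the
# qualitatively different memory-dominated and wave-regime behaviors
# discussed above.
```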
Figure <ref>, in turn, displays the effect of varying the parameters α and s at a fixed time. It can be seen that increasing the spatial differentiation order s leads to a faster spreading of the initial condition. Apparent differences can be noticed among the three different problems with α = 2s exhibited along the diagonal of the figure. Our last example, in Figure <ref>, exhibits the persistence of a singularity in time due to the memory induced by the fractional-in-time derivative. In that experiment, we have set α = 0.99 and s = 0.9. Notice that the solution decays to 0 as time increases, but even though the differentiation parameters are both close to 1, which corresponds to the classical heat equation, the singular behavior of the initial condition persists in time.§.§ Acknowledgments The authors thank Prof. Bangti Jin for pointing out reference <cit.>.§ CONVOLUTION QUADRATURE RULE The aim of this appendix is to describe a numerical approximation technique for convolutions that plays an important role in the assembly of the numerical scheme we propose. Dividing [0,T] uniformly with a time step size τ = T/N, and letting t = nτ (n ∈ {1,…,N}), we seek a numerical approximation of the convolution integral k*g(t) = ∫_0^t k(r) g(t-r) dr by means of a finite sum ∑_j=0^n ω_j g(t-jτ). The weights {ω_j}_j∈ℕ_0 are obtained as the coefficients of the power series K(δ(ξ)/τ) = ∑_j=0^∞ ω_j ξ^j, where K denotes the Laplace transform of the kernel k, and δ(ξ) is the quotient of the generating polynomials of a linear multistep method. To obtain the weights ω_j, suppose that we extend the kernel k by zero over r ≤ 0 and that for all r>0 it satisfies |k(r)| ≤ C r^(μ-1) e^(cr) for some c, μ > 0. Then, the inversion formula k(r) = 1/(2πi) ∫_Γ K(z) e^(zr) dz holds, where Γ is a contour lying in the sector of analyticity of K, parallel to its boundary and oriented with increasing imaginary part. Furthermore, defining Σ_θ := { z ∈ ℂ : |arg(z)| ≤ θ }, θ ∈ (π/2,π), it holds that K is analytic in Σ_θ and satisfies |K(z)| ≤ C |z|^(-μ) ∀ z ∈ Σ_θ. This condition is in turn equivalent to (<ref>). Replacing (<ref>) in (<ref>) and switching the order of integration gives ∫_0^t k(r) g(t-r) dr = 1/(2πi) ∫_Γ K(z) ∫_0^t e^(zr) g(t-r) dr dz. Since the inner integral in the right-hand side is the solution of the ordinary differential equation y' = zy + g, with y(0)=0, we can obtain a numerical estimate by using some multistep method. For simplicity, suppose we utilize the Backward Euler discretization (BE), which gives the scheme (y_n - y_(n-1))/τ = z y_n + g_n. Multiplying both sides of the equality by ξ^n and summing over n, we obtain (1-ξ)/τ 𝐲(ξ) = z 𝐲(ξ) + 𝐠(ξ), where 𝐲(ξ) := ∑_n=0^∞ y_n ξ^n and 𝐠(ξ) := ∑_n=0^∞ g_n ξ^n. Defining δ(ξ) := 1-ξ, from (<ref>) we deduce 𝐲(ξ) = (δ(ξ)/τ - z)^(-1) 𝐠(ξ). Thus, the numerical approximation of y at time nτ is given by the n-th coefficient of the power series (δ(ξ)/τ - z)^(-1) 𝐠(ξ). In order to obtain the desired numerical approximation of (<ref>) we utilize the former expression, fix ξ and integrate in z the right-hand side of (<ref>). Using Cauchy's integral formula gives 1/(2πi) ∫_Γ K(z) (δ(ξ)/τ - z)^(-1) 𝐠(ξ) dz = K(δ(ξ)/τ) · 𝐠(ξ). Therefore, the numerical approximation of (<ref>) at t = nτ is given by the n-th coefficient of the power series K(δ(ξ)/τ) · 𝐠(ξ). Finally, noticing that the coefficients of this series are the Cauchy product of the sequences {ω_n}_n∈ℕ_0 and {g(nτ)}_n∈ℕ_0, where {ω_n} are the coefficients of the power series expansion of K(δ(ξ)/τ), we obtain an expression for the weights in (<ref>).
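The construction just described becomes very concrete for K(z) = z^α with backward Euler: the weights follow from the recursion recalled in Section <ref>, and the quadrature can be tested against the Riemann-Liouville derivative of g(t) = t, whose exact value t^(1-α)/Γ(2-α) is known in closed form (a self-contained sketch; the test function and parameters are our choices):

```python
import numpy as np
from math import gamma

alpha, T, N = 0.5, 1.0, 200
tau = T / N

# omega_0 = tau^(-alpha);  omega_j = (1 - (alpha+1)/j) * omega_{j-1}
w = np.empty(N + 1)
w[0] = tau**(-alpha)
for j in range(1, N + 1):
    w[j] = (1.0 - (alpha + 1.0) / j) * w[j - 1]

t = tau * np.arange(N + 1)
g = t.copy()                                   # g(t) = t, g(0) = 0
approx = np.array([np.dot(w[:n+1], g[n::-1]) for n in range(N + 1)])
exact = t**(1 - alpha) / gamma(2 - alpha)
print(np.abs(approx - exact)[1:].max())        # expected to shrink like tau
```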
Given a complex valued function K, analytic in Σ_θ and satisfying (<ref>), we use the transfer function notation for (<ref>), K(z)g(t) := k*g(t) = ∫^t_0 k(r)g(t-r)dr, where k is given by (<ref>), and the notation K(δ(ξ)/τ)g(t) := ∑_j = 0^n ω_j g(t - jτ) for the discrete approximation. Next, we generalize the definition of the Convolution Quadrature Rule to operators that satisfy (<ref>) with a negative value of μ. Indeed, let m be a positive integer such that μ + m>0; setting K̃(z) := z^-mK(z), we define K(z)g(t) := d^m/dt^m k̃*g(t) = d^m/dt^m∫^t_0k̃(r)g(t-r)dr, with k̃ the kernel associated with K̃. All the results and estimates achieved in the former case remain valid under this generalization (see <cit.>). This is convenient because we are interested in the particular case K(z)=z^α, which delivers z^α g(t) := d^m/dt^m∫^t_0 r^m-α-1/Γ(m-α) g(t-r) dr = ∂_t^α g(t), with m a positive integer such that m-1≤α < m. Considering this, we set the notation K(∂_t):=K(z) and K(∂̄_τ) := K(δ(ξ)/τ).§ ERROR ANALYSIS OF THE SEMI-DISCRETE SCHEME We first derive an integral representation of u for the homogeneous case f = 0. Define the sector Σ_θ := { z ∈ℂ : |arg(z)|<θ}; then u(t) : [0,T] → L^2(Ω) can be analytically extended to Σ_π/2 (see <cit.>). Applying the Laplace transform in (<ref>) we obtain z^αû(z) + Aû(z) = z^α - 1v + z^α - 2b, where A is the fractional Laplacian with homogeneous Dirichlet conditions. Therefore, via the Laplace inversion formula, we write the integral representation u(t) = 1/2π i∫_Γ_θ,δ e^zt(z^αI + A)^-1( z^α - 1v +z^α - 2b) dz, where Γ_θ,δ = { z ∈ℂ : |z| = δ, |(z)|≤θ}∪{ z ∈ℂ : z = re^± i θ, r ≥δ}. If we choose θ such that π/2 < θ < min{π,π/α}, then z^α∈Σ_θ' with θ' = αθ for all z ∈Σ_θ. With θ chosen in this way, there exists a constant C depending only on θ and α such that (z^αI + A)^-1_L^2()≤ C|z|^-α. As in (<ref>), we can write an analogous expression for u_h, u_h(t) = 1/2π i∫_Γ_θ,δ e^zt(z^αI + A_h)^-1( z^α - 1v_h +z^α - 2b_h) dz. The following technical result can be proved analogously to <cit.>. Let φ∈H^s(), and z ∈Σ_θ with π/2 < θ < min{π,π/α}. Then there exists a positive constant c(θ) such that |z^α| φ_L^2()^2 +| φ |_H^s()^2 ≤ c | z^αφ_L^2()^2 +| φ |_H^s()^2 |. The next lemma establishes an error estimate between (z^αI + A)^-1f and its discrete approximation (z^αI + A_h)^-1P_h f, analogous to <cit.>. Let f ∈ L^2(), z ∈Σ_θ, w := (z^αI + A)^-1f, w_h := (z^αI + A_h)^-1P_h f. Then there exists a positive constant C(s,n,θ) such that w - w_h_L^2() + h^γ| w - w_h |_H^s()≤ Ch^2γf_L^2(). As before, γ = min{ s, 1/2 - ε}, with ε > 0 arbitrarily small. We consider first the case s ≥ 1/2. By definition of w and w_h, it holds that z^α( w , φ) + ⟨ w , φ⟩_s = (f , φ ), ∀φ∈H^s(), and z^α( w_h , φ) + ⟨ w_h , φ⟩_s = (f , φ ), ∀φ∈ X_h. If we set e_h := w - w_h and subtract these two expressions, we derive z^α(e_h , φ) + ⟨ e_h , φ⟩_s = 0, ∀φ∈ X_h. Applying Lemma <ref> and this identity, we arrive at | z^α| e_h^2_L^2() + |e_h|^2_H^s() ≤ c | z^α (e_h,e_h) + ⟨ e_h , e_h ⟩_s | = c | z^α (e_h, w-φ) + ⟨ e_h , w-φ⟩_s | ∀φ∈ X_h. Taking φ = Π_h w in the former expression, where Π_h is a suitable quasi-interpolation operator (see, for example, <cit.>), we deduce | z^α|e_h^2_L^2() + |e_h|^2_H^s()≤ c ( e_h_L^2() h^1/2 - ε|w|_H^s() + |e_h|_H^s() h^1/2 - ε|w|_H^s+1/2-ε()), where we have used the fact that s ≥ 1/2 implies that h^s ≤ h^1/2 - ε. On the other hand, if we choose φ = w in Lemma <ref>, we obtain | z^α| w^2_L^2() +|w|^2_H^s() ≤ c | z^α( w , w ) + ⟨ w , w ⟩_s| = c | (f,w)| ≤ c f_L^2() w_L^2(). Consequently, w_L^2()≤ c | z |^-αf_L^2() and |w|_H^s()≤ c | z |^-α/2f_L^2().
From Proposition <ref>, we know that for the case z = 0 the estimate |w|_H^s+1/2-ε()≤ C f _L^2() holds. Utilizing this estimate with -z^α w + f instead of f, we obtain |w|_H^s+1/2-ε()≤ C -z^α w + f _L^2()≤ c f _L^2(), where in the last inequality we used (<ref>). Combining this with (<ref>), we derive | z^α| e_h^2_L^2() + |e_h|^2_H^s()≤ c h^1/2 - ε f _L^2()( z^α/2e_h_L^2() + |e_h|_H^s()). This implies that | z^α| e_h^2_L^2() + |e_h|^2_H^s()≤ c h^1 - 2ε f ^2_L^2(), and gives the bound for |e_h|_H^s(). Next, we aim to estimate e_h_L^2(). For this purpose, we proceed via the following duality argument. Given φ∈ L^2(), define ψ := (z^αI + A)^-1φ and ψ_h := (z^αI + A_h)^-1P_hφ. Thus, we write e_h_L^2() = sup_φ∈ L^2() |(e_h,φ)|/φ_L^2() = sup_φ∈ L^2() |z^α(e_h,ψ) + ⟨ e_h,ψ⟩_s |/φ_L^2(). We aim to bound the supremum in the identity above. Resorting to (<ref>) and the Cauchy–Schwarz inequality, we bound |z^α(e_h, ψ) + ⟨ e_h,ψ⟩_s |= |z^α(e_h,ψ - ψ_h) + ⟨ e_h,ψ - ψ_h ⟩_s | ≤ z^α/2e_h_L^2() z^α/2ψ - ψ_h_L^2() + |e_h|_H^s() |ψ - ψ_h|_H^s()≤( z^α/2e_h_L^2() + |e_h|_H^s()) ( z^α/2ψ - ψ_h_L^2() +|ψ - ψ_h|_H^s()). Finally, applying (<ref>) we arrive at |z^α(e_h,ψ) + ⟨ e_h,ψ⟩_s | ≤ c h^1 - 2εf_L^2()φ_L^2(), from which we can derive the desired inequality. The analysis of the case s ≤ 1/2 can be carried out analogously. Indeed, using that h^s ≥ h^1/2 - ε we obtain, instead of (<ref>), the inequality | z^α| e_h^2_L^2() + |e_h|^2_H^s()≤ c ( e_h_L^2() h^s |w|_H^s() + |e_h|_H^s() h^s |w|_H^s+1/2-ε()), and proceeding as before we arrive at the desired estimate. At this point, we are able to give a sketch of the proof of Theorem <ref>. The proof outlined in <cit.> can be reproduced using Lemma <ref> instead of Lemma 3.1 from that work and approximation properties of the elliptic projection, along with the estimate (-Δ)^s w_L^2()≤ C | w |_H^2s(). The details are therefore omitted. Finally, we consider the problem with zero initial conditions and a non-zero source term. Using the approximation properties of the elliptic projection together with an appropriate generalization of <cit.> (in the same spirit as Lemma <ref>), the proof given in that work can be adapted to our purpose without major difficulties.
1]Golnaz Badkobeh (supported by the Leverhulme Trust on the Leverhulme Early Career Scheme) 2]Travis Gagie 3]Shunsuke Inenaga 4]Tomasz Kociumaka (supported by Polish budget funds for science in 2013–2017 under the `Diamond Grant' program) 5]Dmitry Kosolobov 5]Simon J. Puglisi (supported by the Academy of Finland via grant 294143) [1]Department of Computer Science, University of Warwick, Coventry, England ([email protected]) [2]EIT, Diego Portales University and CeBiB, Santiago, Chile ([email protected]) [3]Department of Informatics, Kyushu University, Fukuoka, Japan ([email protected]) [4]Institute of Informatics, University of Warsaw, Warsaw, Poland ([email protected]) [5]Department of Computer Science, University of Helsinki, Helsinki, Finland ([email protected], [email protected]) On Two LZ78-style Grammars: Compression Bounds and Compressed-Space Computation ================================================================================ We investigate two closely related LZ78-based compression schemes: LZMW (an old scheme by Miller and Wegman) and LZD (a recent variant by Goto et al.). Both LZD and LZMW naturally produce a grammar for a string of length n; we show that the size of this grammar can be larger than the size of the smallest grammar by a factor Ω(n^1/3) but is always within a factor O((n/log n)^2/3). In addition, we show that the standard algorithms using Θ(z) working space to construct the LZD and LZMW parsings, where z is the size of the parsing, work in Ω(n^5/4) time in the worst case. We then describe a new Las Vegas LZD/LZMW parsing algorithm that uses O(z log n) space and O(n + z log^2 n) time with high probability. Keywords: LZMW, LZD, LZ78, compression, smallest grammar § INTRODUCTION The LZ78 parsing <cit.> is a classic dictionary compression technique, discovered by Lempel and Ziv in 1978, that gained wide use during the 1990s in, for example, the Unix compress tool and the GIF image format. Not written about until much later was the fact that LZ78 actually produces a representation of the input string as a context-free grammar. In recent years, grammar compressors have garnered immense interest, particularly in the context of compressed text indexing: it is now possible to efficiently execute many operations directly on grammar-compressed strings, without resorting to full decompression (e.g., see <cit.>). A wide variety of grammar compressors are now known, many of them analyzed by Charikar et al. <cit.> in their study of the smallest grammar problem, which is to compute the smallest context-free grammar that generates the input string (and only this string). Charikar et al. show that this problem is NP-hard, and further provide lower bounds on approximation ratios for many grammar compressors. LZ78 is shown to approximate the smallest grammar particularly poorly, and can be larger than the smallest grammar by a factor Ω(n^2/3/log n) (in <cit.> this bound was improved to Ω((n/log n)^2/3)), where n is the input length. Our focus in this paper is on the LZD <cit.> and LZMW <cit.> grammar compression algorithms, two variants of LZ78 that usually outperform LZ78 in practice. Despite their accepted empirical advantage over LZ78, no formal analysis of the compression performance of LZD and LZMW in terms of the size of the smallest grammar exists. This paper addresses that need.
Moreover, we show that the standard algorithms for computing LZD and LZMW have undesirable worst case performance, and provide an alternative algorithm that runs in log-linear randomized time. In particular the contributions of this article are as follows: * We show that the size of the grammar produced by LZD and LZMW can be larger than the size of the smallest grammar by a factor Ω(n^1/3) but is always within a factor O((n/log n)^2/3). To our knowledge these are the first non-trivial bounds on compression performance known for these algorithms.* Space usage during compression is often a concern. For both LZD and LZMW, parsing algorithms are known that use O(z) space, where z is the size of the final parsing. We describe strings for which these algorithms require Ω(n^5/4) time. (The only previous analysis is an O(n^2/log n) upper bound <cit.>.)* We describe a Monte-Carlo parsing algorithm for LZD/LZMW that uses a z-fast trie <cit.> and an AVL-grammar <cit.> to achieve O(z log n) space and O(n + z log^2 n) time for inputs over the integer alphabet {0,1,…,n^O(1)}. This algorithm works in the streaming model and computes the parsing with high probability. Using the Monte-Carlo solution, we obtain a Las Vegas algorithm that, with high probability, works in the same space and time. In what follows we provide formal definitions and examples of LZD and LZMW parsings. Section <ref> then establishes bounds for the approximation ratios for the sizes of the LZD/LZMW grammars. In Section <ref> we consider the time efficiency of current space-efficient parsing schemes for LZD/LZMW. Section <ref> provides an algorithm with significantly better (albeit randomized) performance. Conclusions and reflections are offered in Section <ref>.§.§ Preliminaries. We consider strings drawn from an alphabet Σ of size σ = |Σ|. The empty string is denoted by ϵ. The ith letter of a string s is denoted by s[i] for i such that 1 ≤ i ≤ |s|, and the substring of s that begins at position i and ends at position j is denoted by s[i..j] for 1 ≤ i ≤ j ≤ |s|. Let s[i..j] = ϵ if j<i. For any i,j, the set {k∈ℤ : i ≤ k ≤ j} (possibly empty) is denoted by [i..j]. For convenience, we assume that the last letter of the input string s is $, where $ is a special delimiter letter that does not occur elsewhere in the string. The LZD (LZ–Double) parsing <cit.> of a string s of length n is the parsing s = p_1 p_2 ⋯ p_z such that, for i ∈ [1..z], p_i = p_i_1p_i_2 where p_i_1 is the longest prefix of s[k..n] and p_i_2 is the longest prefix of s[k+|p_i_1|..n] with p_i_1, p_i_2∈{p_1,…,p_i-1}∪Σ, where k=|p_1⋯ p_i-1|+1. We refer to the set Σ∪⋃_i ∈ [1..z]{p_i} as the dictionary of LZD. The LZMW (LZ–Miller–Wegman) parsing <cit.> of a string s of length n is the parsing s = p_1 p_2 ⋯ p_z such that, for i ∈ [1..z], p_i is the longest prefix of s[k..n] with p_i ∈{p_j p_j+1 : 1 ≤ j ≤ i-2}∪Σ, where k=|p_1⋯ p_i-1|+1. We refer to the set ⋃_i ∈ [2..z]{p_i-1p_i} as the dictionary of LZMW. The LZD parsing of the string s = abbaababaaba$ is p_1=ab, p_2=ba, p_3=abab, p_4=aab, and p_5=a$. This can be represented by (a,b), (b,a), (1, 1), (a, 1), (a,$). The LZMW parsing of s is the following: p_1=a, p_2=b, p_3=b, p_4=a, p_5=ab, p_6=ab, p_7=aab, p_8=a, and p_9=$. This can be represented by (a,b,b,a,1,1,4,a,$). Notice that the LZD/LZMW parsing of string s can be seen as a grammar that only generates s, with production rules of form p_i → p_jp_k (j < i, k < i) or p_i → a (a ∈Σ) for each phrase p_i, and the start rule S → p_1 p_2 ⋯ p_z.
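For concreteness, the following Python sketch (our own reference code, not from the paper) implements both definitions naively, keeping the dictionary in a plain set and finding the longest dictionary prefix by brute force; the asserts reproduce the two parsings of the example above.

    def lzd(s):
        dictionary = set(s)            # Sigma: all letters of s
        phrases, k = [], 0
        while k < len(s):
            parts = []
            for _ in range(2):         # a phrase is a pair p_{i1} p_{i2}
                if k == len(s):
                    break
                m = 1
                for j in range(1, len(s) - k + 1):
                    if s[k:k + j] in dictionary:
                        m = j          # longest dictionary prefix of s[k..]
                parts.append(s[k:k + m])
                k += m
            phrases.append("".join(parts))
            dictionary.add(phrases[-1])
        return phrases

    def lzmw(s):
        dictionary = set(s)            # Sigma; pairs of adjacent phrases added below
        phrases, k = [], 0
        while k < len(s):
            m = 1
            for j in range(1, len(s) - k + 1):
                if s[k:k + j] in dictionary:
                    m = j
            phrases.append(s[k:k + m])
            k += m
            if len(phrases) >= 2:
                dictionary.add(phrases[-2] + phrases[-1])
        return phrases

    assert lzd("abbaababaaba$") == ["ab", "ba", "abab", "aab", "a$"]
    assert lzmw("abbaababaaba$") == ["a", "b", "b", "a", "ab", "ab", "aab", "a", "$"]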
The size of a grammar is the total number of symbols in the right-hand sides of the production rules. Thus, the size of the LZD (resp., LZMW) grammar is only a constant factor larger than the number of phrases in the LZD (resp., LZMW) parsing.§ APPROXIMATING THE SMALLEST GRAMMAR The following theorem shows that, although LZD and LZMW have good compression performance in practice on high-entropy strings, their performance on low-entropy strings can be very poor. For arbitrarily large n, there are strings s of length n for which the size of the grammars produced by the LZD and LZMW parsings is larger than the size of the smallest grammar generating s by a factor Ω(n^1/3). Our proof is inspired by <cit.>. Let k ≥ 4 be an integer that is a power of 2. We will construct a string s of length n = Θ(k^3) that can be encoded by a grammar of size O(k) = O(n^1/3), but for which the LZMW parsing produces a grammar of size Ω(k^2) = Ω(n^2/3). The input alphabet is {a,b,c,d}; the letters c and d serve as separators. Denote δ_i = a^i bb a^k-i and γ_i = ba^i a a^ib c ba ba^2 ba^3⋯ ba^i. The string s is as follows:[x = δ_k δ_k-1 δ_k δ_k-2 δ_k δ_k-3⋯δ_k δ_k/2+1 δ_k a^k-1,; s = γ_0γ_1⋯γ_k-1δ_0 dδ_1 d⋯δ_k d c aa c aa^2a^2 ⋯ ca^2^i-1a^2^ia^2^i⋯ ca^k/2-1a^k/2a^k/2 dcx^k/2. ] We have |s| = Θ(k^3). Consider the prefix γ_0γ_1 ⋯γ_k-1 δ_0 dδ_1 d ⋯δ_k d, which will ensure that the strings δ_i are in the LZMW dictionary. We will show by induction on i that each substring γ_i of the prefix γ_0γ_1⋯γ_k-1 is composed of the phrases ba^i, a, a^ib, cbaba^2⋯ ba^i in the parsing of the string s. It is trivial for i = 0. Suppose that i > 0 and the assertion holds for all γ_i' and i' < i. It follows from the inductive hypothesis that ba^i is the longest prefix of γ_i that is equal to a concatenation of two adjacent phrases introduced before the starting position of γ_i. Hence, by the definition of LZMW, the string γ_i starts with the phrase ba^i. In the same way we deduce that the phrase ba^i is followed by the phrases a, a^ib, and cbaba^2⋯ ba^i. By a similar inductive argument, one can show that each substring δ_i d of the substring δ_0 dδ_1 d⋯δ_k dc is composed of the phrases a^ib, ba^k-i, d. Since the phrases a^ib and ba^k-i are adjacent, the LZMW dictionary now contains the strings δ_i = a^ibba^k-i for all i = 0,1,…, k. Similarly, the substring c aa caa^2a^2 ⋯ ca^2^i-1a^2^ia^2^i⋯ ca^k/2-1a^k/2a^k/2 dc is parsed as c, a, a, ca, a^2, a^2, …, ca^2^i-1, a^2^i, a^2^i, …, ca^k/2-1, a^k/2, a^k/2, dc. In what follows we need only the string a^k, introduced to the dictionary by the pair of adjacent phrases a^k/2. Finally, consider the substring x^k/2. Observe that the first occurrence of x is parsed in (almost) the way it is written, i.e., it is parsed as δ_k, δ_k-1, δ_k, δ_k-2, …, δ_k, δ_k/2+1, δ_k. But the last phrase is a^k instead of a^k-1. In other words, the parsing of the second occurrence of x starts from the second position of x and, therefore, the first phrases of this parsing are as follows: δ_k-1, δ_k-2, δ_k-1, δ_k-3, …, δ_k-1, δ_k/2, δ_k-1. Again, the last phrase is a^k and, hence, the parsing of the third occurrence of x starts with the third position of x, and so on. The LZMW parsing of s, therefore, consists of Ω(k^2) phrases and the size of the LZMW grammar is Ω(k^2).
But there is a grammar of size O(k) producing s:[S →Γ_0Γ_1⋯Γ_k-1Δ_0dΔ_1d⋯Δ_kd cA_2cA_5cA_11⋯ cA_k/2+k-1dcX^k/2,; A_0 →ϵ,B_0 → c, A_i → A_i-1a,B_i → B_i-1bA_i for i ∈ [1..2k],; Γ_i → bA_2i+1bB_i, Δ_i → A_i bb A_k-i for i ∈ [0..k],;X →Δ_kΔ_k-1 Δ_kΔ_k-2⋯Δ_kΔ_k/2+1 Δ_k A_k-1. ] Using similar ideas we can describe a troublesome string for the LZD scheme: s = (a^2 c^2 a^3 c^3 ⋯ a^kc^k)(bb abb a^2bb a^3 ⋯ bba^k-1bb)(δ_0d^2δ_1d^3 ⋯δ_kd^k+2)x^k/2. As above, the size of the grammar corresponding to the LZD parsing of s is Ω(k^2) whereas the size of the smallest grammar is O(k); hence, the result follows.[ S → A_2C_2A_3C_3⋯ A_kC_kbbA_1bbA_2⋯ bbA_k-1bbΔ_0D_2Δ_1D_3⋯Δ_kD_k+2X^k/2,; A_0 →ϵ, C_0 →ϵ, D_0 →ϵ, A_i → A_i-1a, C_i → C_i-1c, D_i → D_i-1d for i ∈ [1..k+2],;Δ_i → A_i bb A_k-i for i ∈ [0..k], X →Δ_kΔ_k-1 Δ_kΔ_k-2⋯Δ_kΔ_k/2+1 Δ_k A_k-1. ] The analysis is similar to the above but simpler, so we omit it. To additionally verify the correctness of both constructions, we conducted experiments on small k and, indeed, observed the described behavior; the code can be found in <cit.>. We can also show that the upper bound for the approximation ratio of the LZ78 parsing given in <cit.> also applies to the LZD and LZMW parsings. For this, we will use the following known results. If there is a grammar of size m generating a given string, then this string contains at most mk distinct substrings of length k. All phrases in the LZD parsing of a given string are distinct. Let p_1p_2⋯ p_z be the LZMW parsing of a given string. Then, for any i ∈ [2..z] and j ∈ [i+2 .. z], we have p_i-1p_i ≠ p_j-1p_j. If p_i-1p_i = p_j-1p_j for i < j - 1, then, by the definition of LZMW, the phrase p_j-1 either is equal to p_i-1p_i or contains p_i-1p_i as a prefix, which is a contradiction. Now we are ready to show an upper bound on the approximation ratio of the LZD and LZMW parsings. For all strings s of length n, the size of the grammar produced by the LZD/LZMW parsing is larger than the size of the smallest grammar generating s by at most a factor O((n/log n)^2/3). The theorem can be shown in a way analogous to the upper bound for the LZ78 parsing against the smallest grammar <cit.> (which is especially straightforward for LZD due to Lemma <ref>), but we provide a full proof for completeness. Let us consider LZMW. Suppose that s is a string of length n and m^* is the size of the smallest grammar generating s. Let p_1, p_2, …, p_z be the LZMW parsing of s. It suffices to evaluate the number z of phrases since the total size of the grammar produced by LZMW is only a constant factor larger than z. Consider the multiset S = {p_1p_2, p_2p_3, …, p_z-1p_z} (recall that a multiset can contain an element more than once). Let p_i_1p_i_1+1, p_i_2p_i_2+1, …, p_i_z-1p_i_z-1+1 be a sequence of all strings from S sorted in increasing order of their lengths (again, some strings may occur more than once in the sequence). We partition the sequence by grouping the first 2· m^* strings, then the next 2· 2m^* strings, the next 2· 3m^* strings, and so forth. Let r be the minimal integer satisfying 2(1m^* + 2m^* + ⋯ + rm^* + (r+1)m^*) > z. This implies that z = O(r^2m^*). By Lemma <ref>, any string has at most two occurrences in the multiset S. Also, it follows from Lemma <ref> that s contains at most km^* distinct substrings of length k. Thus, for any k ≥ 1, there are at most 2km^* strings from S that generate substrings of length k. This implies that each string in the kth group generates a substring of length at least k.
Hence, we have that 2n ≥ |p_i_1p_i_1+1| + |p_i_2p_i_2+1| + ⋯ + |p_i_z-1p_i_z-1+1| ≥ 2(1^2m^* + 2^2m^* + ⋯ + r^2m^*), which implies that r = O((n/m^*)^1/3). By plugging this into z = O(r^2m^*), we obtain z = O((n/m^*)^2/3m^*) and thus the approximation ratio of the grammar produced by LZMW is O((n / m^*)^2/3). Since m^* = Ω(log n), we finally get the desired bound O((n/log n)^2/3). Let us sketch the analysis of LZD, which is very similar. In this case, we consider the set S' of all phrases p_1, p_2, …, p_z (not pairs as in LZMW) of the LZD parsing. Let p_i_1, …, p_i_z be the sequence of all strings from S' sorted in increasing order of lengths. We partition the sequence into groups of size 1m^*, 2m^*, 3m^*, … (without the factor 2 as in LZMW). It follows from Lemma <ref> that any string occurs in S' at most once. Therefore, similarly to the case of LZMW, we obtain n = |p_i_1| + |p_i_2| + ⋯ + |p_i_z| ≥ 1^2m^* + 2^2m^* + ⋯ + r^2m^*, which implies the result in the same way as above.§ SMALL-SPACE COMPUTATION In this section we analyze the time required to compute the LZD and LZMW parsings using the O(z)-space algorithms described by Goto et al. <cit.> and Miller and Wegman <cit.>, where z is the number of phrases. We focus on LZD throughout, but a very similar algorithm and analysis applies for LZMW. Goto et al. upper-bound the runtime by O(z(m + min(z,m)logσ)), where m is the length of the longest LZD (or LZMW) phrase and σ is the size of the input alphabet. Because m = O(n) and z = O(n), the runtime is upper bounded by O(n^2). Below we provide a lower bound of Ω(n^5/4) on the worst-case runtime, but before doing so we provide the reader with a description of Goto et al.'s algorithm <cit.>.[We concern ourselves here with LZD parsing, but it should be easy for the reader to see that the algorithms are trivially adapted to instead compute LZMW.] Naïve parsing algorithms. In the compacted trie for a set of strings, each edge label ℓ is represented as a pair of positions delimiting an occurrence of ℓ in the set. In this way we can store the trie for s_1, …, s_k in O(k) space. During parsing Goto et al. <cit.> maintain the dictionary of LZD phrases in a compacted trie. The trie is of size O(z), but read-only random access to the input string is also required in order to determine the actual values of the strings on the edge labels. Initially the trie is empty, consisting of only the root. At a generic step during parsing, when we go to compute the phrase p_i = p_i_1p_i_2 starting at position j = |p_1p_2… p_i-1| + 1, the trie contains nodes representing the phrases p_1, p_2, …, p_i-1 and all the distinct symbols occurring in s[1..j-1], and all these nodes (corresponding to phrases and symbols) are marked. Note that there may also be some nodes in the trie that do not correspond to any phrase, i.e., branching nodes. Let s[j..k] be the longest prefix of s[j..n] that can be found by traversing the trie from the root. If s[j..k] cannot be matched even for k = j, then s[j] is the leftmost occurrence of symbol c = s[j] in s, and we add a child node of the root labelled with c, mark the node, and set it as the first element of the new phrase, i.e., p_i_1 = c. Otherwise, the first element of p_i, p_i_1, is the string written on the path connecting the root and the lowest marked node on the path that spells s[j..k].
The second element, p_i_2, of the phrase is computed in a similar manner, by searching for s[j+|p_i_1|+1..n] in the trie. After computing p_i we modify the trie by a standard procedure so that there is a marked node representing p_i: first, we traverse the trie from the root finding the longest prefix of p_i present in the trie; then, possibly, create one or two new nodes; and, finally, mark the node (which, possibly, did not exist before) corresponding to p_i (the details can be found in any stringology textbook). The time taken to compute a new phrase and update the trie afterwards is bounded by O(m + min(z,m)logσ), where m = O(n) is the length of the longest phrase (and therefore an upper bound on the length of the longest path in the trie), min(z,m) is an upper bound on the number of branching nodes, and logσ is the time taken to find the appropriate outgoing edge at each branching node during downward traversal. Over all z phrases the runtime is thus O(z(m + min(z,m)logσ)). The LZMW construction algorithm of Miller and Wegman <cit.> is analogous but, unlike the LZD algorithm, when we go to compute the phrase p_i, the trie contains the strings p_1p_2, p_2p_3, …, p_i-2p_i-1 and the nodes corresponding to these strings are marked. One can easily show that the running time of this algorithm is O(z(m + min(z,m)logσ)), where z and m are defined analogously as for LZD. We call both these algorithms naïve. Worst-case time of the naïve algorithms. Now let us investigate the worst-case time complexity of the naïve LZD and LZMW construction algorithms. The naïve LZD and LZMW construction algorithms take time Ω(n^5/4) in the worst case. Let k ≥ 8 be an integer that is a power of two. We will describe a string s of length n = Θ(k^4) for which the basic LZD construction algorithm (see the above discussion) spends Θ(n^5/4) time to process it. The string s is composed of pairwise distinct letters a_i,j, for i, j ∈ [1..k], and “separator” letters, all of which are denoted by the same symbol but supposed to be pairwise distinct. We will first construct a prefix s' of s that forces the algorithm to fill the dictionary with a set of strings that are used as building blocks in further constructions. To this end, denote (with parentheses used only for convenience):[ w_i = a_i,1a_i,2⋯ a_i,k for i = 1,2,…,k and w = w_1w_2 ⋯ w_k,; s_pre,i = w_i[1..2]w_i[1..3] ⋯ w_i[1..k] for i = 1,2,…,k,; s_suf,i = w_i[k-1..k]w_i[k-2..k] ⋯ w_i[2..k] for i = 1,2,…,k,;p = (s_pre,1s_pre,2⋯ s_pre,k) (s_suf,1s_suf,2⋯ s_suf,k),; q = (w_k-2w_k-1)(w_k-3w_k-2w_k-1) ⋯ (w_1w_2⋯w_k-1) (w),; s' = pq· w^2^1 w^2^2⋯ w^k (w_k[2..k]w^k) (w_k[3..k]w^k) ⋯ (w_k[k..k]w^k). ] Analyzing the prefix p of s', it is clear that the LZD construction algorithm adds to the dictionary exactly all prefixes and suffixes of the strings w_i for i = 1,2,…,k; parsing the string q, the algorithm adds the strings w_k-2w_k-1, w_k-3w_k-2w_k-1, …, w_1w_2⋯ w_k-1, and w_1w_2 ⋯ w_k = w; then, processing the string w^2^1 w^2^2⋯ w^k, the algorithm adds w^2^1, w^2^2, …, w^k (we are interested only in w^k); finally, the strings w_k[2..k]w^k, w_k[3..k]w^k, …, w_k[k..k]w^k are added. So, the algorithm adds to the dictionary exactly the following strings: * all prefixes and suffixes of w_i (including w_i itself) for i = 1,2,…,k;* w_k-2w_k-1, w_k-3w_k-2w_k-1, …, w_1w_2⋯ w_k-1, and w;* w^k along with w^k/2,…, w^2^2, w^2 (we use only w^k in what follows);* w_k[2..k]w^k, w_k[3..k]w^k, …, w_k[k..k]w^k. It is easy to verify that |w| = k^2, |w^k| = k^3, and |s'| = Θ(k^4).
(The string w_k[2..k]w^k w_k[3..k]w^k ⋯ w_k[k..k]w^k contributes the most to the length.) We first provide an overview of our construction. The main load on the running time of the algorithm is concentrated in the following strings z_i: z_i = w_i[2..k]w_i+1⋯ w_k w^k-2w_1⋯ w_i for i = 1,2,…,k - 2. Put s = s'x_1z_1 x_2z_2⋯ x_k-2z_k-2, where x_1, …, x_k are auxiliary strings defined below. Before the processing of z_i, the algorithm processes x_i and adds the strings w_i[j..k]w_i+1⋯ w_k-1w_k[1..j-1] and w_k[j..k]w_1⋯ w_i-1w_i[1..j] for j ∈ [2..k] to the dictionary (see below). So, analyzing z_i, the algorithm consecutively “jumps”, for j = 2,3,…, k, from the string w_i[j..k]w_i+1⋯ w_k-1w_k[1..j-1] to w_k[j..k]w_1⋯ w_i-1w_i[1..j] and so on. The crucial point is that, while analyzing w_k[j..k]w_1⋯ w_i-1w_i[1..j], the algorithm does not know in advance that the string w_k[j..k]w^k from the dictionary does not occur at this position and, since the length of the longest common prefix of the strings w_k[j..k]w^k and w_k[j..k]w^k-jw_1⋯ w_i is Θ(k-j+1 + |w^k-j|), spends Θ(|w^k-j|) = Θ((k-j)k^2) time verifying this. Therefore, the analysis of the string s takes Θ((k - 2)∑_j=2^k (k-j)k^2) = Θ(k^5) time overall. Since |z_i| = O(k^3) and, as is shown below, |x_i| = O(k^3), we have n = |s| = Θ(k^4) and the processing time is Θ(n^5/4) as required. We now describe this in more detail. We prove by induction that the following invariant is maintained: when the algorithm starts the processing of the suffix x_iz_i⋯ x_k-2z_k-2 of the string s (x_i are defined below), the dictionary contains the following set of strings: * “building blocks” constructed during the processing of s';* pairs of separators (recall that all separators are distinct);* for each i' ∈ [1..i-1] and j ∈ [2..k]: w_i'[j..k]w_i'+1⋯ w_k-1w_k[1..j-1] and w_k[j..k]w_1⋯ w_i'-1w_i'[1..j], w_i'[j..k]w_i'+1⋯ w_k-1 and w_k[j..k]w_1⋯ w_i'-1, w_i'[j..k]w_i'+1⋯ w_k w_1⋯ w_i'-1w_i'[1..j]. The strings from the last two lines in the above list are not used and appear as byproducts. (But it is still important to have them in mind to verify that the algorithm works as expected.) So, assume that, by the inductive hypothesis, the invariant holds for all i' ∈ [1..i-1] (it is trivial for i = 1). Define x_i as follows (the parentheses are only for visual ease):[ u'_i,j = (w_k[j..k]w_1⋯ w_i-1 w_i[1..j]),;u_i,j = (w_k[j..k]w_1⋯ w_i-2w_i-1[1..j]) (w_i-1[j+1..k]) u'_i,j,; v_i,j = (w_i[j..k]w_i+1⋯ w_k-1) (w_i[j..k]w_i+1⋯ w_k-1w_k[1..j-1]),;x_1 = (u'_1,2 u'_1,3⋯ u'_1,k-1 u'_1,k) (v_1,2 v_1,3⋯ v_1,k),;x_i = (u_i,2 u_i,3⋯ u_i,k-1 u'_i,k) (v_i,2 v_i,3⋯ v_i,k), for i ≠ 1. ] Observe that |x_i| = O(k^3). Using the inductive hypothesis, one can prove that the algorithm adds the strings w_k[j..k]w_1⋯ w_i-1 (for j ≠ k), w_k[j..k]w_1⋯ w_i-1w_i[1..j], w_i[j..k]w_i+1⋯ w_k-1, and w_i[j..k]w_i+1⋯ w_k-1w_k[1..j-1] for j ∈ [2..k] to the dictionary after the processing of x_i (plus several pairs of separators). It remains to show that the algorithm adds exactly the strings w_i[j..k]w_i+1⋯ w_k w_1 ⋯ w_i-1w_i[1..j], for j ∈ [2..k], to the dictionary when processing z_i. Observe that, for j ∈ [2..k], w_i[j..k]w_i+1⋯ w_k-1w_k[1..j-1] is the longest string from the dictionary that has prefix w_i[j..k], and w_k[j..k]w_1⋯ w_i-1w_i[1..j] is the longest string from the dictionary that has prefix w_k[j..k] and does not coincide with w_k[j..k]w^k.
Hence, the algorithm consecutively “jumps” over the substrings w of the string z_i, adding after each such “jump” the string w_i[j..k]w_i+1⋯ w_k w_1 ⋯ w_i-1w_i[1..j] to the dictionary (for j = 2,3,…,k). No other strings are added. Each time the algorithm processes a substring w_k[j..k]w_1⋯ w_i-1w_i[1..j], it also verifies in Θ(ki + |w^k-j|) time whether the string w_k[j..k]w^k occurs at this position. Therefore, by the above analysis, processing takes Θ(|s|^5/4) time. An analogous troublesome string for the naïve LZMW construction algorithm is as follows (again, all separators are assumed to be distinct letters):[w_i = a_i,1a_i,2⋯ a_i,k and w = w_1w_2 ⋯ w_k,;s_pre,i = w_i[1..2] w_i[1..3]⋯ w_i[1..k],;s_suf,i = w_i[k-1..k] w_i[k-2..k]⋯ w_i[2..k],; p = s_pre,1 s_pre,2⋯ s_pre,k s_suf,1 s_suf,2⋯ s_suf,k,;q = w_k-2w_k-1 w_k-3w_k-2w_k-1⋯ w_1w_2⋯w_k-1 w,; s' = p q w^2^1 w^2^2⋯ w^k w_k[2..k]w^k w_k[3..k]w^k⋯ w_k[k..k]w^k,;y_j = w_k[j..k]w_1 w_k[j..k]w_1w_2[1..j],; t_i,j = w_i-2[j+1..k]w_i-1[1..j] w_i-1[j+1..k]w_i[1..j],; u_i,j = (w_k[j..k]w_1⋯ w_i-3w_i-2[1..j]) (w_i-2[j+1..k]w_i-1[1..j]),;v_i,j = w_i[j..k]w_i+1⋯ w_k-1 w_i[j..k]w_i+1⋯ w_k-1w_k[1..j-1],;x_i = t_i,2 t_i,3⋯ t_i,k-1 u_i,2 u_i,3⋯ u_i,k v_i,2 v_i,3⋯ v_i,k,; z_i = w_i[2..k]w_i+1⋯ w_k w^k-2w_1⋯ w_i,;s = s'y_2 y_3⋯ y_k x_4 z_4 x_6 z_6 ⋯ x_2j z_2j⋯x_k-2 z_k-2. ] Let us explain at a high level why the LZMW algorithm works slowly on s. While analyzing the prefix s' y_2 y_3 ⋯ y_k, the algorithm adds a number of “building block” strings into the LZMW dictionary, including the strings w[j..k]w^k for j = 2,3,…,k (recall that, unlike the LZD dictionary containing phrases, the LZMW dictionary contains pairs of adjacent phrases). Before the processing of z_i, the algorithm processes x_i and adds the strings w_i[j..k]w_i+1⋯ w_k-1w_k[1..j-1] (from v_i,j), w_k[j..k]w_1⋯ w_i-2w_i-1[1..j] (from u_i,j), and w_i-1[j+1..k]w_i[1..j] (from t_i,j) to the dictionary. The concatenation of these three strings is w_i[j..k]w_i+1⋯ w_kw_1⋯ w_i-1w_i[1..j], so, analyzing z_i, the algorithm consecutively “jumps”, for j = 2,3,…, k, from the string w_i[j..k]w_i+1⋯ w_k-1w_k[1..j-1] to w_k[j..k]w_1⋯ w_i-2w_i-1[1..j] and then to w_i-1[j+1..k]w_i[1..j], thus producing three new phrases (and then moves on to j+1). The point is that, while analyzing the string w_k[j..k]w_1⋯ w_i-2w_i-1[1..j], the algorithm does not know in advance that the string w_k[j..k]w^k from the dictionary does not occur at this position and, since the length of the longest common prefix of the strings w_k[j..k]w^k and w_k[j..k]w^k-jw_1⋯ w_i is Θ(k-j+1 + |w^k-j|), spends Θ(|w^k-j|) = Θ((k-j)k^2) time verifying this. Therefore, the analysis of the string s takes Θ((k/2)∑_j=2^k (k-j)k^2) = Θ(k^5) time overall. Since n = |s| = Θ(k^4), the processing time is Θ(n^5/4) as required. We omit the detailed proof since it is very similar to the LZD case. To additionally verify the correctness of both constructed examples, we ran the naïve LZD and LZMW algorithms (with some diagnostics to track their execution) on the examples for small k and, indeed, observed the expected “bad” behavior in the special positions described above. Our verifying code (it can be found in <cit.>) thoroughly checks the correspondence of the behavior of the parsers in the special positions to the behavior discussed in the above text. Thus, we hope that the correctness of both our constructions is well supported. We now explain how to decrease the alphabet size in the examples of Theorem <ref>.
The construction for both parsing schemes relies on the following reduction. Consider the parsing scheme LZD or LZMW and a string s∈Σ^*. There exists a string t∈{0,1}^* of length Θ(|Σ|log |Σ|) and a morphism ϕ with ϕ(Σ)⊆{0,1}^ℓ for ℓ = Θ(log |Σ|) such that the parsing of t ·ϕ(s) consists of the parsing of t followed by the image with respect to ϕ of the parsing of s. We analyze the two parsing schemes separately. For LZD, we recursively define A_L ⊆{0,1}^2^L, setting A_0 = {0,1} and A_L = {xy : x,y∈ A_L-1∧ x ≤ y} for L>0. Let (α_i)_i=1^∞ be the infinite sequence of all elements of A_L, for all L≥ 1, with members of each set A_L listed in the lexicographic order; e.g., α_1,…, α_12 = 00, 01, 11, 0000, 0001, 0011, 0101, 0111, 1111, 00000000, 00000001, 00000011. We will define t=α_1⋯α_m for some m. Let us characterize parsings of such strings. For any non-negative integer m and any string w∈{0,1}^*, the first m phrases of the LZD parsing of the binary string α_1⋯α_m· w are α_1,…,α_m. We proceed by induction on m; the base case of m=0 is trivial. For m>0, the inductive assumption implies that the first m-1 phrases are α_1,…,α_m-1. Our goal is to prove that the mth phrase is α_m. Before processing α_m, the LZD dictionary is D = {0, 1, α_1,…,α_m-1}. Suppose that α_m=xy ∈ A_L with x,y ∈ A_L-1. Recall that x≤ y; consequently, D∩(y·{0,1}^*) = {y} and D∩(x·{0,1}^*) = {x}∪{xy' : y'∈ A_L-1∧ x ≤ y' < y}. Thus, the longest prefix of α_m· w contained in D is x, and the longest prefix of y· w contained in D is y. This means that the mth phrase is indeed α_m=xy. Consider a string s∈Σ^n. We choose the smallest L with |A_L|≥ |Σ| and define t=α_1⋯α_m so that t is shortest possible and the LZD dictionary after processing t contains at least |Σ| elements of A_L. The morphism ϕ is then defined by injectively mapping Σ to these dictionary strings from A_L. Note that |A_L-1| ≤ |Σ| and m ≤ |Σ| + ∑_ℓ=1^L-1 |A_ℓ|, so we have m = Θ(|Σ|), ℓ = 2^L=Θ(log |Σ|), and |t| = Θ(|Σ|log |Σ|), as desired. We are to prove that the LZD parsing of t·ϕ(s) is α_1,…,α_m,ϕ(p_1),…,ϕ(p_z), where p_1,…,p_z is the LZD parsing of s. For this, we inductively prove that the LZD dictionary D after parsing p_1⋯ p_i is related to the LZD dictionary D̂ after parsing t·ϕ(p_1⋯ p_i) by the following invariant: D̂∩(ϕ(Σ)·{0,1}^*) = ϕ(D). The base case follows from the claim (D̂∩(ϕ(Σ)·{0,1}^*) = ϕ(Σ)=ϕ(D)), and the inductive step is straightforward. This completes the proof for the LZD scheme. The construction for LZMW is more involved, but the idea is the same. We recursively define B_L ⊆{0,1}^2^L, setting B_0 = {0,1} and B_L = {xy : x,y∈ B_L-1∧ xy ≠ 1^2^L-10^2^L-1} for L>0. Let (β_i)_i=1^∞ be the infinite sequence that lists all elements of B_L consecutively for all L≥ 0, with members of each B_L listed in the lexicographic order (i.e., (β_i)_i=1^∞ is defined by analogy with (α_i)_i=1^∞ for LZD but starting with L = 0). For β_m ∈ B_L, define b(β_m) = β_M β_m·β_M+1β_m ⋯β_m-1β_m·β_m, where β_M=0^2^L is the first element of B_L in (β_i)_i=1^∞. For example, b(β_1)⋯ b(β_6) = 0· 01 1·00· 00 01 01·00 11 01 11 11· 0000. For m≥ 1, consider a binary string b(β_1)⋯ b(β_m)· 0^|β_m|· w for w∈{0,1}^*. The LZMW parsing decomposes its fragments b(β_i) into phrases of length |β_i|. We proceed by induction on m. The base case m=1 is straightforward: it suffices to note that the first phrase of 0· 0 · w is 0. Below, we consider m>1. First, suppose that β_m = 0^2^L, i.e., β_m-1 = 1^2^L-1∈ B_L-1.
Note that b(β_m) starts with 0^2^L-1, so the inductive hypothesis yields that the prefix b(β_1)⋯ b(β_m-1) is parsed as desired. Observe that after parsing this prefix, the LZMW dictionary is D={1^2^ℓ-1 0^2^ℓ : 0 < ℓ < L}∪⋃_ℓ=0^L B_ℓ. Consequently, we obtain D∩(B_L ·{0,1}^*) = B_L and, therefore, b(β_m)=β_m is parsed as claimed. Finally, suppose that β_m∈ B_L ∖{0^2^L}. In this case, β_m-1∈ B_L and β_M= 0^2^L for some M < m. Since b(β_m) starts with β_M = 0^2^L, the inductive hypothesis lets us assume that the prefix b(β_1)⋯ b(β_m-1) is parsed as desired. Due to 1^2^L-10^2^L-1∉ B_L, after parsing this prefix, the LZMW dictionary D satisfies: D∩ (B_L·{0,1}^*) = B_L ∪{β_kβ_k' : M ≤ k,k' < m∧(k,k') ≠ (m-1,M)}. Let us consider the parsing of b(β_m)0^2^L w = β_M β_m ·β_M+1β_m⋯β_m-1β_m·β_m· 0^2^L w. One can inductively prove that before parsing β_k β_m ·β_k+1⋯, for M≤ k < m, we have D∩(β_k ·{0,1}^*) = {β_k}∪{β_k β_k' : M ≤ k' < m}, so the subsequent phrase is β_k. Next, before parsing β_m·β_k+1⋯, for M≤ k < m, we have D∩(β_m·{0,1}^*) = {β_m}∪{β_m β_k' : M < k' ≤ k}, so the subsequent phrase is β_m. Finally, before parsing β_m· 0^2^L w, we have D∩(β_m·{0,1}^*) = {β_m}∪{β_m β_k' : M < k' < m}, so the last phrase is also β_m. Thus, b(β_m) is parsed as claimed. Consider a string s∈Σ^n. We choose the smallest L with |B_L|≥ |Σ| and define t=b(β_1)⋯ b(β_m) so that t is shortest possible and the LZMW dictionary after processing t contains at least |Σ| members of B_L (note that β_m ∈ B_L-1 in this case). The morphism ϕ is then defined by injectively mapping Σ to these dictionary strings from B_L. Moreover, we put ϕ(s[1]) = 0^2^L so that the claim is applicable for t·ϕ(s). The remaining proof is analogous to the LZD counterpart. We only need to observe that the LZMW dictionary additionally contains β_m 0^2^L, but β_m 0^2^L-1∉ϕ(Σ) and, hence, this does not affect the parsing of t·ϕ(s). The hard binary examples are now straightforward to derive. The naïve LZD and LZMW parsing algorithms take time Ω(n^5/4 / log^1/4 n) in the worst case even on a binary alphabet. We apply Lemma <ref> for a string s∈Σ^* of length n constructed in the proof of Theorem <ref> for the appropriate parsing algorithm, which results in a binary string ŝ:=t·ϕ(s). Without loss of generality, we may assume |Σ|≤ n, so n̂ := |ŝ| = Θ(|Σ|log |Σ| + n log |Σ|) = Θ(n log |Σ|). Recall that the naïve parsing algorithm traverses at least Ω(n^5/4) trie edges while parsing s. Since the parsing of the suffix ϕ(s) of ŝ is the ϕ-image of the parsing of s, this algorithm traverses at least Ω(n^5/4log |Σ|) trie edges while parsing ŝ. In terms of n̂, the running time is at least Ω(n̂^5/4 / log^1/4 |Σ|), which is Ω(n̂^5/4 / log^1/4 n̂) due to |Σ| ≤ n < n̂. § FASTER SMALL-SPACE COMPUTATION In this section we describe a new parsing algorithm that works in O(n + zlog^2 n) time (randomized, in expectation) and uses O(zlog n) working space to parse the input string over the integer alphabet {0,1,…,n^O(1)}. Before getting to the algorithm itself, we review four tools that are essential for it: Karp–Rabin hashing <cit.>, AVL-grammars of Rytter <cit.>, the dynamic z-fast trie of Belazzougui et al. <cit.>, and the dynamic marked ancestor data structure of Westbrook <cit.>. Karp–Rabin hashing.
A Karp–Rabin <cit.> hash function ϕ has the form ϕ (s [1..n]) = (∑_i = 1^n s [i] δ^i - 1) mod p, where p is a fixed prime and δ is a randomly chosen integer from the range [0..p-1] (this is a more popular version of the original hash proposed in <cit.>). The value ϕ(s) is called s's Karp–Rabin hash. It is well-known that, for any c > 3, if p > n^c, then the probability that two distinct substrings of the given input string of length n have the same hash is less than 1/n^c-3. We extensively use the property that the hash of the concatenation s_1s_2 of two strings s_1 and s_2 can be computed as (ϕ(s_1) + δ^|s_1|ϕ(s_2)) mod p. Therefore, if the values ϕ(s_1) and ϕ(s_2) are known and p ≤ n^O(1), then ϕ(s_1s_2) can be calculated in O(1) time provided the number (δ^|s_1| mod p) is known. AVL-grammars. Consider a context-free grammar G that generates a string s (and only s). Denote by Tree(G) the derivation tree of s. We say that G is an AVL-grammar (see <cit.>) if G is in the Chomsky normal form and, for every internal node v of Tree(G), the heights of the trees rooted at the left and right children of v differ by at most 1. The following result straightforwardly follows from the algorithm of Rytter described in <cit.>. Let G be an AVL-grammar generating a prefix s[1..i-1] of a string s. Suppose that the string s[i..k] occurs in s[1..i-1]; then one can construct an AVL-grammar generating the string s[1..k] in O(log i) time, modifying at most O(log i) rules in G. Let G be an AVL-grammar generating a string s. It is well-known that, for any substring s[i..j], one can find in O(log n) time O(log n) non-terminals A_1, …, A_k such that s[i..j] is equal to the string generated by A_1⋯ A_k. Hence, if each non-terminal A of G is augmented with the Karp–Rabin hash ϕ(t) of the string t generated by A and with the number δ^|t| mod p, then we can compute ϕ(s[i..j]) in O(log n) time. One can show that, during the reconstruction of the AVL-grammar in Lemma <ref>, it is easy to maintain the described integers augmenting the non-terminals (see <cit.>). Z-fast tries. Let x be a string such that one can compute the Karp–Rabin hash of any prefix of x in O(t_x) time. The z-fast trie <cit.> is a compacted trie containing a dynamic set of variable-length strings that supports the following operations: * we can find (w.h.p.) in O(t_x log |x|) time the highest explicit node v such that the longest prefix of x present in the trie is written on the root-v path;* we can insert x into the trie in O(|x| + t_x log |x|) randomized time. The space occupied by the z-fast trie is Θ(k), where k is the number of strings inserted in the trie. Dynamic marked ancestor. Let T be a dynamic compacted trie (or just tree) with k nodes. The dynamic marked ancestor data structure of <cit.> supports the following two operations on T (both in O(log k) time): for a given node v, (1) mark v, (2) find the nearest marked ancestor of v (if any). Algorithm. Our faster parsing algorithm computes the LZD phrases from left to right one by one, spending O(log^O(1) n) time on each phrase. We maintain an AVL-grammar G for the prefix s[1..i-1] of s we have already parsed, and a z-fast trie T containing the first phrases p_1, p_2, …, p_r of the LZD parsing of s such that s[1..i-1] = p_1p_2⋯ p_r. We augment T with the dynamic marked ancestor data structure and mark all nodes corresponding to phrases (i.e., all nodes v such that the string written on the path from the root to v is equal to t ∈{p_1,…,p_r}).
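Before continuing with the algorithm, the Karp–Rabin identities used throughout can be made concrete with a short Python sketch (ours, for illustration only); the values of p and δ below are placeholders, with δ meant to be drawn uniformly at random from [0..p-1].

    p, delta = (1 << 61) - 1, 123456789   # illustrative prime and seed

    def kr_hash(s):
        # phi(s[1..n]) = (sum_i s[i] * delta^(i-1)) mod p
        h, power = 0, 1
        for ch in s:
            h = (h + ord(ch) * power) % p
            power = (power * delta) % p
        return h

    def kr_concat(h1, len1, h2):
        # phi(s1 s2) = (phi(s1) + delta^|s1| * phi(s2)) mod p;
        # O(1) if delta^|s1| mod p is precomputed
        return (h1 + pow(delta, len1, p) * h2) % p

    s1, s2 = "abra", "cadabra"
    assert kr_hash(s1 + s2) == kr_concat(kr_hash(s1), len(s1), kr_hash(s2))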
We augment each non-terminal of G with the Karp–Rabin hash ϕ(t) of this non-terminal's expansion t and with the number δ^|t| mod p, so that the hash of any substring of s[1..i-1] can be calculated in O(log n) time. Suppose we are looking for the first part of the next phrase and that, in addition to having parsed s[1..i-1], we have already read s[i..j-1] without parsing it, but we have found the endpoints of an occurrence of s[i..j-1] in s[1..i-1]. (Notice that s[i..j-1] can be empty, i.e., i = j.) Denote by x the longest prefix of s[i..j-1] that is also a prefix of some of the phrases p_1,…,p_r. Since we can compute quickly with G the hash of any prefix of s[i..j-1], we can use the z-fast search to find in O(log^2 n) time a node v of T such that x is written on the path connecting the root and v. Let s[ℓ_v .. r_v] be a substring of s[1..i-1] corresponding to v (the numbers ℓ_v and r_v are stored with the node v). Using hashes and binary search, we find the longest common prefix of the strings s[i..j-1] and s[ℓ_v .. r_v] (with high probability) in O(log^2 n) time; this prefix must be x. If s[i..j-1] ≠ x, then we perform a marked-ancestor query on the vertex corresponding to x (which can be found in O(log^2 n) time in the same way as v) and thus find the longest phrase that is a prefix of s[i..j-1]. We take that phrase as the first part of the next phrase and start over, looking for the second part, with the remainder of s[i..j-1] now being what we have read but not parsed (of which we know an occurrence in s[1..i-1]). On the other hand, if s[i..j-1] = x, then we read s[j..n] in blocks of length log^2 n, stopping when we encounter an index k such that s[i..k] is not a prefix of a phrase p_1,…,p_r; the details follow. Suppose that we have read q blocks and the concatenation s[i..j + q log^2 n - 1] of s[i..j-1] and the q previous blocks is a prefix of a phrase t ∈{p_1, …, p_r}. We compute in O(log^2 n) time the hashes of all the prefixes of the block s[j + q log^2 n .. j + (q + 1) log^2 n - 1], which allows us to compute the hash of any prefix of s[i .. j + (q + 1)log^2 n - 1] in O(log n) time. Therefore, again using z-fast search and binary search, we can check in O(log^2 n) time whether the block s[j + q log^2 n .. j + (q + 1) log^2 n - 1] contains such a k and, if so, find it. If k is not found, then, using information from the search, we can find a phrase t' ∈{p_1, …, p_r} (which may or may not be equal to t) such that s[i .. j + (q + 1)log^2 n - 1] is a prefix of t'; we then proceed to the (q+2)nd block. Once we have found such a k, we conceptually undo reading the characters from s[k] onwards (which causes us to re-read later those O(log^2 n) characters), then perform a search and marked-ancestor query in T, which returns the longest phrase that is a prefix of s[i..k-1]. We take that longest phrase as the first part of the next phrase and start over, looking for the second part, with the remainder of s[i..k-1] now being what we have read but not parsed (of which we know an occurrence in s[1..i-1]). Once we have found both the first and second parts of the next phrase, say p'_1 and p'_2, we add the next phrase p_r+1 = p'_1p'_2 to G (by Lemma <ref>) and to T, which takes O(|p_r+1| + log^2 n) time. In total, since processing each block takes O(log^2 n) time and the algorithm processes at most z + n/log^2 n blocks, we parse s in O(n + z log^2 n) time. Our space usage is dominated by G, which takes O(z log n) space.
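As an illustration of the "hashes plus binary search" step above, the following Python sketch (our own, using the reversed-coefficient variant of the polynomial hash for convenience) finds the length of the longest common prefix of two suffixes with O(log n) constant-time hash comparisons; as in the algorithm, the answer is correct with high probability.

    def prefix_hashes(s, delta, p):
        h = [0]
        for ch in s:
            h.append((h[-1] * delta + ord(ch)) % p)
        return h

    def substring_hash(h, powers, i, j, p):
        # hash of s[i..j-1], 0-indexed, half-open; O(1) per query
        return (h[j] - h[i] * powers[j - i]) % p

    def lcp(s, a, b, h, powers, p):
        lo, hi = 0, len(s) - max(a, b)      # invariant: true lcp lies in [lo, hi]
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if substring_hash(h, powers, a, a + mid, p) == \
               substring_hash(h, powers, b, b + mid, p):
                lo = mid
            else:
                hi = mid - 1
        return lo

    p, delta = (1 << 61) - 1, 31
    s = "abracadabra"
    h = prefix_hashes(s, delta, p)
    powers = [pow(delta, i, p) for i in range(len(s) + 1)]
    assert lcp(s, 0, 7, h, powers, p) == 4   # "abra" is common to s[0..] and s[7..]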
Finally, we verify in a straightforward manner in O(n) time whether the constructed parsing indeed encodes the input string. If not (which can happen with probability 1/n^c-3, where p > n^c), we choose a different random δ∈ [0..p-1] for the Karp–Rabin hash and execute the whole algorithm again. The computation of the LZMW parsing in O(n + zlog^2 n) expected time and O(zlog n) space is similar: the z-fast trie stores pairs p_1p_2, p_2p_3, …, p_z-1p_z of adjacent phrases in this case and the nodes corresponding to these pairs are marked. We omit the details as they are straightforward.§ CONCLUDING REMARKS We believe that our new parsing algorithms can be implemented efficiently, and we leave this as future work. Perhaps a more interesting question is whether there exists an LZD/LZMW parsing algorithm with better working space and the same (or better) runtime. We note that the algorithmic techniques we have developed here can also be applied to, e.g., develop more space-efficient parsing algorithms for LZ-End <cit.>, a variant of LZ77 <cit.> with which each phrase s[i..j] is the longest prefix of s[i..n] such that an occurrence of s[i..j-1] in s[1..i-1] ends at a phrase boundary. Kempa and Kosolobov <cit.> very recently gave an LZ-End parsing algorithm that runs in O(n logℓ) expected time and O(z + ℓ) space, where ℓ is the length of the longest phrase and z is the number of phrases. To reduce Kempa and Kosolobov's space bound, we keep an AVL-grammar (again augmented with the non-terminals' Karp–Rabin hashes, making our algorithm Monte-Carlo) of the prefix of s we have processed so far; a list of the endpoints of the phrases so far, in the right-to-left lexicographic order of the prefixes ending at the phrases' endpoints; and an undo stack of the phrases so far. For each character s[k] in turn, for 1 ≤ k ≤ n, in O(log^O(1) n) time we use the grammar and the list to find the longest suffix s[j..k] of s[1..k] such that an occurrence of s[j..k-1] in s[1..j-1] ends at a phrase boundary. We use the undo stack to remove from the grammar, the list, and the stack itself, all the complete phrases lying in the substring s[j..k-1], and then add the phrase consisting of the concatenation of those removed phrases and s[k]. By <cit.>, we remove at most two phrases while processing s[k], so we still use a total of O(log^O(1) n) worst-case time for each character of s. Again, the space bound is dominated by the grammar, which takes O(z log n) words. We leave the details for the full version of this paper. Regarding compression performance, we have shown that, like their ancestor LZ78, both LZD and LZMW sometimes approximate the smallest grammar poorly. This, of course, does not necessarily detract from their usefulness in real compression tools; now, however, practitioners have a much clearer picture of these algorithms' possible behavior. Future work includes closing the gap between the lower bound Ω(n^1/3) and the upper bound O((n/log n)^2/3) for the approximation ratio and designing parsing algorithms with better guarantees. § ACKNOWLEDGEMENTS We thank H. Bannai, P. Cording, K. Dabrowski, D. Hücke, D. Kempa, L. Salmela for interesting discussions on LZD at the 2016 StringMasters and Dagstuhl meetings. Thanks also go to D. Belazzougui for advice about the z-fast trie and to the anonymous referees.
[email protected] [email protected]@uni-mainz.de [email protected] [email protected] PRISMA Cluster of Excellence andMainz Institute for Theoretical Physics,Johannes Gutenberg-Universität Mainz, 55099 Mainz, Germany MITP/17-037We discuss novel ways in which neutrino oscillation experiments can probe dark matter.In particular, we focus on interactions between neutrinos and ultra-light (“fuzzy”) dark matter particles with masses of order 10^-22 eV. It has been shown previously that such dark matter candidates are phenomenologically successful and might help ameliorate the tension between predicted and observed small scale structures in the Universe. We argue that coherent forward scattering of neutrinos on fuzzy dark matter particles can significantly alter neutrino oscillation probabilities.These effects could be observable in current and future experiments. We set new limits on fuzzy dark matter interacting with neutrinos using T2K and solar neutrino data, and we estimate the sensitivity of reactor neutrino experiments and of future long-baseline accelerator experiments. These results are based on detailed simulations in GLoBES. We allow the dark matter particle to be either a scalar or a vector boson.In the latter case, we find potentially interesting connections to models addressing various B physics anomalies.Fuzzy Dark Matter and Non-Standard Neutrino Interactions Xiao-Ping Wang May 24, 2017 ========================================================Our ignorance about the particle physics nature of dark matter (DM) is so vast that viable candidate particles span more than 90 orders of magnitude in mass. At the heavy end of the spectrum are primordial black holes  <cit.>.On the low end of the DM mass spectrum are models of “Fuzzy Dark Matter” with a mass of order m_ϕ∼ 10^-22 eV.The term “fuzzy” refers to the huge Compton wave length λ = 2π/m_ϕ≃ 0.4 pc× (10^-22 eV / m_ϕ) of such DM particles. Fuzzy DM has been studied mostly in the context of axions or other extremely light scalar fields <cit.>.Such DM candidates can be searched for in laboratory experiments using cavity-based haloscopes <cit.>,helioscopes <cit.>, LC circuits <cit.>, atomic clocks <cit.>, atomic spectroscopy <cit.> and interferometry <cit.>, as well as accelerometers <cit.> and magnetometry <cit.>.Constraints on their parameter space can also be set usingcurrent gravitational wave detectors <cit.>. However, ultra-light vector bosons are also conceivable fuzzy DM candidates <cit.>. The tightest constraints on the mass of Fuzzy DM come from observations of large scale structure in the Universe, and very recent studies suggest that m_ϕ > 10^-21 eV may be required <cit.>.Because of its macroscopic delocalization, Fuzzy DM has the potential to resolve several puzzles related to structure formation in the Universe:(i) DM delocalization can explain the observed flattening of (dwarf) galaxy rotation curves towards their center <cit.>, which is in tension with predictions from N-body simulations <cit.> (“cusp vs. 
core problem”); (ii) the lower than expected abundance of dwarf galaxies <cit.> (“missing satellites problem”) can be understood in Fuzzy DM scenarios because of the higher probability for tidal disruption of DM subhalos and because of the suppression of the matter power spectrum at small scales <cit.>; (iii) the apparent failure of many of the most massive Milky Way subhalos to host visible dwarf galaxies <cit.> (“too big to fail problem”) is ameliorated since Fuzzy DM predicts fewer such subhalos <cit.>. While it is conceivable that these galactic anomalies will disappear with a more refined treatment of baryonic physics in simulations <cit.>, the possibility that DM physics plays a crucial role is far from excluded. Our goal in the present paper is to highlight the tremendous opportunities for probing interactions of fuzzy DM in current and future neutrino oscillation experiments. These opportunities exist in particular in scenarios in which DM–neutrino interactions are flavor non-universal or flavor violating. In this case, even very feeble couplings between neutrinos and dark matter are sufficient for coherent forward scattering to induce a non-negligible potential for neutrinos, which affects neutrino oscillation probabilities and will thus alter the expected event rates and spectra in current and future neutrino oscillation experiments <cit.>. We will in particular derive constraints from T2K and solar neutrino data, and we will determine the sensitivities of DUNE and RENO. Similar effects have been considered previously in ref. <cit.>, where the focus has been on anomalous temporal modulation of neutrino oscillation probabilities. Dark Matter–Neutrino Interactions. Fuzzy DM can consist either of scalar particles ϕ or of vector bosons ϕ^μ. In the scalar case, the relevant terms in the Lagrangian are given by <cit.> ℒ_scalar = ν̅_L^α iγ^μ∂_μν_L^α - 1/2 m_ν^αβ(ν_L^c)^αν_L^β - 1/2 y^αβϕ (ν_L^c)^αν_L^β, where α, β are flavor indices and y^αβ are the coupling constants. For vector DM, the Lagrangian is ℒ_vector = ν̅_L^α iγ^μ∂_μν_L^α - 1/2 m_ν^αβ(ν_L^c)^αν_L^β + g Q^αβϕ^μν̅_L^αγ_μν_L^β, with the coupling constant g and the charge matrix Q^αβ. In both Lagrangians, m_ν is the effective Majorana neutrino mass matrix. The interaction term in <ref> can be generated in a gauge invariant way by coupling the scalar DM particle ϕ to heavy right-handed neutrinos in a seesaw scenario <cit.>. The interaction in <ref> could arise for instance if the DM is the feebly coupled gauge boson corresponding to a local L_μ - L_τ lepton family number symmetry, defined via Q^ee = 0, Q^μμ = 1, Q^ττ = -1. Alternatively, the DM particle could couple to the SM via mixing with a much heavier gauge boson Z' with flavor non-universal couplings. If the Z' boson has a mass of order m_Z'∼TeV, we expect the mixing-induced coupling g in <ref> to be of order g ∼ m_ϕ / m_Z'. Intriguingly, we will see below that such tiny couplings may be within reach of neutrino oscillation experiments. Interesting candidates for a TeV-scale Z' boson mediating interactions of ultra-light vector DM and neutrinos include an L_μ - L_τ gauge boson, or a new gauge boson coupled predominantly to the second family of leptons. The latter possibility is of particular interest as such a particle could explain several recent anomalies in B physics <cit.>.
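As a rough numerical illustration (our own arithmetic, using only the benchmark values quoted in this paper): for m_ϕ = 10^-22 eV and m_Z' = 1 TeV = 10^12 eV, the mixing-induced coupling would be of order g ∼ m_ϕ / m_Z' ≃ 10^-22 eV / 10^12 eV = 10^-34, which illustrates just how feeble the couplings are that the oscillation analyses below must be able to probe.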
We defer a detailed discussion of possible UV completions of <ref> to a forthcoming publication <cit.>.The mass of ϕ^μ can be generated either through the Stückelberg mechanism <cit.> or from spontaneous symmetry breaking in a dark Higgs sector.Production of Ultra-light DM Particles. DM particles with masses m_ϕ≪keV must have been produced non-thermally in the early Universe to avoid constraints on hot (i.e. relativistically moving) DM.The most popular way to achieve this is the misalignment mechanism, which was first introduced in the context of QCD axion models <cit.>, but can also be applied to other ultra-light fields. For vector bosons, the misalignment mechanism has been discussed in refs. <cit.>.In this case, the mechanism may also require a non-minimal coupling of ϕ^μ to the Ricci scalar to avoid the need for super-Planckian field excursions <cit.>.It might be possible to avoid these extra couplings in certain UV completions of the model <cit.>. The misalignment mechanism for vector bosons is more constrained if the bosons obtain their mass through a Higgs mechanism than in models with Stückelberg masses.In particular, it is required that the boson is massive at the temperature T_osc at which ϕ^μ begins to oscillate about its minimum <cit.>. This temperature is given by T_osc∼√(m_ϕ M_Pl), where M_Pl is the Planck mass.We see that the dark Higgs boson thus needs to acquire a vacuum expectation value (vev) v at a critical temperature T_c much larger than m_ϕ≃ g v. Since typically T_c ≃ v <cit.>, this implies g ≲√(m_ϕ / M_Pl). We will, however, see that neutrino oscillation experiments are sufficiently sensitive to probe the relevant parameter region. As an alternative to the misalignment mechanism, the authors of ref. <cit.> propose production of vector DM from quantum fluctuations during inflation, but argue that this mechanism can only account for all the DM in the Universe if m_ϕ > 10^-6 eV.Coherent Forward Scattering of Neutrinos on Fuzzy DM. By inspecting <ref>, we observe that scalar DM ϕ, treated as a classical field, alters the neutrino mass matrix, m_ν→ m_ν + y ϕ, while vector DM ϕ^μ alters their effective 4-momenta, p_μ→ p_μ + g Q ϕ_μ. This can be seen as dynamical Lorentz violation <cit.>.For implementing these effects in simulation codes, we parameterize them in terms of a Mikheyev–Smirnov–Wolfenstein-like potential V_eff <cit.>. To do so, we use the equations of motion derived from <ref> (treating ϕ and ϕ^μ as classical fields) to derive a modified neutrino dispersion relation in the form(E_ν - V_eff)^2 = p⃗_ν^2 + m_ν^2.Here, E_ν, V_eff, and m_ν should be understood as 3 × 3 matrices.Neglecting the V_eff^2 term in <ref>, we read off thatV_eff = 1/(2E_ν) ( ϕ(y m_ν + m_ν y) + ϕ^2 y^2 ) ,(scalar DM)V_eff = -1/(2E_ν) ( 2 (p_ν·ϕ) g Q + g^2 Q^2 ϕ^2 ) .(vector DM)These expressions for V_eff should now be added to the Hamiltonian on which the derivation of neutrino oscillation probabilities is based. The classical DM field can be expressed as ϕ = ϕ_0 cos(m_ϕ t) for scalar DM and as ϕ^μ = ϕ_0 ξ^μ cos(m_ϕ t) for vector DM, where ξ^μ is a polarization vector.The oscillation amplitude ϕ_0 is related to the local DM energy density ρ_ϕ∼ 0.3 GeV/cm^3 via <cit.>ϕ_0 = √(2 ρ_ϕ)/m_ϕ .For the tiny DM masses we are interested in here, the period τ of field oscillations is macroscopic, τ≃ 1.3 yrs× (10^-22 eV / m_ϕ) <cit.>. 
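The magnitudes entering these expressions are easy to check numerically. The following sketch (our own illustration, not the paper's code) evaluates ϕ_0 = √(2ρ_ϕ)/m_ϕ, the field period τ, and the size of the linear scalar-DM potential relative to the vacuum oscillation scale m_ν²/(2E_ν); the coupling y used below is an arbitrary illustrative value, not a quoted limit.

```python
# Numerical sketch of the classical-field quantities defined above.
import numpy as np

hbar_eV_s    = 6.582e-16    # eV * s
hbar_c_eV_cm = 1.9733e-5    # eV * cm

m_phi   = 1e-22                           # DM mass in eV
rho_phi = 0.3e9 * hbar_c_eV_cm**3         # 0.3 GeV/cm^3 converted to eV^4

phi0 = np.sqrt(2.0 * rho_phi) / m_phi     # field amplitude in eV
tau  = 2.0 * np.pi * hbar_eV_s / m_phi    # field oscillation period in s
print(f"phi_0 ~ {phi0:.2e} eV, tau ~ {tau/3.15e7:.2f} yr")   # ~1.3 yr

# Linear scalar-DM potential vs. vacuum oscillation frequency scale;
# y, m_nu, E_nu below are illustrative assumptions only.
y, m_nu, E_nu = 1e-25, 0.1, 1e9           # dimensionless, eV, eV
dm2 = 2.5e-3                              # atmospheric Delta m^2 in eV^2

V_eff = y * phi0 * m_nu / E_nu            # leading (linear) term
osc   = dm2 / (2.0 * E_nu)                # m_nu^2 / (2E) scale
print(f"V_eff ~ {V_eff:.1e} eV vs m_nu^2/(2E) ~ {osc:.1e} eV")
```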
Note that in <ref>, the terms linear in the coupling constants are valid when the DM mass is so low that the DM field can be treated as classical; the quadratic terms are approximately valid for any DM mass.In deriving numerical results, we will for definiteness assume the neutrino–DM couplings to have a flavor structure given by y = y_0 (m_ν / 0.1 eV) for scalar DM, where y_0 is a constant. This choice is motivated by the assumption of universal couplings of ϕ to right-handed neutrinos. For vector DM, we assume Q = (0, 1, -1), as motivated by L_μ - L_τ symmetry.We will moreover assume that contributions to the neutrino oscillation probabilities proportional to powers of cos(m_ϕ t) are averaged. In other words, we assume the running time of the experiment to be much larger than τ. <Ref> shows that for vector DM, V_eff depends on the polarization of the field. As it is unclear whether the initial polarization survives structure formation or is completely randomized even on scales ∼ 1000 km relevant to long-baseline experiments, we will consider both the case of fully polarized and fully unpolarized DM.In the former case, we assume the polarization axis to be parallel to the ecliptic plane for definiteness.For fully polarized DM, the leading contribution to V_eff is linear in the small coupling g, while for unpolarized DM ξ^μ varies randomly along the neutrino trajectory, so the leading contribution to V_eff is 𝒪(g^2).The same would be true for DM polarized in a direction transverse to the neutrino trajectory.Modified Neutrino Oscillation Probabilities. We have implemented the potential from <ref> in GLoBES <cit.>. To facilitate integration of the predicted event rates over time, we evaluate the oscillation probabilities at several fixed times and interpolate them using a second-order polynomial in cos(m_ϕ t).The latter can then be integrated analytically. We do not include long-term temporal modulation effects in our fits because the available long-baseline data is presented in time-integrated form. We have checked that including modulation with time in the fit does not significantly improve our results <cit.>.In <ref> we show the impact of neutrino–DM interactions on the oscillation probabilities as a function of neutrino energy E_ν and baseline L. We see that even for tiny couplings, substantial modifications are possible.Signals in Long Baseline Experiments. In <ref>, we collect various limits and future sensitivities on neutrino–DM interactions. For the T2K experiment, we have developed a new GLoBES implementation <cit.>, which we use to fit data based on an exposure of 6.6 × 10^20 protons on target (pot) <cit.>. We have verified that we reproduce T2K's standard oscillation results to high accuracy before setting limits on DM. For the projected sensitivity of DUNE <cit.>, we use the simulation code released with ref. <cit.>, corresponding to 14.7 × 10^20 pot for neutrinos and anti-neutrinos each. To determine the sensitivity of RENO, we rely on a simulation based on refs. <cit.> and corresponding to 3 yrs of data taking.We observe that experimental sensitivities are superb, thanks to the scaling of V_eff with 1/m_ϕ, see <ref>. For vector DM, the sensitivity is more than ten orders of magnitude better in the polarized case (left panel of <ref>) than in the unpolarized case. In the former case, the sensitivity comes from the term linear in g, which is enhanced by E_ν / (g ϕ) compared to the quadratic one. 
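The time-averaging procedure described above reduces to a few lines once the oscillation probability can be evaluated at fixed field values. In the sketch below, `prob` is a stand-in for a full GLoBES probability call; the quadratic interpolation in c = cos(m_ϕ t) is then averaged analytically over a field period using ⟨c⟩ = 0 and ⟨c²⟩ = 1/2.

```python
# Sketch of the time-averaging used above (illustrative, not GLoBES code).
import numpy as np

def prob(c):
    # placeholder for P_osc evaluated with the DM field at cos(m_phi t) = c
    return 0.5 + 0.1 * c + 0.05 * c**2

c_nodes = np.array([-1.0, 0.0, 1.0])
p_nodes = np.array([prob(c) for c in c_nodes])

# quadratic coefficients of p(c) = a0 + a1*c + a2*c^2
a2, a1, a0 = np.polyfit(c_nodes, p_nodes, 2)

# analytic average over one field period: <c> = 0, <c^2> = 1/2
p_avg = a0 + 0.5 * a2
print(f"<P> = {p_avg:.4f}")   # exact here: 0.5 + 0.05/2 = 0.525
```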
In general, experiments exclude values of the coupling constant for which V_eff is much larger than the oscillation frequency ∼ m_ν^2 / (2 E). For scalar or polarized vector DM, long baseline experiments have a significant edge over reactor experiments, while for unpolarized DM, RENO is able to compete even with DUNE. The reason is the scaling of V_eff with 1/E according to <ref>.Signals in Solar Neutrino Experiments. Since solar neutrinos evolve adiabatically as they propagate out of the Sun, their survival probability in the electron flavor is given byP_ee(E_ν) = ∑_i |U_ei^⊙|^2 |U_ei^⊕|^2,where U^⊙ and U^⊕ are the effective leptonic mixing matrices at the center of the Sun and at Earth, respectively. U^⊙ is strongly affected by both SM matter effects and DM–neutrino interactions, while U^⊕ differs from the vacuum mixing matrix mainly through the DM term in our scenario. We neglect Earth matter effects as their impact on our results would be negligible <cit.>. We fit solar neutrino data from Borexino <cit.>, Super-Kamiokande <cit.>, and SNO <cit.>, as collected in ref. <cit.>. We illustrate in <ref> how the presence of DM–neutrino interactions could improve the fit to solar neutrino data.For standard oscillations, we find χ^2 / dof≃ 22/20, while the best-fit point for polarized vector DM yields χ^2 / dof≃ 12/19.Even though we find in both cases an acceptable goodness of fit, standard oscillations are disfavored compared to the new physics hypothesis. This is a reflection of the fact that the upturn of the survival probability at low energy has not been observed yet <cit.>. As the preference for new physics in our fit is somewhat stronger than in a fit including full spectral data <cit.>, we show in <ref> also conservative constraints obtained by artificially inflating the error bars of all solar data points by a factor of two.Comparing limits from solar neutrino observations to those from long-baseline experiments, we see from <ref> that for unpolarized vector DM, solar neutrinos offer the most powerful constraints. This is once again due to the 1/E_ν dependence of V_eff in this case.Even though the same scaling applies to scalar DM, solar limits are much weaker because in our benchmark scenario, neutrino–DM interactions alter only neutrino masses (to which solar neutrinos have poor sensitivity), but not the mixing angles. This is also the reason why the limits from ref. <cit.>, which rely on variations in the mixing angle θ_12, are not applicable here.Cosmological Constraints on ∑ m_ν.As pointed out in ref. <cit.>, interactions between neutrinos and ultra-light scalar DM are constrained by the requirement that the DM-induced contribution to the neutrino mass term does not violate the cosmological limit on the sum of neutrino masses, ∑ m_ν. We estimate this constraint in <ref> (a) by requiring that, at recombination (redshift z = 1100), the correction to the heaviest neutrino mass (taken at 0.05 eV) should not be larger than 0.1 eV.Astrophysical Neutrinos. One may wonder whether neutrino–DM interactions could inhibit the propagation of astrophysical neutrinos <cit.> from distant sources <cit.>. The optical depth for such neutrinos is given by <cit.> τ_ν (E_ν) = σ_νϕ(E_ν) X_ϕ m_ϕ^-1, with the DM column density X_ϕ≡∫_l.o.s. dl ρ_ϕ, where the integral runs along the line of sight. For both galactic and extragalactic neutrino sources, we have typically X_ϕ∼ 10^22–10^23 GeV/cm^2 <cit.>. 
The scattering cross section for vector DM is approximatelyσ_νϕ^T ≃ (g^4/8π) m_ν^2/(E_ν^2 m_ϕ^2) ,(vector DM)where the superscript T indicates that, for simplicity, we have only considered the transverse polarization states of DM. For scalar DM, the corresponding expression isσ_νϕ ≃ y^4/(36π m_ν^2) .(scalar DM)Requiring τ_ν < 1, we obtain the constraintsg/m_ϕ < 3 · 10^8 eV^-1 ( E_ν/PeV)^1/2( 0.1 eV/m_ν)^1/2( 10^-22 eV/m_ϕ)^1/4,(vector DM) y/m_ϕ < 1.3 · 10^11 eV^-1 ( m_ν/0.1 eV)^1/2( 10^-22 eV/m_ϕ)^3/4.(scalar DM)We see that these limits are much weaker than the constraints imposed by oscillation experiments (see <ref>) except for DM masses much larger than the ones considered here and for very low neutrino energies. At low energy, however, astrophysical neutrinos cannot be observed because of prohibitively large atmospheric backgrounds.Summary. To conclude, we have demonstrated that unique opportunities exist at current and future neutrino oscillation experiments to probe interactions between neutrinos and ultra-light DM particles. The latter are an interesting alternative to WIMP (Weakly Interacting Massive Particle) DM, avoiding many of the phenomenological challenges faced by WIMPs.A particularly interesting possibility, which we plan to explore further in an upcoming publication <cit.>, is a possible connection to flavor non-universal new physics at the TeV scale, as motivated by recent anomalies in quark flavor physics.Note added. While we were finalizing this paper, ref. <cit.> appeared on the arXiv, addressing similar questions.While the main focus of ref. <cit.> (and also of the earlier ref. <cit.>) is on scalar DM, we consider also DM in the form of ultra-light gauge bosons. The authors of ref. <cit.> have considered a larger range of experiments for setting limits than us, while our results are based on more detailed numerical simulations of the few most relevant experiments.Where our results are comparable to those of ref. <cit.>, they are in good agreement.Acknowledgments. We would like to thank Pedro Machado, Georg Raffelt, and Felix Yu for very helpful discussions.This work has been funded by the German Research Foundation (DFG) under Grant Nos. EXC-1098, , FOR 2239, GRK 1581, and by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No.637506, “νDirections”).
http://arxiv.org/abs/1705.09455v2
{ "authors": [ "Vedran Brdar", "Joachim Kopp", "Jia Liu", "Pascal Prass", "Xiao-Ping Wang" ], "categories": [ "hep-ph", "astro-ph.HE", "hep-ex" ], "primary_category": "hep-ph", "published": "20170526070423", "title": "Fuzzy Dark Matter and Non-Standard Neutrino Interactions" }
http://arxiv.org/abs/1705.09372v1
{ "authors": [ "Salman Salamatian", "Ahmad Beirami", "Asaf Cohen", "Muriel Médard" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20170525213829", "title": "Centralized vs Decentralized Multi-Agent Guesswork" }
http://arxiv.org/abs/1705.09121v1
{ "authors": [ "P. R. Venkatesh", "A. Venkatesan", "M. Lakshmanan" ], "categories": [ "nlin.CD" ], "primary_category": "nlin.CD", "published": "20170525102552", "title": "Design and implementation of dynamic logic gates and R-S flip-flop using quasiperiodically driven Murali-Lakshmanan-Chua circuit" }
Cavendish Laboratory, University of Cambridge, J. J. Thomson Avenue, Cambridge, CB3 OHE At low temperatures, < 200 mK, the thermal flux through low-dimensional amorphous dielectric bars, < 2 μm wide and 200 nm thick, is transported by a small number of low-order elastic modes. For long bars, L> 400 μm, it is known that the conductance scales as 1/L, where L is the length, but for short bars, 1 μm < L< 400 μm, the length dependence is poorly known. Although it is assumed that the transport must exhibit a diffusive to ballistic transition, the functional form of the transition and the scale size over which the transition occurs have not, to our knowledge, been measured. In this paper, we use ultra-low-noise superconducting Transition Edge Sensors (TESs) to measure the heat flux through a set of SiN_ x bars to establish the characteristic scale size of the ballistic to diffusive transition. For bars supporting 6 to 7 modes, we measure a thermal elastic-wave attenuation length of 20 μm. The measurement is important because it sheds light on the scattering processes, which in turn are closely related to the generation of thermal fluctuation noise. Our own interest lies in creating patterned phononic filters for controlling heat flow and thermal noise in ultra-low-noise devices, but the work will be of interest to others trying to isolate devices from their environments, and studying loss mechanisms in micro-mechanical resonators. Thermal elastic-wave attenuation in low-dimensional SiN_ x bars at low temperatures S. Withington, E. Williams, D. J. Goldie, C. N. Thomas, and M. Schneiderman December 30, 2023 ===================================================================================§ INTRODUCTION At low temperatures, < 200 mK, low-dimensional dielectric bars transport heat through a small number of elastic modes <cit.>. The lowest order of these correspond to simple compressional, torsional, and in-plane and out-of-plane flexural waves, and for bars having cross sections of less than about 200 × 1000 nm, these are the dominant modes present <cit.>. In recent years, low-dimensional dielectric bars have been fabricated in SiN_ x having lengths L ranging from 400 μm to 1000 μm <cit.>, and it has been found experimentally that, over this range, the low-temperature thermal conductance scales as 1/L. This length dependence is important because it allows low thermal conductances, < 300 fW K^-1, to be achieved in nano-engineered components. For example, the sensitivities of far-infrared Transition Edge Sensors (TES) are determined by thermal fluctuation noise in the support legs of the device, and low conductances are needed to achieve ultra-low-noise operation, NEP < 10^-18 WHz^-1/2. In addition to measurements on long bars, measurements of conductance and thermal fluctuation noise have been carried out on short bars, 500 nm to 3 μm, and these show values in precise agreement with ballistic calculations based on the dispersion relations of the low-order elastic waves <cit.>. An outstanding requirement, however, is to understand how transport changes from being fully diffusive to fully ballistic as the length of a bar is reduced, and the characteristic scale length L_a over which the transition occurs. 
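For orientation, the ballistic limit referred to throughout is set by the universal single-mode quantum of thermal conductance, G_0 = π²k_B²T/(3h) <cit.>. The short sketch below (an illustration, not part of the measurement pipeline) evaluates it at a typical bath temperature used in this work.

```python
# Universal single-mode quantum of thermal conductance, G_0 = pi^2 k_B^2 T / (3h).
import math

k_B = 1.380649e-23      # J/K
h   = 6.62607015e-34    # J*s

def G_quantum(T):
    """Ballistic thermal conductance of one fully transmitted elastic mode."""
    return math.pi**2 * k_B**2 * T / (3.0 * h)

T = 0.1                                   # K, typical bath temperature here
G0 = G_quantum(T)
print(f"G_0 = {G0*1e15:.1f} fW/K per mode at {T} K")        # ~95 fW/K

# Four low-order modes per leg (compression, torsion, two flexural modes):
print(f"4 modes: {4*G0*1e15:.0f} fW/K per leg (ballistic upper bound)")
```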
The few-mode ballistic to diffusive transition can only be observed by making measurements on narrow < 2 μm legs having lengths that span the range 5 μm to 500 μm.In this paper we report a set of measurements that reveal the ballistic to diffusive transition, and thereby determine the thermal elastic wave attenuation length L_a. The uniqueness of our approach is that we determine the phonon mean free path (MFP) directly in a few-mode system. We do not infer the MFP indirectly through kinetic arguments based on, say, heat-capacity measurements on bulk samples.From a practical perspective, L_a is important because it determines how short a bar must be in cases where ballistic transport is needed: for example when fabricating phononic support structures based on, say, ring resonators, or equivalently Mach-Zehnder interferometers. Conversely, it determines how long a bar must be to guarantee a 1/L dependence in devices where diffusive transport is needed. More fundamentally, there are still uncertainties about whether the dominant phonon scattering process is elastic or inelastic, and the nature of the transport over the transition region. In particular, L_a corresponds to the elastic-wave coherence length, which itself is closely related to the thermal fluctuation noise coupled in and out of a structure. Despite its importance, we are not aware of any experiments that have measured L_a directly in a few-mode system, either in SiN_ x or any other similar amorphous dielectric. Also, we are not aware of any thermal measurements that show the functional form of the diffusive to ballistic transition in low-dimensional structures at low temperatures. The motivation for the work described here was to measure the thermal elastic-wave attenuation length L_a of SiN_ x, and to investigate the functional form of the transition region. Although our own interest lies in creating patterned phononic filters for controlling heat flow and noise in low-temperature devices, we believe that the measurement will be of interest to others trying to isolate devices from their environments, and studying loss mechanisms in high-Q micro-mechanical resonators <cit.>.§ EXPERIMENT We have fabricated a number of Transition Edge Sensors (TESs) having leg lengths ranging from a few microns to a few hundred microns: a typical example is shown in the inset of Fig. 1. Each device comprised a superconducting MoAu bilayer and an infrared β-Ta absorber on a 200 nm SiN_ x membrane. The absorber is not relevant to the experiment described here, but was included so that the devices could be tested as infrared (100-200 μm) sensors. The salient dimensions are listed in Table <ref>. These results supplement previous work <cit.>, allowing us to build up a complete set of data that spans the range of leg lengths needed.Each TES was biased through two Nb lines that ran along the surfaces of the legs. Electronic heat conduction along the superconducting lines is negligible because the quasiparticle density is exceedingly small at these temperatures. In addition, the Nb leads do not change the elastic modes of the legs because Nb is relatively ductile compared with SiN_ x. Each device was biased using a constant voltage source having a low internal impedance, 1.5 mΩ, which was achieved by integrating a low-value thin-film resistor in an isolated light-tight cavity close to the TES chips. A stray series resistance of typically 2 mΩ was also measured, and accounted for in the analysis. 
The devices were read out using SQUIDs as low-noise current-to-voltage convertors. The whole assembly was contained in a partitioned light-tight box, which was coated on the inside with RF and far-infrared absorber to avoid light leakage placing an additional heat load on the TES island. We have many years of experience fabricating and testing ultra-low-noise TESs, and have detailed models describing all aspects of behaviour <cit.>. The NEPs of our devices are determined by thermal fluctuation noise in the legs, with different leg conductances giving different sensitivities. The NEPs of the measured devices were all in the range 10^-18 WHz^-1/2 to 10^-19 WHz^-1/2, depending on the design, which means that small thermal fluxes, in the fW range, could be measured with high precision. The devices were tested in an Adiabatic Demagnetisation Refrigerator (ADR), on a pulse tube cooler, giving a base temperature of around 80 mK. The temperature was stabilised to 200 μK using a Proportional-Integral-Derivative (PID) controller to adjust the residual current in the ADR magnet. The purpose of the work described here was to observe the ballistic to diffusive transition in low-dimensional SiN_ x bars, and therefore it is pertinent to comment on the structure and stoichiometry of the SiN_ x used. The material was grown by LPCVD using dichlorosilane and ammonia. Because hydrogen and hydrochloric acid are both byproducts of the process, hydrogen can remain trapped, which affects physical characteristics. By increasing the flow of dichlorosilane, almost stress-free nitride can be formed through a silicon enriching process. The optical (633 nm) refractive index of the SiN_ x used for our devices was typically in the range 2.0 to 2.3, corresponding to the near-stoichiometric limit, which is consistent with a measured tensile stress of 500 MPa. SiN_ x of this composition is known to have many voids having scale sizes of around 10 nm. Our own surface roughness measurements using Atomic Force Microscopy (AFM) show structure having a log-normal height distribution covering the range 0.5-2.5 nm with a long tail out to 10 nm.The primary experimental method requires the base temperature of the fridge to be varied whilst recording the power flowing onto the island of the device. A series of measurements were taken so that the experimental data could be corrected for voltage offsets and stray resistance. Figure <ref> shows a set of typical, calibrated IV curves, and Fig. <ref> shows the power flow as a function of bath temperature. Because of electrothermal feedback, the hot temperature T_h essentially stays constant at the critical temperature T_c of the bilayer during this process. The intercept on the abscissa is the critical temperature of the bilayer. All of the T_c's are within 7 mK of each other, with the suppression of device 5 being due to the bilayer having 6 normal-metal bars patterned on its surface <cit.>. The power flow was then fitted to the functional formP = K ( T_h^n - T_b^n),where K and n are parameters, and T_b was the temperature of the copper block that housed the chip. The thermal conductance G can then be determined using G = ∂ P / ∂ T_h = K n T_h^(n-1). Although P is, strictly, measured with respect to variations in bath temperature, the symmetry of the expression allows G to be calculated for variations in T_h. This method for measuring thermal conductance is standard in the TES community. 
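The fitting procedure just described can be sketched in a few lines. In the sketch below the bath temperatures, powers, and the assumed T_c are illustrative placeholders rather than measured values.

```python
# Sketch of the standard TES conductance analysis: fit P = K*(T_h^n - T_b^n)
# with T_h ~ T_c held fixed by electrothermal feedback, then evaluate
# G = dP/dT_h = n*K*T_h^(n-1). Data below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

T_h = 0.140                          # K, assumed bilayer critical temperature

def power(T_b, K, n):
    return K * (T_h**n - T_b**n)

# placeholder bath temperatures (K) and measured powers (W)
T_b = np.linspace(0.080, 0.130, 8)
P_meas = power(T_b, 2e-10, 1.5) * (1 + 0.02 * np.random.randn(T_b.size))

(K, n), _ = curve_fit(power, T_b, P_meas, p0=(1e-10, 2.0))
G = n * K * T_h**(n - 1)             # thermal conductance at T_h
print(f"K = {K:.2e} W/K^n, n = {n:.2f}, G = {G*1e15:.1f} fW/K")
```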
Table 1 lists key parameters derived for the new devices measured; previous data has already been described <cit.>.Fig. <ref> shows the normalised power flow, at the coldest bath temperature 80 ± 5 mK, as a function of leg length, where we have also included measurements of our earlier ballistic devices <cit.>. Measurements on longer legs, which are entirely consistent with the results reported here, are not included because they add little information about the transition region. The powers plotted are normalised to the ballistic power expected on the basis of the effective number of elastic modes available, such that one would expect the limiting ballistic value to be unity. More specifically, normalisation was achieved as follows: For each device, the width and height of the legs were used to calculate the dispersion curves of the modes. These calculations required finding full numerical solutions to the elastic wave equations <cit.>. They were based on the bulk elastic constants (density, Young's modulus, and Poisson's ratio) of SiN_ x. Because of the long-wavelength nature of the phonons, this approach is suitable because it averages over any microstructure in the material. In any case, the dispersion curves are insensitive to the precise values of the bulk elastic constants used.The cut-off frequencies of the modes were then used to calculate the ballistic power flow, P_ bal, between two heat baths; one held at the critical temperature of the bilayer and the other held at the lowest bath temperature used in the experiment:P_ bal = ∑_i∫_ν_i^∞ [B(ν,T_h) - B(ν,T_b)] d ν,where ν_i is the cut-off frequency of mode i, andB(ν,T) = h ν/( e^h ν / k T - 1).Finally, the effective number of modes carrying heat was calculated through N_ eff = P_ bal/P_ qua, whereP_ qua = ∫_0^∞ [B(ν,T_h) - B(ν,T_b)] d ν,is the ballistic power that would flow in a single mode that propagates at all frequencies. Experimentally, we found that typically N_ eff = 5-7 modes transported heat in our devices at the temperatures used, with N_ eff = 4 being the low-temperature limit. Finally, the normalised heat flow, shown in Fig. <ref>, was calculated through ϵ = P / (4 P_ qua N_ eff), where the factor of 4 accounts for each TES having 4 legs. It can be seen that all of the short-legged devices had a normalised power flow of near unity, confirming that the power flowing in ballistic legs can be calculated accurately using dispersion relationships based on bulk elastic constants, with no free parameters. The power measurement errors shown in Fig. <ref> correspond to ± 5 %, which is conservative, and the errors in length, for the very short legs, arose because of the uncertainty in length associated with the gradual widening of the legs as they connected to their termination points. For ballistic legs this is not an issue, as the key effect of the constriction is to limit the modal throughput of the structure.§ DISCUSSION It is sometimes said that at low temperatures, < 500 mK, anharmonic scattering processes are not significant, and phonon scattering is caused by surface roughness. At 100 mK, however, thermal power is carried by phonons having frequencies of less than 5 GHz, corresponding to wavelengths in SiN_ x greater than 500 nm. A calculation of specularity, based on measured surface roughness data, as a function of phonon wavelength shows that the probability of a phonon being specularly reflected is essentially 100 % for wavelengths of greater than 500 nm. 
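Returning to the normalisation defined above, the quantities P_bal, P_qua and N_eff reduce to a short numerical routine. In the sketch below the mode cut-off frequencies are illustrative stand-ins for the values obtained from the full elastic-wave dispersion calculations.

```python
# Sketch of the ballistic-power normalisation (illustrative, not the
# paper's code): net power per mode above its cut-off, the single fully
# open mode P_qua, and N_eff = P_bal / P_qua.
import numpy as np
from scipy.integrate import quad

k_B, h = 1.380649e-23, 6.62607015e-34       # J/K, J*s

def B(nu, T):
    x = h * nu / (k_B * T)
    return k_B * T if x < 1e-12 else h * nu / np.expm1(x)   # nu->0 limit

def mode_power(nu_cut, T_h, T_b, nu_max=1e12):
    val, _ = quad(lambda nu: B(nu, T_h) - B(nu, T_b), nu_cut, nu_max)
    return val

T_h, T_b = 0.140, 0.080                     # K: bilayer T_c and bath
cutoffs = [0, 0, 0, 0, 2e9, 3e9, 5e9]       # Hz; 4 massless low-order modes
                                            # plus assumed gapped modes

P_bal = sum(mode_power(nc, T_h, T_b) for nc in cutoffs)
P_qua = mode_power(0.0, T_h, T_b)           # one mode open at all frequencies
print(f"P_bal = {P_bal*1e15:.2f} fW, N_eff = {P_bal/P_qua:.2f}")
```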
Detailed numerical simulations also show that the reduction in power seen cannot be accounted for by surface scattering <cit.>. A more appealing model is one based on density inhomogeneities in the amorphous SiN_ x. We have carried out extensive scattered-wave calculations of thermal transport, dividing few-mode dielectric bars into many thousands of elements <cit.>. Each scattering section, described by a set of complex elastic-wave scattering parameters, was chosen to represent a density inhomogeneity of a few percent with a scale length of 5 nm, in correspondence with the distribution of known feature sizes in SiN_ x. In this case, localised transport is revealed, where the thermal conductance is reduced by the appearance of disordered resonant cells, spanning tens to hundreds of elements. In other words, travelling waves are reflected strongly by impedance discontinuities caused by the formation of Fabry-Perot-like resonators in the disorder of the material.A feature of localised transport is that for bars that are long compared with the localisation length, the transmitted power P varies exponentially with lengthP = P_0exp[- L / L_e],where P_0 is the few-mode ballistic limit, and L_e is a characteristic scale length. Another feature of localisation is that the conductances of long bars are predicted to vary widely from one physical sample to the next, although this variation is reduced as the number of modes is increased, 2D and 3D transport is approached, and power is scattered laterally. In low-dimensional systems, the formation of high-Q resonant features exaggerates the variations in disorder from one sample to the next. Our own devices, and those of other groups, having legs that are hundreds of microns long, show measured conductance variations of typically ±15 %, but sometimes higher, even between notionally identical devices on the same wafer (verified by optical inspection and measurements of geometry and surface roughness). The variations seen in the longest legs in Fig. <ref> are a real effect, well above experimental error, which is tiny on the plot. Long, wide, artificially surface-roughened, but otherwise identical crystalline Si bars show even more extreme variations <cit.>, typically factors of 5, which is strongly indicative of localisation.In Fig. <ref> we plot (<ref>) for L_e = 25 μm, dashed green line, and it can be seen that an exponential dependence on length is ruled out.It can also be seen that the short, ballistic legs have almost identical behaviour, leading to high levels of device uniformity, and that the sample-to-sample variations increase as bars are made longer. This trend has been seen in our work going back many years. Our simulations also show that attenuation lengths of less than 100 μm require RMS density inhomogeneities of over 20%, which is possible, but seems high. In addition, the conductance repeatability of even long SiN_ x bars is much better than our localised heat transport simulations would suggest, and better than that seen in crystalline Si <cit.>. Overall, we take these results to indicate that some level of localisation caused by density inhomogeneities is present in long bars, > 100 μm, but this does not explain why the conductance variations are quite small, and indeed why the power falls as 1/L in the case of long bars. The fact that sample-to-sample conductance variations in disordered SiN_ x are substantially less than those seen in surface-roughened crystalline Si, and the complete inability to fit Fig. 
<ref> with an exponential, shows that phase-incoherent inelastic damping must be present in the amorphous material.At the other extreme, inelastic scattering leads to fully diffusive transport. In this case, it is straightforward to show analytically thatP = P_0 (1 + L/L_a)^-1,whereL_a is the amplitude attenuation length of the low-order elastic waves present: the travelling wave amplitude decays according to a(z) = a(0) exp( - z / L_a). (<ref>) can be appreciated by differentiating each side with respect to temperature, and noting that d P_0 / d T = G_ qua is the quantum thermal conductance, giving1/G = ( 1/G_ qua + N/G_ qua)where N = L/L_a, the length of the bar in attenuation lengths. The interpretation is clear: the overall conductance comprises the quantum limit of conductance in series with N cells, each of which behaves in a local sense ballistically so that it contributes an additional quantum-limited conductance.Physically, the most likely scattering process corresponds to phase-incoherent absorption and reradiation by Two Level Systems (TLSs) in the amorphous material. TLSs are known to absorb ultrasonic waves at microwave frequencies, to lead to high specific heats, and to influence thermal conductance <cit.>. We have measured the heat capacity of our SiN_ x to be many hundreds of times higher than the Debye value, an effect that is traditionally attributed to TLSs <cit.>. We also know that for very long legs, > 400 μm, the conductances of our TESs fall as 1/L.It should be noted that although, traditionally, TLSs are used to describe non-Debye-like dissipative behaviour in disordered dielectrics, the precise physical origin is usually not known <cit.>. In fact any non-harmonic dynamical behaviour that has non-equally spaced energy levels will lead to the saturation that TLSs are commonly used to represent. Here, we use the term TLS loosely to indicate the presence of any low-energy phase incoherent scattering process.The red solid line in Fig. <ref> shows a typical diffusive model, having the form of (<ref>), with L_a= 20 μm. The data is consistent with diffusive transport. Notice that in both the cases of localised and diffusive transport, the best fits give normalised ballistic conductances that are slightly higher than unity, which can be easily accounted for by uncertainties in the bulk elastic constants of the material.There is another peculiarity in our data that points to the role of TLSs. Short ballistic legs consistently have an n of around 2.4, which is exactly what one would expect. Indeed the fully ballistic limit of the lowest order modes, which propagate at all frequencies, is n=2. As the temperature is increased and higher-order modes cut on, the value of n increases accordingly. Our own very wide-leg devices, > 10 μm, and those of other groups, have n in the range 3 to 4, which is characteristic of a highly-moded structure. The legs measured here have an n of typically 1: this is reproducible, and something we have seen many times, going back several years, for long narrow legs. (Device 5 shows an anomalously low value of n, which is an artefact of the normal metal bars on the bilayer reducing the electrothermal feedback that holds the temperature of the TES constant as the temperature of the bath is varied, but this does not invalidate the flux measurement.)As the length of a bar is increased through the ballistic to diffusive transition, keeping the number of propagating modes constant, the temperature dependence changes from n=2.5 to n = 1. 
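Quantitatively, the two length dependences discussed above can be compared by least-squares fitting. The sketch below uses illustrative data points consistent with the diffusive trend of Fig. <ref> (not the measured set) and fits both the localised (exponential) and diffusive (1/(1 + L/L_a)) models.

```python
# Sketch comparing the two transport models on normalised power-vs-length
# data: localisation, P/P0 = exp(-L/L_e), vs diffusion, P/P0 = 1/(1+L/L_a).
# The data points below are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

L   = np.array([1, 2, 5, 10, 50, 100, 300, 500])        # leg length, um
eps = np.array([0.95, 0.91, 0.80, 0.67, 0.29, 0.17, 0.063, 0.038])

diffusive = lambda L, P0, La: P0 / (1.0 + L / La)
localised = lambda L, P0, Le: P0 * np.exp(-L / Le)

(P0_d, L_a), _ = curve_fit(diffusive, L, eps, p0=(1.0, 20.0))
(P0_e, L_e), _ = curve_fit(localised, L, eps, p0=(1.0, 25.0))

for name, model, pars in [("diffusive", diffusive, (P0_d, L_a)),
                          ("localised", localised, (P0_e, L_e))]:
    resid = eps - model(L, *pars)
    print(f"{name}: scale = {pars[1]:.1f} um, "
          f"rms residual = {np.sqrt(np.mean(resid**2)):.3f}")
```

On data of this form, the exponential model leaves large residuals at long lengths, while the diffusive model recovers an attenuation length close to 20 μm.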
Diffusion calculations based on travelling waves with TLS absorption do show that n can fall well below 2 as a consequence of the elastic losses having a frequency and power dependence <cit.>. Interestingly, the reason why this is not seen in wide-leg devices is that the increase in the number of modes with temperature masks the effect on n of the TLSs. Also, a value of n= 0.5 can be seen in the data of Schwab <cit.>, up to a temperature of 200 mK, at which point the gradient changes to n= 2.7, which is characteristic of the temperature range over which the frequency and power dependence of TLSs would be influential. Our measured elastic-wave attenuation length of 20 μm is slightly low, but generally in good agreement with ultrasonic, mechanical and thermal measurements of phonon mean free path made on bulk amorphous dielectrics <cit.>. It is also interesting to estimate the Q-factor relating to `internal friction'. Using Q^-1 = λ / (π L_a), where λ≈ 500 nm is an approximate characteristic thermal wavelength, we find Q^-1 = 8×10^-3, which again is slightly high, but comparable with measured Q^-1 factors of amorphous bulk materials, typically in the range 10^-3 to 10^-4 <cit.>. It has been noted by others that the vibrational modes of mesoscopic systems have anomalously low Q values compared with larger systems fabricated from the same material <cit.>.We make two final comments. The first is that we are not entirely sure about the degree to which a very thin layer of SiO_2, used as an etch stop, is removed from the underside of the legs during processing. Zink and Hellman <cit.> noticed that the nature of the residual underlayer can affect thermal properties at low temperatures. The scattering processes described here nevertheless prevail in the case where surface contamination is present, say if patches of SiO_2 remain on one side of the legs after processing; it is simply that there is an additional contribution to the disorder, and perhaps surface states. The second is that Fig. <ref> shows that even when the leg length is shorter than 1 μm, there is no evidence of a rapid increase in thermal flux above the ballistic travelling-wave limit, indicating that there is no measurable evanescent coupling <cit.>.§ CONCLUSION We have observed directly the diffusive to ballistic transition in heat flow along low-dimensional SiN_ x bars. The nature of the transition is indicative of a diffusive process, but the systematic increase in device-to-device variation as the bars are made longer is strongly indicative of localisation: Fabry-Perot resonances caused by disorder in the material. The thermal elastic-wave attenuation length has been determined to be 20 μm, which to our knowledge is the first measurement of this important parameter. An anomalous value of n ≈ 1 in the case of long, narrow, few-mode bars points to the role of TLSs. Overall, we favour a model where the dissipative losses associated with TLSs bring about diffusive transport and also dampen the Q factors of the localised resonant cavities formed by disorder.Ballistic legs, L< 10 μm, give highly uniform device-to-device behaviour because of the elimination of localisation, but for some applications, the thermal conductances achieved are not low enough. 
Given that the characteristic wavelength of thermal phonons is approximately 1 μm at these temperatures, and given that the travelling-wave attenuation length has been measured to be 20 μm, it is fully realistic to micro-engineer patterned phononic filters, Mach-Zehnder interferometers, that use interference effects to control the flow of heat. If diffusive transport is needed, a longer leg must be used, but some uniformity in behaviour will be lost. The observed coherence length, and the nature of the scattering mechanism present, will have a strong influence on the thermal fluctuation noise in the legs. In the case of an inelastic process, thermal exchange noise can take place between the losses within a coherence length of the end of a bar and the bath. We are keen to measure fluctuation noise as a function of leg length as the functional form would shed further light on the scattering processes present.Emily Williams gratefully acknowledges support from NanoDTC EPSRC Grant EP/L015978/1 during the course of this work. ref1 K. Schwab, E. A. Henriksen, J. M. Worlock, and M. L. Roukes, Nature 404, 974 (2000).ref2 L. G. C. Rego and G. Kirczenow, Phys. Rev. Lett. 81, 232 (1998).ref3 K. R. Patton and M. R. Geller, Phys. Rev. B 64, 155320 (2001).ref4 D. H. Santamore and M. C. Cross, Phys. Rev. B 66, 144302 (2002).ref5 R. Prasher, T. Tong, and A. Majumdar, Nano Lett. 8 (1), 99 (2008).ref6 M. C. Cross and R. Lifshitz, Phys. Rev. B 64, 085324 (2001).ref7 D. J. Goldie, J. R. Gao, D. M. Glowacka, D. K. Griffin, R. A. Hijmering, P. Khosropanah, B. D. Jackson, P. D. Mauskopf, D. Morozov, J. A. Murphy, M. Ridder, N. Trappe, C. O'Sullivan, and S. Withington, Proc. SPIE 8452, 84520A (2012).ref8 P. Khosropanah, R. A. Hijmering, M. Ridder, M. A. Lindeman, L. Gottardi, M. Bruijn, J. van der Kuur, P. A. J. de Korte, J. R. Gao, and H. Hoevers, J. Low Temp. Phys. 167, 188 (2012).ref9 D. Osman, S. Withington, D. J. Goldie, and D. M. Glowacka, J. Appl. Phys. 116, 064506 (2014).ref10 P. Mohanty, D. A. Harrington, K. L. Ekinci, Y. T. Yang, M. J. Murphy, and M. L. Roukes, Phys. Rev. B 66, 085416 (2002).ref11 S. S. Verbridge and J. M. Parpia, J. Appl. Phys. 99, 124304 (2006).ref12 M. Yuan, M. A. Cohen, and G. A. Steele, Appl. Phys. Lett. 107, 263501 (2015).ref13 D. Osman, Thermal Transport and Noise in Micro-Engineered Support Structures for Detector Applications, PhD Thesis, University of Cambridge (2016).ref30 Rebeccaref14 D. J. Goldie, M. D. Audley, D. M. Glowacka, V. N. Tsaneva and S. Withington, J. Appl. Phys. 103, 084509 (2008).ref15 D. J. Goldie, M. D. Audley, D. M. Glowacka, V. N. Tsaneva and S. Withington, J. Appl. Phys. 105, 074512 (2009).ref16 M. Schneiderman, Thermal Modelling and Fabrication of Suspended Phononic Structures for Transition Edge Sensors, M. Phil. Thesis, University of Cambridge (2015).ref17 K. Rostem, D. T. Chuss, F. A. Colazo, E. J. Crowe, K. L. Denis, N. P. Lourie, S. H. Moseley, T. R. Stevenson, and E. J. Wollack, J. Appl. Phys. 115, 124508 (2014).ref18 P. W. Anderson, B. I. Halperin, and C. M. Varma, Philosophical Magazine 25, 1 (1971).ref19 M. Von Haumeder, U. Strom and S. Hunklinger, Phys. Rev. Lett. 44, 84 (1980).ref20 R. C. Zeller and R. O. Pohl, Phys. Rev. B 4, 2029 (1971).ref21 R. O. Pohl, X. Liu and E. Thompson, Rev. Mod. Phys. 74, 991 (2002).ref22 C. C. Yu and J. J. Freeman, Phys. Rev. B 36, 7620 (1987).ref23 B. L. Zink and F. Hellman, Solid State Communications 129, 199 (2004).ref24 B. L. Zink, R. Pietri and F. Hellman, Phys. Rev. Lett. 96, 055902 (2006).ref31 A. J. 
Leggett and D. C. Vural, J. Phys. Chem. B 117, 12966 (2013). ref25 S. Withington, D. J. Goldie and A. V. Velichko, Phys. Rev. B 83, 195418 (2011).ref26 K. E. Goodson, M. I. Flik, L. T. Su, and D. A. Antoniadis, J. Heat Transfer 116, 317 (1994).ref27 S. K. Watson and R. O. Pohl, Phys. Rev. B 68, 104203 (2003).ref28 A. A. Krushynska and V. V. Meleshko, J. Acoust. Soc. Am. 129 (3), 1324 (2011).ref29 F. Xie, K.-Q. Chen, Y. G. Wang and Y. Zhang, J. Appl. Phys. 103, 084501 (2008).
http://arxiv.org/abs/1705.09453v1
{ "authors": [ "Stafford Withington", "Emily Williams", "David J. Goldie", "Christopher N. Thomas", "Max Schneiderman" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20170526065052", "title": "Thermal elastic-wave attenuation in low-dimensional SiN$_{x}$ bars at low temperatures" }
VU University Medical Center Amsterdam, Amsterdam, The Netherlands, Cancer Center Amsterdam, Amsterdam, The Netherlands, Academic Centre for Dentistry Amsterdam, Amsterdam, The Netherlands, Netherlands Organisation for Applied Scientific Research, Zeist, The Netherlands, Academic Medical Center, Amsterdam, The Netherlands, Horaizon BV, Rotterdam, The NetherlandsClassification of Quantitative Light-Induced Fluorescence Images Using Convolutional Neural Network Sultan Imangaliyev1,2,6 Corresponding author.Monique H. van der Veen3 Catherine M. C. Volgenant3 Bruno G. Loos3 Bart J. F. Keijser3,4 Wim Crielaard3 Evgeni Levin5,6 December 30, 2023 ========================================================================================================================================================================Images are an important data source for diagnosis and treatment of oral diseases. The manual classification of images may lead to misdiagnosis or mistreatment due to subjective errors. In this paper an image classification model based on a Convolutional Neural Network is applied to Quantitative Light-induced Fluorescence images. The deep neural network outperforms other state-of-the-art shallow classification models in predicting labels derived from three different dental plaque assessment scores. The model directly benefits from multi-channel representation of the images, resulting in improved performance when, besides the Red colour channel, additional Green and Blue colour channels are used. § INTRODUCTIONDiagnosis and therapy in many areas of medicine, including dentistry, nowadays extensively rely on technological advances in biomedical imaging. One of the challenges in the diagnosis of dental patients during daily practice is assessment of their dental plaque level. A novel way to look at this plaque is the use of a Quantitative Light-induced Fluorescence (QLF) camera. When the QLF-camera is used, some dental plaque fluoresces red, which is suggested to be an indication of the pathogenicity of the dental plaque <cit.>.In this paper we apply a deep artificial neural network to QLF-images to make a predictive classification model, where class separation is based on the amount of red fluorescent dental plaque disclosed in such images. Although both intra-examiner and inter-examiner reliability of manual assessment of QLF-images are shown to be high <cit.>, this may become expensive and laborious if the number of images is large. Therefore, there is a need to automate this procedure by implementing a computer-based system for assessment of QLF-images. Existing computer programs developed for this goal have several drawbacks which limit the efficiency of QLF-image assessment. They require that the images must have been captured under fixed circumstances such as camera geometry, focal distance and ambient light conditions <cit.>, which is hard to achieve in clinical settings.The problem mentioned above could be solved by the use of Deep Learning models, because descriptive features can be learnt directly from raw data representations <cit.> being insensitive to ambient conditions and natural image variability. 
Since images have a special two-dimensional structure, a group of Deep Learning methods called Convolutional Neural Networks (CNN) explicitly uses the advantages of such a representation <cit.>. Applications of CNN may include both non-biological <cit.> and biological images <cit.>.The aim of this paper is to describe the novel application of CNN to QLF-images obtained during a clinical intervention study <cit.>. Furthermore, we compare the performances of the CNN and several state-of-the-art classification models. We tested all of these models on three existing plaque assessment scoring systems. We also checked the influence of adding various colour channels on the model performance. Possible differences were explained based on the biological nature of the problem and based on the properties of these models. Previous studies on this topic either focused on only a single plaque scoring system without providing detailed analysis of results <cit.> or used a small dataset of different images and a different network architecture <cit.>.§ MATERIALS AND METHODS §.§ Convolutional Neural NetworksMany of the modern deep learning models utilize very deep architectures to achieve superhuman performance in solving object recognition problems <cit.>. One of such architectures is a novel ultra-deep residual learning network (ResNet) <cit.>. This architecture can be implemented by adding so-called 'shortcut connections' <cit.> which skip one or more layers. They perform a mapping so that their outputs are added to the outputs of the stacked layers. The whole network can be trained and implemented by using common libraries without modifying the solvers, hence adding neither extra parameters nor computational complexity. ResNet and many other architectures <cit.> use the convolution operator in extracting useful feature mappings in the image classification task. Generally, given the filter K ∈ℝ^(2h_1+1) × (2h_2+1), the discrete convolution of the image I with filter K is given by(I ∗ K)_r,s := ∑ _u = -h_1 ^h_1∑ _v = -h_2^h_2 K_u,v I_r+u,s+v. Let layer l ∈ℤ be a convolutional layer. The i^th feature map in layer l, denoted Y_i^(l), is computed asY_i^(l) = B^(l)_i + ∑ _j = 1^m_1^(l-1) K^(l)_i,j∗ Y_j^(l-1),where B_i^(l) is a bias matrix and K^(l)_i,j is the filter of size (2h_1^(l) + 1) × (2h_2^(l) + 1) connecting the j^th feature map in layer (l-1) with the i^th feature map in layer l <cit.>. §.§ DatasetThe analyzed 427 QLF-images were taken during a clinical intervention study <cit.> which was conducted at the Academic Centre for Dentistry Amsterdam. Those images were translated into a combined dataset of three colour channels with 216×324 raw pixel intensity values in each of them. In total, three different experiments were performed on labels derived from plaque scoring systems such as Red Fluorescent Plaque Percentage (RF-PP) <cit.>, Red Fluorescent modified Quigley-Hein index (RF-mQH) <cit.> and modified Silness-Loe Plaque index (mSLP) <cit.>. §.§ Experimental SetupThe CNN model was implemented on an NVIDIA GeForce GTX Titan X Graphics Processing Unit (GPU) using the Theano package <cit.>. To compare the influence of different colour channels, three dataset compositions were tested: only Red, Red with Green, or full RGB representations. 
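For concreteness, the discrete convolution defined in Sec. 2.1 can be transcribed directly into code. The sketch below is didactic only (production CNNs use optimized GPU kernels rather than explicit loops), and the image and filter values are random placeholders.

```python
# Direct transcription of (I*K)_{r,s} = sum_{u,v} K_{u,v} I_{r+u, s+v}
# for one filter on one channel, evaluated on the 'valid' interior region.
import numpy as np

def conv2d(I, K):
    h1, h2 = (K.shape[0] - 1) // 2, (K.shape[1] - 1) // 2
    out = np.zeros((I.shape[0] - 2 * h1, I.shape[1] - 2 * h2))
    for r in range(out.shape[0]):
        for s in range(out.shape[1]):
            # output index (r, s) corresponds to image position (r+h1, s+h2),
            # so the window r+u, s+v always stays inside the image
            patch = I[r:r + 2 * h1 + 1, s:s + 2 * h2 + 1]
            out[r, s] = np.sum(K * patch)
    return out

I = np.random.rand(216, 324)   # one colour channel of a QLF image
K = np.random.randn(3, 3)      # a (2*1+1) x (2*1+1) filter
Y = conv2d(I, K)               # feature map, before bias and nonlinearity
print(Y.shape)                 # (214, 322)
```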
To compare the CNN performance with the performance of the other models, experiments were performed using various shallow classification models implemented in the Scikit-learn package <cit.> such as Logistic Regression (LR), Support Vector Machines Classifier with Gaussian Kernel (SVMC-K), Support Vector Machines Classifier with Linear Kernel (SVMC-L), Gaussian Naïve Bayes Classifier (GNB), Gradient Boosting Classifier (GBC), K-Neighbors Classifier (KNC), and Random Forest Classifier (RFC).Hyperparameters of those models were selected via an exhaustive grid search with a stratified shuffled cross-validation procedure so that 80% of the dataset was used as a training set, 10% as a validation set, and the remaining 10% as a test set. All binary models were adapted to a multiclass setting by using a one-versus-all approach. The predictive performance of the models was assessed by calculating the F_1-score <cit.>. The reported final F_1-score was obtained by averaging the results of ten random shuffles with fixed test-train splits across all models.§ RESULTS AND DISCUSSION §.§ Model Performance EvaluationResults of experiments for RF-PP, RF-mQH and mSLP labels are provided in Figure <ref>, Figure <ref>, and Figure <ref> respectively. As can be seen from Figure <ref>, in the experiment with the RF-PP label, most of the models have a perfect classification performance on the training dataset, but a poor performance on the test dataset. Moreover, the results indicate that using only the Red channel results in a relatively good and comparable performance between both SVM models and Logistic Regression. Adding the Green and especially Blue channels improves the performance of CNN compared to the other models. As a result, the best model (CNN) provided a 0.76 ± 0.05 F_1-score on the test set and a 0.89 ± 0.11 F_1-score on the training set.Similar to the experiment with RF-PP labels, results depicted in Figure <ref> and Figure <ref> clearly demonstrate the advantage of CNN over the other models, especially after adding the Green channel. As a result, the best model (CNN) provided a 0.54 ± 0.07 F_1-score on the test set for RF-mQH labels and a 0.40 ± 0.08 F_1-score on the test set for mSLP labels. However, unlike in the RF-PP case, adding the Blue channel did not improve and even decreased the performance for most of the models. Also, there is a clear difference between the performance of models applied on RF-PP and the other labels overall. Namely, even the best model's F_1-scores are in the interval [0.4, 0.55] in the experiments with RF-mQH and mSLP labels, which are much less than the 0.76 achieved in experiments with the RF-PP label.§.§ Advantages of the Deep Learning ModelThe results of the models' predictive performance evaluation clearly demonstrated the advantage of the CNN model over the other models. In general, the predictive performance of the model on previously unseen data, i.e., its generalization, can be improved if certain a priori information about the problem is added into the choice of the model architecture <cit.>. In case of images, domain information about the problem can be utilized by a model if such a model is able to learn spatial information between the pixels of an image. This property is explicitly embedded into the CNN model via a discrete convolution operation <cit.>. In the case of the QLF-images the model may learn, for example, the intensity of red colour associated with plaque, or the sharpness of edges between gingiva and teeth as well as between teeth. 
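The model-selection protocol described in the Experimental Setup can be sketched with Scikit-learn as follows. The feature matrix, class labels and parameter grid below are placeholder assumptions (not the study's actual configuration), with SVMC-K as the example estimator.

```python
# Sketch of grid search with stratified shuffled splits and F1 scoring.
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedShuffleSplit
from sklearn.svm import SVC

X = np.random.rand(427, 1000)            # stand-in for flattened QLF pixels
y = np.random.randint(0, 3, size=427)    # three plaque-score classes

# 80% train / 10% validation per shuffle, ten shuffles, as in the setup
cv = StratifiedShuffleSplit(n_splits=10, train_size=0.8, test_size=0.1,
                            random_state=0)
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-2, 1e-3, 1e-4]}

search = GridSearchCV(SVC(kernel="rbf"), param_grid,
                      scoring="f1_macro", cv=cv)
search.fit(X, y)
print(search.best_params_, f"CV F1 = {search.best_score_:.2f}")
```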
Classification results shown in Figures <ref>, <ref>, <ref> indicate the robustness of CNN to overfitting despite image variability. Other models used in this study do not directly embed spatial information unique to image pixel representation; thus these models have poorer generalization properties and result in a lower classification performance on previously unseen data.The QLF-images are a good example of images where learning an invariant representation is crucial for good predictive performance. Typical examples of QLF-images for each of the three RF-PP classes are provided in Figure <ref>.As seen from this figure, these images were taken under various conditions, such as slightly different focal distances, rotations and angles, and not all images are perfectly centered or focussed to get better resolution. Besides the ambient conditions under which the pictures were taken, the dentition of every person is unique. Thus, there is a risk that standard models would overfit and learn variations in angles and distances which are not important for the plaque assessment. §.§ Influence of Multi-channel RepresentationFor the experiments on the RF-PP plaque labels, the CNN model results in superior performance over the other classification models if all three colour channels were used. In the experiments on the RF-mQH and mSLP labels, an improvement was achieved when only the Green channel was added. Moreover, the standard deviation of the training performance tends to be narrower compared to when the model is applied on the Red channel only. This is especially true for GBC, LR and both of the SVMC models.The Red over Green ratio of pixel values is generally used to identify red fluorescent plaque. Therefore, previous work performed on QLF-images <cit.> used the Red over Green pixel intensities' ratio instead of using the Red channel's pixel intensity values only. The Green channel helps to distinguish plaque from gingiva, since they have slightly different pixel values in the Green channel of the RGB representation. As for adding the Blue channel, due to the technical implementation of the QLF-camera, the blue backscattered light is expected to produce sharper defined edges in images with little red fluorescent plaque, in comparison to images with a thicker plaque. The CNN model incorporates usage of all three colour channels without calculating ratios, thus numerically it is more stable and preferable. Based on these results we conclude that the CNN model benefits from multi-channel representation of the images. Precisely speaking, the CNN model efficiently and explicitly uses the fact that each colour channel contains important information relevant to the classification task.§ CONCLUSIONIn this study, we applied the CNN model for the automatic classification of red fluorescent dental plaque images. A comparison with several other state-of-the-art shallow classification methods clearly showed the advantage of the CNN model in achieving a higher prediction performance. Such a result was possible because the CNN model directly learns invariant feature representations from raw pixel intensity values without engineering of hand-crafted features. 
We expect that Deep Learning of red fluorescent dental plaque images can help dental practitioners to perform efficient fluorescent plaque assessments and thus contribute to the improvement of patients' oral health.10bergstra2011theano Bergstra, J., Bastien, F., Breuleux, O., Lamblin, P., Pascanu, R., Delalleau, O., Desjardins, G., Warde-Farley, D., Goodfellow, I., Bergeron, A., et al.: Theano: Deep learning on GPUs with Python. In: NIPS 2011, BigLearning Workshop, Granada, Spain (2011)david2016deeppainter David, O.E., Netanyahu, N.S.: DeepPainter: Painter classification using deep convolutional autoencoders. In: International Conference on Artificial Neural Networks. pp. 20–28. Springer (2016)esteva2017dermatologist Esteva, A., Kuprel, B., Novoa, R.A., Ko, J., Swetter, S.M., Blau, H.M., Thrun, S.: Dermatologist-level classification of skin cancer with deep neural networks. Nature542(7639),115–118 (2017)he2016deep He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 770–778 (2016)he2016identity He, K., Zhang, X., Ren, S., Sun, J.: Identity mappings in deep residual networks. In: European Conference on Computer Vision. pp. 630–645. Springer (2016)imangaliyev2016deep Imangaliyev, S., van der Veen, M.H., Volgenant, C.M., Keijser, B.J., Crielaard, W., Levin, E.: Deep learning for classification of dental plaque images. In: International Workshop on Machine Learning, Optimization and Big Data. pp. 407–410. Springer (2016)jarrett2009best Jarrett, K., Kavukcuoglu, K., Ranzato, M., LeCun, Y.: What is the best multi-stage architecture for object recognition? In: Computer Vision, 2009 IEEE 12th International Conference on. pp. 2146–2153. IEEE (2009)kang2006dental Kang, J., Li, X., Luan, Q., Liu, J., Min, L.: Dental plaque quantification using cellular neural network-based image segmentation. In: Intelligent computing in signal processing and pattern recognition, pp. 797–802. Springer (2006)kim2014monitoring Kim, Y.S., Lee, E.S., Kwon, H.K., Kim, B.I.: Monitoring the maturation process of a dental microcosm biofilm using the Quantitative Light-induced Fluorescence-digital (QLF-D). Journal of dentistry42(6),691–696 (2014)LeCun2015 LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature521(7553),436–444 (2015)lecun1989backpropagation LeCun, Y., Boser, B., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W., Jackel, L.D.: Backpropagation applied to handwritten zip code recognition. Neural computation1(4),541–551 (1989)lecun2010convolutional LeCun, Y., Kavukcuoglu, K., Farabet, C., et al.: Convolutional networks and applications in vision. In: ISCAS. pp. 253–256 (2010)lee2013association Lee, E.S., Kang, S.M., Ko, H.Y., Kwon, H.K., Kim, B.I.: Association between the cariogenicity of a dental microcosm biofilm and its red fluorescence detected by Quantitative Light-induced Fluorescence-Digital (QLF-D). Journal of dentistry41(12),1264–1270 (2013)scikit-learn Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., et al.: Scikit-learn: Machine learning in Python. Journal of Machine Learning Research12,2825–2830 (2011)simonyan2014very Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556(2014)sokolova2009systematic Sokolova, M., Lapalme, G.: A systematic analysis of performance measures for classification tasks. 
Information Processing & Management 45(4), 427–437 (2009)
szegedy2015going Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1–9 (2015)
van2016dynamics van der Veen, M.H., Volgenant, C.M., Keijser, B.J., ten Cate, J.B., Crielaard, W.: Dynamics of red fluorescent dental plaque during experimental gingivitis – a cohort study. Journal of Dentistry 48, 71–76 (2016)
volgenant2016comparison Volgenant, C.M., y Mostajo, M.F., Rosema, N.A., van der Weijden, F.A., ten Cate, J.B., van der Veen, M.H.: Comparison of red autofluorescing plaque and disclosed plaque – a cross-sectional study. Clinical Oral Investigations 20(9), 2551–2558 (2016)
weijden1993comparative Weijden, G., Timmerman, M., Nijboer, A., Lie, M., Velden, U.: A comparative study of electric toothbrushes for the effectiveness of plaque removal in relation to toothbrushing duration. Journal of Clinical Periodontology 20(7), 476–481 (1993)
http://arxiv.org/abs/1705.09193v1
{ "authors": [ "Sultan Imangaliyev", "Monique H. van der Veen", "Catherine M. C. Volgenant", "Bruno G. Loos", "Bart J. F. Keijser", "Wim Crielaard", "Evgeni Levin" ], "categories": [ "cs.CV", "cs.LG" ], "primary_category": "cs.CV", "published": "20170525142140", "title": "Classification of Quantitative Light-Induced Fluorescence Images Using Convolutional Neural Network" }
End-to-end Global to Local CNN Learning for Hand Pose Recovery in Depth Data[This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.] Meysam Madadi^1,2, Sergio Escalera^1,3, Xavier Baró^1,4, Jordi Gonzàlez^1,2 ^1 Computer Vision Center, Edifici O, Campus UAB, 08193 Bellaterra (Barcelona), Catalonia Spain ^2 Dept. of Computer Science, Univ. Autònoma de Barcelona (UAB), 08193 Bellaterra, Catalonia Spain ^3 Dept. Mathematics and Informatics, Universitat de Barcelona, Catalonia, Spain ^4 Universitat Oberta de Catalunya, Catalonia, Spain ================================================================================================================================================

Despite recent advances in 3D pose estimation of human hands, especially thanks to the advent of CNNs and depth cameras, this task is still far from being solved. This is mainly due to the highly non-linear dynamics of fingers, which make hand model training a challenging task. In this paper, we exploit a novel hierarchical tree-like structured CNN, in which branches are trained to become specialized in predefined subsets of hand joints, called local poses. We further fuse local pose features, extracted from the hierarchical CNN branches, to learn higher-order dependencies among joints in the final pose by end-to-end training. The loss function is also defined to incorporate appearance and physical constraints on feasible hand motion and deformation. Finally, we introduce a non-rigid data augmentation approach to increase the amount of training depth data. Experimental results suggest that feeding a tree-shaped CNN, specialized in local poses, into a fusion network for modeling joint correlations and dependencies helps to increase the precision of the final estimations, outperforming state-of-the-art results on the NYU and SyntheticHand datasets. § INTRODUCTION Recently, hand pose recovery has attracted special attention thanks to the availability of low-cost depth cameras, like Microsoft Kinect <cit.>. Unsurprisingly, 3D hand pose estimation plays an important role in most HCI application scenarios, like social robotics and virtual immersive environments <cit.>. Despite impressive pose estimation improvements thanks to the use of CNNs and depth cameras, 3D hand pose recovery still faces some challenges before becoming fully operational in uncontrolled environments with fast hand/finger motion, self-occlusions, noise, and low resolution <cit.>. Although the use of CNNs and depth cameras has allowed modeling highly non-linear hand pose motion and finger deformation under extreme variations in appearance and viewpoints, accurate 3D-based hand pose recovery is still an open problem. Two main strategies have been proposed in the literature for addressing the aforementioned challenges: model-based and data-driven approaches. Model-based generative approaches fit a predefined 3D hand model to the depth image <cit.>.
However, as a many-to-one problem, accurate initialization is critical; besides, the use of global objective functions may not yield accurate results in case of self-occlusions of fingers. Alternatively, the so-called data-driven approaches use the available training data to directly learn hand pose from appearance. Data-driven approaches for hand pose estimation have benefited from recent advances in convolutional neural networks (CNNs) <cit.>. CNNs, as in many other computer vision tasks, have been successfully applied in data-driven hand pose recovery approaches either for heat-map regression of discrete outputs (corresponding to joint estimation probabilities) or for direct regression of continuous outputs (corresponding to joint locations) <cit.>. Heat-map regression models require additional optimization time for computing the likelihood of a joint being located at a particular spatial region. Unfortunately, heat-map based methods are prone to propagating errors when mapping images to the final joint space. A main issue with CNNs as direct regression models, on the other hand, is how to deal with highly non-linear output spaces, since overly complex models jeopardize generalization. Indeed, for CNNs, learning suitable features (i.e., with good generalization and discrimination properties) in highly non-linear spaces, while taking into account structure and dependencies among parameters, is still a challenging task. In this paper, direct regression of the 3D hand pose is implemented as a specific tree-shaped CNN architecture designed to avoid training a coarse, global hand motion model, allowing instead finer local specializations for different fingers and hand regions. We thus break the hand pose estimation problem into hierarchical optimization subtasks, each one focused on a specific finger and hand region. Combined together in a tree-like structure, the final CNN shows fast convergence rates due to computations applied at a local level. In addition, we model correlated motion among fingers by fusing the features learned in the hierarchy through fully connected layers and training the whole network in an end-to-end fashion. The main advantage of this strategy is that the 3D hand pose prediction problem is attained as a global learning task based on local estimations. Moreover, it has been shown that the L2 loss, in regression problems, is sensitive to outliers and ground-truth noise <cit.>. Therefore, in order to further improve the final estimation in the highly non-linear space of hand configurations, we incorporate appearance and physical penalties in the loss function, based on the physical constraints typically applied in 3D reconstruction of human poses <cit.>. By including such penalties during the network learning stage, unrealistic pose configurations are avoided. Lastly, as is common in deep learning problems, the variability and amount of data define the success of a model, and it has been shown that CNN models cannot always generalize well to unseen data. In this paper we introduce a non-rigid augmentation approach to generate realistic data from the training data. To the best of our knowledge, this is the first time such augmentation is applied to depth images. We use ground-truth joints to compute hand kinematic parameters and deform hand joints. We then apply interpolation techniques to deform the point cloud based on the new joints.
Results demonstrate that our proposed framework trained on augmented data outperforms state-of-the-art data-driven approaches on the NYU and MSRA datasets. We qualitatively compare state-of-the-art pose estimation approaches with ours in Fig. <ref>. The work of Tompson <cit.> estimates 2D pose using joint heat-maps only, thus providing poor pose estimation results in the case of noisy input images (second column). The results of Oberweger <cit.> (DeepPrior) show that PCA is not able to properly model hand pose configurations. Oberweger <cit.> improved previous results by applying an error feedback loop approach. However, error feedbacks do not provide accurate pose recovery for all the variability of hand poses. In essence, in our proposed local-based pose estimation framework, a separate network is trained for each finger. Subsequently, we fuse the learned local features to include higher-order dependencies among joints, thus obtaining better pose estimation results than previous approaches. § RELATED WORK Hand pose estimation has been extensively studied in the literature <cit.>; we refer the reader to <cit.> for a complete classification of state-of-the-art works in the field. Here we focus mostly on recent works using CNNs and depth cameras. Most CNN-based architectures in data-driven hand pose estimation approaches are specifically designed to be discriminative and generalizable. Although the success of such approaches depends on the availability and variability of training data, CNN models cope reasonably well with this problem, and two main families of approaches can be distinguished in the literature, namely heat-map and direct regression methods. Heat-map approaches estimate likelihoods of joints for each pixel as a pre-processing step. In <cit.>, a CNN is fed with multi-resolution input images and one heat-map per joint is generated. Subsequently, an inverse kinematic model is applied on such heat-maps to recover the hand pose. Nevertheless, this approach is prone to propagating errors when mapping to the original image, and the estimated joints may not respect the hand physical constraints. The work of <cit.> extends this strategy by applying multi-view fusion of extracted heat-maps, where 3D joints are recovered from only three different viewpoints. In this approach, erroneous heat-maps are expected to be improved in the fusion step using complementary viewpoints. The key idea in this work is to reduce the complexity of the input data by aligning all data with respect to the eigenvectors of the hand point cloud. For most heat-map based approaches, however, an end-to-end solution can only be achieved by considerably increasing the complexity of the model, e.g., by introducing a cascading approach <cit.>. Although such approaches work well for 2D pose estimation in RGB images, they are not necessarily able to model occluded joints in complex hand poses in depth data. As an alternative, a number of works propose direct regression for estimating the joint positions of the 3D hand pose based on image features <cit.>. As mentioned in <cit.>, contrary to heat-map based methods, hand pose regression can better handle the increase in complexity of modeling highly non-linear spaces. Although some approaches propose Principal Component Analysis (PCA) to reduce the pose space <cit.>, such linear methods typically fail when dealing with the large pose and appearance variability produced by different viewpoints (as shown in Fig.
<ref>). Recently, error feedback <cit.> and cascading <cit.> approaches have proven to avoid local minima by iterative error reduction. The authors of <cit.> propose to train a generative network of depth images by iteratively improving an initial guess. In this sense, Neverova <cit.> use hand segmentation as an intermediate representation to enrich pose estimation with iterative multi-task learning. Also, the method proposed in <cit.> divides the global hand pose problem into local estimations of palm pose and finger poses. Thus, finger locations can be updated at each iteration relative to the hand palm. Contrary to our method, the authors use a cascade of classifiers to combine such local estimations. The authors of <cit.> apply a CNN and use the resulting feature maps as descriptors for computing the k-nearest shapes. Similarly to our approach, their CNN separates palm and fingers and computes the final descriptor by dimensionality reduction. Differently from our approach, they factorize the feature vectors and nearest-neighbor hyper-parameters to estimate the hand pose. In a different way, we propose training the network by fusing local features to avoid inaccurate local solutions, without the need of introducing cascading strategies or multi-view set-ups. Contrary to the methods trying to simplify the problem by dividing the output space into subspaces, Guo <cit.> divided the input image into smaller overlapping regions and fused CNN feature maps as a region ensemble network. In CNN-based methods, data augmentation is a common approach to help the network generalize better. Recently, Ge <cit.> applied data augmentation to the problem of hand pose recovery and showed a meaningful improvement in the results. Oberweger <cit.> extended the DeepPrior model of <cit.> and showed the effectiveness of a simple model trained with data augmentation. However, the aforementioned approaches use simple, rigid data augmentation like scaling, rotation, and translation, which may not represent the visual variability in terms of 3D articulated joints. Here, we propose a non-rigid data augmentation by deforming hand parameters and interpolating the point cloud. § GLOBAL HAND POSE RECOVERY FROM LOCAL ESTIMATIONS Given an input depth image ℐ, we refer to the 3D locations of n hand joints as the set J={ j∈ℝ^3}_1^n. We denote j^xyz and j^uvz as a given joint in the world coordinate system and after projection to the image plane, respectively. We define n=20 for the wrist, finger joints, and finger tips, following the hand model defined in <cit.>. We assume a hand is initially visible in the depth image, i.e., not occluded by other objects in the scene, although it may present self-occlusions, and has been properly detected beforehand (i.e., pixels belonging to the hand are already segmented <cit.>). We also assume intrinsic camera parameters are available. We refer to the global pose as the whole set J, while a local pose is a subset of J (e.g., the index finger joints). Considering hand pose recovery as a regression problem with the estimated pose as output, we propose a CNN-based tree-shaped architecture, starting from the whole hand depth image and subsequently branching the CNN blocks down to each local pose. We show the main components of the proposed approach in Fig. <ref>. In such a design, each network branch is specialized in one local pose, and related local poses share features in the earlier network layers.
Indeed, we break the global pose into a number of overlapping local poses and solve these simpler problems, reducing the non-linearity of the global pose. However, since local solutions can easily be trapped in local minima, we incorporate higher-order dependencies among all joints by fusing the last convolutional layer features of each branch and training the network for the global and local poses jointly. We cover this idea in Sec. <ref>. We also apply constraints based on the appearance and dynamics of the hand in a new, effective loss function, which is more robust against overfitting than a plain L2 loss while providing better generalization. This is explained in Sec. <ref>. §.§ Hand pose estimation architecture In CNNs, each filter generally extracts a feature from the previous layer, and by increasing the number of layers, a network is able to encode different inputs by growing the field of view (FoV). During training, features are learned to be activated through a non-linear function, for instance using rectified linear units (ReLU). The complexity and number of training data have a direct relation to the number of filters, layers, or the complexity of the architecture: an enormous number of filters or layers might cause overfitting, while a low number might lead to slow convergence and poor recognition rates. Interestingly, different architectures have been proposed to cope with these issues <cit.>. For example, in multi-task learning, different branching strategies are typically applied to solve subproblems <cit.>, and the different subproblems are solved jointly by sharing features. Similarly, we divide the global hand pose into simpler local poses (i.e., palm and fingers) and solve each local pose separately in a branch of a tree-shaped network. We show this architecture in Fig. <ref>. The proposed architecture has several advantages. Firstly, the most correlated fingers share features in earlier layers. By doing this, we allow the network to hierarchically learn more specific features for each finger with respect to its most correlated fingers. Secondly, the number of filters per finger can be determined adaptively. Thirdly, the estimation of the global pose is reduced to the estimation of simpler local poses, allowing the network to train at fast convergence rates. We define the amount of locality by the number of joints contributing to a local pose. Keeping such locality high (i.e., a lower number of joints), on the one hand, causes fingers to be easily confused with each other or detected in physically impossible locations. A low locality value (i.e., a higher number of joints), on the other hand, increases the complexity. Besides, local joints should share a similar motion pattern to keep complexity low. So, in the particular implementation in this paper, we assign to each local pose one finger plus the palm joints, leading to a 24-dimensional vector. Training the network only based on local poses omits information about inter-finger relations. Tompson <cit.> included a graphical model within the training process to formulate joint relationships. Li <cit.> used a dot product to compute similarities of the embedded spaces of a given pose and an estimated one in a structural learning strategy. Instead, we apply late fusion based on local features, letting the network learn the joint dependencies through fully connected layers for estimating the final global pose. The whole network is trained end-to-end jointly for all global and local poses given a constrained loss function. Network details.
Input images are pre-processed with a fixed-size cube centered on the hand point cloud and projected into the image plane. Subsequently, the resulting window is cropped and resized to a 192×192 fixed-size image using nearest-neighbor interpolation, with zero-mean depth. As intermediate layers, the network is composed of six branches, where each branch is associated with specific fingers as follows: two branches for the index and middle fingers, two branches for the ring and pinky fingers, one branch for the thumb, and one branch for the palm. For the palm branch, instead of performing direct regression on the palm joints, we regress the palm viewpoint, defined as the rotation (in terms of quaternions) between the global reference view and the palm view. As shown in the experimental results, more accurate and reliable optimization is then achieved, since the network is able to model interpolations among different views. As shown in Fig. <ref>, each convolutional block consists of a convolution layer with 3×3 filter kernels and a ReLU followed by a max-pooling, except for the last block. All pooling layers contain a 2×2 window. The last block contains a convolutional layer with 6×6 filter kernels, providing a feature vector. Fully connected layers are added to the end of each branch for both local and global pose learning. For the local pose, each branch has two hidden layers with 1024 neurons with a dropout layer in between. Similarly, for the global pose, at each branch the feature vector is followed by two hidden layers with 1024 neurons with a dropout layer in between. Then, the last hidden layers are concatenated and followed by a dropout and a hidden layer with 1024 neurons. Finally, the global and local output layers provide the estimation of the joints, with one neuron per joint and dimension.
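For readers who prefer code, a single branch can be sketched as follows. This is a hypothetical PyTorch re-implementation (the paper used MatConvNet): the 3×3 kernels, 2×2 pooling, final 6×6 convolution, and the 1024-unit hidden layers with dropout follow the description above, while the number of 3×3 blocks (five, inferred from the 192×192 input shrinking to 6×6) and the per-block filter counts are our assumptions.

import torch
import torch.nn as nn

class BranchCNN(nn.Module):
    """One branch of the tree-shaped network: 3x3 conv blocks with 2x2
    max-pooling (192 -> 6), a final 6x6 conv giving a feature vector,
    and a local-pose head (two 1024-unit layers, dropout in between)."""
    def __init__(self, n_local_joints=8, feat_dim=1024,
                 channels=(16, 32, 64, 128, 256)):   # filter counts assumed
        super().__init__()
        blocks, in_ch = [], 1                        # single-channel depth input
        for out_ch in channels:                      # five blocks: 192 -> 6
            blocks += [nn.Conv2d(in_ch, out_ch, 3, padding=1),
                       nn.ReLU(inplace=True), nn.MaxPool2d(2)]
            in_ch = out_ch
        blocks += [nn.Conv2d(in_ch, feat_dim, 6)]    # last block: 6x6 kernel, no pooling
        self.features = nn.Sequential(*blocks)
        self.local_head = nn.Sequential(
            nn.Linear(feat_dim, 1024), nn.ReLU(inplace=True), nn.Dropout(),
            nn.Linear(1024, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, 3 * n_local_joints))     # one neuron per joint and dimension

    def forward(self, depth):                        # depth: (B, 1, 192, 192)
        f = self.features(depth).flatten(1)          # per-branch feature vector
        return self.local_head(f), f                 # local pose and feature for fusion

In the full network, branches for related fingers would share their early blocks, and the returned feature vectors f would be concatenated and passed through the fusion layers that regress the global pose.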
§.§ Constraints as loss function In regression problems, the goal is to optimize the parameters such that a loss function between the estimated values of the network and the ground-truth values is minimized. Usually, in the training procedure, an L2 loss function plus a regularization term is optimized. However, it is generally known that, in an unbalanced dataset with outliers, L2-norm minimization can result in poor generalization and sensitivity to outliers, since equal weights are given to all training data <cit.>. Weight regularization is commonly used in deep learning as a way to avoid overfitting. However, it does not guarantee that the weight updates bypass local minima. Besides, a high weight decay causes low convergence rates. Belagiannis <cit.> proposed Tukey's biweight loss function for regression problems as an alternative to the L2 loss that is robust against outliers. We formulate the loss function as an L2 loss along with constraints applied to the hand joints regarding the hand dynamics and appearance, leading to more accurate results and less sensitivity to ground-truth noise. We define the loss function for one frame in the form of: L=λ_1 L_loc + λ_2 L_glo + λ_3 L_app + λ_4 L_dyn, where λ_i, i∈{1..4}, are factors to balance the loss components. L_loc, L_glo, L_app and L_dyn denote the losses for the estimated local pose, global pose, appearance, and hand dynamics, respectively. Next, each component is explained in detail. Let F^l∈ℝ^3 × m be the concatenation of the m estimated joints in each branch of the proposed network and G^l∈ℝ^3 × m be the corresponding ground-truth matrix. Note that m is not necessarily equal to n=20. F^g∈ℝ^3 × n and G^g∈ℝ^3 × n are the outputs of the embedded network for the estimated joints and the ground-truth, respectively. Then, we define the local and global losses as: L_loc=∑_i=1^3m (F^l_i-G^l_i)^2, L_glo=∑_i=1^3n (F_i^g-G_i^g)^2. A common problem in CNN-based methods for pose estimation is that in some situations the estimated pose does not properly fit the appearance. For instance, joints are placed in locations where there is no evidence of hand points, or in physically incorrect configurations <cit.>. In this paper, during training we penalize those joint estimations that do not fit the appearance or are physically impossible, and include such penalties in the loss function. We first assume that, rationally, joints must be located inside the hand area and have a depth value higher than the hand surface; besides, joints must present physically possible angles in the kinematic tree. Therefore, for a given joint j^xyz the inequality ℐ(j^u,j^v)-j^z<0 must hold, where ℐ(j^u,j^v) is the pixel value at location (j^u,j^v). To avoid violating the first condition (i.e., when a joint is located outside the hand area after projection to the image plane), we set the background with a cone function: 5√((u-0.5w)^2+(v-0.5h)^2)+ϕ, where w and h are the width and height of the image, and ϕ is a fixed value set to 100. The reason to use a cone function instead of a fixed large value is to avoid zero derivatives on the background. We use a hinge formulation to convert the inequality into a loss: L_app=∑_i=1^m max(0,ℐ(j_i^u,j_i^v)-j_i^z).
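Under the same hypothetical PyTorch setup as the branch sketch above, this appearance term could be written as follows. One simplification to note: the paper derives the image gradient at a joint from the hand surface normals (see the derivatives subsection below), whereas this sketch uses a bilinear grid_sample lookup as a simpler differentiable stand-in; the zero-valued-background assumption and the tensor names are also ours.

import torch
import torch.nn.functional as F

def cone_background(depth, phi=100.0):
    """Replace background zeros with the cone 5*sqrt((u-0.5w)^2+(v-0.5h)^2)+phi
    so the loss has non-zero gradients outside the hand."""
    b, _, h, w = depth.shape
    v, u = torch.meshgrid(torch.arange(h, dtype=depth.dtype, device=depth.device),
                          torch.arange(w, dtype=depth.dtype, device=depth.device),
                          indexing="ij")
    cone = 5.0 * torch.sqrt((u - 0.5 * w) ** 2 + (v - 0.5 * h) ** 2) + phi
    return torch.where(depth > 0, depth, cone.expand_as(depth))

def appearance_loss(depth, joints_uvz):
    """L_app = sum_i max(0, I(u_i, v_i) - z_i); depth: (B,1,H,W),
    joints_uvz: (B,m,3) with (u,v) in pixels and z in depth units."""
    b, _, h, w = depth.shape
    img = cone_background(depth)
    # normalize (u,v) to [-1,1] for a differentiable bilinear lookup
    grid = torch.stack([joints_uvz[..., 0] / (w - 1) * 2 - 1,
                        joints_uvz[..., 1] / (h - 1) * 2 - 1], dim=-1).unsqueeze(2)
    sampled = F.grid_sample(img, grid, align_corners=True).squeeze(1).squeeze(-1)  # (B,m)
    return torch.clamp(sampled - joints_uvz[..., 2], min=0).sum(dim=1).mean()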
We subsequently incorporate hand dynamics by means of the top-down strategy described in Algorithm 1. We assume all joints belonging to each finger (except the thumb) should be collinear or coplanar. The thumb has an extra non-coplanar form, and we do not consider it in the hand dynamics loss. A ground-truth finger state s_G∈{1..4} is assigned to each finger, computed by the conditions defined in Algorithm 1. Each finger has a ground-truth normal vector 𝐞_G, which is the finger direction for case 1 and the finger plane normal vector for the other cases. Therefore, we define four different losses, one of them triggered for each finger (as shown in Algorithm 1). Let A, B, C and D be the four joints belonging to a finger, starting at A as the root joint and ending at D as the fingertip. Then the dynamics loss is defined as: L_dyn=∑_i=1^4 Δ_i(A,B,C,D,s_G,𝐞_G), where i denotes a finger index. We now consider each case of Algorithm 1 in turn. We consider a collinear finger in case 1. A finger is collinear if: ‖B-A‖+‖C-B‖+‖D-C‖<‖D-A‖+κ, where κ is a threshold defining the amount of collinearity, set to 0.01‖D-A‖. To compute the loss for a collinear ground-truth finger, the following condition has to hold: ρ<cos(∠(AD, 𝐞_G))≤1, where ρ is a threshold. This condition has to be met for AB and AC as well. The cosine can be computed through the dot product. Therefore, using the hinge formulation, the loss is defined as: Δ_i(A,B,C,D,1,𝐞_G)= max(0,ρ-AB·𝐞_G/‖AB‖) + max(0,ρ-AC·𝐞_G/‖AC‖) + max(0,ρ-AD·𝐞_G/‖AD‖) + μ max(0,‖AB‖+‖BC‖+‖CD‖-1.01‖AD‖), where μ is a factor to balance the different components of the loss function. We consider a coplanar finger in cases 2, 3 and 4. We define a finger to be coplanar if the cross products of all subsets of three finger joints are parallel. Note that a collinear finger is necessarily coplanar. However, we exclude collinear fingers from this definition due to the cross product, as shown in Algorithm 1. For a ground-truth coplanar finger, such cross products must be parallel to the plane normal vector. Therefore, for given joints A, B and C, the following condition must hold: ρ<cos(∠(AB×BC,𝐞_G))≤1. Given that the ground-truth finger is coplanar of case 2, we compute the loss function as: Δ_i(A,B,C,D,2,𝐞_G)=max(0,ρ-(AB×BC)·𝐞_G/‖AB×BC‖) + max(0,ρ-(AC×CD)·𝐞_G/‖AC×CD‖). The loss functions for the other coplanar finger cases are computed in the same way. §.§ Loss function derivatives All components in Eq. <ref> are differentiable, thus we are able to use gradient-based optimization methods. In this section we explain the derivatives of the constraint loss function in Eq. <ref>. Derivatives of the remaining loss functions are computed through matrix calculus. We first define the derivative of L_app with respect to t∈{j_i^x,j_i^y,j_i^z} through: ∂ L_app/∂ t = 0 if ℐ(j_i^u,j_i^v)-j_i^z≤0, and ∂ℐ/∂ t - ∂ j_i^z/∂ t otherwise. In the following we consider only the positive condition of Eq. <ref>. Besides, we omit the index i (which denotes the i-th joint) for ease of reading. The depth image ℐ is a discrete multi-variable function of j^u and j^v, where j^u is a multi-variable function of j^x and j^z, and j^v is a multi-variable function of j^y and j^z. Consequently, the total derivative of the depth image can be computed by the chain rule through: dℐ/dt = (∂ℐ/∂ j^u)(dj^u/dt) + (∂ℐ/∂ j^v)(dj^v/dt), dj^u/dt = (∂ j^u/∂ j^x)(dj^x/dt) + (∂ j^u/∂ j^z)(dj^z/dt), dj^v/dt = (∂ j^v/∂ j^y)(dj^y/dt) + (∂ j^v/∂ j^z)(dj^z/dt). Next, we present the components of the j^u derivative in detail[Derivatives belonging to j^v are computed in the same way as for j^u]. The depth image ℐ is a function of the hand surface. However, the hand surface given by the depth camera may be noisy and not differentiable at some points. To cope with this problem, we estimate the depth image derivatives by applying hand surface normal vectors. Let 𝐬 be the surface normal vector for a given joint. Then, the derivative of ℐ with respect to the u axis is given by the tangent vectors through: ∂ℐ/∂ j^u = 𝐬^x/𝐬^z. As mentioned, j^uvz is the projection of the estimated joint j^xyz from the world coordinate system to the image plane. Note that joints have zero mean and j^uvz is extracted after the image has been cropped and resized. Let f_x, p_x, M^xyz and M^uvz be the camera focal length and image center for the x axis, the world-coordinate hand point cloud center, and its projection to the image plane, respectively. Then j^u is computed as: j^u(j^x,j^z)=( f_x(j^x+M^x)/(j^z+M^z)+p_x-M^u) scale_x+w/2, scale_x=wM^z/(cf_x), where c is the cube size used around the hand point cloud to crop the hand image. Using this formulation, the derivative of j^u can be easily computed and replaced in Eq. <ref>. § EXPERIMENTS In this section we evaluate our approach on two real-world datasets, NYU <cit.> and MSRA <cit.>, and one synthetic dataset, SyntheticHand <cit.>. The NYU dataset has around 73K annotated frames as training data (single subject) and 8K frames as test data (two subjects). Each frame has been captured from 3 different viewpoints and the ground truth is almost accurate. The MSRA dataset has 76K frames captured from 9 subjects, each in 17 pose categories. This dataset does not provide an explicit training/test split, and a subject-exclusive 9-fold cross validation is used to train and evaluate on this dataset. The MSRA dataset has a smaller image resolution, less pose diversity, and less accurate ground truth compared to the NYU dataset. The SyntheticHand dataset has over 700K training and 8K test frames consisting of a single synthetic subject performing random poses from all viewpoints, thus being useful to analyze our methodology under occlusions.
All three datasets provide at least 20 hand joints in common. However, the NYU dataset has 16 extra joints. We evaluate our approach using two metrics: average distance error in mm and success rate error <cit.>. Next, we detail the method parameters and evaluate our approach both quantitatively and qualitatively in comparison to state-of-the-art alternatives. §.§ Training We use the MatConvNet library <cit.> on a server with a GeForce GTX Titan X GPU with 12 GB of memory. We optimize the network using the stochastic gradient descent (SGD) algorithm. We report the hyper-parameters used for the NYU dataset. We set the batch size, learning rate, weight decay and momentum to 50, 0.5e-6, 0.0005 and 0.9, respectively. Our approach converges in almost 6 epochs, after which we reduce the learning rate by a factor of 10 for two more epochs. Overall, training takes two days on the original NYU dataset, while testing runs at 50 fps. Loss function parameters tuning. We set a low value for the parameter μ in Eq. <ref>, since it behaves like a regularizer and is not connected to the ground truth. L_dyn is mainly a summation of cosine functions, while L_app is in millimeters. Therefore we set λ_4 higher than λ_3 to balance the cosine space with millimeters. Finally, we set the parameters λ_1, λ_2, λ_3, λ_4 and μ experimentally to 4, 4, 3, 20 and 0.0005, respectively. We show the derivatives of the appearance and dynamics loss functions for a number of joints in the first five epochs in Fig. 3, as well as qualitative images of estimated joints.
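Mapped onto the same hypothetical PyTorch setup used in the earlier sketches (the paper itself used MatConvNet), these settings amount to a few lines; the StepLR schedule is our reading of "reducing the learning rate by a factor of 10 for two more epochs":

import torch

model = BranchCNN()                                  # stand-in for the full fused network
optimizer = torch.optim.SGD(model.parameters(), lr=0.5e-6,
                            momentum=0.9, weight_decay=0.0005)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=6, gamma=0.1)

lambdas = {"loc": 4.0, "glo": 4.0, "app": 3.0, "dyn": 20.0}  # experimentally set weights
# per-frame objective: L = 4*L_loc + 4*L_glo + 3*L_app + 20*L_dyn, with mu = 0.0005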
§.§ Ablation study In this section we study the different components of the proposed architecture trained on the NYU dataset. We denote each component by a number. Locality. Locality refers to the number of joints in the network output. In the first case, we analyze the hierarchical network trained with just one finger in each branch and without constraints and fusion network (so-called 1:local). This network has a high locality value. As one can expect, this network can easily overfit on the training data and exchange estimations of similar fingers. We show a significant improvement by decreasing locality, i.e., by including palm joints in each branch (so-called 2:1+palm). Palm joints lie in a nearly planar space and thus do not add high non-linearity to the output of each branch, while helping finger localization. We compare these methods in Fig. <ref> (red vs. green lines). Constraints. We train method 2:1+palm including the constraints in the loss function L (Eq. <ref>), without L_glo at this stage (so-called 3:2+constraint). We still do not explicitly model any relationship among fingers in the output space, but let the network learn each finger's joints with respect to the hand surface and finger dynamics. In Fig. <ref> we show the effectiveness of this strategy (magenta line) against method 2:1+palm. We also analyze the effect of the constraints on the training process in Fig. <ref>. As can be seen, by applying the proposed constraints, method 3:2+constraint is more robust against overfitting than method 1. The validation error of method 3 does not significantly change from epoch 7 to 15. Comparing both methods at epoch 20, method 1 has a lower training error, while its validation error is almost 1.5 times the validation error of method 3. Branching strategy vs. single-channel architecture. As baselines, we created two single-channel networks with 6 convolutional layers, as shown in Fig. <ref>. The output of the first network (so-called single-channel network) is the 3D locations of the full set of joints. In this architecture, the capacity of the convolutional layers is kept similar to the whole branching network. This network is trained with the loss function L without L_loc. The output of the second network (so-called FC-branching network) is similar to method 3:2+constraint. The capacity of the convolutional layers in this architecture is similar to one branch of the tree-structured network. The branching in this network is applied at the FC layers. We train this network with the same loss as method 3:2+constraint. We train both networks with the same hyper-parameters introduced in Sec. <ref>. As one can see in Fig. <ref>, the single-channel network (dashed magenta line) performs worse than method 3:2+constraint, showing the effectiveness of the tree-structured network. This means that, regardless of the capacity of the network, in a single-channel network backpropagation of the loss gradients is not able to train the network filters to map the input image to a highly non-linear space in an optimal and generalizable way. This is even worse for the FC-branching network (dashed dark brown line). Palm viewpoint vs. palm joints regression. We evaluate palm joints vs. palm viewpoint regression in terms of success rate error in Fig. <ref>. The palm viewpoint regressor gives a rotation in terms of quaternions. We convert the quaternions to a rotation matrix and use it to transform a predefined reference palm example. As can be seen in the figure, palm viewpoint regression significantly reduces the palm joints error. Global vs. local pose. We add the fusion network to method 3:2+constraint to model correlations among different local poses in an explicit way (so-called 4:3+viewpoint+fusion). We include the viewpoint regression features in the fusion as well. We illustrate the results in Fig. <ref> (dashed blue line). Compared to method 3:2+constraint, method 4:3+viewpoint+fusion improves performance for error thresholds below 30mm. Per joint mean error. We also illustrate the per joint mean error in Fig. <ref>. From the figure, as expected, a very local solution (method 1) performs the worst among the baselines. Comparing methods 2 and 3:2+constraint in average error shows the benefit of applying the constraints as loss, as well. By including viewpoint features in the fusion network, the palm joints mean error was considerably reduced by method 4:3+viewpoint+fusion. Although method 4:3+viewpoint+fusion performs better for the pinky and ring fingertips, it does not achieve the best results for the index and thumb fingertips. Data augmentation. Data augmentation is a common approach to boost CNN models with small deformations of the images. Commonly used data augmentation approaches are rotation, scaling, stretching, and adding random noise to pixels. Such approaches are mainly rigid (rotation and scaling) or unrealistic (stretching). Here, we propose a realistic non-rigid data augmentation. As the first step, we remove redundant data by checking the ground-truth joints. In this sense, an image is redundant if it has a high similarity to at least one other image in the training set. Such similarity is defined by the maximum Euclidean distance Ψ among corresponding joints: two images are similar if Ψ is below a threshold. We used a threshold of 10 mm for this task. Our data augmentation consists of in-plane rotation, changing the palm and finger sizes, and deforming finger poses. We show some generated images in Fig. <ref>. In the following we explain the details of the data augmentation.
The main idea in non-rigid hand deformation is to deform the ground-truth hand joints and interpolate the point cloud based on the new joints. We use the thin plate spline (TPS) <cit.> as a standard interpolation technique to deform the point cloud. However, to avoid extrapolation problems and unrealistic warping, we add some auxiliary points to the set of joints. We show some possible auxiliary points in Fig. <ref>: we mainly add points around the wrist and thumb. We observed unrealistic deformations around the thumb, and by adding three fixed points we avoided extrapolation problems. For the wrist, we do not want to deform points of the lower arm; fixed auxiliary points around the wrist constrain the space, avoiding unrealistic warping. A first possible shape deformation is changing the hand scale. However, a simple scaling does not guarantee generalization to unseen subjects. Instead, we change the size of the fingers and palm. This can be seen in the 4th row of Fig. <ref>. As the first step, we compute the hand kinematic parameters in a hand coordinate system. The hand coordinate system is defined by the palm joints such that, in a fairly open hand, the thumb defines the x direction, the other fingers define the y direction, and the z direction is perpendicular to the palm plane. The palm can then be stretched in the x or y direction; we stretch each direction by a random factor. Keeping the kinematic parameters fixed, we are able to randomly modify the finger lengths and reconstruct new joints for each finger. It is also possible to slightly modify the kinematic parameters and reconstruct the joints in a new pose. However, we keep the kinematic parameters close to their original values to avoid unrealistic point cloud deformations and possible big holes in the depth image. Finally, we apply morphological operations to fill small gaps. [Code will be publicly available after publication.]
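A minimal sketch of this warping step in Python could look as follows; it uses SciPy's RBFInterpolator with a thin-plate-spline kernel as a stand-in for whatever TPS implementation the authors used, and the function and variable names, as well as the toy magnitudes in the usage lines, are hypothetical:

import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_warp_cloud(points, src_joints, dst_joints, aux_points=None):
    """Warp a hand point cloud with a thin-plate spline fitted on the
    joint displacements; aux_points are fixed anchors (e.g. around the
    wrist and thumb) appended with zero displacement to tame extrapolation."""
    src, dst = src_joints, dst_joints
    if aux_points is not None:                     # anchors stay in place
        src = np.vstack([src, aux_points])
        dst = np.vstack([dst, aux_points])
    tps = RBFInterpolator(src, dst - src, kernel="thin_plate_spline")
    return points + tps(points)                    # displaced point cloud

# toy usage: 3D cloud, 20 original joints, slightly perturbed target joints
cloud = np.random.rand(5000, 3)
src_j = np.random.rand(20, 3)
dst_j = src_j + 0.01 * np.random.randn(20, 3)
warped = tps_warp_cloud(cloud, src_j, dst_j)

Fitting the spline on joint displacements, with the fixed auxiliary anchors given zero displacement, mirrors the role those anchors play in the text above in taming extrapolation around the wrist and thumb.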
On the NYU dataset, around 60K images remained after removing redundant images from all 218K samples of the training set (including all cameras). We then generated two sets of augmented images of around 780K and 1500K samples. We use random scaling factors in the range [0.85,1.05] for the palm and fingers. The kinematic parameters are changed by adding random angles in the range [-7.5,7.5] degrees. The only difference between the generated sets is the in-plane rotation range: the first and second sets have in-plane rotations in the ranges [-30,30] and [-90,90] degrees, respectively. We compare the results on both generated sets in Fig. <ref> (brown and dark green lines). We train method 4:3+viewpoint+fusion on these two new sets, so-called method 5:4+aug1 and method 6:4+aug2, respectively. One can see that the model trained on the set with more samples and a wider in-plane rotation range (method 6:4+aug2) generalizes better to the test set. Also, a significant improvement is achieved compared to the original data (method 4:3+viewpoint+fusion). We observed that the wrist joint has the maximum error in 20% of the cases in method 6:4+aug2. Therefore, we replaced the estimated palm joints of method 6:4+aug2 with the estimated palm joints from the viewpoint regressor, which slightly improved performance (final method 7:6+palm). We also illustrate the per joint mean error of method 7:6+palm in Fig. <ref>. Fingertips have the highest error among the joints. Data augmentation helps to significantly improve the fingertip estimation, as we can see in Fig. <ref>, comparing different baselines qualitatively. §.§ Comparison with state of the art We report the performance of our final model compared to state-of-the-art data-driven approaches like <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.> on the NYU dataset. On the MSRA dataset we compare to <cit.>, <cit.>, <cit.>, <cit.> and <cit.>. Finally, we compare to <cit.> and <cit.> on the SyntheticHand dataset. NYU dataset. The works mentioned in the comparison use 14 joints (as proposed in <cit.>) on the NYU dataset. For a fair comparison on this dataset, we take the 11 joints most similar to <cit.> out of our 20 used joints. We show the maximum error success rate results in Fig. <ref>. As one can see, we outperform state-of-the-art results. However, <cit.> and <cit.> perform slightly better for error thresholds lower than 13mm. We also illustrate the average error success rate in Fig. <ref>. This shows our method performs well on average for the majority of frames, i.e., less than 10mm error for 60% of the test set. We compare to the state of the art regarding the overall mean error in Table <ref>. All these results show a significant improvement using data augmentation. MSRA dataset. We applied the introduced non-rigid hand augmentation in the same way as for the NYU dataset. However, we observed a divergence during training. A possible reason could be the accuracy of the ground-truth annotations in the MSRA dataset. Therefore, we applied standard augmentation techniques such as random scaling (in the range [0.9,1.05]) and rotation (in the range [-90,90] degrees). We show the maximum error success rate results in Fig. <ref>. As can be seen, our method slightly outperforms the methods in the comparison for error thresholds between 13mm and 40mm. Although Sun <cit.> has a higher number of good frames for errors lower than 11mm, it performs the worst for higher error thresholds. Without using data augmentation, our method (dashed blue line) performs slightly worse than <cit.>. Note that <cit.> uses a pre-alignment of the samples given the hand point cloud eigenvectors, which can be seen as a kind of augmentation. We also show the average error in Table <ref>. On average, our method with standard augmentation performs comparably to <cit.> on this dataset. Note that <cit.> uses random translation in the augmentation as well. We show some qualitative results in Fig. <ref>. As one can see, the ground-truth annotations are not accurate in some cases, more specifically for the thumb. SyntheticHand dataset. We use the original training set without augmentation to train our model on this dataset. Our model converges in 7 epochs. The mean error success rate is shown in Fig. <ref>. As can be seen, our method performs quite well on this dataset, even for complex poses and viewpoints. Some qualitative results are shown in Fig. <ref>. The overall average error on this dataset is 3.94mm. § CONCLUSIONS We proposed a novel hierarchical tree-like structured CNN for recovering hand poses from depth maps. In this structure, branches are trained to become specialized in predefined subsets of the hand joints. We fused the learned local features in a network to model higher-order dependencies among joints. The network is trained end-to-end. By including a new loss function incorporating appearance and physical constraints on feasible hand motion and deformation, we found our network helps to increase the precision of the final hand pose estimations on quite challenging datasets.
In particular, we found the fusion network can help to better localize joints for easier hand configurations, while it behaves similarly to a local solution for more complex cases. We improved the palm joints by applying a viewpoint regressor and by fusing its learned features into the global pose. Finally, we introduced a non-rigid hand augmentation technique that deforms the original hands in terms of shape and pose, helping to generalize better to unseen data. As a result, we significantly improved the estimations on the original NYU dataset by 4.6mm on average. As future work, we will consider more complex data augmentation techniques to cope with noise in the depth image. Realistic data can be combined with synthetic data as well. In this sense, we will work on filling gaps realistically when more complex pose deformations are applied in the augmentation. § ACKNOWLEDGEMENTS This work has been partially supported by the Spanish projects TIN2015-65464-R and TIN2016-74946-P (MINECO/FEDER, UE) and CERCA Programme / Generalitat de Catalunya. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
http://arxiv.org/abs/1705.09606v2
{ "authors": [ "Meysam Madadi", "Sergio Escalera", "Xavier Baro", "Jordi Gonzalez" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170526145544", "title": "End-to-end Global to Local CNN Learning for Hand Pose Recovery in Depth Data" }
Abnormality Detection and Localization in Chest X-Rays using Deep Convolutional Neural Networks Mohammad Tariqul Islam^1, Md Abdul Aowal^1, Ahmed Tahseen Minhaz^1, Khalid Ashraf^2 ^1 Semion, House 167, Road 3, Mohakhali DOHS, Dhaka, Bangladesh. ^2 Semion, 1811 Francisco St., St 2, Berkeley, CA 94703, USA. {mhdtariqul, aowal.eee, tahseenminhaz92}@gmail.com, {khalid}@semion.ai December 30, 2023 ==================================================================================================================================================

Bound states in the continuum (BICs) are peculiar solutions of wave equations, which are spatially bound and spectrally discrete with an infinite lifetime, equivalently an infinite quality factor (Q-factor), at frequencies within the continuum of unbound modes. The concept of the BIC was first proposed as a solution to the Schrödinger equation with a complex artificial potential <cit.>. Since then, many different types of BICs have been reported in various physical systems, including quantum <cit.>, acoustic <cit.>, water <cit.>, and photonic systems <cit.>. Recently, BICs in photonic crystal (PhC) slabs have attracted substantial attention as a platform for studying interesting phenomena, e.g., topological charges, as well as a new way of confining light in the surface-normal direction (z-direction), allowing for novel designs of photonic devices <cit.>. The robustness of BICs in PhC slabs against structure variations, resulting from the conservation of topological charges, is an important advantage in experimentally implementing BICs <cit.>. BICs in PhC slabs occur above the light line, i.e., in the leaky regime of the resonance frequency ω versus in-plane wavevector 𝐤 relation (frequency dispersion), when specific conditions are met. The conditions are that the leaky modes in the slab have a symmetry mismatching that of the free-space modes of the surrounding materials (symmetry-protected BICs), or that multiple leaky modes destructively interfere with each other in the far field (non-symmetry-protected BICs) <cit.>. These conditions are ideally valid for an infinitely periodic structure at a specific in-plane wavevector 𝐤_BIC, which means the Q-factor in the surface-normal direction Q_⊥ is high only in the vicinity of 𝐤_BIC. The Fourier transform of a BIC field profile (k-space mode profile), which extends over an infinite PhC slab, is a delta function at 𝐤_BIC. A motivating question of this work is what happens to the BIC as the spatial extension of a PhC slab is reduced to a few-unit-cell structure. In such a finite structure, as illustrated in Fig. <ref>(a), Q_⊥ is no longer infinite due to the finite-width k-space mode profile. The in-plane loss via the terminations can be enormous due to the short propagation time over a few unit cells, leading to a very small in-plane Q-factor Q_∥, which makes the entire Q-factor considerably small, since 1/Q=1/Q_⊥+1/Q_∥.
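As an illustrative numerical example (with values assumed purely for illustration), suppose Q_⊥ = 10^6 but Q_∥ = 10^3 for a few-unit-cell structure; then Q = (1/Q_⊥ + 1/Q_∥)^-1 = (10^-6 + 10^-3)^-1 ≈ 999 ≈ Q_∥, i.e., the total Q-factor collapses to the much smaller in-plane value, so suppressing the in-plane loss is the decisive issue in a compact cavity.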
In this Letter, we show that a set of BICs can turn into quasi-BICs with a fairly high Q-factor even for two- or three-unit-cell structures, as shown in Fig. <ref>(b). These BICs feature a strong resonance in the individual unit cells, which results in an ultraflat frequency dispersion and a very slow in-plane group velocity, abruptly reducing the in-plane loss. The origin of this phenomenon is different from that of slow light in conventional PhC slab modes below the light line, and is discussed in analogy with the tight binding of individual atoms in semiconductors. Furthermore, we propose a method to match the k-space mode profile of a quasi-BIC with the Q-factor dispersion (Q-factor versus in-plane wavevector) of a BIC defined for an infinite PhC slab, which reduces the surface-normal loss by a few orders of magnitude. These results may provide insight into designing compact BIC platforms for experimental studies, as well as enable new high-Q microcavities for various applications, which operate above the light line without a surrounding mirror. These quasi-BIC based microcavities could be an alternative to defect-based PhC microcavities, which work below the light line, i.e., rely on total internal reflection. In this study, 1D PhC slabs of bars, also known as high-contrast gratings <cit.>, are used for more transparent understanding. However, we expect that all the proposed concepts are applicable to 2D PhC slabs of rods or holes. The structure dimensions are scaled for a resonance at the communication wavelength of 1550 nm. It is assumed that the PhC slab is made of Si with a refractive index n=3.48 and suspended in air, and that the Si bars are infinitely long (2D simulation, and 𝐤_BIC=k_BICx̂). 3D structures are also considered in Figs. <ref>(a) to <ref>(e) and S4(a) to S4(d). Details of the numerical technique are provided in Supplement 1. Firstly, let us discuss the relation between the ultraflat dispersion of an infinite PhC slab and the strong individual resonances in each unit cell. The single-unit-cell Q-factor of a subwavelength structure typically ranges from 1 to 10 <cit.>. In contrast, the single-unit-cell resonance associated with the ultraflat-dispersion BICs exhibits a Q-factor one or two orders of magnitude larger, the origin of which is discussed below. To illustrate the correlation between these BICs and the strong resonance of a single unit cell, four different PhC structures (PhC1 to PhC4) are compared. As shown in Figs. <ref>(a) and <ref>(b), the PhC structures are designed to have almost identical bandedge wavelengths at 1553 nm and non-symmetry-protected BICs peaked at k_BIC≈0.5 μm^-1, while having different dispersion curvatures ∂^2 ω/∂ k_x^2. As shown in Fig. <ref>(c), for all PhC structures, the mode profiles of the single-unit-cell resonances are identical to those of the BICs occurring in infinite unit cells. The resonance wavelengths of the single unit cells are also close to those of the BICs. However, there is a trend that a single-unit-cell resonance with a higher Q-factor has a resonance wavelength closer to the BIC wavelength. Furthermore, if a single-unit-cell resonance possesses a higher Q-factor, the corresponding PhC structure has a flatter wavelength dispersion, as shown in Fig. <ref>(d). This can be explained as follows. Each unit cell works as an individual resonator with a characteristic Q-factor. As unit cells are brought closer to one another, a band of resonance wavelengths is formed by virtue of mutual interactions, while the properties of the individual resonances, such as the field profile and resonance wavelength, are kept. This is analogous to the electronic band structure in the tight-binding model of semiconductors. The interaction strength, which is inversely proportional to the resonance Q-factor, determines the width of the band. Thus, the higher the resonance Q-factor, the smaller the bandwidth, leading to a flatter band dispersion.
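The analogy can be made concrete with the textbook one-dimensional nearest-neighbor tight-binding dispersion (a schematic illustration rather than a result of this work): ω(k) ≈ ω_0 - 2J cos(ka), where ω_0 is the single-cell resonance frequency, J the inter-cell coupling strength, and a the period. The bandwidth is 4J, so a weaker coupling J, here inversely related to the single-cell Q-factor, directly yields a flatter band and hence a smaller in-plane group velocity ∂ω/∂ k = 2Ja sin(ka).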
The enormous Q-factor increase at a specific BIC wavelength as the number of unit cells approaches infinity can be explained as a consequence of the successive resonance-trapping phenomenon <cit.>. It can be concluded that the BICs with an ultraflat dispersion are related both to the properties of an individual unit cell and to the collective effect of many unit cells. Since the properties of the individual resonances are strongly maintained in an infinite structure, the frequency and Q-factor dispersion curves obtained for infinite structures can be applied to a finite structure as approximations. A more rigorous proof will be provided elsewhere. The high-Q resonance of the individual unit cell is attributed to the destructive interference of two waveguide modes in the single unit cell. Each unit cell is seen as a waveguide along the z-direction with terminations, as shown in Fig. <ref>(e), which supports a few guided modes WG_i. At the terminations, each waveguide mode not only reflects back into itself but also couples to the other modes as well as to the surrounding continuum of modes, as shown schematically in Fig. <ref>(e). Resonances occur in cases where the Fabry-Perot phase condition is satisfied for each mode, when the coupling with other waveguide modes is typically weak <cit.>. However, for a specific thickness and width, two waveguide modes with the same spatial symmetry, e.g., WG_1 and WG_3, can couple strongly together at the interface, while their couplings to the continuum of modes destructively interfere. This results in energy building up inside the waveguide, i.e., a strong resonance of the single unit cell. Now, we can investigate the impact of an ultraflat frequency dispersion on the in-plane loss of a microcavity formed by a few unit cells of a PhC slab, as depicted in Fig. <ref>(a). The in-plane loss and the corresponding Q-factor, Q_∥, depend on the group velocities of the wavevector components within the k-space mode profile (the Fourier transform of the coordinate-space mode profile) <cit.> and on the cavity termination at its boundaries <cit.>. For microcavities a few microns long, this in-plane loss is considerable due to the short length. The ultraflat-dispersion BIC of this paper allows for very small group velocities over all wavevector components, significantly reducing the in-plane loss. Figures <ref>(a) and <ref>(b) compare the mode profiles of two microcavities based on PhC2 and PhC4. The PhC2-based microcavity with the smaller dispersion curvature has a much smaller in-plane loss than the PhC4-based one, while both microcavities have negligible out-of-plane loss. This is also shown in Fig. <ref>(c), which presents the Q-factors of both microcavities as a function of the number of unit cells, N. Finally, it is noteworthy that the in-plane Q-factor can be further enhanced by introducing a heterostructure, which has been widely employed in other forms of PhC microcavities <cit.>. Let us consider the mechanism of out-of-plane loss in a microcavity formed by a few unit cells, and a method to minimize it based on k-space engineering. In an infinite PhC slab, the k-space mode profile of a BIC is a delta function centered at k_∥=k_BIC, whereas the k-space profile of a resonance in a finite structure has a considerable width.
The k-space components away from k_BIC couple to the radiation modes above the light line. Thus, the resultant out-of-plane loss and the corresponding Q-factor, Q_⊥, depend on the overlap of the k-space mode profile of the microcavity with the Q-factor dispersion of the infinite PhC slab. Given a k-space mode profile, whose width is inversely proportional to the microcavity lateral length L=NΛ, the shape of the Q-factor dispersion can be engineered to give a better overlap by controlling the positions of the paired non-symmetry-protected BICs, and furthermore by combining them with a symmetry-protected BIC. The position of the paired non-symmetry-protected BICs, k_BIC, can be engineered by changing the PhC parameters, e.g., period, filling ratio, thickness, and refractive index <cit.>. For instance, Fig. <ref>(a) illustrates the Q-factor dispersions of two PhC slabs, both based on PhC2 but with slightly different thicknesses, as well as the k-space profile of the fundamental mode for a microcavity with 6 unit cells of PhC2. If the BIC positions ± k_BIC are too close to k_x=0 (blue dashed), most of the k-space mode profile overlaps the low-Q region of the Q-factor dispersion, leading to a considerable out-of-plane loss. On the other hand, if the BIC positions are far from k_x=0 (blue solid), the Q-factor value becomes low around k_x=0, where most of the k-space mode-profile components reside. This results in a poor overlap and consequently a huge out-of-plane loss and a small Q_⊥. Thus, there is an optimum value of k_BIC that maximizes Q_⊥ for a given lateral size of the microcavity, as shown in Fig. <ref>(b). Furthermore, by using cavity modes with different shapes of mode profiles, a better overlap with the Q-factor dispersion can be obtained. This is illustrated in Fig. <ref>(c), in which the Q-factor dispersion of an infinite PhC and the k-space mode profiles of the fundamental (red solid) and 1st-higher-order (red dashed) modes of a cavity with N=9 unit cells of that PhC are shown. A larger Q_⊥ is expected for the latter due to the better alignment of its k-space mode profile with the Q-factor dispersion curve. Figure <ref>(d) shows that for N≥8 the 1st-higher-order mode indeed has higher Q-factors, as expected. For N<8, it has more in-plane loss than the fundamental mode, which leads to smaller Q-factors. A symmetry-protected BIC can be combined with paired non-symmetry-protected BICs while keeping the frequency dispersion ultraflat, as shown in Fig. S2. The two non-symmetry-protected BICs at two off-Γ points broaden the high-Q region, while the symmetry-protected BIC at the Γ-point elevates the Q-factor value in the middle of the high-Q region. It should be noted in Fig. S2(b) (see Supplement 1) that the odd symmetry of the fundamental mode makes its k-space mode profile have a node at k_x=0. This leads to an excellent matching of the k-space profile with the Q-factor dispersion. For example, the high-Q cavity of 3 unit cells shown in Fig. <ref>(b) is based on this approach. Another example with 2 unit cells is also illustrated in Fig. S2(b) (see Supplement 1). The proposed concepts can also be employed along the other in-plane direction (y) to make a compact 3D microcavity, as depicted in Fig. <ref>(a). As shown in Fig. <ref>(b), four non-symmetry-protected and one symmetry-protected BICs form a large high-Q region in the vicinity of the Γ-point, giving rise to a small out-of-plane loss. Furthermore, the ultraflat dispersion surface results in a small in-plane loss.
Altogether, these features lead to a Q-factor of 1.8×10^4 for a 3D cavity of 4 unit cells, whose footprint is less than 20 μm^2. The fields are well confined in all three directions, as shown in Figs. <ref>(c) to <ref>(e). Another 3D microcavity example, based only on non-symmetry-protected BICs, is presented in Fig. S3 (c.f. Supplement 1). In addition, the results in Figs. S4(a) and S4(b) (c.f. Supplement 1) show that high-Q-factor microcavities based on BICs can be formed for TM polarization as well as for elliptical bars. Though not optimized, these results show that the proposed concepts can be generalized. The ultracompact footprint, high field intensity in the air or dielectric, and access to free-space modes of the proposed quasi-BICs may open up novel application opportunities for lasers, sensors, and switches. For instance, the high field intensity in the air for the TM-polarization microcavity [c.f. Fig. <ref>(f)] is highly desirable for the sensing of gas molecules or bacteria <cit.>. The strong field at the dielectric surface is ideal for surface-enhanced Raman spectroscopy <cit.>, while the strong field enhancement inside the dielectric can be used for strong nonlinear effects, e.g., four-wave mixing <cit.>. Ultrasmall lasers can also be realized by integrating a gain material inside the microcavity <cit.> or by hybrid integration of dye-doped organic materials surrounding the cavity <cit.>. We emphasize here that the very slow group velocities of the BICs of this study are fundamentally different from the slow light in conventional photonic crystals <cit.>. The former originate from the single-cell resonance and can thus be effective even for two or three unit cells, whereas the latter results mainly from the periodic interactions among unit cells and is observed only for many unit cells. Furthermore, it is also noted that the effect of the BIC phenomenon, defined for an infinite structure, can remain considerably strong in a finite structure with a few unit cells by matching the reciprocal-space properties. The resulting possibility of realizing quasi-BICs in a very compact platform may ease the experimental studies of BICs, as well as enable novel functionalities for important applications such as high-Q microcavities.

Funding. Innovation Fund Denmark through the HOT project (Grant No. 5106-00013B).

Acknowledgments. The authors thank Prof. Andrei Lavrinenko and Prof. J. Mørk for helpful discussions.

See Supplement 1 for supporting content.
for the M4 Mission Selection Review of ESA's Cosmic Vision Program

Mauro Focardi, INAF - Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, 50125 Firenze, ITALIA; Tel.: +39-055-275 5213; [email protected]
E. Pace, Maurizio Pancrazzi, Vladimiro Noce, Università degli Studi di Firenze - Dipartimento di Fisica e Astronomia - 50125 Firenze, ITALIA;
Anna Maria Di Giorgio, Maria Farina, Stefano Pezzuto, INAF - Istituto di Astrofisica e Planetologia Spaziali - 00133 Roma, ITALIA;
Joseph Colomé Ferrer, Ignasi Ribas, Carles Sierra Roig, Louis Gesa Bote, Juan Carlos Morales, ICE - Institut de Ciències de l'Espai - 08193 Barcelona, ESPAÑA;
Jerome Amiaux, Christophe Cara, Jean Louis Augures, CEA - Commissariat à l'Energie Atomique - 91191 Saclay, FRANCE;
Martin Frericks, Frans Zwart, SRON - Netherlands Institute for Space Research - 3584 CA Utrecht, THE NETHERLANDS;
Enzo Pascale, Università degli Studi di Roma "La Sapienza" - 00185 Roma, ITALIA;
Gianluca Morgante, INAF - Istituto di Astrofisica Spaziale e Fisica Cosmica - 40129 Bologna, ITALIA;
Vania Da Deppo, CNR IFN LUXOR - Istituto di Fotonica e Nanotecnologie - 35131 Padova, ITALIA;
Georgia Bishop, Kevin Middleton, Paul Eccleston, RAL Space - Rutherford Appleton Laboratory - OX11 0QX Harwell Oxford, UK;
Giuseppina Micela, INAF - Osservatorio Astronomico di Palermo - 90134 Palermo, ITALIA;
Giovanna Tinetti, UCL - University College London - WC1E 6BT London, UK.

The ARIEL Instrument Control Unit design
M. Focardi, E. Pace, M. Farina, A. M. Di Giorgio, J. Colomé Ferrer, I. Ribas, C. Sierra Roig, L. Gesa Bote, J. C. Morales, J. Amiaux, C. Cara, J. L. Augures, E. Pascale, G. Morgante, V. Da Deppo, M. Pancrazzi, V. Noce, S. Pezzuto, M. Frericks, F. Zwart, G. Bishop, K. Middleton, P. Eccleston, G. Micela, G. Tinetti
Received: date / Accepted: date

The Atmospheric Remote-sensing Infrared Exoplanet Large-survey mission (ARIEL) <cit.> is one of the three present candidates for the ESA M4 (the fourth medium mission) launch opportunity. The proposed Payload <cit.>, <cit.>, <cit.> will perform a large unbiased spectroscopic survey from space concerning the nature of exoplanet atmospheres and interiors, to determine the key factors affecting the formation and evolution of planetary systems. ARIEL will observe a large number (> 500) of warm and hot transiting gas giants, Neptunes and super-Earths around a wide range of host star types, targeting planets hotter than 600 K to take advantage of their well-mixed atmospheres. It will exploit primary- and secondary-transit spectroscopy in the 1.2-8 μm spectral range and broad-band photometry in the optical and Near IR (NIR). The main instrument of the ARIEL Payload is the IR Spectrometer (AIRS) <cit.>, providing low-resolution spectroscopy in two IR channels: Channel 0 (CH_0) for the 1.95-3.90 μm band and Channel 1 (CH_1) for the 3.90-7.80 μm range. It is located at the intermediate focal plane of the telescope <cit.>, <cit.>, <cit.> and common optical system, and it hosts two IR sensors and two cold front-end electronics (CFEE) for detector readout, a well-defined process calibrated for the selected target brightness and driven by the Payload's Instrument Control Unit (ICU).
§ INTRODUCTION

The ARIEL ICU design is conceived to pre-process the scientific data and to implement the commanding and control of the AIRS Spectrometer. The ICU is interfaced on one side with the instrument and on the other (spacecraft, S/C, side) with both the Data Management System (DMS) and the Power Conditioning and Distribution Unit (PCDU), both belonging to the hosting platform. The DMS is composed of the On-Board Computer (OBC) and the Solid State Mass Memory (SSMM), the latter operating as the main buffering memory for scientific data and HK telemetries before sending them to Ground, thanks to a communication system based on two X-band transponders. For this reason, the ICU internal memories are basically conceived and designed for temporary local buffering and to support a reduced data handling, as the AIRS scientific data, once properly pre-processed, are delivered to the SSMM. This characteristic is exploited to simplify the unit's electrical design, saving mass and power, for both the ICU architectures (baseline and alternative) designed at this stage to be interfaced respectively to US detectors or EU detectors by means of their customized CFEE, operating at cryogenic temperatures. As the ICU is warm electronics, it will be located inside the S/C Service Vehicle Module (SVM) and connected to the AIRS CFEE by means of cryogenic harness. The ICU subsystem I/F to the cryogenic harness is a warm FEE (WFEE), called Detector Control Unit (DCU), as shown in Fig. <ref>. In case of the adoption of US detectors from Teledyne (with a presently higher TRL, Technology Readiness Level), the ARIEL CFEE will be represented by the SIDECAR[System Image, Digitizing, Enhancing, Controlling, And Retrieving.] ASIC, while in case of EU detectors the CFEE will rely on a customized design presently under development by the SRON Space Research Institute (The Netherlands). Italy is in charge of the ICU and of the related AIT/AIV activities at system level.

§ AIRS DETECTION MODES

Due to the expected photon flux and its estimated dynamic range, different AIRS detection modes of operation will be implemented. The on-board data processing in turn depends on the readout mode, and thus three modes are defined: multiple CDS (Correlated Double Sampling), multiple-slope up-the-ramp sampling and single-slope up-the-ramp sampling. Fig. <ref> illustrates these three modes of operation by representing the pixel-level signal. The baseline selected detector (512 x 512 pixels with 15 to 18 μm pixel pitch) for the ARIEL Spectrometer is similar to the Teledyne MCT (Mercury Cadmium Telluride) 1k x 1k array developed for the NASA NEOCam payload and based on the heritage of the WISE mission; this kind of detector allows for non-destructive (or multi-accumulate sampling up-the-ramp) readout modes. This capability can effectively reduce the equivalent readout noise, improving the signal-to-noise ratio and allowing an easier identification and rejection of the glitches induced in the signal by cosmic-ray hits. The scientific data, after the selected time sampling, are transferred from the DCUs to the DPU (Data Processing Unit). They are represented by images (detector windowing is foreseen) of 270 x 64 pixels for CH_0 and 100 x 64 pixels for CH_1, with one value (16 bits) per pixel and one Quality Criterion[The mean or χ^2 of the data is computed; deglitching by rejecting samples having a value above a threshold is an option (though not implemented currently, TBD with the science SGS team).] (8 bits) per pixel, for a total of 24 bits for each computed ramp slope.
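As an illustration of the on-board ramp processing just described (a minimal sketch of our own, not the flight algorithm), the fragment below estimates a pixel's slope from non-destructive up-the-ramp samples and flags cosmic-ray steps as outliers in the sample differences; the outlier threshold and the 8-bit quality encoding are assumptions.

```python
import numpy as np

def fit_ramp(samples, dt, k_sigma=5.0):
    """Estimate the ramp slope (ADU/s) from non-destructive up-the-ramp samples.

    A cosmic-ray hit adds a step to the ramp, i.e. one outlier in the sample
    differences; differences beyond k_sigma robust sigmas are discarded.
    """
    diffs = np.diff(samples)
    med = np.median(diffs)
    mad = 1.4826 * np.median(np.abs(diffs - med)) + 1e-12  # robust sigma
    good = np.abs(diffs - med) < k_sigma * mad
    slope = diffs[good].mean() / dt
    quality = int(min(255, (~good).sum()))  # toy 8-bit criterion: glitch count
    return slope, quality

# Example: 3.5 Hz sampling of a ~3.3 s ramp with one cosmic-ray step.
rng = np.random.default_rng(1)
t = np.arange(12) / 3.5
ramp = 120.0 * t + rng.normal(0.0, 2.0, t.size)
ramp[7:] += 400.0                      # step induced by a cosmic-ray hit
print(fit_ramp(ramp, dt=1 / 3.5))      # slope recovered near 120 ADU/s
```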
Assuming that pixels are sampled up the ramp in a non-destructive manner, with the ramp length (duration) determined by the saturation limits of the detector, an estimate of the expected data rate can be provided, in principle, for any target of known flux (bright, medium, faint). The overall daily data budget has been calculated and is reported in Tab. <ref>, which also provides the expected data rate for the housekeeping collected by the ICU (in particular by the Telescope Control Unit, refer to the next paragraph) under the following assumptions:

* Fine Guidance System (FGS), VIS/NIRPhot and NIRSpec science channel <cit.> telemetries are not taken into account;
* Different integration times could be used as a function of the targets' brightness;
* In case of very bright sources (e.g. 0.1 s of pixel saturation time), several exposures will be averaged together following a CDS readout scheme, to build one frame every ∼4 s.

The estimation includes the need for a detector reset between consecutive ramps. The overall ARIEL daily data volume (25.0 Gibit/day, including the FGS, NIRPhot and NIRSpec channels) is dominated by AIRS-CH_0, AIRS-CH_1 and NIR-Spec and takes into account the observing efficiency (95% - targets + calibration), as well as the required data volume margin (30% at this stage). The calculation provided in Tab. <ref> assumes on-board fitting of the ramp, with an average ramp length to saturation of 3.26 s for the AIRS channels. In particular, it is assumed that up-the-ramp sampling of the pixels is performed in a non-destructive manner at a relatively high sampling rate (∼3.5 Hz for AIRS-CH_0, ∼9.3 Hz for AIRS-CH_1), followed by destructive readouts after a well-defined number of samples, depending on the brightness of the target. Indeed, the actual readout mode will vary between targets (as a function of their brightness), with the possibility of setting the ramp integration time in the range of 3.5 to 7 s; a back-of-the-envelope cross-check of these numbers is sketched at the end of this section. For each target in the ARIEL Target List, the expected flux will be used to refine the best readout scheme to be driven by the ICU, obtaining the corresponding data rate. The adopted scheduling tool for ARIEL observations, calibrations and data delivery to Ground will also be used to show how the payload data rate may vary throughout the mission and to evaluate the expected maximum and average data rates, thus allowing for a correct dimensioning of the on-board processing (along with the ICU buffering capabilities) in quasi-real-time, prior to sending data to the S/C SSMM.

§ ICU BASELINE ARCHITECTURE

The ICU baseline architecture includes five (active or switched-on at the same time) units:

* 1 PSU - Power Supply Unit
* 1 DPU - Data Processing Unit
* 2 DCU - Detector Control Unit
* 1 TCU - Telescope Control Unit

as represented in Fig. <ref>, along with the number and type of needed PCBs (exploiting standard 3U and 6U formats). The Telescope Control Unit is considered an ICU slave subsystem and, owing to its complexity and required volume, is located in an independent box stacked on the unit's main box. This configuration is exploited in both the ICU baseline and alternative designs, as it presents several advantages. Indeed, as currently foreseen, the TCU will be provided by Spain and shall host the main logic board, called Thermal Stabilizer & IR Calibrator (TSIRC), the M2 mirror mechanism (M2M) drivers and the needed power section and points of load (PoL) to properly feed its subsystems.
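As anticipated above, the AIRS entries of the data budget can be cross-checked with a toy calculation of our own (it uses only the numbers quoted in the text and ignores reset overheads, per-target readout variations and the HK contribution):

```python
# Rough cross-check of the AIRS daily data volume (toy numbers from the text;
# FGS/NIRPhot/NIRSpec channels and housekeeping are not included here).
GIBIT = 2**30

def daily_volume_gibit(npix, ramp_s, bits_per_slope=24,
                       efficiency=0.95, margin=1.30):
    ramps_per_day = 86400.0 / ramp_s * efficiency
    return npix * bits_per_slope * ramps_per_day * margin / GIBIT

ch0 = daily_volume_gibit(270 * 64, ramp_s=3.26)
ch1 = daily_volume_gibit(100 * 64, ramp_s=3.26)
print(f"AIRS CH0 ~ {ch0:.1f} Gibit/day, CH1 ~ {ch1:.1f} Gibit/day")
```

The result is of the same order as the AIRS share of the 25.0 Gibit/day total, as expected for such a simplified estimate.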
Returning to the Telescope Control Unit: in ARIEL Science and Calibration modes, it shall be able to accomplish the following tasks:

* driving the M2 refocusing mechanism
* driving the on-board calibration source
* monitoring the thermal state of several PLM elements
* controlling the thermal stability of the Thermal Control System (TCS) for the following PLM subsystems:
- AIRS detectors (actively cooled down to the operating temperature)
- FGS, VIS/NIRPhot and NIRSpec detectors
- M1 mirror

The Telescope Control Unit will be composed of three separate boards, as the volume of the electronics needed to fulfil the Unit requirements is larger than a standard PCB, and also to ease AIT/AIV activities at system level, since the boards can operate independently with only a power supply (different for each unit) and their respective data buses. In Fig. <ref>, the ICU nominal units (TCU included) are indicated with blue labels, whilst the three redundant units are highlighted by means of red labels. They are hosted inside two stacked and independent boxes including the PSUs, DPUs and DCUs (ICU box) and the TSIRC logic, M2M drivers and the needed additional PSU (TCU box). The two boxes are electrically connected (power and TM/TC) by means of external harnessing exploiting front-panel connectors to facilitate AIV/AIT activities, as the Units will be tested separately before integration. Both the ICU and TCU boxes will implement their own back panel for routing the power and signal lines connecting the internal electronics boards or, alternatively, will exploit external connections, but the latter solution would limit the allocated volume for the two boxes. A final assessment of both solutions will be performed during the next phase, taking into account the needed resources in terms of mass, volume, power dissipation and overall complexity. At the present time, in order to minimize the length and the mass of the harnessing connecting the cases, the stacked configuration is preferred. The ICU baseline electrical architecture relies on the adoption of US detectors (H1RG-type) and cold front-end electronics (CFEEs) from Teledyne (SIDECAR), given their very high TRL and space heritage with respect to the present European alternative. The SIDECAR solution is the best one to properly drive the US MCT (HgCdTe) detectors and to save mass, volume and power at the same time. These devices can easily work down to the ARIEL-required cryogenic temperatures (≤ 60 K for the SIDECARs and ≤ 42 K for the detectors), so that both CH_0 and CH_1 are fed and controlled thanks to the adoption of two DCU boards residing in the warm part of the Service Module. The two electronics sides will be connected by means of cryogenic harnessing passing through the three V-grooves <cit.> (working at different temperatures) of the telescope assembly. The present ICU architecture exploits a partial cold redundancy and a cross-strapping capability. In particular, both the TCUs and DCUs are cross-strapped and can work along with the PSU and DPU boards (Nominal and Redundant) as a whole, although the DCUs are not involved in a cold-redundant configuration, as no duplicated DCUs are foreseen. A very similar ICU architecture, involving DCUs as the SIDECAR I/F (for biases, clocks and control signals), has already been designed and adopted for the Euclid Mission (NISP instrument). Each DCU controls and interfaces a single SIDECAR (as well as the related detector) and, in this sense, its design can be considered strong heritage from the Euclid project for the ARIEL Payload.
Indeed, an overall DPU/DCU/SIDECAR/H2RG detector chain reliability figure higher than 98% has been computed, and for this reason redundancy for the ARIEL DCUs has not been considered, also because of the related increase in complexity and the needed budgets in terms of power, mass and volume. At the present time the DCU Technology Readiness Level (TRL) has been demonstrated to be higher than 5, and a DCU EM has already been manufactured and fully tested, working properly along with the SIDECAR and the detector. A DCU EQM model for NISP (very similar to the EM one) is being manufactured and tested by Italian industry. Moreover, a DCU/SIDECAR I/F simulator has been developed for the NISP instrument. The same philosophy concerning the DCU simulator is adopted for the ARIEL ICU case. As baseline, all the ICU and TCU boards (N and R) are designed respecting the 3U (160 mm x 100 mm) and 6U (233 mm x 160 mm) standard PCB formats. The ICU boards will be stiffened by a proper mechanical frame, with the external I/O connectors fixed and screwed to the board external panels. The TCU logic sections will be internally interfaced with a board implementing the M2M drivers and a board hosting the power supply and points of load required to feed them. In particular, the ICU box shall host:

* two (N and R) 3U-format PSU boards, providing +5 V to the ICU's DPU and DCU boards and filtered +28 V (N and R lines) to the TCU box;
* two (N and R) 6U-format DPU boards;
* two (CH_0 and CH_1) non-redundant 6U-format DCU boards;

and the TCU box:

* two (N and R) 3U-format PSU boards, locally deriving (thanks to on-board DC/DCs and PoL) ±5 V and the needed voltage levels (+20 V, ±12 V) from the filtered +28 V coming from the ICU;
* two (N and R) 6U-format TSIRC boards (hosting a control FPGA, IR calibration source drivers, etc.);
* two (N and R) 3U-format M2M driver boards (or a single 6U board hosting the N and R drivers);

for a total of nine equivalent (from the point of view of the overall dimensions and volume allocation) 6U-format boards, fitting the ESA-allocated budgets (power[TCU included, Decontamination Mode excluded.], mass and volume - refer to Tab. <ref> and <ref>). The lateral sides of the PCB modules will be equipped with card-lock retainers, used to fix them to the unit's internal frame. All the box panels will be manufactured in aluminum alloy and then externally painted black (except the bottom panel) to improve the radiative exchange with the environment and assure, at the same time, proper thermal conduction towards the SVM mounting panel.

§.§ PSU board design

The PSU board is a standard Power Supply Unit hosting DC/DC converters, with a number of secondary sections needed to support the adopted cross-strapped and partially redundant configuration. It is in charge of collecting the currents and voltages on the secondary outputs and the temperature HK (A/D converted internally to the Unit, exploiting the SPI HK I/F for the signals and control lines to/from the ADCs). Monitoring of the Unit's consumption is in charge of the platform, as well as its switching on/off (both PSU and DPU boards together, thanks to a sequencing logic belonging to the PSU), by means of HPC commands. The PSU is mainly composed of three sections (refer to Fig. <ref>):

* Power conditioning section, performing the following tasks:
* DC/DC conversion, i.e.
main DC/DC for the generation of the +5 V to be distributed to the other boards, Aux DC/DC for internal logic powering, HK DC/DC for powering the HK section for the acquisition of voltage / current / temperature HK;
* Inrush current limitation;
* Polarity inversion protection;
* Power-on sequence generation;
* Unit power-on reset generation;
* EMI (Electro-Magnetic Interference) filtering.

It is worth noting that only one main DC/DC converter is foreseen to feed the CPU (N or R) and the DCU_0 + DCU_1 boards, as it is presently assumed that it will comply with the overall required current. As an alternative, a further DC/DC converter can be exploited to feed some of the itemized boards, provided that both DC/DCs can satisfy the required current absorption and be accommodated on a 3U board at the same time.

* Power distribution section, hosting Output Power Controllers (OPC) implementing switching capabilities and overcurrent plus overvoltage protections on the +5 V and +28 V voltage/current distribution lines (+28 V only to the Telescope Control Units, N and R);
* HK acquisition section, with three 12-bit ADCs for voltage, current and temperature measurements, controlled and acquired by the processor via SPI (Serial Peripheral I/F).

Each electronic board, apart from the TCU, is basically supplied by a main voltage level of +5 V, protected against overvoltage and overcurrent; the secondary voltage levels needed by the hosted electronic components are derived locally on-board (DPU and DCU) by means of a Point of Load (PoL).

§.§ DPU board design

The Data Processing Unit can be implemented as a single 6U board hosting a CPU (the UT699E processor from Cobham, as baseline) and a co-processing FPGA hosting some peripherals. Memories for:

* booting (PROM)
* storing the ASW (E2PROM and/or NVM, e.g. MRAM)
* data buffering (e.g. SDRAM)
* data processing support (e.g. SRAM, SDRAM)

are included in the design as well. The DPU board block diagram is provided in Fig. <ref>. The two main blocks, i.e. the UT699E CPU and the RTAX1000 FPGA, are connected through an on-board cPCI bus. The selected enhanced UT699E CPU is a 32-bit fault-tolerant LEON3FT SPARC V8 microprocessor supporting up to a 100 MHz clock rate and delivering up to 140 DMIPS. The processor includes an on-chip Integer Unit (IU), a Floating Point Unit (FPU), a Memory Controller with a DMA Arbiter and a UART-based DSU I/F. It is interfaced to the on-board FPGA by means of a 32-bit-wide, 33 MHz cPCI bus supporting DMA (in case the SDRAM controller function were assigned to the FPGA). One of the main characteristics of the adopted UT699E CPU is the on-chip availability of 4 embedded SpaceWire (SpW) links (2 supporting the RMAP protocol), allowing it to be directly interfaced to the SVM (OBC and SSMM Units) and to the DCU SpW I/F. The two SpW links implementing the RMAP protocol could be exploited to read from and write to the DCU FPGA registers[Indeed, the DCU SpW I/F link could be replaced with a serial I/F having reduced performance from the point of view of data rate, but the former offers the possibility for the CPU to read from and write to the DCU FPGA registers directly, for remote configuration, thanks to the RMAP protocol.]. The DPU FPGA, along with the processor, is in charge of the DCU board management (by means of the RS485[The UT699E LEON3FT UART port is not compatible with the UART/RS485 hosted by the DCU FPGA, as it is not able to manage the enabling/disabling of the transmission driver as required by the RS485 standard when adopting a single TX/RX line.
For this reason, the RS485 bus is used to interface the DCU FPGA by means of the Data Processing Unit FPGA.] bus I/F, as adopted for Euclid's DCU) and of the data acquisition (through the SpW I/F) and pre-processing tasks, e.g. the implementation of the logic for the sampling-up-the-ramp readout mode, data deglitching and lossless SW compression, if needed. Alternatively, some of the pre-processing tasks could be devolved to the SIDECAR ASIC or to the DCU unit in order to properly share the overall data processing load and the needed resources. Analogously, to increase the overall ICU performance, the possibility of adopting an FPGA-based (or HW) implementation of the data compressor (and/or other processing routines) is under evaluation. The DPU FPGA is interfaced to the TCU TSIRC board by means of an I^2C bus I/F (for parameter configuration and for the acquisition of telescope mirror temperatures and mechanism HK telemetries). An embedded HDL[Hardware Description Language.]-based Finite State Machine (FSM), in charge of controlling and scheduling the FPGA tasks, is foreseen, along with an AMBA-bridged (AHB/APB) bus to connect and control all the internal peripherals thanks to an AHB arbiter. Finally, an on-board PoL is included in the DPU design, with the aim of providing all the needed finely regulated voltage levels to properly feed the processor and the FPGA (core voltages). As an alternative to the adoption of the UT699E as main processor, the Cobham/GR UT700 or the GR712RC dual-core LEON3FT CPU could be selected. The latter is one of the eligible on-board CPUs for implementing both the instrument control and the data acquisition and processing functionalities (e.g. SW data compression), properly exploiting its dual-core nature. In particular, the GR712RC processor, supported by the RTEMS OS, can be more easily exploited in the so-called AMP (Asymmetric Multi-Processing) configuration (instead of SMP, Symmetric Multi-Processing), as the SW tasks can run asynchronously on the individual cores, configured to have separate addressable memory areas and hardware resources. This choice is normally driven by the fact that in space applications a high level of reliability and testability is needed, with a deterministic behaviour of the SW; the latter is better achieved by means of the AMP mode, where any hypothetical anomaly or overload of the tasks running on one core would not affect the effective reactivity of the other one, granting physical isolation of the running spaces. In the AMP operating mode, the multi-core design requires extra work to manage the possibility of concurrent access to all the shared resources (interrupts, timers, peripherals, memory). In order to use a multi-core processor, the software should be split up into items that can run in parallel on the different cores. In this configuration, two instances of the RTEMS OS in AMP mode are executed.
The RTEMS instance running on the first GR712RC core, the boot processor, has control over the primary resources and initializes the overall environment, while the RTEMS instance running on the second core has no access to the main resources but keeps full and independent control over its own thread scheduling, the management of the other resources being left to the developer's choices. It should be noted that the GR712RC can exploit up to 6 embedded SpW I/Fs if no SDRAM-type memory is directly interfaced; otherwise only 4 links are available, as in the case of the UT699E processor. The final selection of the processor to be adopted for the management of the AIRS Spectrometer will be performed during the next phase of the Project, when the overall requirements on instrument management and data processing shall be finely addressed. Indeed, the choice of the AMP dual-core architecture should be justified by the actual need in terms of CPU resources (mainly peripherals, as the GR712RC too guarantees up to 140 DMIPS when running at 100 MHz).

§.§.§ DPU SW

As described above, the DPU science data handling functionalities include the acquisition, buffering and pre-processing of the AIRS spectrometer digital data (16 bits/pixel, 24-bit depth for ramps[Some more bits, besides 16, are needed to represent the quality criteria of the ramp slopes and fitting. AIRS adopts an additional 8 bits for the ramp quality criteria definition.]). A lossless compression task (e.g. adopting the RICE algorithm, providing a compression ratio of at least CR=2) could be planned as well, although it is not strictly required. The compressed data are packetized according to the CCSDS protocol format and sent to the S/C DMS for storage and later downloading to Ground. Pre-processing and compression tasks can be disabled in case of a raw-data request from the Spacecraft/Ground (an ESA mandatory requirement). The science data handling functionalities will be implemented in the ICU's Application Software (ASW), running on the DPU CPU. It handles all the ICU/Spectrometer and ICU/TCU digital interfaces and implements the following instrument monitoring and control functionalities: verifying and executing the telecommands received from the S/C, handling the switching on/off of the ICU and TCU subsystems, configuring and commanding the spectrometer sub-units, monitoring the ICU and AIRS units, reporting housekeeping and events, supporting the payload FDIR (Fault Detection, Isolation and Recovery) tasks and the operational modes, and managing the on-board time thanks to a combination of the absolute time (received from the S/C through the SpaceWire protocol and Time Codes[It is also foreseen, as baseline, an external sync signal (with TBD frequency, amplitude and overall characteristics) in case the SpW packet time stamping exploiting Time Codes and an internal HW clock were not able to guarantee the needed timing accuracy for scientific data processed and sent to the S/C in quasi real-time.]) and the internal time (based on a HW clock). The listed functionalities will be implemented by means of the CCSDS PUS services; all the mandatory PUS services will be guaranteed, along with a set of services specific to the ARIEL Mission (private services for the ARIEL electronics subsystems).

§.§ DCU board design

The proposed Detector Control Unit design (refer to Fig.
<ref>) is a heritage of the design adopted for the DCUs of the NISP instrument on board the Euclid Mission, where the same kind of detectors have been used along with the same CFEEs (SIDECARs). This choice allows minimizing all the risks concerning the design, development, performance characterization and testing activities on the board. The DCU hosts, as baseline, a FLASH-based reprogrammable FPGA to offer maximum flexibility, also in case of late requirements specification (or modification) from the ARIEL Science Team. Alternatively, a Microsemi RTAX-family FPGA (in anti-fuse technology and so not reprogrammable once burned, being One-Time-Programmable -OTP- logic) could be adopted in case of early requirements specification and detector/CFEE selection. The FPGA presently selected is a Microsemi ProASIC3-type device offering the capability to embed an HDL FSM with some programmable science data pre-processing tasks (e.g. pixel co-adding, ramp slope computation, etc.) by means of a flexible parameter configuration that can be reprogrammed up to the EQM/FM unit. The FPGA also hosts an SDRAM memory controller to manage 128 MB of on-board memory, used as a buffer to support the HDL-based pre-processing tasks. It should be noted, indeed, that for the Euclid Mission the use of a reprogrammable FPGA for the logic device implementing the interface with the detector system was preferred for the following reasons:

* Request of maximum flexibility from the Science Team during the DCU development process;
* Lack of knowledge of the actual behaviour of the logic interface to the SIDECAR: unexpected behaviour, not known a priori during the following assessment phase, could also have required mitigation in the FPGA;
* Risk of late modifications required on the FPGA design (e.g. concerning the implementation of updated high-performance pre-processing tasks): the chosen RTProASIC3 FPGA allows for the modification of the design via a JTAG port without opening the unit box and changing/removing the device.

On the other hand, the use of a flash-reprogrammable FPGA has the disadvantage of a lower level of immunity to radiation effects with respect to an FPGA based on anti-fuse technology (e.g. the Microsemi RTAX-S family), and its radiation-hard design improvement requires a non-negligible effort (e.g. I/O, at RTL level, logic placement, etc.), although for the L2 radiation environment 50 krad outside the S/C can be assumed as a typical value. For this reason, the standard rad-hard FPGA design flow should be modified with the introduction of radiation mitigation activities up to the validation with the EM and EQM models, but this risk can be properly assessed and addressed by the ARIEL Consortium, as at least one European company has already acquired all the needed knowledge and competences to interface and drive the Teledyne SIDECAR + H1RG detector system. The DCU WFEE is in charge of SIDECAR clocking (at least a master clock is needed for the ASIC) and feeding (secondary finely regulated voltages produced by an on-board PoL, refer to Tab. <ref>), and it collects the digitised scientific data and the HK (currents, voltages and temperatures) describing the ASIC status. The needed enabling and control signals for SIDECAR management are represented in Fig. <ref>, in the magnification of the box inside the FPGA block diagram (on the right).
Three different grounding references (analog and digital) are foreseen for clean power supply feeding. In particular, the SIDECAR science I/F is based on an 8-bit LVDS parallel I/F (with data buffering and packet CRC) and a TM/TC I/F running @ 2 Mbps (serial synchronous) plus a master clock line @ 10 MHz. The DPU I/F shall be based, instead, on SpW for the science data TM, along with an RS485 serial I/F offering the capability to manage and configure the DCU FPGA from the DPU. Alternatively, the FPGA registers could be managed thanks to the RMAP protocol exploiting the SpW-based I/F. An important issue of the electrical I/F to the SIDECAR ASICs is the harness electrically and thermally linking the WFEE part of the electronics (working at the Service Module temperature of 270-300 K) and the CFEE part (working at T < 60 K, on the Payload's optical bench). For this kind of harnessing it is foreseen to split the electrical connections into different parts, or mated cables, characterized by different thermal conductivities (e.g. copper, constantan or manganin, phosphor bronze, steel, etc.) in order to be properly connected to the three V-groove heat sinks.

§.§ TCU unit design

The Telescope Control Unit (refer to <cit.> for an exhaustive description of the metrology capabilities of the Unit) will be composed of three distinct boards, as shown in Fig. <ref>. A 6U PCB (TSIRC) will host the PLM thermal monitoring and control HW, the IR calibration lamp driver and their multiplexing stages. For the driver electronics of the M2 mechanism, an upgraded version of Euclid's M2M is foreseen, with the same driver, which will require a separate 6U board for both the nominal and redundant systems (M2MD). In order to reduce the M2MD modifications needed to fit the ARIEL requirements, as well as to reduce the number of I/Fs from the ICU's PSU, thereby simplifying it, a dedicated 3U PSU board is foreseen (TCU-PSU), which will generate (from the main power line of +28 V coming from the ICU) all the voltage levels required by the M2MD and TSIRC boards. The system will be based on cold redundancy, with all the boards residing inside a dedicated box on top of the ICU's, as represented in the mechanical design picture (see Fig. <ref>). The digital system of the TCU will be based on an FPGA with an embedded HDL FSM to control all the TCU boards and to simplify, at the same time, the overall SW architecture of the Unit. The UT6325 FPGA will be located on the TSIRC board, together with its PoL converters generating the proper voltages for the GPIO interfaces and internal cores. The FPGA will host two Digital Signal Processing Modules (DSPM, one for the thermal monitoring subsystem and the other for the IR calibration lamp driver), five PID controllers, GPIO interface management to generate the multiplexer addresses and select the proper voltage and gain for a given thermistor, as well as to control the OPCs of the TCU-PSU. It will also include a memory bank, two I^2C (or SpW, an option to be explored in the next B1 phase of the mission) links to communicate with the DPU and one MIL-STD-1553 (or SpW as well) link to communicate with the M2M Driver. The telescope thermal monitoring will be performed by means of two types of sensors: Cernox thermistors for precise readings (detectors, M1, optical elements, etc.) and DT-670 diodes for housekeeping TM of the other elements of the PLM (V-grooves, OB, baffle, etc.). These sensors will be driven and read thanks to the Thermal Stabilizer & IR Calibrator electronics.
In particular, the sensors will be driven by an adjustable current source (one for each type of sensor, with 4 to 6 selectable levels) and, thanks to their 4-wire configuration, read by means of an instrumentation amplifier (IA) and a 16-bit ADC. All 46 PLM sensors will be sequentially powered and read through a multiplexing stage with no cross-strapping: nominal sensors will be connected to the nominal TSIRC and backup ones to the redundant TSIRC. Thermal perturbations in the PLM are expected to have time periods much longer than 1 s; therefore, the system will be designed so that every second all the Cernox thermistors (approx. 60%) plus 4% of the diodes are read, so that the temperature controller can be updated with the proper feedback and, every 10 s, housekeeping TM can be generated and sent to the DPU. The thermal control of the TCS subsystems (which are placed between critical detectors/mirrors and their thermal sinks) will be carried out by monitoring their temperature and activating their heaters once the correction has been calculated by the FPGA logic (a PID-type control loop; an illustrative sketch is given at the end of this section). The heater power will be supplied by a driver stage consisting of a DAC and a buffer carefully designed to supply a constant current for the detector heaters (avoiding EMI with the detectors) as well as a buffered PWM signal for the M1 mirror (not affected by EMI). The detectors will have a single heater to stabilize their temperature, but M1 requires several (3 to 5) to help distribute the heat. Survival heaters and thermistors might be installed in each TCS as well, but they are assumed to be completely in charge of the S/C provider (their control system too). The IR calibration lamp will be based on a thermal source generating the proper light spectrum for the detectors. The thermal source consists of a 4-wire tungsten filament, so that it can be powered and its voltage read at the same time. The ARIEL requirements foresee a 16-bit DAC to control the filament current, once ground testing has found the proper current to achieve 1100 K at the tungsten filament. The proposed architecture foresees a 24-bit-resolution PID feedback loop in order to control the calibration lamp power with a resolution better than one part per million. Once the signal has been adapted in the instrumentation amplifier, a delta-sigma ADC from Texas Instruments (ADS1282-SP) will acquire a 24-bit sample every millisecond. The control logic inside the FPGA (a PID controller with a ΔΣ modulator to reduce the bit count at a higher rate) will drive an overclocked (∼65 MHz) 16-bit DAC, which in turn will drive a current buffer for the calibration lamp. This architecture could be simplified (using 16-bit control, only reducing the sampling frequency and removing the FPGA delta-sigma modulator) to fulfil the ARIEL baseline requirements, but the 24-bit control could also be implemented in order to gain precision when calibrating the detectors. The final solution will be chosen in the next phase of the Mission. The M2M Driver and mechanism are based on a design inherited from Euclid's and GAIA's M2MM. The ∼4.45 kg mechanism will have 3 degrees of freedom (DOF: tip/tilt and piston) controlled by a dedicated driver hosted inside the TCU box. The system will rely on a single 6U board within a cold-redundant configuration, where the nominal coils of the stepper motors are connected to the nominal section of the driver and the backup coils are connected to the redundant section.
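Before moving on, here is a minimal illustration of the PID-type thermal control loop described earlier in this section (a sketch of our own: the gains, the 1 Hz update rate and the first-order thermal plant are arbitrary assumptions, not ARIEL parameters).

```python
# Minimal discrete PID heater-control sketch; output is a normalized
# heater drive clamped to [0, 1].
class PID:
    def __init__(self, kp, ki, kd, dt, out_min=0.0, out_max=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0
        self.out_min, self.out_max = out_min, out_max

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        return min(max(out, self.out_min), self.out_max)  # clamp heater drive

# Toy first-order plant: a detector stage leaking heat to a 40 K sink.
pid, temp = PID(kp=0.5, ki=0.05, kd=0.1, dt=1.0), 40.0
for step in range(120):
    power = pid.update(42.0, temp)                 # regulate towards 42 K
    temp += 0.2 * power - 0.01 * (temp - 40.0)     # heating vs. leak to sink
print(f"after 120 s: T = {temp:.3f} K")
```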
The maximum power consumption of the M2M is expected to be approximately 10 W, but this value is inherited from Euclid's system and might end up being higher due to the higher mass of the ARIEL M2 (compared with Euclid's M2). The driver and mechanism voltages (±5 V, ±12 V and +20 V) will be supplied by a PSU included in the TCU box. Communication with the driver board is achieved via a MIL-STD-1553 bus (or SpW, as an alternative), where each section is provided with logic capable of decoding all the received telecommands in order to generate the switching sequences required by the motors, and of encoding the status information to provide serial telemetry.

§ HARNESSING

Three different sets of harnesses are foreseen: internal to the ICU and TCU subsystems (internal harnessing), towards the Payload and towards the S/C (external harnessing for the SVM-Payload linking). The SIDECAR-DCU electrical I/F, as well as the adopted kind of harnessing, has already been described in the previous paragraphs; the ICU internal I/Fs as well as those towards the S/C are detailed hereunder.

§.§ Internal harnessing

The main electrical I/Fs internally connecting the ICU and TCU subsystems are the following:

* DPU Power I/F: +5 V (from PSU to DPU)
* DCU Power I/F: +5 V (from PSU to DCU_0 and DCU_1)
* TCU Power I/F: +28 V (from PSU to TCU N and R PSU boards)
* DPU TM/TC I/F: GPIO and SPI (from DPU to PSU)
* DCU TM/TC I/F: RS485, SPI and SpW (from DPU to DCU_0 and DCU_1)
* TCU TM/TC I/F: I^2C, SPI or SpW (from DPU to TCU N and R TSIRC boards)

In case a back panel were adopted for signal and power line routing inside the ICU and/or TCU boxes (the baseline choice), no flying harness would be foreseen between their boards. Note that the ICU-TCU harnessing will be implemented by means of external (i.e. outside the ICU and TCU boxes) connections. This choice would facilitate the AIV and AIT activities at Unit and System level, as already pointed out.

§.§ External harnessing

Concerning the selected (electrically and thermally) conductive materials, the harnessing towards the Payload (AIRS, telescope mirrors and mechanisms) is still to be partially refined, as it plays a fundamental role in thermally linking the warm electronics side to the cold electronics part (it shall be further assessed during the next Phase). The foreseen harnessing towards the S/C is itemized hereunder. Nominal (and Redundant) power supply and control I/F:

* Power line: +28 V + RTN (from S/C PCDU to ICU PSU)
* Switch On HPC (Signal + RTN) (from S/C DMS to ICU PSU)
* Switch Off HPC (Signal + RTN) (from S/C DMS to ICU PSU)
* Switch Status BSM[Bi-level Switch Monitor] (Signal + RTN) (from S/C DMS to ICU PSU)
* Sync (Signal + RTN) (from S/C DMS to ICU DPU)

Nominal (and Redundant) I/O digital TM/TC I/F:

* 1 or 2 (baseline 1 TM + 1 TC) standard SpaceWire links (configured @ 10 Mbit/s) (from S/C DMS to the ICU DPU).

The use of the MIL-STD-1553 bus is not foreseen at this stage of the design, except for the M2M driver TM/TC towards the TSIRC board.

§ MECHANICAL DESIGN

Fig. <ref> shows the 3D CAD model of the overall Unit (ICU and TCU), which foresees two stacked boxes hosting:

* 2x ICU/PSU (N&R), in 3U format
* 2x DPU (N&R), in 6U format
* 2x DCU (both N), in 6U format

inside the ICU, and:

* 2x TSIRC (N&R), in 6U format
* 2x M2M (N&R) drivers, in 3U format (TBC)
* 2x TCU/PSU (N&R), in 3U format

inside the TCU. Indeed, the M2M control logic (N&R) could be implemented in a single 6U-format PCB without any change to the TCU mechanical envelope. The Unit's overall dimensions are 300 mm (including mounting feet) x 240 mm x 245 mm.
The boxes' depth (245 mm) also allows for the adoption of two distinct back panels for ICU and TCU power and signal line routing. Both boxes shall host the grounding reference point along with a bonding stud and at least one TRP (Temperature Reference Point). Their mechanical coupling will be achieved by joining the bottom box's top plate with the top box's base plate, assuring proper thermal conduction and heat dissipation to the SVM optical bench by means of the lateral panels. The connectors shown in the mechanical design are only indicative (MDM micro-D type, 9 and 25 poles, for SpW TM/TC and analog/digital signals; DSUB, 9 poles, for power and control signals).

§ ICU ALTERNATIVE ARCHITECTURE

The ICU alternative solution is very similar to the baseline one and is designed for the AIRS Spectrometer with either US detectors or European detectors (CH_0 and CH_1 channels), mainly thanks to different DCU control electronics. The EU sensors are being developed within a joint collaboration between CEA-LETI and Sofradir (ROIC development) and are characterized by two different pixel architectures and readout modes, under the AIRS Team's responsibility. In this configuration, the architectural blocks of the AIRS electronics design are the following:

* Detector Sensor Chip Assembly + Read Out Integrated Circuit (AIRS FPA);
* Cold Front End Electronics (Payload side);
* Warm Front End Electronics (DCU on SVM side);
* Instrument Control Unit (and TCU unit on SVM side).

The adoption of European detectors and ASICs/CFEEs is pending on the TRL, which shall reach level 6 before the technology adopted for the electronics chain design is frozen. The two pixel alternatives for the European detectors are SFD (Source Follower per Detector) and CTIA (Capacitance Trans-Impedance Amplifier), respectively requiring the following CFEEs (presently in charge of SRON):

* In the SFD case an ASIC would be required;
* In the CTIA case the baseline choice would be to adopt an amplification and A/D conversion stage located on an intermediate 110 K thermal interface.

The preferred option relies on the adoption of a CTIA-based pixel readout architecture plus a 110 K readout electronics stage (with readout noise expected to be in the 30 to 50 e^- rms range) for the European detectors' CFEE, as shown in Fig. <ref>. In the first case (SFD-type ROIC), a SIDECAR CFEE is required. The ICU alternative design block diagram is represented in Fig. <ref>. The DCU is depicted as a separate block (which can be internal or external to the ICU box) hosting the digital I/F to the Spectrometer cold front-end electronics (ASIC or CFEE stage) and the power conditioning sections for both CH_0 and CH_1. The TCU and PSU are implemented with 2 x 3U + 1 x 6U and 1 x 6U format boards, respectively.

§.§ DCU alternative design

The DCU can take the form of boards integrated into the ICU (the preferred option) or of a distinct mechanical box or drawer (which in principle could be stacked on the ICU and TCU boxes) and will host the following functionalities:

* CFEE/ASIC control and configuration;
* Detector readout and control sequencing;
* Digital data processing and packaging;
* Low-level command decoding;
* Low-level command ACK and HK parameter & data packet transfer;
* Clean power generation for the CFEEs;
* PSU interfacing along with the needed cross-strapping.

Fig. <ref> and Fig.
<ref> show two viable architectures in case of adoption of US or European detectors and CFEE. The DCU alternative electrical design is still to be refined, but it could implement a processor[In order to implement only a high-level SW (ASW) running on the ICU processor (hosted by the DPU) and in charge of instrument management, data processing (TBD) and FDIR procedures, this complex solution should be avoided.] and/or a space-qualified FPGA. As for the ICU internal boards, a PoL section is needed to derive the required voltage levels for the on-board devices and electronic components. Thanks to the use of the Teledyne SIDECAR ASIC in the cold front-end electronics, which already implements many functions, the DCU has a low level of complexity. Besides the functions that interface the non-redundant detector assemblies to the rest of the electronics, the warm front-end electronics comprises a redundant FPGA that implements low-level SIDECAR command generation and scientific data buffering. The DCU interfaces externally with the DPU function of the ICU through a SpW serial interface. In addition to the functions already mentioned above, the DCU for the EU-detectors solution implements a clock sequencer function that provides both the analog electronics and the detector with the signals that sequence the detection chain/assembly clocks, since the AIRS-CFEE does not feature digital-to-analog converters; these devices shall therefore be located within the DCU.

§ ICU ELECTRICAL GROUND SUPPORT EQUIPMENT

The short-functional, full-functional and performance tests on the overall ICU assembly shall be performed using an appropriate Electrical Ground Support Equipment (EGSE). The EGSE shall support both the ICU test/verification and the AIRS Spectrometer end-to-end test at different stages of the AIV flow, e.g.:

* ICU integration and testing (using additional test equipment to simulate the instrument I/F);
* Integration and testing of the WFEE (DCU) + CFEE + FPA (AIRS-level);
* Overall ARIEL payload integration and verification (TCU functionalities and telescope monitoring included).

At the first stage, the EGSE will include additional HW and SW Test Equipment (TE) simulating the relevant I/Fs and functionalities of the missing payload segments. These additional TE will be totally or partially removed from the EGSE as soon as the related units are added to the test configuration. The ICU subsystem simulators and EGSE are needed to perform the preliminary tests (Short Functional Tests, Full Functional Tests) on the ICU Engineering Model (EM) and later on the FM/PFM model (Performance Tests), as the chosen baseline model philosophy for the ICU subsystem is the Proto-Flight approach.

§.§ Overview

The aim of the ICU EGSE is to support testing and operations on both the ICU and the AIRS Spectrometer. A block diagram of the needed ICU subsystem simulators and EGSE is illustrated in Fig. <ref>. The represented scheme fits both the baseline and the alternative design. The main functions performed by the ICU EGSE may be summarized as follows:

* TM/TC S/C interface exercising;
* Storage of scientific data (S/C science data interface to mass memory simulator);
* Power generation;
* Payload instrument simulation, with the possibility to simulate instrument HK data;
* ROIC/ASIC/CFEE simulation, with the possibility to simulate scientific data;
* Data packet acquisition for storage and subsequent monitoring & processing.
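To give an idea of the packet-level traffic such test equipment must generate and check, the snippet below builds the 6-byte CCSDS Space Packet primary header (per the CCSDS 133.0-B packet standard); the APID, sequence count and data-field length used in the example are arbitrary illustration values, not ARIEL allocations.

```python
import struct

def ccsds_primary_header(apid, seq_count, data_len, tm=True, sec_hdr=True):
    """Build the 6-byte CCSDS Space Packet primary header.

    data_len is the packet data field length in bytes; the header stores
    (data_len - 1), as required by the standard.
    """
    word1 = (0 << 13) | ((0 if tm else 1) << 12) \
            | (int(sec_hdr) << 11) | (apid & 0x7FF)   # version/type/flag/APID
    word2 = (0b11 << 14) | (seq_count & 0x3FFF)       # '11' = unsegmented
    word3 = (data_len - 1) & 0xFFFF
    return struct.pack(">HHH", word1, word2, word3)

# Example: TM packet for a hypothetical AIRS APID with a 24-byte data field.
hdr = ccsds_primary_header(apid=0x123, seq_count=42, data_len=24)
print(hdr.hex())
```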
The foreseen GSE (Electrical and SW GSE) is the following:

* ICU EGSE:
- Workstations (desktop and/or laptop PCs);
- SCOS-2000/SCOE (ESA provided);
- Harnessing (to the S/C and to the Payload);
- Software (S/C and ICU subsystem simulators);
- Instrument Database (IDB hosting TCs, TMs and related parameters);
- Instrument Databank/Datapool;
- S/C Interfaces Simulator (SIS).

§.§ Functional description

The ICU EGSE shall allow the test operator or Test Conductor to:

* Fully control the ICU via a unique control station;
* Command & monitor the ICU via the adopted S/C communication protocol (SpW / CCSDS / RMAP) through a DMS simulator (SIS);
* Generate the primary power bus and interface all the HK lines;
* Send simulated scientific data via the ROIC / ASIC / CFEE data & HK simulators;
* Acquire the ICU scientific packets via the data acquisition link (SSMM / OBC or DMS simulator);
* Compare the acquired data versus the simulated data to check for correct ICU data handling.

The best candidate for the SW platform to be adopted for the EGSE is the standard ESA SCOS-2000 (Spacecraft Control & Operations System). SCOS-2000 is the generic mission control system software of ESA, which supports the CCSDS TM and TC packet standards and the ESA Packet Utilization Standard (PUS). It has been proven by recent ESA missions (e.g. Herschel/Planck) that, using SCOS-2000 and its add-ons, it can be extended to cover the on-ground testing phase and work as a proper EGSE. The use of SCOS-2000 will guarantee a very smooth transition from the ICU subsystem AIV to the ARIEL Payload AIV and the in-orbit operation phases (assuming that SCOS-2000 will be adopted by ESA for the EGSE and the Ground Segment, respectively). In particular, this smooth transition will concern the SCOS-2000 instrument Data Base (MIB tables), which describes the TM and TC packet structures.

§ ON-BOARD SOFTWARE

The ARIEL ICU On-Board Software (OBSW) <cit.> will be composed of the following three main components:

* Basic Software:
* Boot software: installed in the PROMs of the ICU DPU board, it allows loading the ICU Application Software. It contains all the low-level drivers for the CPU board and its related interfaces.
* Basic I/O SW, Service SW & Peripheral Drivers: a HW-dependent software including the software drivers for all the internal and external ICU digital interfaces. This SW is used by the Application SW and can depend on the selected Operating System (OS).
* Application Software:
* Instrument Control & Configuration Software: it implements the ARIEL scientific payload handling. It controls the spectrometer, implements the operating modes, monitors the instrument health and runs the FDIR procedures. It implements the interface layer between the S/C and the instrument.
* Data Processing and Compression Software: it implements all the necessary on-board processing functionalities, including the on-board lossless compression (if needed). After the processing, the SW prepares CCSDS packets for transmission to the S/C Mass Memory (SSMM).
* Real-Time Operating System (RTOS): the selected baseline operating system is RTEMS.

The role of, and the interconnections between, the three listed components can be clearly identified in the layered representation reported in Fig. <ref>.

§.§ On-board software layers

With reference to the following block diagram, the physical layer includes all the ICU HW components with a direct level of interaction with the on-board software.
The runtime environment includes the Real-Time Operating System layer, necessary to provide multi-tasking support. In case the baseline architecture based on the LEON processor is confirmed, the RTEMS operating system is a good RTOS candidate, being already used for applications on board ESA satellites. The other indicated system services are those not directly provided with the OS kernel but included in the Basic Software component mentioned above. An OS abstraction layer has then been included in the layered structure of the ARIEL OBSW, in which all the middleware libraries have been considered. The middleware services are based on the use of RTOS function calls. They include all the library functions dedicated to the low-level handling of the ICU HW devices/interfaces. All the middleware libraries will be developed in house and will provide a means for developing the Application Software virtually independently of the HW and OS below it. This layer is very important and will ease the testing activities. The Application Layer includes both the ICU Instrument Control software and the Data Processing software. The ICU Instrument Control SW will implement the TM/TC S/C interface handling, the payload housekeeping data acquisition and monitoring, the instrument operating modes management and the autonomous function execution. The software will be written in C, though some functions may need to be coded in assembly to optimize their performance. In case stringent timing requirements have to be met for subsystem commanding, an interrupt-driven command sequencer (On-Board Procedures, OBP interpreter) can be included in the ICU on-board software. Based on the experience of Herschel's HIFI and SPIRE instrument control software, this is a flexible and effective solution to implement time-critical commanding procedures. The Data Processing SW implements all the necessary on-board processing functionalities, including (if implemented) the on-board lossless compression (e.g. RICE). After the processing, the SW prepares CCSDS packets for transmission to the S/C Solid State Mass Memory. For the ARIEL science it is desirable to minimize the on-board data processing. To allow the Science Team the optimum chance to extract the best SNR from the available data, the capability to improve the processing during the mission and the maximum flexibility in the algorithms can be exploited using more complex on-ground processing. The actual processing will be finalized during the Phase B study, exploiting simulated data flows to verify the effectiveness of the adopted data reduction steps for the selected detectors. In particular, the deglitching algorithm's performance shall be verified against the expected data redundancy (spectra overlapping) as well as the data acquisition rate and the spaxel[A spaxel is a set of binned pixels in both spatial and spectral dimensions.] dimensions. Finally, the need to implement an effective on-board lossless compression is strictly related to the results of the on-board deglitching algorithm. If required, a dedicated trade-off activity to evaluate the performance of different standard lossless compression algorithms on the on-board CPU shall be planned.
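To give a flavour of what such a trade-off would compare (a minimal sketch of our own; the CCSDS-recommended RICE/Golomb coder, with adaptive parameter selection and bit packing, is considerably more elaborate), the fragment below Rice-encodes zigzag-mapped first-difference residuals of a toy slope sequence:

```python
def rice_encode(values, k):
    """Rice-encode non-negative integers: unary quotient + k-bit remainder.

    Returns a bit string for illustration; a flight coder would pack bits.
    """
    out = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        out.append("1" * q + "0" + format(r, f"0{k}b"))
    return "".join(out)

def zigzag(v):
    # Map signed prediction residuals to non-negative integers:
    # 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
    return 2 * v if v >= 0 else -2 * v - 1

# Toy example: slowly varying 16-bit ramp slopes compress well once the
# previous value is used as a predictor.
samples = [1000, 1003, 999, 1001, 1002, 998]
residuals = [zigzag(b - a) for a, b in zip(samples, samples[1:])]
bits = rice_encode(residuals, k=2)
print(f"{len(bits)} bits vs {len(residuals) * 16} raw "
      f"-> CR ~ {len(residuals) * 16 / len(bits):.1f}")
```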
§ CONCLUSIONS

The presented ICU baseline electrical architecture can be considered the best solution as regards risk assessment and mitigation during the present phase: the very high TRL of its subsystems, together with the computed reliability figure for the DPU / DCU / SIDECAR / detector chain (larger than 98%), leads to an overall design built on already developed and tested boards.

The strong heritage of the Teledyne detectors and SIDECAR ASIC, coupled with the lessons learned from the development and testing of the DCU boards up to the EQM model and their electrical I/F for the Euclid mission, will guarantee a very reliable Unit based on off-the-shelf subsystems. On the other hand, an early management of the ITAR procedures for the required US subsystems shall be undertaken.

Finally, the power, mass and volume budgets derived for the baseline solution, as well as its complexity, are compliant with those allocated by ESA for the Unit, margins included.

Concerning the alternative design, its main advantage is that all contributors to the AIRS performance are under the same system responsibility (DCU requirements specified by the AIRS Team), ensuring that the design is optimal in terms of adaptation to the needs of the mission. It also ensures that the full data acquisition chain can be verified before delivery.

The alternative architecture also ensures that, independently of the detector choice at the end of Phase B1, the external interfaces to the ICU remain identical; it should be noted, however, that the ICU baseline architecture, with the flexibility granted by the DCU design, would allow driving ASICs and detectors different from the US ones, likely without any modification or with only marginal changes.

On the other hand, the alternative electronics design could be more complex, especially concerning the electrical interfaces, as a further box for the DCUs could be required between the ICU and AIRS.

Depending on the final DCU implementation (in a separate box/drawer or as an internal ICU board), the mass, volume and power budgets could be impacted and might require a further revision.

The authors gratefully acknowledge the Italian Space Agency for the financial contribution to the ARIEL project in the framework of the ASI-INAF agreement 2015-038-R.0, and the Spanish Ministry of Economy and Competitiveness (MINECO) for financial support through grants ESP2014-57495C2-2-R and ESP2016-80435-C2-1-R. Special thanks to the European Space Agency for the support provided by the ARIEL Study Team, to University College London (UCL) leading the project, and to the Rutherford Appleton Laboratory (RAL Space) managers and engineers.

[Tinetti_1] G. Tinetti et al., Special Issue on ARIEL, Experimental Astronomy (2017).
[Eccleston_1] P. Eccleston et al., Special Issue on ARIEL, Experimental Astronomy (2017).
[Morgante_1] G. Morgante et al., Special Issue on ARIEL, Experimental Astronomy (2017).
[Da_Deppo_0] V. Da Deppo et al., Special Issue on ARIEL, Experimental Astronomy (2017).
[Amiaux_1] J. Amiaux et al., Special Issue on ARIEL, Experimental Astronomy (2017).
[Da_Deppo_1] V. Da Deppo et al., An afocal telescope configuration for the ESA ARIEL mission, ICSO International Conference on Space Optics, Biarritz (FR) (2016).
[Da_Deppo_2] V. Da Deppo et al., The afocal telescope optical design and tolerance analysis for the ESA ARIEL Mission, OSA Optical Design and Fabrication Congress, Denver (US) (2017).
[Da_Deppo_3] V. Da Deppo et al., An afocal telescope configuration for the ESA ARIEL Mission, CEAS Aeronautical Journal (2017).
[Rataj_1] M. Rataj et al., Special Issue on ARIEL, Experimental Astronomy (2017).
[Sierra-Roig_1] C. Sierra-Roig et al., The ARIEL ESA mission on-board metrology, Proceedings of the 4th IEEE International Workshop on Metrology for Aerospace, Padova (IT) (2017).
[Farina_1] M. Farina et al., ARIEL Spectrometer Instrument Control and Data Processing Software, European Planetary Science Congress (EPSC), Riga (LV) (2017).
On-grating graphene surface plasmons enabling spatial differentiation in terahertz region
Yisheng Fang^1, Yijie Lou^1, Zhichao Ruan^1,2,3 ([email protected])
==========================================================================================

^1 Department of Physics, Zhejiang University, Hangzhou 310027, China
^2 State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, Hangzhou 310027, China
^3 College of Optical Engineering, Zhejiang University, Hangzhou 310027, China

We propose a graphene-on-grating nanostructure to enable second-order spatial differentiation computation in the terahertz (THz) region. The differentiation operation is based on the interference between the directly reflected field and the leakage of two excited surface plasmon polaritons counter-propagating along the graphene sheet. With the spatial coupled-mode theory, we derive that the requirement for second-order spatial differentiation is the critical coupling condition. We numerically demonstrate such analog computation with Gaussian beams. The spatial bandwidth of the proposed differentiator is large enough that, even when the waist radius of the Gaussian beam is as narrow as w_0=0.68λ (λ is the free-space wavelength), the accuracy of the differentiator is higher than 95%. The proposed differentiator is ultra-compact, with a thickness less than 0.1λ, and useful for real-time imaging applications in THz security detection.

Terahertz electromagnetic waves exhibit several unique properties. They can penetrate barriers such as clothing and packing materials, and they are used for spectral characterization of resonances in the meV range, such as phonon rotational and vibrational modes in solid substances. Importantly, because the radiation is nonionizing, THz waves are expected to be harmless <cit.>. Therefore, THz waves are suitable for a wide range of imaging applications, especially contact-free security scanning <cit.>. However, in THz real-time scanning applications the high-throughput image processing demands time-consuming computation, which represents a key challenge in practice <cit.>. In the past decade, an impressive range of photonic devices performing analog computing have been proposed, improving information processing speed by several orders of magnitude over their electronic counterparts <cit.>. In particular, optical spatial differentiators are of great interest in imaging applications, being capable of detecting the edges in an entire image with a single shot <cit.>. Recently, we experimentally demonstrated optical edge detection with a plasmonic differentiator operating in the visible region <cit.>. Given the ultrafast and high-throughput features of optical analog computation, it would be quite useful to design and realize optical spatial differentiators working in the THz region.

In this Letter, we propose a graphene-on-grating nanostructure to realize second-order spatial differentiation in the THz region. We demonstrate that, for the normal-incidence case, the reflected field corresponds to the second-order spatial differentiation of the incident field. Such analog computation results from the spatial mode interference between the directly reflected wave and the leakage of two excited surface plasmon polaritons (SPPs) counter-propagating along the graphene sheet. By developing a spatial coupled-mode theory (CMT), we show that the second-order spatial differentiation is realized when the coupling process satisfies the critical coupling condition.
The thickness of the proposed device is less than 0.1λ; the device is thus ultra-compact, owing to the highly confined nature of SPPs on graphene. By numerical simulations, we investigate the performance of the differentiator using Gaussian beams with various waist radii w_0. We show that the proposed device has a broad spatial spectral bandwidth, being able to process Gaussian beams as narrow as w_0=0.68λ with an accuracy higher than 95%.

Fig. <ref>(a) schematically shows the graphene-on-grating structure of the proposed differentiator. It can be fabricated by depositing silicon on a thick gold layer, which is then patterned and etched as a diffraction grating. A graphene monolayer is grown on a copper substrate, coated by PMMA, and then transferred onto the Si diffraction grating <cit.>. Here the gold layer is assumed to be thick enough and perfectly conducting in the THz frequency region.

Such a graphene monolayer supports SPPs for TM-polarized waves (magnetic field perpendicular to the incidence plane), where the collective oscillation of Dirac fermions is in resonance with the electromagnetic field <cit.>. The corresponding dispersion diagram of the SPPs at the air-graphene-Si interface is shown in Fig. <ref>(b). Here the surface conductivity of monolayer graphene is described by the Drude model σ = (e²/(πħ²)) iμ_c/(ω + iγ), where μ_c and γ correspond to the chemical potential and the scattering rate, respectively, and γ = 2π/τ with a finite transport scattering time τ <cit.>. The chemical potential and scattering time of the monolayer graphene are assumed to be μ_c = 0.6 eV and τ = 10^-11 s. The refractive index of the silicon is n_Si = 3.4164. We note that the dispersion line of the graphene SPPs lies far away from the light cone, i.e. β_spp ≫ k_0, indicating that the excited SPPs are strongly confined to the graphene surface.
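These numbers are easy to reproduce. The short Python sketch below evaluates the Drude surface conductivity with the parameters quoted above and estimates the SPP confinement at 5.368 THz; the confinement estimate uses the standard nonretarded sheet-plasmon approximation β_spp ≈ 2iε_0 ε̄ ω/σ (an assumption of this sketch, not a formula from the text), with ε̄ the average permittivity of the surrounding media.

import numpy as np
from scipy.constants import e, hbar, epsilon_0, c

# Parameters of the text: mu_c = 0.6 eV, tau = 1e-11 s, f = 5.368 THz
mu_c = 0.6 * e               # chemical potential in joules
tau = 1e-11
gamma = 2 * np.pi / tau      # scattering rate, as defined in the text
omega = 2 * np.pi * 5.368e12

# Drude surface conductivity of monolayer graphene
sigma = (e**2 / (np.pi * hbar**2)) * 1j * mu_c / (omega + 1j * gamma)

# Nonretarded estimate of the SPP wavevector for a sheet between air and
# silicon (assumed dispersion: beta = 2i * eps0 * eps_avg * omega / sigma)
eps_avg = (1.0 + 3.4164**2) / 2.0
beta_spp = 2j * epsilon_0 * eps_avg * omega / sigma
k0 = omega / c
print(f"Re(beta_spp)/k0 = {beta_spp.real / k0:.1f}")  # strong confinement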
To reveal the physical mechanism of the proposed spatial differentiator, we develop a spatial coupled-mode theory. Here we use the grating coupling method <cit.> to excite the SPPs on graphene, where the period L of the grating is designed to satisfy the phase-matching condition β_spp = 2π/L, with β_spp the SPP wavevector. Fig. <ref>(c) shows the magnetic field at the air-graphene-grating interface for the normal-incidence case, and it indeed exhibits the strong confinement feature of graphene SPPs. In this case, the incident light simultaneously generates two SPP modes that propagate on the graphene surface along the positive and negative x directions, respectively. The phase matching also enables the two counter-propagating SPP modes to couple with each other. Meanwhile, the propagating SPP modes leak out into the air through the phase matching. Therefore, the reflected field distribution results from the interference of three contributions: the direct reflection of the incident wave and the leakage from the two propagating SPP modes. Based on the spatial coupled-mode theory <cit.>, the spatial mode coupling and interference process can be described as

\frac{d}{dx}\begin{pmatrix} a_1(x) \\ a_2(x) \end{pmatrix}
= \left[ i\begin{pmatrix} \beta_{spp} & \beta_{12} e^{2i\frac{2\pi}{L}x} \\ -\beta_{12}^{*} e^{-2i\frac{2\pi}{L}x} & -\beta_{spp} \end{pmatrix}
- \begin{pmatrix} \alpha_l & \alpha_{12} e^{2i\frac{2\pi}{L}x} \\ -\alpha_{12}^{*} e^{-2i\frac{2\pi}{L}x} & -\alpha_l \end{pmatrix}
- \begin{pmatrix} \alpha_0 & 0 \\ 0 & -\alpha_0 \end{pmatrix} \right]
\begin{pmatrix} a_1(x) \\ a_2(x) \end{pmatrix}
+ \begin{pmatrix} \kappa e^{i\frac{2\pi}{L}x} \\ -\kappa e^{-i\frac{2\pi}{L}x} \end{pmatrix} s_+(x),

s_-(x) = e^{i\varphi} s_+(x) + d\, e^{-i\frac{2\pi}{L}x} a_1(x) + d\, e^{i\frac{2\pi}{L}x} a_2(x).

Here we take the origin of the x-coordinate at the middle of the grating slot and the time convention e^{-iωt}, where ω is the angular frequency of the incident wave. s_±(x) and a_1,2(x) are the amplitudes of the incident and reflected magnetic fields and of the two SPP modes, normalized to the x-component of the Poynting vector and to the x-direction energy flow, respectively <cit.>. The phase-shift terms exp(±i(2π/L)x) and exp(±2i(2π/L)x) in Eqs. (<ref>,<ref>) enable the phase matching between the incident wave and the excited SPP modes propagating in the two directions. α_l and α_0 represent the loss rate resulting from the leakage radiation of the SPPs and the intrinsic loss rate from the material loss, and β_12 represents the coupling between the two excited SPPs. e^{iφ} is the background reflection coefficient without excitation of the SPP modes. We note that the background phase term e^{iφ}, the coupling coefficients κ and d, the leakage rate α_l, and the cross-coupling terms β_12 and α_12 are not independent of each other, since they are constrained by energy conservation, mirror symmetry and the time-reversal condition <cit.>. As results of the theory <cit.>, both β_12 and α_12 are real numbers and these parameters are related by

β_12 = β_12^*,  α_l = α_12 = α_12^* = (1/2) d d^*,  κ = d,  e^{iφ} d^* + d = 0.

Eqs. (<ref>,<ref>) further give d = √(2α_l) e^{i(φ/2 − π/2 + nπ)}, where n is an integer determined by the choice of the origin of the x-axis. Note that the spatial coupled-mode theory assumes the strong-confinement condition α_l + α_0 ≪ β_spp <cit.>.

Based on Eqs. (<ref>,<ref>), we obtain the spatial spectral transfer function of the graphene-on-grating structure. We expand the incident and reflected fields into series of plane waves as s_±(x) = ∫ s_±(k_x) e^{ik_x x} dk_x, where s_±(k_x) is the amplitude of each plane wave and k_x represents the x-component of the wavevector. Transforming Eqs. (<ref>,<ref>) into the Fourier domain, the spatial spectral transfer function is obtained as

H(k_x) ≡ s_−(k_x)/s_+(k_x) = e^{iφ} [k_x² + (−2α_l + α_0)α_0 − 2iα_l β_12 + β_12²] / [k_x² + (2α_l + α_0)α_0 + 2iα_l β_12 + β_12²].

We note that in the lossless case α_0 = 0 one has |H(k_x)| = 1, consistent with the energy conservation condition. In particular, when the critical coupling condition α_0 = 2α_l is satisfied and β_12 is small enough to be approximated by zero, the transfer function can be approximated for |k_x| ≪ 2α_l as

H(k_x) ≈ e^{iφ} k_x² / (8α_0²).

Eq. (<ref>) is the spatial-frequency-domain transfer function of a second-order spatial differentiation, with a quadratic dependence on k_x around k_x = 0. Correspondingly, in the spatial domain, the reflected field profile is proportional to the second-order spatial derivative,

s_− = (e^{iφ} / (8α_0²)) d²s_+/dx².

To realize the second-order spatial differentiation we design the depth of the grating slot to satisfy the critical coupling condition α_0 = 2α_l. Here we consider an incident wave at 5.368 THz. The intrinsic material loss α_0 of the SPP is mainly determined by the graphene conductivity and is thus insensitive to the slot size. On the other hand, the leakage loss α_l of the SPP monotonically increases as the slot width and depth increase. Thus, the critical coupling condition can be realized by appropriately designing the slot width and depth of the grating structure.
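The behavior of H(k_x) is easy to inspect numerically. The following sketch evaluates Eq. (<ref>) with the fitted parameter values quoted below (α_l = 0.241k_0, α_0 = 2.01α_l, β_12 = −0.008α_l) and checks the quadratic growth of |H| near k_x = 0.

import numpy as np

def transfer_function(kx, alpha_l, alpha_0, beta_12, phi=0.0):
    """Spatial spectral transfer function of the coupled-mode model."""
    num = kx**2 + (-2*alpha_l + alpha_0)*alpha_0 - 2j*alpha_l*beta_12 + beta_12**2
    den = kx**2 + ( 2*alpha_l + alpha_0)*alpha_0 + 2j*alpha_l*beta_12 + beta_12**2
    return np.exp(1j*phi) * num / den

k0 = 1.0                      # work in units of the free-space wavevector
alpha_l = 0.241 * k0          # fitted values quoted in the text below
alpha_0 = 2.01 * alpha_l
beta_12 = -0.008 * alpha_l

kx = np.linspace(-0.25, 0.25, 501) * k0
H = transfer_function(kx, alpha_l, alpha_0, beta_12)
# Near kx = 0 the magnitude grows quadratically, as required for a
# second-order spatial differentiator.
print(abs(H[250]), abs(H[300]) / kx[300]**2, abs(H[350]) / kx[350]**2)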
Guided by these criteria, we design a graphene-on-grating structure where the Si dielectric layer has a slot width W = 1119 nm, slot depth H = 200 nm, thickness d = 5 μm, and period L = 4 μm.

In order to demonstrate the second-order spatial differentiation, we first compare the spatial spectral transfer function of the proposed graphene-on-grating structure with that of an ideal second-order spatial differentiator (Fig. <ref>). We numerically calculate the transfer function by the finite-element method using the commercial software COMSOL; the results are shown as the blue solid lines in Fig. <ref>. To validate our CMT, we fit the calculated transfer function with Eq. (<ref>) (green dashed lines in Fig. <ref>). The amplitude of the transfer function is well fitted with the parameters α_l = 0.241k_0, α_0 = 2.01α_l and β_12 = −0.008α_l in the range |k_x| < 0.25k_0, where k_0 is the free-space wavevector. We note that the phase of the transfer function exhibits a peak in the vicinity of k_x = 0; it is well fitted by the CMT once the cross-coupling of the two SPP modes described by β_12 is taken into account. We also plot the ideal transfer function of a second-order differentiator from Eq. (<ref>) with the fitted parameter α_0 = 2.01α_l. The comparison with the numerical results confirms that the transfer function of the proposed graphene plasmonic structure exhibits a quadratic dependence on k_x near k_x = 0. Moreover, as our numerical simulations below show, the phase difference between the real and ideal cases has a rather minor impact on the differentiation accuracy.

We now illustrate the second-order spatial differentiation with a Gaussian beam illumination. The incident beam has a TM-polarized magnetic field profile H_y = e^{−(x²+y²)/w_0²} with a waist radius w_0 = 1.3λ and is normally incident, focused on the grating. Figs. <ref>(a) and <ref>(b) show the y component of the magnetic field (H_y) for the incident and reflected beams. Here we calculate the reflected field using a three-dimensional full-vector Fourier optics method, where the reflection coefficient of each plane wave is calculated with COMSOL. Fig. <ref>(b) exhibits three peaks in the reflected beam, which are the characteristic features of a second-order spatial differentiation of the incident Gaussian beam along the x-direction. For a more explicit presentation, we extract the reflected H_y field amplitude along the x-axis at the interface z = 0, as plotted in Fig. <ref>(d); the ideal second-order spatial differentiation result, computed analytically using Eq. (<ref>), is also plotted for comparison. The simulated reflected field amplitude agrees well with the analytical differentiation result. The accuracy of the differentiation is 99.8%, as described by the Pearson correlation coefficient between the simulated and analytically computed reflected field amplitudes along the x-axis at the interface.
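Such correlation figures can be reproduced with a short numerical sketch: the fragment below applies the CMT transfer function of Eq. (<ref>), with the fitted parameters above, to the spectrum of a Gaussian beam via FFT and computes the Pearson correlation with the analytic second derivative. This is a simplified one-dimensional scalar model of the full-vector simulation, intended only to illustrate the procedure.

import numpy as np

lam = 1.0
w0 = 1.3 * lam
k0 = 2 * np.pi / lam
alpha_l = 0.241 * k0
alpha_0 = 2.01 * alpha_l
beta_12 = -0.008 * alpha_l

x = np.linspace(-20 * lam, 20 * lam, 4096)
s_in = np.exp(-x**2 / w0**2)                      # incident Gaussian profile

kx = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
num = kx**2 + (-2*alpha_l + alpha_0)*alpha_0 - 2j*alpha_l*beta_12 + beta_12**2
den = kx**2 + ( 2*alpha_l + alpha_0)*alpha_0 + 2j*alpha_l*beta_12 + beta_12**2
s_out = np.fft.ifft((num / den) * np.fft.fft(s_in))

ideal = np.gradient(np.gradient(s_in, x), x)      # analytic target: d2/dx2
r = np.corrcoef(np.abs(s_out), np.abs(ideal))[0, 1]
print(f"Pearson correlation with ideal second derivative: {r:.3f}")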
Eq. (<ref>) shows that the spatial bandwidth of the differentiator is limited by the leakage loss rate α_l. Here the leakage loss rate α_l = 0.241k_0 is comparable with the free-space wavevector k_0, which indicates that the differentiator has a broad operational bandwidth, capable of resolving changes of the incident field even when the beam size is small. To investigate the spatial resolution of our plasmonic differentiator, we gradually reduce the beam size and simulate the field transformation upon reflection. Fig. <ref> shows the Pearson correlation coefficients with respect to w_0/λ. The device implements second-order spatial differentiation with Pearson correlation coefficients above 99% for incident Gaussian beams with w_0 > λ. As the Gaussian beam waist radius w_0 becomes narrower, the differentiation result degrades. The insets of Figs. <ref>(b), (d) and (f) show the numerically simulated reflected field patterns at the interface z = 0 for w_0/λ = 1.2, 0.9 and 0.6, respectively; correspondingly, Figs. <ref>(a), (c) and (e) show the ideal ones for second-order spatial differentiation. We note that even when the Gaussian beam is as narrow as w_0/λ = 0.68, close to the diffraction limit, the differentiator still works effectively, with a Pearson correlation coefficient above 95%. This feature shows that the plasmonic differentiator has a very high spatial resolution, which is useful for applications in THz image sharpening and edge detection.

In summary, we have demonstrated that the proposed graphene-on-grating nanostructure can perform second-order spatial differentiation on THz waves at normal incidence. The desired differentiation is realized when two counter-propagating SPPs on the graphene surface are excited and the critical coupling condition α_0 = 2α_l is satisfied. The device is ultra-compact and has a broad spatial operational bandwidth, which gives it the ability to process ultra-narrow optical signals, for example Gaussian beams as narrow as w_0 = 0.68λ. Such a miniaturized and broadband photonic second-order spatial differentiator could be useful for high-resolution all-optical signal processing and imaging applications in the THz region.

The authors acknowledge the financial support of the Fundamental Research Funds for the Central Universities (2014QNA3007) and the National Natural Science Foundation of China (NSFC 61675179).

[Hu1995] B. B. Hu and M. C. Nuss, Optics Letters 20, 1716 (1995).
[Jepsen11] P. U. Jepsen, D. G. Cooke, and M. Koch, Laser & Photonics Reviews 5, 124 (2011).
[Ferguson2002] B. Ferguson and X. C. Zhang, Nature Materials 1, 26 (2002).
[Karpowicz2005] N. Karpowicz, H. Zhong, C. L. Zhang, K. I. Lin, J. S. Hwang, J. Z. Xu, and X. C. Zhang, Applied Physics Letters 86, 054105 (2005).
[Shen2005] Y. C. Shen, T. Lo, P. F. Taday, B. E. Cole, W. R. Tribe, and M. C. Kemp, Applied Physics Letters 86, 241116 (2005).
[Kawase2003] K. Kawase, Y. Ogawa, Y. Watanabe, and H. Inoue, Optics Express 11, 2549 (2003).
[tonouchi2007cet] M. Tonouchi, Nature Photonics 1, 97 (2007).
[Kulishov2005] M. Kulishov and J. Azana, Optics Letters 30, 2700 (2005).
[Berger2007] N. K. Berger, B. Levit, B. Fischer, M. Kulishov, D. V. Plant, and J. Azana, Optics Express 15, 371 (2007).
[Park2007] Y. Park, J. Azana, and R. Slavik, Optics Letters 32, 710 (2007).
[bykov2011temporal] D. A. Bykov, L. L. Doskolovich, and V. A. Soifer, Optics Letters 36, 3509 (2011).
[wu2014compact] J. Wu, P. Cao, X. Hu, X. Jiang, T. Pan, Y. Yang, C. Qiu, C. Tremblay, and Y. Su, Optics Express 22, 26254 (2014).
[Preciado2008] M. A. Preciado and M. A. Muriel, Optics Letters 33, 1348 (2008).
[Ferrera2010] M. Ferrera, Y. Park, L. Razzari, B. E. Little, S. T. Chu, R. Morandotti, D. J. Moss, and J. Azana, Nature Communications 1, 29 (2010).
[doskolovich2014spatial] L. L. Doskolovich, D. A. Bykov, E. A. Bezus, and V. A. Soifer, Optics Letters 39, 1278 (2014).
[Bykov2014] D. A. Bykov, L. L. Doskolovich, E. A. Bezus, and V. A. Soifer, Optics Express 22, 25084 (2014).
[Golovastikov2015] N. V. Golovastikov, D. A. Bykov, and L. L. Doskolovich, Optics Letters 40, 3492 (2015).
[Silva2014performing] A. Silva, F. Monticone, G. Castaldi, V. Galdi, A. Alù, and N. Engheta, Science 343, 160 (2014).
[AbdollahRamezani2015] S. AbdollahRamezani, K. Arik, A. Khavasi, and Z. Kavehvash, Optics Letters 40, 5239 (2015).
[Youssefi16] A. Youssefi, F. Zangeneh-Nejad, S. Abdollahramezani, and A. Khavasi, Optics Letters 41, 3467 (2016).
[Chizari2016] A. Chizari, S. Abdollahramezani, M. V. Jamali, and J. A. Salehi, Optics Letters 41, 3451 (2016).
[HwangDavis16] Y. Hwang and T. J. Davis, Applied Physics Letters 109, 181101 (2016).
[ZhangWeixuan2016] W. Zhang, C. Qu, and X. Zhang, Journal of Optics 18, 075102 (2016).
[ZhuTengfeng2017] T. Zhu, Y. Zhou, Y. Lou, H. Ye, M. Qiu, Z. Ruan, and S. Fan, Nature Communications 8, 15391 (2017).
[LiXuesong2009] X. Li, Y. Zhu, W. Cai, M. Borysiak, B. Han, D. Chen, R. D. Piner, L. Colombo, and R. S. Ruoff, Nano Letters 9, 4359 (2009).
[ZhuXiaolong2013] X. Zhu, W. Yan, P. U. Jepsen, O. Hansen, N. A. Mortensen, and S. Xiao, Applied Physics Letters 102, 131101 (2013).
[GarciadeAbajo2014] F. J. Garcia de Abajo, ACS Photonics 1, 135 (2014).
[GaoWeilu2012] W. Gao, J. Shu, C. Qiu, and Q. Xu, ACS Nano 6, 7806 (2012).
[LouPanZhuRuan16] Y. Lou, H. Pan, T. Zhu, and Z. Ruan, Journal of the Optical Society of America B 33, 819 (2016).
[Ruan2014Spatial] Z. Ruan, H. Wu, M. Qiu, and S. Fan, Optics Letters 39, 3587 (2014).
[ruan2015spatial] Z. Ruan, Optics Letters 40, 601 (2015).
[haus1984waves] H. Haus, Waves and Fields in Optoelectronics (Prentice-Hall, 1984).
[fan2003temporal] S. Fan, W. Suh, and J. D. Joannopoulos, Journal of the Optical Society of America A 20, 569 (2003).
GOVERNMENT OF THE RUSSIAN FEDERATION
FEDERAL STATE AUTONOMOUS EDUCATIONAL INSTITUTION OF HIGHER EDUCATION
NATIONAL RESEARCH UNIVERSITY HIGHER SCHOOL OF ECONOMICS
Faculty of Computer Science

APPROVED: Academic supervisor of the degree programme "Mathematical Methods of Optimization and Stochastics", V.G. Spokoiny, 2017

Master's thesis
"Mirror version of similar triangles method for constrained optimization problems"
Degree programme 01.04.02 "Mathematical Methods of Optimization and Stochastics"

Scientific advisor: A.V. Gasnikov, Dr. Sci. (Phys.-Math.), Associate Professor, HSE
Author: A.I. Tyurin, second-year master's student, group M15MOS

Moscow, 2017

Abstract

The science of optimization methods is developing rapidly. In machine learning, computer vision, biology, medicine, engineering and many other areas, optimization methods enjoy enormous popularity and are among the most important tools. One of the main goals of the field is to obtain a "universal" method that works well on all problems, regardless of the smoothness of the problem, the accuracy with which the gradient is computed, and other parameters that characterize the problem. In this work we propose a method that is "universal" for many settings while remaining simple to state and to understand.

§ INTRODUCTION

This work describes in detail a method called the mirror triangle method (MTM), obtained by analogy with the original triangle method <cit.>. The main difference is that in the method of <cit.> the auxiliary step accumulates gradients, as is done, for example, in <cit.>, whereas in our work the auxiliary step has the structure of mirror descent <cit.> <cit.>. The mirror triangle method has been successfully generalized to various optimization problems, and these generalizations are described in this work. The structure of the thesis is as follows: Section <ref> describes the basic method for a general optimization problem; Section <ref> describes a generalization of MTM to a minimax problem with adaptivity; Section <ref> discusses MTM with a (δ, L)-oracle <cit.>; Section <ref> deals with solving an optimization problem with an oracle that returns a biased randomized estimate of the true gradient.

§ MIRROR TRIANGLE METHOD

We first introduce the general setting of smooth convex optimization <cit.>. Let R^n be a finite-dimensional real Euclidean vector space with an arbitrary norm ‖·‖.
Let a function f(x): Q ⟶ R be given. We assume that:
* Q ⊆ R^n is convex and closed;
* f(x) is continuous and convex on Q;
* f(x) is bounded below on Q and attains its minimum f_* at some (not necessarily unique) point x_* ∈ Q;
* ∇f(x) exists on Q and is Lipschitz with constant L, i.e. ‖∇f(x) − ∇f(y)‖_* ≤ L‖x − y‖ ∀x, y ∈ Q, where ‖λ‖_* = max_{‖ν‖ ≤ 1; ν ∈ R^n} ⟨λ, ν⟩ ∀λ ∈ R^n.

We consider the following optimization problem:
f(x) → min_{x ∈ Q}.

We introduce two notions: the prox-function and the Bregman distance <cit.>. A function d(x): Q → R is called a prox-function if d(x) is continuously differentiable on Q and 1-strongly convex with respect to the norm ‖·‖. In the general case the definition of a prox-function is slightly more involved, but for the class of problems we solve this is sufficient. The Bregman distance is
V(x, y) = d(x) − d(y) − ⟨∇d(y), x − y⟩,
where d(x) is an arbitrary prox-function. It is easy to show that V(x, y) ≥ (1/2)‖x − y‖².

In all the algorithms proposed below there is a point from which the method starts: x_0 is the starting point of the algorithm. Denote by R² a number such that
V(x_*, x_0) ≤ R².

Consider the mirror triangle method (MTM). Given: a starting point x_0, the number of steps N, and the Lipschitz constant L of ∇f(x).

Step 0:
y_0 = u_0 = x_0, α_0 = 0, A_0 = α_0.

Step k+1:
α_k+1 = 1/(2L) + √(1/(4L²) + α_k²),
A_k+1 = A_k + α_k+1,
y_k+1 = (α_k+1 u_k + A_k x_k)/A_k+1,
ϕ_k+1(x) = V(x, u_k) + α_k+1[f(y_k+1) + ⟨∇f(y_k+1), x − y_k+1⟩],
u_k+1 = argmin_{x ∈ Q} ϕ_k+1(x),
x_k+1 = (α_k+1 u_k+1 + A_k x_k)/A_k+1.

For this algorithm the following convergence guarantee holds.

Theorem. Let x_* be a solution of (<ref>); then for x_N produced by the MTM algorithm,
f(x_N) − f(x_*) ≤ 4LR²/(N+1)².

This theorem asserts that the proposed method is a fast gradient, proximal method, and only one projection is computed per step. The proof of Theorem <ref> is given in Appendix <ref>.

§ SOLVING A MINIMAX PROBLEM WITH THE ADAPTIVE MTM

Consider the following minimax optimization problem:
f(x) = max_{i = 1,…,M}{f_i(x)} + h(x) → min_{x ∈ Q},
where f_i(x), i = 1,…,M, are convex functions with L-Lipschitz gradients on Q; h(x) is a convex function on Q; and Q is a convex, closed set. As in Section <ref>, we assume that f(x) attains its minimum at a point x_* and that V(x_*, x_0) ≤ R².

Consider the modified mirror triangle method for the minimax problem (<ref>) with adaptive estimation of a "local" Lipschitz constant (a code sketch of this scheme is given below). Given: a starting point x_0, the number of steps N, and a constant L_0 satisfying L_0 ≤ L.

Step 0:
y_0 = u_0 = x_0, L_1 = L_0/2, α_0 = 0, A_0 = α_0.

Step k+1:
Find the largest root α_k+1 of A_k + α_k+1 = L_k+1 α²_k+1,
A_k+1 = A_k + α_k+1,
y_k+1 = (α_k+1 u_k + A_k x_k)/A_k+1,
ϕ_k+1(x) = V(x, u_k) + α_k+1(max_{j = 1,…,M}[f_j(y_k+1) + ⟨∇f_j(y_k+1), x − y_k+1⟩] + h(x)),
u_k+1 = argmin_{x ∈ Q} ϕ_k+1(x),
x_k+1 = (α_k+1 u_k+1 + A_k x_k)/A_k+1.

If the condition
f(x_k+1) ≤ max_{j = 1,…,M}{f_j(y_k+1) + ⟨∇f_j(y_k+1), x_k+1 − y_k+1⟩} + (L_k+1/2)‖x_k+1 − y_k+1‖² + h(x_k+1)
is satisfied, set L_k+2 = L_k+1/2 and go to the next step; otherwise set L_k+1 = 2L_k+1 and repeat the current step.
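As an illustration of how little machinery the method needs, the following Python sketch implements the adaptive scheme above for a single smooth objective (M = 1, h ≡ 0) with the Euclidean prox-function d(x) = (1/2)‖x‖²_2, for which the argmin step reduces to a projected gradient step from u_k. The projection oracle and the toy problem are assumptions of the sketch; for the minimax case one only has to replace f and its gradient by the max of the linearizations plus h.

import numpy as np

def adaptive_mtm(f, grad, x0, L0, n_steps, project=lambda z: z):
    """Adaptive mirror triangle method with the Euclidean prox d(x)=||x||^2/2.
    `project` is a projection oracle onto Q (identity for Q = R^n)."""
    x = u = np.asarray(x0, dtype=float)
    A, Lk = 0.0, L0 / 2.0
    for _ in range(n_steps):
        while True:
            # largest root of L_{k+1} a^2 - a - A_k = 0
            a = (1.0 + np.sqrt(1.0 + 4.0 * Lk * A)) / (2.0 * Lk)
            A_new = A + a
            y = (a * u + A * x) / A_new
            fy, gy = f(y), grad(y)
            u_new = project(u - a * gy)            # argmin of phi_{k+1}
            x_new = (a * u_new + A * x) / A_new
            # the descent check of the algorithm above
            if f(x_new) <= fy + gy @ (x_new - y) + 0.5 * Lk * np.sum((x_new - y)**2):
                u, x, A, Lk = u_new, x_new, A_new, Lk / 2.0
                break
            Lk *= 2.0                               # repeat the current step
    return x

# Toy run: f(x) = ||x||^2/2 constrained to the box [1, 2]^n, crude L0.
x_hat = adaptive_mtm(lambda z: 0.5 * z @ z, lambda z: z,
                     np.full(5, 2.0), L0=0.1, n_steps=100,
                     project=lambda z: np.clip(z, 1.0, 2.0))
print(x_hat)   # converges to the vertex (1, ..., 1)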
Remark.
* The number of inner loops at each step is finite. This follows from the fact that at each pass of the inner loop we double L_k+1, so after finitely many passes L_k+1 becomes larger than L, and then the L-Lipschitzness of the ∇f_j(x) implies that condition (<ref>) is satisfied after finitely many inner passes.
* For all k ≥ 0 we have L_k ≤ 2L. For k = 0 this holds by the condition on L_0. For k ≥ 1 it follows from the fact that we exit the inner loop in which L_k is tuned before L_k can exceed 2L.
* Let us estimate the total number of queries for the values of all M functions at once. Within each step the algorithm solves problem (<ref>) and performs the check (<ref>) at least once; let j_k be the number of additional inner passes at step k, i.e. the number of additional solves of (<ref>) and checks of (<ref>). Then the total number of queries for the values of all the functions f_j(x) equals
∑_{k=1}^N 2(j_k + 1) = ∑_{k=1}^N 2((j_k − 1) + 2) = ∑_{k=1}^N 2(log_2(L_k/L_k−1) + 2) = 4N + 2log_2(L_N/L_0) ≤ 4N + 2log_2(2L/L_0).
The second equality follows from L_k = 2^{j_k} L_k−1/2. Hence, on average, at each step we compute the values of all the functions 4 times. One can show that the gradients of all the functions f_j(x) are computed on average 2 times per step.

Lemma. Suppose the sequence α_k satisfies α_0 = 0, A_k = ∑_{i=0}^k α_i, A_k = L_k α_k², where L_k ≤ 2L ∀k ≥ 0. Then for all k ≥ 1 the following inequality holds:
A_k ≥ (k+1)²/(8L).

Proof. Let k = 1. Then α_1 = L_1 α_1², hence
A_1 = α_1 = 1/L_1 ≥ 1/(2L).
Let k ≥ 2. From L_k+1 α²_k+1 = A_k+1 = A_k + α_k+1 we obtain the quadratic equation
L_k+1 α²_k+1 − α_k+1 − A_k = 0.
Taking its largest root,
α_k+1 = (1 + √(1 + 4L_k+1 A_k))/(2L_k+1).
By induction, suppose inequality (<ref>) holds for k; then
α_k+1 = 1/(2L_k+1) + √(1/(4L²_k+1) + A_k/L_k+1) ≥ 1/(2L_k+1) + √(A_k/L_k+1) ≥ 1/(4L) + (1/√(2L)) · (k+1)/(2√(2L)) = (k+2)/(4L),
where the last inequality uses A_k ≥ (k+1)²/(8L). Therefore α_k+1 ≥ (k+2)/(4L) and
A_k+1 = A_k + α_k+1 ≥ (k+1)²/(8L) + (k+2)/(4L) ≥ (k+2)²/(8L).

Lemma. Let ψ(x) be a convex function and let
y = argmin_{x ∈ Q}{ψ(x) + V(x, z)}.
Then
ψ(x) + V(x, z) ≥ ψ(y) + V(y, z) + V(x, y) ∀x ∈ Q.

Proof. By the optimality criterion, there exists g ∈ ∂ψ(y) such that
⟨g + ∇_y V(y, z), x − y⟩ ≥ 0 ∀x ∈ Q.
Then the inequality
ψ(x) − ψ(y) ≥ ⟨g, x − y⟩ ≥ ⟨∇_y V(y, z), y − x⟩
and the identity
⟨∇_y V(y, z), y − x⟩ = ⟨∇d(y) − ∇d(z), y − x⟩ = d(y) − d(z) − ⟨∇d(z), y − z⟩ + d(x) − d(y) − ⟨∇d(y), x − y⟩ − d(x) + d(z) + ⟨∇d(z), x − z⟩ = V(y, z) + V(x, y) − V(x, z)
complete the proof.

Lemma. For all x ∈ Q,
A_k+1 f(x_k+1) − A_k f(x_k) + V(x, u_k+1) − V(x, u_k) ≤ α_k+1 f(x).

Proof. Introduce the notation l^j_f(x; y) = f_j(y) + ⟨∇f_j(y), x − y⟩. Then

f(x_k+1) ≤_1 max_{j=1,…,M}{l^j_f(x_k+1; y_k+1)} + (L_k+1/2)‖x_k+1 − y_k+1‖² + h(x_k+1)
= max_{j=1,…,M}{l^j_f((α_k+1 u_k+1 + A_k x_k)/A_k+1; y_k+1)} + (L_k+1/2)‖(α_k+1 u_k+1 + A_k x_k)/A_k+1 − y_k+1‖² + h((α_k+1 u_k+1 + A_k x_k)/A_k+1)
≤ max_{j=1,…,M}{f_j(y_k+1) + (α_k+1/A_k+1)⟨∇f_j(y_k+1), u_k+1 − y_k+1⟩ + (A_k/A_k+1)⟨∇f_j(y_k+1), x_k − y_k+1⟩} + (L_k+1 α²_k+1/(2A²_k+1))‖u_k+1 − u_k‖² + (α_k+1/A_k+1)h(u_k+1) + (A_k/A_k+1)h(x_k)
≤ (A_k/A_k+1)(max_{j=1,…,M}{f_j(y_k+1) + ⟨∇f_j(y_k+1), x_k − y_k+1⟩} + h(x_k)) + (α_k+1/A_k+1)(max_{j=1,…,M}{f_j(y_k+1) + ⟨∇f_j(y_k+1), u_k+1 − y_k+1⟩} + h(u_k+1)) + (L_k+1 α²_k+1/(2A²_k+1))‖u_k+1 − u_k‖²
=_2 (A_k/A_k+1)(max_{j=1,…,M}{l^j_f(x_k; y_k+1)} + h(x_k)) + (α_k+1/A_k+1)(max_{j=1,…,M}{l^j_f(u_k+1; y_k+1)} + (1/(2α_k+1))‖u_k+1 − u_k‖² + h(u_k+1))
≤ (A_k/A_k+1)(max_{j=1,…,M}{l^j_f(x_k; y_k+1)} + h(x_k)) + (α_k+1/A_k+1)(max_{j=1,…,M}{l^j_f(u_k+1; y_k+1)} + (1/α_k+1)V(u_k+1, u_k) + h(u_k+1))
≤_3 (A_k/A_k+1) f(x_k) + (α_k+1/A_k+1)(max_{j=1,…,M}{l^j_f(x; y_k+1)} + h(x) + (1/α_k+1)V(x, u_k) − (1/α_k+1)V(x, u_k+1))
≤_4 (A_k/A_k+1) f(x_k) + (α_k+1/A_k+1) f(x) + (1/A_k+1)V(x, u_k) − (1/A_k+1)V(x, u_k+1),

where 1 uses condition (<ref>); 2 uses A_k = L_k α²_k; 3 uses Lemma <ref> with ψ(x) = α_k+1(max_{j=1,…,M}{f_j(y_k+1) + ⟨∇f_j(y_k+1), x − y_k+1⟩} + h(x)) and the convexity of the f_j(x) ∀j; 4 uses the convexity of the f_j(x) ∀j. Multiplying by A_k+1 and rearranging yields the claim.

Theorem. Let x_* be a solution of problem (<ref>). Then
f(x_N) − f(x_*) ≤ 8LR²/(N+1)².

Proof. Summing the inequality of Lemma <ref> over k = 0, …, N−1, we get
A_N f(x_N) − A_0 f(x_0) + V(x, u_N) − V(x, u_0) ≤ (A_N − A_0) f(x),
A_N f(x_N) + V(x, u_N) − V(x, u_0) ≤ A_N f(x).
Take x = x_*. Then
A_N (f(x_N) − f_*) ≤ R²,
f(x_N) − f_* ≤ R²/A_N ≤_1 8LR²/(N+1)²,
where 1 uses Lemma <ref>.

Remark. Up to a constant factor, the convergence rate is the same as in Theorem <ref>. However, first, this method is convenient in that the true Lipschitz constant need not be known: it is tuned automatically while the algorithm runs. Second, in the choice of the step α_k we take into account a "local" Lipschitz constant L_k, which in practice may gradually decrease during the run, making the method faster in practice.

Note that the optimization problem is, generally speaking, nonsmooth; nevertheless, because we exploit the structure of the problem, we obtained the fast convergence rate of a gradient method.

Remark. Suppose we have a problem with functional constraints:
f(x) → min_{x ∈ Q} subject to g_j(x) ≤ 0 ∀j = 1, …, K.
Suppose we know f_*. Then we can write the equivalent problem <cit.>:
max{f(x) − f_*, g_1(x), …, g_K(x)} → min_{x ∈ Q}.
To solve it one can use the adaptive MTM algorithm for the minimax problem.

Remark. At each step of the algorithm we solve the auxiliary problem
ϕ_k+1(x) = V(x, u_k) + α_k+1(max_{j = 1,…,M}[f_j(y_k+1) + ⟨∇f_j(y_k+1), x − y_k+1⟩] + h(x)),
u_k+1 = argmin_{x ∈ Q} ϕ_k+1(x).
Let V(x, y) = (1/2)‖x − y‖²_2, Q = R^n and h(x) = 0. Then the auxiliary problem can be reduced to a quadratic program <cit.>:
argmin_{x, t}{t + (1/2)‖x − u_k‖²_2} subject to f_j(y_k+1) + ⟨∇f_j(y_k+1), x − y_k+1⟩ ≤ t ∀j.
If the number of functions f_j(x) is small and the dimension of the space is not too large, this problem can be solved quickly by an interior-point method.

§ MIRROR TRIANGLE METHOD WITH AN INEXACT (δ, L)-ORACLE

In this section we solve the following problem:
F(x) := f(x) + h(x) → min_{x ∈ Q}.
The conditions on f(x) are the same as in Section <ref>. We assume that x_* is a solution of (<ref>) and V(x_*, x_0) ≤ R². In contrast to Section <ref>, we only have access to a (δ, L)-oracle <cit.>; h(x) is a convex function on Q.

Definition. A (δ, L)-oracle is an oracle that, for a query point y, returns a pair (f_δ(y), ∇f_δ(y)) such that
0 ≤ f(x) − f_δ(y) − ⟨∇f_δ(y), x − y⟩ ≤ (L/2)‖x − y‖² + δ ∀x ∈ Q.

Taking x = y in (<ref>) gives
f_δ(y) ≤ f(y) ≤ f_δ(y) + δ ∀y ∈ Q.

Consider the mirror triangle method with an inexact (δ, L)-oracle. Given: a starting point x_0, the number of steps N, δ, and a constant L_0 satisfying L_0 ≤ L.

Step 0:
y_0 = u_0 = x_0, L_1 = L_0/2, α_0 = 0, A_0 = α_0.

Step k+1:
Find the largest root α_k+1 of A_k + α_k+1 = L_k+1 α²_k+1,
A_k+1 = A_k + α_k+1,
y_k+1 = (α_k+1 u_k + A_k x_k)/A_k+1,
ϕ_k+1(x) = V(x, u_k) + α_k+1(f_δ(y_k+1) + ⟨∇f_δ(y_k+1), x − y_k+1⟩ + h(x)),
u_k+1 = argmin_{x ∈ Q} ϕ_k+1(x),
x_k+1 = (α_k+1 u_k+1 + A_k x_k)/A_k+1.

If the condition
f_δ(x_k+1) ≤ f_δ(y_k+1) + ⟨∇f_δ(y_k+1), x_k+1 − y_k+1⟩ + (L_k+1/2)‖x_k+1 − y_k+1‖² + δ
is satisfied, set L_k+2 = L_k+1/2 and go to the next step; otherwise set L_k+1 = 2L_k+1 and repeat the current step.

Remark. All the properties from Remark <ref> are preserved; note, moreover, that (<ref>) and (<ref>) imply (<ref>) once L_k+1 ≥ L. This guarantees that (<ref>) is satisfied after finitely many inner passes while tuning L_k.

We now prove the main lemma, which almost exactly repeats Lemma <ref>.

Lemma. For all x ∈ Q,
A_k+1 F(x_k+1) − A_k F(x_k) + V(x, u_k+1) − V(x, u_k) ≤ α_k+1 F(x) + 2δ A_k+1.

Proof. Introduce the notation l_f^δ(x; y) = f_δ(y) + ⟨∇f_δ(y), x − y⟩. Then
F(x_k+1) ≤_1 l_f^δ(x_k+1; y_k+1) + (L_k+1/2)‖x_k+1 − y_k+1‖² + h(x_k+1) + 2δ
= l_f^δ((α_k+1 u_k+1 + A_k x_k)/A_k+1; y_k+1) + (L_k+1/2)‖(α_k+1 u_k+1 + A_k x_k)/A_k+1 − y_k+1‖² + h((α_k+1 u_k+1 + A_k x_k)/A_k+1) + 2δ
≤ f_δ(y_k+1) + (α_k+1/A_k+1)⟨∇f_δ(y_k+1), u_k+1 − y_k+1⟩ + (A_k/A_k+1)⟨∇f_δ(y_k+1), x_k − y_k+1⟩ + (L_k+1 α²_k+1/(2A²_k+1))‖u_k+1 − u_k‖² + (α_k+1/A_k+1)h(u_k+1) + (A_k/A_k+1)h(x_k) + 2δ
= (A_k/A_k+1)(f_δ(y_k+1) + ⟨∇f_δ(y_k+1), x_k − y_k+1⟩ + h(x_k)) + (α_k+1/A_k+1)(f_δ(y_k+1) + ⟨∇f_δ(y_k+1), u_k+1 − y_k+1⟩ + h(u_k+1)) + (L_k+1 α²_k+1/(2A²_k+1))‖u_k+1 − u_k‖² + 2δ
=_2 (A_k/A_k+1)(l_f^δ(x_k; y_k+1) + h(x_k)) + (α_k+1/A_k+1)(l_f^δ(u_k+1; y_k+1) + (1/(2α_k+1))‖u_k+1 − u_k‖² + h(u_k+1)) + 2δ
≤ (A_k/A_k+1)(l_f^δ(x_k; y_k+1) + h(x_k)) + (α_k+1/A_k+1)(l_f^δ(u_k+1; y_k+1) + (1/α_k+1)V(u_k+1, u_k) + h(u_k+1)) + 2δ
≤_3 (A_k/A_k+1) F(x_k) + (α_k+1/A_k+1)(l_f^δ(x; y_k+1) + h(x) + (1/α_k+1)V(x, u_k) − (1/α_k+1)V(x, u_k+1)) + 2δ
≤_4 (A_k/A_k+1) F(x_k) + (α_k+1/A_k+1) F(x) + (1/A_k+1)V(x, u_k) − (1/A_k+1)V(x, u_k+1) + 2δ,

where 1 uses condition (<ref>) and (<ref>); 2 uses A_k = L_k α²_k; 3 uses Lemma <ref> with ψ(x) = α_k+1(f_δ(y_k+1) + ⟨∇f_δ(y_k+1), x − y_k+1⟩ + h(x)) and the left-hand side of (<ref>); 4 uses the left-hand side of (<ref>). Multiplying by A_k+1 and rearranging yields the claim.

Theorem. Let x_* be a solution of problem (<ref>). Then
F(x_N) − F(x_*) ≤ 8LR²/(N+1)² + 2Nδ.

Proof. Summing the inequality of Lemma <ref> over k = 0, …, N−1:
A_N F(x_N) − A_0 F(x_0) + V(x, u_N) − V(x, u_0) ≤ (A_N − A_0)F(x) + 2δ ∑_{k=0}^{N−1} A_k+1,
A_N F(x_N) + V(x, u_N) − V(x, u_0) ≤ A_N F(x) + 2δ ∑_{k=0}^{N−1} A_k+1.
Take x = x_* and use that A_k ≤ A_N for all k = 1, …, N, since A_k is by definition a nondecreasing sequence:
A_N (F(x_N) − F_*) ≤ R² + 2N A_N δ,
F(x_N) − F_* ≤ R²/A_N + 2Nδ ≤_1 8LR²/(N+1)² + 2Nδ,
where 1 uses Lemma <ref>.

Remark. Let f(y) be an α-quasi-convex function, i.e. f(y) is nonconvex but satisfies
⟨∇f(y), y − x_*⟩ ≥ α(f(y) − f(x_*)) ∀y ∈ Q.
The only place in Lemma <ref> where convexity is used is transition 4. Instead of convexity it suffices to require α-quasi-convexity with α = 1, and to require x = x_* instead of the condition ∀x ∈ Q in Lemma <ref>, for transition 4 to hold. α-quasi-convex functions have, in particular, the following application <cit.>.

Remark. Similarly to <cit.>, one can obtain a universal method by putting (α_k+1/A_k+1)ϵ in place of δ in (<ref>) in the MTM algorithm.

§ MIRROR TRIANGLE METHOD WITH AN INEXACT SAMPLED STOCHASTIC (δ, L)-ORACLE

We assume that we solve problem (<ref>). We restrict ourselves to the case where the chosen norm is Euclidean.

Definition. A stochastic (δ, L)-oracle is an oracle that, for a query point y, returns a pair (f_δ(y), ∇f_δ(y; ξ)) such that
0 ≤ f(x) − f_δ(y) − ⟨∇f_δ(y), x − y⟩ ≤ (L/2)‖x − y‖² + δ ∀x ∈ Q,
𝔼∇f_δ(y; ξ) = ∇f_δ(y) ∀y ∈ Q,
𝔼 exp(‖∇f_δ(y; ξ) − ∇f_δ(y)‖²_*/D) ≤ exp(1) ∀y ∈ Q.

Define a constant D_Q such that
D_Q ≥ max_{x, y ∈ Q}‖x − y‖.
We assume that D_Q < ∞.
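For intuition, a hypothetical stochastic (δ, L)-oracle can be simulated by adding Gaussian noise to an exact gradient; for Gaussian noise the light-tail condition above holds with a constant D of order σ²n. Both the noise model and this choice of D are assumptions of the sketch, not part of the text.

import numpy as np

class StochasticOracle:
    """Hypothetical stochastic (delta, L)-oracle for f(x) = ||x||^2 / 2:
    returns the exact function value together with the gradient corrupted
    by additive Gaussian noise of per-coordinate deviation sigma."""
    def __init__(self, sigma, rng=None):
        self.sigma = sigma
        self.rng = rng or np.random.default_rng()

    def __call__(self, y):
        noise = self.rng.normal(0.0, self.sigma, size=y.shape)
        return 0.5 * y @ y, y + noise   # (f_delta(y), grad f_delta(y; xi))

oracle = StochasticOracle(sigma=0.1)
f_val, g = oracle(np.ones(4))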
From (<ref>), by convexity (Jensen's inequality) one obtains
exp(𝔼‖∇f_δ(y; ξ) − ∇f_δ(y)‖²_*/D) ≤ 𝔼 exp(‖∇f_δ(y; ξ) − ∇f_δ(y)‖²_*/D) ≤ exp(1),
that is,
𝔼‖∇f_δ(y; ξ) − ∇f_δ(y)‖²_* ≤ D.

Lemma. Let y be a random vector such that y and ξ are independent. Then
* 𝔼∇f_δ(y; ξ) = 𝔼∇f_δ(y);
* 𝔼 exp(‖∇f_δ(y; ξ) − ∇f_δ(y)‖²_*/D) ≤ exp(1).

Proof.
* 𝔼∇f_δ(y; ξ) = 𝔼(𝔼_ξ[∇f_δ(y; ξ) | y]) = 𝔼∇f_δ(y);
* 𝔼 exp(‖∇f_δ(y; ξ) − ∇f_δ(y)‖²_*/D) = 𝔼(𝔼_ξ[exp(‖∇f_δ(y; ξ) − ∇f_δ(y)‖²_*/D) | y]) ≤ 𝔼 exp(1) = exp(1).

In the method proposed below, at every step we estimate the true gradient using a number of samples ∇f_δ(y; ξ_j), j ∈ [1 … m_k+1], via the mini-batch technique:
∇^m_k+1 f_δ(y) = (1/m_k+1) ∑_{j=1}^{m_k+1} ∇f_δ(y; ξ_j).

We list some important corollaries.

Corollary. Since the chosen norm is Euclidean, it is easy to show that
𝔼‖∇^m_k+1 f_δ(y) − ∇f_δ(y)‖²_* ≤ D/m_k+1.

Corollary. Let (f_δ(y), ∇f_δ(y; ξ_i)), i = 1, …, m_k+1, be m_k+1 independent outputs of the stochastic (δ, L)-oracle, let x, y ∈ Q be random vectors with y independent of ξ_i, i = 1, …, m_k+1, let L̃ be a random constant such that L̃ ≥ (3/2)L, and let Ω ≥ √2 − 1 be arbitrary. Then
ℙ(f_δ(x) − f_δ(y) − ⟨∇^m_k+1 f_δ(y), x − y⟩ > (1 + 2Ω + Ω²)·3D/(L̃ m_k+1) + (L̃/2)‖x − y‖² + δ) ≤ exp(−Ω²/2).

Proof. Consider the right-hand side of condition (<ref>):
f(x) − f_δ(y) − ⟨∇f_δ(y), x − y⟩ ≤ (L/2)‖x − y‖² + δ.
Taking (<ref>) into account,
f_δ(x) − f_δ(y) − ⟨∇f_δ(y), x − y⟩ ≤ (L/2)‖x − y‖² + δ,
f_δ(x) − f_δ(y) − ⟨∇^m_k+1 f_δ(y), x − y⟩ ≤ ⟨∇f_δ(y) − ∇^m_k+1 f_δ(y), x − y⟩ + (L/2)‖x − y‖² + δ ≤ ⟨∇f_δ(y) − ∇^m_k+1 f_δ(y), x − y⟩ + (L̃/3)‖x − y‖² + δ.
Using the Fenchel inequality <cit.> (formula (7.6)),
f_δ(x) − f_δ(y) − ⟨∇^m_k+1 f_δ(y), x − y⟩ ≤ (L̃/6)‖x − y‖² + (3/L̃)‖∇^m_k+1 f_δ(y) − ∇f_δ(y)‖²_* + (L̃/3)‖x − y‖² + δ.
Let us estimate the probability that
f_δ(x) − f_δ(y) − ⟨∇^m_k+1 f_δ(y), x − y⟩ > (1 + 2Ω + Ω²)·3D/(L̃ m_k+1) + (L̃/2)‖x − y‖² + δ.
Taking (<ref>) into account, (<ref>) would imply
(3/L̃)‖∇^m_k+1 f_δ(y) − ∇f_δ(y)‖²_* > (1 + 2Ω + Ω²)·3D/(L̃ m_k+1),
which is equivalent to
‖∇^m_k+1 f_δ(y) − ∇f_δ(y)‖²_* > (1 + 2Ω + Ω²)·D/m_k+1.
We use the following fact <cit.>: let γ_1, …, γ_N be independent random vectors such that
𝔼(exp(‖γ_i‖²/σ²)) ≤ exp(1), 𝔼γ_i = 0;
then for all Ω ≥ √2 − 1,
ℙ(‖∑_{i=1}^N γ_i‖ ≥ (1 + Ω)√N σ) ≤ exp(−Ω²/2).
Take γ_i = ∇f_δ(y; ξ_i) − ∇f_δ(y), where y is a nonrandom vector, and σ² = D; taking (<ref>) and (<ref>) into account,
ℙ(‖∑_{j=1}^{m_k+1} (∇f_δ(y; ξ_j) − ∇f_δ(y))‖_* > (1 + Ω)√(m_k+1) √D) ≤ exp(−Ω²/2),
ℙ(‖∇^m_k+1 f_δ(y) − ∇f_δ(y)‖_* > (1 + Ω)√D/√(m_k+1)) ≤ exp(−Ω²/2).
Then, for a random y,
ℙ(‖∇^m_k+1 f_δ(y) − ∇f_δ(y)‖_* > (1 + Ω)√D/√(m_k+1)) = 𝔼[ℙ(‖∇^m_k+1 f_δ(y) − ∇f_δ(y)‖_* > (1 + Ω)√D/√(m_k+1) | y = y)] ≤ 𝔼 exp(−Ω²/2) = exp(−Ω²/2),
and hence
ℙ((3/L̃)‖∇^m_k+1 f_δ(y) − ∇f_δ(y)‖²_* > (1 + 2Ω + Ω²)·3D/(L̃ m_k+1)) ≤ exp(−Ω²/2).
From this inequality, and since (<ref>) implies (<ref>), the corollary follows.

Remark. Taking the expectation of the inequality
f_δ(x) − f_δ(y) − 𝔼⟨∇^m_k+1 f_δ(y), x − y⟩ ≤ (2/L̃)𝔼‖∇^m_k+1 f_δ(y) − ∇f_δ(y)‖²_* + (3L̃/4)‖x − y‖² + δ,
we obtain that (<ref>) implies
f_δ(x) − f_δ(y) − 𝔼⟨∇^m_k+1 f_δ(y), x − y⟩ ≤ 2D/(L̃ m_k+1) + (3L̃/4)‖x − y‖² + δ.

Consider the mirror triangle method with a stochastic (δ, L)-oracle. Let Ω̃ = 1 + 2Ω + Ω². Given: a starting point x_0, the desired accuracy ϵ of the solution, δ, the constant L from the (δ, L)-oracle, and a confidence level β. Take
N = ⌈2√3 √L D_Q/√ϵ⌉, Ω = √(2 ln(N/β)).
Step 0:
y_0 = u_0 = x_0, L_1 = L/2, α_0 = 0, A_0 = α_0.

Step k+1:
Find the largest root α_k+1 of A_k + α_k+1 = L_k+1 α²_k+1,
A_k+1 = A_k + α_k+1,
y_k+1 = (α_k+1 u_k + A_k x_k)/A_k+1,
m_k+1 = ⌈3DΩ̃α_k+1/ϵ⌉,
generate ∇^m_k+1 f_δ(y_k+1),
ϕ_k+1(x) = V(x, u_k) + α_k+1(f_δ(y_k+1) + ⟨∇^m_k+1 f_δ(y_k+1), x − y_k+1⟩ + h(x)),
u_k+1 = argmin_{x ∈ Q} ϕ_k+1(x),
x_k+1 = (α_k+1 u_k+1 + A_k x_k)/A_k+1.

If the condition
f_δ(x_k+1) ≤ f_δ(y_k+1) + ⟨∇^m_k+1 f_δ(y_k+1), x_k+1 − y_k+1⟩ + (L_k+1/2)‖x_k+1 − y_k+1‖² + 3DΩ̃/(L_k+1 m_k+1) + δ
is satisfied, set L_k+2 = L_k+1/2 and go to the next step; otherwise set L_k+1 = 2L_k+1 and repeat the current step.

Remark. In Corollary <ref> the two terms 2DΩ̃/(L̃ m_k+1) and (3L̃/4)‖x − y‖² appear, while in (<ref>) the terms 3DΩ̃/(L_k+1 m_k+1) and (L_k+1/2)‖x_k+1 − y_k+1‖² appear. These pairs of terms are equivalent under the choice L_k+1 = (3/2)L̃; writing them as in (<ref>) is more convenient for the proofs.

Lemma. Let B_N be the event that at least at one of the first N steps of the algorithm condition (<ref>) fails for L_k+1 while L_k+1 ≥ (3/2)L. Then, if Ω = √(2 ln(N/β)), we have ℙ(B_N) ≤ β.

Proof. Taking Corollary <ref> into account, we get
ℙ(B_N) ≤_1 ∑_{k=0}^{N−1} ℙ(f_δ(x_k+1) − f_δ(y_k+1) − ⟨∇^m_k+1 f_δ(y_k+1), x_k+1 − y_k+1⟩ > 3DΩ̃/(L_k+1 m_k+1) + (L_k+1/2)‖x_k+1 − y_k+1‖² + δ) ≤_2 N exp(−Ω²/2),
where 1 is the union (Bonferroni) bound for B_N = ⋃_{i=1}^N B_i, with B_i the event that at step i condition (<ref>) fails while L_i ≥ (3/2)L, and 2 is Corollary <ref>. Since by assumption Ω = √(2 ln(N/β)),
ℙ(B_N) ≤ N exp(−Ω²/2) ≤ β.

Definition. L̄_N = max_{k = 0,…,N−1} L_k+1.

Lemma. Suppose the sequence α_k satisfies α_0 = 0, A_k = ∑_{i=0}^k α_i, A_k = L_k α_k², where {L_k} is the sequence generated by the algorithm. Then with probability 1 − β,
A_k ≥ (k+1)²/(12L) ∀k = 1, …, N.

Proof. We use Lemma <ref>, which says that with probability at most β condition (<ref>) ever fails while L_k ≥ (3/2)L. Since a borderline situation may occur with L_k ∈ ((3/4)L, (3/2)L), the doubling of L_k gives, in the worst case, that with probability at least 1 − β all L_k ≤ 3L. The rest of the proof is analogous to Lemma <ref>.

Remark. Since with probability 1 − β we have L̄_N ≤ 3L, then, as in Remark <ref>, one obtains that with probability 1 − β we compute, on average, the function values 4 times per step and the stochastic gradient ∇^m_k+1 f_δ(y_k+1) 2 times per step.
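The two ingredients of the stochastic step — the mini-batch estimator ∇^m_k+1 f_δ(y) and the batch-size rule m_k+1 = ⌈3DΩ̃α_k+1/ϵ⌉ — are summarized in the following sketch, written against the oracle interface assumed in the previous sketch.

import numpy as np

def minibatch_grad(oracle, y, m):
    """Mini-batch estimate of grad f_delta(y): the average of m independent
    stochastic gradients returned by the oracle."""
    return sum(oracle(y)[1] for _ in range(m)) / m

def batch_size(D, Omega_t, alpha, eps):
    """Batch-size rule m_{k+1} = ceil(3 D Omega_t alpha_{k+1} / eps)."""
    return int(np.ceil(3.0 * D * Omega_t * alpha / eps))

# Usage with the hypothetical oracle sketched earlier:
# m = batch_size(D=1.0, Omega_t=25.0, alpha=0.5, eps=1e-2)
# g = minibatch_grad(oracle, np.ones(4), m)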
Introduce the notation l_f^δ(x; y) = f_δ(y) + ⟨∇^m_k+1 f_δ(y), x − y⟩.

Lemma. For all x ∈ Q,
l_f^δ(x_k+1; y_k+1) + (L_k+1/2)‖x_k+1 − y_k+1‖² + h(x_k+1) ≤ (A_k/A_k+1)(l_f^δ(x_k; y_k+1) + h(x_k)) + (α_k+1/A_k+1)(l_f^δ(x; y_k+1) + h(x) + (1/α_k+1)V(x, u_k) − (1/α_k+1)V(x, u_k+1)).

Proof.
l_f^δ(x_k+1; y_k+1) + (L_k+1/2)‖x_k+1 − y_k+1‖² + h(x_k+1)
= l_f^δ((α_k+1 u_k+1 + A_k x_k)/A_k+1; y_k+1) + (L_k+1/2)‖(α_k+1 u_k+1 + A_k x_k)/A_k+1 − y_k+1‖² + h((α_k+1 u_k+1 + A_k x_k)/A_k+1)
≤ f_δ(y_k+1) + (α_k+1/A_k+1)⟨∇^m_k+1 f_δ(y_k+1), u_k+1 − y_k+1⟩ + (A_k/A_k+1)⟨∇^m_k+1 f_δ(y_k+1), x_k − y_k+1⟩ + (L_k+1 α²_k+1/(2A²_k+1))‖u_k+1 − u_k‖² + (α_k+1/A_k+1)h(u_k+1) + (A_k/A_k+1)h(x_k)
= (A_k/A_k+1)(f_δ(y_k+1) + ⟨∇^m_k+1 f_δ(y_k+1), x_k − y_k+1⟩ + h(x_k)) + (α_k+1/A_k+1)(f_δ(y_k+1) + ⟨∇^m_k+1 f_δ(y_k+1), u_k+1 − y_k+1⟩ + h(u_k+1)) + (L_k+1 α²_k+1/(2A²_k+1))‖u_k+1 − u_k‖²
=_1 (A_k/A_k+1)(l_f^δ(x_k; y_k+1) + h(x_k)) + (α_k+1/A_k+1)(l_f^δ(u_k+1; y_k+1) + (1/(2α_k+1))‖u_k+1 − u_k‖² + h(u_k+1))
≤ (A_k/A_k+1)(l_f^δ(x_k; y_k+1) + h(x_k)) + (α_k+1/A_k+1)(l_f^δ(u_k+1; y_k+1) + (1/α_k+1)V(u_k+1, u_k) + h(u_k+1))
≤_2 (A_k/A_k+1)(l_f^δ(x_k; y_k+1) + h(x_k)) + (α_k+1/A_k+1)(l_f^δ(x; y_k+1) + h(x) + (1/α_k+1)V(x, u_k) − (1/α_k+1)V(x, u_k+1)),
where 1 uses A_k = L_k α²_k, and 2 uses Lemma <ref> with ψ(x) = α_k+1(f_δ(y_k+1) + ⟨∇^m_k+1 f_δ(y_k+1), x − y_k+1⟩ + h(x)).

Lemma. With probability at least 1 − β, for all x ∈ Q and all k ≥ 0,
A_k+1 F(x_k+1) − A_k F(x_k) + V(x, u_k+1) − V(x, u_k) ≤ α_k+1 F(x) + 2δ A_k+1 + (3DΩ̃/(L_k+1 m_k+1)) A_k+1 + α_k+1 ⟨∇^m_k+1 f_δ(y_k+1) − ∇f_δ(y_k+1), x − u_k⟩.

Proof. Note that, with probability at least 1 − β, condition (<ref>) is satisfied after a finite number of passes; this follows from Lemma <ref>. We have
F(x_k+1) ≤_1 l_f^δ(x_k+1; y_k+1) + (L_k+1/2)‖x_k+1 − y_k+1‖² + h(x_k+1) + 3DΩ̃/(L_k+1 m_k+1) + 2δ
≤_2 (A_k/A_k+1)(l_f^δ(x_k; y_k+1) + h(x_k)) + (α_k+1/A_k+1)(l_f^δ(x; y_k+1) + h(x) + (1/α_k+1)V(x, u_k) − (1/α_k+1)V(x, u_k+1)) + 3DΩ̃/(L_k+1 m_k+1) + 2δ,
where 1 uses condition (<ref>) and (<ref>), and 2 uses Lemma <ref>. Further,
F(x_k+1) ≤ (A_k/A_k+1)(f_δ(y_k+1) + ⟨∇^m_k+1 f_δ(y_k+1), x_k − y_k+1⟩ + h(x_k)) + (α_k+1/A_k+1)(f_δ(y_k+1) + ⟨∇^m_k+1 f_δ(y_k+1), x − y_k+1⟩ + h(x) + (1/α_k+1)V(x, u_k) − (1/α_k+1)V(x, u_k+1)) + 3DΩ̃/(L_k+1 m_k+1) + 2δ
= (A_k/A_k+1)(f_δ(y_k+1) + ⟨∇f_δ(y_k+1), x_k − y_k+1⟩ + h(x_k) + ⟨∇^m_k+1 f_δ(y_k+1) − ∇f_δ(y_k+1), x_k − y_k+1⟩) + (α_k+1/A_k+1)(f_δ(y_k+1) + ⟨∇f_δ(y_k+1), x − y_k+1⟩ + h(x) + ⟨∇^m_k+1 f_δ(y_k+1) − ∇f_δ(y_k+1), x − y_k+1⟩ + (1/α_k+1)V(x, u_k) − (1/α_k+1)V(x, u_k+1)) + 3DΩ̃/(L_k+1 m_k+1) + 2δ
≤_1 (A_k/A_k+1)F(x_k) + (α_k+1/A_k+1)(F(x) + (1/α_k+1)V(x, u_k) − (1/α_k+1)V(x, u_k+1)) + 3DΩ̃/(L_k+1 m_k+1) + 2δ + (α_k+1/A_k+1)⟨∇^m_k+1 f_δ(y_k+1) − ∇f_δ(y_k+1), x − y_k+1⟩ + (α_k+1/A_k+1)⟨∇^m_k+1 f_δ(y_k+1) − ∇f_δ(y_k+1), y_k+1 − u_k⟩
= (A_k/A_k+1)F(x_k) + (α_k+1/A_k+1)(F(x) + (1/α_k+1)V(x, u_k) − (1/α_k+1)V(x, u_k+1)) + 3DΩ̃/(L_k+1 m_k+1) + 2δ + (α_k+1/A_k+1)⟨∇^m_k+1 f_δ(y_k+1) − ∇f_δ(y_k+1), x − u_k⟩,
where 1 uses the left-hand side of (<ref>) and the identity A_k(y_k+1 − x_k) = α_k+1(u_k − y_k+1), which follows from (<ref>). Multiplying by A_k+1 yields the claim.

The following fact from <cit.> <cit.> will be useful.

Lemma. Let γ_1, …, γ_k be i.i.d. random variables, let Γ_k and ν_k be deterministic functions of the γ_i, and let c_i be deterministic constants such that
𝔼(Γ_i | γ_1, …, γ_i−1) = 0, Γ_i ≤ c_i ν_i, 𝔼(exp(ν_i²/σ²) | γ_1, …, γ_i−1) ≤ exp(1).
Then
ℙ(∑_{i=1}^k Γ_i ≥ √3 √Ω σ √(∑_{i=1}^k c_i²)) ≤ exp(−Ω) ∀k, ∀Ω ≥ 0.

Theorem. Let x_* be a solution of problem (<ref>). Then with probability 1 − 2β,
F(x_N) − F(x_*) ≤ R²/A_N + 2δN + ϵ + D_Q √(ϵ/A_N).
Proof. Using (<ref>) and (<ref>) in Lemma <ref>, with probability at least 1 − β, for all k ≥ 0,
A_k+1 F(x_k+1) − A_k F(x_k) + V(x, u_k+1) − V(x, u_k) ≤ α_k+1 F(x) + 2δ A_k+1 + α_k+1 ϵ + α_k+1 ⟨∇^m_k+1 f_δ(y_k+1) − ∇f_δ(y_k+1), x − u_k⟩.
Summing these inequalities over k = 0, …, N−1:
A_N F(x_N) − A_0 F(x_0) + V(x, u_N) − V(x, u_0) ≤ (A_N − A_0)F(x) + 2δ ∑_{k=0}^{N−1} A_k+1 + ∑_{k=0}^{N−1} α_k+1 ϵ + ∑_{k=0}^{N−1} α_k+1 ⟨∇^m_k+1 f_δ(y_k+1) − ∇f_δ(y_k+1), x − u_k⟩.
Hence, using V(x, u_N) ≥ 0 ∀x ∈ Q,
A_N F(x_N) − A_N F(x) ≤ V(x, u_0) + 2δ ∑_{k=0}^{N−1} A_k+1 + A_N ϵ + ∑_{k=0}^{N−1} α_k+1 ⟨∇^m_k+1 f_δ(y_k+1) − ∇f_δ(y_k+1), x − u_k⟩.
Take x = x_* and bound each A_k+1 by A_N:
A_N F(x_N) − A_N F(x_*) ≤ V(x_*, u_0) + 2δN A_N + A_N ϵ + ∑_{k=0}^{N−1} α_k+1 ⟨∇^m_k+1 f_δ(y_k+1) − ∇f_δ(y_k+1), x − u_k⟩,
F(x_N) − F(x_*) ≤ R²/A_N + 2δN + ϵ + ∑_{k=0}^{N−1} (α_k+1/A_N) ⟨∇^m_k+1 f_δ(y_k+1) − ∇f_δ(y_k+1), x − u_k⟩.
We apply Lemma <ref> to the last term with γ_i = ξ_i, σ² = D, Γ_i = (α_{k_i+1}/(A_N m_{k_i+1}))⟨∇f_δ(y_{k_i}; ξ_i) − ∇f_δ(y_{k_i}), x − u_{k_i − 1}⟩, c_i = D_Q α_{k_i+1}/(A_N m_{k_i+1}), ν_i = ‖∇f_δ(y_{k_i}; ξ_i) − ∇f_δ(y_{k_i})‖_*, with i ∈ [1, …, ∑_{k=0}^{N−1} m_k+1], where k_i equals k + 1 for all i ∈ [m_k + 1, …, m_k+1]. Choose Ω = ln(1/β) ≤ Ω̃; then with probability at least 1 − 2β,
F(x_N) − F(x_*) ≤ R²/A_N + 2δN + ϵ + √3 √(Ω̃D) √(∑_{k=0}^{N−1} D_Q² α²_k+1/(A_N² m_k+1)).
Taking the batch-size rule (<ref>) into account, m_k+1 ≥ 3DΩ̃α_k+1/ϵ, so
√3 √(Ω̃D) √(∑_{k=0}^{N−1} D_Q² α²_k+1/(A_N² m_k+1)) ≤ D_Q √(∑_{k=0}^{N−1} α_k+1 ϵ/A_N²) = D_Q √(ϵ/A_N),
and therefore
F(x_N) − F(x_*) ≤ R²/A_N + 2δN + ϵ + D_Q √(ϵ/A_N).

Corollary. Let δ ≤ ϵ^{3/2}/(6√3 √L D_Q). Then with probability 1 − 3β,
F(x_N) − F(x_*) ≤ 4ϵ.

Proof. We know from Lemma <ref> that with probability 1 − β the inequality A_N ≥ (N+1)²/(12L) holds; from this and the condition on N we get
R²/A_N ≤ D_Q²/A_N ≤ 12LD_Q²/(2√3√L D_Q/√ϵ + 1)² ≤ D_Q²ϵ/D_Q² = ϵ
and
D_Q √(ϵ/A_N) ≤ D_Q √(ϵ²/D_Q²) = ϵ.
Moreover, with probability 1 − β, by Lemma <ref>,
F(x_N) − F(x_*) ≤ R²/A_N + 2δN + ϵ + D_Q √(ϵ/A_N).
Putting everything together, including the condition on δ (which makes 2δN ≤ ϵ), we get that with probability 1 − 3β,
F(x_N) − F(x_*) ≤ 4ϵ.

Remark. Let us estimate the number M of oracle queries for stochastic gradients. As in Remark <ref>, the number of queries for ∇^m_k+1 f_δ(y_k+1) will be at most 2N + 2log_2(2L̄_N/L_0); with the chosen Ω, L̄_N ≤ 3L with probability 1 − β. For simplicity we will not keep track of the term 2log_2(6L/L_0) in what follows. We obtain that the total number of oracle queries for ∇f_δ(y; ξ) equals
M = 2 ∑_{k=0}^{N−1} m_k+1 ≤ 2N + (6DΩ̃/ϵ) A_N.
If A_N ≤ 2R²/ϵ, then
M = (6DΩ̃/ϵ) A_N ≤ 12DΩ̃R²/ϵ².

From here on, all statements hold with probability at least 1 − 3β.

Remark. Let us estimate the number M of oracle queries for stochastic gradients more carefully. During the run we can monitor D_Q²/A_N; let N̄ + 1 be the minimal number of steps for which D_Q²/A_{N̄+1} ≤ ϵ. Clearly N̄ + 1 ≤ N, and the condition D_Q²/A_{N̄+1} ≤ ϵ is a sufficient condition for reaching an ϵ-solution in function value. As in Remark <ref>, the number of queries for ∇^m_k f_δ(y_k) at step k equals m_k(2 + log(L_k/L_k−1)), and L̄_{N̄+1} ≤ 3L. Therefore the total number of oracle queries for ∇f_δ(y; ξ) equals
M = 2 ∑_{k=1}^{N̄+1} m_k(2 + log(L_k/L_k−1)) = 4 ∑_{k=1}^{N̄+1} m_k + 2 ∑_{k=1}^{N̄+1} m_k log(L_k/L_k−1) ≤ 4 ∑_{k=1}^{N̄+1} m_k + 2 ∑_{j=1}^{N̄+1} m_j ∑_{k=1}^{N̄+1} log(L_k/L_k−1) = (4 + log(L̄_{N̄+1}/L_0)) ∑_{k=1}^{N̄+1} m_k ≤ (4 + log(3L/L_0)) ∑_{k=1}^{N̄+1} m_k.
Consider ∑_{k=1}^{N̄+1} m_k:
∑_{k=0}^{N̄} m_k+1 ≤ N̄ + 1 + (3DΩ̃/ϵ) A_{N̄+1}.
Since
α_{N̄+1} = 1/(2L_{N̄+1}) + √(1/(4L²_{N̄+1}) + A_N̄/L_{N̄+1}) ≤_1 1/L_{N̄+1} + √(A_N̄/L_{N̄+1}) ≤ 2/L_N̄ + √(2A_N̄/L_N̄) = 2/L_N̄ + √2 α_N̄,
where 1 uses √(x + y) ≤ √x + √y, and
α_N̄ = 1/(2L_N̄) + √(1/(4L²_N̄) + A_{N̄−1}/L_N̄) ≥ 1/(2L_N̄),
it follows that
α_{N̄+1} ≤ (4 + √2)α_N̄ ≤ (4 + √2)A_N̄ ≤ 6A_N̄.
Since D_Q²/A_N̄ > ϵ (by the minimality of N̄ + 1),
D_Q²/A_{N̄+1} = D_Q²/(A_N̄ + α_{N̄+1}) ≥ D_Q²/(A_N̄ + 6A_N̄) > ϵ/7,
and
∑_{k=0}^{N̄} m_k+1 ≤ N̄ + 1 + (3DΩ̃/ϵ) A_{N̄+1} ≤ N̄ + 1 + 21DΩ̃D_Q²/ϵ².
Finally,
M ≤ (4 + log(3L/L_0))(N̄ + 1 + 21DΩ̃D_Q²/ϵ²) ≤ (4 + log(3L/L_0))(2√3√L D_Q/√ϵ + 21DΩ̃D_Q²/ϵ² + 1).
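For concreteness, the sketch below evaluates the step count N and the resulting bound on the number of stochastic-oracle calls from the estimates above (bounding R² by D_Q²); the numeric inputs are arbitrary illustration values.

import numpy as np

def oracle_budget(L, D_Q, D, eps, beta):
    """Step count N and the bound M <= 2N + 12*D*Omega_t*D_Q^2/eps^2 on
    stochastic-oracle calls, following the estimates above."""
    N = int(np.ceil(2.0 * np.sqrt(3.0 * L) * D_Q / np.sqrt(eps)))
    Omega = np.sqrt(2.0 * np.log(N / beta))
    Omega_t = 1.0 + 2.0 * Omega + Omega**2
    M = 2 * N + 12.0 * D * Omega_t * D_Q**2 / eps**2
    return N, int(np.ceil(M))

print(oracle_budget(L=10.0, D_Q=1.0, D=1.0, eps=1e-2, beta=1e-3))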
Из (<ref>)A_N + α_N+1 = L_N+1α^2_N+12α_N+1≥ L_N+1α^2_N+1 2/L_N+1≥α_N+1 Так как L_N+1≥L_N/2≥…≥L/2^N+1, то 2^N + 2/L≥2/L_N+1≥α_N+1 Отсюда L D_Q^2/2^N+3≤D_Q^2/2α_N + 1≤D_Q^2/A_N + α_N + 1 = D_Q^2/A_N + 1≤ϵ N≥ln(L D_Q^2/ϵ)§ СПУСК ПО НАПРАВЛЕНИЮ С НЕТОЧНЫМ ОРАКУЛОМПусть f(x) удовлетворяет всем условиям из раздела. <ref>. Определим следующие величины * e_k+1 - случайный вектор на евклидовой сфере радиуса 1, который удовлетворяет следующим условиям: 𝔼 e_k+1^i e_k+1^j = 0 ∀ i,j и 𝔼 (e_k+1^i)^2 = 1/n∀ i, где e_k+1^i - i компонента случайного вектора e_k+1.* δ_k+1∈R - случайный шум, про который только известно, что он ограничен: δ_k+1≤δ.* x^2_L def= L∑_i = 1^nx_i^2.* ∇f(y) def= n(⟨∇ f(y), e_k+1⟩ + δ_k+1) e_k+1 - это аппроксимация производной по направлению e_k+1. Будем предполагать, что оракул вместо ∇ f(y) выдает ∇f(y). Далее вместо произвольной нормы будем использовать _L и будем брать d(x) = 1/2x_L^2. Как следствие можно заметить, что V(x, y) = d(x) - d(y) - ⟨∇ d(y), x - y⟩ = 1/2x - y^2_L. Дополнительно будем считать, что Q = R^n.Рассматривается следующая задача оптимизацииf(x) →min_x ∈R^n Опишем алгоритм зеркального метода треугольника для оракула с производной по направлению. 0 - шаг:x_0 = u_0 = y_0 α_0 = 1 - 1/nA_0 = α_0k+1 - шаг:y_k+1 = α_k+1u_k + A_k x_k/A_k+1 Сгенерировать: ∇ f(y_k+1).α_k+1 = k + 2n/2n^2A_k+1 = A_k + α_k+1 ϕ_k+1(x) = V(x, u_k) + α_k+1⟨∇ f(y_k+1), x ⟩u_k+1 = _x ∈R^nϕ_k+1(x) x_k+1 = y_k+1 + nα_k+1/A_k+1 (u_k+1 - u_k) Пусть для последовательности α_k выполнено α_0 = 1 - 1/n A_k = ∑_i = 0^kα_i α_i = i - 1 + 2n/2n^2 Тогда верны следующие неравенства ∀ k ≥ 1 A_k = (k - 1 + 2n)^2 + k - 1/4n^2 A_k ≥ n^2α_k^2 = (k - 1 + 2n)^2/4n^2 A_k ≤(k - 1 + 2n)^2/2n^2 При доказательстве основной теоремы нам потребуется следующая лемма, которая была доказана по аналогии с <cit.>. ∀ k≥ 0 x_k+1, y_k+1 есть выпуклая комбинация u_0 … u_k+1. Причем x_k+1 = ∑_l=0^k+1γ_k+1^l u_l, где γ_0^0 = 1, γ_1^0 = 0, γ_0^1 = 1 и для k ≥ 1, γ_k+1^l = (1 - α_k+1/A_k+1)γ_k^l, l = 0,…,k - 1α_k+1/A_k+1 (1 - n α_k/A_k) + n (α_k/A_k - α_k+1/A_k+1),l = k n α_k+1/A_k+1,l = k+1 Сначала отметим, что если x_k есть выпуклая комбинация u_0 … u_k, то и y_k+1 - выпуклая комбинация u_0 … u_k, это следует из (<ref>). Докажем для x_k+1 теперь. Так как x_0 = u_0, то γ_0^0 = 1. Рассмотрим для k = 0. x_1 = y_1 + nα_1/A_1 (u_1 - u_0) = u_0 + nα_1/A_1 (u_1 - u_0) = (1 - nα_1/A_1) u_0 + nα_1/A_1 u_1. Так как nα_1/A_1 = 1, x_1 = u_1, γ_1^0 = 0 и γ_1^0 = 1. Пусть данное утверждение верно для k, докажем для k+1. x_k+1 = y_k+1 + nα_k+1/A_k+1 (u_k+1 - u_k) = α_k+1u_k + A_k x_k/A_k+1 + nα_k+1/A_k+1 (u_k+1 - u_k) = = A_k/A_k+1x_k + (α_k+1/A_k+1 - nα_k+1/A_k+1)u_k + nα_k+1/A_k+1u_k+1 Легко заметить, что коэффициенты при векторах в сумме дают 1. Теперь распишем x_k, x_k+1 = A_k/A_k+1x_k + (α_k+1/A_k+1 - nα_k+1/A_k+1)u_k + nα_k+1/A_k+1u_k+1 = = (1 - α_k+1/A_k+1) ∑_l = 0^kγ_k^l u_l + (α_k+1/A_k+1 - nα_k+1/A_k+1)u_k + nα_k+1/A_k+1u_k+1 = = (1 - α_k+1/A_k+1) ∑_l = 0^k-1γ_k^l u_l + (γ_k^k (1 - α_k+1/A_k+1) + (α_k+1/A_k+1 - nα_k+1/A_k+1))u_k + nα_k+1/A_k+1u_k+1 == (1 - α_k+1/A_k+1) ∑_l = 0^k-1γ_k^l u_l + (n α_k/A_k (1 - α_k+1/A_k+1) + (α_k+1/A_k+1 - nα_k+1/A_k+1))u_k + nα_k+1/A_k+1u_k+1 == (1 - α_k+1/A_k+1) ∑_l = 0^k-1γ_k^l u_l + (α_k+1/A_k+1 (1 - nα_k/A_k) + n(α_k/A_k -α_k+1/A_k+1))u_k + nα_k+1/A_k+1u_k+1 Осталось показать, что γ_k+1^l ≥ 0, l = 0,…,k+1. Легко увидеть, что это верно для γ_k+1^l, l = 0,…,k-1 и γ_k+1^k+1. 
γ_k+1^k = α_k+1/A_k+1 (1 - nα_k/A_k) + n(α_k/A_k -α_k+1/A_k+1), так как α_k/A_k≥α_k+1/A_k+1 и α_k/A_k≤1/n для k ≥ 1, получаем γ_k+1^k≥ 0. ∀ u ∈R^n выполнено α_k+1⟨∇ f(y_k+1), u_k - u⟩≤ A_k+1(f(y_k+1) - f(x_k+1)) + V(u, u_k) - V(u, u_k+1) + + A_k+1/L⟨∇ f(y_k+1), δ_k+1 e_k+1⟩ + A_k+1/Lδ_k+1^2 α_k+1⟨∇ f(y_k+1), u_k - u⟩ == α_k+1⟨∇ f(y_k+1), u_k - u_k+1⟩ + α_k+1⟨∇ f(y_k+1), u_k+1 - u⟩≤_1 ≤α_k+1⟨∇ f(y_k+1), u_k - u_k+1⟩ + ⟨ -∇ V(u_k+1, u_k), u_k+1 - u ⟩ == α_k+1⟨∇ f(y_k+1), u_k - u_k+1⟩ + V(u, u_k) - V(u, u_k+1) - V(u_k+1, u_k) ≤ ≤α_k+1⟨∇ f(y_k+1), u_k - u_k+1⟩ + V(u, u_k) - V(u, u_k+1) - 1/2u_k - u_k+1^2_L ≤_2 ≤ A_k+1⟨∇ f(y_k+1), y_k+1 - x_k+1⟩ + V(u, u_k) - V(u, u_k+1) - - A_k+1^2/2n^2α_k+1^2y_k+1 - x_k+1^2_L+ A_k+1/L⟨∇ f(y_k+1), δ_k+1 e_k+1⟩ + A_k+1/Lδ_k+1^2 ≤ ≤ A_k+1(⟨∇ f(y_k+1), y_k+1 - x_k+1⟩ - 1/2y_k+1 - x_k+1^2_L) + V(u, u_k) - V(u, u_k+1) + + A_k+1/L⟨∇ f(y_k+1), δ_k+1 e_k+1⟩ + A_k+1/Lδ_k+1^2 ≤_3 ≤ A_k+1(f(y_k+1) - f(x_k+1)) + V(u, u_k) - V(u, u_k+1) + + A_k+1/L⟨∇ f(y_k+1), δ_k+1 e_k+1⟩ + A_k+1/Lδ_k+1^2 1 - из (<ref>)2 - из:Так как ϕ_k+1(x) сильно выпуклая и оптимизация происходит на R^n, то∇ϕ_k+1(u_k+1) = 0 u_k+1 = u_k - α_k+1/L∇f(y_k+1) и y_k+1 - x_k+1 = nα_k+1/A_k+1 (u_k - u_k+1) == nα_k+1^2/A_k+1L∇f(y_k+1) = = n^2α_k+1^2/A_k+1L (⟨∇ f(y_k+1), e_k+1⟩+ δ_k+1)e_k+1 Отсюда α_k+1⟨∇ f(y_k+1), u_k - u_k+1⟩ == α_k+1/L⟨ n (⟨∇ f(y_k+1), e_k+1⟩ + δ_k+1) e_k+1, α_k+1 n (⟨∇ f(y_k+1), e_k+1⟩+ δ_k+1)e_k+1⟩ == α_k+1^2 n^2/L(⟨∇ f(y_k+1), e_k+1⟩ + δ_k+1)^2 ⟨ e_k+1, e_k+1⟩ == α_k+1^2 n^2/L⟨∇ f(y_k+1), e_k+1⟩(⟨∇ f(y_k+1), e_k+1⟩ + δ_k+1) + + α_k+1^2 n^2/Lδ_k+1(⟨∇ f(y_k+1), e_k+1⟩ + δ_k+1)== α_k+1^2 n^2/L⟨∇ f(y_k+1), (⟨∇ f(y_k+1), e_k+1⟩ + δ_k+1)e_k+1⟩ + + α_k+1^2 n^2/Lδ_k+1(⟨∇ f(y_k+1), e_k+1⟩ + δ_k+1)== A_k+1⟨∇ f(y_k+1), y_k+1 - x_k+1⟩ + α_k+1^2 n^2/L⟨∇ f(y_k+1), δ_k+1 e_k+1⟩ + α_k+1^2 n^2/Lδ_k+1^2 ≤ ≤ A_k+1⟨∇ f(y_k+1), y_k+1 - x_k+1⟩ + A_k+1/L⟨∇ f(y_k+1), δ_k+1 e_k+1⟩ + A_k+1/Lδ_k+1^2 3 - из ЛипщевостиПусть 𝔼_k - условное математическое ожидание по k итерации относительно 1, ..., k-1 итерации. R_k = u_k - u_L, M_k = ∇ f(y_k+1)_2. 
∀ u ∈R^n выполнено α_k+1⟨∇ f(y_k+1), u_k - u⟩≤ A_k+1(f(y_k+1) - 𝔼_k+1 f(x_k+1)) + V(u, u_k) - - 𝔼_k+1 V(u, u_k+1) + A_k+1/Lδ^2 + A_k+1δM_k/L√(n) + α_k+1δ√(n)/√(L) R_k Возьмем 𝔼_k+1 от обеих частей неравенства леммы <ref> 𝔼_k+1α_k+1⟨∇ f(y_k+1), u_k - u⟩≤𝔼_k+1A_k+1(f(y_k+1) - f(x_k+1)) + + 𝔼_k+1V(u, u_k) - 𝔼_k+1V(u, u_k+1) + A_k+1/L𝔼_k+1⟨∇ f(y_k+1), δ_k+1 e_k+1⟩ + A_k+1/L𝔼_k+1δ_k+1^2 Воспользуемся тем, что 𝔼_k+1⟨ n⟨∇ f(y_k+1), e_k+1⟩ e_k+1, u_k - u⟩ = n⟨𝔼_k+1⟨∇ f(y_k+1), e_k+1⟩ e_k+1, u_k - u⟩ =_1 = ⟨∇ f(y_k+1), u_k - u⟩ 1 - из определения <ref>.1 Тогда α_k+1⟨∇ f(y_k+1), u_k - u⟩ + 𝔼_k+1α_k+1⟨ nδ_k+1 e_k+1 , u_k - u⟩≤ ≤ A_k+1(f(y_k+1) - 𝔼_k+1f(x_k+1)) + V(u, u_k) - 𝔼_k+1V(u, u_k+1) + + A_k+1/L𝔼_k+1⟨∇ f(y_k+1), δ_k+1 e_k+1⟩ + A_k+1/L𝔼_k+1δ_k+1^2 Используя условия из определения <ref> на e_k+1 и δ_k+1, можно показать, что 𝔼_k+1⟨∇ f(y_k+1), δ_k+1 e_k+1⟩≤δ𝔼_k+1|⟨∇ f(y_k+1),e_k+1⟩| = δ𝔼_k+1√(⟨∇ f(y_k+1),e_k+1⟩^2)≤ ≤δ√(𝔼_k+1⟨∇ f(y_k+1), e_k+1⟩^2) = δ√((∇ f(y_k+1))^T 𝔼_k+1 e_k+1 e_k+1^T ∇ f(y_k+1)) = = δ/√(n)∇ f(y_k+1)_2 = δM_k/√(n) Аналогично 𝔼_k+1⟨ nδ_k+1 e_k+1 , u_k - u⟩≥ -δ n 𝔼_k+1|⟨ e_k+1 , u_k - u⟩| ≥ ≥ -δ n √(𝔼_k+1⟨ e_k+1 , u_k - u⟩^2) = -δ√(n)u_k - u_2 = -δ√(n)/√(L) R_k В конечном счете получим, что α_k+1⟨∇ f(y_k+1), u_k - u⟩≤ A_k+1(f(y_k+1) - 𝔼_k+1f(x_k+1)) + + V(u, u_k) - 𝔼_k+1V(u, u_k+1) + A_k+1/Lδ^2 + A_k+1δM_k/L√(n) + α_k+1δ√(n)/√(L) R_k ∀ u ∈R^n выполнено A_k+1𝔼_k+1f(x_k+1) - A_k f(x_k) + 𝔼_k+1V(u, u_k+1) - V(u, u_k) ≤α_k+1f(u)+ + A_k+1/Lδ^2 + A_k+1δM_k/L√(n) + α_k+1δ√(n)/√(L) R_k α_k+1(f(y_k+1) - f(u)) ≤ ≤α_k+1⟨∇ f(y_k+1), y_k+1 - u⟩ = = α_k+1⟨∇ f(y_k+1), y_k+1 - u_k⟩ + α_k+1⟨∇ f(y_k+1), u_k - u⟩ =_1= A_k⟨∇ f(y_k+1), x_k - y_k+1⟩ + α_k+1⟨∇ f(y_k+1), u_k - u⟩≤ ≤ A_k(f(x_k) - f(y_k+1)) + α_k+1⟨∇ f(y_k+1), u_k - u⟩≤_2 ≤ A_k(f(x_k) - f(y_k+1)) + A_k+1(f(y_k+1) - 𝔼_k+1f(x_k+1)) + + V(u, u_k) - 𝔼_k+1V(u, u_k+1) + A_k+1/Lδ^2 + A_k+1δM_k/L√(n) + α_k+1δ√(n)/√(L) R_k 1 - из (<ref>) 2 - из Следствия <ref> То есть α_k+1(f(y_k+1) - f(u)) ≤ ≤ A_k(f(x_k) - f(y_k+1)) + A_k+1(f(y_k+1) - 𝔼_k+1f(x_k+1)) + + V(u, u_k) - 𝔼_k+1V(u, u_k+1) + A_k+1/Lδ^2 + A_k+1δM_k/L√(n) + α_k+1δ√(n)/√(L) R_k Отсюда получаем утверждение леммы. 1/2P_0^2 = 1/2R_0^2 + (1 - 1/n) (f(x_0) - f_*)Пусть ϵ фиксирована и выполнено N = √(2)nP_0/√(ϵ) + 1 - 2n δ≤min{ϵ^3/4√(L)/4√(2)√(nP_0), ϵ^3/2√(L)/96√(n)P_0^2},тогда 𝔼f(x_N) - f(x_*) ≤ 3ϵ Чтобы теорема была корректна, требуется, чтобы N = √(2)nP_0/√(ϵ) + 1 - 2n≥ 1. Рассмотрим случай, когда √(2)nP_0/√(ϵ) + 1 - 2n≤ 0, это эквивалентно√(2)nP_0/√(ϵ) + 1 - 2n ≤ 0Далее√(ϵ)≥√(2)nP_0/2n - 1≥√(2)P_0/2 ϵ≥1/2P_0^2Выпишем условие Липшица для f(x)f(x_0) - f(x_*) ≤⟨∇ f(x_*), x_0 - x_*⟩ + 1/2x_0 - x_*_L^2 ≤1/2R_0^2 ≤1/2P_0^2 ≤ϵЕсли N ≤ 0, то x_0 является ϵ-решением задачи оптимизации, поэтому далее будем считать, что N ≥ 1. Для начала докажем следующее вспомогательное утверждение. 1/2𝔼 R_K^2 ≤ P_0^2 ∀ K ≤ N Для K = 0 это верно 1/2𝔼R_0^2 = 1/2R_0^2 ≤1/2P_0^2 Воспользуемся следующими 2 фактами, по индукции: *1/2(𝔼R_k)^2 ≤1/2𝔼R_k^2 ≤ P_0^2; 𝔼R_k ≤√(2)P_0* Так как оптимизация происходит на R^n, то ∇ f(x_*) = 0, поэтомуM_k = ∇ f(y_k+1) = ∇ f(y_k+1) - ∇ f(x_*) _2 ≤√(L)y_k+1 - x_*_L ≤_1 ≤√(L)∑_k=0^K - 1 q_k R_k ∑_k=0^K-1 q_k = 11 - следует из Леммы <ref>В конечном счете𝔼M_k ≤√(2L) P_0 Докажем далее по индукции. Из леммы <ref> возьмем от обеих частей неравенства полное математическое ожидание и просуммируем все неравенства по k = 0, ..., K - 1 и воспользуемся (<ref>), (<ref>). 
Зафиксируем u = x_*.A_K𝔼f(x_K) - A_0 f(x_0) + 𝔼V(x_*, u_K) - V(x_*, u_0) ≤ (A_K - A_0)f_* + + ∑_k = 0^K - 1A_k+1/Lδ^2 + ∑_k = 0^K - 1A_k+1δ√(2)P_0/√(Ln) + ∑_k = 0^K - 1α_k+1√(2)√(n)δ P_0/√(L)Так как α_k ≤α_K и A_k ≤ A_K ∀ k ≤ KA_K𝔼f(x_K) - A_0 f(x_0) + 𝔼V(x_*, u_K) - V(x_*, u_0) ≤ (A_K - A_0)f_* + + KA_K/Lδ^2 + KA_Kδ√(2)P_0/√(Ln) + Kα_K√(2)√(n)δ P_0/√(L) Так как 2A_K = (K - 1 + 2n)^2 + K - 1/2n^2≥(K - 1 + 2n)^2/2n^2≥2n(K - 1 + 2n)/2n^2≥α_K n.A_K𝔼f(x_K) - A_0 f(x_0) + 𝔼V(x_*, u_K) - V(x_*, u_0) ≤ (A_K - A_0)f_* + KA_K/Lδ^2 + KA_Kδ√(2)P_0/√(Ln) + KA_Kδ2√(2) P_0/√(Ln)Из того, что V(x_*, u_K) ≥ 0 и 1/2P_0^2 = 1/2R_0^2 + (1 - 1/n) (f(x_0) - f_*) = 1/2u_0 - x_*_L^2 + (1 - 1/n) (f(x_0) - f_*) = V(x_*, u_0) + (1 - 1/n) (f(x_0) - f_*)1/2𝔼 R_K^2 ≤1/2P_0^2 + KA_K/Lδ^2 + KA_Kδ3√(2)P_0/√(Ln) Используя то, что K ≤ N = √(2)nP_0/√(ϵ) + 1 - 2n≤√(2)nP_0/√(ϵ)A_K ≤ A_N ≤(N - 1 + 2n)^2/2n^2≤(√(2)nP_0/√(ϵ) + 1)^2/2n^2≤4P_0^2/ϵ Два последних слагаемых должны быть меньше или равны 1/4P_0^2, поэтомуmin{P_0√(L)/2√(NA_N), √(Ln) P_0^2/12√(2) A_N N P_0}≥ ≥min{P_0√(L)/2√((√(2)nP_0/√(ϵ))(4P_0^2/ϵ)),√(Ln) P_0/12√(2)(√(2)nP_0/√(ϵ)) (4P_0^2/ϵ) } = = min{ϵ^3/4√(L)/4√(2)√(nP_0),ϵ^3/2√(L)/96√(n)P_0^2} = δ Взяв δ таким образом, мы в конечном счете получим условие леммы.Докажем основную теорему Аналогично, из леммы <ref> возьмем от обеих частей неравенства полное математическое ожидание и просуммируем все неравенства по k = 0, ..., N - 1 и воспользуемся (<ref>), (<ref>). A_N𝔼f(x_N) - A_0 f(x_0) + 𝔼V(u, u_N) - V(u, u_0) ≤ (A_N - A_0)f_* + + ∑_k = 0^K - 1A_k+1/Lδ^2 + ∑_k = 0^K - 1A_k+1δ√(2)P_0/√(Ln) + ∑_k = 0^K - 1α_k+1√(2)√(n)δ P_0/√(L) Возьмем u = x_*. Так как 𝔼V(u, u_N) ≥ 0, α_k ≤α_K, A_k ≤ A_K ∀ k ≤ K и 2A_K ≥α_K n. A_N (𝔼f(x_N) - f_*) ≤ P_0^2 + KA_K/Lδ^2 + KA_Kδ3√(2)P_0/√(Ln) Так как δ≤min{P_0√(L)/2√(NA_N), √(Ln) P_0^2/12√(2) A_N N P_0} 𝔼f(x_N) - f_* ≤P_0^2/A_N +1/4P_0^2/A_N + 1/4P_0^2/A_N N выбиралось таким образом, чтобы 1/2P_0^2/A_N≤ϵ, поэтому 𝔼f(x_N) - f_* ≤ 3ϵ Отметим два практически важных случая:* Предположим, что e_k+1 распределены равномерно на ортах, то есть равновероятно разыгрывается случайная координата i ∈ [1 … n], тогда на каждой итерации e_k+1 выбирается следующим образомe_k+1^j =1, j = i 0,j ≠ i Данный метод соответствует координатному методу, в самом деле (пусть δ = 0):∇f(y) = n(⟨∇ f(y), e_k+1⟩) e_k+1 = n ∑_j = 1^n∂ f(y)/∂ y_j e_k+1^j e_k+1 = n ∂ f(y)/∂ y_i e_k+1 То есть{∇f(y)}_j = n∂ f(y)/∂ y_i, j = i 0,j ≠ i* Рассмотрим другой, не менее важный пример, связанный с безградиентным методом. Будем предполагать, что у нас нет доступа к градиенту функции, поэтому мы попробуем оценить истинный градиент с помощью разностной аппроксимации следующим образом∇ f(x) = n/τ((f(x+τ e_k+1) + δ^1_k+1) - (f(x) + δ^2_k+1)) e_k+1,где e_k+1 - случайный вектор равномерно распределенный на сфере. Если бы δ^1_k+1 = δ^2_k+1 = 0, то мы смогли просто устремить τ к нулю. Но на практике все вычисления функций происходит с некоторой точностью, поэтому и возникают ненулевые слагаемые δ^1_k+1 и δ^2_k+1, о которых только известно, что они ограничены δ (например, это может быть машинной точностью ЭВМ).Приведем данную аппроксимацию градиента к стандартному виду, чтобы могли применить Теорему <ref>.∇ f(x) = n/τ((f(x+τ e_k+1) + δ^1_k+1) - (f(x) + δ^2_k+1)) e_k+1 == n ⟨∇ f(x), e_k+1⟩ e_k+1 + n/τ(f(x+τ e_k+1) - f(x) - τ⟨∇ f(x), e_k+1⟩ ++ δ^1_k+1 - δ^2_k+1)e_k+1 Возьмем δ_k+1 = 1/τ(f(x+τ e_k+1) - f(x) - τ⟨∇ f(x), e_k+1⟩ + δ^1_k+1 - δ^2_k+1). 
Оценим δ_k+1:δ_k+1 = 1/τ(f(x+τ e_k+1) - f(x) - τ⟨∇ f(x), e_k+1⟩ + δ^1_k+1 - δ^2_k+1)≤ ≤1/τf(x+τ e_k+1) - f(x) - τ⟨∇ f(x), e_k+1⟩ + 1/τδ^1_k+1 - δ^2_k+1Из Липшивости и выпуклости 0 ≤ f(x+τ e_k+1) - f(x) - τ⟨∇ f(x), e_k+1⟩≤L τ^2/2, поэтомуδ_k+1≤L τ^2/2 τ + 2 δ/τ = L τ/2 + 2 δ/τОбозначим δ̂ = L τ/2 + 2 δ/τ. Чтобы выполнялась Теорема <ref>, нужно, чтобы выполнялось следующее условиеδ̂≤min{ϵ^3/4√(L)/4√(2)√(nP_0), ϵ^3/2√(L)/96√(n)P_0^2} Проминимизируем δ̂ по τ, тогда δ̂ = 2 √(L δ) и τ = 2√(δ/L). То есть достаточно взять τ = 2√(δ/L) и δ: δ̂ = 2 √(L δ)≤min{ϵ^3/4√(L)/4√(2)√(nP_0), ϵ^3/2√(L)/96√(n)P_0^2} δ≤min{ϵ^3/2/64√(2) n P_0, ϵ^3/36864 n P_0^4} То есть, чтобы обеспечить сходимость метода, ошибка δ должна быть порядка 𝒪(ϵ^3/n), а τ - 𝒪(ϵ^3/2/√(n)).§ ЗАКЛЮЧЕНИЕ В данной работе были представлены модификации зеркального метода треугольника из Раздела <ref>. Стоит отметить, что базовый метод из Раздела <ref> был независимо предложен ранее в <cit.>, но незначительные изменения базового алгоритма, проделанные нами, позволяют получать оценки быстрого градиентного метода для различных задач, в частности, нам удалось получить алгоритм адаптивного быстрого градиентного метода для задачи минимакса, далее довольно просто обобщить метод для (δ, L)-оракула. К этому всему нам удалось представить метод для случая, когда вместо градиента у нас имеется некоторая случайная оценка с шумом, в конечном счете удалось получить ограничения на шум, при которых метод имеет скорость сходимости быстрого градиентного метода. plain§ ДОКАЗАТЕЛЬСТВО ДЛЯ ЗЕРКАЛЬНОГО МЕТОДА ТРЕУГОЛЬНИКА Пусть для последовательности α_k выполнено A_k = ∑_i = 0^kα_i α_0 = 0 A_k = Lα_k^2 Тогда верно следующее рекуррентное соотношение ∀ k ≥ 0 α_k+1 = 1/2L + √(1/4L^2 + α_k^2) и ∀ k ≥ 1 α_k≥k+1/2L A_k ≥(k+1)^2/4L Lα^2_k+1 = A_k+1 Lα^2_k+1 = A_k + α_k Lα^2_k+1 - α_k+1 - A_k = 0 Решая данное квадратное уравнение получаем, что α_k+1 = 1 ±√( 1 + 4LA_k)/2L α_k+1 = 1/2L + √(1/4L^2 + A_k/L) = 1/2L + √(1/4L^2 + α_k^2) Если k = 0, то получаем, что α_1 = 1/L и A_1 ≥1/L, база индукции верна. 
Пусть данная лемма верна для k, докажем для k+1: α_k+1 = 1/2L + √(1/4L^2 + α_k^2)≥1/2L + α_k Из того, что α_k≥k+1/2L, получаем: α_k+1≥k + 2/2L и A_k+1 = L α_k+1^2 ≥(k + 2)^2/4L ∀ u ∈ Q выполнено α_k+1⟨∇ f(y_k+1), u_k - u⟩≤ A_k+1(f(y_k+1) - f(x_k+1)) + V(u, u_k) - V(u, u_k+1) α_k+1⟨∇ f(y_k+1), u_k - u⟩ == α_k+1⟨∇ f(y_k+1), u_k - u_k+1⟩ + α_k+1⟨∇ f(y_k+1), u_k+1 - u⟩≤_1 ≤α_k+1⟨∇ f(y_k+1), u_k - u_k+1⟩ + ⟨ -∇_u_k+1 V(u_k+1, u_k), u_k+1 - u ⟩ == α_k+1⟨∇ f(y_k+1), u_k - u_k+1⟩ + V(u, u_k) - V(u, u_k+1) - V(u_k+1, u_k) ≤ ≤α_k+1⟨∇ f(y_k+1), u_k - u_k+1⟩ + V(u, u_k) - V(u, u_k+1) - 1/2u_k - u_k+1^2 =_2 = A_k+1⟨∇ f(y_k+1), y_k+1 - x_k+1⟩ + V(u, u_k) - V(u, u_k+1) - A_k+1^2/2α_k+1^2y_k+1 - x_k+1^2 == A_k+1(⟨∇ f(y_k+1), y_k+1 - x_k+1⟩ - L/2y_k+1 - x_k+1^2) + V(u, u_k) - V(u, u_k+1) ≤_3 ≤ A_k+1(f(y_k+1) - f(x_k+1)) + V(u, u_k) - V(u, u_k+1)1 - из условия оптимальности (<ref>)2 - из (<ref>) и (<ref>)3 - условие Липшица ∀ u ∈ Q выполнено A_k+1 f(x_k+1) - A_k f(x_k) + V(u, u_k+1) - V(u, u_k) ≤α_k+1f(u) α_k+1(f(y_k+1) - f(u)) ≤ ≤α_k+1⟨∇ f(y_k+1), y_k+1 - u⟩ == α_k+1⟨∇ f(y_k+1), y_k+1 - u_k⟩ + α_k+1⟨∇ f(y_k+1), u_k - u⟩ =_1 = A_k⟨∇ f(y_k+1), x_k - y_k+1⟩ + α_k+1⟨∇ f(y_k+1), u_k - u⟩≤ ≤ A_k(f(x_k) - f(y_k+1)) + α_k+1⟨∇ f(y_k+1), u_k - u⟩≤_2 ≤ A_k(f(x_k) - f(y_k+1)) + A_k+1(f(y_k+1) - f(x_k+1)) + V(u, u_k) - V(u, u_k+1)== α_k+1f(y_k+1) + A_k f(x_k) - A_k+1 f(x_k+1) + V(u, u_k) - V(u, u_k+1)1 - из (<ref>)2 - из Леммы <ref> f(x_N) - f(x_*) ≤4LR^2/(N+1)^2 Просуммируем нер-во из леммы <ref> по k = 0, ..., N - 1 A_N f(x_N) - A_0 f(x_0) + V(u, u_N) - V(u, u_0) ≤ (A_N - A_0)f(u) A_N f(x_N) + V(u, u_N) - V(u, u_0) ≤ A_Nf(u) Возьмем u = x_*, воспользуемся тем, что V(x_*, u_N) ≥ 0 и u_0 = x_0, тогда A_N (f(x_N) - f_*) ≤ R^2 {def= V(x_*, x_0)} Рассмотрим неравенствоA_N f(x_N) + V(u, u_N) - V(u, u_0) ≤ A_Nf(u) Возьмем u = x_*, воспользуемся тем, что f(x_N) ≥ f(x_*), тогда V(x_*, u_N) ≤ V(x_*, u_0) То есть последовательность u_N ограничена. 1/2x_* - u_N^2 ≤ V(x_*, u_N) ≤ R^2 По индукции: 1/2x_* - x_k+1^2 = 1/2x_* - α_k+1u_k+1 + A_k x_k/A_k+1^2 = = 1/2α_k+1(x_* - u_k+1) + A_k(x_* - x_k)/A_k+1^2 ≤ ≤1/2α_k+1/A_k+1x_* - u_k+1^2 + 1/2A_k/A_k+1x_* - x_k^2 ≤ R^2 То есть последовательность, генерированная методом, ограничена.
http://arxiv.org/abs/1705.09809v1
{ "authors": [ "Alexander Tyurin" ], "categories": [ "math.OC" ], "primary_category": "math.OC", "published": "20170527113541", "title": "Mirror version of similar triangles method for constrained optimization problems" }
Broadband focusing of underwater sound using a transparent pentamode lens Preston S. Wilson December 30, 2023 ========================================================================= In several physical systems, important properties characterizing the system itself are theoretically related with specific degrees of freedom. Although standard Monte Carlo simulations provide an effective tool to accurately reconstruct the physical configurations of the system, they are unable to isolate the different contributions corresponding todifferent degrees of freedom. Here we show that unsupervised deep learning can become a valid support to MC simulation, coupling useful insights in the phases detection task with good reconstruction performance. As a testbed we consider the 2D XY model, showing that a deep neural network based on variational autoencoders can detect the continuous Kosterlitz-Thouless (KT) transitions, and that, if endowed with the appropriate constrains, they generate configurations with meaningful physical content. § INTRODUCTION In physics, actual understanding of phenomena often heavily relies on the feasibility of the computational analysis, i.e. the possibility of performing effective numerical simulations of the theory.These simulations generate different microstates of a theoretical model of a real thermodynamic system, from which the macroscopic observable quantities are computed. For a thermodynamic system in a given thermal state, a microstate is a microscopic (i.e., describing the state of each element of the system) configuration that the system may occupy with a certain probability. On the other hand, a macrostate is defined by one or more overall macroscopic properties, such as temperature, pressure, energy, etc. There are thus several possible microstates accounting for the same macrosystem. As a rule of thumb, a key aspect is the characterization of dramatic changes occurring among set of configurations representing different system macrostates: these changes are known as phase transitions. The standard approach uses Monte Carlo methods to sample configurations from the partition function (the function that represents the theory of the model) and on these configurations expectation values of observables are calculated.However, this computational strategy is prone to several drawbacks, the two most critical being: * To detect phase transitions, an order parameter, a quantity whose dramatic variation as a function of the temperature signals the transformation, must be defined.In many systems, defining this kind of observable can be a hard task. * System properties can be associated to specific degrees of freedom, e.g., topological excitations.It is hard or even impossible with current techniques to isolate such contributions from Monte Carlo generated configurations. Statistical and computational physics has majored as an interesting playground for the machine learning community,(see for instance <cit.> <cit.>),achieving relevant results and with reciprocal benefits, as theoretical insights in learning that can be exploited by using physical approaches (<cit.>). In particular, recent works in the literature tackle the problem of discovering and interpreting phase transitions via machine learning, e.g. 
<cit.>, while other authors focus on substituting Monte Carlo simulations with an equivalent learning approach (see <cit.>): in both cases, the arise of the deep learning paradigm is strongly contributing to speed up this trend.This work follows such line of thought, aiming to apply deep learning algorithms on top of Monte Carlo simulations.In this context the 2D XY model, detailed in Sec. <ref>, has all the properties for exploring the power of machine learning for phase transition identification and investigation of the internal structure of the Monte Carlo configurations.Characteristic of the 2D XY model is a continuous transition, not related with any symmetry breaking, called Kosterlitz-Thouless (KT) transition (<cit.>).The use of deep learning methods to investigate this type of transformation extends previous research works where machine learning has been applied to identify true phase transitions, by using both supervised as well as unsupervised techniques(<cit.>).The KT transition in this model is mediated by specific topological defects (vortices) that unbind at high temperature; therefore this is a perfect situation to prove the effectiveness of deep learning also in order to identify the relevance of specific degrees of freedom. Operatively, we adopt the Variational AutoEncoders (VAE) network for our study (<cit.>), obtaining promising results focusing on the ability of generative models to reconstruct configurations with meaningful physics.§ THE XY MODEL In the 2D XY model, the degrees of freedom are planar rotors (σ⃗) of unit length arranged on a two dimensional square lattice. Here the rotors take continuous values in [0,2π), differently from the classical Ising model where rotors are seen as binary spins assuming only two positions up or down, and for which analysis is simpler (see <cit.>).For the 2D XY model, the Hamiltonian of the system is given byH_XY = -J ∑_⟨ i,j⟩σ⃗_i·σ⃗_j = -J ∑_⟨ i,j⟩cos(θ_i-θ_j) ,where ⟨ i,j⟩ denotes all adjacent sites on the lattice and θ_i denotes the angle of the rotor on site i.The Mermin-Wagner-Hohenberg theorem (<cit.>) prohibits continuous phase transitions in d ≤ 2 dimensions at finite temperature when all interactions are sufficiently short-ranged.Nevertheless the 2D XY model shows a Kosterlitz-Thouless transition (<cit.>) connected with the presence of topological charges in the system.In particular, vortices that are bounded in the ordered phase, below the critical temperature T_c, unbind at high temperatures causing an unordered phase where the correlation function decays exponentially. This kind of transition is therefore not related with any broken symmetry and it is meaningful to see if it can be captured by Deep Learning networks.In the classical approach, configurations are generated by using Markov chain Monte Carlo to sample the probability distribution:P_θ(T)=exp^-βH(θ)/Z(T) ,where θ represents a given configuration of the system, β = 1/k_B T for k_B the Boltzmann constant, T is the temperature and Z is the partition function at a given temperature defined byZ(T) = ∑_θexp^-βH(θ). Hereafter we consider a system with linear size L=16 which allows the study of all the characteristics of the model despite the small dimension. 
Operatively, 10.000 uncorrelated configurations are sampled, for β linearly increasing from 0.1 to 1.9 by 0.1 steps.§ VAE FOR THE 2D XY MODEL Variational autoencoders (VAE) are generative models, learning the parameters of a probability distribution by modeling the data through autoencoders (<cit.>).After learning the probability distribution, parameters can be sampled so that the encoder network can generate samples closely resembling the training data.In particular, here we consider the Convolutional VAE that has been shown to be more effective in the encoding of systems of the class considered in this paper (<cit.>). The VAE has four convolutional layers, two fully connected layers and the layer of the latent variables, which will be later shown to be sufficient to correctly capture the phase transition.Adopting an incremental approach from a basic architecture, the standard VAE is first tested using the sum of reconstruction loss and Kullback-Leibler loss as objective function. As our goal goes beyond the identification of the phase transition towards the development of a method to investigate collective dynamics in the configurations.To this aim, we first show that the Standard VAE is not sufficient, thus we enrich the architecture by introducing two novel ingredients, as graphically detailed in Fig. <ref>, that will be later shown to considerably improve the physical meaning of the reconstructed configurations. From now on we indicate this new network as HG-VAE. First, some physical insight is injected in the VAE by modifying the loss function and adding a new term measuring the energy difference between the original and the reconstructed configurations:loss_H = (E(θ_MC) - E(θ_R))^2where θ_MC and θ_R are the Monte Carlo and reconstructed configurations, respectively (Fig. <ref>, upper panel).Further, the reconstruction layer is split into two terms θ and σ corresponding to the parameters of a Gaussian distribution.In detail, by this position there are two quantities associated with each element of the configuration: θ_i, the output of the standard VAE, used to evaluate the reconstruction loss and a standard deviation σ_θ_i.A configuration extracted from the sampling of the Gaussian 𝒩(θ_i,σ_θ_i) is then considered for the evaluation of the loss_H term (Fig. <ref>, lower panel). § RESULTS §.§ Standard VAE The Standard VAE is trained with configurations generated by using Markov chain Monte Carlo, sampling every 100 steps in order to avoid autocorrelations, for 19 values of β in the range [0.1,1.9] with a β step of 0.1. For models with phase transitions connected with a symmetry breaking, there is evidence that the transition can be clearly identified by looking at the distribution of the latent variables(<cit.>). As anticipated before, the XY model is different because the transition is connected with the unbinding of vortices, and a priori it is unclear whether the VAE can capture this kind of transformation. In Fig. <ref> we show that even in this case latent variables corresponding to configurations with β<β_c distribute differently from those at β≥β_c, with β_c the critical value obtained from the Monte Carlo simulations. Encoding 1000 configurations for each β we have a distribution of the latent variables plotted in Fig. 
<ref>-(a).The two colors refer to values of β smaller (blue) or larger (orange) than the critical point.For large βs, corresponding to lower temperatures and thus to the ordered phase, the latent variables distribute on a subset of the space occupied from latent variables at small βs and far from the center of the distribution. In the same plot, the black dots point the center of the distribution at each β.If the mean Euclidean distance R̅ of the latent variable from the related center of the distribution is computed for all configuration for each β, this quantity can be treated as an order parameter for the transition (Fig. <ref>-(b)). Comparing the derivative of R̅ with the magnetic susceptibility obtained from the Monte Carlo configurations, and defined by χ= V(⟨m^2⟩- ⟨m⟩^2),we see that the critical point identified from the VAE analysis approximate the true point with good accuracy (Fig. <ref>-(c)). From this analysis we can conclude that the Standard VAE, regardless of any physical insight, can successfully be applied to the identification of phase transitions,not only when these transformations are connected with a symmetry breaking but also when these are continuous such as the KT transition.As far as the ability of unsupervised algorithms to generate physically meaningful configuration is concerned, the Standard VAE should be able to reproduce extensive quantities like the energy, directly given by the Hamiltonian Eq. <ref>, and the total magnetization: m(σ⃗) = 1/V∑_𝐱σ⃗_𝐱The expectation values of the energy and the magnetization, as computed on the Monte Carlo and the reconstructed configurations, are presented in Fig. <ref>. It is clear that the Standard VAE fails completely to reproduce the correct value of the observables predicting a constant value as a function of β which corresponds to the maximally ordered state of the system. A visual inspection of a reconstructed configuration can help understanding the problem: in Fig. <ref> an example is displayed for β=0.1, i.e., the maximally disordered case (left: original, right: VAE configuration).The spins in the reconstructed case are all aligned on the same direction, meaning that the Standard VAE could not reproduce any specificity of the input configuration. §.§ H/G-VAETrying to improve the reconstruction task we add physically driven modifications to the learning algorithm.This certainly breaks the view of Deep Learning as an agnostic approach which can be applied to different systems without contextual knowledge; as our focus is on the specific physical problem of detecting and isolating specific degrees of freedom inside the Monte Carlo generated configurations, this design is absolutely plausible. Here we present the results obtained using the two physically motivated versions of the VAE introduced in Sec. <ref>.In all the following figures, VAE indicates the Standard variational autoencoder, H refers to the VAE with the energy loss term (H-VAE) and G to the one including Gaussian fluctuations (G-VAE). As a sanity check, we first check that the introduced physical constraint does not alter the ability of the VAE to identify the phase transition.In Figs. <ref> and  <ref> we reproduce the analysis of the latent variables as in Sec. 
<ref> showing that, as expected, the H-VAE increases the ability of detecting the transformations.The distribution of the latent variables in the two cases β<β_c and β>=β_c, clearly shows a separation in two regions corresponding to the two different phases.Moreover, the critical point now coincides with the true one as identified from the magnetic susceptibility. Including the proposed H and G terms in the Standard VAE improves the reconstruction performance reflected in the evaluations of the energy and magnetization (Fig. <ref>). There are still discrepancies between the true values and the generated ones. In future works we will try to amend this specialising VAEs to reproduce a single β value. This strategy can probably improve the performance of the models.As a further check, we compute the vortex density to verify the crucial effect of the G and H terms for the reconstruction task. Vortex density is defined by:ρ = ⟨ l_ij⟩ = 1/V∑_𝐱∑_i≠ j|l_ij,𝐱| l_ij,𝐱 = 1/2π(n_i,𝐱 + n_j,𝐱+𝐢 - n_i,𝐱+𝐣 - n_j,𝐱) n_i,𝐱 = (θ_𝐱+𝐢-θ_𝐢).From Fig. <ref> we see that by including the Gaussian fluctuations, the vortex density is much larger and moves significantly towards the true value.For large β, the Gaussian contribution is probably too large, allowing the formation of vortices in a region where the density should be zero. The reconstruction peformance of the G-VAE is shown in Fig. <ref> where we show the reconstruction of a configuration at β=0.1 for the three VAEs.Although only qualitative, this comparison clearly shows that without the Gaussian fluctuation term, the reconstructed configurations display a much higher level of (unrealistic) order than the original Monte Carlo one.§ CONCLUSIONS We presented here an extension of VAEs for applications in the domain of computational physics.Even in the complex cases of phase transitions governed by modifications in the interaction among topological degrees of freedom,we found that VAEs can detect the critical behaviour. Results are presented showing that a physically driven modification of the VAE (H/G-VAE) improves the ability of the model in reproducing the Monte Carlo configurations. This opens the possibility of promoting VAE algorithms as instruments for the investigation of the collective degrees of freedom of Monte Carlo simulated physical configurations. §.§.§ AcknowledgmentsThe authors want to thank F. Di Renzo for the useful discussions.natbib
http://arxiv.org/abs/1705.09524v1
{ "authors": [ "Marco Cristoforetti", "Giuseppe Jurman", "Andrea I. Nardelli", "Cesare Furlanello" ], "categories": [ "hep-lat", "cond-mat.stat-mech", "cs.LG" ], "primary_category": "hep-lat", "published": "20170526104559", "title": "Towards meaningful physics from generative models" }
TIT/HEP-660 May,2017 Quantumperiodsand prepotentialinN=2 SU(2) SQCD .75em 2.5cm Katsushi Ito, Shoichi Kanno and Takafumi Okubo2.5emDepartment of Physics,Tokyo Institute of TechnologyTokyo, 152-8551, Japan 3.0emWe studyN=2 SU(2) supersymmetric QCD with massive hypermultiplets deformed in theNekrasov-Shatashvili limit of the Omega-background.The prepotential of the low-energy effective theory is determined bythe WKB solution of thequantumSeiberg-Witten curve. We calculate thedeformed Seiberg-Witten periodsaround the massless monoplole pointexplicitly up to the fourth order in the deformation parameter. =0.7cm§ INTRODUCTION The Seiberg-Witten (SW) solution <cit.>of the prepotentialof N=2 supersymmetric gauge theoryenables us to understand both weak and strong coupling physics of the theory such as instanton effects,the duality of the BPS spectrum <cit.> and nonlocal superconformal fixed point <cit.>. In the weak coupling region,the Nekrasov partition function<cit.>, where thegauge theory is defined in the Ω-background <cit.>, provides an exact formula of the prepotential including thenonperturbative instanton effects. The Nekrasov partition function can be computed with the help of the localization technique.At strong coupling region, however, we do not know the localization method to reproduce the prepotential around the massless monopole point. The Nekrasov function isrelated to the conformal block of two dimensional conformal field theory <cit.> and also the partition function of topological string theory <cit.>. The analysis of the conformal block with insertion of the surface operator <cit.>leads to the concept ofthe quantum Seiberg-Wittencurve. The solution of the quantum curvegivesthe low-energy effective theory of the Ω-deformed theories, which are parametrized by two deformation parameters ϵ_1 and ϵ_2. In the Nekrasov-Shatashvili limit <cit.> of the Ω-background, where one of the deformation parameters ϵ_2 is set to be zero, the quantum curve becomes the ordinary differential equation. The quantum SW curve is obtained from the quantization procedureof the symplectic structure defined by the SW differential <cit.> where the parameter ϵ_1 plays a role of the Planck constant ħ. In particular, the SW curve for SU(2) Yang-Mills theory becomes the Schrödinger equation with the sine-Gordon potential and the higer order corrections to the deformed period integrals in the weak coupling have been calculated by using the WKB analysis <cit.>. This was generalized to N=2 SU(N) SQCD <cit.>. Note that the SW curve for 𝒩=2^* SU(2) gauge theory corresponds to the Lamé equation and the deformed period integrals also have been calculated by using the WKB analysis <cit.>. One can derive the Bohr-Sommerfeld quantization conditions which are nothing but the Baxter's T-Q relations of the integrable system <cit.>. The deformed period integral agrees with that obtained from the Nekrasov partition function. It is interesting to study perturbative and non-pertubative quantum corrections in the strong coupling region of the moduli space, which might change the strong coupling dynamics of the theory. In <cit.>, the perturbative corrections around the massless monopole point in the N=2 SU(2) super Yang-Mills theory have been studied. In <cit.>, the 1-instanton correction in ħ to the dual prepotential has been calculated. In <cit.>, the non-perturbative aspects of the ħ expansion in 𝒩=2 theories have been studied. 
The purpose of this work is to study systematically perturbative corrections in ħ to the prepotential at strong coupling where the BPS monopole becomes masslessfor N=2 SU(2) SQCD with N_f=1,2,3,4 hypermultiplets. We investigate quantum corrections to the period integrals of the SW differential and the prepotential up to the fourth order in the deformation parameter ħ.This paper is organized as follows: In Section 2, we review the quantization of the SW curve and the quantum periods for N=2 SU(2) SQCD.In Section 3, we show that the quantum correction can be expressed by acting the differential operator on the undeformed SW periods in detail. In Section 4, we calculate the quantum periods in the weak coupling region for 𝒩=2 SU(2) SQCD and confirm that they agree with those obtained from the Nekrasov partition function. In Section 5, we study the expansions of the periods around the massless monopole point in the moduli space. We consider how the effective coupling and the massless monopole point are deformed by ħ.In Section 6, we add some comments and discussions.§ QUANTUM SW CURVE FORN=2 SU(2) SQCDThe Seiberg-Witten curve for N=2 SU(2) gauge theory with N_f (=0,…,4)hypermultiplets is given byK(p)-Λ̅/2 (K_+(p)e^ix+K_-(p)e^-ix)=0,whereΛ̅=Λ_N_f^2-N_f/2 with Λ_N_f being a QCD scale parameter for N_f≤ 3 and Λ̅=√(q) for N_f=4.Here q=e^ 2π i τ_UV and τ_UV denotes the UV coupling constant <cit.>. K(p) and K_±(p) are defined byK(p)={[ p^2-u,N_f=0,1;p^2-u+Λ_2^28,N_f=2;p^2-u +Λ_3/4(p+m_1+m_2+m_32),N_f=3; (1+q2)p^2-u +q 4p∑_i=1^4 m_i +q8∑_i<jm_i m_j ,N_f=4 ].andK_+(p)=∏ ^N_+_j=1(p+m_j), K_-(p)=∏_j=N_++1^N_f(p+m_j),where u is the Coulomb moduli parameter and m_1,…, m_N_f are mass parameters. N_+ is a fixed integer satisfying 1≤ N_+≤ N_f. The curve (<ref>) can be written into the standard form <cit.>y^2=K(p)^2-Λ̅^2 K_+(p)K_-(p)by introducing y=Λ̅K_+(p)e^ix-K(p). The SW differential is defined byλ=pdlogK_- K_+ -2 π i p dx.Let α and β be a pair of canonical one-cycles on the curve.The SW periods are defined bya=∫_αp(x) dx,a_D=∫_βp(x) dx,where p(x) is a solution of (<ref>). Then the prepotential F(a) is determined bya_D=∂ F(a)∂ a. The SW differential defines a symplectic form dλ_SW=dp∧ dx on the (p,x) space. The quantumSW curve is obtained by regarding the coordinate p as the differential operator -i ħd dx. We have the differential equations(K(-iħ∂_x))-Λ̅/2 ( e^ix2K_+(-iħ∂_x)e^ix2+e^-ix2K_-(-iħ∂_x)e^-ix2)Ψ(x)=0,where ∂_x=∂∂ x. Here we take the ordering prescription of the differential operators as in <cit.>. This differential equation is also obtained by observing the relation between the quantum integrable models and the SW theory in the Nekrasov-Shatashvili (NS) limit of the Ω-background <cit.>. The same differential equation is also obtained from the insertion of the degenerate primary field corresponding to thesurface operator in the two-dimensional conformal field theory<cit.>.In this paper, we will choose N_+ such that the differential equation becomes the second order differential equation of the form: (∂_x^2 +f(x)∂_x+g(x))Ψ(x)=0.Then we convert this equation into the Schrödinger type equation by introducing Ψ(x)=exp(-12∫ f(x)dx)ψ(x):(-ħ^2∂_x^2+Q(x))ψ(x)=0,where Q(x)=-1ħ^2(-12∂_x f-14f^2+g). In thecase ofSU(2) SQCD, it is found thatQ(x) isexpanded in ħ asQ(x)=Q_0(x)+ħ^2 Q_2(x). The quantum SW periods are defined by the WKB solution of the equation (<ref>):ψ(x)=exp( iħ∫^x P(y)dy ),whereP(y)=∑_n=0^∞ħ^n p_n(y)and p_0(y)=p(y). 
Substituting the expansion (<ref>) into (<ref>), we have the recursion relations for p_n(x)'s.Note thatp_n(x) for odd n becomes a total derivative andonlyp_2n(x) contributes the period integral. The first three p_2n's are given by p_0(x) =i √(Q_0),p_2(x) =i2Q_2√(Q_0)+i 48∂_x ^2 Q_0 Q_0^32, p_4(x) =-7i 1536(∂_x^2 Q_0)^2 Q_0^72+i 768∂_x ^4Q_0 Q_0^52-i Q_2 ∂_x^2 Q_0 32 Q_0^52+i∂_x^2 Q_2 48 Q_0^32-i Q_2^2 8 Q_0^32, up to total derivatives. Then the quantum period integral Π=∫ P(x) dx=(a, a_D) along the cycles α and β can be expanded in ħ asΠ=Π^(0)+ħ^2 Π^(2)+ħ^4 Π^(4)+⋯,where Π^(2n):=∫ p_2n(x)dx.Now we study the equations satisfied by the quantum SW periods. It has been shown that the undeformed (or classical) SW periods Π^(0) obey the third order differential equation with respect to the moduli parameter u called thePicard-Fuchs equation <cit.>.Note that ∂_u p_0 is the holomorphic diffrential on thecurve. When wewrite the curve (<ref>) in the form y^2=∏_i=1^4(x-e_i),where the weak coupling limit corresponds toe_2→ e_3 and e_1→ e_4, we can evaluate the periods∂_u Π^(0)=∫∂_u p_0 dx=∫dp yby the hypergeometric function.Thenby using quadratic and cubic transformations <cit.>, onefinds that in the weak coupling region, where u is large,the classical periods ∂_u a^(0) and ∂_u a^(0)_D are given by∂_u a =√(2)2(-D)^-1/4F( 112,512;1; z), ∂_u a_D =i√(2)/2 (-D)^-1/4[3/2πln 12 F( 1/12, 5/12; 1; z ) -1/2π F_*( 1/12, 5/12; 1; z ) ],where z=-27Δ 4D^3 and the weak coupling region corresponds to z=0.Here Δ and D for the curve (<ref>) are defined byΔ =∏_i<j(e_i-e_j)^2, D =∑_i<je_i^2 e_j^2-6∏_i=1^4 e_i-∑_i<j<k(e_i^2e_j e_k+e_i e_j^2 e_k+e_i e_j e_k^2).Δ is the discriminant of the curve. F(α,β;γ;z) and F_*(α,β;γ;z) arethe hypergeometric functions defined byF(α , β ;γ ;z )=∑ _n=0 ^∞(α )_n (β )_ n/n ! (γ )_nz^n ,F_*(α , β ;1;z)=F(α , β ;1 ;z )ln z+∑ _n=0 ^∞(α )_n (β )_ n/(n !)^2∑_r=0 ^n-1( 1/α +r+1/β +r -2/1+r) z^n .Changing the variable from z to u, the hypergeometric differential equation for F( 112,512;1; z) leads to the Picard-Fuchs equation for ∂Π^(0)/∂ u. It takes the form∂^3Π^(0)/∂ u^3+p_1 ∂^2 Π^(0)/∂ u^2+p_2 ∂Π^(0)/∂ u=0,where p_1 and p_2 are given byp_1 = ∂_u (-D)^1/4 (-D)^1/4 -∂_u ^2 z∂_uz +γ-(1 +α+β)z z(1-z)∂_u z, p_2 =∂_u^2 (-D)^1/4 (-D)^1/4+ ∂_u (-D)^1/4 (-D)^1/4{ -∂_u ^2z∂_u z +γ-(1 +α+β)z z(1-z)∂_u z } -αβ z(1-z)(∂_u z )^2with α=112, β=512 and γ=1.For the SW curve(<ref>) with N_f≤ 3, the Picard-Fuchs equations (<ref>) agree with those in <cit.>. Note that for massless case,the Picard-Fuchs equation turnsout to bethe second order differential equation for Π^(0) <cit.>. The higher order correction Π^(k) to the SW period Π^(0) is determined by acting a differential operator Ô_k on Π^(0) <cit.>:Π^(k)=Ô_k Π^(0).There are various ways to represent the differential operator Ô_k. For example, one can use the first and second order differential operators with respect to u to express Π^(k) asΠ^(k)=(X_k^1∂^2/∂ u^2 +X_k^2 ∂/∂ u)Π^(0). 
Let us study the simplest example, the N_f=0 theory.We have the quantum SW curve (<ref>) with the sine-Gordon potential:Q(x)=-u-Λ_0^2 2(e^ix+e^-ix).The SW periods Π^(0) satisfy the Picard-Fuchs equation <cit.>:∂^2 Π^(0)/∂ u^2-1 4(Λ_0^4-u^2)Π^(0)=0.The discriminant Δ and D are given byΔ =256 Λ _0^8 (u^2-Λ_0^4),D=12 Λ_0^4-16 u^2.The second and fourth order quantum corrections are given by <cit.>Π^(2) =(112u ∂^2/∂ u^2+124∂/∂ u)Π^(0),Π^(4) =(75 Λ_0^8-4 u^4+153 Λ_0^4 u^2/5760 (u^2-Λ_0^4)^2∂^2/∂ u^2 -u^3-15 Λ_0^4 u/2880 (u^2-Λ_0^4)^2∂/∂ u) Π^(0).With the help of the Picard-Fuchs equation (<ref>), we find a simpler formula for Π^(4):Π^(4) =(7 1440u^2∂^4/∂ u^4+1 48u∂^3/∂ u^3 +5 384∂^2/∂ u^2)Π^(0). In the weakcoupling region where u≫Λ_0^2, substituting (<ref>) into (<ref>) and (<ref>), we can obtain a^(0) and a_D^(0) by expanding (<ref>) and (<ref>) around u=∞ and integrating with respect to u. The quantum SW periods can be obtained by applying (<ref>) and (<ref>) on a(u) and a_D(u):a(u)= ( √(u/2) -Λ_0/16 √(2)(Λ_0^2/u)^3/2+⋯)+ħ^2/Λ_0(-1/64 √(2)(Λ_0^2/u)^5/2-35/2048 √(2)(Λ_0^2/u)^9/2 +⋯) + ħ^4/Λ_0^3(-1/256 √(2)(Λ_0^2/u)^7/2 -273/16384 √(2)(Λ_0^2/u)^11/2 +⋯) +⋯, a_D(u)= -i/2√(2)π[-4√(2) a(u) log8u/Λ_0^2 +(8 √(u)-Λ_0^4/4 u^3/2 +⋯) + ħ^2/Λ_0( -1/6 √(u)-13/96(Λ_0^2/u)^5/2 +⋯). .+ħ^4/Λ_0^3( 1/720 u^3/2-63/1280(Λ_0^2/u)^7/2 +⋯) +⋯],up to the fourth order in ħ. It has been checked that the quantum curve reproduces the prepotential obtained from the NS limit of the Nekrasov partition function <cit.>. We can also consider the quantum SW periods in the strong coupling region. For example, at u=±Λ_0^2 where monopole/dyon becomes massless,by solving the Picard-Fuchs equation in terms of hypergeometric function, we can compute the SW periods <cit.>. For the computation of the deformed SW periods, it is convenient to use(<ref>) rather than (<ref>) since the coefficients in (<ref>) become singular at u=Λ_0^2. We thenfind theexpansion of the SW periods around u=Λ_0^2, which are given by <cit.>a_D(ũ) =i( ũ/2 Λ_0-ũ^2/32 Λ_0^3+⋯) +iħ^2/Λ_0( 1/64-5/1024( ũ/Λ_0^2)+⋯) + iħ^4/Λ_0^3(-17/65536+ 721/2097152( ũ/Λ_0^2)+⋯)+⋯,a(ũ)= i/2π[ a_D(ũ) logũ/2^5 Λ_0^2 +i(- ũ/2 Λ_0-3 ũ^2/64 Λ_0^3+⋯)+ iħ^2/Λ_0(1/24( ũ/Λ_0^2)^-1+5/192+⋯) . . +iħ^4/Λ_0^3(7/1440( ũ/Λ_0^2)^-3- 1/2560 ( ũ/Λ_0^2)^-2+⋯)+⋯],where ũ :=u-Λ_0^2. In the following sections, we will generalize these results and compute the quantum corrections to the SW periods at strong coupling region for the N_f=1,2,3,4 cases.§ QUANTUM PERIODS FOR N_F≥ 1 Let us study the quantum SW periods for SU(2) theory with N_f≥ 1 hypermultiplets.We will choose N_+ of (<ref>) such that the differential equation (<ref>) become the second order differential equation. Then we convert the quantum SW curve into the Schrödinger type equation (<ref>). The quantum SW periods are given by the integral of (<ref>) and (<ref>). These periods can be represented as 𝒪̂_k Π ^(0) with some differential operators Ô_k. We will find the second and fourth order corrections to the SW periods. In the following, Δ_N_f stands for Δ and D_N_f for D in (<ref>) and (<ref>) for the N_f theory.§.§ N_f=1 theoryIn the theory with N_f=1 hypermultiplet, we can take N_+=1 in the SW curve (<ref>) without loss of generality. 
The quantum curve is written as the Schrödinger type equation with theTzitzéica–Bullough–Dodd type potential:Q(x )=-1/2Λ_1^3/2 m_1 e^i x-u-1/16Λ_1^3 e^2 i x-1/2Λ_1^3/2 e^-i x ,where Q_2(x)=0.The SW periods Π^(0)satisfy the Picard-Fuchs equation (<ref>) with Δ_1= -Λ_1^6( 256u^3-256u^2m_1^2-288um_1Λ_1^3+256m_1^3Λ_1^3 +27Λ_1^6) ,D_1= -16u^2+12m_1 Λ_1^3.It is also found to satisfy the differential equation with respect to the mass parameter m:∂^2 Π ^(0)/∂ m_1 ∂ u = b_1 ∂ ^2 Π ^(0)/∂ u^2 +c_1 ∂Π ^(0)/∂ u,where b_1=-16 m_1u-9Λ_1^3/8(4m_1^2-3u) ,c_1 =-m_1/4m_1^2-3u.We will calculate the corrections of the second andfourth orders in ħ <cit.> to the period integrals using (<ref>) and (<ref>). These corrections are expressed in terms of the basis ∂_u Π^(0) and ∂^2_u Π^(0)Π^(2)= (X_2^1 ∂^2/∂ u^2 +X_2^2 ∂/∂ u) Π ^(0) , Π ^(4)= (X_4^1 ∂^2/∂ u^2 +X_4^2 ∂/∂ u) Π ^(0) ,where the coefficients in (<ref>) are given by X_2^1= --9 Λ _1^3 m_1-16 m_1^2 u+24 u^2/48 (4 m_1^2-3 u), X_2^2= -3 u-2 m_1^2/12 (4 m_1^2-3 u),and the coefficients in (<ref>) are given byX_4^1 = Λ_1^12/1440(4m_1^2 -3u) Δ_1^2( -864 Λ_1^9 m_1 (4350 m_1^2 u+1192 m_1^4+441 u^2) -49152 Λ_1^3 m_1 u^2 (-455 m_1^2 u^2+609 m_1^4 u-204 m_1^6+267 u^3) +768 Λ_1^6 (-19593 m_1^2 u^3+42348 m_1^4 u^2-22624 m_1^6 u+6400 m_1^8+8235 u^4) +131072 u^4 (15 m_1^2 u^2+6 m_1^4 u-2 m_1^6+9 u^3)-729 Λ_1^12(615 u-1792 m_1^2)), X_4^2= Λ_1^12/45(4m_1^2 -3u) Δ_1^2(24 Λ_1^6 (-1080 m_1^2 u^2+4254 m_1^4 u-800 m_1^6+1215 u^3) -768 Λ_1^3 m_1 u (-185 m_1^2 u^2+267 m_1^4 u-80 m_1^6+159 u^3) +2048 u^3 (15 m_1^2 u^2+6 m_1^4 u-2 m_1^6+9 u^3)-81 Λ_1^9 m_1 (235 m_1^2+6 u) ). [1] We will compare the quantum prepotential with the NS limit of the Nekrasov partition function in the weakcoupling region in the next section. The above representation of the period integrals is suitable to consider the decoupling limit to the pure SU(2) theory, which is defined by m_1→∞ and Λ_1 → 0 with m_1 Λ_1^3=Λ _0^4 being fixed. In the decoupling limit, the second and fourth order corrections (<ref>) and (<ref>) agree with (<ref>) and (<ref>).In section <ref>, we will study the deformed period integrals in the strong coupling region, where the monopole/dyon becomes massless.In this case, the discriminant Δ_1 of the curve has a zero of the first order where the coefficients in(<ref>) and (<ref>) become singular.Since the SW periods Π^(0) satisfy the Picard-Fuchs equation (<ref>) and the differential equation (<ref>), the differential operator Ô_k in (<ref>) for the higher order corrections is defined modulo such differential operators. We note that the coefficients of the differential operator for Π^(2) can be rewritten asX_2^1=1/6 u+1/6 m_1 b_1,X_2^2=1/12 +1/6 m_1 c_1.Using the Picard-Fuchs equation (<ref>)and the differential equation (<ref>), we find that the second order correction to the SW periods can be expressed asΠ ^(2) = 1/12( 2u∂^2/∂ u^2 +2m_1 ∂/∂ m_1∂/∂ u +∂/∂ u) Π ^(0) .In the similar way, we find that the fourth order correction to the SW periods is expressed asΠ ^(4) = 1/1440( 28 u^2 ∂^4/∂ u^4 +124 u ∂^3/∂ u^3 +81 ∂^2/∂ u^2+56 u m_1 ∂/∂ m_1∂^3/∂ u^3 +28m_1^2 ∂^2/∂ m_1^2∂^2/∂ u^2+132 m_1 ∂/∂ m_1∂ ^2/∂ u^2) Π ^(0) .Since all the coefficients are now regular when Δ_1=0, we can easily calculate the quantum SW periods at the various strong coupling points in the Coulomb branch. §.§ N_f=2 theoryIn the case of N_f=2, we can choose N_+=1 or N_+=2 in (<ref>) for the SW curve (<ref>). 
The corresponding quantum curves are the second order differential equation in both cases and can be written in the form of the Schrödinger type equation but they have apparently different Q(x):Q(x)= -u-Λ_2/2( m_1 e^ix +m_2 e^-ix)-Λ_2^2/8cos 2x,(N_+=1) Q(x)= -e^ixΛ_2^3+Λ_2^2(e^2ix(m_1-m_2)^2-2)+8Λ_2 e^ix(m_1 m_2-u)+16u/4(-2+e^ixΛ_2)^2+ħ^2e^ixΛ_2/2(-2+e^ixΛ_2)^2,(N_+=2)where for the N_+=2 case Q(x) includes the ħ^2 term.Although the quantum curves look quite different, they are shown to give the same period integrals. One reason is that the SW periods in both cases satisfy the same Picard-Fuchs equation with the discriminant Δ_2 and D_2: Δ_2= Λ_2^12/16-3 Λ_2^10 m_1 m_2-Λ_2^8 (8 u^2-36( m_1^2+ m_2^2) u+27 m_1^4+27 m_2^4+6 m_1^2 m_2^2) +256 Λ_2^4 u^2 (u-m_1^2) (u-m_2^2)-32 Λ _2^6 m_1 m_2 (10 u^2-9( m_1^2+ m_2^2) u+8 m_1^2 m_2^2), D_2= -3/4Λ_2^4+12 Λ_2^2 m_1 m_2-16 u^2,and the differential equations∂^2 Π^(0)/∂ m_1∂ u =1/L_2( b_2^(1)∂^2 Π^(0)/∂ u^2+c_2^(1)∂Π^(0)/∂ u),∂^2 Π^(0)/∂ m_2∂ u =1/L_2( b_2^(2)∂^2 Π^(0)/∂ u^2+c_2^(2)∂Π^(0)/∂ u),whereL_2= -Λ_2^4+8m_1m_2Λ_2^2+32[4m_1^2m_2^2-3u(m_1^2+m_2^2)+2u^2 ], b_2^(1)= 3Λ_2^4 m_1-4Λ_2^2 m_2(3m_1^2-9m_2^2+8u)-64m_2 u(m_1^2-u),c_2^(1)= 4 Λ_2^2m_2+32m_1(m_2^2-u), b_2^(2)= 3Λ_2^4 m_2-4Λ_2^2 m_1(3m_2^2-9m_1^2+8u)-64m_1 u(m_2^2-u) , c_2^(1)= 4 Λ_2^2m_1+32m_2(m_1^2-u).Since the SW periods are uniquely determined from the Picard-Fuchs equation with perturbative behaviors around singularities, the SW periods do not depend on the choice of N_+. We can also check by explicit calculation that the second and fourth order correctionsare given by Π ^(2)= 1/6( 2u∂^2/∂ u^2 +3/2( m_1 ∂/∂ m_1∂/∂ u+m_2 ∂/∂ m_2∂/∂ u) +∂/∂ u) Π ^(0),Π ^(4) = 1/360[ 28 u^2∂^4/∂ u^4 +120u∂^3/∂ u^3+ 75∂^2/∂ u^2+42 ( u m_1∂/∂ m_1∂^3/∂ u^3+u m_2∂/∂ m_2∂^3/∂ u^3) + 345/4( m_1 ∂/∂ m_1∂ ^2/∂ u^2+m_2 ∂/∂ m_2∂ ^2/∂ u^2)+63/4( m_1^2 ∂^2/∂ m_1^2∂^2/∂ u^2+m_2^2 ∂^2/∂ m_2^2∂^2/∂ u^2) +126/4 m_1m_2 ∂/∂ m_1∂/∂ m_2∂^2/∂ u^2] Π ^(0) ,which are independent of N_+. Here we adapt the expression suchthat all the coefficients do not have any singularity at singular points in the moduli space.Thus we conclude that the quantum SW periods, at least up to the fourth order in ħ,do not depend on the choice of N_+<cit.>. As explained in the previous sections, the expressions (<ref>) and (<ref>) are not a unique way to represent the quantum corrections. With the help of the Picard-Fuchs equation (<ref>)and the differential equation (<ref>),we can rewrite (<ref>) in terms ofa basis ∂_u^2 Π ^(0) and ∂_u Π ^(0) asΠ^(2)=[(13u+14 L_2(m_1b_2^(1)+m_2 b_2^(2)) )∂^2/∂ u^2 +(16+14 L_2(m_1c_2^(1)+m_2 c_2^(2)) ) ∂/∂ u]Π^(0), where L_2, b_2^(1),⋯ c_2^(2) are given in(<ref>). In the decoupling limit where m_2 →∞ and Λ_2 → 0 with m_2 Λ_2^2=Λ_1^3 being fixed, we have the SW periods of the N_f=1 theory. Furthermore, it can be checked that the second and fourth order corrections to the SW periods become those of the N_f=1 theory. §.§ N_f=3 theoryIn the case of N_f=3, we can choose N_+=1 or 2 in (<ref>). Otherwise, we obtain the third order differential equation. We will take N_+=2 without loss of generality. 
The quantum curve is the Schrödinger type equation (<ref>) withQ(x)= e^-2 i x/16 (-2+e^i xΛ_3^1/2)^2(-4 Λ_3-4 e^3 i xΛ_3^1/2(m_3 Λ_3+8 m_1 m_2-8 u)-e^2 i x(Λ_3^2-24 m_3 Λ_3+64 u) -4 (m_1-m_2)^2 e^4 i xΛ_3+4 e^i xΛ_3^1/2(Λ_3-8 m_3) ) +ħ^2 e^i xΛ_3^1/2/2 (-2+e^i xΛ_3^1/2)^2.The SW periods satisfy the Picard-Fuchs equation and the differential equations with respect to the mass parameter m_i (i=1,2,3) and the moduli parameter u.Since these equations are rather complicated, we will write down themfor thetheory with the same mass m:=m_1=m_2=m_3. In this casethe discriminant Δ_3 and D_3 becomeΔ_3=-Λ_3^2 (8 m^2+Λ_3 m-8 u)^3 (256 Λ_3 (8 m^3-3 m u)+8 Λ_3^2 (3 m^2+u)+3 Λ_3^3 m-2048 u^2)/4096, D_3= -Λ_3^4/256+12 Λ_3 m^3+Λ_3^2 (u-9 m^2/4)-16 u^2.Then the Picard-Fuchs equation is obtainedby substituting (<ref>) and (<ref>) into (<ref>).We can also confirm thatthe SW periods satisfy the differential equation:∂^2 Π^(0)/∂ m∂ u = b_3∂^2 Π^(0)/∂ u^2+c_3∂Π^(0)/∂ uwhereb_3=3 m (Λ_3 ^2+24Λ_3 m-128 u)/16 (16 m^2-Λ_3 m-4 u) , c_3=12 m/m (Λ_3-16 m)+4 u.We can also calculate the Picard-Fuchs equation for general mass case based on Δ_3 and D_3. In this case we can check thatthe quantum corrections to the SW periods Π^(0) are expressed as Π^(2) = [ ( 5/6 u -1/384Λ_3^2) ∂^2/∂ u^2+1/2∑_i=1^3 m _i ∂/∂ m_i∂/∂ u + 5/12∂/∂ u] Π^(0) ,Π^(4) = [7/10( 5/6 u -1/384Λ_3^2 )^2 ∂^4/∂ u^4 +47/20( 241/471/6 u-1/384Λ_3^2 ) ∂^3/∂ u^3 + 571/480∂^2/∂ u^2+ ∑_i=1^3 (7/10(5/6 u-1/384Λ_3^2 ) m_i ∂/∂ m_i∂^3/∂ u^3 + 131/120 m_i ∂/∂ m_i∂^2/∂ u^2)+∑_i=1^3 ∑_j=1^3 ( 7/40 m_i m_j ∂/∂ m_i∂/∂ m_j∂^2/∂ u^2)] Π^(0).The coefficientsare not singular when Δ _3=0. With help of the Picard-Fuchsequation and the differential equation with respect to the mass parameters, we can rewrite the quantum SW periods (<ref>) and (<ref>)in terms of a basis ∂_u Π ^(0) and ∂_u^2 Π ^(0). For the equal mass case, wefind that Π^(2) = [ ( 5/6 u -1/384Λ_3^2+1/2 m b_3) ∂^2 /∂ u^2+ ( 5/12 +1/2 m c_3 ) ∂/∂ u] Π^(0).In thisexpression, however, the coefficients become singular at the pointwhere Δ_3=0. But this representation is useful to discuss the decoupling limit to the N_f=0 theory.In the decoupling limit; m→∞ and Λ_3 → 0 with m^3 Λ_3=Λ_0^4 being fixed, the SW periods for N_f=3 theory agree with those for the N_f=0 theory.Moreover, we can show that the second and fourth order corrections to the quantum SW periodsbecome those of the N_f=0 theory in this limit.§.§ N_f=4 theoryIn the case of N_f=4, we will take N_+=2 in (<ref>). Otherwise, we get the third or fourth order differential equation. The quantum curve can be written in the form of the Schrödinger-type equation withQ(x)= e^-2 i x/4 (-4 √(q)cos (x)+q+4)^2(4 √(q) e^3 i x(m_1^2 q+m_2^2 q-m_1 m_2 (q+8)-m_3 m_4 q+8 u) +4 √(q) e^i x(m_3^2 q+m_4^2 q-m_3 m_4 (q+8)-m_1 m_2 q+8 u) -e^2 i x(q ((m_1^2+m_2^2+m_3^2+m_4^2) q -24 (m_1 m_2+m_3 m_4))+16 (q+4) u) -4q e^4 i x(m_1-m_2)^2 -4q (m_3-m_4)^2 ) +ħ^2 √(q) e^-i x(q e^2 i x-8 √(q) e^i x+q+4 e^2 i x+4)/2 (-4 √(q)cos (x)+q+4)^2.For simplicity, we consider the case that all the hypermultiplets have the same mass: m:=m_1=m_2=m_3=m_4.The SW periods Π ^(0) satisfy the Picard-Fuchs equation (<ref>) with the discriminant Δ_4 and D_4 which are given byΔ_4= 2^24 q^2 (m^2-u)^4 (m^4 (q-16) q+8 m^2 q u+16 u^2)/(q-4)^10, D_4= 16 (-m^4 q ((q-12)^2 q-192)-8 m^2 (q-8) q^2 u-16 ((q-4) q+16) u^2)/(q-4)^4.The quantum corrections to the SW periods are expressed in terms of the basis ∂_u Π^(0) and ∂^2_u Π ^(0). 
The second order correction is given byΠ^(2) = (X_2^1 ∂^2/∂ u^2 +X_2 ^2 ∂/∂ u)Π^(0) ,whereX_2^1 = --18m^4 q+m^4 q^2-8 m^2 u+10 m^2 q u+24 u^2 96 m^2, X_2^2 =--2m^2+m^2 q +6 u 48 m^2 .The fourth order correction isΠ^(4)= (X_4^1 ∂^2/∂ u^2 +X_4 ^2 ∂/∂ u)Π^(0) ,whereX_4^1= 1/ 46080 m^2 (m^2-u)^2(m^2 q-4 m^2 √(q)+4 u)^2 (m^2 q+4 m^2 √(q)+4u)^2×(7 m^14 q^8-399 m^14 q^7+8484 m^14 q^6-80616 m^14 q^5+312480 m^14q^4-284544 m^14 q^3+153600 m^14 q^2+175 m^12 q^7 u-7196 m^12 q^6 u+96504m^12 q^5 u-436320 m^12 q^4 u +266496 m^12 q^3 u-789504 m^12 q^2 u+1848m^10 q^6 u^2-51624 m^10 q^5 u^2+403488 m^10 q^4 u^2 -896256 m^10 q^3u^2+2328576 m^10 q^2 u^2+313344 m^10 q u^2+10648 m^8 q^5 u^3 -190176 m^8 q^4u^3+820224 m^8 q^3 u^3-1501184 m^8 q^2 u^3-921600 m^8 q u^3+35968 m^6 q^4u^4 -377984 m^6 q^3 u^4+881664 m^6 q^2 u^4-26624 m^6 q u^4-8192 m^6 u^4+70656 m^4q^3 u^5 -344064 m^4 q^2 u^5-325632 m^4 q u^5+24576 m^4 u^5+73728 m^2 q^2 u^6+12288m^2 q u^6 +319488 m^2 u^6+30720 q u^7+122880 u^7), X_4^2= 1/23040 m^2 (m^2-u)^2 (m^2 q-4 m^2√(q)+4 u)^2 (m^2 q+4 m^2 √(q)+4 u)^2×(7 m^12 q^7-287 m^12 q^6+3780 m^12 q^5-15816 m^12 q^4+1440 m^12q^3 -38400 m^12 q^2+147 m^10 q^6 u-4032 m^10 q^5 u+29736 m^10 q^4 u-55872m^10 q^3 u+225408 m^10 q^2 u+30720 m^10 q u+1260 m^8 q^5 u^2-21768 m^8 q^4u^2+88704 m^8 q^3 u^2-221952 m^8 q^2 u^2-133632 m^8 q u^2 +5608 m^6 q^4 u^3-56768m^6 q^3 u^3+147456 m^6 q^2 u^3+7168 m^6 q u^3-2048 m^6 u^3 +13536 m^4 q^3 u^4-64512m^4 q^2 u^4-58368 m^4 q u^4+6144 m^4 u^4+16512 m^2 q^2 u^5+3072 m^2 qu^5 +79872m^2 u^5+7680 q u^6+30720 u^6).In the decoupling limit m→∞ and q→ 0 with m^4 q=Λ_0^4 being fixed, the SW periods coincide with those for the N_f=0 theory. We can also show thatthe second and fourth order corrections of the quantum SW periods (<ref>) and (<ref>) in this limit agree with those for the N_f=0 theory . Wecan also consider the massless limit, where the Picard-Fuchs equationbecomes a simple form:∂^2 Π^(0)/∂ u^2 +1/2u∂Π^(0)/∂ u =0.Note that the coefficients X^1_k and X_k^2 in (<ref>) and (<ref>) become singular in the massless limit m → 0.In the massless case, it is found that(<ref>) and (<ref>) are replaced byΠ^(2) =( -u q8∂^2/∂ u^2+(q-4)q 16u∂/∂ q) Π^(0),Π^(4) =( -26q+11 q^2 2304∂^2/∂ u^2-(q-4)(-52q+35q^2) 4608 u^2∂/∂ q-(q-4)^2 q^2 288 u^2∂^2/∂ q^2) Π^(0),where these formulas include thederivative with respect to q in addition to the u-derivatives.In the following sections, we will computethe quantum SW periods both in the weak and strong coupling regions and compute the deformed (dual) prepotentials. § DEFORMED PERIODS IN THE WEAK COUPLING REGION In this section,for the completeness, we will discuss the expansion of the quantum SW periods in the weak coupling region and compute the deformed prepotential for the N_f theories <cit.>.Then wecompare the prepotential with the NS limit ofthe Nekrasov partition function <cit.>. Note that the deformed prepotentials for N_f=1,2,4 are obtained from the classical limit of the conformal blocks of two dimensional conformal field theories <cit.>. The SW periods (<ref>) around u=∞ have been given by (<ref>) and (<ref>) <cit.>. The quantum SW periods can be obtained by acting the differential operators on the SW periods a^(0) and a_D^(0).§.§ N_f≤ 3In the case of N_f=1, the discriminant Δ_1 and D_1 are given by (<ref>). Expanding a^(0)(u) and a^(0)_D(u) around u=∞ and substituting them into (<ref>) and (<ref>), weobtain the expansions around u=∞. 
§ DEFORMED PERIODS IN THE WEAK COUPLING REGION

In this section, for completeness, we will discuss the expansion of the quantum SW periods in the weak coupling region and compute the deformed prepotential for the N_f theories <cit.>. Then we compare the prepotential with the NS limit of the Nekrasov partition function <cit.>. Note that the deformed prepotentials for N_f=1,2,4 are obtained from the classical limit of the conformal blocks of two-dimensional conformal field theories <cit.>. The SW periods (<ref>) around u=∞ have been given by (<ref>) and (<ref>) <cit.>. The quantum SW periods can be obtained by acting with the differential operators on the SW periods a^(0) and a_D^(0).

§.§ N_f≤ 3

In the case of N_f=1, the discriminant Δ_1 and D_1 are given by (<ref>). Expanding a^(0)(u) and a^(0)_D(u) around u=∞ and substituting them into (<ref>) and (<ref>), we obtain the expansions around u=∞. They are found to be

a(u) = √(u/2)-Λ_1^3 m_1 (1/u)^3/2/2^4 √(2)+3 Λ_1^6 (1/u)^5/2/2^10√(2)+⋯+ħ^2 ( -Λ_1^3 m_1 (1/u)^5/2/2^6 √(2)+15 Λ_1^6 (1/u)^7/2/2^12√(2) -35 Λ_1^6 m_1^2 (1/u)^9/2/2^11√(2)+⋯) +ħ^4 ( -Λ_1^3 m_1 (1/u)^7/2/2^8 √(2)+63 Λ_1^6 (1/u)^9/2/2^14√(2)-273 Λ_1^6 m_1^2 (1/u)^11/2/2^14√(2)+⋯)+⋯,

a_D(u)= -i/2√(2)π[ √(2) a(u)( i π -3 log(16u/Λ_1^2)) +(6 √(u)+m_1^2/√(u)+(m_1^4/6-Λ_1^3 m_1/4)/u^3/2+⋯) +ħ^2(-1/4 √(u)-m_1^2/12 u^3/2+(-9 Λ_1^3 m_1/64-m_1^4/12)/u^5/2 +⋯)+ħ^4 (1/160 u^3/2+7 m_1^2/240 u^5/2+(7 m_1^4/96-127 Λ_1^3 m_1/2560)/u^7/2+⋯)+⋯].

Solving u in terms of a in (<ref>) and substituting it into a_D, a_D becomes a function of a. Then integrating it over a, we obtain the deformed prepotential:

ℱ_1(a,ħ) = 1/2π i[ℱ^pert_1(a,ħ)+ ∑_k=0^∞∑_n=1^∞ħ^2kℱ_1^(2k,n)( 1/a)^2n],

where the first few coefficients ℱ^(2k,n)_1 (k=0,1,2) are listed in table <ref>. The perturbative part ℱ^pert_1(a,ħ) of the prepotential is given by

ℱ^pert_1(a,ħ)=-3/2 a^2 log(a^2/Λ_1^2)+1/2ℱ^1_s-a^2 log a -3m_1^2/4+ħ^2 (-1/12 log a-1/96 ∂^2 ℱ_s^1/∂ a^2 +1/16)+ħ^4 ( -1/5760 a^2+7/2^10· 3^2 · 5 ∂^4ℱ_s^1/∂ a^4)+⋯,

where ℱ^1_s is defined as <cit.>

ℱ^1_s=(a+m_1/√(2))^2 log(a+m_1/√(2))+(a-m_1/√(2))^2 log(a-m_1/√(2)).

In a similar way, we can calculate the deformed prepotentials for the N_f=2 and 3 theories, which are expanded as

ℱ_N_f(a,ħ) = 1/2π i[ℱ^pert_N_f(a,ħ)+ ∑_k=0^∞∑_n=1^∞ħ^2kℱ_N_f^(2k,n)( 1/a)^2n],

where some coefficients ℱ_N_f^(2k,n) (k=0,1,2) are given in appendix <ref>. The perturbative parts are given by

ℱ_2^pert(a,ħ)=-a^2 log(a^2/Λ_2^2) +1/2ℱ^2_s-2a^2 log a-3/4(m_1^2+m_2^2)+ħ^2 (-1/12 log a-1/96 ∂^2 ℱ_s^2/∂ a^2 +1/8)+ħ^4 ( -1/5760 a^2+7/2^10· 3^2 · 5 ∂^4ℱ_s^2/∂ a^4)+⋯ ,

ℱ_3^pert(a,ħ)=-1/4 a^2 log(a^2/Λ_3^2) +1/2ℱ^3_s-3a^2 log a-∑_i=1^3 3/4 m_i^2 +ħ^2 (-1/12 log a-1/96 ∂^2 ℱ_s^3/∂ a^2 +3/16)+ħ^4 ( -1/5760 a^2+7/2^10· 3^2 · 5 ∂^4ℱ_s^3/∂ a^4)+⋯ ,

where ℱ^N_f_s (N_f=2,3) is defined as <cit.>

ℱ^N_f_s =∑_i=1^N_f((a+m_i/√(2))^2 log(a+m_i/√(2))+(a-m_i/√(2))^2 log(a-m_i/√(2))).

These deformed prepotentials are shown to be consistent with the decoupling limits. We now compare the prepotentials for the N_f=1,2,3 theories with the NS limit of the Nekrasov partition functions. By rescaling the parameters ħ, m_i (i=1,2,3) and Λ_N_f as

2π i ℱ(a,ħ) →ℱ(a,ϵ_1), Λ_N_f→ 2^2/(4-N_f)√(2)Λ_N_f , ħ→√(2)ϵ_1, m_i →√(2)m_i,

and then shifting the mass parameters as m_i → m_i+ϵ/2 for a fundamental matter or m_i →ϵ/2-m_i for an anti-fundamental matter, we find that the prepotential agrees with that obtained from the Nekrasov partition function <cit.>.

§.§ N_f=4

In the case of N_f=4, after rescaling y and x by a factor of 1-q/2 in the SW curve, we can apply the formulas (<ref>) and (<ref>). Expanding around q=0 and integrating over u, we obtain the SW periods a^(0) and a_D^(0) in the weak coupling region. To simplify the formulas, we consider the equal mass case m:=m_1=m_2=m_3=m_4, where the discriminant Δ_4 and D_4 are given in (<ref>). The deformed prepotential is

ℱ_4=1/2π i[ ℱ_4^pert(a,ħ)+∑_k=0^∞∑_n=1^∞ħ^2kℱ_4^(2k,n) q^n],

where the perturbative part is given by

ℱ_4^pert(a,ħ)= a^2 log q+1/2ℱ_s^4 -4 a^2 log a+ħ^2 ( -1/12 log a-1/96 ∂^2 ℱ_s^4/∂ a^2)+ħ^4 (-1/5760 a^2+7/2^10· 3^2 · 5 ∂^4 ℱ_s^4/∂ a^4)+⋯,

where

ℱ^4_s = 4( (a+m/√(2))^2 log(a+m/√(2))+(a-m/√(2))^2 log(a-m/√(2)) ).

The first several coefficients ℱ_4^(2k,n) for k=0,1,2 are given in appendix <ref>.
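The inversion step used throughout this section (solve u in terms of a order by order, substitute, integrate) can be mechanized. Below is a minimal sympy sketch of our own, applied to the ħ^0 part of the N_f=1 series quoted above; the ansatz coefficients p, r are fixed by matching powers of 1/a, and the result is only meaningful to the truncated orders:

import sympy as sp

a, u = sp.symbols('a u', positive=True)
m1 = sp.Symbol('m_1', positive=True)
L1 = sp.Symbol('Lambda_1', positive=True)
p, r = sp.symbols('p r')

# hbar^0 part of a(u), truncated at the orders quoted above
a_of_u = sp.sqrt(u/2) - L1**3*m1/(2**4*sp.sqrt(2))*u**sp.Rational(-3, 2) \
                      + 3*L1**6/(2**10*sp.sqrt(2))*u**sp.Rational(-5, 2)

# ansatz for the inverse series and order-by-order matching
u_of_a = 2*a**2 + p/a**2 + r/a**4
resid = sp.expand(sp.series(a_of_u.subs(u, u_of_a) - a, a, sp.oo, 8).removeO())
sol = sp.solve([resid.coeff(a, -3), resid.coeff(a, -5)], [p, r])
print(sp.simplify(u_of_a.subs(sol)))
# u(a) = 2a^2 + Lambda_1^3 m_1/(16 a^2) - 3 Lambda_1^6/(2048 a^4)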
By rescaling the parameters ħ, m and q as

2π i ℱ(a,ħ)→ℱ(a,ϵ_1), q→ 4 q, ħ→√(2)ϵ_1, m →√(2) m,

we find that (<ref>) agrees with the prepotential obtained from the NS limit of the Nekrasov partition function of the theory with equal masses, where the mass parameters must be shifted as m_i → m_i+ϵ/2 for a fundamental matter or m_i →ϵ/2-m_i for an anti-fundamental matter (i=1,⋯,4). For the massless case m=0, the Picard-Fuchs equation (<ref>) has a solution of the form:

Π^(0)=f(q) u^1/2,

where

f(q)=√(2)/((q-4)q+16)^1/4 F( 1/12,5/12;1;108(q-4)^2q^2/(q^2-4q+16)^3).

Then, using (<ref>) and (<ref>), the second and fourth order corrections to the SW periods can be written as

Π^(2)= 1/32 √(u)(q f(q)+2(q-4) ∂ f(q)/∂ q),

Π^(4) = -q/9216 u^3/2((11 q-26)f(q)+2 (q-4) (16 (q-4) q ∂^2 f(q)/∂ q^2+(35 q-52) ∂ f(q)/∂ q)).

It is found that the prepotential obtained from (<ref>), (<ref>) and (<ref>) coincides with (<ref>) for m=0.

§.§ Deformed effective coupling constant

From the relation (<ref>) and the Picard-Fuchs equation (<ref>), we can compute the deformed effective coupling. Differentiating (<ref>) with respect to u and applying the Picard-Fuchs equation (<ref>), we find

∂/∂ uΠ^(2k)= ( Y_2k^1∂^2/∂ u^2+Y_2k^2 ∂/∂ u) Π^(0),

where

Y_2k^1:= -p_1 X_2k^1+∂ X_2k^1/∂ u+X_2k^2, Y_2k^2:= -p_2 X_2k^1+∂ X_2k^2/∂ u.

Then taking the u-derivative of the quantum SW period Π=∑_k=0^∞ħ^2kΠ^(2k), we have

∂/∂ uΠ=( Y_1 ∂^2/∂ u^2+Y_2 ∂/∂ u) Π^(0),

where Y_1=∑_n=1^∞ħ^2n Y_2n^1, Y_2=1+∑_n=1^∞ħ^2n Y_2n^2. The deformed effective coupling is defined by

τ := ∂_u a_D/∂_u a.

The leading correction to the classical coupling constant τ^(0) =∂_u a_D^(0)/∂_u a^(0) is given by

τ=τ^(0)( 1+ħ^2 Y_2^1 ∂_u logτ^(0)+ 𝒪(ħ^4)).

Therefore the leading correction to the effective coupling constant is determined by the dimensionless coefficient Y_2^1 in (<ref>). Also, ∂_u logτ^(0) is proportional to the beta function at weak coupling. We will evaluate the coefficient Y_2^1 for some simple cases, where all hypermultiplets have the same mass m. For N_f=0, from the coefficients X_2^1 and X_2^2 in (<ref>) and p_1 =2u/(u^2-Λ^4_0), one finds

Y_2^1= 1/8-u^2/(6 (u^2-Λ_0^4)).

In a similar way we can compute the coefficient Y_2^1 for N_f≥ 1. The results are as follows. For N_f=1, we have

Y_2^1= 1/4 +( 1/2 m +3/16 b_1 ) c_1 -1/6(u+m b_1) (∂_u Δ_1/Δ_1+3/(4m^2-3u)).

For N_f=2, we have

Y_2^1= 1/2 +( 3m/4-2 b_2) c_2 - ( 1/3 u +m/4 b_2 ) ( ∂_u Δ_2/Δ_2-8 (3m^2-2u)/(8m^2-8u+Λ_2^2) c_2/m),

where b_2 =1/L_2 (b_2^(1)+b_2^(2)), c_2 =1/L_2 (c_2^(1)+c_2^(2)). For N_f=3, we have

Y_2^1=5/4+( 3/2 m-1/6 b_3 ) c_3-(5/6 u -1/384Λ_3^2+1/2 m b_3) (∂_u Δ_3/Δ_3-(24 m^2+8u+mΛ_3)/(-8m^2+8u-mΛ_3) c_3/m),

where b_3 and c_3 are given by (<ref>). For N_f=4, we find

Y_2^1= (1-q)/8-5 u/(8 m^2) -1/96(2 (4-5 q) u-m^2 (q-18) q-24 u^2/m^2) (∂_uΔ_4/Δ_4+3/(m^2-u)).

We have confirmed that the above formulas are consistent with the decoupling limit and that the deformed periods agree with those obtained from the NS limit of the Nekrasov partition function, explicitly up to the fourth order in ħ.
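The N_f=4 decoupling check, for instance, can be run symbolically; the following is a sketch of our own (the limit may take sympy a little while on the rational function involved), confirming that the N_f=4 coefficient flows to the N_f=0 one:

import sympy as sp

u, m = sp.symbols('u m', positive=True)
L0 = sp.Symbol('Lambda_0', positive=True)

q = L0**4/m**4   # decoupling: m -> oo with m^4 q = Lambda_0^4 fixed
Delta4 = 2**24*q**2*(m**2 - u)**4*(m**4*(q - 16)*q + 8*m**2*q*u
          + 16*u**2)/(q - 4)**10

dlog = sp.cancel(sp.diff(Delta4, u)/Delta4)
Y21_4 = (1 - q)/8 - 5*u/(8*m**2) \
        - sp.Rational(1, 96)*(2*(4 - 5*q)*u - m**2*(q - 18)*q - 24*u**2/m**2) \
          *(dlog + 3/(m**2 - u))

Y21_0 = sp.Rational(1, 8) - u**2/(6*(u**2 - L0**4))
print(sp.limit(sp.cancel(Y21_4 - Y21_0), m, sp.oo))   # expect 0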
§ DEFORMED PERIODS AROUND THE MASSLESS MONOPOLE POINT

In this section, we consider the quantum SW periods in the strong coupling region of the theories with N_f=1,2,3 hypermultiplets, where a BPS monopole/dyon becomes massless. In particular, we will consider the point in the u-plane such that the deformed BPS monopole becomes massless, a_D(u)=0. The dual SW period a_D^(0) becomes zero at the massless monopole point, where the discriminant Δ of the SW curve and also z=-27Δ/(4D^3) become zero. In the following, we explicitly calculate the expansion of the quantum SW periods around the classical massless monopole point. The periods around the dyon massless point can be analyzed in the same manner.

First we will give some general arguments on the quantum SW periods around the massless monopole point. The solutions to the Picard-Fuchs equation around the massless monopole point are given by <cit.>

∂_u a_D^(0) =√(2)i/2 (-D)^-1/4 F(1/12,5/12;1; z ),

∂_u a^(0) =√(2)/2 (-D)^-1/4[3/2π log 12 · F( 1/12, 5/12; 1; z ) -1/2π F_*( 1/12, 5/12; 1; z )].

Let u_0 be the massless monopole point in the u-plane, where Δ becomes zero. In general, z and (-D)^-1/4 have the following expansions around u_0:

z =∑_n=1^∞ r_n ũ^n, (-D)^-1/4=∑_n=0^∞ s_n ũ^n,

where ũ=u-u_0. Substituting (<ref>) into (<ref>) and (<ref>) and integrating with respect to u, the SW periods can be given in the following form:

a^(0)_D(ũ) =∑_n=1^∞ B_n ũ^n ,

a^(0)(ũ) =i/2π[l a^(0)_D(ũ) {log(r_l^1/l ũ)-3/l log 12 }+∑_n=1^∞ A_n ũ^n ],

where the constant of integration for a^(0)_D is fixed by the condition a^(0)_D(0)=0 and a^(0)(ũ) is given up to a constant which is independent of ũ. The integer l is defined as the smallest integer which gives a nonzero r_n, i.e., r_n =0 (n < l) and r_l ≠ 0. B_n and A_n are expressed in terms of r_n and s_n. The first three terms of B_n and A_n are given by

B_1 =i s_0/√(2), B_2 =i/2√(2)(s_1+s_0 r_1 f^(1)), B_3 =i/3√(2){s_2+(s_0 r_2+s_1 r_1)f^(1)+1/2 s_0 r_1^2 f^(2)},

A_1 =-l B_1, A_2 =-l/2 B_2+(r_{l+1}/r_l) 1/2 B_1+i/2√(2) s_0 r_1 g^(1),

A_3 =-l/3 B_3+(r_{l+1}/r_l) 2/3 B_2+ (r_{l+2}/r_l-r_{l+1}^2/(2r_l^2)) 1/3 B_1 +i/3√(2){(s_0 r_2+s_1 r_1)g^(1)+1/2 s_0 r_1^2 g^(2)},

where

f^(n) =(1/12)_n(5/12)_n/n!, g^(n) =(1/12)_n(5/12)_n/(n!)^2 ∑_r=0^n-1(1/(1/12+r)+1/(5/12+r)-2/(1+r)).

The higher order corrections in ũ can be calculated in a similar way. Once the SW periods around the massless monopole point are obtained, the quantum SW periods can be calculated by applying the differential operators, as in the weak coupling region. Thus what we have to do is to obtain the explicit value of u_0, which is one of the zeros of Δ, and the series expansions of z and (-D)^-1/4 around u_0. However, for general mass parameters, the expression of u_0 is rather complicated. Therefore we only give explicit expressions for the quantum SW periods in simpler cases: massless hypermultiplets and massive hypermultiplets with the same mass.

Before going to these examples, we will discuss an interesting phenomenon due to the quantum corrections. Although the undeformed SW period a^(0)_D(u) becomes zero at the massless monopole point u=u_0, the deformed SW period a_D(u) is not zero at the same value of u. This means that the massless monopole point is shifted in the u-plane by the quantum corrections. In fact, the quantum SW period a_D around ũ=0 takes the form ∑_k=0^∞ħ^2k a_D^(2k), where

a_D^(2k) =∑_n=0^∞ B_n^(2k)ũ^n.

Here B_n^(0):=B_n in (<ref>) with B_0^(0)=0, and B_1^(0), B_0^(2) and B_0^(4) are observed to be non-zero by explicit calculation. We then find that the massless monopole point U_0 of the deformed theory is expressed as

U_0=u_0+ħ^2 u_1+ħ^4 u_2 +⋯,

where u_1 and u_2 are determined by

u_1 =-B_0^(2)/B_1^(0), u_2 =-B_0^(4)/B_1^(0)-(B_1^(2)/B_1^(0))u_1-(B_2^(0)/B_1^(0))u_1^2.

We will compute these corrections explicitly in the following examples.
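These truncated formulas are easy to mechanize. The helper below is our own sympy transcription of the bookkeeping (valid only to the orders displayed above); it returns B_1..B_3 and A_1..A_3 from given expansion coefficients r_n, s_n and the integer l:

import sympy as sp

def f(n):
    return sp.rf(sp.Rational(1, 12), n)*sp.rf(sp.Rational(5, 12), n)/sp.factorial(n)

def g(n):
    h = sum(1/(sp.Rational(1, 12) + k) + 1/(sp.Rational(5, 12) + k)
            - sp.Rational(2, 1 + k) for k in range(n))
    return sp.rf(sp.Rational(1, 12), n)*sp.rf(sp.Rational(5, 12), n)/sp.factorial(n)**2*h

def BA_coefficients(r, s, l):
    """r, s: dicts {n: r_n}, {n: s_n}; l: smallest n with r_n != 0."""
    i, s2 = sp.I, sp.sqrt(2)
    B1 = i*s[0]/s2
    B2 = i/(2*s2)*(s[1] + s[0]*r[1]*f(1))
    B3 = i/(3*s2)*(s[2] + (s[0]*r[2] + s[1]*r[1])*f(1) + s[0]*r[1]**2*f(2)/2)
    A1 = -l*B1
    A2 = -sp.Rational(l, 2)*B2 + r[l+1]/r[l]*B1/2 + i/(2*s2)*s[0]*r[1]*g(1)
    A3 = (-sp.Rational(l, 3)*B3 + r[l+1]/r[l]*sp.Rational(2, 3)*B2
          + (r[l+2]/r[l] - r[l+1]**2/(2*r[l]**2))*B1/3
          + i/(3*s2)*((s[0]*r[2] + s[1]*r[1])*g(1) + s[0]*r[1]**2*g(2)/2))
    return (B1, B2, B3), (A1, A2, A3)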
§.§ Massless hypermultiplets

We discuss the case where the masses of the hypermultiplets are zero. This case gives a simple and interesting example, since the moduli space admits a discrete symmetry. We will consider the massless monopole point in the moduli space. The solutions of the Picard-Fuchs equation around the massless monopole point u_0 have been studied in <cit.>.

§.§ N_f=1

For the N_f=1 theory, the massless monopole point is u_0=-3Λ_1^2/2^8/3. Around u_0, z and (-D_1)^-1/4 are expanded as

z =-2^14/3/Λ_1^2 ũ-2^22/3· 5/3 Λ_1^4 ũ^2-47104/27 Λ_1^6 ũ^3+⋯,

(-D_1)^-1/4 =-i (2^1/3/3^1/3Λ_1 +2^2/3^3/2Λ_1^3 ũ+2^8/3/3^3/2Λ_1^5 ũ^2+⋯),

from which we can read off the coefficients r_n and s_n in the expansions (<ref>). Substituting these coefficients into (<ref>) and (<ref>), we can obtain the SW periods (a^(0)(u), a_D^(0)(u)). Then, using the relations (<ref>) and (<ref>), we obtain the expansion of the quantum SW periods around ũ=0:

a_D(ũ)= (ũ/2^1/6· 3^1/2Λ_1+ũ^2/2^1/2· 3^5/2Λ_1^3+ ũ^3/2^5/6· 3^11/2Λ_1^5+ ⋯) +ħ^2/Λ_1(5/2^19/6· 3^5/2+ 35/2^7/2· 3^9/2( ũ/Λ_1^2)+665/2^23/6· 3^15/2( ũ/Λ_1^2)^2 +⋯) +ħ^4/Λ^3_1( 2471/6^15/2+144347/2^53/6· 3^19/2( ũ/Λ_1^2)+1964347/2^55/6· 3^23/2( ũ/Λ_1^2)^2+⋯)+⋯ ,

a(ũ)= i/2π[ a_D(ũ)( -iπ +log(ũ/2^4/3 3^3 Λ_1^2)) +i (-ũ/2^1/6· 3^1/2Λ_1-5ũ^2/2^3/2· 3^5/2Λ_1^3-298 ũ^3/2^5/6· 3^13/2Λ_1^5+⋯) +iħ^2/Λ_1( -1/2^23/6· 3^1/2( ũ/Λ_1^2)^-1+13/2^19/6· 3^7/2+101/6^9/2( ũ/Λ_1^2)+⋯) +iħ^4/Λ_1^3(7/2^15/2· 3^1/2· 5( ũ/Λ_1^2)^-3+29/2^47/6· 3^5/2· 5 ( ũ/Λ_1^2)^-2+107/2^49/6· 3^9/2( ũ/Λ_1^2)^-1+⋯)].

Inverting the series of a_D in terms of ũ, we obtain ũ as a function of a_D. Substituting ũ into a and integrating a with respect to a_D, we obtain the dual prepotential:

ℱ_D1(a_D,ħ) = i/8π[a_D^2 log(a_D/Λ_1)^2-ħ^2/12 log(a_D)-7ħ^4/5760 a_D^2+⋯ + ∑_k=0^∞∑_n=1^∞Λ_1^2 ( ħ/Λ_1)^2kℱ_D1^(2k,n)( a_D/Λ_1)^n ],

where the first several coefficients ℱ_D1^(2k,n) (k=0,1,2) are listed in table <ref>.

§.§ N_f=2,3

For N_f=2, the massless monopole point is u_0=Λ_2^2/8. Then z and (-D_2)^-1/4 are expanded as

z = 108/Λ_2^4 ũ^2-432/Λ_2^6 ũ^3-3456/Λ_2^8 ũ^4+⋯ ,

(-D_2)^-1/4 =1/Λ_2-ũ/Λ_2^3-3 ũ^2/2 Λ_2^5+⋯ .

Then we have

a_D(u)= i (ũ/2^1/2Λ_2-ũ^2/2^3/2Λ_2^3+3 ũ^3/2^5/2Λ_2^5+⋯) +iħ^2/Λ_2( 1/2^7/2 -5/2^9/2( ũ/Λ_2^2)+35/2^11/2( ũ/Λ_2^2)^2+⋯) +iħ^4/Λ_2^3(-17/2^17/2+721/2^21/2( ũ/Λ_2^2) -10941/2^23/2( ũ/Λ_2^2)^2+⋯)+⋯ ,

a(u)= i/2π[ 2a_D(ũ)log(ũ/4Λ_2^2) +i (-2 ũ/2^1/2Λ_2-3 ũ^2/2^3/2Λ_2^3+ 12 ũ^3/2^5/2Λ_2^5+⋯) +iħ^2/Λ_2( 1/2^5/2· 3( ũ/Λ_2^2)^-1 +10/2^7/2· 3-77/2^9/2· 3 ( ũ/Λ_2^2)+⋯) +iħ^4/Λ_2^3( 7/2^11/2· 3^2 · 5( ũ/Λ_2^2)^-3-1/2^13/2· 5( ũ/Λ_2^2)^-2+53/2^15/2· 3· 5 ( ũ/Λ_2^2)^-1+⋯)+⋯] .

For N_f=3, the massless monopole point is u_0=0. Then z and (-D_3)^-1/4 are expanded as

z =2^22· 3^3/Λ_3^8 ũ^4+2^31· 3^3/Λ_3^10 ũ^5+2^34· 3^5 · 5/Λ_3^12 ũ^6+⋯ ,

(-D_3)^-1/4 =4/Λ_3+256/Λ_3^3 ũ+36864/Λ_3^5 ũ^2+⋯ .

Then we have

a_D(u)= i( 2^3/2ũ/Λ_3+2^13/2ũ^2/Λ_3^3+2^11· 3 ũ^3/Λ_3^5+⋯) +iħ^2/Λ_3(1/2^1/2+2^13/2( ũ/Λ_3^2) +2^19· 5^2 ( ũ/Λ_3^2)^2 +⋯) +iħ^4/Λ_3^3( 2^5/2· 5 +2^17/2· 43 ( ũ/Λ_3^2) + 2^25/2· 1141 ( ũ/Λ_3^2)^2+⋯) ,

a(u)= i/2π[ 4a_D(ũ) log(16 ũ/Λ_3^2) +i ( -2^7/2ũ/Λ_3+2^15/2· 3 ũ^2/Λ_3^3+2^29/2· 3 ũ^3/Λ_3^5+⋯) +iħ^2/Λ_3(-1/2^7/2( ũ/Λ_3^2)^-1+2^7/2/3 +2^13/2· 29/3 ( ũ/Λ_3^2)+⋯) + iħ^4/Λ_3^3(7/2^21/2· 3^2· 5( ũ/Λ_3^2)^-3-1/2^9/2· 3· 5 ( ũ/Λ_3^2)^-2+7/2^3/2· 5 ( ũ/Λ_3^2)^-1+⋯) ].
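As a consistency check of our own (with sympy), the shift formulas for u_1 and u_2 can be fed with the massless N_f=2 coefficients just quoted; the result reproduces the N_f=2 entry of the U_0 list given below:

import sympy as sp

L2 = sp.Symbol('Lambda_2', positive=True)
i, r = sp.I, sp.Rational

# read off a_D = sum_k hbar^{2k} sum_n B_n^{(2k)} t^n from the series above
B1_0 = i/(sp.sqrt(2)*L2)                # O(t) term of a_D^(0)
B2_0 = -i/(2**r(3, 2)*L2**3)            # O(t^2) term of a_D^(0)
B0_2 = i/(2**r(7, 2)*L2)                # O(1) term of a_D^(2)
B1_2 = -5*i/(2**r(9, 2)*L2**3)          # O(t) term of a_D^(2)
B0_4 = -17*i/(2**r(17, 2)*L2**3)        # O(1) term of a_D^(4)

u1 = sp.simplify(-B0_2/B1_0)
u2 = sp.simplify(-B0_4/B1_0 - (B1_2/B1_0)*u1 - (B2_0/B1_0)*u1**2)
print(u1, u2)   # -1/8 and 9/(256*Lambda_2**2)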
We then obtain the deformed dual prepotentials for the N_f=2 and 3 theories, which are given by

ℱ_D2(a_D,ħ)= i/8π[2a_D^2 log(a_D/Λ_2)^2+ħ^2/6 log(a_D)-7 ħ^4/2880 a_D^2+⋯ +∑_k=0^∞∑_n=1^∞Λ_2^2 ( ħ/Λ_2)^2kℱ_D2^(2k,n)( a_D/Λ_2)^n ]

for N_f=2 and

ℱ_D3(a_D,ħ)= i/8π[4a_D^2 log(a_D/Λ_3)^2+ħ^2/3 log(a_D)-7 ħ^4/1440 a_D^2+⋯ +∑_k=0^∞∑_n=1^∞Λ_3^2 ( ħ/Λ_3)^2kℱ_D3^(2k,n)( a_D/Λ_3)^n ]

for N_f=3, where the first several coefficients ℱ_DN_f^(2k,n) (N_f=2,3) are listed in tables <ref> and <ref>. The dual prepotentials include the classical and one-loop terms, as in (<ref>), (<ref>) and (<ref>) in the weak coupling region. These terms also appear in the pure SU(2) theory <cit.>.

Now we compute the shifted massless monopole point U_0 in the u-plane in these examples. Using the expansion of a_D, we obtain

U_0 = Λ_0^2-1/32 ħ^2 +9/32768 Λ_0^2 ħ^4+⋯ , N_f=0;
U_0 = -3Λ_1^2/2^8/3 -5/72 ħ^2 -1571/2^22/3 3^7 Λ_1^2 ħ^4+⋯ , N_f=1;
U_0 = Λ_2^2/8 -1/8 ħ^2+9/256 Λ_2^2 ħ^4+⋯ , N_f=2;
U_0 = -1/4 ħ^2 -4/Λ_3^2 ħ^4 +⋯ , N_f=3.

In the next subsection, we will discuss the expansion around the massless monopole point u_0 for the theory with massive hypermultiplets with the same mass.

§.§ Massive hypermultiplets with the same mass

We consider the case that all the hypermultiplets have the same mass m:=m_1=⋯=m_N_f. The classical massless monopole point u_0 corresponds to a solution of Δ_N_f=0. In the u-plane, it is found as follows:

u_0= -(64 m^4-216 Λ_1^3 m+8 m^2 H_1^1/3 -H_1^2/3)/(24 H_1^1/3), for N_f=1,
u_0= -Λ_2^2/8+Λ_2 m, for N_f=2,
u_0= 1/512(Λ_3^2-96 Λ_3 m+√(Λ_3 (Λ_3+64 m)^3)), for N_f=3,

where

H_1=729 Λ_1^6-512 m^6+4320 Λ_1^3 m^3+3 √(3)(27 Λ_1^4-64 Λ_1 m^3)^3/2.

In the decoupling limit m→∞ and Λ_N_f→ 0 with m^N_fΛ_N_f^(4-N_f)=Λ_0^4 being fixed, these points become the massless monopole point Λ_0^2 of the N_f=0 theory. If we consider the massless limit, these points become the massless monopole points of the massless N_f theory.

We first discuss the N_f=1 theory. Here we consider the small mass region |m|≪Λ_1, where u_0 is expanded around m=0 as <cit.>

u_0= -3 Λ_1^2/2^8/3-Λ_1 m/2^1/3+m^2/3+⋯.

From (<ref>), one obtains the expansion of the SW period a_D^(0) around u=u_0:

a_D^(0)(ũ)= ũ(1/2^1/6· 3^1/2Λ_1-2^3/2 m^2/3^7/2Λ_1^3+⋯)+ũ^2 (1/2^1/2·3^5/2Λ_1^3+ 2^17/6 m/3^7/2Λ_1^4+⋯) + ⋯ ,

where ũ=u-u_0. By using the relations (<ref>) and (<ref>), we get the quantum SW periods up to the fourth order in ħ around u=u_0:

a_D^(2)(ũ)= ( 5/2^13/6· 3^5/2Λ_1- m/2^5/6· 3^7/2Λ_1^2 +⋯) +ũ(35/2^7/2· 3^9/2Λ_1^3+5m/2^1/6· 3^11/2Λ_1^4+⋯)+⋯ ,

a_D^(4)(ũ)= ( 2471/6^15/2Λ_1^3-613 m/2^31/6· 3^15/2Λ_1^4+⋯)+ũ(144347/2^53/6· 3^19/2Λ_1^5+26495 m/2^9/2· 3^21/2Λ_1^6+⋯)+⋯.

From these expansions, we find that the massless monopole point U_0 is given by (<ref>) with

u_0= -3 Λ_1^2/2^8/3-Λ_1 m/2^1/3+m^2/3+⋯ ,
u_1= -5/2^3· 3^2+m/2^2/3· 3^3 Λ_1+ 5 m^2/2^1/3· 3^4 Λ_1^2+⋯ ,
u_2= -1571/2^22/3· 3^7 Λ_1^2+613 m/2^5· 3^7 Λ_1^3+11329 m^2/2^11/3· 3^9 Λ_1^4+⋯ .

For N_f=2, the massless monopole point U_0 is found to be (<ref>) with

u_0= -Λ_2^2/8+Λ_2 m,
u_1= -(m-2 Λ_2)/(32 m-16 Λ_2),
u_2= 9 (-8 Λ_2^3+m^3-2 Λ_2 m^2-26 Λ_2^2 m)/(2048 Λ_2 (Λ_2-2 m)^4).

In the case of |m|≪Λ_2, we have

u_0= -Λ_2^2/8+Λ_2 m,
u_1= -1/8-3 m/16 Λ_2-3 m^2/8 Λ_2^2+⋯,
u_2= -9/256 Λ_2^2-405 m/1024 Λ_2^3-2385 m^2/1024 Λ_2^4+⋯ .

For N_f=3 with |m| ≪Λ_3, we have

u_0= -3Λ_3 m/8- 3m^2+⋯ ,
u_1= -1/4+6 m/Λ_3-336 m^2/Λ_3^2+⋯ ,
u_2= -4/Λ_3^2+888 m/Λ_3^3-131904 m^2/Λ_3^4+⋯ ,

in (<ref>). Note that the first terms in the expansions of u_1 and u_2 correspond to those in the massless limit. We can perform a similar calculation of U_0 up to the fourth order in ħ for general m. We find that the massless monopole point is shifted by the ħ-corrections.
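The decoupling statement a few paragraphs above is immediate to verify symbolically for N_f=2 and 3 (the N_f=1 expression involves the cube roots of H_1 and is omitted here); this is our own check with sympy:

import sympy as sp

m, L0 = sp.symbols('m Lambda_0', positive=True)

# N_f=2: m^2 Lambda_2^2 = Lambda_0^4  =>  Lambda_2 = Lambda_0^2/m
Lam2 = L0**2/m
u0_2 = -Lam2**2/8 + Lam2*m

# N_f=3: m^3 Lambda_3 = Lambda_0^4
Lam3 = L0**4/m**3
u0_3 = (Lam3**2 - 96*Lam3*m + sp.sqrt(Lam3*(Lam3 + 64*m)**3))/512

print(sp.limit(u0_2, m, sp.oo))   # Lambda_0**2
print(sp.limit(u0_3, m, sp.oo))   # Lambda_0**2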
In Figure <ref>, we have plotted the deformed massless monopole point as a function of m/Λ_N_f, where we take ħ=1. For N_f=2, U_0 is singular at the Argyres-Douglas point, where m/Λ_2=1/2. This is because the ratios of B_n^(k) in (<ref>) and (<ref>) are divergent there. For N_f=1 and 3, however, these ratios are finite. In order to study the quantum SW periods near the Argyres-Douglas point, we need to rescale the Coulomb moduli and the mass parameters appropriately, which we leave for future work.

§ CONCLUSIONS AND DISCUSSION

In this paper, we have studied the low-energy effective theory of N=2 supersymmetric SU(2) gauge theory with N_f hypermultiplets in the NS limit of the Ω-background. The deformation of the periods of the SW differential is described by the quantum spectral curve, which is an ordinary differential equation and can be solved by the WKB method. The quantum spectral curve and the Picard-Fuchs equations for the SW periods provide an efficient tool to compute the series expansions with respect to the Coulomb moduli parameter and the deformation parameter ħ. We have found a simple formula representing the second and fourth order corrections to the SW periods, which are obtained by applying certain differential operators to the SW periods. In the weak coupling region we solved the differential equations up to the fourth order in ħ. We have explicitly checked that the quantum SW periods give the same prepotential as that obtained from the NS limit of the Nekrasov partition function.

We then studied the expansion of the quantum corrections around the massless monopole point. By solving the Picard-Fuchs equations for the SW periods, we have obtained the quantum corrections to the dual SW period a_D. We then found that the massless monopole points in the u-plane are shifted by the quantum corrections. It is interesting to explore the higher order corrections and how the structure of the moduli space is deformed by the quantum corrections. It is also interesting to study the expansion around the Argyres-Douglas point <cit.> in the u-plane, where mutually non-local BPS states become massless. A generalization to theories with general gauge groups and various hypermultiplets is also interesting.

§.§ Acknowledgements

We would like to thank K. Maruyoshi and K. Sakai for useful discussions. The work of KI is supported in part by Grant-in-Aid for Scientific Research 15K05043 and 16F16735 from the Japan Society for the Promotion of Science (JSPS). The work of SK is supported by the Iwanami-Fujukai Foundation.
§ ℱ_N_f^(2k,n) FOR THE N_f=2,3 AND 4 THEORIES

In this appendix we explicitly write down some coefficients in the expansion of the prepotentials for the N_f=2,3,4 theories in the weak coupling region.

§.§ N_f=2

For the N_f=2 theory the first four coefficients of the classical part of the prepotential in (<ref>) are

ℱ_2^(0,1)= Λ_2^4/4096+1/32 Λ_2^2 m_1 m_2 , ℱ_2^(0,2)= -3 Λ_2^4 m_1^2/8192-3 Λ_2^4 m_2^2/8192, ℱ_2^(0,3)= 5 Λ_2^8/134217728+5 Λ_2^4 m_1^2 m_2^2/16384+5 Λ_2^6 m_1 m_2/196608 , ℱ_2^(0,4)= -63 Λ_2^8 m_1^2/134217728-63 Λ_2^8 m_2^2/134217728-7 Λ_2^6 m_1^3 m_2/393216-7 Λ_2^6 m_1 m_2^3/393216 .

The coefficients in the second order correction to the prepotential are

ℱ_2^(2,1)= 0, ℱ_2^(2,2)= Λ_2^4/8192+1/256 Λ_2^2 m_1 m_2, ℱ_2^(2,3)= -15 Λ_2^4 m_1^2/65536-15 Λ_2^4 m_2^2/65536, ℱ_2^(2,4)= 21 Λ_2^8/134217728+21 Λ_2^4 m_1^2 m_2^2/65536+35 Λ_2^6 m_1 m_2/786432 .

For the fourth order corrections they are

ℱ_2^(4,1)= 0, ℱ_2^(4,2)= 0, ℱ_2^(4,3)= Λ_2^4/16384+Λ_2^2 m_1 m_2/2048, ℱ_2^(4,4)= -63 Λ_2^4 m_1^2/524288-63 Λ_2^4 m_2^2/524288.

§.§ N_f=3

For N_f=3 the coefficients of the prepotential in the expansion (<ref>) are given by

ℱ_3^(0,1)= Λ_3^4/33554432+∑_i=1^3 Λ_3^2 m_i^2/4096 +1/32 Λ_3 m_1 m_2 m_3, ℱ_3^(0,2)= ∑_i=1^3 -3 Λ_3^4 m_i^2/33554432 -∑_i<j 3 Λ_3^2 m_i^2 m_j^2/8192-Λ_3^3 m_1 m_2 m_3/32768, ℱ_3^(0,3)= 5 Λ_3^8/4503599627370496+∑_i=1^3 ( 5 Λ_3^6 m_i^2/103079215104 +5 Λ_3^4 m_i^4/134217728+5 Λ_3^3 m_1 m_2 m_3 m_i^2/196608)+∑_i<j 25 Λ_3^4 m_i^2 m_j^2/33554432+5 Λ_3^2 m_1^2 m_2^2 m_3^2/16384+7 Λ_3^5 m_1 m_2 m_3/268435456, ℱ_3^(0,4)= ∑_i=1^3 (-63 Λ_3^8 m_i^2/2251799813685248-7 Λ_3^6 m_i^4/103079215104-21 Λ_3^5 m_i^2 m_1 m_2 m_3/268435456) +∑_i ≠ j -63 Λ_3^4 m_i^4 m_j^2/134217728 +∑_i<j( -35 Λ_3^6 m_i^2 m_j^2/34359738368-7 Λ_3^3 m_i^2 m_j^2 m_1 m_2 m_3/393216)-3 Λ_3^7 m_1 m_2 m_3/137438953472-147 Λ_3^4 m_1^2 m_2^2 m_3^2/33554432 ,

for the classical part,

ℱ_3^(2,1)= -Λ_3^2/16384, ℱ_3^(2,2)= 5 Λ_3^4/134217728+∑_i=1^3 Λ_3^2 m_i^2/8192+1/256 Λ_3 m_1 m_2 m_3, ℱ_3^(2,3)= -5 Λ_3^6/412316860416 -∑_i=1^3 65 Λ_3^4 m_i^2/268435456 -∑_i<j 15 Λ_3^2 m_i^2 m_j^2/65536-35 Λ_3^3 m_1 m_2 m_3/786432, ℱ_3^(2,4)= 105 Λ_3^8/9007199254740992+∑_i=1^3 ( 35 Λ_3^6 m_i^2/103079215104 +21 Λ_3^4 m_i^4/134217728+35 Λ_3^3 m_1 m_2 m_3 m_i^2/786432) + ∑_i<j 147 Λ_3^4 m_i^2 m_j^2/67108864 +63 Λ_3^5 m_1 m_2 m_3/536870912 +21 Λ_3^2 m_1^2 m_2^2 m_3^2/65536 ,

for the second order in ħ, and

ℱ_3^(4,1)= 0, ℱ_3^(4,2)= -Λ_3^2/32768, ℱ_3^(4,3)= 141 Λ_3^4/2147483648+∑_i=1^3 Λ_3^2 m_i^2/16384 +Λ_3 m_1 m_2 m_3/2048, ℱ_3^(4,4)= -133 Λ_3^6/1649267441664-∑_i=1^3 147 Λ_3^4 m_i^2/268435456-∑_i<j 63 Λ_3^2 m_i^2 m_j^2/524288-343 Λ_3^3 m_1 m_2 m_3/6291456,

for the fourth order in ħ.
§.§ N_f=4

For the N_f=4 theory the coefficients of the prepotential (<ref>) are given by

ℱ_4^(0,1)= a^2/8+m^4/32 a^2, ℱ_4^(0,2)= 13 a^2/1024+11 m^4/2048 a^2-3 m^6/2048 a^4+5 m^8/16384 a^6, ℱ_4^(0,3)= 23 a^2/12288+17 m^4/16384 a^2-m^6/2048 a^4+15 m^8/65536 a^6-7 m^10/98304 a^8+3 m^12/262144 a^10, ℱ_4^(0,4)= 2701 a^2/8388608+1791 m^4/8388608 a^2-1125 m^6/8388608 a^4+6095 m^8/67108864 a^6-1673 m^10/33554432 a^8+2727 m^12/134217728 a^10-715 m^14/134217728 a^12+1469 m^16/2147483648 a^14,

for the classical part,

ℱ_4^(2,1)= m^4/256 a^4, ℱ_4^(2,2)= -m^2/4096 a^2+5 m^4/4096 a^4-15 m^6/16384 a^6+21 m^8/65536 a^8, ℱ_4^(2,3)= -m^2/16384 a^2+5 m^4/16384 a^4-5 m^6/12288 a^6+91 m^8/262144 a^8-43 m^10/262144 a^10+55 m^12/1572864 a^12, ℱ_4^(2,4)= -235 m^2/16777216 a^2+2487 m^4/33554432 a^4-8935 m^6/67108864 a^6+11235 m^8/67108864 a^8-38337 m^10/268435456 a^10+43505 m^12/536870912 a^12-29549 m^14/1073741824 a^14+18445 m^16/4294967296 a^16,

for the second order in ħ, and

ℱ_4^(4,1)= m^4/2048 a^6, ℱ_4^(4,2)= 1/65536 a^2-m^2/8192 a^4+7 m^4/16384 a^6-63 m^6/131072 a^8+219 m^8/1048576 a^10, ℱ_4^(4,3)= 1/262144 a^2-m^2/32768 a^4+119 m^4/786432 a^6-133 m^6/393216 a^8+1689 m^8/4194304 a^10-253 m^10/1048576 a^12+1495 m^12/25165824 a^14, ℱ_4^(4,4)= 235/268435456 a^2-973 m^2/134217728 a^4+24571 m^4/536870912 a^6-9457 m^6/67108864 a^8+68835 m^8/268435456 a^10-625537 m^10/2147483648 a^12+1765673 m^12/8589934592 a^14-353325 m^14/4294967296 a^16+985949 m^16/68719476736 a^18,

for the fourth order in ħ.

[Seiberg:1994rs] N. Seiberg and E. Witten, Nucl. Phys. B 426 (1994) 19; Erratum: [Nucl. Phys. B 430 (1994) 485] doi:10.1016/0550-3213(94)90124-4, 10.1016/0550-3213(94)00449-8 [hep-th/9407087].
[Seiberg:1994aj] N. Seiberg and E. Witten, Nucl. Phys. B 431 (1994) 484 doi:10.1016/0550-3213(94)90214-3 [hep-th/9408099].
[Argyres:1995jj] P. C. Argyres and M. R. Douglas, Nucl. Phys. B 448 (1995) 93 doi:10.1016/0550-3213(95)00281-V [hep-th/9505062].
[Argyres:1995xn] P. C. Argyres, M. R. Plesser, N. Seiberg and E. Witten, Nucl. Phys. B 461, 71 (1996) doi:10.1016/0550-3213(95)00671-0 [hep-th/9511154].
[Nekrasov:2002qd] N. A. Nekrasov, Adv. Theor. Math. Phys. 7 (2004) 831 [hep-th/0206161].
[Nekrasov:2003rj] N. Nekrasov and A. Okounkov, hep-th/0306238.
[Moore:1997dj] G. W. Moore, N. Nekrasov and S. Shatashvili, Commun. Math. Phys. 209 (2000) 97 [hep-th/9712241].
[Alday:2009aq] L. F. Alday, D. Gaiotto and Y. Tachikawa, Lett. Math. Phys. 91 (2010) 167 doi:10.1007/s11005-010-0369-5 [arXiv:0906.3219 [hep-th]].
[Gaiotto:2009ma] D. Gaiotto, J. Phys. Conf. Ser. 462, no. 1, 012014 (2013) doi:10.1088/1742-6596/462/1/012014 [arXiv:0908.0307 [hep-th]].
[Huang:2009md] M. x. Huang and A. Klemm, JHEP 1007 (2010) 083 doi:10.1007/JHEP07(2010)083 [arXiv:0902.1325 [hep-th]].
[Alday:2009fs] L. F. Alday, D. Gaiotto, S. Gukov, Y. Tachikawa and H. Verlinde, JHEP 1001 (2010) 113 doi:10.1007/JHEP01(2010)113 [arXiv:0909.0945 [hep-th]].
[Maruyoshi:2010iu] K. Maruyoshi and M. Taki, Nucl. Phys. B 841 (2010) 388 doi:10.1016/j.nuclphysb.2010.08.008 [arXiv:1006.4505 [hep-th]].
[Awata:2010bz] H. Awata, H. Fuji, H. Kanno, M. Manabe and Y. Yamada, Adv. Theor. Math. Phys. 16, no. 3, 725 (2012) doi:10.4310/ATMP.2012.v16.n3.a1 [arXiv:1008.0574 [hep-th]].
[Nekrasov:2009rc] N. A. Nekrasov and S. L. Shatashvili, arXiv:0908.4052 [hep-th].
[Poghossian:2010pn] R. Poghossian, JHEP 1104 (2011) 033 doi:10.1007/JHEP04(2011)033 [arXiv:1006.4822 [hep-th]].
[Mironov:2009uv] A. Mironov and A. Morozov, JHEP 1004 (2010) 040 doi:10.1007/JHEP04(2010)040 [arXiv:0910.5670 [hep-th]].
[Zenkevich:2011zx] Y. Zenkevich, Phys. Lett. B 701 (2011) 630 doi:10.1016/j.physletb.2011.06.030 [arXiv:1103.4843 [math-ph]].
[Beccaria:2016wop] M. Beccaria, JHEP 1607, 055 (2016) doi:10.1007/JHEP07(2016)055 [arXiv:1605.00077 [hep-th]].
[He:2016khf] W. He, arXiv:1608.05350 [math-ph].
[Mironov:2009dv] A. Mironov and A. Morozov, J. Phys. A 43 (2010) 195401 doi:10.1088/1751-8113/43/19/195401 [arXiv:0911.2396 [hep-th]].
[Popolitov:2010bz] A. Popolitov, Theor. Math. Phys. 178 (2014) 239, arXiv:1001.1407 [hep-th].
[He:2010xa] W. He and Y. G. Miao, Phys. Rev. D 82 (2010) 025020 doi:10.1103/PhysRevD.82.025020 [arXiv:1006.1214 [hep-th]].
[Krefl:2014nfa] D. Krefl, JHEP 1412, 118 (2014) doi:10.1007/JHEP12(2014)118 [arXiv:1410.7116 [hep-th]].
[Basar:2015xna] G. Basar and G. V. Dunne, JHEP 1502, 160 (2015) doi:10.1007/JHEP02(2015)160 [arXiv:1501.05671 [hep-th]].
[Kashani-Poor:2015pca] A. K. Kashani-Poor and J. Troost, JHEP 1508, 160 (2015) doi:10.1007/JHEP08(2015)160 [arXiv:1504.08324 [hep-th]].
[Ashok:2016yxz] S. K. Ashok, D. P. Jatkar, R. R. John, M. Raman and J. Troost, JHEP 1607 (2016) 115 doi:10.1007/JHEP07(2016)115 [arXiv:1604.05520 [hep-th]].
[Basar:2017hpr] G. Basar, G. V. Dunne and M. Unsal, arXiv:1701.06572 [hep-th].
[Dorey:1996bn] N. Dorey, V. V. Khoze and M. P. Mattis, Nucl. Phys. B 492 (1997) 607 doi:10.1016/S0550-3213(97)00132-6 [hep-th/9611016].
[Hanany:1995na] A. Hanany and Y. Oz, Nucl. Phys. B 452 (1995) 283 doi:10.1016/0550-3213(95)00376-4 [hep-th/9505075].
[Ceresole:1994fr] A. Ceresole, R. D'Auria and S. Ferrara, Phys. Lett. B 339 (1994) 71 doi:10.1016/0370-2693(94)91134-7 [hep-th/9408036].
[Klemm:1995wp] A. Klemm, W. Lerche and S. Theisen, Int. J. Mod. Phys. A 11 (1996) 1929 doi:10.1142/S0217751X96001000 [hep-th/9505150].
[Ito:1995ga] K. Ito and S. K. Yang, Phys. Lett. B 366, 165 (1996) doi:10.1016/0370-2693(95)01310-5 [hep-th/9507144].
[Ohta:1996hq] Y. Ohta, J. Math. Phys. 37 (1996) 6074 doi:10.1063/1.531764 [hep-th/9604051].
[Ohta:1996fr] Y. Ohta, J. Math. Phys. 38 (1997) 682 doi:10.1063/1.531858 [hep-th/9604059].
[Masuda:1996xj] T. Masuda and H. Suzuki, Int. J. Mod. Phys. A 12, 3413 (1997) [Int. J. Mod. Phys. A 12, 9700179 (1997)] doi:10.1142/S0217751X97001791 [hep-th/9609066].
[Erdelyi] A. Erdelyi et al., "Higher Transcendental Functions", Vol. 1, McGraw-Hill, New York.
[Huang:2012kn] M. x. Huang, JHEP 1206, 152 (2012) doi:10.1007/JHEP06(2012)152 [arXiv:1205.3652 [hep-th]].
[He:2013fda] W. He, JHEP 1411, 030 (2014) doi:10.1007/JHEP11(2014)030 [arXiv:1306.4590 [hep-th]].
[Piatek:2011tp] M. Piatek, JHEP 1106, 050 (2011) doi:10.1007/JHEP06(2011)050 [arXiv:1102.5403 [hep-th]].
[Ferrari:2012gc] F. Ferrari and M. Piatek, JHEP 1205, 025 (2012) doi:10.1007/JHEP05(2012)025 [arXiv:1202.2149 [hep-th]].
[Piatek:2016xhq] M. Piatek and A. R. Pietrykowski, JHEP 1607, 131 (2016) doi:10.1007/JHEP07(2016)131 [arXiv:1604.03574 [hep-th]].
[Ohta:1998ib] Y. Ohta, J. Math. Phys. 40, 1891 (1999) doi:10.1063/1.532839 [hep-th/9809180].
[Eguchi:1996vu] T. Eguchi, K. Hori, K. Ito and S. K. Yang, Nucl. Phys. B 471 (1996) 430 doi:10.1016/0550-3213(96)00188-5 [hep-th/9603002].
[Masuda:1996np] T. Masuda and H. Suzuki, Nucl. Phys. B 495, 149 (1997) doi:10.1016/S0550-3213(97)00199-5 [hep-th/9612240].
http://arxiv.org/abs/1705.09120v3
{ "authors": [ "Katsushi Ito", "Shoichi Kanno", "Takafumi Okubo" ], "categories": [ "hep-th" ], "primary_category": "hep-th", "published": "20170525102101", "title": "Quantum periods and prepotential in ${\\cal N}=2$ SU(2) SQCD" }
§ INTRODUCTION

In recent years a great deal of spectroscopic data concerning a very large number of Galactic stars has appeared in the literature. Many surveys have released chemical abundances for hundreds of thousands of stars in the Galaxy: for example RAVE (Steinmetz et al. 2006), SEGUE-1 (Yanny et al. 2009), SEGUE-2 (Rockosi et al. 2009), ARGOS (Freeman et al. 2013), LAMOST (Cui et al. 2012), Gaia-ESO (Gilmore et al. 2012), GALAH (Zucker et al. 2012), APOGEE (Majewski et al. 2015) and the AMBRE project (de Laverny et al. 2013). It is therefore very important to construct theoretical models able to reproduce the observed abundance patterns. Galactic chemical evolution models had a great start thanks to the pioneering work of Beatrice Tinsley (1941-1981), who established the basis of this important field. Galactic chemical evolution describes how the gas and its chemical composition evolve in galaxies of different morphological type. As is well known, during the Big Bang only light elements were formed (H, He, D, Li), while all the other species from carbon on (elements with A ≥ 12) were built inside the stars: stars produce chemical elements and restore them into the ISM, out of which new stars will be born. This is the chemical evolution process. Many chemical evolution models have been proposed in the last 40 years and have demonstrated how important it is to relax the hypothesis of the instantaneous recycling approximation when computing the evolution of the abundances of different chemical elements, as well as how important the relative contributions of Type Ia and core-collapse supernovae (SNe) are to the chemical enrichment.

In particular, the principle of the "time-delay model" has been established, which allows us to interpret the abundance patterns measured in stars: for example, the [α/Fe] versus [Fe/H] relation and how it is expected to vary in different galaxies (see Matteucci 2012 for a review on the subject). The time-delay model interprets the observed trends as due to the different timescales on which different chemical elements are produced, as for example α-elements and iron. The α-elements are produced on short timescales by core-collapse SNe, whereas Fe is produced on longer timescales (with a delay) by Type Ia SNe. By means of such an interpretation we are able to establish the timescales of the formation of galaxies and of separate galactic components, such as the halo, disk and bulge of the Milky Way. The stars in each component show different abundance patterns, indicating different histories of star formation. In fact, through the chemical abundances we can infer the formation and evolutionary history of galaxies and their components, and this approach is known as the "astroarchaeological approach". In this paper, we will concentrate on the evolution of the Milky Way and on what we have learned up to now. We will present model results compared to the most important and recent observations. From these comparisons we will derive the timescales for the formation of the different Galactic components as well as constraints on stellar nucleosynthesis.

§ THE CHEMICAL EVOLUTION MODEL FOR THE MILKY WAY

The main ingredients to build a chemical evolution model are: i) initial conditions, ii) the history of star formation, namely the star formation rate (SFR) and the initial mass function (IMF), iii) the stellar yields, iv) gas flows in and out of the galaxy.
The most common parametrization of the SFR is that of Kennicutt (1998), which we also adopt:

ψ(t)=νσ_gas^1.4,

where ν is the efficiency of star formation and it should be tuned to reproduce the present-time SFR in the object one is modeling. The IMF is normally a power law. The most commonly adopted IMFs are those of Salpeter (1955), Scalo (1986), Kroupa et al. (1993), Kroupa (2001) and Chabrier (2003). The stellar yields are very important ingredients for chemical evolution; the nucleosynthesis prescriptions we adopt in our model are described below.

§.§ Nucleosynthesis prescriptions

For chemical evolution models, the nucleosynthesis prescriptions and the implementation of the yields in the model are fundamental ingredients. In this work, we adopt the same nucleosynthesis prescriptions as model 15 of Romano et al. (2010), where a detailed description of the adopted yields can be found. As regards the computation of the stellar yields, one has to distinguish between different mass ranges, as well as single stars versus binary systems:

* low- and intermediate-mass stars (0.8 M_⊙-8 M_⊙), which are divided into single stars and binary systems. Binary systems formed by a white dwarf and a low- or intermediate-mass star companion can originate either Type Ia SNe or novae (when the companion is a low-mass star),

* massive stars (M > 8 M_⊙).

§.§.§ Low- and intermediate-mass stars

* Single stars. The single stars in this mass range contribute to the galactic chemical enrichment through planetary nebula ejection and quiescent mass loss along the giant and asymptotic giant branches. They enrich the ISM mainly in He, C, N and heavy s-process elements. They can also produce non-negligible amounts of ^7Li. For these stars, which end their lives as white dwarfs, we adopt the prescriptions of Karakas (2010).

* Type Ia SNe. Type Ia SNe are thought to originate from carbon deflagration in C-O white dwarfs in binary systems. Type Ia SNe contribute a substantial amount of iron (0.6 M_⊙ per event) and non-negligible quantities of Si, S and Ca. They also contribute to other elements, such as O, C, Ne and Mg, but in negligible amounts compared to the masses of such elements ejected by massive stars. The adopted nucleosynthesis prescriptions are from Iwamoto et al. (1999).

§.§.§ Massive stars

Massive stars are the progenitors of Type II, Ib and Ic SNe and they are known as "core-collapse SNe". In particular, SNe Ib,c are the explosions of stars with masses larger than ∼ 30 M_⊙, whereas SNe II originate from stars in the mass range 8< M/M_⊙<30. If the explosion energies are significantly higher than 10^51 erg, hypernova events may occur (SNe Ic). For core-collapse SNe, we adopt up-to-date stellar evolution calculations by Kobayashi et al. (2006) for the following elements: Na, Mg, Al, Si, S, Ca, Sc, Ti, Cr, Mn, Co, Ni, Fe, Cu and Zn. As for the He and CNO elements, we take into account the results of Geneva models for rotating massive stars (see Romano et al. 2010 for references).
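To make the interplay of these ingredients concrete, the sketch below integrates a toy one-zone model combining the Kennicutt SFR above with the time-delay idea: α-elements follow the SFR almost instantaneously, while part of the Fe is injected with a delay mimicking Type Ia SNe. All numbers (yields, efficiency, the single delay time) are illustrative placeholders of our own, not the Romano et al. (2010) prescriptions used in the actual model.

import numpy as np

nu = 1.0                          # star formation efficiency [Gyr^-1]
dt, t_end = 1e-3, 13.0            # time step and final time [Gyr]
y_alpha, y_fe_cc = 2e-3, 5e-4     # illustrative core-collapse yields
y_fe_ia, tau_ia = 1e-3, 1.0       # illustrative SNIa yield and delay [Gyr]

n = int(t_end / dt)
sigma_gas = np.empty(n); sigma_gas[0] = 1.0   # normalized gas surface density
psi = np.zeros(n)                              # SFR history
z_alpha = np.zeros(n); z_fe = np.zeros(n)      # cumulative ejecta (arb. units)

for k in range(1, n):
    psi[k-1] = nu * sigma_gas[k-1] ** 1.4      # Kennicutt (1998) law
    k_ia = k - 1 - int(tau_ia / dt)            # SNIa rate ~ SFR one delay earlier
    r_ia = psi[k_ia] if k_ia >= 0 else 0.0
    sigma_gas[k] = sigma_gas[k-1] - psi[k-1] * dt   # no infall in this toy version
    z_alpha[k] = z_alpha[k-1] + y_alpha * psi[k-1] * dt
    z_fe[k] = z_fe[k-1] + (y_fe_cc * psi[k-1] + y_fe_ia * r_ia) * dt

# [alpha/Fe] (up to a solar zero point) starts declining once SNe Ia set in
alpha_fe = np.log10(z_alpha[1:] / np.maximum(z_fe[1:], 1e-15))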
§.§ The model for halo and disks

The results we will present relative to the Galactic halo, thick and thin disk are based on the two-infall model of Chiappini et al. (1997). In this model we assume that the halo and thick disk formed out of a relatively fast episode of gas accretion occurring on a timescale no longer than 2 Gyr, whereas the thin disk formed out of another accretion episode occurring on much longer timescales: an inside-out formation with the solar region forming on a timescale of 7 Gyr. The assumed progenitors of Type Ia SNe are white dwarfs in binary systems, in particular the single-degenerate model (see Matteucci & Greggio 1986; Matteucci & Recchi 2001). The IMF adopted for halo and disks is the Scalo (1986) one. A gas threshold of 7 M_⊙ pc^-2 for star formation is assumed. This threshold naturally produces a gap in the star formation between the end of the halo-thick disk phase and the thin disk phase.

In Figure 1 we show the results obtained for the yields discussed above (continuous line) as well as for older sets of yields (dashed line), as discussed in Romano et al. (2010). It is clear from Figure 1 that the trend of some chemical elements is well reproduced, whereas there are some elements whose yields should be strongly revised (e.g., K, Sc, Ti). Another model, after that of Chiappini et al. (1997), has been suggested for the Milky Way (Micali et al. 2013), where the thick disk phase was considered separately from the halo and thin disk formation. This model has been called the "three-infall model". In this scenario, the halo formed very fast during a first accretion episode, followed by the accretion episode forming the thick disk, on a short timescale, though longer than that of the halo (∼ 2 Gyr).

In Figure 2 we show the [O/Fe] vs. [Fe/H] predicted by the three-infall model, compared to data relative to the halo, thick and thin disk. The halo, thick and thin disk star data are indicated in the Figure, but a clear separation between thick and thin disk stars is not evident, so that strong conclusions cannot be derived, except that the thick disk stars, being α-enhanced, should have formed in a faster process than the thin disk stars. Recently, many data have appeared on the thick and thin disk stars (Hayden et al. 2015; Rojas-Arriagada et al. 2017) where a clear separation is evident. In Figure 3 we show some recent data from the Gaia-ESO survey (Rojas-Arriagada et al. 2017) where the thick and thin disk stars are clearly separated in the plot of [Mg/Fe] vs. [Fe/H]. In particular, it looks as if the thick and thin disks formed on two different timescales but in parallel and not sequentially, as in the model described above. From the models presented above and the comparison with the data, we can conclude that the halo formed on a timescale of less than 1 Gyr, the thick disk on a timescale of 1-2 Gyr, no longer, as proven by the observed enhanced abundances of α-elements relative to Fe, and the thin disk in the solar vicinity formed on a much longer timescale (7 Gyr).
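For concreteness, the two-infall accretion law can be sketched as the sum of two exponential episodes; the functional form below is the one usually adopted for this class of models, but the normalizations A, B and the short timescale are illustrative placeholders of our own (the text only fixes the thin disk timescale at 7 Gyr for the solar region and the halo/thick disk episode at no more than 2 Gyr).

import numpy as np

def two_infall_rate(t, A=1.0, tau1=1.0, B=0.3, tau2=7.0, t_max=1.0):
    """Gas accretion rate (arbitrary units) vs time t in Gyr: a fast
    exponential episode forming halo + thick disk, plus a slower episode
    switched on at t_max, forming the thin disk inside-out."""
    rate = A * np.exp(-t / tau1)
    rate = rate + np.where(t >= t_max, B * np.exp(-(t - t_max) / tau2), 0.0)
    return rate

t = np.linspace(0.0, 13.0, 131)
print(two_infall_rate(t)[:5])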
§.§ The Galactic bulge

The Galactic bulge shows a stellar metallicity distribution function peaked at higher metallicity than the G-dwarfs in the solar vicinity. This means that the bulge stars formed faster than those in the solar vicinity. Hill et al. (2011) suggested that the Galactic bulge contains two main stellar populations: a) a classical bulge stellar population, called metal poor, and b) a metal-rich population, which can be the result of star formation induced by the Galactic bar or be made of inner disk stars. The most recent data on abundance ratios in bulge stars (coming from high-resolution, high-S/N spectroscopic data) are from the Gaia-ESO survey. In Figure 4 we show these data together with the prediction of a bulge model relative to the classical bulge population, as described in Grieco et al. (2012), assuming a very fast SFR with star formation efficiency ν=25 Gyr^-1 and timescale of gas accretion τ=0.1 Gyr. As one can see, the model prediction fits the data well, thus confirming previous papers (Matteucci & Brocato 1990; Ballero et al. 2007; Cescutti & Matteucci 2011; Grieco et al. 2012) suggesting a very fast bulge formation, at least for the main population (the endemic metal-poor classical one). The stellar metallicity distribution function for bulge stars, where there is evidence of two populations (Hill et al. 2011; Gonzalez et al. 2015; Zoccali et al. 2017, GIBS survey; Rojas-Arriagada et al. 2014, 2017, Gaia-ESO survey; Schultheis et al. 2017, APOGEE survey; in contrast with Ness et al. 2013, ARGOS survey), is shown in Figure 5, with the model results of Grieco et al. (2012) compared to the observations.

§.§ Abundance gradients along the Galactic disk

Different models for the evolution of the entire thin disk have appeared in the past years. These models include radial gas flows and stellar migration (Schoenrich & Binney 2009; Spitoni & Matteucci 2011; Spitoni et al. 2015; Kubryk et al. 2015). Radial gas flows are important for the creation of an abundance gradient in the gas along the thin disk. Other important parameters are: i) the inside-out formation of the disk, ii) the existence of a gas threshold for star formation. In Figure 6 we show the gradients predicted by Spitoni & Matteucci (2011). It is clear from the comparison with the O abundances measured in HII regions, planetary nebulae and O, B stars that the model with inside-out formation and radial gas flows with variable speed best reproduces the observed gradient. Models with a timescale of gas accretion that is constant with galactocentric distance cannot predict any gradient. Models without inside-out formation but with radial gas flows can in principle produce a gradient. On the other hand, a model with inside-out formation but without a gas threshold for star formation and without radial gas flows can still produce a gradient in reasonable agreement with the data, but only for the very inner disk regions, as is evident from Figure 6. For the outermost regions there is practically no gradient.

§ SUMMARY AND CONCLUSIONS

Galactic astroarchaeology is a useful tool to infer the timescales for the formation of the various Galactic components: halo, thick disk, thin disk and bulge. To do that, one compares the predicted and observed abundance patterns. From the discussion above we can suggest the following:

* The halo phase lasted no longer than 0.5-1.0 Gyr. This is dictated by the large overabundances of α-elements observed in halo stars. The thick disk also formed quickly, although on a longer timescale than the halo. Thick disk stars also show overabundances of α-elements, and their metallicity distribution function can be well reproduced if a timescale no longer than 2 Gyr is assumed (Micali et al. 2013).

* The thin disk instead formed on a much longer timescale and inside-out. In particular, the inner regions formed on timescales of 1-2 Gyr, whereas the solar region took at least 7 Gyr, as suggested by the metallicity distribution function of the G-dwarfs, and the outermost regions (R>14 kpc) over 10 Gyr.

* Probably the star formation stopped briefly between the formation of the halo and the thick disk and between the thick and thin disks. Haywood et al. (2016), by considering APOGEE data (Hayden et al. 2015), concluded that there was a quenching of star formation at the end of the thick disk phase. Kubryk et al. (2015) suggested instead that the thick disk is the result of stellar migration: they concluded that the thick disk is the early part of the Milky Way disk.
Very recent data from the Gaia-ESO survey (Rojas-Arriagada et al. 2017) seem to suggest that the thick and thin disks formed in parallel. In such a case we would not expect any halt in the star formation, but the timescales of formation of the two disks should be the same as suggested above.

* The Galactic bulge formed very quickly, on a timescale of 0.3-0.5 Gyr, at least for the bulk of its stars, the classical bulge stellar population. A population of stars whose formation was triggered by the bar (those participating in the X-shaped bulge) is also present; at the metal-rich end of the bulge MDF it outnumbers the possibly small fraction of endemic metal-rich bulge stars belonging to the initial classical population. The stellar metallicity distribution is in fact bimodal.

* Finally, Galactic abundance gradients can arise as a result of the inside-out formation of the thin disk coupled with radial flows with a speed variable with galactocentric distance (Spitoni & Matteucci 2011).

Ballero, S., Matteucci, F., Origlia, L., Rich, R.M. 2007, A&A, 467, 123
Cescutti, G. and Matteucci, F. 2011, A&A, 525, 126
Chabrier, G. 2003, ApJ, 586, L133
Chiappini, C., Matteucci, F., Gratton, R. 1997, ApJ, 477, 765
Cui, X.Q., Zhao, Y.H., Chi, Y.Q., et al. 2012, RAA, 12, 1197
de Laverny, P., Recio-Blanco, A., Worley, C.C., et al. 2013, The Messenger, 153, 18
Freeman, K., Ness, M., Wylie-de-Boer, E., et al. 2013, MNRAS, 428, 3660
Gilmore, G., Randich, S., Asplund, M., et al. 2012, The Messenger, 147, 25
Gonzalez, O. A., Zoccali, M., Vasquez, S., et al. 2015, A&A, 584, A46
Grieco, V., Matteucci, F., Pipino, A., Cescutti, G. 2012, A&A, 548, 60
Hayden, M.R., Bovy, J., Holtzman, J.A., et al. 2015, ApJ, 808, 132
Haywood, M., Lehnert, M.D., Di Matteo, P., et al. 2016, A&A, 589, A66
Hill, V., Lecureur, A., Gómez, A., Zoccali, M., Schultheis, M., Babusiaux, C., Royer, F., Barbuy, B., et al. 2011, A&A, 534, 80
Iwamoto, K., Brachwitz, F., Nomoto, K., Kishimoto, N., Umeda, H., Hix, W. R., Thielemann, F. K. 1999, ApJS, 125, 439
Karakas, A. I. 2010, MNRAS, 403, 1413
Kennicutt, R. C., Jr. 1998, ApJ, 498, 541
Kobayashi, C., Umeda, H., Nomoto, K., Tominaga, N., Ohkubo, T. 2006, ApJ, 653, 1145
Kroupa, P., Tout, C. A., Gilmore, G. 1993, MNRAS, 262, 545
Kroupa, P. 2001, MNRAS, 322, 231
Kubryk, M., Prantzos, N., Athanassoula, E. 2015, A&A, 580, A126
Majewski, S.R., Schiavon, R.P., Frinchaboy, P.M., et al. 2015, arXiv:1509.05420
Matteucci, F. 2012, Chemical Evolution of Galaxies, Springer-Verlag, Berlin
Matteucci, F. and Brocato, E. 1990, ApJ, 365, 539
Matteucci, F. and Greggio, L. 1986, A&A, 154, 279
Matteucci, F. and Recchi, S. 2001, ApJ, 558, 351
Micali, A., Matteucci, F., Romano, D. 2013, MNRAS, 436, 1648
Ness, M., Freeman, K., Athanassoula, E., et al. 2013, MNRAS, 430, 836
Rockosi, C., Beers, T.C., Majewski, S., Schiavon, R., Eisenstein, D. 2009, in ArXiv Astrophysics e-prints, Vol. 2010, astro2010: The Astronomy and Astrophysics Decadal Survey
Rojas-Arriagada, A., Recio-Blanco, A., Hill, V., et al. 2014, A&A, 569, A103
Rojas-Arriagada, A., Recio-Blanco, A., de Laverny, P., Mikolaitis, S., Matteucci, F., Spitoni, E., Schultheis, M., Hayden, M., Hill, V., Zoccali, M., et al. 2017, arXiv:1704.03325
Romano, D., Karakas, A. I., Tosi, M., Matteucci, F. 2010, A&A, 522, A32
Salpeter, E.E. 1955, ApJ, 121, 161
Scalo, J.M. 1986, Fund. Cosmic Phys., 11, 1
Schoenrich, R. and Binney, J. 2009, MNRAS, 396, 203
Schultheis, M., Rojas-Arriagada, A., García Pérez, A. E., et al. 2017, A&A, 600, A14
Spitoni, E., Romano, D., Matteucci, F., Ciotti, L. 2015, ApJ, 802, 129
Spitoni, E. and Matteucci, F. 2011, A&A, 531, 72
Steinmetz, M., Zwitter, T., Siebert, A., et al. 2006, AJ, 132, 1645
Yanny, B., Rockosi, C., Newberg, H.J., et al. 2009, AJ, 137, 4377
Zoccali, M., Vasquez, S., Gonzalez, O. A., et al. 2017, A&A, 599, A12
Zucker, D. B., de Silva, G., Freeman, K., Bland-Hawthorn, J., & Hermes Team 2012, Galactic Archaeology: Near-Field Cosmology and the Formation of the Milky Way, 458, 421
http://arxiv.org/abs/1705.09596v1
{ "authors": [ "Francesca Matteucci", "Emanuele Spitoni", "Donatella Romano", "Alvaro Rojas-Arriagada" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170526142853", "title": "The chemical evolution of the Milky Way" }
Object tracking is an essential task in computer vision that has been studied since the early days of the field. Being able to follow objects that undergo different transformations in the video sequence, including changes in scale, illumination, shape and occlusions, makes the problem extremely difficult. One of the real challenges is to keep track of the changes in the object's appearance and not drift towards the background clutter. Different from previous approaches, we obtain robustness against the background with a tracker model that is composed of many different parts. They are classifiers that respond at different scales and locations. The tracker system functions as a society of parts, each having its own role and level of credibility. Reliable classifiers decide the tracker's next move, while newcomers are first monitored before gaining the necessary level of reliability to participate in the decision process. Some parts that lose their consistency are rejected, while others that show consistency for a sufficiently long time are promoted to permanent roles. The tracker system, as a whole, could also go through different phases, from the usual, normal functioning to states of weak agreement and even crisis. The tracker system has different governing rules in each state. What truly distinguishes our work from others is not necessarily the strength of individual tracking parts, but the way in which they work together and build a strong and robust organization. We also propose an efficient way to learn many tracking parts simultaneously, with a single closed-form formulation. We obtain a fast and robust tracker with state-of-the-art performance on the challenging OTB50 dataset.

§ INTRODUCTION

Object tracking is one of the first and most essential problems in computer vision. While it has attracted the interest of many researchers over several decades of computer vision, the task is far from being fully solved <cit.>. The problem is difficult for many reasons, including the severe changes in object appearance, the presence of background clutter and the occlusions that take place in the video sequence. Moreover, the only ground truth knowledge given to the tracker is the bounding box of the object in the first frame. Thus, without knowing in advance the properties of the object being tracked (nor those of a general object), the tracking algorithm must learn them on the fly. It must adapt correctly and make sure it does not drift toward other objects in the background.

Different from previous methods, our proposed tracking model is composed of a large group of different object part classifiers, which act together like a society. Each classifier takes care of a different part of the object, at a certain scale and location. It also has its own level of credibility. The overall tracker is, on one hand, kept robust and stable through a group of reliable classifiers. At the same time, it can adapt to new conditions by considering new candidate part classifiers. Candidates are continuously monitored and promoted or rejected based on their estimated reliability.
The ability to learn a large group of classifiers efficiently, over the video sequence, is given by our proposed multi-class approach using regularized least squares, which is based on a novel theoretical insight, presented in detail in Section <ref>.

Relation to previous work: There are many tracking methods, which differ mainly in terms of target region, appearance model, mathematical formulation and optimization. Objects can be represented by boxes, ellipses <cit.>, superpixels <cit.> or blobs <cit.>. The appearance model can be described as one feature set over the region or as an array of features, one for each part of the target <cit.>. Part models are more resistant to occlusions and non-rigid appearance changes. Features used by tracking models could be either simple raw pixel information or more specialized ones that describe regions and keypoints, which could better handle changes in viewpoint, scale, illumination and deformations. Such features include Gabor <cit.>, HOG <cit.>, SIFT, SURF <cit.>, Haar <cit.>, or combinations of low- and high-level features <cit.> from widely used, pre-trained CNNs. Some recent algorithms start applying powerful features and classifiers pre-learned with deep convolutional networks (CNNs) <cit.>. The disadvantage of current methods using CNNs is the need to learn the features in advance on large, human-labeled datasets. They are valid only on objects with the same particularities as the ones in the dataset. Both the classifier and the features are known to be very important for classification performance. This can also be observed in the recent method proposed in <cit.>, where only the hard negatives and positives are kept for the SVM-based tracker model. When using more complex features, the performance of their tracker increases by up to 10%. Some methods augment the features with information from optical flow <cit.>, segmentation <cit.> or superpixels <cit.>. More complex methods use active appearance models <cit.>.

Besides accuracy, speed is also very important in tracking. Ideally, the tracker should operate in real time. One of the fastest methods for tracking uses correlation filters <cit.>. More recent work in this direction formulates the problem using circulant matrices, easy to decompose in the FFT domain and split into independent equations, with closed-form solutions <cit.>. The immediate advantage is speed, but the resulting, elegant tracker is also very competitive, while being an order of magnitude faster than its competitors <cit.>. Again, the best performance is obtained when more complex features (HOG) are used instead of simple raw pixel values.

In relation to previous work, our model uses many tracking parts (about 100-600), which are learned fast and simultaneously, in a given frame. We propose a novel formulation in the context of tracking based on regularized least squares. We use only very simple features, based on raw pixel values, our strength being based entirely on the model that functions as a robust society of many parts. Note that our model is general and can accommodate the use of any combination of features and classifiers for the separate object parts. In brief, our main contributions are: 1) Our first contribution, discussed in Sections <ref> and <ref>, is the concept and design of the overall tracker that functions as a robust society of many different classifier parts, at different locations and scales and with different weights and reliability levels.
Thus, our system is able to keep its stability over time, while also adapting to the current changes of the object in the video. On the difficult OTB50 <cit.> dataset it outperforms by a significant margin current state-of-the-art methods that do not use CNN features pre-trained on large human-labeled datasets. 2) Our second contribution enables the efficient implementation of the tracker. We are able to learn many part classifiers simultaneously using a novel weighted one-vs-all regularized least squares formulation, with a closed-form solution and important theoretical properties, as discussed in Section <ref>.

§ INTUITION AND MOTIVATION

Visual tracking is about being able to adapt the current knowledge about the object model to changes that take place continuously in the stream of video. How could the tracker learn novel aspects of the object of interest and, at the same time, not forget valuable older information? Most current learning methods that continuously adapt to new information could slowly forget the initial models they started from - and those initial models could still be valid and useful for future use.

We argue that a tracking model composed of many parts, each with its own degree of reliability (or trust), which function together according to certain rules that consider their different roles and specific trust levels, could have two highly desirable properties. The tracker, functioning like a society of tracking parts, could be both stable in the face of rapid and noisy variations in the environment and could also adapt and learn when meaningful changes take place. We draw an analogy between the model we propose and a simplified form of a human community (or organization), in which people have different roles and degrees of importance. Certain people, very few, are the founders of that community. They are very often considered reliable, from the start. Over the longer term, the community is ruled by the shared responsibility of a group of reliable members, who include some of the initial founders and those who have proved their credibility over time. In our case, these members would be responsible for deciding the next tracker move. At the lower level, the organization is continuously refreshed with newcomers, young members who want to become part of the core group of leaders, but are not yet ready to rule. While they provide a constant source of new and potentially beneficial information that could be better suited to current changes in the "world", their consistency is not yet proven. New members are first monitored, without being allowed to make decisions that could affect the behaviour of the whole community. Once they prove their value they are moved to the core of reliable members with decision power. At the same time, current members could lose credibility if they stop showing consistency. Those will eventually be rejected. Others, who have proven reliability for long enough, are promoted to a special permanent member status.

In Section <ref> we explain in detail how one could measure reliability for tracking parts. In brief, we consider a part to be reliable if it has shown, independently and frequently enough, agreement in voting with the majority of the other parts. Since the majority is statistically robust, the estimation of reliability in this way is also robust.
By considering members with different capabilities and roles, in many ways similar to a human organization, the tracker becomes a system that displays the following important properties:

1) Stability: the core members sustain constant, reliable functioning. They act independently and decide by majority, providing robustness against noisy variations. Only the reliable and the permanent (gold) members can influence the majority vote for the next tracker move. 2) Adaptation: the tracker is able to continuously adapt by adding new parts and removing old ones as time passes. It promotes the new reliable members and eliminates the ones that have lost reliability (excepting the permanent members). Note that gaining and losing reliability can happen only over time. It is this temporal buffer, during which tracking parts are monitored, that ensures both stability and the capacity to adapt to new conditions. 3) Ability to never forget: tracking parts that display consistently reliable behaviour over longer periods of time are promoted to the status of permanent, or gold, members. Thus we ensure that the model does not forget information that has been proven consistent and could be of vital importance in the future.

§ ALGORITHM

The proposed tracker algorithm, which we term Society of Tracking Parts (STP), is based on a system of part classifiers (Figure <ref>). The tracker is learned and formed online, during tracking, from scratch, starting from the first, ground-truth bounding box.

Tracking by voting: The tracker always chooses as its next move at time t the place (the center of the bounding box) l_t+1 where there is the largest accumulation (of value M_v) of part votes within a certain region R_t. This search zone for the target is restricted around the previous bounding box, over a region defined by a given parameter δ. For each part i there is an activation map A_ti, computed as the response of the classifier c_i corresponding to that part over the search region R_t. The activation maps of the considered parts are each shifted by the part's displacement from the object center and added together to form the overall map A_t. A_t is the voting map for the center, and when parts are in strong agreement all votes focus around a point (the next predicted bounding box center). After smoothing A_t with a small Gaussian filter, the maximum is chosen as the next center location l_t+1. Note that different parts are allowed to contribute with their activation maps, depending on their reliability and the tracker state (see also Figure <ref>), as described next.

Part reliability states: The reliability of a part i is estimated as the frequency f_i with which the maximum activation of that part falls in the neighborhood (within 5 pixels in our implementation) of the maximum sum activation where the next tracker center l_t+1 is chosen. If a part is selected for the first time, it is considered a candidate part. Every T_S->U frames, the tracker measures the reliability of a given part and promotes parts with a reliability larger than a threshold, f_i > p_+, from the candidate state (C) to the reliable state (R) and from reliable (R) to gold (G) (Figure <ref>). Parts that fail the test, f_i ≤ p_-, are removed, except for gold ones, which are permanent.

Tracker states: Strong (S) - in the "strong" (S) state the tracker is ruled by the voting of the reliable and gold parts. When the maximum over the sum of their activation maps is over a threshold (M_v > t_v), tracking is considered strong.
Every T_S->U frames the tracker enters the "update" (U) state from the S state. Update (U) - in the U state, the tracker considers new classifiers from the current frame as candidates, learned from patches that cover areas of the bounding box where current reliable and gold members have weak responses. The new candidates will be monitored from then on and their reliability will be estimated, based on their consensus frequency with the weighted majority, as discussed previously. Candidate votes are not taken into consideration until they become "reliable" parts. In this state, existing parts (candidates and reliable) are promoted or rejected based on their reliability f_i, as also discussed previously. Weak (W) - when the maximum accumulated vote (M_v) in the S state is weak (M_v ≤ t_v), the tracker enters the W state. In this state, candidates from previous (strong) states are allowed to vote together with the reliable and gold members. If the total accumulation M_v is still weak, the tracker enters the state of "crisis" (C). Otherwise it promotes to "reliable" the candidates that agreed with the majority vote, then goes back to the S state. Crisis (C) - state C is entered from W, when the votes in W are weak. In C, the tracker starts searching the entire image (basically, region R_t becomes the entire image). Then it moves to the maximum accumulation of the reliable and gold members. When M_v > t_v it goes back to the strong state S. Until then it stays in C, with no member updates allowed.

Example: In Figure <ref> we show qualitative results to demonstrate the importance of the different tracker states. In the Iron Man sequence (top), the "full" tracker (with all states activated) stays with the main object until the end of the sequence and recovers from moments when it is lost. The "S-only" tracker (with states W and C deactivated), once lost, remains lost. In the second, Crossing, example, both versions of the tracker have weak votes at frame 7. The full tracker enters the weak state and recovers in a better position, while having promoted new candidates to reliable in the W state. Around frame F21 both trackers are lost, but during Crisis the full version recovers in frame F57 and stays with the person crossing until the end, unlike the S-only tracker, which is lost from F21 onward. Note that in our experiments, we show in Table <ref> quantitative differences between versions of the tracker with different state subsets allowed (S, SW and all SWC), which fully justify the use of all three states.

Learning the tracker: The mathematical details related to training the individual classifiers are discussed in Section <ref>. In order to keep the appearance model up to date, in the update phase STP chooses new patches to add as positive parts. Only patch classifiers that are highly discriminative from the rest are selected. A patch classifier is considered discriminative if the ratio between the response on the positive patch (its own corresponding patch) and the maximum response over negatives is larger than a threshold t_d. Positives are selected from the inside of the bounding box, while (hard) negatives are selected as patches from outside regions with a high density of edges. We sample patches from a dense grid (2-pixel stride) at small (17x17), medium (27x27) and full bounding-box sizes. The small ones see local appearance, while the larger ones also contain some context.
A point on the grid is covered by only one selected discriminative patch, at one size. The smaller sizes have priority, and we search the next size for the patch centered at the grid point only if the smaller patch is not discriminative enough. The object box is considered covered when each pixel is covered by some selected patch. A simple budgeting mechanism is added in order to limit the impact on speed: when more than N_max parts of a certain patch size become reliable, we remove the new reliable ones that are most similar to older parts, based on a simple dot-product similarity between the classifiers.

Parameters: we use the following parameter values in the experiments of Section <ref>: δ = 25px, T_S->U = 10 frames, t_d = 1.4, p_+ = 0.2, p_- = 0.1 and N_max = 200 parts.

§ MATHEMATICAL ASPECTS FOR LEARNING THE PARTS

We introduce the mathematical formulation for learning the classifiers for the tracking parts. For a given feature type, let 𝐝_i ∈ℝ^1 × k be the i-th descriptor, with k real elements, corresponding to an image patch window at a certain scale and location relative to the object bounding box. Note that in our experiments the features are simple pixel values from seven image channels: the three color channels, plus four channels representing the gradient magnitudes over four orientations (0, π/4, π/2, 3π/4). The descriptor 𝐝_i is a vectorized version of the specific patch, concatenated over all image channels. Let 𝐃 be the data matrix, formed by putting all descriptors in the image one row below the other. We learn the optimal linear classifier 𝐜_i that separates 𝐝_i from the rest of the patches according to a regularized linear least squares cost, which is both fast and accurate. The classifier 𝐜_i minimizes the following cost (<cit.> Ch. 7.5):

min_𝐜_i 1/n ‖𝐃𝐜_i - 𝐲_i‖^2 + λ𝐜_i^⊤𝐜_i.

In classification tasks the number of positives and negatives should be properly balanced, according to their prior distributions and the specific classifier used. Different proportions usually lead to different classifiers. In linear least squares formulations, weighting the data samples differently can balance the learning.

One sample versus all: The idea of training one classifier for a single positively labeled data sample has been successfully used before, for example in the context of training SVMs <cit.>. Normally, when using very few positive samples for training a ridge regression classifier, weighting is applied to balance the data; otherwise the classifier response on the positive samples is too low. Here we show that when a single positive sample is used, weighting does not change the direction of the resulting classifier, even though it changes its magnitude. This makes it possible to easily normalize classifiers trained with different positive-to-negative ratios.

Property 1: for any positive weight w_i given to the positive i-th sample, when the negative labels considered are 0, the positive label is 1 and all negatives have the same weight 1, the solution vector to the weighted least squares version of Eq. <ref> has the same direction (it might differ only in magnitude). In other words, it is invariant under L2 normalization.

Proof: Let 𝐜_i be the solution to Eq. <ref>. At the optimum the gradient vanishes, thus the solution satisfies the equality (𝐃^⊤𝐃 + λ𝐈_k)𝐜_i = 𝐃^⊤𝐲_i. Since y_i(i)=1 and y_i(j)=0 for j ≠ i, it follows that (𝐃^⊤𝐃 + λ𝐈_k)𝐜_i = 𝐝_i.
Since the problem is convex, with a unique optimum, a point that obeys such an equality must be the solution. In the weighted case, a diagonal n × n weight matrix 𝐖 is defined, with weights w_j = 𝐖(j,j) on the diagonal, one for each data sample. In that case, the objective cost optimization in Eq. <ref> becomes:

min_𝐜_i 1/n ‖𝐖^1/2(𝐃𝐜_i - 𝐲_i)‖^2 + λ𝐜_i^⊤𝐜_i.

We consider the case when all negative samples have weight 1 and the positive one is given weight w_i. Now we show that for any w_i, if 𝐜_i is an optimum of Eq. <ref> then there is a real number q such that q𝐜_i is the solution of the weighted case. The scalar q exists if it satisfies (𝐃^⊤𝐃 + 𝐝_i𝐝_i^⊤(w_i-1) + λ𝐈_k)q𝐜_i = w_i𝐝_i. And, indeed, it can be verified that q = w_i/(1+(w_i-1)(𝐝_i^⊤𝐜_i)) satisfies the required equality. See Appendix <ref> for a detailed proof.

Efficient multi-class ridge regression: The fact that the classifier vector direction is invariant under different weightings of the positive sample suggests that training with a single positive sample provides a robust and stable separator. The classifier can be re-scaled to obtain values close to 1 for the positive samples. Property 1 also indicates that we can compute the classifiers for all positive patches in the bounding box at once, using a single data matrix 𝐃. We form the target output matrix 𝐘, with one target label column 𝐲_i for each corresponding sample 𝐝_i. Note that 𝐘 is, in fact, the n × n identity matrix 𝐈_n. We now write the multi-class case of the ridge regression model and finally obtain the matrix of one-versus-all classifiers, with one column classifier for each tracking part:

𝐂 = (𝐃^⊤𝐃 + λ𝐈_k)^-1𝐃^⊤.

Note that 𝐂 is a regularized pseudo-inverse of 𝐃, where 𝐃 contains one patch descriptor per line. In our case, the descriptor length is larger than the number of positive and negative samples, so we use the Matrix Inversion Lemma <cit.> (Ch. 14.4.3.2) and compute 𝐂 in an equivalent form (see more in Appendix <ref>):

𝐂 = 𝐃^⊤(𝐃𝐃^⊤ + λ𝐈_n)^-1.

Now the matrix to be inverted is significantly smaller (n × n instead of k × k).

§ EXPERIMENTAL ANALYSIS

We have evaluated our tracker on the challenging OTB50 dataset. It contains 50 difficult video sequences, combining a variety of videos with complex scenarios, grouped into different categories of difficulty, such as: illumination variation (IV), scale variation (SV), occlusion (OCC), deformation (DEF), motion blur (MB), fast motion (FM), in-plane rotation (IPR), out-of-plane rotation (OPR), out-of-view (OV), background clutter (BC) and low resolution (LR). We compared our method against top tracking methods: KCF <cit.>, STRUCK <cit.>, TLD <cit.>, ORIA <cit.>, MIL <cit.>, MOSSE <cit.> and CT <cit.>. They all use, as ground-truth information, only the initial bounding box provided in the first frame, and do not employ any pre-trained CNN features or object detectors. We followed the same evaluation protocol as in <cit.> <cit.> <cit.> <cit.> <cit.>, considering the predicted target correct if its center is within a threshold distance from the ground truth, and compute the average precision (per category and for the whole dataset). We choose the same threshold (20px) as <cit.>; at this threshold the relative order between the compared trackers stabilizes. In Table <ref> we present the detailed results for each category. Our algorithm outperforms the current state-of-the-art methods by a large margin, while using only very simple, pixel-level features.
Our closest competition uses both stronger features and stronger models: Struck <cit.> uses Haar and histogram features combined with various kernels, and KCF <cit.> uses HOG descriptors, also combined with a non-linear kernel. This suggests that the power of the method lies in our algorithm, STP, which uses many weak classifiers acting together: the majority turns out to be superior to a well-selected elite (fewer, but smarter classifiers).

Relative importance of different tracker states: We tested the performance of our tracker when not all states are allowed, in order to better understand the importance of handling differently the difficult scenarios in which the accumulation of votes is weak. These usually correspond to cases when the tracker needs to use its candidates (W state, when reliable parts could have become obsolete) or when it undergoes occlusions or severe appearance changes (C state). In Table <ref> and Figure <ref> (also discussed in Section <ref>) we present quantitative and qualitative results of these tests. The results clearly show that using all three states (S, W, C) is superior to the other versions.

§ CONCLUSIONS

We have presented a novel model for object tracking based on a society of tracking-part classifiers that is robust to different challenges and achieves top results on a very difficult recent benchmark. The strongest advantage of our approach is its ability to learn and adapt its model online while also keeping it robust and stable. This is possible due to the many different parts learned, at different scales and locations with respect to the object, having different roles according to their levels of credibility. The tracker itself also has different states of functioning. During normal functioning, updates are done at regular time intervals. At moments when the agreement between established parts is weak, the candidates are given the chance to jump in. If that does not work either, the tracker enters a state of crisis, with no more updates allowed until consensus is found again. The tracker functions as an organization, with many different parts that may be weak by themselves but are strong when acting in combination. We have demonstrated the power of this idea on a challenging dataset, achieving state-of-the-art performance against top methods in the field.

Acknowledgements: This work was supported in part by UEFISCDI, under project PN-III-P4-ID-ERC-2016-0007.

§ FAST ONE-SAMPLE VS. ALL RIDGE REGRESSION

It is easy to obtain the closed-form solution for 𝐜_i from the linear ridge regression formulation (<cit.> Ch. 7.5): by minimizing the convex cost 1/n ‖𝐃𝐜_i - 𝐲_i‖^2 + λ𝐜_i^⊤𝐜_i we get Eq. <ref>. This yields the well-known solution, obtained by inverting the positive definite matrix 𝐃^⊤𝐃 + λ𝐈_k:

(𝐃^⊤𝐃 + λ𝐈_k)𝐜_i = 𝐃^⊤𝐲_i.

In the "one vs all" context we choose 𝐲_i^⊤ = [0, 0, ..., 1, ..., 0, 0], with 1 only in the i-th position. The multiplication with 𝐲_i therefore selects a column from 𝐃: 𝐃^⊤𝐲_i = 𝐝_i, and Eq. <ref> becomes:

(𝐃^⊤𝐃 + λ𝐈_k)𝐜_i = 𝐝_i.

When building classifiers, the classes should be balanced in their numbers of entries. This ensures that the comparison between the activation scores of two different classifiers is valid. In "one vs all", the positive class is usually much more poorly represented than the negative class.
So we needed a weighted solution for linear ridge regression <cit.> in order to balance our "part of the object vs. others/context" classifiers. We prove that, for a specific form of weights, the weighting can be applied after computing the simple version (the closed-form linear regression, Eq. <ref>). This is very important for our algorithm, because for the simple ridge regression we need to compute only one matrix inverse for all classifiers in one step, one matrix that all of them share: (𝐃^⊤𝐃 + λ𝐈_k)^-1 from Eq. <ref>. For the weighted case, the closed-form solution (as in <cit.>) would be different from classifier to classifier:

(𝐃^⊤𝐖_i𝐃 + λ𝐈_k)θ_i = 𝐃^⊤𝐖_i𝐲_i.

The weight matrix 𝐖_i for classifier i has the form 𝐖_i = 𝐈_n + 𝐖_sparse_i, where 𝐖_sparse_i is the n × n matrix with zeros everywhere except for w_i in the i-th position on the diagonal, i being the index of the positive patch in the data matrix 𝐃. Replacing Eq. <ref> in Eq. <ref>, and observing that 𝐃^⊤𝐖_sparse_i = w_i[0|𝐝_i|0], the right-hand side becomes: 𝐃^⊤𝐖_i𝐲_i = 𝐃^⊤(𝐈_n + 𝐖_sparse_i)𝐲_i = 𝐃^⊤𝐲_i + w_i[0|𝐝_i|0]𝐲_i = 𝐝_i + w_i𝐝_i. So, for the right-hand term we get:

𝐃^⊤𝐖_i𝐲_i = (1 + w_i)𝐝_i.

Doing the same operations on the left-hand term, 𝐃^⊤𝐖_i𝐃 = 𝐃^⊤(𝐈_n + 𝐖_sparse_i)𝐃 = 𝐃^⊤𝐃 + w_i[0|𝐝_i|0]𝐃 = 𝐃^⊤𝐃 + w_i𝐝_i𝐝_i^⊤, so Eq. <ref> can be rewritten:

(𝐃^⊤𝐃 + w_i𝐝_i𝐝_i^⊤ + λ𝐈_k)θ_i = (1 + w_i)𝐝_i.

Let θ_i = q_i𝐜_i, where 𝐜_i is the solution of linear ridge regression (Eq. <ref>) and q_i ∈ℝ. Then Eq. <ref> becomes: (𝐃^⊤𝐃 + w_i𝐝_i𝐝_i^⊤ + λ𝐈_k)q_i𝐜_i = (1+w_i)𝐝_i. From Eq. <ref>, by simplifying terms we obtain q_i𝐝_i + q_i w_i𝐝_i𝐝_i^⊤𝐜_i = (1+w_i)𝐝_i. Then, by multiplying on the left with 𝐝_i^⊤/||𝐝_i||_2^2, we get:

q_i + q_i w_i𝐝_i^⊤𝐜_i = 1 + w_i.

So the solution for q_i is (with w_i = n-1, because in "one vs all" classification all elements in 𝐃 are negative samples except for one, the i-th):

q_i = (1 + w_i)/(1 + w_i𝐝_i^⊤𝐜_i) = n/(1 + (n-1)𝐝_i^⊤𝐜_i).

Therefore, we have proved that if 𝐜_i is the unique solution of linear ridge regression (since 𝐃^⊤𝐃 + λ𝐈_k is always invertible, the solution in Eq. <ref> is unique), then q_i𝐜_i (with q_i from Eq. <ref>) is the unique solution of Eq. <ref> (𝐃^⊤𝐃 + w_i𝐝_i𝐝_i^⊤ + λ𝐈_k is always invertible, since it is also positive definite).

§ FASTER SOLUTION WITH EFFICIENT MATRIX INVERSION

Consider a general partitioned matrix 𝐌 = [𝐄 𝐅; 𝐆 𝐇], with 𝐄 and 𝐇 invertible (Matrix Inversion Lemma <cit.>, Ch. 4.3.4.2). Then the following relation holds:

(𝐄 - 𝐅𝐇^-1𝐆)^-1𝐅𝐇^-1 = 𝐄^-1𝐅(𝐇 - 𝐆𝐄^-1𝐅)^-1.

By making the replacements 𝐄 = λ𝐈_k, 𝐇 = 𝐈_n, 𝐅 = 𝐃^⊤, 𝐆 = -𝐃 (𝐄 and 𝐇 are invertible) and rearranging the terms, we obtain <cit.> (Ch. 14.4.3.2):

(𝐃^⊤𝐃 + λ𝐈_k)^-1𝐃^⊤ = 𝐃^⊤(𝐃𝐃^⊤ + λ𝐈_n)^-1.

We observe that the left-hand side of Eq. <ref> is part of the closed-form solution for the linear regression (without the labels 𝐲_i), so we can replace it with the form that is easier to compute. Since the bottleneck is inverting the positive definite matrix 𝐃^⊤𝐃 + λ𝐈_k or 𝐃𝐃^⊤ + λ𝐈_n, we choose the one that is easier to invert, namely the smaller one. In our case, n is the number of patches and k is the number of features in each patch (equal to the number of pixels in the patch × the number of pixel-level channels, which is 7).
A rough approximation for n is 500, while k is approximately 2000 ≈ 17 × 17 × 7 or 5000 ≈ 27 × 27 × 7, and larger for patches of bounding-box size. The second solution for computing the classifiers therefore inverts a matrix about two orders of magnitude smaller (in number of elements) than the first one, so we choose the second form in Eq. <ref> for the closed-form solution.
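The two appendices can be checked numerically in a few lines. The following Python sketch (illustrative, using random data of the sizes discussed above rather than real patch descriptors) verifies both the equivalence of the two closed forms in Eq. <ref> and the rescaling property q_i of the weighted one-vs-all solution.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, lam = 50, 2000, 1e-2          # n patches, k features per patch (k >> n)
D = rng.normal(size=(n, k))         # data matrix, one patch descriptor per row

# Two equivalent forms of the one-vs-all classifier matrix C.
C_big = np.linalg.solve(D.T @ D + lam * np.eye(k), D.T)    # inverts a k x k matrix
C_small = D.T @ np.linalg.inv(D @ D.T + lam * np.eye(n))   # inverts an n x n matrix
assert np.allclose(C_big, C_small, atol=1e-8)

# Property 1: the weighted solution is a rescaling of the unweighted one.
i, w = 3, float(n - 1)              # give the positive sample extra weight w_i = n-1
c_i = C_small[:, i]
A_w = D.T @ D + w * np.outer(D[i], D[i]) + lam * np.eye(k)
theta_i = np.linalg.solve(A_w, (1.0 + w) * D[i])           # weighted classifier
q_i = (1.0 + w) / (1.0 + w * (D[i] @ c_i))                 # predicted scale factor
assert np.allclose(theta_i, q_i * c_i, atol=1e-8)
print("scale q_i =", q_i)
```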
http://arxiv.org/abs/1705.09602v1
{ "authors": [ "Elena Burceanu", "Marius Leordeanu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170526145143", "title": "Learning a Robust Society of Tracking Parts" }
We consider the utilization of a computational model to guide the optimal acquisition of experimental data to inform the stochastic description of model input parameters. Our formulation is based on the recently developed consistent Bayesian approach for solving stochastic inverse problems, which seeks a posterior probability density that is consistent with the model and the data in the sense that the push-forward of the posterior (through the computational model) matches the observed density on the observations almost everywhere. Given a set of potential observations, our optimal experimental design (OED) seeks the observation, or set of observations, that maximizes the expected information gain from the prior probability density on the model parameters. We discuss the characterization of the space of observed densities and a computationally efficient approach for rescaling observed densities to satisfy the fundamental assumptions of the consistent Bayesian approach. Numerical results are presented to compare our approach with existing OED methodologies using the classical/statistical Bayesian approach and to demonstrate our OED on a set of representative PDE-based models.

§ INTRODUCTION

Experimental data is often used to infer valuable information about parameters for models of physical systems. However, the collection of experimental data can be costly and time consuming. For example, exploratory drilling can reveal valuable information about subsurface hydrocarbon reservoirs, but each well can cost upwards of tens of millions of US dollars. In such situations we can only afford to gather some limited number of experimental data; however, not all experiments provide the same amount of information about the processes they are helping to inform. Consequently, it is important to design experiments in an optimal way, i.e., to choose some limited number of experimental data to maximize the value of each experiment.

The first experimental design methods employed mainly heuristics, based on concepts such as space-filling and blocking, to select field experiments <cit.>. While these methods can perform well in some situations, they can be improved upon by incorporating any knowledge of the underlying physical processes being inferred or measured. Using physical models to guide experiment selection has been shown to drastically improve the cost effectiveness of experimental designs for a variety of models based on ordinary differential equations <cit.>, partial differential equations <cit.> and differential algebraic equations <cit.>. When the model observables are linear with respect to the model parameters, the alphabetic optimality criteria are often used <cit.>: for example, A-optimality to minimize the average variance of parameter estimates, D-optimality to maximize the differential Shannon entropy, or G-optimality to minimize the maximum variance of model predictions. These criteria have been developed in both Bayesian and non-Bayesian settings <cit.>.

In this paper we focus attention on Bayesian methods for OED that can be applied to both linear and nonlinear models <cit.>.
Specifically, we pursue OEDs which are optimal for inferring model parameters on finite-dimensional spaces from experimental data observed at a set of sensor locations. In the context of OED for inference, analogues of the alphabetic criteria for linear models have also been applied to nonlinear models <cit.>. In certain situations, for example infinite-dimensional problems (where the random variables are random fields) or problems with computationally expensive models, OED based upon linearizations of the model response and Laplace (Gaussian) approximations of the posterior distribution has been necessary <cit.>. In other settings, non-Gaussian approximations of the posterior have also been pursued <cit.>.

This manuscript presents a new approach for OED based upon consistent Bayesian inference, introduced in <cit.>. We adopt an approach for OED similar to that in <cit.> and seek an OED that maximizes the expected information gain from the prior to the posterior over the set of possible observational densities. Although our OED framework is Bayesian in nature, this approach is fundamentally different from the statistical Bayesian methods mentioned above. The aforementioned Bayesian OED methods use what we will refer to as the classical/statistical Bayesian approach for stochastic inference (see e.g., <cit.>) to characterize posterior densities that reflect an assumed error model. In contrast, consistent Bayesian inference assumes a probability density on the observations is given and produces a posterior density that is consistent with the model and the data in the sense that the push-forward of the posterior (through the computational model) matches the observed density almost everywhere. We direct the interested reader to <cit.> for a discussion on the differences between the consistent and statistical Bayesian approaches. Consistent Bayesian inference has some connections with measure-theoretic inference <cit.>, which was used for OED in <cit.>, but the two approaches make different assumptions and therefore typically give different solutions to the stochastic inverse problem.

The consistent Bayesian approach is appealing for OED since it can be used in an offline-online mode. Consistent Bayesian inference requires an estimate of the push-forward of the prior, which, although expensive, can be computed offline or obtained from archival simulation data. Once the push-forward of the prior is constructed, the posterior density can be approximated cheaply. Moreover, this push-forward of the prior does not depend on the density on the observations, which enables a computationally efficient approach for solving multiple stochastic inverse problems for different densities on the observations. This can significantly reduce the cost of computing the expected information gain if the set of candidate observations is known a priori.

The main objectives of this paper are to derive an OED formulation using the consistent Bayesian framework and to present a computational strategy to estimate the expected information gain of an experimental design. The pursuit of a computationally efficient approach for coupling our OED method with continuous optimization techniques is an intriguing topic that we leave for future work. Here, we consider batch design over a discrete set of possible experiments. Batch design, also known as open-loop design, involves selecting a set of experiments concurrently, such that the outcome of any experiment does not affect the selection of the other experiments.
Such an approach is often necessary when one cannot wait for the results of one experiment before starting another, but is limited in terms of the number of observations we can consider.

The remainder of this paper is outlined as follows. In Section <ref> we summarize the consistent Bayesian method for solving stochastic inverse problems. In Section <ref> we discuss the information content of an experiment and present our OED formulation based upon expected information gain. During the process of defining the expected information gain of a given experimental design, care must be taken to ensure that the model can predict all of our potential observed data. In Section <ref> we discuss situations for which this assumption is violated and means for avoiding these situations. Numerical examples are presented in Section <ref> and concluding remarks are provided in Section <ref>.

§ A CONSISTENT BAYES FORMULATION FOR STOCHASTIC INVERSE PROBLEMS

We are interested in experimental designs which are optimal for inferring model parameters from experimental data. Inferring model parameters for a single design and realization of experimental data is a fundamental component of producing such optimal designs. In this section we summarize the consistent Bayes method for parametric inference, originally presented in <cit.>. Although Bayesian in nature, the consistent Bayes method differs significantly from its classical Bayesian counterpart <cit.>, which was used for OED in <cit.>. We refer the interested reader to <cit.> for a full discussion of these differences.

§.§ Notation, Assumptions, and a Stochastic Inverse Problem

Let M(Y,λ) denote a deterministic model with solution Y(λ) that is an implicit function of model parameters λ∈Λ⊂ℝ^n. The set Λ represents the largest physically meaningful domain of parameter values, and, for simplicity, we assume that Λ is compact. In practice, modelers are often only concerned with computing a relatively small set of quantities of interest (QoI), {Q_i(Y)}_i=1^m, where each Q_i is a real-valued functional dependent on the model solution Y. Since Y is a function of the parameters λ, so are the QoI, and we write Q_i(λ) to make this dependence explicit. Given a set of QoI, we define the QoI map Q(λ) := (Q_1(λ), ⋯, Q_m(λ))^⊤: Λ→𝒟⊂ℝ^m, where 𝒟 := Q(Λ) denotes the range of the QoI map.

Assume (Λ, ℬ_Λ, μ_Λ) and (𝒟, ℬ_𝒟, μ_𝒟) are measure spaces. We assume ℬ_Λ and ℬ_𝒟 are the Borel σ-algebras inherited from the metric topologies on ℝ^n and ℝ^m, respectively. The measures μ_Λ and μ_𝒟 are volume measures. We assume that the QoI map Q is at least piecewise smooth, implying that Q is a measurable map between the measurable spaces (Λ, ℬ_Λ) and (𝒟, ℬ_𝒟). For any A∈ℬ_𝒟, we then have

Q^-1(A) = {λ∈Λ | Q(λ) ∈ A }∈ℬ_Λ, and Q(Q^-1(A)) = A.

Furthermore, B ⊆ Q^-1(Q(B)) for any B∈ℬ_Λ, although in most cases B ≠ Q^-1(Q(B)), even when n=m.

Finally, we assume that an observed probability measure, P_obs, is given on (𝒟, ℬ_𝒟) and is absolutely continuous with respect to μ_𝒟, which implies it can be described in terms of an observed probability density, π_obs. The stochastic inverse problem is then defined as determining a probability measure, P_Λ, described as a probability density, π_Λ, such that the push-forward measure agrees with P_obs. We use P^Q(P_Λ)_𝒟 to denote the push-forward of P_Λ through Q(λ), i.e.,

P^Q(P_Λ)_𝒟(A) = P_Λ(Q^-1(A)) for all A∈ℬ_𝒟.
Using this notation, a solution to the stochastic inverse problem is defined formally as follows:

Given a probability measure P_obs on (𝒟, ℬ_𝒟) that is absolutely continuous with respect to μ_𝒟 and admits a density π_obs, the stochastic inverse problem seeks a probability measure P_Λ on (Λ, ℬ_Λ) that is absolutely continuous with respect to μ_Λ and admits a probability density π_Λ, such that the subsequent push-forward measure induced by the map Q(λ) satisfies

P_Λ(Q^-1(A)) = P^Q(P_Λ)_𝒟(A) = P_obs(A)

for any A∈ℬ_𝒟. We refer to any probability measure P_Λ that satisfies (<ref>) as a consistent solution to the stochastic inverse problem.

Clearly, a consistent solution may not be unique, i.e., there may be multiple probability measures that are consistent in the sense of Definition <ref>. This is analogous to a deterministic inverse problem, where multiple sets of parameters may produce the observed data. A unique solution may be obtained by imposing additional constraints or structure on the stochastic inverse problem. In this paper, such structure is obtained by incorporating prior information to construct a unique Bayesian solution to the stochastic inverse problem.

§.§ A Bayesian solution to the stochastic inverse problem

Following the Bayesian philosophy <cit.>, we introduce a prior probability measure P_prior on (Λ, ℬ_Λ) that is absolutely continuous with respect to μ_Λ and admits a probability density π_prior. The prior probability measure encapsulates the existing knowledge about the uncertain parameters. Assuming that Q is at least measurable, the prior probability measure on Λ, P_prior, and the map Q induce a push-forward measure P_pf on 𝒟, which is defined for all A∈ℬ_𝒟 by P_pf(A) = P_prior(Q^-1(A)), with density π_pf. We utilize the following expression for the posterior,

P_post(B) := P_prior(B) P_obs(Q(B))/P_pf(Q(B)) if P_prior(B)>0, and 0 otherwise,

which we describe in terms of a probability density given by

π_post(λ) = π_prior(λ) π_obs(Q(λ))/π_pf(Q(λ)), λ∈Λ.

We note that if π_pf = π_obs, i.e., if the prior solves the stochastic inverse problem, then the posterior density will be equal to the prior density. It was recently shown in <cit.> that the posterior given by (<ref>) defines a consistent probability measure using a contour σ-algebra. When interpreted as a particular iterated integral of (<ref>), the posterior defines a probability measure on (Λ, ℬ_Λ) in the sense of Definition <ref>, i.e., the push-forward of the posterior matches the observed probability density.

Approximating the posterior density using the consistent Bayesian approach only requires an approximation of the push-forward of the prior probability on the model parameters, which is fundamentally a forward propagation of uncertainty. While numerous approaches have been developed in recent years to improve the efficiency and accuracy of the forward propagation of uncertainty using computational models, in this paper we only consider the most basic of methods, namely Monte Carlo sampling, to sample from the prior. We evaluate the computational model for each of the samples from the prior and use a standard kernel density estimator <cit.> to approximate the push-forward of the prior.

Given the approximation of the push-forward of the prior, we can evaluate the posterior at any point λ∈Λ if we compute Q(λ). This provides several possibilities for interrogating the posterior. In Section <ref>, we compute Q(λ) on a uniform grid of points to visualize the posterior after we compute the push-forward of the prior. This does require additional model evaluations, but visualizing the posterior is rarely required and is only useful for illustrative purposes in one or two dimensions. More often, we are interested in obtaining samples from the posterior. This is also demonstrated in Section <ref>, where the samples from the prior are either accepted or rejected using a standard rejection sampling procedure. For a given λ, we compute the ratio π_post(λ)/(M π_prior(λ)), where M is an estimate of the maximum of the ratio π_post/π_prior over Λ, and compare this value with a sample, η, drawn from a uniform distribution on (0,1). If the ratio is larger than η, then we accept the sample. We apply the accept-reject algorithm to the samples from the prior, and therefore the samples from the posterior are a subset of the samples used to compute the push-forward of the prior. Since we have already computed Q(λ) for each of these samples, the computational cost of selecting a subset of the samples for the posterior is minimal. However, in the context of OED we are primarily interested in computing the information gained from the prior to the posterior, which only involves integrating with respect to the prior (see Section <ref>) and does not require additional model evaluations or rejection sampling.
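As a minimal illustration of this procedure, the Python sketch below uses a toy one-dimensional map, uniform prior and observed density of our own choosing (not one of the examples in this paper). It approximates the push-forward of the prior with a Gaussian KDE, evaluates the ratio π_obs(Q(λ))/π_pf(Q(λ)) that defines the posterior in Eq. (<ref>), and applies the accept-reject step; when (A3) holds, this ratio integrates to approximately one against the prior. All names are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)
Q = lambda lam: lam ** 3 + lam             # toy QoI map, chosen for illustration

lam_prior = rng.uniform(-1.0, 1.0, 5000)   # samples from a uniform prior on [-1, 1]
q_prior = Q(lam_prior)                     # one model evaluation per prior sample
pf = gaussian_kde(q_prior)                 # KDE of the push-forward of the prior
obs = norm(loc=0.5, scale=0.05)            # assumed observed density on the QoI

ratio = obs.pdf(q_prior) / pf(q_prior)     # pi_post/pi_prior at each prior sample
print("ratio integrates against the prior to ~1:", ratio.mean())

# Accept-reject: posterior samples are a subset of the prior samples.
M = 1.05 * ratio.max()                     # estimate of the maximum of the ratio
eta = rng.uniform(size=ratio.size)
lam_post = lam_prior[ratio / M > eta]
print(lam_post.size, "posterior samples kept of", lam_prior.size)
```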
In practice, we prefer to use data that is sensitive to the parameters, since otherwise it is difficult to infer useful information about the uncertain parameters. Specifically, if m ≤ n and the Jacobian of Q is defined a.e. in Λ and is full rank a.e., then the push-forward volume measure is absolutely continuous with respect to the Lebesgue measure <cit.>. For the rest of this work we maintain the following assumptions, needed to produce a unique consistent solution to the stochastic inverse problem:

(A1) We have a mathematical model and a description of our prior knowledge about the model input parameters,

(A2) The data exhibits sensitivity to the parameters a.e. in Λ; hence, we use the Lebesgue measure μ as the volume measure on the data space,

(A3) The observed density is absolutely continuous with respect to the push-forward of the prior.

The assumption concerning the absolute continuity of the observed density with respect to the push-forward of the prior is essential to define a solution to the stochastic inverse problem <cit.>. While this assumption may appear rather abstract, it simply ensures that the prior and the model can predict, with non-zero probability, any event that we have observed. Since the observed density and the model are assumed to be fixed, this is only an assumption on the prior.

In the remainder of this work, we focus on quantifying the value of these posterior densities. We use the Kullback-Leibler divergence <cit.> to measure the information gained about the parameters from the prior to the posterior. We compute the expected information gain of a given set of QoI (a given experimental design) and then determine the OED to deploy in the field.

§ THE INFORMATION CONTENT OF AN EXPERIMENT

We are interested in finding the OED for inferring model input parameters. Conceptually, a design is informative if the posterior distribution of the model parameters is significantly different from the prior. To quantify the information gain of a design we use the Kullback-Leibler (KL) divergence <cit.> as a measure of the difference between the prior and posterior distributions. While the KL divergence is by no means the only way to compare two probability densities, it does provide a reasonable measure of the information gained in the sense of Shannon information <cit.> and is commonly used in Bayesian OED <cit.>.
In this section we discuss how to compute the KL divergence and define our OED formulation based upon the expected information gain over a specific space of possible observed densities.

§.§ Information gain: Kullback-Leibler divergence

Suppose we are given a description of the uncertainty in the observed data in terms of a probability density π_obs. This produces a unique solution to the stochastic inverse problem that is absolutely continuous with respect to the Lebesgue measure <cit.> and admits a probability density, π_post. The KL divergence of the posterior from the prior (the information gain), denoted I_Q, is given by

I_Q(π_post : π_prior) := ∫_Λ π_post log(π_post/π_prior) dμ_Λ.

Note that because π_prior is fixed, I_Q is simply a function of the posterior,

I_Q(π_post : π_prior) = I_Q(π_post),

and from Eq. (<ref>) the posterior is a function of the observed density. Therefore, we write I_Q as a function of the observed density,

I_Q(π_post) = I_Q(π_obs).

The observation that I_Q is a function of only π_obs allows us to define the expected information gain in Section <ref> based on a specific space of observed densities.

Given a high-dimensional parameter space, it may be computationally infeasible to accurately approximate the integral in Eq. (<ref>). For example, a multivariate normal density with unit variance in 100 dimensions has a maximum value of (1/√(2π))^100 ≈ 1×10^-40. However, we may write this integral in terms of densities on the data space evaluated at Q(λ) as follows:

I_Q(π_obs) = ∫_Λ π_post(λ) log(π_post(λ)/π_prior(λ)) dμ_Λ
= ∫_Λ π_prior(λ) [π_obs(Q(λ))/π_pf(Q(λ))] log(π_obs(Q(λ))/π_pf(Q(λ))) dμ_Λ
= ∫_Λ [π_obs(Q(λ))/π_pf(Q(λ))] log(π_obs(Q(λ))/π_pf(Q(λ))) dP_prior,

where the second equality comes from a simple substitution using Eq. <ref>. Given a set of samples from the prior, we only need to compute the push-forward of the prior in the data space to approximate I_Q. This observation provides an efficient method for approximating I_Q given a high-dimensional parameter space and a low-dimensional data space. In fact, we found it convenient to use (<ref>) whenever the prior is not uniform. In the consistent Bayesian formulation, we evaluate the model at the samples generated from the prior to estimate the push-forward of the prior; it is a computational advantage to also use these samples to integrate with respect to the prior, rather than integrating with respect to the volume measure, which would require additional model evaluations.

§.§ A motivating nonlinear system

Consider the following two-component nonlinear system of equations with two parameters, introduced in <cit.>:

λ_1 x_1^2 + x_2^2 = 1,
x_1^2 - λ_2 x_2^2 = 1.

The first QoI is the second component, i.e., Q_1(λ) = x_2(λ). The parameter ranges are given by λ_1∈[0.79, 0.99] and λ_2∈[1-4.5√(0.1), 1+4.5√(0.1)], which are chosen as in <cit.> to induce an interesting variation in the QoI. We assume the observed density on Q_1 is a truncated normal distribution with mean 0.3 and standard deviation 0.01, see Figure <ref> (right). We generate 40,000 samples from the uniform prior and use a kernel density estimator (KDE) to construct an approximation to the resulting push-forward density, see Figure <ref> (right). Then we use Eq.
(<ref>) to construct an approximation to the posterior density using the same 40,000 samples, see Figure <ref> (left), and a simple accept/reject algorithm to generate a set of samples from the posterior, see Figure <ref> (middle). We propagate this set of samples from the posterior through the model and approximate the resulting push-forward of the posterior density using a KDE. In Figure <ref> (right) we see that the push-forward of the posterior agrees quite well with the observed density. Notice that the support of the posterior lies in a relatively small region of the parameter space. The information gain from this posterior is I_Q_1(π_obs) ≈ 2.015.

Next, we consider a different QoI to use in the inverse problem and compare the support of its posterior to the one we just observed. Specifically, consider Q_2(λ) = x_1. We assume the observed density on Q_2 is a truncated normal distribution with mean 1.015 and standard deviation 0.01. We approximate the push-forward density and the posterior using the same 40,000 samples, again generate a set of samples from the posterior, and propagate these samples through the model to approximate the push-forward of the posterior, see Figure <ref>. Although both Q_1 and Q_2 have the same standard deviation in their observed densities, the two QoI clearly produce very different posterior densities. The posterior corresponding to data from Q_2 has a much larger region of support within the parameter space compared to that of the posterior corresponding to Q_1. This is quantified by the information gain from this posterior, I_Q_2(π_obs) ≈ 0.466. Given these two maps, Q_1 and Q_2, and the specified observed data on each of the data spaces, the data from Q_1 is more informative of the parameters than the data from Q_2.

Next, we consider using the data from both Q_1 and Q_2, i.e., Q := (Q_1, Q_2), with the same means and standard deviations as specified above. Again, we approximate the push-forward density and the posterior using the same 40,000 samples, see Figure <ref>. With the information from both Q_1 and Q_2 we see a substantial decrease in the support of the posterior density. Intuitively, the support of the posterior using both Q_1 and Q_2 is the support of the posterior using Q_1 intersected with the support of the posterior using Q_2. This is quantified by the information gain of this posterior, I_Q(π_obs) ≈ 2.98.

In the scenario in which we can afford to gather data on both Q_1 and Q_2, we benefit greatly in terms of reducing the uncertainties in the model input parameters. However, suppose we could only afford to gather one of these QoI in the field. Based on the information gain from each posterior, Q_1 is more informative about the parameters than Q_2. However, consider a scenario in which the observed data has different means in Q_1 and Q_2. Due to the nonlinearities of the maps, it is not necessarily true that Q_1 is still more informative than Q_2. If we do not know the mean of the data for either Q_1 or Q_2, then we want to determine which of these QoI we expect to produce the most informative posterior.

§.§ Expected information gain

Optimal experimental design must select a design before experimental data becomes available. In the absence of data, we use the simulation model to quantify the expected information gain of a given experimental design.
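The data-space form of I_Q above suggests a simple Monte Carlo estimator that reuses the prior samples already propagated through the model. The sketch below (with toy maps and an observed density of our choosing, not the nonlinear system above) contrasts a parameter-sensitive QoI with a nearly flat one; all names are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

def information_gain(q_prior, obs_pdf, pf_pdf):
    """Estimate I_Q = E_prior[ rho * log(rho) ], rho = pi_obs(Q)/pi_pf(Q)."""
    rho = obs_pdf(q_prior) / pf_pdf(q_prior)
    integrand = np.where(rho > 0, rho * np.log(np.maximum(rho, 1e-300)), 0.0)
    return integrand.mean()

rng = np.random.default_rng(0)
lam = rng.uniform(-1.0, 1.0, 5000)                 # prior samples
for Q in (lambda x: x ** 3 + x, lambda x: 0.1 * x):  # a sensitive vs. a flat QoI
    q = Q(lam)
    pf = gaussian_kde(q)                           # push-forward of the prior
    obs = norm(loc=np.median(q), scale=0.05)       # assumed observed density
    # (boundary effects near the edge of the data space are ignored here)
    print(information_gain(q, obs.pdf, pf))
```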
Let 𝒪 denote the space of densities over 𝒟. We want to define the expected information gain as some kind of average over this density space in a meaningful way. However, this is far too general a space to use to define the expected information gain: it includes densities that are unlikely to be observed in reality. Therefore, we restrict 𝒪 to a space more representative of densities that may be observed in reality. With no experimental data available to specify an observed density on a single QoI, we assume the density is a truncated Gaussian with a standard deviation determined by some estimate of the measurement instrument error. With Gaussians of (possibly) varying standard deviations specified for each QoI, this defines the shape of the observed densities we consider. We let 𝒪_𝒟 denote the space of all densities of this shape centered in 𝒟 = Q(Λ),

𝒪_𝒟 = {N̂(q,σ^2) : q∈𝒟},

where N̂(q, σ^2) is a truncated Gaussian function with mean q and standard deviation σ. More details of this definition of 𝒪_𝒟 are addressed in Section <ref>.

We can easily generalize our description of 𝒪. For example, we could also consider the standard deviation of the observed data to be uncertain, in which case we would also average over some interval of possible values for σ. However, in this work we only vary the center of the Gaussian densities. We can restrict 𝒪 in other ways as well. For example, if we expect the uncertainty in each QoI to be described by a uniform density, then we define the restriction on 𝒪 accordingly. This choice of characterization of the observed density space is largely dependent on the application. The only limitation is that we require the measure specified on the observed density space to be defined in terms of the push-forward measure, P_pf, as described below. In Section <ref> we describe one approach for defining a restricted observed density space where the observed density of each QoI has a Gaussian profile and the standard deviations are functions of the magnitudes of each QoI.

The restriction of the possible π_obs to this specific space of densities allows us to represent each density uniquely by a single point q∈𝒟. Based on our prior knowledge of the parameters and the sensitivities of the map Q, the model informs us that some data are more likely to be observed than other data; this is seen in the plot of π_pf in Figure <ref> (upper left). This implies that we do not want to average over 𝒟 with respect to the volume measure μ_𝒟, but rather with respect to the push-forward of the prior on 𝒟, π_pf. This respects the prior knowledge of the parameters and the sensitivity information provided by the model. We define the expected information gain, denoted E(I_Q), as just described,

E(I_Q) := ∫_𝒟 I_Q(q) π_pf(q) dμ_𝒟 = ∫_𝒟 I_Q(q) dP_pf.

From Eq. (<ref>), I_Q itself is defined in terms of an integral. The expanded form for E(I_Q) is then an iterated integral,

E(I_Q) = ∫_𝒟 ∫_Λ π_post(λ;q) log(π_post(λ;q)/π_prior(λ)) dμ_Λ dP_pf,

where we make explicit that π_post is a function of the observed density and, by our restriction of the space of observed densities in Eq. (<ref>), therefore a function of q∈𝒟. We utilize Monte Carlo sampling to approximate the integral in Eq. (<ref>), as described in Algorithm <ref>.

Algorithm <ref> appears to be a computationally expensive procedure, since it requires solving M stochastic inverse problems and, as noted in <cit.>, approximating π_pf can be expensive. In <cit.> and in this paper we use kernel density estimation techniques to approximate π_pf, which do not scale well as the dimension of 𝒟 increases <cit.>.
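A sketch of the resulting procedure for a single design is given below (illustrative, for a scalar QoI, and building on the information-gain estimator above). Drawing the centers q_i from the already-propagated prior samples is equivalent to sampling from the push-forward of the prior, and dividing ρ = π_obs(Q)/π_pf(Q) by its prior-sample mean approximately applies the rescaling by C_q discussed in Section <ref>; all names are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

def expected_information_gain(q_prior, sigma, M=100, rng=None):
    """E(I_Q) by Monte Carlo: average I_Q over observed densities N(q_i, sigma^2)
    with centers q_i drawn from the push-forward of the prior."""
    rng = rng or np.random.default_rng(0)
    pf = gaussian_kde(q_prior)
    pf_vals = pf(q_prior)
    gains = np.empty(M)
    for j, qc in enumerate(rng.choice(q_prior, size=M, replace=False)):
        rho = norm.pdf(q_prior, loc=qc, scale=sigma) / pf_vals
        rho /= rho.mean()        # approximately divides by C_q (see Sec. <ref>)
        gains[j] = np.mean(np.where(rho > 0,
                                    rho * np.log(np.maximum(rho, 1e-300)), 0.0))
    return gains.mean()

# Comparing candidate designs reuses the prior model runs, performed only once:
rng = np.random.default_rng(0)
lam = rng.uniform(-1.0, 1.0, 5000)
designs = {"Q1": lam ** 3 + lam, "Q2": 0.1 * lam}   # toy stand-ins for two designs
scores = {name: expected_information_gain(q, sigma=0.05) for name, q in designs.items()}
print(scores)   # the OED is the design with the largest expected information gain
```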
On the other hand, for a given experimental design, we only need to compute this approximation once, as each I_Q in Step 4 of Algorithm <ref> is computed using the same prior and map Q and, therefore, the same π_pf. In other words, the fact that the consistent Bayes method only requires approximating the push-forward of the prior implies that this information can be used to approximate posteriors for different observed densities without requiring additional model evaluations. This significantly improves the computational efficiency of the consistent Bayesian approach in the context of OED. We leverage this computational advantage throughout this paper by considering a discrete set of designs, which allows us to compute the push-forward for all of the candidate designs simultaneously. Utilizing a continuous design space might require computing the push-forward at each iteration of the optimization algorithm, since the designs (the locations of the observations) are not known a priori. The additional model simulations required to compute the push-forward of the prior at new design points might be intractable if the number of iterations is large; however, the need for new simulations may be avoided if the new observations can be extracted from archived state-space data. For example, if one stores the finite element solutions of a PDE at all samples of the prior at the first iteration of the design optimization, one can evaluate observations at new design locations, which are functionals of this PDE solution, via interpolation using the finite element basis.

§.§ Defining the OED

We are now in a position to define our OED formulation. Recall that an experimental design is defined as a set of QoI computed from the model, and we seek the optimal set of QoI to deploy in the field. Given a physics-based model, prior information on the model parameters, a space of potential experimental designs, and a generic description of the uncertainties for each QoI, we define our OED as follows. Let 𝒬 represent the design space, i.e., the space of all possible experimental designs, and let Q^z∈𝒬 be a specific design. Then the OED is the Q^z∈𝒬 that maximizes the expected information gain,

Q^opt := arg max_Q^z∈𝒬 E(I_Q^z).

As previously mentioned, the focus of this paper is on the utilization of the consistent Bayesian methodology within the OED framework, so we do not explore different approaches for solving the optimization problem given by Definition <ref> and simply find the optimal design over a discrete set of candidate designs. Consistent Bayesian inference is potentially well suited to finding OEDs in continuous design spaces. Typically, OED based upon statistical Bayesian methods uses Markov chain Monte Carlo (MCMC) methods to characterize the posterior distribution. MCMC methods do not provide a functional form for the posterior but rather only provide samples from the posterior. Consequently, gradient-free or stochastic gradient-based optimization methods must be used to find the optimal design. In contrast, consistent Bayesian inference provides a functional form for the posterior, which allows the use of more efficient gradient-based optimizers. Exploring the use of more efficient continuous optimization procedures will be the subject of future work.

§ INFEASIBLE DATA

The OED procedure proposed in this manuscript is based upon consistent Bayesian inference, which requires that the observed measure is absolutely continuous with respect to the push-forward measure induced by the prior and the model (assumption A3).
In other words, any event that we observe with non-zero probability must be predicted by the model and the prior with non-zero probability. During the process of computing E(I_Q), it is possible to violate this assumption. Specifically, depending on the mean and variance of the observational density, we may encounter π_obs∈𝒪_𝒟 such that ∫_𝒟 π_obs dμ_𝒟 < 1, i.e., the support of π_obs extends beyond the range of the map Q, see Figure <ref> (upper right). In this section we discuss the causes of infeasible data and options for avoiding infeasible data when estimating an optimal experimental design.

§.§ Infeasible data and consistent Bayesian inference

When inferring model parameters using consistent Bayesian inference, the most common cause of infeasible data is that the model being used to estimate the OED is inadequate. That is, the deviation between the computational model and reality is large enough to prohibit the model from predicting all of the observational data. The deviation between the model prediction and the observational data is often referred to as model structure error and can be a major source of uncertainty. This is an issue for most, if not all, inverse parameter estimation problems <cit.>. Recently there have been a number of attempts to quantify this error (see e.g., <cit.>); however, such approaches are beyond the scope of this paper. In the following we assume that the model structure error does not prevent the model from predicting all of the observational data.

§.§ Infeasible data and OED

To estimate an approximate OED we must quantify the expected information gain of a given experimental design (see Section <ref>). The expectation is over all possible normal observation densities with mean q∈𝒟 and standard deviation σ, defined by the space (<ref>). When 𝒟 is bounded, these densities may produce infeasible data, and the effect of this violation increases as q approaches the boundary of 𝒟. To remedy this violation of (A3) we must modify the set of observational densities. In this paper we choose to normalize π_obs over 𝒟. We redefine the observed density space 𝒪_𝒟 so that (A3) holds for each density in the space,

𝒪_𝒟 = {N̂(q,σ^2)/C_q : q∈𝒟},

where N̂(q, σ^2) is a truncated Gaussian function with mean q and standard deviation σ, and C_q is the integral of N̂(q,σ^2) over 𝒟 with respect to the Lebesgue measure on 𝒟,

C_q = ∫_𝒟 N̂(q,σ^2) dμ_𝒟.

A similar approach for normalizing Gaussian densities over compact domains was taken in <cit.>.

§.§ A nonlinear model with infeasible data

In this section, we use the nonlinear model introduced in Section <ref> to demonstrate that infeasible data can arise from relatively benign assumptions. Suppose the observed density on Q_1 is a truncated normal distribution with mean 0.3 and standard deviation 0.04. In this one-dimensional data space, this observed density is absolutely continuous with respect to the push-forward of the prior on Q_1, see Figure <ref> (left). Next, suppose the observed density on Q_2 is a truncated normal distribution with mean 0.982 and standard deviation 0.01. Again, in this new one-dimensional data space, this observed density is absolutely continuous with respect to the push-forward of the prior on Q_2, see Figure <ref> (right). Both of these observed densities are dominated by their corresponding push-forward densities, i.e., the model can reach all of the observed data in each case. However, consider the data space defined by both Q_1 and Q_2 and the corresponding push-forward and observed densities on this space, see Figure <ref>.
The non-rectangular shape of the combined data space is induced by the nonlinearity in the model and the correlations between Q_1 and Q_2. As we see in Figure <ref>, the observed density formed using the product of the one-dimensional Gaussian densities is not absolutely continuous with respect to the push-forward density on (Q_1, Q_2), i.e., the support of π_obs extends beyond the support of π_pf. Referring to Eq. (<ref>), we normalize this observed density over 𝒟, see Figure <ref> (right). Now that the new observed density obeys the required assumptions, we can solve the stochastic inverse problem as described in Section <ref>.

§.§ Computational considerations

The main computational challenge in the consistent Bayesian approach is the approximation of the push-forward of the prior. Following <cit.>, we use Monte Carlo sampling for the forward propagation of uncertainty. While the rate of convergence is independent of the number of parameters (the dimension of Λ), the accuracy in the statistics for the QoI may be relatively poor unless a large number of samples can be taken. Alternative approaches based on surrogate models can significantly improve the accuracy, but are generally limited to a small number of parameters. We also employ kernel density estimation techniques to construct a non-parametric approximation of the push-forward density, but it is well known that these techniques do not scale well with the number of observations (the dimension of 𝒟) <cit.>.

Next, we address the computational issue of normalizing N̂(q,σ^2), i.e., π_obs, over 𝒟. From the plot of π_pf in Figure <ref> (left) it is clear that the data space may be a complex region, so normalizing π_obs over 𝒟, as in Figure <ref> (right), would be computationally expensive. Fortunately, the consistent Bayesian approach provides a means to avoid this expense. Note that from Eq. (<ref>) we have

P_post(Λ) = P_prior(Λ) P_obs(Q(Λ))/P_pf(Q(Λ)),

where P_prior(Λ) = P_pf(Q(Λ)) = 1, which implies

P_post(Λ) = P_obs(Q(Λ)).

Therefore, normalizing π_obs over 𝒟 is equivalent to solving the inverse problem and then normalizing π̃_post (where we use the tilde over π to indicate that this function does not integrate to 1 because we have violated (A3)) over Λ. Although 𝒟 may not always be a generalized rectangle, (A1) implies we have a clear definition of Λ and can therefore efficiently integrate π̃_post over Λ and then normalize by

π_post = π̃_post / ∫_Λ π̃_post dμ_Λ.

In fact, this normalization factor can be estimated without additional model evaluations and without using the values of the prior or the posterior, which may not be usable in high-dimensional spaces. We observe that

P_post(Λ) = ∫_Λ π̃_post dμ_Λ = ∫_Λ π_obs(Q(λ))/π_pf(Q(λ)) dP_prior.

Thus, we can use the values of π_obs and π_pf computed at the samples generated from the prior, which were used to estimate the push-forward of the prior, to integrate π_obs/π_pf with respect to the prior.

§ NUMERICAL EXAMPLES

In this section we consider several models of physical systems. First, we consider a stationary convection-diffusion model with a single uncertain parameter controlling the magnitude of the source term. Next, we consider a transient transport model with a two-dimensional parameter space determining the location of the source of a contaminant. Then, we consider an inclusion problem in computational mechanics where two uncertain parameters control the shape of the inclusion. Finally, we consider a high-dimensional example of single-phase incompressible flow in porous media where the uncertain permeability field is given by a Karhunen-Loeve expansion <cit.>. In each example, we have a parameter space Λ, a set of possible QoI, and a specified number of QoI we can afford to gather during the experiment.
This in turn defines a design space 𝒬, and we let Q^z ∈ 𝒬 represent a single experimental design and 𝒟^z = Q^z(Λ) the corresponding data space. For each experimental design, we let σ^z represent the standard deviations defined by the uncertainties in each QoI that compose Q^z and 𝒪_𝒟^z represent the observed density space.

All of these examples have continuous design spaces, so we approximate the OED by selecting the best design from a large set of candidate designs. This approach was chosen because it is much more efficient to perform the forward propagation of uncertainty using random sampling only once and to compute all of the candidate measurements for each of these random samples. Alternatively, one could pursue a continuous optimization formulation, which would require a full forward propagation of uncertainty for each new design. As mentioned in Section <ref>, one could limit the number of designs using a gradient-based or Newton-based optimization approach, but this is beyond the scope of this paper.

§.§ Stationary convection-diffusion: uncertain source amplitude
In this section we consider a convection-diffusion problem with a single uncertain parameter controlling the magnitude of a source term. This example serves to demonstrate that the OED formulation gives intuitive results for simple problems.

§.§.§ Problem setup
Consider a stationary convection-diffusion model on a square domain:
-D∇^2 u + ∇·(v u) = S, x∈Ω,
∇ u ·𝐧 = 0, x∈Γ_N ⊂∂Ω,
u = 0, x∈Γ_D ⊂∂Ω,
with
S(x) = A exp(-||x_src-x||^2/2h^2),
where Ω = [0,1]^2, u is the concentration field, the diffusion coefficient D=0.01, the convection vector v=[1,1], and S is a Gaussian source with the following parameters: x_src is the location, A is the amplitude, and h is the width. We impose homogeneous Neumann boundary conditions on Γ_N (right and top boundaries) and homogeneous Dirichlet conditions on Γ_D (left and bottom boundaries). For this problem, we choose x_src = [0.5,0.5] and h=0.05. We let A be uncertain within [50, 150]; thus the parameter space for this problem is Λ=[50, 150]. Hence, our goal is to gather some limited amount of data that provides the best information about the amplitude of the source, i.e., reduces our uncertainty in A. To approximate solutions to the PDE in Eq. (<ref>) given a source amplitude A, we use a finite element discretization with continuous piecewise bilinear basis functions defined on a uniform (25× 25) spatial grid.

§.§.§ Results
We assume that we have limited resources for gathering experimental data; specifically, we can only afford to place one sensor in the domain to gather a single concentration measurement. Our goal is to place this single sensor in Ω to maximize the expected information gained about the amplitude of the source. We discretize Ω using 2,000 uniform random points, which produces a design space with 2,000 possible experimental designs. For this problem, we let the uncertainty in each QoI be described by a truncated Gaussian profile with a fixed standard deviation of 0.1. This produces observed density spaces, 𝒪_𝒟^z, as described in Eq. (<ref>). We generate 5,000 uniform samples from the prior and simulate measurements of each QoI for each of these 5,000 samples. We consider approximate solutions to the OED problem using subsets of the 5,000 samples of size 50, 200, 1,000 and 5,000. For each experimental design, we calculate E(I_Q^z) using Algorithm <ref> and plot E(I_Q^z) as a function of the discretized design space in Figure <ref>.
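A compressed sketch of how the loop over candidate designs might look is given below. This is not the authors' implementation: the function concentration standing in for the finite element solve, as well as the sample sizes, are placeholders chosen for illustration.

import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)
lam = rng.uniform(50.0, 150.0, size=2000)      # prior samples of the amplitude A

def concentration(lam, sensor):
    # Placeholder for u(sensor; A) from the PDE solve; linear in A here.
    return lam * np.exp(-np.sum((sensor - 0.5)**2) / 0.05) / 100.0

def expected_ig(sensor, sigma=0.1, n_data=100):
    q = concentration(lam, sensor)             # push-forward samples
    pf_q = gaussian_kde(q)(q)                  # KDE of the push-forward at the samples
    ig = 0.0
    for qbar in rng.choice(q, size=n_data):    # possible data, drawn from the push-forward
        r = norm.pdf(q, loc=qbar, scale=sigma) / pf_q   # pi_obs / pi_PF at prior samples
        r /= r.mean()   # normalization over Lambda; absorbs the truncation constant C_q
        ig += np.mean(r * np.log(np.maximum(r, 1e-300)))  # KL(posterior || prior)
    return ig / n_data

designs = rng.uniform(0.0, 1.0, size=(200, 2))
best = designs[np.argmax([expected_ig(s) for s in designs])]

Note the division r /= r.mean() is exactly the prior-sample estimate of the normalization factor discussed in the computational considerations above, so no explicit integration over the (possibly complex) data space is required.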
Notice the expected information gain is greatest near the center of the domain (near the location of the source) and in the direction of the convection vector away from the source. This result matches intuition, as we expect data gathered in regions of the domain that exhibit sensitivity to the parameters to produce high expected information gains. We note that, for this example, a sufficiently accurate approximation to the design space and the OED is obtained using only 50 samples, corresponding to 50 model evaluations. In Table <ref> we show the top 5 experimental designs (computed using the full set of 5,000 samples) and the corresponding E(I_Q^z) for each set of samples.

§.§ Time dependent diffusion: uncertain source location
In this section, we compare results from a statistical Bayesian formulation of OED to the formulation described in this paper. Specifically, we consider the model in <cit.> where the author uses a classical Bayesian framework for OED to determine the optimal placement of a single sensor that maximizes the expected information about the location of a contaminant source.

§.§.§ Problem setup
Consider a contaminant transport model on a square domain:
∂ u/∂ t = ∇^2 u + S, x∈Ω, t>0,
∇ u ·𝐧 = 0, x∈∂Ω, t>0,
u = 0, x∈Ω, t=0,
with
S(x,t) = s/(2π h^2) exp(-||x_src-x||^2/2h^2) if 0≤ t<τ, and S(x,t) = 0 if t≥τ,
where Ω = [0,1]^2, u is the space-time concentration field, we impose homogeneous Neumann boundary conditions along with a zero initial condition, and S is a Gaussian source with the following parameters: x_src is the location, s is the intensity, h is the width, and τ is the shutoff time. Our goal is to gather some limited amount of data that provides the best information about the location of the source, i.e., reduces our uncertainty in x_src. For this problem, we choose s=2.0, h=0.05, and τ=0.3, and let x_src be uncertain within [0,1]^2 such that Λ=[0,1]^2. To approximate solutions to the PDE in Eq. (<ref>) given a location of S, i.e., a given x_src, we use a finite element discretization with continuous piecewise bilinear basis functions defined on a uniform (25× 25) spatial grid and backward Euler time integration with a step size Δ t = 0.004 (100 time steps).

§.§.§ Results
We assume that we have limited resources for gathering experimental data; specifically, we can only afford to place one sensor in the domain and can only gather a single concentration measurement at time t=0.24. Our goal is to place this single sensor in Ω to maximize the expected information gained about the location of the contaminant source. For simplicity, we discretize Ω using an 11× 11 regular grid of points, which produces a design space with 121 possible experimental designs. We let the uncertainty in each QoI be described by a Gaussian profile with a standard deviation that is a function of the magnitude of the QoI, i.e.,
σ_i = 0.1 + 0.1 |q_i| for i=1,…, M,
where M is the dimension of the data space. This produces observed density spaces, 𝒪_𝒟^z, that consist of truncated Gaussian functions with varying standard deviations,
𝒪_𝒟^z = {N̂(q,(σ(q))^2)/C_q : q∈𝒟^z}.
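In a sketch, the only change relative to the previous example is that the standard deviation is evaluated at the observed datum before the truncated Gaussian is normalized; the interval bounds below are placeholders.

import numpy as np
from scipy.stats import norm

def sigma_of_q(q):
    return 0.1 + 0.1 * np.abs(q)               # magnitude-dependent noise

def observed_density(x, qbar, lo=0.0, hi=1.0):
    # Truncated Gaussian N(qbar, sigma(qbar)^2) renormalized over [lo, hi].
    s = sigma_of_q(qbar)
    C_q = norm.cdf(hi, qbar, s) - norm.cdf(lo, qbar, s)
    x = np.asarray(x, dtype=float)
    return np.where((x >= lo) & (x <= hi), norm.pdf(x, qbar, s) / C_q, 0.0)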
We generate 5,000 uniform samples from the prior and simulate measurements of each QoI for each of these 5,000 samples. We consider approximate solutions to the OED problem using subsets of the 5,000 samples of size 50, 200, 1,000 and 5,000. For each experimental design, we use this data to calculate E(I_Q^z) using Algorithm <ref> and plot E(I_Q^z) as a function of the discretized design space in Figure <ref>. Notice the expected information gain is greatest near the corners of the domain and smallest near the center; this is consistent with <cit.>. In Table <ref> we show the top 5 experimental designs, approximated using the full set of 5,000 samples, and the corresponding E(I_Q^z) for each set of samples. In Figure <ref> we consider three different posteriors computed using data from the OED approximated using 5,000 samples, i.e., data gathered by a sensor placed in the bottom left corner of the domain, where each posterior corresponds to a different possible location of the source. We see varying levels of information gain in these three scenarios, reiterating the point that we choose the OED based on the average of these information gains, E(I_Q).

Although many of the results in this section seem to match our intuition about which measurement locations should produce high expected information gains, this may not always be the case. In particular, we have found that our results can depend on our choice of the standard deviation σ of the observed densities. If σ is chosen to be large relative to the range of a data space, then the posteriors produced as we average over 𝒪_𝒟 are all nearly the same and potentially produce unusually high information gains when the observed densities have substantial support over regions of the data space with very small probability (very small values of the push-forward of the prior). Another way to think of this is that the push-forward densities have high entropy and, because σ is large, the observed density is very close to uniform, and this produces posterior densities with high information gains. If σ is chosen to be small relative to the range of the data space, i.e., if we expect the experiments to be informative, we do not encounter this issue because we are integrating over 𝒟 with respect to the push-forward measure, so most of our potential observed data lies in high probability regions of the data space.

§.§ A Parameterized Inclusion
In this section, we consider a simple problem in computational mechanics where the precise boundary of an inclusion is uncertain. We parameterize the inclusion and seek to determine the location to place a sensor that will maximize the information gained regarding the shape of the inclusion. We use a linear elastic formulation to model the response of the media to surface forces and measure the horizontal stress at each sensor location. We assume that the material properties (Poisson ratio and Young's modulus) are different inside the inclusion and that these properties are known a priori.
§.§.§ Problem setup
Consider a linear elastic plane strain model,
-∇·σ(𝐮) = 0, x∈Ω = [-5,5]×[0,2],
𝐮 = 𝐠, x∈Γ_D = {(x,y)∈∂Ω | y=0},
σ(𝐮)𝐧 = 𝐭, x∈Γ_N = ∂Ω\Γ_D,
where σ(𝐮) is given by the linear elastic constitutive relation,
σ(𝐮) = λ(∇·𝐮)𝕀 + μ(∇𝐮 +∇𝐮^T).
We express this relation in terms of the Lamé parameters, λ and μ, which are related to the Poisson ratio, ν, and Young's modulus, E, via the following expressions,
μ = E/2(1+ν), λ = Eν/(1+ν)(1-2ν).
Now assume that there is an inclusion within the media defined by an ellipse
I = {(x,y)∈Ω | (1/α)(x-x_0)^2 + (1/β)(y-y_0)^2 ≤ 1},
where x_0=y_0=0, α is uniformly distributed on [0.5,1], and β is uniformly distributed on [0.25,0.5]. The material properties are assumed to be known and are given by
ν = 0.45 for (x,y)∈ I and ν = 0.3 otherwise,
E = 10.0 for (x,y)∈ I and E = 40.0 otherwise.
These material properties were not chosen to emulate any particular materials, just to demonstrate the proposed OED formulation. Next, we impose homogeneous Dirichlet boundary conditions on the bottom boundary and stress-free boundary conditions on the sides, and impose a uniform traction in the y-direction along the top boundary (𝐭_top = (0,-1)^T). Finally, we assume that we can probe the media and measure the horizontal stress at a given sensor location. We do not want to puncture the inclusion, so we only consider sensor locations outside the bounds on the inclusion. Equation (<ref>) was solved using a finite element discretization with piecewise linear basis functions defined on a uniform 400× 80 mesh, resulting in a system with 64,962 degrees of freedom. The computational model is implemented using the Trilinos toolkit <cit.> and each realization of the model requires approximately 1 second using 8 processors.

§.§.§ Results
As in previous examples, we assume that we have limited resources for gathering experimental data; specifically, we can only afford to place one sensor in the domain to gather a single stress measurement. Our goal is to place this single sensor to maximize the expected information gained about the shape of the inclusion. We select 2,000 random sensor locations (outside the inclusion bounds), which produces a design space with 2,000 possible experimental designs. For this problem, we let the probability density for the QoI be described by a truncated Gaussian profile with a fixed standard deviation of 0.001. We generate 1,000 uniform samples from the prior and compute the horizontal stress at each sensor location for each of these 1,000 samples.

First, we compare the posterior densities for two sensor locations, (3.5294,1.3049) and (1.3902,1.2100), under the assumption that we have already gathered data at these sensor locations. The purpose here is to demonstrate that we obtain different posterior densities and therefore gain different information from each sensor. The first sensor is further from the inclusion, so we expect that the data from the second sensor will constrain the posterior more than the data from the first. In Figures <ref> and <ref>, we plot the samples from the posterior and the corresponding kernel density estimate of the posterior for the first and second sensor locations respectively. It is clear that measuring the horizontal stress closer to the inclusion increases the information gained from the prior to the posterior.
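Posterior samples such as those just described can be generated, for instance, by rejection sampling on the ratio of the observed to the push-forward density. In the sketch below the stress map is a hypothetical stand-in for the finite element solve, and the measured value and noise level are invented for illustration.

import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(1)
alpha = rng.uniform(0.5, 1.0, 1000)            # prior samples of the ellipse parameters
beta = rng.uniform(0.25, 0.5, 1000)
q = 0.3 * alpha - 0.2 * beta**2                # placeholder for the stress at one sensor

pf_q = gaussian_kde(q)(q)                      # push-forward density at the samples
qbar, sigma = 0.25, 0.005                      # hypothetical datum and noise level
r = norm.pdf(q, qbar, sigma) / pf_q            # pi_obs(Q(lam)) / pi_PF(Q(lam))

# Accept prior sample i with probability r_i / max(r); a tighter sigma requires
# more prior samples to retain a usable number of posterior samples.
keep = rng.uniform(size=r.size) < r / r.max()
posterior_alpha, posterior_beta = alpha[keep], beta[keep]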
We consider approximate solutions to the OED problem using subsets of the 1,000 samples of size 10, 50, 100 and 1,000. For each experimental design, we use this data to calculate E(I_Q^z) using Algorithm <ref> and plot E(I_Q^z) as a function of the discretized design space in Figure <ref>. Notice the expected information gain is greatest near the bottom of the domain near the inclusion and is reasonably symmetric around the inclusion. Also note that the expected information gain is relatively large in the bottom corners of the domain. This is due to the choice of boundary conditions for the model, which induces a large amount of stress in these corners. In Table <ref> we show the top 5 experimental designs, approximated using the full set of 1,000 samples, and the corresponding E(I_Q^z) for each set of samples.

§.§ A Higher-Dimensional Porous Media Example with Uncertain Permeability
In this section, we consider an example of single-phase incompressible flow in porous media with a Karhunen-Loève expansion of the uncertain permeability field. The purpose of this example is to demonstrate the OED formulation on a problem with a high-dimensional parameter space and more than one sensor.

§.§.§ Problem setup
Consider a single-phase incompressible flow model:
-∇·(K(λ)∇ p) = 0, x∈Ω = (0,1)^2,
p = 1, x=0,
p = 0, x=1,
K∇ p ·𝐧 = 0, y=0 and y=1.
Here, p is the pressure field and K is the permeability field, which we assume is a scalar field given by a Karhunen-Loève expansion of the log transformation, Y = log K, with
Y(λ) = Y̅ + ∑_i=1^∞ ξ_i(λ)√(η_i) f_i(x,y),
where Y̅ is the mean field. We assume the mean-removed random media is given by a Gaussian process, which implies that the ξ_i are mutually uncorrelated random variables with zero mean and unit variance <cit.>. The eigenvalues, η_i, and eigenfunctions, f_i, are computed numerically using the following covariance function,
C_Y(𝐱,𝐱') = σ_Y^2 exp[-(x_1-x_1')^2/2η_1 - (x_2-x_2')^2/2η_2],
where σ_Y^2 and η_i denote the variance and the correlation length in the i^th spatial direction respectively. We assume a correlation length of 0.01 in each spatial direction and truncate the expansion at 100 terms. This choice of truncation is purely for the sake of demonstration. In practice, the expansion is truncated once a sufficient fraction of the energy in the eigenvalues is retained <cit.>. This truncation gives 100 uncorrelated random variables, ξ_1,…, ξ_100, with zero mean and unit variance, which implies Λ = ℝ^100. To approximate solutions to the PDE in Eq.
<ref> we use a finite element discretization with continuous piecewise bilinear basis functions defined on a uniform (50× 50) spatial grid.

§.§.§ Results
In this section, we present approximate solutions to several different design problems. We begin with the familiar problem of choosing a single sensor location within the physical domain. Then, we consider approximating the optimal location of a second sensor given the location of the first sensor. In this way, we solve the greedy OED problem and determine the greedy optimal locations of 1-8 sensors within the physical domain. We then consider solving the exhaustive OED problem, where we limit the sensors to 25 locations and consider determining the optimal location of 5 available sensors.

First, assume that we have limited resources for gathering experimental data; specifically, we can only afford to place one sensor in the domain to gather a single pressure measurement. Our goal is to place this single sensor in Ω to maximize the expected information gained about the uncertain permeability field. We discretize Ω using 1,301 points on a grid, which produces a design space with 1,301 possible experimental designs. For this problem, we let the uncertainty in each QoI be described by a truncated Gaussian profile with a fixed standard deviation of 0.01. This produces observed density spaces, 𝒪_𝒟^z, as described in Eq. (<ref>). We generate 10,000 samples from the prior and simulate measurements of each QoI. We consider approximate solutions to the OED problem using subsets of the 10,000 samples of size 50, 100, 1,000 and 10,000. For each experimental design, we calculate E(I_Q^z) using Algorithm <ref> and plot E(I_Q^z) as a function of the discretized design space in Figure <ref>. Notice the expected information gain is greatest near the top and bottom of the domain away from the left and right edges. This result matches intuition, as we expect data gathered near the left and right edges to be less informative given the Dirichlet boundary condition imposed on those boundaries. We note that, for this example, a sufficiently accurate approximation to the design space and the OED is obtained using only 1,000 samples, corresponding to 1,000 model evaluations. In Table <ref> we show the top 5 experimental designs (computed using the full set of 10,000 samples) and the corresponding E(I_Q^z) for each set of samples.

Next, we consider the greedy OED problem of placing 8 sensors within the physical domain. We choose to use all of the available 10,000 samples to solve this problem. In Figure <ref>, we see the design space as a function of the previously determined locations of placed sensors. We observe a strong symmetry in this problem, as is expected due to the symmetry of the physical process defined on this domain with the given boundary conditions. In the bottom right of Figure <ref>, notice the very small range of the color bar indicating the possible values of the expected information gain. This suggests that, for this example, there is a limit on the number of useful sensor locations for informing likely parameter values.

Lastly, we consider the exhaustive OED problem of placing 5 sensors within the physical domain and, for computational feasibility, restrict the possible locations of these 5 sensors to 25 points in the physical domain, see Figure <ref>. We choose to use 1,000 samples to solve this problem. In Figure <ref>, we plot the design space for a single sensor location using these 1,000 samples and show the optimal location of 1, 2, 3, 4 and 5 sensors.
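The greedy and exhaustive strategies just described differ only in the search loop. A sketch, assuming a function expected_ig that accepts a tuple of sensor indices and returns E(I) for the corresponding vector-valued design (computed along the lines of the earlier sketches):

import itertools

def greedy_oed(candidates, k, expected_ig):
    # Add one sensor at a time, keeping previously chosen sensors fixed.
    chosen = []
    for _ in range(k):
        rest = [c for c in candidates if c not in chosen]
        chosen.append(max(rest, key=lambda c: expected_ig(tuple(chosen) + (c,))))
    return chosen

def exhaustive_oed(candidates, k, expected_ig):
    # Evaluate every subset of size k.
    return max(itertools.combinations(candidates, k), key=expected_ig)

# With 25 candidate locations and 5 sensors the exhaustive search requires
# C(25,5) = 53,130 evaluations of E(I), versus 25+24+23+22+21 = 115 for greedy.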
The results are quite similar to the greedy results previously described.

§ CONCLUSION
In this manuscript, we developed an OED formulation based on the recently developed consistent Bayesian approach for solving stochastic inverse problems. We used the Kullback-Leibler divergence and the posterior obtained using consistent Bayesian inference to measure the information gain of a design, and presented a discrete optimization procedure for choosing the optimal experimental design that maximizes the expected information gain. The optimization procedure presented in this paper is limited in terms of the number of observations we can consider, but was chosen to focus attention on the definition and approximation of the expected information gain. More efficient strategies, utilizing gradient-based methods on continuous design spaces, will be pursued in future work. We discussed a characterization of the space of observed densities needed to compute the expected information gain and a computationally efficient approach for rescaling observed densities to satisfy the requirements of the consistent Bayesian approach. Numerical examples were given to highlight the properties and utility of our approach.

§ ACKNOWLEDGMENTS
J.D. Jakeman's work was supported by DARPA EQUIPS. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525.
http://arxiv.org/abs/1705.09395v1
{ "authors": [ "Scott N. Walsh", "Tim M. Wildey", "John D. Jakeman" ], "categories": [ "stat.CO", "math.NA", "60H30, 60H35, 60B10" ], "primary_category": "stat.CO", "published": "20170525232411", "title": "Optimal Experimental Design Using A Consistent Bayesian Approach" }
Department of Physics, North Carolina State University, Raleigh, NC 27695 USA
The strong gravitational field around a proto-neutron star can modify the neutrino flavor transformations that occur above the neutrinosphere via three General Relativistic (GR) effects: time dilation, energy redshift, and trajectory bending. Depending on the compactness of the central object, the neutrino self-interaction potential is up to three times as large as that without GR, principally due to trajectory bending, which increases the intersection angles between different neutrino trajectories, and time dilation, which changes the fluxes. We determine whether GR effects are important for flavor transformation during the different epochs of a supernova by using multi-angle flavor transformation calculations, and consider a density profile and neutrino spectra representative of both the accretion and cooling phases. We find the GR effects are smaller during the accretion phase, due to the low compactness of the proto-neutron star, and merely delay the decoherence; the neutrino bipolar oscillations during the cooling phase are also delayed due to the GR effects, but this delay may be more important because it occurs at radii where it might alter the nucleosynthesis in the neutrino driven wind.
14.60.Pq, 97.60.Jd, 13.15.+g
GR Effects in Supernova Neutrino Flavor Transformation
James P. Kneller
December 30, 2023
======================================================

§ INTRODUCTION
The collapse of the core of a massive star at the end of its life forms a hot and dense object known as a proto-neutron star, which cools via the emission of neutrinos over a period of ∼10 s <cit.>. The spectra and flavor distribution of the neutrinos that emerge from the supernova are not the same as those emitted from the proto-neutron star: for a recent review see Mirizzi et al. <cit.>. At the present time the most sophisticated calculations of the neutrino flavor transformation adopt the so-called `bulb' model: the neutrino source is a spherically symmetric, hard neutrinosphere, the calculation assumes a steady state, and neutrinos are followed along multiple trajectories characterized by their angle of emission relative to the radial direction - the `multi-angle' approach <cit.>. The Hamiltonian governing the flavor evolution for a single neutrino depends on the local density profile plus a contribution from all the other neutrinos which are escaping the proto-neutron star - the neutrino self-interaction. The neutrino self-interaction depends upon the neutrino luminosity, the mean energy, and a term proportional to 1 - cosΘ due to the current-current nature of the weak interaction, where Θ is the angle between two neutrino trajectories. Curiously, while the density profile and the neutrino spectra are sometimes taken from hydrodynamical simulations of supernovae which include General Relativistic (GR) effects either exactly or approximately, e.g. from the simulations by Fischer et al. <cit.>, the calculations of the neutrino flavor transformation ignore them.

The flavor transformation that occurs in a supernova will alter the expected signal from the next Galactic supernova <cit.>, as well as modify the Diffuse Supernova Neutrino Background <cit.> and the nucleosynthesis that occurs in the neutrino driven wind <cit.>. Neutrino heating in the region behind the shock is thought to be the mechanism by which the star explodes, and such heating depends upon the neutrino spectra of each flavor, which in turn depend upon the flavor transformation <cit.>.
With so many different consequences of flavor transformation, one wonders how including GR in the flavor transformation calculations might alter our expectations. GR effects upon neutrino oscillations in vacuum have been considered on several occasions, e.g. <cit.>. The inclusion of matter is occasionally considered <cit.>, and the effect of GR is usually limited to a shift in the location and adiabaticity of the Mikheyev-Smirnov-Wolfenstein (MSW) resonance <cit.> via the redshift of the neutrino energy. The effects of GR upon neutrino self-interactions have not been considered. The effect of GR has also been studied for the neutrinos emitted from the accretion disk surrounding a black hole formed in the merger of two neutron stars, a black hole and a neutron star, or in a collapsar. For example, Caballero, McLaughlin and Surman <cit.> studied the GR effects for accretion disk neutrinos (but without neutrino transformation) and found the effects upon the nucleosynthesis were large because of the significant changes to the neutrino flux. The aim of this paper is to explore the GR effects upon flavor transformation in supernovae including neutrino self-interactions and to determine whether they might be important in different phases of the explosion. Our paper is organized as follows. In Section <ref> we describe our calculation and how the GR effects are included. Section <ref> contains our results for the two representative cases we study: luminosities, mean and rms energies, density profiles and source compactness characteristic of the accretion phase, and a different set representative of the cooling phase. In Section <ref> we discuss the conditions that lead to the formation of a neutrino halo - neutrinos that were emitted but which later turned around and returned to the proto-neutron star. We present a summary and our conclusions in Section <ref>.

§ CALCULATION DESCRIPTION
§.§ GR Effects Upon Neutrinos
Before describing the formulation of neutrino oscillations in a curved spacetime, we first describe the three general relativistic effects that will be important. For this paper we adopt an exterior Schwarzschild metric for the space beyond the neutrinosphere[For simplicity we ignore the gravitational effect of the matter outside the neutrinosphere.], which is given by
dτ^2 = B(r)dt^2 - dr^2/B(r) - r^2dψ^2 - r^2sin^2ψ dϕ^2,
where the function B(r) is
B(r) = 1 - r_s/r
and r_s is the Schwarzschild radius given by r_s = 2GM, with M the gravitational mass. Throughout our paper we set ħ=c=1. Since the rest masses of all neutrino species are much smaller than the typical energies of supernova neutrinos, we can comfortably take the ultra-relativistic limit and assume neutrinos follow null geodesics just like photons. The Schwarzschild metric is isotropic, so all geodesics are planar. By setting dτ^2 = 0 and dϕ=0, so that the geodesic lies in a plane perpendicular to the equatorial plane, we obtain
B(r)dt^2 = dr^2/B(r) + r^2dψ^2.
The energy of a neutrino E decreases as it climbs out of the gravitational well such that its energy at a given radial coordinate r relative to its energy at r→∞, E_∞, is
E/E_∞ = 1/√(B(r)).
The angular momentum ℓ of the neutrino also decreases as it climbs out of the potential well by the same scaling. This means the ratio of the neutrino's angular momentum to its energy is constant, and in our chosen plane it is given by
ℓ/E = r^2/B(r) |dψ/dt| = b,
where b is a constant called the impact parameter.
The impact parameter can be evaluated at the neutrinosphere r=R_ν, where we find it is given by
b = R_ν sinθ_R/√(1 - r_s/R_ν),
where θ_R is the emission angle of the neutrino with respect to the radial direction at the neutrinosphere. Using Eq. (<ref>) to eliminate dt from Eq. (<ref>) we find[Here the plus sign is for outgoing neutrinos and the minus sign is for ingoing neutrinos; this is true for all following equations.]
dψ = ±[1/b^2 - B(r)/r^2]^-1/2 dr/r^2.
This equation can be used to describe the neutrino trajectory associated with a certain emission angle θ_R. Alternatively, using Eq. (<ref>) to eliminate dψ from Eq. (<ref>) gives
dt = ±(1/B(r)) dr/√(1 - b^2 B(r)/r^2).
For an observer at position r the relation between the coordinate time t and the local proper time[The “local proper time” is defined as the clock time of an observer sitting at a particular point along the neutrino trajectory.] τ is
dτ^2 = B(r) dt^2,
so using the result from Eq. (<ref>) we find
dτ = ±(1/√(B(r))) dr/√(1 - b^2 B(r)/r^2).
This collection of equations will be useful when we describe flavor oscillations in a curved spacetime.

§.§ Neutrino Oscillations In A Curved Spacetime
Our calculations of the effects of GR on neutrino flavor transformation are based upon the neutrino bulb model established by Duan et al. <cit.>. In this model, neutrinos are emitted from a hard neutrinosphere with radius R_ν, and for simplicity we assume the angular distribution of emission is half-isotropic. The setup is illustrated in Fig. <ref>, which shows the trajectory of a neutrino emitted at the neutrinosphere R_ν with angle θ_R relative to the radial direction. After propagating to radial coordinate r with angle ψ relative to the radial direction at the point of emission, it makes an angle θ relative to the radial direction at (r, ψ).

The formulation of neutrino flavor transformation in a curved spacetime has been considered on multiple occasions <cit.>. The flavor state at some local proper time τ of a neutrino with momentum q is related to the flavor state at the local proper time of emission τ_0 with momentum q_0 via an evolution matrix S(τ,q;τ_0, q_0), which evolves according to the Schrödinger equation. In a curved spacetime the evolution matrix evolves with the local proper time τ as
i dS/dτ = H(τ)S.
Here H is the Hamiltonian, which is also a function of the local proper time for the case of neutrinos in a non-uniform medium. The local proper time τ may be replaced with the radial coordinate r by using Eq. (<ref>) once the impact parameter/emission angle is given. Similarly, the evolution of the antineutrinos is given by an evolution matrix S̅ which evolves according to a Hamiltonian H̅. Once the evolution matrix has been found, the probability that a neutrino in some generic initial state ν_j with momentum q_0 at τ_0 is later detected as state ν_i at proper time τ and momentum q is
P(ν_j →ν_i) = P_ij = |S_ij(τ,q;τ_0, q_0)|^2.
The Hamiltonian H is the sum of three terms:
H = H_V + H_M + H_SI,
where H_V is the vacuum term, H_M is the matter term describing the effect of passing through matter, and H_SI is a term due to neutrino self-interactions. For the antineutrinos the Hamiltonian is also a sum of three terms, H̅ = H̅_V + H̅_M + H̅_SI, which are related to the corresponding terms in the neutrino Hamiltonian via H̅_V = H_V^∗, H̅_M = -H_M^∗, H̅_SI = -H_SI^∗.
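The evolution along a trajectory can be integrated in the radial coordinate by combining Eq. (<ref>) for dτ/dr with the Schrödinger equation. The sketch below is schematic: the two-flavor Hamiltonian and the values of r_s, b and the radial grid are placeholders, not the inputs used in our calculations.

import numpy as np

def dtau_dr(r, b, rs):
    # Eq. (<ref>) for an outgoing neutrino.
    B = 1.0 - rs / r
    return 1.0 / (np.sqrt(B) * np.sqrt(1.0 - b**2 * B / r**2))

def evolve_S(H_of_r, r_grid, b, rs):
    # Integrate i dS/dtau = H S as dS/dr = -i (dtau/dr) H(r) S with RK4.
    S = np.eye(H_of_r(r_grid[0]).shape[0], dtype=complex)
    f = lambda r, S: -1j * dtau_dr(r, b, rs) * (H_of_r(r) @ S)
    for r0, r1 in zip(r_grid[:-1], r_grid[1:]):
        h = r1 - r0
        k1 = f(r0, S); k2 = f(r0 + h/2, S + h*k1/2)
        k3 = f(r0 + h/2, S + h*k2/2); k4 = f(r1, S + h*k3)
        S = S + (h/6) * (k1 + 2*k2 + 2*k3 + k4)
    return S

rs, b = 4.0, 10.0                                        # km; illustrative only
H0 = np.array([[0.0, 0.1], [0.1, 0.5]], dtype=complex)   # km^-1; placeholder
S = evolve_S(lambda r: np.sqrt(1.0 - rs/r) * H0,
             np.linspace(25.0, 500.0, 5001), b, rs)
print(abs(S[0, 0])**2)                                   # a survival probability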
In a flat spacetime the vacuum term for a neutrino with energy E takes the form
H^(f)_V = 1/2E U_V ([ m_1^2 0 0; 0 m_2^2 0; 0 0 m_3^2 ]) U_V^†,
where the m_i are the neutrino masses and U_V is the unitary matrix relating the `mass' and flavor bases. The flavor basis is denoted by the superscript (f) upon relevant quantities, and we order the rows/columns as e, μ, τ (here τ is the neutrino flavor, not the local proper time). We adopt the Particle Data Group parameterization of the matrix U_V, which is in terms of three mixing angles θ_12, θ_13 and θ_23 plus a CP violating phase δ_CP <cit.>. In a curved spacetime the energy of a neutrino depends on position due to the gravitational redshift, so the vacuum term changes accordingly and is
H^(f)_V = √(B(r))/2E_∞ U_V ([ m_1^2 0 0; 0 m_2^2 0; 0 0 m_3^2 ]) U_V^†.
The matter Hamiltonian H_M in the flavor basis depends upon the electron density n_e(r) and is simply
H^(f)_M = √(2) G_F n_e(r) ([ 1 0 0; 0 0 0; 0 0 0 ]).

§.§ The GR correction to neutrino self-interactions
In addition to the vacuum and matter terms, in a neutrino dense environment such as a supernova we must add to the Hamiltonian a term due to neutrino self-interactions. The form of the self-interaction is
H_SI(r,q) = √(2)G_F ∑_α= e,μ,τ ∫ (1 - q̂·q̂') [ρ_α(r, q') dn_α(r, q') - ρ_α̅^*(r, q') dn_α̅(r, q')] dq',
where ρ_α(r, q) is the density matrix of the neutrinos at position r with momentum q and initial flavor α, defined as ρ_α(r, q)=ψ_α(r, q)ψ^†_α(r, q), with ψ_α(r, q) being the corresponding normalized neutrino wave function, and dn_α(r, q) is the differential neutrino number density <cit.>, which is the differential contribution to the neutrino number density at r from those neutrinos with initial flavor α and energy |q| propagating in the directions between q̂ and q̂+dq̂, per unit energy (the hats on q and q' indicate unit vectors). Note that here we have replaced the local proper time τ with the radial coordinate r to denote the location along a given neutrino trajectory.

In order to use Eq. (<ref>) we have to first specify the expression for dn_α(r, q). This requires relating the neutrino momenta q at radial coordinate r back to their values q_0 at the neutrinosphere where they are initialized. After this relationship is obtained we can substitute dn_α(r, q) with dn_α(R_ν, q_0) and calculate H_SI by integrating over the neutrino momentum distributions at the neutrinosphere. While the magnitude of q is related to the magnitude of q_0 via an energy redshift, q=q_0√(B(R_ν)/B(r)), relating q̂ to q̂_0 means finding the relation between the emission angle θ_R and the angle θ shown in Fig. <ref>, since the neutrino trajectory is planar.

In flat spacetime, the relation between θ_R and θ can be found through geometric arguments <cit.>. In a curved spacetime, however, θ and θ_R might be expected to be related only after solving for the neutrino trajectory. Fortunately, for the Schwarzschild metric the relation between θ and θ_R can also be found simply by making use of the fact that the impact parameter b is a conserved quantity along each neutrino trajectory <cit.>. It makes no difference whether the impact parameter is evaluated at R_ν or at r, so b(r)=b(R_ν). Using this conserved quantity we must have
r sinθ/√(1 - r_s/r) = R_ν sinθ_R/√(1 - r_s/R_ν),
from which we find
cosθ = √(1 - (R_ν sinθ_R/r)^2 (1 - r_s/r)/(1 - r_s/R_ν)).
In Fig. <ref> we plot the angle θ as a function of emission angle θ_R for three different ratios of r_s to R_ν at r=10 R_ν.
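Eq. (<ref>) is elementary to evaluate; the following sketch generates the curves of such a figure, with the ratios of r_s to R_ν chosen purely for illustration.

import numpy as np

def cos_theta(theta_R, r, R_nu, rs):
    # Eq. (<ref>): local propagation angle at radius r for emission angle theta_R.
    s2 = (R_nu * np.sin(theta_R) / r)**2 * (1.0 - rs/r) / (1.0 - rs/R_nu)
    return np.sqrt(1.0 - s2)

theta_R = np.linspace(0.0, np.pi/2, 91)
for ratio in (0.0, 0.3, 0.6):                  # r_s / R_nu (illustrative values)
    theta = np.arccos(cos_theta(theta_R, r=10.0, R_nu=1.0, rs=ratio))
    # For ratio > 0, theta exceeds the flat-space (ratio = 0) value at every theta_R.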
The figure shows that for each particular emission angle θ_R, the trajectory bending effect always makes the angle θ larger than without GR. In the bulb model (1 - q̂·q̂') is found to be equivalent to (1 - cosθ cosθ') after averaging over the angles in the plane perpendicular to the radial direction. Thus the correction to cosθ by GR increases the magnitude of H_SI by increasing the value of 1 - q̂·q̂' for every neutrino. Now that we have the expression relating θ to θ_R, we can write the expression for the differential number density, after taking time dilation into account, as
dn_α(r, q) ≡ dn_α(r,q,θ) ≡ dn_α(R_ν,q_0,θ_R) = 1/(2π r^2 √(B(r))) [L_α,∞/⟨E_α,∞⟩] f_α(q_0) (cosθ_R/cosθ)(dq_0/dq) dcosθ_R,
where f_α(q_0) is the normalized distribution function for flavor α with momentum q_0 that redshifts to q at r, L_α,∞ is the luminosity of flavor α at infinity if no flavor transformation had occurred, and similarly ⟨E_α,∞⟩ is the mean energy of neutrinos of flavor α at infinity, again assuming no flavor transformation had occurred. The expression for the antineutrinos is similar. The derivation of Eq. (<ref>) can be found in the Appendix.

The density matrix ρ_α(r, q) for neutrinos at r with momentum q is related to the corresponding density matrix at the neutrinosphere via
ρ_α(r, q) = S(r, q;R_ν, q_0) ρ_α(R_ν,q_0) S^†(r, q;R_ν, q_0),
and the same holds for the antineutrinos using the evolution matrix S̅(r, q;R_ν, q_0). Combining these equations together, we obtain the GR corrected expression for the neutrino self-interaction in curved spacetime:
H_SI(r,q) = √(2)G_F/(2π r^2 √(B(r))) ∑_α= e,μ,τ ∫ (1 - cosθ cosθ') {[L_α,∞/⟨E_α,∞⟩] ρ_α(r, q') f_α(q'_0) - [L_α̅,∞/⟨E_α̅,∞⟩] ρ^⋆_α̅(r, q') f_α̅(q'_0)} (cosθ_R'/cosθ') dcosθ_R' dq'_0.
When we take the weak gravity limit, r_s ≪ r and r_s ≪ R_ν, we find this expression reduces to the same equation found in Duan et al. <cit.>. This equation includes two GR effects: trajectory bending and time dilation (the energy redshift of the luminosity cancels with the energy redshift of the mean energy).

In order to appreciate how significant the GR effects can be for the self-interaction Hamiltonian, we show in Fig. <ref> the neutrino trajectories which converge at a certain point above the surface of the central proto-neutron star. From the perspective of an observer at this point, the neutrinos seem to be coming from an expanded source whose radius is increased by a factor of √((1-r_s/r)/(1-r_s/R_ν)), as can be seen from Eq. (<ref>). As noted earlier, the effect of trajectory bending causes the neutrino trajectories to cross at larger angles than in the case without GR. Time dilation also enhances the self-interaction because it leads to a larger effective neutrino flow rate. Close to the neutrinosphere time dilation is the larger effect because the effect of trajectory bending is small. At larger radii the situation is reversed, with trajectory bending more important than time dilation. To quantify the magnitude of the GR effects upon the self-interaction, we show in the top panel of Fig. <ref> the enhancement of the self-interaction due to GR, which is defined to be the ratio of the magnitude of the self-interaction potential with GR effects to that without, as a function of the coordinate r and assuming no flavor oscillation occurs, for different values of r_s/R_ν.
The striking feature of the GR effects is that, even though the spacetime curvature is only pronounced near the proto-neutron star, the enhancement of the neutrino self-coupling turns out to be a long-range effect that is asymptotic to a value greater than unity which depends upon the ratio r_s/R_ν. Since the influence of GR on neutrino flavor transformation is not just a local effect, it can have repercussions upon processes at larger radii such as neutrino heating in the accretion phase and nucleosynthesis in the cooling phase.

As we have seen, the magnitude of the GR effect is governed by the ratio of the radius of the neutrinosphere to the Schwarzschild radius of the proto-neutron star, which itself is proportional to the mass of the proto-neutron star. This suggests we define a neutrino `compactness' - similar to the definition of compactness found in O'Connor & Ott <cit.> - as
ξ_ν = (M/M_⊙)/(R_ν/10 km) = (r_s/2.95 km)/(R_ν/10 km) = 3.39 r_s/R_ν.
In the bottom panel of Fig. <ref> we plot the enhancement factor as a function of compactness at different distances from the center of the proto-neutron star. For a very compact neutrino source we find the enhancement of the self-interaction can be as large as a factor of three if ξ_ν ∼ 2.26, which corresponds to r_s/R_ν=2/3. We shall explain the significance of this compactness in Section <ref>. The blue line in this figure shows the enhancement factor at the neutrinosphere, where the trajectory bending effect is minimal. Here the enhancement is purely due to time dilation.

§ NUMERICAL CALCULATIONS
With the formulation complete, and with the insights gained from the computation of the enhancement as a function of compactness, we proceed to compute numerically the multi-angle neutrino flavor evolution for two representative cases. These are a density profile, neutrino spectra and compactness typical of the accretion phase of a supernova, and a set representative of the cooling phase. The neutrino mixing angles and square mass differences we adopt are m^2_2-m^2_1=7.5×10^-5 eV^2, m^2_3-m^2_2=-2.32×10^-3 eV^2, θ_12=33.9^∘, θ_13=9^∘ and θ_23=45^∘. The CP phase δ_CP is set to zero. We do not consider a normal mass ordering on the basis of the results by Chakraborty et al. <cit.> and Wu et al. <cit.>.

§.§ Application to SN accretion phase
For the accretion phase we use the density profile at t_pb=0.3 s postbounce from Fischer et al. <cit.> for the 10.8 M_⊙ progenitor. As previously stated, this simulation includes GR effects in both the hydrodynamics and the evolution of the neutrino phase space density (see Liebendörfer et al. <cit.> for further details about the code). The density profile at this snapshot time is shown by the red line in Fig. <ref>. We set the neutrinosphere radius to be R_ν=25 km, which corresponds to the minimum of the electron fraction for this model at this time. This working definition for the neutrinosphere radius comes from noting the coincidence of the electron fraction minimum and the neutrinosphere radii shown in figures (7) and (8) in Fischer et al., and produces a curve which is similar to figure (15) found in their paper. We note that the value of R_ν we adopt is different from the value estimated by others, e.g. <cit.>, who tend to use relatively larger values for R_ν during the accretion phase. From the simulation we find the mass enclosed within the R_ν=25 km radius is M = 1.33 M_⊙, giving a compactness of ξ_ν = 0.53.
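For orientation, the two limits of the enhancement factor have simple closed forms: at r = R_ν only time dilation acts, giving 1/√(B(R_ν)), while the asymptotic value set by trajectory bending is, in our reading of the scaling above, 1/B(R_ν), which reproduces the factor of three at r_s/R_ν = 2/3. A short sketch evaluating these for the accretion-phase values just quoted:

import numpy as np

def enhancement_limits(M_solar, R_nu_km):
    rs = 2.95 * M_solar                        # Schwarzschild radius in km
    B = 1.0 - rs / R_nu_km
    xi = M_solar / (R_nu_km / 10.0)            # neutrino compactness, Eq. (<ref>)
    return xi, 1.0 / np.sqrt(B), 1.0 / B       # (xi, at r = R_nu, as r -> infinity)

print(enhancement_limits(1.33, 25.0))          # (0.53, ~1.09, ~1.19)
print(enhancement_limits(2.26, 10.0))          # critical case: asymptotic factor of 3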
The neutrino luminosities and mean energies we use are also taken from the same simulation and are listed in Table (<ref>). To save computational resources we use a source distribution f_α(q_0) which is a delta-function at a single energy, taken to be 15 MeV. Single-energy calculations were also undertaken by Chakraborty et al. <cit.> when they studied the self-interaction effects during the accretion phase. As previously stated, the angular distribution is assumed to be half-isotropic, which is the same distribution used in Duan et al. <cit.>.

Our results are shown in Fig. <ref>, which is a plot of the electron flavor survival probability averaged over all angular bins as a function of distance. In the figure we also include three vertical dashed lines to indicate the start of the bipolar oscillation region, the position of the shockwave, and the end of the bipolar oscillation region. The predictions for the beginning and end of the bipolar oscillation region come from equations given in Chakraborty et al. <cit.>. The change in the angle-averaged survival probability P_ee which occurs at r ∼ 475 km is simply decoherence <cit.>. Comparing the results with and without GR effects, we see the decoherence is slightly delayed when GR is included, but the difference is only of order ∼ 20 km and the final result is identical to the case without GR. Thus it appears GR has little effect upon flavor transformation during the accretion phase, and what little change occurs is in a region where it has little consequence.

§.§ Application to SN cooling phase
As the proto-neutron star cools it contracts, which increases the compactness. The sensitivity of the neutrino self-interaction to the compactness means we might expect a larger effect from GR during the cooling phase. To test whether this is the case we use the density profile at t_pb=2.8 s postbounce from the Fischer et al. <cit.> simulation for the same 10.8 M_⊙ progenitor, which is shown by the blue line in Fig. <ref>. We set the neutrinosphere radius to be R_ν=17 km which, again, is close to the minimum of the electron fraction for this model at this time and consistent with figure (15) from Fischer et al. The mass enclosed within this radius is M ≈ 1.44 M_⊙, giving a compactness of ξ_ν = 0.85.

For this cooling epoch calculation we use multi-energy as well as multi-angle. The neutrino energy range is chosen to be E_∞ = 1 MeV to E_∞ = 60 MeV, and is divided into 300 equally spaced energy bins. To generate the neutrino spectra for flavor α at the neutrinosphere we use the luminosities, mean energies and rms energies at this snapshot of the simulation - listed in Table (<ref>) - and insert them into the pinched thermal spectrum of Keil, Raffelt and Janka <cit.>, which has the form
f_α(q_0) = (A_α + 1)^(A_α + 1) q_0^A_α / [⟨E_α,R_ν⟩^(A_α + 1) Γ(A_α + 1)] exp(-(A_α + 1) q_0/⟨E_α,R_ν⟩),
with ⟨E_α,R_ν⟩ = ⟨E_α,∞⟩/√(B(R_ν)), and the pinch parameter A_α for flavor α given by
A_α = (2⟨E_α,∞⟩^2 - ⟨E^2_α,∞⟩)/(⟨E^2_α,∞⟩ - ⟨E_α,∞⟩^2).
The result of this calculation is shown in Fig. <ref>, where we plot the electron neutrino flavor survival probability averaged over all angular bins and energy bins (using the emitted neutrino spectrum as the weighting function) as a function of distance. At this epoch self-interaction effects occur much closer to the proto-neutron star and the effect of GR is more important.
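As an aside, the pinched spectra of Eq. (<ref>) used for these runs are straightforward to tabulate. In this sketch the mean and rms energies are placeholders rather than the tabulated values, and energies are measured at the neutrinosphere.

import numpy as np
from scipy.special import gammaln

def pinched_spectrum(E, E_mean, E_rms):
    # Eq. (<ref>), evaluated in log space for numerical stability.
    A = (2.0 * E_mean**2 - E_rms**2) / (E_rms**2 - E_mean**2)
    logf = ((A + 1.0) * np.log(A + 1.0) + A * np.log(E)
            - (A + 1.0) * np.log(E_mean) - gammaln(A + 1.0)
            - (A + 1.0) * E / E_mean)
    return np.exp(logf)

E = np.linspace(1.0, 60.0, 300)                # MeV; 300 bins as in the text
f = pinched_spectrum(E, 11.0, 13.0)            # placeholder <E> and sqrt(<E^2>)
print(f.sum() * (E[1] - E[0]))                 # ~1: the spectrum is normalized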
The net result of adding GR is to delay the onset of bipolar oscillations by around 25 km, and once more we find the probabilities at large radii are almost identical to those without GR. But while this shift in the onset of bipolar oscillations may seem small, we note the neutrino flavor evolution in the region from 50 km ≲ r ≲ 500 km was found to be crucial for determining the nucleosynthesis yields in the calculations by Duan et al. <cit.> and Wu et al. <cit.>, so even a relatively small delay of flavor transformation caused by GR might have a consequence.

§ THE GR NEUTRINO HALO
So far we have considered only cases where all neutrinos propagate to r→∞. However, if the compactness of the source becomes too large, the neutrinosphere becomes smaller than the “photon sphere”, whose radius is 3r_s/2. When this occurs there is a critical emission angle for neutrinos beyond which they cannot escape to infinity. Following the argument in Hartle <cit.>, one can obtain the condition for the neutrinos to escape to infinity:
(2/3√(3)) (R_ν/r_s) sinθ_R/√(1 - r_s/R_ν) < 1.
We show three example neutrino trajectories for the case where R_ν/r_s < 3/2 in Fig. <ref>. Trajectories 1 and 2 are open, and a neutrino emitted along these trajectories will propagate to infinity; the trajectories of neutrinos emitted at sufficiently large angles - such as trajectory 3 - will turn around and return to the proto-neutron star. Note that the farthest place where a neutrino can turn around is the photon sphere. The consequences of such trajectories are included in simulations which include GR. In principle there is a substantial change to the flavor evolution calculations when neutrinos start to follow trajectories such as trajectory 3 in Fig. <ref>, because they lead to the formation of a neutrino `halo' around the proto-neutron star, similar to the neutrino halos produced by scattering on matter <cit.>.

From Eq. (<ref>) we can evaluate the critical angle as a function of R_ν/r_s. The relation between the critical angle and R_ν/r_s is shown in Fig. <ref>. If R_ν/r_s>3/2, clearly neutrinos with all emission angles can escape and no neutrino halo is formed. We define a critical compactness ξ_ν⋆ to be the case where R_ν/r_s = 3/2 and find it equal to ξ_ν⋆ = 2.26 - the value discussed earlier. The compactness of the sources we have considered for our previous numerical calculations did not approach this value because the mass of the proto-neutron star is not sufficiently large and the neutrinospheres lay beyond the photon sphere. To reach the critical compactness for formation of the halo we require a more massive proto-neutron star with a smaller neutrinosphere. Whether a proto-neutron star surpasses the critical compactness while it is still cooling via neutrino emission will depend upon the Equation of State of dense matter and the neutrino opacity <cit.>. Note that from causality, the radius of a neutron star is required to be greater than R_NS ≳ 2.823 M <cit.> which, if we set R_ν = R_NS, corresponds to a compactness of ξ_ν = 2.4, beyond the critical value ξ_ν⋆. A halo will certainly form immediately preceding the collapse of a proto-neutron star to a black hole. The formation of a neutrino halo has consequences for the cooling of the proto-neutron star as well as the flavor transformation due to neutrino self-interaction.
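The critical angle implied by Eq. (<ref>) is easily evaluated; a short sketch (the second example uses r_s/R_ν = 2/2.823 from the causality bound quoted above):

import numpy as np

def critical_angle(R_nu, rs):
    # Largest emission angle that escapes, from Eq. (<ref>);
    # returns pi/2 when R_nu >= 1.5 rs and no halo forms.
    if R_nu >= 1.5 * rs:
        return np.pi / 2
    s = (3.0 * np.sqrt(3.0) / 2.0) * (rs / R_nu) * np.sqrt(1.0 - rs / R_nu)
    return np.arcsin(s)

print(np.degrees(critical_angle(1.5, 1.0)))    # exactly 90: the marginal case
print(np.degrees(critical_angle(2.823, 2.0)))  # ~84 degrees at the causality bound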
One can find a presentation of the changes that occur to the emitted neutrino spectra as the mass of the proto-neutron star approaches its maximum mass in Liebendörfer et al. <cit.>. In their simulations, as the maximum mass is approached (but before the black hole forms), the luminosity of the μ and τ flavors increases due to contraction of the proto-neutron star, while the luminosities of the electron neutrinos and electron antineutrinos drop. The mean energies of all flavors increase. When a halo forms, in principle, one would have to completely change how the flavor calculations are undertaken in the halo region - the zone between the neutrinosphere and the photon sphere. In such cases the flavor evolution up to the photon sphere cannot be treated as an initial value problem - as we have done in this paper - because the flavor evolution up to the photon sphere of outward moving neutrinos is affected by neutrinos that were also emitted in an outward direction but which turned around and are now moving inwards. Thus in the halo region a paradigm beyond the bulb model would be needed to correctly deal with the flavor evolution. Prevailing understanding from the extant literature would indicate that in the case of three active flavors of neutrino emitted spherically symmetrically, one should not expect flavor transformation within the halo: if this is true, then the only effect of the formation of a halo would be to alter the luminosity and angular distribution of the neutrinos beyond the photon sphere (which now becomes the effective neutrinosphere). But in other circumstances - such as calculations that include sterile neutrinos <cit.> or calculations with non-standard neutrino interactions <cit.> - flavor transformation can occur much closer to the neutrinosphere, in which case the formation of a halo may have greater consequences.

§ SUMMARY AND CONCLUSIONS
In this paper we have considered the effects of General Relativity upon neutrino flavor transformation in a core-collapse supernova. We adopted a Schwarzschild metric to describe the spacetime and included three GR effects - trajectory bending, time dilation, and energy redshift. Of the three, time dilation is the major effect close to the proto-neutron star, while trajectory bending dominates at larger radii. The size of the GR effects was found to scale with a single parameter, the compactness of the source: the ratio of the Schwarzschild radius to the neutrinosphere radius. For large compactness, with R_ν close to the radius of the photon sphere, the neutrino self-interaction Hamiltonian can be up to approximately three times larger than without GR. We calculated the flavor evolution in two representative cases to determine whether the GR effects led to significant differences compared to calculations without GR. These cases were a density profile and neutrino spectra typical of the accretion phase, and a density profile and neutrino spectra typical of the cooling phase. In both cases we found the effect of GR was to delay the onset of flavor transformation, but for the accretion phase the flavor transformation occurred due to decoherence at large radii, where the change would have little consequence. In contrast, the change to the onset of bipolar oscillations during the cooling phase may be more important because it is much closer to the proto-neutron star and may impact the nucleosynthesis in the neutrino driven wind.
Finally, we showed that GR effects can produce a halo of neutrinos surrounding the proto-neutron star for very compact neutrino sources. If a halo forms then, in principle, one would have to treat flavor transformation in the halo region using a different technique than the usual approach of treating it as an initial-value problem.

This research was supported at NC State University by DOE award DE-FG02-10ER41577.

§ THE GR CORRECTED EXPRESSION FOR THE NEUTRINO SELF-INTERACTION
In order to get the correct expression for dn_α(r,q,θ), we start from the conservation of the neutrino flow through an enclosing spherical surface, after taking time dilation into account but ignoring flavor transformation. This allows us to write
r^2 √(B(r)) F_α(r,q) dq = R_ν^2 √(B(R_ν)) F_α(R_ν,q_0) dq_0,
where F_α(r,q) is the flux of neutrinos with energy q at r per unit energy that were emitted with energy q_0 at the neutrinosphere. Integrated over all momenta, both sides of this equation must evaluate to (1/4π) L_α,∞/⟨E_α,∞⟩, where L_α,∞ is the luminosity of flavor α at infinity assuming no oscillations, and similarly ⟨E_α,∞⟩ is the mean energy at infinity, again assuming no oscillations. At the neutrinosphere R_ν we have
F_α(R_ν,q_0) = ∫_0^1 2π j_α(q_0,θ'_R) cosθ'_R dcosθ'_R,
where j_α(q_0,θ_R) is the emitted intensity of flavor α with energy q_0 at angle θ_R with respect to the radial direction. At radial coordinate r the flux is
F_α(r,q) = ∫_0^θ_max cosθ' dn_α(r,q,θ'),
where θ_max is the angle with respect to the radial direction of neutrinos that were emitted at the neutrinosphere with angle θ_R = π/2. Combining Eqs. (<ref>), (<ref>) and (<ref>) we obtain the result that
dn_α(r,q,θ) = 2π R_ν^2/r^2 √(B(R_ν)/B(r)) j_α(q_0,θ_R) (cosθ_R/cosθ)(dq_0/dq) dcosθ_R.
In the case of half-isotropic emission the intensity j_α is independent of θ_R and can be written as
j_α(q_0) = 1/(4π^2 R_ν^2 √(B(R_ν))) [L_α,∞/⟨E_α,∞⟩] f_α(q_0),
where f_α(q_0) is the normalized spectral distribution for flavor α at R_ν. The final expression for dn_α(r,q,θ) is thus
dn_α(r,q,θ) = 1/(2π r^2 √(B(r))) [L_α,∞/⟨E_α,∞⟩] f_α(q_0) (cosθ_R/cosθ)(dq_0/dq) dcosθ_R.
http://arxiv.org/abs/1705.09723v3
{ "authors": [ "Yue Yang", "James P. Kneller" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20170526210913", "title": "GR Effects in Supernova Neutrino Flavor Transformation" }
Covering complete graphs by monochromatically bounded sets
Luka Milićević
==========================================================

Given a k-colouring of the edges of the complete graph K_n, are there k-1 monochromatic components that cover its vertices? This important special case of the well-known Lovász-Ryser conjecture is still open. In this paper we consider a strengthening of this question, where we insist that the covering sets are not merely connected but have bounded diameter. In particular, we prove that for any colouring of E(K_n) with 4 colours, there is a choice of sets A_1, A_2, A_3 that cover all vertices, and colours c_1, c_2, c_3, such that for each i = 1,2,3 the monochromatic subgraph induced by the set A_i and the colour c_i has diameter at most 160.

§ INTRODUCTION
Given a graph G whose edges are coloured with a colouring χ: E(G) → C (where adjacent edges are allowed to use the same colour), given a set of vertices A, and a colour c ∈ C, we write G[A, c] for the subgraph induced by A and the colour c, namely the graph on the vertex set A with the edges {xy : x,y ∈ A, χ(xy) = c}. In particular, when A = V(G), we write G[c] instead of G[V(G), c]. Finally, we also use the usual notion of the induced subgraph G[A], which is the graph on the vertex set A with edges {xy : x,y ∈ A, xy ∈ E(G)}. We usually write [n]={1,2,…,n} for the vertex set of K_n. Our starting point is the following conjecture of Gyárfás.

(<cit.>, <cit.>) Let k be fixed. Given any colouring of the edges of K_n in k colours, we can find sets A_1, A_2, …, A_k-1 whose union is [n], and colours c_1, c_2, …, c_k-1, such that K_n[A_i, c_i] is connected for each i ∈ [k-1].

This is an important special case of the well-known Lovász-Ryser conjecture, which we now state.

(Lovász-Ryser conjecture. <cit.>, <cit.>) Let G be a graph whose maximum independent set has size α(G). Then, whenever E(G) is k-coloured, we can cover G by at most (k-1)α(G) monochromatic components.

Conjectures <ref> and <ref> have attracted a great deal of attention. When it comes to the Lovász-Ryser conjecture, we should note the result of Aharoni (<cit.>), who proved the case of k = 3. For k ≥ 4, the conjecture is still open. The special case of complete graphs was proved by Gyárfás (<cit.>) for k ≤ 4, and by Tuza (<cit.>) for k=5. For k > 5, the conjecture is open.

Let us also mention some results similar in spirit to Conjecture <ref>. In <cit.>, inspired by questions of Gyárfás (<cit.>), Ruszinkó showed that every k-colouring of the edges of K_n has a monochromatic component of order at least n/(k-1) and of diameter at most 5. This was improved by Letzter (<cit.>), who showed that in fact there are monochromatic triple stars of order at least n/(k-1). For more results and questions along these lines, we refer the reader to the surveys of Gyárfás (<cit.>, <cit.>). In a completely different direction, relating to contraction mappings on metric spaces, the following theorem is proved in <cit.>. (We mention in passing that the current paper is self-contained, and in particular no knowledge of <cit.> is assumed.)

There is an absolute constant C > 0 such that the following holds. If 0 < λ < C, and if {f,g,h} are commuting continuous maps on a complete metric space (X,d) with the property that for any two distinct points x,y ∈ X we have min{d(f(x), f(y)), d(g(x), g(y)), d(h(x), h(y))} ≤ λ d(x,y), then the maps f,g,h have a common fixed point. In fact, we may take C = 10^-23.

Some of the ingredients in the proof of Theorem <ref> were the following simple lemmas.
Note that Lemma <ref> is in fact a classical observation due to Erdős and Rado.

Suppose that the edges of K_n are coloured in two colours. Then we may find a colour c such that K_n[c] is connected and of diameter at most 3.

Suppose that the edges of K_n are coloured in three colours. Then we may find colours c_1, c_2 (not necessarily distinct) and sets A_1, A_2 with A_1 ∪ A_2 = [n] such that K_n[A_1, c_1], K_n[A_2, c_2] are each connected and of diameter at most 8.

In <cit.>, a common generalization of these statements and a strengthening of Conjecture <ref> was conjectured.

For every k, there is an absolute constant C_k such that the following holds. Given any colouring of the edges of K_n in k colours, we can find sets A_1, A_2, …, A_k-1 whose union is [n], and colours c_1, c_2, …, c_k-1 such that K_n[A_i, c_i] is connected and of diameter at most C_k, for each i ∈ [k-1].

The main result of this paper is

Conjecture <ref> holds for 4 colours, and one may take C_4 = 160.

§.§ An outline of the proof

We begin the proof by establishing the weaker Conjecture <ref> for the case of 4 colours. Although this was proved by Gyárfás in <cit.>, the reasons for giving a proof here are twofold. Firstly, we actually give a different reformulation of Conjecture <ref> that has a more geometric flavour. The proof given here and the reformulation we consider emphasize the importance to Conjecture <ref> of the graph G_k, defined as a direct product of k copies of a complete graph. Another reason for giving this proof is to make the paper self-contained.

We also need some auxiliary results about colourings with 2 or 3 colours, like Lemmas <ref> and <ref> mentioned above. In particular, we generalize the case of 2 colours to complete multipartite graphs. Another auxiliary result we use is the fact that G_k essentially cannot have large, very sparse induced subgraphs.

The main tool in our proof is the notion of c_3, c_4-layer mappings, where c_3, c_4 are two colours. For P ⊂ℕ_0^2, this is a mapping L : P →𝒫([n]) (where [n] is the vertex set of our graph) with the property that
* the sets L(A) partition [n] as A ranges over P,
* and for A, B ∈ P with |A_1 - B_1|, |A_2 - B_2| ≥ 2, we have all edges between L(A) and L(B) coloured using only c_3, c_4.
This is a generalization of the idea that if we fix a vertex x_0 and we assign A^(x) = (d_c_1(x_0, x), d_c_2(x_0, x)) ∈ℕ_0^2 to each vertex x, where d_c_1, d_c_2 are distances in colours c_1, c_2 (which are the remaining two colours), then if A^(x), A^(y) satisfy |A^(x)_1 - A^(y)_1|, |A^(x)_2 - A^(y)_2| ≥ 2, the edge xy cannot be coloured by c_1 or c_2.

Given a subset P' of the domain P, we say that it is k-distant if for all distinct A, B ∈ P' we have |A_1 - B_1|, |A_2 - B_2| ≥ k. Once we have all this terminology set up, we begin building up structure in our graph, essentially as follows:

Step 1. We prove that if a c_3, c_4-layer mapping has a 3-distant set of size at least 4, then Theorem <ref> holds.
Step 2. We continue the analysis of distant sets, and prove essentially that if a c_3, c_4-layer mapping has a 6-distant set of size at least 3, then Theorem <ref> holds.
Step 3. We prove Theorem <ref> when every colour induces a connected subgraph.
Step 4. We prove Theorem <ref> when any two monochromatic components of different colours intersect.
Step 5. We put everything together to finish the proof.

Organization of the paper. In the next subsection, we briefly discuss a reformulation of Conjecture <ref>.
In Section 2, we collect some auxiliary results, including results on 2-colourings of the edges of complete multipartite graphs, the results on sparse subgraphs of G_k, and results on independent sets in G_3. In Section 3, we prove Conjecture <ref> for 4 colours, reproving a result of Gyárfás. The proof of Theorem <ref> is given in Section 4, with subsections splitting the proof into the steps described above. Finally, we end the paper with some concluding remarks in Section 5.

§.§ Another version of Conjecture <ref>

Let l be an integer, and define the graph G_l with vertex set ℕ_0^l by putting an edge between any two sequences that differ at every coordinate. Equivalently, G_l is the direct product of l copies of K_ℕ_0 (the complete graph on the vertex set ℕ_0). We formulate the following conjecture.

Given a finite set of vertices X ⊂ℕ_0^l, we can find l sets X_1, …, X_l ⊆ X that cover X such that each X_i is either contained in a hyperplane of the form {x_i = c} or G_l[X_i] is connected.

This conjecture is actually equivalent to Conjecture <ref>.

Conjectures <ref> and <ref> are equivalent for k = l+1.

Conjecture <ref> implies Conjecture <ref>. Let X ⊂ℕ_0^l be a finite set. Let n = |X| and define an (l+1)-colouring χ : E(K_n) → [l+1] by setting χ(xy) = i, where i is the smallest coordinate index such that x_i = y_i; otherwise, when x and y differ in all coordinates, set χ(xy) = l+1. If Conjecture <ref> holds, we may find sets A_1, A_2, …, A_l that cover [n], and colours c_1, c_2, …, c_l, such that K_n[A_i, c_i] are all connected. Fix now any i, and let B ⊂ X be the set of vertices corresponding to A_i. If c_i ≤ l, then for any x,y ∈ B there is a sequence of vertices z_1, z_2, …, z_m ∈ B such that x_c_i = (z_1)_c_i = (z_2)_c_i = … = (z_m)_c_i = y_c_i, so x_c_i = y_c_i. Hence, B is a subset of the plane {x_c_i = v} for some value v. Otherwise, if c_i = l+1, that means that the edges of K_n[A_i, c_i] correspond to edges of G_l[B], so G_l[B] is connected, as desired.

Conjecture <ref> implies Conjecture <ref>. Let χ : E(K_n) → [k] be any k-colouring of the edges of K_n. For every colour c, look at the components C^(c)_1, …, C^(c)_n_c of K_n[c]. For each choice of x_1, x_2, …, x_k-1 with x_c ∈ [n_c] for c ∈ [k-1], we define C_x = C_x_1, x_2, …, x_k-1 = ∩_c ∈ [k-1] C^(c)_x_c, which is the intersection of monochromatic components, one for each colour except k. Let X ⊂ℕ^k-1 be the set of all (k-1)-tuples x for which C_x is non-empty. If Conjecture <ref> holds, then we can find A_1, A_2, …, A_k-1 that cover X such that each A_i is either contained in a hyperplane, or induces a connected subgraph of G_k-1. If A_i ⊂{x : x_c = v}, then the corresponding intersections C_x for x ∈ A_i are all subsets of C^(c)_v. On the other hand, if G_k-1[A_i] is connected, then taking any adjacent x,y ∈ G_k-1[A_i], we have that x_c ≠ y_c for all c ∈ [k-1]. Hence all the edges between C_x and C_y are coloured by k. Hence, all the sets C_x for x ∈ A_i are subsets of the same component of K_n[k]. This completes the proof of the proposition.
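The reduction in the second implication is completely explicit, and it is sometimes convenient to compute the point set X ⊂ ℕ^k-1 directly. The following Python sketch (ours, added purely for illustration; the function and variable names, and the convention of keying the colouring by frozensets, are our own assumptions) computes the monochromatic components in the first k-1 colours and records, for each vertex, the tuple of indices of the components containing it.

```python
from itertools import combinations

def components(n, edges):
    """Components of the graph ([n], edges); returns a dict vertex -> component index."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    comp, next_index = {}, 0
    for s in range(n):
        if s in comp:
            continue
        comp[s], stack = next_index, [s]
        while stack:
            for w in adj[stack.pop()]:
                if w not in comp:
                    comp[w] = next_index
                    stack.append(w)
        next_index += 1
    return comp

def colouring_to_point_set(n, k, colour):
    """colour maps frozenset({u, v}) to a colour in 1..k; returns the set X of
    (k-1)-tuples of component indices, one coordinate per colour 1..k-1."""
    comps = [components(n, [(u, v) for u, v in combinations(range(n), 2)
                            if colour[frozenset((u, v))] == c])
             for c in range(1, k)]
    return {tuple(cc[v] for cc in comps) for v in range(n)}
```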
§ AUXILIARY RESULTS

As suggested by its title, this section is devoted to deriving some auxiliary results. Firstly we extend Lemma <ref> to complete multipartite graphs. The case of bipartite graphs is slightly different from the general case of more than 2 parts, and is stated separately. We also introduce additional notation. Given a colour c and vertices x,y, we write d_c(x,y) for the distance between x and y in G[c]. If they are not in the same c-component, we write d_c(x,y) = ∞. In particular, d_c(x,y) < ∞ means that x, y are in the same component of G[c]. Further, we write B_c(x, r) for the c-ball of radius r around x, defined as B_c(x, r) = {y : d_c(x,y) ≤ r}, where c is a colour, x is a vertex, and r is a nonnegative integer. For any graph G, throughout the paper, the diameter of G, written diam G, is the supremum of all finite distances between two vertices of G. Thus, diam G = ∞ only happens when G has arbitrarily long induced paths (as we focus on finite graphs in this paper, this will not occur). For a colour c and a set of vertices A, the c-diameter of A, written diam_c A, is the diameter of G[A, c]. We use the standard notation for complete multipartite graphs, so K_n_1, n_2, …, n_r stands for the graph with r vertex classes, of sizes n_1, n_2, …, n_r, in which all edges between different classes are present.

Suppose that the edges of G = K_n_1, n_2 are coloured in two colours. Then one of the following holds:
* either there is a colour c such that G[c] is connected and of diameter at most 10, or
* there are partitions [n_1] = A_1 ∪ B_1 and [n_2] = A_2 ∪ B_2 such that all edges in A_1 × A_2 ∪ B_1 × B_2 are of one colour, and all the edges in A_1 × B_2 ∪ B_1 × A_2 are of the other colour.

Let χ be the given colouring. We start by observing the following. If there are two vertices v_1, v_2 such that for colour c_1 the inequality 6 ≤ d_c_1(v_1,v_2) < ∞ holds, then for every vertex u such that χ(uv_1) = c_1, we must also have d_c_2(u,v_1) ≤ 3, where c_2 ≠ c_1 is the other colour. Indeed, let v_1 = w_0, w_1, w_2, …, w_r = v_2 be a minimal c_1-path from v_1 to v_2. Hence r ≥ 6, the vertices w_i with the same parity of index belong to the same vertex class of G = K_n_1, n_2, and the edges v_1w_3 = w_0w_3, w_3w_6, w_6u ∈ E(G) are all of colour c_2 (otherwise, we get a contradiction to the fact that d_c_1(w_i, v_2) = r - i), implying that d_c_2(v_1, u) ≤ 3.

Now, suppose that a c_1-component C_1 has diameter at least 7, and pick vertices x_1, x_2 ∈ C_1 with d_c_1(x_1, x_2) = 7. The observation above tells us that if a vertex y is adjacent to x_1 and d_c_2(x_1,y) > 1, then χ(x_1y) = c_1, so d_c_2(x_1, y) ≤ 3. Hence, every vertex y adjacent to x_1 in G satisfies d_c_2(x_1, y) ≤ 3. Similarly, any vertex y adjacent to x_2 satisfies d_c_2(x_2, y) ≤ 3. But x_1, x_2 are in different vertex classes (as their c_1-distance is odd), so their neighbourhoods cover the whole vertex set, and x_1 x_2 is an edge as well, from which we conclude that G[c_2] is connected and of diameter at most 9. Thus, if any monochromatic component has diameter at least 7, the lemma follows, so assume that this does not occur.

Now we need to understand the monochromatic components. From the work above, it suffices to find monochromatic components of the desired structure; the diameter is then automatically bounded by 6. Suppose that there are at least 3 c_1-components, X_1 ∪ X_2, Y_1 ∪ Y_2, Z_1 ∪ Z_2, with X_1, Y_1, Z_1 subsets of one class of K_n_1, n_2 and X_2, Y_2, Z_2 subsets of the other. Let u, v ∈ X_1∪ Y_1∪ Z_1 be arbitrary vertices. Then we can find w ∈ X_2∪ Y_2∪ Z_2 in a different c_1-component from both u and v. Hence, χ(uw) = χ(wv) = c_2, so d_c_2(u,v) ≤ 2. Therefore, both vertex classes of G are c_2-connected and consequently the whole graph is c_2-connected.

Finally, assume that each colour has exactly 2 monochromatic components. Let [n_1] = A_1 ∪ B_1, [n_2] = A_2 ∪ B_2 be such that A_1 ∪ A_2, B_1 ∪ B_2 are the c_1-components. Hence, A_1 ∩ B_1 = A_2 ∩ B_2 = ∅, and all edges in A_1 × B_2 and B_1 × A_2 are of colour c_2.
Thus, the sets A_1 ∪ B_2 and B_1 ∪ A_2 are c_2-connected and cover the vertices of G, so they must be the 2 c_2-components. Thus, all edges in A_1× A_2 and B_1 × B_2 must be coloured by c_1, proving the lemma.

Let r ≥ 3, and suppose that G = K_n_1, n_2, …, n_r is a complete r-partite graph. Suppose that the edges of G are 2-coloured. Then there is a colour c such that G[c] is connected and of diameter at most C_r, where we can take C_3 = 20, and C_r = 60 for r > 3.

Assume first that r=3. Let A, B, C be the vertex classes. We shall use Lemma <ref> throughout this part of the proof, applying it to every pair of vertex classes. We distinguish three cases, motivated by the possible outcomes of Lemma <ref> (although not exactly these outcomes, but resembling them).

Observation. Suppose that D, E, F is a permutation of A, B, C and that D ∪ E is contained in a c_1-component of diameter at most N_1, while D ∪ F splits, for each colour, into two monochromatic components, all of diameter at most N_2. Then G[c_1] is connected and of diameter at most N_1 + 2N_2.

Case 1. Suppose that D, E, F is a permutation of A, B, C, and that Lemma <ref> gives different outcomes when applied to the pairs D, E and D, F. Then, by the Observation, there is a colour c such that G[c] is connected and of diameter at most 14. (We took N_1 = 10 and N_2 = 2.)

Case 2. Suppose that D, E, F is a permutation of A, B, C, and that Lemma <ref> gives a single monochromatic component for each of the pairs D, E and D, F. If we use the same colour c for both pairs, then G[c] is connected and of diameter at most 20. Otherwise, let D∪ E be c_1-connected, and let D ∪ F be c_2-connected, with c_1 ≠ c_2. Apply Lemma <ref> to E, F. If it results in a single monochromatic component, it must be of colour c_1 or c_2, so once again G[c] has diameter at most 20 for some c. Finally, if E ∪ F splits into two pairs of monochromatic components, then by the Observation G[c] has diameter at most 14, for some c.

Case 3. Lemma <ref> gives the second outcome for each pair of vertex classes. Look at the complete bipartite graphs G[A∪ B] and G[A∪ C]. Then we have partitions A = A_1 ∪ A_2 = A'_1 ∪ A'_2, B = B_1 ∪ B_2 and C = C_1 ∪ C_2 such that all edges in (A_1 × B_1)∪(A_2 × B_2) ∪ (A'_1 × C_1) ∪ (A'_2 × C_2) receive colour c_1, while the edges in (A_1 × B_2)∪(A_2 × B_1) ∪ (A'_1 × C_2) ∪ (A'_2 × C_1) take the other colour c_2. If {A_1, A_2}≠{A'_1, A'_2}, then we must have that some A_i intersects both A'_1, A'_2, or vice-versa. In particular, since any two vertices x, y in the same set among A_1, A_2, A'_1, A'_2 obey d_c_1(x,y) ≤ 2, this means that for any two vertices x,y ∈ A we have d_c_1(x,y) ≤ 6. Now, every point in B ∪ C is on c_1-distance at most 1 from a vertex in A, so G[c_1] is connected and of diameter at most 8. Hence, we may assume that A_1 ∪ A_2 and A'_1 ∪ A'_2 define the same partition of A, and similarly for B and C we get the same partition for both pairs of vertex classes involving each of B and C. Let A = A_1 ∪ A_2, B = B_1 ∪ B_2, C = C_1 ∪ C_2 be these partitions, so the colouring is constant on each product A_i × B_j, A_i × C_j, B_i × C_j, i,j ∈{1,2}. Renaming B_i, C_j, we may also assume that A_1 × B_1, A_2 × B_2, A_1 × C_1, A_2 × C_2 all receive colour c_1. Thus A_1 × B_2, A_2 × B_1, A_1× C_2, A_2 × C_1 all receive colour c_2. But looking at the colour c of B_1 × C_2, we see that G[c] is connected and of diameter at most 5. This finishes the proof of the case r=3, and we may take C_3 = 20.

Now suppose that r > 3. Let V_1, V_2, …, V_r be the vertex classes.
Fix the vertex class V_r, and look at the 2-colouring χ' of the edges of K_r-1 defined as follows: whenever i, j ∈ [r-1] are distinct, then applying the case r=3 of this lemma that we have just proved to the subgraph induced by V_i ∪ V_j ∪ V_r, we get a colour c such that G[V_i ∪ V_j ∪ V_r, c] has diameter at most 20; we set χ'(ij) = c. By Lemma <ref>, we have a colour c such that K_r-1[c] is connected and of diameter at most 3 for the colouring χ'. Returning to our original graph, we claim that G[c] has diameter at most 60. Suppose that x,y are any two vertices of G. If any of these points lies in V_r, or if they lie in the same V_i, then we can pick i, j such that x, y ∈ V_i ∪ V_j ∪ V_r and χ'(ij) = c. Hence, by the definition of χ', we actually have d_c(x,y) ≤ 20 in G. Now, assume that x,y lie in different vertex classes and outside of V_r. Let x ∈ V_i, y ∈ V_j. Under the colouring χ' of K_r-1 we have d_c(i,j) ≤ 3, so we have a sequence i_1 = i, i_2, …, i_s = j, with s ≤ 4, such that χ'(i_1 i_2) = … = χ'(i_s-1 i_s) = c. For each t between 1 and s, pick a representative x_t ∈ V_i_t, with x = x_1, y = x_s. Then d_c(x_t-1, x_t) ≤ 20, so d_c(x,y) = d_c(x_1, x_s) ≤ 60, as desired.

§.§ Induced subgraphs of G_l

Recall that G_l is the graph on ℕ_0^l, with edges between pairs of points all of whose coordinates differ. In this subsection we prove a few properties of such graphs, particularly focusing on G_3. We begin with a general statement, which will be reproved for specific cases with stronger conclusions.

If S is a set of vertices in G_l and the maximal degree of G_l[S] is at most d, then the number of non-isolated vertices of G_l[S] is at most O_l,d(1).

By Ramsey's theorem we have an N such that whenever E(K_N) is coloured using 2^l - 1 colours, there is a monochromatic K_l+1. Let S' be the set of non-isolated vertices in S. We show that |S'| < (d^2 + d +1)N. Suppose the contrary. Since the maximal degree is at most d, we have a subset S”⊂ S' of size |S”| ≥ N such that the sets {s}∪ N(s) are disjoint for all s ∈ S” (simply pick a maximal such subset; the second neighbourhoods of its elements must cover the whole of S'). In particular, S” is an independent set in G_l, so for every pair of vertices x, y ∈ S”, the set I(x,y) = {i ∈ [l] : x_i = y_i} is non-empty. Thus, I : E(K_S”) →𝒫([l])∖{∅} is a (2^l-1)-colouring of the edges of the complete graph K_S” on the vertex set S”. By Ramsey's theorem, there is a monochromatic clique on a subset T ⊂ S” of size at least l+1, whose edges are coloured by some set I_0 ≠∅. Take a vertex t ∈ T; since t is not isolated and the neighbourhoods of vertices in S” are disjoint, we can find x ∈ S' such that tx is an edge, but t'x is not an edge for any other t' ∈ T. Hence, x_i ≠ t_i for all i∈[l], and for distinct t', t”∈ T we have t'_i = t”_i if and only if i ∈ I_0. Thus, x_i ≠ t'_i for all t' ∈ T and i ∈ I_0. But x t' is not an edge for t' ∈ T ∖{t}, so we always have i ∈ [l] ∖ I_0 such that x_i = t'_i. But for each i ∈ [l] ∖ I_0, the values of t'_i are distinct for the different t' ∈ T. Hence, for each i, there is at most one vertex t' ∈ T∖{t} such that x_i = t'_i. Therefore |T| - 1 ≤ |[l] ∖ I_0| ≤ l-1, so |T| ≤ l, which is a contradiction.

We may somewhat improve on the bound in the proof of the lemma above by observing that for the colour I_0 we only need a clique of size l - |I_0| + 2. Thus, instead of the Ramsey number R(l+1, l+1, …, l+1) with 2^l-1 arguments, we could use R(l+2 - |I_1|, l+2 - |I_2|, …, l+2 - |I_2^l-1|), where I_1, …, I_2^l-1 are the non-empty subsets of [l].
But even for paths in G_3, which we shall use later, taking l = 3, d = 2, we get the final bound of 7 · R(2, 3, 3, 3, 4, 4, 4), where the factor 7 comes from the d^2 + d + 1 factor we lose when moving from S' to S”. We now improve this bound.

If S is a set of vertices of G_3 such that G_3[S] is a path, then |S| ≤ 30.

Let S = {s_1, s_2, …, s_r} be such that s_1, s_2, …, s_r is an induced path in G_3, so the only edges are s_i s_i+1.

Case 1. For all i ∈{4, 5, …, 10}, s_i coincides with one of s_1 or s_2 in at least two coordinates.

Since s_1 s_2 is an edge, s_1 and s_2 have all three coordinates different. Thus, for i ∈{4,5,…, 10}, we have (s_i)_c ∈{(s_1)_c, (s_2)_c} for all coordinates c. (Indeed, if, say, s_i coincides with s_1 in two coordinates, then, being non-adjacent to s_2, it must also share a coordinate with s_2, and this can only be the remaining coordinate.) Hence, there are only at most 6 possible choices of s_i (as s_i ≠ s_1, s_2), so r ≤ 9.

Case 2. There is i_0 ∈{4,5, …, 10} with at most one common coordinate with each of s_1, s_2.

Since s_1 s_i_0, s_2 s_i_0 are not edges, w.l.o.g. we have s_1 = (x_1, x_2, x_3), s_2 = (y_1, y_2, y_3), s_i_0 = (x_1, y_2, z_3), where x_i ≠ y_i, z_3 ∉{x_3, y_3}. Consider any point s_j, for j≥ i_0 + 2. It is not adjacent to any of s_1, s_2, s_i_0. If (s_j)_1 = x_1 and (s_j)_2 ≠ y_2, then (s_j)_3 = y_3. Similarly, if (s_j)_1 ≠ x_1 and (s_j)_2 = y_2, then (s_j)_3 = x_3. Also, if (s_j)_1 ≠ x_1, (s_j)_2 ≠ y_2, then s_j = (y_1, x_2, z_3). Hence, for j≥ i_0 + 2, the point s_j is on one of the lines (x_1, y_2, ·), (x_1, ·, y_3), (·, y_2, x_3), or it is the point (y_1, x_2, z_3), where (a, b, ·) stands for the line {(a,b,z) : z arbitrary}, etc. Note that a point on (x_1, y_2, ·) is not adjacent to any point on (·, y_2, x_3), and the same holds for the lines (x_1, y_2, ·) and (x_1, ·, y_3). Hence, along our path, a point on the line (x_1, ·, y_3) is followed either by a point on (·, y_2, x_3) or by the point (y_1, x_2, z_3) (the latter may happen only once). In any case, if |S| ≥ 30, then among s_i_0 + 2, s_i_0 + 3, …, s_i_0 + 20 we must get a contiguous sequence s_j, s_j+1, …, s_j+7 of points with s_j, s_j+2, s_j+4, s_j+6∈ (x_1, ·, y_3) and s_j+1, s_j+3, s_j+5, s_j+7∈ (·, y_2, x_3). Finally, we look at A = s_j, B = s_j+2, C = s_j+5, D = s_j+7. These four points form an independent set, but A ≠ B gives A_2 ≠ B_2, so one of A_2 ≠ y_2, B_2 ≠ y_2 holds, and similarly one of C_1 ≠ x_1, D_1 ≠ x_1 holds as well. Choosing a point among A, B and a point among C, D for which equality does not hold gives an edge, which is impossible.

Finally, we study independent sets in G_3. Note that Lemma <ref> in this case does not tell us anything about the structure of such sets. When we refer to lines or planes, we always think of very specific cases, namely the lines are the sets of the form {x : x_i = a, x_j = b} and the planes are {x : x_i = a}. Similarly, collinearity and coplanarity of points have a stronger meaning, and imply that the points lie on a common line or plane defined as above.

Let S be a set of vertices in G_3. If every two points of S are collinear, then S is a subset of a line. If every three points of S are coplanar, then S is a subset of a plane.

We first deal with the collinear case. Take any pair of points x, y ∈ S; w.l.o.g. they coincide in the first two coordinates. Take a third point z ∈ S. If z does not share the values of both of the first two coordinates with x and y, then, being collinear with each of them, it must share the third coordinate with both, so we would have x_3 = z_3 = y_3, which is impossible. As z was arbitrary, we are done.

Suppose now that we have all triples coplanar. W.l.o.g. we have a noncollinear pair x, y, which only coincide in the first coordinate. Then all other points may only be in the plane {p : p_1 = x_1}.
Given an independent set I of G_3 of size 4 (at least) one of the following alternatives holds(S1) I is coplanar, or (S2) I = {(a,b,c), (a',b',c), (a',b,c'), (a,b',c')}, where a ≠ a'; b ≠ b' and c ≠ c', or (S3) up to permutation of coordinates I = {(a,b,c), (a,b,c'), (a,b',x), (a',b,x)}, where a ≠ a'; b ≠ b' and c ≠ c'.Suppose that I = {A, B, C, D} is not a subset of any plane. We distinguish between two cases. Case 1. There are no collinear pairs in I.Let A = (a, b, c). But AB is not an edge and not colinear so A and B differ in precisely two coordinates. Thus, w.l.o.g. B = (a', b', c) where a ≠ a' and b ≠ b'. If C_3 also equals c, then we must have C_3 = (a”, b”, c) with a” different from a,a' and b” from b, b'. However, looking at D, we cannot have D_3 = c as otherwise I ⊂{x_3 = c}, so D must differ at all three coordinates from one of the points A, B, C, making them joined by an edge, which is impossible. Thus C_3 = c', with c' ≠ c. Since AC and BC are not edges, C ∈{(a, b', c'), (a', b, c')}. The same argument works for D, so D_3 = c”≠ c, and D ∈{(a, b', c”), (a', b, c”)}. However, if c' ≠ c”, then C, D are either collinear or adjacent in G_3, which are both impossible. Hence c” = c', and {C, D} = {(a,b',c'), (a',b,c')}, as desired. Case 2. W.l.o.g. A and B are collinear.Let A = (a,b, c), B = (a,b,c') with c ≠ c'. Since {x_1 = a} does not contain the whole set I, we have w.l.o.g. C_1 = a' ≠ a. If C_2 ≠ b, then AC or BC is an edge, which is impossible. Therefore, C_2 = b. Hence D_2 = b' ≠ b, and by similar argument D_1 = a. Finally CD is not an edge, so their third coordinate must be the same, proving the lemma.(Structure of the independent sets of size 5.) Given an independent set I of G_3 of size 5 (at least) one of the following alternatives holds * I is coplanar, or* I is a subset of a union of three lines, all sharing the same point. List the vertices of I as x_1, x_2, x_3, x_4, x_5. W.l.o.g. x_1, x_2, x_3 are not coplanar. By the previous lemma, {x_1, x_2, x_3, x_i}for i=4,5 may have structure S2 or S3. But if both structures are S2, then we must have that in both quadruples, at each coordinate, each value appears precisely two times. This implies x_4 = x_5. Hence, w.l.o.g. {x_1, x_2, x_3, x_4} has structure S3. Therefore, assume w.l.o.g. thatx_1 = (1,0,0), x_2 = (0,1,0), x_3 = (0,0,1), x_4 = (0, 0, c')for some c' ≠ 1 (which corresponds to the choice a = 0, a' = 1, b = 0, b' = 1, x = 0, c = 1 in the previous Lemma, switching the roles ofc and c' if necessary). Looking at {x_1, x_2, x_3, x_5}, if it had S2 for its structure, we would get x_5 = (1,1,1), which is adjacent to x_4, and thus impossible. Hence {x_1, x_2, x_3, x_5} also has structure S3. Permutting the coordinates only permutes x_1, x_2, x_3, and does not change the number of zeros in x_5. Thus, w.l.o.g.{(1,0,0), (0,1,0), (0,0,1), x_5} = {x_1, x_2, x_3, x_5} = {(d,e,f), (d,e,f'), (d',e,y), (d,e',y)},for some d ≠ d', e ≠ e', f ≠ f'. But in the first coordinate, only zero can appear three times, so d = 0. Similarly, e = 0, so x_5 ∈ (0,0, ·), after a permutation of coordinates. Thus x_5 has at least 2 zeros, so our independent set I is a subset of the union of lines passing through the point (0,0,0), as required.§ CONJECTURE <REF> FOR 4 COLOURS In this short section we reprove the result of Gyárfás. (Gyárfás) Conjecture <ref> for 4 colours and Conjecture <ref> for G_3 are true. By the equivalence of conjectures, it suffices to prove Conjecture <ref> for G_3. Let X be the given finite set of vertices in G_3. 
§ CONJECTURE <REF> FOR 4 COLOURS

In this short section we reprove the result of Gyárfás.

(Gyárfás) Conjecture <ref> for 4 colours and Conjecture <ref> for G_3 are true.

By the equivalence of the conjectures, it suffices to prove Conjecture <ref> for G_3. Let X be the given finite set of vertices in G_3. Assume that G_3[X] has at least 4 components, otherwise we are done immediately. By a representative set we mean any set of vertices that contains at most one vertex from each component of X. A complete representative set is a representative set that intersects every component of X.

If there are three collinear points, each in a different component, then X can be covered by two planes. In particular, if two planes do not suffice, then among every three points in different components there is a noncollinear pair. W.l.o.g. these are the points (0, 0, 1), (0,0,2), (0,0,3). Then, unless X ⊂{x_1 = 0}∪{x_2 = 0}, we have a point of the form (a,b,c) with a,b both non-zero, so it is a neighbour of at least two of the points we started with, contradicting the fact that they belong to different components. For the second part, recall that if every pair in a triple is collinear, then the whole triple lies on a line.

By the observation above, every representative set of size at least 3 has a noncollinear pair. Suppose firstly that every complete representative set is a subset of a plane. Pick a complete representative set {x_1, x_2, …, x_r}, with x_i ∈ C_i, where C_i are the components. W.l.o.g. x_1, x_2 is a noncollinear pair; therefore, it determines a plane π, forcing the components C_3, C_4, …, C_r to be entirely contained in this plane. Hence, we may cover the whole set X by the components C_1 and C_2, and the plane π. Therefore, we may assume that we have a representative set of size three which does not lie in any plane.

Case 1. X has more than 4 components.

Let x_1, x_2, x_3 be a representative set, x_i ∈ C_i, which is not coplanar. Then, for any choice of y_4, …, y_r such that {x_1, x_2, x_3, y_4, …, y_r} is a complete representative set, we have 3 lines that meet in a single point and that contain all these points. Observe that this structure is determined entirely by x_1, x_2, x_3. Indeed, since these three points are not coplanar, they cannot coincide in any coordinate. However, since there are at least 5 components, x_1, x_2, x_3 extend to an independent set of size 5, which must be a subset of three lines sharing a point p. But we can identify p, since p_i must be the value that occurs precisely two times among (x_1)_i, (x_2)_i, (x_3)_i, and hence the lines are l_1 = px_1, l_2 = px_2, l_3 = px_3. Thus, the union of the lines l_1, l_2, l_3 contains the whole of the components C_4, …, C_r, and x_i ∈ l_i. By the Observation above, each l_i has representatives from at most two components. Hence, we may not have the common point p of the three lines present in X, as otherwise some line l_i would have three components meeting it. W.l.o.g. l_2, l_3 intersect two components each, and l_1 may intersect 1 or 2. Then, picking any y ∈ l_2 in a different component from that of x_2 and any z ∈ l_3 with a component different from that of x_3, using the argument above applied to {x_1, y, z} instead of {x_1, x_2, x_3}, we deduce that C_2 ⊂ l_2, C_3 ⊂ l_3. Thus, we actually have singleton components C_2, C_3, …, C_r. Finally, any point in C_1 must be either in the plane of l_2, l_3 or on the line l_1, so we can cover by two planes.

Case 2. X has precisely 4 components and there exists a coplanar complete representative set.

Let x_1, x_2, x_3, x_4 be a coplanar complete representative set, with x_i ∈ C_i. W.l.o.g. we have x_i = (a_i, b_i, 0).
As a few times before, we do not have a collinear triple among these 4 points, so each of the sequences (a_i)_i=1^4 and (b_i)_i=1^4 has the property that a value may appear at most twice in the sequence.

Suppose for a moment that each of these two sequences has at most one value that appears twice. Write u for the value that appears two times in (a_i), if it exists, and let v be the corresponding value for (b_i). If we take a point y outside the plane (·, ·, 0), then the number of appearances of y_1 in (a_i) and of y_2 in (b_i) combined is at least three. So either y_1 is the unique doubly-appearing value u for (a_i), or y_2 = v; hence the three planes (u, ·, ·), (·, v, ·) and (·, ·, 0) cover X.

Now assume that, w.l.o.g., (a_i) has two doubly-appearing values, i.e. a_1 = a_2 = u ≠ a_3 = a_4 = v. If y is outside the plane (·, ·, 0), then if y_1 ≠ u, one of the pairs x_1y, x_2 y must be an edge, so x_3y and x_4y are not edges, and hence we must have y_1 = v (note that b_3 ≠ b_4, as x_3 ≠ x_4). Similarly, if y is outside the plane (·, ·, 0) and y_1 ≠ v, then y_1 = u. Hence, for all points y ∈ X, we have y_1 ∈{u,v} or y_3 = 0, and three planes cover once again.

Case 3. X has precisely 4 components, but no complete representative set is coplanar.

Thus, by Lemma <ref>, every complete representative set has either S2 or S3 as its structure. Observe that if S2 is always the structure, then all the components are singletons, and we are done by taking a plane to cover two of the vertices. So there is a representative set with structure S3. Take such a representative set x_1, x_2, a, b, w.l.o.g. x_1 = (1,0,0), x_2 = (2,0,0). Take any y that shares its component with a, and any z that shares its component with b. Then x_1, x_2, y, z is also a complete representative set, so it is not coplanar. But, as x_1, x_2 are collinear, it may not have structure S2, so the structure must be S3, which forces y_1 = z_1. Hence, we can cover X by the components of x_1 and x_2 and the plane (a_1, ·, ·). This completes the proof.

Note that the theorem is sharp – we can take X = {0, e_1, e_2, e_3, e_1 + e_2, e_1 + e_3, e_2 + e_3}, where e_1 = (1,0,0), e_2 = (0,1,0), e_3 = (0,0,1).

§ CONJECTURE <REF> FOR 4 COLOURS

Recall that by the diameter of a colour c, written diam_c, we mean the maximal distance between vertices sharing the same component of G[c]. In the remaining part of the paper, for a given 4-colouring χ : E(K_n) → [4], we say that χ satisfies Conjecture <ref> with (constant) K if there are sets A_1, A_2, A_3 whose union is [n] and colours c_1, c_2, c_3 such that each K_n[A_i, c_i] is connected and of diameter at most K. Thus, our goal can be phrased as: there is an absolute constant K such that every 4-colouring χ of E(K_n) satisfies Conjecture <ref> with K.

We begin the proof of the main result by observing that essentially we may assume that at least two colours have arbitrarily large diameters. We argue by modifying the colouring slightly.

Suppose χ is a 4-colouring of E(K_n) such that three colours have diameters bounded by N_1. Then χ satisfies Conjecture <ref> with max{N_1, 30}.

Write G = K_n, and observe that if a point does not receive all 4 colours at its edges, we are immediately done (the at most three balls of radius 1 around it, in the colours its edges do use, cover the whole vertex set). Let χ be the given colouring of the edges, and let the colours 1, 2 and 3 have diameter bounded by N_1. We begin by modifying the colouring slightly. Let xy be any edge coloured by colour 4. If x and y share the same component in G[c] for some c ∈{1,2,3}, change the colour of xy to the colour c (if there is more than one choice, pick any).
Note that such a modification does not change the monochromatic components, except possibly shrinking the components of the colour 4. Let χ' stand for the modified colouring.

Observe that the diameter of colour 4 in χ' is also bounded. Begin by listing all the components for colours i ∈{1,2,3} as C^(i)_1, C^(i)_2, C^(i)_3, …. For x ∈ℕ^3, consider the sets C_x = C_x_1, x_2, x_3 = C^(1)_x_1∩ C^(2)_x_2∩ C^(3)_x_3. Let X be the set of all x such that C_x≠∅. If G^(χ')[4] (where the superscript indicates the relevant colouring) has an induced path v_1, v_2, …, v_r, then, defining x_i ∈ℕ^3 to be such that v_i ∈ C_x_i, in fact x_1, x_2, …, x_r becomes an induced path in G_3. But Lemma <ref> implies that r ≤ 30. Hence, the 4-diameter in the colouring χ' is at most 30.

Applying Theorem <ref> to the colouring χ' gives three monochromatic components that cover the vertex set; let these be G^(χ')[A_1, c_1], G^(χ')[A_2, c_2], G^(χ')[A_3, c_3], where the superscript indicates the relevant colouring. Using the same sets and colours, but returning to the original colouring, we have that G^(χ)[A_1, c_1], G^(χ)[A_2, c_2], G^(χ)[A_3, c_3] are all still connected, as the 1, 2 and 3-components are the same in χ and χ', while there can only be more 4-coloured edges in the colouring χ. Also, the 1, 2 and 3-diameters are bounded by N_1, and 4-diameters of sets may only decrease when returning to the colouring χ, so the lemma follows.

Let us introduce some additional notions. Let P ⊂ℕ_0^2 be a set, and let L : P →𝒫([n])∖{∅} be a function with the property that {L(A) : A ∈ P} form a partition of [n] and there are two colours c_3, c_4 [This choice of indices was chosen on purpose – we shall first use colours c_1, c_2 to define P and L, and the remaining colours will be c_3 and c_4.] such that whenever A, B ∈ P and |A_1 - B_1|, |A_2 - B_2| ≥ 2, then all edges between the sets L(A) and L(B) are coloured with c_3 and c_4 only. We call L a c_3, c_4-layer mapping and we refer to P as the layer index set. Further, we call a subset S ⊂ P a k-distant set if for every two distinct points A, B ∈ S we have |A_1 - B_1|, |A_2 - B_2| ≥ k.

Let us briefly motivate this notion. Suppose that K_n[c_1] and K_n[c_2] are both connected. Fix a vertex x_0 and let P = {(d_c_1(x_0, v), d_c_2(x_0, v)) : v ∈ [n]}⊂ℕ_0^2. Let L(A) = {v ∈ [n] : (d_c_1(x_0, v), d_c_2(x_0, v)) = A} for all A ∈ P (this also motivates the choice of the letter L – we think of L(A) as a layer). Then, if x ∈ L(A), y ∈ L(B) for A, B ∈ P with |A_1 - B_1| ≥ 2, |A_2 - B_2| ≥ 2, by the triangle inequality we cannot have d_c_1(x,y) ≤ 1 or d_c_2(x,y) ≤ 1, so xy takes either the colour c_3 or the colour c_4. As we shall see, we may have more freedom in the definition of P and L if there is more than one component in a single colour.

We now explore these notions in some detail, before using them to obtain some structural results on the 4-colourings that possibly do not satisfy Conjecture <ref>.

Let χ be a 4-colouring, L a c_3, c_4-layer mapping with layer index set P, and suppose that {A, B, C}⊂ P is a 3-distant set. Write G = K_n. Then the following hold.
* For some colour c ∈{c_3, c_4} we have G[L(A) ∪ L(B) ∪ L(C), c] connected and of diameter at most 20.
* If additionally, for c' such that {c,c'} = {c_3,c_4} and some distinct A', B' ∈{A, B, C}, we have G[L(A')∪ L(B'), c'] contained in a subgraph H ⊂ G[c'] that is connected and of diameter at most N_3, then the given colouring satisfies Conjecture <ref> with max{40, N_3 + 20}.

(1): Observe that all edges between L(A), L(B), L(C) are of colours c_3 and c_4.
This is a complete tripartite graph, and by Lemma <ref> w.l.o.g. L(A) ∪ L(B) ∪ L(C) is c_3-connected and of c_3-diameter at most 20.

(2): W.l.o.g. A' = A, B' = B. Pick any D ∈ P. Note that since A, B, C are 3-distant, D is 2-distant from at least one of A, B, C (otherwise, by the pigeonhole principle, for some two points F, F' among A, B, C and some index i, we would have |F_i - D_i|, |F'_i - D_i| ≤ 1, so |F_i - F'_i| ≤ 2, which is impossible). Let E ∈{A, B, C} be such that D, E are 2-distant. Thus, all the edges between L(D) and L(E) are of colours c_3 and c_4, so Lemma <ref> applies to L(D) ∪ L(E).

Let P' ⊂ P be the set of all D ∈ P such that Lemma <ref> gives that either L(D) ∪ L(E) is c-connected and of c-diameter at most 10, or the second conclusion of that lemma holds. Hence, every vertex x in L(D) for some D ∈ P' is on c-distance at most 10 from a vertex in L(A) ∪ L(B) ∪ L(C). Hence, L(A) ∪ L(B) ∪ L(C) ∪ (∪_D ∈ P' L(D)) is c-connected and of c-diameter at most 40.

For all other D ∈ P∖ P', Lemma <ref> applied to L(D) ∪ L(E) for a relevant E implies that L(D) ∪ L(E) is c'-connected and of diameter at most 10. Let P” be the set of D ∈ P∖ P' for which E ∈{A, B}, and let P”' = P ∖ (P' ∪ P”) (for which therefore E = C). Hence, H ∪ (∪_D ∈ P” L(D)) is c'-connected and of c'-diameter at most N_3 + 20, and finally L(C)∪ (∪_D ∈ P”' L(D)) is also c'-connected and of c'-diameter at most 20. Hence, taking
G[L(A) ∪ L(B) ∪ L(C) ∪ (∪_D ∈ P' L(D)), c],
H ∪ G[(∪_D ∈ P” L(D)), c'],
and
G[L(C)∪ (∪_D ∈ P”' L(D)), c'],
proves the lemma.

Suppose that χ is a 4-colouring of E(K_n) and that L is a c_3, c_4-layer mapping for some colours c_3, c_4 ∈ [4] with a 3-distant set of size at least 4. Then χ satisfies Conjecture <ref> with constant 160.

Write G = K_n. Suppose that some A, B, C, D ∈ P are 3-distant. All edges between L(A) ∪ L(B) ∪ L(C) ∪ L(D) are of colours c_3 and c_4 only, so by Lemma <ref> w.l.o.g. G[L(A) ∪ L(B) ∪ L(C) ∪ L(D),c_3] is connected and of diameter at most 60. Pick any E ∈ P. If E had difference at most 1 in absolute value, in some coordinate, from at least three points among A, B, C, D, then by the pigeonhole principle there would be A', B' among these four and a coordinate i such that |A'_i - E_i|, |B'_i - E_i| ≤ 1, so |A'_i - B'_i| ≤ 2, which is impossible. Hence, E is 2-distant from at least two points A'(E), B'(E) among A, B, C, D. Hence, A'(E), B'(E), E is a 2-distant set, so the edges between L(A'(E)), L(B'(E)) and L(E) are of colours c_3 and c_4 only. By Lemma <ref>, for some colour c(E) ∈{c_3,c_4} we have G[L(A'(E)) ∪ L(B'(E)) ∪ L(E), c(E)] connected and of diameter at most 20. We split P as follows: P' ⊂ P is the set of all E ∈ P such that c(E) = c_3, and for each pair π of A, B, C, D we define P_π as the set of all E ∈ P such that {A'(E), B'(E)} = π and c(E) = c_4. We now look at the set of all pairs π for which P_π≠∅.

Case 1: there are π_1, π_2 such that P_π_1 and P_π_2 are non-empty and π_1 ∩π_2 ≠∅. W.l.o.g. π_1 = {A, B}, π_2 = {A,C}. For every π = {A', B'} we already have G[L(A') ∪ L(B') ∪ (∪_E ∈ P_π L(E)), c_4] connected and of diameter at most 40. Hence, G[L(A) ∪ L(B) ∪ L(C) ∪ (∪_E ∈ P_π_1∪ P_π_2 L(E)), c_4] is also connected and of diameter at most 80. But any other pair π must intersect {A, B, C}, so we have G[∪_π((∪_F ∈π L(F)) ∪ (∪_E ∈ P_π L(E))), c_4] connected and of diameter at most 160, where ∪_π ranges over all pairs. Taking additionally G[L(A) ∪ L(B) ∪ L(C) ∪ L(D) ∪ (∪_E ∈ P' L(E)), c_3] proves the claim.

Case 2: all pairs π such that P_π≠∅ are disjoint.
There are at most 2 such pairs. Thus, if we take G[(∪_F ∈π L(F)) ∪ (∪_E ∈ P_π L(E)), c_4] for such pairs π (these are connected and of diameter at most 40), and G[L(A) ∪ L(B) ∪ L(C) ∪ L(D) ∪ (∪_E ∈ P' L(E)), c_3], the claim follows.

Suppose that χ is a 4-colouring of E(K_n) and that L is a c_3, c_4-layer mapping for some colours c_3, c_4 ∈ [4] with a 7-distant set of size at least 3. Suppose additionally that {A_i : A ∈ P} takes at least 28 values for each i=1,2. Then χ satisfies Conjecture <ref> with constant 160.

Let {A, B, C} be a 7-distant set. Pick any other D ∈ P. If D is 3-distant from each of A, B, C, we obtain a 3-distant set of size 4, so by Lemma <ref> we are done. Hence, for every D ∈ P we have E ∈{A, B, C} such that |E_i - D_i| ≤ 2 for some i. (Note that this is the main contribution to the constant 160 in the statement.)

Since {A, B, C} is a 7-distant set, by Lemma <ref> we have w.l.o.g. G[L(A) ∪ L(B) ∪ L(C), c_3] connected and of diameter at most 20. We now derive some properties of L(D) for points D ∈ P such that |D_i - A_i|, |D_i - B_i|, |D_i - C_i| ≥ 3 for some i ∈{1,2}. (Note that such points exist by the assumptions.)

Let D be such a point and let j be such that {i, j} = {1,2}. Since the set {A, B, C} is 7-distant, there are distinct E_1, E_2 ∈{A, B, C} such that |D_j - (E_1)_j|, |D_j - (E_2)_j| ≥ 3. Thus, {D, E_1, E_2} is also a 3-distant set. Applying Lemma <ref> to {D, E_1, E_2} implies that G[L(D) ∪ L(E_1) ∪ L(E_2), c] is connected and of diameter at most 20, for some c ∈{c_3, c_4}. However, if c = c_4, then G[L(E_1) ∪ L(E_2), c_4] is contained in a subgraph of G[c_4] that is connected and of diameter at most 20, so Lemma <ref> (2) applies once again and the claim follows. Hence, we must have that G[L(D) ∪ L(E_1) ∪ L(E_2), c_3] is connected and of diameter at most 20. In particular, whenever D ∈ P satisfies |D_i - A_i|, |D_i - B_i|, |D_i - C_i| ≥ 3 for some i ∈{1,2}, then every point in L(D) is on c_3-distance at most 20 from L(A) ∪ L(B) ∪ L(C).

By assumption, {A_1 : A ∈ P} takes at least 28 values. Hence, we can find X ∈ P such that |X_1 - A_1|, |X_1 - B_1|, |X_1 - C_1| ≥ 5 (the three windows of values at distance at most 4 from A_1, B_1, C_1 contain at most 27 values in total). Similarly, there is Y ∈ P such that |Y_2 - A_2|, |Y_2 - B_2|, |Y_2 - C_2| ≥ 5. W.l.o.g. |X_2 - A_2| ≤ 2. If |Y_1 - A_1| ≤ 2, then X, Y, B, C form a 3-distant set of size 4, and once again the claim follows from Lemma <ref>. Hence, w.l.o.g. |Y_1 - B_1| ≤ 2. By the work above, we also have that every point in L(X) ∪ L(Y) is on c_3-distance at most 20 from L(A) ∪ L(B) ∪ L(C). Note also that X, Y are 3-distant.

It remains to analyse D ∈ P such that for both i = 1,2 there is an E ∈{A, B, C} with |E_i - D_i| ≤ 2. We show that in all but one case of the choice of these E, we in fact have L(D) on bounded c_3-distance from L(A) ∪ L(B) ∪ L(C). If we have an E ∈{A, B, C} such that both |E_1 - D_1| ≤ 2 and |E_2 - D_2| ≤ 2 hold, then taking E', E” such that {E, E', E”} = {A, B, C}, we have D, E', E” 3-distant, so Lemma <ref> once again implies that every vertex in L(D) is on c_3-distance at most 20 from L(A) ∪ L(B) ∪ L(C) (or we are done by the second part of Lemma <ref>). We distinguish the following cases.
* If |D_1 - A_1| ≤ 2, |D_2 - B_2| ≤ 2, then D, X, Y form a 3-distant set. Let us check this. We already know that X, Y are 3-distant.
By the triangle inequality, we obtain |X_1 - D_1| ≥ |X_1 - A_1| - |A_1 - D_1| ≥ 3, |Y_1 - D_1| ≥ |B_1 - A_1| - |B_1 - Y_1| - |D_1 - A_1| ≥ 3, |D_2 - X_2| ≥ |B_2 - A_2| - |B_2 - D_2| - |X_2 - A_2| ≥ 3 and |Y_2 - D_2| ≥ |Y_2 - B_2| - |B_2 - D_2| ≥ 3. We also know that L(X) ∪ L(Y) is contained in a subgraph H ⊂ G[c_3] that is connected and of diameter at most 20, so applying Lemma <ref> implies that we are done, unless G[L(D) ∪ L(X) ∪ L(Y), c_3] is connected and of diameter at most 20. Hence L(D) is on c_3-distance at most 40 from L(A) ∪ L(B) ∪ L(C).
* If |D_1 - C_1| ≤ 2, |D_2 - B_2| ≤ 2, then the same argument as in the case above proves that L(D) is on c_3-distance at most 40 from L(A) ∪ L(B) ∪ L(C).
* If |D_1 - A_1| ≤ 2, |D_2 - C_2| ≤ 2, then the same argument as in the case above proves that L(D) is on c_3-distance at most 40 from L(A) ∪ L(B) ∪ L(C).

Finally, we define P_1, P_2, P_3 ⊂ P as
P_1 = {D ∈ P : |D_1 - B_1|, |D_2 - A_2| ≤ 2},
P_2 = {D ∈ P : |D_1 - C_1|, |D_2 - A_2| ≤ 2},
P_3 = {D ∈ P : |D_1 - B_1|, |D_2 - C_2| ≤ 2},
which are disjoint; and if D ∈ P ∖ (P_1 ∪ P_2 ∪ P_3), we know that L(D) is on c_3-distance at most 40 from L(A) ∪ L(B) ∪ L(C). Let also L_i = ∪_D ∈ P_i L(D). Since for D ∈ P_1 we have |D_1 - C_1|, |D_2 - C_2| ≥ 2, all edges between L(D) and L(C) are coloured using c_3 and c_4; hence all edges between L_1 and L(C) are coloured using only these two colours. Applying Lemma <ref>, we have G[L_1 ∪ L(C), c] connected and of diameter at most 10 for some c ∈{c_3, c_4}, or L_1 is on c_3-distance 1 from L(A) ∪ L(B) ∪ L(C). Similarly, all edges between L_2 and L(Y), and all edges between L_3 and L(X), take only the colours c_3 and c_4. Observe that if D ∈ P_2, D' ∈ P_3, then |D_1 - D'_1| ≥ |C_1 - B_1| - |C_1 - D_1| - |D'_1 - B_1| ≥ 3. Similarly, |D_2 - D'_2| ≥ |A_2 - C_2| - |A_2 - D_2| - |D'_2 - C_2| ≥ 3, so all edges between L_2 and L_3 are only of colours c_3 and c_4. Apply Lemma <ref> to L_2 and L(Y), implying that either G[L_2 ∪ L(Y), c_4] is connected and of diameter at most 10, or L_2 is on c_3-distance at most 30 from L(A) ∪ L(B) ∪ L(C). Similarly, apply Lemma <ref> to L_3 and L(X), implying that either G[L_3 ∪ L(X), c_4] is connected and of diameter at most 10, or L_3 is on c_3-distance at most 30 from L(A) ∪ L(B) ∪ L(C).

Finally, let V = {v ∈ [n] : d_c_3(v, L(A) ∪ L(B) ∪ L(C)) ≤ 40}, which is c_3-connected and of c_3-diameter at most 100. We distinguish the following cases.
* L_2, L_3 ⊂ V. In this case, we can take V and G[L_1 ∪ L(C), c] if necessary (otherwise L_1 ⊂ V).
* L_2 ⊄V, L_3 ⊂ V. Thus, G[L_2 ∪ L(Y), c_4] is connected and of diameter at most 10, so taking G[L_2 ∪ L(Y), c_4] and V, and additionally G[L_1 ∪ L(C), c] if necessary, we are done.
* L_2 ⊂ V, L_3 ⊄V. Thus, G[L_3 ∪ L(X), c_4] is connected and of diameter at most 10, so taking G[L_3 ∪ L(X), c_4] and V, and additionally G[L_1 ∪ L(C), c] if necessary, we are done.
* L_2, L_3 ⊄V. In this case, we have G[L_2 ∪ L(Y), c_4] and G[L_3 ∪ L(X), c_4] connected and of diameter at most 10. Apply Lemma <ref> to L_2 and L_3. If L_2 and L_3 are on c_4-distance at most 10, we may take G[L_2 ∪ L_3 ∪ L(X) ∪ L(Y), c_4], V and G[L_1 ∪ L(C), c] if necessary. Otherwise, G[L_2 ∪ L_3, c_3] is connected and of diameter at most 10. In this case, take G[L_2 ∪ L_3, c_3], V and G[L_1 ∪ L(C), c] if necessary.
This completes the proof of the lemma.

Let us now briefly discuss a way of defining c_3, c_4-layer mappings. Pick two colours c_1, c_2 ∈ [4], and take c_3, c_4 to be the remaining two colours. List all the vertices as v_1, v_2, …, v_n.
To each vertex, we shall assign two nonnegative integers, D_1(v_i) and D_2(v_i), initially marked as undefined. We apply the following procedure.

Step 1. Pick the smallest index i such that D_1(v_i) or D_2(v_i) is undefined. If there is no such i, terminate the procedure.
Step 2. For j = 1,2, if D_j(v_i) is undefined, pick an arbitrary value for it.
Step 3. For j = 1,2, if D_j(v_i) was undefined before the second step, then for all vertices u in the same c_j-component as v_i set D_j(u) = d_c_j(v_i, u) + D_j(v_i). Return to Step 1.

Upon the completion of the procedure, set P = {(D_1(v), D_2(v)) : v ∈ [n]} and define L : P →𝒫([n]) by L(x,y) = {v ∈ [n] : (D_1(v), D_2(v)) = (x,y)}.

Claim. The mapping L above is well-defined and is a c_3,c_4-layer mapping.

Observe that each time we pick v_i whose value(s) are to be defined, we end up defining D_1 on one c_1-component, or D_2 on one c_2-component, or both. Hence, for every vertex v, the values D_1(v), D_2(v) change precisely once from undefined to a nonnegative integer value. Hence, (D_1(v), D_2(v)) are well-defined and take values in ℕ_0^2, so P and L are well-defined and the sets L(A) form a partition of [n] as A ranges over P. Finally, consider an edge xy coloured by c_1. Let D_1(x) be defined with v_i chosen in Step 2 (possibly x = v_i). Since xy is of colour c_1, these are in the same c_1-component, and hence D_1(x) = d_c_1(v_i, x) + D_1(v_i) and D_1(y) = d_c_1(v_i, y) + D_1(v_i). Therefore,
|D_1(x) - D_1(y)| = |(d_c_1(v_i, x) + D_1(v_i)) - (d_c_1(v_i, y) + D_1(v_i))| = |d_c_1(v_i, x) - d_c_1(v_i, y)| ≤ d_c_1(x,y) = 1;
hence, if χ(xy) = c_1, then |D_1(x) - D_1(y)| ≤ 1. Similarly, we get the corresponding statement for the colour c_2. It follows that if A, B ∈ P are such that |A_1 - B_1|, |A_2 - B_2| ≥ 2, then for x ∈ L(A), y ∈ L(B) we have (D_1(x), D_2(x)) = A, (D_1(y), D_2(y)) = B, so xy is coloured by c_3 or c_4, as desired.
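To make the procedure concrete, here is a short Python sketch (ours, for illustration only; the argument of the paper does not depend on it). It follows Steps 1-3 above. Where Step 2 allows an arbitrary value, we pick, as our own choice, a fresh value far from all values used so far, which keeps distinct components in well-separated layers; the colouring is again assumed to be keyed by frozenset pairs.

```python
from collections import deque

def layer_mapping(n, colour, c1, c2):
    """colour maps frozenset({u, v}) to one of the four colours; c1, c2 are the
    colours whose distances define the layers.  Returns D with D[v] = (D_1(v), D_2(v));
    the layer L(A) is then {v : D[v] == A}."""
    vals = [[None] * n, [None] * n]   # D_1 and D_2; None means "undefined"
    fresh = [0, 0]                    # a fresh base value per colour (our Step 2 choice)
    for i in range(n):                # Step 1: smallest index with an undefined value
        for j, c in enumerate((c1, c2)):
            if vals[j][i] is not None:
                continue
            base = fresh[j]           # Step 2: pick a value for D_j(v_i)
            dist = {i: 0}             # Step 3: BFS distances within the c-component of v_i
            queue = deque([i])
            while queue:
                u = queue.popleft()
                vals[j][u] = base + dist[u]
                for w in range(n):
                    if w != u and w not in dist and colour[frozenset((u, w))] == c:
                        dist[w] = dist[u] + 1
                        queue.append(w)
            fresh[j] = base + max(dist.values()) + 2
    return {v: (vals[0][v], vals[1][v]) for v in range(n)}
```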
§.§ Monochromatically connected case

Suppose that χ is a 4-colouring of E(K_n) such that every colour induces a connected subgraph of K_n. Then χ satisfies Conjecture <ref> with constant 160.

Suppose the contrary; in particular, every colour has diameter greater than 480. Our main goal in the proof is to find a pair of vertices x',y' with control over their 1-distance and 2-distance. We need both distances sufficiently large, so that we can make use of distant sets in 3,4-layer mappings, and also bounded by a constant, so that if a vertex is on small 1-distance from x', it is also on small 1-distance from y' and vice-versa. More precisely:

Suppose that there are vertices x', y' such that d_1(x', y') ∈{6,7, …, 50} and d_2(x', y') ∈{10, 11, …, 20}. Then we obtain a contradiction.

Pick any point z ≠ x', y'. Apply the procedure for defining a 3,4-layer mapping starting from x'. If we obtain a 7-distant set of size at least 3, we obtain a contradiction with Lemma <ref>. Hence, the distances corresponding to x', y', z cannot give such a set, so we must have one of d_1(x', z) ≤ 6, or |d_1(x', y') - d_1(x', z)| ≤ 6, or d_2(x', z) ≤ 6, or |d_2(x', z) - d_2(x', y')| ≤ 6. In particular, we must have d_1(x', z) ≤ 56 or d_2(x', z) ≤ 26. Recalling the definition of monochromatic balls, B_1(x', 56) and B_2(x', 26) cover all the vertices, giving a contradiction.

Claim. There are x,y such that d_1(x,y) ∈{25,26,27} and d_2(x,y) ≥ 40.

Suppose the contrary: for every x,y such that d_1(x,y) ∈{25,26,27}, we must have d_2(x,y) ≤ 39. Pick any y_1, y_2 ∈ [n] such that χ(y_1y_2) = 1. Since the 1-diameter is greater than 160, we can find x ∈ [n] such that d_1(x,y_1) = 26. By the triangle inequality, we also have d_1(x,y_2) ∈{25,26,27}. Hence, d_2(x,y_1), d_2(x,y_2) ≤ 39, from which we conclude that whenever an edge y_1y_2 is coloured by 1, then d_2(y_1, y_2) ≤ 78. Hence, taking any x ∈ [n], the balls B_2(x, 78), B_3(x, 1), B_4(x, 1) cover the vertex set. However, these have diameter less than 160, which is a contradiction.

Take x,y given by the claim above. Since the subgraph G[2] is connected, there is a minimal 2-path x = z_0, z_1, …, z_r, z_r+1 = y between x and y, with r ≥ 39. Look at the vertices z_10, z_20, …, z_10k with k such that 10 ≤ r - 10k < 20.

Consider x, y, z_10i for some 1 ≤ i ≤ k and check whether we can define a 3,4-layer mapping so that these three points become a 7-distant set. Apply the procedure for defining a 3,4-layer mapping, starting from x; i.e., we want to see whether (0,0), (d_1(x,y), d_2(x,y)) and (d_1(x, z_10i), d_2(x,z_10i)) are 7-distant. If they are 7-distant, Lemma <ref> gives us a contradiction. Since d_1(x,y) ≥ 25, d_2(x,y) ≥ 39 and 10 ≤ d_2(x, z_10i) = 10i ≤ 10k < d_2(x,y) - 6, we must have either d_1(x,z_10i) ≤ 6 or |d_1(x, z_10i) - d_1(x,y)| ≤ 6 (implying d_1(x, z_10i) ∈{19, 20, …, 33}). Similarly, if we start from y instead of x in our procedure, we see that either d_1(y, z_10i) ≤ 6 or |d_1(y, z_10i) - d_1(x,y)| ≤ 6 (implying d_1(y, z_10i) ∈{19, 20, …, 33}) must hold.

Observe that for the vertex z_10 we must have d_1(x, z_10) ≤ 6. Otherwise, we would have 19 ≤ d_1(x, z_10) ≤ 33 and d_2(x, z_10) = 10, resulting in a contradiction by Lemma <ref> (applied to the pair x, z_10). For every z_10i we must have either the first inequality (d_1(x,z_10i) ≤ 6) or the second (19 ≤ d_1(x,z_10i) ≤ 33), and we have that the first vertex among these, namely z_10, satisfies the first inequality. Suppose that there was an index i such that z_10(i+1) obeys the second inequality, and pick the smallest such i. Then, by the triangle inequality, we would have 13 ≤ d_1(z_10(i+1), x) - d_1(x, z_10i) ≤ d_1(z_10i, z_10(i+1)) ≤ d_1(z_10(i+1), x) + d_1(x, z_10i) ≤ 39 and d_2(z_10i, z_10(i+1)) = 10, so Lemma <ref> applies now to the pair z_10i, z_10(i+1) and gives a contradiction. Hence, for all i ≤ k we must have the first inequality for z_10i. But then z_10k and y satisfy the conditions of Lemma <ref>, giving the final contradiction, since 10 ≤ d_2(y, z_10k) < 20 and 19 ≤ d_1(y, x) - d_1(x, z_10k) ≤ d_1(y, z_10k) ≤ d_1(y, x) + d_1(x, z_10k) ≤ 33. This completes the proof.

§.§ Intersecting monochromatic components

Let χ : E(K_n) →[4] be a 4-colouring with the property that, whenever C and C' are monochromatic components of different colours, and one of them has diameter at least 30 (in the relevant colour), then C and C' intersect. Then χ satisfies Conjecture <ref> with constant 160.

Suppose the contrary: we have a colouring χ that satisfies the assumptions but for which the conclusion fails. By Lemma <ref>, at least two colours have monochromatic diameters greater than 160. Let C_1 be such a component for a colour c_1, and let C_2 be such a component for a colour c_2, with c_1 ≠ c_2. Further, by Proposition <ref> we have a colour c' (which might equal one of c_1, c_2) with at least two components, w.l.o.g. c_1 ≠ c'.

First, we find a pair of vertices x,y with the property that 10 ≤ d_c_1(x,y) ≤ 40 and x, y are in different c'-components. We do this as follows. If there is a pair of vertices x_1, x_2 with d_c_1(x_1, x_2) < 10 that are in different c'-components, then, since the c_1-diameter of C_1 is large, we can find y ∈ C_1 with d_c_1(x_1, y) = 25.
Hence, 15 ≤ d_c_1(x_2, y) ≤ 35, and y is in a different c'-component from one of x_1, x_2, yielding the desired pair. Otherwise, we have that all pairs of vertices x,y ∈ C_1 with d_c_1(x,y) ≤ 30 also share the same c'-component. But then we must have the whole c_1-component C_1 contained in one c'-component, making it unable to intersect other c'-components, which is impossible. Hence, we have x, y in different c'-components, with 10 ≤ d_c_1(x,y) ≤ 40.

Pick any vertex z outside B_c_1(x, 50). Let c”, c”' be the two colours different from c_1, c'. We now apply our procedure for defining a c”, c”'-layer mapping with the vertices listed as x, y, z, …. Note that |D_1(x) - D_1(y)|, |D_1(x) - D_1(z)|, |D_1(y) - D_1(z)| ≥ 10 (recall the D_1, D_2 notation from the procedure). Hence, we get a 7-distant set, unless d_c'(x,z) ≤ 6 or d_c'(y,z) ≤ 6. Hence, B_c_1(x, 50), B_c'(x, 6) and B_c'(y, 6) cover the vertex set, and we get a contradiction.

§.§ Final steps

In the final part of the proof, we show how to reduce the general case to the case of intersecting monochromatic components.

Conjecture <ref> holds for 4 colours and we may take 160 for the diameter bounds.

Let χ be the given 4-colouring of E(K_n). Our goal is to apply Proposition <ref>. We start with an observation.

Suppose that C is a c-component that is disjoint from a c'-component C', with c' ≠c. Then for every pair of vertices x,y ∈ C we have d_c(x,y) ≤ 6, or d_c'(x,y) ≤ 6, or the colouring satisfies Conjecture <ref> with the constant 160.

Pick x,y ∈ C with d_c(x,y) ≥ 7 and take an arbitrary z ∈ C'. Apply our procedure for generating a c_3, c_4-layer mapping to the list x, y, z, …, with c_3, c_4 chosen to be the two colours different from c, c'. Since z is in different c- and c'-components from both x and y, these three vertices result in a 7-distant set, unless d_c'(x,y) ≤ 6, as desired.

Suppose that we have a c-component C that is disjoint from a c'-component C' with c' ≠c and has c-diameter at least 30. Then the colouring χ satisfies Conjecture <ref> with the constant 160.

By the Observation <ref> we are either done, or any two vertices x,y ∈ C with d_c(x,y) > 6 satisfy d_c'(x,y) ≤ 6. Furthermore, given any two vertices x,y ∈ C, since the c-diameter of C is at least 30, we can find z ∈ C such that d_c(x,z), d_c(y,z) ≥ 7, so by the triangle inequality d_c'(x,y) ≤ 12 holds for all x,y ∈ C.

Now, take an arbitrary vertex v ∈ C, let c”, c”' be the two remaining colours, and consider the sets B_c'(v, 12), B_c”(v,1), B_c”'(v,1). Given any u ∈ [n], if vu is coloured by any of c', c” or c”', it is already in the sets above. On the other hand, if uv is of colour c, then u ∈ C (as v ∈ C), so d_c'(u,v) ≤ 12, and thus u ∈ B_c'(v, 12). Thus, these sets cover the vertex set and have monochromatic diameters at most 24, so we are done.

Finally, we are in a position to apply Proposition <ref>, which finishes the proof of the theorem.
What bounds on their diameter can we take?

Observe already that for 3 colours the situation becomes much more complicated than for 2 colours, where complete multipartite graphs behaved well. Consider the following example. Pick n+6 vertices labelled v_1, v_2, …, v_6 and u_1, u_2, …, u_n. Define the graph G to be the complete graph on these vertices with the 3 edges v_1 v_2, v_3v_4 and v_5v_6 removed. Define the colouring χ : E(G) →[3] as follows.
* Edges of colour 1 are v_1v_3, v_3v_5, v_1v_5, v_4v_6 and v_1u_i, v_3u_i, v_5u_i for all i.
* Edges of colour 2 are v_2v_4, v_2v_5, v_4v_5, v_1v_6 and v_2u_i, v_4u_i for all i.
* Edges of colour 3 are v_2v_3, v_2v_6, v_3v_6, v_1v_4 and v_6u_i for all i.
* Edges of the form u_i u_j are coloured arbitrarily.
It is easy to check that this colouring has no covering of the vertices by two monochromatic components. Is this essentially the only way the conjecture might fail for such a graph?

Let G = K_n ∖{e_1, e_2, e_3} be the complete graph with a matching of size three omitted. Suppose that χ : E(G) → [3] is a 3-colouring of the edges such that no two monochromatic components cover G. Is such a colouring isomorphic to an example similar to the one above? What about K_2n with a perfect matching removed?

Finally, recall that one of the main contributions to the final bound in Theorem <ref> came from Lemma <ref>, and that in general the Ramsey approach of Lemma <ref> would give a much worse value. It would be interesting to study the right bounds for this problem as well.

For fixed l, what is the maximal size of a set of vertices S of G_l such that G_l[S] is a path? What about other families of graphs of bounded degree? In particular, for fixed l and d, what is the maximal size of a set of vertices S of G_l such that G_l[S] is a connected graph of degrees bounded by d?

§.§ Acknowledgements

I would like to thank Trinity College and the Department of Pure Mathematics and Mathematical Statistics of Cambridge University for their generous support. I am particularly indebted to András Gyárfás and Imre Leader for the helpful discussions concerning this paper.

Aharoni R. Aharoni, Ryser's conjecture for tripartite 3-graphs, Combinatorica 21 (2001), 1-4
GyarfasSurvey1 A. Gyárfás, Large monochromatic components in edge-colorings of graphs: a survey, Progress in Mathematics 285 (2010), 77-96
GyarfasConn A. Gyárfás, Partition coverings and blocking sets in hypergraphs (in Hungarian), Commun. Comput. Autom. Inst. Hungar. Acad. Sci. 71 (1977), 62 pp
GyarfasSurvey2 A. Gyárfás, Vertex covers by monochromatic pieces - a survey of results and problems, Discrete Math. 339 (2016), 1970-1977
Letzter Sh. Letzter, Large monochromatic triple stars in edge colourings, J. Graph Theory 80 (2015), 323-328
Lov L. Lovász, A kombinatorika minimax tételeiről (On minimax theorems in combinatorics), Matematikai Lapok 26 (1975), 209-264
Commuting L. Milićević, Commuting contractive families, Fundamenta Mathematicae 231 (2015), 225-272
Ruszinko M. Ruszinkó, Large components in r-edge-colorings of K_n have diameter at most five, J. Graph Theory 69 (2011), 337-340
Ryser J. R. Henderson, Permutation decomposition of (0-1) matrices and decomposition transversals, Ph.D. Thesis, Caltech, 1971
TuzaConn Zs. Tuza, On special cases of Ryser's conjecture, manuscript
http://arxiv.org/abs/1705.09370v1
{ "authors": [ "Luka Milićević" ], "categories": [ "math.CO" ], "primary_category": "math.CO", "published": "20170525213146", "title": "Covering complete graphs by monochromatically bounded sets" }
[email protected] Departamento de Física, Centro de Ciências Naturais e Exatas, Universidade Federal de Santa Maria, Avenida Roraima 1000, 97105-900, Santa Maria, RS, BrazilThe existence of incompatible observables constitutes one of the most prominent characteristics of quantum mechanics (QM) and can be revealed and formalized through uncertainty relations. The Heisenberg-Robertson-Schrödinger uncertainty relation (HRSUR) was proved at the dawn of quantum formalism and is ever-present in the teaching and research on QM. Notwithstanding, the HRSUR possess the so called triviality problem. That is to say, the HRSUR yields no information about the possible incompatibility between two observables if the system was prepared in a state which is an eigenvector of one of them. After about 85 years of existence of the HRSUR, this problem was solved recently by Lorenzo Maccone and Arun K. Pati. In this article, we start doing a brief discussion of general aspects of the uncertainty principle in QM and recapitulating the proof of HRSUR. Afterwards we present in simple terms the proof of the Maccone-Pati uncertainty relation, which can be obtained basically via the application of the parallelogram law and Cauchy-Schwarz inequality.Keywords: Quantum mechanics, uncertainty relations The Maccone-Pati uncertainty relation Jonas Maziero December 30, 2023 =====================================§ INTRODUCTION One can say that uncertainty is an integral part of our lives <cit.>. However, the uncertainties we face in our daily lives are frequently something associated more with our ignorance as observers than a characteristic property of the physical entities with which we interact. This scenario changes completely in situations where quantum effects are observationally important. For these systems uncertainty is a fundamental character. That is to say, we just cannot, in general, foretell what is going to happen in the future, even if we have all the information we can have about the history of the object we are describing <cit.>.For systems whose description requires the use of quantum mechanics (QM) <cit.>, we can only calculate probabilities (chances or relative frequencies) for an event to occur. This fact can be attributed to the existence, in QM, of incompatible observables (IO). Once these observables are represented by non-commuting Hermitian matrices, which as a consequence cannot share all eigenvectors, we can, via the measurement of one of them, prepare a state that is a superposition of the eigenvectors of the other observable. In this case, the uncertainty about this last observable is necessarily non-null. This is associated with a positive “width” (measured using e.g. the standard deviation or variance) of the probability distribution (PD) for its eigenvalues.If we prepare a physical system in state |ξ⟩ via the measurement of an observable Ĉ <cit.>, we can utilize the kinematic structure of QM to derive restrictions on how small can be the product or sum of the uncertainties associated with other two observables  and B̂ <cit.>. This kind of inequality, which is dubbed preparation uncertainty relation (PUR), depends on |ξ⟩ and on the regarded observables and is the main theme of this article.The goal of a PUR is to identify (and somehow quantify) the state-dependent incompatibility of two observables via the general impossibility of preparing the physical system of interest in a state for which both probability distributions (for the eigenvalues of these observables) have null variance. 
The frequent presence of this kind of uncertainty relation (UR) in QM textbooks points towards its didactic importance for the learning of the fundamentals of this theory. Moreover, UR have diverse practical applications, going from the justification for the use of a complex field in QM <cit.> to areas such as quantum cryptography <cit.> and entanglement witnessing <cit.>.There are several other relevant aspects of the uncertainty principle of QM <cit.>, and we shall mention some of them in this paragraph. In Quantum Information Science <cit.>, especially in Quantum Cryptography <cit.>, error-disturbance UR are particularly important because they impose limits on the amount of information we can obtain by making measurements on a system and on the consequent disturbance imparted to its state <cit.>. It is worth mentioning that, since, by measuring an observable to extract information about the system, we generally modify the PD of another observable, the error-disturbance UR are closely related to the UR for the joint measurement of these observables. On the other hand, the recognition that quantum correlations, such as entanglement <cit.> and discord <cit.>, can be utilized as a resource for the more efficient manipulation of information motivated the proposal and analysis of UR with quantum memories <cit.>. Here it has been shown that the constraints on the variances of IO of a system can be weakened if the observer is quantum correlated with it. Besides, state-independent entropic UR can be obtained which constrain the "entropies" of the PDs of IO <cit.>. Another important kind of UR is that involving parameters which are not represented by Hermitian operators, such as time or phase <cit.>. An example of this kind of UR is the energy-time UR, which has a fundamental role in proving limits on how fast quantum states can change with time, which in turn can be utilized to limit the efficiency of quantum information processing devices <cit.>. It is worthwhile observing that, as the majority of the UR mentioned above involve the measurement of the average of the product of two IO, ÂB̂, which is not a Hermitian operator, they are not amenable to experimental tests <cit.>. Recently a scheme has been proposed which can, in principle, make possible the experimental verification of UR involving the average value of ÂB̂ <cit.>, but such a technique has not been put to work yet.The remainder of this article is organized as follows. In Sec. <ref> we discuss the Cauchy-Schwarz inequality and its use for obtaining the UR of Heisenberg, Robertson, and Schrödinger (HRSUR). Afterwards we discuss the triviality problem of the HRSUR and prove, in Sec. <ref>, the UR of Maccone and Pati (MPUR). In contrast to the HRSUR, the MPUR leads to non-zero lower bounds for the sum of the variances of two observables whenever the system state is not a common eigenvector of the two corresponding Hermitian operators; therefore the MPUR can be seen as an improvement over the HRSUR. Finally, after presenting an example of the application of these uncertainty relations in Sec. <ref>, some final remarks are included in Sec. <ref>. § HEISENBERG-ROBERTSON-SCHRÖDINGER UNCERTAINTY RELATION AND ITS TRIVIALITY PROBLEM In view of its importance for proving the results we discuss in this article, we shall begin by recapitulating the Cauchy-Schwarz inequality (CSI).
The CSI states that for any pair of non-null vectors |ψ⟩ and |ϕ⟩ in a Hilbert space ℋ <cit.>, it follows that ⟨ψ|ψ⟩⟨ϕ|ϕ⟩≥|⟨ψ|ϕ⟩|^2,with equality obtained if and only if |ψ⟩ and |ϕ⟩ are collinear. Let us recall that, for the state spaces we deal with here, the inner product between two vectors |ψ⟩ and |ϕ⟩ is defined as: ⟨ψ|ϕ⟩=|ψ⟩^†|ϕ⟩, with x^† being the conjugate transpose of the vector (or matrix) x. We observe that a simple manner to prove the CSI is by applying the positivity of the norm, ∥|ξ⟩∥=√(⟨ξ|ξ⟩)≥0,to the vector |ξ⟩=|ψ⟩-(⟨ϕ|ψ⟩/⟨ϕ|ϕ⟩)|ϕ⟩. The condition for equality in the CSI can be inferred from the fact that ∥|ξ⟩∥=0 if and only if |ξ⟩ is the null vector.Let us see how the CSI can be used for deriving the Heisenberg-Robertson-Schrödinger uncertainty relation (HRSUR). Let Â and B̂ be two observables of a physical system prepared in the state |ξ⟩. Let ⟨X̂⟩=⟨ξ|X̂|ξ⟩ denote the average value of any operator X̂, and let us use 𝕀 for the identity operator in ℋ. Then we define the vectors|ψ⟩=(Â-⟨Â⟩𝕀)|ξ⟩|ϕ⟩=(B̂-⟨B̂⟩𝕀)|ξ⟩and substitute them in the CSI. Firstly we notice that ⟨ψ|ψ⟩=Var(Â)⟨ϕ|ϕ⟩=Var(B̂),with Var(X̂)=⟨(X̂-⟨X̂⟩𝕀)^2⟩ being the variance of X̂. We can also verify that ⟨ψ|ϕ⟩=⟨ÂB̂-⟨Â⟩⟨B̂⟩𝕀⟩ = 2^-1⟨{Â,B̂}-2⟨Â⟩⟨B̂⟩𝕀⟩+2^-1⟨[Â,B̂]⟩,where [Â,B̂]=ÂB̂-B̂Â{Â,B̂}=ÂB̂+B̂Âare, respectively, the commutator and anti-commutator of Â and B̂. As {Â,B̂} and [Â,B̂] are, respectively, Hermitian and anti-Hermitian operators, their mean values are, respectively, purely real and purely imaginary numbers. So, considering that |⟨ψ|ϕ⟩|^2=(Re⟨ψ|ϕ⟩)^2+(Im⟨ψ|ϕ⟩)^2,after some manipulations we obtain the HRSUR <cit.> [Even though this inequality is usually dubbed Heisenberg's uncertainty relation, here we prefer to give credit also to Robertson and Schrödinger, who obtained it in its more general forms. An alternative proof of the HRSUR can be found in Ref. <cit.>. ]: Var(Â)Var(B̂)≥(CovQ(Â,B̂))^2+2^-2|⟨[Â,B̂]⟩|^2=T_1,whereCovQ(Â,B̂)=2^-1(Cov(Â,B̂)+Cov(B̂,Â))is the quantum covariance, with Cov(X̂,Ŷ)=⟨X̂Ŷ⟩-⟨X̂⟩⟨Ŷ⟩ being the covariance between the observables X̂ and Ŷ. If Â and B̂ are compatible observables, i.e., if [Â,B̂]=0̂, then we shall have CovQ(Â,B̂)=Cov(Â,B̂).Let us look now at the triviality problem of the HRSUR. Without loss of generality, let's suppose that the system is prepared in a state which coincides with an eigenvector of Â, that is to say, |ξ⟩=|a_j⟩ with Â|a_j⟩=a_j|a_j⟩ and a_j∈ℝ. In this case it is not difficult to verify that Var(Â)=CovQ(Â,B̂)=⟨[Â,B̂]⟩=0. Therefore the HRSUR gives 0·Var(B̂)≥0.So, in this case, the HRSUR does not provide any information about the possible incompatibility between the observables Â and B̂. In the next section we shall present the proof of an uncertainty relation which avoids the triviality problem, witnessing the incompatibility of two observables even when the system is prepared in one of their eigenvectors. § MACCONE-PATI UNCERTAINTY RELATION In contrast to the HRSUR, the Maccone-Pati uncertainty relation (MPUR), which shall be proved in this section, gives lower bounds for the sum of the variances associated with two observables <cit.>:Var(Â)+Var(B̂)≥max(L_1,L_2),with L_1=2^-1|⟨ξ|(Â±B̂)|ξ_⊥⟩|^2,L_2=± i⟨[Â,B̂]⟩+|⟨ξ|(Â± iB̂)|ξ_⊥⟩|^2,where |ξ_⊥⟩ is any normalized vector orthogonal to the system state |ξ⟩. The signs in Eqs. (<ref>) and (<ref>) are chosen, respectively, to maximize L_1 and L_2. Of course, since the MPUR holds for any |ξ_⊥⟩, we should search for the |ξ_⊥⟩ yielding the biggest lower bound for the sum of the variances. A numerical check of these bounds is sketched below.
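Before turning to the proof, the MPUR can be checked numerically. The sketch below (our addition, assuming Python with numpy; the random Hermitian observables, the seed, and the qubit dimension are illustrative choices) samples a state and two observables and verifies Var(Â)+Var(B̂)≥max(L_1,L_2), maximizing over the sign choices; for a qubit, |ξ_⊥⟩ is unique up to a phase, on which L_1 and L_2 do not depend.

```python
import numpy as np

rng = np.random.default_rng(7)

def rand_herm(d):
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

A, B = rand_herm(2), rand_herm(2)
xi = rng.normal(size=2) + 1j * rng.normal(size=2)
xi /= np.linalg.norm(xi)
xi_perp = np.array([-xi[1].conj(), xi[0].conj()])  # orthogonal state of the qubit

mean = lambda O: (xi.conj() @ O @ xi).real
var = lambda O: mean(O @ O) - mean(O) ** 2
lhs = var(A) + var(B)

L1 = max(0.5 * abs(xi.conj() @ (A + s * B) @ xi_perp) ** 2 for s in (1, -1))
comm = xi.conj() @ (A @ B - B @ A) @ xi  # mean of the commutator, purely imaginary
L2 = max((s * 1j * comm).real + abs(xi.conj() @ (A + s * 1j * B) @ xi_perp) ** 2
         for s in (1, -1))

print(lhs >= max(L1, L2) - 1e-12, lhs, L1, L2)
```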
It is important to note that the lower bounds L_1 and L_2 will be equal to zero only if the system state, |ξ⟩, is a common eigenvector of both observables Â and B̂. It is worthwhile mentioning that the MPUR was already verified experimentally for the special case of observables represented by unitary operators <cit.>.It is worthwhile also mentioning that the novelty of the MPUR is not simply the use of the sum of variances instead of their product. One can easily obtain an HRSUR involving the sum of variances by using (σ_A-σ_B)^2≥0, with the standard deviation of the observable X̂ defined as σ_X=√(Var(X̂)). This inequality leads to Var(Â)+Var(B̂)≥2σ_Aσ_B≥|⟨[Â,B̂]⟩|=T_2,where the last inequality is a particular case of the HRSUR, Eq. (<ref>). But one can verify that if the system state is an eigenvector of one of the observables, then the uncertainty relation of Eq. (<ref>) also suffers from the triviality problem.§.§ Proof of the first lower bound in the MPUR For the sake of proving the MPUR, we will make use of the parallelogram law. This rule is depicted in Fig. <ref> and states that for any two vectors |ψ⟩ and |ϕ⟩ in the Hilbert space ℋ, the following equality holds: 2(∥|ψ⟩∥^2+∥|ϕ⟩∥^2)=∥(|ψ⟩+|ϕ⟩)∥^2+∥(|ψ⟩-|ϕ⟩)∥^2. Let us insert the vectors defined in Eq. (<ref>) in the parallelogram law, Eq. (<ref>). As ∥|ψ⟩∥^2=Var(Â) and ∥|ϕ⟩∥^2=Var(B̂) we shall have Var(Â)+Var(B̂)=2^-1(∥(|ψ⟩+|ϕ⟩)∥^2+∥(|ψ⟩-|ϕ⟩)∥^2) ≥2^-1∥(|ψ⟩±|ϕ⟩)∥^2=2^-1(⟨ψ|±⟨ϕ|)(|ψ⟩±|ϕ⟩)⟨ξ_⊥|ξ_⊥⟩ ≥2^-1|(⟨ψ|±⟨ϕ|)|ξ_⊥⟩|^2=2^-1|⟨ξ|(Â±B̂)|ξ_⊥⟩-(⟨Â⟩±⟨B̂⟩)⟨ξ|ξ_⊥⟩|^2=2^-1|⟨ξ|(Â±B̂)|ξ_⊥⟩|^2=L_1.We obtained the inequality in Eq. (<ref>) from the equality in Eq. (<ref>) by applying the positivity of the norm. We get from (<ref>) to (<ref>) and from (<ref>) to (<ref>) using a normalized vector |ξ_⊥⟩ which is orthogonal to the system state |ξ⟩. In its turn, the inequality of Eq. (<ref>) is a consequence of the Cauchy-Schwarz inequality, Eq. (<ref>). The signs in the equations above depend on whether ∥(|ψ⟩+|ϕ⟩)∥^2 or ∥(|ψ⟩-|ϕ⟩)∥^2 is used when going from Eq. (<ref>) to Eq. (<ref>) and are chosen to maximize L_1.§.§ Proof of the second lower bound in the MPUR By applying the same procedures of the last sub-section, we can verify that ∥(|ψ⟩± i|ϕ⟩)∥^2= (⟨ψ|∓ i⟨ϕ|)(|ψ⟩± i|ϕ⟩) =∥|ψ⟩∥^2+∥|ϕ⟩∥^2± i(⟨ψ|ϕ⟩-⟨ϕ|ψ⟩) =∥|ψ⟩∥^2+∥|ϕ⟩∥^2± i⟨[Â,B̂]⟩and ∥(|ψ⟩± i|ϕ⟩)∥^2= (⟨ψ|∓ i⟨ϕ|)(|ψ⟩± i|ϕ⟩)⟨ξ_⊥|ξ_⊥⟩≥|(⟨ψ|∓ i⟨ϕ|)|ξ_⊥⟩|^2 = |⟨ξ|(Â∓ iB̂)|ξ_⊥⟩-(⟨Â⟩∓ i⟨B̂⟩)⟨ξ|ξ_⊥⟩|^2 = |⟨ξ|(Â∓ iB̂)|ξ_⊥⟩|^2.Thus, if we utilize i|ϕ⟩ in place of |ϕ⟩ in the parallelogram law, as ∥i|ϕ⟩∥=∥|ϕ⟩∥, we get 2(Var(Â)+Var(B̂))≥ Var(Â)+Var(B̂)± i⟨[Â,B̂]⟩+|⟨ξ|(Â± iB̂)|ξ_⊥⟩|^2,from which we promptly obtain the lower bound L_2 of Eq. (<ref>). The sign in Eq. (<ref>) is determined by which of the terms ∥(|ψ⟩± i|ϕ⟩)∥^2 in Eq. (<ref>) the inequality (<ref>) is applied to, and is chosen such that L_2 is maximized. § EXAMPLE: COMPLEMENTARITY FOR A QUBIT In this section we look at a two-level system, a qubit, prepared in the state |ξ⟩=2^-1/2(|0⟩+e^iα|1⟩),with |0⟩ and |1⟩ being eigenvectors of the Pauli matrix Ẑ=|0⟩⟨0|-|1⟩⟨1| and α∈[0,2π). Of course, everything we say in this section holds for the popular example of a spin 1/2 particle measured with Stern-Gerlach apparatuses <cit.>. We consider the application of the HRSUR and MPUR to witness the well-known incompatibility between the observables Ẑ and X̂=|0⟩⟨1|+|1⟩⟨0|. One can verify that for the state |ξ⟩: ⟨Ẑ⟩=0, ⟨X̂⟩=cosα, and ⟨ẐX̂⟩=-⟨X̂Ẑ⟩=isinα. With this we have CovQ(X̂,Ẑ)=0 and |⟨[X̂,Ẑ]⟩|^2=2^2sin^2α. The two lower bounds in the HRSUR of Eqs.
(<ref>) and (<ref>) are then given by T_1=2^-2T_2^2=sin^2α.Taking into account that for this example there is, up to a global phase, only one normalized vector orthogonal to |ξ⟩: |ξ_⊥⟩=2^-1/2(|0⟩-e^iα|1⟩), after some simple calculations, we obtain the lower bounds for the MPUR, Eqs. (<ref>) and (<ref>): L_2=2L_1=1+sin^2α.These four lower bounds for the variances of X̂ and Ẑ are shown in Fig. <ref>. We see that even though the qualitative behavior of the curves is generally similar, there are important quantitative differences for the phases α={0,π,2π}. For these values of α, the system state, |ξ⟩, is an eigenvector of X̂ and, in contrast to the MPUR, the HRSUR, due to the triviality problem, is not capable of indicating that the width of the probability distribution for the eigenvalues of Ẑ is non-null. § FINAL REMARKS In this article, after discussing some aspects of the uncertainty principle of quantum mechanics (QM), we presented a didactic proof of the Maccone-Pati uncertainty relation and exemplified its application to a two-level system. It is a curious fact that a relevant restriction within QM (as is the Heisenberg-Robertson-Schrödinger uncertainty relation) has an important problem which, although probably long noticed by several teachers and researchers in the area, was solved only a long time after its conception. Thus we hope that the simple derivation of the MPUR we presented in this article will further motivate its inclusion in QM courses.This work was supported by CNPq, processes 441875/2014-9 and 303496/2014-2, by the Instituto Nacional de Ciência e Tecnologia de Informação Quântica (INCT-IQ), process 2008/57856-6, and by CAPES, process 6531/2014-08.10 Mlodinow L. Mlodinow, The Drunkard's Walk: How Randomness Rules our Lives (Pantheon Books, New York, 2008).Bell M. Bell, K. Gottfried, and M. Veltman, John Bell on The Foundations of Quantum Mechanics (World Scientific, Singapore, 2001).Aspect A. Aspect, Viewpoint: Closing the door on Einstein and Bohr's quantum debate, Physics 8, 123 (2015).Hensen B. Hensen, H. Bernien, A.E. Dréau, A. Reiserer, N. Kalb, M.S. Blok, J. Ruitenberg, R.F.L. Vermeulen, R.N. Schouten, C. Abellán, W. Amaya, V. Pruneri, M.W. Mitchell, M. Markham, D.J. Twitchen, D. Elkouss, S. Wehner, T.H. Taminiau, and R. Hanson, Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres, Nature 526, 682 (2015).Giustina M. Giustina, M.A.M. Versteegh, S. Wengerowsky, J. Handsteiner, A. Hochrainer, K. Phelan, F. Steinlechner, J. Kofler, J.-Å. Larsson, C. Abellán, W. Amaya, V. Pruneri, M.W. Mitchell, J. Beyer, T. Gerrits, A.E. Lita, L.K. Shalm, S.W. Nam, T. Scheidl, R. Ursin, B. Wittmann, and A. Zeilinger, Significant-loophole-free test of Bell's theorem with entangled photons, Phys. Rev. Lett. 115, 250401 (2015).Shalm L.K. Shalm, E. Meyer-Scott, B.G. Christensen, P. Bierhorst, M.A. Wayne, M.J. Stevens, T. Gerrits, S. Glancy, D.R. Hamel, M.S. Allman, K.J. Coakley, S.D. Dyer, C. Hodge, A.E. Lita, V.B. Verma, C. Lambrocco, E. Tortorici, A.L. Migdall, Y. Zhang, D.R. Kumor, W.H. Farr, F. Marsili, M.D. Shaw, J.A. Stern, C. Abellán, W. Amaya, V. Pruneri, T. Jennewein, M.W. Mitchell, P.G. Kwiat, J.C. Bienfang, R.P. Mirin, E. Knill, and S.W. Nam, Strong loophole-free test of local realism, Phys. Rev. Lett. 115, 250402 (2015).Griffiths D.J. Griffiths, Mecânica Quântica (Pearson Education, São Paulo, 2011).Sakurai J.J. Sakurai and J. Napolitano, Mecânica Quântica Moderna (Bookman, Porto Alegre, 2013).Maziero_rbef15 J. Maziero, Understanding von Neumann's entropy, Rev.
Bras. Ensino Fís. 37, 1314 (2015).Maziero_rbef16 J. Maziero, The Kraus representation for the dynamics of open quantum systems, Rev. Bras. Ensino Fís. 38, e2307 (2016).Heisenberg W. Heisenberg, The actual content of quantum theoretical kinematics and mechanics, Zeitschrift für Physik 43, 172 (1927).Robertson H.P. Robertson, The uncertainty principle, Phys. Rev. 34, 163 (1929).Schrodinger E. Schrödinger, About Heisenberg uncertainty relation, Proceedings of the Prussian Academy of Sciences, Physics-Mathematical Section 14, 296 (1930). The English translation, by A. Angelow and M.-C. Batoni, can be obtained in arXiv:quant-ph/9903100.Maccone-Pati L. Maccone and A.K. Pati, Stronger uncertainty relations for all incompatible observables, Phys. Rev. Lett. 113, 260401 (2014).Maczynski P.J. Lahti and M.J. Maczynski, Heisenberg inequality and the complex field in quantum mechanics, J. Math. Phys. 28, 1764 (1987).Koashi M. Koashi, Unconditional security of quantum key distribution and the uncertainty principle, J. Phys.: Conf. Ser. 36, 98 (2006).Takeuchi H.F. Hofmann and S. Takeuchi, Violation of local uncertainty relations as a signature of entanglement, Phys. Rev. A 68, 032103 (2003).Guhne O. Gühne, Characterizing entanglement via uncertainty relations, Phys. Rev. Lett. 92, 117903 (2004).Busch-1 P. Busch, T. Heinonen, and P. Lahti, Heisenberg's uncertainty principle, Phys. Rep. 452, 155 (2007).Nielsen M.A. Nielsen and I.L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, 2000).Wilde M.M. Wilde, Quantum Information Theory (Cambridge University Press, Cambridge, 2013).Ozawa M. Ozawa, Uncertainty relations for joint measurements of noncommuting observables, Phys. Lett. A 320, 367 (2004).Branciard C. Branciard, Error-tradeoff and error-disturbance relations for incompatible quantum measurements, PNAS 110, 6742 (2013).Wilde-1 F. Buscemi, M.J.W. Hall, M. Ozawa, and M.M. Wilde, Noise and disturbance in quantum measurements: An information-theoretic approach, Phys. Rev. Lett. 112, 050401 (2014).Horodecki R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, Quantum entanglement, Rev. Mod. Phys. 81, 865 (2009).Davidovich L. Aolita, F. de Melo, and L. Davidovich, Open-system dynamics of entanglement, Rep. Prog. Phys. 78, 042001 (2015).Maziero_ijqi L.C. Céleri, J. Maziero, and R.M. Serra, Theoretical and experimental aspects of quantum discord and related measures, Int. J. Quantum Inf. 09, 1837 (2011).Vedral K. Modi, A. Brodutch, H. Cable, T. Paterek, and V. Vedral, The classical-quantum boundary for correlations: discord and related measures, Rev. Mod. Phys. 84, 1655 (2012).Renner M. Berta, M. Christandl, R. Colbeck, J.M. Renes, and R. Renner, The uncertainty principle in the presence of quantum memory, Nat. Phys. 6, 659 (2010).Wilde-2 A.K. Pati, M.M. Wilde, A.R.U. Devi, A.K. Rajagopal, and Sudha, Quantum discord and classical correlation can tighten the uncertainty principle in the presence of quantum memory, Phys. Rev. A 86, 042105 (2012).Wehner S. Wehner and A. Winter, Entropic uncertainty relations—a survey, New J. Phys. 12, 025009 (2010).Caves S.L. Braunstein, C.M. Caves, and G.J. Milburn, Generalized uncertainty relations: Theory, examples, and Lorentz invariance, Ann. Phys. 247, 135 (1996).Frey M.R. Frey, Quantum speed limits—primer, perspectives, and potential future directions, Quantum Inf. Process. 15, 3919 (2016).Busch P. Busch and N. Stevens, Direct tests of measurement uncertainty relations: What it takes, Phys. Rev. Lett.
114, 070402 (2015).Vedral1 F. Buscemi, M. Dall'Arno, M. Ozawa, and V. Vedral, Direct observation of any two-point quantum correlation function, arXiv:1312.4240.Vedral2 F. Buscemi, M. Dall'Arno, M. Ozawa, and V. Vedral, Universal optimal quantum correlator, Int. J. Quantum Inf. 12, 1560002 (2014).Rigolin G. Rigolin, A simple derivation of the Schrödinger uncertainty relation, Eur. J. Phys. 36, 065007 (2015).Xue K. Wang, X. Zhan, Z. Bian, J. Li, Y. Zhang, and P. Xue, Experimental investigation of the stronger uncertainty relations for all incompatible observables, Phys. Rev. A 93, 052108 (2016).
http://arxiv.org/abs/1705.09139v2
{ "authors": [ "Jonas Maziero" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170525120508", "title": "The Maccone-Pati uncertainty relation" }
Gravitational-Wave Project Office, Optical and Infrared Astronomy Division, National Astronomical Observatory of Japan, Mitaka, Tokyo 181-8588, Japan An extension of the input-output relation for a conventional Michelson interferometric gravitational-wave detector is carried out to treat an arbitrary coherent state for the injected optical beam. This extension is one of the necessary steps toward clarifying the relation between conventional gravitational-wave detectors and a simple model of a gravitational-wave detector inspired by weak measurements in [A. Nishizawa, Phys. Rev. A 92 (2015), 032123.]. The derived input-output relation describes not only a conventional Michelson-interferometric gravitational-wave detector but also the situation of weak measurements. As a result, we may say that a conventional Michelson gravitational-wave detector already includes the essence of the weak-value amplification as the reduction of the quantum noise from the light source through the measurement at the dark port. Gravitational-wave detector, weak-value amplification § INTRODUCTION Weak measurements and their weak-value amplifications have been actively discussed by many researchers since their proposal by Aharonov, Albert, and Vaidman in 1988 <cit.>. In particular, the weak-value amplification has been regarded as one of the techniques that have been used in a variety of experimental settings to permit the precise measurement of small parameters <cit.>. This paper is motivated by this research on precise measurements in quantum theory.As is well known, one of the typical examples of precise measurements is gravitational-wave detection. Recently, gravitational waves were directly observed by the Laser Interferometer Gravitational-wave Observatory (LIGO) <cit.> and gravitational-wave astronomy has begun. To develop this gravitational-wave astronomy as a precise science, improvements of the detector sensitivity are necessary. So, it is important to continue the research and development of the science of gravitational-wave detectors together with the source sciences of gravitational waves. This paper is also based on such research activities.Although some researchers have already commented that the weak-value amplification might be applicable to gravitational-wave detectors, we have been discussing this issue seriously. The idea of weak measurements also offers a new viewpoint on quantum measurement theory together with an amplification effect. Discussing the application of this idea to gravitational-wave detectors not only leads us to a possibility of exploring new ideas for gravitational-wave detection but also gives us a good opportunity to discuss what we are doing in conventional gravitational-wave detectors from a different viewpoint of quantum measurement theory. Therefore, it is worthwhile to discuss, from many points of view, whether or not the idea of weak measurements is applicable to gravitational-wave detectors. In particular, the comparison with conventional gravitational-wave detectors is an important issue in such discussions.A simple realization of the weak-value amplification is similar to gravitational-wave detectors in many respects. The basis of conventional gravitational-wave detectors is the Michelson interferometer. The arm lengths of this Michelson interferometer are tuned so that one of the ports of the interferometer becomes the "dark port," as we will explain in Sec. <ref>.
Due to the propagation of gravitational waves, photons leak to the "dark port." The measurement of the photon number at the "dark port" corresponds to the post-selection in weak measurements. This setup is regarded as a measurement of the effective two-level system of the photon. For this reason, we have concentrated on research on weak measurements for two-level systems <cit.>. In particular, a weak-value amplification in a shot-noise-limited interferometer was discussed <cit.>, since the shot noise is one of the important noise sources in gravitational-wave detectors.Recently, Nishizawa <cit.> reported his arguments on the radiation-pressure noise in a weak-measurement-inspired gravitational-wave detector. This radiation-pressure noise is also an important noise source in gravitational-wave detectors. He also discussed a "standard quantum limit," which is a kind of sensitivity limit of the detector, and proposed an idea to break his standard quantum limit. Details of the detector model inspired by weak measurements in Refs. <cit.> will also be explained in Sec. <ref>. In this detector model, a short optical pulse beam is used to measure the mirror displacement due to gravitational waves, while a continuous monochromatic laser is used for the continuous measurement of the mirror displacement in conventional gravitational-wave detectors. This short-pulse injection is one of the ideas in weak measurements proposed by Aharonov et al. <cit.> and is the main difference between the model inspired by weak measurements in Refs. <cit.> and conventional gravitational-wave detectors. Furthermore, in Ref. <cit.>, the arguments are restricted to the situation where the mirror displacement is regarded as constant in time, while, in conventional gravitational-wave detectors, we have to monitor the motion of the mirror displacement with the continuous laser. Due to this restriction, we cannot directly compare the results in Ref. <cit.> with those in conventional gravitational-wave detectors, and the meaning of the "standard quantum limit" in Ref. <cit.> is not so clear.Monitoring the time-evolution of the mirror displacement is important in gravitational-wave detection, because it corresponds to monitoring the time-evolution of gravitational waves. Expected gravitational-wave signals are in the frequency range from 10 Hz to 10 kHz. When we apply the detector model in Ref. <cit.>, we may inject femtosecond pulses into the interferometer, and a sufficiently large number of pulses is used to measure 10 kHz signals. Since we want to continuously measure the time-evolution of the gravitational-wave signal in the range 10 Hz-10 kHz, we have to continuously evaluate the averaged data of many pulses. To accomplish this averaged measurement, a different treatment of the detector in Ref. <cit.> is required. In conventional gravitational-wave detectors, the response of the detector to gravitational waves is discussed through the input-output relation of the interferometer in the frequency domain, in the frequency range of gravitational waves <cit.>. Therefore, to compare the results with conventional gravitational-wave detectors, it is natural to discuss the input-output relation for the weak-measurement-inspired detector model in Ref.
<cit.>, taking into account the time-dependence of the mirror displacement.In this paper, we regard the Fourier transformation of the optical fields as an averaged variable over many pulses, with a time scale which covers the appropriate frequency range, and derive the input-output relation for the model in Ref. <cit.>. The important motivation of this extension is the comparison with conventional gravitational-wave detectors. To carry out this extension, we have to consider at least two issues. The first issue is in the formulation to describe the input-output relation for the interferometers. In conventional gravitational-wave detectors, the input-output relations are always derived through the two-photon formulation developed by Caves and Schumaker <cit.>. However, it is not clear whether or not this two-photon formulation can be applied to the situation of weak measurements, because the aim of this formulation is to discuss the sideband fluctuations at the frequencies ω_0±Ω, where ω_0 is the frequency of the monochromatic laser and Ω is the frequency of fluctuations around this monochromatic laser. Furthermore, we consider the situation ω_0≫Ω in the two-photon formulation. It is not clear whether or not the situation ω_0≫Ω is appropriate for the model in Ref. <cit.>. We also note that there is little literature in which the input-output relation for gravitational-wave detectors is derived without the two-photon formulation. Therefore, we have to re-derive the input-output relations of gravitational-wave detectors from the starting point, without using the two-photon formulation.The second issue is the extension of the photon state from the light source in the interferometer. In conventional gravitational-wave detectors, the optical field from the light source is in the coherent state whose complex amplitude is given by the δ-function in the frequency domain. On the other hand, the photon state from the light source in the model of Ref. <cit.> is also a coherent state, but its complex amplitude has a broadband support in the frequency domain, which corresponds to an optical pulse. Therefore, we extend the input-output relation for conventional gravitational-wave detectors to the situation where the state of the injected light source is an arbitrary coherent state. This extension is the main purpose of this paper. As a result of this extension, we can treat the situation of conventional gravitational-wave detectors and that of the model in Ref. <cit.> with the same input-output relation and compare these models. Furthermore, we can easily see that conventional gravitational-wave detectors already and implicitly include the essence of the weak-value amplification as the noise reduction from the light source through the measurement at the dark port.This paper is organized as follows. In Sec. <ref>, we explain the setup of a simple conventional Michelson gravitational-wave detector and its weak-measurement-inspired version discussed in Ref. <cit.>. In Sec. <ref>, we derive the generalized input-output relation which is applicable to the situation where the injected optical beam is in an arbitrary coherent state; this derived input-output relation is the main result of this paper. In Sec. <ref>, we re-derive the conventional input-output relation from our extended input-output relation of Sec. <ref>, which indicates that our extended input-output relation is a natural extension of the input-output relation for conventional gravitational-wave detectors. In Sec.
<ref>, we discuss the situation of the weak-value amplification in the model of Ref. <cit.> on the basis of the input-output relation derived in Sec. <ref>, and show that it actually realizes the weak-value amplification. The final section, Sec. <ref>, is devoted to a summary and discussion, which includes the comparison of the model in Ref. <cit.> and the conventional Michelson gravitational-wave detector. § MICHELSON WEAK MEASUREMENT SETUP In this section, we explain the simplest conventional Michelson interferometric gravitational-wave detector and its weak-measurement-inspired version discussed in Ref. <cit.>. The interferometer setup of these two gravitational-wave detectors is depicted in Fig. <ref>. In the setup depicted in Fig. <ref>, the optical beam from the light source is injected into the interferometer and reaches the central beam splitter. The central beam splitter separates the optical beam into two paths. We denote these paths as the x-arm and the y-arm, respectively. The separated optical beams propagate along the x- and y-arms, reach the end-mirrors, and are reflected back to the beam splitter by these end-mirrors. At the central beam splitter, a part of the reflected beams is returned to the port at which the light source exists. We call this port the "symmetric port." The other part of the beam goes to the port at which the photo-detector is prepared, as depicted in Fig. <ref>. We call this port the "anti-symmetric port."To regard the setup in Fig. <ref> as a gravitational-wave detector, each end-mirror, called the x-end mirror and the y-end mirror, respectively, undergoes free-falling motion in the longitudinal direction of the optical-beam propagation. In general relativity, "free-falling motions" are called geodesic motions. The geodesic distances from the beam splitter to the end-mirrors are both tuned to be nearly L. We apply a proper reference frame <cit.> whose center is the central beam splitter. When gravitational waves propagate through this interferometer, the geodesic distances from the beam splitter to each end-mirror are slightly changed due to the tidal force of the gravitational waves. In the proper reference frame, these tiny changes are represented by X̂_x and X̂_y as depicted in Fig. <ref>. Through this setup of the interferometer, we measure the changes X̂_x and X̂_y due to the gravitational-wave propagation with the photo-detector at the anti-symmetric port. Then, this setup is regarded as a gravitational-wave detector. It is important to note that, if additional noises other than the gravitational-wave signals are included in these displacements X̂_x and X̂_y, we cannot distinguish the gravitational-wave signals from these additional noises, because we measure the gravitational-wave signal only through the mirror displacements X̂_x and X̂_y. This setup is common to both the conventional Michelson gravitational-wave detector and the model discussed in Ref. <cit.>. §.§ Conventional Michelson gravitational-wave detectorIn the conventional gravitational-wave detector, L is chosen so that there is no photon leakage at the anti-symmetric port (the phase offset θ=0 in Fig. <ref>) when there is no gravitational-wave propagation <cit.>. For this reason, the anti-symmetric port for the photo-detector is usually called the "dark port." On the other hand, the symmetric port is called the "bright port." As mentioned above, the differential motion X̂_x-X̂_y induced by the gravitational waves leads to the leakage of photons to the dark port.
This is the signal of the gravitational-wave detection.In the conventional Michelson interferometric gravitational-wave detector, the state of the electric field from the light source is the monochromatic continuous laser. In quantum theory, this state is characterized by a coherent state with a δ-function complex amplitude at the carrier frequency ω_0 in the frequency domain. §.§ Weak-measurement-inspired versionNow, we explain the gravitational-wave detector in Fig. <ref> from the viewpoint of the standard explanation of weak measurements <cit.>. The photon states propagating along the x- and y-arms are denoted by |x⟩ and |y⟩, respectively, and the system to be measured in quantum measurement theory is the which-path information, which is a two-level system spanned by the basis {|x⟩,|y⟩}. The difference X̂_x-X̂_y, which includes the gravitational-wave signal, is regarded as the interaction strength between the system and the meter variable in quantum measurement theory. We regard that this interaction affects the photon state at the time t=t_0 of the reflection at the end mirrors. The meter variable in this setup is the frequency of the photon in the interferometer.In contrast to the conventional Michelson gravitational-wave detector, we introduce the relative phase offset ±θ/2 in each arm <cit.> in the weak-measurement version of this detector. This introduction of the phase offset is inspired by the original idea of weak measurements <cit.> as explained in Ref. <cit.>. The weak-value amplification occurs when this relative phase offset approaches zero, as discussed in Ref. <cit.>. Furthermore, we assume that the state of the electric field from the light source is a coherent state whose complex amplitude in the frequency domain has a broadband support, while the δ-function complex amplitude at the carrier frequency ω_0 is used in the conventional gravitational-wave detectors. In other words, in the weak-measurement version, pulsed light is injected into the interferometer instead of the monochromatic laser. This is due to the fact that the meter variable to observe the interaction strength X̂_x-X̂_y is the frequency distribution of the photon, and the variance of the meter variable should be large in the original idea of weak measurements <cit.>.Due to the above phase offset θ, the initial state of the photon is given by|ψ_i⟩ = 1/√(2)( e^iθ/2 |y⟩ + e^-iθ/2|x⟩) .This state corresponds to the photon state propagating from the central beam splitter to the end mirrors. On the other hand, the initial state of the photon meter variable is given by|Φ⟩ = ∫ dp Φ(p) |p⟩where p is the momentum, or equivalently the photon frequency in natural units c=1. The pre-selected state for the total system is |ψ_i⟩|Φ⟩. In the situation where the interaction strength X̂_x-X̂_y=: -g is almost constant, the reflection at the end mirrors at the moment t=t_0 changes the state of photons through the interaction HamiltonianĤ = gδ(t-t_0) Â⊗ p,where Â is the which-path operator Â := |y⟩⟨ y| - |x⟩⟨ x|.After the interaction (<ref>), we perform the post-selection of the which-path information|ψ_f⟩ = 1/√(2)(|y⟩-|x⟩).This corresponds to the detection of the photon at the anti-symmetric port of the interferometer depicted in Fig.
<ref>.The weak value for this measurement model is given byA_w := ⟨ψ_f|Â|ψ_i⟩/⟨ψ_f|ψ_i⟩ = - i cot(θ/2) .The final state of the output photon after the post-selection is given by|Φ'⟩ := ∫ dp Φ'(p)|p⟩ = ⟨ψ_f|e^-igÂ⊗ p|ψ_i⟩|Φ⟩= ⟨ψ_f|ψ_i⟩∫ dp Φ(p)|p⟩ (1 - i A_w gp) + O(g^2) .Evaluating the expectation value of the momentum p under this final state, we obtain⟨ p⟩' - ⟨ p⟩∼ 2 g Im(A_w)(⟨ p^2⟩-⟨ p⟩^2),since the weak value (<ref>) is purely imaginary. If we take the standpoint that we want to measure the interaction strength g:=X̂_y-X̂_x as a gravitational-wave detector, Eq. (<ref>) shows that the output is proportional to g Im(A_w)∝ g cot(θ/2). If we consider the situation θ≪ 1, the interaction strength g:=X̂_y-X̂_x can be measured with the large amplification factor ∼ 1/θ. This is the original argument of the weak-value amplification in the case of an imaginary weak value.The above explanation can easily be extended to multiple photons <cit.>. The probability distribution for a single photon at the output is given through Eq. (<ref>) asρ(ω) := ⟨ω|Φ'⟩⟨Φ'|ω⟩/⟨Φ'|Φ'⟩ .When the total photon number is N_out, the photon-number distribution is simply given byn(ω) = N_outρ(ω).This implies that the probability distribution ρ(ω) is regarded as the normalized photon number distribution f(ω) in the frequency domain defined by <cit.>f(ω) := n(ω)/N_out = n(ω)/∫_0^∞ dω' n(ω').In the Heisenberg picture, the output photon number is given by the expectation value of the number operator n̂(ω)=b̂^†(ω)b̂(ω) through the output annihilation operator b̂(ω) which is introduced in Fig. <ref>. This will be discussed in Sec. <ref> after deriving the input-output relation for the interferometer setup which is applicable to the situation of weak measurements. In the understanding of conventional gravitational-wave detectors, the input-output relation for the interferometer plays a crucial role <cit.>. This input-output relation is based on the quantum field theory of the photon field. However, the weak-measurement-inspired version is not described by the conventional input-output relation of gravitational-wave detectors. In this paper, we want to discuss these two typical situations within the same mathematical framework. To carry this out, we have to extend the conventional input-output relation to the situation where the coherent state from the light source has an arbitrary complex amplitude in the frequency domain, as shown in the next section. § EXTENSION OF INPUT-OUTPUT RELATION TO ARBITRARY COHERENT STATE In this section, we derive an extension of the input-output relation of the Michelson gravitational-wave detector, which is depicted in Fig. <ref>, to an arbitrary coherent-state light source in terms of the one-photon formulation. In much of the literature on gravitational-wave detectors, the two-photon formulation developed by Caves and Schumaker <cit.> is used. However, as emphasized in Sec. <ref>, it is not clear whether or not this two-photon formulation is applicable to the situation of the detector model in Ref. <cit.>. Therefore, we reexamine the derivation of the input-output relation from the starting point. This section is the main ingredient of this paper.In Sec. <ref>, we describe the notation of the electric field in the interferometer. In Sec. <ref>, we derive the input-output relation of the interferometer in which the photon state from the light source is an arbitrary coherent state. In Sec. <ref>, we summarize remarks on the result in this section.
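Before developing the field-theoretic treatment, the single-photon weak-value amplification of the previous section can be checked numerically. The following sketch (our addition, assuming Python with numpy; the values θ=0.2 and g=10^-3 and the Gaussian pointer are illustrative) reproduces A_w = -i cot(θ/2) and the pointer shift 2g Im(A_w)(⟨p^2⟩-⟨p⟩^2) from the exact post-selected amplitude.

```python
import numpy as np

theta, g = 0.2, 1e-3                   # phase offset and small interaction strength
# basis order (|y>, |x>)
psi_i = np.array([np.exp(1j * theta / 2), np.exp(-1j * theta / 2)]) / np.sqrt(2)
psi_f = np.array([1.0, -1.0]) / np.sqrt(2)      # post-selection at the dark port
A = np.diag([1.0, -1.0])                        # which-path operator |y><y|-|x><x|

A_w = (psi_f.conj() @ A @ psi_i) / (psi_f.conj() @ psi_i)
print(A_w, -1j / np.tan(theta / 2))             # both equal -i*cot(theta/2)

p = np.linspace(-20, 20, 4001); dp = p[1] - p[0]
Phi = np.exp(-p**2 / 4)                         # Gaussian pointer with Var(p) = 1
# exact post-selected pointer amplitude <psi_f| exp(-i g A p) |psi_i> Phi(p)
amp = (psi_f[0].conj() * np.exp(-1j * g * p) * psi_i[0]
       + psi_f[1].conj() * np.exp(+1j * g * p) * psi_i[1]) * Phi
rho = np.abs(amp)**2
rho /= (rho * dp).sum()
shift = (p * rho * dp).sum()                    # <p>' - <p>, with <p> = 0 here
print(shift, 2 * g * A_w.imag)                  # agrees to first order in g
```

For θ≪1 the printed shift grows as cot(θ/2)∼2/θ, which is the amplification discussed above.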
§.§ Electric field notationAs is well known, the electric field operator at time t and at position z along the propagation direction in interferometers is described byÊ_a(t-z)= Ê^(+)_a(t-z) + Ê^(-)_a(t-z) , Ê_a^(-)(t-z)= [Ê_a^(+)(t-z)]^†,Ê^(+)_a(t-z)= ∫_0^∞dω/2π√(2πħω/ Ac)â(ω) e^-iω(t-z) ,where â(ω) is the photon annihilation operator associated with the electric field Ê_a(t-z), which satisfies the commutation relations[â(ω), â^†(ω')] = 2 πδ(ω-ω') ,[â(ω), â(ω')] = [â^†(ω), â^†(ω')] = 0 .A is the cross-sectional area of the optical beam. To discuss the input-output relation of the Michelson interferometer based on the one-photon formulation, it is convenient to introduce the operator Â(ω) defined byÂ(ω) := â(ω) Θ(ω) + â^†(-ω) Θ(-ω)so that the electric field (<ref>) is represented asÊ_a(t) = ∫_-∞^+∞dω/2π√(2πħ|ω|/ Ac)Â(ω) e^-iω t ,where Θ(ω) is the Heaviside step function, Θ(ω)=1 for ω≥0 and Θ(ω)=0 for ω<0.Due to the property of the Dirac δ-function ∫_-∞^+∞dt e^+i(ω'-ω)t = 2πδ(ω'-ω), the inverse relation of Eq. (<ref>) is given byÂ(ω)= √( Ac/2πħ|ω|)∫_-∞^+∞ dt e^+iω tÊ_a(t) .Therefore, the operator Â(ω) includes the complete information of the electric field operator Ê_a(t) and is convenient for deriving the input-output relation of simple interferometers. §.§ Input-output relation for the extended Michelson interferometerIn this subsection, we consider the extension of the input-output relation for the Michelson interferometer depicted in Fig. <ref>. This "extension" involves three ingredients. First, the state of the input optical field is unspecified, while it is a coherent state whose complex amplitude is a real δ-function in the frequency domain in conventional gravitational-wave detectors. Second, the tiny motions X̂_x and X̂_y of the end-mirrors are not specified within this subsection. Finally, we introduce the phase offset ±θ/2 for each arm as depicted in Fig. <ref>. This phase offset is inspired by the original idea of weak measurements <cit.>. §.§.§ Beam splitter junctions First, we consider the junction conditions for quadratures at the beam splitter. Following the notation depicted in Fig. <ref>, the final output electric field Ê_b(t) is given by <cit.>Ê_b(t) = 1/√(2)[ Ê_c_y'(t) - Ê_c_x'(t) ] .Here, we defineB̂(ω):= b̂(ω) Θ(ω) + b̂^†(-ω) Θ(-ω),Ĉ_x'(ω):= ĉ_x'(ω) Θ(ω) + ĉ_x^'†(-ω) Θ(-ω),Ĉ_y'(ω):= ĉ_y'(ω) Θ(ω) + ĉ_y^'†(-ω) Θ(-ω)as in Eq. (<ref>), and the relation (<ref>) yieldsB̂(ω) = 1/√(2)( Ĉ_y'(ω) - Ĉ_x'(ω) ).Similarly, the electric-field operators Ê_c_y(t) and Ê_c_x(t) are also given by the input fields Ê_d(t) and Ê_a(t) as follows <cit.>:Ê_c_x(t)= 1/√(2)( Ê_d(t) - Ê_a(t) ) ,Ê_c_y(t)= 1/√(2)( Ê_d(t) + Ê_a(t) ) .In terms of the quadratures as in Eq. (<ref>), these relations yieldĈ_x(ω)= 1/√(2)( D̂(ω) - Â(ω) ) ,Ĉ_y(ω)= 1/√(2)( D̂(ω) + Â(ω) ) ,where we defined the operatorsĈ_x(ω):= ĉ_x(ω) Θ(ω) + ĉ_x^†(-ω) Θ(-ω),Ĉ_y(ω):= ĉ_y(ω) Θ(ω) + ĉ_y^†(-ω) Θ(-ω),D̂(ω):= d̂(ω) Θ(ω) + d̂^†(-ω) Θ(-ω)as in Eq. (<ref>). §.§.§ Arm propagation Next, we consider the retarded effect due to the propagation along the x- and y-arms. Here, each arm length is given by L=cτ, where τ is the retarded time for photons which propagate from the beam splitter to the end-mirrors. In addition to the retarded time τ, the tiny mirror displacements X̂_x/c and X̂_y/c also contribute to the phase shift of the electric field. In addition to these retarded effects, we add the retarded time Δ t_θ associated with the phase offset ±θ/2.
Then, the relations between the electric fields {Ê_c_x'(t),Ê_c_y'(t)} and {Ê_c_x(t),Ê_c_y(t)} are given byÊ_c'_x(t)= Ê_c_x[t - 2(τ + X̂_x(t)/c)+Δ t_θ] ,Ê_c'_y(t)= Ê_c_y[t - 2(τ + X̂_y(t)/c)-Δ t_θ],whereθ/2 = ωΔ t_θ(ω). Here, we treat X̂_x/c and X̂_y/c perturbatively. Through the representation of the electric fields Ê_c_x(t) and Ê_c'_x(t) as in Eq. (<ref>), the relation (<ref>) with Eq. (<ref>) is given byÊ_c'_x(t)= ∫_-∞^+∞dω/2π√(2πħ|ω|/ Ac)Ĉ_x'(ω) e^-iω t∼e^-iθ/2∫_-∞^+∞dω/2π√(2πħ|ω|/ Ac)Ĉ_x(ω) e^- i ω (t-2τ)+ e^-iθ/2∫_-∞^+∞dω/2π√(2πħ|ω|/ Ac)Ĉ_x(ω) e^- i ω (t-2τ)2 i ωX̂_x(t-τ) / c.In this expression, X̂_x is regarded as a quantum operator. In this case, there is an ordering problem of the operators X̂_x and ĉ_x(ω), but here we keep the ordering in which the operator X̂_x is placed to the right of the operator ĉ_x(ω). We note that this ordering problem is harmless within this paper, because we concentrate only on the linear quadrature relations with a coherent-state light source.Here, we introduce the Fourier transformationX̂_x(t)=: ∫_-∞^+∞Ẑ_x(Ω)e^-iΩ tdΩ/2π,substitute Eq. (<ref>) into Eq. (<ref>), and take the Fourier transformation (<ref>). Then, we haveĈ_x'(ω) = e^-iθ/2 e^+ 2 i ωτĈ_x(ω)+e^-iθ/2e^+i2ωτ2i/(c√(|ω|))∫_-∞^+∞dΩ/2πe^-iΩτ√(|ω-Ω|)(ω-Ω)×Ĉ_x(ω-Ω)Ẑ_x(Ω).Similarly, from Eq. (<ref>), we obtainĈ_y'(ω) = e^+iθ/2 e^+ 2 i ωτĈ_y(ω) + e^+iθ/2 e^+i2ωτ2i/(c√(|ω|))∫_-∞^+∞dΩ/2π e^-iΩτ√(|ω-Ω|) (ω-Ω)×Ĉ_y(ω-Ω)Ẑ_y(Ω)through the replacements Ĉ'_x→Ĉ'_y, Ẑ_x→Ẑ_y and θ→-θ in Eqs. (<ref>) and (<ref>).Substituting Eqs. (<ref>) and (<ref>) into Eq. (<ref>), we obtaine^- 2 i ωτB̂(ω)= i sin(θ/2)D̂(ω)+cos(θ/2)Â(ω) +2i/c∫_-∞^+∞dν/2πe^-iντ√(|(ω-ν)/ω|)(ω-ν) ×[(i sin(θ/2)D̂(ω-ν)+cos(θ/2)Â(ω-ν))Ẑ_com(ν)-(cos(θ/2)D̂(ω-ν)+i sin(θ/2)Â(ω-ν))Ẑ_diff(ν)],where we definedẐ_com := Ẑ_x + Ẑ_y/2 , Ẑ_diff := Ẑ_x - Ẑ_y/2 .§.§.§ Coherent state of the input optical beam Eq. (<ref>) indicates that the output operator B̂ is given by the operators Â, D̂, Ẑ_diff, and Ẑ_com. As we will see later in this section, Ẑ_diff and Ẑ_com are themselves given by Â and D̂ together with the gravitational-wave signal through the equations of motion for the end-mirrors. Therefore, to discuss the information from the output operator B̂, we have to specify the quantum states associated with the operators Â and D̂.The state for the operator Â is the state which is injected from the anti-symmetric port. On the other hand, the state associated with the operator D̂ is the state of the electric field which is injected from the symmetric port. The total photon state at the output port B̂, i.e., b̂, is determined by the specification of the states for the operators D̂ and Â, i.e., the annihilation operators d̂ and â. We assume that the state associated with the operator d̂ is a coherent state with the complex amplitude α(ω), that the state associated with the operator â is the vacuum state, and that there is no entanglement between these states. Then, the total photon state |ψ⟩ is given by the direct product of the photon states of each frequency as|ψ⟩ = ∏_ω |α(ω)⟩_d⊗|0⟩_a = ∏_ω D_d(α(ω))|0⟩_d⊗|0⟩_a=:D_d|0⟩_d⊗|0⟩_a , D_d := ∏_ω D_d(α(ω))= exp[ ∫dω/2π( α(ω) d^†(ω) - α^*(ω) d(ω) ) ] .In the Heisenberg picture, the operator d̂ is replaced asD_d^†d̂(ω)D_d = d̂(ω) + α(ω) , D_d^†d̂^†(ω)D_d = d̂^†(ω) + α^*(ω) .In terms of the operator D̂(ω) defined by Eq. (<ref>), this replacement is equivalent toD_d^†D̂(ω)D_d = D̂_c(ω) + D̂_v(ω),whereD̂_c(ω) := α(ω)Θ(ω) + α^*(-ω)Θ(-ω) ,D̂_v(ω) := d̂(ω)Θ(ω) + d̂^†(-ω)Θ(-ω) .
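The displacement relations (<ref>) can be checked numerically in a truncated Fock basis. The following sketch (our addition, assuming Python with numpy and scipy; the truncation dimension N and the amplitude α are illustrative) verifies D^†(α)âD(α)=â+α for a single mode, up to truncation effects at the edge of the basis.

```python
import numpy as np
from scipy.linalg import expm

N = 60                                        # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator, a|n> = sqrt(n)|n-1>
alpha = 0.7 + 0.3j
D = expm(alpha * a.conj().T - np.conj(alpha) * a)   # displacement operator

lhs = D.conj().T @ a @ D
rhs = a + alpha * np.eye(N)
# the agreement is exact except near the truncation edge of the basis
print(np.abs(lhs - rhs)[: N // 2, : N // 2].max())
```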
Since we apply the Heisenberg picture, the operator D_d^†B̂(ω)D_d is useful for the evaluation of the photon-number expectation value from the input-output relation (<ref>). We regard the terms D_d^†Ẑ_com(Ω)D_d and D_d^†Ẑ_diff(Ω)D_d as small corrections and we neglect the quadratic terms in these small corrections. Applying D_d^† and D_d to Eq. (<ref>) and substituting Eqs. (<ref>) into Eq. (<ref>), we obtain the input-output relation ase^- 2 i ωτ D_d^†B̂(ω)D_d=i sin(θ/2) D̂_c(ω) + i sin(θ/2) D̂_v(ω) + cos(θ/2) Â(ω) + 2i/c∫_-∞^+∞dΩ/2π e^-iΩτ√(|(ω-Ω)/ω|) (ω-Ω)×[ i sin(θ/2) D̂_c(ω-Ω) D_d^†Ẑ_com(Ω)D_d - cos(θ/2) D̂_c(ω-Ω) D_d^†Ẑ_diff(Ω)D_d] .This is the most general input-output relation within our consideration. In Eq. (<ref>), the first term in the first line is the leakage of the classical carrier field due to the phase offset θ/2. The remaining terms in the first line are the vacuum fluctuations which correspond to the shot noise. The second and third lines are the response to the mirror motion, which includes the gravitational-wave signal and the radiation-pressure noise through the mirror motions D_d^†Ẑ_com(Ω)D_d and D_d^†Ẑ_diff(Ω)D_d. The input-output relation (<ref>) is the main result of this paper. To evaluate the input-output relation (<ref>), we have to evaluate D_d^†Ẑ_com(Ω)D_d and D_d^†Ẑ_diff(Ω)D_d in some way. §.§.§ End-mirrors' equations of motion (time domain) In the case of gravitational-wave detectors, D_d^†Ẑ_com(Ω)D_d and D_d^†Ẑ_diff(Ω)D_d are evaluated through the equations of motion for the end-mirrors. We assume that the masses of the beam splitter and the end-mirrors are all equal to m. Since we apply the proper reference frame <cit.> of a local inertial system whose center is the beam splitter, X̂_x and X̂_y describe the tiny displacements of the geodesic distances of the x- and y-end mirrors from the central beam splitter, respectively. The equations for X̂_x and X̂_y are given bym/2d^2/dt^2X̂_x(t)= F̂_rp(x)(t) + 1/2m/2 L d^2/dt^2h(t) ,m/2d^2/dt^2X̂_y(t)= F̂_rp(y)(t) - 1/2m/2 L d^2/dt^2h(t) ,where h is the gravitational-wave signal which is derived from the tidal force due to the gravitational-wave propagation in the proper reference frame and m/2 is the reduced mass of the differential motion of the end-mirrors and the central beam splitter. Furthermore, F̂_rp(x) and F̂_rp(y) are the radiation-pressure forces due to the incident photons. §.§.§ Radiation pressure forces The radiation-pressure forces in Eqs. (<ref>) and (<ref>) are evaluated in this paper throughF̂_rp(x)(t)=2A/4π( Ê_c_x[ t - (τ+X̂_x/c) + Δ t_θ/2] )^2 ,F̂_rp(y)(t)=2A/4π( Ê_c_y[ t - (τ+X̂_y/c) - Δ t_θ/2] )^2 .The right-hand sides in Eqs. (<ref>) and (<ref>) are just twice the Poynting flux of the electric fields incident on the end-mirrors, respectively.Performing the Fourier transformation of the electric field Ê_c_x, using Eq.
(<ref>), and taking the zeroth- and linear-order terms with respect to the operator X̂_x, the radiation-pressure force (<ref>) is given byF̂_rp(x)(t)= ħ/c e^-iθ/2∫_-∞^∞∫_-∞^∞dω/2πdω'/2π√(|ωω'|)Ĉ_x(ω) Ĉ_x(ω') e^+i(ω+ω')τ e^-i(ω+ω')t +iħ/ce^-iθ/2∫_-∞^∞∫_-∞^∞dω/2πdω'/2π√(|ωω'|)(ω+ω')Ĉ_x(ω)Ĉ_x(ω') ×X̂_x(t-τ)/c e^+i(ω+ω')τe^-i(ω+ω')t.Substituting the Fourier transformation (<ref>) of X̂_x and taking the Fourier transformation F̂_rp(x)(Ω) of F̂_rp(x)(t), we obtain the expression of the radiation-pressure force which acts on the x-end mirror in the frequency domain asF̂_rp(x)(Ω):= ∫_-∞^+∞ dtF̂_rp(x)(t)e^+iΩ t= ħ/c e^-iθ/2 e^+iΩτ∫dω/2π√(|ω(Ω-ω)|)Ĉ_x(ω) Ĉ_x(Ω-ω)+iħ/c^2e^-iθ/2∬dω/2πdω'/2π√(|ωω'|)(ω+ω')Ĉ_x(ω)Ĉ_x(ω') ×Ẑ_x(Ω-ω-ω')e^+i(ω+ω')τ,where we use the notation ∫=∫_-∞^+∞ and ∬=∫_-∞^+∞∫_-∞^+∞. Similarly, we also obtainF̂_rp(y)(Ω):= ∫_-∞^+∞ dtF̂_rp(y)(t)e^+iΩ t= ħ/c e^+iθ/2 e^+iΩτ∫dω/2π√(|ω(Ω-ω)|)Ĉ_y(ω) Ĉ_y(Ω-ω)+iħ/c^2e^+iθ/2∬dω/2πdω'/2π√(|ωω'|)(ω+ω')Ĉ_y(ω)Ĉ_y(ω') ×Ẑ_y(Ω-ω-ω')e^+i(ω+ω')τ.§.§.§ End-mirrors' equations of motion (frequency domain)D_d^†Ẑ_com(Ω)D_d and D_d^†Ẑ_diff(Ω)D_d in the input-output relation (<ref>) are determined by the equations of motion (<ref>) and (<ref>) for the test masses in the frequency domain. Multiplying the Fourier-transformed versions of Eqs. (<ref>) and (<ref>) by D_d^† and D_d, we obtainm Ω^2 D_d^†Ẑ_com(Ω) D_d =- D_d^†F̂_rp(x)(Ω) D_d- D_d^†F̂_rp(y)(Ω) D_d , m Ω^2 D_d^†Ẑ_diff(Ω) D_d =D_d^†F̂_rp(y)(Ω) D_d- D_d^†F̂_rp(x)(Ω) D_d+ 1/2 m L Ω^2 h(Ω)from the definitions (<ref>) of Ẑ_com(Ω) and Ẑ_diff(Ω). Here, we have regarded the gravitational-wave signal h(Ω) as a classical variable which is proportional to the identity operator in the sense of quantum theory. We also used the fact that the displacement operator D_d is time-independent. Equations (<ref>) and (<ref>) indicate that we have to evaluate D_d^†F̂_rp(x)(Ω)D_d and D_d^†F̂_rp(y)(Ω)D_d to evaluate D_d^†Ẑ_com(Ω)D_d and D_d^†Ẑ_diff(Ω)D_d.Note that D_d^†Ĉ_x(ω)D_d in D_d^†F̂_rp(x)(Ω)D_d is given by the quadratures D̂(ω) and Â(ω) through Eq. (<ref>). We consider the situation where the state for the input quadrature Â(ω) from the anti-symmetric port is the vacuum and the state for the input quadrature D̂(ω) from the symmetric port is the coherent state of Eq. (<ref>), which enables us to separate the operator D̂(ω) into the vacuum quadrature and the classical carrier as in Eq. (<ref>). Through Eqs. (<ref>), (<ref>) and (<ref>), we may separate D_d^†Ĉ_x,y(ω)D_d into the vacuum quadrature and the classical carrier asD_d^†Ĉ_x,y(ω)D_d = Ĉ_x,y(c)(ω) + Ĉ_x,y(v)(ω),whereĈ_x(c)(ω)= Ĉ_y(c)(ω) = 1/√(2)D̂_c(ω) ,Ĉ_x(v)(ω):= 1/√(2)( D̂_v(ω) + Â(ω) ) ,Ĉ_y(v)(ω):= 1/√(2)( D̂_v(ω) - Â(ω) ) . Through Eqs. (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>), and ignoring the higher-order terms of the vacuum quadrature and displacement, we obtainD_d^†F̂_rp(x)(Ω)D_d= ħ/2c e^-iθ/2 e^+iΩτ∫dω/2π√(|ω(Ω-ω)|)[ D̂_c(ω) D̂_c(Ω-ω)+2D̂_c(ω)(D̂_v(Ω-ω)+ Â(Ω-ω))] +iħ/2c^2e^-iθ/2∬dω/2πdω'/2π√(|ωω'|)(ω+ω')D̂_c(ω)D̂_c(ω') ×D_d^†Ẑ_x(Ω-ω-ω')D_de^+i(ω+ω')τ, D_d^†F̂_rp(y)(Ω)D_d= ħ/2c e^+iθ/2 e^+iΩτ∫dω/2π√(|ω(Ω-ω)|)[ D̂_c(ω) D̂_c(Ω-ω)+2D̂_c(ω)(D̂_v(Ω-ω)-Â(Ω-ω))] +iħ/2c^2e^+iθ/2∬dω/2πdω'/2π√(|ωω'|)(ω+ω')D̂_c(ω)D̂_c(ω') ×D_d^†Ẑ_y(Ω-ω-ω')D_de^+i(ω+ω')τ. Through the expressions of the radiation-pressure forces (<ref>) and (<ref>), the equations (<ref>) and (<ref>) for the end-mirrors are given bym Ω^2D_d^†Ẑ_com(Ω)D_d=- ħ/c e^+iΩτcos(θ/2) ∫dω/2π√(|ω(Ω-ω)|)D̂_c(ω) D̂_c(Ω-ω) -2ħ/ce^+iΩτ∫dω/2π√(|ω(Ω-ω)|)D̂_c(ω)(cos(θ/2)D̂_v(Ω-ω)
- i sin(θ/2) Â(Ω-ω))-icos(θ/2)ħ/c^2∬dω/2πdω'/2π√(|ωω'|)(ω+ω')D̂_c(ω)D̂_c(ω')×D_d^†Ẑ_com(Ω-ω-ω')D_de^+i(ω+ω')τ -sin(θ/2)ħ/c^2∬dω/2πdω'/2π√(|ωω'|)(ω+ω')D̂_c(ω)D̂_c(ω')×D_d^†Ẑ_diff(Ω-ω-ω')D_de^+i(ω+ω')τ, m Ω^2 D_d^†Ẑ_diff(Ω) D_d=i sin(θ/2) ħ/c e^+iΩτ∫dω/2π√(|ω(Ω-ω)|)D̂_c(ω) D̂_c(Ω-ω) + 2ħ/c e^+iΩτ∫dω/2π√(|ω(Ω-ω)|)D̂_c(ω)(i sin(θ/2)e^+iθ/2D̂_v(Ω-ω)-cos(θ/2)Â(Ω-ω)) -sin(θ/2)ħ/c^2∬dω/2πdω'/2π√(|ωω'|)(ω+ω')D̂_c(ω)D̂_c(ω')×D_d^†Ẑ_com(Ω-ω-ω')D_de^+i(ω+ω')τ -icos(θ/2)ħ/c^2∬dω/2πdω'/2π√(|ωω'|)(ω+ω')D̂_c(ω)D̂_c(ω')×D_d^†Ẑ_diff(Ω-ω-ω')D_de^+i(ω+ω')τ +1/2 m L Ω^2 h(Ω). The explicit representations of D_d^†Ẑ_com(Ω)D_d and D_d^†Ẑ_diff(Ω)D_d are given as the solutions to Eqs. (<ref>) and (<ref>), respectively. Through these solutions, we can evaluate the input-output relation (<ref>) in a closed form.§.§ RemarksHere, we describe some remarks on the results of this section.In Eq. (<ref>), we extend the state of the photon field from the light source from the monochromatic laser, which is described by the δ-function complex amplitude for the coherent state whose support is only at ω=ω_0, to an arbitrary coherent state whose complex amplitude is described by an arbitrary function α(ω) in the frequency domain. Due to this extension, in Eq. (<ref>), we have to evaluate the convolution in the frequency domain, while the integration for this convolution is simplified due to the δ-function in the conventional Michelson gravitational-wave detectors. Since the complex amplitude for the coherent state from the light source is arbitrary in Eq. (<ref>), we have neither the central frequency ω_0 of the complex amplitude, nor the sideband picture around this central frequency, nor the approximation ω_0≫Ω. These are due to the fact that we did not use the two-photon formulation.Furthermore, we introduce the phase offset θ in the input-output relation (<ref>). Due to this phase offset θ, Eq. (<ref>) has some effects which are not taken into account in the input-output relation for the conventional Michelson gravitational-wave detector. The first one is the leakage of the classical carrier field from the light source and the second one is the shot noise from the light source. These are described by the first and second terms on the right-hand side of Eq. (<ref>), respectively. The final one is the effect of the common motion of the two end-mirrors, which is described by the second line in Eq. (<ref>). This common motion D_d^†Ẑ_com(Ω)D_d together with the differential motion D_d^†Ẑ_diff(Ω)D_d in Eq. (<ref>) is determined by the equations of motion of the two end-mirrors, which are given by Eqs. (<ref>) and (<ref>).The equations of motion (<ref>) and (<ref>) are also modified due to our extension. For example, in Eq. (<ref>) for the differential motion of the end-mirrors, we have to evaluate the convolutions in the first four lines on the right-hand side due to the extension of the complex amplitude for the coherent state from the δ-function δ(ω-ω_0) to an arbitrary function α(ω) in the frequency domain. Furthermore, the third and fourth lines on the right-hand side of Eq. (<ref>) appear due to this extension. These terms do not appear in the equations of motion for the end-mirrors in the conventional Michelson gravitational-wave detector, and arise due to the modulation of the shape of the complex amplitude by the retarded effect of the optical-field propagation from the central beam splitter to the end-mirrors.
Moreover, the direct effect due to the classical carrier part from the light sources and the effect due to the shot noise from the light source appears which affects to the differential motion of end-mirrors through the introduction of the phase offset θ.Thus, we regard that the set of the input-output relation for the extended Michelson interferometer (<ref>), equations (<ref>) and (<ref>) of motions for the end-mirrors is the main result of this paper. § RE-DERIVATION OF CONVENTIONAL INPUT-OUTPUT RELATION In this section, we show that the derived input-output relation (<ref>) with equations (<ref>) and (<ref>) of motions yields the input-output relation for the conventional Michelson interferometric gravitational-wave detector. §.§ Input-output relationIn the conventional Michelson gravitational-wave detector, the state of the optical beam from the light source is in the coherent state with the complex amplitudeα(ω) = 2π N δ(ω-ω_0) .We note that α(ω) is real. The corresponding electric field with the amplitude (<ref>) of α(ω) is the continuous monochromatic carrier field with the frequency ω_0. The normalization factor N is related to the averaged photon number per secondN = √(I_0/ħω_0),where I_0 is the averaged power of the carrier field. Through the definition (<ref>), the classical part D̂_c(ω) of the input light source is given byD̂_c(ω) = 2 π N {δ(ω-ω_0) Θ(ω) + δ(ω+ω_0) Θ(-ω) } . Here, we concentrate on the mode with the frequency ω_0±Ω. For these sidebands, D̂_c(ω) given by Eq. (<ref>) includes terms which depend on 2ω_0±Ω. In the time-domain, these terms includes the factor of the rapid oscillation e^i2ω_0t. Therefore, this part can be removed in the data taking or the data analyses processes and we ignore these terms, since we concentrate only on the fluctuations with the frequency ω_0±Ω. For this reason, D̂_c(ω) with ω_0±Ω may be regarded asD̂_c(ω_0±Ω) = 2π N δ(Ω).Through the same approximation, the input-output relations (<ref>) with ω=ω_0±Ω are given bye^- 2 i(ω_0±Ω)τ D_d^†B̂(ω_0±Ω)D_d=i sin(θ/2) D̂_c(ω_0±Ω) + i sin(θ/2) D̂_v(ω_0±Ω) + cos(θ/2) Â(ω_0±Ω)+2iNω_0^3/2e^∓ i Ωτ/c√(|ω_0±Ω|)[i sin(θ/2)D_d^†Ẑ_com(±Ω)D_d. .-cos(θ/2)D_d^†Ẑ_diff(±Ω)D_d],and Eqs. (<ref>) and (<ref>) are given bym Ω^2 D_d^†Ẑ_com(Ω) D_d=- ħ N e^+iΩτ√(ω_0)/ c cos(θ/2) ( √(|Ω-ω_0|)D̂_c(Ω-ω_0) .. + √(|Ω+ω_0|)D̂_c(Ω+ω_0) )-2ħNe^+iΩτ√(ω_0)/c{√(|Ω-ω_0|)(cos(θ/2) D̂_v(Ω-ω_0) . ...-i sin(θ/2) Â(Ω-ω_0)). .+√(|Ω+ω_0|)(cos(θ/2) D̂_v(Ω+ω_0) . ...-i sin(θ/2) Â(Ω+ω_0))}, m Ω^2 D_d^†Ẑ_diff(Ω) D_d=+i ħ N e^+iΩτ√(ω_0)/ c sin(θ/2) {√(|Ω-ω_0|)D̂_c(Ω-ω_0) .. + √(|Ω+ω_0|)D̂_c(Ω+ω_0) } +2ħNe^+iΩτ√(ω_0)/c{√(|Ω-ω_0|)[i sin(θ/2) D̂_v(Ω-ω_0) . ...-cos(θ/2) Â(Ω-ω_0)]. .+√(|Ω+ω_0|)[i sin(θ/2) D̂_v(Ω+ω_0) . ...-cos(θ/2) Â(Ω+ω_0)]} +1/2m LΩ^2h(Ω). Here, we consider the situation where ω_0≫Ω and we apply the approximation in which ω_0±Ω in the coefficients of the input-output relation are regarded as ω_0±Ω∼ω_0. Furthermore, we useω_0τ = ω_0L/c = 2 n π,n∈ℕ,so that the anti-symmetric port is the dark port. We also introduce the following variablesκ := 8 ω_0 I_0/mc^2Ω^2 ,h_SQL := √(8ħ/mΩ^2L^2) .In addition, since ω_0≫Ω, we should regard ω_0+Ω>0 and Ω-ω_0<0. Then, the substitution of Eqs. (<ref>) and (<ref>) into Eq. (<ref>) yields the input-output relations asD_d^†b̂_±D_d = sin(θ/2) ( i + κcos(θ/2) ) √(I_0/ħω_0) 2 πδ(Ω)+e^± 2 iΩτ[i sin(θ/2) d̂_±+cos(θ/2) â_±] +κ e^± 2 iΩτ/2[sinθ(d̂_∓^† + d̂_±). .+ i cosθ(â_∓^† +â_±)] -i√(κ)e^± iΩτcos(θ/2)h(±Ω)/h_SQL,where â_±:=â(ω_0±Ω), b̂_±:=b̂(ω_0±Ω), and d̂_±:=d̂(ω_0±Ω). 
Here, we note that the carrier part in Eq. (<ref>), which is proportional to δ(Ω), diverges due to the radiation-pressure contribution κ∝Ω^-2. Since this divergent part is completely predictable and can be removed in the data-taking or data-analysis processes, we ignore it below.

§.§ Two-photon formulation

Since the two-photon formulation is applicable in our situation ω_0≫Ω, we introduce the operators

â_1 = 1/√(2)(â_++â_-^†) , â_2 = 1/√(2)i(â_+-â_-^†) ,
b̂_1 = 1/√(2)(b̂_++b̂_-^†) , b̂_2 = 1/√(2)i(b̂_+-b̂_-^†) ,
d̂_1 = 1/√(2)(d̂_++d̂_-^†) , d̂_2 = 1/√(2)i(d̂_+-d̂_-^†) .

In the two-photon formulation, which treats the situation where the carrier field is proportional to cosω_0t, the operators â_1, b̂_1, and d̂_1 are regarded as amplitude quadratures, while â_2, b̂_2, and d̂_2 are regarded as phase quadratures. In terms of these amplitude and phase quadratures, the input-output relation (<ref>) yields

D_d^†b̂_1D_d = 1/√(2)sinθ κ√(I_0/ħω_0) 2πδ(Ω) + e^+2iΩτ{ -sin(θ/2) d̂_2 + cos(θ/2) â_1} + e^+2iΩτ κsinθ d̂_1 ,

D_d^†b̂_2D_d = √(2)sin(θ/2) √(I_0/ħω_0) 2πδ(Ω) + e^+2iΩτ{sin(θ/2) d̂_1 + cos(θ/2) â_2} + cosθ e^+2iΩτ κâ_1 - e^+iΩτcos(θ/2) √(2κ) h(Ω)/h_SQL .

In Eq. (<ref>), the first term is the divergent classical carrier field induced by the radiation-pressure force due to the mirror motion. The second term is the shot noise arising from the quantum fluctuations entering through the bright port and the dark port. The last term in Eq. (<ref>) is the radiation-pressure noise due to the mirror motion, which originates from the quantum fluctuations of the optical beam injected from the bright port. On the other hand, in Eq. (<ref>), the first term is the classical carrier field that leaks from the light source owing to the phase offset θ. The second term is the shot noise from the quantum fluctuations entering through the bright port and the dark port. The third term is the radiation-pressure noise due to the mirror motion, which originates from the quantum fluctuations of the vacuum entering from the dark port. The last term is the gravitational-wave signal.

Although the classical carrier parts in D_d^†b̂_1D_d and D_d^†b̂_2D_d are completely determined in the classical sense and can be removed in the data analysis, we note that the carrier part that diverges due to the radiation-pressure force contributes only to the amplitude quadrature D_d^†b̂_1D_d. Therefore, as long as we observe only D_d^†b̂_2D_d <cit.>, the divergent term due to the radiation-pressure force cancels, and this divergence causes no difficulty.

Finally, we note that if we choose θ=0, Eqs. (<ref>) and (<ref>) reduce to

D_d^†b̂_1D_d = e^+2iΩτ â_1 , D_d^†b̂_2D_d = e^+2iΩτ( â_2 + κâ_1) - e^+iΩτ√(2κ) h(Ω)/h_SQL ,

respectively. This is the conventional input-output relation for the Michelson gravitational-wave detector <cit.>. Thus, the input-output relations (<ref>) and (<ref>) recover the usual input-output relation that is well known in the gravitational-wave community. Furthermore, this also means that the set consisting of the original input-output relation (<ref>) and Eqs. (<ref>) and (<ref>) for the mirrors' motions is a natural extension of the conventional input-output relation of the Michelson interferometric gravitational-wave detector.
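The conventional relation just recovered fixes the standard quantum-noise budget of the detector: treating â_1 and â_2 as vacuum inputs of unit single-sided spectral density, Eq. (<ref>) gives the signal-referred noise S_h(Ω) = (h_SQL^2/2)(1/κ + κ), which touches the standard quantum limit at κ=1. The short sketch below is our own illustration of this budget; the mirror mass, arm length, carrier power and wavelength are assumed, representative values, not numbers from this paper.

```python
import numpy as np

# Quantum-noise budget implied by the conventional relation
#   b2 = exp(2 i Omega tau)(a2 + kappa a1) - exp(i Omega tau) sqrt(2 kappa) h/h_SQL,
# with a1, a2 vacuum inputs of unit single-sided spectral density.
# All physical parameters below are assumed, illustrative values.
hbar = 1.054571817e-34          # J s
c = 299792458.0                 # m/s
m = 40.0                        # kg, mirror mass (assumed)
L = 4000.0                      # m, arm length (assumed)
lam = 1064e-9                   # m, laser wavelength (assumed)
I0 = 1.0e6                      # W, carrier power (assumed)
omega0 = 2.0 * np.pi * c / lam

f = np.logspace(0, 3, 400)      # sideband frequencies in Hz
Omega = 2.0 * np.pi * f

kappa = 8.0 * omega0 * I0 / (m * c**2 * Omega**2)     # as defined in the text
h_SQL = np.sqrt(8.0 * hbar / (m * Omega**2 * L**2))   # as defined in the text

# shot noise ~ 1/kappa, radiation-pressure noise ~ kappa:
S_h = 0.5 * h_SQL**2 * (1.0 / kappa + kappa)

i = np.argmin(S_h)
print("best strain noise sqrt(S_h) = %.3e 1/sqrt(Hz) near f = %.1f Hz (kappa = %.2f)"
      % (np.sqrt(S_h[i]), f[i], kappa[i]))
# At kappa = 1 the budget touches the standard quantum limit, S_h = h_SQL^2.
```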
§ WEAK-VALUE AMPLIFICATION FROM THE EXTENDED INPUT-OUTPUT RELATION

Here, we consider the situation of the weak measurement in the interferometer setup depicted in Fig. <ref>, starting from the input-output relation (<ref>), to show that this input-output relation actually includes the weak-value amplification. Without loss of generality, we may choose ω>0 in Eq. (<ref>):

e^-2iωτ D_d^†b̂(ω)D_d = i sin(θ/2) α(ω) + i sin(θ/2) d̂(ω) + cos(θ/2) â(ω) + 2i/c∫_-∞^+∞dΩ/2π e^-iΩτ√(|(ω-Ω)/ω|) (ω-Ω) ×[ i sin(θ/2) D̂_c(ω-Ω) D_d^†Ẑ_com(Ω)D_d - cos(θ/2) D̂_c(ω-Ω) D_d^†Ẑ_diff(Ω)D_d ] .

To discuss the weak measurement on the basis of this input-output relation, we concentrate on the photon-number operator at the photodetector, n̂(ω) := b̂^†(ω)b̂(ω), whose expectation value in the state (<ref>) is given by

n(ω) := ⟨ψ|n̂(ω)|ψ⟩ = ⟨0|_a⊗⟨0|_d ( D_d^†b̂(ω) D_d)^† ( D_d^†b̂(ω) D_d) |0⟩_a⊗|0⟩_d .

Here, we consider the situation where Ẑ_com and Ẑ_diff are classical, i.e., proportional to the identity operator in the sense of quantum theory, and where their frequency dependence is negligible. Furthermore, the complex amplitude α(ω) for the coherent state from the light source is real, has compact support within the frequency range ω∈[0,+∞), and decreases rapidly at the boundaries ω→0 and ω→+∞ of this range. This is the situation discussed by Nishizawa in Ref. <cit.>.

Substituting Eq. (<ref>) into Eq. (<ref>) and keeping the terms linear in Ẑ_com and Ẑ_diff, we obtain

n(ω) = sin^2(θ/2) α^2(ω) - sin^2(θ/2) (8/(2π c)) ω^1/2 I_s+3/2(τ,α) α(ω) ×( cos(ωτ) Ẑ_com + cot(θ/2) sin(ωτ) Ẑ_diff ),

where we introduce the definite integral I_s+3/2(τ,α) defined by

I_s+3/2(τ,α) := ∫_0^+∞ dx x^3/2 sin(xτ) α(x) .

When α(ω) is a Gaussian function, this definite integral converges and can be expressed in terms of the parabolic cylinder function <cit.>. Here, we define n_0(ω) and δn(ω) by

n_0(ω) := sin^2(θ/2) α^2(ω) , δn(ω) := -sin^2(θ/2) (8/(2π c)) √(ω) I_s+3/2(τ,α) α(ω) ×( cos(ωτ) Ẑ_com + cot(θ/2) sin(ωτ) Ẑ_diff ),

so that n(ω) = n_0(ω) + δn(ω).

As explained in Sec. <ref>, we consider the normalized frequency distribution f(ω) of the output photon number defined by Eq. (<ref>). The expectation value of the frequency ω under the distribution function f(ω) is evaluated as

⟨ω⟩ := ∫_0^+∞ dω ω f(ω) ∼ ω_0 + ∫_0^+∞ dω (ω-ω_0) δn(ω) / ∫_0^+∞ dω n_0(ω),

where we have denoted

ω_0 := ∫_0^+∞ dω ω n_0(ω) / ∫_0^+∞ dω n_0(ω).

Furthermore, we introduce the following definite integrals:

J(α) := ∫_0^+∞ dω α^2(ω) , I_c±1/2(τ,α) := ∫_0^+∞ dx x^±1/2 cos(xτ) α(x) , I_s±1/2(τ,α) := ∫_0^+∞ dx x^±1/2 sin(xτ) α(x) .

When α(ω) is a Gaussian function, these definite integrals converge. Using them, the expectation value (<ref>) of the frequency ω under the distribution function (<ref>) is given by

⟨ω⟩ - ω_0 ∼ Ẑ_com (8/(2π c J(α))) I_s+3/2(τ,α) ×( ω_0 I_c-1/2(τ,α) - I_c+1/2(τ,α) ) + cot(θ/2) Ẑ_diff (8/(2π c J(α))) I_s+3/2(τ,α) ×( ω_0 I_s-1/2(τ,α) - I_s+1/2(τ,α) ).

When θ≪1, the second term in Eq. (<ref>) is dominant, i.e.,

⟨ω⟩ - ω_0 ∼ +2/θ Ẑ_diff (8/(2π c J(α))) I_s+3/2(τ,α) ×( ω_0 I_s-1/2(τ,α) - I_s+1/2(τ,α) ).

This is the weak-value amplification effect.

We note that the quantum fluctuations described by the quadratures â and d̂ play no role here: the shift is completely determined by the amplitude α(ω) of the coherent state for the quadrature d̂. We also note that these arguments do not depend sensitively on the details of the real function α(ω). Thus, we have shown that our derived input-output relation (<ref>) indeed includes the situation of the weak-value amplification.
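The amplified shift (<ref>) is straightforward to evaluate numerically. The sketch below is our own illustration — the Gaussian amplitude α(ω), the values of ω_0, σ, τ and Ẑ_diff, and the quadrature rule are all assumptions — and it simply verifies the cot(θ/2) ≈ 2/θ growth of the shift as the offset θ→0.

```python
import numpy as np

# Numerical sketch of the weak-value-amplified frequency shift.
# The Gaussian amplitude alpha(w), the values of w0, sigma, tau and Z_diff,
# and the quadrature rule are illustrative assumptions.
w0_c, sigma, tau, Z_diff, c_light = 10.0, 1.0, 0.3, 1.0e-3, 1.0   # natural units
alpha = lambda w: np.exp(-(w - w0_c) ** 2 / (4.0 * sigma ** 2))   # real amplitude

w = np.linspace(0.0, w0_c + 10.0 * sigma, 200001)[1:]   # skip w = 0 exactly
a = alpha(w)

def I(p, kind):
    """The definite integrals I_{c,p} and I_{s,p} of the text (trapezoid rule)."""
    osc = np.cos(w * tau) if kind == "c" else np.sin(w * tau)
    return np.trapz(w ** p * osc * a, w)

J = np.trapz(a ** 2, w)
omega0 = np.trapz(w * a ** 2, w) / J       # centre of the distribution n0 ~ alpha^2

# Z_diff part of <w> - w0; the Z_com part is analogous with the cosine integrals.
coeff = (8.0 / (2.0 * np.pi * c_light * J)) * I(1.5, "s") \
        * (omega0 * I(-0.5, "s") - I(0.5, "s"))

for theta in (0.2, 0.1, 0.05, 0.025):
    print("theta = %5.3f : <w> - w0 ~ %+.4e"
          % (theta, (1.0 / np.tan(theta / 2.0)) * Z_diff * coeff))
# Halving theta roughly doubles the shift, i.e. cot(theta/2) ~ 2/theta.
```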
§ SUMMARY AND DISCUSSION

In this paper, we considered an extension of the input-output relation for a conventional Michelson gravitational-wave detector in order to compare the weak-measurement inspired gravitational-wave detector of Ref. <cit.> with the conventional one. The main difference between these detectors is the injected optical field, which is a continuous monochromatic laser in the conventional case and a continuous pulse beam in the model of Ref. <cit.>. We therefore extended the conventional input-output relation for the gravitational-wave detector to the situation where the injected photon state is a coherent state with an arbitrary complex amplitude α(ω). We also showed that our extended input-output relation includes both the situation of conventional gravitational-wave detectors and that in which the weak-value amplification occurs. This is the main result of this paper.

Within this paper, we did not discuss quantum noise in the situation where the weak-value amplification occurs. In principle, however, the quantum noises — the shot noise and the radiation-pressure noise due to the quantum fluctuations of photons in the Michelson interferometric gravitational-wave detector — can be discussed even in that situation. In our derivation, we regard the Fourier-transformed variables as describing a stationary continuous measurement through the average over many pulses. Although this stationarity is an assumption throughout this paper, we can discuss the time evolution of the gravitational-wave signal through the frequency dependence of the mirror displacement in the extended input-output relation of Sec. <ref>. This differs from the discussions in Ref. <cit.>, which assume that the mirror displacement is constant in time; in Sec. <ref> we considered a time-independent mirror displacement only for the comparison with Ref. <cit.>. For this reason, our input-output relation derived in Sec. <ref> should be regarded as different from that derived in Ref. <cit.>.

In spite of this difference, we reached the same conclusions as those in Ref. <cit.>. First, as discussed in Sec. <ref>, the weak-value amplification obtained from the input-output relation (<ref>) is an effect of the carrier field α(ω) of the coherent state from the light source and has nothing to do with the quantum fluctuations described by the photon annihilation and creation operators. Second, together with the amplification of the gravitational-wave signal, the weak-value amplification also amplifies the radiation-pressure noise, which is one of the important quantum noises in gravitational-wave detectors. These two conclusions are not affected by the details of the analyses; in this sense, they are robust. In addition, from the comparison between the input-output relation (<ref>) in Sec. <ref> and Eq. (<ref>) in Sec. <ref>, we may draw the final conclusion that a conventional Michelson gravitational-wave detector already includes the essence of the weak-value amplification, namely the reduction of the quantum noise from the light source through the measurement at the dark port.

In the situation of Sec. <ref> where the weak-value amplification occurs, the unperturbed photon number, i.e., the first term on the right-hand side of Eq. (<ref>), is proportional to sin^2(θ/2), and there are factors sin^2(θ/2) and sin(θ/2)cos(θ/2) in the coefficients of Ẑ_com and Ẑ_diff, respectively. Since we consider the expectation value of ω under the conditional photon-number distribution f(ω) defined by Eq. (<ref>), we divide the perturbed terms, i.e., the second term on the right-hand side of Eq.
(<ref>), which is proportional to Ẑ_com or Ẑ_diff, by the unperturbed photon number in Eq. (<ref>). Through this process, the factor sin^2(θ/2) in the coefficient of the term including Ẑ_com cancels out, while the coefficient of Ẑ_diff becomes cot(θ/2). Then, if we choose θ≪1, the term that includes Ẑ_diff is dominant. This is the weak-value amplification. Indeed, the weak value in our setup depicted in Fig. <ref> is proportional to cot(θ/2), as shown in Eq. (<ref>). The important point is that the weak value amplifies Ẑ_diff, which includes not only the gravitational-wave signal h(Ω) but also the radiation-pressure noise. Therefore, we cannot improve the signal-to-noise ratio by the weak-value amplification, at least in the simple model of this paper, as pointed out by Nishizawa <cit.>. Since the reduction of these noises at the dark port is the main target of much research on gravitational-wave detectors, this model is not useful for that target.

On the other hand, we may compare the input-output relations (<ref>) and (<ref>) through the original input-output relation (<ref>). The coefficients of Ẑ_com and Ẑ_diff in Eq. (<ref>), which yield the weak-value amplification, are determined by the last term in the input-output relation (<ref>). The same (sin(θ/2),cos(θ/2))-dependence can be seen in the original input-output relation (<ref>), as well as in the input-output relation (<ref>) for the conventional gravitational-wave detector and the subsequent input-output relations (<ref>), (<ref>) and (<ref>). Therefore, the same effect as the weak-value amplification is already included in the conventional Michelson interferometric gravitational-wave detector through the photon detection at the anti-symmetric port, which is nearly a dark port. We may say that the common motion Ẑ_com — and, equivalently, the quantum fluctuations associated with the quadrature d̂, which affect the input-output relation through Ẑ_com — are negligible owing to the weak-value amplification. This is the meaning of the final conclusion above.

Although the model discussed here is not useful for the reduction of the quantum noise at the dark port of a gravitational-wave detector, there is still room to discuss the weak-measurement inspired gravitational-wave detector. One of the issues to be clarified is the effect of injecting optical pulses from the light source instead of the monochromatic continuous laser used in conventional gravitational-wave detectors. A first step in this direction is the examination of the input-output relation (<ref>) without the approximation in which Ẑ_com and Ẑ_diff are regarded as almost constant, on which our arguments in Sec. <ref> are based. This examination will lead to a direct comparison with the conventional Michelson interferometric gravitational-wave detector. To complete this comparison, we have to examine whether the assumptions introduced when deriving the conventional input-output relations (<ref>) and (<ref>) remain valid in the model of Ref. <cit.>. First, to derive these input-output relations, we concentrated on the sidebands ω_0±Ω as in Eq. (<ref>); we have to discuss whether or not the sideband picture is appropriate for the weak measurement. As far as the discussion at the level of Sec. <ref> of this paper is concerned, we cannot apply the sideband picture to the weak-measurement inspired gravitational-wave detector.
Second, in the derivation of Eqs. (<ref>) and (<ref>), we ignored the high-frequency modes with frequency 2ω_0±Ω. On the other hand, in our weak-measurement model we consider a broad frequency distribution of the photon field in Sec. <ref>; we have to judge whether or not the high-frequency modes can still be ignored in the weak-measurement inspired gravitational-wave detector. Finally, we considered the situation ω_0≫Ω in the derivation of Eqs. (<ref>) and (<ref>); this will not be appropriate for the weak-measurement inspired gravitational-wave detector.

In addition to the above problems, the input-output relation (<ref>) might not converge, owing to the response of the detector. This may be the most important issue for the model in Ref. <cit.>. Roughly speaking, D^†_dẐ_com(Ω)D̂_d and D^†_dẐ_diff(Ω)D̂_d will have a pole proportional to 1/Ω^2 through the equations of motion (<ref>) and (<ref>). If D̂_c(ω-Ω) in Eq. (<ref>) is essentially a Gaussian function, the integrand in Eq. (<ref>) might diverge due to this 1/Ω^2 pole. If this divergence is real and important in our situation, we will have to discuss its physical meaning carefully and treat it delicately.

These issues must be examined carefully for a complete comparison between the weak-measurement inspired gravitational-wave detector and the conventional Michelson gravitational-wave detector. However, they are beyond the current scope of this paper, and we therefore leave this comparison with conventional gravitational-wave detectors as one of our future works.

Even if we complete the arguments for the case where Ẑ_com and Ẑ_diff are not constant, we might reach the conclusion that the conventional Michelson interferometric gravitational-wave detector is more appropriate as a gravitational-wave detector than weak-measurement inspired ones. Even in this case, however, we will be able to discuss the effect of a pulse-train light source. In experimental optics, there is a report on ultrashort optical pulse trains produced by a mode-locked laser, which states that there are shot-noise correlations in the frequency domain of an ultrashort optical pulse train and that the shot noise can be reduced using these correlations <cit.>. If the same technique can be used here, there is a possibility of reducing the shot noise through the correlations in an ultrashort optical pulse train produced by a mode-locked laser. Of course, this is no longer the weak-value amplification, but the idea comes from a point of view inspired by weak measurements. We hope that our discussion in this paper will be useful for exploring this interesting possibility, which we also leave as a future work.

§ ACKNOWLEDGMENTS

K.N. is grateful to Dr. Tomotada Akutsu and the other members of the gravitational-wave project office at NAOJ for their continuous encouragement of our research. K.N. also thanks Prof. Akio Hosoya and Prof. Izumi Tsutsui for their support and continuous encouragement.

§ REFERENCES

00 Y.Aharonov-D.Z.Albert-L.Vaidman-1988 Y. Aharonov, D. Z. Albert, and L. Vaidman, Phys. Rev. Lett. 60 (1988), 1351. N.W.M.Ritchie-J.G.Story-R.G.Hulet-1991-etc N. W. M. Ritchie, J. G. Story, and R. G. Hulet, Phys. Rev. Lett. 66 (1991), 1107. O. Hosten and P. Kwiat, Science 319 (2008), 787; K. J. Resch, Science 319 (2008), 733. P. B. Dixon, D. J. Starling, A. N. Jordan, and J. C. Howell, Phys. Rev. Lett. 102 (2009), 173601; D. J. Starling, P. B. Dixon, A. N. Jordan, and J. C.
Howell, Phys. Rev. A 80 (2009), 041803(R); D. J. Starling, P. B. Dixon, A. N. Jordan, and J. C. Howell, Phys. Rev. A 82 (2010), 063822; D. J. Starling, P. B. Dixon, N. S. Williams, A. N. Jordan, and J. C. Howell, Phys. Rev. A 82 (2010), 011802(R).M. Iinuma, Y. Suzuki, G. Taguchi, Y. Kadoya, and H. F. Hofmann, New J. Phys. 13 (2011), 033041.G. I. Viza, J. Martínez-Rincón, G. A. Howland, H. Frostig, I. Shomroni, B. Dayan, and J. C. Howell, Optics Letters 38 (2013), 2949. LIGO-GW150914-2016-GW151226-2016 B. P. Abbot et al., Phys. Rev. Lett. 116 (2016), 061102; ibid. 116 (2016), 241103. K.Nakamura-A.Nishizawa-M.-K.Fujimoto-2012K. Nakamura, A. Nishizawa, and Masa-Katsu Fujimoto, Phys. Rev. A 85 (2012), 012113. A.Nishizawa-K.Nakamura-M.-K.Fujimoto-2012 A. Nishizawa, K. Nakamura, and Masa-Katsu Fujimoto, Phys. Rev. A 85 (2012), 062108. K.Nakamura-M.Iinuma-2013K. Nakamura, M. Iinuma, Phys. Rev. A 88 (2013), 042106. A.Nishizawa-2015 A. Nishizawa, Phys. Rev. A 92 (2015), 032123. C.M.Caves-B.L.Schumaker-1985 C. M. Caves, and B. L. Schumaker, Phys. Rev. A 31 (1985), 3068; ibid. 31 (1985), 3093. C.W.Misner-T.S.Thorne-J.A.Wheeler-1973 C. W. Misner, T. S. Thorne, and J. A. Wheeler, Gravitation (Freeman, San Francisco, 1973) phase-offset-introduction-comment In the actual gravitational-wave detectors, the similar phase offset is introduced in the different context. When the arm length L is chosen so that the anti-symmetric port is completely dark port, the response of the interferometer to gravitational-wave propagation is maximum. However, in this maximum response, we cannot measure the phase of gravitational wave due to the symmetry of the interferometer. Since detected photon numbers are continuously monitored and the feedback-control technique is used so that the anti-symmetric port is always almost dark port with this finite phase offset. This feedback-control electric current is the actual data of the gravitational-wave observation. At the complete dark port, this feedback-control technique is not applicable. H.J.Kimble-Y.Levin-A.B.Matsko-K.S.Thorne-S.P.Vyatchanin-2001 H. J. Kimble, Y. Levin, A. B. Matsko, K. S. Thorne, and S. P. Vyatchanin, Phys. Rev. D 65 (2001), 022002. beam-splitter-comment In quantum mechanics, Eqs. (<ref>), (<ref>), and (<ref>) are described by the unitary transformation which comes from the energy conservation law. For an ideal beam splitter, the reflectivity of the each side of the beam splitter is different and the incident wave from the direction from the side of lower reflectivity to the side of higher reflectivity of the beam splitter is reflected with the π phase shift or fixed end reflection. This is a simplest choice of the boundary condition at the beam splitter, though one parameter family of the boundary conditions is possible according to the physical property of the beam splitter. H.Miao-PhDthesis-2010 H. Miao, “Exploring Macroscopic Quantum Mechanics in Optomechanical Devices”, PhD. thesis, The University of Western Australia, 2010. homodyne-kouchan-preparation-comment The precise meaning of “observe D^†_db̂_2D_d” will be clarified in <cit.>. K.Nakamura-M.-K.Fujimoto-2017-preparation K. Nakamura and M. -K. Fujimoto, arXiv:1709.01697 [quant-ph]; K. Nakamura and M. -K. Fujimoto, arXiv:1711.03713 [quant-ph]. I.S.Gradshteyn-I.M.Ryzhik-2000 I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products, 6th ed., edited by A. Jeffrey and D. Zwillinger (Academic, San Diego, 2000). F.Quinlan-etal.-2013 F. Quinlan, et al., Nature photonics, 7 (2013), 290.
http://arxiv.org/abs/1705.09768v3
{ "authors": [ "Kouji Nakamura", "Masa-Katsu Fujimoto" ], "categories": [ "quant-ph", "gr-qc" ], "primary_category": "quant-ph", "published": "20170527054335", "title": "Extension of the input-output relation for a Michelson interferometer to arbitrary coherent-state light sources: --- Gravitational-wave detector and weak-value amplification ---" }
D. Messias [UFAL], Iram Gleria [UFAL], S.S. Albuquerque [ARA], Askery Canabarro [ARA,BU] (corresponding author, [email protected]), H. E. Stanley [BU]

[UFAL] Instituto de Física, Universidade Federal de Alagoas, 57072-970 Maceió, Brazil
[ARA] Grupo de Física da Matéria Condensada, Núcleo de Ciencias Exatas - NCEx, Campus Arapiraca, Universidade Federal de Alagoas, 53309-005 Arapiraca-AL, Brazil
[BU] Center for Polymer Studies and Department of Physics, Boston University, Boston, Massachusetts 02215, USA

We consider a delayed nonlinear model of the dynamics of the immune system against a viral infection that contains a wild-type virus and a mutant. The model takes into account the finite response time of the immune system, and we find sustained oscillatory behavior as well as chaotic behavior triggered by the presence of delays. We present a numeric analysis and some analytical results.

Delay Differential Equations, Immune Response, Non-instantaneous Systems.

§ INTRODUCTION

We consider a nonlinear set of delay differential equations (DDEs) to model the interaction of the immune system with an external pathogen, e.g., a viral infection. Our model follows one presented in Ref. <cit.>, in which a time delay takes into account the non-instantaneous immune response caused by a sequence of events (e.g., activation of antigenic response or production of immune cells) that occurs within a finite time period. In addition, the presence of sustained aperiodic oscillations and chaotic trajectories observed in real data <cit.> indicates that time delays are needed to allow the bifurcations that cause chaotic behavior even in one- and two-dimensional models <cit.>; in ordinary differential equations (ODEs) a minimum set of three coupled equations is required.

Because the fundamental underlying mechanisms are non-instantaneous, several biological models have recently been formulated using delay differential equations. Among these are a predator-prey model with delays <cit.>, a model for the dynamics of the hormonal control of the menstrual cycle <cit.>, a model for human respiration <cit.>, a model for carbon dioxide levels in the blood <cit.>, and a number of models for viral dynamics <cit.>.

In previous research <cit.> we analyzed the cellular immune response and found that stationary solutions bifurcate to an unstable fixed point when delays are longer than a critical immune response time τ_c. We found that increasing the time delay causes the system to suffer a series of bifurcations that can evolve into a chaotic regime. We used two coupled delayed equations to model the interaction of the immune system with a target population <cit.>. We used some analytical tools to analyze delayed systems <cit.>, and we published new results for the model originally presented in Ref. <cit.>. Here we consider a three-dimensional version of a model that previously appeared in the literature <cit.> for the dynamics of the population of virus y(t) and of immune cells z(t), together with a mutant population of virus y_m(t).

Delay differential equations require both the initial conditions and the history of the dynamic variables for t<τ. Because we are using models with discrete delays, τ is constant. This is in contrast to a system with distributed delays, in which ∫_t-r^t k(t-s)x(s) ds = ∫_0^r k(z)x(t-z)dz, where 0≤ r ≤∞ is the distributed delay and the kernel k is normalized, so that ∫_-∞^∞ k(y)dy = 1. For a kernel k(u) that vanishes identically for all u>u_max, the delay can be represented by integrals of the type ∫_-∞^t M_1(s)k(t-s) ds = ∫_0^∞ M_1(t-u)k(u)du.
These are “bounded delays” because they involve the values of M_1 only at past times within (t-u_max,t). A discrete delay is a particular kind of bounded delay. More complicated forms are also possible, e.g., state-dependent delays of the type x(t-r[x(t)]), or delays distributed over space.

Introducing delays allows us to model richer behavior. Consider, e.g., the well-known logistic equation governing the dynamics of a population density N(t): Ṅ(t)= r N(t) (1 - N(t)/K), with r the growth rate and K the carrying capacity. Note that for every initial condition N(0)>0 the system ultimately reaches the stable equilibrium N(t)→ K. A delayed version of this model can be used for a species population that gathers and stores food, i.e., when resources vanish, the species population starves within a finite time τ. Reference <cit.> assumes this and analyzes the delayed system Ṅ(t)= r N(t) (1 - N(t-τ)/K).

This delayed version of the logistic equation can display behavior that the instantaneous one-dimensional model cannot, because ODE systems need at least a three-dimensional state space to model chaos, as demonstrated in Lorenz's seminal work <cit.>; there, the number of initial conditions is equal to the number of degrees of freedom. In delayed systems the number of degrees of freedom is infinite, and chaos can occur even in one-dimensional systems, as is the case for one-dimensional non-invertible maps.

We present the model in the next section. In section <ref> we present some analytical and numeric results, and in section <ref> we present our conclusions.
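As a minimal illustration of how a single discrete delay enriches the dynamics, the sketch below — our own, with assumed parameter values — integrates the delayed logistic equation of Ref. <cit.> with a fixed-step Euler scheme and a constant history; for rτ > π/2 the equilibrium N=K loses stability and sustained oscillations appear.

```python
import numpy as np

def delayed_logistic(r=2.0, K=1.0, tau=1.0, N0=0.5, T=60.0, dt=1e-3):
    """Euler integration of N'(t) = r N(t) (1 - N(t - tau)/K)."""
    lag = int(round(tau / dt))          # delay measured in time steps
    n = lag + int(round(T / dt))        # history on [-tau, 0], then t in [0, T]
    N = np.empty(n + 1)
    N[: lag + 1] = N0                   # constant history N(t) = N0 for t <= 0
    for i in range(lag, n):
        N[i + 1] = N[i] + dt * r * N[i] * (1.0 - N[i - lag] / K)
    return N

N = delayed_logistic()                  # here r*tau = 2 > pi/2: sustained oscillations
print("late-time min/max of N:", N[-20000:].min(), N[-20000:].max())
# With r*tau < pi/2 (e.g. r=1.0, tau=1.0) the same code relaxes to N -> K instead.
```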
§ MODEL

Our model is based on research described in Refs. <cit.>, which uses a two-dimensional model for the dynamics of the population of virus y(t) and of immune cells z(t). We use a time-lagged response for the immune system, following previous research demonstrating its importance in the appearance of Hopf bifurcations <cit.>, chaotic trajectories <cit.>, and sustained oscillatory behavior rarely seen in the instantaneous version of the model <cit.>. Here we extend the model by adding a population of mutant virus y_m(t):

ẏ = r (1-α) y(t) (1-y(t)/K) - a y(t) - p y(t)z(t),
ẏ_m = α_m r_m y_m(t) (1-y_m(t)/K_m) - a_m y_m(t) - p_m y_m(t)z(t),
ż = c y(t-τ_1)z(t-τ_1)/(1+d y(t-τ_1)) + c_m y_m(t-τ_2)z(t-τ_2)/(1+d_m y_m(t-τ_2)) - q y(t)z(t) - q_m y_m(t)z(t) - b z(t),

where r(1-α) is the growth rate of the viral population for y≈0. This rate decreases and reaches zero when the virus population equals K. The virus population decays at a rate a; we then have a net growth rate of r(1-α)-a and a carrying capacity of K(r(1-α)-a)/(r(1-α)). The viruses are eliminated by cells of the immune system at a rate p. The variable y_m represents the concentration of the mutant viruses; its net growth rate and carrying capacity are, respectively, r_mα_m-a_m and K_m(rα_m-a)/(rα_m), and the mutants are eliminated at a rate p_m.

The immune cell concentration z grows proportionally to the virus population, subject to a saturation term. The value τ_2 is the delay in the immune response to the viral infection, while the delay τ_1 refers to the processes used by the organism to prepare the cells to fight the virus. Immune cells are attacked and destroyed by the original viruses and their mutant version at rates q and q_m, respectively. The factors 1/(1+d y(t-τ_1)) and 1/(1+d_m y_m(t-τ_2)) express that the immune response is proportional to the product of the virus population (either y or y_m) and the population of immune cells z, but saturates when the virus population is large.

Numerical estimations of the parameters are provided in Ref. <cit.>:

r = 6 day^-1, K = 3 virus mm^-3, p = 1 mm^3 cells^-1 day^-1, a = 3 day^-1, c = 4 mm^3 virus^-1 day^-1, d = 0.5 mm^3 virus^-1, b = 1 day^-1, q = 1 mm^3 virus^-1 day^-1.

Identical numeric values are assumed for K_m, r_m, a_m, c_m, d_m, and q_m. The value p_m = 0.9 < p is an exception, because it is more difficult for the immune system to eliminate cells infected by the mutant virus. We also assume α=1 and α_m=0.05, which indicates that the mutation is a residual portion of the replication mechanisms.
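A minimal fixed-step integrator for the delayed system above is sketched below. This is our own illustration: the Euler scheme, step size, constant initial history, and the delay values are assumptions; note also that with the quoted α=1 the wild-type's nonlinear growth term vanishes, so smaller values of α may be explored with the same code.

```python
import numpy as np

# Parameter values as quoted in the text (units: days, mm^-3)
r = rm = 6.0; K = Km = 3.0; a = am = 3.0; c = cm = 4.0
d = dm = 0.5; b = 1.0; q = qm = 1.0; p = 1.0; pm = 0.9
alpha, alpha_m = 1.0, 0.05

def rhs(y, ym, z, yd1, zd1, ymd2, zd2):
    """Right-hand side; *d1 / *d2 denote values delayed by tau1 / tau2."""
    dy  = r*(1-alpha)*y*(1-y/K) - a*y - p*y*z
    dym = alpha_m*rm*ym*(1-ym/Km) - am*ym - pm*ym*z
    dz  = (c*yd1*zd1/(1+d*yd1) + cm*ymd2*zd2/(1+dm*ymd2)
           - q*y*z - qm*ym*z - b*z)
    return dy, dym, dz

def integrate(tau1=5.0, tau2=5.0, T=200.0, dt=1e-3, hist=(0.1, 0.1, 0.1)):
    l1, l2 = int(tau1/dt), int(tau2/dt)
    n = max(l1, l2) + int(T/dt)
    Y = np.empty((n+1, 3)); Y[:max(l1, l2)+1] = hist   # assumed constant history
    for i in range(max(l1, l2), n):
        y, ym, z = Y[i]
        dy, dym, dz = rhs(y, ym, z, Y[i-l1, 0], Y[i-l1, 2],
                          Y[i-l2, 1], Y[i-l2, 2])
        Y[i+1] = Y[i] + dt*np.array([dy, dym, dz])
    return Y

Y = integrate()
print("late-time z extrema:", Y[-50000:, 2].min(), Y[-50000:, 2].max())
# Scanning tau1 (or tau2) and recording late-time maxima of z(t) reproduces
# bifurcation diagrams of the kind discussed in the Results section.
```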
§ RESULTS

Reference <cit.> presents several analytical results for the two-dimensional version of (<ref>), which does not take into account the mutant population y_m. Because our model is three-dimensional, it is cumbersome to analyze, and we focus on numeric results. Similarly to the procedure used for the logistic map, we focus on the emergence of bifurcations and chaos as the time-delay values increase.

The system in (<ref>) has a total of 11 equilibrium points. Six are facial points (with at least one null component). Those with simple algebraic expressions are

y = 0, y_m = 0, z = 0; y = 0, y_m = K_m(r_m-a_m)/r_m, z = 0; y = K(α r+a-r)/[r(α-1)], y_m = 0, z = 0.

The others have cumbersome algebraic expressions, which we omit here for the sake of simplicity. Note that the stability of the fixed points of an n-dimensional system with k delays can be analyzed using the usual Jacobian evaluated at the equilibrium point <cit.>. Each ẋ_i, i=1,⋯,n, can be written as

ẋ_i=∑_j=1^k F_j^i(x_1(t-τ_j),x_2(t-τ_j),⋯).

Performing a series expansion around the equilibrium point x^*=(x_1^*,⋯,x_n^*), we obtain for each x_i, i=1,⋯,n,

ẋ_i ≈ ∑_j=1^k ( F_j^i(x_1,⋯)|_x^* + ∂ F_j^i/∂ x_1|_x^*(x_1(t-τ_j)-x_1^*) + ∂ F_j^i/∂ x_2|_x^*(x_2(t-τ_j)-x_2^*)+⋯).

We then have a linear system for the variables x̃_i≡ x_i-x_i^*, with k Jacobian matrices of the form

J_j=[ [ ∂ F_j^1/∂ x_1, ∂ F_j^1/∂ x_2, ⋯; ⋮, ⋮, ⋮; ∂ F_j^n/∂ x_1, ∂ F_j^n/∂ x_2, ⋯ ] ]

evaluated at the fixed points. The stability of a particular fixed point is determined by the eigenvalues of its corresponding Jacobian. Bifurcations occur whenever one eigenvalue crosses the imaginary axis as one or more parameters, including the delays, change. Typical bifurcations involve a turning point when the eigenvalue is initially null, and a Hopf bifurcation when a pair of complex eigenvalues crosses the imaginary axis <cit.>.

The general expression for the Jacobian is

J= [ [ r(1-α)(1-2y^*/K)-a-pz^*, 0, -py^*; 0, α_m r_m(1-2y_m^*/K_m)-a_m-p_m z^*, -p_m y_m^*; (cz^*/(1+dy^*)^2 -qz^*)e^-λτ_1, (c_m z^*/(1+d_m y_m^*)^2-q_m z^*)e^-λτ_2, cy^*/(1+dy^*)+ c_m y_m^*/(1+d_m y_m^*)-qy^*-q_m y_m^*-b ] ].

The Jacobian at the origin is thus

J̃=[ [ r(1-α)-a, 0, 0; 0, r_m-a_m, 0; 0, 0, -b ] ],

which holds for all values of τ_1,τ_2. The eigenvalues are r(1-α)-a, r_m-a_m, and -b. Stability (with only negative eigenvalues) can be achieved for smaller r and r_m (the viral growth rates) and larger a, a_m (the natural decay rates of the virus populations); in that case the system ultimately loses all of its viruses and has no immune cells, irrespective of the delay. For the set of parameters chosen here, however, the origin is unstable ∀τ_1,τ_2. Note that for the three fixed points in (<ref>) the stability is unchanged by non-null delays; this can be seen from (<ref>) by substituting z=0. Note that r-α, r-a, and -r_m+a_m are common eigenvalues, a condition that renders the origin unstable for all three. For null delays and the chosen set of parameters, only two of the 11 equilibria are stable.

Because y, y_m and z are densities and therefore positive quantities, one stable equilibrium is physically irrelevant: y = 5.568989996, y_m = 5.046486447, and z = -7.88108099. The other stable equilibrium is the spiral focus (SF): y = 0.06265629108, y_m = 0.3385711289, and z = 2.580953047. For the parameters used, the remaining equilibria are all unstable and comprise six facial equilibria and two physically irrelevant equilibria that have at least y<0, y_m<0 or z<0. Here we focus on how increasing the value of the time delay alters the stability of the stable SF solution. A theorem presented in Ref. <cit.> describes the conditions for switches in stability in the presence of delays and establishes a critical τ^*>0 above which the equilibrium point is always unstable. The theorem states:

Let the characteristic equation of a given fixed point be written R(λ)+S(λ)exp(-λτ)=0, where R(λ) and S(λ) are analytic in the right half plane Re λ>-δ, δ>0. Suppose the following properties hold: (i) R(λ) and S(λ) have no common zero; (ii) R(-Is)=R̄(Is), S(-Is)=S̄(Is), where the bar indicates the complex conjugate and I=√(-1); (iii) R(0)+S(0)≠0; (iv) the right half plane contains at most a finite number of roots of R(λ)+S(λ)exp(-λτ)=0 when τ=0; and (v) F(y)=|R(Iy)|^2-|S(Iy)|^2 for real y has at most a finite number of zeros.

Then the following statements are true: (a) If F(y)=0 has no positive real roots, and if the associated fixed point is stable (unstable) for null delays, it will remain stable (unstable) for all delays. (b) If F(y)=0 has at least one positive root and all roots are simple, stability switches can occur with increasing τ; there exists a τ^*>0 above which the fixed point is unstable for all τ>τ^*, and as τ varies from zero to τ^* at most a finite number of stability switches may occur.

The authors of Ref. <cit.> used this theorem to analyze their model. Here we consider equal delays τ_1=τ_2=τ. The characteristic equation at the stable spiral-focus equilibrium (y = 0.06265629108, y_m = 0.3385711289, z = 2.580953047) reads

J_SF(λ) = (-0.4825880248-1.960851071λ) e^-λτ - 0.7961892098λ^2 - λ^3 - 0.08061172163λ + 8.06117224×10^-11 = 0,

which is clearly of the type R(λ)+S(λ)exp(-λτ)=0. Thus F(y)=|R(Iy)|^2-|S(Iy)|^2 yields

F_SF(y) = y^6 + 0.4726938145y^4 - 3.838438673y^2 - 0.2328912017.

The roots of F_SF(y)=0 are ±1.330454624, ±0.2455259061 I, and ±1.477335558 I; F(y)=0 therefore has a positive simple real root, so statement (b) of the theorem applies. Figure <ref> shows the expected stability switches: we plot z(t) versus y(t) for τ_1=τ_2=0 (the stable case), τ_1=τ_2=0.2, τ_1=τ_2=5, and τ_1=τ_2=15. Stability persists at delay τ=0.2 but is lost at τ=5 and τ=15. These results demonstrate how the introduction of delays can destabilize a stable solution and promote richer dynamics in the system.

For the sake of comparison, we consider the two-dimensional model proposed in Ref. <cit.> (which has no mutant virus) by setting y_m=0 and K_m, r_m, a_m, c_m, d_m, q_m, τ_2=0 in (<ref>). Figure <ref> shows the maxima of z(t) versus τ_1. Note that there is a series of bifurcations that switches between sustained oscillations and chaotic behavior, with windows of periodic behavior (e.g., around τ_1=14). When we use the term c_m y_m(t)z(t)/(1+d_m y_m(t)) with τ_2=0 to introduce the mutant component into our model, the dynamics of (<ref>) changes: Figure <ref> shows that periodic orbits are present but chaotic behavior is not. Although merely inserting a new equation into the system does not enrich the dynamics, the situation changes completely when τ_2≠0.
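The root computation underlying the stability-switch argument above is easy to verify; the minimal check below (ours) treats F_SF as a cubic polynomial in u=y^2 and confirms the quoted roots.

```python
import numpy as np

# F_SF(y) = y^6 + 0.4726938145 y^4 - 3.838438673 y^2 - 0.2328912017,
# written as a cubic in u = y^2.
u_roots = np.roots([1.0, 0.4726938145, -3.838438673, -0.2328912017])
y_roots = np.concatenate([np.sqrt(u_roots.astype(complex)),
                          -np.sqrt(u_roots.astype(complex))])
print(np.sort_complex(y_roots))
# One pair of simple real roots (~ +/-1.3305) and two purely imaginary pairs:
# F(y)=0 has a positive simple root, so statement (b) of the theorem applies
# and stability switches are expected as tau grows.
```

With statement (b) verified numerically, we return to the effect of the mutant delay τ_2.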
Figure <ref> shows that this time delay causes more complex patterns to emerge, including regions of chaotic behavior.§ CONCLUSIONWe have considered a nonlinear set of delay differential equations to model the interaction between an immune system and an external pathogen, e.g., a viral infection. We extend the previous model considered in <cit.> by introducing a new variable that takes into account mutant viruses. We find a series of bifurcations that lead to chaotic behavior, an outcome that agrees with the results observed in real data <cit.> and that corroborates previous work indicating the need for the time delays that generate richer behavior <cit.>.§ ACKNOWLEDGMENTS AC thanks the Alagoas State Research Agency FAPEAL for support through major projects (PPP - 20110902-011-0025-0069 / 60030-733/2011), also CNPq for PDE (207360/2014-6) and Universal (423713/2016-7) grants. DM acknowledges a scholarship by the Brazilian funding agency CAPES. The Boston University work was supported by DTRA Grant HDTRA1-14-1-0017, by DOE Contract DE-AC07-05Id14517, and by NSF Grants CMMI 1125290, PHY 1505000, and CHE-1213217. § REFERENCES99 shu2014sustained H. Shu, L. Wang & J. Watmough, J. Math. Biol. 68, 477 (2014).askery A. Canabarro, I. Gleria & M. L. Lyra, Physica A 342, 234 (2004).shu2012 M. Y. Li & H. Shu, J. Math. Biol. 64, 1005 (2012).ortiz2002 G. M. Ortiz et al., J. Virology 76, 411 (2002).elder E. de Souza, M. L. Lyra & I. Gleria Chaos, Solitons and Fractals 42, 2494 (2009).smith2011introduction H. L. Smith, An Introduction to Delay Differential Equations with Applications to the Life Sciences (Springer-Verlag, New York, 2011).clark2003bulletL.H. Clark, P.M. Schlosser, S.F. Selgrade, Bull. Math. Bio. 65 157 (2003).batzel2001AMCJ. J. Batzel & H. T. Tran, Appl. Math. Comput. 110, 1 (2000).wang2006viral K. Wang, W. Wang & X. Liu, Chaos, Solitons and Fractals 28, 90 (2006).culshaw2000delay R. V. Culshaw & S. Ruan, Math. Biosci. 165, 27 (2000). iram Iram Gleria, A. R. Neto & A. Canabarro, Brazilian J. Phys. 45, 450 (2015).nelson2002mathematical P. W. Nelson & A. S. Perelson, Math. Biosci. 79, 73 (2002).tam J. Tam, IMA J. Math. Appl. Med. Biol. 16, 29 (1999).shi X. Zhou, X. Song & X. Shi, Appl. Math. Comput. 199, 23 (2008).culshaw R. V. Culshaw& S. Ruan, Math. Biosci. 179, 73 (2002).buric N. Burić, M. Mudrinic & N. Vasović, Chaos, Solitons & Fractals 12, 483 (2001).song X. Song, S. Wang & J. Dong, Jour. Math. Anal. Appl. 373, 345 (2011).liu K. Wang, W. Wang, H. Pang, & X. Liu, Phys. D 226, 197 (2007).iram2 E. de Souza, M. Lyra & I. Gléria, Brazilian J. Phys. 39, 431 (2009).culshaw2 R. V. Culshaw, S. Ruan & G. A. Webb, J. Math. Biol. 46, 425 (2003).bocharov G. A. Bocharov & F. A. Rihan, Jour. Comput. Appl. Math. 125, 183 (2000).komarova2003boosting N. L. Komarova, Proc. Natl. Acad. Sci. USA 100, 1855 (2003).hutchinson1948circular G. E. Hutchinson, Ann. New York Acad. Sci. 50, 221 (1948).lorenz1963 E. N. Lorenz, J. Atmospheric Sci. 20, 130 (1963).vinte K. L. Cooke & V. den Driessche, P. Funkcial Ekvac. 29, 77 (1986).
http://arxiv.org/abs/1705.10693v1
{ "authors": [ "D. Messias", "Iram Gleria", "S. S. Albuquerque", "Askery Canabarro", "H. E. Stanley" ], "categories": [ "q-bio.PE", "37M20" ], "primary_category": "q-bio.PE", "published": "20170526214417", "title": "A nonlinear delayed model for the immune response in the presence of viral mutation" }
[email protected]@saha.ac.in,[email protected] Matter Physics Division, Saha Institute of Nuclear Physics, Calcutta 700064, India We study the noisy nonequilibrium dynamics of a conserveddensity that is driven by afluctuating surface governed by the conserved Kardar-Parisi-Zhang equation. Weuncover the universal scaling properties of the conserved density. We consider two separateminimal models wherethe surface fluctuations couple (i) with the spatial variation of the conserveddensity, and (ii) directly with the magnitude of the conserved density. Both these two models conservethe density, but differ from symmetry stand point. We use our result tohighlight the dependence ofnonequilibrium universality classes on the interplay between symmetries and conservation laws.Symmetries and scaling in generalised coupled conservedKardar-Parisi-Zhang equations Abhik Basu December 30, 2023 ======================================================================================§ INTRODUCTION The concept of universality classes, which are parametrised by the space dimensions and the order parameter components,allows one to have a systematic physical understanding of universal scalingproperties in equilibrium systems <cit.>. These universalityclasses are found to be robust against dynamical perturbations so long as the general conditions forequilibrium are maintained. In contrast, statistical properties of truly nonequilibrium dynamic phenomena in systems with generic non-Gibbsiandistribution are found to be strongly sensitive to all kinds of perturbations. Prominent examples are driven diffusive systems <cit.> and diffusion-limited reactions <cit.>. For instance, one finds that for the Kardar-Parisi-Zhang (KPZ) equation of surfacegrowth <cit.>, that shows paradigmatic nonequilibrium phasetransitions <cit.>,anisotropic perturbations are relevant in d > 2 spatial dimensions, leading to rich phenomena that include novel universality classes and the possibility of first-order phase transitions and multicritical behavior <cit.>. Furthermore, novel nonequilibrium scaling behaviour including continuouslyvarying universality classes are often found inmulticomponent drivensystems <cit.>. Related physical realisations include drivensymmetric mixture of a miscible binary fluid <cit.> and magnetohydrodynamicturbulence <cit.>, dynamic roughening of strings moving in random media <cit.>, sedimenting colloidal suspensions <cit.> and crystals <cit.>. Conservation laws are known to play significant roles in physical systems. Forequilibrium systems, they affect only the dynamical properties <cit.>,where as forout of equilibrium systems even time independent quantities areaffected by conservation laws.This was succinctly brought out bythe studies on a conserved version of the KPZ equation (C-KPZ) that shows scaling behaviour distinctly different from the usual KPZequation <cit.>. For instance, the KPZ universality class ischaracterised by theexact relation between the scaling exponents:χ_kpz + z_kpz=2 <cit.>, where χ_kpz andz_kpz, respectively, are the roughness and dynamic scaling exponentsdescribing the spatial and temporal scaling of the KPZ universality class.In contrast, the C-KPZ equation does not admit any such exact exponentrelations <cit.>. The KPZ equation has subsequently beengeneralised to multicomponent versions to address different questions ofprinciples. 
For example, how the surface fluctuations in the KPZ equationcontrol the fluctuations of a conserved scalar density that is dynamicallycoupled to the KPZ equation has been studied <cit.> by using awell-known two-component variant of the KPZ equation <cit.>. It isalso known that a breakdown of an external symmetry like parity can lead tonovel scaling behaviour <cit.>. Notable previous works that form a major motivation for our studies here arethe studiesreported by Drossel and Kardar (hereafter DK) inRefs. <cit.> using a set of coupled generalised KPZ equations for theheight field and a density. In particular, DK studied fluctuations inthe concentrations of structureless particles advected by aone-dimensional (1d) Burger's fluid, orequivalently particles sliding on a fluctuating KPZ surface <cit.>. Byretaining feedback from the density fluctuations on the fluctuating KPZ surface, theyelucidated various regimes depending upon the choice of parameters for advectionor anti-advection. The scaling exponents are obtained. Remarkably, continuouslyvarying scaling exponents are illustrated for the anti-advection case inRef. <cit.>. In a subsequent study, DK considered the interplay between afluctuating surface and phase ordering <cit.>, again using a set ofcoupled generalised KPZ equations for the height field and a nonconserveddensity. They obtained the relevant scaling exponents and in some casesillustrated continuously varying dynamic exponent in the model.These studiesby DK openup the questions: (i) How dothe internal symmetries of the equations of motion that controlthe structure of the nonlinear dynamical cross-coupling terms between thedifferent fields conspirewiththe conservation laws to determine the universal scaling behaviour? (ii) Howdoes theconservation law for the surface fluctuations affect the dynamics andfluctuations of an attached density?In order tosystematically address these generic issues, we study how a conservedfluctuatingsurface described by the C-KPZequation affects the spatio-temporal properties of a conserved scalar densitythat isdynamically coupled to the fluctuating surface. When there are multipledynamically coupled fields, with all of them exhibiting dynamical scaling, it is not apriori clear whether or not they should all have the same dynamicexponent; in case of equal dynamic exponents the model is said to display strong dynamic scaling, elseweak dynamic scalingensues <cit.>. In a study on coupled one-dimensional model,Ref. <cit.> showedthe sensitive dependence of the nature of dynamic scaling on the precise formsof the dynamic couplings in the model equations. In a model with severaldynamical fields, one must thus distinguish between strong and weakdynamic scaling. These theoretical issues formthe major motivation of the present work. Independent of any specificapplications, the general importance ofour studies here lie in their ability to identify ingredients that may controllong-time, large-distance universal scaling behaviour in driven systems.We study the coupled nonequilibrium dynamics of a conserved height field hand a conserved signed density ϕ (that can be positive ornegative, e.g., Ising spin-like degrees of freedom) withinsimple reduced models. In the absence ofany general framework for nonequilibrium systems, such simple models areparticularly useful to study and answer questions ofprinciple as we illustrate below. 
Morespecifically, we consider the nonlinearly coupled dynamics when h is autonomous, i.e., the time-evolution of h is independent of the second fieldϕ and follows the C-KPZ equation. This models the dynamical evolution of astructureless signed species living on a fluctuating surface with conservedfluctuations. In the absence of thecouplings with h, ϕ follows spatio-temporally scale invariant dynamics described by linear equations ofmotion with exactly known scaling exponents.We consider two differentmodels for conserved ϕ-dynamics: (i) Model I, where the fluctuations ofh couples only with thespatial variation of ϕ, given by ∇ϕ, i.e., the dynamics of ϕ is invariant underthe shift ϕ→ϕ+const., aninternal symmetry that leaves the dynamics unchanged;and (ii)Model II, where the dynamics isnot invariantundersuch a shift of ϕ (i.e., no such invariance, unlike inModel I). Generally, we find that the scalingpropertiesof Model I and Model II are starkly different - the spatio-temporalscaling of ϕdepends crucially on the detailed nature of its symmetry-determined couplingwith h. Theremainder of the article is organised as follows: In Sec. <ref>,we introduce Model I,write down the general symmetry permitted equations of motionfor h and ϕ and evaluate the scalings of the model parameters. Then in Sec. <ref>, we discuss Model II and note thedifferences between the two models. We finally summarise and conclude in Sec. <ref>.§ MODEL I The dynamics ofh is simply given by the C-KPZ equation <cit.> ∂ h/∂ t= -∇^2 [ ν∇^2 h +λ_1/2(∇ h)^2] + η_h,where η_h is a Gaussian-distributed, zero-mean conserved noise with avariance ⟨η_h( x,t) η_h(0,0)⟩=-2 D_h ∇^2 δ ( x)δ (t); ν>0 is a damping coefficient and λ_1 is a nonlinearcoupling constant <cit.>.We now write down the dynamical equations of ϕ in thehydrodynamic limit by using symmetry considerations. We demand (i) translational and rotational invariance, (ii) conservation ofϕ, and (iii) invariance underϕ→ϕ +const for the dynamics of ϕ. The lastcondition can be fulfilled only if derivatives of ϕ appear in thedynamical equations. Furthermore, for simplicity we restrictourselves to systems that are linear in ϕ-fluctuations, so that thedynamics of ϕ is invariant under the inversion of ϕ. Thegeneral form of the relaxational equation of motion for a conserved densityϕ is (we ignore any advective processes)∂ϕ/∂ t= μ∇^2 δℱ/δϕ + NL+η_ϕ.Here, ℱ is a free energy functional that controls the dynamics andthermodynamics of ϕ in equilibrium. We choose ℱ=∫d^dx[ r_0ϕ^2 + (∇ϕ)^2]/2,where we have neglected anynonlinear terms for simplicity; r_0=T-T_c with T as the temperature andT_c the critical temperature. Furthermore, NL represents conservednonlinear terms ofnonequilibrium origin that are invariant under inversion of ϕ as well asa constant shift of ϕ. We first consider the case with r_0=0, i.e.,ϕ-fluctuations are critical.∂ϕ/∂ t= -∇^2 [ μ∇^2 ϕ + λ_2 (∇h).(∇ϕ) ] + η_ϕ. Here, η_ϕ is a Gaussian-distributed, zero-mean conserved noise with avariance ⟨η_ϕ( x,t) η_ϕ(0,0)⟩=-2 D_ϕ∇^2δ( x) δ (t), μ>0 is a damping coefficient and λ_2 is anon-linear cross-coupling coefficient through which h affects thedynamics of ϕ. The sign of λ_2 is arbitrary.Equation (<ref>) corresponds to a current of ϕ given byJ_ϕ 1= ∇[μ∇^2ϕ + λ_2(∇ h)·(∇ϕ)].Thus, the nonequilibrium contribution to J_ϕ 1 can act only whenboth ϕ and h have nonzero gradients and one of these gradients isspatially varying.Note that J_ϕ 1 remains invariant underϕ→ϕ+const.. This is in contrast to the models studied inRefs. <cit.>. 
Furthermore, in contrast to our model I, the dynamics of the density field in Ref. <cit.> is non-conserved. Clearly, Eq. (<ref>) is invariant under ϕ→ϕ + const.. It is clear from the linearised versions ofEqs. (<ref>) and (<ref>) that the naïve scaling dimensions of hand ϕ are identical.Notice that both Eqs. (<ref>) and(<ref>) are invariant under spatial inversion, e.g., x→ - x, aswell as inversion of ϕ.Equations (<ref>) and (<ref>) do not admit any generalised Galileaninvariance; see discussions below and Ref. <cit.> for technicalcomments. §.§ Scaling in Model I It is instructive to first consider the linearised version ofEq. (<ref>) bysettingλ_2=0. In that limit, the dynamics of ϕ can be solved exactly. In particular, in the Fourier space the correlation function C_ϕ( q,ω)=⟨ |ϕ( q,ω)|^2⟩ takes the form C_ϕ( q,ω)=2D_ϕ q^2/ω^2 +μ^2 q^8,where q and ω are the Fourier wavevector and frequency,respectively. Now, correlator (<ref>) corresponds to the dynamic exponentz_ϕ =4 and roughness exponent χ_ϕ=2-d/2 for the fieldϕ <cit.>. Compare these results with the correlations ofh from the linearised version of Eq. (<ref>).This yields the corresponding dynamic and roughness exponents for h asz_h=4 and χ_h=2-d/2, respectively. Clearly, z_ϕ=z_h at the linear level,implying strong dynamic scaling at the linear level. It is of course well-knownthat the scaling are affected by relevant (in a scaling sense)nonlinearities <cit.>, and as a result their values at the linear levelget modifiedby the nonlinear effects. For instance, in the lowest order renormalisedperturbation theory <cit.>, z_h=(12-ϵ)/3 with ϵ=2-d>0,where as z=4 for d≥ 2 <cit.>. Whether or not strongdynamicscaling is still observed at the nonlinear level, is a question that we studyhere.We can now write the dynamic generating functional <cit.>, 𝒵_I, averagedover thenoisesη_h and η_ϕ, for the coupled system; see alsoRef. <cit.> for similar functional approaches𝒵_I=∫𝒟 h 𝒟ϕ𝒟ĥ𝒟ϕ̂exp[S_I],where ĥ and ϕ̂ are dynamic conjugate fields to h and ϕ,respectively <cit.>; S_I is the action functional given by S_I = ∫ d^d x dt [ D_h ĥ∇^2 ĥ + D_ϕϕ̂∇^2 ϕ̂+ ĥ( ∂ h/∂ t+∇^2[ν∇^2 h +λ_1/2(∇ h)^2] )+ ϕ̂( ∂ϕ/∂ t+∇^2[μ∇^2ϕ+ λ_2(∇ h)(∇ϕ)] )].Nonlinear couplings λ_1,λ_2 preclude any exact enumeration ofthe relevant correlation functions from the action functional S_I in Eq. (<ref>). Naturally, perturbative calculations are used. Naïveperturbative expansions yield diverging corrections to the measurablequantities. In order to deal with these long wavelength divergences in asystematic manner, we employ Wilson momentum shell dynamic renormalisation group (DRG) <cit.>. To this end, we first integrate out fields h( q,ω),ϕ( q,ω) with wavevector Λ/b<q<Λ, b>1, perturbatively up to the one-loop order in (<ref>). Here, Λ is an upper cut off for wavevector. This allows us to obtain the “new" model parameters corresponding to a modified action S_I^< with an upper cutoff Λ/b<Λ; see Appendix for the correspondingone-loop Feynman diagrams.In order to extract the renormalised parameters, we then rescalewavevectors and frequencies according to q'=b q and ω'=b^zω.Here b=exp[l] is a dimensionless length scale. In a simple model with a single variable, z becomes the dynamicexponent. For a multivariable problem as ours with the attendant possibility ofunequal dynamic exponents for h and ϕ, the interpretation of z infrequency rescaling as above will be clear as we go along. Under theserescalings, fields h and ϕ also scale. 
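Before carrying out this rescaling bookkeeping, it is perhaps worth recording explicitly how the linear-theory exponents quoted above follow from the correlator; the short check below is ours, using the standard dynamic-scaling form of the correlation function.

```latex
C_\phi(\mathbf{q},\omega)
 = \frac{2D_\phi q^2}{\omega^2+\mu^2 q^8}
 = q^{-6}\,\frac{2D_\phi}{(\omega/q^{4})^{2}+\mu^{2}}
 \equiv q^{-(d+2\chi_\phi+z_\phi)}\,
   \mathcal{G}\!\left(\frac{\omega}{q^{z_\phi}}\right)
 \;\Longrightarrow\; z_\phi=4,\quad d+2\chi_\phi+z_\phi=6,
 \;\text{i.e.}\;\chi_\phi=\frac{2-d}{2}.
```

Equivalently, the equal-time correlator ∫ dω C_ϕ(q,ω)/(2π) = D_ϕ/(μ q^2) scales as q^-(d+2χ_ϕ), giving the same χ_ϕ. With these linear benchmarks recorded, we return to the rescaling of the fields.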
We write, in Fourier space,

h( q,Ω)= ξ_h h( q^',Ω^'), ĥ( q,Ω)= ξ̂_h ĥ( q^',Ω^'), ϕ( q,Ω)= ξ_ϕ ϕ( q^',Ω^'), ϕ̂( q,Ω)= ξ̂_ϕ ϕ̂( q^',Ω^').

Using the redundancy <cit.> in the rescaling factors ξ̂_h, ξ_h, ξ̂_ϕ and ξ_ϕ, we demand that the coefficients of ∫ d^d q dΩ ĥ(-iΩ)h and ∫ d^d q dΩ ϕ̂(-iΩ)ϕ remain unity. This leads to the following condition on the rescaling factors:

ξ̂_h ξ_h=1=ξ̂_ϕ ξ_ϕ.

In real space, let h( x^', t^')=ξ_h^R h( x,t) and ϕ( x^', t^')=ξ_ϕ^R ϕ( x,t). Thus ξ_h^R=b^-(d+z)ξ_h=b^χ_h and ξ_ϕ^R=b^-(d+z)ξ_ϕ=b^χ_ϕ, where χ_h and χ_ϕ are the roughness exponents <cit.> associated with h and ϕ, respectively.

§.§.§ Recursion relations and scaling exponents

We set up a perturbative DRG up to the one-loop order, in which the one-loop fluctuation corrections to the different model parameters are obtained. Notice that there are no fluctuation corrections to λ_1 at this order. In Ref. <cit.>, this was ascribed to a modified Galilean invariance; later on, it was argued in Ref. <cit.> that there are indeed corrections to λ_1 at the two-loop order. Such considerations hold for Model I as well. Since we stick to a one-loop DRG, we ignore such issues here. Following the standard DRG procedure <cit.>, we arrive at the following recursion relations [with b=exp[l]]:

d ν/d l = ν[z-4+g (4-d)],
d μ/d l = μ[z-4+B^2 g/(P(1+P))(4-d+2(1-P)/(1+P))],
d λ_1/d l = λ_1[z+χ_h-4],
d D_h/d l = D_h[z-2-d-2χ_h],
d D_ϕ/d l = D_ϕ[z-2-d-2χ_ϕ],
d λ_2/d l = λ_2[χ_h+z-4+ 2gB(3+P)/(1+P)^2 - 4gB^2/(1+P)^2 - 2gB/(1+P)],

where P=μ/ν, B=λ_2/λ_1 and g=λ_1^2 D_h K_d Λ^2/(4ν^3 d) are the effective dimensionless coupling constants; under rescaling of space and time g scales as b^(2-d), implying d=2 to be the critical dimension <cit.>. The flow equations for g, P and B follow immediately:

d g/d l = g[2-d + g(d-4)],
d P/d l = -Pg[4-d - 2B^2/(P(1+P))(4-d+2(1-P)/(1+P))],
d B/d l = B[2gB(3+P)/(1+P)^2 - 4gB^2/(1+P)^2 - 2Bg/(1+P)].

At the DRG fixed point (FP), dg/dl=0=dB/dl=dP/dl. From Eq. (<ref>) we then have B^*=1, P^*=1 and g^*=(2-d)/[3(4-d)] or g^*=0 at the FP. Linear stability analysis reveals that g^*=(2-d)/[3(4-d)] is the stable FP for d<2; for d≥2, g^*=0 <cit.>. For d<2, with these values of B^*, P^* and g^* at the stable DRG FP, we note from Eqs. (<ref>) and (<ref>) that the choice z=(10+d)/3 makes both dν(l)/dl and dμ(l)/dl vanish at the DRG FP. This in turn implies that both h and ϕ have the same dynamic exponent,

z_h=z_ϕ=(10+d)/3.

Thus, Model I displays strong dynamic scaling. Furthermore, by using (<ref>) and (<ref>) at the stable DRG FP for d<2, we obtain

χ_h=χ_ϕ=(2-d)/3, d< 2.

Also, as expected and in contrast to the results in Ref. <cit.>, the flat phase of the C-KPZ equation becomes unstable below d=2, and not below d=4, the roughness exponent becoming positive below d=2 <cit.>; equivalently, d=2 is the critical dimension for the C-KPZ equation. For d≥2, the nonlinearities are irrelevant (in an RG sense), and hence the results of the linear theory hold. Note that the nonlinearities in Ref. <cit.> become irrelevant only above d=6.

§.§.§ Model I with λ_1=0

Consider now the limiting case with λ_1=0. Here h evolves linearly, with z_h=4 and χ_h=(2-d)/2 known exactly. The flow equations simplify to

d μ/d l = μ[z_ϕ-4+λ_2^2 D_h K_d/(2νμ(ν+μ)d)(4-d+2(ν-μ)/(ν+μ))],
d D_ϕ/d l = D_ϕ[z_ϕ-2-d-2χ_ϕ],
d λ_2/d l = λ_2[χ_h+z_ϕ-4-λ_2^2 D_h K_d Λ^2/(ν(μ+ν)^2 d)].

In obtaining the flow equations (<ref>), we have rescaled time t with a factor corresponding to a dynamic exponent z_ϕ. Clearly, there are positive corrections to μ. Thus, the scale-dependent μ(l)≫ν(l)=ν as the DRG FP is approached, and we already conclude that z_ϕ < z_h=4.
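As a brief aside, the flow equations above are easy to explore numerically. The sketch below is our own illustration: it integrates the (B,P) flow at a fixed representative value of g (a simplifying assumption, since g itself flows slowly near the FP) for d=1, and confirms that the fixed-point values B^*=P^*=1 quoted above are attracting. The λ_1=0 analysis resumes below.

```python
import numpy as np

# Flow of B = lambda_2/lambda_1 and P = mu/nu under the one-loop equations
# above, with the coupling g held at a fixed representative value
# (a simplifying assumption) and d = 1.
d_dim, g = 1.0, 0.1

def flow(B, P):
    dB = B * (2*g*B*(3+P)/(1+P)**2 - 4*g*B**2/(1+P)**2 - 2*B*g/(1+P))
    dP = -P*g*(4 - d_dim - (2*B**2/(P*(1+P)))*(4 - d_dim + 2*(1-P)/(1+P)))
    return dB, dP

B, P, dl = 0.4, 2.5, 1e-2        # arbitrary initial values of the ratios
for _ in range(200000):
    dB, dP = flow(B, P)
    B, P = B + dl*dB, P + dl*dP
print("B, P after the flow:", B, P)   # both approach the fixed point B* = P* = 1
```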
Returning to the λ_1 = 0 case: weak dynamic scaling is hence expected, implying ν(l)/μ(l) → 0 as l → ∞. In that limit we find from the above flow equations

dμ/dl = μ[z - 4 + (λ_2^2 D_h K_d/(2dνμ^2))(2-d)],
dλ_2/dl = λ_2[χ_h + z - 4 - λ_2^2 D_h K_d Λ^2/(dνμ^2)].

We identify an effective coupling constant g̃ = λ_2^2 D_h K_d Λ^2/(dνμ^2), which scales as b^{2-d} under the rescaling of space and time. This shows that d = 2 is the critical dimension, so that for d < 2 the fluctuation corrections are relevant in the long-wavelength limit. The DRG flow equation for g̃ is

dg̃/dl = g̃[2 - d + 2g̃(d-3)].

At the DRG FP, dg̃/dl = 0, yielding g̃ = (2-d)/(2(3-d)) as the stable FP for d < 2, whereas g̃ = 0 is the stable FP for 2 < d < 3. The apparent singularity in g̃ at d = 3 is likely an artifact of the low-order perturbation theory used here <cit.>. This then implies

z_ϕ = 4 + (2-d)^2/(2d-6) = 4 + O(ϵ^2), χ_ϕ = (3d(d-2)-8)/(4(d-3)).

Thus z_ϕ differs from z_h at O(ϵ^2). Since our one-loop analysis is valid only up to O(ϵ), we set z_ϕ = z_h at this order, restoring strong dynamic scaling. Whether this remains true at higher orders remains to be checked. On the whole, then, within the one-loop approximation Model I displays strong dynamic scaling independently of whether the nonlinear effects are included in the dynamics of h, i.e., of whether λ_1 = 0 or not. In contrast, Ref. <cit.> finds both equal (z_h = z_ϕ) and unequal (z_h ≠ z_ϕ) dynamic exponents (at d = 1), depending on the details of the nonlinear couplings. Furthermore, Model I has d = 2 as its critical dimension, just like the KPZ equation <cit.> and the conserved KPZ equation <cit.>. In contrast, the interplay between KPZ surface fluctuations and phase-separation dynamics tends to raise the critical dimension, as reported in Ref. <cit.>. Unsurprisingly, the scaling behaviour of Model I is completely different from that in Ref. <cit.>.

§.§.§ Model I with r_0 > 0

We now briefly discuss the dynamics of ϕ for r_0 > 0, i.e., when the ϕ-fluctuations are noncritical. This generates a linear ∇^2ϕ term in Eq. (<ref>), leading to

∂ϕ/∂t = -∇^2[-μ_1ϕ + μ∇^2ϕ + λ_2(∇h)·(∇ϕ)] + η_ϕ,

where μ_1 = μr_0 > 0. Equation (<ref>) gives z_ϕ = 2 at the linear level, corresponding to weak dynamic scaling, since the μ_1∇^2ϕ term is more relevant (in the scaling sense) than the -μ∇^4ϕ term. However, with the existing form of the λ_2 nonlinear term, the corrections to the propagator are still all at O(q^4). This implies that there are no fluctuation corrections to the μ_1 term, yielding z_ϕ = 2 exactly even at the nonlinear level; hence weak dynamic scaling prevails. Together with the exact knowledge of χ_ϕ from the non-renormalisation of D_ϕ, this fixes the scaling exponents of ϕ exactly; they are identical to their values in the corresponding linear theory. The dynamics of ϕ is thus totally unaffected by the nonequilibrium drive when the ϕ-fluctuations are noncritical.

§ MODEL II

In Model I above, the height fluctuations couple to ∇ϕ, the local spatial variation of ϕ. In contrast, we now consider the case where ∇h couples directly to ϕ; consequently the dynamics is not invariant under a constant shift ϕ→ϕ + const. We again consider the case where the dynamics of h is autonomous, i.e., unaffected by ϕ, so the dynamical equation of h is still given by Eq. (<ref>). The dynamical equation for ϕ is still of the form (<ref>), but the nonequilibrium terms NL must now include conserved terms that break the symmetry under a constant shift of ϕ as well.
We continue to assume that the dynamics is linear in ϕ. Furthermore, we now set r_0 > 0, i.e., T > T_c (hence ϕ is noncritical); we briefly discuss the r_0 = 0 case at the end. With all this, the most general equation for ϕ, to leading order in nonlinearities and spatial gradients in the hydrodynamic limit, is

∂ϕ/∂t = μ̃∇^2ϕ + g_2∇(ϕ∇h) + η_ϕ.

Here μ̃ > 0 is a damping coefficient and g_2 is a nonlinear coupling constant; the noise η_ϕ is the same as in Model I. Notice that the nonlinear coupling term g_2 is identical to the one introduced in Ref. <cit.>. Equation (<ref>) corresponds to a current

J_ϕ2 = -μ̃∇ϕ - g_2 ϕ∇h.

Thus the nonequilibrium part of J_ϕ2 contributes wherever a local tilt ∇h of the surface coincides with a nonzero local ϕ <cit.>. This distinguishes the nonequilibrium effects of Model II from those of Model I. Both Eqs. (<ref>) and (<ref>) are clearly not invariant under ϕ→ϕ + const. At this stage it is convenient to split ϕ into the sum of its mean ϕ_0 = ∫ d^dx ϕ(x,t)/V and a zero-mean fluctuating part, where V is the system volume. This clearly generates a linear term proportional to ϕ_0∇^2 h. Such a term manifestly breaks the symmetry under inversion of ϕ. In effect, ϕ_0 now parametrises the dynamics of ϕ. We set ϕ_0 = 0 and henceforth denote the zero-mean fluctuating part by ϕ. This restores the symmetry under inversion of ϕ. Notice that Eq. (<ref>) is invariant under x → -x.

§.§ Scaling in Model II

As in our analysis of Model I, we first consider the linearised version of (<ref>), which can be solved exactly. The correlation function C_ϕ(q,ω) then takes the exact form

C_ϕ(q,ω) = 2D_ϕ q^2/(ω^2 + μ̃^2 q^4).

Equation (<ref>) implies z_ϕ = 2 and roughness exponent χ_ϕ = -d/2 for the density field ϕ. At the linear level this clearly implies weak dynamic scaling, since z_h = 4 ≠ z_ϕ. As before, we study whether, and if so how, the nonlinear effects modify these scaling behaviours, and in particular whether weak dynamic scaling is further reinforced (a larger difference between z_h and z_ϕ) or weakened by the nonlinear effects.

The action functional S_II for Model II is given by

S_II = ∫ d^dr dt [ D_h ĥ∇^2ĥ + D_ϕ ϕ̂∇^2ϕ̂ + ĥ( ∂h/∂t + ∇^2[ν∇^2 h + (λ_1/2)(∇h)^2] ) + ϕ̂( ∂ϕ/∂t - μ̃∇^2ϕ - g_2∇(ϕ∇h) ) ].

As in Model I, the nonlinearities preclude any exact enumeration of the scaling exponents, and we again resort to a perturbative DRG up to one-loop order.

§.§.§ Rescaling of fields and parameters: recursion relations and scaling exponents

We enumerate the one-loop corrections to the various model parameters in Model II; see the Appendix for the one-loop diagrams. We rescale time by a factor corresponding to a dynamic exponent z; the interpretation of z as the dynamic exponent of h or of ϕ will become clear below. The recursion relations for the model parameters are:

dν/dl = ν[z - 4 + g(4-d)],
dμ̃/dl = μ̃[z - 2 + g_3],
dλ_1/dl = λ_1[z + χ_h - 4],
dD_h/dl = D_h[z - 2 - d - 2χ_h],
dD_ϕ/dl = D_ϕ[z - 2 - d - 2χ_ϕ + g_3],
dg_2/dl = g_2[z - 2 + χ_h - g_3],

where g = λ_1^2 D_h K_d Λ^2/(4dν^3) (the same as in Model I) and g_3 = g_2^2 D_h K_d Λ^2/(dνμ̃^2) > 0 are the effective dimensionless coupling constants of Model II; both scale as b^{2-d} under rescaling of space and time, suggesting that d = 2 is the critical dimension (as in Model I). Now, setting dμ̃/dl = 0 at the DRG FP gives z = 2 - g_3. Since this choice of z leaves μ̃(l) scale-independent, we identify it as the dynamic exponent of ϕ: z_ϕ = z.
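(A quick check of the linear-level values quoted above — our remark, for completeness: integrating (<ref>) over frequency gives the equal-time correlator ∫(dω/2π) C_ϕ(q,ω) = D_ϕ/μ̃, independent of q; comparing with ~ q^{-(d+2χ_ϕ)} yields χ_ϕ = -d/2, while the pole of (<ref>) at ω ~ μ̃q^2 identifies z_ϕ = 2.)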
Returning to the fixed point: notice that this choice for z does not leave ν(l) scale-independent, and hence cannot be the dynamic exponent z_h of h (which is in any case known independently: z_h = (10+d)/3 for d ≤ 2 and z_h = 4 for d ≥ 2). With this choice of z, ν(l) instead scales as l^{z-4+g(4-d)} <cit.>. This scale-dependent ν(l), together with the identification l = -ln q, may be used to extract z_h from the formal definition C_h(q,ω) = 2D_h q^2/(ω^2 + ν(q)^2 q^{2z}); see Ref. <cit.> for more details. This line of argument yields the same value for z_h as already obtained above. Equation (<ref>) combined with dD_ϕ/dl = 0 gives χ_ϕ = -d/2 at the DRG FP. Note that χ_h and z_h retain the same values as in Model I. Further, the DRG flow equation for g_3 is

dg_3/dl = 2g_3(χ_h - 2g_3).

Thus at the FP, g_3 = 0 or χ_h/2. Since χ_h = (2-d)/3, g_3 = (2-d)/6 is the stable FP for d < 2; for d ≥ 2, g_3 = 0 is the stable FP. Thus for d ≥ 2 the coupling becomes irrelevant in the dynamics of ϕ and the results of the linear theory hold. But for d < 2 the nonlinear effects are relevant and

z_ϕ = 2 - g_3 = 2 - χ_h/2 = (10+d)/6.

In general, for any d, weak dynamic scaling prevails in the system.

§.§.§ Dynamics of ϕ when λ_1 = 0

We now consider the case λ_1 = 0 for Model II. In this limit dν/dl = 0, so that z_h = 4. From the flow equations for g_2 and g_3 we have dg_2/dl = g_2[z - 2 + χ_h - g_3] and dg_3/dl = 2g_3(χ_h - 2g_3). For λ_1 = 0 we have χ_h = (2-d)/2. Using the fact that at the FP either g_3 = 0 (stable for d ≥ 2) or g_3 = χ_h/2 (stable for d < 2), together with this value of χ_h, we arrive at two values for z_ϕ. For g_3 = 0: z_ϕ = 2 and χ_ϕ = -d/2, valid for d ≥ 2. For g_3 = χ_h/2: z_ϕ = (6+d)/4 and χ_ϕ = -d/2, valid for d < 2. Thus χ_ϕ = -d/2 for all d.

The results for Model II are complementary to those of Ref. <cit.>. While Ref. <cit.> considered non-conserved dynamics for h and conserved dynamics for ϕ at d = 1, we have considered conserved dynamics for both fields h and ϕ in general d dimensions, with the dynamics of h treated as autonomous for simplicity. Nonetheless, Model II and the studies in Ref. <cit.> display universal scaling very different from each other; this is a hallmark of the models being nonequilibrium, unlike equilibrium models, in which a conservation law can affect only the dynamic scaling behaviour (see, e.g., model A and model B in the language of Ref. <cit.>). Indeed, none of the scaling exponents in the two studies are related in any simple way. While the roughness exponent takes the value -d/2 for all d in our Model II, it can take several values, depending on the model parameters, in Ref. <cit.>. More importantly, Ref. <cit.> shows the possibility of both strong and weak dynamic scaling. In sharp contrast, our Model II gives only weak dynamic scaling, for any d and for both the stable and unstable FP values of the coupling constant g_3. This comparison between the results of Ref. <cit.> and Model II shows clearly how conservation laws can lead to entirely different physical outcomes, even when the coupling between the degrees of freedom has the same structure.

§.§.§ Model II with r_0 = 0

We briefly discuss what happens when r_0 = 0, i.e., when the ϕ-fluctuations are critical. With r_0 = 0, the μ̃∇^2ϕ term in Eq. (<ref>) is replaced by a ∇^4ϕ term. Nonetheless, with the existing nonlinear term in (<ref>), the lowest-order corrections to the propagator are at O(q^2), thus generating a ∇^2ϕ term in the fluctuation-corrected equation. All our results for Model II with r_0 > 0 derived above then immediately follow.
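(As an aside on the fixed points of (<ref>) above — our check: writing f(g_3) = 2g_3(χ_h - 2g_3), one has f'(0) = 2χ_h and f'(χ_h/2) = -2χ_h; for d < 2, χ_h > 0, so g_3 = 0 is unstable and g_3 = χ_h/2 is stable, while for d > 2 the signs reverse and g_3 = 0 becomes the stable FP, in agreement with the stability statements made above.)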
Returning to the critical case: it is, however, possible to start with a specific bare r_0 chosen so that the fluctuation-corrected r_0 vanishes. The relaxation of the ϕ-fluctuations is then controlled by a ∇^4ϕ term (subleading to a bare μ̃∇^2ϕ term), with z_ϕ = 4 at the linear level (as in Model I with r_0 = 0). However, the fluctuation corrections to this ∇^4ϕ term are expected to differ from those in Model I with r_0 = 0, owing to the different form of the nonlinear term in Eq. (<ref>). We do not discuss the details here.

§ CONCLUSIONS AND OUTLOOK

We have thus investigated how the presence or absence of an internal symmetry affects the universal scaling properties in the noisy dynamics of a conserved scalar density driven by a fluctuating conserved KPZ surface h. We make a particularly simple choice of internal symmetry, viz. invariance under a constant shift of ϕ. To this end, we consider two specific reduced models, Model I and Model II, to address how the interplay between the symmetries that control the structure of the nonlinear terms and the conservation laws determines the universal scaling properties. In Model I, the h-fluctuations couple to ∇ϕ, rendering the ensuing dynamics of ϕ independent of ϕ_0, the mean of ϕ. Model I is constructed in a way that respects the invariance under inversion of ϕ.

At the linearised level, both the h and ϕ dynamics display strong dynamic scaling with a single dynamic exponent z = 4. Beyond the linearised theory, the scaling exponents depend crucially on the details of the nonlinear couplings, and also on whether ϕ is a critical or a noncritical field. For critical ϕ-fluctuations, the relevant scaling exponents are evaluated in a one-loop DRG calculation and strong dynamic scaling ensues; for noncritical ϕ-fluctuations, the scaling exponents are unaffected by the nonlinearity and are known exactly, corresponding to weak dynamic scaling.

We have studied another model, Model II, in which the h-fluctuations couple directly to ϕ. Thus, in contrast to Model I, Model II is not invariant under a constant shift of ϕ. As a result ϕ_0, the mean of ϕ, parametrises the dynamics of ϕ. We focus on the particular case ϕ_0 = 0, which restores the symmetry of the model under inversion of the ϕ-fluctuations. In Model II (with r_0 > 0), weak dynamic scaling follows already at the linear level (z_ϕ = 2), a feature that persists when the nonlinear effects are taken into account. Furthermore, if we assume ϕ_0 ≠ 0, we obtain an additional linear term proportional to ∇^2 h in (<ref>). Given that the dynamics of h is independently known (being autonomous), this term effectively acts as an additional additive noise in the problem, whose correlation is not δ-correlated in space and time. This is likely to affect the scaling properties of ϕ in nontrivial ways. Comparison of the results from Model I and Model II thus establishes the significance of the internal symmetry under constant shifts of ϕ in determining the scaling properties.

It would be of interest to construct equivalent discrete lattice-gas models and study these issues there. The specific microscopic rules of such lattice-gas models for Model I and Model II may be formed from the nonequilibrium contributions to the currents (<ref>) and (<ref>), respectively. We welcome further work along this direction.

We now make a brief general comparison of our studies here with those on the generalised coupled KPZ equations by DK.
First and foremost, DK allowed for feedback of the density fluctuations on the height fluctuations, whereas in our case the dynamics of the surface is assumed to be autonomous, independent of the density fluctuations. Furthermore, the surface fluctuations in both Model I and Model II of our studies are conserved, and hence slower than the nonconserved height fluctuations in the models of DK. This is reflected in the generically higher values of the dynamic exponent z_h in our studies.

For reasons of simplicity, we have ignored a ϕ^4 term in the free energy (<ref>) <cit.> when setting up Model I and Model II. Such a term, if included, would generate a term ~ϕ^3 in the dynamics of ϕ in both models. Clearly, this term manifestly breaks the ϕ→ϕ + const. symmetry of Model I. All the couplings present in Model II that break the invariance under a constant shift of ϕ would then have to be included as well, and Model I would effectively reduce to Model II (albeit at r_0 = 0). In the case of Model II, a ϕ^3 term in the dynamics of ϕ would compete with the nonlinearities already present in Model II; the resulting scaling behaviour could be investigated further within the one-loop RG framework (not done here).

Our treatment of the dynamics of h as autonomous is clearly a limiting case. More generally, a generic nonlinear feedback of ϕ on the dynamics of h may be present. Again, for symmetry reasons, the nonlinear structure of this feedback should differ between Model I and Model II. It would be interesting to see whether and how such feedback may alter the conclusions drawn above. Furthermore, our equations of motion are all invariant under spatial inversion. An interesting generalisation would be to allow terms that violate this invariance. Such terms may potentially lead to the generation of underdamped kinematic waves, absent in Model I and Model II. Kinematic waves can be important; e.g., they lead to weak dynamic scaling in Ref. <cit.>. Whether a similar breakdown of strong dynamic scaling occurs in Model I in the presence of kinematic waves remains to be investigated. The dynamical field ϕ, being a conserved density, follows an equation of motion in conservation-law form. Similarly to Ref. <cit.>, phase-ordering dynamics on a conserved KPZ surface may be studied by making ϕ a nonconserved density. A nontrivial variant would be to take ϕ to be a broken-symmetry mode that follows a nonconserved equation of motion but executes scale-invariant dynamics. When such a broken-symmetry variable is driven by a conserved KPZ field, the emerging scaling properties are likely to be quite different from what is reported here. Lastly, coupled systems with linear instabilities may be considered, in which the nonequilibrium steady states may even involve patterns.
We look forward to future work in these directions.

§ ACKNOWLEDGEMENT

The authors gratefully acknowledge the Alexander von Humboldt Stiftung for partial financial support through the Research Group Linkage Programme (2016).

§ MODEL I: RESULTS

In Fourier space, the propagators and correlators of h and ϕ take the following forms:

⟨ĥ(q,ω)h(-q,-ω)⟩ = -1/(-iω + νq^4),
⟨h(q,ω)h(-q,-ω)⟩ = 2D_h q^2/(ω^2 + ν^2 q^8),
⟨ϕ̂(q,ω)ϕ(-q,-ω)⟩ = -1/(-iω + μq^4),
⟨ϕ(q,ω)ϕ(-q,-ω)⟩ = 2D_ϕ q^2/(ω^2 + μ^2 q^8).

§.§ One-loop corrections to the model parameters

The corrections to the model parameters, with the corresponding Feynman diagrams, for Model I are as follows:

ν^< = ν - (λ_1^2 D_h K_d/ν^2)[∫_{Λ/b}^{Λ} dq q^{d-1}/(4q^2) - (1/d)∫_{Λ/b}^{Λ} dq q^{d-1}/q^2].

Fig. <ref> shows the relevant Feynman diagram for the one-loop correction to ν.

μ^< = μ - (λ_2^2 D_h K_d/(ν(ν+μ)))[1/2 - (2 + (ν-μ)/(ν+μ))/d]∫_{Λ/b}^{Λ} dq q^{d-1}/q^2.

See Fig. <ref> for the Feynman diagram corresponding to the one-loop correction to μ.

λ_1^< = λ_1, D_h^< = D_h, D_ϕ^< = D_ϕ,

λ_2^< = λ_2 + [D_h λ_2^2 λ_1 K_d (3ν+μ)/(2dν^2(ν+μ)^2) - D_h λ_2^3 K_d/(dν(ν+μ)^2) - D_h λ_2^2 λ_1 K_d/(2dν^2(ν+μ))]∫_{Λ/b}^{Λ} dq q^{d-1}/q^2.

The relevant one-loop diagrams for λ_2 are given in Fig. <ref>.

§.§ Model II results

The propagators and correlators of the system are given by

⟨ĥ(q,ω)h(-q,-ω)⟩ = -1/(-iω + νq^4),
⟨h(q,ω)h(-q,-ω)⟩ = 2D_h q^2/(ω^2 + ν^2 q^8),
⟨ϕ̂(q,ω)ϕ(-q,-ω)⟩ = -1/(-iω + μ̃q^2),
⟨ϕ(q,ω)ϕ(-q,-ω)⟩ = 2D_ϕ q^2/(ω^2 + μ̃^2 q^4).

As before, we find the corrections to the bare model parameters by evaluating the one-loop integrals from wavevector Λ/b to Λ. This leads to the following results:

ν^< = ν - (λ_1^2 D_h K_d/ν^2)[∫_{Λ/b}^{Λ} dq q^{d-1}/(4q^2) - (1/d)∫_{Λ/b}^{Λ} dq q^{d-1}/q^2],
μ̃^< = μ̃ + (g_2^2 D_h K_d/(dνμ̃))∫_{Λ/b}^{Λ} dq q^{d-1}/q^2,
λ_1^< = λ_1, D_h^< = D_h,
D_ϕ^< = D_ϕ + (g_2^2 D_ϕ D_h K_d/(dνμ̃^2))∫_{Λ/b}^{Λ} dq q^{d-1}/q^2,
g_2^< = g_2 - (g_2^3 D_h K_d/(dνμ̃^2))∫_{Λ/b}^{Λ} dq q^{d-1}/q^2.

The relevant Feynman diagrams for μ̃, D_ϕ and g_2 are given in Figs. <ref>, <ref> and <ref>, respectively.

99
fisher M. E. Fisher, in Lecture Notes in Physics: Critical Phenomena (Springer Verlag, Berlin, 1983).
chaikin P. M. Chaikin and T. C. Lubensky, Principles of Condensed Matter Physics (Cambridge University Press, Cambridge, 2000).
driven B. Schmittmann and R. K. P. Zia, in Phase Transitions and Critical Phenomena, edited by C. Domb and J. L. Lebowitz, Vol. 17 (Academic Press, London, 1995).
react-diff U. C. Täuber, Adv. in Solid State Physics 43, ?? (2003).
kpz M. Kardar, G. Parisi and Y.-C. Zhang, Phys. Rev. Lett. 56, 889 (1986).
stanley A.-L. Barabási and H. E. Stanley, Fractal Concepts in Surface Growth (Cambridge University Press, Cambridge, England, 1995).
natter L. Tang, T. Nattermann, and B. M. Forrest, Phys. Rev. Lett. 65, 2422 (1990).
kpz-aniso U. C. Täuber and E. Frey, Europhys. Lett. 59, 655 (2002).
abfrey A. Basu and E. Frey, Phys. Rev. E 69, 01510(R) (2004); A. Basu and E. Frey, J. Stat. Mech. - Th. Exp., P08103 (2009).
binfluid R. Ruiz and D. R. Nelson, Phys. Rev. A 23, 3224 (1981); R. Ruiz and D. R. Nelson, Phys. Rev. A 24, 2727 (1981); see also Ref. <cit.> above.
3dmhd A. Basu, Europhys. Lett. 65, 505 (2004).
ertas D. Ertaş and M. Kardar, Phys. Rev. Lett. 69, 929 (1992); A.-L. Barabási, Phys. Rev. A 46, R2977 (1992).
colloid A. Levine et al., Phys. Rev. Lett. 81, 5944 (1998).
driv-cryst R. Lahiri and S. Ramaswamy, Phys. Rev. Lett. 79, 1150 (1997); P. Dolai, A. Basu and R. A. Simha, Phys. Rev. E 95, 052115 (2017).
halpin P. C. Hohenberg and B. I. Halperin, Rev. Mod. Phys. 49, 435 (1977).
ckpz T. Sun, H. Guo and M. Grant, Phys. Rev. A 40, R6763 (1989).
janssen H. Janssen, Phys. Rev. Lett.
78, 1082 (1997).
ab-jkb-mct A. Kr. Chattopadhyay, A. Basu and J. K. Bhattacharjee, Phys. Rev. E 61, 2086 (2000).
dro1 B. Drossel and M. Kardar, Phys. Rev. B 66, 195414 (2002).
dro2 B. Drossel and M. Kardar, Phys. Rev. Lett. 85, 614 (2000); B. Drossel and M. Kardar, Eur. Phys. J. B 36, 401 (2003).
strong-weak This follows the terminology introduced by De Dominicis and Peliti in a similar context in dynamic critical phenomena; see C. De Dominicis and L. Peliti, Phys. Rev. B 18, 353 (1978).
weak D. Das, A. Basu, M. Barma and S. Ramaswamy, Phys. Rev. E 64, 021402 (2001).
janssen1 C. De Dominicis, J. Phys. (Paris) Colloq. C 1, 247 (1976); H. K. Janssen, Z. Phys. B 23, 377 (1976).
tauber U. C. Täuber, Critical Dynamics (Cambridge University Press, 2014).
on T. Banerjee, N. Sarkar and A. Basu, Phys. Rev. E 92, 062133 (2015).
kpz-singu We believe this is analogous to the singularity in the effective coupling constant of the KPZ equation for 1<d<2 within a one-loop approximation; a two-loop approximation removes this problem; see E. Frey and U. Täuber, Phys. Rev. E 50, 1024 (1994).
comment1 Without the ϕ^4 term in (<ref>), the free energy is admittedly artificial, but it suffices for our purposes here.
http://arxiv.org/abs/1705.09604v2
{ "authors": [ "Tirthankar Banerjee", "Abhik Basu" ], "categories": [ "cond-mat.stat-mech" ], "primary_category": "cond-mat.stat-mech", "published": "20170526145302", "title": "Symmetries and scaling in generalised coupled conserved Kardar-Parisi-Zhang equations" }
A Split-Sample Approach for Estimating the Stability Index of a Stable Distribution

Sudharshan Samaratunga and Cornelis J. Potgieter

The class of stable distributions is used in practice to model data that exhibit heavy tails and/or skewness. The stability index α of a stable distribution is a measure of tail heaviness and is often of primary interest. Existing methods for estimating the index parameter include maximum likelihood and methods based on the sample quantiles. In this paper, a new approach for estimating the index parameter of a stable distribution is proposed. This new approach relies on the location-scale family representation of the class of stable distributions and involves repeatedly partitioning the single observed sample into two independent samples. An asymptotic likelihood method based on sample order statistics, previously used for estimating location and scale parameters in two independent samples, is adapted for estimating the stability index. The properties of the proposed method of estimation are explored and the resulting estimators are evaluated using a simulation study.

Some Key Words: Stable Distributions, Tail Index, Characteristic Exponent, Location-Scale Model, Split-Sample Estimation, Data Permutation.

§ INTRODUCTION

The stable probability law was introduced by Paul Lévy in 1924 in his work on sums of independent and identically distributed (iid) random variables. Originating from his attempt to generalize the central limit theorem, the class of stable distributions is defined as follows: A non-degenerate random variable X is said to have a stable distribution if and only if for all n ≥ 2 there exist constants a_n ∈ ℝ and b_n > 0 such that X_1 + X_2 + … + X_n d= a_n + b_n X. Here X_1, X_2, …, X_n are independent copies of X, and the symbol d= denotes equality in distribution. The random variable X is called strictly stable if and only if a_n = 0 for all values of n. See <cit.> and <cit.> for a comprehensive overview of existing results on stable distributions.

Stable distributions are typically described in terms of their characteristic functions. The random variable X is said to have a stable distribution S(α,β,γ,δ) if the characteristic function of X, for all real t, is given by

ψ_X(t) = E(e^{itX}) =
exp(-γ^α|t|^α[1 + iβ tan(πα/2)(sign t)(|γt|^{1-α} - 1)] + iδt)  if α ≠ 1,
exp(-γ|t|[1 + iβ(2/π)(sign t)(log γ|t|)] + iδt)  if α = 1,

where α ∈ (0,2] is commonly referred to as the stability index, β ∈ [-1,1] is a skewness parameter, γ > 0 is a scale parameter and δ ∈ ℝ is a location parameter. The index parameter α, also called the stability index or characteristic exponent, measures the heaviness of the tails of the distribution. As α decreases, the tail heaviness of the distribution increases. For α ≤ 1 the mean of the distribution does not exist, while for 1 < α < 2 the variance does not exist either. In general, a stable random variable with index parameter α ∈ (0,2) possesses absolute moments of order p for p < α; that is, E(|X|^p) < ∞ for p < α.

A sum of iid stable random variables with common characteristic exponent α is again stable, retaining the characteristic exponent of the original distribution. This property is termed stability. From a practical point of view, stable distributions are an attractive option for modeling data that exhibit heavy tails and skewness, as these features are easily captured by stable distributions.
With three exceptions, discussed below, stable distributions do not have closed-form density functions. However, all non-degenerate stable distributions are continuous with infinitely differentiable density functions. The three special cases in which a closed-form expression for the density exists are the normal, Cauchy and Lévy distributions. Setting α = 2, (<ref>) corresponds to the characteristic function of a Normal(δ, 2γ^2) distribution. Similarly, upon setting (α,β) = (1,0) the distribution is Cauchy(γ,δ), while upon setting (α,β) = (1/2,1) the resulting distribution is Lévy with γ and δ + γ being the scale and location parameters, respectively.

The lack of a closed-form density function, together with the non-existence of moments, has historically made parameter estimation a challenging task. While some applications require the estimation of all four parameters, in many instances the parameter of greatest interest is the stability index α, which determines the tail heaviness of the distribution. This paper focuses only on the estimation of the parameter α.

Many methods have been proposed to estimate α, all of which fall into three categories: maximum likelihood, quantile methods and characteristic function methods. Quantile methods include both methods based on sample quantiles and methods based on extreme order statistics. <cit.> did extensive work on using a maximum likelihood type method to estimate the parameters of a stable distribution. His method relied on grouping the data into bins and numerically maximizing an approximate log-likelihood function. <cit.> used the fast Fourier transform to estimate the parameters, while <cit.> developed routines for the numerical computation of the integrals involved in the Fisher information matrix, which can then also be used for maximum likelihood estimation. <cit.> proposed an MCMC method to estimate the stable parameters. <cit.> developed Bayesian methodology for inference in stable distributions, while <cit.> proposed likelihood-free Bayesian inference for stable models. Still, direct maximization of the likelihood function presents many challenges in practice, and it is therefore not a popular approach for estimating the tail index.

<cit.> did early work on estimating α using order statistics, but their method applies only to symmetric stable distributions with α ∈ [1,2]. <cit.> developed the quantile method now popular in applications, which works for both symmetric and skew stable distributions and α ∈ [0.5,2]. The Hill estimator, see <cit.>, is a popular measure of tail heaviness for distributions with Pareto-like tails and can also be adapted to estimate α; however, it tends to have large bias in small to moderately sized samples.

Authors who have considered estimators making direct use of the characteristic function include <cit.>, <cit.>, <cit.> and <cit.>. These methods have good performance properties, but some practitioners still shy away from models that rely on inference in the complex domain.

In this paper, the problem of estimating α is viewed through the lens of representing stable random variables as members of a location-scale family of distributions. The general framework presented relies on repeatedly partitioning the observed data into two independent samples, say X_1,…,X_n and Y_1,…,Y_m. The X sample is created by selecting values from the observed data without replacement. Thereafter, the remaining observations are randomly paired and the Y sample is created by adding the data values in each pair.
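Before this construction is formalized in the next section, the stability property behind it can be illustrated numerically. The sketch below is ours, not part of the paper; it assumes scipy's levy_stable parameterization, which agrees with (<ref>) in shape in the symmetric case β = 0. For a standardized symmetric stable law (β = 0, γ = 1, δ = 0), the pair-sum Y = X + X' has the same distribution as 2^{1/α}X, so corresponding sample quantiles should nearly agree:

```python
# Empirical check: X + X' has the same distribution as 2**(1/alpha) * X
# for a standardized symmetric stable law.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)
alpha = 1.5
x = levy_stable.rvs(alpha, 0.0, size=(2, 100_000), random_state=rng)
y = x.sum(axis=0)                    # pair sums X + X'
z = 2.0 ** (1.0 / alpha) * x[0]      # scaled copies 2**(1/alpha) * X

for p in (0.10, 0.25, 0.50, 0.75, 0.90):
    print(p, np.quantile(y, p), np.quantile(z, p))  # columns should nearly agree
```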
This “split-sample” approach allows the estimation of α to be treated as a two-sample estimation problem. In particular, a quantile–based method of estimation is proposed in this paper. This quantile method is a variation of a method due to <cit.> who consider estimating location and scale parameters for two independent samples belonging to the same location-scale family. In Section 2, the location–scale representation of stable distributions is discussed and a connection between the scale parameter in the two-sample setting and the stability index α of a single sample is established. In Section 3, the split-sample estimator is formally defined. Additionally, the two-sample quantile method of Potgieter and Lombard is reviewed and extended to the present framework for estimating α.Section 4 presents results from a simulation study carried out to investigate the number of sample partitions needed to give good RMSE performance of the proposed estimator. Section 5 deals with some practical considerations, such as choosing the number of quantiles to use in the Potgieter & Lombard method. Section 6 compares the proposed estimator to other existing methods and recommendations for implementing the method are discussed in Section 7.§ STABLE DISTRIBUTION LOCATION–SCALE REPRESENTATIONSuppose that X and X' are iid S(α,β,γ,δ) random variables and define Y=X+X'. As the sum of two stable random variables with common parameters α and β is again stable, it follows that there are constants μ∈ℝ and σ>0 such that Yd=μ+σ X where d= indicates equality in distribution. Thus, the distribution of Y has two equivalent representations,YS(α,β,σγ,μ+σδ).andYS(α,β,2^1/αγ, δ') where δ' is given byδ'=2δ+(tan πα/2)βγ[2^1/α-2]if α≠12δ+2/πβγ[2^1/αlog (2^1/αγ)-2log γ]if α=1. Here, relation (<ref>) is derived as the characteristic function of the sum of two iid stable variables, X+X', while (<ref>) is the characteristic function of the random variable μ +σ X and follows from the properties of a location-scale transformation. As (<ref>) and (<ref>) are equivalent, the parameters in the two cases must also be equal. Specifically, by equating the scale parameters in the two formulations of Y, it follows that σγ=2^1/αγ. Solving for α givesα=log2/logσ.Therefore, the problem of estimating α is equivalent to that of estimating σ, the scale parameter relating X and Y. This relation forms the basis of the estimation procedure proposed in this paper.Now, let X and Y be two independent random variables such that Yd=μ+σ X for appropriate constants μ and σ. For the time being, assume that independent samples X_1,…,X_n and Y_1,…,Y_m are observed. <cit.> proposed a nonparametric method called asymptotic likelihood (AL) for estimating μ and σ from the two independent samples. Their method only assumes that the random variables X and Y have continuous and strictly increasing distribution functions with differentiable density functions. Despite the general lack of closed-form expressions for the density and distribution functions, stable distributions do satisfy these assumptions. The AL method can therefore be applied to the stable setting to estimate σ, and subsequently α, from the independent X- and Y-samples. The question of obtaining these samples is further addressed in Section 3, while the remainder of this section gives a brief overview of the implementation of the AL method in the present setting.Let F and G (correspondingly f and g) denote the respective distribution (density) functions of random variables X and Y. 
For iid random samples X_1, X_2, …, X_n and Y_1, Y_2, …, Y_m, denote the respective order statistics by X_(1) < X_(2) < … < X_(n) and Y_(1) < Y_(2) < … < Y_(m). Let F̂ and Ĝ denote, respectively, the empirical distribution functions of X and Y made continuous using linear interpolation. That is, let F̂(x) = 0 for x < X_(1), F̂(x) = 1 for x > X_(n), F̂(X_(j)) = (j-1)/(n-1) for j = 1,2,…,n, and for x ∈ (X_(j), X_(j+1)) define F̂(x) to be the value interpolated linearly between the pairs (X_(j), (j-1)/(n-1)) and (X_(j+1), j/(n-1)), j = 1,…,n-1. A similar definition holds for Ĝ(x). The continuous empirical quantile functions F̂^{-1} and Ĝ^{-1} are uniquely defined by the relation F̂[F̂^{-1}(t)] = Ĝ[Ĝ^{-1}(t)] = t.

Now, when the relation Y d= μ + σX holds, the quantile functions F^{-1} and G^{-1} satisfy

G^{-1}(t) = μ + σF^{-1}(t), 0 < t < 1.

Define ε_n(t) = F̂^{-1}(t) - F^{-1}(t) and δ_m(t) = Ĝ^{-1}(t) - μ - σF̂^{-1}(t). <cit.> show that for fixed 0 < t_1 < … < t_k < 1, the independent random vectors [n^{1/2} f(F^{-1}(t_j)) ε_n(t_j)]^⊤, j = 1,…,k, and [m^{1/2} g(G^{-1}(t_j)) δ_m(t_j)]^⊤, j = 1,…,k, converge in distribution to multivariate normal distributions with common covariance matrix Σ_ij = min(t_i,t_j) - t_i t_j as min(n,m) → ∞. Now let ϕ_j = F^{-1}(t_j), j = 1,…,k, and define the parameter vector θ = [ϕ_1, ϕ_2, …, ϕ_k, μ, σ]^⊤. The k+2 parameters in θ can be estimated using the established asymptotic normality. Define the vectors

W_1(θ) = [ĝ(Ĝ^{-1}(t_j))(Ĝ^{-1}(t_j) - μ - σϕ_j)]^⊤, j = 1,…,k,
W_2(θ) = [f̂(F̂^{-1}(t_j))(F̂^{-1}(t_j) - ϕ_j)]^⊤, j = 1,…,k,

and V(θ) = [W_1(θ)^⊤, W_2(θ)^⊤]^⊤, where f̂ and ĝ are kernel density estimators of f and g, respectively. It then follows that as min(n,m) → ∞, (nm/(n+m))^{1/2} V(θ) converges in distribution to a 2k-variate normal distribution with zero mean and covariance matrix Ω given by

Ω = [ λΣ 0; 0 (1-λ)Σ ],

where λ = lim_{m,n→∞} n/(n+m), and the component of the asymptotic log-likelihood of V(θ) involving the k+2 parameters is

Q(θ) = V(θ)^⊤ Ω^{-1} V(θ)/2.

Now define Q̂(θ) = V(θ)^⊤ Ω̂^{-1} V(θ), where Ω̂ is Ω with λ replaced by λ̂ = n/(n+m). The estimator θ̂ that minimizes Q̂(θ) cannot be expressed in closed form, but it is easily found using standard numerical optimization routines. The component estimators μ̂ and σ̂ are called the AL estimators of μ and σ.

§ SPLIT-SAMPLE ESTIMATOR

The method for estimating σ outlined in Section 2 assumes the availability of two independent samples satisfying the specified location-scale relationship, while only the equivalent of an X-sample is observed in practice. Suppose this sample consists of n+2m observations from a stable distribution with unknown index parameter α. It is possible to create two independent samples from the single observed sample using the following method: First, select n observations randomly and treat them as the X-sample. Next, randomly form m pairs from the remaining 2m observations and sum the observations in each pair. Treat these m sums as the Y-sample. By the properties of stable distributions, the relation Y d= μ + σX holds for these constructed samples. Additionally, the X and Y samples constructed in this way are independent, and therefore the AL method proposed by Potgieter & Lombard can be used to estimate σ and then, subsequently, α using (<ref>). It should be noted that although the method guarantees σ̂ > 0, the estimator resulting from applying (<ref>) to σ̂ may not lie in the interval [0,2]. It is therefore reasonable to define

α̂ = 0 if σ̂ ∈ (0,1);  2 if σ̂ ∈ [1,√2);  log2/logσ̂ if σ̂ ∈ [√2,∞).

The perceived discontinuity in the definition of α̂ results from a discontinuity in 1/logσ at σ = 1.
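To make the single-split procedure concrete, the following sketch (ours, not the authors' code) carries out one random split and produces σ̂. Purely for illustration, it replaces the full AL minimization of Q̂(θ) by an ordinary least-squares fit of the empirical Y-quantiles on the empirical X-quantiles, which targets the same line G^{-1}(t) = μ + σF^{-1}(t); the truncation map above is then applied to move to the α-scale:

```python
import numpy as np

def to_alpha(sigma_hat):
    # truncation map: 0 on (0,1), 2 on [1, sqrt(2)), log2/log(sigma) beyond
    if sigma_hat < 1.0:
        return 0.0
    if sigma_hat < np.sqrt(2.0):
        return 2.0
    return np.log(2.0) / np.log(sigma_hat)

def split_once(data, rng, k=9):
    data = rng.permutation(data)
    n = len(data) // 3                        # take m = n, so n + 2m = 3n
    x, rest = data[:n], data[n:]
    y = rest[: 2 * (len(rest) // 2)].reshape(-1, 2).sum(axis=1)  # pair sums
    t = np.arange(1, k + 1) / (k + 1)         # equi-spaced t-values
    qx, qy = np.quantile(x, t), np.quantile(y, t)
    sigma_hat = np.polyfit(qx, qy, 1)[0]      # LS slope: stand-in for the AL fit
    return sigma_hat
```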
Since this method involves splitting the sample, we refer to the estimator α̂ as the split-sample estimator (SSE) of α.

The estimator proposed above, of course, uses only one random permutation of the data. Ideally, all possible sample permutations would be constructed, each permutation would be used to construct an estimate of σ and/or α, and these estimates would then be combined into some ensemble estimator of α. Specifically, let t(a_1,…,a_n,a_{n+1},…,a_{n+m}) be a function that is permutation-invariant in the first n arguments a_1,…,a_n and also permutation-invariant in the last m arguments a_{n+1},…,a_{n+m}. For 𝒦 = (k_1,…,k_{n+2m}) a random permutation of the integers 1,…,n+2m, define

θ_𝒦 = t(X_{k_1},…,X_{k_n}, X_{k_{n+1}}+X_{k_{n+2}},…,X_{k_{n+2m-1}}+X_{k_{n+2m}})

to be the statistic t calculated for the permutation 𝒦. Letting K denote the total number of possible permutations of the data, define

θ̄ = (1/K)∑_{k=1}^{K} θ_{𝒦_k},

where θ_{𝒦_k} denotes the value of the statistic t for the k-th permutation of the data. The effect here is that of creating a U-statistic with a symmetric kernel: whereas θ_𝒦 is not symmetric in its arguments, θ̄ is. In the present setting, the statistic t evaluated for one permutation represents an AL estimate of σ based on a single permutation of the data, whereas the ideal is to calculate θ̄ by evaluating t for all possible data permutations. However, this is not realistic, as there are

binom(n+2m, n) binom(2m, 2) binom(2m-2, 2) ⋯ binom(4, 2) = binom(n+2m, n) ∏_{i=0}^{m-1} binom(2m-2i, 2)

unique possible X- and Y-samples that can be created using the proposed data-splitting method. Even when n and m are only moderately large, this constitutes too large a number of sample permutations to evaluate all of them in practice. For example, in the scenario m = n with sample size n+2m = 30, there are approximately 7.08 × 10^{22} such sample permutations, and when n+2m = 150 there are approximately 1.67 × 10^{183}. The number of possible sample permutations grows at a super-exponential rate.

Of course, the inability to evaluate all possible data permutations should not steer one towards the other extreme, in which only a single data permutation is used to estimate σ, as a big loss in efficiency could result. A compromise is proposed: the data permutation process creating X and Y samples is repeated B times, where B is some "large" integer, and these B estimates of σ are then combined in an appropriate manner to estimate α.

Let σ̂_1,…,σ̂_B denote the estimates of σ resulting from randomly splitting the sample B times. The question of how to combine these into an estimate α̂ is now considered. Proposed here are three ways of combining the B estimates to obtain an estimate of α:

(i) Define σ̄ = (∑_{j=1}^{B} σ̂_j)/B; that is, σ̄ is the average of the B values σ̂_1,…,σ̂_B. Using this, define the estimator α̂_1:

α̂_1 = 0 if σ̄ ∈ (0,1);  2 if σ̄ ∈ [1,√2);  log2/logσ̄ if σ̄ ∈ [√2,∞).

(ii) Let α̂_j denote the value obtained after applying transformation (<ref>) to σ̂_j for j = 1,…,B. Here, α̂_j denotes the estimate of α from the j-th random split of the sample. Define the estimator α̂_2 = ᾱ = (∑_{j=1}^{B} α̂_j)/B.

(iii) Estimate α using α̂_3 = median(α̂_1,…,α̂_B). That is, instead of taking the average as in (ii), the median of the B estimates of α is evaluated.

Here, the estimators α̂_1 and α̂_2 fall within the outlined framework of approximating a U-statistic with a symmetric kernel. On the other hand, α̂_3 is outside this framework, but is included as a robust alternative.
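Continuing the earlier sketch (and reusing its split_once and to_alpha helpers, which are our illustrative stand-ins rather than the authors' implementation), the three combinations are short to write down:

```python
def ensemble_estimates(data, B=250, k=9, seed=0):
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    sig = np.array([split_once(data, rng, k) for _ in range(B)])
    alphas = np.array([to_alpha(s) for s in sig])
    alpha1 = to_alpha(sig.mean())     # (i)   average on the sigma-scale, then map
    alpha2 = alphas.mean()            # (ii)  map each split, then average
    alpha3 = np.median(alphas)        # (iii) median on the alpha-scale
    return alpha1, alpha2, alpha3
```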
The performance of the three estimators, as well as the number of splits B to be used, is investigated in the simulation study presented in the next section.

§ SIMULATION STUDY

The accuracy of the estimators defined in the previous section depends on B, the number of random splits used. A B chosen too small results in an estimate with large variability, while a very large B detracts from the practical viability of the approach due to computational cost. A good choice of B is therefore essential. A simulation study was performed to assess how the choice of B affects the defined estimators. In the simulation, samples of size n+2m ∈ {150,300,600} were drawn from standardized stable distributions (γ = 1 and δ = 0) for values α ∈ {0.5,1,1.5,1.95} and β ∈ {0,0.75}. The samples were split such that m = n when evaluating the AL estimator. In addition, the AL method was implemented using k = 9 equally spaced t-values, t = [1/10,…,9/10]. The three estimators α̂_j, j = 1,2,3, were evaluated for B ∈ {1,10,50,100,250,500}. A total of N = 1000 random samples was drawn for each configuration of (α,β,n). The mean value of the estimate and the Monte Carlo RMSE were evaluated for each configuration.

Tables <ref> and <ref> About Here

Table <ref> presents results for the case (α,β) = (1,0) and Table <ref> presents results for (α,β) = (1.95,0). Generally, as the number of sample splits B increases, both the bias and the RMSE tend to decrease. There is typically a steady decrease in RMSE when going from B = 1 to B = 100, only a small decrease when going from B = 100 to B = 250, and a negligible decrease from B = 250 to B = 500. Simulation results for parameter configurations not presented here all show a pattern similar to that seen in Tables <ref> and <ref>. It is clear that the RMSE continues to decrease as B increases, but that this reduction diminishes for B ≥ 250. To illustrate, in the context of Table <ref>, when further increasing B to 2500, the RMSE of α̂_1 decreases from 0.065 at B = 500 to 0.063 at B = 2500, and the RMSE of α̂_3 similarly decreases from 0.059 to 0.057. For practical purposes, the recommendation is to use B = 250. This value ensures fast computation but already shows good performance; a practitioner who wanted any further improvement in RMSE would have to choose B a whole order of magnitude larger.

When comparing the estimators α̂_j, j = 1,2,3, it should be noted that α̂_2 consistently performs much worse (in terms of both bias and RMSE) than α̂_1 and α̂_3. This can be explained, at least in part, by the truncation that occurs when applying transformation (<ref>) from the σ-scale to the α-scale. When a large proportion of the σ̂_b fall outside the interval [√2,∞), the same proportion of the α̂_b lie on the boundaries (0 and 2) of the parameter space. As the estimator α̂_2 is calculated by averaging on the α-scale, a large proportion of boundary values can increase the bias of the estimate. On the other hand, α̂_1 is calculated by averaging on the σ-scale, so the truncation only comes into play when σ̄ lies outside [√2,∞). Similarly, since α̂_3 is the median of the B split-sample estimates, the truncation only comes into play if more than 50% of the values are truncated to a specific boundary. To illustrate the occurrence of boundary values, Table <ref> reports the results of a simulation study in which N = 2000 samples were generated from a standard stable distribution with α ∈ {1,1.5,1.95} and β = 0, with n+2m ∈ {150,300,600}.
For each simulated set of data, values α̂_b were calculated from B = 250 random splits. The table reports the average percentage of the α̂_b that were truncated to either 0 or 2.

Table <ref> About Here

The content of Table <ref> is unsurprising. As the sample size increases, the occurrence of truncation to the boundaries decreases. The one exception is α = 1.95: this value is very close to the boundary, and a very large sample size would be needed before a substantial decrease in boundary truncation occurs.

In terms of a "best estimator", there does not appear to be a clear choice between α̂_1 and α̂_3. In Tables <ref> and <ref>, the RMSE of α̂_3 is generally smaller than that of α̂_1, but the RMSE values are very close to one another. Figure <ref> shows the RMSE of the three estimators for α ∈ (0.5,2) with β = 0, n+2m = 300 and k = 9, based on N = 2000 samples for each value of α. A simple smoother was applied to the RMSE values to enhance the readability of the plot. Similar plots (not shown here) were produced for settings with β = 0.75 and also n+2m = 600; the same general trends were visible there.

Figure <ref> About Here

Inspection of Figure <ref> shows that α̂_1 and α̂_3 perform better than α̂_2 over a large part of the parameter space. The estimator α̂_2 performs better in the approximate range 1.4 ≤ α ≤ 1.8, but performs very poorly outside this range. As the true α becomes smaller and the underlying data distribution becomes heavier-tailed, α̂_1 has the best performance among the three estimators. When α > 1.8, α̂_3 performs better than the other two estimators. Generally, the RMSEs of α̂_1 and α̂_3 are very similar, with α̂_1 having the smaller RMSE over a large range of α.

§ CHOICE OF K AND (T_1,…,T_K)

The companion questions of how large to choose k and how to choose the values t_1,…,t_k have not yet been addressed. Intuition might suggest choosing k as large as possible; however, the estimator can perform very poorly when k is chosen too large. To illustrate, samples of size n+2m = 300 were generated from a Cauchy distribution (α = 1), and estimators α̂_1^{(k)} were calculated for each sample using equi-spaced t_j = j/(k+1), j = 1,…,k, for k = 9, 19 and 29. Based on N = 1000 simulated samples, RMSE(α̂_1^{(k=9)}) = 0.0919, RMSE(α̂_1^{(k=19)}) = 0.1253 and RMSE(α̂_1^{(k=29)}) = 0.1353. The accuracy of the estimator thus decreases as k increases. The AL method of <cit.> is similar to a GLS method described by <cit.>; a result from the latter gives k = o(n^{-1/6}) as the optimal sample-size-dependent rate for k. Practically, this means k should not be chosen too large. Based on extensive simulation work, it is recommended that the choices k = 9 or k = 19 be used for most applications; these values performed well across a wide range of sample sizes. If the underlying distribution has very heavy tails, say α < 1, the choice k = 3 tends to be more robust to outliers.

Once a choice of k has been made, it is still unclear how best to choose the values t_1,…,t_k. This is investigated in terms of the asymptotic distribution of the estimator of α based on a single random split. <cit.> derive an expression for the asymptotic covariance matrix of (mn/(m+n))^{1/2}[μ̂-μ, σ̂-σ]^⊤, where μ̂ and σ̂ denote the AL estimators based on two independent samples. Denote this covariance matrix by σ^2Γ. The elements of the matrix Γ are somewhat tedious expressions involving the points t_1,…,t_k, as well as the density and quantile functions f(x) and F^{-1}(t) of the underlying distribution. These expressions are omitted for brevity.
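One ingredient simple enough to record here (our step, spelled out for clarity) is the derivative needed for the delta method that follows: since α = log2/logσ,

dα/dσ = -log2/(σ(logσ)^2) = -α^2/(σ log2),

so squaring and multiplying by the asymptotic variance σ^2Γ_22 of σ̂ produces exactly the factor α^4Γ_22/(log2)^2 appearing below.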
For α̂ = log2/logσ̂, a standard application of the delta method gives

(mn/(m+n))^{1/2}(α̂ - α) d→ N(0, α^4Γ_22/(log2)^2),

where Γ_22 denotes the entry in the second row and second column of Γ. The asymptotic variance in (<ref>) does illustrate the difficulty inherent in choosing "optimal" t-values. Noting that Γ_22 is an implicit function of α and β for the underlying stable distribution, one could choose the t-values so that α^4Γ_22 is minimized. Of course, in practice this is not possible, as α and β are unknown. Simulation studies not reported here were carried out to see how such optimal values compare with the simple choice of equi-spaced values t_j = j/(k+1), j = 1,…,k. While an improvement in RMSE was observed in some instances, the simple equi-spaced choice was usually very competitive with (and in a few instances outperformed) the asymptotically optimal values. The recommendation here is therefore to use equally spaced values. Further study is recommended to develop an adaptive approach for choosing the t-values: for example, after finding an initial estimate of α, this estimate can be used to update the t-values, after which α is re-estimated.

§ COMPARISON OF ESTIMATORS

In this section, the split-sample AL approach developed above is compared with two other existing methods for estimating the stability index. The first of these, maximum likelihood (ML), is computationally expensive; however, it is included here as a performance benchmark, as maximum likelihood estimators are asymptotically unbiased and of minimum variance. Next, McCulloch's quantile estimator (MQE) is also considered. The MQE is widely used in practice and, like the split-sample estimator, is based on sample order statistics.

The split-sample estimator α̂_1 was computed with B = 250 random partitions and equi-spaced t-values t_j = j/(k+1), j = 1,…,k, for the choices k = 3, 9 and 19 (hereafter referred to as SSE_k). Samples were drawn from a S(α,β,1,0) distribution with α ∈ {0.5,0.75,…,1.75,2} and β ∈ {0,0.75} for total sample sizes n+2m ∈ {150,300,600}. The RMSE was estimated from N = 2000 samples drawn for each parameter configuration. The results for the case α = 1 are reported in Table <ref> below, in which the boldface entries correspond to the estimates with the smallest and second-smallest RMSE.

Table <ref> About Here

Several interesting observations can be made upon inspection of Table <ref>. Consider first the symmetric case, β = 0. As one would expect, the ML estimator has the smallest RMSE among all the estimators considered. When the sample size is small (n+2m ≤ 300), the MQE has the second-smallest RMSE; note, however, that it performs only marginally better than the split-sample estimator. Specifically, SSE_3 is competitive at sample size 150, as is SSE_9 at sample size 300. At sample size 600, SSE_9 outperforms the MQE. A few additional simulations were done at even larger sample sizes, and this trend was observed there as well: in larger samples, SSE_9 always has smaller RMSE than the MQE. It should also be noted that no RMSE values are reported for the estimator SSE_19 at sample size 150, as convergence problems were frequently encountered when calculating the estimator. This is likely an artifact of using t_1 = 0.05 and t_19 = 0.95 (corresponding to the 5th and 95th sample percentiles) for estimation in a small sample drawn from a heavy-tailed distribution: these sample percentiles have large variance and will often take values that some might label extreme observations.
In the asymmetric case (β = 0.75), the estimator SSE_9 always performs better than the MQE. These simulation results are fairly representative of what was observed for other values of α. Generally, SSE_19 becomes preferable to SSE_9 as either the value of α or the sample size increases. Additionally, the relative efficiency of the methods was also estimated in the simulation study: Table <ref> reports the ratios RE = RMSE(SSE_9)/RMSE(MQE) for a sample size of 600. Here, a value of RE < 1 indicates superiority of SSE_9 relative to the MQE.

Table <ref> About Here

From Table <ref> it is evident that SSE_9 generally performs much better than the MQE; the estimated relative efficiency is often well below 0.7. There are two notable exceptions to this general statement. In the symmetric case, when α ranges from 0.75 to 1.25, the MQE is very competitive, even outperforming SSE_9 slightly at α = 1.25. In the asymmetric case, the MQE performs better than SSE_9 at α = 0.5, but nowhere else.

§ RECOMMENDATIONS

The split-sample approach shows promise as a method for estimating the stability index α. In a side-by-side comparison with the McCulloch quantile estimator, the split-sample approach frequently outperforms the McCulloch estimator. The split-sample approach could conceivably be improved further by choosing the design points t_1,…,t_k in some adaptive way, as suggested at the end of Section 5. Additionally, the problem of estimating the standard error of the estimator has not been considered here. The bootstrap is one option, but it suffers from computational cost in that it becomes a nested problem involving a first-level bootstrap sampling procedure and a second-level data-splitting procedure. Both of these questions are being considered by the authors in ongoing research.

As a final remark, the question of computational cost does arise when considering the practical implementation of the split-sample approach; in particular, the process of permuting the data becomes more time-consuming as the sample size grows. However, the proposed estimator is highly parallelizable, which in large-sample situations more than compensates for the computing time required to permute the data B times.

§ TABLES AND FIGURES
http://arxiv.org/abs/1705.09840v1
{ "authors": [ "Sudharshan Samaratunga", "Cornelis J. Potgieter" ], "categories": [ "stat.ME" ], "primary_category": "stat.ME", "published": "20170527164318", "title": "A Split-Sample Approach for Estimating the Stability Index of a Stable Distribution" }
The maximal subgroups and the complexity of the flow semigroup of finite (di)graphs

Dedicated to John Rhodes on the occasion of his 80th birthday.

Institute of Mathematics, University of Debrecen, Pf. 400, Debrecen, 4002, Hungary. [email protected]
Royal Society / Wolfson Foundation Biocomputation Research Laboratory, Centre for Computer Science and Informatics Research, University of Hertfordshire, College Lane, Hatfield, Hertfordshire AL10 9AB, United Kingdom. [email protected]
Alfréd Rényi Institute of Mathematics, 13–15 Reáltanoda utca, Budapest, 1053, Hungary. [email protected]

The research was partially supported by the European Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC under grant agreements no. 318202 and no. 617747, and by the MTA Rényi Institute Lendület Limits of Structures Research Group; the first author was partially supported by the Hungarian National Research, Development and Innovation Office (NKFIH) grants no. K109185 and no. FK124814, and the third author was funded by the National Research, Development and Innovation Office (NKFIH) Grant No. ERC_HU_15 118286.

2010 Mathematics Subject Classification: 20M20, 05C20, 05C25, 20B30.

The flow semigroup, introduced by John Rhodes, is an invariant for digraphs and a complete invariant for graphs. After collecting together previous partial results, we refine and prove Rhodes's conjecture on the structure of the maximal groups in the flow semigroup for finite, antisymmetric, strongly connected digraphs. Building on this result, we investigate and fully describe the structure and actions of the maximal subgroups of the flow semigroup acting on all but k points, for all finite digraphs and graphs and for all k ≥ 1. A linear algorithm (in the number of edges) is presented to determine these so-called "defect k groups" for any finite (di)graph. Finally, we prove that the complexity of the flow semigroup of a 2-vertex connected (and strongly connected di)graph with n vertices is n-2, completely confirming Rhodes's conjecture for such (di)graphs.

Károly Podoski
30 July 2017

§ INTRODUCTION

John Rhodes in <cit.> introduced the flow semigroup, an invariant for graphs and digraphs (that is, isomorphic flow semigroups correspond to isomorphic digraphs). In the case of graphs, this is a complete invariant, determining the graph up to isomorphism. The flow semigroup is the semigroup of transformations of the vertices generated by the elementary collapsings corresponding to the edges of the (di)graph. An elementary collapsing corresponding to the directed edge uv is the map on the vertices moving u to v and acting as the identity on all other vertices. (See Section <ref> for all the precise definitions.)

A maximal subgroup of this semigroup for a finite (di)graph D = (V_D, E_D) acts by permutations on all but k of its vertices (1 ≤ k ≤ |V_D|-1) and is called a "defect k group". The set of defect k groups of a (di)graph is also an invariant. For each fixed k, they are all isomorphic to each other in the case of (strongly) connected (di)graphs. Rhodes formulated a conjecture on the structure of these groups for strongly connected digraphs whose edge relation is antisymmetric in <cit.>.
We show that his conjecture was correct, and we prove it here in sharper form. Moreover, extending this result, we fully determine the defect k groups for all finite graphs and digraphs. Rhodes further conjectured <cit.> that the Krohn–Rhodes complexity of the flow semigroup of a strongly connected, antisymmetric digraph D on n vertices is n-2. We confirm this conjecture when the digraph is 2-vertex connected, and bound the complexity in the remaining cases.

The structure of the argument is as follows. First, a maximal group in the flow semigroup of a digraph D is the direct product of maximal groups of the flow semigroups of its strongly connected components. Thus one needs only to consider strongly connected digraphs. It turns out that if D is a strongly connected digraph, then the defect k group (up to isomorphism) does not depend on the choice of the vertices it acts on. Furthermore, for a strongly connected digraph, its flow semigroup coincides with the flow semigroup of the simple graph obtained by "forgetting" the direction of the edges. This is detailed in Section <ref> and is based on <cit.>. Thus, one only needs to consider the defect k groups of the flow semigroups of simple connected graphs.

In Section <ref> we list some useful lemmas and determine the defect k group of a cycle. In Section <ref> we prove that the defect 1 group of an arbitrary simple connected graph is the direct product of the defect 1 groups of its 2-vertex connected components. The defect 1 group of an arbitrary 2-vertex connected graph Γ has been determined by Wilson <cit.>: he proved that the defect 1 group is either A_{n-1} or S_{n-1}, unless Γ is a cycle or the exceptional graph displayed in Figure <ref>. In particular, Rhodes's conjecture (as phrased for strongly connected, antisymmetric digraphs in <cit.>) about the defect 1 group holds, and more generally: the defect 1 group of the flow semigroup of a simple connected graph is indeed a product of cyclic, alternating and symmetric groups of various orders. A straightforward linear algorithm is given to determine the direct components of the defect 1 group of an arbitrary connected graph (see Section <ref>).

In Section <ref> we determine the defect k groups (k ≥ 2) of arbitrary graphs by considering the so-called maximal k-subgraphs (maximal subgraphs for which the defect k group is the full symmetric group), and we prove that the defect k group of a graph is the direct product of the defect k groups of its maximal k-subgraphs (i.e., of full symmetric groups). In Section <ref> we provide a linear algorithm (in the number of edges of Γ) to determine the maximal k-subgraphs of an arbitrary connected graph. Finally, in Section <ref> we confirm <cit.> about the Krohn–Rhodes complexity of digraphs when the digraph is 2-vertex connected, and we prove bounds on the complexity of the flow semigroup in the remaining cases. (See Section <ref> for the definition of Krohn–Rhodes complexity.)

We have collected all these results into the following main theorem.

* Let D be a digraph. Then every maximal subgroup of S_D is (isomorphic to) the direct product of maximal subgroups of the S_{D_i}, where the D_i are the strongly connected components of D.
* Let D be a strongly connected digraph. Let V_k, V_k' ⊆ V_D be subsets of nodes such that |V_k| = |V_k'| = k. Let G_k,V_k, G_k,V_k' be the defect k groups acting on V ∖ V_k and V ∖ V_k', respectively. Then G_k,V_k ≃ G_k,V_k' as permutation groups.
* Let D be a strongly connected digraph, and Γ_D be the graph obtained from D by forgetting the direction of the edges in D. Then S_D = S_Γ_D.
* Let Γ be a simple connected graph of n vertices, and let Γ_1, …, Γ_m be its 2-vertex connected components. Then the defect 1 group of Γ is the direct product of the defect 1 groups of the Γ_i (1 ≤ i ≤ m).
* Let Γ be a 2-vertex connected simple graph with n ≥ 2 vertices. Then the defect 1 group of Γ is isomorphic (as a permutation group) to
* the cyclic group Z_n-1 if Γ is a cycle;
* S_5 ≃ PGL_2(5) acting sharply 3-transitively on 6 points, if Γ is the exceptional graph (see Figure <ref>);
* S_n-1 or A_n-1, otherwise, where the defect 1 group is A_n-1 if and only if Γ is bipartite.
* Let Γ be a 2-vertex connected simple graph with n ≥ 2 vertices. Then the complexity of S_Γ is cpx(S_Γ) = n-2.
* Let Γ be a 2-edge connected simple graph with n ≥ 2 vertices. Then for the complexity of S_Γ we have n-3 ≤ cpx(S_Γ) ≤ n-2.
* Let k ≥ 2, Γ be a simple connected graph of n vertices, n > k.
* If Γ is a cycle, then its defect k group is the cyclic group Z_n-k.
* Otherwise, let Γ_1, …, Γ_m be the maximal k-subgraphs of Γ, and let Γ_i have n_i vertices. Then the defect k group of Γ is the direct product of the defect k groups of the Γ_i (1 ≤ i ≤ m); thus it is isomorphic (as a permutation group) to S_n_1-k × … × S_n_m-k.

Our main contributions to Theorem <ref> are items (<ref>), (<ref>), (<ref>) and (<ref>). Items (<ref>), (<ref>) and (<ref>) (together with some basic definitions and notation) are detailed in Section <ref> and are based on <cit.>. In Section <ref> we list some useful lemmas and determine the defect k group of a cycle. Item (<ref>) is proved in Section <ref>, while item (<ref>) has already been proved by Wilson <cit.>. Then in Section <ref> we prove item (<ref>). In Section <ref> we provide a linear algorithm (in the number of edges of Γ) to determine the maximal k-subgraphs of an arbitrary connected graph, to put item (<ref>) into context. Finally, items (<ref>) and (<ref>) are proved in Section <ref>.

East, Gadouleau and Mitchell <cit.> are currently looking into other properties of flow semigroups. In particular, they provide a linear algorithm (in the number of vertices of a digraph) for deciding whether or not the flow semigroup contains a cycle of length m, for a fixed positive integer m. Furthermore, they classify all those digraphs whose flow semigroups have any of the following properties: inverse, completely regular, commutative, simple, 0-simple, a semilattice, a rectangular band, congruence-free, K-trivial or K-universal, where K is any of Green's H-, L-, R-, or J-relations, and they determine when the flow semigroup has a left, right, or two-sided zero.

Rhodes's original conjecture <cit.> is about strongly connected, antisymmetric digraphs. By <cit.> a strongly connected antisymmetric digraph becomes a 2-edge connected graph after forgetting the directions. Therefore Theorem <ref> almost completely settles Rhodes's conjecture <cit.>. To completely settle the last remaining part of Rhodes's conjecture <cit.>, one should find the complexity of the flow semigroups of the remaining 2-edge connected graphs.

Determine the complexity of S_Γ for a 2-edge connected graph Γ which is not 2-vertex connected.
The smallest such graph is the “bowtie” graph: Let Γ be the graph with vertex set {u, v, w, x, y} and edge set {uv, vw, wu, wx, xy, yw}. Determine the complexity of S_Γ.

Ultimately, the goal is to determine the complexity for all flow semigroups. Determine the complexity of S_Γ for an arbitrary finite graph (or digraph) Γ.

§ FLOW SEMIGROUP OF DIGRAPHS

For notions in graph theory we refer to <cit.>, in group theory to <cit.>, in permutation groups to <cit.>, and in semigroup theory to <cit.>. A semigroup is a set with an associative binary multiplication. A transformation on a set X is a function s: X → X. It operates (or acts) on X by mapping each x ∈ X to some x·s ∈ X. Here we write x·s or xs for the transformation s applied to x ∈ X. A transformation semigroup S is a set of transformations s ∈ S on some set X such that S is closed under (associative) function composition. Also, S itself is then said to operate or act on the set X. Note that in this paper functions act on the right; therefore transformations are multiplied from left to right. Denoting by ss' the transformation of X obtained by first applying s and then s', we have x·ss' = (x·s)·s'. If a semigroup element s acts on a set X, and for some Y ⊇ X the action of s is not defined on Y ∖ X, then we may consider s as acting on Y as well, with the identity action on Y ∖ X. A permutation group is a nonempty transformation semigroup G that contains only permutations and such that if g ∈ G then the inverse permutation g^-1 is also in G. Furthermore, for a set Y ⊆ X and a transformation s on X define Ys = {ys | y ∈ Y}.

A subgroup G of a transformation semigroup S is a subset of S whose transformations satisfy the (abstract) group axioms. It is not hard to show that if S is a transformation semigroup acting on X, then G contains a (unique) idempotent e^2 = e (which does not generally act as the identity map on X), and furthermore distinct elements of G, when restricted to Xe, are distinct, permute Xe, and comprise a permutation group acting on Xe (see <cit.>).

A digraph (V, E) is a set of nodes (or vertices) V and a binary relation E ⊆ V × V. An element e = (u,v) ∈ E is called a directed edge from node u to node v, and is also denoted uv. A loop-edge is an edge from a vertex to itself. A graph (V, E) is a set of nodes V and a symmetric binary relation E ⊆ V × V. If (u,v) ∈ E, then uv is called an (undirected) edge. Such a graph is called simple if it has no loop-edges. In this paper we consider only digraphs without loop-edges and simple graphs. A walk is a sequence of vertices (v_1, …, v_n) such that v_iv_i+1 is a (directed) edge for all 1 ≤ i ≤ n-1. By a cycle we will mean a simple cycle, that is, a closed walk with no repetition of vertices except for the starting and ending vertex. A path is a walk with no repetition of vertices. A (di)graph Γ = (V, E) is (strongly) connected if there is a path from u to v for all distinct u, v ∈ V. By a subgraph Γ' = (V', E') ⊆ Γ we mean a graph for which V' ⊆ V and E' ⊆ E. If Γ' is an induced subgraph, that is, E' consists of all edges from E with both endpoints in V', then we explicitly indicate it.
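Before moving on to flow semigroups, it may help to fix a concrete encoding of the conventions above. The following is a minimal sketch (our own illustration, not from the paper): transformations on {0, …, n-1} are stored as tuples of images, and composition is read left to right, matching the right-action convention x·ss' = (x·s)·s'.

```python
def compose(s, t):
    """Return the transformation ss': first apply s, then t."""
    return tuple(t[s[x]] for x in range(len(s)))

# On X = {0,1,2}: s sends 0 -> 1 and fixes the rest,
# t sends 1 -> 2 and fixes the rest.
s = (1, 1, 2)
t = (0, 2, 2)
assert compose(s, t) == (2, 2, 2)   # 0 -> 1 -> 2, 1 -> 1 -> 2, 2 -> 2 -> 2
assert compose(t, s) == (1, 2, 2)   # the order of composition matters
```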
A strongly connected component of a digraph Γ is a maximal strongly connected subgraph of Γ. For a digraph D = (V_D, E_D) without any loop-edges, the flow semigroup S = S_D is the semigroup of transformations acting on V_D defined by S = S_D = ⟨ e_uv | uv ∈ E_D ⟩, where e_uv is the elementary collapsing corresponding to the directed edge uv ∈ E_D, that is, for every x ∈ V_D we have x·e_uv = v if x = u, and x·e_uv = x otherwise. Thus, the flow semigroup of a (di)graph D is generated by idempotents (elementary collapsings) corresponding to the edges of D. The flow semigroup S_D is also called the Rhodes semigroup of the (di)graph. A maximal subgroup of S_D is a subgroup that is not properly contained in any other subgroup of S_D. In order to determine the maximal subgroups of S_D, one can make several reductions by <cit.>. First, one only needs to consider the maximal subgroups of S_D_i for the strongly connected components D_i of D. Strongly connected components are maximal induced subgraphs such that any vertex can be reached from any other vertex by a directed path.

Let D be a digraph; then every maximal subgroup of S_D is (isomorphic to) the direct product of maximal subgroups of S_D_i, where the D_i are the strongly connected components of D. This is (<ref>) of Theorem <ref>.

An element s ∈ S is of defect k if |V_D s| = |V_D| - k. Let V_k = {v_1, v_2, …, v_k} ⊆ V_D. The defect k group G_k,V_k associated to V_k (called the defect set) is generated by all elements of S restricted to V_D ∖ V_k which permute the elements of V_D ∖ V_k and move the elements of V_k to elements of V_D ∖ V_k:

G_k,V_k = ⟨ s|_V_D ∖ V_k : s ∈ S, (V_D ∖ V_k)s = V_D ∖ V_k, V_k s ⊆ V_D ∖ V_k ⟩,

where s|_V_D ∖ V_k denotes the restriction of the transformation s to the set V_D ∖ V_k. Now, G_k,V_k is a permutation group acting on V_D ∖ V_k. For this reason V_D ∖ V_k is called the permutation set of G_k,V_k, and the elements of G_k,V_k are sometimes called defect k permutations. Furthermore, if the defect set contains only one vertex v, then by abuse of notation we write defect v or defect point v instead of defect {v}. In general, the defect k group G_k,V_k can depend on the choice of V_k. However, by <cit.> it turns out that if the graph is strongly connected then the defect k group G_k is unique up to isomorphism.

Let D be a strongly connected digraph. Let V_k, V_k' ⊆ V_D be subsets of nodes such that |V_k| = |V_k'| = k. Then the action of G_k,V_k on V_D ∖ V_k is equivalent to that of G_k,V_k' on V_D ∖ V_k'. That is, G_k,V_k ≃ G_k,V_k' as permutation groups. This is (<ref>) of Theorem <ref>. By Lemma <ref>, we may write G_k instead of G_k,V_k without any loss of generality.

Furthermore, the case of strongly connected graphs can be reduced to the case of simple graphs. Let Γ = (V, E) be a simple (undirected) graph; we define S_Γ by considering Γ as a directed graph where every edge is directed both ways. Namely, let D_Γ = (V, E_D) be the directed graph on vertices V such that both uv ∈ E_D and vu ∈ E_D if and only if the undirected edge uv ∈ E. Then let S_Γ = S_D_Γ.
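For very small (di)graphs these objects can be computed by brute force. The sketch below is our own illustration (the names flow_semigroup and defect_group are ours, and the closure is exponential in general, so this is only feasible for a handful of vertices): it generates S_D from the elementary collapsings and extracts a defect k group. As a sanity check, the defect 1 group of the undirected 4-cycle comes out cyclic of order 3, in line with the lemma on cycles proved below.

```python
def compose(s, t):
    """First apply s, then t (left to right, matching the right action)."""
    return tuple(t[s[x]] for x in range(len(s)))

def elementary_collapsing(n, u, v):
    """e_uv on {0, ..., n-1}: moves u to v, fixes every other point."""
    return tuple(v if x == u else x for x in range(n))

def flow_semigroup(n, edges):
    """Closure of the elementary collapsings under composition."""
    gens = [elementary_collapsing(n, u, v) for (u, v) in edges]
    S, frontier = set(gens), list(gens)
    while frontier:
        s = frontier.pop()
        for g in gens:
            for p in (compose(s, g), compose(g, s)):
                if p not in S:
                    S.add(p)
                    frontier.append(p)
    return S

def defect_group(n, S, defect_set):
    """Restrictions to the permutation set of those s in S that permute
    it and move the defect set into it."""
    rest = [x for x in range(n) if x not in defect_set]
    return {tuple(s[x] for x in rest)
            for s in S
            if {s[x] for x in rest} == set(rest)
            and all(s[x] in rest for x in defect_set)}

# The undirected 4-cycle 0-1-2-3-0, each edge directed both ways:
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2), (3, 0), (0, 3)]
G1 = defect_group(4, flow_semigroup(4, edges), {0})
print(len(G1))  # 3: cyclic of order n - k = 3
```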
Furthermore,for every digraph D = (V_D, E_D),one can associate an undirected graph Γ by “forgetting” the direction of edges in D.Precisely,let Γ_D=(V_D,E) be the undirected graph such that uv ∈ E if and only if uv ∈ E_D or vu ∈ E_D.The following lemma due to Nehaniv and Rhodes shows that if a digraph D is strongly connected then the semigroup S_D corresponding to D and the semigroup S_Γ_D corresponding to the simple graph Γ_D are the same.Moreover, Lemma <ref> immediately implies that the transformation semigroup S_D is an invariant for digraphs and a complete invariant for (simple) graphs: That is, isomorphic digraphs have the isomorphic flow semigroups, and graphs are isomorphic if and only if their flow semigroups are isomorphic as transfromation semigroups. Let D be an arbitrary digraph.Thene_ab∈ S_D ⟺a → bis an edge in D, or b → ais an edge in a directed cycle in D.In particular,if D is strongly connected then S_D = S_Γ_D. Let b → a → u_1 →…→u_n-1→ b be a directed cycle in D. Then an easy calculation shows thate_ab = (e_bae_u_n-1be_u_n-2u_n-1… e_u_1u_2 e_a u_1)^n.For the other direction,assume e_ab = e_uv s for some s ∈ S_D. Then e_uvs moves u and v to the same vertex,while e_ab moves only a and b to the same vertex.Thus a,b = u,v.This is <ref> of Theorem <ref>.Therefore, in the following we only consider simple, connected, undirected graphs Γ = (V, E),that is no self-loops or multiple edges are allowed. Furthermore,Γ is 2-edge connected if removing any edge does not disconnect Γ.Rhodes's conjecture <cit.> is about strongly connected,antisymmetric digraphs.Notethat by <cit.>a strongly connected antisymmetric digraph becomes a 2-edge connected graph after forgetting the directions. Let us fix some notation. The letters k, l, m and n will denote nonnegative integers. The number of vertices of Γ is usually denoted by n, while k will denote the size of the defect set. Usually we denote the defect k group of a graph Γ by G_k or G_Γ, depending on the context. We try to heed the convention of using u, v, w, x, y as vertices of graphs, V as the set of vertices, E as the set of edges. Furthermore, the flow semigroup is mostly denoted by S, its elements are denoted by s, t, g, h, p, q. The cyclic group of m elements is denoted by Z_m. We will need the notion of an open ear,and open ear decomposition.Let Γ be an arbitrary graph,and let Γ' be a proper subgraph of Γ. A path (u, c_1, …, c_m, v) is called a Γ'-ear (or open ear) with respect to Γ,if u, v∈Γ', u≠ v,and either m=0 and the edge uv ∉Γ',or c_1, …, c_m ∈Γ∖Γ'. An open ear decomposition of a graph is a partition of its set of edges into a sequence of subsets,such that the first element of the sequence is a cycle,and all other elements of the sequence are open ears of the union of the previous subsets in the sequence. A connected graph Γ with at least k vertices is k-vertex connected if removing any k-1 vertices does not disconnect Γ.By <cit.> a graph is 2-vertex connected if and only if it is a single edge or it has an open ear decomposition. 
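The collapsing identity displayed in the proof of the lemma above can be checked mechanically on small instances. Here is a quick self-contained test of ours for the directed 3-cycle b → a → u_1 → b, i.e. the case n = 2, with b, a, u_1 encoded as 0, 1, 2:

```python
def compose(s, t):
    return tuple(t[s[x]] for x in range(len(s)))

def e(u, v, n=3):
    """The elementary collapsing e_uv on {0, ..., n-1}."""
    return tuple(v if x == u else x for x in range(n))

b, a, u1 = 0, 1, 2
s = compose(compose(e(b, a), e(u1, b)), e(a, u1))  # e_ba e_{u_1 b} e_{a u_1}
assert compose(s, s) == e(a, b)                    # s^n = e_ab for n = 2
```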
§ PRELIMINARIES Let Γ = (V, E) be a simple, connected (undirected) graph,and for every 1≤ k≤V-1, let G_k denote its defect k group for some V_k ⊆ V,V_k = k.Let S = S_Γ be the flow semigroup of Γ.The following is immediate.Let s∈ S be of defect k.If se_uv is of defect k, as well,then u ∉ Vs or v ∉ Vs.Furthermore,it is not too hard to see that every defect 1 permutation arises from the permutations generated by cycles (in the graph) containing the defect point.Let Γ be a connected graph,and let G_1 denote its defect 1 group,such that the defect point is v ∈ V.Then G_1 = < ( u_1, … , u_k )as permutation| (u_1, … , u_k, v)is a cycle in Γ>.These yield that the defect k group of the n-cycle graph is cyclic,proving items (<ref>) and (<ref>) of Theorem <ref>: The defect k group of the n-cycle is isomorphic to Z_n-k.Let x_1,x_2,… x_n be the consecutive elements ofthe cycle Γ=(V, E). If s∈ S is an element of defect kthen by Lemma <ref> we have thatse_x_ix_i+1 is of defect k if and only if x_i∉ Vs or x_i+1∉ Vs. This means that if u_1,u_2,… u_n-k are the consecutive elements of Vs in the cycle andse_x_ix_i+1 is of defect k, as well, thenu_1 e_x_ix_i+1, u_2 e_x_ix_i+1, …, u_n-ke_x_ix_i+1are the consecutive elements of Vse_x_ix_i+1. Thus the cyclic ordering of these elements cannot be changed.Hence G_k is isomorphic to a subgroup of Z_n-k.Now, assume that v_1, v_2,… v_k, u_1, u_2,… u_n-k are the consecutive elements of Γ,and the defect set is V_k = v_1, … , v_k. Let s_1=e_v_1v_2…e_v_jv_j+1… e_v_k-1v_k, s_2= e_u_n-kv_ke_u_n-k-1u_n-k… e_u_j-1u_j… e_u_1u_2e_v_ku_1, s=s_1s_2.It easy to check that v_is=u_1, u_1s=u_2,… , u_j s=u_j+1, …,u_n-k s=u_1.Therefores, s^2,…, s^n-k are distinct elements of G_k,hence G_k≃ Z_n-k. § DEFECT 1 GROUPS In this Section we prove item (<ref>) of Theorem <ref>,which states that the defect 1 group of a simple connected graph is the direct product of the defect 1 groups of its 2-vertex connected components.This follows by induction on the number of 2-vertex connected components fromLemma <ref>. The case where Γ is 2-vertex connected (that is item (<ref>) of Theorem <ref>) is covered by <cit.>. Let Γ_1 and Γ_2 be connected induced subgraphs of Γ such that Γ_1∩Γ_2 = v,where there are no edges in Γ between Γ_1 ∖v and Γ_2 ∖v.Then the defect 1 group of Γ_1∪Γ_2 is the direct product ofthe defect 1 groups of Γ_1 and Γ_2.Let G_Γ_i denote the defect 1 group of Γ_i,where the defect point is v.By Lemma <ref>,G_Γ is generated by cyclic permutations corresponding to cycles through v in Γ.Now,Γ_1 ∩Γ_2 = v,and every path between a node from Γ_1 and a node from Γ_2 must go through v,hence every cycle in Γ is either in Γ_1 or in Γ_2.Let c_i^(1), … , c_i^(m_i) be the permutations corresponding to the cycles in Γ_i (i =1, 2). Since these cycles do not involve v by Lemma <ref>, we have c_1^(j_1)c_2^(j_2) = c_2^(j_2)c_1^(j_1) for all 1≤ j_i ≤ m_i, i=1, 2,thusG_Γ = < c_1^(1), … , c_1^(m_1), c_2^(1), … , c_2^(m_2)> =< c_1^(1), … , c_1^(m_1)> ×< c_2^(1), … , c_2^(m_2)> = G_Γ_1× G_Γ_2.§ DEFECT K GROUPS We prove item (<ref>) of Theorem <ref> in this Section.In the following we assume k ≥ 2,and every graph Γ is assumed to be simple connected.We start with some simple observations.Let Γ be a connected graph, and let Γ' be a connected subgraph of Γ. 
If Γ' has at least k+1 vertices, then the defect k group of Γ contains a subgroup isomorphic (as a permutation group) to the defect k group of Γ'. Furthermore, if Γ ∖ Γ' contains at least one vertex, and Γ' has at least k vertices, then the defect k group of Γ contains a subgroup isomorphic (as a permutation group) to the defect k-1 group of Γ'.

Let Γ = (V, E), Γ' = (V', E'). First, assume |V'| ≥ k+1, and let V_k = {v_1, …, v_k} ⊆ V'. Let G_k,V_k and G'_k,V_k be the defect k groups of Γ and Γ'. Let g ∈ G'_k,V_k be arbitrary. Then there exists s ∈ S_Γ' with defect set V_k such that s|_V' ∖ V_k = g. Now, E' ⊆ E, hence every elementary collapsing of Γ' is an elementary collapsing of Γ as well. Thus s ∈ S_Γ, and s acts as the identity on V ∖ V'. Furthermore, if s' ∈ S_Γ' is another element with defect set V_k such that s'|_V' ∖ V_k = g = s|_V' ∖ V_k, then s' ∈ S_Γ with s'|_V ∖ V_k = s|_V ∖ V_k. Thus φ: G'_k,V_k → G_k,V_k, φ(g) = s|_V ∖ V_k is a well defined injective homomorphism of permutation groups.

Second, assume |V'| ≥ k, and let V_k-1 = {v_1, …, v_k-1} ⊆ V'. Let v ∈ V ∖ V', and let V_k = V_k-1 ∪ {v}. Let u be a neighbor of v and let e = e_vu. Let G_k,V_k be the defect k group of Γ and let G'_k-1,V_k-1 be the defect (k-1) group of Γ'. Let g ∈ G'_k-1,V_k-1 be arbitrary. Then there exists s ∈ S_Γ' with defect set V_k-1 such that s|_V' ∖ V_k-1 = g. Now, es ∈ S_Γ has defect set V_k, and es|_V ∖ V_k acts as g on V' ∖ V_k-1, and acts as the identity on V ∖ (V' ∪ {v}). Furthermore, if s' ∈ S_Γ' is another element with defect set V_k-1 such that s'|_V' ∖ V_k-1 = g = s|_V' ∖ V_k-1, then es|_V ∖ V_k = es'|_V ∖ V_k. As g ∈ G'_k-1,V_k-1 was arbitrary, we have that φ: G'_k-1,V_k-1 → G_k,V_k, φ(g) = es|_V ∖ V_k is a well defined injective homomorphism of permutation groups.

Let 1 ≤ m ≤ l < k ≤ n-2, and assume Γ contains the following subgraph: a path (y, x_1, x_2, …, x_l, v) together with a path (x_1, u_1, u_2, …, u_m) attached at x_1. [Figure: y and x_1, …, x_l are drawn as squares, indicating membership in the defect set; v and u_1, …, u_m are drawn as circles.]

If V_k is a set of nodes of size k such that y, x_1, …, x_l ∈ V_k, and v, u_i ∉ V_k for some 1 ≤ i ≤ m, then the defect k group G_k,V_k contains the transposition (u_i, v).
Let r =s s_1 e_yx_1 e_x_1u_1, ifi =1, s s_1 … s_i p t t_i-1… t_1q, ifi≥ 2,wheres= e_vx_le_x_lx_l-1… e_x_2x_1e_x_1y, s_1= e_u_1x_1e_x_1x_2… e_x_l-1x_le_x_lv, s_j= e_u_ju_j-1… e_u_2u_1 e_u_1x_1 e_x_1x_2… e_x_l-j+1x_l-j+2, (2≤ j≤ m), p= e_yx_1e_x_1u_1e_u_1u_2… e_u_i-1u_i, t= e_x_l-i+2x_l-i+1… e_x_2x_1e_x_1y, t_j= e_x_l-j+2x_l-j+1… e_x_2x_1e_x_1u_1e_u_1u_2… e_u_j-1u_j, (2≤ j≤ m), t_1= e_vx_le_x_lx_l-1… e_x_2x_1e_x_1u_1, q= e_yx_1e_x_1x_2… e_x_l-1x_le_x_lv.Then r transposes u_i and v and fixes all other vertices of Γ outside the defect set.Notethat Lemma <ref> is going to be useful whenever Γ contains a node with degree at least 3.Let k ≥ 2,Γ' = ( V', E' ) be such that V' > k and its defect k group is transitive (e.g. if Γ' is a cycle with at least k+1 vertices).Let Γ=( V'∪ v , E' ∪x_1 v) for a new vertex v and some x_1 ∈Γ',where the degree of x_1 in Γ' is at least 2.Then the defect k group of Γ is isomorphic toS_n-k.Let n be the number of vertices of Γ,then n ≥ k+2.Let the vertices of Γ' be y, x_1, x_2, … , x_k-1, u_1, u_2, … , u_n-k-1 such that u_1 and y are neighbors of x_1 in Γ'.Let the defect set be y, x_1, … , x_k-1.Applying Lemma <ref> to the subgraph with vertices x_1, v, y, u_1 we obtain that the defect k group of Γ contains the transposition (u_1, v).Sincethe defect k group of Γ' is transitive and contained in the defect k group of Γ by Lemma <ref>,the defect k group of Γ contains the transposition (u_i, v) for all 1≤ i ≤ n-k-1.Therefore, the defect k group of Γ is isomorphic to S_n-k. Motivated by Lemma <ref>,we define the k-sub­graphs and the maximal k-subgraphs of a graph Γ.Let Γ be a simple connected graph, k≥ 2. A connected subgraph Γ' ⊆Γ is called a k-subgraph if its defect k group is the symmetric group of degree Γ'-k.A k-subgraph is a maximal k-subgraph if it has no proper extension in Γ to a k-subgraph. Finally,we say that a k-subgraph Γ' is nontrivial if it contains a vertex having at least 3 distinct neighbors in Γ'. Note that everymaximal k-subgraph is an induced subgraph.A trivial k-subgraph is either a line on k+1 points or a cycle on k+1 or k+2 points.Furthermore,a trivial maximal k-subgraph cannot be a cycle by Lemma <ref>,unless the graph itself is a cycle.Finally,any connected subgraph of k+1 points is trivially a k-subgraph,thus every connected subgraph of k+1 points is contained in a maximal k-subgraph. Notethat the intersection of two maximal k-subgraphs cannot contain more than k vertices:Let Γ_1, Γ_2 be k-subgraphs such that Γ_1∩Γ_2> k.Then Γ_1∪Γ_2 is a k-subgraph,as well. Choose the defect set V_k such that V_k ⫋Γ_1 ∩Γ_2,and let v ∈( Γ_1 ∩Γ_2 ) ∖ V_k.Then the symmetric groups acting onΓ_1∖ V_k and Γ_2∖ V_kare subgroups in the defect k group of Γ_1∪Γ_2.Thus,we can transpose every member of Γ_i ∖( V_k ∪v) with v.Therefore, the defect k group of Γ_1 ∪Γ_2 is the symmetric group on( Γ_1∪Γ_2 ) ∖ V_k. Let Γ be a simple connected graph,and let Γ' be a k-subgraph of Γ.Let x_1∈Γ', v ∉Γ',and let P = ( x_1, x_2, …, x_l, v ) be a shortest path between x_1 and v in Γ for some l≤ k-1.Assume that x_1 has at least 2 neighbors in Γ' apart from x_2.Then the subgraph Γ' ∪ P is a k-subgraph. First, consider the case x_2, … , x_l ∈Γ'.Let u, y be two neighbors of x_1 in Γ' distinct from x_2,and choose the defect set V_k such that it contains y, x_1, … , x_l and does not contain u.By Lemma <ref> the defect k group of Γ'∪ v contains the transposition (u, v). 
Furthermore,the defect k group of Γ' is the whole symmetric group on Γ' ∖ V_k.Thus,the defect k group of Γ' ∪v is the whole symmetric group on ( Γ' ∖ V_k ) ∪v.Now,if not all of x_2, … , x_l are in Γ',then, by the previous argument, one can add them (and then v) to Γ' one by one,and obtain an increasing chain of k-subgraphs. As a corollary,we obtain that every vertex of degree at least 3 together with at least two of its neighbors is contained in exactly one nontrivial maximal k-subgraph.Let Γ be a simple connected graph with n vertices such that n>k, and let x_1 be a vertex having degree at least 3. Then there exists exactly one maximal k-subgraph Γ' containing x_1 such that x_1 has degree at least 2 in Γ'.Furthermore,Γ' is a nontrivial k-subgraph,and if Γ_x_1 is the induced subgraph of the vertices in Γ that are of at most distance k-1 from x_1,then Γ_x_1⊆Γ'.Any connected subgraph of Γ with k+1 vertices containing x_1 and any two of its neighbors is a k-subgraph.Thus there exists at least one maximal k-subgraph containing x_1 and two of its neighbors.Let Γ' be a maximal k-subgraph containing x_1 and at least two of its neighbors.Assume that Γ_x_1⊈Γ'.Let v ∈Γ_x_1∖Γ' be any vertex at a minimal distance from x_1,and let P = (x_1, … , x_l, v) be a shortest path between x_1 and v.If l = 1,then P=(x_1, v).Now x_1 has at least two neighbors in Γ' apart from v,therefore Γ' ∪ P is a k-subgraph by Lemma <ref>,which contradicts the maximality of Γ'.Thus l≥ 2,in particular all neighbors of x_1 in Γ are in Γ', as well,and thus Γ' is a nontrivial k-subgraph.Hence x_1 has at least two neighbors in Γ' apart from x_2,therefore Γ' ∪ P is a k-subgraph by Lemma <ref>,which contradicts the maximality of Γ'.Thus Γ_x_1⊆Γ'. Now,assume that Γ' and Γ” are maximal k-subgraphs containing x_1 and at least two of its neighbors.Then Γ_x_1⊆Γ' and Γ_x_1⊆Γ”.Notethat either Γ_x_1 = Γ (and hence Γ_x_1 = n >k),or there exists a vertex v ∈Γ which is of distance exactly k from x_1.Let P = (x_1, … , x_k, v) be a shortest path between x_1 and v,and let u and y be two neighbors of x_1 distinct from x_2.Then x_1, … , x_k , y, u⊆Γ_x_1,thus Γ_x_1 > k.Therefore Γ' ∩Γ”≥Γ_x_1 > k,yielding Γ' = Γ” by Lemma <ref>. Let Γ' be a nontrivial k-subgraph of Γ,and let P be a Γ'-ear.Then Γ' ∪ P is a (nontrivial) k-subgraph of Γ. Let Γ, Γ' and P=( w_0,w_1,… w_i, w_i+1) be a counterexample,where i is minimal.There exists a shortest path (w_0, y_1, …, y_l, w_i+1)in Γ' among those where the degree of some y_j or of w_0 or of w_i+1 is at least3 in Γ'.(At least one such path exists,because Γ' is connected, and is a nontrivial k-subgraph,hence contains a vertex of degree at least 3.) For easier notation,let y_0 = w_0, y_l+1 = w_i+1.Let y' ∈Γ' ∖y_0, y_1, …, y_l, y_l+1 be a neighbor of y_j;this exists, because the degree of y_j is at least 3,and otherwise a shorter path would exist between w_0 and w_i+1. If j+1 ≤ k-1 (that is j ≤ k-2),then by Lemma <ref> the induced subgraph on Γ' ∪w_1 is a k-subgraph,thus Γ' ∪w_1 with the ear ( w_1, … , w_i, w_i+1 ) is a counterexample with a shorter ear. Similarly,if l-j+2 ≤ k-1 (that is l+3-k ≤ j),then by Lemma <ref> the induced subgraph on Γ' ∪w_i is a k-subgraph,thus Γ' ∪w_i with the ear ( w_0, w_1, … , w_i ) is a counterexample with a shorter ear. Finally,if k-1 ≤ j ≤ l+2-k,then 2k-3 ≤ l.Let Γ” be the cycle P ∪( y_0, y_1, … , y_l, y_l+1) together with y' and the edge y_j y'.Then Γ” is a k-subgraph by Lemma <ref>,Γ' ∩Γ” = l+2 ≥ 2k-1 > k,hence Γ' ∪Γ” = Γ' ∪ P is a k-subgraph by Lemma <ref>. 
Let Γ be a simple connected graph with n vertices such that n>k, and assume that Γ is not a cycle.Suppose uv is an edge contained in a cycle of Γ.Then there exists exactly one maximal k-subgraph Γ' containing the edge uv.Furthermore,Γ' is a nontrivial k-subgraph,and if Γ_uv is the 2-edge connected component containing uv,then Γ_uv⊆Γ'.Any connected subgraph of Γ with k+1 vertices containing the edge uv is a k-subgraph.Thus there exists at least one maximal k-subgraph Γ' containing the edge uv.We prove first that Γ' is a nontrivial k-subgraph,then prove Γ_uv⊆Γ', and only after that do we prove that Γ' is unique. Assume first that Γ' is a trivial k-subgraph.If Γ' were a cycle,then Γ∖Γ' contains at leastone vertex,because Γ' is an induced subgraph of Γ. Then Lemma <ref> contradicts the maximality of Γ'.Thus Γ' is a line of k+1 vertices.Let Γ_2 be a shortest cycle containing uv.Now,there must exist a vertex in Γ∖Γ_2,otherwise either Γ = Γ_2 would be a cycle,or there would exist an edge in Γ∖Γ_2 yielding a shorter cycle than Γ_2 containing the edge uv.Let x_2 ∈Γ∖Γ_2 be a neighbor of a vertex in Γ_2.By Lemma <ref> the induced subgraph on Γ_2 ∪x_2 is a k-subgraph.Thus Γ' ⊈Γ_2,otherwise Γ' would not be a maximal k-subgraph.Let x_1 ∈Γ' ∩Γ_2 be a vertex such that two of its neighbors are in Γ_2 and its third neighbor is some x_2 ∈Γ' ∖Γ_2.Notethat every vertex in Γ' is of distance at most k-1 from x_1,because u,v ∈Γ' ∩Γ_2.Thus,if Γ_2≥ k+1,then Γ_2 together with x_2 and the edge x_1x_2 is a k-subgraph by Lemma <ref>,and hence Γ_2 ∪Γ' is a k-subgraph by Lemma <ref>,contradicting the maximality of Γ'.Otherwise,if Γ_2≤ k,then every vertex in Γ_2 is of distance at most k-1 from x_1,and hence Γ_2 ∪Γ' is a k-subgraph by Lemma <ref>,contradicting the maximality of Γ'.Therefore Γ' is a nontrivial k-subgraph. Now we show that the two-edge connected component Γ_uv⊆Γ'.Let Γ, Γ' be a counterexample to this such that the number of vertices of Γ_uv is minimal,and among these counterexamples choose one where the number of edges of Γ_uv is minimal.Using an ear-decomposition <cit.>,Γ_uv is either a cycle,or there exists a 2-edge connected subgraph Γ_1 ⊆Γ_uv and there exists*either a Γ_1-ear P such that Γ_uv = Γ_1 ∪ P,*or a cycle Γ_2 such that Γ_1 ∩Γ_2 = 1 and Γ_uv = Γ_1 ∪Γ_2. If Γ_uv is a cycle containing the edge uv,and Γ_uv⊈Γ',then going along the edges of Γ_uv,one can find a Γ'-ear P ⊆Γ_uv.Then Γ' ∪ P is a k-subgraph by Lemma <ref>,contradicting the maximality of Γ'.Thus Γ_uv is not a cycle.Let us choose Γ_1 from cases (<ref>) and (<ref>) so that it would have the least number of vertices. Assume first that case (<ref>) holds.By minimality of the counterexample,Γ_1 ⊆Γ'.If P ⊈Γ',then going along the edges of P one can find a Γ'-ear P' ⊆ P.But then Γ' ∪ P' is a k-subgraph by Lemma <ref>,contradicting the maximality of Γ'. Assume now that case (<ref>) holds.Again, by induction,Γ_1 ⊆Γ'.If Γ_2 ⊈Γ',then either Γ' ∩Γ_2 = 1 or going along the edges of Γ_2 one can find a Γ'-ear P' ⊆Γ_2.The latter case cannot happen,because then Γ' ∪ P' is a k-subgraph by Lemma <ref>,contradicting the maximality of Γ'.Thus Γ' ∩Γ_2 = 1,and hence Γ' ∩Γ_2 = Γ_1 ∩Γ_2. 
Let Γ_1 ∩Γ_2 = x_1,and let v_1 be a neighbor of x_1 in Γ_1 ∖Γ_2,and let v_2 be a neighbor of x_1 in Γ_2 ∖Γ_1.If Γ_2≤ k,then Γ_2 can be extended to a connected subgraph of Γ having exactly k+1 vertices,which is a k-subgraph.If Γ_2≥ k+1,then Γ_2 ∪v_1 is a k-subgraph by Lemma <ref>.In any case,there exists a maximal k-subgraph Γ_2' ⊇Γ_2.For notational convenience,let Γ_1' denote the maximal k-subgraph Γ' containing Γ_1.We prove that Γ_2' = Γ_1'=Γ',thus Γ' contains Γ_2,contradicting that we chose a counterexample. Now,both Γ_1 and Γ_2 contain at least two neighbors of x_1.Let V_i ⊆Γ_i be the set of vertices with distance at most k-1 from x_1 (i ∈1,2). If Γ_i≤ k,then V_i contains all vertices of Γ_i,otherwise V_i≥ k (i ∈1,2).By Lemma <ref>,the induced subgraph on V_1 is contained in Γ_2'.Thus,if V_1 contains all vertices of Γ_1,then Γ_1 ⊆Γ_2',hence we have Γ_1' = Γ_2'.Similarly,the induced subgraph on V_2 is contained in Γ_1'.Thus,if V_2 contains all vertices of Γ_2,then Γ_2 ⊆Γ_1',hence we have Γ_1' = Γ_2'.Otherwise,Γ_1' ∩Γ_2'≥V_1 + V_2 - x_1≥ 2k-1 > k,hence by Lemma <ref> we have Γ_1' = Γ_2'. Finally,we prove uniqueness.Let Γ' and Γ” be two maximal k-subgraphs containing the edge uv.Then both Γ' and Γ” contain Γ_uv.If Γ = Γ_uv,then Γ' = Γ_uv = Γ”.Otherwise,there exists a vertex x_2 ∈Γ∖Γ_uv such that it has a neighbor x_1 ∈Γ_uv. Notethat x_1 has degree at least 3 in Γ.Let V_1 be the vertices of Γ of distance at most k-1 from x_1.Notethat if V_1 does not contain all vertices of Γ,then V_1 > k.By 2-edge connectivity, Γ_uv⊆Γ' contains at least two neighbors of x_1,thus V_1 ⊆Γ' by Lemma <ref>.Similarly,Γ_uv⊆Γ” contains at least two neighbors of x_1,thus V_1 ⊆Γ” by Lemma <ref>.If V_1 contains all vertices of Γ,then Γ' = Γ = Γ”.Otherwise,Γ' ∩Γ”≥V_1 > k,and Γ' = Γ” by Lemma <ref>.Recall that by <cit.>a strongly connected antisymmetric digraph becomes a 2-edge connected graph after forgetting the directions.Thus Rhodes's conjecture about strongly connected,antisymmetric digraphs <cit.> follows immediately from the following theorem on 2-edge connected graphs:Let n>k ≥ 2,Γ be a 2-edge connected simple graph having n vertices.If Γ is a cycle,then the defect k group is Z_n-k.If Γ is not a cycle,then the defect k group is S_n-k. If Γ is a cycle,then its defect k group is Z_n-k by Lemma <ref>.Since Γ is 2-edge connected with at least 3 vertices,every edge of Γ is contained in a cycle.Thus, if Γ is not a cycle,then the defect k group is S_n-k by Corollary <ref>.The final part of this section is devoted to prove item (<ref>) of Theorem <ref>.First,we define bridges in Γ:A path ( x_1, …,x_l ) in a connected graph Γ for some l ≥ 2 is called a bridgeif the degree of x_i in Γ is 2 for all 2 ≤ i≤ l-1,and if Γ∖x_jx_j+1 is disconnected for all 1 ≤ j≤ l-1.The length of the bridge ( x_1, …,x_l ) is l.The intersection of maximal k-subgraphs turn out to be bridges:Let Γ_1 and Γ_2 be distinct maximal k-subgraphs of the connected simple graph Γ.Assume that Γ is not a cycle.Then Γ_1 ∩Γ_2 is either empty,or is a bridge (x_1, … , x_l) such that*l ≤ k, and*if l≥ 2 and Γ_i ∖x_1, …, x_l (i ∈1,2) contains a neighbor of x_1 (resp. x_l),then Γ_i contains all neighbors of x_1 (resp. x_l), Notethat Γ_1 and Γ_2 are induced subgraphs of Γ,thus so is Γ_1 ∩Γ_2. We prove first that Γ_1 ∩Γ_2 is connected (or empty) if Γ_1 is a nontrivial maximal k-subgraph.Suppose that u, v ∈Γ_1 ∩Γ_2 are in different components of Γ_1∩Γ_2 such that the distancebetween u and v is minimal in Γ_2. 
Due to the minimality,there exists a path (u, x_1, … , x_l, v) such thatx_1, …, x_l ∈Γ_2 ∖Γ_1.Then P = (u, x_1, …,x_l, v) is a Γ_1-ear, and Γ_1 ∪ P would be a k-subgraph by Lemma <ref>,contradicting the maximality of Γ_1. Thus Γ_1 ∩Γ_2 is connected.One can prove similarly that Γ_1 ∩Γ_2 is connected if Γ_2 is a nontrivial maximal k-subgraph. Now we prove that Γ_1 ∩Γ_2 is connected,even if both Γ_1 and Γ_2 are trivial maximal k-subgraphs.As Γ_1 ⫋Γ, Γ_1 cannot be a cycle hence must be a line (x_1, …, x_k+1). Notethat the degree of x_i in Γ for 2 ≤ i ≤ k must be 2,otherwise a nontrivial maximal k-subgraph would contain x_i,and thus also Γ_1 by Corollary <ref>.In particular,if Γ_1 ∩Γ_2 is not connected,then x_1, x_k+1∈Γ_1 ∩Γ_2,x_i ∉Γ_1 ∩Γ_2 for some 2 ≤ i≤ k,and Γ_1 ∪Γ_2 would be a cycle.However,by Corollary <ref>,the edge x_1x_2 is contained in a unique nontrivial maximal k-subgraph,contradicting that it is also contained in the trivial maximal k-subgraph Γ_1. Now, we prove (<ref>).By Corollary <ref>,Γ_1 ∩Γ_2 cannot contain any edge uv which is contained in a cycle.As Γ_1 ∩Γ_2 is connected,it must be a tree.However,Γ_1 ∩Γ_2 cannot contain any vertex of degree at least 3 in Γ_1 ∩Γ_2,otherwise that vertex would be contained in a unique maximal k-subgraph by Corollary <ref>.Thus Γ_1 ∩Γ_2 is a path (x_1, … , x_l).Now,l≤ k by Lemma <ref>,proving (<ref>). Notethat if any x_i (2≤ i≤ l-1) is of degree at least 3 in Γ,then x_i-1, x_i, x_i+1 is contained in a unique maximal k-subgraph by Corollary <ref>,a contradiction. For (<ref>) observe that at least two neighbors of x_1 (resp. x_l) are in Γ_i,and thus all its neighbors must be in Γ_i by Corollary <ref>.Finally, if l≥ 2 then Γ∖x_jx_j+1 is disconnected for all 1≤ j≤ l-1 follows immediately from Corollary <ref> and the fact that any edge that is not contained in any cycle disconnects the graph Γ. Edges of short maximal bridges (having length at most k-1) are contained in a unique maximal k-subgraph:Let Γ be a simple connected graph with n vertices such that n>k,and let uv be an edge which is not contained in any cycle.Let (x_1, … , x_l) be a longest bridge containing the edge uv.If l≤ k-1,then uv is contained in a unique maximal k-subgraph Γ',and furthermore,Γ' is a nontrivial k-subgraph. As uv is not part of any cycle in Γ,uv is a bridge of length 2.Notethat a longest bridge (x_1, … , x_l) containing uv is unique,because as long as the degree of at least one of the path's end vertices is 2 in Γ,the path can be extended in that direction.The obtained path is the unique longest bridge containing uv. Let Γ' be a maximal k-subgraph containing uv,and assume l≤ k-1.Notethat the distance of x_1 and x_l is l-1 ≤ k-2.As Γ≥ k+1,at least one of x_1 and x_l has degree at least 3 in Γ,say x_1.We distinguish two cases according to the degree of x_l. Assume first that x_l is of degree 1.As Γ' is a connected subgraph having at least k+1 vertices,Γ' must contain x_1 and at least two of its neighbors.Then by Corollary <ref> it contains all vertices of Γ of distance at most k-1 from x_1.In particular,Γ' must contain the bridge (x_1, … , x_l). However,there is a unique (nontrivial) maximal k-subgraph Γ_1' containing x_1 and two of its neighbors by Corollary <ref>,and thus Γ' = Γ_1' is that unique maximal k-subgraph. 
Assume now that x_l is of degree at least 3.As Γ' is a connected subgraph having at least k+1 vertices,Γ' must contain x_1 and at least two of its neighbors,or x_l and at least two of its neighbors.If Γ' contains x_1 and at least two of its neighbors,then by Corollary <ref> it contains all vertices of Γ of distance at most k-1 from x_1.In particular,Γ' must contain the bridge (x_1, … , x_l) and all of the neighbors of x_l.Similarly,one can prove that if Γ' contains x_l and two of its neighbors,then it also contains the bridge (x_1, … , x_l) and all of the neighbors of x_1.However,there is a unique (nontrivial) maximal k-subgraph Γ_1' containing x_1 and two of its neighbors by Corollary <ref>,and also a unique (nontrivial) maximal k-subgraph Γ_l' containing x_l and two of its neighbors by Corollary <ref>.Therefore Γ' must equal to both Γ_1' and Γ_l',and hence is unique.In particular,in non-cycle graphs trivial maximal k-subgraphs or intersections of two different maximal k-subgraphs consist of edges that are contained in long bridges(having length at least k).The key observation in proving item (<ref>) of Theorem <ref> is that a defect k group cannot move a vertex across a bridge of length at least k:Let 2≤ k≤ l, Γ_1 and Γ_2 be disjoint connected subgraphs of the connected graph Γ,and ( x_1, x_2, …,x_l ) be a bridge in Γ such that x_1 …,x_l∉Γ_1 ∪Γ_2,x_1 has only neighbors in Γ_1 (except for x_2),x_l has only neighbors in Γ_2 (except for x_l-1).Assume Γ has no more vertices than Γ_1 ∪Γ_2 ∪( x_1, …, x_l ).Let the defect set be V_k = x_1, … , x_k.Then for any u ∈Γ_1 and v ∈Γ_2 there does not exist any permutation in G_k,V_k which moves u to v.Let S = S_Γ.Assume that there exists u ∈Γ_1,v ∈Γ_2,and a transformation g ∈ S of defect V_k such that g _V ∖ V_k∈ G_k,V_k and ug=v.Let s_0 ∈ G_k,V_k be the unique idempotent power of g,that is s_0 is a transformation of defect V_k that acts as the identity on Γ∖ V_k.Then there exists a series of elementary collapsings e_1, … , e_m such that g = e_1 … e_m. For every 1 ≤ d ≤ m let s_d = s_0e_1 … e_d.Now, s_m = s_0 e_1 … e_m = s_0 g = gs_0 = g.In particular,both s_m and s_0 are of defect k,hence s_d is of defect k for all 1≤ d ≤ m.Consequently,Γ_1 s_d = Γ_1,Γ_2 s_d = Γ_2 andΓ_1 s_d ∩Γ_2 s_d = ∅ for all 1≤ d ≤ m. For an arbitrary s ∈ S,let i(s)= 0,if Γ_1 s ⊆Γ_1, l+1,if Γ_1 s ⊈Γ_1 ∪x_1, … , x_l,min_1≤ i ≤ lΓ_1 s ⊆Γ_1 ∪x_1, … , x_i,otherwise. Similarly, let j(s)= l+1,if Γ_2 s ⊆Γ_2, 0,if Γ_2 s ⊈Γ_2 ∪x_1, … , x_l,max_1≤ j ≤ lΓ_2 s ⊆Γ_2 ∪x_j, … , x_l,otherwise.Notethat for arbitrary s ∈ S and elementary collapsing e,we have i(s)-i(se)≤ 1, j(s)-j(se)≤ 1.Furthermore,both i(s_d)-i(s_de) = 1 and j(s_d)-j(s_de)=1 cannot happen at the same time for any 1≤ d ≤ m,because that would contradict Γ_1 s_d ∩Γ_2 s_d ≠∅. For s_0 we have i(s_0) = 0< l+1 = j(s_0),for s_m we have i(s_m) = l+1 ≥ j(s_m).Let 1 ≤ d ≤ m be minimal such that i(s_d) ≥ j(s_d).Then i(s_d-1) < j(s_d-1).From s_d-1 to s_d either i or j can change and by at most 1,thus i(s_d) = j(s_d).If i(s_d) = j(s_d) ∈1, … , l,then x_i(s_d)∈Γ_1 s_d ∩Γ_2 s_d,contradicting Γ_1 s_d ∩Γ_2 s_d = ∅.Thus i(s_d) = j(s_d) ∉1, … , l.Assume i(s_d) = j(s_d) = l+1,the case i(s_d) = j(s_d) = 0 can be handled similarly.Now,j(s_d) = l+1 yields Γ_2 s_d ⊆Γ_2.Furthermore,Γ_2 s_d = Γ_2,thus Γ_2 s_d = Γ_2.From i(s_d) = l+1 we have Γ_1 s_d ∩Γ_2 ≠∅.Thus Γ_1 s_d ∩Γ_2 s_d = Γ_1 s_d ∩Γ_2 ≠∅,a contradiction. 
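The lemma just proved (and the corollary that follows) can be observed directly on the smallest interesting configuration. The following check is our own code, a condensed restatement of the brute-force helpers from the earlier sketch: two triangles joined by a single cut edge, with k = 2 and the bridge itself as defect set. No defect 2 permutation moves a vertex from one triangle to the other, and the group comes out as S_2 × S_2, as the corollary below predicts.

```python
def compose(s, t):
    return tuple(t[s[x]] for x in range(len(s)))

def e(n, u, v):
    return tuple(v if x == u else x for x in range(n))

def closure(n, edges):
    gens = [e(n, u, v) for (u, v) in edges]
    S, frontier = set(gens), list(gens)
    while frontier:
        s = frontier.pop()
        for g in gens:
            for p in (compose(s, g), compose(g, s)):
                if p not in S:
                    S.add(p)
                    frontier.append(p)
    return S

def defect_group(n, S, defect_set):
    rest = [x for x in range(n) if x not in defect_set]
    return {tuple(s[x] for x in rest) for s in S
            if {s[x] for x in rest} == set(rest)
            and all(s[x] in rest for x in defect_set)}

# Two triangles {0,1,2} and {3,4,5} joined by the cut edge 2-3; k = 2.
und = [(u, v) for (a, b) in [(0,1),(0,2),(1,2),(2,3),(3,4),(3,5),(4,5)]
       for (u, v) in ((a, b), (b, a))]
G = defect_group(6, closure(6, und), {2, 3})   # acts on (0, 1, 4, 5)

assert all(g[0] in (0, 1) for g in G)  # vertex 0 never crosses the bridge
print(len(G))                          # 4: the group is S_2 x S_2
```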
Let Γ_1 and Γ_2 be connected subgraphs of Γ such that Γ_1 ∩Γ_2 is a length k bridge in Γ.Let V_k = Γ_1 ∩Γ_2 be the defect set.Let G_i be the defect k group of Γ_i,G be the defect k group of Γ_1 ∪Γ_2.Then G = G_1 × G_2. By Lemma <ref> we have G_1, G_2 ≤ G.Since G_1 and G_2 act on disjoint vertices, their elements commute. Thus G_1× G_2 ≤ G.Now, V_k is a bridge of length k,thus by Lemma <ref>(applied to the disjoint subgraphs Γ_1 ∖ V_k and Γ_2 ∖ V_k)there exists no element of G moving a vertex from Γ_1 to Γ_2 or vice versa.Therefore G ≤ G_1× G_2. Finally,we are ready to prove item (<ref>) of Theorem <ref>.If Γ is a cycle,then its defect k group is Z_n-k by Lemma <ref>.Otherwise,we prove the theorem by induction on the number of maximal k-subgraphs of Γ.If Γ is a maximal k-subgraph,then the theorem holds, and the defect k group of Γ is S_n-k.In the following we assume that Γ contains m-many maximal k-subgraphs for some m≥ 2,and that the theorem holds for all graphs with at most (m-1)-many maximal k-subgraphs. We consider two cases.Assume first that there exists a degree 1 vertex x_1 ∈Γ,such that there exists a path (x_1, …, x_k+1) which is a bridge.Let Γ_1 be the path (x_1, …, x_k+1),and let Γ_2 be Γ∖x_1.Now,Γ_1 is a trivial maximal k-subgraph,hence Γ_2 contains the same maximal k-subgraphs as Γ except Γ_1.Furthermore,Γ_2 is connected,and cannot be a cycle because the degree of x_2 in Γ_2 is 1.Let the sizes of the maximal k-subgraphs of Γ_2 be n_2, … , n_m,then by induction the defect k group of Γ_1 is S_n_2-k×…× S_n_m-k.The size of Γ_1 is n_1 = k+1,its defect k-group is S_n_1-k.Furthermore,Γ_1 ∩Γ_2 is a bridge of length k.By Corollary <ref> the defect k-group of Γ is S_n_1-k× S_n_2-k×…× S_n_m-k.In the second case, no degree 1 vertex x_1 is in a path (x_1, … , x_k+1) which is a bridge.Then any maximal bridge (x_1, …, x_l)with a degree 1 vertexx_1 has length l ≤ k,and, as the bridge cannot be extended,x_l must have degree at least 3.Moreover, (x_1, ... , x_l) lies in a maximal k-subgraph containing x_l and all its neighbors by Lemma <ref> and Corollary <ref>. In particular every bridge in Γ of length at least k+1 occurs between nodes of degree at least 3.Hence every bridge of length at least k+1 occurs between two nontrivial maximal k-subgraphs by Corollary <ref>.For every vertex v having degree at least 3 in Γ,let Γ_v be the unique maximal k-subgraph containing v and all its neighbors (Corollary <ref>).By definition,these are all the nontrivial maximal k-subgraphs of Γ. Let Γ^k be the graph whose vertices are the nontrivial maximal k-subgraphs,and Γ_uΓ_v is an edge in Γ^k (for Γ_u ≠Γ_v) if and only ifthere exists a bridge in Γ between a vertex u' ∈Γ_u of degree at least 3 in Γ_uand a vertex v' ∈Γ_v of degree at least 3 in Γ_v.By Corollary <ref>,Γ_u = Γ_v if u and v are in the same 2-edge connected component.As the 2-edge connected components of Γ form a tree,the graph Γ^k is a tree. Now, Γ^k has m vertices.Let Γ_1 be a leaf in Γ^k,and let Γ_m be its unique neighbor in Γ^k.Let x_1 ∈Γ_1 and x_l ∈Γ_m be the unique vertices of degree at least 3 in Γ_i (i ∈1, l) such that there exists a bridge P = ( x_1, …, x_l) in Γ.Note that the length of P is at least k,otherwiseΓ_1 = Γ_m would follow by Lemma <ref>.Furthermore,any other bridge having an endpoint in Γ_1 must be of length at most k,because every degree 1 vertex is of distance at most k-1 from a vertex of degree at least 3.Thus every bridge other than P and having an endpoint in Γ_1 is a subset of Γ_1 by Corollary <ref>. 
Let Γ_2 = (Γ ∖ Γ_1) ∪ P. Now, Γ_1 is a maximal k-subgraph, and Γ_2 has one less maximal k-subgraph than Γ. Furthermore, Γ_2 is connected, because every bridge other than P having an endpoint in Γ_1 is a subset of Γ_1. Finally, Γ_2 is not a cycle, because it contains the vertex x_1, which is of degree 1 in Γ_2. Let the sizes of the maximal k-subgraphs of Γ_2 be n_2, …, n_m; then by induction the defect k group of Γ_2 is S_n_2-k × … × S_n_m-k. Let the size of Γ_1 be n_1; its defect k group is S_n_1-k. Furthermore, Γ_1 ∩ Γ_2 is a bridge of length k. By Corollary <ref> the defect k group of Γ is S_n_1-k × S_n_2-k × … × S_n_m-k.

§ AN ALGORITHM TO CALCULATE THE DEFECT K GROUP

Note that by items (<ref>) and (<ref>) of Theorem <ref> the defect 1 group can be trivially computed in O(|E|) time by first determining the 2-vertex connected components <cit.>, and then checking whether each is a cycle, the exceptional graph (Figure <ref>) or, if not, whether or not it is bipartite. For k ≥ 2 one can first check if Γ is a cycle (and then the defect group is Z_n-k) or a path (and then the defect group is trivial). In the following, we give a linear algorithm (running in O(|E|) time) to determine the maximal k-subgraphs (k ≥ 2) of a connected graph Γ having n vertices and |E| edges, where at least one vertex is of degree at least 3.

During the algorithm we color the vertices. Let us call a maximal subgraph with vertices having the same color a monochromatic component. First, one finds all 2-edge connected components and the tree of 2-edge connected components in O(|E|) time using e.g. <cit.>. Color the vertices of the nontrivial (i.e. having size greater than 1) 2-edge connected components such that two distinct vertices have the same color if and only if they are in the same nontrivial 2-edge connected component. Furthermore, color the uncolored vertices having degree at least 3 by colors different from each other and from the colors of the 2-edge connected components. Then the monochromatic components are each contained in a unique nontrivial maximal k-subgraph by Corollaries <ref> and <ref> (a nontrivial maximal k-subgraph may contain more than one of these monochromatic components). Furthermore, the monochromatic components and the degree 1 vertices are connected by bridges. If any of the bridges connecting two monochromatic components is of length at most k-1, then recolor the two monochromatic components at the ends of the bridge and the vertices of the bridge by the same color, because these are contained in the same maximal k-subgraph by Corollary <ref>. Similarly, if any of the bridges connecting a monochromatic component and a degree 1 vertex is of length at most k-1, then recolor the monochromatic component and the vertices of the bridge by the same color, because these are contained in the same maximal k-subgraph by Lemma <ref>. Repeat recoloring along all bridges of length at most k-1 in O(|E|) time. Then we obtain monochromatic components Γ_1, …, Γ_l connected by long bridges (i.e. bridges of length at least k), and possibly some long bridges to degree 1 vertices. Now, we have finished coloring.
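A compact rendering of the coloring stage just described is sketched below. This is our own code (the name color_stage is ours): it leans on networkx for the cut edges, implements the merging with a naive union-find, omits the absorption of short tails ending in degree 1 vertices, and makes no attempt at the strict O(|E|) bound, so it illustrates the logic rather than the linear-time algorithm itself. As in the text, it assumes Γ is connected, is not a cycle, and has a vertex of degree at least 3.

```python
import networkx as nx
from itertools import count

def color_stage(G, k):
    fresh = count()
    # Nontrivial 2-edge connected components: delete the cut edges and
    # keep the connected components with at least two vertices.
    H = G.copy()
    H.remove_edges_from(list(nx.bridges(G)))
    color = {}
    for comp in nx.connected_components(H):
        if len(comp) >= 2:
            c = next(fresh)
            for v in comp:
                color[v] = c
    # Uncolored vertices of degree at least 3 get fresh colors of their own.
    for v in G:
        if v not in color and G.degree(v) >= 3:
            color[v] = next(fresh)
    # Naive union-find over colors.
    parent = {}
    def find(c):
        while parent.get(c, c) != c:
            c = parent[c]
        return c
    def union(c1, c2):
        parent[find(c1)] = find(c2)
    # Merge along every bridge of length at most k-1: walk from a colored
    # vertex through uncolored (necessarily degree <= 2) vertices; a walk
    # meeting another colored vertex after c interior vertices traverses
    # a bridge on c+2 vertices.
    for u in list(color):
        for w in G[u]:
            if w in color:
                if color[u] != color[w] and 2 <= k - 1:
                    union(color[u], color[w])
                continue
            interior, prev, cur = 0, u, w
            while cur not in color:
                interior += 1
                step = [z for z in G[cur] if z != prev]
                if not step:          # the chain ends in a degree 1 vertex
                    break
                prev, cur = cur, step[0]
            else:
                if interior + 2 <= k - 1:
                    union(color[u], color[cur])
    return {v: find(c) for v, c in color.items()}
```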
For every 1 ≤ i ≤ l, let Γ_i' be the induced subgraph having all vertices of distance at most k-1 from Γ_i, which can be obtained in O(|E|) time by adding the appropriate k-1 vertices of the long bridges to the appropriate monochromatic component. Note that the obtained induced subgraphs are not necessarily disjoint. Then Γ_1', …, Γ_l' are the nontrivial maximal k-subgraphs of Γ by Lemma <ref>. Again, by Lemma <ref>, the trivial maximal k-subgraphs of Γ are the paths containing exactly k+1 vertices in a long bridge. These can also be computed in O(|E|) time by going through all long bridges. By item (<ref>) of Theorem <ref>, the defect k group of Γ as a permutation group is the direct product of the defect k groups of Γ_1', …, Γ_l' and the defect k groups of the trivial maximal k-subgraphs.

§ COMPLEXITY OF THE FLOW SEMIGROUP OF (DI)GRAPHS

In this section we apply our results and the complexity lower bounds of <cit.> to verify <cit.> for 2-vertex connected graphs. That is, we prove that the Krohn–Rhodes (or group-) complexity of the flow semigroup of a 2-vertex connected graph with n vertices is n-2 (item (<ref>) of Theorem <ref>). Then we derive item (<ref>) of Theorem <ref> as a further consequence of our results. For standard definitions on wreath products of semigroups, we refer the reader to e.g. <cit.>.

A finite semigroup S is called combinatorial if and only if every maximal subgroup of S has one element. Recall that the Krohn–Rhodes (or group-) complexity of a finite semigroup S, denoted by cpx(S), is the smallest non-negative integer n such that S is a homomorphic image of a subsemigroup of the iterated wreath product C_n ≀ G_n ≀ … ≀ C_1 ≀ G_1 ≀ C_0, where G_1, …, G_n are finite groups, C_0, …, C_n are finite combinatorial semigroups, and ≀ denotes the wreath product (for the precise definition, see e.g. <cit.>). The definition immediately implies that if a finite semigroup S is a homomorphic image of a subsemigroup of T, then cpx(S) ≤ cpx(T). More can be found on the complexity of semigroups in e.g. <cit.>.

We need the following results on the complexity of semigroups. The flow semigroup K_n of the complete graph on n ≥ 2 vertices has cpx(K_n) = n-2. The complexity of the full transformation semigroup F_n on n points is cpx(F_n) = n-1.

The well-known ℒ-order is a pre-order, i.e. a transitive and reflexive binary relation, on the elements of a semigroup S given by s_1 ≽_ℒ s_2 if s_1 = s_2 or ss_1 = s_2 for some s ∈ S. The ℒ-classes of S are the equivalence classes of the ℒ-order. The ℒ-classes are thus partially ordered by L_1 ≽_ℒ L_2 if and only if SL_1 ∪ L_1 ⊇ SL_2 ∪ L_2. One says that a finite semigroup S is a T_1-semigroup if it is generated by some ≽_ℒ-chain of its ℒ-classes, i.e. if there exist ℒ-classes L_1 ≽_ℒ … ≽_ℒ L_m of S such that S = ⟨ L_1 ∪ … ∪ L_m ⟩. Equivalently, S is a T_1-semigroup if there exist U_i ⊆ L_i (1 ≤ i ≤ m) for such a chain of ℒ-classes of S such that S = ⟨ U_1 ∪ … ∪ U_m ⟩.

Let S be a noncombinatorial T_1-semigroup. Then cpx(S) ≥ 1 + cpx(EG(S)), where EG(S) is the subsemigroup of S generated by all its idempotents.

Now we prove <cit.> for 2-vertex connected graphs. Let Γ be a 2-vertex connected simple graph with n ≥ 2 vertices. Let K_n denote the flow semigroup of the complete graph on the vertices V, where |V| = n. Then cpx(S_Γ) ≤ cpx(K_n) = n-2 by Lemma <ref>. We proceed by induction on n. If n ≤ 3, then Γ is a complete graph, and cpx(S_Γ) = n-2 by Lemma <ref>. From now on we assume n > 3 and Γ = (V, E).

Case 1. Assume first that Γ is not a cycle. Let (u,v) and (x,y) be two disjoint edges in Γ. Let G_1 be the defect 1 group acting on V ∖ {u}, with the idempotent e_uv as its identity element.
Then e_uv ≽_ℒ e_xy e_uv = e_uv e_xy. Let T be ⟨ G_1 ∪ {e_uv e_xy} ⟩. Since G_1 ≽_ℒ e_uv e_xy is an ℒ-chain in T, T is a T_1-semigroup. Furthermore, T is noncombinatorial since G_1 is nontrivial. Thus, by Lemma <ref>,

cpx(T) ≥ 1 + cpx(EG(T)).

Let Γ' be the complete graph on V ∖ {u}. Let a, b ∈ V ∖ {u} be arbitrary distinct vertices. By item (<ref>) of Theorem <ref>, G_1 is 2-transitive. Let π ∈ G_1 be such that π(x) = a and π(y) = b. There is a positive integer ω > 1 with π^ω = e_uv. In particular, e_uv commutes with π. Observe that π^ω-1 e_uv e_xy π = e_uv (π^ω-1 e_xy π) = e_uv e_ab, and thus (π^ω-1 e_xy e_uv π)|_V ∖ {u} = e_ab. That is, we obtain the generators e_ab of S_Γ' by restricting the idempotents e_uv e_ab ∈ T to V ∖ {u}. Therefore, S_Γ' is a homomorphic image of a subsemigroup of EG(T), yielding cpx(EG(T)) ≥ cpx(S_Γ'). By induction, cpx(S_Γ') = n-3. Applying (<ref>), we obtain cpx(T) ≥ n-2. Since T is a subsemigroup of S_Γ, we obtain cpx(S_Γ) ≥ cpx(T) ≥ n-2.

Case 2. Assume now that Γ is the n-node cycle (u, v_1, …, v_n-1). Then (u, v_1) and (v_2, v_3) are disjoint edges. Let G_1 ≃ Z_n-1 be the defect 1 group acting on V ∖ {u}, with the idempotent e_uv_1 as its identity element. Let π be a generator of G_1 with cycle structure (v_1, …, v_n-1). Then e_uv_1 ≽_ℒ e_v_2v_3 e_uv_1 = e_uv_1 e_v_2v_3. Let T be ⟨ G_1 ∪ {e_uv_1 e_v_2v_3} ⟩. Since G_1 ≽_ℒ e_uv_1 e_v_2v_3 is an ℒ-chain in T, T is a T_1-semigroup. Furthermore, T is noncombinatorial since G_1 is nontrivial. Thus, by Lemma <ref>,

cpx(T) ≥ 1 + cpx(EG(T)).

Let Γ' be an (n-1)-node cycle with nodes V ∖ {u} = {v_1, …, v_n-1}. Note that e_uv_1 = π^n-1, and therefore e_uv_1 commutes with π. Let v_i-1, v_i, v_i+1 ∈ V ∖ {u} be three neighboring nodes in Γ', where the indices are in {1, …, n-1}, taken modulo n-1. Observe that π^n-2 e_uv_1 e_v_i-1v_i π = e_uv_1 (π^n-2 e_v_i-1v_i π) = e_uv_1 e_v_iv_i+1, and thus (π^n-2 e_uv_1 e_v_i-1v_i π)|_V ∖ {u} = e_v_iv_i+1. That is, we obtain the generators e_v_iv_i+1 of S_Γ' by restricting the idempotents e_uv_1 e_v_iv_i+1 ∈ T to V ∖ {u}. Therefore, S_Γ' is a homomorphic image of a subsemigroup of EG(T), yielding cpx(EG(T)) ≥ cpx(S_Γ'). By induction, cpx(S_Γ') = n-3. Applying (<ref>), we obtain cpx(T) ≥ n-2. Since T is a subsemigroup of S_Γ, we have cpx(S_Γ) ≥ cpx(T) ≥ n-2.

Note that by Lemma <ref> a strongly connected digraph has the same flow semigroup as the corresponding graph. Thus, item (<ref>) of Theorem <ref> proves Rhodes's conjecture <cit.> for 2-vertex connected, strongly connected digraphs as well. The following lemma bounds the complexity in the remaining cases.

Let k be the smallest positive integer such that for a graph Γ the flow semigroup S_Γ has defect k group S_n-k. Then cpx(S_Γ) ≥ n-1-k.

Assume first k = n-1. Then the lemma holds trivially. From now on, assume k ≤ n-2. Let uv be an edge in Γ. Let V_k be an arbitrary k-element subset of the vertex set V disjoint from {u, v}. Let G_k be the defect k group with defect set V_k. Let S be the subsemigroup of S_Γ generated by G_k and e_uv. As G_k ≃ S_n-k, we have that S is the semigroup of all transformations on V ∖ V_k. Hence, cpx(S) = cpx(F_n-k) = n-k-1 by Lemma <ref>. Whence, cpx(S_Γ) ≥ cpx(S) = n-k-1.

By Theorem <ref>, it immediately follows that the complexity of the flow semigroup of a 2-edge connected graph Γ is at least n-3. Furthermore, cpx(S_Γ) ≤ cpx(K_n) = n-2 by Lemma <ref>. This finishes the proof of item (<ref>) of Theorem <ref>.
http://arxiv.org/abs/1705.09577v2
{ "authors": [ "Gábor Horváth", "Chrystopher L. Nehaniv", "Károly Podoski" ], "categories": [ "math.CO", "20M20, 05C20, 05C25, 20B30" ], "primary_category": "math.CO", "published": "20170526132755", "title": "The maximal subgroups and the complexity of the flow semigroup of finite (di)graphs" }
Partial cohomology of groups and extensions of semilattices of abelian groups. Instituto de Matemática e Estatística, Universidade de São Paulo, Rua do Matão, 1010, São Paulo, SP, CEP: 05508–090, Brazil [email protected] Departamento de Matemática, Universidade Federal de Santa Catarina, Campus Reitor João David Ferreira Lima, Florianópolis, SC, CEP: 88040–900, Brazil [email protected] [2010] Primary 20M30; Secondary 20M18, 16S35, 16W22. The first author was partially supported by CNPq of Brazil (Proc. 305975/2013–7) and by FAPESP of Brazil (Proc. 2015/09162–9). The second author was partially supported by FAPESP of Brazil (Proc. 2012/01554–7).

We extend the notion of a partial cohomology group H^n(G,A) to the case of non-unital A and find interpretations of H^1(G,A) and H^2(G,A) in the theory of extensions of semilattices of abelian groups by groups.

Mykola Khrypchenko. December 30, 2023 ======================

§ INTRODUCTION

Elaborated in the theory of C^*-algebras, the concept of a partial action draws growing attention of experts in analysis, algebra and beyond, resulting in new theoretic advances and remarkable applications. The early developments on partial actions and related concepts are described in the short survey <cit.>, whereas the algebraic and C^*-algebraic foundations of the theory, a detailed treatment of graded C^*-algebras by means of Fell bundles and partial C^*-crossed products, as well as prominent applications are the contents of the recent book by Exel <cit.> (see also the surveys <cit.>). Exel's notion of a twisted partial group action <cit.> (see also <cit.>) involves a kind of 2-cocycle equality, which suggested the development of a cohomology theory based on partial actions. As a first step, the theory of partial projective representations of groups was created in <cit.>, with further results in <cit.>. As expected, the notion of the corresponding partial Schur multiplier, which is a semilattice of abelian groups, appeared in the treatment. While the idea of partial 2-cocycles came from <cit.>, the partial coboundaries naturally appeared in the notion of an equivalence of twisted partial actions introduced in <cit.> with respect to the globalization problem.

The general definition of partial cohomology groups was given in <cit.>, where they were related to H. Lausch's cohomology of inverse semigroups <cit.>, and it was shown, moreover, that each component of the partial Schur multiplier is a disjoint union of cohomology groups with values in non-necessarily trivial partial modules. In <cit.> we assumed that the partial actions under consideration are all unital, which is a natural working restriction made in the big majority of algebraic papers dealing with partial actions. Our subsequent paper <cit.> was stimulated by the desire to relate the second partial cohomology groups to extensions. During the course of the investigation it became clear that the restriction that a partial action be unital may be omitted. The basic concept is that of an extension of a semilattice of groups A by a group G, the main example being the crossed product A∗_Θ G by a twisted partial action Θ of G on A, and we explored a relation of these notions to extensions of A by an inverse semigroup S, twisted S-modules and the corresponding crossed products in the sense of <cit.>. In fact, we deal with twisted S-module structures on A whose twistings satisfy a normality condition, considered by N. Sieben in <cit.>, which is stronger than the one imposed by H.
Lausch in <cit.>, and we call them Sieben's twisted modules. One of the main results of <cit.> establishes, up to certain equivalences and identifications, a one-to-one correspondence between twisted partial actions of groups on A and Sieben's twisted module structures on A over E-unitary inverse semigroups.

In the present paper we define the cohomology groups with values in a non-necessarily unital partial module and give interpretations for the first and second cohomology groups in terms of extensions of semilattices of abelian groups by groups. In <ref> we recall some background from <cit.> on inverse semigroups, their cohomology, partial group actions and related notions needed in the sequel. We start <ref> by revising the free resolutions C(S) and D(S) of Z_S from <cit.>, correct a couple of errors with respect to C(S) made in <cit.>, and give some details which are missing in <cit.> (see, in particular, <ref>). Next, since Sieben's twisted S-modules have order preserving twistings, we define in the same <ref> the cohomology groups H^n_≤(S^1,A^1) based on order preserving cochains and prove some basic facts (see <ref>). The cohomology groups H^n(G,A) with values in a non-necessarily unital partial G-module A are defined in <ref>, some preliminary facts are proved and a relation to H^n_≤(S^1,A^1) is established (see <ref>). The latter, in its turn, implies a connection of H^2(G,A) with the equivalence classes of twistings related to A (see <ref>). An interpretation of H^2(G,A) in terms of extensions of semilattices of abelian groups by G is given in <ref>, the main result being <ref>. This is done in close interaction with extensions of semilattices of groups by inverse semigroups studied in <cit.>. In the final <ref> we relate split extensions (see <ref>) and H^1(G,A). This is done in two steps. First, given an inverse semigroup S and an S-module A, we consider split extensions A → U → S and prove in <ref> that the so-called C^0_≤-equivalence classes of splittings of U are in a one-to-one correspondence with the elements of H^1_≤(S^1,A^1). Then, for a group G and a G-module A, we define the concept of a split extension A → U → G (see <ref>), and using <ref> we show in <ref> that the equivalence classes of splittings of U are in a one-to-one correspondence with the elements of H^1(G,A).

§ PRELIMINARIES

§.§ Inverse semigroups

A semigroup S is called regular whenever for each s ∈ S there exists t ∈ S, called an inverse of s, such that sts = s and tst = t. Regular semigroups in which every element s admits a unique inverse, usually denoted by s^-1, are called inverse semigroups. These are precisely those regular semigroups whose idempotents commute (see <cit.>). Each inverse semigroup S admits the natural partial order ≤, with s ≤ t whenever s = et for some e ∈ E(S) (see <cit.>), where E(S) denotes the semilattice of idempotents of S. It follows that the binary relation (s,t) ∈ σ ⟺ ∃ u ≤ s,t is a group congruence on S, in the sense that S/σ is a group (see <cit.>). Moreover, it is the minimum group congruence on S, since it is contained in any other such congruence <cit.>. The quotient S/σ is thus called the maximum group image of S and denoted by G(S). It is easy to see that all idempotents of S are σ-equivalent, since ef ≤ e, f for e, f ∈ E(S). Inverse semigroups in which the idempotents constitute a σ-class are called E-unitary <cit.>. Equivalently, S is E-unitary if, given s ∈ S and e ∈ E(S), it follows from e ≤ s that s ∈ E(S). Another property that characterizes E-unitary inverse semigroups: (s,t) ∈ σ ⟺ s^-1t, st^-1 ∈ E(S) <cit.>.
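The notions above are easy to experiment with on a small example. The sketch below is our own illustration (the names partial_bijections and is_idempotent are ours): it builds the inverse semigroup of all partial bijections of {0,1}, where the natural partial order is simply restriction of partial maps. Since the empty map is a common lower bound of any two elements, σ relates everything, the maximum group image is trivial, and this inverse semigroup fails to be E-unitary.

```python
from itertools import combinations, permutations

def partial_bijections(X):
    """All partial bijections of X, each stored as a frozenset of pairs."""
    return [frozenset(zip(dom, img))
            for r in range(len(X) + 1)
            for dom in combinations(X, r)
            for img in permutations(X, r)]

S = partial_bijections((0, 1))
print(len(S))  # 7

# Natural partial order here: s <= t iff the graph of s is contained
# in the graph of t, i.e. s is a restriction of t.
empty = frozenset()
assert all(empty <= s for s in S)  # a common lower bound for every pair
# Hence (s, t) lies in sigma for all s, t: the maximum group image is trivial.

swap = frozenset({(0, 1), (1, 0)})
is_idempotent = lambda s: all(x == y for (x, y) in s)
assert is_idempotent(empty) and not is_idempotent(swap)
# The idempotent empty map sits below the non-idempotent swap,
# so this inverse semigroup is not E-unitary.
```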
A semilattice of groups is an inverse semigroup A which can be represented as a disjoint union of groups[Such inverse semigroups are also called Clifford semigroups (see <cit.> and <cit.>).], more precisely, A=⊔_{e∈E(A)} A_e, where A_e={a∈A | aa^{-1}=a^{-1}a=e}. For an inverse semigroup A each of the following conditions is equivalent to the fact that A is a semilattice of groups: * aa^{-1}=a^{-1}a for all a∈A; * E(A)⊆C(A). In particular, any commutative inverse semigroup is a semilattice of (abelian) groups.

§.§ Twisted partial actions of groups on semigroups

Recall from <cit.> that a multiplier of a semigroup S is a pair w of maps s↦ws and s↦sw from S to itself, such that for all s,t∈S * w(st)=(ws)t; * (st)w=s(tw); * s(wt)=(sw)t.[Observe that w is exactly a pair of linked right and left translations <cit.>.] The multipliers of S form a monoid M(S) under composition (see <cit.> and <cit.>). If S is inverse, then (ws)^{-1}=s^{-1}w^{-1} and (sw)^{-1}=w^{-1}s^{-1} for all s∈S and invertible w∈M(S). Here and below U(M) denotes the group of invertible elements of a monoid M.

A twisted partial action <cit.> of a group G on a semigroup S is a pair Θ=(θ,w), where θ is a collection {θ_x: D_{x^{-1}}→D_x}_{x∈G} of isomorphisms between non-empty ideals of S and w={w_{x,y}∈U(M(D_x D_{xy}))}_{x,y∈G}, such that for all x,y,z∈G * D_x^2=D_x and D_x D_y=D_y D_x; * D_1=S and θ_1=𝕀_S; * θ_x(D_{x^{-1}} D_y)=D_x D_{xy}; * θ_x∘θ_y(s)=w_{x,y} θ_{xy}(s) w_{x,y}^{-1} for any s∈D_{y^{-1}} D_{y^{-1}x^{-1}}; * w_{1,x}=w_{x,1}=𝕀_{D_x}; * θ_x(s w_{y,z}) w_{x,yz}=θ_x(s) w_{x,y} w_{xy,z} for all s∈D_{x^{-1}} D_y D_{yz}. Here the right-hand side of <ref> does not require a pair of additional brackets, since D_x D_{xy} is an idempotent ideal and thus, due to (ii) of <cit.>, one has (ws)w'=w(sw') for all w,w'∈M(D_x D_{xy}) and s∈D_x D_{xy}. The applicability of the multipliers in <ref> is explained by <ref>. Observe that, when S is inverse, each ideal I of S is idempotent, as for s∈I one has s=s·s^{-1}s with s^{-1}s∈I. It follows that I∩J=IJ for any two non-empty ideals of S, since I∩J=(I∩J)^2⊆IJ. In particular, any two non-empty ideals of S commute. This shows that for an inverse S item <ref> can be dropped, and in <ref> the products of domains can be replaced by their intersections.

By a partial action <cit.> of G on S we mean a collection θ={θ_x: D_{x^{-1}}→D_x}_{x∈G} as above satisfying * D_1=S and θ_1=𝕀_S; * θ_x(D_{x^{-1}}∩D_y)=D_x∩D_{xy}; * θ_x∘θ_y(s)=θ_{xy}(s) for any s∈D_{y^{-1}}∩D_{y^{-1}x^{-1}}. When S is inverse, this is exactly a twisted partial action of G on S with trivial w. Two twisted partial actions (θ,w) and (θ',w') of G on S are called equivalent <cit.> if for all x∈G * D'_x=D_x, and there exists a family {ε_x∈U(M(D_x)) | x∈G} such that for all x,y∈G (ii) θ'_x(s)=ε_x θ_x(s) ε_x^{-1}, s∈D_{x^{-1}}; (iii) θ'_x(s) w'_{x,y} ε_{xy}=ε_x θ_x(s ε_y) w_{x,y}, s∈D_{x^{-1}} D_y.

Given a twisted partial action Θ=(θ,w) of G on S, the crossed product S*_Θ G is the set {sδ_x | s∈D_x} with the multiplication sδ_x·tδ_y=θ_x(θ_{x^{-1}}(s)t) w_{x,y} δ_{xy}. It follows from the proof of <cit.> that S*_Θ G is a semigroup. Moreover, if S is inverse, then S*_Θ G is inverse with (sδ_x)^{-1}=w_{x^{-1},x}^{-1} θ_{x^{-1}}(s^{-1}) δ_{x^{-1}} (see <cit.>). For the crossed product coming from a partial action θ of G on S we shall use the notation S*_θ G.

§.§ Twisted modules over inverse semigroups

An endomorphism φ of a semilattice of groups A is said to be relatively invertible <cit.> whenever there exist an endomorphism φ̄ of A and e_φ∈E(A) satisfying * φ̄∘φ(a)=e_φ a and φ∘φ̄(a)=φ(e_φ)a for any a∈A; * e_φ is the identity of φ̄(A) and φ(e_φ) is the identity of φ(A). In this case φ̄ is also relatively invertible, with φ̄̄=φ and e_φ̄=φ(e_φ). The set of relatively invertible endomorphisms of A forms an inverse semigroup <cit.>. It was proved in <cit.> that this semigroup is isomorphic to the semigroup of isomorphisms between unital ideals of A.
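As a quick illustration of relative invertibility (our own elementary example), let A be a semilattice of groups, fix e∈E(A), and consider the endomorphism φ(a)=ea of A. Then one may take

	φ̄ = φ and e_φ = e,

since φ̄∘φ(a)=e(ea)=e_φ a, φ∘φ̄(a)=ea=φ(e_φ)a, and e is the identity of the ideal φ(A)=eA. In particular, multiplication by a fixed idempotent is always relatively invertible, and it is its own relative inverse.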
Let S be an inverse semigroup. By a twisted S-module <cit.> we mean a semilattice of groups A together with a triple Λ=(α,λ,f), where α is an isomorphism E(S)→E(A), λ is a map from S to the relatively invertible endomorphisms of A, and f: S^2→A is a map with f(s,t)∈A_{α(stt^{-1}s^{-1})} (called a twisting), satisfying the following properties: * λ_e(a)=α(e)a for all e∈E(S) and a∈A; * λ_s(α(e))=α(ses^{-1}) for all s∈S and e∈E(S); * λ_s∘λ_t(a)=f(s,t) λ_{st}(a) f(s,t)^{-1} for all s,t∈S and a∈A; * f(se,e)=α(ses^{-1}) and f(e,es)=α(ess^{-1}) for all s∈S and e∈E(S); * λ_s(f(t,u)) f(s,tu)=f(s,t) f(st,u) for all s,t,u∈S.

If A is commutative, then a twisted S-module structure Λ=(α,λ,f) on A splits into the S-module (α,λ) on A in the sense of <cit.>, that is, the one satisfying <ref> with λ being a homomorphism from S to the inverse semigroup of relatively invertible endomorphisms of A, and the map f with f(s,t)∈A_{α(stt^{-1}s^{-1})}, for which <ref> hold. Such a map f will be called a twisting related to (α,λ). Observe from <cit.> that S-modules form an abelian category, where a morphism φ:(α,λ)→(α',λ') is a homomorphism of semigroups such that * φ∘α=α' on E(S); * φ∘λ_s=λ'_s∘φ for all s∈S.

Two twisted S-module structures Λ=(α,λ,f) and Λ'=(α',λ',f') on A are said to be equivalent <cit.> if * α'=α; * λ'_s(a)=g(s) λ_s(a) g(s)^{-1}; * f'(s,t) g(st)=g(s) λ_s(g(t)) f(s,t) for some function g: S→A with g(s)∈A_{α(ss^{-1})}. When A is commutative, this exactly means that (α,λ)=(α',λ') and the twistings f and f' are equivalent in the sense that f'(s,t)=λ_s(g(t)) g(st)^{-1} g(s) f(s,t).

Let Λ=(α,λ,f) be a twisted S-module structure on A. The crossed product of A and S by Λ <cit.> is the set A*_Λ S={aδ_s | a∈A, s∈S, aa^{-1}=α(ss^{-1})}. It is an inverse semigroup under the multiplication aδ_s·bδ_t=a λ_s(b) f(s,t) δ_{st}, with (aδ_s)^{-1}=f(s^{-1},s)^{-1} λ_{s^{-1}}(a^{-1}) δ_{s^{-1}} (see <cit.>). If (α,λ) is an S-module structure on A, then A*_{(α,λ)} S will mean the crossed product of A and S by (α,λ,f) with trivial f.

§.§ Extensions of semilattices of groups by inverse semigroups

An extension of a semilattice of groups A by an inverse semigroup S <cit.> is an inverse semigroup U with a monomorphism i: A→U and an idempotent-separating (i.e. injective on E(U)) epimorphism j: U→S, such that i(A)=j^{-1}(E(S)). Any two extensions A −i→ U −j→ S and A −i'→ U' −j'→ S of A by S are called equivalent <cit.> if there is a homomorphism μ: U→U' such that the diagram

	A −i→ U −j→ S
	∥      ↓μ      ∥
	A −i'→ U' −j'→ S

commutes. In this case μ is an isomorphism. For each twisted S-module structure Λ=(α,λ,f) on A the crossed product A*_Λ S is an extension of A by S, where i(a)=aδ_{α^{-1}(aa^{-1})} and j(aδ_s)=s.

Given an extension A −i→ U −j→ S, a map ρ: S→U with j∘ρ=𝕀 and ρ(E(S))⊆E(U) is called a transversal <cit.> of j. It follows that ρ maps E(S) isomorphically onto E(U) and ρ|_{E(S)}=(j|_{E(U)})^{-1}. The choice of a transversal ρ induces a twisted S-module structure Λ=(α,λ,f) on A by the formulas (see <cit.>): α=i^{-1}∘ρ|_{E(S)}, λ_s(a)=i^{-1}(ρ(s) i(a) ρ(s)^{-1}), f(s,t)=i^{-1}(ρ(s)ρ(t)ρ(st)^{-1}). In this case A*_Λ S is equivalent to U, the map aδ_s↦i(a)ρ(s) being the corresponding isomorphism (see <cit.>). A transversal ρ of j is said to be order-preserving <cit.> when s≤t ⟹ ρ(s)≤ρ(t) for all s,t∈S. It was shown in <cit.> that ρ is order-preserving if and only if, instead of <ref>, Λ satisfies the stronger condition (Sieben's condition): (iv') f(s,e)=α(ses^{-1}) and f(e,s)=α(ess^{-1}) for all s∈S and e∈E(S).
Inspired by <cit.>, such twisted S-modules were called Sieben's twisted S-modules in <cit.>.

§.§ The relation between Sieben's twisted modules and twisted partial actions

Let S be an E-unitary inverse semigroup and A a semilattice of groups. It was proved in <cit.> that with each Sieben's twisted S-module structure Λ=(α,λ,f) on A one can associate a twisted partial action Θ=(θ,w) of 𝒢(S) on A as follows: D_x=⊔_{s∈x} A_{α(ss^{-1})}, x∈𝒢(S); θ_x(a)=λ_s(a), where s∈x and α(s^{-1}s)=aa^{-1}; w_{x,y}a=f(s,s^{-1}t)a and aw_{x,y}=af(s,s^{-1}t), where s∈x and t∈xy are such that α(ss^{-1})=α(tt^{-1})=aa^{-1}. Conversely, given a twisted partial action Θ=(θ,w) of a group G on a semilattice of groups A, there exist <cit.> an E-unitary inverse semigroup S, an epimorphism κ: S→G whose kernel congruence is σ, and an isomorphism α: E(S)→E(A), such that Λ=(α,λ,f) is a Sieben's twisted S-module structure on A, where λ_s(a)=θ_{κ(s)}(α(s^{-1}s)a), f(s,t)=α(stt^{-1}s^{-1}) w_{κ(s),κ(t)}. Notice that one can take S=E(A)*_θ G with κ(eδ_x)=x, α(eδ_1)=e. Up to identification of isomorphic groups and semigroups, this defines a one-to-one correspondence between twisted partial actions of groups on A and Sieben's twisted module structures on A over E-unitary inverse semigroups. Moreover, equivalent twisted partial actions correspond to equivalent twisted modules (see <cit.>).

§.§ Extensions of semilattices of groups by groups

Recall from <cit.> that an extension of a semilattice of groups A by a group G is an inverse semigroup U with a monomorphism i: A→U and an epimorphism j: U→G, such that i(A)=j^{-1}(1). Two extensions A −i→ U −j→ G and A −i'→ U' −j'→ G of A by G are called equivalent if there is an isomorphism μ: U→U' making the diagram

	A −i→ U −j→ G
	∥      ↓μ      ∥
	A −i'→ U' −j'→ G

commute. It was proved in <cit.> that for any extension A −i→ U −j→ G there exists a refinement A −i→ U −π→ S −κ→ G, where S is an E-unitary inverse semigroup and π and κ are epimorphisms, such that * A −i→ U −π→ S is an extension of A by S; * j=κ∘π. Moreover, it follows that the kernel congruence of κ is σ. If A −i'→ U' −j'→ G is another extension, with a refinement A −i'→ U' −π'→ S' −κ'→ G, then any homomorphism μ: U→U' making the diagram <ref> commute induces a homomorphism ν: S→S' such that

	A −i→ U −π→ S −κ→ G
	∥      ↓μ      ↓ν      ∥
	A −i'→ U' −π'→ S' −κ'→ G

commutes. Moreover, if μ is injective, then ν is injective; if μ is surjective, then ν is surjective. In particular, this shows that a refinement is unique up to an isomorphism. An extension A −i→ U −j→ G is called admissible <cit.> if the corresponding π: U→S has an order-preserving transversal. Up to equivalence, the admissible extensions of A by G are precisely the crossed products A*_Θ G by twisted partial actions Θ of G on A (see <cit.>).

§.§ Cohomology of inverse semigroups

Let A be an S-module. Following <cit.>, when α need not be specified, we shall often use E(S) as an indexing semilattice for the group components of A.
More precisely, for arbitrary e∈E(S), A_e will mean {a∈A | aa^{-1}=α(e)}. It follows from φ∘α=α' that any morphism φ: A→A' of S-modules maps A_e to a subset of A'_e. More generally, given a semilattice L, an L-set is a disjoint union T=⊔_{l∈L} T_l of sets T_l, l∈L. An L-map is a function f: T→T' such that f(T_l)⊆T'_l for all l∈L. Thus, any S-module is an E(S)-set, and any morphism of S-modules is an E(S)-map. Now, considering the forgetful functor from the category of S-modules to the category of E(S)-sets, one can naturally define the free S-module F(T) over an E(S)-set T. It turns out (see <cit.>) that such a module exists and can be constructed in the following way (we use additive notation). For any e∈E(S) the component F(T)_e is the free abelian group over the set of pairs (written as formal products) {st∈S×T | ss^{-1}=e, t∈⋃_{f≥s^{-1}s} T_f}. The sum of st∈F(T)_e and s't'∈F(T)_{e'} is the formal sum (e's)t+(es')t' in F(T)_{ee'}. Clearly, α(e)=0_e, where 0_e is the zero of F(T)_e. The endomorphism λ_s of F(T) is defined on a generator s't' by λ_s(s't')=(ss')t'. An element t of T_e, e∈E(S), is identified with et∈F(T)_e, determining an embedding of T into F(T) in the category of E(S)-sets.

One should be careful when identifying the elements of T with their images in F(T). For instance, taking t∈T_f and considering it as an element of F(T), one could expect that λ_s(t)=st. However, st need not belong to F(T). In fact, λ_s(t)=λ_s(ft)=(sf)t. Nevertheless, if st∈F(T), i.e. s^{-1}s≤f, then sf=(ss^{-1}s)f=s(s^{-1}sf)=s(s^{-1}s)=s, so λ_s(t)=st. It follows that F(T) is projective in the category of S-modules and that for any S-module A there is an epimorphism F(A)→A, so this category has enough projectives. Considering the semilattice ℤ_S of copies (ℤ_S)_e={n_e | n∈ℤ} of ℤ, with n_e+m_f=(n+m)_{ef}, as a “trivial” S-module in the sense that λ_s(n_e)=n_{ses^{-1}} and α(e)=0_e, Lausch defines in <cit.> the cohomology groups of S with values in A as H^n(S,A), the value at ℤ_S of the right derived functor R^n Hom(−,A).[Notice that Lausch uses the general notion of a “cohomology functor” defined axiomatically, but what he constructs is exactly the right derived functor of Hom(−,A).]

§ ON THE COHOMOLOGY OF INVERSE MONOIDS

§.§ The free resolutions C(S) and D(S) of ℤ_S

Let S be an inverse monoid. Recall from <cit.> that in this case the cohomology groups of S can be computed using the free resolutions C(S) and D(S) of ℤ_S, where C_n(S)=F(V_n(S)) and D_n(S)=F(W_n(S)), n≥0, with V_0(S)_e={(x)} if e=1_S and V_0(S)_e=∅ if e≠1_S; V_n(S)_e={(s_1,…,s_n) | s_1…s_n s_n^{-1}…s_1^{-1}=e}, n≥1; W_0(S)=V_0(S); W_n(S)_e={(s_1,…,s_n)∈V_n(S)_e | s_i≠1_S, i=1,…,n}, n≥1. The S-module morphisms ∂'_0: C_0(S)→ℤ_S and ∂'_n: C_n(S)→C_{n−1}(S), n≥1, are defined as follows: ∂'_0(x)=1_{1_S}; ∂'_1(s)=s(x)−(x); ∂'_n(s_1,…,s_n)=λ_{s_1}(s_2,…,s_n)+∑_{i=1}^{n−1}(−1)^i(s_1,…,s_i s_{i+1},…,s_n)+(−1)^n(s_1,…,s_{n−1}), n≥2. The morphisms ∂''_0: D_0(S)→ℤ_S and ∂''_1: D_1(S)→D_0(S) are simply ∂'_0 and ∂'_1|_{D_1(S)}, respectively. To define ∂''_n: D_n(S)→D_{n−1}(S), n≥2, we use <ref> with the following modification: if n≥2 and the term u(v_1,…,v_{n−1})∈C_{n−1}(S) appears (with some sign) on the right-hand side of <ref>, then it is replaced by 0_{uu^{-1}} whenever 1_S∈{v_1,…,v_{n−1}}. For instance, the summand (−1)^i(s_1,…,s_i s_{i+1},…,s_n), which, as we know, is identified with (−1)^i e(s_1,…,s_i s_{i+1},…,s_n)∈C_{n−1}(S)_e, e=s_1…s_n s_n^{-1}…s_1^{-1}, is set to be 0_e if s_i s_{i+1}=1_S. In <cit.> Lausch required additionally that all the elements of C_{n−1}(S) of the form 1_S(v_1,…,v_{n−1}) be identified with 0_{1_S}.
Assuming this, we would get, for s,t∈S with s,t,st≠1_S and ss^{-1}=tt^{-1}=1_S, that ∂''_1∘∂''_2(s,t)=∂''_1(λ_s(t)−(st)+(s))=∂''_1(stt^{-1}(t)−stt^{-1}s^{-1}(st)+ss^{-1}(s))=∂''_1(s(t))=s(t(x)−(x))=st(x)−s(x), which does not equal 0_{stt^{-1}s^{-1}} if st≠s (for example, when S is a group, the latter follows from t≠1_S, so such s and t can easily be found in, say, S=ℤ_3). Thus ∂''_1∘∂''_2 is not necessarily zero, demonstrating that Lausch's definition should be corrected.

Observe that, when S is a group, C(S) coincides with the standard resolution of ℤ in the bar notation (or, shortly, the bar resolution). Then D(S) is the normalized bar resolution <cit.>. One should also note that Mac Lane <cit.> uses the term “bar resolution” for the normalized bar resolution. For the exactness of the sequence C(S), Lausch implicitly uses in <cit.> Mac Lane's argument <cit.>[Lausch refers to Mac Lane's book on p. 280 while proving the exactness of some other sequence; on p. 281 he uses the same argument without any reference.] by constructing E(S)-maps σ'_{−1}: ℤ_S→C_0(S) and σ'_n: C_n(S)→C_{n+1}(S), n≥0, such that * for each e∈E(S) the restrictions σ'_{−1}|_{(ℤ_S)_e} and σ'_n|_{C_n(S)_e}, n≥0, are morphisms of abelian groups (ℤ_S)_e→C_0(S)_e and C_n(S)_e→C_{n+1}(S)_e, respectively; * the following equalities hold: ∂'_0∘σ'_{−1}=𝕀_{ℤ_S}; ∂'_{n+1}∘σ'_n+σ'_{n−1}∘∂'_n=𝕀_{C_n(S)}, n≥0.

If the maps σ'_n, n≥−1, satisfying <ref> exist, then ∂'_0 is surjective and ker ∂'_n⊆im ∂'_{n+1}, n≥0. Surjectivity of ∂'_0 is explained by <ref>. If c∈ker ∂'_n, i.e. c∈C_n(S)_e and ∂'_n(c)=0_e for some e∈E(S), then σ'_{n−1}∘∂'_n(c)=σ'_{n−1}(0_e), which is the zero of C_n(S)_e in view of <ref>. So c=∂'_{n+1}∘σ'_n(c)∈im ∂'_{n+1} by <ref>. According to <cit.>, in order to prove the converse inclusions, i.e. that C(S) is a chain complex of S-modules, one should somehow deduce from ∂'_n∘∂'_{n+1}∘σ'_n=0 that ∂'_n∘∂'_{n+1}=0, n≥0. In the classical case Mac Lane uses the fact that (in our notation) σ'_n(C_n(S)) generates C_{n+1}(S) as an S-module, which is not true for the σ'_n introduced by Lausch in <cit.> (see <ref>). Nevertheless, we may still reduce the problem to the classical formula from group cohomology.

For each n≥0 we have ∂'_n∘∂'_{n+1}=0. When n≥1, the formulas <ref> have exactly the same form as the ones for the bar resolution <cit.> (we only identify (x) with [x], (s_1,…,s_n) with [s_1|…|s_n], and the application of λ_s with multiplication by s on the left). Expanding the equality ∂_n∘∂_{n+1}[s_1|…|s_{n+1}]=0, n≥1, written for the bar resolution, and identifying each summand with an element of C_{n−1}(S) as explained above, we obtain a formal proof that ∂'_n∘∂'_{n+1}(s_1,…,s_{n+1})=0_{s_1…s_{n+1} s_{n+1}^{-1}…s_1^{-1}}, with the only difference that 0 (when it appears as a sum of two terms with opposite signs) should be replaced by the zero of the corresponding component. For example, ∂_1∘∂_2[s|t]=0 expands to ∂_1∘∂_2[s|t]=∂_1(s[t]−[st]+[s])=s(t[x]−[x])−(st[x]−[x])+(s[x]−[x])=(st[x]−st[x])+(s[x]−s[x])+([x]−[x])=0. This gives 0_{stt^{-1}s^{-1}}=0_{stt^{-1}s^{-1}}+0_{ss^{-1}}+0_{1_S}=(λ_{st}(x)−λ_{st}(x))+(λ_s(x)−λ_s(x))+((x)−(x))=λ_s(λ_t(x)−(x))−(λ_{st}(x)−(x))+(λ_s(x)−(x))=∂'_1(λ_s(t)−(st)+(s))=∂'_1∘∂'_2(s,t). As to ∂'_0∘∂'_1, it can be calculated by <ref> explicitly: ∂'_0∘∂'_1(s)=λ_s(1_{1_S})−1_{1_S}=1_{ss^{-1}}−1_{1_S}=(1−1)_{ss^{-1}·1_S}=0_{ss^{-1}}.

Now, following <cit.>, we define the maps σ'_{−1} and σ'_n, n≥0, on the generators of the group components of ℤ_S and C_n(S), n≥0, respectively, by σ'_{−1}(1_e)=e(x); σ'_0(s(x))=(s); σ'_n(s(s_1,…,s_n))=(s,s_1,…,s_n). Obviously, σ'_{−1}(1_e)∈C_0(S)_e and σ'_0(s(x))∈C_1(S)_{ss^{-1}}.
Now observe by <ref> that for s(s_1,…,s_n)∈C_n(S)=F(V_n(S)) we have sfs^{-1}=ss^{-1}, where f=s_1…s_n s_n^{-1}…s_1^{-1}. Hence σ'_n(s(s_1,…,s_n))∈C_{n+1}(S)_{ss^{-1}}, n≥1. Thus the maps σ'_n, n≥−1, respect the partitions of ℤ_S and C_n(S), n≥0, into components, so they uniquely extend to E(S)-maps ℤ_S→C_0(S) and C_n(S)→C_{n+1}(S), n≥0, with property <ref> above. We prove that <ref> is also fulfilled.

The functions σ'_n, n≥−1, satisfy the equalities <ref>. Taking the generator 1_e of (ℤ_S)_e, we easily verify by <ref> that ∂'_0∘σ'_{−1}(1_e)=∂'_0(e(x))=λ_e(∂'_0(x))=λ_e(1_{1_S})=1_e, which is <ref>. Furthermore, for any s∈S, by <ref>: ∂'_1∘σ'_0(s(x))+σ'_{−1}∘∂'_0(s(x))=∂'_1(s)+σ'_{−1}∘λ_s(1_{1_S})=s(x)−(x)+σ'_{−1}(1_{ss^{-1}})=s(x)−(x)+ss^{-1}(x)=s(x)−(x)+0_{ss^{-1}}+(x)=s(x), giving <ref> for n=0. As to <ref> for n≥1, using <ref> we see that ∂'_{n+1}∘σ'_n(s(s_1,…,s_n))=∂'_{n+1}(s,s_1,…,s_n)=λ_s(s_1,…,s_n)−(ss_1,s_2,…,s_n)+∑_{i=1}^{n−1}(−1)^{i+1}(s,s_1,…,s_i s_{i+1},…,s_n)+(−1)^{n+1}(s,s_1,…,s_{n−1}) and σ'_{n−1}∘∂'_n(s(s_1,…,s_n))=σ'_{n−1}(λ_s(∂'_n(s_1,…,s_n)))=σ'_{n−1}(λ_s(λ_{s_1}(s_2,…,s_n)+∑_{i=1}^{n−1}(−1)^i(s_1,…,s_i s_{i+1},…,s_n)+(−1)^n(s_1,…,s_{n−1}))). We first observe that λ_{s_1}(s_2,…,s_n) and (s_1,…,s_i s_{i+1},…,s_n), 1≤i≤n−1, are in the component C_{n−1}(S)_f of C_{n−1}(S), while (s_1,…,s_{n−1})∈C_{n−1}(S)_{f'}, where f=s_1…s_n s_n^{-1}…s_1^{-1} and f'=s_1…s_{n−1} s_{n−1}^{-1}…s_1^{-1}. Since clearly f≤f', using the formula for the addition in a free module we replace (s_1,…,s_{n−1}) by f(s_1,…,s_{n−1})∈C_{n−1}(S)_f. Now, applying λ_s, we get σ'_{n−1}∘∂'_n(s(s_1,…,s_n))=σ'_{n−1}(λ_{ss_1}(s_2,…,s_n)+∑_{i=1}^{n−1}(−1)^i λ_s(s_1,…,s_i s_{i+1},…,s_n)+(−1)^n sf(s_1,…,s_{n−1})), all the summands being in the same component of C_{n−1}(S). Thanks to <ref>, the product sf equals s, because s(s_1,…,s_n)∈F(V_n(S)) with (s_1,…,s_n)∈V_n(S)_f. Further, we would like to rewrite λ_{ss_1} and λ_s in <ref> as multiplications on the left by ss_1 and s, respectively. By <ref> we need to check that after doing so we obtain elements of C_{n−1}(S)=F(V_{n−1}(S)). The fact that s(s_1,…,s_i s_{i+1},…,s_n)∈C_{n−1}(S), 1≤i≤n−1, follows from s(s_1,…,s_n)∈C_n(S), because (s_1,…,s_i s_{i+1},…,s_n) and (s_1,…,s_n) belong to the components of V_{n−1}(S) and V_n(S), respectively, with the same index f. To prove that ss_1(s_2,…,s_n)∈C_{n−1}(S), we make sure that s_1^{-1}es_1≤e', where e=s^{-1}s and e'=s_2…s_n s_n^{-1}…s_2^{-1}. We know from s(s_1,…,s_n)∈C_n(S) that e≤s_1 e' s_1^{-1}. Then s_1^{-1}es_1≤s_1^{-1}s_1 e', and hence s_1^{-1}es_1·e'=s_1^{-1}e(s_1 s_1^{-1} s_1)e'=s_1^{-1}es_1·s_1^{-1}s_1 e'=s_1^{-1}es_1, as desired. Thus, due to the fact that σ'_{n−1} is additive on each component of C_{n−1}(S), we have σ'_{n−1}∘∂'_n(s(s_1,…,s_n))=(ss_1,s_2,…,s_n)+∑_{i=1}^{n−1}(−1)^i(s,s_1,…,s_i s_{i+1},…,s_n)+(−1)^n(s,s_1,…,s_{n−1}). Adding <ref> and <ref> we obtain λ_s(s_1,…,s_n), which is s(s_1,…,s_n) thanks to <ref>.

The sequence C(S) is exact. This follows from <ref>. For any n≥1 the image σ'_n(C_n(S)) consists of those generators (s_1,…,s_{n+1}) of C_{n+1}(S) for which s_1^{-1}s_1≤s_2…s_{n+1} s_{n+1}^{-1}…s_2^{-1}. For example, when S is obtained from an inverse semigroup by adjoining an identity 1_S, the only (n+1)-tuple from σ'_n(C_n(S)) with s_1=1_S is (1_S,…,1_S). Hence σ'_n(C_n(S)) generates a proper submodule of C_{n+1}(S).

We obtain the exactness of D(S) as a consequence of the exactness of C(S). To this end we introduce the epimorphisms of S-modules ζ_{−1}: ℤ_S→ℤ_S and ζ_n: C_n(S)→D_n(S), n≥0, by ζ_{−1}=𝕀_{ℤ_S}, ζ_0=𝕀_{C_0(S)} and, for n≥1 and (s_1,…,s_n)∈V_n(S), ζ_n(s_1,…,s_n)=(s_1,…,s_n) if (s_1,…,s_n)∈W_n(S), and ζ_n(s_1,…,s_n)=0_{s_1…s_n s_n^{-1}…s_1^{-1}} if (s_1,…,s_n)∉W_n(S). It follows that ζ_n is the identity on D_n(S)⊆C_n(S), n≥0.
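To see what ζ_n does in the lowest degree (a gloss we add for orientation, relying on the group-case remark above): ζ_1 fixes every generator (s) with s≠1_S and collapses the degenerate generator,

	ζ_1(s)=(s) for s≠1_S,  ζ_1(1_S)=0_{1_S},

which, when S is a group, is precisely the identification [1]≡0 used in passing from the bar resolution to the normalized bar resolution.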
For any n≥0 we have ∂''_n∘ζ_n=ζ_{n−1}∘∂'_n. The case n=0 is trivial: ∂''_0∘ζ_0=∂''_0=∂'_0=ζ_{−1}∘∂'_0. For n=1 we need to show that ∂''_1∘ζ_1=ζ_0∘∂'_1=∂'_1. Take an arbitrary generator (s)∈V_1(S). If s≠1_S, then (s)∈W_1(S), so ∂''_1∘ζ_1(s)=∂''_1(s)=∂'_1(s) by the definitions of ζ_1 and ∂''_1. If s=1_S, i.e. (s)∉W_1(S), then ∂''_1∘ζ_1(s)=∂''_1(0_{1_S})=0_{1_S}, and ∂'_1(s)=1_S(x)−(x)=1_S(x)−1_S(x)=0_{1_S}. If n≥2 and (s_1,…,s_n)∈W_n(S), then one needs to prove that ∂''_n(s_1,…,s_n)=ζ_{n−1}∘∂'_n(s_1,…,s_n); this is simply the definition of ∂''_n rewritten in terms of ∂'_n and ζ_{n−1}. If n≥2 and (s_1,…,s_n)∉W_n(S), then the desired equality reduces to ζ_{n−1}∘∂'_n(s_1,…,s_n)=0_{s_1…s_n s_n^{-1}…s_1^{-1}}. Consider three possible cases.

(a) s_1=1_S. Then ζ_{n−1}∘∂'_n(s_1,…,s_n)=ζ_{n−1}(λ_{1_S}(s_2,…,s_n)−(1_S·s_2,…,s_n))+∑_{i=2}^{n−1}(−1)^i ζ_{n−1}(1_S,s_2,…,s_i s_{i+1},…,s_n)+(−1)^n ζ_{n−1}(1_S,s_2,…,s_{n−1}). The difference λ_{1_S}(s_2,…,s_n)−(1_S·s_2,s_3,…,s_n) is 0_{s_2…s_n s_n^{-1}…s_2^{-1}}, and it is mapped by ζ_{n−1} to itself as an element of D_{n−1}(S). All the other terms under the sign of ζ_{n−1} are also mapped to zeros of the corresponding components of D_{n−1}(S), since they contain 1_S. Thus the sum is 0_{s_2…s_n s_n^{-1}…s_2^{-1}}=0_{s_1…s_n s_n^{-1}…s_1^{-1}}.

(b) s_i=1_S for some 2≤i≤n−1. In this case ζ_{n−1}∘∂'_n(s_1,…,s_n)=ζ_{n−1}(λ_{s_1}(s_2,…,1_S,…,s_n))+∑_{j=1}^{i−2}(−1)^j ζ_{n−1}(s_1,…,s_j s_{j+1},…,1_S,…,s_n)+(−1)^{i−1} ζ_{n−1}(s_1,…,s_{i−2},s_{i−1}·1_S,s_{i+1},…,s_n)+(−1)^i ζ_{n−1}(s_1,…,s_{i−1},1_S·s_{i+1},s_{i+2},…,s_n)+∑_{j=i+1}^{n−1}(−1)^j ζ_{n−1}(s_1,…,1_S,…,s_j s_{j+1},…,s_n)+(−1)^n ζ_{n−1}(s_1,…,1_S,…,s_{n−1}). The summands <ref> and <ref> differ only by sign; all the other summands are zero by the definition of ζ_{n−1}.

(c) s_n=1_S. Then ζ_{n−1}∘∂'_n(s_1,…,s_n)=ζ_{n−1}(λ_{s_1}(s_2,…,1_S))+∑_{i=1}^{n−2}(−1)^i ζ_{n−1}(s_1,…,s_i s_{i+1},…,1_S)+(−1)^{n−1} ζ_{n−1}((s_1,…,s_{n−1}·1_S)−(s_1,…,s_{n−1})), which is clearly zero.

For all n≥0 the composition ∂''_n∘∂''_{n+1} is zero. Indeed, consider the composition (∂''_n∘∂''_{n+1})∘ζ_{n+1}. Using <ref> twice, we obtain ∂''_n∘(∂''_{n+1}∘ζ_{n+1})=(∂''_n∘ζ_n)∘∂'_{n+1}=ζ_{n−1}∘(∂'_n∘∂'_{n+1}), which is zero by <ref>. The result now follows from the surjectivity of ζ_{n+1}.

We proceed with the construction of the corresponding maps σ''_n, n≥−1. There is a (uniquely defined) collection of functions σ''_{−1}: ℤ_S→D_0(S), σ''_n: D_n(S)→D_{n+1}(S), n≥0, satisfying σ''_n∘ζ_n=ζ_{n+1}∘σ'_n for all n≥−1. Moreover, it follows that σ''_n is a homomorphism of abelian groups when restricted to a group component of the corresponding module. Since ζ_{−1}=𝕀_{ℤ_S} and ζ_n is the identity on D_n(S), we immediately obtain that for n=−1 the equality <ref> becomes σ''_{−1}=σ'_{−1} (so that σ''_{−1}|_{(ℤ_S)_e} is automatically a homomorphism, as σ'_{−1}|_{(ℤ_S)_e} is), and for n≥0 it is equivalent on D_n(S)⊆C_n(S) to the fact that σ''_n=ζ_{n+1}∘σ'_n|_{D_n(S)}. It follows from D_n(S)_e⊆C_n(S)_e that σ'_n|_{D_n(S)_e}: D_n(S)_e→C_{n+1}(S)_e is a homomorphism, being the restriction of the homomorphism σ'_n|_{C_n(S)_e} to D_n(S)_e. Furthermore ζ_{n+1}, being a morphism of S-modules, restricts to a homomorphism ζ_{n+1}|_{C_{n+1}(S)_e}: C_{n+1}(S)_e→D_{n+1}(S)_e. Hence σ''_n|_{D_n(S)_e}=ζ_{n+1}|_{C_{n+1}(S)_e}∘σ'_n|_{D_n(S)_e} is a homomorphism D_n(S)_e→D_{n+1}(S)_e. Now we prove that <ref> holds on any generator of C_n(S)_e which does not belong to D_n(S)_e, n≥0, e∈E(S). If n=0, there is nothing to prove, because D_0(S)=C_0(S). If n≥1, then ζ_n is zero on such a generator s(s_1,…,s_n), so <ref> transforms into ζ_{n+1}∘σ'_n(s(s_1,…,s_n))=0_{ss^{-1}} when s_i=1_S for some 1≤i≤n. The latter is a straightforward consequence of <ref> (see also <ref>).
One has ∂''_0∘σ''_{−1}=𝕀_{ℤ_S}; ∂''_{n+1}∘σ''_n+σ''_{n−1}∘∂''_n=𝕀_{D_n(S)}, n≥0. Equality <ref> is simply <ref>. To establish <ref>, use <ref> to get (∂''_{n+1}∘σ''_n+σ''_{n−1}∘∂''_n)∘ζ_n=∂''_{n+1}∘(σ''_n∘ζ_n)+σ''_{n−1}∘(∂''_n∘ζ_n)=(∂''_{n+1}∘ζ_{n+1})∘σ'_n+(σ''_{n−1}∘ζ_{n−1})∘∂'_n=ζ_n∘(∂'_{n+1}∘σ'_n+σ'_{n−1}∘∂'_n)=ζ_n. It remains to “cancel” the epimorphism ζ_n. The sequence D(S) is exact. This follows from <ref>.

§.§ A connection between H^n(S,A) and H^n(S^1,A^1)

Let A be an S-module. By adjoining identities 1_S and 1_A to S and A we obtain the S^1-module A^1 (see <cit.>, where one uses the additive notation). In particular, 1_S acts on A^1 trivially, and λ_s(1_A)=α(ss^{-1}). Let S be an inverse monoid. For any S-module A and for all n≥2 we have H^n(S,A)≅H^n(S^1,A^1). Indeed, W_n(S^1)_e=V_n(S)_e for all n≥1 and e∈E(S), and moreover W_n(S^1)_{1_S}=∅. Hence D_n(S^1)_e=C_n(S)_e, and D_n(S^1)_{1_S} is the trivial group {0_{1_S}}, for such n and e. This implies that there is an isomorphism between Hom(C_n(S),A) and Hom(D_n(S^1),A^1) (namely, a morphism f: C_n(S)→A extends to f̄: D_n(S^1)→A^1 by f̄(0_{1_S})=1_A). It is clearly an isomorphism of the complexes of abelian groups:

	Hom(C_1(S),A) −δ_1^1→ Hom(C_2(S),A) −δ_1^2→ …
	        ≀                              ≀
	Hom(D_1(S^1),A^1) −δ_2^1→ Hom(D_2(S^1),A^1) −δ_2^2→ …

where δ_1^n(f)=f∘∂'_{n+1} and δ_2^n(g)=g∘∂''_{n+1} for n≥1, f∈Hom(C_n(S),A) and g∈Hom(D_n(S^1),A^1). Observe that Hom(D_n(S^1),A^1), n≥1, can be identified with the abelian group of functions {f: S^n→A | f(s_1,…,s_n)∈A_{α(s_1…s_n s_n^{-1}…s_1^{-1})}} under the coordinate-wise multiplication, which we denote by C^n(S^1,A^1). Under this identification δ_2^n becomes a homomorphism which sends f∈C^n(S^1,A^1) to δ_2^n f∈C^{n+1}(S^1,A^1), such that (δ_2^n f)(s_1,…,s_{n+1})=λ_{s_1}(f(s_2,…,s_{n+1})) ∏_{i=1}^n f(s_1,…,s_i s_{i+1},…,s_{n+1})^{(−1)^i} · f(s_1,…,s_n)^{(−1)^{n+1}}. Denote ker δ_2^n by Z^n(S^1,A^1) and im δ_2^{n−1} by B^n(S^1,A^1), so that the quotient Z^n(S^1,A^1)/B^n(S^1,A^1) is identified with H^n(S^1,A^1) when n≥2. The elements of C^n(S^1,A^1), Z^n(S^1,A^1) and B^n(S^1,A^1) will be called n-cochains, n-cocycles and n-coboundaries, respectively. The following proposition completes the result of <ref>.

Let S be an inverse semigroup and (α,λ) an S-module structure on A. Then H^0(S^1,A^1)=0, H^1(S^1,A^1)≅Z^1(S^1,A^1)={f: S→A | λ_s(f(t)) f(st)^{-1} f(s)=α(stt^{-1}s^{-1})}. Moreover, if S is a monoid, then H^0(S,A)≅{a∈A | λ_s(a)a^{-1}=α(ss^{-1}) for all s∈S}, H^1(S,A)≅Z^1(S^1,A^1)/{f: S→A | ∃a∈A: f(s)=λ_s(a)a^{-1} for all s∈S}. Indeed, in the monoid case each f∈Hom(C_0(S),A) is identified with f(x)∈A by <ref>, and (δ_1^0 f)(s)=λ_s(f(x)) f(x)^{-1} by <ref>, whence <ref>. Similarly Hom(D_0(S^1),A^1)=Hom(C_0(S^1),A^1)≅(A^1)_{1_{S^1}}={1_A}, which gives <ref>. In the monoid case Hom(C_1(S),A)≅Hom(D_1(S^1),A^1), as shown in the proof of <ref>, which explains <ref>.

§.§ An interpretation of H^2(S^1,A^1)

Observe that f∈Z^2(S^1,A^1) if and only if it satisfies <ref> of the definition of a twisted S-module. As to <ref>, we first notice that it can be replaced by a “weaker-looking” condition. Let f∈Z^2(S^1,A^1). Then each of the equalities f(e,es)=α(ess^{-1}), f(se,e)=α(ses^{-1}) is equivalent to f(e,e)=α(e) for all e∈E(S). Clearly, f(e,e)=α(e) is a particular case of f(e,es)=α(ess^{-1}), as well as of f(se,e)=α(ses^{-1}).
Conversely, assuming f(e,e)=α(e) and writing <ref> for t=u=e, we get λ_s(α(e)) f(s,e)=f(s,e) f(se,e), whence f(se,e)=α(ses^{-1}). Similarly, f(e,es)=α(ess^{-1}) follows from <ref> for the triple (e,e,s).

For each f∈Z^2(S^1,A^1) there is g∈C^1(S^1,A^1) such that f̃=f·δ_2^1 g is a twisting related to the S-module A. Setting g(s)=f(s,s^{-1})^{-1}, we see that g(s)∈A_{α(ss^{-1})}, so g∈C^1(S^1,A^1). Moreover, (δ_2^1 g)(s,t)=λ_s(g(t)) g(st)^{-1} g(s)=λ_s(f(t,t^{-1})^{-1}) f(st,t^{-1}s^{-1}) f(s,s^{-1})^{-1}, hence (δ_2^1 g)(e,e)=λ_e(f(e,e)^{-1}) f(e,e) f(e,e)^{-1}=α(e) f(e,e)^{-1}=f(e,e)^{-1}. Therefore, if f̃=f·δ_2^1 g, then f̃(e,e)=f(e,e) f(e,e)^{-1}=α(e). It remains to apply <ref>.

There is a one-to-one correspondence between the elements of H^2(S^1,A^1) and the equivalence classes of twistings related to the S-module A. It follows from <ref> that each class [f]∈H^2(S^1,A^1) contains a twisting f̃ related to (α,λ). Now, by <ref> of the definition of equivalent twisted S-modules, two twistings are equivalent if and only if they are cohomologous as elements of Z^2(S^1,A^1).

§.§ The groups H^n_≤(S^1,A^1)

It was proved in <cit.> that the twisting f of a Sieben's twisted S-module is order-preserving in the sense that f(s,t)≤f(s',t') for s≤s' and t≤t'. The converse also holds: if f is order-preserving, then it satisfies Sieben's condition <ref>, as α(ess^{-1})=f(e,es)≤f(e,s) implies α(ess^{-1})=α(ess^{-1}) f(e,s)=f(e,s), and similarly α(ses^{-1})=f(se,e)≤f(s,e) implies α(ses^{-1})=α(ses^{-1}) f(s,e)=f(s,e). We shall say that f∈C^n(S^1,A^1), n≥1, is order-preserving if s_1≤t_1,…,s_n≤t_n ⟹ f(s_1,…,s_n)≤f(t_1,…,t_n). Since ≤ respects multiplication, the order-preserving cochains form a subgroup of C^n(S^1,A^1), which we denote by C^n_≤(S^1,A^1). Note also that δ_2^n f preserves the order whenever f does. Thus we obtain the cochain complex C^1_≤(S^1,A^1) −δ_2^1→ … −δ_2^{n−1}→ C^n_≤(S^1,A^1) −δ_2^n→ … We would like to add one more term on the left of this sequence, whose definition is motivated by the following results, in which S is assumed to be an inverse semigroup and A an S-module.

The group H^0(S,A) is isomorphic to the group of functions f: E(S)→A such that f(e)∈A_{α(e)} and λ_s(f(e))=f(ses^{-1}) for all s∈S and e∈E(S). The isomorphism is explained in <cit.>. We would like to note that in the monoid case a function f: E(S)→A with the above properties is determined by f(1_S)∈A, as f(e)=f(e·1_S·e)=λ_e(f(1_S))=α(e) f(1_S). Moreover, λ_s(f(1_S))=f(s·1_S·s^{-1})=f(ss^{-1})=α(ss^{-1}) f(1_S), so λ_s(f(1_S)) f(1_S)^{-1}=α(ss^{-1}). Conversely, if a∈A is such that λ_s(a)a^{-1}=α(ss^{-1}), then set f(e)=α(e)a and notice that f(ses^{-1})=α(ses^{-1})a=α(se(se)^{-1})a=λ_{se}(a)=λ_s(α(e)a)=λ_s(f(e)). This relates the result of the remark with <ref>.

Let f: E(S)→A with f(e)∈A_{α(e)} for all e∈E(S). Then f satisfies <ref> if and only if f is order-preserving and λ_s(f(s^{-1}s))=f(ss^{-1}) for all s∈S. Suppose <ref>. Taking e=s^{-1}s, we get <ref>. Using <ref> with s=e'∈E(S), one sees that f(ee')=α(e') f(e)≤f(e), so f is order-preserving. Conversely, if f is order-preserving, then f(ee')≤f(e), so f(ee')=f(ee') f(ee')^{-1} f(e)=α(ee') f(e)=α(e') f(e). Assuming additionally <ref>, one gets f(ses^{-1})=f(se(se)^{-1})=λ_{se}(f((se)^{-1}se))=λ_s(α(e) f(es^{-1}s))=λ_s(α(s^{-1}s) f(e))=λ_s(λ_{s^{-1}s}(f(e)))=λ_s(f(e)).

Define C^0_≤(S^1,A^1) to be the abelian group of order-preserving functions f: E(S)→A such that f(e)∈A_{α(e)} for all e∈E(S). Given f∈C^0_≤(S^1,A^1), set (δ_2^0 f)(s)=λ_s(f(s^{-1}s)) f(ss^{-1})^{-1}. Clearly, δ_2^0 f∈C^1_≤(S^1,A^1). For any f∈C^0_≤(S^1,A^1) and e,e'∈E(S) one has f(e) f(e')^{-1}=α(ee'). Indeed, using <ref> and the facts that f(e)∈A_{α(e)} and f(e')∈A_{α(e')}, we obtain f(e) f(e')^{-1}=α(e') f(e)·(α(e) f(e'))^{-1}=f(ee') f(ee')^{-1}=α(ee'). The composition δ_2^1∘δ_2^0 is zero.
Let f∈C^0_≤(S^1,A^1) and s,t∈S. Then by <ref> (δ_2^1(δ_2^0 f))(s,t)=λ_s((δ_2^0 f)(t))·(δ_2^0 f)(st)^{-1}·(δ_2^0 f)(s). In view of <ref> we have λ_s((δ_2^0 f)(t))=λ_s(λ_t(f(t^{-1}t)) f(tt^{-1})^{-1})=λ_{st}(f(t^{-1}t)) λ_s(f(tt^{-1}))^{-1}, (δ_2^0 f)(st)^{-1}=λ_{st}(f(t^{-1}s^{-1}st))^{-1} f(stt^{-1}s^{-1}), (δ_2^0 f)(s)=λ_s(f(s^{-1}s)) f(ss^{-1})^{-1}. Hence (δ_2^1(δ_2^0 f))(s,t)=f(stt^{-1}s^{-1}) f(ss^{-1})^{-1} λ_{st}(f(t^{-1}t) f(t^{-1}s^{-1}st)^{-1}) λ_s(f(s^{-1}s) f(tt^{-1})^{-1}). By <ref>, f(stt^{-1}s^{-1}) f(ss^{-1})^{-1}=α(stt^{-1}s^{-1}), f(t^{-1}t) f(t^{-1}s^{-1}st)^{-1}=α(t^{-1}s^{-1}st), f(s^{-1}s) f(tt^{-1})^{-1}=α(s^{-1}stt^{-1}). It remains to apply <ref> of the definition of a twisted S-module to get (δ_2^1(δ_2^0 f))(s,t)=α(stt^{-1}s^{-1}). As a consequence we get: the sequence C^0_≤(S^1,A^1) −δ_2^0→ … −δ_2^{n−1}→ C^n_≤(S^1,A^1) −δ_2^n→ … is a cochain complex of abelian groups. The groups of n-cocycles, n-coboundaries and n-cohomologies of <ref> will be denoted by Z^n_≤(S^1,A^1), B^n_≤(S^1,A^1) and H^n_≤(S^1,A^1), respectively (n≥0).

The group H^0_≤(S^1,A^1) is isomorphic to H^0(S,A). This is explained by <ref>. One has Z^1_≤(S^1,A^1)=Z^1(S^1,A^1), so H^1_≤(S^1,A^1) is isomorphic to Z^1(S^1,A^1)/{f: S→A | ∃g∈C^0_≤(S^1,A^1): f(s)=λ_s(g(s^{-1}s)) g(ss^{-1})^{-1}}. In particular, it is isomorphic to H^1(S,A) when S is a monoid. Let f∈Z^1(S^1,A^1). Applying the 1-cocycle identity to the pair (e,e), where e∈E(S), we get α(e) f(e) f(e)^{-1} f(e)=α(e), that is, f(e)=α(e). Now writing the same identity for the pair (e,s), we have α(e) f(s) f(es)^{-1} f(e)=α(ess^{-1}), yielding f(es)=α(e) f(s)≤f(s), so f is order-preserving. This shows that Z^1_≤(S^1,A^1)=Z^1(S^1,A^1), proving <ref>. If S is a monoid and g∈C^0_≤(S^1,A^1), then g(e)=α(e) g(1_S) by <ref>, so g is identified with g(1_S)∈A. The result now follows from <ref>.

§ PARTIAL GROUP COHOMOLOGY WITH VALUES IN NON-UNITAL PARTIAL MODULES

Let S be a commutative semigroup with S^2=S. Then the multipliers of S commute with each other and with the elements of S. It was proved in <cit.> that ws=sw for all w∈M(S) and s∈S. Now if w',w''∈M(S), then (w'w'')s=w'(w''s)=(w''s)w'=(sw'')w'=s(w''w')=(w''w')s. Similarly, s(w'w'')=s(w''w').

Let G be a group and A a semilattice of groups. If A is commutative, then any twisted partial action of G on A splits into a partial action θ={θ_x: D_{x^{-1}}→D_x}_{x∈G} of G on A and a twisting related to (A,θ), i.e. a collection w={w_{x,y}}_{x,y∈G} of invertible multipliers of D_x D_{xy} satisfying w_{1,x}=w_{x,1}=𝕀_{D_x} and θ_x(a w_{y,z}) w_{xy,z}^{-1} w_{x,yz} w_{x,y}^{-1}=θ_x(a), a∈D_{x^{-1}} D_y D_{yz}. Here we used <ref>, restricting w_{xy,z}, w_{x,yz} and w_{x,y} to the (idempotent) ideal D_x D_{xy} D_{xyz}, which contains both θ_x(a w_{y,z}) and θ_x(a). Observe that <ref> is the same as θ_x(θ_{x^{-1}}(a) w_{y,z}) w_{xy,z}^{-1} w_{x,yz} w_{x,y}^{-1}=a, a∈D_x D_{xy} D_{xyz}. Now (θ,w) is equivalent to (θ',w') if and only if θ=θ' and w is equivalent to w' in the sense that there exists ε={ε_x∈U(M(D_x))}_{x∈G} such that a w'_{x,y}=θ_x(θ_{x^{-1}}(a) ε_y) ε_{xy}^{-1} ε_x w_{x,y}, a∈D_x D_{xy}. This motivates us to introduce the following notions.

Let G be a group. A partial G-module is a semilattice of abelian groups A together with a partial action θ of G on A. Given a partial G-module (A,θ) and x_1,…,x_n∈G, we shall write D_{(x_1,…,x_n)} for D_{x_1} D_{x_1x_2} ⋯ D_{x_1…x_n}. Let (A,θ) be a partial G-module and n≥1. A partial n-cochain of G with values in A is a collection w={w(x_1,…,x_n) | x_1,…,x_n∈G}, where w(x_1,…,x_n)∈U(M(D_{(x_1,…,x_n)})). By a partial 0-cochain of G with values in A we mean an element w∈U(M(A)). It follows from <ref> that partial n-cochains form an abelian group under pointwise multiplication. We denote this group by C^n(G,A). Observe that C^n(G,A), n≥1, is the group of units of ∏_{(x_1,…,x_n)∈G^n} M(D_{(x_1,…,x_n)}). Given n≥1, w∈C^n(G,A) and a∈D_{(x_1,…,x_{n+1})}, define (δ^n w)(x_1,…,x_{n+1})a=θ_{x_1}(θ_{x_1^{-1}}(a) w(x_2,…,x_{n+1})) ∏_{i=1}^n w(x_1,…,x_i x_{i+1},…,x_{n+1})^{(−1)^i} · w(x_1,…,x_n)^{(−1)^{n+1}}.
For w∈C^0(G,A) and a∈D_x set (δ^0 w)(x)a=θ_x(θ_{x^{-1}}(a)w) w^{-1}. Observe that θ_{x_1^{-1}}(a)∈D_{x_1^{-1}} D_{(x_2,…,x_{n+1})}, so w(x_2,…,x_{n+1}) is applicable in <ref>. The result θ_{x_1^{-1}}(a) w(x_2,…,x_{n+1}) belongs to D_{x_1^{-1}} D_{(x_2,…,x_{n+1})}, since the latter is an idempotent ideal. Therefore θ_{x_1}(θ_{x_1^{-1}}(a) w(x_2,…,x_{n+1})) is an element of D_{(x_1,…,x_{n+1})}. So the rest of the multipliers in <ref> are obviously applicable, and (δ^n w)(x_1,…,x_{n+1})a∈D_{(x_1,…,x_{n+1})}.

For all n≥0 and x_1,…,x_{n+1}∈G the map (δ^n w)(x_1,…,x_{n+1}) is a multiplier of D_{(x_1,…,x_{n+1})} whose right action coincides with the left one. Let n≥1. According to <ref> we may write (δ^n w)(x_1,…,x_{n+1})a=θ_{x_1}(θ_{x_1^{-1}}(a) w') w'', where w'=w(x_2,…,x_{n+1}), w''=(∏_{i=1}^n w(x_1,…,x_i x_{i+1},…,x_{n+1})^{(−1)^i}) w(x_1,…,x_n)^{(−1)^{n+1}}. We shall check the equality (δ^n w)(x_1,…,x_{n+1})(ab)=((δ^n w)(x_1,…,x_{n+1})a)b for a,b∈D_{(x_1,…,x_{n+1})}. The other two properties of a multiplier are proved similarly. Taking into account <ref>, we have (δ^n w)(x_1,…,x_{n+1})(ab)=θ_{x_1}(θ_{x_1^{-1}}(ab) w') w''=θ_{x_1}(θ_{x_1^{-1}}(a) θ_{x_1^{-1}}(b) w') w''=θ_{x_1}(θ_{x_1^{-1}}(a) w' θ_{x_1^{-1}}(b)) w''=θ_{x_1}(θ_{x_1^{-1}}(a) w') b w''=θ_{x_1}(θ_{x_1^{-1}}(a) w') w'' b=((δ^n w)(x_1,…,x_{n+1})a)b. The case n=0 uses the same idea (take w'=w and w''=w^{-1}).

For all a∈D_x D_{(y_1,…,y_n)}, w∈U(M(D_x D_{(y_1,…,y_n)})) and w'∈U(M(D_{(x^{-1},y_1,…,y_n)})) one has θ_x(θ_{x^{-1}}(aw) w')=θ_x(θ_{x^{-1}}(a) w') w. By <cit.> the pair of maps a↦θ_x(θ_{x^{-1}}(a) w') and a↦θ_x(w' θ_{x^{-1}}(a)) defines a multiplier w̄' of the idempotent ideal D_x D_{(y_1,…,y_n)}. Then our statement transforms into the equality (aw)w̄'=(aw̄')w, which holds thanks to <ref>.

For all n≥0 the map δ^n is a homomorphism C^n(G,A)→C^{n+1}(G,A). We shall prove that δ^n is a homomorphism of monoids C^n(G,A)→∏_{(x_1,…,x_{n+1})∈G^{n+1}} M(D_{(x_1,…,x_{n+1})}). In view of <ref> this will imply that δ^n(C^n(G,A))⊆C^{n+1}(G,A). The fact that δ^n maps the identity to the identity is clear from <ref>. Fix n≥1, u,v∈C^n(G,A), x_1,…,x_{n+1}∈G and a∈D_{(x_1,…,x_{n+1})}. We need to show that (δ^n(uv))(x_1,…,x_{n+1})a=(δ^n u)(x_1,…,x_{n+1})(δ^n v)(x_1,…,x_{n+1})a. Using <ref>, we represent the right-hand side of <ref> as (δ^n u)(x_1,…,x_{n+1}) θ_{x_1}(θ_{x_1^{-1}}(a)v')v''=θ_{x_1}(θ_{x_1^{-1}}(θ_{x_1}(θ_{x_1^{-1}}(a)v')v'')u')u'', where u',v'∈U(M(D_{(x_2,…,x_{n+1})})) and u'',v''∈U(M(D_{(x_1,…,x_{n+1})})). By <ref> θ_{x_1^{-1}}(θ_{x_1}(θ_{x_1^{-1}}(a)v')v'')=θ_{x_1^{-1}}(θ_{x_1}(θ_{x_1^{-1}}(a))v'')v'=θ_{x_1^{-1}}(av'')v'. Therefore θ_{x_1}(θ_{x_1^{-1}}(θ_{x_1}(θ_{x_1^{-1}}(a)v')v'')u')u''=θ_{x_1}(θ_{x_1^{-1}}(av'')v'u')u'', the latter being θ_{x_1}(θ_{x_1^{-1}}(a)v'u')v''u'' in view of <ref>. The result now follows from the observation that (uv)'=u'v' and (uv)''=u''v'', where (uv)' and (uv)'' denote the “parts” of (δ^n(uv))(x_1,…,x_{n+1})a from the representation similar to <ref>. The case n=0 is proved analogously.

Given a partial G-module (A,θ), as was mentioned above, there exist an E-unitary inverse semigroup S and an epimorphism κ: S→G whose kernel congruence coincides with σ, such that <ref> defines an S-module structure (α,λ) on A. As to formula <ref>, it can be generalized to arbitrary n≥0. For all n≥0 there is a homomorphism from C^n(G,A) to C^n_≤(S^1,A^1) which maps w to f defined by f(e)=α(e)w for n=0, and f(s_1,…,s_n)=α(s_1…s_n s_n^{-1}…s_1^{-1}) w(κ(s_1),…,κ(s_n)) for n≥1, where e∈E(S), s_1,…,s_n∈S. The case n=0 is immediate: α, being an isomorphism E(S)→E(A), preserves the order, and hence f does. Moreover, since w∈U(M(A)), we see from <ref> that f(e) f(e)^{-1}=α(e) w w^{-1}=α(e). Thus f∈C^0_≤(S^1,A^1). When n≥1, we first note that the right-hand side of <ref> makes sense, as for s_i=e_iδ_{x_i}, e_i∈E(D_{x_i}), one has x_i=κ(s_i), 1≤i≤n, and s:=s_1…s_n=eδ_x, where e∈E(D_{(x_1,…,x_n)}) and x=x_1…x_n. So α(ss^{-1})=e, and thus w(κ(s_1),…,κ(s_n)) is applicable in <ref>. As above for n=0, one easily gets from <ref> that f(s_1,…,s_n) f(s_1,…,s_n)^{-1}=α(s_1…s_n s_n^{-1}…s_1^{-1}).
Moreover, if s_i≤t_i, 1≤i≤n, then f(s_1,…,s_n)≤f(t_1,…,t_n), because s_1…s_n s_n^{-1}…s_1^{-1}≤t_1…t_n t_n^{-1}…t_1^{-1}, α preserves the order, and κ(s_i)=κ(t_i), as (s_i,t_i)∈σ, 1≤i≤n. This shows that f∈C^n_≤(S^1,A^1). The fact that the map w↦f is a homomorphism is explained by the observation that in a commutative semigroup S one has e(ww')=(ew)w'=(e(ew))w'=((ew)e)w'=(ew)(ew') for all e∈E(S) and w,w'∈M(S).

Recall now that we may take S=E(A)*_θ G, with α: E(S)→E(A) and κ: S→G given by <ref>. Notice that for any x∈G and a∈D_x we have that s=aa^{-1}δ_x is the unique element of S such that κ(s)=x and α(ss^{-1})=aa^{-1}. Let n≥0 and f∈C^n_≤(S^1,A^1). If n=0, then for any a∈A define wa=aw=f(α^{-1}(aa^{-1}))a. When n≥1, given x_1,…,x_n∈G and a∈D_{(x_1,…,x_n)}, choose the unique n-tuple (s_1,…,s_n)∈S^n such that κ(s_i)=x_1…x_i and α(s_i s_i^{-1})=aa^{-1}, 1≤i≤n. Then set[When n=1, the right-hand term of <ref> is f(s_1)a.] w(x_1,…,x_n)a=a w(x_1,…,x_n)=f(s_1,s_1^{-1}s_2,…,s_{n−1}^{-1}s_n)a. This defines a homomorphism from C^n_≤(S^1,A^1) to C^n(G,A), n≥0.

For n=0 we note from <ref>, using the order-preserving property of f, that w(ab)=f(α^{-1}(ab b^{-1} a^{-1}))ab=f(α^{-1}(aa^{-1} bb^{-1}))ab=f(α^{-1}(aa^{-1}) α^{-1}(bb^{-1}))ab≤f(α^{-1}(aa^{-1}))ab=(wa)b. Since both w(ab) and (wa)b belong to the same group component A_{aa^{-1} bb^{-1}} of A, they are equal. Due to the equality wa=aw and the commutativity of A, this explains that w is a multiplier of A. Clearly, w is invertible with w^{-1}a=aw^{-1}=f(α^{-1}(aa^{-1}))^{-1}a. So w∈C^0(G,A). Suppose that f↦w and f'↦w' for some f,f'∈C^0_≤(S^1,A^1). As wa∈A_{aa^{-1}}, w'(wa)=f'(α^{-1}(aa^{-1}))wa=f'(α^{-1}(aa^{-1})) f(α^{-1}(aa^{-1}))a, showing that f'f↦w'w.

Let n≥1. It is immediately seen that the right-hand term of <ref> belongs to the ideal D_{(x_1,…,x_n)}. Observe that f(s_1,s_1^{-1}s_2,…,s_{n−1}^{-1}s_n)∈A_{α(s_1s_1^{-1}…s_ns_n^{-1})}=A_{aa^{-1}}. Hence the function w(x_1,…,x_n) from D_{(x_1,…,x_n)} to itself is a bijection, whose inverse is w(x_1,…,x_n)^{-1}a=a w(x_1,…,x_n)^{-1}=f(s_1,s_1^{-1}s_2,…,s_{n−1}^{-1}s_n)^{-1}a. To prove that w(x_1,…,x_n) is a multiplier of D_{(x_1,…,x_n)}, it suffices to verify w(x_1,…,x_n)(ab)=(w(x_1,…,x_n)a)b for a,b∈D_{(x_1,…,x_n)}. Let (s_1,…,s_n) and (t_1,…,t_n) be the n-tuples in S^n from the definition of w corresponding to a and b, respectively. Then the right-hand side of <ref> equals f(s_1,s_1^{-1}s_2,…,s_{n−1}^{-1}s_n)ab by <ref>. As to the left-hand side of <ref>, note that ab(ab)^{-1}=aa^{-1} bb^{-1}=α(s_i s_i^{-1} t_1 t_1^{-1})=α(t_1t_1^{-1} s_i (t_1t_1^{-1} s_i)^{-1}), with κ(t_1t_1^{-1} s_i)=κ(s_i)=x_1…x_i, 1≤i≤n. Therefore w(x_1,…,x_n)(ab) is f(t_1t_1^{-1} s_1, s_1^{-1} t_1t_1^{-1}·t_1t_1^{-1} s_2,…,s_{n−1}^{-1} t_1t_1^{-1}·t_1t_1^{-1} s_n)ab=f(t_1t_1^{-1} s_1, s_1^{-1} t_1t_1^{-1} s_2,…,s_{n−1}^{-1} t_1t_1^{-1} s_n)ab. Since t_1t_1^{-1} s_1≤s_1 and s_i^{-1} t_1t_1^{-1} s_{i+1}≤s_i^{-1} s_{i+1}, 1≤i≤n−1, we have f(t_1t_1^{-1} s_1, s_1^{-1} t_1t_1^{-1} s_2,…,s_{n−1}^{-1} t_1t_1^{-1} s_n)≤f(s_1,s_1^{-1}s_2,…,s_{n−1}^{-1}s_n). Taking into account <ref> and the fact that the left-hand side of <ref> belongs to A_{α(t_1t_1^{-1} s_1s_1^{-1}…s_ns_n^{-1})}=A_{aa^{-1} bb^{-1}}, one sees that <ref> is bb^{-1} f(s_1,s_1^{-1}s_2,…,s_{n−1}^{-1}s_n)ab=f(s_1,s_1^{-1}s_2,…,s_{n−1}^{-1}s_n)ab. This concludes the proof that w∈C^n(G,A). Take additionally f'∈C^n_≤(S^1,A^1) and let f'↦w'. It follows from <ref> that w(x_1,…,x_n)a∈A_{aa^{-1}}, so to apply w'(x_1,…,x_n) to this element one uses the same n-tuple (s_1,…,s_n): w'w(x_1,…,x_n)a=f'(s_1,s_1^{-1}s_2,…,s_{n−1}^{-1}s_n) w(x_1,…,x_n)a=f'f(s_1,s_1^{-1}s_2,…,s_{n−1}^{-1}s_n)a. Thus f'f is mapped to w'w.

The group C^n(G,A) is isomorphic to C^n_≤(S^1,A^1) for all n≥0. We shall show that the homomorphisms from <ref> are inverse to each other. Let w∈C^n(G,A) and w↦f↦w'. If n=0, then by <ref> w'a=f(α^{-1}(aa^{-1}))a=(α(α^{-1}(aa^{-1}))w)a=((aa^{-1})w)a=w((aa^{-1})a)=wa, so w'=w by <ref>. Consider now n≥1 and take x_1,…,x_n∈G and a∈D_{(x_1,…,x_n)}.
Find the unique (s_1,…,s_n)∈S^n such that κ(s_i)=x_1…x_i and α(s_i s_i^{-1})=aa^{-1}, 1≤i≤n. According to <ref>: w'(x_1,…,x_n)a=f(s_1,s_1^{-1}s_2,…,s_{n−1}^{-1}s_n)a=α(s_1s_1^{-1}…s_ns_n^{-1}) w(κ(s_1),κ(s_1^{-1}s_2),…,κ(s_{n−1}^{-1}s_n))a=aa^{-1}·w(x_1, x_1^{-1}x_1x_2,…,(x_1…x_{n−1})^{-1} x_1…x_n)a=w(x_1,…,x_n)a. Now take f∈C^n_≤(S^1,A^1) and assume that f↦w↦f'. For n=0 one has f'(e)=α(e)w=f(α^{-1}(α(e)))α(e)=f(e)α(e)=f(e), as f(e)∈A_{α(e)}. Let n≥1 and s_1,…,s_n∈S. Set e_i=s_i…s_n s_n^{-1}…s_i^{-1}∈E(S), t_i=s_1…s_i·e_{i+1}, 1≤i≤n−1, and t_n=s_1…s_n. Then κ(t_i)=κ(s_1)…κ(s_i) and α(t_i t_i^{-1})=α(e_1), 1≤i≤n. So, in view of <ref>: f'(s_1,…,s_n)=α(s_1…s_n s_n^{-1}…s_1^{-1}) w(κ(s_1),…,κ(s_n))=α(e_1) f(t_1,t_1^{-1}t_2,…,t_{n−1}^{-1}t_n). Clearly, t_1=s_1 e_2≤s_1. Moreover, t_i^{-1} t_{i+1}=e_{i+1}·s_i^{-1}…s_1^{-1}·s_1…s_i·s_{i+1}·e_{i+2}≤s_{i+1}, 1≤i≤n−1. Therefore f(t_1,t_1^{-1}t_2,…,t_{n−1}^{-1}t_n)≤f(s_1,s_2,…,s_n), and hence f'(s_1,…,s_n)≤α(e_1) f(s_1,…,s_n)=f(s_1,…,s_n). Since both f'(s_1,…,s_n) and f(s_1,…,s_n) belong to the same group component A_{α(e_1)} of A, we conclude that they are equal.

Observe that w∈C^2(G,A) satisfies w(1,x)=w(x,1)=𝕀_{D_x} for all x if and only if the corresponding f∈C^2_≤(S^1,A^1) satisfies Sieben's condition <ref>. Indeed, if w(1,x)=𝕀_{D_x} for all x, then f(e,s)=α(ess^{-1}) w(1,κ(s))=α(ess^{-1}) by <ref>, and analogously w(x,1)=𝕀_{D_x} for all x implies f(s,e)=α(ses^{-1}). Conversely, assume <ref> and take x∈G, a∈D_x. There is s∈S such that κ(s)=x and α(ss^{-1})=aa^{-1}. Since κ(ss^{-1})=1, it follows that w(1,x)a=aw(1,x)=f(ss^{-1},(ss^{-1})^{-1}s)a=f(ss^{-1},s)a=α(ss^{-1})a=a by <ref>. Similarly w(x,1)a=aw(x,1)=a.

The homomorphism from <ref> respects δ^n and δ_2^n, n≥0, in the sense that the diagram

	C^0(G,A) −δ^0→ C^1(G,A) −δ^1→ …
	        ≀                      ≀
	C^0_≤(S^1,A^1) −δ_2^0→ C^1_≤(S^1,A^1) −δ_2^1→ …

commutes. Let w∈C^n(G,A). Suppose that w↦f∈C^n_≤(S^1,A^1) and δ^n w↦f'∈C^{n+1}_≤(S^1,A^1). We need to prove that f'=δ_2^n f. Consider the case n=0. By <ref> (δ_2^0 f)(s)=λ_s(f(s^{-1}s)) f(ss^{-1})^{-1}=θ_{κ(s)}(α(s^{-1}s) f(s^{-1}s)) f(ss^{-1})^{-1}=θ_{κ(s)}(α(s^{-1}s)w) α(ss^{-1}) w^{-1}=θ_{κ(s)}(α(s^{-1}s)w) w^{-1}=θ_{κ(s)}(θ_{κ(s)^{-1}}(α(ss^{-1}))w) w^{-1}=α(ss^{-1})(δ^0 w)(κ(s))=f'(s). Here we also used the fact that θ_{κ(s)}(α(s^{-1}s))=λ_s(α(s^{-1}s))=α(ss^{-1}).

Let n≥1. Given t_1,…,t_k∈S, we shall denote for brevity e(t_1,…,t_k)=α(t_1…t_k t_k^{-1}…t_1^{-1})∈E(A). Using <ref>, we see that the factor f(s_1,…,s_i s_{i+1},…,s_{n+1})^{(−1)^i} in <ref> equals e(s_1,…,s_{n+1}) w(κ(s_1),…,κ(s_i s_{i+1}),…,κ(s_{n+1}))^{(−1)^i}, 1≤i≤n. Since <ref> contains λ_{s_1}(f(s_2,…,s_{n+1}))∈A_{e(s_1,…,s_{n+1})}, we may easily remove e(s_1,…,s_{n+1}) from <ref>. Moreover, we may remove e(s_1,…,s_n) from f(s_1,…,s_n)^{(−1)^{n+1}}=e(s_1,…,s_n) w(κ(s_1),…,κ(s_n))^{(−1)^{n+1}}, as e(s_1,…,s_{n+1})≤e(s_1,…,s_n). Taking into account <ref>, we come to (δ_2^n f)(s_1,…,s_{n+1})=θ_{κ(s_1)}(α(s_1^{-1}s_1) e(s_2,…,s_{n+1}) w(κ(s_2),…,κ(s_{n+1}))) ∏_{i=1}^n w(κ(s_1),…,κ(s_i s_{i+1}),…,κ(s_{n+1}))^{(−1)^i} · w(κ(s_1),…,κ(s_n))^{(−1)^{n+1}}. Now observe, using <ref>, that α(s_1^{-1}s_1) e(s_2,…,s_{n+1})=e(s_1^{-1},s_1,…,s_{n+1})=λ_{s_1^{-1}}(e(s_1,…,s_{n+1}))=λ_{s_1^{-1}}(α(s_1 s_1^{-1}) e(s_1,…,s_{n+1}))=θ_{κ(s_1)^{-1}}(e(s_1,…,s_{n+1})), the latter showing, in view of <ref>, that the right-hand side of <ref> is exactly e(s_1,…,s_{n+1})(δ^n w)(κ(s_1),…,κ(s_{n+1}))=f'(s_1,…,s_{n+1}).

The sequence C^0(G,A) −δ^0→ … −δ^{n−1}→ C^n(G,A) −δ^n→ … is a cochain complex of abelian groups. This is explained by <ref>.
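For the reader's convenience we unwind the definition of δ^n in low degrees (a routine specialization of <ref>, recorded here only as an illustration): for w∈C^0(G,A), w'∈C^1(G,A), w''∈C^2(G,A) and a in the corresponding ideal,

	(δ^0 w)(x)a = θ_x(θ_{x^{-1}}(a)w) w^{-1},
	(δ^1 w')(x,y)a = θ_x(θ_{x^{-1}}(a) w'(y)) w'(xy)^{-1} w'(x),
	(δ^2 w'')(x,y,z)a = θ_x(θ_{x^{-1}}(a) w''(y,z)) w''(xy,z)^{-1} w''(x,yz) w''(x,y)^{-1}.

In particular, the condition δ^2 w''=𝕀 is exactly the identity <ref> satisfied by a twisting related to (A,θ), which is how twistings enter the description of H^2(G,A) below.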
The complex <ref> naturally defines the groups Z^n(G,A)=ker δ^n, B^n(G,A)=im δ^{n−1} and H^n(G,A)=Z^n(G,A)/B^n(G,A) of partial n-cocycles, n-coboundaries and n-cohomologies of G with values in A, n≥1 (H^0(G,A)=Z^0(G,A)=ker δ^0). The next fact immediately follows from <ref>. There are isomorphisms of groups Z^n(G,A)≅Z^n_≤(S^1,A^1) and B^n(G,A)≅B^n_≤(S^1,A^1). In particular, H^n(G,A) is isomorphic to H^n_≤(S^1,A^1) for all n≥0.

There is a one-to-one correspondence between the elements of H^2(G,A) and the equivalence classes of twistings related to (A,θ). By <ref> each class [w]∈H^2(G,A) corresponds to [f]∈H^2_≤(S^1,A^1), and by <ref> there is g∈C^1(S^1,A^1) such that f̃=f·δ_2^1 g is a twisting related to (α,λ). It is seen from the proof of <ref> that g(s) can be chosen to be f(s,s^{-1})^{-1}, so g preserves the order, as f does. Therefore f̃∈Z^2_≤(S^1,A^1) and [f]=[f̃] in H^2_≤(S^1,A^1). Since f̃ is order-preserving, it satisfies Sieben's condition <ref>, as observed at the beginning of <ref>. By <ref> there is a twisting w̃ related to (A,θ) such that [w̃] corresponds to [f̃]. Thus [w]=[w̃].

Let S be a max-generated F-inverse monoid (see <cit.>) and A an S-module. Then H^n_≤(S^1,A^1)≅H^n(S,A). By <cit.>, up to an isomorphism, each S-module structure (α,λ) on A comes from the partial 𝒢(S)-module (A,θ) defined by <ref>, and we may assume that S=E(A)*_θ 𝒢(S). Hence H^n_≤(S^1,A^1)≅H^n(𝒢(S),A) by <ref>. Observe from <ref> that each D_x is a monoid with identity 1_x=α(max x·(max x)^{-1}), where max x is the maximum element of the class x∈𝒢(S). The idempotents 1_x generate E(A), as the elements max x generate S. Therefore (A,θ) is an inverse partial 𝒢(S)-module in the sense of <cit.>. By <cit.> one has H^n(S,A)≅H^n(𝒢(S),A) (see also <cit.>).[Notice that in <cit.> H. Lausch's cohomology group H^n(S,A) is denoted by H^n_S(A).]

§ EXTENSIONS OF SEMILATTICES OF ABELIAN GROUPS AND H²(G,A)

It was proved in <cit.> that any admissible extension A −i→ U −j→ G induces a twisted partial action Θ of G on A, as soon as one fixes a refinement A −i→ U −π→ S −κ→ G of U together with an order-preserving transversal ρ of π. We shall show that replacing U by an equivalent extension, as well as changing ρ, leads to an equivalent Θ. Suppose that A −i→ U −j→ G and A −i'→ U' −j'→ G are a pair of equivalent admissible extensions of A by G. Then any two refinements A −i→ U −π→ S −κ→ G and A −i'→ U' −π'→ S' −κ'→ G of U and U' with order-preserving transversals ρ and ρ' of π and π' induce equivalent twisted partial actions Θ and Θ' of G on A. Let μ: U→U' be an isomorphism defining the equivalence. There is a homomorphism ν: S→S' making the diagram <ref> commute. Since μ is an isomorphism, ν is also an isomorphism. Denote by Λ=(α,λ,f) and Λ'=(α',λ',f') the twisted module structures on A over S and S' which come from A −i→ U −π→ S, ρ and A −i'→ U' −π'→ S', ρ', respectively. Note that ρ''=μ∘ρ∘ν^{-1} is another order-preserving transversal of π', with ρ''∘ν=μ∘ρ. By <cit.> the induced twisted S'-module structure Λ''=(α'',λ'',f'') on A satisfies Λ''∘ν=Λ in the sense that λ''∘ν=λ, α''∘ν|_{E(S)}=α and f=f''∘(ν×ν). Let Θ''=(θ'',w'') be the twisted partial action of G on A coming from Λ''. Observe from <ref> that D_x=⊔_{κ(s)=x} A_{α(ss^{-1})}=⊔_{κ'∘ν(s)=x} A_{α''∘ν(ss^{-1})}=⊔_{κ'∘ν(s)=x} A_{α''(ν(s)ν(s)^{-1})}=D''_x. Moreover, θ_x(a)=λ_s(a)=λ''_{ν(s)}(a) for s∈S with x=κ(s)=κ'∘ν(s) and aa^{-1}=α(s^{-1}s)=α''(ν(s)^{-1}ν(s)), so θ_x(a)=θ''_x(a) (see <ref>). Similarly, using <ref>, one proves that w''=w. Thus Θ=Θ''. On the other hand, in view of <cit.> one sees that Λ'' is equivalent to Λ', and hence Θ'' is equivalent to Θ' by <cit.>. So, Θ is equivalent to Θ'.
In particular, when A is commutative, any two equivalent admissible extensions of A by G induce the same partial G-module structure on A, and a choice of refinements with order-preserving transversals induces a pair of cohomologous partial 2-cocycles of G with values in this module. Let (A,θ) be a partial G-module. An extension of (A,θ) by G is an admissible extension A −i→ U −j→ G of A by G such that the induced partial G-module is (A,θ). Each equivalence class [U] of admissible extensions of a partial G-module (A,θ) by G determines an element [w] of H^2(G,A).

For the converse map we recall from <cit.> that any twisted partial action Θ of G on A defines the admissible extension A*_Θ G of A by G, whose refinement can be chosen to be A −i→ A*_Θ G −π→ E(A)*_θ G −κ→ G with π(aδ_x)=aa^{-1}δ_x and κ(aδ_x)=x. Let Θ' be the twisted partial action induced by A −i→ A*_Θ G −π→ E(A)*_θ G −κ→ G and the order-preserving transversal ρ: E(A)*_θ G→A*_Θ G, ρ(eδ_x)=eδ_x. Then Θ'=Θ. By <cit.> the twisted partial action Θ'' of 𝒢(E(A)*_θ G) on A, coming from A −i→ A*_Θ G −π→ E(A)*_θ G −σ^♮→ 𝒢(E(A)*_θ G) and the same ρ, satisfies Θ=Θ''∘ν, where ν is an isomorphism G→𝒢(E(A)*_θ G) defined by ν(x)=σ^♮(eδ_x) for e∈E(D_x). Observe that ν∘κ(eδ_x)=ν(x)=σ^♮(eδ_x), so κ(s)=x ⟺ σ^♮(s)=ν(x) for any s∈E(A)*_θ G. Then it is immediately seen from <ref> that Θ'=Θ''∘ν, i.e. θ'_x=θ''_{ν(x)} and w'_{x,y}=w''_{ν(x),ν(y)}. Thus Θ=Θ'.

Let (A,θ) be a partial G-module and w a twisting related to (A,θ). Then the crossed product A*_{(θ,w)} G is an extension of (A,θ) by G. Let Λ=(α,λ,f) and Λ'=(α',λ',f') be twisted S-module structures on A. If Λ and Λ' are equivalent, then A*_Λ S and A*_{Λ'} S are equivalent as extensions of A by S. If g: S→A is a map defining the equivalence, then set μ: A*_Λ S→A*_{Λ'} S, μ(aδ_s)=a g(s)^{-1} δ'_s. Since α=α' and g(s)∈A_{α(ss^{-1})}, μ is well defined. Obviously, j'∘μ=j. Observe, using <cit.>, that μ∘i(a)=μ(aδ_{α^{-1}(aa^{-1})})=a g(α^{-1}(aa^{-1}))^{-1} δ'_{α^{-1}(aa^{-1})}=aδ'_{α^{-1}(aa^{-1})}=i'(a). It remains to prove that μ is a homomorphism. We have, by <ref> and the fact that g(s)∈A_{α(ss^{-1})}: μ(aδ_s)μ(bδ_t)=a g(s)^{-1} δ'_s·b g(t)^{-1} δ'_t=a g(s)^{-1} λ'_s(b g(t)^{-1}) f'(s,t) δ'_{st}=a g(s)^{-1}·g(s) λ_s(b g(t)^{-1}) g(s)^{-1}·g(s) λ_s(g(t)) f(s,t) g(st)^{-1} δ'_{st}=a·aa^{-1}·λ_s(b g(t)^{-1})·aa^{-1}·λ_s(g(t)) f(s,t) g(st)^{-1} δ'_{st}=a λ_s(b g(t)^{-1} g(t)) f(s,t) g(st)^{-1} δ'_{st}=a λ_s(b·bb^{-1}) f(s,t) g(st)^{-1} δ'_{st}=a λ_s(b) f(s,t) g(st)^{-1} δ'_{st}=μ(aδ_s·bδ_t).

Let Θ and Θ' be twisted partial actions of G on A. If Θ is equivalent to Θ', then A*_Θ G is equivalent to A*_{Θ'} G. Observe that E(A)*_θ G=E(A)*_{θ'} G=:S by <cit.>. Moreover, the extensions A −i→ A*_Θ G −π→ S and A −i'→ A*_{Θ'} G −π'→ S together with the transversals ρ: S→A*_Θ G, ρ(eδ_x)=eδ_x, and ρ': S→A*_{Θ'} G, ρ'(eδ'_x)=eδ'_x, induce equivalent twisted S-module structures Λ and Λ' on A thanks to <cit.>. Hence A*_Λ S is equivalent to A*_{Λ'} S by <ref>. But A*_Λ S is equivalent to A*_Θ G and A*_{Λ'} S is equivalent to A*_{Θ'} G by <cit.>. Thus, by transitivity, A*_Θ G is equivalent to A*_{Θ'} G. There is a well-defined map from H^2(G,A) to the set of equivalence classes of extensions of (A,θ) by G which sends a class [w] to the class [A*_{(θ,w)} G], where w is a twisting related to (A,θ) (see <ref>).

Let (A,θ) be a partial G-module. Then the equivalence classes of extensions of (A,θ) by G are in a one-to-one correspondence with the elements of H^2(G,A). If w is a twisting related to (A,θ), then [w]↦[A*_{(θ,w)} G]↦[w] by <ref>, so [w]↦[A*_{(θ,w)} G] is injective. It is also surjective by <cit.>.

§ SPLIT EXTENSIONS AND H¹(G,A)

The classical H^1(G,A) characterizes (up to A-conjugacy) the splittings of the extension A⋊G of a G-module A by a group G (see <cit.>).
We first introduce a similar notion for an extension of a semilattice of groups by an inverse semigroup.

§.§ Split extensions of A by S

An extension A −i→ U −j→ S of A by S is said to split if there is a transversal k: S→U of j which is a homomorphism (called a splitting of U). If U splits, then any equivalent extension splits. For if μ: U→U' is an isomorphism determining the equivalence and k: S→U is a splitting of U, then μ∘k is a splitting of U'.

Let A be a semilattice of abelian groups, (α,λ) an S-module structure on A, f∈Z^2(S^1,A^1) a twisting related to (α,λ), and Λ=(α,λ,f). Then the extension A*_Λ S splits if and only if f∈B^2(S^1,A^1). Observe that any transversal ρ: S→A*_Λ S of j: A*_Λ S→S has the form ρ(s)=g(s)δ_s, where g(s)∈A_{α(ss^{-1})}. Hence the transversals ρ can be identified with the elements g of C^1(S^1,A^1). Now ρ is a homomorphism if and only if g(s)δ_s·g(t)δ_t=g(st)δ_{st} ⟺ g(s) λ_s(g(t)) f(s,t)=g(st) ⟺ f=δ_2^1(g^{-1}).

An extension U of an S-module A by S splits if and only if it is equivalent to A*_{(α,λ)} S. Choosing a transversal ρ of U, we may assume U to be A*_{(α,λ,f)} S for some twisting f related to (α,λ). It follows from <ref> that U splits if and only if f∈B^2(S^1,A^1), that is, f is equivalent to the trivial twisting. In view of <ref> the latter means exactly that U is equivalent to A*_{(α,λ)} S.

Let U be a split extension of an S-module A by S. Then the splittings of U are in a one-to-one correspondence with the elements of Z^1(S^1,A^1). Notice that if U' is an equivalent extension with μ: U→U' the corresponding isomorphism, then k↦μ∘k defines a bijection between the splittings of U and the splittings of U'. Therefore we may assume U to be A*_{(α,λ)} S, thanks to <ref>. Let k: S→A*_{(α,λ)} S be a splitting. As we have seen above, k(s)=g(s)δ_s for some g∈C^1(S^1,A^1). Then g(st)δ_{st}=k(st)=k(s)k(t)=g(s)δ_s·g(t)δ_t=g(s) λ_s(g(t)) δ_{st}, whence g(st)=g(s) λ_s(g(t)), that is, g∈Z^1(S^1,A^1).

We recall from <cit.> that in the classical case, given a splitting k of a split group extension of A by G and a∈A, the conjugate map k'(g)=i(a) k(g) i(a)^{-1} is again a splitting. This may fail in the semigroup case. Let k be a splitting of a split extension U of A by S and a∈A. Then k'(s)=i(a) k(s) i(a)^{-1} is a splitting if and only if A is a monoid (equivalently, S is a monoid, or, equivalently, U is a monoid) and a∈U(A). Indeed, observe that k'(E(S))⊆E(U), as k(E(S))⊆E(U), and j(k'(s))=j(i(a) k(s) i(a)^{-1})=j(i(a))·s·j(i(a)), with j(i(a)) being an idempotent. Therefore j(k'(s))=s for all s∈S if and only if S is a monoid whose identity is j(i(a)). Now j(i(a))=j(i(a) i(a)^{-1})=j(i(aa^{-1}))=α^{-1}(aa^{-1}) by <ref>, so j(i(a))=1_S ⟺ aa^{-1}=1_A, i.e. a∈U(A). Moreover, in this case k' is a homomorphism, as i(a)^{-1} i(a)=i(a^{-1}a)=i(1_A)=1_U. Under the conditions of <ref> the splitting k' is said to be A-conjugate to k.

A more general conjugacy in the non-monoid case is motivated by the following. Let S be an inverse semigroup, U a split extension of an S-module A by S, and k a splitting of U. Then for any h∈C^0_≤(S^1,A^1) the map k'(s)=i(h(ss^{-1})) k(s) i(h(s^{-1}s))^{-1} is a splitting of U. By <ref> and the fact that h(e)∈A_{α(e)}, j(k'(s))=α^{-1}(α(ss^{-1}))·s·α^{-1}(α(s^{-1}s))=ss^{-1}·s·s^{-1}s=s. Moreover, if e∈E(S), then using <ref> we get k'(e)=i(h(e)) k(e) i(h(e))^{-1}=i(h(e) α(e) h(e)^{-1})=i(α(e))=k(e), so k'(E(S))⊆E(U). It remains to show that k' is a homomorphism: k'(s)k'(t)=i(h(ss^{-1})) k(s) i(h(s^{-1}s))^{-1} i(h(tt^{-1})) k(t) i(h(t^{-1}t))^{-1}=i(h(ss^{-1})) k(s) i(h(s^{-1}s)^{-1} h(tt^{-1})) k(t) i(h(t^{-1}t))^{-1}=i(h(ss^{-1})) k(s) i(α(s^{-1}s·tt^{-1})) k(t) i(h(t^{-1}t))^{-1} by <ref>. Since j(k(s)^{-1} k(s))=s^{-1}s and j(k(t) k(t)^{-1})=tt^{-1}, we have k(s)^{-1} k(s)=i∘α(s^{-1}s) and k(t) k(t)^{-1}=i∘α(tt^{-1}), as j is idempotent-separating.
Hence <ref> equals i(h(ss^{-1})) k(s) k(t) i(h(t^{-1}t))^{-1}=i(h(ss^{-1})) k(st) i(h(t^{-1}t))^{-1}. Now, applying <ref> to st, we rewrite the right-hand side of <ref> as i(α(stt^{-1}s^{-1}) h(ss^{-1})) k(st) i(α(t^{-1}s^{-1}st) h(t^{-1}t))^{-1}. Observe that α(stt^{-1}s^{-1}) h(ss^{-1})=h(stt^{-1}s^{-1}), as stt^{-1}s^{-1}≤ss^{-1} and h is order-preserving. Similarly α(t^{-1}s^{-1}st) h(t^{-1}t)=h(t^{-1}s^{-1}st), so <ref> is k'(st). Under the conditions of <ref> the splittings k and k' are said to be C^0_≤-equivalent.

Observe that in the monoid case <ref> becomes k'(s)=i(h(1_S)) i(α(ss^{-1})) k(s) i(α(s^{-1}s)) i(h(1_S))^{-1}=i(h(1_S)) k(s) i(h(1_S))^{-1} by <ref>. Since h(1_S)∈A_{1_A}=U(A), the C^0_≤-equivalence generalizes the A-conjugacy.

Under the conditions of <ref> the C^0_≤-equivalence classes of splittings of U are in a one-to-one correspondence with the elements of H^1_≤(S^1,A^1). We first observe that the C^0_≤-equivalence agrees with the equivalence of extensions. Indeed, if μ: U→U' is an isomorphism respecting the diagrams of U and U', then k'(s)=i(h(ss^{-1})) k(s) i(h(s^{-1}s))^{-1} exactly when μ(k'(s))=i'(h(ss^{-1})) μ(k(s)) i'(h(s^{-1}s))^{-1}. So k' is C^0_≤-equivalent to k if and only if μ∘k' is C^0_≤-equivalent to μ∘k. This shows that we may assume U to be A*_{(α,λ)} S, as in the proof of <ref>. Let k and k' be splittings of A*_{(α,λ)} S, and g,g' the corresponding elements of Z^1(S^1,A^1) (see <ref>). In view of <ref> it is enough to prove that k'(s)=i(h(ss^{-1})) k(s) i(h(s^{-1}s))^{-1} if and only if g'(s)=g(s) λ_s(p(s^{-1}s)) p(ss^{-1})^{-1} for some p∈C^0_≤(S^1,A^1). Using <ref> and <ref> of the definition of a twisted S-module, one has g'(s)δ_s=k'(s)=i(h(ss^{-1})) k(s) i(h(s^{-1}s))^{-1}=h(ss^{-1})δ_{ss^{-1}}·g(s)δ_s·(h(s^{-1}s)δ_{s^{-1}s})^{-1}=h(ss^{-1}) λ_{ss^{-1}}(g(s)) δ_s·h(s^{-1}s)^{-1} δ_{s^{-1}s}=h(ss^{-1}) g(s) δ_s·h(s^{-1}s)^{-1} δ_{s^{-1}s}=h(ss^{-1}) g(s) λ_s(h(s^{-1}s))^{-1} δ_s, so we may take p(e)=h(e)^{-1}. Under the conditions of <ref>, assume that S is a monoid. Then the A-conjugacy classes of splittings of U are in a one-to-one correspondence with the elements of H^1(S,A). This can easily be explained using <ref>.

§.§ Split extensions of A by G

Let A −i→ U −j→ G be an extension of (A,θ) by G and A −i→ U −π→ S −κ→ G a refinement of U. We recall from <cit.> that there is a one-to-one correspondence between the transversals of π and the partial maps τ: G×E(U) ⇀ U such that * τ(x,e) is defined ⟺ U(x,e)={u∈U | j(u)=x, uu^{-1}=e}≠∅; * τ(x,e)∈U(x,e) whenever defined; * τ(1,e)=e. Such a map τ was called a transversal of j in <cit.>. More precisely, given a transversal ρ of π, the corresponding transversal τ of j is defined by τ(x,e)=ρ∘π(u), where u is an arbitrary element of U(x,e). The definition does not depend on u, since the sets U(x,e) are exactly the classes of the kernel congruence of π by <cit.>. Conversely, from τ one constructs ρ(s)=τ(κ(s),π^{-1}(ss^{-1})), which is a transversal of π (here and below π^{-1} denotes the inverse of the bijection π|_{E(U)}).

The transversal ρ is a splitting of U if and only if the corresponding τ satisfies τ(x,e)τ(y,f)=τ(xy, τ(x,e) f τ(x,e)^{-1}) for all x,y∈G and e,f∈E(U) (the equality should be understood as follows: if the left-hand side is defined, then the right-hand side is defined and they coincide). Let ρ: S→U be a splitting of U. Suppose that τ(x,e) and τ(y,f) are defined. This means that U(x,e) and U(y,f) are non-empty, so that τ(x,e)=ρ∘π(u) and τ(y,f)=ρ∘π(v) for some u∈U(x,e) and v∈U(y,f). Then by <ref> τ(x,e)τ(y,f)=ρ(π(u))ρ(π(v))=ρ∘π(uv), as ρ is a homomorphism. Take w=τ(x,e)v and note that j(w)=j(τ(x,e)v)=j(τ(x,e))j(v)=xy, ww^{-1}=τ(x,e)vv^{-1}τ(x,e)^{-1}=τ(x,e) f τ(x,e)^{-1}, hence w∈U(xy, τ(x,e) f τ(x,e)^{-1})≠∅. Moreover, τ(xy, τ(x,e) f τ(x,e)^{-1})=ρ∘π(w)=ρ(π(τ(x,e))π(v))=ρ(π(u)π(v))=ρ∘π(uv), proving <ref>. Here we used the fact that π(τ(x,e))=π(u), since both u and τ(x,e) belong to the same U(x,e). Conversely, assume that <ref> holds.
For s,t∈ S one has ρ(s)ρ(t) =τ(κ(s),π(ss))τ(κ(t),π(tt)), ρ(st) =τ(κ(st),π(stt s))=τ(κ(s)κ(t),π(stt s)) by <ref>. In view of <ref>in order to prove that ρ(s)ρ(t)=ρ(st), it remains to show that π(stt s)=τ(κ(s),π(ss))π(tt)τ(κ(s),π(ss)). But τ(κ(s),π(ss))=ρ(s), and hence the right-hand side of <ref> is mapped to stt s by π. So, <ref> follows from the fact that π separates idempotents. We say that an extension Ai→Uj→G of (A,) by G splits, if there exists a transversal τ of j satisfying <ref>. In this case τ is called a splitting of U. Observe that Ai→Uj→G splits if and only if there is a refinement Ai→Uπ→Sκ→G, such that Ai→Uπ→S splits. This is explained by <cit.> and <ref>. In particular, any split extension is automatically admissible. If an extension U of (A,) by G splits, then any equivalent extension splits. For if U' is equivalent to U, μ:U→ U' is the corresponding isomorphism and τ is a splitting of U, then τ'(x,e')=μ∘τ(x,μ(e')), where x∈ G and e'∈ E(U'), is a splitting of U'. An extension of (A,) by G splits if and only if it is equivalent to A*_ G. By <ref> any extension U of (A,) by G is equivalent to A*_(,w)G for some twisting w related to (A,). In view of <ref> U splits if and only if A*_(,w)G does. Consider the “standard” refinement Ai→A*_(,w)Gπ→Sκ→G, where S=E(A)*_ G. Let Λ=(α,λ,f) be the induced S-module structure on A together with the corresponding order-preserving twisting related to it (see <ref>). By <cit.> A*_(,w)G and A*_Λ S are equivalent as extensions of A by S. Therefore, A*_Λ S also splits, so f is a coboundary thanks to <ref>. It follows from <ref> that w is a partial coboundary, and thus (,w) is equivalent towith the trivial twisting. Then the extension A*_(,w)G is equivalent to A*_ G by <ref>. Let Ai→Uj→G be a split extension of (A,) by G. Then there is a one-to-one correspondence between the splittings of U and the elements of Z^1(G,A). Taking into account <ref> we may assume that U=A*_ G. Moreover, we shall choose the standard refinement Ai→Uπ→Sκ→G of U, where S=E(A)*_ G. By <ref> and <cit.> the splittings of Ai→Uj→G are in a one-to-one correspondence with the splittings of Ai→Uπ→S, which are in a one-to-one correspondence with the elements of Z^1(S^1,A^1) by <ref>. Observe that Z^1(S^1,A^1)=Z^1_≤(S^1,A^1) thanks to <ref>. Furthermore, the S-module structure on A coming from Ai→Uπ→S is exactly the one defined by <ref>thanks to <cit.>. Therefore, Z^1_≤(S^1,A^1)≅ Z^1(G,A) by <ref>. Two splittings k and k' of Ai→Uπ→S are C^0_≤-equivalent if and only if there exists an order-preserving η:E(U)→ A with η(e)∈ A_i(e), such that the corresponding splittings τ and τ' of Ai→Uj→G satisfy τ'(x,e)=i(η(e))τ(x,e)i(η(τ(x,e) eτ(x,e))). Suppose that k and k' are C^0_≤-equivalent, that is, there is h∈ C^0_≤(S^1,A^1), such that k'(s)=i(h(ss))k(s)i(h(s s)). Set η=h∘π|_E(U) and observe that η:E(U)→ A is order-preserving, as h is, and η(e)∈ A_α∘π(e)=A_i(e). If τ'(x,e) is defined, then taking u∈ U(x,e)∅, one has by <ref> τ'(x,e)=k'∘π(u) =i(h(π(uu)))k(π(u))i(h(π(u u))). But uu=e, so i(h(π(uu)))=i(η(e)). Moreover, π(u u)=π(u eu)=π(u)π(e)π(u)=π(τ(x,e))π(e)π(τ(x,e)), whence u u=τ(x,e) eτ(x,e), proving <ref>. Conversely, if <ref> holds, then setting h=η∘π|_E(S) and using <ref> we obtain k'(s)=τ'(κ(s),π(ss)) =i(h(ss))k(s)i(η(k(s)π(ss)k(s))))=i(h(ss))k(s)i(h(s s)), as k|_E(S)=π|_E(S). Clearly h∈ C^0_≤(S^1,A^1), so k' is C^0_≤-equivalent to k. Two splittings τ and τ' of a split extension Ai→Uj→G are said to be equivalent, if they satisfy <ref> for some order-preserving η:E(U)→ A with η(e)∈ A_i(e). 
Under the conditions of <ref> the equivalence classes of splittings of U are in a one-to-one correspondence with the elements of H^1(G,A). As in the proof of <ref> choose U to be A*_ G and consider the standard refinement Ai→Uπ→Sκ→G of U. By <ref> there is a one-to-one correspondence between the equivalence classes of splittings of Ai→Uj→G and the C^0_≤-equivalence classes of splittings of Ai→Uπ→S. It remains to apply <ref>. § ACKNOWLEDGEMENTS We thank the referee for useful comments. acm
http://arxiv.org/abs/1705.09654v3
{ "authors": [ "Mikhailo Dokuchaev", "Mykola Khrypchenko" ], "categories": [ "math.GR", "20M30 (Primary), 20M18, 16S35, 16W22 (Secondary)" ], "primary_category": "math.GR", "published": "20170526173924", "title": "Partial cohomology of groups and extensions of semilattices of abelian groups" }
Laboratoire des Sciences et Techniques de l'Information, de la Communication et de la Connaissance, UMR-6285 CNRS, Brest Cedex 3, FRANCE The measurement device independent (MDI) Quantum Key Distribution (QKD) is a practically implementable method for transmitting secret keys between respective partners performing quantum communication. SARG04 (Scarani-Acín-Ribordy-Gisin 2004) is a protocol tailored to struggle against photon number splitting (PNS) attacks by eavesdroppers, and its MDI-QKD version is reviewed and optimized from the point of view of secret key bitrate versus communication distance. We consider the effect of several important factors, such as the error correction function, the dark count parameter and the quantum efficiency, in order to achieve the largest key bitrate over the longest communication distance. 03.67.Dd, 03.67.Ac, 03.67.Hk Optimization of Measurement Device Independent Scarani-Acín-Ribordy-Gisin protocol C. Tannous[Tel.: (33) 2.98.01.62.28, E-mail: [email protected]] and J. Langlois ========================================================================================= While classical cryptography uses two types of keys to encode and decode messages (secret or symmetric, and public or asymmetric keys), quantum cryptography uses QKD for transmitting secret keys between partners, allowing them to encrypt and decrypt their messages. QKD's principal characteristic is that it is practically implementable and has already been deployed commercially by several quantum communication providers such as SeQureNet in France, ID Quantique in Switzerland, MagiQ Technologies in the USA and QuintessenceLabs in Australia. The second main feature of QKD is that it allows the communicating parties to detect eavesdroppers online in a straightforward fashion. In principle, QKD is unconditionally secure; nevertheless, its practical implementation has many loopholes and consequently has been attacked in many different ways exploiting some intermediate operation or another during secret key processing, such as time-shift <cit.>, phase-remapping <cit.>, detector blinding <cit.>, detector dead-time <cit.>, device calibration <cit.>, laser damage <cit.>... This work is about the optimization of the MDI-QKD version of the SARG04 <cit.> protocol, designed to fend off photon number splitting (PNS) attacks, by considering important factors such as the error correction function type, the detector dark count parameter and the quantum efficiency. It is organized as follows: after reviewing the original four-state SARG04 protocol, we discuss its MDI version and describe the effects of the various parameters on communication distance and secret key bitrate. The SARG04 protocol has been developed to combat PNS attacks targeted at intercepting photons present in the weak coherent pulses (WCP) that are used for communication. This stems from the fact that it is not possible at present to commercially exploit single photons in a pulse; however, progress in developing large scale methods targeted at using single photons in a pulse is advancing steadily. SARG04 being very similar to the BB84 <cit.> protocol, the simplest example of secret key sharing between sender and receiver (Alice and Bob), we review the BB84 case first. In the BB84 protocol framework, Alice and Bob use two channels to communicate: one quantum and private, to send polarized single photons, and another one classical and public (telephone or Internet), to send ordinary messages <cit.>.
Alice selects two bases in the 2D Hilbert space, each consisting of two orthogonal states: the ⊕ basis with (0,π/2) linearly polarized photons, and the ⊗ basis with (π/4, -π/4) linearly polarized photons. Four photon polarization states: |→⟩, |↑⟩, |↗⟩, |↘⟩ are used to transmit quantum data, with |↗⟩=1/√(2)(|→⟩+ |↑⟩) and |↘⟩=1/√(2)(|→⟩- |↑⟩). A message transmitted by Alice to Bob over the quantum channel is a stream of symbols selected randomly among the four above, and Alice and Bob randomly choose one of the two bases ⊕ or ⊗ to perform the photon polarization measurement. Alice and Bob announce their respective choices of bases over the public channel without revealing the measurement results. The raw key is obtained by a process called "sifting", consisting of retaining only the results obtained when the bases used for measurement are the same. After key sifting, another process called key distillation <cit.> must be performed. This process entails three steps <cit.>: error correction, privacy amplification and authentication, in order to counter any information leakage from photon interception, to detect eavesdropping (with the no-cloning theorem <cit.>) and to prevent exploitation of the announcements over the public channel. The basic four-state SARG04 protocol is similar to BB84 but adds a number of steps to improve it and protect it against PNS attacks. The steps entail introducing random rotations and filtering of the quantum states. Before we describe it, we introduce some states and operators <cit.> using the Pauli matrices σ_X, σ_Y, σ_Z: * R=cos(π/4)I-isin(π/4)σ_Y is a π/2 rotation operator about the Y axis, * T_0=I is the (2×2) identity operator, * T_1=cos(π/4)I-isin(π/4)(σ_Z+σ_X)/√(2) is a π/2 rotation operator around the (Z+X) axis, * T_2=cos(π/4)I-isin(π/4)(σ_Z-σ_X)/√(2) is a π/2 rotation operator around the (Z-X) axis. Alice prepares many pairs of qubits and sends each one of them to Bob after performing a random rotation over different axes with T_lR^k, where l∈{0,1,2} and k∈{0,1,2,3}. Upon receiving the qubits, Bob first applies: * A random reverse multi-axis rotation R^-k'T_l'^-1. * Afterwards, he performs a local filtering operation defined by F=sin(π/8)|0_x⟩⟨0_x|+cos(π/8)|1_x⟩⟨1_x|, where {|0_x⟩,|1_x⟩} are X-eigenstate qubits; they are also eigenvectors of σ_X with eigenvalues +1 and -1, respectively. Local filtering enhances the degree of entanglement, and the π/8 angle helps retrieve <cit.> one of the maximally entangled EPR Bell <cit.> states, i.e., polarization entangled photon pair states given by: |ψ^±⟩=1/√(2)(|→↑⟩±|↑→⟩), |ϕ^±⟩=1/√(2)(|→→⟩±|↑↑⟩). They form a complete orthonormal basis in the 4D Hilbert space for all polarization states of a two-photon system, and the advantage of local filtering is to make Alice and Bob share pairs of a Bell state, making the shared bits unconditionally secure <cit.>. * Afterwards, Alice and Bob compare their indices k,l and k',l' via public communication, and keep the qubit pairs with k=k' and l=l' when Bob's filtering operation is successful. * They choose some states randomly as test bits, measure them in the Z basis, and compare their results publicly to estimate the bit error rate and the information acquired by the eavesdropper.
* Finally, they utilize the corresponding Calderbank-Shor-Steane (CSS) code <cit.> to correct bit and phase errors and perform a final measurement in the Z basis on their qubits to obtain the secret key. Following Lo et al. <cit.>, Mizutani et al. <cit.> modified the original SARG04 protocol by including an intermediate experimental setup run by Charlie, at mid-distance between Alice and Bob, consisting of Bell correlation measurements. The setup contains a half beam-splitter and two polarization beam-splitters, to simulate photonic Hadamard and CNOT gates in order to produce Bell states, as well as photodiode detectors. This additional step helps discard non-perfectly anti-correlated photons and thus reduces transmission error rates. In addition, Alice and Bob not only choose the photon polarization randomly, they also use WCP amplitude modulation to generate decoy states in order to confuse the eavesdropper. The protocol runs as follows: * Charlie performs a Bell measurement on the incoming photon pulses and announces to Alice and Bob over the public channel whether his measurement outcome is successful or not. When the outcome is successful, he announces the successful events as being of Type1 or Type2. Type1 is coincidence detection events of AT and BR or BT and AR. Type2 is coincidence detection events of AT and AR or BT and BR, where AT, BT stand for detecting transmitted (T) photon events from Alice (A) or Bob (B) linearly polarized at 45°, whereas AR, BR are for detecting reflected (R) photon events at -45°. * Alice and Bob broadcast k and k' over the public channel. If the measurement outcome is successful with Type1 and k=k'=0,…,3, they keep their initial bit values, and Alice flips her bit. If the measurement outcome is successful with Type2 and k=k'=0, 2, they keep their initial bit values. In all the other cases, they discard their bit values. * After repeating the above operations several times, Alice and Bob perform error correction, privacy amplification and authentication as described previously. In the ideal case (no transmission errors, no eavesdropping), Alice and Bob should discard results pertaining to measurements done in different bases (or when Bob failed to detect any photon). In QKD, Alice and Bob should be able to determine their shared secret key efficiently as a function of the distance L separating them. Since the secure key is determined after sifting and distillation, the secure key rate is expressed in bps (bits per signal), given that Alice sends symbols to Bob to sift and distill, with the remaining bits making up the secret key. For a Type i event, we define e_i,p^(m,n) as the phase error probability that Alice and Bob emit m and n photons respectively, and Charlie announces a successful outcome, with Q_i^(m,n) the joint probability.
Consequently, the asymptotic key rate for Type i is given as a sum over partial privacy amplification terms of the form Q_i^(m,n)[1-h_2(e^(m,n)_i,p)] and one error correction term Q_i^tot f(e_i^tot)h_2(e_i^tot) related to the total errors, as <cit.>: K_i(L)=Q_i^(1,1)[1-h_2(e^(1,1)_i,p)]+Q_i^(1,2)[1-h_2(e^(1,2)_i,p)] +Q_i^(2,1)[1-h_2(e^(2,1)_i,p)]-Q_i^tot f(e_i^tot)h_2(e_i^tot). The total probabilities are Q_i^tot=∑_m,nQ_i^(m,n) and the total error rates are given by e_i^tot=∑_m,nQ_i^(m,n)e^(m,n)_i,b/Q_i^tot, where e^(m,n)_i,b is the Type i bit error probability and h_2 is the binary Shannon entropy <cit.> given by h_2(x)=-xlog_2(x)-(1-x)log_2(1-x). Moreover, the above asymptotic key rate is obtained in the limit of an infinite number of decoy states <cit.>. Phase error probabilities are determined from bit error probabilities as depicted in Fig. <ref> for Type 1 and Type 2, depending on the numbers of photons (m,n) emitted. Since Charlie is in the middle between Alice and Bob, the channel transmittance to Charlie from Alice is the same as that from Bob. Considering that L is the distance between Alice and Bob, the channel transmittance η_T is obtained by replacing L by L/2, resulting in: η_T=10^-α L/20. For the standard telecom wavelength <cit.> λ=1.55 μm, the loss coefficient with distance is α=0.21 dB/km. The quantum efficiency and the dark count rate of the detectors are taken as η=0.045 and d=8.5× 10^-7, respectively, as in the GYS <cit.> case. We compare below the effect of a varying error correction function with respect to one fixed at a constant value. The error correction function is given by Enzer et al. <cit.> as: f_e(x)=1.1581+57.200 x^3. In Figs. <ref>, <ref> the secret key rates for Type 1 and Type 2 events are displayed versus distance when the f_e function is considered as variable or fixed at a value of 1.33. Improving the quality of detection means that dark counting must be substantially reduced in order to avoid false "clicks" (irrelevant event detection) of the detectors. In Figs. <ref>, <ref> the secret key rates for Type 1 and Type 2 events are displayed versus distance for different values of the dark count rate, with the error correction function f_e freely varying. The quantum yield is a parameter that plays an important role in quantum communications. In Figs. <ref>, <ref> the secret key rates for Type 1 and Type 2 events are displayed versus distance for different values of the quantum yield η, with the error correction function f_e freely varying. The value of η has been intentionally exaggerated in order to explore the range of communication distances covered by its variation. It is interesting to note that the quantum yield acts on communication distance and key bitrate simultaneously, whereas dark count rate and error correction function changes affect solely the communication distance. The communication distances and secret key bitrates obtained in this work can be improved when we vary the error correction function, dark count rate and quantum efficiency. Insight into the SARG04 protocol acquired by optimization leads us to conclude that the most sensitive way to increase the communication distance substantially is to decrease the dark count rate. The least sensitive parameter is the error correction function type, and in spite of exaggerating the values of the quantum efficiency in order to probe the largest possible range of communication distances, the dark count rate parameter remains the most promising. Consequently, future research efforts ought to be directed towards reducing it considerably.
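To make the role of these parameters concrete, the following short Python sketch (our illustration, not the authors' code) evaluates the mid-point transmittance η_T of Eq. (<ref>) together with a deliberately simplified, single-photon key-rate surrogate built from h_2 and the Enzer et al. error correction function; the full SARG04 MDI rate of Eq. (<ref>) additionally requires the joint probabilities Q_i^(m,n), which are not modelled here.

```python
import math

alpha = 0.21        # fiber loss [dB/km]
eta_det = 0.045     # detector quantum efficiency (GYS)
d_dark = 8.5e-7     # dark count probability per gate (GYS)

def h2(x):
    """Binary Shannon entropy."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def f_e(x):
    """Error-correction inefficiency of Enzer et al."""
    return 1.1581 + 57.200 * x**3

def key_rate(L_km):
    """Crude single-photon key-rate surrogate at Alice-Bob distance L."""
    eta_T = 10 ** (-alpha * L_km / 20)   # transmittance to the mid-point relay
    eta = eta_T * eta_det                # overall detection probability per side
    gain = eta**2 + 2 * d_dark           # both sides must click (leading order)
    e_tot = d_dark / gain                # errors dominated by dark counts here
    return max(gain * (1 - h2(e_tot) - f_e(e_tot) * h2(e_tot)), 0.0)

for L in (0, 50, 100, 150):
    print(f"L = {L:3d} km : R ~ {key_rate(L):.3e} bits/signal")
```

Even in this crude model it is the dark count floor, rather than the choice of f_e, that cuts off the key rate at long distance, in agreement with the sensitivity ordering found above.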
Such a reduction relies on developing special algorithms that allow discrimination between the different events occurring around the photodetectors, or on developing materials with selective and specially engineered higher thresholds preventing false "clicks" triggered by "irrelevant" events. attack4 B. Qi, C.-H. F. Fung, H.-K. Lo and X. Ma, Quantum Inf. Comput. 7, 73 (2007). attack5 Y. Zhao, C.-H. F. Fung, B. Qi, C. Chen and H.-K. Lo, Phys. Rev. A 78, 042333 (2008). attack6 F. Xu, B. Qi and H.-K. Lo, New J. Phys. 12, 113026 (2010). attack7 L. Lydersen, C. Wiechers, C. Wittmann, D. Elser, J. Skaar and V. Makarov, Hacking commercial quantum cryptography systems by tailored bright illumination, Nature Photon. 4, 686 (2010). attack8 I. Gerhardt, Q. Liu, A. Lamas-Linares, J. Skaar and C. Kurtsiefer, Full-field implementation of a perfect eavesdropper on a quantum cryptography system, Nature Commun. 2, 349 (2011). attack9 H. Weier, H. Krauss, M. Rau, M. Fürst, S. Nauerth and H. Weinfurter, New J. Phys. 13, 073024 (2011). attack10 N. Jain, C. Wittmann, L. Lydersen, C. Wiechers, D. Elser, C. Marquardt, V. Makarov, and G. Leuchs, Device calibration impacts security of quantum key distribution, Phys. Rev. Lett. 107, 110501 (2011). attack11 A. N. Bugge, S. Sauge, A. M. M. Ghazali, J. Skaar, L. Lydersen and V. Makarov, Laser damage helps the eavesdropper in quantum cryptography, Phys. Rev. Lett. 112, 070503 (2014). Scarani V. Scarani, A. Acín, G. Ribordy and N. Gisin, Phys. Rev. Lett. 92, 057901 (2004); see also: V. Scarani, H. Bechmann-Pasquinucci, N. J. Cerf, M. Dušek, N. Lütkenhaus and M. Peev, The security of practical quantum key distribution, Rev. Mod. Phys. 81, 1304 (2009). Tannous C. Tannous and J. Langlois, Eur. J. Phys. 37, 013001 (2016). Yin H.-L. Yin, Y. Fu, Y.-Q. Mao and Z.-B. Chen, Sci. Rep. 6, 29482; doi: 10.1038/srep29482 (2016). Tamaki K. Tamaki and N. Lütkenhaus, Phys. Rev. A 68, 032316 (2004). EPR A. Einstein, B. Podolsky, and N. Rosen, Phys. Rev. 47, 777 (1935). Kwiat P. G. Kwiat, K. Mattle, H. Weinfurter and A. Zeilinger, Phys. Rev. Lett. 75, 4337 (1995). CSS A. R. Calderbank and P. W. Shor, Phys. Rev. A 54, 1098 (1996). Lo2012 H.-K. Lo, M. Curty and B. Qi, Phys. Rev. Lett. 108, 130503 (2012). Mizutani A. Mizutani, K. Tamaki, R. Ikuta, T. Yamamoto and N. Imoto, Sci. Rep. 4, 5236, doi: 10.1038/srep05236 (2014). GLLP H.-K. Lo, X. Ma and K. Chen, Phys. Rev. Lett. 94, 230504 (2005). Carlson A. B. Carlson and P. B. Crilly, Communication Systems: An Introduction to Signals and Noise in Electrical Communication, 5th Edition, McGraw-Hill, New York (2010). GYS C. Gobby, Z. L. Yuan and A. J. Shields, App. Phys. Lett. 84, 3762 (2004). Enzer D. G. Enzer, P. G. Hadley, R. J. Hughes, C. G. Peterson and P. G. Kwiat, New Journal of Physics 4, 45 (2002).
http://arxiv.org/abs/1705.09817v1
{ "authors": [ "C. Tannous", "J. Langlois" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170527131128", "title": "Optimization of Measurement Device Independent Scarani-Acìn-Ribordy-Gisin protocol" }
Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China University of the Chinese Academy of Sciences Corresponding author: [email protected] Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China School of Optoelectronics, Beijing Institute of Technology, Beijing 100081, China Corresponding author: [email protected] Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China Light field reconstruction from images captured by focal plane sweeping, such as light field moment imaging (LFMI) and light field reconstruction with back projection (LFBP), can achieve a high lateral resolution, comparable to that of a modern camera sensor. This is impossible for conventional lens array based light field capture systems. However, capturing a series of focal plane sweeping images along the optical axis is time consuming and requires fine alignment. Besides, different focal plane based light field reconstruction techniques require images with different characteristics. To solve these problems, we present an efficient approach for fast light field acquisition with precise focal plane sweeping capture by defocus modulation rather than axial movement. Because of the controllable point spread function, we can capture images for light field reconstruction with both LFMI and LFBP. Fast and High Quality Light Field Acquisition using Defocus Modulation Guohai Situ December 30, 2023 ====================================================================== § INTRODUCTION Generally, conventional imaging systems record intensity-only images, while the depth information of the three-dimensional (3D) scene is lost. However, the depth information can be extracted from the light field, which records not only the intensity but also the propagation directions of the light rays <cit.>. Generally, the light field can be captured by either a lens array with a standard camera <cit.> or a camera array <cit.>. From the viewpoint of geometric optics, those methods simultaneously record the two-dimensional (2D) spatial and angular information of the light rays, thus allowing perspective view image generation, refocusing of the scene, and glasses-free 3D display <cit.>. However, lens array based light field capture <cit.> has to make an intrinsic trade-off between the spatial and the angular resolution. This is because when the lenslets are large, each captured elemental image has a high spatial resolution, but the number of lenslets covered by the light rays from the object scene is small, which leads to fewer elemental images, i.e., a low angular resolution. Although there exist some techniques to improve the resolution <cit.>, the trade-off induced by the lens array cannot be overcome. Coded masks inserted into a camera have also been invented to obtain a higher resolution light field; however, they sacrifice light transmission because of the attenuation induced by the masks <cit.>. Recently, it has been reported that the light field can also be obtained from a series of focal plane sweeping images captured with a conventional digital camera <cit.>. These techniques can obtain a higher resolution light field.
In these cases, the light field is calculated from several photographic images captured at different focal planes; the images are not segmented by the sub-lenslets of a lens array, and hence reach angular and spatial resolutions comparable to those of a conventional camera sensor. As these methods do not require any special equipment such as lens arrays or coded masks, they are easy to implement. However, they require a large stack of defocused images to reach an accurate light field reconstruction <cit.>, in which case the capture process is time consuming and requires fine alignment. In this paper, we propose an efficient technique for fast and precise focal plane sweeping capture with a defocus modulation technique. This technique changes the patterns displayed on a spatial light modulator (SLM) to achieve defocus, instead of mechanical translation or focus ring rotation, thus achieving fast capture and avoiding errors induced by mechanical movement. We verify the feasibility of the proposed method with two typical focal plane sweeping based light field reconstruction techniques: light field moment imaging (LFMI) and light field reconstruction with the back projection (LFBP) approach. § FOCAL PLANE SWEEPING BASED LIGHT FIELD ACQUISITION According to the plenoptic function <cit.>, the light field can be parameterized as a five-dimensional function L(x, y, ξ, η, z), where (x, y, z) are the spatial coordinates and (ξ, η) are the angular coordinates. In the focal plane sweeping imaging system, suppose I(x,y,z_m) is the m^th captured image with the focal plane located at z_m, and M is the total number of captured images. The captured images are the convolution between the clear images and the point spread function (PSF) of the system <cit.>. In general, the PSF of a camera can be regarded as a Gaussian distribution function due to the circular shape of the optical elements and apertures. For a point object, the numerically computed captured images with focal plane sweeping are shown in Fig. <ref>(a). By the definition of the PSF, they equal the 2D slices of the 3D PSF of the camera system. Fig. <ref>(b) shows the corresponding epipolar plane images (EPI) across the center horizontal line of the captured images. Focal plane sweeping in the spatial space corresponds to a shearing of the EPI, and the amount of shearing reflects the focal plane sweeping distance. This relationship between the defocused images and the EPIs is the basis of the focal plane sweeping based light field acquisition techniques. In this paper we analyze LFMI and LFBP, which are two typical focal plane sweeping based light field reconstruction techniques. LFMI constructs an approximate light field at a designated plane z_m under the empirical assumption that the angular distribution of the light rays satisfies a Gaussian distribution function of given standard deviation <cit.>. The Gaussian distribution assumption on the light ray directions comes from the Gaussian PSF of the camera system <cit.>. With the light rays' angular moment at each spatial position, the light field can be reconstructed by L(x, y,ξ ,η,z_m)= I(x,y,z_m)exp{-([ξ -s(x,y)]^2+[η -t(x,y)]^2)/σ^2}= I(x,y,z_m) δ[ξ-s(x,y), η-t(x,y)] * G(ξ, η, σ), where [s(x,y), t(x,y)] is the first order angular moment of the light ray at position (x, y, z_m), G(ξ, η, σ) is the Gaussian distribution function, σ equals the numerical aperture (NA) of the camera, and * is the convolution operator. This can be seen more intuitively from Fig. <ref>, and a short numerical sketch of the construction is given below.
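As an illustration, the following minimal one-dimensional Python sketch (our own toy example, not the authors' code; the image row I(x), the moment s(x) and σ are made-up inputs) builds an (x, ξ) EPI slice directly from Eq. (<ref>).

```python
import numpy as np

def lfmi_epi(I, s, xi, sigma):
    """Build an (x, xi) EPI slice from Eq. (1): a Gaussian of width sigma
    in the angular direction, centred at the first-order moment s(x) and
    scaled by the measured intensity I(x).  I and s are 1-D arrays over x."""
    return I[:, None] * np.exp(-(xi[None, :] - s[:, None])**2 / sigma**2)

# toy example: a bright stripe whose mean ray direction tilts linearly with x
x  = np.linspace(-1.0, 1.0, 256)
xi = np.linspace(-0.2, 0.2, 64)          # angular samples
I  = np.exp(-x**2 / 0.1)                 # in-focus intensity row
s  = 0.05 * x                            # first-order angular moment s(x)
L  = lfmi_epi(I, s, xi, sigma=0.045)     # sigma ~ numerical aperture
print(L.shape)                           # (256, 64)
```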
The estimated angular moment is a sparse sampling of the EPI: as the left image of Fig. <ref> shows, s(x) is the angular moment at position x, i.e., the average light ray direction. The final calculated EPI (right image in Fig. <ref>) is the convolution between the angular moment and the Gaussian PSF (center image in Fig. <ref>). It can be seen that the final EPI is mainly determined by the angular moment; therefore, its accuracy affects the reconstructed light field most strongly. In LFMI, it has been proved that the light ray transport along the optical axis satisfies a partial differential equation (PDE), and the angular moment is acquired by solving this PDE. It is obvious that the amount of light ray transport depends strongly on both the depth interval of the images and the bandwidth of the object. Therefore, the depth interval between two adjacent defocused images should be chosen carefully according to the object's characteristics <cit.>. In general, the PSF of a conventional camera system is fixed, and the light transport can only be controlled through the depth interval of the captured images; this makes it difficult to apply LFMI to a specific object. Usually, at least two defocused images suffice to determine the light transport, but with a larger stack of images we can estimate higher order angular moments, and thus calculate a more accurate light field <cit.>. With the above analysis, we can improve LFMI in two respects: one is capturing more focal plane sweeping images, and the other is designing a focal plane sweeping imaging system with a controllable PSF. In LFBP, the light field with the principal plane located at z = 0 is calculated by <cit.>: L(x,y,ξ,η, z_0)=∑_m=1^M I( x+z_mξ/α, y+z_mη/α, z_m ), where α is the magnification between the camera sensor plane and the focal plane. The principle is represented more intuitively by Fig. <ref>. As previously described in Fig. <ref>, focal plane sweeping in the spatial space induces shearing in the light field space. For a given spatial position and a specific light ray direction, the spatial positions that the light ray goes through in each defocused image are determined, as shown by the horizontal shift of the red points in each image of Fig. <ref>. The first image represents the EPI corresponding to a focal plane at z_0. The red point in the first image represents the light field value L(x_0, ξ_0); the corresponding position at the other focal planes is x_m=ξ_0 z_m, as the dashed white lines show. Therefore, the radiance of the specific light ray can be obtained by averaging the corresponding radiance from all of the defocused images. For a real scene, the radiance at each point of the captured images is the accumulation of all light rays reaching it from different directions; this induces defocus noise in LFBP, as the green lines and yellow points in Fig. <ref> show. The green lines represent a point at the same depth as that represented by the yellow lines, but with a different lateral position. The intensity at x_m is the integral along the white dashed lines. It can be seen that the yellow points from the green lines also contribute to the intensities; when we reconstruct the light field at a specific point, much noise from all the other points is induced. Fortunately, the red point changes position under a linear transformation, and the noise from all the other points is different at different defocus lengths.
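The back projection of Eq. (<ref>) can likewise be sketched in one dimension; the following toy Python example (our illustration, using a synthetic Gaussian focal stack) resamples each defocused image along the sheared coordinates and averages over the stack.

```python
import numpy as np

def lfbp(stack, zs, x, xi, alpha=1.0):
    """1-D sketch of Eq. (2): average each defocused image I(., z_m),
    resampled at the sheared positions x + z_m*xi/alpha, over the stack.
    stack: (M, Nx) focal-sweep images; zs: (M,) focal-plane positions."""
    L = np.zeros((x.size, xi.size))
    for I_m, z_m in zip(stack, zs):
        for k, a in enumerate(xi):
            L[:, k] += np.interp(x + z_m * a / alpha, x, I_m)
    return L / len(zs)

# toy stack: a Gaussian spot blurring away from focus at z = 0
x  = np.linspace(-1.0, 1.0, 256)
zs = np.linspace(-0.5, 0.5, 11)
stack = np.array([np.exp(-x**2 / (0.02 + 0.1 * z**2)) for z in zs])
L = lfbp(stack, zs, x, xi=np.linspace(-0.2, 0.2, 33))
print(L.shape)   # (256, 33)
```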
By summing all the images under this linear transformation, the actual light field acquires the largest weight. It is obvious that with a large camera NA, the defocus noise from all the other points can be reduced, because the summing weight of the noise is reduced. Besides, it has been proved that the depth resolution of the reconstructed light field depends on the depth interval of the captured images, i.e., more defocused images achieve a better depth resolution <cit.>. From the foregoing, we can see that the PSF of the camera system used for capturing the focal sweeping images is critical in the light field reconstruction. In LFMI, it affects the accuracy of the calculated angular moment as well as the light field. In LFBP, it is a critical factor that affects the defocus noise in the reconstructed light field. Furthermore, in both techniques, more focal plane sweeping images achieve a better light field reconstruction: in LFMI, higher order angular moments can be obtained from more images, and in LFBP, more images achieve a higher axial resolution. However, capturing more focal plane images is time consuming and induces alignment and magnification problems <cit.>. Therefore, controlling the PSF of the focal plane sweeping imaging system is of great importance. Actually, the PSF of an imaging system can be manipulated for many applications; this is called PSF engineering in many other research fields <cit.>. In this paper we insert a PSF modulation component into a conventional microscopic imaging system. On one hand, this achieves faster and more accurate focal plane sweeping capture. On the other hand, the PSF can be controlled more freely for specific requirements. In the following section, we describe how we manipulate the PSF of the imaging system to achieve focal plane sweeping image capture without translational movement of the camera or the object. § FOCAL PLANE SWEEPING WITH DEFOCUS MODULATION The setup scheme of our proposed system is shown in Fig. <ref>. The components within the dashed rectangle form a commercial microscope (Nikon Ni-U). A mirror (M) is used to export the light from the microscope. F is a light filter with a bandwidth of 3 nm at the wavelength of 532 nm. An aperture A_1 is located at the imaging plane of the microscope and is used for adjusting the image size. The other aperture A_2 is used for selecting the first diffraction order of the SLM. The components within the solid rectangle are used for the PSF modulation. Lenses L_1 and L_2 form a 4f system. An SLM (Holoeye, LETO) is located at the Fourier spectrum plane of the 4f system and performs the PSF modulation. The SLM is a phase-only modulator, which maps phase shifts in the range [0, 2π] to 8-bit gray levels. The CCD (PointGrey, GS3-U3-23S6M-C) plane is conjugate with the image plane of the microscope. In the following paragraphs we explain how we control the patterns on the SLM to achieve PSF modulation, and we analyze its performance. §.§ Principle of the PSF modulation In our system, the SLM acts as a Fresnel lens with a desired focal length. Modifying the focal length encoded on the SLM produces a focal plane sweep, making the captured images equivalent to images captured at different depths. Suppose a corresponding axial focal plane shift of z_i in the imaging plane is required; the modulation focal length of the SLM should then be <cit.>: f_SLM=-f_r^2/z_i, where f_r is the focal length of lens L_1. The axial shift at the sample stage is z_o = z_i/β^2, where β is the magnification of the objective.
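A minimal Python sketch of this defocus control (our illustration; the numerical values are placeholders taken from the parameters quoted in this paper, not a calibrated routine) computes the SLM focal length required for a desired sample-stage defocus, together with the corresponding wrapped quadratic phase, whose explicit form is given in the next equation.

```python
import numpy as np

def slm_defocus(z_o_mm, beta=20.0, f_r_mm=200.0, lam_nm=532.0,
                pitch_um=6.6, n=(1080, 1920)):
    """Return the SLM focal length (Eq. 3) and the wrapped quadratic
    phase pattern for a desired sample-stage defocus z_o.
    All numbers here are placeholders matching the text, not a calibration."""
    z_i = z_o_mm * beta**2                     # image-side shift, z_i = beta^2 z_o
    f_slm = -f_r_mm**2 / z_i                   # Fresnel-lens focal length [mm]
    p = pitch_um * 1e-3                        # pixel pitch [mm]
    y, x = np.indices(n)
    r2 = ((x - n[1] / 2) * p)**2 + ((y - n[0] / 2) * p)**2
    lam = lam_nm * 1e-6                        # wavelength [mm]
    phase = np.mod(np.pi * r2 / (lam * f_slm), 2 * np.pi)  # wrapped to [0, 2pi)
    return f_slm, phase

f_slm, phase = slm_defocus(z_o_mm=0.1)         # 100 um defocus at the sample
print(f"f_SLM = {f_slm:.1f} mm")
```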
The required phase pattern displayed on the SLM can thus be written as <cit.>: φ(x,y)=π/λ f_SLM(x^2+y^2) = -π z_i/λ f_r^2(x^2+y^2), where λ is the light wavelength and (x, y) are the spatial coordinates. §.§ Defocus performance of the proposed system It should be noted that the SLM is pixellated and the phase represented by the SLM is discrete. Therefore, the corresponding depth range and depth interval that can be modulated by our system are limited. Here we analyze the two limitations and give the two values according to the specifications of our system. Because of the pixellated SLM, the phase step between adjacent pixels that can be represented is limited by |Δφ| < π, the step being evaluated over one pixel pitch p of the SLM. This results in a limited corresponding depth range that can be represented by the proposed system. Substituting Eq. (<ref>) into Eq. (<ref>), we obtain the maximum depth shift that can be represented according to the system specifications: |z_max|=λ f_r^2/2p r_l, where r_l is the radius of the light entering the SLM. Generally, we let r_l≤0.5min(x_max,y_max), which makes sure that the light is within the effective area of the SLM; (x_max,y_max) are the length and width of the SLM. The center of the SLM coincides with the optical axis, so that the lateral position of the images on the CCD remains unchanged. Since the gray level represented by the SLM is 8-bit, corresponding to a discrete phase value, the minimal phase change on the SLM is φ_min = 2π/256. Supposing the corresponding minimal depth change is Δz_min, from Eq. (<ref>) we get |Δz_min| =λ f_r^2/128 r_l^2. In our experimental setup, we used a 20× objective. The other parameters are p=6.6 μm, λ = 532 nm, r_l = 0.5 x_max≃3.3 mm and f_r=200 mm. From Eq. (<ref>) and Eq. (<ref>), the maximum defocus depth is 488.5 mm and the minimum defocus depth shift is 0.0152 mm. In the experiment, according to the light field reconstruction technique, we can control the PSF by choosing the proper patterns to be displayed on the SLM, but we should make sure the phase patterns on the SLM satisfy the two limits. In addition to the above limits, it is worth mentioning that the bandwidth of the light filter has a great influence on the image quality, due to the single-wavelength operation of the SLM. The patterns on the SLM require an additional grating phase to separate the modulated and unmodulated light, but the grating phase would lead to distinct dispersion. The bandwidth of the light filter should therefore be narrow enough, which also induces light attenuation. Besides, the SLM is not located at the exact Fourier plane of lens L_1; rather, it is located at the imaging plane of the collector lens. We can see distinct images of the dust on the collector lens as well as of the edge of the condenser aperture diaphragm. Only in this plane does the magnification of the recorded images remain unchanged when we change the focal length of the patterns displayed on the SLM. Furthermore, in order to avoid influence from the previous patterns, we should control the SLM and CCD sequentially to capture the images at each focal plane. § EXPERIMENTAL RESULTS AND DISCUSSION We verify the feasibility of light field reconstruction with the proposed imaging system in the following sections. §.§ Verification of PSF modulation With the proposed system described in the previous section, we have captured the PSF images at several depths, as shown in Fig. <ref>(b). A pinhole with a diameter of 10 μm was used as a point object.
Images captured with the conventional translation system were taken as the ground truth, as shown in Fig. <ref>(a). We can observe that the PSF of the proposed system coincides with that of the conventional system. This can also be verified by the size of the PSFs. The objective is 20× and the pixel pitch of the captured images is 5.86 μm. With these two parameters, all the sizes of the PSFs can be calculated and verified. The expected diameters of the PSF at the five axial positions should be [200, 300, 400, 500, 600] μm. The measured diameters of the PSFs captured with the conventional and the proposed systems are [302.1, 343.9, 444.5, 551.4, 727.6] μm and [257.7, 317.1, 391.4, 456.0, 629.8] μm, respectively. The results show that the PSF of the proposed system is closer to the Gaussian distribution function than that of the conventional one. Besides, the shapes of the PSF images captured by the proposed system are closer to circles than those of the conventional one. Fig. <ref> shows some images captured with the conventional and proposed systems. Fig. <ref>(a) shows the images captured by the conventional translation movement system at z=11 μm and z=600 μm, respectively. Fig. <ref>(b) shows the images captured at the same axial positions with the proposed system. We can observe that focal plane sweeping capture can indeed be achieved with our proposed system. However, even though we have calibrated the system very carefully, a lateral shift can be clearly observed from the aperture shift in Fig. <ref>(a), as the yellow lines show. We have also compared the capture time of the conventional and the proposed systems. 61 images were captured with each of the two systems, taking about 30 minutes and 25 seconds, respectively. It should be mentioned that, in the capture process, all the translation movements and the pattern changes on the SLM were operated manually. A reduced time requirement is expected with computer control of the systems, but the problems induced by movement in the conventional system would still exist, and the translation would still be more time consuming than PSF modulation. §.§ Light field reconstruction from focal plane sweeping captured images with PSF modulation We have also verified the light field reconstruction with the two systems. Two objects were used to perform the light field reconstruction from the captured images. We captured a stack of intensity images of each sample with a corresponding axial spacing of Δ z = 1 μm. 60 defocused images were captured for each object, and 11 images were used for the light field reconstruction. Figs. <ref>(a)(c) and (b)(d) are the reconstructed parallax view images obtained with LFMI and LFBP, respectively, where Figs. <ref>(a)(b) are images of a mosquito's mouth and (c)(d) are images of a mosquito larva. More parallax view images can be observed in the videos. Both objects were reconstructed with clear parallax by the two light field reconstruction techniques. Due to the convenience of capturing multiple focal plane sweeping images with the proposed system, we also show a comparison of LFMI and LFBP using different numbers of images. The results are shown in Fig. <ref>. Figs. <ref>(a)(b) show the parallax view images with LFMI using 2 and 7 captured images, respectively. Figs. <ref>(c)(d) show the reconstructed images using LFBP. More detail can be observed in the videos of visualization 5, visualization 6, visualization 7, and visualization 8. In LFMI, the light field moment becomes more accurate as the number of images used increases.
However, the approximate Gaussian function makes it difficult to recover details of the light field. Therefore, the light field reconstructed using more images is not distinctly improved compared with that using 2 images, as shown in Figs. <ref>(a)(b). The LFBP reconstruction can be considered as an averaging filter, which increases the weight of the light in the reconstruction direction. This filter is simple and may make the reconstructed images not distinct enough, because of the crosstalk from the other points. Therefore, the quality of the reconstruction depends strongly on the number of images used, as shown in Figs. <ref>(c)(d). These results are more persuasive because, in the capture process, there are no other factors that affect the quality of the captured images. § CONCLUSION We have proposed a focal plane sweeping capture system with defocus modulation using an SLM. With this system, the time cost for capturing a large number of focal plane sweeping images is efficiently reduced, and the accuracy of the captured images is increased because there is no mechanical movement during the capture process. The captured images were used to perform light field reconstruction with two techniques, i.e., LFMI and LFBP. Because of the controllability of the system PSF, it is easier to capture images that meet the specific requirements of either LFMI or LFBP. It should be mentioned that the PSF of the imaging system can also be set to distribution functions other than Gaussian; in this case, the Gaussian distribution function in the LFMI equation should be modified to the corresponding PSF function. The imaging system in our paper is microscopic, but the approach can also be extended to conventional digital camera systems; in that case, the SLM can be replaced by an electrically tunable lens for color imaging. § ACKNOWLEDGMENTS This work was supported by the National Natural Science Foundation of China (NSFC) (61327902, 61377005), the Chinese Academy of Sciences (CAS) (QYZDB-SSW-JSC002), and the Natural Science Foundation of Shanghai (NSFS) (17ZR1433800). levoy2006light M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, Light field microscopy, ACM Trans. Graph. 25, 924–934 (2006). ng2005light R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, Light field photography with a hand-held plenoptic camera, Computer Science Technical Report CSTR 2, 1–11 (2005). wilburn2005high B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, High performance imaging using large camera arrays, ACM Trans. Graph. 24, 765–776 (2005). lin2015camera X. Lin, J. Wu, G. Zheng, and Q. Dai, Camera array based light field microscopy, Biomed. Opt. Express 6, 3179–3189 (2015). hong2011three J. Hong, Y. Kim, H.-J. Choi, J. Hahn, J.-H. Park, H. Kim, S.-W. Min, N. Chen, and B. Lee, Three-dimensional display technologies of recent interest: principles, status, and issues [invited], Appl. Opt. 50, H87–H115 (2011). park2014recent S.-g. Park, J. Yeom, Y. Jeong, N. Chen, J.-Y. Hong, and B. Lee, Recent issues on integral imaging and its applications, J. Inf. Disp. 15, 37–46 (2014). levoy2009recording M. Levoy, Z. Zhang, and I. McDowall, Recording and controlling the 4d light field in a microscope using microlens arrays, J. Microsc. 235, 144–162 (2009). prevedel2014simultaneous R. Prevedel, Y.-G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S.
Boyden et al., Simultaneous whole-animal 3d imaging of neuronal activity using light-field microscopy, Nat. Meth. 11, 727–730 (2014). chen2011resolution N. Chen, J. Yeom, J.-H. Jung, J.-H. Park, and B. Lee, Resolution comparison between integral-imaging-based hologram synthesis methods using rectangular and hexagonal lens arrays, Opt. Express 19, 26917–26927 (2011). Chen_2010_OE N. Chen, J.-H. Park, and N. Kim, Parameter analysis of integral Fourier hologram and its resolution enhancement, Opt. Express 18, 2152–2167 (2010). Veeraraghavan_2007_ACM A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, Dappled photography: Mask enhanced cameras for heterodyned light fields and coded aperture refocusing, ACM Trans. Graph. 26 (2007). Marwah_2013_ACM K. Marwah, G. Wetzstein, Y. Bando, and R. Raskar, Compressive light field photography using overcomplete dictionaries and optimized projections, ACM Trans. Graph. 32, 46:1–46:12 (2013). orth2013light A. Orth and K. B. Crozier, Light field moment imaging, Opt. Lett 38, 2666–2668 (2013). park2014light J.-H. Park, S.-K. Lee, N.-Y. Jo, H.-J. Kim, Y.-S. Kim, and H.-G. Lim, Light ray field capture using focal plane sweeping and its optical reconstruction using 3d displays, Opt. Express 22, 25444–25454 (2014). Mousnier_2015_ARXIV A. Mousnier, E. Vural, and C. Guillemot, Partial light field tomographic reconstruction from a fixed-camera focal stack, ArXiv e-prints 1503.01903 (2015). liu2015light J. Liu, T. Xu, W. Yue, J. Sun, and G. Situ, Light-field moment microscopy with noise reduction, Opt. Express 23, 29154–29162 (2015). Chen_2017_AO N. Chen, Z. Ren, D. Li, E. Y. Lam, and G. Situ, Analysis of the noise in back-projection light field acquisition and its optimization, Appl. Opt. 56, F20–F26 (2017). Yin_2016_AO X. Yin, G. Wang, W. Li, and Q. Liao, Iteratively reconstructing 4d light fields from focal stacks, Appl. Opt. 55, 8457 (2016). Park_2010_SPIE J.-H. Park, S.-W. Seo, N. Chen, and N. Kim, Fourier hologram generation from multiple incoherent defocused images, Proc. SPIE vol. 7690, p. 76900F (2010) Park_2010_DH J.-H. Park, S.-W. Seo, N. Chen, and N. Kim, Hologram synthesis from defocused images captured under incoherent illumination, in Biomedical Optics and 3-D Imaging: OSA Optics and Photonics Congress - Digital Holography and Three-Dimensional Imaging, 2010 OSA Technical Digest (Optical Society of America, 2010), paper JMA29. Chen_2016_PR N. Chen, Z. Ren, H. Ou, and E. Y. Lam, Resolution enhancement of optical scanning holography with a modulated point spread function, Photo. Res. 4, 1–6 (2016). maurer2010depth C. Maurer, S. Khan, S. Fassl, S. Bernet, and M. Ritsch-Marte, Depth of field multiplexing in microscopy, Opt. Express 18, 3023–3034 (2010). djidel2006high S. Djidel, J. K. Gansel, H. I. Campbell, and A. H. Greenaway, High-speed, 3-dimensional, telecentric imaging, Opt. Express 14, 8269–8277 (2006).
http://arxiv.org/abs/1705.09775v1
{ "authors": [ "Haichao Wang", "Ni Chen", "Jingdan Liu", "Guohai Situ" ], "categories": [ "physics.optics" ], "primary_category": "physics.optics", "published": "20170527064331", "title": "Fast and High Quality Light Field Acquisition using Defocus Modulation" }
Peeyush Singh (corresponding author), Tata Institute of Fundamental Research Centre for Applicable Mathematics, Bangalore-560 065, India. E-mail: [email protected], [email protected]. In this study, we propose a class of total variation diminishing (TVD) schemes for solving the pseudo-monotone variational inequality arising in the elastohydrodynamic lubrication point contact problem. Limiter-based stable hybrid line splittings are introduced on a hierarchical multi-level grid. These hybrid splittings are designed by use of a diffusive coefficient and a mesh dependent switching parameter in the computing domain of interest. The spectrum of the illustrated splittings is derived with the help of the well known local Fourier analysis (LFA). Numerical tests validate the performance of the scheme and its competitiveness with previously existing schemes. The advantages of the proposed splittings are that they reduce the computational complexity (up to O(n log n)) and solve high order discretizations directly (no defect-correction tool required) without perturbing the robustness of the solution procedure (i.e., they work well for a large range of load parameters). TVD schemes Defect-correction multi-grid Elastohydrodynamic Lubrication contact problem variational inequalities 65N06 65N55 65K15 35R35 45K05 Robust Numerical Solution for Solving Elastohydrodynamic Lubrication (EHL) Problems using Total Variation Diminishing (TVD) Approach ===================================================================================================================================== § INTRODUCTION Elastohydrodynamic lubrication (EHL) is most often understood as a phenomenon of fluid film lubrication in which the natural process of hydrodynamic fluid film creation is governed by the deformation of the contacting bodies, while the lubricant viscosity increases due to high pressure. Significant contributions have been made by many researchers in the development of more efficient and accurate methods for the study of EHL in the last few decades (e.g., <cit.>). It is well known that many numerical solutions of the EHL model suffer from a lack of numerical stability and convergence during computation, if not tackled correctly. On the other hand, any direct solver such as the Newton-Raphson technique takes a lot of computational storage and time (up to O(n^3)) to solve the model, and hence it has no commercial use in practice. In 1992, Venner <cit.> introduced a low order discretization for the EHL model (see <ref>) which is stable for a large range of load parameters. However, to the authors' best knowledge, stable schemes for the EHL model <ref> that work well for a very large range of load parameters, other than Venner's approach, are largely unavailable in the literature, and in that sense it remains a challenging problem in the scientific community. The main numerical difficulty in these problems occurs due to the lack of a stable smoother and the poor approximation of the pressure profile near its steep gradient location by any standard iterative procedure. Also, when the applied load on the contacting bodies is sufficiently high, many people have observed wiggles in the pressure and film thickness profiles when a central or any other high order scheme is used in the convection term of the Reynolds equation.
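This behaviour is easily reproduced on a linear surrogate problem. The following Python sketch (our illustration: a steady one-dimensional convection-diffusion equation, not the coupled EHL system) compares central and first-order upwind differencing of the convection term; at a large mesh Péclet number the central scheme produces the undershoots mentioned above, while the upwind scheme remains monotone.

```python
import numpy as np

def solve_1d(n, peclet, upwind):
    """Steady 1-D convection-diffusion u' = eps*u'' on (0,1), u(0)=0, u(1)=1.
    Central differencing of the convection term oscillates once the cell
    Peclet number h/eps exceeds 2; first-order upwinding stays monotone."""
    h, eps = 1.0 / n, 1.0 / peclet
    A = np.zeros((n - 1, n - 1)); b = np.zeros(n - 1)
    for i in range(n - 1):
        if upwind:       # backward difference for u'
            w, c, e = -eps / h**2 - 1.0 / h, 2 * eps / h**2 + 1.0 / h, -eps / h**2
        else:            # central difference for u'
            w, c, e = -eps / h**2 - 0.5 / h, 2 * eps / h**2, -eps / h**2 + 0.5 / h
        if i > 0:
            A[i, i - 1] = w
        A[i, i] = c
        if i < n - 2:
            A[i, i + 1] = e
        else:
            b[i] = -e            # boundary value u(1) = 1
    return np.linalg.solve(A, b)

for scheme in (False, True):
    u = solve_1d(n=32, peclet=500.0, upwind=scheme)
    # the central solution has negative undershoots -- the analogue of wiggles
    print("upwind" if scheme else "central", "min =", u.min().round(3))
```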
One possible way to overcome the difficulty is to use a lower order discretization in the convection term. In addition, to obtain high order stable, accurate solutions for such problems, researchers have applied a lower order scheme in a defect-corrected way through a suitable higher order discretization. However, such a defect-correction <cit.> setting is most of the time not able to resolve the difficulty, in the sense that it does not reduce the residual accurately, due to poor conditioning of the matrix in the outer iteration (e.g., <cit.>). Furthermore, lower order schemes are more diffusive: they produce a smoothing effect in the steep gradient region of the solution and are less accurate in the smooth part of the solution. This is the main motivation for the present study to adopt the total variation diminishing (TVD) approach for the EHL model problem. TVD schemes have rarely been applied to the EHL model so far in the literature, due to the fact that their implementation is not as obvious and straightforward as in the linear convection-diffusion case, because of the strong coupling of the pressure and film thickness terms in the model. Therefore, in this article an attempt has been made to solve this problem by generalizing the TVD concept efficiently to the existing EHL model. TVD schemes are understood as a generalized form of upwind based discretized schemes (a more precise definition is given below). Mostly, such schemes have been extensively devised for solving time dependent gas dynamics problems; later on, people started to apply the concept to steady state problems in many CFD applications. Initially, the concept of TVD was established by Harten, and later developed by Sweby <cit.>, to avoid unphysical wiggles in a numerical scheme. Harten also gave a necessary and sufficient condition for a scheme to be TVD. To understand the concept, we first define the total variation TV of a mesh function u^n as TV(u^n) = ∑_-∞^∞|u_j+1^n-u_j^n|=∑_-∞^∞|Δ_j+1/2u^n|, where the convention Δ_j+1/2u^n = u_j+1^n-u_j^n is used for any mesh function u. Harten's theory is formulated for conservation laws u_t+ f(u)_x = 0. A numerical approximation of Eq. (<ref>) is said to be TVD if TV(u^n+1) ≤ TV(u^n). Harten's condition for a scheme to be TVD is explained below. Let a general numerical scheme for the conservation law Eq. (<ref>) be of the form u^n+1_i=u^n_i-c_i^n(u_i^n-u_i-1^n)+d_i^n(u_i+1^n-u_i^n) over one time step, where the coefficients c_i^n and d_i^n are arbitrary values (in practice they may depend on the values u^n_i in some way, i.e., the method may be nonlinear). Then TV(u^n+1) ≤ TV(u^n) provided the following conditions are satisfied: c^n_i≥ 0, d^n_i≥ 0, c^n_i+d^n_i≤ 1 ∀ i. There is a very well developed TVD theory available in the literature for time dependent problems. Additionally, this concept has also been extended to the steady state convection-diffusion case in the form of M-matrices <cit.> using appropriate flux limiting schemes <cit.>. However, very little attention has been paid to developing TVD schemes for EHL problems.
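For concreteness, the classical one-dimensional setting of Harten's theorem can be exercised numerically. The sketch below (our illustration, for the linear advection equation with a minmod-limited second-order upwind scheme; all parameters are made up) verifies the TVD property TV(u^n+1) ≤ TV(u^n) at every step.

```python
import numpy as np

def minmod(a, b):
    """Classical minmod limiter: zero at extrema, which enforces TVD."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_step(u, c):
    """One explicit step for u_t + a u_x = 0 (a > 0, CFL number c <= 1):
    first-order upwind plus a minmod-limited second-order correction."""
    du = np.diff(u)                              # u_{j+1} - u_j
    slope = minmod(du[1:], du[:-1])              # limited slopes, interior cells
    unew = u.copy()
    unew[2:-1] -= c * du[1:-1]                   # upwind part
    unew[2:-1] -= 0.5 * c * (1 - c) * (slope[1:] - slope[:-1])  # correction
    return unew

tv = lambda u: np.abs(np.diff(u)).sum()
u = np.where(np.linspace(0, 1, 200) < 0.3, 1.0, 0.0)   # step initial data
for _ in range(100):
    u_next = tvd_step(u, c=0.5)
    assert tv(u_next) <= tv(u) + 1e-12           # Harten's TVD property holds
    u = u_next
print("total variation after 100 steps:", tv(u))
```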
In this article, our aim is to investigate a class of splittings for the EHL model which is robust and high order accurate (at least second order in the smooth part of the solution) for a large range of load parameters. §.§ Model Problem The following two dimensional circular point contact model problem is taken for the numerical study, defined below in the form of a variational inequality written in non-dimensional form: ∂/∂ x(ϵ∂u/∂ x)+∂/∂ y(ϵ∂u/∂ y)≤∂ (ρℋ)/∂ x in Ω, u≥ 0 in Ω, u·[∂/∂ x(ϵ∂u/∂ x)+∂/∂ y(ϵ∂u/∂ y)-∂ (ρℋ)/∂ x] = 0 in Ω, where u is the non-dimensional pressure of the liquid (lubricant) and Ω is a sufficiently large bounded domain such that u= 0 on ∂Ω. Here the term ϵ is defined as ϵ = ρℋ^3/(ηλ), where ρ is the dimensionless density of the lubricant, η is the dimensionless viscosity of the lubricant and the speed parameter is λ= 6η_0 u_s R^2/(a^3 p_H). The dimensionless viscosity η is defined according to η(u) = exp{(α p_0/z)(-1+(1+u p_H/p_0)^z)}. The dimensionless density ρ is given by ρ(u) = (0.59 × 10^9 + 1.34 u p_H)/(0.59 × 10^9 + u p_H). The film thickness ℋ of the lubricant is written as ℋ(x,y) = ℋ_00+x^2/2+y^2/2 +2/π^2∫_-∞^∞∫_-∞^∞u(x^',y^')dx^'dy^'/√((x-x^')^2+(y-y^')^2), where ℋ_00 is an integration constant. The dimensionless force balance equation is defined as follows: ∫_-∞^∞∫_-∞^∞u(x',y') dx'dy' = 3π/2. All notations used in the EHL model are defined in <ref>. A schematic diagram of the EHL point contact model is given in Fig. <ref>. The rest of the article is organized as follows. In Section <ref>, a few preliminaries are discussed which are required in the numerical study of the EHL model and which help in the subsequent numerical analysis. In Section <ref>, a series of splittings is constructed by imitating the linear convection-diffusion model and a linear EHL model. In Section <ref>, hybrid splittings are constructed for solving our existing EHL model defined in Section <ref>. In Section <ref>, local Fourier analysis is performed to calculate quantitative estimates of the splittings of Section <ref>. In Section <ref>, numerical experiments are conducted to check the performance of the present splittings and their improvement of the EHL solution. At the end, in Section <ref>, the overall conclusion is summarized. § PRELIMINARIES In this section, our main goal is to introduce a few prerequisite results which are used in our computation and cannot be ignored or avoided in the present analysis. The above nonlinear variational inequality is solved numerically by using fixed point iteration theory <cit.>. The main challenge appears here in the form of producing a stable iterative smoother for the EHL inequalities: when the applied load on the contacting bodies in the EHL model becomes sufficiently large, the solution starts blowing up after a few iterations. In such cases, an iterative smoother for solving the model is stable only if the nonlocal effect produced by the film thickness equation is controlled by a small-change calculation in the iteration, so as to make the overall effect local in the updated pressure value. This effect is reduced by introducing a special iterative smoother known as a distributive smoother <cit.>. The advantage of adopting such a relaxation is that it diminishes aggregation in the film thickness computation and eventually leads to a stable relaxation. Therefore, we need extra care in computing the film thickness term during each iteration. Let us define the deformation integral 𝒟_f as 𝒟_f(x,y) = 2/π^2∫_-∞^∞∫_-∞^∞u(x^',y^')/√((x-x^')^2+(y-y^')^2)dx^'dy^'.
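Before discretizing this integral, it is instructive to see why a fast summation is needed: a direct evaluation costs O(n^2) kernel sums per grid point. The following Python sketch (our illustration; it uses a crude midpoint-quadrature kernel with a regularized self term, not the analytic coefficients derived next) computes the discrete deformation in this brute-force way.

```python
import numpy as np

def deformation_direct(u, h):
    """Direct O(n^2)-per-point evaluation of the discrete deformation sum
    D_f(x_i, y_j) ~ (2/pi^2) * sum_{i',j'} K_{i-i', j-j'} u_{i',j'},
    with the kernel approximated by midpoint quadrature h^2/r (a crude
    stand-in for the analytic coefficients derived below)."""
    n = u.shape[0]
    ii, jj = np.indices((n, n))
    D = np.zeros_like(u)
    for i in range(n):
        for j in range(n):
            r = h * np.hypot(ii - i, jj - j)
            r[i, j] = 0.4 * h          # regularized self-contribution
            D[i, j] = (2 / np.pi**2) * (h * h * u / r).sum()
    return D

u = np.zeros((33, 33)); u[12:21, 12:21] = 1.0   # toy pressure patch
print(deformation_direct(u, h=0.25).max().round(3))
```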
<ref> taking pressure u as piecewise constant function namely u^hh_i',j' onsub-domainΩ^hh={ (x,y) ∈ℝ^2|x_i^'-h/2≤ x ≤ x_i^' +h/2,y_j^'-h/2≤ y ≤ y_j^' +h/2}. and discrete deformation𝒟_f_i,j = 𝒟_f(x_i,y_j)≈2/π^2∑_i'=0^n_x∑_j'=0^n_y𝒢^hh_i,i^',j,j^'u^hh_i',j',where the coefficients 𝒢^hh_i,i^',j,j^' is written as𝒢^hh_i,i^',j,j^' = ∫_x_i^'-h/2^x_i^' +h/2∫_y_j^'-h/2^y_j^'+h/21/√((x-x^')^2+(y-y^')^2)dx^'dy^'and evaluated analytically. Above integration Eqn. <ref> yields nine different results for the cases that are defined asx_i < x_i^',x_i > x_i^', x_i = x_i^' and y_j < y_j^', y_j > y_j^', y_j = y_j^'respectively. The nine results are combined into one expression𝒢^hh_i,i^',j,j^' =2/π^2{|x_+|sinh^-1(y_+/x_+)+|y_+|sinh^-1(x_+/y_+)-|x_-|sinh^-1(y_+/x_-) -|y_+|sinh^-1(x_-/y_+)-|x_+|sinh^-1(y_-/x_+)-|y_-|sinh^-1(x_+/y_-)+|x_-|sinh^-1(y_-/x_-)+|y_-|sinh^-1(x_-/y_-) },where x_+ = x_i-x_i^'+h/2,x_- = x_i-x_i^'-h/2y_+ = y_j-y_j^'+h/2, y_- = y_j-y_j^'-h/2.Therefore film thickness in discretized form is written asℋ_i,j^hh := ℋ_00+x^2_i/2+y^2_j/2+∑_i'∑_j'𝒢^hh_|i-i'|,|j-j'|u_i^',j^'^hh = Hℱ^h_i,j,where Hℱ^h is right hand of the film thickness. For computing above discrete film thickness Eqn. <ref>, small change using relaxation is measured asσ^h_i,j = r_i,j^h/𝒢^hh_0,0,where 𝒢^hh_0,0 = 𝒢^hh_i=i',j=j' and the residual r_J^h_i,j for Jacobi relaxationis given byr_J^h_i,j= Hℱ^h_i,j-ℋ_00-x^2_i/2-y^2_j/2-∑_i'∑_j'𝒢^hh_|i-i'|,|j-j'|ũ_i,j^hFor Gauss-Seidel relaxation, residual r_GS^h_i,j is given byr_GS^h_i,j= Hℱ^h_i,j-ℋ_00-x^2_i/2-y^2_j/2-∑_i'<i∑_j'𝒢^hh_|i-i'|,|j-j'|u̅_i,j^h -∑_i'=i∑_j'<j𝒢^hh_|i-i'|,|j-j'|ũ_i,j^h-∑_i'=i∑_j'>=j𝒢^hh_|i-i'|,|j-j'|ũ_i,j^h -∑_i'>i∑_j'𝒢^hh_|i-i'|,|j-j'|ũ_i,j^h,where ũ_i,j and u̅_i,j old and new updated values of pressure respectively. §.§.§ Smooth kernel computation using MLMISuppose we want to solve integral of type Eqn. <ref>. If kernel 𝒢(x,y) is sufficiently smooth with respect tothe variable y, we approximate discrete kernel 𝒢^hh_i,j by high order interpolation operator as𝒢̃^hh_i,j≃ [ℐ^h_H𝒢^hH_i,.]_j,where the high order interpolation operator is denoted by ℐ^h_H and 𝒢^hH_i,. is injected from 𝒢^hh_i,. i.e., 𝒢^hH_i,Jdef=𝒢^hh_i,2J. Superscript h and H denote the finer and the coarser grid respectively. Then the finer grid integral computation of Eqn. <ref> is approximated on coarser grid in following way 𝒲^h_i≃𝒲̃^h_idef= h^d∑_j𝒢̃^hh_i,ju^*^h_j=h^d∑_j[ℐ^h_H𝒢^hH_i,.]_ju^*^h_j = h^d∑_J𝒢^hH_i,J[(ℐ^h_H)^Tu^*^h_.]_J = H^d∑_J𝒢^hH_i,Ju^*^H_J,where u^*^H_Jdef=2^-d[(ℐ^h_H)^Tu^*^h_.]_J.Whenever kernel 𝒢(x,y) is also smooth enough with respect to x variable, the discrete sum 𝒲^h_i is evaluated on coarse grid points i=2I by use of high order interpolation operator Î^h_H. It is written as 𝒲^h≃Î^h_H𝒲^H,where 𝒲^H_Idef=𝒲̃^h_2I = H^d∑_J𝒢^HH_I,Ju^*^H_Jand where 𝒢^HH_.,J is injected from 𝒢^hH_.,J, i.e., 𝒢^HH_I,Jdef=𝒢^hH_2I,J = 𝒢^hh_2I,2J. §.§.§ Singular-Smooth or mild singular Kernel computation using MLMIIn general, kernel 𝒢 has a mild singularity near a point x=y. 
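Before turning to the singular case, we note that the closed-form coefficients 𝒢^hh derived above are straightforward to evaluate. The following sketch implements the combined nine-case expression; the placement of the prefactor 2/π^2 follows the combined formula above, and the convention that a term |p| sinh^-1(q/p) vanishes as p → 0 handles the degenerate cases.

```python
import numpy as np

def asinh_term(p, q):
    """|p| * asinh(q/p), with the limit 0 as p -> 0."""
    return 0.0 if p == 0.0 else abs(p) * np.arcsinh(q / p)

def kernel_coeff(xi, xip, yj, yjp, h):
    """Analytic coefficient G^{hh} for a piecewise-constant pressure cell
    centered at (x_i', y_j'), evaluated at (x_i, y_j); the nine geometric
    cases are combined into one closed form."""
    xp, xm = xi - xip + h / 2, xi - xip - h / 2
    yp, ym = yj - yjp + h / 2, yj - yjp - h / 2
    s = (asinh_term(xp, yp) + asinh_term(yp, xp)
         - asinh_term(xm, yp) - asinh_term(yp, xm)
         - asinh_term(xp, ym) - asinh_term(ym, xp)
         + asinh_term(xm, ym) + asinh_term(ym, xm))
    return 2.0 / np.pi ** 2 * s

# The diagonal coefficient G_{0,0} dominates and the coefficients decay
# with distance |k - i|, which the distributive relaxation exploits.
h = 0.01
print([kernel_coeff(k * h, 0.0, 0.0, 0.0, h) for k in range(4)])
```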
We rewrite our coarse grid approximation by adding correction term near singularity in the following way (see <cit.>)𝒲^h_i = h^d∑_j𝒢^hh_i,ju^*^h_j=h^d∑_j𝒢̃^hh_i,ju^*^h_j+h^d∑_j(𝒢^hh_i,j-𝒢̃^hh_i,j)u^*^h_j = h^d∑_j[ℐ^h_H𝒢^hH_i,.]_ju^*^h_j+h^d∑_j(𝒢^hh_i,j-𝒢̃^hh_i,j)u^*^h_j = 𝒲^H_I +h^d∑_j(𝒢^hh_i,j-𝒢̃^hh_i,j)u^*^h_jSince 𝒢̃^hh_i,j is an interpolation of 𝒢^hh_i,j itself using coarse grid points, the operator (𝒢^hh_i,j-𝒢̃^hh_i,j) is given by(𝒢^hh_i,j-𝒢̃^hh_i,j) =0 j=2J O(h^2p𝒢^2p(ξ) otherwise ,where 2p is the interpolation order and 𝒢^2p(ξ) is a 2p^th derivative of 𝒢 at someintermediate point ξ. Thus if the derivative of 𝒢 becomes small, the correction termbecome small and can be neglected. However, in case of singular smooth kernel (i ≃ j), we require the corrections in a neighborhood of i= j(||j-i|| ≤ m or i-m ≤ j ≤ i+m). Thus Eq. (<ref>) is simplified as follows𝒲^h_i =𝒲^H_I +h^d∑_||j-i|| ≤ m(𝒢^hh_i,j-𝒢̃^hh_i,j)u^*^h_jAdvantage of using multi-level procedure infilm thickness ℋ computation reduces integral complexity up to O(nlog n).A schematic diagram of multi level multi integration procedure is given in Fig. <ref>.§.§ Multi-Grid Method for variational inequality arising in EHL ProblemIn this section, we discuss multi-grid method <cit.> for variational inequality of EHL model. EHL problem is viewed as a linear complementarity problem <cit.> of the formL u ≤ f_1 x ∈Ω u≥f_2 x ∈Ω u = gx ∈∂Ω (u-f_2)(Lu - f_1) = 0x ∈Ω,where L is a linear differential operator. We want to solve the problem in discrete hierarchical sub-domains of the following form {Ω_l;Ω_l-1⊂Ω_l⊂Ω∀ l ∈ℤ∩ [1, M], where M∈ℝ}Hence discrete form of complementarity problem on level l is written as L_lu_l≤ f_1,l x_l∈Ω_l u_l≥ f_2,l x_l∈Ω_l u_l= gx_l∈∂Ω_l (u_l-f_2,l)(L_lu_l - f_1,l) = 0x_l∈Ω_l.Let u_l and v_l are an exact solution and approximated solution of above LCP Eqn. <ref>. Suppose that the error e_l=u_l-v_l is smooth after the iteration sweeping. Then complementarity problem satisfied for error equation e_l on finer level is read asL_le_l≤r_l x∈Ω e_l+v_l≥ f_2,l x∈Ω (e_l+v_l-f_2,l)(L_le_l-r_l)=0x∈Ω,where residual r_l=f_1,l-L_lv_l.Such smooth error e_l is approximated on a coarse grid without loosing any essential information. The LCP coarse grid equation for the coarse grid approximation of the error e_l-1 is therefore defined in PFAS byL_l-1e_l-1≤ I_l^l-1r_l e_l-1+Ĩ_l^l-1v_h≥ f_2,l-1 (e_l-1+Ĩ_l^l-1v_h-f_2,l-1)(L_l-1e_l-1-I_l^l-1r_l)=0.Since the problem is nonlinear and we are solving inequalities, we solve for full approximation v_l-1=e_l-1+I_l^l-1v_l but interpolate only v_l-1 back to fine grid. The main difference between multi-grid methods for equations and inequalities occur due to fact that, in case of fine grid converged solution v_l= v_l^* the coarse grid correction equation should be zero.Consequently, we have the following relationI_l-1^le_l-1=I^l_l-1(v^*_l-1-Ĩ^l-1_lv^*_l)=0 ⇒ v_l-1=Ĩ^l-1_lv_l(assume that operator I^l_l-1 keeps nonzero quantities nonzero).Furthermore, for a converged solution of fine grid LCP problem the coarse grid correction provides us the following condition on restriction operators,I_l^l-1(f_1,l-L_lv_l) ≥ 0Ĩ^l-1_lv_h≥ f_2,l-1 (Ĩ^l-1_lv_l-f_2,l-1)^TI_l^l-1(f_1,l-L_lv_l)=0Since f_1,l-L_lv_l≡ 0 for any converge solution.Hence above inequalities <ref> will satisfy for any rational choice of restriction operators I^l-1_l and Ĩ^l-1_l. 
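Returning for a moment to the kernel evaluation, the multilevel multi-integration procedure of the previous subsections can be illustrated in one space dimension. The following two-level Python sketch (linear transfer operators and a smoothed logarithmic kernel are assumptions of the illustration, and end-point quadrature corrections are ignored) coarsens the kernel by injection in its second index, transfers the density with the adjoint of linear interpolation, and corrects only in the band |j - i| ≤ m where the kernel is not smooth; the truncated correction leaves a small residual error away from the diagonal.

```python
import numpy as np

def mlmi_1d(G, u, h, m=4):
    """Two-level MLMI sketch for W_i = h * sum_j G[i, j] * u[j]."""
    n = G.shape[1] - 1                       # fine points j = 0..n, n even
    J = np.arange(0, n + 1, 2)
    # adjoint of linear interpolation: u*_J = (u_{2J} + (u_{2J-1}+u_{2J+1})/2)/2
    us = u[J].astype(float)
    us[1:] += 0.5 * u[J[1:] - 1]
    us[:-1] += 0.5 * u[J[:-1] + 1]
    us *= 0.5
    W = 2 * h * G[:, J] @ us                 # smooth coarse-grid part, H = 2h
    # local correction: G - (linear interpolation of G through even columns);
    # the difference vanishes at even j, so only odd columns contribute
    for i in range(n + 1):
        for j in range(max(1, i - m), min(n - 1, i + m) + 1):
            if j % 2 == 1:
                Gt = 0.5 * (G[i, j - 1] + G[i, j + 1])
                W[i] += h * (G[i, j] - Gt) * u[j]
    return W

# compare with the direct O(n^2) sum for a mildly singular kernel
n, h = 64, 1.0 / 64
x = np.arange(n + 1) * h
G = -np.log(np.abs(x[:, None] - x[None, :]) + h / 4)   # smoothed log kernel
u = np.sin(np.pi * x)
print("MLMI vs direct, max error:", np.max(np.abs(mlmi_1d(G, u, h) - h * G @ u)))
```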
For capturing free boundary and for achieving fast convergence the bilinear interpolation operator I_l-1^l is implemented only for unknowns on the inactive points that means, v_l⇐ v_l+ I_l-1^le_l-1if v_l > f_2,l v_l⇐ v_lelsewhere (v_l = f_2,l).§ LINEAR STUDY FOR CONVECTION-DIFFUSION PROBLEMOur specific interest in this Section is to develop an robust splitting for our EHL model. Such splitting is constructed by imitating series of linear model problem one by one. First we consider well known convection-diffusion problem of the formL u = (a(x,y) u)_x-ϵΔ u = f(x,y) ∀ x,y ∈Ωu(x,y) = g(x,y) ∀ x,y ∈∂Ω,where 0 < ϵ < < 1 (note that we do not have any y derivative in convection term). Then discretization of convective term for (a u)_x is performed as (a u)_x=a/h(u_i,j-u_i-1,j)=: L_1However, this scheme is only O(h) accurate. Our interest here to increase accuracy at least smooth part without contaminating any wiggle in solution. Consider the Van Leer's κ-schemes <cit.> for discretization term(a u)_x (for a = const > 0) as (a u)_x=a/h[(u_i,j-u_i-1,j)-κ/2(u_i,j-u_i-1,j)+1-κ/4(u_i,j-u_i-1,j)+1+κ/4(u_i+1,j-u_i,j)-1-κ/4(u_i,j-u_i-2,j)] =L_1 + L_α+L_β+L_γ+L_δ(similar scheme can be constructed for a < 0). The resulting discrete model Example. <ref> by κ-scheme (take κ = 0 here) is denoted by[L_κ=0]=a/h[ 1/4 -5/4 3/41/4 0]+ϵ/h^2[0-1 0-14-10-10]In general, above discrete equation. <ref> do not produces M-matrix and many iterative splitting on L_κ diverge. Therefore, this problem is solved using TVD scheme with help of appropriate flux limiters to prevent a solution from unwanted oscillation. Now consider κ=-1 then the second-order upwind scheme looks like (a > 0)(au)_x=a/h[(u_i,j-u_i-1,j)+1/2(u_i,j-u_i-1,j)+1/2(u_i,j-u_i-1,j)-1/2(u_i-1,j-u_i-2,j)] =L_1+L_α+L_γ+L_δ.We enforce Eqn. <ref> to satisfy TVD condition by multiply limiter functions inthe additional terms L_α, L_γ and L_δ. Then following two type of discretization for convection term are presented here as(au)_x=a/h[(u_i,j-u_i-1,j)+1/2ϕ(r_i-1/2)(u_i,j-u_i-1,j) -1/2ϕ(r_i-3/2)(u_i-1,j-u_i-2,j)]=L_1+L_α+L_γand(au)_x=a/h[(u_i,j-u_i-1,j)+1/2ϕ(r_i-1/2)(u_i,j-u_i-1,j) +1/2ϕ(r_i-3/2)(u_i,j-u_i-1,j)-1/2ϕ(r_i-3/2)(u_i-1,j-u_i-2,j)] =L_1+L_α+L_β++L_γ,where r_i-1/2=(u_i+1,j-u_i,j)(u_i,j-u_i-1,j) andr_i-3/2=(u_i,j-u_i-1,j)(u_i-1,j-u_i-2,j).In Fig. <ref> represents graph of limiter function (r,ϕ(r)) on which the resulting convection discretization term defined in Eqn. <ref> and Eqn. <ref> enforce to be TVD and higher order accurate (see <cit.>). The discrete representation of Example <ref> using Van-leer κ-scheme is defined as L_κu = ∑_l_x∈ℐ∑_l_y∈ℐ𝒞^(κ)_l_xl_yu_i+l_x,j+l_y.Moreover, in stencil notation it is represented asL_κ = [𝒞_02^κ;𝒞_01^κ; 𝒞_-20^κ 𝒞_-10^κ𝒞_00^κ𝒞_10^κ𝒞_20^κ; 𝒞_0-1^κ; 𝒞_0-2^κ ] . Then the discrete matrix equation L_κu =f is solved efficiently by the use of multi-grid. The related splitting is constructed bytaking the matrix operator defined in Eqn. <ref>. In particular case, the splitting in x-direction is scanned as forward(or backward direction depending on flow direction) lexicographical order and it is represented as S_κ=S_κ^x_f (or S_κ^x_b). 
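The limited discretizations above are easy to assemble once the limiter ϕ and the smoothness ratios r_i-1/2, r_i-3/2 are available. The following sketch implements the limited second-order upwind approximation of (a u)_x for a > 0; the small guard added to the denominators of the ratios is an implementation detail of the sketch, and the superbee limiter is included only for comparison.

```python
import numpy as np

def minmod(r):   return max(0.0, min(1.0, r))
def van_leer(r): return (r + abs(r)) / (1.0 + abs(r))
def superbee(r): return max(0.0, min(2.0 * r, 1.0), min(r, 2.0))

def limited_upwind_dx(u, a, h, phi=van_leer, guard=1e-12):
    """Limited second-order upwind approximation of (a u)_x, a > 0,
    following the L1 + phi*L_alpha - phi*L_gamma construction."""
    n = len(u)
    du = np.zeros(n)
    for i in range(2, n - 1):
        r_m = (u[i + 1] - u[i]) / (u[i] - u[i - 1] + guard)       # r_{i-1/2}
        r_mm = (u[i] - u[i - 1]) / (u[i - 1] - u[i - 2] + guard)  # r_{i-3/2}
        du[i] = (a / h) * ((u[i] - u[i - 1])
                           + 0.5 * phi(r_m) * (u[i] - u[i - 1])
                           - 0.5 * phi(r_mm) * (u[i - 1] - u[i - 2]))
    return du

u = np.where(np.linspace(0.0, 1.0, 20) < 0.5, 1.0, 0.0)
print(limited_upwind_dx(u, a=1.0, h=0.05)[:12])
```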
For matrix operator L_κ, the forward splitting S_κ^x_f is defined as L_κ=L^x_κ/2-(L^x_κ/2-L_κ)=:L^+_κ+L^0_κ+L^-_κ,whereL^x_κ/2:=L^+_κ+L^0_κ=[ 0; 0; 0 0 0 0 0; 𝒞_0-1^κ; 𝒞_0-2^κ ]+[ 0; 0; 0 𝒞_-10^κ/2𝒞_00^κ/2𝒞_10^κ/2 0; 0; 0 ]and therefore overall splitting isL^x_κ/2u^n+1= (L^x_κ/2-L_κ)u^n+f.Now for a fixed x-line (m-grid points in x-direction) (i,j_0)_( 1≤ i ≤ m), we have the followingL^0_κu^*=f+L^0_κu^n-(L^-_κ+L^0_κ)u^n-L^+_κu^n+1.L^0_κ corresponds the operator to the unknowns u^* which are scanned simultaneously. L^-_κ corresponds the operator to the old approximation u^n, and L^+_κ operator having updated values of u^n+1. Now by applying under-relaxation constant ω in above equation we haveu^n+1=u^*ω+u^n(1-ω),therfore splitting equation can be rewritten in corresponding change, σ^n+1=u^n+1-u^n form asL^0_κσ^n+1=f-(L^-_κ+L^0_κ)u^n-L^+_κu^n+1, u^n+1=u^n+σ^n+1ωNow we construct series of splitting for solving Eqn. <ref> as below.Splitting : L_s0 This splitting is constructed by taking upwind operator L_1 plus a “positive" part of the second-orderoperators L_α and L_β from Eqn. <ref> and part of diffusion operator from Eqn. <ref>.L_κ^0u=-{ϵ/h^2+a/4h(5-3κ)}u_i-1,j+{a/h(2-κ/2 +1-κ/4)+4ϵ/h^2}u_i,j+{-ϵ/h^2}u_i+1,j L_κ^+u={-ϵ/h^2}u_i,j-1 L_κ^-u={a/h(1-κ/4)}u_i-2,j+{a/h(1-κ/4)}u_i-1,j+{-a/h(1+κ/4)}u_i,j +{a/h(1+κ/4)}u_i+1,j+{-ϵ/h^2}u_i,j+1.Splitting : Ls1 This splitting is constructed taking upwind operator L_1 plus a “positive" part of the second-orderoperators L_α from Eqn. <ref> and part of diffusion operator from Eqn. <ref>.L_κ^0u={-a/h(2-κ/2)-ϵ/h^2}u_i-1,j+{a/h(2-κ/2)+4ϵ/h^2}u_i,j+{-ϵ/h^2}u_i+1,j L_κ^+u={-ϵ/h^2}u_i,j-1 L_κ^-u={a/h(1-κ/4)}u_i-2,j+{a/h(1-κ/4)}u_i-1,j+{-a/h(1+κ/4)}u_i,j +{a/h(1+κ/4)}u_i+1,j+{-ϵ/h^2}u_i,j+1Splitting : Ls2 In this case splitting coefficients 𝒞_**^κ correspond only to the first-order upwind operator L_1 of adiscretized Eqn. <ref> plus diffusion operator.L_κ^0u={-a/h-ϵ/h^2}u_i-1,j+{a/h+4ϵ/h^2}u_i,j+{-ϵ/h^2}u_i+1,j L_κ^+u={-ϵ/h^2}u_i,j-1 L_κ^-u={a/h(1-κ/4)}u_i-2,j+{-a/h(1-3κ/4)}u_i-1,j+{-a/h(1+3κ/4)}u_i,j +{a/h(1+κ/4)}u_i+1,j+{-ϵ/h^2}u_i,j+1Splitting : Ls3 The third splitting named as κ- distributive line relaxation is constructed by assuming a ghost variable σ_* (with thesame cardinality as σ) such that σ = 𝒟σ_*, where matrix 𝒟 comes due to distributive change of the relaxation.i.e. We construct line-wise distributive splitting asu_i,j^n+1=u^n_i,j+σ_i,j-(σ_i+1,j+σ_i-1,j+σ_i,j+1+σ_i,j-1)/4This splitting is understood in the following way: First, discretize Example <ref> by κ-scheme and get the equation of the form asL^x_κ/2u^n+1= f', where f'=(L^x_κ/2-L_κ)u^n+f.Now in the above splitting equation put the value of u^n+1 from Eqn. <ref> and apply distributive splitting in the form of right preconditioner defined below.L^x_κ/2σ^n+1= R^nandL^x_κ/2𝒟σ^n+1_*= R^n,where the updated change in pressure and residual equation are denoted asσ^n+1=𝒟σ^n+1_* andR^n=L^x_κ/2u^n+1- f'respectively. In other way, line distributive splitting consists of following two steps; In first step it calculates new ghost value approximation change σ^n+1_*. 
Second step calculates new approximation change σ^n+1.Now applying above splitting along the x-direction in Example <ref>, the diffusive term is computed as-ϵ[{ u_i+1,j+σ_i+1-(σ_i + σ_i+2)/4}-{ u_i,j+σ_i-(σ_i-1 + σ_i+1)/4}]/h^2 -ϵ[{ u_i-1,j+σ_i-1-(σ_i-2 + σ_i)/4}-{ u_i,j+σ_i-(σ_i-1 + σ_i+1)/4}]/h^2 -ϵ[{ u_i,j+1-σ_i/4}-{ u_i,j+σ_i-(σ_i-1 + σ_i+1)/4}]/h^2 -ϵ[{ u_i,j-1-σ_i/4}-{ u_i,j+σ_i-(σ_i-1 + σ_i+1)/4}]/h^2.and convection term is computed as+[a_i+1/2,j(2+κ)/2h{ u_i,j+σ_i-(σ_i-1 + σ_i+1)/4} -a_i-1/2,j(2+κ)/2h{ u_i-1,j+σ_i-1-(σ_i-2 + σ_i)/4}]Other part of convective term which comes from van-leer discretization do not contain any distributive termas above explained and kept in right hand side during relaxation and overall splitting is written as follows(ϵ/4h^2+a_i-1/2,j(2+κ)/8h)σ_i-2 -(7ϵ/4h^2+a_i+1/2,j(2+κ)/2h+ a_i-1/2,j(2+κ)/8h)σ_i-1+(20ϵ/4h^2+a_i+1/2,j(2+κ)/2h +a_i-1/2,j(2+κ)/8h)σ_i -(8ϵ/4h^2+a_i+1/2,j(2+κ)/2h)σ_i+1 +ϵ/4h^2σ_i+2=R_i,j+{1+κ/4(u_i+1,j-u_i,j)-1-κ/4(u_i-1,j-u_i-2,j)}]after solving above equation for σ along x line direction updated solution u^n+1 is evaluated as u_i,j^n+1=u^n_i,j+σ_i,j-(σ_i+1,j+σ_i-1,j+σ_i,j+1+σ_i,j-1)/4.However, above splitting Ls3 Eqn. <ref> is not robust and very rarely use in practice.We are now interested in showing convergence of LCP through the above presented splitting. Let us consider domain Ω∈ℝ^2 with boundary ∂Ω, and consider known functions f and g.Then find u in a weak sense such that these inequalities hold-(a(x,y)h(u))_x+ϵΔ u ≤ f(x,y) ∀ x,y ∈Ω u(x,y) ≥ 0 ∀ x,y ∈Ω, u(x,y)[(a(x,y)h(u))_x-ϵΔ u - f(x,y)]=0∀ x,y ∈Ω,u(x,y) = g(x,y) ∀ x,y ∈∂Ω.Therefore, discrete version of above problem (finite difference or finite volume) is written in the matrix form Lu ≤ f,u ≥ 0, u[L u - f]=0,where L is a M-matrix of order m× m, u and f are m× 1-column vector.It is well known that solving above discrete problem is equivalent to solving quadratic minimization problem of the formG(u)=1/2u^TLu-f^Tu,min_u ∈ℝ^m×1 G(u),subjected to the constraintsu≥0.Let u^n and f^n are m× 1-column vectors achieved by splitting algorithm (*), L^0_κσ^n+1=f-(L^-_κ+L^0_κ)u^n-L^+_κu^n+1, σ^n+1=max{0,σ^n+1}, u^n+1=u^n+σ^n+1ω then we have u^n→ u and f^n→ f such that u and f is a solution of LCP problem. For the proof of this theorem we refer to see Cryer <cit.>.The following error estimates are easily established for LCP problem for algorithm described above. Let u is the exact solution of LCP problem define in Eqn. <ref>, also let u^n+1 is approximate solutionobtained by the splitting of the formL^0_κσ^n+1=f-(L^-_κ+L^0_κ)u^n-L^+_κu^n+1, σ^n+1=max{0,σ^n+1}, u^n+1=u^n+σ^n+1ωThen following conditions holdu-u^n+1_2≤ C_2u^n+1-u^n_2 u-u^n+1_1≤ C_1u^n+1-u^n_1 u-u^n+1_∞≤ C_∞u^n+1-u^n_∞.Since From LCP problem we getr_κ=L^0_κu^n+f^n-(L^-_κ+L^0_κ)u^n-L^+_κu^n+1≥ 0andr_κ^+=(r_κ_i,j^+),where r_κ_i,j^+=r_κ_i,jif u^n >0 andu^n+1 >0, min(0,r_κ_i,j )if u^n=0andu^n+1 >0.Now consider the following LCP L^0_κu^n+1≤ f-r_κ_i,j^+, u^n+1≥ 0, u^n+1(L^0_κu^n+1 -f+r_κ_i,j^+)=0Now multiply u^T in Eqn. 
<ref> and combing with equality term we get(u^n+1-u)^TL^0_κu ≤(u^n+1-u)^Tf.similar way we also get (u-u^n+1)^TL^0_κu^n+1≤(u-u^n+1)^T(f-r_κ_i,j^+).Now by adding above two equations we get(u-u^n+1)^Tν_*(u-u^n+1) ≤ (u-u^n+1)^T(-L_κ^0)(u-u^n+1) ≤ (u-u^n+1)^T( -r_κ_i,j^+)This implies that the following conditions holdu-u^n+1_1≤ν_1^-1-r_κ_i,j^+_1, u-u^n+1_∞≤ν_∞^-1-r_κ_i,j^+_∞, u-u^n+1_2≤ν_2^-1-r_κ_i,j^+_2.Now rest of the proof is followed from Lemma 2.2 mentioned in <cit.>.Now we illustrate splitting for incompressible EHL model (we take ρ, ηand ϵ as constants here) in the formof inequalities as(a(x,y)ℋ(u))_x-ϵΔ u ≥ f(x,y) ∀ x,y ∈Ω u(x,y) ≥ 0 ∀ x,y ∈Ω, u(x,y)[(a(x,y)ℋ(u))_x-ϵΔ u - f(x,y)]=0 ∀ x,y ∈Ω, u(x,y) = g(x,y) ∀ x,y ∈∂Ω, ℋ(u)=H_00+x^2+y^2/2 +2/π^2∫_-∞^∞∫_-∞^∞u(x^',y^')dx^'dy^'/√((x-x^')^2+(y-y^')^2)For incompressible EHL problem κ-line distributive Jacobi splitting is written asconsider the convection term of above Example <ref> as∂ h/∂ x=1/h_x[(ℋ_i,j-ℋ_i-1,j)-κ/2(ℋ_i,j-ℋ_i-1,j)+ 1+κ/4(ℋ_i+1,j-ℋ_i,j)-1-κ/4(ℋ_i-1,j-ℋ_i-2,j)]Now we will consider the following Splitting : Ls4-ϵ[{ u_i+1,j+σ_i+1-(σ_i + σ_i+2)/4}-{ u_i,j+σ_i-(σ_i-1 + σ_i+1)/4}]/h^2_x -ϵ[{ u_i-1,j+σ_i-1-(σ_i-2 + σ_i)/4}-{ u_i,j+σ_i-(σ_i-1 + σ_i+1)/4}]/h^2_x -ϵ[{ u_i,j+1-σ_i/4}-{ u_i,j+σ_i-(σ_i-1 + σ_i+1)/4}]/h^2_x -ϵ[{ u_i,j-1-σ_i/4}-{ u_i,j+σ_i-(σ_i-1 + σ_i+1)/4}]/h^2_x-1/h_x[(2-κ/2) (∑_k=i-1^i+1σ𝒢_ikjjσ_k-∑_k=i-2^iσ𝒢_i-1kjjσ_k) -{1+κ/4(ℋ_i+1,j-ℋ_i,j)-1-κ/4(ℋ_i-1,j-ℋ_i-2,j)}]=f_i,jAnother possibility is to consider the following splitting as ∂ h/∂ x=1/h_x[(ℋ_i,j-ℋ_i-1,j)-κ/2(ℋ_i,j-ℋ_i-1,j)+ 1+κ/4(ℋ_i+1,j-ℋ_i,j)-1-κ/4(ℋ_i-1,j-ℋ_i,j+ℋ_i,j-ℋ_i-2,j)]Hence overall equation is rewritten as Splitting : Ls5-ϵ[{ u_i+1,j+σ_i+1-(σ_i + σ_i+2)/4}-{ u_i,j+σ_i-(σ_i-1 + σ_i+1)/4}]/h^2_x -ϵ[{ u_i-1,j+σ_i-1-(σ_i-2 + σ_i)/4}-{ u_i,j+σ_i-(σ_i-1 + σ_i+1)/4}]/h^2_x -ϵ[{ u_i,j+1-σ_i/4}-{ u_i,j+σ_i-(σ_i-1 + σ_i+1)/4}]/h^2_x -ϵ[{ u_i,j-1-σ_i/4}-{ u_i,j+σ_i-(σ_i-1 + σ_i+1)/4}]/h^2_x-1/h_x[(2-κ/2+1-κ/4) (∑_k=i-1^i+1σ𝒢_ikjjσ_k-∑_k=i-2^iσ𝒢_i-1kjjσ_k) -{1+κ/4(ℋ_i+1,j-ℋ_i,j)-1-κ/4(ℋ_i,j-ℋ_i-2,j)}]=f_i,j.More general discussion on convergence of these splittings are given in Section <ref>. § TVD IMPLEMENTATION IN POINT CONTACT MODEL PROBLEMIn this Section, we implement the splitting discussed in the last Section <ref> and allow to extend it in EHL model. A hybrid splitting presented here and it is determined by measuring the value of min(ϵ(x,y)/h_x,ϵ(x,y)/h_y).This value is treated as switching parameter to perform two different splitting together while moving x direction during the iteration.If the valuemin(ϵ(x,y)/h_x,ϵ(x,y)/h_y) > 0.6then we apply line Gauss-Seidel splitting otherwise line Jacobi distributed splitting is incorporated in other wordsL_hs1=L_s1-splittingIf min(ϵ(x,y)/h_x,ϵ(x,y)/h_y) > 0.6L_s4-splittingIf min(ϵ(x,y)/h_x,ϵ(x,y)/h_y) ≤ 0.6. L_hs2=L_s0-splittingIf min(ϵ(x,y)/h_x,ϵ(x,y)/h_y) > 0.6L_s5-splittingIf min(ϵ(x,y)/h_x,ϵ(x,y)/h_y) ≤ 0.6.These constructions are well justified as the region where ϵ tends to zero, we end up having an ill-conditioned matrix system in the formof dense kernel matrix appear in film thickness term. Therefore, distributive Jacobi line splitting is implemented as a right pre-conditioner to reduce the ill-conditioning of the matrix. However, in other part where ϵ is sufficiently large diffusion term dominates therefore we use Gauss line splitting.Considering the above setting in computational domain is quite demanding in EHL model as it allows us in reducing computational cost and storage issue. 
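The projected iteration appearing in the convergence results above (relax, then clip at the obstacle u ≥ 0, then under-relax) can be illustrated with a standard projected Gauss-Seidel sweep: at a fixed point, either u_i > 0 and the i-th equation holds, or u_i = 0. The one-dimensional obstacle problem below is only a toy stand-in for the EHL complementarity system.

```python
import numpy as np

def projected_gauss_seidel(L, f, u0, sweeps=400):
    """Projected Gauss-Seidel for an obstacle-type LCP with constraint u >= 0:
    each unknown is relaxed from its row equation and clipped at zero."""
    u = u0.copy()
    for _ in range(sweeps):
        for i in range(len(f)):
            r = f[i] - L[i] @ u + L[i, i] * u[i]   # f_i - sum_{j != i} L_ij u_j
            u[i] = max(0.0, r / L[i, i])
    return u

# 1D toy: -u'' with a forcing that changes sign, so part of the domain is in
# contact (u = 0) and the free boundary is located automatically.
n = 50
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
L = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
f = 20.0 * np.sin(3 * np.pi * x)
u = projected_gauss_seidel(L, f, np.zeros(n))
print("contact points (u = 0):", int(np.sum(u == 0.0)))
```

The switch defining L_hs1 and L_hs2 above is then a one-line predicate; the function and variable names below are illustrative.

```python
def choose_splitting(eps_xy, hx, hy, threshold=0.6):
    """Hybrid switch: line Gauss-Seidel where diffusion dominates, line
    distributive Jacobi where eps -> 0 and the dense kernel dominates."""
    if min(eps_xy / hx, eps_xy / hy) > threshold:
        return "line_gauss_seidel"
    return "line_distributive_jacobi"
```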
We replace κ value in splitting constructed in Section <ref> by incorporating appropriate limiter function ϕ there. In next section, we define these two splitting in more general form having limiter function involve in the splitting. §.§.§ Limiter based Line Gauss-Seidel splittingEHL point contact problem is solved in the form of LCP and therefore in this Section we seek an efficient splitting for Reynolds equation iterate along x-line direction to obtain the pressure solution. Now by using Theorem <ref> and Lemma <ref> we prove the convergence of the EHL solution. This splitting is explained in the following way: First calculate updated pressure in x-line direction as u̅_i,j = ũ_i,j + σ_i keeping j fixat a time for all j in y-direction and thenapply change σ_i immediately to update the pressure ũ. The successive pressure change σ_i along the x-direction can be calculated as belowϵ^X_i+1/2,j[(u_i+1,j + σ_i+1)-(u_i,j+ σ_i)]+ϵ^X_i-1/2,j[(u_i-1,j + σ_i-1)-(u_i,j+ σ_i)]/h_x+ϵ^Y_i,j+1/2[u_i,j+1-(u_i,j+ σ_i)]+ϵ^Y_i,j-1/2[u_i,j-1-(u_i,j+ σ_i)]/h_y -h_y((ρℋ)^*_i+1/2,j-(ρℋ)^*_i-1/2,j) = 0,where terms read asϵ^X_i ± 1/2,jdefn:= h_yϵ_i ± 1/2,j,ϵ^Y_i,j ± 1/2defn:= h_xϵ_i,j ± 1/2,ϵ_i ± 1/2,jdefn:= (ϵ_i,j+ϵ_i± 1,j)/2,ϵ_i,j ± 1/2defn:= (ϵ_i,j+ϵ_i,j± 1)/2,whereϵ_i ,j=ρ(i,j) ℋ^3(i,j)/η(i,j)λ. (ρℋ)^*_i+1/2,jdef:=(ρ̌ℋ̅)_i,j+ 12ϕ(r_i+1/2)((ρ̌ℋ̅)_i+1,j-(ρ̌ℋ̅)_i,j) (ρℋ)^*_i-1/2,jdef:=(ρ̌ℋ̅)_i-1,j+ 12ϕ(r_i-1/2)((ρ̌ℋ̅)_i,j-(ρ̌ℋ̅)_i-1,j),wherer_i+1/2 = (ρ̌ℋ̃)_i+1,j-(ρ̌ℋ̃)_i,j(ρ̌ℋ̃)_i,j-(ρ̌ℋ̃)_i-1,jand r_i-1/2= (ρ̌ℋ̃)_i,j-(ρ̌ℋ̃)_i-1,j(ρ̌ℋ̃)_i-1,j-(ρ̌ℋ̃)_i-2,j.In above equation for each i,ℋ̅_i,j = ℋ̃_i,j + ∑_k𝒢_i,k,j,jσ_kIt is observed that the magnitude of the kernel 𝒢_i,k,j,j in equation  <ref> diminishes rapidly as distance |k-i| increase and therefore, we avoid unnecessary computation expense by allowing value of k up to three terms. So updated value of film thickness is rewritten asℋ̅_i,j = ℋ̃_i,j + ∑_k=i-1^i+1𝒢_i,k,j,jσ_k.Hence, Eqn. (<ref>) is illustrated as𝒞_i+2,ϕσ_i+2 +𝒞_i+1,ϕσ_i+1+𝒞_i,ϕσ_i +𝒞_i-1,ϕσ_i-1+𝒞_i-2,ϕσ_i-2= R_i,j,ϕ,where R_i,j,ϕ and 𝒞_i±.,ϕ are residual and coefficients of matrix arising due to linearized form involving the limiter function. This setting leads to a band matrix formulation which is solved using Gaussian elimination with minimum computational work (O(n)). §.§.§ Limiter based Line-Distributed Jacobi splittingThe understanding philosophy of line distributed Jacobi splitting is more physical than mathematical. When diffusive coefficient tends to zero, pressure becomes large enough and non local effect of film thickness dominates in the region. Therefore a small deflection in pressure change produces high error in updated film thickness eventually leads blow up the solution after few iterations. This numerical instability is overcome by interacting with the neighborhood points during iteration. During this process the computed change of pressure at one point of the line are shared to its neighbor cells. In other words, a given point of a line new pressure u̅_i,j is computed from the summation of the changes coming from neighboring points plus the old approximated pressure ũ_i,ju̅_i,j = ũ_i,j+σ_i,j-(σ_i+1,j+σ_i-1,j+σ_i,j+1+σ_i,j-1)4In this case, changes are incorporated only at the end of a complete iteration sweep. 
Therefore, overall splitting is derived as belowϵ^X_i+1/2,j[(u_i+1,j + σ_i+1-(σ_i+σ_i+2)4) -(u_i,j+ σ_i-(σ_i-1+σ_i+1)4)]/h_x +ϵ^X_i-1/2,j[(u_i-1,j + σ_i-1-(σ_i-2+σ_i)4)-(u_i,j+ σ_i-(σ_i-1+σ_i+1)4)]/h_x +ϵ^Y_i,j+1/2[u_i,j+1-σ_i4-(u_i,j+ σ_i-(σ_i-1+σ_i+1)4)]/h_y+ϵ^Y_i,j-1/2[u_i,j-1-σ_i4-(u_i,j+ σ_i-(σ_i-1+σ_i+1)4)]/h_y -h_y((ρℋ)^*_i+1/2,j-(ρℋ)^*_i-1/2,j) = 0.The following notion used in Eqn. <ref> defined asϵ^X_i ± 1/2,jdefn:= h_yϵ_i ± 1/2,j ϵ^Y_i,j ± 1/2defn:= h_xϵ_i,j ± 1/2 ϵ_i ± 1/2,j = 0.5(ρ(i± 1,j) ℋ^3(i ± 1,j)/η(i ± 1,j)λ+ρ(i ± 1,j) ℋ^3(i ± 1,j)/η(i ± 1,j)λ), ϵ_i,j ± 1/2 = 0.5(ρ(i,j± 1) ℋ^3(i,j ± 1)/η(i,j ± 1)λ+ρ(i,j ± 1) ℋ^3(i ,j ± 1)/η(i ± 1,j ± 1)λ). (ρℋ)^*_i+1/2,jdef:=(ρ̌ℋ̅)_i,j+ 12ϕ(r_i+1/2)((ρ̌ℋ̅)_i+1,j-(ρ̌ℋ̅)_i,j) (ρℋ)^*_i-1/2,jdef:=(ρ̌ℋ̅)_i-1,j+ 12ϕ(r_i-1/2)((ρ̌ℋ̅)_i,j-(ρ̌ℋ̅)_i-1,j),wherer_i+1/2 = (ρ̌ℋ̃)_i+1,j-(ρ̌ℋ̃)_i,j(ρ̌ℋ̃)_i,j-(ρ̌ℋ̃)_i-1,jand r_i-1/2= (ρ̌ℋ̃)_i,j-(ρ̌ℋ̃)_i-1,j(ρ̌ℋ̃)_i-1,j-(ρ̌ℋ̃)_i-2,j.In above equation, discretization of convection term defined same as Line Gauss-Seidel relaxation case. However, due to distributive change of the pressure, the updated value of film thickness is described asℋ̅_i,j = ℋ̃_i,j + ∑_kσ𝒢_i,k,j,jσ_k,whereσ𝒢_i,i,j,j = 𝒢_i,i,j,j-(𝒢_i,i-1,j,j+𝒢_i,i+1,j,j +𝒢_i,i,j,j-1+𝒢_i,i,j,j+1).After few manipulation of Eqn. <ref>, we get system of band matrix which is solved using Gaussian elimination approach.The force balance equation is incorporated in our numerical calculation by updating the constant value ℋ_00. The updated value of ℋ_00 is performed according toℋ_00←ℋ_00-c( 2π/3-h_xh_y∑_i=1^n_x∑_j=1^n_yu_i,j),where c is a relaxation parameter having range between 0.01-0.1. § FOURIER ANALYSISPerformance and asymptotic estimate of above splitting is measured through the Fourier analysis by considering infinite grid 𝔾^f_h:= { x = (ξ_1h,ξ_2h) : ξ =(ξ_1,ξ_2) ∈ℤ×ℤ}and infinite grid function defined on 𝔾^f_h by the linear span of the Fourier components 𝕋 ^h=span{φ (θ, x)=e^i(ξ_1θ_1+ξ_2θ_2): θ=(θ_1,θ_2) ∈ (-π,π]^2, x∈G^f_h}.These basis functions e^iξθ∈𝕋 ^h are orthogonal with respect to the inner product⟨ u_h,v_h⟩:= lim_l →∞1/4 l^2∑_|ξ| ≤ l u_h(ξ_1h,ξ_2h)v_h(ξ_1h,ξ_2h),where u_h,v_h∈𝕋 ^h. Furthermore, we will define orthogonal space to identity function𝕀∈𝕋 ^h as 𝕋 ^h_ ={ v_h: ⟨𝕀,v_h⟩=0 }Moreover, discrete solution u_h is described as Fourier transform û a linear combinations of the basis functions e^iξθ∈𝕋^hu_h= lim_l →∞1/2l∑_|ξ| ≤ lû_h(ξ)e^iξθ.The Fourier space 𝕋^h is illustrated as four-dimensional subspaces 𝕋^h_θ = span{φ(θ^α_1α_2, x)= e^i kθ^α_1α_2; α_1,α_2∈{0,1}}, wherex∈𝔾^f_h;θ^00∈ (-π/2,π/2]^2,θ^α_1α_2= (θ_1-α_1sign(θ_1)π,θ_2-α_2sign(θ_2)π).We say discretized PDE of the form L_hu_h = f_his solvable if f_h∈𝕋 ^h_. Moreover, solution will be unique if u_h∈𝕋 ^h_. Let relaxation method defined via operator splitting as L_h^+u̅_h+L_h^-ũ_h = f_h,where ũ_h and u̅_h are old and updated approximation to the solution u_h. Now we are interested in constructing a splitting which reduce our computed error significantly. Such behavior is investigated by measuring error equation as e̅_h=𝒮_hẽ_h,where ẽ_h=u_h-ũ_h, e̅_h=u_h-u̅_h and 𝒮_h :=-(L^+_h)^-1L^-_h. Now apply Fourier transform in above equation for L̂^+_h(θ)≠ 0 we have following relation 𝒮_hφ(θ, x) = 𝒮̂_h(θ)φ(θ, x)∀ x∈𝔾^f_h,and smoothing factor notation as μ_1(𝒮_h):=sup{|𝒮̂_h(θ)|: θ∈Θ_high},where 𝒮̂_h(θ):=-L̂^-_h(θ)/L̂^+_h(θ). 
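The smoothing factor μ_1 just defined can be approximated numerically by sampling the symbol over the high-frequency range. The sketch below does this for a generic symbol and, as an example, plugs in the symbol of the κ x-line relaxation derived in the next subsection; the pure-diffusion limit reproduces the classical value 1/√5 ≈ 0.447 for x-line relaxation of the Laplacian.

```python
import numpy as np

def smoothing_factor(symbol, m=256):
    """mu_1 = sup of |S(th1, th2)| over max(|th1|, |th2|) >= pi/2,
    approximated on an m x m sample of (-pi, pi]^2."""
    th = np.linspace(-np.pi, np.pi, m, endpoint=False)
    t1, t2 = np.meshgrid(th, th, indexing="ij")
    high = np.maximum(np.abs(t1), np.abs(t2)) >= np.pi / 2
    return np.abs(symbol(t1, t2))[high].max()

def kappa_line_symbol(eps_, a, h, kappa):
    """Symbol of the kappa x-line Gauss-Seidel relaxation (next subsection),
    with alpha_1 = eps/h^2 and beta = a/h."""
    a1, b = eps_ / h**2, a / h
    def S(t1, t2):
        num = (a1 * np.exp(1j * t2)
               + 0.25 * b * (1 + kappa) * (np.exp(1j * t1) - 1)
               - 0.25 * b * (1 - kappa) * (1 - np.exp(-2j * t1)))
        den = ((-a1 - b * (1.25 - 0.75 * kappa)) * np.exp(-1j * t1)
               + 4 * a1 + b * (1.25 - 0.75 * kappa)
               - a1 * (np.exp(1j * t1) + np.exp(-1j * t2)))
        return num / den
    return S

print(smoothing_factor(kappa_line_symbol(1.0, 0.0, 1.0 / 64, kappa=0.0)))
```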
§.§.§ Fourier analysis of κ splittingLet ũ^h_i,j current updated to the solution for given j line we are solving equations.For given j a new updated u̅^h_i,j for all i of that line according to{ -ϵu̅_i-1,j-2u̅_i,j+u̅_i+1,j/h_x^2}+{ -ϵu̅_i,j-1-2u̅_i,j+ũ_i,j+1/h_y^2} +a/h{(u̅_i,j-u̅_i-1,j)-κ/2(u̅_i,j-u̅_i-1,j)+1-κ/4(u̅_i,j-u̅_i-1,j) +1+κ/4(ũ_i+1,j-ũ_i,j)-1-κ/4(ũ_i,j-ũ_i-2,j) }=f_i,j,for 2 ≤ i ≤ (n_x-1) and for given value j such that 1 ≤ j ≤ n_y-1 holds. During Gauss-Seidel line relaxation, we will use previously computed new solution of line j-1 in our next new updated solution of line j. Hence error equation is written as-{ϵ/h^2+a(1.25-0.75κ)/h}e̅_i-1,j +{4ϵ/h^2 + a(1.25-0.75κ)/h}e̅_i,j-{ϵ/h^2}e̅_i+1,j-{ϵ/h^2}e̅_i,j-1 -{ϵ/h^2}ẽ_i,j+1+{a(1+κ)/4h}(ẽ_i+1,j-ẽ_i,j) -{a(1-κ)/4h}(ẽ_i,j-ẽ_i-2,j)=0 and κ-smoothing factor is denoted as |𝒮_h^κ(θ_1,θ_2)|=|α_1e^iθ_2+0.25β(1+κ)(e^iθ_1-1)-0.25β(1-κ)(1-e^-i2θ_1)/(-α_1-β(1.25-0.75κ))e^-iθ_1+4α_1+β(1.25-0.75κ)-α_1(e^iθ_1 +e^-iθ_2)|, where α_1 = ϵ/h^2 and β=a/h.Smoothing factor plot is given in Fig. <ref> Two grid iteration matrix is written asC^2h_h=I_h-P^h_2h(L_2h)^-1R^2h_hL_hand two grid error equation is defined ase^new = 𝒮^ν_2 C^2h_h𝒮^ν_1e^old=ℳ^2h_he^old.Here by multiplying C_h^2h to the space 𝕋^h_θ, whereθ∈^00 =^00- {θ:L_2h(2θ^00)=0} leaves the space invariant. C^2h_h:𝕋_θ^h⟶𝕋_θ^h.Fourier representation of two grid is performed in following wayL_h:𝕋_θ^h⟶𝕋_θ^h,L_2h:𝕋_θ^2h⟶𝕋_θ^2h R_h:𝕋_θ^h⟶𝕋_θ^2h, P_h:𝕋_θ^2h⟶𝕋_θ^hwithθ∈^00 𝒮:𝕋_θ^h⟶𝕋_θ^h (θ∈^00)Spectral radius is computed in the following wayρ^* = ρ(ℳ^2h_h) = sup_θ∈^00ρ(ℳ^2h_h(θ))=sup_θ∈^00ρ(θ),where ℳ̃^2h_h(θ)= 𝒮̃^ν_2(I_h -P̃_2h^h(L̃_2h)^-1R̃_h^2hL̃_h)𝒮̃^ν_1 ,ℳ̃^2h_h(θ)=ℳ^2h_h|_𝕋_θ^h(θ∈^00).The Fourier symbols of the multi-grid operators for each harmonic in 𝕋^h_θ is calculated as follows:S̃^ν = [ μ(θ^00); μ(θ^10); μ(θ^01); μ(θ^11) ]^ν, L̃_h = [ L̃_h(θ^00) ;L̃_h(θ^10); L̃_h(θ^01) ;L̃_h(θ^11) ], R̃_h=(R̃_h(θ^00),R̃_h(θ^10),R̃_h(θ^01),R̃_h(θ^11)), P̃_h=(P̃_h(θ^00),P̃_h(θ^10),P̃_h(θ^01),P̃_h(θ^11))^T, L̃_2h = L̃_2h( 2 θ^00 )For the transfer operatorsL̃_h(θ^**) = ∑_μ_x∈ J∑_μ_y∈ Ja^h(2)_μ_xμ_ye^iθ^**_xμ_xe^iθ^**_yμ_y L̃_2h(2θ^00) = ∑_μ_x∈ J∑_μ_y∈ Ja^2h(2)_μ_xμ_ye^iθ^00_xμ_xe^iθ^00_yμ_ySince we can always get a nonsingular matrix P same order as C^2h_h such that PC^2h_hP^-1=Q^2h_h holds, where Q_h^2h a block matrix consisting of 4 × 4 diagonal block Q̃^2h_h(θ) looks for all θ∈Θ̃_00 likeQ̃^2h_h= [ 0; 1; 1; 1 ]then the smoothing factor is equivalent toμ= sup_θ∈Θ̃_00ρ(S̃(θ)Q_h^2h(θ))=sup_θ∈Θ̃_00ρ(θ)Computation of μ is important for observing two-grid convergence during relaxation. In next Section we illustrate a criterion for two-grid convergence.§.§ Convergence criterion of hybrid splittingIn this section, we give a general criteria for the convergence study of hybrid schemes used in our EHL model problem. Let us reconsider linear systemL_κu=f,where [L_κ]_m× m a regular matrix (for definition see <cit.>) and f and u are known values. For applying hybrid splitting in above equation matrix L_κ is understood as L_κ=L_κ^Ω_ϵL_κ^Ω'_ϵ,where [L_κ^Ω_ϵ] and[L_κ^Ω'_ϵ] are regular applied splittings in Ω_ϵ={(x,y)|min(ϵ(x,y)/h_x,ϵ(x,y)/h_y)≤ 0.6} and Ω'_ϵ={(x,y)|min(ϵ(x,y)/h_x,ϵ(x,y)/h_y)> 0.6}sub-domains respectively.Now assume that [L_κ^Ω_ϵ] has the following splittingL_κ^Ω_ϵ=M_κ^Ω_ϵ-N_κ^Ω_ϵ,where M_κ^Ω_ϵ is a regular easily invertible matrix and N_κ^Ω_ϵis a positive rest matrix. 
Then our splitting can be defined as u^n+1_Ω_ϵ=u^n_Ω_ϵ-(M_κ^Ω_ϵ)^-1(L_κ^Ω_ϵ-f)Then above iteration will converge for any initial guess u^0 if following theorem holdsLet L_κ^Ω_ϵ=M_κ^Ω_ϵ-N_κ^Ω_ϵ be a regular splitting of matrix L_κ^Ω_ϵand (L_κ^Ω_ϵ)^-1≥ 0, then we haveρ((M_κ^Ω_ϵ)^-1N_κ^Ω_ϵ)=ρ((L_κ^Ω_ϵ)^-1N_κ^Ω_ϵ)/1+ρ((L_κ^Ω_ϵ)^-1N_κ^Ω_ϵ) < 1For the proof of this theorem we refer to see Varga <cit.>.Now we will prove other part of matrix splitting L_κ^Ω'_ϵ. This part of matrix there is no straightforwardsplitting is available (see <cit.>). Let L_κ^Ω'_ϵ is regular, but dense and the designing suitable splitting in the sense of Varga is complicated. Suppose if it is possible to construct nonsingular matrix L^r_κsuch that equation belowL_κ^Ω'_ϵL^r_κ=M_κ^Ω'_ϵ-N_κ^Ω'_ϵis easy to solve and we can rewrite splitting as L_κ^Ω'_ϵ=(M_κ^Ω'_ϵ-N_κ^Ω'_ϵ)L^r_κ^-1Then for above splitting our iteration is denoted asu^n+1=u^n-L^r_κ(M_κ^Ω'_ϵ)^-1(L_κ^Ω'_ϵ-f)Therefore above iteration will converge for any initial guess if following theorem holds Let (M_κ^Ω'_ϵ-N_κ^Ω'_ϵ)(L^r_κ)^-1 be a regular splitting of matrixL_κ^Ω'_ϵand (L_κ^Ω'_ϵ)^-1≥ 0, then we haveρ(L^r_κ(M_κ^Ω'_ϵ)^-1N_κ^Ω'_ϵ(L^r_κ)^-1)=ρ((L_κ^Ω'_ϵ)^-1N_κ^Ω'_ϵ(L^r_κ)^-1)/1+ρ((L_κ^Ω'_ϵ)^-1N_κ^Ω'_ϵ(L^r_κ)^-1) < 1The following theorem providing sufficient conditions for the convergence of the two-grid method Q_2 ( define in Eqn.<ref>) is due to Hackbusch.Let us assume that 𝒮_l is a smoothing operator for K_l that means there exist η(ν) and ν'(h) so that the following condition holds ||K_l𝒮_l^ν||_F← U≤η(ν) ∀ν : 1≤ν≤ν'(h),l≥ 2,η(ν) → 0 forν→∞, ν '(h) =∞orν'(h) →∞for h → 0and also assume that operator K_l is approximated accurately (by prolongation and restriction operator) in the following sense such that∃ C_A→ 0, independent of h so that||K^-1_l-P(K_l-1)^-1R||_U← F≤ C_A∀l ≥ 2then there exist h and ν∈𝐍:||Q_2,l(ν,0)||_U ← U≤ C_Aη(ν) < 1holds for ν with ν'(h_l) ≥ν≥ν(h_l) and h_2≤ h andthe two-grid method Q_2,l fromEqn. <ref> converges monotonically, independently of h.It follows straight way by taking Q_2,l(ν,0) = (K^-1_l-P(K_l-1)^-1R)(K_l𝒮_l^ν).§ NUMERICAL RESULTSIn Section <ref>, we have illustrated TVD implementation for solving linear convection-diffusion problem through a class of splittings. Now we investigate the performance of mentioned splittings and compare the results with classical defect-correction. For numerical tests we consider analytical solution as u=x^4+y^4 from Oosterlee <cit.>. All numerical computations is performed on author's personal laptop having 2GB RAM and Intel(R) Core(TM) i3-2328M CPU @ 2.20GHz.Dirichlet boundary is imposed for all test cases on domain Ω={ (x,y); -1 ≤ x ≤ 1,-1 ≤ y ≤ 1 }.For all numerical experiments, we take diffusion coefficient ϵ=10^-6 and κ=-1.0,0.0,1/3. Numerical tests are performed for the problem given as Example <ref> using Ls0 splitting, Ls1 splitting and classical defect-correction technique using hierarchical multi-level grid. Computational results of relative error and corresponding order in L^1,L^∞,L^2-norms are presented on Table <ref>- <ref> on the finest grid level (7^th level using 3 V(2,1) cycle). L^2 norm error is evaluated in the following wayL^2(k,k-1)= √(H^d∑(ũ^k-1-I_h^Hu̅^k)^2),where H is the mesh size on grid k-1, u̅^k is the converged solution on grid k and ddenotes the dimension of the problem. The order of convergence is derived asp_2=log L^2(k-1,k-2)-log L^2(k,k-1)/log 2,where p_2 is the order of discretization in L^2 norm. 
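The order computation used for the tables amounts to two small routines; the numbers in the example call are illustrative only.

```python
import numpy as np

def l2_grid_difference(u_coarse, u_fine_injected, H, d=2):
    """L2(k, k-1) = sqrt(H^d * sum (u^{k-1} - I_h^H u^k)^2)."""
    return np.sqrt(H**d * np.sum((u_coarse - u_fine_injected) ** 2))

def convergence_order(errors):
    """p = (log E(k-1) - log E(k)) / log 2 between successive grid levels."""
    e = np.asarray(errors, dtype=float)
    return (np.log(e[:-1]) - np.log(e[1:])) / np.log(2.0)

print(convergence_order([1.6e-2, 4.1e-3, 1.0e-3]))  # approximately 2nd order
```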
We also calculate the L^∞- and L^1-errors and the corresponding orders in a similar fashion. From the numerical experiments we observe that splittings Ls0 and Ls1 always show faster residual decay than classical defect-correction. Fig. <ref> and Fig. <ref> present the residual decay for the Ls0 splitting, the Ls1 splitting and the classical defect-correction technique for κ=0.0, 1/3. Moreover, the residual decay of splitting Ls1 is better than that of splitting Ls0. On the other hand, we observe that splitting Ls0 has a larger robustness range (-1.0 ≤κ≤ 0.9) than splitting Ls1 (-1.0 ≤κ≤ 0.8).§.§ Test case for numerical experiment of EHL problemIn this section, we perform numerical experiments on the EHL model defined in Section <ref>. We take the Moes (<cit.>) dimensionless parameters (denoted by M and L), where L is fixed at 10 while M is varied between 20 and 1000. For all test cases, we fix the parameter α=1.7 × 10^-8 over the domain Ω=[-2.5,2.5]×[-2.5,2.5]. In all cases, we refine the grid up to (1024+1)× (1024+1) points on the finest level, with (32+1) × (32+1) points on the coarsest level (except in the extremely high load case, where we choose a (64+1) × (64+1) coarsest grid). A class of limiters is applied to solve the problem, as discussed in Sections 3 and 4. However, for checking the performance of the splittings, we use the values κ=0.0, 1/3, -1.0 in our numerical analysis. In Fig. <ref>, we represent the film thickness profile ℋ in inverted form. Four load cases, (a) M=20, L=10, (b) M=50, L=10, (c) M=100, L=10 and (d) M=1000, L=10, are solved using the TVD schemes. The fully converged pressure and film thickness profiles and their plots are presented in Fig. <ref>-Fig. <ref>. Comparisons of the relative error in the L^2, L^1 and L^∞ norms between the κ splittings and the defect-correction scheme are presented in Tables <ref>-<ref>. The experimental results show that the order of convergence of classical defect-correction is almost the same as that of splittings L_hs1 and L_hs2. However, splittings L_hs1 and L_hs2 have slightly better residual decay than classical defect-correction, as can be seen in Fig. <ref>. § CONCLUSION Limiter-based hybrid line splittings have been outlined for solving the EHL point contact problem (in the form of an LCP) on a hierarchical multilevel grid. The key idea of such splittings is to introduce artificial diffusion only in the region of steep pressure gradients and to improve the accuracy on the remaining (smooth) part of the pressure profile. The splittings have been devised by bringing the left-hand-side matrix into M-matrix form using a second-order discretization of the Reynolds equation and moving the remaining terms to the right-hand side. Additionally, the hybrid line splitting has been designed with the help of a switch that depends on the magnitude of ϵ/h: when ϵ/h ≤ 0.6 we apply the distributive Jacobi line splitting; otherwise we apply the Gauss-Seidel line splitting when updating the solution. The switch is important, as it noticeably reduces the ill-conditioning of the discretized matrix when ϵ is almost zero. The robustness of the splittings has been analyzed by performing a series of numerical experiments, and the robustness range of each splitting has been investigated and compared with the others. For the linear κ-discretization, we have performed Fourier analysis in order to validate the multigrid convergence behavior theoretically.
Numerical experiments confirm that the performance of these hybrid line splittings is robust not only for the linear case but also for the EHL model. A remarkable feature of these splittings is that they allow the development of higher-order discretizations without losing stability in the relaxation and without the use of a double discretization scheme, such as the defect-correction technique, in the multigrid solver. Numerical experiments also confirm that the residual decay of the direct splittings is noticeably better than that of classical defect-correction. In this study, we have analyzed the performance of the splittings using known limiters available in the literature, which work satisfactorily in all the cases studied. Another remarkable advantage of the adopted splittings is that they do not demand any extra tuning parameter and produce reasonable numerical solutions for a large range of load variation. § ACKNOWLEDGMENT This work was fully funded by DST-SERB Project reference no. PDF/2017/000202 under the N-PDF fellowship program and the working group at the Tata Institute of Fundamental Research, TIFR-CAM, Bangalore. The author is highly indebted to IIT Kanpur for all kinds of support that facilitated the completion of this work. § SOME NOTATION USED IN EHL MODEL
p_H → Maximum Hertzian pressure.
η_0 → Ambient pressure viscosity.
H_00 → Central offset film thickness.
a → Radius of the point contact circle.
α → Pressure viscosity coefficient.
u_s = u_1+u_2, where u_1 and u_2 are the upper and lower surface velocities, respectively.
p_0 → Constant (p_0=1.98 × 10^8); z is the pressure viscosity index (z=0.68).
R → Reduced radius of curvature, defined as R^-1=R_1^-1 +R_2^-1, where R_1 and R_2 are the curvatures of the upper and lower contact surfaces, respectively.
L and M → Moes parameters, related as L=G(2U)^1/4, M=W(2U)^-1/2, where 2U=(η_0u_s)/(E^'R), W=F/(E'R), p_H=(3F)/(2π a^2).
σ^n+1=u^n+1-u^n denotes the difference between the latest approximate solution u^n+1 and its predecessor u^n.
http://arxiv.org/abs/1705.09520v2
{ "authors": [ "Peeyush Singh" ], "categories": [ "math.NA" ], "primary_category": "math.NA", "published": "20170526104304", "title": "Robust Numerical Solution for Solving Elastohydrodynamic Lubrication (EHL) Problems using Total Variation Diminishing (TVD) Approach" }
1]Hazim Shakhatreh 1]Abdallah Khreishah 2]Ayoub Alsarhan 3]Issa Khalil 4]Ahmad Sawalmeh 4]Noor Shamsiah Othman [1]Department of Electrical and Computer Engineering, New Jersey Institute of Technology [2]Department of Computer Information System, Hashemite University [3]Qatar Computing Research Institute, Hamad bin Khalifa University [4]UNITEN, Selangor, Malaysia Efficient 3D Placement of a UAV Using Particle Swarm Optimization [ Received: date / Accepted: date =================================================================== Unmanned aerial vehicles (UAVs) can be used as aerial wireless base stations when cellular networks go down. Prior studies on UAV-based wireless coverage typically consider an Air-to-Ground path loss model, which assumes that the users are outdoor and they are located on a 2D plane. In this paper, we propose using a single UAV to provide wireless coverage for indoor users inside a high-rise building under disaster situations (such as earthquakes or floods), when cellular networks are down. We assume that the locations of indoor users are uniformly distributed in each floor and we propose a particle swarm optimization algorithm to find an efficient 3D placement of a UAV that minimizes the total transmit power required to cover the indoor users. Unmanned aerial vehicles, Outdoor-to-Indoor path loss model, particle swarm optimization. § INTRODUCTION UAVs can be used to provide wireless coverage during emergency cases where each UAV serves as an aerial wireless base station when the cellular network goes down <cit.>. They can also be used to supplement the ground base station in order to provide better coverage and higher data rates for the users <cit.>. In order to use a UAV as an aerial wireless base station, the authors in <cit.> presented an Air-to-Ground path loss model that helped the academic researchers to formulate many important problems. The authors of <cit.> utilized this model to study the problem of UAV placement, where the objective is to minimize the number of UAVs for covering a given area. The authors of <cit.> described the tradeoff in this model. At a low altitude, the path loss between the UAV and the ground user decreases, while the probability of line of sight links also decreases. On the other hand, at a high altitude line of sight connections exist with a high probability, while the path loss increases. However, it is assumed that all users are outdoor and the location of each user can be represented by an outdoor 2D point. These assumptions limit the applicability of this model when one needs to consider indoor users. Providing good wireless coverage for indoor users is very important. According to Ericsson report <cit.>, 90% of the time people are indoor and 80% of the mobile Internet access traffic also happens indoors <cit.>. To guarantee the wireless coverage, the service providers are faced with several key challenges, including providing service to a large number of indoor users and the ping pong effect due to interference from near-by macro cells <cit.>. In this paper, we propose using a single UAV to provide wireless coverage for users inside a high-rise building during emergency cases, when the cellular network service is not available. In <cit.>, we study the problem of efficient UAV placement, where the objective is to minimize the total transmit power required to cover the entire high-rise building. We consider two cases of practical interest and provide efficient solutions to the formulated problem under these cases. 
In the first case, we find the minimum transmit power such that an indoor user with the maximum path loss can be covered. In the second case, we assume that the locations of indoor users are symmetric across the dimensions of each floor and we propose a gradient descent algorithm to find an efficient 3D placement of a UAV. Our main contribution in this paper is to study the problem of efficient UAV placement, where the objective is to minimize the total transmit power required to cover the entire high-rise building, when the locations of indoor users are uniformly distributed in each floor, we propose a particle swarm optimization algorithm for finding an efficient location of the UAV. The rest of this paper is organized as follows. In Section II, we describe the system model and a path loss model suitable for studying indoor wireless coverage. In Section III, we formulate the problem of UAV placement with an objective of minimizing the transmit power for covering the entire building. In Section IV, we present the particle swarm optimization algorithm and show how to find an efficient placement of the UAV such that the total transmit power is minimized. Finally, we present our numerical results in Section V and make concluding remarks in Section VI. § SYSTEM MODEL §.§ System Settings Let (x_UAV,y_UAV,z_UAV) denote the 3D location of the UAV. We assume that all users are located inside a high-rise building as shown in Figure <ref>, and use (x_i,y_i,z_i) to denote the location of user i. The dimensions of the high-rise building are [0,x_b] × [0,y_b] × [0,z_b]. Also, let d_3D,i be the 3D distance between the UAV and indoor user i, let θ_i be the incident angle , and let d_2D,i be the 2D indoor distance of user i inside the building. §.§ Outdoor-Indoor Path Loss Model The Air-to-Ground path loss model presented in <cit.> is not appropriate when we consider wireless coverage for indoor users, because this model assumes that all users are outdoor and located at 2D points. In this paper, we adopt the Outdoor-Indoor path loss model, certified by the ITU <cit.>. The path loss is given as follows: L_i=L_F+L_B+L_I=                  (wlog_10d_3D,i+wlog_10f_Ghz+g_1)+ (g_2+g_3(1-cosθ_i)^2)+(g_4d_2D,i) where L_F is the free space path loss, L_B is the building penetration loss, and L_I is the indoor loss. In this model, we also have w=20, g_1=32.4, g_2=14, g_3=15, g_4=0.5 <cit.> and f_Ghz is the carrier frequency (2Ghz). Note that there is a key tradeoff in the above model when the horizontal distance between the UAV and a user changes. When this horizontal distance increases, the free space path loss (i.e., L_F) increases as d_3D,i increases, while the building penetration loss (i.e., L_B) decreases as the incident angle (i.e., θ_i) decreases. Similarly, when this horizontal distance decreases, the free space path loss (i.e., L_F) decreases as d_3D,i decreases, while the building penetration loss (i.e., L_B) increases as the incident angle (i.e., θ_i) increases. § PROBLEM FORMULATION Consider a transmission between a UAV located at (x_UAV,y_UAV,z_UAV) and an indoor user i located at (x_i,y_i,z_i). The rate for user i is given by: C_i=Blog_2(1+P_t,i/L_iN) where B is the transmission bandwidth of the UAV, P_t,i is the UAV transmit power to indoor user i, L_i is the path loss between the UAV and indoor user i and N is the noise power. In this paper, we do not explicitly model interference, and instead, implicitly model it as noise. 
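For later reference, the path loss model of Eq. (1) and the objective of the next section can be evaluated directly. In the sketch below the geometry is simplified: users sit at (x, y, z) behind a facade in the plane x = 0 (so the indoor horizontal distance is |x_i|), and the incident angle is measured from the horizontal. These geometric conventions are assumptions of the illustration, not specifications from the model.

```python
import numpy as np

def path_loss_db(d3d, theta, d2d_in, f_ghz=2.0):
    """Outdoor-to-indoor loss L = LF + LB + LI in dB, with w = 20,
    g1 = 32.4, g2 = 14, g3 = 15, g4 = 0.5 as in Eq. (1)."""
    LF = 20.0 * np.log10(d3d) + 20.0 * np.log10(f_ghz) + 32.4
    LB = 14.0 + 15.0 * (1.0 - np.cos(theta)) ** 2
    LI = 0.5 * d2d_in
    return LF + LB + LI

def total_loss(uav, users):
    """Objective L_Total = sum_i L_i for a candidate UAV position."""
    x, y, z = uav
    total = 0.0
    for xi, yi, zi in users:
        d3d = np.sqrt((x - xi) ** 2 + (y - yi) ** 2 + (z - zi) ** 2)
        theta = np.arcsin(abs(z - zi) / d3d)   # incident angle (assumed)
        total += path_loss_db(d3d, theta, abs(xi))
    return total

users = [(5.0, 10.0, 3.0 * k) for k in range(1, 11)]  # one user per floor
print(total_loss((-20.0, 10.0, 15.0), users))
```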
Let us assume that each indoor user has a channel of bandwidth B/M, where M is the number of users inside the building, and that the rate requirement for each user is v. Then the minimum power required to satisfy this rate for each user is given by: P_t,i,min=(2^v.M/B-1)⋆ N⋆ L_i. Our goal is to find an efficient location of the UAV such that the total transmit power required to satisfy the rate requirement of each indoor user is minimized. The objective function can be represented as: P=∑_i=1^M(2^v.M/B-1)⋆ N⋆ L_i, where P is the total transmit power of the UAV. Since (2^v.M/B-1)⋆ N is constant, our problem can be formulated as: min_x_UAV,y_UAV,z_UAV L_Total=∑_i=1^ML_i subject to x_min≤ x_UAV≤ x_max, y_min≤ y_UAV≤ y_max, z_min≤ z_UAV≤ z_max, L_Total≤ L_max. Here, the first three constraints represent the minimum and maximum allowed values for x_UAV, y_UAV and z_UAV. In the fourth constraint, L_max is the maximum allowable path loss and equals P_t,max/((2^v.M/B-1)⋆ N), where P_t,max is the maximum transmit power of the UAV. Finding the optimal placement of the UAV is generally difficult because the problem is non-convex. Therefore, in the next section, we present the particle swarm optimization algorithm to find an efficient solution of the formulated problem. § EFFICIENT PLACEMENT OF UAV Due to the intractability of the problem, we propose Particle Swarm Optimization (PSO) <cit.> to find an efficient 3D placement of the UAV when the locations of indoor users are uniformly distributed in each floor. In <cit.>, we prove that z_UAV=0.5z_b and y_UAV=0.5y_b when the locations of indoor users are symmetric across the dimensions of each floor; we then use the gradient descent algorithm to find the x_UAV that minimizes the transmit power required to cover the building. The particle swarm optimization algorithm starts with npop random solutions and iteratively tries to improve the candidate solutions based on the best experience of each particle (particle(i).best.location) and the best global experience (globalbest.location). In each iteration, the best location of each particle and the best global location are updated, and the velocities and locations of the particles are calculated based on them <cit.>. The velocity is given by: particle(i).velocity=w*particle(i).velocity+ c1*rand(varsize).*(particle(i).best.location -particle(i).location)+c2*rand(varsize).* (globalbest.location-particle(i).location), where w is the inertia weight, c1 and c2 are the personal and global learning coefficients, and rand(varsize) are random positive numbers. The location of each particle is updated as: particle(i).location=particle(i).location +particle(i).velocity. The pseudo code of the PSO algorithm is shown in Algorithm 1. The number of iterations (maxit) in the algorithm should be high enough to guarantee stability. § NUMERICAL RESULTS First, we assume that each floor contains 20 users and that the locations of indoor users are symmetric across the dimensions of each floor. Then, we apply the particle swarm optimization algorithm to find an efficient 3D placement of a UAV. Table I lists the parameters used in the numerical analysis. The particle swarm optimization algorithm converges to the efficient 3D UAV placement when the maximum number of iterations is equal to 50.
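Algorithm 1 can be summarized in a few lines; the population size, iteration count and coefficients below are illustrative defaults rather than the values of Table I.

```python
import numpy as np

def pso_minimize(cost, lo, hi, npop=30, maxit=50, w=0.9, c1=2.0, c2=2.0, seed=0):
    """Minimal PSO: random swarm, per-particle best and global best,
    velocity/position updates as in Eqs. (7)-(8), clipped to the bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, size=(npop, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(maxit):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Toy check on a convex bowl; the UAV cost function plugs in the same way.
best, val = pso_minimize(lambda p: np.sum((p - 1.0) ** 2),
                         lo=[-50, 0, 0], hi=[50, 50, 300])
print(best, val)
```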
The gradient descent algorithm, by comparison, converges to the efficient placement when the maximum number of iterations is equal to 100 and the step tolerance is equal to 0.01. In Figure <ref>, we show the convergence speed of the gradient descent algorithm when the building height is 200 meters. The efficient 3D placement is (-24.7967, 25, 100) and the total path loss is 7.6733*10^4. The convergence speed of the particle swarm optimization algorithm when the building height is 200 meters is shown in Figure <ref>. The efficient 3D placement is (-24.7491, 24.9419, 100.0491) and the total path loss is 7.6733*10^4. Table II lists the simulation results for building heights of 250 meters and 300 meters. As can be seen from the simulation results, both algorithms converge to the same 3D placement. Next, we assume that each floor contains 20 users and that the locations of these users are uniformly distributed in each floor. In Figure <ref>, we show the convergence speeds of the gradient descent algorithm for different building heights. The efficient 3D placements and the total costs for 200 meter, 250 meter and 300 meter buildings are (-24.7254, 25, 100) (7.8853*10^4), (-33.8180, 25, 125) (9.9855*10^4) and (-43.1170, 25, 150) (1.2154*10^5), respectively. The convergence speeds of the particle swarm optimization algorithm for different building heights are shown in Figure <ref>. The efficient 3D placements and the total costs for 200 meter, 250 meter and 300 meter buildings are (-21.7995, 37.3891, 111.7901) (7.8645*10^4), (-32.9212, 28.7125, 124.0291) (9.9725*10^4) and (-46.5898, 31.5061, 143.8588) (1.2117*10^5), respectively. As can be seen from the simulation results, the PSO algorithm provides better results: its total cost is lower than that of the GD algorithm by 37 dB to 208 dB. This is because the PSO algorithm is designed for the case in which the locations of indoor users are uniformly distributed in each floor, whereas the GD algorithm is designed for the case in which the locations of indoor users are symmetric across the dimensions of each floor. We investigate the impact of different building widths (i.e., x_b) in Figures <ref> and <ref> using the GD and PSO algorithms. We fix the building height to be 250 meters and vary the building width. As can be seen from the simulation results, the PSO algorithm again provides better results: its total cost is lower than that of the GD algorithm by 57 dB to 161 dB. Table II lists the simulation results. We notice that when the height of the building increases, the efficient horizontal point x_UAV increases in magnitude. This compensates for the increased building penetration loss due to an increased incident angle. Also, when the building width increases, the efficient horizontal distance decreases. This compensates for the increased indoor path loss due to an increased building width. § CONCLUSION In this paper, we study the problem of providing wireless coverage for users inside a high-rise building using a single UAV. Due to the intractability of the problem, we propose the particle swarm optimization algorithm to find an efficient 3D placement of a UAV that minimizes the total transmit power required to cover the indoor users, when the same number of users is uniformly distributed in each floor. In order to model more realistic scenarios, we will consider different types of user distribution in our future work. We will also study the problem of providing wireless coverage using multiple UAVs.
§ ACKNOWLEDGMENT This work was supported in part by the NSF under Grant CNS-1647170.
http://arxiv.org/abs/1705.09769v1
{ "authors": [ "Hazim Shakhatreh", "Abdallah Khreishah", "Ayoub Alsarhan", "Issa Khalil", "Ahmad Sawalmeh", "Noor Shamsiah Othman" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20170527054839", "title": "Efficient 3D Placement of a UAV Using Particle Swarm Optimization" }
By using a suitable triple cover we show how to possibly model the construction of a minimal surface with positive genus spanning all six edges of a tetrahedron, working in the space of BV functions and interpreting the film as the boundary of a Caccioppoli set in the covering space. After a question raised by R. Hardt in the late 1980's, it seems common opinion that an area-minimizing surface of this sort does not exist for a regular tetrahedron, although a proof of this fact is still missing. In this paper we show that there exists a surface of positive genus spanning the boundary of an elongated tetrahedron and having area strictly less than the area of the conic surface. Topology Induced Oscillations in Majorana Fermions in a Quasiperiodic Superconducting Chain Indubala I Satija December 30, 2023 ============================================================================================ § INTRODUCTION Finding a soap film that spans all six edges of a regular tetrahedron and is different from the cone of Figure <ref> (left) is an intriguing problem. It was discussed by Lawlor and Morgan in <cit.>, where a sketch of a possible soap film of positive genus is shown, based on an idea of R. Hardt[ A sketch of this surface was reportedly found by F. Morgan in R. Hardt's office during a visit at Stanford around 1988. F. Morgan and J. Taylor tried to find crude estimates to compare the two minimizers without success. Many years later R. Hardt himself told R. Huff about the problem, who then came out with the results that can be found in <cit.>. ]; such a surface is here reproduced in Figure <ref> (right) [ The picture itself is a computer generated image obtained by J. Taylor using the Surface Evolver of K. Brakke. ]. The same picture was subsequently included in the book <cit.>. The cone constructed from the center of the solid and spanning the six edges of the regular tetrahedron (Figure <ref> left) has been proved to be area-minimizing[ That is, ( M, 0, δ)-minimal in the sense of F.J. Almgren <cit.>.] if, roughly speaking, one imposes on the competitors the extra constraint that they divide the regular tetrahedron into four regions, one per face <cit.>, see also <cit.>. It corresponds to the actual shape that a real soap film attains when a tetrahedral frame is dipped in soapy water; it includes a T-singularity at the center, where four triple lines (Y-singularities) converging from the four vertices meet, satisfying the local constraints of an area-minimizing surface <cit.>. However, it is an open question whether a non-simply connected film, e.g. with “tunnels” connecting pairs of faces, could beat the cone; Figure <ref> (right) shows a theoretically feasible configuration of such a minimizing film. Although it seems a common opinion, based both on physical experiments and on theoretical reasons, that such a surface does not actually exist (see e.g. <cit.>), to the best of our knowledge the question still remains open. Generalizations of the problem, for instance deformations of the tetrahedron with the addition of zig-zags <cit.>, allow on the contrary the construction of a surface with the required topology that at least satisfies all local properties of a minimal film. Another generalization that could lead to interesting minimizers consists in considering anisotropic surface energies <cit.>. The surface with positive genus[This surface contains triple curves and boundaries; in this context we could not find a recognized definition of genus.
However, if we remove the two flat portions, each bounded by a side of the tetrahedron and one of the two triple curves (an operation that can be performed by deformation retraction, hence without changing the topological type), we end up with a surface without triple lines, bounded by a skew quadrilateral. We can then apply the formula χ = 2 - 2g - b, where χ is the Euler characteristic (χ = -1 in our case), g is the genus and b = 1 is the number of components of the boundary, obtaining g=1. Consistently, the retracted surface can be deformed into a disk with a handle, or equivalently into a punctured torus.] depicted in Figure <ref> includes two triple curves (curves where three sheets of the surface meet at 120^∘) and no quadruple points. Furthermore it sports two tunnels, one clearly visible, which allows one to traverse the tetrahedron entering from the front face and exiting from the back face. The other hole is located on the other side of the film and allows one to traverse the tetrahedron entering from the right lateral face and exiting from the bottom face (without crossing the soap film). Figure <ref> in Section <ref> helps to figure out the topological structure. Our first result (Section <ref>, on the basis also of the computation of the fundamental group in Section <ref>) is the construction of a covering space of degree 3 of the complement of the one-skeleton of a tetrahedron, following the lines of <cit.> (see also <cit.>), that is compatible with Figure <ref>, right. Using covering spaces allows us to treat in a neat way situations that seem hard to model using other approaches; see for instance Section <ref>, where we compare our approach with the Reifenberg approach. A small portion of the soap film somehow behaves like a sort of “portal” to a parallel (liquid) universe. More precisely, each point of ^3, after removal of a suitable set of curves (obtaining the so-called base space), has two other counterparts, for a total of three copies of the base space that are actually to be interpreted (locally) as three distinct sheets of a cover. Globally the picture is more interesting, since the covering space is constructed in such a way that, when travelling along a closed curve in the base space, the “lifted” point might find itself on a different sheet of the same fiber. This can be used as a trick to overcome the problem of treating the soap film as a transition between air and liquid. Since the liquid part has infinitesimal thickness, this would lead to the superposition of two layers, one corresponding to the air-liquid transition and the other to the transition back from liquid to air. For this reason an approach based e.g. on BV functions or Caccioppoli sets would lead to a liquid phase of measure zero and would miss completely the two superposed layers. Using the covering space overcomes the problem by adding a “fake” big set of liquid phase, lying in a different sheet of the same fiber, with an entry point corresponding to one face of the soap film and an exit point (reached by travelling along suitable closed curves in the base space) from the other face in the same position. A phase parameter u is then defined on the covering space with values in {0,1}, 0 indicating liquid and 1 indicating air, in such a way that at exactly one of the three points of a fiber we have u = 1 (air).
Looping around an edge of the tetrahedron moves each point of a fiber to another point of the same fiber, thus forcing the value of u to jump somewhere along the loop, which in turn forces the soap film to “wet” the edge.

The presence of triple curves implies that at least three sheets are required for a cover modelling the film problem; however, the natural construction, using suitable cyclic permutations of the three sheets when circling around each of the six edges of the tetrahedron, cannot produce a surface with holes. This is because any path that traverses a tetrahedron, entering from any face and exiting through another one, is for topological reasons linked to exactly one of the edges and hence forced to traverse the film. Some way to distinguish tight loops around an edge from loops that encircle the edge far away is then required.

Hence the construction is more involved and requires the introduction of two “invisible wires”. This is done in the same manner and for similar reasons as in <cit.>; see in particular Examples 7.7 and 7.8 in that paper. After constructing the covering space Y, we can adapt the machinery of <cit.> (see Section <ref>) and settle the Plateau problem in BV, with the differences that here the cut surfaces have self-intersections, and that the involved functions defined on Y, instead of taking values in an equilateral triangle (with barycenter at the origin) with the constraint of having zero sum on each fiber, take here values in {0,1}, with the restriction indicated in (<ref>) and discussed above.

Our next main result (Theorem <ref>) is to prove that, for a sufficiently elongated tetrahedron, there is a surface spanning its boundary, having the topology of the surface of Figure <ref> right, and having area strictly less than the area of the conelike configuration. Therefore, if we allow for competitors of higher genus, we expect the conelike surface not to have minimal area. We remark that our result does not cover the case of a regular tetrahedron.

Positioning the invisible wires is delicate. Indeed, we would like the invisible wires not to influence the minimal film, which requires that the film does not wet them. This is not proved here, although the numerical simulations strongly support this fact for a sufficiently elongated tetrahedron and suitably positioned invisible wires. On the other hand, the discussion in Section <ref> shows that a nonwetting relative BV-minimizer does not exclude the existence of an absolute minimizer with the structure of Figure <ref>, which we would like to exclude. Again, the numerical simulations support the conjecture that if the invisible wires are positioned sufficiently far away from the short edges, then the absolute minimizer has the required topology and does not wet the invisible wires. On the contrary, positioning the invisible wires near the short edges would produce absolute minimizers that partially wet the invisible wires. We observe that soap films that partially wet a given curve were discussed in <cit.> and proved to exist for any knotted curve in <cit.>.

In Section <ref> we describe all possible triple covers of the base space among those producing soap films wetting all the edges of a tetrahedron. We conclude the paper with Section <ref>, where we describe the results of some numerical simulations, in particular varying the length of the long edges of a tetrahedron.

§ THE BASE SPACE

In view of the symmetry[The symmetry group of the surface of Figure <ref> (right) turns out to be D_{2d}, using Schoenflies notation.
] of the desired surface, it is convenient to think of the tetrahedron as a wedge with two short edges, S_1 and S_2, and four long edges, L_i, i = 1, …, 4; for simplicity we name the one-skeleton of the wedge, i.e. the union of all edges, as W = (∪_{i=1}^2 S_i) ∪ (∪_{i=1}^4 L_i). A sketch of such a wedge is displayed in Figure <ref>.

In order to construct a proper base space for our cover, let us summarize the required properties of the soap film that we would like to obtain.
* The soap film is required to wet all six edges of the tetrahedron/wedge. From the point of view of a covering space this is achieved by requiring that any closed path that circles once around an edge at a short distance acts on the fiber with no fixed points;
* it is possible to find closed paths that suitably traverse the tetrahedron (with reference to Figure <ref>, one path entering from the front face and exiting from the back face, the other entering from the right face and exiting from the bottom face) that, when lifted to the covering space, do not move at least one point of a fiber.

The second requirement seems at first sight incompatible with the first one, since such traversing paths are actually forced to circle once around a single edge of the tetrahedron. This is unavoidable and a consequence of the topology of the graph having the six edges as arcs. However, these paths are allowed to circle the edges “far away”, and we can make the two paths, the one circling (say) the edge on the left (S_1 in Figure <ref>) at a short distance and the one traversing the visible hole in the soap film in Figure <ref>, not homotopically equivalent by introducing an obstruction in the base space in the form of an invisible wire, displayed as the left circle in Figure <ref>. The invisible wire has the purpose of making the two paths not equivalent, but at the same time we do not want it to “perturb” the soap film.

We actually need to introduce two invisible wires, in the form of two closed loops C_1 and C_2, suitably interlaced with the edges of the wedge. We name their union 𝒞 = C_1 ∪ C_2. The result is illustrated in Figure <ref>, the base space B = ℝ^3 ∖ (W ∪ 𝒞) being the complement in ℝ^3 of the system of curves displayed as thick lines. We shall use it to construct the covering space Y.

The picture follows the usual convention of inserting small gaps to denote underpasses of a curve below another. The four dots (vertices of the tetrahedron) represent points where three curves meet. This system of curves is the disjoint union of two loops (C_1 and C_2), a set of two short curves (S_1, S_2) and four long curves (L_1 to L_4) joining four points. The latter is topologically equivalent to the set of edges of a tetrahedron. Each of the two loops C_1 and C_2 circles a pair of long edges; they are called invisible wires in <cit.>, and their presence is essential to allow for jump sets (see Section <ref>) with the required topology. The loop C_1 is the one nearest to the short edge S_1.

The quantity Λ in the next definition plays an important role. Given a choice of the geometry of B, Λ is defined as the L^∞-distance[The L^∞-norm ‖x‖_∞ := max(|x_1|,|x_2|,|x_3|) is used here for convenience in view of the estimates to follow.] of the two short edges from the invisible wires of B:

Λ := min{‖x − ξ‖_∞ : x ∈ S_1 ∪ S_2, ξ ∈ C_1 ∪ C_2} = dist_{L^∞}(S_1 ∪ S_2, C_1 ∪ C_2).
In constructing the base space B we did not pay much attention to its geometry (see e.g. Figure <ref>); this is allowed as long as we study topological properties like its fundamental group, or when constructing the covering space Y.[It should be noted here that the base space B is path connected, locally path connected and semilocally simply connected <cit.>.] However, when considering the minimal film, the actual geometry becomes important. We shall then make specific choices both for the set of curves corresponding to the tetrahedral frame, straight segments with two different lengths, and for the two closed curves corresponding to the invisible wires. We point out here that the two invisible wires can be safely deformed into straight lines, one in the z direction through (−s,0,0), the other in the y direction through (s,0,0), for a suitable choice of s > 0, observing that a straight line is a closed curve in the compactification 𝕊^3 of ℝ^3. To avoid problems at infinity, where the two invisible wires would intersect, we can deform one or both of them “far away” (outside the convex hull of W).

Definition. For two parameters h > 0 and s ∈ (0,1), we define a specific geometry B = B_{h,s} for the base space as follows. The four vertices of the wedge are fixed at (h, ±1, 0), (−h, 0, ±1), and connected with straight segments of length |S_1| = |S_2| = 2, |L_i| = √(2 + 4h^2), i = 1, …, 4. The invisible wires are now selected as the straight lines C_1 = {(−sh, 0, t) : t ∈ ℝ} and C_2 = {(sh, t, 0) : t ∈ ℝ} (possibly modified far away from W). The special value h = √2/2 results in a regular tetrahedron, whereas for h > √2/2 we have |L_i| > |S_i|. Clearly, we have Λ(B_{h,s}) = h(1−s).
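As a quick sanity check on the definition above, the following short Python sketch (ours, not part of the construction; the function name wedge_data is illustrative) evaluates the edge lengths and the quantity Λ = h(1−s):

from math import sqrt

def wedge_data(h, s):
    # vertices of the wedge: (h, +-1, 0) and (-h, 0, +-1)
    short = 2.0                      # |S_1| = |S_2|
    long_edge = sqrt(2 + 4 * h**2)   # |L_i|, i = 1..4
    lam = h * (1 - s)                # Lambda(B_{h,s})
    return short, long_edge, lam

print(wedge_data(sqrt(2) / 2, 0.1))  # long edge = 2.0: regular tetrahedron
print(wedge_data(4.0, 0.1))          # elongated wedge: long edge ~ 8.12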
§ COMPUTING THE FUNDAMENTAL GROUP

We shall occasionally need to fix a base point x_0 in B. It is positioned far away from the set of curves; its actual position is inessential and we shall think of it as the eye of the observer. Equivalently, we can position it at infinity after compactification of ℝ^3 into 𝕊^3.[Since a single point in three-space cannot obstruct a closed path, adding the point at infinity to ℝ^3 does not impact the computation of the fundamental group, nor will it make any difference in the construction of the covering space. For that matter, it also makes no difference to substitute ℝ^3 with a big ball compactly containing the tetrahedron.]

The fundamental group π_1(B) can be computed by using a technique similar to the construction of the Wirtinger presentation of a knot group. We position the base point x_0 above the picture and select a set of closed curves a, b, c, d, e that will serve as generators of the group. These curves (representing elements of π_1(B)) are displayed in Figure <ref> as arrows, to be interpreted as curves that start at x_0, run straight to the tail of one of the arrows, follow the arrow below one or two arcs of the system of curves, and finally go back straight to x_0.

In order to prove that a, b, c, d, e generate the whole fundamental group, it is enough to construct curves that loop around each of the pieces of curve running from an underpass or node to another underpass or node. As an example, the product c^{-1}d is equivalent to a curve that loops around the piece of intermediate edge running from one disk to the other (L_{2,2} in the notation fixed below). This can be seen by observing that loop d can be dragged from the right to the left of the top-right circle. At this point we can construct b c^{-1} d, looping around the bottom-right piece of curve L_{4,2} from the underpass to the lower-right node. Curve c^{-1}b corresponds to one of the four arcs (L_{3,2} in the notation below) into which the long edge connecting the lower-left node to the upper-right one is divided. This can be seen by observing that modifying b by extending its head to pass under L_{3,2} (Figure <ref>) gives a curve that is homotopic to c.

Traversing an underpass can be achieved by conjugation with the loop corresponding to the overpass, which allows us to obtain all curves associated with the long edges. Products of two of these finally allow us to loop around the short edges on the left and on the right. We end up with the following table, where the second index denotes which piece of the long arc of the wedge we are referring to (from left to right):

L_{1,1} → c,  L_{1,2} → e c e^{-1},
L_{2,1} → a c^{-1} d a^{-1},  L_{2,2} → c^{-1} d,  L_{2,3} → e c^{-1} d e^{-1},
L_{3,2} → c^{-1} b,  L_{3,3} → d^{-1} b c^{-1} d,
L_{4,1} → a d^{-1} c b^{-1} a^{-1},  L_{4,2} → d^{-1} c b^{-1},
S_1 → a d^{-1} c a^{-1} c^{-1},  S_2 → b c^{-1} e c e^{-1}.

We omit the values associated with L_{3,1} and L_{3,4} (readily deducible by conjugation due to traversal of, respectively, L_{2,1} and L_{2,3}), since we shall not need them. Each crossing provides a relation among the three curves involved. By collecting all such relations and simplifying we finally end up with the presentation (five generators and two relators)[This is actually a right-angled Artin group.]

π_1(B) = ⟨a, b, c, d, e; ab = ba, de = ed⟩.

A different and more direct way to obtain the presentation (<ref>) consists in an ambient deformation of a tubular neighborhood of the set of curves of Figure <ref>. An important remark here is that it is possible to flip the configuration of a “Steiner-like” pair of adjacent triple junctions as shown in Figure <ref>, without changing the homotopy type of both the set of curves and of its complement in ℝ^3. This allows us to transform the set of curves of Figure <ref> by homotopy equivalence into the first configuration of Figure <ref>. Then we shrink two curves to a point in the passage from the second to the third configuration, and again one curve from the third to the last, without modifications to the homotopy type of both the set of curves and of its complement in ℝ^3. Using this equivalence we can deform the set of curves as shown in Figure <ref> to a bouquet of three loops with two linked rings, a configuration consistent with the presentation (<ref>) for the fundamental group of the complement.

It should be noted that the graphs in the sequence of Figure <ref> are not mutually homeomorphic, nor are they homeomorphic to the system of curves of Figure <ref>, whereas their complements in ℝ^3 are diffeomorphic. We have an ambient isotopy as soon as we substitute each system of curves with a small tubular neighbourhood.

§ THE TRIPLE COVER

The presence of triple curves in the soap film that we would like to reconstruct (Figure <ref>, right) implies that the covering space must be at least of degree three. There is no quadruple (tetrahedral) point in this film, so that three sheets for the cover might be sufficient. However, the particular structure of the surface (a single smooth component of the film that arrives at a triple line from two distinct directions) requires special treatment, similar to that of Examples 7.7 and 7.8 of <cit.>, with the introduction of the so-called invisible wires.
These are introduced as two copies of 𝕊^1 having the purpose of exchanging sheets 2 and 3, whereas sheet 1 is glued to itself. As we shall see in Section <ref>, the introduction of the invisible wires is essential, in the sense that a cover with three (or two) sheets of the complement of the six tetrahedral edges is incompatible with the wetting conditions imposed on all edges.

A cut and paste construction of our cover is as follows <cit.>.
* Take three copies of the base space B (the complement in ℝ^3 of the system of curves shown in Figure <ref>): sheets 1, 2 and 3;
* cut them along the shaded surfaces[The set of shaded surfaces and the system of curves give rise to a stratification, where the two-dimensional stratum (the cutting surfaces) is divided into connected components. The gluing permutation locally associated with each component can be transported along the whole component and must close consistently along closed paths on the surface. This would be a problem for non-orientable components (which is not our case) unless the permutation is of order two. In particular this is not an issue for covers of degree 2.] displayed in Figure <ref>;
* glue the three sheets again in such a way that when crossing the large surface cut the three sheets are glued cyclically. When crossing the shaded disks, sheets 2 and 3 get exchanged, whereas sheet 1 on one side is glued to sheet 1 on the other side.

The set of cutting surfaces of Figure <ref> (the cutting set) forms a stratification composed of seven pieces of (2D) smooth surface joined by four (1D) arcs (thin arcs in the picture: two triple curves and two self-intersection curves) and four (0D) end-points of the two self-intersection curves. The stratification as a whole is bounded by the set of curves defining B (thick curves in the picture) and by the four vertices of W. Each piece of 2D surface is orientable. The arrows indicate a choice of orientation; the two small lunette-like dark regions on the left and on the right are oriented from front to back (however, they are not a critical part of the cutting locus, see Remark <ref>), and finally the small portion of surface on the right of the upper disk is oriented from back to front, consistent with the orientation on the left of the disk. It should be noted that the large piece of surface connecting the two disks is subjected to a twist in the region near the center of the picture, so that the top portion is oriented from back to front.

The gluing is based on the permutations shown in the picture (expressed in cycle notation) when crossing the surface in the direction of the arrows. Local triviality entails a constraint at the two intersection curves between the disks and the other surfaces: a tight loop around one of such intersections is contractible in B, hence the composition of the four permutations associated with crossings of the cuts (or their inverses if the loop crosses in the opposite direction with respect to the cut orientation) must give the identity. This results in a constraint upon the permutations on the left and on the right: they must be related by a conjugation defined by the transposition (2,3) associated with the disks. On the lunette surface to the left the permutation is (1,3,2) when crossing from front to back. It cannot be chosen arbitrarily, because of the local triviality property: a tight loop around the triple curve is contractible, hence the product of the three permutations associated with crossings of the cuts (or their inverses, according to orientation) must be the identity.
Similarly, the permutation associated with the right lunette is forcibly given by (1,2,3). We denote by p : Y → B the cover defined in this way.

Remark. It is not actually necessary to use triple curves in the definition of the cover; indeed the left and right small fins could be removed and the two sides of the large surface could be extended up to the short lateral edges of the tetrahedral wedge. The chosen cut surface just mimics the structure that we expect for the minimizing film.

The local triviality of the cover allows us to naturally endow the covering space Y, locally, with the euclidean metric induced by p from the euclidean structure of B.

Remark. The abstract construction in particular implies that, up to isomorphisms, the covering space constructed by cut and paste is independent of the actual geometry of the cutting set, provided it has the same structure and the same gluing permutations at corresponding points. More precisely, the covering space is the same (up to isomorphisms) if the cutting set is deformed using a homeomorphism of ℝ^3 into itself with compact support, fixing the edges of W and the invisible wires 𝒞, and the gluing permutations are defined consistently.

§.§ The cover is not normal

The cover p : Y → B is clearly path connected. We claim that it is not normal <cit.>, as a consequence of the fact that sheet 1 is somehow specially treated by the gluing performed at the two disks. We recall that a cover is normal if for any pair y, η ∈ Y with p(y) = p(η) there exists a deck transformation[A deck transformation is a homeomorphism ψ : Y → Y such that p(ψ(y)) = p(y) for any y ∈ Y.] ψ : Y → Y with ψ(y) = η.

Proposition. The cover p : Y → B is not normal.

Proof. It is sufficient to show that the identity is the only deck transformation. Suppose by contradiction that ψ is a nontrivial deck transformation. Then ψ has no fixed points <cit.>. Now take y ∈ p^{-1}(x_0) in sheet 1, x_0 being the base point of B. Then ψ(y) belongs to either sheet 2 or 3; suppose for definiteness that it belongs to sheet 2. If γ is a closed path in B corresponding to the generator a (Fig. <ref>) of π_1(B), we can lift γ to Y in two distinct ways, one starting at y, the other starting at ψ(y). These paths are mapped into each other by the homeomorphism ψ; however one is closed (the one starting at y), whereas the other is open, since when traversing the disk the lifted path will continue on sheet 3. This gives a contradiction.

§.§ Abstract definition of the covering space

It is well known that an abstract definition of a covering space p_H : Y_H → B of B is based on selecting a subgroup H of π_1(B), considering the set of paths γ : [0,1] → B with γ(0) = x_0, and taking the quotient with respect to the equivalence relation

γ_1 ∼ γ_2 ⟺ γ_1(1) = γ_2(1) and [γ_1 γ_2^{-1}] ∈ H,

where γ_2^{-1} denotes the path γ_2 with opposite orientation, and defining the projection from Y_H to B as [γ] → γ(1). The degree of the cover is given by the index of H in π_1(B). We shall describe a procedure to produce a subgroup H of index 3 in π_1(B) (finitely presented in (<ref>)) and subsequently prove in Theorem <ref> that it gives a cover isomorphic to p : Y → B.

In order to construct H we need a concrete way to identify its elements when written as words in the generators of the presentation (<ref>). The first task is then to compute the actions σ_a, σ_b, σ_c, σ_d, σ_e on the fiber {x_0^1, x_0^2, x_0^3} over the base point x_0 ∈ B (the superscripts refer to the three sheets 1, 2, 3) corresponding to each generator in (<ref>). This amounts to associating to each generator the resulting permutation induced on sheets 1, 2, 3.
A quick check comparing Figures <ref> and <ref> suggests that we define

σ_b = σ_d = (),  σ_a = σ_e = (2,3),  σ_c = (1,3,2),

where () denotes the identity permutation. Observe that σ_a and σ_b commute, as well as σ_d with σ_e, so that the two relators in (<ref>) are consistent with these actions. Given an element of π_1(B) expressed as a word w in the generators, by substituting these actions for the generators in w and performing the multiplications (left to right), we are then able to compute the action of the element represented by w on the fiber {x_0^1, x_0^2, x_0^3} in terms of a permutation of the three superscripts. H will then be recovered as consisting of those words that produce a permutation fixing 1 ∈ {1,2,3}.

Using relations satisfied by the actions σ_a through σ_e, we can simulate the final multiplication after substitution in w by imposing such relations directly on the generators; the result would be the same. So we can safely add such relations to the presentation (<ref>) as extra relators[The presence of a^2, c^3 and ca = ac^2 in the list of relators is due to the fact that σ_a^2 = σ_c^3 = () and σ_c σ_a = σ_a σ_c^2.]

K := ⟨a, b, c, d, e; ab = ba, de = ed, b, d, e = a, a^2, c^3, ca = ac^2⟩

to obtain a new group K = π_1(B)/Ĥ and a projection q : π_1(B) → K, where Ĥ is the normal subgroup of π_1(B) generated by the added relators. A sequence of Tietze transformations <cit.> reduces the above presentation to

K = ⟨a, c; a^2, c^3, ca = ac^2⟩,

which is quickly seen to be isomorphic to the symmetric group S_3, with representative elements R := {a^α c^γ : α ∈ {0,1}, γ ∈ {0,1,2}} ⊂ π_1(B). Upon identification of the representative elements with their equivalence classes, the projection q can be interpreted as a projection q : π_1(B) → R.

Finally, the subgroup H ⩽ π_1(B) is defined as the set of g ∈ π_1(B) such that γ = 0 if we write q(g) as q(g) = a^α c^γ ∈ R. It corresponds to all paths in B that remain closed when lifted to Y with starting point taken in sheet 1.

Example. Consider the word w = a d^{-1} c a^{-1} c^{-1} (this word corresponds to looping once around the short edge S_1, see (<ref>)). We can remove all occurrences of d (and of b, but there is none anyway; also, we could substitute a for e if any occurrence of e were present) to obtain the word a c a^{-1} c^{-1}. Enforcing a^2 = c^3 = 1 (the empty word) we arrive at a c a c^2; using ca = ac^2 then produces a^2 c^4, which finally reduces to the normal form a^α c^γ with α = 0, γ = 1. Since γ ≠ 0 we conclude that w ∉ H.
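This reduction can also be checked mechanically by tracking the monodromy action on the three sheets. The following Python sketch (ours, not part of the construction) tests whether a word in the generators belongs to H by verifying that the composed permutation fixes sheet 1, which is equivalent to γ = 0 in the normal form:

# Permutations sigma_a..sigma_e as maps on the sheets {1,2,3}.
SIGMA = {
    'a': {1: 1, 2: 3, 3: 2},   # (2,3)
    'b': {1: 1, 2: 2, 3: 3},   # identity
    'c': {1: 3, 2: 1, 3: 2},   # (1,3,2)
    'd': {1: 1, 2: 2, 3: 3},   # identity
    'e': {1: 1, 2: 3, 3: 2},   # (2,3)
}

def inverse(p):
    return {v: k for k, v in p.items()}

def in_H(word):
    """word: list of tokens like 'a' or 'c-' (trailing '-' = inverse)."""
    sheet = 1
    for g in word:                      # multiply left to right
        p = SIGMA[g[0]]
        if g.endswith('-'):
            p = inverse(p)
        sheet = p[sheet]
    return sheet == 1                   # closed lift starting in sheet 1

# the example word w = a d^{-1} c a^{-1} c^{-1} (loop around S_1):
print(in_H(['a', 'd-', 'c', 'a-', 'c-']))   # False, so w is not in H
# a loop around the arc L_{1,1} corresponds to c:
print(in_H(['c']))                           # False: the film must jump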
Proposition. The subgroup H has index 3 in π_1(B) and it is not normal.

Proof. That H is a subgroup is a direct check. Its right cosets are obtained by right multiplication by c and c^2, showing that there are exactly three cosets (they correspond to γ = 0, 1, 2 in R). It is not a normal subgroup since a ∈ H (a = a^1 c^0, hence γ = 0), but c a c^{-1} ∉ H. Indeed q(c a c^{-1}) = q(a c) by enforcing ca = ac^2. The non-normality of H is consistent with the non-normality of the cover.

The next result ensures in particular that the approach of Section <ref> is independent of the choice of the cut surface.

Theorem. The cover p_H : Y_H → B defined by H ⩽ π_1(B) is isomorphic to the cover p : Y → B defined with the cut and paste technique.

Proof. The proof consists in a direct check that H defines the same action on the fiber over the base point x_0 ∈ B <cit.>. We first need to define a bijection between the two fibers p_H^{-1}(x_0) and p^{-1}(x_0). In view of (<ref>) the set p_H^{-1}(x_0) consists of equivalence classes of elements of π_1(B) with respect to the equivalence relation g_1 ∼ g_2 if and only if g_1 g_2^{-1} ∈ H (we slightly abuse the notation used in (<ref>) for the equivalence relation and use it here on elements of π_1(B) rather than on loops based at x_0). In other words, p_H^{-1}(x_0) consists of the three right cosets of H in π_1(B). This amounts to tagging the three right cosets with the numbers 1, 2, 3 and identifying the elements of p_H^{-1}(x_0) and p^{-1}(x_0) with the same tag. Observe that Proposition <ref> implies that there is a unique tagging that will induce the desired isomorphism. The three right cosets of H can be described as H, Hc and Hc^2 and will be tagged as 1, 3 and 2 respectively. It is now sufficient to check that the action of the generators a, b, c, d, e of π_1(B) on the fibers gives the same permutation of the tagging (recall that in the abstract construction π_1(B) acts on the fiber p_H^{-1}(x_0) by right multiplication). By comparing Figures <ref> and <ref> we see that c corresponds to the cyclic permutation (1,3,2) on p^{-1}(x_0), and the same is true in the abstract construction in view of the chosen tagging. From the definition of H we see that Ha = He = H, whereas Hca = Hce = Hc^2, corresponding to the transposition (2,3) of the tagged abstract fiber, the same as for the cut and paste construction. Finally, the two generators b and d clearly act as the identity on both p_H^{-1}(x_0) and p^{-1}(x_0).

Remark. Remark <ref> clearly applies to this construction of the covering space, so that the isomorphism of the covers constructed above is also an isometry.

§.§ Structure of the branch curves

The covering space Y, viewed as a metric space with the metric locally induced by the base space B, can be completed into Ȳ with the addition of branch curves. The projection p then naturally extends to p̄ : Ȳ → ℝ^3, which will now be a branched cover. Of particular importance are the branch curves corresponding to Cauchy sequences that converge to points belonging to the invisible loops C_1 and C_2; their structure allows us to construct functions u : Y → {0,1} whose projected jump set p(J_u) does not wet 𝒞, see Theorem <ref>. As a direct consequence of the cut and paste construction of the cover, in particular of the fact that the permutation of sheets associated with the two disks fixes sheet 1 and swaps sheets 2 and 3, we have the following

Proposition. The inverse image p̄^{-1}(C_1) consists of two connected components, p̄^{-1}(C_1) = C_1^1 ∪ C_1^{23}: C_1^1 being a curve containing no ramification points, i.e. having a small tubular neighborhood homeomorphic to its projection into a tubular neighborhood of C_1 ⊂ ℝ^3; C_1^{23} being a ramification curve of index two, i.e. having a small tubular neighborhood that projects onto its image as a branched cover of degree two. Similar properties hold for C_2. Instead, the inverse image p̄^{-1}(W) is connected, with ramification index three.

§ THE MINIMIZATION PROBLEM

We refer to <cit.> for all details on functions of bounded variation; we denote by ℋ^ℓ the ℓ-dimensional Hausdorff measure in ℝ^3, for ℓ = 1, 2. For any specific (geometric) definition of B (and hence of Y), we set

𝒟 := {u ∈ BV(Y; {0,1}) : Σ_{y ∈ p^{-1}(x)} u(y) = 1 for a.e. x ∈ B}.

Then we impose a “Dirichlet” boundary condition at infinity, and the domain of the functional is defined as

dom(ℱ) := {u ∈ 𝒟 : u(y) = 1 for a.e. y in sheet 1 of p^{-1}(x), |x| > C},

for C large enough that the ball of radius C compactly contains the solid wedge. In view of the fact that the covering is not normal (Proposition <ref>), the choice of the Dirichlet condition is now quite important.
If there is no risk of confusion we shall often drop the dependency on B and simply write 𝒟 and dom(ℱ) in place of 𝒟(B) and dom(ℱ(B)). Finally, the functional to be minimized is

ℱ(u) = ½ |Du|(Y) if u ∈ dom(ℱ),  +∞ otherwise.

The presence of the constant ½ is due to the fact that, if u jumps at a point of a sheet, then the constraint in (<ref>) forces u to jump also at the corresponding point (i.e., on the same fiber) of another sheet, while on the remaining sheet u does not jump. |Du| is the usual total variation for the scalar-valued function u, and |Du|(Y) can be defined using a partition of unity associated with a finite atlas of Y made of locally trivializing charts.

Given u ∈ dom(ℱ), we denote by J_u ⊂ Y the jump set of u. A “film surface” is defined as p(J_u) ⊂ ℝ^3, for u ∈ dom(ℱ). The film surface behaves well with respect to the jump set, in the sense that the total variation has the following representation, which specifies in which sense we are considering the notion of area:

Proposition. For all u ∈ dom(ℱ) we have ℱ(u) = ℋ^2(p(J_u)).

Proof. It is enough to repeat the arguments of <cit.>, by using local parametrizations of Y, and 2-rectifiability of the jump set of a BV function.

For a given geometry of B, the minimum value of ℱ depends on B; we set

m = m(B) := inf_{u ∈ dom(ℱ)} ℱ(u) and 𝒮 = 𝒮(B) := {p(J_u) : u ∈ dom(ℱ) and ℱ(u) = m}.

By the semicontinuity and compactness properties of BV, the infimum on the right-hand side is actually a minimum: this can be proven arguing as in <cit.>. We shall denote by u_min a function such that ℱ(u_min) = m and by Σ_min = p(J_{u_min}) ∈ 𝒮 the corresponding film surface (BV-minimizer). In particular the set 𝒮 of minimizing film surfaces is nonempty.

Theorem. Given u ∈ dom(ℱ), the set Σ = p(J_u) satisfies the following properties:
(i) any closed curve that loops around a long edge L_1, L_2, L_3 or L_4 intersects Σ;
(ii) any closed curve that loops around a short edge S_1 or S_2 at a distance smaller than Λ intersects Σ.
In particular, it is possible that a closed curve around C_1 or C_2 does not intersect Σ. By “loop around an edge” we mean that it can be continuously deformed, without crossing any edge of the tetrahedron, into a “meridian” of the edge: a loop that orbits around that edge alone at a small distance.

Proof. Let x ∉ Σ and take the precise representative of u (still denoted by u; it satisfies the constraint in (<ref>) in Y ∖ p^{-1}(Σ)). By construction of the covering space, a loop[To get an element of π_1(B) we need to select a path from x_0 to any point of the loop. The element of π_1(B) consists in first moving along such path, then following the loop, and finally going backwards to x_0 along the selected path. This does not impact the reasoning above.] in B based at x around any one of the long edges lifts to a path that moves all sheets of the fiber over x; in particular it moves the fiber point where u = 1, taking it to a fiber point where u = 0 (by the constraint in (<ref>)). This forces u to jump along the curve obtained by lifting the loop and gives (i). Property (ii) is proved similarly, by observing that a curve that loops around, say, S_1 at a distance smaller than Λ cannot also interlace 𝒞, and again, when lifted to the covering space, it moves all points of the fiber.

The following definition is of central importance and highlights the essential feature that minimizers must have in order to be a “least area soap film” for an elongated tetrahedron.

Definition (NW). For a given geometry B we say that Σ_min ∈ 𝒮(B) satisfies condition (NW) (non-wetting condition) if it does not intersect the invisible wires: Σ_min ∩ 𝒞 = ∅.
We say that the base space B satisfies condition (NW) if Σ_min ∩ 𝒞 = ∅ for every Σ_min ∈ 𝒮(B).

Remark. By compactness, if Σ_min ∈ 𝒮(B) satisfies property (NW), there exists δ > 0 such that Lipschitz deformations of Σ_min in ℝ^3 ∖ W which are the identity outside a set of diameter less than δ will not touch 𝒞, and can be recovered as jump sets of functions u ∈ dom(ℱ). Hence Σ_min is (M, 0, δ)-minimal in the sense of F.J. Almgren <cit.> and in particular satisfies the conditions proved by J. Taylor <cit.>: it is locally either a minimal surface (zero mean curvature) or three minimal surfaces meeting along a curve at 120°. No T-singularity (quadruple point) can be present, as a consequence of having only three sheets in the constructed covering. Moreover, it is clear that Σ_min is not simply connected: any closed curve that loops around the front face of the tetrahedron along the edges has nontrivial linking number with one of the invisible wires and therefore cannot be shrunk to a point by deformations on Σ_min.

§.§ Estimate of min ℱ from below

A crude estimate from below of m is a direct consequence of property (i) above; indeed property (i) is also satisfied by the minimal surface Σ_q that spans the skew quadrilateral defined by the long edges L_i, i = 1,2,3,4 (Figure <ref>). Hence m ≥ ℋ^2(Σ_q).

Lemma. We have 2h ≤ ℋ^2(Σ_q) ≤ √(4h^2 + 1) + 1.

Proof. The lower bound can be obtained by reasoning as in Theorem <ref> below, whereas the upper bound is the area of the surface obtained by taking the upper and lower faces of the wedge for x < 0, the central (unit) square in {x = 0} and the front and back faces for x > 0.

Set B = B_{h,s} for a given choice of h and s. For any u ∈ dom(ℱ), the projection Σ = p(J_u) of the jump set of u satisfies properties (i) and (ii) of Theorem <ref>. This allows us to obtain an estimate from below of its ℋ^2 measure, which refines estimates (<ref>)-(<ref>); see formula (<ref>). For t ∈ [−1,1] take the plane π_t = {x = ht}. Its intersection R_t with the wedge is a rectangle of sides 1+t and 1−t. We shall derive an estimate from below of ℋ^1(Σ ∩ π_t).

§.§.§ Case |t| > s

Since Λ = h(1−s), the L^∞-distance of π_t from C_2 if t > 0 (resp. C_1 if t < 0) is less than Λ. As a consequence, any curve in the rectangle R_t that connects its two long sides can be closed into a loop around S_2 (resp. S_1) at an L^∞-distance smaller than Λ and, in view of (ii) of Theorem <ref>, is forced to intersect Σ ∩ π_t. This, together with property (i), is enough to conclude[We have a “minimal partition” problem for the rectangle R_t into 3 sets, say A, B, C, with the requirement that one long edge is contained in A, the opposite long edge is contained in B and both short edges are contained in C. Since the local structure of a minimizer (boundary of an optimal partition) must satisfy the properties of a Steiner network, we only have a finite (and very small) set of possible configurations to consider.] that the length of Σ ∩ π_t cannot be less than the minimum between the length of the Steiner tree joining the four vertices of R_t and twice the length of the long sides of R_t. Hence

ℋ^1(Σ ∩ π_t) ≥ min{1 + √3 − (√3 − 1)|t|, 2 + 2|t|} = 2 + 2|t| if |t| < 2 − √3, 1 + √3 − (√3 − 1)|t| if |t| ≥ 2 − √3.

§.§.§ Case |t| ≤ s

We can still enforce (i) of Theorem <ref>: any curve in R_t connecting two adjacent sides can be completed into a loop around one of the long edges L_i, i ∈ {1,2,3,4}, and hence it must intersect Σ ∩ π_t. It follows that the size of Σ ∩ π_t cannot be less than twice the length of the short sides of R_t:

ℋ^1(Σ ∩ π_t) ≥ 2 − 2|t|.

Theorem. For a given choice of h and s, we have:

m ≥ 2h(4 − √3 − 2s^2) if s < 2 − √3,  m ≥ h[3 + √3 − 2(√3−1)s − (3 − √3)s^2] if s ≥ 2 − √3.

Proof. Let u ∈ dom(ℱ). Case s < 2 − √3.
Using the tangential coarea formula <cit.> and the sectional estimates (<ref>) and (<ref>), we have

ℋ^2(Σ) ≥ h ∫_{−1}^{1} ℋ^1(Σ ∩ π_t) dt ≥ 2h ∫_0^s (2 − 2t) dt + 2h ∫_s^{2−√3} (2 + 2t) dt + 2h ∫_{2−√3}^1 [1 + √3 − (√3 − 1)t] dt
= 2h[(2s − s^2) + (11 − 6√3 − s^2 − 2s) + (5√3 − 7)] = 2h(4 − √3 − 2s^2).

Case s ≥ 2 − √3. The intermediate integral now disappears. We get

ℋ^2(Σ) ≥ h ∫_{−1}^{1} ℋ^1(Σ ∩ π_t) dt ≥ 2h ∫_0^s (2 − 2t) dt + 2h ∫_s^1 [1 + √3 − (√3 − 1)t] dt = h[3 + √3 − 2(√3−1)s − (3 − √3)s^2].

Remark. Note that for s → 0^+ we obtain ℋ^2(Σ) ≥ 2h(4 − √3) and for s → 1^- we obtain ℋ^2(Σ) ≥ 2h (compare with (<ref>)).
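The integrals above can be verified symbolically; the following is a minimal sketch (an aid we add, not part of the proof) using Python's sympy:

import sympy as sp

t, s, h = sp.symbols('t s h', positive=True)
r3 = sp.sqrt(3)

# case s < 2 - sqrt(3): three sectional regimes
total = 2*h*(sp.integrate(2 - 2*t, (t, 0, s))
             + sp.integrate(2 + 2*t, (t, s, 2 - r3))
             + sp.integrate(1 + r3 - (r3 - 1)*t, (t, 2 - r3, 1)))
print(sp.simplify(total - 2*h*(4 - r3 - 2*s**2)))   # expect 0

# case s >= 2 - sqrt(3): the intermediate regime disappears
total2 = 2*h*(sp.integrate(2 - 2*t, (t, 0, s))
              + sp.integrate(1 + r3 - (r3 - 1)*t, (t, s, 1)))
print(sp.simplify(total2 - h*(3 + r3 - 2*(r3 - 1)*s - (3 - r3)*s**2)))  # expect 0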
§ A POSITIVE GENUS SURFACE BEATING THE CONELIKE CONFIGURATION

For a given h > 0 and 0 < s < 2 − √3 we stick here with the choice of the solid elongated tetrahedron given by Definition <ref>, i.e. with vertices having coordinates as in (<ref>); see Figure <ref>, the regular tetrahedron corresponding to the choice h = √2/2.

Let us denote by Σ_c a “conelike” film surface spanning the one-skeleton W. By “conelike set” (or “conelike film surface” if it has the appearance of a soap film) spanning the edges W we mean a set that separates the four faces inside the solid, i.e. such that it must intersect any path starting on one face, travelling in the interior of the solid and terminating on another face. The name “conelike” is justified by the fact that we expect a minimal film with such a separation property to be a deformed version of the minimizing cone of Figure <ref> (left).

Remark. In particular, a simply connected set containing W must separate the faces. Indeed, by contradiction, if it does not separate the faces we can construct a closed path disjoint from the set that interlaces the path along the edges of a face. The set would then be non-simply connected (in particular non-contractible).

In Figure <ref> left and Figure <ref> we find two examples, for a regular tetrahedron and an elongated tetrahedron.[Note that for an elongated tetrahedron an area-minimizing Σ_c does not satisfy the usual property of cones of being invariant under multiplication x → rx for r > 0, see the caption of Figure <ref>. This is the reason for calling Σ_c a conelike configuration, and not simply a cone.]

We shall compare Σ_c with a particular competitor Σ̂ corresponding to (i.e., being the projection of) the jump set of a BV function u in the domain of the functional ℱ (Theorem <ref>); the competitor Σ̂ will be non-simply connected. We shall show that there exists h > 1 sufficiently large such that the area of Σ̂ is less than the area of Σ_c (Theorem <ref>), giving quite strong evidence that Σ_c is not area-minimizing among minimal films if we allow for a more complex topology.

§.§ Constructing the surface Σ̂ using the triple cover

Let τ ∈ (s,1) be a parameter to be chosen later, see (<ref>). The competitor is constructed by joining five pieces, Σ̂ = Σ_1 ∪ Σ_2 ∪ Σ_3 ∪ Σ_4 ∪ Σ_v, the first four obtained by sectioning the wedge with the three planes {x = 0}, {x = ±hτ}, see Figure <ref>, and the last one being “vertical”, as follows:
Case x ∈ (−h, −hτ) and x ∈ (hτ, h): the surface Σ_i, i ∈ {1,4}, is chosen coincident with Σ_c; more precisely, Σ_1 := Σ_c ∩ {x < −hτ} and Σ_4 := Σ_c ∩ {x > hτ}.
Case x ∈ (0, hτ): the surface Σ_3 coincides with the top and bottom faces of the wedge.
Case x ∈ (−hτ, 0): the surface Σ_2 coincides with the front and back faces of the wedge.
In order to close the surface we need to add three “vertical” pieces, cumulatively denoted by Σ_v (see Figure <ref>): the union of the square obtained by intersecting the wedge with the vertical plane {x = 0} and of the parts of the two rectangles resulting from the intersection of the wedge with the two planes {x = ±hτ}.

Theorem. Let s ∈ (0, 2−√3). If τ ∈ (s,1) is small enough and h ∈ (1,+∞) is large enough depending on τ, we have ℋ^2(Σ̂) < ℋ^2(Σ_c).

Proof. We have to show that

ℋ^2(Σ_1) + ℋ^2(Σ_2) + ℋ^2(Σ_3) + ℋ^2(Σ_4) + ℋ^2(Σ_v) < ℋ^2(Σ_c),

provided τ ∈ (s,1) is small enough and h = h(τ) ∈ (1,+∞) is large enough. In view of the definition of Σ̂ we have ℋ^2(Σ̂ ∩ {|x| > hτ}) = ℋ^2(Σ_c ∩ {|x| > hτ}), and therefore inequality (<ref>) is equivalent to

ℋ^2(Σ_2) + ℋ^2(Σ_3) + ℋ^2(Σ_v) < ℋ^2(Σ_c ∩ {−hτ < x < hτ}).

As in Section <ref>, for t ∈ [0,1] the intersection of the plane π_t = {x = ht} with the wedge is a rectangle of sides 1+t and 1−t. Since Σ_c divides the solid into four disjoint regions, one per face, it follows that Σ_c ∩ π_t divides the rectangle into four disjoint regions. Hence ℋ^1(Σ_c ∩ π_t) ≥ 1 + √3 − (√3 − 1)t, the right-hand side being the length of the Steiner tree joining the four vertices of the rectangle. For a given τ ∈ (s,1) we shall need a bound from below on the section Σ_c ∩ {−hτ < x < hτ}: using the coarea formula and (<ref>), we have

ℋ^2(Σ_c ∩ {−hτ < x < hτ}) ≥ h ∫_{−τ}^{τ} ℋ^1(Σ_c ∩ π_t) dt ≥ 2h ∫_0^τ (1 + √3 − (√3 − 1)t) dt = 2(1+√3)hτ − (√3 − 1)hτ^2.

Therefore, in order to show (<ref>) it is sufficient to prove

ℋ^2(Σ_2) + ℋ^2(Σ_3) + ℋ^2(Σ_v) < 2(1+√3)hτ − (√3−1)hτ^2.

Since all intersection rectangles have the same perimeter and the central square has area equal to one, we have ℋ^2(Σ_v) ≤ 3, and so it will be sufficient to prove

ℋ^2(Σ_2) + ℋ^2(Σ_3) + 3 < 2(1+√3)hτ − (√3−1)hτ^2.

Now, the area of the top (or bottom) facet F of the wedge (the one having the vertex on the left and the basis on the right) equals √(4h^2+1); therefore

ℋ^2(F ∩ {x < hτ}) = ((1+τ)^2/4)√(4h^2+1),  ℋ^2(F ∩ {0 < x < hτ}) = ((1+τ)^2/4)√(4h^2+1) − (1/4)√(4h^2+1).

It follows that ℋ^2(Σ_2) + ℋ^2(Σ_3) = 4 ℋ^2(F ∩ {0 < x < hτ}) = (2 + τ)τ√(4h^2+1), so that (<ref>) will be proved if we show

L := (1/(hτ))[(2+τ)τ√(4h^2+1) + 3] < 2(1+√3) − (√3−1)τ =: R.

Let us select τ ∈ (s,1) sufficiently small so that 4 + 2τ < 2(1+√3) − (√3−1)τ; one possibility is e.g. to choose τ = 2−√3, consistent with τ ∈ (s,1) in view of the constraint imposed on s. Then we have lim_{h→+∞} L = 4 + 2τ < R, and the result follows.

Remark. Inequality (<ref>) is solved for 0 < τ < 2(2−√3). Values leading to inequality (<ref>) are e.g. τ = 2−√3, h = 16; they lead to the values ℋ^2(Σ̂) ≈ 22.456 + c, ℋ^2(Σ_c) ≈ 22.585 + c, where c is the common value c := ℋ^2(Σ̂ ∩ {|x| > hτ}) = ℋ^2(Σ_c ∩ {|x| > hτ}).
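The values quoted in the Remark are easy to reproduce; here is a small numerical sketch (ours), assuming, as in the text, τ = 2 − √3 and h = 16:

from math import sqrt

tau = 2 - sqrt(3)
h = 16.0

# competitor in the slab |x| < h*tau: facet pieces plus vertical pieces (<= 3)
sigma_slab = (2 + tau) * tau * sqrt(4 * h**2 + 1) + 3

# Steiner lower bound for the conelike surface in the same slab
cone_slab = 2 * (1 + sqrt(3)) * h * tau - (sqrt(3) - 1) * h * tau**2

print(round(sigma_slab, 3), round(cone_slab, 3))  # 22.456  22.585
assert sigma_slab < cone_slab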
Theorem. For any choice of s ∈ (0, 2−√3), select the base space B = B_{h,s} of Definition <ref> with h large enough (e.g. h > 16). Let Σ̂ be the competitor defined in (<ref>). Then there exists u ∈ dom(ℱ) such that p(J_u) = Σ̂. In particular, ℱ(u) = ℋ^2(Σ̂), and m < ℋ^2(Σ_c).

Proof. Fix τ = 2−√3 and let Σ̂ be the corresponding competitor defined in (<ref>). Since s < 2 − √3 = τ, it follows that the invisible wires do not intersect Σ̂ and, in view of Remark <ref>, we can assume that the cutting set in the cover construction is defined by Σ̂; we then simply define u : Y → {0,1} as 1 on the first sheet and zero otherwise. Clearly u satisfies the required constraints ensuring u ∈ dom(ℱ) and, in view of the gluing permutations (in particular u is locally constant, u = 1 on sheet 1 and u = 0 on sheets 2 and 3, on a neighborhood of the inverse image of 𝒞; compare Figure <ref>), it is easy to show that p(J_u) = Σ̂.

Remark. The function u can also be constructed directly using the abstract definition of the covering (Section <ref>), as follows. First we need to fix an orientation and choose a permutation σ ∈ S_3 for each smooth portion of Σ̂. This is done consistently with the permutations of Figure <ref>; for example, we associate the permutation (1,3,2) with the left flat “lunette” when traversing it from front to back. We also need two “phantom” disks mimicking the cutting disks of Figure <ref> (with associated permutation (2,3)) that cut e.g. the frontal trapezium with a vertical line into a right part with permutation (1,3,2) and a left part with permutation (1,2,3) (when traversed from front to back). The central vertical square has the permutation (1,3,2) associated with it when traversed from right to left. In a similar fashion we attach a suitable permutation to all the remaining (oriented) portions of the surface Σ̂, taking into account that the top trapezium is also divided in two parts by the phantom disk.

Now we first define a function û on the set of paths in B starting at the base point x_0. If γ is such a path with x := γ(1) ∉ Σ̂, we can suppose, up to a small deformation in the same homotopy class, that it has only transversal intersections with Σ̂ and no intersections with the triple curves, nor with the intersection of the phantom disks with Σ̂. Then we can enumerate the permutations associated with the intersections of γ([0,1]) with Σ̂ and the phantom disks, or their inverses (based on whether γ traverses the surface in a positive or negative direction with respect to its selected orientation), and multiply all these permutations to obtain σ_γ ∈ S_3. If the final permutation fixes 1, i.e. σ_γ(1) = 1, then we define û(γ) = 1, otherwise û(γ) = 0.

The desired function u : Y → {0,1} is now defined as u([γ]) = û(γ), where [γ] is the equivalence class of γ in (<ref>). It is necessary to show that this is a good definition, in other words that û(γ_1) = û(γ_2) whenever γ_1 ∼ γ_2, i.e. whenever [γ_1 γ_2^{-1}] ∈ H. It is readily seen that this is a consequence of the stronger requirement that û(γ) = 1 for all closed curves γ with [γ] ∈ H, which we now prove. The choice of permutations on the pieces of surface is made in such a way that the final permutation computed on a closed γ is insensitive to homotopic deformations of γ, so that we only need to show that σ_γ fixes 1 whenever [γ] ∈ H. This is true precisely because the choice of the permutations mimics the permutations used to define the covering by cut and paste, displayed in Figure <ref>.

Inequality (<ref>) is crucial in trying to actually prove the existence of a non-simply connected minimal film spanning an elongated tetrahedral frame, since it shows the existence of a surface with the desired topology having area strictly less than the minimal area achievable with conelike configurations. The candidate would be a minimizer of ℱ, since Theorem <ref> implies that m < ℋ^2(Σ_c).
However, we are still unable to conclude, because we cannot exclude that the minimizing surface interferes with the invisible wires, i.e. that it does not satisfy property (NW) of Definition <ref> (see Section <ref>). Numerical simulations however strongly suggest that with an appropriate choice of h and s this is not the case (Figure <ref>).

§.§ Comparison with the Reifenberg approach

The approach of E.R. Reifenberg <cit.> to the Plateau problem is based on Čech homology. We want to show here that, presumably, the Reifenberg approach (in three space dimensions and in codimension one) cannot reproduce a surface with the same topology as the one depicted in Figure <ref>, right.

One first fixes a compact[See <cit.> for an extension of the theory to a noncompact G.] abelian group G (for our purposes it is convenient to think of G as if G = ℤ, even if this is not compact; in what follows the choice G = ℤ_m with various values of m leads to the same considerations). In the sequel all the homology groups are isomorphic to a direct sum ⊕_{i=1}^r G of r copies of G; we shall refer to r as the rank of the homology group.

Next, given a compact subset Γ of ℝ^3, one has to minimize the Hausdorff measure ℋ^2(X) of X among all compact sets X ⊇ Γ in ℝ^3 satisfying a suitable condition, that we will specify. Here we fix Γ to be the union of the six edges of a tetrahedron. The homology group H_1(Γ;G) is seen to have rank 3, by observing that Γ is homotopic to a bouquet of three loops, and a convenient choice of the generators is:
α: (counterclockwise) loop around the front face (a product of two long edges and a short edge, with reference to Figure <ref>);
β: loop around the top face;
ℓ: loop along the long edges.

For a set X ⊇ Γ the inclusion i : Γ → X induces a homomorphism i_* : H_1(Γ;G) → H_1(X;G) between the first homology groups of Γ and X respectively, whose kernel is called the algebraic boundary of X. At this point, for a given subgroup L < H_1(Γ;G), we search for a minimizer of ℋ^2(X) among all X ∈ 𝒮(L), where 𝒮(L) is the family of compact sets whose algebraic boundary contains L.

If X is a surface with the required topology (e.g. the one of Figure <ref> or the one of Figure <ref>, or even that of Figure <ref> right), we want on one hand X ∈ 𝒮(L), and on the other hand 𝒮(L) to be as small as possible, which leads to the choice L = ker(i_*). Such a set X has first homology group H_1(X;G) of rank two, generated by i_*(α) and i_*(β), whereas ℓ is a generator of the kernel of i_*, leading to the choice of L as the subgroup of H_1(Γ;G) generated by ℓ. The family 𝒮(L) then contains subsets of ℝ^3 with first homology group of rank 2, containing Γ and with algebraic boundary containing L. Unfortunately the imposed condition on the algebraic boundary does not impose wetting of the two short edges S_1 and S_2, and indeed the surface of Figure <ref> is also in 𝒮(L); we presume it to be the Reifenberg minimizer.

§ POSITIONING THE INVISIBLE WIRES

For a fixed (sufficiently large) choice of h, the minimum value m = m(B_{h,s}) will depend on the relative position of the invisible wires with respect to the tetrahedral frame. Our first guess would be that for a wide range of positions (those for which 𝒞 does not touch Σ_min = p(J_{u_min})) this value is constant, and so is the minimizer of the functional. When 𝒞 leaves such a set of positions we would expect the minimum value to increase a bit, since in that case the invisible wires impose a further constraint on Σ_min. Indeed, the wires would “push” on the film surface and act as an obstacle as long as the deformed surface bends at the wire with an angle larger than 120 degrees.
This behaviour mimics the situation of a Steiner tree for three points at the vertices of an obtuse triangle with an angle larger than 120°. Beyond the 120° threshold we expect one of the local “J. Taylor” rules <cit.> for a minimizing film to take effect, and we observe the formation of a new (fin-like) portion of surface connecting a portion of 𝒞, let us call it the “wetted portion”, to a triple curve on the deformed surface, meeting at angles of 120°.

The story is however completely different if 𝒞 is moved to meet one or both of the two triple curves of the minimizing surface (red curves in Figure <ref>). In this situation it is energetically favorable for the surface to suddenly jump into a configuration where a large portion of 𝒞 is wetted by the flat part of the minimizer (Figure <ref>). Two new holes in the surface would then be created. Actually this would be even more dramatic, since the formation of two smooth catenoid tunnels would occur in a situation where the tunnels are too long to be stable, so we also expect the tunnels to disappear completely, with a final configuration resembling the one that would be obtained by a film that does not wet C_1 and C_2, with the addition of two flat trapezoid portions connecting e.g. C_1 with part of S_1 and with the rest of the surface (similarly on the right), see Figure <ref>. In order to rule out this possible minimizer we can derive a lower bound for the surface area in such a configuration.

Definition. For a given choice of h > 0 and 0 < s < 1, and selecting B = B_{h,s}, we say that a film surface Σ is “Steiner-like” if it satisfies properties (i), (ii) of Theorem <ref>[With p(J_u) replaced by Σ.] and moreover the intersection Σ ∩ π_t, with π_t = {x = ht} and |t| > s, separates the four sides of the rectangle R_t.

Theorem. Let s ∈ (0, 2 − √3). A Steiner-like surface Σ can be modified into a surface Σ' with the topology of the one constructed in Section <ref>, which does not wet the invisible wires and has lower area, provided we choose s ≤ s_0 with s_0 small enough and then h large enough. Consequently Σ cannot be a minimizer of the functional ℱ.

Proof. The proof mimics that of Theorem <ref>. We modify Σ in the region −hτ < x < hτ with the choice τ = 2 − √3, exactly as we did for Theorem <ref>, obtaining a surface Σ'. Using again the coarea formula, the sectional estimate (<ref>) (with the second case also if |t| < 2 − √3) and (<ref>), we have

ℋ^2(Σ ∩ {−hτ < x < hτ}) ≥ h ∫_{−τ}^{τ} ℋ^1(Σ ∩ π_t) dt ≥ 2h ∫_0^s (2 − 2t) dt + 2h ∫_s^τ [1 + √3 − (√3 − 1)t] dt
= h[(2+2√3)τ − (√3−1)τ^2 − (2√3−2)s − (3−√3)s^2],

which has to be compared with ℋ^2(Σ' ∩ {−hτ ≤ x ≤ hτ}) ≤ (2 + τ)τ√(4h^2 + 1) + 3. The only difference from the derivation of Theorem <ref> is the presence of the two terms containing the parameter s. They however vanish as s → 0^+, so that by selecting s > 0 sufficiently small we can again conclude, provided h is sufficiently large.

Remark. Specific values for s_0 and h turn out to be s_0 = (2 − √3)/4, h ≥ 40, or s_0 = (2 − √3)/100, h ≥ 16.
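These specific values can be checked numerically; a short sketch (ours, with illustrative function names), comparing the lower bound for a Steiner-like surface with the area of the modified surface in the slab |x| < hτ:

from math import sqrt

def steiner_like_lower(h, s, tau):
    # lower bound for a Steiner-like surface in the slab |x| < h*tau
    return h * ((2 + 2*sqrt(3))*tau - (sqrt(3) - 1)*tau**2
                - (2*sqrt(3) - 2)*s - (3 - sqrt(3))*s**2)

def modified_upper(h, tau):
    # area of the modified surface in the same slab (vertical part <= 3)
    return (2 + tau) * tau * sqrt(4 * h**2 + 1) + 3

tau = 2 - sqrt(3)
for s0, h in [((2 - sqrt(3)) / 4, 40.0), ((2 - sqrt(3)) / 100, 16.0)]:
    print(modified_upper(h, tau) < steiner_like_lower(h, s0, tau))  # True, True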
The result of the previous theorem suggests the following

Conjecture. Let h > 16 and s = (2 − √3)/100. Then B_{h,s} satisfies condition (NW) of Definition <ref>.

§ ALL POSSIBLE TRIPLE COVERS

We want to describe all possible covers of the base space B of degree 3 among those that produce soap films touching all six edges of the wedge and not forced to touch the invisible circular wires. Here the fundamental group obviously comes into play, since we do that by describing all possible monodromy actions on the fiber above the base point x_0. This monodromy action can equivalently be described as the action defined by a subgroup of π_1(B) of index 3 by right multiplication.

The presence of triple points in the wireframe of the tetrahedron, together with the wetting condition (implying the existence of triple lines in the minimizing film), requires the presence of at least two distinct and nontrivial monodromy actions on the fiber at infinity; hence the cover has at least degree 3.

Remark. A degree 3 cover cannot allow for the standard (i.e., conelike) minimizing film for the tetrahedron. This is due to the presence of the central quadruple point of the minimizing film.

Indeed, we shall see that all covers satisfying the constraints require a nontrivial monodromy action on the fiber when circling the invisible wires, making them an essential feature in the construction of the base space. Of course we could allow for more than two invisible wires.

Let H be a subgroup of π_1(B) of index 3, and denote by s_1 := H, s_2 := Hw_2 and s_3 := Hw_3 its right cosets, for some choice of representative elements w_2, w_3 ∈ π_1(B). The elements of π_1(B) define an action on {s_1, s_2, s_3} given by g : h ↦ hg (right multiplication by g ∈ π_1(B)). If h', h'' are in the same right coset then h'g(h''g)^{-1} = h'(h'')^{-1}, and the definition is well posed. We then have a map π_1(B) → S_3 that to any element of π_1(B) associates an element of the permutation group of the three cosets. A permutation in S_3 is interpreted as a permutation of the indices in {s_1, s_2, s_3}. By composition, this map is determined once we know its value on the generators of π_1(B). In conclusion we have permutations σ_a, σ_b, σ_c, σ_d, σ_e ∈ S_3 (permutations of {1,2,3}) associated with the generators a, b, c, d, e respectively. We now impose a number of constraints.

Consistency with relators. In the group presentation (<ref>) the two relators must be consistent with the choice of the permutations σ_a, …, σ_e. In other words, σ_a must commute with σ_b and σ_d must commute with σ_e. Take for example σ_a and σ_b; they commute if and only if one of the following mutually exclusive conditions holds:
* σ_a or σ_b is the identity permutation ();
* σ_a and σ_b are both cyclic of order 3, hence a power of (1,2,3);
* σ_a = σ_b is the same transposition.

Invisible-wire conditions. The soap film that we wish to model must not wet the two circular loops associated with generators a and e in Figure <ref>. In particular, a closed path starting at the base point in B and looping around one of such loops must not necessarily traverse the surface. Consequently the generators a and e must not move sheet 1 of the covering. The corresponding condition then reads

σ_a, σ_e ∈ {(), (2,3)};

Wetting conditions. We want to reconstruct a film that spans all six edges of the wedge. In other words, any tight loop around these edges should cross the surface. This condition is somewhat tricky to impose, particularly in situations where the cover is not normal, because we need to state it on elements of π_1(B), which requires connecting the base point to the tight loop. We end up with a condition that depends on how we choose the connecting path. Changing the path amounts to performing a conjugation on the element of π_1(B). A strong wetting condition could be that any element of π_1(B) that loops once around the selected edge must move all sheets (the permutation is required to be a derangement). This condition is insensitive to conjugation. We shall however require a weaker version of the wetting condition, by requiring that the element of π_1(B) moves sheet 1 (the sheet where u = 1 at the base point x_0, far from W). This condition however depends on how we connect the tight loop to the base point.
It seems only natural to require the connecting path to lie outside the wedge, which is not the same as requiring the corresponding Wirtinger-type loop in the diagram of Figure <ref> to move sheet 1. In particular this is not true for L_3, the long edge that in Figure <ref> runs in the back and does not cross the two disks, for which a linking path that does not enter the wedge is bc^{-1}. In the end, the (weak) wetting conditions read as:

σ_c, σ_c^{-1}σ_d, σ_bσ_c^{-1}, σ_bσ_c^{-1}σ_d, σ_aσ_d^{-1}σ_cσ_a^{-1}σ_c^{-1}, σ_bσ_c^{-1}σ_eσ_cσ_e^{-1} ∉ {(), (2,3)}.

The third condition comes from σ(L_{4,2})^{-1} σ(L_{3,3}) σ(L_{4,2}), where σ(L_{i,j}) is the permutation associated with the various long-edge arcs in (<ref>), after substitution and simplification. It corresponds to a path that, as mentioned above, starts at x_0, runs in the back of the wedge from below, then around L_3, then again in the back of the wedge and back to x_0. The fourth condition is the inverse of σ(L_{4,2}).

We shall now search for all possible choices of σ_a, σ_b, σ_c, σ_d, σ_e compatible with the three sets of constraints: (<ref>), (<ref>) and consistency with the relators of the presentation. Our search also includes the special cases where one or both of the invisible wires C_1 and C_2 are not present, since the choice, say, σ_a = () leads to the same result as the removal of C_1. We analyze separately the possibilities for all choices of σ_a and σ_e allowed by (<ref>), arriving at the conclusion that the presence of both invisible wires is essential.

§.§ Searching covers for σ_a = σ_e = ()

First note that all the constraints above are insensitive to the exchange of sheets 2 and 3. This means that for definiteness we can assume σ_c(1) = 2, which in view of the first wetting constraint in (<ref>) leaves us with only two possibilities: σ_c = (1,2) or σ_c = (1,2,3).

§.§.§ Case σ_c = (1,2,3)

Wetting constraints 2, 3, 5, 6 imply σ_d ∈ {(1,3,2),(1,2)}, resulting in σ_c^{-1}σ_d sending 3 ↦ 1. Moreover σ_b ∈ {(1,3,2),(1,3)}, resulting in σ_b sending 1 ↦ 3. This would imply σ_bσ_c^{-1}σ_d sending 1 ↦ 1, contrary to wetting constraint 4.

§.§.§ Case σ_c = (1,2)

Wetting constraints 2, 3, 5, 6 imply σ_d ∈ {(1,2,3),(1,3)}, resulting in σ_c^{-1}σ_d sending 3 ↦ 1. Moreover σ_b ∈ {(1,3,2),(1,3)}, resulting in σ_b sending 1 ↦ 3. This would imply σ_bσ_c^{-1}σ_d sending 1 ↦ 1, contrary to wetting constraint 4.

§.§ Searching covers for σ_e = (2,3)

Again we can assume σ_c(1) = 2. Reasoning as before, from σ_a = () we get σ_d ∈ {(1,3,2),(1,2,3),(1,2),(1,3)}. However σ_d and σ_e must commute, which is incompatible with σ_e = (2,3).

§.§ Searching covers for σ_a = (2,3)

Again we can assume σ_c(1) = 2. Reasoning as before, from σ_e = () we get σ_b ∈ {(1,3,2),(1,2,3),(1,2),(1,3)}, which does not commute with σ_a.

§.§ Searching covers for σ_a = σ_e = (2,3)

Consistency with the relators of the group presentation leads to

σ_b, σ_d ∈ {(), (2,3)},  σ_c ∉ {(), (2,3)}.

A direct check shows that any choice satisfying the requirements above also satisfies all the constraints for our covering. Figure <ref> shows the cover corresponding to the choice σ_b = σ_d = (), σ_c = (1,2).

Remark. The resulting cover is clearly not isomorphic to the one constructed in Section <ref>, and it is a natural question whether the minimization problem ℱ(u) → min in this context leads to the same result. This is entirely possible, but we do not want to pursue the subject here.
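The case analysis above can also be double-checked by brute force. The following Python sketch (ours) enumerates all assignments of permutations in S_3 to the generators and keeps those compatible with the relators, the invisible-wire conditions and the weak wetting conditions; note that the permutations fixing sheet 1 in S_3 are exactly () and (2,3), so each condition reduces to a test on the image of 1. The expected count is 16, i.e. the 2·2·4 choices with σ_a = σ_e = (2,3):

from itertools import permutations, product

S3 = [dict(zip((1, 2, 3), p)) for p in permutations((1, 2, 3))]

def mul(p, q):    # left-to-right composition: first p, then q
    return {x: q[p[x]] for x in (1, 2, 3)}

def inv(p):
    return {v: k for k, v in p.items()}

def word(*perms):  # compose several permutations left to right
    r = {1: 1, 2: 2, 3: 3}
    for p in perms:
        r = mul(r, p)
    return r

count = 0
for a, b, c, d, e in product(S3, repeat=5):
    if mul(a, b) != mul(b, a) or mul(d, e) != mul(e, d):
        continue                    # relators ab = ba, de = ed
    if a[1] != 1 or e[1] != 1:
        continue                    # invisible wires: sigma_a, sigma_e fix sheet 1
    wetting = [
        word(c),
        word(inv(c), d),
        word(b, inv(c)),
        word(b, inv(c), d),
        word(a, inv(d), c, inv(a), inv(c)),
        word(b, inv(c), e, c, inv(e)),
    ]
    if all(w[1] != 1 for w in wetting):   # each wetting loop must move sheet 1
        count += 1

print(count)   # expected: 16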
On the other hand we actually can find functions u in the admissible class (now redefined based on the new covering) with a jump set that is incompatible with the former definition of the spanning surface. We can construct such a function by making it jump across a big sphere that encloses the wireframe, with the sheet where its value is 1 changing from sheet 1 to sheet 3. Then it is possible to take advantage of the fact that the wetting conditions are weak: with u = 1 on sheet 3 they do not impose wetting of, e.g., the long edge e_1,1, resulting in the possibility (actually achievable) of an only partially wetted edge. Similarly for the other long edges. Clearly, however, such a surface cannot be a minimizer.

§ NUMERICAL SIMULATIONS AND CONCLUSIONS A number of numerical simulations have been performed using the software code of Emanuele Paolini. It is based on a gradient flow with artificial viscosity starting from a triangulated surface having the required topology. It does not use the setting based on coverings of the present paper; however it gives a consistent result provided that: (i) the starting surface is the jump set of some admissible u; (ii) there is no change of topology; (iii) there is no touching of the invisible wires (they are not modelled by the software). Figures <ref>, <ref> and <ref> have all been obtained by starting from suitable faceted initial surfaces, with the geometry corresponding to the choice h = 3.5; for example the result shown in Figure <ref> is obtained by starting from the faceted surface displayed in Figure <ref>.

Numerically it turns out that with h=3.5 the area of the non-simply connected minimizer is slightly greater than the area of the conelike configuration; on the contrary, increasing h to (e.g.) h=4 results in a non-simply connected film surface that numerically beats the conelike configuration, consistent with the results of Section <ref>. Decreasing h changes the minimizing evolution drastically: after a (large) number of gradient flow iterations, the film surface loses its symmetry (due to roundoff errors that break the symmetry of the problem) and one of the two tunnels shrinks at the expense of the other. The numerical evolution stops when the smaller hole completely closes, since the software cannot cope with changes of topology. Evolution after such singularization time depends on how the topology is modified. However it should be noted that the evolution would in this case typically collide with one of the invisible wires before the singularization time. It is conceivable that for this value of h the evolution would produce a stationary surface, that is, a surface that is area-minimizing among surfaces forced to have the same symmetries as the boundary frame. Decreasing h even further, in particular towards the value h = √(2)/2 that results in a regular tetrahedron, numerically produces an evolution where the two tunnels both shrink more or less self-similarly, so that we expect in the limit to obtain the area-minimizing cone of Figure <ref>, left.

At the opposite extreme we can explore what happens with ever increasing values of h (and a fixed, sufficiently small value of s). Figure <ref> shows the numerical solution for h = 20, where the x coordinate has been shrunk down in order to fit the same frame of Figure <ref>. The resemblance of the result to the constructed surface of Figure <ref> is striking and suggests the conjecture that a minimizer, when rescaled appropriately in the direction orthogonal to the short sides in order to have a fixed boundary, converges to that surface as h → +∞. This fact can also be motivated by observing that the area of a surface that is deformed by scaling of a factor k in the x direction can be computed by using an anisotropic version of the area functional, ∫ ϕ(ν) dℋ^2, with ν a unit normal vector field and ϕ a positive one-homogeneous function, here (up to a global factor k) ϕ(ν) = √(ν_x^2/k^2 + ν_y^2 + ν_z^2), whose Wulff shape is the ellipsoid {k^2 x^2 + y^2 + z^2 ≤ 1}. With increasing values of k the “vertical” portions of a surface (those with locally constant x coordinate) pay less and less in anisotropic area, and we expect that in the limit k → +∞ the anisotropic area is simply given by the integral in x of the ℋ^1 measure of the sections of the rescaled surface with vertical planes parallel to the yz coordinate plane, so that a minimizer can be obtained by separately minimizing the size of each section. This would essentially lead to the faceted surface of Figure <ref>.
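A minimal Python sketch (ours) makes the heuristic concrete. It assumes ϕ(ν) = √(ν_x^2/k^2 + ν_y^2 + ν_z^2) as above; since ϕ is one-homogeneous, applying it to area-weighted triangle normals yields the anisotropic area of a triangulated surface directly. A vertical unit facet (normal along x) should cost Θ(1/k) while a horizontal one always costs 1.

import numpy as np

def aniso_area(triangles, k):
    total = 0.0
    for p, q, r in triangles:
        n = np.cross(q - p, r - p) / 2.0      # normal with length = triangle area
        total += np.sqrt((n[0] / k) ** 2 + n[1] ** 2 + n[2] ** 2)
    return total

A = np.array
# unit square in the yz plane ("vertical") vs in the xy plane, two triangles each
vertical = [(A([0., 0, 0]), A([0., 1, 0]), A([0., 0, 1])),
            (A([0., 1, 0]), A([0., 1, 1]), A([0., 0, 1]))]
flat = [(A([0., 0, 0]), A([1., 0, 0]), A([0., 1, 0])),
        (A([1., 0, 0]), A([1., 1, 0]), A([0., 1, 0]))]
for k in (1, 10, 100):
    print(k, aniso_area(vertical, k), aniso_area(flat, k))
    # vertical facet cost decays like 1/k; the flat facet always costs 1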
Al:76 F.J. Almgren, Existence and regularity almost everywhere of solutions to elliptic variational problems with constraints, Mem. Amer. Math. Soc. 4 (1976), n. 165.
AmBePa:15 S. Amato, G. Bellettini, M. Paolini, Constrained BV functions on covering spaces for minimal networks and Plateau's type problems, Adv. Calc. Var. 4 (2015), 1–23.
AmFuPa:00 L. Ambrosio, N. Fusco, D. Pallara, Functions of Bounded Variation and Free Discontinuity Problems, Oxford Math. Monogr., Oxford, 2000.
Br:95 K. Brakke, Soap films and covering spaces, J. Geom. Anal. 5 (1995), 445–514.
Fa:16 Y. Fang, Existence of minimizers for the Reifenberg Plateau problem, Ann. Sc. Norm. Sup. Cl. Sci. Pisa XVI (2016), 817–844.
GiMoSo:98 M. Giaquinta, G. Modica, J. Soucek, Cartesian Currents in the Calculus of Variations I, volume 37 of Ergebnisse der Mathematik und ihrer Grenzgebiete, Springer-Verlag, Berlin Heidelberg, 1998.
Ha:02 A. Hatcher, Algebraic Topology, Cambridge University Press, 2002.
Hu:10 R. Huff, Conelike soap films spanning tetrahedra, Trans. Amer. Math. Soc. 362 (2010), 5063–5081.
Hu:11 R. Huff, An immersed soap film of genus one, Comm. Anal. Geom. 19 (2011), 601–631.
LaMo:94 G. Lawlor, F. Morgan, Paired calibrations applied to soap films, immiscible fluids, and surfaces or networks minimizing other norms, Pacific J. Math. 166 (1994), 55–83.
MaKaSo:76 W. Magnus, A. Karrass, D. Solitar, Combinatorial Group Theory: Presentations of Groups in Terms of Generators and Relations, Dover Publications, 1976.
Mo:08 F. Morgan, Geometric Measure Theory: A Beginner's Guide, Elsevier Science, 2008.
Pa:92 H.R. Parks, Soap-film-like minimal surfaces spanning knots, J. Geom. Anal. 2 (1992), 267–290.
Re:60 E.R. Reifenberg, Solution of the Plateau problem for m-dimensional surfaces of varying topological type, Acta Mathematica 104 (1960), 1–92.
Ta:76 J. Taylor, The structure of singularities in soap-bubble-like and soap-film-like minimal surfaces, Ann. Math. 103 (1976), 489–539.
http://arxiv.org/abs/1705.09122v3
{ "authors": [ "Giovanni Bellettini", "Maurizio Paolini", "Franco Pasquarelli" ], "categories": [ "math.GT", "math.DG" ], "primary_category": "math.GT", "published": "20170525105308", "title": "Triple covers and a non-simply connected surface spanning an elongated tetrahedron and beating the cone" }
Chiral magnetohydrodynamics for heavy-ion collisions
Yuji Hirono

The availability of very wide spectrum in millimeter wave bands combined with large antenna arrays and ultra-dense networks raises two basic questions: What is the true value of overly abundant degrees of freedom and how can networks be designed to fully exploit them? This paper determines the capacity scaling of large cellular networks as a function of bandwidth, area, number of antennas and base station density. It is found that the network capacity has a fundamental bandwidth scaling limit, beyond which the network becomes power-limited. An infrastructure multi-hop protocol achieves the optimal network capacity scaling for all network parameters. In contrast, current protocols that use only single-hop direct transmissions cannot achieve the capacity scaling in wideband regimes except in the special case when the density of base stations is taken to impractical extremes. This finding suggests that multi-hop communication will be important to fully realize the potential of next-generation cellular networks. Dedicated relays, if sufficiently dense, can also perform this task, relieving user nodes from the battery drain of cooperation. On the other hand, more sophisticated strategies such as hierarchical cooperation, which are essential for achieving capacity scaling in ad hoc networks, are unnecessary in the cellular context. Wideband regime, capacity scaling laws, cellular networks.

§ INTRODUCTION To meet the tremendous growth in demand for cellular wireless data, three new design approaches are widely considered for the evolution of next-generation systems <cit.>: * Vast spectrum available at very high frequencies, especially the millimeter wave <cit.>; * Massive Multiple-Input Multiple-Output (MIMO) for increased spatial multiplexing <cit.>; * Ultra-dense deployments of small pico- and femtocells <cit.>. Together, these technologies offer the potential of orders-of-magnitude increases in capacity and, if successful, may fundamentally change the basic constraints that dictate network design today. This possibility leads to two basic questions: What is the fundamental capacity offered by these technologies and how can networks be best designed to fully leverage their potential?

From an information theoretic perspective, millimeter wave transmissions, massive MIMO and ultra-dense deployments are all, in essence, various ways to increase the fundamental degrees of freedom of the network, which are controlled by bandwidth, number of antennas and infrastructure density, respectively. This paper attempts to characterize the capacity scaling of cellular networks as a function of the scaling of these dimensions. Our analysis follows along the lines of the classic result of Gupta and Kumar <cit.>, but applied to cellular networks rather than ad hoc networks with or without infrastructure. Specifically, we consider a large cellular network with n mobile nodes, where the key parameters such as bandwidth, number of antennas, area, and number of base stations (BSs) all scale as functions of n. In addition, traffic in this network travels between each one of the BSs and the nodes in its cell, in separate uplink and downlink phases. Our main results determine the capacity scaling by finding identically-scaling lower and upper bounds on the throughput.
The upper bound is a series of cut-set bounds in which one transmitter is cut from the rest of the network, and all the nodes and BSs on the other side of the cut cooperate perfectly, forming a virtual point-to-point MIMO system where all devices contribute to receive power and all interference is perfectly canceled. The matching lower bound, which achieves the capacity scaling, is found by considering a simple infrastructure multi-hop (IMH) protocol where transmissions are relayed to/from the closest BSs via mobile nodes within the same cell. We also study the capacity scaling of two additional protocols: The first one is the infrastructure single-hop (ISH) protocol, where transmissions are sent directly between the BS and each node within its cell, and which is the dominant paradigm in current cellular networks. The second one is the infrastructure relay multi-hop (IRH) protocol, modeled after existing two-layer network architectures, where IMH is used for wireless backhauling of additional access points called relay nodes (RNs), while user nodes only communicate directly with a single nearest access point using ISH, to prevent multi-hop implementation difficulties due to mobility, reticence to cooperation, and backwards compatibility.

Our analysis yields several important and in some cases surprising findings: * Bandwidth scaling limit: There is a “critical bandwidth scaling” that defines a maximum useful bandwidth for the whole network. Below the critical point, the capacity scales with the bandwidth, whereas if bandwidth grows faster than its critical limit the capacity becomes power-limited and additional bandwidth growth no longer improves the capacity scaling. Power- and bandwidth-limited regimes are well understood for point-to-point channels, and our results provide a generalization to cellular networks. * Benefits of increased cell density: The network capacity always grows with the BS density, whereas the benefits of increased bandwidth or number of BS antennas have a limit. This is valid as long as nodes are sufficiently separated to experience far-field propagation. * Interference alignment is not necessary: Our upper bound implicitly avoids inter-cell interference, whereas our lower bound IMH simply treats interference as noise. Since both have the same scaling, we can conclude that interference-alignment schemes, despite providing significant gains in a non-asymptotic regime <cit.>, do not alter the capacity scaling significantly. On the other hand, our analysis does not rule out that BS cooperation, achieved for example by a wired backhaul, could improve the capacity scaling over the non-cooperative BS model either with or without interference canceling. We leave this analysis for future work. * Multi-hop is optimal, and outperforms single-hop communication: The IMH protocol achieves the optimal capacity scaling in all regimes. ISH is optimal at small bandwidth scaling but performs strictly worse than IMH in regimes with wide bandwidths or large numbers of antennas. The reason is that ISH employs longer transmission distances and becomes power-limited earlier than IMH as the bandwidth scaling is increased. This suggests that, even though in today's networks capacity is bandwidth-limited and direct transmissions between the mobile nodes and the BS are efficient, in future networks with much larger bandwidths multi-hop communication may be necessary to fully achieve the network capacity.
* Hierarchical Cooperation is not necessary in cellular systems: Optimality of IMH implies that Hierarchical Cooperation (HC) cannot improve the rate scaling achieved with IMH, as opposed to dense ad-hoc networks, where multi-hop is optimal only in some regimes and HC is necessary to achieve the optimal throughput scaling otherwise <cit.>. * Wireless backhauling may be optimal, but RN density is critical: IRH performance depends on the RN density. In the best-case scenario with RNs as dense as the user nodes, the IRH rate scales as in IMH and IRH can be regarded as a practical strategy to achieve capacity scaling while avoiding mobility issues. But if the RN density is lower, the performance of IRH is suboptimal in the power-limited regimes with high bandwidth scaling, and may not offer any gains over ISH in the bandwidth-limited regime with low bandwidth scaling. * Applicability to fading and non-coherent communications: The main results in this paper are obtained under a deterministic path loss channel model with full rank and additive Gaussian noise. However, very similar network scaling laws can be readily argued for the case of the ergodic capacity in frequency-selective fading channels with channel state information (CSI) at all network nodes. For the case of non-coherent fading channels (without CSI) there are very few existing results even for the capacity of point-to-point channels <cit.>. Our results show a behavior similar to those in <cit.>, which show that in a point-to-point non-coherent wideband channel there is a critical bandwidth occupancy, such that capacity is power-limited when the bandwidth exceeds this critical value, and the critical bandwidth threshold grows with the receive power. However, these results are only for point-to-point channels, and only qualitatively similar to the operating regimes of our network capacity scaling laws. Capacity results for multi-user non-coherent channels are limited, and the scaling exponent for the regime transitions may be different in non-coherent channels than in AWGN and coherent channels, even for multiple-antenna point-to-point channels <cit.>.

§.§ Relation to Prior Work The seminal work by Gupta and Kumar <cit.> showed that the feasible rate in a dense ad-hoc network scales as R(n)=Θ(1/√(n)), where n is the number of nodes[ We use the standard f(n)=O(g(n)), f(n)=Ω(g(n)) and f(n)=Θ(g(n)) notations <cit.> to respectively represent that at sufficiently high n the function f(n) becomes less than or equal to g(n), greater than or equal to g(n), and identical to g(n) up to a constant factor. ]. Ozgur, Lévêque and Tse introduced HC and showed that it achieves linear scaling (i.e., R(n) = Θ(1)) for dense ad-hoc networks <cit.>. Franceschetti, Migliore and Minero described physical constraints which pose an ultimate limitation leading to R(n)≤Θ(log(n)^2/√(n)) <cit.>. The results of <cit.> and <cit.> differ in the channel model: <cit.> considers random i.i.d. phases between any pair of nodes, whereas <cit.> considers that as n grows, the inter-node distances become smaller than the wavelength and channel phases are determined by spatial characteristics. In <cit.>, Ozgur, Johary, Tse and Lévêque argue that linear scaling may still be achievable in a transitory regime where n is very high but finite, such that node separations are larger than the carrier wavelength and channels can still be modeled by i.i.d. random phases.
Otherwise, if n is so high that inter-node distances are shorter than the wavelength, the channel degrees of freedom scaling is spatially limited as in <cit.>. The connection between capacity scaling results with an i.i.d. random phase model and with a physical spatially-limited channel phase model is further analyzed in <cit.>, which formalizes the unification of <cit.> and <cit.>. In <cit.> the authors also replaced the traditional separate analysis of dense and extended networks with a generalized analysis of operating regimes, defining the user density scaling and determining the threshold at which the operating regime changes from dense-like to extended-like networks. More recently, the practicality of hierarchical cooperation to achieve linear scaling was put into question in <cit.>. There have been extensions of scaling laws of ad-hoc networks introducing cooperation, mobility, broadcast, infrastructure or large bandwidth. See <cit.> for a comprehensive review.

Most literature on scaling laws follows ad-hoc network models, which are not adequate representations of a cellular network, even in the case of results like <cit.> that have modeled ad-hoc networks with infrastructure support. Our analysis still uses the spatial density model for infrastructure proposed in <cit.>, but we have taken into account that data in a cellular network is required to reach the BS and this may create bottlenecks that limit scaling <cit.>. In <cit.> the analysis characterizes an “ad-hoc network with infrastructure support”, where source-destination pairs of user nodes of the same type are formed across the network, and the BS infrastructure only assists these user nodes. We consider instead a conventional cellular network traffic model, where user nodes are paired with the nearest BS, and there are typically asymmetric downlink and uplink rates with the BS as the ultimate source or destination, respectively. Note that in our multi-hop schemes nodes assist each other by forwarding information corresponding to their primary downlink/uplink exchanges with the nearest BS; however, user nodes do not maintain direct traffic flows with each other using device-to-device communications underlaying the primary cellular communications, as recently proposed <cit.>. Due to this cellular traffic model, our analysis requires novel cut-set bounds and achievable schemes, different from those in <cit.>. In addition, <cit.> considers a specific physical model for the BS antenna arrays, whereas our analysis is agnostic to the antenna model and the results are expressed instead only as a function of the effective array dimension.

The main innovation of our analysis method is evaluating the impact of very large bandwidths on capacity scaling. Most scaling analyses consider a constant finite bandwidth; however, in such setups links only become power-limited with distance, not with bandwidth. Another approach consists in a priori letting W→∞ for each finite value of n, and then letting n grow, as in <cit.>; but this does not provide insights on the interaction between bandwidth and power-limited scaling operating regimes. In our model the goal is to find out what happens between these two extremes by letting W and n increase to infinity at the same time with an arbitrary relative exponent ψ := lim_n,W→∞ log W / log n, i.e., W = Θ(n^ψ), where the two extremes correspond to ψ=0 and ψ=∞.
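As a toy illustration (ours, not from the paper) of this joint limit, one can grow the bandwidth as W = W_0 n^ψ for an arbitrary constant W_0 and read the exponent back from the ratio of logarithms, which converges to ψ as n grows:

import math

W0 = 10.0                                  # arbitrary constant prefactor
for psi in (0.0, 0.5, 2.0):
    for n in (10**3, 10**6, 10**9):
        W = W0 * n**psi
        print(psi, n, math.log(W) / math.log(n))   # approaches psi as n grows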
The results in <cit.> also identify operating regimes depending on power scaling for the ad-hoc case, which may be interpreted implicitly as a scaling bandwidth. Introducing the bandwidth exponent explicitly makes it possible to analyze the relative value of bandwidth scaling in relation to node and infrastructure density. More recently, several works have studied the impact of density in cellular wireless systems with models based on stochastic geometry <cit.>. Although this permits a fine characterization of rate beyond scaling in large networks, the ability to model multi-hop protocols through stochastic geometry is more limited, and both analysis techniques are complementary. For example, in <cit.> only two hops are possible, whereas in this paper we adopt the generalized multi-hop model with an arbitrary number of hops developed in the classic Gupta-Kumar model <cit.>.

In this paper we present scaling results characterized up to the exponents only, ignoring logarithmic variations in scaling. That is, we will not distinguish between scaling functions of the form Θ(n^x) and Θ(n^x log(n)). Since n^ϵ ≥ Θ(log(n)) holds for any ϵ>0 and for sufficiently high n, our simplification of scaling notation does not affect the main conclusions in this paper, which mainly consider the values of rate exponents and their comparison at regime transition points. While our simplification is sufficient for the particular conclusions in this paper, we do not claim that logarithmic scaling differences are irrelevant in other applications; researchers have devoted considerable effort to studying logarithmic gaps in other scenarios <cit.>.

§.§ Paper Organization This paper is structured as follows: Section <ref> describes the cellular network scaling and channel model. Section <ref> obtains an upper bound to capacity. Section <ref> describes the different achievable protocols. Section <ref> describes capacity scaling and its relation to the throughput of each protocol. Section <ref> contains observations and interpretations of the results. Finally, Section <ref> concludes the paper.

§ NETWORK AND CHANNEL MODELS §.§ Network Model We consider a sequence of cellular wireless networks indexed by n, where n is the number of single-antenna user nodes randomly and uniformly distributed in an area A. The network is supported by m BSs, each with ℓ effective antennas (see below), and communication takes place over bandwidth W. The BSs do not have the ability to perform cooperative transmission/reception through a backhaul. In the IRH protocol defined below, we also have k>m single-antenna fixed RNs that communicate with the BSs through a wireless backhaul. Table <ref> defines the scaling relation between n and the different network parameters. Here W_0, A_0, m_0, ℓ_0, k_0 are fixed constants. The exponents of the number of BSs and BS antennas are taken from <cit.>. The constraint β+γ≤1 ensures that the number of infrastructure antennas per node does not grow without bound. The scaling of the network area is as proposed by <cit.> to model a continuum of operating regimes between dense (ν=0) and extended (ν=1) networks. We introduce the bandwidth scaling exponent ψ as shown in (<ref>). We also introduce the scaling exponent ρ≥β of the number of RNs for the IRH protocol defined below. Note that ℓ is the number of effective antenna dimensions, that is, the maximum number of independent spatial dimensions over which a BS can communicate.
In a rich scattering environment ℓ is equal to the number of physical antennas, whereas if scattering is sparse, such that some physical antennas are correlated, ℓ represents the number of independent propagation paths that the array can exploit. By focusing on a given number of effective dimensions, our analysis can be applied, with appropriate values of ℓ, to many antenna array architectures and even sparse propagation models in the literature, such as <cit.>. Hereafter, we use the term “number of antennas” to simply refer to the effective array dimensions ℓ, and represent by ℓ_t and ℓ_r the effective numbers of transmit and receive array dimensions. Note that in <cit.> it is assumed that ℓ̃=n^γ̃ physical antennas are uniformly and randomly located in an area Θ(n^ν-β). Certain physical characteristics of the model in <cit.> lead to a constraint on the number of exploited transmit dimensions that scales with the perimeter of the array, Θ(n^(ν-β)/2). Therefore the effective number of transmit dimensions in <cit.> would be γ=min(γ̃,(ν-β)/2). In addition, <cit.> imposes the equality requirement β+γ̃=1, which we have relaxed to β+γ≤1 to account for any physical array model with γ≤γ̃, including but not limited to the model in <cit.>.

We consider BSs that are placed at fixed distances from each other, dividing the network into regular hexagonal cells around each BS with radius r_cell and with asymptotically (as n→∞) n/m nodes each, as in Fig. <ref>. The RNs, when present, are uniformly distributed over cells and placed in a hexagonal layout within each cell. The downlink from the BS to the nodes and the uplink from the nodes to the BS operate independently in alternate time division duplex (TDD) frames. This imposes a 1/2 penalty in rate but otherwise does not alter the scaling of capacity with n. BSs cannot receive in the downlink phase or transmit in uplink, while nodes can do both.

Due to random node placement, the rate achievable by any individual user is a random variable depending on its location and the protocol used. The following definitions are adapted from <cit.>. A downlink (uplink) rate of R_DL^x(n) (R_UL^x(n)) bits per second per node is achieved using protocol x in a realization of the cellular network if the protocol can guarantee that all nodes can receive from (transmit to) their assigned BS at least R_DL^x(n) (R_UL^x(n)) bits per second. Note that if we denote by T_DL^x(n) (T_UL^x(n)) the sum DL (UL) throughput per BS with protocol x, our definition of achievable rate requires that R_DL^x(n) ≤ (m/n) T_DL^x(n). A downlink (uplink) rate of R_DL(n) (R_UL(n)) bits per second per node is feasible in a realization of the cellular network if there exists a protocol that achieves it. In other words, R_DL(n)=sup_x R_DL^x(n) and R_UL(n)=sup_x R_UL^x(n). The definitions above result in random rates depending on the realization of node locations. The definition below is for the largest rate scaling that holds asymptotically with probability 1. The downlink (uplink) per node throughput capacity scaling C_DL(n) (C_UL(n)) of a random cellular network is of the order Θ(f(n)) if there are constants c_1<c_2 such that lim_n→∞ P(R_DL(n) ≥ c_1 f(n)) = 1 and lim_n→∞ P(R_DL(n) ≥ c_2 f(n)) < 1. We can also define the achievable rate scaling of protocol x if in the above definition we replace the feasible rates with the achievable rates using protocol x. In this paper we find an upper bound and lower bounds to the throughput capacity scaling by studying the achievable rate scaling of different protocols.
When the two have the same exponent, they give the capacity scaling.

§.§ Channel Model The discrete-time received signal observed at a receiver r, which can be either a node, a BS or an RN, is given by y_r = d_t,r^-α/2 H_t,r x_t + ∑_i∈ℐ d_i,r^-α/2 H_i,r x_i + z_r, where x_t is the signal of the intended transmitter and the set ℐ refers to interfering transmitters active at the same time and over the same frequency band. Furthermore, d_j,r, j=t or j∈ℐ, is the distance between transmitter j and receiver r, α is the path loss exponent and z_r∼𝒞𝒩(0,N_0 I_ℓ_r) is the additive white Gaussian noise. The channel gain matrix H_t,r∈ℂ^ℓ_t×ℓ_r is assumed to be full rank. The full rank assumption is justified by our interpretation of ℓ_t and ℓ_r as effective antenna dimensions. Each coefficient of the channel matrix has unit gain and an arbitrary phase, h_t,r^(i,j)=e^jθ_i,j, so that the channel squared norm satisfies ||H_t,r||^2=ℓ_rℓ_t. The channel model in (<ref>) is applicable to one symbol transmission with period T_s=1/W over a frequency-flat channel with power constraint |x_t|^2≤P_t/W, where P_t depends on the type of the transmitter and the fraction of power it dedicates towards r. The average transmission power constraints of nodes, BSs and RNs are P, P_BS and P_RN, respectively.

§ UPPER BOUND TO CAPACITY SCALING In order to obtain the upper bound in Theorem 1 below, we develop a series of m cut-set bounds on the downlink sum-rate of the users in each cell, by using the cut that separates one BS (as transmitter) from the rest of the network. Similarly, we develop n cut-set bounds on the uplink rate of each user by separating that particular user (as transmitter) from the rest of the network. The downlink throughput capacity scaling of a cellular network is upper bounded by C_DL(n)≤Θ(n^β+γ-1+min(ψ,(1-ν)α/2)) and the uplink throughput capacity scaling is upper bounded by C_UL(n)≤Θ(n^min(ψ,(1-ν)α/2)). We introduce the detailed analysis for downlink; uplink follows similarly. We first consider the case of no RNs (k=0) and then argue that the same bound holds for any k≤n.

We upper bound the sum-rate of the users served by each BS by considering a cut separating that BS from the rest of the network. Each of these m cuts upper bounds the sum downlink rate received by the approximately n/m destination users in one cell. At the receiving side of the cut there is perfect cooperation among the n receiver nodes, and the transmissions of the remaining m-1 BSs are known to the receivers and can be perfectly canceled. Hence, each cut behaves as a single MIMO channel with array dimensions ℓ_t=ℓ and ℓ_r=n. We represent the distances from each node r to BS b in a diagonal matrix D_b ≜ diag(d_b,1^-α/2, …, d_b,n^-α/2), and modify the channel expression (<ref>) to write the signals from all BSs to all nodes in the form y = D_t H_t x_t + ∑_t'≠t D_t' H_t' x_t' + z, where the second term is known to all receivers and H_t represents the channel matrix between BS t and all receivers. Using the assumption that channel matrices are full rank and ℓ≤n/m≤n, following standard arguments in <cit.>, it can be shown that T_DL^t(n), the DL sum rate in the cell of BS t, can be upper bounded as T_DL^t(n) ≤ max_{∑_i=1^ℓ P_i ≤ P_BS} W ∑_i=1^ℓ log(1+P_iλ_i^2/(WN_0)), where λ_i for i∈[1,ℓ] are the nonzero, nonnegative singular values of the matrix D_t H_t. We know that λ_i^2 ≤ ∑_i=1^ℓ λ_i^2 = tr{D_t H_t H_t^H D_t^H} ≤ ℓ ∑_r=1^n d_t,r^-α. Concavity of the logarithm implies that P_i^*=P_BS/ℓ maximizes this upper bound.
Hence T_DL^t(n) ≤ Wℓ log(1+P_BS ∑_r=1^n d_t,r^-α/(WN_0)). Notice that if lim_n→∞ P_BS ∑_r=1^n d_t,r^-α/(WN_0)=∞, then the upper bound in (<ref>) becomes degrees-of-freedom-limited and scales as Θ(Wℓ). Conversely, if lim_n→∞ P_BS ∑_r=1^n d_t,r^-α/(WN_0)=0, the upper bound is power-limited and scales as Θ(ℓ P_BS ∑_r=1^n d_t,r^-α).

The sum ∑_r=1^n d_t,r^-α can be calculated using the exponential stripping method described in <cit.>. Consider a series of concentric rings centered at the BS t with inner radius r_i=n^ν/2 e^-i/2 and outer radius r_i-1. Recall that the user density scales as n^1-ν and the network area as n^ν; thus the number of nodes contained in the disc of radius r_i is S_i ≤ n e^1-i with high probability. Using this, we can upper bound the sum over the n nodes by summing over the rings. Moreover, the smallest radius that contains one node w.h.p. is r_s=Θ(n^(ν-1)/2), so the sum ends at i≤⌊log n⌋+1. For all the outer rings i∈[1,⌊log n⌋] we can lower bound the distance to the BS by the inner radius, so that d_t,r^-α ≤ r_i^-α. In addition, the innermost disc, indexed by i=⌊log n⌋+1, contains one uniformly-distributed node location, whose contribution scales as d_t,r^-α = Θ(r_s^-α) = Θ(n^(1-ν)α/2) with high probability. Therefore ∑_r=1^n d_t,r^-α ≤ ∑_i=1^⌊log n⌋+1 S_i r_i^-α ≤ [∑_i=1^⌊log n⌋ n e^1-i n^-να/2 e^iα/2] + e n^(1-ν)α/2 ≤ e log(n) n^(1-ν)α/2 + e n^(1-ν)α/2 ≤ e(log n+1) n^(1-ν)α/2, where the third inequality follows because for α>2 the summands grow with i, and each satisfies n e^1-i e^iα/2 = e n e^i(α/2-1) ≤ e n e^(α/2-1)log n = e n^α/2.

Examining Table <ref>, this leads to T_DL^t(n) ≤ Θ(n^γ+min(ψ,(1-ν)α/2)). Now, by symmetry of the upper bound over all BSs, and by the definition of feasible rate as guaranteed to all users, the throughput capacity of the network is upper bounded by C_DL(n) ≤ (m/n) min_t T_DL^t(n) = Θ(n^β+γ-1+min(ψ,(1-ν)α/2)), completing the proof of Theorem <ref> for DL. Note that this scaling upper bound makes intuitive sense because, with probability 1 as n→∞, a disc with radius Θ(n^(ν-1)/2) around a BS contains one receiver, which combined with the array gain n^γ gives the best-case transfer of power between a single BS and the rest of the network. Also, the degrees of freedom of the cellular network cannot exceed Θ(Wmℓ).

A similar set of arguments leads to the bound for the uplink. In this case we consider n cuts, each separating one user node from the rest of the network. In this cut, all the BSs and the remaining n-1 nodes are on the receiving side, and their mutual interference is canceled. Due to the fact that the transmitting node has a single antenna (a single nonzero singular value), the degrees of freedom are Θ(W). The exponential stripping sum (the equivalent of (<ref>)) in this case needs to be evaluated over ∑_r=1^n-1 d_t,r^-α + ℓ ∑_r=1^m d_t,r^-α = Θ(n^(1-ν)α/2), leading to an upper bound on the uplink feasible rate of min_t T_UL^t(n)=Θ(n^min(ψ,(1-ν)α/2)).

Note that the above scaling laws also apply to the downlink/uplink throughput capacity scaling of a network with k<n RNs. This can be shown by evaluating the cut-set bound on an equivalent network with 2n ≥ n+k user nodes and multiplying the resulting rate per node by 2, which is always greater than (n+k)/n. The capacity scaling exponent does not change when the number of nodes is multiplied by a constant.
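The stripping estimate is easy to probe numerically. The following Monte Carlo sketch (ours; the values α=3 and ν=0.5 are arbitrary choices for the demo) samples n uniform nodes in a disc of area proportional to n^ν around a central BS and compares ∑_r d_t,r^-α with n^(1-ν)α/2; the ratio should stay roughly bounded, fluctuating with the random location of the nearest node and the neglected logarithmic factor:

import numpy as np

rng = np.random.default_rng(0)
alpha, nu = 3.0, 0.5                        # assumed exponents for the demo
for n in (10**3, 10**4, 10**5, 10**6):
    R = n ** (nu / 2)                       # disc radius: area scales as n**nu
    r = R * np.sqrt(rng.uniform(size=n))    # uniform points in a disc (CDF inversion)
    s = np.sum(r ** -alpha)
    print(n, s / n ** ((1 - nu) * alpha / 2))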
§ PROTOCOL MODELS §.§ Infrastructure Single-Hop (ISH) In the ISH protocol, BSs transmit directly to all nodes in downlink and all nodes transmit directly to the BSs in uplink. The BSs do not cooperate in transmission or reception, and the interference signals between different cells are treated as noise. There are n/m nodes uniformly distributed within each cell. The ℓ BS antennas are used for Multi-User MIMO (MU-MIMO), implementing a spatial multiplexing scheme over groups of ℓ users that allows each BS to transmit or receive ℓ signals per bandwidth resource at the same time. Each BS divides the nodes in its cell into n/(mℓ) groups of ℓ users, and assigns to each group a subchannel with bandwidth Wℓm/n. Subchannels are separated using Frequency Division Multiplexing (FDM) in DL and Frequency Division Multiple Access (FDMA) in UL. Within the subchannel of each group, ℓ simultaneously transmitted signals coexist using MU-MIMO spatial multiplexing as described in <cit.>. The dimensions of the channel matrices always allow this because γ<1-β, so there are always more nodes than BS antennas if n is sufficiently large. Also, the multi-user channel matrix, obtained by putting together all point-to-point channel matrices of nodes in the same subchannel and cell, is full rank if nodes are separated by at least a quarter of a wavelength and far-field propagation holds. In downlink, the BS transmits independent signals to each destination with equal power allocation P_BS m/n. Note that while our equal division of power and bandwidth may be suboptimal, we show, as part of the analysis of ISH, that in a scaling-law sense this power allocation suffices to get the best scaling possible with any single-hop protocol.

§.§ Infrastructure Multi-Hop (IMH) In the IMH protocol, each cell is subdivided regularly into smaller regions of area A_r called routing subcells, and information is forwarded to/from the BS via multi-hop communication using a node in each routing subcell as a relay, as shown in Fig. <ref>. For multi-hopping, the routing subcells must have at least one node with high probability, which results in A_r > (A/m) · 2log(n/m)/(n/m) <cit.>. Routes are defined as successions of transmissions between adjacent subcells, where each hop covers a distance no longer than four subcell radii, 4r_subcell ∝ √(A_r), which bounds the largest distance between any two points in adjacent subcells. Subcells alternate in becoming active using a non-scaling (i.e., constant-factor) time division schedule to avoid collisions (transmissions to the same destination subcell) and satisfy the half-duplex constraint. For example, in a hexagonal tessellation a 1/7 constant can prevent collisions, as illustrated in Fig. <ref> and in the sketch at the end of this subsection. All downlink routes start, and all uplink routes end, at the BS in the center of the cell. We call this point the head of all routes, where the BS communicates with its closest ℓ users only, using the same MU-MIMO we described for ISH with minor adaptations. Since there are ℓ nodes and ℓ BS antennas, a single channel with bandwidth W without FDMA/FDM is employed, with MU-MIMO spatial multiplexing in exactly ℓ spatial dimensions. The channel and rate models for these links are the same as in ISH, with the new bandwidth allocation and a reduced maximum distance between the BS and the destinations scaling as 4r_subcell=Θ(n^(ν-1)/2). The BS serves as head for a total of n/m routes (one per cell user), but only ℓ can be spatially multiplexed at the same time, so the routes are time-multiplexed in a round-robin fashion in the links between the BS and its neighbors, with each route being served an mℓ/n portion of the time. For the remaining hops on each route, a single node in each routing subcell forwards its received data of a single route to a single node in the next routing subcell along the path. Since each node has a single antenna, there is no MIMO and all the bandwidth and node power are exploited. Again, inter-node distances scale at most as 4r_subcell=Θ(n^(ν-1)/2).
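The constant-factor scheduling claim can be made explicit with a 7-coloring of the hexagonal subcell grid. The Python sketch below (ours) uses axial coordinates (q, r) and one standard choice, color = (q + 3r) mod 7, under which any two subcells within two hops receive distinct colors; activating one color class at a time therefore keeps simultaneously active subcells well separated while each subcell transmits a constant 1/7 of the time:

NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def color(q, r):
    return (q + 3 * r) % 7

for q in range(-4, 5):
    for r in range(-4, 5):
        for dq, dr in NEIGHBORS:
            assert color(q, r) != color(q + dq, r + dr)      # one-hop neighbors
            for dq2, dr2 in NEIGHBORS:                        # two-hop neighbors
                if (dq + dq2, dr + dr2) != (0, 0):
                    assert color(q, r) != color(q + dq + dq2, r + dr + dr2)
print("each subcell is active 1/7 of the time without collisions")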
§.§ Infrastructure Relay Multi-Hop (IRH) In the IRH protocol, the network area is divided regularly into a nested double hexagonal grid of m cells and m+k microcells, where the nodes in each microcell are served by an Access Point (AP) that is either a BS or an RN. We consider that a controller may decide to use the RNs or not, falling back to the behavior of ISH if the RNs are not sufficiently dense. In downlink, the RNs are exploited if ρ ≥ β+γ+(β-ν)α/2-ψ. In uplink, the RNs are exploited if ρ is high enough such that min(ψ-(β+γ-ρ)^+,(ρ-ν)α/2) ≥ min(ψ,(β-ν)α/2+1-β). These thresholds are justified by the analysis of IRH in the next section. When the above conditions are satisfied and the RNs are utilized, a wireless backhauling connection for all k/m RNs in each cell is provided by their closest BS using IMH. To implement backhauling, time is divided into an access phase and an interconnection phase, with relative durations τ_a∈[0,1] and 1-τ_a.

* In the access phase, for a fraction τ_a∈[0,1] of the time, in each microcell all APs exchange data with the user nodes using an ISH protocol. Signals that propagate between different microcells are treated as interference. There are n/(m+k) nodes within each microcell with high probability. Unlike BSs, RNs do not have ℓ antennas, and therefore the rates in RN microcells create a bottleneck for throughput scaling. APs use FDMA/FDM with a single antenna (no MU-MIMO), allocating transmissions to each user node on orthogonal subchannels with bandwidth W(m+k)/n. A BS transmits with per-node power allocation P_BS(m+k)/n and an RN applies the same split, P_RN(m+k)/n. Note that the capacity scaling is by definition the rate scaling of the worst user. Since microcells where the AP is a single-antenna RN are more constrained, we can assume for simplicity that BSs also have a single antenna, so that all microcells are represented equally in the analysis. Note also that even if ρ=β, the numbers m and k may differ by a constant factor and some users in the system would be served by single-antenna APs.

* In the interconnection phase, for a fraction 1-τ_a of the time, BSs exchange data with RNs using an IMH protocol. Each microcell of area A_r ∼ n^ν-ρ becomes the routing subcell of IMH, and information is forwarded to/from the BS via multi-hop communication using the single RN in each microcell as a relay, as shown in Fig. <ref>. The BS uses MU-MIMO to transmit or receive up to min(ℓ,k/m) routing paths at the same time. Note that, unlike in the IMH protocol, we are no longer guaranteed to have more RNs than transmit antennas. Each hop covers a distance of exactly two microcell radii, 2r_μcell, as RNs are regularly placed at the centers of their microcells. Microcells alternate in becoming active using a non-scaling (i.e., constant) time or frequency division schedule to avoid collisions and satisfy the half-duplex constraint.

Note that the access phase and the interconnection phase may have different scalings, which determines the optimal time allocation τ_a and the overall rate scaling of the IRH protocol. We assume that RNs have one or a fixed number of antennas that does not scale with n. This is a realistic model since, in the near future, it is likely that BSs will still have many more antennas than nodes or RNs. However, it is not difficult to extend the results in this paper to the case where the number of RN antennas also scales with n.
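Before moving to the capacity results, here is a quick Monte Carlo check (ours) of the routing-subcell occupancy condition A_r > (A/m) · 2log(n/m)/(n/m) that underlies the multi-hop routing above; a square grid of subcells is used instead of a hexagonal one for simplicity, which does not change the balls-in-bins argument:

import numpy as np

rng = np.random.default_rng(1)
for N in (10**2, 10**3, 10**4):                      # N = n/m nodes per cell
    g = max(1, int(np.sqrt(N / (2 * np.log(N)))))    # about N/(2 log N) subcells
    trials, misses = 200, 0
    for _ in range(trials):
        occupied = set(zip(rng.integers(0, g, N).tolist(),
                           rng.integers(0, g, N).tolist()))
        misses += len(occupied) < g * g              # some subcell left empty
    print(N, g * g, misses / trials)                 # empty probability -> 0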
§ CAPACITY SCALING AND RATE SCALING OF PROTOCOLS §.§ IMH Achieves Downlink Capacity Scaling Our main result is the characterization of the scaling of throughput capacity for cellular wireless networks, which is limited by the upper bound in Section <ref> and, as we show below, achieved by the IMH protocol. For the IMH protocol, the downlink rate per node scales as R^IMH_DL(n)=Θ(n^β+γ-1+min(ψ,(1-ν)α/2)) and the uplink rate per node scales as R^IMH_UL(n)=Θ(n^β+γ-1+min(ψ,(1-ν)α/2)). Appendices <ref> and <ref>. Combining the upper bound in Theorem 1 and the achievable scaling in Theorem 2, we obtain the following. The downlink per node throughput capacity scales as C_DL(n)=Θ(n^β+γ-1+min(ψ,(1-ν)α/2)) and, for β+γ=1, the uplink per node throughput capacity scales as C_UL(n)=Θ(n^min(ψ,(1-ν)α/2)). IMH is optimal for downlink, that is, it achieves the throughput capacity scaling in downlink. For uplink, IMH achieves a rate scaling within a gap no larger than n^1-β-γ to capacity, and is optimal for β+γ=1. In our model we have β+γ≤1. When scattering is rich and the scaling exponent γ of the number of independent transmit dimensions equals that of the number of physical antennas, the condition β+γ=1 corresponds to the total infrastructure investment scaling as the number of users. This, for example, is within the realm of ultra-dense networks <cit.>. In this case IMH is optimal (in terms of throughput capacity scaling) in the uplink as well. Next, we obtain the achievable rate scalings of the other protocols introduced in Sec. <ref>.

§.§ ISH is Suboptimal The ISH protocol is representative of the dominant communication mode in current cellular networks, consisting of direct transmissions between BS and nodes. Our analysis shows that single-hop protocols cannot fully exploit large bandwidths, and therefore cellular architectures must adopt multi-hop in future generations if large bandwidths are to be utilized optimally. For the ISH protocol, the downlink rate per node scales as R^ISH_DL(n)=Θ(n^β+γ-1+min(ψ,(β-ν)α/2)) and the uplink rate scales as R^ISH_UL(n)=Θ(n^β+γ-1+min(ψ,(β-ν)α/2+(1-β))). Appendix <ref>. For β=1 ISH has the same rate scaling as IMH in all regimes and is optimal. In downlink, for all β<1 we have (β-ν)α/2<(1-ν)α/2 and ISH achieves a rate scaling worse than IMH when the bandwidth scaling is ψ≥(β-ν)α/2. Similarly, in uplink, for all β<1 we have (β-ν)α/2+(1-β)<(1-ν)α/2 and ISH performs worse than IMH for ψ≥(β-ν)α/2+(1-β).

§.§ IRH Performance Depends Critically on RN Density There may be practical issues related to multi-hop implementation through mobile users as in IMH, and the use of static dedicated RNs in this protocol provides a reasonable middle ground. The gap between IRH and IMH gets smaller and can be closed as the RN density increases. For the IRH protocol, if ρ≥β+γ+(β-ν)α/2-ψ, the downlink rate per node scales as R^IRH_DL(n)=Θ(n^min(β+γ,ρ)-1+min(ψ,(ρ-ν)α/2)) and, if min(ψ-(β+γ-ρ)^+,(ρ-ν)α/2)≥min(ψ,(β-ν)α/2+1-β), the uplink rate per node scales as R^IRH_UL(n)=Θ(n^min(β+γ,ρ)-1+min(ψ,(ρ-ν)α/2+(β+γ-ρ)^+)); otherwise the rates scale as in ISH. Appendix <ref>. The IRH controller always uses the RNs in downlink if ρ>β+γ, and in uplink if ρ≥β+max(γ,2(1-β)/α). If ρ=1 the rate scaling of IRH matches the rate scaling of IMH and therefore IRH is optimal in downlink. If ρ=β+γ=1, IRH is optimal in uplink as well. However, if ρ=1 and β+γ<1, IRH does not meet the uplink upper bound rate scaling. This means that the amount of wired-backhauled infrastructure is a limiting factor even for networks with very high density of wireless-backhauled infrastructure. Finally, if ρ<1, the IRH rate scaling is dominated by that of IMH.
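The downlink rate exponents in the theorems above can be tabulated directly; the following sketch (ours, with arbitrary sample exponents α=3, β=0.6, γ=0.3, ν=0.5, ρ=0.8) makes the comparisons in the next subsection concrete:

def upper_and_imh(psi, a, b, g, nu):                   # Theorems 1-3 (downlink)
    return b + g - 1 + min(psi, (1 - nu) * a / 2)

def ish(psi, a, b, g, nu):                             # Theorem 4 (downlink)
    return b + g - 1 + min(psi, (b - nu) * a / 2)

def irh(psi, a, b, g, nu, rho):                        # Theorem 5 (downlink)
    if rho >= b + g + (b - nu) * a / 2 - psi:          # controller uses the RNs
        return min(b + g, rho) - 1 + min(psi, (rho - nu) * a / 2)
    return ish(psi, a, b, g, nu)                       # otherwise fall back to ISH

a, b, g, nu, rho = 3.0, 0.6, 0.3, 0.5, 0.8
for psi in (0.0, 0.2, 0.5, 1.0, 2.0):
    print(psi, upper_and_imh(psi, a, b, g, nu),
          ish(psi, a, b, g, nu), irh(psi, a, b, g, nu, rho))
# For these values, ISH saturates at psi = (b-nu)a/2 = 0.15 and IRH at
# (rho-nu)a/2 = 0.45, while IMH tracks the upper bound up to (1-nu)a/2 = 0.75.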
§.§ Illustration of the results Figures <ref> and <ref> illustrate the scaling exponents of the upper bound and of the IMH, ISH and IRH protocols, for the downlink and uplink cases respectively. The horizontal axes represent the exponent of bandwidth, ψ, which together with the exponent of the number of independent array dimensions, γ, represents the scaling of the degrees of freedom of BS transmission. The vertical axes show the exponent of the feasible per node rate, log R(n)/log n. In downlink, the upper bound behaves exactly like IMH, and hence the capacity scaling of the cellular network is fully characterized and IMH is optimal, whereas in uplink IMH cannot achieve the upper bound if γ+β<1, with the gap between IMH and the upper bound scaling as n^1-γ-β. These results generalize the particular case from <cit.> where the bandwidth does not scale (ψ=0) and the antenna array at the BS follows a physical model with ℓ̃=n^γ̃ physical antennas that experience a constraint on the number of independent transmit dimensions given by γ=min(γ̃,(1-β)/2).

In ISH, transmissions have to cover longer distances, from the BS to the cell edge, resulting in an earlier transition into a power-limited regime and a lower utility of increasing the bandwidth compared to IMH. In UL this is partially compensated by the fact that the BS receives the total power transmitted by n/m nodes. However, since for β≤1 we have (β-ν)α/2+(1-β)≤(1-ν)α/2, the ISH rate scaling is dominated by that of IMH.

The analysis of IRH in Theorem <ref> shows that both in downlink and uplink we can identify a minimum density of RNs such that, beyond this density, IRH outperforms ISH. * A first scenario, where the IRH controller must not use the RNs, is identified by the switching conditions in Theorem <ref>. We have highlighted the region where RNs must not be used with a solid gray shaded area in the figures. This gap region exists only if ρ≤β+γ, whereas if ρ>β+γ the rate scaling with bandwidth exponent ψ≤(β-ν)α/2 is unchanged regardless of whether the controller uses the RNs or not. * A second scenario, where the IRH controller must use the RNs, is identified by comparing the rates of IRH and ISH at ψ>(β-ν)α/2+(β+γ-ρ)^+. We have highlighted this gap with a striped gray shaded area in the figures. The received power defines a bottleneck when ψ>(β-ν)α/2+(β+γ-ρ)^+ in downlink and ψ>(β-ν)α/2+(1-β)+(β+γ-ρ)^+ in uplink. RNs introduce new bandwidth-limited and power-limited areas to the rate exponents, and allow IRH to outperform ISH. In downlink this gap always exists for any ρ≥β, and RNs always increase power-limited rates, because the receive power depends only on the distance and path loss exponents, which are improved by RNs to (ρ-ν)α/2>(β-ν)α/2. In uplink, however, if ρ is too low, the striped gray region denoted in the figure collapses and RNs do not improve the power-limited rates of ISH. This occurs because the power bottleneck also depends on the number of uplink transmitters per receiver, and RNs only improve the received power if (ρ-ν)α/2+(β+γ-ρ)^+>(β-ν)α/2+(1-β).

§ DISCUSSION AND OBSERVATIONS §.§ The limitations of ISH are fundamental to any single-hop protocol The ISH protocol defined in Section <ref> assumes equal power allocation in downlink and suboptimal linear MU-MIMO processing.
It is not difficult to show that a more general version of single-hop would not improve the rate scaling beyond what was obtained in Theorem 4. Downlink and uplink rates in a general cellular network restricted to single-hop communications can be upper bounded by considering broadcast and multiple-access channel results <cit.>, respectively. For example, our downlink analysis, which is asymptotic in the number of nodes, can be identified with an asymptotic high-SNR broadcast channel when ψ≤(β-ν)α/2. The degrees-of-freedom region of the broadcast channel in the high-SNR regime is known (see for example <cit.>) and the worst-user performance does not exceed, in terms of scaling exponent, what we achieve with ISH. Similar arguments can be made regarding the low-SNR capacity of the broadcast channel with ψ≥(β-ν)α/2, and the multiple-access channel. As a result, no other single-hop protocol throughput scales better than that of ISH, and the differences between ISH and IMH in our achievable schemes arise from the differences between the single-hop and multi-hop architectures, not from our simplifications in ISH.

§.§ Operating regimes of large cellular networks For point-to-point channels operating with power P and bandwidth W, it is well known that there are two operating regimes: When P/(WN_0)≪1, the capacity, given by C(W)=Wlog(1+P/(WN_0)), behaves as Θ(P/N_0), and we say that it is power-limited. Conversely, when P/(WN_0)≫1, the capacity behaves as Θ(Wlog(P/N_0)), and we say it is bandwidth-limited. Our analysis shows that large cellular networks also have two capacity scaling regimes: the network bandwidth-limited regime and the network power-limited regime. Furthermore, the network bandwidth-limited regime can be categorized into two types depending on whether cooperation among nodes is necessary or not to ensure that network power is not a limitation. We illustrate these regimes in Figure <ref>. The regimes apply to both downlink and uplink, but we describe them only for downlink for the sake of compactness. We denote the cell radius by r_cell=Θ(n^(ν-β)/2), the distance between two closest nodes by r_s=Θ(n^(ν-1)/2), and the longest distance at which the BS can transmit or receive without being power-limited by r_v=Θ(W^-1/α).

Network bandwidth-limited regime type I: If ψ<(β-ν)α/2, r_v scales faster than r_cell as n→∞, and it is possible to deliver bandwidth-limited rates Θ(n^β+γ-1+ψ) separately to each node in the cell using single-hop protocols such as ISH. In this regime there is no requirement for cooperation.

Network bandwidth-limited regime type II: For (β-ν)α/2<ψ≤(1-ν)α/2 the network capacity is still bandwidth-limited, but single-hop protocols are power-limited. The radius r_v scales slower than r_cell but faster than r_s. A few nodes in each cell are sufficiently close to their BS to establish high-SNR direct communications, while a majority of nodes are further away at low-SNR distances. Therefore, in this regime multi-hop is imperative to achieve the bandwidth-limited capacity scaling Θ(n^β+γ-1+ψ).

Network power-limited regime: If ψ>(1-ν)α/2, r_v falls below even the nearest-node distance r_s and we cannot guarantee that there is at least one user sufficiently close to the BS with high probability. In this regime the SNR in expression (<ref>) is low and the upper bound to capacity is power-limited (Θ(n^β+γ-1+(1-ν)α/2)).

If β=1, ISH is optimal, the type II bandwidth-limited regime collapses and the rates of ISH and IMH scale with the same exponent. However, the constraint on the total number of infrastructure units per user, β+γ≤1, means that β=1 is incompatible with the exploitation of large antenna degrees of freedom at the BSs.
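The three downlink regimes translate into a one-line classifier; a sketch (ours, with arbitrary sample parameters, and with the boundary values assigned to a regime only up to constant factors):

def dl_regime(psi, alpha, beta, nu):
    if psi < (beta - nu) * alpha / 2:
        return "bandwidth-limited type I: single-hop (ISH) suffices"
    if psi <= (1 - nu) * alpha / 2:
        return "bandwidth-limited type II: multi-hop (IMH) required"
    return "power-limited: extra bandwidth no longer helps"

for psi in (0.1, 0.5, 1.0):
    print(psi, dl_regime(psi, alpha=3.0, beta=0.6, nu=0.5))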
§.§ Relation to the history and future of cellular technologies Current cellular networks are limited in degrees of freedom, namely bandwidth and number of antennas. Hence they operate only in the network bandwidth-limited regime type I, where user nodes are individually bandwidth-limited and single-hop protocols are optimal. Therefore, it is not surprising that the gains obtained by the early implementations of relaying in 4G systems have been modest <cit.>. These early multi-hop implementations correspond to our IRH protocol with low relay density (low ρ). In broad terms, multi-hopping uses degrees of freedom in exchange for power gain, and therefore is not as advantageous in traditional bandwidth-limited cellular networks such as LTE, where degrees of freedom are a precious resource <cit.>. However, future cellular systems, such as mmWave, will have an abundance of bandwidth and will most likely operate in the network power-limited regime, necessitating multi-hopping, either using network nodes or infrastructure RNs, for increased network capacity <cit.>. Interference studies have shown that mmWave networks operate very close to the threshold between having interference- and noise-limited links <cit.> (roughly equivalent to network bandwidth- and power-limited rates). The analysis in <cit.> also showed that a small increase in node density makes the mmWave network transition from noise- to interference-limited. Therefore, the operating regime transitions identified in our analysis have a direct practical impact on the design of future mmWave cellular systems.

§.§ IRH is more practical than ISH or IMH for high density networks Both ISH and IRH improve with large investments in infrastructure, increasing β or ρ, respectively, but in practice IRH offers advantages over ISH because there are practical limitations to the deployment of wired BSs to increase β, such as curb excavation rights and the cost of fiber optics for backhaul connections. IRH also has practical advantages over IMH as a multi-hop implementation, due to the fact that RNs are typically static, connected to the energy grid, and owned by the same network operator. Conversely, user nodes are typically mobile, battery-powered and owned by customers. Thus the implementation of IMH would pose greater practical challenges, such as support for mobility in the multi-hop protocol, battery efficiency optimization, and behavioral incentives to prevent customers from rejecting cooperation.

§.§ Hierarchical cooperation is not necessary for cellular capacity scaling In an ad-hoc network both direct transmission and multi-hop may be suboptimal in some regimes, and an HC protocol is employed to achieve capacity scaling <cit.>. This demonstrates the utility of cooperative virtual antenna arrays, formed through coordinated joint transmissions by single-antenna nodes grouped in clusters. Our analysis of cellular networks shows instead that IMH achieves downlink capacity scaling in all regimes. Therefore node clusters forming virtual antenna arrays are not necessary to achieve capacity scaling in cellular networks. We leave for future work a scenario in which HC might regain relevance for cellular networks, where we relax the assumption that BSs cannot exchange their messages through a backhaul connection and perform joint transmissions.
For cooperative BSs, it is possible that virtual antenna arrays formed by clusters of cooperative devices become necessary at the user side.

§.§ Cellular-specific traffic bottlenecks affect capacity scaling Some existing “ad-hoc network with infrastructure support” scaling law analyses <cit.> model infrastructure only as an intermediary to assist the same ad-hoc type communications. These works study rates from some user nodes to others, where the infrastructure is a mere intermediary, and when BSs are so far apart that communicating with them does more harm than good, these works ignore the infrastructure and apply ad-hoc protocols such as HC. The more realistic cellular network analyzed in this paper requires traffic to always flow through the BSs, even when this causes bottlenecks, as illustrated in our analysis.

§ CONCLUSIONS In this paper we have obtained the throughput capacity scaling of cellular wireless networks in a model comprising scaling of area, BS density, number of antennas per BS, total bandwidth and also, optionally, the number of wireless-backhauled RNs. We have shown that cellular network capacity scaling exhibits a transition between network bandwidth-limited and network power-limited operating regimes as the bandwidth increases, equivalent to the well-known transition in point-to-point links from bandwidth-limited to power-limited capacity. Moreover, we have shown that different protocols can experience protocol-specific suboptimal transitions into power-limited behavior earlier than (i.e., for a bandwidth scaling exponent lower than) the transition experienced by the capacity scaling. The transition thresholds are fundamentally related to the typical distance between a transmitter and a receiver for each protocol, suggesting that cooperative multi-hop schemes transmitting between nearest neighbors across the minimal distance have an advantage in networks with wide bandwidths. In fact, our results show multi-hop is optimal for downlink. Single-hop protocols deal with the longest transmission distances and transition into power limitation the earliest. This means the network bandwidth-limited capacity regime is further divided into two subtypes: type I, where bandwidth is low enough that single-hop protocols are bandwidth-limited and all users can be served independently with bandwidth-limited rates; and type II, where the network capacity is bandwidth-limited but single-hop protocols are power-limited and cooperation is imperative to serve capacity-achieving rates to all users.

In cellular networks with additional wireless-backhauled RNs the capacity scaling law depends strongly on the RN density, so that if the number of RNs is insufficient the network is better off disregarding the RNs altogether and using only the BSs. Conversely, if a sufficiently dense set of RNs is installed, cooperative multi-hop capacity can be achieved. However, a low density of wired-backhauled infrastructure can still be a limitation even if the RN density is high. Our analysis provides a theoretical framework to explain historical experiences with the implementation of multi-hop and relaying in cellular networks, where the gains have been very modest. Current cellular systems operate in the network bandwidth-limited type I regime, where our analysis predicts that adding RNs brings little advantage and may even decrease the rate scaling. Our analysis is also highly relevant for the design of future cellular networks with increased bandwidth, such as mmWave or carrier aggregation systems.
Preliminary studies in mmWave have shown highly parameter-sensitive transitions from bandwidth-limited to power-limited behavior, suggesting potential for multi-hop communications.

§ PROOF OF THEOREM <REF>

We start with the analysis of ISH, which also forms the foundation of the IMH and IRH protocols. We describe the downlink proof in detail, whereas the uplink proof follows by minor changes. In the MU-MIMO downlink, BS t assigns to each user node r in the same subchannel a signature unitary vector v_t,r and transmits x_t = ∑_r=1^ℓ v_t,r x_t,r, satisfying the power allocation 𝔼[|v_t,r x_t,r|^2] = P_BS m/n by the protocol description of ISH (Sec. <ref>). Out-of-cell interfering transmissions in the same subchannel and transmissions by the BS not canceled by the linear precoder are treated as noise:

y_t,r = d_t,r^{-α/2} (h_t,r^H v_t,r) x_t,r + I_1 + I_2 + z_r,

where the self-interference from the same BS and the out-of-cell interference from other BSs are, respectively,

I_1 = ∑_{r'≠r} d_t,r^{-α/2} (h_t,r^H v_t,r') x_t,r',   I_2 = ∑_{(t',r')∈ℐ^ISH_t,r} d_t',r^{-α/2} (h_t',r^H v_t',r') x_t',r'.

Here h_t,r denotes the channel vector from BS t to receiver r, and the set of interferers ℐ^ISH_t,r consists of all transmitter-receiver pairs (t',r') allocated to the same subchannel as (t,r) by some other BS t'. The design of the transmit vectors v_t,r is studied in <cit.>. We apply only linear transmit precoding vectors v_t,r = h_t,r/√(ℓ), which are optimal at low SNR and suboptimal at high SNR. However, since we are only interested in scaling, lower power efficiency in the bandwidth-limited regime does not affect the scaling exponent. This makes the channel gain at the desired user

|h_t,r^H v_t,r|^2 = |h_t,r^H h_t,r/√(ℓ)|^2 = ℓ.

Due to the fact that all transmissions are uniformly allocated across the available bandwidth and spatial dimensions, and there are many interferers, the second form of interference can be approximated as white noise by the central limit theorem. Since Gaussian is the worst distribution a noise of known covariance can have <cit.>, we lower bound the rate by also modeling the self-interference within the MU-MIMO scheme as Gaussian with variance 𝔼|I_1|^2. The variance of the total MU-MIMO self-interference can be characterized as

𝔼|I_1|^2 = 𝔼|∑_{r'≠r} d_t,r^{-α/2} (h_t,r^H v_t,r') x_t,r'|^2 = Θ(ℓ (m/n) P_BS) d_t,r^{-α}.

For the out-of-cell interference, we introduce a notation that applies to all protocols. For protocol x we denote the noise plus out-of-cell interference Power Spectral Density (PSD) for transmitter t and receiver r as

N_I^x ≜ 𝔼|I_2 + z_r|^2 / ((m/n)ℓ W) = ∑_{t'∈ℐ^x_t,r} d_t',r^{-α} P_t'/W + N_0,

where ℐ^x_t,r denotes the set of out-of-cell interfering transmitters affecting the link (t,r), and P_t' denotes the total power of interferer t'. In our particular case of ISH downlink, ℐ^ISH_t,r is the set of all BSs except t, and P_t' = P_BS. Therefore the rate in the ISH downlink link (t,r) can be lower bounded as

R_t,r^ISH ≥ (m/n)ℓ W log(1 + ℓ(m/n)P_BS d_t,r^{-α} / (𝔼|I_1|^2 + (m/n)ℓ W N_I^ISH)).

The achievable rate scaling must by definition be guaranteed to all the nodes in the network, so that R^ISH(n) = min_t,r R_t,r^ISH, where each R_t,r^ISH is lower bounded by (<ref>). Also, (<ref>) decreases monotonically with d_t,r, and the worst-case distance satisfies max_t,r d_t,r = Θ(r_cell) with high probability. Therefore

R^ISH(n) ≥ Θ((m/n)ℓ W) iff r_cell^{-α} ≳ W N_I^ISH,
R^ISH(n) ≥ Θ((m/n)ℓ r_cell^{-α}) iff r_cell^{-α} ≪ W N_I^ISH.

Finally, we show that the threshold of (<ref>) is equivalent to ψ > (β-ν)α/2, producing (<ref>). The ISH uplink analysis is identical except that the transmitted power budget in uplink scales with n/m per cell, yielding (<ref>).
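Before establishing the threshold, the following minimal Python sketch (our own illustration, not code from the paper) makes the two regimes concrete. The scaling conventions m = n^β BSs, ℓ = n^γ antennas per BS, W = n^ψ bandwidth and area n^ν — so that r_cell = Θ(n^{(ν-β)/2}) — are our assumptions, chosen to be consistent with the exponents appearing in this proof and in the IMH result below.

def ish_dl_exponent(beta, gamma, nu, psi, alpha):
    # bandwidth-limited term psi vs power-limited term (beta - nu) * alpha / 2,
    # the latter being the exponent of r_cell^{-alpha}
    return beta + gamma - 1 + min(psi, (beta - nu) * alpha / 2)

def imh_dl_exponent(beta, gamma, nu, psi, alpha):
    # nearest-neighbor hops shift the transition to psi = (1 - nu) * alpha / 2
    return beta + gamma - 1 + min(psi, (1 - nu) * alpha / 2)

beta, gamma, nu, alpha = 0.5, 0.25, 0.0, 4.0  # a dense-network example
for psi in (0.5, 1.5, 2.5):  # below both thresholds / between / above both
    print(psi, ish_dl_exponent(beta, gamma, nu, psi, alpha),
          imh_dl_exponent(beta, gamma, nu, psi, alpha))
# With these exponents, ISH stops gaining from extra bandwidth at
# psi = (beta - nu) * alpha / 2 = 1, while IMH keeps gaining up to
# psi = (1 - nu) * alpha / 2 = 2, illustrating the regime transitions.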
We examine the necessary and the sufficient direction of the condition r_cell^{-α} ≪ W N_I^ISH separately. For the necessary direction, we upper bound the PSD by

N_I^ISH ≤ Θ(n^{((β-ν)α/2 - ψ)^+}),

so, if ψ ≤ (β-ν)α/2, then the scaling exponent of W N_I^ISH is always lower than or equal to that of r_cell^{-α} ≤ d_t,r^{-α}, and the rates of all users (<ref>) are bandwidth-limited. Therefore, ψ > (β-ν)α/2 is necessary for the link rates to become power-limited.

To prove (<ref>) we upper bound the distance sum in (<ref>). In the ISH protocol we can upper bound the distance d_t',r by the distance between the border of the cell of BS t' and the cell where r belongs. Considering the geometry of the hexagonal layout, it can be shown that 6k cells form a ring with index k, separated by a distance greater than or equal to (3/2)k-1 cell radii from the border of the cell of r. The network is finite and a maximum k exists, but we can get rid of border effects and further bound the interference by extending the sum all the way to k→∞:

∑_{t'∈ℐ^ISH} d_t',r^{-α} ≤ r_cell^{-α} ∑_{k=1}^∞ (6k)((3/2)k-1)^{-α} ≤ r_cell^{-α} ∑_{k=1}^∞ 12/((3/2)k-1)^{α-1} = 12 r_cell^{-α} (3/2)^{1-α} ζ(α-1,1/3),

where the last equality holds for α>2 and the result is expressed using the generalized (Hurwitz) Riemann zeta function ζ(s,a)=∑_{x=0}^∞ 1/(x+a)^s, which is a constant with respect to n. Combining with W, this shows that the interference PSD scales as n^{(β-ν)α/2-ψ}, while the noise PSD N_0 is constant, so (<ref>) for ISH downlink scales as (<ref>). To prove that the same condition is also sufficient we assume ψ>(β-ν)α/2. This allows us to approximate the upper bound (<ref>) by ≃ N_0, and since N_I^ISH ≥ N_0 the bound is tight. Therefore, if ψ>(β-ν)α/2 then

R_t,r^ISH ≃ (m/n)ℓ W log(1 + P_BS d_t,r^{-α}/(W N_0)),

and since min_t,r d_t,r^{-α} = Θ(r_cell^{-α}) ≪ Θ(W N_0) in this regime, the logarithm is well approximated by its argument, leading to R^ISH(n) = Θ((m/n)ℓ r_cell^{-α}).

§ PROOF OF THEOREM <REF>

The proof of Theorem <ref> consists of using the arguments in Appendix <ref> with minor variations. We only discuss the main differences here. The achievable rate scaling in IMH is given by the minimum of two constraints: we denote the BS-node link rates at the center of the cell by R_t,r^IMH(1), and the node-node link rates in the rest of the multi-hop system by R_t,r^IMH(2). Thus

R^IMH_DL(n) = min_t,r min(R_t,r^IMH(1), R_t,r^IMH(2)).

The BS-node link rate R_t,r^IMH(1) is obtained using a MU-MIMO scheme similar to ISH, with the exception that no frequency division is required. The BS of each cell transmits ℓ signals towards the nearest routing subcells with bandwidth W using separate spatial signatures. There are a total of n/m routes that need to be served by the BS, so the route towards each destination is time multiplexed by a factor mℓ/n at the BS. Thus, the rate between a BS and a node in its nearest routing subcell can be expressed as

R_t,r^IMH(1) = (mℓ/n) W log(1 + P_t d_t,r^{-α} ℓ / (𝔼|I_1|^2 + W N_I^IMH)),

where the power is P_t = P_BS/ℓ in DL and P_t = P in UL. The link rates in the rest of the node-node hops of each route are single antenna. Here, a factor of mℓ/n is imposed on the rate because the routes inherit it from the BS:

R_t,r^IMH(2) = (mℓ/n) W log(1 + P d_t,r^{-α}/(W N_I^IMH)).

In order to determine the scaling of each term, the same arguments applied to the scaling of (<ref>) can then be applied to (<ref>) and (<ref>). To study N_I^IMH, the noise plus out-of-subcell interference PSD for IMH using (<ref>), we have to take into account that the set of interferers ℐ^IMH_t,r is the set of all transmitters in nearby routing subcells that transmit at the same time as the link (t,r).
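As a quick numeric sanity check of the hexagonal ring-decomposition bound above (the same technique bounds N_I^IMH below), the following minimal Python sketch — our illustration, not code from the paper, and assuming the mpmath library for the Hurwitz zeta function — sums the out-of-cell interference over rings and compares it with the closed-form constant:

from mpmath import zeta  # zeta(s, a) is the Hurwitz (generalized Riemann) zeta

def ring_sum(alpha, K=200000):
    # ring k contributes 6k interferers at distance >= (3/2)k - 1 cell radii
    return sum(6 * k * (1.5 * k - 1) ** (-alpha) for k in range(1, K))

for alpha in (2.5, 3.0, 4.0):
    bound = 12 * 1.5 ** (1 - alpha) * zeta(alpha - 1, 1.0 / 3)
    print(alpha, ring_sum(alpha), float(bound))
# The partial sums converge for alpha > 2 and stay below the zeta bound, so the
# out-of-cell interference is Theta(r_cell^{-alpha}) with an n-independent constant.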
In downlink the interferers may be either other nodes or BSs, and the power of each interferer can be upper bounded by the worst case P_t' ≤ max(P_BS,P), whereas in UL only nodes transmit, with P_t' = P. In order to derive an expression similar to (<ref>) for IMH, d_t',r is upper bounded by the distance between the border of the routing subcell that contains t' and the border of the routing subcell that contains r. In order to guarantee half-duplex collision-free routing, IMH uses a time division where only 1 out of every 7 subcells in the hexagonal layout transmits at the same time. Proceeding as in (<ref>), we can show that

N_I^IMH = Θ(n^{((1-ν)α/2 - ψ)^+}).

Examining Table <ref>, both the BS-to-node and the node-to-node links can be shown to scale as

R_t,r^IMH(1) = Θ(n^{β+γ-1+min(ψ,(1-ν)α/2)}) = R_t,r^IMH(2),

where we distinguish the same two regimes as in ISH, but this time the threshold is ψ < (1-ν)α/2 for the bandwidth-limited rates, proving Theorem <ref> for the downlink. The proof for uplink follows the same principle, but the BS receives multiple transmissions at the same time, increasing the received power, and the point-to-point links become a bottleneck where each route is served with rate

R^IMH_UL(n) = Θ(n^{β+γ-1+min(ψ,(1-ν)α/2)}).

§ ANALYSIS OF IRH

We first assume that IRH always makes use of the RNs, and from the resulting rates derive the scaling exponent thresholds where it is best to fall back to ISH. The IRH protocol implements RN multi-hop in two phases. The access phase is an ISH protocol with m+k APs that consist of both BSs and RNs. We denote this protocol by ISH-R, with achievable rate scaling

R^ISH-R_DL(n) = τ Θ(n^{ρ-1+min(ψ,(ρ-ν)α/2)}).

The interconnection phase uses an IMH protocol where the BSs are the infrastructure and the RNs are the user nodes. We denote this by IMH-R, with achievable backhauling rate per RN

R^IMH-R_DL(k) = (1-τ) Θ(k^{β'+γ'-1+min(ψ',(1-ν')α/2)}), where β'=β/ρ, γ'=γ/ρ, ψ'=ψ/ρ,

where we modify the exponents of area, bandwidth and number of antennas to account for the fact that IMH is evaluated with k=n^ρ user nodes. Each RN serves n/k nodes and must divide the backhauling rate it achieves equally among them, giving n^{ρ-1} R^IMH-R_DL(n^ρ) per user node. Finally, each node in the IRH network achieves the minimum between the rate of its link to its AP and the fraction of the AP backhauling rate that is assigned to the node:

R^IRH_DL(n) = Θ(min((1-τ)n^{β+γ-1+min(ψ,(ρ-ν)α/2)}, τ n^{ρ-1+min(ψ,(ρ-ν)α/2)})).

The optimal time division in a decode-and-forward relay scheme with two links of known rates is well known to be the value that minimizes the total transmission time per bit, as verified for example in <cit.>. This is given by

τ^* = argmin_τ [(1-τ)/((k/n)R^IMH-R_DL(k)) + τ/R^ISH-R_DL(n)] = ((k/n)R^IMH-R_DL(k)) / (R^ISH-R_DL(n) + (k/n)R^IMH-R_DL(k)).

If R^ISH-R_DL(n) > Θ((k/n)R^IMH-R_DL(k)), τ^* converges to zero. If R^ISH-R_DL(n) < Θ((k/n)R^IMH-R_DL(k)), τ^* converges to 1. If R^ISH-R_DL(n) = Θ((k/n)R^IMH-R_DL(k)), τ^* does not affect the rate scaling. Putting everything together gives the expression in Theorem <ref> for downlink. By careful comparison of Theorem <ref> and Theorem <ref>, we deduce that the threshold where it is better to use ISH is ρ ≥ β+γ+(β-ν)α/2-ψ. By similar arguments, the scaling of uplink IRH when the RNs are used can be shown to be

R^IRH_UL(n) = Θ(min((1-τ)n^{β+γ-1+min(ψ,(ρ-ν)α/2)}, τ n^{ρ-1+min(ψ,(ρ-ν)α/2+1-ρ)})),

and by comparison with Theorem <ref> we can deduce that IRH only benefits from RNs if ρ is high enough such that min(ψ-(β+γ-ρ)^+,(ρ-ν)α/2) ≥ min(ψ,(β-ν)α/2+1-β).

Gomez-Cuba2014isit F. Gómez-Cuba, S.
Rangan, and E. Erkip, “Scaling laws for infrastructure single and multihop wireless networks in wideband regimes,” in IEEE International Symposium on Information Theory (ISIT), Honolulu, 2014, pp. 76–80.gomez2016capacity F. Gómez-Cuba, S. Rangan, E. Erkip, and F. J. González-Castaño, “Capacity scaling bounds in wideband cellular networks,” in International Zurich Seminar on Communications (IZS), Zurich, March 2–4, 2016.BocHLMP:14 F. Boccardi, R. W. Heath Jr, A. Lozano, T. L. Marzetta, and P. Popovski, “Five disruptive technology directions for 5G,” IEEE Communications Magazine, vol. 52, no. 2, pp. 74–80, 2014.Pi2011 Z. Pi and F. Khan, “An introduction to millimeter-wave mobile broadband systems,” IEEE Communications Magazine, vol. 49, no. 6, pp. 101–107, 2011.PietBRPC:12 P. Pietraski, D. Britz, A. Roy, R. Pragada, and G. Charlton, “Millimeter wave and Terahertz communications: feasibility and challenges,” ZTE Communications, vol. 10, no. 4, pp. 3–12, 2012.Rappaport2013 Y. Azar, G. Wong, K. Wang, R. Mayzus, J. Schulz, H. Zhao, F. Gutierrez, D. Hwang, and T. Rappaport, “28 GHz propagation measurements for outdoor cellular communications using steerable beam antennas in New York City,” in IEEE International Conference on Communications (ICC), 2013.RanRapEr:14 S. Rangan, T. S. Rappaport, and E. Erkip, “Millimeter-wave cellular wireless networks: Potentials and challenges,” Proceedings of the IEEE, vol. 102, no. 3, pp. 366–385, mar 2014.Rappaport2014-mmwbook T. S. Rappaport, R. W. Heath Jr., R. C. Daniels, and J. N. Murdock, Millimeter Wave Wireless Communications. Pearson Education, 2014.hoydis2013massive J. Hoydis, S. Ten Brink, and M. Debbah, “Massive MIMO in the UL/DL of cellular networks: How many antennas do we need?” IEEE Journal on Selected Areas in Communications, vol. 31, no. 2, pp. 160–171, 2013.larsson2014massive E. G. Larsson, O. Edfors, F. Tufvesson, and T. L. Marzetta, “Massive MIMO for next generation wireless systems,” IEEE Communications Magazine, vol. 52, no. 2, pp. 186–195, 2014.chandrasekhar2008femtocell V. Chandrasekhar, J. G. Andrews, and A. Gatherer, “Femtocell networks: A survey,” IEEE Communications Magazine, vol. 46, no. 9, pp. 59–67, 2008.andrews2012femtocells J. G. Andrews, H. Claussen, M. Dohler, S. Rangan, and M. C. Reed, “Femtocells: Past, present, and future,” IEEE Journal on Selected Areas in Communications, vol. 30, no. 3, pp. 497–508, 2012.kumar2000capacity P. Gupta and P. R. Kumar, “The capacity of wireless networks,” IEEE Transactions on Information Theory, vol. 49, no. 11, p. 3117, 2000.cadambe2008interference V. R. Cadambe and S. A. Jafar, “Interference alignment and degrees of freedom of the K-user interference channel,” IEEE Transactions on Information Theory, vol. 54, no. 8, pp. 3425–3441, 2008.ozgur2009information A. Ozgur, R. Johari, D. N. C. Tse, and O. Lévêque, “Information-theoretic operating regimes of large wireless networks,” IEEE Transactions on Information Theory, vol. 56, no. 1, pp. 427–437, jan 2009.journals/tit/MedardG02 M. Médard and R. G. Gallager, “Bandwidth scaling for fading multipath channels,” IEEE Transactions on Information Theory, vol. 48, no. 4, pp. 840–852, 2002.Sethuraman2009 V. Sethuraman, L. Wang, B. Hajek, and A. Lapidoth, “Low-SNR capacity of noncoherent fading channels,” IEEE Transactions on Information Theory, vol. 55, no. 4, pp. 1555–1574, 2009.journals/twc/LozanoP12 A. Lozano and D.
Porrat, “Non-peaky signals in wideband fading channels: Achievable bit rates and optimal bandwidth.” IEEE Transactions on Wireless Communications, vol. 11, no. 1, pp. 246–257, 2012.fgomezUnified F. Gómez-Cuba, J. Du, M. Médard, and E. Erkip, “Unified capacity limit of non-coherent wideband fading channels,” IEEE Transactions on Wireless Communications, vol. 16, no. 1, pp. 43–57, 2017.mainak2015wideband M. Chowdhury, A. Manolakos, F. Gómez-Cuba, E. Erkip, and A. J. Goldsmith, “Capacity scaling in noncoherent wideband massive SIMO systems,” in IEEE Information Theory Workshop (ITW), 2015.Knuth1976 D. E. Knuth, “Big Omicron and big Omega and big Theta,” ACM SIGACT News, vol. 8, no. 2, pp. 18–24, 1976.Ozgur2007 A. Ozgur, O. Lévêque, and D. Tse, “Hierarchical cooperation achieves linear capacity scaling in ad hoc networks,” IEEE Transactions on Information Theory, vol. 53, no. 10, pp. 3549–3572, 2007.Franceschetti2009 M. Franceschetti, M. D. Migliore, and P. Minero, “The capacity of wireless networks: Information-theoretic and physical limits,” IEEE Transactions on Information Theory, vol. 55, no. 8, pp. 3413–3424, aug 2009.Ozgur2010 A. Özgür, O. Lévêque, and D. Tse, “Linear capacity scaling in wireless networks: Beyond physical limits?” Information Theory and Applications Workshop (ITA), no. 1, pp. 259–268, 2010.Lee2010 S. H. Lee and S. Y. Chung, “Capacity scaling of wireless ad hoc networks: Effect of finite wavelength,” in IEEE International Symposium on Information Theory (ISIT), 2010.Lee2012 ——, “Capacity scaling of wireless ad hoc networks: Shannon meets maxwell,” IEEE Transactions on Information Theory, vol. 58, no. 3, pp. 1702–1715, 2012.Hong2014 S. S.-n. Hong and G. Caire, “Demystifying the scaling laws of dense wireless networks: No linear scaling in practice,” in IEEE International Symposium on Information Theory (ISIT), 2014.Lu2013 N. Lu and X. S. Shen, “Scaling laws for throughput capacity and delay in wireless networks — A survey,” IEEE Communications Surveys & Tutorials, pp. 1–16, 2013.Kozat2003 U. C. Kozat and L. Tassiulas, “Throughput capacity of random ad hoc networks with infrastructure support,” in International conference on Mobile computing and networking, 2003.journals/tit/ShinJDVCLT11 W.-y. Shin, S.-W. Jeon, N. Devroye, M. H. Vu, S.-y. Chung, Y. H. Lee, and V. Tarokh, “Improved capacity scaling in wireless networks with infrastructure,” IEEE Transactions on Information Theory, vol. 57, no. 8, pp. 5088–5102, 2011.wonyong2014infrastructure C. Jeong and W.-y. Shin, “Ad hoc networking with cost-effective infrastructure: Generalized capacity scaling,” in IEEE International Symposium on Information Theory (ISIT), no. July, 2014, pp. 1–27.Li2011a P. Li, X. Huang, and Y. Fang, “Capacity scaling of multihop cellular networks,” in IEEE INFOCOM, apr 2011, pp. 2831–2839.Zeger2014 L. Zeger and M. Médard, “On scalability of wireless networks: A practical primer for large scale cooperation,” arXiv preprint arXiv:1402.1761, pp. 1–7, 2014. [Online]. Available: <http://arxiv.org/abs/1402.1761> Lin2014 X. Lin, J. G. Andrews, and A. Ghosh, “Spectrum sharing for Device-to-Device communication in cellular networks,” vol. 13, no. 12, pp. 1–31, 2014.Negi2004 R. Negi and A. Rajeswaran, “Capacity of power constrained ad-hoc networks,” in … Conference of the IEEE Computer and …, 2004.Tang2008 X. Tang and Y. Hua, “Capacity of ultra-wideband power-constrained ad hoc networks,” IEEE Transactions on Information Theory, vol. 54, no. 2, pp. 4392–4396, 2008.andrews2014selfbackhaul S. Singh, M. N. 
Kulkarni, A. Ghosh, and J. G. Andrews, “Tractable model for rate in self-backhauled millimeter wave cellular networks,” IEEE Journal on Selected Areas in Communications, vol. 33, no. 10, pp. 2191–2211, 2015.Franceschetti2007 M. Franceschetti, O. Dousse, D. N. C. Tse, and P. Thiran, “Closing the gap in the capacity of wireless networks via percolation theory,” IEEE Transactions on Information Theory, vol. 53, no. 3, pp. 1009–1018, mar 2007.Desgroseilliers2013 M. Desgroseilliers, O. Leveque, and E. Preissmann, “Spatial degrees of freedom of MIMO systems in line-of-sight environment,” IEEE International Symposium on Information Theory - Proceedings, pp. 834–838, 2013.5673745 V. Raghavan and A. M. Sayeed, “Sublinear capacity scaling laws for sparse MIMO channels,” IEEE Transactions on Information Theory, vol. 57, no. 1, pp. 345–364, jan 2011.Samimi2014 M. K. Samimi and T. S. Rappaport, “Ultra-wideband statistical channel model for non line of sight millimeter-wave urban channels,” IEEE Global Communications Conference (GLOBECOM), pp. 3483–3489, 2014.tse2005book D. Tse and P. Viswanath, Fundamentals of Wireless Communication. Cambridge University Press, 2005.4294156 M. Franceschetti, “A note on Lévêque and Telatar's upper bound on the capacity of wireless ad hoc networks,” IEEE Transactions on Information Theory, vol. 53, no. 9, pp. 3207–3211, 2007.Baldemair2015 R. Baldemair, T. Irnich, K. Balachandran, E. Dahlman, G. Mildh, Y. Selén, S. Parkvall, M. Meyer, and A. Osseiran, “Ultra-dense networks in millimeter-wave frequencies,” IEEE Communications Magazine, vol. 53, no. 1, 2015.fgomez2014improvedrelaying F. Gómez-Cuba and F. J. González-Castaño, “Improving third-party relaying for LTE-A: A realistic simulation approach,” in IEEE International Conference on Communications (ICC), 2014.7499308 M. Rebato, M. Mezzavilla, S. Rangan, F. Boccardi, and M. Zorzi, “Understanding noise and interference regimes in 5G millimeter-wave cellular networks,” in 22nd European Wireless Conference, may 2016, pp. 1–5.Dhillon2014backhaul H. S. Dhillon and G. Caire, “Information theoretic upper bound on the capacity of wireless backhaul networks,” in IEEE International Symposium on Information Theory (ISIT), jun 2014.dhillion2014scalability H. S. Dhillon and G. Caire, “Scalability of Line-of-Sight massive MIMO mesh networks for wireless backhaul,” in IEEE International Symposium on Information Theory (ISIT), 2014.Yoon2014 J. Yoon, W.-y. Shin, and S.-W. Jeon, “Elastic routing in wireless networks with directional antennas,” in IEEE International Symposium on Information Theory (ISIT), 2014.cover2001noise S. N. Diggavi and T. M. Cover, “The worst additive noise under a covariance constraint,” IEEE Transactions on Information Theory, vol. 47, no. 7, pp. 3072–3081, 2001.journals/tit/AzarianGS05 K. Azarian, H. E. Gamal, and P. Schniter, “On the achievable diversity-multiplexing tradeoff in half-duplex cooperative channels,” IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4152–4172, 2005.

Felipe Gómez-Cuba received his Ingeniero de Telecomunicación degree in 2010 (Telecommunication Engineering, 5-year program), M.Sc. in Signal Processing Applications for Communications in 2012, and a PhD degree in 2015 from the University of Vigo, Spain.
He worked as a researcher in the Information Technologies Group (GTI), Telematic Engineering Department (DET), University of Vigo (2010–2011), and the Galician Research and Development Center in Advanced Telecommunications (GRADIANT), Vigo, Spain (2011–2013). He also worked as a visiting scholar in the NYU WIRELESS center at NYU Tandon School of Engineering (2013–2014) and completed his PhD under an FPU grant from the Spanish MINECO (2013–2016) at the University of Vigo. He has been awarded a Marie Curie Individual Fellowship - Global Fellowship with the Dipartimento di Ingegneria dell'Informazione, University of Padova, Italy, and the Department of Electrical Engineering, Stanford University, USA (2016–present).

Elza Erkip (S'93–M'96–SM'05–F'11) received the B.S. degree in Electrical and Electronics Engineering from Middle East Technical University, Ankara, Turkey, and the M.S. and Ph.D. degrees in Electrical Engineering from Stanford University, Stanford, CA, USA. Currently, she is a Professor of Electrical and Computer Engineering with New York University Tandon School of Engineering, Brooklyn, NY, USA. Her research interests are in information theory, communication theory, and wireless communications. Dr. Erkip is a member of the Science Academy Society of Turkey and is among the 2014 and 2015 Thomson Reuters Highly Cited Researchers. She received the NSF CAREER award in 2001 and the IEEE Communications Society WICE Outstanding Achievement Award in 2016. Her paper awards include the IEEE Communications Society Stephen O. Rice Paper Prize in 2004, and the IEEE Communications Society Award for Advances in Communication in 2013. She has been a member of the Board of Governors of the IEEE Information Theory Society since 2012, where she is currently the First Vice President. She was a Distinguished Lecturer of the IEEE Information Theory Society from 2013 to 2014.

Sundeep Rangan (S'94–M'98–SM'13–F'16) received the B.A.Sc. at the University of Waterloo, Canada and the M.Sc. and Ph.D. at the University of California, Berkeley, all in Electrical Engineering. He has held postdoctoral appointments at the University of Michigan, Ann Arbor and Bell Labs. In 2000, he co-founded (with four others) Flarion Technologies, a spin-off of Bell Labs, that developed Flash OFDM, one of the first cellular OFDM data systems and a precursor to 4G systems including LTE and WiMAX. In 2006, Flarion was acquired by Qualcomm Technologies, where Dr. Rangan was a Director of Engineering involved in OFDM infrastructure products. He joined the ECE department at NYU Tandon (formerly NYU Polytechnic) in 2010. He is a Fellow of the IEEE and Director of NYU WIRELESS, an academic-industry research center researching next-generation wireless systems. His research interests are in wireless communications, signal processing, information theory and control theory.

Francisco J. González-Castaño received the Ingeniero de Telecomunicación degree from University of Santiago de Compostela, Spain, in 1990 and the Doctor Ingeniero de Telecomunicación degree from University of Vigo, Spain, in 1998. He is currently a Catedrático de Universidad (Full Professor) with the Telematics Engineering Department, University of Vigo, Spain, where he leads the Information Technologies Group (http://www-gti.det.uvigo.es).
He has authored over 80 papers in international journals, in the fields of telecommunications and computer science, and has participated in several relevant national and international projects. He holds two U.S. patents.
http://arxiv.org/abs/1705.09373v4
{ "authors": [ "Felipe Gómez-Cuba", "Elza Erkip", "Sundeep Rangan", "Francisco J. González-Castaño" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20170525213912", "title": "Capacity Scaling of Cellular Networks: Impact of Bandwidth, Infrastructure Density and Number of Antennas" }
These authors contributed equally to this work. Department of Physics, University of Colorado, Boulder, Colorado 80309, USA
Center for Theory of Quantum Matter, University of Colorado, Boulder, Colorado 80309, USA
These authors contributed equally to this work. Departamento de Física Teórica I, Universidad Complutense, 28040 Madrid, Spain
Department of Physics, University of Colorado, Boulder, Colorado 80309, USA
Center for Theory of Quantum Matter, University of Colorado, Boulder, Colorado 80309, USA
Department of Physics, University of Colorado, Boulder, Colorado 80309, USA
Center for Theory of Quantum Matter, University of Colorado, Boulder, Colorado 80309, USA
We study the classification of symmetry protected topological (SPT) phases with crystalline symmetry (cSPT phases). Focusing on bosonic cSPT phases in two and three dimensions, we introduce a simple family of cSPT states, where the system is comprised of decoupled lower-dimensional building blocks that are themselves SPT states. We introduce a procedure to classify these block states, which surprisingly reproduces a classification of cSPT phases recently obtained by Thorngren and Else using very different methods, for all wallpaper and space groups. The explicit constructions underlying our results clarify the physical properties of the phases classified by Thorngren and Else, and expose additional structure in the classification. Moreover, the states we classify can be completely characterized by point group SPT (pgSPT) invariants and related weak pgSPT invariants that we introduce. In many cases, the weak invariants can be visualized in terms of translation-symmetric stacking of lower-dimensional pgSPT states. We apply our classification to propose a Lieb-Schultz-Mattis type constraint for two-dimensional spin systems with only crystalline symmetry, and establish this constraint by a dimensional reduction argument. Finally, the surprising matching with the Thorngren-Else classification leads us to conjecture that all SPT phases protected only by crystalline symmetry can be built from lower-dimensional blocks of invertible topological states. We argue that this conjecture holds if we make a certain physically reasonable but unproven assumption.

Building crystalline topological phases from lower-dimensional states
Michael Hermele
December 30, 2023
=====================================================================

§ INTRODUCTION

Symmetry protected topological (SPT) phases <cit.> are a generalization of topological band insulators <cit.> and other free-fermion topological phases <cit.> to interacting systems. SPT phases have an energy gap and a unique ground state with periodic boundary conditions, lack spontaneous symmetry breaking, and are adiabatically connected to a trivial product wave function if the symmetries of the system are broken explicitly. There are many SPT phases requiring strong interactions to exist.<cit.> Following rapid progress over the past few years, much is now understood about the classification and characterization of SPT phases protected by internal symmetry, such as charge conservation, SU(2) spin rotation, or time reversal.<cit.> Another important class of symmetries are those of crystalline lattices, which play a crucial role in many phenomena in solids. However, compared to their internal symmetry cousins, SPT phases protected by crystalline symmetry, which we dub crystalline SPT (cSPT) phases, are much less understood. An important exception is cSPT phases in non-interacting fermion systems,
including topological crystalline insulators (see [ando15topological] and references therein). A number of works have studied examples or families of interacting cSPT phases,<cit.> and there is a general theory of cSPT phases in one spatial dimension (d=1).<cit.> However, until very recently, general approaches to interacting cSPT phases have been lacking. This situation is now changing, and recent works have made progress in classifying and characterizing general cSPT phases. Enabled by ideas introduced by Isobe and Fu to study surfaces of interacting topological crystalline insulators,<cit.> in Ref. song17topological, some of the authors of this paper (H. S., S.-J. H. and M. H.) and Fu devised an approach to classify SPT phases protected by crystalline point group symmetry, or point group SPT (pgSPT) phases.[By point group symmetry, we more precisely mean a group of symmetries leaving a single point in space fixed. Such symmetries are more properly called site symmetries, but we abuse terminology slightly to employ the more evocative term “point group.”] It was shown that a pgSPT ground state can be adiabatically continued to a state defined on a lower-dimensional space, where point group operations act as internal symmetries. This observation was used to classify pgSPT phases for a few examples of point group symmetry, and implies that any pgSPT phase can be built out of lower-dimensional topological states, on which certain point group operations act as internal symmetries. These ideas were extended to treat glide reflection symmetry by Lu, Shi and Lu.<cit.> A discussion of non-interacting topological crystalline insulators with some connections to the above works appeared in Ref. fulga16coupled. The approach of Ref. song17topological cannot be directly applied for space group symmetry, for reasons discussed in Sec. <ref>. However, Ref. song17topological did discuss how to use pgSPT classification to give constructions and partial classifications of non-trivial space group cSPT phases.[In this paper, we use the term space group to refer to the symmetry group of a crystalline lattice in an arbitrary number of spatial dimensions, as well as more specifically for three-dimensional crystals. When we specifically discuss two-dimensional crystals, the symmetry groups are referred to as wallpaper groups.] Even more recently, in a remarkable development, Thorngren and Else extended the idea of gauging symmetry to crystalline symmetries.<cit.> This idea has been very powerful in the study of internal-symmetry topological phases, but it had not been clear if it could be generalized to spatial symmetry. Ref. thorngren16gauging argued that many bosonic cSPT phases in d dimensions are classified by the group cohomology H^d+1(G,U(1)), where orientation-reversing operations in G act non-trivially on the U(1) coefficients. This agrees with results from a tensor-network approach to construction of SPT states in Ref.
jiang17anyon. Thorngren and Else gave classifications of bosonic cSPT phases for all 17 wallpaper groups in two dimensions (d=2), and almost all 230 space groups in three dimensions (d=3). They also discussed some examples of fermionic cSPT phases. While the ideas underlying the Thorngren-Else classification are quite physical, the classification procedure itself is rather formal, and the physical properties of the states classified are not yet clear. In this paper, we tie these developments together, focusing on bosonic cSPT phases. For simplicity, we focus on “integer spin” bosonic systems, meaning more precisely that we take the microscopic degrees of freedom to transform linearly (i.e., not projectively) under the crystalline symmetry. We consider a particularly simple family of cSPT states, where the system is comprised of decoupled lower-dimensional “building blocks,” which are themselves lower-dimensional invertible topological phases. The cSPT ground state is obtained by taking the product of ground states for the individual blocks. Focusing on the case where the building blocks are lower-dimensional SPT states, we introduce a procedure to classify cSPT block states for all wallpaper and space groups, and reproduce the Thorngren-Else classification.[We reproduce the Thorngren-Else classification in the sense that we obtain the same result for all 17 wallpaper groups and all 230 space groups. We have not established a direct link between our approach and that of Ref. thorngren16gauging, beyond checking case-by-case; for instance, we have not shown directly that our approach gives a H^d+1(G,U(1)) classification. It would be interesting to find such a link, a problem we leave for future work.] This leads us to conjecture that all cSPT phases protected only by crystalline symmetry can be obtained from lower-dimensional building blocks. This conjecture is further supported by a general argument that rests on a physically reasonable but unproven hypothesis. More generally, the building blocks can be ground states of an invertible topological phase, which need not be a SPT phase. In particular, two-dimensional building blocks of three-dimensional cSPT phases can be E_8 states.<cit.> The E_8 state is an analog of an integer quantum Hall state for bosonic systems, and is characterized by a unique ground state on the torus, the absence of bulk anyon excitations, and edge modes with chiral central charge c = 8.<cit.> Non-trivial cSPT phases can be obtained for instance by placing E_8 states on mirror<cit.> or glide<cit.> planes, and these phases are beyond the Thorngren-Else classification. We leave discussion of these cSPT phases for future work. Our results clarify the physical nature of the Thorngren-Else states, all of which are adiabatically connected to cSPT block states. This provides a starting point for future analysis of physical properties. One application discussed here is a Lieb-Schultz-Mattis (LSM) type constraint <cit.> applicable to two-dimensional spin systems. Our LSM constraint goes beyond other related results <cit.> in that it only involves crystalline symmetry, as opposed to an interplay between internal and crystal symmetry.[LSM constraints discussed in Refs. hsieh16majorana and lu17Lieb involve the interplay between fermion parity and crystal symmetry. Strictly speaking, fermion parity should not be viewed as a symmetry; it is a property of any fermion system and cannot be broken. However, from a formal point of view it acts like an internal symmetry.] Following ideas of Ref.
cheng16translational, systems where our LSM constraint holds can be viewed as two-dimensional symmetry-preserving surfaces of d=3 cSPT states built from one-dimensional blocks. We note that Qi, Fang and Fu have independently obtained the same LSM constraint.<cit.> The classifications we obtain for bosonic cSPT phases in d=2,3 can be fully understood in terms of point group SPT phases. For each wallpaper or space group, the classification can be decomposed into pgSPT invariants and other invariants we dub weak pgSPT invariants. Each pgSPT invariant is simply the SPT invariant associated with a given site symmetry subgroup of the full space group. The weak pgSPT invariants can be understood by making one or more directions in space finite, viewing the system as a lower-dimensional pgSPT phase, and computing the resulting pgSPT invariant as a function of system size in the finite directions. In many cases, this can be visualized as a stacking of lower-dimensional pgSPT states, with translation symmetry in the stacking direction. In most cases, the cSPT classification can be factored into pgSPT and weak pgSPT invariants, but the general structure of the decomposition is more subtle than a simple factorization.

§.§ Block states for crystalline SPT phases

Block states play a central role in this paper, so we now describe them in more detail. A block state | Ψ⟩ is a state of the form

| Ψ⟩ = ⊗_b ∈ B | ψ_b ⟩,

where B is a set of blocks. Each block b is a d_b-dimensional quantum system embedded in d-dimensional space, with d_b < d. Zero-dimensional blocks are allowed and play an important role. Blocks with d_b ≥ 1 can be of finite extent, semi-infinite, or infinite. For the purposes of this paper, we will see that it is sufficient to consider infinite blocks when d_b ≥ 1. The blocks form a spatial pattern invariant under the crystalline symmetry group G, which is a point group or space group. The action of g ∈ G on a block b is denoted by g b. Each block is associated with a subgroup G_b ⊂ G, which we call the effective internal symmetry of b. When b is a point, G_b is the same as the site symmetry of b. In general, G_b is defined to consist of all elements g ∈ G that, when restricted to b, act as the identity rigid motion. That is, if g ∈ G_b, then g takes any point lying in b to itself.[Note that while g ∈ G_b implies g b = b, the converse is not true.] For example, if b is a two-dimensional block lying on a mirror plane, then G_b is generated by the mirror reflection and is isomorphic to ℤ_2. We assume |ψ_b ⟩ is in a d_b-dimensional SPT phase protected by the effective internal symmetry G_b. For zero-dimensional blocks, this means that | ψ_b ⟩ can carry G_b charge; that is, it transforms in some one-dimensional representation of G_b. Different G_b charges can be viewed as different “zero-dimensional SPT phases.” The general structure of our results on cSPT classification can be summarized as follows. We let G be some crystalline symmetry group, and 𝒞(G) the corresponding classification of those bosonic cSPT phases that are adiabatically connected to a block state built from lower-dimensional SPT states. We obtain 𝒞(G) by classifying cSPT block states using block-equivalence operations that we introduce. Block-equivalence operations are closely related to the lattice homotopy operations introduced in Ref. po17lattice to study LSM constraints (see Sec.
<ref> for a discussion of the relationship). We find that 𝒞(G) agrees with the Thorngren-Else classification. In three dimensions, 𝒞(G) is not a complete classification, at least for some symmetries, because it excludes cSPT phases built from E_8 states. We define 𝒞_d_b(G) to be the classification of G-symmetric cSPT phases built only from d_b-dimensional SPT blocks, and we say such cSPT phases have block-dimension d_b. For d ≤ 3, we find

𝒞(G) = 𝒞_0(G) × ⋯ × 𝒞_d-1(G).

In d=2, 𝒞_1(G) is always trivial, and we find one- and two-dimensional bosonic cSPT phases all have block-dimension zero. More generally, in settings beyond bosonic cSPT phases with only crystalline symmetry and d ≤ 3, 𝒞(G) need not factorize by block dimension as in Eq. (<ref>); the general structure is not a product but a sequence of subgroups, as explained in Appendix <ref>. The physical properties of cSPT phases at symmetry-preserving surfaces depend on the block dimension. States built from zero-dimensional blocks can be viewed as product states, and it follows that none of these states have any anomalous boundary properties. However, these states can have non-trivial entanglement protected by site symmetry, but only when no degrees of freedom lie precisely at the relevant site.<cit.> In that situation, a non-trivial block-dimension zero state is not a product state of the microscopic degrees of freedom, even though it can be viewed as a product state of larger effective degrees of freedom. We note that all bosonic cSPT phases in one and two dimensions are of block-dimension zero. Moreover, in three dimensions, for space groups with only orientation-preserving operations, we also find only block-dimension zero cSPT phases. In contrast to the block-dimension zero phases, cSPT phases of higher block dimension have anomalous surface properties, which is a sign of non-trivial symmetry-protected entanglement. Surfaces of d=3 cSPT states built from one-dimensional blocks are equivalent to “half-integer spin” bosonic systems in two dimensions, with microscopic degrees of freedom that transform projectively. Seeing properties characteristic of such a system at the surface of an “integer spin” system is a sign of a non-trivial bulk SPT phase. Moreover, these “half-integer spin” surfaces lead us to obtain LSM type constraints for two-dimensional systems, as mentioned in Sec. <ref> and discussed in Sec. <ref>. Finally, symmetry-preserving surfaces of d=3 cSPT states built from two-dimensional blocks are truly anomalous, in that the surface physics cannot occur in an isolated two-dimensional system.<cit.> Therefore, we believe that block-dimension two cSPT phases have the greatest potential for interesting experimentally observable surface phenomena. In light of the product-state nature of block-dimension zero cSPT states, it is important to discuss what we mean by a trivial SPT phase. In many discussions of SPT phases, product states and trivial states are synonymous. Here, distinct block-dimension zero cSPT phases can be viewed as product states that are not adiabatically connected if symmetry is preserved. It has been observed before that such SPT states occur for crystalline symmetry.<cit.> Among the block-dimension zero cSPT phases, we define the trivial phase to be the unique block-equivalence class containing states where all blocks carry trivial charge, i.e.
they transform as the trivial representation under site symmetry. There are some subtleties associated with this definition; readers not interested in them can skip this paragraph, as nothing else in the paper depends on it. The key issues were discussed by some of us and Fu in Appendix A of Ref. song17topological, which should be consulted for more details. That discussion was for reflection pgSPT phases in d=1, but much of it is expected to apply to general block-dimension zero cSPT phases, as we now describe. When microscopic degrees of freedom lie precisely at symmetry centers, the symmetry operations can be redefined to arbitrarily change the charge at the blocks. In this case, it becomes meaningless to ask whether any particular block-dimension zero phase is trivial, but differences between these phases remain well defined. The 𝒞_0(G) classification thus still applies, but it should be interpreted as a torsor rather than as a group. However, such redefinitions of the symmetry operations are not necessarily legitimate, e.g. if one views the lattice model as an approximate description of a continuum system, and the site symmetry charges originate from transformation properties of Wannier orbitals.<cit.> Finally, in a lattice model where the degrees of freedom lie away from symmetry centers, such redefinitions are not possible. In that case, block-dimension zero cSPT phases are expected to be distinguished by entanglement spectrum signatures, and there is no arbitrary choice involved in the definition of trivial phase.

§.§ Outline

As an intermediate step toward classifying general cSPT phases, we first review the classification of pgSPT phases in Sec. <ref>. Further review of the dimensional reduction approach underlying pgSPT classification is given in Appendix <ref>. Section <ref> classifies d=2 pgSPT phases for all crystallographic point groups, and Sec. <ref> does the same in d=3, excluding pgSPT phases built from E_8 states, which lie outside the focus of this paper. In Appendix <ref>, we explain that the classification of pgSPT and cSPT phases does not, in general, factor over block dimension. However, for the bosonic pgSPT and cSPT phases we consider in d=3, such a factorization does hold, i.e. 𝒞(G) = 𝒞_0(G) × 𝒞_1(G) × 𝒞_2(G). Section <ref> describes the classification of d=2 cSPT phases protected by wallpaper group symmetry. The block-equivalence operations used to classify these phases are introduced, several example wallpaper groups are discussed, and the cSPT classification is given for all 17 wallpaper groups. In addition, weak pgSPT invariants are introduced via examples. The classification 𝒞(G) = 𝒞_0(G) for each wallpaper group factors into a subgroup of pgSPT invariants and another of weak pgSPT invariants, and this factorization is given. This factorization shows that the block equivalence operations give a classification of distinct cSPT phases in two dimensions. The classification of d=3 cSPT phases is discussed in Sec.
<ref>. It is simple to obtain 𝒞_1(G) and 𝒞_2(G). A more involved computational procedure based on the block equivalence operations is introduced to obtain 𝒞_0(G), and applied to two illustrative examples. More details of this procedure are given in Appendices <ref> and <ref>. The classification 𝒞(G) = 𝒞_0(G) × 𝒞_1(G) × 𝒞_2(G) is given for all 230 space groups in Appendix <ref>. Section <ref> also explains that both 𝒞_1(G) and 𝒞_2(G) factor into pgSPT invariants. Appendix <ref> shows that states corresponding to different elements of 𝒞_0(G) are completely characterized by pgSPT and weak pgSPT invariants, so 𝒞_0(G) can be decomposed into pgSPT and weak pgSPT invariants. It follows that 𝒞(G) = 𝒞_0(G) × 𝒞_1(G) × 𝒞_2(G) is a classification of distinct cSPT phases. In Sec. <ref>, we use our classification of block-dimension one cSPT phases in d=3 to obtain a Lieb-Schultz-Mattis (LSM) type constraint for d=2 spin systems with wallpaper group symmetry, via a type of bulk-boundary correspondence. We then present an independent argument for this constraint based on dimensional reduction. Our LSM constraint is simple to state: if a d=2 spin system contains a spin transforming projectively under its crystalline site symmetry, then a symmetry-preserving, gapped, short-range entangled ground state is impossible. It is interesting to note that, although the considerations leading to this constraint take full wallpaper group symmetry into account, only site symmetry plays an important role. Section <ref> addresses the conjecture that all SPT phases protected only by crystalline symmetry can be built from lower-dimensional invertible topological states. We argue that this conjecture holds provided we make a certain physically reasonable but unproven assumption. We close the paper in Sec. <ref> with a discussion of possible extensions of our work, and some remarks on the relationship between block equivalence operations and the lattice homotopy operations introduced in Ref. po17lattice to obtain LSM constraints. Two appendices beyond those mentioned above contain additional technical details. Appendix <ref> defines the first cohomology group H^1(G,U(1)), and Appendix <ref> treats some details of states built from zero-dimensional blocks that are used throughout the paper.

§ POINT GROUP SPT PHASES

In this section, we consider point group SPT (pgSPT) phases, protected by a crystalline point group symmetry G. We follow the approach of Ref. song17topological, framing our discussion in terms of block states, to make contact with our results on more general crystalline SPT phases. We classify pgSPT phases in d=2,3 for all crystallographic point groups. In d=3, we classify only those pgSPT phases built from lower-dimensional SPT blocks; this is not complete for some point groups, because it misses pgSPT phases built from E_8 states. In Sec. <ref>, we review the approach of Ref. song17topological, and the classification of pgSPT phases with mirror reflection symmetry in d=1,2,3. Further details of Ref. song17topological are reviewed in Appendix <ref>. Section <ref> develops the classification of two-dimensional pgSPT phases, first illustrating some key ideas via examples, then presenting the classification for arbitrary point groups. Similarly, in Sec. <ref> we first discuss the illustrative example of D_2h symmetry, then proceed to describe the general classification procedure, and present the classification for all the crystallographic point groups.

§.§ Block states for point group SPT phases

Ref.
song17topological showed that all pgSPT phases can be built from lower-dimensional topological states, and obtained the classification of such phases for a few simple point groups. For general point groups, the approach of Ref. song17topological can be cast as a step-wise dimensional reduction procedure, which we review in Appendix <ref>. Here, we focus on bosonic pgSPT phases with crystallographic point group symmetry G. Moreover, we are not interested in completely general bosonic pgSPT phases, but instead we make the assumption that E_8 states do not appear at any step of the dimensional reduction process (see Appendix <ref> for a further explanation of this statement). The pgSPT phases of interest can be represented as block states, and our assumption that E_8 states do not appear in the dimensional reduction process implies that the blocks are lower-dimensional SPT phases protected by G_b effective internal symmetry. Working in infinite d-dimensional space ℝ^d, all the blocks can be taken to lie in the subset S ⊂ ℝ^d defined as the union of all points in space fixed by at least one non-trivial point group operation g ∈ G. (Points lying outside S have no effective internal symmetry.) Using block states, we review the classification of pgSPT phases protected by a mirror reflection σ in d=1,2,3, which was discussed in Ref. song17topological for d = 1,3. In d=3, this is the point group C_s. In all dimensions, this is a symmetry where one spatial coordinate is reversed, e.g. (x,y) → (-x,y) in d=2. We start with d=1, where mirror symmetry is the same as inversion symmetry. There, S is just a single point at the origin. We place a single zero-dimensional block b_0 at the origin, and consider the state

| Ψ⟩ = | ψ_b_0⟩.

The effective internal symmetry of b_0 is G_b_0 ≃ ℤ_2, and there are two possible states depending on the ℤ_2 charge,

U_σ | ψ_b_0⟩ = ± | ψ_b_0⟩,

where U_σ is the unitary operator implementing mirror reflection. Ref. song17topological introduced an equivalence operation referred to as adjoining. Dimensional reduction adiabatically connects a general pgSPT state to a state on a thickened version of S, but the thickness is arbitrary. Adjoining corresponds to increasing the thickness of this region, which has the effect of adding extra degrees of freedom to a state defined on S. In the present case, the adjoining operation is realized by sending

| Ψ⟩ → | l ⟩ ⊗ | Ψ⟩ ⊗ | r ⟩,

where | l ⟩ and | r ⟩ are zero-dimensional blocks to the left and right of b_0, respectively. We can choose reflection to act by U_σ | l ⟩ = | r ⟩ and U_σ | r ⟩ = | l ⟩. This operation has no effect on the U_σ charge, and this can be anticipated, because the adjoined blocks themselves have no effective internal symmetry, and must therefore be trivial. We thus obtain a ℤ_2 classification, where the two states are labeled by different U_σ charges. As discussed further in Ref. song17topological, the ℤ_2 classification agrees with earlier works that employed different approaches.<cit.> The discussion of adjoining above illustrates a general principle: The adjoining operation can only have an effect on the classification when the adjoined blocks are themselves non-trivial. Moving on to mirror symmetry in d=2, S is the one-dimensional reflection axis.
We can place a single d_b = 1 block on the axis, which has an effective ℤ_2 internal symmetry. One-dimensional bosonic systems with ℤ_2 symmetry have only a trivial topological phase,<cit.> so the resulting state is trivial. We can also place zero-dimensional blocks along the axis carrying reflection charge, but since the axis is infinite and has no symmetries such as translation, these blocks can always be grouped together to carry trivial reflection charge. Therefore we conclude that there is only a trivial pgSPT phase for mirror reflection in two dimensions. In three dimensions, S is the two-dimensional mirror plane, and we consider blocks lying in this plane with effective ℤ_2 internal symmetry. Zero- or one-dimensional blocks can always be grouped together (and the one-dimensional blocks are themselves trivial). However, covering the mirror plane with a single two-dimensional block b_2 and considering states | Ψ⟩ = | ψ_b_2⟩ leads to a non-trivial pgSPT phase when | ψ_b_2⟩ is in the non-trivial d=2 SPT phase with ℤ_2 symmetry, which we refer to as the Ising SPT phase.<cit.> This leads to the classification

𝒞(C_s) = ℤ_2,

which was obtained in Ref. song17topological. In fact, the block b_2 can also be an E_8 state, which leads to a ℤ_2 × ℤ_2 classification.<cit.> However, we are not considering pgSPT phases built from E_8 states here.

§.§ Point group SPT phases in two dimensions

We now consider general crystallographic point groups in two dimensions. We begin with the illustrative example G = D_3, which is the symmetry group of a regular triangle. This group is generated by mirror reflections about three axes as shown in Fig. <ref>, which together comprise the space S. To obtain a classification of pgSPT phases, we consider different possible block states. While we can place one-dimensional blocks on the reflection axes, these are one-dimensional systems with ℤ_2 effective internal symmetry, and are thus trivial. We can place a zero-dimensional block b_0 at the origin, which has effective D_3 ≃ ℤ_3 ⋊ ℤ_2 internal symmetry. Possible D_3 charges of | ψ_b_0⟩ are one-dimensional representations of D_3, and there are two such representations labeled by the elements of the first cohomology group H^1(D_3,U(1)) = ℤ_2. The first cohomology group H^1(G,U(1)) is the group formed by the one-dimensional representations of G under the tensor product operation, and is defined in Appendix <ref>. The non-trivial representation is characterized by U_σ | ψ_b_0⟩ = - | ψ_b_0⟩, for σ any of the three reflections. Naïvely this would seem to imply a ℤ_2 classification, but this is not the end of the story. This is because we can adjoin zero-dimensional blocks lying on the reflection axes, as shown in Fig. <ref>. Labeling these blocks by a_i, where i = 1,2,3 labels the three axes, this modifies the state by

| ψ_b_0⟩ → | ψ_b_0⟩ ⊗ [ ⊗_i=1^3 | ψ_a_i⟩ ].

To be consistent with D_3 symmetry, the three blocks a_i must all carry the same reflection charge, and if this charge is non-trivial, the adjoining operation of Eq. (<ref>) changes the overall D_3 charge of the state, which can be shown following the discussion of Appendix <ref>. Therefore, the ℤ_2 classification is not stable under the adjoining operation, and it collapses to a trivial classification, i.e. 𝒞(D_3) is trivial. The adjoining operation turns out to have a trivial effect for all d=2 point groups except D_3. For example, for D_2 symmetry, we can adjoin zero-dimensional blocks carrying reflection charge as shown in Fig.
<ref>. Because these blocks must always be added in pairs to preserve the D_2 symmetry, adjoining them does not change the one-dimensional representation of the block at the origin, and the D_2 pgSPT classification is given by H^1(D_2,U(1)) = ℤ_2 × ℤ_2. These statements can be verified following the more general and systematic discussion of the adjoining operation for states of block dimension zero, given in Appendix <ref>. In general, except for G = D_3 and for the case of mirror reflection (G = D_1), the two-dimensional pgSPT classification is given by

𝒞(G) = 𝒞_0(G) = H^1(G,U(1)).

Table <ref> gives the classification of pgSPT phases for the nine non-trivial crystallographic point groups in two dimensions. These groups are n-fold rotation (C_n) for n=2,3,4,6, and the dihedral group D_n for n=1,2,3,4,6. D_1 is generated by a single mirror reflection, D_2 is the symmetry group of a rectangle, and, for n ≥ 3, D_n is the symmetry of a regular n-sided polygon.

§.§ Point group SPT phases in three dimensions

Here, we discuss the classification of pgSPT phases in three dimensions. We start by considering the illustrative example of D_2h symmetry, then describe a general procedure to classify d=3 pgSPT phases, relegating some of the more formal aspects to Sec. <ref> and Appendices <ref> and <ref>. The classification for all crystallographic point groups is presented in Table <ref>. For the point groups considered thus far, all the non-trivial SPT phases can be represented with blocks of a fixed dimension. That is, 𝒞(G) = 𝒞_d_b(G) for some fixed d_b. Indeed, this holds for all d=1,2 point groups, with d_b = 0. The situation changes in d=3, where

𝒞(G) = 𝒞_0(G) × 𝒞_1(G) × 𝒞_2(G).

The general structure is not a product over block dimensions but a sequence of subgroups. This is explained in Appendix <ref>, where the factorization is argued to hold for d=3 pgSPT and space group SPT phases built from lower-dimensional SPT blocks. We illustrate this by considering the point group D_2h, which is generated by three perpendicular mirror reflections (Fig. <ref>), and where we find 𝒞_d_b(G) to be non-trivial for each of d_b = 0,1,2. The space S is given by the union of the three mirror planes. We start with the block dimension zero states, i.e. those in 𝒞_0(G). It is enough to put a single zero-dimensional block at the origin, on which D_2h acts as an effective ℤ_2^3 internal symmetry, and possible D_2h charges are labeled by elements of H^1(D_2h,U(1)) = ℤ_2^3. There are two kinds of adjoining operations to consider, both of which are illustrated in Fig. <ref> and have no effect on the classification. Therefore, we find 𝒞_0(D_2h) = ℤ_2^3.
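The counting H^1(D_2h,U(1)) = ℤ_2^3 used above can be checked by brute force. The following minimal Python sketch is our own illustration (not code from the paper): it realizes D_2h as the eight diagonal sign matrices generated by the three perpendicular mirrors, and enumerates all one-dimensional representations directly (for this group every one-dimensional representation takes values ±1).

import itertools
import numpy as np

# The three perpendicular mirrors generating D_2h, acting on (x, y, z).
mirrors = [np.diag(d) for d in ([-1, 1, 1], [1, -1, 1], [1, 1, -1])]

# Products of subsets of the mirrors give all eight diagonal sign matrices
# (identity, two-fold rotations, inversion, and the mirrors themselves).
group = []
for signs in itertools.product([0, 1], repeat=3):
    g = np.eye(3, dtype=int)
    for s, m in zip(signs, mirrors):
        if s:
            g = g @ m
    group.append(g)

def is_rep(chi):
    # chi assigns +/-1 to each element; a one-dimensional representation
    # must satisfy chi(gh) = chi(g) chi(h) for all g, h.
    return all(chi[(g @ h).tobytes()] == chi[g.tobytes()] * chi[h.tobytes()]
               for g in group for h in group)

count = sum(is_rep({g.tobytes(): v for g, v in zip(group, values)})
            for values in itertools.product([1, -1], repeat=len(group)))
print(count)  # 8 = |Z_2^3|: one independent Z_2 charge per mirror

Each representation is fixed by its three mirror charges, which is precisely the block-dimension zero data entering 𝒞_0(D_2h); as stated above, the adjoining operations for D_2h do not reduce this count.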
Next, we consider states of block dimension d_b = 1, which can be constructed by placing one-dimensional blocks on the C_2v axes (the x, y or z axis). Each axis has an effective ℤ_2^2 internal symmetry, and there is a single non-trivial one-dimensional SPT phase with this symmetry, the Haldane phase.<cit.> We first consider the x axis. In principle, we should divide the axis into two semi-infinite one-dimensional blocks, one for x > 0 and one for x < 0, since then each block has only the effective internal ℤ_2^2 symmetry. These two blocks are related to one another by x → -x mirror reflection, so they must be in the same d=1 SPT phase, and they can be sewn together at the origin to form a single block. This works precisely because reflection acts trivially on the ℤ_2 SPT index characterizing the Haldane phase. For each C_2v axis, we get a ℤ_2 classification of pgSPT phases, so considering all three axes we have found 𝒞_1(D_2h) = ℤ_2^3. This discussion illustrates a general property of all the cSPT phases arising in this paper: for a general d=3 point group or space group G, cSPT phases of block dimension one are built from d=1 SPT phases classified by a ℤ_2 invariant, so we can always represent phases in 𝒞_1(G) using states with infinite one-dimensional blocks.

Finally we consider states of block dimension d_b = 2, constructed by placing two-dimensional blocks on the mirror planes. As discussed in Sec. <ref> for a single mirror reflection in d=3, there is a single non-trivial d_b = 2 SPT state, the Ising SPT phase. (Again, we do not consider two-dimensional blocks that are E_8 states.) Reflection and other spatial symmetries act trivially on the ℤ_2 SPT invariant characterizing the Ising SPT phase. Therefore, just as for states of block dimension one, we can represent these states using infinite two-dimensional blocks. We thus get a ℤ_2 SPT index for each mirror plane, and all together we find 𝒞_2(D_2h) = ℤ_2^3. Again, this illustrates a general statement: for a general d=3 point group or space group G, cSPT phases in 𝒞_2(G) built from SPT blocks can be represented using infinite two-dimensional blocks.

We now summarize and formalize the above discussion to describe more generally the classification of pgSPT phases in three dimensions. We start with states of block dimension two, and work our way down in block dimension. Phases in 𝒞_2(G) are obtained by placing either Ising SPT states or trivial states on mirror planes. For each set of symmetry-equivalent mirror planes, we have a ℤ_2 SPT index, and 𝒞_2(G) is just a product of these ℤ_2 factors. Each ℤ_2 index can be interpreted as a pgSPT index for mirror reflection symmetry alone, by focusing on an appropriate mirror plane and ignoring the rest of the G symmetry.

To classify states of block dimension one, we first need to identify one-dimensional axes with effective internal symmetry. There are two types of such axes, those with C_n symmetry, and those with C_nv symmetry (n = 2,3,4,6 in both cases). The first type of axis has a ℤ_n ≃ C_n effective internal symmetry, and thus only hosts trivial one-dimensional SPT states, because H^2(ℤ_n,U(1)) is trivial.[See, for instance, Ref. chen13symmetry for a definition of the second cohomology group H^2 and a discussion of its role in the classification of d=1 SPT phases.] In the second case, we note that C_nv ≃ ℤ_n ⋊ ℤ_2. The classification of one-dimensional SPT phases with this symmetry is H^2(ℤ_n ⋊ ℤ_2, U(1)) = Trivial for n odd, and H^2(ℤ_n ⋊ ℤ_2, U(1)) = ℤ_2 for n even.
Therefore C_3v axes are trivial, but each set of symmetry-equivalent C_nv axes with n=2,4,6 carries a ℤ_2 SPT invariant, and 𝒞_1(G) is a product of these ℤ_2 factors. The adjoining operation behaves trivially, because lines nearby and parallel to a C_nv axis have at most ℤ_2 effective internal symmetry (if they lie in a mirror plane), which is not enough to protect non-trivial one-dimensional states.

Finally, we consider block dimension zero states. Given a point group G, we define its fixed space to be the subset of ℝ^3 fixed by all group operations. The fixed space can be a single point at the origin, a line, or a plane. When the fixed space is a line or a plane, 𝒞_0(G) is trivial. This is because zero-dimensional blocks lying on the fixed space can always be grouped together into composites with no G charge. We can therefore focus on point groups whose fixed space is a single point. To proceed requires a more detailed description as compared to d_b = 1,2 states, because the adjoining operation is non-trivial for d_b = 0 states of some point groups. This is addressed in a general treatment of block dimension zero states given in Sec. <ref>, with further details in Appendices <ref> and <ref>. In Appendix <ref>, we obtain the result 𝒞_0(G) = H^1(G,U(1)) / Adj(G). Here, H^1(G,U(1)) labels one-dimensional representations of G (see Appendix <ref>), and Adj(G) is a subgroup of H^1(G,U(1)) containing all one-dimensional representations that can be obtained by the adjoining operation, starting with a trivial block at the origin. Taking the quotient precisely captures the information in H^1(G,U(1)) that is stable under the adjoining operation. The computation of Adj(G) is described in Appendix <ref>. Following the above discussion, the classification 𝒞(G) = 𝒞_0(G) × 𝒞_1(G) × 𝒞_2(G) is given in Table <ref> for all d=3 crystallographic point groups.

§ CRYSTALLINE SPT PHASES: TWO DIMENSIONS

Here, we consider cSPT phases protected by the 17 wallpaper groups in two dimensions. We introduce our procedure to classify cSPT block states, and give a classification for each wallpaper group. In and of itself, our procedure is not guaranteed to produce a classification of distinct SPT phases. The classifications we obtain must therefore be further justified, which can be done in two ways: (1) our results match the Thorngren-Else classification, which is obtained by very different methods; (2) for each wallpaper group, our classification can be factored into d=2 pgSPT invariants and weak pgSPT invariants. The weak invariants, which we introduce below via specific examples, are defined by compactifying one spatial dimension to obtain d=1 pgSPT states, and examining the dependence of the d=1 pgSPT invariant on the length in the finite dimension. These invariants can be understood as the d=1 pgSPT index per layer of a stack of d=1 pgSPT states, with translation symmetry along the stacking direction.

In general, a cSPT block state in two dimensions can be built from zero- and one-dimensional blocks. However, any one-dimensional blocks are always trivial. The effective internal symmetry of a one-dimensional block is at most ℤ_2 (for a reflection axis), and this is not enough to protect non-trivial invertible topological phases in one dimension.<cit.> Therefore, it is enough to consider dimension zero block states, i.e.
those with only zero-dimensional blocks. This discussion can be summarized by the statements that 𝒞_1(G) is trivial for wallpaper groups in two dimensions, and 𝒞(G) = 𝒞_0(G).

To describe and classify block dimension zero states for wallpaper groups, we first introduce some notation. The same discussion applies in three dimensions, so for the moment we keep the spatial dimension d arbitrary. We let B_0 be the set of block dimension zero states. A state Ψ ∈ B_0 is specified by the following data:
* A discrete set of points Λ ⊂ ℝ^d which is invariant under the action of G. A point p ∈ Λ is fixed by its site-symmetry group G_p ⊂ G.
* We place a zero-dimensional block at each point p, and G_p is the effective internal symmetry of this block. We denote the G_p charge at p by q_p ∈ H^1(G_p,U(1)). Knowing the charge at one point determines the charge at all symmetry-related points, as we discuss below.
This data is manifest physically in the wave function | Ψ⟩ = ⊗_p ∈ Λ | ψ_p ⟩. The action of g ∈ G is given by U_g | ψ_p ⟩ = λ(g, p) | ψ_gp⟩, where λ(g,p) is a phase factor. We assume that the degrees of freedom transform linearly (i.e. not projectively) under the symmetry, which means that U_g_1 U_g_2 | ψ_p ⟩ = U_g_1 g_2 | ψ_p ⟩, implying the condition λ(g_1 g_2, p) = λ(g_1, g_2 p) λ(g_2, p). For a fixed p and restricting to g ∈ G_p, this equation just says that λ(g,p) is a one-dimensional representation of G_p, and we choose it to be the representation given by q_p. Formally, for g ∈ G_p, we write λ(g,p) = D_q_p(g), where D_q_p is the one-dimensional representation of G_p labeled by the charge q_p. It may appear that there is physical information in λ(g,p) beyond the charges q_p, but this is not the case: knowing q_p for all p ∈ Λ completely determines λ(g,p) up to some gauge-like freedom arising from the freedom to adjust the phase of | ψ_p ⟩. This is shown in Appendix <ref>, and justifies specifying only q_p in the data characterizing a state.

Charges at symmetry-related points are related. Consider a point p ∈ Λ, and some operation g ∈ G so that g p ≠ p. Then let h ∈ G_p, so that h p = p. Using Eq. (<ref>), we find D_q_gp(g h g^-1) = D_q_p(h). The charges q_p and q_gp can be identified if we identify G_p and G_gp using the isomorphism induced by conjugation by g. However, this identification is not always natural, and in general we can only say that the charges are related according to Eq. (<ref>).

Our goal is to obtain the classification 𝒞_0(G) by studying states in B_0. We do this by introducing equivalence operations, referred to as block equivalence operations, that group B_0 into classes that will turn out to correspond to cSPT phases. Two states are considered block-equivalent when they are related by some combination of the following operations:
* Continuously slide points around so that G symmetry is always preserved. We require that, for each point p, the site symmetry G_p is constant throughout the sliding process.
* A collection of points “near” p, where the collection has symmetry G_p, can be grouped together into a single new point at p. The whole collection transforms in a one-dimensional representation of G_p with charge q_p, which is a function of the site symmetry charges of the points in the collection (see Appendix <ref>). There is also an inverse operation, where a point p can be split into a collection of nearby points respecting G_p symmetry, with the restriction that the collection transforms in the representation labeled by q_p.
* Points with trivial charge can be added or removed as long as G symmetry is respected.
These operations are closely related to the lattice homotopy operations introduced in Ref. po17lattice to obtain LSM constraints; the relationship is discussed in Sec. <ref>.

Two block-equivalent states are certainly in the same phase. However, these operations only correspond to a special family of adiabatic paths between states, and, in principle, two inequivalent states could be in the same phase. That is, block equivalence classes could be finer than the actual classification of phases. It turns out this is not the case, and these operations do give a classification of distinct cSPT phases in two dimensions. As stated above, this is based on the facts that the block-equivalence classification matches the Thorngren-Else classification obtained by very different means, and that it can be factored into pgSPT and weak pgSPT invariants.

We now illustrate this general discussion with some examples. First, we consider G = p1 (wallpaper group #1), which consists only of translations. Here, all points have trivial site symmetry, so all block states are trivial, and we find a trivial classification.

A more interesting example is G = p2 (wallpaper group #2), which is generated by two primitive translations and a C_2 rotation. Within each primitive cell, there are four inequivalent points with C_2 site symmetry. This information is readily obtained from the International Tables for Crystallography.<cit.> There, for each wallpaper group, a Wyckoff letter w = a,b,… is assigned to each family of symmetry-equivalent points. For each Wyckoff class, the site symmetry, unit cell coordinates, and multiplicity within the unit cell are given. One of the Wyckoff classes always consists of points with no site symmetry; this class plays no role in our analysis and we ignore it. The coordinates for each point in a Wyckoff class sweep out a space of either zero or one dimension, according to the number of free parameters, and we refer to this as the dimension d_w of the Wyckoff class. Points in d_w = 1 Wyckoff classes can be slid continuously, while those in d_w = 0 classes are fixed. Returning to the present case of G = p2, the four Wyckoff classes with C_2 site symmetry have d_w = 0. We can attach zero-dimensional blocks carrying definite C_2 ≃ ℤ_2 charge to the points in each Wyckoff class, so for each class we obtain a ℤ_2 invariant. We thus find the classification 𝒞_0(p2) = ℤ_2^4. Here, each ℤ_2 factor in the classification is a pgSPT invariant for a different C_2 subgroup of p2. We can thus factor 𝒞_0(p2) into ℤ_2 pgSPT invariants, which shows that all 16 states labeled by 𝒞_0(p2) are truly distinct SPT phases.

We now turn to an example where the cSPT classification cannot be factored into pgSPT invariants, and where we need to consider weak pgSPT invariants instead. We consider G = pm (wallpaper group #3), which is generated by a single mirror reflection and two primitive translations, which can be taken parallel and perpendicular to the reflection axis. There are two Wyckoff classes, both of which are one-dimensional and have site symmetry D_1.
Each class corresponds to a symmetry-equivalent family of reflection axes. We can place zero-dimensional blocks carrying D_1 ≃ ℤ_2 charge on the points in each Wyckoff class, which gives a ℤ_2^2 classification. (The block equivalence operations play a trivial role here.)

Here, the ℤ_2 factors in the classification are weak pgSPT invariants. We single out a particular reflection axis, choosing coordinates so the axis runs along the y-direction and lies at x=0. We focus on x → -x reflection, and translation in the y-direction, ignoring other symmetries. We choose the system to have finite length L in the y-direction, with periodic boundary conditions. The resulting system can be viewed as a d=1 pgSPT state for the x → -x reflection, and is thus characterized by a ℤ_2 SPT invariant. Due to translation symmetry along the y-direction, each time L is increased by one lattice constant, the d=1 invariant either remains the same, or it flips. The weak pgSPT invariant is defined to be the difference in the d=1 invariant between systems with odd L and even L. The weak invariant can also be visualized as the d=1 pgSPT invariant per layer: the system can be viewed as a stack in the y-direction of d=1 pgSPT states with x → -x reflection symmetry. Due to translation symmetry in the stacking direction, each stacked layer has the same d=1 pgSPT invariant, and this is the weak pgSPT invariant. This simple visualization applies to block states, but the definition of the weak invariant above is more general. We also note that in d=3 cSPT phases, there are cases where weak pgSPT invariants defined by compactifying one spatial dimension cannot be interpreted in terms of stacking of lower-dimensional SPT states (see Appendix <ref>).

Another example involving a weak pgSPT invariant is G = p3m1 (wallpaper group #14). There are four non-trivial Wyckoff classes. Three of these (a,b,c) are centers of D_3 symmetry, and are fixed, as shown in Fig. <ref>. The fourth Wyckoff class (d) is one-dimensional and has site symmetry D_1. Points in this class lie on the reflection axes that join the D_3 centers. The charge q_p for a point in any of the Wyckoff classes is labeled by an element of ℤ_2, since H^1(D_1,U(1)) = H^1(D_3,U(1)) = ℤ_2. Points in class d can always be slid near one of the D_3 centers and joined to it, so we can focus on D_3 charge configurations, which are labeled by (q_a,q_b,q_c) ∈ ℤ_2^3. Given such a configuration, we can change (q_a, q_b, q_c) → (q_a +1, q_b+1, q_c) by the following sequence of equivalence operations, illustrated in Fig. <ref>a. First, we split each a block into a new a block and three d blocks, each carrying non-trivial D_1 charge. This changes q_a → q_a + 1. Then, we slide the d blocks near b, and group them together with the b blocks. This eliminates all the d blocks and changes q_b → q_b + 1, as desired. There is nothing special about the pair a,b, and this process can be done for any pair of a,b,c, as shown in Fig. <ref>. Under such equivalence operations, every configuration (q_a, q_b, q_c) is equivalent either to (0,0,0) or to (1,1,1), so these operations collapse the ℤ_2^3 down to a single ℤ_2, and we find 𝒞_0(p3m1) = ℤ_2.
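The collapse can be checked mechanically by enumerating orbits of charge configurations under the three pairwise moves just described. The sketch below is our own illustration; all it assumes is the encoding of a configuration as a triple (q_a, q_b, q_c) ∈ ℤ_2^3.

```python
# Sketch: orbits of D_3 charge configurations (q_a, q_b, q_c) in Z_2^3 for
# p3m1, under moves such as (q_a, q_b, q_c) -> (q_a + 1, q_b + 1, q_c).
from itertools import product

moves = [(1, 1, 0), (1, 0, 1), (0, 1, 1)]  # each move shifts a pair of charges

def orbit(q):
    """Block-equivalence orbit of configuration q under the pairwise moves."""
    seen, frontier = {q}, [q]
    while frontier:
        x = frontier.pop()
        for m in moves:
            y = tuple((a + b) % 2 for a, b in zip(x, m))
            if y not in seen:
                seen.add(y)
                frontier.append(y)
    return frozenset(seen)

classes = {orbit(q) for q in product((0, 1), repeat=3)}
print(len(classes))   # 2: so C_0(p3m1) = Z_2
for c in classes:     # one orbit contains (0,0,0), the other (1,1,1)
    print(sorted(c))
```

The two orbits are precisely the configurations of even and odd total charge, with representatives (0,0,0) and (1,1,1), in agreement with the discussion above.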
This ℤ_2 invariant cannot be a pgSPT invariant, because both D_3 and D_1 pgSPT phases have a trivial classification. Instead, just like for G = pm symmetry, it is a weak pgSPT invariant. Focusing on a particular reflection, shown as the blue reflection axis in Fig. <ref>, we see that the one-dimensional primitive cell along the axis contains a single point in each of the a,b,c Wyckoff classes. Therefore, the (1,1,1) charge configuration has non-trivial reflection charge per axis primitive cell, and can be viewed as a stacking of non-trivial d=1 pgSPT states. On the other hand, the (0,0,0) charge configuration is a stacking of trivial states.

We used the formal procedure described in Sec. <ref> to calculate 𝒞_0(G) and thus classify cSPT phases for all wallpaper groups. Additional technical details appear in Appendices <ref> and <ref>. The results, which agree with those of Thorngren and Else,<cit.> are shown in Table <ref>. For each symmetry group, the classification factorizes into pgSPT and weak pgSPT invariants.

§ CRYSTALLINE SPT PHASES: THREE DIMENSIONS

Now we consider cSPT phases protected by space group symmetry in three dimensions. We focus only on those states built from lower-dimensional SPT building blocks; that is, we do not consider two-dimensional E_8 state blocks. As argued in Appendix <ref>, the classification has the structure 𝒞(G) = 𝒞_0(G) × 𝒞_1(G) × 𝒞_2(G), where, in contrast to d=2, non-trivial contributions from all block dimensions (d_b = 0,1,2) can appear. Following the discussion given here, and supplemented by technical details presented in Appendices <ref> and <ref>, we obtained 𝒞_d_b(G) (d_b = 0,1,2) for all 230 space groups. The classifications are presented in Appendix <ref>. We find that 𝒞(G) agrees with the Thorngren-Else classification, which was obtained in Ref. thorngren16gauging for all space groups except numbers 227, 228 and 230.

The classifications obtained here are based on the block-equivalence operations described for d=2 cSPT phases in Sec. <ref>. Just like in two dimensions, block equivalence is not a priori guaranteed to give a classification of distinct cSPT phases, and further justification is needed. This is provided in part by the fact that our results match those of Thorngren and Else. Moreover, we show that a state with non-trivial block-equivalence class (i.e., a non-zero element of 𝒞(G)) has a non-trivial pgSPT or weak pgSPT invariant. This establishes that non-zero elements of 𝒞(G) are non-trivial phases. It also implies distinct elements are distinct phases with different sets of pgSPT and weak pgSPT invariants; that is, that d=3 bosonic cSPT phases can be completely characterized in terms of pgSPT and weak pgSPT invariants. In this section, we show that 𝒞_1(G) × 𝒞_2(G) factors into pgSPT invariants. Establishing the above statements is more involved for the block-dimension zero states classified by 𝒞_0(G), and this is done in Appendix <ref>.

We classify states of different block dimension separately. For d_b = 0, we use the same block-equivalence operations described in Sec. <ref>. Below we describe a systematic computational procedure used to obtain 𝒞_0(G) in both d=2 and d=3. Obtaining 𝒞_1(G) and 𝒞_2(G) is much simpler; these factors in the classification can essentially be read off from the entry for G in the International Tables for Crystallography,<cit.> with no calculation required. This occurs because the block-equivalence operations are trivial for d_b = 1,2. Sliding is trivial because d_b = 1,2 blocks with enough effective internal symmetry to protect a non-trivial SPT state are always fixed in space; they cannot be slid without lowering their effective internal symmetry. Splitting and grouping are also trivial.
Whenever a d_b = 1 block has effective internal symmetry capable of supporting a non-trivial SPT phase (C_nv with n=2,4,6), nearby parallel lines have at most ℤ_2 effective internal symmetry, which is not enough to protect non-trivial one-dimensional states. Similarly, for d_b = 2 blocks with mirror symmetry, nearby parallel planes have no symmetry and cannot host non-trivial two-dimensional SPT states.

We now describe how to obtain 𝒞(G) from the information in the International Tables for Crystallography.<cit.> Just like for wallpaper groups, the entry for each d=3 space group includes information about crystal positions. These are labeled by letters w = a,b,c,… corresponding to Wyckoff classes, where the points in each Wyckoff class are related by symmetry, and points in different Wyckoff classes are not related by symmetry. The site symmetry for each Wyckoff class is given, and we refer to these groups as G_a, G_b, …. One of the Wyckoff classes always consists of general points with no site symmetry. This class plays no role in our analysis, and we ignore it. The points in each Wyckoff class have either zero, one or two free parameters. (The latter two cases correspond to high symmetry axes and planes, respectively.) We refer to this number as the dimension of the Wyckoff class and denote it by d_w, because as the free parameters are varied, each point sweeps out a space of the given dimension.

We first discuss the d_b = 1,2 factors in the classification, before proceeding to the more involved calculations for d_b = 0. To obtain 𝒞_2(G), we simply identify all the two-dimensional Wyckoff classes, which are always mirror planes. Each such Wyckoff class gives a ℤ_2 invariant associated with putting Ising SPT states on the symmetry-equivalent mirror planes. 𝒞_2(G) is simply a product of these ℤ_2 invariants. Similarly, 𝒞_1(G) is obtained by identifying all one-dimensional Wyckoff classes with C_nv site symmetry, for n = 2,4,6. Each such class gives a ℤ_2 invariant associated with non-trivial d=1 SPT states with ℤ_n ⋊ ℤ_2 effective internal symmetry, and 𝒞_1(G) is a product of these ℤ_2 invariants. We need not consider C_3v axes, because the ℤ_3 ⋊ ℤ_2 effective internal symmetry does not admit non-trivial d=1 SPT phases (see Sec. <ref>). It is easy to see that 𝒞_1(G) × 𝒞_2(G) factors into pgSPT invariants and thus gives a classification of distinct cSPT phases. Each ℤ_2 factor in 𝒞_1(G) is a pgSPT invariant for a C_nv subgroup of G. Similarly, each ℤ_2 factor in 𝒞_2(G) is a pgSPT invariant for a mirror reflection subgroup of G.

We now describe a procedure to obtain 𝒞_0(G) based on the block-equivalence operations of Sec. <ref>. Suppose we have a state Ψ ∈ B_0. We can use the block-equivalence operations to deform this state to a canonical state, where Λ has exactly one symmetry-related set of points for each Wyckoff class. All points in the same Wyckoff class have symmetry-related charges. We arbitrarily pick out a representative point in each class w, and specify its charge q_w ∈ H^1(G_w,U(1)), which determines the charges of all points in the class. The charges q_w can be assigned independently for the different classes. Therefore, canonical states are labeled by a charge Q taking values in the direct product of H^1 factors for the different Wyckoff classes; that is, Q ∈ 𝒬_c ≡ H^1(G_a) × H^1(G_b) × ⋯, where in the interest of compact notation we have defined H^1(G_w) ≡ H^1(G_w,U(1)). Group addition in 𝒬_c corresponds physically to the operation of stacking two SPT states, i.e.
making a decoupled “bilayer” of the two states. States with different values of Q can be in the same phase. We define a subgroup 𝒬_t ⊂ 𝒬_c containing all Q's such that the corresponding state is in the trivial phase. The block-equivalence classification is then given by the quotient 𝒞_0(G) = 𝒬_c / 𝒬_t.

To proceed, the main task is to obtain 𝒬_t. In principle, 𝒬_t is the set of charges Q of all canonical states that can be obtained from the trivial canonical state, using the block-equivalence operations. We conjecture that 𝒬_t is generated by splitting and twisting operations, as described below. This conjecture is clearly reasonable (it may even appear obvious), but we have not proved it rigorously. If this conjecture were incorrect, some generators of 𝒬_t would have been missed, resulting in a classification that is too fine. Therefore, the conjecture is verified a posteriori by matching with the Thorngren-Else classification, and by the decomposition of 𝒞_0(G) into pgSPT and weak pgSPT invariants (see Appendix <ref>). We now describe the operations generating 𝒬_t:
* Splitting operations. Starting from the trivial canonical state, we can split the points in a given Wyckoff class into collections of nearby points. One point in each collection is a point in the original Wyckoff class, and the other points are lower-symmetry points that can be brought arbitrarily close to the original point. An example of splitting in two dimensions is shown in the center column of Fig. <ref>. The role of splitting operations in obtaining 𝒞_0(G) is illustrated below for space group #200. More information about splitting operations is given in Appendix <ref>.
* Twisting operations. These operations arise for certain one-dimensional Wyckoff classes in non-symmorphic space groups. Twisting is a sequence of splitting, sliding and grouping operations that involves points only in a single one-dimensional Wyckoff class. This has a non-trivial effect on 𝒬_t, and hence on 𝒞_0(G), only when: (1) G_w = C_3, C_4, C_6, and the axis swept out by a Wyckoff point is contained in a glide plane with glide direction along the axis; (2) G_w = C_2v, and the Wyckoff axis coincides with a four-fold screw axis. An example of twisting is discussed below for space group #101. Twisting operations and their effect on 𝒞_0(G) are described more generally in Appendix <ref>.
Computations using the splitting and twisting operations are simplified by a graph-theoretic representation that saves us from complicated geometrical visualization for many different space groups. First, represent the Wyckoff classes as vertices in a directed graph that we call the W-graph. If w has non-trivial twisting operations, then we denote its vertex with an open circle. Otherwise, when twisting operations are trivial, we use a filled circle.
We will add directed edges to the graph to represent splitting operations. Algebraically, each vertex corresponds to an H^1 factor in 𝒬_c. Edges (and open vertices) are associated with sets of generators of 𝒬_t. If the points in class w can be slid arbitrarily close to the higher-symmetry point w_h, so that they can be grouped together with w_h, draw a directed edge w → w_h. In the corresponding splitting operation, we split each point in w_h into a collection that includes the original w_h point, and nearby w points. We call the splitting operation trivial if it can generate all possible values of q_w, while always leaving q_w_h unchanged. Equivalently, the corresponding grouping operation always leaves q_w_h unchanged. When the splitting operation is trivial, H^1(G_w) is contained in 𝒬_t. We draw the directed edge as a dashed arrow for trivial splitting operations, and use solid arrows when the splitting operation is non-trivial.

The charge configurations generated by the splitting operation are a property only of the G_w_h site symmetry. This is convenient, because it means it is enough to study splitting for the crystallographic point groups, and we do not have to start from scratch for every space group. Given a point group G_w_h, the International Tables for Crystallography enumerate the distinct collections of nearby symmetry-equivalent points w.<cit.> We can then determine which charge configurations (q_w_h, q_w) ∈ H^1(G_w_h) × H^1(G_w) can be generated by the splitting operation starting from the trivial state with (q_w_h, q_w) = (0,0). This is done for all the crystallographic point groups in Appendix <ref>. In this notation, the splitting operation is trivial when the generated charge configurations are the set { (0, q_w) }, for all values of q_w.

To simplify the calculations further, starting from the W-graph, we implement a cleaning procedure to construct a W-quasigraph. First, we erase each dashed arrow together with its tail vertex. Then we continue erasing all solid arrows (together with their tail vertices) whose head vertices are already erased, until no headless solid arrows remain. In general the W-quasigraph is not a true graph, because there can now be arrows lacking a tail vertex. The erased vertices have corresponding H^1(G_w) factors lying entirely in 𝒬_t, which disappear from 𝒬_c when we take the quotient. We let Q̃_c be the product of H^1 factors for the vertices remaining in the W-quasigraph, and Q̃_t ⊂ Q̃_c is generated by the splitting operations associated with the remaining solid arrows, and twisting operations associated with open vertices. Note that arrows with a missing tail can still contribute to Q̃_t. We then proceed to compute the quotient Q̃_c / Q̃_t. The W-quasigraph often breaks into disconnected components, and the quotient can be computed component-by-component, then taking the product over components.

We illustrate our general discussion with two example calculations of 𝒞_0(G), beginning with the space group P m 3̅. This is space group number 200 in the International Tables,<cit.> and we refer to it as G_200.
There are 12 Wyckoff classes, with letters a, …, l. We ignore the l points because their site symmetry is trivial. Figure <ref> shows the W-graph and W-quasigraph for this space group. To work out these graphs and understand the effect of the splitting operations, we used the entry for the space group in the International Tables and results obtained in Appendix <ref>. All twisting operations for this space group are trivial, but splitting plays a non-trivial role. Examining the W-quasigraph, we see that c and d are isolated vertices, with G_c = G_d = D_2h. Each vertex contributes a factor of H^1(D_2h) = ℤ_2^3 to the classification.

The non-trivial component of the W-quasigraph has three vertices a,b,i, with G_a = G_b = T_h and G_i = C_3. For this component, we write a general element Q ∈ Q̃_c as Q = (q_a, q_b, q_i), where q_a, q_b ∈ H^1(T_h) = ℤ_3 × ℤ_2, and q_i ∈ H^1(C_3) = ℤ_3. We further write q_a = (q^C_3_a, q^i_a), and similarly for q_b, where q^C_3_a ∈ ℤ_3 is the charge associated with a C_3 subgroup of T_h, and q^i_a ∈ ℤ_2 is the charge for the inversion subgroup of T_h. We thus have the general form Q = (q^C_3_a, q^i_a, q^C_3_b, q^i_b, q_i). The splitting process for the i → a arrow generates Q = (2,0,0,0,-1) = (2,0,0,0,2), while that for the i → b arrow generates Q = (0,0,2,0,-1) = (0,0,2,0,2). Therefore Q̃_t ≃ ℤ_3 × ℤ_3, and for this component we have the quotient Q̃_c / Q̃_t = ℤ_3 × ℤ_2^2. Putting the results from the three components together, we have the classification 𝒞_0(G_200) = ℤ_3 × ℤ_2^8.
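The quotient for this component can also be verified by brute-force coset enumeration. The sketch below is our own consistency check; it encodes charges as tuples in ℤ_3 × ℤ_2 × ℤ_3 × ℤ_2 × ℤ_3, in the ordering of the general form for Q given above, with Q̃_t generated by the two splitting vectors.

```python
# Sketch: coset enumeration for the a,b,i component of space group #200.
# Q = (qa_C3, qa_i, qb_C3, qb_i, q_i) lives in Z_3 x Z_2 x Z_3 x Z_2 x Z_3.
from itertools import product

mods = (3, 2, 3, 2, 3)

def add(x, y):
    return tuple((a + b) % m for a, b, m in zip(x, y, mods))

gens = [(2, 0, 0, 0, 2), (0, 0, 2, 0, 2)]  # i -> a and i -> b splittings
Qt, frontier = {(0,) * 5}, [(0,) * 5]
while frontier:                            # close Qt under the generators
    x = frontier.pop()
    for g in gens:
        y = add(x, g)
        if y not in Qt:
            Qt.add(y)
            frontier.append(y)

Qc = list(product(*(range(m) for m in mods)))
cosets = {frozenset(add(q, t) for t in Qt) for q in Qc}

def coset_order(q):
    x, k = q, 1
    while x not in Qt:
        x, k = add(x, q), k + 1
    return k

print(len(Qt), len(cosets))             # 9 12: 108/9 = 12 cosets
print(max(coset_order(q) for q in Qc))  # 6: order 12, exponent 6
```

An abelian group of order 12 and exponent 6 is necessarily ℤ_3 × ℤ_2^2, confirming the quotient Q̃_c / Q̃_t obtained above.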
Now we describe the decomposition of 𝒞_0(G_200) into pgSPT and weak pgSPT invariants. The Wyckoff classes c and d correspond to distinct centers of D_2h symmetry. The pgSPT classification for D_2h is ℤ_2^3, and the ℤ_2^3 factor associated with each of these vertices is a D_2h pgSPT invariant. Classes a and b have T_h symmetry, where H^1(T_h) = ℤ_3 × ℤ_2, but where the ℤ_3 factor disappears from the pgSPT classification due to the adjoining operation. The T_h pgSPT classification is thus ℤ_2, and two ℤ_2 factors in 𝒞_0(G_200) are T_h pgSPT invariants. So far, we have shown that the ℤ_2^8 factor in 𝒞_0(G_200) is a product of pgSPT invariants.

The ℤ_3 factor in 𝒞_0(G_200) is a weak pgSPT invariant associated with stacking of d=2 pgSPT phases with C_3 symmetry. To see this, we note that an element q_3 of the ℤ_3 factor of 𝒞_0(G_200) can be parametrized in terms of the canonical state charge configuration by q_3 = q^C_3_a + q^C_3_b + 2 q_i. In fact q_3 measures the total C_3 charge in a primitive cell on the [111] axis. To see this, we have to examine the Wyckoff positions, and find all points on the [111] axis within a single primitive cell. There is a single a point, a single b point, and two i points, corresponding to the factor of 2 in the last term. Focusing on the [111] C_3 rotation, and translation along the [111] axis, we can view the block states we are describing as stacks of d=2 C_3 pgSPT states, with translation symmetry along the stacking direction. q_3 measures the ℤ_3 pgSPT invariant per layer, which is a robust invariant in the presence of translation symmetry in the stacking direction. This is the weak pgSPT invariant appearing as the ℤ_3 factor in 𝒞_0(G_200).

Now we discuss an example that illustrates the role of non-trivial twisting operations. We consider the space group P 4_2 c m, which we refer to as G_101, reflecting its numbering in the International Tables. This is a non-symmorphic space group, and four-fold screw axes will play an important role. There are four non-trivial Wyckoff classes. One of them (d) has mirror site symmetry and can be trivially eliminated by grouping with a points. The remaining classes a,b,c are zero-dimensional, so the W-quasigraph consists of three disconnected vertices. Splitting operations are thus clearly trivial in this example.

First, considering class c, we have G_c = C_2. Following the discussion of Appendix <ref>, twisting operations are trivial for this class, because H^1(C_2) ≃ ℤ_2 has no non-trivial automorphism. Therefore class c contributes an H^1(C_2) = ℤ_2 factor to 𝒞_0(G_101). This is a weak pgSPT invariant associated with stacking of d=2 C_2 pgSPT states. The elementary “translation” symmetry along the stacking direction is actually a glide reflection. Because this operation commutes with the C_2 rotation, the fact that it is a glide and not a pure translation plays no role.

We now turn to class a (identical statements hold for class b). Class a has G_a = C_2v, and the one-dimensional axis swept out by a point in a coincides with a four-fold screw axis. As shown in Appendix <ref>, twisting operations are non-trivial under these circumstances. There are two a points in a primitive cell, with coordinates (0,0,z) and (0,0,z+1/2), which are related by the four-fold screw rotation. This operation acts along the z-axis as a half translation, so we denote it by t_h. Letting σ_1 and σ_2 be the two mirror reflections generating G_a = C_2v, we have t_h σ_1 t_h^-1 = σ_2 and t_h σ_2 t_h^-1 = σ_1. We denote by q_z and q_z+1/2 the C_2v charges at the points (0,0,z) and (0,0,z+1/2), respectively. Writing q_z = (q^1_z, q^2_z), where q^1_z, q^2_z ∈ ℤ_2, the non-trivial action of t_h on C_2v implies q_z+1/2 = (q^2_z, q^1_z). Charge configurations for the a vertex are labeled by distinct elements q_z ∈ ℤ_2^2, so the group of charge configurations is 𝒬_c ≃ ℤ_2^2. Because a is an isolated vertex in the W-quasigraph, if we only considered splitting operations, we would incorrectly conclude that the a class contributes a factor of ℤ_2^2 to 𝒞_0(G_101). We now start with the trivial charge configuration q_z = q_z+1/2 = (0,0) ∈ ℤ_2^2 (Fig.
<ref>a), and apply block equivalence operations to obtain a non-zero element of 𝒬_t. The sequence of block equivalence operations applied, taken together, is what we mean when referring to a twisting operation. We write charge configurations as ordered pairs [q_z, q_z+1/2], so the trivial configuration is denoted [(0,0), (0,0)]. Strictly speaking, there is no need to specify q_z+1/2, as it is determined by q_z, but it is illustrative to keep track of both charges explicitly. First, we split the block at (0,0,z) into two blocks with charges q_z_1 and q_z_2, respectively, that we take to be q_z_1 = q_z_2 = (1,0). To maintain symmetry, at the same time we must split the block at (0,0,z+1/2) into two blocks with charges q_(z+1/2)_1 = q_(z+1/2)_2 = (0,1). This splitting operation, illustrated in Fig. <ref>b, takes a single chain of (trivial) charges on the z-axis to two chains of non-trivial charges. Next, we slide the charges of the first chain along the z-axis until they fall into registry again with the second chain, to obtain the configuration shown in Fig. <ref>c. This has the effect of transforming [q_z_1, q_(z+1/2)_1] → [(0,1), (1,0)]. Finally, we group the two chains together, to again obtain a single chain of charges, which is now in the non-trivial configuration [(1,1), (1,1)] (Fig. <ref>d). Other similar operations do not produce configurations beyond [(0,0), (0,0)] and [(1,1), (1,1)], so we have shown these two configurations make up 𝒬_t, and thus 𝒬_t ≃ ℤ_2. Taking the quotient 𝒬_c / 𝒬_t = ℤ_2, we see that class a contributes a ℤ_2 factor to the classification 𝒞_0(G_101). Since the same is true for class b, we thus find 𝒞_0(G_101) = ℤ_2^3. In Appendix <ref>, it is shown that the ℤ_2 factors contributed by classes a and b are weak pgSPT invariants.

§ LIEB-SCHULTZ-MATTIS CONSTRAINT

It has recently been understood that there is an intimate connection between Lieb-Schultz-Mattis (LSM) constraints in d dimensions, and SPT phases with crystalline symmetries in d+1 dimensions.<cit.> Here we exploit this connection, which is a type of bulk-boundary correspondence, to obtain an LSM constraint for d=2 bosonic systems with wallpaper group symmetry. This is related via the bulk-boundary correspondence to d=3 cSPT phases with block dimension one. Other LSM constraints involve a combination of internal and crystalline symmetries, and, to our knowledge, LSM constraints involving only crystalline symmetry have not been obtained previously. After using the bulk-boundary correspondence to obtain our LSM constraint, we give an independent argument for it based on dimensional reduction, working strictly in two dimensions. We note that Qi, Fang and Fu have independently obtained the same LSM constraint.<cit.>

By an LSM constraint, we mean a generalization of the celebrated Lieb-Schultz-Mattis theorem,<cit.> a version of which states that in a one-dimensional spin system with SO(3) spin and lattice translation symmetries, finite-range interactions, and half-odd-integer spin per primitive cell, the ground state becomes degenerate in the thermodynamic limit. This implies that a symmetry-preserving short-range entangled ground state (i.e.
an SPT state or other integer topological phase) is impossible. The LSM theorem and its generalizations are interesting in part because they show how a microscopic property, the pattern of S = 1/2 projective representations in the unit cell, constrains certain universal, infrared properties. LSM constraints have been obtained in arbitrary spatial dimensions,<cit.> in systems with lattice translation combined with internal symmetry,<cit.> in systems with both space group and internal symmetry,<cit.> and in systems with magnetic translation symmetry.<cit.> In all these cases, internal symmetry is involved. It should be noted that, apart from the work of Hastings generalizing the LSM theorem to higher spatial dimensions,<cit.> these LSM constraints, including the constraint we obtain here, do not currently have the status of rigorous mathematical theorems.

To illustrate the connection between LSM constraints and SPT phases, we follow the ideas of Ref. cheng16translational and observe that an S = 1/2 chain can be viewed as the edge of a stack of S = 1 chains in the Haldane phase. We assume translation symmetry along the stacking direction, and that the edge preserves both translation and SO(3) symmetries, so that the LSM theorem applies. The d=2 bulk is a non-trivial SPT phase protected by the same symmetries, sometimes referred to as a “weak” SPT phase because translation symmetry is involved. In the language of this paper, the bulk is a block-dimension one cSPT state. Then we see that the LSM constraint for the S = 1/2 chain is the same as the statement that a symmetric edge of this d=2 SPT phase cannot be gapped out trivially, i.e. the edge cannot be in a symmetry-preserving, short-range entangled ground state. This is what we mean in this section by bulk-boundary correspondence.

It is important to note that the statement that symmetric boundaries of SPT phases cannot be trivially gapped is a conjecture. Indeed, this statement is false for all block-dimension zero cSPT phases; this is familiar from the study of reflection SPT phases in one dimension, which do not support gapless end states. The conjecture is believed to hold for large classes of SPT phases, but in general the bulk-boundary correspondence should be viewed as a tool to obtain conjectured LSM constraints, and it is desirable to give independent supporting arguments.

To state our LSM constraint, we consider a d=2 spin system with wallpaper group symmetry. Unlike in our discussions of SPT phases, we allow some spins to transform projectively under their site symmetry. We find: if the system contains any spin transforming projectively under its site symmetry, a symmetry-preserving, gapped, short-range entangled ground state is impossible. We argue for this statement both using the bulk-boundary correspondence, viewing the d=2 system as the surface of a block-dimension one cSPT state, and also using an independent argument directly in two dimensions. In addition, if only wallpaper group symmetry is present, we use the bulk-boundary correspondence to argue the converse statement, namely that if no spins transform projectively, then a symmetry-preserving, gapped, short-range entangled ground state can occur for some choice of parameters.

We now obtain our LSM constraint from the bulk-boundary correspondence.
We let G be a wallpaper group, and consider a G-symmetric surface of a d=3 bulk. We take the surface normal to be along the z-axis. The bulk space group is denoted G_3d and is determined by G using a prescription we now describe. Translations and rotations in G correspond to translations and rotations in G_3d in the obvious way. Reflections and glides in G correspond to vertical mirror or glide planes in G_3d. Using this correspondence, G_3d is generated by the operations in G and by translations in the z-direction. It follows that G_3d is a product of z-axis translations and G, so that the surface termination only breaks translations along the surface normal. In this sense, G_3d can be viewed as a minimal “extension” of G into three dimensions. It should be emphasized that only block-dimension one bulk cSPT states and their classification by 𝒞_1(G_3d) are relevant for this discussion.

Centers of C_n symmetry on the surface extend into the bulk as C_n axes. Similarly, D_n centers on the surface correspond to C_nv axes in the bulk. All these axes are parallel to the z-axis. We consider block-dimension one bulk cSPT states, all of which can be obtained by placing d=1 SPT phases on C_nv axes for n = 2,4,6. These d=1 SPT phases have effective internal symmetry ℤ_n ⋊ ℤ_2 ≃ C_nv ≃ D_n, and obey a ℤ_2 classification. The corresponding classification of cSPT phases is given by a product of ℤ_2 factors, one for each C_nv (n = 2,4,6) Wyckoff class in G_3d. C_3v and C_n axes play no role.

Each symmetry-equivalent family of C_nv axes corresponds on the surface to a Wyckoff class with D_n site symmetry. Placing non-trivial d=1 SPT states on the C_nv axes corresponds to placing non-trivial D_n projective representations at the points of the corresponding Wyckoff class. We note that D_n (n = 2,4,6) is the only two-dimensional crystallographic point group admitting non-trivial projective representations. Moreover, H^2(D_n,U(1)) = ℤ_2 for even n, so there is only a single type of non-trivial D_n projective representation, corresponding to the single non-trivial d=1 SPT phase on the C_nv axis. This discussion shows that a two-dimensional G-symmetric system can be viewed as the surface of a non-trivial G_3d cSPT phase if and only if the two-dimensional system contains some spins transforming as non-trivial projective representations under site symmetry. Assuming that symmetric surfaces of the relevant cSPT phases cannot be trivially gapped, our LSM constraint follows.

Moreover, a d=2 G-symmetric system in which no spins transform projectively under their site symmetry can be viewed as a surface of a trivial d=3 SPT phase. We therefore expect there is no obstruction to entering a symmetry-preserving, gapped and short-range entangled phase. This means that such a phase should occur for some choice of parameters in a Hamiltonian governing the d=2 system.

We now give an alternative argument for our LSM constraint, working in d=2 and using dimensional reduction. We note that, while Ref.
song17topological introduced dimensional reduction to classify pgSPT phases in systems with only integer spins, dimensional reduction can be carried out for any pgSPT state, whether or not some spins transform projectively. It is enough to consider a system with D_n point group symmetry (n = 2,4,6). We suppose that there is a spin at the center of D_n symmetry transforming as a non-trivial projective representation of D_n. We will assume that a symmetry-preserving, gapped, short-range entangled state is possible, and obtain a contradiction. The only known possibilities for the desired short-range entangled state are (1) an E_8 state or (2) a D_n SPT state. We are not aware of rigorous arguments showing these are indeed the only possible states, but we will assume this to be the case. The E_8 state is easily excluded: it has chiral edge modes and is thus incompatible with D_n symmetry. Therefore, we consider a D_n SPT state. We can apply dimensional reduction as in Appendix <ref> to reduce the ground state to a D_n-symmetric zero-dimensional region containing the center of D_n symmetry. Because spins away from the origin come in symmetry-related pairs, whose projective classes combine trivially, this entire zero-dimensional region must transform as a non-trivial projective representation of D_n. Therefore if its ground state is symmetric, it is degenerate, which contradicts our assumption of an SPT state, and we conclude a D_n SPT state is impossible under these circumstances.

More carefully, we can apply the same argument in a finite but large system with periodic boundary conditions, and then take the thermodynamic limit. In this situation, there will generally be a finite number of centers of D_n symmetry, separated from one another by lengths on the order of the system size. At least some of these centers have spins transforming projectively under D_n. If it happens that the total many-body wave function transforms projectively under D_n, then there is a degenerate ground state even for finite size. We assume instead that the many-body wave function transforms linearly, so that the finite-size ground state can be unique. Assuming a D_n SPT state and applying dimensional reduction, the system reduces to a few well-separated projective spins lying at the symmetry centers, which are embedded within a trivial gapped medium. This medium mediates exponentially decaying interactions among the projective spins. This splits their degeneracy, but the splitting is exponentially small in the system size and vanishes in the thermodynamic limit, where the ground state becomes degenerate. This establishes our LSM constraint.

Finally, we remark that our LSM constraint winds up only involving point group symmetry in an essential way; the full wallpaper group symmetry does not play an important role. This is the case even though the bulk-boundary correspondence arguments leading to the constraint do include wallpaper group symmetry. We can explain this by noting that the bulk cSPT phases involved in obtaining the LSM constraint can be understood as C_nv pgSPT phases.

§ CAN ALL CRYSTALLINE SPT PHASES BE BUILT FROM LOWER-DIMENSIONAL STATES?

In this section, we argue that if a certain reasonable but unproven assumption holds, then all cSPT phases can be built from lower-dimensional invertible topological states. We would like to be able to apply the dimensional reduction procedure of Ref.
song17topological, reviewed in Appendix <ref>, in the presence of space group symmetry. We will see that a naïve application of this procedure fails, but it can be fixed if we add an extra step, which requires making a certain assumption.

We begin with a cSPT ground state |ψ⟩ protected by space group symmetry. To keep the discussion simple, we assume that only space group symmetry is present. The system may be either bosonic or fermionic. By definition, there is a finite-depth quantum circuit Ũ^loc such that Ũ^loc | ψ⟩ = | T ⟩, where | T ⟩ is a trivial product state (or atomic insulator, in a fermionic system). In general, Ũ^loc does not respect symmetry. To proceed, we find the largest possible spatial region so that no two points in the region are related by symmetry (r in Fig. <ref>), and then copy this region throughout space using the symmetry, to obtain a region R. An example is shown in Fig. <ref> for the wallpaper group p2mm. We denote by w the characteristic distance between connected components of R, as illustrated in Fig. <ref>.

Next, we follow Ref. song17topological to find a new finite-depth circuit U^loc that locally trivializes the system in region R and respects symmetry (see Appendix <ref> and Ref. song17topological for more details). U^loc is constructed by first cutting the circuit Ũ^loc to obtain a new circuit supported on a region containing one of the components of R, and then copying the resulting circuit throughout space using the symmetry. As discussed in Ref. song17topological for the simple example of mirror reflection symmetry, this procedure requires that w ≫ ξ, where ξ is some characteristic correlation length of the state | ψ⟩. For point group symmetry, the region R can be chosen so that w is as large as desired. However, in the present case, we have w < a, where a is the lattice constant, and typically a < ξ. Therefore we cannot follow Ref.
song17topological to construct a quantum circuit with the desired properties.

To circumvent this problem, we modify the original state |ψ⟩. First, we add a fine mesh of trivial degrees of freedom. The mesh can be as fine as desired, and we need the mesh spacing to be much smaller than the lattice constant. For the purposes of classifying phases, this step is certainly legitimate. Second, we change parameters of the Hamiltonian, preserving symmetry, to entangle the new degrees of freedom with the original state, obtaining a state | ψ' ⟩. Crucially, we assume that this can be done so that, by choosing a fine enough mesh, we can make the correlation length of | ψ' ⟩ as small as desired, and in particular ξ ≪ a. We believe this assumption is physically reasonable and we expect it to hold, but we do not have an argument that it is true, so it should be viewed as an unproven assumption. We note that if this assumption is not true, it would mean there is some cSPT state with entanglement on the scale of the lattice spacing that cannot be removed, which seems unnatural.

With the correlation length of | ψ' ⟩ as small as desired, there is no longer an obstruction to constructing the finite-depth circuit U^loc. We have U^loc | ψ' ⟩ = |T ⟩_R ⊗ | ψ”⟩_R̅, where |T ⟩_R is a trivial product state on region R, and | ψ”⟩_R̅ is some state on the complement R̅. This latter region can be viewed as a network of lower-dimensional systems with effective internal symmetry, and we expect that cSPT phases reduced to R̅ as above can be constructed and classified by putting down (and perhaps gluing together) lower-dimensional invertible topological phases on various subregions of R̅.

§ DISCUSSION

In this paper, we considered bosonic crystalline SPT (cSPT) phases protected by space group or point group symmetry, and classified a subset of such phases built from lower-dimensional SPT blocks. Our classification matches that of Thorngren and Else, obtained by very different methods, for wallpaper groups in d=2 and space groups in d=3. This allows us to clarify the physical properties of the states classified by Thorngren and Else, and, combined with a general argument based on a reasonable but unproven assumption, is evidence that all SPT phases protected by crystalline symmetry can be built from lower-dimensional blocks of invertible topological states. Moreover, for the states we classified, there are no new SPT invariants beyond point group SPT (pgSPT) invariants, in the sense that the classifications can be decomposed into pgSPT and weak pgSPT invariants. Finally, we obtained a Lieb-Schultz-Mattis (LSM) type constraint for d=2 spin systems that only involves crystalline symmetry, as opposed to the interplay between internal and crystalline symmetries. We conclude with a discussion of some possible extensions of the results presented here, and remarks on the connection between our results and the approach to LSM constraints in Ref.
po17lattice.

For simplicity, we focused in this paper on phases where the building blocks are lower-dimensional SPT states. This ignores d=3 bosonic cSPT phases that can be built from E_8 states.<cit.> We announce some preliminary results on the classification of these states that will be presented in a separate paper. Let G be a d=3 point group or space group. If G has only orientation-preserving symmetries, there are no E_8-based states. This is consistent with the conjecture of Thorngren and Else that their classification is complete for such G.<cit.> If G has any orientation-reversing symmetries, then there are non-trivial E_8-based states, which add a single ℤ_2 factor to the classification of cSPT phases. We conjecture that complete classifications for d=3 bosonic cSPT phases are obtained by combining these results with the classifications obtained in Ref. thorngren16gauging and here.

The approach developed here can be extended to treat SPT phases in d dimensions with both internal and crystalline symmetries. The key modification is that now blocks of dimension d are needed, which have no effective internal symmetry coming from the crystalline symmetry, but do have true internal symmetry and can thus host d-dimensional internal-symmetry SPT states. These blocks then need to be glued together consistent with the crystal symmetry. Another modification is that the structure of adjoining and grouping/splitting/sliding operations should become richer, because internal-symmetry SPT states of various dimensionalities can exist away from high symmetry subspaces. We conjecture that a generalization of our approach along these lines can produce complete classifications of SPT phases with both internal and crystal symmetries. It will also be interesting to extend our approach to fermionic SPT phases in future work, both with and without internal symmetry.

As noted above, our block equivalence operations are closely related to the lattice homotopy operations introduced in Ref. po17lattice in connection with LSM constraints. We now describe the precise relationship and comment on some possible implications. Ref. po17lattice considered d-dimensional bosonic systems on a lattice Λ with symmetry G = G_s × G_i, where G_s is a space group and G_i an internal symmetry. To each lattice site in Λ is associated an element of H^2(G_i,U(1)), which characterizes the G_i representation of the degrees of freedom at that site. Spins are assumed to transform linearly under G_s, and, moreover, if g_i ∈ G_i and g_s ∈ G_s, the action of g_i and g_s commutes on spins. Lattice homotopy operations were introduced, where lattice sites can be slid, grouped and split, and where grouping and splitting respects the H^2(G_i,U(1)) group operation. These operations define equivalence classes of lattices [ Λ ]. Ref. po17lattice conjectured that an LSM type constraint holds whenever [ Λ ] is non-trivial, i.e. whenever the lattice cannot be deformed to the trivial lattice. They established this conjecture in a wide range of cases using arguments based on flux insertion.

The lattice homotopy operations of Ref.
po17lattice are a special case of block-equivalence operations. We consider d+1-dimensional SPT states, also with symmetry G = G_s × G_i, where now G_s is a d+1-dimensional space group that is preserved at a d-dimensional surface. [Strictly speaking, we should include in G_s translations along the surface normal, which are broken by the surface, but these operations play no role in our discussion.] We restrict to SPT states built by placing one-dimensional G_i-symmetric SPT phases on axes normal to the surface. These SPT states are labeled by elements of H^2(G_i,U(1)), and their block equivalence operations are precisely the lattice homotopy operations of Ref. po17lattice. Indeed, the surface terminations of these one-dimensional SPT states are precisely projective representations labeled by the same element of H^2(G_i,U(1)), so these operations are really physically identical.

These observations allow us to rephrase the conjecture of Ref. po17lattice in terms of a bulk-boundary correspondence, in the spirit of Ref. cheng16translational and our results of Sec. <ref>. We see that [ Λ ] is non-trivial precisely when the corresponding block-equivalence class of d+1-dimensional SPT block states is non-trivial. Then the conjecture of Ref. po17lattice becomes the statement that a non-trivial block-equivalence class implies the corresponding SPT phase is non-trivial, and that symmetry-preserving surfaces of this SPT phase are not trivially gappable. It should be emphasized that this statement is also a conjecture that needs to be shown. The first part of the statement, that a non-trivial block-equivalence class implies a non-trivial SPT phase, can likely be shown in particular cases, and perhaps in general, by decomposing the block-equivalence classification into invariants associated with point groups, along the lines of Appendix <ref>. Such invariants, including internal symmetry, can be obtained via the dimensional reduction approach of Ref. song17topological. It may be possible to establish the second part of the statement, that symmetry-preserving surfaces are non-trivial, by generalizing and perhaps combining the flux-insertion arguments of Ref. po17lattice and the dimensional reduction argument of Sec. <ref>.

The above discussion leads immediately to a host of new conjectured LSM constraints. An axis a penetrating into the SPT bulk has effective internal symmetry G_a ⊂ G_s, and, taking advantage of this, we can place one-dimensional SPT phases classified by H^2(G_a × G_i,U(1)) on the axis. This allows for corresponding spin systems where lattice symmetries act projectively, and/or where some internal symmetry operations do not commute with site symmetries. We are naturally led to the conjecture that an LSM constraint holds for this spin system if the corresponding block equivalence class is non-trivial. Section <ref> establishes this conjecture in the special case of two-dimensional spin systems with no internal symmetry.

M.H. would like to thank Sid Parameswaran and Michael Zaletel for useful discussions. We are grateful to Dominic Else for useful correspondence, and especially grateful to Liang Fu for collaboration on related prior work. S.-J.H., Y.-P.H. and M.H. were supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences (BES) under Award number DE-SC0014415. H.S. acknowledges financial support from the Spanish MINECO grants FIS2012-33152 and FIS2015-67411, and the CAM research consortium QUITEMAD+, Grant No. S2013/ICE-2801.
§ DIMENSIONAL REDUCTION APPROACH TO POINT GROUP SPT CLASSIFICATION

Here, we review and illustrate the dimensional reduction approach to pgSPT classification given in Ref. song17topological. We focus on the illustrative examples of C_2 symmetry in d=2, and C_i (inversion) symmetry in d=3, which we treat simultaneously. These examples allow us to highlight the key points and illuminate a more general statement about dimensional reduction.

Figure <ref> shows two-dimensional space (for the C_2 example), or a cross section through the origin in three-dimensional space (for C_i). In the left panel of the figure, space is divided into three regions r_0, r_1 and r'_1. The latter two regions are semi-infinite and are images of one another under the C_2 or C_i symmetry. The region r_0 is a strip (in two dimensions) or a slab (in three dimensions) that is invariant under the symmetry and contains the origin. The thickness w of r_0 should be taken much larger than any correlation length ξ, but still finite when taking the thermodynamic limit.

If |ψ⟩ is a pgSPT ground state under the appropriate symmetry, the arguments of Ref. song17topological show that the ground state is adiabatically connected (preserving symmetry) to a state of the form |T⟩_r_1 ⊗ |ψ⟩_r_0 ⊗ |T⟩_r'_1, where |T⟩_r_1 and |T⟩_r'_1 are trivial product states related to one another by symmetry, and |ψ⟩_r_0 is a possibly non-trivial state defined in r_0 that is invariant under the symmetry. Ref. song17topological describes how to construct a finite-depth, symmetry-preserving quantum circuit achieving this dimensional reduction, that is

U^loc |ψ⟩ = |T⟩_r_1 ⊗ |ψ⟩_r_0 ⊗ |T⟩_r'_1.

The finite-depth circuit U^loc is constructed starting from the non-symmetry-preserving circuit Ũ^loc that trivializes the state |ψ⟩, and which must exist by the assumption that we have an SPT phase. That is,

Ũ^loc |ψ⟩ = |T⟩,

where |T⟩ is a trivial product state. To construct U^loc from Ũ^loc, we first cut Ũ^loc to obtain a new circuit U^loc_r_1 with support in a region containing r_1, and extending slightly into r_0. Then we conjugate U^loc_r_1 by the symmetry operation to obtain a similar circuit in region r'_1, U^loc_r'_1. We have

U^loc = U^loc_r_1 U^loc_r'_1,

and the action of U^loc on |ψ⟩ is as given in Eq. (<ref>). For a more detailed discussion, the reader should consult Ref. song17topological.
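As a short consistency check (our own sketch, not spelled out in Ref. song17topological; here S denotes the unitary implementing the C_2 or C_i operation), one can verify that the circuit constructed this way is symmetry-preserving:

```latex
% By construction, conjugating the circuit supported near r_1 by the
% symmetry gives the circuit supported near r'_1, and vice versa:
%   S U^{loc}_{r_1} S^{-1} = U^{loc}_{r'_1},
%   S U^{loc}_{r'_1} S^{-1} = U^{loc}_{r_1}.
\begin{align}
  S \, U^{\mathrm{loc}} \, S^{-1}
    &= \big( S U^{\mathrm{loc}}_{r_1} S^{-1} \big)
       \big( S U^{\mathrm{loc}}_{r'_1} S^{-1} \big)
     = U^{\mathrm{loc}}_{r'_1} \, U^{\mathrm{loc}}_{r_1} \nonumber \\
    &= U^{\mathrm{loc}}_{r_1} \, U^{\mathrm{loc}}_{r'_1}
     = U^{\mathrm{loc}} ,
\end{align}
% where the last line uses the fact that the two factors have
% (essentially) disjoint support, and therefore commute.
```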
So far, we have reduced a d-dimensional pgSPT state to some state in d-1 dimensions. However, we have not yet finished with dimensional reduction; we would like to reduce the state to a space where the symmetry acts only as an internal symmetry. In both our examples, this means reducing down to a zero-dimensional region centered at the origin.

Let us first consider C_2 symmetry in two dimensions. Proceeding as before, we divide the strip r_0 into three regions – two semi-infinite strips r_2 and r'_2 that are related by C_2 rotation, and a region r'_0 centered on the origin, as shown in the right panel of Fig. <ref>. Focusing on r_2, we have an effectively one-dimensional topological state with no symmetry. In a bosonic system, such a state is trivial, so we can trivialize r_2 away from the origin by acting with a one-dimensional quantum circuit, and we can trivialize r'_2 at the same time by copying this circuit using the C_2 symmetry.

To describe the resulting state in language that generalizes to arbitrary point groups, we recall that the subset S ⊂ ℝ^d was defined to be the set of all points in space fixed by at least one non-trivial point group operation g ∈ G. In the present case, S is just a single point at the origin. Then we define S_t to be a thickened version of S that remains invariant under symmetry. In the present example, we can take S_t = r'_0. Finally, let S̄_t be the complement of S_t in ℝ^d. The second step of the dimensional reduction procedure then shows

|ψ⟩ → |T⟩_S̄_t ⊗ |ψ⟩_S_t,

where the arrow denotes adiabatic continuity, |T⟩_S̄_t is a trivial product state on S̄_t, and |ψ⟩_S_t is a state on S_t that may be non-trivial.

In our two-dimensional example, arriving at Eq. (<ref>) did not require any assumptions beyond |ψ⟩ being a pgSPT state. The situation is different in d=3, where we do have to make an additional assumption, which amounts to excluding certain pgSPT phases from consideration. In three dimensions, the slab r_0 is an effectively two-dimensional system with C_2 rotation symmetry. If we zoom in and look at a piece of r_0 away from the origin, we have a two-dimensional system with no symmetry at all. Unlike in the previous example, such a system can be in an E_8 state, which is robust in the absence of symmetry. Indeed, the E_8 state is compatible with C_2 rotation symmetry, and the whole slab r_0 can be in an E_8 state. If this happens, the second dimensional reduction step, where we attempt to reduce r_0 down to a lower-dimensional region, fails, because a two-dimensional quantum circuit cannot trivialize the E_8 state.

In this paper, we are primarily interested in crystalline SPT phases built from lower-dimensional SPT building blocks. We encountered an obstruction to continuing the dimensional reduction in a pgSPT state built from an E_8 state, which is not an SPT state, so we should exclude it from consideration in keeping with our focus. Therefore we assume that E_8 states do not appear at any stage of the dimensional reduction procedure. Because E_8 states (and multiple copies thereof) are believed to be the only bosonic invertible topological phases that are not SPT phases, this amounts to considering only those pgSPT phases built from lower-dimensional SPT blocks, as desired. It is straightforward to extend our analysis to include pgSPT phases built from E_8 states, and indeed this was done for mirror reflection and C_2v in Ref. song17topological, but for other point groups we leave consideration of such states for future work.

Once we assume that an E_8 state does not appear, we can continue the dimensional reduction in our d=3 example to obtain a state of the form Eq. (<ref>). (We actually need two more steps, first to reduce r_0 to a quasi-one-dimensional strip, then to a zero-dimensional region centered on the origin.) In general, with the present assumptions, any pgSPT state can be reduced to a state of the form Eq. (<ref>).
A state of the form Eq. (<ref>) can be understood in terms of SPT blocks with effective internal symmetry. To see this, we work in d=3, and assume for concreteness that S has some points whose neighborhood in S (the intersection of a ball containing the point with S) is two-dimensional. Such a two-dimensional portion of S is a mirror plane and, zooming in on some two-dimensional portion of S, we have an effectively d=2 system with effective internal ℤ₂ symmetry. This system can either be in a non-trivial Ising SPT phase, or it can be trivial. (It cannot be an E_8 state, by the assumption we made above.) If some of the planes in S host non-trivial states, we can construct a reference state with G symmetry and the same pattern of Ising SPT states on mirror planes, and then make a bilayer of this state with the original ground state. This makes all the planes in S trivial. Next, we can find one-dimensional portions of S, consisting of points whose neighborhood is one-dimensional, or that lie at the intersection of two or more planes. These one-dimensional portions of S can be in one-dimensional SPT states. Proceeding along these lines, we see that states of the form Eq. (<ref>) can be understood in terms of lower-dimensional SPT blocks.

§ THE FIRST COHOMOLOGY GROUP H^1(G,U(1))

Here, we define the first cohomology group H^1(G,U(1)), which is used throughout the paper in the description of block-dimension zero states. This is standard material; we provide it here in the interest of making our paper more accessible and self-contained.

Let G be a group, and let ω : G → U(1) be a one-dimensional representation of G. This means that ω(g_1) ω(g_2) = ω(g_1 g_2). As a set, H^1(G,U(1)) is the set of one-dimensional representations of G. We give this set an Abelian group structure via the tensor product operation; that is, if ω_1 and ω_2 are one-dimensional representations, their product ω_1 ω_2 is defined by

(ω_1 ω_2)(g) = ω_1(g) ω_2(g).

We make two notational comments. First, because we only use the first cohomology group with U(1) coefficients in this paper, we sometimes omit the coefficient group and write H^1(G) ≡ H^1(G,U(1)). Second, in this appendix, we use multiplicative notation to define H^1(G,U(1)), but we use additive notation for cohomology groups in the rest of the paper.
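To make the group structure concrete, here is a minimal, self-contained illustration (our own, not part of the paper) for G = ℤ₂ × ℤ₂, whose four one-dimensional representations close under the tensor product, so that H^1(ℤ₂ × ℤ₂, U(1)) = ℤ₂ × ℤ₂:

```python
# Minimal illustration of H^1(G, U(1)) as the group of one-dimensional
# representations under the tensor product, for G = Z_2 x Z_2.
# Group elements are encoded as pairs (a, b) with a, b in {0, 1}.
from itertools import product

elements = list(product([0, 1], repeat=2))  # the four elements of Z_2 x Z_2

def character(s, t):
    """One-dimensional rep labeled by (s, t): (a, b) -> (-1)^(s*a + t*b)."""
    return {g: (-1) ** (s * g[0] + t * g[1]) for g in elements}

reps = {(s, t): character(s, t) for s, t in product([0, 1], repeat=2)}

def mult(g1, g2):
    """Group law of Z_2 x Z_2: componentwise addition mod 2."""
    return ((g1[0] + g2[0]) % 2, (g1[1] + g2[1]) % 2)

# Each labeled map really is a representation: w(g1) w(g2) = w(g1 g2).
for w in reps.values():
    assert all(w[g1] * w[g2] == w[mult(g1, g2)]
               for g1 in elements for g2 in elements)

# The tensor product (w1 w2)(g) = w1(g) w2(g) closes on this set, and the
# labels add mod 2, exhibiting the group H^1 = Z_2 x Z_2.
for (s1, t1), (s2, t2) in product(reps, repeat=2):
    prod_rep = {g: reps[(s1, t1)][g] * reps[(s2, t2)][g] for g in elements}
    assert prod_rep == reps[((s1 + s2) % 2, (t1 + t2) % 2)]

print("H^1(Z_2 x Z_2, U(1)): 4 characters, closed under tensor product")
```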
§ DETAILS OF BLOCK DIMENSION ZERO STATES

Block dimension zero states are introduced in Sec. <ref>. Here, we consider some technical details of such states. First, we show a statement made in Sec. <ref>, that knowing the charges q_p completely determines λ(g,p) up to some gauge-like freedom; this is why it is enough to specify q_p in the data characterizing a state. Second, if |Ψ⟩ is a block dimension zero state invariant under a point group G, we describe how to compute U_g |Ψ⟩ for g ∈ G. The latter result is used to work out the splitting operations described in Appendix <ref>.

To show λ(g,p) is determined by the charges q_p, we introduce in Fig. <ref> a graphical representation of the relation

λ(g_1 g_2, p) = λ(g_1, g_2 p) λ(g_2, p).

This graphical representation allows us to think about Λ as the vertices of a directed multi-graph, where each directed edge joining p to gp is labeled by the group element g. First, suppose we know λ(g,p) for all p ∈ Λ but only for g ∈ G_p; that is, we specify q_p. Aided by the graphical representation, we can build up the rest of λ(g,p). The graph associated with Λ will in general have some number of disconnected components, because not all points are related by symmetry. For each component, we choose a connected subgraph that is a tree, and for each edge in the tree we set the corresponding λ(g,p) = 1. We can then use Eq. (<ref>) to uniquely determine all the other λ(g,p)'s, corresponding to the edges we left out.

Next, suppose we have a function λ(g,p) satisfying Eq. (<ref>). Again we choose the same tree structure, and we observe that making the change of basis

|ψ_p⟩ → α(p) |ψ_p⟩

induces the transformation

λ(g,p) → α(gp) λ(g,p) α^-1(p).

It is clear that we can make such a transformation to set λ(g,p) = 1 on the edges of the tree. Once in this "gauge," the other values of λ(g,p) with gp ≠ p are then determined by the D_q_p's using Eq. (<ref>). We have thus shown that λ(g,p) is completely determined by the charges q_p, up to a gauge-like freedom that physically corresponds simply to a site-dependent change of basis.

Now we consider a different question. Suppose that Ψ ∈ B_0 is invariant under the point group G. We would like to compute U_g |Ψ⟩ for some g ∈ G. Clearly U_g |Ψ⟩ = λ_Ψ |Ψ⟩, and our task is to determine the phase factor λ_Ψ. To do this, we divide Λ into its orbits O_1, …, O_k under the action of g. For each orbit we define

|Ψ_O_i⟩ ≡ ⊗_p ∈ O_i |ψ_p⟩,

so that |Ψ⟩ = ⊗_i=1^k |Ψ_O_i⟩. Clearly U_g |Ψ_O_i⟩ = λ_O_i |Ψ_O_i⟩. Therefore,

λ_Ψ = ∏_i=1^k λ_O_i,

and we need to determine the λ_O_i phase factors. If O_i consists of a single point p, then λ_O_i = λ(g,p). Now suppose O_i contains n > 1 points. Using the definition U_g |ψ_p⟩ = λ(g,p) |ψ_gp⟩, and using Eq. (<ref>) repeatedly, we obtain

U_g |Ψ_O_i⟩ = λ(g^n, p_1) |Ψ_O_i⟩,

so λ_O_i = λ(g^n, p_1). If g^n = 1, then λ_O_i = 1. This is always the case if g is a rotation, mirror reflection, or inversion operation, so that for these symmetries only points fixed by g contribute to the total g charge of |Ψ⟩. For rotation-reflections g = S_3, S_4 or S_6, points on the axis form orbits of size two. In these three cases, since g^2 is C_3^2, C_2 and C_3, respectively, pairs of points on the axis give a contribution determined by the rotation charge of one point in the pair. For points off the axis and away from the origin, orbits of rotation-reflections still satisfy g^n = 1.
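The orbit decomposition above is straightforward to evaluate in examples. The following sketch (our own illustration; the charge value q0 is an arbitrary choice) computes λ_Ψ for a C_4 rotation acting on one fixed point plus one four-point orbit, in the tree gauge where λ(g,p) = 1 on tree edges:

```python
# Sketch: total g-charge of a block dimension zero state from the orbit
# formula lambda_Psi = prod_i lambda(g^{n_i}, p_i), for a C_4 rotation g
# acting on one fixed point (charge q0 in Z_4) plus one 4-point orbit.
import cmath

n = 4    # order of the rotation g
q0 = 3   # Z_4 charge at the fixed point (arbitrary illustrative choice)

def fixed_point_factor(q, order):
    """lambda(g, p) = exp(2*pi*i*q/order) for a point p fixed by g."""
    return cmath.exp(2j * cmath.pi * q / order)

# The 4-point orbit: in the tree gauge, it contributes
# lambda(g^4, p_1) = 1, because g^4 is the identity.
orbit_factor = 1.0

lambda_Psi = fixed_point_factor(q0, n) * orbit_factor
print(lambda_Psi)  # exp(2*pi*i*3/4) = -i: only the fixed point contributes
```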
§ BLOCK DIMENSION FACTORIZATION

Here, we describe the general structure of how 𝒞(G) decomposes into states of fixed block dimension, and give arguments that

𝒞(G) = 𝒞_0(G) × 𝒞_1(G) × 𝒞_2(G)

for d=3 bosonic cSPT phases built from lower-dimensional SPT blocks. We also discuss an example where the factorization does not hold, which illustrates the general structure.

The general structure is as follows. We let D_d_b(G) be the classification of cSPT phases with block dimension less than or equal to d_b. These phases clearly form a group under the usual stacking operation, because adding two states in D_d_b(G) cannot produce a state with higher block dimension. Moreover, we have a sequence of subgroups D_d_b-1(G) ⊂ D_d_b(G). We also have D_0(G) = 𝒞_0(G) and D_2(G) = 𝒞(G). States with fixed block dimension d_b > 0 need not form a group, but they do form a group up to stacking with lower-dimensional block states. That is, we can define

𝒞_d_b(G) = D_d_b(G) / D_d_b-1(G).

We would like to show that D_d_b(G) ≃ 𝒞_d_b(G) × D_d_b-1(G), which is the desired factorization. We will consider stacking of d_b = 1 and d_b = 2 blocks, and show these states form a group under the stacking operation. It is enough to consider a single d_b = 1 or d_b = 2 state, and show that a trivial state results when it is stacked with itself.

We start with d_b = 1. It is sufficient to focus on a single block b, which is a C_nv axis with n = 2,4,6. b is invariant under a symmetry group G_1d containing the effective internal symmetry G_b ≃ C_nv as a subgroup. We consider a specific model of the non-trivial d=1 SPT state: we label lattice sites along the d=1 axis by i, and at each site we place a tensor product of two S = 1/2 spins, with spin operators S⃗_Li and S⃗_Ri. G_1d may contain d=1 inversion symmetry; in that case, we choose all sites i to lie away from inversion centers. The Hamiltonian is

H_1d = ∑_i S⃗_Ri · S⃗_L,i+1.

If we project onto the S = 1 subspace at each site, the ground state becomes the Affleck-Kennedy-Lieb-Tasaki (AKLT) state.<cit.> Even before projection the ground state is in the Haldane phase, i.e. if we consider full SO(3) spin symmetry, the ground state is in the non-trivial SPT phase. Recalling that C_nv ≃ ℤ_n ⋊ ℤ₂, we associate the ℤ_n factor with 2π/n rotations about some axis in spin space, and the ℤ₂ factor with π rotations about a perpendicular axis. The ground state is also in the single non-trivial SPT phase under this lower symmetry.

Now we stack two such spin chains on b; the resulting state |ψ_stack⟩ is represented in Fig. <ref>a. We act on |ψ_stack⟩ with a product of unitaries, where each unitary acts on the tensor product space of the four S = 1/2 spins participating in the bond joining i to i+1 (i.e., the S⃗_Ri and S⃗_L,i+1 spins in each chain). It is clear that these four spins can be transformed into the singlet state shown in Fig. <ref>b by a symmetry-preserving unitary, taking into account that the bond can be a center of inversion in G_1d.
The resulting state is a trivial block-dimension zero state; each site is fixed only by the C_nv subgroup of G_1d, and carries trivial C_nv charge.

To conclude this discussion, we consider stacking two identical d_b = 2 blocks on a mirror plane, each hosting an Ising SPT state. We do not specify the Hamiltonian for these blocks, but focus on the ground state wave functions. For layer i (i = 1,2), we consider the wave function<cit.>

|ψ_i⟩ = C ∑_D_i (-1)^N(D_i) |D_i⟩,

where the sum is over all Ising domain wall configurations, N(D_i) is the number of closed domain wall loops in D_i, and C is a normalization constant. Such a wave function can be implemented at the lattice scale consistent with any spatial symmetries of the d_b = 2 plane. Stacking the two blocks together results in the wave function

|ψ_stack⟩ = C^2 ∑_D_1, D_2 (-1)^N(D_1) + N(D_2) |D_1⟩ ⊗ |D_2⟩.

We now add a ferromagnetic Ising exchange coupling the two layers. This interaction has the effect of "lining up" the domain walls, and as the strength of the interaction is increased, configurations with D_1 = D_2 will dominate the wave function. We expect that the coupling can be made strong without passing through a phase transition, and in the limit of strong coupling the wave function becomes

|ψ_stack⟩ = C' ∑_D |D⟩ ⊗ |D⟩,

where C' is a normalization constant. This wave function is a trivial product state, with sites carrying trivial site symmetry charge.

While Eq. (<ref>) holds for the bosonic cSPT phases studied in this paper, it does not hold in general. To illustrate this, we briefly discuss an example<cit.> of fermionic SPT phases where the factorization does not hold. We consider electron systems in d=3 with [U(1) ⋊ ℤ₂^T] × ℤ₂^P symmetry, where ℤ₂^P is mirror reflection, and ℤ₂^T is time reversal, which squares to fermion parity. We consider an SPT state whose symmetry-preserving surface has a single massless Dirac fermion. This state can of course be viewed as the familiar topological band insulator if we ignore the ℤ₂^P symmetry. Similarly, if we ignore ℤ₂^T, it is a non-trivial topological crystalline insulator. Because this state is non-trivial even ignoring the spatial symmetry, it should be viewed as a d_b = 3 state. Now, stacking two of these states together produces a state whose surface is two massless Dirac fermions. This state is trivial if we ignore ℤ₂^P, but it is a non-trivial topological crystalline insulator that can be dimensionally reduced to the mirror plane.<cit.> Therefore, in this example, we stacked two d_b = 3 states to obtain a non-trivial d_b = 2 state. This implies the classification does not factorize over block dimensions.

§ SPLITTING OPERATIONS AND POINT GROUP SPT CLASSIFICATION FOR BLOCK DIMENSION ZERO STATES

This appendix pertains to the classification of block-dimension zero cSPT phases in two and three dimensions, both for point group symmetry and space group symmetry. In particular, we consider splitting operations for point groups as discussed in Sec. <ref>. We develop a formalism to describe splitting operations, and use this to explain how splitting operations are related to the adjoining operation in the classification of pgSPT phases.
We show that 𝒞_0(G) = H^1(G)/Adj(G), where G is a point group with zero-dimensional fixed space, and Adj(G) is a subgroup of H^1(G) that we define. Then, we enumerate those crystallographic point groups with non-trivial splitting operations, give the charge configurations generated by splitting, and determine Adj(G).

Let G be a crystallographic point group, and let w_0 be a Wyckoff class containing a single center p_0 of G symmetry, so that G_w_0 = G. Moreover, let w be a Wyckoff class containing a collection of symmetry-equivalent points that can be slid arbitrarily close to p_0. Each point in w has site symmetry G_w, and taken together, the points in w form a pattern with G symmetry. For each G, the distinct possible classes w can be found by consulting the International Tables for Crystallography. We place zero-dimensional blocks at p_0 and at the points of w, so that a block state is specified by the charge configuration (q_w_0, q_w) ∈ H^1(G_w_0) × H^1(G_w). Here, q_w is the charge of some arbitrarily chosen representative point in w.

The block state with charge configuration (0, q_w) transforms as a one-dimensional representation of G, with G-charge given by g_w(q_w). That is, applying the grouping block-equivalence operation to this state, we get a new state labeled by (g_w(q_w), 0). Formally, there is a group homomorphism

g_w : H^1(G_w) → H^1(G_w_0).

More generally, the total G charge of a block state labeled by (q_w_0, q_w) is given by q_w_0 + g_w(q_w). The homomorphism g_w can be computed by following the discussion in the latter part of Appendix <ref>.

We are interested in knowing which block states can be obtained from the trivial state labeled by (0,0) via splitting operations. The formalism developed above gives a simple answer to this question: the most general charge configuration that can be obtained via splitting from the trivial state is (-g_w(q_w), q_w), where the negative sign in the first entry denotes the inverse operation, and q_w runs over all possible values in H^1(G_w). Recall that in Sec. <ref>, the splitting operation was defined to be trivial if it can generate all possible values of q_w while always leaving q_w_0 unchanged. We see that this is the same as the statement that the homomorphism g_w = 0, i.e. it is the trivial homomorphism. Charge configurations of the form (-g_w(q_w), q_w) are referred to as splitting configurations. The splitting configurations form a group isomorphic to H^1(G_w), and can be conveniently specified in terms of generators. This information is presented below for d=2 and d=3 point groups.

The adjoining operation that appears in pgSPT classification can be described simply in this formalism, and we use this to obtain a simple result for the classification of block-dimension zero pgSPT phases. If we start with the state labeled by (q_w_0, 0) as a pgSPT state, we can adjoin zero-dimensional blocks at the points of w. That is, adjoining transforms the state by (q_w_0, 0) → (q_w_0, q_w), for any q_w. We can then group the w points together with the center of symmetry at p_0. The net result is that we transform the original state by

(q_w_0, 0) → (q_w_0 + g_w(q_w), 0).
More generally, we need to consider adjoining zero-dimensional blocks in more than one Wyckoff class. Let w_1, …, w_k be Wyckoff classes labeling the distinct possibilities for symmetry-equivalent points near the center of symmetry at p_0. As usual, we ignore the Wyckoff class containing general points with trivial site symmetry. Adopting the short-hand notation H^1_i ≡ H^1(G_w_i), a general block state is labeled by a charge configuration

(q_0, q_1, …, q_k) ∈ H^1_0 × H^1_1 × ⋯ × H^1_k.

Starting with the state (q_0, 0, …, 0), we adjoin arbitrary charges in the nearby blocks to obtain the state (q_0, q_1, …, q_k). Then we group the blocks together at the center of symmetry, resulting in the transformation

(q_0, 0, …, 0) → (q_0 + g_w_1(q_1) + ⋯ + g_w_k(q_k), 0, …, 0).

Therefore we have a map

A : H^1_1 × ⋯ × H^1_k → H^1_0,

given by

A(q_1, …, q_k) = g_w_1(q_1) + ⋯ + g_w_k(q_k).

The image of this map is precisely the set of all one-dimensional representations that can be obtained by the adjoining operation, which was the definition of Adj(G) ⊂ H^1_0 given in Sec. <ref>. Therefore we define

Adj(G) = Im A.

Taking the quotient of H^1(G) by Adj(G) gives precisely the information about a G charge that is stable under adjoining. Therefore, when the fixed space of G is a single point, the classification of block-dimension zero pgSPT phases is

𝒞_0(G) = H^1(G) / Adj(G).

Now we proceed to describe splitting operations and Adj(G) for all d=2 and d=3 crystallographic point groups. We have considered all possible splitting operations, but only describe those that are non-trivial.

We begin in d=2, where D_3 is the only point group with a non-trivial splitting operation. We recall that D_3 is algebraically isomorphic to ℤ_3 ⋊ ℤ₂, and we have H^1(D_3) = ℤ₂. D_3 is generated by three mirror reflections as shown in Fig. <ref>. There is a nontrivial splitting operation where the Wyckoff class w contains three points on reflection axes related by three-fold rotation symmetry. These points have D_1 site symmetry, and H^1(D_1) = ℤ₂. The splitting operation can be described by giving g_w : H^1(D_1) → H^1(D_3) on the single generator of its domain, and we find g_w(1) = 1. This implies that Adj(D_3) = ℤ₂, and 𝒞_0(D_3) is thus trivial.
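As a cross-check (our own worked example, using the orbit formula of Appendix <ref>), the value g_w(1) = 1 can be obtained directly. Let m be one of the three mirror generators of D_3. Of the three points in w, one point p_1 lies on the mirror axis of m and is fixed, while the other two form a single orbit of size two, so the total m-charge of the block state with D_1 charge q_w per point is

```latex
\lambda_{\Psi}(m)
  = \underbrace{\lambda(m, p_1)}_{\text{fixed point}}
    \times
    \underbrace{\lambda(m^2, p_2)}_{\text{size-two orbit}}
  = (-1)^{q_w} \times 1
  = (-1)^{q_w} ,
```

so the grouped state carries mirror charge q_w; that is, g_w(q_w) = q_w, reproducing g_w(1) = 1 and hence Adj(D_3) = ℤ₂.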
In three dimensions, the following 19 point groups have only trivial splitting operations: C_i, C_s, C_2h, C_n (n=2,3,4,6), C_nv (n=2,4,6), D_n (n=2,4,6), D_nh (n=2,4,6), D_3d, O, O_h. For these groups, Adj(G) is trivial, and for those groups with fixed-space dimension zero, 𝒞_0(G) = H^1(G,U(1)). This leaves 12 point groups with non-trivial splitting operations, which we give in Table <ref>. All these point groups have only one non-trivial splitting operation, except C_3h, which has two. We present the splitting operations by specifying the homomorphism g_w : H^1(G_w) → H^1(G) via its action on the generators of G_w. This information is then used to compute Adj(G); the results are given in Table <ref>.

In order to specify g_w, Table <ref> also fixes conventions for writing the elements of H^1(G) and H^1(G_w), which we now explain. In general, it is possible and convenient to specify elements of H^1(G) in terms of symmetry charges of certain subgroups of G. For example, the group C_3h has both a C_3 subgroup (three-fold rotations about the z-axis) and a C_s subgroup (mirror reflection in the xy plane). It can be shown that q ∈ H^1(C_3h) can be written q = (q^C_3, q^m), where q^C_3 ∈ H^1(C_3) = ℤ_3 is the C_3 rotation charge, and q^m ∈ H^1(C_s) = ℤ₂ is the C_s mirror reflection charge. For most point groups in Table <ref>, it is clear which subgroup is being referred to. In some cases there are multiple isomorphic subgroups that are conjugate to one another, and in such cases one of these subgroups can be chosen arbitrarily; for example, the group T has four C_3 subgroups.

For a few point groups, more explanation is needed to clarify the forms of the symmetry charges given in Table <ref>. In the case of D_2d, q^S_4 is the charge of z-axis roto-reflections, which is constrained to take values in ℤ₂ due to the properties of D_2d. There, q^C_2 is the charge of a C_2 rotation perpendicular to the z-axis. There are a few splitting operations where G_w = C_2v, which is generated by two perpendicular mirror planes. A C_2v charge can be specified by giving the mirror reflection charges for these two planes separately, and we label them q^m_1 and q^m_2. The group D_3h is generated by the three vertical mirror planes of C_3v, and a horizontal mirror plane (the xy plane). There, q^m_v refers to the mirror charge of a vertical mirror operation, while q^m_h is the charge of the horizontal mirror reflection. In the non-trivial splitting operation where G_w = C_2v, the C_2v site symmetry of each point in w is generated by the horizontal mirror reflection and one of the vertical mirror reflections, so it is natural to use the same notation to specify the C_2v symmetry charge. Finally, the group T_d contains six mirror operations, where the normals to the mirror planes lie in the ⟨110⟩ directions. The T_d charge q ∈ H^1(T_d) = ℤ₂ can be specified by giving the mirror charge q^m ∈ ℤ₂ for any of these mirror planes.

§ TWISTING OPERATIONS FOR BLOCK DIMENSION ZERO CRYSTALLINE SPT STATES IN THREE DIMENSIONS

Here we give a detailed discussion of twisting operations for block dimension zero cSPT states with space group symmetry in d=3, and enumerate those cases with non-trivial twisting operations.
For certain one-dimensional Wyckoff classes in non-symmorphic space groups, the axis swept out by a Wyckoff point can coincide with a screw axis, or be contained in a glide plane, where the glide direction is along the axis. The screw or glide operation becomes a half translation on the Wyckoff axis, and can act non-trivially on the site symmetry G_w, if G_w has at least one non-trivial automorphism. If, in addition, H^1(G_w) ≡ H^1(G_w,U(1)) has a non-trivial automorphism, the half translations can have non-trivial action on the G_w charge, which results in non-trivial twisting operations.

We find that non-trivial twisting operations arise in two types of situations. (1) A Wyckoff axis with G_w = C_n for n = 3,4,6 is contained in a glide plane, with glide direction along the axis. (2) A Wyckoff axis with G_w = C_2v coincides with a four-fold screw axis. We now describe the action of translations on G_w in these cases, denoting by t_h the half translation arising from the glide or screw operation. For type (1), the half translation acts on the C_n rotation by

t_h C_n t_h^-1 = C_n^-1.

For type (2), we have

t_h σ_1 t_h^-1 = σ_2,
t_h σ_2 t_h^-1 = σ_1,

where σ_1 and σ_2 are the two mirror reflections generating C_2v.

These non-trivial group actions restrict the allowed G_w charges within a unit cell. We let q_z be the G_w charge of a point p_z on the Wyckoff axis, and q_z+1/2 the charge at the point t_h p_z, i.e. the point obtained by acting on the first point with a half translation. We have q_z, q_z+1/2 ∈ H^1(G_w). Applying Eq. (<ref>), these charges are related by

D_q_z+1/2(g) = D_q_z(t_h g t_h^-1),

for all g ∈ G_w, where D_q(g) is the one-dimensional representation of G_w labeled by q ∈ H^1(G_w). The relation Eq. (<ref>) induces an automorphism

𝔱_h : H^1(G_w) → H^1(G_w),

where 𝔱_h(q_z) = q_z+1/2. In case (1), 𝔱_h(q) = -q, i.e. 𝔱_h is the inversion automorphism. This implies that q_z+1/2 = -q_z. In case (2), an element q ∈ H^1(C_2v) = ℤ₂^2 can be written q = (q_1, q_2), for q_1, q_2 ∈ ℤ₂. The automorphism acts by 𝔱_h[(q_1, q_2)] = (q_2, q_1). This implies that if q_z = (q_1^z, q_2^z), then q_z+1/2 = (q_2^z, q_1^z).

We let 𝒬_c be the group of G_w charge configurations. We have 𝒬_c ≃ H^1(G_w), because elements of 𝒬_c are of the form (q_z, q_z+1/2) = (q_z, 𝔱_h(q_z)). We then define 𝒬_t ⊂ 𝒬_c to be the set of charge configurations that can be obtained from the trivial configuration (0,0) ∈ 𝒬_c by applying the block equivalence operations; more specifically, we apply twisting operations. We then obtain H^1_∘(G) = 𝒬_c / 𝒬_t, which is the contribution of the Wyckoff class to the cSPT classification 𝒞_0(G) for the space group G. Moreover, in each case we show H^1_∘(G) is a weak pgSPT invariant. Below, we obtain 𝒬_c, 𝒬_t, and H^1_∘(G_w) for each case where non-trivial twisting operations arise. These results are summarized in Table <ref>.

§.§ G_w = C_3

Here, q_z, q_z+1/2 ∈ H^1(C_3) = ℤ_3. 𝒬_c contains three charge configurations, which are (0,0), (1,2), and (2,1). To obtain 𝒬_t, we describe the effect of twisting operations on the (0,0) configuration.
We note that the (0,0) configuration describes a chain of zero-dimensional blocks lying on the Wyckoff axis. First, we split (0,0) into the product of charge configurations (1,2)_1 × (2,1)_2; this can be thought of as splitting the original chain into two new chains. Next, we slide the charges on the second chain along the Wyckoff axis by half a lattice constant, which transforms the state by

(1,2)_1 × (2,1)_2 → (1,2)_1 × (1,2)_2.

Finally, grouping the chains back together, we obtain the configuration (1,2) ∈ 𝒬_c. If instead we slide the charges of the first chain by half a lattice constant before grouping the chains back together, we obtain (2,1) ∈ 𝒬_c. Therefore, we have shown 𝒬_t = 𝒬_c, and H^1_∘(C_3) is trivial.

§.§ G_w = C_4

Now q_z, q_z+1/2 ∈ H^1(C_4) = ℤ_4, and the charge configurations in 𝒬_c are (0,0), (1,3), (2,2), and (3,1). We split (0,0) to (1,3)_1 × (3,1)_2, and then slide the charges of the first chain by half a lattice constant to obtain (3,1)_1 × (3,1)_2 ≃ (2,2). Other twisting operations either also produce (2,2), or leave the (0,0) state invariant. Therefore 𝒬_t = {(0,0), (2,2)} ≃ ℤ₂, and H^1_∘(C_4) = ℤ_4/ℤ₂ = ℤ₂.

We would like to show that H^1_∘(C_4) = ℤ₂ is a weak pgSPT invariant. We do this by focusing on the symmetry generated by the C_2 subgroup of C_4, and by t_h. We note that C_2 rotations commute with t_h. Considering the non-trivial state with (q_z, q_z+1/2) = (1,3), the C_2 charge configuration is (1,1). On the Wyckoff axis we therefore have a chain of non-trivial C_2 charges, and t_h plays the role of a translation symmetry along the stacking direction. Therefore we can think of this state as a stack of d=2 C_2 pgSPT layers, with a non-trivial ℤ₂ invariant per layer.

§.§ G_w = C_6

Here, q_z, q_z+1/2 ∈ H^1(C_6) = ℤ_6, and the charge configurations in 𝒬_c are (0,0), (1,5), (2,4), (3,3), (4,2) and (5,1). We split (0,0) to (1,5)_1 × (5,1)_2, and slide the charges of the first chain by half a lattice constant to obtain (5,1)_1 × (5,1)_2 ≃ (4,2). If instead we slide the charges of the second chain by half a lattice constant, we get (1,5)_1 × (1,5)_2 ≃ (2,4). Considering other twisting operations does not lead to more states, and we find 𝒬_t = {(0,0), (2,4), (4,2)} ≃ ℤ_3. Taking the quotient 𝒬_c / 𝒬_t, we find H^1_∘(C_6) = ℤ₂. It can be shown that this is a weak pgSPT invariant by focusing on the symmetry generated by the C_2 subgroup of C_6 and t_h, and following the analysis given above for G_w = C_4.

§.§ G_w = C_2v

Here, q_z, q_z+1/2 ∈ H^1(C_2v) = ℤ₂^2, and the charge configurations in 𝒬_c are [(0,0),(0,0)], [(1,0),(0,1)], [(0,1),(1,0)] and [(1,1),(1,1)]. We split [(0,0),(0,0)] to obtain

[(1,0),(0,1)]_1 × [(1,0),(0,1)]_2,

and slide the charges of the first chain by half a lattice constant to obtain

[(0,1),(1,0)]_1 × [(1,0),(0,1)]_2 ≃ [(1,1),(1,1)].

Other twisting operations do not lead to additional states, so we find 𝒬_t ≃ ℤ₂, with

𝒬_t = {[(0,0),(0,0)], [(1,1),(1,1)]}.

Taking the quotient, we have H^1_∘(C_2v) = ℤ₂.

To show that H^1_∘(C_2v) = ℤ₂ is a weak pgSPT invariant, we focus on the symmetry generated by the C_2 rotation subgroup of C_2v, and by t_h. Considering the non-trivial state with

(q_z, q_z+1/2) = [(1,0),(0,1)],

the C_2 charge configuration is (1,1). As above in the discussion of the case G_w = C_4, this state can be viewed as a non-trivial stack of d=2 C_2 pgSPT states, with t_h playing the role of translation symmetry in the stacking direction.
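The cyclic cases above can be reproduced by a short enumeration (our own sketch; the generator below simply composes a split of (0,0) into chains (a, 𝔱_h(a)) and (-a, 𝔱_h(-a)), a half-lattice-constant slide of one chain, and a regrouping; the C_2v case works analogously with the swap automorphism):

```python
# Sketch: configurations (q_z, q_{z+1/2}) reachable from (0, 0) by twisting,
# for charge group Z_n with the inversion automorphism t(q) = -q mod n
# (the cases G_w = C_3, C_4, C_6 above).
def reachable(n):
    t = lambda q: (-q) % n
    # Split (0,0) into chains (a, t(a)) and (-a, t(-a)), slide one chain by
    # half a lattice constant (q_z <- q_{z+1/2}), and regroup; this generates
    # configurations (t(a) - a, a - t(a)). Close them under addition.
    gens = {((t(a) - a) % n, (a - t(a)) % n) for a in range(n)}
    group = {(0, 0)}
    while True:
        new = {((x1 + y1) % n, (x2 + y2) % n)
               for (x1, x2) in group for (y1, y2) in gens} - group
        if not new:
            return sorted(group)
        group |= new

for n in (3, 4, 6):
    qt = reachable(n)
    print(f"C_{n}: Q_t = {qt}, |Q_c|/|Q_t| = {n // len(qt)}")
# C_3: Q_t = all of Q_c            -> H^1_o(C_3) trivial
# C_4: Q_t = {(0,0), (2,2)}        -> H^1_o(C_4) = Z_2
# C_6: Q_t = {(0,0), (2,4), (4,2)} -> H^1_o(C_6) = Z_2
```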
§ COMPLETENESS OF PGSPT AND WEAK PGSPT INVARIANTS

Here, we consider three-dimensional cSPT phases protected by space group symmetry. The block-equivalence classification for space group G is 𝒞(G) = 𝒞_0(G) × 𝒞_1(G) × 𝒞_2(G). We show that any two distinct elements of 𝒞(G) can be distinguished by pgSPT and weak pgSPT invariants. This implies that these two elements are different phases, so the block-equivalence classification indeed gives a classification of phases. It also follows that the cSPT phases we classify (and, equivalently, those classified in Ref. thorngren16gauging) can be fully characterized by pgSPT and weak pgSPT invariants.

We recall the definitions of pgSPT and weak pgSPT invariants. A pgSPT invariant is an SPT invariant associated with some site symmetry subgroup of G. Given a G-symmetric cSPT state, we can focus on a site symmetry subgroup, view the state as a pgSPT phase protected by the site symmetry, and compute the resulting invariant. A weak pgSPT invariant is obtained by compactifying one or more dimensions of space, viewing the resulting system as a lower-dimensional point group SPT phase, and characterizing the dependence of the lower-dimensional pgSPT invariant on the length in the finite dimensions.

As explained in Sec. <ref>, 𝒞_1(G) × 𝒞_2(G) can be factored into pgSPT invariants associated with C_nv axes and mirror planes. Therefore, here, it is enough to concentrate on 𝒞_0(G). We will establish the following claim:

Claim 1. Consider Ψ ∈ 𝒞_0(G). Ψ ≠ 0 implies that Ψ has a non-trivial pgSPT or weak pgSPT invariant.

It follows from this claim that non-zero elements of 𝒞_0(G) are non-trivial cSPT phases. It also follows from Claim 1 that two distinct elements of 𝒞_0(G) have different pgSPT or weak pgSPT invariants. This is easily established by contradiction: we consider non-zero elements Ψ, Ψ' ∈ 𝒞_0(G), with Ψ ≠ Ψ'. We suppose Ψ and Ψ' have the same pgSPT and weak pgSPT invariants. Then the difference Ψ - Ψ' is non-zero but has trivial pgSPT and weak pgSPT invariants, a contradiction.

To establish Claim 1, we expose some structure of 𝒞_0(G) that will be useful. Given an element Ψ ∈ 𝒞_0(G), we define an integer D(Ψ) ∈ {0, 1, 2, 3} as follows. We consider a state representing Ψ, and apply block equivalence operations to remove points with the lowest Wyckoff dimension until this can no longer be done. D(Ψ) is defined to be the lowest Wyckoff dimension of a point in the resulting state. For Ψ = 0, we define D(0) ≡ 3. For example, D(Ψ) = 0 means that there is some point p with Wyckoff dimension zero (i.e. the position of p is fixed), such that p carries non-trivial G_p charge that cannot be removed by applying block equivalence operations. D(Ψ) = 1 means that block equivalence operations can be applied to remove all points with Wyckoff dimension zero, but it is not possible to remove all points with Wyckoff dimension less than two.

We also define subgroups of 𝒞_0(G) by

W_n = {Ψ ∈ 𝒞_0(G) | D(Ψ) ≥ n}.

We have the sequence of subgroups

0 = W_3 ⊂ W_2 ⊂ W_1 ⊂ W_0 = 𝒞_0(G).

We can also define quotients

V_n = W_n / W_n+1.

It will follow from the discussion below that V_0 corresponds to pgSPT invariants, while V_1 and V_2 correspond to weak pgSPT invariants. Given G, it is possible to decompose 𝒞_0(G) into V_0, V_1 and V_2, which is a decomposition of the cSPT classification into pgSPT and weak pgSPT invariants. In general, this decomposition is not simply a product; that is,

𝒞_0(G) ≠ V_0 × V_1 × V_2,

although such a factorization does hold in many cases. For instance, for space group #200 (see Sec.
<ref>), 𝒞_0(G) = ℤ_3 × ℤ₂^8, V_0 = ℤ₂^8, and V_1 = ℤ_3 (V_2 is trivial). We have 𝒞_0(G) = V_0 × V_1. We give an example below (space group #82) where the decomposition into V_0, V_1 and V_2 is not simply a product.

Now we turn to establishing Claim 1. First, we consider Ψ ∈ 𝒞_0(G) with D(Ψ) = 0. Then in any state representing Ψ, there is some point p with Wyckoff dimension zero whose G_p charge cannot be removed by applying block equivalence operations. It follows that the G_p charge cannot be removed by the adjoining operation used to classify pgSPT phases, and Ψ is a non-trivial G_p pgSPT phase.

Next, we consider Ψ with D(Ψ) = 1. We fix a state representing Ψ where all points have Wyckoff dimension one and higher. Let p be a point of Wyckoff dimension one that cannot be split to points of Wyckoff dimension two or eliminated completely. At least one such point exists because D(Ψ) = 1. The site symmetry G_p can be C_n (n = 2,3,4,6) or C_nv (n = 2,4,6). Let A be the axis swept out by p as it is slid along its symmetry axis, and let G_A be the subgroup of G taking A into itself. The line A can be viewed as a one-dimensional system with symmetry group G_A and on-site symmetry G_p, and it is thus clear that G_p is a normal subgroup of G_A. The quotient G̃_A = G_A / G_p can be viewed as a one-dimensional space group of A. There are only two one-dimensional space groups, which means there are two cases to consider:

Case 1: G̃_A acts on A only by translation.

Case 2: The action of G̃_A on A is generated by translation and inversion.

Case 1. We choose t ∈ G_A so that the corresponding element [t] ∈ G̃_A is the elementary one-dimensional translation. As a three-dimensional operation, t can be chosen to be a pure translation, a glide reflection, or a screw rotation. We can apply block equivalence operations to group points along A, so that A contains only a lattice of points separated by elementary translations, each carrying non-trivial G_p charge. We make the system finite along the axis A with length L and periodic boundary conditions, so that t^L = 1.

First, we suppose that t commutes with G_p. Taking periodic boundary conditions is compatible with G_p symmetry, and we can view the finite system as a two-dimensional pgSPT state, with point group corresponding to G_p. Because the G_p charge per unit cell along A is non-zero, the d=2 pgSPT index has non-trivial dependence on L, and the state has a non-trivial weak pgSPT invariant.

Second, we suppose that t does not commute with G_p. This is precisely the situation studied in Appendix <ref>; we need only recapitulate the results obtained there in the context of the present discussion. There are only three non-trivial possibilities, and in each case the block-equivalence classes of charge configurations along A are labeled by a ℤ₂ invariant. Two of the cases are G_p = C_4 or G_p = C_6 with t a glide reflection. The other case is G_p = C_2v, with t a four-fold screw rotation. In general, taking periodic boundary conditions here is not compatible with G_p symmetry, because t^L need not commute with G_p. However, in all these cases, G_p has a C_2 subgroup that commutes with t, so we can view the compactified system as a d=2 pgSPT state with C_2 symmetry. In Appendix <ref> it is shown that the corresponding weak pgSPT invariant resolves those charge configurations on A that are non-trivial under block equivalence.

Case 2. We choose a, b ∈ G_A so that [a], [b] ∈ G̃_A are one-dimensional inversion operations at neighboring inversion centers. Then, letting t = ba, [t] ∈ G̃_A is the elementary one-dimensional translation that defines a primitive cell along the axis A.
As in Case 1, we compactify the system along A, taking periodic boundary conditions with t^L = 1. We observe that a^2 acts on A as an on-site operation, so a^2 ∈ G_p. Moreover, a^2 is orientation preserving, so the only possibilities are a^2 = 1, 2, 3, where 2 and 3 denote two-fold and three-fold rotations. If a^2 = 3, we can redefine a so that a^2 = 1. It follows that we can always choose a to be one of four operations, a = 1̄, m, 2', 4̄. Here, 1̄ is inversion, m is mirror reflection with the normal of the mirror plane along A, 2' is two-fold rotation about an axis perpendicular to A, and 4̄ is a four-fold roto-reflection along A. Similarly, we can take b = 1̄, m, 2', 4̄.

We let G_a and G_b be the site symmetry groups at the two one-dimensional inversion centers. Given a fixed G_p, only certain choices for G_a are consistent. Clearly, G_a must contain G_p as a subgroup, and must contain a = 1̄, m, 2', 4̄. In addition, we can always move p and its inversion image ap to the a inversion center and group these two points together. Doing so must not result in a trivial G_a charge, which would contradict D(Ψ) = 1. Therefore, the corresponding splitting/grouping operation must be non-trivial, i.e. the homomorphism g_w defined in Appendix <ref> must be non-trivial. Corresponding statements hold for G_b. Using these restrictions on G_a and G_b, we now proceed case-by-case through the different possibilities for G_p.

G_p = C_2. Here, G_a = G_b = S_4. We can take a = 4̄ and b = 4̄^-1, so that t is a pure translation. The compactified system has two-dimensional C_4 point group symmetry, because the S_4 rotation-reflection acts on the two-dimensional system as a four-fold rotation. The corresponding weak pgSPT invariant is non-trivial.

G_p = C_3. Here, G_a = C_3i, T_h, or C_3h, and similarly for G_b. In the first two cases, a = 1̄, and in the third case a = m. Depending on the choices of G_a and G_b, t is either a pure translation or a two-fold screw rotation, both of which commute with G_p = C_3. Therefore compactifying with length L always preserves C_3 symmetry. There is a non-trivial weak pgSPT invariant associated with two-dimensional C_3 symmetry.

G_p = C_4. Here, G_a = G_b = C_4h, with a = b = m, so that t is a pure translation. The compactified system has two-dimensional C_4 symmetry, and the corresponding weak pgSPT invariant is non-trivial.

G_p = C_6. Here, G_a = G_b = C_6h, with a = b = m, so that t is a pure translation. The compactified system has two-dimensional C_6 symmetry, and the corresponding weak pgSPT invariant is non-trivial.

G_p = C_2v. Here, G_a, G_b = D_2d or T_d. For both these point groups we can take a = 2' or a = 4̄. Choosing a = b = 2', t is a pure translation. There is a non-trivial C_2v charge per unit cell. Upon compactifying to two dimensions, C_2v becomes the d=2 point group D_2, and the corresponding weak pgSPT invariant is non-trivial.

G_p = C_4v, C_6v. There are no possible G_a, G_b satisfying the restrictions. Therefore, Case 2 does not arise for these choices of G_p.

This completes the discussion of Case 2.

Finally, we consider Ψ with D(Ψ) = 2. We fix a state representing Ψ where all points have Wyckoff dimension two. Let p be such a point that cannot be eliminated completely by applying block equivalence operations. The site symmetry of p is mirror reflection, i.e. G_p = C_s.
We let P be the mirror plane swept out by p, and G_P the subgroup of G taking P into itself. The quotient G̃_P = G_P / G_p is a wallpaper group of the plane P. We define H_P ⊂ G_P to be the subgroup of orientation-preserving operations (as three-dimensional rigid motions). It is straightforward to show that G_P ≃ H_P × G_p. Therefore, we can view the mirror plane as a two-dimensional system with G_p ≃ ℤ₂ on-site symmetry that commutes with the wallpaper group G̃_P ≃ H_P. The wallpaper group G̃_P cannot contain any two-fold rotation centers or reflection axes, because the point p and its images under symmetry can be slid to these centers/axes, grouped together there, and eliminated. The only possibilities are therefore G̃_P = p1, pg, p3.

Treating G̃_P = p1, p3 together, we let t_1 and t_2 be two elementary translations in G̃_P. D(Ψ) = 2 implies each primitive cell carries a non-trivial G_p charge. Compactifying the system in both directions so that t_1^L_1 = t_2^L_2 = 1, we have a one-dimensional pgSPT state where the one-dimensional inversion corresponds to the G_p mirror reflection. The one-dimensional pgSPT invariant is non-trivial when L_1 L_2 is odd, and trivial when L_1 L_2 is even, and we have a non-trivial weak pgSPT invariant.

Next, taking G̃_P = pg, we let t_1 be the glide reflection, and t_2 be an elementary translation normal to the glide axis. Using these operations to define an effective unit cell, the G_p charge per unit cell is non-trivial. We compactify by first setting t_2^L_2 = 1. Next, we compactify in the t_1 direction by setting t_1^L_1 = 1. If L_1 is odd, this is a twisted boundary condition. Odd L_1 breaks t_2 symmetry, but there is no need to preserve t_2 symmetry after first using it to define the periodic boundary condition in the t_2 direction. Crucially, the choice of boundary conditions is compatible with G_p mirror symmetry, so that the mirror plane of the compactified system carries trivial (non-trivial) G_p charge when L_1 L_2 is even (odd). Therefore, there is a non-trivial weak pgSPT invariant.

This completes the proof of Claim 1.

We close this Appendix by discussing an example where the decomposition of 𝒞_0(G) into pgSPT and weak pgSPT invariants is not simply a product. We consider the space group I4̄, which is #82 in the International Tables,<cit.> and we refer to it as G_82. The W-quasigraph is shown in Fig. <ref>.
We focus on the component with vertices a, b and e, which have site symmetry S_4, S_4 and C_2, respectively. Applying block equivalence operations produces a classification ℤ_4 × ℤ₂ for this component of the graph. (The other component behaves identically, and the full classification is 𝒞_0(G_82) = ℤ_4^2 × ℤ₂^2.) The point group S_4 has a ℤ₂ pgSPT classification, and six of the seven non-zero elements of ℤ_4 × ℤ₂ have non-trivial S_4 pgSPT invariants. The non-zero element with trivial pgSPT invariants generates the subgroup ℤ₂ ⊂ ℤ_4. We identify this ℤ₂ subgroup with V_1, as its non-trivial element is characterized by a weak pgSPT invariant (see below). Then we have V_0 = (ℤ_4 × ℤ₂) / V_1 ≃ ℤ₂ × ℤ₂, which is the group of S_4 pgSPT invariants for the two S_4 centers.

To see that V_1 is associated with a weak pgSPT invariant, we note that a representative state for the non-trivial element has non-trivial S_4 charge of 2 at a, where we use additive notation and write H^1(S_4,U(1)) = ℤ_4 = {0, 1, 2, 3}. The b and e points carry trivial charge. The unit cell coordinates of a are (0,0,0), with the roto-reflection axis in the z-direction. If we compactify the system in the z-direction with length L, then the S_4 symmetry (at the a points) becomes a two-dimensional C_4 point group symmetry. The C_4 charge at the origin (i.e. the projection of the z-axis to a point) is trivial when L is even, and is 2 when L is odd, so this state has a non-trivial weak pgSPT invariant.

§ CLASSIFICATIONS OF CRYSTALLINE SPT PHASES FOR SPACE GROUP SYMMETRY IN THREE DIMENSIONS

Here, in Table <ref>, we give 𝒞_0(G), 𝒞_1(G) and 𝒞_2(G) for all 230 space groups in three dimensions. The classification 𝒞(G) = 𝒞_0(G) × 𝒞_1(G) × 𝒞_2(G) can be obtained by taking the product of the given factors, and agrees with the Thorngren-Else classification, which was obtained in Ref. thorngren16gauging for all space groups except numbers 227, 228 and 230.
http://arxiv.org/abs/1705.09243v2
{ "authors": [ "Sheng-Jie Huang", "Hao Song", "Yi-Ping Huang", "Michael Hermele" ], "categories": [ "cond-mat.str-el", "cond-mat.mes-hall" ], "primary_category": "cond-mat.str-el", "published": "20170525160417", "title": "Building crystalline topological phases from lower-dimensional states" }
[pages=1-last]SMIB.pdf
http://arxiv.org/abs/1705.09849v1
{ "authors": [ "M. Ehsan Raoufat", "Kevin Tomsovic", "Seddik M. Djouadi" ], "categories": [ "cs.SY" ], "primary_category": "cs.SY", "published": "20170527175813", "title": "Power System Supplementary Damping Controllers in the Presence of Saturation" }
An X-ray survey of the 2Jy sample. II: X-ray emission from extended structures

B. Mingo^1 (E-mail: [email protected]), M. J. Hardcastle^2, J. Ineson^3, V. Mahatma^2, J. H. Croston^4, D. Dicken^5, D. A. Evans^6, R. Morganti^7,8, and C. Tadhunter^9

^1Department of Physics and Astronomy, University of Leicester, University Road, Leicester LE1 7RH, UK
^2Centre for Astrophysics Research, School of Physics, Astronomy & Mathematics, University of Hertfordshire, College Lane, Hatfield AL10 9AB, UK
^3School of Physics and Astronomy, University of Southampton, Southampton SO17 1SJ, UK
^4School of Physical Sciences, The Open University, Walton Hall, Milton Keynes MK7 6AA, UK
^5CEA-Saclay, F-91191 Gif-sur-Yvette, France
^6Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA
^7ASTRON, the Netherlands Institute for Radio Astronomy, Postbus 2, 7990 AA, Dwingeloo, The Netherlands
^8Kapteyn Astronomical Institute, University of Groningen, P.O. Box 800, 9700 AV Groningen, The Netherlands
^9Department of Physics and Astronomy, University of Sheffield, Hounsfield Road, Sheffield S3 7RH, UK

Received ; accepted

The 2Jy sample is a survey of radio galaxies with flux densities above 2 Jy at 2.7 GHz. As part of our ongoing work on the southern subset of 2Jy sources, in paper I of this series we analysed the X-ray cores of the complete 2Jy sample with redshifts 0.05<z<0.7. For this work we focus on the X-ray emission associated with the extended structures (jets, lobes, and environments) of the complete subset of 2Jy sources with 0.05<z<0.2, which we have observed with Chandra. We find that hotspots and jet knots are ubiquitous in FRII sources, which also inhabit systematically poorer environments than the FRI sources in our sample. Spectral fits of the hotspots with good X-ray statistics invariably show properties consistent with synchrotron emission, and we show that inverse-Compton mechanisms under-predict the X-ray emission we observe by 1–2 orders of magnitude. Inverse-Compton emission is detected from many of the lobes in our sample, and we find that the lobes of the FRII sources show magnetic fields lower by up to an order of magnitude than expected from equipartition extrapolations. This is consistent with previous results, which show that most FRII sources have electron energy densities higher than minimum energy requirements.

galaxies: active – X-rays: galaxies – radio continuum: galaxies

§ INTRODUCTION

The 2Jy sample of radio galaxies[<http://2Jy.extragalactic.info/2Jy_home_page.html>], as defined by <cit.>, includes all the galaxies with flux density greater than 2 Jy at 2.7 GHz. Over the last twenty years we have obtained and studied in detail uniform data for the complete subset of Southern sources defined by <cit.> and <cit.> (δ < +10^∘) and, especially, the steep-spectrum (α > 0.5, where α is the radio spectral index, such that S_ν ∝ ν^-α) subsample defined by <cit.>, which contains 47 objects and is statistically complete for redshifts 0.05<z<0.7 <cit.>.
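As a minimal illustration of the selection quantities just defined (our own sketch; the flux values are hypothetical, not measurements of any 2Jy source), the spectral index between two frequencies follows directly from S_ν ∝ ν^-α:

```python
# Illustrative only: spectral index alpha for S_nu ∝ nu^-alpha,
# computed from flux densities at two frequencies.
import math

def spectral_index(s1_jy, nu1_ghz, s2_jy, nu2_ghz):
    """alpha = ln(S1/S2) / ln(nu2/nu1)."""
    return math.log(s1_jy / s2_jy) / math.log(nu2_ghz / nu1_ghz)

# A hypothetical source: 3.0 Jy at 2.7 GHz and 1.8 Jy at 5.0 GHz.
alpha = spectral_index(3.0, 2.7, 1.8, 5.0)
print(f"alpha = {alpha:.2f}")                           # ~0.83
print("passes steep-spectrum cut (alpha > 0.5):", alpha > 0.5)
```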
Most recently, in the first paper of this series <cit.>, we analysed the X-ray cores of the 2Jy sources in the subset of <cit.>, using data from Chandra and XMM-Newton, and found our results to be in good agreement with those of <cit.> on the 3CRR radio galaxies. In this work we focus on the extended X-ray emission (jets, hotspots, and lobes) and the environments of the 0.05<z<0.2 subset of sources that we have observed with Chandra, whose nuclei we studied in paper I. Our knowledge of X-ray jets <cit.> and hotspots <cit.> has certainly improved over the last two decades, as has our understanding of the environment in which radio galaxies live <cit.>, but the samples of radio galaxies with available detailed observations are still relatively small, and more work needs to be done to understand their extended structures and how they co-evolve with their hosts <cit.>.

The traditional radio classification, defined by <cit.>, divides sources according to their radio structure into centre-brightened (FRI) and edge-brightened (FRII) classes. This division is tied to the total radio luminosity of the source, with FRIs being less luminous and FRIIs more so; Fanaroff & Riley's transition corresponds to a power of 10^25 W Hz^-1 at 1.4 GHz. The radio luminosity, in turn, is expected to be related to the intrinsic jet power Q, but radio luminosity must also be affected by other factors, including a source's age and the density of its environment <cit.>, so that morphology and radio luminosity are not always reliable estimators of intrinsic jet power. FRI jets are known to decelerate from relativistic to non-relativistic speeds on kpc scales <cit.>, which implies relatively substantial entrainment of external material. In general terms, the standard explanation for the FRI/FRII dichotomy <cit.> is that FRI jets, which are less intrinsically powerful (lower Q), are decelerated by entrainment to transonic speeds before leaving the environment of the host galaxy, while FRII jets are powerful enough to retain supersonic (relativistic) speeds on scales of tens of kpc. The FRI/FRII division would thus be a function of both environment and intrinsic jet power <cit.>.
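To make the luminosity scale concrete, the following sketch (our own illustration; the 2 Jy flux density, z = 0.1 and α = 0.8 are hypothetical values) computes a K-corrected 1.4 GHz luminosity for comparison with the nominal FRI/FRII break, using the concordance cosmology adopted later in this paper:

```python
# Sketch (illustrative values only): rest-frame radio luminosity
#   L_nu = 4 pi D_L^2 S_nu (1+z)^(alpha-1),   for S_nu ∝ nu^-alpha,
# to compare with the nominal FRI/FRII break at ~1e25 W/Hz at 1.4 GHz.
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

# Concordance cosmology adopted in this paper (H0=70, Om=0.3, OL=0.7).
cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def radio_luminosity(flux_jy, z, alpha=0.8):
    d_l = cosmo.luminosity_distance(z).to(u.m)
    s_nu = (flux_jy * u.Jy).to(u.W / (u.m ** 2 * u.Hz))
    return (4 * np.pi * d_l ** 2 * s_nu * (1 + z) ** (alpha - 1)).to(u.W / u.Hz)

# A hypothetical 2 Jy source at z = 0.1:
print(radio_luminosity(2.0, 0.1))   # ~5e25 W/Hz, a few times the FR break
```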
Consistent with this, FRI sources in flux-limited samples have long been thought to inhabit relatively dense environments <cit.>, although this seems to change for low-luminosity FRI LERGs <cit.>, and there is evidence from pressure balance arguments <cit.> that their lobes may contain a substantial non-radiating component, and as such depart substantially from an assumption of energy equipartition between the magnetic field and the electrons in the lobes <cit.>. FRIIs inhabit sparser environments <cit.>, and their lobes are closer to equipartition (<cit.>, but see also <cit.>), though they can drive strong shocks into their surroundings as well <cit.>. Pressure balance arguments do not require a substantial non-radiating component in the lobes of many FRIIs <cit.>, and these differences in particle content mean that, a priori, the same correlations between jet kinetic energy and radio luminosity cannot be applied across both populations <cit.>.

The FRI/FRII dichotomy should not be confused with the well-known accretion mode dichotomy in radio-loud AGN <cit.>. Many FRIs also have radiatively inefficient <cit.> nuclei <cit.>, but that is not always the case. Many, but not all, FRIIs have radiatively efficient <cit.> nuclei <cit.>. The environmental properties of these sources seem to be tied to their accretion mode, rather than their radio morphology <cit.>. We discussed the nature of the AGN in the 2Jy sample in great detail in paper I, and use our classifications from that paper in this work.

Since the energy-loss timescales for relativistic electrons are inversely proportional to their energies, synchrotron emission from radio galaxy lobes is generally detected only at radio frequencies, unless there is an on-going source of particle acceleration. The dominant X-ray emission process from the lobes themselves appears to be inverse-Compton scattering of CMB photons <cit.>. However, in richer environments (often those of FRIs) the X-ray emission is dominated by thermal bremsstrahlung from the undisturbed large-scale environment and/or shocked gas surrounding the radio source <cit.>. One of our objectives in the present paper is to carry out a systematic search for lobe-related (inverse-Compton) emission and extended thermal emission around the 2Jy objects.

Hotspots are the termination points of FRII jets, assumed to be the terminal shocks expected at the end of a supersonic jet <cit.>. Hotspots are regions of intense, on-going particle acceleration, and as such they are bright in the radio, but they can be detected at shorter wavelengths as well. In X-rays they often display synchrotron or synchrotron self-Compton spectra <cit.>, the latter being more frequent in very luminous hotspots. Often the X-ray hotspots are slightly offset from their radio counterparts, hinting at an underlying complexity in the local environment or the magnetic field. In many sources, including several FRIs, we also see secondary bright spots along the jet. It is likely that some of these so-called knots, which we detect beyond the radio, are also the results of shocks, as they must have on-going particle acceleration to produce synchrotron emission in the optical and X-rays, but others seem to present more diffuse structures and no particle acceleration, indicating, rather, points in which the jet kinetic energy is transferred into particles without the jet being significantly disrupted. These diffuse knots can sometimes be faint in the radio but bright in X-rays <cit.>. It is still not clear what makes some hotspots, knots and jet features X-ray synchrotron sources while others are undetected in the X-rays, and the non-uniform nature of the existing large samples <cit.> makes it hard to draw conclusions from observations.

In this paper we use our relatively uniform survey of the z<0.2 2Jy sources to assess the incidence of X-ray hotspots in FRII sources and investigate the mechanisms that produce their X-ray emission, compare the environments we find for FRIs and FRIIs with what we know from the literature, and test the predictions for the inverse-Compton emission in FRIIs against the lobes we detect in X-rays. A detailed study of the large-scale environments of the 2Jy sources, which ties in with some of our results, was carried out by <cit.>. A follow-up study by <cit.> provides further details on the energetics of the 2Jy FRII sources, as part of a larger sample of FRIIs.
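Returning to the lobe emission mechanisms discussed above, a back-of-the-envelope comparison (our own numbers; the 0.5 nT field and z = 0.1 are purely illustrative, not fitted values) shows why inverse-Compton scattering of CMB photons can compete with synchrotron losses in lobes with weak magnetic fields:

```python
# Illustrative comparison: CMB photon energy density versus magnetic energy
# density in a radio lobe. Where u_CMB is comparable to u_B, inverse-Compton
# scattering of CMB photons is an important loss channel for the electrons.
import math

a_rad = 7.5657e-16          # radiation constant, J m^-3 K^-4
mu_0 = 4 * math.pi * 1e-7   # vacuum permeability
T_CMB = 2.725               # present-day CMB temperature, K

def u_cmb(z):
    """CMB energy density, scaling as (1+z)^4."""
    return a_rad * T_CMB ** 4 * (1 + z) ** 4

def u_mag(b_tesla):
    """Magnetic energy density B^2 / (2 mu_0)."""
    return b_tesla ** 2 / (2 * mu_0)

# A hypothetical lobe field of 0.5 nT (5 microgauss) at z = 0.1:
print(f"u_CMB = {u_cmb(0.1):.2e} J m^-3")     # ~6e-14 J m^-3
print(f"u_B   = {u_mag(0.5e-9):.2e} J m^-3")  # ~1e-13 J m^-3
```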
A follow-up study by <cit.> provides further details on the energetics of the 2Jy FRII sources, as part of a larger sample of FRIIs. For this paper we have used a concordance cosmology with H_0=70 km s^-1 Mpc^-1, Ω_m=0.3 and Ω_Λ=0.7, for compatibility with the results we presented in <cit.>.

§ DATA

§.§ The sample

Table <ref> gives details of the 2Jy sample used in this paper. As in <cit.> and <cit.>, we classify sources as LERGs based on their [OIII] equivalent widths, after the definition of <cit.>, and on inspection of their optical spectra. This definition is consistent with the WLRG (weak line radio galaxy) classification, also often used in the literature to refer to these sources <cit.>. In terms of their Fanaroff-Riley classification <cit.>, our 2Jy sample has 7 FRI, 16 FRII, and 3 compact sources. We have listed these classifications, as well as the AGN types, in Table <ref>.

It is worth mentioning again that the 2Jy sample does not overlap with the 3CRR catalogue, due to the different location of the sources (the 3CRR catalogue covers sources in the Northern hemisphere, with δ>+10^∘). Some of the brightest 2Jy sources are included in the original 3C catalogue, as is the case for e.g. the quasar 3C 273 (PKS 1226+02). Because the 2Jy selection was made at a higher frequency than the 3CRR sample, overall more beamed sources are selected for the 2Jy sample than for the 3CRR, despite the steep-spectrum cut. Some of the implications of this fact are discussed in <cit.>.

§.§ X-ray analysis

As mentioned in the previous Section, for the X-rays we analysed Chandra observations for the low-z sources in our sample, also listed in Table <ref>. Four low-z sources (PKS 0404+03, 1814-63, 2135-14, 2221-02) have XMM observations that we did not use, since the Chandra images provided all the information needed for our analysis at a much better spatial resolution. Most of the observations were carried out at our request, using the ACIS-S CCD and no gratings; when using archival data we only considered ACIS-S and ACIS-I observations without gratings, and discarded calibration or very short observations that did not significantly contribute to the statistics. We reprocessed all the data presented by <cit.>, using ciao 4.7 and the latest CALDB. We included the correction for VFAINT mode, to minimise issues with the background, for all the sources observed in VFAINT mode with a count rate below 0.01 counts s^-1. While this correction is not essential to study the cores of the sources, it can improve the statistics for the extended and faint emission that we have analysed for this work.

§.§ Reduction and calibration of new radio data

Most of our radio maps are taken from the work by <cit.>. Table <ref> lists the radio map properties for each source, as well as the references for each dataset. A minority of sources were imaged afresh from VLA archive data. These were reduced in AIPS in the standard manner – flux calibration used 3C 48 or 3C 286, a nearby point-source calibrator was used for phase calibration, and one or two iterations of phase followed by at most one iteration of amplitude self-calibration were carried out before final images were made at the full resolution of the data (using Briggs weighting with the robustness parameter set to 0). Where the structure of the source demanded it, data from different VLA configurations were combined and cross-calibrated before imaging.
The one image made from archival ATCA data, that of PKS 2356-61, is composed of data from 3 different ATCA observations (in 3 different configurations), which were reduced in the standard manner in miriad before being combined, self-calibrated and imaged in AIPS. The data for PKS 2211-17 (3C 444) are new broad-band (1-2 GHz) JVLA data obtained for a different purpose; they will be discussed in more detail elsewhere (Mahatma et al. in prep.). For these data we used AOflagger <cit.> on the raw data prior to data reduction to flag RFI. Data reduction was then performed on the A and B-configuration data sets individually, using casa version 4.3.1, in the standard manner as described in the casa tutorials [<https://casaguides.nrao.edu/index.php/Main_Page>]. For flux and bandpass calibration, 3C 48 was observed in a single 3-minute scan. Phase and amplitude gain calibration was performed using the source J2246-1206. Bad baselines evident through the calibration process were flagged manually, as well as with the automated RFI flagging command `rflag'. The data were then averaged 16-fold so as to include 4 channels in each spectral window (16 spectral windows in total) with 512 MHz bandwidth per channel. Self-calibration was then performed in phase and amplitude on the individual A and B-configuration data sets before imaging both data sets together, with a pixel size of 0.3×0.3 arcsec and a clean noise threshold of 0.01 mJy.

§ THE 2JY SOURCES

The following subsections briefly describe the images and spectra of the 2Jy sources imaged by Chandra, with the exception of PKS 1226+02 (3C 273), which was the first object to be identified as a quasar, and as such has been thoroughly studied in the past <cit.>. All the X-ray images (Figs. <ref> to <ref>) correspond to ACIS-S observations, except for PKS 0625-53 and PKS 2135-14, which were taken with ACIS-I. The images have been filtered to show just the 0.3–7 keV energy range, and are smoothed with a Gaussian profile with σ=5 pixels (1 pixel=0.492 arcsec), to better show the extended structures, except for PKS 0521-36 (Fig. <ref>), for which we used σ=3 pixels. The radio maps shown are listed in Table <ref>, where the peak flux and RMS for each map are also given. No radio contours are shown in Figs. <ref> (PKS 1814-63) and <ref> (PKS 1934-63), since these are compact steep-spectrum (CSS) sources and have no extended radio structures. Although in this work we focus on X-ray emission from extended structures, we have also included images for these two compact sources, for completeness. For all the Figures, we have plotted radio contours uniformly covering the largest possible range of fluxes in each map, while also aiming to display the morphology of the sources as clearly as possible and to avoid noise artifacts.

§.§ PKS 0034-01 (3C 15)

The radio morphology of PKS 0034-01 (Fig. <ref>) is intermediate between that of an FRI and an FRII, with a prominent jet in the N lobe but a weak hotspot in the S. The host galaxy sits in a relatively sparse environment, and it does not appear to be disturbed or interacting <cit.>, showing no signs of recent star formation <cit.>, but it does have a dust lane <cit.>. The Chandra observation shows a 6 kpc (∼4 arcsec) one-sided jet <cit.>, which is also detected in the radio <cit.> and the Ks band <cit.>. There is also some X-ray emission coincident with the edges of the radio lobes <cit.>, and its unusual X-ray nuclear emission has been discussed elsewhere (<cit.>; <cit.>).
We have recently obtained new, deeper Chandra data for 3C 15, which will be presented in an upcoming paper.

§.§ PKS 0038+09 (3C 18)

This BLRG seems to be in a dense environment when observed in the optical <cit.>. We do not detect a luminous intracluster medium <cit.>, but there seems to be some extended emission around the AGN in our images (Fig. <ref>). The X-ray image shows some enhanced emission coincident with the N hotspot, but the detection is not statistically significant (1.5σ), especially since there are similarly bright structures around it, so we have not included it in Table <ref>.

§.§ PKS 0043-42

Optical observations of PKS 0043-42 indicate that it inhabits a dense environment <cit.>, from which we detect some faint extended emission in our Chandra image (Fig. <ref>). <cit.> report a possible interaction with a nearby companion. Its radio morphology is very extended, and typical of a powerful FRII, with strong hotspots <cit.>. We detect both hotspots in our X-ray image (see Table <ref>), with a high significance in the case of the N hotspot (5.3σ). It must be noted that, although this source is classified as a LERG, it shows signs of radiatively efficient accretion (see <cit.> and <cit.>).

§.§ PKS 0213-13 (3C 62)

This NLRG has an optical shell and a narrow tidal tail <cit.>. The Chandra image (Fig. <ref>) features a very bright hotspot W of the nucleus (Table <ref>), in good agreement with the position of the radio emission. We do not detect the E hotspot in our X-ray image. We do detect an enhancement in emission inside the lobes, consistent with inverse-Compton scattering (see Section <ref> and Table <ref>).

§.§ PKS 0349-27

This source is a well-known FRII galaxy, and it has some remarkable optical features, including an extended narrow line region, bridges connecting it to two neighbouring galaxies <cit.>, and an extended emission line nebulosity <cit.>. In our Chandra image (Fig. <ref>) we detect some extended emission in the E-W direction, on scales of ∼20 kpc (∼16 arcsec) around the nucleus, which could be associated with the optical bridges linking the host to the other galaxies or with a hot medium. The emission towards the NE, in particular, along the expected direction of the jet, could correspond to the optical ionisation enhancement observed by <cit.>. We detect emission inside the lobes over the background level (see Section <ref>), and observe an enhancement in emission slightly offset (by ∼6.4 arcsec, equivalent to ∼8.3 kpc) from the N radio hotspot (Table <ref>, see also Fig. <ref>), although the offset may be partly caused by the fact that the X-ray emission falls very close to the edge of the CCD. We do not detect the S hotspot in X-rays.

§.§ PKS 0404+03 (3C 105)

The host galaxy of PKS 0404+03 has been extensively studied in the IR and optical <cit.>, despite the high foreground N_H column and the presence of a nearby star. The Chandra image (Fig. <ref>) shows some emission coincident with the S radio hotspot (see Table <ref>), which has also been studied in detail by <cit.>.

§.§ PKS 0442-28

The Chandra image of this NLRG (Fig. <ref>) shows some extended emission, particularly surrounding the base of the N radio lobe. Although there is no ICM emission detected in the X-rays <cit.>, <cit.> found several neighbouring galaxies. We also see a bright region coincident with the N hotspot, which we detect at a 3σ level (see Table <ref>).
We do not detect the S hotspot.

§.§ PKS 0521-36

PKS 0521-36 is a very bright, misaligned BLRG with some peculiar spectral characteristics <cit.>, and an intermediate FRI/FRII structure. The Chandra image (Fig. <ref>) features a large readout streak, and is significantly piled up at the nucleus. <cit.>, in their analysis of this dataset, report a detection of the core, jet, S hotspot and an extended, presumably thermal, halo.

§.§ PKS 0620-52

This source has the lowest redshift in our sample, and it shows evidence for a young stellar population <cit.>. Although its optical morphology is not disturbed <cit.>, the presence of numerous nearby galaxies <cit.>, and the fact that we detect extended emission in our Chandra image <cit.>, make us agree with the hypothesis of <cit.>, <cit.>, and <cit.> that this object sits in a rich cluster. The distorted shape of the radio lobes also indicates an interaction with the surrounding environment.

§.§ PKS 0625-35

PKS 0625-35 is suspected to be a BL Lac <cit.>. It has a one-sided jet <cit.>, which we do not resolve in the X-rays, and it does not seem to be interacting. The presence of a cluster environment was initially not clear <cit.>, but it has recently been confirmed <cit.>. Although optically classified as a LERG, it is clear from our data that this is not a “standard” low-excitation object. The Chandra image (Fig. <ref>) shows a large readout streak, and is piled up, but there are clear signs of a brightness gradient around the source (see Fig. <ref>), indicating the possible presence of intra-cluster medium (ICM) emission from a dense environment.

§.§ PKS 0625-53

PKS 0625-53 is a LERG hosted by a dumbbell galaxy, which is also the brightest member of the cluster Abell 3391 <cit.>. It has a `wide-angle tail' morphology <cit.> and a deflected jet. `Wide-angle tail' sources are traditionally classified as FRI, although they often show properties that are intermediate between the two classes <cit.>. The optical images of PKS 0625-53 show a bridge of interaction with the W component of the dumbbell system <cit.>. The Chandra image (Fig. <ref>) shows emission around the galaxy from the hot ICM, with a decrease in emission in the area overlapping with the N radio lobe, indicating a possible X-ray cavity.

§.§ PKS 0806-10 (3C 195)

The optical and IR images of this galaxy show clear signs of disturbance <cit.>. Our Chandra image (Fig. <ref>) shows some enhancement in emission at the base of the radio lobes, near the nucleus, and enhancements in emission that are spatially coincident with the radio emission from the hotspots and the S knot <cit.>. Around the N hotspot the X-ray emission is only enhanced at a 1.5σ level, with other structures of similar brightness around it, so we do not consider it a detection in Table <ref>. We do detect the S hotspot and knot, however.

§.§ PKS 0915-11 (3C 218, Hydra A)

Hydra A is a very well-studied galaxy. It is one of the most powerful local radio sources, and it sits in the centre of a rich cluster <cit.>. It shows evidence for recent star formation <cit.>, which is not common in cluster-centre galaxies, but can be attributed to a recent merger <cit.>. The Chandra images (Fig. <ref>) show the hot gas emission from the ICM, as well as emission associated with the lobes <cit.>.

§.§ PKS 0945+07 (3C 227)

PKS 0945+07 is a well-known BLRG <cit.>, with a very extended optical emission line region <cit.>. The Chandra image (Fig. <ref>) shows a faint readout streak.
We detect some enhanced emission inside the radio lobes, whose spectrum is compatible with inverse-Compton scattering (see Section <ref> and Table <ref>), and bright X-ray emission coincident with the radio hotspots, particularly for the E structures <cit.>.

§.§ PKS 1559+02 (3C 327)

The host galaxy of this NLRG is very massive, and seems to have a bifurcated dust lane <cit.>, which crosses the nucleus. Its radio morphology is extended and well known <cit.>, with the E lobe being much brighter than its W counterpart. <cit.> report a large infrared excess that extends beyond what is expected for a torus. The Chandra image (Fig. <ref>) shows a very bright nucleus, which is close to the edge of the S3 chip. As reported by <cit.>, there is enhanced emission within the E lobe (see Section <ref> and Table <ref>), with two bright spots coinciding with the E radio hotspot. It is worth mentioning that VLT observations show a foreground galaxy very close to the location of the E hotspot <cit.>. There seems to be some enhanced emission in the W lobe as well, but since it falls in one of the front-illuminated chips, and partly in the CCD gap, it is hard to quantify; we also do not detect a hotspot in the W lobe.

§.§ PKS 1648+05 (3C 348, Hercules A)

Hercules A is a cluster-embedded LERG with some unusual radio properties <cit.>. Dust features are detected in the optical images <cit.>. The host galaxy is at the centre of a rich cluster <cit.>, and the lobes seem to be driving a shock into the ICM <cit.>, which is evident in the Chandra image (Fig. <ref>), where there is clear emission from the hot ICM, with a lower density in the regions corresponding to the radio lobes. The nuclear X-ray spectrum is very faint, with soft emission being the main contributor, as expected, and the X-ray images also show an enhancement in emission coincident with the radio jet, in the E direction. <cit.> have placed limits on the non-thermal emission associated with the lobes, but the extended emission is clearly dominated by thermal emission from the shocked ICM.

§.§ PKS 1733-56

The host galaxy of PKS 1733-56 shows evidence of recent star formation <cit.>, and it has a disturbed optical morphology <cit.>. Although there is a high foreground star density in the optical field, there are not many neighbouring galaxies near this source <cit.>. The Chandra image (Fig. <ref>) shows some diffuse emission, which could correspond to a hot ICM, and enhancements in emission coincident with the radio hotspots. The N hotspot is the brighter of the two in the radio, but it is faint in X-rays, and there is extended emission around it, making its detection slightly unclear (we have nonetheless reported it in Table <ref>, as it is statistically significant at the 3.2σ level). We do detect, with high significance, the S hotspot and knot (8.8σ and 6.9σ, respectively), both of which are fainter in the radio. The knot is coincident with the radio emission, but the S hotspot seems slightly offset, by ∼5.5 arcsec, corresponding to ∼10.2 kpc (see also Fig. <ref>).

§.§ PKS 1814-63

PKS 1814-63 is a compact steep-spectrum radio source, and hence its core is not resolved by Chandra (Fig. <ref>). The galaxy shows clear traces of an optical disk and a dust lane <cit.>, which is atypical for a system with this radio luminosity <cit.>. It also shows evidence for starburst activity <cit.> and it has an extended emission line region <cit.>.
The Chandra image shows no large-scale emission enhancement corresponding to a hot ICM, but there could be some extended emission near the AGN.

§.§ PKS 1839-48

This FRI is another example of a cluster-embedded LERG <cit.>. Although its environment is not as dense as that of Hydra A or Hercules A, there is emission from the ICM in the Chandra image (Fig. <ref>, see also Fig. <ref>), and the radio lobes are clearly deflected by the interaction with the ICM, showing a `wide-angle tail' morphology.

§.§ PKS 1934-63

This source has a compact radio structure <cit.>, which is not resolved by Chandra (Fig. <ref>). It is optically very blue <cit.>, as well as being part of an interacting galaxy pair <cit.>. It also shows evidence for infalling gas <cit.>. The Chandra image shows no signs of extended emission, only the compact source that coincides with the radio core.

§.§ PKS 1949+02 (3C 403)

PKS 1949+02 is a NLRG with an X-shaped radio morphology, which has been studied in detail <cit.>. The Chandra data were analysed in detail by <cit.>. They found the image (Fig. <ref>) to show some enhancement that could correspond to a dense medium, and two features to the E of the core (a hotspot and a knot) spatially coincident with the radio emission. There is also a bridge between the two features, which might indicate emission from the jet, although it might also be hot gas. Some emission can also be observed close to the W radio hotspot, which is not detected in the X-rays.

§.§ PKS 1954-55

PKS 1954-55 is another FRI LERG located at the centre of a rich cluster <cit.>, whose hot gas emission is clearly visible in the X-rays (Fig. <ref>, see also Fig. <ref>). The Chandra image does not show clearly whether there are cavities associated with the lobes.

§.§ PKS 2135-14

The host of PKS 2135-14 has a close disk galaxy companion <cit.> and a disturbed morphology. The Chandra image (Fig. <ref>) shows some extended emission around the nucleus, but given the brightness of this QSO (evidenced by the bright readout streak) it is difficult to tell whether that emission comes from the PSF or from a real ICM.

§.§ PKS 2211-17 (3C 444)

PKS 2211-17 (Fig. <ref>) is another cluster-embedded LERG <cit.>. It is classified as an FRII, but its morphology is almost intermediate between the two FR classes. We detect a very dense ICM with clear cavities corresponding to the radio lobes, which are driving a shock <cit.>. We used new 1.5 GHz JVLA radio data, processed by V. Mahatma as part of an on-going project, to generate the radio contours for Fig. <ref>.

§.§ PKS 2221-02 (3C 445)

This object is a relatively well-known `double-double' BLRG <cit.>. It seems to be interacting with a close companion <cit.>. The radio hotspots are detected by Chandra (Fig. <ref>). The Northern one falls outside the S3 chip and is not clearly detected, appearing at the 2.4σ level <cit.>, perhaps in part due to the slightly reduced sensitivity there. There seems to be some enhanced emission around the nucleus as well. Note that, although we carried out all the analysis with the 8.2 GHz radio map of <cit.>, we used archival 4.9 GHz VLA radio data to generate the contours for Fig. <ref>, in order to show the large-scale radio lobes.

§.§ PKS 2356-61

The host of PKS 2356-61 shows signs of a past merger <cit.>. It is a very powerful radio source with large hotspots and bright tails <cit.>; the S hotspot is detected at a 6σ level in our Chandra image (Fig. <ref>, see also Table <ref>).
Although there is some emission in the area around the N hotspot, we do not detect it. There is also X-ray inverse-Compton emission inside the lobes (Table <ref>), and emission around the source and at the base of the lobes which could be related to a hot ICM.

§ HOTSPOTS

We detected X-ray emission coincident with at least one of the radio hotspots and jet knots in 12 out of our 16 FRII sources (Table <ref>), with high significance (≥3σ) in 19 out of the 23 structures listed. It has long been understood that X-ray hotspots are very common in FRII galaxies <cit.>, but this is the first time that a systematic study has been carried out on a complete sample of sources. Our hotspot detection rate seems to be slightly higher than those reported in previous studies <cit.>, but this is difficult to quantify when comparing with heterogeneous samples. Fig. <ref> shows the details of the individual detections, and it is interesting to note that in most sources there is a clear misalignment between the location of the X-ray and radio emission in at least one of the structures (knots or hotspots), on physical scales of 4–10 kpc. As mentioned in Section <ref>, this misalignment is rather common, and it hints at complexities in the local environment or the underlying magnetic field <cit.>.

In Table <ref> we also tabulate the ratio between the monochromatic 1-keV X-ray flux density and the radio flux density, hereafter the X-ray/radio flux density ratio. This quantity gives a crude characterisation of the emission mechanism, with large values being more consistent with a synchrotron origin for the X-rays. We used fairly conservative regions for all the structures, allowing them to match the sizes and positions of the hotspots in the individual radio maps, and adjusting them when the X-ray emission was clearly offset from the radio. We also used simple integrated fluxes, rather than background-subtracted Gaussian profile fits, as was the case for the works of <cit.> and <cit.>. As such, our X-ray/radio flux ratios are probably slightly smaller than those presented in the other works listed. To take the radio flux measurements we used a python plugin[<http://www.extragalactic.info/ mjh/radio-flux.html>] on the clean radio maps.

The brightest newly detected X-ray hotspots in our sample are the southern hotspot and knot of PKS 1733-56 and the S hotspot of PKS 2356-61. To test whether these two hotspots are synchrotron or inverse-Compton (synchrotron self-Compton) in origin, we used the measured 1-keV flux density and the radio flux density of the corresponding hotspot to carry out inverse-Compton calculations using the code of <cit.>. As the radio maps we have are all of low resolution, we estimate the hotspot sizes for these two objects from the fact that they appear unresolved or marginally resolved in the Chandra data, and assign all the measured radio flux density from Gaussian fitting to a spherical region of radius 1 arcsec. We use an electron energy spectrum with γ_min = 1000 and γ_max = 10^5, with an energy index p=2 at low energies breaking to p=3 at γ = 4000 – this reproduces the observed synchrotron break seen in other bright hotspots. The synchrotron spectrum is then computed between 10^4 and 10^12 Hz. The equipartition-field inverse-Compton predictions (including both SSC and inverse-Compton scattering of the CMB) are 1.5–3 orders of magnitude below the X-ray emission observed, with the closest agreement being for PKS 2356-61.
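The scaling behind these numbers is straightforward: for a single electron population scattering in the Thomson regime, the ratio of inverse-Compton to synchrotron luminosity is simply the ratio of the photon to the magnetic field energy density, so the predicted inverse-Compton emission rises steeply as the assumed field strength falls below equipartition. A back-of-the-envelope sketch in Python (illustrative only: the field strength and redshift below are placeholder values, not fitted parameters for any 2Jy source, and the full prediction, including SSC, requires a proper synchrotron/inverse-Compton code such as the one used above):

    import numpy as np

    A_RAD = 7.566e-15   # radiation constant, erg cm^-3 K^-4
    T_CMB = 2.725       # CMB temperature today, K

    def ic_cmb_to_sync_ratio(b_gauss, z):
        # L_IC-CMB / L_sync for one electron population: both loss channels
        # share the same Lorentz-factor weighting, so the luminosity ratio
        # reduces to U_CMB / U_B.
        u_cmb = A_RAD * T_CMB**4 * (1.0 + z)**4   # erg cm^-3
        u_b = b_gauss**2 / (8.0 * np.pi)          # erg cm^-3
        return u_cmb / u_b

    # Placeholder values: a ~100 microgauss hotspot field at z ~ 0.1
    print(ic_cmb_to_sync_ratio(1e-4, 0.1))   # ~1.5e-3: IC-CMB falls well short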
For this source, a field strength a factor 5 below equipartition could allow us to explain the observed X-rays as SSC emission, but this is based on the probably unrealistic assignment of 1.6 Jy of 1.4-GHz radio flux to this compact feature, and is extreme compared to other sources where SSC is the accepted explanation <cit.>. For both this source and PKS 1733-56, where the departure from equipartition would have to be even larger, we prefer a synchrotron model for the observed X-rays. Synchrotron models have also been applied successfully to explain the X-ray emission from the hotspots of other sources, e.g. Pictor A <cit.>, 3C 445 <cit.>, and 4C74.26 <cit.>, although the interpretation is more complicated in the last of these. Even considering the uncertainties, the X-ray/radio flux density ratios for the other sources, for which the lower statistics did not allow us to fit the spectra directly, are of the same order of magnitude as those of PKS 1733-56 and PKS 2356-61, if not larger, suggesting that an inverse-Compton emission mechanism is also unlikely in those sources. More detailed analysis would require high-resolution, multi-frequency images of the radio hotspots, which are not in general available.

§ JETS

We only detect X-ray jets clearly in two of our sources. The jet of PKS 0034-01 (3C 15) is well known, and it has been studied in detail by <cit.>. Our images (Fig. <ref>) also show evidence of a jet in PKS 1648+05 (3C 348, Hercules A), extending eastwards from the nucleus, with no evidence of a counter-jet in the opposite direction. It is very possible that its existence has been noted in the past, as this source has been observed multiple times, but due to the extremely dense and complex environment through which the jet propagates, it may not have been possible to analyse it in detail. We also observe some enhanced emission in PKS 1949+02 (3C 403), eastward of the core, which could hint at the presence of a jet, but it might arise from other mechanisms, as already pointed out by <cit.>. Although it does not show clearly in our images, PKS 0521-36 also has an X-ray jet, which has been studied in detail by <cit.>.

Our results are consistent with previous studies in terms of the number of detections of radio jets in the X-rays <cit.>. Of the 26 radio-loud AGN in our sample, 7 possess well-defined radio jets visible in our 1.4–8 GHz radio maps. The X-ray jet detection fraction is therefore around 50 per cent, with 3 definite non-detections. The structure in the N lobe of PKS 0620-52 is unresolved in the radio maps, so it is not clear whether this source has an FRI radio jet, and although there is some excess X-ray emission in this area, it is probably linked to the dense, hot ICM. Hydra A (PKS 0915-11) has been extensively studied with Chandra, but the strong X-ray ICM emission and the small angular scale of the radio jet probably preclude its detection in the X-rays. PKS 2135-14 also shows some jet-like radio emission extending east of the core, but given the larger distance and comparatively shorter exposure time (see Table <ref>), an X-ray counterpart to this structure may be too faint to be visible in our images.

§ LOBES

We studied the lobes of the FRII sources in our sample, to find out how they compare to the results of <cit.> in terms of their lobe pressures and equipartition (see Table <ref>). This analysis was carried out as part of a wider FRII lobe study <cit.>.
Full details of the method, which follows that of <cit.>, are presented in that work, but are also summarised here. We used the radio maps to measure the radio flux densities (with the same python plugin) and to determine the shapes and extent of the lobes in the X-ray images, excluding the hotspots and nuclei. We omitted any structures that were split by the edge of the CCD (the N lobe of PKS 0349-27, the W lobe of PKS 1559+02, and the N lobe of PKS 2221-02); the E lobe of PKS 0945+07, which was contaminated by a readout streak; both lobes of PKS 0442-28 and PKS 1733-56, for which the only available radio maps did not provide enough information to determine the shape and extent of the emission; both lobes of PKS 1949+02, which has a complex, X-shaped morphology and no apparent inverse-Compton emission; and both lobes of PKS 2211-17 (3C 444), which is in a very dense and disturbed environment.

We were able to detect X-ray emission inside the lobes of eight of our sources, and to derive constraints for the rest. We assumed that the bulk of the emission originated from inverse-Compton processes, as the spectral profiles of the sources with good statistics also indicated: all the spectra were well fitted with power-law models (corrected for Galactic absorption), and none were improved by the addition of a thermal component, which would arise if ICM shocks were present <cit.>. For sources with low counts, we followed the results of <cit.> as a guideline. We then fed these results, in conjunction with the radio fluxes and lobe volumes, into the synch code developed by <cit.>. synch uses the radio spectrum and a given magnetic field to model the underlying relativistic electron population and its interaction with photons from the cosmic microwave background (CMB) and from the synchrotron emission itself. The results for an equipartition magnetic field, and for one that reproduces the observed (inverse-Compton) X-ray emission in the lobes, are shown in Table <ref>.

We found that all the observed magnetic fields were lower than those predicted by equipartition, although never by more than one order of magnitude. The difference in B values suggests that the lobes of our FRII sources contain electron energy densities additional to the minimum energy condition, but the relatively small deviation from equipartition also suggests that our assumptions about the energetically dominant particle population in the lobes (electrons, rather than protons) are correct, all of which is consistent with the earlier results of <cit.> and <cit.>.

§ ENVIRONMENTS

<cit.> found that the environments of radio-loud AGN differ depending on their accretion mode. They found that for LERGs, most of which are FRIs, there is a correlation between radio luminosity and ICM richness, while no correlation was apparent for HERGs (high-excitation radio galaxies, the radiatively efficient sources), which also seemed to avoid the richest environments. All seven of our FRI sources are LERGs, in the upper range of the FRI radio power distribution. They all show clear evidence of large-scale extended X-ray emission around the host (with PKS 0625-35 having the poorest environment among them, see Fig. <ref>), and several of them inhabit well-known clusters. We would expect lower-luminosity LERGs to be found in poorer environments, but they are not represented in the 2Jy sample.

Of our 16 FRII sources, three are classified as LERGs: PKS 0034-01 (3C 15), PKS 0043-42, and PKS 2211-17 (3C 444).
The first two sources, however, have X-ray spectra that are somewhat atypical for LERGs, and PKS 0043-42, in particular, shows signs of radiatively efficient accretion (<cit.>; <cit.>). PKS 2211-17 is a bona fide LERG, and it inhabits a well-known cluster. PKS 0043-42 shows signs of extended X-ray emission, which <cit.> found to be consistent with a weak cluster or group environment. There are no signs of extended emission around PKS 0034-01, and <cit.> found only a weak environment around it.

Of the 13 HERG FRIIs, only three (PKS 0349-27, PKS 1733-56, and PKS 1949+02) show some traces of extended X-ray emission <cit.>. However, several of the HERG FRII sources present some smaller-scale, low surface brightness extended emission around the nucleus or the edges of the lobes, and in the optical, far from being isolated, many of them have dense environments, close companions, or show signs of recent interaction <cit.>. It is possible that we are not detecting their extended ICM emission in the X-rays because the HERGs in our sample are found, on average, at higher z than the LERGs.

An extended, quantitative analysis of the 2Jy environments has been presented by <cit.>, as part of their broader study of the properties of radio galaxies. <cit.> also present a detailed analysis of the pressure balance between the FRII sources in our sample and their environments, in the context of a larger FRII sample. Here we just note that the Mach numbers for the expansion of the lobes of the 2Jy sources, obtained by considering the Rankine-Hugoniot conditions at the lobe tip, are found in their analysis to be in the range 1 to 3, with an average Mach number of ∼2.1. This is similar to the Mach numbers of comparable systems <cit.>, but lower than those we obtained for lower-power systems in less dense environments <cit.>, which is expected.

§ CONCLUSIONS

In agreement with previous results, we find that X-ray hotspots and jet knots are fairly ubiquitous in FRII galaxies, with at least one of them being detected in 12 out of our 16 sources, and with high significance (≥3σ) in all but four of the 23 structures detected (listed in Table <ref>). We also observe a clear misalignment between the radio and X-ray emission in several sources, on physical scales of 4–10 kpc.

The hotspots whose spectra we have been able to fit show, invariably, synchrotron emission spectra. Our calculations for PKS 1733-56 and PKS 2356-61 show that inverse-Compton emission is unlikely.

We only observed jets unequivocally in two of our sources, PKS 0034-01 (3C 15) and PKS 1648+05 (3C 348, Hercules A).

We found that the lobes of all the FRII sources in our sample have magnetic fields that are lower than expected from equipartition conditions, though never by more than an order of magnitude. These results are consistent with those of previous studies of similar sources.

We also confirmed the tendency of luminous LERGs (mostly FRIs) to inhabit rather dense environments, consistent with the results of <cit.> and <cit.>, while our HERGs (mostly FRIIs) seem to inhabit slightly sparser areas.

§.§ Acknowledgements

We thank the anonymous referee for their constructive comments, which have improved the paper. We thank J. P. Leahy for providing the radio map for Hercules A (PKS 1648+05). BM acknowledges support from the UK Space Agency. MJH acknowledges support from the Science & Technology Facilities Council (STFC; grant number ST/M001008/1).
RM gratefully acknowledges support from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013)/ERC Advanced Grant RADIOLIFE-320745. This work has made use of new and archival data from Chandra and of software provided by the Chandra X-ray Center (CXC) in the application package CIAO. This work also makes use of data from the Australia Telescope Compact Array (ATCA), which is part of the Australia Telescope National Facility, funded by the Australian Government for operation as a National Facility managed by CSIRO, as well as data from the Karl G. Jansky Very Large Array (VLA), part of the National Radio Astronomy Observatory, a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
http://arxiv.org/abs/1705.09578v1
{ "authors": [ "B. Mingo", "M. J. Hardcastle", "J. Ineson", "V. Mahatma", "J. H. Croston", "D. Dicken", "D. A. Evans", "R. Morganti", "C. Tadhunter" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170526133211", "title": "An X-ray survey of the 2Jy sample. II: X-ray emission from extended structures" }
LiDAR-Camera Calibration using 3D-3D Point correspondences

Ankit Dhall^1, Kunal Chelani^2, Vishnu Radhakrishnan^3, K. Madhava Krishna^4

^1Vellore Institute of Technology, Chennai
^2Birla Institute of Technology and Science, Hyderabad
^3Veermata Jijabai Technological Institute, Mumbai
^4International Institute of Information Technology, Hyderabad

31 October 2017
================================================================================

Work was done during an internship at the Robotics Research Center at IIIT-H.

With the advent of autonomous vehicles, LiDAR and cameras have become an indispensable combination of sensors. They both provide rich and complementary data which can be used by various algorithms and machine learning to sense and make vital inferences about the surroundings. We propose a novel pipeline and experimental setup to find an accurate rigid-body transformation for extrinsically calibrating a LiDAR and a camera. The pipeline uses 3D-3D point correspondences in the LiDAR and camera frames and gives a closed-form solution. We further show the accuracy of the estimate by fusing point clouds from two stereo cameras, which align perfectly with the rotation and translation estimated by our method, confirming the accuracy of our method's estimates both mathematically and visually. Taking our idea of extrinsic LiDAR-camera calibration forward, we demonstrate how two cameras with no overlapping field-of-view can also be calibrated extrinsically using 3D point correspondences. The code has been made available as open-source software in the form of a ROS package.

§ INTRODUCTION

Robotic platforms, both autonomous and remote controlled, use multiple sensors such as IMUs, cameras and range sensors. Each sensor provides data in a complementary modality. For instance, cameras provide rich color and feature information which can be used by state-of-the-art algorithms to detect objects of interest (pedestrians, cars, trees, etc.). Range sensors have gained a lot of popularity recently despite being more expensive and containing moving parts. They provide rich structural information, and if correspondence can be drawn between the camera and the LiDAR, then when a pedestrian is detected in an image, its exact 3D location can be estimated and used by an autonomous car to avoid obstacles and prevent accidents.

Multiple sensors are employed to provide redundant information, which reduces the chance of erroneous measurements. In the above cases, it is essential to obtain data from the various sensors with respect to a single frame of reference so that the data can be fused and redundancy can be leveraged. Marker-based <cit.> as well as automatic calibration for LiDAR and cameras has been proposed, but the methods and experiments discussed in these works use high-density, more expensive LiDARs and do not extend very well to lower-density LiDARs such as the VLP-16.

We propose a very accurate and repeatable method to estimate the extrinsic calibration parameters, in the form of 6 degrees-of-freedom, between a camera and a LiDAR.

§ SENSORS AND GENERAL SETUP

The method we propose makes use of sensor data from a LiDAR and a camera.
The intrinsic parameters of the camera should be known before starting the LiDAR-camera calibration process. Unlike a LiDAR such as the Velodyne VLP-16, which has a 360-degree view of the scene, the camera can only sense the environment directly in front of its lens, and it always faces the markers directly. Each time data was collected, the LiDAR and camera were kept at an arbitrary distance in 3D space. The transformation between them was measured manually. Although the tape measurement is crude, it serves as a sanity check for the values obtained using the various algorithms. Measuring translation is easier than measuring rotation. When the rotations were minimal, we assumed them to be zero; in other instances, when there was considerable rotation in the orientation of the sensors, we measured distances and estimated the angles roughly using trigonometry.

§ USING 2D-3D CORRESPONDENCES

Before working on our method that uses 3D-3D point correspondences, we tried methods that involved 2D-3D correspondences. We designed our own experimental setup to help calibrate a LiDAR and camera, first, using 2D-3D methods. The setup involves markers of a specific type: hollow rectangular cardboards. Even normal cardboards work fine; however, as we shall see in the upcoming discussion, they provide fewer correspondences than a hollowed-out rectangular cardboard.

This method involves finding the 6-DoF transformation between the camera and the LiDAR by means of matching 2D-3D point correspondences. 2D correspondences can be easily obtained by manually marking feature points in an image with an accuracy of 3-4 pixels. Obtaining the corresponding 3D points is not that straightforward. For one, a LiDAR does not give a high-density point cloud, and with increasing distance (away from the LiDAR center) the point cloud becomes more and more sparse.

A planar cardboard can provide 4 corner points, i.e. 4 point correspondences. In 3D these points are obtained by line-fitting followed by line-intersection, and their 2D correspondences can be obtained by marking pixel co-ordinates. If a hollowed-out rectangular cardboard is used, it provides 8 3D-2D point correspondences: 4 corners on the outer rectangle and 4 corners on the inner rectangle. This doubles the correspondences, allowing for more data points with a smaller number of boards. Such a setup provides enough data to run a RanSaC version of the PnP algorithms and also helps reduce noisy data in general.

We use rectangular (planar cardboard) markers. If, in the experimental setup, the markers are kept with one of their sides parallel to the ground, then due to the horizontal nature of the LiDAR's scan lines one can obtain the vertical edges, but not necessarily the horizontal ones. To overcome this, we tilt the board so that there is an angle of approximately 45 degrees between one of the edges and the ground plane. With such a setup we always obtain points on all four edges of the board. RanSaC is used to fit lines on the points from the LiDAR.

The most prominent feature on the marker is the corner. It can be marked with relative ease on the image, and since we have quite accurate line equations for the four edges, their intersections can be calculated in 3D. Again, these lines may not actually intersect, but they come very close. We approximate the corner as the midpoint of the shortest line segment between the two lines. As a check that this point is indeed a very close approximation to the actual corner, we calculated the length of the shortest line segment.
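This computation is small enough to sketch directly; a hypothetical NumPy version (the names here are illustrative, not those of our ROS package; each edge line is given by a point p and a direction d from the RanSaC fit):

    import numpy as np

    def corner_from_edges(p1, d1, p2, d2):
        """Midpoint of the shortest segment between two 3D lines.

        Each line is parameterized as p + t*d. Returns the approximate
        corner and the segment length, which should be close to zero for
        nearly intersecting board edges (our sanity check).
        """
        d1 = d1 / np.linalg.norm(d1)
        d2 = d2 / np.linalg.norm(d2)
        w = p1 - p2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w, d2 @ w
        denom = a * c - b * b          # ~0 only if the edges are parallel
        t1 = (b * e - c * d) / denom
        t2 = (a * e - b * d) / denom
        q1 = p1 + t1 * d1              # closest point on line 1
        q2 = p2 + t2 * d2              # closest point on line 2
        return 0.5 * (q1 + q2), np.linalg.norm(q1 - q2)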
Also, since we know the dimensions of the cardboard marker, the lengths of opposite sides should be very close to each other and also to the actual length measured by tape. We consistently observed that the distance between the two line segments was of the order of 10^-4 meters and that the error in the edge lengths was about 1 centimeter on average. Collecting data over multiple experiments, we observed that the edges come extremely close and that the corner and the intersection are off by 0.68 mm on average. An average absolute deviation of 1 cm is observed between the expected and estimated edge lengths of the cardboard markers. From the above two observations one can conclude that the intersections are indeed a very accurate approximation of the corner in 3D.

[ u; v; 1 ] = [ f_x γ c_x; 0 f_y c_y; 0 0 1 ] * [ r_11 r_12 r_13 t_1; r_21 r_22 r_23 t_2; r_31 r_32 r_33 t_3 ] * [ x; y; z; 1 ]

Using hollowed-out markers, we obtained 20 corner points: 2 hollow rectangular markers (8+8 points) and one solid rectangular marker (4 points), increasing the number of point correspondences from our initial experiments.

Perspective-n-Point (PnP) finds the rigid-body transformation between a set of 2D-3D correspondences. Equation <ref> shows how the 3D points are projected after applying the [R|t] estimated by PnP. Equation <ref> represents the general cost function for solving such a problem:

min_R ∈ SO(3), t ∈ ℝ^3 ||P(RX+t)-x||^2

where P is the projection operation from 3D to 2D on the image plane, X represents points in 3D and x represents points in 2D.

To begin with, we started with PnP and E-PnP <cit.>. The algorithms seemed to minimize the error, and with manual filtering of the points (by visualizing the outliers; refer to Table <ref>) we were able to lower the back-projection error to 1.88 pixels on average. However, the estimated [R|t] was not close to the values measured by tape between the camera and the LiDAR.

In a previous experiment, when the LiDAR and camera were quite close (12 cm apart), we ran E-PnP with 12 points and did not obtain the expected values. We observed an error of about 10 cm; if the expected value is of the same order as the error, one can expect noisy estimates. In subsequent experiments the camera and LiDAR were kept even farther apart so that the influence of any error would be mitigated.

While examining the data, we found that there were some noisy data points that were contributing to a large back-projection error. We therefore ran a modified E-PnP with a RanSaC algorithm on top. This should in theory ensure that noisy data is not considered while calculating the rigid-body transformation between the camera and the LiDAR. RanSaC selects a random subset of the data, fits a model, finds the data points that are inliers to the fitted model (given a threshold ϵ) and then fits the model on the inliers. This is repeated multiple times to try to exhaust a large number of possible configurations. In this case, a subset of the 2D-3D point correspondences is used to find [R|t] using E-PnP, inliers are found, and a new [R|t] is estimated.

The back-projection error was less than a pixel, but the [R|t] we obtained was still far from what we were expecting from the manual tape measurements. This could mean that minimizing the back-projection error may not be a holistic measure in our scenario, and that we may have to use a better metric, one that relates to [R|t] in a more explicit manner. In the data we collected we also introduced slight rotations; the expected rotations were calculated using trigonometry.
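For reference, this 2D-3D stage can be reproduced with OpenCV's RanSaC-based PnP solver. A minimal sketch (a hypothetical helper, not the exact code used in our experiments; K is the intrinsic matrix and the images are assumed undistorted):

    import cv2
    import numpy as np

    def lidar_to_camera_pnp(pts3d, pts2d, K, reproj_err_px=3.0):
        # pts3d: (N, 3) corner points in the LiDAR frame, N >= 4
        # pts2d: (N, 2) corresponding pixel coordinates
        dist = np.zeros(5)  # assuming undistorted (rectified) images
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            np.ascontiguousarray(pts3d, dtype=np.float64),
            np.ascontiguousarray(pts2d, dtype=np.float64),
            K, dist,
            reprojectionError=reproj_err_px,   # inlier threshold in pixels
            flags=cv2.SOLVEPNP_EPNP)           # E-PnP as the inner solver
        if not ok:
            raise RuntimeError("PnP failed to find a pose")
        R, _ = cv2.Rodrigues(rvec)             # rotation vector -> 3x3 matrix
        return R, tvec.ravel(), inliers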
§ USING 3D-3D CORRESPONDENCES

The 2D-3D correspondence method did not seem to work very well in our experimental setup. Error could have crept in due to the not-so-accurate marking of 2D points (in pixels, by looking at the image) or due to noisy data points used to perform PnP. The back-projection error was minimized, but the transformation values did not agree with the values measured by tape.

Setups used in real time require the extrinsic calibration to be quite accurate and to produce minimal error. Fusion is one way to visualize the accuracy of the extrinsic calibration parameters. Bad calibration can cause the fused data to contain hallucinations in the form of duplicated objects, because the individual point clouds do not align. One application requiring real-time fusion from multiple sensors is autonomous driving, where erroneous fused data can be fatal for the car as well as for nearby cars, pedestrians and property.

This part of the work involves using augmented-reality (AR) tags and the LiDAR point cloud to find the extrinsic calibration parameters. Multiple versions of AR tags have been released by the open-source community <cit.> <cit.>. The method proposed here uses the ArUco tags <cit.>. To find the transformation between the camera and the Velodyne, we need two sets of 3D points: one in the camera frame and another in the Velodyne frame. Once these point correspondences are found, one can solve for the [R|t] between the two sensors.

§.§ Experimental Setup

Most types of calibration employ markers whose dimensions, shape and specific features depend on the application and the type of calibration being performed. Checkerboards are the most common type of marker, generally used to estimate the intrinsic calibration parameters of a camera. <cit.> uses special markers with circular cut-outs for calibrating a LiDAR and a camera. We have devised a pipeline which uses cost-effective markers that can be constructed easily with just a planar surface, such as a cardboard, and an A4 sheet of paper. The design of the marker was driven by the requirement that it provide features/correspondences that are easy to detect both in the camera frame and in the LiDAR frame.

§.§.§ Shape and Size

The rectangular cardboard can be of any arbitrary size. The experiments we performed used a Velodyne VLP-16 <cit.>, which has only 16 rings in a single scan, a handful compared to higher-density LiDARs (32 and 64 rings per scan). For a low-density LiDAR, if the dimensions of the board are small and the LiDAR is kept farther than a certain distance, the number of rings hitting the board becomes low (2 to 3 rings, resulting in only 2 to 3 points on an edge), making it very difficult to fit lines to the edges (using RanSaC). The boards used in the experiments had length/breadth ranging between 45.0 and 55.0 centimeters. Keeping the LiDAR about 2.0 meters away from boards with these dimensions, enough points were registered on the board edges to fit lines, calculate intersections and run the whole pipeline smoothly. Before running the pipeline, it is recommended to ensure that there is a considerable number of points on the edges of the boards in the point cloud. Any planar surface can be used: cardboard, wood or acrylic sheets. Cardboards are light-weight and can be hung easily.

§.§.§ 3D Point correspondences in the Camera Frame

The ArUco markers are special encoded patterns that facilitate the detection and error correction of the tags themselves.
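A minimal sketch of this detection-and-pose step with OpenCV's aruco module (this assumes the classic cv2.aruco API from opencv-contrib – newer OpenCV versions wrap detection in cv2.aruco.ArucoDetector – and, for illustration only, that the tag sits at the board centre; marker_len is the printed tag's side length in meters):

    import cv2
    import numpy as np

    def board_corners_in_camera(img, K, dist, marker_len, half_w, half_h):
        # With the tag at the board centre, the board corners lie at
        # (+/-half_w, +/-half_h, 0) in the marker's own frame.
        aruco = cv2.aruco
        dictionary = aruco.getPredefinedDictionary(aruco.DICT_6X6_250)
        corners, ids, _ = aruco.detectMarkers(img, dictionary)
        if ids is None:
            raise RuntimeError("no ArUco tag detected")
        # [R|t] from the marker's frame to the camera frame
        res = aruco.estimatePoseSingleMarkers(corners, marker_len, K, dist)
        rvecs, tvecs = res[0], res[1]
        R, _ = cv2.Rodrigues(rvecs[0])
        t = tvecs[0].reshape(3)
        board = np.array([[-half_w, -half_h, 0.0],
                          [ half_w, -half_h, 0.0],
                          [ half_w,  half_h, 0.0],
                          [-half_w,  half_h, 0.0]])
        return board @ R.T + t   # the 4 board corners in the camera frame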
More details about how they work can be found in <cit.>. The tags are stuck on a planar surface such as a rectangular cardboard. If the dimensions of the cardboard (on which the ArUco tags are stuck) and the location of the ArUco marker on it are known, the locations of the corners (relative to the center of the ArUco marker) can be easily calculated. The tags provide the [R|t] between the camera and the center of the marker. This transform can be used to convert corner points from the marker's frame of reference (the cardboard plane, with the origin at the center of the ArUco marker) to the camera's frame of reference, which gives us the corners as 3D points in the camera frame. We used a ZED stereo camera <cit.>.

§.§.§ 3D Point correspondences in the LiDAR Frame

Corner points in the LiDAR frame can be found by detecting the edges of the cardboard and solving for their intersections, in a similar fashion to that described in Section <ref>. The transformations obtained using the ArUco markers, especially the translations, were quite accurate and close to the values measured by tape between the camera and the center of each marker.

Once the two sets of point correspondences are obtained, the [R|t] between their co-ordinate frames can be estimated using the Iterative Closest Point (ICP) algorithm <cit.>. ICP tries to minimize the 3D alignment error given by equation <ref>:

min_R ∈ SO(3), t ∈ ℝ^3 ||(RP+t)-Q||^2

The general ICP algorithm considers the closest points in the two point clouds as correspondences (there are other variants of choosing the closest points), following which it finds the [R|t] that best aligns the two point clouds by minimizing the euclidean distance between corresponding points. Finding the right correspondences can be tricky and may lead to an undesired solution. Since in our proposed method the point correspondences are known – the corners of the markers in this case – a closed-form solution exists. The Kabsch algorithm <cit.> <cit.> finds the rotation between two point clouds, and the translation can be found once the co-ordinate frames are aligned.

We follow the same arguments as in <cit.>. First, we assume that the rotation is known and solve for the translation between the two point clouds, P and Q:

F(t) = ∑_i=1^n ||(RP_i+t)-Q_i||^2

∂ F(t)/∂ t = 2 ∑_i=1^n ((RP_i+t)-Q_i) = 0

∂ F(t)/∂ t = 2 R ∑_i=1^n P_i + 2 t ∑_i=1^n 1 - 2 ∑_i=1^n Q_i = 0

t = 1/n ∑_i=1^n Q_i - R (1/n ∑_i=1^n P_i)

t = Q̅ - R P̅

Substituting the result of equation <ref> in objective function <ref>:

R = argmin_R ∈ SO(3) ∑_i=1^n ||R(P_i-P̅) - (Q_i-Q̅)||^2

Let X_i = P_i-P̅, X'_i = RX_i and Y_i = Q_i-Q̅. Then the objective becomes

∑_i=1^n ||X'_i - Y_i||^2 = Tr((X'-Y)^T(X'-Y))

Using properties of the trace of a matrix, the above equation can be simplified as

Tr((X'-Y)^T(X'-Y)) = Tr(X'^TX') + Tr(Y^TY) - 2Tr(Y^TX')

Since R is an orthonormal matrix, it preserves lengths, i.e. |X'_i|^2 = |X_i|^2, so

Tr((X'-Y)^T(X'-Y)) = ∑_i=1^n ( |X_i|^2 + |Y_i|^2 ) - 2Tr(Y^TX')

Re-writing the objective function by eliminating the terms that do not involve R,

R = argmax_R ∈ SO(3) Tr(Y^TX')

Substituting the value of X' and using the cyclic property of the trace,

Tr(Y^TX') = Tr(Y^TRX) = Tr(XY^TR)

Using the SVD XY^T = UDV^T,

Tr(XY^TR) = Tr(UDV^TR) = Tr(DV^TRU) = ∑_i=1^3 d_i v_i^TRu_i

Let M = V^TRU. Then

Tr(Y^TX') = ∑_i=1^3 d_i M_ii ≤ ∑_i=1^3 d_i

M is a product of orthonormal matrices and is therefore an orthonormal matrix as well, with det(M) = ±1. The length of each column vector of M is equal to one and each component of a column vector is less than or equal to one.
Now, to maximize the above expression, let each M_ii = 1, forcing the remaining components of each column vector to zero to satisfy the unit-vector constraint. Thus M = I, the identity matrix:

M = I ⟹ V^TRU = I ⟹ R = VU^T

To ensure that R is a proper rotation matrix, i.e. R ∈ SO(3), we need to make sure that det(R)=+1. If the R obtained from equation <ref> has det(R)=-1, we need to find the R for which Tr(Y^TX') takes the second largest value possible:

Tr(Y^TX') = d_1 M_11 + d_2 M_22 + d_3 M_33, where d_1 ≥ d_2 ≥ d_3 and |M_ii| ≤ 1

The second largest value of this expression occurs when M_11 = M_22 = +1 and M_33 = -1. Taking the above into account,

R = VCU^T

where C is a correction matrix,

C = [ 1 0 0; 0 1 0; 0 0 sign(det(VU^T)) ]

§.§ Incorporating multiple scans

In our initial experiments, we observed that even in a closed room with the boards as stationary as they can be, the point cloud visualized in Rviz shows that the points from the LiDAR are, on the contrary, not stationary: there is a small positional shift between two instants. To reduce any noise that might creep in, we further propose to collect multiple samples of the rotation and translation (using the method discussed above). Rotations and translations estimated over multiple runs can be used to obtain a more accurate and less noisy rigid-body transformation from the LiDAR frame to the camera frame. Sensor data are collected over N iterations, keeping the positions of the LiDAR and camera fixed. From each of the N runs we estimate a rotation and a translation. We can average the N observed translation vectors,

t̅ = 1/N ∑_i=1^N tvec_i

where tvec_i ∈ ℝ^3 and t̅ is the average translation between the two sensors. Taking the average of rotation matrices is not as straightforward, so we transform them to quaternions, compute the average quaternion in ℝ^4 and then convert it back to a rotation matrix:

r = 1/N ∑_i=1^N rvec_i

r̅ = r/||r||

where rvec_i is the quaternion representation of the rotation matrix obtained in the i-th run and r̅ is the average rotation between the two sensors, represented as a unit quaternion. The results of averaging in three separate configurations of LiDAR and camera positions can be seen in figure <ref>.

Averaging multiple estimates while keeping the relative positions of the LiDAR and camera fixed helps reduce noise due to imperfect marker edges and slightly inaccurate LiDAR points. Also, if there is a modest amount of motion of the cardboards, we effectively observe many data points around the actual translation and rotation, and averaging these gives a better estimate of the rigid-body transformation between the LiDAR and camera.

§ FUSING POINT CLOUDS

Our main objective was to accurately calibrate multiple cameras that may not have an overlapping field-of-view. If a set of transformations can be estimated that bring all sensor data into a single frame of reference, data such as point clouds from stereo cameras can be fused. With a setup of multiple stereo cameras facing in different directions, one can obtain a point cloud with a 360-degree field-of-view by fusing the individual point clouds from each stereo camera. To do this, we introduced the LiDAR, which has a 360-degree field-of-view and provides very precise 3D point co-ordinates, and we use it to find the transformations between the cameras; each LiDAR-camera transformation is estimated with the closed-form alignment sketched below.
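The closed-form alignment derived in the previous section, together with the averaging step, fits in a few lines of NumPy. The sketch below is a hypothetical re-implementation for reference, not the C++ code of our ROS package (SciPy's Rotation is assumed only for the matrix/quaternion conversions):

    import numpy as np
    from scipy.spatial.transform import Rotation

    def kabsch(P, Q):
        # Rigid [R|t] mapping the (n, 3) points P onto their correspondences Q.
        P_bar, Q_bar = P.mean(axis=0), Q.mean(axis=0)
        X, Y = P - P_bar, Q - Q_bar
        U, D, Vt = np.linalg.svd(X.T @ Y)   # X Y^T = U D V^T, as in the text
        C = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ C @ U.T                  # R = V C U^T
        t = Q_bar - R @ P_bar               # t = Q_bar - R P_bar
        return R, t

    def average_transforms(Rs, ts):
        # Average N [R|t] estimates: mean translation plus the normalized
        # mean quaternion described above.
        t_bar = np.mean(ts, axis=0)
        q = Rotation.from_matrix(np.asarray(Rs)).as_quat()   # (N, 4), x-y-z-w
        q[q[:, 3] < 0] *= -1   # resolve the q/-q sign ambiguity; the
                               # estimates are assumed close together
        r = q.mean(axis=0)
        R_bar = Rotation.from_quat(r / np.linalg.norm(r)).as_matrix()
        return R_bar, t_bar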
Once these transformations are found, we can remove the LiDAR, effectively using it only for calibrating the cameras. It is to be noted that the method described in this document calibrates a monocular camera and a LiDAR; in the case of a stereo camera, we only calibrate the left camera and the LiDAR. Since the baseline and the stereo camera calibration parameters are already known, calibrating only one of the cameras (the left one in our case) is sufficient to fuse the point clouds.

If we can find a transformation between two cameras C_1 and C_2, we can easily extend the same procedure to obtain the transformations between an arbitrary number of cameras. Given two (stereo) cameras, the proposed pipeline finds, for each of them, the transformation that maps all points in the LiDAR frame to that camera's frame. We first run the algorithm with C_1 and the LiDAR, L, and obtain a 4 × 4 matrix,

T_LiDAR-to-C_1

We then run the algorithm with C_2 and the LiDAR, L, and obtain a 4 × 4 matrix,

T_LiDAR-to-C_2

Now, to obtain a transform that maps points in the C_2 frame to the C_1 frame, we chain the transforms T_LiDAR-to-C_1 and T_LiDAR-to-C_2:

T_C_2-to-C_1 = T_LiDAR-to-C_1 · T_LiDAR-to-C_2^-1 = T_LiDAR-to-C_1 · T_C_2-to-LiDAR

Equation <ref> gives the transform from C_2 to C_1, and if these are stereo cameras, we can obtain point clouds and fuse them using this transform. If the transform is very accurate, the two point clouds (from the two stereo cameras) will align properly. However, if there is a translation error, hallucinations of objects will be clearly visible when viewing the fused point cloud, and there will be two of everything. If there is an error in the rotation, the points in the two clouds will diverge more and more as the distance from the origin increases.

To verify the method in a more intuitive manner, lidar_camera_calibration was used to fuse point clouds obtained from two stereo cameras. We also provide visualizations of the fused point clouds.

§.§ Manual measurement vs. lidar_camera_calibration

First, we compare the calibration parameters obtained from our method against values meticulously measured by hand using a tape. The fused point clouds obtained when using the manual measurements versus when using the method proposed in this document are shown in the video. Notice the large translation error with the manual measurements, even though the two cameras are kept on a planar surface: hallucinations of the markers, cupboards and the carton box (in the background) can be seen as a result of the two point clouds not being aligned properly.

On the other hand, the rotation and translation estimated by our package fuse the two individual point clouds almost perfectly. There is a very small translation error (1-2 cm) and almost no rotation error. The fused point cloud is aligned so well that one might believe it is a single point cloud, when it actually consists of two clouds fused using the extrinsic transformation between their sources (the stereo cameras).

The resultant fused point clouds from both the manual and the lidar_camera_calibration methods can be seen at https://youtu.be/AbjRDtHLdz0.

§.§ Calibrating cameras kept at 80 degrees

We also wanted to test the limits of this method, and used it to calibrate cameras kept at about 80 degrees to each other, with almost no overlapping field-of-view. In principle, with a properly designed experimental setup, our method can calibrate cameras with zero overlapping field-of-view. However, to visualize the fusion, we needed some part of the scene to be common to both point clouds.
We chose a large checkerboard visible in both cameras' fields-of-view, since it shows how well the point clouds have aligned, and if the dimensions of the checkerboard squares are known, one can even estimate the translation errors. The translation error is small, about 3-4 cm. Also, the ground planes align properly at all distances, near and far from the camera, implying that the estimated rotations are correct. The resultant fused point clouds after extrinsic calibration of stereo cameras kept at approximately 80 degrees using our method can be seen at https://youtu.be/Om1SFPAZ5Lc. We believe that better intrinsic calibration of the cameras can help drive the error down to about 1 centimeter or even less.

§ RESULTS As a sanity check, a coarse translation was measured manually using a measuring tape. Rotations, however, were difficult to measure, and their coarse values are omitted. The tabulations below compare the results obtained using off-the-shelf ICP algorithms with those from an implementation of the Kabsch algorithm, which exploits the known correspondences. The Kabsch algorithm repeatedly gives values close to the measurements, and the root mean square error (RMSE) is also quite low. Datasets were collected in varying camera and LiDAR configurations, with assorted rotations and translations, to verify the repeatability and accuracy of the proposed method. Separate experiments were performed with a different camera (Point Gray Black Fly), which has a very large focal length compared to the ZED stereo camera; the results showed similar accuracy and confirmed the robustness of the proposed method.

§ CODE AND IMPLEMENTATION The code is written in C++ and is implemented as a ROS package. It can be found at http://wiki.ros.org/lidar_camera_calibration. A comprehensive readme for setting up and getting started with the package is available on the GitHub repo at https://github.com/ankitdhall/lidar_camera_calibration.

§ CONCLUSIONS We proposed a novel pipeline to perform accurate LiDAR-camera extrinsic calibration using 3D-3D point correspondences. We described an experimental setup to find correspondences in each sensor's frame: the camera and the LiDAR. The proposed pipeline uses tags that can be easily printed and stuck on planar surfaces such as cardboards or wooden planks. A point extraction pipeline was implemented to obtain the corner points of the cardboards from the point cloud recorded by the LiDAR. The two sets of point correspondences are used to solve for [R|t], which gives accurate and repeatable results with different cameras. As opposed to ICP, which relies on matching point correspondences, our method is able to estimate the transformation optimally from relatively few points with known correspondences. The method's consistency is further improved by averaging over multiple results. We also showed how this method can be used to extrinsically calibrate two or more cameras, even when they do not have any overlapping field-of-view. We also demonstrated the quality of the calibration visually, by fusing point clouds and almost perfectly aligning them. An open-source implementation is available in the form of a ROS <cit.> package.

[ROS] Robot Operating System (ROS). http://www.ros.org/
[but_velodyne] Martin Velas, Michal Spanel, Zdenek Materna, Adam Herout. Calibration of RGB Camera With Velodyne LiDAR.
https://www.github.com/robofit/but_velodyne/
[velodyne] Velodyne LiDAR. https://velodynelidar.com/
[zed] ZED Stereo Camera. https://www.stereolabs.com/
[aruco] ArUco Markers. https://github.com/SmartRoboticSystems/aruco_mapping and http://docs.opencv.org/3.1.0/d5/dae/tutorial_aruco_detection.html
[epnp] V. Lepetit, F. Moreno-Noguer and P. Fua. EPnP: An Accurate O(n) Solution to the PnP Problem. International Journal of Computer Vision, 2009.
[apriltag] April Tags. https://april.eecs.umich.edu/wiki/AprilTags
[icp] Iterative Closest Point. https://en.wikipedia.org/wiki/Iterative_closest_point
[kabsch] Olga Sorkine-Hornung and Michael Rabinovich. Least-Squares Rigid Motion Using SVD. https://igl.ethz.ch/projects/ARAP/svd_rot.pdf
[molecular_distance_measures] Lydia E. Kavraki. Molecular Distance Measures. http://cnx.org/contents/HV-RsdwL@23/Molecular-Distance-Measures
http://arxiv.org/abs/1705.09785v1
{ "authors": [ "Ankit Dhall", "Kunal Chelani", "Vishnu Radhakrishnan", "K. M. Krishna" ], "categories": [ "cs.RO", "cs.CV" ], "primary_category": "cs.RO", "published": "20170527075750", "title": "LiDAR-Camera Calibration using 3D-3D Point correspondences" }
Tunable Quantum Chaos in the Sachdev-Ye-Kitaev Model Coupled to a Thermal Bath
Yiming Chen, Hui Zhai, Pengfei Zhang
December 30, 2023
==============================================================================

§ INTRODUCTION As a concrete solvable model displaying maximal quantum chaos, non-Fermi liquid behavior and holographic duality, the Sachdev-Ye-Kitaev (SYK) model proposed by Kitaev <cit.> based on the early work of Sachdev and Ye <cit.> has been widely studied recently, both on the field-theory side <cit.> and on the gravitational side <cit.>. On the field theory side, the model contains N Majorana fermions χ_i (i=1,...,N) with random four-fermion interaction terms. The Hamiltonian is H_χ=1/4!∑_ijkl^NJ_ijklχ_i χ_j χ_k χ_l, where the normalization of χ_i is given by the anticommutation relation {χ_i, χ_j }=δ_ij. The coupling constants {J_ijkl} are antisymmetric under exchange of indices. Their mean vanishes and their variance is finite: J_ijkl=0, J_ijkl^2=3!J^2/N^3. The interaction is all-to-all, thus the model is usually viewed as a (0+1)-d quantum mechanics model. The specific choice of N scaling in Eq. (<ref>) leads to an elegant large-N structure, with which the two-point and four-point correlation functions can be calculated analytically <cit.>. The exact solution in the large-N limit displays a number of very intriguing properties, which we briefly summarize below. First, it displays an emergent conformal symmetry and non-Fermi liquid behavior. To the leading order of the 1/N expansion, the imaginary-time two-point function G(τ)δ_ij≡⟨𝒯_τχ_i (τ)χ_j (0) ⟩ satisfies a simple form of Schwinger-Dyson equations, G(iω)^-1=-iω -Σ(iω), Σ(τ)=J^2 G(τ)^3. In the infrared limit, one drops the -iω term, and the Schwinger-Dyson equations can be solved using the following ansatz G(τ)=bsgn(τ)/|τ|^1/2, which leads to J^2b^4=1/4π. (This coefficient follows from the standard Fourier pair ∫ dτ e^iωτsgn(τ)|τ|^-2Δ = 2icos(πΔ)Γ(1-2Δ)sgn(ω)|ω|^2Δ-1, applied to G with Δ=1/4 and to Σ=J^2G^3 with Δ=3/4, combined with the IR relation G(iω)Σ(iω)=-1.) A Fourier transform of the Green's function shows that the spectral function diverges as 1/ω^1/2 at low energy, which signals non-Fermi liquid behavior. The form of the Green's function also indicates that the system acquires an emergent conformal symmetry in the IR limit, with the scaling dimension of χ equal to 1/4, which can also be seen from the fact that the fixed-point action only has the four-fermion interaction terms. With the help of this conformal symmetry, and by utilizing the conformal mapping τ→tanπτ/β, the low temperature Green's function (with β J≫1) is then given by G_β(τ)=bsgn(τ)/|β/πsin(πτ/β)|^1/2.

Second, it shows maximally chaotic behavior. With the knowledge of the two-point Green's function at finite temperature, one can further calculate the four-point Green's function of the SYK model within the large-N expansion. At the leading order 𝒪(N^0), the four-point function is given by its disconnected part, as the connected part starts at order 𝒪(1/N): 1/Nℱ(τ_1,τ_2,τ_3,τ_4)=1/N^2∑_j,k=1^N⟨𝒯_τχ_j (τ_1) χ_j (τ_2)χ_k (τ_3)χ_k (τ_4)⟩ - G(τ_12)G(τ_34). With the analytical continuation of ℱ(τ_1,τ_2,τ_3,τ_4) to ℱ(3β/4+it,β/4+it,β/2,0), one can calculate the out-of-time-ordered correlation (OTOC) function for χ_i and χ_j. The definition of a regularized OTOC for operators A and B is: F_AB(t)=Tr[yA^†(t)yB^†(0)yA(t)yB(0)], where y=e^-1/4βĤ, which diagnoses the quantum butterfly effect <cit.> and has been applied to various models recently <cit.>. There are also experimental measurements of OTOC in different systems <cit.>.
The OTOC in many chaotic systems has a universal behavior, F_AB(t)∼ c_0-c_1exp(λ_Lt), around a time scale called the scrambling time t_s. λ_L defines the Lyapunov exponent of a quantum system. Assuming t_s≫ t_d (where t_d is the decay time of the two-point function), it has been proved that λ_L is bounded by 2π/β, and models with holographic duals are believed to saturate the bound <cit.>. For the SYK model, one can explicitly show that F(t)≡ F_χ_iχ_j(t)∼ -β Jexp(2π t/β) when t≫ t_d=β. The Lyapunov exponent extracted from it is exactly 2π/β.

Thirdly, a gravitational model shares the same effective action as the low-energy theory of the SYK model. In the SYK model, by using the replica trick to treat the disorder average, a non-local action can be deduced after introducing bi-local fields G(τ_1,τ_2) and Σ(τ_1,τ_2): S/N=-1/2log (∂_τ - Σ)+1/2∫ dτ_1 dτ_2 [Σ (τ_1,τ_2)G(τ_1,τ_2) - J^2/4G(τ_1,τ_2)^4 ]. The saddle point equations for G(τ_1,τ_2) and Σ(τ_1,τ_2) (assuming translational invariance in time) are the same as shown in Eq. (<ref>). Using the saddle-point solutions, one can show that there is a finite zero-temperature entropy, which is another signature of a non-Fermi liquid, resulting from the large degeneracy of ground states <cit.>. Expanding G and Σ around their saddle point solutions gives the low-energy effective action for the SYK model. Note that the low energy physics of the SYK model is dominated by the reparametrization modes due to the emergent conformal symmetry, and a reparametrization of the time variable τ→ f(τ) acts on the G(τ_1,τ_2) field as G(τ_1,τ_2)→ (f'(τ_1)f'(τ_2))^1/4G(f(τ_1),f(τ_2)). For a small deformation τ→τ + ϵ(τ), the effective action can be approximated by an action for the fluctuation δ_ϵG, which takes the elegant form of the Schwarzian action: S/N∝∫_0^βdτ1/2[(ϵ”)^2 - (2π/β)^2 (ϵ')^2]. For a finite transformation τ→ f(τ), the action can be written as S/N= -#/J∫ dτ{tan(π f/β),τ}, where {f,τ}≡f”'/f'-3/2(f”/f')^2 and # is a constant given in Ref. <cit.>. (Expanding f(τ)=τ+ϵ(τ) and using the identity {tan(πτ/β),τ}=2π^2/β^2 reproduces, up to a constant and total derivatives, the quadratic action above.) This Schwarzian effective action is rooted in the conformal symmetry of the SYK model and gives rise to the maximal Lyapunov exponent. Interestingly, on the gravity side, the same action appears for the dilaton gravity theory in two-dimensional near anti-de Sitter spacetime (NAdS_2) <cit.>.

Motivated by the aforementioned intriguing properties of the SYK model, many interesting generalizations of the SYK model have appeared recently <cit.>. Models with U(1) symmetry are studied numerically for both bosons and fermions in Ref. <cit.> by exact diagonalization, which shows the near degeneracy of the many-body spectrum at low energy for the fermionic model and a spin-glass state for the bosonic model. There are also efforts toward a more precise holographic description, for which a supersymmetric version of the SYK model has been studied <cit.>. By coupling SYK models <cit.>, a lattice model with spatial degrees of freedom can be realized, where the butterfly velocity and the diffusion constant can be calculated and compared with holographic results. The issue of the chaotic to non-chaotic transition has been considered in several different generalizations of the SYK model <cit.>.
Among all these generalizations, the non-Fermi liquid phases given by SYK-type models all possess maximally chaotic behavior, with the Lyapunov exponent saturating 2π/β, with only one exception that contains a time-dependent interaction <cit.>. In this paper, we consider a SYK model, or a chain of SYK models, with N Majorana fermion modes coupled to another SYK model with N^2 Majorana fermion modes. The latter contains many more degrees of freedom and can be viewed as a kind of thermal bath. Our model is time-independent and is also solvable in the large-N limit. In view of the three properties mentioned above, we will show that, on one hand, this model still displays an emergent conformal symmetry and non-Fermi liquid behavior, but on the other hand, it is not maximally chaotic. In fact, we will show that the Lyapunov exponent of this model can be tuned to any value between zero and 2π/β. Whether this model also has a gravitational dual remains unclear.

In section 2, we discuss a single SYK model coupled to the thermal bath. We analytically show that λ_L for the small system monotonically decreases from 2π/β to zero as the coupling strength to the thermal bath increases. In section 3, we consider a chain of SYK models. When the chain is uniformly coupled to the thermal bath, the butterfly velocity displays a crossover from a √(T)-dependence at relatively high temperature to a new linear T-dependence at low temperature. If only one end of the SYK chain is coupled to the thermal bath, we find a spatial dependence of both the Lyapunov exponent and the butterfly velocity.

§ (0+1)-D SYK MODEL COUPLED TO A THERMAL BATH §.§ The Model and the Two-Point Functions In this section, we introduce a generalized version of the SYK model that couples a small SYK model with N Majorana fermions (denoted by χ_i, i=1,2,...,N) to a large SYK model with N^2 Majorana fermions (denoted by ψ_i, i=1,2,...,N^2). We will call the two subsystems SYK_χ and SYK_ψ, respectively. In fact, the number of Majorana fermions in the large SYK model does not have to be N^2; it can more generally be N^α for any α>1, such that it dominates over N in the large-N limit. Here we choose the number of Majorana fermions in the larger cluster to be N^2 just for concreteness. The Hamiltonian of the coupled SYK system is written as: H =H_χ+H_ψ+H_c, where both H_χ and H_ψ take the same form as Eq. (<ref>), with {J_ijkl},{J^'_ijkl} being the random couplings in H_χ and H_ψ, respectively. H_c is the coupling Hamiltonian defined as H_c=1/4∑_ijklu_ijklχ_iχ_jψ_kψ_l, where u_ijkl are also random couplings. {J_ijkl},{J^'_ijkl}, {u_ijkl} are antisymmetric random variables with zero mean, J_ijkl=0,J_ijkl^'=0,u_ijkl=0, and their variances are J_ijkl^2=3!J^2/N^3,J_ijkl^'2=3!J^2/N^6,u_ijkl^2=2!u^2/N^5. The reason for this specific choice of the N dependence of the variances will become clear soon: it gives a nice structure for the theory in the large-N limit. As shown in Fig. <ref> (a), we use a straight line and a wavy line to denote the Green's functions G_χ(τ)=⟨𝒯_τχ_i(τ)χ_i(0)⟩ and G_ψ(τ)=⟨𝒯_τψ_i(τ)ψ_i(0)⟩, respectively, where 𝒯_τ denotes the time-ordering operator in imaginary time τ. Note that the Green's functions are diagonal in the fermion indices because of the disorder average. In the large-N limit, the Green's functions can be determined by the diagrammatic method, i.e., by solving the Schwinger-Dyson equations.
For the SYK_χ system, one expects two contributions to its self-energy Σ_χ(τ), given by the two diagrams in Fig. <ref> (b). The choice of the powers of N in Eq. (<ref>) ensures that these two diagrams are both of order 𝒪(N^0), which is the lowest order in 1/N for the Green's function. From these diagrams we can obtain the Schwinger-Dyson equations for the SYK_χ system as G_χ(ω)^-1 = -iω - Σ_χ(ω),     Σ_χ(τ)=J^2 G_χ^3(τ)+u^2 G_ψ^2(τ)G_χ(τ). For the SYK_ψ system, in the large-N limit, one may also expect two contributions to its self-energy Σ_ψ(τ), as shown in Fig. <ref>. However, the choices in Eq. (<ref>) result in the suppression of the second diagram by 1/N, so it can be neglected at leading order. This large-N structure makes physical sense, since the properties of the larger system should not be affected by the small system at leading order in 1/N. Then we have the Schwinger-Dyson equations for the SYK_ψ system: G_ψ(ω)^-1=-iω -Σ_ψ(ω),    Σ_ψ(τ)=J^2 G_ψ^3(τ). In the strong coupling limit, we first drop the -iω term, and the Green's functions take the same form as in the original single SYK model, G_χ(τ)= asgn(τ)/|τ|^1/2,     G_ψ(τ)= bsgn(τ)/|τ|^1/2, where the coefficients a and b are determined by a^4 J^2 +u^2 a^2 b^2 =1/4π,     b^4 J^2 = 1/4π. Note that the coefficients are different from the original SYK model, which is the key to the discussion below. Consequently, the finite temperature versions of the Green's functions are G_χ(τ)= a[π/βsinπτ/β]^1/2sgn(τ),     G_ψ(τ)= b[π/βsinπτ/β]^1/2sgn(τ).

§.§ Four Point Functions and the Tunable Lyapunov Exponent The Lyapunov exponent is defined from the OTOC, a particular four-point function. To calculate the four-point function, one can solve the self-consistency equations for the four-point functions in real time to obtain the asymptotic behavior in the chaos limit, as shown in Ref. <cit.>, from which one can extract the Lyapunov exponent. We use F_χχ, F_ψψ and F_χψ to denote the four-point functions of four χ's, of four ψ's, and of two χ's and two ψ's, respectively. Let us consider the following OTOCs in real time, F_χχ(t_1 , t_2)=Tr[yχ_i(t_1)yχ_j(0)yχ_i(t_2)yχ_j(0)],     y=ρ (β)^1/4, F_ψψ(t_1 , t_2)=Tr[yψ_i(t_1)yψ_j(0)yψ_i(t_2)yψ_j(0)],     y=ρ (β)^1/4, where the fermions are separated by a quarter of the thermal circle. At the lowest order 𝒪(N^0), the four-point function is given by its disconnected part; what we are interested in is the sub-leading part that describes the chaotic behavior. Below, we use F(t_1,t_2) to refer to the sub-leading part of the four-point function without further indication. As in the original SYK model, these are determined by the ladder diagrams in the large-N limit. We use self-consistency equations to determine the asymptotic behavior of F_χχ, i.e., the function F_χχ is an eigenfunction of K_R,χχ with eigenvalue one: F_χχ(t_1 , t_2)=∫ dt_3 dt_4 K_R,χχ(t_1 ... t_4)F_χχ(t_3, t_4), where K_R,χχ(t_1 ... t_4) is the retarded kernel for evaluating the four-point function. We shall first analyze the structure of the retarded kernel. One finds that the kernel consists of two parts at lowest order, given by the first and second diagrams in Fig. <ref> (a). Other possibilities, like the third diagram in Fig. <ref> (a), are in fact of order 1/N^2 and can be neglected at leading order in the connected part of the four-point function. From the diagrams, one can obtain the retarded kernel K_R,χχ(t_1 ...
t_4)= 3J^2 G_R,χ(t_13)G_R,χ(t_24)G_lr,χ(t_34)^2 + u^2 G_R,χ(t_13)G_R,χ(t_24)G_lr,ψ(t_34)^2, where G_R(t) is the real-time retarded correlator and G_lr(t)≡ iG(it+β/2) is the Wightman correlator. They are given by G_R,χ(t)=√(2)aθ (t) [π/βsinhπ t/β]^1/2,     G_lr,χ(t)=a [π/βcoshπ t/β]^1/2, G_R,ψ(t)=√(2)bθ (t) [π/βsinhπ t/β]^1/2,     G_lr,ψ(t)=b [π/βcoshπ t/β]^1/2. Taking the ansatz F_χχ(t_1,t_2)=e^-hπ/β(t_1 + t_2)/[coshπ/βt_12]^1/2-h, substituting Eqs. (<ref>) - (<ref>) into Eq. (<ref>), and requiring that the eigenvalue of the kernel satisfies k_R(h)=1, one finds 1=Γ (5/2)Γ (1/2-h)/Γ(3/2)Γ(3/2-h)× 4π(J^2 a^4 + u^2 a^2 b^2/3). The expression in parentheses equals 1/4π in the original SYK model, while here its value depends on u both explicitly and implicitly, via the coefficients a and b. One can solve for h from Eq. (<ref>) and Eq. (<ref>); the result is -h = 1 + 1/2k^2 - √(k^4 + 4k^2)/2,   with   k = u^2/J^2. Since λ_L= -h · 2π/β, we find that λ_L = 2π/β(1 - (√(k^4 + 4k^2)-k^2)/2). When k=u^2/J^2=0, the two systems decouple, and the Lyapunov exponent of F_χχ recovers 2π/β, which is simply the maximally chaotic value of the SYK model. However, as one increases the interaction between the two systems, i.e., as u^2/J^2 increases, the Lyapunov exponent decreases. When u^2/J^2→∞, λ_L→ 0. Fig. <ref> shows one of the central results of this work, where we plot the dependence of λ_L on u^2/J^2. For small u^2/J^2, the Lyapunov exponent decreases linearly with u^2/J^2: λ_L ≈2π/β(1-u^2/J^2),   u^2/J^2≪ 1.

The above calculation deals with the four-point function in real time and in the chaos limit directly. One can also first calculate the exact four-point function in imaginary time and then continue it to real time, which leads to the same result <cit.>. If we take the Lyapunov exponent λ_L as a measure of the chaos in a system, then this result tells us that we can tune the chaotic behavior of one SYK system by changing the magnitude of its interaction with a much larger system. The underlying mechanism is based on the hierarchy of the scrambling times between these two systems. In the SYK model, the scrambling time t_s is proportional to βlog N, where N is the number of Majorana fermions in the system. The large difference in the size of the two subsystems results in a large difference in the scrambling times. At the time scale when the SYK_χ system enters the chaos region, the SYK_ψ system has not scrambled at all. Thus, through the coupling between the two systems, the chaotic behavior of the SYK_χ system is weakened by the larger system SYK_ψ. Now let us turn to the OTOC F_ψψ(t_1 , t_2) of four ψ fermions. As illustrated in the last section, at the lowest order in 1/N, the SYK_ψ system should not be affected by the small system. We will find this is indeed the case here. One finds that the leading order of the connected part of F_ψψ(t_1 , t_2) is 𝒪(1/N^2), and the only contribution to the kernel at this order is shown in Fig. <ref> (a). From the self-consistency equation F_ψψ(t_1 , t_2)=∫ dt_3 dt_4 K_R,ψψ(t_1 ... t_4)F_ψψ(t_3, t_4), and by taking the ansatz form F_ψψ(t_1,t_2)=e^-h'π/β(t_1 + t_2)/[coshπ/βt_12]^1/2-h', with the eigenvalue k_R(h')=1, one finds 1=Γ (5/2)Γ (1/2-h')/Γ(3/2)Γ(3/2-h')× 4π J^2 b^4 . Solving this, one obtains h'=-1, which means that the Lyapunov exponent λ_L is 2π/β.
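As a quick numerical illustration of the tunable exponent derived above, the closed-form expressions can be evaluated directly. The following Python sketch is our own (function names and parameter choices are ours, not from this work); it evaluates λ_L(k) and the ansatz coefficients a and b:

import numpy as np

def lyapunov(k, beta=1.0):
    # lambda_L = (2 pi / beta) * (1 - (sqrt(k^4 + 4 k^2) - k^2) / 2), with k = u^2 / J^2
    return (2.0 * np.pi / beta) * (1.0 - 0.5 * (np.sqrt(k**4 + 4.0 * k**2) - k**2))

def ansatz_coefficients(J, u):
    # b^4 J^2 = 1/(4 pi); then a^4 J^2 + u^2 a^2 b^2 = 1/(4 pi) is a quadratic in a^2.
    b2 = 1.0 / np.sqrt(4.0 * np.pi * J**2)
    a2 = (-u**2 * b2 + np.sqrt(u**4 * b2**2 + J**2 / np.pi)) / (2.0 * J**2)
    return np.sqrt(a2), np.sqrt(b2)

for k in [0.0, 0.05, 0.5, 5.0, 50.0]:
    print(f"k = {k:5.2f}:  lambda_L / (2 pi / beta) = {lyapunov(k) / (2.0 * np.pi):.4f}")

The k=0 entry reproduces the maximal value, small k follows the linear law 1-k quoted above, and λ_L decays to zero as k grows, consistent with Fig. <ref>.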
The Lyapunov exponent of F_ψψ is thus the same as in the original SYK model, which matches our previous expectation. To understand the behavior of F_χχ at the next order, 𝒪(1/N^2), precisely, we would need to take into account the 𝒪(1/N) corrections to the correlators and analyze the kernel structure as before. A useful hint can be obtained by simply looking at the third diagram in Fig. <ref> (a), which is the 1/N^2-order contribution to the kernel K_R,χχ. It contains the four-point function F_ψψ as an internal propagator (see Fig. <ref> (b)), and since the Lyapunov exponent of F_ψψ reaches the maximal value 2π/β, we anticipate that at order 1/N^2, F_χχ(t_1 , t_2) will also display a Lyapunov exponent 2π/β. This gives an exponentially growing contribution 1/N^2exp(2π/βt), which reveals maximally chaotic behavior at longer times, when both systems are scrambled. Similarly, since the lowest order of the connected part of the OTOC F_ψχ is 𝒪(1/N^2) and it also contains F_ψψ as an inner propagator, we anticipate that F_ψχ will also grow as exp(2π/βt) at order 1/N^2. From this point of view, one sees different chaotic behavior at different time scales. As far as the small system is concerned, there are two scrambling time scales, one shorter and one longer. The small system is not maximally chaotic at the first scrambling time, because the environment is not yet chaotic at that time scale; it is maximally chaotic at the second, longer scrambling time, because by then the environment is maximally chaotic.

§.§ The Effective Action In this section we discuss an alternative, effective-action derivation of the change of the Lyapunov exponent, valid perturbatively. This will be useful for the later discussion of the SYK chain coupled to the environment. The derivation of the effective action utilizes the original fermion path integral formalism and carries out the disorder average. By introducing bi-local fields G_χ(τ_1,τ_2), G_ψ(τ_1,τ_2) and Lagrange multiplier fields Σ_χ(τ_1,τ_2), Σ_ψ(τ_1,τ_2) that set G_χ(τ_1,τ_2)=1/NΣ_iχ_i(τ_1)χ_i(τ_2) and G_ψ(τ_1,τ_2)=1/N^2Σ_iψ_i(τ_1)ψ_i(τ_2), one obtains the nonlocal action S_eff = N (-1/2log (∂_τ-Σ_χ) +1/2∫ dτ_1 dτ_2 ( Σ_χ(τ_1,τ_2)G_χ(τ_1,τ_2)-J^2/4G_χ(τ_1,τ_2)^4)) + N^2 (-1/2log (∂_τ-Σ_ψ) +1/2∫ dτ_1 dτ_2 ( Σ_ψ(τ_1,τ_2)G_ψ(τ_1,τ_2)-J^2/4G_ψ(τ_1,τ_2)^4)) -N(1/2∫ dτ_1 dτ_2 u^2/2 G_χ(τ_1,τ_2)^2 G_ψ(τ_1,τ_2)^2 ). The saddle point equations of the fields are the same as in Eq. (<ref>) and Eq. (<ref>). We expand the fields around their saddle point solutions as G_χ (τ)=G_χ,s(τ)+|G_χ,s(τ)|^-1g(τ),Σ_χ(τ)=Σ_χ,s(τ)+|G_χ,s(τ)|σ(τ), G_ψ (τ)=G_ψ,s(τ)+|G_ψ,s(τ)|^-1g'(τ),Σ_ψ(τ)=Σ_ψ,s(τ)+|G_ψ,s(τ)|σ'(τ), where g(τ), g^'(τ), σ(τ), σ^'(τ) denote the fluctuations of the fields.
By keeping up to the quadratic order, one obtains the effective actionS_eff=N(-1/2log(1+G_χ,s∘ G_χ,sσ)+∫ dτ_1dτ_21/2g(τ_1,τ_2)σ(τ_1,τ_2)-3J^2/4g(τ_1,τ_2)g(τ_1,τ_2))+N^2(-1/2log(1+G_ψ,s∘ G_ψ,sσ')+∫ dτ_1dτ_21/2g'(τ_1,τ_2)σ'(τ_1,τ_2)-3J^2/4g'(τ_1,τ_2)g'(τ_1,τ_2))-N∫ dτ_1dτ_2 (u^2g'(τ_1,τ_2)g(τ_1,τ_2)+u^2/4g(τ_1,τ_2)g(τ_1,τ_2)+u^2/4 g'(τ_1,τ_2)g'(τ_1,τ_2)).Integrating out σ and σ', and to the leading order of u^2/J^2, one obtains S_eff=3J^2N^2/4∫ dτ_1...dτ_4g'(τ_1,τ_2)(K'^-1-1̂)g'(τ_3,τ_4)+3J^2N/4∫ dτ_1...dτ_4g(τ_1,τ_2)(K^-1-1̂)g(τ_3,τ_4)-N∫ dτ_1dτ_2 ( u^2g'(τ_1,τ_2)g(τ_1,τ_2)+u^2/4(g(τ_1,τ_2)g(τ_1,τ_2)+g'(τ_1,τ_2)g'(τ_1,τ_2))),in which K'(τ_1,τ_2;τ_3,τ_4)=3J^2G_ψ,s(τ_12)G_ψ,s(τ_13)G_ψ,s(τ_42)G_ψ,s(τ_34),and K=J^2/J^2+u^2K'.K' does not have J and u dependences since the coefficient b in G_ψ,s(τ) will cancel out the J^2 prefactor,while K has J and u dependences. The exact four point function can be derived by first diagonalizing the kernels K and K', with their eigenvalues denoted by k(h,n) and k'(h,n) and the eigenfunction corresponding to (h,n) denoted by Ψ_h,n. For the SYK_ψ system, at the leading order, one only needs to consider the first term in the effective action. The connected part of the imaginary time four-point function for SYK_ψ system can be written as1/N^2F_ψψ(τ_1,τ_2;τ_3,τ_4)=1/|G_ψ,s(τ_12)G_ψ,s(τ_34)|⟨ g'(τ_1,τ_2)g'(τ_3,τ_4)⟩ = 2/3N^2J^21/|G_ψ,s(τ_12)G_ψ,s(τ_34)|(K'^-1-1̂)^-1 = 2/3N^2J^21/|G_ψ,s(τ_12)G_ψ,s(τ_34)|∑_h,nΨ_h,n(τ_1,τ_2)k'(h,n)/1-k'(h,n)Ψ^*_h,n(τ_3,τ_4).However, for h=2, one has k'(h,n)=1, which leads to the vanishing of effective action for h=2 modes as well as a divergence of the four-point function in above equation. The divergence comes from the reparametrization modes which are soft modes in the conformal limit. For them, one must consider the correction away from the conformal limit to obtain a meaningful result. On the other hand, these modes are the dominant part of the four-point function in the long time limit, and result in the exponential growth with the Lyapunov exponent 2π/β for the OTOC. This is the same story as the original SYK model, as well as the SYK_ψ system. However, for the SYK_χ system, the interaction with the bath makes the difference. If we integrate out g^' in the effective action, at the leading order, the effective action for g isS_eff,χ=3J^2 N/4∫ dτ_1 ...dτ_4 g(τ_1,τ_2)(K^-1-(1+u^2/3J^2)1̂)g(τ_3,τ_4).With this, one obtains the expression for the connected part of the imaginary time four-point function1/NF_χχ(τ_1,τ_2;τ_3,τ_4)= 2/3NJ^21/|G_χ,s(τ_12)G_χ,s(τ_34)|∑_h,nΨ_h,n(τ_1,τ_2)k(h,n)/1-(1+u^2/3J^2)k(h,n)Ψ^*_h,n(τ_3,τ_4) = 2/3N(J^2+u^2)1/|G_χ,s(τ_12)G_χ,s(τ_34)|∑_h,nΨ_h,n(τ_1,τ_2)k'(h,n)/1-1+u^2/3J^2/1+u^2/J^2k'(h,n)Ψ^*_h,n(τ_3,τ_4).For h=2, k'(h,n)=1, but there is no divergence due to u^2/J^2≠ 0. Since the divergence is removed, the long time behavior of the four-point function should be a result of both h=2 and h≠ 2 modes. In principle, the asymptotic behavior of OTOC in the chaos limit contains contributions from the h≠ 2 modes. However, for u^2/J^2≪ 1, the reparametrization modes still compose the dominant part of the four-point function, thus we can first focus on the h=2 part, and then consider the correction from h≠ 2 part perturbatively. We will show below that this leads to the same result as in previous section for the perturbative regime u^2/J^2≪ 1.Concentrating on the h=2 part, we need to write an explicit action for the reparametrization modes. 
For u=0, the SYK_χ and SYK_ψ systems are decoupled, thus we should have two sets of reparametrization mode ϵ and ϵ', corresponding to the reparametrization of G_χ and G_ψ, respectively. For finite u, since G_χ and G_ψ are coupled through Eq. (<ref>), they should be reparametrized together. However, since we are working in the u^2/J^2≪ 1 regime, we shall still keep two sets of reparametrization modes, and treat the u^2 term as the first order perturbation that couples the two sets of modes together. Explicitly, the two sets of reparametrization modes are introduced as:g'=1/β J∑_nf_nϵ'_n,g=1/β√(J^2+u^2)∑_n f_nϵ_n≈1/β J∑_nf_nϵ_n ,with functions f_n defined in Ref.<cit.> and ∫ f_nf_m∝|n|(n^2-1)δ_m,-n. The eigenvalues of K' for the reparametrization modes are approximated by 1-|n|√(2)α_K /2πβ J, while the eigenvalues of K for the reparametrization modes are approximated by J^2/J^2 + u^2-|n|√(2)α_K /2πβ J, in which α_K is a numerical factor <cit.>. The effective action is then given by:S=1/256π(∑_nN^2 √(2)α_K /β J(ϵ'_-nn^2(n^2-1)ϵ'_n) +∑_nN √(2)α_K /β J(ϵ_-nn^2(n^2-1)ϵ_n). . -∑_n4N u^2 /3J^2(ϵ'_-n|n|(n^2-1)ϵ_n)+∑_n2N u^2 /3J^2(ϵ_-n|n|(n^2-1)ϵ_n)),where we neglected a term proportional to N for quadratic term of ϵ'. To the lowest order for ϵ_n', the action is just Schwarzian derivative while for ϵ_n there is an additional term ∝ |n|(n^2-1).The h=2 contribution to the leading order of the connected part four-point function of the SYK_χ system can be read from this effective action as :F_χχ,h=2(τ,τ_12,τ_34)= 8/π∑_ne^-inτ/√(2)α_K |n|/β J + 2u^2/3J^21/|n|(n^2 -1)f_n(τ_12)f_n(τ_34) = 16J/√(2)α_K∑_ne^-inτ/2π |n|/β+ 4πu^2/3√(2)Jα_K1/|n|(n^2 -1)f_n(τ_12)f_n(τ_34),where τ = (τ_1+τ_2 -τ_3 -τ_4)/2. By analytical continuation with τ_12 = τ_34=β/2 and τ=it, we obtain the growing term in the long time asF_χχ,h=2(t)/G(β/2)^2≃ -4π J/√(2)α_K·e^2π/βt/2π/β+4πu^2/3√(2)Jα_K.We see that the Lyapunov exponent is still 2π/β, because we have not taken the h≠ 2 corrections into account yet. Here we have two small parameters 1/β J and u^2/J^2, which are both of order ϵ. Suppose F_χχ can be written in the form asF_χχ(t) ∼ -1/a_1 ϵexp[2π/β(1-λ_1 ϵ + ... )t] = -e^2π/βt/a_1 ϵ + 2π/βλ_1/a_1 t e^2π/βt +𝒪(ϵ).It is analyzed in Ref.<cit.> that the h≠ 2 part gives a growing term 6π/βte^2π/βt. By matching the above expansion, we can obtainλ_1 ϵ = 3√(2)α_K/2β J+u^2/J^2,where the first term is the first order correction to the Lyapunov exponent away from the conformal limit, while the second term is due to the interaction with the thermal bath. In the conformal limit β J→∞, the Lyapunov exponent is given byλ_L ≈2π/β(1-u^2/J^2), u^2/J^2≪ 1,which is consistent with the result Eq. (<ref>) and Eq. (<ref>) in the sec. <ref>. § (1+1)-D SYK MODELS COUPLED TO A THERMAL BATH In this section we consider the (1+1)-dimensional generalization of the previous model. By generalizing to a one dimensional chain model, we can study richer physics including the spatial dependence of OTOC, the energy transport property and so on.There are several proposals for generalizing the SYK model to higher dimensions <cit.>, here we mainly follow Ref. <cit.> and focus on the generalization to one dimensional chain model. Two different configurations of chain model will be discussed in this section. In both cases, we first prepare an one dimensional SYK chain, with N Majorana fermions χ_i,x on each site, and prepare a SYK_ψ system with N^2Majorana fermions ψ_i. 
In the first configuration, we use the SYK_ψ system as a global bath and couple every site of the SYK_χ chain to the SYK_ψ system uniformly. In the second configuration, we use the SYK_ψ system as a local bath and attach it to one end of the SYK_χ chain. The first model preserves the translational symmetry along the chain, while the second model does not.

§.§ (1+1)-d SYK Chain Coupled to a Global Thermal Bath In this subsection, we study the first configuration, as illustrated in Fig. <ref>. To maintain the translational symmetry, here we choose the interaction strength with the N^2-fermion SYK_ψ bath to be uniform. If the interaction strengths are chosen differently, this becomes a new type of inhomogeneous SYK chain model, which can be analyzed using a similar method as in Ref. <cit.>. The Hamiltonian of the system is H=∑_x=1^M( H_χ_x + H^c_χ_x,χ_x+1 + H^c_χ_x,ψ) + H_ψ, where H^c is defined as in Eq. (<ref>). The random couplings in each term are {J_jklm,x}, {J^'_jklm,x}, { u_ijkl}, {J̃_ijkl} in order, with J^2_ijkl,x=3!J^2_0/N^3, J^'2_ijkl,x=J^2_1/N^3, u_ijkl,x^2=2u^2/N^5, J̃_ijkl^2=3!J̃^2/N^6. As we do not want the bath SYK_ψ system to be affected by the existence of the chain at leading order in 1/N, we demand that the number of sites M satisfy M/N→ 0 in the large-N limit. Using the replica method, after performing the disorder average and introducing the bi-local fields, one arrives at the effective action (below we omit the terms for the SYK_ψ system): S_eff[G_x,Σ_x] = ∑_x=1^M[ -logPf(∂_τ-Σ_x)+1/2∫_0^βdτ_1 dτ_2(Σ_x (τ_1,τ_2)G_x(τ_1,τ_2)-J_0^2/4G_x(τ_1,τ_2)^4 -J_1^2/4G_x(τ_1,τ_2)^2G_x+1(τ_1,τ_2)^2 - u^2/2G_x(τ_1,τ_2)^2G_ψ(τ_1,τ_2)^2)]. This effective action shows that the translationally invariant saddle point solutions should satisfy: G_x(τ)=G^s(τ),Σ_x(τ)=Σ^s(τ), G^s(iω)^-1=-iω -Σ^s(iω),Σ^s(τ)=(J_0^2+J_1^2+ u^2b^2/a^2)G^s(τ)^3, where a and b are the factors in front of the Green's functions G^s(τ) and G_ψ(τ) as in Eq. (<ref>). The value of b^2/a^2 can be tuned by changing the variance J̃^2 of the random couplings {J̃_jklm}. Here we set b^2/a^2=1, by adjusting the value of J̃^2, to simplify the equations. We define J=√(J_0^2+J_1^2+ u^2), and then Eq. (<ref>) becomes: G^s(iω)^-1=-iω -Σ^s(iω),Σ^s(τ)=J^2G^s(τ)^3. In the conformal limit N≫β J≫ 1, one has the saddle point solution G^s(τ) =a [π/βsinπτ/β]^1/2,0≤τ < β, with a^4 J^2= 1/4π. Expanding the fields around the saddle point solutions as G_x(τ)=G^s(τ)+|G^s(τ)|^-1g_x(τ), Σ_x(τ)=Σ^s(τ)+|G^s(τ)|σ_x(τ), one can expand the effective action to quadratic order as δ S_eff[g,σ]=-1/4∫ d^4τ∑_xσ_x(τ_1 ,τ_2)G^s(τ_13)· |G^s(τ_34)|· G^s(τ_42)·|G^s(τ_21)|σ_x(τ_3,τ_4) +∫ d^2τ(∑_x1/2σ_x(τ_1,τ_2)g_x(τ_1,τ_2)-3J^2/4∑_x,yg_x(τ_1,τ_2)S_xyg_y(τ_1,τ_2)). The spatial kernel S_xy is defined as S_xy=(1-2u^2/3J^2)δ_x,y+J^2_1/3J^2(δ_x,y± 1-2δ_x,y), with its Fourier transform into momentum space being s(p)=1-2u^2/3J^2+J^2_1/3J^2(cos p -1).
Defining K̃ as the symmetrized four-point function kernel of the SYK modelK̃(τ_1,τ_2;τ_3,τ_4)=3J^2 G^s(τ_13)· |G^s(τ_34)|· G^s(τ_42)·|G^s(τ_21)|,and by integrating out the σ_x fields, the effective action can be written asδ S_eff=3J^2/4∫ d^4τ∑_x,yg_x(τ_1,τ_2)[ K̃^-1(τ_1,τ_2;τ_3,τ_4)δ_xy-S_xyδ(τ_13)δ(τ_24) ]g_y(τ_3,τ_4).One can read the four point function with two fermions on the x site and two fermions on the y site from this action, which is1/Nℱ_xy(τ_1,τ_2;τ_3,τ_4) =1/|G^s(τ_12)G^s(τ_34)|⟨ g_x(τ_1,τ_2)g_y(τ_3,τ_4)⟩ = 2/3NJ^21/|G^s(τ_12)G^s(τ_34)|(K̃^-1-S)^-1.By Fourier transforming (x-y) to the momentum space, one obtains the four point function in momentum space as1/Nℱ_p(τ_1,τ_2;τ_3,τ_4)=2/3NJ^21/|G^s(τ_12)G^s(τ_34)|[K̃^-1-s(p)δ(τ_13)δ(τ_24)]^-1.One can further get the behavior of the four point function from this expression by diagonalizing the kernel K̃ in the same way as in the original SYK model. The kernel has eigenvalues k(h,n) and eigenfunctions Ψ_h,n(τ_1,τ_2) which have been worked out in the Ref. <cit.>. With these eigenvalues and eigenfunctions, an exact expression for the four-point function in the momentum space is given by1/Nℱ_p(τ_1,τ_2;τ_3,τ_4)=2/3NJ^21/|G^s(τ_12)G^s(τ_34)|∑_h,nΨ_h,n(τ_1,τ_2)k(h,n)/1-s(p)k(h,n)Ψ^*_h,n(τ_3,τ_4).The would-be divergence from h=2 is removed by s(p)<1, similar as in the section 2.3. Similarly, when the divergence has been removed, the long time behavior of the four point function comes from both the h=2 and h≠ 2 parts. Following the same procedure as in sec. <ref>, because for p^2 ≪ 1 and u^2/J^2≪ 1, s(p)≈ 1, the four-point function in the long time and energy transport properties in the long wavelength limit are still mainly governed by the h=2 reparametrization modes. It is reasonable for us to first focus on the h=2 part, and then take into account of the corrections coming from h≠ 2 parts perturbatively when necessary. For the h=2 modes, we make reparametrizations to the saddle point solutionG^s(τ_1,τ_2) → G^f_x(τ_1,τ_2)=(f^'_x(τ_1)f^'_x(τ_2))^1/4G^s(f_x(τ_1),f_x(τ_2)),with small deformations f_x(τ)=τ+ϵ_x(τ). Using the same method as in sec. <ref>, we derive the effective action of the chain model in terms of the reparametrization modes:S=1/256π∑_n,pϵ_n,p(√(2)α_K/β Jn^2 (n^2 -1)+(J^2_1/3J^2p^2 + 2u^2/3J^2)|n|(n^2 -1))ϵ_-n,-pwhere ϵ_n,p's are the reparametrization modes by Fourier transforming ϵ_x(τ):ϵ_n,p=1/√(M)∑_x=1^M∫_0^βdτ e^i(2π n/βτ - xp)ϵ_x(τ).Thus the h=2 contribution to the four-point function can be derived as (τ=(τ_1+τ_2 - τ_3 -τ_4)/2):ℱ_p,h=2(τ, τ_12,τ_34)/G^s(τ_12)G^s(τ_34)= 8/π∑_ne^-inτ/√(2)α_K |n|/β J+J^2_1/3J^2p^2 + 2u^2/3J^21/|n|(n^2 -1)f_n(τ_12)f_n(τ_34) = 16J/√(2)α_K∑_ne^-inτ/2π |n|/β+ 1/τ_u +D p^21/|n|(n^2 -1)f_n(τ_12)f_n(τ_34)in which1/τ_u≡4πu^2/3√(2)Jα_K,D≡2πJ^2_1/3√(2)Jα_K.D is the diffusion constant of the system. To see this, one needs to consider the energy transport property of the system. The diffusive dynamics in space-time is governed by the reparameterization fields, which are the same as the ones that govern the long time behavior of fermion four-point function. The retarded energy-density correlation function in the (p,ω) space can be derived by utilizing the result of fermion four-point function in the OPE limit |τ_2 - τ_1|,|τ_4 - τ_3|≪τ. By the same method as in Ref. 
<cit.><cit.>, we get the expression for the retarded energy-density correlation function C^R_00(p,ω): C^R_00(p,ω)≃ -Nc_v/β1/τ_u+Dp^2/-iω +1/τ_u +Dp^2. The denominator (-iω +1/τ_u +Dp^2) comes from a diffusion-type equation in real space, ∂_t ϕ(x,t) = D∂_x^2ϕ(x,t) - 1/τ_uϕ(x,t), where ϕ(x,t) is the energy density. From this equation one reads off the physical meanings of D and τ_u: D is the diffusion constant, while the 1/τ_uϕ term leads to an exponential decay of the energy density, with τ_u∼J/u^2 being the characteristic time scale for the decay. Physically, this characterizes the process of energy leaking into the bath SYK_ψ system through the interaction, which is similar to the process in which an energy flow crossing the black hole horizon is absorbed into the black hole and can no longer be extracted by an exterior observer.

After the discussion of the energy transport properties, we now turn to the behavior of the OTOC in the chaos limit. The OTOC as a function of both space and time contains two Majorana fermions from position x and two from position 0: F(x , t)=1/N^2∑_i,j^NTr[yχ_i,x(t)yχ_j,0(0)yχ_i,x(t)yχ_j,0(0)],     y=e^-βĤ/4, from which we can extract both the Lyapunov exponent and the butterfly velocity. At the leading order 𝒪(N^0), the four-point function is given by a disconnected part -G(β/2)^2, and the 𝒪(1/N) part of the function can be derived by first performing the analytical continuation of Eq. (<ref>) with τ_12 = τ_34=β/2 and τ=it, and then Fourier transforming back to real space. By analytical continuation, we obtain the growing term at long times as ℱ_p,h=2(t)/G(β/2)^2≃ -4π J/√(2)α_K·e^2π/βt/2π/β+1/τ_u+Dp^2. We see that it grows exponentially at long times. In the same spirit as in sec. <ref>, we need to consider the h≠ 2 contributions to get the modified Lyapunov exponent. By the same method, after taking into account the first order corrections in p^2, u^2/J^2, as well as 1/(β J), one finds ℱ_p(t)/G(β/2)^2≃ -4π J/√(2)α_K·1/2π/β+1/τ_u+Dp^2exp[2π/β(1-δ(p) )t], where δ(p)=u^2/J^2+J^2_1/2J^2p^2+3√(2)α_K/2β J. F(x,t) is given by Fourier transforming Eq. (<ref>) back into real space: F(x,t)/-G(β/2)^2=1-1/N4π J/√(2)α_K∫_-∞^∞dp/2πe^ipx/2π/β+1/τ_u+Dp^2exp[2π/β(1-δ(p) )t]+𝒪(1/N^2). The residue at the pole p^*=i√(2π/β+1/τ_u/D) dominates the integral for large x. One obtains F(x,t)/-G(β/2)^2≃ 1-1/N√(2)π J/α_K√(D(2π/β+1/τ_u))exp[2π/β(t-x/v_B)]. We find that for large x, the first order corrections to the Lyapunov exponent exactly cancel each other, leaving the Lyapunov exponent 2π/β unchanged, as in Ref. <cit.>. However, this does not hold for small x, where the Lyapunov exponent will be tuned. The butterfly velocity is given by v_B = 2π/β/√(2π/β+1/τ_u)√(D). Defining τ_L=β/2π, we have v^2_B=1/τ_Lτ_u/τ_u + τ_L D. This relationship is very interesting. On one hand, it still satisfies D≳ v^2_Bτ_L, which is proposed to be a bound for the butterfly velocity <cit.>. It has been found to hold in a SYK chain <cit.> and in holographic theories <cit.>. On the other hand, the scaling law of v_B^2/D shows a crossover behavior as temperature increases, as shown in Fig. <ref>. As we mentioned, τ_u is the time scale which governs the decay of the energy density, while it also defines a temperature scale proportional to u^2/J. When the temperature of the system is much higher than this temperature scale, i.e., k_B T ≫ u^2/J, Eq. (<ref>) tells us that v^2_B/D∝ T, which is the same temperature scaling law as in Ref. <cit.>.
However, when the temperature of the system is far below that temperature scale, i.e., k_B T ≪ u^2/J, Eq. (<ref>) tells us that v^2_B/D∝ T^2. This scaling law is similar to that in Landau's Fermi liquid theory, in which v_B is replaced by the Fermi velocity v_F, which is a constant at low temperature, while D scales as 1/T^2. The difference, however, is that here D is a constant that does not depend on the temperature, while v_B scales as T. To conclude this subsection, we find that, by coupling the one-dimensional SYK chain to a larger SYK bath, the butterfly velocity can not only acquire different values, but also acquire a different temperature dependence. From Eq. (<ref>), it can be seen that when we increase the interaction strength u with the bath, the butterfly velocity decreases, which physically means that information leaks into the bath. Meanwhile, the dependence of the butterfly velocity on temperature crosses over from v_B ∝√(T) to v_B ∝ T.

§.§ (1+1)-d SYK Chain Coupled to a Local Thermal Bath In this subsection, we study the other configuration, in which we couple the SYK_ψ system to one end of the SYK chain, as shown in Fig. <ref>. Different from the model in sec. <ref>, this model is no longer translationally invariant. In this configuration, our focus will be the spatial dependence of both the Lyapunov exponent and the butterfly velocity. We call the site nearest to SYK_ψ site 1, the second nearest site 2, and so on. The Hamiltonian of the system can be written as H=∑_x=1^M( H_χ_x + H^c_χ_x,χ_x+1)+H^c_χ_1,ψ + H_ψ. The random couplings in each term are {J_ijkl,x},{J^'_ijkl,x},{J^'_ijkl,0},{J̃_ijkl}, with J^2_ijkl,x=3!J^2/N^3, J^'2_ijkl,x=2u^2/N^3, J^'2_ijkl,0=2u^2/N^5, J̃_ijkl^2=3!J^2/N^6. Note that in Eq. (<ref>), the meanings of the symbols J and u are different from their meanings in the last subsection. To keep the physical picture clear, we keep only the two parameters u and J, although we could have four different parameters in Eq. (<ref>). The Schwinger-Dyson equations are G_ψ(iω)^-1=-iω - Σ_ψ (iω), G_x(iω)^-1=-iω - Σ_x (iω), with Σ_ψ(τ) = J^2 G_ψ(τ)^3 , Σ_1(τ) = J^2 G_1(τ)^3 + u^2 G_1(τ)G_ψ(τ)^2 + u^2 G_1(τ)G_2(τ)^2 , Σ_x(τ) = J^2 G_x(τ)^3 + u^2 G_x(τ)G_x-1(τ)^2 + u^2 G_x(τ)G_x+1(τ)^2 , x≥ 2. Suppose the solutions in the conformal limit still obey the ansatz G_ψ(τ)= bsgn(τ)/|τ|^1/2,     G_x(τ)= a_xsgn(τ)/|τ|^1/2; then one gets a set of coupled equations for the coefficients b and {a_x}: J^2 b^4=1/4π , J^2 a^4_1+ u^2 (a_1^2 b^2 + a_1^2a_2^2) =1/4π , J^2 a^4_x+ u^2 (a_x^2 a_x-1^2 + a_x^2a_x+1^2) =1/4π, x≥ 2. With the asymptotic boundary condition a_x→const as x→∞, one can solve the coupled equations numerically to get the coefficients b and {a_x}. Our goal is to calculate the OTOCs in the chaos limit and to extract the Lyapunov exponents and butterfly velocities from them. In this model, the OTOCs F_1(t_1,t_2), F_2(t_1,t_2),... [F_i(t_1,t_2) is the 1/N piece of the OTOC with four fermions from site i, as defined before] are coupled to each other. For F_1(t_1,t_2), the self-consistency equation involves F_2(t_1,t_2): F_1(t_1,t_2)=∫ dt_3 dt_4 K_11(t_1 ... t_4)F_1(t_3, t_4)+K_12(t_1 ... t_4)F_2(t_3, t_4), where K_11(t_1...t_4)= 3J^2 G_1,R(t_13)G_1,R(t_24)G_1,lr(t_34)^2+u^2 G_1,R(t_13)G_1,R(t_24) (G_2,lr(t_34)^2+G_ψ,lr(t_34)^2), and K_12(t_1...t_4)=2u^2 G_1,R(t_13)G_1,R(t_24)G_1,lr(t_34)G_2,lr(t_34). For F_x(t_1,t_2) with x≥ 2, the self-consistency equation involves F_x-1(t_1,t_2) and F_x+1(t_1,t_2): F_x(t_1,t_2)=∫ dt_3 dt_4 K_xx(t_1 ...
t_4)F_x(t_3, t_4)+K_x,x-1(t_1 ... t_4)F_x-1(t_3, t_4)+K_x,x+1(t_1 ... t_4)F_x+1(t_3, t_4), with K_xx(t_1...t_4) =3J^2 G_x,R(t_13)G_x,R(t_24)G_x,lr(t_34)^2+u^2 G_x,R(t_13)G_x,R(t_24) (G_x-1,lr(t_34)^2+G_x+1,lr(t_34)^2), and K_x,x± 1(t_1...t_4)=2u^2 G_x,R(t_13)G_x,R(t_24)G_x,lr(t_34)G_x± 1,lr(t_34). These equations can be cast into a matrix form with the kernel matrix 𝒦={K_ij}, and by solving the matrix integral equation one gets different modes with different Lyapunov exponents, similarly to the different momentum modes in sec. <ref>. The matrix integral equation can be solved with the ansatz ℱ⃗_h e^-hπ/β(t_1 + t_2)/[coshπ/βt_12]^1/2-h=∫ dt_3 dt_4 𝒦(t_1 ... t_4)ℱ⃗_h e^-hπ/β(t_3 + t_4)/[coshπ/βt_34]^1/2-h. From this one can solve for a series of eigen-modes, each with a different h, corresponding to different Lyapunov exponents. A technical detail is that the kernel matrix 𝒦={K_ij} is non-Hermitian, thus the eigen-modes {ℱ⃗_h} are not orthogonal to each other. To keep the eigen-modes orthogonal to each other, one needs to use a symmetrized kernel matrix 𝒦̃={K̃_ij}. Having obtained a series of different modes, the 1/N part of the OTOC ⟨χ_i,x (t)χ_j,x(0)χ_i,x (t)χ_j,x(0) ⟩_β with four fermions on the x site can be written as a summation over different modes: F_x(t)∝∑_hℱ̃^2_hx1/2π/β+α (h+1)exp[-h2π/βt], where α = 4π J/3√(2)α_K. The coefficients {ℱ̃_hx} are the mode expansion coefficients derived numerically using the symmetrized kernel matrix. The weight of the different modes in the summation should be determined by the effective action; comparing with the results in the last subsection, it is well approximated by 1/2π/β+α (h+1). For the OTOC ⟨χ_i,x (t)χ_j,y(0)χ_i,x (t)χ_j,y(0) ⟩_β with two fermions on the x site and two fermions on the y site, it can be derived as F_xy(t)∝∑_hℱ̃_hxℱ̃_hy1/2π/β+α (h+1)exp[-h2π/βt].

We first consider the OTOC with four fermions from the same site. We plot the behavior of F_x(t) for the first five sites away from the SYK_ψ system in Fig. <ref> (a). In the numerics, we set β = 2π and J=u=100. We can see that the closer a site is to the SYK_ψ system, the slower the growth of F_x(t). To quantify the growth rate of F_x(t), we fit F_x(t) after the dissipation time t_d∼β/2π with the function e^λ_L t. We thereby obtain a local Lyapunov exponent λ_L for each site, which characterizes the growth rate of the OTOC. The result is shown in Fig. <ref> (b). We see that the Lyapunov exponents acquire a non-trivial spatial dependence. In our (0+1)-d model, the interaction with a much larger SYK cluster results in a decrease of the Lyapunov exponent of the small system. In this (1+1)-d model, this influence depends on the distance between the large system and each site on the chain. To look at the spatial dependence of the butterfly velocity, we study the OTOC F_x_c -a,x_c +a(t) with different a=0,1,2,... . We fit it with the form F_x_c -a,x_c +a(t)∝exp[ λ_L (t- 2a/v_B )], where we assume that λ_L and v_B only depend on the center position x_c. In the numerics, we can check that this assumption works well for small a. The dependence of v_B on x_c is shown in Fig. <ref>. From the plot we can see that the closer the center position x_c is to the SYK_ψ system, the smaller the butterfly velocity. This again shows that the presence of the larger system can also slow down the propagation of quantum chaos through space.
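For completeness, the coupled coefficient equations quoted above are straightforward to solve numerically. The following Python sketch is our own (the chain length M, the solver, and the variable names are our choices; J = u = 100 matches the numerics quoted above):

import numpy as np
from scipy.optimize import fsolve

J = u = 100.0
M = 40                                         # number of chain sites (our choice)
b2 = 1.0 / np.sqrt(4.0 * np.pi * J**2)         # b^2 from J^2 b^4 = 1/(4 pi)

def residual(a2):
    # a2[x] holds a_{x+1}^2; site 1 couples to the bath coefficient b^2,
    # and the far end uses a_{M+1} = a_M to implement the a_x -> const boundary.
    left = np.concatenate(([b2], a2[:-1]))
    right = np.concatenate((a2[1:], [a2[-1]]))
    return J**2 * a2**2 + u**2 * a2 * (left + right) - 1.0 / (4.0 * np.pi)

a2 = fsolve(residual, np.full(M, b2))          # uniform initial guess
a = np.sqrt(a2)                                # coefficient profile a_1, ..., a_M

The resulting profile feeds directly into the kernels K_xx and K_x,x±1 above through the correlators G_x,R and G_x,lr.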
The spatial dependence of the Lyapunov exponent and the butterfly velocity further demonstrates the usefulness of these concepts in characterizing the degree of quantum chaos even locally.

§ SUMMARY AND OUTLOOK In this paper, we proposed several generalizations of the SYK model in which the characteristics of quantum chaos can be tuned by varying the coupling strength to a thermal bath. Owing to the separation of the scrambling time scales between the small and the large SYK system, and utilizing the nice large-N structure of the SYK model, our set-up provides an exactly solvable model illustrating the physics of how to tune the Lyapunov exponent and the butterfly velocity. We find a new temperature dependence for the butterfly velocity, v_B∝ T, below the temperature scale induced by the coupling to the bath. We also use simple numerics to illustrate the spatial dependence of the Lyapunov exponent and the butterfly velocity, which emphasizes the usefulness of these quantities as tools to diagnose the extent of quantum chaos locally. Since the Lyapunov exponent and the butterfly velocity are both tunable in our models, our work provides a broader platform to test various inequalities proposed for chaotic systems and to study the thermalization of chaotic systems. One possible generalization of our model is to impose a U(1) charge symmetry; one could then study the thermoelectric transport properties of the system <cit.> and see how they are modified by the interaction with the bath. Another interesting direction is to search for a holographic description of the system. Since the Schwarzian action of the original SYK model is shared by the NAdS_2 gravity system, it is natural to ask whether the modified action given in this paper can also be found in some gravitational system. More generally, it is still an open question whether the Lyapunov exponent in gravitational systems can be tuned by a physical process similar to the one described in this work. Similar questions can also be asked about the behavior of the butterfly velocity v_B. We defer these questions to future study.

We thank Wenbo Fu, Yingfei Gu, Chao-Ming Jian, Yi-Zhuang You and Xiao-Liang Qi for helpful discussions. This work is supported by NSFC Grant No. 11325418 and MOST under Grant No. 2016YFA0301600.

Kitaev2 A. Kitaev, talk given at KITP Program: Entanglement in Strongly-Correlated Quantum Matter, 2015:http://online.kitp.ucsb.edu/online/entangled15/kitaev/http://online.kitp.ucsb.edu/online/entangled15/kitaev2/SY S. Sachdev and J. Ye, Gapless spin-fluid ground state in a random quantum Heisenberg magnet, Phys. Rev. Lett. 70, 3339 (1993).Comments J. Maldacena and D. Stanford, Remarks on the Sachdev-Ye-Kitaev model, Phys.Rev. D 94 (2016) 106002.spectrum1 A. M. García-García and J. J. M. Verbaarschot, Spectral and thermodynamic properties of the Sachdev-Ye-Kitaev model, Phys. Rev. D 94, 126010 (2016).spectrum2 Y. Liu, M. A. Nowak and I. Zahed, Disorder in the Sachdev-Yee-Kitaev Model, arXiv:1612.05233.spectrum3 A. M. García-García and J. J. M. Verbaarschot, Analytical Spectral Density of the Sachdev-Ye-Kitaev Model at finite N, arXiv:1701.06593.Liouville D. Bagrets, A. Altland, and A. Kamenev, Sachdev-Ye-Kitaev model as Liouville quantum mechanics, Nucl. Phys. B 911 (2016) 191–205.Liouville2 D. Bagrets, A. Altland, A. Kamenev, Power-law out of time order correlation functions in the SYK model, arXiv:1702.08902.SYK new E. Iyoda and T. Sagawa, Scrambling of Quantum Information in Quantum Many-Body Systems, arXiv:1704.04850.SYK new2 Thomas G.
http://arxiv.org/abs/1705.09818v2
{ "authors": [ "Yiming Chen", "Hui Zhai", "Pengfei Zhang" ], "categories": [ "hep-th", "cond-mat.str-el" ], "primary_category": "hep-th", "published": "20170527131601", "title": "Tunable Quantum Chaos in the Sachdev-Ye-Kitaev Model Coupled to a Thermal Bath" }
Berkeley Center for Theoretical Physics, Department of Physics, University of California, Berkeley, CA 94720 Department of Physics, University of California, Santa Barbara, CA 93106, USA Department of Physics, Boston University, Boston, Massachusetts 02215, USA Photonics Center, Boston University, Boston, Massachusetts 02215, USA Department of Physics, Harvard University, and Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 Department of Physics, Harvard University, Cambridge, MA 02138

We propose a method to identify the direction of an incident Weakly Interacting Massive Particle (WIMP) via induced nuclear recoil. Our method is based on spectroscopic interrogation of quantum defects in macroscopic solid-state crystals. When a WIMP scatters in a crystal, the induced nuclear recoil creates a tell-tale damage cluster, localized to within about 50 nm, with an orientation that correlates well with the direction of the recoil and hence the incoming WIMP. This damage cluster induces strain in the crystal, shifting the energy levels of nearby quantum defects. These level shifts can be measured optically (or through paramagnetic resonance), making it possible to detect the strain environment around the defect in a solid sample. As a specific example, we consider nitrogen vacancy centers in diamond, for which high defect densities and nanoscale localization of individual defects have been demonstrated. To localize the millimeter-scale region of a nuclear recoil within the crystal due to a potential dark matter event, we can use conventional WIMP detection techniques such as the collection of ionization/scintillation. Once an event is identified, the quantum defects in the vicinity of the event can be interrogated to map the strain environment, thus determining the direction of the recoil. In principle, this approach should be able to identify the recoil direction with an efficiency greater than 70% at a false positive rate of less than 5% for 10 keV recoil energies. If successful, this method would allow for directional detection of WIMP-induced nuclear recoils at solid state densities, enabling probes of WIMP parameter space below the solar neutrino floor. This technique could also potentially be applied to identify the direction of particles such as neutrons, whose low scattering cross-section requires detectors with a large target mass.

Directional Detection of Dark Matter using Spectroscopy of Crystal Defects Mikhail Lukin December 30, 2023 ==========================================================================

§ INTRODUCTION

Weakly Interacting Massive Particle (WIMP) dark matter is one of the most compelling dark matter (DM) candidates <cit.>. The weak scale is ultimately responsible for the origin of mass in the Standard Model and it is reasonable that it also sets the scales relevant for DM physics. Theories that attempt to explain the hierarchy problem naturally produce weak scale particles that interact through processes mediated by the Higgs or electroweak gauge bosons. This is true not just in theories such as supersymmetry (which are presently heavily constrained by the LHC) but also in frameworks such as the relaxion, where fermions at the weak scale carrying electroweak quantum numbers are a natural expectation <cit.>. Importantly, WIMPs have a calculable abundance – thermal freeze-out of WIMPs naturally yields a cosmic abundance consistent with the observed DM density.
Experimental work over the past three decades has cut deep into WIMP parameter space, with current experiments probing the possibility that DM may scatter via Standard Model interactions through the Higgs boson. These experiments are soon expected to hit a major background – the coherent scattering of neutrinos from the Sun <cit.>. WIMP DM experiments utilize a variety of handles to reject a number of radioactive backgrounds, such as the fact that these radioactive backgrounds will typically scatter more than once in the detector, unlike the elastic scattering of DM. Unfortunately, the coherent elastic scattering of neutrinos from an atomic nucleus has the same event topology as DM scattering, and the next generation of DM experiments is expected to be sensitive to solar neutrinos. If this background cannot be rejected, WIMP detection would require statistical discrimination of a small WIMP signal over a large background. This implies that the sensitivity of the detectors would only scale as √(V), where V is the volume of the detector. Since WIMP detectors are already at V ∼ m^3, continued progress would rapidly require prohibitively large detectors. One way to reject this background would be to identify the direction of the nuclear recoil induced by the collision of the DM (or neutrino) <cit.>. With such directional detection capability, one can make use of the fact that, due to momentum conservation, when a solar neutrino collides with a nucleus, the recoiling nucleus has to move away from the Sun. One could then reject all events that are pointed away from the known location of the Sun, eliminating the neutrino background. Incident WIMPs are expected to be isotropic; thus, by focusing only on events where the recoil is not along the direction of the Sun, one will be able to look only at events caused by DM. Such a directional detector will suffer a loss of sensitivity of ∼ 50 percent while dramatically reducing the neutrino background. It is thus of great interest to develop techniques to measure the direction of the nuclear recoil induced by a DM/neutrino collision. Not only would this permit continued exploration of WIMP parameter space below the solar neutrino floor where the WIMP can scatter via the Higgs boson <cit.>, but should the WIMP be discovered in the next generation of experiments, directional detection experiments would offer a unique opportunity to measure the DM velocity profile. These measurements may even pave the way to discoveries of theorized galactic structures such as a dark disk or deviations from the naive Maxwell distribution of WIMP velocities.

The technical problem that must be overcome for directional detection is the following. The scattering of DM/neutrino deposits energies ∼ 10 - 30 keV. The direction of the induced nuclear recoil must be established in a detector with a large target mass, to overcome the tiny WIMP/neutrino cross-sections. To accommodate the large target mass without having to resort to enormous detector volumes, it is advantageous for the detector to be a high density material like a solid or a liquid. While there are excellent directional detection techniques in gas-based detectors <cit.>, there are no well established techniques for directional detection in high density materials. Here we propose a new directional detection scheme that can operate at solid state densities, with the ability to accommodate large target masses so that the concept could potentially be applied for DM detection.
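To make the √(V) scaling above explicit, consider a simple counting-statistics sketch (our illustration, assuming Poisson-distributed counts; not part of the original argument). With a DM scattering rate R_DM and an irreducible neutrino rate R_ν per unit detector volume, the expected counts after an exposure time t are S ∝ R_DM V t and B ∝ R_ν V t, so the statistical significance of the WIMP signal scales as

S/√B ∝ (R_DM/√(R_ν)) · √(V t).

Each factor of two in cross-section reach therefore costs a factor of four in exposure V t, whereas rejecting the neutrino events by direction removes B and restores a sensitivity that grows linearly with exposure, up to the ∼ 50 percent directional acceptance loss noted above.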
The basic idea employs crystals with point quantum defects. When a WIMP scatters near one of these defects, the induced nuclear recoil creates a tell-tale damage cluster, localized to within ∼ 50 nm, and with an orientation that correlates well with the direction of the recoil. This damage cluster induces strain in the crystal and this strain shifts the energy levels of the nearby defects. These energy level shifts can be measured by exciting optical transitions, or ground state magnetic resonance spin-flip transitions, making it possible to map the strain environment in the detector crystal with spatial resolution on the order of the defect spacing. To identify the location of a nuclear recoil to a millimeter-scale volume, we can first use conventional WIMP detection approaches such as the collection of ionization/scintillation radiation to identify and localize potential dark matter events <cit.> within the crystal. Once an event is identified and localized, the defects in the vicinity of the event can be interrogated to determine the strain environment, thus identifying the direction of the recoil. See Figure <ref>. If successful, this concept would open a new path to continue the probe of the theoretically well motivated WIMP. The phenomenology of quantum defects described above is found in a number of systems such as nitrogen vacancy (NV) centers <cit.> and/or silicon vacancy (SiV) centers <cit.> in diamond, paramagnetic F-centers in metal halides <cit.> or defects in Silicon Carbide <cit.>.

Figure: Left: Event identified by conventional methods. Right: Section of interest separately studied by superresolution methods.

Our present discussion is focused on NV centers in diamond, motivated by the fact that these defects are well studied, with a number of experimental parameters and interrogation techniques well established. However, technical considerations may make systems such as SiV centers in diamond or divacancies in SiC better suited for a DM search than NV centers. Specifically, the optical transition in SiV centers in diamond has a narrow linewidth at cryogenic temperatures (≈ 5 GHz at 77 K and ≈ 300 MHz at 10 K <cit.>); and its frequency is both sensitive to strain <cit.> and first-order insensitive to electric field. From a materials perspective, silicon carbide may be easier to work with than diamond, since it is commercially available as high-purity single-crystal wafers up to several inches in diameter, and divacancy defects have electronic level structure and coherence times that are very similar to NV centers <cit.>. If the proposed concept proves fruitful, we believe there will be strong motivation to study a wider class of crystal defects to identify optimal systems for specific applications.

The paper is organized as follows. In section <ref>, we discuss the localized damage caused by DM scattering. In section <ref> we present a conceptual overview of how optically detected magnetic resonance (ODMR) of NV centers in diamond can be used to detect the resulting nanoscale strain patterns. In section <ref>, we discuss the technical details of this measurement and evaluate the efficiency of our measurement protocol.

§ CRYSTAL DAMAGE

The elastic collision of a conventional WIMP (mass ∼ 10 GeV) with a nucleus is expected to deposit energies ∼ 10 - 30 keV. This energy is significantly larger than the lattice potential ∼ 10 eV of typical crystals.
The recoiling nucleus scatters with the lattice, creating a localized damage cluster consisting of interstitials and vacancies. The damage cluster created by such a collision can be modeled using a Transport of Ions in Matter (TRIM) simulation <cit.>. We consider a carbon lattice (appropriate for diamond) and input an initial carbon ion with energy ∼ 10 - 30 keV. This initial ion represents the recoiling nucleus. The TRIM simulation captures the effect of this input ion and the result from a typical event is shown in Figure <ref>. These results indicate that one generically obtains about 𝒪(100 - 300) interstitials/defects created by the scattering – this is consistent with the fact that the energy deposits are ∼ 10 - 30 keV, with the lattice potential ∼ 10 eV. Further, the damage trail is well correlated with the initial direction of the input ion. The damage is also asymmetric – we typically see a larger number of dislocations at the end of the damage trail than the beginning. The damage trail itself is localized within ∼ 50 nm. Since the shape of this trail is well correlated with the recoil direction, its spatial characterization will lead to directional resolution of the nuclear recoil and hence the incoming WIMP. It should be kept in mind that once the cluster is created, it is not easily destroyed, since the typical barriers associated with defect migration are ∼ eV, implying thermodynamic stability at operating temperatures (300 K).

This crystal damage creates strain in the lattice. To calculate the induced strain, we follow <cit.>: the strain from a single vacancy/interstitial falls off as 1/r^3, where r is the distance from the defect. The total strain from the damage cluster can be calculated by adding the strain from each individual defect. Therefore the strain Δx/x at a location 30 nm away from a single vacancy is Δx/x ≈ (0.3 nm/30 nm)^3 ≈ 10^-6. This corresponds to a stress P = Y Δx/x ≈ 10^6 Pa, where Y ≈ 10^12 Pa is the Young's modulus of diamond. This stress can be detected using the shift of the zero-magnetic-field transition frequency between the ground-state magnetic sub-levels of an NV center. The stress coupling coefficient <cit.> is 0.03 Hz/Pa, so the transition frequency shift is Δf ≈ 30 kHz. This can be detected via NV ODMR using a standard “clock” measurement protocol, insensitive to magnetic fields, and should be compared to the NV transition linewidth, which is limited by 1/T_1 ≈ 300 Hz at room temperature <cit.>. We note that the spin-1 ground state of the NV center allows determination of the axial and transverse components of the strain relative to the NV axis.

Figure: A typical damage cluster shape from a 30 keV ion injected into a crystal from a TRIM simulation. The cluster is well correlated with the direction of the injected ion. The ion should be thought of as the recoiling nucleus, coming from a WIMP scattering event.

In order to reconstruct the shape of the crystal damage distribution, and thus the direction of the momentum of the scattered WIMP, the diamond should have an NV center density of approximately 1/(30 nm)^3 (which has been demonstrated in bulk diamond samples <cit.>). Superresolution imaging techniques <cit.> can then be used to perform zero-field transition frequency measurements of several individual NV centers near the scattering event location to map the strain profile within the diamond with few nanometer precision.
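To illustrate how such a strain map encodes the track geometry, below is a minimal numerical sketch (our own construction, not the authors' simulation code; the toy defect distribution, the NV grid and the 1 nm distance floor are assumptions) that superposes the 1/r^3 single-defect strain estimate over a damage cluster and converts it into NV zero-field frequency shifts using the quoted 0.03 Hz/Pa coupling and Y ≈ 10^12 Pa.

import numpy as np

# Constants quoted in the text (approximate).
A_LATTICE = 0.3e-9   # m, lattice length scale entering the (a/r)^3 strain estimate
YOUNG = 1e12         # Pa, Young's modulus of diamond
DF_DP = 0.03         # Hz/Pa, NV zero-field stress coupling coefficient

def nv_shift(defects, nv):
    """Zero-field frequency shift (Hz) at one NV site, superposing the
    1/r^3 strain contributions of all defects in the cluster."""
    r = np.linalg.norm(defects - nv, axis=1)
    r = np.maximum(r, 1e-9)              # floor to avoid the divergence as r -> 0
    strain = np.sum((A_LATTICE / r) ** 3)
    return DF_DP * YOUNG * strain        # stress = Y * strain, shift = (dF/dP) * stress

# Toy damage track: 200 defects along ~50 nm, denser towards the track end.
rng = np.random.default_rng(0)
s = 50e-9 * rng.power(2.0, size=200)     # axial positions biased towards the end
track = np.column_stack([s, rng.normal(0, 5e-9, 200), rng.normal(0, 5e-9, 200)])

# NV centers on a grid with ~30 nm spacing around the track.
g = np.arange(-60e-9, 121e-9, 30e-9)
nv_sites = np.array([(x, y, z) for x in g for y in g for z in g])

shifts = np.array([nv_shift(track, nv) for nv in nv_sites])
print(f"NV sites shifted by more than 10 kHz: {np.sum(shifts > 1e4)} of {len(nv_sites)}")

In this toy model, an NV site 30 nm from a single defect reproduces the ∼ 30 kHz shift estimated above, while sites adjacent to the dense end of the cluster are shifted considerably more.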
Such a measurement method will provide good sensitivity to crystal damage, as it leverages the remarkable spin properties of NV centers (long spin lifetime), as well as the stiffness of diamond (large Young's modulus). Note that the density of interstitials/vacancies created by the WIMP scattering is locally a factor of ∼ 100 larger than the NV center density – thus there is a distinct and large local signal for crystal damage, such that the key technical challenge will be identifying the mm^3 region within the crystal where the WIMP scattering event occurred.

Before further describing a detector concept that can measure this damage, let us discuss the correlation between the damage cluster and the recoil direction. This correlation was estimated using the TRIM simulation as discussed above. We find that in nearly 70 percent of events with energy depositions ∼ 10 keV (or larger), there is a clear asymmetry (1:2) in the number of vacancies/interstitials in the beginning versus the end of the damage cluster, enabling head/tail discrimination of the recoil. The false positive rate is estimated to be less than 5 percent. Thus, even with a few events, it is possible to make a statistically robust discovery of DM. These asymmetries degrade significantly below ∼ 1 keV recoils (see Figure <ref>), making this method most useful for identifying the direction of conventional WIMP (mass ⪆ 1 GeV) DM.

§ CONCEPT

The results of the TRIM simulation suggest that with an NV center density of 1/(30 nm)^3, the 𝒪(100) crystal defects created by a WIMP-induced nuclear recoil within ∼ 50 nm produce NV zero-field frequency shifts ∼ 10 kHz, significantly larger than the NV transition linewidth ∼ kHz. Thus, by mapping the strain-induced NV zero-field frequency shifts around the damage cluster using superresolution techniques, the incident WIMP direction can be determined. But how do we find the correct group of NV centers in proximity to the recoil-induced damage? For a practical rate of WIMP scattering events, the crystal will need to have a large volume. The following protocol should allow coarse localization (∼ mm^3) of the WIMP scattering event, making it realistic to identify the NV centers proximal to the damage trail, using a scaled-up version of existing NV ODMR technology. We consider a sectioned detector as shown in Figure <ref>, with each section of thickness ∼ mm (the lateral dimensions can be much larger, potentially ∼ m). Assume there is some collision in this detector – this collision may be due to a WIMP, neutrino or radioactive background. We propose using standard techniques from conventional WIMP detectors to identify these events: for example, through the collection of scintillation/ionization we can identify the small number of single scattering events that could potentially be due to DM/neutrino scatterings. For WIMP scattering cross-sections of interest, we expect 𝒪(10) events within the meter-scale detection volume over the course of about one year, whose direction would then have to be determined. Relying upon conventional WIMP detection techniques, the spatial localization of each of these events can potentially be known to a volume ∼ mm^3 <cit.>. Scintillation in diamond has been observed <cit.>, but its properties need to be investigated in detail in order to evaluate the feasibility of millimeter-scale event localization. Ionization (electron-hole production) in diamond, on the other hand, is very well studied, and is used in a variety of diamond-based radiation and particle detectors <cit.>.
In order to provide the necessary spatial resolution, charge extraction would need to be done with pixelated electrodes <cit.>, which would likely introduce additional radiation backgrounds (see below), but this can potentially be controlled by careful fabrication, characterization, and discrimination <cit.>. The technical challenge then reduces to the identification of the correct set of NV centers within each target ∼ mm^3 volume of interest. Since the crystal damage is stable, we can take significant time (several days) to study each region of interest to identify the direction of the recoiling nucleus. Once we identify the events of interest, the associated mm section of the detector is pulled out for further study. The mm^3 region of interest is interrogated via ODMR to identify the group of NV centers whose zero-field frequencies are significantly shifted due to damage-induced strain. This method is of course diffraction-limited to a resolution ∼ μm, the wavelength of the light. Superresolution optical imaging <cit.> and/or strong magnetic field gradients <cit.> can now be applied to this region for nano-scale resolution and ODMR of individual NV centers. For example, with an applied external magnetic field gradient of ∼ 1 Tesla/cm <cit.>, the induced frequency shift in NV centers separated by ∼ 30 nm is ∼ 10 kHz, larger than the linewidth of the NV center, permitting such resolution. This measurement protocol explains the need for sectioning the detector: resolution of the NV centers below the wavelength of light requires the application of large external magnetic field gradients or precisely shaped, intense optical fields <cit.> that are most easily accomplished when the thickness of the section is not too large. Note that SiV centers in diamond are a promising alternative quantum defect, as they have an optical transition frequency that is sensitive to local crystal strain <cit.>; and could provide nanoscale mapping of local strain in an optical-background-free manner, if the sample is cooled to cryogenic temperature where the strained SiV centers could be spectrally resolved without the need for superresolution imaging.

§ NANOSCALE PROBE OF CRYSTAL STRAIN

As outlined above, once the scattering events of interest are identified and localized to within ∼ mm^3 by conventional WIMP detection methods, the plate (of thickness ∼ mm) in which the event occurred is pulled out and examined. The crystal damage caused by the event is then probed at the nanoscale by mapping the resulting strain on nearby NV centers or other quantum defects.

§.§ Measurement Process

As estimated above, a single vacancy or interstitial creates a stress of ∼ 10^6 Pa at a location 30 nm away; and if there is an NV center at that location, the resulting shift of its ground state zero-field splitting is ∼ 30 kHz. A scattering event produces a damage track with length scale ∼ 50 nm, containing 100-300 such vacancies. Our task is to localize and characterize this damage track, specifically extracting the track asymmetry and therefore initial recoil direction, starting from an initial localization accuracy of ∼ 1 mm^3. For example, the sensitivity to local stress of a single optically resolved NV center can be estimated <cit.> using η = 1/(C · dΔ/dP) · 1/√(τ_coh t) ∼ 100 Pa/√Hz.
Here, C ≈ 10^-2 is the factor that accounts for initialization and readout imperfections, τ_coh ≈ 1 ms is the NV center coherence time (under the magnetic field-insensitive clock sequence), dΔ/dP ∼ 0.03 Hz/Pa is the stress coupling coefficient, and t is the averaging time. We begin the event localization with wide-angle imaging of the 1 mm^3 volume of the detector material, using a CCD or CMOS camera. Each pixel on the camera images a ∼ 1 μm^2 area of the detector. Standard optical techniques can be adapted for imaging point sources within a high index of refraction crystal (e.g., diamond) with a depth of field of about 1 micron. By focusing the excitation laser, we divide the detector into ∼ 1 μm^3 voxels, each containing ∼ 3 × 10^4 NV centers. If one such voxel contains the damage track, then several NV centers within it will exhibit ∼ MPa stress, which is detectable as zero-field frequency shifts with good signal-to-noise after 100 seconds of averaging. Since we are only imaging a 1 μm-thick section of the detector at a time, we have to repeat the imaging 1000 times in order to scan the entire 1 mm^3 volume, which takes a total of ∼ 10^5 seconds – about a day. Having thus localized the damage to a 1 μm^3 voxel, we then employ optical superresolution techniques and/or strong magnetic field gradients, in order to extract the track asymmetry. Such nanoscale imaging will require extra overhead compared to diffraction-limited optical imaging; assuming a measurement time of 10 seconds per NV center, this will take about 3 days. Therefore the entire damage track can be characterized on the time scale of several days.

§.§ Estimated Efficiency

To reject the solar neutrino background, we must determine the initial nuclear recoil direction from the zero-field frequency shifts of the NV centers closest to the damage site. This pattern recognition problem may be approached in many ways, with the effectiveness of a particular method described by its efficiency: the percentage of events whose initial recoil direction is accurately inferred from the damage left in the crystal, for a given false positive rate. To find the maximum efficiency we must consider that a minority of damage tracks are asymmetric in a direction opposite to the initial recoil (asymmetry < 1). Here, we define asymmetry as the ratio of the number of lattice interstitials and vacancies in the end third of the damage to the beginning third. The left side of Figure <ref> shows the distributions of asymmetry for different initial recoil energies in diamond, computed from the TRIM simulations discussed in section <ref>. By cutting events with small asymmetry from consideration, we also remove the majority of events asymmetric in the wrong direction. These results give a maximum efficiency of ∼ 70% and ∼ 90% at a 5% false positive rate for 10 and 30 keV collisions in diamond, respectively. We tested this procedure by computing strain-induced zero-field frequency shifts for randomly placed NV centers for many different damage trails. We then used a simple analysis to infer the initial recoil direction for each damage trail from the NV strain map. At 30 keV our code achieved an efficiency of ∼ 50% with an NV center density of 1/(10 nm)^3.
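As a concrete illustration of the asymmetry cut defined above (a deliberately simplistic sketch, not the analysis code behind the quoted efficiencies; the cut value and the axis-projection input are our assumptions), the head/tail direction can be estimated as follows.

import numpy as np

def head_tail_estimate(defect_s, asymmetry_cut=1.5):
    """Infer the recoil direction along the track axis from defect positions.

    defect_s: 1D array of interstitial/vacancy coordinates (m) projected on
    the fitted track axis. Returns +1 or -1 for the inferred direction, or
    None if the event is too symmetric and is discarded by the cut.
    """
    s = np.asarray(defect_s, dtype=float)
    lo, hi = s.min(), s.max()
    third = (hi - lo) / 3.0
    n_begin = np.count_nonzero(s < lo + third)   # defects in the first third
    n_end = np.count_nonzero(s > hi - third)     # defects in the last third
    asym = n_end / max(n_begin, 1)               # asymmetry, as defined above
    if max(asym, 1.0 / max(asym, 1e-9)) < asymmetry_cut:
        return None        # near-symmetric track: reject to limit false positives
    return 1 if asym > 1.0 else -1               # more damage at the end => +s

Raising asymmetry_cut lowers the false positive rate at the price of efficiency, which is the trade-off behind the quoted ∼ 70% and ∼ 90% figures.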
We expect that use of a state-of-the-art pattern recognition algorithm will greatly increase the efficiency. After cutting out events with low asymmetry, we reject the solar neutrino background by removing events that have an initial recoil direction pointing away from the Sun. However, not all damage directions are well-correlated with the initial recoil direction. The right side of Figure <ref> shows this relationship for different recoil energies. For lower energy events, slightly more than half the data will be cut.

§.§ Preparation and Backgrounds

Crystal defects (such as the NV center) are manufactured during chemical vapor deposition (CVD) and/or through irradiation followed by subsequent annealing of the crystal <cit.>, where defect densities as high as 1/(5 nm)^3 have been demonstrated. One may worry that the process of creation of this large density of NV centers might introduce additional backgrounds. For example, electron irradiation to create vacancies must deliver energies ∼ 20 eV to displace a nucleus, which can be accomplished with electrons of energy ∼ MeV. These electrons could be captured by the nuclei in the crystal (carbon/nitrogen), rendering them unstable and causing an additional radioactive background. The existence of this background would depend upon the choice of crystal and quantum defects used as strain sensors: for concreteness, we consider diamond with NV centers. First, electron capture proceeds through the weak interaction, leading to a capture cross-section ∼ 10^-43 cm^2 for ∼ MeV electrons. Thus, to create an NV center density of 1/(30 nm)^3 in a meter-scale sample, we expect ∼ 1000 electron capture events. These capture events can, for example, convert Nitrogen-14 to Carbon-14, which can then subsequently undergo radioactive decay. Considering the life-time of Carbon-14 ∼ 6000 years, we expect ∼ 𝒪(1) events per year. Of course, these decays by themselves are not a background for our WIMP directional detection scheme: Carbon-14 decays through beta emission; hence the resulting electron track would distinguish these events from those of WIMP-induced nuclear recoil and strain. A problem would arise only if there were an enormous number of such decays, which would lead to a large radioactive background that produces a large number of crystal defects. But, since the expected number of electron captures is low (∼ 1000 in a m^3 sample), the production process is not likely to create a major background. It should be noted that ∼ MeV electrons are not likely to excite unstable nuclear levels in these systems – but even if they are produced, as long as those states decay before the annealing time, they will not be a problem.

Crystal damage caused by other radioactive sources is a potential cause for concern. If the event rate from these backgrounds is large enough to cause crystal damage within the initial localized region (∼ mm^3), we will not be able to distinguish the damage caused by radioactivity from DM/neutrino scattering. Note that this problem arises only for nuclear recoils – the damage trail caused by electron recoils, where the electron slowly loses energy ∼ 10 eV/angstrom over a long distance, is significantly different from the DM/neutrino induced nuclear recoil, where the deposited energy is lost within ∼ 30 nm.
Since the lattice potentials for a typical crystal are ∼ 10 - 30 eV, the damage trail from electron recoils may cause defects with a density 𝒪(few)/(30 nm)^3 spread over a longer distance than the DM/neutrino induced nuclear recoils that lead to defect densities ∼ 𝒪(100 - 300)/(30 nm)^3. For nuclear recoil backgrounds to be insignificant, we would need less than one damage trail within the initial localized volume ∼ mm^3, implying that the overall total nuclear recoil background be less than ∼ 10^9 events/year in a detector of volume ∼ m^3. Background rates smaller than this have been achieved in bulk volume detectors <cit.>, especially in the inner parts of the detector where self-shielding effects are important. It is also important to note that the effects of this radioactive background quickly become sub-dominant with improvements in the initial localization of the event.

Another factor to consider is the inherent lattice strain invariably present in the detector crystal. Such inherent strain is known to be present in diamond crystals, where it gives rise to inhomogeneous broadening of the NV ^3A_1 → ^3E_1 electric dipole optical transition zero-phonon line <cit.>. For the typical inhomogeneous optical transition linewidth ∼ GHz, we can estimate the lattice strain ∼ MPa, which is on the same order as the expected strain caused by the target WIMP-induced nuclear recoil damage. However, it is possible to reduce this inherent lattice strain in diamond by a high-temperature anneal preparation process, which results in a two order-of-magnitude reduction of the inhomogeneous linewidth <cit.>. Additional discrimination is provided by the fact that the damage track due to a WIMP scattering event is very local (∼ 50 nm), whereas the inherent strain is likely to have a much larger length scale, even in polycrystalline diamond <cit.>. Careful detector preparation and strain spatial discrimination may allow us to use polycrystalline diamond with grain sizes on the order of hundreds of microns to millimeters, which would significantly simplify detector fabrication.

§.§ Probing local crystal strain

A natural way to detect local crystal damage is via the resulting strain distribution. This distribution should be modelled using SRIM simulations and elastic theory, but, as a start, let us estimate the magnitude of the effect. A single vacancy creates a displacement field that falls off as 1/r^2 <cit.>. The strain field is the gradient of that, which means that strain falls off as 1/r^3. Therefore the strain at a location 30 nm away from a single vacancy is Δx/x ≈ (0.3 nm/30 nm)^3 ≈ 10^-6.

Figure: Local probe of a WIMP scattering event.

This corresponds to a stress of P = Y Δx/x ≈ 10^6 Pa, where Y ≈ 10^12 Pa is the Young's modulus of diamond. This stress can be detected using the shift of the transition frequency between the ground-state magnetic sublevels of an NV center. The stress coupling coefficient <cit.> is 0.03 Hz/Pa, so the transition frequency shift is Δf ≈ 30 kHz. This can be detected using a standard “clock” sequence, insensitive to magnetic fields, and should be compared to the transition linewidth, which is limited by 1/T_1 ≈ 1 kHz.
We note that the spin-1 ground state of the NV center allows determination of the axial and transverse components of the strain relative to the NV axis. In order to reconstruct the shape of the crystal damage distribution, and thus the direction of the momentum of the scattered WIMP, the diamond should have an NV center density of approximately 1/(30 nm)^3, and superresolution techniques can be used to perform magnetic-sublevel splitting measurements of a number of NV centers near the scattering event location. This technique appears to have good sensitivity, and it leverages the remarkable spin properties of NV centers (long spin lifetime), as well as the stiffness of diamond (large Young's modulus).

§.§ Probing local electric field

The vacancies and interstitials created by the scattering event give rise to a redistribution of charge and thus an electric dipole moment inside the crystal. To estimate the resulting electric field, let us assume an uncompensated charge of a single electron; then 30 nm away the electric field is 3 kV/cm (note that the dielectric constant of diamond is 5.7). We can once again detect this electric field by measuring the transition frequency between the ground-state magnetic sublevels of an NV center. The electric field coupling coefficient (dipole moment) of the NV center is 17 Hz/(V/cm) <cit.>, so the transition frequency shift is 50 kHz. This is significantly smaller than the shift due to crystal strain, but still larger than the linewidth. In addition, charges are rarely uncompensated inside a crystal, especially in the presence of defects and the laser light necessary for interrogation (this light excites and redistributes electrons out of trap sites, shuffling charge around). Therefore probing local electric fields created by crystal damage is likely to be less robust than probing strain.

§.§ Probing local magnetic properties

A more speculative idea is to probe the magnetic properties of the quasi-1D chain of vacancies created by the WIMP scattering event. In a metal halide crystal, for example, F-centers are known to be EPR-active. The EPR spectrum of the chain of vacancy spins will depend on the angle between the applied magnetic field and the axis of the chain (i.e., the direction of the incoming WIMP momentum).

§ CONCLUSION

Directional detection via crystal defects offers a promising approach to probe WIMP dark matter with mass below ∼ 30 GeV, where the dominant neutrino background arises from the solar neutrino flux <cit.>. For the range of energies deposited by the scattering of such WIMPs, there is significant localized crystal damage that correlates well with the direction of the recoiling nucleus. This crystal damage is thermodynamically stable for long periods of time. Since we will at most have to resolve the direction of a handful of neutrino/WIMP events (utilizing conventional WIMP detection techniques to veto the other, far more dominant backgrounds), it is feasible to expend significant time (a few days per event) to map this localized damage. As discussed above, the strain induced by WIMP-induced crystal damage should be measurable using nanoscale sensors such as NV centers in diamond, where we leverage the fact that locally (within 50 nm) the expected damage from dark matter scattering is 𝒪(100-200) times the typical mean density of lattice vacancies in diamond. The efficiency of our technique depends on the energy deposited in the scattering event. Presently, it is of most use for WIMPs with masses ⪆ 1 GeV.
The direct detection of lighter WIMPs, where the solar neutrino background is bigger, is an active area of research <cit.> and it would be interesting to investigate the efficacy of the crystal defect approach within these emerging detection schemes. In particular, since the strain from localized crystal damage drops off as ∼ 1/r^3, a larger density of NV centers might be able to resolve the weaker damage caused by light WIMPs. But we would then have to identify this damage cluster by probing a larger number of NV centers, significantly increasing the time necessary to perform the measurement. This time could be reduced if the position resolution of the detection scheme that identifies the putative WIMP/neutrino events was improved. In our study, we assumed a position resolution ∼ mm^3. The number of NV centers that have to be interrogated to resolve the damage track scales linearly with this resolution volume. Thus, improvements in the position resolution will drastically cut down on the time necessary to identify the damage track, potentially permitting the use of higher NV center densities to study lighter WIMPs. For heavier WIMPs, where isotropic backgrounds dominate, directional detection would still be interesting since one could try to leverage the annual modulation of the directional signal from the motion of the solar system in relation to the galactic dark matter flow.

In addition to dark matter detection, these nanoscale sensors may also be of use in improving the angular resolution of bulk detectors necessary to detect particles with low cross-section, for example, neutrons. The kinematics of neutron scattering is similar to that of WIMP-nucleon interactions, and thus similar crystal damage ought to exist. The study of such damage tracks could be used to calibrate future dark matter detectors and may also be of use in the context of non-proliferation. While NV centers in diamond are experimentally well studied, the phenomenology of crystal damage that can subsequently be probed by a nanoscale sensor is valid for a number of other systems. It would be interesting to explore a variety of such defects and identify a broader class of materials that might be more optimized for detecting a variety of other particles, including dark matter.

We would like to thank the Heising-Simons Foundation for organizing a workshop at UC Berkeley where this idea was conceived; and also the Moore Foundation for organizing a meeting where the idea was further explored. We also thank Dmitry Budker, Daniel McKinsey, Matthew Pyle, Alp Sipahigil, Ruffin Evans, and Meesala Srujan for helpful conversations. SR was supported in part by the NSF under grants PHY-1417295 and PHY-1507160, the Alfred P. Sloan Foundation grant FG-2016-6193, the Simons Foundation Award 378243 and the Heising-Simons Foundation grant 2015-038. AOS was supported in part by the Heising-Simons Foundation grant 2015-039 and the Alfred P. Sloan Foundation grant FG-2016-6728.

Note Added: While this work was in preparation, <cit.> appeared. While both ideas consider the use of crystal defects for dark matter detection, our idea differs significantly from <cit.>. In <cit.>, the detection of the small number of crystal defects produced by the scattering of light dark matter is proposed as a way of searching for sub-GeV mass dark matter.
Directional information is inferred through anisotropies in the creation of such defects. Our focus is on the use of existing crystal defects to detect the direction of conventional WIMP dark matter; we also rely on existing WIMP techniques to identify the regions of interest within the crystal. Thus in our approach, the interrogation of crystal defects is restricted to a small number of events, as opposed to the continuous scanning necessary in <cit.>.

Gershtein:2013iqa Y. Gershtein et al., “Working Group Report: New Particles, Forces, and Dimensions,” arXiv:1311.0299 [hep-ex]. Graham:2015cka P. W. Graham, D. E. Kaplan and S. Rajendran, “Cosmological Relaxation of the Electroweak Scale,” Phys. Rev. Lett. 115, no. 22, 221801 (2015) doi:10.1103/PhysRevLett.115.221801 [arXiv:1504.07551 [hep-ph]]. Craig:2015yvw N. Craig, “Implications of SUSY Searches for Physics Beyond the Standard Model,” arXiv:1512.06819 [hep-ph]. Strigari:2009bq L. E. Strigari, “Neutrino Coherent Scattering Rates at Direct Dark Matter Detectors,” New J. Phys. 11, 105011 (2009) doi:10.1088/1367-2630/11/10/105011 [arXiv:0903.3630 [astro-ph.CO]]. Mayet:2016zxu F. Mayet et al., Phys. Rept. 627, 1 (2016) doi:10.1016/j.physrep.2016.02.007 [arXiv:1602.03781 [astro-ph.CO]]. Nygren:2013nda D. R. Nygren, “Columnar recombination: a tool for nuclear recoil directional sensitivity in a xenon-based direct detection WIMP search,” J. Phys. Conf. Ser. 460, 012006 (2013) doi:10.1088/1742-6596/460/1/012006. Battat:2016pap J. B. R. Battat et al., “Readout technologies for directional WIMP Dark Matter detection,” arXiv:1610.02396 [physics.ins-det]. Jelezko F. Jelezko and J. Wrachtrup, “Single defect centers in Diamond: A review,” Phys. Stat. Sol. (a) 203, No. 13, 3207-3225 (2006) doi:10.1002/pssa.200671403. Koehl W. F. Koehl, B. B. Buckley, F. J. Heremans, G. Calusine and D. A. Awschalom, “Coherent Control of defect spins in SiC at room temperature,” Nature 479, 84-87 (2011) doi:10.1038/nature10562. MetalHalide Y. Ruedin, P. A. Schnegg, C. Jaccard and M. A. Aegerter, “EPR Optical Detection of F Centre Pairs in Alkali Halides 1. Pumping Cycle Kinetics and Characteristics of the Resonances,” Phys. Stat. Sol. (b) 54, No. 2, 565-576 (1972) doi:10.1002/pssb.2220540220. Agnese:2013rvf R. Agnese et al. [CDMS Collaboration], “Silicon Detector Dark Matter Results from the Final Exposure of CDMS II,” Phys. Rev. Lett. 111, no. 25, 251301 (2013) doi:10.1103/PhysRevLett.111.251301 [arXiv:1304.4279 [hep-ex]]. Aprile:2016swn E. Aprile et al. [XENON100 Collaboration], “XENON100 Dark Matter Results from a Combination of 477 Live Days,” arXiv:1609.06154 [astro-ph.CO]. McKinsey:2016xhn D. N. McKinsey [LZ Collaboration], “The LZ dark matter experiment,” J. Phys. Conf. Ser. 718, no. 4, 042039 (2016) doi:10.1088/1742-6596/718/4/042039. SRIM J. F. Ziegler, M. D. Ziegler and J. P. Biersack, “SRIM - The Stopping and Range of Ions in Matter,” Ion Implantation Press (2008). Strain J. D. Eshelby, “Distortion of a Crystal by Point Imperfections,” Journal of Applied Physics 25, 255 (1954). Togan2011 E. Togan, Y. Chu, A. Imamoglu and M. D. Lukin, “Laser Cooling and real-time measurement of the nuclear spin environment of a solid-state qubit,” Nature 478, 497-501 (2011) doi:10.1038/nature10528. Bassett1 L. C. Bassett, F. J. Heremans, C. G. Yale, B. B. Buckley and D. D. Awschalom, “Electrical Tuning of Single Nitrogen Vacancy Center Optical Transitions Enhanced by Photoinduced Fields,” Phys. Rev. Lett. 107, 266403 (2011) doi:10.1103/PhysRevLett.107.266403. Dima1 A. Jarmola, A.
Berzins, J. Smits, K. Smits, J. Prikulis, F. Gahbauer, R. Ferber, D. Erts, M. Auznish and D. Budker, “Longitudinal spin-relaxation in nitrogen-vacancy centers in electron irradiated diamond,” Appl. Phys. Lett. 107, 242403 (2015) doi:10.1063/1.4937489. Jaskula J. Jaskula, E. Bauch, S. Arroyo-Camejo, M. D. Lukin, S. W. Hell, A. S. Trifonov and R. L. Walsworth, “Superresolution optical magnetic imaging and spectroscopy using individual electronic spins in diamond,” Optics Express 25, 11048-11064 (2017) doi:10.1364/OE.25.011048. Arai K. Arai, C. Belthangady, H. Zhang, N. Bar-Gill, S. J. DeVience, P. Cappellaro, A. Yacoby and R. L. Walsworth, “Fourier Magnetic imaging with nanoscale resolution and compressed sensing speed-up using electronic spins in diamond,” Nature Nanotechnology 10, 859-864 (2015) doi:10.1038/NNANO.2015.171. Kucscko G. Kucsko, P. C. Maurer, N. Y. Yao, M. Kubo, H. J. Noh, P. K. Lo, H. Park and M. D. Lukin, “Nanometre-scale thermometry in a living cell,” Nature 500, 54-58 (2013) doi:10.1038/nature12373. Bunting:2017net P. C. Bunting, G. Gratta, T. Melia and S. Rajendran, arXiv:1701.06566 [hep-ph]. Derenzo:2016fse S. Derenzo, R. Essig, A. Massari, A. Soto and T. T. Yu, arXiv:1607.01009 [hep-ph]. Essig:2016crl R. Essig, J. Mardon, O. Slone and T. Volansky, Phys. Rev. D 95, no. 5, 056011 (2017) doi:10.1103/PhysRevD.95.056011 [arXiv:1608.02940 [hep-ph]]. Essig:2015cda R. Essig, M. Fernandez-Serra, J. Mardon, A. Soto, T. Volansky and T. T. Yu, JHEP 1605, 046 (2016) doi:10.1007/JHEP05(2016)046 [arXiv:1509.01598 [hep-ph]]. Graham:2012su P. W. Graham, D. E. Kaplan, S. Rajendran and M. T. Walters, Phys. Dark Univ. 1, 32 (2012) doi:10.1016/j.dark.2012.09.001 [arXiv:1203.2531 [hep-ph]]. Essig:2011nj R. Essig, J. Mardon and T. Volansky, Phys. Rev. D 85, 076007 (2012) doi:10.1103/PhysRevD.85.076007 [arXiv:1108.5383 [hep-ph]]. Guo:2013dt W. Guo and D. N. McKinsey, Phys. Rev. D 87, no. 11, 115001 (2013) doi:10.1103/PhysRevD.87.115001 [arXiv:1302.0534 [astro-ph.IM]]. Childress2014 L. Childress, R. Walsworth and M. Lukin, Physics Today 67(10), 38-43 (2014). Christle2017 D. J. Christle, P. V. Klimov, C. F. de las Casas, K. Szász, V. Ivády, V. Jokubavicius, J. ul Hassan, M. Syväjärvi, W. F. Koehl, T. Ohshima, N. T. Son, E. Janzén, Á. Gali and D. D. Awschalom, arXiv:1702.07330 (2017). Chu2014 Y. Chu, N. P. de Leon, B. J. Shields, B. Hausmann, R. Evans, E. Togan et al., Nano Letters 14(4), 1982-1986 (2014). Aprile:2015uzo E. Aprile et al. [XENON Collaboration], JCAP 1604, no. 04, 027 (2016) doi:10.1088/1475-7516/2016/04/027 [arXiv:1512.07501 [physics.ins-det]]. Trusheim2016 M. E. Trusheim and D. Englund, New Journal of Physics 18(12), 123023 (2016). Tommer R. Budnik, O. Chesnovsky, O. Slone and T. Volansky, arXiv:1705.03016. Miller1966 T. G. Miller, Nuclear Instruments and Methods 43(2), 338-342 (1966). CDMS2012 CDMS Collaboration, Z. Ahmed et al., Appl. Phys. Lett. 103, 164105 (2013). Tapper2000 R. J. Tapper, Reports on Progress in Physics 63(8), 1273-1316 (2000). Rebai2015 M. Rebai et al., Journal of Instrumentation 10(3), C03016 (2015). Hepp2014 C. Hepp et al., Physical Review Letters 112(3), 036405 (2014). Evans2016 R. E. Evans et al., Physical Review Applied 5(4), 44010 (2016). Sternschulte1994 H. Sternschulte et al., Physical Review B 50(19), 14554-14560 (1994).
http://arxiv.org/abs/1705.09760v1
{ "authors": [ "Surjeet Rajendran", "Nicholas Zobrist", "Alexander O. Sushkov", "Ronald Walsworth", "Mikhail Lukin" ], "categories": [ "hep-ph", "quant-ph" ], "primary_category": "hep-ph", "published": "20170527034105", "title": "Directional Detection of Dark Matter using Spectroscopy of Crystal Defects" }
Coverage and Spectral Efficiency of Indoor mmWave Networks with Ceiling-Mounted Access Points Fadhil Firyaguna, Jacek Kibiłda, Carlo Galiotto, Nicola Marchetti CONNECT Centre, Trinity College Dublin, Ireland {firyaguf, kibildj, galiotc, nicola.marchetti}@tcd.ie Accepted 2017 June 22. Received 2017 June 21; in original form 2017 April 21 ===================================================================================================================================================================================

Provisioning of high throughput millimetre-wave signal to indoor areas that potentially serve a large number of users, such as transportation hubs or convention centres, will require dedicated indoor millimetre-wave access point deployments. In this article, we study dense deployments of millimetre-wave access points mounted on the ceiling, and illuminating selected spots on the ground with the use of fixed directional antennas. In this setup, the main factor limiting signal propagation is blockage by human bodies. We evaluate our system under a number of scenarios that take into account the beamwidth of the main-lobe, access point density, and positioning of the mobile device with respect to the user's body. We find that both coverage and area spectral efficiency curves exhibit non-trivial behaviour which can be classified into four regions related to the selection of access point density, beamwidth, and height values. Furthermore, we observe a trade-off in beamwidth design, as the optimal beamwidth maximizes either coverage or area spectral efficiency, but not both. Finally, when we consider different body shadowing scenarios, our network design optimizes coverage or area spectral efficiency performance towards either devices held in hand or worn directly against the body, as each of the scenarios requires mutually exclusive settings of access point density and beamwidth. millimetre-wave networks, ultra-dense networks, self-body blockage.

§ INTRODUCTION

According to the taxonomy provided by ETSI <cit.>, mmWave spectrum spans frequencies from 50 GHz to 300 GHz. Systems that could provide reliable communications over these frequencies attract great attention, as the said frequencies offer much wider bandwidths at shorter wavelengths, in comparison to microwave frequencies. While wider bandwidth may be directly translated to increased link throughput, the shorter wavelength may allow networks to take greater advantage of techniques that increase power concentration at the receiver and spatial separation between the transmitters, resulting in capacity gains. Coarse estimates provided by ETSI show that, even with a single antenna, a 500 MHz 16-QAM mmWave link may achieve over 1 Gbps of throughput. This signifies that, if mmWave systems are shown to be technically and commercially feasible, they could be used to address the capacity objectives of 5G.

Yet, cellular systems that utilize mmWave frequencies will likely be providing coverage that is confined to streets and, more generally, outdoor areas only, as mmWave signals do not propagate well through physical objects <cit.>. This creates a situation in which an independent tier of mmWave access points (AP) would be required to ensure even basic coverage over indoor areas that serve potentially large numbers of end users, such as concert halls, transportation hubs, or convention centres.
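As a back-of-the-envelope check of the ETSI link-rate estimate quoted earlier in this section (our own arithmetic, not taken from the ETSI document): 16-QAM carries log_2(16) = 4 bits per symbol, so at a Nyquist symbol rate of one symbol per Hz a 500 MHz channel supports

R ≈ 500 MHz × 4 bit/symbol = 2 Gbps

of raw throughput; even allowing ∼ 50% for coding and protocol overhead, the link rate comfortably exceeds 1 Gbps.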
State-of-the-art literature on mmWave communications has shown that mmWave deployments can be a source of high bit-rate signal for indoor users (see, for example, <cit.>) but, as we will discuss in the following section, it has not provided much in the way of network-level design and radio access infrastructure deployment. In this paper we close this gap by studying the performance effects of deployment densification of ceiling-mounted mmWave access points with highly directional antennas over a confined area. In our scenario mmWave access points are mounted on the ceiling or walls to form a grid-like pattern and set to illuminate selected spots on the ground. In this case, and given the significantly shorter distances between the AP and user equipment (UE) than in an outdoor scenario, the main factor limiting signal propagation is blockage by human bodies, which has been shown to introduce as much as 40 dB of attenuation to the propagating mmWave signal (see, for example, <cit.>). Moreover, the potential lack of fixed physical obstructions such as inner walls may result in interference between adjacent AP, despite the usage of directional transmissions. Effectively, deploying such networks requires understanding of the relationship between basic design parameters, such as AP density, main-lobe width, or transmit power, and the propagation features of mmWave signals.

What we find is that both the coverage and area spectral efficiency (ASE) curves display non-trivial behaviour which can be classified into four regions related to the selection of AP density, beamwidth and height values. Furthermore, we find that there is a trade-off in beamwidth design, as the optimal beamwidth maximizes either coverage or ASE, but not both. This trade-off gets more complicated when we consider an indoor mmWave scenario where the human body introduces significant attenuation to the propagating signal which cannot be fully compensated for with handovers (as we show in the analysis of the cell association policy). To better understand this we compare the coverage and ASE for two scenarios of human body shadowing: a UE operated in front of the user (UE in hand) and a UE located in the pocket or carried as a wearable (UE in pocket). In the former scenario, the peak coverage requires that we use denser deployment and smaller beamwidths, which is shown to be beneficial also to the achieved ASE.[The corresponding ASE achieved with the optimal beamwidth for coverage.] In the latter scenario, the peak coverage requires that we use lower deployment densities and larger beamwidths, although this configuration is not optimal for the achieved ASE. In what follows we provide an overview of the related literature, a description of our system model, and an in-depth analysis of the numerical results obtained, with lessons learnt on the design of dense indoor mmWave networks.

§ RELATED WORK

Our goal is to study the performance effects of deployment densification of ceiling-mounted mmWave access points with highly directional antennas over a confined area. While the state-of-the-art literature has not addressed this topic directly, there are various other well-studied subjects, such as network densification, which provide us with relevant conclusions. Network densification is key to increasing the capacity of conventional mobile networks, as spectrum designated for cellular communications in microwave frequencies is relatively scarce.
In the mmWave frequencies, where spectrum is in abundance but adverse propagation conditions limit the signal penetration, network densification may be used to shorten the physical distance between the transmitters and receivers, ramping up the signal level at the receivers' input. Indeed, dense mmWave networks have been shown to be an attractive deployment option for outdoor urban areas <cit.>. In <cit.> it has been shown that optimal operation of a wide-area mmWave system requires a deployment that is dense enough to ensure line-of-sight conditions from at least a few transmitters. Lower density deployments result in significantly lower performance due to non-line-of-sight operation, while higher density deployments lead to an increase in interference which deteriorates the system's performance. Wide-area cellular systems based on mmWave frequency bands also require extensive indoor deployments, as mmWave signals do not penetrate well through the majority of materials <cit.>. In fact, as early as 2011, WiGig, in cooperation with IEEE 802.11, proposed a PHY/MAC layer that was dedicated towards wireless local area operation in mmWave frequency bands (see <cit.>). The proposed technology was integrated with WiFi standards operating in microwave frequency bands, allowing for a graceful fallback to microwave spectrum operation when needed (see <cit.>). A number of research studies have confirmed the technology to be capable of delivering Gbit/s link throughputs over a range of up to 10 metres in line-of-sight conditions (see, for example, <cit.>). However, the network-level performance of mmWave indoor deployments, such as WiGig (or 802.11ad as it is currently known), remains largely unknown.

In a mmWave indoor scenario, characterized by much smaller distances between access points and users, the main factor limiting deployment options are blockages by physical objects such as human bodies. Human body blockage was shown to cause severe signal attenuation (as high as 40 dB) that reduces the spectral efficiency gains obtained from operation over the larger bandwidths available in mmWave frequencies (see <cit.>). In small enclosed areas this detrimental effect of body-related shadowing can be at least partially mitigated by application of reflective materials to vertical surfaces and usage of signal relays (see <cit.>). However, in large open indoor areas these may not necessarily be available and, moreover, the lack of fixed physical obstructions such as walls may actually lead to significant interference between adjacent access points, requiring that the mmWave link performance is considered from a network-level perspective. In <cit.> the trade-off between the received signal strength and the probability of blockage when deciding on the transmit antenna height is reported. Simply shortening the distance to the receiver by lowering antenna heights (thereby reducing distance-dependent pathloss) yields a greater chance of the signal being blocked by the human body, especially in crowded areas <cit.>. This trade-off can be exploited to study the optimal altitude for signal-providing low altitude platforms <cit.>, such as quadcopters or balloons, or urban outdoor cellular deployments with blockages from human bodies <cit.>. Furthermore, as it was shown in <cit.>, an increase in human body blockage loss increases coverage inequality in the system, as receivers with poor coverage observe a further reduction to coverage, while good coverage receivers see their coverage being improved.
In this scenario, whether a drop or an increase in coverage is observed depends on whether the human body predominantly shadows the serving transmitter (poor-coverage users) or the interferers (good-coverage users). In <cit.>, which studies a device-to-device indoor mmWave communications scenario, it is shown that, under the assumption of a random direction of the interferer's main-lobe, highly directional beams will be required to maintain Gbit/s-class links in crowded indoor areas.

Despite these detailed insights on the impact of heights, fixed beamwidths and body blockage on the performance of both conventional and mmWave links, inspected both in isolation and in the network context, still little is known about the coverage and ASE trade-offs in the densification of ceiling-mounted mmWave access points. In the following, building on the state-of-the-art literature for mmWave network modelling <cit.>, we set up a system model that allows us to inspect the trade-offs between peak coverage and ASE given a variety of blockage scenarios and cell association strategies.

§ SYSTEM MODEL

The considered environment is an indoor confined area where the main obstacles for mmWave signal propagation are human bodies, i.e., we assume a scenario with no corridors or walls, such as theatres and convention centre halls. The AP are deployed on a hexagonal grid throughout the indoor venue, and they are installed on the ceiling at a height h_AP above the UE level, with fixed directional antennas illuminating the floor below. We consider a UE randomly located in the cell at the centre of the venue. The UE is associated with the serving AP for the downlink transmission by a given cell association strategy defined in Section <ref>.

§.§ Directivity Gain

We assume AP utilize directional transmission, while the UE utilize omnidirectional reception. As in <cit.>, the antenna pattern is approximated by a cone of uniform gain representing the main-lobe attached to a single “bulb" representing the side-lobe, as illustrated in Fig. <ref>, where M is the main-lobe directivity gain, m is the side-lobe gain, θ_BW is the beamwidth of the main-lobe, A is the area of the spherical cap, and S is the surface area of the sphere. The directivity gains are then a function of the beamwidth, normalized over a given spherical surface as in:

M · A/S + m · (S-A)/S = 1,

where the area of the cap is given by A = 2 π r^2 (1 - cos(θ_BW/2)), and the sphere surface area is S = 4 π r^2. Thus, fixing the side-lobe gain m, we can calculate the main-lobe gain as a function of the beamwidth:

M(θ_BW, m) = (2 - m (1 + cos(θ_BW/2))) / (1 - cos(θ_BW/2)).

The UE receives the maximum directivity gain M of an AP when the UE is positioned in the illumination area of the main-lobe of that AP, i.e., the UE is inside the projected circle of the main-lobe of radius:

r_M = h_AP · tan(θ_BW/2),

as illustrated in Fig. <ref>. Otherwise, the directivity gain is the AP side-lobe gain m (as shown in Fig. <ref>).
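To make these antenna-geometry relations concrete, the following short sketch evaluates the main-lobe gain and the illuminated radius (Python, for illustration only; the paper's own simulator is written in MATLAB, see the Appendix). The -10 dB side-lobe gain (m = 0.1 linear) matches the value used later in the simulation setup.

```python
import numpy as np

def main_lobe_gain(theta_bw, m=0.1):
    # M from M*A/S + m*(S-A)/S = 1, with cap fraction A/S = (1 - cos(theta_bw/2))/2
    c = np.cos(theta_bw / 2.0)
    return (2.0 - m * (1.0 + c)) / (1.0 - c)

def main_lobe_radius(theta_bw, h_ap):
    # radius of the circle illuminated by the main-lobe, r_M = h_AP * tan(theta_bw/2)
    return h_ap * np.tan(theta_bw / 2.0)

theta = np.deg2rad(41.0)                     # a beamwidth discussed later in the paper
print(main_lobe_gain(theta, m=0.1))          # linear main-lobe gain M (about 28.5)
print(main_lobe_radius(theta, h_ap=10.0))    # illuminated radius (about 3.7 m)
```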
§.§ Self-Body Blockage

In our scenario, the only source of blockage is a human body. Body blockage can cause up to 40 dB of attenuation to the penetrating signal <cit.>. The main factor that describes the extent to which the human body shadows signals to/from the UE is the UE's position with respect to the body. This position is determined by two parameters: d_toBody and d_topHead (as shown in Fig. <ref>). The first one determines the distance to the body and how wide the signal obstruction is, e.g., zero distance could represent a scenario where the device is held in a pocket and the body obstructs half of the field of view, while a distance of 30 cm could represent a scenario where a user is browsing the Internet and the body obstructs a narrower area. The second parameter determines the amount of body obstruction in the vertical dimension. Given the body blockage and our ceiling-mounted deployments, we can construct a model of user-device shadowing as depicted in Fig. <ref>. From the geometry of the model, we define r_blockFree as the radius of the self-block free zone:

r_blockFree = h_AP · d_toBody / d_topHead,

where UE inside this zone are never obstructed by the user's body, while UE outside of it are obstructed whenever the user's body is between the UE and the AP. Now, assuming uniform body orientation, the probability of the user's body obstructing the AP's signal (self-block probability) is:

P_BB = arctan(w_body / (2 · d_toBody)) / π.

§.§ Signal-to-Interference-Noise Ratio

In this work, we consider the following path loss model:

L(d, h_AP) = L_0 · R(d, h_AP)^-α,

where L_0 is the path loss at 1 metre distance under free-space propagation, α is the attenuation exponent, d is the projection of the distance on the horizontal plane (2D distance) from the cell centre to the UE, and R(d, h_AP) is the Euclidean distance from the AP to the UE. Based on the assumptions made above, we can express the SINR at a UE as:

SINR = G_i · L(d_i, h_AP) · B_i / (N_0 / P_TX + ∑_j ∈ 𝒜∖{i} G_j · L(d_j, h_AP) · B_j),

where 𝒜 represents the set of all AP in the system, d_i is the distance to the serving AP i ∈ 𝒜, G_i ∈ {m, M} is the directivity gain of AP i, B_i ∈ {L_body, 1} is the body attenuation for the link between the reference user and AP i (L_body is the attenuation loss produced by the body), N_0 is the thermal noise power, and P_TX is the transmit power. Note that G_i and B_i are random variables whose distributions are functions of the system parameters (θ_BW, h_AP, d_toBody, d_topHead, w_body) and the distance d_i.

In our scenario, we assume there are no physical obstructions to the propagating signal other than the user's body; in addition, we consider the reflections from the ceiling and ground to be negligible, which may be considered a reasonable modelling assumption since, as reported in <cit.>, several materials used for ceiling and flooring surfaces produce a significant attenuation in the reflected signal.
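As a minimal illustration of how the pieces of this system model combine, the sketch below computes one SINR sample for a UE under minimum-distance association (Python; illustrative only, and with the simplification that body blockage is drawn independently per AP, whereas the model above ties blockage to a single random body orientation).

```python
import numpy as np

def sinr_sample(ue_xy, ap_xy, h_ap, theta_bw, m, alpha, l0, p_tx, n0,
                d_to_body, d_top_head, w_body, l_body_db=40.0, rng=None):
    rng = rng or np.random.default_rng()
    c = np.cos(theta_bw / 2.0)
    big_m = (2.0 - m * (1.0 + c)) / (1.0 - c)          # main-lobe gain M
    r_m = h_ap * np.tan(theta_bw / 2.0)                # illuminated radius r_M
    r_free = h_ap * d_to_body / d_top_head             # self-block free zone
    p_bb = np.arctan(w_body / (2.0 * d_to_body)) / np.pi if d_to_body > 0 else 0.5

    d2 = np.linalg.norm(ap_xy - ue_xy, axis=1)         # 2D distances to all AP
    g = np.where(d2 <= r_m, big_m, m)                  # directivity gain G_i
    blocked = (d2 > r_free) & (rng.random(len(d2)) < p_bb)
    b = np.where(blocked, 10.0 ** (-l_body_db / 10.0), 1.0)    # body term B_i
    rx = g * l0 * (d2**2 + h_ap**2) ** (-alpha / 2.0) * b      # G_i L(d_i) B_i

    i = np.argmin(d2)                                  # minimum-distance association
    return rx[i] / (n0 / p_tx + rx.sum() - rx[i])      # SINR of the serving link
```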
§ NUMERICAL RESULTS

§.§ Simulation Setup

We model our scenario by placing AP in the centres of a hexagonal cell pattern laid over a 400 × 400 m^2 area, as exemplified in Fig. <ref>. This specific choice of the area size allows us to mitigate the edge effect and to explore the system behaviour for longer inter-site distances. The side-lobe gain is fixed at -10 dB, and the main-lobe gain varies with the beamwidth according to Eq. (<ref>). We evaluate the system for a fixed AP height h_AP = 10 m. Note that changing h_AP has essentially the same impact on the performance as changing the beamwidth, since both h_AP and θ_BW determine the main-lobe illumination area; as a matter of fact, when testing our system for other height values of interest, we observed no significant deviations from our conclusions. We set the attenuation exponent to α = 2, the transmit power to 20 dBm, the bandwidth to 100 MHz, the carrier frequency to 60 GHz, and the noise figure to 9 dB; we consider no small-scale fading, and we set the body parameters w_body = 40 cm and d_topHead = 40 cm. We set the parameter d_toBody to define different blockage scenarios: d_toBody = 30 cm represents a scenario of a user operating the UE with the hand (UE in hand), and d_toBody = 0 cm (in this scenario, r_blockFree equals 0, according to Eq. <ref>, and there is no self-block free zone) represents a scenario of the UE held in a pocket (UE in pocket). The UE is associated with an AP that provides it with a downlink signal according to either the shortest 3D Euclidean distance (minimum distance association) or the strongest received signal power (maximum received power association). The simulation source code is available on-line (see Appendix <ref>).

§.§ Coverage and Area Spectral Efficiency Profile

In this subsection, we evaluate the effect of the inter-site distance (network density) and beamwidth on the coverage and area spectral efficiency (ASE) of a mmWave indoor network with ceiling-mounted AP. The SINR coverage is defined as the probability that the SINR at the receiver is larger than some threshold T, i.e., P[SINR > T], while the ASE is the spectral efficiency log(1+SINR) averaged over all realizations and divided by the cell area. The results for coverage and ASE are shown in Fig. <ref> and <ref>, respectively; for now, we focus on the minimum distance cell association case. Our investigation reveals that, in the ceiling-mounted AP setup, the SINR coverage exhibits non-trivial behaviour which can be classified into four regions, as illustrated in Fig. <ref>. These appear as we change the inter-site distance while keeping the beamwidth fixed:

(i) High main-lobe interference: at high AP density (short d_S), the beam is too large and causes substantial overlaps among adjacent cells, which results in high interference and, thus, low coverage.

(ii) Minimum main-lobe interference: the main-lobe illuminates the entire cell with minimum interference to neighbouring cells, yielding high coverage; from this point on, as we move towards a sparser deployment, the cell size becomes larger than the illuminated area and the coverage is inevitably reduced.

(iii) High side-lobe interference: at intermediate AP densities, the coverage is very low due to the lack of main-lobe illumination by the serving AP and due to high neighbour side-lobe interference; however, this interference decreases as the deployment gets sparser, leading to increased coverage.

(iv) Low interference: at low AP density (large d_S), the beam is so small that it becomes negligible; therefore, the only signal that can be picked up by the majority of users comes from the side-lobes and is thus weak enough for the noise to dominate the SINR term.

Based on these results, it is clear that for each cell size (or each AP density) there is an optimal beamwidth design that leads to peak coverage, which is depicted by the black line in Fig. <ref>. For example, the half inter-site distance d_S/2 ≈ 3.4 m corresponds to peak coverage when θ_BW = 41° (which is equivalent to the main-lobe radius r_M = 3.7 m). It should be noted that the optimal beamwidth for -5 dB coverage does not optimize the ASE. As we see from the black line in Fig. <ref>, the achieved ASE is lower than the maximum achievable for a given inter-site distance. A more detailed discussion of this trade-off is presented in Section <ref>.
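The two metrics used throughout this subsection are straightforward to estimate from Monte-Carlo SINR samples; a brief sketch follows (the hexagonal cell-area formula is our assumption for a grid with inter-site distance d_S).

```python
import numpy as np

def hex_cell_area(d_s):
    # Voronoi (hexagonal) cell area for a hexagonal grid with inter-site distance d_s
    return np.sqrt(3.0) / 2.0 * d_s**2

def coverage_and_ase(sinr, threshold_db, d_s):
    t = 10.0 ** (threshold_db / 10.0)
    coverage = np.mean(sinr > t)                              # P[SINR > T]
    ase = np.mean(np.log2(1.0 + sinr)) / hex_cell_area(d_s)   # bps/Hz/m^2
    return coverage, ase
```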
With reference to Fig. <ref>, the fact that we observe high peak coverage at high AP densities and relatively lower coverage at lower AP densities depends on whether the cell size is smaller than the self-block free zone; one should note that when the cell size is smaller than the self-block free zone (on the left of the dashed line in Fig. <ref>), all UE are free from self-blockage, leading to high peak coverage. On the other hand, when the cell is bigger than the self-block free zone (on the right of the dashed line), there are some UE outside the self-block free zone that will be blocked by the body with probability P_BB, according to Eq. <ref>. These UE will have their SINR degraded by the body attenuation, increasing the number of UE whose SINR is below the threshold. Hence, the coverage will decrease proportionally to the number of blocked UE.

§.§ Cell Association and Body Blockage

In this subsection, we investigate the impact of UE-to-AP association on peak coverage. To that end, we compare two different association strategies, namely minimum distance and maximum received power (as defined in Section <ref>), and we consider two different scenarios of interest, i.e., UE in hand (which represents typical device usage) and UE in pocket (which represents a severe blockage scenario). The corresponding results are shown in Fig. <ref> and <ref>, respectively.

First, it is important to remark that the maximum received power association strategy leads to larger optimal beamwidths compared to the minimum distance strategy, meaning that it yields a cellular deployment with larger overlaps between adjacent AP main-lobes. For example, in Fig. <ref>, for d_S/2 = 5.02 m and T = 0 dB, the maximum received power association (green line) leads to an optimal beamwidth of 83° (UE in pocket), while the minimum distance association (orange line) leads to an optimal beamwidth of 60°. Second, our results show that, as expected, the maximum received power association strategy generally improves the coverage of the network.

In addition, there are a few minor observations that can be made. First, in the UE in pocket scenario, with the minimum distance strategy, the coverage reaches approximately 50%. This is because, in this scenario, the probability of blockage P_BB is 50%, which means that half of the users will block the signal to their UE, attenuating it by 40 dB. Since, under the minimum distance strategy, blocked UE will not associate with another AP, those users will have a poor SINR, leading to approximately 50% of the users not being covered. Second, the coverage can be improved with the maximum received power strategy, because those 50% of users will associate with another AP and obtain a better SINR. Nonetheless, this improvement depends on the SINR threshold. For example, with T = -5 dB, we have 100% coverage at any AP density (see the blue curve in Fig. <ref>), whereas with T = 0 dB, the coverage is lower than in the former case (up to 90%; even with the maximum received power association, the 0 dB threshold coverage does not reach 100% in any of the body shadowing scenarios we have taken into account, an observation which coincides with the one made in <cit.>) and presents a drop at high AP densities (as we can see from the green curve in Fig. <ref> for half inter-site distances below 5 m). The reason for this particular behaviour of the green curve in Fig. <ref> is the following.
At high AP densities, the SINR of the majority of the UE is low (in particular, lower than the threshold) because of the strong interference from neighbouring AP, which is caused by the short distances between these AP and the UE, and by high directivity gains (i.e., we recall that, at high AP densities, we obtain small optimal beamwidths and thus high antenna directivity gains). Therefore, the SINR values of those UE degrade the coverage to approximately 58%; even so, this represents an improvement compared to the minimum distance strategy case. In light of these results, for the UE in pocket scenario, it is important to consider an association strategy that allows for the mitigation of the body shadowing effect, so as to provide satisfactory coverage. This is not the case for UE in hand, as in that scenario the body shadowing effect on coverage is not as severe (as we can see by comparing the gains of the maximum received power association in Fig. <ref> and Fig. <ref>).

§.§ ASE vs. Coverage Trade-off

Finally, we analyze the trade-off between the peak coverage and the achieved ASE, and its behaviour for different body shadowing scenarios and AP densities. We investigate this trade-off for the SINR threshold of 0 dB, which represents UE with higher receiver-sensitivity requirements; we focus on these UE because they provide better insight into how the density and beamwidth settings affect network performance. The results are shown in Fig. <ref>: each point of the curve corresponds to the optimal beamwidth θ_BW for a given inter-site distance d_S when using maximum power association. The lower points in the figure represent the ASE/coverage for larger θ_BW and lower AP density (longer d_S), while the upper points represent the ASE/coverage for smaller θ_BW and higher AP density (shorter d_S).

The main observation we make is that coverage and ASE in the UE in pocket scenario require different optimal beamwidths and AP densities. For example, to optimize the coverage vs. ASE trade-off in the UE in hand scenario, the network should be designed to be dense and to use small beamwidths, i.e., choosing the points inside the gray rectangle of Fig. <ref>, where the coverage is around 90% and the ASE is close to 1 bps/Hz/m^2. However, the same configuration would yield poor coverage for UE that are held in a pocket. A different design approach aiming at coverage maximization for the UE in pocket scenario requires deploying a sparser network with larger beamwidths (see the points inside the ellipse in Fig. <ref>). However, this design criterion is not optimal from the perspective of ASE; as shown in the plot, the ASE suffers a two-orders-of-magnitude reduction compared to the optimal value achievable with a denser network. To summarize, we can optimize the design of indoor ceiling-mounted AP mmWave networks either for the UE in hand scenario or for the UE in pocket scenario, but not both, as each scenario has a different optimal configuration.

§ CONCLUSION

Herein we studied the performance effects of deployment densification of ceiling-mounted mmWave access points with highly directional antennas over a confined area. We showed that, while being feasible, dense indoor mmWave deployments have intrinsic characteristics which make it necessary for network designers to decide (and understand) who their intended end user is. First, the optimal choice of beamwidth maximizes either coverage or ASE, but not both.
Second, how this trade-off manifests itself also depends on the human body shadowing scenario, i.e., the distance between the receiver and the potential obstruction, as the optimal choice of beamwidth and AP density depends on the body blockage probability. As pointed out in Section <ref>, it is worth emphasizing that these findings are consistent across a range of area sizes and AP heights relevant to the type of scenario we are considering. Still, more work is needed to understand how these trade-offs change when mmWave signals are scattered and reflected by the indoor environment. However, even the results we have so far can readily be used to inform the design of interference coordination techniques based on beam-steering, or of new hand-off and cell association procedures that account for potential body shadowing of mmWave signals.

§ ACKNOWLEDGMENT

This publication has emanated from research conducted within the scope of the NEMO (Enabling Cellular Networks to Exploit Millimeter-wave Opportunities) project, financially supported by Science Foundation Ireland (SFI) under Grant No. 14/US/I3110 and with partial support of the European Regional Development Fund under Grant No. 13/RC/2077.

§

All simulation scripts used to generate the presented results were written in MATLAB® and can be cloned from the following repository: https://github.com/firyaguna/matlab-nemo-mmWave
Stock Trading Using PE ratio: A Dynamic Bayesian Network Modeling on Behavioral Finance and Fundamental Investment

Haizhen Wang, Ratthachat Chatpatanasiri, Pairote Sattayatham
School of Mathematics, Suranaree University of Technology, THAILAND
December 30, 2023

In daily investment decisions in a security market, the price-earnings (PE) ratio is one of the most widely applied firm valuation tools among investment experts. Unfortunately, recent academic developments in financial econometrics and machine learning rarely look at this tool. In practice, fundamental PE ratios are often estimated only by subjective expert opinions. The purpose of this research is to formalize the process of fundamental PE estimation by employing advanced dynamic Bayesian network (DBN) methodology. The estimated PE ratio from our model can be used either as information support for an expert making investment decisions, or as part of an automatic trading system, as illustrated in our experiments. Forward-backward inference and EM parameter estimation algorithms are derived with respect to the proposed DBN structure. Unlike existing works in the literature, the economic interpretation of our DBN model is well justified by behavioral-finance evidence on volatility. A simple but practical trading strategy is devised based on the result of Bayesian inference. Extensive experiments show that our trading strategy equipped with the inferred PE ratios consistently outperforms standard investment benchmarks.

Keywords: Bayesian Inference, Dynamic Bayesian Network, Fundamental Investment, PE Ratio, Behavioral Finance

§ INTRODUCTION

With the rapid advancement of machine learning technology, recent works attempt to incorporate machine learning techniques into trading systems that support the decisions of investors in security markets <cit.> (see also references therein). These recent works share the following common philosophical theme.

Common Philosophy of Applying Machine Learning to Financial Data: There exist hidden patterns in financial time series. Complicated techniques (and their combinations), such as support vector machines, multiple kernel learning, independent component analysis, hidden Markov models, fuzzy modeling and so on, can help investors discover hidden patterns represented by complicated mathematical formulas. The retrieved formulas can be used as forecasting rules to predict stocks' directional movements, which in turn can be incorporated into an investor's trading strategy (buy low and sell high, as predicted by the rules) to make an excess return in the market.

Based on the same philosophy mentioned above, existing works nonetheless have common limitations. Firstly, the discovered patterns are so complicated (highly non-linear) that they lack financial interpretation; note that, in general, higher degrees of pattern complexity carry a greater risk of overfitting the training data <cit.>. Secondly, each financial time series has to be trained separately, resulting in one set of distinct patterns for each different security. In other words, there is no common pattern across the securities of interest. Thirdly, because of pattern complexity, practical trading implementation is not easy for some investors. In fact, a sophisticated trading program has to be constructed by the users themselves.
Lastly, there is no direct way to incorporate existing expert information (such as professional security analysts' recommendations) into the learning system. To be fair, despite the limitations mentioned, the core philosophy of existing research matches the philosophy of a certain investor group called technical analysts <cit.>. Technical analysts believe in price patterns and do not pay much attention to the economic interpretation of those patterns. Therefore, this line of research may benefit that group of investors.

On another side of investment practice, there is a group named fundamentalists, whose trading strategies have clear financial interpretations and are based on well-defined financial information <cit.>. The price-earnings ratio (simply called the PE ratio, to be defined below shortly) is one of the most widely applied valuation toolkits for fundamentalists making investment decisions <cit.>. Also, investment recommendations by security analysts are often based on PE ratios <cit.>. Nevertheless, it is unfortunate that recent academic advances in financial econometrics and machine learning rarely examine this tool, so the practical application of the PE ratio currently depends solely on expert knowledge. To our knowledge, there is currently no formal framework capable of integrating expert knowledge with historical financial time-series data to make a systematic inference of the PE ratio from available information.

In this research, we focus on applying Bayesian statistical analysis to formalize the process of stock valuation using the PE ratio. We apply the powerful framework of dynamic Bayesian networks <cit.> to model the valuation process. In contrast to the existing machine learning frameworks mentioned above for price-pattern discovery, where the discovered patterns have no meaning in finance, the interpretation of our model is well justified according to behavioral finance <cit.>, as explained in Section <ref>. The main contributions of our work are threefold. Firstly, to our knowledge, we are the first to propose applying the machine learning framework to formalize the PE ratio valuation process, which has somehow rarely received attention from academic researchers. Secondly, unlike existing works where different patterns are discovered for different securities, the trading strategy resulting from our Bayesian framework is unified, i.e., we propose a single trading strategy which can be applied to every security, as explained in Subsection <ref>. The proposed strategy is simple and has a clear financial interpretation, so it can easily be applied by any practitioner (no sophisticated system-trading program needs to be written). Moreover, expert opinions can be naturally integrated into our Bayesian learning framework. Thirdly, as our proposed dynamic Bayesian network has a non-standard structure compared to those in the literature <cit.>, we have successfully derived new inference formulas by applying the forward-backward methodology, and a new parameter estimation algorithm based on the concept of Expectation-Maximization <cit.>.

Note that in this paper we focus on investment in individual firm-level securities, which are usually preferred by individual investors, in contrast to investment institutions, whose investment strategy usually operates at the portfolio level based on Modern Portfolio Theory <cit.>.
§.§ Background of fundamental investment based on the PE ratio

The core idea of the PE ratio valuation method is simply that the value of the firm (and hence the value of its stock) is directly proportional to the annual net income (also called earnings) of the company, i.e., for each firm i,

P_i^* = PE_i^* × E_i,

where P_i^* denotes the value of firm i, E_i denotes the firm's current annual earnings, and PE_i^* is the firm's appropriate PE ratio, usually assumed to be constant (at least for some period of time). Here, the annual earnings are defined as the sum of the latest four quarterly earnings. The earnings information of each firm listed in the stock market is normally available to all investors, i.e., it is observable. The PE ratio can intuitively be thought of as a premium of an individual firm: given the same earnings for two firms, the firm with the higher PE ratio is considered to be of higher value. Conceptually, the appropriate PE ratio of each firm is usually determined by experts using business and financial accounting factors such as debt burden, cash flow, growth rate, business risk, etc. There exists an alternative approach to estimating the PE ratio, called the relative approach <cit.>, which still requires experts to select a group of similar firms, from which the ratio is heuristically calculated. To summarize, the current best practice is for the PE ratio to be heuristically estimated by experts or experienced investors.

Once we have the PE ratio, we can simply calculate the firm value, often called the intrinsic value, by Eq. (<ref>). A simple trading strategy is then to compare the firm value with the market price of the firm. Strategy A: if the firm value is higher than its market price by some threshold, the stock is considered to be at a low price, so we can buy it. We expect to sell it later when its market price is higher than the firm's intrinsic value by some threshold. It is important to note that the philosophy of this trading strategy is that the market price is not always equal to the value of the firm. We can observe that the price of a firm's stock changes almost every working day in a stock market. In contrast, by Eq. (<ref>), the firm's value will not change over a short period provided that there is no new announcement of annual earnings in that period. There has been a long controversy about this “price vs. value” issue <cit.>, but it is beyond the scope of this paper. In any case, it is a fact that there exists a large group of individual investors, namely fundamentalists, employing the PE ratio as their main tool. Instead of relying solely on expert opinions, the goal of this paper is to support that group of investors in systematically determining the appropriate PE ratio, by the method of Bayesian statistical analysis, which is able to formally combine information from historical data with expert beliefs.

Finally, we emphasize that there is another quantity, called the observed PE, calculated from the firm's current market price divided by its earnings (note again the difference between value P_i^* and price P_i), that is,

observed PE_i = P_i / E_i,

where P_i is the current market price of firm i. In Bayesian analysis of the PE ratio, it is important to distinguish between the observed PE_i (changing every day due to changes of P_i) and PE_i^*. Here we will call PE_i^* the fundamental PE ratio. The reason behind this name is the following: only the group of fundamentalists believes that the quantity P_i^*, or firm value, can be calculated by Eq. (<ref>).
Therefore, they usually call P_i^* the fundamental price or fundamental value, and likewise PE_i^* the fundamental PE. To them, there exist various kinds of investors in the market: some are rational and some are irrational. The current market price P_i, and hence also the observed PE_i, can fluctuate away from the fundamental price P_i^* through the actions of those irrational investors. We shall formally model this argument in Section <ref>.

§ STATISTICAL MODEL OF STOCK PRICE DYNAMICS

§.§ Motivation of statistical modelling: behavioral volatility

In Subsection <ref> we mentioned fundamentalists' belief that the market price of a security may not equal its fundamental value. Why does a stock price deviate from its fundamental price? Works on behavioral finance <cit.> have found much evidence bearing on this question. For example, researchers argue that there are noise traders in the market who tend to act irrationally, so the price can move away from its value <cit.>. One study found that some investors cannot process new information correctly and thus overreact to it <cit.>. What is worse, the information to which investors overreact is often unconfirmed <cit.>, unreliable <cit.>, or even unimportant <cit.>. Also, investors who consult experts may not get much helpful advice, since security analysts tend to be overoptimistic <cit.> and subject to conflicts of interest <cit.>. Finally, it is well known that even rational investors in the market cannot immediately eliminate this irrational pricing, due to the limits of arbitrage <cit.>. All the effects mentioned here can temporarily move a stock price away from its value for a period of time. This is what we call behavioral volatility. The effects continue until either they cancel out, or rational investors finally eliminate the mispricing. This reversion phenomenon is called mean reversion in the literature.

§.§ Dynamic Bayesian Network of stock price movement

Our model simplifies and formalizes the observations described in Subsection <ref>. We divide the temporary effects which cause mispricing into two categories: (1) short-term effects, i.e., mispricing effects which last about a few days, e.g. effects caused by noise trading or overreaction to unreliable information; and (2) medium-term effects, i.e., mispricing effects which last several weeks or months, e.g. effects caused by reaction to unconfirmed information which may take time to confirm, or by overoptimistic analyst predictions which may take time to disprove. Mathematically, the relation between the market price and its fundamental value can be described by the following equation. To simplify notation, since we consider only one firm at a time, we now replace the firm-index subscript i with a time-index subscript t to emphasize the dynamic relationship between price and fundamental value:

P_t = P_t^*(1+z_t)(1+ε_t),

where (a) z_t is a random variable modeling the medium-term noisy effects; to make its effects persist for a period of time, we model z_t as a Markov chain; and (b) ε_t is a random variable for the short-term noisy effects, modeled by Gaussian random noise, ε_t ∼ N(0, σ^2). Assuming PE^* is constant over the observed period, and following Eq. (<ref>) of Section <ref>, we have

P_t = PE^* E_t (1+z_t)(1+ε_t)

and, therefore, we obtain the relationship between the fundamental PE and the observed PE:

P_t/E_t = PE^*(1+z_t)(1+ε_t).

Note that our model is suitable only for firms with positive earnings, E_t > 0. Fortunately, most firms satisfy this criterion.
Eq. (<ref>) is central to our idea and can be visualized as shown in Figure <ref>. We can simplify Eq. (<ref>) further:

ln(P_t/E_t) = ln(PE^*(1+z_t)) + ln(1+ε_t).

Since ε_t is usually small, we can use the approximation ln(1+ε_t) ≈ ε_t; denoting y_t = ln(P_t/E_t), we then have

y_t = ln(PE^*(1+z_t)) + ε_t.

Note that, as explained in Section <ref>, y_t is an observable quantity, while PE^* and z_t are unobservable, i.e., they are hidden-state or latent variables. Note also that these are two different types of latent variables: PE^* is constant while z_t is time-varying. Thus, Eq. (<ref>) differs from standard state-space and graphical models such as Hidden Markov Models or the Linear State Space Model <cit.>. The graphical model of our proposed stock price dynamics has three layers, as represented in Figure <ref>. In our case, where the model is temporal, the graphical model framework is also called a dynamic Bayesian network (DBN). The main advantage of a DBN is its ability to encode conditional independence properties, and hence to simplify probabilistic inference <cit.>. Another advantage of this framework is that expert knowledge can be integrated into the model naturally, as shown in the next section.

To derive the mathematical equations for inference and parameter estimation in the DBN framework, we shall assume that all latent random variables are discrete: z_t ∈ {a_1,...,a_M}, PE^* ∈ {b_1,...,b_N}. Furthermore, we have to set up the conditional probability distribution function for each node given its parents. We define the conditional probability distribution functions of all nodes as follows.

The transition probability distribution function: Let i,m ∈ {1,...,M}, t ∈ {2,3,...}:

p(z_t = a_i | z_t-1 = a_m) ≜ w_im.

Note that 0 ≤ w_im ≤ 1 and ∑_i=1^M w_im = 1. The matrix W = (w_im)_M×M is called the transition matrix, i.e., {z_t} is a Markov chain.

The emission probability distribution function: For all m ∈ {1,...,M}, n ∈ {1,...,N}, t ∈ {1,2,...}:

p(y_t | z_t = a_m, PE^* = b_n) ≜ ϕ_mn(y_t).

By Eq. (<ref>), ϕ_mn(y_t) = N(ln(b_n(1+a_m)), σ^2). The matrix Φ_t = (ϕ_mn)_M×N is called the emission matrix at period t.

The initial probability distribution functions: For each m ∈ {1,...,M}, u_m ≜ p(z_1 = a_m), where 0 ≤ u_m ≤ 1 and ∑_m=1^M u_m = 1. For each n ∈ {1,...,N}, v_n ≜ p(PE^* = b_n), where 0 ≤ v_n ≤ 1 and ∑_n=1^N v_n = 1. The vectors u = (u_m)_M and v = (v_n)_N are called initial vectors.

Therefore, in this Bayesian framework, the set of model parameters is θ = {W, u, v, σ^2} and our parameter space is Θ = {θ | 0 ≤ u_m ≤ 1, ∑_m=1^M u_m = 1, 0 ≤ v_n ≤ 1, ∑_n=1^N v_n = 1, 0 ≤ w_im ≤ 1, ∑_i=1^M w_im = 1, σ > 0}. If we know all the parameters, we can make an inference by deriving inference equations based on the forward-backward algorithm, as shown in Subsection 3.1. If the parameters are unknown, we have to estimate them first. In this paper, we derive estimation procedures based on the Maximum a Posteriori (MAP) and Expectation-Maximization (EM) algorithms. A small generative sketch of the model is given below. In the next section, we show how we derive both the inference and parameter estimation algorithms.
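As a sanity check of the model specification, the following sketch (Python; the parameter values passed in would be hypothetical) samples one path of observed log-PE values y_t from the DBN just defined.

```python
import numpy as np

def simulate_dbn(T, a, b, W, u, v, sigma, rng=None):
    """Sample y_t = ln(PE*(1 + z_t)) + eps_t, with z_t a Markov chain on
    {a_1..a_M} and PE* drawn once from {b_1..b_N};
    W[i, m] = p(z_t = a_i | z_{t-1} = a_m), so columns of W sum to one."""
    rng = rng or np.random.default_rng()
    a, b = np.asarray(a), np.asarray(b)
    pe_star = b[rng.choice(len(b), p=v)]              # constant latent PE*
    z = np.empty(T, dtype=int)
    z[0] = rng.choice(len(a), p=u)
    for t in range(1, T):
        z[t] = rng.choice(len(a), p=W[:, z[t - 1]])   # medium-term effect chain
    y = np.log(pe_star * (1.0 + a[z])) + rng.normal(0.0, sigma, size=T)
    return y, a[z], pe_star
```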
§ BAYESIAN INFERENCE ON THE DBN OF STOCK PRICE DYNAMICS

As explained in the previous sections, our goal is to make an inference on PE^* so that we can estimate the fundamental price of a stock. In Section <ref>, we will show that estimation of {z_t} is also useful in investment. To infer the values of these two latent variables, similar to Hidden Markov Models (HMM) and the Linear State Space Model (LSSM) <cit.>, we need to derive equations in two steps: first, the inference algorithms with known parameters; second, the parameter estimation algorithms when the parameters are unknown. However, because there are two types of latent states, as explained in the previous section, our graphical model shown in Figure <ref> is more sophisticated than the HMM and LSSM. In this section, we present the new equations for both inference tasks. To simplify notation, we use x_1^T to denote {x_1,...,x_T}.

§.§ Inference with known parameters

Suppose θ is known, together with the observed data y_1^T. Similar to the HMM, in order to estimate the latent states z_1^T and PE^*, we need to find recurrent formulas for two quantities: the filtering probabilities p(z_T, PE^* | y_1^T, θ) and the smoothing probabilities p(z_t, PE^* | y_1^T, θ), t ∈ 1,...,T-1. To keep the formulas simple, in this section we omit θ in the probability notation, e.g., we simply write p(z_T, PE^* | y_1^T) for filtering.

The filtering formula, which estimates the conditional joint probability of the most recent medium-term effect z_T = a_m and PE^* = b_n given all the observed variables, is given by the following recurrence:

p(z_T=a_m, PE^*=b_n | y_1^T)
= p(z_T=a_m, PE^*=b_n | y_1^T-1, y_T)
∝ p(y_T | y_1^T-1, z_T=a_m, PE^*=b_n) p(z_T=a_m, PE^*=b_n | y_1^T-1)
= ϕ_mn(y_T) ∑_i=1^M p(z_T=a_m, z_T-1=a_i, PE^*=b_n | y_1^T-1)
= ϕ_mn(y_T) ∑_i=1^M p(z_T-1=a_i, PE^*=b_n | y_1^T-1) p(z_T=a_m | z_T-1=a_i)
= ϕ_mn(y_T) ∑_i=1^M p(z_T-1=a_i, PE^*=b_n | y_1^T-1) w_mi.

In the above derivation, Bayes' rule, the conditional independence properties <cit.> of the DBN shown in Figure <ref>, and the sum rule are applied consecutively to obtain the result, similar to the filtering equation of the HMM. The initial equation of the recurrence can be derived similarly: p(z_1=a_m, PE^*=b_n | y_1) = ϕ_mn(y_1) u_m v_n.

Next, the smoothing formula, which estimates the conditional joint probability of the medium-term noisy effect z_t = a_m at any date t < T and PE^* = b_n given all the observed variables, is given by the following so-called forward-backward formula in Eq. (<ref>). For all t ∈ {1,...,T-1} (note that y_1^T = y_1^t ∪ y_t+1^T):

p(z_t=a_m, PE^*=b_n | y_1^t, y_t+1^T)
∝ p(y_t+1^T | y_1^t, z_t=a_m, PE^*=b_n) p(z_t=a_m, PE^*=b_n | y_1^t)
= p(y_t+1^T | z_t=a_m, PE^*=b_n) p(z_t=a_m, PE^*=b_n | y_1^t).

Note that the conditional independence properties of our DBN are applied in the first term. Also note that the second term is in fact a filtering probability. Therefore, we need concentrate only on the first term, which satisfies the following recurrence:

p(y_t+1^T | z_t=a_m, PE^*=b_n)
= ∑_i=1^M p(y_t+1^T, z_t+1=a_i | z_t=a_m, PE^*=b_n)
= ∑_i=1^M p(y_t+1^T | z_t+1=a_i, z_t=a_m, PE^*=b_n) p(z_t+1=a_i | z_t=a_m, PE^*=b_n)
= ∑_i=1^M p(y_t+1^T | z_t+1=a_i, PE^*=b_n) p(z_t+1=a_i | z_t=a_m)
= ∑_i=1^M p(y_t+1^T | z_t+1=a_i, PE^*=b_n) w_im
= ∑_i=1^M p(y_t+1, y_t+2^T | z_t+1=a_i, PE^*=b_n) w_im
= ∑_i=1^M p(y_t+2^T | y_t+1, z_t+1=a_i, PE^*=b_n) p(y_t+1 | z_t+1=a_i, PE^*=b_n) w_im
= ∑_i=1^M p(y_t+2^T | z_t+1=a_i, PE^*=b_n) ϕ_in(y_t+1) w_im.

The end condition can be solved similarly: p(y_T | z_T-1=a_m, PE^*=b_n) = ∑_i=1^M ϕ_in(y_T) w_im.

With the derived recurrent formulas, we can obtain the most probable values of the latent variables PE^* and each z_t by marginalization, e.g.

PE^* = arg max_b_n p(PE^*=b_n | y_1^T),

where p(PE^*=b_n | y_1^T) = ∑_m=1^M p(z_t=a_m, PE^*=b_n | y_1^T). To implement both filtering and smoothing in a computer program, we also need to solve for the normalizing constants that appear in the above derivations. To fulfil this task, a matrix reformulation of the above recurrent equations is the most convenient and efficient way. Below, we give only the end results, as the details exceed the space limitation; derivation details can be found in the full version of this paper <cit.>. A computational sketch of the filtering recursion is given below.
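The following sketch implements the filtering recursion above directly (Python, with SciPy's Gaussian density; illustrative only), normalising the joint table at each step for numerical stability.

```python
import numpy as np
from scipy.stats import norm

def forward_filter(y, a, b, W, u, v, sigma):
    """alpha[t, m, n] = p(z_t = a_m, PE* = b_n | y_1..t), normalised each step;
    W[i, m] = p(z_t = a_i | z_{t-1} = a_m) as defined in Section 2."""
    mu = np.log(np.outer(1.0 + np.asarray(a), np.asarray(b)))  # ln(b_n (1 + a_m))
    alpha = np.empty((len(y), len(a), len(b)))
    alpha[0] = norm.pdf(y[0], loc=mu, scale=sigma) * np.outer(u, v)
    alpha[0] /= alpha[0].sum()
    for t in range(1, len(y)):
        # (W @ alpha[t-1])[m, n] = sum_i w_mi * alpha[t-1, i, n], as in the recurrence
        alpha[t] = norm.pdf(y[t], loc=mu, scale=sigma) * (W @ alpha[t - 1])
        alpha[t] /= alpha[t].sum()
    return alpha

# Posterior over PE* given all data: p(PE* = b_n | y_1..T) = alpha[-1].sum(axis=0)
```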
To obtain the matrix formulas, first denote the filtering density α_tmn = p(z_t=a_m, PE^*=b_n | y_1^t). For t ≥ 2, define α'_tmn = ϕ_mn(y_t) ∑_i=1^M w_mi α_t-1,in, and define α'_1mn = ϕ_mn(y_1) u_m v_n. From Eq. (<ref>), we then have α_tmn ∝ α'_tmn for all t. Define c_t = ∑_m=1^M ∑_n=1^N α'_tmn. It can be shown that α_tmn = α'_tmn / c_t. Denoting the matrix A_t = (α_tmn)_M×N, we can show that

A_t = (1/c_t) Φ_t ∘ (W A_t-1), t ≥ 2,

where ∘ denotes the entrywise (Hadamard) product of matrices, and Φ_t and W denote the emission matrix and transition matrix, respectively, as described in Section 2. For the initial case, we have

A_1 = (1/c_1) Φ_1 ∘ (u v^T),

where u and v are as defined in Section 2. To obtain a matrix formula for the smoothing density, we first define β_tmn = p(y_t+1^T | z_t=a_m, PE^*=b_n) / p(y_t+1^T | y_1^t). From Eq. (<ref>), we then have the smoothing density for t < T:

p(z_t=a_m, PE^*=b_n | y_1^T) = α_tmn β_tmn.

Denoting the matrix B_t = (β_tmn)_M×N, t < T, we can show that B_T-1 = (1/c_T) W^T Φ_T, and

B_t = (1/c_t+1) W^T (Φ_t+1 ∘ B_t+1), t ∈ {1,...,T-2}.

§.§ Inference with unknown parameters

In general situations, θ is unknown, so only the observed data y_1^T are available. In this case, θ must be estimated first. Expectation-Maximization (EM) is a general method capable of estimating the parameters θ in Maximum Likelihood and Maximum a Posteriori (MAP) problem settings for probabilistic models with latent variables <cit.>. Here, we formulate our parameter estimation in the MAP setting so that expert prior knowledge can be employed in the model. Formally, we would like to solve the following problem of maximizing the posterior pdf of θ:

θ_MAP = arg max_θ∈Θ p(θ | y_1^T).

EM finds a solution of Eq. (<ref>) by iteratively performing the following two steps, starting from an arbitrary set of initial parameters θ^(1) and a prior p(θ). Iterating over j = 1,2,..., do:

E-step: Calculate the smoothing probabilities p(z_t=a_m, PE^*=b_n | y_1^T, θ^(j)), ∀ t,m,n.

M-step: Solve the constrained maximization problem

θ^(j+1) = arg max_θ∈Θ [Q(θ; θ^(j)) + ln p(θ)],

where

Q(θ; θ^(j)) = E_z_1^T, PE^* | y_1^T, θ^(j) [ln p(y_1^T, z_1^T, PE^* | θ)].

EM repeats the two steps until θ^(j) converges. Note that EM is guaranteed to find a local maximum of Eq. (<ref>) <cit.>. The argument of the expectation in Eq. (<ref>) is simply the log-likelihood of the model:

ln p(y_1^T, z_1^T, PE^* | θ) = ∑_t=1^T ln p(y_t | z_t, PE^*, θ) + ∑_t'=2^T ln p(z_t' | z_t'-1, θ) + ln p(z_1 | θ) + ln p(PE^* | θ).

According to the DBN, these terms are simply the logarithms of the emission pdf, the transition pdf and the initial pdfs, respectively. By algebraic manipulation, the expectation in Eq. (<ref>) can be calculated using the smoothing probabilities already computed in the E-step. As a result, we obtain a closed form of Eq. (<ref>). Combined with the ln p(θ) term described below, the constrained maximization in Eq. (<ref>) is well defined and can readily be solved using the method of Lagrange multipliers <cit.>. All derivation details, which have the same mathematical structure as in the simpler case of the HMM <cit.>, are quite long and can be found in the full version of this paper <cit.>.

Experts can inject their knowledge into the parameter estimation procedure via p(θ) in Eq. (<ref>). Here, we assume that all parameters are independent: p(θ) = p(σ) p(u) p(v) p(W). In our experience, investment experts usually have two types of knowledge useful for estimating θ. The first type of knowledge concerns PE^*.
Often, experts may be able to estimate the range of appropriate PE^* levels by analyzing a firm's business strategy together with the competition in its industry. The second type of knowledge concerns the degree of persistence of the medium-term noisy effect, which makes a stock price deviate from its fundamental value for a considerable amount of time, as explained in Section <ref>. For some firms, e.g. a firm with a non-existent investor relations department, when unconfirmed rumors arise, the price can deviate from its fundamental value for a long period. In contrast, firms with both strong public and investor relations departments can clear up unconfirmed rumors rather quickly, so the rumor effect will not persist for long. These two types of expert information can be encoded in p(v) and p(W), respectively. The prior p(v) for the vector v = (v_n)_N×1 can be represented via the Dirichlet distribution:

p(v) = Γ(k_1+k_2+...+k_N) / (Γ(k_1)Γ(k_2)...Γ(k_N)) ∏_n=1^N v_n^k_n-1.

Intuitively, k_n, n ∈ {1,...,N}, is a degree of belief for each possible PE^* value b_n. Experts can encode a belief that some value of PE^*, e.g. b_i, is relatively more probable than the others by giving k_i a relatively higher value than the other k_n, n ≠ i. See <cit.> for more details on the Dirichlet prior. The prior on the transition matrix, p(W), encoding the average persistence of the medium-term noisy effect, can also be described by a product of Dirichlet priors: p(W) = ∏_m=1^M p(w_m), where, as defined in Section <ref>, w_m = (w_im)_i=1,...,M denotes the M×1 vector of probabilities p(z_t+1=a_i | z_t=a_m), i = 1,...,M, and

p(w_m) = Γ(∑_i=1^M k_im) / ∏_i=1^M Γ(k_im) ∏_i=1^M w_im^k_im-1.

The persistence of the medium-term effect can be encoded by making k_mm relatively larger than the other values k_im, i ≠ m: the relatively higher k_mm, the more persistent the medium-term effect. Since the medium-term effect appears at random, the other values can be set symmetrically: k_im = (1 - k_mm)/(M-1) for i ≠ m. An illustrative construction of these hyperparameters is sketched below.
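To illustrate, one simple way of turning such expert statements into Dirichlet hyperparameters is the following sketch (Python; the function names and the pseudo-count values are our own illustrative choices, not prescribed by the method).

```python
import numpy as np

def pe_prior_counts(b, low, high, inside=5.0, outside=1.0):
    # larger pseudo-counts k_n for PE* grid values the expert deems plausible
    b = np.asarray(b)
    return np.where((b >= low) & (b <= high), inside, outside)

def persistence_prior_counts(M, k_diag=0.9):
    # k_mm = k_diag encodes persistence of z_t; the off-diagonal entries split
    # (1 - k_diag) symmetrically, as in the symmetric setting described above
    K = np.full((M, M), (1.0 - k_diag) / (M - 1))
    np.fill_diagonal(K, k_diag)
    return K
```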
§ EXPERIMENTS

In this section, we illustrate the benefits of our methodology in real-world applications. To do this, we conduct comprehensive trading simulations showing the consistently superior performance of our method over a standard benchmark. In the finance literature, according to the Efficient Market Hypothesis <cit.>, which is widely accepted by mainstream researchers, the gold-standard benchmark is, surprisingly, the simple “buy and hold" method, which empirically proves to be efficient in the long run. Astonishingly, much evidence clearly indicates that most mutual fund managers who apply complex active portfolio management techniques cannot beat simple “buy and hold" of the market portfolio <cit.>. In this paper, we test our method against this gold-standard “buy and hold" method both at the individual stock level and at the portfolio level.

§.§ Experiment setting

In this paper, we perform trading simulations in the markets of two different countries for which we can access historical data: NYSE (New York Stock Exchange) and NASDAQ in the US, and SET (Stock Exchange of Thailand) in Thailand. While the NYSE and NASDAQ represent mature stock markets, the SET represents an emerging market, so we are able to test our methodology on firms in both market phases. For each country, we collect 10 firms from various industries to ensure that our methodology is not limited to one specific industry. Each selected firm is well established and has at least 5 years of historical trading data. The names of the selected companies with their respective sectors for Thai stocks and US stocks are shown in Table <ref> and Table <ref>, respectively. We collect daily 5-year historical closing-price data for each firm (Jan 1, 2012 to Sep 30, 2016), consisting of 1160 closing prices for stocks in the SET and 1195 closing prices for stocks in the NYSE and NASDAQ; the difference in the number of data points is due to the different working days in the two countries. All data are adjusted for stock splits and stock dividends occurring during this 5-year period. To avoid duplicated writing, we explain the experiment settings only for stocks in the SET, with historical prices P_1,...,P_1160; the experiment settings for stocks in the NYSE and NASDAQ are analogous.

For each firm, the corresponding yearly earnings data in those years are also collected: E_1,...,E_1160 are defined as the sum of the most recent four quarterly earnings at each date t. The first 3 years of historical data (Jan 1, 2012 to Dec 31, 2014), P_1,...,P_735 and E_1,...,E_735, are used as training data for our Bayesian methodology to learn the appropriate parameters θ = {W, u, v, σ^2} using the EM algorithm, as well as to estimate the most probable values of PE^* and {z_1,...,z_735} by the method of smoothing explained in Section <ref>. The constants {a_1,...,a_M} and {b_1,...,b_N} are set by experts. Since the constant M determines the size of the transition matrix W = (w_im)_M×M, we impose the constraint M < 10 so that the model is not over-parameterized and 3 years of historical data suffice to learn W. For all prior distributions, we employ non-informative priors, with the exception of p(W), where our “security experts" emphasize the prior knowledge of z_t persistence as described in Subsection <ref>.

Each trading simulation is conducted for each individual stock on the remaining 2 years of historical data, P_736,...,P_1160, to measure the performance of both our method and the benchmark. The performance metric is, as used by practitioners, the profit generated by each method. The profit calculation is straightforward: in each trading simulation, each method is given an equal initial amount of cash I to trade with (taking the commission fee into account), and the profit is simply the total asset value at the end of the simulation minus I. For simplicity, we assume that a stock can be bought with all the money we have; e.g., supposing we have $100 and a stock's price is $12, then we are able to buy 100/12 = 8.33 shares.

Using the benchmark “buy and hold" strategy, we simply buy the stock with all the cash at the beginning and then do nothing until the end. Initially, this method acquires C·I/P_736 shares, where C ≈ 0.9987 represents the fraction of asset value remaining after taking the SET's commission fee into account. At the end of the simulation this holding has a value of P_1160·C·I/P_736, so the profit can be calculated easily. For the trading strategy derived from our method, there are two possible versions, inspired by our model's main idea (see Figure <ref>) and Strategy A (buy low, sell high) described in Section <ref>. The first version, called the long-term strategy, is simply to “buy low, sell high" with respect to the static value of PE^*; the second version, called the medium-term strategy, is to “buy low, sell high" with respect to the dynamic values of PE^*(1+z_t), where each z_t is dynamically estimated by the method of filtering described in Subsection <ref>.
Both versions can be formally described as follows. Let I_t and N_t be the available cash and total shares at date t, respectively. Initially, I_736 = I and N_736 = 0. Both trading versions are then defined by the following procedure: on each date t, exactly one of the following cases holds:

(i) P_t/E_t ≤ A_t(1-Tr) and I_t > 0 (buy-low case), where Tr ∈ (0,1) is a threshold, and A_t = PE^* for the long-term strategy and A_t = PE^*(1+z_t) for the medium-term strategy. In this case, buy the stock with all cash, so that N_t+1 = C·I_t/P_t and I_t+1 = 0.

(ii) P_t/E_t ≥ A_t(1+Tr) and I_t = 0 (sell-high case). In this case, sell all the holding stock to get cash I_t+1 = P_t·N_t·C and N_t+1 = 0.

(iii) If neither case (i) nor case (ii) is satisfied, do nothing, so I_t+1 = I_t and N_t+1 = N_t.

At the end of a trading simulation, t = 1160, the total profit is simply I_1160 + P_1160·N_1160 - I_736, which we can compare with the “buy and hold" profit. A compact sketch of this procedure is given below.
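The procedure above translates directly into a few lines of code. In the illustrative Python sketch below, `baseline` is an array holding the per-date value of A_t, i.e. PE^* (long-term) or PE^*(1+z_t) (medium-term); the function returns the terminal asset value, from which the profit follows by subtracting the initial cash.

```python
def backtest(prices, earnings, baseline, tr, cash=1.0, c=0.9987):
    """Cases (i)-(iii): buy all-in when P_t/E_t <= A_t(1-Tr) while holding cash,
    sell all when P_t/E_t >= A_t(1+Tr) while holding shares; c is the fee factor."""
    shares = 0.0
    for p, e, a in zip(prices, earnings, baseline):
        pe = p / e
        if pe <= a * (1.0 - tr) and cash > 0:        # case (i): undervalued, buy
            shares, cash = c * cash / p, 0.0
        elif pe >= a * (1.0 + tr) and shares > 0:    # case (ii): overvalued, sell
            cash, shares = c * shares * p, 0.0
        # case (iii): otherwise do nothing
    return cash + shares * prices[-1]                # terminal asset value
```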
We give some illustrations of our trading in action in Figure <ref> and Figure <ref>: Figure <ref> is an example of long-term trading of CPALL with threshold 0.05, and Figure <ref> is an example of medium-term trading of CPALL with threshold 0.05.

§.§ Experimental results and discussions

§.§.§ Individual-firm level experiments

To ensure that our experimental results are not biased by a particular threshold choice, we test 4 different thresholds for each trading strategy. Note that the thresholds in the medium-term trading are relatively smaller than those in the long-term trading. This is due to the nature of the medium-term strategy, in which PE_t deviates less from its baseline PE^*(1+z_t) than from the long-term strategy's baseline, which does not contain the effect of z_t. The experimental results for Thai stocks and US stocks are shown in Table <ref> and Table <ref>, respectively.

From the tables, we can see that, in the total of 80 trading simulations on SET firms, our method performs better 41 times, while “buy and hold" performs better 19 times (the remaining 20 are draws). Similarly, in the total of 80 trading simulations on NYSE and NASDAQ firms, our method performs better 36 times, while “buy and hold" performs better 20 times (the remaining 24 are draws). Summing up the results of the markets in the two countries, our method outperforms the benchmark 77 times, yet underperforms only 39 times. These are promising results, whose statistical significance we analyze more formally in the next subsection. Here, we first interpret and discuss the experimental results in Tables <ref> and <ref> in detail.

From Tables <ref> and <ref>, it can be seen that there are 44 draws, which occur only in the cases of the long-term trading strategy. All 44 draws happen for exactly the same reason: our model predicts undervaluation at the beginning of the testing period, i.e., the first observed PE falls deeply below the baseline PE^* (exceeding the specified threshold). After that, the observed PE never once exceeds PE^* by the given threshold, i.e., PE^*(1+Tr) is quite high in the test set, so no selling is possible. Therefore, in this case, our trading behaves exactly like the benchmark “buy and hold". Note that there is no draw in the results of the medium-term trading strategy. The reason is that the estimated medium-term noisy effect z_t moves the baseline PE^*(1+z_t) close to the observed PE, which results in more frequent trading.

Disregarding the draws, our long-term trading strategy still beats the benchmark, with 22 wins versus 14 losses. This is mainly due to the volatility of the observed PE in most stocks, which makes our strategy of buying at an undervalued price and selling at an overvalued price with respect to PE^* possible. However, it is not the case that trading induced by our model invariably outperforms the benchmark. In the case of so-called growth stocks <cit.>, i.e. stocks with consistently increasing earnings and price, it is not so easy for our model to beat the benchmark. If the threshold is set too low, our method leads to buying and selling early, and thus results in less profit; see Figure <ref> for an example. If the threshold is set too high, our method may lead to buying when the price is already high, or to doing nothing at all because the stock is never undervalued with respect to the specified threshold. Another special case where our method fails to beat the benchmark is when there are so-called non-recurring earnings, i.e. extra income which occurs only once and should not be taken into account in the calculation of fundamental value. In this case, the market knows that these extra earnings are temporary and does not give them credit, i.e., the price does not go up with this profit. Our method does not take this information into account and is thus fooled into believing that the stock is undervalued; see Figure <ref>.

On the other hand, the results of our method equipped with the medium-term trading strategy show impressive superiority: 55 wins versus 25 losses against the benchmark. The key factor of success is its ability to track the medium-term noisy effect z_t via the filtering algorithm presented in Section <ref>. When the new baseline PE^*(1+z_t) is predicted accurately, undervalued and overvalued prices are also detected accurately, and so the probability of profitable trading increases. Nevertheless, with more frequent trading, commission fees increase substantially and can sometimes significantly reduce performance, as shown in Figure <ref>.

§.§.§ Portfolio level experiments

In this subsection, to simulate a real-world individual investor more realistically, we construct a portfolio of stocks and test its performance against the benchmark. Here, we use a rule of thumb commonly employed in practice, which says that a good portfolio should consist of around 15 stocks <cit.> (which contradicts mainstream theory <cit.>). Individual value investors who believe they can beat the market by analyzing each firm carefully usually do not feel comfortable holding too many stocks (like the 100 stocks recommended in the academic financial literature), because they need time to update and analyze the information of all their holdings.

To test the performance of a 15-stock portfolio of our method against the benchmark, we employ the method of bootstrap resampling <cit.>. For each bootstrap sample, a set of 15 stocks is selected randomly from Tables <ref> and <ref> to form an equally weighted portfolio. We are interested in the difference in performance between our method and the benchmark on each bootstrap sample. After all bootstrap samples are drawn, we can also estimate the average difference in performance between the two methods.
More precisely, let X be a random variable representing the difference in % profit between our model and the benchmark (our % profit minus the benchmark's % profit). By repeating the bootstrap resampling 10,000 times, we construct an empirical distribution of X. This empirical distribution allows us to calculate E[X], the average % profit difference between the two methods, and Pr(X ≥ 0), the probability that our method performs at least as well as the benchmark. In addition to a portfolio consisting of stocks from the two markets, we also test portfolios from SET or the US alone. Since the number of stocks considered in this work in each of Tables <ref> and <ref> is 10, a 7-stock portfolio is constructed in these cases instead of a 15-stock portfolio. The results are shown in Table <ref>.

From Table <ref>, our method beats the benchmark on average in every case (since E[X] > 0 throughout). However, for single-country portfolios, about half of the cases have a confidence level of superiority Pr(X ≥ 0) below 80%, although most still exceed a satisfactory 70%. For the two-country 15-stock portfolio, on the other hand, although E[X] is roughly the average of the two single-country portfolios, Pr(X ≥ 0) increases significantly, so that most cases have confidence levels above 80%, half of them above 90%. This statistically confirms the superiority of our method over the benchmark on the selected stocks. The increase in confidence level is due to the diversification effect of a portfolio with a larger number of stocks. Finally, we note that the empirical distribution of X is usually skewed and long-tailed, as illustrated in Figure <ref>. For such a non-trivial distribution, the bootstrap empirical-distribution estimation employed in the present paper usually provides more accurate results than traditional analytical asymptotic estimation <cit.>.

§ CONCLUSION AND FUTURE DIRECTIONS

In this paper, we propose to apply the advanced Dynamic Bayesian Network (DBN) methodology to model stock price dynamics with two latent variables, namely the fundamental PE and the medium-term noisy effect. This model is most suitable for one major category of practitioners, namely value investors in a security market (it is not suitable for technical investors or mainstream academic investors). We have derived both inference and parameter estimation algorithms. The resulting model can be used as a decision support system for investment experts, or used to construct a trading strategy directly, as illustrated in Section <ref>. Experiments at both the individual-firm and portfolio levels show a statistically significant advantage of our method.

There are many possible future directions for the present work. The first is to reformulate the stock price dynamics model more formally to reflect other important economic and financial variables, e.g. the interest rate, the equity risk premium and the return on equity. One way to do this is via the so-called Gordon Growth Model <cit.>, which links PE^* to the mentioned variables; the Dynamic Gordon Growth Model <cit.> is a further possibility in this direction. It is also possible to make our DBN model more realistic by allowing a time-varying short-term noisy effect, the so-called dynamic volatility <cit.>, or by allowing dynamic volume <cit.>.
Another promising direction in behavioral finance that could be incorporated into our model is the topic of heterogeneous agents <cit.>. To improve our inference procedure, approximate inference methods such as Variational Bayes <cit.> and stochastic methods such as Markov chain Monte Carlo <cit.> are also very promising future directions.
{ "authors": [ "Haizhen Wang", "Ratthachat Chatpatanasiri", "Pairote Sattayatham" ], "categories": [ "cs.CE", "cs.AI", "cs.LG", "q-fin.GN" ], "primary_category": "cs.CE", "published": "20170525132422", "title": "Stock Trading Using PE ratio: A Dynamic Bayesian Network Modeling on Behavioral Finance and Fundamental Investment" }
§ INTRODUCTION

After the discovery of the Higgs boson <cit.>, the measurements from Run 2 of the LHC programme have so far confirmed the Standard Model with remarkable precision. Given that signals of new physics will most likely be elusive, it is important to define and study observables that can be both experimentally measured and theoretically predicted with a few-percent uncertainty. In this scenario, a prominent role is played by processes featuring the production of a colour singlet of high invariant mass, for instance gluon-fusion Higgs and Drell-Yan production, where quantities like the transverse momentum of the singlet or angular observables defined on its decay products have been studied with increasing accuracy over the last decades.

The differential study of these processes is important not only from a purely phenomenological perspective, but also because it represents the ideal baseline for a more fundamental understanding of the underlying theory. Their structural simplicity indeed allows one to provide predictions that include several orders of perturbative corrections, hence probing in depth many non-trivial features of QCD. In this paper, we consider the hadro-production of a heavy colour singlet, and we study the class of observables, henceforth denoted by the symbol v, which are both transverse (i.e. which do not depend on the rapidity of the radiation) and inclusive (i.e. which depend only upon the total momentum of the radiation). As such, they depend only on the total transverse momentum of the radiation. Specifically, we concentrate on the transverse-momentum distribution of a Higgs boson in gluon fusion, but we stress that the same formulae hold for the whole class of transverse and inclusive observables, for instance the ϕ^* angle in Drell-Yan pair production. Moreover, although we limit ourselves to inclusive observables, the formalism presented in this work can be systematically extended to all transverse observables in colour-singlet hadro-production.

Inclusive and differential distributions for gluon-fusion Higgs production are nowadays known with very high precision. The inclusive cross section is now known at next-to-next-to-next-to-leading-order (N^3LO) accuracy in QCD <cit.> in the heavy top-quark limit. The N^3LO correction amounts to a few percent of the total cross section, indicating that the perturbative series has started to manifest convergence and that missing higher-order corrections are now getting under theoretical control. Current estimates show that they are very moderate in size <cit.>. The state-of-the-art results for the Higgs transverse-momentum spectrum in fixed-order perturbation theory are the next-to-next-to-leading-order (NNLO) computations of refs.
<cit.>, which have been obtained in the heavy top-quark limit.The impact of quark masses on differential distributions in the large-transverse-momentum limit is still poorly known beyond leading order, while in the moderate-p_t region, next-to-leading-order (NLO) QCD corrections to the top-bottom interference contribution were recently computed <cit.>. Although fixed-order results are crucial to obtain reliable theoretical predictions away from the soft and collinear regions of the phase space (v∼ 1), it is well known thatregions dominated by soft and collinear QCD radiation — which give rise to the bulk of the total cross section — are affected by large logarithmic terms of the form α_s^n ln^k(1/v)/v, with k≤ 2n-1, which spoil the convergence of the perturbative series at small v.In order to have a finite calculation in this limit, the subtraction of the infrared and collinear divergences requires an all-order resummation of the logarithmically divergent terms. The logarithmic accuracy is commonly defined in terms of the perturbative series of the logarithm of the cumulative cross section Σ aslnΣ(v)≡ln∫_0^v dv' d σ(v')/d v'= ∑_n { O(α_s^nln^n+1(1/v)) +O(α_s^nln^n(1/v)) +O(α_s^nln^n-1(1/v))+…}.One refers to the dominant terms α_s^n ln^n+1(1/v) as leading logarithmic (LL), to terms α_s^n ln^n(1/v) as next-to-leading logarithmic (NLL), to α_s^n ln^n-1(1/v) as next-to-next-to-leading logarithmic (NNLL), and so on. The resummation of the p_t spectrum of a heavy colour singlet was first analysed in the seminal work by Parisi and Petronzio <cit.>, where it was shown that in the low-p_t region the spectrum vanishes as dσ/dp_t ∼ p_t, instead of vanishing exponentially as suggested by Sudakov suppression. This power-law behaviour is due to configurations in which p_t vanishes due to cancellations among the non-vanishing transverse momenta of all emissions.Around and below the peak of the distribution, this mechanism dominates with respect to kinematical configurations where p_t becomes small due to all the emissions having a small transverse momentum, i.e. the configurations which would yield an exponential suppression.In order to properly deal with these two competing mechanisms, in ref. <cit.> it was proposed to perform the resummation in the impact-parameter (b) space, where both effects leading to a vanishing p_t are handled through a Fourier transform. Using the b-space formulation, the Higgs p_t spectrum was resummed at NNLL accuracy in <cit.> using the formalism developed in <cit.>, as well as in <cit.> by means of a soft-collinear-effective-theory (SCET) approach <cit.>. A study of the related theory uncertainties in the SCET formulation was presented in ref. <cit.>.More recently, all the necessary ingredients for the N^3LL resummation were computed <cit.>, with the exception of the four-loop cusp anomalous dimension which is currently unknown.This paves the way to more precise predictions for transverse observables in the infrared region. The impact of both threshold and high-energy resummation on the small-transverse-momentum region was also studied in detail in refs. <cit.>.The problem of the resummation of the transverse momentum distribution in direct (p_t) space received substantial attention throughout the years <cit.>, but remained unsolved until recently. 
Due to the vectorial nature of these observables, it is indeed not possible to define a resummed cross section at a given logarithmic accuracy in direct space that is simultaneously free of any subleading logarithmic contributions and of spurious singularities at finite values of p_t > 0. Last year some of us proposed a solution to this problem, formulating a resummation formalism in direct space up to NNLL order <cit.>, and used it to match the NNLL resummation to the NNLO Higgs p_t spectrum. The problem of direct-space resummation for the transverse-momentum distribution was also considered more recently in ref. <cit.> following a SCET approach, where the renormalisation-group evolution is addressed directly in momentum space.

In this article we explain in detail the formalism introduced in <cit.>. Furthermore, we extend it to N^3LL, and formulate it in general terms, so that a direct application at this logarithmic accuracy to all transverse, inclusive observables is possible. We point out that our final result lacks the contribution of the unknown four-loop cusp anomalous dimension, which is set to zero in the following.

The paper is structured as follows: in Section <ref> we sketch the main features of our formalism, based on and extending the one developed in ref. <cit.>, through the derivation of a simplified NLL formula relevant to the case of scale-independent parton densities. Section <ref> discusses the choice of the resolution variable and kinematic ordering in the evolution of the radiation. In Section <ref> we discuss the structure of higher-order corrections, and in particular in Section <ref> we treat the inclusion of parton densities and of hard-collinear radiation, thereby making our formalism fully capable of dealing with initial-state radiation. In Section <ref> we prove that our method is formally equivalent to the more common b-space formulation of transverse-momentum resummation. Section <ref> shows how to evaluate our formula to N^3LL order, and in Section <ref> we present a study of the scaling property of the differential distribution in the p_t→ 0 limit, and compare our findings to the classic result by Parisi and Petronzio <cit.>. Finally, in Section <ref> we discuss the matching to NNLO, and in Section <ref> we present N^3LL accurate predictions for the Higgs-boson transverse-momentum spectrum at the LHC, matched to NNLO. In Appendix <ref> we show that, at NLL, the approach used here is equivalent to a backward-evolution algorithm for this class of observables, while Appendix <ref> collects some of the relevant equations used in the article.

§ DERIVATION OF THE MASTER FORMULA

We consider the resummation of a continuously global, recursive infrared and collinear (rIRC) safe <cit.> observable V in the reaction pp→ B, B being a generic colourless system with high invariant mass M. It is instructive to work out in detail the case of NLL resummation first. This will be done in Section <ref>, where we assume that the parton densities are independent of the scale. We then discuss the inclusion of higher-order corrections in Section <ref>, and the correct treatment of the parton luminosity will be dealt with in Section <ref>. Finally, in Section <ref>, we discuss the connection to the impact-parameter-space formulation for transverse-momentum resummation.

§.§ Cancellation of IRC divergences and NLL resummation

In the present subsection we assume that the parton densities are independent of the scale and set to one for the sake of simplicity.
To set up the notation we work in the rest frame of the produced colour singlet, and we introduce two reference light-like momenta that will serve to parametrise the radiation

p̃_1 = M/2 (1,0,0,1) , p̃_2 = M/2 (1,0,0,-1) ,

where M is the invariant mass of the colour singlet, whose momentum p_B in this frame reads p_B = p̃_1 + p̃_2. The directions of the two momenta in Eq. (<ref>) coincide with the beam axis at the Born level. Beyond the Born level, radiation of gluons and quarks takes place, so that the final state consists in general of n partons with outgoing momenta k_1,…,k_n, and of the colour singlet. Due to this radiation, the singlet acquires a transverse momentum with respect to the beam direction. We express the final-state momenta by means of the Sudakov parametrisation

k_i = (1-y_i^(1)) p̃_1 + (1-y_i^(2)) p̃_2 + κ̃_ti ,

where the κ̃_ti are space-like four-vectors, orthogonal to both p̃_1 and p̃_2. In the reference frame (<ref>) each κ̃_ti has no time component, and can be written as κ̃_ti = (0, k̃⃗_ti), such that κ̃_ti^2 = -k̃_ti^2. Notice that, since k_i is massless,

k̃_ti^2 = (1-y_i^(1))(1-y_i^(2)) M^2 = 2(p̃_1 k_i) 2(p̃_2 k_i)/[2(p̃_1 p̃_2)] .

In the chosen parametrisation, the emission's (pseudo-)rapidity η_i in this frame is

η_i = (1/2) ln[(1-y_i^(1))/(1-y_i^(2))].

The observable V is in general a function of all momenta, and we denote it by V({p̃},k_1,…,k_n); without loss of generality we assume that it vanishes in Born-like kinematic configurations. The transverse observables considered in this paper are those which obey the following general parametrisation for a single soft emission k collinear to leg ℓ:

V({p̃},k) ≡ V(k) = d_ℓ g_ℓ(ϕ) (k_t/M)^a ,

where k_t is the transverse momentum with respect to the beam axis, g_ℓ(ϕ) is a generic function of the angle ϕ that k⃗_t forms with a fixed reference vector n⃗ orthogonal to the beam axis, d_ℓ is a normalisation factor, and a > 0 due to collinear and infrared safety. In particular, in this work we focus on the family of inclusive observables that will be defined in the next section. Examples of such observables are the transverse momentum of the colour-singlet system (corresponding to d_ℓ=g_ℓ(ϕ)=a=1)[Without loss of generality we have introduced a dimensionless version of the transverse momentum by dividing by the singlet's mass.], and ϕ^* <cit.> (corresponding to d_ℓ=a=1, g_ℓ(ϕ)=|sin(ϕ)|). In the latter case, the reference vector n⃗ is chosen along the direction of the dilepton system in the rest frame of the Z boson.

The transverse momentum of the parametrisation (<ref>) is related to the one relative to the beam axis, which enters the definition of the observable, by recoil effects due to hard-collinear emissions off the same leg ℓ. To find the relationship, we consider the radiation collinear to p̃_1. The momentum of the initial-state parton before any radiation, p_1, is related to the latter as follows:

p_1 = p̃_1 + ∑_j∈ 1 k_j ,

where the notation j∈ 1 indicates all emissions k_j radiated off leg 1. The above equation can be recast as

p_1 = (1+∑_j∈ 1 (1-y_j^(1))) p̃_1 + ∑_j∈ 1 (1-y_j^(2)) p̃_2 + ∑_j∈ 1 κ̃_tj .

We can use the above equation to express p̃_1 as a function of p_1. By plugging the resulting equation into Eq.
(<ref>), we find that the transverse momentum of emissionk_i with respect to p_1 is k⃗_ti = k⃗̃⃗_ti - 1-y_i^(1)/1+∑_j ∈ 1(1-y_j^(1))(∑_j∈ 1k⃗̃⃗_tj).Generalising the above equation for k_i emitted off any leg ℓ=1,2we obtaink⃗_ti = k⃗̃⃗_ti - 1-y_i^(ℓ)/1+∑_j ∈ℓ(1-y_j^(ℓ))(∑_j∈ℓk⃗̃⃗_tj),where with the notation j∈ℓ we refer to partons that are emitted off the same leg p̃_ℓ as k_i. When only one emission is present, the above relation reduces tok⃗_ti = k⃗̃⃗_ti/2-y_i^(ℓ). In the soft approximation the two quantities coincide as y_i^(ℓ)≃ 1. In the present section we work under the assumption of soft kinematics in order to introduce the notation and derive the NLL result. The treatment of hard-collinear emissions will be discussed in detail in Section <ref>, where we extend the results derived here to the general case of initial-state radiation.The central quantity under study is the resummed cumulative cross section for V smaller than some value v, Σ(v), defined asΣ(v) = ∫_0^v dv' d σ(v')/d v'.In the infrared and collinear (IRC) limit, Σ(v) receives contributions from both virtual corrections and soft and/or collinear real emissions. The IRC divergences of the form factor exponentiate at all orders (see, for instance, refs. <cit.> and references therein), and we denote them by V(Φ_B) in the following discussion, where Φ_B is the phase space of the underlying Born. Therefore we can recast Eq. (<ref>) as followsΣ(v) = ∫ dΦ_BV(Φ_B) ∑_n=0^∞∫∏_i=1^n [dk_i] |M(p̃_1,p̃_2,k_1,… ,k_n)|^2 Θ(v-V({p̃},k_1,…,k_n)) ,where M is the matrix element for n real emissions (the case with n=0 reduces to the Born matrix element M_B), and [dk_i] denotes the phase space for the emission k_i. The Θ function represents the measurement function for the observable under consideration. Finally, to keep the notation concise, we have defined dΦ_B≡ d x_1 d x_2 dΦ_n (2π)^dδ(p̃^μ_1+p̃^μ_2-p^μ_B), where dΦ_n is the n-body phase space of the singlet system, and we have absorbed the partonic flux factor 1/(4p̃_1·p̃_2) into the squared amplitude |M|^2 (and analogously in |M_B|^2 below). The renormalised squared amplitude for n real emissions (p p→ B + n gluons) can be conveniently decomposed as [The decomposition above can be extended to the case in which some of the n emissions are quarks by properly changing the multiplicity factors in front of each term.]|M(p̃_1,p̃_2,k_1,… ,k_n)|^2 = |M_B(p̃_1,p̃_2)|^2{(1/n!∏_i=1 x^n|M(k_i)|^2) +..[∑_a > b1/(n-2)!(∏_i=1i≠ a,b^n|M(k_i)|^2 )|M̃(k_a, k_b)|^2+.. ..∑_a > b∑_ c > dc,d≠ a,b1/(n-4)!2!(∏_i=1i≠ a,b,c,d^n|M(k_i)|^2 )|M̃(k_a, k_b)|^2 |M̃(k_c, k_d)|^2. + …]. + [∑_a > b > c1/(n-3)!(∏_i=1i≠ a,b,c^n|M(k_i)|^2 )|M̃(k_a, k_b,k_c)|^2 +…]+…},where we have introduced the n-particle correlated matrix elements squared |M̃(k_a, …,k_n)|^2, which are defined recursively as follows|M̃(k_a)|^2 = |M(p̃_1,p̃_2,k_a)|^2/|M_B(p̃_1,p̃_2)|^2= |M(k_a)|^2,|M̃(k_a,k_b)|^2 =  |M(p̃_1,p̃_2,k_a,k_b)|^2/|M_B(p̃_1,p̃_2)|^2-1/2!|M(k_a)|^2|M(k_b)|^2,|M̃(k_a,k_b,k_c)|^2 =  |M(p̃_1,p̃_2,k_a,k_b,k_c)|^2/|M_B(p̃_1,p̃_2)|^2-1/3!|M(k_a)|^2|M(k_b)|^2|M(k_c)|^2 -|M̃(k_a,k_b)|^2|M(k_c)|^2-|M̃(k_a,k_c)|^2|M(k_b)|^2-|M̃(k_b,k_c)|^2|M(k_a)|^2,and so on. 
These represent the contributions to the n-particle squared matrix element that vanish in strongly-ordered kinematic configurations, and that cannot be factorised in terms of lower-multiplicity squared amplitudes. Each of the correlated squared amplitudes admits a perturbative expansion

|M̃(k_a,…,k_n)|^2 ≡ ∑_j=0^∞ (α_s(μ)/2π)^n+j nPC^(j)(k_a,…,k_n) ,

where μ is a common renormalisation scale, and α_s is the strong coupling constant in the MS scheme. The notation nPC in Eq. (<ref>) stands for "n-particle correlated" and it will be used throughout the article. The rIRC safety of the observables considered here guarantees a hierarchy between the different blocks in the decomposition (<ref>), in the sense that, generally, correlated blocks with n particles start contributing at one logarithmic order higher than correlated blocks with n-1 particles <cit.>.

In the present article, we focus on the family of inclusive observables V for which

V({p̃},k_1,…,k_n) = V({p̃},k_1+…+k_n) .

In this case, we can integrate the nPC blocks for n>1 inclusively prior to evaluating the observable. Hence, starting from Eq. (<ref>) for the pure gluonic case, we can replace the squared amplitude with

∑_n=0^∞ |M(p̃_1,p̃_2,k_1,…,k_n)|^2 ⟶ |M_B(p̃_1,p̃_2)|^2 ∑_n=0^∞ (1/n!) ∏_i=1^n ( |M(k_i)|^2 + ∫ [dk_a][dk_b] |M̃(k_a,k_b)|^2 δ^(2)(k⃗_ta+k⃗_tb-k⃗_ti) δ(Y_ab-Y_i) + ∫ [dk_a][dk_b][dk_c] |M̃(k_a,k_b,k_c)|^2 δ^(2)(k⃗_ta+k⃗_tb+k⃗_tc-k⃗_ti) δ(Y_abc-Y_i) + … ) ≡ |M_B(p̃_1,p̃_2)|^2 ∑_n=0^∞ (1/n!) ∏_i=1^n |M(k_i)|_inc^2 ,

where Y_abc… is the rapidity of the k_a+k_b+k_c+… system in the centre-of-mass frame of the collision. We refer to this treatment of the squared amplitude as the inclusive approximation.[For non-inclusive observables, namely the ones that do not fulfil Eq. (<ref>), this reorganisation is not correct starting at NNLL, and one must correct for the non-inclusive nature of the observables. The full set of NNLL corrections for a generic global, rIRC safe observable is defined in refs. <cit.>. In the rest of this article we refer to observables of the type (<ref>).]

With the above notation, we can rewrite Eq. (<ref>) as

Σ(v) = ∫ dΦ_B |M_B(p̃_1,p̃_2)|^2 V(Φ_B) ∑_n=0^∞ (1/n!) ∫∏_i=1^n [dk_i] |M(k_i)|_inc^2 Θ(v-V({p̃},k_1,…,k_n)) ,

where |M(k_i)|_inc^2 is defined in Eq. (<ref>).

Once the logarithmic counting for the squared amplitude has been set up, the next step is to discuss the cancellation of the exponentiated divergences of virtual origin against the real ones. At all perturbative orders at a given logarithmic accuracy, we need to single out the IRC singularities of the real matrix elements, which can again be achieved by exploiting <cit.> the rIRC safety of the observable V({p̃},k_1,…,k_n) that we are computing. We then order the inclusive blocks described by |M(k_i)|_inc^2 according to their contribution to the observable V(k_i), i.e. V(k_1) > V(k_2) > … > V(k_n). We consider configurations in which the radiation corresponding to the first (hardest) block |M(k_1)|_inc^2 has occurred, using the fact that the contribution with n=0 in Eq.
(<ref>) (which does not have any real emissions) vanishes since it is infinitely suppressed by the pure virtual corrections V(Φ_B) The rIRC safety of the observable allows us to introduce a resolution parameter ϵ≪ 1 independent of the observable such that all inclusive blocks with (k_i) < ϵ(k_1) can be neglected in the computation of the observable up to power-suppressed corrections O(ϵ^p(k_1)), that eventually will vanish once we take the limit ϵ→ 0.Therefore, we classify inclusive blocks k as resolved if (k)> ϵ(k_1), and as unresolved if (k)< ϵ(k_1). This definition is collinear safe at all perturbative orders.With this separation Eq. (<ref>) becomesΣ(v)= ∫ dΦ_B |M_B(p̃_1,p̃_2)|^2 V(Φ_B)×∫ [dk_1] |M(k_1)|_ inc^2 (∑_l=0^∞1/l!∫∏_j=2^l+1 [dk_j] |M(k_j)|_ inc^2 Θ(ϵ(k_1)- (k_j)))×(∑_m=0^∞1/m!∫∏_i=2^m+1 [dk_i] |M(k_i)|_ inc^2 Θ((k_i)-ϵ(k_1))Θ(v-V({p̃},k_1,…,k_m+1))) .The phase space of the unresolved real ensemble is now solely constrained by the upper resolution scale, since it does not contribute to the evaluation of the observable. As a consequence, it can be exponentiated directly in Eq. (<ref>) and employed to cancel the divergences of the virtual corrections V(Φ_B). We can now proceed with an explicit evaluation of Eq. (<ref>) at NLL order. As we mentioned earlier, at different logarithmic orders the cross section will receive contributions from different classes of correlated blocks. This, for instance, means that double-logarithmic terms of the form α_s^n ln^2n(1/v) entirely arise from 1PC^(0) blocks, in particular from their soft-collinear part. If one wants to control all the leading-logarithmic terms of order α_s^n ln^n+1(1/v) in ln(Σ(v)) (Eq. (<ref>)) then the leading (soft-collinear) term of the 1PC^(1) and 2PC^(0) blocks must be included as well. In particular, within the inclusive approximation defined in Eq. (<ref>) we find that|M(k)|_ inc^2 ≃|M(k)|^2 + ∫ [dk_a][dk_b]|M̃(k_a,k_b)|^2δ^(2)(k⃗_ta+k⃗_tb-k⃗_t)δ(Y_ab-Y)= α_s(μ)/2π1 PC^(0)(k)(1+α_s(μ)(β_0lnk_t^2/μ^2 + K/2π)+…),where β_0 is the leading term of the QCD beta function (see Appendix <ref>). Moreover, the QCD coupling is renormalised in the MS scheme. The contribution of the one-loop cusp anomalous dimension K, defined asK = (67/18-π^2/6)C_A - 5/9 n_f ,enters at NLL order, and it will be considered later in this section. Up to, and including, the NLL term proportional to K in Eq. (<ref>), one can integrate inclusively over the invariant mass of the 2PC^(0) block, while keeping the bounds on the rapidity Y as computed from the massless kinematics. This approximation neglects terms which are at most NNLL, and are denoted by the ellipsis in the second line of Eq. (<ref>).We notice that the leading soft-collinear terms proportional to β_0 in Eq. (<ref>) can be entirely encoded in the running of the coupling of the single-emission squared amplitude 1 PC^(0)(k) through a proper choice of the scale μ at which the latter is evaluated. It is indeed easy to see from Eq. (<ref>) that this is achieved by setting μ to the k_t (equal to k̃_t for soft radiation) of each emission k in the parametrisation (<ref>) <cit.>. The inclusive matrix element squared and phase space controlling all α_s^n ln^n+1(1/v) terms are thus[dk] | M(k)|_ inc^2≃[dk]M^2_ sc(k)= ∑_ℓ=1,2 2 C_ℓα_s(k_t)/πdk_t/k_td z^(ℓ)/1-z^(ℓ) Θ((1-z^(ℓ)) - k_t/M) Θ(z^(ℓ)) dϕ/2π ,where we use M_ sc(k) to denote the amplitude in the soft approximation. 
We have denoted by C_ℓ the Casimir factor of the emitting leg (C_ℓ=C_F for quarks, and C_ℓ=C_A for gluons). For initial-state radiation, 1-z^(ℓ) is the fraction of the incoming momentum (entering the emission vertex) that is carried by the emitted parton. This will in general differ from the y^(ℓ) fractions of the Sudakov parametrisation (<ref>) when some emissions are not soft. In particular, while (1-z^(ℓ)) ≤ 1, this is not true in general for the (1-y^(ℓ)) appearing in our initial parametrisation. However, in the soft limit the energy of the emission is much smaller than the singlet's mass M, which restricts y_i^(ℓ) to positive values. For a single emission, the two variables are related by

1-y^(ℓ) = (1-z^(ℓ))/z^(ℓ) ,

from which it is clear that in the soft limit z^(ℓ)≃ 1 one has z^(ℓ)≃ y^(ℓ). The upper bound for z^(ℓ) in the single-emission case can be worked out by imposing that y^(ℓ) < 1-k̃_t/M, and subsequently relating k̃_t to the k_t relative to the beam axis. This yields

z^(ℓ) < 1-k_t/M + O(k_t^2) .

To extend the above discussion to all NLL terms of order α_s^n ln^n(1/v) in the logarithm of Σ(v), we must include the less singular part of the 1PC^(1) and 2PC^(0) blocks in the soft limit, that is the term proportional to K in Eq. (<ref>) that was previously ignored. This simply amounts to replacing the inclusive (soft) matrix element on the r.h.s. of (<ref>) with

[dk] M^2_CMW(k) = ∑_ℓ=1,2 2 C_ℓ (α_s(k_t)/π) (1 + α_s(k_t) K/2π) (dk_t/k_t) (dz^(ℓ)/(1-z^(ℓ))) Θ((1-z^(ℓ)) - k_t/M) Θ(z^(ℓ)) (dϕ/2π) .

This operation is also known as the Catani-Marchesini-Webber (CMW) scheme <cit.> for the running coupling.[Although in the present article we are considering only inclusive observables, it can be shown <cit.> that for all rIRC safe observables (also non-inclusive ones) the inclusive approximation is accurate at NLL order.]

At this logarithmic order the cross section also receives contributions from the hard-collinear part of the 1PC^(0) block, which we have ignored so far. One thus has to modify Eq. (<ref>) as

[dk] |M(k)|_inc^2 = [dk] M^2_CMW(k) + ∑_ℓ=1,2 (dk_t^2/k_t^2) (dz^(ℓ)/(1-z^(ℓ))) (dϕ/2π) (α_s(k_t)/2π) ( (1-z^(ℓ)) P^(0)(z^(ℓ)) - lim_z^(ℓ)→ 1 [(1-z^(ℓ)) P^(0)(z^(ℓ))] ) ,

where P^(0)(z^(ℓ)) is the leading-order unregularised splitting function, reported in Appendix <ref>.[For emissions off gluonic legs, P^(0) receives contributions from both P^(0)_gg and P^(0)_gq, as will be discussed in Sec. <ref>. In this case, we implicitly exploit the symmetry of P^(0)_gg under z↔ 1-z to recast it such that it has only a z→ 1 singularity.]

At NLL order, the above hard-collinear contribution can be treated by neglecting the effect of recoil both in the phase-space boundaries of the other emissions and in the observable, since both effects enter at NNLL order. Therefore, also for this contribution we can use the soft kinematics derived in the first part of this section. Moreover, in colour-singlet production we can use the azimuthally averaged splitting functions (see Appendix <ref>) up to NNLL accuracy. At N^3LL, corrections from azimuthal correlations arise <cit.>, and they will be introduced in Section <ref>.

We now insert Eq. (<ref>) back into Eq. (<ref>). At NLL accuracy, we can neglect the constant terms of the virtual corrections.
The remaining singular structure of the virtual corrections then depends only upon the invariant mass of the singlet M^2:

V(Φ_B) ≃ V(M^2) = exp{-∫ [dk] |M(k)|_inc^2} at NLL.

The combination of unresolved real and virtual contributions is thus finite and gives rise to a Sudakov suppression factor

V(M^2) exp{∫ [dk] |M(k)|_inc^2 Θ(ϵ V(k_1) - V(k))} ≃ exp{-∫ [dk] |M(k)|_inc^2 Θ(V(k) - ϵ V(k_1))} = e^-R(ϵ V(k_1)) ,

where R is the radiator, which at this order reads <cit.>

R(v) ≃ R_NLL(v) ≡ ∫ [dk] M_CMW^2(k) Θ(ln(k_t/M)^a - ln v) + ∫ [dk] M_CMW^2(k) ln d̅_ℓ δ(ln(k_t/M)^a - ln v) + ∑_ℓ=1,2 C_ℓ B_ℓ ∫ (dk_t^2/k_t^2) (α_s(k_t)/2π) Θ(ln(k_t/M)^a - ln v) ,

where

ln d̅_ℓ = ∫_0^2π (dϕ/2π) ln d_ℓ g_ℓ(ϕ) ,

and

C_ℓ B_ℓ = ∫_0^1 (dz^(ℓ)/(1-z^(ℓ))) ((1-z^(ℓ)) P^(0)(z^(ℓ)) - lim_z^(ℓ)→ 1 [(1-z^(ℓ)) P^(0)(z^(ℓ))]) .

The next and final step is to treat the resolved real blocks k_i, for which V(k_i) > ϵ V(k_1). It is therefore necessary to work out the kinematics and phase space in the presence of additional radiation, which modifies the relations (<ref>) and (<ref>) obtained in the single-emission case. For this we use the fact that the radiation is ordered in V(k_i). For a given inclusive block of total momentum k_i, one then has[See also the discussion in appendix E of ref. <cit.>.]

1-y_i^(ℓ) = (1-z_i^(ℓ))/(z_1^(ℓ) z_2^(ℓ) … z_i^(ℓ)) ,

where the emissions k_1,k_2,…,k_i-1 have been radiated off the same hard leg before k_i. In general, this implies that the phase space available for each emission is changed by the previous resolved radiation. At the NLL order considered in this section, as already stressed, the real-radiation kinematics can be approximated by its soft limit <cit.>. This allows us to approximate y_i^(ℓ)≃ z^(ℓ)_i and k_t≃k̃_t for all real emissions, so that the phase space of each emission becomes in fact independent of the remaining radiation in the event.

The squared matrix element (<ref>) and phase space for a resolved real emission can be parametrised by introducing the functions

R'_1(v/d_1 g_1(ϕ̅)) = ∫ [dk] |M(k)|^2_inc (2π) δ(ϕ-ϕ̅) v δ(v-V(k)) Θ(y^(2) - y^(1)) ,
R'_2(v/d_2 g_2(ϕ̅)) = ∫ [dk] |M(k)|^2_inc (2π) δ(ϕ-ϕ̅) v δ(v-V(k)) Θ(y^(1) - y^(2)) ,

and

R'(v,ϕ) = R'_1(v/d_1 g_1(ϕ)) + R'_2(v/d_2 g_2(ϕ)) .

From the generic form (<ref>) of the rIRC safe observable V(k), it is easy to verify that the R' functions depend only upon the ratio v/(d_ℓ g_ℓ(ϕ̅)), up to regular terms, which are neglected <cit.>. Indeed, the only non-trivial integration in Eqs. (<ref>) is the one over the rapidity of k, which can be performed inclusively since the observable V(k) does not depend on it (see Eq. (<ref>)). The final integral then depends only on the ratio of the two remaining scales, i.e. the invariant mass of the singlet M, and its transverse momentum, which is set to (v/(d_ℓ g_ℓ(ϕ̅)))^1/a M by the constraint δ(v-V(k)).

Upon inclusive integration over the rapidity of the momentum k, by using Eq. (<ref>), we can parametrise the inclusive squared amplitude and its phase space as

[dk_i] |M(k_i)|^2_inc = (dv_i/v_i) (dϕ_i/2π) ∑_ℓ_i=1,2 R'_ℓ_i(v_i/d_ℓ_i g_ℓ_i(ϕ_i)) = (dζ_i/ζ_i) (dϕ_i/2π) ∑_ℓ_i=1,2 R'_ℓ_i(ζ_i v_1/d_ℓ_i g_ℓ_i(ϕ_i)) ,

where we defined v_i = V(k_i) and ζ_i = V(k_i)/V(k_1). With the above considerations, Eq. (<ref>) finally becomes

Σ(v) = σ^(0) ∫ (dv_1/v_1) ∫_0^2π (dϕ_1/2π) e^-R(ϵ v_1) ∑_ℓ_1=1,2 R'_ℓ_1(v_1/d_ℓ_1 g_ℓ_1(ϕ_1)) × ∑_n=0^∞ (1/n!) ∏_i=2^n+1 ∫_ϵ^1 (dζ_i/ζ_i) ∫_0^2π (dϕ_i/2π) ∑_ℓ_i=1,2 R'_ℓ_i(ζ_i v_1/d_ℓ_i g_ℓ_i(ϕ_i)) Θ(v-V({p̃},k_1,…,k_n+1)) ,

where we introduced the total Born cross section

σ^(0) = ∫ dΦ_B |M_B(p̃_1,p̃_2)|^2 .

Eq. (<ref>) resembles equation (2.34) of ref.
<cit.> which after a number of approximations leads to the general NLL formula of the CAESAR method for global rIRC observables in processes with two hard legs.We remind the reader that additional corrections coming from the parton luminosities start at NLL order, and they will be discussed in Section <ref>. Eq. (<ref>) can be directly evaluated using Monte-Carlo (MC) techniques since it is finite in four dimensions. However, as it is formulated now it contains effects that are logarithmically subleading with respect to the formal NLL accuracy we are considering in this section. For observables that vanish only in the Sudakov limit, these subleading effects can be systematically disposed of by means of a few approximations, as described in ref. <cit.>. We now briefly review such approximations on Eq. (<ref>), and show that in the case of observables that vanish away from the Sudakov region they lead to a divergent result, hence they cannot be trivially performed.In order to neglect subleading corrections from Eq. (<ref>), we need to consistently treat the resolved squared amplitude and the corresponding Sudakov radiator. In particular, with NLL accuracy, ref. <cit.> suggests to perform the following Taylor expansions in Eq. (<ref>)R(ϵ v_1) = R(v) + dR(v)/dln(1/v)lnv/ϵ v_1 +O(ln^2v/ϵ v_1),R'_ℓ_i(v_i/d_ℓ_i g_ℓ_i(ϕ_i)) = R'_ℓ_i(v) +O(lnv d_ℓ_i g_ℓ_i(ϕ_i) /v_i).This is motivated by the fact that at NLL the resolved real emissions are such that v_i∼ v_1∼ v, and hence the terms neglected in the above expansions are at most NNLL. Only by expanding consistently (i.e. to the same logarithmic order) the ϵ dependence in the Sudakov and in the resolved real emissions we are sure that the result is completely ϵ-independent.We observe that, since we expanded out the ϕ_i dependence in R', we have dR(v)/dln(1/v)=∑_ℓR'_ℓ(v) and Eq. (<ref>) becomesΣ(v)≃σ^(0)∫d v_1/v_1∫_0^2πdϕ_1/2π e^-R(v)e^-∑_ℓ R_ℓ'(v)lnv/ϵ v_1∑_ℓ_1=1,2 R_ℓ_1'(v) ××∑_n=0^∞1/n!∏_i=2^n+1∫_ϵ^1dζ_i/ζ_i∫_0^2πdϕ_i/2π∑_ℓ_i=1,2R_ℓ_i'(v) Θ(v-V({p̃},k_1,…, k_n+1)) .At this stage, the integration over v_1 can be performed analytically, and Eq. (<ref>) reproduces exactly the known CAESAR formula.[Some extra simplifications can be made at NLL: in the resolved real squared matrix elements R'_ℓ one can keep only the term proportional to M^2_ sc as remaining terms are subleading. In order to guarantee the cancellation of the divergences in the ϵ regulator, the same approximation has to be made in the term ∑_ℓ R_ℓ'(v)lnv/ϵ v_1 coming from the expansion of the Sudakov radiator. Finally, the observable can be treated in its soft-collinear approximation given that, at NLL, the real emissions constitute an ensemble of soft-collinear gluons.] However, in order to perform the latter expansions about the observable's value v, one has to make sure that the ratio v_i/v remains of order one in the real-emission phase space. rIRC safety ensures that emissions with v_i ≪ v do not contribute to the observable, and are fully exponentiated and accounted for in the Sudakov radiator. Therefore, the condition v_i/v∼ 1 is fulfilled only if configurations in which v_i ≫ v never occur.While the latter condition holds true for most rIRC observables, it is clearly violated for observables that vanish away from the Sudakov limit. An example is given by the transverse momentum of a colour singlet, which can vanish even in the presence of several emissions with a finite (non-zero) transverse momentum. In that case, as shown in ref. <cit.>, Eq. 
(<ref>) has a divergence at ∑_ℓ R'_ℓ(v) ≃ 2. For a different observable vanishing away from the Sudakov limit, the divergence will occur at a different, non-zero value of v.For such observables, Eq. (<ref>) cannot be expanded around v. As we will discuss in detail in Section <ref>, we suggest to perform the following alternative expansion about the observable's value of the hardest block v_1R(ϵ v_1) = R(v_1) + dR(v_1)/dln(1/v_1)ln1/ϵ +O(ln^21/ϵ),R'_ℓ_i(v_i/d_ℓ_i g_ℓ_i(ϕ_i)) = R'_ℓ_i(v_1) +O(lnv_1 d_ℓ_i g_ℓ_i(ϕ_i) /v_i). In this way, the rIRC safety of the observable guarantees that v_i∼ v_1 (ζ_i∼ 1) and therefore the terms neglected in Eqs. (<ref>) are at most NNLL. However, a class of higher-order terms still remains in Eq. (<ref>) through the dependence of the considered terms on v_1. These higher-order terms cannot be disposed of entirely, as they regularise the divergence discussed above.Therefore, while the resulting equation is finite and accurate at NLL order also for rIRC-safe observables that vanish away from the Sudakov limit, subleading corrections beyond NLL cannot be entirely removed.The above approximations make the evaluation of Eq. (<ref>) considerably simpler than its original form, as it will be shown in Section <ref>. Its implementation can be carried out efficiently with MC methods as described in detail in Section <ref>.§.§ Choice of the resolution and ordering variable The derivation that we carried out for the resummation formalism relies to a large extent on the introduction of a resolution variable that separates resolved real blocks from unresolved ones as discussed in the previous section. This resolution variable acts on the total momentum of each of the correlated blocks.One has some freedom in choosing the resolution variable. In principle, the only necessary property for a good resolution variable is that it must guarantee, at all orders, the cancellation of the IRC divergences of the exponentiated virtual corrections, and hence has to be rIRC safe. A particular choice is motivated by convenience in formulating the calculation. For instance, choosing a variable that shares the same leading logarithms with the resummed observable allows for a much easier implementation of the all-order result, as it will be discussed in Section <ref>. A natural choice, which fulfils the above requirements, is the value of observable in its soft-collinear approximation, as discussed in refs. <cit.>.However, we note that for the whole class of transverse observables (that scale like Eq. (<ref>) for a single emission), a more convenient choice for the resolution variable is V(k)=(k_t/M)^a, k being the sum of the four-momenta in each correlated block. While this exactly coincides with the above prescription for observables with d_ℓ=g_ℓ(ϕ)=1, it is a legitimate choice also for observables with d_ℓ≠ 1, g_ℓ(ϕ)≠ 1 since the dependence on d_ℓ g_ℓ(ϕ) first enters at NLL order, hence the leading logarithms of the resolution variable are the same as for the resummed observable.The advantage of the latter choice, besides the simplifications in the implementation to be discussed in Section <ref>, is that it leads to a universal Sudakov radiator for all observables with the same a in the parametrisation (<ref>), while the resolved real radiation will correctly encode the full observable dependence through the measurement function Θ(v-V({p̃},k_1,…, k_n+1)). In the present article, we adopt this choice, and we present explicitly the case for a=1. 
The generalisation to any a>0 is straightforward following our derivation. With this choice, Eq. (<ref>) reads

Σ(v) = σ^(0) ∫ (dk_t1/k_t1) ∫_0^2π (dϕ_1/2π) e^-R(ϵ k_t1) ∑_ℓ_1=1,2 R'_ℓ_1(k_t1) × ∑_n=0^∞ (1/n!) ∏_i=2^n+1 ∫_ϵ^1 (dζ_i/ζ_i) ∫_0^2π (dϕ_i/2π) ∑_ℓ_i=1,2 R'_ℓ_i(ζ_i k_t1) Θ(v-V({p̃},k_1,…,k_n+1)) ,

where, with a slight abuse of notation, we have redefined ζ_i = k_ti/k_t1. As will be described in Section <ref>, the above equation can be evaluated efficiently as a simplified shower of primary emissions off the initial-state legs, ordered in transverse momentum (a minimal sketch is given at the end of this subsection). This choice of the ordering variable is dictated by the choice of the resolution scale, which in turn leads to the Sudakov radiator for a k_t-ordered evolution in Eq. (<ref>).

§.§ Structure of higher-order corrections

In deriving the main result of the previous section, Eq. (<ref>), we made two approximations. Firstly, we ignored nPC correlated blocks with n>2 in the squared amplitudes (<ref>). Secondly, we did not specify a complete treatment of hard-collinear radiation: the only hard-collinear contribution entering at NLL (in Eq. (<ref>)) has been treated with soft kinematics. We discuss how to relax both approximations in the next two subsections.

§.§.§ Correlated blocks at higher-logarithmic order

Higher-order corrections require the inclusion of higher-multiplicity and higher-order blocks with respect to those relevant to Eq. (<ref>). The blocks necessary at a given order are summarised in Table <ref>. For instance, at NNLL, for the observables (<ref>), one has to include 2PC^(0) (i.e. the fully correlated double emission) and 1PC^(1), both in the soft and in the hard-collinear limit, as well as 3PC^(0), 2PC^(1), and 1PC^(2) blocks in the soft-collinear limit. Given the inclusive nature of the observables (<ref>) treated in this article, the inclusion of higher-order blocks can be done in a simple, systematic way by adding more terms to the r.h.s. of Eq. (<ref>). We remind the reader that, while at NLL the bounds on the rapidity Y_i of the inclusive block |M(k_i)|^2_inc can be approximated by their massless limit (see Eq. (<ref>) and the comments below it), starting at NNLL the integration over the rapidity Y_i must be performed exactly.

§.§.§ Hard-collinear emissions and treatment of recoil

In order to repeat the procedure that led to Eq. (<ref>) at higher logarithmic accuracy, we need to handle the phase space in the multiple-emission kinematics. In the NLL case derived in the previous section, all resolved real emissions are soft and collinear, and therefore they do not modify each other's phase space. However, starting at NNLL, one or more real emissions can be hard and collinear to the emitting leg, and this changes the available phase space for subsequent real emissions. More precisely, at NNLL we need to work out the corrections due to a single hard-collinear resolved emission within an ensemble of soft-collinear radiation. Similarly, at N^3LL, one has to consider up to two resolved hard-collinear emissions embedded in an ensemble of soft-collinear radiation. The kinematics and the proper treatment of hard-collinear emissions, still missing in our formulation, will be discussed in this section.

To correctly include the evolution of the hard-collinear radiation in our formulation, we first consider how initial-state radiation modifies the real-emission kernels, illustrating this in the single-emission case for the sake of clarity.
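Before turning to the hard-collinear corrections, we sketch the simplified transverse-momentum-ordered shower evaluation of Eq. (<ref>) mentioned above, for the p_t observable itself. The fragment below is a minimal illustration only, written under the NLL approximations of the previous subsection (R' frozen at k_t1 and soft-collinear kinematics): it reuses radiator() and radiator_integrand() from the previous sketch, the number of resolved secondary blocks is Poisson-distributed with mean R'(k_t1) ln(1/ϵ), and the ϵ-dependence cancels against the Sudakov up to subleading terms. All settings (event number, cutoff ktmin, ϵ) are illustrative.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def radiator_prime(kt1):
    """R'(k_t1) = dR/d ln(1/k_t1): the radiator integrand at k_t = k_t1."""
    return radiator_integrand(math.log(kt1))

def sigma_cumulative(pt, n_events=100_000, eps=1e-3, ktmin=1.0):
    """MC estimate of Sigma(pt)/sigma^(0) for the p_t observable (a = 1)."""
    lo, hi = math.log(ktmin), math.log(M)
    total = 0.0
    for _ in range(n_events):
        lnkt1 = rng.uniform(lo, hi)              # importance-sample ln k_t1
        kt1 = math.exp(lnkt1)
        Rp = radiator_prime(kt1)
        w = (hi - lo) * Rp * math.exp(-radiator(kt1))
        n = rng.poisson(Rp * math.log(1/eps))    # resolved secondary blocks
        zetas = eps ** rng.uniform(0, 1, size=n) # flat in ln(zeta) on [eps,1]
        phis = rng.uniform(0, 2*math.pi, size=n + 1)
        kts = np.append(kt1 * zetas, kt1)
        px, py = np.sum(kts*np.cos(phis)), np.sum(kts*np.sin(phis))
        if math.hypot(px, py) < pt:              # Theta(pt - |vector sum|)
            total += w
    return total / n_events
```

In this simplified setting one can check, for instance, that the estimate becomes ϵ-independent as ϵ is reduced, and that for pt → M it approaches the Born normalisation, up to the infrared cutoff ktmin.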
Throughout this section and in the rest of this article we use the tree-level splitting functions as reported in Appendix <ref>. We start by formulating the single-emission probability for a gluon-initiated process. For the sake of concreteness, all prefactors in this subsection are given under the assumption that the colour singlet is a single particle, e.g. a Higgs boson. We express the probability of emitting either a gluon or a quark off leg 1 (an analogous term can be written for an emission off leg 2), for an observable v, as

Σ(v) = 2π |M_B|_gg^2 ∫ dx_1 dx_2 δ(x_1 x_2 s - M^2) ∫ (dk_t/k_t) (α_s(k_t)/π) (dϕ/2π) × (∫_x_1^1-k_t/M dz [2P^(0)_gg(z)/z f_g(μ_F,x_1/z) + P^(0)_gq(z)/z (f_q(μ_F,x_1/z) + f_q̅(μ_F,x_1/z))] f_g(μ_F,x_2) Θ(v-V(k)) - ∫_0^1-k_t/M dz [P^(0)_gg(z) + n_f P^(0)_qg(z)] f_g(μ_F,x_1) f_g(μ_F,x_2) - (P̂^(0)_gg⊗ f_g)(μ_F,x_1) f_g(μ_F,x_2) - (P^(0)_gq⊗ f_q)(μ_F,x_1) f_g(μ_F,x_2) - (P^(0)_gq⊗ f_q̅)(μ_F,x_1) f_g(μ_F,x_2)) + constant terms,

where f_g(μ_F,x) is the gluon density renormalised in the MS scheme, evaluated at a factorisation scale μ_F, and P̂ denotes the regularised splitting function. Since P̂^(0)_gq(z) = P^(0)_gq(z) (see Appendix <ref>), the regularisation (denoted by the hat) applies in practice only to P^(0)_gg. The second, third, and fourth lines of Eq. (<ref>) denote the real emission, the virtual corrections, and the collinear counterterm, respectively. For the virtual correction, we simply use the first-order expansion of the resummed form factor V(Φ_B) <cit.>, expressed in terms of leading-order splitting functions, of which we take the limit in four dimensions. The unregulated soft and collinear divergences of the four-dimensional virtual corrections manifestly cancel against those in the real emissions at the integrand level. We stress once again that in colour-singlet production we can use the azimuthally averaged splitting functions (see Appendix <ref>) up to NNLL accuracy; at N^3LL, corrections from azimuthal correlations arise <cit.>, and they will be introduced in Section <ref>.

In general, the upper bound of the z integration in the virtual corrections is different from the one in the real correction when more than one hard-collinear emission is present, since the available phase space for the real emissions is changed by the presence of the hard-collinear radiation. However, for the single-emission case treated in Eq. (<ref>), the upper bound, derived in Eq. (<ref>), is identical for the real and virtual contributions. Eq. (<ref>) also contains constant contributions arising both from the finite terms of the virtual form factor in MS and from the O(α_s) collinear coefficient functions. For the sake of simplicity, in the following discussion we neglect these NNLL constant terms, which we will however include in our final formula.

We now add and subtract the term

2π |M_B|_gg^2 ∫ dx_1 dx_2 δ(x_1 x_2 s - M^2) ∫ (dk_t/k_t) (α_s(k_t)/π) (dϕ/2π) × ∫_0^1-k_t/M dz [P^(0)_gg(z) + n_f P^(0)_qg(z)] f_g(μ_F,x_1) f_g(μ_F,x_2) Θ(v-V(k)) ,

and recast Eq.
(<ref>) asΣ(v)= 2π |M_B|_gg^2∫ d x_1 d x_2δ(x_1 x_2 s - M^2)∫dk_t/k_t/πdϕ/2π×(∫_x_1^1-k_t/M dz2P^(0)_gg(z)/zf_g(μ_F,x_1/z)f_g(μ_F,x_2) Θ(v-v(k)) - ∫_x_1^1 dz P̂^(0)_gg(z)/zf_g(μ_F,x_1/z) f_g(μ_F,x_2)-∫_0^1-k_t/M dz [P^(0)_gg(z)+ n_f P^(0)_qg(z) ]f_g(μ_F,x_1) f_g(μ_F,x_2) Θ(v-v(k))+∫_0^1-k_t/M dz [P^(0)_gg(z)+ n_f P^(0)_qg(z) ]f_g(μ_F,x_1) f_g(μ_F,x_2)( Θ(v-v(k))-1)+ ∫_x_1^1 dz P^(0)_gq(z)/z(f_q(μ_F,x_1/z)+f_q̅(μ_F,x_1/z))f_g(μ_F,x_2) ( Θ(v-v(k))-1) - ∫_1-k_t/M^1 dz P^(0)_gq(z)/z(f_q(μ_F,x_1/z)+f_q̅(μ_F,x_1/z))f_g(μ_F,x_2) Θ(v-v(k))).By using the symmetry of the P_gg splitting function under z↔ 1-z, one finds that∫_x_1^1 dz2P^(0)_gg(z)/z f_g(μ_F,x_1/z) - ∫_0^1 dz (P^(0)_gg(z)+n_fP^(0)_qg(z) )f_g(μ_F,x_1) = ∫_x_1^1 dz P̂^(0)_gg(z)/z f_g(μ_F,x_1/z) ,which allows us to recast the previous equation asΣ(v)= 2π |M_B|_gg^2∫ d x_1 d x_2δ(x_1 x_2 s - M^2)∫dk_t/k_t/πdϕ/2π×{∫_x_1^1 dz P̂^(0)_gg(z)/zf_g(μ_F,x_1/z)f_g(μ_F,x_2) (Θ(v-v(k))-1)+∫_0^1-k_t/M dz [P^(0)_gg(z)+ n_f P^(0)_qg(z) ]f_g(μ_F,x_1) f_g(μ_F,x_2)( Θ(v-v(k))-1)+ ∫_x_1^1 dz P^(0)_gq(z)/z(f_q(μ_F,x_1/z)+f_q̅(μ_F,x_1/z))f_g(μ_F,x_2) ( Θ(v-v(k))-1) - ∫_1-k_t/M^1 dz ( 2P^(0)_gg(z)/zf_g(μ_F,x_1/z)f_g(μ_F,x_2) - [P^(0)_gg(z)+ n_f P^(0)_qg(z) ]f_g(μ_F,x_1) f_g(μ_F,x_2).. + P^(0)_gq(z)/z(f_q(μ_F,x_1/z)+f_q̅(μ_F,x_1/z))f_g(μ_F,x_2) ) Θ(v-v(k))} .Analogously, it is straightforward to show that the logarithmic part for a quark-initiated process with an emission off the leg 1 readsΣ(v)= 2π |M_B|_qq̅^2∫ d x_1 d x_2δ(x_1 x_2 s - M^2)∫dk_t/k_t/πdϕ/2π× {∫_x_1^1 dz P^(0)_qg(z)/zf_g(μ_F,x_1/z)f_q̅(μ_F,x_2) (Θ(v-v(k))-1)+∫_0^1-k_t/M dz P^(0)_qq(z)f_q(μ_F,x_1) f_q̅(μ_F,x_2)( Θ(v-v(k))-1)+ ∫_x_1^1 dz P̂^(0)_qq(z)/z f_q(μ_F,x_1/z)f_q̅(μ_F,x_2) ( Θ(v-v(k))-1) - ∫_1-k_t/M^1 dz ( P^(0)_qq(z)/zf_q(μ_F,x_1/z)f_q̅(μ_F,x_2) - P^(0)_qq(z)f_q(μ_F,x_1) f_q̅(μ_F,x_2) ..+ P^(0)_qg(z)/z f_g(μ_F,x_1/z)f_q̅(μ_F,x_2) ) Θ(v-v(k))} ,where we have set P̂^(0)_qg(z) = P^(0)_qg(z).In Eqs. (<ref>) and (<ref>), the last integral from 1-k_t/M to 1 gives rise to regular terms and can therefore be neglected. As far as the remaining terms are concerned, we notice that the squared matrix element for an initial-state emission, which corresponds to the terms containing a Θ function in Eqs. (<ref>) and (<ref>), can be separated into two pieces: * The first one, encoded in the third line of Eqs. (<ref>) and (<ref>), modifies neither the flavour nor the momentum fraction of the incoming partons, and the bounds of the relative z integration are those of the corresponding virtual phase space. This contribution is fully analogous to the case treated in Sec. <ref>, that gives rise to R' in Eq. (<ref>). When evaluating this term explicitly, we can further split it, as done in Eq. (<ref>), into a soft term and a hard-collinear contribution. The exact upper bound of the z integral is only relevant in the soft contribution, while it can be extended up to 1 in the hard-collinear term up to regular (non logarithmic) terms. In the following, we will refer to this term as the R' contribution.* The second one (second and fourth lines of Eqs. (<ref>) and (<ref>)) does modify both flavour and momentum fraction. This contribution corresponds to an exclusive step of DGLAP evolution. The corresponding z integration can be extended up to the soft limit (z=1) as this limit is regularised by the plus distribution in the corresponding splitting function. 
We stress once again that the latter extension of the upper bound of the z integration in the hard-collinear radiation's phase space is correct up to regular terms, which are ignored in our treatment. We will refer to this term as the exclusive DGLAP evolution step.

This decomposition is only a convenient way of expressing the squared amplitude and phase space for an initial-state emission, and only the sum of all logarithmic terms in Eqs. (<ref>) and (<ref>) is physically well defined. The considerations above will be useful in the rest of this section, when the all-order kinematics is discussed.

As anticipated at the beginning of this subsection, in order to achieve N^3LL accuracy one has to consider configurations with up to two resolved hard-collinear emissions together with any number of soft-collinear partons in the final state. We therefore study how the presence of hard-collinear emissions affects the phase space of the remaining radiation in the all-order picture.[We thank A. Banfi for fruitful discussions on this point.] We consider again the emissions ordered according to their transverse momentum. In this picture, the relation between the z^(ℓ) variable and the Sudakov variable y^(ℓ) for a given emission k is modified by the radiation that occurred before k, as described in Eq. (<ref>).

We consider the case of an ensemble of resolved emissions off a leg ℓ of which a single one is hard and collinear, while all the remaining radiation is soft. We can group the emissions into three sets: the soft emissions that occur before the hard-collinear parton is emitted (i.e. at larger transverse momenta), the hard-collinear emission itself, and the soft emissions that occur after the hard-collinear one (at smaller transverse momenta). The soft radiation emitted before the hard-collinear emission has z_i^(ℓ)≃ y_i^(ℓ)≃ 1 and therefore k_ti≃k̃_ti, so its phase-space boundaries are as described in Section <ref>. For the hard-collinear emission k^hc, the relation between z_hc^(ℓ) and y_hc^(ℓ) is reported in Eq. (<ref>), and the corresponding z_hc^(ℓ) integration bound in Eq. (<ref>). Finally, soft emissions that occur after the hard-collinear one again have k_ti≃k̃_ti, but now 1-y_i^(ℓ)≃ (1-z_i^(ℓ))/z^(ℓ)_hc. The upper bound of their z_i^(ℓ) integral is therefore

z_i^(ℓ) < 1-z_hc^(ℓ) k_ti/M .

From the above equation we see that the phase space of the soft radiation emitted after the hard-collinear emission is modified by the presence of the latter. However, the squared amplitude and phase space for emissions in the soft limit depend on z_i^(ℓ) only through dz_i^(ℓ)/(1-z_i^(ℓ)). Therefore, using the relation

dz_i^(ℓ)/(1-z_i^(ℓ)) = dy_i^(ℓ)/(1-y_i^(ℓ)) ,

and the fact that k_ti≃k̃_ti for these emissions, we can replace the integral over z_i^(ℓ) with an integral over y_i^(ℓ) whose upper bound is given by y_i^(ℓ) < 1-k_ti/M. This allows one to disentangle the phase space of all emissions in the considered kinematic configuration and, hence, to iterate the procedure at all orders.

The remaining kinematic configuration to be considered in a N^3LL resummation is an ensemble of soft-collinear emissions accompanied by two hard-collinear ones. We label the two hard-collinear emissions by k^hc_1 and k^hc_2, and we assume, without any loss of generality, that k^hc_1 is emitted before k^hc_2 (hence it has a larger transverse momentum in our picture).
The upper bounds of the corresponding z^(ℓ) integrals for the real contribution will now be complicated functions of the transverse momenta k^hc_t1 and k^hc_t2, which can be obtained starting from Eqs. (<ref>), (<ref>). However, things simplify considerably if we use the decomposition described in the first part of this section, as follows. We recall that the real matrix element can be decomposed as a sum of the R' contribution (which does not modify the momentum fraction of the emitter, and whose kinematics is soft by construction) and an exclusive DGLAP step that modifies the momentum fraction of the emitting leg, as shown in Eqs. (<ref>), (<ref>). In the latter term, the upper bound of the z^(ℓ) integration can be extended to 1 (hence it becomes independent of the kinematics of the rest of the event), since the soft limit is regularised by the plus prescription in the corresponding splitting functions. As for the R' contributions relative to k^hc_1 and k^hc_2, they can be further decomposed into a soft-collinear term and a term that contains the hard-collinear part of the matrix element (which, however, does not modify the momentum fraction of the emitting leg). Once again, in the latter contribution the z^(ℓ) integration can be extended to 1, while in the soft-collinear contribution one can simply replace the z^(ℓ) integral with an integral over y^(ℓ) by means of Eq. (<ref>). Moreover, using the fact that for a soft emission k̃_t≃ k_t, the corresponding upper bound of the y^(ℓ) integral can be replaced by 1-k_t/M.

This procedure allows one to disentangle completely the phase space of the R' contributions (whose kinematics is soft by construction) from that of the exclusive DGLAP evolution steps, which are by construction hard and collinear. The lower bounds of the z^(ℓ) integrals of multiple resolved DGLAP evolution steps are entangled, as each of them modifies significantly the momentum available for the subsequent hard-collinear ones, resulting in a convolution between the splitting kernels and the corresponding parton density. The above treatment of the double-hard-collinear case is valid up to regular terms.

In this section we have neglected the constant terms that arise from the finite part of the renormalised form factor and from the collinear coefficient functions, which are relevant already for a NNLL resummation. For the inclusive observables considered in this article, the collinear coefficient functions factorise in front of the Sudakov factor and, for the processes considered here, they were computed to O(α_s^2) in refs. <cit.>. They will be introduced in the following section, where we iterate the arguments discussed here at all perturbative orders in α_s.

§.§.§ Resummed formula for initial-state radiation

The arguments derived in the previous section can be used to formulate the structure of the cross section at all orders by iterating the single-emission picture defined above. Given the inclusive nature of the observables studied here, the inclusion of higher-order logarithmic corrections can be achieved simply by adding the relevant correlated blocks (as reported in Table <ref>) in the inclusive approximation (<ref>).
The contribution to the cross section from each inclusive block, in turn, can be split into an R'-type contribution (which does not modify either the momentum fraction or the flavour of the emitting leg), and a DGLAP step (inclusive in the content of each correlated block, but differential in its transverse momentum), and hence it can be treated in a fully analogous way to what was done for single emissions in the previous subsection. This simple prescription allows us to discuss the inclusion of the parton densities by referring to emissions (for the sake of simplicity), while keeping in mind that they are to be thought of as inclusive sums of correlated blocks as defined in Eq. (<ref>). To show how the parton densities are accounted for, we start by evaluating them at a scale μ_0 that is assumed to be smaller than all transverse momenta in the event. We consider the situation in which the emissions are ordered in transverse momentum, and the hardest (resolved) emission k_1 occurred. The phase-space diagram for any secondary emission k_i with i>1 is depicted in Fig. <ref> in the ln(k_t/M)-η (Lund) plane, where now η denotes the rapidity in the centre-of-mass frame of the incoming partons which are extracted from the proton at a factorisation scale μ_0, and the transverse momentum k_t is taken with respect to the beam direction. As stated in Section <ref>, due to rIRC safety, only emissions that take place in the strip between ϵ k_t1 and k_t1 (labelled with “REAL EMISSIONS” in Fig. <ref>) modify the observable significantly and are resolved. The remaining unresolved real emissions (k_ti<ϵ k_t1) are combined with the virtual corrections, which populate the whole region below the two diagonal lines that denote the upper rapidity limits. The result of this combination is indeed the Sudakov form factor associated with the first emission, which vetoes secondary emissions in the yellow region (labelled with “SUDAKOV SUPPRESSION” in Fig. <ref>) of the Lund plane. In addition, the combination of virtual and unresolved emissions also gives rise to a constant term that multiplies the Sudakov and encodes both the finite part of the virtual corrections and the constant contribution due to soft and/or collinear emissions exactly at the edges of their phase space, encoded in the collinear coefficient functions. In the initial-state-radiation case at hand, hard-collinear emissions define the evolution of the parton densities. These emissions occur in a strip (labelled with “DGLAP” in Fig. <ref>) along the upper rapidity bounds, and their evolution is encoded in the DGLAP equations. In the unresolved region (k_ti<ϵ k_t1), the DGLAP evolution can be performed inclusively, since emissions in this phase-space region do not affect the value of the observable. On the other hand, when k_t1>k_ti>ϵ k_t1 the corresponding hard-collinear emissions modify significantly the observable's value and therefore must be treated exclusively, namely unintegrated in k_t. In addition to the parton densities, starting at O(α_s), one needs to include the coefficient functions that emerge from their renormalisation, and originate from emissions that occur at the edges of the phase space in Fig. <ref>. The coefficient functions contribute to the logarithmic structure only through the scale of their running coupling, which is the transverse momentum of the emission(s) they are associated with.
As done for the parton densities, one can evaluate them initially at a scale μ_0 smaller than any transverse momentum in the event, and subsequently evolve them inclusively up to the resolution scale ϵ k_t1. Their evolution must instead be treated exclusively in the resolved strip k_t1>k_ti>ϵ k_t1. In order to introduce the all-order result, it is convenient to simplify the flavour structure of the evolution for the time being. We neglect real-emission kernels that modify the flavour of the emitting leg, namely P_qg and P_gq, which do not have a soft singularity. This ensures that the flavour of the initial parton densities is only modified by the coefficient functions and is conserved by the resolved real radiation. This approximation entails no loss of generality and is made solely for the sake of simplicity. The extension to the full flavour case will be trivial once the final formula is obtained. For the remaining part of the section, it is useful to introduce a matrix notation to simplify the structure of our expressions in flavour space. We define f as the array containing the 2n_f+1 partonic densities, where n_f denotes the number of active flavours. To handle different Born configurations with different incoming flavours c_ℓ, we then define the coefficient-function matrix C^c_ℓ as a (2 n_f +1 ) × (2 n_f +1 ) diagonal matrix in flavour space whose entries are [ C^c_ℓ]_a b = C_c_ℓ f(a)δ_a b, where C_ij are the collinear coefficient functions, c_ℓ is the flavour of the leg ℓ entering the Born process, and f(a) is the flavour corresponding to the a-th entry of the parton-density array. For instance, we illustrate the above convention in the case of Higgs production, considering only a single quark flavour q. By defining the array 𝐟 = (f_g,f_q,f_q̅)^T, the matrix C^g reads C^g= ( [C_gg 0 0; 0C_gq 0; 0 0 C_gq̅ ]). The evolution of (<ref>) between two scales is entirely encoded in the evolution of the running coupling. By introducing the corresponding anomalous-dimension matrix Γ^(C), Γ^(C)(α_s(k_t)) = 2 β(α_s(k_t))d ln C^c_ℓ(α_s(k_t))/d α_s(k_t), we can write the renormalisation-group evolution (RGE) of the coefficient-function matrix as C^c_ℓ(α_s(μ)) = exp{-∫_μ^μ_0d k_t/k_tΓ^(C)(α_s(k_t))} C^c_ℓ(α_s(μ_0)). In principle, the matrix Γ^(C) should also explicitly carry a label c_ℓ to specify that it evolves the coefficient function C^c_ℓ associated with the Born flavour c_ℓ. We omit this label as the notation in what follows is unambiguous. We stress, however, that the flavour of the coefficient function is not modified by its RG evolution; indeed, it is manifestly flavour diagonal. The iterative structure of the squared amplitudes appears more transparent if we work in Mellin space, where convolutions become products. We therefore introduce the Mellin transform of a function g(x) as g_N_ℓ≡∫_0^1d x x^N_ℓ-1 g(x). The DGLAP <cit.> evolution of the parton-density vector f can be conveniently written in Mellin space as f_N_ℓ(μ) = P exp{-∫_μ^μ_0dk_t/k_tα_s(k_t)/πΓ_N_ℓ(α_s(k_t))}f_N_ℓ(μ_0). In the previous equation P is the path-ordering symbol, and the matrix Γ is defined as [Γ_N_ℓ(α_s(μ))]_ab = ∫_0^1 d z z^N_ℓ-1P̂_f(a) f(b)(z,α_s(μ)) ≡γ_N_ℓ;f(a)f(b) = ∑_n=0^∞(α_s(μ)/2π)^nγ_N_ℓ;f(a)f(b)^(n), where P̂_f(a)f(b) are the regularised splitting functions (see Appendix <ref>). We stress that, within the simplifying assumption made above on flavour-conserving real-emission kernels, no splitting functions involving a real quark emission are included, and therefore the matrix Γ is diagonal.
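As a concrete numerical illustration of the Mellin-space notation (a minimal sketch with a toy test function of our choosing, not one of the physical densities), the transform defined above can be checked against a known analytic result: for g(x)=(1-x)^2 one has g_N = 2/(N(N+1)(N+2)), i.e. the Euler Beta function B(N,3). For a diagonal Γ, the Mellin-space evolution then reduces to an independent multiplicative factor for each flavour entry.

```python
import numpy as np
from scipy.integrate import quad

def mellin(g, N):
    # g_N = \int_0^1 dx x^(N-1) g(x), the transform defined above
    val, _ = quad(lambda x: x**(N - 1.0) * g(x), 0.0, 1.0)
    return val

g = lambda x: (1.0 - x)**2                 # toy test function
for N in (2.0, 4.0, 6.5):
    exact = 2.0/(N*(N + 1.0)*(N + 2.0))    # Beta(N,3)
    print(f"N={N}:  numeric={mellin(g, N):.8f}  exact={exact:.8f}")
```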
Within this assumption, the path ordering in Eq. (<ref>) can be lifted. With this notation, the hadronic cumulative cross section, differential with respect to the Born phase space Φ_B, can be written as dΣ(v)/dΦ_B =∫_ C_1d N_1/2π i∫_ C_2dN_2/2π i x_1^-N_1 x_2^-N_2 ∑_c_1, c_2d|M_B|_c_1c_2^2/dΦ_B f^T_N_1(μ_0) Σ̂^c_1,c_2_N_1,N_2(v)f_N_2(μ_0), where the sum runs over all possible Born configurations and we employed a double inverse Mellin transform. The contours C_1 and C_2 are understood to lie along the imaginary axis to the right of all singularities of the integrand. In Eq. (<ref>), and from now on, we define the notation d|M_B|_c_1c_2^2/dΦ_B≡∫ dΦ^'_B |M_B|_c_1c_2^2 δ(x_1-x_1^') δ(x_2-x_2^')δ(Ω_B-Ω_B^'), where Ω_B denotes any set of internal phase-space variables used to parametrise the colour-singlet system. The right-hand side differs from the squared amplitude |M_B|_c_1c_2^2 simply by a jacobian factor. The matrix Σ̂ encodes the effect of the all-order radiation that evolves the partonic cross section and the corresponding parton densities. To write down an all-order expression for Σ̂ for the observables (<ref>), we need to iterate the single-emission probability derived in the previous section. Given that the phase space of the R' contributions and the exclusive DGLAP evolution steps are completely disentangled in the resolved real radiation, this operation can be performed straightforwardly in Mellin space, yielding Σ̂^c_1,c_2_N_1,N_2(v)= [C^c_1; T_N_1(α_s(μ_0)) H(μ_R)C^c_2_N_2(α_s(μ_0)) ] ∫_0^Md k_t1/k_t1∫_0^2πdϕ_1/2π× e^- R(ϵ k_t1)exp{-∑_ℓ=1^2( ∫_ϵ k_t1^μ_0dk_t/k_tα_s(k_t)/πΓ_N_ℓ(α_s(k_t)) + ∫_ϵk_t1^μ_0dk_t/k_tΓ_N_ℓ^(C)(α_s(k_t)))}∑_ℓ_1=1^2(R_ℓ_1'(k_t1) + α_s(k_t1)/πΓ_N_ℓ_1(α_s(k_t1)) + Γ_N_ℓ_1^(C)(α_s(k_t1))) ×∑_n=0^∞1/n!∏_i=2^n+1∫_ϵ^1dζ_i/ζ_i∫_0^2πdϕ_i/2π∑_ℓ_i=1^2( R_ℓ_i'(k_ti) +α_s(k_ti)/πΓ_N_ℓ_i(α_s(k_ti)) + Γ_N_ℓ_i^(C)(α_s(k_ti)))×Θ(v-V({p̃},k_1,…, k_n+1)), where now ζ_i=k_ti/k_t1, since we are using the transverse momentum as a resolution and ordering variable. R_ℓ' is a diagonal matrix in flavour space: given the flavour c_ℓ of the Born leg ℓ, it describes the flavour-conserving resolved radiation off leg ℓ. It is defined as [ R_ℓ']_ab = R_ℓ' δ_ab, and R_ℓ' is defined in Eq. (<ref>). The Sudakov operator R is then defined as R(ϵ k_t1) = ∑_ℓ=1^2∫_ϵ k_t1^Mdk_t/k_t R_ℓ'(k_t). The terms proportional to R' in Eq. (<ref>) encode the contribution of the radiation which is flavour-diagonal, and does not modify the momentum fraction of the incoming partons. This is the analogue of what has been derived in Sec. <ref> in the case of scale-independent parton densities. In addition, the real-emission probability now involves the exclusive evolution of the parton densities and coefficient functions. The matrices Σ̂^c_1,c_2 are diagonal in flavour space within the flavour assumption that we are making here. The first line of Eq. (<ref>) contains the factor [ C^c_1; T_N_1(α_s(μ_0)) H(μ_R)C^c_2_N_2(α_s(μ_0)) ] that encodes the hard-virtual corrections to the form factor and the collinear coefficient functions. Explicit expressions for these quantities will be given later (see Sec. <ref> and references therein). As discussed above, the coupling of the coefficient functions here is evaluated at μ_0 and subsequently evolved up to ϵ k_t1 by the operator containing the diagonal matrix Γ_N_ℓ^(C) in the second line of (<ref>). Similarly, the parton densities are evolved from μ_0 up to ϵ k_t1. As was shown in ref.
<cit.>, starting at a given order in perturbation theory one needs to include the contribution from the collinear coefficient functions G, which describe the azimuthal correlations with the initial-state gluons. Such a contribution starts at O(α_s^2) (i.e. N^3LL) for gluon-fusion processes, and at yet higher orders for quark-initiated ones. It is included in the above formulation by simply adding to Eq. (<ref>) an analogous term where one makes the replacements [ C^c_1; T_N_1(α_s(μ_0)) H(μ_R)C^c_2_N_2(α_s(μ_0)) ]→[ G^c_1; T_N_1(α_s(μ_0)) H(μ_R)G^c_2_N_2(α_s(μ_0)) ], and Γ_N_ℓ^(C)(α_s(k_t))→Γ_N_ℓ^(G)(α_s(k_t)), where Γ_N_ℓ^(G) is defined analogously to Eq. (<ref>), and the flavour structure of G is analogous to that of the C matrix. In what follows this contribution, whenever not reported, is understood. Eq. (<ref>) has been derived by iterating the single-emission probability. As discussed above, higher-order logarithmic corrections are simply included by adding higher-order correlated blocks. Specifically, this amounts to including higher-order logarithmic corrections to the radiator R and its derivative R', as well as in the anomalous dimensions which drive the evolution of the parton densities and coefficient functions. We conclude the discussion by pointing out that even if the all-order formulation has been conveniently obtained in Mellin space, it is possible to evaluate Eq. (<ref>) directly in momentum space at any given logarithmic order. We will describe how to do this in Sec. <ref>. Eq. (<ref>) holds for all inclusive observables (see definition in Sec. <ref>) that do not depend on the rapidity of the initial-state radiation. In the remaining part of this article we specialise to the study of the transverse-momentum case, but analogous conclusions will apply to other observables of the same class.

§.§ Equivalence with impact-parameter-space formulation

In this section we show how to relate our Eq. (<ref>) to the impact-parameter-space formulation of <cit.>. We show the equivalence for the differential partonic cross section (<ref>) in the case of the transverse momentum p_t. An analogous proof can be carried out in the case of ϕ^*. Our starting point is the differential partonic cross section, where we now set μ_0=μ_R=M without loss of generality: d/d^2 p⃗_t Σ̂_N_1,N_2^c_1 c_2(p_t)= C^c_1; T_N_1(α_s(M)) H(M)C^c_2_N_2(α_s(M)) ∫_0^Md k_t1/k_t1∫_0^2πdϕ_1/2π× e^- R(ϵ k_t1)exp{-∑_ℓ=1^2( ∫_ϵ k_t1^Mdk_t/k_tα_s(k_t)/πΓ_N_ℓ(α_s(k_t)) + ∫_ϵk_t1^Mdk_t/k_tΓ_N_ℓ^(C)(α_s(k_t)))}×∑_ℓ_1=1^2(R_ℓ_1'(k_t1) + α_s(k_t1)/πΓ_N_ℓ_1(α_s(k_t1)) + Γ_N_ℓ_1^(C)(α_s(k_t1))) ×∑_n=0^∞1/n!∏_i=2^n+1∫_ϵ^1dζ_i/ζ_i∫_0^2πdϕ_i/2π∑_ℓ_i=1^2( R_ℓ_i'(k_ti) +α_s(k_ti)/πΓ_N_ℓ_i(α_s(k_ti)) + Γ_N_ℓ_i^(C)(α_s(k_ti)))×δ^(2)(p⃗_t-(k⃗_t1+ … + k⃗_t(n+1))). We transform the δ function into b-space as δ^(2)(p⃗_t-(k⃗_t1 + … + k⃗_t(n+1)))= ∫d^2 b⃗/4π^2 e^-ib⃗·p⃗_t∏_i=1^n+1e^i b⃗·k⃗_ti, and we evaluate the azimuthal integrals, which simply amounts to replacing each of the factors e^± i b⃗·k⃗_t with a Bessel function J_0(b k_t). It is now straightforward to see that the sum in Eq.
(<ref>) gives rise to an exponential function, yielding d/d p_t Σ̂_N_1,N_2^c_1 c_2(p_t)= C^c_1; T_N_1(α_s(M)) H(M)C^c_2_N_2(α_s(M))p_t∫bd b J_0(p_t b)∫_0^Md k_t1/k_t1×∑_ℓ_1=1^2(R_ℓ_1'(k_t1) +α_s(k_t1)/πΓ_N_ℓ_1(α_s(k_t1)) + Γ_N_ℓ_1^(C)(α_s(k_t1))) J_0(b k_t1)×exp{ -∑_ℓ=1^2∫_k_t1^Md k_t/k_t( R_ℓ'(k_t) +α_s(k_t)/πΓ_N_ℓ(α_s(k_t)) +Γ_N_ℓ^(C)(α_s(k_t))) J_0(b k_t)}×exp{ -∑_ℓ=1^2∫_ϵ k_t1^Md k_t/k_t( R_ℓ'(k_t) +α_s(k_t)/πΓ_N_ℓ(α_s(k_t)) +Γ_N_ℓ^(C)(α_s(k_t))) (1-J_0(b k_t))}. We finally notice that we can set ϵ→ 0 in the above formula, given that now the cancellation of divergences is manifest. The k_t1 integrand is a total derivative and it integrates to one, leaving d/d p_t Σ̂_N_1,N_2^c_1 c_2(p_t) = C^c_1; T_N_1(α_s(M)) H(M)C^c_2_N_2(α_s(M))p_t∫bd b J_0(p_t b)×exp{ -∑_ℓ=1^2∫_0^Md k_t/k_t( R_ℓ'(k_t) +α_s(k_t)/πΓ_N_ℓ(α_s(k_t)) +Γ_N_ℓ^(C)(α_s(k_t))) (1-J_0(b k_t))}. We now insert the resulting partonic cross section back into the definition of the hadronic cross section (<ref>), and use the second and third terms in the exponent of Eq. (<ref>) to evolve the parton densities and the coefficient functions down to b_0/b, with b_0=2e^-γ_E. After performing the inverse Mellin transform, and neglecting N^4LL corrections, we obtain (hereafter we simplify the notation for the parton densities by omitting their x_1 and x_2 dependence, which is determined by the Born kinematics Φ_B) d^2Σ(v)/dΦ_Bd p_t =∑_c_1, c_2d|M_B|_c_1 c_2^2/dΦ_B∫bd bp_tJ_0(p_t b)f^T(b_0/b)C^c_1; T_N_1(α_s(b_0/b)) H(M)C^c_2_N_2(α_s(b_0/b))f(b_0/b) ×exp{ -∑_ℓ=1^2∫_0^Md k_t/k_t R_ℓ'(k_t)(1-J_0(b k_t))}. Eq. (<ref>) is indeed the b-space formulation of transverse-momentum resummation. Commonly, it is expressed in the equivalent form <cit.>[This corresponds to a change of scheme of the type discussed in ref. <cit.>.] d^2Σ(v)/dΦ_Bd p_t =∑_c_1, c_2d|M_B|_c_1 c_2^2/dΦ_B∫bd bp_tJ_0(p_t b)f^T(b_0/b)C^c_1;T_N_1(α_s(b_0/b)) H_b(M)C^c_2_N_2(α_s(b_0/b)) f(b_0/b) ×exp{ -∑_ℓ=1^2∫_0^Md k_t/k_t R_b,ℓ'(k_t)Θ(k_t-b_0/b)}, where R_b,ℓ' and H_b(M) are the Sudakov radiator and hard function commonly used in the b-space formulation <cit.>. As shown in ref. <cit.>, and as already stressed above, both Eqs. (<ref>) and (<ref>) receive an extra contribution due to the azimuthal correlations, which are parametrised by the G coefficient functions. We omit them in this comparison for the sake of simplicity; however, it is clear that analogous considerations apply in that case. The comparison between Eqs. (<ref>) and (<ref>) allows us to extract the N^3LL ingredients from the latter formulation as obtained in refs. <cit.>, which will be reported in the next section. We start by using the relation[See appendix of ref. <cit.> for a derivation.] (1-J_0(b k_t)) ≃Θ(k_t-b_0/b) + ζ_3/12∂^3/∂ln(M b/b_0)^3Θ(k_t-b_0/b) + …, where we ignored N^4LL terms. In the above formula the derivative in the second term on the right-hand side is meant to act on the integral whose bounds are set by Θ(k_t-b_0/b). This yields, at N^3LL, d^2Σ(v)/dΦ_Bd p_t =∑_c_1, c_2d|M_B|_c_1 c_2^2/dΦ_B∫bd bp_tJ_0(p_t b)f^T(b_0/b)C^c_1;T_N_1(α_s(b_0/b)) H(M)C^c_2_N_2(α_s(b_0/b))f(b_0/b) ×exp{ -∑_ℓ=1^2(∫_b_0/b^Md k_t/k_t R_ℓ'(k_t) + ζ_3/12∂^3/∂ln(M b/b_0)^3∫_b_0/b^Md k_t/k_t R_ℓ'(k_t) )}. The second term in the exponent of Eq. (<ref>) starts at N^3LL, so up to NNLL the two definitions (the one in terms of a J_0 and the one in terms of the theta function) are manifestly equivalent. To relate the two formulations we recall the definition of R' in Eq.
(<ref>) and we express the Sudakov radiators as in (<ref>): R(b)= ∑_ℓ=1^2 ∫_b_0/b^Md k_t/k_t R_ℓ'(k_t)=∑_ℓ=1^2∫_b_0/b^Md k_t/k_t(A_ℓ(α_s(k_t))lnM^2/k_t^2 + B_ℓ(α_s(k_t))), R_b(b)=∑_ℓ=1^2 ∫_b_0/b^Md k_t/k_t R_b,ℓ'(k_t)=∑_ℓ=1^2∫_b_0/b^Md k_t/k_t(A_b,ℓ(α_s(k_t))lnM^2/k_t^2 + B_b,ℓ(α_s(k_t))). The anomalous dimensions A_ℓ and B_ℓ associated with leg ℓ and the hard function H admit an expansion in the strong coupling as A_ℓ(α_s)=∑_n=1^4(α_s/2π)^nA^(n)_ℓ, B_ℓ(α_s)=∑_n=1^3(α_s/2π)^nB^(n)_ℓ, H(M)=1 + ∑_n=1^2(α_s(M)/2π)^nH^(n)(M). The relation between the coefficients that enter at N^3LL can be deduced by equating Eqs. (<ref>) and (<ref>), obtaining A_ℓ^(4) =A_b,ℓ^(4) -32 A^(1)_ℓπ^3 β_0^3 ζ_3, B_ℓ^(3) =B_b,ℓ^(3) - 16 A^(1)_ℓπ^2 β_0^2 ζ_3, H^(2)(M) =H_b^(2)(M)+ 8/3πβ_0 ζ_3(1/2∑_ℓ=1^2 A^(1)_ℓ). The above equations constitute the ingredients for our N^3LL resummation. Physically, the extra terms proportional to ζ_3 arise from the fact that the O(α_s^2) terms proportional to δ(1-z) in the coefficient functions in momentum space differ from their b-space counterparts. This difference precisely amounts to the new contributions in Eqs. (<ref>). We stress that only the combination of A_ℓ^(4), B_ℓ^(3), H^(2) and C^(2) is resummation-scheme invariant; hence our choice of absorbing the new terms into A_ℓ^(4), B_ℓ^(3), H^(2) is indeed arbitrary. One could analogously define an alternative scheme in which the extra terms are directly absorbed into the O(α_s^2) coefficient functions, thus leaving the two-loop form factor unchanged.

§ EVALUATION UP TO N^3LL

In this section we evaluate our all-order master formulae (<ref>) and (<ref>) explicitly up to N^3LL accuracy. The latter equations can already be evaluated as they are by means of Monte Carlo techniques; however, at any given logarithmic order it is possible, and convenient, to further manipulate them in order to evaluate them directly in momentum space, without the need of the Mellin transform.

§.§ Momentum-space formulation

We first focus on the partonic cross section (<ref>). There are three main ingredients: the Sudakov radiator and its derivative, the block containing the coefficient functions C(α_s) and the hard-virtual corrections to the form factor H(μ_R), and the anomalous dimensions that govern the evolution of the parton densities and coefficient functions. For colour-singlet production, the coefficients entering the Sudakov radiator satisfy A^(n)_1=A^(n)_2=A^(n), and B^(n)_1=B^(n)_2=B^(n). Coefficients A^(1), A^(2), A^(3), B^(1), B^(2) have been known for several years <cit.>, and they are collected, for instance, in the appendix of ref. <cit.>. The N^3LL coefficient B^(3) can be extracted from the recent result <cit.>.
For gluon processes it reads: B^(3) = C_A^3(22 ζ _3 ζ _2/3-799 ζ _2/81-5 π ^2 ζ _3/9-2533 ζ _3/54-77 ζ _4/12+20 ζ _5-319 π ^4/1080+6109 π ^2/1944+34219/1944)+ C_A^2 n_f(103 ζ _2/81+202 ζ _3/27-5 ζ _4/6+41 π ^4/540-599 π ^2/972-10637/1944) + C_A C_F n_f(2 ζ _4-π ^4/45-π ^2/12+241/72) -1/4C_F^2 n_f + C_A n_f^2(-2 ζ _3/27+5 π ^2/162+529/1944) -11/36C_F n_f^2 - 32 C_A π^2β_0^2 ζ_3≈ -492.908 - 32 C_A π^2β_0^2 ζ_3, while for quark processes B^(3) =C_A^2 C_F(22 ζ _3 ζ _2/3-799 ζ _2/81-11 π ^2 ζ _3/9+2207 ζ _3/54-77 ζ _4/12-10 ζ _5-83 π ^4/360-7163 π ^2/1944+151571/3888)+ C_F^3(4 π ^2 ζ _3/3-17 ζ _3+60 ζ _5-2 π ^4/5-3 π ^2/4-29/8) + C_F^2 n_f(34 ζ _3/3+2 ζ _4-7 π ^4/54-13 π ^2/36+23/4)+ C_A C_F^2(-2/3π ^2 ζ _3-211 ζ _3/3-30 ζ _5+247 π ^4/540+205 π ^2/36-151/16)+ C_A C_F n_f(103 ζ _2/81-128 ζ _3/27-5 ζ _4/6+11 π ^4/180+1297 π ^2/972-3331/243)+ C_F n_f^2(10 ζ _3/27-5 π ^2/54+1115/972) - 32 C_F π^2β_0^2 ζ_3≈ -116.685 - 32 C_F π^2β_0^2 ζ_3. The remaining N^3LL anomalous dimension A^(4) is currently incomplete, given that the four-loop cusp anomalous dimension is still unknown. Here we compute A^(4) according to Eq. (71) of ref. <cit.> or Eq. (4.6) of ref. <cit.>, using the results of refs. <cit.> for the soft anomalous dimension, and setting the four-loop cusp anomalous dimension to zero. For gluon-initiated processes we get A^(4) =C_A^4(121/3ζ_3 ζ_2-8789 ζ_2/162-19093 ζ_3/54-847 ζ_4/24+132 ζ_5+3761815/11664)+ C_A^3 n_f(-22/3ζ_3 ζ_2+2731 ζ_2/162+4955 ζ_3/54+11 ζ_4/6-24 ζ_5-31186/243) + C_A^2 C_F n_f(272 ζ_3/9+11 ζ_4-7351/144)+ C_A^2 n_f^2(-103 ζ_2/81-47 ζ_3/27+5 ζ_4/6+13819/972) + C_A C_F n_f^2(-38 ζ_3/9-2 ζ_4+215/24)+ C_A n_f^3(-4 ζ_3/9-232/729)- 64 C_A π^3β_0^3 ζ_3≈ -2675.68 - 64 C_A π^3β_0^3 ζ_3, while for quark-initiated ones A^(4) = C_A^3 C_F(121/3ζ_3 ζ_2-8789 ζ_2/162-19093 ζ_3/54-847 ζ_4/24+132 ζ_5+3761815/11664) + C_A^2 C_F n_f(-22/3ζ_3 ζ_2+2731 ζ_2/162+4955 ζ_3/54+11 ζ_4/6-24 ζ_5-31186/243) + C_A C_F^2 n_f(272 ζ_3/9+11 ζ_4-7351/144)+ C_A C_F n_f^2(-103 ζ_2/81-47 ζ_3/27+5 ζ_4/6+13819/972) + C_F^2 n_f^2(-38 ζ_3/9-2 ζ_4+215/24)+ C_F n_f^3(-4 ζ_3/9-232/729)- 64 C_F π^3 β_0^3 ζ_3≈ -1189.19 - 64 C_F π^3 β_0^3 ζ_3. We have left the additional terms arising from Eq. (<ref>) unexpanded to facilitate the comparison to the existing literature. The remaining quantities are evaluated with n_f=5. The expression of the Sudakov radiator is analogous to the b-space one, i.e. R(ϵ k_t1)= ∑_ℓ=1^2 ∫_ϵ k_t1^Md k_t/k_t R_ℓ'(k_t)=∑_ℓ=1^2∫_ϵ k_t1^Md k_t/k_t(A_ℓ(α_s(k_t))lnM^2/k_t^2 + B_ℓ(α_s(k_t))), and, as above, we define R' as the logarithmic derivative of R: R'_ℓ(k_t1) = d R_ℓ(k_t1)/d L, where we defined L=lnM/k_t1. In order to make the numerical evaluation of our master formula Eq. (<ref>) more efficient, we can make a further approximation to the integrand without spoiling the logarithmic accuracy of the result. Before we describe the procedure in detail, we stress that this additional manipulation is not strictly necessary, and one could in principle implement Eq. (<ref>) directly in a Monte Carlo program. Since the ratios k_ti/k_t1 for all resolved blocks are of order 1, we can expand R and its derivative about k_t1, retaining terms that contribute at the desired logarithmic accuracy. At N^3LL one has R(ϵ k_t1)= R(k_t1) + R'(k_t1)ln1/ϵ + 1/2!R”(k_t1)ln^21/ϵ + 1/3!R”'(k_t1)ln^31/ϵ + …, R'(k_ti)= R'(k_t1) + R”(k_t1)ln1/ζ_i + 1/2!R”'(k_t1)ln^21/ζ_i+…, where the dots denote N^4LL terms, and we have employed the usual notation ζ_i=k_ti/k_t1.
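The numerical size of the ζ_3 terms above is easily assessed. The following minimal sketch (our own illustration; it assumes the convention β_0=(11C_A-2n_f)/(12π), consistent with the running-coupling factors used below, and n_f=5) evaluates the scheme-translation shifts that are kept symbolic in the expressions for B^(3), A^(4) and H^(2):

```python
import math

CA, CF, nf = 3.0, 4.0/3.0, 5
zeta3 = 1.2020569031595943
beta0 = (11.0*CA - 2.0*nf)/(12.0*math.pi)   # assumed one-loop convention

for label, C in (("gluon, C=C_A", CA), ("quark, C=C_F", CF)):
    dB3 = -32.0*C*math.pi**2*beta0**2*zeta3      # shift of B^(3)
    dA4 = -64.0*C*math.pi**3*beta0**3*zeta3      # shift of A^(4)
    dH2 = (8.0/3.0)*math.pi*beta0*zeta3*(2.0*C)  # shift of H^(2), with A^(1)=2C
    print(f"{label}:  dB3={dB3:8.2f}  dA4={dA4:9.2f}  dH2={dH2:6.2f}")
```

The shifts turn out to be of the same order as the quoted b-space values themselves, which is why they are kept explicit rather than absorbed numerically.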
We recall that the transverse momenta of blocks in the resolved ensemble are parametrically of the same order. This is because rIRC safety ensures that blocks k with k_t ≪ k_t1 do not contribute to the observable and are encoded in the Sudakov radiator. Therefore, since ln(1/ζ_i) in the above formula is the logarithm of an O(1) quantity, each term on the right-hand side of Eq. (<ref>) is logarithmically subleading with respect to the one to its left. The logarithms ln(1/ϵ) in the first line of Eq. (<ref>) are a parametrisation of the IRC divergences arising from the combination of real-unresolved blocks and virtual corrections, expanded at a given logarithmic order. The ϵ dependence exactly cancels against the corresponding terms in the resolved real corrections (denoted by the same-order derivative of R) upon integration over ζ_i, as will be shown below. This is a convenient way to recast the subtraction of IRC divergences at each logarithmic order in our formulation. The terms proportional to R'(k_t1) are to be retained starting at NLL, those proportional to R”(k_t1) contribute at NNLL and, finally, the ones proportional to R”'(k_t1) are needed at N^3LL. Starting from the NLL ensemble, we note that correcting a single block with respect to its R'(k_t1) approximation (i.e. including for that block the subleading terms of Eq. (<ref>)) gives rise at most to a NNLL correction of order O(α_s^n L^n-1) in our counting. Modifying two blocks would lead to a relative correction of order O(α_s^n L^n-2), i.e. N^3LL, and so on. Therefore, at any given logarithmic order, it is sufficient to keep terms beyond the R'(k_t1) approximation only for a finite number of blocks (namely a single block at NNLL, two blocks at N^3LL, and so forth). Consistently, one has to expand out the corresponding terms in the Sudakov that cancel the ϵ divergences of the modified real blocks to the given logarithmic order. This prescription has been derived and discussed in detail at NNLL in ref. <cit.>, and will be used later in this section. As a next step we address the evolution of the parton densities and of the corresponding coefficient functions encoded in Eq. (<ref>), whose anomalous dimensions Γ_N and Γ_N^(C) have been defined in Eqs. (<ref>) and (<ref>). Only a finite number of terms in their perturbative series needs to be retained at a given logarithmic accuracy: in particular, contributions from the O(α_s^n) term in Γ_N enter for a N^n+1LL resummation (we recall that the series of Γ_N starts at O(α_s^0), hence these terms start contributing at NLL). On the other hand, the contribution of the coefficient functions, and therefore of the corresponding anomalous dimension, starts at NNLL. Therefore the O(α_s^m) term in Γ_N^(C) is necessary at N^m+1LL, since its expansion starts at O(α_s). We can then perform the same expansion about k_t1 for the terms in Eq. (<ref>) containing Γ and Γ^(C). Up to N^3LL we expand the exponent of the evolution operators as ∫_ϵ k_t1^μ_0dk_t/k_tα_s(k_t)/πΓ_N_ℓ(α_s(k_t))= ∫_k_t1^μ_0dk_t/k_tα_s(k_t)/πΓ_N_ℓ(α_s(k_t)) + d/d L∫_k_t1^μ_0dk_t/k_tα_s(k_t)/πΓ_N_ℓ(α_s(k_t))ln1/ϵ+ 1/2d^2/d L^2∫_k_t1^μ_0dk_t/k_tα_s(k_t)/πΓ_N_ℓ(α_s(k_t))ln^21/ϵ+… ∫_ϵ k_t1^μ_0dk_t/k_tΓ_N_ℓ^(C)(α_s(k_t))= ∫_k_t1^μ_0dk_t/k_tΓ_N_ℓ^(C)(α_s(k_t)) + d/d L∫_k_t1^μ_0dk_t/k_tΓ_N_ℓ^(C)(α_s(k_t))ln1/ϵ +…, and the corresponding resolved real-emission kernels as α_s(k_tj)/πΓ_N_ℓ(α_s(k_tj))= α_s(k_t1)/πΓ_N_ℓ(α_s(k_t1)) + d/d Lα_s(k_t1)/πΓ_N_ℓ(α_s(k_t1))ln1/ζ_j +… Γ_N_ℓ^(C)(α_s(k_tj))= Γ_N_ℓ^(C)(α_s(k_t1)) +…, where as usual L=ln(M/k_t1).
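The structure of these expansions can be verified symbolically on a toy example. In the following minimal sketch (our own check, using a fixed-coupling radiator R(L)=aL^2 rather than the full N^3LL one), the shift L→L+ln(1/ϵ) reproduces the series R+R'ln(1/ϵ)+R”ln^2(1/ϵ)/2!+… exactly, since R”'=0 in this model:

```python
import sympy as sp

L, l, a = sp.symbols('L l a', positive=True)    # l stands for ln(1/eps)
R = a*L**2                                      # fixed-coupling toy radiator

lhs = R.subs(L, L + l)                          # R(eps*kt1): L -> L + ln(1/eps)
rhs = R + sp.diff(R, L)*l + sp.diff(R, L, 2)*l**2/2
print(sp.simplify(lhs - rhs))                   # prints 0: the expansion closes
```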
The first terms on the right-hand side of Eqs. (<ref>) and (<ref>) represent the evolution operator that runs the parton densities and the coefficient functions, respectively, from μ_0 up to k_t1. The remaining terms describe the exclusive evolution of the parton densities and of the coefficient functions in the resolved strip. In particular, the ϵ-dependent terms completely cancel against the corresponding terms in the real-emission kernel of Eqs. (<ref>) and (<ref>) upon integration over the resolved-radiation phase space. At NLL the coefficient functions are the identity matrix in flavour space, and therefore their evolution operator is trivial. The contribution of Γ_N in the exponent starts at NLL, while the exclusive evolution of the parton densities in the resolved strip starts at NNLL, since it corresponds to emissions in the hard-collinear edge of the phase space. Therefore, at NLL one only needs to retain the first term on the right-hand side of Eq. (<ref>), and ignore everything else in Eqs. (<ref>), (<ref>), (<ref>), and (<ref>), which corresponds to evaluating the parton densities at μ_F=k_t1. At this order, the evolution can be carried out by means of the tree-level anomalous dimension γ^(0)_N. Similarly, at NNLL one needs to take into account the second term on the r.h.s. of Eq. (<ref>) and the first term on the r.h.s. of Eq. (<ref>), where now the anomalous dimension Γ_N is evaluated at one-loop accuracy (i.e. including γ_N^(1)). At this order also the coefficient functions start contributing with their inclusive evolution, therefore one needs to add the first term on the r.h.s. of Eq. (<ref>). The corresponding exclusive evolution of the coefficient functions in the resolved strip, encoded in the r.h.s. of Eq. (<ref>), only starts at N^3LL. At higher orders, one simply needs to add subsequent terms from the above equations, and evaluate the anomalous dimensions at the appropriate perturbative accuracy. As discussed above for the Sudakov radiator, at any given logarithmic order beyond NLL, it is sufficient to include the extra ϵ-dependent terms from Eqs. (<ref>), (<ref>) in the exponent, and the corresponding terms in the resolved real radiation from Eqs. (<ref>), (<ref>) only for a finite number of emissions, namely a single emission at NNLL, two emissions at N^3LL, and so forth. Finally, we need to deal with the block C^c_1;T_N_1(α_s(μ_0)) H(μ_R)C^c_2_N_2(α_s(μ_0)) in Eq. (<ref>). As discussed in the previous section, for a generic process this block receives a contribution from the gluon collinear correlations G, as in Eq. (<ref>). Since the contribution of the G functions starts at N^3LL, at this order one can drop the ϵ dependence in their evolution; namely, in the analogue of Eq. (<ref>) with Γ_N^(C)→Γ_N^(G), only the first term on the right-hand side needs to be retained. This amounts to evaluating the coupling of the G coefficient functions at k_t1. With the expansions detailed above, Eq. (<ref>) becomes Σ̂_N_1,N_2^c_1 c_2(v) = C^c_1;T_N_1(α_s(μ_0)) H(μ_R)C^c_2_N_2(α_s(μ_0))∫_0^Md k_t1/k_t1∫_0^2πdϕ_1/2π×e^- R(k_t1)- R'(k_t1)ln1/ϵ- 1/2!R”(k_t1)ln^21/ϵ-1/3!R”'(k_t1)ln^31/ϵ+…×exp{-∑_ℓ=1^2( ∫_k_t1^μ_0dk_t/k_tα_s(k_t)/πΓ_N_ℓ(α_s(k_t)) + d/d L∫_k_t1^μ_0dk_t/k_tα_s(k_t)/πΓ_N_ℓ(α_s(k_t))ln1/ϵ + 1/2!d^2/d L^2∫_k_t1^μ_0dk_t/k_tα_s(k_t)/πΓ_N_ℓ(α_s(k_t))ln^21/ϵ+…
+ ∫_k_t1^μ_0dk_t/k_tΓ_N_ℓ^(C)(α_s(k_t)) + d/d L∫_k_t1^μ_0dk_t/k_tΓ_N_ℓ^(C)(α_s(k_t))ln1/ϵ + …)}×∑_ℓ_1=1^2(R_ℓ_1'(k_t1) + α_s(k_t1)/πΓ_N_ℓ_1(α_s(k_t1)) + Γ_N_ℓ_1^(C)(α_s(k_t1))) ×∑_n=0^∞1/n!∏_i=2^n+1∫_ϵ^1dζ_i/ζ_i∫_0^2πdϕ_i/2π∑_ℓ_i=1^2{ R_ℓ_i'(k_t1) +R_ℓ_i”(k_t1)ln1/ζ_i + 1/2 R_ℓ_i”'(k_t1)ln^21/ζ_i + …+α_s(k_t1)/πΓ_N_ℓ_i(α_s(k_t1)) + d/d L(α_s(k_t1)/πΓ_N_ℓ_i(α_s(k_t1))) ln1/ζ_i+…+ Γ_N_ℓ_i^(C)(α_s(k_t1)) + …}Θ(v-V({p̃},k_1,…, k_n+1)) + { C→ G; Γ^(C)→Γ^(G)} . Following the procedure of ref. <cit.>, we can express the ln (1/ϵ) singularities in the exponent of Eq. (<ref>) as integrals over dummy real emissions as follows: ln1/ϵ = ∫_ϵ^1dζ/ζ, 1/2ln^21/ϵ =∫_ϵ^1dζ/ζln1/ζ, 1/3!ln^31/ϵ = 1/2∫_ϵ^1dζ/ζln^21/ζ, and subsequently expand out the divergent part of the exponent, retaining the terms necessary at a given logarithmic order. We further introduce the average of a function G({p̃},{k_i}) over the measure dZ, ∫ dZ G({p̃},{k_i}) = ϵ^R'(k_t1)∑_n=0^∞1/n!∏_i=2^n+1∫_ϵ^1dζ_i/ζ_i∫_0^2πdϕ_i/2π R'(k_t1) G({p̃},k_1,…,k_n+1), where we simplified the notation by using R'(k_t1)=∑_ℓ=1,2 R'_ℓ(k_t1). The dependence on the regulator ϵ cancels exactly in Eq. (<ref>). We can plug Eq. (<ref>) into the definition of the hadronic cross section (<ref>). We define the derivatives of the parton densities by means of the DGLAP evolution equation ∂ f(μ, x)/∂lnμ = α_s(μ)/π∫_x^1d z/zP̂(z,α_s(μ)) f(μ,x/z), where P̂(z,α_s(μ)) is the regularised splitting function P̂(z,α_s(μ))= P̂^(0)(z) + α_s(μ)/2πP̂^(1)(z) + (α_s(μ)/2π)^2 P̂^(2)(z) + … Moreover, we introduce the following parton luminosities: L_ NLL(k_t1) = ∑_c, c'd|M_B|_cc'^2/dΦ_B f_c(k_t1,x_1)f_c'(k_t1,x_2), L_ NNLL(k_t1) = ∑_c, c'd|M_B|_cc'^2/dΦ_B∑_i, j∫_x_1^1d z_1/z_1∫_x_2^1d z_2/z_2f_i(k_t1,x_1/z_1)f_j(k_t1,x_2/z_2)(δ_ciδ_c'jδ(1-z_1)δ(1-z_2) (1+α_s(μ_R)/2π H^(1)(μ_R)) + α_s(μ_R)/2π1/(1-2α_s(μ_R)β_0 L)(C_c i^(1)(z_1)δ(1-z_2)δ_c'j+ {z_1↔ z_2; c,i ↔ c',j})), L_ N^3LL(k_t1)=∑_c, c'd|M_B|_cc'^2/dΦ_B∑_i, j∫_x_1^1d z_1/z_1∫_x_2^1d z_2/z_2f_i(k_t1,x_1/z_1)f_j(k_t1,x_2/z_2){δ_ciδ_c'jδ(1-z_1)δ(1-z_2) (1+α_s(μ_R)/2π H^(1)(μ_R) + α^2_s(μ_R)/(2π)^2 H^(2)(μ_R)) + α_s(μ_R)/2π1/(1-2α_s(μ_R)β_0 L)(1- α_s(μ_R)β_1/β_0ln(1-2α_s(μ_R)β_0 L)/(1-2α_s(μ_R)β_0 L))×(C_c i^(1)(z_1)δ(1-z_2)δ_c'j+ {z_1↔ z_2; c,i ↔ c',j}) + α^2_s(μ_R)/(2π)^21/(1-2α_s(μ_R)β_0 L)^2((C_c i^(2)(z_1) - 2πβ_0 C_c i^(1)(z_1) lnM^2/μ_R^2)δ(1-z_2)δ_c'j+ {z_1↔ z_2; c,i ↔ c',j}) +α^2_s(μ_R)/(2π)^21/(1-2α_s(μ_R)β_0 L)^2(C_c i^(1)(z_1)C_c' j^(1)(z_2) + G_c i^(1)(z_1)G_c' j^(1)(z_2))+ α^2_s(μ_R)/(2π)^2 H^(1)(μ_R)1/(1-2α_s(μ_R)β_0 L)(C_c i^(1)(z_1)δ(1-z_2)δ_c'j + {z_1↔ z_2; c,i ↔ c',j}) }, where x_1=M/√(s) e^Y, x_2=M/√(s) e^-Y, and Y is the rapidity of the colour singlet in the centre-of-mass frame of the collision at the Born level. |M_B|_cc'^2 is the Born squared matrix element, and L=ln(1/v_1), with v_1=k_t1/M, v=p_t/M.
We transform back to momentum space, thus abandoning the matrix notation used so far, by means of the following identities, valid up to N^3LL: d|M_B|_c_1c_2^2/dΦ_B f^T_N_1(k_t1)(∑_ℓ=1^2α_s(k_t1)/πΓ_N_ℓ(α_s(k_t1)))f_N_2(k_t1) →α_s(k_t1)/πP̂(z,α_s(k_t1))⊗ L_ NLL(k_t1)=-∂_L L_ NLL(k_t1), d|M_B|_c_1c_2^2/dΦ_B f^T_N_1(k_t1)C^c_1;T_N_1(α_s(k_t1)) H(μ_R) (∑_ℓ=1^2(α_s(k_t1)/πΓ_N_ℓ(α_s(k_t1)) + Γ_N_ℓ^(C)(α_s(k_t1)))) C^c_2_N_2(α_s(k_t1))f_N_2(k_t1) →-∂_L L(k_t1), d|M_B|_c_1c_2^2/dΦ_B f^T_N_1(k_t1)(∑_ℓ=1^2d/d L(α_s(k_t1)/πΓ_N_ℓ(α_s(k_t1))))f_N_2(k_t1) → 2β_0/πα_s^2(k_t1) P̂^(0)⊗ L_ NLL(k_t1), d|M_B|_c_1c_2^2/dΦ_B f^T_N_1(k_t1)(∑_ℓ_i=1^2α_s(k_t1)/πΓ_N_ℓ_i(α_s(k_t1)))(∑_ℓ_j=1^2α_s(k_t1)/πΓ_N_ℓ_j(α_s(k_t1)))f_N_2(k_t1) →α^2_s(k_t1)/π^2P̂(z,α_s(k_t1))⊗P̂(z,α_s(k_t1))⊗ L_ NLL(k_t1) ≃α^2_s(k_t1)/π^2P̂^(0)⊗P̂^(0)⊗ L_ NLL(k_t1), where we defined ∂_L = d/d L. Since we evaluated explicitly the sum over the emitting legs ℓ_i, the convolution of a regularised splitting kernel P̂^(0) with the NLL parton luminosity is now defined as P̂^(0)⊗ L_ NLL(k_t1)≡∑_c, c'd|M_B|_cc'^2/dΦ_B{(P̂^(0)⊗ f)_c(k_t1,x_1)f_c'(k_t1,x_2) + f_c (k_t1,x_1) (P̂^(0)⊗ f)_c'(k_t1,x_2) }. The term P̂^(0)⊗P̂^(0)⊗ L_ NLL(k_t1) is to be interpreted as P̂^(0)⊗P̂^(0) ⊗ L_ NLL(k_t1) ≡∑_c, c'd|M_B|_cc'^2/dΦ_B{(P̂^(0)⊗P̂^(0)⊗ f)_c(k_t1,x_1)f_c'(k_t1,x_2) + f_c (k_t1,x_1) (P̂^(0)⊗P̂^(0)⊗ f)_c'(k_t1,x_2)+ 2(P̂^(0)⊗ f)_c (k_t1,x_1) (P̂^(0)⊗ f)_c'(k_t1,x_2) }. Including terms up to N^3LL, we can therefore recast Eqs. (<ref>) and (<ref>) as dΣ(v)/dΦ_B = ∫d k_t1/k_t1d ϕ_1/2π∂_L(-e^-R(k_t1) L_ N^3LL(k_t1) ) ∫dZ Θ(v-V({p̃},k_1,…, k_n+1))+ ∫d k_t1/k_t1d ϕ_1/2π e^-R(k_t1)∫dZ∫_0^1d ζ_s/ζ_sd ϕ_s/2π{(R' (k_t1)L_ NNLL(k_t1) - ∂_LL_ NNLL(k_t1))×(R” (k_t1)ln1/ζ_s +1/2 R”' (k_t1)ln^21/ζ_s) - R' (k_t1)(∂_LL_ NNLL(k_t1) - 2β_0/πα_s^2(k_t1) P̂^(0)⊗ L_ NLL(k_t1) ln1/ζ_s)+α_s^2(k_t1) /π^2P̂^(0)⊗P̂^(0)⊗ L_ NLL(k_t1)}{Θ(v-V({p̃},k_1,…, k_n+1,k_s)) - Θ(v-V({p̃},k_1,…, k_n+1))}+ 1/2∫d k_t1/k_t1d ϕ_1/2π e^-R(k_t1)∫dZ∫_0^1d ζ_s1/ζ_s1d ϕ_s1/2π∫_0^1d ζ_s2/ζ_s2d ϕ_s2/2π R' (k_t1)×{ L_ NLL(k_t1) (R” (k_t1))^2ln1/ζ_s1ln1/ζ_s2 - ∂_LL_ NLL(k_t1) R” (k_t1)(ln1/ζ_s1 +ln1/ζ_s2)+ α_s^2(k_t1) /π^2P̂^(0)⊗P̂^(0)⊗ L_ NLL(k_t1)}×{Θ(v-V({p̃},k_1,…, k_n+1,k_s1,k_s2)) - Θ(v-V({p̃},k_1,…, k_n+1,k_s1)) -Θ(v-V({p̃},k_1,…, k_n+1,k_s2)) + Θ(v-V({p̃},k_1,…, k_n+1))} +O(α_s^n ln^2n-6(1/v)). Until now we have explicitly considered the case of flavour-conserving real emissions, for which we derived Eq. (<ref>). We now turn to the inclusion of the flavour-changing splitting kernels, which enter purely in the hard-collinear limit and contribute to the DGLAP evolution. We observe that at a given logarithmic order only a finite number of hard-collinear emissions is actually necessary. As we mentioned several times in the above sections, at N^3LL one needs to account for the effect of up to two hard-collinear resolved partons. Therefore, the inclusion of the flavour-changing kernels can be done directly at the level of the splitting functions and parton luminosities in Eq. (<ref>). In the above expressions for the luminosity we have used the following expansions in powers of the strong coupling for the functions C, H and G, up to N^3LL: C_ab(α_s(μ))= δ(1-z)δ_ab + ∑_n=1^2(α_s(μ)/2π)^n C_ab^(n)(z), H(μ_R)= 1+∑_n=1^2(α_s(μ_R)/2π)^n H^(n)(μ_R), G_ab(α_s(μ))= α_s(μ)/2π G_ab^(1)(z), where μ is the same scale at which the parton densities are evaluated, and μ_R is the renormalisation scale. The expressions for C^(1) and H^(1) have been known for a long time, and are collected, for instance, in the appendix of ref.
<cit.>. The hard-virtual coefficient H(μ_R) is defined as the finite part of the renormalised QCD form factor in the MS renormalisation scheme, divided by the underlying Born squared matrix element. The hard coefficients for gluonic processes up to O(α_s^2), evaluated at the invariant mass of the colour singlet, H^(1)(M) and H^(2)(M), read <cit.> H_g^(1)(M)= C_A(5+7/6π^2)-3 C_F, H_g^(2)(M)=5359/54 + 137/6lnm_H^2/m_t^2+ 1679/24π^2 + 37/8π^4- 499/6ζ_3+ C_A 16/3πβ_0 ζ_3, n_f=5, where the last term in H_g^(2) was deliberately left symbolic to stress its origin from Eq. (<ref>). Analogously, for quark-initiated reactions one has <cit.> H_q^(1)(M)= C_F( -8 + 7/6π^2 ), H_q^(2)(M)=-57433/972+281/162π^2+22/27π^4 +1178/27ζ_3+ C_F 16/3πβ_0 ζ_3, n_f=5. The renormalisation-scale dependence of the first two hard-function coefficients is given by H^(1)(μ_R)= H^(1)(M) + 2 d_B πβ_0 lnμ_R^2/M^2, H^(2)(μ_R)= H^(2)(M) + 4d_B ( 1+d_B/2π^2β_0^2 ln^2μ_R^2/M^2 + π^2 β_1 lnμ_R^2/M^2) + 2 (1+d_B) πβ_0lnμ_R^2/M^2 H^(1)(M), where d_B is the strong-coupling order of the Born squared amplitude (e.g. d_B=2 for Higgs production). The C^(2) and G^(1) functions for gluon-fusion processes are obtained in refs. <cit.>, while for quark-induced processes they are derived in ref. <cit.>. In the present work we extract their expressions using the results of refs. <cit.>. For gluon-fusion processes, the C^(2)_gq and C^(2)_gg coefficients normalised as in Eq. (<ref>) are extracted from Eqs. (30) and (32) of ref. <cit.>, respectively, where we use the hard coefficients of Eqs. (<ref>) without the new term proportional to β_0 in the H_g^(2)(M) coefficient.[These must be replaced by H^(1)→ H^(1)/2 and H^(2)→ H^(2)/4 to match the convention of refs. <cit.>.] The coefficient G^(1) is taken from Eq. (13) of ref. <cit.>. Similarly, for quark-initiated processes, we extract C^(2)_qg and C^(2)_qq from Eqs. (32) and (34) of ref. <cit.>, respectively, where we use the hard coefficients from Eqs. (<ref>) without the new term proportional to β_0 in the H_q^(2)(M) coefficient. The remaining quark coefficient functions C^(2)_qq̅, C^(2)_qq̅' and C^(2)_qq' are extracted from Eq. (35) of the same article. Eq. (<ref>) resums all logarithmic towers of ln(1/v) (with v=p_t/M) up to N^3LL, therefore neglecting subleading-logarithmic terms of order α_s^n ln^2n-6(1/v). Constant terms of order O(α_s^3) relative to the Born will be extracted automatically from a matching to the N^3LO cumulative cross section in Section <ref>. This will allow us to control all terms of order α_s^n ln^2n-6(1/v) in the matched cross section, therefore neglecting terms O(α_s^n ln^2n-7(1/v)). We have split the result into a sum of three terms. The first term (first line of Eq. (<ref>)) starts at LL and contains the full NLL corrections. The second term of Eq. (<ref>) (second to fourth lines) is necessary to achieve NNLL accuracy, while the third term (fifth to ninth lines) is purely N^3LL. Since Eq. (<ref>) still contains subleading-logarithmic terms (i.e. starting at N^4LL in ln(M/p_t)), one could, even if not strictly required, perform further expansions on each of the terms of Eq. (<ref>) in order to neglect at least some of the corrections beyond the desired logarithmic order. For instance, for a N^3LL resummation, the full N^3LL radiator is necessary in the first term of Eq. (<ref>), while the radiator can be evaluated at NNLL in the second term, and at NLL in the third term. Analogously, for a NNLL resummation, the NLL radiator suffices in the second term of Eq. (<ref>).
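The renormalisation-scale dependence quoted above can be checked for consistency: the logarithm in H^(1)(μ_R) must compensate, at O(α_s), the running of the α_s^d_B prefactor multiplying the Born cross section. A minimal symbolic sketch of this check (our own verification; it assumes the one-loop running α_s(μ_R)=α_s(M)/(1+α_s(M)β_0 ℓ) with ℓ=ln(μ_R^2/M^2), in the convention used throughout):

```python
import sympy as sp

aM, b0, ell, H1M, dB = sp.symbols('a_M beta0 ell H1M d_B', positive=True)
aR  = aM/(1 + aM*b0*ell)           # assumed one-loop running coupling at muR
H1R = H1M + 2*dB*sp.pi*b0*ell      # H^(1)(mu_R) as quoted above

# sigma ~ alpha_s(muR)^dB * (1 + alpha_s(muR)/(2 pi) * H^(1)(muR))
ln_sigma = dB*sp.log(aR) + sp.log(1 + aR/(2*sp.pi)*H1R)

# the series below starts at O(a_M^2): the O(a_M) muR-dependence cancels
print(sp.series(sp.diff(ln_sigma, ell), aM, 0, 2))
```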
Furthermore, at NNLL, one could split R'(k_t1) into the sum of a NLL term R̂'(k_t1) and a NNLL one δR̂'(k_t1), and expand Eq. (<ref>) about the former, retaining only contributions linear in δR̂'(k_t1). The last two considerations relate Eq. (<ref>) to Eq. (9) of ref. <cit.>, where this approach was first formulated at NNLL for the Higgs-boson transverse-momentum distribution. Eq. (<ref>) can be evaluated in its present form with fast Monte Carlo techniques, as we will discuss in Section <ref>. We performed numerous tests to verify the correctness of Eq. (<ref>). Firstly, we performed the expansion of Eq. (<ref>) to O(α_s^3) relative to the Born for the transverse momentum of the boson as well as for the ϕ^* distribution in Drell-Yan production, and compared it to the corresponding result from the b-space formulation, finding full agreement for the N^3LL terms. This is a highly non-trivial test of the logarithmic structure of Eq. (<ref>). The differential O(α_s^2) expansion for both observables was also compared to MCFM <cit.>, and we found that the difference between the two predictions vanishes in the logarithmic region. Finally, we checked numerically that the coefficient of the scaling Σ(p_t)∝ p_t^2 in the small-p_t limit of Eq. (<ref>) agrees with the prediction obtained with the b-space formulation. The agreement of the NNLL prediction obtained using our formula (<ref>) with the b-space result from the program HqT <cit.> across the spectrum was shown in ref. <cit.>.

§.§ Perturbative scaling in the p_t→ 0 regime

In this section we show that our formulation of the transverse-momentum resummation of Eq. (<ref>) reproduces the correct scaling in the p_t→ 0 limit, as first observed in <cit.>. Moreover, we obtain a correspondence between the logarithmic accuracy and the perturbative accuracy in this limit. In the following we adopt the approximations made in Ref. <cit.> to derive an analytic estimate for the p_t→ 0 scaling of the differential cross section. Such approximations are further discussed in Appendix <ref>. To perform a comparison with the results of <cit.>, we consider NLL resummation and neglect the evolution of the parton densities with the energy scale. However, the same procedure can be easily extended to the general case. We have d^2Σ(v)/d^2 p⃗_t dΦ_B = σ^(0)(Φ_B)∫d k_t1/k_t1d ϕ_1/2π e^-R(k_t1)R'(k_t1)∫dZ δ^(2)(p⃗_t-(k⃗_t1 + … + k⃗_t(n+1))), where σ^(0)(Φ_B)≡dσ^(0)/dΦ_B, and the measure dZ is defined in Eq. (<ref>). In order to evaluate the integral over dZ analytically we proceed as in Sec. <ref>. After integrating over the azimuthal direction of p⃗_t we obtain d^2Σ(v)/d p_t dΦ_B = σ^(0)(Φ_B)p_t∫bd b J_0(p_t b) ∫d k_t1/k_t1 e^-R(k_t1)R'(k_t1) J_0(b k_t1)×exp{-R'(k_t1) ∫^k_t1_0d k_t/k_t (1-J_0(b k_t))}. Before proceeding to the evaluation of Eq. (<ref>), a remark is in order. At NLL one would be tempted to perform the replacement (see Sec. <ref>) (1-J_0(b k_t)) ≃Θ(k_t-b_0/b) + …, and recast Eq. (<ref>) as d^2Σ(v)/d p_t dΦ_B = σ^(0)(Φ_B)p_t∫bd b J_0(p_t b) ∫d k_t1/k_t1 e^-R(k_t1)R'(k_t1) J_0(b k_t1)(b_0/b k_t1)^R'(k_t1)= σ^(0)(Φ_B)p_t ∫d k_t1/k_t1 e^-R(k_t1)R'(k_t1) (b_0/k_t1)^R'(k_t1)2^1-R'(k_t1)/(p_t^2+k_t1^2)^1-R'(k_t1)/2×Γ(1-R'(k_t1)/2)/Γ(R'(k_t1)/2) _2F_1(2-R'(k_t1)/4,1-R'(k_t1)/4,1,4 p_t^2 k_t1^2/(p_t^2+k_t1^2)^2). The above result is singular for R'(k_t1) ≥ 2, owing to the fact that the integrand scales as b^1-R'(k_t1) in the b→0 limit. This singular behaviour is however entirely due to the approximation in Eq. (<ref>), where all power-suppressed terms are neglected, while Eq.
(<ref>) is regular, as the integral in its exponent vanishes as O(b^2) for small b. Therefore, when using Eq. (<ref>) one must regularise the b→ 0 limit, for instance by means of modified logarithms as in ref. <cit.>. In our formalism, instead, Eq. (<ref>) is evaluated numerically without further approximations, so that the b→ 0 region is correctly described. It is interesting to study the scaling of Eq. (<ref>) in the small-p_t limit. In this limit, the dominant mechanism that produces a vanishing p_t involves several soft and collinear emissions with finite transverse momentum that mutually balance in the transverse plane. In this kinematic configuration one has k_t1≫ p_t, thus expanding k_t1 about p_t in Eq. (<ref>) is not allowed: such an operation would give rise to spurious singularities at R'(p_t)≥ 2, as reported several times in the literature <cit.>. We therefore evaluate the b integral of Eq. (<ref>) and observe that in the limit where M≫ k_t1≫ p_t it gives ∫bd b J_0(p_t b) J_0(b k_t1)exp{-R'(k_t1) ∫^k_t1_0d k_t/k_t (1-J_0(b k_t))}≃ 4k_t1^-2/R'(k_t1), namely it is constant in p_t in first approximation. In this regime Eq. (<ref>) becomes d^2Σ(v)/d p_t dΦ_B = 4 σ^(0)(Φ_B)p_t∫d k_t1/k^3_t1 e^-R(k_t1). In order to directly compare with the result of ref. <cit.>, we specialise to the case of the Drell-Yan process, and compute R(k_t1) at the lowest order using the leading-order running coupling expressed in terms of the QCD scale Λ_QCD (with n_f=4), α_s(k_t)= (12π/25)/ln(k_t^2/Λ_QCD^2). We obtain (A^(1)=2 C_F in this case) R(k_t1) = (16/25)ln(M^2/Λ_QCD^2)ln(ln(M^2/Λ_QCD^2)/ln(k_t1^2/Λ_QCD^2)) - (16/25)ln(M^2/k_t1^2). We now integrate over k_t1 in Eq. (<ref>) from Λ_QCD up to the invariant mass of the Drell-Yan pair, obtaining d^2Σ(v)/d p_t dΦ_B = 4 σ^(0)(Φ_B)p_t∫_Λ_QCD^Md k_t1/k^3_t1 e^-R(k_t1)≃ 2 σ^(0)(Φ_B) p_t(Λ_QCD^2/M^2)^(16/25)ln(41/16), which reproduces the scaling of ref. <cit.>.[In the last step we have neglected a factor of 1/Λ_QCD^2 ln(M^2/Λ_QCD^2), as done in ref. <cit.>.] We stress that this power-like scaling is not due, by any means, to higher-order effects that one would be missing in performing the naive expansion of k_t1 about p_t, but rather to a collective kinematical effect that requires the presence of any number of emissions. Indeed, the expansion of Eq. (<ref>) to any order in the strong coupling only gives rise to logarithmic effects, and no terms scaling as O(p_t) arise. To reproduce the correct scaling an all-order treatment is necessary. In order to study how this result is modified by the inclusion of higher-order logarithmic corrections, we evaluate Eq. (<ref>) in the fixed-coupling-constant approximation. This is a simple toy model for the more complicated running-coupling case. At lowest order one has R(k_t1) = A^(1)α_s/π L^2, with A^(1)=2 C (with C=C_A for gluons and C=C_F for quarks), and L=ln M/k_t1. In the perturbative regime Eq. (<ref>) therefore reads d^2Σ(v)/d p_t dΦ_B ≃ 4 σ^(0)(Φ_B) (p_t/M^2) (π/2) e^(π/(2 Cα_s))/√(2 Cα_s) (1+ Erf( √(π)/√(2 Cα_s))). Eq. (<ref>) shows that in the small-p_t limit the differential spectrum features a non-perturbative scaling in α_s (see also Eq. (2.12) of ref. <cit.>[Please note that only the leading contribution for α_s≪ 1 is reported in the right-hand side of that equation.]). However, the coefficient of this scaling can be systematically improved in perturbation theory: the inclusion of NLL terms α_s^n L^n in the right-hand side of Eq. (<ref>) contributes an O(1) correction to the right-hand side of Eq. (<ref>).
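The fixed-coupling result above is a Gaussian integral in L=ln(M/k_t1) and is easy to verify numerically. A minimal sketch (our own cross-check, with illustrative values of C and α_s): with R=(A^(1)α_s/π)L^2 and A^(1)=2C, the k_t1 integral reduces to ∫_0^∞ dL e^(2L-aL^2) with a=2Cα_s/π, whose closed form is precisely the combination of exponential and error function quoted above.

```python
import math
from scipy.integrate import quad

C, alphas = 4.0/3.0, 0.2          # illustrative quark-case inputs
a = 2.0*C*alphas/math.pi          # R(L) = a L^2, L = ln(M/kt1)

# exact integral: \int_0^inf dL exp(2L - a L^2)  (the kt1 integral in disguise)
numeric, _ = quad(lambda L: math.exp(2.0*L - a*L*L), 0.0, 80.0)

closed = (math.pi/2.0)*math.exp(math.pi/(2.0*C*alphas))/math.sqrt(2.0*C*alphas) \
         * (1.0 + math.erf(math.sqrt(math.pi)/math.sqrt(2.0*C*alphas)))
print(numeric, closed)            # the two agree to quadrature accuracy
```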
Analogously, NNLL terms α_s^n L^n-1 will produce an O(α_s) correction relative to the non-perturbative factor e^π/(2 Cα_s)/√(2 Cα_s), and so on. In particular, with our N^3LL calculation we have control over the terms of relative order O(α_s^2). From this scaling we deduce that the correspondence L∼ 1/α_s is still valid in the deep infrared regime. However, this does not mean that the above prediction is accurate in this limit: indeed, non-perturbative effects due to soft-gluon radiation below Λ_QCD, as well as due to the intrinsic transverse momentum of the partons in the proton, feature a similar scaling. This is because the colour singlet's transverse momentum is sensitive to non-perturbative dynamics only through kinematical recoil, which is the same mechanism that drives the scaling (<ref>).

§ NUMERICAL IMPLEMENTATION

In order to have a prediction that is valid across different kinematic regions of the spectrum, one needs to match the resummed calculation, valid in the small-v limit, to a fixed-order calculation that describes the hard (large-v) region. In this section we discuss the matching of the result described in the previous sections, in particular Eq. (<ref>), to a fixed-order prediction that is NNLO accurate in the hard region of the phase space. We then describe how to evaluate Eq. (<ref>) exactly using a Monte Carlo Markov process, and discuss the implementation in a parton-level generator that is fully differential in the Born kinematics.

§.§ Normalisation constraint and resummation-scale dependence

In order to match the resummed calculation to a fixed-order prediction one has to ensure that the hard region of the phase space receives no contamination from resummation effects. We therefore need to modify Eq. (<ref>) so that at large v (v=p_t/M in the transverse-momentum case) all resummation effects vanish. At N^3LL, it reduces to dΣ(v)/dΦ_B = L_ N^3LL(μ_F)|_L= 0, where L_ N^3LL is defined in Eq. (<ref>). The normalisation constraint (<ref>) can be implemented in several ways; in what follows we impose it by modifying the structure of the logarithms L everywhere in Eq. (<ref>), as commonly done for this observable in the literature. Before defining the modified logarithms, it is convenient to have a way to estimate the resummation uncertainties due to higher-order logarithmic corrections that are not included in the calculation. To this end, we introduce the dimensionless resummation scale x_Q by using the identity L≡ln1/v_1 = lnx_Q/v_1 - ln x_Q, and then we expand the right-hand side about ln(x_Q/v_1) to the nominal logarithmic accuracy (in terms of ln(x_Q/v_1)), neglecting subleading corrections. In the transverse-momentum case one has v_1=k_t1/M and x_Q=Q/M, where Q, the resummation scale, has the dimension of a mass. A variation of x_Q will therefore provide an estimate of the size of higher-order logarithmic corrections. The normalisation constraint can now be imposed by replacing the resummed logarithms ln(x_Q/v_1) by lnx_Q/v_1→L̃= 1/pln((x_Q/v_1)^p + 1), where the positive real parameter p is chosen in such a way that resummation effects vanish rapidly enough at v_1 ∼ x_Q. Eq. (<ref>) amounts to imposing unitarity by introducing in the resummed logarithms power-suppressed terms that scale as (x_Q/v_1)^p, which ultimately give rise to terms of order v^-p in the cumulative cross section Σ(v).
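The two asymptotic regimes of the modified logarithm are immediate to visualise numerically. A minimal sketch (the values x_Q=1/2 and p=2 are illustrative): for v_1≪x_Q one recovers L̃≃ln(x_Q/v_1), while for v_1≫x_Q the modified logarithm vanishes as (x_Q/v_1)^p/p, making all resummation effects power-suppressed there.

```python
import math

def Ltilde(v1, xQ=0.5, p=2):
    # modified logarithm: (1/p) ln((xQ/v1)^p + 1)
    return math.log((xQ/v1)**p + 1.0)/p

for v1 in (1e-4, 1e-2, 0.5, 2.0, 10.0):
    print(f"v1={v1:>8}:  Ltilde={Ltilde(v1):9.5f}   ln(xQ/v1)={math.log(0.5/v1):+9.5f}")
```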
Given that the differential spectrum tends to zero with a power law (∼ v^-n with positive n) at large v, it follows that one should have p≥ n-1 in order not to affect the correct fixed-order scaling at large v. However, since we are interested in turning off the resummation at transverse-momentum values of the order of the singlet's mass, the relevant scaling n to be considered in the choice of p is that of the differential distribution in this region. We stress, finally, that the prescription (<ref>) is only one of the possible ways of turning off resummation effects in the hard regions of the spectrum. For instance one could, analogously, directly constrain the first block to have k_t1≤ Q, which would naturally suppress radiation effects at large v. This solution would however lead to more complicated integrals in the expansion of the resummation formula used in the matching to fixed order. For this reason, we stick to prescription (<ref>), while leaving the study of alternative solutions for future work. We notice that, with the prescription (<ref>), the single-emission event in the first line of Eq. (<ref>) is no longer a total derivative. One can however restore this property by introducing the jacobian factor J(v_1/x_Q,p) = (x_Q/v_1)^p (1+(x_Q/v_1)^p)^-1 in all integrals over v_1=k_t1/M in Eq. (<ref>). This jacobian tends to one at small v_1 and therefore does not modify the logarithmic structure. Moreover, in the large-v region where the single-emission event dominates, this prescription prevents the proliferation of power-suppressed terms. The prescription (<ref>) effectively maps the point at which the logarithms are turned off onto infinity. This also gives us the freedom to extend the upper bound of the integration over k_t1 from M to ∞ in Eq. (<ref>) without spoiling the logarithmic accuracy. We therefore implement the prescription (<ref>) in the Sudakov radiator and its derivatives. We denote all modified quantities by a `∼' superscript. The expansion about ln(x_Q/v) induces some constant terms in the Sudakov radiator that are expanded out up to O(α_s^2) and included in the hard-function coefficients. The modified quantities in Eq. (<ref>) are R̃(k_t1)= - L̃ g_1(α_s(μ_R) L̃ ) - g_2(α_s(μ_R) L̃ ) - α_s(μ_R)/π g_3(α_s(μ_R) L̃ ) - α^2_s(μ_R)/π^2 g_4(α_s(μ_R) L̃ ), H̃^(1)(μ_R, x_Q) = H^(1)(μ_R) +(-1/2A^(1)ln x_Q^2 +B^(1)) ln x_Q^2, H̃^(2)(μ_R, x_Q) = H^(2)(μ_R) +(A^(1))^2/8ln^4x_Q^2 - (A^(1)B^(1)/2+A^(1)/3πβ_0)ln^3 x_Q^2+(-A^(2)+(B^(1))^2/2 + πβ_0 (B^(1)+A^(1)lnx_Q^2 M^2/μ_R^2))ln^2 x_Q^2 - (-B^(2)+B^(1)2πβ_0lnx_Q^2 M^2/μ_R^2)ln x_Q^2+ H^(1)(μ_R)ln x_Q^2( -1/2A^(1)ln x_Q^2 +B^(1)), where the functions g_i are given in Appendix <ref>. All derivatives of the R function are to be consistently replaced by derivatives of R̃ with respect to L̃. Notice that no constant terms are present in the radiator, and therefore g_i(0)=0. The same replacement must be consistently performed in the parton densities. In addition, it is convenient to have the latter evaluated at a common factorisation scale μ_F at large v_1, in order to match the fixed-order convention. Both steps can be implemented by expressing the parton densities f at the scale μ_F e^-L̃, and expanding out the difference between f(μ_F e^-L̃ ,x) and f(k_t1 ,x), neglecting regular terms as well as logarithmic terms beyond N^3LL. The relevant terms in this expansion can be absorbed into a redefinition of the coefficient functions C^(i)(z), thereby introducing an explicit dependence upon μ_F and x_Q.
We obtain C̃_ij^(1)(z, μ_F,x_Q) = C_ij^(1)(z) + P̂_ij^(0)(z)lnx_Q^2 M^2/μ_F^2, C̃_ij^(2)(z, μ_F,x_Q) = C_ij^(2)(z) + πβ_0 P̂_ij^(0)(z)( ln^2x_Q^2 M^2/μ_F^2 -2 lnx_Q^2 M^2/μ_F^2lnx_Q^2 M^2/μ_R^2) + P̂_ij^(1)(z)lnx_Q^2 M^2/μ_F^2 + 1/2(P̂^(0)⊗P̂^(0))_ij(z) ln^2x_Q^2 M^2/μ_F^2 + (C^(1)⊗P̂^(0))_ij(z) lnx_Q^2 M^2/μ_F^2 - 2πβ_0 C_ij^(1)(z) lnx_Q^2 M^2/μ_R^2. Finally, we also approximate the strong coupling in the terms proportional to α_s^2(k_t1) in Eq. (<ref>), featuring the convolution of one and two splitting functions with the NLL luminosity, by retaining only terms relevant to N^3LL as α_s(k_t1)≃α_s(μ_R)/(1-2α_s(μ_R)β_0 L̃). Summarising, the final formula that we employ in the matching to fixed order will be Eq. (<ref>) with the following replacements: L→L̃, d k_t1/k_t1→ J(v_1/x_Q,p) d k_t1/k_t1, R→R̃, R'→ dR̃/dL̃, R”→ dR̃'/dL̃, R”'→ dR̃”/dL̃, L_ NLL →L̃_ NLL, L_ NNLL→L̃_ NNLL, L_ N^3LL→L̃_ N^3LL. Moreover, the coupling is treated according to Eq. (<ref>) in the terms P̂^(0)⊗L̃_ NLL and P̂^(0)⊗P̂^(0)⊗L̃_ NLL, and the upper bound of the k_t1 integration in Eq. (<ref>) is extended to infinity. The modified luminosity factors appearing in the previous equation are defined as L̃_ NLL(k_t1) = ∑_c, c'd|M_B|_cc'^2/dΦ_B f_c(μ_F e^-L̃,x_1)f_c'(μ_F e^-L̃,x_2), L̃_ NNLL(k_t1) = ∑_c, c'd|M_B|_cc'^2/dΦ_B∑_i, j∫_x_1^1d z_1/z_1∫_x_2^1d z_2/z_2f_i(μ_F e^-L̃,x_1/z_1)f_j(μ_F e^-L̃,x_2/z_2)(δ_ciδ_c'jδ(1-z_1)δ(1-z_2) (1+α_s(μ_R)/2πH̃^(1)(μ_R,x_Q)) + α_s(μ_R)/2π1/(1-2α_s(μ_R)β_0 L̃)(C̃_c i^(1)(z_1,μ_F,x_Q)δ(1-z_2)δ_c'j+ {z_1↔ z_2; c,i ↔ c',j})), L̃_ N^3LL(k_t1)=∑_c, c'd|M_B|_cc'^2/dΦ_B∑_i, j∫_x_1^1d z_1/z_1∫_x_2^1d z_2/z_2f_i(μ_F e^-L̃,x_1/z_1)f_j(μ_F e^-L̃,x_2/z_2){δ_ciδ_c'jδ(1-z_1)δ(1-z_2) (1+α_s(μ_R)/2πH̃^(1)(μ_R,x_Q) + α^2_s(μ_R)/(2π)^2H̃^(2)(μ_R,x_Q)) + α_s(μ_R)/2π1/(1-2α_s(μ_R)β_0 L̃)(1- α_s(μ_R)β_1/β_0ln(1-2α_s(μ_R)β_0 L̃)/(1-2α_s(μ_R)β_0 L̃))×(C̃_c i^(1)(z_1,μ_F,x_Q)δ(1-z_2)δ_c'j+ {z_1↔ z_2; c,i ↔ c',j}) + α^2_s(μ_R)/(2π)^21/(1-2α_s(μ_R)β_0 L̃)^2(C̃_c i^(2)(z_1,μ_F,x_Q)δ(1-z_2)δ_c'j + {z_1↔ z_2; c,i ↔ c',j}) +α^2_s(μ_R)/(2π)^21/(1-2α_s(μ_R)β_0 L̃)^2(C̃_c i^(1)(z_1,μ_F,x_Q)C̃_c' j^(1)(z_2,μ_F,x_Q) + G_c i^(1)(z_1)G_c' j^(1)(z_2))+ α^2_s(μ_R)/(2π)^2H̃^(1)(μ_R,x_Q)1/(1-2α_s(μ_R)β_0 L̃)(C̃_c i^(1)(z_1,μ_F,x_Q)δ(1-z_2)δ_c'j + {z_1↔ z_2; c,i ↔ c',j}) }.

§.§ Matching to fixed order

To match the above result to a fixed-order calculation we design a scheme belonging to the class of multiplicative matchings <cit.>. This, at present, is preferable to the more common additive R scheme <cit.>, since the O(α_s^3) constant terms of the cumulative cross section are currently unknown analytically (except for the three-loop corrections to the form factor that were computed in ref. <cit.>) and they can therefore be recovered numerically from our matching procedure. This ensures that our matched prediction controls all terms up to and including O(α_s^n ln^2n-6(1/v)). Moreover, the multiplicative scheme has the feature of being less sensitive to numerical instabilities of the fixed-order prediction close to the infrared and collinear regions. However, the multiplicative scheme in hadronic collisions can give rise to higher-order terms in the high-p_t tail, due to the cross product of parton luminosities.
These are effectively subleading and therefore they never spoil the perturbative accuracy; nevertheless, they can be numerically non-negligible, especially for processes featuring large K factors like Higgs production. In order to suppress such spurious terms, we introduce a factor Z defined as Z = (1-(v/v_0)^u)^hΘ(v_0-v), where v_0 is the point at which the fixed order is recovered, while h and u are positive parameters. h should be larger than two in order to avoid small kinks in the differential distribution. In our predictions below we set v_0=1/2 and h=3, and check that the variations v_0=1 and h=1,2 do not produce sizeable differences. The parameter u will be discussed shortly. In what follows, with a slight abuse of notation, we denote by Σ(v,Φ_B) the generic exclusive cross section dΣ(v)/dΦ_B. We therefore define the matched cross section as Σ_ MAT(v,Φ_B) = (Σ_ RES(v,Φ_B))^ZΣ_ FO(v,Φ_B)/(Σ_ EXP(v,Φ_B))^Z, where Σ_ FO is the fixed-order cross section at order α_s^n differential in the Born kinematics, and Σ_ EXP is the expansion of the resummed cross section Σ_ RES to O(α_s^n). The factor Z ensures that the resummation is smoothly turned off for v≥ v_0. We stress that at small v the factor Z leads to extra terms which are suppressed as (v/v_0)^u. Therefore u can be chosen in order to make these terms arbitrarily small, although they are already very suppressed in the small-v region. In our case we simply set u=1. Up to N^3LO we now express the fixed-order and the expanded cross sections as Σ_ FO(v,Φ_B) = ∑_i=0^3Σ^(i)_ FO(v,Φ_B), Σ^(i)_ FO(v,Φ_B)= σ^(i)(Φ_B) - ∫_v d v' d Σ^(i)_FO(v',Φ_B)/d v' = σ^(i)(Φ_B) + Σ̅^(i)_ FO(v,Φ_B), Σ_ EXP(v,Φ_B) = ∑_i=0^3Σ^(i)_ EXP(v,Φ_B), where Σ̅^(0)_ FO(v,Φ_B)=0, Σ_ EXP^(0)(v,Φ_B)=σ^(0), and we defined σ^(i)(Φ_B) = d σ^(i)/dΦ_B as the i-th order of the total cross section differential in the Born kinematics, σ(Φ_B) = ∑_i=0^3σ^(i)(Φ_B). With this notation, Eq. (<ref>) becomes Σ_ MAT(v,Φ_B) = (Σ_RES(v,Φ_B)/σ^(0)(Φ_B))^Z{σ^(0)(Φ_B) +σ^(1)(Φ_B) +Σ̅^(1)_FO(v,Φ_B) - Z Σ^(1)_EXP(v,Φ_B)+ σ^(2)(Φ_B) + Σ̅^(2)_FO(v,Φ_B) -Z Σ^(2)_EXP(v,Φ_B) + Z(1+Z)/2(Σ^(1)_EXP(v,Φ_B))^2/σ^(0)(Φ_B) - Z Σ^(1)_EXP(v,Φ_B)σ^(1)(Φ_B) +Σ̅^(1)_FO(v,Φ_B) /σ^(0)(Φ_B) + σ^(3)(Φ_B) + Σ̅^(3)_FO(v,Φ_B) -Z Σ^(3)_EXP(v,Φ_B) -Z(1+Z)(2+Z)/6(Σ^(1)_EXP(v,Φ_B))^3/(σ^(0)(Φ_B))^2 + Z(1+Z)/2(Σ^(1)_ EXP(v,Φ_B))^2σ^(1)(Φ_B)+Σ̅^(1)_ FO(v,Φ_B)/(σ^(0)(Φ_B))^2 - Z Σ^(2)_EXP(v,Φ_B)σ^(1)(Φ_B)+Σ̅^(1)_FO(v,Φ_B)/σ^(0)(Φ_B) + Z Σ^(1)_EXP(v,Φ_B)(1+Z)Σ^(2)_EXP(v,Φ_B) - σ^(2)(Φ_B) - Σ̅^(2)_FO(v,Φ_B)/σ^(0)(Φ_B)}, where terms contributing at different orders in α_s are separated by an extra blank line in the above equation. To work out the expansion, we start from the three contributions of Eq. (<ref>) with the replacements discussed in Sec. <ref>. The first contribution starts with a single emission, the second features at least two emissions, and the third contributes to events with at least three emissions. The single-emission term can be worked out analytically, since the integrand is a total derivative, while the remaining terms can be expanded to O(α_s^3) at the integrand level and integrated over the real-emission phase space. When the integrand is expanded out, one can safely set ϵ=0, as the cancellation of all singularities is now manifest.
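The curly-bracket expansion above can be verified symbolically: through O(α_s^3) it must reproduce Σ_FO (σ^(0)/Σ_EXP)^Z. A minimal sketch of this check (our own verification, with λ a bookkeeping parameter counting powers of α_s, and F_i ≡ σ^(i)+Σ̅^(i)_FO):

```python
import sympy as sp

lam, Z, s0 = sp.symbols('lambda Z sigma0', positive=True)
E1, E2, E3 = sp.symbols('E1 E2 E3')   # Sigma_EXP^(1),(2),(3)
F1, F2, F3 = sp.symbols('F1 F2 F3')   # F_i = sigma^(i) + Sigmabar_FO^(i)

Sigma_EXP = s0 + lam*E1 + lam**2*E2 + lam**3*E3
Sigma_FO  = s0 + lam*F1 + lam**2*F2 + lam**3*F3

target = sp.series(Sigma_FO*(s0/Sigma_EXP)**Z, lam, 0, 4).removeO()
braces = (s0 + lam*(F1 - Z*E1)
          + lam**2*(F2 - Z*E2 + Z*(1+Z)/2*E1**2/s0 - Z*E1*F1/s0)
          + lam**3*(F3 - Z*E3 - Z*(1+Z)*(2+Z)/6*E1**3/s0**2
                    + Z*(1+Z)/2*E1**2*F1/s0**2 - Z*E2*F1/s0
                    + Z*E1*((1+Z)*E2 - F2)/s0))
print(sp.simplify(sp.expand(target - braces)))    # prints 0
```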
The expanded result can be expressed as a linear combination in terms of the following three classes of integrals (we write them in terms of v_1 = k_t1/M): I_2^(n,m)(v) = ∫_0^∞ d v_1/v_1 ∫_0^2π dϕ_1/2π ∫_0^1 dζ_2/ζ_2 ∫_0^2π dϕ_2/2π J(v_1/x_Q,p) L̃^n ln^m 1/ζ_2 × {Θ(v-V({p̃},k_1,k_2)) - Θ(v-V({p̃},k_1))}, I_3^(n,m)(v) = ∫_0^∞ d v_1/v_1 ∫_0^2π dϕ_1/2π ∫_0^1 dζ_2/ζ_2 ∫_0^2π dϕ_2/2π ∫_0^1 dζ_3/ζ_3 ∫_0^2π dϕ_3/2π J(v_1/x_Q,p) L̃^n (ln^m 1/ζ_2 + ln^m 1/ζ_3) × {Θ(v-V({p̃},k_1,k_2,k_3)) - Θ(v-V({p̃},k_1,k_2)) - Θ(v-V({p̃},k_1,k_3)) + Θ(v-V({p̃},k_1))}, I^(n)_3,R”(v) = ∫_0^∞ d v_1/v_1 ∫_0^2π dϕ_1/2π ∫_0^1 dζ_2/ζ_2 ∫_0^2π dϕ_2/2π ∫_0^1 dζ_3/ζ_3 ∫_0^2π dϕ_3/2π J(v_1/x_Q,p) L̃^n ln 1/ζ_2 ln 1/ζ_3 × {Θ(v-V({p̃},k_1,k_2,k_3)) - Θ(v-V({p̃},k_1,k_2)) - Θ(v-V({p̃},k_1,k_3)) + Θ(v-V({p̃},k_1))}, where L̃ and J are defined in Eqs. (<ref>) and (<ref>), respectively. We stress that we extended the upper bound of the integration over v_1 to infinity, following the discussion of Sec. <ref>. The integral over v_1 can be evaluated analytically. The remaining integrations are carried out numerically and the final results are tabulated with fine grids as a function of v/x_Q. §.§ Event generation Before presenting a phenomenological application of this formalism, we comment briefly on how Eq. (<ref>) is implemented numerically using a Monte Carlo method. We follow a variant of the procedure used in refs. <cit.>. For the first emission we generate v_1 uniformly according to the integration measure d v_1/v_1 J(v_1/x_Q,p), and assign it a weight in terms of the Sudakov radiator and parton luminosities. All the identical emissions belonging to the ensemble are generated via a shower ordered in v_i. This is done by expressing the term ϵ^R'(k_t1) as e^-R'(k_t1) ln 1/ϵ = ∏_i=2^n+2 e^-R'(k_t1) ln ζ_i-1/ζ_i, with ζ_1=1 and ζ_n+2=ϵ. Each emission in the ensemble now has a weight dζ_i/ζ_i R'(k_t1) e^-R'(k_t1) ln ζ_i-1/ζ_i, and therefore it can be generated by solving for ζ_i the equation e^-R'(k_t1) ln ζ_i-1/ζ_i = r, with r being a random number extracted uniformly in the range [0,1]. The above equation has no solution for ζ_i > ζ_i-1; therefore this amounts to a shower ordered in ζ_i (or, equivalently, in v_i). The procedure is stopped as soon as a ζ_i < ϵ is generated. The azimuthal angles are generated uniformly in the range [0,2π] for all emissions. Finally, the special emissions, denoted by the subscript s in Eq. (<ref>), do not have an associated Sudakov suppression since their contribution is always finite in four dimensions. Therefore we generate them according to their phase-space measure and weight as they appear in the master formula. This recipe is sufficient to evaluate Eq. (<ref>), and it can be implemented in a fast numerical code. We stress that it is an exact procedure, meaning that no truncation at any perturbative order is involved. The algorithm leads to the generation of an arbitrary number of emissions with ζ_i>ϵ, while all unresolved emissions with ζ_i<ϵ are accounted for analytically in the Sudakov radiator. This ensures that the whole singular part of the radiation phase space and all perturbative orders are treated exactly. We choose conservatively ϵ=e^-20 for our tests, although we observe that a much larger value (e.g.
ϵ∼ e^-7) can be chosen in practice, given that emissions below this threshold will be very soft and/or collinear, hence slightly improving the efficiency of the event generation. We generate Born events using the LO matrix elements and phase-space-integrator routines of MCFM <cit.>, and we use HOPPET <cit.> to handle the evolution of the parton densities and the convolution with the various coefficient functions. For each Born event we run the above algorithm to produce the initial-state radiation, and fill the histograms on the fly, thereby yielding dΣ_RES(v)/dΦ_B. As a byproduct, this allows us to have exclusive events with N^3LL accuracy for the observables treated in this article. For each Born event we also generate a histogram filled with the expansion counterterm, which is computed as described in the previous section. After the generation, the two histograms are combined with the corresponding fixed-order cumulative distribution according to Eq. (<ref>). We point out that the Sudakov radiator has a singularity at the Landau pole, 2α_s(μ_R)β_0 L̃ = 1 (see expressions in Appendix <ref>). One could use different prescriptions to handle this singularity, all differing by power-suppressed terms in the perturbative expansion. We choose to set the result to zero below the singularity, which in any case occurs at very small p_t values. We stress that other schemes can be adopted, and that this choice has no consequences above the scale of the singularity. The resummation and matching as described above are implemented in the program RadISH, which can simulate the production of any colour singlet with arbitrary phase-space cuts on the Born kinematics. The code will be released in due course. §.§ Predictions for Higgs-boson production at 13 TeV pp collisions We now apply the method described in the previous sections to obtain the inclusive transverse-momentum distribution of the Higgs boson at the LHC. We stress that the results shown in the following are to be considered as a proof of concept of our method; a more detailed phenomenology discussion on the precise choice of the matching scheme as well as on the theory uncertainties will be the subject of a forthcoming publication. We perform the calculation in the large-top-mass limit, and we match our N^3LL result to the NNLO distribution that was computed in refs. <cit.>. In particular, here we use results obtained with the code of ref. <cit.> with a cut on the Higgs transverse momentum at 5 GeV. The matched distribution integrates to the inclusive N^3LO cross section that is taken from ref. <cit.>. We consider 13 TeV collisions, and we use parton densities from the PDF4LHC15_nnlo_mc set <cit.>. The value of the parameter p appearing in the modified logarithms L̃ is chosen considering the scaling of the spectrum in the hard region, in order to make the matching to the fixed order smooth in this region. On the other hand, its value should not be too large, in order to prevent the peak of the distribution from being artificially pushed upwards due to the normalisation constraint. We therefore set p=2 as our reference value, but nevertheless checked that the choice p=3 induces negligible differences. As central scales we employ μ_R=μ_F=m_H, and x_Q=Q/m_H=1/2. The perturbative uncertainty is estimated by performing a seven-point variation of μ_R, μ_F by a factor of two in either direction, while keeping 1/2<μ_R/μ_F<2 and x_Q=1/2; moreover, for central μ_R and μ_F scales, x_Q is varied around its central value in a range that we now turn to discuss.
The total error is defined as the envelope of all above variations. In the case of the transverse momentum k_t1 of a colour singlet of mass M, the resummation scale Q is introduced by splitting the resummed logarithms as ln M/k_t1 = ln Q/k_t1 + ln M/Q, and subsequently assuming that ln Q/k_t1 ≫ ln M/Q. The latter condition is true at small k_t1, and it allows one to expand ln(M/k_t1) about ln(Q/k_t1), retaining only terms relevant to a given logarithmic accuracy. In this case, variations of Q give a handle to estimate the size of subleading-logarithmic terms in the region where all-order effects are important. However, in the matching region k_t1 ∼ M/2, condition (<ref>) is violated for k_t1 ≳ Q^2/M. In this regime, the variation of the resummation scale is physically meaningless, since the logarithmic hierarchy it is based upon is not valid at these scales. In particular, for Higgs production, a variation of Q by a factor of two around m_H/2 has a couple of drawbacks. On the one hand, for Q=m_H/4, it leads to values of Q^2/m_H which are below the peak of the distribution, implying that the corresponding resummation-scale variation is technically reliable only to the left of the peak. On the other hand, for Q=m_H, resummation effects are allowed to survive up to the Higgs scale, which is a fairly hard region of the phase space, where one expects to be predictive with the fixed-order calculation alone. In practice, however, in our matching procedure the resummed contribution is subtracted up to the perturbative order one is matching to, which ensures that the residual variations of Q away from the region of large logarithms induce effects that are numerically very small. For these two reasons, we believe that a more suitable variation range is given by Q ∈ [m_H/3, 3m_H/4], which corresponds to a variation by a factor of 3/2 around the central value Q=m_H/2. This range, which was already adopted in ref. <cit.>, ensures that the resummation-scale variation is reliable in the peak region and that resummation effects are turned off well below the hard scale of the reaction, hence avoiding artifacts in the matched spectrum. To study the impact of this choice, in the left panel of Figure <ref> we show the comparison between the pure resummed N^3LL normalised spectra with two uncertainty prescriptions: in the green coarse-textured band, Q is varied by a factor of two around m_H/2, while the red fine-textured band involves the aforementioned reduced variation by a factor of 3/2; in both cases μ_R and μ_F undergo the seven-point variation described above. As expected, the choice Q ∈ [m_H/3, 3m_H/4] reduces the impact of the resummation-scale uncertainty in the matching region where the logarithms are not large, while leaving the uncertainty unchanged in the small-p_t regime where the all-order treatment is necessary. The right panel of Figure <ref> shows the comparison between the two prescriptions for the matched N^3LL+NLO distribution.[Preliminary results at N^3LL+NLO for this observable have also been shown at <cit.>.] In the NLO matching, the resummed component is subtracted up to and including O(α_s^2) terms relative to the Born.
Therefore, in the region where the logarithms are moderate in size, the issues due to the large scale variation are suppressed by O(α_s^3), and we indeed observe that the two bands differ negligibly at intermediate p_t values. We conclude that the resummation-scale variation by a factor of 3/2 still provides a wide enough range to probe the size of subleading-logarithmic corrections, while preventing moderate resummation effects from persisting away from the region where the logarithms are large. We therefore adopt the modified variation in our prescription to estimate the perturbative uncertainty. We next turn to the comparison with NNLL. The left panel of Figure <ref> shows a comparison between the pure resummed predictions for the normalised spectrum at N^3LL and NNLL. In this plot, the NNLL curve is normalised to the NLO total cross section, while the N^3LL curve is normalised to the NNLO total cross section. The plot shows that the inclusion of the N^3LL corrections leads to a reduction in the scale uncertainty of the resummed prediction compared to the NNLL result.[An identical reduction in size is observed when varying Q by a factor of two around its central value.] The right plot of Figure <ref> shows the matching of the NNLL and N^3LL predictions to NLO. Both curves are now normalised to the NNLO total cross section. We observe that, at the matched level, the N^3LL corrections amount to ∼ 10% around the peak of the spectrum, and they get slightly larger for smaller p_t values (≲ 10 GeV). A substantial reduction of the total scale uncertainty is observed for p_t ≲ 10 GeV. We notice that, at the matched level, the impact of the N^3LL corrections is reduced with respect to the pure resummation shown in the left plot of Figure <ref>. This is to a good extent due to the matching scheme that we chose here. Indeed, in a multiplicative scheme we include the O(α_s^2) constant terms already at NNLL, although they are formally of higher-order accuracy. While these terms enter at N^3LL, they are numerically sizeable and therefore their inclusion reduces the difference between the N^3LL+NLO and the NNLL+NLO predictions. To conclude this section, in Figure <ref> we report the N^3LL+NNLO prediction for the normalised distribution. The latter is compared both to NNLL+NNLO and to the pure NNLO result. All curves in the plot are now normalised to the total N^3LO cross section. When matched to NNLO, the N^3LL corrections give rise to a few-percent shift of the central value with respect to the NNLL+NNLO prediction around the peak of the distribution, while they have a somewhat larger effect for p_t ≲ 10 GeV. We recall that some of the N^3LL effects are already included in the NNLL+NNLO prediction by means of the multiplicative matching scheme that we adopt here. As a consequence, this reduces the difference between the N^3LL+NNLO and the NNLL+NNLO curves. We also observe that the matched N^3LL and NNLL predictions are only moderately different in their theoretical-uncertainty bands. While this is of course expected in the hard region of the spectrum, we point out that, in the region p_t ≲ 30 GeV, the latter feature is due (and increasingly so at smaller p_t) to numerical instabilities of the fixed-order runs with one of the scales (μ_R or μ_F) set to m_H/2. As we already observed at NLO, it is indeed necessary to have stable fixed-order predictions for p_t < 10 GeV in order to benefit from the uncertainty reduction due to the higher-order resummation. We leave this for future work.
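For reference, the uncertainty prescription used throughout this section can be summarised in a short sketch. In the following Python fragment, spectrum(muR, muF, xQ) is a hypothetical placeholder for the matched differential distribution at fixed p_t, and the Higgs-mass value is purely illustrative; the fragment encodes the seven-point μ_R, μ_F variation at central x_Q=1/2, combined with the x_Q variation by a factor of 3/2 (Q ∈ [m_H/3, 3m_H/4]) at central μ_R=μ_F=m_H:

```python
from itertools import product

def scale_envelope(spectrum, m_h=125.0):
    # Seven-point (muR, muF) variation by a factor of two in either
    # direction, with the constraint 1/2 <= muR/muF <= 2, at xQ = 1/2.
    values = []
    for k_r, k_f in product((0.5, 1.0, 2.0), repeat=2):
        if 0.5 <= k_r / k_f <= 2.0:           # keeps 7 of the 9 points
            values.append(spectrum(k_r * m_h, k_f * m_h, 0.5))
    # Resummation-scale variation by a factor of 3/2 around xQ = 1/2,
    # i.e. Q in [m_h/3, 3 m_h/4], at central muR = muF = m_h.
    for x_q in (1.0 / 3.0, 3.0 / 4.0):
        values.append(spectrum(m_h, m_h, x_q))
    return min(values), max(values)           # envelope band
```

The total error band at each p_t value is then the envelope (min, max) returned above, in line with the definition given at the beginning of this discussion.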
§ CONCLUSIONS In this article we presented a formulation of the momentum-space resummation for global, recursive infrared and collinear safe observables that vanish far from the Sudakov limit because of kinematic cancellations implicit in the observable's definition. In particular, we studied the class of inclusive observables that do not depend on the rapidity of the QCD radiation. Members of this class are, among others, the transverse momentum of a heavy colour singlet and the ϕ^* observable in Drell-Yan pair production. We obtained an all-order formula that is valid for all observables belonging to this class, and we explicitly evaluated it to N^3LL up to effects due to the yet unknown four-loop cusp anomalous dimension.In the case of the transverse momentum of a colour singlet, we proved that our formulation is equivalent to the more common solution in impact-parameter space at this accuracy. This evidence is also supported by the numerous checks that we have documented. This equivalence allowed us to extract the ingredients necessary to compute the Sudakov radiator at N^3LL using the recently computed B^(3) coefficient <cit.>. The radiator is universal for all observables of this class <cit.>, which can therefore be resummed to this accuracy with our approach. The all-order result was shown to reproduce the correct power-like scaling in the small-p_t limit, where the perturbative component of the coefficient of the intercept can be systematically improved by including higher-order logarithmic corrections. We implemented our results in the exclusive generator RadISH, which performs the resummation and the matching to fixed order, and allows the user to apply arbitrary kinematic cuts on the Born phase space. Although we explicitly treated the case of Higgs production, the code developed here can automatically handle any colour-singlet system.As a phenomenological application, we computed the Higgs transverse-momentum spectrum at the LHC. In comparison to the NNLL+NLO prediction, we find that N^3LL+NLO effects are moderate in size, and lead to O(10%) corrections near the peak of the distribution and they are somewhat larger for p_t ≲ 10 GeV. The scale uncertainty of the matched calculation is reduced by the inclusion of the N^3LL corrections in the small transverse-momentum region. When matched to NNLO, the effect of the N^3LL is pushed towards lower p_t values, leading to a few percent correction to the previously known NNLL+NNLO prediction <cit.> around the peak, and to more sizeable effects at smaller p_t values. In order to further improve the theoretical control in the small-medium transverse momentum region, it will be necessary to consider the deviations from the large-m_t approximation. Recently, progress has been made in this respect by computing the NLO corrections to the top-bottom interference <cit.>. Higher-order effects due to the leading tower of logarithms of p_t/m_b were addressed in ref. <cit.> and were found to be moderate in size. The procedure for the inclusion of mass effects in the context of transverse-momentum resummation is a debated topic. While some prescriptions are available <cit.>, further studies are necessary to estimate these effects in the logarithmic region at this level of accuracy. § ACKNOWLEDGEMENTSWe wish to thank A. Banfi, C. Bauer, V. Bertone, G. Salam, G. Zanderighi for stimulating discussions on the topics treated here and very valuable comments on the manuscript. We also thank F. 
Caola for providing us with the fixed-order runs for the Higgs transverse-momentum spectrum at NNLO. The work of LR is supported by the European Research Council Starting Grant PDF4BSM; WB is supported by the European Research Council grant 614577 HICCUP (High Impact Cross Section Calculations for Ultimate Precision); and the work of ER is supported by a Marie Skłodowska-Curie Individual Fellowship of the European Commission's Horizon 2020 Programme under contract number 659147 PrecisionTools4LHC. PM would like to thank the Erwin Schrödinger Institute of Vienna for hospitality and support while part of this work was carried out. PT would like to thank CERN's Theoretical Physics Department for hospitality during the development of this work. § CONNECTION WITH THE BACKWARD-EVOLUTION ALGORITHM AT NLL It is interesting to relate our formulation for the transverse-momentum resummation to an NLL-accurate backward-evolution algorithm <cit.>. We start from Eq. (<ref>), which was deduced by considering only flavour-conserving real splitting kernels, for the sake of clarity. We briefly comment on the general flavour case below. After neglecting the effect of the hard and coefficient functions, which starts at NNLL, we recast the NLL partonic cross section as Σ̂_N_1,N_2^c_1, c_2(v) = 1^(c_1,c_2) ∫_0^M d k_t1/k_t1 ∫_0^2π dϕ_1/2π e^-R(ϵ k_t1) exp{-∑_ℓ=1^2 ∫_ϵ k_t1^μ_0 dk_t/k_t α_s(k_t)/π Γ_N_ℓ(α_s(k_t))} ∑_ℓ_1=1^2 (R_ℓ_1'(k_t1) + α_s(k_t1)/π Γ_N_ℓ_1(α_s(k_t1))) ∑_n=0^∞ 1/n! ∏_i=2^n+1 ∫_ϵ^1 dζ_i/ζ_i ∫_0^2π dϕ_i/2π × ∑_ℓ_i=1^2 (R_ℓ_i'(k_ti) + α_s(k_ti)/π Γ_N_ℓ_i(α_s(k_ti))) Θ(v-V({p̃},k_1,…, k_n+1)), where 1^(c_1,c_2) enforces the flavour of the two parton densities to be identical to that entering the Born process, i.e. f^T 1^(c_1,c_2) f = f_c_1 f_c_2. At NLL order, the emission probabilities involve only tree-level splitting functions, whose coupling we evaluate in the CMW scheme, as discussed in Sec. <ref>: α_s(k_t)/π → α^CMW_s(k_t)/π = α_s(k_t)/π (1 + α_s(k_t)/2π K), where K is defined in Eq. (<ref>). In order to perform the inverse Mellin transform of Eq. (<ref>), we observe that, when inverted into z space, each of the real-emission probabilities acts on a generic parton distribution f(x_ℓ_i) as described in Section <ref>: (R_ℓ_i'(k_ti) + α_s(k_ti)/π γ^(0)_N_ℓ_i(α_s(k_ti))) f_N_ℓ_i(μ) → α^CMW_s(k_ti)/π (∫_0^1-k_ti/M d z P^(0)(z) f(μ,x_ℓ_i) + ∫_x_ℓ_i^1 d z P̂^(0)(z)/z f(μ,x_ℓ_i/z)), where we reintroduced the regular terms in the hard-collinear contribution to R'_ℓ, whose z^(ℓ) upper limit was set to 1 in Section <ref>. Similarly, we can now restore the remaining power-suppressed terms in the single-emission probability that we neglected in our discussion of Section <ref>, and recast the right-hand side of Eq. (<ref>) in terms of the unregularised splitting function as[We recall that Eq. (<ref>) in the case of g→ gg splitting also requires an extra symmetry factor of 2 to account for the fact that the total probability to find a gluon with momentum fraction z^(ℓ) is the sum of the probability to find either of the two gluons involved in the branching, as in Eq. (<ref>).] α_s^CMW(k_ti)/π ∫_x_ℓ_i^1-k_ti/M d z P^(0)(z)/z f(μ,x_ℓ_i/z). We furthermore introduce the shower Sudakov form factor Δ(Q_i), which at NLL reads Δ(Q_i) = exp{-∑_ℓ=1^2 ∫^Q_i_ϵ k_t1 d k_t/k_t ∫_0^1-k_t/M d z^(ℓ) α^CMW_s(k_t)/π P^(0)(z^(ℓ))}, such that Δ(M) = exp{-R_NLL(ϵ k_t1)} up to non-logarithmic terms included in Δ but not in exp{-R}. As shown in the main text, in the all-order picture, the correct z^(ℓ) bounds for each emission depend on the radiation that was emitted before it.
Following the discussion of Section <ref>, however, we recall that these effects contribute beyond NLL accuracy, and therefore can be neglected in the present case. We then plug Eq. (<ref>) into Eq. (<ref>) and perform the inverse Mellin transform as just described, obtaining dΣ(v)/dΦ_B = d|M_B|_c_1c_2^2/dΦ_B × ∫_0^M d k_t1/k_t1 ∫_0^2π dϕ_1/2π Δ(M)/Δ(k_t1) ∑_ℓ_1=1^2 ∫_x_ℓ_1^1-k_t1/M d z_1^(ℓ_1) α^CMW_s(k_t1)/π P^(0)(z_1^(ℓ_1))/z_1^(ℓ_1) ∑_n=0^∞ 1/n! ∏_i=2^n+1 ∫_ϵ^1 dζ_i/ζ_i ∫_0^2π dϕ_i/2π × Δ(k_t(i-1))/Δ(k_ti) ∑_ℓ_i=1^2 ∫_w_ℓ_i^1-ζ_i k_t1/M d z α^CMW_s(k_ti)/π P^(0)(z)/z f_c_1(ϵ k_t1,x̅_1) f_c_2(ϵ k_t1,x̅_2) × Θ(v-V({p̃},k_1,…, k_n+1)), with Δ(ϵ k_t1) = 1 and w_ℓ_i = x_ℓ_i/(∏_j=1, ℓ_j = ℓ_i^i-1 z_j^(ℓ_j)), x̅_1 = x_1/(∏_j=1, ℓ_j = 1^n+1 z_j^(ℓ_j)), x̅_2 = x_2/(∏_j=1, ℓ_j = 2^n+1 z_j^(ℓ_j)). We stress again that the z_i^(ℓ) limits in Eq. (<ref>) are obtained in the approximation of soft kinematics, which is valid at NLL accuracy. To implement Eq. (<ref>) in a Markov process we can now impose an ordering in the transverse momentum of the emissions, which amounts to performing the following replacement in Eq. (<ref>) (we recall that ζ_i=k_ti/k_t1): 1/n! ∏_i=2^n+1 ∫_ϵ^1 dζ_i/ζ_i → ∫_ϵ^1 dζ_2/ζ_2 ∫_ϵ^ζ_2 dζ_3/ζ_3 … ∫_ϵ^ζ_n dζ_n+1/ζ_n+1. With this replacement, Eq. (<ref>) reproduces the backward-evolution equation for a shower of primary gluons emitted off the two initial-state legs (see e.g. Eq. (49) of ref. <cit.>), ordered in transverse momentum. The only relevant difference from the common parton-shower formulation is that, unlike a parton shower, Eq. (<ref>) does not contain a no-emission event. This term is indeed infinitely suppressed in our case and therefore it does not contribute to the final result. As a consequence, the cutoff (represented by ϵ k_t1 in our formula) is replaced by a fixed cut Q_0 on the transverse momentum of the emissions. In order for Eq. (<ref>) to be NLL accurate for the transverse-momentum distribution, the recoil of all initial-state emissions must be entirely absorbed by the colour singlet. This shows that a branching algorithm for initial-state radiation that fulfils the above conditions is NLL accurate for this observable (see also <cit.>). Analogous considerations apply to other rIRC safe, global observables of the type (<ref>). To extend the above discussion to the generic flavour case, one is forced to relax the assumption of k_t ordering in order to implement the above solution in a Markov-chain Monte-Carlo program.[We are grateful to A. Banfi for a discussion about this aspect.] Indeed, if some soft radiation occurs after the flavour-changing collinear emission has taken place, then it becomes quite cumbersome to determine the correct colour factor for the former. This is because coherence guarantees that a soft gluon feels the effective colour charge of the radiation at smaller angles, which now may involve combinations of different flavours. A correct solution to this problem requires reformulating the evolution by ordering the radiation in angle. This ensures that the hard-collinear emissions contributing to the DGLAP evolution occur last (see also the discussion in Appendix E.2 of ref. <cit.>), and the colour structure of the soft radiation is easily determined. It is possible to show that the backward-evolution algorithm reproduces the resulting evolution formula in that case as well, and it is therefore NLL accurate. § ANALYTIC FORMULAE FOR THE N^3LL RADIATOR In this Appendix we report the expressions for some of the quantities used in the article.
The RGE equation for the QCD coupling readsdα_s(μ)/dlnμ^2 = β(α_s) ≡ -α_s( β_0 α_s +β_1α_s^2 +β_2 α_s^3 +β_3 α_s^4 + …).The coefficients of the β function (with n_f active flavours) areβ_0 = 11 C_A - 2 n_f/12π ,β_1 = 17 C_A^2 - 5 C_A n_f - 3 C_F n_f/24π^2 , β_2= 2857 C_A^3+ (54 C_F^2 -615C_F C_A -1415 C_A^2)n_f+(66 C_F +79 C_A) n_f^2/3456π^3 , β_3= 1/(4π)^4{C_A C_F n_f^2 1/4(17152/243 + 448/9ζ_3) +C_A C_F^2 n_f 1/2(-4204/27 + 352/9ζ_3)+ 53/243 C_A n_f^3 + C_A^2 C_F n_f1/2(7073/243 - 656/9ζ_3) +C_A^2 n_f^2 1/4(7930/81 + 224/9ζ_3)+ 154/243 C_F n_f^3 +C_A^3 n_f 1/2(-39143/81 + 136/3ζ_3) + C_A^4 (150653/486 - 44/9ζ_3)+ C_F^2 n_f^2 1/4(1352/27 - 704/9ζ_3 ) + 23 C_F^3 n_f + n_f d_F^abcdd_A^abcd/N_A(512/9 - 1664/3ζ_3)+ n_f^2d_F^abcdd_F^abcd/N_A(-704/9 + 512/3ζ_3) + d_A^abcdd_A^abcd/N_A(-80/9 + 704/3ζ_3)} ,whered_F^abcdd_F^abcd/N_A = N_c^4 - 6 N_c^2 + 18/96 N_c^2, d_F^abcdd_A^abcd/N_A = N_c(N_c^2 + 6)/48, d_A^abcdd_A^abcd/N_A = N_c^2 (N_c^2 + 36)/24,and C_A = N_c, C_F = N_c^2-1/2N_c, and N_c = 3.The lowest-order regularised Altarelli-Parisi splitting functions in four dimensions areP̂^(0)_qq(z) = C_F[1+z^2/(1-z)_++3/2δ(1-z)],P̂^(0)_qg(z) = 1/2[z^2+(1-z)^2],P̂^(0)_gq(z) = C_F1+(1-z)^2/z,P̂^(0)_gg(z) = 2C_A[z/(1-z)_++1-z/z+z(1-z)]+2πβ_0δ(1-z),where the plus prescription is defined as∫_0^1dz f(z)/(1-z)_+=∫_0^1dz f(z)-f(1)/1-z.The corresponding unregularised Altarelli-Parisi splitting functions in four dimensions areP^(0)_qq(z) = C_F1+z^2/1-z, P^(0)_qg(z) = 1/2[z^2+(1-z)^2], P^(0)_gq(z) = C_F1+(1-z)^2/z, P^(0)_gg(z) = C_A[z/1-z+1-z/z+z(1-z)] → C_A[2z/1-z+z(1-z)],where in the last step we exploited the symmetry of the P^(0)_gg(z) splitting functionin z → 1-z.Next we report the functions that enter the definition of the Sudakov radiator (Eq. (<ref>)) up to NNLL. To simplify the notation we set λ=α_s(μ_R)β_0 L. 
They readg_1(α_s L) = A^(1)/πβ_02 λ +ln (1-2 λ )/2λ, g_2(α_sL) = 1/2πβ_0ln (1-2 λ ) (A^(1)ln1/x_Q^2+B^(1)) -A^(2)/4 π ^2 β_0^22 λ +(1-2 λ ) ln (1-2 λ )/1-2 λ+A^(1)(-β_1/4 πβ_0^3ln (1-2 λ ) ((2 λ -1) ln (1-2 λ )-2)-4 λ/1-2 λ-1/2 πβ_0(2 λ(1 -ln (1-2 λ ))+ln (1-2 λ ))/1-2λlnμ_R^2/x_Q^2 M^2) , g_3(α_sL) = (A^(1)ln1/x_Q^2+B^(1)) (-λ/1-2 λlnμ _R^2/x_Q^2M^2+β_1/2 β_0^22 λ+ln (1-2 λ )/1-2 λ) -1/2 πβ_0λ/1-2λ(A^(2)ln1/x_Q^2+B^(2))-A^(3)/4 π ^2 β_0^2λ ^2/(1-2λ )^2 +A^(2)(β_1/4 πβ_0^3 2 λ(3λ -1)+(4 λ -1) ln (1-2 λ )/(1-2 λ)^2-1/πβ_0λ ^2 /(1-2 λ )^2lnμ_R^2/x_Q^2 M^2)+A^(1)(λ(β_0β_2 (1-3 λ)+β_1^2 λ)/β_0^4 (1-2 λ)^2+(1-2 λ) ln (1-2 λ ) (β_0β_2 (1-2 λ )+2 β_1^2 λ)/2β_0^4 (1-2 λ)^2+β_1^2/4 β_0^4(1-4 λ ) ln ^2(1-2 λ )/(1-2 λ)^2-λ ^2 /(1-2 λ)^2ln ^2μ_R^2/x_Q^2 M^2 -β_1/2 β_0^2(2 λ(1-2 λ)+(1-4 λ) ln (1-2 λ ))/(1-2λ )^2lnμ_R^2/x_Q^2 M^2).The new N^3LL g_4 coefficient readsg_4(α_s L) =A^(4) (3-2 λ ) λ ^2/24 π ^2 β_0^2 (2 λ -1)^3+ A^(3)/48 πβ_0^3 (2 λ -1)^3{3 β_1 (1-6 λ ) ln (1-2 λ )+2 λ(β_1 (5 λ(2 λ -3)+3)+6 β_0^2 (3-2 λ ) λlnμ_R^2/x_Q^2M^2)+12 β_0^2 (λ -1) λ (2 λ -1) ln1/x_Q^2} + A^(2)/24β_0^4 (2 λ -1)^3{32 β_0 β_2 λ ^3-2 β_1^2 λ (λ(22 λ -9)+3) +12 β_0^4 (3-2 λ ) λ ^2 ln^2μ_R^2/x_Q^2M^2+6 β_0^2 lnμ_R^2/x_Q^2M^2× (β_1 (1-6 λ ) ln (1-2λ )+2 (λ -1) λ(2 λ -1) (β_1+2 β_0^2 ln1/x_Q^2)) +3 β_1 (β_1 ln (1-2λ ) (2 λ +(6 λ -1) ln (1-2 λ )-1) -2 β_0^2 (2 λ -1) (2(λ -1) λ -ln (1-2 λ )) ln1/x_Q^2)} + πA^(1)/12 β_0^5 (2 λ -1)^3{β_1^3 (1-6 λ ) ln ^3(1-2 λ )+3 ln (1-2 λ )(β_0^2 β_3 (2 λ -1)^3 +β_0 β_1β_2 (1-2 λ(8 λ ^2-4 λ +3))+4 β_1^3λ ^2 (2 λ +1) +β_0^2 β_1 lnμ_R^2/x_Q^2M^2(β_0^2 (1-6 λ ) lnμ_R^2/x_Q^2M^2-4 β_1 λ)) +3 β_1^2 ln ^2(1-2 λ) (2 β_1 λ +β_0^2 (6 λ -1) lnμ_R^2/x_Q^2M^2) +3 β_0^2 (2 λ -1) ln1/x_Q^2(-β_1^2 ln ^2(1-2 λ ) +2 β_0^2β_1 ln (1-2 λ ) lnμ_R^2/x_Q^2M^2 +4 λ(λ(β_1^2-β_0 β_2)+β_0^4(λ -1) ln ^2μ_R^2/x_Q^2M^2)) +2 λ(β_0^2 β_3 ((15-14 λ )λ -3)+β_0 β_1 β_2 (5 λ(2 λ -3)+3) +4β_1^3 λ ^2+2 β_0^6 (3-2 λ ) λln^3μ_R^2/x_Q^2M^2+3 β_0^4 β_1 ln^2μ_R^2/x_Q^2M^2 +6 β_0^2 λ(2 λ +1)(β_0 β_2-β_1^2) lnμ_R^2/x_Q^2M^2-8 β_0^6 (4 λ ^2-6 λ +3) ζ_3)} + B^(3) (λ -1) λ/4 πβ_0 (1-2 λ )^2+ B^(2)(β_1 ln (1-2 λ )-2 (λ -1) λ(β_1-2 β_0^2 lnμ_R^2/x_Q^2M^2))/4β_0^2(1-2λ )^2 + πB^(1)/4 β_0^3 (1-2 λ )^2{4 λ(λ(β_1^2-β_0 β_2)+β_0^4 (λ -1) ln^2μ_R^2/x_Q^2M^2) -β_1^2 ln ^2(1-2 λ )+2 β_0^2 β_1ln (1-2 λ ) lnμ_R^2/x_Q^2M^2}.§ ADDITIONAL CONSIDERATIONS ON THE SMALL-P_T SCALINGIn this Appendix we discuss further the analysis of the p_t→ 0 limit of the differential cross section carried out in Sec. <ref>.Specifically, following Ref. <cit.>, we have made a number of approximations to derive the scaling of the integral (<ref>) with Λ^2_ QCD/M^2. Such approximations, however, are too rough to capture the correct O(1) normalisation of the scaling, and in this appendix we quantify the difference from the correct result obtained by directly integrating Eq. (<ref>). For the sake of convenience, we define the quantity Δ≡lim_p_t → 01/σ^(0)(Φ_B) d^2Σ(v)/p_t d p_t dΦ_B≡lim_p_t → 0d σ̅/p_t d p_t , where d^2Σ(v)/p_t d p_t dΦ_B was obtained in Eq. (<ref>).The p_t→ 0 limit simply corresponds to setting J_0(p_t b) = 1 + 𝒪(p_t^2), leading to Δ= ∫bd b∫d k_t1/k_t1 e^-R(k_t1)R'(k_t1) J_0(b k_t1)×exp{-R'(k_t1) ∫^k_t1_0d k_t/k_t (1-J_0(b k_t))}.The result given in Eq. (<ref>) and (<ref>) has been obtained by further approximating the integral in Eq. (<ref>) with its asymptotic behaviour. While this approximation is sufficient to capture the correct scaling of the cross section at p_t=0, it leads to an inaccurate estimate of the O(1) normalisation. 
For comparison, we then recall Eq. (<ref>): Δ_approx. = 4 ∫ d k_t1/k_t1^3 e^-R(k_t1). To numerically quantify the difference between Eqs. (<ref>) and (<ref>), we consider the case of Z production with m_Z = 91.1876 GeV. We evaluate Eq. (<ref>) and Eq. (<ref>) numerically, and we compare them with the prediction obtained with the code. For the sake of simplicity, we use everywhere the LL expressions for R and its derivative R'. To regulate the b → 0 (+∞) limits in Eq. (<ref>) we set a lower (upper) limit of b_0/m_Z (b_0/Q_0) in the b integral, where Q_0 is the Landau singularity of the integrand, which reads Q_0 = m_Z e^-1/2β_0. Correspondingly, we integrate over k_t1 in both Eqs. (<ref>) and (<ref>) between Q_0 and m_Z, and we use the same limits in the numerical results obtained with the code. The results are displayed in Fig. <ref>, where we observe that the numerical result obtained with the code converges in the p_t → 0 limit to the result given in Eq. (<ref>). Both results differ significantly from the approximated result of Eq. (<ref>), which overestimates the normalisation by more than a factor of three. We observe that such a difference can also be obtained by computing the ratio of Eq. (<ref>) to the p_t → 0 limit of the original b-space formulation.[We thank G. Salam for useful discussions on this topic.]
http://arxiv.org/abs/1705.09127v5
{ "authors": [ "Wojciech Bizon", "Pier Francesco Monni", "Emanuele Re", "Luca Rottoli", "Paolo Torrielli" ], "categories": [ "hep-ph", "hep-ex" ], "primary_category": "hep-ph", "published": "20170525112257", "title": "Momentum-space resummation for transverse observables and the Higgs $p_\\perp$ at N$^3$LL+NNLO" }
Bartłomiej Dudek (University of Wrocław, Poland), Adrian Kosowski (Inria Paris, France; Corresponding Author, Email: [email protected]) Universal Protocols for Information Dissemination Using Emergent Signals =========================================================================== We consider a population of n agents which communicate with each other in a decentralized manner, through random pairwise interactions. One or more agents in the population may act as authoritative sources of information, and the objective of the remaining agents is to obtain information from or about these source agents. We study two basic tasks: broadcasting, in which the agents are to learn the bit-state of an authoritative source which is present in the population, and source detection, in which the agents are required to decide if at least one source agent is present in the population or not. We focus on designing protocols which meet two natural conditions: (1) universality, i.e., independence of population size, and (2) rapid convergence to a correct global state after a reconfiguration, such as a change in the state of a source agent. Our main positive result is to show that both of these constraints can be met. For both the broadcasting problem and the source detection problem, we obtain solutions with a convergence time of O(log^2 n) rounds, w.h.p., from any starting configuration. The solution to broadcasting is exact, which means that all agents reach the state broadcast by the source, while the solution to source detection admits one-sided error on an ε-fraction of the population (which is unavoidable for this problem). Both protocols are easy to implement in practice and have a compact formulation. Our protocols exploit the properties of self-organizing oscillatory dynamics. On the hardness side, our main structural insight is to prove that any protocol which meets the constraints of universality and of rapid convergence after reconfiguration must display a form of non-stationary behavior (of which oscillatory dynamics are an example). We also observe that the periodicity of the oscillatory behavior of the protocol, when present, must necessarily depend on the number X of source agents present in the population. For instance, our protocols inherently rely on the emergence of a signal passing through the population, whose period is Θ(log(n/X)) rounds for most starting configurations. The design of clocks with tunable frequency may be of independent interest, notably in modeling biological networks. Key words: Gossiping, Epidemic processes, Oscillatory dynamics, Emergent phenomena, Population protocols, Broadcasting, Distributed clock synchronization. § INTRODUCTION Information-spreading protocols, and more broadly epidemic processes, appear in nature, social interactions between humans, as well as in man-made technology, such as computer networks. For some protocols we have a reasonable understanding of the extent to which the information has already spread, i.e., we can identify where the information is located at a given step of the process: we can intuitively say which nodes (or agents) are “informed” and which nodes are “uninformed”. This is the case for usual protocols in which uninformed agents become informed upon meeting a previously informed agent (cf. e.g. mechanisms of rumor spreading and opinion spreading models studied in the theory community <cit.>).
Arguably, most man-made networking protocols for information dissemination also belong to this category.By contrast, there exists a broad category of complex systems for which it is impossible to locate which agents have acquired some knowledge, and which are as yet devoid of it. In fact, the question of “where the information learned by the system is located” becomes somewhat fuzzy, as in the case of both biological and synthetic neural networks. In such a perspective, information (or knowledge) becomes a global property of the entire system, whereas the state of an individual agent represents in principle its activation, rather than whether it is informed or not. As such, knowledge has to be treated as an emergent property of the system, i.e., a global property not resulting directly from the local states of its agents. The convergence from an uninformed population to an informed population over time is far from monotonous. Even so, once some form of “signal” representing global knowledge has emerged, agents may try to read and copy this signal into their local state, thus each of them eventually also becomes informed. At a very informal conceptual level, we refer to this category of information-dissemination protocols as protocols with emergent behavior. At a more technical level, emergent protocols essentially need to rely on non-linear dynamical effects, which typically include oscillatory behavior, chaotic effects, or a combination of both. (This can be contrasted with simple epidemic protocols for information-spreading, in which nodes do not become deactivated.)This work exhibits a simple yet fundamental information-spreading scenario which can only be addressed efficiently using emergent protocols. Both the efficient operation of the designed protocols, and the need for non-stationary dynamical effects in any efficient protocol for the considered problems, can be formalized through rigorous theoretical analysis. Our goal in doing this is twofold: to better understand the need for emergent behavior in real-world information spreading, and to display the applicability of such protocols in man-made information spreading designs. For the latter, we describe an interpretation of information as a (quasi-)periodic signal, which can be both decoded from states of individual nodes, and encoded into them. §.§ Problems and Model We consider a population of n identical agents, each of which may be in a constant number of possible states. Interactions between agents are pairwise and random. A fair scheduler picks a pair of interacting agents independently and uniformly at random in each step. The protocol definition is provided through a finite sequence of state transition rules, following the precise conventions of the randomized Population Protocol model <cit.> or (equivalently) of Chemical Reaction Networks <cit.>.[The activation model is thus asynchronous. The same protocols may be deployed in a synchronous setting, with scheduler activations following, e.g., the independent random matching model (with only minor changes to the analysis) or the PULL model <cit.> (at the cost of significantly complicating details of the protocol formulation).]The input to the problem is given by fixing the state of some subset of agents, to some state of the protocol, which is not available to any of the other agents. 
Intuitively, the agents whose state has been fixed are to be interpreted as authoritative sources of information, which is to be detected and disseminated through the network (i.e., as the rumor source node, broadcasting station, etc.). For example, the problem of spreading a bit of information through the system is formally defined below. Problem BitBroadcast. Input States: X_1, X_2. Promise: The population contains a non-zero number of agents in exactly one of the two input states {X_1, X_2}. Question: Decide if the input state present in the population is X_1 or X_2. We can, e.g., consider that the transmitting station (or stations) choose whether to be in state X_1 or X_2 in a way external to the protocol, and thus transmit the “bit” value 1 or 2, respectively, through the network. Broadcasting a bit is one of the most fundamental networking primitives. The definition of the population protocol includes a partition of the set of states of the protocol into subsets corresponding to the possible answers to the problem. When the protocol is executed on the population, the output of each agent may be read at every step by checking, for each agent, whether its state belongs to the subset of output states associated with a given answer (in this case, the answer of the agent will be the “bit” it has learned, i.e., 1 or 2). We will call a protocol exact if it eventually converges to a configuration such that, starting from this configuration, all agents always provide the correct answer. We will say it operates with ε-error, for a given constant ε>0, if, starting from some step, at any given step of the protocol at most an ε-fraction of the population holds the incorrect answer, with probability 1 - O(1/n). Time is measured in steps of the scheduler, with n time steps called a round, so that the expected number of activations of each agent per round is a constant. Our objective is to design protocols which converge to the desired outcome rapidly. Specifically, a protocol is expected to converge in O(log n) rounds (i.e., in O(n log n) steps), with probability 1 - O(1/n), starting from any possible starting configuration of states in the population conforming to the promise of the problem.[We adhere to this strong requirement for self-stabilizing (or self-organizing) behavior from any initial configuration in the design of our protocols. The presented impossibility results still hold under significantly weaker assumptions.] Motivated both by applications and by a need for a better understanding of the broadcasting problem, we also consider a variant of the broadcasting problem in which no promise on the presence of the source is given. This problem, called Detection, is formally defined below. Problem Detection. Input State: X. Question: Decide if at least one agent in state X is present in the population. Detection of the presence of a source is a task which is not easier than broadcasting a bit. Indeed, any detection protocol is readily converted into a broadcasting protocol for states {X_1, X_2} by identifying X = X_1 and treating X_2 as a dummy state which does not enter into any interactions (i.e., is effectively not visible in the network). Intuitively, the detection task in the considered setting is much harder: a source X may disappear from the network at any time, forcing other agents to spontaneously “unlearn” the outdated information about the presence of the source.
This property is inherently linked to the application of the Detection problem in suppressing false rumors or outdated information in social interactions. Specifically, it may happen that part of the population finds itself in an informed state before the original rumor source is identified as a source of false information; a false rumor may be propagated accidentally because of an agent which previously changed state from “uninformed” to “informed” due to a fault or miscommunication; or the rumor may contain information which is no longer true. Similar challenges with outdated information and/or false-positive activations are faced in Chemical Reaction Networks, e.g. in DNA strand displacement models <cit.>. In that context, the detection problem has the intuitive interpretation of detecting if a given type of chemical or biological agent (e.g., a contaminant, cancer cell, or hormonal signal) is present in the population, and spreading this information among all agents. §.§ Our Results In Section <ref>, we show that both the BitBroadcast and the Detection problems can be solved with protocols which converge in O(log^2 n) rounds to an outcome, with probability 1-O(1/n), starting from any configuration of the system. The solution to BitBroadcast guarantees a correct output. The solution to Detection admits one-sided ε-error: in the absence of a source, all agents correctly identify it as absent, whereas when the source is present, at any moment of time after convergence the probability that at least (1-ε)n agents correctly identify the source as present is at least 1 - O(1/n).[The existence of one-sided error is inherent to the Detection problem in the asynchronous setting: indeed, if no agent of the population has communicated with the source over an extended period of time, it is impossible to tell for sure if the source has completely disappeared from the network, or if it is simply not being selected for interaction by the random scheduler.] Here, ε>0 is a constant influencing the protocol design, which can be made arbitrarily small. The designed protocols rely on the same basic building block, namely, a protocol realizing oscillatory dynamics at a rate controlled by the number of source agents present in the population. Thus, these protocols display non-stationary behavior. In Section <ref>, we show that such behavior is a necessary property in the following sense. We prove that in any protocol which solves Detection in sub-polynomial time in n and which uses a constant number of states, the number of agents occupying some state has to undergo large changes: by a polynomially large factor in n during a time window of length proportional to the convergence time of the protocol. For the BitBroadcast problem, we show that similar volatile behavior must appear in a synthetic setting in which a unique source is transmitting its bit as random noise (i.e., selecting its input state {X_1,X_2} uniformly at random in subsequent activations). We note that, informally speaking, our protocols rely on the emergence of a “signal” passing through the population, whose period is Θ(log(n/X)) rounds when the number of agents in the source state is X. In Section <ref>, we then discuss how the behavior of any oscillatory-type protocol controlled by the existence of X has to depend on both n and X.
We prove that for any such protocol with rapid convergence, the cases of subpolynomial X and X = Θ(n) can be separated by looking at the portion of the configuration space regularly visited by the protocol. This, in particular, suggests the nature of the dependence of the oscillation period on the precise value of X, and that the protocols we design with period Θ(log(n/X)) are among the most natural solutions to the considered problems. The proofs of all theorems are deferred to the closing sections of the paper. §.§ Comparison to the State-of-the-Art Our work fits into lines of research on rumor spreading, opinion spreading, population protocols and other interaction models. We provide a more comprehensive literature overview of some of these topics in Subsection <ref>. Other work on the problems. The BitBroadcast problem has been previously considered by Boczkowski, Korman, and Natale in <cit.>, in a self-stabilizing but synchronous (round-based) setting. A protocol solving their problem was presented, giving a stabilized solution in Õ(log n) time, using a number of states of agents which depends on population size, but exchanging messages of bit size O(1) (this assumption can be modeled in the population protocol framework as a restriction on the permitted rule set). In this sense, our result can be seen as providing improved results with respect to their approach, since it is applicable in an asynchronous setting and reduces the number of states to a constant (the latter question was open <cit.>). We remark that their protocol has a more general application to the problem of deciding which of the sources X_1 or X_2 is represented by a larger number of agents, provided these two numbers are separated by a multiplicative constant. (Our approach could also be used in such a setting, but the required separation of agent numbers to ensure a correct output would have to be much larger: we can compare values if their logarithms are separated by a multiplicative constant.) The protocol of <cit.> involves a routine which allows the population to create a synchronized modulo clock, working in a synchronous setting. The period of this clock is independent of the input states of the protocol, which should be contrasted with the oscillators we work on in this paper, which encode the input into a signal (with a period depending on the number of agents in a given input state). The Detect problem was introduced in a work complementary to this paper <cit.>. Therein we look at applications of the confirmed rumor spreading problem in DNA computing, focusing on the performance of protocols based on a time-to-live principle and on issues of fault tolerance in a real-world model with leaks. The protocols designed there require O(log n) states and, while self-stabilizing, do not display emergent behavior (in particular, agents can be categorized as “informed” and “uninformed”, the number of correctly informed agents tends to increase over time, and the corresponding continuous dynamical system stabilizes to a fixed point attractor). Originality of methods. The oscillatory dynamics we apply rely on an input-parameter-controlled oscillator. The uncontrolled version of the oscillator which we consider here is the length-3 oscillator of the cyclic type, known in population dynamics under the name of rock-paper-scissors (or RPS). This has been studied intensively in the physics and evolutionary dynamics literature (cf. e.g.
<cit.> for a survey), while algorithmic studies are relatively scarce <cit.>. We remark that the uncontrolled cyclic oscillator with a longer (but O(1)-length) cycle has been applied for clock/phase synchronization in self-stabilizing settings and, very recently, in the population protocol setting when resolving the leader election problem <cit.>. (The connection to oscillatory dynamics is not made explicit, and the longer cycle provides for a neater analysis, although it does not seem to be applicable to our parameter-controllable setting.) Whereas we are not aware of any studies of parameter-controlled oscillators in a protocol design setting (nor, for that matter, of rigorous studies in other fields), we should note that such oscillators have frequently appeared in models of biological systems, most notably in biological networks and neuroendocrinology (cf. <cit.> for a survey). Indeed, some hormone release and control mechanisms (e.g., for controlling GnRH surges in vertebrates) appear to follow a similar pattern. To the best of our knowledge, no computational (i.e., interaction-protocol-based) explanation for these mechanisms has yet been proposed, and we hope that our work, specifically on the Detect problem, may provide some insights in this direction. In terms of lower bounds, we rely on rather tedious coupling techniques for protocols allowing randomization, and many of the details are significantly different from lower-bound techniques found in the population protocols literature. We remark that a recent line of work in this area <cit.> provides a powerful set of tools for proving lower bounds on the number of states (typically Ω(loglog n) states) for fast (typically polylogarithmic) population protocols for different problems, especially for the case of deterministic protocols. We were unable to leverage these results to prove our lower bound for the randomized scenario studied here, and believe our coupling analysis is complementary to their results. §.§ Other Related Work Our work fits into the line of research on rumor spreading, population protocols, and related interaction models. Our work also touches on the issue of how distributed systems may spontaneously achieve some form of coordination with minimum agent capabilities. The basic work in this direction, starting with the seminal paper <cit.>, focuses on synchronizing timers through asynchronous interprocess communication to allow processes to construct a total ordering of events. A separate interesting question concerns local clocks which, on their own, have some drift, and which need to synchronize in a network environment (cf. e.g. <cit.>, or <cit.> for a survey of open problems). Rumor spreading. Rumor spreading protocols are frequently studied in a synchronous setting. In a synchronous protocol, in each parallel round, each vertex independently at random activates a local rule, which allows it either to spread the rumor (if it is already informed), or possibly also to receive it (if it has not yet been informed, as is the case in the push-pull model). The standard push rumor spreading model assumes that each informed node calls exactly one randomly chosen neighbor. In the basic scenario, corresponding to the complete interaction network, the number of parallel rounds for a single rumor source to inform all other nodes is given as log_2 n + ln n + o(log n), with high probability <cit.>.
More general graph scenarios have been studied in <cit.> in the context of applications in broadcasting information in a network. Graph classes studied for the graph model include hypercubes <cit.>, expanders <cit.>, and other models of random graphs <cit.>. The push-pull model of rumor spreading is an important variation: whereas for complete networks the speedup due to the pull process is in the order of a multiplicative constant <cit.>, the speed up turns out to be asymptotic, e.g., on preferential attachment graphs, where the rumor spreading time is reduced from Θ(log n) rounds in the push model to Θ(log n / loglog n) rounds in the push-pull model <cit.>, as well as on other graphs with a non-uniform degree distribution. The push-pull model often also proves more amenable to theoretical analysis. We note that asynchronous rumor spreading on graphs, in models closer to our random scheduler, has also been considered in recent work <cit.>, with <cit.> pointing out the tight connections between the synchronous (particularly push-pull) and asynchronous models in general networks. Population protocols. Population protocols are a model which captures the way in which the complex behavior of systems (biological, sensor nets, etc.) emerges from the underlying local interactions of agents. The original model of Angluin et al. <cit.> was motivated by applications in sensor mobility. Despite the limited computational capabilities of individual sensors, such protocols permit at least (depending on available extensions to the model) the computation of two important classes of functions: threshold predicates, which decide if the weighted average of types appearing in the population exceeds a certain value, and modulo remainders of similar weighted averages. The majority function, which belongs to the class of threshold functions, was shown to be stably computable for the complete interaction graph <cit.>; further results in the area of majority computation can be found in <cit.>. A survey of applications and models of population protocols is provided in <cit.>. An interesting line of research is related to studies of the algorithmic properties of dynamics of chemical reaction networks <cit.>. These are as powerful as population protocols, though some extensions of the chemical reaction model also allow the population size to change in time. Two very recent results in the population protocol model are worthy of special attention. Alistarh, Aspnes, and Gelashvili <cit.> have resolved the question of the number of states required to solve the Majority problem on a complete network in polylogarithmic time as Θ(log n). For the equally notable task of Leader Election, the papers of Gasieniec and Stachowiak <cit.> (for the upper bound) together with the work of Alistarh, Aspnes, Eisenstat, Gelashvili, and Rivest <cit.> (for the lower bound) put the number of states required to resolve this question in polylogarithmic time as Θ(loglog n). Both of these results rely on a notion of a self-organizing phase clock.Nonlinearity in interaction protocols.Linear dynamical systems, as well as many nonlinear protocols subjected to rigorous analytical study, have a relatively simple structure of point attractors and repellers in the phase space. The underlying continuous dynamics (in the limit of n → +∞) of many interaction protocols defined for complete graphs would fit into this category: basic models of randomized rumor spreading <cit.>; models of opinion propagation (e.g. 
<cit.>); population protocols for problems such as majority and thresholds <cit.>; all reducible Markov chain processes, such as random walks and randomized iterative load balancing schemes.Nonlinear dynamics with non-trivial limit orbits are fundamental to many areas of systems science, including the study of physical, chemical and biological systems, and to applications in control science. In general, population dynamics with interactions between pairs of agents are non-linear (representable as a set of quadratic difference equations) and have potentially complicated structure if the number of states is 3 or more. For example, the simple continuous Lotka-Volterra dynamics <cit.> gives rise to a number of discrete models, for example one representing interactions of the form A + B → A + A, over some pairs A, B of states in a population (cf. <cit.> for further generalizations of the framework or <cit.> for a rigorous analysis in the random scheduler model). The model describes transient stability in a setting in which several species are in a cyclic predator-prey relation. Cyclic protocols of the type have been consequently identified as a potential mechanism for describing and maintaining biodiversity, e.g., in bacterial colonies <cit.>. Cycles of length 3, in which type A_2 attacks type A_1, type A_3 attacks type A_2, and type A_1 attacks type A_3, form the basis of the basic oscillator, also used as the starting point for protocols in this work, which is referred to as the RPS (rock-paper-scissors) oscillator or simply the 3-cycle oscillator, which we discuss further in Section <ref>. This protocol has been given a lot of attention in the statistical physics literature. The original analytical estimation method applied to RPS was based on approximation with the Fokker-Planck equation <cit.>. A subsequent analysis of cyclic 3- and 4-species models using Khasminskii stochastic averaging can be found in <cit.>, and a mean field approximation-based analysis of RPS is performed in <cit.>. In <cit.>, we have performed a study of some algorithmic implications of RPS, showing that the protocol may be used to perform randomized choice in a population, promoting minority opinions, in Õ(n^2) steps. All of these results provide a good qualitative understanding of the behavior of the basic cyclic protocols. We remark that the protocol used in this paper is directly inspired by the properties of RPS, as we discuss further on, but has a more complicated interaction structure (see Fig. <ref>).For protocols with convergence to a single point in the configuration space in the limit of large population size, a discussion of the limit behavior is provided in <cit.>, who provide examples of protocols converging to limit points at coordinates corresponding to any algebraic numbers.We also remark that local interaction dynamics on arbitrary graphs (as opposed to the complete interaction graph) exhibit a much more complex structure of their limit behavior, even if the graph has periodic structure, e.g., that of a grid. Oscillatory behavior may be overlaid with spatial effects <cit.>, or the system may have an attractor at a critical point, leading to simple dynamic processes displaying self-organized criticality (SOC, <cit.>). § PRELIMINARIES: BUILDING BLOCKS FOR POPULATION PROTOCOLS§.§ Protocol Definition A randomized population protocol for a population of n agents is defined as a pair P = (K_n, R_n), where K_n is the set of states and R_n is the set of interaction rules. The interaction graph is complete. 
We will simply write P = (K, R) when considering a protocol which is universal (i.e., defined in the same way for each value of n) or if the value of n is clear from the context. All the protocols we design are universal; our lower bounds also apply to some non-universal protocols. The set of rules R ⊆ K^4 × [0,1] is given so that each rule j ∈ R is of the form j = (i_1(j), i_2(j), o_1(j), o_2(j), q_j), describing an interaction read as: "(i_1(j), i_2(j)) → (o_1(j), o_2(j)) with probability q_j". For all i_1, i_2 ∈ K, we define R_{i_1,i_2} = {j ∈ R : (i_1(j), i_2(j)) = (i_1, i_2)} as the set of rules acting on the pair of states i_1, i_2, and impose that ∑_{j ∈ R_{i_1,i_2}} q_j ≤ 1. For a state A ∈ K, we denote the number of agents in state A simply as A, and the concentration of state A as a = A/n; likewise, for a set of states 𝒜, we write 𝒜 = ∑_{A ∈ 𝒜} A.

In any configuration of the system, each of the n agents from the population is in one of the states from K_n. The protocol is executed by an asynchronous scheduler, which runs in steps. In every step the scheduler chooses uniformly at random from the population a pair of distinct agents to interact: the initiator and the receiver. If the initiator and receiver are in states i_1 and i_2, respectively, then the protocol executes at most one rule from the set R_{i_1,i_2}, selecting rule j ∈ R_{i_1,i_2} with probability q_j. If rule j is executed, the initiator changes its state to o_1(j) and the receiver to o_2(j). The source has a special state, denoted X in the Detect problem, or one of two special states, denoted {X_1, X_2} in the BitBroadcast problem, which is never modified by any rule.

All protocols are presented in the randomized framework; however, the universal protocols considered here are amenable to a form of conversion into deterministic rules discussed in <cit.>, which simulates the randomness of rules by exploiting the inherent randomness of the scheduler in choosing interacting node pairs to distribute weakly dependent random bits around the system.

All protocols designed in this work are initiator-preserving, which means that for any rule j ∈ R, we have o_1(j) = i_1(j) (i.e., all rules are of the form A + B → A + C, also more compactly written as AB → C), which makes them relevant in a larger number of applications. As an illustrative example, we remark that the basic rumor spreading (epidemic) model is initiator-preserving and given simply as 10 → 1. All protocols can also obviously be rewritten to act on unordered pairs of agents picked by the scheduler, rather than ordered pairs.

§.§ Protocol Composition Technique

Our protocols will be built from simpler elements. Our basic building block is the input-controlled oscillatory protocol P_o (see Fig. <ref>). We then use protocol P_o as a component in the construction of other, more complex protocols, without disrupting the operation of the original protocol. Formally, we consider a protocol P_B using state set B = {B_i : 1 ≤ i ≤ k_b} and rule set R_B, and a protocol extension P_BC using a state set B × C = B × {C_i : 1 ≤ i ≤ k_c}, where C is disjoint from B, and rule extension set R_BC. Each rule extension assigns to each pair of states from B × C (i.e., to each element of (B × C) × (B × C)) a probability distribution over elements of C × C. The composed protocol P_B ∘ P_BC is a population protocol with set of states B × C.
Its rules are defined so that, for a selected pair of agents in states (B_i, C_j) and (B_i', C_j'), we obtain a pair of agents in states (B_i^*, C_j^*) and (B_i'^*, C_j'^*) according to a probability distribution defined so that:

* Each pair B_i^*, B_i'^* appears in the output states of the two agents with the same probability as it would in an execution of protocol P_B on a pair of agents in states B_i and B_i'.

* Each pair C_j^*, C_j'^* appears in the output states of the two agents with the probability given by the definition of P_BC.

In the above, the pairs of agents activated by P_B and P_BC are not independent of each other. This is a crucial property when composing protocol P_o with further blocks to solve the Detect problem.

We denote by 1_B the identity protocol which preserves agent states on the set of states B. For a protocol P, we denote by P/2 a lazy version of protocol P in which the rule activation of P occurs with probability 1/2, and with probability 1/2 the corresponding rule of the identity protocol is activated. Note that all asymptotic bounds on expected and w.h.p. convergence time obtained for any protocol P also apply to protocol P/2, in the regime of at least a logarithmic number of time steps. We also sometimes treat a protocol extension P_BC as a protocol in itself, applied to the identity protocol 1_B.

The independently composed protocol P_B + P_BC is defined as an implementation of the composed protocol (P_B) ∘ (P_BC/2), realized with the additional constraint that in each step, either the rule of P_B is performed with an identity rule extension, or the rule extension of P_BC is performed on top of the identity protocol 1_B. Such a definition is readily verified to be correct by a simple coupling argument, and allows us to analyze protocols P_B and P_BC separately, observing that the pairs (identities) of agents activated by the scheduler in the respective protocols are independent.

All the composed protocols (and protocol extensions) we design are also initiator-preserving, i.e., B_i^* = B_i and C_j^* = C_j with probability 1. In our notation, rules omitted from the description of protocol extensions are implicit: they occur with probability 0 when C_j^* ≠ C_j, and with the probability necessary to normalize the distribution to 1 when the state is preserved (C_j^* = C_j).

As a matter of naming convention, we name the states in the separate state sets of the composed protocols with distinct letters of the alphabet, together with their designated subscripts and superscripts. The rumor source X is treated specially and uses a separate letter (and may be seen as a one-state protocol without any rules, on top of which all other protocols are composed; in particular, its state is never modified). The six remaining states of protocol P_o are named with the letter A and appropriate subscripts and superscripts, as usual in its definition. Subsequent protocols will use different letters, e.g., M and L with respective subscripts.

§ OVERVIEW OF PROTOCOL DESIGNS

§.§ Main Routine: Input-Controlled Oscillator Protocol P_o

We first describe the main routine which allows us to convert local input parameters (the existence of the source) into a form of global periodic signal on the population. This main building block is the construction of a 7-state protocol P_o following oscillator dynamics, whose design we believe to be of independent interest. The complete design of protocol P_o is shown in Fig. <ref>. The source state is denoted by X. Additionally, there are six states, called A_i^+ and A_i^++, for i ∈ {1,2,3}.
The naming of states in the protocol is intended to maintain a direct connection with the RPS oscillator dynamics, which is defined by the simple rule "A_i A_{i-1} ↦ A_i, for i=1,2,3". In fact, we will retain the convention A_i = {A_i^+, A_i^++} and a_i = a_i^+ + a_i^++, and consider the two states A_i^+ and A_i^++ to be different flavors of the same species A_i, referring to the respective superscripts as either lazy (^+) or aggressive (^++). The protocol has the property that in the absence of X, it stops in a corner state of the phase space, in which only one of three possible states appears in the population, and otherwise regularly (every O(log n) steps) moves sufficiently far away from all corner states. An intuitive formalization of the basic properties of the protocol is given by the theorem below.

There exists a universal protocol P_o with |K|=7 states, including a distinguished source state X, which has the following properties.

* For any starting configuration, in the absence of the source (X = 0), the protocol always reaches a configuration such that:
* all agents are in the same state: either A_1^++, or A_2^++, or A_3^++;
* no further state transitions occur after this time.
Such a configuration is reached in O(log n) rounds with constant probability (and in O(log^2 n) rounds with probability 1-O(1/n)).

* For any starting configuration, in the presence of the source (X ≥ 1), we have with probability 1-O(1/n):
* for each state i ∈ K, there exists a time step in the next O(log(n/X)) rounds when at least a constant fraction of all agents are in state i;
* during the next O(log(n/X)) rounds, at least a constant fraction of all agents change their state at least once.

The proof of the Theorem is provided in Section <ref>.

The RPS dynamics provides the basic oscillator mechanism, which is still largely retained in our scenario. Most of the difficulty lies in controlling its operation as a function of the presence or absence of the rumor source. We do this by applying two separate mechanisms. The presence of the rumor source X shifts the oscillator towards an orbit closer to the central orbit (A_1, A_2, A_3) = (1/3, 1/3, 1/3) through rule (5), which increases the value of the potential ϕ := ln(a_1 a_2 a_3), where a_i = A_i/n. Conversely, independent of the existence of the rumor source X, a second mechanism is intended to reduce the value of the potential ϕ. This mechanism exploits the difference between the aggressive and lazy flavors of the species. Following rule (1), an agent belonging to a species becomes more aggressive if it meets another from the same species, and subsequently attacks agents from its prey species with doubled probability following rule (4). This behavior in effect favors larger species, since they are expected to have (proportionally) more aggressive agents than the smaller species (in which pairwise interactions between agents of the same species are less frequent): the fraction of agents in A_i which are aggressive would, in an idealized static scenario, be proportional to a_i. (This is, in fact, often far from true due to the interactions between the different aspects of the dynamics.)
As a very loose intuition, the destabilizing effect of the considered rule on the oscillator is reminiscent of the effect an eccentrically fitted weight has on a rotating wheel, pulling the oscillator towards more external orbits (with smaller values of ϕ). The intuition for why the proposed dynamics works, which we will formalize and prove rigorously in Section <ref>, can now be stated as follows: in the presence of the rumor source X, the dynamics will converge to a form of orbit on which the two effects, the stabilizing and the destabilizing one, eventually compensate each other (in a time-averaged sense). The period of a single rotation of the oscillator around such an orbit is between O(1) and O(log n), depending on the concentration of X. In the absence of X, the destabilizing rule will prevail, and the oscillator will quickly crash into a side of the triangle.

For small values of X > 0, the protocol can be very roughly (and non-rigorously) viewed as a cyclic composition of three dominant rumor spreading processes over the three sets of states A_1, A_2, A_3, one converting states A_1 to A_2, the next from A_2 to A_3, and the last from A_3 to A_1, which spontaneously take over at moments of time separated by O(log n) parallel rounds. For other starting configurations, and especially for the case of X = 0, the dynamics of the protocol, which has 5 free dimensions, is more involved to describe and analyze (see Section <ref>). We provide some further insights into the operation of the protocol in Section <ref>, notably formalizing the notion that an intuitively understood oscillation (going from a small number of agents in some state A_i, to a large number of agents in state A_i, and back again to a small number of agents in state A_i) takes Θ(log(n/X)) steps, with probability 1-O(1/n). As such, protocol P_o can be seen as converting the local input X into a global periodic signal with period Θ(log(n/X)). What remains is allowing nodes to extract information from this periodic signal. Simulation timelines shown in Fig. <ref> in the Appendix illustrate the idea of operation of protocol P_o and its composition with other protocols.

§.§ Protocols for BitBroadcast

A solution to BitBroadcast is obtained starting with an independent composition of two copies of oscillator P_o, called P_o[1] and P_o[2], with states in one protocol denoted by subscript [1] and in the other by subscript [2]. The respective sources are thus written as X_[1] and X_[2]. In view of Theorem <ref>, in this composition P_o[1] + P_o[2], under the promise of the BitBroadcast problem, one of the oscillators will be running and the other will stop in a corner of its state space. Which of the oscillators is running can be identified by the presence of states A_i^+[z], which will only appear for the z ∈ {1,2} corresponding to the operating oscillator. Moreover, by the same Theorem, every O(log n) rounds a constant fraction of the agents of this oscillator will be in such a state A_i^+[z], for any choice of i ∈ {1,2,3}. We can thus design the protocol extension P_b to detect this. It is given by the pair of additional output states {Y_1, Y_2} and the rule extension consisting of the two rules shown in Fig. <ref>.

Protocol (P_o[1] + P_o[2]) + P_b, having |K|=74 states, including distinguished source states X_[1], X_[2], converges to an exact solution of BitBroadcast. This occurs in O(log^2 n) parallel rounds, with probability 1-O(1/n). In the output encoding, agent states of the form (·, ·, Y_z) represent the answer "z", for z ∈ {1,2}.
The protocol (P_o[1] + P_o[2]) + P_b is not "silent", i.e., it undergoes perpetual transitions of state, even once the output has been decided. As a side remark, we note that for the single-source broadcasting problem, or more generally for the case when the number of sources is small, max{X_[1], X_[2]} = O(1), we can propose the following simpler silent protocol. We define protocol P_o' by modifying protocol P_o as follows. We remove from it Rule (5), and replace it with the four rules shown in Fig. <ref>. The analysis of the modified protocol follows from the same arguments as those used to prove Theorem <ref>(1). In the regime of max{X_[1], X_[2]} = O(1), the effect of the source does not influence the convergence of the process, and each of the three possible corner configurations, with exclusively species {A_1, A_2, A_3}, is reached in O(log n) steps with constant probability. However, rules (5a)-(5d) enforce that the only stable configuration which will persist is the one in a corner corresponding to the identity of the source, i.e., A_1 for source X_[1] and A_2 for source X_[2]; the source will restart the oscillator in all other cases. We thus obtain the following side result, for which we leave out the details of the proof.

Protocol P_o', having |K| = 6+2 = 8 states, including distinguished source states X_[1], X_[2], converges to an exact solution of BitBroadcast, eventually stopping with all agents in state A_1^++ if source X_[1] is present and with all agents in state A_2^++ if source X_[2] is present, with no subsequent state transitions. The stabilization occurs within O(log^2 n) parallel rounds, with probability 1-O(1/n), if max{X_[1], X_[2]} = O(1), i.e., if the broadcast originates from a constant number of sources.

§.§ Protocol for Detect

The solution to the problem Detect is more involved. It relies on two auxiliary extensions added on top of a single oscillator P_o. The first, P_m, runs an instance of the 3-state majority protocol of Angluin et al. <cit.> within each species A_i of the oscillator. For this reason, the composition between P_o and P_m has to be of the form P_o ∘ P_m (i.e., it cannot be independent). The operation of this extension is shown in Fig. <ref> and analyzed in Section <ref>. It relies crucially on an interplay of two parameters: the time Θ(log(n/X)) taken by the oscillator to perform an orbit, and the time Ω(log(n/X)) it takes for the majority protocol (which is reset by the oscillator in every oscillation) to converge to a solution. When the parameters are tuned so that the second time length is larger than the first by a constant factor, a constant proportion of the agents of the population are involved in the majority computation, i.e., both of the clashing states in the fight for dominance still include Ω(n) agents. In the absence of a source, shortly after the oscillator stops, one of these states takes over, and the other disappears. The above-described difference can be detected by the second, much simpler, extension P_l, designed in Fig. <ref> and analyzed in Section <ref>.
The number of "lights" switched on during the operation of the protocol will almost always be more than (1-ε)n, where ε > 0 is a parameter controlled by the probability of lights spontaneously disengaging, and may be set to an arbitrarily small constant.

For any ε > 0, protocol (P_o ∘ P_m) + P_l, having |K| = 6 · 3 · 3 + 1 = 55 states, including a distinguished source state X, solves the problem of spreading confirmed rumors as follows:

* For any starting configuration, in the presence of the source (X ≥ 1), after an initialization period of O(log n) rounds, at an arbitrary time step the number of agents in an output state corresponding to a "yes" answer is at least (1-ε)n, with probability 1 - O(1/n).

* For any starting configuration, in the absence of the source (X = 0), the system always reaches a configuration such that all agents are in output states corresponding to a "no" answer for all subsequent time steps. Such a configuration is reached in O(log^2 n) rounds, with probability 1 - O(1/n).

§ IMPOSSIBILITY RESULTS FOR PROTOCOLS WITHOUT NON-STATIONARY EFFECTS

For convenience of notation, we identify a configuration of the population with a vector z = (z^(1), …, z^(k)) ∈ {0,1,…,n}^k = Z, where z^(i), for 1 ≤ i ≤ k, denotes the number of agents in the population having state i, and ∑_{i=1}^k z^(i) = n. Our main lower bound may now be stated as follows.

Let ε_1 > 0 be arbitrarily chosen, let P be any k-state protocol, and let z_0 be a configuration of the system with at most n^{ε_0} agents in state X, where ε_0 ∈ (0, ε_1] is a constant depending only on k and ε_1. Let B be a subset of the state space around z_0 such that the population of each state within B is within a factor of at most n^{ε_0} from that in z_0 (for any z ∈ B, for all states i ∈ {1,…,k}, we have z_0^(i)/n^{ε_0} < z^(i) ≤ n^{ε_0} max{1, z_0^(i)}). Suppose that in an execution of P starting from configuration z_0, with probability 1 - o(1), the configurations of the system in the next n^{2ε_1} parallel rounds are confined to B. Then, an execution of P for n^{2ε_0} parallel rounds, starting from a configuration in which state X has been removed from z_0, reaches a configuration in an O(n^{6ε_1})-neighborhood of B, with probability 1 - o(1).

In the statement of the Theorem, for the sake of maintaining the size of the population, we interpret "removing state X from z_0" as replacing the state of all agents in state X by some other state, chosen adversarially (in fact, this may be any state which has sufficiently many representatives in configuration z_0). The O(n^{6ε_1})-neighborhood of B is understood in the sense of the 1-norm or, asymptotically equivalently, the total variation distance, reflecting configurations which can be converted into a configuration from B by flipping the states of O(n^{6ε_1}) agents.

The proof of Theorem <ref> is provided in Section <ref>. It proceeds by a coupling argument between a process starting from z_0 and a perturbed process in which state X has been removed. The analysis treats rules and states which are seldom encountered during the execution of the protocol differently from those that are encountered with polynomially higher probability (such a clear separation is only possible when k = O(loglog n)). Eventually, the probability of success of the coupling reduces to a two-dimensional biased random walk scenario, in which the coordinates represent differences between the number of times particular rules have been executed in the two coupled processes. We have the following direct corollaries for the problems we are considering.
For Detect, if B represents the set of configurations of the considered protocol which are understood as the protocol giving the answer "X > 0", then our theorem says that, with probability 1-o(1), the vast majority of agents will not "notice" that X has been set to 0, even a polynomial number of steps after this has occurred, and thus cannot yield a correct solution. An essential element of the analysis is that it works only when state X is removed in the perturbed process. Thus, there is nothing to prevent the dynamics from stabilizing even to a single point in the case of X=0, which is indeed the case for our protocol P_r.

The argument for BitBroadcast only applies to situations where the source agent is sending out white noise (independently random bits in successive interactions). Such a source can be interpreted as a pair of sources in states X_1 and X_2 in the population, each disclosing itself with probability 1/2 upon activation and staying silent otherwise. In the cases covered by the lower bound, the scenario in which the source X_1 is completely suppressed cannot be distinguished from the scenario in which both X_1 and X_2 appear; likewise, the scenario in which the source X_2 is completely suppressed cannot be distinguished from the scenario in which both X_1 and X_2 appear. Coupling all three processes would imply the indistinguishability of all these configurations, including those with only source X_1 and only source X_2, which would imply incorrect operation of the protocol.

Whereas we use the language of discrete dynamics for precise statements, we informally remark that the protocols covered by the lower bound of Theorem <ref> include those whose dynamics z_t/n, described in the continuous limit (n → +∞), has only point attractors, repellers, and fixed points. In this sense, the use of oscillatory dynamics in our protocol seems inevitable. The impossibility result is stated for protocols with a constant number of states; however, it may be extended to protocols with a non-constant number of states k, showing that such protocols require n^{exp[-O(k)]} time to reach a desirable output. (This time is larger than polylogarithmic up to some threshold value k = O(loglog n).) The lower bound covers randomized protocols, including those in which rule probabilities depend on n (i.e., non-universal ones).

§ INPUT-CONTROLLED BEHAVIOR OF PROTOCOLS FOR DETECT

In this Section, we consider the periodicity of protocols for self-organizing oscillatory dynamics, in order to understand how the period of a phase clock must depend on the input parameters. We focus on the setting of the Detect problem, considering the value X of the input parameter. In Section <ref>, we noted informally that the designed oscillatory protocol performs a complete rotation around the triangle in Θ(log(n/X)) rounds. Here, we provide partial evidence that the periodicity of any oscillatory protocol must depend both on the value of X and on n. We do this by bounding the portions of the configuration space in which a protocol solving Detect finds itself in most time steps, separating the case of sub-polynomial X (i.e., X < n^{ε_0}, where ε_0 > 0 is a constant dependent on the specific protocol) and the case of X = Θ(n).

Any protocol on k states (not necessarily of an oscillatory nature) can be viewed as a Markov chain in its k-dimensional configuration space [0,n]^k, and as in Section <ref> we identify a configuration with a vector z ∈ {0,1,…,n}^k = Z. The configuration at time step t is denoted z(t).
In what follows, we will look at the equivalent space of log-configurations, given by the bijection: Z ∋ z = (z^(1), …, z^(k)) ↦ z̄ = (z̄^(1), …, z̄^(k)), where ā = ln a for a > 0 and ā = -1 for a = 0. For z_0 ∈ Z, we will refer to the d-log-neighborhood of z_0 as the set of points {z ∈ Z : |z̄ - z̄_0| < d}.

Notice first that the notion of a box in the statement of Theorem <ref> is closely related to the set of points in the (ε_0 ln n)-log-neighborhood of configuration z_0. It follows from the Theorem that any protocol solving Detect within a polylogarithmic number of rounds T with probability 1-o(1) must, in the case of 0 < X < n^{ε_0}, starting from z_0 at some time t_0, leave the (ε_0 ln n)-log-neighborhood of z_0 within T rounds with probability 1-o(1). We obtain the following corollary.

Fix a universal protocol P which solves the Detect problem with ε-error in T = O(log n) rounds with probability 1-o(1). Set 0 < X < n^{ε_0}, where ε_0 > 0 is a constant which depends only on the definition of protocol P. Let t_0 be an arbitrarily chosen moment of time after at least T rounds from the initialization of the protocol in any initial state. Then, within T rounds after time t_0, there is a moment of time t such that z(t) is not in the (ε_0 ln n)-log-neighborhood of z(t_0), with probability 1-o(1).

The above Proposition suggests that oscillatory or quasi-oscillatory behavior at low concentrations of state X must be of length Ω(log n). By contrast, the following Proposition shows that in the case X = Θ(n), the protocol remains tied to a constant-size log-neighborhood of its configuration space.

Fix a universal protocol P with set of states K which solves the Detect problem with ε-error in T = O(log n) rounds with probability 1-o(1). Then, there exists a constant δ_0 > 0, depending only on the design of protocol P, with the following property. Fix X ∈ [cn, n/2], where 0 < c < 1/2 is an arbitrarily chosen constant. Let t be an arbitrarily chosen moment of time, after at least T rounds from the initialization of the protocol at an adversarially chosen initial configuration z(0), such that each coordinate z^(i)(0) satisfies z^(i)(0) = 0 or z^(i)(0) > n/(2|K|), for all i ∈ {1,…,|K|}. Then, with probability 1-e^{-n^{Ω(1)}}, z(t) is in the δ_0-log-neighborhood of z(0).

The proof of the Proposition is deferred to Section <ref>. Note that, in the regime of a constant-size log-neighborhood of configuration z(0), the discrete dynamics of the protocol adheres closely to the continuous-time version of its dynamics in the limit n → +∞. (See Section <ref>, and in particular Lemma <ref>, for a further discussion of this property.) Since the latter is independent of n, any oscillatory behavior "inherited" from the continuous dynamics would have a period of O(1) rounds. We leave open the question of whether some form of protocol behavior with polylogarithmic (or, more broadly, non-constant and subpolynomial) periodicity for Detect can be designed in the regime of X = Θ(n) despite this obstacle. In particular, the authors believe that the existence of an input-controlled phase clock with a period of Θ(log n) for any X > 0, and the absence of operation for X = 0, is unlikely in the class of discrete dynamical systems given by the rules of population protocols.

The remaining sections of the paper provide proofs of the Theorems from Sections <ref>, <ref>, and <ref>.

§ ANALYSIS OF OSCILLATOR DYNAMICS P_o

This section is devoted to the proof of Theorem <ref>.

§.§ Preliminaries: Discrete vs. Continuous Dynamics

Notation.
For a configuration of a population protocol, we write z = (z^(1), …, z^(k)) to describe the numbers of agents in the k states of the protocol, and likewise use the vector c = (c^(1), …, c^(k)) = z/n to describe their concentrations. The concentration of a state called A which is the i_A-th state in vector c is equivalently written as a ≡ a(c) ≡ c^(i_A), depending on which notation is the easiest to use in a given transformation. If the vector c represents the current configuration of the protocol and c' is the random variable describing the next configuration of the protocol after the execution of a single rule (conditioned on c), we write Δc := c' - c. We also apply the notation Δ to functions of the state c.

Next, we define the continuous dynamics associated with the protocol by the following vector differential equation: ċ ≡ dc/dt := n 𝔼(Δc), and likewise, for each coordinate, ȧ = n 𝔼(Δa) (we use the dot notation and d/dt interchangeably for time derivatives). This continuous description serves for the analysis only, and reflects the behavior of the protocol in the limit n → ∞.[We note that some of our results rely on the stochasticity of the random scheduler model, and do not immediately generalize to the continuous case.]

Warmup: the RPS oscillator. Our oscillatory dynamics may be seen as an extension of the rock-paper-scissors (RPS) protocol (see Related work). This is a protocol with three states A_1, A_2, A_3 and three rules: A_i A_{i-1} ↦ A_i with probability p, where p > 0 is an arbitrarily fixed constant, and the indices of states A_i are always taken cyclically in the range {1,2,3} (index values outside this range are reduced modulo 3 into it). For i ∈ {1,2,3}, the change of the concentration of agents of state A_i in the population in a given step can be expressed for the RPS protocol as:

Δa_i = (1/n)·ΔA_i = +1/n, with probability p a_{i-1} a_i;
Δa_i = -1/n, with probability p a_i a_{i+1};
Δa_i = 0, otherwise.

Thus, the corresponding continuous dynamics for RPS is given as: ȧ_i = n 𝔼(Δa_i) = p a_{i-1} a_i - p a_i a_{i+1}, for i=1,2,3. The orbit of motion for this dynamics in ℝ^3 is given by two constants of motion. First, a_1 + a_2 + a_3 = 1 by normalization. Secondly, for any starting configuration with a strictly positive number of agents in each of the three states, the following function ϕ of the configuration: ϕ = ln(a_1 a_2 a_3) is easily verified to be constant over time (ϕ̇ = 0); hence ϕ = ln(a_1 a_2 a_3) = const < 0 (or, more simply, a_1 a_2 a_3 = const). Thus, for the continuous dynamics, the initial product of concentrations completely determines its perpetual orbit, which is obtained by intersecting the appropriate surface a_1 a_2 a_3 = const with the plane a_1 + a_2 + a_3 = 1. As a matter of convention, the plane a_1 + a_2 + a_3 = 1 with the conditions a_i ≥ 0 is drawn as an equilateral triangle (we adopt this convention throughout the paper, also for subsequent protocols). All of the orbits are concentric around the point (1/3, 1/3, 1/3), which is in itself a point orbit maximizing the value of ϕ = -ln 27.

The discrete dynamics follows a trajectory which typically resembles random-walk-type perturbations around the continuous path of motion, until eventually, after Õ(n^2) steps, it crashes into one of the sides of the triangle. Subsequently, if a_i = 0 for some i=1,2,3, then no rule can make a_i increase. (If a_{i-1} > 0, then within O(log n) parallel rounds all remaining agents of A_{i+1} will be converted to A_{i-1}, and only agents from A_{i-1} will be left.) Thus, the protocol will terminate in a corner of the state space. A further discussion of the RPS dynamics can be found in <cit.>.
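To make the random-scheduler semantics and the RPS crash-to-corner behavior concrete, the following minimal Python sketch simulates the three-rule RPS protocol exactly as defined above. The population size, the rule probability p, and the seed are illustrative assumptions of this snippet, not values fixed by the analysis.

```python
import random

def simulate_rps(n=1000, p=0.1, seed=0):
    """Minimal sketch of the RPS protocol under the random scheduler.

    Each agent holds a state in {0, 1, 2}, representing A_1, A_2, A_3.
    In each sequential step, an ordered pair (initiator, receiver) of
    distinct agents is drawn uniformly at random; the rule
    A_i A_{i-1} -> A_i fires with probability p (indices cyclic mod 3).
    Returns the number of steps until the population reaches a corner.
    """
    rng = random.Random(seed)
    # Start near the center of the (A_1, A_2, A_3)-triangle.
    agents = [i % 3 for i in range(n)]
    counts = [agents.count(i) for i in range(3)]
    steps = 0
    while max(counts) < n:  # a corner has all agents in one state
        i, j = rng.sample(range(n), 2)  # initiator i, receiver j
        a, b = agents[i], agents[j]
        # The initiator attacks its prey species A_{a-1}.
        if b == (a - 1) % 3 and rng.random() < p:
            counts[b] -= 1
            counts[a] += 1
            agents[j] = a  # initiator-preserving: only the receiver changes
        steps += 1
    return steps, counts

if __name__ == "__main__":
    steps, counts = simulate_rps()
    print(f"corner reached after {steps} sequential steps; counts = {counts}")
```

Running this for several seeds gives crash times consistent with the Õ(n^2) sequential-step estimate quoted above.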
§.§ Proof Outline of Theorem <ref>

The rest of the section is devoted to the proof of Theorem <ref>. We start by noting some basic properties in Subsection <ref>, then prove the properties of the protocol for the case of X=0 (Subsection <ref>), and finally analyze the (somewhat less involved) case of X>0 (Subsection <ref>). For the case of X=0, the proof is based on a repeated application of concentration inequalities for several potential functions (applicable in different portions of the 6-dimensional phase space). In two specific regions, the O(1/√n)-neighborhood of the center of the (A_1, A_2, A_3)-triangle and the region very close to its sides, we rely on stochastic noise to "push" the trajectory away from the center of the triangle, and also to push it onto one of its sides. Fortunately, each of these stages takes O(log n) parallel rounds, with strictly positive probability. Overall, the O(log n) parallel rounds bound for the case of X=0 is provided with constant probability; this translates into O(log n) parallel rounds in expectation, since subsequent executions of the process for O(log n) rounds have independently constant success probability, and the process has a geometrically decreasing tail over intervals of length O(log n).

§.§ Properties of the Oscillator

In the following, we define s = a_1 + a_2 + a_3 ∈ [0,1]. Handling the case of s < 1 not only allows us to take care of the fact that a fraction of the population may be taken up by the rumor source X, but also allows for easier composition of P_o with other protocols (sharing the same population). We set p as a constant value independent of n, which is sufficiently small (e.g., p = s^2/10^12 is a valid choice; we make no effort in the proofs to optimize constants, but in simulations the protocol appears to work well with much larger values of p). We will occasionally omit an explanation of the index i, which will then implicitly mean "for all i=1,2,3". We define a_min := min_{i=1,2,3} a_i and a_max := max_{i=1,2,3} a_i.

From the definition of the protocol one obtains the distribution of the changes of the sizes of states in a step:

Δa_i = +1/n, with probability (1/3)x(s - a_i) + p a_i^+ a_{i-1} + 2p a_i^++ a_{i-1};
Δa_i = -1/n, with probability (2/3)x a_i + p a_i a_{i+1}^+ + 2p a_{i+1}^++ a_i;
Δa_i = 0, otherwise.

Δa_i^++ = +1/n, with probability a_i(a_i - a_i^++);
Δa_i^++ = -1/n, with probability x a_i^++ + (s - a_i) a_i^++;
Δa_i^++ = 0, otherwise.

Taking the expectations of the above random variables, and recalling that a_i = a_i^+ + a_i^++, we obtain:

ȧ_i = x(s/3 - a_i) + p a_i^+ a_{i-1} + 2p a_i^++ a_{i-1} - p a_i a_{i+1}^+ - 2p a_{i+1}^++ a_i = x(s/3 - a_i) + p a_{i-1}(a_i + a_i^++) - p a_i(a_{i+1} + a_{i+1}^++),
ȧ_i^++ = -x a_i^++ + a_i(a_i - a_i^++) - (s - a_i) a_i^++ = -x a_i^++ + a_i^2 - s a_i^++.

Moreover, we have by a simple transformation: ϕ̇ = ∑ ȧ_i/a_i = (x/3)((∑ s/a_i) - 9) + p(∑ a_i^++ (a_{i-1}/a_i - 1)).

§.§ Stopping in O(n log n) Sequential Steps in the Absence of a Source

Throughout this subsection we assume that x=0. We consider first the case where a_i ≠ 0, for i=1,2,3 (noting that as soon as a_i = 0, we can easily predict the subsequent behavior of the oscillator, as was the case for the RPS dynamics). The dynamics of P_o is defined in such a way that when x=0 and in the absence of the rules of the RPS oscillator, the value of a_i^++ would be close to a_i^2/s. Consequently, we define κ_i, i=1,2,3, as the appropriate normalized corrective factor: κ_i = s a_i^++/a_i - a_i, thus a_i^++ = (a_i/s)(a_i + κ_i). Note that since 0 ≤ a_i^++ ≤ a_i ≤ 1, we have -1 ≤ κ_i ≤ 1.
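As an aside, the mean-field equations for ȧ_i and ȧ_i^++ derived above can be integrated numerically to illustrate the two regimes of Theorem <ref>: ϕ decreasing towards a side of the triangle when x = 0, and the stabilizing source term competing with the destabilizing mechanism when x > 0. The following forward-Euler sketch is illustrative only; the parameter values (s, p, x, the step size, and the initial condition) are assumptions of this snippet, not choices made in the analysis.

```python
import math

def step(a, app, x, s, p, dt):
    """One forward-Euler step of the continuous dynamics of P_o.

    a[i]   : concentration of species A_{i+1} (a_i = a_i^+ + a_i^{++})
    app[i] : concentration of the aggressive flavor A_{i+1}^{++}
    Implements  da_i/dt      = x(s/3 - a_i) + p a_{i-1}(a_i + a_i^{++})
                                            - p a_i(a_{i+1} + a_{i+1}^{++})
    and         da_i^{++}/dt = -x a_i^{++} + a_i^2 - s a_i^{++}.
    """
    da, dapp = [0.0] * 3, [0.0] * 3
    for i in range(3):
        im, ip = (i - 1) % 3, (i + 1) % 3
        da[i] = (x * (s / 3 - a[i])
                 + p * a[im] * (a[i] + app[i])
                 - p * a[i] * (a[ip] + app[ip]))
        dapp[i] = -x * app[i] + a[i] ** 2 - s * app[i]
    return ([a[i] + dt * da[i] for i in range(3)],
            [app[i] + dt * dapp[i] for i in range(3)])

def run(x, s=0.9, p=0.01, dt=0.01, steps=150000):
    a = [0.45, 0.27, 0.18]                   # slightly off-center start
    app = [a[i] ** 2 / s for i in range(3)]  # near-equilibrium flavors (kappa = 0)
    for t in range(steps):
        a, app = step(a, app, x, s, p, dt)
        if t % 30000 == 0:
            phi = math.log(max(a[0] * a[1] * a[2], 1e-300))
            print(f"x={x:g} t={t:6d} a={[round(v, 4) for v in a]} phi={phi:.3f}")

if __name__ == "__main__":
    run(x=0.0)    # x = 0: the destabilizing mechanism prevails, phi drifts down
    run(x=0.05)   # x > 0: the source term of rule (5) pulls phi back up
```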
Next, we introduce the following definitions: δ_i = a_i - a_{i-1}, δ = √(δ_1^2 + δ_2^2 + δ_3^2), and κ = √(κ_1^2 + κ_2^2 + κ_3^2). We also reuse the potential ϕ from the original RPS oscillator. This time, it is no longer a constant of motion. By (<ref>) and the definition of κ_i, for x=0 we upper-bound ϕ̇ as: ϕ̇ = (p/s)(∑ a_i(a_i + κ_i)(a_{i-1}/a_i - 1)) = (p/s)(∑ (a_i + κ_i)(a_{i-1} - a_i)) = (p/s)(-(1/2)∑ (a_i - a_{i-1})^2 + ∑ κ_i(a_{i-1} - a_i)) ≤ (p/s)(-(1/2)δ^2 + κδ).

The above change ϕ̇ of the potential is indeed negative when κ ≈ 0 (which is in accordance with our intention in designing the destabilizing rules for the oscillator). The functions δ, ϕ and κ are intricately dependent on each other. In general, we will try to show that δ increases and ϕ decreases over time, while κ stays close to 0. This requires that we first introduce a number of auxiliary potentials based on these functions.

First, for x=0, we can rewrite (<ref>) as: ȧ_i^++ = a_i^2 - s a_i^++ = -a_i κ_i. Next, introducing the definition of κ_i into (<ref>), we obtain for x=0: ȧ_i = p a_{i-1}(a_i + a_i^++) - p a_i(a_{i+1} + a_{i+1}^++) = p a_i(a_{i-1} - a_{i+1}) + p(a_i^++ a_{i-1} - a_{i+1}^++ a_i) = p a_i δ_{i-1} + p((1/s) a_i(a_i + κ_i) a_{i-1} - (1/s) a_{i+1}(a_{i+1} + κ_{i+1}) a_i) = p a_i δ_{i-1} + (p/s) a_i (κ_i a_{i-1} - κ_{i+1} a_{i+1} + (a_{i-1} a_i - a_{i+1}^2)).

From the above, an upper bound on |ȧ_i| follows directly using elementary transformations: |ȧ_i| ≤ p a_i |δ_{i-1}| + (p/s) a_i (|κ_i| a_{i-1} + |κ_{i+1}| a_{i+1} + |a_{i-1} a_i - a_{i+1}^2|) ≤ p a_i δ + (p/s) a_i (κ(a_{i-1} + a_{i+1}) + |a_{i-1} a_i - a_{i+1}^2|) = p a_i δ + p a_i κ (a_{i-1} + a_{i+1})/s + (p/s) a_i |(a_{i-1} - a_{i+1}) a_i - (a_{i+1} - a_i) a_{i+1}| ≤ p a_i(δ + κ) + (p/s) a_i (|a_{i-1} - a_{i+1}| a_i + |a_{i+1} - a_i| a_{i+1}) ≤ p a_i(δ + κ) + p a_i δ (a_i + a_{i+1})/s ≤ p a_i(2δ + κ).

We are now ready to estimate κ̇_i for x=0, using the definition of κ_i and the previously obtained formula for ȧ_i^++: κ̇_i = s(ȧ_i^++/a_i - (a_i^++/a_i^2) ȧ_i) - ȧ_i = -sκ_i - (s a_i^++/a_i^2 + 1) ȧ_i = -sκ_i - ((s a_i^++/a_i - a_i)/a_i) ȧ_i - 2ȧ_i = -sκ_i - (ȧ_i/a_i) κ_i - 2ȧ_i.

Next, from the bound on |ȧ_i|: d(κ_i^2)/dt = 2κ_i κ̇_i = 2(-s κ_i^2 - (ȧ_i/a_i) κ_i^2 - 2ȧ_i κ_i) ≤ 2(-s κ_i^2 + (|ȧ_i|/a_i) κ_i^2 + 2|ȧ_i||κ_i|) ≤ 2(-s κ_i^2 + p(2δ + κ)(κ_i^2 + 2a_i|κ_i|)) ≤ 2(-s κ_i^2 + p(2δ + κ)(κ + 2κ)) = -2s κ_i^2 + p(12δκ + 6κ^2).

Next: κ̇ = (1/(2κ)) ∑ d(κ_i^2)/dt ≤ (1/(2κ))(-2s ∑ κ_i^2 + 3p(12δκ + 6κ^2)) ≤ (1/(2κ))(-2s κ^2 + p(36δκ + 18κ^2)) = (-s + 9p)κ + 18pδ ≤ -(s/2)κ + 18pδ, where in the final transformation we took into account that p ≤ s/18.

Now, we define the potential η for any configuration with all a_i > 0 as: η = (ln(s^3/27) - ϕ)^{1/2} = (-∑_{i=1}^3 ln(a_i/(s/3)))^{1/2}. We remark that η is always well-defined when a_min > 0, and that η ≥ 0.

Overview of the proof. The proof for the case of X=0 proceeds by following the trajectory of the discrete dynamics of P_o, divided into a number of stages. We define a series of time steps t_0, t_1, …, t_7 by conditions on the configuration met at time t_i, and show that, subject to these conditions holding, we have t_{j+1} ≤ t_j + O(n log n) (we recall that here time is measured in sequential steps), with at least constant probability. Overall, it follows that the configuration at time t_7, which corresponds to having reached a corner state, is reached from t_0, which is any initial configuration with X=0, in O(n log n) time steps, with constant probability. The intermediate time steps may be schematically described as follows (see Fig. <ref>).
For configurations which start close to the center of the triangle (δ ≤ s/12), we define a pair of potentials ψ^(1), ψ^(2), based on linear combinations of modified versions of η and κ. The dynamics will eventually escape from the area δ ≤ s/12; however, first it may potentially reach a very small area of radius O(1/√n) around the center of the triangle with κ ≈ 0 (Lemma <ref>, time t_1, reached in O(n log n) steps by a multiplicative drift analysis on the potential ψ^(2) < 0), pass through the vicinity of the center of the triangle, escaping it with κ ≈ 0 (Lemma <ref>, time t_2, reached in O(n log n) steps with constant probability by a protocol-specific analysis of the scheduler noise, which with constant probability increases η without increasing κ too much), and eventually escape completely to the area of δ > s/12 (Lemma <ref>, exponentially increasing value of the potential ψ^(1) > 0).

In the area of δ > s/12, we define a new potential ψ based on ϕ and κ. This increases (Lemma <ref>, additive drift analysis on ψ with bounded variance) until a configuration at time t_4 with a constant number of agents of some species A_i is reached. This configuration then evolves towards a configuration at time t_5 at which some species has O(1) agents and, additionally, its predator species makes up a constant fraction of the population (Lemma <ref>, direct analysis of the process combined with an analysis of the potential ψ and a geometric drift argument). Then, the species with O(1) agents is eliminated in O(n) steps with constant probability (time t_6, Lemma <ref>), and finally one more species is eliminated in another O(n log n) steps (at time t_7, Lemma <ref>, straightforward analysis of the dynamics). At this point, the dynamics has reached a corner.

Throughout the proof, we define boundary conditions on the analyzed cases to ensure that the process does not fall back to a previously considered case, with probability 1 - o(1).

Phase with δ ≤ s/12. We then have a_i ∈ [3s/12, 5s/12] and a_i/(s/3) ∈ [3/4, 5/4], for i=1,2,3. In this range, we have: (1/3)(a_i/(s/3) - 1)^2 < (a_i/(s/3) - 1) - ln(a_i/(s/3)) < (3/4)(a_i/(s/3) - 1)^2. Summing the above inequalities for i=1,2,3 and noting that ∑_{i=1}^3 (a_i/(s/3) - 1) = 0, we obtain: (1/3) ∑_{i=1}^3 (a_i/(s/3) - 1)^2 < η^2 < (3/4) ∑_{i=1}^3 (a_i/(s/3) - 1)^2. Next, we have: ∑_{i=1}^3 (a_i/(s/3) - 1)^2 = (3/s)^2 ∑_{i=1}^3 (a_i - s/3)^2 = 3δ^2/s^2. Combining the two above expressions gives the sought bound between η and δ as: δ/s < η < (3/2)δ/s, and equivalently δ ∈ ((2/3)sη, sη).

We have directly from (<ref>) and from the relations between η and δ: η̇ = -ϕ̇/(2η) ≥ (p/(2sη))((1/2)δ^2 - κδ) = (p/(4sη))δ^2 - (p/(2sη))κδ ≥ (ps/9)η - (p/2)κ, and from (<ref>): κ̇ ≤ -(s/2)κ + 18pδ ≤ -(s/2)κ + 18ps·η.

Moving to the discrete-time model, it is advantageous to eliminate the discontinuity of the partial derivatives of η and κ at points with η=0 and κ=0, respectively, which is a side-effect of the square root transformation applied in the respective definitions of η and κ.
We define the auxiliary functions η^* and κ^* by adding an appropriate corrective factor: η^* = √(η^2 + 1/n), κ^* = √(κ^2 + 1/n), and derive accordingly from (<ref>) and (<ref>): η̇^* = (η/η^*)η̇ ≥ (ps/9)(η - 1/√n) - (p/2)κ ≥ (ps/9)η^* - (p/2)κ^* - 2ps/(9√n), and κ̇^* = (κ/κ^*)κ̇ ≤ -(s/2)(κ - 1/√n) + 18ps·η ≤ -(s/2)κ^* + 18ps·η^* + s/√n.

Let c be the 5-dimensional vector representing the current configuration of the system: c := (a_1^+, a_1^++, a_2^+, a_2^++, a_3^+) ≡ (c^(1), …, c^(5)); note that the last element, a_3^++, is determined as a_3^++ = s - ∑_{i=1}^5 c^(i).[In principle it is also correct to represent c as a vector of dimension 6, i.e., including a_3^++ in c as a free dimension. However, such a representation would lead to second-order partial derivatives ∂^2 η^*(c)/∂c^(i)∂c^(j) which are too large for our purposes.] The following lemma is obtained by a folklore application of Taylor's theorem.

Let f : ℝ^5 → ℝ be a C^2 function in a sufficiently large neighborhood of c, with min_{1≤i≤5} c^(i) ≥ 2/n. Then, |𝔼(Δf(c)) - ḟ/n| ≤ (2/n^2) max_{‖c^* - c‖_∞ ≤ 1/n} D_f(c^*), where D_f(c^*) := max_{1≤i,j≤5} |∂^2 f(c^*)/∂c^(i)∂c^(j)|.

Let c' be the random variable representing the configuration of the system after its next transition from configuration c. Observe that in every non-idle step of the execution of the protocol, exactly one agent changes its state, so ‖c' - c‖_∞ ≤ 1/n. Applying the Taylor approximation, we have: 𝔼(Δf) = 𝔼(f(c')|c) - f(c) = ∑_{c'} (f(c') - f(c)) Pr(c'|c) = ∑_{c'} (∇f(c)·(c' - c) + R_2(c,c')) Pr(c'|c) = ∇f(c)·∑_{c'} (c' - c) Pr(c'|c) + R_2(c) = ∇f(c)·(1/n)(ċ^(1), …, ċ^(5))^T + R_2(c) = ḟ/n + R_2(c), where ∇f(c) is the gradient of f at c, R_2(c,c') ∈ ℝ denotes the second-order Taylor remainder for the function f expanded at point c along the vector towards point c', and R_2(c) ∈ ℝ is subsequently an appropriately chosen value satisfying: |R_2(c)| ≤ (1/n^2) max_{‖c^* - c‖_∞ ≤ 1/n} D_f(c^*).

The following lemma is obtained directly by computing and bounding all second-order partial derivatives of the functions η^* and κ^* with respect to the variables (c^(1), …, c^(5)).

There exists a constant c_1 > 1, depending only on s and p, such that for any configuration c with δ(c) ≤ s/12: (i) max_{‖c^* - c‖_∞ ≤ 1/n} D_{η^*}(c^*) < c_1 √n, and (ii) max_{‖c^* - c‖_∞ ≤ 1/n} D_{κ^*}(c^*) < c_1 √n.

In view of the above lemmas, we obtain from (<ref>) and (<ref>), for an appropriately chosen constant c_2 = 2c_1 + s: 𝔼Δη^* ≥ (1/n)((ps/18)η^* - (p/2)κ^* - c_2/√n) and 𝔼Δκ^* ≤ (1/n)(-(s/3)κ^* + 18ps·η^* + c_2/√n), both when δ ≤ s/12.

For j=1,2, we now define two linear combinations of the functions η^* and κ^*: ψ^(j) = η^* - (3jp/s)κ^*. When δ ≤ s/12, we have: 𝔼Δψ^(j) ≥ (1/n)((ps/18)η^* - (p/2)κ^* - c_2/√n + jpκ^* - 54jp^2 η^* - (3jp/s)·c_2/√n) ≥ (1/n)((ps/24)η^* + (jp/2)κ^* - 2c_2/√n) ≥ (ps/(24n))(η^* + (3jp/s)κ^* - 48c_2/(ps√n)) ≥ (ps/(24n))(|ψ^(j)| - c_3/√n), where we denoted c_3 := 48c_2/(ps) and used the fact that p < s/(72 · 54 · 2).

We subsequently perform an analysis of ψ^(j)_t = ψ^(j)(c_t), j=1,2, treating them as stochastic processes. We remark that ψ^(2)_t ≤ ψ^(1)_t, since ψ^(1)_t - ψ^(2)_t = (3p/s)κ^* ≥ 0.

Let c_{t_0} be an arbitrary starting configuration of the system. Then, with constant probability, for some t_1 = t_0 + O(n log n), a configuration c_{t_1} is reached such that ψ^(1)_{t_1} ≥ ψ^(2)_{t_1} ≥ -2c_3/√n.

W.l.o.g. assume t_0 = 0. We subsequently analyze only the process ψ^(2)_t. Let t_1 be the first time step such that ψ^(2)_{t_1} > -2c_3/√n. If t_1 ≠ 0, then ψ^(2)_0 < 0.
Note that then ψ^(2)_t < 0 for all t ≤ t_1, from which it follows, by a straightforward calculation from the definitions of ψ, κ, and η, that δ_t < s/12 for all t ≤ t_1.

We now define the filtered stochastic process ψ^{*(2)}_t as ψ^{*(2)}_t := |ψ^(2)_t| for t < t_1, and put Δψ^{*(2)}_t := 0 for t ≥ t_1. For all t ≥ 0, we then have: 𝔼(Δψ^{*(2)}_t | ψ^{*(2)}_t ≠ 0) ≤ -(ps/(48n)) ψ^{*(2)}_t. Since 0 ≤ ψ^{*(2)}_t < 9 for all time steps, a direct application of multiplicative drift analysis (cf. <cit.>) gives: 𝔼 t_1 ≤ (48n/(ps))(1 + ln(9√n/(2c_3))), and the claim follows by Markov's inequality.

Let c_{t_1} be an arbitrary starting configuration of the system such that ψ^(j)_{t_1} ∈ [-2c_3/√n, 4c_3/√n], for j=1,2. Then, with constant probability, for some t_2 = t_1 + O(n), a configuration c_{t_2} is reached such that ψ^(1)_{t_2} ≥ 4c_3/√n.

W.l.o.g. assume that t_1 = 0, and suppose that initially ψ^(2)_0 ≤ ψ^(1)_0 < 4c_3/√n (i.e., that t_2 ≠ t_1). Then, from the lower and upper bounds on ψ^(1)_0 and ψ^(2)_0, we obtain the following bounds on κ_0 and δ_0: (3p/s)κ^*_0 = ψ^(1)_0 - ψ^(2)_0 ≤ 2c_3/√n + 4c_3/√n, hence κ_0 ≤ 2c_3 s/(p√n); and η^*_0 = 2ψ^(1)_0 - ψ^(2)_0 ≤ 2·4c_3/√n + 2c_3/√n, hence η_0 ≤ 10c_3/√n and δ_0 < 10c_3 s/√n.

It follows that, for i=1,2,3, a_{i,0} ∈ [s/3 - 10c_3 s/√n, s/3 + 10c_3 s/√n] and a_{i,0}^++ = (a_{i,0}/s)(a_{i,0} + κ_{i,0}) ∈ [(1/3 - 10c_3/√n)(s/3 - 10c_3 s/√n - 2c_3 s/(p√n)), (1/3 + 10c_3/√n)(s/3 + 10c_3 s/√n + 2c_3 s/(p√n))]. For the sake of clarity of notation, we will simply write a_{i,0} = s/3 ± O(1/√n) and a_{i,0}^++ = s/9 ± O(1/√n), hence also a_{i,0}^+ = 2s/9 ± O(1/√n).

We will now consider the sequence of exactly n transitions of the protocol, between time steps t = 0,1,…,n. For all t we have 𝔼Δψ^(2)_t ≥ -c_3 ps/(24 n^{3/2}). Consider the Doob submartingale Y_t = ∑_{τ=0}^{t-1} X_τ with increments (X_t) given as: X_t = Δψ^(2)_t + c_3 ps/(24 n^{3/2}) if Y_t > -c_3/√n, and X_t = 0 otherwise. Noting that |X_t| ≤ 9/n, an application of the Azuma inequality for submartingales to (Y_n) gives: Pr[Y_n ≤ -c_3/√n] ≤ exp[-c_3^2/162] (cf. e.g. <cit.>[Thm. 16]). From here it follows directly that: Pr[ψ^(2)_n > ψ^(2)_0 - c_3/√n - n·c_3 ps/(24 n^{3/2})] ≥ 1 - exp[-c_3^2/162] > 1/2. Noting that ψ^(2)_0 ≥ -2c_3/√n, we have: Pr[ψ^(2)_n ≥ -2c_3/√n] > 1/2.

We now describe the execution of transitions in the protocol for times t = 0,1,…,n-1 through the following coupling. First, we select the sequence of pairs of agents chosen by the scheduler. Let V_2^+ (respectively, V_1^+) denote the subset of the set of n agents, having initial state A_2^+ (resp., A_1^+) at time 0, which are involved in exactly one transition in the considered time interval, acting in it as the initiator (resp., receiver). Let S ⊆ {0,1,…,n-1} denote the subset of time steps at which the scheduler activates a transition involving an element of V_2^+ as the initiator and an element of V_1^+ as the receiver. The execution of the protocol is now given by:

* Phase P_A: Selecting the sequence of pairs of elements activated by the scheduler in time steps (0,1,…,n-1). This also defines the set S.
Executing the rules of the protocol in their usual order for time steps from the set {0,1,…,n-1} ∖ S.

* Phase P_B: Executing the rules of the protocol for time steps from the set S.

Observe that since elements of pairs activated in time steps from S are activated only once throughout the n steps of the protocol, the above probabilistic coupling does not affect the distribution of outcomes.

Directly from (<ref>), we obtain through a standard bound on conditional probabilities that at least a constant fraction of the choices made in phase P_A leads to the outcome "ψ^(2)_n ≥ -2c_3/√n" with at least constant probability during phase P_B: Pr[P_A : Pr[ψ^(2)_n ≥ -2c_3/√n | P_A] > 1/4] ≥ 1/3.

We now remark on the size of the set S. The distribution of |S| depends only on a_{2,0}^+, a_{1,0}^+, and the choices made by the random scheduler. We recall that a_{2,0}^+ = 2s/9 ± O(1/√n). Since the expected number of isolated edges in a random multigraph on n nodes (representing the set of agents) and n edges (representing the set of time steps) is (1 ± o(1))e^{-4}n, the number of such edges having their first endpoint at an agent in state A_2^+ and their second endpoint at an agent in state A_1^+ is (1 ± o(1))(4e^{-4}s^2/81)n. A straightforward concentration analysis (using, e.g., the asymptotic correspondence between the G(n,m) and G(n,p) random graph models and an application of Azuma's inequality for functions of independent random variables) shows that the bound |S| = (1 ± o(1))(4e^{-4}s^2/81)n holds with very high probability. In particular, we have: Pr[|S| > c_4 n] = 1 - e^{-Ω(n)}, for some choice of constant c_4 which depends only on s.

Relations (<ref>) and (<ref>) provide all the necessary information about phase P_A that we need. Subsequently, we will only analyze phase P_B, conditioning on a fixed execution of phase P_A such that the following event F_A holds: Pr[ψ^(2)_n ≥ -2c_3/√n | P_A] > 1/4 ∧ |S| > c_4 n. We remark that, by a union bound over (<ref>) and (<ref>), Pr[F_A] ≥ 1/3 - e^{-Ω(n)} > 1/4.

In the remainder of the proof, our objective will be to show that: Pr[ψ^(1)_n ≥ 4c_3/√n | P_A] > c_5, for some constant c_5 > 0 depending only on s and p, for any choice of P_A for which event F_A holds. When this is shown, the claim of the lemma will follow directly, with a probability value of at least c_5 Pr[F_A] > c_5/4 by the law of total probability.

We now proceed to analyze the random choices made during phase P_B. Each of the considered |S| interactions involves a pair of agents of the form (A_2^+, A_1^+), and describes the following transition: (A_2^+, A_1^+) → (A_2^+, A_2^+) with probability p, and (A_2^+, A_1^+) (no change) with probability 1-p, independently at random for each transition. The only state changes observed during this phase are from A_1^+ to A_2^+, and we denote by B the number of such state changes. The value of the random variable B completely describes the outcome of phase P_B.

We have 𝔼B = p|S|, and by a standard additive Chernoff bound: Pr[|B - p|S|| ≤ 2√n | P_A] ≥ 1 - 2e^{-4} > 7/8. Let ℬ ⊆ [p|S| - 2√n, p|S| + 2√n] be the subset of the considered interval containing the values of B such that (ψ^(1)_n | P_A, B ∈ ℬ) ≥ 4c_3/√n. If Pr[B ∈ ℬ | P_A] ≥ 1/8, then the claim follows directly. Otherwise, it follows from (<ref>) and (<ref>) that there must exist a value b ∈ [p|S| - 2√n, p|S| + 2√n] ∖ ℬ such that: (ψ^(2)_n | P_A, B=b) ≥ -2c_3/√n. Given that (ψ^(1)_n | P_A, B=b) ≤ 4c_3/√n, and recalling that ψ^(2)_n ≤ ψ^(1)_n, we obtain the following bound on η_n: (η^*_n | P_A, B=b) = (2ψ^(1)_n - ψ^(2)_n | P_A, B=b) ≤ 2·4c_3/√n + 2c_3/√n = 10c_3/√n.
We now consider lower bounds on the value of ψ^(2)_n, conditioned on P_A, B=b^+ (respectively, P_A, B=b^-), where b^+ (resp., b^-) is a value arbitrarily fixed in the range b^+ ∈ [b + 20c_3 s√n, b + 21c_3 s√n] (resp., b^- ∈ [b - 21c_3 s√n, b - 20c_3 s√n]). The executions of the protocol with B=b^+ and B=b^- differ with respect to the execution with B=b in the number of executed transitions from A_1^+ to A_2^+ by at least 20c_3 s√n. Recalling that δ_2 = a_2 - a_1, it follows that for some b' ∈ {b^+, b^-} we have after n steps: (δ_n | P_A, B=b') ≥ (|δ_{2,n}| | P_A, B=b') ≥ 20c_3 s/√n.

Subsequently, we will assume that b' = b^+; the case of b' = b^- is handled analogously. From the relation η > δ/s and (<ref>) we have: (η^*_n | P_A, B=b^+) ≥ 20c_3/√n ≥ (η^*_n | P_A, B=b) + 10c_3/√n.

When comparing the value of κ^*_n in the two cases, B=b^+ and B=b, it is convenient to consider κ^* as the length of the vector (κ_1, κ_2, κ_3, 1/√n) in Euclidean space. For each of the coordinates κ_i, i=1,2,3, we have: |(κ_{i,n} | P_A, B=b^+) - (κ_{i,n} | P_A, B=b)| < 40c_3/√n, hence: (κ^*_n | P_A, B=b^+) < (κ^*_n | P_A, B=b) + 120c_3/√n.

Introducing (<ref>) and (<ref>) into the definition of ψ^(1)_n, we obtain directly: (ψ^(1)_n | P_A, B=b^+) > (ψ^(1)_n | P_A, B=b) + 10c_3/√n - (3p/s)·120c_3/√n ≥ -2c_3/√n + 10c_3/√n - (3p/s)·120c_3/√n > 4c_3/√n, where we again used the fact that p is a sufficiently small constant w.r.t. s. We thus obtain: (ψ^(1)_n | P_A, B ∈ [b + 20c_3 s√n, b + 21c_3 s√n]) > 4c_3/√n, where by the definition of the random variable B as a sum of i.i.d. binary random variables and the choice of the value b in the direct vicinity of the expectation of B, the event B ∈ [b + 20c_3 s√n, b + 21c_3 s√n] holds with constant probability. The case of b' = b^- is handled analogously.

Let c_{t_2} be an arbitrary starting configuration of the system such that max{ψ^(1)_{t_2}, ψ^(2)_{t_2}} = ψ^(1)_{t_2} ≥ 4c_3/√n. Then, with constant probability, for some t_3 = t_2 + O(n log n), a configuration c_{t_3} is reached such that δ_{t_3} > s/12.

We subsequently consider only the process ψ^(1)_t. We start by showing the following claim.

Claim. Suppose ψ^(1)_0 = A ≥ 4c_3/√n. Then, with probability at least 1 - exp[-A^2 psn/46656], for some time step t ≤ 72n/(ps) the process reaches a value ψ^(1)_t ≥ 2A, or δ_t > s/12.

Proof (of claim). Consider the Doob submartingale Y_t = ∑_{τ=0}^{t-1} X_τ with increments (X_t) given as: X_t = Δψ^(1)_t - psA/(48n) if Y_t > -A/2 and neither of the barriers ψ^(1)_τ ≥ 2A, δ_τ > s/12 has been reached for any τ ≤ t, and X_t = 0 otherwise. Noting that |X_t| ≤ 9/n, an application of the Azuma inequality for submartingales (cf. e.g. <cit.>[Thm. 16]) to (Y_T) with T = 72n/(ps) gives: Pr[Y_T ≤ -A/2] ≤ exp[-A^2 psn/46656]. Moreover, assuming the barrier δ_t > s/12 was not reached, we have: (ψ^(1)_T | Y_T > -A/2) = ψ^(1)_0 + (psA/(48n))T + Y_T > A + (psA/(48n))·(72n/(ps)) - A/2 = 2A, which completes the proof of the claim.

We now prove the lemma by iteratively applying the claim over successive intervals of time (τ_0, τ_1, …), such that τ_0 = t_2 and τ_{i+1} is the first time step not before τ_i such that ψ^(1)_{τ_{i+1}} ≥ 2ψ^(1)_{τ_i} or δ_{τ_{i+1}} ≥ s/12. By the claim, we have: Pr[τ_{i+1} - τ_i ≤ 72n/(ps)] ≥ 1 - exp[-(ψ^(1)_{τ_i})^2 psn/46656]. Noting that c_3 > 48/(ps) by definition, and that before the barrier δ > s/12 is reached we have ψ^(1)_{τ_i} ≥ (4c_3/√n)·2^i ≥ (192/(ps√n))·2^i, we obtain: Pr[τ_{i+1} - τ_i ≤ 72n/(ps)] > 1 - exp[-4^{i+1}], and further: Pr[τ_{i+1} ≤ (72n/(ps))(i+1)] > ∏_{j=0}^i (1 - exp[-4^{j+1}]) > 1 - ∑_{j=0}^i exp[-4^{j+1}] > 0.98. In particular, putting i = log_2 n, Pr[τ_i ≤ (72n/(ps)) log_2 n] > 0.98.
Since for this value of i we must have δ_{τ_i} ≥ s/12 (since otherwise we would have ψ^(1)_{τ_i} = ω(1), which is impossible), the claim of the lemma follows.

Phase with δ > s/12. The second phase of convergence corresponds to configurations of the system which are sufficiently far from the center point (a_1, a_2, a_3) = (s/3, s/3, s/3). Formally, we analyze a variant of the potential ϕ (with an additive corrective factor proportional to κ^2) to show that, starting from a configuration with δ > s/12, we will eliminate one of the three populations a_1, a_2, a_3 in O(n log n) sequential steps with constant probability, without approaching the center point too closely (a value of δ = Ω(1) will be maintained throughout).

For this part of the analysis, we define the considered potential as: ψ = η^2 - (4p/s^2)κ^2 = ln(s^3/27) - ϕ - (4p/s^2)κ^2, for any configuration c with a_min > 0. We have directly from (<ref>) and (<ref>): ψ̇ = -ϕ̇ - (4p/s^2)·2κκ̇ ≥ (p/s)((1/2)δ^2 - κδ + 4κ^2 - (144p/s)κδ) = (p/s)((1/4)δ^2 + (δ/2 - 2κ)^2 + (1 - 144p/s)κδ) ≥ (1/4)(p/s)δ^2, where in the last transformation we took into account that p ≤ s/144.

For the sake of technical precision in formulating the subsequent lemmas, we also consider the stochastic process ψ^*_t, given as ψ^*_t = ψ(c_t) for any t < t_d, where t_d is defined as the first time in the evolution of the system at which a configuration with a_{min,t_d} < c_6/n is reached, where c_6 = 313600/s is a constant depending only on s. For all t ≥ t_d, we define ψ^*_t := ψ^*_{t-1} + 1/n.

In any configuration c_t with δ ≥ s/20 we have: 𝔼Δψ^*_t ≥ (1/8)pδ^2/(sn) > ps/(3600n).

We have: 𝔼Δψ = -𝔼Δϕ - (4p/s^2)𝔼Δ(κ^2). Following the definition of ϕ in Eq. (<ref>), we have by linearity of expectation: 𝔼Δϕ = 𝔼(∑ ln(a_i + Δa_i) - ∑ ln a_i) = ∑ 𝔼 ln(1 + Δa_i/a_i). Next, using the bound ln(1+b) ≤ b, which holds for b > -1, we have: ∑ 𝔼 ln(1 + Δa_i/a_i) ≤ ∑ 𝔼(Δa_i)/a_i = ∑ (ȧ_i/n)/a_i = ϕ̇/n, from which it follows that: 𝔼Δϕ ≤ ϕ̇/n.

To analyze 𝔼Δ(κ^2), we apply a variant of Lemma <ref>. A direct application of the lemma is not sufficient due to the singularity related to the a_i^{-1} term in the definition of κ_i; however, this effect is compensated when we take into account that any change of the value of κ_i^2 occurs in the considered protocol with probability at most proportional to a_i. For the specific case of κ_i^2, for fixed i=1,2,3, we consider κ_i^2 : ℝ^2 → ℝ as a function of the restricted configuration c = (a_i^+, a_i^++), and we rewrite expression (<ref>) as: 𝔼(Δκ_i^2) = ∑_{c' = (a_i^+', a_i^++') ≠ c} (∇f(c)·(c' - c) + R_2(c,c')) Pr(c'|c) ≤ d(κ_i^2)/dt·(1/n) + (1/n^2) max_{‖c^* - c‖_∞ ≤ 1/n} D_{κ_i^2}(c^*) ∑_{c' ≠ c} Pr(c'|c) ≤ d(κ_i^2)/dt·(1/n) + (1/n^2) max_{‖c^* - c‖_∞ ≤ 1/n} D_{κ_i^2}(c^*)·a_i.

A straightforward computation from the definition of the function κ_i shows that: max_{‖c^* - c‖_∞ ≤ 1/n} D_{κ_i^2}(c^*) ≤ 8s^2/a_i^2. It follows that 𝔼(Δκ_i^2) ≤ d(κ_i^2)/dt·(1/n) + (a_i/n^2)·(8s^2/a_i^2) = (1/n)(d(κ_i^2)/dt + 8s^2/(a_i n)), and so: 𝔼(Δκ^2) ≤ (1/n)(d(κ^2)/dt + 24s^2/(a_min n)).

Introducing (<ref>) and (<ref>) into (<ref>), we obtain: 𝔼Δψ = -𝔼Δϕ - (4p/s^2)𝔼Δ(κ^2) ≥ -(1/n)ϕ̇ - (1/n)(4p/s^2)(d(κ^2)/dt + 24s^2/(a_min n)) = (1/n)ψ̇ - 96p/(n^2 a_min) ≥ (p/(4sn))(δ^2 - 392s/(a_min n)) ≥ (p/(8sn))δ^2, where in the second-to-last transformation we used (<ref>), and in the last transformation we used the relation 392s/(a_min n) ≤ δ^2/2, which holds when δ ≥ s/20 and a_min ≥ c_6/n. The claim thus follows when ψ^*_t = ψ_t and ψ^*_{t+1} = ψ_{t+1}, i.e., for t < t_d. For larger values of t, the claim follows trivially from the definition of ψ^*_t.
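For readers tracking the two-phase argument against simulation traces, the potentials involved can be evaluated directly from a configuration. The following helper is a minimal sketch; the concentration-list layout (matching the snippets earlier in this section) is an assumption of the snippet, and the formulas are exactly those of ϕ, δ, κ, η, and ψ defined above.

```python
import math

def potentials(a, app, s, p):
    """Evaluate the potentials used in the x = 0 analysis of P_o.

    a[i], app[i]: concentrations of species A_{i+1} and of its
    aggressive flavor A_{i+1}^{++}, with s = a_1 + a_2 + a_3.
    """
    assert all(v > 0 for v in a), "potentials are defined only for a_min > 0"
    # phi = ln(a_1 a_2 a_3)
    phi = math.log(a[0] * a[1] * a[2])
    # delta_i = a_i - a_{i-1} (cyclic), delta = l2-norm of (delta_1, delta_2, delta_3)
    delta = math.sqrt(sum((a[i] - a[(i - 1) % 3]) ** 2 for i in range(3)))
    # kappa_i = s a_i^{++}/a_i - a_i, kappa = l2-norm of (kappa_1, kappa_2, kappa_3)
    kappa = math.sqrt(sum((s * app[i] / a[i] - a[i]) ** 2 for i in range(3)))
    # eta = (ln(s^3/27) - phi)^{1/2}; the max() guards against float round-off,
    # since ln(s^3/27) - phi >= 0 by the AM-GM inequality
    eta = math.sqrt(max(0.0, math.log(s ** 3 / 27) - phi))
    # psi = eta^2 - (4p/s^2) kappa^2, the potential of the phase delta > s/12
    psi = eta ** 2 - (4 * p / s ** 2) * kappa ** 2
    return {"phi": phi, "delta": delta, "kappa": kappa, "eta": eta, "psi": psi}
```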
The above Lemma is used to show that, starting from any configuration with δ > s/12, we quickly reach a configuration in which some species has a constant number of agents.

If δ_t ≥ s/20, we have: (i) |Δψ^*_t| ≤ c_7, (ii) Var[Δψ^*_t] ≤ c_8/n, where c_7>0 and c_8>0 are constants depending only on s. Moreover, in any configuration c with a_min ≥ 2/n, we have: (iii) |Δψ(c)| ≤ c_7/(n·a_min), (iv) Var[Δψ(c)] ≤ c_8/(n^2·a_min).

We first consider the case of a configuration with a_min ≥ 2/n, using the definition of ψ (and, within it, of ϕ and κ). Consider any transition from a configuration c to a subsequent configuration c', and let S ⊆ {1,2,3} be defined as the set of indices of configurations changing between c and c' (S = {i : a_i^+(c) ≠ a_i^+(c') ∨ a_i^++(c) ≠ a_i^++(c')}). We verify that there exists an absolute constant c_7 > 0 such that:

|ψ(c') - ψ(c)| ≤ c_7/(n·min_{i∈S} a_i) ≤ c_7.

Moreover, by the definition of the protocol, a transition from c to c' occurs with probability Pr(c'|c) ≤ min_{i∈S} a_i. Since there is only a constant number of possible successor configurations c_t+1 for c_t (loosely bounding, not more than 3^6), it follows that:

Pr[|Δψ(c)| > 1/b] < 3^6·c_7·b/n for any b>0, and Pr[|Δψ(c)| > 1/b] = 0 whenever 1/b > c_7/(n·a_min).

The bounds on the variance Var[Δψ(c)], and on that of Δψ^*_t = Δψ(c_t) (for t < t_d) with a_min,t ≥ (c_6+1)/n, follow directly. The analysis of Δψ^*_t when a_min,t = c_6/n and t < t_d is performed analogously, noting that if the succeeding configuration c' = c_t+1 is such that a_min(c') < c_6/n, then Δψ^*_t = 1/n. Finally, for t ≥ t_d, the result holds trivially by the definition of ψ^*_t.

Let c_t_3 be an arbitrary starting configuration of the system such that δ_t_3 > s/12. Then, with probability 1-O(1/n), for some t_4=t_3+O(n log n), a configuration c_t_4 is reached such that a_min,t_4 = c_6/n.

W.l.o.g. assume that t_3 = 0. First we remark that, by the relation between η and δ for δ ≤ s/12, a process starting with δ_0 > s/12 satisfies:

ψ^*_0 = ψ_0 = η^2_0 - (4p/s^2)κ_0^2 > (s/12)^2/s^2 - 12p/s^2 > 1/150.

Moreover, for any configuration c' with δ(c') ≤ s/20 we have:

ψ(c') = η^2(c') - (4p/s^2)κ^2(c') ≤ ((3/2)(s/20))^2/s^2 < 1/170.

Thus, initially ψ_0 > 1/150, and as long as for all time steps t we have ψ_t ≥ 1/170, the barrier condition δ_t ≥ s/20 has not been violated. Moreover, for ψ_t ∈ [1/170, 1/150], we have by Lemma <ref> that E[Δψ_t] ≥ 0. Moreover, by Lemma <ref>(iii) and the fact that, for ψ_t ≤ 1/150, we have η_t^2 < 1/144, and so δ_t ≤ s/12 and a_min,t > s/4, we have that |Δψ_t| ≤ 4c_7/(sn).

It follows from a standard application of Azuma's inequality for martingales (resembling the analysis of the hitting time of a random walk with step size O(1/n), from one endpoint of a path of length Θ(1) to the other) that:

Pr[∃ t < n^2/ln n : ψ_t < 1/170] = O(1/n),

hence also throughout the first n^2/ln n steps of the process we have δ > s/20, with probability 1 - O(1/n). We are now ready to analyze the subsequent stages of the process, designing a Doob submartingale Y_t = ∑_τ=0^t-1 X_τ with time increments (X_t) defined as:

X_t = Δψ^*_t - ps/(3600n), if ψ_τ > 1/170 and a_min,τ ≥ c_6/n for all τ ≤ t, and X_t = 0 otherwise.

Using Lemma <ref>(i) and (ii) and applying the Azuma-McDiarmid inequality[If our objective in the proof of the lemma were to show a bound on t_4 which holds with constant probability (which would be sufficient for our purposes later on), rather than a w.h.p. bound, then this specific step of the proof could also be performed using Markov's inequality. In any case, we would need to make use of the bounded variance of ψ^*_t in the proof of the next Lemma.] in the bounded variance version (cf. e.g. <cit.>[Thm.
18]) to Y_t for t_c = c^3·n ln n, for some sufficiently large constant c > 0 depending only on s, we obtain:

Pr[Y_t_c ≤ -c^2 ln n] ≤ exp[-c^4 ln^2 n/(2t_c·c_8/n + (2/3)c_7·c^2 ln n)] = exp[-c ln n/(2c_8 + (2/3)c_7/c)] = O(1/n).

If the event X_t = Δψ^*_t - ps/(3600n) were to hold for all t < t_c with c = 3·3600/(ps), and if Y_t_c > -c^2 ln n, then we would have ψ^*_t_c = ψ^*_0 + Y_t_c + (ps/(3600n))·t_c ≥ 0 - c^2 ln n + 3c^2 ln n = 2c^2 ln n, which would mean that ψ^*_t_c ≠ ψ_t_c, since ψ ≤ 3 ln n + O(1) by definition. If ψ^*_t_c ≠ ψ_t_c, then t_4 < t_c, and the proof is complete. (Indeed, to reach a configuration with a_min < c_6/n, the protocol has to pass through a configuration with a_min = c_6/n, since the size of each population changes by at most 1 in each transition.) Otherwise, we must have that at least one of the following events holds: Y_t_c ≤ -c^2 ln n, or ψ_τ ≤ 1/170 for some τ < t_c, or a_min,τ < c_6/n for some τ < t_c. We have established that each of the first two of these events holds with probability O(1/n), whereas if the latter event holds, then t_4 < t_c. Thus, t_4 < t_c holds with probability 1-O(1/n) by a union bound.

Let c_t_4 be a starting configuration of the system such that a_min,t_4 = c_6/n. Then, with constant probability, for some t_5=t_4+O(n log n), a configuration c_t_5 is reached such that a_j,t_5 ≤ c_6/n and a_j+1,t_5 > s/40, for some j ∈ {1,2,3}.

W.l.o.g. assume that arg min_{i=1,2,3} a_i,t_4 = 2. If a_3,t_4 > s/40, then the claim follows immediately, putting t_5 = t_4 and j = 2. Otherwise, we will show that with constant probability, the system will evolve so that a_2 will increase over time, until within O(n log n) steps we will have a time step t_5 with j = 3 (i.e., a_3,t_5 ≤ c_6/n and a_1,t_5 > s/40).

In the considered case, w.l.o.g. assume t_4 = 0. Next, let T = cn ln n for a sufficiently large constant c; we choose c := 2·log_2(1/(0.005ps)) for convenience in later analysis. Intuitively, in view of Lemmas <ref> and <ref>, the potential ψ^*_T will be further increased in the next steps: the random variable ψ^*_T - ψ^*_0 (conditioned on c_0) has an expected value of Θ(T/n) = Θ(log n), with a standard deviation of Θ(√(T/n)) = Θ(√(log n)).

By an application of the Azuma-McDiarmid inequality for martingales with bounded variance, similar to that in the proof of Lemma <ref>, we obtain the following result:[Such an analysis can also be performed using Chebyshev's inequality, obtaining a slightly weaker expression in the probability bound.]

Pr[∀ t ≤ T : ψ^*_t ≥ ψ^*_0 + pst/(3600n) - εT/n] = 1 - n^{-Ω(1)}, for any constant ε > 0.

Observe that since a_2,0 = c_6/n = O(1/n), we have ψ^*_0 ≥ ln n - O(1). Taking this into account, for our purposes, a slightly weaker and simpler form of expression (<ref>) will be more convenient:

Pr[∀ t ∈ [0.5n·log_2 n, T] : ψ^*_t ≥ (1 + 10^-4·ps)·ln n] = 1 - o(1).

The proof of the lemma is completed by a more fine-grained analysis of the considered protocol. In the initial configuration t_4 = 0, we have a_2,0 = c_6/n (there are exactly c_6 agents in state A_2), and since t_4 ≠ t_5, we have a_3,0 < s/40. Consequently, a_1,0 = s - s/40 - O(1/n) > 0.9s. Informally, since the prey of A_2 (i.e., A_1) is more than twice as numerous as its predator (i.e., A_3), we should observe an increase of the size of the population of A_2, regardless of the activities (A_i^+ or A_i^++) of the agents in the population. We consider the evolution of the system, finishing at the earliest time t_e when a_2,t_e > s/100. The following relations are readily shown (apply e.g.
Lemma <ref> with i=2 and x=0):

E[Δa_2,t | a_3,t < 0.05s, t < t_e] ≥ (0.05ps/n)·a_2,
E[Δa_3,t | a_3,t < 0.05s, t < t_e] ≤ -(0.05ps/n)·a_3.

From (<ref>), taking into account that |Δa_i,t| ≤ 1/n and a_3,0 < s/40, an application of Azuma's inequality for martingales shows that:

Pr[∀ t ≤ min{t_e, T} : a_3,t < 0.05s] = 1 - o(1).

Taking into account the above, by a straightforward geometric growth analysis (compare e.g. proof of Lemma <ref>), we obtain from (<ref>):

Pr[t_e < T] = 1 - o(1).

Moreover, since the speed of increase of a_2 is bounded (even in the absence of predators) by that of a standard push rumor spreading process (formally, E[Δa_2,t] ≤ a_2,t/n), we have (compare e.g. <cit.>):

Pr[t_e > 0.5n·log_2 n] = 1 - o(1).

Now, we observe that, with constant probability, the size of population A_2 does not decrease in the time interval [0, t_e] below the value a_2,0 = c_6/n, attained at the beginning of this interval:

Pr[∀ t ∈ [0,t_e] : a_2,t ≥ c_6/n] = Ω(1).

Indeed, with constant probability the value a_2,t is initially non-decreasing: with constant probability, in the first O(n) rounds each of the c_6 = O(1) agents from A_2 will be triggered by the scheduler O(1) times in total, and each interaction involving an agent from A_2 will have this agent as the initiator, and an agent from the largest of the three populations, A_1, as the receiver (the prey). Thus, with constant probability, the number of agents in population A_2 is increased to an arbitrarily large constant (e.g., 1000c_6). After this, we use the geometric growth property (<ref>) to show that a_2,t reaches the barrier a_2,t > s/100 (at time t_e) before the event a_2,t < c_6/n occurs (cf. e.g. proof of Lemma <ref>, or standard analysis of variants of rumor-spreading processes in their initial phase <cit.>).

When the event from bound (<ref>) holds, at least one of the following events must also hold:

(A) a_min,t ≥ c_6/n, for all t ≤ t_e,
(B) or there exists a time step t < t_e such that a_1,t ≤ c_6/n,
(C) or there exists a time step t < t_e such that a_3,t ≤ c_6/n.

To complete the proof, we will show that each of the events (A) and (B) holds with probability o(1). Indeed, then in view of (<ref>), event (C) will necessarily hold with probability Ω(1). This means that, with probability Ω(1), there exists a time step t < t_e such that a_3,t < c_6/n and a_2,t < s/100 (since t < t_e), and so also a_1,t > s - s/100 - c_6/n > 0.98s > s/40; thus, the claim of the lemma will hold with t_5 = t and j=3.

To show that event (B) holds with probability o(1), notice that a_2,t < s/100 by the definition of t_e, and moreover a_3,t < 0.05s with probability 1-o(1); hence, with probability 1-o(1), we have a_1,t > s - s/100 - 0.05s = 0.94s, and so the event a_1,t ≤ c_6/n holds with probability o(1).

To show that event (A) holds with probability o(1), notice that, substituting in (<ref>) t = t_e, by a union bound over (<ref>), (<ref>) and (<ref>) we obtain:

Pr[ψ^*_t_e ≥ (1 + 10^-4·ps)·ln n] = 1 - o(1).

This means that, with probability 1-o(1), we have ψ^*_t_e ≠ ψ_t_e or ψ_t_e ≥ (1 + 10^-4·ps)·ln n. In the first case, event (A) cannot hold. In the second case, observe that a_2,t_e = s/100 + O(1/n) by the definition of t_e, so a_1,t_e > s - s/100 - O(1/n) - c_6/n > 0.98s, and it follows that ψ_t_e = ∑_i=1^3 ln(1/a_i,t_e) + O(1) = ln n + O(1). Since the condition ψ_t_e ≥ (1 + 10^-4·ps)·ln n is then not fulfilled, event (A) can only hold with probability o(1).

Let c_t_5 be a starting configuration of the system such that a_j,t_5 ≤ c_6/n and a_j+1,t_5 > s/40, for some j ∈ {1,2,3}. Then, with constant probability, for some t_6=t_5+O(n), a configuration c_t_6 is reached such that a_min,t_6 = 0.
We consider the pairs of interacting agents chosen by the scheduler in precisely the next n rounds after time t_5. Given that set A_j,t_5 has constant size, and set A_j+1,t_5 has linear size in n, it is straightforward to verify that, with constant probability, the set of randomly chosen n pairs of agents has all of the following properties:

* Each agent from A_j,t_5 belongs to exactly one pair picked by the scheduler, and is the receiver in this pair.
* Each agent interacting in a pair with an agent from A_j,t_5 belongs to exactly one pair.
* Each agent interacting in a pair with an agent from A_j,t_5 belongs to set A_j+1,t_5.

Conditioned on such a choice of interacting pairs by the scheduler, the protocol changes the state of all agents from set A_j,t_5 to state j+1 with probability at least p^|A_j,t_5| ≥ p^c_6 = Ω(1). State j is then effectively eliminated. In the absence of species j, the interaction between species j-1 and j+1 collapses to a lazy predator-prey process, with transitions of the form (A_j-1, A_j+1) → (A_j-1, A_j-1) associated with a constant transition probability. A w.h.p. bound on the time of elimination of species j+1 follows immediately from the analysis of the push rumor spreading model, and we have the following Lemma.

Let c_t_6 be a starting configuration of the system such that a_j,t_6 = 0, for some j ∈ {1,2,3}. Then, with probability 1 - O(1/n), for some t_7=t_6+O(n log n), a configuration c_t_7 is reached such that for all t ≥ t_7, a_j,t = a_j+1,t = 0 and a_j-1,t = s.

After a further O(n log n) steps after time t_7, the final configuration of all agents in the oscillator's population will be a_j-1,t^++ = s.

§.§ Operation of the Oscillator in the Presence of a Source

In this section we prove properties of the oscillatory dynamics for the case X>0. It is possible to provide a detailed analysis of the limit trajectories of the dynamics in this case, as a function of the concentration of x. Here, for the sake of compactness, we only show the minimal number of properties of the oscillator required for the proof of Theorem <ref>.

When the given configuration is such that a_min is sufficiently large, say a_min > 0.02s, then both subclaims of Theorem <ref>(2) hold for the considered configuration. (The first subclaim holds directly; the second subclaim follows by a straightforward concentration analysis of the number of agents changing state in protocol P_o over the next 0.01sn steps, since we will always have a_min ≥ 0.01s during the considered time interval.) Otherwise, the considered configuration is close to one of the sides of the triangle. We will show that in the next O(n log n) steps, with high probability, the protocol will either reach a configuration with a_min > 0.02s, or will visit successive areas around the triangle, as illustrated in Fig. <ref>. The following Lemmas show that within each area, an exponential growth process occurs, which drives the configuration towards the next area (a simplified mean-field sketch of these per-area drifts is given below).

If a_i-1 < 0.8s and a_i+1 < 0.05s, then ȧ_i-1 ≤ xs/3 - 0.05ps·a_i-1.

From the assumptions we have that a_i > 0.15s. Starting from (<ref>) we obtain:

ȧ_i-1 = x(s/3 - a_i-1) + p·a_i+1(a_i-1 + a_i-1^++) - p·a_i-1(a_i + a_i^++) ≤ xs/3 + 2p·a_i-1·a_i+1 - p·a_i-1·a_i ≤ xs/3 + p·a_i-1(2a_i+1 - a_i) ≤ xs/3 + p·a_i-1(0.1s - 0.15s) = xs/3 - 0.05ps·a_i-1.

From the above bound on expectation, the following Lemma follows directly by a standard concentration analysis.
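To build intuition for these area-to-area transitions, the following is a minimal numerical sketch (ours, not part of the formal analysis) that integrates a simplified mean-field version of the drift equations above by forward Euler. As a simplifying assumption, we collapse the activity split, absorbing a_i^++ into a_i (so the predation gain of species i becomes p·a_i·a_{i-1}); the function and parameter names are our own.

```python
def step(a, x, p, s, dt):
    """One forward-Euler step of a collapsed mean-field oscillator: species i
    preys on species i-1 (mod 3); the source re-randomizes agents towards the
    uniform mix (s/3, s/3, s/3) at rate x.  The +/++ activity split is ignored."""
    da = []
    for i in range(3):
        prey, predator = a[(i - 1) % 3], a[(i + 1) % 3]
        da.append(x * (s / 3 - a[i]) + p * a[i] * prey - p * predator * a[i])
    return [ai + dt * d for ai, d in zip(a, da)]

s, p, x, dt = 1.0, 1.0, 0.01, 0.01
a = [0.70 * s, 0.25 * s, 0.05 * s]   # a configuration close to a side of the triangle
for t in range(2001):
    if t % 400 == 0:
        print(f"t = {t*dt:5.1f}   a = ({a[0]:.3f}, {a[1]:.3f}, {a[2]:.3f})")
    a = step(a, x, p, s, dt)
```

Over a single pass the mass visibly rotates (here from the first species towards the second), illustrating the per-area drift directions used in the Lemmas; note that in this collapsed mean-field picture the sustained oscillation of the actual protocol is a stochastic effect, which is why the rigorous argument proceeds through potential functions rather than through the deterministic limit.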
In what follows, we consider an execution in which the concentration x is strictly positive and bounded by a sufficiently small absolute constant (i.e., X is at most a given constant fraction of the entire population), with the required upper bounds on x used in the proofs of the lemmas given in their statements. This is a technical assumption which allows us to simplify the proof structure. In particular, the assumption X ≤ c_12·n can be omitted in the statement of Theorem <ref>, and the claim of the theorem can even be proved for executions in which X changes during the execution of the protocol, as long as the invariant X > 0 is preserved over the considered interval of time.

Let c_t_a be a starting configuration of the system such that a_i-1,t_a < 0.75s and a_i+1,t_a < 0.05s. Suppose x < 10^-3·ps, starting from time t_a. Then, for some t_b ∈ [t_a, t_a + c_9·n], where c_9 is a constant depending only on p and s, with probability 1-e^{-n^{Ω(1)}}, the system reaches a configuration c_t_b such that exactly one of the following two conditions is fulfilled:

* either a_min,t_b ≥ 0.02s,
* or a_i+1,t_b < 0.05s and a_i-1,t_b < 0.02s.

In the considered range of values of a_i-1, we have a_i-1,t_a < 0.75s and a_i-1,t ≥ 0.02s, for all t until we leave the considered area at time t_b. Taking into account that x < 10^-3·ps, it follows from Lemma <ref> that:

E[Δa_i-1] ≤ (1/n)(xs/3 - 0.05ps·a_i-1) ≤ (1/n)(10^-3·ps^2/3 - 10^-3·ps^2) < -0.0005ps^2/n.

Taking into account that |Δa_i-1| ≤ 1/n, it follows from a straightforward concentration analysis (cf. e.g. proof of Lemma <ref> for a typical analysis of this type of exponential growth process) that a boundary of the considered area (either a_i-1,t < 0.02s or a_min,t > 0.02s) must be reached within O(n) steps with very high probability, as stated in the claim of the lemma.

A similar analysis is performed for the next area.

If a_i+1 < 0.25s and a_i-1 < 0.05s, then ȧ_i+1 ≥ xs/12 + 0.6ps·a_i+1 and ȧ_i-1 ≤ xs/3 - 0.2ps·a_i-1.

From the assumptions we have that a_i > 0.7s. Starting from (<ref>) we obtain:

ȧ_i+1 = x(s/3 - a_i+1) + p·a_i(a_i+1 + a_i+1^++) - p·a_i+1(a_i-1 + a_i-1^++) ≥ x(s/3 - a_i+1) + p·a_i+1(a_i - 2a_i-1) ≥ x(s/3 - 0.25s) + p·a_i+1(0.7s - 2·0.05s) = xs/12 + 0.6ps·a_i+1,

ȧ_i-1 = x(s/3 - a_i-1) + p·a_i+1(a_i-1 + a_i-1^++) - p·a_i-1(a_i + a_i^++) ≤ xs/3 + p·a_i-1(2a_i+1 - a_i) ≤ xs/3 + p·a_i-1(2·0.25s - 0.7s) = xs/3 - 0.2ps·a_i-1.

Again, a concentration result follows directly.

Let c_t_b be a starting configuration of the system such that a_i-1,t_b < 0.02s and a_i+1,t_b < 0.02s. Suppose x < 0.02ps, starting from time t_b. Then, for some t_a' ∈ [t_b, t_b + c_10·n·ln(1/max{1/n, a_i+1,t_b})] ⊆ [t_b, t_b + c_10·n ln n], where c_10 is a constant depending only on p and s, with probability 1-O(1/n^3), the system reaches a configuration c_t_a' such that exactly one of the following two conditions is fulfilled:

* either a_min,t_a' ≥ 0.02s,
* or a_i-1,t_a' < 0.05s, a_i+1,t_a' > 0.25s, and (consequently) a_i,t_a' < 0.75s.

We first show that, starting from time t_b onward, the process a_i-1,t satisfies a_i-1,t < 0.05s for all t ∈ [t_b, t_*] with probability 1-e^{-n^{Ω(1)}}, where t_* is defined as the minimum of the time t_b + c_10·n ln n and the last time moment such that a_i+1,t ≤ 0.25s holds for all t ∈ [t_b, t_*]. By Lemma <ref>, we have for all t ∈ [t_b, t_*] such that a_i-1,t > 0.04s:

E[Δa_i-1] = ȧ_i-1/n ≤ (1/n)(xs/3 - 0.2ps·a_i-1) < (1/n)(0.02ps^2/3 - 0.008ps^2) < -0.001ps^2/n,

where we took into account the assumption x < 0.02ps.
The claim on a_i-1,t < 0.05s follows from a standard concentration analysis, noting that |Δa_i-1,t| ≤ 1/n.

In order to analyze the process a_i+1,t, we apply a filter and consider the process a'_i+1,t, starting at time t_b, defined as follows. For as long as a_i-1,t < 0.05s, we put a'_i+1,t := a_i+1,t, and starting from the first time t_** when a_i-1,t > 0.05s, we compute a'_i+1,t+1 as the subsequent value of a_i+1 after a simulation of a single step of the process for some state c with concentrations of types: x(c) = x, a_i+1(c) = a'_i+1,t, a_i-1(c) = 0.05s, and a_i(c) = s - a_i-1(c) - a_i+1(c).

For a given time step t, let R_t denote the event that Δa'_i+1,t ≠ 0. By the construction of protocol P_o, which always requires at least one agent of type X or type A_i+1 to be involved in an interaction which creates or destroys an agent of type A_i+1, we have:

Pr[R_t] ≤ x + 2a'_i+1,t.

Moreover, from Lemma <ref> it follows that for a'_i+1,t < 0.25s:

E[Δa'_i+1,t] ≥ (1/n)(xs/12 + 0.6ps·a'_i+1,t).

Since E[Δa'_i+1,t | R̄_t] = 0, we have:

E[Δa'_i+1,t | R_t] ≥ (1/n)(xs/12 + 0.6ps·a'_i+1,t)/Pr[R_t] ≥ (1/n)(xs/12 + 0.6ps·a'_i+1,t)/(x + 2a'_i+1,t) ≥ 0.3ps/n,

and moreover Δa'_i+1,t | R_t ∈ {-1/n, 0, 1/n}. Analysis of this type of process is folklore (in the context of epidemic models with infection and recovery) but somewhat tedious; we sketch the argument for the sake of completeness. When considering only those steps for which event R_t holds, the considered process dominates a lazy random walk on the line {0, 1/n, 2/n, …}, with a constant bias towards its right endpoint. To facilitate analysis, we define points Q_c = c·⌊α ln n⌋/n, for c = 0,1,…, where the constant α > 0 is subsequently suitably chosen, and for any point Q_c to the right of the starting point of the walk (i.e., c > c_min, where c_min is the smallest integer such that Q_c_min+1 > a_i+1,t_b), we define s_c as the number of steps of the walk until its first visit to Q_c. For a suitable choice of constants α and β > 0 sufficiently large, we have that for any c, with probability at least 1 - O(1/n^2), s_c+1 - s_c ≤ β ln n, and moreover between its step s_c and its step s_c+1, the walk is confined to the subpath (Q_c-1, Q_c+1) of the considered path. Considering the original time t of our process a'_i+1,t (including the moments at which R_t does not hold), let t_c be the moment of time corresponding to the s_c-th step of the walk. Conditioning on events which hold with probability 1 - O(1/n^2), the value t_c+1 - t_c can be stochastically dominated by the sum of β ln n independent geometrically distributed random variables, each with expected value O(n/max{1, (c-1) ln n}). Let c_max be the largest positive integer such that Q_c_max < 0.25s. Applying a union bound on the conditioning of all intervals t_c+1 - t_c, for c ≥ c_min, and a concentration bound on the considered geometric random variables, we eventually obtain that with probability 1 - O(1/n^3) the condition a'_i+1,t > 0.25s is achieved for a time:

t < ∑_c = c_min^c_max (t_c+1 - t_c) = O(n + ln n·∑_c = c_min^c_max n/max{1, (c-1) ln n}) = O(n(ln n - ln max{1, a_i+1,t_b·n} + 1)) = O(n·ln(1/max{1/n, a_i+1,t_b})).

Recalling that a'_i+1,t = a_i+1,t holds throughout the considered time interval with very high probability, the claim follows.

An iterated application of Lemmas <ref> and <ref> moves the process along time moments t_a, t_b, t_a', …, where the time moment t_a' is again fed to Lemma <ref>, considering the succeeding value of i. After a threefold application of both Lemmas, the process has, w.h.p.,
in O(n log n) steps, either performed a complete rotation, passing through three moments of time designated as “t_a”, each rotated by one third of a full circle, or reached at some time t' a point with a_min,t' ≥ 0.02s. In either case, the claim of Theorem <ref>(2) follows directly.

§ ANALYSIS OF PROTOCOL FOR DETECT

§.§ Further Properties of the Oscillator

We start by stating a slight generalization of Lemma <ref>, capturing the expected change of potential ψ_t^* (given by (<ref>)) for the case X > 0, for configurations which are sufficiently far from both the center and the sides of the triangle.

In any configuration c_t with 10^-6·s^2 ≤ a_min ≤ 0.02s and x < c_12 we have: E[Δψ_t] ≥ ps/(7200n), where c_12 > 0 is a constant which depends only on s and p.

We condition the expectation of Δψ_t on the event E_t, which holds if an agent in state X participates in the current interaction. Conditioned on the complementary event E̅_t, the analysis corresponds directly to the computations performed for the case x=0, where we remark that the assumptions of Lemma <ref> are satisfied due to the assumed upper bound on a_min. Thus:

E[Δψ_t | E̅_t] > ps/(3600n).

Next, taking into account the lower bound on a_min, by exactly the same argument as in Lemma <ref>(iii), we have |Δψ_t| < c'_12/n, for some choice of constant c'_12 > 0 which depends only on s and p. Obviously,

E[Δψ_t | E_t] > -c'_12/n,

and since Pr[E_t] < 2x, by the law of total expectation:

E[Δψ_t] > (1-2x)·ps/(3600n) - 2x·c'_12/n > ps/(7200n),

where the last inequality holds for any x < c_12, for a suitable choice of the constant c_12 depending only on s and p.

Suppose a_min,t_0 < 10^-6·s^4 at some time t_0. Then, there exists an absolute constant c_13 > 0 such that the following event holds with probability 1-e^{-n^{Ω(1)}}: for all t ∈ [t_0, t_0 + e^{n^{c_13}}], we have a_min,t < 0.01s^2.

Let ψ̂_t ≡ ln(s^3/27) - ψ_t. Consider any t ≥ t_0 such that a_min,t < 10^-6·s^4. Then, ϕ_t < ln a_min,t < ln(10^-6·s^4) and consequently:

ψ̂_t = ϕ_t + (4p/s^2)κ_t^2 ≤ ln(10^-6·s^4) + 12p/s^2 ≤ ln(2·10^-6·s^4),

where we recall that κ_t^2 ≤ 3 and the last inequality follows for p chosen to be sufficiently small (12p/s^2 < ln 2).

Further, note that if for some time t we have ψ̂_t < ln(8·10^-6·s^4), then:

ϕ_t = ψ̂_t - (4p/s^2)κ_t^2 < ln(8·10^-6·s^4),

thus a_1,t·a_2,t·a_3,t < 8·10^-6·s^4, from which it follows that a_min,t^2 < 16·10^-6·s^4, and so a_min,t < 0.01s^2.

Thus, for ψ̂_t < ln(8·10^-6·s^4), at least one of the following holds:

* Either ψ̂_t < ln(2·10^-6·s^4),
* Or ψ̂_t ≥ ln(2·10^-6·s^4), and thus a_min,t ≥ 10^-6·s^2. Then, taking into account that a_min,t < 0.01s^2, we have by Lemma <ref>: E[Δψ̂_t] < -ps/(7200n).

Taking into account the known properties of the function ψ_t (Lemma <ref>), we have that starting from ψ̂_t_0 < ln(2·10^-6·s^4), it takes time exponential in a polynomial of n (e^{Ω(n^{c_13})}, for some absolute constant c_13 > 0) to break the potential barrier for ψ̂, i.e., to reach the first moment of time t_1 such that ψ̂_t_1 ≥ ln(8·10^-6·s^4), with probability 1-e^{-n^{c_14}}, for some absolute constant c_14 > 0. To complete the proof, recall that for any t < t_1, we have ψ̂_t < ln(8·10^-6·s^4), and so, as previously established, a_min,t < 0.01s^2.

For any execution of the oscillator protocol P_o, we can now divide the axis of time into maximal time intervals of two types, which we call oscillatory and central. A central time interval continues for as long as the condition a_min,t ≥ 10^-6·s^4 is fulfilled, and turns into an oscillatory interval as soon as this condition no longer holds.
An oscillatory time interval continues for as long as the condition a_min,t < 0.01s^2 is fulfilled, and turns into a central interval as soon as this condition no longer holds. Lemma <ref> implies that an oscillatory interval is of exponential length w.v.h.p.

Suppose 0 < x < c_12. Let t_0 be an arbitrary moment of time such that a_min,t_0 > 0. Let T = C·n·ln(1/a_min,t_0), for an arbitrarily fixed constant positive integer C = O(1). With probability 1 - e^{-n^{Ω(1)}}, we have for all subsequent moments of time t ∈ [t_0, t_0 + T]:

ln(1/max{1/n, a_min,t}) ≤ 100C·ln(1/a_min,t_0).

Without loss of generality, assume t_0 = 0. We can assume in the proof that a_min,0 > n^{-0.01/C}, otherwise the claim trivially holds. Thus, initially we have ϕ_0 ≥ 3 ln a_min,0 ≥ -(0.03/C)·ln n ≥ -0.03 ln n.

We proceed to show that the potential ϕ does not decrease much during the considered motion. We have at any time ϕ̇ ≥ -p, which follows directly from (<ref>). Suppose at some time t we have ϕ_t ≥ -0.2 ln n. We note that a_min,t ≥ e^{ϕ_t} ≥ n^{-0.2}. Applying Lemma <ref>, we have under these assumptions:

E[Δϕ_t] ≥ -p/n - O(1/(n^2·a_min,t^2)) ≥ -1/n,

and moreover, by the properties of the natural logarithm (cf. e.g. <cit.>):

|Δϕ_t| < 4/(n·a_min,t) ≤ 4e^{-ϕ_t}/n ≤ 4n^{-0.8}.

As usual, we apply a Doob martingale, with ϕ'_t := ϕ_t until the first moment of time t such that ϕ_t < -0.2 ln n, and subsequently ϕ'_{t+1} := ϕ'_t for larger t. We have E[Δϕ'_t] ≥ -n^{-1} and |Δϕ'_t| < 4n^{-0.8}. Considering T = C·n·ln(1/a_min,0) < C·n ln n steps of the process starting from time 0, by a standard application of Azuma's inequality, we obtain that with probability 1 - e^{-n^{Ω(1)}}, for all t ∈ [0,T] we have:

ϕ'_t - ϕ'_0 ≥ -T/n - n^{-0.2} ≥ -C·ln(1/a_min,0) - 1 ≥ -0.1 ln n.

From the last inequality it follows that ϕ'_t ≥ ϕ'_0 - 0.1 ln n ≥ -0.2 ln n for all t ∈ [0,T], with probability 1 - e^{-n^{Ω(1)}}, and so ϕ'_t = ϕ_t. We now rewrite the same bound for ϕ_t, using the relation ϕ ≥ 3 ln a_min:

ϕ_t ≥ ϕ_0 - C·ln(1/a_min,0) - 1 ≥ 3 ln a_min,0 - C·ln(1/a_min,0) - 1 ≥ -(C+4)·ln(1/a_min,0) ≥ -100C·ln(1/a_min,0).

Taking into account that -ln(1/a_min,t) = ln a_min,t ≥ ϕ_t for a_min,t ≠ 0, we obtain the claim.

Suppose 0 < x < c_12. Fix a type i ∈ {1,2,3}. Let t_0 be any time such that a_min,t_0 < 10^-6·s^4. Let t^* > t_0 be the first moment after t_0 such that i is the most represented type, a_i,t^* = a_max,t^*. Let t^** > t^* be the first moment after t^* such that i is the least represented type, a_i,t^** = a_min,t^**. Then, with probability 1 - O(1/n^3),

t^** ≤ t_0 + c_11·n·ln(1/max{1/n, a_i+1,t_0}),

where c_11 > 0 is a constant depending only on s and p.

From Lemma <ref>, we have that w.v.h.p. the protocol is in an oscillatory interval which will last superpolynomial time, i.e., with probability 1-e^{-n^{Ω(1)}}: for all t ∈ [t_0, t_0 + e^{n^{c_13}}], we have a_min,t < 0.01s^2 < 0.02s. Acting as in the previous subsection, we iteratively apply Lemma <ref> and Lemma <ref>. After at most 6 applications of both Lemmas, the process has performed two complete rotations around the triangle, w.h.p., passing in particular through a time moment t^* where the designated type i was a maximal type and a time moment t^** where type i was a minimal type. It remains to bound the time required to perform these iterations. We consider as an example a single application of Lemma <ref> starting at a time t_b and ending at a time t_a', where t_a' < t_b + c_10·n·ln(1/max{1/n, a_i+1,t_b}) with probability 1 - O(1/n^3). Applying Lemma <ref> at time t_b, we obtain ln(1/a_min,t_a') ≤ 100C·ln(1/a_min,t_b), w.v.h.p. We use this bound for the next application of Lemma <ref>, and so on.
After a total of at most 12 applications, we eventually obtain a bound of the form c_11·n·ln(1/a_min,t_0) on the length of the considered time interval, where the value of c_11 is computed as a function of C.

§.§ Protocol Extension Pm: Majority

The composition of the extension is specified in Fig. <ref>. In what follows, we denote by M^(s)_i := #(A_i^?, M_s) the number of agents whose state has A_i (with an arbitrary activity marker) as its first component and M_s as its second component, and m^(s)_i := M^(s)_i/n, for s ∈ {-1, 0, +1}.

Suppose 0 < x < c_12. Let t_0 be an arbitrarily chosen moment of time and let T > 0. For fixed i ∈ {1,2,3}, we have for all t ∈ [t_0, t_0+T]:

|m^(+1)_i,t - m^(-1)_i,t| ≤ 2e^{4rT/n}·max{n^{-0.2}, a_i,t_0},

with probability 1 - O(e^{-n^{1/6}}).

W.l.o.g. assume t_0 = 0 and i = 1. Denote A_0 := A_1,0 (the number of agents of type A_1 at time 0), D_t := M^(+1)_1,t - M^(-1)_1,t, and G_t := D_t^2. As usual, we denote ΔD_t = D_t+1 - D_t and ΔG_t = G_t+1 - G_t. For the subsequent analysis, we choose to use the “squared potential” G_t to simplify considerations; this would be the usual potential of choice to analyze an unbiased random walk with a fair coin toss.

First we remark that |D_0| ≤ A_0, so G_0 ≤ A_0^2. Next, observe that since at most one agent changes its state in a single time step, we have |ΔD_t| ≤ 2, and so:

|ΔG_t| ≤ (|D_t| + 2)^2 - |D_t|^2 ≤ 4(|D_t| + 1) ≤ 4n + 4.

We now upper-bound the expectation E[ΔG_t]. We condition this expectation on the disjoint set of events R_6, R_7, R_8, R_9, R_10, R_0, where R_j, for 6 ≤ j ≤ 10, corresponds to Rule (j) being executed in the current step, and R_0 is the event that none of these rules is executed. We have the following:

* If event R_0 or R_6 holds, then at least one of the following three situations occurs: (1) the values of M^(+1)_1 and M^(-1)_1 both remain unchanged at time t, (2) an agent changes state from type A_1 to another type, or (3) an agent turns from another type into type A_1. In case (1), we have D_t+1 = D_t. In case (2), the probability that |D_t+1| = |D_t| + 1 is not more than the probability that |D_t+1| = |D_t| - 1, since by the construction of the protocol, the choice of the agent leaving the population is completely independent of its value M_s. In case (3), we have Pr[|D_t+1| = |D_t| - 1] = Pr[|D_t+1| = |D_t| + 1] = 1/2 by construction. In all cases, Pr[|D_t+1| = |D_t| + 1] ≤ Pr[|D_t+1| = |D_t| - 1]. We therefore have:

E[ΔG_t | R_0 ∨ R_6] ≤ ((|D_t| + 1)^2 - |D_t|^2) + ((|D_t| - 1)^2 - |D_t|^2) ≤ 2.

* For events R_7 and R_8, we have D_t+1 | R_7 = D_t - 1 and D_t+1 | R_8 = D_t + 1. Since events R_7 and R_8 hold with equal probability, it follows that:

E[ΔG_t | R_7 ∨ R_8] ≤ ((|D_t| + 1)^2 - |D_t|^2) + ((|D_t| - 1)^2 - |D_t|^2) ≤ 2.

* Finally, for events R_9 and R_10, we have D_t+1 | R_9 = D_t + 1 and D_t+1 | R_10 = D_t - 1. Since Pr[R_9]·M^(-1)_1 = Pr[R_10]·M^(+1)_1, we have:

E[ΔG_t | R_9 ∨ R_10] ≤ (max{M^(+1)_1, M^(-1)_1}/A_1)·((|D_t| + 1)^2 - |D_t|^2) + (min{M^(+1)_1, M^(-1)_1}/A_1)·((|D_t| - 1)^2 - |D_t|^2) ≤ 4|D_t|^2/A_1 + 2 = 4G_t/A_1 + 2,

where we assume in notation that A_1 > 0.

Applying the law of total expectation for ΔG_t over the set of events R_0 ∨ R_6, R_7 ∨ R_8, R_9 ∨ R_10, and noting that Pr[R_9 ∨ R_10] ≤ rA_1/n, we eventually obtain:

E[ΔG_t] ≤ 4rG_t/n + 2.

Inequalities (<ref>) and (<ref>) are sufficient to upper-bound the growth of the random variable G_t, which undergoes multiplicative drift with rate parameter 1 + 4r/n (up to lower-order terms). Since known multiplicative drift bounds (cf. e.g. <cit.>) do not appear to cover this case explicitly, we sketch the corresponding submartingale analysis (with slightly weaker parameters) for the sake of completeness.

Consider any moment of time t such that G_t ≥ n^{1.6}.
Define the target value G_max = (1 + 8r)·G_t ≥ n^{1.6}. Consider the following filter, defining G'_τ, τ ≥ 0, as the process with G'_τ = G_{t+τ} until the first moment of time at which G_{t+τ} ≥ G_max, and with G'_{τ+1} = G'_τ + (5r/n)·G_max for all subsequent moments of time. Note that E[ΔG'_τ] ≤ (5r/n)·G_max by (<ref>) and |ΔG'_τ| ≤ 4n + 4 ≤ 5n by (<ref>) (where we conduct the entire analysis for n sufficiently large with respect to the absolute constants of the algorithm). By Azuma's inequality, we have for any τ > 0 and z > 0:

Pr[G'_τ > G'_0 + (5r/n)·G_max·τ + z] ≤ exp[-z^2/(2τ(5n)^2)].

Next, choosing any τ ≤ n and z = r·G_max ≥ r·n^{1.6}, and noting that:

G'_0 + (5r/n)·G_max·τ + z ≤ G_t + 5r·G_max + r·G_max ≤ G_max,

we rewrite the concentration inequality as:

Pr[G'_τ > G_max] ≤ exp[-(r·n^{1.6})^2/(2τ(5n)^2)] < e^{-n^{0.19}}.

Applying to the above a union bound over all τ ∈ [0,n], we obtain by another crude estimate:

Pr[∀ τ ∈ [0,n] : G'_τ ≤ G_max] ≥ 1 - e^{-n^{0.18}},

from which it follows directly by the definition of G'_τ that:

Pr[∀ τ ∈ [0,n] : G_{t+τ} ≤ G_max] ≥ 1 - e^{-n^{0.18}}.

Thus, given that G_max = (1 + 8r)·G_t, the value of G_t increases by a factor of at most (1+8r) over n steps, with very high probability. Iterating the argument at most a logarithmic number of times and applying a union bound gives, for arbitrary t:

G_t ≤ (1 + 8r)^{t/n + 1}·max{n^{1.6}, A_0^2} ≤ 2e^{8rt/n}·max{n^{1.6}, A_0^2},

with probability at least 1 - e^{-n^{1/6}}, from which the claim of the lemma follows directly after taking the square root and normalizing by a factor of n.

By considering the sizes of the populations m^(-1)_i,t, m^(0)_i,t, and m^(+1)_i,t (whose sum is a_i,t), we obtain the following corollary of the above Lemma, applied for a suitably chosen value T = (0.001n/r)·ln(1/max{1/n, a_i,t_0}).

Suppose 0 < x < c_12. Let t_0 be an arbitrarily chosen moment of time with a_i,t_0 ≤ 0.02s^2. For fixed i ∈ {1,2,3}, we have for all t ∈ [t_0, t_0 + (0.001n/r)·ln(1/max{1/n, a_i,t_0})]:

|m^(+1)_i,t - m^(-1)_i,t| ≤ 0.1a_i,t + 0.05s,

with probability 1 - O(e^{-n^{1/6}}).

The above bound provides a crucial lower bound on the size of the population m^(0)_i.

Suppose 0 < x < c_12. Let t_0 be an arbitrarily chosen moment of time with a_i,t_0 ≤ 0.02s^2. For fixed i ∈ {1,2,3}, we have for all t ∈ [t_0, t_0 + (0.005n/r)·ln(1/max{1/n, a_i,t_0})] such that a_i,t > 0.25s:

min{m^(+1)_i,t, m^(-1)_i,t} ≥ 0.001s^3,

with probability 1 - e^{-n^{Ω(1)}}.

Note first that, by the assumptions, we have a_i,t_0 ≤ 0.02s^2 < 0.1s < 0.25s < a_i,t. Let t_1 > t_0 denote the last moment of time before t_2 such that a_i,t_1-1 < 0.1s, and let t_2 ≥ t_1 + 0.15sn denote the first moment of time after t_1 such that a_i,t_2 > 0.25s. We have t ≥ t_2 + 0.05sn.

We now consider the process m^(+1)_i,τ (exactly the same arguments may be applied to the process m^(-1)_i,τ). We have |Δm^(+1)_i,τ| ≤ 1/n. The analysis is divided into two phases:

* Phase 1: τ ∈ [t_1, t_2) (thus a_i,τ ∈ [0.1s, 0.25s]). Initially, m^(+1)_i,t_1 ≥ 0. Suppose for some step τ we have m^(+1)_i,τ < 0.02s. Rule (6) is executed with probability (a_i,τ)(1 - a_i,τ) ≥ 0.1s·(1 - 0.25s) ≥ 0.075s^2, whereas a rule (8) or (9) which reduces m^(+1)_i,τ is executed with probability at most 2r·m^(+1)_i,τ < 0.04rs < 0.01s^2. A computation of the expected value provides:

E[Δm^(+1)_i,τ | m^(+1)_i,τ < 0.02s] > 0.06s^2/n.

An application of Azuma's inequality yields that m^(+1)_i,t_2 > (1/2)·(0.06s^2/n)·(t_2 - t_1) > 0.004s^3, with probability 1 - e^{-n^{Ω(1)}}. In case of failure, we consider the process no further.

* Phase 2: τ ∈ [t_2, t) (thus a_i,τ ≥ 0.1s). From Phase 1, we have that initially m^(+1)_i,t_2 > 0.004s^3.
Suppose for some step τ we have 0.001s^3 < m^(+1)_i,τ < 0.01s. We consider two cases:

* If a_i,τ ≥ 0.25s, then by Lemma <ref> we have:

|m^(+1)_i,τ - m^(-1)_i,τ| ≤ 0.1a_i,τ + 0.05s,

with probability 1 - e^{-n^{Ω(1)}}. In case of failure we interrupt the analysis (this is an implicit application of union bounds over successive steps τ). Under the assumption m^(+1)_i,τ < 0.01s, we conclude:

m^(-1)_i,τ ≤ 0.1a_i,τ + 0.05s + m^(+1)_i,τ < 0.3a_i,τ,

and hence:

m^(-1)_i,τ ≤ (1/2)m^(0)_i,τ - 0.005s.

Now, to compute the expected value E[Δm^(+1)_i,τ], we remark that rule (6) does not decrease this expected value, since m^(0)_i,τ ≥ 2m^(-1)_i,τ + 0.01s by (<ref>). Moreover, in view of (<ref>), the probability of executing rule (9) (which increases m^(+1)_i,τ by 1/n) exceeds the probability of executing one of the rules (7) or (8) (which decrease m^(+1)_i,τ by 1/n) by 0.005s·m^(+1)_i,τ·r > 5·10^-6·s^4·r, by the assumption 0.001s^3 < m^(+1)_i,τ. We eventually obtain in this case:

E[Δm^(+1)_i,τ | 0.001s^3 < m^(+1)_i,τ < 0.01s] > 5·10^-6·s^4·r/n.

* If a_i,τ < 0.25s, then, assuming m^(+1)_i,τ < 0.01s, we can perform an analysis analogous to that of the first phase to obtain:

E[Δm^(+1)_i,τ | m^(+1)_i,τ < 0.01s] > 0.06s^2/n,

which, in particular, also implies (<ref>).

We have thus shown that the expected change of m^(+1)_i,τ satisfies (<ref>). Noting that initially m^(+1)_i,t_2 > 0.004s^3 = 0.001s^3 + 0.003s^3, an application of Azuma's inequality to an appropriate Doob martingale with (<ref>) shows that the event m^(+1)_i,τ > 0.001s^3 will hold for all remaining steps of the process τ, with probability 1 - e^{-n^{Ω(1)}}.

Let t be any moment of time with a_min,t > 4r^{1/2}. For all i ∈ {1,2,3}, we have:

min{m^(+1)_i,t, m^(-1)_i,t} ≥ r^{3/2}·s,

with probability 1 - O(e^{-n^{Ω(1)}}).

Denote c = 4r^{1/2}. To show the claim, observe that necessarily for all τ ∈ [t - cn/2, t] we have a_min,τ > c/2. We consider the change of the value m^(+1)_i,τ over time (the argument for m^(-1)_i,τ follows symmetrically). Initially, we have m^(+1)_i,t-cn/2 ≥ 0, and at every step |Δm^(+1)_i,τ| ≤ 1/n. At any time τ such that m^(+1)_i,τ < s/4 we have the following cases:

* Rule (6) is executed, which happens with probability at least a_min,τ^2 > c^2/4. Since m^(+1)_i,τ < s/4, conditioned on this event, the expected value of Δm^(+1)_i,τ is at least 1/(4n).
* One of the rules (7)-(10) is executed, which occurs with probability at most r.
* In all other cases, we have Δm^(+1)_i,τ = 0.

Noting that r = c^2/16, the claim follows from a standard application of Azuma's inequality.

Suppose 0 < x < c_12. Let t ≥ 2c_11·n log n be an arbitrary moment of time. Then, min{m_+1,t, m_-1,t} > c_15, with probability 1 - O(1/n), for some constant c_15 > 0 depending only on s, p, and r.

Assume w.l.o.g. t = 2c_11·n ln n. Instead of analyzing the evolution of the real system, we consider an execution of a system which is coupled with it over the first t steps as follows. First, starting from time 0, we perform t steps of protocol P_o (i.e., considering only rules (1)-(5) of its definition, and without setting the values of the second component M_?). Next, we once again activate the pairs of agents which were activated in the first part of the coupling, in the same order, applying rules (6)-(10) of the protocol with the same outcome which they would have received in the original execution. Clearly, at time t the same configuration c_t is reached by both the original and the coupled execution.

Consider first the execution of P_o from time 0.
If a_min,0 < 10^-6·s^4, then the execution is in an oscillatory interval at time 0, and will remain in it (a_min < 0.01s^2) until time t, with probability 1 - e^{-n^{Ω(1)}}. Then, for all τ ∈ [0,t] we assume that the claim of Lemma <ref> holds with t_0 = τ for all i ∈ {1,2,3}. By a crude union bound, this event holds with probability 1 - O(1/n); from now on we assume this is true. (Formally, to allow us to proceed, in the analysis we can implicitly couple the system with a different set of random choices, to which the system switches in the low-probability event that the claim of Lemma <ref> does not hold for some t_0 = τ for the original system.) Given that in the claim of Lemma <ref> for t_0 = 0 we have t^** < t, and for t_0 = t we have t^* ≥ t, by the properties of the time intervals [t^*, t^**] we observe that there must exist a time τ ∈ [0,t] and a type i ∈ {1,2,3} such that A_i is the least represented type at time τ and the most represented type at time t (a_max,t = a_i,t and a_min,τ = a_i,τ), and moreover t ∈ [t^*, t^**] in the claim of Lemma <ref> with the choice of t_0 = τ. Since a_max,t ≥ s/3, we now apply Lemma <ref> with t_0 = τ to obtain the claim, noting that a_min,τ < 0.02s^2, and moreover that t < τ + (0.005n/r)·ln(1/max{1/n, a_i,t_0}), given that we have t < τ + c_11·n·ln(1/max{1/n, a_i+1,t_0}), where c_11 is a constant depending only on s and p, and noting that we can choose r < 0.005/c_11.

It remains to consider the case when the execution starts at time 0 with a_min,0 ≥ 10^-6·s^4. Then, if a_min,t ≥ 10^-6·s^4 holds, the claim follows from Lemma <ref>, given that r is chosen so that 10^-6·s^4 > 4r^{1/2}. Otherwise, there must exist some last time t' ≤ t such that a_min,t' ≥ 10^-6·s^4. We apply Lemma <ref> with t_0 = t'. If the obtained value t^** satisfies t^** < t, then we can apply an analysis analogous to that of the case a_min,0 < 10^-6·s^4 to obtain the claim. Otherwise, we have that t ≤ t^** ≤ t + cn, where the value of the constant c, depending only on s and p, follows from Lemma <ref>. By an iterated application of Lemma <ref>, we obtain that a_min,t ≥ c', where the value of the constant c' > 0, depending only on s and p, follows from the application of Lemma <ref>. Choosing r sufficiently small so that c' > 4r^{1/2}, we complete the proof using Lemma <ref>.

Finally, for the sake of completeness, we state how the majority protocol stops in the case of X = 0.

Suppose x = 0. Then, there exists a moment of time t_s such that either m_+1,t = 0 or m_-1,t = 0 holds for all t > t_s. Moreover, t_s < c_16·n log^2 n with probability 1 - O(1/n), for some constant c_16 > 0 depending only on s, p, and r.

By Theorem <ref>(1), there exists a moment of time t_0 = O(n log^2 n) such that the system reaches a corner configuration (cf. Lemma <ref>). W.l.o.g., assume that a_1 = s and a_2 = a_3 = 0. At this point, in the majority protocol Rule (7) will never again be activated, whereas the execution of rules (8)-(11) follows precisely the classical majority scenario of Angluin et al. <cit.>. By a standard concentration analysis (see also <cit.>), one of the two species M_+1, M_-1 will become extinct in O(n log^2 n) steps with probability 1 - O(1/n).

As a side remark on Lemma <ref>, we note that it is possible to initialize the system entirely with states of the form (A_i, M_0), so that m_+1,t = m_-1,t = 0 holds throughout the process (even if the designed protocol will never enter such a configuration from most initial configurations).
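Before moving on, the following toy simulation (our own abstraction, not the rules of Fig. <ref>) illustrates why the squared difference G_t = D_t^2 grows at most multiplicatively at rate Θ(r/n), as in the Lemma above: unbiased re-randomization steps contribute only an additive O(1) to E[ΔG_t], while copy steps, occurring at rate r, push D_t towards the current majority. The event probabilities (1/2 and r) and the confinement of the walk to a single species are simplifying assumptions.

```python
import math, random

def signed_difference(n, r, T, rng):
    """Abstract walk for D_t = M^(+1) - M^(-1) within one species of n agents:
    w.p. 1/2 a fresh label is drawn uniformly from {-1,+1} (re-randomization),
    w.p. r an existing label is copied onto another agent (amplification)."""
    plus = n // 2                        # start balanced: D_0 = 0
    for _ in range(T):
        u = rng.random()
        if u < 0.5:                      # unbiased +-1 step of D_t
            plus += 1 if rng.random() < 0.5 else -1
        elif u < 0.5 + r:                # biased step: follow the current majority
            plus += 1 if rng.random() < plus / n else -1
        plus = max(0, min(n, plus))
    return 2 * plus - n                  # D_T

n, r, T = 10_000, 0.05, 200_000
rng = random.Random(1)
g = [signed_difference(n, r, T, rng) ** 2 for _ in range(20)]
print(f"mean G_T = {sum(g)/len(g):.3e}   vs   2 e^(8rT/n) n^1.6 = {2*math.exp(8*r*T/n)*n**1.6:.3e}")
```

In runs of this sketch, the empirical mean of G_T stays well below the multiplicative-drift bound 2e^{8rT/n}·max{n^{1.6}, G_0} of the Lemma, as expected under the stated simplifications.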
§.§ Protocol Extension Pl: Detection with Lights

To complete the proof of Theorem <ref>, we design a protocol extension P_l such that Detection is solved by the composition ((P_o ∘ P_m) + P_l). Extension P_l uses three states, {L_-1, L_+1, L_on}. We informally refer to the states L as lights. The composition is given in Fig. <ref>. Informally, state L_-1 means that the agent is “waiting to meet M_-1”; after meeting M_-1 it becomes L_+1, “waiting for M_+1”, and finally it becomes L_on.

To analyze the operation of the protocol, consider first the case of x = 0. By Lemma <ref>, after O(n log^2 n) steps, agents in at least one of the states {M_+1, M_-1} are permanently eliminated from the system. Thus, either rule (11) or rule (12) will never again be executed in the future. An agent which is in state L_on will spontaneously move to another state following rule (13) within O((1/q(ε))·n log n) steps, with probability 1 - O(1/n^2), and will never reenter such a state, since this would require the activation of both rule (11) and rule (12). By applying a union bound over all agents, we obtain that state L_on never again appears in the population after O(n log n) steps from the termination of the majority protocol, with probability 1 - O(1/n). Overall, all nodes reach a state with a light component different from L_on after O(n log^2 n) steps from the start of the process, with probability 1 - O(1/n), and all leave state L_on eventually with certainty.

In the presence of the source X, the analysis of the process can be coupled with a Markov chain on the three states L_-1, L_+1, and L_on. Since, in view of Lemma <ref>, transitions from state L_-1 to L_+1 and from state L_+1 to L_on occur with at least constant probability (except for an O(1/n)-fraction of all time steps), this 3-state chain is readily shown to be rapidly mixing. For a choice of q(ε) > 0, depending only on s, p, r, and ε, sufficiently small, we can lower-bound the number of agents occupying state L_on by (1 - ε)n, with high probability.

Under the natural decoding of states as “informed” (having component L_on) or “uninformed” (having component L_-1 or L_+1), the proof of Theorem <ref> is complete. We remark that it is also possible to design a related protocol in which exactly one state is recognized as “informed” and exactly one state is recognized as “uninformed”; we omit the details of the construction.

§ PROOF OF IMPOSSIBILITY RESULT

This Section is devoted to the proof of Theorem <ref>. First, we restate some notation. We recall that the vector z = (z^(1), …, z^(k)) ∈ {0,1,…,n}^k = Z describes the number of agents having particular states, with ‖z‖ = n. In this section we will identify the set of states with {1,…,k} = [1,k]. It is now also more convenient for us to work with a scheduler which selects unordered (rather than ordered) pairs of interacting agents; we note that both models are completely equivalent in terms of computing power under a fair random scheduler, since selecting an ordered pair of agents can be seen as selecting an unordered pair, and then setting their orientation through a coin toss.
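As a quick, self-contained sanity check of this equivalence (purely illustrative; the agent set and sample counts below are our own), the following sketch confirms that drawing an unordered pair and orienting it by a fair coin toss induces the same distribution as drawing an ordered pair directly:

```python
import random
from collections import Counter

rng = random.Random(2)
agents = list(range(5))

def ordered_direct():
    """Scheduler picking an ordered pair of distinct agents uniformly."""
    u, v = rng.sample(agents, 2)
    return (u, v)

def ordered_via_unordered():
    """Pick an unordered pair, then orient it with a fair coin toss."""
    pair = tuple(sorted(rng.sample(agents, 2)))
    return pair if rng.random() < 0.5 else (pair[1], pair[0])

N = 200_000
for draw in (ordered_direct, ordered_via_unordered):
    counts = Counter(draw() for _ in range(N))
    freqs = [c / N for c in counts.values()]
    print(f"{draw.__name__:22s}: {len(counts)} ordered pairs, "
          f"freq range [{min(freqs):.4f}, {max(freqs):.4f}]  (uniform = {1/20:.4f})")
```

Both samplers produce all 20 ordered pairs with empirical frequency close to 1/20, so the two scheduler models can indeed be used interchangeably in the argument below.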
Indexing with integers {1,2,…,r} the set of all distinct rules of the protocol, where r ≤ k^4, for a rule j ≡ “{i_1(j), i_2(j)} → {o_1(j), o_2(j)}”, 1 ≤ j ≤ r, i_1(j), i_2(j), o_1(j), o_2(j) ∈ {1,…,k}, we will denote by q_j the probability (selected by the protocol designer) that rule j is executed as the next interaction rule once the scheduler has selected (i_1, i_2) as the interacting pair, and by p_j(z) the probability that j is the next rule chosen in configuration z (we have p_j(z) = q_j·z^(i_1(j))·z^(i_2(j))/n^2·(1 - O(1/n)), where the O(1/n) factor compensates the property of a scheduler which always selects a distinct pair of elements).

For any configuration z_0 ∈ Z, we define the d-box B_d(z_0) around z_0 as the set of all states z ∈ Z such that z_0^(i)/d ≤ z^(i) ≤ d·max{1, z_0^(i)}, for all 1 ≤ i ≤ k. We start the proof with the following property of boxes.

Fix k ∈ ℕ^+ and let 0 < ε_1 < 0.001 be arbitrarily fixed. There exists ε_0 = ε_0(k, ε_1), 0 < ε_0 < ε_1, such that, for any interaction protocol P with k states and any configuration z_0 ∈ Z, there exists a value ε = ε(P, z_0) ∈ [ε_0, ε_1] such that, for any rule j of the protocol, 1 ≤ j ≤ r, exactly one of the following bounds holds:

* (i) for all z ∈ B_{n^{ε_0}}(z_0), p_j(z) ≤ n^{ε-1},
* (ii) for all z ∈ B_{n^{ε_0}}(z_0), p_j(z) ≥ n^{24ε-1},

and for any state i, 1 ≤ i ≤ k, exactly one of the following bounds holds:

* (iii) for all z ∈ B_{n^{ε_0}}(z_0), z^(i) ≤ n^{ε},
* (iv) for all z ∈ B_{n^{ε_0}}(z_0), z^(i) ≥ n^{24ε}.

Let k be fixed and let ε_0 = 96^{-(k + k^4 + f + 1)} ≤ 96^{-(k + r + f + 1)}, where f = log_2(1/ε_1). Consider the (multi)set M of real values M := {log_n max{n^{ε_0}, z_0^(i)} : i ∈ {1,…,k}} ∪ {log_n max{n^{ε_0}, n·p_j(z_0)} : j ∈ {1,…,r}} ⊆ [0,1]. Since |M| = k + r, by the pigeonhole principle, there must exist an interval I_l = [96^{-l}, 96^{-l+1}), for some l ∈ {f, …, k+r+f}, such that I_l ∩ M = ∅. Now, we set ε = 2·96^{-l} > 96ε_0; we also have ε < 2·96^{-f} < ε_1. We immediately obtain that for any state i, 1 ≤ i ≤ k, we either have z_0^(i) ≤ n^{ε/2} or z_0^(i) ≥ n^{48ε}. Recalling that for any z ∈ B_{n^{ε_0}}(z_0), z_0^(i)/n^{ε_0} ≤ z^(i) ≤ n^{ε_0}·max{1, z_0^(i)}, claims (iii) and (iv) follow.

To show claims (i) and (ii), notice that if rule j, j ∈ {1,…,r}, is such that min{z_0^(i_1(j)), z_0^(i_2(j))} ≤ n^{ε/2}, then for all z ∈ B_{n^{ε_0}}(z_0) we have min{z^(i_1(j)), z^(i_2(j))} ≤ n^{ε} (by (iii) and (iv)), and so p_j(z) ≤ n^{ε-1} by the properties of the random scheduler. Otherwise, we have min{z_0^(i_1(j)), z_0^(i_2(j))} ≥ n^{24ε}, and so n^{-2ε_0}/2 ≤ p_j(z_0)/p_j(z) ≤ 2n^{2ε_0}, where we recall that ε > 96ε_0. Since we have n·p_j(z_0) ≤ n^{ε/2} or n·p_j(z_0) ≥ n^{48ε}, claims (i) and (ii) follow.

Given any k-state protocol P, we will arbitrarily choose a value of ε for which the claim of the above Lemma holds (e.g., the smallest possible such value of ε). Note that a similar analysis is also possible for protocols using a super-constant number of states in n; however, then the value of ε_0 is dependent on n; retracing the arguments in the proof, we can choose appropriately ε_0 ≥ exp[-O(k^4)]. (We make no effort to optimize the polynomial in k in the exponent.)

In what follows, let z_0 be a fixed configuration of the protocol (admitting a certain property which we will define later). We will then consider a rule j to be a low probability (LP) rule (writing j ∈ LP) in the box B_{n^{ε_0}}(z_0) if it satisfies condition (i) of the Lemma, and a high probability (HP) rule in this box (writing j ∈ HP) if it satisfies condition (ii).
Note that LP ∪ HP = {1,…,r}. Likewise, for 1 ≤ i ≤ k, we will classify i as a low-representation (LR) state (writing i ∈ LR) in the box B_{n^{ε_0}}(z_0) if i satisfies condition (iii) of the Lemma, and a high-representation (HR) state (writing i ∈ HR) in this box if it satisfies condition (iv). Note that LR ∪ HR = {1,…,k}. Moreover, we define a set of very high representation (VHR) states, VHR ⊆ HR, as the set of all i such that for all z' ∈ B_{n^{ε_0}}(z_0), z'^(i) ≥ n^{1-8ε}. Denoting HR' = HR ∖ VHR, we have by the definition of a box that for all i ∈ HR', for all z' ∈ B_{n^{ε_0}}(z_0): z'^(i) ≤ n^{1-8ε}·O(n^{2ε_0}) < n^{1-6ε}.

From now on, we assume that configuration z_0 admits the following property: for T = n^{1+2ε}, an execution of the protocol starting from configuration z_0 passes through a sequence of configurations z_t, t = 1,2,…,T, such that the configuration does not leave the box around z_0 in any step with sufficiently large probability, lower-bounded by some absolute constant Π ∈ (0,1]:

Pr[∀ t < T : z_t ∈ B ⊆ B_{n^{ε_0}}(z_0)] ≥ Π,

where B is an arbitrarily fixed subset of B_{n^{ε_0}}(z_0).

We now show that the above property has the following crucial implication: for an interacting pair involving selected high and very high representation states, a rule creating a low representation state can only be triggered with sufficiently small probability. Informally, it seldom happens that in the protocol a low representation state is created out of any high representation state.

For a protocol having the property given by Eq. (<ref>), for i_1 ∈ HR and i_2 ∈ VHR, let R_{i_1,i_2} be the set of rules of the form {i_1, i_2} → {o_1, o_2}, taken over all o_1 ∈ LR, o_2 ∈ [1,k]. Then, ∑_{j ∈ R_{i_1,i_2}} q_j = O(n^{-14ε}).

Suppose, by contradiction, that ∑_{j ∈ R_{i_1,i_2}} q_j > 3n^{-14ε}. Associate with the process z_t a random variable J_t ∈ {0,1}, defined as follows. For all t < t_e, where t_e is the first moment of time such that z_{t_e} ∉ B_{n^{ε_0}}(z_0), we put J_t = 1 if a rule from R_{i_1,i_2} is used for the interaction made by the protocol in process z_t at time t, and set J_t = 0 otherwise. For all t ≥ t_e, we set J_t to 1. We have E[J_t | z_1,…,z_t] ≥ 2n^{2ε-1}; indeed, for t < t_e, it holds that:

E[J_t | z_1,…,z_t] = Pr[J_t = 1 | z_t] = ∑_{j ∈ R_{i_1,i_2}} p_j(z_t) = ∑_{j ∈ R_{i_1,i_2}} q_j·z^(i_1(j))·z^(i_2(j))/n^2·(1 - O(1/n)) ≥ 3n^{-14ε}·n^{24ε}·n^{1-8ε}/n^2·(1 - O(1/n)) > 2n^{2ε-1}.

By a simple stochastic domination argument, (J_t) can be lower-bounded by a sequence of independent binomial trials with success probability 2n^{2ε-1}, hence by an application of a multiplicative Chernoff bound for T = n^{1+2ε}:

Pr[∑_{t=1}^T J_t > n^{4ε}] = Pr[∑_{t=1}^T J_t > (1/2)·2n^{2ε-1}·T] = 1 - o(1),

where the o(1) factor is exponentially small in n.

We now show the following claim.

Claim. With probability Π - o(1), the following event holds: z_t ∈ B for all t ∈ [0,T), and the total number of rule activations in the time interval [0,T) during which an agent changes state from a state in LR to a different state is at most O(k·n^{3ε}).

Proof (of claim). Acting similarly as before, we associate with the process z_t a random variable L_t ∈ {0,1}, defined as follows. For all t < t_e, we put L_t = 1 if a rule acting on at least one agent in a state from LR is applied by the protocol in process z_t at time t, and set L_t = 0 otherwise. For all t ≥ t_e, we set L_t to a dummy value, always equal to 0. We observe that:

E[L_t | z_1,…,z_t] ≤ 2k·n^{ε}/n,

since |LR| ≤ k and, for t < t_e, z^(i) < n^{ε} for any i ∈ LR; hence the scheduler selects an agent from a LR state into an interacting pair with probability at most 2k·n^{ε}/n.
Applying an analogous argument as in the case of the random variable J_t, this time for the upper tail, we obtain:

Pr[∑_{t=1}^T L_t < 4k·n^{3ε}] = Pr[∑_{t=1}^T L_t < 2·(2k·n^{ε}/n)·T] = 1 - o(1).

The claim follows directly.

Now, by a union bound we obtain:

Pr[∑_{t=1}^T J_t > n^{4ε} ∧ ∑_{t=1}^T L_t < 4k·n^{3ε}] = 1 - o(1).

Taking into account that t_e > T holds with probability Π = Ω(1) by (<ref>), we have by a union bound that, with probability at least Π - o(1) = Ω(1), the following event holds: z_t ∈ B for all t ∈ [0,T], ∑_{t=1}^T J_t > n^{4ε}, and ∑_{t=1}^T L_t < 4k·n^{3ε}. However, then ∑_{t=1}^T J_t - ∑_{t=1}^T L_t > n^{4ε} - 4k·n^{3ε} > k·n^{ε}, so there must exist at time T a state i ∈ LR such that z^(i)_T > n^{ε}. This is a contradiction with z_T ∈ B ⊆ B_{n^{ε_0}}(z_0) by Lemma <ref>(iii).

In the rest of the proof, we consider the evolution of a protocol starting from configuration z_0 and having property (<ref>). We compare this evolution to the evolution of the same protocol, starting from a perturbed configuration z^*_0, such that:

(C1) ‖z_0 - z^*_0‖ ≤ n^{ε}.
(C2) for all low representation states i ∈ LR, we have z^{*(i)}_0 ≤ z^{(i)}_0.

Intuitively, the perturbed state z^*_0 may correspond to removing a small number of agents from z_0 (and replacing them by high representation states for the sake of normalization), e.g., as in the case of the disappearance of a rumor source from a system which has already performed a rumor-spreading process.

Our objective will be to show that, with probability at least Π - o(1), after T = n^{1+2ε} steps the process z^*_T is still not far from z_0, being constrained to a box in a similar way as the process z_t. To achieve this, we define a coupling between the processes z_t and z^*_t (knowing that process z_t is constrained to a box around z_0 with probability Π). Informally, the analysis proceeds as follows. We run the processes together for T = n^{1+2ε} steps. In most steps, the 1-norm distance ‖z_t - z^*_t‖ between the two processes remains unchanged, without exceeding O(n^{3ε}). Otherwise, exactly one of the two processes executes a rule (and the other pauses). With a frequency of roughly n^{ε}/n steps (i.e., roughly n^{3ε} times in total during the process), an LP rule is executed which increases the distance between these two states. We think of this type of “error” as unfixable, contributing to the O(n^{3ε}) distance of the processes; however, such errors are relatively uncommon. With a higher frequency of roughly n^{3ε}/n steps (i.e., roughly once every n^{1-3ε} steps), a less serious “error” occurs, when some HP rule ι increases the distance between the two states. The rate of such errors is too high to leave them unfixed, and we have a time window of about n^{1-3ε} steps to fix such an error (before the next such error occurs). We observe that since ι is an HP rule, which is activated with probability at least n^{24ε-1}, rule ι will still be activated frequently during this time window. The coupling of the transitions of states z_t and z^*_t is in this case performed so as to force the two processes to execute rule ι lazily, never at the same time. The number of executions of rule ι in the ensuing time window by each of the two processes follows the standard coupling pattern of a pair of lazy random walks on a line, initially located at distance 1, until their next meeting (cf. e.g. <cit.>).
During this part of the coupling, we allow the distance ‖z_t - z^*_t‖ to increase even up to n^{6ε} (as a result of executions of rule ι), but the entire contribution to the distance related to rule ι is reduced to 0 before the next HP rule “error” occurs, with sufficiently high probability (in this case, with probability 1 - O(n^{-6ε})). Overall, the coupling is successful with probability Π - O(n^{-ε}).

We remark that we use the bound on the number of states k to enforce a sufficiently large polynomial separation between the frequencies of LR states and HR states, and likewise for LP rules and HP rules. We also implicitly assume that k = n^{o(1)} throughout the process. The analysis also works for a choice of k = O(log log n), with a sufficiently small hidden constant. The separation between LR/HR states and LP/HP rules is used in at least two places in the proof. First, it enforces that rules creating LR states from VHR states may appear in the definition of the protocol only with polynomially small probability (Lemma <ref>), which helps to maintain over time the invariant z^{*(i)}_t ≤ z^{(i)}_t for all LR states. Secondly, we use the separation of LP/HP rules in the analysis of the coupling to show that a fixable “error” caused by a HP rule can be repaired sufficiently quickly, before new errors occur.

In the formalization of the coupling, we make both processes z_t and z^*_t lazy, i.e., we add to each process an additional independent coin toss at each step, and enforce that with probability 1/2 no rule is executed in a given step (i.e., the step is skipped by the protocol). We assume a random scheduler which picks uniformly a random pair of nodes at each step. Thus, if the scheduler picks a pair of agents in states {i_1, i_2}, and j is a rule acting on this pair of states, the probability that the interaction corresponding to rule j will be executed is q_j/2. (The laziness of the process here is a purely technical assumption for the analysis, and corresponds to using a measure of time which is scaled by a factor of 2 ± o(1) w.h.p.; this does not affect the asymptotic statement of the theorem.)

We will also find it convenient to apply an auxiliary notation for representing the evolution of a state. For the process z_t (resp., z^*_t), we define ρ_t(j) (resp., ρ^*_t(j)), for all j ∈ [1,r], as the number of times rule j has been executed since time 0. Observe that the pair (z_0, (ρ_t(j) : j ∈ [1,r])) completely describes the evolution of a state (i.e., the order in which the rules were executed is irrelevant). Moreover, since each execution of a rule changes the states of at most 4 agents, we have:

‖z_t - z^*_t‖ ≤ 4∑_{j=1}^r |ρ_t(j) - ρ^*_t(j)| + ‖z_0 - z^*_0‖ ≤ 4∑_{j=1}^r |ρ_t(j) - ρ^*_t(j)| + n^{ε}.

Definition of the coupling.

* At each step t, we order the agents of configurations z_t and z^*_t, so that a_l(t) denotes the type of the l-th agent in z_t and a^*_l(t) is the type of the l-th agent in z^*_t. The orderings are such that |{l : a_l(t) = a^*_l(t)}| is maximized; in particular, for any state i such that z^(i)(t) ≤ z^{*(i)}(t) (respectively, z^{*(i)}(t) ≤ z^(i)(t)) we have that if for some l, a_l(t) = i (resp., a^*_l(t) = i), then a^*_l(t) = i (resp., a_l(t) = i).

* The scheduler then picks a pair of distinct indices l_1, l_2 ∈ {1,…,n} as the pair of interacting agents.

2.1. If a_l_1(t) = a^*_l_1(t) and a_l_2(t) = a^*_l_2(t), then the same rule j = j^* acting on the pair of states (a_l_1(t), a_l_2(t)) is chosen as the current interaction rule, with probability q_j.

2.2.
Otherwise, a pair of (clearly distinct) rules j and j^* are picked independently at random for z_t and z^*_t from among the rules available for the state pairs (a_{l_1}(t), a_{l_2}(t)) and (a^*_{l_1}(t), a^*_{l_2}(t)), with probabilities q_j and q_{j^*}, respectively.

* The processes finally perform their coin tosses to decide which of the selected rules (j for z_t and j^* for z^*_t) will be applied in the current step.

3.1. If j = j^* and rule j has been executed exactly the same number of times in the history of the two processes (ρ_t(j) = ρ^*_t(j)), then with probability 1/2 both of the processes execute rule j, and with probability 1/2 neither executes its rule.

3.2. If j ≠ j^*, or if j = j^* and rule j has been executed a different number of times in the history of the two processes (ρ_t(j) ≠ ρ^*_t(j)), then exactly one of the two processes performs its chosen rule and the other process waits, with the process performing the rule being chosen as z_t or z^*_t, with probability 1/2 each.

The correctness of the coupling (i.e., that the marginals z_t and z^*_t each correspond to a valid execution of the given protocol under a random scheduler) is immediate to verify.

Let z_t be a process satisfying property (<ref>), and let z^*_0 satisfy conditions (C1) and (C2). Then, for T = n^{1+2ε}, with probability Π - O(n^{-ε}) we have ‖z^*_T - z‖ = O(n^{6ε}) for some z ∈ B.

To prove the claim, it suffices to show that with probability Π - O(n^{-ε}) the provided coupling succeeds, i.e., it maintains a sufficiently small difference z^{(i)}_T - z^{*(i)}_T for all states i, with z_T ∈ B. In the analysis of the provided coupling, we will assume that the box condition z_t ∈ B holds always throughout the process (otherwise, we assume the coupling does not succeed). To state this formally, we work with auxiliary processes ẑ_t and ẑ^*_t, given as ẑ_t = z_t and ẑ^*_t = z^*_t for all t < t_e, where t_e is the first moment of time such that z_t ∉ B_{n^{ε_0}}(z_0), and set to the dummy value ẑ_t = ẑ^*_t = z_0 for all t ≥ t_e. At the end of the process, we will thus have ẑ_T = z_T and ẑ^*_T = z^*_T with probability at least Π. In the following, we silently assume that t < t_e - 1 (in particular, that z_t ∈ B and z_{t+1} ∈ B), and we will simply show that the coupling of ẑ_t and ẑ^*_t is successful with probability 1 - n^{-ε}. The case t ≥ t_e - 1 is trivially handled.

In addition to the box condition (which is now enforced), we try to maintain, with sufficiently high probability, throughout the first T steps of the process several invariants (all at a time), corresponding to the following events holding:

* F_D(t): for all states i ∈ LR, ẑ^{*(i)}_t ≤ ẑ^{(i)}_t. (LR domination condition)
* F_LR(t): for all states i ∈ LR, ẑ^{*(i)}_t ≤ ẑ^{(i)}_t ≤ n^ε. (LR state condition)
* F_LP(t): for all rules j ∈ LP, max{p_j(ẑ_t), p_j(ẑ^*_t)} ≤ 2n^{ε-1}. (LP rule condition)
* F_HR(t): for all states i ∈ HR, min{ẑ^{(i)}_t, ẑ^{*(i)}_t} ≥ n^{24ε}/2. (HR state condition)
* F_HP(t): for all rules j ∈ HP, min{p_j(ẑ_t), p_j(ẑ^*_t)} ≥ n^{24ε-1}/2. (HP rule condition)
* F_HR'(t): for all states i ∈ HR', max{ẑ^{(i)}_t, ẑ^{*(i)}_t} ≤ 2n^{1-6ε}. (HR' state condition)
* a family of possible events S_{w,d}(t), for some d ∈ {0,…,4n^{3ε}} and w ∈ {0,…,n^{6ε}}, with the specific events defined as follows:
  * S_{0,d}(t) holds if for all rules j ∈ HP we have ρ_t(j) = ρ^*_t(j), and ∑_{j∈LP} |ρ_t(j) - ρ^*_t(j)| = d. This implies, in particular, ‖ẑ^*_t - ẑ_t‖ ≤ 4d + n^ε ≤ 5n^{3ε}.
(identical rate of HP execution)
  * S_{w,d}(t) for w > 0 holds if there exists a rule ι ∈ HP such that for all rules j ∈ HP∖{ι} we have ρ_t(j) = ρ^*_t(j), |ρ_t(ι) - ρ^*_t(ι)| = w, and moreover ∑_{j∈LP} |ρ_t(j) - ρ^*_t(j)| = d. This implies, in particular, ‖ẑ^*_t - ẑ_t‖ ≤ 4d + 4w + n^ε ≤ 5n^{6ε}. (single HP execution difference)

We will call the coupling successful if for all t ≤ T, all events F_·(t) and some event S_{w,d}(t) hold, and we will say it is a failure otherwise. (We remark that condition F_D(t) is implied by condition F_LR(t), but we retain both for convenience in discussion.)

The analysis of the coupled process is now the following. First, we remark that all of the given events F_·(t) and event S_{0,0}(t) hold for t = 0. If the process meets condition S_{0,d} at time t and all conditions F_·(t), then we have the following:

* With probability at least 1 - O(n^{3ε-1}), the coupling will follow clauses 2.1 and 3.1 of its definition, and the two processes ẑ and ẑ^* will execute the same rule j (or both pause). Hence, we continue to step t+1 satisfying condition S_{0,d} and all of the conditions F_·(t+1), making use of the box condition for process ẑ_t. (We note that, to show F_LP(t), when considering the special case of a rule involving a state from LR, we can make use of F_LR(t) and note that the activation probability of such a rule is bounded by 2n^{ε-1} due to the n^ε bound on the population of an LR state.)

* With probability at most O(n^{3ε-1}), the coupling will, however, select distinct rules, j for ẑ_t and j^* for ẑ^*_t, and will select exactly one of them to execute, say j' ∈ {j, j^*}.
  * If j' ∈ LP, which happens in the current step of the process with probability at most 2n^{ε-1} by F_LP, then the event S_{0,d+1}(t+1) will hold in the next step (provided d+1 ≤ 4n^{3ε}; otherwise, if d+1 > 4n^{3ε}, we will say that the coupling has failed).
  * If j' ∈ HP, which happens in the coupling with probability O(n^{3ε-1}) (as bounded due to clause 2.2), then the event S_{1,d}(t+1) will hold in the next step.

The condition F_D(t+1) requires more careful consideration. Taking into account that F_D(t) holds, we need to consider two cases: either j' = j and the rule applied to ẑ_t changed at least one of the two interacting states {i_1(j), i_2(j)}, say i_1(j) ∈ LR, so that ẑ^{i_1(j)}(t) = ẑ^{*i_1(j)}(t) and ẑ^{i_1(j)}(t+1) ≤ ẑ^{*i_1(j)}(t+1) - 1, or j' = j^* and the rule applied to ẑ^*_t created a pair of states {o_1(j^*), o_2(j^*)}, say o_1(j^*) ∈ LR. In the first case, by the description of the ordering given in clause 1 of the definition of the coupling, the problem occurs only if one of the agents picked by the scheduler belongs to an LR state, and the other agent is at a position in which the states of ẑ and ẑ^* differ in the ordering of the agents; hence, the probability that the coupling fails at this step is at most O(kn^ε · n^{3ε}/n^2) ≤ O(n^{5ε-2}). In the second case, we likewise analyze the ordering of the agents considered by the scheduler, and note that the interacting agent, which belongs to the part of the ordering in which ẑ_t and ẑ^*_t differ, must be in an HR state, since the agents in an LR state in ẑ^* are matched by their counterparts in ẑ (as noted in clause 1 of the discussion of the coupling). If the other interacting agent is in a state from LR ∪ HR', then such an event occurs with probability O(n^{3ε} · kn^{1-6ε}/n^2) ≤ O(n^{-2.9ε-1}), and we say that with this probability the coupling has failed.
Finally, if the other interacting agent is in a state from VHR, then by Lemma <ref>, we have that the probability of picking a rule under which the coupling fails is at most O(n^{-14ε}), conditioned on the event j ≠ j^* holding; hence overall the probability of failure is O(n^{-14ε} n^{3ε-1}) = O(n^{-11ε-1}). Overall, we obtain that F_D(t+1) fails to hold with probability O(n^{-2.9ε-1}). Given F_D(t+1), S_{1,d}(t+1), and the box condition, the remaining conditions F_·(t+1) follow directly.

Overall, we obtain that following a time t satisfying S_{0,d}(t) and all conditions F_·(t), we reach the following successor state (see Fig. <ref>):

* S_{0,d}(t+1) ∧ F_·(t+1), with probability 1 - O(n^{3ε-1}),
* S_{0,d+1}(t+1) ∧ F_·(t+1), with probability ≤ 2n^{ε-1}, if d+1 ≤ 4n^{3ε},
* S_{1,d}(t+1) ∧ F_·(t+1), with probability O(n^{3ε-1}),
* failure: with probability O(n^{-2.9ε-1}) if d+1 ≤ 4n^{3ε}, and with some probability ≤ 1 otherwise.

At this point, before proceeding further, we can provide some intuition on the meaning of the respective events S. The coupling process can be seen as a walk along the path (S_{0,d} : d ≤ 4n^{3ε}), starting from state S_{0,0}, and at each step either staying in the current state S_{0,d}, moving on to the next state S_{0,d+1}, branching to a side branch S_{1,d} (which we will analyze later), or failing. The process also fails if it reaches the endpoint of its path (d = 4n^{3ε}). Since the process is run for T = n^{1+2ε} steps, the probability that failure will occur before the end of the path is reached is O(n^{-0.9ε}), and the probability of reaching the end of the path and failing is exponentially small in n^{3ε} by a Chernoff bound (in expectation, the process will progress halfway along the path). Hence, we have that the process succeeds with probability 1 - O(n^{-0.9ε}), or otherwise may fail in a side branch S_{·,d}.

A side branch is entered with probability O(n^{3ε-1}). To show that the coupling succeeds with the required probability, it suffices to show that we return from any state S_{1,d} to state S_{0,d} with probability at least 1 - O(n^{-6ε}); then, all (i.e., w.h.p. at most O(n^{1+2ε} n^{3ε-1}) = O(n^{5ε})) excursions into side branches during the process will succeed with probability 1 - O(n^{-ε}).

Consider now an excursion into a side branch S_{w,d} (w ≥ 1) associated with a rule ι ∈ HP which has been executed a different number of times in ẑ_t and ẑ^*_t. Now, if the process meets condition S_{w,d} at time t and all conditions F_·(t), then we have the following:

* With probability at least 1 - O(n^{6ε-1}), the coupling will follow clause 2.1 of its definition, selecting a single rule j.
  * If j ≠ ι, then clause 3.1 will follow, and the two processes ẑ and ẑ^* will execute the same rule j (or both pause). Hence, at time t+1, all of the conditions F_·(t+1) and condition S_{w,d}(t+1) are satisfied.
  * Else, the event j = ι occurs. The probability of such an event is denoted π_t ∈ [p_ι(ẑ_t) - O(n^{6ε-1}), p_ι(ẑ_t)] (due to the conditioning performed in the first clause of the coupling); since p_ι(ẑ_t) ≥ n^{24ε-1} by the box condition for HP rules, it follows that 2π_t ≥ n^{24ε-1} - O(n^{6ε-1}) ≥ n^{24ε-1}/2. Now, following clause 3.2 of the coupling, depending on which of the two processes ẑ_t, ẑ^*_t is chosen to execute the rule, with probability π_t/2 =: π'_t the system moves to S_{w-1,d}(t+1), and with probability π'_t the system moves to S_{w+1,d}(t+1) (unless w+1 > n^{6ε}, in which case the coupling has failed). As before, given there was no failure, all conditions F_·(t+1) are readily verified to be satisfied in the new time step.
* With probability at most O(n^{6ε-1}), for simplicity of analysis we assume the coupling has failed.

This time, for a time t satisfying S_{w,d}(t) for w ≥ 1 and all conditions F_·(t), we obtain the following distribution of successor states:

* S_{w-1,d}(t+1) ∧ F_·(t+1), with probability exactly π'_t ≥ n^{24ε-1}/4,
* S_{w+1,d}(t+1) ∧ F_·(t+1), with probability exactly π'_t, if w+1 ≤ n^{6ε},
* failure: with probability O(n^{6ε-1}) if w+1 ≤ n^{6ε}, and with some probability ≤ 1 otherwise,
* S_{w,d}(t+1) ∧ F_·(t+1), otherwise.

The picture here corresponds to a lazy random walk along the side line S_{w,d} for w ∈ [0, n^{6ε}], with an additional failure probability at each step. The walk starts at w = 1 and ends with a return to the primary line S_{0,d} if the endpoint w = 0 is reached, or ends with failure if the other endpoint w ≥ n^{6ε} =: w_max is reached. At each step, the walk is lazy (with the probability of a transition depending on the current step), but unbiased with respect to transitions to the left or to the right. Assuming that failure does not occur sooner, with probability 1 - O(1/w_max) = 1 - O(n^{-6ε}) the walk will reach the point w = 0 in O(w_max^2) = O(n^{12ε}) moves (transitions along the line), without reaching the other endpoint of the line sooner. Since a move is made in each step t with probability π'_t ≥ n^{24ε-1}/4, by a straightforward Chernoff bound, the number of steps spent on this line is w.h.p. at most O(n^{12ε}/n^{24ε-1}) = O(n^{1-12ε}). As the probability of failure in each of these steps is O(n^{6ε-1}), the probability that the process fails during these steps is O(n^{-6ε}). Overall, by a union bound, we obtain that the process successfully returns to S_{0,d} with probability 1 - O(n^{-6ε}) (and within O(n^{1-12ε}) steps).

In view of the previous observations, we have that with probability 1 - O(n^{-ε}), all conditions F_· and some condition S_{w,d} hold at time T. Thus, with probability Π - O(n^{-ε}), the process z^*_T is sufficiently close to B, i.e., there exists a point z ∈ B such that ‖z^*_T - z‖ = O(n^{6ε}).

§ PROOF OF PROPOSITION <REF>

Fix a protocol P with set of states K, in which the minimum positive probability of executing some rule is p. Let K' ⊆ K, K' ∋ X, be any minimal subset of the set of states such that no evolution of protocol P starting in a configuration containing only states from set K' will ever contain an agent in a state outside K'. Denote κ = |K'| - 1. Consider an initialization of protocol P at time t_0 = 0, at a configuration z(0) with x ∈ [c, 1/2] and with all other states from K' represented by the same number of agents, i.e., for each Q ∈ K'∖{X}, we have q(0) = (1-x)/κ.

Let t ≥ κn be an arbitrarily chosen time step. Let t_1 = t - (κ-1)n. Fix Q_1 ∈ K'∖{X} as any state such that q_1(t_1) ≥ 1/(2κ) (we can, e.g., fix Q_1 as the state from K'∖{X} having the most agents at time t_1). Observe that from the minimality of K' it follows that there must exist a sequence of states (Q_1,…,Q_κ), with {Q_1,…,Q_κ} = K'∖{X}, such that in the definition of protocol P, for all i ∈ {1,…,κ-1}, some rule of protocol P creates at least one agent (i.e., either 1 or 2 agents) in state Q_{i+1} from an interaction of either the pair of agents in states (Q_j, Q_i) or the pair of agents in states (Q_i, Q_j), for some j ∈ {1,…,i}. (Indeed, if for some i there was no possibility of choosing Q_{i+1} in any way, then K'' = {X, Q_1,…,Q_i} ⊆ K' would be closed under agent creation, contradicting the minimality of the choice of K'.) Now, we consider intervals of time steps [t_s, t_{s+1}], with t_s = t_1 + (s-1)n for s > 1. We make the following claims:

(1) Fix i ∈ {1, 2, …, κ}.
If q_i(t_s) = Ω(1), then q_i(t) ≥ 0.11 q_i(t_s) for all t ∈ [t_s, t_{s+1}], with probability 1 - e^{-n^{Ω(1)}}. Indeed, in a sequence of n steps, the expected number of agents which do not participate in any interaction in the time interval [t_s, t_{s+1}] under the asynchronous scheduler is ((n-2)/n)^n n > 0.13n, and thus the number of non-interacting agents is at least 0.12n, with probability 1 - e^{-n^{Ω(1)}}, following standard concentration bounds for the number of isolated vertices in a random graph on n nodes with n edges. Since the choice of agents by the scheduler is independent of their state, and the probability for a uniformly random agent to be in state Q_i at time t_s is q_i(t_s), a simple concentration bound shows that at least 0.11 q_i(t_s) n agents having state Q_i at time t_s do not participate in any interaction in the interval [t_s, t_{s+1}].

(2) Fix i ∈ {1, 2, …, κ-1}. Denote m_i = min_{j≤i} q_j(t_i). If m_i = Ω(1), then q_{i+1}(t_{i+1}) ≥ 0.01 p m_i^2, with probability 1 - e^{-n^{Ω(1)}}. Indeed, consider the value j ≤ i such that the interaction (Q_j, Q_i) or (Q_i, Q_j) creates an agent in state Q_{i+1}. At any time t within the interval [t_i, t_{i+1}], we have by Claim (1) that q_j(t) ≥ 0.11 m_i and q_i(t) ≥ 0.11 m_i, with probability 1 - e^{-n^{Ω(1)}}. It follows that an interaction creating a new agent in state Q_{i+1} is triggered with probability at least p(0.11 m_i)^2 at each step. The number of agents in state Q_{i+1} at time step t_{i+1} may thus be dominated from below by the number of successes in a sequence of n Bernoulli trials with success probability p(0.11 m_i)^2, and the claim follows.

By applying Claim (2) iteratively for i ∈ {1, 2, …, κ-1}, where we note that m_1 ≥ 1/(2κ), we have q_{i+1}(t_{i+1}) ≥ (0.01p/(2κ))^{2^i}, with probability 1 - e^{-n^{Ω(1)}} (through successive union bounds). Then, applying Claim (1) up to time t = t_κ, we have q_{i+1}(t) ≥ 0.11^{κ-i-1} (0.01p/(2κ))^{2^i} ≥ (0.01p/(2κ))^{2^κ}, with probability 1 - e^{-n^{Ω(1)}}. Applying once again a union bound, we have shown that for all Q ∈ K', we have q(t) ≥ (0.01p/(2κ))^{2^κ} ≡ C_0, with probability 1 - e^{-n^{Ω(1)}}. The claim of the lemma follows for a suitable choice of δ_0 > 0.

Acknowledgment. We sincerely thank Dan Alistarh and Przemek Uznański for inspiring discussions, and Lucas Boczkowski for many detailed comments which helped to improve this manuscript.

APPENDIX: Simulation Examples for Oscillator Dynamics

[Figure: a 3×3 grid of simulation plots.] Illustration of the concentration of various features as a function of time steps for a simulation of the protocol P_o for n = 10^6, p = 6·10^{-2}, q = 10^{-2}, r = 10^{-1}, s = 1 in various scenarios. The left column shows the concentration of species A_1, A_2, A_3, the middle one the majority layer, and the right one the light. First row: initialization from a corner configuration (X = 1, left panel), further dynamics of the protocol after the rumor source is removed (X = 0, middle panel), further dynamics of the protocol after the rumor source is reinserted (X = 1, right panel). Middle row: initialization from a configuration with A_1 = A_2 = A_3 = 1/3 and X = 0. Bottom row: as above, with X = 1. The range of the horizontal time scale corresponds to 2500 parallel rounds of the protocol.
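As a companion to the simulations reported above, the following minimal population-protocol simulator (ours, not from the paper) shows how such concentration traces can be produced under the random pairwise scheduler assumed throughout. The rule table shown is a hypothetical three-species cyclic toy used only as a stand-in; the actual rule set of protocol P_o and the parameters p, q, r, s are those defined in the body of the paper and are not reproduced here.

import random

def simulate(n, rules, init, steps, record_every):
    """Minimal random-scheduler simulation of a population protocol.

    `rules` maps an ordered state pair (a, b) to a list of
    ((a', b'), prob) outcomes; with the remaining probability the
    interaction is a no-op.  `init` maps states to counts summing to n.
    """
    pop = []
    for state, count in init.items():
        pop.extend([state] * count)
    history = []
    for t in range(steps):
        i, j = random.sample(range(n), 2)          # uniformly random pair of agents
        u = random.random()
        for (a2, b2), prob in rules.get((pop[i], pop[j]), []):
            if u < prob:
                pop[i], pop[j] = a2, b2
                break
            u -= prob
        if t % record_every == 0:                  # record concentrations
            history.append({s: pop.count(s) / n for s in set(pop)})
    return history

# Hypothetical example: a cyclic ("rock-paper-scissors") rule set over
# three species, standing in for the oscillator rules of protocol P_o.
rules = {
    ("A1", "A2"): [(("A1", "A1"), 0.5)],
    ("A2", "A3"): [(("A2", "A2"), 0.5)],
    ("A3", "A1"): [(("A3", "A3"), 0.5)],
}
n = 3000
hist = simulate(n, rules,
                {"A1": n // 3, "A2": n // 3, "A3": n - 2 * (n // 3)},
                steps=200 * n, record_every=10 * n)
print(hist[-1])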
http://arxiv.org/abs/1705.09798v3
{ "authors": [ "Bartlomiej Dudek", "Adrian Kosowski" ], "categories": [ "cs.DS", "cs.DC", "math.DS" ], "primary_category": "cs.DS", "published": "20170527094618", "title": "Universal Protocols for Information Dissemination Using Emergent Signals" }
Equilibria in Sequential Allocation
H. Aziz, P. Goldberg, T. Walsh
December 30, 2023
======================================================================

Sequential allocation is a simple mechanism for sharing multiple indivisible items. We study strategic behavior in sequential allocation. In particular, we consider Nash dynamics, as well as the computation and Pareto optimality of pure equilibria, and Stackelberg strategies. We first demonstrate that, even for two agents, better responses can cycle. We then present a linear-time algorithm that returns a profile (which we call the “bluff profile”) that is in pure Nash equilibrium. Interestingly, the outcome of the bluff profile is the same as that of the truthful profile, and the profile is in pure Nash equilibrium for all cardinal utilities consistent with the ordinal preferences. We show that the outcome of the bluff profile is Pareto optimal with respect to pairwise comparisons. In contrast, we show that an assignment may not be Pareto optimal with respect to pairwise comparisons even if it is a result of a preference profile that is in pure Nash equilibrium for all utilities consistent with ordinal preferences. Finally, we present a dynamic program to compute an optimal Stackelberg strategy for two agents, where the second agent has a constant number of distinct values for the items.

§ INTRODUCTION

A simple but popular mechanism to allocate indivisible items is sequential allocation <cit.>. Sequential allocation is used, for example, by the Harvard Business School to allocate courses to students <cit.>, as well as in multi-million dollar sports drafts <cit.>. In a sequential allocation mechanism, a picking sequence specifies the turns of the agents. For example, for the sequence 1212, agents 1 and 2 alternate, with agent 1 taking the first turn. Agents report their preferences over the items. Then the items are allocated to the agents in the following manner. In each turn, the agent in that turn is given the most preferred item that has not yet been allocated. In this paper we focus on the “direct revelation” version, where agents submit their complete rankings at the same time (and are committed to them), as opposed to the “extensive form” version, where agents take turns choosing and are only committed to items chosen previously. Sequential allocation is an ordinal mechanism since the outcome only depends on the ordinal preferences of agents over items. Although the agents are asked to report ordinal preferences, we will make the standard assumption in the literature that agents have underlying additive utilities for the items.

It has long been known that sequential allocation is not strategy-proof when agents do not have consecutive turns. An agent may not pick their most preferred remaining item if they expect this item to remain till a later turn. Instead, the agent may pick a slightly less preferred item that they would not otherwise get. Of course, this requires reasoning about how the agents may behave strategically at the same time. Since the sequential allocation mechanism is not strategy-proof, how precisely should agents behave? There has already been some work on strategic behavior in the setting where sequential allocation is viewed as a repeated game. <cit.> presented a linear-time algorithm to compute a subgame perfect Nash equilibrium (SPNE) when there are two agents and the picking sequence is alternating (121212…). The result was generalized to the case of any sequence <cit.>.
<cit.> stated that “no algorithm is known which will produce optimal play more efficiently than by checking many branches of the game tree.” Recently, it was proved that there can be an exponential number of subgame perfect Nash equilibria and finding even one of them is PSPACE-hard for an unbounded number of agents <cit.>. However, it is also natural to view sequential allocation as a one shot game rather than a repeated game. At the Harvard Business School, students submit a single ranked list of courses to a central organization that runs the sequential allocation mechanism on these fixed preferences. This is essentially then a one shot game. This suggests considering the more general solution concept of pure Nash equilibrium rather than that of subgame perfect Nash equilibrium. In this paper, we will view sequential allocation as a one shot strategic game in which the possible actions of the agents are possible ordinal preferences over the items, and the agents know each others' true ordinal preferences, as well as the picking sequence. Surprisingly, no algorithm to date has yet been proposed in the literature for efficiently computing a pure Nash equilibrium (PNE). We therefore propose a simple linear time method to compute a PNE even for an unbounded number of agents. We also consider Pareto optimality of pure Nash equilibria. This issue is similar to previous work on the price of anarchy/stability of equilibria in other strategic domains. Finally, we consider Stackelberg strategies in sequential allocation, where an agent announces the preference he or she intends to report.

Results. We study the computational problems of finding the equilibria of sequential allocation when viewed as a one shot game. No algorithm to date has been proposed in the literature for efficiently computing a pure Nash equilibrium (PNE) of sequential allocation. One general method to compute a PNE is to compute a sequence of better responses. Indeed, for any finite potential game, this is guaranteed to find a PNE. We first show better responses need not converge to a pure Nash equilibrium. Even for two agents, better responses can cycle. Instead, we propose a simple linear time method to compute the preference profile of a PNE even for an unbounded number of agents. We refer to the output of this algorithm as the bluff profile. Interestingly, the allocation generated by the bluff profile is the same as that of the truthful profile, and this profile is in equilibrium for all cardinal utilities consistent with the ordinal preferences. The fact that this equilibrium can be computed in linear time is perhaps a little surprising because computing just a single best response with the sequential allocation mechanism has recently been shown to be NP-hard <cit.>. In addition, computing a subgame perfect Nash equilibrium of the repeated game is PSPACE-hard <cit.>, and this is a PNE of the one shot game. Our result that there exists a linear-time algorithm to compute a PNE profile in the one shot game also contrasts with the fact that computing a PNE profile is NP-hard under the related probabilistic serial (PS) random assignment mechanism for fair division of indivisible goods <cit.>. We also consider Pareto optimality and other fairness properties of the pure Nash equilibria (Section <ref>). This is in line with work on the price of anarchy/stability of equilibria in other strategic domains. We show that the outcome of the bluff profile is Pareto optimal with respect to pairwise comparisons (defined in Section <ref>).
Hence, in sequential allocation, pure Nash equilibrium is not incompatible with ordinal Pareto optimality. On the other hand, we also prove that an assignment may not be Pareto optimal with respect to pairwise comparisons even if it is a result of a preference profile that is in pure Nash equilibrium for all utilities consistent with ordinal preferences. Finally, in Section <ref> we show that an agent may have an advantage from committing and declaring his preference, and that committing to the truthful report may not be optimal. For 2 players we present a polynomial-time algorithm to compute an optimal strategy to commit to, in the case that the other agent has a small number of distinct utility values.

§ PRELIMINARIES

We consider the setting in which we have a set of agents N = {1,…,n}, a set of items O = {o_1,…,o_m}, and a preference profile ≻ = (≻_1,…,≻_n) that specifies for each agent i his complete, strict, and transitive preference ≻_i over O. Each agent may additionally express a cardinal utility function u_i consistent with ≻_i: u_i(o) > u_i(o') iff o ≻_i o'. We will assume that each item is positively valued, i.e., u_i(o) > 0 for all i ∈ N and o ∈ O. The set of all utility functions consistent with ≻_i is denoted by 𝒰(≻_i). We will denote by 𝒰(≻) the set of all utility profiles u = (u_1,…,u_n) such that u_i ∈ 𝒰(≻_i) for each i ∈ N. When we consider agents' valuations according to their cardinal utilities, we will assume additivity, that is, u_i(O') = ∑_{o∈O'} u_i(o) for each i ∈ N and O' ⊆ O.

An assignment is an allocation of items to agents, represented as an n×m matrix [p(i)(o_j)]_{1≤i≤n, 1≤j≤m} such that for all i ∈ N and o_j ∈ O, p(i)(o_j) ∈ {0,1}; and for all j ∈ {1,…,m}, ∑_{i∈N} p(i)(o_j) = 1. An agent i gets item o_j if and only if p(i)(o_j) = 1. Each row p(i) = (p(i)(o_1),…,p(i)(o_m)) represents the allocation of agent i. We will also present the cardinal utilities in matrix form. A utility matrix U is an n×m matrix [U(i)(j)]_{1≤i≤n, 1≤j≤m} such that for all i ∈ N and j ∈ {1,…,m}, the entry U(i)(j) in the i-th row and j-th column is u_i(o_j). We say that utilities are lexicographic if for each agent i ∈ N and each o ∈ O, u_i(o) > ∑_{o' ≺_i o} u_i(o'). By S ≻_i T, we will mean u_i(S) > u_i(T).

Consider the setting in which N = {1,2}, O = {o_1,o_2,o_3,o_4}, and the preferences of the agents are

1: o_1, o_2, o_3, o_4
2: o_1, o_3, o_2, o_4

Then for the picking sequence 1221, agent 1 gets {o_1,o_4} while agent 2 gets {o_2,o_3}. The assignment resulting from sequential allocation (SA) can be represented as follows.

SA(≻_1,≻_2) = [ 1 0 0 1; 0 1 1 0 ].

The allocation of agent 1 is denoted by SA(≻_1,≻_2)(1). For a reported preference profile (≻_1',…,≻_n'), an agent i's best response is a preference report ≻_i'' that maximizes the utility u_i(SA(≻_i'',≻_{-i}')(i)). We say that a reported preference profile (≻_1',…,≻_n') is in pure Nash equilibrium (PNE) if no agent i can report a preference ≻_i'' such that u_i(SA(≻_i'',≻_{-i}')(i)) > u_i(SA(≻')(i)).

§ NASH DYNAMICS

Since we are interested in computing a PNE, a natural approach is to simulate better responses and hope they converge. For finite potential games, such an approach is guaranteed to find a PNE. However, we show that even for two agents, computing better responses will not always terminate and is thus not a method that is guaranteed to find a pure Nash equilibrium.

For two agents, better responses can cycle.

Let the sequence be the alternating one: 121212…. The following 5 step sequence of better responses leads to a cycle.
The ordinal preferences corresponding to the utility functions are as follows.

≻_1: o_3, o_4, o_5, o_6, o_9, o_10, o_7, o_8, o_1, o_2
≻_2: o_9, o_10, o_5, o_6, o_7, o_8, o_1, o_2, o_3, o_4

It is sufficient to consider the agents having lexicographic utilities, although the argument works for any utilities consistent with the ordinal preferences. This yields the following assignment and utilities at the start:

SA(≻_1,≻_2) = [ 1 0 1 1 1 0 1 0 0 0; 0 1 0 0 0 1 0 1 1 1 ]

In Step 1, agent 1 misreports to increase his utility.

≻_1^1: o_5, o_6, o_7, o_8, o_3, o_4, o_1, o_2, o_9, o_10
≻_2^1: o_9, o_10, o_5, o_6, o_7, o_8, o_1, o_2, o_3, o_4

SA(≻^1) = [ 0 0 1 1 1 1 1 0 0 0; 1 1 0 0 0 0 0 1 1 1 ]

In Step 2, agent 2 changes his report in response.

≻_1^2: o_5, o_6, o_7, o_8, o_3, o_4, o_1, o_2, o_9, o_10
≻_2^2: o_5, o_6, o_7, o_8, o_9, o_10, o_1, o_2, o_3, o_4

SA(≻^2) = [ 1 0 1 1 1 0 1 0 0 0; 0 1 0 0 0 1 0 1 1 1 ]

In Step 3, agent 1 changes his report in response.

≻_1^3: o_5, o_6, o_9, o_10, o_3, o_4, o_1, o_2, o_7, o_8
≻_2^3: o_5, o_6, o_7, o_8, o_9, o_10, o_1, o_2, o_3, o_4

SA(≻^3) = [ 0 0 1 1 1 0 0 0 1 1; 1 1 0 0 0 1 1 1 0 0 ]

In Step 4, agent 2 changes his report in response.

≻_1^4: o_5, o_6, o_9, o_10, o_3, o_4, o_1, o_2, o_7, o_8
≻_2^4: o_9, o_10, o_5, o_6, o_7, o_8, o_1, o_2, o_3, o_4

SA(≻^4) = [ 1 0 1 1 1 1 0 0 0 0; 0 1 0 0 0 0 1 1 1 1 ]

In Step 5, agent 1 changes his report in response.

≻_1^5: o_5, o_6, o_7, o_8, o_3, o_4, o_1, o_2, o_9, o_10
≻_2^5: o_9, o_10, o_5, o_6, o_7, o_8, o_1, o_2, o_3, o_4

SA(≻^5) = [ 0 0 1 1 1 1 1 0 0 0; 1 1 0 0 0 0 0 1 1 1 ]

Since ≻^1 = ≻^5, we have cycled.

§ THE BLUFF PROFILE

In this section, we outline a linear-time algorithm to compute a pure Nash equilibrium preference profile. Surprisingly, we will show that the preference profile constructed is in pure Nash equilibrium for all utilities consistent with the ordinal preferences.

1. Simulate sequential allocation with the truthful preferences.
2. Set the preferences of each agent to the order in which the items are picked when simulating sequential allocation under the truthful preferences.

We refer to the profile constructed as the bluff profile, since the idea behind the profile is that an agent wants to get his most preferred item immediately: if he does not, some other agent will take it. We observe the following characteristics of the bluff profile.

In the bluff profile, (i) all agents have the same preferences; (ii) the order in which items are picked is the same as the order in which items are picked under the truthful profile; and (iii) the allocations of agents are the same as in the truthful profile.

We show that the bluff profile is in pure Nash equilibrium if the utilities are lexicographic.

The bluff profile is in pure Nash equilibrium if the utilities are lexicographic.

We prove by induction on the number of picks that no agent has an incentive to pick some other item when his turn comes, which means that he picks the same item that he picks in the bluff profile, which is also the most preferred item among the available items. This is equivalent to proving that no agent has an incentive to change his report from that in the bluff profile. For the base case, let us consider the first agent, who takes the first turn. If he does not take his most-preferred item, the next agent will take it. Since utilities are lexicographic, the first agent gains most by getting his most-preferred item. Regarding the other agents, they are not disadvantaged by placing that item first in their preference lists, since it is taken by the first agent.
It does not affect their ability to express their preferences amongst the remaining items. Similarly, let us assume that agents in the first k turns did not have an incentive to misreport and pick some item other than the most preferred available item. Then we show that the agent j in the (k+1)-st turn does not have an incentive to change his report. Note that the item that j picks according to the bluff profile is his most preferred item o amongst those still available. This is because the order in which items are picked and the allocations that are made exactly coincide with the truthful profile. Now if j does not make the consistent pick, he will not be able to recover the loss of not getting o, because the utilities are lexicographic. Again, for the other agents (who do not get o) it does not disadvantage them to put o in the (k+1)-st place in their preference lists.

We now prove the following lemma.

Consider a profile in which all agents in N∖{i} report the same preferences. Then agent i's best response results in the same allocation for him for all utilities consistent with the ordinal preferences.

When all agents in N∖{i} report the same preferences, then from the perspective of agent i, all the turns of agents in N∖{i} can be replaced by a single agent, representative of N∖{i}, who has the same preferences as the agents in N∖{i}. Thus, computing a best response for agent i when all agents in N∖{i} report the same preferences is equivalent to computing a best response for agent i when there is only one other agent (with the same preference as the agents in N∖{i}) and each turn of agents in N∖{i} is replaced by the representative agent. When there is one other agent, <cit.> proved that the best response results in the same allocation for the agent for all utilities consistent with the ordinal preferences.[This argument does not work when the number of other agents is more than one and they have different preferences. It can be shown that for three or more agents, best responses need not result in the same allocation.]

Combining Lemmas <ref> and <ref>, we are in a position to prove the following:

The bluff profile is in pure Nash equilibrium under all utilities consistent with the ordinal preferences.

From Lemma <ref>, we know that the bluff profile is in pure Nash equilibrium if the utilities are lexicographic. From Lemma <ref>, we know that all agents have the same preferences in the bluff profile. This immediately implies that for any agent i, all agents in N∖{i} report the same preferences. From Lemma <ref>, each agent i's best response to the bluff profile results in the same unique allocation for all utilities consistent with the ordinal preferences. This allocation must be the same as the allocation achieved by i when he reports the bluff preferences, because they yield the best allocation under lexicographic utilities. Hence the bluff profile is in pure Nash equilibrium under all utilities consistent with the ordinal preferences.

§ THE CROSSOUT PROFILE

Since sequential allocation can also be viewed as a perfect information extensive form game, it admits an SPNE (subgame perfect Nash equilibrium) and hence a pure Nash equilibrium for the game tree.[For readers not familiar with extensive form games and subgame perfect Nash equilibrium, we refer them to <cit.>.] Computing an SPNE of the game tree is PSPACE-complete <cit.>. On the other hand, the optimal play for the extensive form game can be computed in polynomial time for the case of two agents.
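Before turning to the details of the crossout construction, the bluff construction of the previous section is short enough to state in code. The following sketch (ours, not from the paper; the names and the data layout are illustrative) simulates sequential allocation and builds the bluff profile by one simulation pass; the assertion checks characteristic (iii) on the example from the preliminaries. For clarity the inner search is O(m) per pick, so this sketch is O(m^2) overall; a linear-time implementation would keep per-agent pointers into the preference lists.

def sequential_allocation(sequence, prefs):
    """Run sequential allocation: in each turn, the scheduled agent takes
    his most preferred item (per his reported ranking) still available."""
    taken, order = set(), []
    bundles = {a: [] for a in set(sequence)}
    for agent in sequence:
        item = next(o for o in prefs[agent] if o not in taken)
        taken.add(item)
        bundles[agent].append(item)
        order.append(item)
    return bundles, order

def bluff_profile(sequence, truthful_prefs):
    """All agents report the order in which items are picked under the
    truthful profile (the two-step algorithm of the bluff-profile section)."""
    _, pick_order = sequential_allocation(sequence, truthful_prefs)
    return {a: list(pick_order) for a in truthful_prefs}

# Example from the preliminaries: picking sequence 1221.
prefs = {1: ["o1", "o2", "o3", "o4"], 2: ["o1", "o3", "o2", "o4"]}
bundles, _ = sequential_allocation([1, 2, 2, 1], prefs)
print(bundles)  # {1: ['o1', 'o4'], 2: ['o3', 'o2']}

bluff = bluff_profile([1, 2, 2, 1], prefs)
# Characteristic (iii): the bluff profile reproduces the truthful allocation.
assert sequential_allocation([1, 2, 2, 1], bluff)[0] == bundles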
The strategy corresponding to the SPNE is to play so that the last agent gets their least preferred item, the second from last their next least preferred item, and so on. We first show that for the case of two agents, similar ideas can also be used to construct a PNE preference profile for the one-shot game. We use the expression crossout profile to refer to the preference profile in which both agents have preferences that are the same as the item picking ordering in the optimal play of the perfect information extensive form game. The crossout preference profile can be computed as follows:

1. Reverse and then invert (exchange 1s with 2s and vice versa) the picking sequence.
2. Reverse the preferences of the two agents.
3. Find the order L in which items are allocated to the agents according to the new picking sequence and preferences.
4. Return the reverse of L as the preference of each agent.

We now show that the crossout profile is in PNE for certain utilities consistent with the ordinal preferences. We say that utilities are upward lexicographic if, for any two allocations with an equal number of items, each agent prefers the allocation with the better least-preferred item, in case of equality the one with the better second-least-preferred item, and so on. Such a preference relation can be captured by cardinal utilities as follows: if agent i has ordinal preferences o_1, o_2, …, o_m, then u_i(o_j) = 1 - 1/2^{m+1-j} for all j ∈ {1,…,m}.

For two agents and for upward lexicographic utilities, the crossout profile is in PNE.

Consider agent i ∈ {1,2} and denote by -i the other agent. Let π be the sequence of turns of the agents, so that π(j) is the agent with the j-th turn. Now if agent -i = π(m) has the last turn, then in the first m-1 turns, whenever agent i's turn comes, he has the option to get an item better than i's least preferred item o_m. Hence i can guarantee not to get his least preferred item o_m, and hence guarantee that -i gets o_m. This will always be the best response for agent i if he has upward lexicographic utilities. Since -i gets o_m in any case, -i may as well rank o_m last, and use his higher slots to prioritise amongst the other items. We can now consider a situation in which o_m is removed from O, with o_m fixed as the least preferred item in both agents' preferences. Then the same argument can be applied recursively.

Lemma <ref> can be used to prove the following theorem.

For two agents and for all utilities consistent with the ordinal preferences, the crossout profile is in PNE.

From Lemma <ref>, we know that for two agents and for upward lexicographic utilities, the crossout profile is in PNE. When there is one other agent, Bouveret and Lang <cit.> proved that the best response results in the same allocation for the agent under all utilities consistent with the ordinal preferences.

Next we show that even for two agents, the outcome of a crossout profile may not be the same as the truthful assignment.

Even for two agents, the outcome of the crossout profile (and hence the SPNE assignment) may not be the same as the truthful assignment. Consider the sequence 1212 and the profile:

≻_1: a, b, c, d
≻_2: b, c, a, d

The picking sequence obtained after reversing and inverting the picking sequence is again 1212. The modified preferences are as follows.

≻_1'': d, c, b, a
≻_2'': d, a, c, b

Under picking sequence 1212 and profile ≻'', the items are picked as follows: d, a, c, b. We reverse this ordering to obtain the following crossout profile:

≻_1'': b, c, a, d
≻_2'': b, c, a, d.
Under this profile and the original picking sequence 1212, 1 gets {b,a} and 2 gets {c,d}. Also note that the SPNE path is as follows: 1 gets b, 2 gets c, 1 gets a, and then 2 gets d. In contrast, in the truthful assignment, 1 gets a and c.

§ PARETO OPTIMALITY OF PURE NASH EQUILIBRIA

We next consider the Pareto optimality of equilibria. An allocation S is at least as preferred with respect to pairwise comparisons by a given agent i as allocation T if there exists an injection f from T to S such that for each item o ∈ T, i prefers f(o) at least as much as o. We note that an agent strictly prefers S over T with respect to pairwise comparisons if S results from T by a sequence of replacements of an item in T with a strictly more preferred item. Note that the pairwise comparison relation is transitive but not necessarily complete. We will focus on Pareto optimality with respect to pairwise comparisons. We first show there exists a PNE whose outcome is Pareto optimal with respect to pairwise comparisons. Hence, unlike some other games, Pareto optimality is not incompatible with Nash equilibria in sequential allocation.

The outcome of the bluff profile is Pareto optimal with respect to pairwise comparisons.

The argument is as follows. Since the outcome of the bluff profile is the same as the outcome of the truthful profile, and since the outcome of each truthful profile is Pareto optimal with respect to pairwise comparisons <cit.>, the outcome of the bluff profile is Pareto optimal with respect to pairwise comparisons as well.

Although the argument for the theorem is simple, it shows the following: if the truthful outcome satisfies some normative properties such as envy-freeness or other fairness properties <cit.>, we know that there exists at least one PNE which results in an assignment with the same normative properties. The theorem above is in sharp contrast with the result in <cit.> that there exist utilities under which no SPNE assignment is Pareto optimal with respect to pairwise comparisons. Other relevant papers that deal with implementing Pareto optimal outcomes in other settings include <cit.> and <cit.>.

Next, we show that there may exist a PNE whose outcome is not Pareto optimal with respect to pairwise comparisons. The statement holds even if the profile in question is in PNE with respect to all utilities consistent with the ordinal preferences!

An assignment may not be Pareto optimal with respect to pairwise comparisons even if it is a result of a preference profile that is in PNE for all utilities consistent with ordinal preferences. Consider the preference profile:

≻_1: a, b, c, d, e, f
≻_2: e, f, b, a, d, c
≻_3: c, f, e, d, a, b

Let the sequence be 123123. Then the outcome of the truthful preference profile can be summarized as

1: {a,b}    2: {e,f}    3: {c,d}

Consider the following profile ≻':

≻_1': c, f, a, b, d, e
≻_2': b, a, e, c, d, f
≻_3': f, e, d, a, b, c

Then the outcome of the profile ≻' can be summarized as

1: {c,a}    2: {b,e}    3: {f,d}

We argue that the profile ≻' is in PNE. In his reported preference ≻_1', agent 1 gets {a,c}. The only better outcome agent 1 can get is {a,b}. If he goes for a first, he does not get c or b. If he goes for b first, he does not get a. So agent 1 plays his best response for all utilities consistent with his ordinal preferences. In his reported preference ≻_2', agent 2 gets {e,b}. The only better outcome agent 2 can get is {e,f}. Now agent 2 in his best response will try to get {e,f}. If agent 2 tries to pick f first, he will not get e.
Hence agent 2 plays his best response for all utilities consistent with the ordinal preferences. Finally, consider agent 3: he cannot get c. The best he could hope for is {e,f}. If 3 goes for e first, then he does not get f. If 3 goes for f first, he can only get d as his second item. The best he can actually get is therefore {f,d}, so his reported preference is his best response for all utilities consistent with the ordinal preferences.

§ ADVANTAGE OF COMMITMENT

In prior work on strategic aspects of sequential allocation, the focus has been on computing manipulations or equilibria. We now consider another strategic aspect: Stackelberg strategies to commit to, in order to obtain outcomes that are better for the individual agent. In this setting, agent 1 (the leader) announces a preference R over all the items, and commits to selecting, whenever it is his turn, the highest-ranked item in R that is not yet taken. The following example illustrates a leadership advantage.

There are 2 agents and 4 items denoted a, b, c, d. Suppose the agents choose items in the order 1212. The ordinal preferences are

≻_1: a, d, c, b
≻_2: a, b, d, c

Then in an SPNE, agent 1 takes item a, then agent 2 takes item d (since agent 1 will not take item b, it is okay for agent 2 to take d, ending up with b and d). Then agent 1 takes c. Also, if agent 1 reports the truth a, d, c, b, then agent 2 is guaranteed to get b, so he can report d, b, a, c and get {b,d}, which means that 1 gets {a,c}. However, consider the case where agent 1 is the leader and announces the preference list ≻_1': a, b, d, c. Then agent 2 must use a preference list that results in agent 2 taking item b first. Agent 1 has a credible threat to take item b if agent 2 does not take it next (despite the fact that agent 1 doesn't value item b). So, agent 1 gets items a and d.

This raises the following question: for two agents, what is the complexity of finding the best preference report for the leader, assuming that the follower will best-respond? Next, we consider an interesting special case in which the problem can be solved in polynomial time.

For n = 2 and any fixed picking sequence, there is an algorithm, whose runtime is polynomial in the number of items m, to compute an optimal Stackelberg strategy for agent 1 when agent 2 has a constant number of distinct values for items.

We make the assumption, standard in the study of optimal Stackelberg strategies, that if agent 2 has more than one best response, then ties are broken in agent 1's favour. Let k (a constant) be the number of distinct values that agent 2 has for items. Agent 1 has to identify a ranking of the items such that if agent 2 best-responds, agent 1's total value is maximised. It is convenient to proceed by solving the following slight generalisation of the problem. Given a picking sequence P (a sequence of 1's and 2's of length m), we add a parameter ℓ, where ℓ is at most the number of 2's in P, and agent 2 may receive only ℓ items. We make the following observation:

Suppose (for picking sequence P) agent 2 is allowed to receive ℓ items. We can regard agent 2's selection of items as working as follows. Given agent 1's preference ranking ≻_1, agent 2 places a token on the ℓ items in ≻_1 whose positions correspond to the positions of 2 in P. Agent 2 is allowed to move any token from any item x to an item x', provided that x ≻_1 x', subject to the constraint that tokens lie at distinct items. Finally, the items marked with tokens are the ones that agent 2 receives.
Agent 2 chooses the most valuable set that can be obtained in this way.

We may assume that in an optimal (for agent 1) ranking of the items, if items x and x' have the same value for agent 2, and agent 1 values x higher than x', then agent 1 ranks x higher than x'. We claim first that, since x and x' have the same value to agent 2, given any ranking by agent 1, any best response by agent 2 can be modified to avoid an outcome where agent 2 takes the higher-ranked of {x,x'} but not the lower-ranked of {x,x'}. Noting Observation <ref>, if the higher-ranked of {x,x'} has a token but not the lower-ranked of them, the token can be moved to the lower-ranked of {x,x'} without loss of utility to agent 2. If agent 1 ranks the lower-valued of {x,x'} higher in ≻_1, they can be exchanged, and the new ranking (with the right best response by agent 2) is at least as good for agent 1.

Notation: Recall that m denotes the number of items. Let S_1,…,S_k be the partition of the items into subsets that agent 2 values equally. For 1 ≤ i ≤ k let m_i = |S_i|. Let o_{i,j} ∈ S_i be the member of S_i that has the j-th highest value to agent 1. Let S_i(j) ⊆ S_i be the set {o_{i,1},…,o_{i,j}}, that is, the j highest-value (to agent 1) members of S_i. Let U(j_1,…,j_k;ℓ) be the highest utility that agent 1 can get, assuming that the items S_1(j_1) ∪ ⋯ ∪ S_k(j_k) are being shared and agent 2 is allowed to take ℓ of them, where ℓ ≤ m. In words, we consider subsets of the S_i obtained by taking the best items in S_i, and consider various numbers of items that agent 2 may receive.
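Anticipating the recursion derived in the proof that follows, here is a minimal sketch (ours) of the resulting dynamic program over the O(m^{k+1}) parameter settings of U(j_1,…,j_k;ℓ). For brevity it omits the best-response consistency check performed in the full algorithm, so it only sketches the value computation; the input format, agent 1's values inside each class S_i sorted decreasingly, is our own convention.

from functools import lru_cache

def stackelberg_value_sketch(S_values1, sequence):
    """Evaluate U(j_1,...,j_k; l): at each step, choose which class
    contributes the lowest-ranked item o_{i,j_i} and whether agent 2
    takes it (recursion (1)) or it stays with agent 1 (recursion (2))."""
    k = len(S_values1)

    @lru_cache(maxsize=None)
    def U(js, l):
        if sum(js) == 0:
            return 0  # no items left to rank
        best = float("-inf")
        for i in range(k):
            if js[i] == 0:
                continue
            rest = js[:i] + (js[i] - 1,) + js[i + 1:]
            v = S_values1[i][js[i] - 1]          # agent 1's value of o_{i, j_i}
            if l > 0:                             # case (1): agent 2 takes it
                best = max(best, U(rest, l - 1))
            best = max(best, v + U(rest, l))      # case (2): item stays with agent 1
        return best

    m2 = sum(1 for turn in sequence if turn == 2)  # items agent 2 may take
    return U(tuple(len(s) for s in S_values1), m2)

# Hypothetical instance: two value classes for agent 2; each inner list
# holds agent 1's values within that class, sorted decreasingly.
print(stackelberg_value_sketch([[9, 4, 1], [7, 3]], [1, 2, 1, 2, 1]))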
If agent 2 does not take this lowest-ranked item o_i,j_i, then (<ref>) gives agent 1's utility as his valueu_1(o_i,j_i) for that item, plus the best outcome for agent 1 assuming agent 2 may take ℓ items from amongst the other j-1 items.In checking which case applies for a given choice of i and corresponding item o_i,j_i, we check whether the optimal ranking of the other items for agent 2 taking ℓ-1 of them, when extended toagent 2's selection of that additional item, is indeed a best-response for agent 2 given that he gets ℓ of all the items. This can be done efficiently, since best responses can be efficiently computed <cit.>. § CONCLUSION Sequential allocation is a simple and frequently used mechanism for resource allocation.Its strategic aspects have been formally studied for the last forty years. To our surprise, some fundamental questions have been unaddressed in the literature about sequential allocation when viewed as an one shot game. This is despite the fact that in many settings, it is essentially played as an one shot game. We have therefore studied in detailthe pure Nash equilibrium of sequential allocation mechanisms.We presented a number of results on Nash dynamics, as well as on the computation of pure Nash equilibrium, and the Pareto optimality of equilibria. In particular,we presented the first polynomial-time algorithm to compute a PNE that applies to all utilities consistent with the ordinal preferences. We have also explored some other new directions such as Stackelberg strategies that have so far not been examined in sequential allocation. 17 urlstyle[Aziz et al.(2015a)Aziz, Gaspers, Mackenzie, Mattei, Narodytska, and Walsh]AGM+15d H. Aziz, S. Gaspers, S. Mackenzie, N. Mattei, N. Narodytska, and T. Walsh. Equilibria under the probabilistic serial rule. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI), pages 1105–1112, 2015a.[Aziz et al.(2015b)Aziz, Gaspers, Mackenzie, and Walsh]AGMW15a H. Aziz, S. Gaspers, S. Mackenzie, and T. Walsh. Fair assignment of indivisible objects under ordinal preferences. Artificial Intelligence, 227:0 71–92, 2015b.[Aziz et al.(2015c)Aziz, Walsh, and Xia]AWX15b H. Aziz, T. Walsh, and L. Xia. Possible and necessary allocations via sequential mechanisms. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI), pages 468–474, 2015c.[Aziz et al.(2017)Aziz, Bouveret, Lang, and Mackenzie]ABLM16a H. Aziz, S. Bouveret, J. Lang, and S. Mackenzie. Complexity of manipulating sequential allocation. In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI), pages 328–334, 2017.[Bouveret and Lang(2011)]BoLa11a S. Bouveret and J. Lang. A general elicitation-free protocol for allocating indivisible goods. In Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI), pages 73–78. AAAI Press, 2011.[Bouveret and Lang(2014)]BoLa14b S. Bouveret and J. Lang. Manipulating picking sequences. In Proceedings of the 21st European Conference on Artificial Intelligence (ECAI), pages 141–146, 2014.[Brams and King(2005)]BrKi05a S. J. Brams and D. L. King. Efficient fair division: Help the worst off or avoid envy? Rationality and Society, 170 (4):0 387–421, 2005.[Brams and Straffin(1979)]BrSt79a S. J. Brams and P. D. Straffin. Prisoners' dilemma and professional sports drafts. The American Mathematical Monthly, 860 (2):0 80–88, 1979.[Brams and Taylor(1996)]BrTa96a S. J. Brams and A. D. Taylor. 
Fair Division: From Cake-Cutting to Dispute Resolution. Cambridge University Press, 1996.[Budish and Cantillion(2012)]BuCa12a E. Budish and E. Cantillion. The multi-unit assignment problem: Theory and evidence from course allocation at Harvard. American Economic Review, 1020 (5):0 2237–2271, 2012.[Kalai et al.(2015)Kalai, Meir, and Tennenholtz]KMT15a G. Kalai, R. Meir, and M. Tennenholtz. Bidding games and efficient allocations. In Proceedings of the 16th ACM Conference on Economics and Computation (ACM-EC), pages 113–130, 2015.[Kalinowski et al.(2013a)Kalinowski, Narodytska, and Walsh]KNW13a T. Kalinowski, N. Narodytska, and T. Walsh. A social welfare optimal sequential allocation procedure. In Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI), pages 227–233. AAAI Press, 2013a.[Kalinowski et al.(2013b)Kalinowski, Narodytska, Walsh, and Xia]KNWX13a T. Kalinowski, N. Narodytska, T. Walsh, and L. Xia. Strategic behavior when allocating indivisible goods sequentially. In Proceedings of the 27th AAAI Conference on Artificial Intelligence (AAAI), pages 452–458. AAAI Press, 2013b.[Kohler and Chandrasekaran(1971)]KoCh71a D. A. Kohler and R. Chandrasekaran. A class of sequential games. Operations Research, 190 (2):0 270–277, 1971.[Levine and Stange(2012)]LeSt12a L. Levine and K. E. Stange. How to make the most of a shared meal: Plan the last bite first. The American Mathematical Monthly, 1190 (7):0 550–565, 2012.[Leyton-Brown and Shoham(2008)]LeSh08a K. Leyton-Brown and Y. Shoham. Essentials of Game Theory: A Concise, Multidisciplinary Introduction. Morgan & Claypool, 2008.[Moulin(1984)]Moul84b H. Moulin. Implementing the Kalai-Smorodinsky bargaining solution. Journal of Economic Theory, 330 (1):0 32–45., 1984.
http://arxiv.org/abs/1705.09444v1
{ "authors": [ "Haris Aziz", "Paul Goldberg", "Toby Walsh" ], "categories": [ "cs.GT", "91A12, 68Q15", "F.2; J.4" ], "primary_category": "cs.GT", "published": "20170526063157", "title": "Equilibria in Sequential Allocation" }
Russian Quantum Center, Skolkovo, Moscow 143025, Russia
Steklov Mathematical Institute of Russian Academy of Sciences, Moscow 119991, Russia
Russian Quantum Center, Skolkovo, Moscow 143025, Russia
Russian Quantum Center, Skolkovo, Moscow 143025, Russia
Russian Quantum Center, Skolkovo, Moscow 143025, Russia
Steklov Mathematical Institute of Russian Academy of Sciences, Moscow 119991, Russia
Russian Quantum Center, Skolkovo, Moscow 143025, Russia
Russian Quantum Center, Skolkovo, Moscow 143025, Russia
[email protected]
Russian Quantum Center, Skolkovo, Moscow 143025, Russia
Institute for Quantum Science and Technology, University of Calgary, Calgary AB T2N 1N4, Canada
[email protected]
Russian Quantum Center, Skolkovo, Moscow 143025, Russia

Blockchain is a distributed database which is cryptographically protected against malicious modifications. While promising for a wide range of applications, current blockchain platforms rely on digital signatures, which are vulnerable to attacks by means of quantum computers. The same, albeit to a lesser extent, applies to cryptographic hash functions that are used in preparing new blocks, so parties with access to quantum computation would have an unfair advantage in procuring mining rewards. Here we propose a possible solution to the quantum-era blockchain challenge and report an experimental realization of a quantum-safe blockchain platform that utilizes quantum key distribution across an urban fiber network for information-theoretically secure authentication. These results address important questions about realizability and scalability of quantum-safe blockchains for commercial and governmental applications.

Quantum-secured blockchain
A.K. Fedorov
December 30, 2023
==========================

§ INTRODUCTION

The blockchain is a distributed ledger platform with high Byzantine fault tolerance, which enables achieving consensus in a large decentralized network of parties who do not trust each other. A paramount feature of blockchains is the accountability and transparency of transactions, which makes it attractive for a variety of applications ranging from smart contracts and finance to manufacturing and healthcare <cit.>. One of the most prominent applications of blockchains is cryptocurrencies, such as Bitcoin <cit.>. It is predicted that ten percent of global GDP will be stored on blockchains or blockchain-related technology by 2025 <cit.>.

In a modern blockchain network, any member can introduce a record (transaction) to the ledger. Every transaction must be signed by its initiator's digital signature; this rule enables, for example, exchange of digital assets between parties. The transactions are stored on each member's computer (node) as a sequence of groups known as blocks. All transactions that have been introduced over a period of time are compiled in a block that is linked to the previous one <cit.>. This linking is implemented by cryptographic hash functions: each block contains a hash value of its content, and the content also includes the hash of the previous block (Fig. 1). Any modification of a block inside the chain yields a change of its hash, which would in turn require modification of all subsequent blocks. This structure protects the data inside a blockchain from tampering and revision <cit.>.
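The hash-linking just described is easy to illustrate. In the following toy sketch (ours; SHA-256 and the field names are illustrative choices, not a specification of any particular platform), each block stores the hash of its predecessor, so editing an old block invalidates the link.

import hashlib, json, time

def block_hash(block):
    """Hash of the block content; the content includes the previous
    block's hash, which is what links the chain together."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def make_block(transactions, prev_hash):
    return {"time": time.time(), "transactions": transactions,
            "prev_hash": prev_hash}

# Build a two-block chain and show that editing block 0 breaks the link.
b0 = make_block(["alice->bob:5"], prev_hash="0" * 64)
b1 = make_block(["bob->carol:2"], prev_hash=block_hash(b0))
b0["transactions"][0] = "alice->bob:500"          # tamper with history
print(block_hash(b0) == b1["prev_hash"])          # False: chain broken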
While each node is allowed, in principle, to introduce a block to the network, each blockchain network has a set of rules that organize and moderate the block formation process. In Bitcoin <cit.>, for example, a member introducing a new block must solve an NP-hard problem: find a set of numbers for the block's header such that the hash of that header does not exceed a certain value (this paradigm is known as proof-of-work). In this way, the blocks are guaranteed not to emerge too frequently, so every node has an opportunity to verify the validity of the block and the transactions therein before a new block arrives. This ensures that the database stored by all network nodes is identical. Whenever a new block is accepted by the community, its “miner” is rewarded in bitcoins for the computational power they have spent. A more detailed summary of the blockchain concept is presented in Appendix A.

We see that blockchain relies on two one-way computational technologies: cryptographic hash functions and digital signatures. Most blockchain platforms rely on elliptic curve public-key cryptography (ECDSA) or on the large-integer factorization problem (RSA) to generate a digital signature <cit.>. The security of these algorithms is based on the assumption of computational complexity of certain mathematical problems <cit.>.

A universal quantum computer would enable efficient solving of these problems, thereby making the corresponding digital signature algorithms, including those used in blockchains, insecure. In particular, Shor's quantum algorithm solves factorization of large integers and discrete logarithms in polynomial time <cit.>. Another security issue is associated with Grover's search algorithm <cit.>, which allows a quadratic speedup in calculating the inverse hash function. In particular, this will enable a so-called 51-percent attack, in which a syndicate of malicious parties controlling a majority of the network's computing power would monopolize the mining of new blocks. Such an attack would allow the perpetrators to sabotage other parties' transactions or prevent their own spending transactions from being recorded in the blockchain. Other attacks on blockchain technology by means of quantum computing, as well as possible roles of quantum algorithms in the mining process, are considered in more detail in recent publications <cit.>.

The security of blockchains can be enhanced by using post-quantum digital signature schemes <cit.> for signing transactions. Such schemes are considered to be robust against attacks with quantum computers <cit.>. However, this robustness relies on unproven assumptions. Furthermore, post-quantum digital signatures are computationally intensive and are not helpful against attacks that utilize the quantum computer to dominate the network's mining hashrate. In addition to the blockchains based on mining principles there are other approaches to distributed ledger maintenance, e.g. Byzantine fault tolerance (BFT) replication <cit.> and practical BFT replication <cit.>. To our knowledge, all the proposed approaches either require the use of digital signatures, and hence are vulnerable to quantum computer attacks, or at least require pairwise authenticated channels.
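To illustrate the proof-of-work rule described above, here is a toy Python sketch (our illustration, not the paper's implementation); the difficulty parameter and the header content are arbitrary choices:

```python
import hashlib

def mine(header: bytes, difficulty_bits: int = 20) -> int:
    # Search for a nonce such that the hash of (header + nonce),
    # read as an integer, does not exceed the target value.
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Expected work grows as 2**difficulty_bits; 20 bits takes about a second.
print("valid nonce:", mine(b"example block header"))
```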
We note that a pairwise authenticated channel ensures that each message is not tampered with in transit, but it does not solve the transferability issue.

The way to guarantee authentication in the quantum era is to use quantum key distribution (QKD), which guarantees information-theoretic (unconditional) security based on the laws of quantum physics <cit.>. QKD is able to generate a secret key between two parties connected by a quantum channel (for transmitting quantum states) and a public classical channel (for post-processing procedures). The technology enabling QKD networks has been demonstrated in many experiments <cit.> and is now publicly available through multiple commercial suppliers.

In the present work, we describe a blockchain platform that combines (i) the original BFT state-machine replication without the use of digital signatures <cit.> (hereafter referred to as the “broadcast protocol”) and (ii) QKD for providing authentication; we also implement an experiment demonstrating its capability in an urban QKD network <cit.>. We believe this scheme to be robust against not only the presently known capabilities of the quantum computer, but also those that may potentially be discovered in the future to make post-quantum cryptography schemes vulnerable.

The utility of QKD for blockchains may appear counterintuitive, as QKD networks rely on trust among nodes, whereas the earmark of many blockchains is the absence of such trust. More specifically, one may argue that QKD cannot be used for authentication because it itself requires an authenticated classical channel for operation. However, each QKD communication session generates a large amount of shared secret data, part of which can be used for authentication in subsequent sessions. Therefore a small amount of “seed” secret key that the parties share before their first QKD session ensures their secure authentication for all future communication <cit.>. In this way, QKD can be used in lieu of classical digital signatures.

§ QUANTUM-SECURED BLOCKCHAIN

Here we consider a blockchain protocol within a two-layer network with n nodes. The first layer is a QKD network with pairwise communication channels that permit establishing an information-theoretically (unconditionally) secure private key for each pair of nodes. The second (classical) layer is used for transmitting messages with authentication tags based on information-theoretically secure Toeplitz hashing (see Appendix B) that are created using the private keys procured in the first layer. For concreteness, we consider a blockchain maintaining a digital currency.

The operation of the blockchain is based on two procedures: (i) creation of transactions and (ii) construction of blocks that aggregate new transactions. New transactions are created by those nodes who wish to transfer their funds to another node. Each individual new transaction record is constructed akin to those in Bitcoin, i.e. it contains information about the sender, the receiver, the time of creation, the amount to be transferred, and a list of reference transactions that justifies that the sender has enough funds for the operation (see Appendix A).
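As a sketch of how such a record can be tagged before being broadcast as described next, the following Python code (ours, not the authors'; the record fields are illustrative and the tag length l_h = 40 is the value quoted in Appendix B, while the message is toy-sized compared with the experimental l_M = 2^22) computes the Toeplitz-hash authentication tag h(M) = T_S M ⊕ r:

```python
import json
import numpy as np

rng = np.random.default_rng(0)

# Illustrative transaction record (field names are ours, cf. Appendix A).
record = {"sender": "A", "receiver": "B", "amount": 5,
          "time": "2016-05-05T12:00", "refs": ["tx-042"]}
msg = np.unpackbits(np.frombuffer(json.dumps(record).encode(), np.uint8)).astype(np.int64)

l_m = msg.size   # message length l_M
l_h = 40         # tag length l_h, as in the experiment

def toeplitz_tag(m, s, r):
    # h(M) = T_S M xor r over GF(2), where the l_h x l_M Toeplitz matrix
    # T_S[i, j] = s[i - j + l_M - 1] is generated by the string s.
    T = np.array([[s[i - j + l_m - 1] for j in range(l_m)] for i in range(l_h)])
    return (T @ m + r) % 2

# s is reused for the whole session; r must be fresh for every message,
# so each authenticated message consumes l_h bits of the shared key K_aut.
s = rng.integers(0, 2, l_h + l_m - 1)
r = rng.integers(0, 2, l_h)

tag = toeplitz_tag(msg, s, r)                        # sent alongside the record
assert np.array_equal(tag, toeplitz_tag(msg, s, r))  # receiver's verification
```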
This record is then sent via authenticated channels to all other n-1 nodes, thereby entering the pool of unconfirmed transactions. Each node checks these entries with respect to its local copy of the database and against each other, in order to verify that each transaction has sufficient funds, and forms an opinion regarding the transaction's admissibility. At this stage, the community does not attempt to exclude double-spending events (a dishonest party sending different versions of a particular transaction to different nodes of the network).

Subsequently, the unconfirmed transactions are aggregated into a block. We abolish the classical blockchain practice of having the blocks proposed by individual “miners”, because it is vulnerable to quantum computer attacks in at least two ways. First, transactions are not protected by digital signatures. This means that a miner would have complete freedom to fabricate arbitrary, apparently valid, transactions and include them in the block. Second, a node equipped with a quantum computer is able to mine new blocks dramatically faster than any non-quantum node. This opens a possibility for attacks such as the 51-percent attack described above.

Instead, we propose to create blocks in a decentralized fashion. To this end, we employ the broadcast protocol proposed in the classic paper by Shostak, Lamport and Pease <cit.> (see Appendix C). This information-theoretically secure protocol allows achieving Byzantine agreement in any network with pairwise authenticated communication, provided that the number of dishonest parties is less than n/3 (which we assume to be the case). At a certain moment in time (e.g. every ten minutes), the network applies the protocol to each unconfirmed transaction, arriving at a consensus regarding the correct version of that transaction (thereby eliminating double-spending) and whether the transaction is admissible. Each node then forms a block out of all admissible transactions, sorted according to their time stamps. The block is added to the database.
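The broadcast protocol referenced here is the recursive “Oral Messages” algorithm OM(m) of Lamport, Shostak and Pease (see Appendix C). The following Python sketch (ours; the traitor model is a simple illustrative choice) simulates it for four nodes with one dishonest node, mirroring the experimental setting described below:

```python
from collections import Counter

def majority(values):
    # Deterministic majority vote (ties resolved by first-seen order).
    return Counter(values).most_common(1)[0][0]

def send(value, sender, receiver, traitors):
    # Illustrative traitor model: a dishonest sender reports different
    # values to different receivers instead of relaying honestly.
    return value if sender not in traitors else (value + receiver) % 2

def om(m, commander, lieutenants, value, traitors):
    # OM(m) of Lamport, Shostak and Pease: returns each lieutenant's decision.
    direct = {l: send(value, commander, l, traitors) for l in lieutenants}
    if m == 0:
        return direct
    # Each lieutenant j relays what it received, acting as commander in OM(m-1).
    relayed = {j: om(m - 1, j, [l for l in lieutenants if l != j],
                     direct[j], traitors) for j in lieutenants}
    return {i: majority([direct[i]] + [relayed[j][i]
                                       for j in lieutenants if j != i])
            for i in lieutenants}

# Four nodes (commander 0 plus lieutenants 1..3), one traitor: one round of
# echoing suffices since the number of traitors is below n/3.
decision = om(1, 0, [1, 2, 3], 1, traitors={3})
assert decision[1] == decision[2] == 1   # honest nodes agree on the value
```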
In this way, the same block will be formed by all honest parties, thereby eliminating the possibility of a “fork” – the situation in which several different versions of a block are created simultaneously by different miners. Because the broadcast protocol is relatively forgiving of the presence of dishonest or faulty nodes, our blockchain setup has significant tolerance to some of the nodes or communication channels not operating properly during its implementation. We also emphasize that, while the broadcast protocol is relatively data intensive, the data need not be transmitted through quantum channels. Quantum channels are only required to generate private keys.

While the proposed protocol seems to be efficient against quantum attacks on the distribution of transactions and the formation of blocks, the database is still somewhat vulnerable while it is stored. A possible attack scenario is as follows: a malicious party equipped with a quantum computer works off-line to forge the database. It changes one of the past transaction records to its benefit and performs a Grover search for a variant of the other transactions within the same block such that its hash remains the same, to make the forged version appear legitimate. Once the search is successful, it hacks into all or some of the network nodes and substitutes the legitimate database with its forged version. However, the potential of this attack to cause significant damage appears low, because the attacker would need to simultaneously hack at least one-third of the nodes to alter the consensus. Furthermore, because the Grover algorithm offers only a quadratic speed-up with respect to classical search algorithms, this scenario can be prevented by increasing the convention on the length of the block hash to about the square of its safe non-quantum value.

We experimentally study the proposed blockchain protocol on the basis of a four-node, six-link network [Fig. 2(a)] with information-theoretically secure authentication. We use an urban fiber QKD network recently developed by our team (see Appendix D) to procure authentication keys for two of the links connecting three nodes; the key generation in the remaining four links is classical. We summarize the main parameters of the implemented blockchain for the four-node network in Table <ref>.

We test the operation of the blockchain and implement the construction of a simple transaction block under the following settings [Fig. 2(a)]. Nodes A, B and C perform legitimate transactions, whereas node D tries to process three different transactions, i.e. to realize a double-spending attack. The pool of unconfirmed transactions at each node thus consists of three legitimate transactions and one inconsistent transaction. The broadcast protocol is then launched on the basis of these transaction pools. This protocol eliminates node D's double-spending transaction after the second communication round and permits the formation of a block containing legitimate transactions only [Fig. 2(c)].

§ CONCLUSION AND OUTLOOK

In summary, we have developed a blockchain protocol with information-theoretically secure authentication based on a network in which each pair of nodes is connected by a QKD link. We have experimentally tested our protocol by means of a three-party urban fiber QKD network in Moscow.
In addition to using QKD for authentication, we have redefined the protocol of adding new blocks in a way that is dramatically different from modern cryptocurrencies. Rather than concentrating the development of new blocks in the hands of individual miners, we employ the information-theoretically secure broadcast protocol, where all the nodes reach an agreement about a new block on equal terms.

A crucial advantage of our blockchain protocol is its ability to maintain transparency and integrity of transactions against attacks with quantum algorithms. Our results therefore open up possibilities for realizing scalable quantum-safe blockchain platforms. If realized, such a blockchain platform can limit economic and social risks from imminent breakthroughs in quantum computation technology. Typical key generation rates of currently available QKD technologies are sufficient for operating large-scale blockchain platforms based on our protocol. Moreover, remarkable progress in the theory and practice of quantum communications, including recent experiments on ground-to-satellite QKD and quantum repeaters, could open the door to developing a public worldwide QKD network (“the quantum Internet” <cit.>) and extending quantum-safe blockchain platforms to the global scale. The development of the “quantum Internet” will allow our protocol to preserve the anonymity of each network member. A member will be able to access the global QKD network from any station, authenticate themselves to other parties using their private seed keys (see Methods) and enact a desired transaction.

Our protocol is likely not the only possible quantum-safe blockchain platform. In this context, important horizons are opened by technologies that permit direct transmission of quantum states over multipartite communication networks combined with light quantum information processing. This includes, for example, protocols for quantum multiparty consensus <cit.>, other approaches for QKD <cit.>, and quantum digital signatures <cit.>, which have been successfully studied in experiments, including metropolitan networks <cit.>. An additional important research avenue is more efficient, quantum-technology based consensus algorithms <cit.> and the general study of quantum channels <cit.>. Most importantly, we hope that our work will raise the awareness and interest of the quantum information community in the problem of security of distributed ledgers in the era of quantum technology.

§ ACKNOWLEDGMENTS

We thank D. Gottesman for making us aware of the broadcast protocol. This work is supported by the Russian Science Foundation under grant 17-71-20146. AL is supported by NSERC and is a CIFAR Fellow.

§ APPENDIX A. BLOCKCHAIN WORKFLOW

Here we sum up the main definitions and concepts of conventional blockchains.

* The blockchain is a distributed database in which the records are organized in the form of consecutive blocks. The term “distributed” means that copies of the database are stored by all the nodes that are interested in maintaining it, and that there is no single control center in charge of the network.

* Distributed consensus is a set of rules governing the blockchain construction and operation accepted by the nodes maintaining this blockchain.

* A transaction is an elementary record in a blockchain. In order to create a transaction, one (i) forms a corresponding record, (ii) signs it using a digital signature, and (iii) sends the record to all the nodes maintaining the blockchain.
For example, if we use a blockchain for maintaining a cryptocurrency, then a transaction corresponds to a transfer of some amount of money from one party to another.

* A block contains a number of transactions created over a certain period of time. Newly created transactions enter a so-called pool of unconfirmed transactions. Because such transactions are created at a faster rate than the typical network latency time, it is difficult for the community to agree on their time sequence and validity. This motivates the solution of aggregating new transactions into large blocks that are introduced at regular time intervals that are much longer than the network latency. In order to create a block with new transactions, a node needs to (i) check the validity of the new transactions and discard invalid ones, (ii) combine the new transactions and the hash value of the last block in the existing blockchain, (iii) fulfill the additional moderation requirements imposed on the newly proposed block by the network rules (an example is the proof-of-work rule in Bitcoin), and (iv) broadcast the new block to all other nodes. Each node then verifies the block's validity and adds it to the local copy of the blockchain.

* A situation named fork is possible when non-identical blocks are generated and broadcast by different miners at about the same time. In this situation, the community becomes temporarily divided as different miners will use these different blocks to generate their subsequent blocks. To reunite the community, the longest-chain rule applies: as both branches continue to grow, one of them will become longer than the others. At this point, the community chooses this particular branch as the “correct” one. As a consequence of this rule, the reliability of any block grows with its depth relative to the last block in the chain.

* The cryptographic hash function H(·) is a one-way map from arbitrary-length strings to fixed-length strings (say, 256 or 512 bits). The term “cryptographic” means that it acts in a pseudo-random way, i.e. any modification of the argument string x (even in a single bit) yields a major and unpredictable change of H(x). Moreover, it is universally believed that there is no classical algorithm, except brute force, to invert the hash function (solve the equation H(x)=h for a given hash h) or to find its collisions (i.e. find a string x≠y for a given y such that H(x)=H(y), or even find two arbitrary distinct strings x and y such that H(x)=H(y)). Quantum algorithms, in particular Grover's algorithm <cit.>, allow only a quadratic speedup in solving such problems.

* A digital signature is an algorithm that allows one to verify that a certain message (in our case, a transaction) has been created by a particular person. The basic idea is that the author generates a pair of keys: a secret key k_sec, which must be kept out of reach of all others, and a public key k_pub, which can be known to anyone. There is a fixed-length output function Sig(x,k) taking an arbitrary message x and a secret key k, such that the triplet {m, Sig(m,k_sec), k_pub} verifies the fact that the author, identified with the public key k_pub, indeed possesses the corresponding secret key k_sec and signed the message m. On the other hand, the above triplet does not allow one to determine k_sec using a reasonable amount of classical computational resources.
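For readers who want to see the triplet {m, Sig(m,k_sec), k_pub} in action, the sketch below uses the third-party Python package `cryptography` (an assumption on our part; Ed25519 merely stands in for the ECDSA/RSA schemes discussed above, which illustrate the triplet in the same way):

```python
# Third-party package assumed: pip install cryptography.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

k_sec = Ed25519PrivateKey.generate()   # kept out of reach of all others
k_pub = k_sec.public_key()             # may be known to anyone

m = b"A transfers 5 coins to B"
sig = k_sec.sign(m)                    # the value Sig(m, k_sec)

# Anyone holding the triplet {m, sig, k_pub} can verify authorship;
# verify() raises InvalidSignature if m or sig has been altered.
k_pub.verify(sig, m)
```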
§ APPENDIX B. INFORMATION-THEORETICALLY SECURE AUTHENTICATION

Two parties, Alice and Bob, can authenticate messages sent to each other if they share a secret private key K_aut that is not known to anyone else. A private key of the necessary length can be generated via QKD provided that the parties have a small amount of “seed” key to authenticate themselves to each other at the beginning of the session. Once the private key is established, the authentication procedure is as follows: Alice sends to Bob a message with a hash tag generated using that key. After receiving the message, Bob also computes its hash tag. If the hash tags coincide, Bob can be certain that the message has arrived from Alice.

In our protocol, we use Toeplitz hashing due to its computational simplicity <cit.>. Let the lengths of all messages and their hash tags be l_M and l_h respectively. The hash tag of the ith message M_i is calculated according to

h(M_i) = T_S M_i ⊕ r_i,

where T_S is an l_h × l_M Toeplitz matrix generated by a string S of length l_h+l_M-1, r_i is a bit string of length l_h, and ⊕ is the bitwise xor. Both S and r_i are private and taken from the common private key K_aut. Then the probability that an eavesdropper will correctly guess the hash tag of a modified message is not more than 2^-l_h. If a series of messages is transmitted, the string S can be reused without compromising security, while the string r_i must be generated anew every time. In this way, the private key is consumed at a rate of l_h bits per message. In our experiment, l_h=40 and l_M=2^22.

§ APPENDIX C. BROADCAST PROTOCOL AND BLOCK CONSTRUCTION

Here we briefly describe the protocol for reaching Byzantine agreement in the presence of faulty nodes <cit.>. We consider n nodes connected by pairwise authenticated channels. Let each ith node possess a certain private value V_i. The goal of the protocol is to make all nodes aware of all the V_i's, with the complication that there are at most m “dishonest” (or faulty) nodes. This can be rephrased as obtaining an n-dimensional interactive consistency vector V⃗^cons with the following properties: (i) all the honest nodes obtain the same vector V⃗^cons, and (ii) the ith component of V⃗^cons equals V_i for all honest nodes. The interactive consistency vector is determined through a series of communication rounds that proceed as follows.

* In the first round, the nodes transmit their values of V_i to each other.

* In subsequent rounds, the nodes communicate all the information they received in the previous round from other nodes (messages are of the form “node i_2 told node i_1 that node i_3 told node i_2 … that node i_r told node i_r-1 that its private value is U”).

In Ref. <cit.>, Lamport, Shostak and Pease proved that the interactive consistency vector can be obtained with no more than m+1 rounds for m<n/3. In our setup, the private value V_i is the pool of transactions received by the ith node (together with its own transactions), as well as the set of bits indicating the node's opinion of each transaction's admissibility. After obtaining the interactive consistency vector V⃗^cons, the honest nodes are able to create a block containing the complete set of admissible transactions from the pool. A shortcoming of the protocol of Ref.
<cit.> in its original form is that it becomes exponentially data-intensive if a large number of cheating or unoperational nodes are present.Therefore further research on developing an efficient consensus protocol is required.We are optimistic that this issue can be resolved.Indeed, classical blockchain networks do routinely face the same challenge and have learned to deal with it efficiently <cit.>. § APPENDIX D. QKD NETWORK The basis for our experimental work is our recently developed modular QKD device <cit.>driven by a National Instruments NI PCIe-7811R card.This setup uses a semiconductor laser LDI-DFB2.5G controlled by an FPGA board Spartan-6 to generate optical pulses at the standard telecommunication wavelength 1.55 μmand a 10 MHz repetition rate.We have used ID230 single-photon detectors from ID Quantique. The QKD network contains two links with different physical implementations, realized in an urban environment in Moscow.The parameters of both links are listed in the table <ref>.99 Franco P. Franco, Understanding Bitcoin: Cryptography, Engineering and Economics (John Wiley & Sons, 2014). Extance2015 A. Extance, https://dx.doi.org/10.1038/526021aNature 526, 21 (2015). Forbes B. Marr, How Blockchain Technology Could Change The World, Forbes, May 27, 2016. Swan2015 M. Swan, Blockchain (O'Reilly Media, Inc., 2015). Witte2016 J.H. Witte, https://arxiv.org/abs/1612.06244arXiv:1612.06244. Schneier1996 B. Schneier, Applied cryptography (John Wiley & Sons, Inc., New York, 1996). Shor1997 P.W. Shor, https://dx.doi.org/10.1137/S0097539795293172SIAM J. Comput. 26, 1484 (1997). Grover1996 L.K. Grover, in Proceedings of 28th Annual ACM Symposium on the Theory of Computing (New York, USA, 1996), p. 212. Aggarwal2017 D. Aggarwal, G.K. Brennen, T. Lee, M. Santha, and M. Tomamichel, https://arxiv.org/abs/1710.10377arXiv:1710.10377. Tessler2017 L. Tessler and T. Byrnes, https://arxiv.org/abs/1711.04235arXiv:1711.04235. Kalinin2018 K.P. Kalinin and N.G. Berloff, https://arxiv.org/abs/1802.10091arXiv:1802.10091. Sapaev2018 D. Sapaev, D. Bulychkov, F. Ablayev, A. Vasiliev, and M. Ziatdinov, https://arxiv.org/abs/1802.06763arXiv:1802.06763 Lamport L. Lamport, Technical Report SRI-CSL-98, SRI International Computer Science Laboratory, Oct. 1979. Merkle R. Merkle, Ph.D. dissertation, Dept. of Electrical Engineering, Stanford University, 1979. Bernstein2009 D.J. Bernstein, Introduction to post-quantum cryptography (Springer-Verlag Berlin Heidelberg, 2009). Lamport1982 L. Lamport, R. Shostak, and M. Pease, http://dx.doi.org/10.1145/357172.357176ACM T. Progr. Lang. Sys. 4, 382 (1982). Castro2002 M. Castro, and B. Liskov. http://dx.doi.org/10.1145/571637.571640ACM Trans. Comput. Syst. 20, 398 (2002). Gisin2002 N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, http://dx.doi.org/10.1103/RevModPhys.74.145Rev. Mod. Phys. 74, 145 (2002). Scarani2009 V. Scarani, H. Bechmann-Pasquinucci, N.J. Cerf, M. Dusek, N. Lütkenhaus, and M. Peev, http://dx.doi.org/10.1103/RevModPhys.81.1301Rev. Mod. Phys. 81, 1301 (2009). Lo2016 E. Diamanti, H.-K. Lo, and Z. Yuan, https://dx.doi.org/10.1038/npjqi.2016.25npj Quant. Inf. 2, 16025 (2016). Gyongyosi2018 L. Gyongyosi, S. Imre, and H.V. Nguyen, https://dx.doi.org/10.1109/COMST.2017.2786748IEEE Commun. Surv. Tut. (2018). Laenger2009 L. Salvail, M. Peev, E. Diamanti, R. Alleaume, N. Lütkenhaus, and T. Laenger, https://dx.doi.org/10.3233/JCS-2010-0373J. Comput. Sec. 18, 61 (2010). Yeh2005 C. Elliott, A. Colvin, D. Pearson, O. Pikalo, J. Schlafer, and H. 
Yeh, https://dx.doi.org/10.1117/12.606489Proc. SPIE 5815, 138 (2005). Peev2009 M. Peev et al., https://dx.doi.org/10.1088/1367-2630/11/7/075001New J. Phys. 11, 075001 (2009). Stucki2011 D. Stucki, M. Legre, F. Buntschu, B. Clausen, N. Felber, N. Gisin, L. Henzen, P. Junod, G. Litzistorf, P. Monbaron, L. Monat, J.-B. Page, D. Perroud, G. Ribordy, A. Rochas, S. Robyr, J. Tavares, R. Thew, P. Trinkler, S. Ventura, R. Voirol, N. Walenta, and H. Zbinden, https://dx.doi.org/10.1088/1367-2630/13/12/123001New J. Phys. 13, 123001 (2011). Pan2009 T.-Y. Chen, H. Liang, Y. Liu, W.-Q. Cai, L. Ju, W.-Y. Liu, J. Wang, H. Yin, K. Chen, Z.-B. Chen, C.-Z. Peng, and J.-W. Pan, https://dx.doi.org/10.1364/OE.17.006540Opt. Express 17, 6540 (2009). Pan2010 T.-Y. Chen, J. Wang, H. Liang, W.-Y. Liu, Y. Liu, X. Jiang, Y. Wang, X. Wan, W.-Q. Cai, L. Ju, L.-K. Chen, L.-J. Wang, Y. Gao, K. Chen, C.-Z. Peng, Z.-B. Chen, and J.-W. Pan, https://dx.doi.org/10.1364/OE.18.027217Opt. Express 18, 27217 (2010). Han2010 S. Wang, W. Chen, Z.-Q. Yin, Y. Zhang, T. Zhang, H.-W. Li, F.-X. Xu, Z. Zhou, Y. Yang, D.-J. Huang, L.-J. Zhang, F.-Y. Li, D. Liu, Y.-G. Wang, G.-C. Guo, and Z.-F. Han, https://dx.doi.org/10.1364/OL.35.002454Opt. Lett. 35, 2454 (2010). Zeilinger2011 M. Sasaki et. al., https://dx.doi.org/10.1364/OE.19.010387Opt. Express 19, 10387 (2011). Shields2013 D. Fröhlich, J.F. Dynes, M. Lucamarini, A.W. Sharpe, Z. Yuan, and A.J. Shields, https://dx.doi.org/10.1038/nature12493Nature 501, 69 (2013). Zhang2016 Q. Zhang, http://spectrum.ieee.org/telecom/security/chinas-2000km-quantum-link-is-almost-completeIEEE Spectr., Oct. 2016. Pozhar2017 E.O. Kiktenko, N.O. Pozhar, A.V. Duplinskiy, A.A. Kanapin, A.S. Sokolov, S.S. Vorobey, A.V. Miller, V.E. Ustimchik, M.N. Anufriev, A.S. Trushechkin, R.R. Yunusov, V.L. Kurochkin, Y.V. Kurochkin, and A.K. Fedorov, http://dx.doi.org/10.1070/QEL16469Quantum Electron. 47, 798 (2017). Tysowski2017 P. K. Tysowski, X. Ling, N. Lütkenhaus, and M. Mosca https://doi.org/10.1088/2058-9565/aa9a5daccepted in Quantum Science and Technology (2017). kimble H.J. Kimble, http://dx.doi.org/10.1038/nature07127Nature 453, 1023 (2008). Gisin2001 M. Fitzi, N. Gisin, and U. Maurer, http://dx.doi.org/10.1103/PhysRevLett.87.217901Phys. Rev. Lett. 87, 217901 (2001). Smania2016 M. Smania, A.M. Elhassan, A. Tavakoli, and M. Bourennane, http://dx.doi.org/10.1038/npjqi.2016.10npj Quant. Inform. 2, 16010 (2016). Grangier2003 F. Grosshans, G. Van Assche, J. Wenger, R. Brouri, N.J. Cerf, and Ph. Grangier, https://doi.org/10.1038/nature01289Nature (London) 421, 238 (2003). Gyongyosi2014 L. Gyongyosi and S. Imre, https://dx.doi.org/10.1117/12.2038532SPIE Proc. 89970, 89970C (2008). GyongyosiImre2018 L. Gyongyosi and S. Imre, https://dx.doi.org/10.3390/app8010087Appl. Sci. 8, 87 (2018). Gyongyosi S. Imre and L. Gyongyosi, Advanced Quantum Communications (Hoboken, New Jersey: Wiley-IEEE Press). Gottesman2001 D. Gottesman and I. Chuang, https://arxiv.org/abs/quant-ph/0105032arXiv:quant-ph/0105032. Pan2017 H.-L. Yin, W.-L. Wang, Y.-L. Tang, Q. Zhao, H. Liu, X.-X. Sun, W.-J. Zhang, H. Li, I.V. Puthoor, L.-X. You, E. Andersson, Z. Wang, Y. Liu, X. Jiang, X. Ma, Q. Zhang, M. Curty, T.-Y. Chen, and J.-W. Pan, http://dx.doi.org/10.1103/RevModPhys.81.1301Phys. Rev. A 95, 042338 (2017). Gottesman2002 M. Fitzi, D. Gottesman, M. Hirt, T. Holenstein, A. Smith, in Proceedings of the 21st ACM Symposium on Principles of Distributed Computing, 118–126 (2002). Holevo A.S. Holevo, Quantum systems, channels, information. 
A mathematical introduction (De Gruyter, Berlin–Boston, 2012). Krawczyk1994 H. Krawczyk, http://dx.doi.org/10.1007/3-540-48658-5_15Lect. Notes Comp. Sci. 839, 129 (1994). Krawczyk1995 H. Krawczyk, http://dx.doi.org/10.1007/3-540-49264-X_24Lect. Notes Comp. Sci. 921, 301 (1995). Sokolov2017 A.S. Sokolov, A.V. Miller, A.A. Kanapin, V.E. Rodimin, A.V. Losev, A.S. Trushechkin, E.O. Kiktenko, N.O. Pozhar, A.K. Fedorov, V.L. Kurochkin, Y.V. Kurochkin, https://arxiv.org/abs/1612.04168arXiv:1612.04168. Kiktenko2016 E.O. Kiktenko, A.S. Trushechkin, Y.V. Kurochkin, and A.K. Fedorov, http://dx.doi.org/10.1088/1742-6596/741/1/012081J. Phys. Conf. Ser. 741, 012081 (2016). Kiktenko2017 E.O. Kiktenko, A.S. Trushechkin, C.C.W. Lim, Y.V. Kurochkin, and A.K. Fedorov, http://dx.doi.org/10.1103/PhysRevApplied.8.044017Phys. Rev. Applied 8, 044017 (2017). KiktenkoTrushechkin2016 E.O. Kiktenko, A.S. Trushechkin, M.N. Anufriev, N.O. Pozhar, and A.K. Fedorov, https://dx.doi.org/10.5281/zenodo.200365Post-processing procedure for quantum key distribution systems. Zenodo. Available at: https://dx.doi.org/10.5281/zenodo.200365. Croman2016 K. Croman, C. Decker, I. Eyal, A. E. Gencer, A. Juels, A. Kosba, A. Miller, P. Saxena, E. Shi, E. G. Sirer, D. Song, and R. Wattenhofer, https://doi.org/10.1007/978-3-662-53357-4_8Lect. Notes Comp. Sci. 9604, 106 (2016).
http://arxiv.org/abs/1705.09258v3
{ "authors": [ "E. O. Kiktenko", "N. O. Pozhar", "M. N. Anufriev", "A. S. Trushechkin", "R. R. Yunusov", "Y. V. Kurochkin", "A. I. Lvovsky", "A. K. Fedorov" ], "categories": [ "quant-ph", "cs.CR", "cs.IT", "math.IT" ], "primary_category": "quant-ph", "published": "20170525165210", "title": "Quantum-secured blockchain" }
Algorithmic clothing: hybrid recommendation, from street-style-to-shop
B Sengupta
December 30, 2023
======================================================================

We show logarithmic stability for the point source inverse backscattering problem under the assumption of angularly controlled potentials. Radial symmetry implies Hölder stability. Importantly, we also show that the point source equation is well-posed, and that the associated characteristic initial value problem, or Goursat problem, is well-posed. These latter results are difficult to find in the literature in the form required by the stability proof.

MSC classes: 35R30, 78A46, 35A08, 35L15

Keywords: inverse backscattering, point source, Goursat problem, stability

§ INTRODUCTION

For a potential function q supported inside the unit ball B in ℝ^3 and a point a consider the point source problem

(∂_t^2 - Δ - q)U^a(x,t) = δ(x-a,t), x∈ℝ^3, t∈ℝ,
U^a(x,t) = 0, x∈ℝ^3, t<0.

We define the point source backscattering data as the function (a,t) ↦ U^a(a,t). This paper has two goals: to prove the well-posedness of (<ref>)–(<ref>), and then to solve the inverse problem of determining q from the point source backscattering data U^a(a,t) with a∈∂B and t>0.

The ordinary inverse problem of backscattering for arbitrary potentials is a major open problem. In it the scattering amplitude A(x̂,θ,k) is measured for frequencies k∈ℝ_+, incident plane-wave directions |θ|=1, and measurement direction x̂=-θ. The question is whether such data corresponds to a unique potential q. This question has been solved in the time-domain for an admissible class of potentials in <cit.>. For a more in-depth review of earlier results please refer to <cit.>.

Traditional backscattering applications include radar, fault detection in fiber optics, Rutherford backscattering and X-ray backscattering (e.g. full-body scanners), among others. What is common to all of these is that the measured object (or fault) is located far away from the wave source. From the point of view of the Rakesh–Uhlmann <cit.> techniques, the classical backscattering problem in the time-domain behaves as the point source problem with the source at infinity. This means that the problem (<ref>)–(<ref>) models a situation where the wave source is close to the object under investigation, for example on the order of a few wavelengths. Therefore our results imply that backscattering experiments would give useful information even when the object is close. For example one could imagine using the backscattering of sound, radio or elastic waves to find faults in an object of human scale.

Uniqueness for the inverse backscattering problem related to (<ref>)–(<ref>) was shown by Rakesh and Uhlmann for an admissible class of smooth potentials in <cit.>. We shall show stability for their method. In addition we will show that the direct problem is well-posed in the sense of Hadamard, including all the required norm estimates.

The well-posedness of the direct problem might seem well known to experts at first sight. However this result is very difficult to find in the literature for non-smooth potentials and with explicit norm estimates. We hope that future research on the topic finds the explicit proof convenient.

The main motivation for this paper is the proof of the following stability theorem. As in <cit.> it applies to a class of potentials whose differences are angularly controlled. Let B = B(0̅,1) ⊂ ℝ^3 and fix positive a-priori parameters S, ℳ < ∞ and h<1.
Then there are ℭ, 𝔇 < ∞ with the following properties: Let q_1, q_2 ∈ C^7_c(B) with norm bounds ‖q_j‖_{C^7} ≤ ℳ. Assume moreover that the supports of q_1 and q_2 are no closer than distance h from ∂B. If q_1-q_2 is angularly controlled with constant S, i.e.

∑_{i<j} ∫_{|x|=r} |Ω_{ij}(q_1-q_2)(x)|^2 dσ(x) ≤ S^2 ∫_{|x|=r} |(q_1-q_2)(x)|^2 dσ(x)

for any 0<r<1, where Ω_{ij} = x_i∂_j - x_j∂_i are the angular derivatives, then we have the following conditional stability estimate

‖q_1-q_2‖_{L^2({|x|=r})} ≤ e^{ℭ/r^4} ‖U_1^a-U_2^a‖

for any given positive r. Here U^a_1 and U^a_2 are the unique solutions to the problem (<ref>)–(<ref>) given by Theorem <ref> with a∈∂B, q=q_1, q=q_2, and

‖U_1^a-U_2^a‖^2 = sup_{0<τ<1} ∫_{|a|=1} |∂_τ( τ(U^a_1-U^a_2)(a,2τ) )|^2 dσ(a)

is the backscattering measurement norm that we impose. A fortiori we get the logarithmic full-domain estimate

‖q_1-q_2‖_{L^2(B)} ≤ 𝔇 ( ln(1/‖U_1^a-U_2^a‖) )^{-1/4}

when ‖U_1^a-U_2^a‖ < e^{-1}, and ‖q_1-q_2‖_{L^2(B)} ≤ 𝔇 ‖U_1^a-U_2^a‖ otherwise.

If instead of angular control for q_1-q_2 we assume the stronger condition of radial symmetry, we have

‖q_1-q_2‖_{L^2({|x|=r})} ≤ ℭ r^α ‖U_1^a-U_2^a‖

where α = α(ℳ,h,B), and this implies the full-domain Hölder estimate

‖q_1-q_2‖_{L^2(B)} ≤ 𝔇 ‖U_1^a-U_2^a‖^{1/(1+α)}.

The proof of the above theorem is presented in Section <ref> and is based on the innovative techniques from <cit.>. It starts with writing the data U_1^a(a,2τ)-U_2^a(a,2τ) as an integral involving q_1-q_2 and solutions to (<ref>)–(<ref>). The linear part of this integral is the average of q_1-q_2 over spheres with centers on ∂B. Proposition <ref> is key for inverting the linearized problem and its perturbations. The inversion formula for this problem, like the one for the corresponding linearized problem in plane-wave inverse backscattering — which is the Radon transform — is an ill-posed operator. Angular control and Grönwall's inequality give uniqueness and logarithmic stability for the linearized problem, and also for the full nonlinear inverse problem.

From the point of view of applications the logarithmic stability seems unpleasant. If we knew in advance that q_1=q_2 in a fixed neighbourhood of the origin, then (<ref>) would give us a Lipschitz stability estimate ‖q_1-q_2‖_{L^2(B)} ≤ C ‖U_1^a-U_2^a‖. However it is not clear under which conditions q_1-q_2 would stay angularly controlled if the origin were moved to another location, e.g. outside of their supports. The method of this paper and <cit.> is centered around angular control, so further work should focus on understanding this condition. When the integrals that use this condition are ignored, as happens when q_1-q_2 is radially symmetric, we get Hölder stability.

It would be extremely surprising if Hölder stability were possible in general. The fixed-frequency multi-static inverse problem is known to be exponentially ill-posed <cit.>. Counting dimensions, this problem is overdetermined in ℝ^3 while the harder backscattering problem is determined. However no formal inference can be made, since there is no known direct way of deducing the multi-frequency (or time-domain) backscattering data from the fixed-frequency multi-static data. Further comments on this complex issue deserve a completely new study.

Showing the well-posedness of the direct problem (<ref>)–(<ref>) is a major effort. This has to be done for two reasons. Firstly, the proof of Theorem <ref> requires norm estimates related to the solution U^a. These estimates are lacking from the literature. Secondly, it makes sure that the backscattering data U^a(a,t) is smooth enough for the above theorem to say anything meaningful.

Let n ≥ 7 and let B = B(0̅,1) be the unit ball in ℝ^3. Let q ∈ C^n_c(B) and a∈∂B.
Then the point source problem (<ref>)–(<ref>) has a unique solution U^a in the set of distributions of order n. It is given by

U^a(x,t) = δ(t-|x-a|)/(4π|x-a|) + H(t-|x-a|) r^a(x,t)

where r^a ∈ C^1(ℝ^3×ℝ) and δ, H are the Dirac delta distribution and Heaviside function on ℝ. For any T>0 and ℳ ≥ ‖q‖_{C^n} it has the norm estimate

‖r^a‖_{C^1(ℝ^3×[0,T])} ≤ C_{T,ℳ}.

Moreover U^a is C^1-smooth outside the light cone t=|x-a|. In particular the map (a,τ) ↦ U^a(a,2τ) is a well-defined map ∂B×(0,1)→ℝ and continuously differentiable in τ. Furthermore

sup_{a∈∂B} sup_{0<τ<1} |∂_τ^β(U_1^a-U_2^a)(a,2τ)| ≤ C_ℳ ‖q_1-q_2‖_{C^7(ℝ^3)}

for solutions U^a_j arising from two potentials q_j, j=1,2, and for any β∈{0,1}.

The proof of the above will be done by a progressive wave expansion. This will lead us to a characteristic initial value problem called the Goursat problem. In <cit.> this problem was mentioned briefly with reference to <cit.>. Another well-known source on the point source problem is <cit.>. The former studies the point source problem in low-regularity Sobolev spaces, which is not good enough since we need a uniform ∂_t-estimate. The latter suffers from too much generality and considers only C^∞-smooth coefficients, without any norm estimates. Neither reference mentions the Goursat problem by name or defines it explicitly.

There are other sources, more focused on the Goursat problem. For example <cit.> is very detailed on the topic but seems to have slightly larger smoothness requirements than we do. See also <cit.> for a very detailed analysis, but their model has a region removed from the middle of the characteristic cone. Therefore we shall also prove the well-posedness of the Goursat problem.

For n∈ℕ, n≥5, let q ∈ C^n(ℝ^3) and g ∈ C^{n+2}(ℝ^3) with the norm bounds ‖q‖_{C^n} ≤ ℳ and ‖g‖_{C^{n+2}} ≤ 𝒩. Then there is a unique C^1 solution u to the problem

(∂_t^2 - Δ - q) u = 0, x∈ℝ^3, t > |x|,
u(x,t) = g(x), x∈ℝ^3, t = |x|.

It is also in C^s(ℝ^3×ℝ) where s = ⌊(n-2)/3⌋ and satisfies

(∂_t + ∂_r) u = ∂_r g, x∈ℝ^3, t = |x|,

where ∂_r = (x/|x|)·∇_x. For any T<∞ the solution has the norm estimate

‖u‖_{C^s(ℝ^3×[0,T])} ≤ C_{T,n,ℳ} 𝒩.

Finally, if q_1,q_2 ∈ C^n(ℝ^3) and g_1,g_2 ∈ C^{n+2}(ℝ^3) then their corresponding solutions satisfy

‖u_1-u_2‖_{C^s(ℝ^3×[0,T])} ≤ C_{T,n,ℳ,𝒩} ( ‖q_1-q_2‖_{C^n(ℝ^3)} + ‖g_1-g_2‖_{C^{n+2}(ℝ^3)} ).

We will use the following notation for function spaces of continuous functions. Let s∈ℕ and X ⊂ ℝ^d for some d∈ℤ_+. The set C^s(X) contains all f: X→ℝ that are s times continuously differentiable. A subscript of c, as in C^s_c(X), indicates compact support in X. Given s,τ∈ℕ we denote by C^{s,τ}(ℝ^3×ℝ) the space of continuous functions f: ℝ^3×ℝ→ℝ for which ∂_x^α∂_t^β f is continuous when α_1+α_2+α_3 ≤ s and β ≤ τ. For estimates,

‖f‖_{C^s(X)} = ∑_{|α|≤s} sup_{p∈X} |∂^α f(p)|,
‖f‖_{C^{s,τ}(X)} = ∑_{|α|≤s, β≤τ} sup_{(x,t)∈X} |∂_x^α∂_t^β f(x,t)|,

where α is a multi-index of appropriate dimension. A-priori no uniform bounds are required above. The solution to the wave equation has finite speed of propagation, so the qualitative statements of our results stay true even for continuous but unbounded functions.

§ GOURSAT PROBLEM

The goal of this section is simple: prove the well-posedness of the Goursat problem, including norm estimates of the solution with dependence on the potential q and Dirichlet data g on the characteristic cone. Before that we will show informally how the point source problem is reduced to the Goursat problem, or characteristic initial-boundary value problem.
Lemma <ref> validates these informal calculations. If δ, H ∈ 𝒟'(ℝ) are the delta distribution and Heaviside function, then applying the operator ∂_t^2 - Δ - q to the ansatz

U^a(x,t) = δ(t-|x-a|)/(4π|x-a|) + H(t-|x-a|) r^a(x,t)

gives

(∂_t^2 - Δ - q) U^a = (∂_t^2-Δ) δ(t-|x-a|)/(4π|x-a|) - q(x) δ(t-|x-a|)/(4π|x-a|) + δ'(t-|x-a|)(r^a-r^a)
+ (2δ(t-|x-a|)/|x-a|)( |x-a| ∂_t r^a + r^a + (x-a)·∇ r^a ) + H(t-|x-a|)(∂_t^2 - Δ - q) r^a.

Now U^a will be a solution to (<ref>)–(<ref>) if

(∂_t^2 - Δ - q) r^a = 0, x∈ℝ^3, t > |x-a|,
( |x-a| ∂_t + 1 + (x-a)·∇ ) r^a = q/(8π), x∈ℝ^3, t = |x-a|.

However if F(x) = |x-a| r^a(x,|x-a|) then the chain rule shows that

((x-a)/|x-a|) · ∇F = ( |x-a| ∂_t + 1 + (x-a)·∇ ) r^a(x,|x-a|) = q(x)/(8π)

and solving for F gives

r^a(x,|x-a|) = (1/8π) ∫_0^1 q(a+s(x-a)) ds.

Proving the converse requires more assumptions, so we will skip it now. Instead we shall show that the Goursat problem

(∂_t^2 - Δ - q) r^a = 0, x∈ℝ^3, t > |x-a|,
r^a = g, x∈ℝ^3, t = |x-a|,

has a unique solution in C^1 for any q and g smooth enough, and that this solution also satisfies the boundary condition (<ref>) when g is chosen from (<ref>).

For k∈ℤ define the function γ^k: ℝ^3×ℝ→ℝ by

γ^k(x,t) = (t^2-|x|^2)^k/k! for k≥0, and γ^k(x,t) = 0 for k<0.

For n∈ℕ let q∈C^n(ℝ^3) and g∈C^{n+2}(ℝ^3). Let m ≤ ⌊n/2⌋+1 be an integer. Then define v: ℝ^3×ℝ→ℝ by

v(x,t) = ∑_{k=0}^m a_k(x) γ^k(x,t)

where the functions a_k are defined as

a_0(x) = g(x), x∈ℝ^3,
a_{k+1}(x) = (1/4) ∫_0^1 s^{k+1} ((q+Δ)a_k)(xs) ds, x∈ℝ^3.

Then a_k ∈ C^{n+2-2k}(ℝ^3). They have the norm estimate

‖a_k‖_{C^{n+2-2k}(ℝ^3)} ≤ ( 1 + ‖q‖_{C^n(ℝ^3)}/4 )^k ‖g‖_{C^{n+2}(ℝ^3)}.
With the a-priori bounds q_j_C^n(^3)≤ℳ and F_j_C^n,τ(^3×[0,T])≤𝒩 we havew_1-w_2_C^n,τ(^3×[0,T])≤ C_T,n,ℳ,𝒩( F_1-F_2_C^n,τ(^3×[0,T]) + q_1-q_2_C^n(^3))where C_T,n,ℳ, 𝒩 is finite and depends only on the parameters in its indices. Consider the operatorK f(x,t) = ∫_^3f(x-y,t-y)/4πy dygiving (∂_t^2 - Δ) K f = f for compactly supported distributions f ∈ℰ'(^3×) and K f(x,t) = 0 for t<inf_tf. This is also true for f supported on x≤ t (see Theorem 4.1.2 in <cit.>) and then the integration area becomes x-y+y≤ t. By Lemma <ref>∂_x^α∂_t^β Kf(x,t)≤sup_^3 ×]-∞,t[∂_x^α∂_t^β ft^2-x^2/8,t>x0,t≤xwhen ∂_x^α∂_t^β f is a continuous function. In essence Kf has the same smoothness properties as f.The equation (∂_t^2 - Δ - q) w = F with w=0 for negative time is equivalent to w = K F + K(qw). Set w_0(x,t) = KF(x,t) and w_m+1 = K(q w_m), and we will build the final solutions asw = ∑_m=0^∞ w_m.We see immediately by the properties of K that w_m∈ C^n,τ(^3×) for all m and that they vanish on t<x. Moreover∂_x^α∂_t^β w_0(x,t)≤sup_^3×[0,t]∂_x^α∂_t^β Ft^2-x^2/8when t>x and α_1+α_2+α_3≤ n, β≤τ.Let us prove the claim by induction. Assume that for any α_1+α_2+α_3≤ n and β≤τ we have∂_x^α∂_t^β w_m(x,t)≤ C_m q_C^n(^3)^m F_C^n,τ(^3×[0,t]) (t^2-x^2)^m+1for some C_m which might depend on the other parameters. Then recall w_m=0 for t<x and the definition of w_m+1. We get∂_x^α∂_t^β w_m+1(x,t) = ∫_^3∂_x^α(q(x-y) ∂_t^β w_m(x-y, t-y))/4πy dy ≤ C_m ∑_γ≤ααγq_C^n(^3)^m+1F_C^n,τ(^3×[0,t])·∫_x-y+y≤ t((t-y)^2 - x-y^2)^m+1/4πy dy = C_m C_s,n/4(m+2)(m+3)q_C^n(^3)^m+1F_C^n,τ(^3×[0,t]) (t^2-x^2)^m+2where the last equality comes from Lemma <ref>, and whereC_n = max_α≤ n∑_γ≤ααγ.We also have w_m+1(x,t)=0 for t<x. Hence we have the recursion formula C_m+1 = C_m C_n / (4(m+2)(m+3)) and C_0 = 1/8. This implies that (<ref>) holds withC_m = C_n^m/4^m+1(m+1)!(m+2)!for m=0,1,….The series∑_m=0^∞∂_x^α∂_t^β w_m(x,t)≤∑_m=0^∞C_n^m q_C^n(^3)^m (t^2-x^2)^m+1/4^m+1 (m+1)!(m+2)!F_C^n,τ(^3×[0,t])converges uniformly for any t, x under a given bound, so the function w is well defined. Note that the extension of t^2-x^2 by zero to t<x is continuous. Hence ∂_x^α∂_t^β w is continuous in ^3× when α_1+α_2+α_3≤ n and β≤τ. Thus w∈ C^n,τ(^3×).The final claim, continuous dependence on q and F, follows from the previous estimates. Namely, we note that w_1 and w_2 satisfy the assumptions of the source term F, and the difference w_1-w_2 solves(∂_t^2 - Δ - q_1)(w_1-w_2) = F_1 - F_2 + (q_1-q_2)w_2with w_1-w_2=0 for t<x. The C^n,τ(^3×[0,T])-norm of the right-hand side is bounded above byC_T,n,ℳ( F_1-F_2_C^n,τ + q_1-q_2_C^n C_T,n,ℳF_2_C^n,τ)and the claim follows from the a-priori bound on F_2.Let u^3×→ be a C^1-function satisfying(∂_t^2 - Δ- q) u= 0,x∈^3, t > xu(x,t)= g(x),x∈^3, t=xfor some q∈ C^0(^3) and g∈ C^1(^3). If g=0 then u=0 in x≤ t. DefineE(t) = ∫_x≤ t (∂_t u^2 + ∇ u^2 + u^2) dx.We would like to differentiate E with respect to time, however the lack of continuous second derivatives prevents us from doing that directly. Let φ_ε be a mollifier and u_ε = φ_ε∗ u. Let E_ε(t) = ∫_x≤ t ( ∂_t u_ε^2 + ∇ u_ε^2 + u_ε^2 ) dx. ThenE_ε'(t) = ∫_x=t( ∂_t u_ε^2 + ∇ u_ε^2 + u_ε^2 ) dσ(x) + 2 ∫_x≤ t∂_t u_ε·∂_t^2 u_ε dx + 2∫_x≤ t∇∂_t u_ε·∇ u_ε dx + 2∫_x≤ t∂_t u_εu_ε dx.Integration by parts shows that the third term is equal to2∫_x=tx/x∂_t u_ε·∇ u_ε dσ(x) - 2 ∫_x≤ t∂_t u_εΔ u_ε dx.By combining both equations above and using ∂_t^2 u_ε - Δ u_ε = φ_ε∗(qu) we getE_ε'(t) = ∫_x=t( x/x∂_t u_ε + ∇ u_ε^2 + u_ε^2 ) dσ(x) + 2∫_x≤ t∂_t u_ε(u_ε + φ_ε∗(qu)) dx.Integrate this with respect to time. 
Since u_ε→ u in C^1 locally as ε→0, we getE(t) = ∫_0^t ∫_x=s( x/x∂_s u + ∇ u^2 + u^2 ) dσ(x) ds + ∫_0^t 2∫_x≤ s (1+q) ∂_s u u dx ds. Let us deal with the boundary integral next. Define u_b(x) = u(x,x). Then calculus shows that ∇ u_b(x) = (∇ u + x/x∂_t u)(x,x) because ∇x = x/x· On the other hand the boundary condition of u shows that u_b=g. Thus the formula inside the parenthesis above is equal to ∇ g^2 + g^2.Note that ∫_0^t ∫_x=s f(x) dx ds = ∫_x≤ t f(x) dx for time-independent functions f. Then, since 2(AB) ≤A^2 + B^2, we getE(t) ≤∫_x≤ t( ∇ g^2 + g^2 ) dx + (1+q_∞) ∫_0^t ∫_x≤ s( ∂_s u^2 + u^2 ) dx ds.The last integral has the upper bound ∫_0^t E(s) ds. Grönwall's inequality, for examplein <cit.>, shows that E(t)=0 when g=0.We are now ready to prove the well-posedness of the Goursat problem in the sense of Hadamard. Strictly speaking the same proof shows existence in C^0 when q∈ C^2, g∈ C^4, but then we cannot guarantee uniqueness or the boundary identity that's stated with ∂_t and ∂_r. This is a consequence of the uniqueness of Lemma <ref>, the progressive wave expansion of Lemma <ref> and the initial value problem of Lemma <ref>. Let m=⌊(n+1)/3⌋, which has m≥2 and n≥2m+1, and setv(x,t) = g(x)+ a_1(x) (t^2-x^2)+ ·+ a_m(x) γ^m(x,t)for (x,t)∈^3×, as in Lemma <ref>. We have a_k_C^n+2-2k≤ C_n(1+ℳ)^k𝒩 in ^3. Then v(x,x) = g(x) but (∂_t^2-Δ-q)v = -(q+Δ)a_m γ^m.Next letF(x,t) =(q+Δ)a_m(x) γ^m(x,t),t>x0,t≤xbe our source term for an initial value problem. We have (q+Δ)a_m∈ C^n-2m(^3), but χ_{t>x}γ^m is in C^m-1(^3×). Hence F ∈ C^n_0,τ_0(^3×) using the notation of Lemma <ref> whenever n_0+τ_0≤ m-1 and n_0≤min(n-2m,m-1)=m-1. In other words when n_0+τ_0≤ s. Given T>0 the source has the estimateF_C^n_0,τ_0(^3×[0,T])≤ C_T, n, ℳ𝒩.We can also write out the estimate for v now that the smoothness indices are fixed. Note that γ^k is infinitely smooth in ^3×, and a_m has the worst smoothness among all the coefficient functions in (<ref>). Thusv_C^n_0,τ_0(^3×[0,T])≤ C_T, n, ℳ𝒩too since n_0≤ m and a_k is independent of t.Let w solve (∂_t^2-Δ-q)w = F in ^3× with w=0 for t<0. Lemma <ref> shows that such a w exists in C^n_0,τ_0(^3×) and it has support on t≥x. Given T>0 it has the norm estimatew_C^n_0,τ_0(^3×[0,T])≤ C_T, n, ℳ𝒩by the estimate on F.Since s≥1 then F∈ C^0,1∩ C^1,0 with support in t≥x. This implies that ∂_t w and ∇_x w are continuous. Since w=0 when t<x we see that (∂_t+x/x·∇_x)w=0 for t≤x. Next consider v. We see that on t=x∂_t γ^k(x,t) =2t,k=1, 0,k≠1and∇_x γ^k(x,t) =-2x,k=1, 0,k≠1 ,so ∂_t v = 2t a_1 and ∇_x v = ∇ g - 2x a_1(x) if t=x. This implies that(∂_t + x/x·∇_x) v = x/x·∇ g(x)on t=x. If we set u=v+w, then we see that u(x,x) = g(x) and (∂_t+∂_r)u=∂_rg on t=r=x because w is continuous in ^3× and supported on t≥x. Moreover u∈ C^s sinceu_C^s(^3×[0,T])≤ C sup_n_0+τ_0≤ su_C^n_0,τ_0(^3×[0,T])and this gives us the required norm estimate from (<ref>) and (<ref>). Finally (∂_t^2-Δ-q)u = (∂_t^2-Δ-q)v + F = 0 on t>x.The estimate for the difference of solutions u_1-u_2 to two Goursat problems follows from the corresponding estimate for v_1-v_2 of Lemma <ref> and for w_1-w_2 of Lemma <ref>. After using the latter note thatF_1-F_2_C^n_0,τ_0≤ C_T,n(1+ℳ) a_m1-a_m2_C^n_0 + q_1-q_2a_m2_C^n_0holds and thus can be estimated above by the norms of q_1-q_2 and g_1-g_2.§ WELL-POSEDNESS OF THE POINT SOURCE BACKSCATTERING MEASUREMENTSNow that the Goursat problem has been taken care of we can focus on the point source problem. 
We will show that given apotential q there is a unique solution to (<ref>)–(<ref>), and we can define the associated backscattering measurements. Moreover these measurements depend continuously on the potential, with linear modulus of continuity.Let q∈ C^0_c() and a∈∂. Let r^a∈ C^1(^3×) solve the problem(∂_t^2 - Δ- q) r^a= 0,x∈^3, t>x-a, ( x-a ∂_t + 1 + (x-a) ·∇) r^a= q/8π,x∈^3, t=x-a.DefineU^a(x,t) = δ(t-x-a)/4πx-a + H(t-x-a) r^a(x,t)where δ, H ∈𝒟'() are the delta-distribution and Heaviside function. Then U^a is a solution to the point source problem (<ref>)–(<ref>). Take the above form of U^a as an ansatz and note that the first term is the Green's function for ∂_t^2 - Δ(∂_t^2 - Δ) δ(t-x-a)/4πx-a = δ(x-a,t)by for example Theorem 4.1.1 in <cit.>.Since the function r^a in our ansatz is a-priori only C^1, we will use a smoothened delta-distribution and Heaviside function. For ε>0 let δ_ε→ be smooth, supported in ]0,2ε[, positive, and ∫δ_ε =1. Let H_ε(t) = ∫_-∞^t δ_ε(s) ds. Then δ_ε converges to the delta-distribution as ε→0 and H_ε to the Heaviside function. Let our new ansatz beU_ε(x,t) = δ_ε(t-x-a)/4πx-a + H_ε(t-x-a) r^a(x,t). Let's calculate the derivatives of the second term in the ansatz next. Note that ∇· (x/x) = 2/x in 3D, and so setting R = H_ε(t-x-a) r^a(t,x) we have∂_t R= δ_ε(t-x-a) r^a + H_ε(t-x-a) ∂_t r^a ∂_t^2 R= δ_ε'(t-x-a) r^a + 2 δ_ε(t-x-a) ∂_t r^a + H_ε(t-x-a) ∂_t^2 r^a,∇ R= δ_ε(t-x-a) ( -x-a/x-a) r^a + H_ε(t-x-a) ∇ r^a Δ R= δ_ε'(t-x-a) r^a - δ_ε(t-x-a) 2r^a/x-a= - δ_ε(t-x-a) 2 x-a/x-a·∇ r^a + H_ε(t-x-a) δ_ε r^a, q R= H_ε(t-x-a) q r^a.Take all terms into account next. Then(∂_t^2 - Δ - q) U_ε = (∂_t^2-Δ) δ_ε(t-x-a)/4πx-a - q(x) δ_ε(t-x-a)/4πx-a + δ_ε'(t-x-a) (r^a-r^a) + 2δ_ε(t-x-a)/x-a( x-a∂_t r^a + r^a + (x-a) ·∇ r^a )+ H_ε(t-x-a) (∂_t^2 - Δ - q)r^a.As ε→0 the first term above converges to δ(x-a,t) in the space of distributions. The terms with coefficients δ_ε' and H_ε vanish. The former trivially, and the latter because our choice of δ_ε makes sure that H_ε⊂_+. In other wordslim_ε→0 (∂_t^2 - Δ - q)U_ε - δ(x-a,t)= lim_ε→0 2δ_ε(t-x-a)/x-a( x-a∂_t r^a + r^a + (x-a)·∇ r^a - q(x)/8π)in 𝒟'(^3×).Denote by f(x,t) the continuous function in parenthesis above. Let φ∈ C^∞_c(^3×) be a test function. Then in the support of φ for every μ>0 there is δ>0 such that f(x,t)<μ if t-x-a<δ. Let 2ε<δ. Then∫_^3×δ_ε(t-x-a)/x-a f(x,t) φ(x,t) dx dt≤μφ_∞∫_φδ_ε(t-x-a)/x-a dx dtand by integrating the t-variable first we get the upper bound…≤μφ_∞∫_B(a,R_φ)dx/x-a = C_φμ.In other words the remaining term in the expansion for (∂_t^2-Δ-q)U_ε tends to zero in the distribution sense. Hence(∂_t^2 - Δ - q) U_ε→δ(x-a,t)in 𝒟'(^3×). Also, since δ_ε⊂_+, it also satisfies the initial condition U_ε=0 for t<0. Finally, it is easy to see that U_ε→ U^a. Hence the latter is a solution to (<ref>)–(<ref>).For n∈ let q∈ C^n(^3) and let U be a distribution of order n on ^3× such that U=0 on t<0. If (∂_t^2 - Δ - q)U=0 then U=0. Let φ∈ C^∞_c(^3×) be arbitrary. There is x_0∈^3 and t_0∈ such that φ(x,t)=0 in x-x_0>t_0-t, i.e. outside a past light cone. Write y=x-x_0 and s=t_0-t, and defineQ(y) = q(y+x_0),F(y,s) = φ(y+x_0,t_0-s).Then Q∈ C^n(^3), F∈ C^∞_c(^3×) and F(y,s)=0 when s<y. Lemma <ref> gives the existence of w∈ C^n(^3×) which vanishes on s<y and satisfies (∂_s^2 - Δ - Q)w = F.Letψ(x,t) = w(x-x_0,t_0-t).Then ψ(x,t)=0 if x-x_0>t_0-t. Since U=0 for t<0, the intersection of the supports of ψ and U is a compact set. Since U is of order n and ψ is in C^n their distribution pairing ⟨ U, ψ⟩ is well defined. 
Now⟨ (∂_t^2 - Δ - q)U, ψ⟩ = ⟨ U, (∂_t^2 - Δ - q)ψ⟩ = ⟨Ũ, (∂_s^2 - Δ - Q)w ⟩ = ⟨Ũ, F ⟩ = ⟨ U, φ⟩where Ũ is the distribution U in the (y,s)-coordinates. Since U is in the kernel of the differential operator and φ is an arbitrary test function, we have U=0.Uniqueness follows directly from Lemma <ref>. We shall build a solution r^a to the Goursat-type problem of Lemma <ref>. We switch boundary conditions as was done at the beginning of Section <ref>. Defineg(x) = 1/8π∫_0^1 q( a + s(x-a) ) dsand note that q∈ C^n(^3), g∈ C^n+2(^3) for n=5. The well-posedness of the Goursat problem (Theorem <ref>) gives a unique C^1 solution to(∂_t^2 - Δ- q) r^a= 0,x∈^3, t>x-a,r^a= g,x∈^3, t=x-a.It has the required norm estimate for any T>0 and in addition it satisfies(∂_t + ∂_r) r^a = ∂_r gon t=x-a. Here r=x-a and furthermore we denote θ=(x-a)/x-a. If in the definition of g we switch integration variables to s'=rs then∂_r g = -1/r g + q/8π rwhich is well-defined because q=0 in a neighbourhood of a. Recalling that ∂_r = θ·∇_x we see that in fact(x-a∂_t + 1 + (x-a)·∇_x) r^a = q/8πon the boundary t=x-a. Hence Lemma <ref> shows that U^a is a solution to the point source problem.The unperturbed Green's function is supported only on t=x-a. On t<x-a the solution vanishes. On t>x-a it is equal to r^a which is C^1. In this topology, it depends continuously on a because the Goursat problem depends continuously on the potential and characteristic boundary data. Hence U(a,2τ) is well-defined for τ>0 and continuously differentiable in τ.Let two potentials q_1 and q_2 and their associated solutions r_1^a, r_2^a to the Goursat problem be given. For any a∈∂ and β∈{0,1} Theorem <ref> shows the norm estimatesup_x∈^3sup_0<τ<1∂_τ^β (r_1^a-r_2^a)(x,2τ)≤ C_ℳq_1-q_2because g_1-g_2_C^7(^3)≤q_1-q_2_C^7(^3) and the norms involved are invariant under translations. Letting x=a and then taking the supremum over a proves the claim because U_1^a-U_2^a = r_1^a-r_2^a at (x,t)=(a,2τ).§ STABILITY OF THE INVERSE PROBLEM Now that the direct problem has been shown to be well-defined, including the estimates for the point source backscattering measurements, we can consider the inverse problem. The first step is to write a boundary identity. The following is proven in <cit.> for C^∞-smooth potentials, but it works verbatim in our case too.Let =B(0̅,1) be the unit ball in ^3 and q_1, q_2 ∈. Let a ∈∂ and let U^a_1 and U^a_2 be given by Theorem <ref> for q=q_j, j=1,2. ThenU^a_1(a,2τ) - U^a_2(a,2τ) =1/32π^2τ^2∫_x-a=τ (q_1-q_2)(x) dσ(x) + ∫_x-a≤τ (q_1-q_2)(x) k(x,τ,a) d xwithk(x,τ,a) = (r^a_1+r^a_2)(x,2τ-x-a)/4πx-a + ∫_x-a^2τ-x-a r^a_1(x,2τ-t) r^a_2(x,t) dtif x-a≤τ.If we have moreover q_j≤ℳ < ∞ thensup_h≤τ≤1sup_a=1∫_x-a=τk(x,τ,a)^2 dσ(x) ≤ C_ℳ,h, < ∞, sup_h≤τ≤1sup_a=1∫_h≤x-a≤τ∂_τ (τ k(x,τ,a))^2 dσ(x) ≤ C_ℳ,h, < ∞for any h>0. Note that k(x,τ,a) is singular at x=a.We shall skip the proof of the identities as they have been proved in Section 3.2 of <cit.>. It is a matter of calculating∫_-∞^∞∫_^n (q_1-q_2)(x) U^a_2(x,t) U^a_1(x,2τ-t) dx dton one hand by integrating by parts, and on the other hand by using the expansion (<ref>). The estimates for k follow directly from (<ref>).Our next step is an integral identity related to the first term in (<ref>). The proof for the estimate for E(a,τ) can be dug from the proofs in <cit.>. We prove it again here, both for clarity, since this estimate might be of interest on its own, and for having an explicit form for the constant in front of the sum.Let Q∈ C^1_c() withthe unit disc in ^3. 
Then for all a∈∂ and 0<τ<a we have∂_τ( τ/4πτ^2∫_x-a=τ Q(s) dσ(x) ) = 1-τ/2Q((1-τ)a) + E(a,τ)whereE(a,τ)^2 ≤3/π(1-τ)∑_i<j∫_x-a=τΩ_ijQ(x)^2/√(x-(1-τ)) dσ(x).Here the Ω_ij are the angular derivatives x_i∂_j - x_j∂_i depicted as vector fields in Figure <ref>.We may prove the proposition for Q∈ C^∞_c(B) and then get the claim by approximating. Test functions are dense in C^1_c() and supf + sup∇ f≤ C f_C^1. By Proposition 2.1 in <cit.>∂_τ( τ/4πτ^2∫_x-a=τ Q(s) dσ(x) ) = 1-τ/2Q((1-τ)a) + 1/4π∫_x-a=τα·∇ Q(x)/sinϕ dσ(x),where α = α(a,x) is a unit vector orthogonal to x and ϕ is the angle at the origin between x and a. Let T_ij = x_ie_j - x_je_i so Ω_ij = T_ij·∇. Then for any vector v we havev = ∑_i<j( v ·T_ij/x) T_ij/x + (v·x/x) x/x.On x-a=τ set v := α and then take the dot product with ∇ Q(x). We getx^2 α·∇ Q(x) = ∑_i<j (α· T_ij)(T_ij·∇ Q)(x) = ∑_i<j (α· T_ij) Ω_ijQ (x)since x⊥α. By the Cauchy-Schwarz inequalityα·∇ Q(x)≤a/x∑_i<jΩ_ijQ(x)since T_ij≤x. This impliesE(a,τ)≤a/4π∑_i<j∫_x-a=τΩ_ijQ(x)/xsinϕ dσ(x). The law of cosines gives us 2axcosϕ = a^2 + x^2 - τ^2. Solve for cosϕ to get sinϕ = ±√(1-cos^2ϕ) and hence1/sinϕ = 2ax/√(4a^2x^2 - (a^2 + x^2 - τ^2)^2) = 2ax/√((x-τ+a) (x+τ-a) (τ+a-x) (τ+a+x)).But note that by assumption a > τ > 0 and a > x for all x∈. Hence 1/sinϕ≤2ax/√(a-τ)√(x-(a-τ))√(τ)√(a).and we can continue withE(a,τ)≤a^2/2π√(τa)√(a-τ)∑_i<j∫_x-a=τΩ_ijQ(x)/√(x-(a-τ)) dσ(x). Finally, use the Cauchy-Schwarz inequality twice: once for (∑_i<jf_ij)^2 ≤ 3 ∑_i<j f_ij^2 and a second time for the product of the two function Ω_ijQ(x)/(x-(a-τ))^1/4 and (x-(a-τ))^-1/4. It givesE(a,τ)^2 ≤3a^3 I(a,τ)/4π^2 τ (a-τ)∑_i<j∫_x-a=τΩ_ijQ(x)^2/√(x-(a-τ)) dσ(x)where I(a,τ) = ∫_x-a=τ, x≤a dσ(x) / √(x-(a-τ)).Parametrize the sphere a-x=τ by ρ=x and the azimuth θ∈[0,2π] to calculate I(a,τ). The latter variable gives the inclination of the plane aOx with respect to a fixed reference plane passing through O and a. See Figure <ref>. We also introduce the polar angle ξ.Using the standard spherical coordinates ξ, θ we havedσ(x) = τ^2 sinξ dξ dθ = τ^2 sinξdξ/dρ dρ dθ.By the law of cosines a^2 + τ^2 - 2aτcosξ = ρ^2. Solve for cosξ and differentiate this with respect to the variable ρ. Note that a,τ are constants, but ξ=ξ(ρ). We get-sinξdξ/dρ = d/dρcosξ = - ρ/aτwhich implies that dσ(x) = τa^-1ρ dρ dθ.Thus, since Q vanishes outside , we haveI(a,τ) = ∫_0^2π∫_a-τ^aτa^-1ρ dρ dθ/√(ρ - (a-τ))≤ 2πτ∫_0^τdρ/√(ρ) = 4πτ^3/2≤ 4πτ√(a).Finally use the fact thatis the unit ball and thus a=1 to conclude the claim. We are now ready to prove stability for point source backscattering. Write Ũ^a = U_1^a-U_2^a and q̃=q_1-q_2. By the assumptions and Proposition <ref> we haveτŨ^a(a,2τ) = τ/32π^2τ^2∫_x-a=τq̃(x) dσ(x) + ∫_x-a≤τq̃(x) τ k(x,τ,a) dxfor any τ>0, in particular for h<τ<1 which we shall assume now. By Proposition <ref> and the differentiation formula for moving regions (e.g. <cit.> Appendix C.4) we get∂_τ( τŨ^a(a,2τ) ) = 1-τ/8q̃((1-τ)a) + 1/4 E(a,τ)+ ∫_x-a=τq̃(x) τ k(x,t,a) dσ(x) + ∫_x-a≤τq̃(x) ∂_τ (τ k(x,τ,a)) dx. By the Cauchy–Schwarz inequalities of ^4 and the L^2-based function spaces L^2({x-a=τ}) and L^2({x-a≤τ}) we have(1-τ)^2 q̃((1-τ)a)^2 ≤ 256 ∂_τ( τŨ^a(a,2τ) )^2 + 16 E(a,τ)^2+ 256 ∫_x-a=τq̃(x)^2 dσ(x) ∫_q̃∩x-a=ττ k(x,τ,a) ^2 dσ(x)+ 256 ∫_x-a≤τq̃(x)^2 dx ∫_q̃∩x-a≤τ∂_τ (τ k(x,τ,a)) ^2 dxNote that q_1(x)=q_2(x)=0 for x-a<h. Also recall the estimates (<ref>) and (<ref>) for integrals of k from Proposition <ref>. 
We can proceed then with(1-τ)^2 q̃((1-τ)a)^2 ≤ C_M,h,( ∂_τ( τŨ^a(a,2τ) )^2 + E(a,τ)^2+ ∫_x-a=τq̃(x)^2 dσ(x) + ∫_x-a≤τq̃(x)^2 dx )since q_1, q_2≤ℳ.Integrate the above estimate with ∫_a∈∂… dσ(a) and use the coordinate change of Lemma <ref>. Then write 𝒬(r) = ∫_x=rq̃(x)^2 dσ(x) and scale the integration variable on the left-hand side to get𝒬(1-τ)/C_ℳ,h,≤∫_a=1∂_τ(Ũ^a(a,2τ) ) ^2 dσ(a) + ∫_a=1E(a,τ)^2 dσ(a)+ π∫_x≥ 1 - τq̃(x)^2 τ^2 + 2τ - (1-x)^2/x dx. Next, estimate E(a,τ)^2 using Proposition <ref>. Then change the order of integration using Lemma <ref>, switch to angular coordinates, and apply angular control (<ref>) to get∫_a=1E(a,τ)^2 dσ(a) ≤6τ/1-τ∑_i<j∫_x≥ 1-τΩ_ijq̃(x)/x√(x-(1-τ)) dσ(x)= 6 τ/1-τ∑_i<j∫_1-τ^1 ∫_x=rΩ_ijq̃(x)/r √(r-(1-τ)) dσ(x) dr ≤ 6 S^2 ∫_1-τ^1 τ/1-τ𝒬(r)/r√(r-(1-τ)) dr.Similarly, the last term in (<ref>) can be written as… = π∫_1-τ^1 τ^2 + 2τ - (1-r)^2/r𝒬(r) dr.Finally, combine estimates (<ref>) and (<ref>) to change (<ref>) into𝒬(1-τ) ≤ C_ℳ,h,∫_a=1∂_τ (τŨ^a(a,2τ))^2 dσ(a)+ C_ℳ,h,∫_1-τ^1 ( 6 S^2 τ/(1-τ) r √(r-(1-τ)) + πτ^2 + 2τ - (1-r)^2/r) 𝒬(r) drwhich is valid for 0<τ<1.Our next step is to prepare for Grönwall's inequality. The inequality above can be written asφ(τ) ≤ d(τ) + ∫_0^τβ(τ, s) φ(s) dsfor 0<τ<1 whereφ(τ) = 𝒬(1-τ),d(τ) = C_ℳ,h,∫_a=1∂_τ (τŨ^a(a,2τ))^2 dσ(a)andβ(τ, s) = C_ℳ,h,( 6 S^2 τ/(1-τ) (1-s) √(τ-s) + πτ^2 + 2τ - s^2/1-s).Because of the singularities of β we restrict (<ref>) to 0 < τ≤ 1-ε for any given ε>0. We have 1-s ≥ 1-τ≥ε > 0 and τ≤ 1. In this situation we see easily thatβ(τ,s) ≤6 C_ℳ,h, S^2/ε^2 √(τ-s) + 3π C_ℳ,h,/√(ε)√(τ-s)≤6 S^2 + 3π/ε^2C_ℳ,h,/√(τ-s).Denote C_S,ℳ,h, = (6S^2 + 3π) C_ℳ,h,.An application of Grönwall's inequality (Lemma <ref>) impliesφ(τ) ≤(1 + 2 C_S,ℳ,h,ε^-2) sup_0<τ_0<1 d(τ_0) exp( 4 C_S,ℳ,h,^2 ε^-4τ)for 0 < τ≤ 1-ε. Now, given any τ∈ (0,1) we choose ε>0 such that τ≤ 1-ε and the right-hand side of the estimate above is minimized. These conditions are satisfied for ε = 1-τ. The claim (<ref>) follows after recalling that φ(τ) = ∫_x=1-τ(q_1-q_2)(x)^2 dσ(x) and applying simple estimates.Let us prove the norm estimate for q̃ = q_1-q_2 over the wholenext. Rewrite (<ref>) asq̃_L^2({x=r})≤Λ e^ℭ/r^4where Λ = U_1^a-U_2^a. Since ↪ W^1,∞() and the potentials are supported inwe have the Lipschitz-norm estimate q̃(x)≤|q̃(x+ℓx/x) | + 2 ℓℳ for any ℓ≥0. Integration givesq̃_L^2({x=r})≤ 2 √(4π)ℳ r ℓ + r/r+ℓΛ e^ℭ/(r+ℓ)^4which we can estimate toq̃_L^2({x=r})≤ 2 √(4π)ℳℓ + Λ e^ℭ/ℓ^4because 0≤ r≤ 1 and ℓ≥ 0. The full domain estimate (<ref>) follows from Lemma <ref>.The proof for q_1-q_2 radially symmetric proceeds as above until (<ref>). Since in the condition of angular control (<ref>) we can assume that S=0, we haveβ(τ,s) = C_ℳ,h,πτ^2+2τ-s^2/1-s≤C'_ℳ,h,/1-sand soφ(τ)/C”_ℳ,h,≤U_1^a-U_2^a^2 + ∫_0^τφ(s)/1-s ds.This type of integral inequality impliesφ(τ) ≤ C”_ℳ,h,U_1^a-U_2^a^2 exp( ∫_0^τC”_ℳ,h,/1-s ds )= C”_ℳ,h,U_1^a-U_2^a^2 (1-τ)^-2αfor some α=α(ℳ,h,) by Grönwall's inequality. Note that here τ is allowed to be anywhere in the whole interval (0,1) without any of the constants blowing up. Following the rest of the proof implies Hölder stability.§ TECHNICAL TOOLSWe collect here some basic calculations and some well known theorems so that we may refer to them without losing focus in the main proof.Let f be a continuous function vanishing outside ofand let τ<1 positive. Then∫_a=1∫_x-a=τ f(x) dσ(x) dσ(a) = 2πτ∫_x≥ 1-τf(x)/x dxand∫_a=1∫_x-a≤τ f(x) dx dσ(a) = π∫_x≥ 1-τf(x)/x(τ^2 - (1-x)^2) dx. The first equation was proven just before formula (2.10) in <cit.>. 
The left-hand side of the second equation was shown to be equal to∫_x≤1 f(x) ∫_a=1 H(τ^2-x-a^2)dσ(a) dxtherein too.The last equality follows by noting that the integral of the Heaviside function is just the area of the spherical cap arising from the intersection of a=1 and a-x=τ. If x<1-τ then this intersection is empty. Otherwise the area is seen to be 2π· r · h, where r=1 is the radius of the sphere {a=1} and h is the height of the cap along the ray y0̅. Two applications of Pythagoras' theorem and some simple algebra imply that h = (τ^2 - (1-x)^2)/(2x) and thus the final equality is proven. Let b>a and d (a,b) → be bounded and measurable. Moreover let β (τ,s) ↦β(τ,s) be measurable whenever τ,s ∈ (a,b) and s < τ.Moreover let it satisfyβ(τ,s) ≤C/√(τ-s)for some C<∞ whenever s<τ.If φ (a,b) → is a non-negative integrable function that satisfies the integral inequalityφ(τ) ≤ d(τ) + ∫_a^τβ(τ,s) φ(s) dsfor almost all τ∈ (a,b), thenφ(τ) ≤ (1 + 2 C √(b-a)) sup_a<τ_0<b d(τ_0) e^4C^2 τ. First of all note that since φ≥ 0, we may estimate β from above in the integral, and see that the former satisfiesφ(τ) ≤ d(τ) + C ∫_a^τφ(s)/√(τ-s) dsfor almost all τ.Next bootstrap the above by estimating φ inside the integral using that same inequality. Thenφ(τ) ≤ d(τ) + C ∫_a^τd(s)/√(τ-s) ds + C^2 ∫_a^τ∫_a^s φ(s')/√(τ-s)√(s-s') ds' ds. The double integral is estimated as follows: ∫_a^τ∫_a^s … ds' ds = ∫_a^τ∫_s'^τ… ds ds', and then we are left to estimate ∫_s'^τ ds / √(τ-s)√(s-s'). To do that split the interval (s',τ) into two equal parts by the midpoint s = (τ+s')/2. In the interval s∈(s',(τ+s')/2) we have 1/√(τ-s)≤√(2/(τ-s')) and ∫_s'^(τ+s')/2 ds/√(s-s') = √(2(τ-s')). Their product is equal to 2. The same deduction works in the second interval. Hence∫_s'^τds/√(τ-s)√(s-s')≤ 4indeed andφ(τ) ≤ d(τ) + C ∫_a^τd(s)/√(τ-s) ds + 4C^2 ∫_a^τφ(s') ds'follows.The first two terms above have an upper bound(1 + 2 C √(b-a)) sup_a<τ_0<b d(τ_0)because ∫_a^τ ds/√(τ-s) = 2√(τ-a)≤ 2√(b-a). Grönwall's inequality implies the final claim: If φ(τ) ≤ C_1 + C_2 ∫_0^τφ(s) ds for τ≥0 where φ≥0 then φ(τ) ≤ C_1 exp(C_2τ). This follows for example fromin <cit.> and some algebra. Note however that the integral form of Grönwall's inequality inof <cit.> is weaker than this one. Let f _+ → be a positive function satisfyingf(ℓ) ≤ Aℓ + Λ e^ℭ/ℓ^4for some Λ < ∞ and any ℓ in its domain. Then if 0<Λ<e^-1 we havef(ℓ_0) ≤A (2ℭ)^1/4 + 2/( ln1/Λ)^1/4where ℓ_0^4 = ℭ / (ln1/√(Λ)). If Λ≥ e^-1 then we have the linear estimatef(ℓ_0) ≤ (A ℭ^1/4 + 1) e Λ.for ℓ_0^4 = ℭ.Since Λ < e^-1 the choice of ℓ_0 is proper. Moreover we see immediately thatf(ℓ_0) ≤A (2 ℭ)^1/4/(ln1/Λ)^1/4 + √(Λ).Recall the elementary inequality ln1/a≤1/b a^-b for b>0 and 0<a<e^-1. Set b=2 and a = Λ to see that√(Λ)≤2/ln1/Λ≤2/(ln1/Λ)^1/4since ln1/Λ > 1 then. The first claim follows. The second claim is elementary.The following is from personal communication with Rakesh. Let p→ be a measurable function. Then, given any time t≥0 and position x∈^n with t ≥x, we havey+x-y≤ t ⟺ (t-y)^2 - x-y^2 ≥ 0and∫_y+x-y≤ tp((t-y)^2-x-y^2)/y dy = ∫_w≤1/2√(t^2-x^2)p((√(t^2-x^2)-w)^2-w^2)/w dw. The first claim follows from the triangle inequality applied to a triangle with vertices x, y and 0̅: t-y+x-y≥x-y+x-y≥ 0, so we may multiply the inequalityt-y-x-y≥ 0by the former without changing sign.Let p_+(r) = p(r) for r≥0 and p_+(r) = 0 for r<0. Denote the left-hand side integral in the statement by I. 
ThenI = ∫_^3p_+((t-y)^2-x-y^2)/y dy = ∫_^3∫_-∞^∞δ(s-y)/y p_+((t-y)^2-x-y^2) ds dy = ∫_-∞^∞∫_^3δ(s-y)/y p_+((t-y)^2-x-y^2) dy ds = 2 ∫_-∞^∞∫_^3δ(s^2-y^2) p_+((t-y)^2-x-y^2) dy ds. Let L_1^3→^3 be a rotation taking x ↦ (x,0,0). Let it map y ↦ y'. Then dy = dy' and soI = 2 ∫_-∞^∞∫_^3δ(s^2-y'^2) p_+((t-y')^2-L_1x-y'^2) dy' ds.Next let (s,y') ↦ z ∈^4 be the Lorentz transformation given byz_0 = ts-xy_1'/√(t^2-x^2),z_1 = t y_1' - x s/√(t^2-x^2),z_2 = y_2,z_3 = y_3.It is a trivial matter to see that dz = dy'ds and the following identitiesz_0^2 - z_1^2 = s^2-y_1'^2, (√(t^2-x)-z_0)^2 - z_1^2 = (t-s)^2 - (x-y_1')^2. Finally, denoting z^2 = z_1^2+z_2^2+z_3^2 and z· z= z_0^2 - z^2, we haveI = 2∫_^4δ(z· z) p_+((√(t^2-x^2)-z_0)^2 - z^2) dz = ∫_-∞^∞∫_^3δ(z_0-z)/z p_+((√(t^2-x^2)-z_0)^2 - z^2) dz_1dz_2dz_3dz_0 = ∫_^3p_+((√(t^2-x^2)-z)^2 -z^2)/z dz_1dz_2dz_3= ∫_^3p_+((√(t^2-x^2)-w)^2-w^2)/w dwwhich implies the claim since (√(t^2-x^2)-w)^2-w^2 ≥ 0 if and only if √(t^2-x^2)-w-w≥ 0. §.§ AcknowledgementsI am indebted to Rakesh for the many discussions that led me to understanding the Goursat problem and how to show the well-posedness of the point source problem. Without his help this important part of the paper would have taken many more months to complete. In addition I would like to thank the anonymous referees and their comments. This led to the realization that radially symmetric potentials have a better stability estimate.tocsectionBibliography 30 [Bal1]BaleanPhD Balean, R.: The Null-Timelike Boundary Problem, University of New England, PhD thesis, 1996. [Bal2]Balean Balean, R.: The null-timelike boundary problem for the linear wave equation, Communications in Partial Differential Equations, 22 (1997), 1325–1360. [Cag]Cagnac Cagnac, F.: Problème de Cauchy sur un conoïde caractéristique pour des équations quasi-linéaires, Ann. Mat. Pura Appl. (4), 129 (1982), 13–41. [Evans]Evans Evans, L. C.: Partial Differential Equations, Graduate Studies in Mathematics, Vol 19, American Mathematical Society, Providence, Rhode Island, second edition, 2010. [Fri]Friedlander Friedlander, F. G.: The wave equation on a curved space-time, Cambridge University Press, February 1975. [Man]Mandache Mandache, N.: Exponential instability in an inverse problem for the Schrödinger equation, Inverse Problems, 17, 5 (2001), 1435–1444. [MU]MU Melrose, R. and Uhlmann, G.: Generalized Backscattering and the Lax-Phillips Transform, Serdica Math. J., 34 (2008), 1026–1044. [RU1]RU1 Rakesh and Uhlmann, G.: Uniqueness for the inverse back-scattering problem for angularly controlled potentials, Inverse Problems, 30 (2014), 065005. [RU2]RU2 Rakesh and Uhlmann, G.: The point source inverse back-scattering problem, Contemporary Mathematics, 644 (2015), 279–289. [Rom]Romanov1974Romanov, V.: Integral Geometry and Inverse Problems for Hyperbolic Equations, Springer-Verlag Berlin Heidelberg, 1974.
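A side note, purely as a numerical illustration and not part of any proof above: the singular convolution-kernel bound used in the proof of the Grönwall-type lemma, ∫_s'^τ ds/√((τ-s)(s-s')) ≤ 4, can be checked directly. After the substitution s = s' + (τ-s')u the integral becomes ∫_0^1 du/√(u(1-u)) = B(1/2,1/2) = π for every s' < τ, so the constant 4 is safe though not tight. A minimal sketch (the quadrature routine is a standard SciPy call; this is our illustration, not the author's):

    from scipy.integrate import quad

    # int_0^1 du / sqrt(u(1-u)) = Beta(1/2,1/2) = pi, independent of s' and tau;
    # quad handles the integrable endpoint singularities
    val, err = quad(lambda u: (u * (1.0 - u)) ** -0.5, 0.0, 1.0)
    print(val)  # ~3.14159 <= 4, consistent with the midpoint-splitting bound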
http://arxiv.org/abs/1705.09442v2
{ "authors": [ "Eemeli Blåsten" ], "categories": [ "math.AP", "35R30, 78A46, 35A08, 35L15" ], "primary_category": "math.AP", "published": "20170526061713", "title": "Well-posedness of the Goursat problem and stability for point source inverse backscattering" }
Department of Physics and Astronomy, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA

Primordial black holes (PBHs) have long been suggested as a candidate for making up some or all of the dark matter in the Universe. Most of the theoretically possible mass range for PBH dark matter has been ruled out with various null observations of expected signatures of their interaction with standard astrophysical objects. However, current constraints are significantly less robust in the 20 M_⊙≲ M_ PBH≲ 100 M_⊙ mass window, which has received much attention recently, following the detection of merging black holes with estimated masses of ∼30 M_⊙ by LIGO and the suggestion that these could be black holes formed in the early Universe. We consider the potential of advanced LIGO (aLIGO) operating at design sensitivity to probe this mass range by looking for peaks in the mass spectrum of detected events. To quantify the background, which is due to black holes that are formed from dying stars, we model the shape of the stellar-black-hole mass function and calibrate its amplitude to match the O1 results. Adopting very conservative assumptions about the PBH and stellar-black-hole merger rates, we show that ∼5 years of aLIGO data can be used to detect a contribution of >20 M_⊙ PBHs to dark matter down to f_ PBH<0.5 at >99.9% confidence level. Combined with other probes that already suggest tension with f_ PBH=1, the obtainable independent limits from aLIGO will thus enable a firm test of the scenario that PBHs make up all of dark matter.

Probing Primordial-Black-Hole Dark Matter with Gravitational Waves
Ely D. Kovetz
December 30, 2023
==================================================================

One of the cornerstones of Λ CDM, the concordance cosmological standard model, is the cold dark matter (DM) component that makes up ∼25% of the energy density in the Universe today. While the evidence for its existence is compelling <cit.>, its nature is still unknown. As the limits on models of particle dark matter (in particular weakly-interacting massive particles, known as WIMPs <cit.>) are tightening <cit.>, it is becoming ever more important to consider alternative models. An especially intriguing candidate to make up the invisible form of matter in the Universe is primordial black holes (PBHs), which are black holes formed deep in the radiation era of the infant Universe <cit.>. Based on various observations, the contribution of PBHs to dark matter has been strongly constrained across more than 30 orders of magnitude of their theoretically possible mass range <cit.>. Still, in several mass windows existing constraints are less stringent and additional probes are called for. This is especially true for the 20 M_⊙≲ M_ PBH≲ 100 M_⊙ window, which has attracted much interest as a result of the first detection of merging black holes with measured masses of ∼30 M_⊙ by the LIGO observatory <cit.>, following the demonstration in Ref. <cit.> that the predicted merger rate for PBHs in this mass range is consistent with the estimated event rate for high-mass mergers from the O1 aLIGO data <cit.>. One could describe the search for PBH dark matter in analogy to the one for particle dark matter, as is illustrated in Fig.
<ref>. The constraints on the former to date have been solely based on “direct” detection searches, involving possible interactions between PBHs and standard astrophysical objects. In this Letter, we consider the prospects of the “indirect” search path for PBH dark matter, namely the production of standard (cosmological) model signals in the form of gravitational waves as a result of PBH self-interaction (or “annihilation”). The key to this approach is to understand and quantify the background as well as possible, and to identify unique features in the dark matter signal that can tell them apart. Examples of such features are the orbital eccentricity of the coalescing binary and the black hole spins. The former was investigated recently in Ref. <cit.>, but unfortunately the prospects for detecting events with a non-zero trace of the initial eccentricity in aLIGO are quite dim. As for the spin, the problem is twofold. On the one hand, the initial spin distribution of PBHs is unclear (see <cit.> for some estimates), and on the other, the key identifier of PBH mergers in this regard—that the spins of the two black holes should not be aligned—is mimicked by various models of dynamical binary formation <cit.> (whose rate is currently very uncertain), and so it is very hard to distinguish them from the background. Here we focus on the mass spectrum of merging black hole binaries as a probe of PBH dark matter. The idea is simple: if PBHs in a given mass range account for all (or some fraction) of the dark matter in the Universe, we should see an excess of merger events involving black holes in the corresponding mass bin <cit.>. Provided that this excess is large enough to be differentiated from the expected background from mergers of black holes of stellar origin, a detection, or lack thereof, could either support the existence or place limits on the abundance of PBHs. This abundance is parameterized by the quantity f_ PBH, the fraction of dark matter in PBHs. Our goal is to provide the most reliable constraints on PBH dark matter—focusing especially on the most motivated question, namely whether all of dark matter can be explained by PBHs—and we therefore choose to be very careful and consider the most pessimistic case for PBH dark-matter detection, i.e., the lowest estimated rate of PBH mergers and the highest rate of stellar-black-hole mergers. We will show that even in this challenging case, gravitational-wave observations by aLIGO at design sensitivity should within a decade either exhibit strong hints for a PBH contribution to dark matter, or rule them out as the single form of dark matter (albeit allowing for a considerable fraction of it to be made up of PBHs). To begin, we review the suggested mechanisms of PBH binary formation. The first model was put forward in Refs. <cit.>, where it was demonstrated that a subset of an initial PBH population, assumed to be randomly distributed in space, would be in close enough proximity to overcome the cosmic expansion and form bound pairs. The distribution of the semi-major axes of these binaries will be quite wide. However, as they generically have high initial orbital eccentricity (they avoid a head-on collision due to the influence of the closest neighbors), some will have merger times that allow them to reach the endpoint of their coalescence within the detectable volume of aLIGO. Ref. <cit.> predicts a very high merger rate (which is in fact already in tension with existing observations if f_ PBH≫0.01).
However, a crucial question is whether these early-formed binaries can survive as bound pairs throughout the evolution of the Universe, without being disrupted. This was briefly addressed in Ref. <cit.>, where the probability for disruption was calculated to be as small as 𝒪(10^-7), but only for a binary that resides in a Milky-Way-type halo today. Since PBH binaries in this scenario were created very early on, before the formation of large dark matter halos (in a series of violent cosmic processes), this is at best an underestimate of the actual disruption rate. A thorough reexamination of this model, taking into account the interaction of PBH binaries with other PBHs, with the rest of dark matter (if f_ pbh≪1) and with baryonic matter, is under way, but is outside the scope of this Letter. In short, Ref. <cit.> evaluates the chance for disruption by tidal effects of the smooth halo and encounters with other PBHs (which are more efficient in the first halos, which are denser and have lower velocity dispersions), finding that it is higher than previously estimated. Meanwhile, the effect of a circumbinary accretion disk has been examined in Ref. <cit.>, concluding that it could lead to a decrease in the semi-major axis on a timescale fast enough that all early-formed binaries with masses in the stellar-mass range will have merged well before redshift z∼1, and therefore remain outside the reach of aLIGO. Aiming to provide the most robust bounds on PBH dark matter, we shall therefore treat this rate as an optimistic case, and proceed to focus on more conservative scenarios, described below. A second model of PBH binary formation was proposed in Ref. <cit.>. In this model, PBHs form binaries in close two-body encounters, as a result of energy loss through gravitational-wave emission as they pass each other by. The rate for this process was calculated for dark matter halos of different masses (and densities and velocity dispersions), and when integrated over a full mass function (cutting off at the low-mass end, where halos would be small enough to evaporate by dynamical relaxation), the total merger rate was found to be R_ PBH≈2f_ PBH^53/21(M_ PBH/30M_⊙)^-11/21 Gpc^-3 yr^-1, where we have used the fact that the evaporation time is primarily determined by the number of black holes in the halo, as their density in this limit—and thus the dynamical time—is roughly constant with mass. This rate for PBHs with mass M_ PBH=30 M_⊙ is consistent with the recent LIGO 90%-interval estimate for black holes as massive as the ones in the first detection, 0.5-12 Gpc^-3 yr^-1 <cit.>. Of the many assumptions in this calculation, by far the most daring is the extrapolation of the halo mass function, velocity dispersion distribution, density profile and mass-concentration relation to very low (∼10^3 M_⊙) dark matter halo masses, orders of magnitude below what can be observed or even simulated. The good news, however, is that since these binaries have a very small separation when they form, they merge very quickly and are not susceptible to the interfering processes mentioned above for early-formed binaries. While a bias against a more-optimistic higher rate is not too worrisome for the purposes of this work, it is more crucial to have a clear idea of the lowest reasonable bound on the merger rate.
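Since the conservative rate in Eq. (<ref>) is reused repeatedly in the forecasts below, it is convenient to have it in code; the following minimal sketch (function and variable names are ours, not from any published pipeline) simply evaluates that expression:

    def pbh_merger_rate(f_pbh, m_pbh):
        """Conservative PBH merger rate (Gpc^-3 yr^-1) from the two-body
        capture channel: R = 2 f^(53/21) (M / 30 Msun)^(-11/21)."""
        return 2.0 * f_pbh ** (53.0 / 21.0) * (m_pbh / 30.0) ** (-11.0 / 21.0)

    # f_PBH = 1 at 30 Msun gives 2 Gpc^-3 yr^-1, inside the quoted LIGO
    # 90% interval of 0.5-12 Gpc^-3 yr^-1 for the first-detection masses
    print(pbh_merger_rate(1.0, 30.0))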
In work to appear <cit.>, a merger rate is calculated for the same two-body formation mechanism of PBH binaries, but focusing on the contribution to the merger rate from encounters that occur inside dark matter density spikes around supermassive black holes (SMBHs) at the centers of galaxies <cit.>. Integrating over the mass function of SMBHs from ∼10^5 M_⊙ to ∼10^9 M_⊙, this is found to yield roughly >10% of the total rate from all halos, providing a highly conservative lower floor for the total PBH merger rate. We will therefore proceed by adopting R_ PBH in Eq. (<ref>) as a conservative estimate for the rate of PBH mergers in the local Universe, addressing the pessimistic case of a rate ten times lower, and bearing in mind that if early-Universe binaries are somehow found to be stable to disruption, the rate could be order(s) of magnitude higher. The next step is to make a prediction for the background, which consists of mergers of stellar black holes that contribute to the mass spectrum of detected events. To do this, we require an assumption for the astrophysical merger rate (which depends on factors such as metallicity and merger time-delay distributions) and for the mass function of stellar black holes. For the merger rate, we follow Ref. <cit.>. We use a simple approximation, R(z)≃97(1+z)^2 Gpc^-3 yr^-1, which provides a good fit to their results at redshifts z<1 <cit.>. As for the black hole mass function, we follow Ref. <cit.> and consider a simple ansatz whereby the probability distribution function (PDF) of the black hole mass is described by a simple power law (motivated by the slope of the stellar initial mass function, which has been corroborated by numerous observations in the 1-100 M_⊙ mass range <cit.>). We impose sharp and exponential cutoffs at the lower and upper ends of the mass spectrum, respectively, to take into account the neutron-star–BH transition threshold and the increasing wind-driven mass loss of high-mass stars in the Wolf-Rayet phase. Denoting the black hole mass by M_1, the PDF is given by P(M_1)= A_M_1 M_1^-αℋ(M_1-M_ gap) e^-M_1/M_ cap, where ℋ is the Heaviside function and A_M_1 is an overall normalization. We shall use as fiducial values α=2.35 (following <cit.>), M_ gap=5 M_⊙ (motivated by current observations <cit.> and by some theoretical works <cit.>) and M_ cap=60 M_⊙. The latter choice naturally affects our constraints strongly at masses beyond the cutoff, as with a vanishing background, PBHs of that mass would be easier to detect. However, very massive PBH binaries will have a merger frequency too small to be detected by aLIGO, which offsets this effect. We deliberately avoid choosing a lower value, to reflect our ignorance and so as not to artificially strengthen the bound on f_ PBH. With future observations, it might be possible to improve on these somewhat arbitrary choices, perhaps leading to more tightened constraints. Our observable is the total number of detected events as a function of the black hole mass. In order to determine the background contribution, we need to model the mass ratio between the black hole binaries as well. Setting M_1 from now on to be the mass of the heavier black hole in each binary, we follow Ref. <cit.> and assume that the mass of the lighter black hole, M_2, has a uniform distribution in the range [M_ gap,M_1], given by P(M_2)= A_M_2ℋ(M_2-M_ gap)ℋ(M_1-M_2). The observable redshift volume of gravitational waves from BH mergers depends on the instrumental properties.
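Before specifying the instrument response, note that the background mass model just defined, Eqs. (<ref>)-(<ref>), can be written down directly; a sketch with the fiducial parameters quoted above (the numerical normalization grid is an implementation choice of ours, not from the text):

    import numpy as np

    ALPHA, M_GAP, M_CAP = 2.35, 5.0, 60.0  # fiducial values from the text

    def _shape(m):
        m = np.asarray(m, dtype=float)
        return m ** (-ALPHA) * np.exp(-m / M_CAP) * (m >= M_GAP)

    _GRID = np.linspace(M_GAP, 500.0, 50000)
    _NORM = np.trapz(_shape(_GRID), _GRID)

    def p_m1(m1):
        """PDF of the heavier BH mass: power law with a sharp cutoff at
        M_gap and an exponential cap at M_cap, normalized numerically."""
        return _shape(m1) / _NORM

    def p_m2_given_m1(m2, m1):
        """Uniform PDF of the lighter mass on [M_gap, M1]."""
        m2 = np.asarray(m2, dtype=float)
        return ((m2 >= M_GAP) & (m2 <= m1)) / (m1 - M_GAP)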
The signal-to-noise ratio (S/N) for a single interferometer detector is given by (S/N)^2 =∫_f_ min^f_ max df 4h_c^2(f)/5S_n(f) (2 f)^2, where h_c(f) is the observed strain amplitude and S_n(f) = h_n^2(f)is the strain noise amplitude (for more details, see Ref. <cit.> and references therein).We follow convention and set the detection threshold at S/N>8.0 <cit.>.We use the approximated analytical model for aLIGO noise of Ref. <cit.>, settingf_ min=10Hz, above which the curve matches the official LIGO curve very well <cit.>. Lastly, inferring the mass of the merging black holes from the gravitational-wave signal involves an associated uncertainty.We model this by convolving the mass function with a log-normal distribution reflecting a 5% relative mass error foraLIGO observations (see Ref. <cit.>). This choice suggests a minimal width for our binning of the mass functionwhen calculating the signal-to-noise.We are now equipped to make a theoretical prediction for the total number of detected backgroundmerger events over a time T_ obs with a given mass M_1 (the mass of the heavier BH in the one-dimensional (1D)case), or two masses M_1,M_2 (in the 2D case), which is given bydN^ BG(M_1)/dM_1 =4π P(M_1)T_ obs∫_M_ gap^M_1 P(M_2)dM_2×∫_0^z_ max(M_1, M_2)cχ(z)^2R(z)/(1+z)H(z)dz.Here the observable redshift volume is defined by z_ max, themaximum redshift up to which a BH merger with masses M_1,M_2 can be detected;H(z) is the Hubble parameter and χ(z) is the radial comoving distance.In the 2D case, we use dN(M_1,M_2)/dM_1dM_2, dropping the first integration inEq. (<ref>).Integrating within each mass bin i with edges [M_min,i,M_max,i], we finally getN^ BG_i = ∫_M_min,i^M_max,idN(M_1)/dM_1dM_1.We then divide N(M_1) into 50 logarithmic bins from 4 M_⊙ to 120 M_⊙ and likewise N(M_1,M_2) into 100 bins (10 along eachaxis).The bin width is chosen such that the mass measurement error is subdominant, and the variance ineach bin is set by the Poisson error σ_i^2=N_i. The most powerful way to constrain the abundance of a PBH population with a narrow massdistribution is to look at the 2D mass distribution of detected events, and check for a peak in themass bin(s) surrounding the central value of that distribution. This approach makes use of allthe available data, but since the background needs to be calculated for each mass bin, making accurate forecasts requires a good understanding of the black-hole mass function as well asthe binary mass ratio. To somewhat relax this model dependence, we can also choose to limitourselves to using only the number counts for the heavier BH in each binary (the mass-ratiowill still enter the signal-to-noise calculation through the fiducial choice of the M_2 PDF,but since each column in the one-dimensional M_1 distribution is an integral quantity overthe full M_2 mass range, the dependence on this choice is weaker in the 1D case). The widely used convention when forecasting limits on the fraction of dark matter in PBHs is toassume a delta-function PBH mass function. In practice, weassume Gaussian PDFs forM_ PBH=M^ PBH_1=M^ PBH_2 with a 5% width (the precise choice isunimportant as long as this width is smaller than the measurement error). Togetherwith the rate in Eq. (<ref>), we get N^ PBH_i(f_ PBH, M_ PBH) for each valueof f_ PBH and M_ PBH using the prescription in Eqs. 
(<ref>),(<ref>).We now derive our forecast for the limits that aLIGO at design sensitivity can impose with the planned6 years of observation by solving the following equation for f_ PBHS/N=√(∑_i(N^ Tot_i-N^ BG_i)/√(N^ Tot_i))^2); N^Tot≡ N^ PBH_i+N^ BG_i ⟶√(∑_i(N^ PBH_i(f_ PBH, M_ PBH)/√(N^BG_i))^2)-n_σ=0,where we set a desired signal-to-noise ratio of n_σ=3 or 5 standard deviations and have assumed under thenull hypothesis (N_ PBH=0) that the (Poisson) error in each bin is√(N^ BG_i) (using the Gaussian approximationfor the Poisson distribution is valid given the number of events in each bin). The result—in the form of 3- and 5-σ limits on f_ PBH for each PBH mass—is shown inFig. <ref>. We see that based on the rate in Eq. (<ref>),the scenario in which all the dark matter is in the form of PBHs can be strongly tested (ruled out at ≫5σ)when using either the full 2D or 1D mass spectra of observed BH mergers.Note that we have assumed the stellar-BH mass-function parameters can be held fixed, rather thanfitting for them in tandem with the amplitude of the PBH contribution and marginalizing over them. Underthe assumption that it is smooth, consistent with <cit.>, the effect of thisapproximation on our results is small. We emphasize that if it is found to deviate strongly from Eq. (<ref>), future data from planned next generation GW experiments such as Cosmic Explorer <cit.>, Einstein Telescope <cit.>, DECIGO <cit.> and LISA <cit.>—which will be sensitive enough at low frequencies to be able to detect BH mergers at high redshifts, well beyond the peak of the star-formation rate— will allow a straightforward discrimination between the stellar and primordial merging-BH populations, based on their very different redshift distributions <cit.>.Another source of uncertainty is the overall background amplitude, whose current 90%-confidence rangestill spans roughly an order of magnitude <cit.>.To incorporate the various modeling uncertainties, we use the dependence of Eq. (<ref>) on f_ pbh and of Eq. (<ref>) onthe background and PBH rates, to show in Fig. <ref> a band encompassing an underestimate (overestimate) of up to a factor of 400% (200%) in thebackground (signal) rates. As can be seen, our conclusion that the scenario of PBH DM can be tested convincingly (≫5σ) in the range 20 M_⊙≲ M_ PBH≲ 100M_⊙ is quite robust. even with a PBH merger rate ten times lower, which as described above should be taken as an ultra-conservative bound, f_ PBH=1 can still be rejected at 5σ confidence across the 20-100 M_⊙ rangeusing the 2D information. Naturally, if the rate is much higher <cit.>, a null detection will yield even stronger limits on f_ PBH, motivating efforts to better understand the mechanisms of PBH binary formation and disruption, using both analytic methods and simulations.Eventual bounds will depend on additional real-world factors, such as the operating sensitivity of aLIGO during its six year run.In particular, we stress that the constraints can be significantly improved if the signal-to-noise threshold for a detected event ineach interferometer is reduced, as should be done when performing a statistical analysis of an ensemble of events <cit.>.While the constraints we forecast are weaker than some that have been already claimedor projected by other methods, they constitute a unique and independent method of testing the scenario, by focusing on the self-interactionof PBHs rather than their interaction with other astrophysical objects. 
They are subject to different systematics andmodeling assumptions, and since the analysis presented here heavily errs on the side of caution and still findspromising results, they represent a truly robust test, achievable within a decade, of the important cosmological scenario that dark matter is made of PBHs. We thank Yacine Ali-Haïmoud, Simeon Bird, Marc Kamionkowski and especially Ilias Cholis for discussions. This work was supported by NSF Grant No. 0244990, NASA NNX15AB18G and the Simons Foundation. 99natexlab#1#1bibnamefont#1#1bibfnamefont#1#1citenamefont#1#1url<#>1urlprefixURLAde:2015xuaP. A. R. Ade et al. [Planck Collaboration],arXiv:1502.01589.Freese:2017idyK. Freese,arXiv:1701.01840. Jungman:1995dfG. Jungman, M. Kamionkowski and K. Griest,Phys. Rept.267, 195 (1996)[hep-ph/9506380]. Essig:2012yxR. Essig, A. Manalaysay, J. Mardon, P. Sorensen and T. Volansky,PRL109, 021301 (2012)[arXiv:1206.2644].Aprile:2012nqE. Aprile et al. [XENON100 Collaboration],Phys. Rev. Lett.109, 181301 (2012)[arXiv:1207.5988 ].Akerib:2015rjgD. S. Akerib et al. [LUX Collaboration],Phys. Rev. Lett.116, no. 16, 161301 (2016) [arXiv:1512.03506 ]. Carr:1974nxB. J. Carr and S. W. Hawking,MNRAS168, 399 (1974). Meszaros:1974tbP. Meszaros,Astron. Astrophys.37, 225 (1974).Chapline:1975G. F. Chapline,Nature, 253, 251 (1975)Carr:1975qjB. J. Carr,Astrophys. J.201, 1 (1975). Clesse:2015wea S. Clesse and J. Garcia-Bellido,Phys. Rev. D 92, no. 2, 023524 (2015) [arXiv:1501.07565].Carr:2016drxB. Carr, F. Kuhnel and M. Sandstad,Phys. Rev. D 94, no. 8, 083504 (2016)[arXiv:1607.06077]. Kuhnel:2017pwqF. Kühnel and K. Freese,Phys. Rev. D 95, no. 8, 083508 (2017)[arXiv:1701.07223]. Carr:2017jszB. Carr, M. Raidal, T. Tenkanen, V. Vaskonen and H. Veermäe,arXiv:1705.05567 [astro-ph.CO]. Abbott:2016blzB. P. Abbott et al. [LIGOand Virgo Collaborations],Phys. Rev. Lett.116, 061102 (2016)[arXiv:1602.03837]. Bird:2016dcvS. Bird, I. Cholis, J. B. Muñoz, Y. Ali-Haïmoud, M. Kamionkowski, E. D. Kovetz, A. Raccanelli and A. G. Riess,PRL116, 201301 (2016)[arXiv:1603.00464]. TheLIGOScientific:2016peaB. P. Abbott et al. [LIGOand Virgo Collaborations],Phys. Rev. X 6, no. 4, 041015 (2016)[arXiv:1606.04856].Gaskins:2016chaJ. M. Gaskins,Contemp. Phys.57, no. 4, 496 (2016)[arXiv:1604.00014]. Alcock:1996yvC. Alcock et al. [MACHO Collaboration],Astrophys. J.486, 697 (1997)[astro-ph/9606165]. Tisserand:2006zxP. Tisserand et al. [EROS-2 Collaboration],Astron. Astrophys.469, 387 (2007)[astro-ph/0607207].Mediavilla:2017bokE. Mediavilla, J. Jimenez-Vicente, J. A. Muñoz, H. Vives-Arias and J. Calderon-Infante,Astrophys. J.836, no. 2, L18 (2017)[arXiv:1702.00947]. Ricotti:2007auM. Ricotti, J. P. Ostriker and K. J. Mack,Astrophys. J.680, 829 (2008)[arXiv:0709.0524 [astro-ph]].Ali-Haimoud:2016mbvY. Ali-Haïmoud and M. Kamionkowski,Phys. Rev. D 95, no. 4, 043534 (2017)[arXiv:1612.05644].Blum:2016cjsD. Aloni, K. Blum and R. Flauger,JCAP 1705, no. 05, 017 (2017)[arXiv:1612.06811].Brandt:2016acoT. D. Brandt,ApJ. 824, L31 (2016)[arXiv:1605.03665]. Koushiappas:2017chwS. M. Koushiappas and A. Loeb,arXiv:1704.01668. Inoue:2017csrY. Inoue and A. Kusenko,arXiv:1705.00791.Munoz:2016tmgJ. B. Muñoz, E. D. Kovetz, L. Dai and M. Kamionkowski,PRL117, no. 9, 091301 (2016)[arXiv:1605.00008]. Schutz:2016khrK. Schutz and A. Liu,Phys. Rev. D 95, no. 2, 023002 (2017)[arXiv:1610.04234]. Abbott:2005pfB. Abbott et al. [LIGO Scientific Collaboration],Phys. Rev. D 72, 082002 (2005)[gr-qc/0505042]. Cholis:2016kqiI. Cholis, E. D. Kovetz, Y. Ali-Haïmoud, S. Bird, M. Kamionkowski, J. B. 
Muñoz and A. Raccanelli,Phys. Rev. D 94, no. 8, 084013 (2016)[arXiv:1606.07437]. Chiba:2017rvsT. Chiba and S. Yokoyama,arXiv:1704.06573 [gr-qc].Rodriguez:2016vmxC. L. Rodriguez, M. Zevin, C. Pankow, V. Kalogera and F. A. Rasio,Astrophys. J.832, no. 1, L2 (2016)[arXiv:1609.05916]. Talbot:2017yurC. Talbot and E. Thrane,arXiv:1704.08370.Kovetz:2016kpiE. D. Kovetz, I. Cholis, P. C. Breysse and M. Kamionkowski,Phys. Rev. D 95, 103010 (2017)[arXiv:1611.01157].Nakamura:1997smT. Nakamura, M. Sasaki, T. Tanaka and K. S. Thorne,Astrophys. J.487, L139 (1997)[astro-ph/9708060]. Sasaki:2016jopM. Sasaki, T. Suyama, T. Tanaka and S. Yokoyama,Phys. Rev. Lett.117,061101 (2016)[arXiv:1603.08338]. Ali-Haimoud:2017Y. Ali-Haïmoud, E. D. Kovetzand M. Kamionkowski, arXiv:arXiv:1709.06576.Hayasaki:2009ugK. Hayasaki, K. Takahashi, Y. Sendouda and S. Nagataki,PASJ68, 66 (2016)[arXiv:0909.1738]. NishikawaH. Nishikawa, M. Kamionkowski, E. D. Kovetz and J. Silk,arXiv:1708.08449.Gondolo:1999efP. Gondolo and J. Silk,Phys. Rev. Lett.83, 1719 (1999)[astro-ph/9906391]. TheLIGOScientific:2016httB. P. Abbott et al. [LIGO and Virgo Collaborations],Astrophys. J.818, no. 2, L22 (2016)[arXiv:1602.03846]. [Salpeter(1955)]Salpeter:1955it authorE. E. Salpeter, journalAstrophys. J. volume121, pages161 (year1955).[Kroupa(2001)]Kroupa:2000iv authorP. Kroupa, journalMNRAS volume322, pages231 (year2001), [astro-ph/0009005].[Bailyn et al.(1998)Bailyn, Jain, Coppi, and Orosz]Bailyn:1997xt authorC. D. Bailyn, authorR. K. Jain, authorP. Coppi, and authorJ. A. Orosz, journalAstrophys. J. volume499, pages367 (year1998), [astro-ph/9708032].[Ozel et al.(2010)Ozel, Psaltis, Narayan, and McClintock]Ozel:2010su authorF. Ozel, authorD. Psaltis, authorR. Narayan, and authorJ. E. McClintock, journalAstrophys. J. volume725, pages1918 (year2010), [arXiv:1006.2834].[Farr et al.(2011)Farr, Sravan, Cantrell, Kreidberg, Bailyn, Mandel, and Kalogera]Farr:2010tu authorW. M. Farr, authorN. Sravan, authorA. Cantrell, authorL. Kreidberg, authorC. D. Bailyn, authorI. Mandel, and authorV. Kalogera, journalAstrophys. J. volume741, pages103 (year2011), [arXiv:1011.1459].[Belczynski et al.(2012)Belczynski, Wiktorowicz, Fryer, Holz, and Kalogera]Belczynski:2011bn authorK. Belczynski, authorG. Wiktorowicz, authorC. Fryer, authorD. Holz, and authorV. Kalogera, journalAstrophys. J. volume757, pages91 (year2012), [arXiv:1110.1635].[Fryer et al.(2012)Fryer, Belczynski, Wiktorowicz, Dominik, Kalogera, and Holz]Fryer:2011cx authorC. L. Fryer, authorK. Belczynski, authorG. Wiktorowicz, authorM. Dominik, authorV. Kalogera, and authorD. E. Holz, journalAstrophys. J. volume749, pages91 (year2012), [arXiv:1110.1726].[Kochanek(2014)]Kochanek:2013yca authorC. S. Kochanek, journalAstrophys. J. volume785, pages28 (year2014), [arXiv:1308.0013].[Abadie et al.(2010)]Abadie:2010cf authorJ. Abadie et al. (collaborationVIRGO, LIGO Scientific), journalClass. Quant. Grav. volume27, pages173001 (year2010), [arXiv:1003.2480].[Ajith(2011)]Ajith:2011ec authorP. Ajith, journalPRD volume84, pages084037 (year2011), [arXiv:1107.1267].[Shoemaker(2010)]Shoemaker2010 authorD. Shoemaker, journalhttps://tinyurl.com/ycolfa2b(year2010).Evans:2016mbwB. P. Abbott et al. [LIGO Scientific Collaboration],Class. Quant. Grav.34, 044001 (2017)[arXiv:1607.08697].ET Einstein Telescope, design at: http://www.et-gw.eu/Seto:2001qfN. Seto, S. Kawamura and T. Nakamura,Phys. Rev. Lett.87, 221103 (2001) [astro-ph/0108011]. [Amaro-Seoane et al.(2017)]2017arXiv170200786A P. Amaro-Seoane, H. Audley, S. 
Babak et al., arXiv:1702.00786. Koushiappas:2017kqm S. M. Koushiappas and A. Loeb, arXiv:1708.07380.
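A closing note on reproducing the forecast: because N_i^ PBH scales as f_ PBH^53/21 at fixed mass while the background counts are independent of f_ PBH, the signal-to-noise condition in Eq. (<ref>) can be inverted for f_ PBH in closed form. A schematic sketch (the per-bin counts are placeholders to be produced by the pipeline described in the text):

    import numpy as np

    def f_pbh_limit(n_pbh_at_f1, n_bg, n_sigma=3.0):
        """Smallest f_PBH detectable at n_sigma significance.

        n_pbh_at_f1: per-bin PBH counts computed with f_PBH = 1;
        n_bg: per-bin background counts (Poisson errors sqrt(n_bg)).
        Uses N_i^PBH(f) = f**(53/21) * N_i^PBH(1), so the S/N scales
        as f**(53/21) and the threshold condition inverts directly."""
        n_pbh_at_f1 = np.asarray(n_pbh_at_f1, dtype=float)
        n_bg = np.asarray(n_bg, dtype=float)
        snr_at_f1 = np.sqrt(np.sum(n_pbh_at_f1 ** 2 / n_bg))
        return (n_sigma / snr_at_f1) ** (21.0 / 53.0)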
http://arxiv.org/abs/1705.09182v2
{ "authors": [ "Ely D. Kovetz" ], "categories": [ "astro-ph.CO", "hep-th" ], "primary_category": "astro-ph.CO", "published": "20170525135523", "title": "Probing Primordial-Black-Hole Dark Matter with Gravitational Waves" }
Near-Optimal Belief Space Planning via T-LQG^*
Mohammadhussein Rafieisakhaei^1, Suman Chakravorty^2 and P. R. Kumar^1
^*This material is based upon work partially supported by NSF under Contract Nos. CNS-1646449 and Science & Technology Center Grant CCF-0939370, the U.S. Army Research Office under Contract No. W911NF-15-1-0279, and NPRP grant NPRP 8-1531-2-651 from the Qatar National Research Fund, a member of Qatar Foundation. ^1M. Rafieisakhaei and P. R. Kumar are with the Department of Electrical and Computer Engineering, and ^2S. Chakravorty is with the Department of Aerospace Engineering, Texas A&M University, College Station, Texas, 77840 USA. {mrafieis, schakrav, [email protected]}
December 30, 2023
==================================================================

We consider the problem of planning under observation and motion uncertainty for nonlinear robotic systems. Determining the optimal solution to this problem, generally formulated as a Partially Observed Markov Decision Process (POMDP), is computationally intractable. We propose a Trajectory-optimized Linear Quadratic Gaussian (T-LQG) approach that leads to quantifiably near-optimal solutions for the POMDP problem. We provide a novel “separation principle” for the design of an optimal nominal open-loop trajectory followed by an optimal feedback control law, which yields a near-optimal feedback control policy for belief space planning problems while requiring only a polynomial number of calculations of minimum order.

§ INTRODUCTION

Planning for systems with observation and motion uncertainty is generally formulated in the framework of a Partially Observed Markov Decision Process (POMDP), the general solution of which is provided by the Hamilton-Jacobi-Bellman equations <cit.>. Attempts to utilize this framework run into the intractability of the computations, referred to as the curse of dimensionality. In this paper, we provide a structure under which the stochastic optimal control problem can be solved quantifiably near-optimally for moderate levels of noise. We utilize the Wentzell-Freidlin theory of large deviations for analyzing the asymptotics under small noise <cit.>. In particular, we consider general nonlinear process and measurement models with additive white noise, and compensate the system with feedback. We show that the first-order stochastic error of the stochastic cost function for the feedback-compensated system is distributed according to a Gaussian distribution with zero expected value. As a result of the independence of the first-order expected error from the feedback law, the optimal zeroth-order (nominal open-loop) control sequence can be designed separately from the optimal closed-loop feedback law; a result which we term a “separation of the open-loop and closed-loop designs”. This leads to a novel design approach for partially-observed nonlinear stochastic systems whose characteristics we quantify.
We also provide a tractable example of a robotic motion and path planning design based on this theory. Other than the HJB equations, this is the only structure to-date that provides quantifiably near-optimal solutions for a relatively general stochastic optimal control problem. In addition, unlike the HJB, this approach does not run into the problem of curse of dimensionality, as the entire computation is of the order of O(Kn^3), where K is the planning horizon, and n is the state dimension. Lastly, it is observed in simulations that the design is valid for a moderate-range of noise level due to the power of feedback compensation. § GENERAL PROBLEM The general belief space planning problem is formulated as a stochastic control problem in the space of feedback policies. In this section, we define the basic elements of the problem, including system equations and belief dynamics.SDE models: We consider continuous-time Stochastic Differential Equation (SDE) models of the process and measurement as follows: d𝐱_t =𝐟(𝐱_t,𝐮_t)dt+ϵσ(t)dω_t, d𝐳_t =𝐡(𝐱_t)dt+ϵ dν_t, where {ω_t, ν_t, t≥ 0} are two independent standard Wiener processes, 𝐱∈𝕏⊂ℝ^n_x, 𝐮∈𝕌⊂ℝ^n_u, and 𝐳∈ℤ⊂ℝ^n_z, denote the state, control and observation vectors, respectively, and 𝐟:𝕏×𝕌→𝕏, 𝐡:𝕏→ℤ, σ,𝐚:ℝ→ℝ^n_x× n_x, 𝐟=(f_i)_0≤ i≤ n_x, 𝐡=(h_i)_0≤ i≤ n_z, and 𝐚:=σσ^T=(a_i,j)_0≤ i,j≤ n_x. We assume that the drift and diffusion coefficients, f_i, h_i, a_i,j, are bounded and uniformly Lipschitz continuous functions, and the diffusion matrix is uniformly positive-definite. Lastly, 𝐱_0∼𝒩(𝐱̅_0, ϵ^2Σ_𝐱_0), ϵ>0.Belief: The conditional distribution of the state given the past observations, controls and the initial distribution is termed as “belief”. In the sequel, we denote the Gaussian belief by 𝐛_t=(𝐱̂^T_t, (𝐏_t)^T)^T∈𝔹, a vector of the mean and covariance of the estimation at time t. Given an initial belief state 𝐛_0, the stochastic optimal control problem is: min_π 𝔼[∑_t=0^K-1 c_t^π(𝐛_t,𝐮_t)+c_K^π(𝐛_K)] s.t. 𝐛_t+1 =τ(𝐛_t,𝐮_t,𝐳_t+1),where the optimization is over Markov policies, Π, and:* J^π:Π→ℝ is the cost function given the policy π∈Π, and J^π:=∑_t=0^K-1c_t^π(𝐛_t,𝐮_t)+c_K^π(𝐛_K);* π:={π_0, ⋯, π_t}, π_t :𝔹→𝕌 and 𝐮_t=π_t(𝐛_t);* c^π_t(·,·):𝔹×𝕌→ℝ is the one-step cost function;* c_K^π(·):𝔹→ℝdenotes the terminal cost; and* K>0 is planning horizon, and τ defines belief evolution. § METHOD AND MAIN RESULTSFeedback law: We assume a Lipschitz continuous, bounded and smooth feedback law:𝐮_t=π_t(𝐱̂_t). Nominal ODEs: Nominal (unperturbed) trajectories of the system can be obtained using a nominal control sequence (which is calculated using the separation result of this paper). The following Ordinary Differential Equations (ODEs) describe the nominal trajectories:ẋ^p_t=𝐟(𝐱^p_t,𝐮^p_t), ż^p_t=𝐡(𝐱^p_t), 𝐮^p_t=π_t(𝐱̂^p_t),where 𝐱̂^p_0:=𝐱^p_0:=𝔼[𝐛_0], and 𝐱^p_t is the mean of nominal belief.Linearized equations: We linearize the SDEs of (<ref>) around nominal trajectories. Thus, if ||𝐱̂_t-𝐱̂^p_t||≤δ and ||𝐱_t-𝐱^p_t||≤δ, 𝐮_t =𝐮^p_t-𝐋_t(𝐱̂_t-𝐱̂^p_t)+o(δ),ẋ_t =ẋ^p_t+𝐀_t(𝐱_t -𝐱^p_t)+𝐁_t(𝐮_t-𝐮^p_t) +ϵ𝐆_tdω_t/dt+o(δ) =ẋ^p_t+𝐀_t(𝐱_t -𝐱^p_t)-𝐁_t𝐋_t(𝐱̂_t-𝐱̂^p_t) +ϵ𝐆_tdω_t/dt+o(δ), ż_t =ẋ^p_t+𝐇_t(𝐱_t -𝐱^p_t)+ϵdν_t/dt+o(δ). with Jacobians (the superscript p was dropped for simplicity):𝐀^p_t: =∇_𝐱𝐟(𝐱,𝐮)|_𝐱^p_t, 𝐮^p_t, 𝐁^p_t:=∇_𝐮𝐟(𝐱,𝐮)|_𝐱^p_t, 𝐮^p_t, 𝐆_t:=σ(t),𝐋^p_t: =-∇_𝐱π_t(𝐱)|_𝐱̂^p_t,   𝐇^p_t:=∇_𝐱𝐡(𝐱)|_𝐱^p_t. 
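In practice, the Jacobians above are often computed numerically when 𝐟 and 𝐡 are available only as black-box simulators; a minimal central-difference sketch (the step size and interfaces are illustrative assumptions of ours):

    import numpy as np

    def jacobian(fun, x0, eps=1e-6):
        """Central-difference Jacobian of fun at x0 (fun: R^n -> R^m)."""
        x0 = np.asarray(x0, dtype=float)
        f0 = np.asarray(fun(x0), dtype=float)
        J = np.zeros((f0.size, x0.size))
        for i in range(x0.size):
            dx = np.zeros_like(x0)
            dx[i] = eps
            J[:, i] = (np.asarray(fun(x0 + dx)) - np.asarray(fun(x0 - dx))) / (2 * eps)
        return J

    # along the nominal trajectory (x_p, u_p):
    # A_t = jacobian(lambda x: f(x, u_p), x_p)
    # B_t = jacobian(lambda u: f(x_p, u), u_p)
    # H_t = jacobian(h, x_p)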
Kalman-Bucy Filter (KBF): The linearized system's estimates can be obtained using the KBF equations: ẋ̂̇_t=ẋ^p_t+𝐀_t(𝐱̂_t -𝐱^p_t)+𝐁_t(𝐮_t-𝐮^p_t)+𝐊_t(ż_t-ż^p_t-𝐇_t(𝐱̂_t -𝐱^p_t)),Ṗ_t=𝐀_t𝐏_t+𝐏_t𝐀_t^T+ϵ^2𝐆_tΣ_ω𝐆^T_t-ϵ^2𝐊_tΣ_ν𝐊^T_t, 𝐊_t=ϵ^-2𝐏_t𝐇_t^TΣ_ν^-1. with 𝐏_0:=ϵ^2Σ_𝐱_0 and 𝐱̂^p_0=𝐱^p_0, which implies 𝐱̂^p_t≡𝐱^p_t, t≥0. Stochastic differential equation governing the evolution of the augmented state: Since the evolution of the covariance is deterministic, we define 𝐲_t:=(𝐱^T_t, 𝐱̂^T_t)^T (also denoted by 𝐲^ϵ_t), which is the concatenation of the two vectors of state and mean of the belief, and define ζ_t:=(ω^T_t,ν^T_t)^T. Then, the evolution of this augmented state random variable is:d𝐲_t=𝐠(t, 𝐲_t)dt+ϵσ^𝐲(t)dζ_t,with 𝐲_0=(𝐱_0^T, (𝐱^p_0)^T)^T, where functions 𝐠:ℝ×ℝ^2n_x→ℝ^n_x and σ^𝐲:ℝ→ℝ^2n_x×2n_x are defined (with some abuse of notation) as:𝐠(t, 𝐲_t):=[𝐟(𝐱_t,π_t(𝐱̂_t)); (𝐟(𝐱^p_t,π_t(𝐱̂^p_t)) +𝐀_t(𝐱̂_t -𝐱^p_t) +𝐁_t(π_t(𝐱̂_t) -π_t(𝐱̂^p_t) );+ 𝐊_t(𝐡(𝐱_t)-𝐡(𝐱^p_t)-𝐇_t(𝐱̂_t- 𝐱^p_t))) ],σ^𝐲(t):=[ σ(t)0;0𝐊_t ].Let {𝐲^p_t, t≥ 0}, {𝐲^n_t, t≥ 0 }, and ẏ^p_t=𝐠(t, 𝐲^p_t), 𝐲^p_0 = ((𝐱^p_0)^T, (𝐱^p_0)^T)^T,ẏ^n_t=𝐠(t, 𝐲^n_t), 𝐲^n_0 = (𝐱_0^T, 𝐱_0^T)^T. Also, let K^𝐟 and K^π_t be the Lipschitz constants of 𝐟 and π_t,c^δ:=δ/2exp(-∫_0^KK^𝐟(1+K^π_r)dr), P_δ, ϵ:=∫_||𝐱||≤ c^δexp(ϵ^2𝐱^TΣ_𝐱_0𝐱)d𝐱,and δ>0. Then,P{||(𝐱^n_K-𝐱^p_K)||≤δ/2}≥ P_δ, ϵ.Linearization of the SDE: Given 𝐅^𝐠_t=∇_𝐲𝐠(t,𝐲)|_t, 𝐲^p_t, we linearize the SDE (<ref>) around ODE (<ref>):d𝐲_t=𝐠(t,𝐲^p_t)dt+𝐅^g_t(𝐲_t-𝐲^p_t)dt+ϵσ^𝐲(t) dw_t+o(||𝐲_t-𝐲^p_t||dt).If ||𝐲_t-𝐲^p_t||≤ 2δ (whose asymptotics are calculated using the Wentzell-Freidlin theory, next and Lemma <ref>),d𝐲_t=𝐠(t,𝐲^p_t)dt+𝐅^g_t(𝐲_t-𝐲^p_t)dt+ϵσ^𝐲(t) dw_t+o(δ dt). Action functional <cit.>: For [T_1, T_2]⊆[0, K], the normalized action functional for the family of ϵ-dependent stochastic processes of (<ref>) is defined as:S_T_1,T_2(ϕ):=1/2ϵ^2∫_T_1^T_2 L(s, ϕ_s, ϕ̇_s)ds,for absolutely continuous ϕ, and is set to +∞ for other ϕ∈ℂ_0K(ℝ^n_x) (the space of continuous functions over [0, K]), where L:ℝ×ℝ^n_x×ℝ^n_x→ℝ is the Legendre transform of the cumulant of stochastic process of (<ref>) (assuming 𝐊_t𝐊_t^T≻0): L(t, 𝐱, β)=12(β-𝐛(t,𝐱))^T𝐚(t, 𝐱)^-1(β-𝐛(t,𝐱)).Let: * 𝔻 be a domain in ℝ^2n_x, and denote its closure by cl(𝔻);* ∂𝔻 denote the boundary of 𝔻; * ℍ_𝔻(t, 𝐲^n_0)={ϕ∈ℂ_0K(ℝ^n_x):ϕ_0=𝐲^n_0,ϕ_t∈𝔻∪∂𝔻}. Assume ∂𝔻 = ∂cl(𝔻). Then, we have the following:lim_ϵ→ 0ϵ^2ln P_𝐲^n_0{𝐲^ϵ_t∈𝔻} =-inf_ϕ∈ℍ_𝔻(t, 𝐲^n_0)S_0t(ϕ), Let: * 𝔻_t=cl(𝔹^c_δ/2(𝐲^n_t)), the closure of the complement of a ball with radius δ/2>0 around the point 𝐲^n_t; and* τ^ϵ=Min{t:𝐲^ϵ_t∈𝔻_t}.Then, lim_ϵ→ 0ϵ^2 ln P_𝐲^n_0{τ^ϵ≤ t}= -inf_{ϕ: ϕ_0 = 𝐲^n_0, ||ϕ_t - 𝐲_t^n|| > δ/2 } S_0t(ϕ). Proofs of Theorems <ref> and <ref> can be found in <cit.>.Nominal belief: Starting from 𝐛^p_0= 𝐛_0, the nominal belief evolution is given by ḃ^p_t=τ(𝐛^p_t,𝐮^p_t,ż^p_t). Given equations (<ref>), 𝐛^p_t=((𝐱̂^p_t)^T, (𝐏^p_t)^T)^T = ((𝐱^p_t)^T, (𝐏_t)^T)^T, and linearizing τ only involves linearization of mean evolution:ḃ_t=ḃ^p_t+𝐓^𝐛_t(𝐛_t-𝐛^p_t)+𝐓^𝐮_t(𝐮_t-𝐮^p_t)+𝐓^𝐳_t(ż_t-ż^p_t)+o(δ),with the Jacobians defined as usual.Linearization of belief and cost: To address problem (<ref>), we discretize the equations (<ref>) in time with the discretization interval of dt ≡ 1. 
Let J^p:=∑_t=0^K-1c_t(𝐛^p_t,𝐮^p_t)+c_K(𝐛^p_K), and linearize the cost function J^π around the nominal trajectories:J^π=J^p+J̃_1+o(δ),with J̃_1= ∑_t=0^K-1 (𝐂^𝐛_t(𝐛_t-𝐛^p_t)+ 𝐂^𝐮_t(𝐮_t-𝐮^p_t))+ 𝐂^𝐛_K(𝐛_K-𝐛^p_K), where the Jacobians are defined as usual.If ||𝐱_K-𝐱^n_K||≤δ/2 and ||𝐱^n_K-𝐱^p_K||≤δ/2, using the triangle inequality (note: as ϵ↓ 0, using Theorems <ref>, <ref>, and Lemma <ref>, the probability of the first and second events tend exponentially to one, respectively; similarly for ||𝐱̂_K-𝐱^p_K||):||𝐱_K-𝐱^p_K||≤||𝐱_K-𝐱^n_K||+||𝐱^n_K-𝐱^p_K|| ≤δ,which means that all the linearizations are valid with a probability that tends to one as ϵ↓0. For a time-discrete system, under a first-order approximation for the small noise paradigm, the stochastic cost function is dominated by the nominal part of the cost function, and the expected first-order error is zero:𝔼[J̃_1]=0.Moreover, if the initial, process, and observation noises at each time are distributed according to zero mean Gaussian distributions, then J̃_1 also has a zero mean Gaussian distribution. Separation of the Open-Loop and Closed-Loop Designs Under Small Noise: Based on Theorem <ref>, under the small noise paradigm, as ϵ↓0, the design of the feedback law can be conducted separately from the design of the open loop optimized trajectory. Furthermore, this result holds with a probability that exponentially tends to one as ϵ↓0. Our separation principle combined with the usual separation principle provides a design structure where the optimal designs of the control law, nominal trajectory and estimator can be separated from each other. Thus, we couple the latter two, and design a nominal trajectory that aims for the best nominal estimation performance, which coincides with the Trajectory-optimized Linear Quadratic Gaussian (T-LQG) design <cit.>.Given an initial belief 𝐛_0, a goal region of a ball with radius r_g around a goal state 𝐱_g∈𝕏, horizon K>0, and 𝐖^u_t≽ 0, solve: min_𝐮^p_0:K-1∑_t=1^K[ tr( 𝐏^+_𝐛^p_t)+(𝐮^p_t-1)^T𝐖^u_t𝐮^p_t-1]s.t.  𝐏^-_t =𝐀_t-1𝐏^+_t-1𝐀_t-1^T+ϵ^2𝐆_t-1Σ_ω𝐆_t-1^T,𝐒_t =𝐇_t𝐏^-_t𝐇_t^T+ϵ^2Σ_ν, 𝐏^+_t =(𝐈-𝐏^-_t𝐇_t^T𝐒_t^-1𝐇_t)𝐏^-_t,𝐏^+_0 =ϵ^2Σ_𝐱_0,𝐱^p_0 = 𝔼[𝐛_0],𝐱^p_t+1 =𝐟(𝐱^p_t, 𝐮^p_t),  0≤ t≤ K-1,||𝐱^p_K-𝐱_g||_2 <r_g,||𝐮^p_t||_2 ≤ r_u,  1≤ t≤ K. Control policy: After linearizing the equations around the optimized nominal trajectory, the resulting control policy is a linear feedback policy <cit.>, 𝐮_t=𝐮^p_t-𝐋^p_t(𝐱̂_t-𝐱^p_t), where the feedback gain 𝐋^p_t is:𝐋^p_t = (𝐖^u_t+(𝐁^p_t)^T𝐏^f_t+1𝐁^p_t)^-1(𝐁^p_t)^T𝐏^f_t+1𝐀^p_t,and the matrix 𝐏^f_t is the result of backward iteration of the dynamic Riccati equation𝐏^f_t-1 = (𝐀^p_t)^T𝐏^f_t𝐀^p_t-(𝐀^p_t)^T𝐏^f_t𝐁^p_t(𝐖^u_t+(𝐁^p_t)^T𝐏^f_t𝐁^p_t)^-1(𝐁^p_t)^T𝐏^f_t𝐀^p_t+𝐖^x_t,which is solvable with a terminal condition 𝐏^f_K=𝐖^x_t≽ 0.§ SIMULATION RESULTSWe consider a non-holonomic car-like robot in an environment with road-blocks equipped with landmark-based range and bearing measurement model. We use the MATLABoptimizer with no initial trajectory to obtain the solution of problem (<ref>). For collision-avoidance, we utilize the Obstacle Barrier Function (OBF) method of <cit.>. Figures <ref>, <ref>, and <ref> show the optimized planned, execution, and estimate trajectories, respectively. § CONCLUSIONWe considered the general problem of controlling a stochastic nonlinear system with process and measurement uncertainties. We used the Wentzell-Freidlin theory of large deviations and provided a novel result of a “separation of the open-loop and closed-loop designs”. 
This result, combined with the usual separation principle (of the estimator and controller designs), leads to the Trajectory-optimized Linear Quadratic Gaussian (T-LQG) design approach, which is asymptotically optimal under small noise and near-optimal for moderate noise levels, while requiring only a polynomial number of calculations of minimum order.
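As a closing implementation note (ours, not part of the original derivation): the backward Riccati recursion defining the feedback gains 𝐋^p_t is only a few lines in code. The sketch below takes the Jacobians A_t, B_t along the optimized nominal trajectory and the weights 𝐖^x, 𝐖^u, and returns the gains of the policy 𝐮_t=𝐮^p_t-𝐋^p_t(𝐱̂_t-𝐱^p_t):

    import numpy as np

    def tvlqr_gains(A, B, Wx, Wu):
        """Backward Riccati pass; A, B are length-K lists of Jacobians along
        the nominal trajectory, Wx/Wu the state/control cost weights.
        Index convention: P holds P^f_{t+1} when computing L_t."""
        K = len(A)
        P = Wx  # terminal condition P^f_K = W^x
        L = [None] * K
        for t in reversed(range(K)):
            BtP = B[t].T @ P
            # L_t = (W^u + B^T P B)^{-1} B^T P A
            L[t] = np.linalg.solve(Wu + BtP @ B[t], BtP @ A[t])
            # Riccati update: P <- A^T P A - A^T P B L_t + W^x
            P = A[t].T @ P @ A[t] - A[t].T @ P @ B[t] @ L[t] + Wx
        return L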
http://arxiv.org/abs/1705.09415v2
{ "authors": [ "Mohammadhussein Rafieisakhaei", "Suman Chakravorty", "P. R. Kumar" ], "categories": [ "cs.RO", "cs.SY" ], "primary_category": "cs.RO", "published": "20170526023144", "title": "Near-Optimal Belief Space Planning via T-LQG" }
Automatic Response Assessment in Regions of Language Cortex in Epilepsy Patients Using ECoG-based Functional Mapping and Machine Learning

Harish RaviPrakash, Center for Research in Computer Vision, College of Engineering and Computer Science, University of Central Florida, Orlando, Florida 32826
Milena Korostenskaja, Functional Brain Mapping and Brain Computer Interface Lab, Florida Hospital for Children, Orlando, Florida 32803
Ki Lee, Functional Brain Mapping and Brain Computer Interface Lab, Florida Hospital for Children, Orlando, Florida 32803
James Baumgartner, Functional Brain Mapping and Brain Computer Interface Lab, Florida Hospital for Children, Orlando, Florida 32803
Eduardo Castillo, MEG Lab, Florida Hospital for Children, Orlando, Florida 32803
Ulas Bagci, Center for Research in Computer Vision, College of Engineering and Computer Science, University of Central Florida, Orlando, Florida 32826
December 30, 2023
==================================================================

Brain regions responsible for language and cognitive functions in epilepsy patients should be accurately localized prior to surgery. Electrocorticography (ECoG)-based Real-Time Functional Mapping (RTFM) has been shown to be a safer alternative to electrical cortical stimulation mapping (ESM), which is currently the clinical gold standard. Conventional methods for analyzing RTFM signals are based on statistical comparison of signal power at certain frequency bands. Compared to the gold standard (ESM), they have limited accuracy when assessing channel responses. In this study, we address the accuracy limitation of current RTFM signal estimation methods by analyzing the full frequency spectrum of the signal and replacing signal power estimation methods with machine learning algorithms, specifically random forest (RF), as a proof of concept. We train the RF with the power spectral density of the time-series RTFM signal in a supervised learning framework where ground-truth labels are obtained from the ESM. Results obtained from RTFM of six adult patients in a strictly controlled experimental setup reveal a state-of-the-art detection accuracy of ≈78% for the language comprehension task, an improvement of 23% over the conventional RTFM estimation method. To the best of our knowledge, this is the first study exploring the use of machine learning approaches for determining RTFM signal characteristics, and using the whole frequency band for better region localization. Our results demonstrate the feasibility of machine-learning-based RTFM signal analysis over the full spectrum becoming part of clinical routine in the near future.

Epilepsy, Machine Learning, ECoG, RTFM, Random Forest

§ INTRODUCTION

Epilepsy is a neurological disorder characterized by unpredictable seizures.
There are over 65 million people around the world who have epilepsy, with an incidence rate of 150,000 new cases every year in the USA alone <cit.>. Drug Resistant Epilepsy (DRE) (or intractable epilepsy) is defined as the condition in which the seizures cannot be controlled by medications; about 25% of all epileptic cases are DRE <cit.>. The only viable option in this case is to surgically remove the affected tissue. Epilepsy surgery is a curative option for pharmacoresistant epilepsy, but brain regions associated with language and cognitive functions can be affected by surgery. To do this accurately, the unaffected regions of the brain must be identified (called "localized"). Motor function and language comprehension are examples of functionally significant regions that must be localized. Accurate localization helps to prevent post-surgical loss of functionality. §.§.§ Clinical standard and the state-of-the-art method for RTFM evaluation The gold standard for task localization, Electro-Cortical Stimulation Mapping (ESM), utilizes electrodes that are placed on the surface of the brain by means of craniotomy. During ESM, a current is delivered for a short duration to stimulate the region of interest. The behavioral responses corresponding to changes in function are simultaneously recorded. The inherent drawback of this approach is that the stimulation can cause the neurons in that region to discharge uncontrollably, i.e., cause a seizure. Recently, ElectroCorticography (ECoG)-based real-time functional mapping (RTFM) <cit.> has been proposed as a promising alternative to ESM. The typical RTFM-based task localization and experimental setup is illustrated in Figure <ref>. Similar to ESM, subdural grids on the cortical surface are utilized for signal collection; however, no external stimulus is provided and only the physiological changes corresponding to the processed stimuli are recorded via the electrodes. Hence, no seizure due to stimulation occurs. §.§.§ Research gap The results of RTFM are not always concordant with the gold standard, due to the difficulty in understanding the brain signals without stimulation and the lack of sufficient accuracy of the state-of-the-art method, ECoG-based functional mapping <cit.> (ECoG-EM from now on, where EM stands for expectation maximization). There is a need for a method that would improve RTFM signal classification accuracy and make it a strong and safer alternative to the ESM. Current approaches for detecting positive response channels in the eloquent cortex localization task focus on the power of the signal in the α, β and, primarily, the high-γ (70Hz-170Hz) frequency bands <cit.>. In these approaches, a baseline recording of each channel at resting state is used. The power of the signal during the tests is computed using an autoregressive (AR) spectral estimation approach and is then statistically compared to the baseline to calculate the probability that the channel has a response significantly different from its resting-state (baseline) condition. This is repeated every 100 ms for the entire experiment. These approaches do not compare the channels to each other and also do not account for the signal in the frequency range beyond high-γ.§.§.§ Our contributions We present a novel framework for ECoG signal analysis with RF to accurately discriminate channels that respond positively and negatively in the language functional mapping task.
To the best of our knowledge, this is the first work comparing the different (positive and negative) responses directly rather than using a baseline approach. We show the superiority of our approach to the state-of-the-art ECoG-based functional analysis using Expectation Maximization approaches (ECoG-EM), and demonstrate its strong potential to become an alternative to ESM. The rest of the paper is organized as follows: In Sec. <ref> we discuss the ECoG data collection, the pre-processing of the data into the discriminative domain, and the proposed classification approach. In Sec. <ref>, we present our experimental results. In Sec. <ref>, we summarize our findings.§ METHODS §.§ Data Collection and Experimental Setup ECoG represents the electrical activity of the brain recorded directly from the cortical surface. ECoG-based functional mapping allows identification of brain activity correlated with a certain task, e.g., language. The basic setup for ECoG-based functional mapping is shown in Figure <ref>. ECoG signals from the implanted subdural grids are split into two streams: one for continuous clinical seizure monitoring and the other for ECoG-based functional mapping. The tool used to record the incoming ECoG signal was BCI2000 <cit.>. A baseline recording of the cortical activity was first acquired to capture the "resting-state" neuronal activity of the regions. The literature on localization of motor function using ECoG-based functional mapping (such as RTFM) is vast <cit.><cit.>. Unlike the good accuracies obtained in such studies, the localization of the eloquent language cortex has proved to be more challenging <cit.>. The language function of the brain is processed in several regions, primarily Wernicke's area and Broca's area, as demonstrated in Figure <ref>. Wernicke's area is located in the posterior section of the superior temporal gyrus and is responsible for the receptive language task, i.e., language comprehension. Broca's area, on the other hand, is more involved in speech production. There exists an anatomical connection between these two regions, named the arcuate fasciculus, which could induce a response in one region owing to the other's activation. §.§.§ Language comprehension task Following the baseline recording step, paradigms similar to those employed in ESM or functional Magnetic Resonance Imaging (fMRI) are used to record the task-related ECoG signal for functional mapping purposes <cit.>. Figure <ref> shows one such paradigm, mimicking the experimental setup for the language comprehension task. Alternating 30-second blocks of ECoG data during "control" and "active" conditions are recorded continuously at a fixed sampling rate of 1200 Hz. For the language comprehension task, the active condition implies listening to a story, while the control task involves listening to broadband noise <cit.>. Another associated paradigm is the reading comprehension task, where the subject reads sentences from a screen and replies with a "True" or "False" response. The system records information from 128 ECoG channels, as illustrated in Figure <ref>. §.§ Pre-processing As a first step of preparing the data, non-task/control time points in the signal are eliminated. These correspond to the spontaneous activity recording before the 0-min mark in Figure <ref> and any trailing signals at the end of the experiment. The use of the power spectral density (PSD) is proposed in <cit.> as a discriminating feature between the baseline and task signals.
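Before any features are extracted, the trimmed recording must be cut into its alternating 30-second blocks. The following is a minimal sketch of this segmentation step (ours, not part of the original paper); the function and variable names are our own, and whether the first retained block is a control or an active block depends on the paradigm.

    import numpy as np

    FS = 1200        # sampling rate in Hz, as in the recording setup above
    BLOCK_SEC = 30   # length of each alternating control/active block in seconds

    def segment_blocks(ecog, n_blocks=10, first_is_control=True):
        """Split a trimmed ECoG recording (channels x samples) into alternating
        control/active blocks of BLOCK_SEC seconds each."""
        block_len = FS * BLOCK_SEC
        blocks = [ecog[:, i * block_len:(i + 1) * block_len] for i in range(n_blocks)]
        even, odd = np.stack(blocks[0::2]), np.stack(blocks[1::2])
        return (even, odd) if first_is_control else (odd, even)

    # Toy usage: 128 channels, 5 minutes of synthetic data -> 5 blocks per condition.
    recording = np.random.randn(128, FS * 300)
    control, active = segment_blocks(recording)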
In a slightly different manner, we represent the PSD with a number of coefficients extracted from an autoregressive (AR) model. Unlike conventional methods, we simplify the signal representation with PSD coefficients only. Herein, the AR parameters ã[n] are estimated by forward linear prediction coefficients, and then the spectral estimate is calculated as P̃(f) = Tρ̃/|1+∑_n=1^pã[n]e^-i2π fnT|^2, where T is the inverse of the sampling rate (f_s), ρ̃ is the estimated noise variance, and p is the order of the AR process. This approach gives us f_s/2+1 frequency components. The PSD estimates are computed for each block (task/control) of each channel. Later, we use these components as features to determine RTFM characteristics.§.§ Classification Model To differentiate positive response channels (PRC) from negative response channels (NRC), we identify structured signal patterns in signal blocks, which are not readily visible to the human eye. We hypothesize that the features of the active and control tasks are globally similar between PRC and NRC but still include substantial differences. This hypothesis can be visually tested and partially confirmed in Figure <ref>, where the PSD of the active and control blocks of PRC is larger than that of NRC. To test our hypothesis and provide scientific evidence of ECoG signal separation between functionally positive and negative regions, we design a RF classifier <cit.> to model structured local signal patterns for challenging RTFM signal characterization. It has been shown in various areas that RF is an efficient classifier with considerably good accuracy in classification tasks <cit.>. Its superiority to most other classifiers comes from its generalization property. In RF, briefly, each new tree is created and grown by first randomly sub-sampling the data with replacement. An ensemble of such randomized learners is used so that the sub-trees are learned differently from each other. For a feature vector 𝐯=(v_1,v_2,...,v_d)∈ℝ^d, where d represents the feature dimension, RF trains multiple decision trees and the output is determined based on the combined predictions. In each node of the decision trees, there is a weak learner (or split function) with binary output: h(𝐯,θ): ℝ^d ×𝒯→{0,1}, where 𝒯 represents the space of all split parameters. Note that each node is assigned a different split function. RF consists of hierarchically organized decision trees, in which the data arriving at node j is divided into two subsets according to the split parameters θ_j. Overall, RF treats finding the split parameters θ_j as an optimization problem θ_j = argmax_θ∈𝒯 I(𝐯,θ), where I is the objective function (i.e., split function) and 𝐯 represents the PSD coefficients in this particular application. As the tree is grown (Figure <ref>), an information criterion is used to determine the quality of a split. Commonly used metrics are Gini impurity and entropy for information gain. To overcome potential over-fitting, a random sample of features is input to the trees so that the resulting predictions have minimal correlation with each other (i.e., minimum redundancy is achieved). In our experiments, we have used the linear data separation model of RF. In our experiments, we use the full spectrum of the RTFM signal (0-600 Hz) in the frequency domain instead of the restricted γ-band. Moreover, we stack the signal to enhance the frequency-specific features rather than concatenating them. Each channel has 10 blocks (Figure <ref>) and the final channel classification is based on a majority voting (Figure <ref>) on the classified sub-blocks.
For a tested data point (feature vector) 𝐯, the output is computed as a conditional distribution p(c|𝐯), where c represents the categorical labels (positive vs. negative response). The final decision (classification) is made by majority voting over the K trees: p(c|𝐯)=1/K∑_k=1^Kp_k(c|𝐯).§.§.§ Model parameters The number of trees, the number of features, the data size fed to each tree (with or without resampling), and the information metric for data splitting are some of the RF parameters that need to be optimized. To achieve this, the model was repeatedly tested under different combinations of the above parameters. For the total number of trees, an incremental update approach was used, where we increased the total number of trees until the increase in performance was negligible. Similarly, the number of features was set as the square root of the number of input variables. For the choice of splitting function, Gini impurity was used since, for a binary classification problem, both measures yield similar results <cit.>.§ EXPERIMENTS AND RESULTS With IRB approval, ECoG data were recorded from six adult patients with intractable epilepsy. Table <ref> summarizes the patient demographics and the number of channels tested per patient. The ESM results served as the gold standard for separating ECoG channels into two classes: "ESM-positive" and "ESM-negative" electrodes. The number of tested ESM electrodes varies based on the task at hand (the function that can be compromised during the surgery and therefore needs to be localized), the patient's status, possible after-discharges, the location of the grid on the brain surface, the epilepsy focus, and, to a smaller extent, the specialist performing the test. Except for subject 4, all subjects were tested with the language comprehension paradigm shown in Figure <ref>. Subject 4, on the other hand, underwent the reading comprehension test, involving reading sentences presented on the screen and responding to questions as "True" or "False". Since this test also incorporates speech, which would incite a response from the face/tongue sensory motor areas of the brain as well as Broca's area, channels corresponding to these specific regions were not included in our calculations. There were 77 PRCs and 262 NRCs in total. Each data block in a channel was assigned the same label. For a 5-minute-long recording, we had 5 blocks each of the control and active conditions per channel and hence 3390 data samples in total. Due to the large imbalance in the data, 77 NRCs were randomly chosen from the 262. In total, we have 1540 blocks of data. For an unbiased evaluation of the RF-based results, we used 10-fold cross-validation and took the average over 100 iterations.§.§.§ Time-domain analysis First, we tested whether the raw time-domain signal has sufficiently discriminating information. For this analysis, a RF model with 100 trees was used. The resulting classification accuracy was 61.79%, with sensitivity and specificity around 60%. While this is marginally better than a simple coin flip, it is insufficient to encourage the use of ECoG-based functional mapping over ESM.§.§.§ Frequency-domain analysis Each block of the time-domain signal was transformed into the frequency domain using the pre-processing step described in Section <ref> (i.e., PSD coefficients via the AR model). The order of the AR process is set to SamplingRate/10 = 120. The PSD estimate is of length f_s/2+1= 601. We then log-normalized the PSD coefficients to train a RF classifier.
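To make the pipeline concrete, here is a minimal sketch of the feature extraction and classification just described. It is our own illustration, not the authors' code: we assume a Yule-Walker estimate of the AR coefficients (the paper does not specify which AR estimator was used) and scikit-learn's RandomForestClassifier with the parameter choices reported above; `blocks`, `y`, and `X_channel` are placeholder inputs.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    FS, P = 1200, 120  # sampling rate (Hz) and AR order = SamplingRate/10, as above

    def ar_psd(x, p=P, fs=FS):
        """AR(p) PSD: estimate a[1..p] from the Yule-Walker equations, then
        evaluate P(f) = T*rho / |1 + sum_n a[n] e^{-i 2 pi f n T}|^2 at
        f = 0, 1, ..., fs/2 Hz, giving fs/2 + 1 components."""
        x = np.asarray(x, float) - np.mean(x)
        r = np.correlate(x, x, 'full')[len(x) - 1:len(x) + p] / len(x)  # lags 0..p
        R = r[np.abs(np.subtract.outer(np.arange(p), np.arange(p)))]    # Toeplitz matrix
        a = np.linalg.solve(R, -r[1:p + 1])                             # AR coefficients
        rho = r[0] + a @ r[1:p + 1]                                     # noise variance
        T = 1.0 / fs
        f = np.arange(fs // 2 + 1)
        E = np.exp(-2j * np.pi * np.outer(f, np.arange(1, p + 1)) * T)
        return T * rho / np.abs(1 + E @ a) ** 2

    # Features: log PSD per block; labels y come from ESM (1 = PRC, 0 = NRC).
    X = np.vstack([np.log(ar_psd(b)) for b in blocks])
    clf = RandomForestClassifier(n_estimators=200, max_features='sqrt',
                                 criterion='gini')
    clf.fit(X, y)
    # Channel-level decision: majority vote over that channel's classified blocks.
    channel_is_prc = clf.predict(X_channel).mean() > 0.5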
An ensemble of 200 bagged classification trees was trained on 9 folds of the data and tested on the last fold. In order to validate the use of control and active task blocks for channel classification, we first performed block classification on the 1540 blocks. The classification accuracy was found to be 94%, with sensitivity and specificity of ≈ 93%. These results validate the efficacy of the proposed block-based classification strategy.§.§.§ Frequency-band analysis Three different experiments (E1, E2, E3) were performed to understand the contribution of the different frequency bands to the channel classification problem: * Classification using the full signal spectrum * Classification using the α, β, and high-γ sub-bands * Classification using only the high-γ sub-band In these experiments, the blocks were classified and majority voting was applied to classify a channel as PRC/NRC. Figure <ref> summarizes the results of the above experiments for the language comprehension task. In concordance with what was observed in ECoG-EM approaches such as SIGFRIED <cit.> and CortiQ <cit.>, we found that the lower frequency bands, specifically α and β, did not contribute much towards classification, while the high-γ band achieved good classification accuracy. In other words, the classification based on the full signal spectrum had higher classification accuracy, sensitivity, and specificity than the sub-band approaches, indicating that the full spectrum had more information to offer. §.§.§ Block-size analysis We also tested the use of smaller blocks of data by further dividing each control/active task block into 10 sub-blocks. Each sub-block of data was the power spectrum representation of 3 seconds of the recording. The classification was done based on a majority voting of the classified sub-blocks within a channel. The resulting classification accuracy was 78%, higher than that of the block-based approach. This indicates that there was more local information to be extracted from the signal.§.§.§ Comparison to the state of the art ECoG-EM has been extensively tested on motor localization tasks <cit.>, but not as much on language localization. Still, ECoG-EM is considered to be the state-of-the-art method. To have a fair comparison with ECoG-EM, we applied ECoG-EM on the frequently tested sub-bands - α, β and high-γ - as well as on the frequency bands beyond, up to 350 Hz. The results are shown in Figure <ref>. While the ECoG-EM approach provides a higher specificity, it has a much lower accuracy and sensitivity than the proposed RF-based approach. This is a strong validation of our hypothesis that discriminating PRCs and NRCs directly is a promising technique compared to the baseline reference channel classification approach. § CONCLUSION Discriminating between the responses in the eloquent language cortex regions based on the associated task is a challenging problem. In the current study, we developed a novel framework for ECoG-based eloquent cortex localization with promising results: 78% accuracy on channel classification in comparison to the 55% accuracy of the state-of-the-art ECoG-based functional mapping. We showed the efficacy of machine-learning-based RTFM signal analysis as a strong alternative to ESM. § ACKNOWLEDGMENTS The authors would like to acknowledge the UCF-FH seed grant (PIs: M. Korostenskaja and U. Bagci) for supporting this study. The authors would also like to thank Drs.
Scott Holland and Jennifer Vannest for sharing the story listening task developed in their Neuroimaging Center at Cincinnati Children's Hospital. Special thanks to Drs. G. Schalk and P. Brunner for providing their in-house-built version of BCI2000-based software for ECoG recording and for their continued support of our ECoG-related studies.
epilepsyStats Epilepsy Foundation, <http://www.epilepsy.com/learn/epilepsy-101/what-epilepsy>
shorvon2013longitudinal Shorvon, Simon D., and David MG Goodridge. "Longitudinal cohort studies of the prognosis of epilepsy: contribution of the National General Practice Study of Epilepsy and other studies." Brain 136.11 (2013): 3497-3510.
schalk2008real Schalk, Gerwin, et al. "Real-time detection of event-related brain activity." Neuroimage 43.2 (2008): 245-249.
korostenskaja2015electrocorticography Korostenskaja, Milena, et al. "Electrocorticography-Based Real-Time Functional Mapping for Pediatric Epilepsy Surgery." Journal of Pediatric Epilepsy 4.04 (2015): 184-206.
prueckl2013cortiq Prueckl, Robert, et al. "cortiQ-Clinical software for electrocorticographic real-time functional mapping of the eloquent cortex." Engineering in Medicine and Biology Society (EMBC), 2013 35th Annual International Conference of the IEEE. IEEE, 2013.
bci2000 Schalk, Gerwin, et al. "BCI2000: a general-purpose brain-computer interface (BCI) system." IEEE Transactions on biomedical engineering 51.6 (2004): 1034-1043.
roland2010passive Roland, Jarod, et al. "Passive real-time identification of speech and motor cortex during an awake craniotomy." Epilepsy & Behavior 18.1 (2010): 123-128.
kapeller2015cortiq Kapeller, Christoph, et al. "CortiQ-based real-time functional mapping for epilepsy surgery." Journal of clinical neurophysiology 32.3 (2015): e12-e22.
arya2015electrocorticographic Arya, Ravindra, et al. "Electrocorticographic language mapping in children by high-gamma synchronization during spontaneous conversation: comparison with conventional electrical cortical stimulation." Epilepsy research 110 (2015): 78-87.
korostenskaja Korostenskaja, Milena, et al. "Real-time functional mapping with electrocorticography in pediatric epilepsy: comparison with fMRI and ESM findings." Clinical EEG and neuroscience 45.3 (2014): 205-211.
korostenskaja2014real Korostenskaja, Milena, et al. "Real-time functional mapping: potential tool for improving language outcome in pediatric epilepsy surgery: Case report." Journal of Neurosurgery: Pediatrics 14.3 (2014): 287-295.
breiman2001random Breiman, Leo. "Random forests." Machine learning 45.1 (2001): 5-32.
bromiley2016fully Bromiley, Paul A., et al. "Fully Automatic Localisation of Vertebrae in CT Images Using Random Forest Regression Voting." International Workshop on Computational Methods and Clinical Applications for Spine Imaging. Springer, Cham, 2016.
verhoeven2016using Verhoeven, Thibault, et al. "Using Random Forest for Diagnosis and Lateralization of Temporal Lobe Epilepsy from EEG-based Directed Functional Connectivity." 12th European Congress on Epileptology. Vol. 57. Wiley-Blackwell, 2016.
sarfaraz Hussein, Sarfaraz, et al. "Automatic segmentation and quantification of white and brown adipose tissues from PET/CT scans." IEEE transactions on medical imaging 36.3 (2017): 734-744.
breiman1996technical Breiman, Leo. "Some properties of splitting criteria." Machine Learning 24.1 (1996): 41-47.
cortiq CortiQ, <http://www.cortiq.at/Home>, Last checked: 31/07/2017
http://arxiv.org/abs/1706.01380v2
{ "authors": [ "Harish RaviPrakash", "Milena Korostenskaja", "Eduardo Castillo", "Ki Lee", "James Baumgartner", "Ulas Bagci" ], "categories": [ "q-bio.NC", "cs.CV", "cs.LG" ], "primary_category": "q-bio.NC", "published": "20170526165004", "title": "Automatic Response Assessment in Regions of Language Cortex in Epilepsy Patients Using ECoG-based Functional Mapping and Machine Learning" }
http://arxiv.org/abs/1705.09851v2
{ "authors": [ "Matthew F. Dixon", "Nicholas G. Polson", "Vadim O. Sokolov" ], "categories": [ "stat.ML" ], "primary_category": "stat.ML", "published": "20170527181758", "title": "Deep Learning for Spatio-Temporal Modeling: Dynamic Traffic Flows and High Frequency Trading" }
^1Departamento de Física e Química, Unesp - Univ Estadual Paulista, 15385-000, Ilha Solteira, SP, Brazil ^2IGCE, Unesp - Univ Estadual Paulista, Departamento de Física, 13506-900, Rio Claro, SP, Brazil ^3Departamento de Física - Universidade Federal do Maranhão, 65080-805, São Luís, MA, Brazil ^4Science Institute, University of Iceland, Dunhagi-3, IS-107, Reykjavik, Iceland ^5ITMO University, St. Petersburg 197101, Russia
The ground state of diatomic molecules in nature is inevitably bonding, and the first excited state is antibonding. We demonstrate theoretically that for a pair of distant adatoms buried in 3D Dirac semimetals, this natural order of the states can be reversed, and an antibonding ground state occurs at the lowest energy of the so-called bound states in the continuum. We propose an experimental protocol using an STM tip to visualize the topographic map of the local density of states on the surface of the system and reveal the emerging physics.
Antibonding Ground State of Adatom Molecules in Bulk Dirac Semimetals
Y. Marques^1, A. E. Obispo^2,3, L. S. Ricco^1, M. de Souza^2, I. A. Shelykh^4,5, and A. C. Seridonio^1,2
December 30, 2023
============================================================================================================
Introduction.—Three-dimensional Dirac semimetals (3D-DSMs) such as Cd_3As_2 and Na_3Bi<cit.> represent a novel class of functional materials constituting 3D analogues of gapless graphene<cit.>. The band structure of 3D semimetals contains a set of so-called Dirac points, in which the conduction and valence bands touch and the effective mass becomes zero. Around these points the dispersion of quasiparticles corresponds to that of massless relativistic Dirac particles, which results in a series of unusual properties of these materials, such as linear magnetoresistance, unprecedented Shubnikov-de Haas oscillations, and ultrahigh carrier mobility<cit.>.
In this work, we predict one more interesting feature of such materials. Namely, if we consider a buried pair of distant adatoms in the bulk of a 3D-DSM, as depicted in Fig.<ref>, the ground state of this molecular system, formed from bound states in the continuum (BICs) of the adatoms<cit.>, will be characterized by an antibonding-type orbital. This differs from the natural order in diatomic molecules, where the ground state is of bonding type in the vast majority of cases; the formation of an antibonding ground state has until now been reported only in systems of artificially fabricated InAs and Ge/Si p-type quantum dots for certain values of the inter-dot separation<cit.>. The behavior we report is a unique effect arising from long-range correlations between distant adatoms mediated by bulk fermions in 3D-DSMs.
To detect the predicted effect, we propose to use the experimental approach developed in Ref.<cit.> for imaging isodensity contours of molecular states by a scanning tunneling microscope (STM) tip, as outlined in Fig.<ref>.
The Model.—For the theoretical analysis of two adatoms buried inside a 3D-DSM, as depicted in Fig.<ref>, we employ an Anderson-like Hamiltonian<cit.> ℋ_T=ℋ_0+ℋ_d+ℋ_V, in which the effective low-energy term describing the 3D-DSM is given by ℋ_0=∑_𝐤,τψ_τ^†(𝐤)h_τ(𝐤)ψ_τ(𝐤), where ψ_τ^†(𝐤)=([ c_𝐤τ↑^† c_𝐤τ↓^† ]) is a spinor with fermionic operators c_𝐤τσ^† (c_𝐤τσ) for creation (annihilation) of electrons in quantum states labeled by the wave vector 𝐤, spin σ and chirality τ=±, and h_τ(𝐤) = v_Fτ(k_xσ_x+k_yσ_y+k_zσ_z), where σ_i accounts for the Pauli matrices and v_F is the Fermi velocity. The term ℋ_d= ∑_jσε_d_jσd_jσ^†d_jσ+∑_jU_jn_d_j↑n_d_j↓ describes the buried adatoms (j=1,2), where n_d_jσ=d_jσ^†d_jσ, d_jσ^† (d_jσ) creates (annihilates) an electron with spin σ in the state ε_d_jσ, and U_j is the on-site Coulomb repulsion. ℋ_V accounts for the hybridization between the adatoms and the host, ℋ_V=∑_j𝐤τd̂_j^†V̂_j𝐤ψ_τ(𝐤)+H.c., where d̂_j^†=([ d_j↑^† d_j↓^† ]) and V̂_j𝐤=([ V_j𝐤 0; 0 V_j𝐤 ]) is the hybridization matrix. We assume that both adatoms are equally coupled to the 3D-DSM, in such a way that V_j𝐤=v_0/√(𝒩)e^i𝐤·𝐑_j, in which 𝒩 gives the total number of states in the band structure and 𝐑_j corresponds to the positions of the buried adatoms. To explore the effects induced by the adatoms, we focus on the local density of states (LDOS) of the system, given by LDOS(ε, R_m)=-1/π Im[∑_σ𝒢̃_σ(ε, R_m)], where 𝒢̃_σ(ε, R_m)=1/𝒩∑_𝐤𝐪∑_ττ'e^-i𝐤·𝐑_me^i𝐪·𝐑_m𝒢̃_c_𝐤τσc_𝐪τ'σ is the system's Green function in the energy domain ε at the STM-tip position 𝐑_m. By applying the equation-of-motion (EOM) procedure<cit.> to the previous equation, we find 𝒢̃_c_𝐤τσc_𝐪τ'σ=(ε±ħ v_Fτ k_z)δ_𝐤𝐪δ_ττ'/(ε^2-(τħ v_Fk)^2) +(ε±ħ v_Fτ k_z)∑_jV_j𝐤/(ε^2-(τħ v_Fk)^2)𝒢̃_d_jσc_𝐪τ'σ +(ħ v_Fτ k_-)∑_jV_j𝐤/(ε^2-(τħ v_Fk)^2)𝒢̃_d_jσ̅c_𝐪τ'σ, where ± stands for σ=↑,↓, respectively, with σ̅=-σ and k_±=k_x± ik_y. To finish the LDOS evaluation, we first perform the summation over τ and τ', which gives 𝒢̃_c_𝐤σc_𝐪σ^full=2εδ_𝐤𝐪/(ε^2-(ħ v_Fk)^2)+2ε∑_jV_j𝐤/(ε^2-(ħ v_Fk)^2)∑_τ'𝒢̃_d_jσc_𝐪τ'σ, where we defined 𝒢̃_AB^full≡∑_ττ'𝒢̃_AB. By applying the EOM method to the mixed Green function ∑_τ'𝒢̃_d_jσc_𝐪τ'σ, we determine ∑_τ'𝒢̃_d_jσc_𝐪τ'σ=2ε∑_lV_l𝐪^*/(ε^2-(ħ v_Fq)^2)𝒢̃_d_jσd_lσ and consequently, 𝒢̃_σ(ε, R_m) =1/𝒩∑_𝐤𝐪2εδ_𝐤𝐪/(ε^2-(ħ v_Fk)^2)+1/v_0^2∑_jl𝒢̃_d_jσd_lσΣ( R_mj)Σ( R_lm), in which 𝐑_mj=𝐑_m-𝐑_j, 𝐑_mj=-𝐑_jm and Σ( R_mj) =2v_0^2/𝒩∑_𝐤ε e^i𝐤·𝐑_mj/(ε^2-(ħ v_Fk)^2) gives the non-interacting self-energy. After performing the sum over 𝐤 and introducing the energy cutoff D as the half band-width of the 3D-DSM, we get Σ( R_mj) = -3π (v_0^2ε/D^3)(ħ v_F/|R_mj|)exp(i|R_mj|ε/ħ v_F). This equation holds in the domain |R_mj|ε/ħ v_F≫1, i.e., for long-range positions. In particular, at the adatom site the self-energy reads Σ(0) = -6ε (v_0^2/D^2)(1-(ε/2D)ln|(D+ε)/(D-ε)|)-iπρ_0v_0^2, with the 3D-DSM density of states (DOS) determined by ρ_0=Ωε^2/(π^2ħ^3v_F^3𝒩)=3ε^2/D^3, which exhibits quadratic scaling with energy, in agreement with Ref.<cit.>. As a result, the LDOS of the system is given by LDOS(ε, R_m) = 2ρ_0+∑_jlΔLDOS_jl( R_m), where ΔLDOS_jl( R_m) =-1/(π v_0^2)∑_σ Im{Σ( R_mj)𝒢̃_d_jσd_lσΣ( R_lm)} stands for the term induced by the presence of the buried adatoms.
The diagonal terms (j=l) describe the electronic waves scattered by the individual adatoms, while the mixed terms (j≠ l) correspond to the waves that travel back and forth between the two adatoms. The aforementioned quantities are of major importance for the appearance of the so-called BICs, which emerge when ΔLDOS_jl, for j≠ l, contributes with a Fano antiresonance<cit.> phase-shifted by π with respect to the resonance arising from ΔLDOS_jj. Notably, both quantities depend on the DOS of the adatoms, DOS_jl=-1/π Im(∑_σ𝒢̃_d_jσd_lσ). To evaluate the functions 𝒢̃_d_jσd_lσ, we start by employing the EOM approach, which gives: (ε-ε_d_jσ-Σ(0))𝒢̃_d_jσd_lσ=δ_jl+U_j𝒢̃_d_jσn_d_jσ̅d_lσ +Σ( R_jj̅)𝒢̃_d_j̅σd_lσ, where j̅=1,2 when j=2,1. In this expression, 𝒢̃_d_jσn_d_jσ̅d_lσ stands for the two-particle Green function, which yields (ε-ε_d_jσ-U_j)𝒢̃_d_jσn_d_jσ̅d_lσ=δ_jl⟨ n_d_jσ̅⟩+∑_𝐤τV_j𝐤[𝒢̃_d_jσ̅^†c_𝐤τσ̅d_jσ,d_lσ+𝒢̃_c_𝐤τσd_jσ̅^†d_jσ̅,d_lσ-V_j𝐤𝒢̃_c_𝐤τσ̅^†d_jσ̅d_jσ,d_lσ], where the occupation number is ⟨ n_d_jσ̅⟩= -1/π∫_-D^0 Im(𝒢̃_d_jσ̅d_jσ̅)dε. We employ the Hubbard I approximation<cit.> in order to close this set of equations for the Green functions. Thereby, we find for the diagonal adatom Green functions 𝒢̃_d_jσd_jσ=λ_j^σ̅/(ε-ε_d_jσ-Σ̃_jj^σ), where λ_j^σ̅=1+⟨ n_d_jσ̅⟩ U_j/(ε-ε_d_jσ-U_j-Σ(0)) and Σ̃_jj^σ=Σ(0)+λ_j^σ̅λ_j̅^σ̅Σ( R_jj̅)Σ( R_j̅j)/(ε-ε_d_j̅σ-Σ(0))=Σ(0)+Σ_jj^σ. The mixed Green functions are 𝒢̃_d_jσd_j̅σ=[λ_j^σ̅Σ( R_jj̅)/(ε-ε_d_jσ-Σ(0))]𝒢̃_d_j̅σd_j̅σ. Results and Discussion.—In the following discussion we consider the case of two identical adatoms placed at 𝐑_1,2=(0,∓1,0)nm (the surface of the system corresponds to the (x,y,1)nm plane), with energy levels ε_d_jσ=-0.07D, which are hybridized to the free electrons of the 3D-DSM with strength v_0=0.14D and on-site Coulomb repulsion U_j=0.14D. We point out that a change of v_0 just rigidly shifts the profile of the adatom DOS. Additionally, we have chosen ħ v_F≈5 eVÅ and D≈0.2 eV, which are experimental parameters for cadmium arsenide (Cd_3As_2)<cit.>. The set of parameters we use corresponds to the symmetric Anderson regime (2ε_d_jσ+U_j=0). For such conditions the Hamiltonian becomes invariant under the particle-hole transformation, as can be seen in Fig.<ref>. The presence of particle-hole symmetry is in no way necessary for the appearance of the phenomena discussed below. The four-peak structure in the DOS visible in the upper panels of Fig.<ref> emerges from the contributions provided by the Coulomb repulsion U_j and the interacting self-energy Σ_jj^σ=λ_j^σ̅λ_j̅^σ̅Σ( R_jj̅)Σ( R_j̅j)/(ε-ε_d_j̅σ-Σ(0)) in Eq.(<ref>) for the adatom Green functions. The former leads to the formation of the pair of peaks at ε_d_jσ and ε_d_jσ+U_j, as expected, while the latter is responsible for the splitting of both of them. Notably, this self-energy provides an effective tunneling between the adatoms mediated by the bulk states of the 3D-DSM, even in the absence of the direct tunneling term t( R_12)d_1σ^†d_2σ+H.c.. This indirect tunneling becomes especially important when the adatoms are well separated from each other. The four-peak structure corresponds to the formation of molecular states with a remarkable property: the ground state corresponds to the antibonding configuration. This is a consequence of the particular scaling of the 3D-DSM DOS with energy, ρ_0∝ε^2, entering the expression for Σ( R_mj). If we replace this DOS by the one corresponding to a normal metal, the reported effect disappears.
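The following is a minimal numerical sketch - ours, not the authors' code - that evaluates the Hubbard I Green function above for the quoted parameters. We assume half-filling, ⟨ n_d_jσ̅⟩=1/2, consistent with the symmetric Anderson regime, instead of computing the occupation self-consistently, and we use the closed-form expressions for Σ(0) and Σ( R_jj̅) at all energies. It should reproduce the four-peak structure of the adatom DOS qualitatively.

    import numpy as np

    # Parameters from the text, energies in units of the half band-width D.
    ed, U, v0 = -0.07, 0.14, 0.14   # adatom level, Coulomb repulsion, hybridization
    hvF = 2.5                       # (hbar v_F)/D = (5 eV*Angstrom)/(0.2 eV), in nm
    R12 = 2.0                       # adatom separation |R_1 - R_2| in nm
    n_occ = 0.5                     # assumed half-filling (particle-hole symmetry)

    eps = np.linspace(-0.12, 0.12, 6001)   # energy grid, away from the band edges

    rho0 = 3 * eps**2                                        # bulk DOS rho_0 = 3 eps^2 / D^3
    Sigma0 = (-6 * eps * v0**2 * (1 - (eps / 2) * np.log(np.abs((1 + eps) / (1 - eps))))
              - 1j * np.pi * rho0 * v0**2)                   # on-site self-energy Sigma(0)
    SigmaR = -3 * np.pi * v0**2 * eps * (hvF / R12) * np.exp(1j * eps * R12 / hvF)

    lam = 1 + n_occ * U / (eps - ed - U - Sigma0)            # Hubbard I factor lambda_j
    Sigma_jj = lam**2 * SigmaR**2 / (eps - ed - Sigma0)      # interacting self-energy
    G_dd = lam / (eps - ed - Sigma0 - Sigma_jj)              # diagonal adatom Green function

    dos = -np.imag(G_dd) / np.pi   # adatom DOS per spin; four molecular peaks expected
    peaks = np.where((dos[1:-1] > dos[:-2]) & (dos[1:-1] > dos[2:]))[0] + 1
    print(eps[peaks])              # compare with the peak positions quoted below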
Additionally, following Ref.<cit.> and looking at the poles of the adatom Green function, we recognize t_eff=Re(Σ(0)+Σ_jj^σ) as the effective hopping term between the adatoms, which is negative, as we have checked, thus ensuring the antibonding ground state. However, distinctly from Ref.<cit.>, where the negative tunneling term comes from the spin-orbit coupling, here it emerges from Friedel-like oscillations inside the relativistic 3D-DSM environment, encoded by the self-energy Σ_jj^σ. The nature of the four molecular states can be clarified if one analyzes the corresponding LDOS of the whole system. Note that this quantity is position-dependent and its profile on the surface of the system can be visualized experimentally using an STM tip. The middle panels of Fig.<ref> illustrate the contribution of the adatoms to the surface LDOS evaluated at 𝐑_m=(0,0,1)nm (Fig.<ref>(b)) and 𝐑_m=(1,1,1)nm (Fig.<ref>(e)). In both panels the diagonal terms (j=l) present pronounced peaks at the same energies as those of the DOS (upper panels of Fig.<ref>). The mixed terms (j≠ l) show resonances around ε≈±5.7×10^-2D and antiresonances near ε≈±7.0×10^-2D. When one computes the total LDOS as the sum of all contributions from ΔLDOS_jl, the interference between the diagonal and mixed terms can be constructive or destructive. In the latter case the peaks in the total LDOS become attenuated and can even totally vanish, as happens in Fig.<ref>(c), where only two peaks out of four survive. In this case, the two peaks that disappear due to destructive Fano interference<cit.> around ε≈±7.0×10^-2D correspond to the BICs. Note that full annihilation of the peaks takes place only for certain values of 𝐑_m, as one can clearly see in Figs.<ref>(e) and (f). In that case the destructive interference is not perfect, and thus the BICs inevitably decay into the host continuum. The profiles of the total LDOS on the 3D-DSM surface (the 𝐑_m=(x,y,1)nm plane) are shown in Fig.<ref>. We considered two distinct values of the energy, corresponding to the cases of constructive and destructive Fano interference in ΔLDOS: (i) ε≈-5.7×10^-2D and (ii) ε≈-7.0×10^-2D, respectively. For the case of constructive interference, shown in Fig.<ref>(a), the density profile reveals a nodeless covalent molecular state, i.e., a bonding state. On the contrary, when the energy corresponds to destructive Fano interference and the formation of a BIC, the density profile has a pronounced node between the adatoms and thus corresponds to an antibonding state. Note that this latter case corresponds to the peak in the DOS with minimal energy. Thus, differently from the case of real molecules, the ground molecular state is antibonding, which is quite remarkable<cit.>. It is worth mentioning that another pair of bonding and antibonding states exists above the Fermi energy (ε_F=0) due to the particle-hole symmetry of the original Hamiltonian (Fig.<ref>(c)). The molecular states discussed above are robust with respect to a detuning Δε of the energy levels of the two adatoms. The corresponding LDOS profiles of the bonding and antibonding states are shown in Fig.<ref> for two different values of Δε. Naturally, the profiles become asymmetric, but the nodal line between the adatoms revealing the antibonding nature of the ground state remains clearly visible.
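Extending the previous sketch (again ours, with the same assumptions, and reusing its arrays), one can also map the adatom-induced part of the surface LDOS on the plane 𝐑_m=(x,y,1)nm at a fixed energy, by combining the diagonal and mixed Green functions with Σ( R_m1) and Σ( R_m2) as in the LDOS formula above; a node along the line between the adatoms then signals the antibonding state.

    def Sigma_of_R(e, R):
        """Closed-form self-energy Sigma(R_mj) in units of D, distances in nm."""
        return -3 * np.pi * v0**2 * e * (hvF / R) * np.exp(1j * e * R / hvF)

    def delta_ldos_map(target_eps, extent=4.0, N=161):
        """Adatom-induced surface LDOS at the grid energy closest to target_eps."""
        i = np.argmin(np.abs(eps - target_eps))
        e = eps[i]
        G11 = G22 = G_dd[i]                                  # identical adatoms
        G12 = G21 = lam[i] * SigmaR[i] / (e - ed - Sigma0[i]) * G_dd[i]
        xs = np.linspace(-extent, extent, N)
        X, Y = np.meshgrid(xs, xs)
        R1 = np.sqrt(X**2 + (Y + 1)**2 + 1)                  # distance to adatom at (0,-1,0)
        R2 = np.sqrt(X**2 + (Y - 1)**2 + 1)                  # distance to adatom at (0,+1,0)
        S1, S2 = Sigma_of_R(e, R1), Sigma_of_R(e, R2)
        dl = S1 * G11 * S1 + S1 * G12 * S2 + S2 * G21 * S1 + S2 * G22 * S2
        return -2 * np.imag(dl) / (np.pi * v0**2)            # factor 2 from the spin sum

    bonding = delta_ldos_map(-0.057)      # nodeless profile expected
    antibonding = delta_ldos_map(-0.070)  # node between the adatoms (BIC energy)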
This is in contrast with natural molecules, for which the ground state is always bonding. The predicted effect appears due to the indirect tunneling between the adatoms mediated by the quasi-relativistic free electrons of the 3D-DSM.
Acknowledgments.—This work was supported by the agencies CNPq (307573/2015-0), CAPES, and the São Paulo Research Foundation (FAPESP) - grant: 2015/23539-8. I.A.S. acknowledges the support from the Horizon2020 RISE project CoExAN and RSF (17-12-01581). A.E.O. thanks CNPq (grant: 312838/2016-6) and Secti/FAPEMA (DCR 02853/16).
DSM1 Z. K. Liu, B. Zhou, Y. Zhang, Z. J. Wang, H. M. Weng, D. Prabhakaran, S.-K. Mo, Z. X. Shen, Z. Fang, X. Dai, Z. Hussain, and Y. L. Chen, Science 343, 864 (2014).
DSM2 Z. K. Liu, J. Jiang, B. Zhou, Z. J. Wang, Y. Zhang, H. M. Weng, D. Prabhakaran, S.-K. Mo, H. Peng, P. Dudin, T. Kim, M. Hoesch, Z. Fang, X. Dai, Z. X. Shen, D. L. Feng, Z. Hussain, and Y. L. Chen, Nat. Mater. 13, 677 (2014).
DSM3 M. Neupane, S.-Y. Xu, R. Sankar, N. Alidoust, G. Bian, C. Liu, I. Belopolski, T.-R. Chang, H.-T. Jeng, H. Lin, A. Bansil, F. Chou, and M. Z. Hasan, Nat. Commun. 5, 3786 (2014).
DSM4 S.-Y. Xu, C. Liu, S. K. Kushwaha, R. Sankar, J.W. Krizan, I. Belopolski, M. Neupane, G. Bian, N. Alidoust, T.-R. Chang, H.-T. Jeng, C.-Y. Huang, W.-F. Tsai, H. Lin, P. P. Shibayev, F.-C. Chou, R. J. Cava, and M. Z. Hasan, Science 347, 294 (2015).
DSM5 S. Borisenko, Q. Gibson, D. Evtushinsky, V. Zabolotnyy, B. Büchner, and R. J. Cava, Phys. Rev. Lett. 113, 027603 (2014).
Graphe1 K.S. Novoselov, Rev. Mod. Phys. 83, 837 (2011).
Graphe2 N.M.R. Peres, Rev. Mod. Phys. 82, 2673 (2010).
Graphe3 A.H. Castro Neto, F. Guinea, N.M.R. Peres, K.S. Novoselov, and A. K. Geim, Rev. Mod. Phys. 81, 109 (2009).
Feature1 J. Feng, Y. Pang, D. Wu, Z. Wang, H. Weng, J. Li, X. Dai, Z. Fang, Y. Shi, and L. Lu, Phys. Rev. B 92, 081306(R) (2015).
Feature2 P.J.W. Moll, N. L. Nair, T. Helm, A. C. Potter, I. Kimchi, A. Vishwanath, and J. G. Analytis, Nature (London) 535, 266 (2016).
Feature3 T. Liang, Q. Gibson, M. N. Ali, M. Liu, R. J. Cava, and N. P. Ong, Nat. Mater. 14, 280 (2014).
BIC1 C.W. Hsu, B. Zhen, A.D. Stone, J.D. Joannopoulos, and M. Soljacic, Nature Reviews Materials 1, 16048 (2016).
BIC2 L.H. Guessi, R.S. Machado, Y. Marques, L.S. Ricco, K. Kristinsson, M. Yoshida, I.A. Shelykh, M. de Souza, and A.C. Seridonio, Phys. Rev. B 92, 045409 (2015).
BIC3 L.H. Guessi, Y. Marques, R.S. Machado, L.S. Ricco, K. Kristinsson, M.S. Figueira, I.A. Shelykh, M. de Souza, and A.C. Seridonio, Phys. Rev. B 92, 245107 (2015).
Experiment M.F. Doty, J.I. Climente, M. Korkusinski, M. Scheibner, A.S. Bracker, P. Hawrylak, and D. Gammon, Phys. Rev. Lett. 102, 047401 (2009).
Experiment2 A.I. Yakimov, V.A. Timofeev, A.I. Nikiforov, and A.V. Dvurechenskii, JETP Letters 94, 744 (2011).
STM G. Reecht, B. Heinrich, H. Bulou, F. Scheurer, L. Limot, and G. Schull, arXiv:1703.05622v2 (2017).
Hamiltonian1 P.W. Anderson, Phys. Rev. 124, 41 (1961).
Hamiltonian2 H.-R. Chang, J. Zhou, S.-X. Wang, W.-Y. Shan, and D. Xiao, Phys. Rev. B 92, 241103(R) (2015).
Hamiltonian3 M.V. Hosseini and M. Askari, Phys. Rev. B 92, 224435 (2015).
EOM H. Haug and A.P. Jauho, Quantum Kinetics in Transport and Optics of Semiconductors, Springer Series in Solid-State Sciences 123 (Springer, New York, 1996).
DOS A. Principi, G. Vignale, and E. Rossi, Phys. Rev. B 92, 041107(R) (2015).
Fano1 U. Fano, Phys. Rev. 124, 1866 (1961).
Fano2 A.E. Miroshnichenko, S. Flach, and Y.S. Kivshar, Rev. Mod. Phys. 82, 2257 (2010).
HubbardI J. Hubbard, Proc. R. Soc. Lond. A, 281, 401 (1964).
SOC J.I. Climente, M. Korkusinski, G.
Goldoni, and P. Hawrylak, Phys. Rev. B 78, 115323 (2008).
http://arxiv.org/abs/1705.09216v2
{ "authors": [ "Y. Marques", "A. E. Obispo", "L. S. Ricco", "M. de Souza", "I. A. Shelykh", "A. C. Seridonio" ], "categories": [ "cond-mat.str-el" ], "primary_category": "cond-mat.str-el", "published": "20170525151257", "title": "Antibonding Ground state of Adatom Molecules in Bulk Dirac Semimetals" }
Numerical evidence for higher order Stark-type conjectures
We give a systematic method of providing numerical evidence for higher order Stark-type conjectures such as (in chronological order) Stark's conjecture over ℚ, Rubin's conjecture, Popescu's conjecture, and a conjecture due to Burns that constitutes a generalization of Brumer's classical conjecture on annihilation of class groups. Our approach is general and could be used for any abelian extension of number fields, independent of the signature and type of places (finite or infinite) that split completely in the extension. We then employ our techniques in the situation where K is a totally real, abelian, ramified cubic extension of a real quadratic field. We numerically verify the conjectures listed above for all fields K of this type with absolute discriminant less than 10^12, for a total of 19197 examples. The places that split completely in these extensions are always taken to be the two real archimedean places of k, and we are in a situation where all the S-truncated L-functions have order of vanishing at least two.
Kevin McGown, Jonathan Sands, Daniel Vallières
December 30, 2023
==================================================
§ INTRODUCTION
In a well-known series of four papers, Harold Stark formulated several conjectures regarding the special value at s=0 of Artin L-functions. In <cit.>, he formulated what is now known as Stark's main conjecture (or Stark's conjecture over ℚ) for a general Artin L-function, and in <cit.>, he formulated a more refined conjecture for L-functions associated to abelian extensions of number fields having order of vanishing one at s=0 (referred to henceforth as Stark's abelian rank one conjecture). After some previous work of Sands, Stark and Tangedal, Stark's abelian rank one conjecture was extended to higher order of vanishing L-functions by Rubin (Conjecture B of <cit.>) and by Popescu (Conjecture C of <cit.>). Popescu's and Rubin's conjectures are closely related, though not equivalent in general. Popescu carefully studied a comparison theorem between the two, and he showed that Rubin's conjecture implies his, and at times, they are equivalent. For more information on these matters, we refer the reader to Theorem 5.5.1 of <cit.>. All these conjectures have been studied extensively by various authors. An impressive amount of work in gathering numerical evidence for Stark's abelian rank one conjecture has been done over the years. But to our knowledge, very few authors have provided numerical evidence for Rubin's or Popescu's conjecture in the case where the L-functions have order of vanishing greater than or equal to two. (The only two such works known to us are <cit.> and <cit.>.) The goal of this investigation is to remedy this situation.
After this paper was completed, it was brought to our attention that Stucky (see <cit.>) had very recently completed his master's thesis on the subject, but his approach is different from ours. Roughly speaking, Popescu's conjecture predicts that a certain arithmetical object built out of S-units, called an evaluator, lies in a meaningful lattice inside a vector space over ℚ. The idea is to use an Artin system of S-units in order to give a precise formula for the evaluator that then allows one to check if it lies in the expected lattice. There is no canonical choice for an Artin system of S-units, and different systems give different representations for the evaluator. Nevertheless, they can be found algorithmically. It is worth pointing out that Stark originally used Artin systems of S-units in order to state his main conjecture in <cit.>, but they have since been superseded by the use of a more abstract result on rationality of linear representations due to Herbrand. In <ref> below, we give a definition of an Artin system of S-units, since it is essential to our approach. It follows from our formula (see Proposition <ref> below) that the evaluator will lie in the underlying rational vector space provided Stark's conjecture over ℚ is true. Stark's conjecture over ℚ can be interpreted as a rationality statement about an element in ℂ[G] constructed out of special values of S-truncated L-functions at s=0. Recently, Burns formulated a conjecture (Conjecture 2.4.1 in <cit.>) that would provide bounds for the denominators of this element (and also provides a generalization of Brumer's classical conjecture on annihilation of class groups). Hence, we also give numerical evidence for Stark's conjecture over ℚ and Burns's conjecture. Also, we note that our work provides numerical evidence for the leading term conjecture (namely, the equivariant Tamagawa number conjecture for the pair (h^0( Spec(K)),ℤ[G])), since Burns showed in <cit.> that it implies Popescu's conjecture (see also <cit.>). Moreover, Burns showed that the leading term conjecture implies his conjecture under some technical conditions (see Theorem 4.1.1 of <cit.>). In this paper, we use our approach to provide numerical evidence for Stark's, Rubin's, Popescu's, and Burns's conjectures by computing the 19197 examples where the top field is a totally real number field of absolute discriminant less than 10^12 that is a ramified abelian cubic extension of a real quadratic number field and where the split places in the extension are always taken to be the two archimedean ones of the base field (the set S is taken to be the minimal one). As far as we know, the conjectures above are still open in this setting, except for when the top field is abelian over ℚ, by previous results of Burns. (See Theorem A of <cit.> and Corollary 4.1.3 of <cit.>.) Our method is fairly general and could be used as well to numerically verify various refinements and generalizations of both Rubin's and Popescu's conjectures, such as Conjecture 4.16 of <cit.> and various other ones contained in <cit.> and <cit.>. Note that for the cubic extensions K/k considered above, the group of roots of unity μ(K) = {± 1 } is Gal(K/k)-cohomologically trivial. (See Lemma 5.4.4 of <cit.> for instance.) Hence, by Theorem 5.5.1 of <cit.>, Rubin's conjecture is equivalent to Popescu's conjecture. Computationally, it is more convenient to work with Popescu's conjecture, since one does not have to deal with an auxiliary set of primes T needed in the statement of Rubin's conjecture. This allows us to focus solely on
Popescu's conjecture. Moreover, in this case, Theorem 4.1.1 of <cit.> implies that Burns's conjecture follows from the leading term conjecture.
The paper is organized as follows. We start in <ref> with a review of S-truncated L-functions and the Dirichlet logarithmic map. In <ref> we gather the necessary theoretical results. We give a clear definition of an Artin system of S_K-units in <ref>, and this allows us to give a description of Stark's regulator in terms of an Artin system of S_K-units in <ref>. We present Stark's main conjecture over ℚ in <ref>, Popescu's conjecture in <ref>, and Burns's conjecture in <ref>. In <ref>, we study in detail a very simple example in the order-of-vanishing-one case. Most of the material contained in <ref> is not new, but we rephrase everything in terms of our central notion of an Artin system of S_K-units. In the end, <ref>, Proposition <ref>, Theorem <ref>, and Proposition <ref> are our main tools that, when combined, allow us to provide numerical evidence for Stark's, Rubin's, Popescu's, and Burns's conjectures.
In <ref> we explain our numerical calculations. We outline our method in <ref> and present the results of our computations with a few examples in <ref>. Finally, <ref> contains tables that summarize our data.
§.§ Acknowledgement
The authors would like to thank Edward Roualdes and Nicholas Nelson of California State University, Chico for allowing us to use their computer for our calculations.
§ PRELIMINARIES
§.§ Basic notation
Let k be a number field. We denote its ring of integers by O(k). A place of k will be denoted by v or w. If v is a finite place, then it corresponds to a prime ideal 𝔭 of O(k), and we shall use the words "place" or "prime" interchangeably. The corresponding residue field will be denoted by κ(v) or κ(𝔭). Its cardinality is denoted by ℕ(v) or ℕ(𝔭). To each place v, there is an associated normalized absolute value | · |_v defined as follows. Here α denotes an arbitrary element of k, and |·| denotes the usual absolute value on ℂ.
* If v is a real place with corresponding real embedding τ, then |α|_v = |τ(α)|.
* If v is a complex place with corresponding pair of complex embeddings {τ,τ̅}, then |α|_v = |τ(α)|^2.
* If v is a finite place with corresponding prime ideal 𝔭, then |α|_v = ℕ(𝔭)^- ord_𝔭(α), where ord_𝔭 is the usual valuation associated to 𝔭.

With these normalizations, we have the product formula: for all α∈ k^×, ∏_v|α|_v = 1, where the product is over all places of k. Throughout this paper, we let S_∞ be the set of infinite places of k. The number of real infinite places is denoted by r_1 and the number of complex infinite places by r_2. Hence |S_∞| = r_1 + r_2. Moreover, S will always denote a finite set of places of k that contains S_∞. We have the S-integers defined by O_S(k) = {α∈ k^×| ord_v(α) ≥ 0, for all v ∉ S }, and we set E_S(k) = O_S(k)^×. The group E_S(k) is known as the group of S-units of k. The structure of E_S(k) as an abelian group is well-known: it follows from the S-unit theorem that E_S(k) ≃μ(k) ×ℤ^|S|-1, where μ(k) consists of the roots of unity in k. We set w_k = |μ(k)|.
§.§ The S-truncated L-functions
For simplicity, we shall restrict ourselves to abelian extensions of number fields K/k. The Galois group of K/k is denoted by G. As earlier, we fix a finite set of places S of k that is assumed to contain S_∞, and we denote the set of places of K lying above places in S by S_K. The results of this section are well-known, and we refer the reader to <cit.> for more details. Given a place v of k, one has a short exact sequence 1 ⟶ I_v⟶ G_v⟶ Gal(κ(w)/κ(v)) ⟶ 1, where I_v and G_v are the inertia and decomposition group, respectively, associated to the place v. We let σ_v be an element of G_v that is mapped to the Frobenius automorphism in Gal(κ(w)/κ(v)) via the isomorphism G_v/I_v≃⟶ Gal(κ(w)/κ(v)). If v is unramified in K/k, then σ_v is unique, since I_v = 1. In this case, σ_v is called the Frobenius automorphism at v. Given a finite place v of k, we define an element Fr_v of ℚ[G] as follows: Fr_v = 1/|I_v|σ_v N_I_v, where N_I_v = ∑_h ∈ I_vh. Throughout this paper, we denote the trivial character by χ_1. If χ∈G is such that χ≠χ_1, then χ(Fr_v) = 1 if and only if G_v⊆ ker(χ). Given χ∈G, the corresponding S-truncated L-function is defined by L_K,S(s,χ) = ∏_v ∉ S(1 - χ(Fr_v)/ℕ(v)^s)^-1. This infinite product converges absolutely and defines a holomorphic function for Re(s) > 1. The L-functions L_K(s,χ) := L_K,S_∞(s,χ) satisfy a functional equation which we now recall. Let χ∈G and let v be a real infinite place. Then there are two possibilities: either G_v⊆ ker(χ) or G_v⊈ ker(χ). We let
* r_1^+(χ) be the number of real infinite places v such that G_v⊆ ker(χ),
* r_1^-(χ) be the number of real infinite places v such that G_v⊈ ker(χ).

Define ξ_k(s,χ) = (√(|Δ_k| ·ℕ(𝔣(χ)))/(2^r_2·π^d/2))^s Γ((1+s)/2)^r_1^-(χ)Γ(s/2)^r_1^+(χ)Γ(s)^r_2· L_K(s,χ), where Δ_k is the discriminant of k, 𝔣(χ) the conductor of the character χ, and d = [k:ℚ]. Then ξ_k(s,χ) = W(χ) ·ξ_k(1-s,χ), where W(χ) is a complex number with absolute value 1 satisfying W(χ_1) = 1.
Let χ∈G and let S be any finite set of places of k containing S_∞. Then ord_s=0L_K,S(s,χ) = |S| - 1 if χ = χ_1, and |{v ∈ S | G_v⊆ ker(χ)}| otherwise.
If S = S_∞, then the theorem follows from the known properties of the gamma function, the functional equation (<ref>), and the non-trivial fact that L_K(1,χ) ≠ 0 if χ≠χ_1. If S ≠ S_∞, then the theorem follows by considering what is happening with the Euler factors at the places in S∖ S_∞.
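As a concrete illustration (ours) in the setting studied in this paper: let k be a real quadratic field, let K/k be an abelian, ramified, cubic extension with K totally real, and take S = S_∞ = {v_1, v_2} to be the two real places of k. Since K is totally real, both v_1 and v_2 split completely in K/k, so G_v_1 = G_v_2 = 1 ⊆ ker(χ) for every χ∈G. The theorem then gives ord_s=0L_K,S(s,χ_1) = |S| - 1 = 1 for the trivial character, while ord_s=0L_K,S(s,χ) = |{v ∈ S | G_v⊆ ker(χ)}| = 2 for each of the two non-trivial characters. This is precisely the order-of-vanishing-at-least-two situation for non-trivial characters described in the introduction.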
The S_K-truncated Dedekind zeta function of K is defined by ζ_K,S(s) = ∏_𝔓∉ S_K(1 - 1/ℕ𝔓^s)^-1 for Re(s) > 1 and can be extended to a function that is holomorphic everywhere except for a simple pole at s=1. Its Taylor expansion at s=0 begins as ζ_K,S(s) = -h_K,SR_K,S/w_K· s^(|S_K| - 1) + …, where h_K,S is the S_K-class number of K and R_K,S the S_K-regulator. Note that one can rewrite the order of vanishing of ζ_K,S at s=0 as ord_s=0(ζ_K,S) = |S_K| - 1 = rank_ℤE_S(K), where we write E_S(K) rather than E_S_K(K) in order to simplify the notation. The S_K-truncated Dedekind zeta function can be written in terms of the S-truncated L-functions as follows: ζ_K,S(s) = ∏_χ∈GL_K,S(s,χ).
Let us write L_K,S(s,χ) = c_S(χ)s^r_S(χ) + … The order of vanishing r_S(χ) is known due to Theorem <ref>. Combining (<ref>) and (<ref>), one has -h_K,SR_K,S/w_K = ∏_χ∈Gc_S(χ).
In the 1970s, Stark proposed a conjectural formula for c_S(χ). After some preliminaries, we shall present his main conjecture in <ref> below. If χ∈G, we let e_χ = 1/|G|∑_σ∈ Gχ(σ) ·σ^-1 be the corresponding idempotent in the semisimple finite-dimensional ℂ-algebra ℂ[G]. One easily checks that e_χ_1 = 1/|G|N_G, where N_G = ∑_σ∈ Gσ. We introduce the S-equivariant L-function θ_K,S(s) = ∑_χ∈G L_K,S(s,χ) · e_χ, which is a meromorphic function from ℂ into ℂ[G]. We will also make use of the standard notation L_K,S^*(0,χ) instead of c_S(χ), and we set θ_K,S^*(0) = ∑_χ∈GL_K,S^*(0,χ) · e_χ.
§.§ The logarithmic map
We label the places S ={v_1,v_2,…,v_n}, so that |S| = n, and in doing so, we introduce an ordering on S. For each i = 1,…,n, we fix a place w_i of K lying above v_i. Following Tate in <cit.>, we let Y_S(K) be the free abelian group on the places in S_K. We have a short exact sequence of ℤ[G]-modules 0 ⟶ X_S(K) ⟶ Y_S(K) s_K⟶ℤ⟶ 0, where the map s_K is the augmentation map and X_S(K) its kernel. Recall that s_K is defined by setting s_K(w) = 1 for all w ∈ S_K and extending by linearity. If A is a finite abelian group, M a ℤ[A]-module and F a subfield of ℂ, then we write FM rather than F⊗_ℤM. We define the logarithmic map λ_K,S: E_S(K) ⟶ℂ Y_S(K) by the formula λ_K,S(u) = -∑_w ∈ S_Klog|u|_w·w, whenever u ∈ E_S(K). Because of the product formula (<ref>), λ_K,S takes values in ℂ X_S(K). Its extension to ℂ E_S(K) will be denoted by the same symbol. Not only is this map a ℂ-linear map, but it is also G-equivariant; hence, it is a ℂ[G]-module morphism. The S_K-unit theorem implies that λ_K,S induces an isomorphism of ℂ[G]-modules λ_K,S: ℂ E_S(K) ≃⟶ℂ X_S(K).
Recall that we have an injection of ℤ[G]-modules ι_k:Y_S(k) ↪ Y_S(K) defined by v ↦ |G_v| ∑_w | v w. (The group G acts trivially on Y_S(k).) After tensoring with ℂ, we get an injective morphism of ℂ[G]-modules ℂY_S(k) ↪ℂY_S(K) that we denote by the same symbol ι_k. This map allows us to view ℂY_S(k) inside of ℂY_S(K), so that ℂY_S(k) ⊆ℂY_S(K).
Let χ∈G.
* If χ≠χ_1, then ℂY_S(K) · e_χ = ℂX_S(K) · e_χ.
* For the trivial character, we have ℂY_S(K) · e_χ_1 = ℂY_S(k) and ℂX_S(K) · e_χ_1 = ℂX_S(k).

Tensoring the short exact sequence (<ref>) with ℂ gives the short exact sequence of ℂ[G]-modules 0 ⟶ℂX_S(K) ⟶ℂY_S(K) ⟶ℂ⟶ 0. Since G acts trivially on ℂ, part (<ref>) follows at once. To show that ℂY_S(K) · e_χ_1 = ℂY_S(k), the main point is to use the equality e_χ_1· w = 1/|G|ι_k(v), which is valid for all places v ∈ S and all places w of K lying above v. It then immediately follows that ℂX_S(K) · e_χ_1 = ℂX_S(k).
The following proposition, though simple, is quite useful.
Let χ∈G be a non-trivial character. Furthermore, let v ∈ S and let w be a place of K lying above v. In ℂY_S(K), we have
* If G_v⊆ ker(χ), then e_χ· w ≠ 0,
* If G_v⊈ ker(χ), then e_χ· w = 0.

Let σ_1,…,σ_s be a complete set of representatives of G/G_v. Then e_χ· w = 1/|G|∑_σ∈ Gχ(σ)σ^-1· w = 1/|G|∑_i=1^s∑_h ∈ G_vχ(σ_ih)σ_i^-1h^-1· w = 1/|G|∑_i=1^sχ(σ_i)σ_i^-1· w ·(∑_h ∈ G_vχ(h) ). If G_v⊆ ker(χ), then this last line is |G_v|/|G|∑_i=1^sχ(σ_i) σ_i^-1· w ≠ 0. On the other hand, if G_v⊈ ker(χ), then we get zero, since ∑_h ∈ G_vχ(h) = 0.
Using the previous proposition, one can give a different formula for the order of vanishing of the S-truncated L-functions.
Let χ∈G. Then ord_s=0L_K,S(s,χ) = dim_ℂ(ℂX_S(K) · e_χ) = dim_ℂ(ℂE_S(K) · e_χ).
The first equality follows from Theorem <ref>, Proposition <ref> and Proposition <ref>. The second one follows from the isomorphism (<ref>).
For each i=1,…,n, we define ℓ_i,K: E_S(K) ⟶ℂ[G] by the formula ℓ_i,K(u) = - 1/|G_i|∑_σ∈ Glog|u^σ|_w_i·σ^-1, where from now on we write G_i rather than G_v_i. Its extension to ℂE_S(K) will also be denoted by ℓ_i,K. Note that the maps ℓ_i,K are G-equivariant.
For x ∈ℂE_S(K), we have λ_K,S(x) = ∑_i=1^nℓ_i,K(x) · w_i.
Let σ_1,…,σ_s be a complete set of representatives of G/G_i and let u ∈ E_S(K). Then ∑_i=1^nℓ_i,K(u) · w_i = ∑_i=1^n(-1/|G_i|∑_σ∈ Glog|u^σ|_w_i·σ^-1) w_i = - ∑_i=1^n1/|G_i|∑_t = 1^s∑_h ∈ G_ilog|u^σ_th|_w_i(σ_th)^-1· w_i = - ∑_i=1^n∑_t=1^slog|u^σ_t|_w_iσ_t^-1· w_i = λ_K,S(u).
Let us look at the behavior of the maps ℓ_i,K on various isotypical components of ℂE_S(K). For the next proposition, it may be helpful to observe that ℓ_i,k:ℂE_S(k) ⟶ℂ is the ℂ[G]-module morphism defined by ℓ_i,k(u) = - log|u|_v_i for u ∈ E_S(k). (G acts trivially here.)
Let χ∈G and i∈{1,…,n}.
* Suppose χ≠χ_1 and G_i⊈ ker(χ). If x ∈ℂE_S(K) · e_χ, then ℓ_i,K(x) = 0.
* If x ∈ℂE_S(K) · e_χ_1, then ℓ_i,K(x) = ℓ_i,k(x) · N_G.

For (<ref>), we proceed as follows. Note that if h ∈ G_i, then ℓ_i,K(x) = h ·ℓ_i,K(x) for all x ∈ℂE_S(K). Hence, for all h ∈ G_i, we have e_χ·ℓ_i,K(x) = χ(h) e_χ·ℓ_i,K(x). Summing over all h ∈ G_i gives |G_i| · e_χ·ℓ_i,K(x) = ( ∑_h ∈ G_iχ(h) ) e_χ·ℓ_i,K(x). Since G_i⊈ ker(χ), we have ∑_h ∈ G_iχ(h) = 0, and this proves (<ref>). Point (<ref>) is a simple calculation left to the reader.
If r is an integer satisfying 1≤ r ≤ |S|, we want to study the ℂ[G]-module morphism ∧^rλ_K,S: ⋀_ℂ[G]^rℂE_S(K) ⟶⋀_ℂ[G]^rℂY_S(K), defined on pure wedges by the formula ∧^rλ_K,S(x_1∧…∧ x_r) = λ_K,S(x_1)∧…∧λ_K,S(x_r). In order to do so, we let Ω = {1,…,n }. For any totally ordered set 𝒳, such as Ω, the symbol ℘_r(𝒳) will denote the set of r-tuples (x_1,…,x_r), where x_i∈𝒳, and x_1<…<x_r. If I ∈℘_r(Ω) is such that I = (i_1,…,i_r) with 1≤ i_1< …<i_r≤ n, then we set w_I = w_i_1∧…∧ w_i_r∈⋀_ℂ[G]^rℂY_S(K), and we define R_I,K:⋀_ℂ[G]^rℂE_S(K) ⟶ℂ[G] by the following formula on pure wedges: R_I,K(x_1∧…∧ x_r) = det(ℓ_i_s,K(x_t) )_s,t=1,…,r. Note that for all I ∈℘_r(Ω), the map R_I,K is a ℂ[G]-module morphism. It is worth pointing out that the morphism (<ref>) is injective, since ℂ[G] is semisimple. Combining with Proposition <ref>, one gets for a pure wedge x = x_1∧…∧ x_r∈⋀^r_ℂ[G]ℂE_S(K) the formula ∧^rλ_K,S(x) = ∑_I ∈℘_r(Ω)R_I,K(x) · w_I.
Finally, we see what happens when we restrict the maps R_I,K to some isotypical components of ⋀_ℂ[G]^rℂE_S(K).
Let χ∈G be such that χ≠χ_1. Moreover, let I ∈℘_r(Ω) be such that there exists i ∈ I for which G_i⊈ ker(χ). Then R_I,K(x) = 0 for all x ∈⋀_ℂ[G]^rℂE_S(K) · e_χ.
If x_1,…,x_r ∈ ℂE_S(K), then

R_I,K(x_1∧…∧x_r · e_χ) = R_I,K((x_1·e_χ)∧…∧(x_r·e_χ)) = det( ℓ_i_s,K(x_t·e_χ) )_s,t=1,…,r.

Now, there exists s ∈ {1,…,r} such that G_i_s ⊈ ker(χ). Hence, Proposition <ref> implies ℓ_i_s,K(x_t·e_χ) = 0 for all t=1,…,r.

§ STARK'S CONJECTURE

§.§ Artin systems of S_K-units

Recall that Stark's original idea was to break down the S_K-regulator into χ-components, and Artin systems of S_K-units played an important role in doing so. In this section, we give a definition for an Artin system of S_K-units. As before, K/k is an abelian extension of number fields, and S is a finite set of places of k containing S_∞. We start with the following definition.

An Artin system of S_K-units 𝒜 is a collection of S_K-units

𝒜 = {ε_w | w ∈ S_K} ⊆ E_S(K),

such that the group morphism f: Y_S(K) ⟶ E_S(K) defined by w ↦ ε_w satisfies the following properties:
* f is G-equivariant,
* ker(f) = ℤ·α for some α ∈ Y_S(K)^G that satisfies s_K(α) ≠ 0.

Note that G acts trivially on ℤ·α. Moreover, since

rank_ℤ(Y_S(K)/ℤ·α) = |S_K| - 1,

the cokernel of f is finite. As a result, an Artin system of S_K-units can be conveniently described by an exact sequence of ℤ[G]-modules

0 ⟶ ℤ·α ⟶ Y_S(K) f⟶ E_S(K) ⟶ A ⟶ 0,

where A is some ℤ[G]-module with finite cardinality. Letting d_0 = s_K(α), and applying the snake lemma to the commutative diagram

0 ⟶   0    ⟶  ℤ·α   s_K⟶ ℤ·d_0 ⟶ 0
       ↓         ↓            ↓
0 ⟶ X_S(K) ⟶ Y_S(K) s_K⟶   ℤ   ⟶ 0

(the vertical arrows being the obvious ones) leads to the short exact sequence of ℤ[G]-modules

0 ⟶ X_S(K) ⟶ Y_S(K)/ℤ·α ⟶ ℤ/d_0ℤ ⟶ 0.

Therefore, the morphism f: Y_S(K) ⟶ E_S(K) induces by restriction an injective morphism of ℤ[G]-modules

f: X_S(K) ↪ E_S(K).

In particular, the image of X_S(K) via f gives a group of S_K-units that is of finite index in E_S(K).

Let 𝒜 = {ε_w | w ∈ S_K} be an Artin system of S_K-units, and let χ∈G be a non-trivial character. Furthermore, let v ∈ S and let w be a place of K lying above v. In ℂE_S(K), we have
* If G_v ⊆ ker(χ), then ε_w·e_χ ≠ 0,
* If G_v ⊈ ker(χ), then ε_w·e_χ = 0.

Tensoring the exact sequence (<ref>) with ℂ leads to the short exact sequence of ℂ[G]-modules:

0 ⟶ ℂ·α ⟶ ℂY_S(K) f_ℂ⟶ ℂE_S(K) ⟶ 0.

Since χ≠χ_1 and G acts trivially on ℂ·α, we get an isomorphism of ℂ[G]-modules

f_ℂ^χ: ℂY_S(K)·e_χ ≃⟶ ℂE_S(K)·e_χ.

The result then follows from Proposition <ref>.

From now on, for v ∈ S, we let T_v(K) = ∑_w|v w ∈ Y_S(K). Note that α ∈ Y_S(K) is fixed by G if and only if

α = ∑_v∈S n_v·T_v(K),

for some n_v ∈ ℤ.

Let K/k be a finite abelian extension of number fields and let S be a finite set of places of k containing S_∞. Then there exist Artin systems of S_K-units.

For the reader's convenience, we include here the proof contained in <cit.>. (In <cit.>, only the case S = S_∞ was treated, but the argument works for any finite set of places S that contains S_∞.) For each i=1,…,n, let β_i ∈ E_S(K) be such that
* |β_i|_w_i > 1,
* |β_i|_w < 1 for all w ∈ S_K satisfying w ≠ w_i.
The existence of S_K-units with those properties is a well-known result of algebraic number theory.
See §1 of Chapter V in <cit.> for instance. Then set

γ_i = β_i^N_i,

where N_i = N_G_i. Note that γ_i ∈ K^G_i. A simple calculation shows that the S_K-units γ_i still satisfy
* |γ_i|_w_i > 1,
* |γ_i|_w < 1 for all w ∈ S_K satisfying w ≠ w_i.
If w | v_i, then there exists σ ∈ G such that w = w_i^σ. Set

ε_w = γ_i^σ.

The S_K-units ε_w do not depend on the choice of σ. Moreover, they satisfy ε_w^τ = ε_w^τ, that is, ε_w^τ = ε_(w^τ), for all τ ∈ G, and also
* |ε_w|_w > 1,
* |ε_w|_w' < 1 for all places w' ∈ S_K satisfying w' ≠ w.
By Lemma <ref> below, removing any S_K-unit from the set {ε_w | w ∈ S_K} gives a system of independent S_K-units. Therefore, there is precisely one relation among them, say

∏_w∈S_K ε_w^n_w = 1

for some integers n_w. Since the group G acts transitively on the places lying above a fixed place v ∈ S, we see that for all v ∈ S, there exists n_v ∈ ℤ such that n_w = n_v whenever w | v. Taking the inverses of some of the ε_w if necessary, one can assume that n_v ≥ 0 for all v ∈ S. The set {ε_w | w ∈ S_K} is the desired Artin system of S_K-units, since the kernel of the ℤ[G]-module morphism f: Y_S(K) ⟶ E_S(K) defined by f(w) = ε_w is ℤ·α, where

α = ∑_v∈S n_v·T_v(K),

and s_K(α) = ∑_v∈S n_v |G|/|G_v| ≠ 0.

It is always possible to take α = ∑_v∈S T_v(K) just by setting δ_w = ε_w^n_v in the last proof. Then one has ∏_w∈S_K δ_w = 1. Numerically, it is more convenient to allow any α ∈ Y_S(K)^G, because the index

m = [E_S(K):μ(K)·f(X_S(K))]

is usually smaller.

The following lemma is simple and we skip its proof.

Let A = (a_ij) ∈ M_n(ℝ) be a matrix satisfying
* a_ij < 0 whenever i ≠ j,
* ∑_j=1^n a_ij > 0 for all i=1,…,n.
Then det(A) ≠ 0.

We remark that an Artin system of S_K-units exists as well in the case of a non-abelian Galois extension K/k, but we restrict ourselves to the abelian case in this paper.

§.§ The Stark regulator

If L is any number field, let us start by reminding the reader about the regulator of a subgroup of units of L. For the moment, we fix a finite set of places S of L, and we let n = |S|. Given a subgroup 𝒰 of E_S(L) such that E_S(L)/𝒰 is finite, we define Reg_L,S(𝒰) ∈ ℝ/{±1} as follows. If {η_1,…,η_n-1} is a set of units whose classes in 𝒰/𝒰_tor form a ℤ-basis, then consider the matrix

(log|η_j|_w) ∈ M_n,n-1(ℝ),

where j=1,…,n-1 and w ∈ S. The regulator Reg_L,S(𝒰) is defined to be the determinant of the matrix (<ref>) after removing one row. Note that removing a different row or choosing another ℤ-basis for 𝒰/𝒰_tor will change the determinant by at most a sign. Hence, this definition makes sense modulo {±1}. Also, we have

R_L,S = |Reg_L,S(E_S(L))|.

The following proposition is well-known, and we skip its proof.
Given a subgroup 𝒰 of E_S(L) such that E_S(L)/𝒰 is finite, we have

[E_S(L):μ(L)·𝒰] = |Reg_L,S(𝒰)| / R_L,S.

We now go back to our setting where K/k is a finite abelian extension of number fields and S is a finite set of places of k containing S_∞. Even though it is not clear how to break up R_K,S into χ-components, it is possible to do so with Reg_K,S(𝒰), where 𝒰 is a group of S_K-units coming from an Artin system of S_K-units. (Here we write Reg_K,S(𝒰) rather than Reg_K,S_K(𝒰) in order to simplify the notation.) This fact was recognized by Stark in <cit.>, and this gives a way of breaking up R_K,S into χ-components, at least up to a rational number, namely the index [E_S(K):μ(K)·𝒰]. Here is how this works. Starting with an Artin system of S_K-units 𝒜 and its corresponding morphism f: Y_S(K) ⟶ E_S(K), we have an induced morphism of ℂ[G]-modules f_ℂ: ℂY_S(K) ⟶ ℂE_S(K). We will now look at the isomorphism of ℂ[G]-modules

f_ℂ∘λ_K,S: ℂE_S(K) ⟶ ℂE_S(K).

Since this map is a linear endomorphism of the ℂ-vector space ℂE_S(K), we can talk about its determinant. Recall also that from (<ref>), f(X_S(K)) is a group of finite index in E_S(K).

Let 𝒜 = {ε_w | w ∈ S_K} be an Artin system of S_K-units with corresponding morphism f. Then

det(f_ℂ∘λ_K,S) = ± Reg_K,S(𝒰_f),

where 𝒰_f = f(X_S(K)).

Let w_0 be any place in S_K. Note that the images of the S_K-units {ε_w ε_w_0^-1 | w ∈ S_K, w ≠ w_0} in ℂE_S(K) form a basis of the ℂ-vector space ℂE_S(K). These S_K-units also form a ℤ-basis of 𝒰_f modulo its torsion subgroup. We calculate

f_ℂ∘λ_K,S(ε_w ε_w_0^-1) = f_ℂ( ∑_v∈S_K log|ε_w ε_w_0^-1|_v · v ) = ∑_v∈S_K, v≠w_0 log|ε_w ε_w_0^-1|_v · (ε_v ε_w_0^-1).

Hence det(f_ℂ∘λ_K,S) = ± Reg_K,S(𝒰_f), as we wanted to show.

Moreover, since f_ℂ∘λ_K,S is a morphism of ℂ[G]-modules, we have

det(f_ℂ∘λ_K,S) = ∏_χ∈G det( (f_ℂ∘λ_K,S)^χ ).

Let 𝒜 be an Artin system of S_K-units with corresponding morphism f. Given χ∈G, one defines the Stark regulator associated to χ and 𝒜 to be

R(χ,𝒜) = det( (f_ℂ∘λ_K,S)^χ ).

Combining Proposition <ref>, Proposition <ref> and (<ref>) leads to the formula

± R_K,S = (1/[E_S(K):μ(K)·𝒰_f]) ∏_χ∈G R(χ,𝒜).
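The definition of Reg_L,S(𝒰) is easy to evaluate numerically: form the matrix of logarithms, drop one row, and take the determinant. The Python sketch below does so for L = ℚ(√10) with the fundamental S_L-units {u, 2, √10} of the example treated later in the paper; the normalization |x|_w = ℕ(𝔓)^(-v_𝔓(x)) at the finite places is our assumption (spelled out in the comments), and the vanishing column sums are exactly the product formula.

```python
import math
import numpy as np

# Sketch of Reg_{L,S}(U): matrix (log|eta_j|_w), drop one row, take |det|.
# Data: L = Q(sqrt(10)), S_L = {w1, w1', P2, P5}, units {u, 2, sqrt(10)},
# with u = 3 + sqrt(10); finite places use |x|_w = N(P)^(-v_P(x)).
s10 = math.sqrt(10)
log_abs = np.array([
    # columns: u = 3+sqrt(10),   2,                sqrt(10)
    [math.log(3 + s10),  math.log(2),      0.5 * math.log(10)],  # w1
    [math.log(s10 - 3),  math.log(2),      0.5 * math.log(10)],  # w1'
    [0.0,                -2 * math.log(2), -math.log(2)],        # P2
    [0.0,                0.0,              -math.log(5)],        # P5
])
assert np.allclose(log_abs.sum(axis=0), 0)  # product formula, column-wise

regs = [abs(np.linalg.det(np.delete(log_abs, s, axis=0))) for s in range(4)]
assert np.allclose(regs, regs[0])           # independent of the removed row
print("R_{L,S} =", regs[0])
```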
We now present an alternative description of the Stark regulator. Given an Artin system of S_K-units 𝒜 = {ε_w | w ∈ S_K} and an integer i satisfying 1 ≤ i ≤ |S|, we will write ε_i rather than ε_w_i.

Let 𝒜 = {ε_w | w ∈ S_K} be an Artin system of S_K-units.
* Let χ∈G be such that χ≠χ_1. Let r = ord_s=0 L_K,S(s,χ), and let I = (i_1,…,i_r) be the unique element of ℘_r(Ω) such that G_i_t ⊆ ker(χ) for all t=1,…,r. Then, one has

R(χ,𝒜) = χ( R_I,K(ε_i_1∧…∧ε_i_r) ).

* Let χ = χ_1 be the trivial character. Let

r = ord_s=0 L_K,S(s,χ_1) = |S| - 1

and let I = (i_1,…,i_r) be any element of ℘_r(Ω). Then, one has

R(χ_1,𝒜) = χ_1( R_I,K((ε_i_1 ε_i_r+1^-1)∧…∧(ε_i_r ε_i_r+1^-1)) ),

where i_r+1 is the unique index in Ω that is not in I.

Starting with the exact sequence (<ref>), one gets the following short exact sequence of ℂ[G]-modules:

0 ⟶ ℂ·α ⟶ ℂY_S(K) f_ℂ⟶ ℂE_S(K) ⟶ 0.

Since G acts trivially on ℂ·α, and χ≠χ_1, one gets an isomorphism of ℂ[G]-modules:

f_ℂ^χ: ℂY_S(K)·e_χ ⟶ ℂE_S(K)·e_χ.

By Proposition <ref>, a ℂ-basis for ℂY_S(K)·e_χ is given by {w_i_t·e_χ | t=1,…,r}, and therefore a ℂ-basis for ℂE_S(K)·e_χ is given by {ε_i_t·e_χ | t=1,…,r}. It follows that a ℂ-basis for the one-dimensional ℂ-vector space ⋀_ℂ[G]^r ℂE_S(K)·e_χ is given by ε_i_1∧…∧ε_i_r·e_χ. Using Proposition <ref>, we calculate

∧^r(f_ℂ∘λ_K,S)(ε_i_1∧…∧ε_i_r·e_χ) = ∧^r f_ℂ( ∑_J∈℘_r(Ω) R_J,K(ε_i_1∧…∧ε_i_r·e_χ)·w_J ) = ∧^r f_ℂ( R_I,K(ε_i_1∧…∧ε_i_r·e_χ)·w_I ) = χ( R_I,K(ε_i_1∧…∧ε_i_r) )·ε_i_1∧…∧ε_i_r·e_χ,

and this shows (<ref>). For (<ref>), we proceed as follows. Let I ∈℘_r(Ω) and let Ω∖I = {i_r+1}. Since {w_i·e_χ_1 | i=1,…,|S|} is a ℂ-basis for ℂY_S(K)·e_χ_1, we get that

{(w_i_s - w_i_r+1)·e_χ_1 | s=1,…,r}

is a ℂ-basis for ℂX_S(K)·e_χ_1. The isomorphism

f: ℂX_S(K)·e_χ_1 ≃⟶ ℂE_S(K)·e_χ_1

then implies that {ε_i_s·ε_i_r+1^-1·e_χ_1 | s=1,…,r} is a ℂ-basis for ℂE_S(K)·e_χ_1. It follows that a ℂ-basis for ⋀_ℂ[G]^r ℂE_S(K)·e_χ_1 is given by (ε_i_1ε_i_r+1^-1)∧…∧(ε_i_rε_i_r+1^-1)·e_χ_1. We calculate

∧^r f∘λ_K,S((ε_i_1ε_i_r+1^-1)∧…∧(ε_i_rε_i_r+1^-1)·e_χ_1) = ∧^r f( ∑_J∈℘_r(Ω) R_J,K((ε_i_1ε_i_r+1^-1)∧…∧(ε_i_rε_i_r+1^-1)·e_χ_1) w_J ) = ∧^r f( R_I,K((ε_i_1ε_i_r+1^-1)∧…∧(ε_i_rε_i_r+1^-1)) (w_i_1 - w_i_r+1)∧…∧(w_i_r - w_i_r+1)·e_χ_1 ) = χ_1( R_I,K((ε_i_1ε_i_r+1^-1)∧…∧(ε_i_rε_i_r+1^-1)) )·(ε_i_1ε_i_r+1^-1)∧…∧(ε_i_rε_i_r+1^-1)·e_χ_1.

This completes the proof.

Proposition <ref> can be viewed as a generalization of §9, Chapter I of <cit.> (in the abelian setting).

§.§ Stark's conjecture over ℚ

Now that we have decomposed the S_K-regulator R_K,S into χ-components, at least up to a rational number, the hope is that the decomposition (<ref>) would somehow match the decomposition (<ref>), and this is expressed in the following first conjecture of Stark (namely, the Conjecture on page 61 of <cit.>, reformulated as Conjecture 5.1 on page 27 of <cit.>). Note that the original conjecture was formulated for a general S-truncated Artin L-function, whereas we only treat the case where K/k is an abelian extension of number fields. But there is no loss in generality in doing so for Stark's conjecture over ℚ because of Proposition 7.2 of <cit.>. (Stark-type conjectures over ℤ in the non-abelian setting have only recently been formulated. See, for instance, <cit.>.)
[Stark's conjecture over ℚ] Let 𝒜 be an Artin system of S_K-units. For χ∈G, we set

A(χ,𝒜) = L_K,S^*(0,χ) / R(χ,𝒜).

Then
* A(χ,𝒜) ∈ ℚ,
* A(χ,𝒜)^g = A(χ^g,𝒜) for all g ∈ Gal(ℚ̄/ℚ).

We shall now rephrase Stark's conjecture over ℚ in a slightly different way. Let us define

β_S(𝒜) = ∑_χ∈G A(χ,𝒜)·e_χ.

Stark's conjecture over ℚ for all χ∈G is equivalent to

β_S(𝒜) ∈ ℚ[G].

Assume first that Stark's conjecture over ℚ is true for all χ∈G. If g ∈ Gal(ℚ̄/ℚ), we have

β_S(𝒜)^g = ∑_χ∈G A(χ,𝒜)^g·e_χ^g = ∑_χ∈G A(χ^g,𝒜)·e_χ^g = β_S(𝒜).

Hence, β_S(𝒜) ∈ ℚ[G]. Conversely, if we define m_σ for σ ∈ G via the equation

β_S(𝒜) = ∑_σ∈G m_σ·σ^-1,

then the m_σ and the A(χ,𝒜) are related via the formulas

m_σ = 1/|G| ∑_χ∈G χ(σ) A(χ,𝒜)   and   A(χ,𝒜) = ∑_σ∈G χ(σ^-1) m_σ.

This last equation shows that if β_S(𝒜) ∈ ℚ[G], then A(χ,𝒜) ∈ ℚ. Moreover, if β_S(𝒜) ∈ ℚ[G], then β_S(𝒜)^g = β_S(𝒜) for all g ∈ Gal(ℚ̄/ℚ). But this last equation can be rewritten as

∑_χ∈G A(χ^g^-1,𝒜)^g·e_χ = ∑_χ∈G A(χ,𝒜)·e_χ,

and this shows the desired result.

§.§ Popescu's conjecture

In this subsection, r will stand for an integer satisfying 1 ≤ r ≤ |S|. Popescu's conjecture concerns the S-truncated L-functions having minimal order of vanishing only, and is formulated under the following hypothesis.
* The set S contains S_∞ and the places that ramify in K/k.
* The set S contains at least r places that split completely in K/k, say v_1,…,v_r.
* The set S satisfies |S| ≥ r+1.

Points (<ref>) and (<ref>) together with Theorem <ref> imply that ord_s=0 L_K,S(s,χ) ≥ r for all χ∈G. From now on, we let

G_r,S = {χ∈G | χ≠χ_1 and r_S(χ) = r}.

We also define

e_r,S = ∑_χ∈G_r,S e_χ ∈ ℚ[G].

Moreover, we set

G_r,S' = G_r,S if |S| ≥ r+2, and G_r,S' = G_r,S ∪ {χ_1} if |S| = r+1,

and

e_r,S' = e_r,S if |S| ≥ r+2, and e_r,S' = e_r,S + e_χ_1 if |S| = r+1.

Note that e_r,S, e_χ_1, and e_r,S' ∈ ℚ[G]. Moreover, if S satisfies (<ref>) of Hypothesis <ref> and χ∈G_r,S, then G_1,…,G_r, which are trivial in this case, are the unique decomposition groups contained in ker(χ) by Theorem <ref>.
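The inversion formulas relating the A(χ,𝒜) to the coefficients m_σ in the proof of Theorem <ref> above are just a finite Fourier transform on G. The following Python sketch checks the roundtrip for a cyclic group of order 4 with arbitrary test values; none of this is data from the paper.

```python
import numpy as np

# Illustration only: roundtrip between A(chi) and the coefficients m_sigma
# of beta_S(A) = sum_sigma m_sigma * sigma^{-1}, for a cyclic group.
ng = 4
rng = np.random.default_rng(0)
A_vals = rng.normal(size=ng) + 0j                    # arbitrary A(chi_j)

chi = lambda j, a: np.exp(2j * np.pi * j * a / ng)   # chi_j(sigma^a)

# m_{sigma^a} = (1/|G|) sum_j chi_j(sigma^a) A(chi_j)
m = np.array([sum(chi(j, a) * A_vals[j] for j in range(ng)) / ng
              for a in range(ng)])

# recover A(chi_j) = sum_a chi_j(sigma^{-a}) m_{sigma^a}
A_back = np.array([sum(chi(j, -a) * m[a] for a in range(ng))
                   for j in range(ng)])
assert np.allclose(A_back, A_vals)
print("inversion formulas verified")
```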
* Assuming that the set S satisfies (<ref>) of Hypothesis <ref>, the ℂ[G]-linear morphism

R_I,K: ⋀_ℂ[G]^r ℂE_S(K)·e_r,S ⟶ ℂ[G]·e_r,S,

where I = (1,2,…,r), is an isomorphism of ℂ[G]-modules.
* Assuming that |S| = r+1, then for any J ∈℘_r(Ω) the map

R_J,K: ⋀_ℂ[G]^r ℂE_S(K)·e_χ_1 ⟶ ℂ[G]·e_χ_1

is an isomorphism of ℂ[G]-modules. Moreover, R_J_1,K = ± R_J_2,K on ℂE_S(K)·e_χ_1 for any J_1,J_2 ∈℘_r(Ω).

For the first part, note that

dim_ℂ( ⋀_ℂ[G]^r ℂE_S(K)·e_r,S ) = |G_r,S| = dim_ℂ( ℂ[G]·e_r,S ).

It is therefore sufficient to show that R_I,K is injective. But if R_I,K(x·e_r,S) = 0 for some x ∈ ⋀_ℂ[G]^r ℂE_S(K), then

∧^r λ_K,S(x·e_r,S) = ∑_J∈℘_r(Ω) R_J,K(x·e_r,S)·w_J = R_I,K(x·e_r,S) = 0,

by Proposition <ref>. Since ∧^r λ_K,S is injective, we get that x·e_r,S = 0, as we wanted to show. For the second part, a simple calculation using the product formula shows that R_J_1,K = ± R_J_2,K on ⋀_ℂ[G]^r ℂE_S(K)·e_χ_1 for all J_1,J_2 ∈℘_r(Ω). Now, we have again

dim_ℂ( ⋀_ℂ[G]^r ℂE_S(K)·e_χ_1 ) = 1 = dim_ℂ( ℂ[G]·e_χ_1 ),

and hence it is sufficient to show that R_J,K is injective. But if R_J,K(x·e_χ_1) = 0 for some x ∈ ⋀_ℂ[G]^r ℂE_S(K), then it follows that R_J',K(x·e_χ_1) = 0 for all J' ∈℘_r(Ω). Therefore, by Proposition <ref> we have

∧^r λ_K,S(x·e_χ_1) = ∑_J'∈℘_r(Ω) R_J',K(x·e_χ_1)·w_J' = 0.

Since ∧^r λ_K,S is injective, this ends the proof.

We now define some evaluators that are the main objects of study regarding Popescu's conjecture.
* Assuming (<ref>) of Hypothesis <ref>, we define the evaluator η ∈ ⋀_ℂ[G]^r ℂE_S(K)·e_r,S to be the unique element of ⋀_ℂ[G]^r ℂE_S(K)·e_r,S such that

R_I,K(η) = θ_K,S^*(0)·e_r,S,

where I = (1,2,…,r).
* Assuming that |S| = r+1, for J ∈℘_r(Ω), we define the evaluator δ_J to be the unique element of ⋀_ℂ[G]^r ℂE_S(K)·e_χ_1 satisfying

R_J,K(δ_J) = θ_K,S^*(0)·e_χ_1.

* Assuming (<ref>) and (<ref>) of Hypothesis <ref>, we let η' = η if |S| ≥ r+2, and η' = η + δ_I if |S| = r+1, where I = (1,2,…,r).

The uniqueness of these evaluators follows from Proposition <ref>. Moreover, η' is the unique element of ⋀_ℂ[G]^r ℂE_S(K)·e_r,S' satisfying

R_I,K(η') = θ_K,S^*(0)·e_r,S'.

The following proposition turns out to be important for us.

With the notation as above, if S satisfies (<ref>) and (<ref>) of Hypothesis <ref>, and if 𝒜 = {ε_w | w ∈ S_K} is an Artin system of S_K-units, then

η' = β_S(𝒜)·e_r,S·ε_1∧…∧ε_r if |S| ≥ r+2, and η' = β_S(𝒜)·e_r,S'·(ε_1ε_r+1^-1)∧…∧(ε_rε_r+1^-1) if |S| = r+1.

Let 𝒜 = {ε_w | w ∈ S_K} be an Artin system of S_K-units. Assuming first that |S| = r+1, and using Propositions <ref> and <ref>, we calculate

θ_K,S^*(0)·e_r,S' = ∑_χ∈G_r,S L_K,S^*(0,χ)·e_χ + L_K,S^*(0,χ_1)·e_χ_1 = ∑_χ∈G_r,S A(χ,𝒜)R(χ,𝒜)·e_χ + A(χ_1,𝒜)R(χ_1,𝒜)·e_χ_1 = R_I,K( (ε_1ε_r+1^-1)∧…∧(ε_rε_r+1^-1) ) β_S(𝒜)·e_r,S'.

It follows that

η' = β_S(𝒜)·e_r,S'·(ε_1ε_r+1^-1)∧…∧(ε_rε_r+1^-1).

If |S| > r+1, the calculation is similar and left to the reader.

As a corollary, we obtain:

If Stark's conjecture over ℚ is true, then

η' ∈ ℚ⋀_ℤ[G]^r E_S(K) ≃ ⋀_ℚ[G]^r ℚE_S(K).

This follows from Proposition <ref>, Theorem <ref>, and the fact that e_r,S' ∈ ℚ[G].

Corollary <ref> is well-known, but has never been spelled out explicitly in terms of an Artin system of S_K-units. See for instance Proposition 2.3 of <cit.>.
If M is a ℤ[G]-module, then we let M^* = Hom_ℤ[G](M,ℤ[G]); that is, M^* is the dual of M in the category of ℤ[G]-modules. If φ ∈ M^*, then for any integer r ≥ 1 it induces a ℤ[G]-module morphism

φ̃: ⋀_ℤ[G]^r M ⟶ ⋀_ℤ[G]^r-1 M,

defined by

m_1∧…∧m_r ↦ ∑_i=1^r (-1)^i+1 φ(m_i) m_1∧…∧m_i-1∧m_i+1∧…∧m_r.

If φ_1,…,φ_k ∈ M^*, then iterating this process gives a ℤ[G]-module morphism

⋀_ℤ[G]^k M^* ⟶ Hom_ℤ[G]( ⋀_ℤ[G]^r M, ⋀_ℤ[G]^r-k M ),

defined by φ_1∧…∧φ_k ↦ φ̃_k∘…∘φ̃_1. When k = r-1, we obtain a map

⋀_ℤ[G]^r-1 M^* ⟶ Hom_ℤ[G]( ⋀_ℤ[G]^r M, M ).

If M is a ℤ[G]-module, then we shall denote the natural map M ⟶ ℚM by m ↦ m̅. Moreover, we let

E_S(K)^ab = {u ∈ E_S(K) | K(u^1/w_K)/k is abelian}.

One can check that E_S(K)^ab is a ℤ[G]-submodule of E_S(K). In <cit.>, Popescu defines the following lattice:

With notation as above, we set

Λ_K,S^ab = { x ∈ ℚ⋀_ℤ[G]^r E_S(K) | φ_1∧…∧φ_r-1(x) ∈ E̅_S(K)^ab for all φ_1,…,φ_r-1 ∈ E_S(K)^* }.

Moreover, he states the following conjecture:

[Popescu] Assuming that Hypothesis <ref> is satisfied, one has

w_K·η' ∈ Λ_K,S^ab.

When r=1, one recovers Stark's abelian rank one conjecture (Conjecture 1 of <cit.> or Conjecture 2.1 on page 89 of <cit.>), since Λ_K,S^ab = E̅_S(K)^ab. That is, if K/k is a finite abelian extension of number fields such that Hypothesis <ref> is satisfied for r=1, then there exists an S_K-unit ε_0 ∈ E_S(K) satisfying
* e_1,S'·ε̅_0 = ε̅_0 in ℚE_S(K),
* L_K,S^*(0,χ) = -(1/w_K) ∑_σ∈G χ(σ) log|ε_0^σ|_w_1 for all χ∈G_1,S',
* K(ε_0^1/w_K)/k is a finite abelian extension of number fields.
Such an S_K-unit is called a Stark unit and is unique up to a root of unity. If necessary, see §3.8 of <cit.> for a comparison between the various slightly different formulations of Stark's abelian rank one conjecture that one can find in the literature.

If |S| ≥ r+2, and 𝒜 = {ε_w | w ∈ S_K} is an Artin system of S_K-units, then Proposition <ref> gives

η' = η = β_S(𝒜)·e_r,S·ε_1∧…∧ε_r.

Hence, Popescu's conjecture predicts that

w_K·β_S(𝒜)·e_r,S·ε_1∧…∧ε_r ∈ Λ_K,S^ab.

Note that if φ_1,…,φ_r-1 ∈ E_S(K)^*, then φ_1∧…∧φ_r-1(ε_1∧…∧ε_r) ∈ E_S(K). Assuming Stark's conjecture over ℚ, one expects

w_K·β_S(𝒜)·e_r,S ∈ ℚ[G].

Therefore, Stark's conjecture over ℚ and Popescu's conjecture together predict that for all φ_1,…,φ_r-1 ∈ E_S(K)^*, there exists an S_K-unit ε ∈ E_S(K)^ab (which depends on φ_1,…,φ_r-1 and 𝒜) such that in ℚE_S(K) one has

w_K·β_S(𝒜)·e_r,S·u̅ = ε̅, where u = φ_1∧…∧φ_r-1(ε_1∧…∧ε_r).

If |S| = r+1, one has a similar prediction, but with a slightly different formula for η' as explained in Proposition <ref>. This observation can be used to perform numerical verifications of Popescu's conjecture. We explain this in more detail in <ref> below.

Starting with the short exact sequence of ℤ[G]-modules

1 ⟶ μ(K) ⟶ E_S(K) ⟶ E̅_S(K) ⟶ 1,

and applying the functor Hom_ℤ[G](·,ℤ[G]), one gets an isomorphism of abelian groups

Hom_ℤ[G](E̅_S(K),ℤ[G]) ≃⟶ Hom_ℤ[G](E_S(K),ℤ[G]),

since ℤ[G] is ℤ-free and μ(K) is finite. In the sequel, we will identify elements of E̅_S(K)^* with elements of E_S(K)^* using this isomorphism. Furthermore, we remind the reader that given a ℤ[G]-module M, one has an isomorphism of abelian groups

Hom_ℤ(M,ℤ) ≃⟶ Hom_ℤ[G](M,ℤ[G])

given by f ↦ f̂, where

f̂(m) = ∑_σ∈G f(σ^-1·m)·σ.
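Before continuing, here is a small, self-contained Python model of the alternating-sum map φ̃ introduced above, acting on pure wedges stored as tuples of vectors; the functional φ and the vectors are made up for illustration.

```python
# Toy sketch of phi~ : wedge^r M -> wedge^(r-1) M,
#   m1 ^ ... ^ mr  |->  sum_i (-1)^(i+1) phi(m_i) * (wedge with m_i omitted).
# A pure wedge is a tuple of vectors; the result is a list of
# (coefficient, pure wedge of length r-1) pairs.

def phi_tilde(phi, wedge):
    out = []
    for i, m in enumerate(wedge):
        sign = (-1) ** i            # equals (-1)^(i+1) for 1-based indices
        rest = wedge[:i] + wedge[i + 1:]
        out.append((sign * phi(m), rest))
    return out

phi = lambda v: v[0] + 2 * v[1]     # an arbitrary functional on Z^2
w = ((1, 0), (0, 1), (3, 5))        # an arbitrary pure 3-wedge
for coeff, rest in phi_tilde(phi, w):
    print(coeff, rest)
```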
Starting with a set of fundamental S_K-units η_1,…,η_t for E_S(K), we can consider the η_i^*∈ Hom_ℤ(E_S(K),ℤ) defined byη_i^*(η_j) = δ_ij,where δ_ij is the Kronecker symbol.Using the isomorphisms (<ref>) and (<ref>) above, one finds thatΣ ={η_i^*| i=1,…,t }is a generating set for E_S(K)^*.Therefore,Λ_K,S^ab = {x ∈ℚ⋀_ℤ[G]^rE_S(K)|φ_1∧…∧φ_r-1(x) ∈E_S(K)^ab,for all φ_1,…,φ_r-1∈Σ}.Using this last remark, one can check that a given element x ∈ℚ⋀_ℤ[G]^rE_S(K) lies in Λ_K,S^ab in finitely many steps. §.§ Burns's conjecture In this subsection, r will stand for an integer satisfying 1 ≤ r ≤ |S|.Burns's conjecture is formulated under the same hypotheses as Popescu's conjecture, namely Hypothesis <ref>.We letS_r = {v_1,…,v_r}.We now specialize Conjecture 4.4.1 of <cit.> to the abelian setting and to an S-situation rather than a T-modified version. [Burns]With the same notation as above, for every ϕ∈ Hom_ℤ[G](E_S(K),X_S(K)) one hasw_K·θ_K,S^*(0) · e_r,S' · det_ℂ[G](λ_K,S^-1∘ϕ_ℂ) ∈ℤ[G].Moreover, * One hasw_K·θ_K,S^*(0) · e_r,S' · det_ℂ[G](λ_K,S^-1∘ϕ_ℂ) ∈ Ann_ℤ[G](Cl_S(K)), * If S' is any finite set of places of k satisfying S_∞∪ S_r⊆ S' ⊆ S, then for any b ∈⋃_v ∈ S ∖ S' Ann_ℤ[G](ℤ[G/G_v] ),one hasb · w_K·θ_K,S^*(0) · e_r,S' · det_ℂ[G](λ_K,S^-1∘ϕ_ℂ) ∈ Ann_ℤ[G](Cl_S'(K)). If r=0, then θ_K,S^*(0) · e_r,S' · det_ℂ[G](λ_K,S^-1∘ϕ_ℂ) = θ_K,S(0),and Brumer's classical conjecture on annihilation of class groups predicts that θ_K,S(0) ∈ Ann_ℤ[G](Cl(K)).Note that since X_S(K) is ℤ-free, we have an isomorphism of abelian groupsHom_ℤ[G](E_S(K),X_S(K)) ≃ Hom_ℤ[G](E_S(K),X_S(K)),so from now on, we will identify these two abelian groups. If we start with an Artin system of S_K-units {ε_w| w ∈ S_K} with induced ℤ[G]-morphismf:Y_S(K) ⟶ E_S(K),then it induces an injective morphism of ℤ[G]-modulesf:X_S(K) ↪ E_S(K).Therefore, we get an isomorphism of ℚ[G]-modulesf_ℚ:ℚX_S(K) ≃⟶ℚE_S(K).Lettingm = [E_S(K):μ(K) · f(X_S(K))],it is simple to check that the inverse map f_ℚ^-1:ℚE_S(K) ⟶ℚX_S(K) induces a morphismm · f_ℚ^-1:E_S(K)⟶ X_S(K).Letting ϕ = m · f_ℚ^-1, one hasθ_K,S^*(0) · e_r,S' · det_ℂ[G](λ_K,S^-1∘ϕ_ℂ)= θ_K,S^*(0)/ det_ℂ[G](ϕ_ℂ^-1∘λ_K,S)· e_r,S' = m^rθ_K,S^*(0)/ det_ℂ[G](f_ℂ∘λ_K,S)· e_r,S'= m^r·β_S(𝒜) · e_r,S'.Therefore, a particular case of Burns's conjecture could be phrased as follows: [Burns]Let K/k be a finite abelian extension of number fields with Galois group G and let S be a finite set of places of k satisfying Hypothesis <ref>.Given an Artin system of S_K-units 𝒜, one hasw_K· m^r·β_S(𝒜) · e_r,S' ∈ℤ[G].Moreover, * One has w_K· m^r·β_S(𝒜) · e_r,S' ∈ Ann_ℤ[G](Cl_S(K)). * If S' is any finite set of places of k satisfying S_∞∪ S_r⊆ S' ⊆ S, then for anyb ∈⋃_v ∈ S ∖ S' Ann_ℤ[G](ℤ[G/G_v] ),one hasb · w_K· m^r·β_S(𝒜) · e_r,S' ∈ Ann_ℤ[G](Cl_S'(K)).§.§ A simple example In this section, we study in detail a simple example in the order of vanishing one situation.Specifically, we take k = ℚ and K = ℚ(√(10)), and we let G = ⟨σ⟩.Note that h_K=2 and we set S = {v_1,v_2,v_3} ={∞,2,5 }. 
The primes 2 and 5 are ramified in K/ℚ and we let 𝔭_2 and 𝔭_5 be the prime ideals of K that satisfy

(2) = 𝔭_2^2 and (5) = 𝔭_5^2.

One has 𝔭_2 = (2,√10) and 𝔭_5 = (5,√10). Also,

(√10) = 𝔭_2·𝔭_5.

It follows that h_K,S = 1. We list the places of S_K as follows:

{w_1, w_1', w_2, w_3},

where w_1 corresponds to the real embedding √10 ↦ √10, w_1' to the real embedding √10 ↦ -√10, w_2 to the prime ideal 𝔭_2, and w_3 to the prime ideal 𝔭_5. From now on, we let

u = 3 + √10.

Note that u is a fundamental unit for E(K). We have G = {χ_1,χ}, where χ is the unique non-trivial character of K/ℚ. Note that

r_S(χ) = 1 and r_S(χ_1) = 2.

A simple calculation using formulas (<ref>) and (<ref>) of <ref> shows that

L_K,S^*(0,χ) = h_K·R_K = 2·log|u|_w_1

and

L_K,S^*(0,χ_1) = -h_ℚ,S·R_ℚ,S/w_ℚ = -(1/2) R_ℚ,S = -(1/2) log(2) log(5).

Moreover, a Stark unit for the data (K/ℚ, S, v_1, w_1) is given by

ε_0 = u^-2,

that is,

L'_K,S(0,ψ) = -(1/2) ∑_ρ∈G ψ(ρ)·log|ε_0^ρ|_w_1,

for all ψ∈G. (For details, see for instance Proposition 3.13 of <cit.>.)

From the calculations above, it follows that a fundamental system of S_K-units for E_S(K) is given by

{3+√10, 2, √10} = {u, 2, √10}.

Following the proof of Theorem <ref>, one finds the S_K-units
* β_1 = (3+√10)·√10 = 3√10 + 10,
* β_2 = 2^-2·√10 = √10/4,
* β_3 = 2·√10^-1 = 2/√10,
that satisfy, for i=1,2,3, |β_i|_w_i > 1 and |β_i|_w < 1 for all w ≠ w_i. These S_K-units lead to the Artin system of S_K-units 𝒜 = {ε_w | w ∈ S_K}, where
* ε_w_1 = 190 + 60√10,
* ε_w_1' = 190 - 60√10,
* ε_w_2 = 25/64,
* ε_w_3 = 16/625.
Note that ε_w^σ = ε_(w^σ) for all w ∈ S_K, and

∏_w∈S_K ε_w = 1.

Hence, the kernel of the map f: Y_S(K) ⟶ E_S(K) defined by w ↦ ε_w is ℤ·α, where

α = ∑_v∈S T_v(K).

To simplify the notation, we let (as we have done throughout) ε_i = ε_w_i for i=1,2,3. Now, using Proposition <ref>, we calculate

R(χ,𝒜) = log|ε_1^σ/ε_1|_w_1,

and

R(χ_1,𝒜) = det[ log|N_K/ℚ(ε_1ε_3^-1)|_v_1  log|N_K/ℚ(ε_2ε_3^-1)|_v_1 ; log|N_K/ℚ(ε_1ε_3^-1)|_v_2  log|N_K/ℚ(ε_2ε_3^-1)|_v_2 ].

Using the fact that ε_1 = 10u^2, a simple calculation shows that

L_K,S^*(0,χ)/R(χ,𝒜) = -1/2 and L_K,S^*(0,χ_1)/R(χ_1,𝒜) = -1/256.

Therefore,

β_S(𝒜) = (1/512)·(-129 + 127σ) ∈ ℚ[G],

as predicted by Stark's conjecture over ℚ. (See Theorem <ref>.)
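Both the single relation ∏_w ε_w = 1 and the value of β_S(𝒜) can be double-checked with exact arithmetic, as in the following sympy sketch (which also verifies the Stark-unit identity used in the next paragraph); only the data listed above enters.

```python
from sympy import sqrt, Rational, expand

s10 = sqrt(10)
eps1, eps1p = 190 + 60 * s10, 190 - 60 * s10
eps2, eps3 = Rational(25, 64), Rational(16, 625)

# (i) the single multiplicative relation among the Artin system units
assert expand(eps1 * eps1p * eps2 * eps3 - 1) == 0

# (ii) the Stark-unit identity eps_0^2 = eps_1^(sigma-1) with eps_0 = u^(-2),
#      i.e. (190 - 60*sqrt(10)) * (3 + sqrt(10))^4 == 190 + 60*sqrt(10)
assert expand(eps1p * (3 + s10) ** 4 - eps1) == 0

# (iii) beta_S(A) = (-129 + 127*sigma)/512 from A(chi) = -1/2, A(chi_1) = -1/256
A1, Achi = Rational(-1, 256), Rational(-1, 2)
assert (A1 + Achi) / 2 == Rational(-129, 512)   # coefficient of id
assert (A1 - Achi) / 2 == Rational(127, 512)    # coefficient of sigma
print("all identities verified")
```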
Note that

e_1,S' = e_1,S = (1/2)(1 - σ) ∈ ℚ[G],

and thus

β_S(𝒜)·e_1,S = -(1/4)(1 - σ) ∈ ℚ[G].

Proposition <ref> then shows that

η' = η = β_S(𝒜)·e_1,S·ε_1.

Now, Stark's abelian rank one conjecture, namely Conjecture <ref> when r=1, predicts that

ε̅_0 = 2·β_S(𝒜)·e_1,S·ε̅_1

in ℚE_S(K). In other words, we should have

ε_0^2 = ε_1^(σ-1)

in E_S(K) up to a root of unity in K (that is, ±1). But this is indeed the case, as a simple calculation shows. We calculate furthermore

m = [E_S(K):μ(K)·f(X_S(K))] = 256.

Hence, we have

w_K·m·β_S(𝒜)·e_1,S ∈ ℤ[G],

as predicted by Conjecture <ref>. The annihilation part of Conjecture <ref> is obviously satisfied, since h_K,S = 1, h_K = 2 and w_K·m·β_S(𝒜)·e_1,S ∈ 2·ℤ[G].

§ NUMERICAL CALCULATIONS

§.§ The algorithm

Let k be a real quadratic field, and let K be a cubic extension of k that is totally real and such that K/k is ramified. We let S be the set of places of k consisting of the two archimedean places and the finite primes that ramify in K/k. Hence, we always have |S| ≥ 3. We let

S = {v_1, v_2, …, v_n},

where we agree that v_1 and v_2 are the two archimedean places. Note that v_1 and v_2 split completely in K/k, since K is assumed to be totally real. We now explain how to numerically verify Stark's conjecture over ℚ, the rank two Popescu conjecture, and Burns's conjecture in this particular case. All the calculations have been done with the software PARI (<cit.>).

Step 1. We calculate a fundamental system of S_K-units for E_S(K), say {η_1,…,η_t}.

Step 2. For each v_i ∈ S (i=1,…,n), we choose a place w_i lying above v_i.

Step 3. We calculate an Artin system of S_K-units 𝒜 = {ε_w | w ∈ S_K}. Here, we follow the proof of Theorem <ref>, and the main step is to find S_K-units β_i that satisfy |β_i|_w_i > 1 and |β_i|_w < 1 for all w ∈ S_K satisfying w ≠ w_i. In order to find these S_K-units, we proceed as follows. We consider the matrix

A = (log|η_j|_w) ∈ M_t+1,t(ℝ),

where t = |S_K| - 1, and for s ∈ {1,…,t+1}, we let A_s be the matrix obtained from A by removing the sth row. The matrices A_s are t×t square matrices. Furthermore, we let

ω = (-1,…,-1) ∈ M_1,t(ℝ).

Now, if we want to find β_i, then we look at A_s, where s corresponds to the row involving the place w_i, and we calculate

x = A_s^-1·ω^⊤.

We then round off the coordinates of x to the nearest integer in order to get a vector y = (y_1,…,y_t) ∈ M_1,t(ℤ), and we set

β_i = ∏_ℓ=1^t η_ℓ^y_ℓ ∈ E_S(K).

We check that β_i satisfies |β_i|_w < 1 for all w ≠ w_i. If not, we repeat the process above with n_0·ω, where n_0 is a positive integer, and we keep increasing n_0 until we find a β_i with the desired properties. The last condition |β_i|_w_i > 1 is automatically satisfied by the product formula (<ref>).

Step 4. Using Proposition <ref> and the PARI command bnrL1, we calculate

β_S(𝒜)·e_2,S' = ∑_χ∈G_2,S' A(χ,𝒜)·e_χ

to a high precision.

Step 5. Since e_2,S' ∈ ℚ[G], Stark's conjecture over ℚ via Theorem <ref> predicts that

β_S(𝒜)·e_2,S' = ∑_σ∈G b_σ·σ ∈ ℚ[G].

Using the PARI command algdep, we recognize the numbers b_σ as rational numbers.

Step 6. We find the smallest positive integer d such that

d·β_S(𝒜)·e_2,S' ∈ ℤ[G].

Step 7. We calculate m = [E_S(K):μ(K)·f(X_S(K))]. If Conjecture <ref> had a positive answer, then one would have d | 2m^2 (since w_K = 2). In fact, in all the examples that we computed, we observed numerically that d | 2m.

Step 8. As explained in Remark <ref>, we calculate for i=1,…,t the morphisms η_i^*.
Step 9. For i=1,…,t, we calculate u_i ∈ E_S(K), where

u_i = η_i^*(ε_1∧ε_2) if |S| ≥ 4, and u_i = η_i^*((ε_1ε_3^-1)∧(ε_2ε_3^-1)) if |S| = 3.

Using Proposition <ref>, Popescu's conjecture is true if and only if

w_K·β_S(𝒜)·e_2,S'·u̅_i ∈ E̅_S(K)^ab

for all i=1,…,t. (Here w_K = 2, since K is totally real.) We can check this as follows. First, we calculate the S_K-units γ_i satisfying

γ̅_i = 2·d·β_S(𝒜)·e_2,S'·u̅_i.

Step 10. Then, we find S_K-units δ_i such that we have γ_i = δ_i^d. These S_K-units satisfy

δ̅_i = 2·β_S(𝒜)·e_2,S'·u̅_i.

Step 11. Finally, we check that the extension K(δ_i^1/2)/k is abelian, for i=1,…,t. In order to do so, we use the following well-known lemma:

With the setup as above, let λ ∈ K^× and let σ be a generator for G. Then K(λ^1/2)/k is abelian if and only if λ^(σ-1) ∈ (K^×)^2.

See Lemma 4.33 of <cit.> for details if needed.

Step 12. Given an element α ∈ ℤ[G], one can check that α ∈ Ann_ℤ[G](Cl(K)) as follows. Pick generators [𝔞_1],…,[𝔞_h] for Cl(K) and check that 𝔞_i^α is a principal ideal for all i=1,…,h. A similar procedure also allows one to check that α ∈ Ann_ℤ[G](Cl_S(K)). This allows us to check the annihilation statement of Conjecture <ref>.

§.§ Computational results

Let ℱ denote the collection of all totally real number fields K such that K/k is a ramified abelian extension and k is a real quadratic field. Our aim is to run the algorithm on all fields K ∈ ℱ with Δ_K ≤ 10^12. By the standard formula for discriminants in towers we know that

Δ_K = Δ_k^3·ℕ(Δ_K/k),

and by the conductor-discriminant formula (see, for example, Corollary 2 of <cit.>) we know that Δ_K/k = 𝔣^2, where 𝔣 is the conductor of K/k. Consequently, we have:

Δ_K ≤ X ⟺ Δ_k ≤ X^(1/3) and ℕ(𝔣) ≤ √(X/Δ_k^3).

Hence, to enumerate all fields in ℱ up to discriminant 10^12, it suffices to consider only real quadratic fields k with Δ_k ≤ 10^4/4^(1/3) ≈ 6300, since ℕ(𝔣) ≥ 2. For each such real quadratic field k, we iterate through all ideals 𝔞 of k with 1 < ℕ(𝔞) ≤ 10^6·Δ_k^(-3/2). For each such 𝔞, we locate all the cubic subfields K of the ray class field k_𝔞 that satisfy 𝔣(K/k) = 𝔞, where 𝔣(K/k) is the conductor of K/k, if any exist. It turns out that there are 581 real quadratic fields k for which there is at least one ramified abelian cubic extension K with Δ_K ≤ 10^12. The largest square-free integer d for which ℚ(√d) has such a cubic extension is d = 3853. For each such extension K/k, we perform the algorithm presented in <ref> for a total of 19197 examples. These calculations took 58.7 (one-core) CPU hours on an Intel Xeon Haswell 3.20 GHz processor with eight cores.

§.§.§ Popescu's conjecture

There are three different cases that arise for our cubic extensions K/k:
* K/ℚ is abelian,
* K/ℚ is Galois, but not abelian,
* K/ℚ is not Galois.
We list the number fields encountered in each case according to their class number in Table <ref> below. As explained before, case (<ref>) is known by previous results of Burns, but we have performed the calculations for the sake of completeness.
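Of the twelve steps above, Step 3 is the only genuinely numerical search. Before turning to the detailed example, here is a Python sketch of its rounding recipe; a random stand-in is used for the matrix (log|η_j|_w), with columns forced to sum to zero as the product formula requires, since producing the matrix for an actual field is done in PARI and is not re-implemented here.

```python
import numpy as np

# Sketch of Step 3: find exponents y with beta = prod_l eta_l^{y_l} small at
# every place except the chosen one.
rng = np.random.default_rng(1)
t = 5
log_abs = rng.normal(size=(t + 1, t))
log_abs -= log_abs.mean(axis=0)            # enforce zero column sums

def find_beta_exponents(log_abs, s, n0_max=50):
    """Exponents y such that log|beta|_w < 0 for every row w except row s."""
    A_s = np.delete(log_abs, s, axis=0)    # drop the row of the kept place
    for n0 in range(1, n0_max):
        x = np.linalg.solve(A_s, -n0 * np.ones(t))   # x = A_s^{-1} (n0 * omega)
        y = np.rint(x).astype(int)                   # round to integers
        vals = np.delete(log_abs @ y, s)
        if np.all(vals < 0):               # |beta|_w < 1 for all w != w_s
            return y, n0
    raise RuntimeError("no suitable exponent vector found")

y, n0 = find_beta_exponents(log_abs, s=0)
print("exponents:", y, "found with n0 =", n0)
```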
We now explain one example in detail.Our algorithm completes the calculation for this particular extension of number fields in 3.7 seconds.Take k = ℚ(√(3)).The rational prime 3 is ramified in k whereas 5 is inert in k.Let 𝔭 be the unique prime ideal of k lying above 3 and 𝔮 be the unique prime ideal lying above 5.Let 𝔪 = 𝔭𝔮 and consider the ray class field k_𝔪.One has [k_𝔪:k] = 12 and there is a unique subfield K that is a cubic abelian extension of k.It has class number 1.The field K is Galois over ℚ, but its Galois group is not abelian.A defining polynomial for K is given byp(x) = x^6 - 24x^4 - 50x^3 - 3x^2 + 30x - 2.Both 𝔭 and 𝔮 are ramified in K, so |S| = 4 and |S_K| = 8.We now go through the steps presented in <ref>.Step 1. A fundamental system of S_K-units is given by the following polynomials modulo (p(x)): * 7/68x^5 - 5/68x^4 - 145/68x^3 - 295/68x^2 - 40/17x + 133/34* 39/68x^5 - 57/68x^4 - 837/68x^3 - 779/68x^2 + 207/17x - 7/34* 11/34x^5 - 3/34x^4 - 257/34x^3 - 483/34x^2 + 3/17x + 124/17* 13/34x^5 - 19/34x^4 - 279/34x^3 - 237/34x^2 + 138/17x - 8/17* 7/17x^5 - 22/17x^4 - 111/17x^3 + 28/17x^2 + 112/17x - 57/17* -15/68x^5 + 1/68x^4 + 369/68x^3 + 671/68x^2 - 9/17x - 81/34* 13/68x^5 - 19/68x^4 - 279/68x^3 - 305/68x^2 + 35/17x + 43/34Step 2. The eight places in S_K are{w_1,w_1',w_1”,w_2,w_2',w_2”,w_3,w_4} = {-2.873, 0.620, 5.716, -2.233, -1.297, 0.067, 𝔓_3,𝔓_5},where 𝔓_3 is the unique finite prime lying above 𝔭_3 (similarly for 𝔓_5), and the floating-point numbers ξ correspond to the real embeddings x ↦ξ. Step 3. The matrix A is given byA= [-1.316958-1.316958-1.316958 1.316958 1.316958 1.3169580.00000000.0000000; 1.979440-2.5562680.5768282-1.979440 2.556268 -0.57682820.00000000.0000000; -0.5768282-1.979440 2.556268-2.5562680.5768282 1.9794400.00000000.0000000;-1.065300-2.054417 3.119716 1.065300 2.054417-3.1197160.00000000.0000000; 3.119716-1.065300-2.054417 2.054417-3.119716 1.0653000.00000000.0000000; 1.3652990.8634475-1.679440-1.679440 1.3652990.8634475-1.0986120.0000000; -0.1781805-1.655769 3.4433870.88711900.39864780.32367110.0000000-3.218876 ],and we found the S_K-units β_i, (i=1,2,3,4) whose coordinates on the system of fundamental S_K-units are given by* β_1=[-3, 2, -1, 1, 3, 3, 1],* β_2=[2, -3, -1, 3, 2, 2, 1],* β_3=[0, 9, -5, -9, -4, -13, 1],* β_4=[0, -3, 4, 4, 2, 2, -4]. These four S_K-units lead to the following Artin system of S_K-units, given as polynomials modulo (p(x)): * ε_w_1 = -745941483/68x^5 + 2143283961/68x^4 + 11744383129/68x^3 + 3552405747/68x^2 - 1992290385/17x + 259615015/34* ε_w_1' = -614733423/34x^5 - 381601623/34x^4 + 14516719313/34x^3 + 39748062825/34x^2 + 13259094279/17x - 990292384/17* ε_w_1” = 4808859/68x^5 + 27490335/68x^4 + 41738695/68x^3 - 1839447/68x^2 - 6235494/17x + 841213/34* ε_w_2 = -4649005/17x^5 + 10385828/17x^4 + 88374291/17x^3 + 35023017/17x^2 - 64294053/17x + 4162067/17* ε_w_2'= 21452083/68x^5 - 27839861/68x^4 - 478720265/68x^3 - 451335551/68x^2 + 130343317/17x - 16529959/34* ε_w_2” = -20133813/68x^5 - 1362201/68x^4 + 483119351/68x^3 + 1039377233/68x^2 + 32680736/17x - 297585015/34* ε_w_3 = 35/148716x^5 - 25/148716x^4 - 725/148716x^3 - 1475/148716x^2 - 200/37179x + 325/74358* ε_w_4 = 3/625The kernel of the induced ℤ[G]-module morphism f: Y_S(K) ⟶ E_S(K) is given by ℤ·α, whereα = 100 · T_v_1(K) + 150 · T_v_2(K) + 58 · T_v_3(K) + 77 · T_v_4(K). 
Step 4. We obtained

β_S(𝒜)·e_2,S' = 0.030868·id - 0.013639·σ_1 - 0.017229·σ_2,

where the Galois automorphisms are given by
* id: x ↦ x,
* σ_1: x ↦ -9/68x^5 + 21/68x^4 + 167/68x^3 + 83/68x^2 - 53/17x - 69/34,
* σ_2: x ↦ -5/68x^5 - 11/68x^4 + 123/68x^3 + 507/68x^2 + 116/17x - 61/34.

Step 5. After recognizing the rational numbers, we obtained

β_S(𝒜)·e_2,S' = 43/1393·id - 19/1393·σ_1 - 24/1393·σ_2 ∈ ℚ[G].

Step 6. Thus d = 1393 = 7·199.

Step 7. On the other hand, we obtained

m = [E_S(K):μ(K)·f(X_S(K))] = 3698415 = 3^2·5·7·59·199.

Note that d | 2m. (In fact, d | m here, but there are cases where d ∤ m, d | 2m.)

Step 8. The morphisms η_i^* ∈ Hom_ℤ[G](E_S(K),ℤ[G]) ≃ M_3,7(ℤ) are given by
* η_1^* = [[1, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0]],
* η_2^* = [[0, 1, 0, 0, 0, 0, 0], [0, -1, -1, 0, 0, -1, 0], [0, 0, 1, 0, 0, -1, 1]],
* η_3^* = [[0, 0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 0, 1, -1], [0, -1, -1, 0, 0, 0, -1]],
* η_4^* = [[0, 0, 0, 1, 0, 0, 0], [0, 0, 0, -1, 1, 1, -1], [0, 0, 0, 0, -1, 1, -1]],
* η_5^* = [[0, 0, 0, 0, 1, 0, 0], [0, 0, 0, -1, 0, 1, -1], [0, 0, 0, 1, -1, 0, 0]],
* η_6^* = [[0, 0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1, 0]],
* η_7^* = [[0, 0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 0, 0, 1]].

Step 9. The coordinates of the γ_i on the fundamental S_K-units are given by
* γ_1 = [0, 0, 0, 0, 0, 0, 0],
* γ_2 = [0, 0, 0, 2786, 2786, 0, 0],
* γ_3 = [0, 0, 0, -2786, 0, 0, 0],
* γ_4 = [0, -2786, 2786, 0, 0, 0, 0],
* γ_5 = [0, -2786, 0, 0, 0, 0, 0],
* γ_6 = [0, 0, 0, 0, 0, 0, 0],
* γ_7 = [0, 0, 0, 0, 0, 0, 0].
They are all divisible by d = 1393, as expected.

Step 10. The coordinates of the δ_i on the fundamental S_K-units are
* δ_1 = [0, 0, 0, 0, 0, 0, 0],
* δ_2 = [0, 0, 0, 2, 2, 0, 0],
* δ_3 = [0, 0, 0, -2, 0, 0, 0],
* δ_4 = [0, -2, 2, 0, 0, 0, 0],
* δ_5 = [0, -2, 0, 0, 0, 0, 0],
* δ_6 = [0, 0, 0, 0, 0, 0, 0],
* δ_7 = [0, 0, 0, 0, 0, 0, 0].

Step 11. The abelian condition is obviously satisfied in this case. (We note that we did find examples where the units δ_i are not necessarily squares modulo roots of unity.)

Step 12. Burns's conjecture is trivially true in this case since h_K = 1.

§.§.§ Burns's conjecture

Recall that d is the smallest positive integer satisfying d·β_S(𝒜)·e_2,S' ∈ ℤ[G]. Then, as pointed out before, we always have numerically that d | 2m, whereas Burns's conjecture predicts only d | 2m^2, but we do not know of any theoretical reason that explains this phenomenon. We shall distinguish four different statements:
* d·β_S(𝒜)·e_2,S' ∈ Ann_ℤ[G](Cl(K)),
* 2·m·β_S(𝒜)·e_2,S' ∈ Ann_ℤ[G](Cl(K)),
* 2·m^2·β_S(𝒜)·e_2,S' ∈ Ann_ℤ[G](Cl(K)),
* 2·m^2·β_S(𝒜)·e_2,S' ∈ Ann_ℤ[G](Cl_S(K)).
Under the assumption d | 2m, note that (<ref>) implies (<ref>) implies (<ref>) implies (<ref>). We list the number of them for each type of field K (Galois abelian, Galois non-abelian and not Galois over ℚ) in Tables <ref>, <ref> and <ref> below. Part (<ref>) of Conjecture <ref> is precisely the fourth statement. Among our 19197 examples, there are only 116 examples where we have to go all the way to the fourth statement. All of them satisfy |S| = 3, so there is only one finite ramified prime in those extensions. Among these 116 examples, there are only 2 for which the S_K-class number is not 1. One of them is as follows. The base field is k = ℚ(√42). The rational prime 397 splits completely in k. Let 𝔭 be one of the two primes lying above 397 and consider the ray class field k_𝔭. One has [k_𝔭:k] = 6 and thus there is a unique subfield of degree 3 over k, which we denote by K.
A defining polynomial for K is given by

p(x) = x^6 - 2x^5 - 61x^4 + 84x^3 + 708x^2 - 640x - 1664,

and K is not Galois over ℚ. The prime 𝔭 ramifies in K/k and we let 𝔓 be the unique prime of K lying above 𝔭. Using PARI, we have Cl(K) ≃ ℤ/14ℤ. We calculated an Artin system of S_K-units (which we do not list here), for which we have

d = 54782 = 2·7^2·13·43 and m = 191737 = 7^3·13·43.

Note that in this case d ∤ m, but d | 2m. Moreover, we have

2·m^2·β_S(𝒜)·e_2,S' = -1088490949·id + 2645395389·σ_1 - 1960894299·σ_2 ∈ ℤ[G].

Using PARI, we found an ideal 𝔞 such that [𝔞] generates Cl(K). If we let

α = 2·m^2·β_S(𝒜)·e_2,S' ∈ ℤ[G],

then 𝔞^α is not principal, but 𝔓·𝔞^α is. So we do have

α ∈ Ann_ℤ[G](Cl_S(K)),

as predicted by Burns's conjecture. Finally, for those 116 examples for which we have to go all the way to the fourth statement, we checked (<ref>) of Conjecture <ref> as follows: we let S' = S_∞ and we pick

b ∈ ⋃_v∈S∖S' Ann_ℤ[G](ℤ[G/G_v])

to be b = σ - 1, where σ is a non-trivial element of G. In every single case, we verified that

(σ - 1)·2·m^2·β_S(𝒜)·e_2,S' ∈ Ann_ℤ[G](Cl(K)).

As a final remark, in all our examples, not only does d | 2m, but also

2·m·β_S(𝒜)·e_2,S' ∈ Ann_ℤ[G](Cl_S(K)).

It might be of interest to investigate this further.

§ TABLES
http://arxiv.org/abs/1705.09729v1
{ "authors": [ "Kevin McGown", "Jonathan Sands", "Daniel Vallières" ], "categories": [ "math.NT", "11R42, 11Y40, 11R27" ], "primary_category": "math.NT", "published": "20170526220518", "title": "Numerical evidence for higher order Stark-type conjectures" }
^1Department of Physics, University of Crete, P. O. Box 2208, 71003 Heraklion, Greece; ^2Institute of Electronic Structure and Laser, Foundation for Research and Technology–Hellas, P.O. Box 1527, 71110 Heraklion, Greece; ^3National University of Science and Technology "MISiS", Leninsky prosp. 4, Moscow, 119049, Russia; ^4Department of Physics, School of Science and Technology, Nazarbayev University, 53 Kabanbay Batyr Ave., Astana 010000, Kazakhstan

The dynamic equations for the fluxes through the SQUIDs that form a two-dimensional metamaterial on a Lieb lattice are derived, and then linearized around zero flux to obtain the linear frequency spectrum according to the standard procedure. That spectrum, due to the Lieb lattice geometry, possesses a frequency band structure exhibiting two characteristic features: two dispersive bands, which form a Dirac cone at the corners of the first Brillouin zone, and a flat band crossing the Dirac points. It is demonstrated numerically that localized states can be excited in the system when it is initialized with single-site excitations; depending on the amplitude of those initial states, the localization is either due to the flat band or to nonlinear effects. Flat-band localized states are formed in the nearly linear regime, while localized excitations of the discrete breather type are formed in the nonlinear regime. These two regimes are separated by an intermediate turbulent regime for which no localization is observed. Notably, initial single-site excitations of only edge SQUIDs of a unit cell may end up in flat-band localized states; no such states are formed for initial single-site excitations of a corner SQUID of a unit cell. The degree of localization of the resulting states is in any case quantified using well-established measures such as the energetic participation ratio and the second moment.

63.20.Pw, 11.30.Er, 41.20.-q, 78.67.Pt

SQUID Metamaterials on a Lieb lattice: From flat-band to nonlinear localization

N. Lazarides^1,2,3,4, G. P. Tsironis^1,2,3,4

December 30, 2023

§ INTRODUCTION

Considerable research effort has been focused during the last two decades on the investigation and development of artificial media or metamaterials, which exhibit properties not found in natural materials <cit.>. After the development of active, tunable, and nonlinear metamaterials <cit.>, those artificial media are expected to have a strong impact across the entire range of technologies where electromagnetic radiation is used. Moreover, they may provide a flexible platform for modeling and mimicking fundamental physical effects <cit.>. An important class of metamaterials is that of superconducting ones <cit.>, and in particular those comprising Superconducting QUantum Interference Devices (SQUIDs). The idea of a metamaterial consisting of SQUIDs was theoretically introduced about a decade ago both in the quantum <cit.> and the classical regimes <cit.>. The simplest version of a SQUID consists of a superconducting ring interrupted by a Josephson junction <cit.>, as shown schematically in Fig. <ref>. The SQUIDs are highly nonlinear devices, exhibiting strong resonant response to applied magnetic fields.
SQUID metamaterials in one and two dimensions have been realized and investigated in the laboratory, and they were found to exhibit novel properties such as negative diamagnetic permeability <cit.>, broad-band tunability <cit.>, self-induced broad-band transparency <cit.>, as well as dynamic multistability and switching <cit.>, among others. Some of these properties, i.e., dynamic multistability and tunability, have been also revealed in numerical simulations <cit.>. Moreover, nonlinear localization <cit.> and the emergence of counter-intuitive dynamic states referred to as chimera states in current literature <cit.> have been demonstrated numerically in SQUID metamaterial models.

The notion of metamaterials implies the freedom to engineer not only the properties of the individual "particles" or devices which play the role of "atoms" in an artificial medium, but also their arrangement in space, i.e., the type of the lattice. Remarkably, some specific lattice geometries such as those of Lieb or Kagomé lattices give rise to novel and potentially useful band structures. The former is a square-depleted (line-centered tetragonal) lattice, described by three sites in a square unit cell as illustrated in Fig. <ref>. It is characterized by a band structure featuring Dirac cones intersected by a topological flat band. Localization on flat bands has been extensively investigated in relatively simple lattice models <cit.>, even in the presence of disorder <cit.>. Superpositions of flat-band modes and their stability have been also investigated in rhombic nonlinear optical waveguide arrays <cit.>. The Lieb lattice was first introduced in the context of photonics in Ref. <cit.>. Recently, photonic Lieb lattices have been experimentally realized and the existence of localized flat-band modes has been reported <cit.>. The world of electronic flat-band systems has been reviewed in a recent article <cit.>. Moreover, electronic Lieb lattices have been experimentally realized and characterized <cit.>.

Here, a SQUID metamaterial on a Lieb lattice is considered, in which each site is occupied by a SQUID. In each unit cell, two of the SQUIDs (indicated in red and blue) are neighbored by two other SQUIDs. The third SQUID in the unit cell (black) has four neighbors. In what follows, these SQUIDs will be referred to as edge SQUIDs (red and blue) and corner (black) SQUID, respectively. In the following, the dynamical equations for the fluxes through the SQUIDs and the linear frequency spectrum are obtained for a SQUID Lieb metamaterial (SLiMM). Using numerical simulations, the generation of localized flat-band states when a single edge SQUID is initially excited at low amplitude is demonstrated. No flat-band localization is observed when single corner SQUIDs are initially excited at low amplitudes, in agreement with the experiments in optical Lieb lattices <cit.>. For high-amplitude initial excitations of either a corner or an edge SQUID, nonlinear localization of the discrete breather type is observed <cit.>. The cross-over between flat-band and nonlinear localization is explored; the two regimes are clearly separated by an intermediate, no-localization regime. Thus, flat-band localized states cannot be continued into the nonlinearly localized ones of the discrete breather or discrete soliton type, as it has been demonstrated for discrete nonlinear Schrödinger type models of various flat-band lattices and ribbons <cit.>.

§ FLUX DYNAMICS

Consider the Lieb lattice of Fig. <ref>, in which each site is occupied by a SQUID.
That SLiMM can be regarded as the combination of three sublattices colored as blue, red, and black. All the SQUIDs are identical, and they are magnetically coupled to their nearest neighbors through their mutual inductances. In order to derive the dynamic equations for the fluxes through the SQUIDs of the SLiMM, we first write the flux-balance relations for all SQUIDs

Φ_n,m^A = Φ_ext + L { I_n,m^A + λ_x [ I_n-1,m^B + I_n,m^B ] + λ_y [ I_n,m-1^C + I_n,m^C ] } ,
Φ_n,m^B = Φ_ext + L { I_n,m^B + λ_x [ I_n,m^A + I_n+1,m^A ] } ,
Φ_n,m^C = Φ_ext + L { I_n,m^C + λ_y [ I_n,m^A + I_n,m+1^A ] } ,

where I_n,m^k is the current in the SQUID of the (n,m)th unit cell of kind k (k=A, B, C), Φ_ext is the applied (external) flux, and λ_x = M_x/L (λ_y = M_y/L) is the coupling coefficient along the horizontal (vertical) direction, with M_x (M_y) being the corresponding mutual inductance between neighboring SQUIDs and L the self-inductance of each SQUID. The current in each SQUID is given by the resistively and capacitively shunted junction (RCSJ) model <cit.>, as

-I_n,m^k = C d^2Φ_n,m^k/dt^2 + (1/R) dΦ_n,m^k/dt + I_c sin( 2πΦ_n,m^k/Φ_0 ) ,

where R is the quasiparticle resistance through the Josephson junction of each SQUID, C is the capacitance of each SQUID, and I_c is the critical current of the Josephson junction of each SQUID. Then Eqs. (<ref>) are inverted to obtain the currents I_n,m^k as functions of the fluxes Φ_n,m^k. By substitution of the obtained currents back into Eqs. (<ref>), and neglecting all the terms which are proportional to λ_x^a λ_y^b with a+b > 1, we get

L I_n,m^A = Φ_n,m^A - λ_x ( Φ_n,m^B + Φ_n-1,m^B ) - λ_y ( Φ_n,m^C + Φ_n,m-1^C ) - Φ_eff^A ,
L I_n,m^B = Φ_n,m^B - λ_x ( Φ_n,m^A + Φ_n+1,m^A ) - Φ_eff^B ,
L I_n,m^C = Φ_n,m^C - λ_y ( Φ_n,m^A + Φ_n,m+1^A ) - Φ_eff^C ,

where Φ_eff^A = [1 - 2(λ_x + λ_y)] Φ_ext, Φ_eff^B = (1 - 2λ_x) Φ_ext, and Φ_eff^C = (1 - 2λ_y) Φ_ext are the "effective" external fluxes. Combining Eqs. (<ref>) and (<ref>) we get

L C d^2Φ_n,m^A/dt^2 + (L/R) dΦ_n,m^A/dt + L I_c sin( 2πΦ_n,m^A/Φ_0 ) + Φ_n,m^A = λ_x ( Φ_n,m^B + Φ_n-1,m^B ) + λ_y ( Φ_n,m^C + Φ_n,m-1^C ) + Φ_eff^A ,
L C d^2Φ_n,m^B/dt^2 + (L/R) dΦ_n,m^B/dt + L I_c sin( 2πΦ_n,m^B/Φ_0 ) + Φ_n,m^B = λ_x ( Φ_n,m^A + Φ_n+1,m^A ) + Φ_eff^B ,
L C d^2Φ_n,m^C/dt^2 + (L/R) dΦ_n,m^C/dt + L I_c sin( 2πΦ_n,m^C/Φ_0 ) + Φ_n,m^C = λ_y ( Φ_n,m^A + Φ_n,m+1^A ) + Φ_eff^C .

Using the relations

τ = ω_LC t , ϕ_n,m^k = Φ_n,m^k/Φ_0 , ϕ_ext = Φ_ext/Φ_0 ,

where ω_LC = 1/√(LC) is the inductive-capacitive SQUID frequency, the dynamic equations for the fluxes through the SQUIDs can be written in the normalized form

ϕ̈_n,m^A + γϕ̇_n,m^A + βsin( 2πϕ_n,m^A ) + ϕ_n,m^A = λ_x ( ϕ_n,m^B + ϕ_n-1,m^B ) + λ_y ( ϕ_n,m^C + ϕ_n,m-1^C ) + ϕ_eff^A ,
ϕ̈_n,m^B + γϕ̇_n,m^B + βsin( 2πϕ_n,m^B ) + ϕ_n,m^B = λ_x ( ϕ_n,m^A + ϕ_n+1,m^A ) + ϕ_eff^B ,
ϕ̈_n,m^C + γϕ̇_n,m^C + βsin( 2πϕ_n,m^C ) + ϕ_n,m^C = λ_y ( ϕ_n,m^A + ϕ_n,m+1^A ) + ϕ_eff^C ,

where

β = L I_c/Φ_0    and    γ = ω_LC L/R

is the SQUID parameter and the loss coefficient, respectively, and ϕ_eff^k are the normalized effective fluxes. The overdots on ϕ_n,m^k denote differentiation with respect to the normalized temporal variable τ.

The values of the fluxes through the SQUIDs ϕ_n,m^k generally depend on k. Suppose that γ=0 and ϕ_ext=0, and that Eqs. (<ref>) are initialized with a low amplitude homogeneous excitation, i.e., with ϕ_n,m^k = c for any n, m, and k (c ≪ 1 is a constant). After integrating Eqs. (<ref>) in time assuming periodic boundary conditions, at the steady state, the fluxes through the SQUIDs of the same kind will be the same. However, the fluxes through the SQUIDs of different kind will be different.
This is due to the Lieb lattice geometry and the (generally) different values of λ_x and λ_y, since the flux through a particular SQUID of the SLiMM depends not only on the self-induced one, but also on the fluxes from the SQUIDs to which that particular SQUID is coupled (four for A SQUIDs and two for B and C SQUIDs). Moreover, the coupling between SQUIDs is proportional to the coefficients λ_x or λ_y. Note that for isotropic coupling, λ_x = λ_y, the fluxes through the SQUIDs of kind B and C are the same but different than those through the SQUIDs of kind A (ϕ_n,m^B = ϕ_n,m^C ≠ ϕ_n,m^A).

In the following, we are concerned about energy-conserving SLiMMs, i.e., about the Hamiltonian version of SQUID Lieb metamaterials, and thus we set γ=0 and ϕ_ext=0 into Eqs. (<ref>).

§ LINEAR FREQUENCY SPECTRUM

Without losses and driving forces, Eqs. (<ref>) are linearized using the relation βsin( 2πϕ_n,m^k ) ≃ β_L ϕ_n,m^k, where β_L = 2πβ. Thus we get

ϕ̈_n,m^A + Ω_SQ^2 ϕ_n,m^A = λ_x ( ϕ_n,m^B + ϕ_n-1,m^B ) + λ_y ( ϕ_n,m^C + ϕ_n,m-1^C ) ,
ϕ̈_n,m^B + Ω_SQ^2 ϕ_n,m^B = λ_x ( ϕ_n,m^A + ϕ_n+1,m^A ) ,
ϕ̈_n,m^C + Ω_SQ^2 ϕ_n,m^C = λ_y ( ϕ_n,m^A + ϕ_n,m+1^A ) ,

where Ω_SQ = √(1 + β_L) is the resonance frequency of individual SQUIDs in the linear limit. In order to obtain the linear frequency spectrum, we substitute into the linearized Eqs. (<ref>) the plane wave solution

ϕ_n,m^k = F_k exp[i (Ωτ - κ_x n - κ_y m)],

where κ_x and κ_y are the x and y components of the two-dimensional, normalized wavevector κ, and Ω = ω/ω_LC is the normalized frequency. After some calculations we get

( Ω_SQ^2 - Ω^2 ) F_A - λ_x ( 1 + e^+iκ_x ) F_B - λ_y ( 1 + e^+iκ_y ) F_C = 0,
-λ_x ( 1 + e^-iκ_x ) F_A + ( Ω_SQ^2 - Ω^2 ) F_B = 0,
-λ_y ( 1 + e^-iκ_y ) F_A + ( Ω_SQ^2 - Ω^2 ) F_C = 0.

In order to obtain nontrivial solutions for the amplitudes F_k of the stationary problem Eqs. (<ref>), its determinant D should be equal to zero, i.e.,

D = ( Ω_SQ^2 - Ω^2 ) { ( Ω_SQ^2 - Ω^2 )^2 - 4 [ λ_x^2 cos^2(κ_x/2) + λ_y^2 cos^2(κ_y/2) ] } = 0.

Solving Eq. (<ref>) for Ω ≡ Ω_κ, we get

Ω_κ = Ω_SQ,
Ω_κ = √( Ω_SQ^2 ± 2 √( λ_x^2 cos^2(κ_x/2) + λ_y^2 cos^2(κ_y/2) ) ),

where only positive frequencies are considered. Eqs. (<ref>) and (<ref>) provide the linear frequency spectrum of the SLiMM. Thus, the Lieb lattice geometry possesses a frequency band structure exhibiting two characteristic features, as can be observed in Fig. <ref>: two dispersive bands, which form a Dirac cone at the corners of the first Brillouin zone (for κ_x = κ_y = ±π), and a flat band crossing the Dirac points. It is well-established that Dirac cones give rise to peculiar topological properties <cit.> and unusual behavior in general, such as effectively massless fermions, etc. Note that the flat-band frequency Ω_FB is equal to the resonance frequency of individual SQUIDs Ω_SQ, i.e., Ω_FB = Ω_SQ. The maximum and minimum frequencies of the spectrum are both obtained from Eq. (<ref>) at the center of the Brillouin zone, κ_x = κ_y = 0, with the plus and minus sign, respectively, as

Ω_max,min = √( Ω_SQ^2 ± 2 √( λ_x^2 + λ_y^2 ) ).

Since |λ_x|, |λ_y| ≪ 1, the bandwidth of the spectrum is approximately ΔΩ ≃ 2√(λ_x^2 + λ_y^2)/Ω_SQ. For example, for the parameters of Fig. <ref> we have Ω_min ≃ 1.343, Ω_max ≃ 1.384, and ΔΩ ≃ 0.04. We also note that the flat band is an intrinsic property of this lattice in the nearest-neighbor coupling limit and thus it is not destroyed by any anisotropy (i.e., when λ_x ≠ λ_y).
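The three bands of Eqs. (<ref>) and (<ref>) are straightforward to evaluate numerically. In the Python sketch below, β_L = 0.86 and λ_x = λ_y = 0.02 are assumptions chosen so that Ω_SQ ≃ 1.364 and the band edges match the numbers quoted above; they are not values stated in the text.

```python
import numpy as np

beta_L = 0.86                 # assumed; gives Omega_SQ ~ 1.364
lam = 0.02                    # assumed coupling; only lam^2 enters
Omega_SQ = np.sqrt(1 + beta_L)

k = np.linspace(-np.pi, np.pi, 201)
KX, KY = np.meshgrid(k, k)
s = 2 * lam * np.sqrt(np.cos(KX / 2) ** 2 + np.cos(KY / 2) ** 2)

upper = np.sqrt(Omega_SQ ** 2 + s)        # dispersive band, + sign
lower = np.sqrt(Omega_SQ ** 2 - s)        # dispersive band, - sign
flat = np.full_like(KX, Omega_SQ)         # flat band at Omega_FB = Omega_SQ

print(f"Omega_min = {lower.min():.3f}")   # ~1.343, at kx = ky = 0
print(f"Omega_max = {upper.max():.3f}")   # ~1.384, at kx = ky = 0
# all three bands meet at the zone corners (Dirac points), where s vanishes:
assert np.isclose(upper[0, 0], Omega_SQ) and np.isclose(lower[0, 0], Omega_SQ)
```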
The dependence of the extremal frequencies Ω_min,max and the flat-band frequency Ω_FB on the parameters β_L and λ_x, λ_y is shown in Fig. <ref>. In Fig. <ref>a all curves increase linearly with increasing β_L while the bandwidth remains practically constant. In Fig. <ref>b the bandwidth increases with increasing λ_x = λ_y while Ω_FB remains the same.

§ NUMERICS AND LOCALIZATION MEASURES

The dynamic equations for the fluxes through the SQUIDs, Eqs. (<ref>), without losses and external forcing (γ=0 and ϕ_ext=0), can be derived as Hamilton's equations from the Hamiltonian function

H = ∑_n,m H_n,m ,

where the energy (Hamiltonian) density H_n,m, defined as the energy per unit cell, is given by

H_n,m = ∑_k { (π/β) [ (q_n,m^k)^2 + (ϕ_n,m^k)^2 ] - cos( 2πϕ_n,m^k ) } - (π/β) { λ_x [ ϕ_n,m^A ϕ_n-1,m^B + 2ϕ_n,m^A ϕ_n,m^B + ϕ_n,m^B ϕ_n+1,m^A ] + λ_y [ ϕ_n,m^A ϕ_n,m-1^C + 2ϕ_n,m^A ϕ_n,m^C + ϕ_n,m^C ϕ_n,m+1^A ] } ,

where q_n,m^k = dϕ_n,m^k/dτ is the normalized instantaneous voltage across the Josephson junction of the SQUID in the (n,m)th unit cell of kind k. Both H and H_n,m are normalized to the Josephson energy, E_J. The total energy H, given by Eqs. (<ref>) and (<ref>), remains constant in time.

Eqs. (<ref>) with γ=0 and ϕ_ext=0 are integrated in time using the second order symplectic Störmer-Verlet scheme <cit.>, which preserves the total energy H to a prescribed accuracy which is a function of the time-step h. In the flux-voltage variables, that scheme reads <cit.>

ϕ⃗_n+1/2^k = ϕ⃗_n^k + (h/2) q⃗_n^k ,
q⃗_n+1^k = q⃗_n^k - h H_ϕ⃗^k (ϕ⃗_n+1/2^k) ,
ϕ⃗_n+1^k = ϕ⃗_n+1/2^k + (h/2) q⃗_n+1^k ,

where ϕ⃗^k and q⃗^k are N-dimensional vectors (N = N_x N_y) containing the fluxes and the voltages for the SQUIDs of kind k (k=A, B, C), and H_ϕ⃗^k ≡ ∇_ϕ⃗^k H denotes the column vector of partial derivatives of the Hamiltonian with respect to ϕ⃗^k, i.e.,

H_ϕ⃗^k = [ ∂H/∂ϕ_1^k, ∂H/∂ϕ_2^k, ∂H/∂ϕ_3^k, ..., ∂H/∂ϕ_N^k ]^T .

Periodic boundary conditions are used throughout, while the SLiMM is initialized at τ=0 with a single-site excitation of amplitude A_m. The excited SQUID is either of kind A, B, or C. For isotropic coupling between SQUIDs, i.e., for λ_x = λ_y, a single-site excitation of either a B or a C SQUID provides identical results due to symmetry.

For the identification of the localized states that may be formed either due to the flat band or the nonlinearity, and the quantification of their degree of localization, two statistical measures will be used; the energetic participation ratio P_e and the two-dimensional second moment M_2, which are given, respectively, by <cit.>

P_e = 1/∑_n,m ϵ_n,m^2 ,

and

M_2 = ∑_n,m { (n - x̅)^2 + (m - y̅)^2 } ϵ_n,m ,

where ϵ_n,m = H_n,m/H is the normalized energy density, and x̅, y̅ are the coordinates of the "center of energy"

x̅ = ∑_n,m n ϵ_n,m , y̅ = ∑_n,m m ϵ_n,m .

Note that P_e measures roughly the number of excited cells in the system; its values range from P_e = 1 (strong localization, all the energy in a single cell) to P_e = N, with N = N_x N_y (equipartition of the energy over the N unit cells). That measure has been also used to quantify the degree of diffraction in Kagomé photonic lattices <cit.>. The second moment M_2 quantifies the squared width of the state, hence, its spreading.

Eqs. (<ref>) with γ=0 and ϕ_ext=0, implemented with periodic boundary conditions, are initialized with single-site excitations of the form

ϕ_n,m^k(τ=0) = A_m for (n,m) = (n_e,m_e), and ϕ_n,m^k(τ=0) = 0 otherwise, with ϕ̇_n,m^k(τ=0) = 0 everywhere,

where A_m is the amplitude of the initial excitation, and k = A, B or C is the kind of the initially excited SQUID. The excited SQUID belongs to the unit cell with n = n_e, m = m_e, with n_e = N_x/2 and m_e = N_y/2.
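A minimal sketch of the integration loop follows; for simplicity the kick step is written directly with the right-hand sides of Eqs. (<ref>) (γ=0, ϕ_ext=0), which produces the same trajectories as the π/β-scaled gradient of H. The lattice size, the parameter values, and the initial amplitude are assumptions for illustration, not the paper's production code.

```python
import numpy as np

# Leapfrog (Stoermer-Verlet) for the lossless, undriven SLiMM.  Each
# sublattice flux is an Nx x Ny array; np.roll gives periodic boundaries.
beta, lam_x, lam_y = 0.137, 0.02, 0.02    # assumed; beta_L = 2*pi*beta ~ 0.86
Nx = Ny = 20

def accel(pA, pB, pC):
    """Right-hand sides phi'' of the equations of motion."""
    aA = (-pA - beta * np.sin(2 * np.pi * pA)
          + lam_x * (pB + np.roll(pB, 1, axis=0))    # B at (n, m) and (n-1, m)
          + lam_y * (pC + np.roll(pC, 1, axis=1)))   # C at (n, m) and (n, m-1)
    aB = (-pB - beta * np.sin(2 * np.pi * pB)
          + lam_x * (pA + np.roll(pA, -1, axis=0)))  # A at (n, m) and (n+1, m)
    aC = (-pC - beta * np.sin(2 * np.pi * pC)
          + lam_y * (pA + np.roll(pA, -1, axis=1)))  # A at (n, m) and (n, m+1)
    return aA, aB, aC

def verlet_step(phis, qs, h):
    half = [p + 0.5 * h * q for p, q in zip(phis, qs)]   # half drift
    qs = [q + h * a for q, a in zip(qs, accel(*half))]   # kick
    phis = [p + 0.5 * h * q for p, q in zip(half, qs)]   # half drift
    return phis, qs

# single-site excitation of an edge (C) SQUID at the central unit cell
phis = [np.zeros((Nx, Ny)) for _ in range(3)]
qs = [np.zeros((Nx, Ny)) for _ in range(3)]
phis[2][Nx // 2, Ny // 2] = 0.01

Omega_SQ = np.sqrt(1 + 2 * np.pi * beta)
h = 2 * np.pi / Omega_SQ / 1000           # h = T_SQ / 1000, as in the text
for _ in range(1000):
    phis, qs = verlet_step(phis, qs, h)
```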
The SLiMM is initialized with A_m spanning several orders of magnitude, and for each A_m several quantities such as the energy, the localization measures, and the ratio r = |H(τ) - H(0)|/H(0) are monitored during temporal evolution. Typically, a time-step h = T_SQ/1000, where T_SQ = 2π/Ω_SQ, is used in the simulations. However, it has been checked that smaller time-steps provide practically identical results. It has been also checked that in all runs the ratio r remains less than 5×10^-6 for the time step h above.

§ FLAT-BAND AND NONLINEAR LOCALIZATION

The typical time-dependence of P_e and M_2 when an edge SQUID (i.e., a SQUID C) is initially excited with amplitude A_m is shown in Figs. <ref>a and <ref>b, respectively, for A_m = 0.001 (black), 0.01 (red), 0.1 (green), and 1 (blue). Note that the curves for A_m = 0.001 and 0.01 almost coincide; for lower initial amplitudes the results are practically identical to those obtained for A_m = 0.001. For such low initial amplitudes the SLiMM remains in the (almost) linear regime, in which localized flat-band states are expected to be observed. Indeed, as can be seen in Fig. <ref>a, as well as in the inset for A_m = 0.01, P_e has a running average over 5000 T_SQ time units which is about eleven (P_e ≃ 11, inset), indicating substantial localization. The existence of a localized state is corroborated in this case by the corresponding second moment M_2, whose running average over 5000 T_SQ time units (yellow curve) attains a constant value for relatively long integration times (M_2 ≃ 22). The constancy of M_2 is interpreted as the termination of the energy spreading away from the site on which it was initially localized.

For A_m = 0.1, the inspection of the corresponding (green) curve and its running average (maroon) reveals a dramatic change in the behavior of P_e(τ); the value of the latter increases more or less linearly with increasing τ until it saturates at a rather high value around P_e ∼ 140. Note however the plateaus in the running-average curve, which indicate that the SLiMM passes through several metastable states until it reaches the steady one. The second moment M_2 in this case oscillates around 43. Finally, for A_m = 1 significant nonlinear effects come into play that favor strong localization with P_e ∼ 1; thus, practically all the energy initially provided to the system at a single site remains there! This is actually the reason why the value of M_2 remains for all times close to zero (M_2 ≃ 0.1, there is no spreading of energy whatsoever). Clearly, three different regimes can be identified; the (almost) linear regime, in which flat-band localization is possible, the intermediate regime, in which no localization is observed and the initial energy is eventually spread (in time-scales longer than those shown here) over the whole lattice, and the nonlinear regime, in which localization in the form of intrinsically localized modes or discrete breathers is observed. The size of fluctuations, e.g., in the curves for P_e, depends on that regime, which in turn is determined by the initial condition (excitation); thus, fluctuations are weak in the linear, strong in the intermediate, and vanishing in the nonlinear regime.
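The two measures are one-liners once the normalized energy density ϵ_n,m is available; the following sketch also confirms the two limiting values quoted earlier (P_e = 1 for a single excited cell, P_e = N for equipartition).

```python
import numpy as np

# Sketch: localization measures from the normalized energy density
# eps[n, m] = H_nm / H (an Nx x Ny array summing to one).
def participation_ratio(eps):
    return 1.0 / np.sum(eps ** 2)

def second_moment(eps):
    n, m = np.indices(eps.shape)
    x_bar = np.sum(n * eps)                # "center of energy"
    y_bar = np.sum(m * eps)
    return np.sum(((n - x_bar) ** 2 + (m - y_bar) ** 2) * eps)

# extreme cases: all energy in one cell vs. equipartition over N cells
Nx = Ny = 10
delta = np.zeros((Nx, Ny)); delta[5, 5] = 1.0
flat = np.full((Nx, Ny), 1.0 / (Nx * Ny))
print(participation_ratio(delta), second_moment(delta))   # 1.0, 0.0
print(participation_ratio(flat))                          # Nx * Ny = 100
```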
In this case, there is no localization in the linear and the intermediate regimes, i.e., for A_m = 0.001 (black), 0.01 (red), and 0.1 (green), as can be inferred from the large values of P_e, whose running average over 5000 T_SQ time units is about 140 (P_e ∼ 140). At the initial stage of time integration, which is not visible on the scale of the temporal axis of Fig. <ref>, both P_e and M_2 have low values; however, within a few thousand time units they gradually grow to their high values. Note that the average of the curves for M_2 (∼ 43) is very close to the average of the corresponding curve for A_m = 0.1 in Fig. <ref>b (intermediate regime). For high initial amplitude (A_m = 1), however, strong localization due to nonlinearity is again observed. For such high values of A_m the localized state generated by initially exciting either an A or a C SQUID does not reveal any significant difference. When there is no localization, the fluctuations of both P_e and M_2 are again very strong. The results presented in Figs. <ref> and <ref> have been obtained for λ_x = λ_y, i.e., in the case of an isotropic Lieb lattice in the nearest-neighbor approximation. In this case, for single-site initial excitations of either a C SQUID or a B SQUID (i.e., of edge SQUIDs), the results are practically identical.

The above scenario is confirmed by inspecting the corresponding energy density plots, i.e., the plots of the energy density E_n,m = H_n,m on the n-m plane, which are shown in Fig. <ref>. In Fig. <ref>a, for A_m = 0.001, the energy density E_n,m is clearly localized, although not on only one unit cell; the energetic participation ratio is in this case P_e ≃ 10.5. A similar pattern is obtained for A_m = 0.01, as shown in Fig. <ref>b, in which the maximum of the energy density is approximately two orders of magnitude larger than that in Fig. <ref>a. In Fig. <ref>c, there is clearly no localization, as can also be inferred from the large participation ratio P_e ≃ 140. In Fig. <ref>d, in which the localization is due to the nonlinearity, the energy is almost completely localized, and P_e ≃ 1.

It should be noted that there are particular types of modes which cannot be efficiently excited in the SLiMM with the single-site initial excitations used here. As an example, consider the application of the constraint ϕ_n,m^A = 0 for all the SQUIDs of kind A. That case has also been considered in a rhombic (quasi-)one-dimensional system with three waveguides per unit cell, whose coupling functions are the same as those of the equations for the SLiMM <cit.>. By setting ϕ_n,m^A = 0 for all n and m and ϕ_n,m^B = ϕ_n,m^C = δ_n,n_1 δ_m,m_1, with n_1 and m_1 integers, we get from the first of Eqs. (<ref>), or the first of Eqs. (<ref>) with γ=0 and ϕ_ext = 0, that λ_x ϕ_n_1,m_1^B = -λ_y ϕ_n_1,m_1^C (ϕ_n,m^B = ϕ_n,m^C = 0 for n ≠ n_1 and m ≠ m_1). That particular solution of the SLiMM system (either the linearized one or not) certainly cannot be obtained using single-site initial excitations.

In order to roughly determine the boundaries between the linear, intermediate, and nonlinear regimes, the averages of several quantities over the steady-state integration time τ_int were calculated for a wide range of initial excitation amplitudes A_m = A_m,i. An edge (C) SQUID is initially excited with amplitude A_m,i and Eqs. (<ref>) with γ=0 and ϕ_ext = 0 are integrated in time for τ_int = 10^5 T_SQ time units, to allow for transients to die out (the results obtained in this stage are discarded) and the steady state to be reached.
Then, in the steady state, the equations are integrated in time for τ_int more time units, and the energetic participation ratio averaged over τ_int is calculated. At the end of the steady-state integration time, the amplitude of the flux of the excited SQUID, A_m,c, and the oscillation frequency of the flux through the loop of the excited SQUID, Ω_osc, are also calculated. The same calculations are performed for an initially excited corner SQUID A, and the results for both cases are shown in Fig. <ref>. In Fig. <ref>a, the calculated amplitude A_m,c of the flux ϕ_n_e,m_e^k through the loop of the SQUID with k=A (blue) and k=C (green) is shown, along with an enlargement for low A_m,i (inset). As can be observed, A_m,c attains low values for low initial amplitudes A_m,i < 0.15, while for A_m,i > 0.15 the calculated amplitude A_m,c increases linearly with increasing A_m,i, according to the approximate relation A_m,c ≃ A_m,i. The behavior for A_m,i > 0.15 is a result of the strong localization taking place due to nonlinearities, and it does not depend on which kind of SQUID (edge or corner) is initially excited.

However, a closer look at the two curves for A_m,i < 0.15 reveals significant differences, especially for A_m,i < 0.05, which can be seen more clearly in the inset. In this regime the calculated amplitude A_m,c for k=C follows the relation A_m,c ≃ A_m,i/2, indicating localization due to the flat band. This conclusion is also supported by Figs. <ref>b and <ref>c. In Fig. <ref>b, the energetic participation ratio averaged over τ_int, <P_e>, for low values of A_m,i attains very different values depending on which kind of SQUID is initially excited (A or C); specifically, while <P_e> ∼ 10.5 for the SLiMM when a C SQUID is initially excited (black), it is <P_e> ∼ 140 when an A SQUID is initially excited (red). That large difference between the values of <P_e> is due to flat-band localization in the former case and delocalization in the latter case, since no flat-band modes are excited. In the inset, it can be observed that <P_e> for an initially excited C SQUID starts increasing for A_m,i > 0.05, indicating gradual degradation of flat-band localization, and meets the <P_e> curve for an initially excited A SQUID at A_m,i ∼ 0.1. In Fig. <ref>c, for A_m,i < 0.15, the oscillation frequency Ω_osc of the flux through the initially excited SQUID (either A or C) has a value around that of the linear resonance frequency of a single SQUID, Ω_SQ (Ω_SQ ≃ 1.364 for the parameters of Fig. <ref>). As can be seen in the inset, when a C SQUID is initially excited (violet), then to high accuracy Ω_osc = Ω_SQ for initial amplitudes up to A_m,i ∼ 0.075. However, when an A SQUID is initially excited (turquoise), the frequency Ω_osc jumps slightly above and below Ω_SQ irregularly, but it remains within the bandwidth of the linear frequency spectrum. For A_m,i > 0.15, the frequency Ω_osc decreases with increasing A_m,i, although it starts increasing again with increasing A_m,i at A_m,i ∼ 0.8. In this regime, nonlinear localized modes of the breather type are formed, whose frequency lies outside the linear frequency spectrum and depends on their amplitude, as it should. From this figure it can thus be inferred that flat-band localization occurs for initial amplitudes up to A_m,i ≃ 0.05 (linear regime), while delocalization occurs in the interval 0.05 < A_m,i < 0.15 (intermediate regime). For larger A_m,i, strong nonlinear localization occurs (nonlinear regime).
This rough estimation of the boundaries between the three regimes is of course parameter dependent. Remarkably, flat-band localization occurs only when an edge SQUID (B or C) is initially excited. The excitation of a corner (A) SQUID does not lead to excitation of flat-band modes, and thus such a localized initial state rapidly delocalizes. On the other hand, the observed flat-band localization is not very strong compared to the nonlinear localization. This is probably due to the fact that a single-site excitation of a B or C SQUID does not correspond to an exact localized flat-band eigenmode.

In the case of anisotropic coupling, i.e., for λ_x ≠ λ_y, single-site excitations of B and C SQUIDs give different results, as expected due to the lowering of symmetry <cit.>. Typical curves for the amplitude A_m,c of the flux ϕ_n_e,m_e^k of the excited SQUID (k = A, B, and C), the energetic participation ratio averaged over the steady-state integration time, <P_e>, and the oscillation frequency Ω_osc of the flux through the excited SQUID are shown in Fig. <ref> for anisotropic nearest-neighbor coupling, λ_y = 1.5 λ_x = -0.03, as a function of relatively low initial excitation amplitudes A_m,i for which flat-band localization is expected. As can be observed in Fig. <ref>a, flat-band localization occurs when either of the edge SQUIDs is excited with A_m,i < 0.04. For larger values of A_m,i, localization starts degrading, as can be confirmed from the corresponding curves of <P_e> in Fig. <ref>b. Here, it is also apparent that initial excitations of B and C SQUIDs do not lead to a state with the same degree of localization; indeed, <P_e> is respectively ∼ 5 and ∼ 25 (while the corresponding <P_e> for initial excitations of an A SQUID is about 160). The corresponding oscillation frequencies of the fluxes in the case of B or C SQUID initial excitations are practically equal to that of the single-SQUID resonance, Ω_SQ ≃ 1.364. For initial excitations of an A SQUID, the oscillation frequency is very close to either the upper or the lower boundary of the linear frequency spectrum.

§ CONCLUSIONS

The dynamic equations for the fluxes threading the SQUID loops of a driven-dissipative SLiMM have been derived, along with the corresponding linear frequency spectrum. The Lieb lattice geometry results in a spectrum with two dispersive bands, which form a Dirac cone at the corners of the first Brillouin zone, and a flat band crossing those Dirac points. The localization properties of Hamiltonian SLiMMs, i.e., those without dissipation and driving terms, have been determined through numerical simulations for single-site initial excitations of varying amplitude. Flat-band localization, i.e., the emergence of localized flat-band states, is observed when an edge (B or C) SQUID of the unit cell of the SLiMM is initially excited with low amplitude. On the contrary, no such states are generated when a corner (A) SQUID of the unit cell of the SLiMM is initially excited with low amplitude. These results are compatible with the experiments on photonic Lieb lattices <cit.>. For sufficiently high amplitude of the initial excitation of either a corner or an edge SQUID, localization due to nonlinearities in the form of discrete breathers is observed. The linear regime (low-amplitude initial excitations) and the nonlinear regime (high-amplitude initial excitations), in which flat-band localized states and discrete breathers, respectively, can be generated, are separated by an intermediate regime in which neither type of localization is observed.
This dynamic behavior is quite different from that observed in, e.g., two-dimensional Kagomé lattices, in which families of nonlinear localized modes in the form of discrete solitons or discrete breathers may bifurcate from localized linear modes of the flat band <cit.>. Here, relatively high-amplitude initial excitations (A_m,i > 0.05) excite nonlinear effects in the SQUIDs which destroy the flatness of the flat band obtained in the linear limit. At the same time, however, these nonlinear effects are not strong enough for the initial excitation to remain localized (self-trapped); that occurs only when the amplitude of the initial excitation exceeds a particular, parameter-dependent threshold (A_m,i ≃ 0.15 for the parameters of Fig. <ref>).

§ ACKNOWLEDGMENT

This work is partially supported by the Ministry of Education and Science of the Russian Federation in the framework of the Increase Competitiveness Program of NUST "MISiS" (No. K2-2015-007) and by the Ministry of Education and Science of the Republic of Kazakhstan (Contract # 339/76-2015). NL gratefully acknowledges the Laboratory for Superconducting Metamaterials, NUST "MISiS", for its warm hospitality during visits.

§ REFERENCES

Smith2004 D. R. Smith, J. B. Pendry, and M. C. K. Wiltshire, Metamaterials and negative refractive index, Science 305, 788 (2004).
Yen2004 T. J. Yen, W. J. Padilla, N. Fang, D. C. Vier, D. R. Smith, J. B. Pendry, D. N. Basov, and X. Zhang, Terahertz magnetic response from artificial materials, Science 303, 1494 (2004).
Linden2004 S. Linden, C. Enkrich, G. Dolling, M. W. Klein, J. Zhou, T. Koschny, C. M. Soukoulis, S. Burger, F. Schmidt, and M. Wegener, Magnetic response of metamaterials at 100 terahertz, Science 306, 1351 (2004).
Linden2006 S. Linden, C. Enkrich, G. Dolling, M. W. Klein, J. Zhou, T. Koschny, C. M. Soukoulis, S. Burger, F. Schmidt, and M. Wegener, Photonic metamaterials: magnetism at optical frequencies, IEEE J. Selec. Top. Quant. Electron. 12, 1097 (2006).
Shalaev2007 V. M. Shalaev, Optical negative-index metamaterials, Nature Photon. 1, 41 (2007).
Litchinitser2008 N. M. Litchinitser and V. M. Shalaev, Photonic metamaterials, Laser Phys. Lett. 5, 411 (2008).
Boardman2010 A. D. Boardman, V. V. Grimalsky, Y. S. Kivshar, S. V. Koshevaya, M. Lapine, N. M. Litchinitser, V. N. Malnev, M. Noginov, Y. G. Rapoport, and V. M. Shalaev, Active and tunable metamaterials, Laser Photonics Rev. 5, 287 (2010).
Lapine2014 M. Lapine, I. V. Shadrivov, and Y. S. Kivshar, Colloquium: Nonlinear metamaterials, Rev. Mod. Phys. 86, 1093 (2014).
Zheludev2010 N. I. Zheludev, The road ahead for metamaterials, Science 328, 582 (2010).
Zheludev2011 N. I. Zheludev, A roadmap for metamaterials, Optics and Photonics News 22, 31 (2011).
Zheludev2012 N. I. Zheludev and Yu. S. Kivshar, From metamaterials to metadevices, Nature Mater. 11, 917 (2012).
Anlage2011 S. M. Anlage, The physics and applications of superconducting metamaterials, J. Opt. 13, 024001 (2011).
Jung2014 P. Jung, A. V. Ustinov, and S. M. Anlage, Progress in superconducting metamaterials, Supercond. Sci. Technol. 27, 073001 (2014).
Du2006 C. Du, H. Chen, and S. Li, Quantum left-handed metamaterial from superconducting quantum-interference devices, Phys. Rev. B 74, 113105 (2006).
Lazarides2007 N. Lazarides and G. P. Tsironis, rf superconducting quantum interference device metamaterials, Appl. Phys. Lett. 90, 163501 (2007).
Josephson1962 B. Josephson, Possible new effects in superconductive tunnelling, Phys. Lett. A 1, 251 (1962).
Jung2013 P. Jung, S. Butz, S. V. Shitov, and A. V. Ustinov, Low-loss tunable metamaterials using superconducting circuits with Josephson junctions, Appl. Phys. Lett. 102, 062601 (2013).
Butz2013a S. Butz, P. Jung, L. V. Filippenko, V. P. Koshelets, and A. V. Ustinov, A one-dimensional tunable magnetic metamaterial, Opt. Express 21, 22540 (2013).
Trepanier2013 M. Trepanier, Daimeng Zhang, O. Mukhanov, and S. M. Anlage, Realization and modeling of rf superconducting quantum interference device metamaterials, Phys. Rev. X 3, 041029 (2013).
Zhang2015 Daimeng Zhang, M. Trepanier, O. Mukhanov, and S. M. Anlage, Broadband transparency of macroscopic quantum superconducting metamaterials, Phys. Rev. X 5, 041045 (2015).
Jung2014b P. Jung, S. Butz, M. Marthaler, M. V. Fistul, J. Leppäkangas, V. P. Koshelets, and A. V. Ustinov, Multistability and switching in a superconducting metamaterial, Nat. Comms. 5, 3730 (2014).
Lazarides2013b N. Lazarides and G. P. Tsironis, Multistability and self-organization in disordered SQUID metamaterials, Supercond. Sci. Technol. 26, 084006 (2013).
Tsironis2014b G. P. Tsironis, N. Lazarides, and I. Margaris, Wide-band tuneability, nonlinear transmission, and dynamic multistability in SQUID metamaterials, Appl. Phys. A 117, 579 (2014).
Lazarides2008a N. Lazarides, G. P. Tsironis, and M. Eleftheriou, Dissipative discrete breathers in rf SQUID metamaterials, Nonlinear Phenom. Complex Syst. 11, 250 (2008).
Lazarides2015b N. Lazarides, G. Neofotistos, and G. P. Tsironis, Chimeras in SQUID metamaterials, Phys. Rev. B 91, 054303 (2015).
Hizanidis2016a J. Hizanidis, N. Lazarides, and G. P. Tsironis, Robust chimera states in SQUID metamaterials with local interactions, Phys. Rev. E 94, 032219 (2016).
Danieli2015 C. Danieli, J. D. Bodyfelt, and S. Flach, Flat-band engineering of mobility edges, Phys. Rev. B 91, 235134 (2015).
Khomeriki2016 R. Khomeriki and S. Flach, Landau-Zener Bloch oscillations with perturbed flat bands, Phys. Rev. Lett. 116, 245301 (2016).
Leykam2013 D. Leykam, S. Flach, O. Bahat-Treidel, and A. S. Desyatnikov, Flat band states: Disorder and nonlinearity, Phys. Rev. B 88, 224203 (2013).
Leykam2017 D. Leykam, J. D. Bodyfelt, A. S. Desyatnikov, and S. Flach, Localization of weakly disordered flat band states, Eur. Phys. J. B 90, 1 (2017).
Maimistov2017 A. I. Maimistov, On the stability of flat-band modes in a rhombic nonlinear optical waveguide array, J. Opt. 19, 045502 (2017).
Leykam2012 D. Leykam, O. Bahat-Treidel, and A. S. Desyatnikov, Pseudospin and nonlinear conical diffraction in Lieb lattices, Phys. Rev. A 86, 031805(R) (2012).
Mukherjee2015a S. Mukherjee, A. Spracklen, D. Choudhury, N. Goldman, P. Öhberg, E. Andersson, and R. R. Thomson, Observation of a localized flat-band state in a photonic Lieb lattice, Phys. Rev. Lett. 114, 245504 (2015).
Vicencio2015 R. A. Vicencio, C. Cantillano, L. Morales-Inostroza, B. Real, C. Mejía-Cortés, S. Weimann, A. Szameit, and M. I. Molina, Observation of localized states in Lieb photonic lattices, Phys. Rev. Lett. 114, 245503 (2015).
LiuZheng2014 Liu Zheng, Liu Feng, and Wu Yong-Shi, Exotic electronic states in the world of flat bands: From theory to material, Chin. Phys. B 23, 077308 (2014).
Slot2017 M. R. Slot, T. S. Gardenier, P. H. Jacobse, G. C. P. van Miert, S. N. Kempkes, S. J. M. Zevenhuizen, C. Morais Smith, D. Vanmaekelbergh, and I. Swart, Experimental realization and characterization of an electronic Lieb lattice, Nature Phys. 13, 672 (2017).
Guzman-Silva2014 D. Guzmán-Silva, C. Mejía-Cortés, M. A. Bandres, M. C. Rechtsman, S. Weimann, S. Nolte, M. Segev, A. Szameit, and R. A. Vicencio, Experimental observation of bulk and edge transport in photonic Lieb lattices, New J. Phys. 16, 063061 (2014).
Flach2008a S. Flach and A. V. Gorbach, Discrete breathers - advances in theory and applications, Phys. Rep. 467, 1 (2008).
Vicencio2013 R. A. Vicencio and M. Johansson, Discrete flat-band solitons in the Kagomé lattice, Phys. Rev. A 87, 061803(R) (2013).
Johansson2015 M. Johansson, U. Naether, and R. A. Vicencio, Compactification tuning for nonlinear localized modes in sawtooth lattices, Phys. Rev. E 92, 032912 (2015).
Belicev2015 P. P. Belicev, G. Gligorić, A. Radosavljevic, A. Maluckov, M. Stepic, R. A. Vicencio, and M. Johansson, Localized modes in nonlinear binary Kagomé ribbons, Phys. Rev. E 92, 052916 (2015).
Lopez-Gonzalez2016 D. López-González and M. I. Molina, Linear and nonlinear compact modes in quasi-one-dimensional flatband systems, Phys. Rev. A 93, 043847 (2016).
Gligoric2016 G. Gligorić, A. Maluckov, Lj. Hadzievski, S. Flach, and B. A. Malomed, Nonlinear localized flat-band modes with spin-orbit coupling, Phys. Rev. B 94, 144302 (2016).
Likharev1986 K. K. Likharev, Dynamics of Josephson Junctions and Circuits, Gordon and Breach, Philadelphia, 1986.
Weeks2010 C. Weeks and M. Franz, Topological insulators on the Lieb and perovskite lattices, Phys. Rev. B 82, 085310 (2010).
Hairer2003 E. Hairer, C. Lubich, and G. Wanner, Geometric numerical integration illustrated by the Störmer-Verlet method, Acta Numerica 12, 399 (2003).
Laptyeva2012 T. V. Laptyeva, J. D. Bodyfelt, and S. Flach, Subdiffusion of nonlinear waves in two-dimensional disordered lattices, Europhys. Lett. 98, 60002 (2012).
Mulansky2012 M. Mulansky and A. Pikovsky, Scaling properties of energy spreading in nonlinear Hamiltonian two-dimensional lattices, Phys. Rev. E 86, 056214 (2012).
deMoura2003 F. A. B. F. de Moura, M. D. Coutinho-Filho, E. P. Raposo, and M. L. Lyra, Delocalization in harmonic chains with long-range correlated random masses, Phys. Rev. B 68, 012202 (2003).
Vicencio2014 R. A. Vicencio and C. Mejía-Cortés, Diffraction-free image transmission in Kagomé photonic lattices, J. Opt. 16, 015706 (2014).
http://arxiv.org/abs/1705.10287v2
{ "authors": [ "N. Lazarides", "G. P. Tsironis" ], "categories": [ "physics.app-ph", "cond-mat.mes-hall", "nlin.PS" ], "primary_category": "physics.app-ph", "published": "20170525122937", "title": "SQUID Metamaterials on a Lieb lattice: From flat-band to nonlinear localization" }
PVEs: Position-Velocity Encoders for Unsupervised Learning of Structured State Representations
Rico Jonschkowski1,2, Roland Hafner1, Jonathan Scholz1, and Martin Riedmiller1. 1DeepMind, 2Robotics and Biology Laboratory at Technische Universität Berlin.
December 30, 2023
==============================================================================================================================================================

Online portfolio selection research has so far focused mainly on minimizing regret defined in terms of wealth growth. Practical financial decision making, however, is deeply concerned with both wealth and risk. We consider online learning of portfolios of stocks whose prices are governed by arbitrary (unknown) stationary and ergodic processes, where the goal is to maximize wealth while keeping the conditional value at risk (CVaR) below a desired threshold. We characterize the asymptotically optimal risk-adjusted performance and present an investment strategy whose portfolios are guaranteed to achieve the asymptotically optimal solution while fulfilling the desired risk constraint. We also numerically demonstrate and validate the viability of our method on standard datasets.

§ INTRODUCTION

It has long been recognized that the value of any financial investment should be quantified using both return and risk, where risk is traditionally measured by the variance of the return. A common quantification for risk-adjusted return is the Sharpe ratio <cit.>, which is essentially the (annualized) mean return divided by the (annualized) standard deviation of the return. Nevertheless, in online portfolio selection <cit.>, which has become a focal point in online learning research, risk is rarely considered and the primary quantity to be optimized is still the return alone. The creation of an online learning technique that optimizes risk-adjusted return is a longstanding goal and a major challenge <cit.>.

In an adversarial (regret minimization) online learning setting, risk-adjusted portfolio selection with no regret is known to be an impossible goal <cit.>. Recently, within an i.i.d. setting, Mahdavi et al. presented a framework that can be utilized for achieving this goal <cit.>, and Haskell et al. considered risk-aware algorithms <cit.>, but i.i.d. modeling has been criticized for being unsuitable for modeling stock prices faithfully <cit.>. The problem with i.i.d. modeling is the lack of time dependencies between stock returns. A substantially richer family of stochastic models is the class of stationary and ergodic processes, which are sufficiently expressive to model arbitrary dependencies among stock prices. Many publications have considered stationary and ergodic markets <cit.>, and all these works consider strategies that are oblivious to risk. Moreover, all the learning strategies they consider rely on non-parametric estimation techniques (e.g., histogram, kernel, or nearest-neighbor methods), and these strategies always use a countably infinite set of experts, while the guarantees provided for them are always asymptotic. This is no coincidence, as it is well known that finite sample guarantees for these methods cannot be achieved without additional strong assumptions on the source distribution <cit.>. Similarly, it is also known that non-parametric strategies in this context must rely on infinitely many experts <cit.>.
Approximate implementations of non-parametric strategies (which apply only a finite set of experts), however, turn out to work exceptionally well and, despite the inevitable approximation, are reported <cit.> to significantly outperform strategies designed to work in an adversarial, no-regret setting. For example, the nearest-neighbor investment strategy of <cit.> is shown in <cit.> to beat Cover's universal portfolios (UP) <cit.>, the exponentiated gradient (EG) method <cit.>, and the online Newton steps strategy of <cit.> on most of the common datasets. We also note that practical approximate use of asymptotic methods is prevalent in other areas of machine learning, such as (deep) reinforcement learning with function approximation <cit.>.

For a market with n stocks, and within a stochastic online learning framework, we develop a novel online portfolio selection strategy called CVaR-Adjusted Nearest Neighbor (CANN), which guarantees the best possible asymptotic performance while keeping the risk contained to a desired threshold. This is done using a novel mechanism that facilitates the handling of multiple objectives. Rather than using standard deviation to measure risk, we consider the well-known CVaR, a coherent and widely-accepted risk measure, which improves upon the traditional measure by appropriately capturing the downside risk <cit.>. We prove the asymptotic optimality of our strategy for general stationary and ergodic processes, thus allowing for arbitrary (unknown) dependencies among stock prices. We also present numerical examples where we apply an approximate application of our strategy (with a finite set of experts) that validates the method and beautifully demonstrates how risk can be controlled.

§ ONLINE PORTFOLIO SELECTION

We consider the following standard online portfolio selection game with short selling and leverage, as defined by Györfi et al. <cit.>. The game is played through T days over a market with n stocks. On each day t, the market is represented by a market vector x_t of relative prices, x_t ≜ (x_1^t, x_2^t, ..., x_n^t), where for each i = 1, ..., n, x_i^t ≥ 0 is the relative price of stock i, defined to be the ratio of its closing price on day t relative to its closing price on day t-1.

A wealth allocation vector or portfolio for day t is b_t ≜ (b_0^t, b_1^t, b_2^t, ..., b_n^t), where b_0^t is a cash allocation (not invested in any stock), and for i > 0, b_i^t is the wealth allocation for stock i, where a positive component, b_i^t > 0, represents a long position in stock i, and a negative one, b_i^t < 0, is a short position in stock i. We also allow leverage; that is, the investor can borrow and invest additional cash, so as to amplify her profits. For the borrowed cash, the investor must pay a daily interest rate, r > 0, and we assume that the investor receives the same interest r for deposited cash (b_0^t).

Consider a portfolio b_t played at the start of day t. After the market vector x_t is revealed, the portfolio changes in response to changes in stock prices, as follows. For each portfolio component b_i^t, if b_i^t > 0 is a long position, its revised value is b_i^t x_i^t. However, if b_i^t < 0 is a short position, then, after we take into account the interest owed on borrowing the stock for the short sale, the revised value of this position is b_i^t (x_i^t - 1 + r) (note that in this case, the investor profits when the price drops, and vice versa).
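The position bookkeeping just described can be sketched in a few lines. This is our own minimal illustration (the function name and example numbers are hypothetical, not from the paper); the financing charge for leveraged wealth, introduced in the next paragraph, is not yet included.

```python
import numpy as np

def end_of_day_value(b, x, r):
    # b[0] is the cash allocation; b[1:] are the stock allocations.
    # Cash earns interest r; a long position b_i > 0 becomes b_i * x_i;
    # a short position b_i < 0 becomes b_i * (x_i - 1 + r), so it gains
    # when the price drops and pays interest on the borrowed stock.
    b = np.asarray(b, dtype=float)
    x = np.asarray(x, dtype=float)
    longs = np.maximum(b[1:], 0.0)
    shorts = np.minimum(b[1:], 0.0)
    return b[0] * (1.0 + r) + longs @ x + shorts @ (x - 1.0 + r)

# one stock held long, one shorted, a little cash on the side
v = end_of_day_value(b=[0.2, 1.0, -0.5], x=[1.03, 0.97], r=0.000245)
```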
Clearly, short selling and leveraging are risky; for example, a short position has unbounded potential loss, which is further amplified by leveraging. Following <cit.>, we assume that no stock can lose or gain more than B×100% of its value from one day to another, where B ∈ (0,1). In other words, for each i, t,

1-B ≤ x_i^t ≤ 1+B.

The allowed leverage is thus L_B,r ≜ (1+r)/(B+r), which is chosen to preclude the possibility of bankruptcy (see, e.g., <cit.>, Chapter 4). Using the notation (b)^+ ≜ (max{b_1,0}, ..., max{b_n,0}) and (b)^- ≜ (min{b_1,0}, ..., min{b_n,0}), and considering the interest credited for deposited cash, the interest debited for borrowed stocks (short positions), and the interest paid for leveraged wealth, we obtain, by the end of the day, an overall daily return of

b_0^t (1+r) + ⟨(b_t)^+, x_t⟩ + ⟨(b_t)^-, x_t - 1 + r⟩ - (L_B,r - 1)(1+r).

The investor chooses a portfolio from the following set,

{ (b_0, ..., b_n) ∈ ℝ^n+1 | ∑_i=0^n |b_i| = L_B,r },

which is, unfortunately, not convex. We thus apply a simple transformation proposed by Györfi et al. <cit.>: transform the market vector into a vector with 2n+1 entries (one entry for cash, n entries for the long components, and n for the short ones). Formally, we define the transformed market vector as x'_t ≜ (1+r, x_1^t, 2 - x_1^t + r, ..., x_n^t, 2 - x_n^t + r), which is uniquely defined as a function of the original market vector. The transformed portfolio set is now defined as

ℬ' ≜ { (b_0, ..., b_2n) ∈ ℝ^2n+1 | b_i ≥ 0, ∑_i=0^2n b_i = L_B,r },

which is an unnormalized simplex. With this transformed market vector and portfolio set, at the start of each trading day t, the player chooses a portfolio b_t ∈ ℬ' based on the previous market sequences. It can easily be shown <cit.> that by the end of day t, the player's daily multiplicative return simplifies to ⟨b_t, x'_t⟩ - (L_B,r - 1)(1+r).

With respect to a fixed stationary and ergodic process, we denote by {X_t}_-∞^∞ [By Kolmogorov's extension theorem <cit.>, the stationary and ergodic process (X_n)_1^∞ can be extended to (X_n)_-∞^∞ such that the ergodicity holds both for n→∞ and for n→-∞.] the induced sequence of stationary and ergodic market vectors, and define the player's investment strategy as a sequence of portfolios b_1, b_2, .... Then, assuming an initial wealth of $1, we obtain after T days the following cumulative wealth,

R_T ≜ ∏_t=1^T (⟨b_t, X'_t⟩ - (L_B,r - 1)(1+r)).

Defining the average growth rate,

W_T ≜ 1/T ∑_t=1^T log(⟨b_t, X'_t⟩ - (L_B,r - 1)(1+r)),

we have R_T = e^{T W_T}. Notice that maximizing W_T is equivalent to maximizing R_T. In Section <ref>, we denote the negative of the summand of W_T (<ref>) by ω(b, X_t) ≜ -log(⟨b, X'_t⟩ - (L_B,r - 1)(1+r)).

§ INTRODUCING RISK

The traditional quantity for measuring financial risk is the variance (standard deviation) of the return. This measure, however, is criticized for being inadequate to measure risk; one of the reasons is its inability to distinguish between downside risk and upside risk (the latter corresponding to desirable behavior). Various alternative measures have been proposed, such as the maximum drawdown and value at risk (VaR). An axiomatic approach proposed by Artzner et al. <cit.> identifies coherent risk measures, which satisfy the proposed axioms. Accordingly, the most popular coherent risk measure is conditional value at risk (CVaR). For any parameter α ∈ (0,1), CVaR_α is essentially the average loss that the investor suffers on the (1-α)·100% worst returns. For a continuous random variable Z with bounded mean, CVaR_α is defined as follows. Let Z be a continuous random variable representing loss.
Given a parameter 0 < α < 1, the CVaR_α of Z is

CVaR_α(Z) = 𝔼[Z | Z ≥ min{c | ℙ_Z(Z ≤ c) ≥ α}].

Assuming that we already know the distribution of returns, a direct calculation of CVaR from the above formula requires a calculation of the α-quantile of the loss, followed by averaging over the tail beyond it. Alternatively, it was shown in <cit.> that CVaR_α can be computed by solving the following convex optimization problem. Define

ϕ'(b, c) ≜ c + 1/(1-α) 𝔼[(-log(⟨b, X'⟩) - c)^+],

where we overload the previously defined vector operation (·)^+, defining for any scalar x, (x)^+ ≜ max{0, x}.

The function ϕ'(b, c) is convex and continuously differentiable. Moreover, the CVaR_α of the loss associated with any portfolio b is

CVaR_α(b) = min_{c∈ℝ} ϕ'(b, c).

Theorem <ref> is essential to the development and analysis of our strategy. By our market boundedness assumption (<ref>), it follows that ω(b, X) is contained in [-M, M] for some M > 0. Thus, any c that minimizes Equation (<ref>) must reside in [-M, M]. For a complete proof of this simple fact, see <cit.>. In Section <ref>, we require the following definition:

ℬ ≜ ℬ' × [-M, M].

§ OPTIMALITY OF V^*

Let ℱ_∞ be the σ-algebra generated by the infinite past X_-1, X_-2, ..., and let P_∞ be the induced regular conditional probability distribution of X_0 given the infinite past; we write 𝔼_∞ for expectation with respect to P_∞. Thus, all expectations w.r.t. X_0 are conditional given the infinite past. A well-known result appearing in <cit.> proves the following upper bound on the asymptotic average growth rate of any investment strategy under stationary and ergodic markets:

lim sup_{T→∞} W_T ≤ 𝔼[ max_{b∈ℬ'} 𝔼_∞[-ω(b, X_0)] ].

Over the years, several algorithms achieving this asymptotic bound were proposed <cit.> (for the case of long-only portfolios). Our goal is to achieve the optimal asymptotic average growth rate while keeping the CVaR bounded. By Theorem <ref>, the desired growth rate is given by the solution to the following minimization problem:

minimize over (b, c) ∈ ℬ:  𝔼_∞[ω(b, X_0)]  subject to  ϕ(b, c) ≤ γ,

where ϕ(b, c) ≜ c + 1/(1-α) 𝔼_∞[(-log(⟨b, X'_0⟩) - c)^+].

Optimization problem (<ref>) motivates the definition of a γ-bounded strategy, whose long-term average CVaR, calculated according to the information available at the beginning of each round, is bounded by γ. An investment strategy will be called γ-bounded if, almost surely,

lim sup_{T→∞} 1/T ∑_i=1^T min_{c∈ℝ} ( c + 1/(1-α) 𝔼_{X_i | X_0^{i-1}}[(-log(⟨b_i, X'_i⟩) - c)^+] ) ≤ γ.

The set of all γ-bounded strategies is denoted 𝒮_γ. Clearly, there is always a solution to optimization problem (<ref>), and therefore 𝒮_γ ≠ ∅. For example, the vacuous strategy that always invests everything in cash is γ-bounded for any γ > 0. Let (b_∞^*, c_∞^*) be a solution to (<ref>). Define the γ-feasible optimal value as

V^* ≜ 𝔼[ 𝔼_∞[ω(b_∞^*, X_0)] ]  a.s.

Optimization problem (<ref>) is convex over ℬ, which in turn is a compact and convex subset of ℝ^2n+2. Therefore, the problem is equivalent to finding the saddle-point of the Lagrangian function <cit.>, namely,

min_{(b,c)∈ℬ} max_{λ∈ℝ^+} ℒ(b, c, λ),

where the Lagrangian is

ℒ(b, c, λ) ≜ 𝔼_∞[ω(b, X_0)] + λ(ϕ(b, c) - γ).

Let λ^* be the value of λ optimizing (<ref>), and assume it is unique.[If it is not unique, we can define an ϵ-regularized Lagrangian and obtain an ϵ-optimal solution.] It is possible to identify a constant λ_max such that λ_max > λ^* <cit.>. With this constant available, we set Λ ≜ [0, λ_max].

Our first result is that V^* bounds the performance of any strategy in 𝒮_γ. This result, as stated in Theorem <ref>, is a generalization of the well-known result of <cit.> regarding the best possible performance for wealth alone (without constraints).
For any investment strategy in 𝒮_γ whose portfolios are b_1, b_2, ..., the following holds a.s.:

lim inf_{T→∞} 1/T ∑_i=1^T ω(b_i, X_i) ≥ V^*.

From Theorem <ref> it follows that an investment strategy in 𝒮_γ is optimal if, for any bounded, stationary and ergodic process {X_i}_-∞^∞,

lim_{T→∞} 1/T ∑_i=1^T ω(b_i, X_i) = V^*  a.s.

We find just such a strategy in Section <ref>.

§ CVAR-ADJUSTED NEAREST NEIGHBOR INVESTMENT STRATEGY

In this section we present an investment strategy in 𝒮_γ that satisfies (<ref>). The strategy, which we call CVaR-Adjusted Nearest Neighbor, henceforth CANN, is summarized in the pseudo-code in Algorithm <ref>. To define the strategy we require the following definition of the instantaneous Lagrangian:

l(b, c, λ, x) ≜ ω(b, x) + λ( c + 1/(1-α) (ω(b, x) - c)^+ - γ ).

The CANN strategy maintains a countable array of experts {H_k,h}, where on each day t an expert H_k,h outputs a triplet (b_k,h^t, c_k,h^t, λ_k,h^t) ∈ ℬ × Λ, defined to be the minimax solution corresponding to an empirical distribution using nearest-neighbor estimates (see details below). We prove that, as t grows, those empirical estimates converge (weakly) to P_∞ and the experts' values thus converge to V^*. Each day t, CANN outputs a prediction (b_t, c_t, λ_t) ∈ ℬ × Λ. The sequence of predictions (b_1, c_1), (b_2, c_2), ... output by CANN is designed to minimize the average loss,

1/T ∑_i=1^T l(b, c, λ_i, X_i).

Similarly, the sequence of predictions λ_1, λ_2, ... is designed to maximize the average loss,

1/T ∑_i=1^T l(b_i, c_i, λ, X_i).

Each of (b_i, c_i) and λ_i is generated by aggregating the experts' predictions (b_k,h^i, c_k,h^i) and λ_k,h^i, k, h = 1, 2, ..., respectively. In order to ensure that CANN performs as well as any other expert for both the (b, c) and λ predictions, we apply, twice simultaneously, the Weak Aggregating Algorithm of <cit.> and <cit.>. This also ensures that the average loss of the strategy converges a.s. to V^*.

We now turn to defining the countable set of experts {H_k,h}: For each h = 1, 2, ..., we choose p_h ∈ (0,1) such that the sequence {p_h}_h=1^∞ satisfies lim_{h→∞} p_h = 0. Setting ĥ = ⌊ n p_h ⌋, for expert H_k,h we define, for a fixed k×n-dimensional vector, denoted s, the following set:

B_k,h^s,(1,n) ≜ { X_i | k+1 ≤ i ≤ n, X_{i-k}^{i-1} is among the ĥ nearest neighbors of s among X_1^k, ..., X_{n-k}^{n-1} },

where X_j^{j+k} ≜ (X_j, ..., X_{j+k}) ∈ ℝ^{k×n}. Thus, expert H_k,h has a window of length k, and it looks for the ĥ Euclidean nearest neighbors of s in the past. We also define

h_k,h^b (X_1^{n-1}, s) ≜ argmin_{(b,c)∈ℬ} ( max_{λ∈Λ} 1/|B_k,h^s,(1,n)| ∑_{X_i ∈ B_k,h^s,(1,n)} l_k,h,n(b, c, λ, X_i) ),

h_k,h^λ (X_1^{n-1}, s) ≜ argmax_{λ∈Λ} ( min_{(b,c)∈ℬ} 1/|B_k,h^s,(1,n)| ∑_{X_i ∈ B_k,h^s,(1,n)} l_k,h,n(b, c, λ, X_i) ),

for

l_k,h,n(b, c, λ, x) ≜ l(b, c, λ, x) + (‖(b,c)‖^2 - ‖λ‖^2)(1/n + 1/h + 1/k).

Using the above, we define the predictions of H_k,h to be

H_k,h^b (X_1^{n-1}) = h_k,h^b (X_1^{n-1}, X_{n-k}^{n-1}),  n = 1, 2, 3, ...,
H_k,h^λ (X_1^{n-1}) = h_k,h^λ (X_1^{n-1}, X_{n-k}^{n-1}),  n = 1, 2, 3, ....

Note that l_k,h,n(b, c, λ, x) is an approximation of l(b, c, λ, x) which guarantees that the minimax solution of every expert is unique. This technicality is used in the proof of Theorem <ref>. A γ-bounded investment strategy is called γ-universal if its asymptotic average growth rate is not worse than that of any other γ-bounded strategy. Theorem <ref> below states that the CANN strategy, applied with the experts defined above, is γ-universal. We note that the theorem utilizes a standard assumption (see, e.g., <cit.>). The proof of this theorem appears in the supplementary material.
The main idea is to show first that the minimax value (<ref>) of the Lagrangian (<ref>) is continuous with respect to the probability measure. Then, we prove that the minimax measurable selection (which gives the optimal actions) is also continuous, and that every accumulation point of the induced sequence of optimal actions is optimal.

Assume that for any vector s ∈ ℝ^{n×k} the random variable ‖X_1^k - s‖ has a continuous distribution. Then, for any γ > 0 and for any bounded process {X_i}_-∞^∞, CANN is γ-universal.

§ EMPIRICAL RESULTS

To apply the CANN strategy, we implemented it with a finite set of experts, and in this section we present our empirical results on some standard datasets. One objective of our experiments is to examine how well CANN maintains the CVaR constraints. Another objective is to compare it to several well-known adversarial no-regret portfolio selection algorithms and to stochastically universal strategies. The benchmark algorithms we tested are:

* Best Constant Rebalanced Portfolio (BCRP) <cit.>: The BCRP is the optimal strategy in hindsight whenever market sequences are i.i.d.
* Cover's Universal Portfolios (UP) <cit.>, Exponentiated Gradient (EG) <cit.>, and Online Newton Steps (ONS) <cit.>: These algorithms guarantee sub-linear regret w.r.t. the wealth achieved by the BCRP.
* The nearest-neighbor based strategy (long-only and non-leveraged) of Györfi et al. (ℬ_NN) <cit.>: a (stochastically) universal strategy whose asymptotic growth rate is optimal when the market follows a stationary and ergodic process.
* The nearest-neighbor based strategy with short selling and leverage: ℬ^L_NN.

The experiments were conducted on two datasets that were used in many previous works (see, e.g., <cit.>). The first is the NYSE dataset, which consists of 23 stocks between the years 1985-1995. The second is the MSCI dataset, which consists of 24 stocks between the years 2006-2010. Following <cit.>, for both datasets we used a daily interest rate of r = 0.000245 and set B = 0.4, which implies that L_B,r = 2.49. While this interest rate is higher than the true rate in 2010, this choice only reduces the returns of our algorithm, which rarely deposits cash and must pay a lot for short selling and loans. Similarly to the implementation of ℬ_NN <cit.>, our implementation of CANN used the experts k = 1, ..., 5 and h = 1, ..., 10, for a total of 50 experts, and we set p_h = 1/20 + (h-1)/18. The initial expert prior was set to be uniform, and we chose the typical value of α = 0.95 for the calculation of CVaR. The hyper-parameters for the benchmark algorithms were set according to <cit.>.

Table <ref> presents the total wealth of all the algorithms, where CANN was applied with γ = 0.05. It is evident that the stochastically universal algorithms are superior to all the worst-case universal algorithms. In Figure <ref> we present the smoothed PDF of the returns of both ℬ^L_NN and our algorithm. The left tails of these PDFs show that our algorithm effectively decreases the losses. Another interesting aspect of our strategy is its lower variance. We conducted another experiment where we applied CANN with different choices of γ in the range [0.01, 0.07]. The results are presented in Table <ref>, which reports the resulting CVaR_0.95, and in Figure <ref>, where the y-axis shows the average return and the x-axis shows the CVaR_0.95. It can be seen that lower values of γ result in less risky strategies. Moreover, the concave shape suggests that by choosing an appropriate γ, one may achieve a better mean-CVaR trade-off.
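For reproducibility, the expert grid of these experiments and the two computational kernels involved (the nearest-neighbor matching of the previous section and the empirical CVaR reported in the tables) can be sketched as follows. This is our own illustration, with hypothetical names and array layout, not the authors' code; each expert would still have to solve its regularized minimax problem over the matched sample.

```python
import numpy as np

def expert_grid(ks=range(1, 6), hs=range(1, 11)):
    # The 50 (k, h) experts, with neighborhood fractions
    # p_h = 1/20 + (h - 1)/18 as in the experiments above.
    return [(k, h, 1.0 / 20.0 + (h - 1) / 18.0) for k in ks for h in hs]

def nn_sample(past, k, p_h):
    # `past` holds one market vector per row (the days observed so far).
    # Returns the days whose preceding k-day window is among the
    # h_hat = floor(p_h * n) euclidean nearest neighbors of the most
    # recent k-day window; the expert builds its empirical measure on them.
    n = past.shape[0]
    if n <= k:
        return np.empty((0, past.shape[1]))
    h_hat = max(1, int(np.floor(p_h * n)))
    target = past[-k:].ravel()
    windows = np.stack([past[i - k:i].ravel() for i in range(k, n)])
    dist = np.linalg.norm(windows - target, axis=1)
    matches = np.argsort(dist)[:h_hat] + k   # day following each window
    return past[matches]

def empirical_cvar(losses, alpha=0.95):
    # Rockafellar-Uryasev: CVaR_a = min_c c + E[(loss - c)^+] / (1 - a);
    # for a finite sample the minimum is attained at an order statistic,
    # the empirical alpha-quantile of the losses.
    z = np.sort(np.asarray(losses, dtype=float))
    c_star = z[int(np.ceil(alpha * len(z))) - 1]
    return c_star + np.maximum(z - c_star, 0.0).mean() / (1.0 - alpha)
```

Here the losses fed to empirical_cvar are the negative log-returns, matching ω in the text.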
§ CONCLUDING REMARKS

In this paper we introduced the CVaR-adjusted nearest-neighbor portfolio selection strategy, which is the first CVaR-adjusted universal portfolio selection strategy for the case where the underlying market process is stationary and ergodic. It should be noted that it is possible to revise our method to work with other modern measures of risk, such as the optimized certainty equivalent <cit.>, distortion risk measures (mixtures of CVaR) <cit.>, and law-invariant coherent risk measures <cit.>.

Early works in modern finance assumed that markets are stochastic and very simple (e.g., that the returns are normally distributed) <cit.>. This modeling assumption was later found to be too simplistic <cit.>. At the other extreme, Cover initiated the study of adversarial portfolio selection, whereby stock prices are controlled by an adversary. Neither extreme led to overly effective strategies. It appears that a more sophisticated stochastic modeling, as we pursue here, can lead to effective strategies; however, despite the empirical success of these methods, the bounds that can be obtained are asymptotic. To overcome this barrier, additional, and possibly strong, assumptions on the market process will be required. In the future, we wish to pursue finite sample guarantees while not over-committing to dubious assumptions.
http://arxiv.org/abs/1705.09800v1
{ "authors": [ "Guy Uziel", "Ran El-Yaniv" ], "categories": [ "q-fin.MF", "cs.LG" ], "primary_category": "q-fin.MF", "published": "20170527102703", "title": "Growth-Optimal Portfolio Selection under CVaR Constraints" }
http://arxiv.org/abs/1705.09778v2
{ "authors": [ "Mathurin Massias", "Olivier Fercoq", "Alexandre Gramfort", "Joseph Salmon" ], "categories": [ "stat.ML", "math.OC", "stat.AP" ], "primary_category": "stat.ML", "published": "20170527072438", "title": "Generalized Concomitant Multi-Task Lasso for sparse multimodal regression" }
Institute of High Energy Physics, P.O. Box 918, Beijing 100049, China; School of Physics, University of Chinese Academy of Sciences, Beijing 100049, China

CHARMLESS TWO-BODY B MESON DECAYS IN FACTORIZATION ASSISTED TOPOLOGICAL AMPLITUDE APPROACH
Cai-Dian Lü and Si-Hong Zhou
December 30, 2023
============================================================================================

We analyze charmless two-body non-leptonic B decays in the framework of the factorization assisted topological amplitude approach. Unlike the conventional flavor diagram approach, we consider the flavor SU(3) breaking effect, assisted by the factorization hypothesis, in the topological diagram amplitudes of different decay modes, by factorizing out the corresponding decay constants and form factors. The non-perturbative parameters of the topological diagram magnitudes χ and strong phases ϕ are universal and can be extracted by a χ^2 fit from the current abundant experimental data on charmless B decays. The number of free parameters and the χ^2 per degree of freedom are both reduced compared with previous analyses. With these best-fitted parameters, we predict the branching fractions and CP asymmetry parameters of nearly 100 B_u,d and B_s decay modes. The long-standing ππ and πK-CP puzzles are solved simultaneously.

§ INTRODUCTION

Charmless two-body non-leptonic B decays are of importance for testing the standard model (SM). They can be used to study CP violation via the interference of tree and penguin contributions. They are also sensitive to signals of new physics that would change the small loop effects from penguin diagrams. With regard to them, the BaBar, Belle and LHCb experiments have measured numerous data on branching fractions and CP asymmetries of B → PP, PV decays, where P (V) denotes a light pseudoscalar (vector) meson.

On the theoretical side, charmless B decays require a complicated study of non-perturbative strong QCD dynamics, as they involve not only tree topologies but also more complicated penguin loop diagrams. Based on the leading-order power expansion in Λ_QCD/m_b, the QCD factorization (QCDF) <cit.>, the perturbative QCD (PQCD) <cit.>, and the soft-collinear effective theory (SCET) <cit.> approaches have been developed to study the charmless B decays. However, some puzzles were encountered at leading power in Λ_QCD/m_b in these factorization approaches; for example, (I) the predicted branching fractions for the color-suppressed tree-dominated decays B̅^0→π^0π^0, ρ^0π^0 are too small compared with experimental data, which is the so-called ππ puzzle; (II) some direct CP asymmetries of B → PP, PV decays are inconsistent with experiment in sign, such as in the Kπ puzzle. Although some soft and sub-leading-power Λ_QCD/m_b effects were taken into account in the QCDF <cit.> and the PQCD <cit.>, the B → ππ puzzle remained in the conventional factorization theorem. Unlike these perturbative approaches, some model-independent approaches were introduced to analyze the charmless B decays, such as the global SU(3)/U(3) flavor symmetry analysis <cit.> and the flavor topological diagram approach based on flavor SU(3) symmetry <cit.>. Nowadays, SU(3) breaking effects have to be considered to compare the theoretical results with the precise experimental data.
It is also observed in the flavor topological diagram analysis that one has to fit three different sets of parameters for the three types of B decays, respectively <cit.>, due to the large difference between pseudoscalar and vector final states in B → PP, B → PV and B → VP decays. There are too many parameters to be fitted, and thus its predictive power is limited.

In view of the above complexity, the incompleteness of the power corrections in the factorization approaches, and the limitation of the conventional flavor topological diagram approach, a new method called the factorization-assisted-topological-amplitude (FAT) approach was proposed in the study of two-body hadronic decays of D mesons <cit.>. Aiming to include all non-factorizable QCD contributions, in contrast to the factorization approaches, it adopts the formalism of the flavor topological diagram approach. However, different from the conventional flavor topological diagram approach, it includes the SU(3) breaking effect in each flavor topological diagram, assisted by the factorization hypothesis, further reducing the number of free parameters by fitting all the decay channels together; the precision of the FAT approach is then not limited to the order of the flavor SU(3) breaking effect. In the following, we analyze the charmless B → PP, PV decays in the FAT approach.

§ THE AMPLITUDES OF B → PP, PV DECAYS IN FAT APPROACH

The charmless two-body B decays are induced by quark-level diagrams classified into leading-order (tree diagram) and 1-loop level (penguin diagram) weak interactions. For different B decay final states, the tree-level weak decay diagram can contribute via different orientations: the so-called color-favored tree emission diagram T, color-suppressed tree emission diagram C, W-exchange tree diagram E and W-annihilation tree diagram A, respectively. Similarly, the 1-loop penguin diagram can also be classified into 5 types: the color-favored QCD penguin emission diagram P, color-suppressed QCD penguin emission diagram P_C, penguin-annihilation diagram P_A, time-like penguin diagram P_E and electro-weak penguin emission diagram P_EW.

The three categories of B → PP, PV and VP decays, parameterized as three sets of parameters in the conventional topological diagram approach, are parameterized by only one set of universal parameters in the FAT approach. The T topology is proved to factorize to all orders of the α_s expansion in the QCD factorization approaches and SCET, and the numerical results of the different approaches agree with each other. Thus, to reduce the number of free parameters, we just use the theoretical results from QCD calculations for T, rather than fitting it from experiment:

T^P_1P_2 = i G_F/√2 V_ub V_uq' a_1(μ) f_P_2 (m_B^2 - m_P_1^2) F_0^BP_1(m_P_2^2),
T^PV = √2 G_F V_ub V_uq' a_1(μ) f_V m_V F_1^B-P(m_V^2)(ε^*_V · p_B),
T^VP = √2 G_F V_ub V_uq' a_1(μ) f_P m_V A_0^B-V(m_P^2)(ε^*_V · p_B),

where the superscript of T^P_1P_2 denotes a final state with two pseudoscalar mesons, and T^PV (T^VP) the cases in which the recoiling meson is a pseudoscalar (vector) meson in final states with one pseudoscalar and one vector meson. a_1(μ) is the effective Wilson coefficient of the four-quark operators with QCD corrections. f_P_2 (f_P) and f_V are the decay constants of the emitted pseudoscalar meson and vector meson, respectively. F_0^BP_1 (F_1^B-P) and A_0^B-V are the form factors of the B → P and B → V transitions, respectively. ε^*_V is the polarization vector of the vector meson, and p_B is the 4-momentum of the B meson.
For the color-suppressed C topology, we parameterize its magnitude and associated phase as χ^C and e^iϕ^C in B → PP, VP decays, and as χ^C' e^iϕ^C' in B → PV decays, to distinguish the cases in which the emitted meson is a pseudoscalar or a vector meson:

C^P_1P_2 = i G_F/√2 V_ub V_uq' χ^C e^iϕ^C f_P_2 (m_B^2 - m_P_1^2) F_0^BP_1(m_P_2^2),
C^PV = √2 G_F V_ub V_uq' χ^C' e^iϕ^C' f_V m_V F_1^B-P(m_V^2)(ε^*_V · p_B),
C^VP = √2 G_F V_ub V_uq' χ^C e^iϕ^C f_P m_V A_0^B-V(m_P^2)(ε^*_V · p_B),

where the decay constants and form factors f_P, f_V, F_0^BP_1, F_1^B-P and A_0^B-V, characterizing the SU(3) breaking effects, are factorized out. The W-exchange topology E is non-factorizable in the QCD factorization approach and is expected to be smaller than the emission diagrams, as it is power suppressed. We use χ^E and e^iϕ^E to represent its magnitude and strong phase for all decay modes:

E^P_1P_2 = i G_F/√2 V_ub V_uq' χ^E e^iϕ^E f_B m_B^2 (f_P_1 f_P_2/f_π^2),
E^PV,VP = √2 G_F V_ub V_uq' χ^E e^iϕ^E f_B m_V (f_P f_V/f_π^2)(ε^*_V · p_B).

We ignore the A topology, as its contribution is negligible, as discussed in <cit.>. Similarly, we parameterize the corresponding penguin diagrams with 8 parameters: the chirally enhanced penguin amplitude χ^P and its phase ϕ^P, excluding the factorizable leading-power contribution of the P topology; the flavor-singlet penguin amplitudes χ^P_C, χ^P_C' and their phases ϕ^P_C, ϕ^P_C' for pseudoscalar and vector meson emission, respectively; and the penguin-annihilation amplitude χ^P_A and its phase ϕ^P_A for vector meson emission only. The contribution of the P_E diagram is argued to be smaller than that of the P_A diagram, and it can be reliably ignored in decay modes not dominated by it. Similarly to T and the leading-power contribution of the P topology, we calculate the P_EW topology, the largest electro-weak penguin contribution, in the QCD factorization approach.

§ NUMERICAL RESULTS AND DISCUSSION

With the experimental data of 37 branching fractions and 11 CP asymmetry parameters <cit.>, we perform a global fit to extract the 14 parameters. The best-fitted values and the corresponding 1σ uncertainties are:

χ^C = 0.48 ± 0.06,  ϕ^C = -1.58 ± 0.08,  χ^C' = 0.42 ± 0.16,  ϕ^C' = 1.59 ± 0.17,
χ^E = 0.057 ± 0.005,  ϕ^E = 2.71 ± 0.13,  χ^P = 0.10 ± 0.02,  ϕ^P = -0.61 ± 0.02,
χ^P_C = 0.048 ± 0.003,  ϕ^P_C = 1.56 ± 0.08,  χ^P_C' = 0.039 ± 0.003,  ϕ^P_C' = 0.68 ± 0.08,
χ^P_A = 0.0059 ± 0.0008,  ϕ^P_A = 1.51 ± 0.09,

with χ^2/d.o.f. = 45.2/34 = 1.3. This χ^2 per degree of freedom is smaller than that of the conventional flavor diagram approach <cit.>, even though that approach has many more parameters than ours. The mapping between the well-known QCDF amplitudes introduced in <cit.> and the topological diagram amplitudes of the FAT approach is compared in Table <ref>. It is apparent that there are large differences between the results fitted from experimental data in the FAT approach and the calculated results in the QCDF, especially for the strong phases. Later we will show that the small strong phases ϕ^C and ϕ^C' from QCDF are the main reason for the ππ and πK puzzles. Using the fitted parameters in Eq. (<ref>), we give the numerical results for the branching fractions and the direct CP and mixing-induced CP asymmetries of charmless B_(s) → PP, PV decays in the tables of Ref. <cit.>. Nearly 100 channels are provided to be tested in future experiments.
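As a quick numerical illustration of the fitted parameters, the parameterized amplitudes above can be evaluated directly. The sketch below is ours, not the paper's fit code: the CKM product, decay constant, form factor and a_1 ≈ 1.05 are typical illustrative values that we assume, chosen only to show how χ^C e^iϕ^C enters in place of a perturbative Wilson coefficient.

```python
import cmath

G_F = 1.166e-5                  # Fermi constant in GeV^-2
ROOT2 = 2.0 ** 0.5

def amp_T(V, a1, f_P2, m_B, m_P1, F0):
    # Color-favored tree amplitude T^{P1 P2} from the first equation above.
    return 1j * (G_F / ROOT2) * V * a1 * f_P2 * (m_B**2 - m_P1**2) * F0

def amp_C(V, chi_C, phi_C, f_P2, m_B, m_P1, F0):
    # Color-suppressed amplitude with fitted magnitude chi^C and phase phi^C.
    coeff = chi_C * cmath.exp(1j * phi_C)
    return 1j * (G_F / ROOT2) * V * coeff * f_P2 * (m_B**2 - m_P1**2) * F0

# illustrative B -> pi pi style inputs (assumed, not the fit inputs)
V = 3.7e-3 * 0.974              # |V_ub * V_ud|, hypothetical central values
args = dict(f_P2=0.130, m_B=5.279, m_P1=0.140, F0=0.28)
T = amp_T(V, a1=1.05, **args)
C = amp_C(V, chi_C=0.48, phi_C=-1.58, **args)
print(abs(C) / abs(T))          # about 0.46, cf. the 1 : 0.47 ratio quoted below
```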
Similarly to the conventional topological diagram approach <cit.>, the long-standing puzzle of the large B^0→π^0π^0 branching ratio can be well resolved, attributed to the appropriate magnitude and phase of C in the FAT approach, compared with the small magnitude C = α_2 = 0.20^+0.17_-0.11 from the perturbative calculation in QCDF. However, |T^ππ| : |C^ππ| = 1 : 0.47 in the FAT approach is not as large as the one in Ref. <cit.>, where the ratio even reached 0.97 in Scheme C. The branching fractions of the pure penguin decays B^-→ K^-K^0, B^0→ K^0K̅^0 given in the FAT approach are in much better agreement with experimental data than those of the previous conventional flavor diagram approach <cit.>, as we have considered the flavor SU(3) breaking effect. With a large strong phase for the sub-leading contribution C in the FAT approach, the Kπ puzzle can also be resolved. This again implies large power corrections or large non-perturbative QCD corrections in the C diagram of B→πK decays. The flavor SU(3) breaking effect considered here in each topological amplitude is around 10% between B→ππ and B→πK, and larger than 20% in the corresponding B→PV modes. The difference between π and ρ meson emission is indeed much larger than the so-called flavor SU(3) breaking effect between the π and K mesons, due to the decay constants f_ρ > f_K, and the breaking characterized by the K and K^* decay constants is even larger.

§ CONCLUSIONS

We studied charmless two-body hadronic B decays in the factorization assisted topological amplitude approach. Using the factorization results for the T and P_EW diagrams, there were 6 parameters χ^C(ϕ^C), χ^C'(ϕ^C') and χ^E(ϕ^E) for the tree diagrams C, E and 8 parameters χ^P(ϕ^P), χ^P_C(ϕ^P_C), χ^P_C'(ϕ^P_C') and χ^P_A(ϕ^P_A) for the QCD-penguin diagrams, to be fitted together from 48 measured data of branching ratios and CP asymmetry parameters of the B → PP, PV decays. The χ^2 per degree of freedom is smaller than that of the conventional flavor diagram approach, even though that approach has many more free parameters. With the fitted parameters, we predicted the branching fractions of nearly 100 charmless B_(s) → PP, PV decay modes and their CP asymmetry parameters. The long-standing puzzles of the ππ branching ratios and the πK CP asymmetry were resolved consistently with a not-too-large color-suppressed tree diagram contribution χ^C. The flavor SU(3) breaking effect between π and K was approximately 10%, and even more than 20% in the ρ and K^* case.

§ ACKNOWLEDGMENTS

The work is partly supported by the National Science Foundation of China (11375208, 11521505, 11621131001 and 11235005).

§ REFERENCES

Beneke:2000ry M. Beneke, G. Buchalla, M. Neubert and C. T. Sachrajda, Nucl. Phys. B 591, 313 (2000).
Lu:2000em C. D. Lu, K. Ukai and M. Z. Yang, Phys. Rev. D 63, 074009 (2001).
Bauer:2000yr C. W. Bauer, S. Fleming, D. Pirjol and I. W. Stewart, Phys. Rev. D 63, 114020 (2001).
Cheng:2009cn H. Y. Cheng and C. K. Chua, Phys. Rev. D 80, 114008 (2009).
Li:2005kt H. n. Li, S. Mishima and A. I. Sanda, Phys. Rev. D 72, 114005 (2005).
he Y. K. Hsiao, C. F. Chang and X. G. He, Phys. Rev. D 93, 114002 (2016).
Cheng:2014rfa H. Y. Cheng, C. W. Chiang and A. L. Kuo, Phys. Rev. D 91, 014011 (2015).
Li:2012cfa H. N. Li, C. D. Lü and F. S. Yu, Phys. Rev. D 86, 036012 (2012).
Li:2013xsa H. N. Li, C. D. Lü, Q. Qin and F. S. Yu, Phys. Rev. D 89, 054006 (2014).
PDG K. Olive et al. (Particle Data Group), Chin. Phys. C 38, 090001 (2014).
Beneke:2003zv M. Beneke and M. Neubert, Nucl. Phys. B 675, 333 (2003).
Zhou:2016jkv S. H. Zhou, Q. A. Zhang, W. R. Lyu and C. D. Lü, Eur. Phys. J. C 77, 125 (2017).
http://arxiv.org/abs/1705.09787v1
{ "authors": [ "Cai-Dian Lu", "Si-Hong Zhou" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170527085321", "title": "Charmless Two-Body B Meson Decays In Factorization Assisted Topological Amplitude Approach" }
[1,2,3] Hehong Zhang, [1,2,4] Chao Zhai, [1,2,4] Gaoxi Xiao ([email protected]), [1,2] Tso-Chien Pan
[1] Future Resilient Systems, Singapore-ETH Centre, CREATE Tower, Singapore
[2] Institute of Catastrophe Risk Management, Nanyang Technological University, Singapore
[3] Interdisciplinary Graduate School, Nanyang Technological University, Singapore
[4] School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore

Identifying Critical Risks of Cascading Failures in Power Systems
=================================================================

Potential critical risks of cascading failures in power systems can be identified by exposing those critical electrical elements on which certain initial disturbances may cause maximum disruption to power transmission networks. In this work, we investigate cascading failures in power systems described by the direct current (DC) power flow equations, where initial disturbances take the form of altering the admittance of elements. The disruption is quantified by the remaining transmission power at the end of the cascading process. In particular, identifying the critical elements and the corresponding initial disturbances causing the worst-case cascading blackout is formulated as a dynamic optimization problem (DOP) in the framework of optimal control theory, where the entire propagation process of cascading failures is taken into consideration. An Identifying Critical Risk Algorithm (ICRA) based on the maximum principle is proposed to solve the DOP. Simulation results on the IEEE 9-Bus and the IEEE 14-Bus test systems are presented to demonstrate the effectiveness of the algorithm.

Index Terms—Cascading failures, critical elements, initial disturbances, dynamic optimization, maximum principle.

§ INTRODUCTION

Almost all human systems and activities strongly depend on critical energy infrastructures (e.g., electric power systems). Large-scale power blackouts in the past decades, such as the North America blackout on August 14, 2003 <cit.>, the Europe interconnected grid blackout on November 12, 2006 <cit.> and the Brazil blackout on November 10, 2009 <cit.>, suggest that power blackouts are not uncommon in spite of technological progress and great investments in power systems <cit.>. Although such large blackouts are rare events, they have the potential to result in inoperability, huge economic losses, or even state-level panic. Cascading failures in bulk power systems are an essential cause of blackouts <cit.>. A cascading blackout usually starts with one or more triggering initial disturbances that lead to dramatic redistributions of power flows and a variety of drastic phenomena throughout the power network <cit.>. Therefore, identifying critical risks of cascading failures in power systems is of great interest to researchers and power system planners. Certain disturbances on some elements may lead to the worst power losses or the severest isolations of power systems, making these elements the critical elements for cascading failures in power systems <cit.>. To identify the critical elements and the corresponding initial disturbances applied on them that cause the worst-case cascading blackout in power systems, a novel approach within the framework of optimal control theory is proposed in this paper.

A variety of approaches have been proposed to identify critical electrical elements and initial attacks, or to assess the criticality or vulnerability of power systems.
In <cit.>, identifying critical system components (e.g., transmission lines, generators, transformers) is formulated as a bi-level optimization model, and a heuristic algorithm is developed to solve the problem and obtain a locally optimal solution. In <cit.>, the problem is recast into a standard mixed-integer linear programming problem, which can be solved by using various solvers. The resulting mixed-integer bi-level programming formulation in <cit.><cit.> is relaxed into an equivalent single-level mixed-integer linear programming problem by replacing the inner optimization problem with the Karush-Kuhn-Tucker optimality conditions <cit.>. As an extension of <cit.>, a new approach based on "Global Benders Decomposition" is proposed to solve the large-scale power system interdiction problem when transmission lines are under attack; the algorithm can guarantee the convergence of the bi-level optimization solution <cit.>. In <cit.>, identifying the criticality and vulnerability of the electric grid is formulated as a non-linear bi-level programming problem and a genetic algorithm is applied to reach near-optimal solutions with moderate computing time. In <cit.>, finding a strategic defense to minimize the damages of an attack is formulated as a multi-level mixed-integer programming problem. A Tabu Search with an embedded greedy algorithm is implemented to find the optimum defense strategy. In <cit.>, an improved interdiction model is proposed that combines the evaluation of both short-term (seconds to minutes) and medium-term (minutes to days) impacts of possible electric grid attacks to identify the worst one; an integer programming heuristic is then applied to solve the problem. Power grid performance indices including overall voltage deviation and minimal load shedding are quantified in <cit.> based on the alternating current (AC) power flow model, where finding the most disruptive attack is formulated as either a non-linear programming or a non-linear bi-level optimization problem, both of which can be solved by common algorithms. In <cit.>, both static and dynamic deterministic indices are included in the process of ranking critical nodes; a new ranking algorithm is proposed and evaluated by extensive Monte Carlo simulations. In most of the existing work on identifying the critical elements and the initial disturbances causing the worst-case cascading blackout, the problem is usually formulated as a static optimization problem which neglects the entire propagation process of the cascading failures. Though such a problem is relatively easy to solve, the results may be misleading as they may not properly reflect the system dynamics and evolution in real life. The main contributions of this paper are twofold. Firstly, we formulate the problem of identifying the critical elements and the corresponding initial disturbances causing the worst-case cascading blackout as a dynamic optimization problem (DOP) in the framework of optimal control theory, which enables us to investigate the entire propagation process of cascading failures. Secondly, the identifying critical risk algorithm (ICRA) based on the maximum principle of optimal control theory <cit.><cit.><cit.> is proposed to solve the DOP, which guarantees that the necessary conditions for optimal solutions are fulfilled. The remainder of this paper is organized as follows. Section 2 formulates the DOP based on the DC power flow equations and cascading failure model.
In Section 3, the solution based on the maximum principle is introduced in detail. Section 4 presents results from calculations based on the IEEE standard data and verifies the correctness of the results. Finally, we conclude this work and present some future work in Section 5.

§ PROBLEM FORMULATION

In this section, identifying the critical elements and the corresponding initial disturbances is formulated as a dynamic optimization problem (DOP). The DC power flow model, relay-based overloading branch tripping model and cascading failure model are discussed in Section 2.1, and the DOP formulation is presented in Section 2.2.

§.§ Notations and Models
§.§.§ A. Notations
We summarize the power system notations used in later sections as follows:

* Number of buses: N_b
* Number of electrical elements: N
* Active power at bus i: P_i
* Active power from bus i to bus j: P_ij
* Voltage phase at bus i: θ_i
* Voltage phase difference between bus i and bus j: θ_ij
* Admittance at element i: y_pi

The admittance of an element includes the admittance of the transformer (if any) and the transmission branch. The admittance information of a power system can be described by the element admittance vector Y_P=[y_p1 y_p2⋯ y_pN]^T. An initial disturbance is specified by means of altering the admittance at the corresponding element of Y_P. The nodal admittance matrix Y can be determined by Y=A^TY_PA, where A is the element-node incidence matrix <cit.><cit.>. In the propagation process of cascading failures, the time-varying element admittance vector Y_P and the time-invariant element-node incidence matrix A are applied to determine the nodal admittance matrix Y for the convenience of the analysis in later sections.

§.§.§ B. DC Power Flow Model
In a power system, power flow equations are used to estimate the flow values for each branch. The DC power flow model is deployed since we focus on high-voltage transmission networks in this paper: adopting the DC model helps avoid some difficulties in numerical calculations without sacrificing the validity of the results <cit.>. In the AC power flow model, the active power flow P_ij is determined as:

P_ij=|U_i||U_j|/z_ij sinθ_ij

where |U_i| is the voltage amplitude at bus i and z_ij is the element impedance. Under the assumptions that i) the resistance of a transmission element is ignored, so that the element impedance approximately equals the element reactance; ii) voltage phase differences are small enough; and iii) there is a flat voltage profile <cit.>, the above non-linear equation can be linearised into the DC power flow equation:

P_ij=θ_ij/x_ij=y_ijθ_ij

Further, the power flow equations can be written in matrix form as follows:

P=A^TY_PAθ

where P is the vector of active power injections, the vector θ contains the voltage angles at each bus, and A^TY_PA is the nodal admittance matrix Y. Since power losses are neglected in the DC power flow equations, all the active power injections are known in advance. Once given the nodal admittance matrix, the voltage angles at each bus can be determined by

θ=(A^TY_PA)^-1P

After obtaining the voltage angle value at each bus, the power flow through each element can be computed by Eq. (2).
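The DC power flow step above reduces to a single linear solve followed by an elementwise product. The authors carry out their calculations in Matlab with Matpower; purely for illustration, a minimal Python sketch of Eqs. (2)-(4) is given below. The function name is ours, and we assume the slack-bus column has been removed from the incidence matrix so that the nodal admittance matrix is invertible:

```python
import numpy as np

def dc_power_flow(A, y_p, P):
    """Solve theta = (A^T Y_P A)^{-1} P (Eq. (4)) and the branch flows
    P_ij = y_ij * theta_ij (Eq. (2)).

    A   : (N, N_b) element-node incidence matrix, with the slack-bus column
          removed so that the nodal admittance matrix is non-singular
    y_p : (N,) element admittance vector Y_P
    P   : (N_b,) active power injections at the remaining buses
    """
    Y = A.T @ np.diag(y_p) @ A       # nodal admittance matrix, Eq. (3)
    theta = np.linalg.solve(Y, P)    # bus voltage angles, Eq. (4)
    p_branch = y_p * (A @ theta)     # A @ theta gives theta_i - theta_j per element
    return theta, p_branch
```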
§.§.§ C. Relay-Based Overloading Branch Tripping Model
In a power system, transmission branches are protected by circuit breakers, and branch tripping is one of the most common factors responsible for cascading failures. A circuit breaker trips a transmission branch when the demand load of the branch exceeds a certain threshold level, in order to prevent that transmission branch from being permanently damaged due to overloading <cit.>. For simplicity, in this paper we assume a deterministic model of the transmission branch tripping mechanism. Specifically, a circuit breaker for branch l_i trips at the moment when the demand load on the branch l_i exceeds its maximum capacity (threshold value). The maximum capacity of a branch is defined as the maximum power flow that can be afforded by the branch. This maximum power flow value is decided by thermal, stability and/or voltage drop constraints. In real-life infrastructures, this value may be constrained by cost as well. The relay-based overloading branch tripping model is presented as follows, where the threshold value of a branch is related to its initial load:

C_tr1i=α_iL_i(0), i=1,2,...,N

where L_i(0) is the initial demand load and α_i is the tolerance parameter of line l_i. The relay protection mechanism above can be modeled as a step function: when the real load of a branch is less than or equal to the threshold value, the circuit breaker of the branch is in the on status; otherwise, it is in the off status. To facilitate derivative calculations, a smooth function g is introduced to resemble the step function, ensuring differentiability at the switching points:

g_i(p_ij,C_tr1i) =
  1,  |p_ij| ≤ √(C_tr1i^2-π/2a)
  [1 - sin a(p_ij^2-C_tr1i^2)]/2,  √(C_tr1i^2-π/2a) ≤ |p_ij| ≤ √(C_tr1i^2+π/2a)
  0,  |p_ij| ≥ √(C_tr1i^2+π/2a)

where C_tr1i (i=1,2,...,N) is the threshold value of a branch and a is a parameter that regulates the slope of the function. With the smooth function g_i(p_ij,C_tr1i), the diagonal relay tripping matrix G(p_ij,C_tr1i) can be defined as follows:

G(p_ij,C_tr1i) = diag[g_1(p_ij,C_tr11), g_2(p_ij,C_tr12), ..., g_N(p_ij,C_tr1N)]

§.§.§ D. Cascading Failure Model
In this subsection, the cascading failure model reflecting the entire propagation process of cascading failures is presented. A cascading failure is a sequence of events in which an initial disturbance, or a set of disturbances, triggers a sequence of one or more dependent element outages. The initial disturbances include a wide variety of exogenous disturbances such as high winds, lightning, natural disasters, contact between conductors and vegetation, or human errors <cit.>. For simplicity, we assume that the initial disturbances take the form of altering the admittance along transmission branches. From Eq. (6) and the diagonal relay tripping matrix G(p_ij,C_tr1i), the cascading failure model in matrix form can be built as follows:

Y_P^k+1=G[P_ij^k(Y_P^k), C_tr1]Y_P^k+Diag[-u(k)]F(u_k), k=0,1,2,...

where k is the iteration step of cascading failures and u(k) is the input vector of external disturbances. When k=0, the input vector u(0) denotes the initial disturbances. The vector F(u_k) is defined as follows:

F(u_ki,C_tr2i) = [f_1(u_k1,C_tr21), f_2(u_k2,C_tr22), ..., f_N(u_kN,C_tr2N)]^T

Similar to that in Eq. (6), to facilitate derivative calculations, a smooth function f_i(u_ki,C_tr2i) is applied for every element of the vector F(u_k). The smooth function f_i(u_ki,C_tr2i) is defined as follows:

f_i(u_ki,C_tr2i) =
  0,  |u_ki| ≤ √(C_tr2i^2-π/2b)
  [1 + sin b(u_ki^2-C_tr2i^2)]/2,  √(C_tr2i^2-π/2b) ≤ |u_ki| ≤ √(C_tr2i^2+π/2b)
  1,  |u_ki| ≥ √(C_tr2i^2+π/2b)

where C_tr2 is the threshold value vector and b is a parameter that regulates the slope of the function f.
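To make the smoothed switching behavior concrete, the following minimal sketch (our own illustration, in Python; the names are hypothetical) evaluates the two smooth functions and applies one elementwise iteration of the cascading map of Eq. (7). We assume the thresholds satisfy C_tr^2 ≥ π/2a so that the square-root bounds are real:

```python
import numpy as np

def g_smooth(p, c_tr1, a):
    """Smoothed relay characteristic of Eq. (6): ~1 (breaker on) below the
    threshold, ~0 (tripped) above it, with a sinusoidal ramp of half-width
    pi/(2a) in p^2 guaranteeing differentiability at the switching points."""
    lo, hi = c_tr1**2 - np.pi/(2*a), c_tr1**2 + np.pi/(2*a)
    if p**2 <= lo:
        return 1.0
    if p**2 >= hi:
        return 0.0
    return (1.0 - np.sin(a*(p**2 - c_tr1**2))) / 2.0

def f_smooth(u, c_tr2, b):
    """Smoothed disturbance indicator of Eq. (9): ~0 below C_tr2, ~1 above."""
    lo, hi = c_tr2**2 - np.pi/(2*b), c_tr2**2 + np.pi/(2*b)
    if u**2 <= lo:
        return 0.0
    if u**2 >= hi:
        return 1.0
    return (1.0 + np.sin(b*(u**2 - c_tr2**2))) / 2.0

def cascade_step(y_p, p_branch, u, c_tr1, c_tr2, a, b):
    """One elementwise iteration of Eq. (7):
    y_i^{k+1} = g_i(p_ij, C_tr1i) * y_i^k - u_i * f_i(u_i, C_tr2i)."""
    g = np.array([g_smooth(p, c, a) for p, c in zip(p_branch, c_tr1)])
    f = np.array([f_smooth(ui, c, b) for ui, c in zip(u, c_tr2)])
    return g * y_p - u * f
```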
The value returned by the function f_i(u_ki,C_tr2i) is determined by comparing the threshold value C_tr2i with the corresponding external disturbance; an element is identified as critical when the function f returns the value one.

§.§ Dynamic Optimization Problem Formulation
Based on the models presented above, the DOP formulation in the framework of optimal control theory can be defined as follows:

Formulation of DOP: Given a power system, determine a control input vector u_k∈Ω such that the remaining transmission power at the end of the cascading process is minimized. Assume that the system is described by the DC power flow equations in Eq. (2) and its cascading failure model by Eq. (7). We have

min_u_k∈Ω J
J=||P^N||_F^2+ϵ∑_k=0^N_c-1 [1/max{0,1-k}×1/max^2{0,N_n-||F(u_k)||^2}]
s.t. Y_P^k+1=G[P_ij^k(Y_P^k), C_tr1]Y_P^k+Diag[-u(k)]F(u_k), P_ij=y_ijθ_ij

where the squared Frobenius norm of the power transmission matrix, ||P^N||_F^2, equals ∑_i=1^N∑_j=1^N(P^N_ij)^2, N_c is the total number of iteration steps of cascading failures, N_n is a parameter that denotes the number of critical elements, ‖·‖ denotes the 2-norm of a vector and ϵ is the weight of the cost function. In Eq. (10), the terminal constraint ||P^N||_F^2 dominates the cost function by setting the weight ϵ to be small enough. As mentioned above, the critical elements are those elements which, when being attacked, will trigger the worst-case cascading blackout with the minimum transmission power remaining in the system. The critical elements and their IDs can be determined by the vector F(u(k)) once the optimal control input vector u(k) is obtained. From the DOP formulation presented above, we can see that the external disturbances are only applied in the first step (k=0), that is, the initial disturbance vector u(0). The DOP formulation can be extended to the case where external disturbances or control inputs are applied in different steps, which helps facilitate future studies on human errors in cascading failures and protection reactions.

§ DOP SOLUTION

The DOP can essentially be viewed as a control problem, where we search for an optimal control input vector u(k) to pin the power grid to a certain worst-case cascading blackout defined in Eq. (10). In this section, the ICRA based on the maximum principle of optimal control theory is applied to solve the DOP as presented in Eqs. (9), (10) and (11). The maximum principle is a powerful method for the computation of optimal controls, with the crucial advantages that it does not require prior evaluation of the infimal cost function and provides necessary conditions for optimality of solutions. In the following, the Lagrange multiplier method in the maximum principle is presented in detail. Introduce the Lagrange multipliers [λ_k+1]≜ [λ_1,...,λ_N], λ_k+1∈ℝ^n (usually referred to as adjoint variables) to Eqs. (9), (10) and (11). The Lagrangian function is then as follows:

𝔏(Y_P, λ)≜ ||P^N(Y_P^N)||_F^2+ϵ∑_k=0^N_c-1 [1/max{0,1-k}×1/max^2{0,N_n-||F(u_k)||^2}]+λ_k+1^T{G[P_ij^k(Y_P^k), C_tr1]Y_P^k+Diag[-u(k)]F(u_k)-Y_P^k+1}

where λ≜[λ_1^T λ_2^T ... λ_N^T]^T. To guarantee the existence of the partial derivative ∂ Y_P^k+1/∂ Y_P^k, hereafter we assume that, for each sub-network that is isolated due to redistributions of power flows in the cascading process, the partial derivative ∂ Y_P^k+1/∂ Y_P^k is non-singular or reduced-order non-singular on ℝ^n×Ω <cit.>.
Let Y_P^*≜[(Y_0^*)^T ... (Y_N^*)^T (u_0^*)^T ... (u_N-1^*)^T]^T be the minimising vector corresponding to the sequences [Y_0^*, ..., Y_N^*] and [u_0^*, ..., u_N-1^*]. Observe that the dual feasibility condition in the Karush-Kuhn-Tucker (KKT) optimality conditions is equivalent to the statement that there exists λ^*≜[(λ_1^*)^T (λ_2^*)^T ... (λ_N^*)^T]^T such that the partial derivative ∂𝔏/∂ Y_P of the Lagrangian function vanishes at (Y_P^*,λ^*). Therefore, the following conditions hold

∂𝔏(Y_P^*,λ^*)/∂ Y_P^k=0, ∂𝔏(Y_P^*,λ^*)/∂ u_k=0

where ∂𝔏/∂ Y_P^k and ∂𝔏/∂ u_k denote the row vectors of partial derivatives

∂𝔏/∂ Y_P^k≜ [∂𝔏/∂ Y_P1^k ... ∂𝔏/∂ Y_PN^k], ∂𝔏/∂ u_k≜ [∂𝔏/∂ u_k^1 ... ∂𝔏/∂ u_k^m]

To perform the differentiations above, we introduce the Hamiltonian ℍ: ℝ^n×ℝ^m×ℝ^n→ℝ defined as follows:

ℍ(Y_P^k,u_k, λ_k)≜ϵ∑_k=0^N_c-1 [1/max{0,1-k}×1/max^2{0,N_n-||F(u_k)||^2}]+λ_k+1^T{G[P_ij^k(Y_P^k), C_tr1]Y_P^k+Diag[-u(k)]F(u_k)}

where the first term of the Hamiltonian ℍ (denoted as L) is the per-stage weighting in the cost function. Note that

∂ℍ/∂ Y_P^k= ∂ L/∂ Y_P^k+λ_k^T ∂ Y_P^k+1/∂ Y_P^k, ∂ℍ/∂ u_k= ∂ L/∂ u_k+λ_k^T ∂ Y_P^k+1/∂ u_k

where

∂ L/∂ Y_P^k≜ [∂ L/∂ Y_P1^k ... ∂ L/∂ Y_PN^k], ∂ L/∂ u_k≜ [∂ L/∂ u_k^1 ... ∂ L/∂ u_k^m]

Thus, the following conditions hold

∂𝔏(Y_P^*, λ^*)/∂ Y_P^k=∂ℍ(Y_P^k*,(u_k^*),λ_k+1^*)/∂ Y_P^k-(λ_k^*)^T=0
∂𝔏(Y_P^*, λ^*)/∂ Y_P^N=∂(||P^N(Y_P^N)||_F^2)/∂ Y_P^k-(λ_N^*)^T=0
∂𝔏(Y_P^*, λ^*)/∂ u_k=∂ℍ(Y_P^k*,(u_k^*),λ_k^*)/∂ u_k=0

Further, the following equations can be obtained
(i) State equation: Y_P^(k+1)*=G[P_ij^k(Y_P^k*), C_tr1]Y_P^k+Diag[-u(k)]F^*(u_k)
(ii) Adjoint equation: (λ_k^*)^T=∂ℍ(Y_P^k*,F^*(u_k),λ_k+1^*)/∂ Y_P^k
(iii) Boundary equation: (λ_N^*)^T=∂(||P^N(Y_P^N)||_F^2)/∂ Y_P^N
(iv) Hamiltonian condition: ∂ℍ(Y_P^k*,u_k^*,λ_k+1^*)/∂ u_k=0

We show the main steps for solving Eqs. (18), (19), (20) and (21). First, we solve the adjoint equations. From Eqs. (14) and (19), the following equation can be obtained

λ_k^*=(∂ Y_P^k+1/∂ Y_P^k)^Tλ_k+1^*

where the dimension of ∂ Y_P^k+1/∂ Y_P^k is N× N. From Eq. (11), we know that each element of Y_P^k+1 can be determined by

y_P,i^k+1=g_i(p_ij^k, C_tr1i)y_P,i^k+Diag[-u(k)]_ii f_i(u_k)

Hence, from Eqs. (22) and (23), the following partial derivative can be obtained

∂ y_P,i^k+1/∂ y_P,i^k=∂ g_i/∂ p_ij^k·∂ p_ij^k/∂ y_P,is^k· y_P,i^k+g_i(p_ij^k, C_tr1i), s=1,2,...,i,...,N_c

where the partial derivative ∂ p_ij^k/∂ y_P,is^k is identically zero except when s=i. The term ∂ g_i/∂ P_ij^k is as follows:

∂ g_i/∂ P_ij^k =
  -a· p_ij^k cos a(p_ij^2-C_tr1i^2),  √(C_tr1i^2-π/2a)≤ |p_ij| ≤√(C_tr1i^2+π/2a)
  0,  otherwise

and the term ∂ p_ij^k/∂ y_P,is^k will be determined below. Second, we determine the boundary equation. The DC power flow equations are incorporated into the cascading failure model in Eq. (7) to obtain the expression for the active power as follows:

P_ij^k=(Ae_i)^TDiag(Y_P^k)(Ae_j)(e_i-e_j)^T[A^TDiag(Y_P^k)A]^-1P

where the vector of the active power P is known for each iteration step. Meanwhile, the expression of the active power in the final step can be determined by

P_ij^N=(Ae_i)^TDiag(Y_P^N)(Ae_j)(e_i-e_j)^T[A^TDiag(Y_P^N)A]^-1P

From Eqs. (20) and (26), the following equations can be obtained

(λ_N^*)^T=∂[∑_i=1^N∑_j=1^N(P^N_ij)^2]/∂ Y_P^N=∑_i=1^N∑_j=1^N 2P^N_ij ∂ p_ij^N/∂ y_P^N

∂ p_ij^N/∂ y_P,is^N=(Ae_i)^T ∂ Diag(Y_P^N)/∂ y_P,is^N (Ae_j)(e_i-e_j)^T[A^TDiag(Y_P^N)A]^-1P+(Ae_i)^TDiag(Y_P^N)(Ae_j)(e_i-e_j)^T ∂ [A^TDiag(Y_P^N)A]^-1/∂ y_P,is^N P

For simplicity, the matrix 𝐄_𝐢𝐢 is used to represent the term ∂ Diag(Y_P^N)/∂ y_P,is^N.
Then Eq. (28) can be transformed into

∂ p_ij^N/∂ y_P,is^N =(Ae_i)^T𝐄_𝐢𝐢(Ae_j)(e_i-e_j)^T[A^TDiag(Y_P^N)A]^-1P+(Ae_i)^TDiag(Y_P^N)(Ae_j)(e_i-e_j)^T·[-[A^TDiag(Y_P^N)A]^-1· A^T𝐄_𝐢𝐢A· [A^TDiag(Y_P^N)A]^-1]P

Finally, we determine the Hamiltonian condition

ϵ ∂ [1/max{0,1-k}×1/max^2{0,N_n-||F^*(u_k)||^2}]/∂ u_k+∂ [λ_k+1^TDiag[-u(k)]F^*(u_k)]/∂ u_k=0

where F^*(u_k) and u_k are N×1 vectors. The following equation can be obtained from Eq. (30):

4ϵ/max{0,1-k}×max^3{0,N_n-||F^*(u_k)||^2}[F^*(u_k)]^T ∂ F^*(u_k)/∂ u_k-λ_k+1^T{Diag[-F(u(k))]+∂ F^*(u_k)/∂ u_k}=0

where the term ∂ F^*(u_k)/∂ u_k is the diagonal matrix

∂ F^*(u_k)/∂ u_k= diag[∂ f^*(u_k1)/∂ u_k1, ∂ f^*(u_k2)/∂ u_k2, ..., ∂ f^*(u_kn)/∂ u_kn]

and the term ∂ f_i^*(u_k)/∂ u_ki is

∂ f_i^*(u_k)/∂ u_ki =
  b u_ki cos b(u_ki^2-C_tr2i^2),  √(C_tr2i^2-π/2b)≤ |u_ki| ≤√(C_tr2i^2+π/2b)
  0,  otherwise

Now, the necessary optimality conditions for the control input to be a minimiser of the optimization problem have been derived. From Eqs. (20) and (22), the recursion formula for the Lagrange multipliers is determined as

λ_k+1=∏_s=0^N-k-2(∂ Y_P^N-s/∂ Y_P^N-s-1)^Tλ_N, s=1,2,...,N_c-1

The solution of the DOP can be determined by the following equations

4ϵ/max{0,1-k}×max^3{0,N_n-||F^*(u_k)||^2}[F(u_k)]^T∂ F(u_k)/∂ u_k-∏_s=0^N-k-2(∂ Y_P^N-s/∂ Y_P^N-s-1)λ_N^T{Diag[-F(u(k))]+∂ F(u_k)/∂ u_k}=0
Y_P^k+1=G[P_ij^k(Y_P^k), C_tr1]Y_P^k+Diag[-u(k)]F(u_k)

where Y_P^k and u(k) are the two unknown variables. The algorithm for identifying critical risks of cascading failures in power systems, denoted as ICRA, is summarized in Table 1.
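Table 1 itself is not reproduced here, so the following is only our own schematic of the forward-backward structure implied by Eqs. (18)-(21) and (34)-(35): propagate the cascade forward, back-propagate the adjoint variables from the boundary condition, and update the disturbance vector by driving the stationarity condition to zero (the authors use Matlab's fsolve; we mirror it with SciPy). All callables and names here are hypothetical:

```python
import numpy as np
from scipy.optimize import fsolve

def icra(u0, y_p0, forward, adjoint_boundary, adjoint_step, stationarity,
         n_steps, tol=1e-6, max_iter=100):
    """Sketch of a forward-backward iteration for the DOP.
    forward(u, y)            -> next state Y_P^{k+1} via Eq. (7)
    adjoint_boundary(y_last) -> lambda_N via the boundary equation (20)
    adjoint_step(lam, y, u)  -> lambda_k via the adjoint equation (19)
    stationarity(u, lams, ys)-> residual of the Hamiltonian condition (21)."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        # forward sweep: disturbances act only at k = 0, as stated in the text
        ys = [np.asarray(y_p0, dtype=float)]
        for k in range(n_steps):
            ys.append(forward(u if k == 0 else np.zeros_like(u), ys[-1]))
        # backward sweep: adjoint recursion from the boundary condition
        lams = [adjoint_boundary(ys[-1])]
        for k in range(n_steps - 1, 0, -1):
            lams.append(adjoint_step(lams[-1], ys[k], u))
        lams.reverse()
        # update the disturbance vector by solving the stationarity condition
        u_new = fsolve(lambda v: stationarity(v, lams, ys), u)
        if np.linalg.norm(u_new - u) < tol:
            return u_new, ys
        u = u_new
    return u, ys
```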
§ SIMULATION RESULTS AND VERIFICATION

We consider two different cases for identifying the critical risks of cascading failures in power systems.
Case 1: Both the critical elements and the corresponding initial disturbances are unknown variables.
Case 2: The initial disturbances are given as branch outages, while the critical elements remain to be identified. In this case, since the initial disturbances have to be element outages, we replace the vector u(k) in Eq. (7) with the initial element admittance vector Y_P^0.
Note that Case 2 above is a special case of Case 1. We are particularly interested in this special case for two reasons: (i) in practice, branch outage is a common type of failure <cit.>; and (ii) for this special case, the optimality of the solutions in small or medium-sized systems can be verified by brute force, i.e., by considering all possible combinations of branch outage cases with a given number of outage branches. In simulations, we use Matlab with fsolve as the non-linear solver for solving Y_P^k and u_k. For the test case data and calculation of the electric circuit parameters, the codes from Matpower are used extensively.

§.§.§ A. Simulation Results
The test case of the IEEE 9-Bus system contains 3 generators, 6 branches, 3 loads and 2 winding power transformers. The test case of the IEEE 14-Bus system consists of 14 buses, 5 generators and 11 loads. Information about these two test cases and the algorithmic parameters is presented in Table 2. Therein, Y_p1=[-17.36, -10.87, -5.88, -17.06, -9.92, -13.89, -16.00, -6.21] and Y_p2=[-16.90, -4.48, -5.05, -5.67, -5.75, -5.85, -23.75, -4.78, -1.80, -3.97, -5.03, -3.91, -7.68, -5.68, -9.09, -11.83, -3.70, -5.21, -5.00, -2.87] are the initial susceptance vectors of the elements in per-unit for the 9-Bus and the 14-Bus test systems, respectively. The threshold value vectors C_tr1 for the 9-Bus and the 14-Bus test systems are [0.8, 1.8, 1.0, 0.5, 0.5, 1.0, 0.8, 0.7, 0.5] and [1.8, 1.0, 1.0, 0.8, 0.6, 0.5, 0.9, 0.5, 0.4, 0.7, 0.1, 0.1, 0.3, 0.1, 0.6, 0.1, 0.2, 0.2, 0.1], respectively. We first carry out simulations of Case 1. For the IEEE 9-Bus test system, the number of critical elements N_n is set as 1. For the IEEE 14-Bus test system, the number of critical elements N_n is set as 1 and 2. The results are presented in Table 3. We then conduct simulations for Case 2. For the IEEE 9-Bus test system, the number of critical elements N_n is set as 1. The identified critical element, marked with the red oval, is shown in Fig. 1. For the IEEE 14-Bus test system, the number of critical elements N_n is set as 1, 2, 3 or 4, respectively. The results are shown in Fig. 2.

§.§.§ B. Verification
In this subsection, the correctness of the numerical results generated by the ICRA, as reported in Subsection A, is verified. For verifying Case 1, the computed initial disturbances are applied to the corresponding elements (see Table 3) in the two test systems respectively. For verifying Case 2, the optimality of the solution can be verified by brute force, i.e., by considering all possible combinations of branch outage cases with a given number of outage branches. More specifically, the cascading failure model in Eq. (7) and the DC power-flow model in Eq. (11) from the ICRA are used extensively, and the numerical simulation results on the critical elements and disruptive disturbances are validated by disturbing the selected elements with the computed magnitude of disturbances in the corresponding IEEE test systems. The final remaining transmission power and/or the final network topology is used to quantify the disruption. In the following, the verification results are given. For Case 1 in the IEEE 9-Bus test system, we apply the corresponding initial disturbance, that is, u=17.36, to element 1 connected between bus 1 and bus 4. The initial disturbance u=17.36 is equivalent to the outage of element 1. The evolution process of the transmission network topology is shown in Fig. 3. For the simulation results of Case 2, the initial disturbances take the form of element outages. After testing all possible element outage cases, we find that all branches (elements) are eventually broken when element 1 is taken down, which matches the result of Case 1. As we can see from Fig. 3, with the breaking of element 1 as the initial disturbance, all the branches (elements) are broken and the final remaining transmission power becomes zero. The verification results are identical to the results presented in Table 3 and Fig. 1, which verifies the correctness of the proposed ICRA. For the IEEE 14-Bus test system, the initial power transmission is 3.07 p.u. when the power system operates in normal status. For the simulation results of Case 1, when the number of critical elements N_n is set as 1, we apply the corresponding initial disturbance, that is, u=4.73, to element 3 connected between bus 2 and bus 3. The remaining transmission power is 0.02 p.u. When N_n is set as 2, we apply the initial disturbances u_2=4.23 and u_3=4.73 to element 2 (connected between bus 1 and bus 5) and element 3, respectively. The remaining transmission power becomes zero. For the simulation results of Case 2, we apply outages of one, two, three and four elements, respectively, as the initial contingencies.
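The brute-force verification used here is a plain enumeration; a minimal sketch of it follows (ours, with a hypothetical run_cascade callable that is assumed to apply the outages, iterate Eq. (7) to convergence, and return the remaining transmission power ||P^N||_F^2):

```python
from itertools import combinations

def brute_force_worst_outage(n_elements, n_out, run_cascade):
    """Enumerate all n_out-element outage combinations (element IDs 1..N)
    and return the combination minimizing the remaining transmission power."""
    best_ids, best_power = None, float("inf")
    for ids in combinations(range(1, n_elements + 1), n_out):
        power = run_cascade(ids)   # remaining power after the cascade settles
        if power < best_power:
            best_ids, best_power = ids, power
    return best_ids, best_power
```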
We simulate all possible element outage cases when the number of outage elements varies from one to four in the IEEE 14-Bus test system; the results are shown in Fig. 4. As we can see from Fig. 4, the outage of element 3 results in the minimum remaining transmission power, that is, 0.02 p.u. When N_n=2, the combinations (IDs of elements) [2,3], [2,4] and [2,5] lead to zero transmission power. The combination [1,5,6] results in zero transmission power when N_n=3. When N_n=4, the combinations [1,2,3,9], [1,2,3,10], [1,2,3,11], [1,2,3,12], [1,2,3,13], [2,4,6,7], [2,4,6,8], [2,4,6,9], [2,4,6,10], [2,4,6,11], [2,4,6,12] and [2,4,6,13] lead to zero transmission power. From the verification results of Case 1 and Case 2 above for the IEEE 14-Bus test system, we can verify the correctness of the simulation results in Table 3 and Fig. 2. From the simulation and verification results on the IEEE 9-Bus and the IEEE 14-Bus test systems, we may conclude that the proposed ICRA is effective.

§ CONCLUSIONS AND FUTURE WORK

In this paper, the problem of identifying critical risks of cascading failures in power systems was formulated as a dynamic optimization problem (DOP) within the framework of optimal control theory. By pinning the power system into the worst-case cascading blackout, the optimal control inputs that reflect the critical elements and corresponding disturbances were determined by solving the DOP. The ICRA based on the maximum principle was applied to solve the DOP, which provides the necessary conditions for optimality of solutions. The correctness and effectiveness of the ICRA have been verified by applying the computed initial disturbances or element outages to the corresponding elements in the IEEE test systems. The efficient identification of critical risks may help power system planners to reveal hidden catastrophic risks, preplan system protection and recovery, and consequently improve system resilience. The research work will be extended to include identifying critical risks in the form of disturbances to network nodes, as well as other mechanisms such as generator tripping, load shedding and voltage collapse. In the longer term, we shall take into account the cost of protection and recovery while identifying the worst cases.

§ ACKNOWLEDGEMENT

This study is an outcome of the Future Resilient Systems (FRS) project at the Singapore-ETH Centre (SEC), which is funded by the National Research Foundation of Singapore (NRF) under its Campus for Research Excellence and Technological Enterprise (CREATE) program. Part of this work is also supported by the Ministry of Education (MOE), Singapore, under Contract No. MOE 2016-T2-1-119.

1 Liscouski, B., and W. Elliot. Final Report on the August 14, 2003 Blackout in the United States and Canada: Causes and Recommendations. A report to the US Department of Energy, Washington, DC, 40.4 (2004).
2 Maas, G. A., M. Bial, and J. Fijalkowski. System Disturbance on 4 November 2006. Union for the Coordination of Transmission of Electricity in Europe, Final Technical Report (2007).
3 CNN. Dam failure triggers huge blackout in Brazil. (2009).
4 Hines, P., K. Balasubramaniam, and E. Cotilla Sanchez. Cascading failures in power grids. IEEE Potentials 28.5 (2009).
5 Vaiman, M., et al. Risk assessment of cascading outages: Methodologies and challenges. IEEE Transactions on Power Systems 27.2 (2012): 631.
6 Baldick, R., et al.
Initial review of methods for cascading failure analysis in electric power transmission systems: IEEE PES CAMS Task Force on Understanding, Prediction, Mitigation and Restoration of Cascading Failures. Proceedings of the Power and Energy Society General Meeting-Conversion and Delivery of Electrical Energy in the 21st Century, 2008 IEEE. IEEE, (2008).
7 Carreras, B. A., et al. Critical points and transitions in an electric power transmission model for cascading failure blackouts. Chaos: An Interdisciplinary Journal of Nonlinear Science 12.4 (2002): 985-994.
8 Salmeron, J., K. Wood, and R. Baldick. Analysis of electric grid security under terrorist threat. IEEE Transactions on Power Systems 19.2 (2004): 905-912.
9 Motto, A. L., J. M. Arroyo, and F. D. Galiana. A mixed-integer LP procedure for the analysis of electric grid security under disruptive threat. IEEE Transactions on Power Systems 20.3 (2005): 1357-1365.
10 Arroyo, J. M., and F. D. Galiana. On the solution of the bilevel programming formulation of the terrorist threat problem. IEEE Transactions on Power Systems 20.2 (2005): 789-797.
11 Salmeron, J., K. Wood, and R. Baldick. Worst-case interdiction analysis of large-scale electric power grids. IEEE Transactions on Power Systems 24.1 (2009): 96-104.
12 Arroyo, J. M., and F. J. Fernández. A genetic algorithm approach for the analysis of electric grid interdiction with line switching. Proceedings of the 15th International Conference on Intelligent System Applications to Power Systems, ISAP '09. IEEE, (2009).
13 Romero, N., et al. Investment planning for electric power systems under terrorist threat. IEEE Transactions on Power Systems 27.1 (2012): 108-116.
14 Wang, Y., and R. Baldick. Interdiction analysis of electric grids combining cascading outage and medium-term impacts. IEEE Transactions on Power Systems 29.5 (2014): 2160-2168.
15 Kim, T., et al. Analyzing vulnerability of power systems with continuous optimization formulations. IEEE Transactions on Network Science and Engineering 3.3 (2016): 132-146.
16 da Silva, A. M. L., et al. A method for ranking critical nodes in power networks including load uncertainties. IEEE Transactions on Power Systems 31.2 (2016): 1341-1349.
17 Sage, A. P., and C. C. White. Optimum Systems Control. Prentice Hall, (1977).
18 Wang, G., and Z. Wu. The maximum principles for stochastic recursive optimal control problems under partial information. IEEE Transactions on Automatic Control 54.6 (2009): 1230-1242.
19 Wu, Z. A general maximum principle for optimal control of forward-backward stochastic systems. Automatica 49.5 (2013): 1473-1480.
20 Perez, L. G., A. J. Flechsig, and V. A. Venkatasubramanian. Modeling the protective system for power system dynamic analysis. IEEE Transactions on Power Systems 9.4 (1994): 1963-1973.
21 Stagg, G. W., and A. H. El-Abiad. Computer Methods in Power System Analysis. McGraw-Hill, (1968).
22 Hines, P. D. H., and P. Rezaei. Cascading failures in power systems. Smart Grid Handbook.
23 Purchala, K., et al. Usefulness of DC power flow for active power flow analysis. Proceedings of the IEEE Power Engineering Society General Meeting, IEEE (2005).
24 Song, J., et al. Dynamic modeling of cascading failure in power systems. IEEE Transactions on Power Systems 31.3 (2016): 2085-2095.
25 Eppstein, M. J., and P. D. H. Hines. A "random chemistry" algorithm for identifying collections of multiple contingencies that initiate cascading failure. IEEE Transactions on Power Systems 27.3 (2012): 1698-1705.
26 Stott, B., J. Jardim, and O. Alsaç. DC power flow revisited.
IEEE Transactions on Power Systems 24.3 (2009): 1290-1300.
27 Flueck, A. J., R. Gonella, and J. R. Dondeti. A new power sensitivity method of ranking branch outage contingencies for voltage collapse. IEEE Transactions on Power Systems 17.2 (2002): 265-270.
http://arxiv.org/abs/1705.09411v1
{ "authors": [ "Hehong Zhang", "Chao Zhai", "Gaoxi Xiao", "Tso-Chien Pan" ], "categories": [ "cs.SY" ], "primary_category": "cs.SY", "published": "20170526020213", "title": "Identifying Critical Risks of Cascading Failures in Power Systems" }
Anisotropic hydrodynamic modeling of 2.76 TeV Pb-Pb collisions
Michael Strickland
December 30, 2023
==============================================================

§ INTRODUCTION
§.§ Background and Motivation
While studies of weak gravitational lensing, galaxy rotation curves, and angular anisotropies in the Cosmic Microwave Background (CMB) have shown the existence of dark matter that comprises roughly 25% of the mass-energy of the universe <cit.>, the fundamental nature of dark matter has yet to be understood. Many well motivated particle theories suggest that a plausible dark matter candidate is a weakly-interacting massive particle (WIMP) <cit.>. Proposed dark matter WIMP models can undergo self-annihilation yielding standard model particles such as quarks, leptons, and bosons, which can then decay into charged particles such as electrons and positrons. The presence of these particles in astrophysical systems leads to unique signatures across the electromagnetic spectrum due to radiative processes such as synchrotron, inverse Compton (IC), bremsstrahlung, and Coulomb energy losses <cit.>. While there have been considerable efforts to study gamma-ray emission from dark matter annihilation in a variety of systems, e.g. <cit.>, a multiwavelength approach provides a complementary probe and in certain cases stronger constraints on dark matter properties <cit.>. The synchrotron emission from these particles is the result of ambient magnetic fields that accelerate the charged particles, causing them to emit radiation at radio wavelengths. The IC radiation peaks at X-ray frequencies and is the result of photons from various radiation sources, such as the CMB and starlight, being up-scattered by the relativistic particles. For a multiwavelength approach to indirect dark matter searches we focus on three main categories of astrophysical targets: galaxy clusters, local group dwarf galaxies, and other nearby galaxies (including the Milky Way galactic center). Galaxy clusters are the largest virialized objects in the universe and are highly dark matter dominated. These are enticing targets due to their abundance of dark matter as well as the presence of μG-scale magnetic fields <cit.>, enabling synchrotron processes. Dwarf spheroidal galaxies (dSphs) are also targets of great interest to dark matter searches. The proximity of the local group dwarfs, along with their low luminosity and high concentration of dark matter, makes them prime targets for indirect dark matter searches <cit.>. In particular, dwarf spheroidal galaxies generally lack strong radio and X-ray emission, which allows us to place stronger constraints on dark matter properties by analyzing the synchrotron and IC radiation from dark matter annihilation. Other interesting targets for dark matter searches include galaxies such as M31 <cit.> or the Galactic center of the Milky Way <cit.>. These systems are thought to be rich in dark matter, as well as to contain magnetic fields capable of producing synchrotron emission from dark matter annihilation products.
Particularly, reports of gamma-ray excesses in these systems <cit.> that could potentially be due to the presence of dark matter make these compelling targets, since a gamma-ray signal from dark matter should be accompanied by radio and X-ray signatures. A difficulty with these targets, however, is the presence of other astrophysical processes that can create signatures similar to what we would expect to see from dark matter. In order to model the multiwavelength DM signal, besides the relevant radiative processes there are additional important effects, such as spatial diffusion of the charged particles, that require careful treatment. In previous studies of galaxy clusters, for instance, the role of diffusion has been estimated to be negligible <cit.>, whereas in other systems such as dSphs it cannot be ignored <cit.>. The extent to which diffusion affects the analysis of a system is determined by factors including the physical size of the region, the energy losses of the particles, and the magnetic fields. For example, in larger environments such as galaxy clusters the particle byproducts of dark matter annihilation are able to lose all their energy within the region of study, whereas in smaller systems the energetic particles escape the system before fully radiating through synchrotron and IC processes. Additionally, the strong dependence of synchrotron losses and diffusion effects on the magnetic field means that uncertainties in the magnetic field must be examined before making assumptions on the role of diffusion. To facilitate multiwavelength indirect dark matter searches in astrophysical systems, the main purpose of this paper is to introduce and describe the RX-DMFIT (Radio and X-ray - DMFIT) tool. RX-DMFIT is an extension of the DMFIT <cit.> tool developed by Jeltema & Profumo (2008), which is used for gamma-ray fitting. The RX-DMFIT code[https://github.com/alex-mcdaniel/RX-DMFIT] is publicly available and provides the user with a tool to calculate the properties of secondary emission from dark matter annihilation due to synchrotron and IC processes. In particular, it relies on the DarkSUSY <cit.> Fortran package to provide the electron/positron injection spectrum for a given dark matter mass and annihilation channel. From the injection spectrum the RX-DMFIT tool calculates the emissivities and fluxes based on the user-provided properties of the astrophysical system. Also, given observational flux density data, RX-DMFIT can calculate dark matter particle constraints from synchrotron and IC radiation. The tool consists of 19 C++ files, including 5 .h header files, and interfaces with the DarkSUSY Fortran package. Integrations are carried out using methods from the GNU Scientific Library <cit.>. Users have the ability to specify a multitude of system parameters, including the physical size of the system, the magnetic field strength, the dark matter density profile, and the diffusion properties, among others. In all, RX-DMFIT has roughly 15 different physical parameters that can be manipulated. This paper is organized in the following manner. In section <ref> we describe the analytic solution of the diffusion equation and subsequently derive the synchrotron and IC flux densities. In section <ref> we assign and describe parameter values chosen for the models used in our analysis, which we then analyze using the RX-DMFIT tool in section <ref>, showing the effects of altering system components such as the role of diffusion and the magnetic field.
In this section, we also demonstrate the use of the tool to place constraints on the DM particle cross-section using radio observations, before presenting our conclusions in section <ref>. In this paper, we assume a ΛCDM universe with H_0 = 70.4 km s^-1 Mpc^-1, Ω_m = 0.272, Ω_Λ = 0.73. We note here that these cosmological parameters are fixed in RX-DMFIT, though they are readily accessible in the source code in case adjustments are desired.

§ RADIATION FROM DM ANNIHILATION
§.§ Diffusion Equation
In order to calculate the synchrotron and IC emission from DM annihilation, we must first obtain the equilibrium e^± spectrum by solving the diffusion equation:

∂/∂ t ∂ n_e/∂ E = ∇·[D(E,ř) ∇ ∂ n_e/∂ E] + ∂/∂ E[b(E,ř) ∂ n_e/∂ E] + Q(E,ř).

Here ∂ n_e/∂ E is the equilibrium electron density, Q(E,ř) is our electron source term, D(E,ř) is the diffusion coefficient, and b(E,ř) is the energy loss term. We assume equilibrium and seek a steady-state solution, thus we set the time dependence on the left-hand side of the equation to zero. Our source term is given by

Q(E, r) = σ v ρ_χ^2(r)/2M_χ^2 × dN/dE_inj,

where we use the Fortran package DarkSUSY v5.1.2 to determine the electron/positron injection spectrum per dark matter annihilation event, dN/dE_inj, which is dependent on the DM particle mass, the annihilation channel, and the source energy, E. For the diffusion coefficient, we adopt a spatially independent form with a power-law energy dependence. The RX-DMFIT tool includes two forms for the diffusion coefficient: a simplified power law in energy, and another that incorporates the degree of uniformity of the magnetic field. They are, respectively:

D(E) = D_0 E^γ
D(E) = D_0 d_B^2/3/B_avg^1/3 E^γ,

where d_B is the minimum uniformity scale of the magnetic field and D_0 is the diffusion constant <cit.>. In the full energy loss term we include contributions from synchrotron, inverse Compton (IC), Coulomb, and bremsstrahlung losses. Each energy loss term is dependent on the energies of the electrons and positrons, as well as the magnetic field strength in the case of synchrotron losses and the CMB photon spectrum for IC losses. Additionally, the Coulomb and bremsstrahlung losses are dependent on the thermal electron density, n_e. The full energy loss expression is

b(E,ř) = b_IC(E) + b_Synch.(E,ř) + b_Coul.(E) + b_Brem.(E)
= b_IC^0 E^2 + b^0_Synch. B^2(r) E^2 + b_Coul.^0 n_e (1+log(E/m_e/n_e)/75) + b^0_Brem. n_e (log(E/m_e/n_e) + 0.36).

Here n_e is the mean number density of thermal electrons. For high-energy e^± the synchrotron and IC losses are dominant. A general analytic solution for equation <ref> has previously been determined for the case of homogeneous diffusion using the Green's function method <cit.>, which in general can also be applied to non-stationary sources. We are interested in the steady-state solution, and following the notation of Colafrancesco et al. (2006) <cit.> have a solution of the form

∂ n_e/∂ E = 1/b(E,ř) ∫_E^M_χ dE′ G(r, v(E)-v(E′)) Q(E′, ř),

where the Green's function, G(r, v(E)-v(E′)), is given by

G(r, Δ v) = 1/√(4πΔ v) ∑_n=-∞^∞ (-1)^n ∫_0^r_h dr′ r′/r_n (ρ_χ(r′)/ρ_χ(r))^2 × [exp(-(r′-r_n)^2/4Δ v) - exp(-(r′+r_n)^2/4Δ v)].

As in previous work <cit.>, we impose the free-escape boundary condition at the radius of the diffusion zone, r_h, using the image charge method with charges placed at r_n = (-1)^n r + 2n r_h.
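Numerically, the image-charge sum converges after a modest number of terms, since the Gaussian factors suppress distant charges. A minimal Python sketch of G(r, Δv) is given below purely as an illustration (RX-DMFIT itself is implemented in C++ with GSL); the truncation order, quadrature grid, and requirement r > 0 are our own choices:

```python
import numpy as np

def greens_function(r, dv, r_h, rho_chi, n_images=20, n_quad=200):
    """Truncated image-charge sum for G(r, Delta v) above.
    r        : field point (assumed r > 0 so that r_n never vanishes)
    dv       : Delta v = v(E) - v(E'), with units of length squared
    r_h      : radius of the diffusion zone (free-escape boundary)
    rho_chi  : callable DM density profile rho_chi(r)"""
    rp = np.linspace(1e-4 * r_h, r_h, n_quad)       # integration grid r'
    total = 0.0
    for n in range(-n_images, n_images + 1):
        r_n = (-1)**n * r + 2*n*r_h                 # image-charge locations
        integrand = (rp / r_n) * (rho_chi(rp) / rho_chi(r))**2 * (
            np.exp(-(rp - r_n)**2 / (4*dv)) - np.exp(-(rp + r_n)**2 / (4*dv)))
        total += (-1)**n * np.trapz(integrand, rp)  # alternating-sign sum
    return total / np.sqrt(4*np.pi*dv)
```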
Information about both the diffusion coefficient and the energy loss terms has been incorporated into the Δ v = v(E)-v(E′) term, where v(E) is:

v(E) = ∫_E^M_χ dẼ D(Ẽ)/b(Ẽ).

Here √(Δ v) has units of length and gives the mean distance traveled by an electron as it loses energy between its source energy, E′, and interaction energy, E. Note that in order to derive the Green's function for the diffusion equation using the method of Colafrancesco et al. (2006), a spatially independent magnetic field is needed. For the evaluation of the Green's function we approximate the energy loss term, b(E,ř) ≈ b(E), by using an average magnetic field strength. That is, in equation <ref> we take

b_Synch.(E) ≈ b^0_Synch. B^2_avg E^2.

This approximation is used only in the evaluation of the Green's function, whereas for the energy loss term outside the integral of equation <ref> we incorporate the full spatial profile of the magnetic field.

§.§ Synchrotron
The electrons and positrons produced as a result of dark matter annihilation produce multiwavelength emission through multiple radiative processes. At radio frequencies, in the presence of reasonably strong magnetic fields (i.e., B > B_CMB ≃ 3.25(1+z)^2 μG), energy losses of the relativistic electrons and positrons are dominated by synchrotron radiation. From <cit.> we have the synchrotron power for a frequency ν averaged over all directions as:

P_syn(ν, E, r) = ∫_0^π dθ sinθ/2 2π√(3) r_0 m_e c ν_0 sinθ F(x/sinθ),

where r_0 = e^2/(m_e c^2) is the classical electron radius, θ is the pitch angle, and ν_0 = eB/(2π m_e c) is the non-relativistic gyrofrequency. The x and F terms are defined as

x ≡ 2ν(1+z)m_e^2/(3ν_0 E^2),
F(s) ≡ s∫_s^∞ dζ K_5/3(ζ) ≈ 1.25 s^1/3 e^-s [648 + s^2]^1/12,

where K_5/3 is the Bessel function of order 5/3. The synchrotron emissivity at a frequency ν is found by folding the synchrotron power with the electron equilibrium spectrum:

j_syn(ν, r) = 2∫_m_e^M_χ dE ∂ n_e/∂ E(E, r) P_syn(ν, E, r).

From this we calculate the integrated flux density spectrum, which we find by taking the line-of-sight integral of the emissivity to find the surface brightness, then subsequently integrating the surface brightness over the solid angle of the emission region. This gives us:

S_syn(ν) = ∫_Ω dΩ ∫_los dl j_syn(ν, l).

Approximating the target as a small region at a distance much greater than its size gives the final result:

S_syn ≈ 1/D_A^2 ∫ dr r^2 j_syn(ν, r),

where D_A is the angular diameter distance.
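As an illustration of how the last two equations combine numerically, the following sketch (ours; all names are hypothetical, and the grids and power function are assumed supplied by the caller) folds a precomputed equilibrium spectrum with a synchrotron power function and integrates over the halo:

```python
import numpy as np

def synchrotron_flux(nu, r_grid, E_grid, dnde, P_syn, D_A):
    """Numerical version of the j_syn and S_syn equations above.
    dnde[i, j] ~ (dn_e/dE)(E_i, r_j) is the equilibrium spectrum on the grids;
    P_syn(nu, E, r) is the synchrotron power; D_A the angular diameter distance."""
    j_syn = np.zeros(len(r_grid))
    for j, r in enumerate(r_grid):
        power = np.array([P_syn(nu, E, r) for E in E_grid])
        j_syn[j] = 2.0 * np.trapz(dnde[:, j] * power, E_grid)  # emissivity
    # small-target approximation: S ~ (1/D_A^2) * integral dr r^2 j_syn
    return np.trapz(r_grid**2 * j_syn, r_grid) / D_A**2
```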
§.§ Inverse Compton
For regions with lower magnetic fields, the dominant radiative process is inverse Compton (IC) scattering of background photons, most prominently the 2.73 K Cosmic Microwave Background photons. Relativistic electrons and positrons from dark matter annihilation scatter the ambient CMB photons, producing a spectral peak between the soft and hard X-ray bands depending on the mass of the dark matter particle <cit.>. With the photon number density n(ϵ) and the IC scattering cross-section σ(E_γ, ϵ, E), the IC power is:

P_IC(E_γ, E) = c E_γ ∫ dϵ n(ϵ) σ(E_γ, ϵ, E).

Here ϵ is the energy of the target CMB photons, E is the energy of the relativistic electrons and positrons, and E_γ is the energy of the upscattered photons. σ(E_γ, ϵ, E) is given by the Klein-Nishina formula:

σ(E_γ, ϵ, E) = 3σ_T/(4ϵγ^2) G(q, Γ),

where σ_T is the Thomson cross-section and G(q, Γ) is given by <cit.>:

G(q, Γ) = [2q ln q + (1+2q)(1-q) + (Γ q)^2(1-q)/2(1+Γ q)],

where

Γ = 4ϵγ/(m_e c^2) = 4γ^2 ϵ/E, q = E_γ/(Γ(E-E_γ)).

For this process, the range of values of q is determined by the kinematics of the problem to be 1/(4γ^2) ≤ q ≤ 1 <cit.>. As with the synchrotron emission, we find the local emissivity by folding the power with the electron equilibrium density,

j_IC(E_γ, r) = 2∫_m_e^M_χ dE ∂ n_e/∂ E(E, r) P_IC(E, E_γ),

and the (approximate) integrated flux density is:

S_IC ≈ 1/D_A^2 ∫ dr r^2 j_IC(E_γ, r).
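The kinematic kernel above is straightforward to evaluate directly; a minimal sketch follows (ours, with GeV units and the names being our own choices), returning zero outside the kinematically allowed range of q:

```python
import numpy as np

M_E = 0.511e-3  # electron mass in GeV (our unit choice)

def G_kn(q, Gamma):
    """Klein-Nishina kernel G(q, Gamma) from the formula above,
    defined for 1/(4 gamma^2) <= q <= 1."""
    return (2*q*np.log(q) + (1 + 2*q)*(1 - q)
            + (Gamma*q)**2 * (1 - q) / (2*(1 + Gamma*q)))

def ic_kernel(E_gamma, eps, E):
    """Evaluate 3/(4 eps gamma^2) G(q, Gamma), i.e. the IC cross-section
    sigma(E_gamma, eps, E) in units of the Thomson cross-section sigma_T."""
    gamma = E / M_E
    Gamma = 4.0 * gamma**2 * eps / E
    q = E_gamma / (Gamma * (E - E_gamma))
    if not (1.0 / (4*gamma**2) <= q <= 1.0):
        return 0.0  # outside the allowed kinematic range
    return 3.0 / (4.0 * eps * gamma**2) * G_kn(q, Gamma)
```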
§ PARAMETER SELECTION

In the following sections we describe and assign the various parameters required to define our targets, and present the results of radiation from DM annihilation as calculated by RX-DMFIT. We will demonstrate the use of RX-DMFIT by performing our analysis on three scales: a cluster-scale model emulating the Coma cluster, where we assume a redshift z = 0.0232 and diffusion zone r_h = 415 kpc <cit.>; a dwarf spheroidal model similar to the Draco dwarf with redshift corresponding to a distance of 80 kpc <cit.> and a diffusion zone r_h = 2.5 kpc <cit.>; and finally a galactic-scale model similar to M31 at a distance of 780 kpc <cit.> and with a diffusion zone radius of r_h = 30 kpc, borrowing from analysis of the Milky Way <cit.>.

§.§ Magnetic Field Model
The RX-DMFIT tool currently supports two magnetic field models. These are

B(r) = B_0 e^-r/r_c
B(r) = B_0 [1+(r/r_c)^2]^-1.5βη,

where B_0 is the central magnetic field strength and r_c is the core radius of the target system.

Clusters: The presence of large-scale magnetic fields in galaxy clusters has been demonstrated through various methods such as observations of radio halos, purported inverse Compton X-ray emission, and Faraday Rotation Measures (FRM), among others <cit.>. The typical ranges that have been determined for the magnetic field strength in non-cool-core clusters based on FRMs are ∼ 1-10 μG, whereas clusters with cool cores have been found to host magnetic fields in the range of ∼ 10-40 μG <cit.>. In our analysis we explore both a "non-cool-core" (NCC) model and a "cool-core" (CC) model. A prototypical and well-studied NCC cluster is the Coma cluster, with a reported central magnetic field of B_0 = 4.7 μG and r_c = 291 kpc <cit.>. For the CC cluster model, the Perseus cluster provides the prototypical example with a field strength B_0 = 25 μG <cit.> and core radius r_c = 46 kpc <cit.>. CC clusters typically have higher central fields with steeper profiles whereas the NCC clusters tend to host lower-strength, shallow field profiles. These differences are generally attributed in part to major mergers of NCC clusters that destroy the cool core <cit.>. In both the NCC and CC systems we adopt the beta-model magnetic field profile of equation <ref>. This choice of profile is motivated by simulations <cit.> along with observations of clusters such as Coma <cit.> that suggest magnetic fields in clusters scale with the thermal gas density, which is often modeled with a beta-model <cit.>. We also include the free parameter η as in previous cluster magnetic field modeling <cit.>. The β and r_c parameters are typically fit by X-ray observations <cit.>, whereas η is usually fit using FRMs <cit.>. While the values for β and η are easily adjusted by the user in RX-DMFIT, we will adopt β = 0.75 and η = 0.5 throughout our calculations, noting that the effect of varying these parameters is minimal <cit.>.

dSphs: Previous explorations of the magnetic field present in dSph galaxies show that any fields present would be relatively small, with most estimates for the magnetic field strength being B_μ ∼ 1 μG <cit.>, although some estimates are as large as B_μ ∼ 2 μG for dwarfs in the outer regions of the Milky Way magnetic field <cit.>. For our purposes we will adopt the more conservative estimate of a central strength B_μ = 1 μG. The spatial profile of magnetic fields in dwarfs is similarly poorly constrained, leading us to adopt the simple exponential model of equation <ref>. For the estimate of the core radius we take the half-light radius of Draco to be r_c = 0.22 kpc <cit.>.

Galaxies: The magnetic field structure in galaxies is often considerably more complex than considered in this analysis. However, for our purposes we again employ the exponential model given by equation <ref> for the magnetic field, while noting that a full treatment of the magnetic field structure in galaxies can potentially impact the resulting synchrotron emission. Values for the magnetic field in the centermost region of M31 have been reported to be up to 15 μG <cit.>. Using a core radius of 10 kpc <cit.>, this value provides us with an average field strength of ∼ 4.8 μG in our model, which is consistent with previous studies of M31 <cit.>.

§.§ Dark Matter Profile
The DM profile modeling supports user selection of the Navarro-Frenk-White (NFW) profile <cit.>, as well as the Einasto profile <cit.>, in the forms

NFW: ρ(r) = ρ_s/(r/r_s)(1 + r/r_s)^2
Einasto: ρ(r) = ρ_s exp{-2/α[(r/r_s)^α - 1]}.

In the RX-DMFIT code, users supply the relevant characteristic density, ρ_s, and radius, r_s, as well as the α parameter for the Einasto profile. In this paper we restrict our analysis to mainly make use of the NFW profile, and use the same NFW density and radius values for both the NCC and CC cluster models. The parameters chosen for each example system, with references, are summarized in table <ref>.

§.§ Diffusion Parameters
Due to the lack of concrete values for diffusion in the different systems studied here, we adopt the same initial parameters across our cluster, dwarf, and galaxy models. In the following sections we will vary these parameters and see to what extent the role of diffusion is important on different astrophysical scales. For diffusion modeling in this paper we restrict ourselves to the simple power law in equation <ref>. Most values for appropriate D_0 are based on studies of the Milky Way and fall in the range of 10^27 - 10^29 cm^2 s^-1 <cit.>. Constraints on the Milky Way diffusion parameters have been determined based on measured B/C data in the galaxy <cit.>. We can also consider the D_0 parameter in terms of its relation to the inhomogeneity of the magnetic field in order to understand how it scales with the size of the system. Estimates for the diffusion constant can be found by assuming D_0 ∼ V_L L, where V_L is the amplitude of the turbulent velocity and L is the scale of the turbulent motions <cit.>. Scaling these parameters for dwarf spheroidals, normal galaxies, and galaxy clusters provides diffusion constant values compatible with the range above.
Furthermore, the overall size of the system and the magnetic field strength play a role in whether or not diffusion has a significant impact on the resulting emission. In cluster-sized systems, the length scale over which the electrons/positrons lose their energy, given by √(Δ v), will typically be less than the diffusion zone r_h. In contrast, relativistic particles in smaller systems such as Milky Way-sized galaxies and dwarf spheroidals will be able to escape the diffusion zone before radiating their energy. In each of these systems, greater magnetic field strength will result in the relativistic particles radiating their energy more quickly before escaping the system. These effects can also be considered in terms of the relevant timescales for each energy loss process in comparison to the timescale for diffusion, with a useful example provided in figure A.3 of Appendix A of <cit.>. While there is a lack of studies into values for the diffusion constant in other astrophysical systems, the range of 10^27 - 10^29 cm^2 s^-1 provides reasonable estimates that we can apply to our models. Following previous work <cit.> we assign γ = 1/3 and take the parameter values for the energy loss coefficients in equation <ref> to be b_syn^0 ≃ 0.0254, b_IC^0 ≃ 0.25, b_brem^0 ≃ 1.51, and b_Coul^0 ≃ 6.13, all in units of 10^-16 GeV/s. Additionally, we must also select appropriate values for the average thermal electron density, n_e. For our cluster models we take n_e ≈ 10^-3 <cit.>, n_e ≈ 10^-6 <cit.> for dwarf spheroidals, and estimate n_e ≈ 0.1 <cit.> for our galaxy model.
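Collecting the adopted coefficients, the total loss term of equation <ref> can be evaluated as in the sketch below (ours; E in GeV, B in μG, n_e in cm^-3 are our assumed unit conventions, and the result is in units of 10^-16 GeV/s):

```python
import numpy as np

def b_loss(E, B, n_e):
    """Total e+/- energy-loss rate b(E) of equation <ref>, using the
    coefficients adopted in the text: b_IC^0 = 0.25, b_syn^0 = 0.0254,
    b_Coul^0 = 6.13, b_brem^0 = 1.51 (all in 1e-16 GeV/s)."""
    m_e = 0.511e-3                                   # electron mass in GeV
    log_term = np.log(E / m_e / n_e)
    return (0.25 * E**2                              # inverse Compton on the CMB
            + 0.0254 * B**2 * E**2                   # synchrotron
            + 6.13 * n_e * (1.0 + log_term / 75.0)   # Coulomb
            + 1.51 * n_e * (log_term + 0.36))        # bremsstrahlung
```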
§ APPLICATION AND RESULTS

§.§ Diffusion Effects
We show the results of the SED and emissivity calculations using the RX-DMFIT tool. In figures <ref>, <ref>, and <ref> we show the multiwavelength SED for each of our main systems, assuming the bb̅ annihilation channel dominates and including contributions from IC and synchrotron processes with various values for the diffusion constant D_0. To compare with the expected synchrotron and IC fluxes, in figures <ref>, <ref>, <ref>, and <ref> we also include the expected prompt gamma-ray emission due to the decay of neutral pions. Note that the gamma-ray emission is not affected by the magnetic field or diffusion parameters, simplifying the gamma-ray flux calculation (see for instance <cit.>). For clarity, we do not include the gamma-ray fluxes in the SEDs of figures <ref> and <ref>. Figure <ref> shows a comparison of the SED for our CC and NCC cluster models. The CC model contributes more from synchrotron radiation due to its stronger magnetic field, whereas the decreased synchrotron emission in the NCC model leads to comparatively higher IC emission. In both the CC and NCC models we do not observe a significant impact of spatial diffusion even for the largest diffusion value of D_0 = 3 × 10^29 cm^2 s^-1, which is consistent with previous estimations of the diffusion effect in galaxy clusters <cit.>. To help illustrate this point, in the right panel of figure <ref> we show the ratio of flux density from synchrotron radiation in our cluster models with diffusion versus without diffusion over a range of frequencies. In both the CC and NCC models there is at most a ∼ 2% decrease when considering our highest diffusion strength. In the case of dSphs, we see in figure <ref> that diffusion at all included D_0 values plays a significant role in decreasing the total emission of both the synchrotron and IC radiation as the relativistic particles escape the diffusion region before radiating. In figure <ref> we show the SED of our galaxy model. Here we observe a decrease in synchrotron emission at each D_0 value; however, this is considerably less than in the dwarf model. For instance, the lowest diffusion constant value D_0 = 3 × 10^27 cm^2 s^-1 yields an essentially negligible decrease in synchrotron emission. Even at the highest value of D_0 = 3 × 10^29 cm^2 s^-1 there is only about a factor of two decrease in the synchrotron emission, in contrast to the roughly three order of magnitude decrease in the dwarf model for this diffusion value. We also note that the decrease in synchrotron emission is accompanied by a slight increase in the IC emission for our galaxy model. As the relativistic particles diffuse into regions of diminished magnetic field within the diffusion zone, IC emission scattering from the uniform CMB photon distribution becomes the dominant form of radiation. We also consider a variety of particle models for dark matter annihilation wherein different channels dominate. In figure <ref> we show the SED for our dwarf system under various assumptions for the DM annihilation channel. We note a harder spectrum for the leptonic μ^+μ^- and τ^+τ^- states than for the bb̅ state, and a flatter spectrum for the W^+W^- state. While the leptonic states have spectra that tend to slant more towards higher energies than the bb̅ channel, the W^+W^- channel combines aspects of both the leptonic spectra and the bb̅ spectra due to the W^+W^- decay into pions and leptons, resulting in a flattened spectral profile. Furthermore, as seen in figure <ref>, increased diffusion tends to diminish this effect as the hard spectrum of the W^+W^- channel becomes more prominent. The predicted SED is also affected by other properties of the dark matter particle model such as the cross-section and particle mass. Changing the DM particle cross-section only changes the overall normalization, since the emission is directly proportional to σ v by equation <ref>. Varying the DM particle mass, on the other hand, will affect the shape and location of the spectrum, with higher M_χ values producing harder spectra. Diffusion effects can be seen more clearly by looking at the spatial local emissivity profile for synchrotron and IC emission. In figure <ref> we show the synchrotron and IC emissivity profiles for our NCC, dwarf, and galaxy models with various diffusion constant values. In our NCC model, introducing diffusion causes a slight decrease in the innermost region of the cluster which quickly returns to the no-spatial-diffusion (NSD) limit. For instance, in the case of the highest diffusion value of D_0 = 3 × 10^29 cm^2 s^-1 the synchrotron profile approaches the NSD limit at ∼ 10 kpc and the IC emission reaches the NSD limit at ∼ 40 kpc. Furthermore, in neither case do we observe a considerable increase in emission along the profile. The NCC emissivity profiles are consistent with the lack of variation observed in the SEDs for the different D_0 values. For our dwarf and galaxy models, including diffusion results in a large decrease in both synchrotron and IC emission in the central regions of each system. This depletion of emission is greater in the dwarf model than in the galaxy model, consistent with the SEDs of each system. We also note that diffusion leads to a slight excess in synchrotron emission in the outer regions of our dwarf system for the lower D_0 values. This excess is also present in the galaxy model for every D_0 value shown and for a larger portion of the diffusion zone.
For instance, with a diffusion constant value of D_0 = 3 × 10^27 cm^2s^-1 the synchrotron emission of the dwarf reaches the NSD limit at ∼ 0.5 kpc in comparison to r_h = 2.5 kpc, whereas the galaxy model reaches the NSD limit at ∼ 0.9 kpc compared to r_h = 30 kpc. Both models also exhibit a flattened IC emission profile. In contrast to the synchrotron emission, which depends on the radially dependent magnetic field, the IC emission depends on the spatially constant CMB photon distribution, leading to a flatter emission profile as the relativistic particles diffuse outward. While the dwarf model yields a slight excess of IC emission for the lowest diffusion strength, the galaxy model has a small excess in the outer regions for all diffusion values, providing the increase in IC emission observed in figure <ref>.

§.§ Magnetic Fields

Our ability to detect radio signals from dark matter annihilation depends significantly on the magnetic field present in the system. In figure <ref> we again show the multiwavelength SED for each of our models, this time varying the central magnetic field strength in each case. We assume a diffusion constant value of D_0 = 3 × 10^28 cm^2s^-1 for the dwarf and galaxy models, and assume no spatial diffusion for the NCC and CC cluster models. In each model, the magnetic field strength drastically impacts the total synchrotron emission. For instance, decreasing the field strength in the dwarf model from B_0 = 1 μG to B_0 = 0.1 μG causes a decrease in the synchrotron radiation by roughly two orders of magnitude. For IC emission, all of our models except the dSphs show significant dependence on the magnetic field strength, although with an inverse relationship. That is, lower magnetic field strengths in the galaxy and cluster systems lead to IC processes making up a greater portion of the total energy loss of the electrons and positrons. So while IC losses do not explicitly depend on magnetic field strength, systems with lower magnetic fields provide greater potential for IC radiation. For the NCC cluster model, we see that an order of magnitude increase in the magnetic field from B_0 = 1 μG to B_0 = 10 μG roughly translates into an even greater increase in radio emission, while decreasing the IC emission. In the CC cluster model there is less of a dependence on the central field strength, as shown by only a factor of ∼ 2 increase in synchrotron emission and a factor of ∼ 4 decrease in IC emission from a factor of 4 increase in the magnetic field strength from B_0 = 10 μG to B_0 = 40 μG. The weaker dependence on the central magnetic field in the CC clusters versus the NCC cluster can be attributed to the smaller core radius of CC clusters. The steeper profiles of the CC clusters lead to a greater share of the synchrotron emission being confined to the inner regions of the clusters in comparison to the NCC clusters, meaning that altering the central field strength has a lesser impact on the total emission in CC clusters than in NCC clusters.

§.§ Dark Matter Constraints from Synchrotron Radiation

Limits on the DM cross-section can also be determined using observed diffuse radio emission. To do this, we note that the flux density from dark matter given by equation <ref> is directly proportional to the thermally averaged DM particle cross-section through the source term given in equation <ref>. Thus we can express the flux density as

S_χ = (σ v / M_χ^2) S̃_χ,

where we have simply extracted the σ v dependence from the calculated flux density due to DM annihilation.
We can then compare this quantity to an observed flux density for the system we are modeling and derive a constraint on the dark matter particle cross-section from

σ v = (S_obs / S̃_χ) M_χ^2.

Here we present a practical example using RX-DMFIT wherein we derive dark matter constraints using radio data reported in Natarajan et al. (2015) <cit.> from ν = 1.4 GHz observations of the Segue I dwarf galaxy with the Green Bank Telescope. From their analysis they find an upper-limit flux density of ∼ 0.57 Jy for a region of radius ∼ 4^∘. The physical parameters that we input into RX-DMFIT are taken from Natarajan et al. (2015) <cit.> and are summarized in table <ref>, with any parameters that are not listed unchanged from our earlier dwarf model. Note that for consistency with Natarajan et al. (2015) <cit.>, we set β (or equivalently, η) equal to zero in order to establish a constant magnetic field and employ the Einasto profile of equation <ref>, and thus include the α parameter. In addition to the fairly low diffusion value of D_0 = 3 × 10^26 cm^2s^-1, we also consider a greater diffusion constant value of D_0 = 3 × 10^28 cm^2s^-1. Figure <ref> shows the upper limits on the annihilation cross-section for a variety of annihilation channels with and without diffusion. As we saw in the SED plots (see figures <ref> and <ref>), diffusion has a significant impact on the expected radio synchrotron emission in dwarf spheroidal galaxies, and in turn, a significant impact on the strength of the constraints that can be placed on the DM particle. Increasing the diffusion constant from D_0 = 3 × 10^26 cm^2s^-1 to D_0 = 3 × 10^28 cm^2s^-1 weakens the constraints by an order of magnitude, and thus diffusion should not be neglected for our dwarf system. We find the strongest constraints for annihilation through the μ^+μ^- and τ^+τ^- channels, both of which reach below the thermal relic cross-section value for WIMP masses M_χ ≤ 100 GeV under weak diffusion assumptions. These constraints are competitive with previous studies of dark matter in dSphs using Fermi gamma-ray data <cit.>, which provide some of the strongest dark matter constraints from gamma-rays to date. In figure <ref> we compare our limits with the constraints placed on the dark matter cross-section by the combined gamma-ray observations of 25 Milky Way dSphs with six years of Fermi data <cit.>. For τ^+τ^- final states, weak diffusion, and masses around 10 GeV, the constraints are very similar. In the case of μ^+μ^- dominated final states, the radio approach provides similar constraints for masses near 10 GeV in the case of high diffusion, where D_0 = 3 × 10^28 cm^2s^-1. With our lower value of D_0 = 3 × 10^26 cm^2s^-1, radio constraints are stronger for masses 5 GeV ≤ M_χ ≤ 1000 GeV, including an improvement upon the gamma-ray constraints by greater than an order of magnitude for masses 5 GeV ≤ M_χ ≤ 100 GeV. From these constraints we determine that dSphs are viable targets for indirect searches for dark matter annihilation by way of radio observations, and note that our results here are compatible with other radio constraints on dark matter annihilation. For instance, in the case of the Draco dwarf, limits on the dark matter cross-section are in the range of σ v ∼ 10^-25 cm^3s^-1 <cit.>. Other radio constraints from the analysis of several dSphs <cit.> are also similar to the constraints found in this paper.

We are also interested in deriving limits on dark matter WIMP models using X-ray observations.
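The rescaling in the two equations above is trivial to apply once the model flux has been computed. A minimal sketch (the function name is our own; S̃ is the model flux density with the (σ v)/M_χ^2 prefactor stripped off, as defined above):

```python
def sigmav_upper_limit(S_obs, S_tilde, M_chi):
    """Upper limit on <sigma v> from an observed flux density:
    sigma_v = (S_obs / S_tilde) * M_chi^2, per the equations above.

    S_obs   -- observed upper-limit flux density, e.g. ~0.57 Jy for
               Segue I at 1.4 GHz (same units as S_tilde)
    S_tilde -- model flux density with the (sigma v)/M_chi^2 prefactor
               removed (the quantity RX-DMFIT effectively tabulates)
    M_chi   -- WIMP mass in GeV
    """
    return (S_obs / S_tilde) * M_chi**2
```

The limit inherits whatever normalization units S̃ carries, so the only model-dependent work is computing S̃ for each mass and channel.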
In the case of galaxy clusters, future hard X-ray observations have the potential to contribute significantly to dark matter constraints <cit.>. Additionally, Jeltema & Profumo (2008) <cit.> have demonstrated that current and future X-ray observations of dwarf spheroidals can provide limits comparable to, and potentially better than, limits from gamma-rays in a mass range similar to that probed by radio observations. However, these results rely on favorable assumptions for diffusion. More recently, deep X-ray data of the Draco dwarf have been used to constrain dark matter decay <cit.>; these data can potentially provide stronger constraints on dark matter annihilation than those in <cit.> while making fewer assumptions about the diffusion and energy-loss processes. In order to better understand the feasibility of obtaining dark matter constraints from X-rays in dwarfs we must take the X-ray background into account. For instance, recent Chandra results report cosmic X-ray background fluxes of 4.55^+0.03_-0.03 × 10^-12 erg cm^-2 s^-1 deg^-2 for the 1-2 keV (∼ 2.4-4.8 × 10^17 Hz) energy range and 2.034^+0.005_-0.006 × 10^-11 erg cm^-2 s^-1 deg^-2 for the 2-10 keV (∼ 4.8-24.0 × 10^17 Hz) range <cit.>. From figure <ref> we see that the predicted X-ray fluxes from DM annihilation in these energy ranges and for a 100 GeV DM particle are on the order of ∼ 10^-16 - 10^-14 erg cm^-2 s^-1, depending on annihilation channel. The ∼ 2-5 order of magnitude excess of the X-ray background over the predicted DM flux suggests that only conservative constraints can be placed without an improved understanding of the X-ray background or deeper X-ray observations.

§ CONCLUSION

We have presented RX-DMFIT, a new tool to analyze synchrotron and IC emission due to DM annihilation for the purposes of astrophysical indirect detection of dark matter. We considered four model systems: a “non-cool-core” as well as a “cool-core” galaxy cluster, a dwarf model, and a galaxy model. We discussed in detail the relevant astrophysical processes, namely diffusion of the charged particle byproducts of DM annihilation, magnetic field modeling, and radiative energy-loss processes. We then used RX-DMFIT to examine the effect that varying these attributes of the astrophysical model has on the profile, spectrum, and total flux resulting from DM annihilation. Our results show that effects such as diffusion of charged particle byproducts can be ignored in the case of most large-scale systems such as galaxy clusters, but can provide order-of-magnitude corrections in dwarfs and other galaxies under conservative assumptions for diffusion values. Additionally, we discussed the presence of X-ray radiation resulting from IC scattering of CMB photons as a secondary form of emission due to DM annihilation. We showed that the inclusion of diffusion effects can lead to relative increases in the X-ray band as relativistic electrons and positrons diffuse into regions of lower magnetic field, which can potentially provide new methods of searching for dark matter.

We used radio data of the Milky Way dSph Segue I to place constraints on the dark matter particle cross-section and find the best limits at low masses with τ^+τ^- and μ^+μ^- final states. The μ^+μ^- channel in particular provides the most stringent constraints. Assuming a low diffusion value of D_0 = 3 × 10^26 cm^2s^-1, this annihilation channel provides limits below the canonical thermal relic cross-section for masses below 100 GeV, with constraints roughly an order of magnitude stronger at M_χ ≈ 10 GeV.
However, when assuming the more conservative value for the diffusion constant of D_0 = 3 × 10^28 cm^2s^-1, these constraints are diminished by a factor of ∼ 20 - 30, demonstrating the impact of diffusion effects in smaller systems and the need for a better understanding of diffusion in dwarfs. The constraints we found are competitive with previous analyses of dSphs using gamma-ray observations and, in some cases such as the μ^+μ^- final states with weak diffusion, considerably more stringent.

The RX-DMFIT tool offers a useful and versatile way to predict the synchrotron and inverse Compton emission from DM annihilation. This can aid in the design and planning of future observations by allowing the user to determine optimal observing frequencies and region sizes for dark matter searches. Also, the analysis performed by RX-DMFIT will be of great use in distinguishing astrophysical radio and X-ray signals from potential dark matter signals, particularly where diffusion effects have a significant impact on the profile of emission due to dark matter annihilation. Radio and X-ray emission in astrophysical systems have the potential to provide highly competitive constraints on dark matter properties. Diffusion, magnetic field, and dark matter profile parameters all have a significant impact on the expected radio and X-ray emission from dark matter annihilation, and a better understanding of these features can greatly improve current constraints.

This material is based upon work supported by the National Science Foundation under Grant No. 1517545. S.P. is partly supported by the US Department of Energy, grant number DE-SC0010107. E.S. is supported by the Netherlands Organization for Scientific Research (NWO) through a Vidi grant (PI: C. Weniger).
http://arxiv.org/abs/1705.09384v2
{ "authors": [ "Alex McDaniel", "Tesla Jeltema", "Stefano Profumo", "Emma Storm" ], "categories": [ "astro-ph.HE", "astro-ph.CO", "hep-ph" ], "primary_category": "astro-ph.HE", "published": "20170525223810", "title": "Multiwavelength Analysis of Dark Matter Annihilation and RX-DMFIT" }
This paper combines the fast zmp approaches that work well in practice with the broader range of capabilities of a to formulation, by optimizing over body motion, footholds and cop simultaneously. We introduce a vertex-based representation of the support-area constraint, which can treat arbitrarily oriented point-, line-, and area-contacts uniformly. This generalization allows us to create motions such as quadrupedal walking, trotting, bounding, pacing, combinations and transitions between these, limping, bipedal walking and push-recovery, all with the same approach. This formulation constitutes a minimal representation of the physical laws (unilateral contact forces) and kinematic restrictions (range of motion) in legged locomotion, which allows us to generate various motions in less than a second. We demonstrate the feasibility of the generated motions on a real quadruped robot.

§ INTRODUCTION

Planning and executing motions for legged systems is a complex task. A central difficulty is that legs cannot pull on the ground, i.e. the forces acting on the feet can only push upwards. Since the motion of the body is mostly generated by these constrained (=unilateral) contact forces, this motion is also restricted. When leaning forward past the tips of your toes, you will fall, since your feet cannot pull down to generate a moment that counteracts the gravity acting on your com. Finding motions that respect these physical laws can be tackled by the various approaches described in the following.

A successful approach to tackle this problem is through full-body to, in which an optimal body and endeffector motion plus the appropriate inputs are discovered to achieve a high-level goal. This was demonstrated by <cit.>, resulting in an impressive range of motions for legged systems. These to approaches have shown great performance, but are often time-consuming to calculate and not straightforward to apply to a real robot. In <cit.> the authors generate a wide range of quadruped gaits, transitions and jumps based on a parameterized controller and periodic motions. While the resulting motions are similar to ours, the methods are very different: while our approach is based on to with physical constraints, <cit.> optimizes controller parameters based mainly on motion capture data.

Previous research has shown that to generate feasible motions to execute on legged systems, non-to approaches also work well, although the motions cannot cover the range of the approaches above. One way is to model the robot as a ip and keep the zmp <cit.> inside the convex hull of the feet in stance. This approach has been successfully applied to generate motions for biped and quadruped walking <cit.>. However, these hierarchical approaches use predefined footholds, usually provided beforehand by a higher-level planner that takes terrain information (height, slope) into account.
Although this decoupling of foothold planning and body motion generation reduces complexity, it is unnatural, as the main intention of the footholds is to assist the body in achieving a desired motion. By providing fixed foot-trajectories that the body motion planner cannot modify, constraints such as stability or kinematic reachability become purely the responsibility of the lower-level body motion planner, artificially constraining the solution.

A somewhat reverse view of the above is taken by cp <cit.> approaches, which have been successfully used to generate dynamic trotting and push-recovery motions for quadruped robots <cit.>. A desired body motion (usually a reference com velocity) is given by a high-level planner or heuristic, and a foothold/cop trajectory must be found that generates it. Because of the dependency between footholds and body motion, approaches that optimize over both these quantities simultaneously, while still using a simplified dynamics model, have been developed <cit.>. This reduces heuristics while increasing the range of achievable motions, but still keeps computation time short compared to full-body to approaches. These approaches are most closely related to the work presented in this paper.

The approaches <cit.> demonstrate impressive performance on biped robots. One common difficulty in these approaches, however, is the nonlinearity of the cop constraint with respect to the orientation of the feet. In <cit.> the orientation is either fixed or solved with a separate optimizer beforehand. In <cit.> the nonlinearity of this constraint is accepted and the resulting nonlinear optimization problem solved. However, although the orientation of the individual feet can be optimized over in these approaches, a combined support-area with multiple feet in contact is often avoided by not sampling the constraint during the double-support phase. For biped robots, neglecting the constraint in the double-support phase is not so critical, as these phases take up little time during normal walking. For quadruped robots, however, there are almost always two or more feet in contact at a given time, so the correct representation of the dynamic constraint in this phase is essential.

We therefore extend the capabilities of the approaches above by using a vertex-based representation of the cop constraint, instead of hyperplanes. In <cit.> this idea is briefly touched upon; however, the connection between the corners of the foot geometry and the convexity variables is not made, and thereby the restriction of not sampling in the double-support phase remains. Through our proposed formulation, double- and single-stance support areas can be represented for arbitrary foot geometry, including point-feet. Additionally, it allows representing arbitrarily oriented 1D-support lines, which was not possible with the above approaches. Although not essential for biped walking on flat feet, this is a core necessity for dynamic quadruped motions (trot, pace, bound). This is a reason why zmp-based approaches have so far only been used for quadrupedal walking, where 2D-support areas are present. The approach presented in this paper combines the ip-based zmp approaches that are fast and work well in practice with the broader range of capabilities of a to formulation.
A summary of the explicit contributions with respect to the papers above is:

* We reformulate the traditional zmp-based legged locomotion problem <cit.> into a standard to formulation with the cop as input, clearly identifying state, dynamic model and path- and boundary-constraints, which permits easier comparison with existing methods in the to domain. Push-recovery behavior also naturally emerges from this formulation.

* We introduce a vertex-based representation of the cop constraint, instead of hyperplanes, which allows us to treat arbitrarily oriented point-, line-, and area-contacts uniformly. This enables us to generate motions that are difficult for other zmp-based approaches, such as bipedal walking with double-support phases, point-feet locomotion, and various gaits as well as arbitrary combinations and transitions between these.

* Instead of the heuristic shrinking of support areas, we introduce a cost term for uncertainties that improves the robustness of the planned motions.

We demonstrate that the problem can be solved for multiple steps in less than a second to generate walking, trotting, bounding, pacing, combinations and transitions between these, limping, biped walking and push-recovery motions for a quadruped robot. Additionally, we verify the physical feasibility of the optimized motions through demonstration of walking and trotting on a real 80 kg hydraulic quadruped.

§ METHOD

§.§ Physical Model

We model the legged robot as a ip, with its com c located at a constant height h. The touchdown position of the pendulum with the ground (also known as zmp or cop) is given by u, as seen in fig:inverted_pendulum_top. The com acceleration is predefined by the physics of a tipping pendulum:

ẋ = [ċ; c̈] = f(x, u) = [ċ; (c - u) g h^-1].

The second-order dynamics f are influenced by the com position c, the cop u and gravity g. This model can be used to describe a legged robot, since the robot can control the torques in the joints, thereby the contact forces, and through these the position of the cop. Looking only at the x-direction (left image in fig:inverted_pendulum_top), if the robot decides to lift the hind leg, the model describing the system dynamics is a pendulum in contact with the ground at the front foot, so u = p^RF. Since this pendulum is nearly upright, the com will barely accelerate in x; the robot is balancing on the front leg. However, lifting the front leg can be modeled as placing the pendulum at u = p^LH, which is strongly leaning and thereby must accelerate forward in x. By distributing the load between the legs, the robot can generate motions corresponding to a pendulum anchored anywhere between the contact points (see fig:inverted_pendulum_top). Therefore, the cop u is considered the input to the system and an abstraction of the joint torques and contact forces.

§.§ Trajectory Optimization Problem

We want to obtain the inputs u(t) that generate a motion x(t) from an initial state x_0 to a desired goal state x_T in time T for a robot described by the system dynamics f(x, u), while respecting some constraints g(x, u) ≤ 0 and optimizing a performance criterion J.
This can be formulated as a continuous-time to problem:

find x(t), u(t), for t ∈ [0, T], minimizing J(x, u)
subject to
x(0) - x_0 = 0 (given initial state)
ẋ(t) - f(x(t), u(t)) = 0 (dynamic model)
g(x(t), u(t)) ≤ 0 (path constraints)
x(T) - x_T = 0 (desired final state).

The dynamics are modeled as those of a ip eq:lip_ode, whereas the state and input for the legged system model are given by

x(t) = [c, ċ, p^1, α^1, …, p^n_f, α^n_f]^T, u(t) = u,

which includes the com position and velocity and the position and orientation of the n_f feet. The input u(t) to move the system is the generated cop, abstracting the usually used contact forces or joint torques.

§.§ Specific Case: Capture Point

We briefly show that this general to formulation, using the ip model, also encompasses Capture Point methods to generate walking motions. Consider the problem of finding the position to step with a point-foot robot to recover from a push. With the initial position c_0 and the initial velocity ċ_0 generated by the force of the push, we have x_0 = (c_0, ċ_0). The robot should come to, and remain at, a stop at the end of the motion, irrespective of where and when, so we have ċ_T→∞ = 0. We parametrize the input by the constant parameter u(t) = u_0, as we only allow one step with a point-foot. We allow the cop to be placed anywhere, e.g. no path constraints eq:state_input_constraints, and do not have a preference as to how the robot achieves this task, e.g. J(x, u) = 0. Such a simple to problem can be solved analytically, without resorting to a mathematical optimization solver (see ap:capture_point). The point on the ground at which to generate and hold the cop in order to achieve a final steady state maintaining zero com velocity becomes

u(t) = u_0 = c_0 + √(h/g) ċ_0.

This is the one-step cp, originally derived by <cit.>, and the solution of our general to formulation eq:oc_formulation for a very specific case (e.g. one step/control input, zero final velocity).

§.§ General Case: Legged Locomotion Formulation

Compared to the above example, our proposed formulation adds the capabilities to represent motions of multiple steps, a time-varying cop, physical restrictions as to where the cop can be generated, and preferences as to which of the feasible motions to choose. This to formulation is explained on a high level in the following, corresponding to fig:oc_state_control, whereas more specific details of the implementation are postponed to the next section.

§.§.§ Unilateral Forces

We clearly differentiate between the cop u and the feet positions p, which only coincide for a point-foot robot with one leg in contact. Generally, the footholds affect the input bounds of u. We use u to control the body, but must at the same time choose appropriate footholds to respect the unilateral forces constraint. Traditional zmp approaches fix the footholds in advance, as the combination of both the cop and the footholds makes this constraint nonlinear. We accept this nonlinearity and the higher numerical complexity associated with it. This gives us a much larger range of inputs u, as we can “customize” our bounds by modifying the footholds according to the desired task. Therefore, the first path constraint of our to problem is given by

g_1(x(t), u(t)) ≤ 0 ⇔ u ∈ 𝒰(p, α, c̄),

where 𝒰 represents the convex hull of the feet in contact as seen in fig:oc_state_control and c̄^f ∈ {0,1} is the indicator whether foot f is in contact. We implement this convex-hull constraint by weighting the vertices/corners of each foot in contact. This extends the capabilities of traditional representations by line segments/hyperplanes to also model point- and line-contacts of arbitrary orientation.
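As a quick consistency check of the capture-point special case above, the following sketch (plain forward-Euler integration; all numerical values illustrative) verifies that holding the cop at the capture point brings the com of the ip to rest:

```python
from math import sqrt

G = 9.81  # gravitational acceleration [m/s^2]

def capture_point(c0, c0_dot, h):
    """One-step capture point u0 = c0 + sqrt(h/g) * c0_dot for a ip of
    height h: holding the cop there brings the com to rest as t -> inf."""
    return c0 + sqrt(h / G) * c0_dot

def lip_acceleration(c, u, h):
    """ip dynamics from eq:lip_ode: c_ddot = (c - u) * g / h."""
    return (c - u) * G / h

h, c, v = 0.6, 0.0, 0.5          # pendulum height [m], com state after a push
u0 = capture_point(c, v, h)
dt = 1e-3
for _ in range(5000):            # 5 s of forward-Euler integration
    a = lip_acceleration(c, u0, h)
    c, v = c + v * dt, v + a * dt
print(f"u0 = {u0:.3f} m, final com velocity = {v:.2e} m/s")
```

The com velocity decays to numerically zero and the com position converges to u0, matching the analytic solution with the unstable mode removed (β_1 = 0, see the appendix).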
We use predefined contact sequences and timings c̄^f(t) in order to only optimize over real-valued decision variables w ∈ ℝ^n and not turn the problem into a mixed-integer nlp. Simply by adapting this contact schedule c̄^f(t), the optimizer generates various gaits as well as combinations and transitions between these, for which previously separate frameworks were necessary.

§.§.§ Kinematic Reachability

When modifying the footholds to enclose the cop, we must additionally ensure that these stay inside the kinematic range of the legs (reachability). This constraint, which depends on both the com and foothold positions, is formulated for every leg f as

g_2(x(t)) ≤ 0 ⇔ p^f ∈ ℛ(c).

Allowing the modification of both these quantities simultaneously characterizes the legged locomotion problem more accurately and reduces the heuristics used in hierarchical approaches.

§.§.§ Robust Motions

With the above constraints the motion will comply with the physics and the kinematics of the system. This feasibility assumes a simplified model, a perfect tracking controller and an accurate initial state. To make solutions robust to real-world discrepancies where these assumptions are violated, it is best to avoid the borders of feasible solutions, where the inequality constraints are tight (= 0). This can be achieved by artificially shrinking the solution space by a stability margin m (e.g. g ≤ -m). For legged locomotion this is often done by shrinking the support area to avoid solutions where the cop is placed at the marginally stable border <cit.>. We do not restrict the solution space, but choose the more conservative of the feasible motions through a performance criterion J. This soft constraint expresses “avoid boundaries when possible, but permit if necessary”. The robot is allowed to be at marginally stable states, but since there are many uncertainties in our model and assumptions, it is safer to avoid them. This cost does not require a hand-tuned stability margin, and the solution can still be at the boundaries when necessary. However, especially for slow motions (e.g. walking), where small inaccuracies can accumulate and cause the robot to fall, this cost term is essential to generate robust motions for real systems.

§ IMPLEMENTATION

There exist different methods to solve optimal control problems eq:oc_formulation, namely Dynamic Programming (Bellman Optimality Equation), indirect (Maximum Principle) and direct methods <cit.>. In direct methods the continuous-time to problem is represented by a finite number of decision variables and constraints and solved by a nonlinear programming solver. If the decision variables w fully describe the input u(t) and state x(t) over time, the method is further classified as a simultaneous direct method, with flavors Direct Transcription and Multiple Shooting. In our approach we chose a Direct Transcription formulation, i.e. optimizing state and controls together. This has the advantage of not requiring an ODE solver; constraints on the state can be directly formulated, and the sparse structure of the Jacobian often improves convergence. The resulting discrete formulation to solve the continuous problem in eq:oc_formulation is given by

find w = (w_c, w_p, w_λ)
subject to
eq:initial_constraints (given initial state)
eq:com_continuity, eq:com_accel_constraint, eq:kinematic_constraints, eq:convexity_all (path constraints)
eq:terminal_constraints (desired final state)
minimizing J = eq:lambda_cost (robustness cost),

where w_c, w_p and w_λ are the parameters describing the com motion, the feet motion (swing and stance) and the position of the cop, respectively.
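The contact schedule c̄(t) itself can be a simple lookup, as in the sketch below (leg ordering, phase durations and the stored gaits are illustrative assumptions; the walk, limp and transition schedules used later would be encoded the same way):

```python
# Contact schedule c_bar(t): per-leg in-contact flags over predefined,
# cyclic phases. Leg order is (LF, RF, LH, RH); durations in seconds.
LEGS = ("LF", "RF", "LH", "RH")

GAITS = {
    "trot":  [((1, 0, 0, 1), 0.3), ((0, 1, 1, 0), 0.3)],   # diagonal pairs
    "pace":  [((1, 0, 1, 0), 0.3), ((1, 1, 1, 1), 0.1),    # same-side pairs
              ((0, 1, 0, 1), 0.3), ((1, 1, 1, 1), 0.1)],   # + 4-leg transition
    "bound": [((1, 1, 0, 0), 0.3), ((0, 0, 1, 1), 0.3)],   # front/hind pairs
}

def contact_flags(gait, t):
    """Evaluate c_bar(t): which feet are in contact at time t of a cyclic gait."""
    phases = GAITS[gait]
    period = sum(duration for _, duration in phases)
    t = t % period
    for flags, duration in phases:
        if t < duration:
            return dict(zip(LEGS, flags))
        t -= duration
    return dict(zip(LEGS, phases[-1][0]))  # guard against rounding at t = period
```

Since the schedule is fixed before the solve, it only changes which λ bounds and support vertices are active at each node; the nlp itself stays purely real-valued.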
This section describes in detail how we parametrize the state (x) and input (u), formulate the path constraints and define the cost eq:lambda_cost.

§.§ Center-of-Mass Motion

This section explains how the continuous motion of the com can be described by a finite number of variables to optimize over, while ensuring compliance with the ip dynamics.

§.§.§ CoM Parametrization

The com motion is described by a spline, strung together by K quartic polynomials as

x_c(t) = [c(t); ċ(t)] = ∑_i=1^4 [t̄^i; i t̄^(i-1)] a_k,i + [a_k,0; 0],
w_c = [a_1,0, …, a_1,4, …, a_K,0, …, a_K,4],

with coefficients a_k,i ∈ ℝ^2, local time t̄ = t - t_k, and t_k describing the global time at the start of polynomial k. We ensure continuity of the spline by imposing equal position and velocity at each of the K-1 junctions between polynomials k and k+1, so x_c[t_(k+1)^-] = x_c[t_(k+1)^+]. Using T_k = t_(k+1) - t_k we enforce

∑_i=1^4 [T_k^i; i T_k^(i-1)] a_k,i + [a_k,0; 0] = [a_(k+1),0; a_(k+1),1].

§.§.§ Dynamic Constraint

In order to ensure consistency between the parametrized motion and the dynamics of the system eq:lip_ode, the integration of our approximate solution c̈(t) must resemble that of the actual system dynamics, so

∫_(t_k)^(t_(k+1)) c̈(t) dt ≈ ∫_(t_k)^(t_(k+1)) f_2(x(t), u(t)) dt.

Simpson's rule states that if c̈(t) is chosen as a 2nd-order polynomial (which is why c(t) is chosen as 4th-order) that matches the system dynamics f_2 at the beginning, the center and the end, then eq:int_constraint is bounded by an error proportional to (t_(k+1) - t_k)^4. Therefore we add the following constraints for each polynomial k:

c̈[t] = f_2(x[t], u[t]), ∀ t ∈ {t_k, (t_k + t_(k+1))/2, t_(k+1)}

(see ap:dynamic_constraint for a more detailed formulation). By keeping the duration of each polynomial short (∼50 ms), the error of Simpson's integration stays small and the 4th-order polynomial solution c(t) is close to an actual solution of the ode in eq:lip_ode. This formulation is similar to the “collocation” constraint <cit.>. Collocation implicitly enforces the constraints eq:com_accel_constraint at the boundaries through a specific parametrization of the polynomial, while the above formulation achieves this through explicit constraints in the nlp. Conversely, collocation enforces that dc(t)/dt = ċ(t) through an explicit constraint, while our formulation does this through the parametrization in eq:polynomial.

§.§ Feet Motion

§.§.§ Feet Parametrization

We impose a constant position p^f_j and orientation α^f_j if leg f is in stance. We use a cubic polynomial in the ground plane to move the feet between two consecutive contacts:

[p^f(t); α^f(t)] = ∑_i=0^3 [a^f_j,i; b^f_j,i] (t - t_j)^i,

where (t - t_j) is the elapsed time since the beginning of the swing motion. The vertical swing-leg motion does not affect the nlp and is therefore not modeled. The coefficients a^f_j,i ∈ ℝ^2 and b^f_j,i ∈ ℝ are fully determined by the predefined swing duration and the position and orientation of the enclosing contacts p^f_j, α^f_j and p^f_(j+1), α^f_(j+1). Therefore the continuous motion of all n_f feet can be parametrized by the nlp decision variables w_p = [w_p^1, …, w_p^(n_f)], where w_p^f = [p^f_1, α^f_1, …, p^f_(n_s), α^f_(n_s)] are the parameters that fully describe the motion of a single leg f taking n_s steps.

§.§.§ Range-of-Motion Constraint

To ensure a feasible kinematic motion, we must enforce p^f ∈ ℛ(c), which is the gray area in fig:rom_biped. We approximate the area reachable by each foot through a rectangle [-r^x,y, r^x,y], representing the allowed distance that a foot can move from its nominal position p̄^f (center of the gray area). The foothold position for each foot f is therefore constrained by

-r^x,y < p^f[t] - c[t] - p̄^f < r^x,y.

Contrary to hierarchical approaches, this constraint allows the optimizer to either move the body to respect kinematic limits or place the feet at different positions.
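A minimal sketch of the two constraint evaluations above, for a single x-axis quartic segment (gravity, com height and the range bounds r are illustrative; in the nlp the solver drives these residuals to zero):

```python
G, H = 9.81, 0.6   # gravity [m/s^2] and com height [m] (illustrative)

def com_pos(a, t):
    """c(t) for one quartic segment with coefficients a[0..4] (one axis)."""
    return sum(a[i] * t**i for i in range(5))

def com_acc(a, t):
    """c_ddot(t) = sum over i >= 2 of i*(i-1)*a[i]*t^(i-2)."""
    return sum(i * (i - 1) * a[i] * t**(i - 2) for i in range(2, 5))

def dynamics_residuals(a, u_of_t, T):
    """Residuals of c_ddot = g/h * (c - u) at the three Simpson points
    {0, T/2, T} of a segment of duration T."""
    return [com_acc(a, t) - (G / H) * (com_pos(a, t) - u_of_t(t))
            for t in (0.0, T / 2.0, T)]

def rom_ok(p_foot, c, p_nominal, r=(0.15, 0.10)):
    """Range-of-motion box: -r < p_foot - c - p_nominal < r, per axis."""
    return all(abs(p_foot[k] - c[k] - p_nominal[k]) < r[k] for k in range(2))

# Example: residuals for an arbitrary ~50 ms segment with a constant cop.
a = [0.0, 0.4, 0.1, -0.05, 0.02]
print(dynamics_residuals(a, lambda t: 0.05, T=0.05))
```

Both constraints are linear in the polynomial coefficients for a fixed cop, which is part of what keeps the resulting nlp well-conditioned.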
A constraint on the foot orientation can be formulated equivalently.

§.§ Center of Pressure Motion

To represent the continuous cop trajectory, we parameterize it through the load carried by each endeffector. This parametrization is used to formulate a novel convexity constraint based on vertices instead of hyperplanes. Finally, this section introduces a cost that keeps the cop away from marginally stable regions and improves the robustness of the motion.

§.§.§ CoP Parametrization

The cop u(t) is not parametrized by polynomial coefficients or discrete points, but by the relative load each corner of each foot is carrying. This load is given by

λ(t) = [λ^1(t), …, λ^(n_f)(t)]^T, where λ^f(t) = [λ^f_1(t), …, λ^f_(n_v)(t)] ∈ [0,1]^(n_v),

and n_v represents the number of vertices/corners of foot f. For the square foot in fig:rom_biped, four lambda values represent one foot and distribute the load amongst the corners. These multipliers represent the percentage of vertical force that each foot is carrying, e.g. ||λ^f(t)||_1 = 0.9 implies that leg f is carrying 90% of the weight of the robot at time t. Using these values, the cop is parameterized by

u(t) = ∑_f=1^(n_f) ∑_v=1^(n_v) λ^f_v(t) (p^f(t) + R(α^f(t)) p̂_v),

where R(α^f) ∈ ℝ^2×2 represents the rotation matrix corresponding to the optimized rotation α^f of foot f eq:foot_poly, and p̂_v represents the fixed position (depending on the foot geometry) of corner v of the foot, expressed in the foot frame. For a point-foot robot with p̂_v = 0, eq:convexity1 simplifies to u = ∑_f=1^(n_f) λ^f p^f. We represent λ(t) for the duration of the motion by piecewise-constant values λ_i = λ(t_i), discretized every 20 ms, resulting in N nodes. Therefore the cop can be fully parameterized by w_p and the additional nlp decision variables w_λ = [λ_1, …, λ_N].

§.§.§ Unilateral Forces Constraint

We represent the essential input constraint eq:oc_ip_cop, which ensures that only physically feasible forces inside the convex hull of the contacts are generated, for i = 1, …, N as

||λ[t_i]||_1 = 1, 0 ≤ λ^f_v[t_i] ≤ c̄^f[t_i],

where c̄^f ∈ {0,1} is the indicator whether foot f is in contact. The constraints eq:convexity1 and eq:convexity2 allow u to be located anywhere inside the convex hull of the vertices of the current foot positions, independent of whether they are in contact. However, since only feet in contact can actually carry load, eq:convexity3 enforces that a leg that is swinging (c̄^f = 0) must have all the corners of its foot unloaded. These constraints together ensure that the cop lies inside the green area shown in fig:rom_biped.

§.§.§ Robust Walking Cost

To keep the cop away from the edges of the support area we could constrain λ^f_v of each leg in stance to be greater than a threshold, causing legs in contact to never be unloaded. This conceptually corresponds to previous approaches that heuristically shrink support areas and thereby reduce the solution space for all situations. We propose a cost that has a similar effect, but still permits the solver to use the limits of the space if necessary. The most robust state to be in is when the weight of the robot is equally distributed amongst all the corners in contact, so

λ^*_f,v(t) = c̄^f(t)/m(t),

where m(t) = ∑_f=1^(n_f) n_v c̄^f(t) is the total number of vertices in contact at time t, predefined by the contact sequence c̄(t). This places the cop at the center of the support areas. The deviation of the input values from the optimal values λ^* over the entire discretized trajectory eq:all_lambdas is then given by

J(w_λ) = ∑_i=1^N ||λ_i - λ^*_i||_2^2.

For a support triangle (λ^*_f,v = 1/3) this cost tries to keep the cop in the center, and for a line (λ^*_f,v = 1/2) in the middle.
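The vertex-based parametrization above maps directly to code. The sketch below (data layout and function names are our own; the corners p̂_v are given in the foot frame) evaluates the cop, the convexity constraints and the most-robust loading λ^*:

```python
from math import cos, sin

def rot(alpha):
    """2x2 rotation matrix R(alpha) for the foot yaw, as rows."""
    return ((cos(alpha), -sin(alpha)), (sin(alpha), cos(alpha)))

def cop(lambdas, feet, corners):
    """u = sum_f sum_v lambda_fv * (p_f + R(alpha_f) * corner_v).

    lambdas : per-foot lists of vertex loads lambda_fv
    feet    : per-foot ((px, py), alpha)
    corners : per-vertex (x, y) offsets in the foot frame
    """
    ux = uy = 0.0
    for lam_f, ((px, py), alpha) in zip(lambdas, feet):
        (r00, r01), (r10, r11) = rot(alpha)
        for lam, (cx, cy) in zip(lam_f, corners):
            ux += lam * (px + r00 * cx + r01 * cy)
            uy += lam * (py + r10 * cx + r11 * cy)
    return ux, uy

def convexity_ok(lambdas, in_contact, tol=1e-9):
    """||lambda||_1 = 1 and 0 <= lambda_fv <= c_bar_f (swing legs unloaded)."""
    total = sum(sum(lam_f) for lam_f in lambdas)
    bounds = all(0.0 <= lam <= c for lam_f, c in zip(lambdas, in_contact)
                 for lam in lam_f)
    return abs(total - 1.0) < tol and bounds

def lambda_star(in_contact, n_vertices):
    """Most robust loading: weight spread evenly over all corners in contact."""
    m = sum(n_vertices * c for c in in_contact)
    return [[c / m] * n_vertices for c in in_contact]
```

For two point-feet in contact (corners = [(0.0, 0.0)]), cop(...) reduces to the convex combination ∑_f λ^f p^f from the text, i.e. a cop constrained to an arbitrarily oriented support line.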
For quadruped walking motions this formulation generates a smooth transition of the cop between diagonally opposite swing-legs, while still staying away from the edges of the support areas whenever possible.

§ TRACKING THE MOTION

The motion optimization part of our approach is largely robot-independent. The only robot-specific information needed to run the framework is the robot height, the number of feet, their geometry and their kinematic range. For execution, however, the optimized motion must be translated into joint torques τ using a full-body dynamics model. This section discusses this generation, summarized by fig:controller_overview.

§.§ Generating full-body reference accelerations

The 6-dof base pose is reconstructed using zero desired orientation (in Euler angles x, y, z), the optimized com motion (assuming the geometric center of the base coincides with the com), and the constant base height h as

x_b,ref(t) = [0 0 0 c_x(t) c_y(t) h]^T.

In order to cope with uncertainties it is essential to incorporate feedback into the control loop. We do this by adding an operational-space PD-controller on the base that creates desired 6D base accelerations according to

ẍ_b,des = ẍ_b,ff + K_p(x_b,ref - x_b) + K_d(ẋ_b,ref - ẋ_b).

The derivative of the pose, the base twist ẋ_b ∈ ℝ^6, represents the base angular and linear velocities, and ẍ_b,ff is the optimized com acceleration from the nlp. This controller modifies the planned body motion if the current state deviates from the reference state. In order to obtain the desired joint accelerations that correspond to the planned Cartesian motion of the feet we can use the relationship p̈(t) = J̇q̇ + Jq̈, where q̇, q̈ ∈ ℝ^(6+n) represent the full-body velocities and accelerations (base + joints) and J = [J_b J_j] ∈ ℝ^(3×(6+n)) is the Jacobian that maps full-body velocities to linear foot velocities in the world frame. Rearranging this equation, and using the Moore–Penrose pseudoinverse J_j^+, gives us the reference joint accelerations

q̈_j = J_j^+ (p̈ - J̇q̇ - J_b ẍ_b).

§.§ Inverse Dynamics

The inverse dynamics controller is responsible for generating the required joint torques τ to track the reference acceleration q̈_ref, which is physically feasible based on the ip model. This is done based on the rigid-body dynamics model of the system, which depends on the joint torques, but also on the unknown contact forces. To eliminate the contact forces from the equation, we project it into the space of joint torques by P = I - J_c^+ J_c, where J_c^T is the contact Jacobian that maps Cartesian contact forces to joint torques <cit.>. This allows us to solve for the required joint torques through

τ = (P S^T)^+ P (M q̈_ref + C),

where M is the joint-space inertia matrix, C the effect of Coriolis forces on the joint torques and S the selection matrix which prohibits actuating the floating-base state directly (a sketch of this projection is given below). We found it beneficial to also add a low-gain PD-controller on the joint positions and velocities. This can mitigate the effects of dynamic modeling errors and force-tracking imperfections.

§ RESULTS

We demonstrate the performance of this approach on the hydraulically actuated quadruped robot HyQ <cit.>. The robot weighs approximately 80 kg, moves at a height of about 0.6 m and is torque controlled. Base estimation <cit.> is performed on-board, fusing imu and joint encoder values. Torque tracking is performed at 1000 Hz, while the reference position, velocity and torque set-points are provided at 250 Hz. The C++ dynamics model is generated by <cit.>.

§.§ Discussion of generated motions

This section analyses the different motions generated by changing the sequence and timings of contacts c̄(t).
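A sketch of the two tracking-controller blocks above (the exact projected-dynamics variant used on the robot may differ in detail, since we reconstruct it from the projection P = I - J_c^+ J_c and the torque equation given above):

```python
import numpy as np

def base_pd_acceleration(a_ff, x_ref, x, v_ref, v, Kp, Kd):
    """Operational-space PD on the 6-dof base: feed-forward nlp acceleration
    plus feedback on the pose and twist errors."""
    return a_ff + Kp @ (x_ref - x) + Kd @ (v_ref - v)

def support_consistent_torques(M, C, qdd_ref, J_c, S):
    """Joint torques tracking qdd_ref without measuring contact forces.

    Projects the rigid-body dynamics M*qdd + C = S^T*tau + J_c^T*f with
    P = I - pinv(J_c) @ J_c (which annihilates J_c^T*f), then solves
    tau = pinv(P @ S^T) @ P @ (M @ qdd_ref + C).
    """
    P = np.eye(M.shape[0]) - np.linalg.pinv(J_c) @ J_c
    return np.linalg.pinv(P @ S.T) @ P @ (M @ qdd_ref + C)
```

Because P J_c^T = 0 for the Moore–Penrose pseudoinverse, the contact forces drop out of the projected equation, which is the property the elimination above relies on.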
There is no high-level footstep planner; the footholds are chosen by the optimizer to enable the body to reach a user-defined goal state x_T. The results were obtained using C++ code interfaced with Interior Point Method (Ipopt <cit.>) or Sequential Quadratic Programming (Snopt <cit.>) solvers on an Intel Core i7/2.8 GHz quad-core laptop. The Jacobians of the constraints and the gradient of the cost function are provided to the solver analytically, which is important for performance. We initialize the decision variables w with the quadruped standing in default stance for a given duration. The shown motions correspond to the first columns (e.g. 16 steps) in tab:results. The reader is encouraged to view the video at <https://youtu.be/5WLeQMBuv30>, as it very intuitively demonstrates the performance of this approach. Apart from the basic gaits, the video shows the capability of the framework to generate gradual transitions between them, bipedal walking, limping and push-recovery.

§.§.§ Walk

fig:com_motion_walk1 shows a walk of multiple steps, with the two support areas highlighted for swinging RF→LH. The effect of the cost term is clearly visible, as the cop is kept away from the support-area borders by left-right swaying of the body. Only when switching between diagonally opposite legs does the cop lie briefly at the marginally stable border, but it then immediately shifts to a more conservative location. Without the cost term, the com motion is a straight line between x_0 and x_T, causing the real system to fail.

§.§.§ Trot

fig:com_motion_trot shows a completely different pattern of support areas and cop distribution. During trotting only line-contacts exist, so the set of possible places to generate the cop is extremely restricted compared to walking. Notice how the cop lies close to the com trajectory during the middle of the motion, but deviates strongly backward/forward during the start/end of the motion (e.g. the robot pushing off from the right-front (green) leg in the second-to-last step). This is because the distance between the cop and the com generates the acceleration necessary for starting and stopping, whereas in the middle the robot is moving with nearly constant velocity.

§.§.§ Pace/Bound/Biped Walk

Specifying legs on the same side to be in contact, with a short four-leg transition period between them, produces the motion shown in fig:com_motion_pace. This can also be viewed as biped walking with line-feet (e.g. skis), with the constraint enforced also during the double-stance phase. The first observation is the sideways swaying motion of the com. This is necessary because the support areas do not intersect the com trajectory (as they do in the trot). Since the cop always lies inside these left and right support areas, they accelerate the body away from that side until the next step, which then reverses the motion. We found that the ip model with fixed zero body orientation does not describe such a motion very well, as the inherent rotation (rolling) of the body is not taken into account. In order to also demonstrate these motions on hardware, the ip model must be extended by the angular body motion. Specifying the front and hind legs to alternate in contact generates a bound (fig:com_motion_bound). The lateral shifting motion of the pace is now transformed into a forward-backward motion of the com due to the support areas.
In the case of an omnidirectional robot, a bounding gait can simply be considered a sideways pace.

§ CONCLUSION

This paper presented a to formulation using vertex-based support-area constraints, which enables the generation of a variety of motions for which previously separate methods were necessary. In the future, more decision variables (e.g. contact schedule, body orientation, foothold height for uneven terrain), constraints (e.g. friction cone, obstacles) and more sophisticated dynamic models can be incorporated into this formulation. Additionally, we plan to utilize the speed of the optimization for mpc.

§.§ Derivation of Capture Point

Consider the differential equation describing a ip (linear, constant coefficients, second order) in the x-direction:

c̈(t) - (g/h) c(t) = -(g/h) u.

The general solution to the homogeneous part of the equation can be constructed by the Ansatz c(t) = e^(αt), which leads to the characteristic equation α^2 e^(αt) - (g/h) e^(αt) = 0, resulting in α = ±√(g/h). Assuming a constant input u_0 leads to the particular solution c_p(t) = u_0, and the space of solutions for the entire ode is given by

c(t) = β_1 e^(αt) + β_2 e^(-αt) + u_0,

where β_1, β_2 ∈ ℝ are the free parameters describing the motion. Imposing the initial position c(0) = β_1 + β_2 + u_0 != c_0 and velocity ċ(0) = αβ_1 - αβ_2 != ċ_0, we obtain

β_1,2 = 1/2 (c_0 ± ċ_0/α - u_0).

As t → ∞ we require the velocity to remain at zero (pendulum at rest). Since lim_(t→∞) e^(-αt) = 0 and α ≠ 0, we have

lim_(t→∞) ċ(t) = αβ_1 lim_(t→∞) e^(αt) != 0 ⇔ β_1 = 0 ⇒ u_0 = c_0 + √(h/g) ċ_0,

which is known as the one-step cp, originally derived in <cit.>.

§.§ Dynamic Constraint

The system dynamics constraints eq:int_constraint, enforced through c̈[t] = f_2(x[t], u[t]) with the local polynomial time t̄ = (t - t_k), are formulated as

c̈[t] = ∑_i=2^4 i(i-1) a_k,i t̄^(i-2) = (g/h)(c(t) - u(t))
⇔ ∑_i=2^4 a_k,i t̄^(i-2) (i(i-1) - (g/h) t̄^2) = (g/h)(a_k,0 + a_k,1 t̄ - u(t)).

§ ACKNOWLEDGMENTS

This research has been funded through a Swiss National Science Foundation Professorship award to Jonas Buchli and by the Swiss National Center of Competence in Research Robotics (NCCR Robotics).
http://arxiv.org/abs/1705.10313v1
{ "authors": [ "Alexander W Winkler", "Farbod Farshidian", "Diego Pardo", "Michael Neunert", "Jonas Buchli" ], "categories": [ "cs.RO", "math.OC" ], "primary_category": "cs.RO", "published": "20170527121708", "title": "Fast Trajectory Optimization for Legged Robots using Vertex-based ZMP Constraints" }
http://arxiv.org/abs/1705.09186v1
{ "authors": [ "G. C. Dorsch", "S. J. Huber", "K. Mimasu", "J. M. No" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170525140411", "title": "The Higgs Vacuum Uplifted: Revisiting the Electroweak Phase Transition with a Second Higgs Doublet" }